From rolf at cs.rug.nl Wed Feb 1 05:11:38 1995 From: rolf at cs.rug.nl (rolf@cs.rug.nl) Date: Wed, 1 Feb 1995 11:11:38 +0100 Subject: Ph.D. thesis available (Correction) Message-ID: Dear netters, the path in my announcement was wrong; the FTP filename should be FTP-filename: /pub/neuroprose/Thesis/wuertz.ps.Z The URLs are alright (that's what I've tested). Sorry for the confusion. Rolf From thimm at idiap.ch Wed Feb 1 07:19:47 1995 From: thimm at idiap.ch (Georg Thimm) Date: Wed, 1 Feb 95 13:19:47 +0100 Subject: WWW page for NN conference, workshop, and other event announcements available Message-ID: <9502011219.AA24501@idiap.ch> WWW page for announcements of events on NN available! ---------------------------------- This page allows you to enter and look up announcements for conferences, workshops, talks and other events on neural networks. The entries are grouped into: - multi-day events for a larger number of people (conferences, congresses, big workshops,...), - multi-day events for a small audience (small workshops, summer schools,...), and - one-day events (talks, presentations,...). The entries are ordered chronologically and presented in a standardized format for fast and easy lookup. An entry contains: - the date and place of the event, - the title of the event, - a hyperlink to more information about the event, - a contact address (surface mail address, email address, telephone number, and fax number), - deadlines, and - a field for short comments. The URL of the beast is: I hope you find this helpful. Please send me any comments and suggestions. 
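The entry format described above could be represented as a simple record. The sketch below is only illustrative: the field names are made up, not those of the actual WWW form, and the URL is a placeholder (the example data is taken from the EANN '95 announcement later in this digest).

```python
from dataclasses import dataclass, field

# Illustrative record for one event announcement, mirroring the
# fields listed above (names are invented, not the real form's).
@dataclass
class EventEntry:
    date: str                 # date of the event
    place: str                # place of the event
    title: str                # title of the event
    url: str                  # hyperlink to more information
    contact: str              # surface mail / email / phone / fax
    deadlines: list = field(default_factory=list)
    comments: str = ""        # field for short comments

entry = EventEntry(
    date="21-23 August 1995",
    place="Helsinki, Finland",
    title="EANN '95",
    url="http://example.org/eann95",   # placeholder URL
    contact="eann95 at aton.abo.fi",
    deadlines=["abstracts: 31 January 1995"],
)
print(entry.title)
```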
Georg Thimm -------------------------------------------------------------- Georg Thimm E-mail: thimm at idiap.ch Institut Dalle Molle d'Intelligence Fax: ++41 26 22 78 18 Artificielle Perceptive (IDIAP) Tel.: ++41 26 22 76 64 Case Postale 592 WWW: http://www.idiap.ch 1920 Martigny / Suisse -------------------------------------------------------------- From schmidhu at informatik.tu-muenchen.de Wed Feb 1 06:16:36 1995 From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Wed, 1 Feb 1995 12:16:36 +0100 Subject: paper available: incremental self-improvement Message-ID: <95Feb1.121644met.42284@papa.informatik.tu-muenchen.de> ON LEARNING HOW TO LEARN LEARNING STRATEGIES Technical Report FKI-198-94 (20 pages) Juergen Schmidhuber Fakultaet fuer Informatik Technische Universitaet Muenchen 80290 Muenchen, Germany November 24, 1994 Revised January 31, 1995 This paper introduces the ``incremental self-improvement paradigm''. Unlike previous methods, incremental self-improvement encourages a reinforcement learning system to improve the way it learns, and to improve the way it improves the way it learns, without significant theoretical limitations -- the system is able to ``shift its inductive bias'' in a universal way. Its major features are: (1) There is no explicit difference between ``learning'', ``meta-learning'', and other kinds of information processing. Using a Turing machine equivalent programming language, the system itself occasionally executes self-delimiting, initially highly random ``self-modification programs'' which modify the context-dependent probabilities of future programs (including future self-modification programs). (2) The system keeps only those probability modifications computed by ``useful'' self-modification programs: those which bring about more payoff per time than all previous self-modification programs. 
(3) The computation of payoff per time takes into account all the computation time required for learning -- the entire system life is considered: boundaries between learning trials are ignored (if there are any). A particular implementation based on the novel paradigm is presented. It is designed to exploit what conventional digital machines are good at: fast storage addressing, arithmetic operations etc. Experiments illustrate the system's mode of operation. ------------------------------------------------------------------- FTP-host: flop.informatik.tu-muenchen.de (131.159.8.35) FTP-filename: /pub/fki/fki-198-94.ps.gz (use gunzip to uncompress) Alternatively, retrieve the paper from my home page: http://papa.informatik.tu-muenchen.de/mitarbeiter/schmidhu.html If you don't have gzip/gunzip, I can mail you an uncompressed postscript version (as a last resort). There will be a future revised version of the tech report. Comments are welcome. Juergen Schmidhuber From craig at magi.ncsl.nist.gov Wed Feb 1 13:39:32 1995 From: craig at magi.ncsl.nist.gov (Craig Watson) Date: Wed, 1 Feb 95 13:39:32 EST Subject: Mugshot Identification Database Message-ID: <9502011839.AA14301@magi.ncsl.nist.gov> National Institute of Standards and Technology announces the release of NIST Special Database 18 Mugshot Identification Database (MID) NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems. The database consists of three CD-ROMs, containing a total of 3248 images of variable size, compressed with lossless compression. Each CD-ROM requires approximately 530 megabytes of storage compressed and 1.2 gigabytes uncompressed (2.2 : 1 average compression ratio). There are images of 1573 individuals (cases), 1495 male and 78 female. The database contains both front and side (profile) views when available. 
Separating front views and profiles, there are 131 cases with two or more front views and 1418 with only one front view. Profiles have 89 cases with two or more profiles and 1268 with only one profile. Cases with both fronts and profiles have 89 cases with two or more of both fronts and profiles, 27 with two or more fronts and one profile, and 1217 with only one front and one profile. Decompression software, which was written in C on a SUN workstation [1], is included with the database. NIST Special Database 18 has the following features: + 3248 segmented 8-bit gray scale mugshot images (varying sizes) of 1573 individuals + 1333 cases with both front and profile views (see statistics above) + 131 cases with two or more front views and 89 cases with two or more profiles + images scanned at 19.7 pixels per mm + image format documentation and example software is included Suitable for automated mugshot identification research, the database can be used for: + algorithm development + system training and testing The system requirements are a CD-ROM drive with software to read ISO-9660 format and the ability to compile the C source code written on a SUN workstation [1]. Cost of the database: $750.00. For ordering information contact: Standard Reference Data National Institute of Standards and Technology Building 221, Room A323 Gaithersburg, MD 20899 Voice: (301) 975-2208 FAX: (301) 926-0416 email: srdata at enh.nist.gov All other questions contact: Craig Watson craig at magi.ncsl.nist.gov (301)975-4402 [1] The SUN workstation is identified in order to adequately specify or describe the subject matter of this announcement. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the equipment is necessarily the best available for the purpose. 
craig watson ************************************************************ * National Institute of Standards and Technology (NIST) * * Advanced Systems Division * * Image Recognition Group * * Bldg 225/Rm A216 * * Gaithersburg, Md. 20899 * * * * phone -> (301) 975-4402 * * fax -> (301) 840-1357 * * email -> craig at magi.ncsl.nist.gov * ************************************************************ From tgd at chert.CS.ORST.EDU Wed Feb 1 13:38:57 1995 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Wed, 1 Feb 95 10:38:57 PST Subject: Four Papers Message-ID: <9502011838.AA07424@edison.CS.ORST.EDU> The following four papers are available for ftp access (abstracts are included below): Zhang, W., Dietterich, T. G., (submitted). A Reinforcement Learning Approach to Job-shop Scheduling. ftp://ftp.cs.orst.edu/users/t/tgd/papers/tr-jss.ps.gz Dietterich, T. G., Flann, N. S., (submitted). Explanation-based Learning and Reinforcement Learning: A Unified View. ftp://ftp.cs.orst.edu/users/t/tgd/papers/ml95-ebrl.ps.gz Dietterich, T. G., Kong, E. B., (submitted). Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms. ftp://ftp.cs.orst.edu/users/t/tgd/papers/ml95-bias.ps.gz Kong, E. B., Dietterich, T. G., (submitted). Error-Correcting Output Coding Corrects Bias and Variance. ftp://ftp.cs.orst.edu/users/t/tgd/papers/ml95-why.ps.gz These and other titles are available through my WWW homepage (see URL at end of message). ---------------------------------------------------------------------- A Reinforcement Learning Approach to Job-shop Scheduling Wei Zhang Thomas G. Dietterich Computer Science Department Oregon State University Corvallis, Oregon 97331-3202 Abstract: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. 
We employ repair-based scheduling using a problem space that starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The states in this problem space are represented by a set of features. The temporal difference algorithm $TD(\lambda)$ is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD scheduler performs better than the best known existing algorithm for this task---Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning algorithms can provide a new method for constructing high-performance scheduling systems for important industrial applications. ---------------------------------------------------------------------- Explanation-Based Learning and Reinforcement Learning: A Unified View Thomas G. Dietterich Department of Computer Science Oregon State University Corvallis, OR 97331 Nicholas S. Flann Department of Computer Science Utah State University Logan, UT 84322-4205 Abstract: In speedup-learning problems, where full descriptions of operators are always known, both explanation-based learning (EBL) and reinforcement learning (RL) can be applied. This paper shows that both methods involve fundamentally the same process of propagating information backward from the goal toward the starting state. RL performs this propagation on a state-by-state basis, while EBL computes the weakest preconditions of operators, and hence, performs this propagation on a region-by-region basis. 
Based on the observation that RL is a form of asynchronous dynamic programming, this paper shows how to develop a dynamic programming version of EBL, which we call Explanation-Based Reinforcement Learning (EBRL). The paper compares batch and online versions of EBRL to batch and online versions of RL and to standard EBL. The results show that EBRL combines the strengths of EBL (fast learning and the ability to scale to large state spaces) with the strengths of RL (learning of optimal policies). Results are shown in chess endgames and in synthetic maze tasks. ---------------------------------------------------------------------- Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms Thomas G. Dietterich Eun Bae Kong Department of Computer Science 303 Dearborn Hall Oregon State University Corvallis, OR 97331-3202 Abstract: The term ``bias'' is widely used---and with different meanings---in the fields of machine learning and statistics. This paper clarifies the uses of this term and shows how to measure and visualize the statistical bias and variance of learning algorithms. Statistical bias and variance can be applied to diagnose problems with machine learning bias, and the paper shows four examples of this. Finally, the paper discusses methods of reducing bias and variance. Methods based on voting can reduce variance, and the paper compares Breiman's bagging method and our own tree randomization method for voting decision trees. Both methods uniformly improve performance on data sets from the Irvine repository. Tree randomization yields perfect performance on the Letter Recognition task. A weighted nearest neighbor algorithm based on the infinite bootstrap is also introduced. 
In general, decision tree algorithms have moderate-to-high variance, so an important implication of this work is that variance---rather than appropriate or inappropriate machine learning bias---is an important cause of poor performance for decision tree algorithms. ---------------------------------------------------------------------- Error-Correcting Output Coding Corrects Bias and Variance Eun Bae Kong Thomas G. Dietterich Department of Computer Science 303 Dearborn Hall Oregon State University Corvallis, OR 97331-3202 Abstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k>>2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method---like any form of voting or committee---can reduce the variance of the learning algorithm. Furthermore---unlike methods that simply combine multiple runs of the same learning algorithm---ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local behavior of C4.5. ---------------------------------------------------------------------- Thomas G. Dietterich Voice: 503-737-5559 Department of Computer Science FAX: 503-737-3014 Dearborn Hall, 303 URL: http://www.cs.orst.edu/~tgd Oregon State University Corvallis, OR 97331-3102 From rolf at cs.rug.nl Thu Feb 2 07:43:30 1995 From: rolf at cs.rug.nl (rolf@cs.rug.nl) Date: Thu, 2 Feb 1995 13:43:30 +0100 Subject: Ph.D. thesis available (2nd correction) Message-ID: Dear netters, my apologies for posting a wrong filename, a wrong URL, AND a wrong correction. 
Now the REAL filename/URL is: FTP-filename: /pub/neuroprose/Thesis/wuertz.thesis.ps.Z URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/wuertz.thesis.ps.Z Thanks to Mohammed Karouia and Frank Schnorrenberg who pointed out the errors. I promise that the contents are more accurate than the postings. :-) Rolf From jaap.murre at mrc-apu.cam.ac.uk Thu Feb 2 08:22:13 1995 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Thu, 2 Feb 1995 13:22:13 GMT Subject: CALMLIB 4.24 available Message-ID: <199502021322.NAA19519@betelgeuse.mrc-apu.cam.ac.uk> The following file has recently (2-2-1995) been added to our ftp site: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/calm424.zip It contains the latest version of the CALMLIB, C-routines for simulating modular neural networks based on the CALM paradigm. CALM stands for Categorizing And Learning Module (e.g., see J.M.J. Murre, R.H. Phaf, & G. Wolters [1992]. 'CALM: Categorizing and Learning Module'. Neural Networks, 5, 55-82; and J.M.J. Murre [1992]. 'Learning and Categorization in Modular Neural Networks'. Hillsdale, NJ: Lawrence Erlbaum [USA/Canada], and Hemel Hempstead: Harvester Wheatsheaf [rest of the world]). An executable demonstration program for PCs (486 DX recommended) using CALM to learn to recognize handwritten characters that can be entered with a mouse can also be obtained from this site: ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/digidemo.zip This program has been developed with the CALMLIB. -- Jaap Murre (jaap.murre at mrc-apu.cam.ac.uk) From mackay at mrao.cam.ac.uk Thu Feb 2 12:45:00 1995 From: mackay at mrao.cam.ac.uk (David J.C. MacKay) Date: Thu, 2 Feb 95 17:45 GMT Subject: Preprint announcement from David J C MacKay Message-ID: The following preprints are available by anonymous ftp or www. WWW: The page: ftp://131.111.48.8/pub/mackay/README.html has pointers to abstracts and postscript of these publications. 
------------- Titles ------------- 1) Probable Networks and Plausible Predictions - A Review of Practical Bayesian Methods for Supervised Neural Networks 2) Density Networks and their application to Protein Modelling 3) A Free Energy Minimization Framework for Inference Problems in Modulo 2 Arithmetic 4) Interpolation models with multiple hyperparameters -------------- Details -------------- 1) Probable Networks and Plausible Predictions - A Review of Practical Bayesian Methods for Supervised Neural Networks by David J C MacKay Review paper to appear in `Network' (1995). Final version (1 Feb 95). 41 pages. (508K) ftp://131.111.48.8/pub/mackay/network.ps.Z 2) Density Networks and their application to Protein Modelling by David J C MacKay Abstract: I define a latent variable model in the form of a neural network for which only target outputs are specified; the inputs are unspecified. Although the inputs are missing, it is still possible to train this model by placing a simple probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. The model can then discover for itself a description of the data in terms of an underlying latent variable space of lower dimensionality. I present preliminary results of the application of these models to protein data. (to appear in Maximum Entropy 1994 Proceedings [1995]) ftp://131.111.48.8/pub/mackay/density.ps.Z (130K) 3) A Free Energy Minimization Framework for Inference Problems in Modulo 2 Arithmetic by David J C MacKay Abstract: This paper studies the task of inferring a binary vector s given noisy observations of the binary vector t = A s mod 2, where A is an M times N binary matrix. This task arises in correlation attack on a class of stream ciphers and in other decoding problems. The unknown binary vector is replaced by a real vector of probabilities that are optimized by variational free energy minimization. 
The derived algorithms converge in computational time of order between w_{A} and N w_{A}, where w_{A} is the number of 1s in the matrix A, but convergence to the correct solution is not guaranteed. Applied to error correcting codes based on sparse matrices A, these algorithms give a system with empirical performance comparable to that of BCH and Reed-Muller codes. Applied to the inference of the state of a linear feedback shift register given the noisy output sequence, the algorithms offer a principled version of Meier and Staffelbach's (1989) algorithm B, thereby resolving the open problem posed at the end of their paper. The algorithms presented here appear to give superior performance. (to appear in Proceedings of 1994 K.U. Leuven Workshop on Cryptographic Algorithms) ftp://131.111.48.8/pub/mackay/fe.ps.Z (101K) 4) Interpolation models with multiple hyperparameters by David J C MacKay and Ryo Takeuchi Abstract: A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant alpha, and the noise model has a single parameter beta. The ratio alpha/beta alone is responsible for determining globally all these attributes of the interpolant: its `complexity', `flexibility', `smoothness', `characteristic scale length', and `characteristic amplitude'. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of `conditional convexity' when designing models with many hyperparameters. (submitted to IEEE PAMI) ftp://131.111.48.8/pub/mackay/newint.ps.Z (179K) To get papers by anonymous ftp, follow the usual procedure: ftp 131.111.48.8 anonymous cd pub/mackay binary get ... 
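The inference task in paper 3 can be illustrated with a toy example. The sketch below is NOT the paper's variational free-energy algorithm: for a tiny N it simply enumerates all 2^N candidate vectors and keeps the one whose predictions A s mod 2 disagree least with the noisy observations. The matrix A here is an invented example (the generator of an [8,4] extended Hamming code, so that a single flipped observation bit is uniquely correctable).

```python
import itertools
import numpy as np

# Toy instance of the task: infer s from noisy observations of
# t = A s mod 2.  A is an 8x4 binary matrix (invented example).
A = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0],
              [0, 0, 1, 0],
              [0, 0, 0, 1],
              [0, 1, 1, 1],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [1, 1, 1, 0]])
s_true = np.array([1, 0, 1, 1])
t = A.dot(s_true) % 2          # noise-free observations
t_noisy = t.copy()
t_noisy[5] ^= 1                # flip one observation bit (the "noise")

def infer(A, t_obs):
    """Exhaustive search over all 2^N candidate vectors s, keeping the
    one whose predictions A s mod 2 disagree least with t_obs.
    Feasible only for tiny N; the paper's free-energy method instead
    optimizes a real-valued probability vector."""
    best, best_err = None, None
    for bits in itertools.product([0, 1], repeat=A.shape[1]):
        s = np.array(bits)
        err = int(np.sum((A.dot(s) % 2) != t_obs))
        if best_err is None or err < best_err:
            best, best_err = s, err
    return best, best_err

s_hat, err = infer(A, t_noisy)
print(s_hat, err)
```

With this A the code's minimum distance is 4, so the single corrupted bit cannot push the observations closer to any wrong candidate, and the search recovers s_true with exactly one residual disagreement.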
------------------------------------------------------------------------------ David J.C. MacKay email: mackay at mrao.cam.ac.uk Radio Astronomy, www: ftp://131.111.48.24/pub/mackay/homepage.html Cavendish Laboratory, tel: +44 1223 337238 fax: 354599 home: 276411 Madingley Road, Cambridge CB3 0HE. U.K. home: 19 Thornton Road, Girton, Cambridge CB3 0NP ------------------------------------------------------------------------------ From Nicolas.Szilas at imag.fr Thu Feb 2 13:30:19 1995 From: Nicolas.Szilas at imag.fr (Nicolas Szilas) Date: Thu, 2 Feb 1995 19:30:19 +0100 Subject: Paper available Message-ID: <199502021830.TAA10687@meteore.imag.fr> Hello, The following paper is now available by anonymous ftp: ----------------------------------------------------------------------- ACTION FOR LEARNING IN NON-SYMBOLIC SYSTEMS ------------------------------------------- Nicolas SZILAS(1) and Eric RONCO(2) (1) ACROE & LIFIA-IMAG, INPG - 46, av. felix Viallet, 38031 GRENOBLE Cedex, France (2) LIFIA-IMAG, INPG - 46, av. felix Viallet, 38031 GRENOBLE Cedex, France As a cognitive function, learning must be considered as an interactive process, especially where progressive learning is concerned; the system must choose at each moment what to learn and when. Therefore, we propose an overview of interactive learning algorithms in the connectionist literature and conclude that most studies are concerned with the search for the most informative patterns in the environment, whereas few of them deal with progressive complexity learning. Subsequently, some effects of progressive learning are experimentally studied in the framework of supervised learning in layered networks: the results exhibit great benefits from progressively increasing the difficulty of the environment. 
To design more advanced interactive connectionist learning, we study psychological theories of automatization, which show that taking complex environments into account implies the emergence of successive levels of automated processing. Thus, a model is proposed, based on hierarchical and incremental integration of modular processing, corresponding to a progressive increase in the difficulty of the environment. ----------------------------------------------------------------------- To appear in the European Conference on Cognitive Science, Saint-Malo, France, April 1995. anonymous ftp to: imag.fr filename: pub/LIFIA/szilas.eccs95.e.ps.Z ---------------------------------------------------------------------- Nicolas SZILAS e-mail: Nicolas.Szilas at imag.fr LIFIA-ACROE 46, avenue Felix Viallet 38031 Grenoble Cedex, France tel.: (33) 76-57-48-12 ---------------------------------------------------------------------- From dwang at cis.ohio-state.edu Thu Feb 2 15:24:30 1995 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Thu, 2 Feb 1995 15:24:30 -0500 Subject: A summary on Catastrophic Interference Message-ID: <199502022024.PAA05441@shirt.cis.ohio-state.edu> Catastrophic Interference and Incremental Training A brief summary by DeLiang Wang Several weeks ago I posted a message on this network asking about the research status of catastrophic interference. I have since received more than 50 replies, either publicly or privately. I have benefited much from the discussions, and I hope others have too. I have read/scanned through most of the papers (those easily accessible) pointed to by the replies. To thank all of you who have replied, I have compiled a brief summary of my readings. Since I do not want to post a long review paper on the net, the following summary will be very brief, which means an inevitable oversimplification of the models mentioned here and neglect of some other work. 
(1) Catastrophic interference (CI) refers to the phenomenon that later training disrupts the results of previous training. It was pointed out as a criticism of multi-layer perceptrons with backpropagation training (MLP) by Grossberg (1987), and systematically revealed by McCloskey & Cohen (1989) and Ratcliff (1990). Why is CI bad? It prevents a single network from incrementally accumulating knowledge (the alternative would be to have each network learn just one set of transforms), and it poses severe problems for MLP as a model of human/animal memory. (2) Catastrophic interference is model-specific. So far, the problem has been revealed only in MLP or its variants. We know that models where different items are represented without overlaps (e.g. ART) do not have this problem. Even some models with certain overlaps do not have this problem (see, for example, Willshaw, 1981; Wang and Yuwono, 1995). Unfortunately, many studies on CI carry general titles, such as "connectionist models" and "neural network models", and leave people with the impression that CI is a general problem with all neural network (NN) models. These general titles are used by both critics and proponents of neural networks. Titles of this kind may have been justified in the early days, when awareness of NNs needed to be raised. At the field's current stage of development, results about NNs should be stated specifically, and a title should properly reflect the scope of the paper. (3) Tradeoff between distributedness and interference. The major cause of CI is the "distributedness" of representations: learning new patterns must use the very weights that participate in representing previously learned patterns. Much of the investigation into overcoming CI is directed towards reducing the extent of distributedness. We can say that there is a tradeoff between distributedness and interference (as said earlier: no overlaps, no CI). (4) Ways of overcoming CI. The problem has been studied by a number of authors, all of whom work on MLP or its variants. 
Here is a list of proposals to alleviate CI: * Reduce overlap in hidden unit representations. French (1991, 1994), Kruschke (1992), Murre (1992), Fahlman (1991). * Orthogonalization. This idea was proposed long ago for reducing cross-talk in associative memories. The same idea works here to reduce CI. See Kortge (1990), Lewandowsky (1991), Sloman and Rumelhart (1992), Sharkey and Sharkey (1994), McClelland et al. (1994). * Prior training. Assuming later patterns are drawn from the same underlying function as earlier patterns, prior training on the general task can reduce CI (McRae and Hetherington, 1993). This proposal will not work if later patterns have little to do with previously trained patterns. * Modularization. The idea is similar to the earlier ones. We have a hierarchy of different networks, and each network is selected to learn a different category of tasks. See Waibel (1989), Brashers-Krug et al. (1995). * Retain only a recent history. The idea here is to let past patterns be forgotten and retain only a limited number of patterns, including the new one (reminiscent of STM). See Ruiz de Angulo and Torras (1995). (5) Transfer studies. Another body of related, but different, work studies how previous training can facilitate acquiring a new pattern. The effect of acquiring a new pattern on previous memory, however, is not explored. See Pratt (1993), Thrun and Mitchell (1994), Murre (1995). (6) What about associative memory? In my original message, I suspected that associative memory models that can handle correlated patterns (see Kanter and Sompolinsky, 1987; Diederich and Opper, 1987) should suffer the same problem of catastrophic interference. Unfortunately, no response has touched on this issue. Are people taking associative memory models seriously nowadays? (7) To summarize, the following two ideas, in my opinion, seem to hold the greatest promise for solving CI. 
The first idea is to reduce the receptive field of each unit, thus reducing the overlaps among different feature detectors. RBF (radial basis function) networks are of this type. After all, limited receptive fields are characteristic of brain cells, and all-to-all connections are scarce if they exist at all. The second idea is to introduce some form of modularization so that different underlying functions are handled in different modules (reducing overlaps among differing tasks). This may not only solve the problem of CI, but also facilitate acquiring new knowledge (positive transfer). Furthermore, this idea is consistent with the general principle of functional localization in the brain. References (More detailed references for tech. reports were posted before): Brashers-Krug T., R. Shadmehr, and E. Todorov (1995): In: NIPS-94 Proceedings, to appear. Diederich S. and M. Opper (1987). Phys. Rev. Lett. 58, 949-952. Fahlman S. (1991): In NIPS-91 Proceedings. French, R.M. (1991): In: Proc. the 13th Annual Conf. of the Cog Sci Society. French, R.M. (1994): In: Proc. the 16th Annual Conf. of the Cog Sci Society. Grossberg S. (1987). Cognit. Sci. 11, 23-64. Kanter I. & Sompolinsky H. (1987). Phys. Rev. A 35, 380-392. Kortge, C.A. (1990): In: Proc. the 12th Annual Conf. of the Cog Sci Society. Kruschke, J.K. (1992). Psychological Review, 99, 22-44. Lewandowsky, S. (1991): In: Relating theory and data: Essays on human memory in honor of Bennet B. Murdock (W. Hockley & S. Lewandowsky, Eds.). McClelland, J., McNaughton, B., & O'Reilly, R. (1994): CMU Tech report: PNP.CNS.94.1. McCloskey M., and Cohen N. (1989): In: The Psychology of Learning and Motivation, 24, 109-165. McRae, K., & Hetherington, P.A. (1993): In: Proc. the 15th Annual Conf. of the Cog Sci Society. Murre, J.M.J. (1992): In: Proc. the 14th Annual Conf. of the Cog Sci Society. Murre, J.M.J. (in press): In: J. Levy et al. (Eds): Connectionist Models of Memory and Language. London: UCL Press. Pratt, L. 
(1993): In: NIPS-92 Proceedings. Ratcliff, R. (1990): Psychological Review 97, 285-308. Ruiz de Angulo V. and C. Torras (1995): IEEE Trans. on Neural Net., in press. Sharkey, N.E. & Sharkey, A.J.C. (1994): Technical Report, Department of Computer Science, University of Sheffield, U.K. Sloman, S.A., & Rumelhart, D.E. (1992): In A. Healy et al. (Eds): From learning theory to cognitive processes: Essays in honor of William K. Estes. Thrun S. & Mitchell T. (1994): CMU Tech Rep. Waibel A. (1989). Neural Computation 1, 39-46. Wang D.L. and B. Yuwono (1995): IEEE Trans. on Syst. Man Cyber. 25(4), in press. Willshaw D. (1981): In: Parallel Models of Associative Memory (Eds. G. Hinton and J. Anderson, Erlbaum). From dayan at ai.mit.edu Thu Feb 2 21:16:48 1995 From: dayan at ai.mit.edu (Peter Dayan) Date: Thu, 2 Feb 95 21:16:48 EST Subject: PhD Studentship at Edinburgh Message-ID: <9502030216.AA26937@peduncle> ======================================================================= PhD Study in Computational Models of Spatial Learning and Memory, with particular reference to the Hippocampal Formation Prof Richard Morris, University of Edinburgh Peter Dayan, MIT ======================================================================= A Faculty scholarship from the University of Edinburgh is available for a PhD student to work on the project described below. The student will be registered with Prof Morris at the University of Edinburgh, but will also visit MIT to conduct research. Computational Models of Spatial Learning and Memory, with particular reference to the Hippocampal Formation There is substantial evidence for the involvement of the hippocampal formation in the process of learning and using information about space. The purpose of this project is to build a computational model of spatial learning that addresses the related data at three levels of inquiry. 
- electrophysiological studies on place cells and on synaptic plasticity at various loci in the hippocampus; - behavioural studies on the way that animals construct models of their environments, the cues they use, and the way that they employ this information to plan paths and find targets; - computational studies on the representation of space, model construction and reinforcement learning. This computational model will start from and contribute to behavioural experiments in the `Manhattan Maze' (Biegler and Morris, Nature, 1993), which has been used to probe the way that particular landmarks and their inter-relationships control the way that rats orient themselves in space. Confirmatory experiments in the open field water maze will also be possible. This project should suit a student with qualifications in mathematics or computer science who has a demonstrable interest in psychology/neurobiology; or a student with qualifications in psychology/neurobiology with demonstrable competence in computational modelling. The stipend is 6391 pounds sterling a year. Applicants should send copies of their CV and a statement of their research interests to: Prof RGM Morris Dr P Dayan Centre for Neuroscience Dept of Brain and Cognitive Sciences University of Edinburgh E25-201, MIT Crichton Street Cambridge, Edinburgh EH8 9LE Massachusetts 02139 Scotland USA r.g.m.morris at ed.ac.uk dayan at psyche.mit.edu Notification of the award will be made by 30th June 1995 and the studentship will start in October 1995. 
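The third level mentioned above (reinforcement learning over a spatial state space) can be given a minimal sketch. This is only an illustration, not the project's model: a simulated rat random-walks along a five-cell corridor with a reward at the right end, and learns state values by TD(0); all parameters (alpha, gamma, episode count) are arbitrary choices.

```python
import random

# Toy TD(0) value learning on a 5-cell corridor (illustrative only).
N_STATES = 5
GOAL = N_STATES            # absorbing goal just beyond the last cell
ALPHA, GAMMA = 0.1, 0.9    # learning rate and discount (arbitrary)

def run(episodes=2000, seed=0):
    rng = random.Random(seed)
    V = [0.0] * N_STATES   # one value estimate per cell
    for _ in range(episodes):
        s = 0              # each episode starts at the leftmost cell
        while s != GOAL:
            step = rng.choice([-1, 1])      # unbiased random walk
            s2 = max(0, s + step)           # reflect at the left wall
            r = 1.0 if s2 == GOAL else 0.0  # reward only at the goal
            v_next = 0.0 if s2 == GOAL else V[s2]
            V[s] += ALPHA * (r + GAMMA * v_next - V[s])  # TD(0) update
            s = s2
    return V

V = run()
print([round(v, 2) for v in V])
```

After enough episodes the learned values rise toward the goal end of the corridor, which is the kind of gradient a one-step lookahead policy can follow.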
From N.Sharkey at dcs.shef.ac.uk Fri Feb 3 05:02:42 1995 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Fri, 3 Feb 95 10:02:42 GMT Subject: ROBOCALL Message-ID: <9502031002.AA15436@entropy.dcs.shef.ac.uk>

*********************************************************
*                     ROBOCALL                          *
*                                                       *
*        Special Robotics Track of EANN'95              *
*                                                       *
*   1 page abstract due by 16th February, 1995          *
*                                                       *
*********************************************************

Sorry about the late announcement; it keeps bouncing. We will consider any papers relevant to the use of Neural Computing in Robotics. Email abstracts to Jill at dcs.sheffield.ac.uk Organiser: Noel Sharkey N.Sharkey at dcs.sheffield.ac.uk Sub-Committee Michael Arbib (USA) Valentino Braitenberg (Germany) Georg Dorffner (Austria) John Hallam (Scotland) Ali Zalzala (England) CONFERENCE DETAILS BELOW International Conference on Engineering Applications of Neural Networks (EANN '95) Helsinki, Finland August 21-23, 1995 Final Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, food engineering and environmental engineering. Abstracts of one page (200 to 400 words) should be sent to eann95 at aton.abo.fi by *31 January 1995*, by e-mail in PostScript format, or in TeX or LaTeX. Plain ASCII is also acceptable. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 31 January 1995.
Notification of acceptance will be sent around 1 March. The number of full papers will be very limited. You will receive a submission number for each abstract you send. If you haven't received one, please ask for it. Special tracks have been set up for applications in robotics (N. Sharkey, n.sharkey at dcs.shef.ac.uk), control applications (E. Tulunay, ersin_tulunay at metu.edu.tr), biotechnology/food engineering applications (P. Linko), and mineral and metal industry (J. van Deventer, jsjvd at maties.sun.ac.za). You can submit abstracts for the special tracks directly to their coordinators or to eann95 at aton.abo.fi. Local program committee A. Bulsari J. Heikkonen (Italy) E. Hyv\"onen P. Linko L. Nystr\"om S. Palosaari H. Sax\'en M. Syrj\"anen J. Sepp\"anen A. Visa International program committee G. Dorffner (Austria) A. da Silva (Brazil) V. Sgurev (Bulgaria) M. Thompson (Canada) B.-Z. Chen (China) V. Kurkova (Czechia) S. Dutta (France) D. Pearson (France) G. Baier (Germany) C. M. Lee (Hong Kong) J. Fodor (Hungary) L. M. Patnaik (India) H. Siegelmann (Israel) R. Baratti (Italy) R. Serra (Italy) I. Kawakami (Japan) C. Kuroda (Japan) H. Zhang (Japan) J. K. Lee (Korea) J. Kok (Netherlands) J. Paredis (Netherlands) W. Duch (Poland) R. Tadeusiewicz (Poland) B. Ribeiro (Portugal) W. L. Dunin-Barkowski (Russia) V. Stefanuk (Russia) E. Pupyrev (Russia) S. Tan (Singapore) V. Kvasnicka (Slovakia) A. Dobnikar (Slovenia) J. van Deventer (South Africa) B. Martinez (Spain) H. Liljenstr\"om (Sweden) G. Sj\"odin (Sweden) J. Sj\"oberg (Sweden) E. Tulunay (Turkey) N. Sharkey (UK) D. Tsaptsinos (UK) N. Steele (UK) S. Shekhar (USA) J. Savkovic-Stevanovic International Conference on Engineering Applications of Neural Networks (EANN '95) Registration information The registration fee is FIM 2000 until 15 March, after which it will be FIM 2400. A discount of up to 40% will be given to some participants from Eastern Europe and developing countries.
Those who wish to avail themselves of this discount need to apply for it. The application form can be sent by e-mail. The papers may not be included in the proceedings if the registration fee is not received before 15 April, or if the paper does not follow the specified format. If your registration fee is received before 15 February, you are entitled to attend one tutorial for free. The fee for each tutorial will be FIM 200, to be paid in cash at the conference site. No decisions have yet been made about which tutorials will be presented, since tutorial proposals can be sent until 31 January. The registration fee should be paid to ``EANN 95'', bank account SYP (Union Bank of Finland) 220518-125251, Turku, Finland, by bank transfer, or you could send us a bank draft payable to ``EANN 95''. If it is difficult to get a bank draft in Finnish currency, you could send a bank cheque or a draft of GBP 280 (pounds sterling) until 15 March, or GBP 335 after 15 March. If you need to send it in some other way, please ask. The postal address for sending the bank drafts or bank cheques is EANN '95/SEA, Post box 34, 20111 Turku 11, Finland. The registration form can be sent by e-mail. ---------------------------------------------------------------------- If you are from Eastern Europe or a developing country and would like to apply for a discount, please send the application first, or with the registration form. If you get the discount and have already paid a larger amount, the difference will be refunded in cash at the conference site. -------------------------------------------------------CUT HERE------- Registration form Name Affiliation (university/company/organisation) E-mail address Address Country Fax Have you submitted one or more abstracts ? Y/N Registration fee sent FIM/GBP by bank transfer / bank draft / other (please specify) Any special requirements ?
Date registration fee sent Date registration form sent ---------------------------------------------------------------------- From pah at unixg.ubc.ca Fri Feb 3 12:57:00 1995 From: pah at unixg.ubc.ca (Phil A. Hetherington) Date: Fri, 3 Feb 1995 09:57:00 -0800 (PST) Subject: A summary on Catastrophic Interference In-Reply-To: <199502022024.PAA05441@shirt.cis.ohio-state.edu> Message-ID:

> * Reduce overlapping in hidden unit representations. French (1991,
>1994), Kruschke (1992), Murre (1992), Fahlman (1991).
>
> * Prior training. Assuming later patterns are drawn from the same
>underlying function as earlier patterns, prior training of the general task
>can reduce RI (McRae and Hetherington, 1993). This proposal will not
>work if later patterns have little to do with previously trained patterns.

Actually, prior training is a method of reducing overlap at the hidden layer, as it achieves the same goal naturally, and it will work even if later patterns have little to do with previously trained patterns. McRae and Hetherington demonstrated that the method was successful with autoencoders and then replicated this with a network that computed arbitrary associations (i.e., random). Given that prior training also consisted of random associations, there was no general function the net could derive (i.e., later patterns had little to do with previous patterns). The training merely had the effect of 'sharpening' the hidden unit responses so that later items would not be distributed across all or most units, as is the case in a naive net. I would make an empirical prediction that in a net trained sequentially to recall faces, prior training on white noise would probably suffice to reduce CI.
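The 'sharpening' account can be illustrated numerically. The numpy sketch below is a toy illustration only (the network, gain values, and overlap measure are invented for the example, not taken from McRae and Hetherington): overlap is measured as the mean pairwise cosine similarity of hidden activation vectors, and a net whose units respond broadly and near-uniformly (small random weights, as in a naive net) shows much higher overlap than one with sharpened, near-binary responses (the same weights scaled up).

```python
import numpy as np

def hidden_overlap(W, b, X):
    """Mean pairwise cosine similarity of hidden-unit activation vectors."""
    z = np.clip(X @ W + b, -30.0, 30.0)           # clip to avoid exp overflow
    H = 1.0 / (1.0 + np.exp(-z))                  # sigmoid hidden activations
    H = H / np.linalg.norm(H, axis=1, keepdims=True)
    S = H @ H.T                                   # pairwise cosine similarities
    n = len(X)
    return (S.sum() - n) / (n * (n - 1))          # mean of off-diagonal entries

rng = np.random.default_rng(0)
X = rng.choice([0.0, 1.0], size=(20, 30))         # 20 random binary input patterns

W = rng.normal(size=(30, 15))                     # one hidden layer of 15 units
b = np.zeros(15)

naive = hidden_overlap(0.1 * W, b, X)             # small weights: broad, uniform responses
sharp = hidden_overlap(5.0 * W, b, X)             # large weights: 'sharpened' responses

print(round(naive, 2), round(sharp, 2))           # the sharpened net overlaps far less
```

Scaling the weights up is of course only a stand-in for what prior training does; the point is just that less distributed, more selective hidden responses give later items less shared representational territory to overwrite.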
Cheers, Phil Hetherington From haarmann at ibalm.psy.cmu.edu Fri Feb 3 13:29:33 1995 From: haarmann at ibalm.psy.cmu.edu (Henk Haarmann (Henk Haarmann)) Date: Friday, 03 Feb 95 13:29:33 EST Subject: hybrid modelling workshop announcement Message-ID: -------------------------------------------------------------------------- Cognitive Modeling Workshop, July 15-20 '95, Carnegie Mellon University Please bring this announcement to the attention of appropriate applicants. We invite applications to attend a 6-day summer workshop on cognitive modeling to be held in the psychology department at Carnegie Mellon University in Pittsburgh from July 15 to July 20. The workshop dates have been selected to enable participants to easily attend the Cognitive Science meetings, which will also be held in Pittsburgh, July 21-25, immediately after the workshop. The workshop is intended for Ph.D. students, postdocs, and junior faculty with an active research background in cognitive psychology but with little or no experience in computer simulation modeling. Participants will be given intensive practical experience in using 3CAPS, a hybrid symbolic-connectionist modeling language, which should enable them to apply it in their own domain of research. 3CAPS' focus is on the central role of working memory in constraining storage and processing in a variety of cognitive domains, including mental rotation, text processing, normal and aphasic sentence comprehension, human computer interaction and automated telephone interaction. Several tutors will guide participants through exercises in a computer lab in each of these domains. Participants will also be shown how to develop a 3CAPS model from scratch. A final component of the program involves a contrastive analysis, that is, comparison with a related modeling language, ACT-R, in the area of algebraic problem solving. Experience in computer modeling is not essential, but some knowledge of computer programming in general is necessary.
Applicants should send a cover letter and a curriculum vitae and, in the case of graduate students, arrange for one letter of recommendation. The DEADLINE for application is March 15, 1995. The workshop intends to provide room and board and reimburse airfare (if needed) to all participants who are U.S. residents, and to provide room and board only for non-U.S. participants. Enrollment is competitive and limited, so early application is strongly encouraged. Applicants will be notified of their acceptance by April 15, 1995. The workshop is sponsored by the Division of Cognitive and Neural Science and Technology of the Office of Naval Research. Applications should be sent to: Henk Haarmann Cognitive Modeling Workshop Department of Psychology Carnegie Mellon University Pittsburgh, PA 15213-3890 A repetition of this message with more extensive program information can be found on the World Wide Web (WWW): From changjcc at delta.eecs.nwu.edu Fri Feb 3 14:47:36 1995 From: changjcc at delta.eecs.nwu.edu (Jyh-Chian Chang) Date: Fri, 3 Feb 1995 13:47:36 -0600 (CST) Subject: RIGID Linear Transformation with NN??? Message-ID: <9502031947.AA13146@delta.eecs.nwu.edu> Dear NN researcher: We all know that it's easy to construct a linear NN to do linear transformations (i.e., A = RX, where A and X are vectors and R is the transformation matrix). However, is it possible to build an NN to do RIGID linear transformations? "RIGID" means that the rotation matrix, R, must be an orthogonal matrix (i.e., its columns form an orthonormal basis; it rotates without scaling or shearing). I know that there are many non-NN methods to solve this problem (e.g., Quaternion-based, SVD-based approaches, and so on). Does anyone here have any ideas about how to build an NN to do rigid linear transformations? Any suggestions are highly appreciated! Thank you in advance!
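One possible construction (a sketch, not a definitive answer; all code below is illustrative and assumes noise-free example pairs): treat R as the weight matrix of a one-layer linear network, train it by gradient descent on pairs (x, a = Rx), and after every update project the weights back onto the orthogonal group with an SVD — which is exactly the orthogonal Procrustes step from the SVD-based methods mentioned above.

```python
import numpy as np

rng = np.random.default_rng(1)

# Ground-truth rigid transformation: a random orthogonal matrix with det +1.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
if np.linalg.det(Q) < 0:
    Q[:, 0] *= -1.0           # flip one column so R is a proper rotation
R_true = Q

X = rng.normal(size=(50, 3))  # training inputs, one vector per row
A = X @ R_true.T              # targets: a = R x for every row

W = np.eye(3)                 # weights of a plain linear 'network'
for _ in range(200):
    grad = (X @ W.T - A).T @ X / len(X)   # gradient of 0.5 * mean ||W x - a||^2
    W -= 0.5 * grad                       # ordinary gradient step
    U, _, Vt = np.linalg.svd(W)           # Procrustes step: nearest orthogonal matrix
    W = U @ Vt

print(np.allclose(W @ W.T, np.eye(3)))    # rigidity is maintained
print(np.allclose(W, R_true, atol=1e-3))  # and the true rotation is recovered
```

With noise-free pairs the projection alone already solves the problem in closed form (take the SVD of A.T @ X and multiply U @ Vt), so the loop mainly illustrates how an incrementally learning linear network can be kept rigid throughout training.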
Garry From pollack at cs.brandeis.edu Fri Feb 3 16:17:20 1995 From: pollack at cs.brandeis.edu (Jordan Pollack) Date: Fri, 3 Feb 1995 16:17:20 -0500 Subject: Graduate Opportunities at Brandeis University Message-ID: <199502032117.QAA01990@jade.cs.brandeis.edu> In May 1994 Brandeis University announced the opening of the new Volen National Center for Complex Systems with the goal of promoting interdisciplinary research and collaboration between faculty from Biology, Computer Science, Linguistics, Mathematics, Neuroscience, Physics, and Psychology. The Center, whose main mission is to study the brain, intelligence, and advanced computation, has already earned accolades from scientists worldwide, and continues to expand. Brandeis University is located in Waltham, a suburb 10 miles west of Boston, with easy rail access to both Cambridge and downtown Boston. Founded in 1948, it is recognized as one of the finest private liberal arts and science universities in the United States. Brandeis combines the breadth and range of academic programs usually found at much larger universities, with the friendliness of a smaller and more focused research community. The Computer Science Department is located entirely in the new Volen Center building and is the home of four Artificial Intelligence faculty actively involved in the Center activities and collaborations: Rick Alterman, Maja Mataric, Jordan Pollack, and James Pustejovsky. Professor Alterman's research interests are in the areas of artificial intelligence and cognitive science. A recent project focused on the problems of everyday reasoning. A model was developed where agent goal-directed behavior is guided by pragmatics rather than by analytic techniques. His work with Zito-Wolf developed techniques for skill acquisition and learning; the focus was on building case representations of procedural knowledge.
Work with Carpenter focused on building a reading agent that can actively seek out and interpret instructions that are relevant to a "breakdown" situation for the overall system. Two current projects support the evolution and maintenance of a collective memory for a community of distributed heterogeneous agents who plan and work cooperatively, and the building of interactive systems that improve their own performance by keeping track of the history of interactions between the end-user and the system. For more information see http://www.cs.brandeis.edu/dept/faculty/alterman. Jordan Pollack's research interests lie at the boundary between neural and symbolic computation: How could simple neural mechanisms organized naturally into multi-cellular structures by evolution provide the capacity necessary for cognition, language, and general intelligence? This view has led to successful work on how variable tree-structures could be represented in neural activity patterns, how dynamical systems could act as language generators and recognizers, and how fractal limit behavior of recurrent networks could represent mental imagery. He has also worked on evolutionary and co-evolutionary learning in strategic game playing agents, as well as with teams of simple agents who cooperate on complex tasks. Prof. Pollack encourages students with backgrounds and interests in AI, machine learning, dynamical systems, fractals, connectionism, and ALife to apply. For more information see http://www.cs.brandeis.edu/dept/faculty/pollack. Maja Mataric's interdisciplinary research focuses on understanding systems that integrate perception, representation, learning, and action. Her current work is applied to the synthesis and analysis of behavior in situated agents and multi-agent systems, and to learning and imitation, in software, dynamical simulations, and physical robots. Learning new behaviors and behavior selection, as well as memory and representation, are the main thrusts of the research.
The newest project models learning by imitation, through the interaction of a collection of cognitive systems, including perception (attention and analysis), memory (declarative and non-declarative representations), action sequence planning, motor control, proprioception, and learning. Prof. Mataric encourages students with interests and/or backgrounds in AI, autonomous agents, machine learning, cognitive science, and cognitive neuroscience to apply. For more information see http://www.cs.brandeis.edu/dept/faculty/mataric. James Pustejovsky conducts research in the areas of computational linguistics, lexical semantics, information retrieval and extraction, and aphasiology. The main focus of his research is on the computational and cognitive modeling of natural language meaning. More specifically, the investigation is into how words and their meanings combine to form meaningful texts. This research has focused on developing a theory of lexical semantics based on a methodology making use of formal and computational semantics. There are several projects applying the results of this theory to Natural Language Processing, which, in effect, empirically test this view of semantics. These include: an NSF grant with Apple to automatically construct index libraries and help systems for applications; a DEC grant to automatically convert a trouble-shooting text-corpus into a case library. He is also currently working with aphasiologist Dr. Susan Kohn on word-finding difficulties and sentence generation in aphasics. For more information see http://www.cs.brandeis.edu/dept/faculty/pustejovsky. The four AI faculty work together and with other members of the Volen Center, creating new interdisciplinary research opportunities in areas including cognitive science (http://fechner.ccs.brandeis.edu/cogsci.html), computational neuroscience, and complex systems at Brandeis University.
To get more information about the Volen Center for Complex Systems, about the Computer Science Department, and about the other CS faculty, see: http://www.cs.brandeis.edu/dept The URL for the graduate admission information is http://www.cs.brandeis.edu/dept/grad-info/application.html From tesauro at watson.ibm.com Fri Feb 3 16:59:43 1995 From: tesauro at watson.ibm.com (tesauro@watson.ibm.com) Date: Fri, 3 Feb 95 16:59:43 EST Subject: NIPS*94 proceedings Message-ID: This is to announce a change of publisher for the proceedings of NIPS*94. The proceedings will be published by MIT Press. All conference attendees will receive a free copy of the proceedings. (Attendees whose mailing address has changed since they registered for the meeting should send their new address to nips94 at mines.colorado.edu.) The citation for the volume will be: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. -- Gerry Tesauro NIPS*94 General Chair From esann at dice.ucl.ac.be Fri Feb 3 17:00:33 1995 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Sat, 4 Feb 1995 00:00:33 +0200 Subject: Neural Processing Letters Vol.2 No.1 Message-ID: <9502032258.AA17826@ns1.dice.ucl.ac.be> You will find enclosed the table of contents of the January 1995 issue of "Neural Processing Letters" (Vol.2 No.1). We also inform you that subscription to the journal is now possible by credit card. 
All necessary information is contained on the following servers: - FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL - WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html If you have no access to these servers, or for any other information (subscriptions, instructions for authors, free sample copies,...), please don't hesitate to contact the publisher directly: D facto publications 45 rue Masui B-1210 Brussels Belgium Phone: + 32 2 245 43 63 Fax: + 32 2 245 46 94 Neural Processing Letters, Vol.2, No.1, January 1995 ____________________________________________________ - A new scheme for incremental learning C. Jutten, R. Chentouf - A nonlinear extension of the Generalized Hebbian learning J. Joutsensalo, J. Karhunen - Morphogenesis of neural networks O. Michel, J. Biondi - Compartmental modelling with artificial neural networks C.J. Coomber - Quantitative object motion prediction by an ART2 and Madaline combined neural network Q. Zhu, A.Y. Tawfik - An ANNs-based system for the diagnosis and treatment of diseases G.-P. K. Economou, D. Lymberopoulos, C.E. Goutis - A learning algorithm for Recurrent Radial-Basis Function networks M.W.
Mak _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________ From P.McKevitt at dcs.shef.ac.uk Fri Feb 3 08:12:43 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Fri, 3 Feb 95 13:12:43 GMT Subject: REACHING-FOR-MIND: AISB-95 WKSHOP/FINAL-CALL Message-ID: <9502031312.AA24988@dcs.shef.ac.uk> *******************************************************************************

REACHING FOR MIND   REACHING FOR MIND   REACHING FOR MIND
REACHING FOR MIND   REACHING FOR MIND   REACHING FOR MIND
REACHING FOR MIND   REACHING FOR MIND   REACHING FOR MIND

Advance Announcement FINAL CALL FOR PAPERS AND PARTICIPATION AISB-95 Workshop on REACHING FOR MIND: FOUNDATIONS OF COGNITIVE SCIENCE April 3rd/4th 1995 at the Tenth Biennial Conference on AI and Cognitive Science (AISB-95) (Theme: Hybrid Problems, Hybrid Solutions) Halifax Hall University of Sheffield Sheffield, England (Monday 3rd -- Friday 7th April 1995) Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) Chair: Sean O Nuallain Dublin City University, Dublin, Ireland & National Research Council, Ottawa, Canada Co-Chair: Paul Mc Kevitt Department of Computer Science University of Sheffield, England WORKSHOP COMMITTEE: John Barnden (New Mexico State University, NM, USA) Istvan Berkeley (University of Alberta, Canada) Mike Brady (Oxford, England) Harry Bunt (ITK, Tilburg, The Netherlands) Peter Carruthers (University of Sheffield, England) Daniel Dennett (Tufts University, USA) Eric Dietrich (SUNY Binghamton, NY, USA) Jerry Feldman (ICSI, UC Berkeley, USA) John Frisby (University of Sheffield, England) Stevan Harnad (University of Southampton, England) James Martin (University of Colorado at Boulder, CO, USA) John Macnamara (McGill University, Canada) Mike McTear (Universities of Ulster and Koblenz, Germany) Ryuichi Oka (RWCP, Tsukuba, Japan) Jordan Pollack
(Ohio State University, OH, USA) Zenon Pylyshyn (Rutgers University, USA) Ronan Reilly (University College, Dublin, Ireland) Roger Schank (ILS, Northwestern, USA) Noel Sharkey (University of Sheffield, England) Walther v.Hahn (University of Hamburg, Germany) Yorick Wilks (University of Sheffield, England) WORKSHOP DESCRIPTION The assumption underlying this workshop is that Cognitive Science (CS) is in crisis. The crisis manifests itself, as exemplified by the recent Buffalo summer institute, in a complete lack of consensus among even the biggest names in the field on whether CS has or indeed should have a clearly identifiable focus of study; the issue of identifying this focus is a separate and more difficult one. Though academic programs in CS have in general settled into a pattern compatible with classical computationalist CS (Pylyshyn 1984, Von Eckardt 1993), including the relegation from focal consideration of consciousness, affect and social factors, two fronts have been opened on this classical position. The first front is well-publicised and highly visible. Both Searle (1992) and Edelman (1992) refuse to grant any special status to information-processing in explanation of mental process. In contrast, they argue, we should focus on Neuroscience on the one hand and Consciousness on the other. The other front is ultimately the more compelling one. It consists of those researchers from inside CS who are currently working on consciousness, affect and social factors and do not see any incompatibility between this research and their vision of CS, which is that of a Science of Mind (see Dennett 1993, O Nuallain (in press) and Mc Kevitt and Partridge 1991, Mc Kevitt and Guo 1994). References Dennett, D. (1993) Review of John Searle's "The Rediscovery of the Mind". The Journal of Philosophy 1993, pp. 193-205. Edelman, G. (1992) Bright Air, Brilliant Fire. Basic Books Mc Kevitt, P. and D.
Partridge (1991) Problem description and hypothesis testing in Artificial Intelligence In ``Artificial Intelligence and Cognitive Science '90'', Springer-Verlag British Computer Society Workshop Series, McTear, Michael and Norman Creaney (Eds.), 26-47, Berlin, Heidelberg: Springer-Verlag. Also, in Proceedings of the Third Irish Conference on Artificial Intelligence and Cognitive Science (AI/CS-90), University of Ulster at Jordanstown, Northern Ireland, EU, September and as Technical Report 224, Department of Computer Science, University of Exeter, GB- EX4 4PT, Exeter, England, EU, September, 1991. Mc Kevitt, P. and Guo, Cheng-ming (1995) From Chinese rooms to Irish rooms: new words on visions for language. Artificial Intelligence Review Vol. 8. Dordrecht, The Netherlands: Kluwer-Academic Publishers. (unabridged version) First published: International Workshop on Directions of Lexical Research, August, 1994, Beijing, China. O Nuallain, S (in press) The Search for Mind: a new foundation for CS. Norwood: Ablex Pylyshyn, Z.(1984) Computation and Cognition. MIT Press Searle, J (1992) The rediscovery of the mind. MIT Press. Von Eckardt, B. (1993) What is Cognitive Science? MIT Press WORKSHOP TOPICS: The tension which riddles current CS can therefore be stated thus: CS, which gained its initial capital by adopting the computational metaphor, is being constrained by this metaphor as it attempts to become an encompassing Science of Mind. Papers are invited for this workshop which: * Address the central tension * Propose an overall framework for CS (as attempted, inter alia, by O Nuallain (in press)) * Explicate the relations between the disciplines which comprise CS. 
* Relate educational experiences in the field * Describe research outside the framework of classical computationalist CS in the context of an alternative framework * Promote a single logico-mathematical formalism as a theory of Mind (as attempted by Harmony theory) * Disagree with the premise of the workshop Other relevant topics include: * Classical vs. neuroscience representations * Consciousness vs. Non-consciousness * Dictated vs. emergent behaviour * ALife/Computational intelligence/Genetic algorithms/Connectionism * Holism and the move towards Zen integration The workshop will focus on three themes: * What is the domain of Cognitive Science? * Classic computationalism and its limitations * Neuroscience and Consciousness WORKSHOP FORMAT: Our intention is to have as much discussion as possible during the workshop and to stress panel sessions and discussion rather than having formal paper presentations. The workshop will consist of half-hour presentations, with 15 minutes for discussion at the end of each presentation and other discussion sessions. A plenary session at the end will attempt to resolve the themes emerging from the different sessions. ATTENDANCE: We hope to have an attendance of between 25 and 50 people at the workshop. Given the urgency of the topic, we expect it to be of interest not only to scientists in the AI/Cognitive Science (CS) area, but also to those in the other sciences of mind who are curious about CS. We envisage researchers from Edinburgh, Leeds, York, Sheffield and Sussex attending from within Britain, and many overseas visitors, as the Conference Programme is looking very international. SUBMISSION REQUIREMENTS: Papers of not more than 8 pages should be submitted by electronic mail (preferably uuencoded compressed postscript) to Sean O Nuallain at the E-mail address(es) given below. If you cannot submit your paper by E-mail, please submit three copies by snail mail.
*******Submission Deadline: February 13th 1995 *******Notification Date: February 25th 1995 *******Camera ready Copy: March 10th 1995 PUBLICATION: Workshop notes/preprints will be published. If there is sufficient interest we will publish a book on the workshop possibly with the American Artificial Intelligence Association (AAAI) Press. WORKSHOP CHAIR: Sean O Nuallain ((Before Dec 23:)) Knowledge Systems Lab, Institute for Information Technology, National Research Council, Montreal Road, Ottawa Canada K1A OR6 Phone: 1-613-990-0113 E-mail: sean at ai.iit.nrc.ca FaX: 1-613-95271521 ((After Dec 23:)) Dublin City University, IRL- Dublin 9, Dublin Ireland, EU WWW: http://www.compapp.dcu.ie Ftp: ftp.vax1.dcu.ie E-mail: onuallains at dcu.ie FaX: 353-1-7045442 Phone: 353-1-7045237 AISB-95 WORKSHOPS AND TUTORIALS CHAIR: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. E-mail: robertg at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114 278-0972 Phone: +44 (0) 114 282-5572 AISB-95 CONFERENCE/LOCAL ORGANISATION CHAIR: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 
282-5590 (Secretary) AISB-95 REGISTRATION: Alison White AISB Executive Office Cognitive and Computing Sciences (COGS) University of Sussex Falmer, Brighton England, UK, BN1 9QH Email: alisonw at cogs.susx.ac.uk WWW: http://www.cogs.susx.ac.uk/aisb Ftp: ftp.cogs.susx.ac.uk/pub/aisb Tel: +44 (0) 1273 678448 Fax: +44 (0) 1273 671320 AISB-95 ENQUIRIES: Gill Wells, Administrative Assistant, AISB-95, Department of Computer Science, Regent Court, 211 Portobello Street, University of Sheffield, GB- S1 4DP, Sheffield, UK, EU. Email: g.wells at dcs.shef.ac.uk Fax: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5590 Email: aisb95 at dcs.shef.ac.uk (for auto responses) WWW: http://www.dcs.shef.ac.uk/aisb95 [Sheffield Computer Science] Ftp: ftp.dcs.shef.ac.uk (cd aisb95) WWW: http://www.shef.ac.uk/ [Sheffield Computing Services] Ftp: ftp.shef.ac.uk (cd aisb95) WWW: http://ijcai.org/ [IJCAI-95, MONTREAL] WWW: http://www.cogs.susx.ac.uk/aisb [AISB SOCIETY SUSSEX] Ftp: ftp.cogs.susx.ac.uk/pub/aisb VENUE: The venue for registration and all conference events is: Halifax Hall of Residence, Endcliffe Vale Road, GB- S10 5DF, Sheffield, UK, EU. FaX: +44 (0) 114-266-3898 Tel: +44 (0) 114-266-3506 (24 hour porter) Tel: +44 (0) 114-266-4196 (manager) SHEFFIELD: Sheffield is one of the friendliest cities in Britain and is well situated, having the best and closest surrounding countryside of any major city in the UK. The Peak District National Park is only minutes away. It is a good city for walkers, runners, and climbers. It has two theatres, the Crucible and Lyceum. The Lyceum, a beautiful Victorian theatre, has recently been renovated. Also, the city has three 10-screen cinemas. There is a library theatre which shows more artistic films. The city has a large number of museums, many of which demonstrate Sheffield's industrial past, and there are a number of galleries in the city, including the Mappin Gallery and the Ruskin Gallery.
A number of important ancient houses are close to Sheffield, such as Chatsworth House. The Peak District National Park is a beautiful area for visiting and rambling. There are large shopping areas in the City, and by 1995 Sheffield will be served by a 'supertram' system: the line to the Meadowhall shopping and leisure complex is already open. The University of Sheffield's Halls of Residence are situated on the western side of the city in a leafy residential area described by John Betjeman as ``the prettiest suburb in England''. Halifax Hall is centred on a local Steel Baron's house, dating back to 1830 and set in extensive grounds. It was acquired by the University in 1830 and converted into a Hall of Residence for women with the addition of a new wing. ARTIFICIAL INTELLIGENCE AT SHEFFIELD: Sheffield Computer Science Department has a strong programme in Cognitive Systems and has a large research group (AINN) studying Artificial Intelligence and Neural Networks. It is strongly connected to the University's Institute for Language, Speech and Hearing (ILASH). ILASH has its own machines and support staff, and academic staff attached to it from nine departments. Sheffield Psychology Department has the Artificial Intelligence Vision Research Unit (AIVRU), which was founded in 1984 to coordinate a large industry/university Alvey research consortium working on the development of computer vision systems for autonomous vehicles and robot workstations. Sheffield Philosophy Department has the Hang Seng Centre for Cognitive Studies, founded in 1992, which runs a workshop/conference series on a two-year cycle on topics of interdisciplinary interest. (1992-4: 'Theory of mind'; 1994-6: 'Language and thought'.) The Department of Automatic Control and Systems Engineering is conducting research into Neural Networks for Medical and other applications.
AI and Cognitive Science researchers at Sheffield include Guy Brown, Peter Carruthers, Malcolm Crawford, Joe Downs, Phil Green, John Frisby, Robert Gaizauskas, Rob Harrison, Mark Hepple, Zhe Ma, John Mayhew, Jim McGregor, Paul Mc Kevitt, Bob Minors, Rod Nicolson, Tony Prescott, Peter Scott, Steve Renals, Noel Sharkey, and Yorick Wilks. From rafal at mech.gla.ac.uk Sat Feb 4 07:24:56 1995 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Sat, 4 Feb 1995 12:24:56 GMT Subject: Workshop on Neurocontrol Message-ID: <2217.199502041224@gryphon.mech.gla.ac.uk> CALL FOR PAPERS Neural Adaptive Control Technology Workshop: NACT I 18--19 May, 1995 University of Glasgow Scotland, UK NACT Project ^^^^^^^^^^^^ The first of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on May 18--19 1995 in Glasgow, Scotland. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland. The project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed will be evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The project leader is Dr Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow). 
NACT I Workshop ^^^^^^^^^^^^^^^ The aim of the workshop is to bring together selected invited specialists in the fields of adaptive control, non-linear systems and neural networks. A number of contributed papers will also be included. As well as paper presentations, significant time will be allocated to round-table and discussion sessions. In order to create a fertile atmosphere for a significant information interchange, we aim to attract active specialists in the relevant fields. Proceedings of the meeting will be published in an edited book format. A social programme will be prepared for the weekend immediately following the meeting, where participants will be able to sample the various cultural and recreational offerings of Central Scotland (a visit to a whisky distillery is included) and the easily reached Highlands. Contributed papers ^^^^^^^^^^^^^^^^^^ The Program Committee is soliciting contributed papers in the area of neurocontrol for presentation at the conference and publication in the Proceedings. Submissions should take the form of an extended abstract of six pages in length, and the DEADLINE is 1 March 1995. Accepted extended abstracts will be circulated to participants in a Workshop digest. Following the Workshop, selected authors will be asked to prepare a full paper for publication in the proceedings. This will take the form of an edited book produced by an international publisher. LaTeX style files will be available for document preparation. Each submitted paper must be headed with a title; the names, affiliations and complete mailing addresses (including e-mail) of all authors; a list of three keywords; and the statement "NACT I". The first named author of each paper will be used for all correspondence unless otherwise requested. Final selection of papers will be announced in mid-March 1995.
Address for submissions ^^^^^^^^^^^^^^^^^^^^^^^ Dr Rafal Zbikowski Department of Mechanical Engineering James Watt Building University of Glasgow Glasgow G12 8QQ Scotland, UK rafal at mech.gla.ac.uk Schedule summary ^^^^^^^^^^^^^^^^ 1 March 1995 Deadline for submission of contributed papers Mid-March 1995 Notification regarding acceptance of papers 18-19 May 1995 Workshop From dsilver at csd.uwo.ca Sun Feb 5 09:42:31 1995 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Sun, 5 Feb 95 9:42:31 EST Subject: Number of linear dichotomies of a binary d-dim space Message-ID: <9502051442.AA01092@church.ai.csd.uwo.ca.csd.uwo.ca> It is well known that a d-dimensional 2-class hypothesis space containing n patterns in "general position" will have L(n,d) linear dichotomies [see 1,2,3], where L(n,d) is computed using the recursive equation: L(n,d) = L(n-1,d) + L(n-1,d-1) with the boundary conditions: L(1,d) = 2 and L(n,1) = 2n. This can also be stated in closed form, with C(n-1,i) the binomial coefficient: L(n,d) = 2 * SUM(i=0 to d) C(n-1,i) for n > d; L(n,d) = 2^n for n <= d. Example: d = 2, n = 4, then: L(4,2) = L(3,2) + L(3,1) = L(2,2) + L(2,1) + 2(3) = L(1,2) + L(1,1) + 4 + 6 = 2 + 2 + 4 + 6 = 14 Furthermore, there will be L(n,d)/2 (e.g. 14/2 = 7) hyperplanes involved in partitioning the classes for the (e.g. 14) dichotomies. However, if the patterns are based on binary components, then the pattern points are the vertices of a hypercube, in which case the general position condition does not hold and the L(n,d) value provides only an upper bound on the number of linear dichotomies. Question: Is there an expression/procedure for computing the exact number of linear dichotomies of a binary d-dimensional hypothesis space? Any information would be most helpful. I will collect and summarize received replies on this matter. .. Many thanks, Danny. Ref: (1) "The Mathematical Foundations of Learning Machines" by Nils J. Nilsson, Morgan Kaufmann, San Mateo, CA; 1990 (originally published in 1965 as "Learning Machines"). -- pp. 32-34 (2) "Neural Networks in Computer Intelligence" by LiMin Fu, McGraw-Hill, Inc., NY, 1994. -- pp. 71-73 (3) A set of n points is said to be in "general position" in a d-dimensional space iff no subset of d+1 points lies on any (d-1)-dimensional hyperplane. In the case of a binary hypercube, this implies that no subset of d+1 pattern points may lie on any one (d-1)-dimensional hypercube face. -- ========================================================================= = Daniel L. Silver University of Western Ontario, London, Canada = = N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b = = dsilver at csd.uwo.ca H: (519)473-6168 O: (519)679-2111 (ext.6903) = ========================================================================= From S.W.Ellacott at bton.ac.uk Mon Feb 6 05:12:09 1995 From: S.W.Ellacott at bton.ac.uk (ellacott) Date: Mon, 6 Feb 95 10:12:09 GMT Subject: No subject Message-ID: <9502061012.AA02413@diamond.bton.ac.uk> ************************* MAIL FROM STEVE ELLACOTT ************************** *******REMINDER : ABSTRACTS DUE 17TH FEBRUARY*********** 2nd Announcement and CALL FOR PAPERS MATHEMATICS of NEURAL NETWORKS and APPLICATIONS (MANNA 1995) International Conference at Lady Margaret Hall, Oxford, July 3-7, 1995 run by the University of Huddersfield in association with the University of Brighton We are delighted to announce the first conference on the Mathematics of Neural Networks and Applications (MANNA), in which we aim to provide both top-class research and a friendly, motivating atmosphere. The venue, Lady Margaret Hall, is an Oxford college set in an attractive and quiet location adjacent to the University Parks and River Cherwell.
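Danny Silver's recursion and closed form above are easy to check numerically, and for small d the exact hypercube count he asks about can be brute-forced by testing every labelling of the 2^d binary vertices for strict linear separability. The sketch below is mine, not from the post: it uses an integer perceptron as the separability test (exact whenever it converges; the epoch budget is generous for these tiny point sets), and the function names are hypothetical.

```python
from itertools import product

def L(n, d):
    """Cover's count of linear dichotomies of n points in general
    position in d dimensions (the recursion quoted in the post)."""
    if n == 1:
        return 2
    if d == 1:
        return 2 * n
    return L(n - 1, d) + L(n - 1, d - 1)

def separable(points, labels, max_epochs=2000):
    """Integer perceptron test for strict linear separability.

    Exact when it converges; for the small point sets used here a
    separable labelling converges well within the budget, so budget
    exhaustion is read as 'not separable'."""
    aug = [tuple(p) + (1,) for p in points]      # append a bias input
    w = [0] * len(aug[0])
    for _ in range(max_epochs):
        clean = True
        for x, y in zip(aug, labels):
            if y * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + y * xi for wi, xi in zip(w, x)]
                clean = False
        if clean:        # full error-free pass: separating plane found
            return True
    return False

def hypercube_dichotomies(d):
    """Brute-force count of linear dichotomies of the 2^d binary vertices."""
    pts = list(product((0, 1), repeat=d))
    return sum(separable(pts, signs)
               for signs in product((-1, 1), repeat=len(pts)))

print(L(4, 2))                   # 14, matching the worked example
print(hypercube_dichotomies(2))  # 14: the square's vertices are still in general position
print(L(8, 3))                   # 128, the upper bound for the 3-cube
print(hypercube_dichotomies(3))  # 104: general position fails, so the bound is loose
```

For d = 3 the gap (104 vs. 128) shows exactly the effect Silver describes: four cube vertices can lie on one plane, so L(n,d) is only an upper bound. The brute force scales as 2^(2^d) labellings, so it is a sanity check for small d, not an answer to his question about a closed-form expression.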
Applications of neural networks (NNs) have often been carried out with a limited understanding of the underlying mathematics, but it is now essential that fuller account should be taken of the many topics that contribute to NNs: approximation theory, control theory, genetic algorithms, dynamical systems, numerical analysis, optimisation, statistical decision theory, statistical mechanics, computability and information theory, etc. We aim to consider the links between these topics and the insights they offer, and identify mathematical tools and techniques for analysing and developing NN theories, algorithms and applications. Working sessions and panel discussions are planned. Keynote speakers who have provisionally accepted invitations include: N M Allinson (York University, UK) S Grossberg (Boston, USA) S-i Amari (Tokyo) M Hirsch (Berkeley, USA) N Biggs (LSE, London) T Poggio (MIT, USA) G Cybenko (Dartmouth, USA) H Ritter (Bielefeld, Germany) J G Taylor (King's College, London) P C Parks (Oxford) It is anticipated that about 40 contributed papers and posters will be presented. The proceedings will be published, probably as a volume of an international journal, and contributed papers will be considered for inclusion. The deadline for submission of abstracts is 17 February 1995. Accommodation will be available at Lady Margaret Hall (LMH), where many rooms have en-suite facilities - early bookings are recommended. The conference will start with Monday lunch and end with Friday lunch, and there will be a full-board charge (including conference dinner) of about £235 for this period, as well as a modest conference fee (to be fixed later). We hope to be able to offer a reduction in fees to those who give submitted papers and to students. There will be a supporting social programme, including a reception, outing(s) and the conference dinner, and family accommodation may be arranged in local guest houses. Please indicate your interest by returning the form below.
A booking form will be sent to you. Thanking you in anticipation. Committee: S W Ellacott (Brighton) and J C Mason (Huddersfield) Co-organisers; I Aleksander, N M Allinson, N Biggs, C M Bishop, D Lowe, P C Parks, J G Taylor, K Warwick ______________________________________________________________________________ To: Ros Hawkins, School of Computing and Mathematics, University of Huddersfield, Queensgate, Huddersfield, West Yorkshire, HD1 3DH, England. (Email: j.c.mason at hud.ac.uk) Please send further information on MANNA, July 3 - 7, 1995 Name .......................Address .......................................... ............................................................................. ............................................................................. Telephone ............................. Fax .................................. E Mail ................................ I intend/do not intend to submit a paper Area of proposed contribution ................................................ ***************************************************************************** From lemm at LORENTZ.UNI-MUENSTER.DE Mon Feb 6 12:14:33 1995 From: lemm at LORENTZ.UNI-MUENSTER.DE (Joerg_Lemm) Date: Mon, 6 Feb 1995 18:14:33 +0100 Subject: paper available:Lorentzian Neural Nets Message-ID: <9502061714.AA24463@xtp141.uni-muenster.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/giraud.lorentz.ps.Z The file giraud.lorentz.ps.Z is now available for copying from the Neuroprose repository: (27 pages, compressed file size 215K) TITLE: LORENTZIAN NEURAL NETS AUTHORS: B.G.Giraud Service Physique Theorique, DSM, C.E.Saclay, 91191 Gif/Yvette, France, A.Lapedes and L.C.Liu Theoretical Division, Los Alamos National Laboratory, 87545 Los Alamos, NM, USA, J.C.Lemm Institut fuer Theoretische Physik I, Muenster University, 48149 Muenster, Germany ABSTRACT: We consider neural units whose response functions are Lorentzians rather than the usual sigmoids or steps. 
This consideration is justified by the fact that neurons can be paired and that a suitable difference of the sigmoids of the paired neurons can create a window response function. Lorentzians are special cases of such windows and we take advantage of their simplicity to generate polynomial equations for several problems such as: i) fixed points of a completely connected net, ii) classification of operational modes, iii) training of a feedforward net, iv) process signals represented by complex numbers. Keywords: Neural Networks, Window-like response functions, Lorentzians and rational fractions, Integer coefficients, Training by solving polynomial systems, Processing of complex numbers, Fixed number of solutions, Classification of operational modes. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/giraud.lorentz.ps.Z From P.McKevitt at dcs.shef.ac.uk Sat Feb 4 10:12:11 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Sat, 4 Feb 95 15:12:11 GMT Subject: No subject Message-ID: <9502041512.AA24541@dcs.shef.ac.uk> ============================================================================== 11 0000000 TH 11 0 0 11 0 0 ANNIVERSARY AISB CONFERENCE 11 0 0 11 0000000 AISB-95 The Tenth Biennial Conference on AI and Cognitive Science SHEFFIELD, ENGLAND Monday 3rd -- Friday 7th April 1995 THEME Hybrid Problems, Hybrid Solutions PROGRAMME CHAIR John Hallam (University of Edinburgh) WORKSHOPS/TUTORIALS CHAIR Robert Gaizauskas (University of Sheffield) CONFERENCE CHAIR/LOCAL ORGANISATION Paul Mc Kevitt (University of Sheffield) ======================================================(SEE WWW FOR MORE DETAILS) EACL-95 (DUBLIN, IRELAND)====================================================== COME TO EACL-95 AT DUBLIN AND THEN FLY TO AISB-95 AT SHEFFIELD EACL-95 7th Conference of the European Chapter of the Association for Computational Linguistics March 27-31, 1995 University College Dublin Belfield, Dublin, IRELAND FOR ANYONE COMING FROM EACL-95 (DUBLIN) THERE ARE 
FLIGHTS FROM DUBLIN TO **MANCHESTER**, LEEDS/BRADFORD, LIVERPOOL, LONDON AND MIDLANDS ON CARRIERS SUCH AS AER LINGUS, BRITISH MIDLAND, RYANAIR. (SEE ATTACHED INSERT BELOW) =============================================================================== AISB-95 Halifax Hall of Residence & Computer Science Department University of Sheffield Sheffield, ENGLAND HOSTED BY The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) and The Department of Computer Science (University of Sheffield) IN COOPERATION WITH Departments of Automatic Control and Systems Engineering, Information Studies, Philosophy, Psychology Artificial Intelligence Vision Research Unit (AIVRU) Hang Seng Centre for Cognitive Studies Institute for Language, Speech and Hearing (ILASH) (University of Sheffield) Dragon Systems UK Limited (Melvyn Hunt) LPA Limited (Clive Spenser) Sharp Laboratories Europe Limited (Paul Kearney) Wisepress Limited (Penelope G. Head) MAIN CONFERENCE Wednesday 5th - Friday 7th April 1995 WORKSHOPS AND TUTORIALS Monday 3rd - Tuesday 4th April 1995 INVITED SPEAKERS +++ Professor ALEX GAMMERMAN +++ (Department of Computer Science, Royal Holloway and Bedford New College, University of London, England) +++ Professor MALIK GHALLAB +++ (LAAS-CNRS, Toulouse, France) +++ Professor GRAEME HIRST +++ (Department of Computer Science, University of Toronto, Canada) +++ Professor JOHN MAYHEW +++ (AIVRU, University of Sheffield, England) +++ Professor NOEL SHARKEY +++ (Department of Computer Science, University of Sheffield, England) PROGRAMME CHAIR John Hallam (University of Edinburgh) PROGRAMME COMMITTEE Dave Cliff (University of Sussex) Erik Sandewall (University of Linkoeping) Nigel Shadbolt (University of Nottingham) Sam Steel (University of Essex) Yorick Wilks (University of Sheffield) WORKSHOPS/TUTORIALS CHAIR Robert Gaizauskas (University of Sheffield) CONFERENCE CHAIR/LOCAL ORGANISATION Paul Mc Kevitt (University of Sheffield) LOCAL ORGANISATION COMMITTEE
Phil Green (University of Sheffield) Jim McGregor (University of Sheffield) Bob Minors (University of Sheffield) Tony Prescott (University of Sheffield) Tony Simons (University of Sheffield) PUBLICITY Malcolm Crawford (University of Sheffield) Mark Lee (University of Sheffield) Derek Marriott (University of Sheffield) Simon Morgan (Cambridge) ADMINISTRATIVE ASSISTANT Gill Wells (University of Sheffield) AISB OFFICE (UNIVERSITY OF SUSSEX) Tony Cohn (Chairman) Roger Evans (Treasurer) Chris Thornton (Secretary) Alison White (Executive Office) THEME The world's oldest AI society, the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), will hold its Tenth Biennial International Conference at The University of Sheffield. The past few years have seen an increasing tendency for diversification in research into Artificial Intelligence, Cognitive Science and Artificial Life. A number of approaches are being pursued, based variously on symbolic reasoning, connectionist systems and models, behaviour-based systems, and ideas from complex dynamical systems. Each has its own particular insight and philosophical position. This variety of approaches appears in all areas of Artificial Intelligence. There is both symbolic and connectionist natural language processing, and both classical and behaviour-based vision research, for instance. While purists from each approach may claim that all the problems of cognition can in principle be tackled without recourse to other methods, in practice (and maybe in theory, also) combinations of methods from the different approaches (hybrid methods) are more successful than a pure approach for certain kinds of problems. The committee feels that there is an unrealised synergy between the various approaches that an AISB conference may be able to explore. Thus, the focus of the tenth AISB Conference is on such hybrid methods.
The AISB conference is a single-track conference lasting three days, with a two-day tutorial and workshop programme preceding the main technical event, and around twenty high-calibre papers will be presented in the technical sessions. Five invited talks by respected and entertaining world-class researchers complete the programme. The proceedings of the conference will be published in book form at the conference itself, making it a forum for rapid dissemination of research results. The preliminary programme for the conference is attached below. Note that the organisers reserve the right to alter the programme as circumstances dictate, though every effort will be made to adhere to the provisional timings and calendar of events given below. __________________________________________________________________________ FURTHER INFORMATION E-mail: aisb95 at dcs.shef.ac.uk (for auto responses) __________________________________________________________________________ AISB-95 CONFERENCE CHAIR/LOCAL ORGANISATION: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk Fax: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary) AISB-95 WORKSHOPS AND TUTORIALS CHAIR: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. E-mail: robertg at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk Fax: +44 (0) 114 278-0972 Phone: +44 (0) 114 282-5572 AISB-95 PROGRAMME CHAIR: John Hallam Department of Artificial Intelligence University of Edinburgh 5 Forrest Hill Edinburgh EH1 2QL SCOTLAND.
E-mail: john at aifh.edinburgh.ac.uk FAX: + 44 (0) 1 31 650 6899 Phone: + 44 (0) 1 31 650 3097 ADDRESS (for registrations) Alison White AISB Executive Office Cognitive and Computing Sciences (COGS) University of Sussex Falmer, Brighton England, UK, BN1 9QH Email: alisonw at cogs.susx.ac.uk WWW: http://www.cogs.susx.ac.uk/aisb Ftp: ftp.cogs.susx.ac.uk/pub/aisb Tel: +44 (0) 1273 678448 Fax: +44 (0) 1273 671320 ADDRESS (for general enquiries) Gill Wells, Administrative Assistant, AISB-95, Department of Computer Science, Regent Court, 211 Portobello Street, University of Sheffield, GB- S1 4DP, Sheffield, UK, EU. Email: g.wells at dcs.shef.ac.uk Fax: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5590 Email: aisb95 at dcs.shef.ac.uk (for auto responses) WWW: http://www.dcs.shef.ac.uk/aisb95 [Sheffield Computer Science] Ftp: ftp.dcs.shef.ac.uk (cd aisb95) WWW: http://www.shef.ac.uk/ [Sheffield Computing Services] Ftp: ftp.shef.ac.uk (cd aisb95) WWW: http://ijcai.org/ [IJCAI-95, MONTREAL] WWW: http://www.cogs.susx.ac.uk/aisb [AISB SOCIETY SUSSEX] Ftp: ftp.cogs.susx.ac.uk/pub/aisb =============================================================================== From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Mon Feb 6 17:25:46 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Mon, 06 Feb 95 17:25:46 EST Subject: NIPS*95 Call for Papers Message-ID: <16644.792109546@DST.BOLTZ.CS.CMU.EDU> CALL FOR PAPERS Neural Information Processing Systems Natural and Synthetic Monday, Nov. 27 - Saturday, Dec. 2, 1995 Denver, Colorado This is the ninth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions.
There will also be one day of tutorial presentations (Nov. 27) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec. 1-2). Major categories for paper submission, with example subcategories, are as follows: Neuroscience: systems physiology, signal and noise analysis, oscillations, synchronization, mechanisms of inhibition and neuromodulation, synaptic plasticity, computational models Theory: computational learning theory, complexity theory, dynamical systems, statistical mechanics, probability and statistics, approximation and estimation theory Implementation: analog and digital VLSI, novel neuro-devices, neurocomputing systems, optical, simulation tools, parallelism Algorithms and Architectures: learning algorithms, decision trees, constructive/pruning algorithms, localized basis functions, recurrent networks, genetic algorithms, combinatorial optimization, performance comparisons Visual Processing: image recognition, coding and classification, stereopsis, motion detection and tracking, visual psychophysics Speech, Handwriting and Signal Processing: speech recognition, coding and synthesis, handwriting recognition, adaptive equalization, nonlinear noise removal, auditory scene analysis Applications: time-series prediction, medical diagnosis, financial analysis, DNA/protein sequence analysis, music processing, expert systems, database mining Cognitive Science & AI: natural language, human learning and memory, perception and psychophysics, symbolic reasoning Control, Navigation, and Planning: robotic motor control, process control, navigation, path planning, exploration, dynamic programming, reinforcement learning Review Criteria: All submitted papers will be thoroughly refereed on the basis of technical quality, novelty, significance, and clarity. Submissions should contain new results that have not been published previously.
Authors should not be dissuaded from submitting recent work, as there will be an opportunity after the meeting to revise accepted manuscripts before submitting final camera-ready copy. Paper Format: Submitted papers may be up to eight pages in length, including figures and references. The page limit will be strictly enforced, and any submission exceeding eight pages will not be considered. Authors are encouraged (but not required) to use the NIPS style files obtainable by anonymous FTP at the sites given below. Papers must include physical and e-mail addresses of all authors, and MUST indicate one of the nine major categories listed above. Authors may also indicate a subcategory, and their preference, if any, for oral or poster presentation; this preference will play no role in paper acceptance. Unless otherwise indicated, correspondence will be sent to the first author. Submission Instructions: Send six copies of submitted papers to the address below; electronic or FAX submission is not acceptable. Include one additional copy of the abstract only, to be used for preparation of the abstracts booklet distributed at the meeting. Submissions mailed first-class from within the US or Canada, or sent from overseas via Federal Express/Airborne/DHL or similar carrier, must be POSTMARKED by May 20, 1995. All other submissions must ARRIVE by this date. Mail submissions to: Michael Mozer NIPS*95 Program Chair Department of Computer Science University of Colorado Colorado Avenue and Regent Drive Boulder, CO 80309-0430 USA Mail general inquiries/requests for registration material to: NIPS*95 Registration Dept. of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA FAX: (303) 273-3875 e-mail: nips95 at mines.colorado.edu Sites for LaTeX style files: Copies of "nips.tex" and "nips.sty" are available via anonymous ftp at helper.systems.caltech.edu (131.215.68.12) in /pub/nips, b.gp.cs.cmu.edu (128.2.242.8) in /usr/dst/public/nips.
The style files and other conference information may also be retrieved via World Wide Web at http://www.cs.cmu.edu:8001/Web/Groups/NIPS/NIPS.html NIPS*95 Organizing Committee: General Chair, David S. Touretzky, CMU; Program Chair, Michael Mozer, U. Colorado; Publications Chair, Michael Hasselmo, Harvard; Tutorial Chair, Jack Cowan, U. Chicago; Workshops Chair, Michael Perrone, IBM; Publicity Chair, David Cohn, MIT; Local Arrangements, Manavendra Misra, Colorado School of Mines; Treasurer, John Lazzaro, Berkeley. DEADLINE FOR SUBMISSIONS IS MAY 20, 1995 (POSTMARKED) -please post- From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Mon Feb 6 17:26:32 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Mon, 06 Feb 95 17:26:32 EST Subject: call for workshops: NIPS*95 Message-ID: <16650.792109592@DST.BOLTZ.CS.CMU.EDU> CALL FOR PROPOSALS NIPS*95 Post Conference Workshops December 1 and 2, 1995 Vail, Colorado Following the regular program of the Neural Information Processing Systems 1995 conference, workshops on current topics in neural information processing will be held on December 1 and 2, 1995, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: active learning and control, architectural issues, attention, Bayesian analysis, benchmarking neural network applications, computational complexity issues, computational neuroscience, fast training techniques, genetic algorithms, music, neural network dynamics, optimization, recurrent nets, rules and connectionist models, self-organization, sensory biophysics, speech, time series prediction, vision and audition, implementations, and grammars. The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest.
Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics. Representation of alternative viewpoints and panel-style discussions are particularly encouraged. Individuals proposing to chair a workshop will have responsibilities including: 1) arranging short informal presentations by experts working on the topic, 2) moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the "gong show"), and 3) writing a brief summary. Submission Instructions: Interested parties should submit a short proposal for a workshop of interest postmarked by May 20, 1995. (Express mail is not necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, the proposed length of the workshop (one day or two days), and the planned format. It should motivate why the topic is of interest or controversial, why it should be discussed, and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications, and evidence of scholarship in the field of interest. Submissions should include contact name, address, email address, phone number, and fax number if available. Mail proposals to: Michael P. Perrone NIPS*95 Workshops Chair IBM T.J. Watson Research Center P.O.
Box 704 Yorktown Heights, NY 10598 (email: mpp at watson.ibm.com) PROPOSALS MUST BE POSTMARKED BY MAY 20, 1995 -Please Post- From terry at salk.edu Mon Feb 6 23:42:45 1995 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 6 Feb 95 20:42:45 PST Subject: Neural Computation 7:2 Message-ID: <9502070442.AA10237@salk.edu> NEURAL COMPUTATION Volume 7, Number 2, March 1995 Review: Regularization theory and neural networks architectures Federico Girosi, Michael Jones and Tomaso Poggio Notes: A counterexample for temporal differences learning Dimitri P. Bertsekas New perceptron model using random bitstreams Eel-wan Lee and Soo-Ik Chae On the ordering conditions for self-organising maps Marco Budinich and John G. Taylor Letters: A simple competitive account of some response properties of visual neurons in area MSTd Ruye Wang Synchrony in excitatory neural networks D. Hansel, G. Mato and C. Meunier Decorrelated Hebbian learning for clustering and function approximation Gustavo Deco and Dragan Obradovic Identification using feedforward networks Asriel U. Levin and Kumpati S. Narendra An HMM/MLP architecture for sequence recognition Sung-Bae Cho and Jin H. Kim Learning linear threshold approximations using perceptrons Thomas Bylander An algorithm for building a regularized piecewise linear discrimination surface: The perceptron membrane Guillaume Deffuant The upward bias in measures of information derived from limited data samples Alessandro Treves and Stefano Panzeri Representation of similarity in three-dimensional object discrimination Shimon Edelman ----- SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $68 Individual ______ $180 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). 
(Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From bernabe at cnm.us.es Tue Feb 7 04:40:21 1995 From: bernabe at cnm.us.es (Bernabe Linares B.) Date: Tue, 7 Feb 95 10:40:21 +0100 Subject: papers in neuroprose Message-ID: <9502070940.AA12323@cnm1.cnm.us.es> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/bernabe.art1chip.ps.Z The file bernabe.art1chip.ps.Z is now available for copying from the Neuroprose repository. It contains 2 conference papers (the PostScript file prints in 14 pages): Title Paper1: A Real Time Clustering CMOS Neural Engine Title Paper2: Experimental Results of An Analog Current-Mode ART1 Chip Authors: T. Serrano, B. Linares-Barranco, and J. L. Huertas Affiliation: National Microelectronics Center (CNM), Sevilla, SPAIN. Sorry, no hardcopies available. Paper1 will appear in the 7th NIPS Volume, and Paper2 has been accepted for presentation at the 1995 IEEE Int. Symp. on Circuits and Systems, Seattle, Washington (ISCAS'95, April 29-May 3). These and related papers can also be obtained through anonymous ftp from the node "ftp.cnm.us.es" and directory "/pub/bernabe/publications". The following abstract briefly describes the topic of the two papers. ABSTRACT: In these papers we present a real-time neural categorizer chip based on the ART1 algorithm. The circuit implements a modified version of the original ART1 algorithm more suitable for VLSI implementations. It has been designed using analog current-mode circuit design techniques, and consists basically of current mirrors that are switched ON and OFF according to the binary input patterns. The chip is able to cluster input patterns of 100 binary pixels into up to 18 different categories.
Modular expansion of the system is possible by simply interconnecting more chips in a matrix array. This way a system can be built to cluster Nx100-pixel binary images into Mx18 different clusters, using an NxM array of chips. Average pattern classification is performed in less than 1.8us, which means an equivalent computing power of 2.2x10^9 connections per second and connection updates per second. The chip has been fabricated in a single-poly double-metal 1.5um standard digital low-cost CMOS process, has a die area of 1cm^2, and is mounted in a 120-pin PGA package. Although the circuit is analog in nature, it is fully digital-compatible since it interfaces to the outside world through digital signals. From KOKINOV at BGEARN.BITNET Tue Feb 7 14:39:37 1995 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Tue, 07 Feb 95 14:39:37 BG Subject: CogSci95 Summer School in Sofia Message-ID: 2nd International Summer School in Cognitive Science Sofia, July 3-16, 1995 First Announcement and Call for Papers The Summer School features introductory and advanced courses in Cognitive Science, participant symposia, panel discussions, student sessions, and intensive informal discussions. Participants will include university teachers and researchers, graduate and senior undergraduate students. 
International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Ma, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Il, USA) Christopher HABEL (Hamburg University, Germany) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max Planck Inst. of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Courses Computer Models of Emergent Cognition - Robert French (Indiana University, USA) Hemispheric Mechanisms in Cognition - Eran Zaidel (UCLA, USA) Cross-Linguistic Studies of Language Processing - Elizabeth Bates (UCSD, USA) Aphasia Research - Nina Dronkers (UC at Davis, USA) Selected Topics in Cognitive Linguistics - Elena Andonova (NBU, Bulgaria) Spatial Attention - Carlo Umilta' (University of Padova, Italy) Parallel Pathways of Visual Information Processing - Angel Vassilev (NBU) Color Vision - Charles De Weert (University of Nijmegen, The Netherlands) Integration of Language and Vision - Geoff Simmons (Hamburg University, Germany) Emotion and Cognition - Cristiano Castelfranchi (CNR, Italy) Philosophy of Mind - Lilia Gurova (NBU, Bulgaria) Analogical Reasoning: Psychological Data and Computational Models - Boicho Kokinov (NBU, Bulgaria) Participants are not restricted in the number of 
courses they can register for. There will be no parallel running courses. Participant Symposia Participants are invited to submit papers which will be presented (30 min) at the participant symposia. Authors should send full papers (8 single-spaced pages) in triplicate or electronically (postscript, RTF, or plain ASCII) by March 31. Selected papers will be published in the School's Proceedings after the School itself. Only papers presented at the School will be eligible for publication. Panel Discussions Language Processing: Rules or Constraints? Vision and Attention Integrated Cognition Student Session At the student session, proposals for M.Sc. Theses and Ph.D. Theses will be discussed, as well as public defenses of such theses. Graduate students in Cognitive Science are invited to present their work. Local Organizers New Bulgarian University, Bulgarian Academy of Sciences, Bulgarian Cognitive Science Society Timetable Application Form: now Deadline for paper submission: March 31 Notification of acceptance: April 30 Early registration: May 15 Arrival day and on-site registration July 2 Summer School July 3-14 Excursion July 15 Departure day July 16 Paper submission to: Boicho Kokinov Cognitive Science Department New Bulgarian University 21, Montevideo Str. 
Sofia 1635, Bulgaria fax: (+3592) 73-14-95 e-mail: kokinov at bgearn.bitnet Send your Application Form to: e-mail: cogsci95 at adm.nbu.bg ------------------------------------------------------------ International Summer School in Cognitive Science Sofia, July 3-14, 1995 Application Form Last Name: First Name: Status: Professor / Academic Researcher / Applied Researcher / Graduate Student / Undergraduate Student Affiliation: University: Department: Country: Mailing address: e-mail address: fax: I would like to attend the following courses: I intend to submit a paper: (title) From marney at ai.mit.edu Tue Feb 7 13:22:09 1995 From: marney at ai.mit.edu (Marney Smyth) Date: Tue, 7 Feb 95 13:22:09 EST Subject: Call For Papers: NNSP95 Message-ID: <9502071822.AA01794@motor-cortex> ********************************************************************** * 1995 IEEE WORKSHOP ON * * * * NEURAL NETWORKS FOR SIGNAL PROCESSING * * * * August 31 -- September 2, 1995, Cambridge, Massachusetts, USA * * Sponsored by the IEEE Signal Processing Society * * (In cooperation with the IEEE Neural Networks Council) * * * * * ********************************************************************** Invited Speakers include: Vladimir Vapnik, AT&T Bell Labs. Michael I. Jordan, MIT. FIRST ANNOUNCEMENT AND CALL FOR PAPERS Thanks to the sponsorship of IEEE Signal Processing Society, the co-sponsorship of IEEE Neural Network Council, the fifth of a series of IEEE Workshops on Neural Networks for Signal Processing will be held at the Royal Sonesta Hotel, Cambridge, Massachusetts, on Thursday 8/31 -- Saturday 9/2, 1995. Papers are solicited for, but not limited to, the following topics: ++ APPLICATIONS: Image, speech, communications, sensors, medical, adaptive filtering, OCR, and other general signal processing and pattern recognition topics. 
++ THEORIES: Generalization and regularization, system identification, parameter estimation, new network architectures, new learning algorithms, and wavelets in NNs. ++ IMPLEMENTATIONS: Software, digital, analog, hybrid technologies, parallel processing. DETAILS FOR SUBMITTING PAPERS Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and e-mail address if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Marney Smyth (Tel.) (617) 253 0547, (Fax) (617) 253 2964 (e-mail) marney at ai.mit.edu. We plan to use the World Wide Web (WWW) for posting further announcements on NNSP95 such as: submitted papers status, final program, hotel information etc. You can use MOSAIC and access URL site: http://www.cdsp.neu.edu. If you do not have access to WWW use anonymous ftp to site ftp.cdsp.neu.edu and look under the directory /pub/NNSP95. Please send paper submissions to: Prof. Elias S. Manolakos IEEE NNSP'95 409 Dana Research Building Electrical and Computer Engineering Department Northeastern University, Boston, MA 02115, USA Phone: (617) 373-3021, Fax: (617) 373-4189 ************************Please Take Note******************************** A limited amount of financial assistance is available for participants whose papers are accepted for the Workshop. If you wish to apply for financial assistance, please submit a C.V. and brief summary of the reasons why you want to participate in the NNSP95 Workshop, when submitting your paper. Financial assistance awards will not be made until after the acceptance date of April 21, 1995. Please submit hard copy (NOT email) applications for Financial Assistance, along with a copy of your paper, to Dr. Federico Girosi at the address below. 
*********************************************************************** ******************* * IMPORTANT DATES * ******************* Extended summary received by: February 17 Notification of acceptance: April 21 Photo-ready accepted papers received by: May 22 Advanced registration received before: June 2 GENERAL CHAIRS Federico Girosi Center for Biological and Computational Learning and Artificial Intelligence Laboratory MIT, E25-201 Cambridge, MA 02139 Tel: (617)253-0548 Fax: (617)258-6287 email: girosi at ai.mit.edu John Makhoul BBN Systems and Technologies 70 Fawcett Street Cambridge, MA 02138 Tel: (617)873-3332 Fax: (617)873-2534 email: makhoul at bbn.com PROGRAM CHAIR Elias S. Manolakos Communications and Digital Signal Processing (CDSP) Center for Research and Graduate Studies Electrical and Computer Engineering Dept. 409 Dana Research Building Northeastern University Boston, MA 02115 Tel: (617)373-3021 Fax: (617)373-4189 email: elias at cdsp.neu.edu FINANCE CHAIR Judy Franklin GTE Laboratories email: jfranklin at gte.com LOCAL ARRANGEMENTS Mary Pat Fitzgerald, MIT email: marypat at ai.mit.edu PUBLICITY CHAIR Marney Smyth CBCL, MIT email: marney at ai.mit.edu PROCEEDINGS CHAIR Elizabeth J. Wilson Raytheon Co. email: bwilson at sud2.ed.ray.com TECHNICAL PROGRAM COMMITTEE Joshua Alspector John Makhoul Charles Bachmann Alice Chiang Elias Manolakos A. Constantinides P. Mathiopoulos Lee Giles Mahesan Niranjan Federico Girosi Tomaso Poggio Lars Kai Hansen Jose Principe Yu-Hen Hu Wojtek Przytula Jenq-Neng Hwang John Sorensen Bing-Huang Juang Andreas Stafylopatis Shigeru Katagiri John Vlontzos George Kechriotis Raymond Watrous Stephanos Kollias Christian Wellekens Sun-Yuan Kung Ron Williams Gary M. 
Kuhn Barbara Yoon Richard Lippmann Xinhua Zhuang From p.j.b.hancock at psych.stir.ac.uk Wed Feb 8 06:17:41 1995 From: p.j.b.hancock at psych.stir.ac.uk (Peter Hancock) Date: Wed, 8 Feb 95 11:17:41 GMT Subject: PhD thesis available Message-ID: <9502081117.AA0420543094@nevis.stir.ac.uk.nevis.stir.ac.uk> Since a couple of people have asked for it within the last week, I've made my PhD thesis available by FTP, although it's now a couple of years old. It's on forth.stir.ac.uk (139.153.13.6, currently) directory pub/reports/pjbh_thesis, one compressed postscript file per chapter. There is also a README file that explains what is what. No hardcopies available - the whole thing is 155 pages. Coding strategies for genetic algorithms and neural nets Peter Hancock University of Stirling PhD thesis, Dept. of Computing Science, 1992 Abstract The interaction between coding and learning rules in neural nets (NNs), and between coding and genetic operators in genetic algorithms (GAs) is discussed. The underlying principle advocated is that similar things in ``the world'' should have similar codes. Similarity metrics are suggested for the coding of images and numerical quantities in neural nets, and for the coding of neural network structures in genetic algorithms. A principal component analysis of natural images yields receptive fields resembling horizontal and vertical edge and bar detectors. The orientation sensitivity of the ``bar detector'' components is found to match a psychophysical model, suggesting that the brain may make some use of principal components in its visual processing. Experiments are reported on the effects of different input and output codings on the accuracy of neural nets handling numeric data. It is found that simple analogue and interpolation codes are most successful. Experiments on the coding of image data demonstrate the sensitivity of final performance to the internal structure of the net. 
The interaction between the coding of the target problem and the reproduction operators of mutation and recombination in GAs is discussed and illustrated. The possibilities for using GAs to adapt aspects of NNs are considered. The permutation problem, which affects attempts to use GAs both to train net weights and adapt net structures, is illustrated, and methods to reduce it are suggested. Empirical tests using a simulated net design problem to reduce evaluation times indicate that the permutation problem may not be as severe as has been thought, but suggest the utility of a sorting recombination operator that matches hidden units according to the number of connections they have in common. A number of experiments using GAs to design network structures are reported, both to specify a net to be trained from random weights, and to prune a pre-trained net. Three different coding methods are tried, and various sorting recombination operators evaluated. The results indicate that appropriate sorting can be beneficial, but the effects are problem-dependent. It is shown that the GA tends to overfit the net to the particular set of test criteria, to the possible detriment of wider generalisation ability. A method of testing the ability of a GA to make progress in the presence of noise, by adding a penalty flag, is described. From tony at salk.edu Wed Feb 8 22:11:24 1995 From: tony at salk.edu (Tony Bell) Date: Wed, 8 Feb 95 19:11:24 PST Subject: tech report available Message-ID: <9502090311.AA27886@salk.edu> FTP-host: ftp.salk.edu FTP-file: pub/tony/bell.blind.ps.Z The following technical report is ftp-able from the Salk Institute. The file is called bell.blind.ps.Z; it is 0.3 Mbytes compressed, 0.9 Mbytes uncompressed, and 36 pages long (8 figures). It describes work presented at NIPS '94, with various embellishments, and a version of it will appear in Neural Computation in 1995. ------------------------------------------------------------------- Technical Report no. 
INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523 AN INFORMATION-MAXIMISATION APPROACH TO BLIND SEPARATION AND BLIND DECONVOLUTION Anthony J. Bell & Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute 10010 N. Torrey Pines Road La Jolla, California 92037 ABSTRACT We derive a new learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. 
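For readers who want to experiment before fetching the report, the infomax weight update for logistic units, dW proportional to (W^T)^-1 + (1 - 2y)x^T, can be tried on a toy source-separation problem in a few lines of NumPy. The sketch below is illustrative only: the mixing matrix, learning rate, batch size, and Laplacian sources are my own choices, not values taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two statistically independent super-Gaussian (Laplacian) sources.
n = 20000
s = rng.laplace(0.0, 1.0, size=(2, n))

# Hypothetical square mixing matrix: the observed "cocktail party" mixtures.
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])
x = A @ s

# Infomax: maximise information through y = g(W x), g the logistic function.
W = np.eye(2)
lr, batch = 0.005, 100
for _ in range(50):                       # several passes over the data
    for i in range(0, n, batch):
        xb = x[:, i:i + batch]
        y = 1.0 / (1.0 + np.exp(-(W @ xb)))
        # Stochastic version of the rule dW ~ (W^T)^-1 + (1 - 2y) x^T
        W += lr * (np.linalg.inv(W.T) + (1.0 - 2.0 * y) @ xb.T / batch)

# If separation worked, P = W A is close to a scaled permutation matrix:
# each output channel recovers (mostly) one source.
P = W @ A
print(np.round(P, 2))
```

Each row of P should be dominated by a different column, i.e. the sources are recovered up to scaling and permutation, which is all any blind method can promise.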
------------------------------------------------------------------- Can be obtained via ftp as follows: unix> ftp ftp.salk.edu (or 198.202.70.34) (log in as "anonymous", e-mail address as password) ftp> binary ftp> cd pub/tony ftp> get bell.blind.ps.Z ftp> quit unix> uncompress bell.blind.ps.Z unix> lpr bell.blind.ps From mike at stats.gla.ac.uk Thu Feb 9 12:00:55 1995 From: mike at stats.gla.ac.uk (Mike Titterington) Date: Thu, 9 Feb 95 17:00:55 GMT Subject: Meeting on Statistics and Neural Networks Message-ID: <20297.9502091700@milkyway.stats> STATISTICS AND NEURAL NETWORKS OPEN MEETING, 21 APRIL 1995 An open meeting on the above topic will be held in the premises of the Royal Society of Edinburgh under the auspices of the International Centre for Mathematical Sciences. The Meeting will follow on from a Workshop in the area and will take advantage of the presence of several distinguished visitors. It is intended that the presentations will both indicate the cutting edge of current research at this interface and also be accessible to interested parties from the wider statistical, mathematical and neural-networks communities. Invited speakers will include Chris Bishop (Aston), Leo Breiman (Berkeley), Trevor Hastie (Stanford), Michael Jordan (MIT), Laveen Kanal (Maryland) and Brian Ripley (Oxford). In addition, Jim Kay (SASS) will give a perspective of the main points that came out of the preceding Workshop. It is planned to provide coffee and tea. Lunch can be obtained at a variety of nearby establishments. For details contact either J. W. Kay (SASS, Macaulay Land Use Research Institute, Craigiebuckler, Aberdeen, AB9 2QJ; sassk at mluri.sari.ac.uk) or D. M. Titterington (Department of Statistics, University of Glasgow, Glasgow G12 8QQ; mike at stats.gla.ac.uk). For application forms [SEE BELOW FOR AN ELECTRONIC VERSION], contact Louise Williamson (ICMS, 14 India Street, Edinburgh EH3 6EZ; icms at maths.ed.ac.uk). 
In the event that the meeting is over-subscribed, places will be allocated on a first-come-first-served basis. - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - STATISTICS AND NEURAL NETWORKS OPEN MEETING: APRIL 21, 1995 VENUE - WOLFSON LECTURE THEATRE, ROYAL SOCIETY OF EDINBURGH, 22-24 GEORGE STREET, EDINBURGH EH2 2PQ. Under the auspices of the International Centre for Mathematical Sciences 14 India Street, Edinburgh EH3 6EZ PARTICIPATION AND ACCOMMODATION-INFORMATION REQUEST FORM TITLE: FIRST NAME: SECOND NAME: INSTITUTION: ADDRESS: TELEPHONE: FAX: EMAIL ADDRESS: DATE OF ARRIVAL: DATE OF DEPARTURE: DETAILS OF ACCOMPANYING FAMILY MEMBERS (NUMBER, RELATIONS, AND AGES, IF CHILDREN): ACCOMMODATION INFORMATION: I would / would not like information about accommodation (Delete as appropriate) ANY ADDITIONAL COMMENTS: PLEASE RETURN AS SOON AS POSSIBLE WITH REGISTRATION FEE OF 15 POUNDS (10 POUNDS FOR POST-GRADUATE STUDENTS) TO: Louise Williamson ICMS Tel: 0131-220-1777 Fax: 0131-220-1053 Email: icms at maths.ed.ac.uk !! Please make cheques payable to HERIOT-WATT UNIVERSITY - ICMS !! - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - From len at titanic.mpce.mq.edu.au Thu Feb 9 17:29:57 1995 From: len at titanic.mpce.mq.edu.au (Len Hamey) Date: Fri, 10 Feb 1995 09:29:57 +1100 (EST) Subject: Technical Report Available Message-ID: <9502092229.AA29220@titanic.mpce.mq.edu.au> A non-text attachment was scrubbed... 
Name: not available Type: text Size: 2250 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/83da6595/attachment.ksh From fritzke at neuroinformatik.ruhr-uni-bochum.de Thu Feb 9 04:22:13 1995 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Thu, 9 Feb 1995 10:22:13 +0100 (MET) Subject: NIPS*94 preprint available Message-ID: <9502090922.AA16891@urda.neuroinformatik.ruhr-uni-bochum.de> ftp-host: ftp.neuroinformatik.ruhr-uni-bochum.de ftp-filename: /pub/manuscripts/articles/fritzke.nips94.ps.gz (148 kB compressed, 8 pages) *** DO NOT FORWARD TO ANY OTHER LISTS *** The following article (NIPS*94 pre-print) is available by ftp from our ftp-server (Bochum, Germany) and via WWW from my homepage (see below). Since our ftp-connection is sometimes a bit slow I have also transferred the file to the neuroprose archive. I assume it will be available from there (with a .Z extension and 240 kB) within a couple of days. I decided, however, to announce the paper now since several people did ask for it. ----------------------------------------------------------------- "A Growing Neural Gas Network Learns Topologies" Bernd Fritzke Institut fuer Neuroinformatik Ruhr-Universitaet Bochum D-44780 Bochum Germany Abstract An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the ``neural gas'' method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation. Thanks to Jordan Pollack for maintaining neuroprose. Bernd -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. 
+49-234 7007921 Ruhr-Universit"at Bochum * 44780 Bochum * Germany FAX. +49-234 7094210 http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html From BGoodin at UNEX.UCLA.EDU Thu Feb 9 20:29:00 1995 From: BGoodin at UNEX.UCLA.EDU (Goodin, Bill) Date: Thu, 09 Feb 95 17:29:00 PST Subject: UCLA short course on Fuzzy Logic, Chaos, and Neural Networks Message-ID: <2F3AC808@UNEXGW.UCLA.EDU> On May 22-24, 1995, UCLA Extension will present the short course, "Fuzzy Logic, Chaos, and Neural Networks: Principles and Applications", on the UCLA campus in Los Angeles. The instructor is Harold Szu, PhD, Research Physicist, Washington, DC. This course presents the principles and applications of several different but related disciplines--neural networks, fuzzy logic, chaos--in the context of pattern recognition, control of engineering tolerance imprecision, and the prediction of fluctuating time series. Since research into these areas has contributed to the understanding of human intelligence, researchers have dramatically enhanced their understanding of fuzzy neural systems and in fact may have discovered the "Rosetta stone" to decipher and unify these intelligence functions. For example, complex neurodynamic patterns may be understood and modelled by Artificial Neural Networks (ANN) governed by fixed-point attractor dynamics in terms of a Hebbian learning matrix among bifurcated neurons. Each node generates a low dimensional bifurcation cascade towards the chaos but together they form collective ambiguous outputs; e.g., a fuzzy set called the Fuzzy Membership Function (FMF). This feature becomes particularly powerful for real world applications in signal processing, pattern recognition and/or prediction/control. 
The course delineates the difference between the classical sigmoidal squash function of the typical neuron threshold logic and the new N-shaped sigmoidal function having a "piecewise negative logic" that can generate a Feigenbaum cascade of bifurcation outputs of which the overall envelope is postulated to be the triangle FMF. The course also discusses applications of chaos and collective chaos for spatiotemporal information processing that has been embedded through an ANN bifurcation cascade of those collective chaotic outputs generated from piecewise negative logic neurons. These chaotic outputs learn the FMF triangle-shape with a different degree of fuzziness as defined by the scaling function of the multiresolution analysis (MRA) used often in wavelet transforms. Another advantage of this methodology is information processing in a synthetic nonlinear dynamical environment. For example, nonlinear ocean waves can be efficiently analyzed by nonlinear soliton dynamics, rather than traditional Fourier series. Implementation techniques in chaos ANN chips are given. The course covers essential ANN learning theory and the elementary mathematics of chaos such as the bifurcation cascade route to chaos and the rudimentary Fuzzy Logic (FL) for those interdisciplinary participants with only basic knowledge of the subject areas. Various applications in chaos, fuzzy logic, and neural net learning are illustrated in terms of spatiotemporal information processing, such as: --Signal/image de-noise --Control device/machine chaos --Communication coding --Chaotic heart and biomedical applications. 
For additional information and a complete course description, please contact Marcus Hennessy at: (310) 825-1047 (310) 206-2815 fax mhenness at unex.ucla.edu From wermter at nats5.informatik.uni-hamburg.de Thu Feb 9 13:20:55 1995 From: wermter at nats5.informatik.uni-hamburg.de (Stefan Wermter) Date: Thu, 9 Feb 1995 19:20:55 +0100 Subject: Learning for natural language: final updated call Message-ID: <199502091820.TAA24270@nats2.informatik.uni-hamburg.de> CALL FOR PAPERS AND PARTICIPATION IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing International Joint Conference on Artificial Intelligence (IJCAI-95) Palais de Congres, Montreal, Canada August 21, 1995 ORGANIZING COMMITTEE -------------------- Stefan Wermter Gabriele Scheler Ellen Riloff University of Hamburg Technical University Munich University of Utah INVITED SPEAKERS ---------------- Eugene Charniak, Brown University, USA Noel Sharkey, Sheffield University, UK PROGRAM COMMITTEE ----------------- Jaime Carbonell, Carnegie Mellon University, USA Joachim Diederich, Queensland University of Technology, Australia Georg Dorffner, University of Vienna, Austria Jerry Feldman, ICSI, Berkeley, USA Walther von Hahn, University of Hamburg, Germany Aravind Joshi, University of Pennsylvania, USA Ellen Riloff, University of Utah, USA Gabriele Scheler, Technical University Munich, Germany Stefan Wermter, University of Hamburg, Germany WORKSHOP DESCRIPTION -------------------- In the last few years, there has been a great deal of interest and activity in developing new approaches to learning for natural language processing. Various learning methods have been used, including - connectionist methods/neural networks - machine learning algorithms - hybrid symbolic and subsymbolic methods - statistical techniques - corpus-based approaches. In general, learning methods are designed to support automated knowledge acquisition, fault tolerance, plausible induction, and rule inferences. 
Using learning methods for natural language processing is especially important because language learning is an enabling technology for many other language processing problems, including noisy speech/language integration, machine translation, and information retrieval. Different methods support language learning to various degrees but, in general, learning is important for building more flexible, scalable, adaptable, and portable natural language systems. This workshop is of particular interest at this time because systems built by learning methods have reached a level where they can be applied to real-world problems in natural language processing and where they can be compared with more traditional encoding methods. The workshop will bring together researchers from the US/Canada, Europe, Japan, Australia and other countries working on new approaches to language learning. The workshop will provide a forum for discussing various learning approaches for supporting natural language processing. In particular, the workshop will focus on questions like: - How can we apply suitable existing learning methods for language processing? - What new learning methods are needed for language processing and why? - What language knowledge should be learned and why? - What are similarities and differences between different approaches for language learning? (e.g., machine learning algorithms vs neural networks) - What are strengths and limitations of learning rather than manual encoding? - How can learning and encoding be combined in symbolic/connectionist systems? - Which aspects of system architectures and knowledge engineering have to be considered? (e.g., modular, integrated, hybrid systems) - What are successful applications of learning methods in various fields? (speech/language integration, machine translation, information retrieval) - How can we evaluate learning methods using real-world language? (text, speech, dialogs, etc.) 
WORKSHOP FORMAT --------------- The workshop will provide a forum for the interactive exchange of ideas and knowledge. Approximately 30-40 participants are expected and there will be time for up to 15 presentations depending on the number and quality of paper contributions received. Normal presentation length will be 15+5 minutes, leaving time for direct questions after each talk. There may be a few invited talks of 25+5 minutes. In addition to prepared talks, there will be time for moderated discussions after two related sessions. Furthermore, the moderated discussions will provide an opportunity for an open exchange of comments, questions, reactions, and opinions. PUBLICATION ----------- Workshop proceedings will be published by AAAI. If there is sufficient interest among the workshop participants, the results of the workshop may also be published as a book. REGISTRATION ------------ This workshop will take place directly before the general IJCAI conference. It is IJCAI policy that workshop participation is not possible without registration for the general conference. SUBMISSIONS ----------- All submissions will be refereed by the program committee and other experts in the field. Please submit 4 hardcopies AND a postscript file. The paper format is the IJCAI95 format: 12pt article style latex, no more than 43 lines, 15 pages maximum, including title, address and email address, abstract, figures, references. Papers should fit on 8 1/2" x 11" paper. Notifications will be sent by email to the first author. Postscript files can be uploaded with anonymous ftp: ftp nats4.informatik.uni-hamburg.de (134.100.10.104) login: anonymous password: cd incoming/ijcai95-workshop binary put quit Hardcopies AND postscript files must arrive no later than 24th February 1995 at the address below. 
##############Submission Deadline: 24th February 1995 ##############Notification Date: 24th March 1995 ##############Camera ready Copy: 13th April 1995 Please send correspondence and submissions to: ################################################ Dr. Stefan Wermter Department of Computer Science University of Hamburg Vogt-Koelln-Strasse 30 D-22527 Hamburg Germany phone: +49 40 54715-531 fax: +49 40 54715-515 e-mail: wermter at informatik.uni-hamburg.de ################################################ From njm at cupido.inesc.pt Fri Feb 10 08:46:45 1995 From: njm at cupido.inesc.pt (njm@cupido.inesc.pt) Date: Fri, 10 Feb 95 13:46:45 +0000 Subject: EPIA'95 - Neural Nets & Genetic Algorithms Worksop CFP Message-ID: <9502101346.AA02701@cupido.inesc.pt> ________________________________________________________ -------------------------------------------------------- EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION NEURAL NETWORKS AND GENETIC ALGORITHMS -------------------------------------------------------- A subsection of the: FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP ________________________________________________________ -------------------------------------------------------- Seventh Portuguese Conference on Artificial Intelligence Funchal, Madeira Island, Portugal October 3-6, 1995 (Under the auspices of the Portuguese Association for AI) INTRODUCTION ~~~~~~~~~~~~ The workshop on Fuzzy Logic and Neural Networks in Engineering, running during the Seventh Portuguese Conference on Artificial Intelligence (EPIA'95), includes a subsection on Neural Networks and Genetic Algorithms. This subsection of the workshop will be devoted to models of simulating human reasoning and behaviour based on GA and NN combinations. 
Recently, in disciplines such as AI, Engineering, Robotics and Artificial Life, there has been a rise of interest in hybrid methodologies such as neural networks and genetic algorithms, which enable the modelling of more realistic, flexible and adaptive behaviour and learning. So far such hybrid models have proved very promising in investigating and characterizing the nature of complex reasoning and control behaviour. Participants are expected to base their contributions on current research, and the workshop emphasis will be on wide-ranging discussions of the feasibility and application of such hybrid models. This part of the workshop is intended to promote the exchange of ideas and approaches in these areas and for these methods, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations. COORDINATION OF THIS SUBSECTION ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Mukesh Patel Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH) P.O.Box 1385, GR 711 10 Heraklion, Crete, Greece Voice: +30 (81) 39 16 35 Fax: +30 (81) 39 16 01/09 Email: mukesh at ics.forth.gr The submission requirements, attendance information, and deadlines are the same as for the workshop, whose Call for Papers is enclosed. Further inquiries may be addressed either to the subsection coordinator or to the Workshop address. 
=============================================================================
=============================================================================
=============================================================================

--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)

INTRODUCTION
~~~~~~~~~~~~
The Seventh Portuguese Conference on Artificial Intelligence (EPIA'95) will be held at Funchal, Madeira Island, Portugal, on October 3-6, 1995. As in previous editions ('89, '91, and '93), EPIA'95 will be run as an international conference, with English as the official language. The scientific program includes tutorials, invited lectures, demonstrations, and paper presentations. The Conference will include three parallel workshops: Expert Systems; Fuzzy Logic and Neural Networks; and Applications of A.I. to Robotics and Vision Systems. These workshops will run simultaneously (see below) and consist of invited talks, panels, paper presentations and poster sessions. The Fuzzy Logic and Neural Networks in Engineering workshop may last 1, 2 or 3 days, depending on the quantity and quality of submissions.

FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The search for systems simulating human reasoning under uncertainty has created a strong research community. In particular, Fuzzy Logic and Neural Networks have been a source of synergies among researchers of both areas, aiming at developing theoretical approaches and applications towards the characterization and experimentation of such kinds of reasoning.
The workshop is intended to promote the exchange of ideas and approaches in those areas, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations. The organization committee invites you to participate, submitting papers together with videos, demonstrations or running systems to illustrate relevant issues and applications.

EXHIBITIONS
~~~~~~~~~~~
In order to illustrate and support the theoretical presentations, the organization will provide adequate conditions (space and facilities) for exhibitions related to the three workshops mentioned. These exhibitions can include running software systems (several platforms are available), video presentations (PAL-G VHS system), robotics systems (such as robotic insects and autonomous robots), and posters. On the one hand, this space will allow the presentation of results and real-world applications of the research developed by our community; on the other, it will serve as a source of motivation to students and young researchers.

SUBMISSION REQUIREMENTS
~~~~~~~~~~~~~~~~~~~~~~~
Authors are asked to submit five (5) copies of their papers to the submissions address by May 2, 1995. Notification of acceptance or rejection will be mailed to the first (or designated) author on June 5, 1995, and camera-ready copies for inclusion in the workshop proceedings will be due on July 3, 1995. Each copy of a submitted paper should include a separate title page giving the names, addresses, phone numbers and email addresses (where available) of all authors, and a list of keywords identifying the subject area of the paper. Papers should be a maximum of 16 pages, printed on A4 paper in 12 point type with a maximum of 38 lines per page and 75 characters per line (corresponding to the LaTeX article style, 12 pt). Double-sided submissions are preferred. Electronic or faxed submissions will not be accepted. Further inquiries should be addressed to the inquiries address.
ATTENDANCE
~~~~~~~~~~
Each workshop will be limited to at most fifty people. In addition to presenters of papers and posters, there will be space for a limited number of other participants, chosen on the basis of a one- to two-page research summary which should include a list of relevant publications, along with an electronic mail address if possible. A set of working notes will be available prior to the commencement of the workshops. Registration information will be available in June 1995. Please write to the inquiries address for registration information.

DEADLINES
~~~~~~~~~
Papers submission: ................. May 2, 1995
Notification of acceptance: ........ June 5, 1995
Camera Ready Copies Due: ........... July 3, 1995

PROGRAM-CHAIR
~~~~~~~~~~~~~
Jose Tome (IST, Portugal)

ORGANIZING-CHAIR
~~~~~~~~~~~~~~~~
Luis Custodio (IST, Portugal)

SUBMISSION AND INQUIRIES ADDRESS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
EPIA'95
Fuzzy Logic & Neural Networks Workshop
INESC, Apartado 13069
1000 Lisboa
Portugal
Voice: +351 (1) 310-0325
Fax: +351 (1) 525843
Email: epia95-FLNNWorkshop at inesc.pt

PLANNING TO ATTEND
~~~~~~~~~~~~~~~~~~
People planning to submit a paper and/or to attend the workshop are asked to complete the following form and return it (by fax or email) to the inquiries address, stating their intention. This will help the workshop organizers estimate the facilities needed and will enable all interested people to receive updated information.

+----------------------------------------------------------------+
| REGISTRATION OF INTEREST |
| (Fuzzy Logic & Neural Networks Workshop) |
| |
| Title . . . . . Name . . . . . . . . . . . . . . . . . . . . |
| Institution . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Country . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . .
| |
| Email address. . . . . . . . . . . . . . . . . . . . . . . . . |
| I intend to submit a paper (yes/no). . . . . . . . . . . . . . |
| I intend to participate only (yes/no). . . . . . . . . . . . . |
| I will travel with ... guests |
+----------------------------------------------------------------+

From uli at ira.uka.de Fri Feb 10 11:10:18 1995
From: uli at ira.uka.de (Uli Bodenhausen)
Date: Fri, 10 Feb 1995 17:10:18 +0100
Subject: doctoral thesis on automatic structuring available by ftp
Message-ID: <"irafs2.ira.270:10.02.95.16.10.29"@ira.uka.de>

The following doctoral thesis is available by ftp. Sorry, no hardcopies available.

ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/bodenhausen.thesis.ps.Z

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/Thesis/bodenhausen.thesis.Z

-----------------------------------------------------------------------------
Automatic Structuring of Neural Networks for Spatio-Temporal Real-World Applications
(153 pages)

Ulrich Bodenhausen
Doctoral Thesis
University of Karlsruhe, Germany

Abstract

The successful application of speech recognition (SR) and on-line handwriting recognition (OLHR) systems to new domains depends greatly on the tuning of a recognizer's architecture to the new task. Architectural tuning is especially important if the amount of training data is small, because the amount of training data limits the number of trainable parameters that can be estimated properly by an automatic learning algorithm. The number of trainable parameters of a connectionist SR or OLHR system depends on architectural parameters such as the width of input windows over time, the number of hidden units and the number of state units. Each of these architectural parameters provides different functionality in the system and cannot be optimized independently. Manual optimization of these architectural parameters is time-consuming and expensive.
Automatic optimization algorithms can free the developer of SR and OLHR applications from this task. In this thesis I develop and evaluate novel methods that automatically allocate connectionist resources for spatio-temporal classification problems. The methods are evaluated under the following criteria:

- Suitability for small systems (~ 1,000 parameters) as well as for large systems (more than 10,000 parameters): Is the proposed method efficient for various sizes of the system?
- Ease of use for non-expert users: How much knowledge is necessary to adapt the system to a customized application?
- Final performance: Can the automatically optimized system compete with state-of-the-art, well-engineered systems?

Several algorithms were developed and evaluated in this thesis. The Automatic Structure Optimization (ASO) algorithm performed best under the above criteria. ASO automatically optimizes

- the width of the input windows over time, which allows the following unit of the neural network to capture a certain amount of temporal context of the input signal;
- the number of hidden units, which allows the neural network to learn non-linear classification boundaries;
- the number of states that are used to model segments of the spatio-temporal input, such as acoustic segments of speech or strokes of on-line handwriting.

The ASO algorithm uses a constructive approach to find the best architecture. Training starts with a neural network of minimum size. Resources are added to specifically improve parts of the network which are involved in classification errors. ASO was developed on the recognition of spoken letters and improved the performance on an independent test set from 88.0% to 92.2% over a manually tuned architecture. The performances of architectures found by ASO for different domains and databases are also compared to architectures optimized manually by other researchers.
For example, ASO improved the performance on on-line handwritten digits from 98.5% to 99.5% over a manually optimized architecture. It is also shown that ASO can successfully adapt to different sizes of the training database and that it can be applied to the recognition of connected spoken letters.

The ASO algorithm is applicable to all classification problems with spatio-temporal input. It was tested on speech and on-line handwriting as two instances of such tasks. The approach is new, requires no domain-specific knowledge from the user, and is efficient. It is shown for the first time that fully automatic tuning of all relevant architectural parameters of speech and on-line handwriting recognizers (window widths, number of hidden units and states) to the domain and the available amount of training data is actually possible with the ASO algorithm. Automatic tuning by ASO is efficient, both in terms of computational effort and final performance.

------------------------------------------------------------------------

Instructions for ftp retrieval of this paper are given below. Our university requires that the title page be in German. The rest of the thesis is in English.

FTP INSTRUCTIONS:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose/Thesis
ftp> binary
ftp> get bodenhausen.thesis.Z
ftp> quit
unix> uncompress bodenhausen.thesis.Z

Thanks to Jordan Pollack for maintaining this archive.
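The constructive loop the abstract describes (start from a minimal network, then repeatedly add the resource most implicated in remaining errors) can be sketched in a few lines. This is a hedged illustration only: the class, the fake error model and the blame-assignment rule are hypothetical stand-ins, not code or criteria from the thesis.

```python
import random

# Hypothetical sketch of an ASO-style constructive loop. All names and the
# error-attribution rule are illustrative, not taken from the thesis.

class Architecture:
    """Architectural parameters that ASO tunes automatically."""
    def __init__(self):
        self.window_width = 1   # temporal context seen by the next layer
        self.hidden_units = 2   # capacity for non-linear class boundaries
        self.states = 1         # segments (phones, strokes) modeled per class

def train_and_evaluate(arch, data):
    """Stand-in for training the recognizer; returns (error, blamed resource).
    Here the error is faked: it simply shrinks as total capacity grows."""
    capacity = arch.window_width + arch.hidden_units + arch.states
    error = max(0.0, 1.0 - 0.1 * capacity)
    blame = random.choice(["window", "hidden", "state"])
    return error, blame

def aso(data, target_error=0.2, max_steps=20):
    """Start minimal; add the resource most involved in remaining errors."""
    arch = Architecture()
    error = 1.0
    for _ in range(max_steps):
        error, blame = train_and_evaluate(arch, data)
        if error <= target_error:
            break
        if blame == "window":
            arch.window_width += 1
        elif blame == "hidden":
            arch.hidden_units += 1
        else:
            arch.states += 1
    return arch, error

arch, err = aso(data=None)
print(arch.window_width, arch.hidden_units, arch.states, err)
```

With this toy error model the loop stops after four additions; in the real algorithm the blame step would come from attributing actual classification errors to windows, hidden units or states.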
Uli Bodenhausen

=======================================================================
Uli Bodenhausen
University of Karlsruhe
Germany
uli at ira.uka.de
=======================================================================

From beaudot at morgon.csemne.ch Sun Feb 12 13:20:16 1995
From: beaudot at morgon.csemne.ch (William Beaudot)
Date: Sun, 12 Feb 95 19:20:16 +0100
Subject: French Doctoral Thesis available: neural information processing in retina
Message-ID: <9502121820.AA27557@csemne.ch>

Hi,

Sorry, but this announcement mainly concerns French readers.

-----------------------------------------------------------------------------
The following FRENCH Doctoral Thesis is available by FTP from the TIRFLab ftp-server (Grenoble, France) and via WWW from my homepage.

FTP-host: tirf.inpg.fr
FTP-file: /pub/beaudot/MYTHESIS/*.ps.Z
WWW-link: ftp://tirf.inpg.fr/pub/HTML/beaudot/thesis.html

(8.6 MB compressed, 30 MB uncompressed, 249 pages, split into 11 compressed files, one compressed postscript file per chapter)
-----------------------------------------------------------------------------

THE NEURAL INFORMATION PROCESSING IN THE VERTEBRATE RETINA:
A Melting Pot of Ideas for Artificial Vision

KEYWORDS: biological neural networks, retina, motion detection, directional selectivity, visual adaptation, signal processing, spatiotemporal processing, silicon retina

English Abstract:

The retina is the first neural structure involved in visual perception. Researchers in Artificial Vision often see in it only a hard-wired circuit scarcely more sophisticated than a video camera, dedicated to the scanning of images and to the extraction of features amounting to a simple computation of a Laplacian or a temporal derivative. In this thesis, we argue that it does much more, in particular from a dynamical point of view, an aspect often neglected in Artificial Vision.
From a neurobiological inspiration, we show that the retina performs spatiotemporal processing well suited to the regularization of visual data, that it extracts reliable and relevant spatiotemporal information, that it performs a rough motion analysis composed of motion detection and directional selectivity, and that it finally presents an elaborate mechanism for the control of sensitivity. This work emphasizes once more the fact that the solutions implemented by nature are both simple and efficient (achieving a rather good trade-off between complexity and performance), and that they should inspire the designers of artificial visual systems. Two basic consequences also follow from this work: a better understanding of the neural mechanisms involved in early vision, and a theoretical framework for the synthesis and analysis of neuromorphic systems directly implementable in silicon.

French Abstract:

LE TRAITEMENT NEURONAL DE L'INFORMATION DANS LA RETINE DES VERTEBRES : Un creuset d'idees pour la vision artificielle.

La retine est la toute premiere structure neuronale impliquee dans la perception visuelle. Les chercheurs en Vision Artificielle n'y voient bien souvent qu'un circuit cable a peine plus sophistique qu'une camera, dediee a l'acquisition de l'image et a l'extraction de primitives se ramenant a un simple calcul de laplacien et de derivee temporelle. Dans cette these, nous soutenons qu'elle realise bien plus, en particulier d'un point de vue dynamique, aspect encore souvent neglige en Vision Artificielle.
En nous appuyant sur des donnees neurobiologiques, nous montrons qu'elle effectue un traitement spatio-temporel bien adapte a la regularisation de l'information visuelle, qu'elle extrait une information spatio-temporelle fiable et pertinente, qu'elle effectue une analyse rudimentaire du mouvement composee d'une detection et d'une selectivite directionnelle, et enfin qu'elle presente un mecanisme de controle de la sensibilite tout a fait remarquable. Ce travail souligne encore une fois le fait que les solutions mises en oeuvre par la nature sont a la fois simples et efficaces (par un bon compromis entre la complexite et la performance), lesquelles devraient inspirer les concepteurs de systemes en Vision Artificielle. De ce travail decoulent aussi deux corollaires fondamentaux : une meilleure comprehension des mecanismes neuronaux impliques dans la vision precoce et un cadre theorique pour la synthese et l'analyse de systemes neuromorphiques directement implantables sur silicium.

-----------------------------------------------------------------------------
FTP INSTRUCTIONS:

unix> ftp tirf.inpg.fr (or 192.70.29.33)
Name: anonymous
Password:
ftp> cd pub/beaudot/MYTHESIS
ftp> binary
ftp> mget *.ps.Z
ftp> quit
unix> uncompress *.ps.Z

Be careful: the compressed files require 8.6 MB.
-----------------------------------------------------------------------------

Feel free to contact me if you have any problem.

--
Dr. William H.A. BEAUDOT            E-mail: beaudot at design.csemne.ch
C.S.E.M. IC & Systems Dept.: Bio-Inspired Advanced Research
Maladière 71, Case postale 41       Phone: (41) 38 205 251
CH-2007 Neuchâtel (Switzerland)     Fax: (41) 38 205 770

From mani at linc.cis.upenn.edu Mon Feb 13 09:00:41 1995
From: mani at linc.cis.upenn.edu (D. R. Mani)
Date: Mon, 13 Feb 1995 09:00:41 -0500
Subject: Tech Report
Message-ID: <199502131400.JAA26434@linc.cis.upenn.edu>

The following technical report is available. FTP instructions are appended below.
- Mani ----- Abstract ---------------------------------------------------------------- Massively Parallel Real-Time Reasoning with Very Large Knowledge Bases: An Interim Report D. R. Mani Lokendra Shastri ICSI TR-94-031 We map structured connectionist models of knowledge representation and reasoning onto existing general purpose massively parallel architectures with the objective of developing and implementing practical, real-time reasoning systems. SHRUTI, a connectionist knowledge representation and reasoning system which attempts to model reflexive reasoning, serves as our representative connectionist model. Realizations of SHRUTI are developed on the Connection Machine CM-2 - an SIMD architecture - and on the Connection Machine CM-5 - an MIMD architecture. Though SIMD implementations on the CM-2 are reasonably fast - requiring a few seconds to tens of seconds for answering queries - experiments indicate that SPMD message passing systems are vastly superior to SIMD systems and offer hundred-fold speedups. The CM-5 implementation can encode large knowledge bases with several hundred thousand (randomly generated) rules and facts, and respond in under 500 milliseconds to a range of queries requiring inference depths of up to eight. This work provides some new insights into the simulation of structured connectionist networks on massively parallel machines and is a step toward developing large yet efficient knowledge representation and reasoning systems. ----- FTP Instructions -------------------------------------------------------- The compressed postscript file is available by anonymous FTP from ftp.icsi.berkeley.edu (128.32.201.6): % ftp ftp.icsi.berkeley.edu Name (ftp.icsi.berkeley.edu:): anonymous Password: ftp> cd pub/techreports/1994 ftp> binary ftp> get tr-94-031.ps.Z ftp> quit % uncompress tr-94-031.ps.Z [Print as usual] ------------------------------------------------------------------------------- D. R. Mani | mani at linc.cis.upenn.edu Dept. 
of Computer and Information Science | Office: (215) 898-3224 University of Pennsylvania | Home: (610) 325-0528 Philadelphia, PA 19104 | ------------------------------------------------------------------------------- From mmisra at choma.Mines.Colorado.EDU Tue Feb 14 10:26:15 1995 From: mmisra at choma.Mines.Colorado.EDU (Manavendra Misra) Date: Tue, 14 Feb 95 08:26:15 MST Subject: CSNA '95 Call for Papers Message-ID: <9502141526.AA05529@choma.Mines.Colorado.EDU> PRELIMINARY ANNOUNCEMENT AND CALL FOR PAPERS 1995 Annual Meeting, Classification Society of North America, June 22-25, 1995, Denver, Colorado, USA The 1995 annual meeting of the Classification Society of North America (CSNA) will be held in Denver, Colorado from June 22-June 25, 1995 at the Executive Tower Inn, 14th and Curtis Streets. The meeting is supported by the College of Business, University of Colorado at Denver. The meeting plans currently include a short course on Thursday, June 22, a welcoming reception on Thursday night, and regular sessions from Friday morning, June 23 until Sunday noon, June 25, 1995. A banquet is planned for Friday night and a variety of social activities in Denver and the surrounding area will be available. As is traditional at CSNA meetings, the conference will be interdisciplinary and informal. Abstracts of papers presented are distributed, but no formal proceedings are produced. Speakers are encouraged to present work in progress. CONTRIBUTED PAPERS RELATED TO CLASSIFICATION AND CLUSTERING FROM THE PERSPECTIVES OF STATISTICS, BIOLOGY, THE PHYSICAL SCIENCES, BUSINESS, LIBRARY SCIENCE, COMPUTER SCIENCE, PSYCHOLOGY, AND OTHER FIELDS ARE WELCOME AND ARE SOLICITED. We are particularly interested in including a wide variety of contributed papers concerned with applications of classification and clustering as well as methodological issues. In addition, invited sessions on Classification and Clustering in Marketing P. Green, the Wharton School and J. 
D. Carroll, Rutgers University, organizers

New Optimization and Neural Network Approaches to Discriminant Analysis, with Applications
Fred Glover, University of Colorado at Boulder, organizer

Neural Networks for Classification
Manavendra Misra, Colorado School of Mines, organizer (mmisra at mines.colorado.edu)

Model Selection Methods in Classification and Clustering
Hamparsum Bozdogan, University of Tennessee, organizer

Authorship Attribution
David Banks, Carnegie Mellon University, organizer

are planned. A Graduate Student Session, in which graduate students will present their research and meet with mentors to review and discuss the work, will also be offered.

Abstracts of papers to be considered for presentation at the meetings should be submitted as soon as possible, so as to arrive no later than March 15, 1995. Please limit abstracts to one page or less, and include appropriate keywords indicating the topics the paper addresses. We encourage you to submit abstracts via electronic mail. Abstracts submitted electronically may be in LaTeX format or in unformatted text, and should be sent to the email address listed below. Abstracts submitted in text format should avoid the use of formulae if at all possible. Written abstracts may be mailed to the program chair at the address below, or sent via FAX. Authors will be notified of acceptance of abstracts in late March or early April. If your abstract is intended for the graduate student session, please indicate this clearly.

It will help the organizers plan appropriate conference facilities if those intending to attend the meeting and/or submit abstracts inform the committee (preferably via e-mail) of those intentions as soon as possible, so that appropriate facilities can be arranged. Further information may be obtained from the Program Chair. Detailed registration and hotel information will be distributed in late March or early April. All inquiries, abstracts, etc.
should be directed to: Peter Bryant, CSNA-95 College of Business, University of Colorado at Denver Campus Box 165, Denver, Colorado 80217-3364 USA Telephone (303)-556-5833 Fax (303)-556-5899 E-mail csna95 at castle.cudenver.edu. Please distribute or post this notice as appropriate. ***************************************************************************** Manavendra Misra Dept of Mathematical and Computer Sciences Colorado School of Mines, Golden, CO 80401 Ph. (303)-273-3873 Fax. (303)-273-3875 Home messages/fax : (303)-271-0775 email: mmisra at mines.colorado.edu WWW URL: http://vita.mines.colorado.edu:3857/1s/mmisra ***************************************************************************** From mario at joule.physics.uottawa.ca Tue Feb 14 16:01:03 1995 From: mario at joule.physics.uottawa.ca (Mario Marchand) Date: Tue, 14 Feb 1995 16:01:03 -0500 Subject: Neural net paper available by anonymous ftp Message-ID: <9502142101.AA16569@joule.physics.uottawa.ca> The following paper, which has just been accepted for publication in the journal "Neural Networks", is available by anonymous ftp at: ftp://dirac.physics.uottawa.ca/pub/tr/marchand FileName: NN95.ps.Z Title: Learning $\mu$-Perceptron Networks On the Uniform Distribution Authors: Golea M., Marchand M. and Hancock T.R.. Abstract: We investigate the learnability, under the uniform distribution, of neural concepts that can be represented as simple combinations of {\em nonoverlapping\/} perceptrons (also called $\mu$ perceptrons) with binary weights and arbitrary thresholds. Two perceptrons are said to be nonoverlapping if they do not share any input variables. Specifically, we investigate, within the distribution-specific PAC model, the learnability of $\mu$ {\em perceptron unions\/}, {\em decision lists\/}, and {\em generalized decision lists\/}. In contrast to most neural network learning algorithms, we do not assume that the architecture of the network is known in advance. 
Rather, it is the task of the algorithm to find both the architecture of the net and the weight values necessary to represent the function to be learned. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept. Because the algorithms are statistical in nature, they are robust against large amounts of random classification noise. ALSO: you will find other papers co-authored by Mario Marchand in this directory. The text file: Abstracts-mm.txt contains a list of abstracts of all the papers. PLEASE: communicate to me any printing or transmission problems. Any comments concerning these papers are very welcome. From esann at dice.ucl.ac.be Tue Feb 14 11:35:15 1995 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Tue, 14 Feb 1995 18:35:15 +0200 Subject: 3rd European Symposium on Artificial Neural Networks: programme & registration Message-ID: <199502141732.SAA15742@ns1.dice.ucl.ac.be> ******************************************************** * 3rd European Symposium on Artificial Neural Networks * * * * What's new in fundamental research? * * * * Brussels - April 19-20-21, 1995 * * * * Preliminary Program * * * ******************************************************** This e-mail contains all information concerning the 3rd European Symposium on Artificial Neural Networks, to be held in Brussels, on 19-20-21 April, 1995: - general information about the conference - the full programme - the committees - practical information - how to register The field of Artificial Neural Networks includes a lot of different disciplines, from mathematics and statistics to robotics and electronics. 
For this reason, current studies address various aspects of the field, sometimes to the detriment of strong, well-established foundations for this research; it is clear that better knowledge of the basic aspects of neurocomputing, and more effective comparisons with other computing methods, are strongly needed for a profitable long-term use of neural networks in applications. The purpose of the ESANN series of conferences is to present the latest results on the fundamental aspects of artificial neural networks.

The third European Symposium on Artificial Neural Networks will be organized in Brussels, on 19-21 April 1995. It will cover most of the main theoretical, mathematical and fundamental new developments of the field: learning, models, approximation of functions, classification, control, signal processing, biology, self-organisation and many other topics will be treated during the conference. ESANN'95 is coupled with the 'Neurap'95' conference (Marseilles, France, December 1995), which will present the applications of neural network methods.

The program committee of ESANN'95 received 112 submissions; 53 were selected for presentation during the symposium and will be included in the proceedings. This strict selection guarantees the high quality of the selected papers. Besides these presentations, four invited speakers (David Stork, Hans-Peter Mallot, Pierre Comon and Christian Jutten) will present the state of the art and recent developments in some particular aspects of the topics covered by the conference.

The steering and program committees of ESANN'95 are pleased to invite you to participate in this symposium. More than a formal conference presenting the latest developments in the field, ESANN'95 will also be a forum for open discussions, round tables and opportunities for future collaborations.
We hope to have the pleasure to meet you in April, in the splendid town of Brussels, and that your stay in Belgium will be as scientifically beneficial as agreeable. ------------------------------- - Programme of the conference - ------------------------------- Wednesday 19th April 1995 ------------------------- 8H30 Registration 9H00 Opening session Session 1: Self-organisation Chairman: F. Blayo (Univ. Paris I, France) 9H10 "Self-organisation, metastable states and the ODE method in the Kohonen neural network" J.A. Flanagan, M. Hasler E.P.F. Lausanne (Switzerland) 9H30 "About the Kohonen algorithm: strong or weak self-organization?" J.-C. Fort*, G. Pages** *Univ. Nancy I & Univ. Paris I (France), **Univ. Paris 6 & Univ. Paris 12 (France) 9H50 "Topological interpolation in SOM by affine transformations" J. Goppert, W. Rosenstiel Univ. Tuebingen (Germany) 10H10 "Dynamic Neural Clustering" K. Moscinska Silesian Tech. Univ. (Poland) 10H30 "Multiple correspondence analysis of a crosstabulations matrix using the Kohonen algorithm" S. Ibbou, M. Cottrell Univ. Paris I (France) 10H50 Coffee break Session 2: Models 1 Chairman: J. Stonham (Brunel Univ., United Kingdom) 11H10 "Identification of the human arm kinetics using dynamic recurrent neural networks" J.-P. Draye*, G. Cheron**, M. Bourgeois**, D. Pavisic*, G. Libert* *Fac. Polytech. de Mons (Belgium), **Univ. Brussels (Belgium) 11H30 "Simplified cascade-correlation learning" M. Lehtokangas, J. Saarinen, K. Kaski Tampere Univ. of Technology (Finland) 11H50 "Active noise control with dynamic recurrent neural networks" D. Pavisic, L. Blondel, J.-P. Draye, G. Libert, P. Chapelle Fac. Polytech. de Mons (Belgium) 12H10 "Cascade learning for FIR-TDNNs" M. Diepenhorst, J.A.G. Nijhuis, L. Spaanenburg Rijksuniv. Groningen (The Netherlands) 12H30 Lunch 14H00 Invited paper: D. Stork (Ricoh California Research Center), A. Sperduti (Univ. 
di Pisa) "Recent developments in transformation-invariant pattern classification" Session 3: Signal processing and chaos Chairman: M. Hasler (E.P.F. Lausanne, Switzerland) 14H45 "Adaptive signal processing with unidirectional Hebbian adaptation laws" J. Dehaene, J. Vandewalle Kat. Univ. Leuven (Belgium) 15H05 "MAP decomposition of a mixture of AR signal using multilayer perceptrons" C. Couvreur Fac. Polytech. de Mons (Belgium) 15H25 "XOR and backpropagation learning: in and out of the chaos?" K. Bertels*, L. Neuberg*, S. Vassiliadis**, G. Pechanek*** *Fac. Univ. N.-D. de la Paix (Belgium), **T.U. Delft (The Netherlands), ***IBM Microelectronics Div. (USA) 15H45 "Analog Brownian weight movement for learning of artificial neural networks" M.R. Belli, M. Conti, C. Turchetti Univ. of Ancona (Italy) 16H05 Coffee break Session 4: Biological models Chairman: H.P. Mallot (Max-Planck Institut, Germany) 16H30 "Spatial summation in simple cells: computational and experimental results" F. Worgotter*, E. Nelle*, B. Li**, L. Wang**, Y.-C. Diao** *Ruhr Univ. Bochum (Germany), **Academia Sinica (China) 16H50 "Activity-dependent neurite outgrowth in a simple network model including excitation and inhibition" C. van Oss, A. van Ooyen Neth. Inst. for Brain Research (The Netherlands) 17H10 "Predicting spike train responses of neuron models" S. Joeken, H. Schwegler Univ. of Bremen (Germany) 17H30 "A distribution-based model of the dynamics of neural networks in the cerebral cortex" A. Terao*, M. Akamatsu*, J. Seal** *Nat. Inst. of Bioscience and Human Technology (Japan), **CNRS Marseilles (France) 17H50 "Some new results on the coding of pheromone intensity in an olfactory sensory neuron" A. Vermeulen*,**, J.-P. Rospars*, P. Lansky***,*, H.C. Tuckwell****,* *INRA (France), **I.N.P. Grenoble (France), ***Acad. of Sciences (Czech Republic), ****Australian Nat. Univ. 
(Australia) Thursday 20th April 1995 ------------------------ Session 5: Special session on the Elena-Nerves2 ESPRIT Basic Research project Chairman: M. Cottrell (Univ. Paris I, France) 9H00 Invited paper: P. Comon (Thomson-Sintra, France) "Supervised classification: a probabilistic approach" 9H30 Invited paper: C. Jutten, O. Fambon (Inst. Nat. Pol. Grenoble, France) "Pruning methods: a review" 10H00 "A deterministic method for establishing the initial conditions in the RCE algorithm" J.M. Moreno, F.X. Vazquez, F. Castillo, J. Madrenas, J. Cabestany Univ. Politecnica Catalunya (Spain) 10H20 "Pruning kernel density estimators" O. Fambon, C. Jutten Inst. Nat. Pol. Grenoble (France) 10H40 "Suboptimal Bayesian classification by vector quantization with small clusters" J.L. Voz, M. Verleysen, P. Thissen, J.D. Legat Univ. Cat. Louvain (Belgium) 11H00 Coffee break Session 6: Theory of learning systems Chairman: C. Touzet (IUSPIM Marseilles, France) 11H20 "Knowledge and generalisation in simple learning systems" D. Barber, D. Saad University of Edinburgh (United Kingdom) 11H40 "Control of complexity in learning with perturbed inputs" Y. Grandvalet*, S. Canu*, S. Boucheron** *Univ. Tech. de Compiegne (France), **Univ. Paris-Sud (France) 12H00 "An episodic knowledge base for object understanding" U.-D. Braumann, H.-J. Boehme, H.-M. Gross Tech. Univ. Ilmenau (Germany) 12H20 "Neurosymbolic integration: unified versus hybrid approaches" M. Hilario*, Y. Lallement**, F. Alexandre** *Univ. Geneve (Switzerland), **INRIA (France) 12H40 Lunch Session 7: Biological vision Chairman: J. Herault (Inst. Nat Polyt. Grenoble, France) 14H00 "Improving object recognition by using a visual latency mechanism" R. Opara, F. Worgorter Ruhr Univ. Bochum (Germany) 14H20 "On the function of the retinal bipolar cell in early vision" S. Ohshima, T. Yagi, Y. Funahashi Nagoya Inst. 
of Technology (Japan) 14H40 "Sustained and transient amacrine cell circuits underlying the receptive fields of ganglion cells in the vertebrate retina" G. Maguire Univ. of Texas (USA) 15H00 "Latency-reduction in antagonistic visual channels as the result of corticofugal feedback" J. Kohn, F. Worgotter Ruhr Univ. Bochum (Germany) 15H20 Coffee break Session 8: Models 2 Chairman: V. Kurkova (Academy of Sciences, Czech Republic) 15H40 "On threshold circuit depth" A. Albrecht BerCom GmbH (Germany) 16H00 "Minimum entropy queries for linear students learning nonlinear rules" P. Sollich Univ. of Edinburgh (United Kingdom) 16H20 "An asymmetric associative memory model based on relaxation labeling processes" M. Pelillo, A.M. Fanelli Univ. di Bari (Italy) 16H40 "Invariant measure for an infinite neural network" T.S. Turova Kat. Univ. Leuven (Belgium) 17H00 "Growing adaptive neural networks with graph grammars" S.M. Lucas Univ. of Essex (United Kingdom) 17H20 "Constructing feed-forward neural networks for binary classification tasks" C. Campbell*, C. Perez Vicente** *Bristol Univ. (United Kingdom), **Univ. Barcelona (Spain) 20H00 Conference dinner Friday 21st April 1995 ---------------------- Session 9: Classification and control Chairman: M. Grana (UPV San Sebastian, Spain) 9H00 "Improvement of EEG classification with a subject-specific feature selection" M. Pregenzer, G. Pfurtscheller, C. Andrew Graz Univ. of Technology (Austria) 9H20 "Neural networks for invariant pattern recognition" J. Wood, J. Shawe-Taylor Univ. of London (United Kingdom) 9H40 "Derivation of a new criterion function based on an information measure for improving piecewise linear separation incremental algorithms" J. Cuguero, J. Madrenas, J.M. Moreno, J. Cabestany Univ. Politecnica Catalunya (Spain) 10H00 "Neural network based one-step ahead control and its stability" Y. Tan, A.R. Van Cauwenberghe Univ. 
of Gent (Belgium) 10H20 "NLq theory: unifications in the theory of neural networks, systems and control" J. Suykens, B. De Moor, J. Vandewalle Kat. Univ. Leuven (Belgium) 10H40 Coffee break 11H00 Invited paper: H.P. Mallot (Max-Planck-Institut, Germany) "Learning of cognitive maps from sequences of views" Session 10: Radial-basis functions Chairman: G. Pages (Univ. Paris VI, France) 11H45 "Trimming the inputs of RBF networks" C. Andrew*, M. Kubat**, G. Pfurtscheller* *Graz Univ. Tech. (Austria), **Johannes Kepler Univ. (Austria) 12H05 "Learning the appropriate representation paradigm by circular processing units" S. Ridella, S. Rovetta, R. Zunino Univ. of Genoa (Italy) 12H25 "Radial basis functions in the Fourier domain" M. Orr Univ. of Edinburgh (United Kingdom) 12H45 Lunch Session 11: Function approximation Chairman: J. Vandewalle (Kat. Univ. Leuven, Belgium) 14H00 "Function approximation by localized basis function neural network" M. Kokol, I. Grabec Univ. of Ljubljana (Slovenia) 14H20 "Functional approximation by perceptrons: a new approach" J.-G. Attali*, G. Pages** *Univ. Paris I (France), **Univ. Paris 6 & Univ. Paris 12 (France) 14H40 "Approximation of functions by Gaussian RBF networks with bounded number of hidden units" V. Kurkova Acad. of Sciences (Czech Republic) 15H00 "Neural network piecewise linear preprocessing for time-series prediction" T.W.S. Chow, C.T. Leung City Univ. (Hong-Kong) 15H20 "An upper estimate of the error of approximation of continuous multivariable functions by KBF networks" K. Hlavackova Acad. of Sciences (Czech Republic) 15H40 Coffee break Session 12: Multi-layer perceptrons Chairman: W. Duch (Nicholas Copernicus Univ., Poland) 16H00 "Multi-sigmoidal units and neural networks" J.A. Drakopoulos Stanford Univ. (USA) 16H20 "Performance analysis of a MLP weight initialization algorithm" M. Karouia, R. Lengelle, T. Denoeux Univ. 
Compiegne (France) 16H40 "Alternative output representation schemes affect learning and generalization of back-propagation ANNs; a decision support application" P.K. Psomas, G.D. Hilakos, C.F. Christoyannis, N.K. Uzunoglu Nat. Tech. Univ. Athens (Greece) 17H00 "A new training algorithm for feedforward neural networks" B.K. Verma, J.J. Mulawka Warsaw Univ. of Technology (Poland) 17H20 "An evolutive architecture coupled with optimal perceptron learning for classification" J.-M. Torres Moreno, P. Peretto, M. B. Gordon C.E.N. Grenoble (France) -------------- - Committees - -------------- Steering committee ------------------ Francois Blayo Univ. Paris I (F) Marie Cottrell Univ. Paris I (F) Nicolas Franceschini CNRS Marseille (F) Jeanny Herault INPG Grenoble (F) Michel Verleysen UCL Louvain-la-Neuve (B) Scientific committee -------------------- Agnes Babloyantz Univ. Libre Bruxelles (Belgium) Herve Bourlard ICSI Berkeley (USA) Joan Cabestany Univ. Polit. de Catalunya (E) Dave Cliff University of Sussex (UK) Holk Cruse Universitat Bielefeld (D) Dante Del Corso Politecnico di Torino (I) Wlodek Duch Nicholas Copernicus Univ. (PL) Marc Duranton Philips / LEP (F) Jean-Claude Fort Universite Nancy I (F) Bernd Fritzke Ruhr-Universitat Bochum (D) Karl Goser Universitat Dortmund (D) Manuel Grana UPV San Sebastian (E) Martin Hasler EPFL Lausanne (CH) Kurt Hornik Technische Univ. Wien (A) Christian Jutten INPG Grenoble (F) Vera Kurkova Acad. of Science of the Czech Rep. (CZ) Petr Lansky Acad. of Science of the Czech Rep. (CZ) Jean-Didier Legat UCL Louvain-la-Neuve (B) Hans-Peter Mallot Max-Planck Institut (D) Eddy Mayoraz RUTCOR (USA) Jean Arcady Meyer Ecole Normale Superieure Paris (F) Jose Mira-Mira UNED (E) Pietro Morasso Univ. 
of Genoa (I) Jean-Pierre Nadal Ecole Normale Superieure Paris (F) Erkki Oja Helsinki University of Technology (FIN) Gilles Pages Universite Paris VI (F) Helene Paugam-Moisy Ecole Normale Superieure Lyon (F) Alberto Prieto Universidad de Granada (E) Pierre Puget LETI Grenoble (F) Ronan Reilly University College Dublin (IRE) Tamas Roska Hungarian Academy of Science (H) Jean-Pierre Rospars INRA Versailles (F) Jean-Pierre Royet Universite Lyon 1 (F) John Stonham Brunel University (UK) John Taylor King's College London (UK) Vincent Torre Universita di Genova (I) Claude Touzet IUSPIM Marseilles (F) Joos Vandewalle KUL Leuven (B) Marc Van Hulle KUL Leuven (B) Christian Wellekens Eurecom Sophia-Antipolis (F) ----------- - Support - ----------- ESANN'95 is organized with the support of: - Commission of the European Communities (DG XII, Human Capital and Mobility programme) - Region of Brussels-Capital - IEEE Region 8 - UCL (Universite Catholique de Louvain - Louvain-la-Neuve) - REGARDS (Research Group on Algorithmic, Related Devices and Systems UCL) -------------------------- - Conference information - -------------------------- Registration fees for symposium ------------------------------- registration before registration after 17th March 1995 17th March 1995 Universities BEF 15000 BEF 16000 Industries BEF 19000 BEF 20000 Registration fees include attendance at all sessions, the ESANN'95 banquet, a copy of the conference proceedings, daily lunches (19-21 April '95), and coffee breaks twice a day during the symposium. Advance registration is mandatory. Advance payments (see registration form) must be made to the conference secretariat by bank transfer to a Belgian bank (free of charge), by bank transfer from a foreign bank account (add BEF 500 for processing fees), or by sending a cheque (add BEF 500 for processing fees). Bank transfers and cheques must be made out in Belgian francs. Language -------- The official language of the conference is English. 
It will be used for all printed material, presentations and discussions. Proceedings ----------- A copy of the proceedings will be provided to all conference registrants. All technical papers will be included in the proceedings. Additional copies of the proceedings (ESANN'93, ESANN'94 and ESANN'95) may be purchased at the following rates: ESANN'95 proceedings: BEF 2000 ESANN'94 proceedings: BEF 2000 ESANN'93 proceedings: BEF 1500. Add BEF 500 to any single or multiple order for p.&p. and bank charges. Please write to the conference secretariat to order proceedings. Conference dinner ----------------- A banquet will be offered on Thursday 20th to all conference registrants in a famous and typical Brussels venue. Additional vouchers for the banquet may be purchased on Wednesday 19th at the conference. Cancellation ------------ If cancellation is received by 24th March 1995, 50% of the registration fees will be returned. Cancellations received after this date will not be entitled to any refund. ----------------------- - General information - ----------------------- Brussels, Belgium ----------------- Brussels is not only the host city of the European Commission and of hundreds of multinational companies; it is also a marvelous historical town, with typical quarters, famous monuments known throughout the world, and the splendid "Grand-Place". It is a cultural and artistic center, with numerous museums. Night life in Brussels is considerable. There are a lot of restaurants and pubs open late into the night, where typical Belgian dishes can be tasted with one of the more than 1000 different beers. Hotel accommodation ------------------- Special rates for participants in ESANN'95 have been arranged at the MAYFAIR HOTEL (4 stars), and at the FORUM HOTEL (3 stars). The Mayfair Hotel is tastefully decorated to the highest standards of luxury and comfort. It includes two restaurants, a bar and private parking. 
Located on the elegant Avenue Louise, the exclusive Hotel Mayfair is a short walk from the luxurious "uppertown" shopping district. Also nearby are the 14th-century Cistercian abbey and the magnificent "Bois de la Cambre" park. Single room BEF 3300 Double room or twin room BEF 4000 HOTEL MAYFAIR phone: +32 2 649 98 00 381 av. Louise fax: +32 2 649 22 49 1050 Brussels - Belgium The Forum Hotel is situated in the heart of a quiet, residential and historical "Art Nouveau" area. It includes a bar, a Tuscan restaurant and private parking. Single room BEF 2200 Double room or twin room BEF 2700 FORUM HOTEL phone: + 32 2 343 01 00 2 av. du Haut-Pont fax: + 32 2 347 00 54 1060 Brussels - Belgium Prices for both hotels include breakfast, taxes and service. Rooms can only be confirmed upon receipt of the booking form (see at the end of this booklet) and deposit. Rooms must be booked before 31 March 1995. Public transportation goes directly from the hotels (trams No. 93 & 94 from the Mayfair hotel and tram No. 92 from the Forum hotel) to the conference center ("Parc" stop). Conference location ------------------- The conference will be held at the "Chancellerie" of the Generale de Banque. A map is included at the end of this booklet. Generale de Banque - Chancellerie 1 rue de la Chancellerie 1000 Brussels - Belgium Conference secretariat ---------------------- D facto conference services phone: + 32 2 245 43 63 45 rue Masui fax: + 32 2 245 46 94 B-1210 Brussels - Belgium E-mail: esann at dice.ucl.ac.be ------------------------------------------------ - ESANN'95 Registration and Hotel Booking Form - ------------------------------------------------ Ms., Mr., Dr., Prof.: ............................................ Name: ............................................................ First name: ...................................................... Institution: ..................................................... ................................................................. 
Address: ......................................................... ................................................................. ZIP: ............................................................. Town: ............................................................ Country: ......................................................... Tel: ............................................................. fax: ............................................................. E-mail: .......................................................... VAT No.: .......................................................... Registration fees ----------------- registration before registration after 17th March 1995 17th March 1995 Universities BEF 15000 BEF 16000 Industries BEF 19000 BEF 20000 University fees are applicable to members and students of academic and teaching institutions. Each registration will be confirmed by an acknowledgment of receipt, which must be presented at the registration desk of the conference to receive the entry badge, proceedings and all materials. Registration fees include attendance at all sessions, the ESANN'95 banquet, a copy of the conference proceedings, daily lunches (19-21 April '95), and coffee breaks twice a day during the symposium. Advance registration is mandatory. Hotel booking ------------- Hotel MAYFAIR (4 stars) - 381 av. Louise - 1050 Brussels Single room : BEF 3300 Double room (large bed) : BEF 4000 Twin room (2 beds) : BEF 4000 Hotel FORUM (3 stars) - 2 av. du Haut-Pont - 1060 Brussels Single room : BEF 2200 Double room (large bed) : BEF 2700 Twin room (2 beds) : BEF 2700 Prices include breakfast, service and taxes. A deposit corresponding to the first night is mandatory. 
Please tick as appropriate: --------------------------- Registration form for ESANN'95 Universities: O registration before 17th March 1995: BEF 15000 O registration after 17th March 1995: BEF 16000 Industries: O registration before 17th March 1995: BEF 19000 O registration after 17th March 1995: BEF 20000 Hotel Mayfair booking O single room deposit: BEF 3300 O double room (large bed) deposit: BEF 4000 O twin room (twin beds) deposit: BEF 4000 Hotel Forum booking O single room deposit: BEF 2200 O double room (large bed) deposit: BEF 2700 O twin room (twin beds) deposit: BEF 2700 Hotels must be booked before 31 March 1995; no guarantee of availability can be given after this date. Arrival date: ..../..../1995 Departure date: ..../..../1995 O Additional payment if fees are paid by a cheque from a foreign bank or by a bank transfer from a foreign bank account: BEF 500 Total BEF ____ Payment (please tick): O Bank transfer, stating name of participant, made payable to: Generale de Banque ch. de Waterloo 1341 A B-1180 Brussels - Belgium Acc.no: 210-0468648-93 of D facto (45 rue Masui, B-1210 Brussels) A supplementary fee of BEF 500 must be added if the payment is made from a foreign bank account. O Cheques/Postal Money Orders made payable to: D facto 45 rue Masui B-1210 Brussels - Belgium A supplementary fee of BEF 500 must be added if the payment is made by a cheque or postal money order from a foreign bank. Only registrations accompanied by a cheque, a postal money order or the proof of bank transfer will be considered. 
Registration and hotel booking form, together with payment, must be sent as soon as possible, and in no case later than 7th April 1995, to the conference secretariat: D facto conference services - ESANN'95 45, rue Masui - B-1210 Brussels - Belgium _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________ From yarowsky at unagi.cis.upenn.edu Tue Feb 14 16:29:51 1995 From: yarowsky at unagi.cis.upenn.edu (David Yarowsky) Date: Tue, 14 Feb 95 16:29:51 EST Subject: ACL-95 Corpus-based NLP Workshop - Call for Papers Message-ID: <9502142129.AA27412@unagi.cis.upenn.edu> *** PRIMARY CALL FOR PAPERS **** ACL's SIGDAT and SIGNLL present the THIRD WORKSHOP ON VERY LARGE CORPORA WHEN: June 30, 1995 - immediately following ACL-95 (June 27-29) WHERE: MIT, Cambridge, Massachusetts, USA WORKSHOP DESCRIPTION: As in past years, the workshop will offer a general forum for new research in corpus-based and statistical natural language processing. Areas of interest include (but are not limited to): sense disambiguation, part-of-speech tagging, robust parsing, term and name identification, alignment of parallel text, machine translation, lexicography, spelling correction, morphological analysis and anaphora resolution. This year, the workshop will be organized around the theme of: Supervised Training vs. Self-organizing Methods Is annotation worth the effort? Historically, annotated corpora have made a significant contribution. The tagged Brown Corpus, for example, led to important improvements in part-of-speech tagging. But annotated corpora are expensive. Very little annotated data is currently available, especially for languages other than English. Self-organizing methods offer the hope that annotated corpora might not be necessary. Do these methods really work? Do we have to choose between annotated corpora and unannotated corpora? Can we use both? 
The workshop will encourage contributions of innovative research along this spectrum. In particular, it will seek work in languages and applications where appropriately tagged training corpora do not currently exist. It will also explore what new kinds of corpus annotations (such as discourse structure, co-reference and sense tagging) would be useful to the community, and will encourage papers on their development and use in experimental projects. The theme will provide an organizing structure to the workshop, and offer a focus for debate. However, we expect and will welcome a diverse set of submissions in all areas of statistical and corpus-based NLP. PROGRAM CHAIRS: Ken Church - AT&T Bell Laboratories David Yarowsky - University of Pennsylvania SPONSORS: LEXIS-NEXIS, Division of Reed and Elsevier, Plc. SIGDAT (ACL's special interest group for linguistic data and corpus-based approaches to NLP) SIGNLL (ACL's special interest group for natural language learning) FORMAT FOR SUBMISSION: Authors should submit a full-length paper (3500-8000 words), either electronically or in hard copy. Electronic submissions should be mailed to "yarowsky at unagi.cis.upenn.edu", and must either be (a) plain ASCII text, (b) a single PostScript file, or (c) a single LaTeX file following the ACL-95 stylesheet (no separate figures or .bib files). Hard copy submissions should be mailed to Ken Church (address below), and should include four (4) copies of the paper. REQUIREMENTS: Papers should describe original work. A paper accepted for presentation may not have been presented, and may not later be presented, at any other meeting. Papers submitted to other conferences will be considered, as long as this fact is clearly indicated in the submission. SCHEDULE: Submission Deadline: March 20, 1995 Notification Date: April 18, 1995 Camera ready copy due: May 11, 1995 CONTACT: Ken Church David Yarowsky Room 2B-421 Dept. of Computer and Info. 
Science AT&T Bell Laboratories University of Pennsylvania 600 Mountain Ave. 200 S. 33rd St. Murray Hill, NJ 07974 USA Philadelphia, PA 19104-6389 USA e-mail: kwc at research.att.com email: yarowsky at unagi.cis.upenn.edu From bhuiyan at mars.elcom.nitech.ac.jp Wed Feb 15 15:30:14 1995 From: bhuiyan at mars.elcom.nitech.ac.jp (Md. Shoaib Bhuiyan) Date: Wed, 15 Feb 95 15:30:14 JST Subject: Paper available: An improved Neural Network based Edge Detection method Message-ID: <9502150630.AA23476@mars.elcom.nitech.ac.jp> The following paper is available for copying. It was published in Proceedings of Int'l. Conf. on Neural Information Processing, Seoul, Korea, vol. 1, pp.620-625, Oct. 17-20, 1994. An improved Neural Network based Edge Detection method Abstract: Existing edge detection methods provide unsatisfactory results when contrast varies widely within an image due to non-uniform illumination. Koch et al. developed an energy function based upon a Hopfield neural network, whose coefficients were fixed by trial and error and remain constant for the entire image, irrespective of the differences in intensity level. This paper presents an improved edge detection method for images where contrast is not uniform. We propose that the energy function parameters for an image with inconsistent illumination should not remain fixed, and we introduce a schedule for changing these parameters. The results, compared with those of existing methods, suggest a better strategy for edge detection that depends upon both the dynamic range of the original image pixel values and their contrast. ----------------------------------------------------------------- The paper can be retrieved via anonymous ftp by following these instructions: unix> ftp ftp.elcom.nitech.ac.jp ftp:name> anonymous Password:> your complete e-mail address ftp> cd pub ftp> get ICONIP.ps.gz ftp> bye unix> gunzip ICONIP.ps.gz unix> lpr ICONIP.ps ICONIP.ps is 3.58Mb, six pages in PostScript format. 
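For readers who would rather script the transfer than type the interactive session above, the same steps can be sketched with Python's standard ftplib module. The host, directory, and filename are copied verbatim from the instructions; the function name `fetch_paper` is invented here for illustration, and no guarantee is made that the server still answers.

```python
# Sketch of the anonymous-FTP retrieval described above, using Python's
# standard ftplib. Host, directory, and filename come from the announcement;
# whether the server is still reachable is not guaranteed.
from ftplib import FTP

def fetch_paper(host="ftp.elcom.nitech.ac.jp",
                directory="pub",
                filename="ICONIP.ps.gz"):
    """Log in anonymously, change to `directory`, and download `filename`."""
    ftp = FTP(host)
    # Anonymous FTP convention: username "anonymous", your e-mail as password.
    ftp.login("anonymous", "your-address@example.org")
    ftp.cwd(directory)
    with open(filename, "wb") as out:
        ftp.retrbinary("RETR " + filename, out.write)
    ftp.quit()
    return filename
```

After the download, the `gunzip` and `lpr` steps from the instructions apply unchanged.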
The paper proposes a novel idea to extract edges from an image with high contrast. Your feedback is very much appreciated (bhuiyan at mars.elcom.nitech.ac.jp) -Md. Shoaib Bhuiyan From B344DSL at UTARLG.UTA.EDU Wed Feb 15 15:00:56 1995 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Wed, 15 Feb 1995 14:00:56 -0600 (CST) Subject: Call for items for Newsletter section of Neural Networks Message-ID: Call for Newsletter Items The journal Neural Networks is now adding a newsletter section to facilitate announcement of items of interest to its general readership. Examples include, but are not limited to: ~ new industrial applications of neural networks; ~ new discoveries in neurobiology or psychology that are of interest to neural network researchers; ~ government or international research initiatives that will be of benefit to the neural network field; ~ activities by special interest groups (SIGs) of the International Neural Network Society. We are aiming for our first newsletter section to be part of the issue of Neural Networks to appear at the World Congress on Neural Networks July 17-21. The Newsletter Editors are Harold Szu (representing the SIGs) Marwan Jabri (representing the Japanese Neural Network Society) Stephane Canu (representing the European Neural Network Society) Daniel S. Levine (representing the International Neural Network Society) Each of the four of us as editors is allotted a maximum of 2 newsletter pages per issue, and will judge submissions for relevance and appropriateness. (If one editor receives more than two pages' worth of publishable items and another editor less, some items can be reapportioned.) Therefore, I am putting out a call for submissions of items that you wish to publicize ~ preferably in electronic form, to facilitate forwarding to one of the Editors-in-Chief of the journal. (Note: this does not include conference announcements, which are already covered in the journal's current events section.) 
If you think an item might be appropriate for the Newsletter but are unsure, please send it anyway and the Editors will decide collectively on it. To facilitate going to press soon enough for July, we would like to have items arrive if possible by March 1, 1995. They can be sent to me at: Professor Daniel S. Levine Department of Mathematics University of Texas at Arlington Arlington, TX 76019-0408, USA e-mail: b344dsl at utarlg.uta.edu fax: 817-794-5802 telephone: 817-273-3598 or any of the other three editors: Dr. Harold Szu 9402 Wildoak Drive Bethesda, MD 20914, USA e-mail: hszu at ulysses.nswc.navy.mil fax: 301-394-3923 telephone: 301-394-3097 Dr. Stephane Canu Departement de Genie Informatique HEUDIASYC - U.R.A. 817 C.N.R.S. Universite de Technologie de Compiegne B.P. 649 60206 Compiegne Cedex, France e-mail: scanu at hds.univ-compiegne.fr fax: (+33) 44 23 44 77 telephone: (+33) 44 23 44 83 Dr. Marwan Jabri Department of Electrical Engineering Building J03 Sydney University Sydney NSW 2006, Australia e-mail: marwan at sedal.oz.au fax: 61-2-660-1228 telephone: 61-2-351-2240 Please post this notice to other bulletin boards! From koza at CS.Stanford.EDU Wed Feb 15 22:47:30 1995 From: koza at CS.Stanford.EDU (John Koza) Date: Wed, 15 Feb 95 19:47:30 PST Subject: GP-96 Call For Papers Message-ID: FIRST CALL FOR PAPERS (Version 1.0) GP-96 - GENETIC PROGRAMMING 96 July 28 - 31 (Sunday - Wednesday), 1996 Fairchild Auditorium Stanford University Stanford, California This first genetic programming conference will bring together people from the academic world, industry, and government who are interested in genetic programming. The conference program will include contributed papers, tutorials, an invited speaker, and informal meetings. 
Topics of interest include, but are not limited to: - new applications of genetic programming - theory - extensions and variations of genetic programming - parallelization techniques - mental models, memory, and state - operator and representation issues - relations to biology and cognitive systems - implementation issues - war stories Proceedings will be published by The MIT Press. HONORARY CHAIR (AND INVITED SPEAKER) John Holland, University of Michigan GENERAL CHAIR John Koza, Stanford University PROGRAM COMMITTEE (In Formation): - Russell J. Abbott, California State University, Los Angeles and The Aerospace Corporation - David Andre, Stanford University - Peter J. Angeline, Loral Federal Systems - Wolfgang Banzhaf, University of Dortmund, Germany - Samy Bengio, Centre National d'Etudes des Telecommunications, France - Scott Brave, Stanford University - Walter Cedeno, Primavera Systems Inc. - Nichael Lynn Cramer, BBN Systems and Technologies - Patrik D'haeseleer, University of New Mexico - Bertrand Daniel Dunay, System Dynamics International - Frederic Gruau, Stanford University - Richard J. Hampo, Ford Motor Company - Simon Handley, Stanford University - Hitoshi Hemmi, ATR, Kyoto, Japan - Thomas Huang, University of Illinois - Hitoshi Iba, Electrotechnical Laboratory, Japan - Martin A. Keane, Econometrics Inc. - Mike Keith, Allen Bradley Controls - Kenneth Marko, Ford Motor Company - Kenneth E. Kinnear, Jr., Adaptive Computing Technology - W. B. Langdon, University College, London - Martin C. Martin, Carnegie Mellon University - Sidney R Maxwell III - David Montana, BBN Systems and Technologies - Dr. Heinz Muehlenbein, GMD Research Center, Germany - Peter Nordin, University of Dortmund, Germany - Howard Oakley, Institute of Naval Medicine, United Kingdom - Franz Oppacher, Carleton University, Ottawa - Una-May O'Reilly, Carleton University, Ottawa - Michael Papka, Argonne National Laboratory - Timothy Perkis - Justinian P. 
Rosca, University of Rochester - Conor Ryan, University College Cork, Ireland - Malcolm Shute, University of Brighton - Eric V. Siegel, Columbia University - Karl Sims - Andrew Singleton, Creation Mechanics - Lee Spector, Hampshire College - Walter Alden Tackett, Neuromedia - Astro Teller, Carnegie Mellon University - Patrick Tufts, Brandeis University - V. Rao Vemuri, University of California at Davis - Darrell Whitley, Colorado State University - Alden H. Wright, University of Montana - Byoung-Tak Zhang, GMD, Germany EXECUTIVE COMMITTEE OF PROGRAM COMMITTEE (In Formation) SPECIAL PROGRAM CHAIRS The main focus of the conference (and about two-thirds of the papers) will be on genetic programming. In addition, papers describing recent developments in closely related areas of evolutionary computation (particularly those addressing issues common to various areas of evolutionary computation) will be reviewed by special program committees appointed and supervised by the following special program chairs. - GENETIC ALGORITHMS: David E. Goldberg, University of Illinois - CLASSIFIER SYSTEMS: Rick Riolo, University of Michigan - EVOLUTIONARY PROGRAMMING: David Fogel, University of California at San Diego - EVOLUTION STRATEGIES: PROPOSALS HEREBY SOLICITED TUTORIALS Tutorials will overview (1) genetic programming, (2) closely related areas of evolutionary computation, and (3) neural networks, machine learning, and introductory molecular biology. Most tutorials will be on Sunday, July 28, 1996, and specific times and dates will be announced later. - INTRODUCTION TO GENETIC PROGRAMMING: John Koza, Stanford University - MACHINE LANGUAGE GENETIC PROGRAMMING: Peter Nordin, University of Dortmund, Germany - GENETIC PROGRAMMING USING BINARY REPRESENTATION: Wolfgang Banzhaf, University of Dortmund, Germany - GENETIC ALGORITHMS: David E. 
Goldberg, University of Illinois - EVOLUTIONARY PROGRAMMING: David Fogel, University of California at San Diego - EVOLUTIONARY COMPUTATION FOR CONSTRAINT OPTIMIZATION: Zbigniew Michalewicz, University of North Carolina - CLASSIFIER SYSTEMS: Robert Elliott Smith, University of Alabama - MOLECULAR BIOLOGY FOR COMPUTER SCIENTISTS: Russell B. Altman, Stanford University - NEURAL NETWORKS: David E. Rumelhart, Stanford University - MACHINE LEARNING: Pat Langley, Stanford University - OTHER GENETIC PROGRAMMING TUTORIALS: PROPOSALS HEREBY SOLICITED INFORMATION FOR SUBMITTING PAPERS: Wednesday, January 10, 1996 is the deadline for receipt at the address below of seven (7) copies of each submitted paper. Papers are to be in single-spaced, 12-point type on 8 1/2" x 11" or A4 paper (no e-mail or fax) with full 1" margins at top, bottom, left, and right. Two-sided printing is preferred. Papers are to contain ALL of the following 9 items within a maximum of 10 pages, in this order: (1) title of paper, (2) author name(s), (3) author physical address(es), (4) author e-mail address(es), (5) author phone number(s), (6) a 100-200 word abstract of the paper, (7) the paper's category (chosen from one of the following five alternatives: genetic programming, genetic algorithms, classifier systems, evolutionary programming, or evolution strategies), (8) the text of the paper (including all figures and tables), and (9) bibliography. All other elements of the paper (e.g., acknowledgements, appendices, if any) must come within the maximum of 10 pages. Review criteria will include significance of the work, novelty, sufficiency of information to permit replication (if applicable), clarity, and writing quality. The first-named author (or other designated author) will be notified of acceptance or rejection and reviewer comments by approximately Monday, February 26, 1996. 
Details of the style of the camera-ready paper will be announced later, but will resemble that of the SAB-94 and ALIFE-94 proceedings recently published by The MIT Press. The deadline for the camera-ready, revised version of accepted papers will be announced later but will be approximately Wednesday, March 20, 1996. Proceedings will be published by The MIT Press and will be available at the conference. One of the authors will be expected to present each accepted paper at the conference. HOUSING: Stanford is about 40 miles south of San Francisco, about 25 miles south of the SF airport, and about 25 miles north of San Jose. There are numerous hotels of all types adjacent to, or near, the campus (many along El Camino Real in Palo Alto and nearby Mountain View). An optional housing and meals package will be available from the Conference Department at Stanford and will be announced later. FOR MORE INFORMATION: E-mail: GP96 at Cs.Stanford.Edu GP-96 Conference c/o John Koza Computer Science Department Margaret Jacks Hall Stanford University Stanford, CA 94305-2140 USA From rob at comec4.mh.ua.edu Thu Feb 16 14:32:53 1995 From: rob at comec4.mh.ua.edu (Robert Elliott Smith) Date: Thu, 16 Feb 95 13:32:53 -0600 Subject: GA conference registration info Message-ID: <9502161932.AA17284@comec4.mh.ua.edu> 6TH INTERNATIONAL CONFERENCE ON GENETIC ALGORITHMS July 15-19, 1995 University of Pittsburgh Pittsburgh, Pennsylvania, USA CONFERENCE COMMITTEE Stephen F. Smith, Chair Carnegie Mellon University Peter J. Angeline, Finance Loral Federal Systems Larry J. Eshelman, Program Philips Laboratories Terry Fogarty, Tutorials University of the West of England, Bristol Alan C. Schultz, Workshops Naval Research Laboratory Alice E. Smith, Local Arrangements University of Pittsburgh Robert E. 
Smith, Publicity University of Alabama The 6th International Conference on Genetic Algorithms (ICGA-95) brings together an international community from academia, government, and industry interested in algorithms suggested by the evolutionary process of natural selection, and will include pre-conference tutorials, invited speakers, and workshops. Topics will include: genetic algorithms and classifier systems, evolution strategies, and other forms of evolutionary computation; machine learning and optimization using these methods, their relations to other learning paradigms (e.g., neural networks and simulated annealing), and mathematical descriptions of their behavior. The conference host for 1995 will be the University of Pittsburgh located in Pittsburgh, Pennsylvania. The conference will begin Saturday afternoon, July 15, for those who plan on attending the tutorials. A reception is planned for Saturday evening. The conference meeting will begin Sunday morning July 16 and end Wednesday afternoon, July 19. The complete conference program and schedule will be sent later to those who register. TUTORIALS ICGA-95 will begin with three parallel sessions of tutorials on Saturday. Conference attendees may attend up to three tutorials (one from each session) for a supplementary fee (see registration form). Tutorial Session I 11:00 a.m.-12:30 p.m. I.A Introduction to Genetic Algorithms Melanie Mitchell - A brief history of Evolutionary Computation. The appeal of evolution. Search spaces and fitness landscapes. Elements of Genetic Algorithms. A Simple GA. GAs versus traditional search methods. Overview of GA applications. Brief case studies of GAs applied to: the Prisoner's Dilemma, Sorting Networks, Neural Networks, and Cellular Automata. How and why do GAs work? I.B Application of Genetic Algorithms Lawrence Davis - There are hundreds of real-world applications of genetic algorithms, and a considerable body of engineering expertise has grown up as a result. 
This tutorial will describe many of those principles, and present case studies demonstrating their use. I.C Genetics-Based Machine Learning Robert Smith - This tutorial discusses rule-based, neural, and fuzzy techniques that utilize GAs for exploration in the context of reinforcement learning control. A rule-based technique, the learning classifier system (LCS), is shown to be analogous to a neural network. The integration of fuzzy logic into the LCS is also discussed. Research issues related to GA-based learning are overviewed. The application potential for genetics-based machine learning is discussed. Tutorial Session II 1:30-3:00 p.m. II.A Basic Genetic Algorithm Theory Darrell Whitley - Hyperplane Partitions and the Schema Theorem. Binary and Nonbinary Representations; Gray coding, Static hyperplane averages, Dynamic hyperplane averages and Deception, the K-armed bandit analogy and Hyperplane ranking. II.B Basic Genetic Programming John Koza - Genetic Programming is an extension of the genetic algorithm in which populations of computer programs are evolved to solve problems. The tutorial explains how crossover is done on program trees and illustrates how the user goes about applying genetic programming to various problems of different types from different fields. Multi-part programs and automatically defined functions are briefly introduced. II.C Evolutionary Programming David Fogel - Evolutionary programming, which originated in the early 1960s, has recently been successfully applied to difficult, diverse real-world problems. This tutorial will provide information on the history, theory, and practice of evolutionary programming. Case studies and comparisons will be presented. Tutorial Session III 3:30-5:00 p.m. III.A Advanced Genetic Algorithm Theory Darrell Whitley - Exact Non-Markov models of simple genetic algorithms. Markov models of simple genetic algorithms. The Schema Theorem and Price's Theorem.
Convergence Proofs, Exact Non-Markov models for permutation based representations. III.B Advanced Genetic Programming John Koza - The emphasis is on evolving multi-part programs containing reusable automatically defined functions in order to exploit the regularities of problem environments. ADFs may improve performance, improve parsimony, and provide scalability. Recursive ADFs, iteration-performing branches, various types of memories (including indexed memory and mental models), architecturally diverse populations, and point typing are explained. III.C Evolution Strategies Hans-Paul Schwefel and Thomas Baeck - Evolution Strategies in the context of their historical origin for optimization in Berlin in the 1960s. Comparison of the computer-versions (1+1) and (10,100) ES with classical optimum seeking methods for parameter optimization. Formal descriptions of ES. Global convergence conditions. Time efficiency in some simple situations. The role of recombination. Auto-adaptation of internal models of the environment. Multi-criteria optimization. Parallel versions. Short list of application examples. GETTING TO PITTSBURGH The Pittsburgh International Airport is served by most of the major airlines. Information on transportation from the airport and directions to the University of Pittsburgh campus, will be sent along with your conference registration confirmation letter. LODGING University Holiday Inn, 100 Lytton Avenue two blocks from convention site $92/day (single) $9 /day parking charge pool (indoor), exercise facilities Reserve by June 18. Call 412-682-6200. Hampton Inn, 3315 Hamlet Street 12 blocks from convention site $72/day (single) free parking, breakfast, and one-way airport transportation Reserve by July 1. Call 412-681-1000. Howard Johnson's, 3401 Boulevard of the Allies 12 blocks from convention site $56/day (single) free parking and Oakland transportation pool (outdoor) Reserve by June 13. Call 412-683-6100. 
Sutherland Hall (dorm), University Drive-Pitt campus 10 blocks from convention site (steep hill) $30/day, single no amenities (phone, TV, etc.) shared bathroom Reserve by July 1. Call 412-648-1100. CONFERENCE FEES REGISTRATION FEE Registrations received by June 11 are $250 for participants and $100 for students. Registrations received on or after June 12 and walk-in registrations at the conference will be $295 for participants and $125 for students. Included in the registration fee are entry to all technical sessions, several lunches, coffee breaks, reception Saturday evening, conference materials, and conference proceedings. TUTORIALS There is a separate fee for the Saturday tutorial sessions. Attendees may register for up to three tutorials (one from each tutorial session). The fee for one tutorial is $40 for participants and $15 for students; two tutorials, $75 for participants and $25 for students; three tutorials, $110 for participants and $35 for students. The deadline to register without a late fee is June 11. After this date, participants and students will be assessed a flat $20 late fee, whether they register for one, two, or all three tutorials. CONFERENCE BANQUET Not included in the registration fee is the ticket for the banquet. Participants may purchase banquet tickets for an additional $30. Note - Please purchase your banquet tickets now - you will be unable to buy them upon arrival. GUEST TICKETS Guest tickets for the Saturday evening reception are $10 each; guest tickets for the conference banquet are $30 each for adults and $10 each for children. Note - Please purchase additional tickets now - you will be unable to buy them upon arrival. CANCELLATION/REFUND POLICY For cancellations received up to and including June 1, a full refund will be given minus a $25 handling fee.
FINANCIAL ASSISTANCE FOR STUDENTS With support from the Naval Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, a limited fund has been set aside to assist students with travel expenses. Students should have their advisor certify their student status and that sufficient funds are not available. Students interested in obtaining such assistance should send a letter before May 22 describing their situation and needs to: Peter J. Angeline, c/o Advanced Technologies Dept, Loral Federal Systems, State Route 17C, Mail Drop 0210, Owego, NY 13827-3994 USA. TO REGISTER Early registration is recommended. You may register by mail, fax, or email using a credit card (MasterCard or VISA). You may also pay by check if registering by mail. Note: Students must also send with their registration a photocopy of their valid university student ID or a letter from a professor. Complete the registration form and return with payment. If more than one registrant from the same institution will be attending, make additional copies of the registration form. Mail ICGA 95 Department of Industrial Engineering University of Pittsburgh 1048 Benedum Hall Pittsburgh, PA 15261 USA Fax Fax the registration form to 412-624-9831 Email Receive email form by contacting: icga at engrng.pitt.edu Up-to-date conference information is available on the World Wide Web (WWW) http://www.aic.nrl.navy.mil/galist/icga95/ CALL FOR ICGA '95 WORKSHOP PROPOSALS ICGA workshop proposals are now being solicited. Workshops tend to range from informal sessions to more formal sessions with presentations and working notes. Each accepted workshop will be supplied with space and an overhead projector. VCRs might be available. If you are interested in organizing a workshop, send a workshop title, short description, proposed format, and name of the organizers to the workshop coordinator by April 15, 1995. Alan C. 
Schultz - schultz at aic.nrl.navy.mil Code 5510, Navy Center for Artificial Intelligence Naval Research Laboratory Washington DC 30375-5337 USA REGISTRATION FORM Prof / Dr / Mr / Ms / Mrs Name ______________________________________________________ Last First MI I would like my name tag to read _____________________________________________ Affiliation/Business ______________________________________________________ Address ______________________________________________________ City ______________________________________________________ State ___________________ Zip ________________________ Country_____________________________________________ Telephone (include area code) Business _______________________________ Home______________________________ FEES (all figures in US dollars) Conference Registration Fee By June 11 ___ participant, $250 ___ student, $100 =$_________ On or after June 12 ___ participant, $295 ___ student, $125 =$_________ July 15 Tutorials Select up to three tutorials, but no more than one tutorial per tutorial session. 
Tutorial Session I: ___I.A Introduction to Genetic Algorithms ___I.B Application of Genetic Algorithms ___I.C Genetics-Based Machine Learning Tutorial Session II: ___II.A Basic Genetic Algorithm Theory ___II.B Basic Genetic Programming ___II.C Evolutionary Programming Tutorial Session III: ___III.A Advanced Genetic Algorithm Theory ___III.B Advanced Genetic Programming ___III.C Evolution Strategies Tutorial Registration Fee By June 11 ___one tutorial: participant, $40 student, $15 ___two tutorials: participant, $75 student, $25 = $_________ ___three tutorials: participant, $110 student, $35 On or after June 12, participants and students add a $20 late fee for tutorials = $_________ Banquet Ticket (not included in the Registration Fee; no tickets may be purchased upon arrival) participants/adult guest #______ ticket(s) @ $30 = $_________ child #______ ticket(s) @ $10 = $_________ Additional Saturday reception tickets (no tickets may be purchased upon arrival) guest #______ ticket(s) @ $10 = $_________ TOTAL (US dollars) $____________ METHOD OF PAYMENT ___ Check (payable to the University of Pittsburgh, US banks only) ___ MasterCard ___ VISA #__________________________________________ Expiration Date ____________________ Signature of card holder ______________________________________________ Note: Students must submit with their registration a photocopy of their valid student ID or a letter from a professor. 
Mail ICGA 95, Department of Industrial Engineering, University of Pittsburgh, 1048 Benedum Hall, Pittsburgh, PA 15261 USA Fax 412-624-9831 Email To receive email form: icga at engrng.pitt.edu World Wide Web (WWW) For up-to-date conference information: http://www.aic.nrl.navy.mil/galist/icga95/ From sylee at eekaist.ac.kr Fri Feb 17 22:38:28 1995 From: sylee at eekaist.ac.kr (Soo-Young Lee) Date: Sat, 18 Feb 1995 12:38:28 +0900 Subject: POST-DOC POSITION Message-ID: <199502180338.MAA03152@eekaist.kaist.ac.kr> POSTDOCTORAL POSITION / GRADUATE STUDENTS Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology A postdoctoral position is available beginning immediately or summer 1995. The position is for one year initially, and may be extended for another year. Graduate students with full scholarships are also welcome, especially from developing countries. We are seeking individuals interested in research on neural net applications and/or VLSI implementation. We especially emphasize a "systems" approach, which combines neural network theory, application-specific knowledge, and hardware implementation technology for much better performance. Although many applications are currently under investigation, speech recognition is the preferred choice at this moment. Experience in digital VLSI will be helpful. Interested parties should send a C.V. and a brief statement of research interests to the address listed below. Present address: Prof. Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Fax: +82-42-869-3410 E-mail: sylee at ee.kaist.ac.kr RESEARCH INTERESTS OF THE GROUP The Korea Advanced Institute of Science and Technology (KAIST) is a unique engineering school, which emphasizes graduate studies through high-quality research.
All graduate students receive full scholarships, and Ph.D. course students are exempt from military service. The Department of Electrical Engineering is the largest, with 39 professors, 250 Ph.D. course students, 180 Master course students, and 300 undergraduate students. The Computation and Neural Systems Laboratory is led by Prof. Soo-Young Lee, and consists of about 10 Ph.D. course students and about 5 Master course students. The primary focus of this laboratory is to merge neural network theory, VLSI implementation technology, and application-specific knowledge for much better performance in real-world applications. Speech recognition, pattern recognition, and control applications have been emphasized. Neural network models developed include the Multilayer Bidirectional Associative Memory, an extension of BAM to a multilayer architecture; IJNN (Intelligent Judge Neural Networks) for intelligent verdicts on disputes among several low-level classifiers; TAG (Training by Adaptive Gain) for large-scale implementation and speaker adaptation; and a Hybrid Hebbian-Backpropagation Algorithm for MLPs with improved robustness and generalization. The correlation matrix MBAM chip, and both MLP and RBF chips with on-chip learning capability, have been fabricated. From koch at klab.caltech.edu Sun Feb 19 02:39:30 1995 From: koch at klab.caltech.edu (Christof Koch) Date: Sat, 18 Feb 1995 23:39:30 -0800 Subject: Announcement for Neuromorphic Engineering workshop in Telluride in 95 Message-ID: <199502190739.XAA14936@kant.klab.caltech.edu> CALL FOR PARTICIPATION IN A WORKSHOP ON "NEUROMORPHIC ENGINEERING" JUNE 25 - JULY 8, 1995 TELLURIDE, COLORADO Deadline for application is April 24, 1995. Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one two-week workshop that will be held in Telluride, Colorado in 1995. The first Telluride Workshop on Neuromorphic Engineering was held in July, 1994 and was sponsored by the NSF.
A summary of the 94 workshop and a list of participants is available over MOSAIC: http://www.klab.caltech.edu/~timmer/telluride.html OR http://www.salk.edu/~bryan/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological and engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on ``active'' participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this two-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. The focus of the first week is on exploring neuromorphic systems through the medium of analog VLSI and will be organized by Rodney Douglas (Oxford) and Misha Mahowald (Oxford). Sessions will cover methods for the design and fabrication of multi-chip neuromorphic systems.
This framework is suitable both for creating analogs of specific biological systems, which can serve as a modeling environment for biologists, and as a tool for engineers to create cooperative circuits based on biological principles. The workshop will provide the community with a common formal language for describing neuromorphic systems. Equipment will be available for participants to evaluate existing neuromorphic chips (including silicon retina, silicon neurons, oculomotor system). The second week of the course will be on vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Sessions will cover issues of sensory-motor integration in the mammalian brain. Special emphasis will be placed on understanding neural algorithms used by the brain which can provide insights into constructing electrical circuits which can accomplish similar tasks. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. These researchers will also be asked to bring their own demonstrations, classroom experiments, and software for computer models. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe, which in turn provides drive signals to the three motors of the head. FORMAT: Time will be divided between lectures, practical labs, and interest group meetings. There will be three lectures in the morning that cover issues that are important to the community in general. In general, one lecture will be neurobiological, one computational, and one on analog VLSI. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers.
Participants will be free to explore and play with whatever they choose in the afternoon. Participants are encouraged to bring their own material to share with others. After dinner, participants will get together more informally to hear lectures and demonstrations. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 4 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrators to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware problems. We will have a network of SUN workstations running UNIX and PCs running Windows and Linux. Up to $500 will be reimbursed for domestic travel and all housing expenses will be provided. Participants are expected to pay for food and incidental expenses and are expected to stay for the duration of this two-week workshop. PARTIAL LIST OF INVITED LECTURERS: Richard Andersen, Caltech. Chris Atkeson, Georgia Tech. Dana Ballard, Rochester. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech.
Steve Deiss, Applied NeuroDynamics, San Diego. Chris Diorio, Caltech. Rodney Douglas, Oxford and Zurich. John Elias, Delaware University. Mary Hayhoe, Rochester. Christof Koch, Caltech. Steve Lisberger, UC San Francisco: Oculomotor System. Shih-Chii Liu, Caltech and Rockwell. Jack Loomis, UC Santa Barbara. Jonathan Mills, Indiana University. Misha Mahowald, Oxford and Zurich. Mark Tilden, Los Alamos: Multi-legged Robots. Terry Sejnowski, Salk Institute and UC San Diego. Mona Zaghloul, George Washington University. HOW TO APPLY: The deadline for receipt of applications is April 24, 1995. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One-page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1995. From sutton at gte.com Sun Feb 19 13:03:59 1995 From: sutton at gte.com (Rich Sutton) Date: Sun, 19 Feb 1995 13:03:59 -0500 Subject: RL papers available by ftp Message-ID: <199502191756.AA21721@ns.gte.com> The following previously published papers related to reinforcement learning are available online for the first time: Sutton, R.S. (1988) "Learning to predict by the methods of temporal differences," Machine Learning, 3, No. 1, pp. 9-44. Sutton, R.S.
(1990) "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming," Proceedings of the Seventh International Conference on Machine Learning, pp. 216-224, Morgan Kaufmann. Sutton, R.S. (1991a) "Planning by incremental dynamic programming," Proceedings of the Eighth International Workshop on Machine Learning, pp. 353-357, Morgan Kaufmann. Sutton, R.S. (1991b) "Dyna, an integrated architecture for learning, planning and reacting," Working Notes of the 1991 AAAI Spring Symposium on Integrated Intelligent Architectures and SIGART Bulletin 2, pp. 160-163. Sutton, R.S. (1992a) "Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta," Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 171-176, MIT Press. Sutton, R.S., Whitehead, S.D. (1993) "Online learning with random representations," Proceedings of the Tenth Annual Conference on Machine Learning, pp. 314-321, Morgan Kaufmann. These papers can be obtained by ftp from the small archive at ftp.gte.com/reinforcement-learning. See the file CATALOG for filenames and abstracts. From dclouse at cs.ucsd.edu Mon Feb 20 17:32:47 1995 From: dclouse at cs.ucsd.edu (Dan Clouse) Date: Mon, 20 Feb 1995 14:32:47 -0800 (PST) Subject: Language Induction Tech Report Available Message-ID: <9502202232.AA09211@roland> FTP-host: cs.ucsd.edu FTP-filename: /pub/tech-reports/clouse.nnfir.ps.Z The file clouse.nnfir.ps.Z is now available for copying from the University of California at San Diego, Computer Science and Engineering Department ftp server. (20 pages, compressed file size 164K) TITLE: Learning Large DeBruijn Automata with Feed-Forward Neural Networks AUTHORS: Daniel S. Clouse, UCSD CSE Dept. C. Lee Giles, NEC Research Institute, Princeton, NJ. Bill G. Horne, NEC Research Institute, Princeton, NJ. Garrison W. Cottrell, UCSD CSE Dept.
ABSTRACT: In this paper we argue that a class of finite state machines (FSMs) which is representable by the NNFIR (Neural Network Finite Impulse Response) architecture is equivalent to the definite memory sequential machines (DMMs), which are implementations of deBruijn automata. We support this claim by drawing parallels between circuit topologies of sequential machines used to implement FSMs and the architecture of the NNFIR. Further support is provided by simulation results showing that an NNFIR architecture is able to perfectly learn a large definite memory machine (2048 states) with very few training examples. We also discuss the effects that variations in the NNFIR architecture have on the class of problems easily learnable by the network. From maggini at mcculloch.ing.unifi.it Tue Feb 21 07:03:49 1995 From: maggini at mcculloch.ing.unifi.it (Marco Maggini) Date: Tue, 21 Feb 95 13:03:49 +0100 Subject: Call for papers: Neurocomputing Journal Message-ID: <9502211203.AA12867@mcculloch.ing.unifi.it> ========================================================== CALL FOR PAPERS Special Issue on Recurrent Networks for Sequence Processing in the Neurocomputing Journal (Elsevier) M. Gori, M. Mozer, A.C. Tsoi, and R.L. Watrous (Eds) ========================================================== Recurrent neural networks are complex parametric dynamic systems that can exhibit a wide range of behaviors. The focus of this issue will be on recurrent networks that turn out to be useful for processing any kind of temporal sequence. To this end, the network dynamics must be exploited to capture temporal dependencies, rather than to relax to steady-state fixed points as Hopfield networks and Boltzmann machines do.
Possible topics for papers submitted to the special issue include, but are not limited to: Short- and long-term memory architectures: theoretical analyses and empirical studies comparing a variety of architectures; Learning algorithms (including constructive and pruning schemes) and related theoretical issues (e.g.: learning long-term dependencies, local minima); Integration of prior knowledge and learning from examples; Problem of temporal chunking and learning embedded sequences; Application to recognition of time-dependent signals (e.g. various levels of speech tasks, biological signals both discrete (DNA, protein sequences) and continuous (ECG, EEG)); Application to time-series prediction; Application to modeling and control of dynamic systems (e.g. mobile robot guidance); Application to inductive inference of grammars and to natural language processing. Prospective authors should submit six copies of a manuscript to one of the guest editors by March 30, 1995. ===================================== Marco Gori Dipartimento di Sistemi e Informatica Universita' di Firenze Via S. Marta, 3 50139 Firenze (Italy) voice: +39 (55) 479-6265 fax: +39 (55) 479-6363 e-mail: marco at ingfi1.ing.unifi.it Michael C. Mozer Department of Computer Science University of Colorado Boulder, CO 80309-0430 (USA) voice: +1 (303) 492-4103 fax: +1 (303) 492-2844 e-mail: mozer at neuron.cs.colorado.edu Ah Chung Tsoi Department of Electrical and Computer Engineering University of Queensland Brisbane Qld 4072 Australia voice: +61 (7) 365-3950 fax: +61 (7) 365-4999 e-mail: act at s1.elec.uq.oz.au Raymond L. Watrous Learning System Siemens Corporate Research 755 College Road East Princeton, NJ 08540 voice: +1 (609) 734-6596 fax: +1 (609) 734-6565 e-mail: watrous at learning.scr.siemens.com ================================================= From lpratt at franklinite.Mines.Colorado.EDU Tue Feb 21 03:35:47 1995 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. 
Pratt) Date: Tue, 21 Feb 95 09:35:47 +0100 Subject: Neural network transfer web page Message-ID: <9502210835.AA01862@franklinite.Mines.Colorado.EDU> Hi all! Per a recent discussion on connectionists, I have started a web page for pointers to neural network transfer. Right now, this page just contains the text of that discussion, along with pointers to all of my papers on transfer. I would be delighted to add pointers to others' papers on transfer to this web page. Just send me the ftp or www address, along with relevant bibliographic information, thanks! I think that this can become an important resource for those of us who are interested in this important subject area. The web page is: http://vita.mines.colorado.edu:3857/0/lpratt/transfer.html --Lori Pratt From french at cogsci.indiana.edu Tue Feb 21 12:46:09 1995 From: french at cogsci.indiana.edu (Bob French) Date: Tue, 21 Feb 95 12:46:09 EST Subject: Sequential learning and interactive tandem networks: paper available Message-ID: FTP-host: cogsci.indiana.edu FTP-filename: /pub/french.tandem-stm-ltm.ps.Z Total no. of pages: 6 The following paper is now available by anonymous ftp from the CRCC archive at Indiana University, Bloomington. Interactive Tandem Networks and the Problem of Sequential Learning Robert M. French CRCC, Indiana University Bloomington, IN 47408 french at cogsci.indiana.edu Abstract This paper presents a novel connectionist architecture to handle the "sensitivity-stability" problem and, in particular, an extreme manifestation of the problem, catastrophic interference. This architecture, called an interactive tandem-network (ITN) architecture, consists of two continually interacting networks, one -- the LTM network -- dynamically storing "prototypes" of the patterns learned, the other -- the STM network -- being responsible for "short-term" learning of new patterns. 
Prototypes stored in the LTM network influence hidden-layer representations in the STM network and, conversely, newly learned representations in the STM network gradually modify the more stable LTM prototypes. As prototypes are learned by the LTM network, they are dynamically constrained to maximize mutual orthogonality. This system of tandem networks performs particularly well on the problem of catastrophic interference. It also produces "long-term" representations that are stable in the face of new input and "short-term" representations that remain sensitive to new input. Justification for this type of architecture is similar to that given recently by McClelland, McNaughton, & O'Reilly (1994) in arguing for the necessary complementarity of the hippocampal (short-term memory) and neocortical (long-term memory) systems. * * * This paper has been submitted to The 1995 Cognitive Science Society Conference. A longer version of this paper is in preparation and, consequently, comments are welcome. I am currently visiting at the University of Wisconsin. Any snail-mail should be sent there. Robert M. French Department of Psychology University of Wisconsin Madison, Wisconsin 53706 Tel: (608) 243-8026 FAX: (608) 262-4029 email: french at cogsci.indiana.edu french at merlin.psych.wisc.edu From marwan at sedal.su.oz.au Tue Feb 21 20:21:51 1995 From: marwan at sedal.su.oz.au (Marwan A. Jabri, Sydney Univ. Elec.
Eng., Tel: +61-2 692 2240) Date: Wed, 22 Feb 1995 12:21:51 +1100 Subject: Jobs Message-ID: <199502220121.MAA17710@sedal.sedal.su.OZ.AU> J O B S -- J O B S -- J O B S -- J O B S -- J O B S -- J O B S -- J O B S (* Note, 2 ads in this message *) Systems Engineering and Design Automation Laboratory Department of Electrical Engineering The University of Sydney Research Engineer Optical Character Recognition The appointee will join a team working on a project funded by the Australian Research Council and aiming at the development of artificial neural network techniques for low resolution character segmentation and recognition. Applicants with a bachelor or higher degree in electrical engineering or computer science with experience in pattern recognition, preferably in the area of neural-computing-based OCR, are invited to apply. The appointment will be for an initial period of one year and renewable for up to three years, subject to satisfactory progress and funding. If applicable, the appointee can enrol for a higher degree in an area of the project. Salary: (Level A academic) 30k-40k depending on experience. Duty statement The research engineer will: - perform literature search - design and simulate neural computing architectures for OCR - design and implement software for OCR and interfacing to document scanning equipment - develop and integrate OCR software - work independently REPORTING: Reports to Dr. M. Jabri QUALIFICATIONS: B.E. or B. Computer Science or higher. SKILLS: - design, simulation and tuning of artificial neural networks - C and/or C++ programming - knowledge of Unix - design, implementation and testing of neural computing software - communication with others - writing reports and technical papers.
- work independently ************************************************************************ Systems Engineering and Design Automation Laboratory Department of Electrical Engineering The University of Sydney Electronics Research Engineer Adiabatic Computing for Implantable Devices The appointee will be part of a team working on a project in collaboration with a world-leading company in the area of implantable devices. The project is funded by the Australian Research Council and aims at investigating circuits and architectures for adiabatic computing in the context of implantable devices. Applicants with a bachelor's or higher degree in electronics or computer engineering with experience in full-custom integrated circuits (preferably analogue) are invited. The appointment will be for an initial period of one year and renewable for up to three years, subject to satisfactory progress and funding. If applicable, the appointee can enrol for a higher degree in an area of the project. Salary: (Level A academic) 30k-40k depending on experience. Duty statement The engineer will: - perform literature search - design and implement architecture and circuits for adiabatic computing - design testing system interface to test circuits - design and implement software for simulating the circuits - work independently REPORTING: Reports to Dr. M. Jabri QUALIFICATIONS: B.E. or higher. SKILLS: - programming in C and/or C++ - knowledge of Unix - design, implementation and testing of electronic and microelectronic circuits - design and implementation of software programs to simulate logic and analogue circuits - use of computer aided design tools - communication with others - writing reports and technical papers.
- work independently How to apply: o Send CV with covering letter specifying the title of the position to Marwan Jabri (address below) o The CV should include a minimum of 2 referees who can comment on the professional activities of the applicant o Include in the CV comments with respect to desirable expertise and skills Marwan Jabri Sydney University Electrical Engineering NSW 2006 Australia Tel: (+61-2) 351-2240 Fax: (+61-2) 660-1228 email: marwan at sedal.su.oz.au From krogh at nordita.dk Wed Feb 22 10:50:51 1995 From: krogh at nordita.dk (Anders Krogh) Date: Wed, 22 Feb 95 16:50:51 +0100 Subject: Paper available on neural network ensembles Message-ID: <9502221550.AA17959@norsci0.nordita.dk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/krogh.ensemble.ps.Z The file krogh.ensemble.ps.Z can now be copied from Neuroprose. The paper is 8 pages long. Hard copies are available. Neural Network Ensembles, Cross Validation, and Active Learning by Anders Krogh and Jesper Vedelsby Abstract: Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance. It is shown how to estimate the optimal weights of the ensemble members using unlabeled data. By a generalization of query by committee, it is finally shown how the ambiguity can be used to select new training data to be labeled in an active learning scheme. The paper will appear in G. Tesauro, D. S. Touretzky and T. K.
Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. ________________________________________ Anders Krogh Nordita Blegdamsvej 17, 2100 Copenhagen, Denmark email: krogh at nordita.dk Phone: +45 3532 5503 Fax: +45 3138 9157 ________________________________________ From french at cogsci.indiana.edu Wed Feb 22 09:57:41 1995 From: french at cogsci.indiana.edu (Bob French) Date: Wed, 22 Feb 95 09:57:41 EST Subject: Problems printing French's sequential learning paper Message-ID: A number of people have had trouble printing my file "Interactive Tandem Networks and the Problem of Sequential Learning" which was announced yesterday. The problem, for those who have already retrieved the file, is an extraneous ^D at the beginning and end of the file. (Thanks Guszti Bartfai, Victoria Univ. of Wellington, for pointing out the problem and the fix.) Unfortunately, the file was test-printed by two friends who printed it through Ghostview (it views and prints just fine through Ghostview). So, if you find reading PostScript dumps sort of tedious, either remove the ^D from the very beginning and end of the ps file or retrieve the file again from cogsci.indiana.edu (see below). I have fixed the problem. Sorry for any inconvenience this might have caused people. For those who wish to re-retrieve the file: Paper name: Interactive Tandem Networks and the Problem of Sequential Learning anonymous ftp site: cogsci.indiana.edu file name: /pub/french.tandem-stm-ltm.ps.Z Bob French =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Robert M.
French Department of Psychology University of Wisconsin Madison, WI 53706 Tel: (608) 262-5207 (608) 243-8026 email: french at merlin.psych.wisc.edu french at cogsci.indiana.edu =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From degaris at hip.atr.co.jp Wed Feb 22 18:28:34 1995 From: degaris at hip.atr.co.jp (Hugo de Garis) Date: Wed, 22 Feb 95 18:28:34 JST Subject: A 50,000 neuron artificial brain Message-ID: <9502220928.AA05644@gauss> Dear Connectionists, A 50,000 Neuron Artificial Brain 50,000 is roughly the number of artificial neurons I can grow/evolve based on cellular automata in the 32 mega CA cell "CAM8" cellular automata machine from MIT that I have on my desk. Using hand-coded cellular automata rules to grow neurons from seeder CA cells, one can then feed genetic algorithm "chromosome" growth strings into dendrites and axons to grow random neural circuits, whose fitness at controlling some process is measured after the circuit is grown and used to transmit CA based neural signals. Using these ideas and a future "superCAM", our group hopes to grow/evolve an artificial brain of a billion neurons by 2001. This is quite feasible because RAM is cheap (even gigabytes): the states of the CA cells can be stored in gigabytes of RAM, and so can the CA state transition rules. The bottleneck is the CA processor. MIT's CAM8 can update 200 million 16-bit CA cells per second. Our super CAM should be thousands of times faster. Reference : "An Artificial Brain : ATR's CAM-Brain Project Aims to Build/Evolve an Artificial Brain with a Million Neural Net Modules inside a Trillion Cell Cellular Automata Machine", Hugo de Garis, New Generation Computing Journal, Vol. 12, No. 2, Ohmsha & Springer Verlag, 1994. For more information concerning our CAM-Brain Project, please contact Dr.
Hugo de Garis, Brain Builder Group, Evolutionary Systems Dept, ATR Human Information Processing Research Labs, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kansai Science City, Kyoto-fu, 619-02, Japan. tel. + 81 (0)7749 5 1079, fax. + 81 (0)7749 5 1008, email. degaris at hip.atr.co.jp Cheers, Hugo de Garis. From ken at phy.ucsf.edu Wed Feb 22 12:55:07 1995 From: ken at phy.ucsf.edu (Ken Miller) Date: Wed, 22 Feb 1995 09:55:07 -0800 Subject: Paper available: "RFs and Maps in the Visual Cortex" Message-ID: <9502221755.AA01781@coltrane.ucsf.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/miller.rfs_and_maps.ps.Z The file miller.rfs_and_maps.ps.Z is now available for copying from the Neuroprose repository. This is a review paper: Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns (26 pages) Kenneth D. Miller Dept. of Physiology University of California, San Francisco To appear in: Models of Neural Networks III, E. Domany, J.L. van Hemmen, and K. Schulten, Eds. (Springer-Verlag, NY), 1995. ABSTRACT: The formation of ocular dominance and orientation columns in the mammalian visual cortex is briefly reviewed. Correlation-based models for their development are then discussed, beginning with the models of Von der Malsburg. For the case of semi-linear models, model behavior is well understood: correlations determine receptive field structure, intracortical interactions determine projective field structure, and the ``knitting together'' of the two determines the cortical map. This provides a basis for simple but powerful models of ocular dominance and orientation column formation: ocular dominance columns form through a correlation-based competition between left-eye and right-eye inputs, while orientation columns can form through a competition between ON-center and OFF-center inputs. 
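The correlation-based competition just described can be caricatured in a few lines. The sketch below is a hypothetical toy (mine, not a model from the review): one cortical unit receives a left-eye and a right-eye weight, Hebbian growth is driven by an assumed input correlation matrix in which the two eyes are anticorrelated, subtractive normalization conserves the total weight, and clipping bounds each weight. A tiny initial asymmetry is amplified until one eye captures the unit:

```python
# Hypothetical toy of correlation-based ocular dominance competition.
# Illustrative sketch only, NOT a model from the review: two scalar
# weights stand in for the left-eye and right-eye afferents to one cell.
c_same, c_opp = 1.0, -0.2      # within-eye vs. between-eye input correlation
w_left, w_right = 0.51, 0.49   # nearly symmetric initial weights
lr, w_max = 0.05, 1.0

for _ in range(500):
    # Hebbian growth driven by the assumed input correlation matrix.
    d_left = lr * (c_same * w_left + c_opp * w_right)
    d_right = lr * (c_opp * w_left + c_same * w_right)
    # Subtractive normalization: conserve the total synaptic weight.
    mean = (d_left + d_right) / 2.0
    d_left, d_right = d_left - mean, d_right - mean
    # Clip each weight to [0, w_max].
    w_left = min(max(w_left + d_left, 0.0), w_max)
    w_right = min(max(w_right + d_right, 0.0), w_max)

print(w_left, w_right)  # the initially stronger eye takes over the cell
```

Segregation happens here because the between-eye anticorrelation makes the weight difference grow exponentially while normalization holds the sum fixed; the models reviewed in the paper of course use full afferent arrays and arbor functions rather than two scalar weights.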
These models account well for receptive field structure, but are not completely adequate to account for the details of cortical map structure. Alternative approaches to map structure, including the self-organizing feature map of Kohonen, are discussed. Finally, theories of the computational function of correlation-based and self-organizing rules are discussed. Sorry, hard copies are NOT available. Ken Kenneth D. Miller Dept. of Physiology UCSF 513 Parnassus San Francisco, CA 94143-0444 internet: ken at phy.ucsf.edu From pkso at castle.ed.ac.uk Wed Feb 22 14:26:54 1995 From: pkso at castle.ed.ac.uk (P Sollich) Date: Wed, 22 Feb 95 19:26:54 GMT Subject: Two NIPS 7 preprints: Query learning, relevance of thermodynamic limit Message-ID: <9502221926.aa03869@uk.ac.ed.castle> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/sollich.imperf_learn.ps.Z FTP-filename: /pub/neuroprose/sollich.linear_perc.ps.Z Hi all, the following two papers are now available from the neuroprose archive (8 pages each, to appear in: Advances in Neural Information Processing Systems 7, Tesauro G, Touretzky D S and Leen T K (eds.), MIT Press, Cambridge, MA, 1995). Sorry, hardcopies are not available. Any feedback is much appreciated! Regards, Peter Sollich ---------------------------------------------------------------------------- Peter Sollich Dept. of Physics University of Edinburgh e-mail: P.Sollich at ed.ac.uk Kings Buildings Tel. +44-131-650 5236 Mayfield Road Edinburgh EH9 3JZ, U.K. ---------------------------------------------------------------------------- Learning from queries for maximum information gain in imperfectly learnable problems Peter Sollich, David Saad Department of Physics, University of Edinburgh Edinburgh EH9 3JZ, U.K. ABSTRACT: In supervised learning, learning from queries rather than from random examples can improve generalization performance significantly. 
We study the performance of query learning for problems where the student cannot learn the teacher perfectly, which occur frequently in practice. As a prototypical scenario of this kind, we consider a linear perceptron student learning a binary perceptron teacher. Two kinds of queries for maximum information gain, i.e., minimum entropy, are investigated: Minimum {\em student space} entropy (MSSE) queries, which are appropriate if the teacher space is unknown, and minimum {\em teacher space} entropy (MTSE) queries, which can be used if the teacher space is assumed to be known, but a student of a simpler form has deliberately been chosen. We find that for MSSE queries, the structure of the student space determines the efficacy of query learning, whereas MTSE queries lead to a higher generalization error than random examples, due to a lack of feedback about the progress of the student in the way queries are selected. ---------------------------------------------------------------------------- Learning in large linear perceptrons and why the thermodynamic limit is relevant to the real world Peter Sollich Department of Physics, University of Edinburgh Edinburgh EH9 3JZ, U.K. ABSTRACT: We present a new method for obtaining the response function ${\cal G}$ and its average $G$ from which most of the properties of learning and generalization in linear perceptrons can be derived. We first rederive the known results for the `thermodynamic limit' of infinite perceptron size $N$ and show explicitly that ${\cal G}$ is self-averaging in this limit. We then discuss extensions of our method to more general learning scenarios with anisotropic teacher space priors, input distributions, and weight decay terms. Finally, we use our method to calculate the finite $N$ corrections of order $1/N$ to $G$ and discuss the corresponding finite size effects on generalization and learning dynamics. 
An important spin-off is the observation that results obtained in the thermodynamic limit are often directly relevant to systems of fairly modest, `real-world' sizes. ---------------------------------------------------------------------------- From krogh at nordita.dk Thu Feb 23 03:00:15 1995 From: krogh at nordita.dk (Anders Krogh) Date: Thu, 23 Feb 95 09:00:15 +0100 Subject: Correction: Paper available on neural network ensembles Message-ID: <9502230800.AA01778@norsci0.nordita.dk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/krogh.ensemble.ps.Z In my previous mail about our paper in Neuroprose it should have said: Hard copies are *NOT* available. Sorry. - Anders From greiner at scr.siemens.com Thu Feb 23 13:41:37 1995 From: greiner at scr.siemens.com (Russell Greiner) Date: Thu, 23 Feb 1995 13:41:37 -0500 Subject: CFP: "Relevance" issue of "Artificial Intelligence" Message-ID: <199502231841.NAA12556@eagle.scr.siemens.com> ************************************************************** ***** CALL FOR PAPERS (please post) ****** ************************************************************** Special Issue on RELEVANCE Journal: ARTIFICIAL INTELLIGENCE Guest Editors: Russell Greiner, Devika Subramanian, Judea Pearl With too little information, reasoning and learning systems cannot work effectively. Surprisingly, too much information can also cause the performance of these systems to degrade, in terms of both accuracy and efficiency. It is therefore important to determine what information must be preserved, i.e., what information is "relevant". There has been a recent flurry of interest in explicitly reasoning about relevance in a number of different disciplines, including the AI fields of knowledge representation, probabilistic reasoning, machine learning and neural computation, as well as communities that range from statistics and operations research to database and information retrieval to cognitive science.
Members of these diverse communities met at the 1994 AAAI Fall Symposium on Relevance to seek a better understanding of the various senses of the term "relevance", with a focus on finding techniques for improving the performance of embedded agents by ignoring or de-emphasizing irrelevant and superfluous information. Such techniques will clearly be of increasing importance as knowledge bases and learning systems become more comprehensive to accommodate real-world applications. To help consolidate leading research on relevance, the "Artificial Intelligence" journal is devoting a special issue to this topic. We are now seeking papers on (but not restricted to) the following topics: [Representing and reasoning with relevance:] reasoning about the relevance of distinctions to speed up computation, relevance reasoning in real-world KR tasks including design, diagnosis and common-sense reasoning, use of relevant causal information for planning, theories of discrete approximations. [Learning in the presence of irrelevant information:] removing irrelevant attributes and/or irrelevant training examples to make induction from very large datasets feasible; methods for learning action policies for embedded agents in large state spaces by explicit construction of approximations and abstractions. [Relevance and probabilistic reasoning:] simplifying/approximating Bayesian nets (both topology and values) to permit real-time reasoning; axiomatic bases for constructing abstractions and approximations of Bayesian nets and other probabilistic reasoning models. [Relevance in neural computational models:] methods for evolving computations that ignore aspects of the environment to make certain classes of decisions, automated design of topologies of neural models guided by relevance reasoning based on task class.
[Applications of relevance reasoning:] Applications that require explicit reasoning about relevance in the context of IVHS, exploring and understanding large information repositories, etc. We are especially interested in papers that have strong theoretical analyses complemented by experimental evidence from non-trivial applications. Authors are invited to submit manuscripts conforming to the AIJ submission requirements by 11 Sept 1995 to Russell Greiner or Devika Subramanian Siemens Corporate Research Department of Computer Science 755 College Road East 5141 Upson Hall, Cornell University Princeton, NJ 08540-6632 Ithaca, New York 14853 (609) 734-3627 (607) 255-9189 Papers will be subject to standard peer review. The first round of reviews will be completed and decisions mailed by 11 December 1995. The authors of accepted and conditionally accepted manuscripts will be required to send revised versions by 1 March 1996. The special issue is tentatively scheduled to appear sometime in 1996. We also plan to publish this issue as a book. Finally, to help us select appropriate reviewers in advance, authors should email us a title, set of keywords and a short abstract, to arrive by 4 September. To recap the significant dates: 4/Sep/95: Emailed titles, keywords and abstracts due 11/Sep/95: Manuscripts due 11/Dec/95: First round decisions 1/Mar/96: Revised manuscripts due ?? /96: special issue appears (tentative) From thimm at idiap.ch Thu Feb 23 08:06:54 1995 From: thimm at idiap.ch (Georg Thimm) Date: Thu, 23 Feb 95 14:06:54 +0100 Subject: Reannouncement: NN Event Announcements as WWW page & by FTP Message-ID: <9502231306.AA09699@idiap.ch> WWW page and FTP Server for Announcements of Conferences, Workshops and Other Events on Neural Networks ------------------------------------- This WWW page allows you to enter and look up announcements and call-for-papers for conferences, workshops, talks, and other events on neural networks.
The three event lists contain 65 forthcoming events and can be accessed via the IDIAP neural network home page with the URL: http://www.idiap.ch/html/idiap-networks.html The lists are now also available as formatted ASCII text. The files /html/NN-events/{conferences,workshops,other}.txt.Z are obtainable from the HTML menus or from our FTP server ftp.idiap.ch. Instructions for downloading these files are given below. The entries are grouped into: - Multi-day events for a larger number of people (Conferences, Congresses, large Workshops, etc.), - Multi-day events for a small audience (small Workshops, Summer Schools, etc.), and - One day events (Seminars, Talks, Presentations, etc.). The entries are ordered chronologically and presented in a standardized format for fast and easy lookup. The entry fields are: - the date and place of the event, - the title of the event, - a hyper link to more information about the event, - a contact address (surface mail address, email address, telephone number, and fax number), - deadlines, and - a field for comments. -------------------------------------------------------------- Example FTP session: UNIX> ftp ftp.idiap.ch Name (ftp.idiap.ch:thimm): anonymous Password: [Your e-mail address] ftp> cd html ftp> cd NN-events ftp> bin ftp> mget conferences.txt.Z other.txt.Z workshops.txt.Z mget conferences.txt.Z? y . . ftp> quit UNIX> zcat conferences.txt.Z | lpr -------------------------------------------------------------- I hope you find this service helpful. I am looking forward to comments and suggestions.
Georg Thimm -------------------------------------------------------------- Georg Thimm E-mail: thimm at idiap.ch Institut Dalle Molle d'Intelligence Fax: ++41 26 22 78 18 Artificielle Perceptive (IDIAP) Tel.: ++41 26 22 76 64 Case Postale 592 WWW: http://www.idiap.ch 1920 Martigny / Suisse -------------------------------------------------------------- From office at swan.lanl.gov Thu Feb 23 16:54:07 1995 From: office at swan.lanl.gov (Office Account) Date: Thu, 23 Feb 1995 14:54:07 -0700 Subject: MaxEnt95 Message-ID: <199502232154.OAA07023@goshawk.lanl.gov> ANNOUNCEMENT AND CALL FOR PAPERS The Fifteenth International Workshop on Maximum Entropy and Bayesian Methods 30 July - 4 August 1995 St. John's College Santa Fe, New Mexico, USA The Fifteenth International Workshop on Maximum Entropy and Bayesian Methods will be held at St. John's College in Santa Fe, New Mexico, USA. This Workshop is being jointly sponsored by the Center for Nonlinear Studies and the Radiographic Diagnostics Program, both at Los Alamos National Laboratory, and by the Santa Fe Institute (SFI). SCOPE: Traditional themes of the Workshop have been the application of the maximum entropy principle and Bayesian methods for statistical inference to diverse areas of scientific research. Practical numerical algorithms and principles for solving ill-posed inverse problems, image reconstruction and model building are emphasized. The Workshop also addresses common foundations for statistical physics, statistical inference, and information theory. The Workshop will begin on 31 July with a half-day tutorial on Bayesian methods and the principle of maximum entropy, which will be presented by Wray Buntine and Peter Cheeseman of NASA. The Workshop will also include several reviews of hot topics of broad interest, such as Markov Chain Monte Carlo methods for sampling posteriors, deformable geometric models, and the relation between information theory and physics. 
Specially organized sessions will highlight other topics, such as Bayesian time-series analysis, entropies in dynamical systems, and data analysis for physics simulations. The Workshop will be held in the beautiful setting of St. John's College, nestled in the foothills of the Sangre de Cristo Mountains, two miles from the Santa Fe Plaza. St. John's is a small liberal arts college, which emphasizes a classical curriculum. Social events include a reception at the Santa Fe Institute and an outing to the Science Museum at the Los Alamos National Laboratory. The timing of the Workshop coincides with the peak of the Santa Fe tourist and opera seasons. CALL FOR CONTRIBUTED PAPERS: Contributed papers are requested on the innovative use of Bayesian methods or the maximum entropy principle. The deadline for receipt of abstracts is April 14, 1995. They should be written in LaTeX or ASCII and limited to one page of about 400 words. Please include a curriculum vitae or a short biographical sketch. Specify preference for an oral or poster presentation. The abstracts will be made available at the Workshop. Manuscripts of accepted papers will be due at the Workshop, in camera-ready form, on diskette and as one hard copy, and they must be prepared in LaTeX using a style file that will be made available to authors by e-mail or by post. REGISTRATION: To receive registration materials and detailed information about this Workshop, contact Ms. Barbara Rhodes at the address below. Space limitations at St. John's College will restrict the number of attendees to about 125. Early registration will help assure a place at the meeting. SCHOLARSHIPS: Limited financial support will be available to assist graduate students and postdoctoral fellows who wish to attend the workshop. Requests for support may be submitted along with registration materials.
SCIENTIFIC ORGANIZERS: Kenneth Hanson and Richard Silver, Los Alamos National Laboratory Send abstracts and requests for registration materials and other information to the administrative organizer: Ms. Barbara Rhodes Tel: (505) 667-1444 CNLS, MS-B258 Fax: (505) 665-2659 Los Alamos National Laboratory E-mail: maxent at cnls.lanl.gov Los Alamos, NM 87545 USA From piche at mcc.com Thu Feb 23 19:48:20 1995 From: piche at mcc.com (Stephen Piche') Date: Thu, 23 Feb 95 18:48:20 CST Subject: CIFEr Conference Message-ID: <9502240048.AA25192@muon.mcc.com> IEEE/IAFE 1995 Conference on Computational Intelligence for Financial Engineering (CIFEr) April 9-11, 1995 New York City, Crowne Plaza Manhattan Sponsored by: The IEEE Neural Networks Council The International Association of Financial Engineers The IEEE Computer Society The IEEE/IAFE CIFEr Conference is the first major collaboration between the professional engineering and financial communities, and will be the leading forum for new technologies and applications in the intersection of computational intelligence and financial engineering. CONTENTS OF THIS POSTING: > Special Sessions > Technical Program > Tutorials > Exhibit Information > Registration > Early Bird Registration Costs > For More Info and Registration Material SPECIAL SESSIONS ^^^^^^^ ^^^^^^^^ 1. "Electronic Trading in the Next Millennium" Panel Chair: Raymond L. Killian, President, ITG Inc. 2. "Practitioner's Panel -- Investment Strategies using Non- Parametric Analyses" Panel Chair: Steve Armentrout, QED Investments 3. "First International Nonlinear Financial Forecasting Competition: Trading Systems Competition Methodologies and Design" Sponsored by Neurove$t Journal Panel Chairs: Manoel F. Tenorio, Randall Caldwell TECHNICAL PROGRAM ^^^^^^^^^ ^^^^^^^ The CIFEr Technical Program will be held Monday and Tuesday, April 10 -11, 1995 at the Crowne Plaza Manhattan. 
Registration for the Technical Program includes The Keynote Speech by Robert Merton, Special Sessions, Oral and Poster Presentations, entrance to the Exhibits Hall, invitation to the Sunday, April 9, 5:15 P.M. CIFEr Reception, and the CIFEr Proceedings. TUTORIALS ^^^^^^^^^ There are two tracks for the tutorials: Engineering Tutorial Track and the Finance Tutorial Track. Tutorials will be held from 9:00 A.M. to 5:00 P.M. on Sunday, April 9, 1995 at Crowne Plaza Manhattan. Early registration ends March 31, 1995. E1 - AN INTRODUCTION TO GENETIC ALGORITHMS: Melanie Mitchell, Santa Fe Institute. E2 - NONLINEAR TIME SERIES TOOLS FOR FINANCIAL MARKETS: Blake LeBaron, Dept. of Economics, University of Wisconsin E3 - NEURAL NETWORKS FOR TEMPORAL INFORMATION PROCESSING, S. Y. Kung, Electrical Engineering, Princeton University F1 - AN INTRODUCTION TO DERIVATIVES; FUTURES, FORWARDS, OPTIONS, AND SWAPS: John F. Marshall, Polytechnic University F2 - ADVANCED OPTION PRICING: ALTERNATIVE DISTRIBUTIONAL HYPOTHESES, The Wharton School F3 - TRADING MODELS AND STRATEGIES: TBA EXHIBIT INFORMATION ^^^^^^^ ^^^^^^^^^^^ Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to exhibit their products. Contact the CIFEr secretariat, Barbara Klemm, at (800) 321-6338. REGISTRATION ^^^^^^^^^^^^ Your conference registration fee includes refreshments, the cocktail reception (on Sunday, April 9 at 5:00 P.M.), and the conference proceedings. Be sure to attend the keynote speech luncheon. You may send a check or money order for your registration fee, or pay by credit card. Please make your check payable to "IEEE & IAFE CIFEr '95 Conference" and print the attendee(s) name(s) on the face of the check.
EARLY BIRD REGISTRATION THROUGH MARCH 31, 1995 ^^^^^ ^^^^ ^^^^^^^^^^^^ ^^^^^^^ ^^^^^ ^^ ^^^^ CIFEr CONFERENCE REGISTRATION (4/10-11/95) IEEE & IAFE MEMBERS $400 NON-MEMBERS $550 FULL-TIME STUDENTS $190 KEYNOTE SPEECH LUNCHEON MEAL TICKET $ 10 (MONDAY, 4/10/95) FOR MORE INFORMATION AND REGISTRATION MATERIALS ^^^ ^^^^ ^^^^^^^^^^^ ^^^ ^^^^^^^^^^^^ ^^^^^^^^^ 1. Telephone the CIFEr Secretariat at (714)752-8205 or (800) 321-6338. They'll be glad to register you and provide additional information. 2. Send an e-mail request to the CIFEr Secretariat at 74710.2266 at COMPUSERVE.COM You will receive a complete conference announcement which includes the program. CIFEr SECRETARIAT ^^^^^ ^^^^^^^^^^^ Meeting Management IEEE/IAFE Computational Intelligence for Financial Engineering 2603 Main Street, Suite #690 Irvine, California 92714 (714)752-8205 or (800) 321-6338 FAX (714) 752-7444 74710.2266 at COMPUSERVE.COM From Voz at dice.ucl.ac.be Fri Feb 24 11:41:18 1995 From: Voz at dice.ucl.ac.be (Jean-Luc Voz) Date: Fri, 24 Feb 1995 17:41:18 +0100 Subject: Elena Workshop Message-ID: <199502241641.RAA20360@ns1.dice.ucl.ac.be> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ I N V I T A T I O N to the ELENA WORKSHOP Project Results and Industrial Openings Louvain-la-Neuve, Belgium 18 April 1995 (An initiative of Technopol Brussels, Value Relay Center) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We are pleased to invite you to the ELENA WORKSHOP: Project Results and Industrial Openings. ELENA is an ESPRIT European Basic Research Action project (No 6891) on classification, neural networks and evolutive architectures, which investigates several aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification and incremental learning.
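One of those links can be shown in a few lines: a classifier trained on class-membership targets estimates the Bayesian posterior probabilities P(class|x). The sketch below is illustrative only (my own toy setup, not ELENA code): for two unit-variance Gaussian classes with means -1 and +1 and equal priors, the exact Bayes posterior is sigmoid(2x), and a single logistic unit trained by gradient descent recovers it approximately:

```python
import math
import random

random.seed(0)

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Two classes with equal priors: x ~ N(-1, 1) under class 0 and
# x ~ N(+1, 1) under class 1.  For this (assumed) setup the exact
# Bayes posterior is P(y = 1 | x) = sigmoid(2 * x).
n = 2000
data = [(random.gauss(2.0 * y - 1.0, 1.0), y)
        for y in (0, 1) for _ in range(n // 2)]

# A single logistic unit trained by batch gradient descent on cross-entropy.
w, b = 0.0, 0.0
lr = 1.0
for _ in range(300):
    gw = gb = 0.0
    for x, y in data:
        err = sigmoid(w * x + b) - y   # d(loss)/d(pre-activation)
        gw += err * x
        gb += err
    w -= lr * gw / n
    b -= lr * gb / n

# The learned unit should approximate the Bayes posterior sigmoid(2x),
# i.e. w near 2 and b near 0 (up to sampling noise).
print(w, b)
```

This is one face of the general result that networks trained on class-membership targets with squared-error or cross-entropy cost approximate posterior class probabilities, which is the kind of bridge between neural classifiers and Bayesian decision theory mentioned above.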
This text includes: - Details about the ELENA workshop (aims, who should attend, practical information,...) - A short presentation of the ELENA Esprit project - A registration form for the workshop. See also the WORKSHOP HOMEPAGE on the World Wide Web at URL: http://www.dice.ucl.ac.be/neural-nets/ELENA/ELENA_WORKSHOP.html Topics: ~~~~~~~ Classification by neural networks, pruning methods, incremental learning and evolutive architectures, Bayesian statistical classification. Benchmarking studies of classification algorithms. Digital and analog hardware implementations of classifiers. Objectives: ~~~~~~~~~~~ The ELENA consortium wishes to transfer its experience to a wide range of industrial users and scientists. The main objectives of this workshop are: - to describe to industrial users and scientists the state of the art in classification by neural networks and the latest developments in the framework of the ELENA project, - to provide practical guidelines for classification tools to users, on the choice of algorithms, of benchmarking methods, and on the software and hardware options depending on the application, - to allow industrial and other potential users to apply up-to-date powerful methods of classification in practical situations. The one-day workshop will be organized in Louvain-la-Neuve (25 km from Brussels), Belgium, on 18 April 1995; it will include talks and practical demonstrations. Who should attend the workshop? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The workshop will be of interest to actual and potential industrial users of classification methods in all domains (image and signal processing, control, pattern recognition, OCR), and also to scientists interested in classification with neural networks. Date ~~~~ The ELENA workshop is organized in Louvain-la-Neuve (Belgium), on 18 April 1995. Workshop Programme ~~~~~~~~~~~~~~~~~~ 09.00 Introduction. 09.15 General overview of the ELENA project: aims, potentialities. Prof. C. Jutten (INPG).
09.45 Main theoretical results and practical recommendations. Dr. P. Comon (Thomson Sintra), Prof. C. Jutten (INPG). Coffee break 11.00 Hardware implementations: practical considerations on digital and analog alternatives. Dr. M. Verleysen (UCL), Dr. A. Guerin (INPG), Prof. J. Cabestany (UPC), Ph. Thissen (UCL). Lunch 14.00 Benchmarking studies: experimental protocols and practical recommendations for classifier users. J.-L. Voz (UCL), Dr. A. Guerin (INPG). 15.00 PACKLIB, an interactive environment for data processing: features and demonstration. Dr. F. Blayo (Univ. Paris1), Y. Cheneval (EPFL). Coffee break 16.15 Discussion. Registration ~~~~~~~~~~~~ To register for the ELENA workshop, please transfer the registration fee to the account indicated overleaf, and send the following form BEFORE APRIL 7TH, 1995 - by fax to: No +32 10 47 86 67 (attn: JL Voz) OR - by mail to: Jean-Luc Voz Universite Catholique de Louvain DICE - Microelectronics Laboratory 3, Place du Levant B-1348 LOUVAIN-LA-NEUVE Belgium OR - by e-mail to: voz at dice.ucl.ac.be +-------------------------------------------------------------------+ | ELENA WORKSHOP | REGISTRATION FORM | | | Title (M., Mrs, Dr, Prof.): ...................................... | Name: ............................................................ | First Name: ...................................................... | Institution: ..................................................... | .................................................................. | Address: ......................................................... | .................................................................. | .................................................................. | Post/Zip code: ................................................... | City: ............................................................ | Country: ......................................................... | Phone: ...........................................................
| Fax: .............................................................
| E-mail: ..........................................................
| VAT No. (mandatory for registrants from the European Community):
| ..................................................................
|
| O I will participate in the ELENA workshop on 18 April 1995
|   in Louvain-la-Neuve, Belgium. I will transfer the fee of BEF 3000
|   (plus any bank and exchange charges) to the account:
|   Account : "Workshop ELENA"
|   Acc. No.: 271-0366343-06
|   Bank    : Generale de Banque
|             pl. de l'Universite 6
|             B-1348 Louvain-la-Neuve
|             Belgium
|
| O I will not participate in the workshop, but I am interested
|   in the ELENA project and any further workshop or published report.
+--------------------------------------------------------------------+

The registration fee of BEF 3000 includes attendance at the workshop, lunch, coffee breaks and printed documentation. Confirmation of registration and practical details (location,...) will be sent upon receipt of the registration form and payment.

The ELENA PROJECT
~~~~~~~~~~~~~~~~~
Neural networks are now known as powerful methods for empirical data analysis, especially for approximation (identification, control, prediction) and classification problems. The ELENA project investigates several aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification, incremental learning (control of the network size by adding or removing neurons),...

ELENA is an ESPRIT III Basic Research Action project (No. 6891). It involves: INPG (Grenoble, F), UPC (Barcelona, E), EPFL (Lausanne, CH), UCL (Louvain-la-Neuve, B), Thomson-Sintra ASM (Sophia Antipolis, F) and EERIE (Nimes, F).

The coordinator of the project can be contacted at: Prof. Christian Jutten, INPG-LTIRF, 46 av.
Félix Viallet, F-38031 Grenoble Cedex, France. Phone: +33 76 57 45 48, Fax: +33 76 57 47 90, e-mail: chris at tirf.inpg.fr

Overview of the project
~~~~~~~~~~~~~~~~~~~~~~~
Theoretical results point out the relations between Bayesian classifiers and the classical Multi-Layer Perceptron (MLP), and propose new algorithms based on Kernel Estimator Classifiers (KEC). Pruning methods (to adapt the network sizes) have been explored on MLP as well as on KEC. Relations between data dimension, number of samples and number of parameters in practical situations have also been addressed. Original nonlinear mapping algorithms (VQP) for data dimension reduction have also been designed.

A simulation environment (PACKLIB) has been developed in the project; it is a smart graphical tool allowing fast programming and interactive analysis. The PACKLIB environment greatly simplifies the user's task: only the basic code of the algorithms has to be written, while the whole graphical input, output and relationship framework is handled by the environment itself. PACKLIB is used for extensive benchmarks in the ELENA project and in other situations (image processing, control of mobile robots,...). Currently, PACKLIB is being tested by beta users, and a demo version will soon be available in the public domain.

Specific problems related to the hardware implementation of incremental algorithms have been addressed; parallel machines with different kinds of systolic architectures, and specialized VLSI processors, have been developed and studied in the framework of the ELENA project. The goal of the project was to extract guidelines for the choice of architectures and machines in different situations, taking into account the required performance, but also external constraints such as inputs/outputs, portability, power consumption, versatility,...

More Information:
~~~~~~~~~~~~~~~~~
If you need supplementary and practical information about the workshop (access by plane, train or car, hotels,...)
contact:
Jean-Luc Voz or Michel Verleysen
Universite Catholique de Louvain
DICE - Microelectronics Laboratory
3, Place du Levant
B-1348 LOUVAIN-LA-NEUVE
Belgium
Phone : +32-10-47.25.51
Secretariat : +32-10-47.25.40
Fax : +32-10-47.86.67
E-mail : voz at dice.ucl.ac.be
         verleyse at dice.ucl.ac.be

SEE ALSO THE WWW HOMEPAGE of the workshop:
http://www.dice.ucl.ac.be/neural-nets/ELENA/ELENA_WORKSHOP.html

A postscript file presenting the ELENA workshop is available for anonymous ftp on: ftp.dice.ucl.ac.be in /pub/neural-nets/ELENA/elena_workshop.ps.Z

The ELENA workshop is a joint organization of:
- Technopol Brussels, Value Relay Center
- The partners of the ELENA consortium.

The ELENA workshop is organized in Louvain-la-Neuve (Belgium), on 18 April 1995. It precedes an international conference in the field of artificial neural networks organized in Brussels, the third European Symposium on Artificial Neural Networks (ESANN'95).

From ajit at uts.cc.utexas.edu Fri Feb 24 14:01:39 1995
From: ajit at uts.cc.utexas.edu (Ajit Dingankar)
Date: Fri, 24 Feb 1995 13:01:39 -0600
Subject: Neuroprose Paper: Classifiers on Relatively Compact Sets
Message-ID: <199502241901.NAA26303@curly.cc.utexas.edu>

**DO NOT FORWARD TO OTHER GROUPS**

Sorry, no hardcopies available.

URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.relcompact-class.ps.Z

BiBTeX entry:

@ARTICLE{atd17,
AUTHOR = "Sandberg, I. W. and Dingankar, A. T.",
TITLE = "{Classifiers on Relatively Compact Sets}",
JOURNAL = "IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications",
VOLUME = {42},
NUMBER = {1},
PAGES = {57},
YEAR = "1995",
MONTH = "January",
ANNOTE = "",
LIBRARY = "",
CALLNUM = "" }

Classifiers on Relatively Compact Sets
--------------------------------------

Abstract

The problem of classifying signals is of interest in several application areas.
Typically we are given a finite number $m$ of pairwise disjoint sets $C_1, \ldots, C_m$ of signals, and we would like to synthesize a system that maps the elements of each $C_j$ into a real number $a_j$, such that the numbers $a_1, \ldots, a_m$ are distinct. In a recent paper it is shown that this classification can be performed by certain simple structures involving linear functionals and memoryless nonlinear elements, assuming that the $C_j$ are compact subsets of a real normed linear space. Here we give a similar solution to the problem under the considerably weaker assumption that the $C_j$ are relatively compact and are of positive distance from each other. An example is given in which the $C_j$ are subsets of $L_p(a,b)$, $1 \le p < \infty$.

From moody at chianti.cse.ogi.edu Fri Feb 24 20:12:50 1995
From: moody at chianti.cse.ogi.edu (John Moody)
Date: Fri, 24 Feb 95 17:12:50 -0800
Subject: Graduate Study at the Oregon Graduate Institute
Message-ID: <9502250112.AA05935@chianti.cse.ogi.edu>

The Oregon Graduate Institute of Science and Technology (OGI) has openings for a few outstanding students in its Computer Science and Electrical Engineering Masters and Ph.D. programs in the areas of Neural Networks, Learning, Signal Processing, Time Series, Control, Speech, Language, Vision, and Computational Finance. OGI has over 15 faculty, senior research staff, and postdocs in these areas. Short descriptions of our research interests are appended below.

The primary purposes of this message are:
1) To invite enquiries and applications from prospective students interested in studying for a Masters or PhD degree in the above areas.
2) To notify prospective Ph.D. students who are U.S. Citizens or U.S. Nationals of various fellowship opportunities at OGI. Fellowships provide full or partial financial support while studying for the PhD.

OGI is a young, but rapidly growing, private research institute located in the Portland area.
OGI offers Masters and PhD programs in Computer Science and Engineering, Applied Physics, Electrical Engineering, Biology, Chemistry, Materials Science and Engineering, and Environmental Science and Engineering.

Inquiries about the Masters and PhD programs and admissions for either Computer Science or Electrical Engineering should be addressed to:

Margaret Day, Director
Office of Admissions and Records
Oregon Graduate Institute
PO Box 91000
Portland, OR 97291
Phone: (503)690-1028
Email: margday at admin.ogi.edu

Formal applications should also be sent to Margaret Day. However, due to the late time in the graduate application season, informal applications should be sent directly to the CSE Department. For these informal applications, please include a letter specifying your research interests and photocopies of your GRE scores and college transcripts. Please send these materials to:

Kerri Burke, Academic Coordinator
Department of Computer Science and Engineering
Oregon Graduate Institute
PO Box 91000
Portland, OR 97291-1000
Phone: (503)690-1255

+++++++++++++++++++++++++++++++++++++++++++++++++++++++

Oregon Graduate Institute of Science & Technology
Department of Computer Science and Engineering &
Department of Electrical Engineering and Applied Physics

Research Interests of Faculty, Research Staff, and Postdocs in Adaptive & Interactive Systems (Neural Networks, Signal Processing, Control, Speech, Language, and Vision)

Etienne Barnard (Assistant Professor, EEAP): Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics.

Ron Cole (Professor, CSE): Ron Cole is director of the Center for Spoken Language Understanding at OGI.
Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world.

Mark Fanty (Research Assistant Professor, CSE): Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers.

Dan Hammerstrom (Associate Professor, CSE): Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next generation VLSI neurocomputer architectures.

Hynek Hermansky (Associate Professor, EEAP): Hynek Hermansky is interested in speech processing by humans and machines with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing.

Todd K. Leen (Associate Professor, CSE): Todd Leen's research spans theory of neural network models, architecture and algorithm design and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling.
John Moody (Associate Professor, CSE): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics, and finance.

David Novick (Associate Professor, CSE): David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems.

Misha Pavel (Associate Professor, EEAP): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces.

Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems.

Thorsteinn S. Rognvaldsson (Post-Doctoral Research Associate, CSE): Thorsteinn Rognvaldsson studies both applications and theory of neural networks and other non-linear methods for function fitting and classification. He is currently working on methods for choosing regularization parameters and also comparing the performance of neural networks with the performance of other techniques for time series prediction.
Joachim Utans (Post-Doctoral Research Associate, CSE): Joachim Utans's research interests include computer vision and image processing, model based object recognition, neural network learning algorithms and optimization methods, model selection and generalization, with applications in handwritten character recognition and financial analysis.

Pieter Vermeulen (Senior Research Associate, CSE): Pieter Vermeulen is interested in the theory, design and implementation of pattern-recognition systems, neural networks and telephone based speech systems. He currently works on the realization of speaker independent, small vocabulary interfaces to the public telephone network. Current projects include voice dialing, a system to collect the year 2000 census information and the rapid prototyping of such systems.

Eric A. Wan (Assistant Professor, EEAP): Eric Wan's research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, adaptive control, active noise cancellation, and telecommunications.

Lizhong Wu (Post-Doctoral Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance.

From pf2 at st-andrews.ac.uk Thu Feb 23 05:33:21 1995
From: pf2 at st-andrews.ac.uk (Peter Foldiak)
Date: Thu, 23 Feb 95 10:33:21 GMT
Subject: URL suggestion
Message-ID: <10889.9502231033@psych.st-andrews.ac.uk>

May I suggest that in all future ftp announcements on connectionists the sender should (also) give the standard URL form of the document, so that a WWW reader can be pointed at it simply by copying and pasting.
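The conversion being asked for here is mechanical; as a hypothetical sketch (the function name is invented), the FTP-host/FTP-filename pair used in announcements maps to the standard URL form like this:

```python
def ftp_to_url(ftp_host, ftp_filename):
    """Build the standard URL form of a document from the
    FTP-host / FTP-filename lines of an ftp announcement."""
    # The URL form is simply the ftp scheme, the host, and the absolute path.
    return "ftp://" + ftp_host + ftp_filename

# Example in the Neuroprose announcement style:
print(ftp_to_url("archive.cis.ohio-state.edu",
                 "/pub/neuroprose/bisant.ribosome.ps.Z"))
# ftp://archive.cis.ohio-state.edu/pub/neuroprose/bisant.ribosome.ps.Z
```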
Thanks,
Peter

From gomes at ICSI.Berkeley.EDU Mon Feb 27 12:40:21 1995
From: gomes at ICSI.Berkeley.EDU (Benedict A. Gomes)
Date: Mon, 27 Feb 1995 09:40:21 -0800 (PST)
Subject: Mapping onto parallel machines
Message-ID: <199502271740.JAA01859@icsib6.ICSI.Berkeley.EDU>

Some time back I posted a request for references on automatically mapping neural nets onto parallel machines. The references that I have received over time have been compiled into a bib file and are available by anonymous ftp from

ftp://icsi.berkeley.edu/pub/ai/gomes/nn-mapping.bib

The same directory also has a summary of some of the papers. Work in this area is diffuse and might be published in a wide variety of areas, including software, parallel systems and neural networks, making it hard to keep track of what has been done. Hence, I would like to repeat my request, so as to update my references. I am interested in both mapping algorithms and simulators, particularly for MIMD machines like the CM-5 and the Cray T3D.

Thanks!
Benedict Gomes

From zoubin at psyche.mit.edu Mon Feb 27 20:06:08 1995
From: zoubin at psyche.mit.edu (Zoubin Ghahramani)
Date: Mon, 27 Feb 95 20:06:08 EST
Subject: Paper available on factorial learning and EM
Message-ID: <9502280106.AA12339@psyche.mit.edu>

FTP-host: psyche.mit.edu
FTP-filename: /pub/zoubin/factorial.ps.Z
URL: ftp://psyche.mit.edu/pub/zoubin/factorial.ps.Z

This NIPS preprint is 8 pages long [300K compressed].

Factorial Learning and the EM Algorithm

Zoubin Ghahramani
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139

Many real world learning problems are best characterized by an interaction of multiple independent causes or factors. Discovering such causal structure from the data is the focus of this paper. Based on Zemel and Hinton's cooperative vector quantizer (CVQ) architecture, an unsupervised learning algorithm is derived from the Expectation--Maximization (EM) framework.
Due to the combinatorial nature of the data generation process, the exact E-step is computationally intractable. Two alternative methods for computing the E-step are proposed, Gibbs sampling and mean-field approximation, and some promising empirical results are presented.

The paper will appear in G. Tesauro, D.S. Touretzky and T.K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995.

From bisant at gl.umbc.edu Mon Feb 27 20:59:33 1995
From: bisant at gl.umbc.edu (Mr. David Bisant)
Date: Mon, 27 Feb 1995 20:59:33 -0500
Subject: Paper Available: ID of binding sites on E.coli genetic sequences
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/bisant.ribosome.ps.Z

The file bisant.ribosome.ps.Z is a copy of a paper recently accepted by Nucleic Acids Research. It is now available for copying from the Neuroprose repository. Hardcopies can be photocopied from the journal itself shortly. (18 pages, compressed file size 89K)

Title: Identification of Ribosome Binding Sites in Escherichia coli Using Neural Network Models

Authors:
David Bisant
Neuroscience Program (151 B)
Stanford University
Stanford, CA 94305
bisant at decatur.stanford.edu

Jacob Maizel
National Cancer Institute, FCRF
Bldg 469 Rm 151, PO Box B
Frederick, MD 21701
jmaizel at ncifcrf.gov

Abstract: This study investigated the use of neural networks in the identification of Escherichia coli ribosome binding sites. The recognition of these sites based on primary sequence data is difficult due to the multiple determinants that define them. Additionally, secondary structure plays a significant role in the determination of the site, and this information is difficult to include in the models. Efforts to solve this problem have so far yielded poor results. A new compilation of Escherichia coli ribosome binding sites was generated for this study. Feedforward backpropagation networks were applied to their identification.
Perceptrons were also applied, as a perceptron had been the best published method since 1982. Performance of all the neural networks and perceptrons was evaluated by ROC analysis. The neural network provided a significant improvement in the recognition of these sites when compared to the previous best method, finding less than half the number of false positives when both models were adjusted to find an equal number of actual sites. The best neural network used an input window of 101 nucleotides and a single hidden layer of 9 units. Both the neural network and the perceptron trained on the new compilation performed better than the original perceptron published by Stormo et al. in 1982.

Keywords: neural networks, ribosome binding sites, nucleic acid sequence analysis, ROC, Escherichia coli

URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/bisant.ribosome.ps.Z

From peter at ai.iit.nrc.ca Tue Feb 28 00:18:44 1995
From: peter at ai.iit.nrc.ca (Peter Turney)
Date: Tue, 28 Feb 1995 10:18:44 +0500
Subject: Workshop on Data Engineering for Inductive Learning
Message-ID: <9502281518.AA15776@ksl0j.iit.nrc.ca>

Workshop on Data Engineering for Inductive Learning
Second and Final Call for Participation
IJCAI-95, Montreal (Canada), August 20, 1995

This notice is to inform you that the date of the workshop has been determined and to remind you of the approaching deadline for submissions (including requests for participation without a paper presentation).

In inductive learning, algorithms are applied to data. It is well understood that attention to both elements is critical, but algorithms typically receive more attention than data. Our goal in this workshop is to counterbalance the predominant focus on algorithms by providing a forum in which data takes center stage. Specifically, we invite discussion of issues relevant to data engineering, which we define as the transformation of raw data into a form useful as input to algorithms for inductive learning.
Data engineering is a concern in industrial and commercial applications of machine learning, neural networks, genetic algorithms, and traditional statistics.

Deadline for submissions:      March 31, 1995
Notification of acceptance:    April 21, 1995
Submissions available by ftp:  April 28, 1995
Actual workshop:               August 20, 1995

For more information about the workshop, see:
http://ai.iit.nrc.ca/ijcai/data-engineering.html

If you do not have access to the web, contact: peter at ai.iit.nrc.ca

From cns-cas at PARK.BU.EDU Tue Feb 28 16:13:39 1995
From: cns-cas at PARK.BU.EDU (cns-cas@PARK.BU.EDU)
Date: Tue, 28 Feb 95 16:13:39 EST
Subject: VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Message-ID: <199502282113.QAA07418@sharp.bu.edu>

VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Friday, March 17, 1995
Boston University
George Sherman Union Conference Auditorium, Second Floor
775 Commonwealth Avenue
Boston, MA 02215

Co-Sponsored by the Department of Cognitive and Neural Systems, the Center for Adaptive Systems, and the Center for Philosophy and History of Science

Program:
--------
8:30am--9:30am: KEN NAKAYAMA, Harvard University, Visual perception of surfaces
9:30am--10:30am: RUDIGER VON DER HEYDT, Johns Hopkins University, How does the visual cortex represent surface and contour?
10:30am--11:00am: Coffee Break
11:00am--12:00pm: STEPHEN GROSSBERG, Boston University, Cortical dynamics of visual perception
12:00pm--1:00pm: PATRICK CAVANAGH, Harvard University, Attention-based visual processes
1:00pm--2:30pm: Lunch
2:30pm--3:30pm: V.S. RAMACHANDRAN, University of California, Neural plasticity in the adult human brain: New directions of research
3:30pm--4:30pm: EVAN THOMPSON, Boston University, Phenomenology and computational vision
4:30pm--5:30pm: DANIEL DENNETT, Tufts University, Filling-in revisited
5:30pm---: Discussion

Registration:
-------------
The conference is free and open to the public.
Parking:
--------
Parking is available at nearby campus lots: 808 Commonwealth Avenue ($6 per vehicle), 766 Commonwealth Avenue ($8 per vehicle), and 700 Commonwealth Avenue ($10 per vehicle). If these lots are full, please ask the lot attendant for an alternate location.

Contact:
--------
Professor Stephen Grossberg
Department of Cognitive and Neural Systems
111 Cummington Street
Boston, MA 02215
fax: (617) 353-7755
email: diana at cns.bu.edu

From wahba at stat.wisc.edu Tue Feb 28 21:17:22 1995
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Tue, 28 Feb 95 20:17:22 -0600
Subject: SS-ANOVA for `soft classification'
Message-ID: <9503010217.AA18322@hera.stat.wisc.edu>

Announcing:

Smoothing Spline ANOVA for Exponential Families, with Application to the Wisconsin Epidemiological Study of Retinopathy.
by Grace Wahba, Yuedong Wang, Chong Gu, Ronald Klein, MD and Barbara Klein, MD.
UWisconsin-Madison Statistics Dept TR 940, Dec. 1994 (WWGKK)
ftp: ftp.stat.wisc.edu/pub/wahba/exptl.ssanova.ps.gz
Mosaic: http://www.stat.wisc.edu/~wahba/wahba.html - then click on ftp

GRKPACK: Fitting Smoothing Spline ANOVA Models for Exponential Families.
by Yuedong Wang.
UWisconsin-Madison Statistics Dept TR 942, Jan. 1995. (GRKPACK-doc)
ftp: ftp.stat.wisc.edu/pub/wahba/grkpack.ps.gz
Mosaic: http://www.stat.wisc.edu/~wahba/wahba.html - then click on ftp

In WWGKK we develop Smoothing Spline ANOVA (SS-ANOVA) models for estimating the probability that an instance (subject) will be in class 1 as opposed to class 0, given a vector of predictor variables t (`soft' classification). We observe {y_i, t(i), i = 1,..,n}, where y_i is 1 or 0 according as subject i's response is `success' or `failure', and t(i) is a vector of predictor variables for the i-th subject. Letting p(t) be the probability that a subject whose predictor variables are t has a `success' response, we estimate p(t) = exp{f(t)}/(1 + exp{f(t)}) from this data using a smoothing spline ANOVA representation of f.
An ANOVA representation gives f as a sum of functions of one variable (main effects), plus sums of functions of two variables (two-factor interactions), etc. This representation provides an interpretable alternative to a neural net. The following issues are addressed in this paper:
(1) Methods for deciding which terms in the ANOVA decomposition to include (model selection),
(2) Methods for choosing good values of the regularization (smoothing) parameters, which control the bias-variance tradeoff,
(3) Methods for making confidence statements concerning the estimate,
(4) Numerical algorithms for the calculations, and, finally,
(5) Public software (GRKPACK).

The overall scheme is applied to data from the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) to model the risk of progression of diabetic retinopathy {`success'} as a function of glycosylated hemoglobin, duration of diabetes and body mass index {t}. Cross-sectional plots provide interpretable information about these risk factors. This paper provided the basis for Grace Wahba's Neyman Lecture. A preliminary version appeared in NIPS-6.

GRKPACK-doc provides documentation for the code GRKPACK, which implements (2)-(4) above. The code for GRKPACK is available in netlib in the file gcv/grkpack.shar. It is recommended that it be retrieved via Mosaic (http://www.netlib.org, go to The Netlib Repository, then gcv) rather than via the robot mailserver, which may subdivide the file. Included in GRKPACK are several examples, including the analysis described in WWGKK and the WESDR data. Comments and suggestions concerning the code should be sent to Yuedong Wang, yuedong at umich.edu.
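As a toy numerical illustration of the `soft classification' estimate above (this is not GRKPACK; the function names and the example main-effect terms are invented), p(t) is the logistic transform of an ANOVA-style additive decomposition of f, here truncated at main effects:

```python
import math

def f_anova(t, const, main_effects):
    """ANOVA decomposition truncated at main effects:
    f(t) = const + sum_j f_j(t_j)."""
    return const + sum(f_j(t_j) for f_j, t_j in zip(main_effects, t))

def prob_success(f_value):
    """p(t) = exp{f(t)} / (1 + exp{f(t)})."""
    return math.exp(f_value) / (1.0 + math.exp(f_value))

# Invented main effects for three predictors, standing in for
# glycosylated hemoglobin, duration of diabetes, and body mass index:
effects = [lambda x: 0.8 * x, lambda x: 0.1 * x, lambda x: -0.05 * x]

f = f_anova((1.0, 2.0, 3.0), const=-0.5, main_effects=effects)
p = prob_success(f)  # f = -0.5 + 0.8 + 0.2 - 0.15 = 0.35, so 0 < p < 1
```

Model selection in the paper amounts to deciding which f_j (and interaction) terms to keep, while the smoothing parameters control how wiggly each retained term is allowed to be.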
From john at dcs.rhbnc.ac.uk Tue Feb 28 17:43:37 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Tue, 28 Feb 95 22:43:37 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199502282243.WAA03485@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): three new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-015:
----------------------------------------
Computability and complexity over the reals
by Paolo Boldi, University of Milan

Abstract: In this work, we sketch a (rather superficial) survey about the problem of extending some of the classical notions from computation and complexity theory to the non-classical realm of real numbers. We first present an algorithmic approach, deeply studied by Blum, Shub, Smale et al., and give a non-trivial separation result recently obtained by Cucker. Then, we introduce some concepts from another line of research, namely the one based on the notion of computable real number.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-016:
----------------------------------------
Probably Approximately Optimal Satisficing Strategies
by Russell Greiner, Siemens Corporate Research
   Pekka Orponen, Department of Computer Science, University of Helsinki

Abstract: A {\em satisficing search problem} consists of a set of probabilistic experiments to be performed in some order, seeking a satisfying configuration of successes and failures. The expected cost of the search depends both on the success probabilities of the individual experiments, and on the {\em search strategy}, which specifies the order in which the experiments are to be performed. A strategy that minimizes the expected cost is {\em optimal}.
Earlier work has provided ``optimizing functions'' that compute optimal strategies for certain classes of search problems from the success probabilities of the individual experiments. We extend those results by providing a general model of such strategies, and an algorithm \pao\ that identifies an approximately optimal strategy when the probability values are not known. The algorithm first estimates the relevant probabilities from a number of trials of each undetermined experiment, and then uses these estimates, and the proper optimizing function, to identify a strategy whose cost is, with high probability, close to optimal. We also show that if the search problem can be formulated as an and-or tree, then the PAO algorithm can also ``learn while doing'', i.e. gather the necessary statistics while performing the search.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-018:
----------------------------------------
On real Turing machines that toss coins
by Felipe Cucker, Universitat Pompeu Fabra,
   Marek Karpinski, Universit\"at Bonn,
   Pascal Koiran, DIMACS, Rutgers University,
   Thomas Lickteig, Universit\"at Bonn,
   Kai Werther, Universit\"at Bonn

Abstract: In this paper we consider real counterparts of classical probabilistic complexity classes in the framework of real Turing machines as introduced by Blum, Shub, and Smale \cite{BSS}. We give an extension of the well-known ``$\BPP \subseteq \P/\poly$'' result from discrete complexity theory to a very general setting in the real number model. This result holds for real inputs, real outputs, and random elements drawn from an arbitrary probability distribution over~$\R^m$. Then we turn to the study of Boolean parts, that is, classes of languages of zero-one vectors accepted by real machines. In particular we show that the classes $\BPP$, $\PP$, $\PH$, and $\PSPACE$ are not enlarged by allowing the use of real constants and arithmetic at unit cost, provided we restrict branching to equality tests.
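For the simplest instance of the satisficing setting described in NC-TR-95-016, a single disjunction of independent experiments performed until the first success, the classical optimizing function orders experiments by success probability per unit cost. The following sketch illustrates that setting only (it is not the report's PAO algorithm, and all names and numbers are invented):

```python
from itertools import permutations

def expected_cost(order):
    """Expected cost of running experiments (cost, success_prob) in the
    given order, stopping at the first success."""
    total, p_all_failed = 0.0, 1.0
    for cost, p in order:
        total += p_all_failed * cost  # paid only if all earlier ones failed
        p_all_failed *= (1.0 - p)
    return total

def optimal_order(experiments):
    """Classical rule: sort by success probability per unit cost, decreasing."""
    return sorted(experiments, key=lambda e: e[1] / e[0], reverse=True)

experiments = [(4.0, 0.5), (1.0, 0.3), (2.0, 0.8)]
best = optimal_order(experiments)
# Brute-force check that the ratio rule is optimal for this instance:
assert all(expected_cost(best) <= expected_cost(p)
           for p in permutations(experiments))
```

An adjacent-swap argument justifies the rule: running i before j costs c_i + (1-p_i)c_j for that pair, which beats the swapped order exactly when p_i/c_i >= p_j/c_j. The extra work in the PAO setting is that the p's must first be estimated from trials before any such optimizing function can be applied.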
-----------------------
The Report NC-TR-95-015 can be accessed and printed as follows

% ftp cscx.cs.rhbnc.ac.uk  (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-015.ps.Z
ftp> bye
% zcat nc-tr-95-015.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:
http://www.dcs.rhbnc.ac.uk/neurocolt.html

Best wishes
John Shawe-Taylor

From schmidhu at informatik.tu-muenchen.de Wed Feb 1 06:16:36 1995
From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber)
Date: Wed, 1 Feb 1995 12:16:36 +0100
Subject: paper available: incremental self-improvement
Message-ID: <95Feb1.121644met.42284@papa.informatik.tu-muenchen.de>

ON LEARNING HOW TO LEARN LEARNING STRATEGIES
Technical Report FKI-198-94 (20 pages)

Juergen Schmidhuber
Fakultaet fuer Informatik
Technische Universitaet Muenchen
80290 Muenchen, Germany

November 24, 1994
Revised January 31, 1995

This paper introduces the ``incremental self-improvement paradigm''. Unlike previous methods, incremental self-improvement encourages a reinforcement learning system to improve the way it learns, and to improve the way it improves the way it learns, without significant theoretical limitations -- the system is able to ``shift its inductive bias'' in a universal way. Its major features are:

(1) There is no explicit difference between ``learning'', ``meta-learning'', and other kinds of information processing.
Using a Turing machine equivalent programming language, the system itself occasionally executes self-delimiting, initially highly random ``self-modification programs'' which modify the context-dependent probabilities of future programs (including future self-modification programs). (2) The system keeps only those probability modifications computed by ``useful'' self-modification programs: those which bring about more payoff per time than all previous self-modification programs. (3) The computation of payoff per time takes into account all the computation time required for learning -- the entire system life is considered: boundaries between learning trials are ignored (if there are any). A particular implementation based on the novel paradigm is presented. It is designed to exploit what conventional digital machines are good at: fast storage addressing, arithmetic operations etc. Experiments illustrate the system's mode of operation.

-------------------------------------------------------------------
FTP-host: flop.informatik.tu-muenchen.de (131.159.8.35)
FTP-filename: /pub/fki/fki-198-94.ps.gz (use gunzip to uncompress)

Alternatively, retrieve the paper from my home page:
http://papa.informatik.tu-muenchen.de/mitarbeiter/schmidhu.html

If you don't have gzip/gunzip, I can mail you an uncompressed postscript version (as a last resort). There will be a future revised version of the tech report. Comments are welcome.

Juergen Schmidhuber

From craig at magi.ncsl.nist.gov Wed Feb 1 13:39:32 1995
From: craig at magi.ncsl.nist.gov (Craig Watson)
Date: Wed, 1 Feb 95 13:39:32 EST
Subject: Mugshot Identification Database
Message-ID: <9502011839.AA14301@magi.ncsl.nist.gov>

National Institute of Standards and Technology announces the release of

NIST Special Database 18
Mugshot Identification Database (MID)

NIST Special Database 18 is being distributed for use in development and testing of automated mugshot identification systems.
The database consists of three CD-ROMs, containing a total of 3248 images of variable size, compressed with lossless compression. Each CD-ROM requires approximately 530 megabytes of storage compressed and 1.2 gigabytes uncompressed (2.2:1 average compression ratio). There are images of 1573 individuals (cases), 1495 male and 78 female. The database contains both front and side (profile) views when available. Separating front views and profiles, there are 131 cases with two or more front views and 1418 with only one front view. Profiles have 89 cases with two or more profiles and 1268 with only one profile. Cases with both fronts and profiles have 89 cases with two or more of both fronts and profiles, 27 with two or more fronts and one profile, and 1217 with only one front and one profile. Decompression software, which was written in C on a SUN workstation [1], is included with the database.

NIST Special Database 18 has the following features:

+ 3248 segmented 8-bit gray scale mugshot images (varying sizes) of 1573 individuals
+ 1333 cases with both front and profile views (see statistics above)
+ 131 cases with two or more front views and 89 cases with two or more profiles
+ images scanned at 19.7 pixels per mm
+ image format documentation and example software is included

Suitable for automated mugshot identification research, the database can be used for:

+ algorithm development
+ system training and testing

The system requirements are a CD-ROM drive with software to read ISO-9660 format and the ability to compile the C source code written on a SUN workstation [1].

Cost of the database: $750.00.
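As a quick sanity check on the figures quoted above (the check itself is ours, not part of the announcement): 530 MB compressed versus 1.2 GB uncompressed is close to the stated 2.2:1 average ratio, and 19.7 pixels per mm is the common 500 dpi scanning resolution:

```python
# Check the NIST SD-18 figures quoted above (approximate, per CD-ROM).
ratio = 1.2 * 1000 / 530    # 1.2 GB uncompressed vs 530 MB compressed
print(round(ratio, 1))      # ~2.3, close to the quoted 2.2:1 average

dpi = 19.7 * 25.4           # pixels/mm -> dots per inch (25.4 mm per inch)
print(round(dpi))           # 500 dpi
```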
For ordering information contact:

Standard Reference Data
National Institute of Standards and Technology
Building 221, Room A323
Gaithersburg, MD 20899
Voice: (301) 975-2208
FAX: (301) 926-0416
email: srdata at enh.nist.gov

All other questions contact:

Craig Watson
craig at magi.ncsl.nist.gov
(301) 975-4402

[1] The SUN workstation is identified in order to adequately specify or describe the subject matter of this announcement. In no case does such identification imply recommendation or endorsement by the National Institute of Standards and Technology, nor does it imply that the equipment is necessarily the best available for the purpose.

craig watson
************************************************************
* National Institute of Standards and Technology (NIST)    *
* Advanced Systems Division                                *
* Image Recognition Group                                  *
* Bldg 225/Rm A216                                         *
* Gaithersburg, Md. 20899                                  *
*                                                          *
* phone -> (301) 975-4402                                  *
* fax   -> (301) 840-1357                                  *
* email -> craig at magi.ncsl.nist.gov                     *
************************************************************

From tgd at chert.CS.ORST.EDU Wed Feb 1 13:38:57 1995
From: tgd at chert.CS.ORST.EDU (Tom Dietterich)
Date: Wed, 1 Feb 95 10:38:57 PST
Subject: Four Papers
Message-ID: <9502011838.AA07424@edison.CS.ORST.EDU>

The following four papers are available for ftp access (abstracts are included below):

Zhang, W., Dietterich, T. G., (submitted). A Reinforcement Learning Approach to Job-shop Scheduling.
ftp://ftp.cs.orst.edu/users/t/tgd/papers/tr-jss.ps.gz

Dietterich, T. G., Flann, N. S., (submitted). Explanation-based Learning and Reinforcement Learning: A Unified View.
ftp://ftp.cs.orst.edu/users/t/tgd/papers/ml95-ebrl.ps.gz

Dietterich, T. G., Kong, E. B., (submitted). Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms.
ftp://ftp.cs.orst.edu/users/t/tgd/papers/ml95-bias.ps.gz

Kong, E. B., Dietterich, T. G., (submitted). Error-Correcting Output Coding Corrects Bias and Variance.
ftp://ftp.cs.orst.edu/users/t/tgd/papers/ml95-why.ps.gz These and other titles are available through my WWW homepage (see URL at end of message). ---------------------------------------------------------------------- A Reinforcement Learning Approach to Job-shop Scheduling Wei Zhang Thomas G. Dietterich Computer Science Department Oregon State University Corvallis, Oregon 97331-3202 Abstract: We apply reinforcement learning methods to learn domain-specific heuristics for job shop scheduling. We employ repair-based scheduling using a problem space that starts with a critical-path schedule and incrementally repairs constraint violations with the goal of finding a short conflict-free schedule. The states in this problem space are represented by a set of features. The temporal difference algorithm $TD(\lambda)$ is applied to train a neural network to learn a heuristic evaluation function over states. This evaluation function is used by a one-step lookahead search procedure to find good solutions to new scheduling problems. We evaluate this approach on synthetic problems and on problems from a NASA space shuttle payload processing task. The evaluation function is trained on problems involving a small number of jobs and then tested on larger problems. The TD scheduler performs better than the best known existing algorithm for this task---Zweben's iterative repair method based on simulated annealing. The results suggest that reinforcement learning algorithms can provide a new method for constructing high-performance scheduling systems for important industrial applications. ---------------------------------------------------------------------- Explanation-Based Learning and Reinforcement Learning: A Unified View Thomas G. Dietterich Department of Computer Science Oregon State University Corvallis, OR 97331 Nicholas S. 
Flann Department of Computer Science Utah State University Logan, UT 84322-4205 Abstract: In speedup-learning problems, where full descriptions of operators are always known, both explanation-based learning (EBL) and reinforcement learning (RL) can be applied. This paper shows that both methods involve fundamentally the same process of propagating information backward from the goal toward the starting state. RL performs this propagation on a state-by-state basis, while EBL computes the weakest preconditions of operators, and hence, performs this propagation on a region-by-region basis. Based on the observation that RL is a form of asynchronous dynamic programming, this paper shows how to develop a dynamic programming version of EBL, which we call Explanation-Based Reinforcement Learning (EBRL). The paper compares batch and online versions of EBRL to batch and online versions of RL and to standard EBL. The results show that EBRL combines the strengths of EBL (fast learning and the ability to scale to large state spaces) with the strengths of RL (learning of optimal policies). Results are shown in chess endgames and in synthetic maze tasks. ---------------------------------------------------------------------- Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms Thomas G. Dietterich Eun Bae Kong Department of Computer Science 303 Dearborn Hall Oregon State University Corvallis, OR 97331-3202 Abstract: The term ``bias'' is widely used---and with different meanings---in the fields of machine learning and statistics. This paper clarifies the uses of this term and shows how to measure and visualize the statistical bias and variance of learning algorithms. Statistical bias and variance can be applied to diagnose problems with machine learning bias, and the paper shows four examples of this. Finally, the paper discusses methods of reducing bias and variance. 
Methods based on voting can reduce variance, and the paper compares Breiman's bagging method and our own tree randomization method for voting decision trees. Both methods uniformly improve performance on data sets from the Irvine repository. Tree randomization yields perfect performance on the Letter Recognition task. A weighted nearest neighbor algorithm based on the infinite bootstrap is also introduced. In general, decision tree algorithms have moderate-to-high variance, so an important implication of this work is that variance---rather than appropriate or inappropriate machine learning bias---is an important cause of poor performance for decision tree algorithms. ---------------------------------------------------------------------- Error-Correcting Output Coding Corrects Bias and Variance Eun Bae Kong Thomas G. Dietterich Department of Computer Science 303 Dearborn Hall Oregon State University Corvallis, OR 97331-3202 Abstract: Previous research has shown that a technique called error-correcting output coding (ECOC) can dramatically improve the classification accuracy of supervised learning algorithms that learn to classify data points into one of k>>2 classes. This paper presents an investigation of why the ECOC technique works, particularly when employed with decision-tree learning algorithms. It shows that the ECOC method---like any form of voting or committee---can reduce the variance of the learning algorithm. Furthermore---unlike methods that simply combine multiple runs of the same learning algorithm---ECOC can correct for errors caused by the bias of the learning algorithm. Experiments show that this bias correction ability relies on the non-local behavior of C4.5. ---------------------------------------------------------------------- Thomas G. 
Dietterich                       Voice: 503-737-5559
Department of Computer Science   FAX: 503-737-3014
Dearborn Hall, 303               URL: http://www.cs.orst.edu/~tgd
Oregon State University
Corvallis, OR 97331-3102

From rolf at cs.rug.nl Thu Feb 2 07:43:30 1995
From: rolf at cs.rug.nl (rolf@cs.rug.nl)
Date: Thu, 2 Feb 1995 13:43:30 +0100
Subject: Ph.D. thesis available (2nd correction)
Message-ID:

Dear netters,

my apologies for posting a wrong filename, a wrong URL, AND a wrong correction. The REAL filename/URL is now:

FTP-filename: /pub/neuroprose/Thesis/wuertz.thesis.ps.Z
URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/wuertz.thesis.ps.Z

Thanks to Mohammed Karouia and Frank Schnorrenberg who pointed out the errors. I promise that the contents are more accurate than the postings. :-)

Rolf

From jaap.murre at mrc-apu.cam.ac.uk Thu Feb 2 08:22:13 1995
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Thu, 2 Feb 1995 13:22:13 GMT
Subject: CALMLIB 4.24 available
Message-ID: <199502021322.NAA19519@betelgeuse.mrc-apu.cam.ac.uk>

The following file has recently (2-2-1995) been added to our ftp site:

ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/calm424.zip

It contains the latest version of the CALMLIB, C-routines for simulating modular neural networks based on the CALM paradigm. CALM stands for Categorizing And Learning Module (e.g., see J.M.J. Murre, R.H. Phaf, & G. Wolters [1992]. 'CALM: Categorizing and Learning Module'. Neural Networks, 5, 55-82; and J.M.J. Murre [1992]. 'Learning and Categorization in Modular Neural Networks'. Hillsdale, NJ: Lawrence Erlbaum [USA/Canada], and Hemel Hempstead: Harvester Wheatsheaf [rest of the world]). An executable demonstration program for PCs (486 DX recommended) using CALM to learn to recognize handwritten characters that can be entered with a mouse can also be obtained from this site:

ftp://ftp.mrc-apu.cam.ac.uk/pub/nn/murre/digidemo.zip

This program has been developed with the CALMLIB.
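CALM's specifics live in the C library announced above; as a loose illustration of what a "categorizing module" does -- winner-take-all categorization with learning applied only to the winner -- here is a toy competitive-learning sketch. The class, its parameters, and its behaviour are invented for illustration and are emphatically not the CALMLIB API:

```python
import math

class ToyCategorizer:
    """Toy competitive-learning module, loosely in the spirit of a
    categorizing module: the winning node moves toward each input.
    NOT the CALMLIB API (which is a C library); illustrative only."""

    def __init__(self, nodes, lr=0.5):
        self.nodes = [list(n) for n in nodes]   # one weight vector per category
        self.lr = lr

    def categorize(self, x, learn=True):
        # Winner-take-all: the node closest to the input wins...
        dists = [math.dist(n, x) for n in self.nodes]
        win = dists.index(min(dists))
        if learn:
            # ...and only the winner's weights are nudged toward the input.
            self.nodes[win] = [w + self.lr * (xi - w)
                               for w, xi in zip(self.nodes[win], x)]
        return win

m = ToyCategorizer([[0.0, 0.0], [1.0, 1.0]])
cat = m.categorize([0.1, 0.0])    # input near the first node wins category 0
```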
-- Jaap Murre (jaap.murre at mrc-apu.cam.ac.uk)

From mackay at mrao.cam.ac.uk Thu Feb 2 12:45:00 1995
From: mackay at mrao.cam.ac.uk (David J.C. MacKay)
Date: Thu, 2 Feb 95 17:45 GMT
Subject: Preprint announcement from David J C MacKay
Message-ID:

The following preprints are available by anonymous ftp or www.

WWW: The page:
ftp://131.111.48.8/pub/mackay/README.html
has pointers to abstracts and postscript of these publications.

------------- Titles -------------

1) Probable Networks and Plausible Predictions - A Review of Practical Bayesian Methods for Supervised Neural Networks
2) Density Networks and their application to Protein Modelling
3) A Free Energy Minimization Framework for Inference Problems in Modulo 2 Arithmetic
4) Interpolation models with multiple hyperparameters

-------------- Details --------------

1) Probable Networks and Plausible Predictions - A Review of Practical Bayesian Methods for Supervised Neural Networks
by David J C MacKay

Review paper to appear in `Network' (1995). Final version (1 Feb 95). 41 pages. (508K)
ftp://131.111.48.8/pub/mackay/network.ps.Z

2) Density Networks and their application to Protein Modelling
by David J C MacKay

Abstract: I define a latent variable model in the form of a neural network for which only target outputs are specified; the inputs are unspecified. Although the inputs are missing, it is still possible to train this model by placing a simple probability distribution on the unknown inputs and maximizing the probability of the data given the parameters. The model can then discover for itself a description of the data in terms of an underlying latent variable space of lower dimensionality. I present preliminary results of the application of these models to protein data.
(to appear in Maximum Entropy 1994 Proceedings [1995]) ftp://131.111.48.8/pub/mackay/density.ps.Z (130K) 3) A Free Energy Minimization Framework for Inference Problems in Modulo 2 Arithmetic by David J C MacKay Abstract: This paper studies the task of inferring a binary vector s given noisy observations of the binary vector t = A s mod 2, where A is an M times N binary matrix. This task arises in correlation attack on a class of stream ciphers and in other decoding problems. The unknown binary vector is replaced by a real vector of probabilities that are optimized by variational free energy minimization. The derived algorithms converge in computational time of order between w_{A} and N w_{A}, where w_{A} is the number of 1s in the matrix A, but convergence to the correct solution is not guaranteed. Applied to error correcting codes based on sparse matrices A, these algorithms give a system with empirical performance comparable to that of BCH and Reed-Muller codes. Applied to the inference of the state of a linear feedback shift register given the noisy output sequence, the algorithms offer a principled version of Meier and Staffelbach's (1989) algorithm B, thereby resolving the open problem posed at the end of their paper. The algorithms presented here appear to give superior performance. (to appear in Proceedings of 1994 K.U. Leuven Workshop on Cryptographic Algorithms) ftp://131.111.48.8/pub/mackay/fe.ps.Z (101K) 4) Interpolation models with multiple hyperparameters by David J C MacKay and Ryo Takeuchi Abstract: A traditional interpolation model is characterized by the choice of regularizer applied to the interpolant, and the choice of noise model. Typically, the regularizer has a single regularization constant alpha, and the noise model has a single parameter beta. 
The ratio alpha/beta alone is responsible for determining globally all these attributes of the interpolant: its `complexity', `flexibility', `smoothness', `characteristic scale length', and `characteristic amplitude'. We suggest that interpolation models should be able to capture more than just one flavour of simplicity and complexity. We describe Bayesian models in which the interpolant has a smoothness that varies spatially. We emphasize the importance, in practical implementation, of the concept of `conditional convexity' when designing models with many hyperparameters. (submitted to IEEE PAMI)

ftp://131.111.48.8/pub/mackay/newint.ps.Z (179K)

To get papers by anonymous ftp, follow the usual procedure:

ftp 131.111.48.8
anonymous
cd pub/mackay
binary
get ...

------------------------------------------------------------------------------
David J.C. MacKay        email: mackay at mrao.cam.ac.uk
Radio Astronomy,         www: ftp://131.111.48.24/pub/mackay/homepage.html
Cavendish Laboratory,    tel: +44 1223 337238  fax: 354599  home: 276411
Madingley Road, Cambridge CB3 0HE. U.K.
home: 19 Thornton Road, Girton, Cambridge CB3 0NP
------------------------------------------------------------------------------

From Nicolas.Szilas at imag.fr Thu Feb 2 13:30:19 1995
From: Nicolas.Szilas at imag.fr (Nicolas Szilas)
Date: Thu, 2 Feb 1995 19:30:19 +0100
Subject: Paper available
Message-ID: <199502021830.TAA10687@meteore.imag.fr>

Hello,

The following paper is now available by anonymous ftp:

-----------------------------------------------------------------------
ACTION FOR LEARNING IN NON-SYMBOLIC SYSTEMS
-------------------------------------------

Nicolas SZILAS(1) and Eric RONCO(2)

(1) ACROE & LIFIA-IMAG, INPG - 46, av. felix Viallet, 38031 GRENOBLE Cedex, France
(2) LIFIA-IMAG, INPG - 46, av.
felix Viallet, 38031 GRENOBLE Cedex, France

As a cognitive function, learning must be considered as an interactive process, especially where progressive learning is concerned; the system must choose at every moment what to learn and when. Therefore, we propose an overview of interactive learning algorithms in the connectionist literature and conclude that most studies are concerned with the search for the most informative patterns in the environment, whereas few of them deal with progressive complexity learning. Subsequently, some effects of progressive learning are experimentally studied in the framework of supervised learning in layered networks: the results exhibit great benefits from progressively increasing the difficulty of the environment. To design more advanced interactive connectionist learning, we study the psychological automatization theories, which show that taking into account complex environments implies the emergence of successive levels of automated processing. Thus, a model is proposed, based on hierarchical and incremental integration of modular processing, corresponding to a progressive increase of the environment difficulty.

-----------------------------------------------------------------------

To appear in the European Conference on Cognitive Science, Saint-Malo, France, April 1995.
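The "progressive increase of the environment difficulty" in the abstract above is a staged training schedule. A minimal sketch of the schedule mechanics only -- the one-weight model, the data, and the easy/hard split are invented for illustration and say nothing about the benefits the paper measures:

```python
def train(w, samples, lr=0.1, epochs=200):
    # One-weight linear model y = w * x, squared-error gradient descent.
    for _ in range(epochs):
        for x, y in samples:
            w -= lr * (w * x - y) * x
    return w

# Stage 1 uses only "easy" samples; stage 2 adds harder ones -- a stand-in
# for progressively increasing the difficulty of the environment.
easy = [(1.0, 2.0)]
full = easy + [(2.0, 4.0), (3.0, 6.0)]
w = train(0.0, easy)    # stage 1: easy subset
w = train(w, full)      # stage 2: full task
print(round(w, 3))      # converges to 2.0 on the consistent data y = 2x
```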
anonymous ftp to: imag.fr
filename: pub/LIFIA/szilas.eccs95.e.ps.Z

----------------------------------------------------------------------
Nicolas SZILAS           e-mail: Nicolas.Szilas at imag.fr
LIFIA-ACROE
46, avenue Felix Viallet
38031 Grenoble Cedex, France
tel.: (33) 76-57-48-12
----------------------------------------------------------------------

From dwang at cis.ohio-state.edu Thu Feb 2 15:24:30 1995
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Thu, 2 Feb 1995 15:24:30 -0500
Subject: A summary on Catastrophic Interference
Message-ID: <199502022024.PAA05441@shirt.cis.ohio-state.edu>

Catastrophic Interference and Incremental Training

A brief summary by DeLiang Wang

Several weeks ago I posted a message on this network asking about the research status of catastrophic interference. I have since received more than 50 replies, either publicly or privately. I have benefited much from the discussions, and I hope others have too. I have read/scanned through most of the papers (those easily accessible) prompted by the replies. To thank all of you who have replied, I have compiled a brief summary of my readings. Since I do not want to post a long review paper on the net, the following summary will be very brief, which means an inevitable oversimplification of the models mentioned here and neglect of some other work.

(1) Catastrophic interference (CI) refers to the phenomenon that later training disrupts the results of previous training. It was pointed out as a criticism of multi-layer perceptrons with backpropagation training (MLP) by Grossberg (1987), and systematically revealed by McCloskey & Cohen (1989) and Ratcliff (1990). Why is CI bad? It prevents a single network from incrementally accumulating knowledge (the alternative would be to have each network learn just one set of transforms), and it poses severe problems for MLP as a model of human/animal memory.

(2) Catastrophic interference is model-specific. So far, the problem has been revealed only in MLP or its variants.
We know that models where different items are represented without overlaps (e.g. ART) do not have this problem. Even some models with certain overlaps do not have this problem (see, for example, Willshaw, 1981; Wang and Yuwono, 1995). Unfortunately, many studies on CI carry general titles, such as "connectionist models" and "neural network models", and leave the impression that CI is a general problem of all neural network (NN) models. These general titles are used both by critics and proponents of neural networks. Such titles may have been justified in the early days, when awareness of NN needed to be raised. At the current stage of development of the field, results about NN should be specific, and titles should properly reflect the scope of the paper.

(3) Tradeoff between distributedness and interference. The major cause of CI is the "distributedness" of representations: learning of new patterns needs to use those weights that participate in representing previously learned patterns. Much investigation into overcoming CI is directed towards reducing the extent of distributedness. We can say that there is a tradeoff between distributedness and interference (as noted earlier: no overlaps, no CI).

(4) Ways of overcoming CI. The problem has been studied by a number of authors, all of whom work on MLP or its variants. Here is a list of proposals to alleviate CI:

* Reduce overlapping in hidden unit representations. French (1991, 1994), Kruschke (1992), Murre (1992), Fahlman (1991).

* Orthogonalization. This idea was proposed long ago for reducing cross-talk in associative memories. The same idea works here to reduce CI. See Kortge (1990), Lewandowsky (1991), Sloman and Rumelhart (1992), Sharkey and Sharkey (1994), McClelland et al. (1994).

* Prior training. Assuming later patterns are drawn from the same underlying function as earlier patterns, prior training on the general task can reduce CI (McRae and Hetherington, 1993).
This proposal will not work if later patterns have little to do with previously trained patterns.

* Modularization. The idea is similar to the earlier ones. We have a hierarchy of different networks, and each network is selected to learn a different category of tasks. See Waibel (1989), Brashers-Krug et al. (1995).

* Retain only a recent history. The idea here is that we let past patterns be forgotten and retain only a limited number of patterns including the new one (reminiscent of STM). See Ruiz de Angulo and Torras (1995).

(5) Transfer studies. Another body of related, but different, work studies how previous training can facilitate acquiring a new pattern. The effect of acquiring the new pattern on previous memory, however, is not explored. See Pratt (1993), Thrun and Mitchell (1994), Murre (1995).

(6) What about associative memory? In my original message, I suspected that associative memory models that can handle correlated patterns (see Kanter and Sompolinsky, 1987; Diederich and Opper, 1987) should suffer the same problem of catastrophic interference. Unfortunately, no response has touched on this issue. Are people taking associative memory models seriously nowadays?

(7) To summarize, the following two ideas, in my opinion, seem to hold the greatest promise for solving CI. The first idea is to reduce the receptive field of each unit, thus reducing the overlaps among different feature detectors. RBF (radial basis function) networks fall into this type. After all, limited receptive fields are characteristic of brain cells, and all-to-all connections are scarce if they exist at all. The second idea is to introduce some form of modularization so that different underlying functions are handled in different modules (reducing overlaps among differing tasks). This may not only solve the problem of CI, but also facilitate acquiring new knowledge (positive transfer). Furthermore, this idea is consistent with the general principle of functional localization in the brain.
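The shared-weights cause of CI described in point (3) can be shown in a deliberately drastic simplification: a model with a single weight serving two conflicting tasks. The tasks and numbers below are invented for illustration; real CI studies use MLPs, but the mechanism is the same -- training on task B reuses the parameter that encoded task A:

```python
def sgd(w, data, lr=0.1, epochs=100):
    # Squared-error gradient descent on a one-weight model y = w * x.
    for _ in range(epochs):
        for x, y in data:
            w -= lr * (w * x - y) * x
    return w

def err(w, data):
    return sum((w * x - y) ** 2 for x, y in data)

task_a = [(1.0, 1.0)]   # wants w = 1
task_b = [(1.0, 3.0)]   # wants w = 3 -- conflicts on the shared weight

w = sgd(0.0, task_a)
err_before = err(w, task_a)      # near zero: task A is learned
w = sgd(w, task_b)               # sequential training on task B only
err_after = err(w, task_a)       # large: task A has been overwritten
print(err_before < 1e-6 < err_after)   # prints True
```

Models that avoid CI do so by making sure task B's updates do not touch (or only slightly touch) the weights that represent task A, which is exactly the reduced-overlap theme of the proposals above.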
References (more detailed references for the tech. reports were posted before):

Brashers-Krug T., R. Shadmehr, and E. Todorov (1995): In: NIPS-94 Proceedings, to appear.
Diederich S. and M. Opper (1987). Phys. Rev. Lett. 58, 949-952.
Fahlman S. (1991): In: NIPS-91 Proceedings.
French, R.M. (1991): In: Proc. of the 13th Annual Conf. of the Cog Sci Society.
French, R.M. (1994): In: Proc. of the 16th Annual Conf. of the Cog Sci Society.
Grossberg S. (1987). Cognit. Sci. 11, 23-64.
Kanter I. & Sompolinsky H. (1987). Phys. Rev. A 35, 380-392.
Kortge, C.A. (1990): In: Proc. of the 12th Annual Conf. of the Cog Sci Society.
Kruschke, J.K. (1992). Psychological Review, 99, 22-44.
Lewandowsky, S. (1991): In: Relating theory and data: Essays on human memory in honor of Bennet B. Murdock (W. Hockley & S. Lewandowsky, Eds.).
McClelland, J., McNaughton, B., & O'Reilly, R. (1994): CMU Tech Report PNP.CNS.94.1.
McCloskey M., and Cohen N. (1989): In: The Psychology of Learning and Motivation, 24, 109-165.
McRae, K., & Hetherington, P.A. (1993): In: Proc. of the 15th Annual Conf. of the Cog Sci Society.
Murre, J.M.J. (1992): In: Proc. of the 14th Annual Conf. of the Cog Sci Society.
Murre, J.M.J. (in press): In: J. Levy et al. (Eds): Connectionist Models of Memory and Language. London: UCL Press.
Pratt, L. (1993): In: NIPS-92 Proceedings.
Ratcliff, R. (1990): Psychological Review 97, 285-308.
Ruiz de Angulo V. and C. Torras (1995): IEEE Trans. on Neural Networks, in press.
Sharkey, N.E. & Sharkey, A.J.C. (1994): Technical Report, Department of Computer Science, University of Sheffield, U.K.
Sloman, S.A., & Rumelhart, D.E. (1992): In: A. Healy et al. (Eds): From learning theory to cognitive processes: Essays in honor of William K. Estes.
Thrun S. & Mitchell T. (1994): CMU Tech Report.
Waibel A. (1989). Neural Computation 1, 39-46.
Wang D.L. and B. Yuwono (1995): IEEE Trans. on Syst. Man Cyber. 25(4), in press.
Willshaw D. (1981): In: Parallel Models of Associative Memory (G. Hinton and J. Anderson, Eds., Erlbaum).
From dayan at ai.mit.edu Thu Feb 2 21:16:48 1995
From: dayan at ai.mit.edu (Peter Dayan)
Date: Thu, 2 Feb 95 21:16:48 EST
Subject: PhD Studentship at Edinburgh
Message-ID: <9502030216.AA26937@peduncle>

=======================================================================
PhD Study in Computational Models of Spatial Learning and Memory,
with particular reference to the Hippocampal Formation

Prof Richard Morris, University of Edinburgh
Peter Dayan, MIT
=======================================================================

A Faculty scholarship from the University of Edinburgh is available for a PhD student to work on the project described below. The student will be registered with Prof Morris at the University of Edinburgh, but will also visit MIT to conduct research there.

Computational Models of Spatial Learning and Memory, with particular reference to the Hippocampal Formation

There is substantial evidence for the involvement of the hippocampal formation in the process of learning and using information about space. The purpose of this project is to build a computational model of spatial learning that addresses the related data at three levels of inquiry:

- electrophysiological studies on place cells and on synaptic plasticity at various loci in the hippocampus;
- behavioural studies on the way that animals construct models of their environments, the cues they use, and the way that they employ this information to plan paths and find targets;
- computational studies on the representation of space, model construction and reinforcement learning.

This computational model will start from and contribute to behavioural experiments in the `Manhattan Maze' (Biegler and Morris, Nature, 1993), which has been used to probe the way that particular landmarks and their inter-relationships control the way that rats orient themselves in space. Confirmatory experiments in the open field water maze will also be possible.
This project should suit a student with qualifications in mathematics or computer science who has a demonstrable interest in psychology/neurobiology, or a student with qualifications in psychology/neurobiology with demonstrable competence in computational modelling. The stipend is 6391 pounds sterling a year.

Applicants should send copies of their CV and a statement of their research interests to:

Prof RGM Morris                  Dr P Dayan
Centre for Neuroscience          Dept of Brain and Cognitive Sciences
University of Edinburgh          E25-201, MIT
Crichton Street                  Cambridge,
Edinburgh EH8 9LE                Massachusetts 02139
Scotland                         USA
r.g.m.morris at ed.ac.uk         dayan at psyche.mit.edu

Notification of the award will be made by 30th June 1995 and the studentship will start in October 1995.

From N.Sharkey at dcs.shef.ac.uk Fri Feb 3 05:02:42 1995
From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk)
Date: Fri, 3 Feb 95 10:02:42 GMT
Subject: ROBOCALL
Message-ID: <9502031002.AA15436@entropy.dcs.shef.ac.uk>

*********************************************************
*                     ROBOCALL                          *
*                                                       *
*         Special Robotics Track of EANN'95             *
*                                                       *
*    1 page abstract due by 16th February, 1995         *
*                                                       *
*********************************************************

Sorry about the late announcement; it kept bouncing. We will consider any papers relevant to the use of Neural Computing in Robotics. Email abstracts to Jill at dcs.sheffield.ac.uk.

Organiser: Noel Sharkey (N.Sharkey at dcs.sheffield.ac.uk)

Sub-Committee:
Michael Arbib (USA)
Valentino Braitenberg (Germany)
Georg Dorffner (Austria)
John Hallam (Scotland)
Ali Zalzala (England)

CONFERENCE DETAILS BELOW

International Conference on Engineering Applications of Neural Networks (EANN '95)
Helsinki, Finland
August 21-23, 1995

Final Call for Papers

The conference is a forum for presenting the latest results on neural network applications in technical fields.
The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, food engineering and environmental engineering.
Abstracts of one page (200 to 400 words) should be sent to eann95 at aton.abo.fi by *31 January 1995*, by e-mail in PostScript format, or in TeX or LaTeX. Plain ASCII is also acceptable. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 31 January 1995. Notification of acceptance will be sent around 1 March. The number of full papers will be very limited. You will receive a submission number for each abstract you send. If you haven't received one, please ask for it.
Special tracks have been set up for applications in robotics (N. Sharkey, n.sharkey at dcs.shef.ac.uk), control applications (E. Tulunay, ersin_tulunay at metu.edu.tr), biotechnology/food engineering applications (P. Linko), and the mineral and metal industry (J. van Deventer, jsjvd at maties.sun.ac.za). You can submit abstracts to the special tracks directly to their coordinators or to eann95 at aton.abo.fi.
Local program committee: A. Bulsari, J. Heikkonen (Italy), E. Hyv\"onen, P. Linko, L. Nystr\"om, S. Palosaari, H. Sax\'en, M. Syrj\"anen, J. Sepp\"anen, A. Visa
International program committee: G. Dorffner (Austria), A. da Silva (Brazil), V. Sgurev (Bulgaria), M. Thompson (Canada), B.-Z. Chen (China), V. Kurkova (Czechia), S. Dutta (France), D. Pearson (France), G. Baier (Germany), C. M. Lee (Hong Kong), J. Fodor (Hungary), L. M. Patnaik (India), H. Siegelmann (Israel), R. Baratti (Italy), R. Serra (Italy), I.
Kawakami (Japan), C. Kuroda (Japan), H. Zhang (Japan), J. K. Lee (Korea), J. Kok (Netherlands), J. Paredis (Netherlands), W. Duch (Poland), R. Tadeusiewicz (Poland), B. Ribeiro (Portugal), W. L. Dunin-Barkowski (Russia), V. Stefanuk (Russia), E. Pupyrev (Russia), S. Tan (Singapore), V. Kvasnicka (Slovakia), A. Dobnikar (Slovenia), J. van Deventer (South Africa), B. Martinez (Spain), H. Liljenstr\"om (Sweden), G. Sj\"odin (Sweden), J. Sj\"oberg (Sweden), E. Tulunay (Turkey), N. Sharkey (UK), D. Tsaptsinos (UK), N. Steele (UK), S. Shekhar (USA), J. Savkovic-Stevanovic
International Conference on Engineering Applications of Neural Networks (EANN '95) -- Registration information
The registration fee is FIM 2000 until 15 March, after which it will be FIM 2400. A discount of up to 40% will be given to some participants from East Europe and developing countries. Those who wish to avail of this discount need to apply for it. The application form can be sent by e-mail. The papers may not be included in the proceedings if the registration fee is not received before 15 April, or if the paper does not follow the specified format. If your registration fee is received before 15 February, you are entitled to attend one tutorial for free. The fee for each tutorial will be FIM 200, to be paid in cash at the conference site. No decisions have yet been made about which tutorials will be presented, since tutorial proposals can be sent until 31 January.
The registration fee should be paid to ``EANN 95'', bank account SYP (Union Bank of Finland) 220518-125251, Turku, Finland, through bank transfer, or you could send us a bank draft payable to ``EANN 95''. If it is difficult to get a bank draft in Finnish currency, you could send a bank cheque or a draft of GBP 280 (pounds sterling) until 15 March, or GBP 335 after 15 March. If you need to send it in some other way, please ask. The postal address for sending bank drafts or bank cheques is EANN '95/SEA, Post box 34, 20111 Turku 11, Finland.
The registration form can be sent by e-mail.
----------------------------------------------------------------------
If you are from East Europe or a developing country and would like to apply for a discount, please send the application first, or together with the registration form. If you get the discount and have already paid a larger amount, the difference will be refunded in cash at the conference site.
-------------------------------------------------------CUT HERE-------
Registration form
Name
Affiliation (university/company/organisation)
E-mail address
Address
Country
Fax
Have you submitted one or more abstracts ? Y/N
Registration fee sent FIM/GBP by bank transfer / bank draft / other (please specify)
Any special requirements ?
Date registration fee sent
Date registration form sent
----------------------------------------------------------------------
From pah at unixg.ubc.ca Fri Feb 3 12:57:00 1995 From: pah at unixg.ubc.ca (Phil A. Hetherington) Date: Fri, 3 Feb 1995 09:57:00 -0800 (PST) Subject: A summary on Catastrophic Interference In-Reply-To: <199502022024.PAA05441@shirt.cis.ohio-state.edu> Message-ID:
> * Reduce overlapping in hidden unit representations. French (1991, 1994), Kruschke (1992), Murre (1992), Fahlman (1991).
>
> * Prior training. Assuming later patterns are drawn from the same underlying function as earlier patterns, prior training of the general task can reduce RI (McRae and Hetherington, 1993). This proposal will not work if later patterns have little to do with previously trained patterns.
Actually, prior training is itself a method of reducing overlap at the hidden layer: it achieves the same goal naturally, and it will work even if later patterns have little to do with previously trained patterns. McRae and Hetherington demonstrated that the method was successful with autoencoders and then replicated this with a network that computed arbitrary (i.e., random) associations.
Given that prior training also consisted of random associations, there was no general function the net could derive (i.e., later patterns had little to do with previous patterns). The training merely had the effect of 'sharpening' the hidden unit responses so that later items would not be distributed across all or most units, as is the case in a naive net. I would make an empirical prediction that in a net trained sequentially to recall faces, prior training on white noise would probably suffice to reduce CI. Cheers, Phil Hetherington
From haarmann at ibalm.psy.cmu.edu Fri Feb 3 13:29:33 1995 From: haarmann at ibalm.psy.cmu.edu (Henk Haarmann) Date: Friday, 03 Feb 95 13:29:33 EST Subject: hybrid modelling workshop announcement Message-ID:
--------------------------------------------------------------------------
Cognitive Modeling Workshop, July 15-20 '95, Carnegie Mellon University
Please bring this announcement to the attention of appropriate applicants. We invite applications to attend a 6-day summer workshop on cognitive modeling to be held in the psychology department at Carnegie Mellon University in Pittsburgh from July 15 to July 20. The workshop dates have been selected to enable participants to easily attend the Cognitive Science meetings, which will also be held in Pittsburgh, July 21-25, immediately after the workshop. The workshop is intended for Ph.D. students, postdocs, and junior faculty with an active research background in cognitive psychology but with little or no experience in computer simulation modeling. Participants will be given intensive practical experience in using 3CAPS, a hybrid symbolic-connectionist modeling language, which should enable them to apply it in their own domain of research.
3CAPS focuses on the central role of working memory in constraining storage and processing in a variety of cognitive domains, including mental rotation, text processing, normal and aphasic sentence comprehension, human-computer interaction and automated telephone interaction. Several tutors will guide participants through exercises in a computer lab in each of these domains. Participants will also be shown how to develop a 3CAPS model from scratch. A final component of the program involves a contrastive analysis, that is, a comparison with a related modeling language, ACT-R, in the area of algebraic problem solving. Experience in computer modeling is not essential, but some knowledge of computer programming in general is necessary. Applicants should send a cover letter and a curriculum vitae and, in the case of graduate students, arrange for one letter of recommendation. The DEADLINE for application is March 15, 1995. The workshop intends to provide room and board and reimburse airfare (if needed) for all participants who are U.S. residents, and to provide room and board only for non-U.S. participants. Enrollment is competitive and limited, so early application is strongly encouraged. Applicants will be notified of their acceptance by April 15, 1995. The workshop is sponsored by the Division of Cognitive and Neural Science and Technology of the Office of Naval Research. Applications should be sent to:
Henk Haarmann, Cognitive Modeling Workshop, Department of Psychology, Carnegie Mellon University, Pittsburgh, PA 15213-3890
A repetition of this message with more extensive program information can be found on the World Wide Web (WWW):
From changjcc at delta.eecs.nwu.edu Fri Feb 3 14:47:36 1995 From: changjcc at delta.eecs.nwu.edu (Jyh-Chian Chang) Date: Fri, 3 Feb 1995 13:47:36 -0600 (CST) Subject: RIGID Linear Transformation with NN???
Message-ID: <9502031947.AA13146@delta.eecs.nwu.edu>
Dear NN researcher: We all know that it is easy to construct a linear NN to do linear transformations (i.e., A = RX, where A and X are vectors and R is the transformation matrix). However, is it possible to build a NN to do RIGID linear transformations? "RIGID" means that the rotation matrix R must be orthogonal (i.e., its columns form an orthonormal basis; rotation without scaling or shearing). I know that there are many non-NN methods for this problem (e.g., quaternion-based and SVD-based approaches). Does anyone here have any ideas about how to build a NN to do rigid linear transformations? Any suggestions are highly appreciated! Thank you in advance! Garry
From pollack at cs.brandeis.edu Fri Feb 3 16:17:20 1995 From: pollack at cs.brandeis.edu (Jordan Pollack) Date: Fri, 3 Feb 1995 16:17:20 -0500 Subject: Graduate Opportunities at Brandeis University Message-ID: <199502032117.QAA01990@jade.cs.brandeis.edu>
In May 1994 Brandeis University announced the opening of the new Volen National Center for Complex Systems with the goal of promoting interdisciplinary research and collaboration between faculty from Biology, Computer Science, Linguistics, Mathematics, Neuroscience, Physics, and Psychology. The Center, whose main mission is to study the brain, intelligence, and advanced computation, has already earned accolades from scientists worldwide, and continues to expand. Brandeis University is located in Waltham, a suburb 10 miles west of Boston, with easy rail access to both Cambridge and downtown Boston. Founded in 1948, it is recognized as one of the finest private liberal arts and science universities in the United States. Brandeis combines the breadth and range of academic programs usually found at much larger universities with the friendliness of a smaller and more focused research community.
The Computer Science Department is located entirely in the new Volen Center building and is the home of four Artificial Intelligence faculty actively involved in the Center activities and collaborations: Rick Alterman, Maja Mataric, Jordan Pollack, and James Pustejovsky.
Professor Alterman's research interests are in the areas of artificial intelligence and cognitive science. A recent project focused on the problems of everyday reasoning: a model was developed where agent goal-directed behavior is guided by pragmatics rather than by analytic techniques. His work with Zito-Wolf developed techniques for skill acquisition and learning; the focus was on building case representations of procedural knowledge. Work with Carpenter focused on building a reading agent that can actively seek out and interpret instructions that are relevant to a "breakdown" situation for the overall system. Two current projects support the evolution and maintenance of a collective memory for a community of distributed heterogeneous agents who plan and work cooperatively, and the building of interactive systems that improve their own performance by keeping track of the history of interactions between the end-user and the system. For more information see http://www.cs.brandeis.edu/dept/faculty/alterman.
Jordan Pollack's research interests lie at the boundary between neural and symbolic computation: how could simple neural mechanisms, organized naturally into multi-cellular structures by evolution, provide the capacity necessary for cognition, language, and general intelligence? This view has led to successful work on how variable tree-structures could be represented in neural activity patterns, how dynamical systems could act as language generators and recognizers, and how fractal limit behavior of recurrent networks could represent mental imagery.
He has also worked on evolutionary and co-evolutionary learning in strategic game-playing agents, as well as on teams of simple agents who cooperate on complex tasks. Prof. Pollack encourages students with backgrounds and interests in AI, machine learning, dynamical systems, fractals, connectionism, and ALife to apply. For more information see http://www.cs.brandeis.edu/dept/faculty/pollack.
Maja Mataric's interdisciplinary research focuses on understanding systems that integrate perception, representation, learning, and action. Her current work is applied to the synthesis and analysis of behavior in situated agents and multi-agent systems, and to learning and imitation, in software, dynamical simulations, and physical robots. Learning new behaviors and behavior selection, as well as memory and representation, are the main thrusts of the research. The newest project models learning by imitation, through the interaction of a collection of cognitive systems, including perception (attention and analysis), memory (declarative and non-declarative representations), action sequence planning, motor control, proprioception, and learning. Prof. Mataric encourages students with interests and/or backgrounds in AI, autonomous agents, machine learning, cognitive science, and cognitive neuroscience to apply. For more information see http://www.cs.brandeis.edu/dept/faculty/mataric.
James Pustejovsky conducts research in the areas of computational linguistics, lexical semantics, information retrieval and extraction, and aphasiology. The main focus of his research is on the computational and cognitive modeling of natural language meaning; more specifically, on how words and their meanings combine to form meaningful texts. This research has focused on developing a theory of lexical semantics based on a methodology making use of formal and computational semantics.
There are several projects applying the results of this theory to Natural Language Processing, which, in effect, empirically test this view of semantics. These include an NSF grant with Apple to automatically construct index libraries and help systems for applications, and a DEC grant to automatically convert a trouble-shooting text corpus into a case library. He is also currently working with aphasiologist Dr. Susan Kohn on word-finding difficulties and sentence generation in aphasics. For more information see http://www.cs.brandeis.edu/dept/faculty/pustejovsky.
The four AI faculty work together and with other members of the Volen Center, creating new interdisciplinary research opportunities in areas including cognitive science (http://fechner.ccs.brandeis.edu/cogsci.html), computational neuroscience, and complex systems at Brandeis University. To get more information about the Volen Center for Complex Systems, about the Computer Science Department, and about the other CS faculty, see: http://www.cs.brandeis.edu/dept The URL for the graduate admission information is http://www.cs.brandeis.edu/dept/grad-info/application.html
From tesauro at watson.ibm.com Fri Feb 3 16:59:43 1995 From: tesauro at watson.ibm.com (tesauro@watson.ibm.com) Date: Fri, 3 Feb 95 16:59:43 EST Subject: NIPS*94 proceedings Message-ID:
This is to announce a change of publisher for the proceedings of NIPS*94. The proceedings will be published by MIT Press. All conference attendees will receive a free copy of the proceedings. (Attendees whose mailing address has changed since they registered for the meeting should send their new address to nips94 at mines.colorado.edu.) The citation for the volume will be: G. Tesauro, D. S. Touretzky and T. K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995.
-- Gerry Tesauro, NIPS*94 General Chair
From esann at dice.ucl.ac.be Fri Feb 3 17:00:33 1995 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Sat, 4 Feb 1995 00:00:33 +0200 Subject: Neural Processing Letters Vol.2 No.1 Message-ID: <9502032258.AA17826@ns1.dice.ucl.ac.be>
You will find enclosed the table of contents of the January 1995 issue of "Neural Processing Letters" (Vol.2 No.1). We also inform you that subscription to the journal is now possible by credit card. All necessary information is available on the following servers:
- FTP server: ftp.dice.ucl.ac.be directory: /pub/neural-nets/NPL
- WWW server: http://www.dice.ucl.ac.be/neural-nets/NPL/NPL.html
If you have no access to these servers, or for any other information (subscriptions, instructions for authors, free sample copies,...), please don't hesitate to contact the publisher directly:
D facto publications, 45 rue Masui, B-1210 Brussels, Belgium. Phone: +32 2 245 43 63 Fax: +32 2 245 46 94
Neural Processing Letters, Vol.2, No.1, January 1995
____________________________________________________
- A new scheme for incremental learning, C. Jutten, R. Chentouf
- A nonlinear extension of the Generalized Hebbian learning, J. Joutsensalo, J. Karhunen
- Morphogenesis of neural networks, O. Michel, J. Biondi
- Compartmental modelling with artificial neural networks, C.J. Coomber
- Quantitative object motion prediction by an ART2 and Madaline combined neural network, Q. Zhu, A.Y. Tawfik
- An ANNs-based system for the diagnosis and treatment of diseases, G.-P. K. Economou, D. Lymberopoulos, C.E. Goutis
- A learning algorithm for Recurrent Radial-Basis Function networks, M.W.
Mak _____________________________ D facto publications - conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 _____________________________ From P.McKevitt at dcs.shef.ac.uk Fri Feb 3 08:12:43 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Fri, 3 Feb 95 13:12:43 GMT Subject: REACHING-FOR-MIND: AISB-95 WKSHOP/FINAL-CALL Message-ID: <9502031312.AA24988@dcs.shef.ac.uk> <> <> <> ******************************************************************************* REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND REACHING FOR MIND Advance Announcement FINAL CALL FOR PAPERS AND PARTICIPATION AISB-95 Workshop on REACHING FOR MIND: FOUNDATIONS OF COGNITIVE SCIENCE April 3rd/4th 1995 at the The Tenth Biennial Conference on AI and Cognitive Science (AISB-95) (Theme: Hybrid Problems, Hybrid Solutions) Halifax Hall University of Sheffield Sheffield, England (Monday 3rd -- Friday 7th April 1995) Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) Chair: Sean O Nuallain Dublin City University, Dublin, Ireland & National Research Council, Ottawa, Canada Co-Chair: Paul Mc Kevitt Department of Computer Science University of Sheffield, England WORKSHOP COMMITTEE: John Barnden (New Mexico State University, NM, USA) Istvan Berkeley (University of Alberta, Canada) Mike Brady (Oxford, England) Harry Bunt (ITK, Tilburg, The Netherlands) Peter Carruthers (University of Sheffield, England) Daniel Dennett (Tufts University, USA) Eric Dietrich (SUNY Binghamton, NY, USA) Jerry Feldman (ICSI, UC Berkeley, USA) John Frisby (University of Sheffield, England) Stevan Harnad (University of Southampton, England) James Martin (University of Colorado at Boulder, CO, USA) John Macnamara (McGill University, Canada) Mike McTear (Universities of Ulster and Koblenz, Germany) Ryuichi Oka (RWC P, Tsukuba, Japan) Jordan Pollack 
(Ohio State University, OH, USA) Zenon Pylyshyn (Rutgers University, USA) Ronan Reilly (University College, Dublin, Ireland) Roger Schank (ILS, Northwestern, USA) Noel Sharkey (University of Sheffield, England) Walther v.Hahn (University of Hamburg, Germany) Yorick Wilks (University of Sheffield, England)
WORKSHOP DESCRIPTION
The assumption underlying this workshop is that Cognitive Science (CS) is in crisis. The crisis manifests itself, as exemplified by the recent Buffalo summer institute, in a complete lack of consensus among even the biggest names in the field on whether CS has, or indeed should have, a clearly identifiable focus of study; the issue of identifying this focus is a separate and more difficult one. Though academic programs in CS have in general settled into a pattern compatible with classical computationalist CS (Pylyshyn 1984, Von Eckardt 1993), including the relegation from focal consideration of consciousness, affect and social factors, two fronts have been opened against this classical position. The first front is well-publicised and highly visible. Both Searle (1992) and Edelman (1992) refuse to grant any special status to information processing in the explanation of mental process. In contrast, they argue, we should focus on Neuroscience on the one hand and Consciousness on the other. The other front is ultimately the more compelling one. It consists of those researchers from inside CS who are currently working on consciousness, affect and social factors and do not see any incompatibility between this research and their vision of CS, which is that of a Science of Mind (see Dennett 1993, O Nuallain (in press), Mc Kevitt and Partridge 1991, Mc Kevitt and Guo 1994).
References
Dennett, D. (1993) Review of John Searle's "The Rediscovery of the Mind". The Journal of Philosophy, 1993, pp 193-205.
Edelman, G. (1992) Bright Air, Brilliant Fire. Basic Books.
Mc Kevitt, P. and D.
Partridge (1991) Problem description and hypothesis testing in Artificial Intelligence In ``Artificial Intelligence and Cognitive Science '90'', Springer-Verlag British Computer Society Workshop Series, McTear, Michael and Norman Creaney (Eds.), 26-47, Berlin, Heidelberg: Springer-Verlag. Also, in Proceedings of the Third Irish Conference on Artificial Intelligence and Cognitive Science (AI/CS-90), University of Ulster at Jordanstown, Northern Ireland, EU, September and as Technical Report 224, Department of Computer Science, University of Exeter, GB- EX4 4PT, Exeter, England, EU, September, 1991. Mc Kevitt, P. and Guo, Cheng-ming (1995) From Chinese rooms to Irish rooms: new words on visions for language. Artificial Intelligence Review Vol. 8. Dordrecht, The Netherlands: Kluwer-Academic Publishers. (unabridged version) First published: International Workshop on Directions of Lexical Research, August, 1994, Beijing, China. O Nuallain, S (in press) The Search for Mind: a new foundation for CS. Norwood: Ablex Pylyshyn, Z.(1984) Computation and Cognition. MIT Press Searle, J (1992) The rediscovery of the mind. MIT Press. Von Eckardt, B. (1993) What is Cognitive Science? MIT Press WORKSHOP TOPICS: The tension which riddles current CS can therefore be stated thus: CS, which gained its initial capital by adopting the computational metaphor, is being constrained by this metaphor as it attempts to become an encompassing Science of Mind. Papers are invited for this workshop which: * Address the central tension * Propose an overall framework for CS (as attempted, inter alia, by O Nuallain (in press)) * Explicate the relations between the disciplines which comprise CS. 
* Relate educational experiences in the field * Describe research outside the framework of classical computationalist CS in the context of an alternative framework * Promote a single logico-mathematical formalism as a theory of Mind (as attempted by Harmony theory) * Disagree with the premise of the workshop Other relevant topics include: * Classical vs. neuroscience representations * Consciousness vs. Non-consciousness * Dictated vs. emergent behaviour * A life/Computational intelligence/Genetic algorithms/Connectionism * Holism and the move towards Zen integration The workshop will focus on three themes: * What is the domain of Cognitive Science ? * Classic computationalism and its limitations * Neuroscience and Consciousness WORKSHOP FORMAT: Our intention is to have as much discussion as possible during the workshop and to stress panel sessions and discussion rather than having formal paper presentations. The workshop will consist of half-hour presentations, with 15 minutes for discussion at the end of each presentation and other discussion sessions. A plenary session at the end will attempt to resolve the themes emerging from the different sessions. ATTENDANCE: We hope to have an attendance between 25-50 people at the workshop. Given the urgency of the topic, we expect it to be of interest not only to scientists in the AI/Cognitive Science (CS) area, but also to those in other of the sciences of mind who are curious about CS. We envisage researchers from Edinburgh, Leeds, York, Sheffield and Sussex attending from within England and many overseas visitors as the Conference Programme is looking very international. SUBMISSION REQUIREMENTS: Papers of not more than 8 pages should be submitted by electronic mail (preferably uuencoded compressed postscript) to Sean O Nuallain at the E-mail address(es) given below. If you cannot submit your paper by E-mail please submit three copies by snail mail. 
*******Submission Deadline: February 13th 1995 *******Notification Date: February 25th 1995 *******Camera ready Copy: March 10th 1995 PUBLICATION: Workshop notes/preprints will be published. If there is sufficient interest we will publish a book on the workshop possibly with the American Artificial Intelligence Association (AAAI) Press. WORKSHOP CHAIR: Sean O Nuallain ((Before Dec 23:)) Knowledge Systems Lab, Institute for Information Technology, National Research Council, Montreal Road, Ottawa Canada K1A OR6 Phone: 1-613-990-0113 E-mail: sean at ai.iit.nrc.ca FaX: 1-613-95271521 ((After Dec 23:)) Dublin City University, IRL- Dublin 9, Dublin Ireland, EU WWW: http://www.compapp.dcu.ie Ftp: ftp.vax1.dcu.ie E-mail: onuallains at dcu.ie FaX: 353-1-7045442 Phone: 353-1-7045237 AISB-95 WORKSHOPS AND TUTORIALS CHAIR: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. E-mail: robertg at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114 278-0972 Phone: +44 (0) 114 282-5572 AISB-95 CONFERENCE/LOCAL ORGANISATION CHAIR: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 
282-5590 (Secretary)
AISB-95 REGISTRATION: Alison White, AISB Executive Office, Cognitive and Computing Sciences (COGS), University of Sussex, Falmer, Brighton, England, UK, BN1 9QH. Email: alisonw at cogs.susx.ac.uk WWW: http://www.cogs.susx.ac.uk/aisb Ftp: ftp.cogs.susx.ac.uk/pub/aisb Tel: +44 (0) 1273 678448 Fax: +44 (0) 1273 671320
AISB-95 ENQUIRIES: Gill Wells, Administrative Assistant, AISB-95, Department of Computer Science, Regent Court, 211 Portobello Street, University of Sheffield, GB- S1 4DP, Sheffield, UK, EU. Email: g.wells at dcs.shef.ac.uk Fax: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5590 Email: aisb95 at dcs.shef.ac.uk (for auto responses) WWW: http://www.dcs.shef.ac.uk/aisb95 [Sheffield Computer Science] Ftp: ftp.dcs.shef.ac.uk (cd aisb95) WWW: http://www.shef.ac.uk/ [Sheffield Computing Services] Ftp: ftp.shef.ac.uk (cd aisb95) WWW: http://ijcai.org/ [IJCAI-95, MONTREAL] WWW: http://www.cogs.susx.ac.uk/aisb [AISB SOCIETY SUSSEX] Ftp: ftp.cogs.susx.ac.uk/pub/aisb
VENUE: The venue for registration and all conference events is: Halifax Hall of Residence, Endcliffe Vale Road, GB- S10 5DF, Sheffield, UK, EU. Fax: +44 (0) 114-266-3898 Tel: +44 (0) 114-266-3506 (24 hour porter) Tel: +44 (0) 114-266-4196 (manager)
SHEFFIELD: Sheffield is one of the friendliest cities in Britain and is well situated, with the best and closest surrounding countryside of any major city in the UK. The Peak District National Park is only minutes away. It is a good city for walkers, runners, and climbers. It has two theatres, the Crucible and the Lyceum; the Lyceum, a beautiful Victorian theatre, has recently been renovated. The city also has three 10-screen cinemas, and a library theatre which shows more artistic films. The city has a large number of museums, many of which demonstrate Sheffield's industrial past, and there are a number of galleries in the City, including the Mapping Gallery and Ruskin.
A number of important ancient houses are close to Sheffield, such as Chatsworth House, and the Peak District National Park is a beautiful place for visiting and rambling. There are large shopping areas in the City, and by 1995 Sheffield will be served by a 'supertram' system: the line to the Meadowhall shopping and leisure complex is already open. The University of Sheffield's Halls of Residence are situated on the western side of the city in a leafy residential area described by John Betjeman as ``the prettiest suburb in England''. Halifax Hall is centred on a local steel baron's house dating back to 1830 and set in extensive grounds; it was later acquired by the University and converted into a Hall of Residence for women with the addition of a new wing.
ARTIFICIAL INTELLIGENCE AT SHEFFIELD: Sheffield Computer Science Department has a strong programme in Cognitive Systems and has a large research group (AINN) studying Artificial Intelligence and Neural Networks. It is strongly connected to the University's Institute for Language, Speech and Hearing (ILASH). ILASH has its own machines and support staff, and academic staff attached to it from nine departments. Sheffield Psychology Department has the Artificial Intelligence Vision Research Unit (AIVRU), which was founded in 1984 to coordinate a large industry/university Alvey research consortium working on the development of computer vision systems for autonomous vehicles and robot workstations. Sheffield Philosophy Department has the Hang Seng Centre for Cognitive Studies, founded in 1992, which runs a workshop/conference series on a two-year cycle on topics of interdisciplinary interest (1992-4: 'Theory of mind'; 1994-6: 'Language and thought'). The Department of Automatic Control and Systems Engineering is conducting research into Neural Networks for medical and other applications.
AI and Cognitive Science researchers at Sheffield include Guy Brown, Peter Carruthers, Malcolm Crawford, Joe Downs, Phil Green, John Frisby, Robert Gaizauskas, Rob Harrison, Mark Hepple, Zhe Ma, John Mayhew, Jim McGregor, Paul Mc Kevitt, Bob Minors, Rod Nicolson, Tony Prescott, Peter Scott, Steve Renals, Noel Sharkey, and Yorick Wilks. From rafal at mech.gla.ac.uk Sat Feb 4 07:24:56 1995 From: rafal at mech.gla.ac.uk (Rafal W Zbikowski) Date: Sat, 4 Feb 1995 12:24:56 GMT Subject: Workshop on Neurocontrol Message-ID: <2217.199502041224@gryphon.mech.gla.ac.uk> CALL FOR PAPERS Neural Adaptive Control Technology Workshop: NACT I 18--19 May, 1995 University of Glasgow Scotland, UK NACT Project ^^^^^^^^^^^^ The first of a series of three workshops on Neural Adaptive Control Technology (NACT) will take place on May 18--19 1995 in Glasgow, Scotland. This event is being organised in connection with a three-year European Union funded Basic Research Project in the ESPRIT framework. The project is a collaboration between Daimler-Benz Systems Technology Research, Berlin, Germany and the Control Group, Department of Mechanical Engineering, University of Glasgow, Glasgow, Scotland. The project, which began on 1 April 1994, is a study of the fundamental properties of neural network based adaptive control systems. Where possible, links with traditional adaptive control systems will be exploited. A major aim is to develop a systematic engineering procedure for designing neural controllers for non-linear dynamic systems. The techniques developed will be evaluated on concrete industrial problems from within the Daimler-Benz group of companies: Mercedes-Benz AG, Deutsche Aerospace (DASA), AEG and DEBIS. The project leader is Dr Ken Hunt (Daimler-Benz) and the other principal investigator is Professor Peter Gawthrop (University of Glasgow). 
NACT I Workshop ^^^^^^^^^^^^^^^ The aim of the workshop is to bring together selected invited specialists in the fields of adaptive control, non-linear systems and neural networks. A number of contributed papers will also be included. As well as paper presentations, significant time will be allocated to round-table and discussion sessions. In order to create a fertile atmosphere for significant information interchange, we aim to attract active specialists in the relevant fields. Proceedings of the meeting will be published in an edited book format. A social programme will be prepared for the weekend immediately following the meeting, during which participants will be able to sample the various cultural and recreational offerings of Central Scotland (a visit to a whisky distillery is included) and the easily reached Highlands. Contributed papers ^^^^^^^^^^^^^^^^^^ The Program Committee is soliciting contributed papers in the area of neurocontrol for presentation at the conference and publication in the Proceedings. Submissions should take the form of an extended abstract of six pages in length, and the DEADLINE is 1 March 1995. Accepted extended abstracts will be circulated to participants in a Workshop digest. Following the Workshop, selected authors will be asked to prepare a full paper for publication in the proceedings. This will take the form of an edited book produced by an international publisher. LaTeX style files will be available for document preparation. Each submitted paper must be headed with a title, the names, affiliations and complete mailing addresses (including e-mail) of all authors, a list of three keywords, and the statement "NACT I". The first named author of each paper will be used for all correspondence unless otherwise requested. Final selection of papers will be announced in mid-March 1995. 
Address for submissions ^^^^^^^^^^^^^^^^^^^^^^^ Dr Rafal Zbikowski Department of Mechanical Engineering James Watt Building University of Glasgow Glasgow G12 8QQ Scotland, UK rafal at mech.gla.ac.uk Schedule summary ^^^^^^^^^^^^^^^^ 1 March 1995 Deadline for submission of contributed papers Mid-March 1995 Notification regarding acceptance of papers 18-19 May 1995 Workshop From dsilver at csd.uwo.ca Sun Feb 5 09:42:31 1995 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Sun, 5 Feb 95 9:42:31 EST Subject: Number of linear dichotomies of a binary d-dim space Message-ID: <9502051442.AA01092@church.ai.csd.uwo.ca.csd.uwo.ca> It is well known that a d-dimensional 2-class hypothesis space containing n patterns in "general position" will have L(n,d) linear dichotomies [see 1,2,3], where L(n,d) is computed using the recursive equation: L(n,d) = L(n-1,d) + L(n-1,d-1) with the boundary conditions L(1,d) = 2 and L(n,1) = 2n. This can also be stated in closed form as: L(n,d) = 2 SUM(i=0 to d) C(n-1,i) for n > d L(n,d) = 2^n for n <= d where C(n-1,i) is the binomial coefficient "n-1 choose i". Example: d = 2, n = 4, then: L(4,2) = L(3,2) + L(3,1) = L(2,2) + L(2,1) + 2(3) = L(1,2) + L(1,1) + 4 + 6 = 2 + 2 + 4 + 6 = 14 Furthermore, there will be L(n,d)/2 (e.g. 14/2 = 7) hyperplanes involved in partitioning the classes for the (e.g. 14) dichotomies. However, if the patterns are based on binary components, then the pattern points are the vertices of a hypercube; in which case the general position condition does not hold and the L(n,d) value provides only an upper bound on the number of linear dichotomies. Question: Is there an expression/procedure for computing the exact number of linear dichotomies of a binary d-dimensional hypothesis space? Any information would be most helpful. I will collect and summarize received replies on this matter. .. Many thanks, Danny. Ref: (1) "The Mathematical Foundations of Learning Machines" by Nils J. Nilsson, Morgan Kaufmann, San Mateo, CA; 1990 (originally published in 1965 as "Learning Machines"). -- pp. 
32-34. (2) "Neural Networks in Computer Intelligence" by LiMin Fu, McGraw-Hill, Inc., NY, 1994. -- pp. 71-73. (3) A set of n points is said to be in "general position" in a d-dimensional space iff no subset of d+1 points lies on any (d-1)-dimensional hyperplane. In the case of a binary hypercube, this implies that no subset of d+1 pattern points may lie on any one (d-1)-dimensional hypercube face. -- ========================================================================= = Daniel L. Silver University of Western Ontario, London, Canada = = N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b = = dsilver at csd.uwo.ca H: (519)473-6168 O: (519)679-2111 (ext.6903) = ========================================================================= From S.W.Ellacott at bton.ac.uk Mon Feb 6 05:12:09 1995 From: S.W.Ellacott at bton.ac.uk (ellacott) Date: Mon, 6 Feb 95 10:12:09 GMT Subject: No subject Message-ID: <9502061012.AA02413@diamond.bton.ac.uk> ************************* MAIL FROM STEVE ELLACOTT ************************** *******REMINDER : ABSTRACTS DUE 17TH FEBRUARY*********** 2nd Announcement and CALL FOR PAPERS MATHEMATICS of NEURAL NETWORKS and APPLICATIONS (MANNA 1995) International Conference at Lady Margaret Hall, Oxford, July 3-7, 1995 run by the University of Huddersfield in association with the University of Brighton We are delighted to announce the first conference on the Mathematics of Neural Networks and Applications (MANNA), in which we aim to provide both top-class research and a friendly, motivating atmosphere. The venue, Lady Margaret Hall, is an Oxford college, set in an attractive and quiet location adjacent to the University Parks and the River Cherwell. 
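[Editor's note on the dichotomy-counting question above: the recursion and its closed form are easy to cross-check in code. The following is an illustrative sketch, not part of the original posting; the function names are ours.]

```python
from math import comb

def dichotomies(n, d):
    """Closed form: number of linear dichotomies of n points in
    general position in d dimensions."""
    if n <= d:
        return 2 ** n
    return 2 * sum(comb(n - 1, i) for i in range(d + 1))

def dichotomies_rec(n, d):
    """Same count via the recursion L(n,d) = L(n-1,d) + L(n-1,d-1),
    with boundary conditions L(1,d) = 2 and L(n,1) = 2n."""
    if n == 1:
        return 2
    if d == 1:
        return 2 * n
    return dichotomies_rec(n - 1, d) + dichotomies_rec(n - 1, d - 1)

print(dichotomies(4, 2), dichotomies_rec(4, 2))  # prints "14 14", as in the worked example
```

Both routines agree on the worked example L(4,2) = 14 and reduce to L(n,d) = 2^n whenever n <= d; for binary (hypercube-vertex) patterns these values are, as the posting notes, only an upper bound.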
Applications of neural networks (NNs) have often been carried out with a limited understanding of the underlying mathematics, but it is now essential that fuller account should be taken of the many topics that contribute to NNs: approximation theory, control theory, genetic algorithms, dynamical systems, numerical analysis, optimisation, statistical decision theory, statistical mechanics, computability and information theory, etc. We aim to consider the links between these topics and the insights they offer, and to identify mathematical tools and techniques for analysing and developing NN theories, algorithms and applications. Working sessions and panel discussions are planned. Keynote speakers who have provisionally accepted invitations include: N M Allinson (York University, UK) S Grossberg (Boston, USA) S-i Amari (Tokyo) M Hirsch (Berkeley, USA) N Biggs (LSE, London) T Poggio (MIT, USA) G Cybenko (Dartmouth, USA) H Ritter (Bielefeld, Germany) J G Taylor (King's College, London) P C Parks (Oxford) It is anticipated that about 40 contributed papers and posters will be presented. The proceedings will be published, probably as a volume of an international journal, and contributed papers will be considered for inclusion. The deadline for submission of abstracts is 17 February 1995. Accommodation will be available at Lady Margaret Hall (LMH), where many rooms have en-suite facilities - early booking is recommended. The conference will start with Monday lunch and end with Friday lunch, and there will be a full-board charge (including conference dinner) of about £235 for this period as well as a modest conference fee (to be fixed later). We hope to be able to offer a reduction in fees to those who present submitted papers and to students. There will be a supporting social programme, including reception, outing(s) and conference dinner, and family accommodation may be arranged in local guest houses. Please indicate your interest by returning the form below. 
A booking form will be sent to you. Thanking you in anticipation. Committee: S W Ellacott (Brighton) and J C Mason (Huddersfield) Co-organisers: I Aleksander, N M Allinson, N Biggs, C M Bishop, D Lowe, P C Parks, J G Taylor, K Warwick ______________________________________________________________________________ To: Ros Hawkins, School of Computing and Mathematics, University of Huddersfield, Queensgate, Huddersfield, West Yorkshire, HD1 3DH, England. (Email: j.c.mason at hud.ac.uk) Please send further information on MANNA, July 3 - 7, 1995 Name .......................Address .......................................... ............................................................................. ............................................................................. Telephone ............................. Fax .................................. E Mail ................................ I intend/do not intend to submit a paper Area of proposed contribution ................................................ ***************************************************************************** From lemm at LORENTZ.UNI-MUENSTER.DE Mon Feb 6 12:14:33 1995 From: lemm at LORENTZ.UNI-MUENSTER.DE (Joerg_Lemm) Date: Mon, 6 Feb 1995 18:14:33 +0100 Subject: paper available: Lorentzian Neural Nets Message-ID: <9502061714.AA24463@xtp141.uni-muenster.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/giraud.lorentz.ps.Z The file giraud.lorentz.ps.Z is now available for copying from the Neuroprose repository: (27 pages, compressed file size 215K) TITLE: LORENTZIAN NEURAL NETS AUTHORS: B.G.Giraud Service Physique Theorique, DSM, C.E.Saclay, 91191 Gif/Yvette, France, A.Lapedes and L.C.Liu Theoretical Division, Los Alamos National Laboratory, 87545 Los Alamos, NM, USA, J.C.Lemm Institut fuer Theoretische Physik I, Muenster University, 48149 Muenster, Germany ABSTRACT: We consider neural units whose response functions are Lorentzians rather than the usual sigmoids or steps. 
This consideration is justified by the fact that neurons can be paired and that a suitable difference of the sigmoids of the paired neurons can create a window response function. Lorentzians are special cases of such windows and we take advantage of their simplicity to generate polynomial equations for several problems such as: i) fixed points of a completely connected net, ii) classification of operational modes, iii) training of a feedforward net, iv) process signals represented by complex numbers. Keywords: Neural Networks, Window-like response functions, Lorentzians and rational fractions, Integer coefficients, Training by solving polynomial systems, Processing of complex numbers, Fixed number of solutions, Classification of operational modes. URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/giraud.lorentz.ps.Z From P.McKevitt at dcs.shef.ac.uk Sat Feb 4 10:12:11 1995 From: P.McKevitt at dcs.shef.ac.uk (Paul Mc Kevitt) Date: Sat, 4 Feb 95 15:12:11 GMT Subject: No subject Message-ID: <9502041512.AA24541@dcs.shef.ac.uk> ============================================================================== 11 0000000 TH 11 0 0 11 0 0 ANNIVERSARY AISB CONFERENCE 11 0 0 11 0000000 AISB-95 The Tenth Biennial Conference on AI and Cognitive Science SHEFFIELD, ENGLAND Monday 3rd -- Friday 7th April 1995 THEME Hybrid Problems, Hybrid Solutions PROGRAMME CHAIR John Hallam (University of Edinburgh) WORKSHOPS/TUTORIALS CHAIR Robert Gaizauskas (University of Sheffield) CONFERENCE CHAIR/LOCAL ORGANISATION Paul Mc Kevitt (University of Sheffield) ======================================================(SEE WWW FOR MORE DETAILS) EACL-95 (DUBLIN, IRELAND)====================================================== COME TO EACL-95 AT DUBLIN AND THEN FLY TO AISB-95 AT SHEFFIELD EACL-95 7th Conference of the European Chapter of the Association for Computational Linguistics March 27-31, 1995 University College Dublin Belfield, Dublin, IRELAND FOR ANYONE COMING FROM EACL-95 (DUBLIN) THERE ARE 
FLIGHTS FROM DUBLIN TO **MANCHESTER**, LEEDS/BRADFORD, LIVERPOOL, LONDON AND MIDLANDS ON CARRIERS SUCH AS AER LINGUS, BRITISH MIDLAND, RYANAIR. (SEE ATTACHED INSERT BELOW) =============================================================================== AISB-95 Halifax Hall of Residence & Computer Science Department University of Sheffield Sheffield, ENGLAND HOSTED BY The Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB) and The Department of Computer Science (University of Sheffield) IN COOPERATION WITH Departments of Automatic Control and Systems Engineering, Information Studies, Philosophy, Psychology Artificial Intelligence Vision Research Unit (AIVRU) Hang-Seng Centre for Cognitive Studies Institute for Language, Speech and Hearing (ILASH) (University of Sheffield) Dragon Systems UK Limited (Melvyn Hunt) LPA Limited (Clive Spenser) Sharp Laboratories Europe Limited (Paul Kearney) Wisepress Limited (Penelope G.Head) MAIN CONFERENCE Wednesday 5th - Friday 7th April 1995 WORKSHOPS AND TUTORIALS Monday 3rd - Tuesday 4th April 1995 INVITED SPEAKERS +++ Professor ALEX GAMMERMAN +++ (Department of Computer Science, Royal Holloway and Bedford New College, University of London, England) +++ Professor MALIK GHALLAB +++ (LAAS-CNRS, Toulouse, France) +++ Professor GRAEME HIRST +++ (Department of Computer Science, University of Toronto, Canada) +++ Professor JOHN MAYHEW +++ (AIVRU, University of Sheffield, England) +++ Professor NOEL SHARKEY +++ (Department of Computer Science, University of Sheffield, England) PROGRAMME CHAIR John Hallam (University of Edinburgh) PROGRAMME COMMITTEE Dave Cliff (University of Sussex) Erik Sandewall (University of Linkoeping) Nigel Shadbolt (University of Nottingham) Sam Steel (University of Essex) Yorick Wilks (University of Sheffield) WORKSHOPS/TUTORIALS CHAIR Robert Gaizauskas (University of Sheffield) CONFERENCE CHAIR/LOCAL ORGANISATION Paul Mc Kevitt (University of Sheffield) LOCAL ORGANISATION COMMITTEE 
Phil Green (University of Sheffield) Jim McGregor (University of Sheffield) Bob Minors (University of Sheffield) Tony Prescott (University of Sheffield) Tony Simons (University of Sheffield) PUBLICITY Malcolm Crawford (University of Sheffield) Mark Lee (University of Sheffield) Derek Marriott (University of Sheffield) Simon Morgan (Cambridge) ADMINISTRATIVE ASSISTANT Gill Wells (University of Sheffield) AISB OFFICE (UNIVERSITY OF SUSSEX) Tony Cohn (Chairman) Roger Evans (Treasurer) Chris Thornton (Secretary) Alison White (Executive Office) THEME The world's oldest AI society, the Society for the Study of Artificial Intelligence and Simulation of Behaviour (AISB), will hold its Tenth Biennial International Conference at The University of Sheffield. The past few years have seen an increasing tendency for diversification in research into Artificial Intelligence, Cognitive Science and Artificial Life. A number of approaches are being pursued, based variously on symbolic reasoning, connectionist systems and models, behaviour-based systems, and ideas from complex dynamical systems. Each has its own particular insight and philosophical position. This variety of approaches appears in all areas of Artificial Intelligence. There is, for instance, both symbolic and connectionist natural language processing, and both classical and behaviour-based vision research. While purists from each approach may claim that all the problems of cognition can in principle be tackled without recourse to other methods, in practice (and maybe in theory, also) combinations of methods from the different approaches (hybrid methods) are more successful than a pure approach for certain kinds of problems. The committee feels that there is an unrealised synergy between the various approaches that an AISB conference may be able to explore. Thus, the focus of the tenth AISB Conference is on such hybrid methods. 
The AISB conference is a single track conference lasting three days, with a two day tutorial and workshop programme preceding the main technical event, and around twenty high calibre papers will be presented in the technical sessions. Five invited talks by respected and entertaining world class researchers complete the programme. The proceedings of the conference will be published in book form at the conference itself, making it a forum for rapid dissemination of research results. The preliminary programme for the conference is attached below. Note that the organisers reserve the right to alter the programme as circumstances dictate, though every effort will be made to adhere to the provisional timings and calendar of events given below. __________________________________________________________________________ FURTHER INFORMATION E-mail: aisb95 at dcs.shef.ac.uk (for auto responses) __________________________________________________________________________ AISB-95 CONFERENCE CHAIR/LOCAL ORGANISATION: Paul Mc Kevitt Department of Computer Science Regent Court 211 Portobello Street University of Sheffield GB- S1 4DP, Sheffield England, UK, EU. E-mail: p.mckevitt at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5572 (Office) 282-5596 (Lab.) 282-5590 (Secretary) AISB-95 WORKSHOPS AND TUTORIALS CHAIR: Dr. Robert Gaizauskas Department of Computer Science University of Sheffield 211 Portobello Street Regent Court Sheffield S1 4DP U.K. E-mail: robertg at dcs.shef.ac.uk WWW: http://www.dcs.shef.ac.uk/ WWW: http://www.shef.ac.uk/ Ftp: ftp.dcs.shef.ac.uk FaX: +44 (0) 114 278-0972 Phone: +44 (0) 114 282-5572 AISB-95 PROGRAMME CHAIR: John Hallam Department of Artificial Intelligence University of Edinburgh 5 Forrest Hill Edinburgh EH1 2QL SCOTLAND. 
E-mail: john at aifh.edinburgh.ac.uk FAX: + 44 (0) 1 31 650 6899 Phone: + 44 (0) 1 31 650 3097 ADDRESS (for registrations) Alison White AISB Executive Office Cognitive and Computing Sciences (COGS) University of Sussex Falmer, Brighton England, UK, BN1 9QH Email: alisonw at cogs.susx.ac.uk WWW: http://www.cogs.susx.ac.uk/aisb Ftp: ftp.cogs.susx.ac.uk/pub/aisb Tel: +44 (0) 1273 678448 Fax: +44 (0) 1273 671320 ADDRESS (for general enquiries) Gill Wells, Administrative Assistant, AISB-95, Department of Computer Science, Regent Court, 211 Portobello Street, University of Sheffield, GB- S1 4DP, Sheffield, UK, EU. Email: g.wells at dcs.shef.ac.uk Fax: +44 (0) 114-278-0972 Phone: +44 (0) 114-282-5590 Email: aisb95 at dcs.shef.ac.uk (for auto responses) WWW: http://www.dcs.shef.ac.uk/aisb95 [Sheffield Computer Science] Ftp: ftp.dcs.shef.ac.uk (cd aisb95) WWW: http://www.shef.ac.uk/ [Sheffield Computing Services] Ftp: ftp.shef.ac.uk (cd aisb95) WWW: http://ijcai.org/ [IJCAI-95, MONTREAL] WWW: http://www.cogs.susx.ac.uk/aisb [AISB SOCIETY SUSSEX] Ftp: ftp.cogs.susx.ac.uk/pub/aisb =============================================================================== From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Mon Feb 6 17:25:46 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Mon, 06 Feb 95 17:25:46 EST Subject: NIPS*95 Call for Papers Message-ID: <16644.792109546@DST.BOLTZ.CS.CMU.EDU> CALL FOR PAPERS Neural Information Processing Systems Natural and Synthetic Monday, Nov. 27 - Saturday, Dec. 2, 1995 Denver, Colorado This is the ninth meeting of an interdisciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. The conference will include invited talks, and oral and poster presentations of refereed papers. There will be no parallel sessions. 
There will also be one day of tutorial presentations (Nov. 27) preceding the regular session, and two days of focused workshops will follow at a nearby ski area (Dec. 1-2). Major categories for paper submission, with example subcategories, are as follows: Neuroscience: systems physiology, signal and noise analysis, oscillations, synchronization, mechanisms of inhibition and neuromodulation, synaptic plasticity, computational models Theory: computational learning theory, complexity theory, dynamical systems, statistical mechanics, probability and statistics, approximation and estimation theory Implementation: analog and digital VLSI, novel neuro-devices, neurocomputing systems, optical, simulation tools, parallelism Algorithms and Architectures: learning algorithms, decision trees, constructive/pruning algorithms, localized basis functions, recurrent networks, genetic algorithms, combinatorial optimization, performance comparisons Visual Processing: image recognition, coding and classification, stereopsis, motion detection and tracking, visual psychophysics Speech, Handwriting and Signal Processing: speech recognition, coding and synthesis, handwriting recognition, adaptive equalization, nonlinear noise removal, auditory scene analysis Applications: time-series prediction, medical diagnosis, financial analysis, DNA/protein sequence analysis, music processing, expert systems, database mining Cognitive Science & AI: natural language, human learning and memory, perception and psychophysics, symbolic reasoning Control, Navigation, and Planning: robotic motor control, process control, navigation, path planning, exploration, dynamic programming, reinforcement learning Review Criteria: All submitted papers will be thoroughly refereed on the basis of technical quality, novelty, significance, and clarity. Submissions should contain new results that have not been published previously. 
Authors should not be dissuaded from submitting recent work, as there will be an opportunity after the meeting to revise accepted manuscripts before submitting final camera-ready copy. Paper Format: Submitted papers may be up to eight pages in length, including figures and references. The page limit will be strictly enforced, and any submission exceeding eight pages will not be considered. Authors are encouraged (but not required) to use the NIPS style files obtainable by anonymous FTP at the sites given below. Papers must include physical and e-mail addresses of all authors, and MUST indicate one of the nine major categories listed above. Authors may also indicate a subcategory, and their preference, if any, for oral or poster presentation; this preference will play no role in paper acceptance. Unless otherwise indicated, correspondence will be sent to the first author. Submission Instructions: Send six copies of submitted papers to the address below; electronic or FAX submission is not acceptable. Include one additional copy of the abstract only, to be used for preparation of the abstracts booklet distributed at the meeting. Submissions mailed first-class from within the US or Canada, or sent from overseas via Federal Express/Airborne/DHL or similar carrier must be POSTMARKED by May 20, 1995. All other submissions must ARRIVE by this date. Mail submissions to: Michael Mozer NIPS*95 Program Chair Department of Computer Science University of Colorado Colorado Avenue and Regent Drive Boulder, CO 80309-0430 USA Mail general inquiries/requests for registration material to: NIPS*95 Registration Dept. of Mathematical and Computer Sciences Colorado School of Mines Golden, CO 80401 USA FAX: (303) 273-3875 e-mail: nips95 at mines.colorado.edu Sites for LaTeX style files: Copies of "nips.tex" and "nips.sty" are available via anonymous ftp at helper.systems.caltech.edu (131.215.68.12) in /pub/nips, b.gp.cs.cmu.edu (128.2.242.8) in /usr/dst/public/nips. 
The style files and other conference information may also be retrieved via World Wide Web at http://www.cs.cmu.edu:8001/Web/Groups/NIPS/NIPS.html NIPS*95 Organizing Committee: General Chair, David S. Touretzky, CMU; Program Chair, Michael Mozer, U. Colorado; Publications Chair, Michael Hasselmo, Harvard; Tutorial Chair, Jack Cowan, U. Chicago; Workshops Chair, Michael Perrone, IBM; Publicity Chair, David Cohn, MIT; Local Arrangements, Manavendra Misra, Colorado School of Mines; Treasurer, John Lazzaro, Berkeley. DEADLINE FOR SUBMISSIONS IS MAY 20, 1995 (POSTMARKED) -please post- From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Mon Feb 6 17:26:32 1995 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Mon, 06 Feb 95 17:26:32 EST Subject: call for workshops: NIPS*95 Message-ID: <16650.792109592@DST.BOLTZ.CS.CMU.EDU> CALL FOR PROPOSALS NIPS*95 Post Conference Workshops December 1 and 2, 1995 Vail, Colorado Following the regular program of the Neural Information Processing Systems 1995 conference, workshops on current topics in neural information processing will be held on December 1 and 2, 1995, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: active learning and control, architectural issues, attention, Bayesian analysis, benchmarking neural network applications, computational complexity issues, computational neuroscience, fast training techniques, genetic algorithms, music, neural network dynamics, optimization, recurrent nets, rules and connectionist models, self-organization, sensory biophysics, speech, time series prediction, vision and audition, implementations, and grammars. The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. 
Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics. Representation of alternative viewpoints and panel-style discussions are particularly encouraged. Individuals proposing to chair a workshop will have responsibilities including: 1) arranging short informal presentations by experts working on the topic, 2) moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the "gong show"), and 3) writing a brief summary. Submission Instructions: Interested parties should submit a short proposal for a workshop of interest postmarked by May 20, 1995. (Express mail is not necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, the proposed length of the workshop (one day or two days), and the planned format. It should explain why the topic is of interest or controversial, why it should be discussed, and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest. Submissions should include contact name, address, email address, phone number and fax number if available. Mail proposals to: Michael P. Perrone NIPS*95 Workshops Chair IBM T.J. Watson Research Center P.O. 
Box 704 Yorktown Heights, NY 10598 (email: mpp at watson.ibm.com) PROPOSALS MUST BE POSTMARKED BY MAY 20, 1995 -Please Post- From terry at salk.edu Mon Feb 6 23:42:45 1995 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 6 Feb 95 20:42:45 PST Subject: Neural Computation 7:2 Message-ID: <9502070442.AA10237@salk.edu> NEURAL COMPUTATION Volume 7, Number 2, March 1995 Review: Regularization theory and neural networks architectures Federico Girosi, Michael Jones and Tomaso Poggio Notes: A counterexample for temporal differences learning Dimitri P. Bertsekas New perceptron model using random bitstreams Eel-wan Lee and Soo-Ik Chae On the ordering conditions for self-organising maps Marco Budinich and John G. Taylor Letters: A simple competitive account of some response properties of visual neurons in area MSTd Ruye Wang Synchrony in excitatory neural networks D. Hansel, G. Mato and C. Meunier Decorrelated Hebbian learning for clustering and function approximation Gustavo Deco and Dragan Obradovic Identification using feedforward networks Asriel U. Levin and Kumpati S. Narendra An HMM/MLP architecture for sequence recognition Sung-Bae Cho and Jin H. Kim Learning linear threshold approximations using perceptrons Thomas Bylander An algorithm for building a regularized piecewise linear discrimination surface: The perceptron membrane Guillaume Deffuant The upward bias in measures of information derived from limited data samples Alessandro Treves and Stefano Panzeri Representation of similarity in three-dimensional object discrimination Shimon Edelman ----- SUBSCRIPTIONS - 1995 - VOLUME 7 - BIMONTHLY (6 issues) ______ $40 Student and Retired ______ $68 Individual ______ $180 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). 
(Back issues from Volumes 1-5 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From bernabe at cnm.us.es Tue Feb 7 04:40:21 1995 From: bernabe at cnm.us.es (Bernabe Linares B.) Date: Tue, 7 Feb 95 10:40:21 +0100 Subject: papers in neuroprose Message-ID: <9502070940.AA12323@cnm1.cnm.us.es> FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/bernabe.art1chip.ps.Z The file bernabe.art1chip.ps.Z is now available for copying from the Neuroprose repository. It contains 2 conference papers (the PostScript file prints in 14 pages): Title Paper1: A Real Time Clustering CMOS Neural Engine Title Paper2: Experimental Results of An Analog Current-Mode ART1 Chip Authors: T. Serrano, B. Linares-Barranco, and J. L. Huertas Affiliation: National Microelectronics Center (CNM), Sevilla, SPAIN. Sorry, no hardcopies available. Paper1 will appear in the 7th NIPS Volume, and Paper2 has been accepted for presentation at the 1995 IEEE Int. Symp. on Circuits and Systems, Seattle, Washington (ISCAS'95, April 29-May 3). These and related papers can also be obtained through anonymous ftp from the node "ftp.cnm.us.es" and directory "/pub/bernabe/publications". The following abstract briefly describes the topic of the two papers. ABSTRACT: In these papers we present a real-time neural categorizer chip based on the ART1 algorithm. The circuit implements a modified version of the original ART1 algorithm that is more suitable for VLSI implementation. It has been designed using analog current-mode circuit design techniques, and consists basically of current mirrors that are switched ON and OFF according to the binary input patterns. The chip is able to cluster input patterns of 100 binary pixels into up to 18 different categories. 
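[Editor's note: the computing-power figure quoted below for this chip can be sanity-checked with back-of-the-envelope arithmetic. The accounting here is our assumption: 100x18 bottom-up plus 100x18 top-down connections, all evaluated within one sub-1.8us classification.]

```python
inputs, categories = 100, 18            # chip capacity stated in the abstract
connections = 2 * inputs * categories   # assumed: bottom-up plus top-down weights, 3600 in all
t_classify = 1.8e-6                     # worst-case classification time, in seconds
cps = connections / t_classify          # connections evaluated per second
print(f"{cps:.1e} connections/s")       # prints "2.0e+09 connections/s"
```

This lands at 2x10^9 connections per second, the same order as the 2.2x10^9 figure quoted; the published number presumably uses the measured average classification time rather than the 1.8us worst-case bound.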
Modular expansibility of the system is possible by simply interconnecting more chips in a matrix array. In this way a system can be built to cluster Nx100-pixel binary images into Mx18 different clusters, using an NxM array of chips. Average pattern classification is performed in less than 1.8us, which corresponds to an equivalent computing power of 2.2x10^9 connections per second and connection updates per second. The chip has been fabricated in a single-poly double-metal 1.5um standard digital low-cost CMOS process, has a die area of 1cm^2, and is mounted in a 120-pin PGA package. Although the circuit is analog in nature, it is fully digital-compatible, since it interfaces to the outside world through digital signals. From KOKINOV at BGEARN.BITNET Tue Feb 7 14:39:37 1995 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Tue, 07 Feb 95 14:39:37 BG Subject: CogSci95 Summer School in Sofia Message-ID: 2nd International Summer School in Cognitive Science Sofia, July 3-16, 1995 First Announcement and Call for Papers The Summer School features introductory and advanced courses in Cognitive Science, participant symposia, panel discussions, student sessions, and intensive informal discussions. Participants will include university teachers and researchers, graduate and senior undergraduate students. 
International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Ma, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Il, USA) Christopher HABEL (Hamburg University, Germany) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA) Willem LEVELT (Max Planck Inst. of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Courses Computer Models of Emergent Cognition - Robert French (Indiana University, USA) Hemispheric Mechanisms in Cognition - Eran Zaidel (UCLA, USA) Cross-Linguistic Studies of Language Processing - Elizabeth Bates (UCSD, USA) Aphasia Research - Nina Dronkers (UC at Davis, USA) Selected Topics in Cognitive Linguistics - Elena Andonova (NBU, Bulgaria) Spatial Attention - Carlo Umilta' (University of Padova, Italy) Parallel Pathways of Visual Information Processing - Angel Vassilev (NBU) Color Vision - Charles De Weert (University of Nijmegen, The Netherlands) Integration of Language and Vision - Geoff Simmons (Hamburg University, Germany) Emotion and Cognition - Cristiano Castelfranchi (CNR, Italy) Philosophy of Mind - Lilia Gurova (NBU, Bulgaria) Analogical Reasoning: Psychological Data and Computational Models - Boicho Kokinov (NBU, Bulgaria) Participants are not restricted in the number of 
courses they can register for. There will be no parallel running courses.

Participant Symposia
Participants are invited to submit papers which will be presented (30 min) at the participant symposia. Authors should send full papers (8 single-spaced pages) in triplicate or electronically (PostScript, RTF, or plain ASCII) by March 31. Selected papers will be published in the School's Proceedings after the School itself. Only papers presented at the School will be eligible for publication.

Panel Discussions
Language Processing: Rules or Constraints?
Vision and Attention
Integrated Cognition

Student Session
At the student session, proposals for M.Sc. and Ph.D. theses will be discussed, as well as public defenses of such theses. Graduate students in Cognitive Science are invited to present their work.

Local Organizers
New Bulgarian University, Bulgarian Academy of Sciences, Bulgarian Cognitive Science Society

Timetable
Application Form: now
Deadline for paper submission: March 31
Notification of acceptance: April 30
Early registration: May 15
Arrival day and on-site registration: July 2
Summer School: July 3-14
Excursion: July 15
Departure day: July 16

Paper submission to:
Boicho Kokinov
Cognitive Science Department
New Bulgarian University
21, Montevideo Str.
Sofia 1635, Bulgaria
fax: (+3592) 73-14-95
e-mail: kokinov at bgearn.bitnet

Send your Application Form to: e-mail: cogsci95 at adm.nbu.bg
------------------------------------------------------------
International Summer School in Cognitive Science
Sofia, July 3-14, 1995
Application Form
Last Name:
First Name:
Status: Professor / Academic Researcher / Applied Researcher / Graduate Student / Undergraduate Student
Affiliation:
University:
Department:
Country:
Mailing address:
e-mail address:
fax:
I would like to attend the following courses:
I intend to submit a paper: (title)

From marney at ai.mit.edu Tue Feb 7 13:22:09 1995 From: marney at ai.mit.edu (Marney Smyth) Date: Tue, 7 Feb 95 13:22:09 EST Subject: Call For Papers: NNSP95 Message-ID: <9502071822.AA01794@motor-cortex>

**********************************************************************
*                     1995 IEEE WORKSHOP ON                          *
*             NEURAL NETWORKS FOR SIGNAL PROCESSING                  *
*   August 31 -- September 2, 1995, Cambridge, Massachusetts, USA    *
*         Sponsored by the IEEE Signal Processing Society            *
*     (In cooperation with the IEEE Neural Networks Council)         *
**********************************************************************

Invited Speakers include:
Vladimir Vapnik, AT&T Bell Labs.
Michael I. Jordan, MIT.

FIRST ANNOUNCEMENT AND CALL FOR PAPERS

Thanks to the sponsorship of the IEEE Signal Processing Society and the co-sponsorship of the IEEE Neural Networks Council, the fifth in a series of IEEE Workshops on Neural Networks for Signal Processing will be held at the Royal Sonesta Hotel, Cambridge, Massachusetts, on Thursday 8/31 -- Saturday 9/2, 1995. Papers are solicited for, but not limited to, the following topics:

++ APPLICATIONS: Image, speech, communications, sensors, medical, adaptive filtering, OCR, and other general signal processing and pattern recognition topics.
++ THEORIES: Generalization and regularization, system identification, parameter estimation, new network architectures, new learning algorithms, and wavelets in NNs.

++ IMPLEMENTATIONS: Software, digital, analog, and hybrid technologies; parallel processing.

DETAILS FOR SUBMITTING PAPERS

Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, the authors' names, affiliations, addresses, telephone and fax numbers, and e-mail addresses if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Marney Smyth (Tel.: (617) 253-0547, Fax: (617) 253-2964, e-mail: marney at ai.mit.edu).

We plan to use the World Wide Web (WWW) for posting further announcements on NNSP95, such as the status of submitted papers, the final program, hotel information, etc. You can use Mosaic to access the URL http://www.cdsp.neu.edu. If you do not have access to the WWW, use anonymous ftp to the site ftp.cdsp.neu.edu and look under the directory /pub/NNSP95.

Please send paper submissions to:
Prof. Elias S. Manolakos
IEEE NNSP'95
409 Dana Research Building
Electrical and Computer Engineering Department
Northeastern University, Boston, MA 02115, USA
Phone: (617) 373-3021, Fax: (617) 373-4189

************************Please Take Note********************************

A limited amount of financial assistance is available for participants whose papers are accepted for the Workshop. If you wish to apply for financial assistance, please submit a C.V. and a brief summary of the reasons why you want to participate in the NNSP95 Workshop when submitting your paper. Financial assistance awards will not be made until after the acceptance date of April 21, 1995. Please submit hard-copy (NOT email) applications for financial assistance, along with a copy of your paper, to Dr. Federico Girosi at the address below.
***********************************************************************

*******************
* IMPORTANT DATES *
*******************
Extended summary received by: February 17
Notification of acceptance: April 21
Photo-ready accepted papers received by: May 22
Advanced registration received before: June 2

GENERAL CHAIRS
Federico Girosi
Center for Biological and Computational Learning and Artificial Intelligence Laboratory
MIT, E25-201
Cambridge, MA 02139
Tel: (617)253-0548
Fax: (617)258-6287
email: girosi at ai.mit.edu

John Makhoul
BBN Systems and Technologies
70 Fawcett Street
Cambridge, MA 02138
Tel: (617)873-3332
Fax: (617)873-2534
email: makhoul at bbn.com

PROGRAM CHAIR
Elias S. Manolakos
Communications and Digital Signal Processing (CDSP) Center for Research and Graduate Studies
Electrical and Computer Engineering Dept.
409 Dana Research Building
Northeastern University
Boston, MA 02115
Tel: (617)373-3021
Fax: (617)373-4189
email: elias at cdsp.neu.edu

FINANCE CHAIR
Judy Franklin, GTE Laboratories
email: jfranklin at gte.com

LOCAL ARRANGEMENTS
Mary Pat Fitzgerald, MIT
email: marypat at ai.mit.edu

PUBLICITY CHAIR
Marney Smyth, CBCL, MIT
email: marney at ai.mit.edu

PROCEEDINGS CHAIR
Elizabeth J. Wilson, Raytheon Co.
email: bwilson at sud2.ed.ray.com

TECHNICAL PROGRAM COMMITTEE
Joshua Alspector John Makhoul Charles Bachmann Alice Chiang Elias Manolakos A. Constantinides P. Mathiopoulos Lee Giles Mahesan Niranjan Federico Girosi Tomaso Poggio Lars Kai Hansen Jose Principe Yu-Hen Hu Wojtek Przytula Jenq-Neng Hwang John Sorensen Bing-Huang Juang Andreas Stafylopatis Shigeru Katagiri John Vlontzos George Kechriotis Raymond Watrous Stephanos Kollias Christian Wellekens Sun-Yuan Kung Ron Williams Gary M.
Kuhn Barbara Yoon Richard Lippmann Xinhua Zhuang From p.j.b.hancock at psych.stir.ac.uk Wed Feb 8 06:17:41 1995 From: p.j.b.hancock at psych.stir.ac.uk (Peter Hancock) Date: Wed, 8 Feb 95 11:17:41 GMT Subject: PhD thesis available Message-ID: <9502081117.AA0420543094@nevis.stir.ac.uk.nevis.stir.ac.uk> Since a couple of people have asked for it within the last week, I've made my PhD thesis available by FTP, although it's now a couple of years old. It's on forth.stir.ac.uk (139.153.13.6, currently) directory pub/reports/pjbh_thesis, one compressed postscript file per chapter. There is also a README file that explains what is what. No hardcopies available - the whole thing is 155 pages. Coding strategies for genetic algorithms and neural nets Peter Hancock University of Stirling PhD thesis, Dept. of Computing Science, 1992 Abstract The interaction between coding and learning rules in neural nets (NNs), and between coding and genetic operators in genetic algorithms (GAs) is discussed. The underlying principle advocated is that similar things in ``the world'' should have similar codes. Similarity metrics are suggested for the coding of images and numerical quantities in neural nets, and for the coding of neural network structures in genetic algorithms. A principal component analysis of natural images yields receptive fields resembling horizontal and vertical edge and bar detectors. The orientation sensitivity of the ``bar detector'' components is found to match a psychophysical model, suggesting that the brain may make some use of principal components in its visual processing. Experiments are reported on the effects of different input and output codings on the accuracy of neural nets handling numeric data. It is found that simple analogue and interpolation codes are most successful. Experiments on the coding of image data demonstrate the sensitivity of final performance to the internal structure of the net. 
The interaction between the coding of the target problem and the reproduction operators of mutation and recombination in GAs is discussed and illustrated. The possibilities for using GAs to adapt aspects of NNs are considered. The permutation problem, which affects attempts to use GAs both to train net weights and to adapt net structures, is illustrated, and methods to reduce it are suggested. Empirical tests using a simulated net design problem to reduce evaluation times indicate that the permutation problem may not be as severe as has been thought, but suggest the utility of a sorting recombination operator that matches hidden units according to the number of connections they have in common. A number of experiments using GAs to design network structures are reported, both to specify a net to be trained from random weights and to prune a pre-trained net. Three different coding methods are tried, and various sorting recombination operators evaluated. The results indicate that appropriate sorting can be beneficial, but the effects are problem-dependent. It is shown that the GA tends to overfit the net to the particular set of test criteria, to the possible detriment of wider generalisation ability. A method of testing the ability of a GA to make progress in the presence of noise, by adding a penalty flag, is described.

From tony at salk.edu Wed Feb 8 22:11:24 1995 From: tony at salk.edu (Tony Bell) Date: Wed, 8 Feb 95 19:11:24 PST Subject: tech report available Message-ID: <9502090311.AA27886@salk.edu> FTP-host: ftp.salk.edu FTP-file: pub/tony/bell.blind.ps.Z

The following technical report is ftp-able from the Salk Institute. The file is called bell.blind.ps.Z; it is 0.3 Mbytes compressed, 0.9 Mbytes uncompressed, and 36 pages long (8 figures). It describes work presented at NIPS '94, with various embellishments, and a version of it will appear in Neural Computation in 1995. ------------------------------------------------------------------- Technical Report no.
INC-9501, February 1995, Institute for Neural Computation, UCSD, San Diego, CA 92093-0523 AN INFORMATION-MAXIMISATION APPROACH TO BLIND SEPARATION AND BLIND DECONVOLUTION Anthony J. Bell & Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute 10010 N. Torrey Pines Road La Jolla, California 92037 ABSTRACT We derive a new learning algorithm which maximises the information transferred in a network of non-linear units. The algorithm does not assume any knowledge of the input distributions, and is defined here for the zero-noise limit. Under these conditions, information maximisation has extra properties not found in the linear case (Linsker 1989). The non-linearities in the transfer function are able to pick up higher-order moments of the input distributions and perform true redundancy reduction between units in the output representation. This enables the network to separate statistically independent components in the inputs: a higher-order generalisation of Principal Components Analysis. We apply the network to the source separation (or cocktail party) problem, successfully separating unknown mixtures of up to ten speakers. We also show that a variant on the network architecture is able to perform blind deconvolution (cancellation of unknown echoes and reverberation in a speech signal). Finally, we derive dependencies of information transfer on time delays. We suggest that information maximisation provides a unifying framework for problems in `blind' signal processing. 
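The weight update described in the abstract can be illustrated numerically. The sketch below implements the infomax rule for logistic units in the zero-noise limit on a two-source toy separation problem; the sources, mixing matrix, learning rate, batch size, and variable names are illustrative choices of mine, not taken from the report.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent super-Gaussian (Laplacian) sources, linearly mixed.
n = 20000
s = rng.laplace(0.0, 1.0, (2, n))
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])       # mixing matrix (unknown to the algorithm)
x = A @ s                        # observed mixtures

W = np.eye(2)                    # unmixing matrix, learned from the data
lr = 0.01
batch = 100
for epoch in range(50):
    for t in range(0, n, batch):
        xb = x[:, t:t + batch]
        u = W @ xb
        y = 1.0 / (1.0 + np.exp(-u))   # logistic non-linearity
        # Infomax weight change: dW ~ (W^T)^-1 + (1 - 2y) x^T
        dW = np.linalg.inv(W.T) + (1.0 - 2.0 * y) @ xb.T / batch
        W += lr * dW

# If separation succeeded, W @ A is close to a scaled permutation matrix,
# the standard success criterion for blind separation (up to sign/scale).
P = W @ A
```

The higher-order statistics of the Laplacian sources are what the logistic non-linearity exploits; a purely linear (decorrelating) rule could not resolve the permutation.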
-------------------------------------------------------------------
Can be obtained via ftp as follows:

unix> ftp ftp.salk.edu        (or 198.202.70.34)
      (log in as "anonymous", e-mail address as password)
ftp> binary
ftp> cd pub/tony
ftp> get bell.blind.ps.Z
ftp> quit
unix> uncompress bell.blind.ps.Z
unix> lpr bell.blind.ps

From mike at stats.gla.ac.uk Thu Feb 9 12:00:55 1995 From: mike at stats.gla.ac.uk (Mike Titterington) Date: Thu, 9 Feb 95 17:00:55 GMT Subject: Meeting on Statistics and Neural Networks Message-ID: <20297.9502091700@milkyway.stats>

STATISTICS AND NEURAL NETWORKS
OPEN MEETING, 21 APRIL 1995

An open meeting on the above topic will be held in the premises of the Royal Society of Edinburgh under the auspices of the International Centre for Mathematical Sciences. The Meeting will follow on from a Workshop in the area and will take advantage of the presence of several distinguished visitors. It is intended that the presentations will both indicate the cutting edge of current research at this interface and also be accessible to interested parties from the wider statistical, mathematical and neural-networks communities. Invited speakers will include Chris Bishop (Aston), Leo Breiman (Berkeley), Trevor Hastie (Stanford), Michael Jordan (MIT), Laveen Kanal (Maryland) and Brian Ripley (Oxford). In addition, Jim Kay (SASS) will give a perspective on the main points that came out of the preceding Workshop. It is planned to provide coffee and tea. Lunch can be obtained at a variety of nearby establishments. For details contact either J. W. Kay (SASS, Macaulay Land Use Research Institute, Craigiebuckler, Aberdeen, AB9 2QJ; sassk at mluri.sari.ac.uk) or D. M. Titterington (Department of Statistics, University of Glasgow, Glasgow G12 8QQ; mike at stats.gla.ac.uk). For application forms [SEE BELOW FOR AN ELECTRONIC VERSION], contact Louise Williamson (ICMS, 14 India Street, Edinburgh EH3 6EZ; icms at maths.ed.ac.uk).
In the event that the meeting is over-subscribed, places will be allocated on a first-come-first-served basis.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
STATISTICS AND NEURAL NETWORKS
OPEN MEETING: APRIL 21, 1995
VENUE - WOLFSON LECTURE THEATRE, ROYAL SOCIETY OF EDINBURGH, 22-24 GEORGE STREET, EDINBURGH EH2 2PQ.
Under the auspices of the International Centre for Mathematical Sciences, 14 India Street, Edinburgh EH3 6EZ

PARTICIPATION AND ACCOMMODATION-INFORMATION REQUEST FORM
TITLE:
FIRST NAME:
SECOND NAME:
INSTITUTION:
ADDRESS:
TELEPHONE:
FAX:
EMAIL ADDRESS:
DATE OF ARRIVAL:
DATE OF DEPARTURE:
DETAILS OF ACCOMPANYING FAMILY MEMBERS (NUMBER, RELATIONS, AND AGES, IF CHILDREN):
ACCOMMODATION INFORMATION: I would / would not like information about accommodation (delete as appropriate)
ANY ADDITIONAL COMMENTS:

PLEASE RETURN AS SOON AS POSSIBLE WITH REGISTRATION FEE OF 15 POUNDS (10 POUNDS FOR POST-GRADUATE STUDENTS) TO:
Louise Williamson, ICMS
Tel: 0131-220-1777
Fax: 0131-220-1053
Email: icms at maths.ed.ac.uk
!! Please make cheques payable to HERIOT-WATT UNIVERSITY - ICMS !!
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

From len at titanic.mpce.mq.edu.au Thu Feb 9 17:29:57 1995 From: len at titanic.mpce.mq.edu.au (Len Hamey) Date: Fri, 10 Feb 1995 09:29:57 +1100 (EST) Subject: Technical Report Available Message-ID: <9502092229.AA29220@titanic.mpce.mq.edu.au> A non-text attachment was scrubbed...
Name: not available Type: text Size: 2250 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/83da6595/attachment-0001.ksh From fritzke at neuroinformatik.ruhr-uni-bochum.de Thu Feb 9 04:22:13 1995 From: fritzke at neuroinformatik.ruhr-uni-bochum.de (Bernd Fritzke) Date: Thu, 9 Feb 1995 10:22:13 +0100 (MET) Subject: NIPS*94 preprint available Message-ID: <9502090922.AA16891@urda.neuroinformatik.ruhr-uni-bochum.de> ftp-host: ftp.neuroinformatik.ruhr-uni-bochum.de ftp-filename: /pub/manuscripts/articles/fritzke.nips94.ps.gz (148 kB compressed, 8 pages) *** DO NOT FORWARD TO ANY OTHER LISTS *** The following article (NIPS*94 pre-print) is available by ftp from our ftp-server (Bochum, Germany) and via WWW from my homepage (see below). Since our ftp-connection is sometimes a bit slow I have also transferred the file to the neuroprose archive. I assume it will be available from there (with a .Z extension and 240 kB) within a couple of days. I decided, however, to announce the paper now since several people did ask for it. ----------------------------------------------------------------- "A Growing Neural Gas Network Learns Topologies" Bernd Fritzke Institut fuer Neuroinformatik Ruhr-Universitaet Bochum D-44780 Bochum Germany Abstract An incremental network model is introduced which is able to learn the important topological relations in a given set of input vectors by means of a simple Hebb-like learning rule. In contrast to previous approaches like the ``neural gas'' method of Martinetz and Schulten (1991, 1994), this model has no parameters which change over time and is able to continue learning, adding units and connections, until a performance criterion has been met. Applications of the model include vector quantization, clustering, and interpolation. Thanks to Jordan Pollack for maintaining neuroprose. Bernd -- Bernd Fritzke * Institut f"ur Neuroinformatik Tel. 
+49-234 7007921 Ruhr-Universit"at Bochum * 44780 Bochum * Germany FAX. +49-234 7094210 http://www.neuroinformatik.ruhr-uni-bochum.de/ini/PEOPLE/fritzke/top.html From BGoodin at UNEX.UCLA.EDU Thu Feb 9 20:29:00 1995 From: BGoodin at UNEX.UCLA.EDU (Goodin, Bill) Date: Thu, 09 Feb 95 17:29:00 PST Subject: UCLA short course on Fuzzy Logic, Chaos, and Neural Networks Message-ID: <2F3AC808@UNEXGW.UCLA.EDU> On May 22-24, 1995, UCLA Extension will present the short course, "Fuzzy Logic, Chaos, and Neural Networks: Principles and Applications", on the UCLA campus in Los Angeles. The instructor is Harold Szu, PhD, Research Physicist, Washington, DC. This course presents the principles and applications of several different but related disciplines--neural networks, fuzzy logic, chaos--in the context of pattern recognition, control of engineering tolerance imprecision, and the prediction of fluctuating time series. Since research into these areas has contributed to the understanding of human intelligence, researchers have dramatically enhanced their understanding of fuzzy neural systems and in fact may have discovered the "Rosetta stone" to decipher and unify these intelligence functions. For example, complex neurodynamic patterns may be understood and modelled by Artificial Neural Networks (ANN) governed by fixed-point attractor dynamics in terms of a Hebbian learning matrix among bifurcated neurons. Each node generates a low dimensional bifurcation cascade towards the chaos but together they form collective ambiguous outputs; e.g., a fuzzy set called the Fuzzy Membership Function (FMF). This feature becomes particularly powerful for real world applications in signal processing, pattern recognition and/or prediction/control. 
The course delineates the difference between the classical sigmoidal squash function of the typical neuron threshold logic and the new N-shaped sigmoidal function having a "piecewise negative logic" that can generate a Feigenbaum cascade of bifurcation outputs of which the overall envelope is postulated to be the triangle FMF. The course also discusses applications of chaos and collective chaos for spatiotemporal information processing that has been embedded through an ANN bifurcation cascade of those collective chaotic outputs generated from piecewise negative logic neurons. These chaotic outputs learn the FMF triangle-shape with a different degree of fuzziness as defined by the scaling function of the multiresolution analysis (MRA) used often in wavelet transforms. Another advantage of this methodology is information processing in a synthetic nonlinear dynamical environment. For example, nonlinear ocean waves can be efficiently analyzed by nonlinear soliton dynamics, rather than traditional Fourier series. Implementation techniques in chaos ANN chips are given. The course covers essential ANN learning theory and the elementary mathematics of chaos such as the bifurcation cascade route to chaos and the rudimentary Fuzzy Logic (FL) for those interdisciplinary participants with only basic knowledge of the subject areas. Various applications in chaos, fuzzy logic, and neural net learning are illustrated in terms of spatiotemporal information processing, such as: --Signal/image de-noise --Control device/machine chaos --Communication coding --Chaotic heart and biomedical applications. 
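The Feigenbaum period-doubling cascade invoked above is conventionally illustrated with the logistic map. The sketch below is a generic textbook illustration, not code from the course; the parameter values and function names are my own choices.

```python
import numpy as np

def attractor(r, n_settle=2000, n_keep=64):
    """Iterate the logistic map x -> r*x*(1-x), discard the transient,
    and return the distinct long-run values (rounded) -- i.e. the cycle
    the trajectory settles onto."""
    x = 0.5
    for _ in range(n_settle):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(n_keep):
        x = r * x * (1.0 - x)
        seen.add(round(x, 4))
    return sorted(seen)

# Period doubling on the route to chaos: the cycle length doubles as the
# control parameter r increases (period 1 -> 2 -> 4 -> ... -> chaos).
periods = {r: len(attractor(r)) for r in (2.8, 3.2, 3.5)}
```

Beyond the accumulation point (r above roughly 3.57) the set of visited values no longer collapses onto a short cycle, which is the chaotic regime the course associates with fuzzy membership envelopes.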
For additional information and a complete course description, please contact Marcus Hennessy at: (310) 825-1047 (310) 206-2815 fax mhenness at unex.ucla.edu From wermter at nats5.informatik.uni-hamburg.de Thu Feb 9 13:20:55 1995 From: wermter at nats5.informatik.uni-hamburg.de (Stefan Wermter) Date: Thu, 9 Feb 1995 19:20:55 +0100 Subject: Learning for natural language: final updated call Message-ID: <199502091820.TAA24270@nats2.informatik.uni-hamburg.de> CALL FOR PAPERS AND PARTICIPATION IJCAI-95 Workshop on New Approaches to Learning for Natural Language Processing International Joint Conference on Artificial Intelligence (IJCAI-95) Palais de Congres, Montreal, Canada August 21, 1995 ORGANIZING COMMITTEE -------------------- Stefan Wermter Gabriele Scheler Ellen Riloff University of Hamburg Technical University Munich University of Utah INVITED SPEAKERS ---------------- Eugene Charniak, Brown University, USA Noel Sharkey, Sheffield University, UK PROGRAM COMMITTEE ----------------- Jaime Carbonell, Carnegie Mellon University, USA Joachim Diederich, Queensland University of Technology, Australia Georg Dorffner, University of Vienna, Austria Jerry Feldman, ICSI, Berkeley, USA Walther von Hahn, University of Hamburg, Germany Aravind Joshi, University of Pennsylvania, USA Ellen Riloff, University of Utah, USA Gabriele Scheler, Technical University Munich, Germany Stefan Wermter, University of Hamburg, Germany WORKSHOP DESCRIPTION -------------------- In the last few years, there has been a great deal of interest and activity in developing new approaches to learning for natural language processing. Various learning methods have been used, including - connectionist methods/neural networks - machine learning algorithms - hybrid symbolic and subsymbolic methods - statistical techniques - corpus-based approaches. In general, learning methods are designed to support automated knowledge acquisition, fault tolerance, plausible induction, and rule inferences. 
Using learning methods for natural language processing is especially important because language learning is an enabling technology for many other language processing problems, including noisy speech/language integration, machine translation, and information retrieval. Different methods support language learning to various degrees but, in general, learning is important for building more flexible, scalable, adaptable, and portable natural language systems. This workshop is of particular interest at this time because systems built by learning methods have reached a level where they can be applied to real-world problems in natural language processing and where they can be compared with more traditional encoding methods. The workshop will bring together researchers from the US/Canada, Europe, Japan, Australia and other countries working on new approaches to language learning. The workshop will provide a forum for discussing various learning approaches for supporting natural language processing. In particular, the workshop will focus on questions like:
- How can we apply suitable existing learning methods for language processing?
- What new learning methods are needed for language processing, and why?
- What language knowledge should be learned, and why?
- What are the similarities and differences between different approaches for language learning (e.g., machine learning algorithms vs neural networks)?
- What are the strengths and limitations of learning rather than manual encoding?
- How can learning and encoding be combined in symbolic/connectionist systems?
- Which aspects of system architectures and knowledge engineering have to be considered (e.g., modular, integrated, hybrid systems)?
- What are successful applications of learning methods in various fields (speech/language integration, machine translation, information retrieval)?
- How can we evaluate learning methods using real-world language (text, speech, dialogs, etc.)?
WORKSHOP FORMAT
---------------
The workshop will provide a forum for the interactive exchange of ideas and knowledge. Approximately 30-40 participants are expected, and there will be time for up to 15 presentations, depending on the number and quality of paper contributions received. The normal presentation length will be 15+5 minutes, leaving time for direct questions after each talk. There may be a few invited talks of 25+5 minutes. In addition to prepared talks, there will be time for moderated discussions after two related sessions. Furthermore, the moderated discussions will provide an opportunity for an open exchange of comments, questions, reactions, and opinions.

PUBLICATION
-----------
Workshop proceedings will be published by AAAI. If there is sufficient interest among the participants, the results of the workshop may be published as a book.

REGISTRATION
------------
This workshop will take place directly before the general IJCAI conference. It is IJCAI policy that workshop participation is not possible without registration for the general conference.

SUBMISSIONS
-----------
All submissions will be refereed by the program committee and other experts in the field. Please submit 4 hardcopies AND a PostScript file. The paper format is the IJCAI-95 format: 12pt article-style LaTeX, no more than 43 lines per page, 15 pages maximum, including title, address and email address, abstract, figures, and references. Papers should fit 8 1/2" x 11" pages. Notifications will be sent by email to the first author. PostScript files can be uploaded with anonymous ftp:

ftp nats4.informatik.uni-hamburg.de (134.100.10.104)
login: anonymous
password:
cd incoming/ijcai95-workshop
binary
put
quit

Hardcopies AND PostScript files must arrive no later than 24th February 1995 at the address below.
##############Submission Deadline: 24th February 1995
##############Notification Date: 24th March 1995
##############Camera ready Copy: 13th April 1995

Please send correspondence and submissions to:
################################################
Dr. Stefan Wermter
Department of Computer Science
University of Hamburg
Vogt-Koelln-Strasse 30
D-22527 Hamburg
Germany
phone: +49 40 54715-531
fax: +49 40 54715-515
e-mail: wermter at informatik.uni-hamburg.de
################################################

From njm at cupido.inesc.pt Fri Feb 10 08:46:45 1995 From: njm at cupido.inesc.pt (njm@cupido.inesc.pt) Date: Fri, 10 Feb 95 13:46:45 +0000 Subject: EPIA'95 - Neural Nets & Genetic Algorithms Workshop CFP Message-ID: <9502101346.AA02701@cupido.inesc.pt>

________________________________________________________
--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
NEURAL NETWORKS AND GENETIC ALGORITHMS
--------------------------------------------------------
A subsection of the:
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
________________________________________________________
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)

INTRODUCTION
~~~~~~~~~~~~
The workshop on Fuzzy Logic and Neural Networks in Engineering, running during the Seventh Portuguese Conference on Artificial Intelligence (EPIA'95), includes a subsection on Neural Networks and Genetic Algorithms. This subsection of the workshop will be devoted to models of simulating human reasoning and behaviour based on GA and NN combinations.
Recently, in disciplines such as AI, Engineering, Robotics and Artificial Life, there has been a rise of interest in hybrid methodologies such as combinations of neural networks and genetic algorithms, which enable the modelling of more realistic, flexible and adaptive behaviour and learning. So far such hybrid models have proved very promising for investigating and characterizing the nature of complex reasoning and control behaviour. Participants are expected to base their contributions on current research, and the workshop emphasis will be on wide-ranging discussions of the feasibility and application of such hybrid models. This part of the workshop is intended to promote the exchange of ideas and approaches in these areas and for these methods, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations.

COORDINATION OF THIS SUBSECTION
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Mukesh Patel
Institute of Computer Science, Foundation for Research and Technology-Hellas (FORTH)
P.O.Box 1385, GR 711 10 Heraklion, Crete, Greece
Voice: +30 (81) 39 16 35
Fax: +30 (81) 39 16 01/09
Email: mukesh at ics.forth.gr

The submission requirements, attendance and deadline information are the same as for the workshop, whose Call for Papers is enclosed. Further inquiries can be addressed either to the subsection coordinator or to the Workshop address.
=============================================================================
=============================================================================
=============================================================================

--------------------------------------------------------
EPIA'95 WORKSHOPS - CALL FOR PARTICIPATION
FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
--------------------------------------------------------
Seventh Portuguese Conference on Artificial Intelligence
Funchal, Madeira Island, Portugal
October 3-6, 1995
(Under the auspices of the Portuguese Association for AI)

INTRODUCTION
~~~~~~~~~~~~
The Seventh Portuguese Conference on Artificial Intelligence (EPIA'95) will be held at Funchal, Madeira Island, Portugal, on October 3-6, 1995. As in previous editions ('89, '91, and '93), EPIA'95 will be run as an international conference, with English as the official language. The scientific program includes tutorials, invited lectures, demonstrations, and paper presentations. The Conference will include three parallel workshops on Expert Systems, Fuzzy Logic and Neural Networks, and Applications of A.I. to Robotics and Vision Systems. These workshops will run simultaneously (see below) and consist of invited talks, panels, paper presentations and poster sessions. The Fuzzy Logic and Neural Networks in Engineering workshop may last for 1, 2 or 3 days, depending on the quantity and quality of submissions.

FUZZY LOGIC AND NEURAL NETWORKS IN ENGINEERING WORKSHOP
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The search for systems that simulate human reasoning under uncertainty has created a strong research community. In particular, Fuzzy Logic and Neural Networks have been a source of synergies among researchers of both areas, aiming at developing theoretical approaches and applications towards the characterization and experimentation of such kinds of reasoning.
The workshop is intended to promote the exchange of ideas and approaches in those areas, through paper presentations, open discussions, and the corresponding exhibition of running systems, demonstrations or simulations. The organization committee invites you to participate, submitting papers together with videos, demonstrations or running systems to illustrate relevant issues and applications.

EXHIBITIONS
~~~~~~~~~~~
In order to illustrate and support the theoretical presentations, the organization will provide adequate conditions (space and facilities) for exhibitions at the three workshops mentioned. These exhibitions can include software running systems (several platforms are available), video presentations (PAL-G VHS system), robotics systems (such as robotic insects and autonomous robots), and posters. On the one hand, this space will allow the presentation of results and real-world applications of the research developed by our community; on the other hand, it will serve as a source of motivation to students and young researchers.

SUBMISSION REQUIREMENTS
~~~~~~~~~~~~~~~~~~~~~~~
Authors are asked to submit five (5) copies of their papers to the submissions address by May 2, 1995. Notification of acceptance or rejection will be mailed to the first (or designated) author on June 5, 1995, and camera-ready copies for inclusion in the workshop proceedings will be due on July 3, 1995. Each copy of a submitted paper should include a separate title page giving the names, addresses, phone numbers and email addresses (where available) of all authors, and a list of keywords identifying the subject area of the paper. Papers should be a maximum of 16 pages, printed on A4 paper in 12-point type with a maximum of 38 lines per page and 75 characters per line (corresponding to LaTeX article style, 12 pt). Double-sided submissions are preferred. Electronic or faxed submissions will not be accepted. Further inquiries should be addressed to the inquiries address.
ATTENDANCE
~~~~~~~~~~

Each workshop will be limited to at most fifty people. In addition to presenters of papers and posters, there will be space for a limited number of other participants, chosen on the basis of a one- to two-page research summary which should include a list of relevant publications, along with an electronic mail address if possible. A set of working notes will be available prior to the commencement of the workshops. Registration information will be available in June 1995. Please write to the inquiries address for registration information.

DEADLINES
~~~~~~~~~

Paper submission: .................. May 2, 1995
Notification of acceptance: ........ June 5, 1995
Camera-ready copies due: ........... July 3, 1995

PROGRAM CHAIR
~~~~~~~~~~~~~

Jose Tome (IST, Portugal)

ORGANIZING CHAIR
~~~~~~~~~~~~~~~~

Luis Custodio (IST, Portugal)

SUBMISSION AND INQUIRIES ADDRESS
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

EPIA'95
Fuzzy Logic & Neural Networks Workshop
INESC, Apartado 13069
1000 Lisboa
Portugal
Voice: +351 (1) 310-0325
Fax: +351 (1) 525843
Email: epia95-FLNNWorkshop at inesc.pt

PLANNING TO ATTEND
~~~~~~~~~~~~~~~~~~

People planning to submit a paper and/or to attend the workshop are asked to complete and return the following form (by fax or email) to the inquiries address, stating their intention. It will help the workshop organizers to estimate the facilities needed and will enable all interested people to receive updated information.

+----------------------------------------------------------------+
| REGISTRATION OF INTEREST                                       |
| (Fuzzy Logic & Neural Networks Workshop)                       |
|                                                                |
| Title . . . . . Name . . . . . . . . . . . . . . . . . . . .   |
| Institution . . . . . . . . . . . . . . . . . . . . . . . . .  |
| Address1 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Address2 . . . . . . . . . . . . . . . . . . . . . . . . . . . |
| Country . . . . . . . . . . . . . . . . . . . . . . . . . . .  |
| Telephone. . . . . . . . . . . . . . . Fax . . . . . . . . . .
|
| Email address. . . . . . . . . . . . . . . . . . . . . . . . . |
| I intend to submit a paper (yes/no). . . . . . . . . . . . . . |
| I intend to participate only (yes/no). . . . . . . . . . . . . |
| I will travel with ... guests                                  |
+----------------------------------------------------------------+

From uli at ira.uka.de Fri Feb 10 11:10:18 1995
From: uli at ira.uka.de (Uli Bodenhausen)
Date: Fri, 10 Feb 1995 17:10:18 +0100
Subject: doctoral thesis on automatic structuring available by ftp
Message-ID: <"irafs2.ira.270:10.02.95.16.10.29"@ira.uka.de>

The following doctoral thesis is available by ftp. Sorry, no hardcopies are available.

ftp://archive.cis.ohio-state.edu/pub/neuroprose/Thesis/bodenhausen.thesis.ps.Z

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/Thesis/bodenhausen.thesis.Z

-----------------------------------------------------------------------------

Automatic Structuring of Neural Networks for Spatio-Temporal Real-World Applications
(153 pages)

Ulrich Bodenhausen
Doctoral Thesis
University of Karlsruhe, Germany

Abstract

The successful application of speech recognition (SR) and on-line handwriting recognition (OLHR) systems to new domains depends greatly on the tuning of a recognizer's architecture to the new task. Architectural tuning is especially important if the amount of training data is small, because the amount of training data limits the number of trainable parameters that can be estimated properly by an automatic learning algorithm. The number of trainable parameters of a connectionist SR or OLHR system depends on architectural parameters such as the width of the input windows over time, the number of hidden units, and the number of state units. Each of these architectural parameters provides different functionality in the system and cannot be optimized independently. Manual optimization of these architectural parameters is time-consuming and expensive.
Automatic optimization algorithms can free the developer of SR and OLHR applications from this task. In this thesis I develop and evaluate novel methods that automatically allocate connectionist resources for spatio-temporal classification problems. The methods are evaluated under the following criteria:

- Suitability for small systems (~ 1,000 parameters) as well as for large systems (more than 10,000 parameters): Is the proposed method efficient for various sizes of the system?

- Ease of use for non-expert users: How much knowledge is necessary to adapt the system to a customized application?

- Final performance: Can the automatically optimized system compete with state-of-the-art, well-engineered systems?

Several algorithms were developed and evaluated in this thesis. The Automatic Structure Optimization (ASO) algorithm performed best under the above criteria. ASO automatically optimizes

- the width of the input windows over time, which allows the following unit of the neural network to capture a certain amount of temporal context of the input signal;

- the number of hidden units, which allows the neural network to learn non-linear classification boundaries;

- the number of states that are used to model segments of the spatio-temporal input, such as acoustic segments of speech or strokes of on-line handwriting.

The ASO algorithm uses a constructive approach to find the best architecture. Training starts with a neural network of minimum size. Resources are added specifically to improve parts of the network which are involved in classification errors. ASO was developed on the recognition of spoken letters and improved the performance on an independent test set from 88.0% to 92.2% over a manually tuned architecture. The performance of architectures found by ASO for different domains and databases is also compared to architectures optimized manually by other researchers.
For example, ASO improved the performance on on-line handwritten digits from 98.5% to 99.5% over a manually optimized architecture. It is also shown that ASO can successfully adapt to different sizes of the training database and that it can be applied to the recognition of connected spoken letters.

The ASO algorithm is applicable to all classification problems with spatio-temporal input. It was tested on speech and on-line handwriting as two instances of such tasks. The approach is new, requires no domain-specific knowledge from the user, and is efficient. It is shown for the first time that fully automatic tuning of all relevant architectural parameters of speech and on-line handwriting recognizers (window widths, number of hidden units and states) to the domain and the available amount of training data is actually possible with the ASO algorithm. Automatic tuning by ASO is efficient, both in terms of computational effort and final performance.

------------------------------------------------------------------------

Instructions for ftp retrieval of this paper are given below. Our university requires that the title page be in German. The rest of the thesis is in English.

FTP INSTRUCTIONS:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password:
ftp> cd pub/neuroprose/Thesis
ftp> binary
ftp> get bodenhausen.thesis.Z
ftp> quit
unix> uncompress bodenhausen.thesis.Z

Thanks to Jordan Pollack for maintaining this archive.
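The interactive FTP session above can also be scripted with Python's standard ftplib; the host and path are taken from the announcement and may no longer be reachable, so treat this purely as an illustration of the same steps (anonymous login, change directory, binary retrieval). The transfer logic is factored into a function so it can be exercised without a live connection.

```python
# Sketch of the anonymous-FTP session above using Python's standard ftplib.
# Host and path come from the announcement and may no longer respond.
from ftplib import FTP

def fetch_file(ftp, remote_dir, filename, out):
    """Change to remote_dir and retrieve filename in binary mode into out.

    `ftp` is any ftplib-style connection object; `out` is a writable
    binary file object, which keeps the logic testable offline.
    """
    ftp.cwd(remote_dir)
    ftp.retrbinary("RETR " + filename, out.write)  # equivalent of 'binary' + 'get'

if __name__ == "__main__":
    ftp = FTP("archive.cis.ohio-state.edu")  # or 128.146.8.52
    ftp.login()                              # anonymous login
    with open("bodenhausen.thesis.Z", "wb") as out:
        fetch_file(ftp, "pub/neuroprose/Thesis", "bodenhausen.thesis.Z", out)
    ftp.quit()
```

The `__main__` guard keeps the network access separate from the reusable transfer function.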
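The constructive loop described in the abstract (start from a minimal network, add the resource that most reduces classification errors) can be sketched schematically. This is an illustrative toy, not the thesis's actual ASO implementation: `error_rate` is a synthetic stand-in for training and evaluating a network, and all names are hypothetical.

```python
# Toy sketch of a constructive structure-optimization loop in the spirit of
# ASO: start minimal and repeatedly add one unit of whichever resource
# (input-window width, hidden units, or states) most reduces the error.
# The error measure below is a synthetic stand-in, NOT real training.

def error_rate(arch):
    """Synthetic stand-in: the error shrinks as each resource grows."""
    return (1.0 / arch["window"] + 1.0 / arch["hidden"] + 1.0 / arch["states"]) / 10.0

def grow(arch):
    """Add one unit of whichever resource yields the largest error reduction."""
    best_key, best_err = None, float("inf")
    for key in ("window", "hidden", "states"):
        trial = dict(arch)
        trial[key] += 1
        if error_rate(trial) < best_err:
            best_key, best_err = key, error_rate(trial)
    arch[best_key] += 1
    return arch

def aso_sketch(target_error=0.05, max_steps=100):
    arch = {"window": 1, "hidden": 1, "states": 1}  # minimal starting network
    for _ in range(max_steps):
        if error_rate(arch) <= target_error:
            break
        arch = grow(arch)
    return arch

print(aso_sketch())
```

The real algorithm replaces `error_rate` with actual training runs and credits errors to specific parts of the network; only the greedy grow-where-it-helps control flow is preserved here.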
Uli Bodenhausen

=======================================================================
Uli Bodenhausen
University of Karlsruhe
Germany

uli at ira.uka.de
=======================================================================

From beaudot at morgon.csemne.ch Sun Feb 12 13:20:16 1995
From: beaudot at morgon.csemne.ch (William Beaudot)
Date: Sun, 12 Feb 95 19:20:16 +0100
Subject: French Doctoral Thesis available: neural information processing in retina
Message-ID: <9502121820.AA27557@csemne.ch>

Hi,

Sorry, but this announcement mainly concerns French readers.

-----------------------------------------------------------------------------
The following FRENCH doctoral thesis is available by FTP from the TIRFLab ftp-server (Grenoble, France) and via WWW from my homepage.

FTP-host: tirf.inpg.fr
FTP-file: /pub/beaudot/MYTHESIS/*.ps.Z
WWW-link: ftp://tirf.inpg.fr/pub/HTML/beaudot/thesis.html

(8.6 MB compressed, 30 MB uncompressed, 249 pages, split into 11 compressed files, one compressed PostScript file per chapter)
-----------------------------------------------------------------------------

THE NEURAL INFORMATION PROCESSING IN THE VERTEBRATE RETINA:
A Melting Pot of Ideas for Artificial Vision

KEYWORDS: biological neural networks, retina, motion detection, directional selectivity, visual adaptation, signal processing, spatiotemporal processing, silicon retina

English Abstract:

The retina is the first neural structure involved in visual perception. Researchers in Artificial Vision often see in it only a hard-wired circuit scarcely more sophisticated than a video camera, dedicated to the scanning of images and to the extraction of features amounting to a simple computation of a Laplacian or a temporal derivative. In this thesis, we argue that it does much more, in particular from a dynamical point of view, an aspect often neglected in Artificial Vision.
Drawing on neurobiological data, we show that the retina achieves a spatiotemporal processing well suited to the regularization of visual data, that it extracts reliable and relevant spatiotemporal information, that it performs a rough motion analysis composed of motion detection and directional selectivity, and that it finally presents an elaborate mechanism for the control of sensitivity. This work once more emphasizes the fact that the solutions implemented by nature are both simple and efficient (striking a rather good trade-off between complexity and performance), and that they should inspire the designers of artificial visual systems. Two basic consequences also follow from this work: a better understanding of the neural mechanisms involved in early vision, and a theoretical framework for the synthesis and analysis of neuromorphic systems directly implementable in silicon.

French Abstract:

LE TRAITEMENT NEURONAL DE L'INFORMATION DANS LA RETINE DES VERTEBRES :
Un creuset d'idees pour la vision artificielle.

La retine est la toute premiere structure neuronale impliquee dans la perception visuelle. Les chercheurs en Vision Artificielle n'y voient bien souvent qu'un circuit cable a peine plus sophistique qu'une camera, dediee a l'acquisition de l'image et a l'extraction de primitives se ramenant a un simple calcul de laplacien et de derivee temporelle. Dans cette these, nous soutenons qu'elle realise bien plus, en particulier d'un point de vue dynamique, aspect encore souvent neglige en Vision Artificielle.
En nous appuyant sur des donnees neurobiologiques, nous montrons qu'elle effectue un traitement spatio-temporel bien adapte a la regularisation de l'information visuelle, qu'elle extrait une information spatio-temporelle fiable et pertinente, qu'elle effectue une analyse rudimentaire du mouvement composee d'une detection et d'une selectivite directionnelle, et enfin qu'elle presente un mecanisme de controle de la sensibilite tout a fait remarquable. Ce travail souligne encore une fois le fait que les solutions mises en oeuvre par la nature sont a la fois simples et efficaces (par un bon compromis entre la complexite et la performance), lesquelles devraient inspirer les concepteurs de systemes en Vision Artificielle. De ce travail decoulent aussi deux corollaires fondamentaux : une meilleure comprehension des mecanismes neuronaux impliques dans la vision precoce et un cadre theorique pour la synthese et l'analyse de systemes neuromorphiques directement implantables sur silicium.

-----------------------------------------------------------------------------
FTP INSTRUCTIONS:

unix> ftp tirf.inpg.fr (or 192.70.29.33)
Name: anonymous
Password:
ftp> cd pub/beaudot/MYTHESIS
ftp> binary
ftp> mget *.ps.Z
ftp> quit
unix> uncompress *.ps.Z

Note: the compressed files require 8.6 MB of disk space.
-----------------------------------------------------------------------------

Feel free to contact me if you have any problem.

--
Dr. William H.A. BEAUDOT          E-mail: beaudot at design.csemne.ch
C.S.E.M. IC & Systems Dept.: Bio-Inspired Advanced Research
Maladiere 71, Case postale 41     Phone: (41) 38 205 251
CH-2007 Neuchatel (Switzerland)   Fax: (41) 38 205 770

From mani at linc.cis.upenn.edu Mon Feb 13 09:00:41 1995
From: mani at linc.cis.upenn.edu (D. R. Mani)
Date: Mon, 13 Feb 1995 09:00:41 -0500
Subject: Tech Report
Message-ID: <199502131400.JAA26434@linc.cis.upenn.edu>

The following technical report is available. FTP instructions are appended below.
- Mani

----- Abstract ----------------------------------------------------------------

Massively Parallel Real-Time Reasoning with Very Large Knowledge Bases: An Interim Report

D. R. Mani
Lokendra Shastri

ICSI TR-94-031

We map structured connectionist models of knowledge representation and reasoning onto existing general-purpose massively parallel architectures, with the objective of developing and implementing practical, real-time reasoning systems. SHRUTI, a connectionist knowledge representation and reasoning system which attempts to model reflexive reasoning, serves as our representative connectionist model. Realizations of SHRUTI are developed on the Connection Machine CM-2, a SIMD architecture, and on the Connection Machine CM-5, an MIMD architecture. Though SIMD implementations on the CM-2 are reasonably fast, requiring a few seconds to tens of seconds for answering queries, experiments indicate that SPMD message-passing systems are vastly superior to SIMD systems and offer hundred-fold speedups. The CM-5 implementation can encode large knowledge bases with several hundred thousand (randomly generated) rules and facts, and respond in under 500 milliseconds to a range of queries requiring inference depths of up to eight. This work provides some new insights into the simulation of structured connectionist networks on massively parallel machines and is a step toward developing large yet efficient knowledge representation and reasoning systems.

----- FTP Instructions --------------------------------------------------------

The compressed PostScript file is available by anonymous FTP from ftp.icsi.berkeley.edu (128.32.201.6):

% ftp ftp.icsi.berkeley.edu
Name (ftp.icsi.berkeley.edu:): anonymous
Password:
ftp> cd pub/techreports/1994
ftp> binary
ftp> get tr-94-031.ps.Z
ftp> quit
% uncompress tr-94-031.ps.Z
[Print as usual]

-------------------------------------------------------------------------------
D. R. Mani | mani at linc.cis.upenn.edu
Dept.
of Computer and Information Science | Office: (215) 898-3224
University of Pennsylvania | Home: (610) 325-0528
Philadelphia, PA 19104 |
-------------------------------------------------------------------------------

From mmisra at choma.Mines.Colorado.EDU Tue Feb 14 10:26:15 1995
From: mmisra at choma.Mines.Colorado.EDU (Manavendra Misra)
Date: Tue, 14 Feb 95 08:26:15 MST
Subject: CSNA '95 Call for Papers
Message-ID: <9502141526.AA05529@choma.Mines.Colorado.EDU>

PRELIMINARY ANNOUNCEMENT AND CALL FOR PAPERS

1995 Annual Meeting, Classification Society of North America
June 22-25, 1995, Denver, Colorado, USA

The 1995 annual meeting of the Classification Society of North America (CSNA) will be held in Denver, Colorado from June 22-June 25, 1995 at the Executive Tower Inn, 14th and Curtis Streets. The meeting is supported by the College of Business, University of Colorado at Denver.

The meeting plans currently include a short course on Thursday, June 22, a welcoming reception on Thursday night, and regular sessions from Friday morning, June 23 until Sunday noon, June 25, 1995. A banquet is planned for Friday night and a variety of social activities in Denver and the surrounding area will be available. As is traditional at CSNA meetings, the conference will be interdisciplinary and informal. Abstracts of papers presented are distributed, but no formal proceedings are produced. Speakers are encouraged to present work in progress.

CONTRIBUTED PAPERS RELATED TO CLASSIFICATION AND CLUSTERING FROM THE PERSPECTIVES OF STATISTICS, BIOLOGY, THE PHYSICAL SCIENCES, BUSINESS, LIBRARY SCIENCE, COMPUTER SCIENCE, PSYCHOLOGY, AND OTHER FIELDS ARE WELCOME AND ARE SOLICITED. We are particularly interested in including a wide variety of contributed papers concerned with applications of classification and clustering as well as methodological issues. In addition, invited sessions on

Classification and Clustering in Marketing
P. Green, the Wharton School and J.
D. Carroll, Rutgers University, organizers

New Optimization and Neural Network Approaches to Discriminant Analysis, with Applications
Fred Glover, University of Colorado at Boulder, organizer

Neural Networks for Classification
Manavendra Misra, Colorado School of Mines, organizer (mmisra at mines.colorado.edu)

Model Selection Methods in Classification and Clustering
Hamparsum Bozdogan, University of Tennessee, organizer

Authorship Attribution
David Banks, Carnegie Mellon University, organizer

are planned. A Graduate Student Session, in which graduate students will present their research and meet with mentors to review and discuss the work, will also be offered.

Abstracts of papers to be considered for presentation at the meeting should be submitted as soon as possible, so as to arrive no later than March 15, 1995. Please limit abstracts to one page or less, and include appropriate keywords indicating the topics the paper addresses. We encourage you to submit abstracts via electronic mail. Abstracts submitted electronically may be in LaTeX format or in unformatted text, and should be submitted to the email address listed below. Abstracts submitted in text format should avoid the use of formulae if at all possible. Written abstracts may be mailed to the program chair at the address below, or sent via FAX. Authors will be notified of acceptance of abstracts in late March or early April. If your abstract is intended for the graduate student session, please indicate this clearly.

It will help the organizers plan appropriate conference facilities if those intending to attend the meeting and/or submit abstracts inform the committee (preferably via e-mail) of their intentions as soon as possible, so that appropriate facilities can be arranged. Further information may be obtained from the Program Chair. Detailed registration and hotel information will be distributed in late March or early April. All inquiries, abstracts, etc.
should be directed to:

Peter Bryant, CSNA-95
College of Business, University of Colorado at Denver
Campus Box 165, Denver, Colorado 80217-3364 USA
Telephone (303)-556-5833
Fax (303)-556-5899
E-mail csna95 at castle.cudenver.edu

Please distribute or post this notice as appropriate.

*****************************************************************************
Manavendra Misra
Dept of Mathematical and Computer Sciences
Colorado School of Mines, Golden, CO 80401
Ph. (303)-273-3873  Fax. (303)-273-3875
Home messages/fax: (303)-271-0775
email: mmisra at mines.colorado.edu
WWW URL: http://vita.mines.colorado.edu:3857/1s/mmisra
*****************************************************************************

From mario at joule.physics.uottawa.ca Tue Feb 14 16:01:03 1995
From: mario at joule.physics.uottawa.ca (Mario Marchand)
Date: Tue, 14 Feb 1995 16:01:03 -0500
Subject: Neural net paper available by anonymous ftp
Message-ID: <9502142101.AA16569@joule.physics.uottawa.ca>

The following paper, which has just been accepted for publication in the journal "Neural Networks", is available by anonymous ftp at:

ftp://dirac.physics.uottawa.ca/pub/tr/marchand

FileName: NN95.ps.Z

Title: Learning $\mu$-Perceptron Networks On the Uniform Distribution

Authors: Golea M., Marchand M. and Hancock T.R.

Abstract: We investigate the learnability, under the uniform distribution, of neural concepts that can be represented as simple combinations of {\em nonoverlapping\/} perceptrons (also called $\mu$ perceptrons) with binary weights and arbitrary thresholds. Two perceptrons are said to be nonoverlapping if they do not share any input variables. Specifically, we investigate, within the distribution-specific PAC model, the learnability of $\mu$ {\em perceptron unions\/}, {\em decision lists\/}, and {\em generalized decision lists\/}. In contrast to most neural network learning algorithms, we do not assume that the architecture of the network is known in advance.
Rather, it is the task of the algorithm to find both the architecture of the net and the weight values necessary to represent the function to be learned. We give polynomial time algorithms for learning these restricted classes of networks. The algorithms work by estimating various statistical quantities that yield enough information to infer, with high probability, the target concept. Because the algorithms are statistical in nature, they are robust against large amounts of random classification noise.

ALSO: you will find other papers co-authored by Mario Marchand in this directory. The text file Abstracts-mm.txt contains a list of abstracts of all the papers.

PLEASE: let me know about any printing or transmission problems. Any comments concerning these papers are very welcome.

From esann at dice.ucl.ac.be Tue Feb 14 11:35:15 1995
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Tue, 14 Feb 1995 18:35:15 +0200
Subject: 3rd European Symposium on Artificial Neural Networks: programme & registration
Message-ID: <199502141732.SAA15742@ns1.dice.ucl.ac.be>

********************************************************
* 3rd European Symposium on Artificial Neural Networks *
*                                                      *
*          What's new in fundamental research?         *
*                                                      *
*          Brussels - April 19-20-21, 1995             *
*                                                      *
*                 Preliminary Program                  *
*                                                      *
********************************************************

This e-mail contains all information concerning the 3rd European Symposium on Artificial Neural Networks, to be held in Brussels, on 19-20-21 April, 1995:

- general information about the conference
- the full programme
- the committees
- practical information
- how to register

The field of Artificial Neural Networks encompasses many different disciplines, from mathematics and statistics to robotics and electronics.
For this reason, current studies address various aspects of the field, sometimes to the detriment of strong, well-established foundations for this research; it is obvious that a better knowledge of the basic aspects of neurocomputing, and more effective comparisons with other computing methods, are strongly needed for a profitable long-term use of neural networks in applications. The purpose of the ESANN series of conferences is to present the latest results on the fundamental aspects of artificial neural networks.

The third European Symposium on Artificial Neural Networks will be organized in Brussels, on 19-21 April 1995. It will cover most of the main theoretical, mathematical and fundamental new developments of the field: learning, models, approximation of functions, classification, control, signal processing, biology, self-organisation and many other topics will be treated during the conference. ESANN'95 is coupled with the 'Neurap'95' conference (Marseilles, France, December 1995), which will present the applications of neural network methods.

The program committee of ESANN'95 received 112 submissions; 53 were selected for presentation during the symposium and will be included in the proceedings. This strict selection guarantees the high quality of the selected papers. Besides these presentations, four invited speakers (David Stork, Hans-Peter Mallot, Pierre Comon and Christian Jutten) will present the state of the art and recent developments in particular aspects of the topics covered by the conference.

The steering and program committees of ESANN'95 are pleased to invite you to participate in this symposium. More than a formal conference presenting the latest developments in the field, ESANN'95 will also be a forum for open discussions, round tables and opportunities for future collaborations.
We hope to have the pleasure of meeting you in April in the splendid city of Brussels, and that your stay in Belgium will be as agreeable as it is scientifically beneficial.

-------------------------------
- Programme of the conference -
-------------------------------

Wednesday 19th April 1995
-------------------------

8H30 Registration
9H00 Opening session

Session 1: Self-organisation
Chairman: F. Blayo (Univ. Paris I, France)

9H10 "Self-organisation, metastable states and the ODE method in the Kohonen neural network"
     J.A. Flanagan, M. Hasler
     E.P.F. Lausanne (Switzerland)

9H30 "About the Kohonen algorithm: strong or weak self-organization?"
     J.-C. Fort*, G. Pages**
     *Univ. Nancy I & Univ. Paris I (France), **Univ. Paris 6 & Univ. Paris 12 (France)

9H50 "Topological interpolation in SOM by affine transformations"
     J. Goppert, W. Rosenstiel
     Univ. Tuebingen (Germany)

10H10 "Dynamic Neural Clustering"
     K. Moscinska
     Silesian Tech. Univ. (Poland)

10H30 "Multiple correspondence analysis of a crosstabulations matrix using the Kohonen algorithm"
     S. Ibbou, M. Cottrell
     Univ. Paris I (France)

10H50 Coffee break

Session 2: Models 1
Chairman: J. Stonham (Brunel Univ., United Kingdom)

11H10 "Identification of the human arm kinetics using dynamic recurrent neural networks"
     J.-P. Draye*, G. Cheron**, M. Bourgeois**, D. Pavisic*, G. Libert*
     *Fac. Polytech. de Mons (Belgium), **Univ. Brussels (Belgium)

11H30 "Simplified cascade-correlation learning"
     M. Lehtokangas, J. Saarinen, K. Kaski
     Tampere Univ. of Technology (Finland)

11H50 "Active noise control with dynamic recurrent neural networks"
     D. Pavisic, L. Blondel, J.-P. Draye, G. Libert, P. Chapelle
     Fac. Polytech. de Mons (Belgium)

12H10 "Cascade learning for FIR-TDNNs"
     M. Diepenhorst, J.A.G. Nijhuis, L. Spaanenburg
     Rijksuniv. Groningen (The Netherlands)

12H30 Lunch

14H00 Invited paper: D. Stork (Ricoh California Research Center), A. Sperduti (Univ.
di Pisa)
     "Recent developments in transformation-invariant pattern classification"

Session 3: Signal processing and chaos
Chairman: M. Hasler (E.P.F. Lausanne, Switzerland)

14H45 "Adaptive signal processing with unidirectional Hebbian adaptation laws"
     J. Dehaene, J. Vandewalle
     Kat. Univ. Leuven (Belgium)

15H05 "MAP decomposition of a mixture of AR signal using multilayer perceptrons"
     C. Couvreur
     Fac. Polytech. de Mons (Belgium)

15H25 "XOR and backpropagation learning: in and out of the chaos?"
     K. Bertels*, L. Neuberg*, S. Vassiliadis**, G. Pechanek***
     *Fac. Univ. N.-D. de la Paix (Belgium), **T.U. Delft (The Netherlands), ***IBM Microelectronics Div. (USA)

15H45 "Analog Brownian weight movement for learning of artificial neural networks"
     M.R. Belli, M. Conti, C. Turchetti
     Univ. of Ancona (Italy)

16H05 Coffee break

Session 4: Biological models
Chairman: H.P. Mallot (Max-Planck Institut, Germany)

16H30 "Spatial summation in simple cells: computational and experimental results"
     F. Worgotter*, E. Nelle*, B. Li**, L. Wang**, Y.-C. Diao**
     *Ruhr Univ. Bochum (Germany), **Academia Sinica (China)

16H50 "Activity-dependent neurite outgrowth in a simple network model including excitation and inhibition"
     C. van Oss, A. van Ooyen
     Neth. Inst. for Brain Research (The Netherlands)

17H10 "Predicting spike train responses of neuron models"
     S. Joeken, H. Schwegler
     Univ. of Bremen (Germany)

17H30 "A distribution-based model of the dynamics of neural networks in the cerebral cortex"
     A. Terao*, M. Akamatsu*, J. Seal**
     *Nat. Inst. of Bioscience and Human Technology (Japan), **CNRS Marseilles (France)

17H50 "Some new results on the coding of pheromone intensity in an olfactory sensory neuron"
     A. Vermeulen*,**, J.-P. Rospars*, P. Lansky***,*, H.C. Tuckwell****,*
     *INRA (France), **I.N.P. Grenoble (France), ***Acad. of Sciences (Czech Republic), ****Australian Nat. Univ.
(Australia)

Thursday 20th April 1995
------------------------

Session 5: Special session on the Elena-Nerves2 ESPRIT Basic Research project
Chairman: M. Cottrell (Univ. Paris I, France)

9H00 Invited paper: P. Comon (Thomson-Sintra, France)
     "Supervised classification: a probabilistic approach"

9H30 Invited paper: C. Jutten, O. Fambon (Inst. Nat. Pol. Grenoble, France)
     "Pruning methods: a review"

10H00 "A deterministic method for establishing the initial conditions in the RCE algorithm"
     J.M. Moreno, F.X. Vazquez, F. Castillo, J. Madrenas, J. Cabestany
     Univ. Politecnica Catalunya (Spain)

10H20 "Pruning kernel density estimators"
     O. Fambon, C. Jutten
     Inst. Nat. Pol. Grenoble (France)

10H40 "Suboptimal Bayesian classification by vector quantization with small clusters"
     J.L. Voz, M. Verleysen, P. Thissen, J.D. Legat
     Univ. Cat. Louvain (Belgium)

11H00 Coffee break

Session 6: Theory of learning systems
Chairman: C. Touzet (IUSPIM Marseilles, France)

11H20 "Knowledge and generalisation in simple learning systems"
     D. Barber, D. Saad
     University of Edinburgh (United Kingdom)

11H40 "Control of complexity in learning with perturbed inputs"
     Y. Grandvalet*, S. Canu*, S. Boucheron**
     *Univ. Tech. de Compiegne (France), **Univ. Paris-Sud (France)

12H00 "An episodic knowledge base for object understanding"
     U.-D. Braumann, H.-J. Boehme, H.-M. Gross
     Tech. Univ. Ilmenau (Germany)

12H20 "Neurosymbolic integration: unified versus hybrid approaches"
     M. Hilario*, Y. Lallement**, F. Alexandre**
     *Univ. Geneve (Switzerland), **INRIA (France)

12H40 Lunch

Session 7: Biological vision
Chairman: J. Herault (Inst. Nat Polyt. Grenoble, France)

14H00 "Improving object recognition by using a visual latency mechanism"
     R. Opara, F. Worgotter
     Ruhr Univ. Bochum (Germany)

14H20 "On the function of the retinal bipolar cell in early vision"
     S. Ohshima, T. Yagi, Y. Funahashi
     Nagoya Inst.
of Technology (Japan)

14H40 "Sustained and transient amacrine cell circuits underlying the receptive fields of ganglion cells in the vertebrate retina"
     G. Maguire
     Univ. of Texas (USA)

15H00 "Latency-reduction in antagonistic visual channels as the result of corticofugal feedback"
     J. Kohn, F. Worgotter
     Ruhr Univ. Bochum (Germany)

15H20 Coffee break

Session 8: Models 2
Chairman: V. Kurkova (Academy of Sciences, Czech Republic)

15H40 "On threshold circuit depth"
     A. Albrecht
     BerCom GmbH (Germany)

16H00 "Minimum entropy queries for linear students learning nonlinear rules"
     P. Sollich
     Univ. of Edinburgh (United Kingdom)

16H20 "An asymmetric associative memory model based on relaxation labeling processes"
     M. Pelillo, A.M. Fanelli
     Univ. di Bari (Italy)

16H40 "Invariant measure for an infinite neural network"
     T.S. Turova
     Kat. Univ. Leuven (Belgium)

17H00 "Growing adaptive neural networks with graph grammars"
     S.M. Lucas
     Univ. of Essex (United Kingdom)

17H20 "Constructing feed-forward neural networks for binary classification tasks"
     C. Campbell*, C. Perez Vincente**
     *Bristol Univ. (United Kingdom), **Univ. Barcelona (Spain)

20H00 Conference dinner

Friday 21st April 1995
----------------------

Session 9: Classification and control
Chairman: M. Grana (UPV San Sebastian, Spain)

9H00 "Improvement of EEG classification with a subject-specific feature selection"
     M. Pregenzer, G. Pfurtscheller, C. Andrew
     Graz Univ. of Technology (Austria)

9H20 "Neural networks for invariant pattern recognition"
     J. Wood, J. Shawe-Taylor
     Univ. of London (United Kingdom)

9H40 "Derivation of a new criterion function based on an information measure for improving piecewise linear separation incremental algorithms"
     J. Cuguero, J. Madrenas, J.M. Moreno, J. Cabestany
     Univ. Politecnica Catalunya (Spain)

10H00 "Neural network based one-step ahead control and its stability"
     Y. Tan, A.R. Van Cauwenberghe
     Univ.
of Gent (Belgium)

10H20 "NLq theory: unifications in the theory of neural networks, systems and control"
     J. Suykens, B. De Moor, J. Vandewalle
     Kat. Univ. Leuven (Belgium)

10H40 Coffee break

11H00 Invited paper: H.P. Mallot (Max-Planck-Institut, Germany)
     "Learning of cognitive maps from sequences of views"

Session 10: Radial-basis functions
Chairman: G. Pages (Univ. Paris VI, France)

11H45 "Trimming the inputs of RBF networks"
     C. Andrew*, M. Kubat**, G. Pfurtscheller*
     *Graz Univ. Tech. (Austria), **Johannes Kepler Univ. (Austria)

12H05 "Learning the appropriate representation paradigm by circular processing units"
     S. Ridella, S. Rovetta, R. Zunino
     Univ. of Genoa (Italy)

12H25 "Radial basis functions in the Fourier domain"
     M. Orr
     Univ. of Edinburgh (United Kingdom)

12H45 Lunch

Session 11: Function approximation
Chairman: J. Vandewalle (Kat. Univ. Leuven, Belgium)

14H00 "Function approximation by localized basis function neural network"
     M. Kokol, I. Grabec
     Univ. of Ljubljana (Slovenia)

14H20 "Functional approximation by perceptrons: a new approach"
     J.-G. Attali*, G. Pages**
     *Univ. Paris I (France), **Univ. Paris 6 & Univ. Paris 12 (France)

14H40 "Approximation of functions by Gaussian RBF networks with bounded number of hidden units"
     V. Kurkova
     Acad. of Sciences (Czech Republic)

15H00 "Neural network piecewise linear preprocessing for time-series prediction"
     T.W.S. Chow, C.T. Leung
     City Univ. (Hong Kong)

15H20 "An upper estimate of the error of approximation of continuous multivariable functions by KBF networks"
     K. Hlavackova
     Acad. of Sciences (Czech Republic)

15H40 Coffee break

Session 12: Multi-layer perceptrons
Chairman: W. Duch (Nicholas Copernicus Univ., Poland)

16H00 "Multi-sigmoidal units and neural networks"
     J.A. Drakopoulos
     Stanford Univ. (USA)

16H20 "Performance analysis of a MLP weight initialization algorithm"
     M. Karouia, R. Lengelle, T. Denoeux
     Univ.
Compiegne (France)
16H40 "Alternative output representation schemes affect learning and generalization of back-propagation ANNs; a decision support application" P.K. Psomas, G.D. Hilakos, C.F. Christoyannis, N.K. Uzunoglu Nat. Tech. Univ. Athens (Greece)
17H00 "A new training algorithm for feedforward neural networks" B.K. Verma, J.J. Mulawka Warsaw Univ. of Technology (Poland)
17H20 "An evolutive architecture coupled with optimal perceptron learning for classification" J.-M. Torres Moreno, P. Peretto, M. B. Gordon C.E.N. Grenoble (France)

--------------
- Committees -
--------------

Steering committee
------------------
Francois Blayo Univ. Paris I (F)
Marie Cottrell Univ. Paris I (F)
Nicolas Franceschini CNRS Marseille (F)
Jeanny Herault INPG Grenoble (F)
Michel Verleysen UCL Louvain-la-Neuve (B)

Scientific committee
--------------------
Agnes Babloyantz Univ. Libre Bruxelles (Belgium)
Herve Bourlard ICSI Berkeley (USA)
Joan Cabestany Univ. Polit. de Catalunya (E)
Dave Cliff University of Sussex (UK)
Holk Cruse Universitat Bielefeld (D)
Dante Del Corso Politecnico di Torino (I)
Wlodek Duch Nicholas Copernicus Univ. (PL)
Marc Duranton Philips / LEP (F)
Jean-Claude Fort Universite Nancy I (F)
Bernd Fritzke Ruhr-Universitat Bochum (D)
Karl Goser Universitat Dortmund (D)
Manuel Grana UPV San Sebastian (E)
Martin Hasler EPFL Lausanne (CH)
Kurt Hornik Technische Univ. Wien (A)
Christian Jutten INPG Grenoble (F)
Vera Kurkova Acad. of Science of the Czech Rep. (CZ)
Petr Lansky Acad. of Science of the Czech Rep. (CZ)
Jean-Didier Legat UCL Louvain-la-Neuve (B)
Hans-Peter Mallot Max-Planck Institut (D)
Eddy Mayoraz RUTCOR (USA)
Jean Arcady Meyer Ecole Normale Superieure Paris (F)
Jose Mira-Mira UNED (E)
Pietro Morasso Univ.
of Genoa (I)
Jean-Pierre Nadal Ecole Normale Superieure Paris (F)
Erkki Oja Helsinki University of Technology (FIN)
Gilles Pages Universite Paris VI (F)
Helene Paugam-Moisy Ecole Normale Superieure Lyon (F)
Alberto Prieto Universidad de Granada (E)
Pierre Puget LETI Grenoble (F)
Ronan Reilly University College Dublin (IRE)
Tamas Roska Hungarian Academy of Science (H)
Jean-Pierre Rospars INRA Versailles (F)
Jean-Pierre Royet Universite Lyon 1 (F)
John Stonham Brunel University (UK)
John Taylor King's College London (UK)
Vincent Torre Universita di Genova (I)
Claude Touzet IUSPIM Marseilles (F)
Joos Vandewalle KUL Leuven (B)
Marc Van Hulle KUL Leuven (B)
Christian Wellekens Eurecom Sophia-Antipolis (F)

-----------
- Support -
-----------

ESANN'95 is organized with the support of:
- Commission of the European Communities (DG XII, Human Capital and Mobility programme)
- Region of Brussels-Capital
- IEEE Region 8
- UCL (Universite Catholique de Louvain - Louvain-la-Neuve)
- REGARDS (Research Group on Algorithmic, Related Devices and Systems, UCL)

--------------------------
- Conference information -
--------------------------

Registration fees for symposium
-------------------------------
                 registration before    registration after
                 17th March 1995        17th March 1995
Universities     BEF 15000              BEF 16000
Industries       BEF 19000              BEF 20000

Registration fees include attendance to all sessions, the ESANN'95 banquet, a copy of the conference proceedings, daily lunches (19-21 April '95), and coffee breaks twice a day during the symposium. Advance registration is mandatory. Advance payments (see registration form) must be made to the conference secretariat by bank transfer on a Belgian bank (free of charge), by bank transfer from a bank abroad account (add BEF 500 for processing fees), or by sending a cheque (add BEF 500 for processing fees). Bank transfers and cheques must be made out in Belgian francs.

Language
--------
The official language of the conference is English.
It will be used for all printed material, presentations and discussions.

Proceedings
-----------
A copy of the proceedings will be provided to all conference registrants. All technical papers will be included in the proceedings. Additional copies of the proceedings (ESANN'93, ESANN'94 and ESANN'95) may be purchased at the following rates:
ESANN'95 proceedings: BEF 2000
ESANN'94 proceedings: BEF 2000
ESANN'93 proceedings: BEF 1500
Add BEF 500 to any single or multiple order for p.&p. and bank charges. Please write to the conference secretariat to order proceedings.

Conference dinner
-----------------
A banquet will be offered on Thursday 20th to all conference registrants in a famous and typical Brussels venue. Additional vouchers for the banquet may be purchased on Wednesday 19th at the conference.

Cancellation
------------
If cancellation is received by 24th March 1995, 50% of the registration fees will be returned. Cancellations received after this date will not be entitled to any refund.

-----------------------
- General information -
-----------------------

Brussels, Belgium
-----------------
Brussels is not only the host city of the European Commission and of hundreds of multinational companies; it is also a marvelous historical town, with typical quarters, famous monuments known throughout the world, and the splendid "Grand-Place". It is a cultural and artistic center, with numerous museums. Night life in Brussels is lively: there are a lot of restaurants and pubs open late into the night, where typical Belgian dishes can be tasted with one of the more than 1000 different beers.

Hotel accommodation
-------------------
Special rates for participants in ESANN'95 have been arranged at the MAYFAIR HOTEL (4 stars) and at the FORUM HOTEL (3 stars). The Mayfair Hotel is tastefully decorated to the highest standards of luxury and comfort. It includes two restaurants, a bar and private parking.
Located on the elegant Avenue Louise, the exclusive Hotel Mayfair is a short walk from the "uppertown" luxurious shopping district. Also nearby are the 14th-century Cistercian abbey and the magnificent "Bois de la Cambre" park.
Single room              BEF 3300
Double room or twin room BEF 4000
HOTEL MAYFAIR            phone: +32 2 649 98 00
381 av. Louise           fax:   +32 2 649 22 49
1050 Brussels - Belgium

The Forum Hotel is situated in the heart of a quiet, residential and historical "Art Nouveau" area. It includes a bar, a Tuscan restaurant and private parking.
Single room              BEF 2200
Double room or twin room BEF 2700
FORUM HOTEL              phone: +32 2 343 01 00
2 av. du Haut-Pont       fax:   +32 2 347 00 54
1060 Brussels - Belgium

Prices for both hotels include breakfast, taxes and service. Rooms can only be confirmed upon receipt of the booking form (see at the end of this booklet) and deposit. Rooms must be booked before 31 March 1995. Public transportation goes directly from the hotels (trams No. 93 & 94 from the Mayfair hotel and tram No. 92 from the Forum hotel) to the conference center ("Parc" stop).

Conference location
-------------------
The conference will be held at the "Chancellerie" of the Generale de Banque. A map is included at the end of this booklet.
Generale de Banque - Chancellerie
1 rue de la Chancellerie
1000 Brussels - Belgium

Conference secretariat
----------------------
D facto conference services    phone: +32 2 245 43 63
45 rue Masui                   fax:   +32 2 245 46 94
B-1210 Brussels - Belgium      E-mail: esann at dice.ucl.ac.be

------------------------------------------------
- ESANN'95 Registration and Hotel Booking Form -
------------------------------------------------
Ms., Mr., Dr., Prof.: ............................................
Name: ............................................................
First name: ......................................................
Institution: .....................................................
.................................................................
Address: .........................................................
.................................................................
ZIP: .............................................................
Town: ............................................................
Country: .........................................................
Tel: .............................................................
Fax: .............................................................
E-mail: ..........................................................
VAT No.: .........................................................

Registration fees
-----------------
                 registration before    registration after
                 17th March 1995        17th March 1995
Universities     BEF 15000              BEF 16000
Industries       BEF 19000              BEF 20000

University fees are applicable to members and students of academic and teaching institutions. Each registration will be confirmed by an acknowledgment of receipt, which must be given to the registration desk of the conference to get the entry badge, proceedings and all materials. Registration fees include attendance to all sessions, the ESANN'95 banquet, a copy of the conference proceedings, daily lunches (19-21 April '95), and coffee breaks twice a day during the symposium. Advance registration is mandatory.

Hotel booking
-------------
Hotel MAYFAIR (4 stars) - 381 av. Louise - 1050 Brussels
Single room             : BEF 3300
Double room (large bed) : BEF 4000
Twin room (2 beds)      : BEF 4000

Hotel FORUM (3 stars) - 2 av. du Haut-Pont - 1060 Brussels
Single room             : BEF 2200
Double room (large bed) : BEF 2700
Twin room (2 beds)      : BEF 2700

Prices include breakfast, service and taxes. A deposit corresponding to the first night is mandatory.
Please tick as appropriate:
---------------------------
Registration form for ESANN'95

Universities:
O registration before 17th March 1995: BEF 15000
O registration after 17th March 1995: BEF 16000
Industries:
O registration before 17th March 1995: BEF 19000
O registration after 17th March 1995: BEF 20000

Hotel Mayfair booking
O single room deposit: BEF 3300
O double room (large bed) deposit: BEF 4000
O twin room (twin beds) deposit: BEF 4000

Hotel Forum booking
O single room deposit: BEF 2200
O double room (large bed) deposit: BEF 2700
O twin room (twin beds) deposit: BEF 2700

Hotels must be booked before 31 March 1995; no guarantee of availability can be given after this date.

Arrival date: ..../..../1995
Departure date: ..../..../1995

O Additional payment if fees are paid through a bank abroad cheque or by a bank transfer from a bank abroad account: BEF 500

Total: BEF ____

Payment (please tick):
O Bank transfer, stating name of participant, made payable to:
  Generale de Banque
  ch. de Waterloo 1341 A
  B-1180 Brussels - Belgium
  Acc. no: 210-0468648-93 of D facto (45 rue Masui, B-1210 Brussels)
  A supplementary fee of BEF 500 must be added if the payment is made from a bank abroad account.
O Cheques/Postal Money Orders made payable to:
  D facto
  45 rue Masui
  B-1210 Brussels - Belgium
  A supplementary fee of BEF 500 must be added if the payment is made through a bank abroad cheque or postal money order.

Only registrations accompanied by a cheque, a postal money order or the proof of bank transfer will be considered.
Registration and hotel booking form, together with payment, must be sent as soon as possible, and in no case later than 7th April 1995, to the conference secretariat:

D facto conference services - ESANN'95
45, rue Masui - B-1210 Brussels - Belgium

_____________________________
D facto publications - conference services
45 rue Masui
1210 Brussels
Belgium
tel: +32 2 245 43 63
fax: +32 2 245 46 94
_____________________________

From yarowsky at unagi.cis.upenn.edu Tue Feb 14 16:29:51 1995 From: yarowsky at unagi.cis.upenn.edu (David Yarowsky) Date: Tue, 14 Feb 95 16:29:51 EST Subject: ACL-95 Corpus-based NLP Workshop - Call for Papers Message-ID: <9502142129.AA27412@unagi.cis.upenn.edu>

*** PRIMARY CALL FOR PAPERS ***

ACL's SIGDAT and SIGNLL present the
THIRD WORKSHOP ON VERY LARGE CORPORA

WHEN: June 30, 1995 - immediately following ACL-95 (June 27-29)
WHERE: MIT, Cambridge, Massachusetts, USA

WORKSHOP DESCRIPTION: As in past years, the workshop will offer a general forum for new research in corpus-based and statistical natural language processing. Areas of interest include (but are not limited to): sense disambiguation, part-of-speech tagging, robust parsing, term and name identification, alignment of parallel text, machine translation, lexicography, spelling correction, morphological analysis and anaphora resolution.

This year, the workshop will be organized around the theme of:
Supervised Training vs. Self-organizing Methods

Is annotation worth the effort? Historically, annotated corpora have made a significant contribution. The tagged Brown Corpus, for example, led to important improvements in part-of-speech tagging. But annotated corpora are expensive. Very little annotated data is currently available, especially for languages other than English. Self-organizing methods offer the hope that annotated corpora might not be necessary. Do these methods really work? Do we have to choose between annotated corpora and unannotated corpora? Can we use both?
The workshop will encourage contributions of innovative research along this spectrum. In particular, it will seek work in languages and applications where appropriately tagged training corpora do not currently exist. It will also explore what new kinds of corpus annotations (such as discourse structure, co-reference and sense tagging) would be useful to the community, and will encourage papers on their development and use in experimental projects. The theme will provide an organizing structure to the workshop, and offer a focus for debate. However, we expect and will welcome a diverse set of submissions in all areas of statistical and corpus-based NLP.

PROGRAM CHAIRS:
Ken Church - AT&T Bell Laboratories
David Yarowsky - University of Pennsylvania

SPONSORS:
LEXIS-NEXIS, Division of Reed and Elsevier, Plc.
SIGDAT (ACL's special interest group for linguistic data and corpus-based approaches to NLP)
SIGNLL (ACL's special interest group for natural language learning)

FORMAT FOR SUBMISSION: Authors should submit a full-length paper (3500-8000 words), either electronically or in hard copy. Electronic submissions should be mailed to "yarowsky at unagi.cis.upenn.edu", and must either be (a) plain ASCII text, (b) a single PostScript file, or (c) a single LaTeX file following the ACL-95 stylesheet (no separate figures or .bib files). Hard copy submissions should be mailed to Ken Church (address below), and should include four (4) copies of the paper.

REQUIREMENTS: Papers should describe original work. A paper accepted for presentation cannot be presented or have been presented at any other meeting. Papers submitted to other conferences will be considered, as long as this fact is clearly indicated in the submission.

SCHEDULE:
Submission Deadline: March 20, 1995
Notification Date: April 18, 1995
Camera ready copy due: May 11, 1995

CONTACT:
Ken Church                        David Yarowsky
Room 2B-421                       Dept. of Computer and Info.
Science
AT&T Bell Laboratories            University of Pennsylvania
600 Mountain Ave.                 200 S. 33rd St.
Murray Hill, NJ 07974 USA         Philadelphia, PA 19104-6389 USA
e-mail: kwc at research.att.com   email: yarowsky at unagi.cis.upenn.edu

From bhuiyan at mars.elcom.nitech.ac.jp Wed Feb 15 15:30:14 1995 From: bhuiyan at mars.elcom.nitech.ac.jp (Md. Shoaib Bhuiyan) Date: Wed, 15 Feb 95 15:30:14 JST Subject: Paper available: An improved Neural Network based Edge Detection method Message-ID: <9502150630.AA23476@mars.elcom.nitech.ac.jp>

The following paper is available for copying. It was published in Proceedings of the Int'l. Conf. on Neural Information Processing, Seoul, Korea, vol. 1, pp. 620-625, Oct. 17-20, 1994.

An improved Neural Network based Edge Detection method

Abstract: Existing edge detection methods provide unsatisfactory results when contrast varies widely within an image due to non-uniform illumination. Koch et al. developed an energy function based on a Hopfield neural network, whose coefficients were fixed by trial and error and remain constant for the entire image, irrespective of the differences in intensity level. This paper presents an improved edge detection method for images where contrast is not uniform. We propose that the energy function parameters for an image with inconsistent illumination should not remain fixed, and we propose a schedule for changing these parameters. The results, compared with those of existing methods, suggest a better strategy for edge detection depending upon both the dynamic range of the original image pixel values and their contrast.

-----------------------------------------------------------------

The paper can be retrieved via anonymous ftp by following these instructions:

unix> ftp ftp.elcom.nitech.ac.jp
ftp:name> anonymous
Password:> your complete e-mail address
ftp> cd pub
ftp> binary
ftp> get ICONIP.ps.gz
ftp> bye
unix> gunzip ICONIP.ps.gz
unix> lpr ICONIP.ps

ICONIP.ps is 3.58 MB, six pages in PostScript format.
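The abstract's central idea (an edge-detection parameter that tracks local contrast instead of staying fixed for the whole image) can be illustrated with a minimal sketch. This is not the paper's algorithm: the block size, the scaling constant `k`, and the gradient-thresholding rule below are all hypothetical choices made only to show what a contrast-dependent parameter schedule might look like.

```python
import numpy as np

def adaptive_edge_map(image, block=16, k=0.25):
    """Toy sketch: threshold gradient magnitude with a per-block
    threshold scaled by that block's local dynamic range, instead
    of one fixed global parameter. Hypothetical rule, not the
    paper's method."""
    img = image.astype(float)
    gy, gx = np.gradient(img)          # finite-difference gradients
    mag = np.hypot(gx, gy)             # gradient magnitude
    edges = np.zeros(img.shape, dtype=bool)
    for i in range(0, img.shape[0], block):
        for j in range(0, img.shape[1], block):
            win = img[i:i + block, j:j + block]
            local_range = win.max() - win.min()   # local contrast estimate
            t = k * local_range                   # contrast-dependent threshold
            edges[i:i + block, j:j + block] = (
                mag[i:i + block, j:j + block] > t
            )
    return edges

# A weak step in a dark region and a strong step in a bright region:
img = np.zeros((32, 32))
img[:16, 16:] = 8      # low-contrast edge in the dark half
img[16:, 16:] = 200    # high-contrast edge in the bright half
edges = adaptive_edge_map(img)
print(edges.any())
```

With a single fixed threshold, the weak step would have to compete against the strong one; scaling the threshold by each block's dynamic range lets both be detected, which is the gist of the abstract's argument.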
The paper proposes a novel idea for extracting edges from images whose contrast is not uniform. Your feedback is very much appreciated (bhuiyan at mars.elcom.nitech.ac.jp). -Md. Shoaib Bhuiyan

From B344DSL at UTARLG.UTA.EDU Wed Feb 15 15:00:56 1995 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Wed, 15 Feb 1995 14:00:56 -0600 (CST) Subject: Call for items for Newsletter section of Neural Networks Message-ID:

Call for Newsletter Items

The journal Neural Networks is now adding a newsletter section to facilitate announcement of items of interest to its general readership. Examples include, but are not limited to:
~ new industrial applications of neural networks;
~ new discoveries in neurobiology or psychology that are of interest to neural network researchers;
~ government or international research initiatives that will be of benefit to the neural network field;
~ activities by special interest groups (SIGs) of the International Neural Network Society.

We are aiming for our first newsletter section to be part of the issue of Neural Networks to appear at the World Congress on Neural Networks, July 17-21. The Newsletter Editors are:
Harold Szu (representing the SIGs)
Marwan Jabri (representing the Japanese Neural Network Society)
Stephane Canu (representing the European Neural Network Society)
Daniel S. Levine (representing the International Neural Network Society)

Each of the four of us as editors is allotted a maximum of 2 newsletter pages per issue, and will judge submissions for relevance and appropriateness. (If one editor receives more than two pages' worth of publishable items and another editor less, some items can be reapportioned.) Therefore, I am putting out a call for submissions of items that you wish to publicize, preferably in electronic form, to facilitate forwarding to one of the Editors-in-Chief of the journal. (Note: this does not include conference announcements, which are already covered in the journal's current events section.)
If you think an item might be appropriate for the Newsletter but are unsure, please send it anyway and the Editors will decide collectively on it. To facilitate going to press soon enough for July, we would like to have items arrive if possible by March 1, 1995. They can be sent to me at:

Professor Daniel S. Levine
Department of Mathematics
University of Texas at Arlington
Arlington, TX 76019-0408, USA
e-mail: b344dsl at utarlg.uta.edu
fax: 817-794-5802
telephone: 817-273-3598

or any of the other three editors:

Dr. Harold Szu
9402 Wildoak Drive
Bethesda, MD 20914, USA
e-mail: hszu at ulysses.nswc.navy.mil
fax: 301-394-3923
telephone: 301-394-3097

Dr. Stephane Canu
Departement de Genie Informatique
HEUDIASYC - U.R.A. 817 C.N.R.S.
Universite de Technologie de Compiegne
B.P. 649
60206 Compiegne Cedex, France
e-mail: scanu at hds.univ-compiegne.fr
fax: (+33) 44 23 44 77
telephone: (+33) 44 23 44 83

Dr. Marwan Jabri
Department of Electrical Engineering
Building J03
Sydney University
Sydney NSW 2006, Australia
e-mail: marwan at sedal.oz.au
fax: 61-2-660-1228
telephone: 61-2-351-2240

Please post this notice to other bulletin boards!

From koza at CS.Stanford.EDU Wed Feb 15 22:47:30 1995 From: koza at CS.Stanford.EDU (John Koza) Date: Wed, 15 Feb 95 19:47:30 PST Subject: GP-96 Call For Papers Message-ID:

FIRST CALL FOR PAPERS (Version 1.0)

GP-96 - GENETIC PROGRAMMING 96
July 28 - 31 (Sunday - Wednesday), 1996
Fairchild Auditorium
Stanford University
Stanford, California

This first genetic programming conference will bring together people from the academic world, industry, and government who are interested in genetic programming. The conference program will include contributed papers, tutorials, an invited speaker, and informal meetings.
Topics of interest include, but are not limited to:
- new applications of genetic programming
- theory
- extensions and variations of genetic programming
- parallelization techniques
- mental models, memory, and state
- operator and representation issues
- relations to biology and cognitive systems
- implementation issues
- war stories

Proceedings will be published by The MIT Press.

HONORARY CHAIR (AND INVITED SPEAKER)
John Holland, University of Michigan

GENERAL CHAIR
John Koza, Stanford University

PROGRAM COMMITTEE (In Formation):
- Russell J. Abbott, California State University, Los Angeles and The Aerospace Corporation
- David Andre, Stanford University
- Peter J. Angeline, Loral Federal Systems
- Wolfgang Banzhaf, University of Dortmund, Germany
- Samy Bengio, Centre National d'Etudes des Telecommunications, France
- Scott Brave, Stanford University
- Walter Cedeno, Primavera Systems Inc.
- Nichael Lynn Cramer, BBN System and Technologies
- Patrik D'haeseleer, University of New Mexico
- Bertrand Daniel Dunay, System Dynamics International
- Frederic Gruau, Stanford University
- Richard J. Hampo, Ford Motor Company
- Simon Handley, Stanford University
- Hitoshi Hemmi, ATR, Kyoto, Japan
- Thomas Huang, University of Illinois
- Hitoshi Iba, Electrotechnical Laboratory, Japan
- Martin A. Keane, Econometrics Inc.
- Mike Keith, Allen Bradley Controls
- Kenneth Marko, Ford Motor Company
- Kenneth E. Kinnear, Jr., Adaptive Computing Technology
- W. B. Langdon, University College, London
- Martin C. Martin, Carnegie Mellon University
- Sidney R. Maxwell III
- David Montana, BBN System and Technologies
- Dr. Heinz Muehlenbein, GMD Research Center, Germany
- Peter Nordin, University of Dortmund, Germany
- Howard Oakley, Institute of Naval Medicine, United Kingdom
- Franz Oppacher, Carleton University, Ottawa
- Una-May O'Reilly, Carleton University, Ottawa
- Michael Papka, Argonne National Laboratory
- Timothy Perkis
- Justinian P.
Rosca, University of Rochester
- Conor Ryan, University College Cork, Ireland
- Malcolm Shute, University of Brighton
- Eric V. Siegel, Columbia University
- Karl Sims
- Andrew Singleton, Creation Mechanics
- Lee Spector, Hampshire College
- Walter Alden Tackett, Neuromedia
- Astro Teller, Carnegie Mellon University
- Patrick Tufts, Brandeis University
- V. Rao Vemuri, University of California at Davis
- Darrell Whitley, Colorado State University
- Alden H. Wright, University of Montana
- Byoung-Tak Zhang, GMD, Germany

EXECUTIVE COMMITTEE OF PROGRAM COMMITTEE (In Formation)

SPECIAL PROGRAM CHAIRS
The main focus of the conference (and about two-thirds of the papers) will be on genetic programming. In addition, papers describing recent developments in closely related areas of evolutionary computation (particularly those addressing issues common to various areas of evolutionary computation) will be reviewed by special program committees appointed and supervised by the following special program chairs.
- GENETIC ALGORITHMS: David E. Goldberg, University of Illinois
- CLASSIFIER SYSTEMS: Rick Riolo, University of Michigan
- EVOLUTIONARY PROGRAMMING: David Fogel, University of California at San Diego
- EVOLUTION STRATEGIES: PROPOSALS HEREBY SOLICITED

TUTORIALS
Tutorials will cover (1) genetic programming, (2) closely related areas of evolutionary computation, and (3) neural networks, machine learning, and introductory molecular biology. Most tutorials will be on Sunday, July 28, 1996; specific times and dates will be announced later.
- INTRODUCTION TO GENETIC PROGRAMMING: John Koza, Stanford University
- MACHINE LANGUAGE GENETIC PROGRAMMING: Peter Nordin, University of Dortmund, Germany
- GENETIC PROGRAMMING USING BINARY REPRESENTATION: Wolfgang Banzhaf, University of Dortmund, Germany
- GENETIC ALGORITHMS: David E.
Goldberg, University of Illinois
- EVOLUTIONARY PROGRAMMING: David Fogel, University of California at San Diego
- EVOLUTIONARY COMPUTATION FOR CONSTRAINT OPTIMIZATION: Zbigniew Michalewicz, University of North Carolina
- CLASSIFIER SYSTEMS: Robert Elliott Smith, University of Alabama
- MOLECULAR BIOLOGY FOR COMPUTER SCIENTISTS: Russell B. Altman, Stanford University
- NEURAL NETWORKS: David E. Rumelhart, Stanford University
- MACHINE LEARNING: Pat Langley, Stanford University
- OTHER GENETIC PROGRAMMING TUTORIALS: PROPOSALS HEREBY SOLICITED

INFORMATION FOR SUBMITTING PAPERS:
Wednesday, January 10, 1996 is the deadline for receipt at the address below of seven (7) copies of each submitted paper. Papers are to be in single-spaced, 12-point type on 8 1/2" x 11" or A4 paper (no e-mail or fax) with full 1" margins at top, bottom, left, and right. Two-sided printing is preferred. Papers are to contain ALL of the following 9 items within a maximum of 10 pages, in this order: (1) title of paper, (2) author name(s), (3) author physical address(es), (4) author e-mail address(es), (5) author phone number(s), (6) a 100-200 word abstract of the paper, (7) the paper's category (chosen from one of the following five alternatives: genetic programming, genetic algorithms, classifier systems, evolutionary programming, or evolution strategy), (8) the text of the paper (including all figures and tables), and (9) bibliography. All other elements of the paper (e.g., acknowledgements, appendices, if any) must come within the maximum of 10 pages. Review criteria will include significance of the work, novelty, sufficiency of information to permit replication (if applicable), clarity, and writing quality. The first-named author (or other designated author) will be notified of acceptance or rejection and reviewer comments by approximately Monday, February 26, 1996.
Details of the style of the camera-ready paper will be announced later, but it will resemble that of the SAB-94 and ALIFE-94 proceedings recently published by The MIT Press. The deadline for the camera-ready, revised version of accepted papers will be announced later but will be approximately Wednesday, March 20, 1996. Proceedings will be published by The MIT Press and will be available at the conference. One of the authors will be expected to present each accepted paper at the conference.

HOUSING: Stanford is about 40 miles south of San Francisco, about 25 miles south of the SF airport, and about 25 miles north of San Jose. There are numerous hotels of all types adjacent to, or near, the campus (many along El Camino Real in Palo Alto and nearby Mountain View). An optional housing and meals package will be available from the Conference Department at Stanford and will be announced later.

FOR MORE INFORMATION:
E-mail: GP96 at Cs.Stanford.Edu
GP-96 Conference
c/o John Koza
Computer Science Department
Margaret Jacks Hall
Stanford University
Stanford, CA 94305-2140 USA

From rob at comec4.mh.ua.edu Thu Feb 16 14:32:53 1995 From: rob at comec4.mh.ua.edu (Robert Elliott Smith) Date: Thu, 16 Feb 95 13:32:53 -0600 Subject: GA conference registration info Message-ID: <9502161932.AA17284@comec4.mh.ua.edu>

6TH INTERNATIONAL CONFERENCE ON GENETIC ALGORITHMS
July 15-19, 1995
University of Pittsburgh
Pittsburgh, Pennsylvania, USA

CONFERENCE COMMITTEE
Stephen F. Smith, Chair - Carnegie Mellon University
Peter J. Angeline, Finance - Loral Federal Systems
Larry J. Eshelman, Program - Philips Laboratories
Terry Fogarty, Tutorials - University of the West of England, Bristol
Alan C. Schultz, Workshops - Naval Research Laboratory
Alice E. Smith, Local Arrangements - University of Pittsburgh
Robert E.
Smith, Publicity - University of Alabama

The 6th International Conference on Genetic Algorithms (ICGA-95) brings together an international community from academia, government, and industry interested in algorithms suggested by the evolutionary process of natural selection, and will include pre-conference tutorials, invited speakers, and workshops. Topics will include: genetic algorithms and classifier systems, evolution strategies, and other forms of evolutionary computation; machine learning and optimization using these methods; their relations to other learning paradigms (e.g., neural networks and simulated annealing); and mathematical descriptions of their behavior.

The conference host for 1995 will be the University of Pittsburgh, located in Pittsburgh, Pennsylvania. The conference will begin Saturday afternoon, July 15, for those who plan on attending the tutorials. A reception is planned for Saturday evening. The conference meeting will begin Sunday morning, July 16, and end Wednesday afternoon, July 19. The complete conference program and schedule will be sent later to those who register.

TUTORIALS
ICGA-95 will begin with three parallel sessions of tutorials on Saturday. Conference attendees may attend up to three tutorials (one from each session) for a supplementary fee (see registration form).

Tutorial Session I, 11:00 a.m.-12:30 p.m.

I.A Introduction to Genetic Algorithms, Melanie Mitchell - A brief history of Evolutionary Computation. The appeal of evolution. Search spaces and fitness landscapes. Elements of Genetic Algorithms. A Simple GA. GAs versus traditional search methods. Overview of GA applications. Brief case studies of GAs applied to: the Prisoner's Dilemma, Sorting Networks, Neural Networks, and Cellular Automata. How and why do GAs work?

I.B Application of Genetic Algorithms, Lawrence Davis - There are hundreds of real-world applications of genetic algorithms, and a considerable body of engineering expertise has grown up as a result.
This tutorial will describe many of those principles, and present case studies demonstrating their use.

I.C Genetics-Based Machine Learning, Robert Smith - This tutorial discusses rule-based, neural, and fuzzy techniques that utilize GAs for exploration in the context of reinforcement learning control. A rule-based technique, the learning classifier system (LCS), is shown to be analogous to a neural network. The integration of fuzzy logic into the LCS is also discussed. Research issues related to GA-based learning are surveyed. The application potential for genetics-based machine learning is discussed.

Tutorial Session II, 1:30-3:00 p.m.

II.A Basic Genetic Algorithm Theory, Darrell Whitley - Hyperplane Partitions and the Schema Theorem. Binary and Nonbinary Representations; Gray coding, Static hyperplane averages, Dynamic hyperplane averages and Deception, the K-armed bandit analogy and Hyperplane ranking.

II.B Basic Genetic Programming, John Koza - Genetic Programming is an extension of the genetic algorithm in which populations of computer programs are evolved to solve problems. The tutorial explains how crossover is done on program trees and illustrates how the user goes about applying genetic programming to various problems of different types from different fields. Multi-part programs and automatically defined functions are briefly introduced.

II.C Evolutionary Programming, David Fogel - Evolutionary programming, which originated in the early 1960s, has recently been successfully applied to difficult, diverse real-world problems. This tutorial will provide information on the history, theory, and practice of evolutionary programming. Case studies and comparisons will be presented.

Tutorial Session III, 3:30-5:00 p.m.

III.A Advanced Genetic Algorithm Theory, Darrell Whitley - Exact Non-Markov models of simple genetic algorithms. Markov models of simple genetic algorithms. The Schema Theorem and Price's Theorem.
Convergence Proofs, Exact Non-Markov models for permutation based representations.

III.B Advanced Genetic Programming, John Koza - The emphasis is on evolving multi-part programs containing reusable automatically defined functions in order to exploit the regularities of problem environments. ADFs may improve performance, improve parsimony, and provide scalability. Recursive ADFs, iteration-performing branches, various types of memories (including indexed memory and mental models), architecturally diverse populations, and point typing are explained.

III.C Evolution Strategies, Hans-Paul Schwefel and Thomas Baeck - Evolution Strategies in the context of their historical origin for optimization in Berlin in the 1960s. Comparison of the computer versions (1+1) and (10,100) ES with classical optimum-seeking methods for parameter optimization. Formal descriptions of ES. Global convergence conditions. Time efficiency in some simple situations. The role of recombination. Auto-adaptation of internal models of the environment. Multi-criteria optimization. Parallel versions. Short list of application examples.

GETTING TO PITTSBURGH
The Pittsburgh International Airport is served by most of the major airlines. Information on transportation from the airport, and directions to the University of Pittsburgh campus, will be sent along with your conference registration confirmation letter.

LODGING
University Holiday Inn, 100 Lytton Avenue
two blocks from convention site
$92/day (single); $9/day parking charge
pool (indoor), exercise facilities
Reserve by June 18. Call 412-682-6200.

Hampton Inn, 3315 Hamlet Street
12 blocks from convention site
$72/day (single)
free parking, breakfast, and one-way airport transportation
Reserve by July 1. Call 412-681-1000.

Howard Johnson's, 3401 Boulevard of the Allies
12 blocks from convention site
$56/day (single)
free parking and Oakland transportation; pool (outdoor)
Reserve by June 13. Call 412-683-6100.
Sutherland Hall (dorm), University Drive-Pitt campus 10 blocks from convention site (steep hill) $30/day, single no amenities (phone, TV, etc.) shared bathroom Reserve by July 1. Call 412-648-1100. CONFERENCE FEES REGISTRATION FEE Registrations received by June 11 are $250 for participants and $100 for students. Registrations received on or after June 12 and walk-in registrations at the conference will be $295 for participants and $125 for students. Included in the registration fee are entry to all technical sessions, several lunches, coffee breaks, reception Saturday evening, conference materials, and conference proceedings. TUTORIALS There is a separate fee for the Saturday tutorial sessions. Attendees may register for up to three tutorials (one from each tutorial session). The fee for one tutorial is $40 for participants and $15 for students; two tutorials, $75 for participants and $25 for students; three tutorials, $110 for participants and $35 for students. The deadline to register without a late fee is June 11. After this date, participants and students will be assessed a flat $20 late fee, whether they register for one, two, or all three tutorials. CONFERENCE BANQUET Not included in the registration fee is the ticket for the banquet. Participants may purchase banquet tickets for an additional $30. Note - Please purchase your banquet tickets now - you will be unable to buy them upon arrival. GUEST TICKETS Guest tickets for the Saturday evening reception are $10 each; guest tickets for the conference banquet are $30 each for adults and $10 each for children. Note - Please purchase additional tickets now - you will be unable to buy them upon arrival. CANCELLATION/REFUND POLICY For cancellations received up to and including June 1, a full refund will be given minus a $25 handling fee. 
FINANCIAL ASSISTANCE FOR STUDENTS With support from the Naval Center for Applied Research in Artificial Intelligence, Naval Research Laboratory, a limited fund has been set aside to assist students with travel expenses. Students should have their advisor certify their student status and that sufficient funds are not available. Students interested in obtaining such assistance should send a letter before May 22 describing their situation and needs to: Peter J. Angeline, c/o Advanced Technologies Dept, Loral Federal Systems, State Route 17C, Mail Drop 0210, Owego, NY 13827-3994 USA. TO REGISTER Early registration is recommended. You may register by mail, fax, or email using a credit card (MasterCard or VISA). You may also pay by check if registering by mail. Note: Students must also send with their registration a photocopy of their valid university student ID or a letter from a professor. Complete the registration form and return with payment. If more than one registrant from the same institution will be attending, make additional copies of the registration form. Mail ICGA 95 Department of Industrial Engineering University of Pittsburgh 1048 Benedum Hall Pittsburgh, PA 15261 USA Fax Fax the registration form to 412-624-9831 Email Receive email form by contacting: icga at engrng.pitt.edu Up-to-date conference information is available on the World Wide Web (WWW) http://www.aic.nrl.navy.mil/galist/icga95/ CALL FOR ICGA '95 WORKSHOP PROPOSALS ICGA workshop proposals are now being solicited. Workshops tend to range from informal sessions to more formal sessions with presentations and working notes. Each accepted workshop will be supplied with space and an overhead projector. VCRs might be available. If you are interested in organizing a workshop, send a workshop title, short description, proposed format, and name of the organizers to the workshop coordinator by April 15, 1995. Alan C. 
Schultz - schultz at aic.nrl.navy.mil Code 5510, Navy Center for Artificial Intelligence Naval Research Laboratory Washington DC 20375-5337 USA REGISTRATION FORM Prof / Dr / Mr / Ms / Mrs Name ______________________________________________________ Last First MI I would like my name tag to read _____________________________________________ Affiliation/Business ______________________________________________________ Address ______________________________________________________ City ______________________________________________________ State ___________________ Zip ________________________ Country_____________________________________________ Telephone (include area code) Business _______________________________ Home______________________________ FEES (all figures in US dollars) Conference Registration Fee By June 11 ___ participant, $250 ___ student, $100 =$_________ On or after June 12 ___ participant, $295 ___ student, $125 =$_________ July 15 Tutorials Select up to three tutorials, but no more than one tutorial per tutorial session. 
Tutorial Session I: ___I.A Introduction to Genetic Algorithms ___I.B Application of Genetic Algorithms ___I.C Genetics-Based Machine Learning Tutorial Session II: ___II.A Basic Genetic Algorithm Theory ___II.B Basic Genetic Programming ___II.C Evolutionary Programming Tutorial Session III: ___III.A Advanced Genetic Algorithm Theory ___III.B Advanced Genetic Programming ___III.C Evolution Strategies Tutorial Registration Fee By June 11 ___one tutorial: participant, $40 student, $15 ___two tutorials: participant, $75 student, $25 = $_________ ___three tutorials: participant, $110 student, $35 On or after June 12, participants and students add a $20 late fee for tutorials = $_________ Banquet Ticket (not included in the Registration Fee; no tickets may be purchased upon arrival) participants/adult guest #______ ticket(s) @ $30 = $_________ child #______ ticket(s) @ $10 = $_________ Additional Saturday reception tickets (no tickets may be purchased upon arrival) guest #______ ticket(s) @ $10 = $_________ TOTAL (US dollars) $____________ METHOD OF PAYMENT ___ Check (payable to the University of Pittsburgh, US banks only) ___ MasterCard ___ VISA #__________________________________________ Expiration Date ____________________ Signature of card holder ______________________________________________ Note: Students must submit with their registration a photocopy of their valid student ID or a letter from a professor. 
Mail ICGA 95, Department of Industrial Engineering, University of Pittsburgh, 1048 Benedum Hall, Pittsburgh, PA 15261 USA Fax 412-624-9831 Email To receive email form: icga at engrng.pitt.edu World Wide Web (WWW) For up-to-date conference information: http://www.aic.nrl.navy.mil/galist/icga95/ From sylee at eekaist.ac.kr Fri Feb 17 22:38:28 1995 From: sylee at eekaist.ac.kr (Soo-Young Lee) Date: Sat, 18 Feb 1995 12:38:28 +0900 Subject: POST-DOC POSITION Message-ID: <199502180338.MAA03152@eekaist.kaist.ac.kr> POSTDOCTORAL POSITION / GRADUATE STUDENTS Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology A postdoctoral position is available beginning immediately or summer 1995. The position is for one year initially, and may be extended for another year. Graduate students with full scholarships are also welcome, especially from developing countries. We are seeking individuals interested in research on neural net applications and/or VLSI implementation. We especially emphasize a "systems" approach, which combines neural network theory, application-specific knowledge, and hardware implementation technology for much better performance. Although many applications are currently being investigated, speech recognition is the preferred choice at this moment. Experience in digital VLSI will be helpful. Interested parties should send a C.V. and a brief statement of research interests to the address listed below. Present address: Prof. Soo-Young Lee Computation and Neural Systems Laboratory Department of Electrical Engineering Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea (South) Fax: +82-42-869-3410 E-mail: sylee at ee.kaist.ac.kr RESEARCH INTERESTS OF THE GROUP The Korea Advanced Institute of Science and Technology (KAIST) is a unique engineering school, which emphasizes graduate studies through high-quality research. 
All graduate students receive full scholarships, and Ph.D. course students are exempt from military service. The Department of Electrical Engineering is the largest, with 39 professors, 250 Ph.D. course students, 180 Master course students, and 300 undergraduate students. The Computation and Neural Systems Laboratory is led by Prof. Soo-Young Lee, and consists of about 10 Ph.D. course students and about 5 Master course students. The primary focus of this laboratory is to merge neural network theory, VLSI implementation technology, and application-specific knowledge for much better performance in real-world applications. Speech recognition, pattern recognition, and control applications have been emphasized. Neural network models developed include the Multilayer Bidirectional Associative Memory, an extension of BAM to a multilayer architecture; IJNN (Intelligent Judge Neural Networks), for intelligent ruling on disputes among several low-level classifiers; TAG (Training by Adaptive Gain), for large-scale implementation and speaker adaptation; and a Hybrid Hebbian-Backpropagation Algorithm for MLPs, for improved robustness and generalization. The correlation matrix MBAM chip, and both MLP and RBF chips with on-chip learning capability, have been fabricated. From koch at klab.caltech.edu Sun Feb 19 02:39:30 1995 From: koch at klab.caltech.edu (Christof Koch) Date: Sat, 18 Feb 1995 23:39:30 -0800 Subject: Announcement for Neuromorphic Engineering workshop in Telluride in 95 Message-ID: <199502190739.XAA14936@kant.klab.caltech.edu> CALL FOR PARTICIPATION IN A WORKSHOP ON "NEUROMORPHIC ENGINEERING" JUNE 25 - JULY 8, 1995 TELLURIDE, COLORADO Deadline for application is April 24, 1995. Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one two-week workshop that will be held in Telluride, Colorado in 1995. The first Telluride Workshop on Neuromorphic Engineering was held in July, 1994 and was sponsored by the NSF. 
A summary of the 94 workshop and a list of participants is available over MOSAIC: http://www.klab.caltech.edu/~timmer/telluride.html OR http://www.salk.edu/~bryan/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological and engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this two-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. The focus of the first week is on exploring neuromorphic systems through the medium of analog VLSI and will be organized by Rodney Douglas (Oxford) and Misha Mahowald (Oxford). Sessions will cover methods for the design and fabrication of multi-chip neuromorphic systems. 
This framework is suitable both for creating analogs of specific biological systems, which can serve as a modeling environment for biologists, and for engineers creating cooperative circuits based on biological principles. The workshop will provide the community with a common formal language for describing neuromorphic systems. Equipment will be available for participants to evaluate existing neuromorphic chips (including silicon retina, silicon neurons, oculomotor system). The second week of the course will be on vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Sessions will cover issues of sensory-motor integration in the mammalian brain. Special emphasis will be placed on understanding neural algorithms used by the brain which can provide insights into constructing electrical circuits that can accomplish similar tasks. Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. These researchers will also be asked to bring their own demonstrations, classroom experiments, and software for computer models. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe which in turn provides drive signals to the three motors of the head. FORMAT: Time will be divided between lectures, practical labs, and interest group meetings. There will be three lectures in the morning that cover issues that are important to the community in general. In general, one lecture will be neurobiological, one computational, and one on analog VLSI. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. 
Participants will be free to explore and play with whatever they choose in the afternoon. Participants are encouraged to bring their own material to share with others. After dinner, participants will get together more informally to hear lectures and demonstrations. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 4 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains. The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems-level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrators to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware problems. We will have a network of SUN workstations running UNIX and PCs running Windows and Linux. Up to $500 will be reimbursed for domestic travel and all housing expenses will be provided. Participants are expected to pay for food and incidental expenses and are expected to stay for the duration of this two-week workshop. PARTIAL LIST OF INVITED LECTURERS: Richard Andersen, Caltech. Chris Atkeson, Georgia Tech. Dana Ballard, Rochester. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. 
Steve Deiss, Applied NeuroDynamics, San Diego. Chris Diorio, Caltech. Rodney Douglas, Oxford and Zurich. John Elias, Delaware University. Mary Hayhoe, Rochester. Christof Koch, Caltech. Steve Lisberger, UC San Francisco: Oculomotor System. Shih-Chii Liu, Caltech and Rockwell. Jack Loomis, UC Santa Barbara. Jonathan Mills, Indiana University. Misha Mahowald, Oxford and Zurich. Mark Tilden, Los Alamos: Multi-legged Robots. Terry Sejnowski, Salk Institute and UC San Diego. Mona Zaghloul, George Washington University. HOW TO APPLY: The deadline for receipt of applications is April 24, 1995. Applicants should be at the level of graduate students or above (i.e. postdoctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Application should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1995. From sutton at gte.com Sun Feb 19 13:03:59 1995 From: sutton at gte.com (Rich Sutton) Date: Sun, 19 Feb 1995 13:03:59 -0500 Subject: RL papers available by ftp Message-ID: <199502191756.AA21721@ns.gte.com> The following previously published papers related to reinforcement learning are available online for the first time: Sutton, R.S. (1988) "Learning to predict by the methods of temporal differences," Machine Learning, 3, 1988, No. 1, pp. 9-44. Sutton, R.S. 
(1990) "Integrated architectures for learning, planning, and reacting based on approximating dynamic programming," Proceedings of the Seventh International Conference on Machine Learning, pp. 216-224, Morgan Kaufmann. Sutton, R.S. (1991a) "Planning by incremental dynamic programming," Proceedings of the Eighth International Workshop on Machine Learning, pp. 353-357, Morgan Kaufmann. Sutton, R.S. (1991b) "Dyna, an integrated architecture for learning, planning and reacting," Working Notes of the 1991 AAAI Spring Symposium on Integrated Intelligent Architectures and SIGART Bulletin 2, pp. 160-163. Sutton, R.S. (1992a) "Adapting Bias by Gradient Descent: An Incremental Version of Delta-Bar-Delta," Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 171-176, MIT Press. Sutton, R.S., Whitehead, S.D. (1993) "Online learning with random representations." Proceedings of the Tenth Annual Conference on Machine Learning, pp. 314-321, Morgan Kaufmann. These papers can be obtained by ftp from the small archive at ftp.gte.com/reinforcement-learning. See the file CATALOG for filenames and abstracts. From dclouse at cs.ucsd.edu Mon Feb 20 17:32:47 1995 From: dclouse at cs.ucsd.edu (Dan Clouse) Date: Mon, 20 Feb 1995 14:32:47 -0800 (PST) Subject: Language Induction Tech Report Available Message-ID: <9502202232.AA09211@roland> FTP-host: cs.ucsd.edu FTP-filename: /pub/tech-reports/clouse.nnfir.ps.Z The file clouse.nnfir.ps.Z is now available for copying from the University of California at San Diego, Computer Science and Engineering Department ftp server. (20 pages, compressed file size 164K) TITLE: Learning Large DeBruijn Automata with Feed-Forward Neural Networks AUTHORS: Daniel S. Clouse, UCSD CSE Dept. C. Lee Giles, NEC Research Institute, Princeton, NJ. Bill G. Horne, NEC Research Institute, Princeton, NJ. Garrison W. Cottrell, UCSD CSE Dept. 
ABSTRACT: In this paper we argue that a class of finite state machines (FSMs) which is representable by the NNFIR (Neural Network Finite Impulse Response) architecture is equivalent to the definite memory sequential machines (DMMs) which are implementations of deBruijn automata. We support this claim by drawing parallels between circuit topologies of sequential machines used to implement FSMs and the architecture of the NNFIR. Further support is provided by simulation results that show that an NNFIR architecture is able to learn perfectly a large definite memory machine (2048 states) with very few training examples. We also discuss the effects that variations in the NNFIR architecture have on the class of problems easily learnable by the network. From maggini at mcculloch.ing.unifi.it Tue Feb 21 07:03:49 1995 From: maggini at mcculloch.ing.unifi.it (Marco Maggini) Date: Tue, 21 Feb 95 13:03:49 +0100 Subject: Call for papers: Neurocomputing Journal Message-ID: <9502211203.AA12867@mcculloch.ing.unifi.it> ========================================================== CALL FOR PAPERS Special Issue on Recurrent Networks for Sequence Processing in the Neurocomputing Journal (Elsevier) M. Gori, M. Mozer, A.C. Tsoi, and R.L. Watrous (Eds) ========================================================== Recurrent neural networks are complex parametric dynamic systems that can exhibit a wide range of different behaviors. The focus of this issue will be on recurrent networks that turn out to be useful for processing any kind of temporal sequence. To this purpose, the network dynamics must be exploited for capturing temporal dependencies, rather than for relaxing to steady-state fixed points as in Hopfield networks and Boltzmann machines. 
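The contrast the call draws, between exploiting network dynamics to capture temporal dependencies and merely relaxing to a fixed point, can be made concrete with a minimal Elman-style recurrent forward pass. This is only an illustrative sketch (layer sizes and random weights are hypothetical, and no training algorithm is shown), not any particular architecture from the special issue:

```python
import numpy as np

# Hypothetical sizes and random weights, chosen purely for illustration.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 3, 5, 2
W_xh = rng.normal(scale=0.5, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hidden, n_hidden))  # hidden -> hidden (the recurrence)
W_hy = rng.normal(scale=0.5, size=(n_out, n_hidden))     # hidden -> output

def run_sequence(xs):
    """Forward pass over a sequence. The W_hh @ h term carries state across
    time steps, so the output at step t depends on the whole input history,
    not just on the current input x_t."""
    h = np.zeros(n_hidden)
    ys = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # new state mixes input with past state
        ys.append(W_hy @ h)
    return np.array(ys)

seq = rng.normal(size=(4, n_in))  # a length-4 input sequence
out = run_sequence(seq)           # one n_out-dimensional output per time step
print(out.shape)
```

Because the hidden state is fed back through W_hh, presenting the same inputs in a different order generally yields different outputs, which is exactly the order sensitivity that a purely feed-forward (relaxation-free) mapping lacks.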
Possible topics for papers submitted to the special issue include, but are not limited to: Short- and long-term memory architectures: theoretical analyses and empirical studies comparing a variety of architectures; Learning algorithms (including constructive and pruning schemes) and related theoretical issues (e.g.: learning long-term dependencies, local minima); Integration of prior knowledge and learning from examples; Problem of temporal chunking and learning embedded sequences; Application to recognition of time-dependent signals (e.g. various levels of speech tasks, biological signals both discrete (DNA, protein sequences) and continuous (ECG, EEG)); Application to time-series prediction; Application to modeling and control of dynamic systems (e.g. mobile robot guidance); Application to inductive inference of grammars and to natural language processing. Prospective authors should submit six copies of a manuscript to one of the guest editors by March 30, 1995. ===================================== Marco Gori Dipartimento di Sistemi e Informatica Universita' di Firenze Via S. Marta, 3 50139 Firenze (Italy) voice: +39 (55) 479-6265 fax: +39 (55) 479-6363 e-mail: marco at ingfi1.ing.unifi.it Michael C. Mozer Department of Computer Science University of Colorado Boulder, CO 80309-0430 (USA) voice: +1 (303) 492-4103 fax: +1 (303) 492-2844 e-mail: mozer at neuron.cs.colorado.edu Ah Chung Tsoi Department of Electrical and Computer Engineering University of Queensland Brisbane Qld 4072 Australia voice: +61 (7) 365-3950 fax: +61 (7) 365-4999 e-mail: act at s1.elec.uq.oz.au Raymond L. Watrous Learning System Siemens Corporate Research 755 College Road East Princeton, NJ 08540 voice: +1 (609) 734-6596 fax: +1 (609) 734-6565 e-mail: watrous at learning.scr.siemens.com ================================================= From lpratt at franklinite.Mines.Colorado.EDU Tue Feb 21 03:35:47 1995 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. 
Pratt) Date: Tue, 21 Feb 95 09:35:47 +0100 Subject: Neural network transfer web page Message-ID: <9502210835.AA01862@franklinite.Mines.Colorado.EDU> Hi all! Per a recent discussion on connectionists, I have started a web page for pointers to neural network transfer. Right now, this page just contains the text of that discussion, along with pointers to all of my papers on transfer. I would be delighted to add pointers to others' papers on transfer to this web page. Just send me the ftp or www address, along with relevant bibliographic information, thanks! I think that this can become an important resource for those of us who are interested in this important subject area. The web page is: http://vita.mines.colorado.edu:3857/0/lpratt/transfer.html --Lori Pratt From french at cogsci.indiana.edu Tue Feb 21 12:46:09 1995 From: french at cogsci.indiana.edu (Bob French) Date: Tue, 21 Feb 95 12:46:09 EST Subject: Sequential learning and interactive tandem networks: paper available Message-ID: FTP-host: cogsci.indiana.edu FTP-filename: /pub/french.tandem-stm-ltm.ps.Z Total no. of pages: 6 The following paper is now available by anonymous ftp from the CRCC archive at Indiana University, Bloomington. Interactive Tandem Networks and the Problem of Sequential Learning Robert M. French CRCC, Indiana University Bloomington, IN 47408 french at cogsci.indiana.edu Abstract This paper presents a novel connectionist architecture to handle the "sensitivity-stability" problem and, in particular, an extreme manifestation of the problem, catastrophic interference. This architecture, called an interactive tandem-network (ITN) architecture, consists of two continually interacting networks, one -- the LTM network -- dynamically storing "prototypes" of the patterns learned, the other -- the STM network -- being responsible for "short-term" learning of new patterns. 
Prototypes stored in the LTM network influence hidden-layer representations in the STM network and, conversely, newly learned representations in the STM network gradually modify the more stable LTM prototypes. As prototypes are learned by the LTM network, they are dynamically constrained to maximize mutual orthogonality. This system of tandem networks performs particularly well on the problem of catastrophic interference. It also produces "long-term" representations that are stable in the face of new input and "short-term" representations that remain sensitive to new input. Justification for this type of architecture is similar to that given recently by McClelland, McNaughton, & O'Reilly (1994) in arguing for the necessary complementarity of the hippocampal (short-term memory) and neocortical (long-term memory) systems. * * * This paper has been submitted to The 1995 Cognitive Science Society Conference. A longer version of this paper is in preparation and, consequently, comments are welcome. I am currently visiting at the University of Wisconsin. Any snail-mail should be sent there. Robert M. French Department of Psychology University of Wisconsin Madison, Wisconsin 53706 Tel: (608) 243-8026 FAX: (608) 262-4029 email: french at cogsci.indiana.edu french at merlin.psych.wisc.edu From marwan at sedal.su.oz.au Tue Feb 21 20:21:51 1995 From: marwan at sedal.su.oz.au (Marwan A. Jabri, Sydney Univ. Elec. 
Eng., Tel: +61-2 692 2240) Date: Wed, 22 Feb 1995 12:21:51 +1100 Subject: Jobs Message-ID: <199502220121.MAA17710@sedal.sedal.su.OZ.AU> J O B S -- J O B S -- J O B S -- J O B S -- J O B S -- J O B S -- J O B S (* Note, 2 ads in this message *) Systems Engineering and Design Automation Laboratory Department of Electrical Engineering The University of Sydney Research Engineer Optical Character Recognition The appointee will join a team working on a project funded by the Australian Research Council and aiming at the development of artificial neural network techniques for low resolution character segmentation and recognition. Applicants with a bachelor or higher degree in electrical engineering or computer science, with experience in pattern recognition, preferably in the area of neural-computing-based OCR, are invited to apply. The appointment will be for an initial period of one year and renewable for up to three years, subject to satisfactory progress and funding. If applicable, the appointee can enrol for a higher degree in an area of the project. Salary: (Level A academic) 30k-40k depending on experience. Duty statement The research engineer will: - perform literature search - design and simulate neural computing architectures for OCR - design and implement software for OCR and interfacing to document scanning equipment - develop and integrate OCR software - work independently REPORTING: Reports to Dr. M. Jabri QUALIFICATIONS: B.E. or B. Computer Science or higher. SKILLS: - design, simulation and tuning of artificial neural networks - C and/or C++ programming - knowledge of Unix - design, implementation and testing of neural computing software - communication with others - writing reports and technical papers. 
- work independently ************************************************************************ Systems Engineering and Design Automation Laboratory Department of Electrical Engineering The University of Sydney Electronics Research Engineer Adiabatic Computing for Implantable Devices The appointee will be part of a team working on a project in collaboration with a world-leading company in the area of implantable devices. The project is funded by the Australian Research Council and aims at investigating circuits and architectures for adiabatic computing in the context of implantable devices. Applicants with a bachelor or higher degree in electronics or computer engineering, with experience in full-custom integrated circuits (preferably analogue), are invited to apply. The appointment will be for an initial period of one year and renewable for up to three years, subject to satisfactory progress and funding. If applicable, the appointee can enrol for a higher degree in an area of the project. Salary: (Level A academic) 30k-40k depending on experience. Duty statement The engineer will: - perform literature search - design and implement architecture and circuits for adiabatic computing. - design testing system interface to test circuits - design and implement software for simulating the circuits - work independently REPORTING: Reports to Dr. M. Jabri QUALIFICATIONS: B.E. or higher. SKILLS: - programming in C and/or C++ - knowledge of Unix - design, implementation and testing of electronic and microelectronic circuits - design and implementation of software programs to simulate logic and analogue circuits. - use of computer aided design tools. - communication with others - writing reports and technical papers. 
- work independently How to apply: o Send CV with covering letter specifying the title of the position to Marwan Jabri (address below) o should include a minimum of 2 referees who can comment on professional activities of the applicant o include in CV comments with respect to desirable expertise and skills Marwan Jabri Sydney University Electrical Engineering NSW 2006 Australia Tel: (+61-2) 351-2240 Fax: (+61-2) 660-1228 email: marwan at sedal.su.oz.au From krogh at nordita.dk Wed Feb 22 10:50:51 1995 From: krogh at nordita.dk (Anders Krogh) Date: Wed, 22 Feb 95 16:50:51 +0100 Subject: Paper available on neural network ensembles Message-ID: <9502221550.AA17959@norsci0.nordita.dk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/krogh.ensemble.ps.Z The file krogh.ensemble.ps.Z can now be copied from Neuroprose. The paper is 8 pages long. Hardcopies are available. Neural Network Ensembles, Cross Validation, and Active Learning by Anders Krogh and Jesper Vedelsby Abstract: Learning of continuous valued functions using neural network ensembles (committees) can give improved accuracy, reliable estimation of the generalization error, and active learning. The ambiguity is defined as the variation of the output of ensemble members averaged over unlabeled data, so it quantifies the disagreement among the networks. It is discussed how to use the ambiguity in combination with cross-validation to give a reliable estimate of the ensemble generalization error, and how this type of ensemble cross-validation can sometimes improve performance. It is shown how to estimate the optimal weights of the ensemble members using unlabeled data. By a generalization of query by committee, it is finally shown how the ambiguity can be used to select new training data to be labeled in an active learning scheme. The paper will appear in G. Tesauro, D. S. Touretzky and T. K. 
Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995. ________________________________________ Anders Krogh Nordita Blegdamsvej 17, 2100 Copenhagen, Denmark email: krogh at nordita.dk Phone: +45 3532 5503 Fax: +45 3138 9157 ________________________________________ From french at cogsci.indiana.edu Wed Feb 22 09:57:41 1995 From: french at cogsci.indiana.edu (Bob French) Date: Wed, 22 Feb 95 09:57:41 EST Subject: Problems printing French's sequential learning paper Message-ID: A number of people have had trouble printing my file "Interactive Tandem Networks and the Problem of Sequential Learning" which was announced yesterday. The problem, for those who have already retrieved the file, is an extraneous ^D at the beginning and end of the file. (Thanks Guszti Bartfai, Victoria Univ. of Wellington, for pointing out the problem and the fix.) Unfortunately, the file was test-printed by two friends who printed it through Ghostview (it views and prints just fine through Ghostview). So, if you find reading postscript dumps sort of tedious, either remove the ^D from the very beginning and end of the ps file or retrieve the file again from cogsci.indiana.edu (see below). I have fixed the problem. Sorry for any inconvenience this might have caused people. For those who wish to re-retrieve the file: Paper name: Interactive Tandem Networks and the Problem of Sequential Learning anonymous ftp site: cogsci.indiana.edu file name: /pub/french.tandem-stm-ltm.ps.Z Bob French =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= Robert M.
French Department of Psychology University of Wisconsin Madison, WI 53706 Tel: (608) 262-5207 (608) 243-8026 email: french at merlin.psych.wisc.edu french at cogsci.indiana.edu =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From degaris at hip.atr.co.jp Wed Feb 22 18:28:34 1995 From: degaris at hip.atr.co.jp (Hugo de Garis) Date: Wed, 22 Feb 95 18:28:34 JST Subject: A 50,000 neuron artificial brain Message-ID: <9502220928.AA05644@gauss> Dear Connectionists, A 50,000 Neuron Artificial Brain 50,000 is roughly the number of artificial neurons I can grow/evolve based on cellular automata in the 32 mega CA cell "CAM8" cellular automata machine from MIT that I have on my desk. Using hand-coded cellular automata rules to grow neurons from seeder CA cells, one can then feed genetic algorithm "chromosome" growth strings into dendrites and axons to grow random neural circuits, whose fitness at controlling some process is measured after the circuit is grown and used to transmit CA based neural signals. Using these ideas and a future "superCAM", our group hopes to grow/evolve an artificial brain of a billion neurons by 2001. This is quite feasible because RAM is cheap (even gigabytes): both the states of the CA cells and the CA state transition rules can be stored in gigabytes of RAM. The bottleneck is the CA processor. MIT's CAM8 can update 200 million 16-bit CA cells per second. Our super CAM should be thousands of times faster. Reference: "An Artificial Brain: ATR's CAM-Brain Project Aims to Build/Evolve an Artificial Brain with a Million Neural Net Modules inside a Trillion Cell Cellular Automata Machine", Hugo de Garis, New Generation Computing Journal, Vol. 12, No. 2, Ohmsha & Springer Verlag, 1994. For more information concerning our CAM-Brain Project, please contact Dr.
Hugo de Garis, Brain Builder Group, Evolutionary Systems Dept, ATR Human Information Processing Research Labs, 2-2 Hikaridai, Seika-cho, Soraku-gun, Kansai Science City, Kyoto-fu, 619-02, Japan. tel. + 81 (0)7749 5 1079, fax. + 81 (0)7749 5 1008, email. degaris at hip.atr.co.jp Cheers, Hugo de Garis. From ken at phy.ucsf.edu Wed Feb 22 12:55:07 1995 From: ken at phy.ucsf.edu (Ken Miller) Date: Wed, 22 Feb 1995 09:55:07 -0800 Subject: Paper available: "RFs and Maps in the Visual Cortex" Message-ID: <9502221755.AA01781@coltrane.ucsf.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/miller.rfs_and_maps.ps.Z The file miller.rfs_and_maps.ps.Z is now available for copying from the Neuroprose repository. This is a review paper: Receptive Fields and Maps in the Visual Cortex: Models of Ocular Dominance and Orientation Columns (26 pages) Kenneth D. Miller Dept. of Physiology University of California, San Francisco To appear in: Models of Neural Networks III, E. Domany, J.L. van Hemmen, and K. Schulten, Eds. (Springer-Verlag, NY), 1995. ABSTRACT: The formation of ocular dominance and orientation columns in the mammalian visual cortex is briefly reviewed. Correlation-based models for their development are then discussed, beginning with the models of Von der Malsburg. For the case of semi-linear models, model behavior is well understood: correlations determine receptive field structure, intracortical interactions determine projective field structure, and the ``knitting together'' of the two determines the cortical map. This provides a basis for simple but powerful models of ocular dominance and orientation column formation: ocular dominance columns form through a correlation-based competition between left-eye and right-eye inputs, while orientation columns can form through a competition between ON-center and OFF-center inputs. 
These models account well for receptive field structure, but are not completely adequate to account for the details of cortical map structure. Alternative approaches to map structure, including the self-organizing feature map of Kohonen, are discussed. Finally, theories of the computational function of correlation-based and self-organizing rules are discussed. Sorry, hard copies are NOT available. Ken Kenneth D. Miller Dept. of Physiology UCSF 513 Parnassus San Francisco, CA 94143-0444 internet: ken at phy.ucsf.edu From pkso at castle.ed.ac.uk Wed Feb 22 14:26:54 1995 From: pkso at castle.ed.ac.uk (P Sollich) Date: Wed, 22 Feb 95 19:26:54 GMT Subject: Two NIPS 7 preprints: Query learning, relevance of thermodynamic limit Message-ID: <9502221926.aa03869@uk.ac.ed.castle> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/sollich.imperf_learn.ps.Z FTP-filename: /pub/neuroprose/sollich.linear_perc.ps.Z Hi all, the following two papers are now available from the neuroprose archive (8 pages each, to appear in: Advances in Neural Information Processing Systems 7, Tesauro G, Touretzky D S and Leen T K (eds.), MIT Press, Cambridge, MA, 1995). Sorry, hardcopies are not available. Any feedback is much appreciated! Regards, Peter Sollich ---------------------------------------------------------------------------- Peter Sollich Dept. of Physics University of Edinburgh e-mail: P.Sollich at ed.ac.uk Kings Buildings Tel. +44-131-650 5236 Mayfield Road Edinburgh EH9 3JZ, U.K. ---------------------------------------------------------------------------- Learning from queries for maximum information gain in imperfectly learnable problems Peter Sollich, David Saad Department of Physics, University of Edinburgh Edinburgh EH9 3JZ, U.K. ABSTRACT: In supervised learning, learning from queries rather than from random examples can improve generalization performance significantly. 
We study the performance of query learning for problems where the student cannot learn the teacher perfectly, which occur frequently in practice. As a prototypical scenario of this kind, we consider a linear perceptron student learning a binary perceptron teacher. Two kinds of queries for maximum information gain, i.e., minimum entropy, are investigated: Minimum {\em student space} entropy (MSSE) queries, which are appropriate if the teacher space is unknown, and minimum {\em teacher space} entropy (MTSE) queries, which can be used if the teacher space is assumed to be known, but a student of a simpler form has deliberately been chosen. We find that for MSSE queries, the structure of the student space determines the efficacy of query learning, whereas MTSE queries lead to a higher generalization error than random examples, due to a lack of feedback about the progress of the student in the way queries are selected. ---------------------------------------------------------------------------- Learning in large linear perceptrons and why the thermodynamic limit is relevant to the real world Peter Sollich Department of Physics, University of Edinburgh Edinburgh EH9 3JZ, U.K. ABSTRACT: We present a new method for obtaining the response function ${\cal G}$ and its average $G$ from which most of the properties of learning and generalization in linear perceptrons can be derived. We first rederive the known results for the `thermodynamic limit' of infinite perceptron size $N$ and show explicitly that ${\cal G}$ is self-averaging in this limit. We then discuss extensions of our method to more general learning scenarios with anisotropic teacher space priors, input distributions, and weight decay terms. Finally, we use our method to calculate the finite $N$ corrections of order $1/N$ to $G$ and discuss the corresponding finite size effects on generalization and learning dynamics. 
An important spin-off is the observation that results obtained in the thermodynamic limit are often directly relevant to systems of fairly modest, `real-world' sizes. ---------------------------------------------------------------------------- From krogh at nordita.dk Thu Feb 23 03:00:15 1995 From: krogh at nordita.dk (Anders Krogh) Date: Thu, 23 Feb 95 09:00:15 +0100 Subject: Correction: Paper available on neural network ensembles Message-ID: <9502230800.AA01778@norsci0.nordita.dk> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/krogh.ensemble.ps.Z In my previous mail about our paper in Neuroprose it should have said: Hardcopies are *NOT* available. Sorry. - Anders From greiner at scr.siemens.com Thu Feb 23 13:41:37 1995 From: greiner at scr.siemens.com (Russell Greiner) Date: Thu, 23 Feb 1995 13:41:37 -0500 Subject: CFP: "Relevance" issue of "Artificial Intelligence" Message-ID: <199502231841.NAA12556@eagle.scr.siemens.com> ************************************************************** ***** CALL FOR PAPERS (please post) ****** ************************************************************** Special Issue on RELEVANCE Journal: ARTIFICIAL INTELLIGENCE Guest Editors: Russell Greiner, Devika Subramanian, Judea Pearl With too little information, reasoning and learning systems cannot work effectively. Surprisingly, too much information can also cause the performance of these systems to degrade, in terms of both accuracy and efficiency. It is therefore important to determine what information must be preserved, i.e., what information is "relevant". There has been a recent flurry of interest in explicitly reasoning about relevance in a number of different disciplines, including the AI fields of knowledge representation, probabilistic reasoning, machine learning and neural computation, as well as communities that range from statistics and operations research to database and information retrieval to cognitive science.
Members of these diverse communities met at the 1994 AAAI Fall Symposium on Relevance, to seek a better understanding of the various senses of the term "relevance", with a focus on finding techniques for improving the performance of embedded agents by ignoring or de-emphasizing irrelevant and superfluous information. Such techniques will clearly be of increasing importance as knowledge bases and learning systems become more comprehensive to accommodate real-world applications. To help consolidate leading research on relevance, the "Artificial Intelligence" journal is devoting a special issue to this topic. We are now seeking papers on (but not restricted to) the following topics: [Representing and reasoning with relevance:] reasoning about the relevance of distinctions to speed up computation, relevance reasoning in real-world KR tasks including design, diagnosis and common-sense reasoning, use of relevant causal information for planning, theories of discrete approximations. [Learning in the presence of irrelevant information:] removing irrelevant attributes and/or irrelevant training examples, to make feasible induction from very large datasets; methods for learning action policies for embedded agents in large state spaces by explicit construction of approximations and abstractions. [Relevance and probabilistic reasoning:] simplifying/approximating Bayesian nets (both topology and values) to permit real-time reasoning; axiomatic bases for constructing abstractions and approximations of Bayesian nets and other probabilistic reasoning models. [Relevance in neural computational models:] methods for evolving computations that ignore aspects of the environment to make certain classes of decisions, automated design of topologies of neural models guided by relevance reasoning based on task class.
[Applications of relevance reasoning:] Applications that require explicit reasoning about relevance in the context of IVHS, exploring and understanding large information repositories, etc. We are especially interested in papers that have strong theoretical analyses complemented by experimental evidence from non-trivial applications. Authors are invited to submit manuscripts conforming to the AIJ submission requirements by 11 Sept 1995 to Russell Greiner or Devika Subramanian Siemens Corporate Research Department of Computer Science 755 College Road East 5141 Upson Hall, Cornell University Princeton, NJ 08540-6632 Ithaca, New York 14853 (609) 734-3627 (607) 255-9189 Papers will be subject to standard peer review. The first round of reviews will be completed and decisions mailed by 11 December 1995. The authors of accepted and conditionally accepted manuscripts will be required to send revised versions by 1 March 1996. The special issue is tentatively scheduled to appear sometime in 1996. We also plan to publish this issue as a book. Finally, to help us select appropriate reviewers in advance, authors should email us a title, set of keywords and a short abstract, to arrive by 4 September. To recap the significant dates: 4/Sep/95: Emailed titles, keywords and abstracts due 11/Sep/95: Manuscripts due 11/Dec/95: First round decisions 1/Mar/96: Revised manuscripts due ??/96: Special issue appears (tentative) From thimm at idiap.ch Thu Feb 23 08:06:54 1995 From: thimm at idiap.ch (Georg Thimm) Date: Thu, 23 Feb 95 14:06:54 +0100 Subject: Reannouncement: NN Event Announcements as WWW page & by FTP Message-ID: <9502231306.AA09699@idiap.ch> WWW page and FTP Server for Announcements of Conferences, Workshops and Other Events on Neural Networks ------------------------------------- This WWW page allows you to enter and look up announcements and call-for-papers for conferences, workshops, talks, and other events on neural networks.
The three event lists contain 65 forthcoming events and can be accessed via the IDIAP neural network home page with the URL: http://www.idiap.ch/html/idiap-networks.html The lists are now also available as formatted ASCII text. The files /html/NN-events/{conferences,workshops,other}.txt.Z are obtainable from the HTML menus or from our FTP server ftp.idiap.ch. Instructions for downloading these files are given below. The entries are grouped into: - Multi-day events for a larger number of people (Conferences, Congresses, large Workshops, etc.), - Multi-day events for a small audience (small Workshops, Summer Schools, etc.), and - One day events (Seminars, Talks, Presentations, etc.). The entries are ordered chronologically and presented in a standardized format for fast and easy lookup. The entry fields are: - the date and place of the event, - the title of the event, - a hyperlink to more information about the event, - a contact address (surface mail address, email address, telephone number, and fax number), - deadlines, and - a field for comments. -------------------------------------------------------------- Example FTP session: UNIX> ftp ftp.idiap.ch Name (ftp.idiap.ch:thimm): anonymous Password: [Your e-mail address] ftp> cd html ftp> cd NN-events ftp> bin ftp> mget conferences.txt.Z other.txt.Z workshops.txt.Z mget conferences.txt.Z? y . . ftp> quit UNIX> zcat conferences.txt.Z | lpr -------------------------------------------------------------- I hope you find this service helpful. I am looking forward to comments and suggestions.
Georg Thimm -------------------------------------------------------------- Georg Thimm E-mail: thimm at idiap.ch Institut Dalle Molle d'Intelligence Fax: ++41 26 22 78 18 Artificielle Perceptive (IDIAP) Tel.: ++41 26 22 76 64 Case Postale 592 WWW: http://www.idiap.ch 1920 Martigny / Suisse -------------------------------------------------------------- From office at swan.lanl.gov Thu Feb 23 16:54:07 1995 From: office at swan.lanl.gov (Office Account) Date: Thu, 23 Feb 1995 14:54:07 -0700 Subject: MaxEnt95 Message-ID: <199502232154.OAA07023@goshawk.lanl.gov> ANNOUNCEMENT AND CALL FOR PAPERS The Fifteenth International Workshop on Maximum Entropy and Bayesian Methods 30 July - 4 August 1995 St. John's College Santa Fe, New Mexico, USA The Fifteenth International Workshop on Maximum Entropy and Bayesian Methods will be held at St. John's College in Santa Fe, New Mexico, USA. This Workshop is being jointly sponsored by the Center for Nonlinear Studies and the Radiographic Diagnostics Program, both at Los Alamos National Laboratory, and by the Santa Fe Institute (SFI). SCOPE: Traditional themes of the Workshop have been the application of the maximum entropy principle and Bayesian methods for statistical inference to diverse areas of scientific research. Practical numerical algorithms and principles for solving ill-posed inverse problems, image reconstruction and model building are emphasized. The Workshop also addresses common foundations for statistical physics, statistical inference, and information theory. The Workshop will begin on 31 July with a half-day tutorial on Bayesian methods and the principle of maximum entropy, which will be presented by Wray Buntine and Peter Cheeseman of NASA. The Workshop will also include several reviews of hot topics of broad interest, such as Markov Chain Monte Carlo methods for sampling posteriors, deformable geometric models, and the relation between information theory and physics. 
Specially organized sessions will highlight other topics, such as Bayesian time-series analysis, entropies in dynamical systems, and data analysis for physics simulations. The Workshop will be held in the beautiful setting of St. John's College, nestled in the foothills of the Sangre de Cristo Mountains, two miles from the Santa Fe Plaza. St. John's is a small liberal arts college, which emphasizes a classical curriculum. Social events include a reception at the Santa Fe Institute and an outing to the Science Museum at the Los Alamos National Laboratory. The timing of the Workshop coincides with the peak of the Santa Fe tourist and opera seasons. CALL FOR CONTRIBUTED PAPERS: Contributed papers are requested on the innovative use of Bayesian methods or the maximum entropy principle. The deadline for receipt of abstracts is April 14, 1995. They should be written in LaTeX or ASCII and limited to one page of about 400 words. Please include a curriculum vitae or a short biographical sketch. Specify preference for an oral or poster presentation. The abstracts will be made available at the Workshop. Manuscripts of accepted papers will be due at the Workshop, in camera-ready form, on diskette and as one hard copy, and they must be prepared in LaTeX using a style file that will be made available to authors by e-mail or by post. REGISTRATION: To receive registration materials and detailed information about this Workshop, contact Ms. Barbara Rhodes at the address below. Space limitations at St. John's College will restrict the number of attendees to about 125. Early registration will help to assure a place at the meeting. SCHOLARSHIPS: Limited financial support will be available to assist graduate students and postdoctoral fellows who wish to attend the workshop. Requests for support may be submitted along with registration materials.
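[Editorial aside, not part of the original announcement: the maximum entropy principle that this workshop centers on can be illustrated with Jaynes' classic dice problem. Among all distributions over the faces of a die with a prescribed mean, the entropy-maximizing one is exponential in the face value. The sketch below is a minimal illustration; the function name and the bisection solver are illustrative choices, not anything from the workshop materials.]

```python
import math

def maxent_die(target_mean, faces=6, tol=1e-12):
    """Maximum-entropy distribution over die faces 1..faces with a fixed mean.

    The constrained maximization of entropy yields p_i proportional to
    exp(lam * i); we find lam by bisection so that the resulting mean
    matches target_mean (the mean is monotonically increasing in lam).
    """
    def mean_for(lam):
        w = [math.exp(lam * i) for i in range(1, faces + 1)]
        z = sum(w)
        return sum(i * wi for i, wi in zip(range(1, faces + 1), w)) / z

    lo, hi = -50.0, 50.0  # brackets means arbitrarily close to 1 and to faces
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * i) for i in range(1, faces + 1)]
    z = sum(w)
    return [wi / z for wi in w]
```

For a target mean of 3.5 the sketch recovers the uniform distribution, as expected for an unbiased die; a target mean above 3.5 tilts the probabilities exponentially toward the higher faces.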
SCIENTIFIC ORGANIZERS: Kenneth Hanson and Richard Silver, Los Alamos National Laboratory Send abstracts and requests for registration materials and other information to the administrative organizer: Ms. Barbara Rhodes Tel: (505) 667-1444 CNLS, MS-B258 Fax: (505) 665-2659 Los Alamos National Laboratory E-mail: maxent at cnls.lanl.gov Los Alamos, NM 87545 USA From piche at mcc.com Thu Feb 23 19:48:20 1995 From: piche at mcc.com (Stephen Piche') Date: Thu, 23 Feb 95 18:48:20 CST Subject: CIFEr Conference Message-ID: <9502240048.AA25192@muon.mcc.com> IEEE/IAFE 1995 Conference on Computational Intelligence for Financial Engineering (CIFEr) April 9-11, 1995 New York City, Crowne Plaza Manhattan Sponsored by: The IEEE Neural Networks Council The International Association of Financial Engineers The IEEE Computer Society The IEEE/IAFE CIFEr Conference is the first major collaboration between the professional engineering and financial communities, and will be the leading forum for new technologies and applications in the intersection of computational intelligence and financial engineering. CONTENTS OF THIS POSTING: > Special Sessions > Technical Program > Tutorials > Exhibit Information > Registration > Early Bird Registration Costs > For More Info and Registration Material SPECIAL SESSIONS ^^^^^^^ ^^^^^^^^ 1. "Electronic Trading in the Next Millennium" Panel Chair: Raymond L. Killian, President, ITG Inc. 2. "Practitioner's Panel -- Investment Strategies using Non- Parametric Analyses" Panel Chair: Steve Armentrout, QED Investments 3. "First International Nonlinear Financial Forecasting Competition: Trading Systems Competition Methodologies and Design" Sponsored by Neurove$t Journal Panel Chairs: Manoel F. Tenorio, Randall Caldwell TECHNICAL PROGRAM ^^^^^^^^^ ^^^^^^^ The CIFEr Technical Program will be held Monday and Tuesday, April 10-11, 1995 at the Crowne Plaza Manhattan.
Registration for the Technical Program includes the Keynote Speech by Robert Merton, Special Sessions, Oral and Poster Presentations, entrance to the Exhibits Hall, invitation to the Sunday, April 9, 5:15 P.M. CIFEr Reception, and the CIFEr Proceedings. TUTORIALS ^^^^^^^^^ There are two tracks for the tutorials: the Engineering Tutorial Track and the Finance Tutorial Track. Tutorials will be held from 9:00 A.M. to 5:00 P.M. on Sunday, April 9, 1995 at the Crowne Plaza Manhattan. Early registration ends March 31, 1995. E1 - AN INTRODUCTION TO GENETIC ALGORITHMS: Melanie Mitchell, Santa Fe Institute. E2 - NONLINEAR TIME SERIES TOOLS FOR FINANCIAL MARKETS: Blake LeBaron, Dept. of Economics, University of Wisconsin E3 - NEURAL NETWORKS FOR TEMPORAL INFORMATION PROCESSING: S. Y. Kung, Electrical Engineering, Princeton University F1 - AN INTRODUCTION TO DERIVATIVES; FUTURES, FORWARDS, OPTIONS, AND SWAPS: John F. Marshall, Polytechnic University F2 - ADVANCED OPTION PRICING: ALTERNATIVE DISTRIBUTIONAL HYPOTHESES: The Wharton School F3 - TRADING MODELS AND STRATEGIES: TBA EXHIBIT INFORMATION ^^^^^^^ ^^^^^^^^^^^ Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to exhibit their products. Contact the CIFEr Secretariat, Barbara Klemm, at (800) 321-6338. REGISTRATION ^^^^^^^^^^^^ Your conference registration fee includes refreshments, the cocktail reception (on Sunday, April 9 at 5:00 P.M.), and the conference proceedings. Be sure to attend the keynote speech luncheon. You may send a check or money order for your registration fee, or pay by credit card. Please make your check payable to "IEEE & IAFE CIFEr '95 Conference" and print the attendee(s) name(s) on the face of the check.
EARLY BIRD REGISTRATION THROUGH MARCH 31, 1995 ^^^^^ ^^^^ ^^^^^^^^^^^^ ^^^^^^^ ^^^^^ ^^ ^^^^ CIFEr CONFERENCE REGISTRATION (4/10-11/95) IEEE & IAFE MEMBERS $400 NON-MEMBERS $550 FULL-TIME STUDENTS $190 KEYNOTE SPEECH LUNCHEON MEAL TICKET $ 10 (MONDAY, 4/10/95) FOR MORE INFORMATION AND REGISTRATION MATERIALS ^^^ ^^^^ ^^^^^^^^^^^ ^^^ ^^^^^^^^^^^^ ^^^^^^^^^ 1. Telephone the CIFEr Secretariat at (714)752-8205 or (800) 321-6338. They'll be glad to register you and provide additional information. 2. Send an e-mail request to the CIFEr Secretariat at 74710.2266 at COMPUSERVE.COM You will receive a complete conference announcement which includes the program. CIFEr SECRETARIAT ^^^^^ ^^^^^^^^^^^ Meeting Management IEEE/IAFE Computational Intelligence for Financial Engineering 2603 Main Street, Suite #690 Irvine, California 92714 (714)752-8205 or (800) 321-6338 FAX (714) 752-7444 74710.2266 at COMPUSERVE.COM From Voz at dice.ucl.ac.be Fri Feb 24 11:41:18 1995 From: Voz at dice.ucl.ac.be (Jean-Luc Voz) Date: Fri, 24 Feb 1995 17:41:18 +0100 Subject: Elena Workshop Message-ID: <199502241641.RAA20360@ns1.dice.ucl.ac.be> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ I N V I T A T I O N to the ELENA WORKSHOP Project Results and Industrial Openings Louvain-la-Neuve, Belgium 18 April 1995 (An initiative of Technopol Brussels, Value Relay Center) ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ We are pleased to invite you to the ELENA WORKSHOP: Project Results and Industrial Openings. ELENA is an ESPRIT European Basic Research Action project (No 6891) on classification, neural networks and evolutive architectures, which investigates several aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification and incremental learning.
This text includes: - Details about the ELENA workshop (aims, who should attend, practical information,...) - A short presentation of the ELENA Esprit project - A registration form for the workshop. See also the WORKSHOP HOMEPAGE on the World Wide Web at URL: http://www.dice.ucl.ac.be/neural-nets/ELENA/ELENA_WORKSHOP.html Topics: ~~~~~~~ Classification by neural networks, pruning methods, incremental learning and evolutive architectures, Bayesian statistical classification. Benchmarking studies of classification algorithms. Digital and analog hardware implementations of classifiers. Objectives: ~~~~~~~~~~~ The ELENA consortium wishes to transfer its experience to a broad range of industrial and academic users. The main objectives of this workshop are: - to describe to industry and scientists the state-of-the-art in classification by neural networks and the latest developments in the framework of the ELENA project, - to provide practical guidelines to users of classification tools, on the choice of algorithms, of benchmarking methods, and on the software and hardware options depending on the application, - to allow industry and other potential users to apply up-to-date powerful methods of classification in practical situations. The one-day workshop will be organized in Louvain-la-Neuve (25 km from Brussels), Belgium, on 18 April 1995; it will include talks and practical demonstrations. Who should attend the workshop? ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The workshop will be of interest to industry, to actual and potential users of classification methods in all domains (image and signal processing, control, pattern recognition, OCR), and also to scientists interested in classification with neural networks. Date ~~~~ The ELENA workshop is organized in Louvain-la-Neuve (Belgium), on 18 April 1995. Workshop Programme ~~~~~~~~~~~~~~~~~~ 09.00 Introduction. 09.15 General overview of the ELENA project: aims, potentialities. Prof. C. Jutten (INPG).
09.45 Main theoretical results and practical recommendations. Dr. P. Comon (Thomson Sintra), Prof. C. Jutten (INPG). Coffee break 11.00 Hardware implementations: practical considerations on digital and analog alternatives. Dr. M. Verleysen (UCL), Dr. A. Guerin (INPG), Prof. J. Cabestany (UPC), Ph. Thissen (UCL). Lunch 14.00 Benchmarking studies: experimental protocols and practical recommendations for classifier users. J.-L. Voz (UCL), Dr. A. Guerin (INPG). 15.00 PACKLIB, an interactive environment for data processing: features and demonstration. Dr. F. Blayo (Univ. Paris1), Y. Cheneval (EPFL). Coffee break 16.15 Discussion. Registration ~~~~~~~~~~~~ To register for the ELENA workshop, please transfer the registration fee to the account indicated overleaf, and send the following form BEFORE APRIL 7TH, 1995 - by fax to: No +32 10 47 86 67 (attn: JL Voz) OR - by mail to: Jean-Luc Voz Universite Catholique de Louvain DICE - Microelectronics Laboratory 3, Place du Levant B-1348 LOUVAIN-LA-NEUVE Belgium OR - by e-mail to: voz at dice.ucl.ac.be +-------------------------------------------------------------------+ | ELENA WORKSHOP | REGISTRATION FORM | | | Title (M., Mrs, Dr, Prof.): ...................................... | Name: ............................................................ | First Name: ...................................................... | Institution: ..................................................... | .................................................................. | Address: ......................................................... | .................................................................. | .................................................................. | Post/Zip code: ................................................... | City: ............................................................ | Country: ......................................................... | Phone: ...........................................................
| Fax: ............................................................. | E-mail: .......................................................... | VAT No.: (mandatory for registrants from the European Community): | .................................................................. | | | O I will participate in the ELENA workshop on 18 April 1995 | in Louvain-la-Neuve, Belgium. I will transfer the fee of BEF 3000 | (plus any bank and exchange charges) to the account: | Account: "Workshop ELENA" | Acc. No.: 271-0366343-06 | Bank: Generale de Banque | pl. de l'Universite 6 | B-1348 Louvain-la-Neuve | Belgium | | O I will not participate in the workshop, but I am interested | in the ELENA project and any further workshop or published report. | +--------------------------------------------------------------------+ The registration fee of BEF 3000 includes attendance at the workshop, lunch, coffee breaks and printed materials. Confirmation of registration and practical details (location,...) will be sent upon receipt of the registration form and payment. The ELENA PROJECT ~~~~~~~~~~~~~~~~~ Neural networks are now known as powerful methods for empirical data analysis, especially for approximation (identification, control, prediction) and classification problems. The ELENA project investigates several aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification, incremental learning (control of the network size by adding or removing neurons),... ELENA is an ESPRIT III Basic Research Action project (No. 6891). It involves: INPG (Grenoble, F), UPC (Barcelona, E), EPFL (Lausanne, CH), UCL (Louvain-la-Neuve, B), Thomson-Sintra ASM (Sophia Antipolis, F), EERIE (Nimes, F). The coordinator of the project can be contacted at: Prof. Christian Jutten, INPG-LTIRF, 46 av.
Félix Viallet, F-38031 Grenoble Cedex, France Phone: +33 76 57 45 48, Fax: +33 76 57 47 90, e-mail: chris at tirf.inpg.fr Overview of the project ~~~~~~~~~~~~~~~~~~~~~~~ Theoretical results point out the relations between Bayesian classifiers and classical Multi-Layer Perceptrons (MLP), and propose new algorithms based on Kernel Estimator Classifiers (KEC). Pruning methods (to adapt the network sizes) have been explored on MLP as well as on KEC. Relations between data dimension, number of samples and number of parameters in practical situations have also been addressed. Original nonlinear mapping algorithms (VQP) for data dimension reduction have also been designed. A simulation environment (PACKLIB) has been developed in the project; it is a smart graphical tool allowing fast programming and interactive analysis. The PACKLIB environment greatly simplifies the user's task by requiring the user to write only the basic code of the algorithms, while the whole graphical input, output and relationship framework is handled by the environment itself. PACKLIB is used for extensive benchmarks in the ELENA project and in other situations (image processing, control of mobile robots,...). Currently, PACKLIB is being tested by beta users and a demo version will soon be available in the public domain. Specific problems related to hardware implementation of incremental algorithms have been addressed; parallel machines with different kinds of systolic architectures, and specialized VLSI processors, have been developed and studied in the framework of the ELENA project. The goal of the project was to extract guidelines for the choice of architectures and machines in different situations, taking into account the required performances, but also external constraints such as inputs/outputs, portability, power consumption, versatility,... More Information: ~~~~~~~~~~~~~~~~~ If you need additional practical information about the workshop (access by plane, train or car, hotels,...)
contact:

Jean-Luc Voz or Michel Verleysen
Universite Catholique de Louvain
DICE - Microelectronics Laboratory
3, Place du Levant
B-1348 LOUVAIN-LA-NEUVE
Belgium
Phone       : +32-10-47.25.51
Secretariat : +32-10-47.25.40
Fax         : +32-10-47.86.67
E-mail      : voz at dice.ucl.ac.be
              verleyse at dice.ucl.ac.be

SEE ALSO THE WWW HOMEPAGE of the workshop:
http://www.dice.ucl.ac.be/neural-nets/ELENA/ELENA_WORKSHOP.html

A PostScript file presenting the ELENA workshop is available by anonymous ftp from ftp.dice.ucl.ac.be in /pub/neural-nets/ELENA/elena_workshop.ps.Z

The ELENA workshop is a joint organization of:
- Technopol Brussels, Value Relay Center
- The partners of the ELENA consortium.

The ELENA workshop is organized in Louvain-la-Neuve (Belgium) on 18 April 1995. It precedes an international conference in the field of artificial neural networks organized in Brussels, the third European Symposium on Artificial Neural Networks (ESANN'95).

From ajit at uts.cc.utexas.edu Fri Feb 24 14:01:39 1995
From: ajit at uts.cc.utexas.edu (Ajit Dingankar)
Date: Fri, 24 Feb 1995 13:01:39 -0600
Subject: Neuroprose Paper: Classifiers on Relatively Compact Sets
Message-ID: <199502241901.NAA26303@curly.cc.utexas.edu>

**DO NOT FORWARD TO OTHER GROUPS**

Sorry, no hardcopies available.

URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/dingankar.relcompact-class.ps.Z

BibTeX entry:

@ARTICLE{atd17,
AUTHOR = "Sandberg, I. W. and Dingankar, A. T.",
TITLE = "{Classifiers on Relatively Compact Sets}",
JOURNAL = "IEEE Transactions on Circuits and Systems-I: Fundamental Theory and Applications",
VOLUME = {42},
NUMBER = {1},
PAGES = {57},
YEAR = "1995",
MONTH = "January",
ANNOTE = "",
LIBRARY = "",
CALLNUM = "" }

Classifiers on Relatively Compact Sets
--------------------------------------

Abstract

The problem of classifying signals is of interest in several application areas.
Typically we are given a finite number $m$ of pairwise disjoint sets $C_1, \ldots, C_m$ of signals, and we would like to synthesize a system that maps the elements of each $C_j$ into a real number $a_j$, such that the numbers $a_1,\ldots,a_m$ are distinct. In a recent paper it is shown that this classification can be performed by certain simple structures involving linear functionals and memoryless nonlinear elements, assuming that the $C_j$ are compact subsets of a real normed linear space. Here we give a similar solution to the problem under the considerably weaker assumption that the $C_j$ are relatively compact and at positive distance from each other. An example is given in which the $C_j$ are subsets of $L_p(a,b), ~1 \le p < \infty$.

From moody at chianti.cse.ogi.edu Fri Feb 24 20:12:50 1995
From: moody at chianti.cse.ogi.edu (John Moody)
Date: Fri, 24 Feb 95 17:12:50 -0800
Subject: Graduate Study at the Oregon Graduate Institute
Message-ID: <9502250112.AA05935@chianti.cse.ogi.edu>

The Oregon Graduate Institute of Science and Technology (OGI) has openings for a few outstanding students in its Computer Science and Electrical Engineering Masters and Ph.D. programs in the areas of Neural Networks, Learning, Signal Processing, Time Series, Control, Speech, Language, Vision, and Computational Finance. OGI has over 15 faculty, senior research staff, and postdocs in these areas. Short descriptions of our research interests are appended below.

The primary purposes of this message are:

1) To invite enquiries and applications from prospective students interested in studying for a Masters or PhD degree in the above areas.

2) To notify prospective Ph.D. students who are U.S. Citizens or U.S. Nationals of various fellowship opportunities at OGI. Fellowships provide full or partial financial support while studying for the PhD.

OGI is a young, but rapidly growing, private research institute located in the Portland area.
OGI offers Masters and PhD programs in Computer Science and Engineering, Applied Physics, Electrical Engineering, Biology, Chemistry, Materials Science and Engineering, and Environmental Science and Engineering.

Inquiries about the Masters and PhD programs and admissions for either Computer Science or Electrical Engineering should be addressed to:

Margaret Day, Director
Office of Admissions and Records
Oregon Graduate Institute
PO Box 91000
Portland, OR 97291
Phone: (503)690-1028
Email: margday at admin.ogi.edu

Formal applications should also be sent to Margaret Day. However, given how late it is in the graduate application season, informal applications should be sent directly to the CSE Department. For these informal applications, please include a letter specifying your research interests and photocopies of your GRE scores and college transcripts. Please send these materials to:

Kerri Burke, Academic Coordinator
Department of Computer Science and Engineering
Oregon Graduate Institute
PO Box 91000
Portland, OR 97291-1000
Phone: (503)690-1255

+++++++++++++++++++++++++++++++++++++++++++++++++++++++

Oregon Graduate Institute of Science & Technology
Department of Computer Science and Engineering &
Department of Electrical Engineering and Applied Physics

Research Interests of Faculty, Research Staff, and Postdocs in Adaptive & Interactive Systems (Neural Networks, Signal Processing, Control, Speech, Language, and Vision)

Etienne Barnard (Assistant Professor, EEAP): Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics.

Ron Cole (Professor, CSE): Ron Cole is director of the Center for Spoken Language Understanding at OGI.
Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world.

Mark Fanty (Research Assistant Professor, CSE): Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers.

Dan Hammerstrom (Associate Professor, CSE): Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next-generation VLSI neurocomputer architectures.

Hynek Hermansky (Associate Professor, EEAP): Hynek Hermansky is interested in speech processing by humans and machines, with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing.

Todd K. Leen (Associate Professor, CSE): Todd Leen's research spans the theory of neural network models, architecture and algorithm design, and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling.
John Moody (Associate Professor, CSE): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics, and finance.

David Novick (Associate Professor, CSE): David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems.

Misha Pavel (Associate Professor, EEAP): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces.

Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems.

Thorsteinn S. Rognvaldsson (Post-Doctoral Research Associate, CSE): Thorsteinn Rognvaldsson studies both applications and theory of neural networks and other non-linear methods for function fitting and classification. He is currently working on methods for choosing regularization parameters and also comparing the performance of neural networks with the performance of other techniques for time series prediction.
Joachim Utans (Post-Doctoral Research Associate, CSE): Joachim Utans's research interests include computer vision and image processing, model-based object recognition, neural network learning algorithms and optimization methods, and model selection and generalization, with applications in handwritten character recognition and financial analysis.

Pieter Vermeulen (Senior Research Associate, CSE): Pieter Vermeulen is interested in the theory, design and implementation of pattern-recognition systems, neural networks and telephone-based speech systems. He currently works on the realization of speaker-independent, small-vocabulary interfaces to the public telephone network. Current projects include voice dialing, a system to collect the year 2000 census information, and the rapid prototyping of such systems.

Eric A. Wan (Assistant Professor, EEAP): Eric Wan's research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, adaptive control, active noise cancellation, and telecommunications.

Lizhong Wu (Post-Doctoral Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance.

From pf2 at st-andrews.ac.uk Thu Feb 23 05:33:21 1995
From: pf2 at st-andrews.ac.uk (Peter Foldiak)
Date: Thu, 23 Feb 95 10:33:21 GMT
Subject: URL suggestion
Message-ID: <10889.9502231033@psych.st-andrews.ac.uk>

May I suggest that in all future ftp announcements on connectionists the sender should (also) give the standard URL form of the document, so that it would be easier to point a WWW reader at it (just by copying and pasting).
Thanks,
Peter

From gomes at ICSI.Berkeley.EDU Mon Feb 27 12:40:21 1995
From: gomes at ICSI.Berkeley.EDU (Benedict A. Gomes)
Date: Mon, 27 Feb 1995 09:40:21 -0800 (PST)
Subject: Mapping onto parallel machines
Message-ID: <199502271740.JAA01859@icsib6.ICSI.Berkeley.EDU>

Some time back I posted a request for references on automatically mapping neural nets onto parallel machines. The references that I have received over time have been compiled into a bib file and are available by anonymous ftp from

ftp://icsi.berkeley.edu/pub/ai/gomes/nn-mapping.bib

The same directory also has a summary of some of the papers. Work in this area is diffuse and may be published in a wide variety of venues, including software, parallel systems and neural networks, making it hard to keep track of what has been done. Hence, I would like to repeat my request, so as to update my references. I am interested in both mapping algorithms and simulators, particularly for MIMD machines like the CM-5 and the Cray T3D.

Thanks!
Benedict Gomes

From zoubin at psyche.mit.edu Mon Feb 27 20:06:08 1995
From: zoubin at psyche.mit.edu (Zoubin Ghahramani)
Date: Mon, 27 Feb 95 20:06:08 EST
Subject: Paper available on factorial learning and EM
Message-ID: <9502280106.AA12339@psyche.mit.edu>

FTP-host: psyche.mit.edu
FTP-filename: /pub/zoubin/factorial.ps.Z
URL: ftp://psyche.mit.edu/pub/zoubin/factorial.ps.Z

This NIPS preprint is 8 pages long [300K compressed].

Factorial Learning and the EM Algorithm

Zoubin Ghahramani
Department of Brain & Cognitive Sciences
Massachusetts Institute of Technology
Cambridge, MA 02139

Many real-world learning problems are best characterized by an interaction of multiple independent causes or factors. Discovering such causal structure from the data is the focus of this paper. Based on Zemel and Hinton's cooperative vector quantizer (CVQ) architecture, an unsupervised learning algorithm is derived from the Expectation--Maximization (EM) framework.
Due to the combinatorial nature of the data generation process, the exact E-step is computationally intractable. Two alternative methods for computing the E-step are proposed, Gibbs sampling and a mean-field approximation, and some promising empirical results are presented.

The paper will appear in G. Tesauro, D.S. Touretzky and T.K. Leen, eds., "Advances in Neural Information Processing Systems 7", MIT Press, Cambridge MA, 1995.

From bisant at gl.umbc.edu Mon Feb 27 20:59:33 1995
From: bisant at gl.umbc.edu (Mr. David Bisant)
Date: Mon, 27 Feb 1995 20:59:33 -0500
Subject: Paper Available: ID of binding sites on E.coli genetic sequences
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/bisant.ribosome.ps.Z

The file bisant.ribosome.ps.Z is a copy of a paper recently accepted by Nucleic Acids Research. It is now available for copying from the Neuroprose repository. Hardcopies can be photocopied from the journal itself shortly. (18 pages, compressed file size 89K)

Title: Identification of Ribosome Binding Sites in Escherichia coli Using Neural Network Models

Authors:
David Bisant, Neuroscience Program (151 B), Stanford University, Stanford, CA 94305, bisant at decatur.stanford.edu
Jacob Maizel, National Cancer Institute, FCRF, Bldg 469 Rm 151, PO Box B, Frederick, MD 21701, jmaizel at ncifcrf.gov

Abstract: This study investigated the use of neural networks in the identification of Escherichia coli ribosome binding sites. The recognition of these sites based on primary sequence data is difficult due to the multiple determinants that define them. Additionally, secondary structure plays a significant role in the determination of the site, and this information is difficult to include in the models. Efforts to solve this problem have so far yielded poor results. A new compilation of Escherichia coli ribosome binding sites was generated for this study. Feedforward backpropagation networks were applied to their identification.
Perceptrons were also applied, since a perceptron had been the best available method since 1982. Performance of all the neural networks and perceptrons was evaluated by ROC analysis. The neural network provided a significant improvement in the recognition of these sites compared to the previous best method, finding fewer than half as many false positives when both models were adjusted to find an equal number of actual sites. The best neural network used an input window of 101 nucleotides and a single hidden layer of 9 units. Both the neural network and the perceptron trained on the new compilation performed better than the original perceptron published by Stormo et al. in 1982.

Keywords: neural networks, ribosome binding sites, nucleic acid sequence analysis, ROC, Escherichia coli

URL: ftp://archive.cis.ohio-state.edu/pub/neuroprose/bisant.ribosome.ps.Z

From peter at ai.iit.nrc.ca Tue Feb 28 00:18:44 1995
From: peter at ai.iit.nrc.ca (Peter Turney)
Date: Tue, 28 Feb 1995 10:18:44 +0500
Subject: Workshop on Data Engineering for Inductive Learning
Message-ID: <9502281518.AA15776@ksl0j.iit.nrc.ca>

Workshop on Data Engineering for Inductive Learning
Second and Final Call for Participation
IJCAI-95, Montreal (Canada), August 20, 1995

This notice is to inform you that the date of the workshop has been determined and to remind you of the approaching deadline for submissions (including requests for participation without a paper presentation).

In inductive learning, algorithms are applied to data. It is well understood that attention to both elements is critical, but algorithms typically receive more attention than data. Our goal in this workshop is to counterbalance the predominant focus on algorithms by providing a forum in which data takes center stage. Specifically, we invite discussion of issues relevant to data engineering, which we define as the transformation of raw data into a form useful as input to algorithms for inductive learning.
Data engineering is a concern in industrial and commercial applications of machine learning, neural networks, genetic algorithms, and traditional statistics.

Deadline for submissions:     March 31, 1995
Notification of acceptance:   April 21, 1995
Submissions available by ftp: April 28, 1995
Actual Workshop:              August 20, 1995

For more information about the workshop, see:
http://ai.iit.nrc.ca/ijcai/data-engineering.html
If you do not have access to the web, contact: peter at ai.iit.nrc.ca

From cns-cas at PARK.BU.EDU Tue Feb 28 16:13:39 1995
From: cns-cas at PARK.BU.EDU (cns-cas@PARK.BU.EDU)
Date: Tue, 28 Feb 95 16:13:39 EST
Subject: VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Message-ID: <199502282113.QAA07418@sharp.bu.edu>

VISION, BRAIN, AND THE PHILOSOPHY OF COGNITION
Friday, March 17, 1995
Boston University
George Sherman Union Conference Auditorium, Second Floor
775 Commonwealth Avenue
Boston, MA 02215

Co-Sponsored by the Department of Cognitive and Neural Systems, the Center for Adaptive Systems, and the Center for Philosophy and History of Science

Program:
--------
8:30am--9:30am: KEN NAKAYAMA, Harvard University, Visual perception of surfaces
9:30am--10:30am: RUDIGER VON DER HEYDT, Johns Hopkins University, How does the visual cortex represent surface and contour?
10:30am--11:00am: Coffee Break
11:00am--12:00pm: STEPHEN GROSSBERG, Boston University, Cortical dynamics of visual perception
12:00pm--1:00pm: PATRICK CAVANAGH, Harvard University, Attention-based visual processes
1:00pm--2:30pm: Lunch
2:30pm--3:30pm: V.S. RAMACHANDRAN, University of California, Neural plasticity in the adult human brain: New directions of research
3:30pm--4:30pm: EVAN THOMPSON, Boston University, Phenomenology and computational vision
4:30pm--5:30pm: DANIEL DENNETT, Tufts University, Filling-in revisited
5:30pm--: Discussion

Registration:
-------------
The conference is free and open to the public.
Parking:
--------
Parking is available at nearby campus lots: 808 Commonwealth Avenue ($6 per vehicle), 766 Commonwealth Avenue ($8 per vehicle), and 700 Commonwealth Avenue ($10 per vehicle). If these lots are full, please ask the lot attendant for an alternate location.

Contact:
--------
Professor Stephen Grossberg
Department of Cognitive and Neural Systems
111 Cummington Street
Boston, MA 02215
fax: (617) 353-7755
email: diana at cns.bu.edu

From wahba at stat.wisc.edu Tue Feb 28 21:17:22 1995
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Tue, 28 Feb 95 20:17:22 -0600
Subject: SS-ANOVA for `soft classification'
Message-ID: <9503010217.AA18322@hera.stat.wisc.edu>

Announcing:

Smoothing Spline ANOVA for Exponential Families, with Application to the Wisconsin Epidemiological Study of Retinopathy.
by Grace Wahba, Yuedong Wang, Chong Gu, Ronald Klein, MD and Barbara Klein, MD.
UWisconsin-Madison Statistics Dept TR 940, Dec. 1994 (WWGKK)
ftp: ftp.stat.wisc.edu/pub/wahba/exptl.ssanova.ps.gz
Mosaic: http://www.stat.wisc.edu/~wahba/wahba.html - then click on ftp

GRKPACK: Fitting Smoothing Spline ANOVA Models for Exponential Families.
by Yuedong Wang.
UWisconsin-Madison Statistics Dept TR 942, Jan. 1995. (GRKPACK-doc)
ftp: ftp.stat.wisc.edu/pub/wahba/grkpack.ps.gz
Mosaic: http://www.stat.wisc.edu/~wahba/wahba.html - then click on ftp

In WWGKK we develop Smoothing Spline ANOVA (SS-ANOVA) models for estimating the probability that an instance (subject) will be in class 1 as opposed to class 0, given a vector of predictor variables t (`soft' classification). We observe {y_i, t(i), i = 1,..,n}, where y_i is 1 or 0 according to whether subject i's response is `success' or `failure', and t(i) is a vector of predictor variables for the i-th subject. Letting p(t) be the probability that a subject whose predictor variables are t has a `success' response, we estimate p(t) = exp{f(t)}/(1 + exp{f(t)}) from this data using a smoothing spline ANOVA representation of f.
An ANOVA representation gives f as a sum of functions of one variable (main effects), plus sums of functions of two variables (two-factor interactions), etc. This representation provides an interpretable alternative to a neural net. The following issues are addressed in this paper: (1) methods for deciding which terms in the ANOVA decomposition to include (model selection), (2) methods for choosing good values of the regularization (smoothing) parameters, which control the bias-variance tradeoff, (3) methods for making confidence statements concerning the estimate, (4) numerical algorithms for the calculations, and, finally, (5) public software (GRKPACK). The overall scheme is applied to data from the Wisconsin Epidemiologic Study of Diabetic Retinopathy (WESDR) to model the risk of progression of diabetic retinopathy {`success'} as a function of glycosylated hemoglobin, duration of diabetes and body mass index {t}. Cross-sectional plots provide interpretable information about these risk factors. This paper provided the basis for Grace Wahba's Neyman Lecture. A preliminary version appeared in NIPS-6.

GRKPACK-doc provides documentation for the code GRKPACK, which implements (2)-(4) above. The code for GRKPACK is available in netlib in the file gcv/grkpack.shar. It is recommended that it be retrieved via Mosaic (http://www.netlib.org, go to The Netlib Repository, then gcv) rather than via the robot mailserver, which may subdivide the file. Included in GRKPACK are several examples, including the analysis described in WWGKK and the WESDR data. Comments and suggestions concerning the code should be sent to Yuedong Wang, yuedong at umich.edu.
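For readers who want to experiment with the `soft classification' setup before fetching GRKPACK, the logistic link p(t) = exp{f(t)}/(1 + exp{f(t)}) is easy to prototype. The sketch below is only an illustration, not GRKPACK's method: it replaces the smoothing spline ANOVA estimate of f with a plain penalized-likelihood linear fit computed by Newton iteration (IRLS), and every function name and parameter here is our own invention, not taken from WWGKK.

```python
import numpy as np

def sigmoid(f):
    # The inverse logit link: p(t) = exp{f(t)}/(1 + exp{f(t)})
    return 1.0 / (1.0 + np.exp(-f))

def fit_penalized_logistic(X, y, lam=1e-3, iters=50):
    """Fit f(t) = b0 + b.t by penalized maximum likelihood via Newton/IRLS.
    The quadratic penalty lam plays the role of the smoothing parameter;
    a real SS-ANOVA fit would instead penalize roughness of spline terms."""
    Xa = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta = np.zeros(Xa.shape[1])
    P = lam * np.eye(Xa.shape[1])
    P[0, 0] = 0.0                                # do not penalize the intercept
    for _ in range(iters):
        p = sigmoid(Xa @ beta)
        W = p * (1.0 - p)                        # IRLS weights
        H = Xa.T @ (Xa * W[:, None]) + P         # penalized Hessian
        g = Xa.T @ (y - p) - P @ beta            # penalized score
        beta = beta + np.linalg.solve(H, g)      # Newton step
    return beta

# Simulated `soft classification' data: one predictor, known true p(t).
rng = np.random.default_rng(0)
t = rng.uniform(-3, 3, 400)
p_true = sigmoid(1.5 * t)
y = (rng.uniform(size=400) < p_true).astype(float)

beta = fit_penalized_logistic(t[:, None], y)
p_hat = sigmoid(beta[0] + beta[1] * t)           # estimated success probability
```

The fitted p_hat can then be plotted against t to give the kind of interpretable risk curve the paper obtains for the WESDR risk factors, with the important difference that the spline machinery (and its data-driven smoothing-parameter choice) is what makes the real method flexible beyond a straight line in f.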
From john at dcs.rhbnc.ac.uk Tue Feb 28 17:43:37 1995
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Tue, 28 Feb 95 22:43:37 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199502282243.WAA03485@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT): three new reports available

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-015:
----------------------------------------
Computability and complexity over the reals
by Paolo Boldi, University of Milan

Abstract: In this work, we sketch a (rather superficial) survey of the problem of extending some of the classical notions of computation and complexity theory to the non-classical realm of the real numbers. We first present an algorithmic approach, studied in depth by Blum, Shub, Smale et al., and give a non-trivial separation result recently obtained by Cucker. Then, we introduce some concepts from another line of research, namely the one based on the notion of computable real number.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-016:
----------------------------------------
Probably Approximately Optimal Satisficing Strategies
by Russell Greiner, Siemens Corporate Research
Pekka Orponen, Department of Computer Science, University of Helsinki

Abstract: A {\em satisficing search problem} consists of a set of probabilistic experiments to be performed in some order, seeking a satisfying configuration of successes and failures. The expected cost of the search depends both on the success probabilities of the individual experiments, and on the {\em search strategy}, which specifies the order in which the experiments are to be performed. A strategy that minimizes the expected cost is {\em optimal}.
Earlier work has provided ``optimizing functions'' that compute optimal strategies for certain classes of search problems from the success probabilities of the individual experiments. We extend those results by providing a general model of such strategies, and an algorithm PAO that identifies an approximately optimal strategy when the probability values are not known. The algorithm first estimates the relevant probabilities from a number of trials of each undetermined experiment, and then uses these estimates, and the proper optimizing function, to identify a strategy whose cost is, with high probability, close to optimal. We also show that if the search problem can be formulated as an and-or tree, then the PAO algorithm can also ``learn while doing'', i.e. gather the necessary statistics while performing the search.

----------------------------------------
NeuroCOLT Technical Report NC-TR-95-018:
----------------------------------------
On real Turing machines that toss coins
by Felipe Cucker, Universitat Pompeu Fabra
Marek Karpinski, Universit\"at Bonn
Pascal Koiran, DIMACS, Rutgers University
Thomas Lickteig, Universit\"at Bonn
Kai Werther, Universit\"at Bonn

Abstract: In this paper we consider real counterparts of classical probabilistic complexity classes in the framework of real Turing machines as introduced by Blum, Shub, and Smale \cite{BSS}. We give an extension of the well-known ``$\BPP \subseteq \P/\poly$'' result from discrete complexity theory to a very general setting in the real number model. This result holds for real inputs, real outputs, and random elements drawn from an arbitrary probability distribution over~$\R^m$. Then we turn to the study of Boolean parts, that is, classes of languages of zero-one vectors accepted by real machines. In particular we show that the classes $\BPP$, $\PP$, $\PH$, and $\PSPACE$ are not enlarged by allowing the use of real constants and arithmetic at unit cost provided we restrict branching to equality tests.
-----------------------

The Report NC-TR-95-015 can be accessed and printed as follows:

% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-95-015.ps.Z
ftp> bye
% zcat nc-tr-95-015.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone without an uncompress facility.

A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:

http://www.dcs.rhbnc.ac.uk/neurocolt.html

Best wishes
John Shawe-Taylor