From svensen at cns.mpg.de Thu Apr 1 04:14:58 1999
From: svensen at cns.mpg.de (Markus Svensen)
Date: Thu, 01 Apr 1999 11:14:58 +0200
Subject: Thesis available
Message-ID: <37033912.823AA11@cns.mpg.de>

Dear Connectionists,

My PhD thesis --- GTM: The Generative Topographic Mapping --- is available for downloading in compressed postscript format from http://www.ncrg.aston.ac.uk/GTM/ (this page also contains other material on the GTM); the abstract follows below. It can also be obtained via anonymous ftp (see below) or from the NCRG Publication database (http://www.ncrg.aston.ac.uk/Papers/, tech. rep. no. NCRG/98/024, single-sided version only).

Markus Svensen
Max-Planck-Institute of Cognitive Neuroscience
Postfach 500 355
D-04303 LEIPZIG, GERMANY
Email: svensen at cns.mpg.de
Phone: +49/0 341 9940 229
Fax (not personal): +49/0 341 9940 221

Abstract
========
This thesis describes the Generative Topographic Mapping (GTM) --- a non-linear latent variable model, intended for modelling continuous, intrinsically low-dimensional probability distributions embedded in high-dimensional spaces. It can be seen as a non-linear form of principal component analysis or factor analysis. It also provides a principled alternative to the self-organizing map --- a widely established neural network model for unsupervised learning --- resolving many of its associated theoretical problems.

An important potential application of the GTM is visualization of high-dimensional data. Since the GTM is non-linear, the relationship between data and its visual representation may be far from trivial, but a better understanding of this relationship can be gained by computing the so-called magnification factor. In essence, the magnification factor relates the distances between data points, as they appear when visualized, to the actual distances between those data points.

There are two principal limitations of the basic GTM model. The computational effort required will grow exponentially with the intrinsic dimensionality of the density model. However, if the intended application is visualization, this will typically not be a problem. The other limitation is the inherent structure of the GTM, which makes it most suitable for modelling moderately curved probability distributions of approximately rectangular shape. When the target distribution is very different from that, the aim of maintaining an `interpretable' structure, suitable for visualizing data, may come into conflict with the aim of providing a good density model.

The fact that the GTM is a probabilistic model means that results from probability theory and statistics can be used to address problems such as model complexity. Furthermore, this framework provides solid ground for extending the GTM to wider contexts than that of this thesis.

Keywords: latent variable model, visualization, magnification factor, self-organizing map, principal component analysis

Availability via ftp
====================
The thesis is available via anonymous ftp from cs.aston.ac.uk, in the directory neural/svensjfm/GTM.
It's available in two versions:

NCRG_98_024.ps.Z contains the final version of the thesis, as it was submitted to Aston University, formatted according to Aston's regulations (1.5 line spacing and margins for single-sided printing); 112 pages on 112 A4 sheets.

NCRG_98_024_dblsided.ps.Z contains the same version of the thesis in terms of content, but formatted slightly differently (single line spacing and margins for double-sided printing, with blank pages added as appropriate); 108 pages on 56 A4 sheets.

Naturally, the single-sided version will not look very nice when printed double-sided, and vice versa!

From erik at bbf.uia.ac.be Fri Apr 2 05:56:51 1999
From: erik at bbf.uia.ac.be (Erik De Schutter)
Date: Fri, 2 Apr 1999 11:56:51 +0100 (WET DST)
Subject: Postdoctoral Position in Computational Neuroscience (Cerebellum)
Message-ID: <199904021056.LAA07953@kuifje.bbf.uia.ac.be>

The Antwerp Theoretical Neurobiology group has a 2 to 3 year position for a computational neuroscientist, supported by a Human Frontier Science project. We are primarily looking for a new postdoctoral researcher, but also have fellowships available for highly qualified PhD students.

The project involves realistic network models of the cerebellum. The postdoc will join a group of computational and experimental neuroscientists who study the cerebellum at multiple levels of complexity and is expected to interact closely with the other members of the lab and of the Human Frontier Science team. In a first phase, this specific project involves the use of a realistic network model of the cerebellar cortex of the rat to study the effect of granular layer oscillations on Purkinje cell firing. In a second phase the network model will be expanded to study cerebello-olivary interactions. Simulation results will be compared to multi-unit data recorded from anesthetized and awake rats and from knockout mice. More information on the activities of the Antwerp Theoretical Neurobiology group can be found at http://www.bbf.uia.ac.be

We are looking for candidates with experience in computational neuroscience using compartmental models and/or using GENESIS. The position is available immediately, but can also be taken up within the next 12 months. Applicants should (e-)mail their curriculum vitae and names of two references to:

Prof. Erik De Schutter
Born-Bunge Foundation
University of Antwerp - UIA
B2610 Antwerp
BELGIUM
erik at bbf.uia.ac.be

From mgeorg at SGraphicsWS1.mpe.ntu.edu.sg Thu Apr 8 16:39:06 1999
From: mgeorg at SGraphicsWS1.mpe.ntu.edu.sg (Georg Thimm)
Date: Sat, 03 Apr 1999 11:19:06 -12920
Subject: Contents of Neurocomputing 25 (1998)
Message-ID: <199904030319.LAA01957@SGraphicsWS1.mpe.ntu.edu.sg>

Dear reader,

Please find below a compilation of the contents for Neurocomputing and Scanning the Issue written by V. David Sanchez A. More information on the journal is available at the URL http://www.elsevier.nl/locate/neucom . The contents of this and other journals published by Elsevier are also distributed by the ContentsDirect service (see the URL http://www.elsevier.nl/locate/ContentsDirect). Please feel free to redistribute this message.

My apologies if this message is inappropriate for this mailing list; I would appreciate feedback.

With kindest regards,
Georg Thimm

Dr. Georg Thimm
Design Research Center, School of MPE,
Nanyang Technological University, Singapore 639798
Tel ++65 790 5010
Email: mgeorg at ntu.edu.sg

********************************************************************************
Vol. 25 (1-3)

Scanning the Issue

I. Santamaria, M. Lazaro, C.J. Pantaleon, J.A. Garcia, A. Tazon, and A. Mediavilla describe "A nonlinear MESFET model for intermodulation analysis using a generalized radial basis function network". The transistor bias voltages are input to the GRBF network, which maps them onto the derivatives of the drain-to-source current associated with the intermodulation properties.

In "Solving dynamic optimization problems with adaptive networks" Y. Takahashi systematically constructs adaptive networks (AN) from a given dynamic optimization problem (DOP) which generate a locally-minimum problem solution. The construction of a solution for the Dynamic Traveling Salesman Problem (DTSP) is shown as an example.

L.M. Patnaik and S. Udaykumar discuss in "Mapping adaptive resonance theory onto ring and mesh architectures" different strategies to parallelize ART2-A networks. The parallel architectural simulator PROTEUS is used. Simulations show that the speedup obtained for the ring architecture is higher than the one obtained for the mesh architecture.

H.-H. Chen, M.T. Manry, and H. Chandrasekaran use in "A neural network training algorithm utilizing multiple sets of linear equations" output weight optimization (OWO), hidden weight optimization (HWO), and backpropagation in training algorithms. Simulations show that the combined OWO-HWO technique is more effective than the OWO-BP and the Levenberg-Marquardt methods for training MLP networks.

Y. Baram presents in "Bayesian classification by iterated weighting" a modular and separate calculation of the likelihoods and the weights. This allows for the use of any density estimation method. The likelihoods are estimated by parametric optimization; the weights are estimated using iterated averaging. Results obtained are similar to those generated using the expectation maximization method.

V. Maiorov and A. Pinkus prove "Lower bounds for approximation by MLP neural networks", including that any continuous function on any compact domain can be approximated arbitrarily well by a two-hidden-layer MLP with a fixed number of units per layer. The degree of approximation for an MLP with n hidden units is bounded by the degree of approximation of n ridge functions linearly combined.

In "Developing robust non-linear models through bootstrap aggregated neural networks" J. Zhang describes a technique for aggregating multiple networks. Bootstrap techniques are used to resample data into training and test data sets. Combination of the individual network models is done by principal component regression. More accurate and more robust results are obtained than when using single networks.

S.-Y. Cho and T.W.S. Chow describe a new heuristic for global learning in "Training multilayer neural networks using fast global learning algorithm * Least squares and penalized optimization methods". Classification problems are used to confirm that a higher convergence speed and the ability to escape local minima are achieved with the new algorithm as opposed to other conventional methods.

In "A novel entropy-constrained competitive learning algorithm for vector quantization" W.-J. Hwang, B.-Y. Ye, and S.-C. Liao develop the entropy-constrained competitive learning (ECCL) algorithm. This algorithm outperforms the entropy-constrained vector quantizer (ECVQ) design algorithm when the same rate constraint and initial codewords are used.

K.B. Eom presents a "Fuzzy clustering approach in unsupervised sea ice classification".
Passive microwave images taken by multichannel passive microwave imagers are used as input. The sea ice types in polar regions are determined by clustering. Hard clustering methods do not apply due to the fuzzy nature of the boundaries between different sea-ice types.

G.-J. Wang and T.-C. Chen introduce "A robust parameters self-tuning learning algorithm for multilayer feedforward neural network". Automatic adjustment of learning parameters such as the learning rate and the momentum can be achieved with this new algorithm. It outperforms the error backpropagation (EBP) algorithm in terms of convergence, and is also less sensitive to the initial weights.

In "Neural computation for robust approximate pole assignment" D.W.C. Ho, J. Lam, J. Xu, and H.K. Tam pose the problem of output feedback robust approximate pole assignment as an unconstrained optimization problem and solve it using a neural architecture and the gradient flow formulation. This formulation allows for a simple recurrent neural network realization.

I appreciate the cooperation of all those who submitted their work for inclusion in this issue.

V. David Sanchez A.
Neurocomputing * Editor-in-Chief *

********************************************************************************

From wolfskil at MIT.EDU Fri Apr 2 14:38:05 1999
From: wolfskil at MIT.EDU (Jud Wolfskill)
Date: Fri, 2 Apr 1999 15:38:05 -0400
Subject: book announcement
Message-ID:

A non-text attachment was scrubbed...
Name: not available
Type: text/enriched
Size: 1914 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c5bd37bb/attachment.bin

From fmdist at hotmail.com Sat Apr 3 13:15:31 1999
From: fmdist at hotmail.com (Fionn Murtagh)
Date: Sat, 03 Apr 1999 10:15:31 PST
Subject: "Neurocomputing Letters" - call for submissions
Message-ID: <19990403181531.17451.qmail@hotmail.com>

Manuscripts are invited for "Neurocomputing Letters", which is published as part of "Neurocomputing". Quick turnaround in refereeing and publication is the aim for a "Letter" paper, which should be short (manuscript text about 8 pages). The Editor-in-Chief of "Neurocomputing" is V. David Sanchez A. More information on "Neurocomputing" is available at http://www.elsevier.nl/locate/neucom The contents of this and other journals published by Elsevier are also distributed by the ContentsDirect service (see http://www.elsevier.nl/locate/ContentsDirect). Contact addresses are as follows.

NEUROCOMPUTING - EDITOR-IN-CHIEF
V. David Sanchez A.
Advanced Computational Intelligent Systems
11281 Tribuna Avenue
San Diego, CA 92131
U.S.A.
Fax +1 (619) 547-0794
Email dsanchez at san.rr.com
http://www.elsevier.nl/locate/neucom

NEUROCOMPUTING LETTERS - EDITOR
Prof. F. Murtagh
School of Computer Science
Queen's University of Belfast
Belfast BT7 1NN
Northern Ireland
Email f.murtagh at qub.ac.uk
http://www.cs.qub.ac.uk/~F.Murtagh

Get Your Private, Free Email at http://www.hotmail.com

From stan at mbfys.kun.nl Fri Apr 2 08:34:04 1999
From: stan at mbfys.kun.nl (Stan Gielen)
Date: Fri, 02 Apr 1999 14:34:04 +0100
Subject: advertisement for vacant Ph.D. position
Message-ID: <3.0.2.32.19990402143404.009f8da0@pop-srv.mbfys.kun.nl>

The Dept. of Medical Physics and Biophysics of the University of Nijmegen, The Netherlands, has a vacancy for a Ph.D. STUDENT

RESEARCH PROJECT
The research project has a theoretical component and an application oriented component.
The theoretical component deals with knowledge extraction from neural-network related architectures, which have been trained using large data-bases. Our group has extensive experience with Multi-layer Perceptrons, stochastic neural networks and graphical models (see http://www.mbfys.kun.nl/snn). The application oriented part deals with optimization of the production process in a paper production plant. The aim of the project is to train a neural network (or related architecture) with data obtained during the paper production process, and then
-- to find the main relevant parameters, which determine the production process and the quality of the output (quality of paper)
-- to extract rules, which provide insight into the production process
-- to optimize the process.
The project should produce excellent papers in the best available scientific journals. After a period of 4 years, a thesis should be available.

For applications, please send your cv to:
Prof. Stan Gielen: stan at mbfys.kun.nl

From solla at snowmass.phys.nwu.edu Mon Apr 5 21:32:37 1999
From: solla at snowmass.phys.nwu.edu (Sara A. Solla)
Date: Mon, 5 Apr 1999 20:32:37 -0500 (CDT)
Subject: NIPS*99 -- Call for Papers
Message-ID: <199904060132.UAA10273@snowmass.phys.nwu.edu>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 8487 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/6203c283/attachment.ksh

From solla at snowmass.phys.nwu.edu Mon Apr 5 21:32:18 1999
From: solla at snowmass.phys.nwu.edu (Sara A. Solla)
Date: Mon, 5 Apr 1999 20:32:18 -0500 (CDT)
Subject: NIPS*99 -- Call for Workshop Proposals
Message-ID: <199904060132.UAA10264@snowmass.phys.nwu.edu>

A non-text attachment was scrubbed...
Name: not available
Type: text
Size: 5108 bytes
Desc: not available
Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/0c59708c/attachment.ksh

From harnad at coglit.ecs.soton.ac.uk Tue Apr 6 07:40:49 1999
From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad)
Date: Tue, 6 Apr 1999 12:40:49 +0100 (BST)
Subject: Rolls on "The Brain and Emotion" BBS Call for Book Reviewers
Message-ID:

Below is the abstract of the Precis of a book that will shortly be circulated for Multiple Book Review in Behavioral and Brain Sciences (BBS):

*** please see also 5 important announcements about new BBS policies and address change at the bottom of this message ***

PRECIS OF "THE BRAIN AND EMOTION" (Oxford UP 1998)
by Edmund T. Rolls

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by April 16th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. Please note that it is the book, not the Precis, that is to be reviewed. It would be helpful if you indicated in your reply whether you already have the book or would require a copy. _____________________________________________________________ PRECIS OF "THE BRAIN AND EMOTION" FOR BBS MULTIPLE BOOK REVIEW Oxford University Press on 5th November 1998. Edmund T. Rolls University of Oxford Department of Experimental Psychology South Parks Road Oxford OX1 3UD England. Edmund.Rolls at psy.ox.ac.uk ABSTRACT: The topics treated in The Brain and Emotion include the definition, nature and functions of emotion (Chapter 3), the neural bases of emotion (Chapter 4), reward, punishment and emotion in brain design (Chapter 10), a theory of consciousness and its application to understanding emotion and pleasure (Chapter 9), and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, and with classifying different emotions; and in understanding what information processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward and punishment evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Chapter 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior. KEYWORDS: emotion; hunger; taste; brain evolution; orbitofrontal cortex; amygdala; dopamine; reward; punishment; consciousness ____________________________________________________________ To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. 
Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

The URLs you can use to get to the BBS Archive:
http://www.princeton.edu/~harnad/bbs/
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.rolls.html
ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.rolls
ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.rolls

To retrieve a file by ftp from an Internet site, type either:
ftp ftp.princeton.edu
or
ftp 128.112.128.1
When you are asked for your login, type:
anonymous
Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@")
cd /pub/harnad/BBS
To show the available files, type:
ls
Next, retrieve the file you want with (for example):
get bbs.rolls
When you have the file(s) you want, type:
quit

____________________________________________________________

*** FIVE IMPORTANT ANNOUNCEMENTS ***

------------------------------------------------------------------
(1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see:
Science: http://www.cogsci.soton.ac.uk/~harnad/science.html
Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html
American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html
Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm

---------------------------------------------------------------------
(2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers on CogPrints (as well as on their Home-Servers): http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone.

---------------------------------------------------------------------
(3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. Upon acceptance, preprints of final drafts are moved to the public BBS Archive:
ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html
http://www.cogsci.soton.ac.uk/bbs/Archive/

--------------------------------------------------------------------
(4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. Authors are invited to submit papers to:
Email: bbs at cogsci.soton.ac.uk
Web: http://cogprints.soton.ac.uk
http://bbs.cogsci.soton.ac.uk/

INSTRUCTIONS FOR AUTHORS:
http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html
http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html

---------------------------------------------------------------------
(5) Call for Book Nominations for BBS Multiple Book Review

In the past, Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota.
BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!).

From xjwang at cicada.ccs.brandeis.edu Tue Apr 6 14:09:13 1999
From: xjwang at cicada.ccs.brandeis.edu (Xiao-Jing Wang)
Date: Tue, 6 Apr 1999 14:09:13 -0400
Subject: No subject
Message-ID: <199904061809.OAA20240@cicada.ccs.brandeis.edu>

POST-DOCTORAL POSITION AT SLOAN CENTER FOR THEORETICAL NEUROBIOLOGY
BRANDEIS UNIVERSITY
==================================================================
Applications are invited for a post-doctoral fellowship in computational neuroscience, beginning in the fall of 1999. The successful candidate is expected to work on models of working memory processes in prefrontal cortex and their neuromodulation. Projects will be carried out in close interaction and collaboration with experimental neurobiologists. Candidates with a strong theoretical background, analytical and simulation skills, and knowledge in neuroscience are encouraged to apply. Applicants should promptly send a curriculum vitae and a brief description of fields of interest, and have three letters of recommendation sent to the following address.
==================================================================
Xiao-Jing Wang
Associate Professor
Center for Complex Systems
Brandeis University
Waltham, MA 02454
phone: 781-736-3147
fax: 781-736-4877
http://www.bio.brandeis.edu/pages/faculty/wang.html

From spotter at gg.caltech.edu Wed Apr 7 00:09:51 1999
From: spotter at gg.caltech.edu (Steve Potter)
Date: Tue, 6 Apr 1999 20:09:51 -0800
Subject: Neural Interface Jobs
Message-ID:

Dear Connectionists,

This is a follow-up to a post I made to the list circa 1991, when I was a grad student and had decided to study learning in cultured neural networks. I asked the list who was working on such things and got a number of helpful replies. Several led me to Caltech, where I have spent the last 5 years working with Scott Fraser and Jerry Pine, building gadgets and learning techniques to make my dream possible. I am now recruiting a post-doc and a computer software/hardware person to help me out on this project. Please spread the word!

-Steve Potter
spotter at gg.caltech.edu

___________Please Distribute to Interested Parties_________________

I have two positions available immediately for:
1. Neural Interface Programmer
2. Postdoc in Neural Coding

See the hypertext version of this announcement at: http://www.caltech.edu/~pinelab/jobs.html

New Neuroscience Technology for Studying Learning in Vitro: Multi-electrode and Imaging Analysis of Cultured Networks

I received a 4-year RO1 grant (1 RO1 NS38628-01) from the National Institute of Neurological Disorders and Stroke at the National Institutes of Health. We are developing a two-way link between a network of cultured neurons and a computer, capable of stimulating and recording brain cell activity continuously for weeks.
We are interested in observing the process of self-organization in the cultured neural network, using multi-electrode arrays, optical recording using voltage-sensitive dyes, and 2-photon laser-scanning microscopy. We are carrying out this cutting-edge interdisciplinary project within the labs of Prof. Jerry Pine and Prof. Scott Fraser.

Pine lab: http://www.caltech.edu/~pinelab/pinelab.html
Fraser lab: http://www.its.caltech.edu/~fraslab/

More details about the jobs can be found at the web address at the top. Please send or email me your resume or CV:

Steve M. Potter, PhD
Principal Investigator
Senior Research Fellow
spotter at gg.caltech.edu
156-29 Biology
California Institute of Technology
Pasadena, CA 91125

___________Please Distribute to Interested Parties_________________

From school at cogs.nbu.acad.bg Wed Apr 7 07:23:43 1999
From: school at cogs.nbu.acad.bg (CogSci Summer School)
Date: Wed, 7 Apr 1999 14:23:43 +0300
Subject: CogSci 99 deadline approaches
Message-ID:

6th International Summer School in Cognitive Science
Sofia, New Bulgarian University
July 12 - 31, 1999

International Advisory Board
Elizabeth BATES (University of California at San Diego, USA)
Amedeo CAPPELLI (CNR, Pisa, Italy)
Cristiano CASTELFRANCHI (CNR, Roma, Italy)
Daniel DENNETT (Tufts University, Medford, Massachusetts, USA)
Ennio De RENZI (University of Modena, Italy)
Charles DE WEERT (University of Nijmegen, Holland)
Christian FREKSA (Hamburg University, Germany)
Dedre GENTNER (Northwestern University, Evanston, Illinois, USA)
Christopher HABEL (Hamburg University, Germany)
William HIRST (New School for Social Sciences, NY, USA)
Joachim HOHNSBEIN (Dortmund University, Germany)
Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA)
Keith HOLYOAK (University of California at Los Angeles, USA)
Mark KEANE (Trinity College, Dublin, Ireland)
Alan LESGOLD (University of Pittsburgh, Pennsylvania, USA)
Willem LEVELT (Max-Planck Institute of Psycholinguistics, Nijmegen, Holland)
David RUMELHART (Stanford University, California, USA)
Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA)
Paul SMOLENSKY (University of Colorado, Boulder, USA)
Chris THORNTON (University of Sussex, Brighton, England)
Carlo UMILTA' (University of Padova, Italy)
Eran ZAIDEL (University of California at Los Angeles, USA)

Courses
Each participant will enroll in 6 of the 10 courses offered, thus attending 4 hours of classes per day plus 2 hours of tutorials in small groups, plus individual studies and participation in symposia.

Brain and Language: New Approaches to Evolution and Development (Elizabeth Bates, Univ. of California at San Diego, USA)
Child Language Acquisition (Michael Tomasello, MPI for Evolutionary Anthropology, Germany)
Culture and Cognition (Roy D'Andrade, Univ. of California at San Diego, USA)
Understanding Social Dependence and Cooperation (Cristiano Castelfranchi, CNR, Italy)
Models of Human Memory (Richard Shiffrin, Indiana University, USA)
Categorization and Inductive Reasoning: Psychological and Computational Approaches (Evan Heit, Univ. of Warwick, UK)
Understanding Human Thinking (Boicho Kokinov, New Bulgarian University)
Perception-Based Spatial Reasoning (Reinhard Moratz, Hamburg University, Germany)
Perception (Naum Yakimoff, New Bulgarian University)
Applying Cognitive Science to Instruction (John Hayes, Carnegie-Mellon University, USA)

In addition there will be seminars, working groups, project work, and discussions.
Participation

Participants will be selected by a Selection Committee on the basis of their submitted documents: application form, CV, statement of purpose, copy of diploma (if a student, academic transcript), letter of recommendation, list of publications (if any) and a short summary of up to three of them. For participants from Central and Eastern Europe as well as from the former Soviet Union there are scholarships available (provided by Soros' Open Society Institute). They cover tuition, travel, and living expenses.

Deadline for application: April 15th
Notification of acceptance: April 30th.
Apply as soon as possible since the number of participants is restricted.

For more information contact:
Summer School in Cognitive Science
Central and East European Center for Cognitive Science
New Bulgarian University
21, Montevideo Str.
Sofia 1635, Bulgaria
Tel. (+3592) 957-1876
Fax: (+3592) 558262
e-mail: school at cogs.nbu.acad.bg
Web page: http://www.nbu.acad.bg/staff/cogs/events/ss99.html

From rsun at research.nj.nec.com Wed Apr 7 13:16:36 1999
From: rsun at research.nj.nec.com (Ron Sun)
Date: Wed, 7 Apr 1999 13:16:36 -0400
Subject: three technical reports related to reinforcement learning
Message-ID: <199904071716.NAA29383@pc-rsun.nj.nec.com>

Announcing three technical reports (concerning enhancing reinforcement learners, either in terms of improving their learning processes by dividing up the space or sequence, or in terms of knowledge extraction from outcomes of reinforcement learning).

---------------------------------
Learning Plans without a priori Knowledge
by Ron Sun and Chad Sessions
http://cs.ua.edu/~rsun/sun.plan.ps

ABSTRACT
This paper is concerned with autonomous learning of plans in probabilistic domains without a priori domain-specific knowledge. Different from existing reinforcement learning algorithms that generate only reactive plans and existing probabilistic planning algorithms that require a substantial amount of a priori knowledge in order to plan, a two-stage bottom-up process is devised, in which first reinforcement learning is applied, without the use of a priori domain-specific knowledge, to acquire a reactive plan, and then explicit plans are extracted from the reactive plan. Several options in plan extraction are examined, each of which is based on beam search that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning. Some completeness and soundness results are given. Examples in several domains are discussed that together demonstrate the working of the proposed model.

A shortened version appeared in: Proc. 1998 International Symposium on Intelligent Data Engineering and Learning, October, 1998. Springer-Verlag.

---------------------------------
Multi-Agent Reinforcement Learning: Weighting and Partitioning
by Ron Sun and Todd Peterson
http://cs.ua.edu/~rsun/sun.NN99.ps

ABSTRACT:
This paper addresses weighting and partitioning, in complex reinforcement learning tasks, with the aim of facilitating learning. The paper presents some ideas regarding weighting of multiple agents and extends them into partitioning an input/state space into multiple regions with differential weighting in these regions, to exploit differential characteristics of regions and differential characteristics of agents to reduce the learning complexity of agents (and their function approximators) and thus to facilitate the learning overall.
It analyzes, in reinforcement learning tasks, different ways of partitioning a task and using agents selectively based on partitioning. Based on the analysis, some heuristic methods are described and experimentally tested. We find that some off-line heuristic methods performed the best, significantly better than single-agent models. To appear in: Neural Networks, in press. A shortened version appeared in Proc. of IJCNN'99 --------------------------------- Self-Segmentation of Sequences: Automatic Formation of Hierarchies of Sequential Behaviors by Ron Sun and Chad Sessions http://cs.ua.edu/~rsun/sun.sss.ps ABSTRACT The paper presents an approach for hierarchical reinforcement learning that does not rely on a priori domain-specific knowledge regarding hierarchical structures. Thus this work deals with a more difficult problem compared with existing work. It involves learning to segment action sequences to create hierarchical structures, based on reinforcement received during task execution, with different levels of control communicating with each other through sharing reinforcement estimates obtained by each other. The algorithm segments action sequences to reduce non-Markovian temporal dependencies, and seeks out proper configurations of long- and short-range dependencies, to facilitate the learning of the overall task. Developing hierarchies also facilitates the extraction of explicit hierarchical plans. The initial experiments demonstrate the promise of the approach. A shortened version of this report appeared in Proc. IJCNN'99. Washington, DC. --------------------------------- Dr. Ron Sun NEC Research Institute 4 Independence Way Princeton, NJ 08540 phone: 609-520-1550 fax: 609-951-2483 email: rsun at research.nj.nec.com (July 1st, 1998 -- July 1st, 1999) ----------------------------------------- Prof. Ron Sun http://cs.ua.edu/~rsun Department of Computer Science and Department of Psychology phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu From ascoli at osf1.gmu.edu Wed Apr 7 15:52:36 1999 From: ascoli at osf1.gmu.edu (GIORGIO ASCOLI) Date: Wed, 7 Apr 1999 15:52:36 -0400 (EDT) Subject: positions available Message-ID: Please post, distribute, and circulate as you see fit. **************************************************** * * * POSITIONS AVAILABLE: * * (1) Computational Neuroscience Post-Doc * * (2) Computer programmer for Neuroscience project * * * **************************************************** 1) COMPUTATIONAL NEUROSCIENCE POST-DOCTORAL POSITION AVAILABLE A post-doctoral position is available immediately for computational modeling of dendritic morphology, neuronal connectivity, and development of anatomically and physiologically accurate neural networks. All highly motivated candidates with a recent PhD in biology, computer science, physics, or other areas related to Neuroscience (including MD or engineering degree) are encouraged to apply. C programming skills and/or experience with GENESIS or other modeling packages are desirable but not necessary. Post-doc will join a young and dynamic research group at the Krasnow Institute for Advanced Study, located in Fairfax, VA (<20 miles west of Washington DC). The initial research project is focused on (1) the generation of complete neurons in virtual reality that reproduce accurately the experimental morphological data; and (2) the study of the influence of dendritic shape (geometry and topology) on the electrophysiological behavior. 
We have developed advanced software to build network models of entire regions of the brain (e.g. the rat hippocampus), and the successful candidate will also work on this aspect of the research. The post-doc will be hired as a Research Assistant Professor (with VA state employee benefits) with a salary based on the NIH postdoctoral scale, and will have a private office, a new computer, and full-time access to a Silicon Graphics server and consoles.

Send a CV, (p)reprints, a brief description of your motivation, and the names, email addresses and phone/fax numbers of references to: ascoli at gmu.edu (or by fax at the number below) ASAP. There is no deadline but the position will be filled as soon as a suitable candidate is found. Non-resident aliens are also welcome to apply. The Krasnow Institute is an equal opportunity employer.

Giorgio Ascoli, PhD
Krasnow Institute for Advanced Study
at George Mason University, MS2A1
Fairfax, VA 22030
Ph. (703)993-4383
Fax (703)993-4325

2) PROGRAMMER POSITION AVAILABLE FOR NEUROSCIENCE PROJECT

A position for a junior scientific programmer is available immediately to work on a research project aimed at the virtual construction of portions of the brain. Candidates must have strong C (C++ a plus) programming skills and software development experience in both Windows and Unix environments. No background in neuroscience is required, but interest and motivation are highly desirable. All undergraduate/graduate students as well as BA's, BS's, and MS's are encouraged to apply. The position can be either part-time or full-time, and the schedule is extremely flexible.

The programmer will join a young and dynamic research group at the Krasnow Institute for Advanced Study, located in Fairfax, VA (<20 miles west of Washington DC). The successful candidate will work with the principal investigator (Dr. Ascoli) on the implementation of algorithms to generate neuronal structures in 3D according to known and novel anatomical rules. In addition, routines will be developed to measure geometrical and topological parameters from the tree-like branching structures. All the executables (and some of the code, at the discretion of the principal investigator) will be publicly distributed, and the programmer will be given full intellectual credit in both software distribution and scientific publications. The programmer will be hired as a Research Instructor (with VA state employee benefits) with a salary proportional to experience and skills, and will have a new PC and full-time access to a Silicon Graphics server and O2 consoles.

Send a CV, a brief description of your motivation, and the names, email addresses and phone/fax numbers of references to: ascoli at gmu.edu (or by fax at the number below) ASAP. There is no deadline but the position will be filled as soon as a suitable candidate is found. Non-resident aliens are also welcome to apply. The Krasnow Institute is an equal opportunity employer.

Giorgio Ascoli, PhD Krasnow Institute for Advanced Study at George Mason University, MS2A1 Fairfax, VA 22030 Ph.
(703)993-4383 Fax (703)993-4325 From harnad at coglit.ecs.soton.ac.uk Wed Apr 7 16:15:45 1999 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Wed, 7 Apr 1999 21:15:45 +0100 (BST) Subject: Pavlovian Feed-Forward Mechanisms: BBS Call for Commentators Message-ID: Below is the abstract of a forthcoming BBS target article *** please see also 5 important announcements about new BBS policies and address change at the bottom of this message) *** PAVLOVIAN FEED-FORWARD MECHANISMS IN THE CONTROL OF SOCIAL BEHAVIOR by Michael Domjan, Brian Cusato, & Ronald Villarreal This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by April 8th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. _____________________________________________________________ PAVLOVIAN FEED-FORWARD MECHANISMS IN THE CONTROL OF SOCIAL BEHAVIOR Michael Domjan, Brian Cusato, & Ronald Villarreal Department of Psychology University of Texas Austin, Texas 78712 U.S.A. Tel: 512-471-7702 Fax: 512-471-6175 Domjan at psy.utexas.edu ABSTRACT: The conceptual and investigative tools that are brought to bear on the analysis of social behavior are expanded by integrating biological theory, control systems theory, and Pavlovian conditioning. Biological theory has focused on the costs and benefits of social behavior from ecological and evolutionary perspectives. In contrast, control systems theory is concerned with how machines achieve a particular goal or purpose. The accurate operation of a system often requires feed-forward mechanisms that adjust system performance in anticipation of future inputs. Pavlovian conditioning is ideally suited to serve this function in behavioral systems. Pavlovian mechanisms have been demonstrated in various aspects of sexual behavior, maternal lactation, and infant suckling. Pavlovian conditioning of agonistic behavior has been also reported, and Pavlovian processes may be similarly involved in social play and social grooming. 
In addition, several lines of evidence indicate that Pavlovian conditioning can increase the efficiency and effectiveness of social interactions, thereby improving the cost/benefit ratio. The proposed integrative approach serves to extend Pavlovian concepts beyond the traditional domain of discrete secretory and other physiological reflexes to complex real-world behavioral interactions and helps apply abstract laboratory analyses of the mechanisms of associative learning to the daily challenges animals face as they interact with one another in their natural environment. KEYWORDS: social behavior, biological theory, control theory, feed-forward mechanisms, learning theory, Pavlovian conditioning, aggression, sexual behavior, nursing and lactation, social play, social grooming ____________________________________________________________ To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.domjan.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.domjan ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.domjan To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.domjan When you have the file(s) you want, type: quit ____________________________________________________________ *** FIVE IMPORTANT ANNOUNCEMENTS *** ------------------------------------------------------------------ (1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm --------------------------------------------------------------------- (2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers (on their Home-Servers as well as) on CogPrints: http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone. --------------------------------------------------------------------- (3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. 
Upon acceptance, preprints of final drafts are moved to the public BBS Archive:
ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html
http://www.cogsci.soton.ac.uk/bbs/Archive/

--------------------------------------------------------------------
(4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. Authors are invited to submit papers to:
Email: bbs at cogsci.soton.ac.uk
Web: http://cogprints.soton.ac.uk
http://bbs.cogsci.soton.ac.uk/

INSTRUCTIONS FOR AUTHORS:
http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html
http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html

---------------------------------------------------------------------
(5) Call for Book Nominations for BBS Multiple Book Review

In the past, Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!).

From wahba at stat.wisc.edu Wed Apr 7 19:19:17 1999
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Wed, 7 Apr 1999 18:19:17 -0500 (CDT)
Subject: SVM's and the GACV
Message-ID: <199904072319.SAA09222@hera.stat.wisc.edu>

The following paper was the basis for my NIPS*98 Large Margin Classifier Workshop talk. It is now available as University of Wisconsin-Madison Statistics Dept TR1006 at http://www.stat.wisc.edu/~wahba -> TRLIST

..................................................
Generalized Approximate Cross Validation For Support Vector Machines, or, Another Way to Look at Margin-Like Quantities.
Grace Wahba, Yi Lin and Hao Zhang.

Abstract

We first review the steps connecting the Support Vector Machine (SVM) paradigm in reproducing kernel Hilbert space to the (dual) mathematical programming problem traditional in SVM classification problems. We then review the Generalized Comparative Kullback-Leibler Distance (GCKL) for the SVM paradigm and observe that it is trivially a simple upper bound on the expected misclassification rate. Next we revisit the Generalized Approximate Cross Validation (GACV) as a computable proxy for the GCKL, as a function of certain tuning parameters in SVM kernels. We have found a justifiable (new) approximation for the GACV which is readily computed exactly along with the SVM solution to the dual mathematical programming problem. This GACV turns out, interestingly but not surprisingly, to be simply related to what several authors have identified as the (observed) VC dimension of the estimated SVM. Some preliminary simulations in a special case are suggestive of the fact that the minimizer of the GACV is in fact a good estimate of the minimizer of the GCKL, although further simulation and theoretical studies are warranted.
It is hoped that this preliminary work will lead to a better understanding of `tuning' issues in the optimization of SVM's and related classifiers.
.................................................

From harnad at coglit.ecs.soton.ac.uk Thu Apr 8 08:09:07 1999
From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad)
Date: Thu, 8 Apr 1999 13:09:07 +0100 (BST)
Subject: CogPrints: Archive of Articles in Psychology, Neuroscience, etc.
Message-ID:

CogPrints Author Archive

To all biobehavioral, neural and cognitive scientists: You are invited to archive all your preprints and reprints in the CogPrints electronic archive: http://cogprints.soton.ac.uk

There have been some very important developments in the area of Web archiving of scientific papers recently. Please see:
Science: http://www.cogsci.soton.ac.uk/~harnad/science.html
Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html
American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html
Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm

The CogPrints Archive covers all the Cognitive Sciences: Psychology, Neuroscience, Biology, Computer Science, Linguistics and Philosophy. CogPrints is completely free for everyone, both authors and readers, thanks to a subsidy from the Electronic Libraries Programme of the Joint Information Systems of the United Kingdom and the collaboration of the NSF/DOE-supported Physics Eprint Archive at Los Alamos.

CogPrints has recently been opened for public automatic archiving. This means authors can now deposit their own papers automatically. The first wave of papers had been invited and hand-archived by CogPrints in order to set a model of the form and content of CogPrints.

To see the current holdings: http://cogprints.soton.ac.uk/
To archive your own papers automatically: http://cogprints.soton.ac.uk/author.html

All authors are encouraged to archive their papers on their home servers as well. For further information: admin at coglit.soton.ac.uk

--------------------------------------------------------------------
BACKGROUND INFORMATION
(No need to read if you wish to proceed directly to the Archive.)

The objective of CogPrints is to emulate in the cognitive, neural and biobehavioral sciences the remarkable success of the NSF/DOE-subsidised Physics Eprint Archive at Los Alamos:
http://xxx.lanl.gov (US)
http://xxx.soton.ac.uk (UK)

The Physics Eprint Archive now makes available, free for all, well over half of the annual physics periodical literature, with its annual growth strongly suggesting that it will not be long before it becomes the locus classicus for all of the literature in Physics. 25,000 new papers are being deposited annually and there are over 35,000 users daily and 15 mirror sites worldwide. (Daily statistics: http://xxx.lanl.gov/cgi-bin/todays_stats)

What this means is that anyone in the world with access to the Internet (and that number too is rising at a breath-taking rate, and already includes all academics, researchers and students in the West, and an increasing proportion in the Third World as well) can now search and retrieve virtually all current work in, for example, High Energy Physics, much of it retroactive to 1990 when the Physics archive was founded by Paul Ginsparg, who must certainly be credited by historians with having launched this revolution in scientific and scholarly publication (www-admin at xxx.lanl.gov).

Does this mean that learned journals will disappear? Not at all.
They will continue to play their traditional role of validating research through peer review, but this function will be an "overlay" on the electronic archives. The literature that is still in the form of unrefereed preprints and technical reports will be classified as such, to distinguish it from the refereed literature, which will be tagged with the imprimatur of the journal that refereed and accepted it for publication, as it always has been. It will no longer be necessary for publishers to recover (and research libraries to pay) the substantial costs of producing and distributing paper through ever-higher library subscription prices: Instead, it will be the beneficiaries of the global, unimpeded access to the learned research literature -- the funders of the research and the employers of the researcher -- who will cover the much reduced costs of implementing peer review, editing, and archiving in the electronic medium alone, in the form of minimal page-charges, in exchange for instant, permanent, worldwide access to the research literature for all, for free. If this arrangement strikes you as anomalous, consider that the real anomaly was that the authors of the scientific and scholarly periodical research literature, who, unlike trade authors, never got (or expected) royalties for the sale of their texts -- on the contrary, so important was it to them that their work should reach all potentially interested fellow-researchers that they had long been willing to pay for the printing and mailing of preprints and reprints to those who requested them -- nevertheless had to consent to have access to their work restricted to those who paid for it. This Faustian bargain was unavoidable in the Gutenberg age, because of the need to recover the high cost of producing and disseminating print on paper, but Paul Ginsparg has shown the way to launch the entire learned periodical literature into the PostGutenberg Galaxy, in which scientists and scholars can publish their work in the form of "skywriting": visible and available for free to all. -------------------------------------------------------------------- Stevan Harnad harnad at cogsci.soton.ac.uk Professor of Psychology harnad at princeton.edu Director, phone: +44 1703 592582 Cognitive Sciences Centre fax: +44 1703 594597 Department of Psychology http://www.cogsci.soton.ac.uk/~harnad/ University of Southampton http://www.princeton.edu/~harnad/ Highfield, Southampton ftp://ftp.princeton.edu/pub/harnad/ SO17 1BJ UNITED KINGDOM ftp://cogsci.soton.ac.uk/pub/harnad/ See: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm From shastri at ICSI.Berkeley.EDU Thu Apr 8 21:00:06 1999 From: shastri at ICSI.Berkeley.EDU (Lokendra Shastri) Date: Thu, 08 Apr 1999 18:00:06 PDT Subject: A Biological Grounding of Recruitment Learning and Vicinal Algorithms Message-ID: <199904090100.SAA13020@lassi.ICSI.Berkeley.EDU> Dear Connectionists, The following report may be of interest to you. Best wishes. 
-- Lokendra Shastri http://www.icsi.berkeley.edu/~shastri/psfiles/tr-99-009.ps.gz ------------------------------------------------------------------------------ A Biological Grounding of Recruitment Learning and Vicinal Algorithms Lokendra Shastri International Computer Science Institute Berkeley, CA 94704 TR-99-009 April, 1999 Biological neural networks are capable of gradual learning based on observing a large number of exemplars over time as well as rapidly memorizing specific events as a result of a single exposure. The primary focus of research in connectionist modeling has been on gradual learning, but some researchers have also attempted the computational modeling of rapid (one-shot) learning within a framework described variably as recruitment learning and vicinal algorithms. While general arguments for the neural plausibility of recruitment learning and vicinal algorithms based on notions of neural plasticity have been presented in the past, a specific neural correlate of such learning has not been proposed. Here it is shown that recruitment learning and vicinal algorithms can be firmly grounded in the biological phenomena of long-term potentiation (LTP) and long-term depression (LTD). Toward this end, a computational abstraction of LTP and LTD is presented, and an ``algorithm'' for the recruitment of binding-detector cells is described and evaluated using biologically realistic data. It is shown that binding-detector cells of distinct bindings exhibit low levels of cross-talk even when the bindings overlap. In the proposed grounding, the specification of a vicinal algorithm amounts to specifying an appropriate network architecture and suitable parameter values for the induction of LTP and LTD. KEYWORDS: one-shot learning; memorization; recruitment learning; dynamic bindings; long-term potentiation; binding detection. From harnad at coglit.ecs.soton.ac.uk Fri Apr 9 13:37:32 1999 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Fri, 9 Apr 1999 18:37:32 +0100 (BST) Subject: EEG AND NEOCORTICAL FUNCTION: BBS Call for Commentators Message-ID: Below is the abstract of a forthcoming BBS target article: NEOCORTICAL DYNAMIC FUNCTION AND EEG by Paul L. Nunez This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by May 14th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. _____________________________________________________________ TOWARD A QUANTITATIVE DESCRIPTION OF LARGE SCALE NEOCORTICAL DYNAMIC FUNCTION AND EEG. Paul L. Nunez Permanent Address: Brain Physics Group, Dept. of Biomedical Engineering, Tulane University, New Orleans, Louisiana 70118 pnunez at mailhost.tcs.tulane.edu Temporary Address (6/98 - 6/00): Brain Sciences Institute, Swinburne University of Technology, 400 Burwood Road, Melbourne, Victoria 3122, Australia pnunez at mind.scan.swin.edu.au ABSTRACT: A conceptual framework for large-scale neocortical dynamic behavior is proposed. It is sufficiently general to embrace brain theories applied to different experimental designs, spatial scales and brain states. This framework, based on the work of many scientists, is constructed from anatomical, physiological and EEG data. Neocortical dynamics and correlated behavioral/cognitive brain states are viewed in the context of partly distinct, but interacting local (regionally specific) processes and globally coherent dynamics. Local and regional processes (eg, neural networks) are enabled by functional segregation; global processes are facilitated by functional integration. Global processes can also facilitate synchronous activity in remote cell groups (top down) which function simultaneously at several different spatial scales. At the same time, local processes may help drive (bottom up) macroscopic global dynamics observed with EEG (or MEG). A specific, physiologically based local/global dynamic theory is outlined in the context of this general conceptual framework. It is consistent with a body of EEG data and fits naturally within the proposed conceptual framework. The theory is incomplete since its physiological control parameters are known only approximately. Thus, brain state-dependent contributions of local versus global dynamics cannot be predicted. It is also neutral on properties of neural networks, assumed to be embedded within macroscopic fields. Nevertheless, the purely global part of the theory makes qualitative, and in a few cases, semi-quantitative predictions of the outcomes of several disparate EEG studies in which global contributions to the dynamics appear substantial. Experimental data are used to obtain a variety of measures of traveling and standing wave phenomena, predicted by the pure global theory. The more general local/global theory is also proposed as a "meta-theory," a suggestion of what large-scale quantitative theories of neocortical dynamics may be like when more accurate treatment of local and non-linear effects is achieved. In the proposed local/global theory, the dynamics of excitatory and inhibitory synaptic action fields are described. EEG and MEG are believed to provide large-scale estimates of modulation of these synaptic fields about background levels. Brain state is determined by neuromodulatory control parameters. Some states are dominated by local cell groups, in which EEG frequencies are due to local feedback gains and rise and decay times of post-synaptic potentials. Local frequencies vary with brain location. 
Other states are strongly global, with multiple, closely spaced EEG frequencies, but identical at each cortical location. Coherence at these frequencies is high over large distances. The global mode frequencies are due to a combination of delays in cortico-cortical axons and neocortical boundary conditions. Many states involve dynamic interactions between local networks and the global system, in which case observed EEG frequencies may involve "matching" of local resonant frequencies with one or more of the global frequencies. KEYWORDS: EEG, neocortical dynamics, standing waves, functional integration, spatial scale, binding problem, synchronization, coherence, cell assemblies, limit cycles, pacemakers ____________________________________________________________ To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.nunez.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.nunez ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.nunez *** FIVE IMPORTANT ANNOUNCEMENTS *** ------------------------------------------------------------------ (1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm --------------------------------------------------------------------- (2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers (on their Home-Servers as well as) on CogPrints: http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone. --------------------------------------------------------------------- (3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. Upon acceptance, preprints of final drafts are moved to the public BBS Archive: ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html http://www.cogsci.soton.ac.uk/bbs/Archive/ -------------------------------------------------------------------- (4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures.
Authors are invited to submit papers to: Email: bbs at cogsci.soton.ac.uk Web: http://cogprints.soton.ac.uk http://bbs.cogsci.soton.ac.uk/ INSTRUCTIONS FOR AUTHORS: http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html --------------------------------------------------------------------- (5) Call for Book Nominations for BBS Multiple Book Review In the past, the Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). From bern at cs.umass.edu Fri Apr 9 17:33:18 1999 From: bern at cs.umass.edu (Dan Bernstein) Date: Fri, 09 Apr 1999 17:33:18 -0400 Subject: A tech report on transfer of solutions across multiple RL tasks Message-ID: <199904092133.RAA29163@ganymede.cs.umass.edu> Announcing a technical report related to solving multiple RL tasks: http://www-anw.cs.umass.edu/~bern/publications/reuse_tech.ps -------------------------------------------------------------------------- Daniel S. Bernstein Adaptive Networks Lab Department of Computer Science University of Massachusetts, Amherst TR-1999-26 April, 1999 We consider the reuse of policies for previous MDPs in learning on a new MDP, under the assumption that the vector of parameters of each MDP is drawn from a fixed probability distribution. We use the options framework, in which an option consists of a set of initiation states, a policy, and a termination condition. We use an option called a \emph{reuse option}, for which the set of initiation states is the set of all states, the policy is a combination of policies from the old MDPs, and the termination condition is based on the number of time steps since the option was initiated. Given policies for $m$ of the MDPs from the distribution, we construct reuse options from the policies and compare performance on an $m+1$st MDP both with and without various reuse options. We find that reuse options can speed initial learning of the $m+1$st task. We also present a distribution of MDPs for which reuse options can slow initial learning. We discuss reasons for this and suggest other ways to design reuse options. Keywords: reinforcement learning, Markov decision processes, options, learning to learn ---------------------------------------------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Daniel S.
Bernstein URL: http://www-anw.cs.umass.edu/~bern Department of Computer Science EMAIL: bern at cs.umass.edu University of Massachusetts PHONE: (413)545-1596 [office] Amherst, MA 01003 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From C.Campbell at bristol.ac.uk Sat Apr 10 04:46:09 1999 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Sat, 10 Apr 1999 09:46:09 +0100 (BST) Subject: New Preprint Server (SVMs, COLT, etc) Message-ID: <199904100846.JAA23589@zeus.bris.ac.uk> NEW PREPRINT SERVER We have started up a new preprint server at: http://lara.enm.bris.ac.uk/cig/pubs_nf.htm This contains 16 recent preprints with the titles and authorship listed below. The server would mainly be of interest to researchers in the area of support vector machines and computational learning theory. There are also some further papers on the application of machine learning techniques to medical decision support. Enjoy! Colin Campbell (Bristol University) _________________________________
1. Data Dependent Structural Risk Minimization for Perceptron Decision Trees. John Shawe-Taylor and Nello Cristianini.
2. Bayesian Voting Schemes and Large Margin Classifiers. Nello Cristianini and John Shawe-Taylor.
3. Bayesian Classifiers are Large Margin Hyperplanes in a Hilbert Space. Nello Cristianini, John Shawe-Taylor and Peter Sykacek, in Shavlik J. (editor)
4. Bayesian Classifiers are Large Margin Hyperplanes in a Hilbert Space. Nello Cristianini, John Shawe-Taylor and Peter Sykacek.
5. Margin Distribution Bounds on Generalisation. John Shawe-Taylor and Nello Cristianini.
6. Robust Bounds on Generalisation from the Margin Distribution. John Shawe-Taylor and Nello Cristianini.
7. Simple Training Algorithms for Support Vector Machines. Colin Campbell and Nello Cristianini.
8. The Kernel-Adatron: a Fast and Simple Learning Procedure for Support Vector Machines. Thilo Friess, Nello Cristianini and Colin Campbell.
9. Large Margin Classification Using the Kernel Adatron Algorithm. Colin Campbell, Thilo Friess and Nello Cristianini.
10. Dynamically Adapting Kernels in Support Vector Machines. Nello Cristianini, Colin Campbell and John Shawe-Taylor.
11. Multiplicative Updatings for Support Vector Machines. Nello Cristianini, Colin Campbell and John Shawe-Taylor.
12. Enlarging the Margin in Perceptron Decision Trees. Kristin Bennett, Nello Cristianini, John Shawe-Taylor and Donghui Wu.
13. Large Margin Decision Trees for Induction and Transduction. Donghui Wu, Kristin Bennett, Nello Cristianini and John Shawe-Taylor.
14. Bayes Point Machines: Estimating the Bayes Point in Kernel Space. Ralf Herbrich, Thore Graepel and Colin Campbell.
15. Bayesian Learning in Reproducing Kernel Hilbert Spaces: The Usefulness of the Bayes Point. Ralf Herbrich, Thore Graepel and Colin Campbell.
16. Further Results on the Margin Distribution. John Shawe-Taylor and Nello Cristianini.
From Marco.Budinich at trieste.infn.it Sat Apr 10 07:54:19 1999 From: Marco.Budinich at trieste.infn.it (Marco Budinich (tel. +39-040-676 3391)) Date: Sat, 10 Apr 1999 13:54:19 +0200 Subject: Paper available Message-ID: Dear Colleagues, the following paper, to appear in official form in Neural Computation, is presently available at: http://www.ts.infn.it/~mbh/PubABS.html#Nonu_corr All comments are most welcome.
All the best, Marco Budinich ---------------------------------------------------------------- Adaptive Calibration of Imaging Array Detectors Marco Budinich and Renato Frison Physics Department - University of Trieste, Italy (to appear in: Neural Computation) Abstract - In this paper we present two methods for non-uniformity correction of imaging array detectors based on neural networks, both of which exploit image properties to compensate for the lack of calibration while maximizing the entropy of the output. The first method uses a self-organizing net that produces a linear correction of the raw data with coefficients that adapt continuously. The second method employs a kind of contrast equalization curve to match pixel distributions. Our work originates from silicon detectors, but the treatment is general enough to be applicable to many kinds of array detectors like those used in infrared imaging or in high energy physics. +-----------------------------------------------------------------+ | Marco Budinich | | Dipartimento di Fisica Tel.: +39 040 676 3391 | | Via Valerio 2 Fax.: +39 040 676 3350 | | 34127 Trieste ITALY e-mail: mbh at trieste.infn.it | | | | www: http://www.ts.infn.it/~mbh/MBHgeneral.html | +-----------------------------------------------------------------+ From jfgf at eng.cam.ac.uk Mon Apr 12 06:46:00 1999 From: jfgf at eng.cam.ac.uk (J.F. Gomes De Freitas) Date: Mon, 12 Apr 1999 11:46:00 +0100 (BST) Subject: Software + Papers Message-ID: Dear colleagues, You can find the following papers: 1 Sequential Monte Carlo Methods for Optimisation of Neural Network Models (a similar version to appear in Neural Computation). 2 Nonlinear State Space Estimation with Neural Networks and the EM algorithm (possibly to appear in a special issue of VLSI Signal Processing Systems). and the Matlab software at my Cambridge web site: http://svr-www.eng.cam.ac.uk/~jfgf/software.html I'd be grateful for feedback. The abstracts follow: 1 Sequential Monte Carlo Methods for Optimisation of Neural Network Models: We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent/sampling importance resampling algorithm (HySIR). In terms of both computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimisation strategy, which allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving on-line, nonlinear and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts, traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the options prices. 2 Nonlinear State Space Estimation with Neural Networks and the EM algorithm: In this paper, we derive an EM algorithm for nonlinear state space models. We use it to estimate jointly the neural network weights, the model uncertainty and the noise in the data. In the E-step we apply a forward-backward Rauch-Tung-Striebel smoother to compute the network weights. For the M-step, we derive expressions to compute the model uncertainty and the measurement noise. We find that the method is intrinsically very powerful, simple, elegant and stable.
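[Editor's note] For readers who have not met sampling-importance resampling applied to network weights, the short sketch below illustrates the general idea behind the first of the two abstracts above: a population of weight samples is nudged by a gradient step, weighted by the observation likelihood, and resampled. This is only an illustration, not the authors' HySIR code; the toy model, function names and all constants are assumptions chosen for clarity.

import numpy as np

# Toy model: y_t = tanh(w * x_t) + noise.  A particle filter over the single
# weight w, with an optional gradient nudge in the spirit of a hybrid
# gradient/SIR scheme.  All parameter values below are assumptions.
rng = np.random.default_rng(0)

def predict(w, x):
    return np.tanh(w * x)

n_particles, noise_std, lr = 200, 0.1, 0.05
particles = rng.normal(0.0, 1.0, n_particles)   # initial weight samples
true_w = 0.7

for t in range(100):
    x = rng.normal()
    y = np.tanh(true_w * x) + noise_std * rng.normal()

    # optional gradient step on each particle (the "hybrid" ingredient)
    pred = predict(particles, x)
    particles = particles + lr * (y - pred) * (1.0 - pred ** 2) * x

    # importance weights from the observation likelihood, then resample
    logw = -0.5 * ((y - predict(particles, x)) / noise_std) ** 2
    w_norm = np.exp(logw - logw.max())
    w_norm /= w_norm.sum()
    idx = rng.choice(n_particles, size=n_particles, p=w_norm)
    particles = particles[idx] + 0.01 * rng.normal(size=n_particles)  # jitter

print("posterior mean weight ~", particles.mean())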
Best wishes Nando _______________________________________________________________________________ JFG de Freitas (Nando) Speech, Vision and Robotics Group Information Engineering Cambridge University CB2 1PZ England http://svr-www.eng.cam.ac.uk/~jfgf Tel (01223) 302323 (H) (01223) 332754 (W) _______________________________________________________________________________ From michael.j.healy at boeing.com Mon Apr 12 11:08:37 1999 From: michael.j.healy at boeing.com (Michael J. Healy 425-865-3123) Date: Mon, 12 Apr 1999 08:08:37 -0700 Subject: Paper on topological semantics Message-ID: <199904121508.IAA19327@lilith.network-b> This is now out in the latest issue of Connection Science. I have a supply of reprints and will be glad to send one to anyone who is interested: M. J. Healy (1999) "A Topological Semantics for Rule Extraction with Neural Networks", Connection Science, vol. 11, no. 1, pp. 91-113. This is the paper I referred to in the Connectionists on-line discussion on connectionist symbol processing and in the follow-up issue of Neural Computing Surveys. It introduces a mathematical approach to analyzing the semantics of neural networks and their applications. It surveys the point-set-topology version of the approach; a more involved category-theoretic version is intended for a later paper. The main example given in this paper concerns the adequacy of a specific neural network architecture for rule extraction. Items discussed in addition to the mathematical background include issues in learning from data, specialization hierarchies, the concept of a prototype data object, continuous functions and rule-based systems, formal verification, and how the topological approach might be applied to neural network analysis and design. Mike -- =========================================================================== e Michael J. Healy A FA ----------> GA (425)865-3123 | | FAX(425)865-2964 | | Ff | | Gf c/o The Boeing Company | | PO Box 3707 MS 7L-66 \|/ \|/ Seattle, WA 98124-2207 ' ' USA FB ----------> GB -or for priority mail- e "I'm a natural man." 2760 160th Ave SE MS 7L-66 B Bellevue, WA 98008 USA michael.j.healy at boeing.com -or- mjhealy at u.washington.edu ============================================================================ From ingber at ingber.com Wed Apr 14 06:26:56 1999 From: ingber at ingber.com (Lester Ingber) Date: Wed, 14 Apr 1999 05:26:56 -0500 Subject: Trading Research/Programmer Positions Chicago Message-ID: <19990414052656.A11119@ingber.com> Trading Research/Programmer Positions Chicago * Open R&D Positions in Computational Finance/Chicago DRW Investments, LLC, a proprietary trading firm based at the Chicago Mercantile Exchange, with a branch office in London, is expanding its research department. This R&D group works directly with other traders as well as develops its own automated trading capabilities. * Programmers/Analysts -- Full Time At least 2-3 years experience programming in C or C++. Must have excellent background in Math, Physics, or similar disciplines. An advanced programmer/analyst position will be primarily dedicated to developing and coding algorithms for automated trading. An intermediate programmer/analyst position will be primarily dedicated to database management and integrating R&D codes for trader support. Flexible hours in an intense environment. Requires strong commitment to several ongoing projects with shifting priorities. See http://www.ingber.com/ for some papers and code used on current projects.
Please email Lester Ingber a resume regarding this position. * Graduate Students -- Part Time We are looking for part-time graduate students in computational finance who might impact our trading practices. We expect that each graduate student will be a full-time PhD student at a local university, spending approximately 10-20 hours/week at DRW. Projects will focus on publishable results. See http://www.ingber.com/ for some papers on current projects. Please email Lester Ingber a resume regarding these positions. * Trading Clerks -- Full Time We are seeking motivated individuals with a strong work ethic and desire to learn about trading. Our primary focus is on financial futures and options. Responsibilities include actively assisting traders in all aspects of trading. Exposure to math including calculus and statistics is desirable. Please email Jeff Levoff a resume or any questions regarding these positions. * Additional Information We are planning our move to larger offices in our present vicinity of the Chicago exchanges before August 1999. Programmer/analyst and graduate student positions will be filled after our move. * Updates Updates on the status of open DRW positions will be placed in http://www.ingber.com/MISC.DIR/drw_open_positions -- /* Lester Ingber http://www.ingber.com/ ftp://ftp.ingber.com * * ingber at ingber.com ingber at alumni.caltech.edu ingber at drwtrading.com * * PO Box 06440 Wacker Dr PO Sears Tower Chicago IL 60606-0440 */ From omlin at waterbug.cs.sun.ac.za Thu Apr 15 08:56:29 1999 From: omlin at waterbug.cs.sun.ac.za (Christian Omlin) Date: Thu, 15 Apr 1999 14:56:29 +0200 Subject: technical report announcement - rule extraction Message-ID: Dear Connectionists, The following technical report (see abstract attached below) A. Vahed, C.W. Omlin, "Rule Extraction from Recurrent Neural Networks using a Symbolic Machine Learning Algorithm" is available from http://www.cs.sun.ac.za/~omlin/papers/iconip_99.paper.ps.gz This paper contains preliminary results and we welcome any comments you may have. Thank you. Best regards, Christian Christian W. Omlin e-mail: omlin at cs.sun.ac.za Department of Computer Science phone (direct): +27-21-808-4210 University of Stellenbosch phone (secretary): +27-21-808-4232 Private Bag X1 fax: +27-21-808-4416 Stellenbosch 7602 http://www.cs.sun.ac.za/people/staff/omlin SOUTH AFRICA http://www.neci.nj.nec.com/homepages/omlin ------------------------------- cut here -------------------------------- Rule Extraction from Recurrent Neural Networks using a Symbolic Machine Learning Algorithm A. Vahed C.W. Omlin Department of Computer Science Department of Computer Science University of the Western Cape University of Stellenbosch 7535 Bellville 7600 Stellenbosch South Africa South Africa avahed at uwc.ac.za omlin at cs.sun.ac.za This paper addresses the extraction of knowledge from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that network states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics which either limit the number of clusters that may form during training or limit the exploration of the output space of hidden recurrent state neurons. These limitations, while necessary, may lead to decreased fidelity, i.e.
the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed here uses a polynomial-time, symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input/output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge. From dacierno.a at irsip.na.cnr.it Thu Apr 15 09:22:57 1999 From: dacierno.a at irsip.na.cnr.it (Antonio d'Acierno) Date: Thu, 15 Apr 1999 15:22:57 +0200 Subject: New Paper Message-ID: <3715E831.346264AC@irsip.na.cnr.it> [ Reposted due to a computer glitch that truncated the abstract. -- Dave Touretzky, CONNECTIONISTS moderator ] Dear Connectionists, the following paper "Back-Propagation Learning Algorithm and Parallel Computers: The CLEPSYDRA Mapping Scheme" (accepted for publication in Neurocomputing) is available at my web site: http://ultrae.irsip.na.cnr.it/~tonino Abstract This paper deals with the parallel implementation of the back-propagation of errors learning algorithm. To obtain the partitioning of the neural network on the processor network the author describes a new mapping scheme that uses a mixture of synapse parallelism, neuron parallelism and training examples parallelism (if any). The proposed mapping scheme allows the back-propagation algorithm to be described as a collection of SIMD processes, so that both SIMD and MIMD machines can be used. The main feature of the obtained parallel algorithm is the absence of point-to-point communication; in fact, for each training pattern, an all-to-one broadcast with an associative operator (combination) and a one-to-all broadcast (both of which can be realized in log P time) are needed. A performance model is proposed and tested on a ring-connected MIMD parallel computer. Simulation results on MIMD and SIMD parallel machines are also shown and discussed. Keywords: Back-Propagation, Mapping Scheme, MIMD Parallel Computers, SIMD Parallel Computers I welcome any comments and suggestions for improvement! Thank you and regards. -- Antonio d'Acierno IRSIP - CNR via P. Castellino, 111 80131 Napoli Italy tel: + 39 081 5904221 fax: + 39 081 5608330 mobile: 0339 6472723 mailto:dacierno.a at irsip.na.cnr.it mailto:adacierno at yahoo.com http://ultrae.irsip.na.cnr.it/~tonino From tononi at nsi.edu Thu Apr 15 18:42:21 1999 From: tononi at nsi.edu (Giulio Tononi) Date: Thu, 15 Apr 1999 15:42:21 -0700 Subject: Junior Fellow (postdoc) positions, The Neurosciences Institute Message-ID: <000201be8791$39591660$1bb985c6@spud.nsi.edu> THE NEUROSCIENCES INSTITUTE, SAN DIEGO The Neurosciences Institute is an independent, not-for-profit organization at the forefront of research on the brain. Research at the Institute spans levels from the molecular to the behavioral and from the computational to the cognitive. The Institute has a strong tradition in theoretical neurobiology and has recently established new experimental facilities.
The Institute is also the home of the Neurosciences Research Program and serves as an international meeting place for neuroscientists. JUNIOR FELLOW, NEURAL BASIS OF CONSCIOUSNESS. The Institute has a strong tradition in the theoretical and experimental study of consciousness (see Science, 282:1846-1851). Applications are invited for positions as Junior Fellows to collaborate on experimental and theoretical studies of the neural correlates of conscious perception. Applicants should be at the Postdoctoral level with strong backgrounds in cognitive neuroscience, neuroimaging (including MEG, EEG, and fMRI), and theoretical neurobiology. JUNIOR FELLOW IN THEORETICAL NEUROBIOLOGY. Applications are invited for positions as Junior Fellows in Theoretical Neurobiology. Since 1987, the Institute has had a research program dedicated to developing biologically based, experimentally testable theoretical models of neural systems. Current projects include large-scale simulations of neuronal networks and the analysis of functional interactions among brain areas using information-theoretical approaches. Advanced computing facilities are available. Applicants should be at the Postdoctoral level with strong backgrounds in mathematics, statistics, and computer modeling. JUNIOR FELLOW IN EXPERIMENTAL NEUROBIOLOGY. Applications are invited for positions as Junior Fellows in Experimental Neurobiology. A current focus of the Institute is on the action and pharmacological manipulation of neuromodulatory systems with diffuse projections, such as the noradrenergic, serotoninergic, cholinergic, dopaminergic, and histaminergic systems. Another focus is on behavioral state control and the functions of sleep. Applicants should be at the Postdoctoral level with strong backgrounds in the above-mentioned areas. Fellows receive stipends and research support commensurate with qualifications and experience. Positions are now available. Applications for all positions listed should contain a short statement of research interests, a curriculum vitae, and the names of three references and should be sent to: Giulio Tononi, The Neurosciences Institute, 10640 John Jay Hopkins Drive, San Diego, California 92121; Email: tononi at nsi.edu; URL: http://www.nsi.edu. From giro-ci0 at wpmail.paisley.ac.uk Fri Apr 16 10:06:36 1999 From: giro-ci0 at wpmail.paisley.ac.uk (Mark Girolami) Date: Fri, 16 Apr 1999 14:06:36 +0000 Subject: PhD Studentship Available Message-ID: Could you please post the details of this PhD studentship. ---------------------------------------------------------------------------------------- Funded PhD Studentship Available Computational Intelligence Research Unit Department of Computing and Information Systems University of Paisley Since the emergence and explosive growth of the World Wide Web (WWW) there has been a commensurate growth in the availability of online information. Efficient searching and retrieval of relevant information from the WWW has lagged behind this growth and intelligent information retrieval methods are now required as a matter of urgency. This particular challenge is attracting great interest from the machine learning research community and many software companies are closely monitoring the research output. There are two main approaches to online information retrieval, (1) query-based and (2) taxonomic. The query-based approach relies on methods such as search engines, which take a user's query and compare it with an existing document collection to find the most likely match.
The taxonomic approach relies on manual organisation of the information (online documents) into hierarchic categorisations of the document collection. It is accepted that the design of information retrieval systems utilising a number of intelligent computing paradigms may provide the key to improved information access. This project proposes the fusion of both unsupervised and supervised computational models to adaptively build and maintain suitable document hierarchies (an example being Kohonen's WEBSOM) and then rank and classify existing as well as incoming new documents based on user queries (an example being Bayesian networks). Three years of University funding, for a PhD studentship, is available for this project. This project will be carried out in collaboration with a US based software company. Interested candidates should contact:- Dr Mark Girolami Senior Lecturer Computational Intelligence Research Unit University of Paisley High Street, Paisley PA1 2BE Scotland Tel : +44 141 848 3317 Fax: +44 141 848 3542 giro0ci at paisley.ac.uk From psajda at sarnoff.com Mon Apr 19 17:34:38 1999 From: psajda at sarnoff.com (PAUL SAJDA) Date: Mon, 19 Apr 1999 17:34:38 -0400 Subject: Job: Research Position Message-ID: <371BA16D.C3A25033@sarnoff.com> Sarnoff Corporation Position in Adaptive Image and Signal Processing The Adaptive Image and Signal Processing Group at the Sarnoff Corporation has an immediate opening for a researcher (Member, Technical Staff) in the area of speech and signal processing. Specifically, we are seeking an individual with interest/expertise in speech enhancement, microphone arrays, multi-modal interfaces, and machine learning. Responsibilities will include conducting original research, developing intellectual property in the area of adaptive signal processing, and developing relationships with government and commercial customers. Excellent communication skills and willingness to bring in new business is desired. Experience in MATLAB, C and C++ programming and the UNIX/LINUX operating system is a plus. Education level: PhD. in Electrical Engineering, Computer Engineering, Computer Science or related field. send applications to Dr. Paul Sajda Head Adaptive Image & Signal Processing Sarnoff Corporation CN5300 Princeton, NJ 08533-5300 email: psajda at sarnoff.com Sarnoff Corporation Web page: www.sarnoff.com From sutton at research.att.com Tue Apr 20 14:03:29 1999 From: sutton at research.att.com (Rich Sutton) Date: Tue, 20 Apr 1999 13:03:29 -0500 Subject: Workshop on Reinforcement Learning at UMass/Amherst 4/23 Message-ID: Dear Connectionists: FYI, there will be a workshop on reinforcement learning at the University of Massachusetts this friday, to which the public is welcome. Complete information on the schedule, travel information, etc., is available at http://www-anw.cs.umass.edu/nessrl.99. A crude version of the current schedule (it's better on the web) is given below. Hope to see you there. 
Rich Sutton ---------------------------------------------------------------------------- Current Schedule (Time / Speaker / Title)
8:30am  Breakfast (bagels, coffee, etc)
9:00am  Amy McGovern: Welcome
9:05    Invited Speaker: Manuela Veloso (CMU): What to Do Next?: Action Selection in Dynamic Multi-Agent Environments
10:00   Mike Bowling: A Parallel between Multi-agent Reinforcement Learning and Stochastic Game Theory
        Peter Stone and Manuela Veloso (presented by Manuela Veloso): Team-Partitioned Opaque-Transition Reinforcement Learning
        Will Uther: Structural Generalization for Growing Decision Forests
11:10   Break
11:20   Dan Bernstein: Reusing Old Policies to Accelerate Learning on New MDPs
        Bryan Singer: Learning State Features from Policies to Bias Exploration in Reinforcement Learning
        Theodore Perkins and Doina Precup: Using Options for Knowledge Transfer in Reinforcement Learning
12:30   Lunch
2:00    Michael Kearns: Sparse Sampling Methods for Learning and Planning in Large POMDPs
        Satinder Singh: Approximate Planning for Factored POMDPs using Belief State Simplification
        Rich Sutton: Function Approximation in Reinforcement Learning
        Yishay Mansour: Finding a near best strategy from a restricted class of strategies
3:10    Nicolas Meuleau: Learning finite-state controllers for partially-observable environments
        Keith Rogers: Learning using the G-function
3:55    Break
4:05    Doina Precup: Eligibility Traces for Off-Policy Policy Evaluation
        Tom Kalt: An RL approach to statistical natural language parsing
4:50    Andrew Barto: Closing remarks
From hunter at nlm.nih.gov Tue Apr 20 09:12:30 1999 From: hunter at nlm.nih.gov (Larry Hunter) Date: Tue, 20 Apr 1999 09:12:30 -0400 (EDT) Subject: Computational Biology position at National Cancer Institute Message-ID: <199904201312.JAA08307@work.nlm.nih.gov> ** Fellowships in Computational Biology ** ** Section on Molecular Statistics and Bioinformatics ** ** National Cancer Institute ** April 19, 1999 --- Please distribute widely Applications are invited for two anticipated research fellowships in computational approaches to understanding the molecular nature of cancer. One position is specifically dedicated to developing tools and techniques for analyzing gene expression data from the National Cancer Institute's Advanced Technology Center. The other position is more open-ended, intended for researchers interested in any aspect of the development and application of advanced machine learning or statistical techniques in the molecular biology of cancer. The positions will be in the section on Molecular Statistics and Bioinformatics, a recently formed group under the leadership of Dr. Lawrence Hunter. We are machine learning researchers, statisticians, molecular biologists and physicians working to develop computational methods to take advantage of the rapid growth of molecular biological data about cancer, including sequences of oncogenes, gene expression profiles of neoplastic tissues, high throughput screening of anti-tumor compounds, and allelic variation assays such as SNPs. The lab has abundant computational resources, and ready access to all NIH facilities. Candidates should be highly motivated, with excellent programming and writing skills, and a solid understanding of molecular biology. We expect to fill these positions with post-doctoral researchers, but good candidates at more junior or more senior levels may be accommodated. Salaries will be on the NIH scale, commensurate with experience and area of expertise.
To apply, send a CV, one or two (p)reprints, a brief description of your motivations and goals, and the names, email addresses and phone numbers of at least three references to the address below. Email is preferred, but fax or mail applications are also acceptable. Larry Hunter Molecular Statistics and Bioinformatics National Cancer Institute, MS-9105 7550 Wisconsin Ave., Room 3C06 Bethesda, MD 20892-9015 tel: +1 (301) 402-0389 fax: +1 (301) 480-0223 email: lhunter at nih.gov From jagota at cse.ucsc.edu Tue Apr 20 21:59:41 1999 From: jagota at cse.ucsc.edu (Arun Jagota) Date: Tue, 20 Apr 1999 18:59:41 -0700 (PDT) Subject: new survey publication Message-ID: <199904210159.SAA06212@arapaho.cse.ucsc.edu> New refereed e-publication action editor: Ron Sun K. McGarry, S. Wermter and J. MacIntyre, Hybrid neural systems: from simple coupling to fully integrated neural networks, Neural Computing Surveys 2, 62--93, 1999. 102 references. http://www.icsi.berkeley.edu/~jagota/NCS Abstract: This paper describes techniques for integrating neural networks and symbolic components into powerful hybrid systems. Neural networks have unique processing characteristics that enable tasks to be performed that would be difficult or intractable for a symbolic rule-based system. However, a stand-alone neural network requires an interpretation either by a human or a rule-based system. This motivates the integration of neural/symbolic techniques within a hybrid system. A number of integration possibilities exist: some systems consist of neural network components performing symbolic tasks while other systems are composed of several neural networks and symbolic components, each component acting as a self-contained module communicating with the others. Other hybrid systems are able to transform subsymbolic representations into symbolic ones and vice-versa. This paper provides an overview and evaluation of the state of the art of several hybrid neural systems for rule-based processing. From oby at cs.tu-berlin.de Wed Apr 21 09:25:50 1999 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Wed, 21 Apr 1999 15:25:50 +0200 (MET DST) Subject: positions in CNS Message-ID: <199904211325.PAA27254@pollux.cs.tu-berlin.de> Postdoctoral and Graduate Student Positions in Computational Neuroscience Neural Information Processing Group, Department of Computer Science, Technical University of Berlin, Germany One postdoctoral and one graduate student fellowship are available for candidates who are interested in working on computational models of primary visual cortex. The successful candidates may choose among the following topics: - Dynamics of cortical circuits including the role of steppy connections and fast synaptic plasticity. - Representation of information in the visual cortex. - Development of neural circuits and the functional organization of visual cortex (cortical maps). The candidates are expected to join an ongoing collaboration with experimental neurobiologists in the group of Jennifer Lund (Institute of Ophthalmology, University College London). Positions will start in summer / fall 1999. The postdoctoral position will be initially for one year, but an extension is possible. The CS department of the Technical University of Berlin approves students with foreign Masters or Diplom degrees for graduate study when certain standards w.r.t. the subject, grades, and the awarding institution are met. The university also allows for a thesis defense in the English language.
Interested candidates please send their CV, transcripts of their certificates, a short statement of their research interest, and a list of publications to: Prof. Klaus Obermayer FR2-1, NI, Informatik, Technische Universitaet Berlin Franklinstrasse 28/29, 10587 Berlin, Germany phone: ++49-30-314-73120, fax: -73121, email: oby at cs.tu-berlin.de preferably by email. For a list of relevant publications and an overview of current research projects please refer to our web-page at: http://ni.cs.tu-berlin.de/ Klaus ----------------------------------------------------------------------------- Prof. Dr. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Informatik 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/ From mozer at cs.colorado.edu Thu Apr 22 14:33:25 1999 From: mozer at cs.colorado.edu (Mike Mozer) Date: Thu, 22 Apr 99 12:33:25 -0600 Subject: Positions in Boulder, Colorado Message-ID: <199904221833.MAA17147@neuron.cs.colorado.edu> Athene Software, Inc. Positions in Machine Learning, Statistics, and Data Mining Athene Software has immediate openings for machine learning, statistical, and data mining professionals in Boulder, Colorado. We are seeking qualified candidates to develop and enhance models of subscriber behavior for telecommunications companies. Responsibilities will include statistical investigation of large data sets, application of machine learning algorithms, development and tuning of data representations, and presentation of results to internal and external customers. Strong communication skills are very important. A Ph.D. in Statistics, Computer Science, Electrical Engineering, or related field is desirable. Experience with Java and Oracle is helpful. Send applications to: Richard Wolniewicz, Ph.D. VP Engineering Athene Software, Inc. 2060 Broadway, Suite 300 Boulder, CO 80302 email: richard at athenesoft.com www.athenesoft.com From mark.plumbley at kcl.ac.uk Tue Apr 20 19:34:19 1999 From: mark.plumbley at kcl.ac.uk (Mark Plumbley) Date: Wed, 21 Apr 1999 00:34:19 +0100 (BST) Subject: CoIL Competition [neural nets and machine learning -- Ed.] Message-ID: Is your water safe? There is increased concern at the impact man is having on the environment. In temperate climates, summer algae growth can result in poor water clarity, mass deaths of river fish and the closure of recreational water facilities. To understand this problem, there is a need to identify the crucial chemical control variables for the biological processes. This is the subject of the first Computational Intelligence and Learning (CoIL) competition. CoIL is an EC-funded Cluster of Networks of Excellence (NoEs), formed in Jan 1999 as a collaboration between ERUDIT, EvoNet, MLNET and NEuroNet, representing Fuzzy Logic, Evolutionary Computing, Machine Learning, and Neural Computing respectively. While the techniques and paradigms of interest to these networks are largely distinct (and sometimes complementary), these various techniques can often be used to tackle similar problems or be used together on the same problem. This CoIL competition has been organised through ERUDIT, and is open to all interested parties. ERUDIT has had very successful competitions itself in 1996 and 1998, and the results of these illustrated how a variety of different techniques can be used to tackle any problem.
Water quality samples were taken from sites on different European rivers over a period of approximately one year. These samples were analysed for various chemical substances, and algae samples were collected to determine the algae population distributions. While the chemical analysis is cheap and easily automated, the biological part involves microscopic examination, requires trained manpower and is therefore both expensive and slow. The task of the CoIL competition is to predict the algae frequency distributions on the basis of the measured concentrations of the chemical substances and some global information about the season when the sample was taken, the river size and the fluid velocity. The data is a mixture of qualitative and numeric variables, and some of the data is incomplete. The detailed problem description and the data is available from http://www.erudit.de/erudit/committe/fc-ttc/ic-99/index.htm or by ftp from: FTP Server: ftp.mitgmbh.de Username: anonymous Password: Filename: /pub/problem.zip In case of difficulty obtaining the data, contact: ERUDIT Service Center, c/o ELITE Foundation, Promenade 9, 52076 Aachen, Germany. Phone: +49 2408 6969, Fax +49240894582, email: sh at mitgmbh.de A board of referees will declare a winner and a runner-up. The winners will be invited, free of charge, to attend the EUFIT'99 conference to present their solutions during a special session on September 14, 1999 in Aachen, Germany. Important dates: Apr 15, 1999: Data available May 31, 1999: Deadline for submission of solutions Jul 31, 1999: Announcement of results Sep 14, 1999: Award of winners at the EUFIT '99 conference in Aachen Sep 14, 1999: Presentation of the best solutions For general CoIL information, see http://www.dcs.napier.ac.uk/coil/ ##### STOP PRESS ###### KING'S HAS NEW PHONE AND FAX NUMBERS ##### ------------------------------------------------------------------ Dr Mark Plumbley mark.plumbley at kcl.ac.uk |_/ I N G'S Centre for Neural Networks | \ College Department of Electronic Engineering L O N D O N King's College London, Strand, London, WC2R 2LS, UK Founded 1829 Tel: +44 (0)171 848 2241, Fax: +44 (0)171 848 2932 World Wide Web URL: http://www.eee.kcl.ac.uk/~mdp ------------------------------------------------------------------ From p.j.b.hancock at psych.stir.ac.uk Thu Apr 22 06:48:41 1999 From: p.j.b.hancock at psych.stir.ac.uk (Peter Hancock) Date: Thu, 22 Apr 1999 11:48:41 +0100 (BST) Subject: Faculty position available Message-ID: We have a senior faculty position available, for which I would be delighted to see some modellers or neuroscience people apply. See my website, or that of Bill Phillips, Barbara Webb, Peter Cahusac or Lindsay Wilson to get an idea of what we are currently doing in this area. Reader / Chair in Psychology The Department wishes to make an appointment to a post at the level of Reader or Chair. The primary role in the early years will be to contribute to research development in one of the Department's existing areas of research strength: Perception; Cognition; Neuroscience; Neuropsychology; Comparative and Developmental Psychology; and Social, Health, Clinical and Community Psychology. Teaching and administrative duties in the early years will be minimal, and there will be start-up funding for equipment and studentships. Salary will be within the Senior Lecturer scale (£30,396-£34,464) or by negotiation on the Professorial scale (minimum £35,170).
Informal enquiries may be made to Professor Lindsay Wilson, Head of Department on 01786 467640, email j.t.l.wilson at stir.ac.uk. Details of the Department can be found at: www.stir.ac.uk/departments/humansciences/psychology/ Further particulars are available from the Personnel Office, University of Stirling, Stirling, FK9 4LA, tel: (01786) 467028, fax (01786) 466155 or email personnel at stir.ac.uk. Closing date for applications: 13 May 1999. www.stir.ac.uk/departments/admin/personl AN EQUAL OPPORTUNITIES EMPLOYER Peter Hancock Department of Psychology, University of Stirling FK9 4LA Phone 01786 467675 Fax 01786 467641 e-mail pjbh1 at stir.ac.uk http://www-psych.stir.ac.uk/~pjh From steve at cns.bu.edu Thu Apr 22 22:04:45 1999 From: steve at cns.bu.edu (Stephen Grossberg) Date: Thu, 22 Apr 1999 22:04:45 -0400 Subject: Job at Boston University's Department of Cognitive and Neural Systems Message-ID: SYSTEMS ADMINISTRATOR JOB OPENING AT BOSTON UNIVERSITY We are seeking a new Director of the Computation Laboratories for the Department of Cognitive and Neural Systems (CNS) and the Center for Adaptive Systems (CAS) at Boston University, which have active PhD training and research programs in biological and artificial neural network modeling. Both models of how the brain controls behavior and applications of these insights to outstanding technological problems are developed. The job includes responsibility for planning, developing, purchasing, installing, managing, integrating, reconfiguring, updating, and maintaining the CAS/CNS network for high-end scientific and technological neurocomputing. Participation in research projects is possible for qualified applicants. Salary is commensurate with experience. Contact Cindy Bradford (cindy at cns.bu.edu) for more information. Boston University is an equal opportunity employer. From phwusi at islab.brain.riken.go.jp Fri Apr 23 03:31:46 1999 From: phwusi at islab.brain.riken.go.jp (Si Wu) Date: Fri, 23 Apr 1999 16:31:46 +0900 (JST) Subject: A New idea on Support Vector Machine Message-ID: Dear Connectionists, This is to announce our new idea on support vector machines, "Improving Support Vector Machine Classifiers by Modifying Kernel Functions". We remarked that there exist few theories concerning how to choose a kernel function to fit given data well. This is equivalent to how to choose a smoothing operator. This is a difficult question. We used an idea of conformal transformation given by information geometry to modify a given kernel function. We hope that this gives a new direction for further development of SVMs. The paper will appear in Neural Networks as a Letter. If you are interested, you can freely download it from http://www.islab.brain.riken.go.jp/~phwusi/Publication.html ************************************ Abstract In this work, we propose a method of modifying a kernel function in a data-dependent way to improve the performance of a support vector machine classifier. This is based on the Riemannian geometrical structure induced by the kernel function. The idea is to enlarge the spatial resolution around the separating boundary surface by a conformal mapping such that the separability between classes is increased. Examples are given specifically for modifying Gaussian Radial Basis Function kernels. Simulation results for both artificial and real data show remarkable improvement of generalization errors, supporting our idea.
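[Editor's note] As a rough illustration of the kind of data-dependent kernel modification described in the abstract above, the sketch below rescales an RBF kernel by a conformal factor that is large near the support vectors found in a first training pass, which enlarges spatial resolution near the decision boundary. The particular form of the factor and all constants are assumptions made for illustration, not the exact recipe of the paper.

import numpy as np

# Conformal rescaling of a kernel:  K_new(x, z) = c(x) * c(z) * K(x, z),
# where c(.) is chosen to be large near previously found support vectors.

def rbf_kernel(X, Z, gamma=1.0):
    d2 = ((X[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conformal_factor(X, support_vectors, tau=1.0):
    # one simple (assumed) choice: a sum of Gaussian bumps centred on the SVs
    d2 = ((X[:, None, :] - support_vectors[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * tau ** 2)).sum(axis=1)

def modified_kernel(X, Z, support_vectors, gamma=1.0, tau=1.0):
    cX = conformal_factor(X, support_vectors, tau)
    cZ = conformal_factor(Z, support_vectors, tau)
    return cX[:, None] * cZ[None, :] * rbf_kernel(X, Z, gamma)

# Typical use: train an SVM with rbf_kernel, take its support vectors,
# rebuild the Gram matrix with modified_kernel, and retrain on that matrix.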
__________________________________________________________________ | | | Tel: 0081-48-462-4267(H), 0081-48-467-9664(O) | | E-mail: phwusi at islab.brain.riken.go.jp | | http://www.islab.brain.riken.go.jp/~phwusi | | Lab. for Information Synthesis, RIKEN Brain Science Institute | | Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN | |________________________________________________________________| From szepes at sol.cc.u-szeged.hu Fri Apr 23 17:05:34 1999 From: szepes at sol.cc.u-szeged.hu (Szepesvari Csaba) Date: Fri, 23 Apr 1999 23:05:34 +0200 (MET DST) Subject: TR announcement Message-ID: Dear Colleagues, The following technical report is available at http://victoria.mindmaker.hu/~szepes/papers/macro-tr99-01.ps.gz All comments are welcome. Best wishes, Csaba Szepesvari ---------------------------------------------------------------- An Evaluation Criterion for Macro Learning and Some Results Zs. Kalmar and Cs. Szepesvari TR99-01, Mindmaker Ltd., Budapest 1121, Konkoly Th. M. u. 29-33 It is known that a well-chosen set of macros makes it possible to considerably speed up the solution of planning problems. Recently, macros have been considered in the planning framework built on Markovian decision problems. However, so far no systematic approach has been put forth to investigate the utility of macros within this framework. In this article we begin to systematically study this problem by introducing the concept of multi-task MDPs defined with a distribution over the tasks. We propose an evaluation criterion for macro-sets that is based on the expected planning speed-up due to the usage of a macro-set, where the expectation is taken over the set of tasks. The consistency of the empirical speed-up maximization algorithm is shown in the finite case. For acyclic systems, the expected planning speed-up is shown to be proportional to the amount of ``time-compression'' due to the macros. Based on these observations a heuristic algorithm for learning of macros is proposed. The algorithm is shown to return macros identical with those that one would like to design by hand in the case of a particular navigation-like multi-task MDP. Some related questions, in particular the problem of breaking up MDPs into multiple tasks, factorizing MDPs and learning generalizations over actions to enhance the amount of transfer are also considered in brief at the end of the paper. Keywords: Reinforcement learning, MDPs, planning, macros, empirical speed-up optimization From hirai at is.tsukuba.ac.jp Sun Apr 25 21:11:27 1999 From: hirai at is.tsukuba.ac.jp (Yuzo Hirai) Date: Mon, 26 Apr 1999 10:11:27 +0900 Subject: PDM Digital Neural Network System. Message-ID: <3723BD3F1E0.6E3DHIRAI@poplar.is.tsukuba.ac.jp> Dear Connectionists readers, We have connected our PDM Digital Neural Network System to the Internet. It is a hardware neural network simulator that consists of 1,008 hardware neurons fully interconnected via 1,028,160 7-bit synapses. The system consists of fully digital circuits, and the analog output of each neuron is encoded by Pulse Density Modulation, as our real neurons do. The behavior of each neuron is described by a nonlinear first-order differential equation, and the system solves 1,008 simultaneous differential equations in a fully parallel and time-continuous manner. It can solve a WTA network ten thousand times faster than the latest workstation for the best case. For details of the system and how to obtain permission to use it, visit http://www.viplab.is.tsukuba.ac.jp/ . Best regards.
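[Editor's note] The message above does not give the hardware equations, but the toy sketch below conveys what a pulse-density-modulated neuron looks like in software: a first-order leaky unit integrated in discrete time, whose bounded activity is emitted as a stream of 0/1 pulses with matching average density. The dynamics, constants and function names are assumptions for illustration only, not the PDM system's actual specification.

import numpy as np

# Software caricature of a PDM neuron: first-order leaky dynamics plus a
# pulse stream whose density tracks the analog activity.
rng = np.random.default_rng(1)

def simulate_pdm_neuron(inputs, tau=10.0, dt=1.0):
    u, pulses, acts = 0.0, [], []
    for x in inputs:
        u += dt / tau * (-u + x)          # leaky first-order integration
        a = 1.0 / (1.0 + np.exp(-u))      # bounded activity in (0, 1)
        pulses.append(1 if rng.random() < a else 0)
        acts.append(a)
    return np.array(acts), np.array(pulses)

acts, pulses = simulate_pdm_neuron(np.ones(2000) * 1.5)
print("mean activity:", acts[-500:].mean(), "pulse density:", pulses[-500:].mean())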
******************** Professor Yuzo Hirai Institute of Information sciences and Electronics University of Tsukuba Address: 1-1-1 Ten-nodai, Tsukuba, Ibaraki 305-8573, Japan Tel: +81-298-53-5519 Fax: +81-298-53-5206 e-mail: hirai at is.tsukuba.ac.jp ***************************************************** From risto at cs.utexas.edu Sun Apr 25 23:06:32 1999 From: risto at cs.utexas.edu (risto@cs.utexas.edu) Date: Sun, 25 Apr 1999 22:06:32 -0500 Subject: neuro-evolution software, papers, web demos available Message-ID: <199904260306.WAA03067@tophat.cs.utexas.edu> The JavaSANE software package for evolving neural networks with genetic algorithms is available from the UTCS Neural Networks Research Group website, http://www.cs.utexas.edu/users/nn. The SANE method has been designed as part of our ongoing research in efficient neuro-evolution. This software is intended to facilitate applying neuro-evolution to new domains and problems, and also as a starting point for future research in neuro-evolution algorithms. Abstracts of recent papers on eugenic evolution, on-line evolution, and non-Markovian control are also included below. Demos of these systems as well as other neuroevolution papers are available at http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html. -- Risto Software: ----------------------------------------------------------------------- JAVASANE: SYMBIOTIC NEURO-EVOLUTION IN SEQUENTIAL DECISION TASKS http://www.cs.utexas.edu/users/nn/pages/software/abstracts.html#javasane Cyndy Matuszek, David Moriarty The JavaSANE package contains the source code for the Hierarchical SANE neuro-evolution method, where a population of neurons is evolved together with network blueprints to find a network for a given task. The method has been shown effective in several sequential decision tasks including robot control, game playing, and resource optimization. JavaSANE is designed especially to make it possible to apply SANE to new tasks with minimal effort. It is also intended to be a platform-independent and parsimonious implementation of SANE, so that can serve as a starting point for further research in neuro-evolution algorithms. (This package is written in Java; an earlier C-version is also available). Papers and Demos: ----------------------------------------------------------------------- SOLVING NON-MARKOVIAN CONTROL TASKS WITH NEUROEVOLUTION Faustino Gomez and Risto Miikkulainen To appear in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-99, Stockholm, Sweden) (6 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#gomez.ijcai99.ps.gz The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. The double pole case, where two poles connected to the cart must be balanced simultaneously is much more difficult, especially when velocity information is not available. In this article, we demonstrate a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version. In both cases, our results show that ESP is faster than other neuroevolution methods. In addition, we introduce an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity. 
This method enables the system to solve even more difficult versions of the task where direct evolution cannot. A demo of ESP in the 2-pole balancing task can be seen at http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html. ----------------------------------------------------------------------- REAL-TIME INTERACTIVE NEURO-EVOLUTION Adrian Agogino, Kenneth Stanley, and Risto Miikkulainen Technical Report AI98-266, Department of Computer Sciences, The University of Texas at Austin, 1998 (16 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#agostan.ine.ps.Z In standard neuro-evolution, a population of networks is evolved in the task, and the network that best solves the task is found. This network is then fixed and used to solve future instances of the problem. Networks evolved in this way do not handle real-time interaction very well. It is hard to evolve a solution ahead of time that can cope effectively with all the possible environments that might arise in the future and with all the possible ways someone may interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. This approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately taking into account conflicting goals. After initial evaluation offline, the population is allowed to evolve online, and its performance improves considerably. The population not only adapts to novel situations brought about by changing strategies in the opponent and the game layout, but it also improves its performance in situations that it has already seen in offline training. This paper will describe an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone. A demo of on-line evolution in the real-time gaming task is at http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html. ----------------------------------------------------------------------- EUGENIC EVOLUTION FOR COMBINATORIAL OPTIMIZATION John W. Prior Master's Thesis, Technical Report AI98-268, Department of Computer Sciences, The University of Texas at Austin, 1998 (126 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#prior.eugenic-thesis.ps.Z In the past several years, evolutionary algorithms such as simulated annealing and the genetic algorithm have received increasing recognition for their ability to optimize arbitrary functions. These algorithms rely on the process of Darwinian evolution, which promotes highly successful solutions that result from random variation. This variation is produced by the random operators of mutation and/or recombination. These operators make no attempt to determine which alleles or combinations of alleles are most likely to yield overall fitness improvement. This thesis will explore the benefits that can be gained by utilizing a direct analysis of the correlations between fitness and alleles or allele combinations to intelligently and purposefully design new highly-fit solutions. An algorithm is developed in this thesis that explicitly analyzes allele-fitness distributions and then uses the information gained from this analysis to purposefully construct new individuals ``bit by bit''. 
Explicit measurements of ``gene significance'' (the effect of a particular gene upon fitness) allow the algorithm to adaptively decide when conditional allele-fitness distributions are necessary in order to correctly track important allele interactions. A new operator---the ``restriction'' operator---allows the algorithm to simply and quickly compute allele selection probabilities using these conditional fitness distributions. The resulting feedback from the evaluation of new individuals is used to update the statistics and therefore guide the creation of increasingly better individuals. Since explicit analysis and creation are used to guide this evolutionary process, it is not a form of Darwinian evolution. It is a pro-active, contrived process that attempts to intelligently create better individuals through the use of a detailed analysis of historical data. It is therefore a eugenic evolutionary process, and thus this algorithm is called the ``Eugenic Algorithm'' (EuA). The EuA was tested on a number of benchmark problems (some of which are NP-complete) and compared to widely recognized evolutionary optimization techniques such as simulated annealing and genetic algorithms. The results of these tests are very promising, as the EuA optimized all the problems at a very high level of performance, and did so much more consistently than the other algorithms. In addition, the operation of the EuA was very helpful in illustrating the structure of the test problems. The development of the EuA is a very significant step toward statistically justified combinatorial optimization, paving the way to the creation of optimization algorithms that make more intelligent use of the information that is available to them. This new evolutionary paradigm, eugenic evolution, will lead to faster and more accurate combinatorial optimization and to a greater understanding of the structure of combinatorial optimization problems. ----------------------------------------------------------------------- FAST REINFORCEMENT LEARNING THROUGH EUGENIC NEURO-EVOLUTION Daniel Polani and Risto Miikkulainen Technical Report AI99-277, Department of Computer Sciences, University of Texas at Austin, 1999 (7 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#polani.eusane-99.ps.gz In this paper we introduce EuSANE, a novel reinforcement learning algorithm based on the SANE neuro-evolution method. It uses a global search algorithm, the Eugenic Algorithm, to optimize the selection of neurons for the hidden layer of SANE networks. The performance of EuSANE is evaluated in the two-pole balancing benchmark task, showing that EuSANE is significantly stronger in this task than other reinforcement learning methods to date. From plesser at pegasus.chaos.gwdg.de Mon Apr 26 10:37:38 1999 From: plesser at pegasus.chaos.gwdg.de (Hans Ekkehard Plesser) Date: Mon, 26 Apr 1999 16:37:38 +0200 Subject: Noise in I&F neurons: from stochastic input to escape rates Message-ID: <9904261437.AA04322@pegasus.chaos.gwdg.de> Dear Connectionists! I would like to announce a paper on noisy integrate-and-fire dynamics that has been accepted for publication in Neural Computation: Noise in integrate-and-fire neurons: from stochastic input to escape rates by Hans E. Plesser and Wulfram Gerstner The paper is available on-line at http://www.chaos.gwdg.de/~plesser/publications.html .
Abstract: We analyze the effect of noise in integrate-and-fire neurons driven by time-dependent input, and compare the diffusion approximation for the membrane potential to escape noise. It is shown that for time- dependent sub-threshold input, diffusive noise can be replaced by escape noise with a hazard function that has a Gaussian dependence upon the distance between the (noise-free) membrane voltage and threshold. The approximation is improved if we add to the hazard function a probability current proportional to the derivative of the voltage. Stochastic resonance in response to periodic input occurs in both noise models and exhibits similar characteristics. Hans E. Plesser ------------------------------------------------------------------ Hans Ekkehard Plesser Nonlinear Dynamics Group Tel. : ++49-551-5176-421 MPI for Fluid Dynamics Fax : ++49-551-5176-409 D-37073 Goettingen, Germany e-mail: plesser at chaos.gwdg.de ------------------------------------------------------------------ From prevete at axpna1.na.infn.it Wed Apr 28 05:41:48 1999 From: prevete at axpna1.na.infn.it (Roberto Prevete) Date: Wed, 28 Apr 1999 11:41:48 +0200 Subject: A new Java package Message-ID: <3726D7DC.7FA23C3E@axpna1.na.infn.it> The Java package it.na.cy.nnet for simulation of neural networks is now available from cybernetics webpage www.na.infn.it/Gener/cyber/report.html It is a package to simulate neural networks composed of many elementary units. Documentation is also included. Any comment will be appreciated. Thanks, Roberto Prevete From jon at syseng.anu.edu.au Thu Apr 29 03:23:11 1999 From: jon at syseng.anu.edu.au (Jonathan Baxter) Date: Thu, 29 Apr 1999 17:23:11 +1000 Subject: Paper available Message-ID: <372808DF.8A5DDC14@syseng.anu.edu.au> The following paper is available from http://wwwsyseng.anu.edu.au/~jon/papers/doom2.ps.gz "Boosting Algorithms as Gradient Descent in Function Space" by Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean Abstract: Much recent attention, both experimental and theoretical, has been focussed on classification algorithms which produce voted combinations of classifiers. Recent theoretical work has shown that the impressive generalization performance of algorithms like AdaBoost can be attributed to the classifier having large margins on the training data. We present abstract algorithms for finding linear and convex combinations of functions that minimize arbitrary cost functionals (i.e functionals that do not necessarily depend on the margin). Many existing voting methods can be shown to be special cases of these abstract algorithms. Then, following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, we present a new algorithm (DOOM II) for performing a gradient descent optimization of such cost functions. Experiments on several data sets from the UC Irvine repository demonstrate that DOOM II generally outperforms AdaBoost, especially in high noise situations. Margin distribution plots verify that DOOM II is willing to `give up' on examples that are too hard in order to avoid overfitting. We also show that the overfitting behavior exhibited by AdaBoost can be quantified in terms of our proposed cost function. 
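The gradient-descent view of voting methods described in this abstract can be made concrete with a short sketch. The loop below is a generic gradient-descent-in-function-space procedure, not the DOOM II algorithm itself: the exponential cost, the fixed step size, and the base_learners interface (a list of callables returning +/-1 predictions) are assumptions chosen for brevity, whereas the paper's algorithm optimizes a particular non-convex cost of the margin and chooses its steps differently.

    import numpy as np

    def functional_gradient_boost(X, y, base_learners, n_rounds=20, step=0.1,
                                  cost_grad=lambda z: -np.exp(-z)):
        # Build a voted combination F by gradient descent on the sample cost
        # C(F) = mean(c(y_i * F(x_i))); cost_grad is c'(z) (exponential cost here).
        F = np.zeros(len(y), dtype=float)       # current combination on the sample
        votes = []
        for _ in range(n_rounds):
            w = -y * cost_grad(y * F)           # -dC/dF(x_i), up to the 1/m factor
            scores = [np.dot(w, h(X)) for h in base_learners]
            best = int(np.argmax(scores))       # base classifier best aligned with -grad C
            if scores[best] <= 0:               # no descent direction left: stop
                break
            F += step * base_learners[best](X)  # small step in function space
            votes.append((best, step))
        return votes                            # indices and coefficients of the combination

Here base_learners might be, for example, a list of decision stumps; with the exponential cost and an exact line search in place of the fixed step, this kind of loop essentially recovers AdaBoost.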
From peter at biodiscovery.com Thu Apr 29 03:31:49 1999 From: peter at biodiscovery.com (Peter Kalocsai) Date: Thu, 29 Apr 1999 00:31:49 -0700 Subject: Job announcement Message-ID: <001a01be9212$58b86a20$66d9efd1@default> Data Mining Scientist BioDiscovery, Inc. is a leading gene expression image and data analysis firm with an outstanding client list and a progressive industry stance. We are an early-stage start-up company dedicated to the development of state-of-the-art bioinformatics software tools for molecular biology and genomics research. We are rapidly growing and are looking for talented individuals with the experience and motivation to take on significant responsibility and deliver with minimal supervision. BioDiscovery is an equal opportunity employer and our shop has a friendly, fast-paced atmosphere. We are headquartered in sunny Southern California close to the UCLA campus. We are looking for a talented, creative individual with a strong background in software development, mathematics, statistics, and pattern recognition. Knowledge of biology and genetics is a plus but not necessary. This position involves development and implementation of pattern recognition and data mining algorithms for a number of ongoing and planned projects. This position requires the ability to formulate problem descriptions through interaction with end-user scientists working in the various aspects of genomics. These technical issues must then be transformed into innovative and practical algorithmic solutions. We expect our scientists to have outstanding written/oral communication skills and encourage publications in scientific journals. Requirements: A Ph.D. in Computer Science, Electrical Engineering, Mathematics, Biostatistics, or related field, or equivalent experience is required. Knowledge of biology and genetics is a plus but not necessary. Experience with MatLab, JAVA, or at least 5 years of programming experience. Please send resume and cover letter to: hr at biodiscovery.com ---------------------------------------------- Peter Kalocsai, Ph.D. BioDiscovery, Inc. 11150 W. Olympic Blvd. Suite 805E Los Angeles, CA 90064 Ph: (310) 966-9366 Fax: (310) 966-9346 E-mail: peter at biodiscovery.com From vaina at enga.bu.edu Fri Apr 30 20:28:50 1999 From: vaina at enga.bu.edu (Lucia M. Vaina) Date: Fri, 30 Apr 1999 20:28:50 -0400 Subject: Position in computational fMRI in Brain and Vision Research Lab-BU Message-ID: Computational fMRI, graphics algorithms, and image processing. Full-time position in the Brain and Vision Research Laboratory, Boston University This exciting new venture in the Brain and Vision Research Laboratory at Boston University, Department of Biomedical Engineering, involves visualisation of the working (plasticity and restorative plasticity) of the human brain during sensory-motor tasks. Specifically, the postholder will: * explore the uses of real-time and near-real-time analysis techniques in fMRI studies by applying several existing data analysis packages for fMRI. * model the changes of functional connectivity of brain activations using structural equation models and functional connectivity models. * develop and implement motion correction algorithms and elastic matching algorithms. Candidates should have at least an MS in Electrical Engineering, Computer Science, Physics or Mathematics, good communication and interpersonal skills, and an excellent background in C programming, and should be familiar with the Unix environment.
Good knowledge of college-level mathematics (linear algebra, partial differential equations, statistics and probability) and signal processing. Knowledge of computer graphics algorithms is a plus. Familiarity with Sun or SGI platforms and PCs is very desirable. Please send a letter of application along with CV, publication list if available, brief statement of current research and background, and two letters of recommendation to Professor Lucia M. Vaina Brain and Vision Research Laboratory Biomedical Engineering Department College of Engineering Boston University 44 Cummington str Boston, Ma 02115 USA fax: 617-353-6766 (Please note that I will be away between May 7-15). Lucia M. Vaina Ph.D., D.Sc. Professor of Biomedical Engineering and Neurology Brain and Vision Research Laboratory Boston University, Department of Biomedical Engineering College of Engineering 44 Cummington str, Room 315 Boston University Boston, Ma 02215 USA tel: 617-353-2455 fax: 617-353-6766
From mgeorg at SGraphicsWS1.mpe.ntu.edu.sg Thu Apr 8 16:39:06 1999 From: mgeorg at SGraphicsWS1.mpe.ntu.edu.sg (Georg Thimm) Date: Sat, 03 Apr 1999 11:19:06 -12920 Subject: Contents of Neurocomputing 25 (1998) Message-ID: <199904030319.LAA01957@SGraphicsWS1.mpe.ntu.edu.sg> Dear reader, Please find below a compilation of the contents for Neurocomputing and Scanning the Issue written by V. David Sanchez A. More information on the journal are available at the URL http://www.elsevier.nl/locate/neucom .
The contents of this and other journals published by Elsevier are distributed also by the ContentsDirect service (see at the URL http://www.elsevier.nl/locate/ContentsDirect). Please feel free to redistribute this message. My apologies if this message is inappropriate for this mailing list; I would appreciate a feedback. With kindest regards, Georg Thimm Dr. Georg Thimm Tel ++65 790 5010 Design Research Center, School of MPE, Email: mgeorg at ntu.edu.sg Nanyang Technological University, Singapore 639798 ******************************************************************************** Vol. 25 (1-3) Scanning the Issue I. Santamaria, M. Lazaro, C.J. Pantaleon, J.A. Garcia, A. Tazon, and A. Mediavilla describe "A nonlinear MESFET model for intermodulation analysis using a generalized radial basis function network". The transistor bias voltages are input to the GRBF network which maps them onto the derivatives of the drain-to-source current associated to the intermodulation properties. In "Solving dynamic optimization problems with adaptive networks" Y. Takahashi systematically constructs adaptive networks (AN) from a given dynamic optimization problem (DOP) which generate a locally-minimum problem solution. The construction of a solution for the Dynamic Traveling Salesman Problem (DTSP) is shown as example. L.M. Patnaik and S. Udaykumar discuss in "Mapping adaptive resonance theory onto ring and mesh architectures" different strategies to parallelize ART2-A networks. The parallel architectural simulator PROTEUS is used. Simulations show that the speedup obtained for the ring architecture is higher than the one obtained for the mesh architecture. H.-H. Chen, M.T. Manry, and H. Chandrasekaran use in "A neural network training algorithm utilizing multiple sets of linear equations" output weight optimization (OWO), hidden weight optimization (HWO), and Backpropagation in training algorithms. Simulations show that the combined OWO-HWO technique is more effective than the OWO-BP and the Levenberg-Marquardt methods for training MLP networks. Y. Baram presents in "Bayesian classification by iterated weighting" a modular and separate calculation of the likelihoods and the weights. This allows for the use of any density estimation method. The likelihoods are estimated by parametric optimization, the weights are estimated using iterated averaging. Results obtained are similar to those generated using the expectation maximization method. V. Maiorov and A. Pinkus prove "Lower bounds for approximation by MLP neural networks" including that any continuous function on any compact domain can be approximated arbitrarily well by a two hidden layer MLP with a fixed number of units per layer. The degree of approximation for an MLP with n hidden units is bounded by the degree of approximation of n ridge functions linearly combined. In "Developing robust non-linear models through bootstrap aggregated neural networks" J. Zhang describes a technique for aggregating multiple networks. Bootstrap techniques are used to resample data into training and test data sets. Combination of the individual network models is done by principal component regression. More accurate and more robust results are obtained than when using single networks. S.-Y. Cho and T.W.S. Chow describe a new heuristic for global learning in "Training multilayer neural networks using fast global learning algorithm * Least squares and penalized optimization methods". 
Classification problems are used to confirm that a higher convergence speed and ability to escape local minima are achieved with the new algorithm as opposed to other conventional methods. In "A novel entropy-constrained competitive learning algorithm for vector quantization" W.-J. Hwang, B.-Y. Ye, and S.-C. Liao develop the entropy-constrained competitive learning (ECCL) algorithm. This algorithm outperforms the entropy-constrained vector quantizer (ECVQ) design algorithm when the same rate constraint and initial codewords are used. K.B. Eom presents a "Fuzzy clustering approach in unsupervised sea ice classification". Passive radar images taken by multichannel passive microwave imagers are used as input. The sea ice types in polar regions are determined by clustering. Hard clustering methods do not apply due to the fuzzy nature of the boundaries between different sea-ice types. G.-J. Wang and T.-C. Chen introduce "A robust parameters self-tuning learning algorithm for multilayer feedforward neural network". Automatic adjustment of learning parameters such as the learning rate and the momentum can be achieved with this new algorithm. It outperforms the error backpropagation (EBP) algorithm in terms of convergence, and is also less sensitive to the initial weights. In "Neural computation for robust approximate pole assignment" D.W.C. Ho, J. Lam, J. Xu, and H.K. Tam pose the problem of output feedback robust approximate pole assignment as an unconstrained optimization problem and solve it using a neural architecture and the gradient flow formulation. This formulation allows for a simple recurrent neural network realization. I appreciate the cooperation of all those who submitted their work for inclusion in this issue. V. David Sanchez A. Neurocomputing * Editor-in-Chief * ******************************************************************************** From wolfskil at MIT.EDU Fri Apr 2 14:38:05 1999 From: wolfskil at MIT.EDU (Jud Wolfskill) Date: Fri, 2 Apr 1999 15:38:05 -0400 Subject: book announcement Message-ID: A non-text attachment was scrubbed... Name: not available Type: text/enriched Size: 1914 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c5bd37bb/attachment-0001.bin From fmdist at hotmail.com Sat Apr 3 13:15:31 1999 From: fmdist at hotmail.com (Fionn Murtagh) Date: Sat, 03 Apr 1999 10:15:31 PST Subject: "Neurocomputing Letters" - call for submissions Message-ID: <19990403181531.17451.qmail@hotmail.com> Manuscripts are invited for "Neurocomputing Letters", which is published as part of "Neurocomputing". A quick turnaround in refereeing and publication is the aim for a "Letter" paper, which should be short (manuscript text of about 8 pages). The Editor-in-Chief of "Neurocomputing" is V. David Sanchez A. More information on "Neurocomputing" is available at http://www.elsevier.nl/locate/neucom The contents of this and other journals published by Elsevier are distributed also by the ContentsDirect service (see http://www.elsevier.nl/locate/ContentsDirect). Contact addresses are as follows. NEUROCOMPUTING - EDITOR-IN-CHIEF V. David Sanchez A. Advanced Computational Intelligent Systems 11281 Tribuna Avenue San Diego, CA 92131 U.S.A. Fax +1 (619) 547-0794 Email dsanchez at san.rr.com http://www.elsevier.nl/locate/neucom NEUROCOMPUTING LETTERS - EDITOR Prof. F.
Murtagh School of Computer Science Queen's University of Belfast Belfast BT7 1NN Northern Ireland Email f.murtagh at qub.ac.uk http://www.cs.qub.ac.uk/~F.Murtagh Get Your Private, Free Email at http://www.hotmail.com From stan at mbfys.kun.nl Fri Apr 2 08:34:04 1999 From: stan at mbfys.kun.nl (Stan Gielen) Date: Fri, 02 Apr 1999 14:34:04 +0100 Subject: advertisement for vacant Ph.D. position Message-ID: <3.0.2.32.19990402143404.009f8da0@pop-srv.mbfys.kun.nl> The Dept. of Medical Physics and Biophysics of the University of Nijmegen, The Netherlands, has a vacancy for a Ph.D. STUDENT RESEARCH PROJECT The research project has a theoretical component and an application oriented component. The theoretical component deals with knowledge extraction from neural-network related architectures, which have been trained using large data-bases. Our group has a large experience with Multi-layer Perceptrons, staochastic neural networks and graphical models (see http://www.mbfys.kun.nl/snn). The application oriented part deals with optimization of the production process in a paper production plant. The aim of the project is to train a neural network (or related architecture) with data, obtained during the paper production process, and then -- to find the main relevant parameters, which determine the production process and the quality of the output (quality of paper) -- to extract rules, which provide insight into the production process -- to optimize the process. The project should produce excellent papers in the best available scientific journals. After a period of 4 years, a thesis should be available. For applications, please send your cv to: Prof. Stan Gielen: stan at mbfys.kun.nl From solla at snowmass.phys.nwu.edu Mon Apr 5 21:32:37 1999 From: solla at snowmass.phys.nwu.edu (Sara A. Solla) Date: Mon, 5 Apr 1999 20:32:37 -0500 (CDT) Subject: NIPS*99 -- Call for Papers Message-ID: <199904060132.UAA10273@snowmass.phys.nwu.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 8487 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/6203c283/attachment-0001.ksh From solla at snowmass.phys.nwu.edu Mon Apr 5 21:32:18 1999 From: solla at snowmass.phys.nwu.edu (Sara A. Solla) Date: Mon, 5 Apr 1999 20:32:18 -0500 (CDT) Subject: NIPS*99 -- Call for Workshop Proposals Message-ID: <199904060132.UAA10264@snowmass.phys.nwu.edu> A non-text attachment was scrubbed... Name: not available Type: text Size: 5108 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/0c59708c/attachment-0001.ksh From harnad at coglit.ecs.soton.ac.uk Tue Apr 6 07:40:49 1999 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Tue, 6 Apr 1999 12:40:49 +0100 (BST) Subject: Rolls on "The Brain and Emotion" BBS Call for Book Reviewers Message-ID: Below is the abstract of the Precis of a book that will shortly be circulated for Multiple Book Review in Behavioral and Brain Sciences (BBS): *** please see also 5 important announcements about new BBS policies and address change at the bottom of this message) *** PRECIS OF "THE BRAIN AND EMOTION" (Oxford UP 1998) by Edmund T. Rolls This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. 
Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by April 16th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. Please note that it is the book, not the Precis, that is to be reviewed. It would be helpful if you indicated in your reply whether you already have the book or would require a copy. _____________________________________________________________ PRECIS OF "THE BRAIN AND EMOTION" FOR BBS MULTIPLE BOOK REVIEW Oxford University Press on 5th November 1998. Edmund T. Rolls University of Oxford Department of Experimental Psychology South Parks Road Oxford OX1 3UD England. Edmund.Rolls at psy.ox.ac.uk ABSTRACT: The topics treated in The Brain and Emotion include the definition, nature and functions of emotion (Chapter 3), the neural bases of emotion (Chapter 4), reward, punishment and emotion in brain design (Chapter 10), a theory of consciousness and its application to understanding emotion and pleasure (Chapter 9), and neural networks and emotion-related learning (Appendix). The approach is that emotions can be considered as states elicited by reinforcers (rewards and punishers). This approach helps with understanding the functions of emotion, and with classifying different emotions; and in understanding what information processing systems in the brain are involved in emotion, and how they are involved. The hypothesis is developed that brains are designed around reward and punishment evaluation systems, because this is the way that genes can build a complex system that will produce appropriate but flexible behavior to increase fitness (Chapter 10). By specifying goals rather than particular behavioral patterns of responses, genes leave much more open the possible behavioral strategies that might be required to increase fitness. The importance of reward and punishment systems in brain design also provides a basis for understanding brain mechanisms of motivation, as described in Chapters 2 for appetite and feeding, 5 for brain-stimulation reward, 6 for addiction, 7 for thirst, and 8 for sexual behavior. 
KEYWORDS: emotion; hunger; taste; brain evolution; orbitofrontal cortex; amygdala; dopamine; reward; punishment; consciousness ____________________________________________________________ To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.rolls.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.rolls ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.rolls To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.rolls When you have the file(s) you want, type: quit ____________________________________________________________ *** FIVE IMPORTANT ANNOUNCEMENTS *** ------------------------------------------------------------------ (1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm --------------------------------------------------------------------- (2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers (on their Home-Servers as well as) on CogPrints: http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone. --------------------------------------------------------------------- (3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. Upon acceptance, preprints of final drafts are moved to the public BBS Archive: ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html http://www.cogsci.soton.ac.uk/bbs/Archive/ -------------------------------------------------------------------- (4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. 
Authors are invited to submit papers to: Email: bbs at cogsci.soton.ac.uk Web: http://cogprints.soton.ac.uk http://bbs.cogsci.soton.ac.uk/ INSTRUCTIONS FOR AUTHORS: http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html --------------------------------------------------------------------- (5) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). From xjwang at cicada.ccs.brandeis.edu Tue Apr 6 14:09:13 1999 From: xjwang at cicada.ccs.brandeis.edu (Xiao-Jing Wang) Date: Tue, 6 Apr 1999 14:09:13 -0400 Subject: No subject Message-ID: <199904061809.OAA20240@cicada.ccs.brandeis.edu> POST-DOCTORAL POSITION AT SLOAN CENTER FOR THEORETICAL NEUROBIOLOGY BRANDEIS UNIVERSITY ================================================================== Applications are invited for a post-doctoral fellowship in computational neuroscience, beginning in the fall of 1999. The successful candidate is expected to work on models of working memory processes in prefrontal cortex and their neuromodulation. Projects will be carried out in close interaction and collaboration with experimental neurobiologists. Candidates with strong theoretical background, analytical and simulation skills, and knowledge in neuroscience, are encouraged to apply. Applicants should send promptly a curriculum vitae and a brief description of fields of interest, and have three letters of recommandation sent to the following address. ================================================================== Xiao-Jing Wang Associate Professor Center for Complex Systems Brandeis University Waltham, MA 02454 phone: 781-736-3147 fax: 781-736-4877 http://www.bio.brandeis.edu/pages/faculty/wang.html From spotter at gg.caltech.edu Wed Apr 7 00:09:51 1999 From: spotter at gg.caltech.edu (Steve Potter) Date: Tue, 6 Apr 1999 20:09:51 -0800 Subject: Neural Interface Jobs Message-ID: Dear Connectionists, This is a follow-up to a post I made to the list circa 1991, when I was a grad student and had decided to study learning in cultured neural networks. I asked the list who was working on such things and got a number of helpful replies. Several led me to Caltech, where I have spent the last 5 years working with Scott Fraser and Jerry Pine, building gadgets and learning techniques to make my dream possible. I am now recruiting a post-doc and a computer software/hardware person to help me out on this project. Please spread the word! -Steve Potter spotter at gg.caltech.edu ___________Please Distribute to Interested Parties_________________ I have two positions available immediately for: 1. Neural Interface Programmer 2. 
Postdoc in Neural Coding See the hypertext version of this announcement at: http://www.caltech.edu/~pinelab/jobs.html New Neuroscience Technology for Studying Learning in Vitro: Multi-electrode and Imaging Analysis of Cultured Networks I received a 4-year RO1 grant (1 RO1 NS38628-01) from the National Institute of Neurological Disorders and Stroke at the National Institutes of Health. We are developing a two-way link between a network of cultured neurons and a computer, capable of stimulating and recording brain cell activity continuously for weeks. We are interested in observing the process of self-organization in the cultured neural network, using multi-electrode arrays, optical recording using voltage-sensitive dyes, and 2-photon laser-scanning microscopy. We are carrying out this cutting-edge interdisciplinary project within the labs of Prof. Jerry Pine and Prof. Scott Fraser. Pine lab: http://www.caltech.edu/~pinelab/pinelab.html Fraser lab: http://www.its.caltech.edu/~fraslab/ More details about the jobs can be found at the web address at the top. Please send or email me your resume or CV: Steve M. Potter, PhD Principal Investigator Senior Research Fellow spotter at gg.caltech.edu 156-29 Biology California Institute of Technology Pasadena, CA 91125 ___________Please Distribute to Interested Parties_________________ From school at cogs.nbu.acad.bg Wed Apr 7 07:23:43 1999 From: school at cogs.nbu.acad.bg (CogSci Summer School) Date: Wed, 7 Apr 1999 14:23:43 +0300 Subject: CogSci 99 deadline approaches Message-ID: 6th International Summer School in Cognitive Science Sofia, New Bulgarian University July 12 - 31, 1999 International Advisory Board Elizabeth BATES (University of California at San Diego, USA) Amedeo CAPPELLI (CNR, Pisa, Italy) Cristiano CASTELFRANCHI (CNR, Roma, Italy) Daniel DENNETT (Tufts University, Medford, Massachusetts, USA) Ennio De RENZI (University of Modena, Italy) Charles DE WEERT (University of Nijmegen, Holland ) Christian FREKSA (Hamburg University, Germany) Dedre GENTNER (Northwestern University, Evanston, Illinois, USA) Christopher HABEL (Hamburg University, Germany) William HIRST (New School for Social Sciences, NY, USA) Joachim HOHNSBEIN (Dortmund University, Germany) Douglas HOFSTADTER (Indiana University, Bloomington, Indiana, USA) Keith HOLYOAK (University of California at Los Angeles, USA) Mark KEANE (Trinity College, Dublin, Ireland) Alan LESGOLD (University of Pittsburg, Pennsylvania, USA) Willem LEVELT (Max-Plank Institute of Psycholinguistics, Nijmegen, Holland) David RUMELHART (Stanford University, California, USA) Richard SHIFFRIN (Indiana University, Bloomington, Indiana, USA) Paul SMOLENSKY (University of Colorado, Boulder, USA) Chris THORNTON (University of Sussex, Brighton, England) Carlo UMILTA' (University of Padova, Italy) Eran ZAIDEL (University of California at Los Angeles, USA) Courses Each participant will enroll in 6 of the 10 courses offered thus attending 4 hours classes per day plus 2 hours tutorials in small groups plus individual studies and participation in symposia. Brain and Language: New Approaches to Evolution and Developmet (Elizabeth Bates, Univ. of California at San Diego, USA) Child Language Acquisition (Michael Tomasello, MPI for Evolutionary Anthropology, Germany) Culture and Cognition (Roy D'Andrade, Univ. 
of California at San Diego, USA) Understanding Social Dependence and Cooperation (Cristiano Castelfranchi, CNR, Italy) Models of Human Memory (Richard Shiffrin, Indiana University, USA) Categorization and Inductive Reasoning: Psychological and Computational Approaches (Evan Heit, Univ. of Warwick, UK) Understanding Human Thinking (Boicho Kokinov, New Bulgarian University) Perception-Based Spatial Reasoning (Reinhard Moratz, Hamburg University, Germany) Perception (Naum Yakimoff, New Bulgarian University) Applying Cognitive Science to Instruction (John Hayes, Carnegie-Mellon University, USA) In addition there will be seminars, working groups, project work, and discussions. Participation Participants will be selected by a Selection Committee on the basis of their submitted documents: application form, CV, statement of purpose, copy of diploma; if a student, academic transcript; letter of recommendation, list of publications (if any) and short summary of up to three of them. For participants from Central and Eastern Europe as well as from the former Soviet Union there are scholarships available (provided by Soros' Open Society Institute). They cover tuition, travel, and living expenses. Deadline for application: April 15th Notification of acceptance: April 30th. Apply as soon as possible since the number of participants is restricted. For more information contact: Summer School in Cognitive Science Central and East European Center for Cognitive Science New Bulgarian University 21, Montevideo Str. Sofia 1635, Bulgaria Tel. (+3592) 957-1876 Fax: (+3592) 558262 e-mail: school at cogs.nbu.acad.bg Web page: http://www.nbu.acad.bg/staff/cogs/events/ss99.html From rsun at research.nj.nec.com Wed Apr 7 13:16:36 1999 From: rsun at research.nj.nec.com (Ron Sun) Date: Wed, 7 Apr 1999 13:16:36 -0400 Subject: three technical reports related to reinforcement learning Message-ID: <199904071716.NAA29383@pc-rsun.nj.nec.com> Announcing three technical reports (concerning enhancing reinforcement learners, either in terms of improving their learning processes by dividing up the space or sequence, or in terms of knowledge extraction from outcomes of reinforcement learning) --------------------------------- Learning Plans without a priori Knowledge by Ron Sun and Chad Sessions http://cs.ua.edu/~rsun/sun.plan.ps ABSTRACT This paper is concerned with autonomous learning of plans in probabilistic domains without a priori domain-specific knowledge. Unlike existing reinforcement learning algorithms that generate only reactive plans and existing probabilistic planning algorithms that require a substantial amount of a priori knowledge in order to plan, a two-stage bottom-up process is devised, in which first reinforcement learning is applied, without the use of a priori domain-specific knowledge, to acquire a reactive plan, and then explicit plans are extracted from the reactive plan. Several options in plan extraction are examined, each of which is based on beam search that performs temporal projection in a restricted fashion, guided by the value functions resulting from reinforcement learning. Some completeness and soundness results are given. Examples in several domains are discussed that together demonstrate the working of the proposed model. A shortened version appeared in: Proc. 1998 International Symposium on Intelligent Data Engineering and Learning, October, 1998. Springer-Verlag.
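The second stage described in this abstract, extracting an explicit plan from a learned value function by a restricted beam search, can be sketched roughly as follows. This is not the report's algorithm: the deterministic successor model, the state-value callable, and the beam parameters are assumptions made for illustration, and none of the completeness or soundness conditions discussed in the report are captured here.

    def extract_plan(start, goal_test, actions, successor, value,
                     beam_width=5, max_depth=20):
        # Beam search over action sequences ("temporal projection"), ranking
        # partial plans by the learned value of the state they reach.
        beam = [(start, [])]
        for _ in range(max_depth):
            expansions = []
            for state, plan in beam:
                if goal_test(state):
                    return plan                      # first completed plan found
                for a in actions:
                    nxt = successor(state, a)        # project the action forward
                    expansions.append((value(nxt), nxt, plan + [a]))
            expansions.sort(key=lambda e: e[0], reverse=True)
            beam = [(s, p) for _, s, p in expansions[:beam_width]]
        for state, plan in beam:                     # final check at the depth bound
            if goal_test(state):
                return plan
        return None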
--------------------------------- Multi-Agent Reinforcement Learning: Weighting and Partitioning by Ron Sun and Todd Peterson http://cs.ua.edu/~rsun/sun.NN99.ps ABSTRACT: This paper addresses weighting and partitioning, in complex reinforcement learning tasks, with the aim of facilitating learning. The paper presents some ideas regarding weighting of multiple agents and extends them into partitioning an input/state space into multiple regions with differential weighting in these regions, to exploit differential characteristics of regions and differential characteristics of agents to reduce the learning complexity of agents (and their function approximators) and thus to facilitate the learning overall. It analyzes, in reinforcement learning tasks, different ways of partitioning a task and using agents selectively based on partitioning. Based on the analysis, some heuristic methods are described and experimentally tested. We find that some off-line heuristic methods performed the best, significantly better than single-agent models. To appear in: Neural Networks, in press. A shortened version appeared in Proc. of IJCNN'99 --------------------------------- Self-Segmentation of Sequences: Automatic Formation of Hierarchies of Sequential Behaviors by Ron Sun and Chad Sessions http://cs.ua.edu/~rsun/sun.sss.ps ABSTRACT The paper presents an approach for hierarchical reinforcement learning that does not rely on a priori domain-specific knowledge regarding hierarchical structures. Thus this work deals with a more difficult problem compared with existing work. It involves learning to segment action sequences to create hierarchical structures, based on reinforcement received during task execution, with different levels of control communicating with each other through sharing reinforcement estimates obtained by each other. The algorithm segments action sequences to reduce non-Markovian temporal dependencies, and seeks out proper configurations of long- and short-range dependencies, to facilitate the learning of the overall task. Developing hierarchies also facilitates the extraction of explicit hierarchical plans. The initial experiments demonstrate the promise of the approach. A shortened version of this report appeared in Proc. IJCNN'99. Washington, DC. --------------------------------- Dr. Ron Sun NEC Research Institute 4 Independence Way Princeton, NJ 08540 phone: 609-520-1550 fax: 609-951-2483 email: rsun at research.nj.nec.com (July 1st, 1998 -- July 1st, 1999) ----------------------------------------- Prof. Ron Sun http://cs.ua.edu/~rsun Department of Computer Science and Department of Psychology phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu From ascoli at osf1.gmu.edu Wed Apr 7 15:52:36 1999 From: ascoli at osf1.gmu.edu (GIORGIO ASCOLI) Date: Wed, 7 Apr 1999 15:52:36 -0400 (EDT) Subject: positions available Message-ID: Please post, distribute, and circulate as you see fit. **************************************************** * * * POSITIONS AVAILABLE: * * (1) Computational Neuroscience Post-Doc * * (2) Computer programmer for Neuroscience project * * * **************************************************** 1) COMPUTATIONAL NEUROSCIENCE POST-DOCTORAL POSITION AVAILABLE A post-doctoral position is available immediately for computational modeling of dendritic morphology, neuronal connectivity, and development of anatomically and physiologically accurate neural networks. 
All highly motivated candidates with a recent PhD in biology, computer science, physics, or other areas related to Neuroscience (including MD or engineering degrees) are encouraged to apply. C programming skills and/or experience with GENESIS or other modeling packages are desirable but not necessary. The post-doc will join a young and dynamic research group at the Krasnow Institute for Advanced Study, located in Fairfax, VA (<20 miles west of Washington DC). The initial research project is focused on (1) the generation of complete neurons in virtual reality that accurately reproduce the experimental morphological data; and (2) the study of the influence of dendritic shape (geometry and topology) on electrophysiological behavior. We have developed advanced software to build network models of entire regions of the brain (e.g. the rat hippocampus), and the successful candidate will also work on this aspect of the research. The post-doc will be hired as a Research Assistant Professor (with VA state employee benefits) with a salary based on the NIH postdoctoral scale, and will have a private office, a new computer, and full-time access to a Silicon Graphics server and consoles. Send CV, (p)reprints, a brief description of your motivation, and the names, email addresses and phone/fax numbers of references to: ascoli at gmu.edu (or by fax at the number below) ASAP. There is no deadline but the position will be filled as soon as a suitable candidate is found. Non-resident aliens are also welcome to apply. The Krasnow Institute is an equal opportunity employer. Giorgio Ascoli, PhD Krasnow Institute for Advanced Study at George Mason University, MS2A1 Fairfax, VA 22030 Ph. (703)993-4383 Fax (703)993-4325 2) PROGRAMMER POSITION AVAILABLE FOR NEUROSCIENCE PROJECT A position for a junior scientific programmer is available immediately to work on a research project aimed at the virtual construction of portions of the brain. Candidates must have strong C (C++ a plus) programming skills and software development experience in both Windows and Unix environments. No background in neuroscience is required, but interest and motivation are highly desirable. All undergraduate/graduate students as well as BA's, BS's, and MS's are encouraged to apply. The position can be either part-time or full-time, and the schedule is extremely flexible. The programmer will join a young and dynamic research group at the Krasnow Institute for Advanced Study, located in Fairfax, VA (<20 miles west of Washington DC). The successful candidate will work with the principal investigator (Dr. Ascoli) on the implementation of algorithms to generate neuronal structures in 3D according to known and novel anatomical rules. In addition, routines will be developed to measure geometrical and topological parameters from the tree-like branching structures. All the executables (and some of the code, at the discretion of the principal investigator) will be publicly distributed, and the programmer will be given full intellectual credit in both software distribution and scientific publications. The programmer will be hired as a Research Instructor (with VA state employee benefits) with a salary proportional to experience and skills, and will have a new PC and full-time access to a Silicon Graphics server and O2 consoles. Send CV, a brief description of your motivation, and the names, email addresses and phone/fax numbers of references to: ascoli at gmu.edu (or by fax at the number below) ASAP.
There is no deadline but the position will be filled as soon as a suitable candidate is found. Non-resident aliens are also welcome to apply. The Krasnow Institute is an equal opportunity employer. Giorgio Ascoli, PhD Krasnow Institute for Advanced Study at George Mason University, MS2A1 Fairfax, VA 22030 Ph. (703)993-4383 Fax (703)993-4325 From harnad at coglit.ecs.soton.ac.uk Wed Apr 7 16:15:45 1999 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Wed, 7 Apr 1999 21:15:45 +0100 (BST) Subject: Pavlovian Feed-Forward Mechanisms: BBS Call for Commentators Message-ID: Below is the abstract of a forthcoming BBS target article *** please see also 5 important announcements about new BBS policies and address change at the bottom of this message) *** PAVLOVIAN FEED-FORWARD MECHANISMS IN THE CONTROL OF SOCIAL BEHAVIOR by Michael Domjan, Brian Cusato, & Ronald Villarreal This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by April 8th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. _____________________________________________________________ PAVLOVIAN FEED-FORWARD MECHANISMS IN THE CONTROL OF SOCIAL BEHAVIOR Michael Domjan, Brian Cusato, & Ronald Villarreal Department of Psychology University of Texas Austin, Texas 78712 U.S.A. Tel: 512-471-7702 Fax: 512-471-6175 Domjan at psy.utexas.edu ABSTRACT: The conceptual and investigative tools that are brought to bear on the analysis of social behavior are expanded by integrating biological theory, control systems theory, and Pavlovian conditioning. Biological theory has focused on the costs and benefits of social behavior from ecological and evolutionary perspectives. In contrast, control systems theory is concerned with how machines achieve a particular goal or purpose. The accurate operation of a system often requires feed-forward mechanisms that adjust system performance in anticipation of future inputs. Pavlovian conditioning is ideally suited to serve this function in behavioral systems. Pavlovian mechanisms have been demonstrated in various aspects of sexual behavior, maternal lactation, and infant suckling. 
Pavlovian conditioning of agonistic behavior has been also reported, and Pavlovian processes may be similarly involved in social play and social grooming. In addition, several lines of evidence indicate that Pavlovian conditioning can increase the efficiency and effectiveness of social interactions, thereby improving the cost/benefit ratio. The proposed integrative approach serves to extend Pavlovian concepts beyond the traditional domain of discrete secretory and other physiological reflexes to complex real-world behavioral interactions and helps apply abstract laboratory analyses of the mechanisms of associative learning to the daily challenges animals face as they interact with one another in their natural environment. KEYWORDS: social behavior, biological theory, control theory, feed-forward mechanisms, learning theory, Pavlovian conditioning, aggression, sexual behavior, nursing and lactation, social play, social grooming ____________________________________________________________ To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.domjan.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.domjan ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.domjan To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.domjan When you have the file(s) you want, type: quit ____________________________________________________________ *** FIVE IMPORTANT ANNOUNCEMENTS *** ------------------------------------------------------------------ (1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm --------------------------------------------------------------------- (2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers (on their Home-Servers as well as) on CogPrints: http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone. --------------------------------------------------------------------- (3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. 
Upon acceptance, preprints of final drafts are moved to the public BBS Archive: ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html http://www.cogsci.soton.ac.uk/bbs/Archive/ -------------------------------------------------------------------- (4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. Authors are invited to submit papers to: Email: bbs at cogsci.soton.ac.uk Web: http://cogprints.soton.ac.uk http://bbs.cogsci.soton.ac.uk/ INSTRUCTIONS FOR AUTHORS: http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html --------------------------------------------------------------------- (5) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). From wahba at stat.wisc.edu Wed Apr 7 19:19:17 1999 From: wahba at stat.wisc.edu (Grace Wahba) Date: Wed, 7 Apr 1999 18:19:17 -0500 (CDT) Subject: SVM's and the GACV Message-ID: <199904072319.SAA09222@hera.stat.wisc.edu> The following paper was the basis for my NIPS*98 Large Margin Classifier Workshop talk. now available as University of Wisconsin-Madison Statistics Dept TR1006 in http://www.stat.wisc.edu/~wahba -> TRLIST .................................................. Generalized Approximate Cross Validation For Support Vector Machines, or, Another Way to Look at Margin-Like Quantities. Grace Wahba, Yi Lin and Hao Zhang. Abstract We first review the steps connecting the Support Vector Machine (SVM) paradigm in reproducing kernel Hilbert space, and and its connection to the (dual) mathematical programming problem traditional in SVM classification problems. We then review the Generalized Comparative Kullback-Leibler Distance (GCKL) for the SVM paradigm and observe that it is trivially a simple upper bound on the expected misclassification rate. Next we revisit the Generalized Approximate Cross Validation (GACV) as a computable proxy for the GCKL, as a function of certain tuning parameters in SVM kernels. We have found a justifiable (new) approximation for the GACV which is readily computed exactly along with the SVM solution to the dual mathematical programming problem. This GACV turns out interestingly, but not surprisingly to be simply related to what several authors have identified as the (observed) VC dimension of the estimated SVM. Some preliminary simulations in a special case are suggestive of the fact that the minimizer of the GACV is in fact a good estimate of the minimizer of the GCKL, although further simulation and theoretical studies are warranted. 
It is hoped that this preliminary work will lead to better understanding of `tuning' issues in the optimization of SVM's and related classifiers. ................................................. From harnad at coglit.ecs.soton.ac.uk Thu Apr 8 08:09:07 1999 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Thu, 8 Apr 1999 13:09:07 +0100 (BST) Subject: CogPrints: Archive of Articles in Psychology, Neuroscience, etc. Message-ID: CogPrints Author Archive To all biobehavioral, neural and cognitive scientists: You are invited to archive all your preprints and reprints in the CogPrints electronic archive: http://cogprints.soton.ac.uk There have been some very important developments in the area of Web archiving of scientific papers recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm The CogPrints Archive covers all the Cognitive Sciences: Psychology, Neuroscience, Biology, Computer Science, Linguistics and Philosophy CogPrints is completely free for everyone, both authors and readers, thanks to a subsidy from the Electronic Libraries Programme of the Joint Information Systems of the United Kingdom and the collaboration of the NSF/DOE-supported Physics Eprint Archive at Los Alamos. CogPrints has recently been opened for public automatic archiving. This means authors can now deposit their own papers automatically. The first wave of papers had been invited and hand-archived by CogPrints in order to set a model of the form and content of CogPrints. To see the current holdings: http://cogprints.soton.ac.uk/ To archive your own papers automatically: http://cogprints.soton.ac.uk/author.html All authors are encouraged to archive their papers on their home servers as well. For further information: admin at coglit.soton.ac.uk -------------------------------------------------------------------- BACKGROUND INFORMATION (No need to read if you wish to proceed directly to the Archive.) The objective of CogPrints is to emulate in the cognitive, neural and biobehavioral sciences the remarkable success of the NSF/DOE-subsidised Physics Eprint Archive at Los Alamos http://xxx.lanl.gov (US) http://xxx.soton.ac.uk (UK) The Physics Eprint Archive now makes available, free for all, well over half of the annual physics periodical literature, with its annual growth strongly suggesting that it will not be long before it becomes the locus classicus for all of the literature in Physics. 25,000 new papers are being deposited annually and there are over 35,000 users daily and 15 mirror sites worldwide. (Daily statistics: http://xxx.lanl.gov/cgi-bin/todays_stats) What this means is that anyone in the world with access to the Internet (and that number too is rising at a breath-taking rate, and already includes all academics, researchers and students in the West, and an increasing proportion in the Third World as well) can now search and retrieve virtually all current work in, for example, High Energy Physics, much of it retroactive to 1990 when the Physics archive was founded by Paul Ginsparg, who must certainly be credited by historians with having launched this revolution in scientific and scholarly publication (www-admin at xxx.lanl.gov). Does this mean that learned journals will disappear? Not at all.
They will continue to play their traditional role of validating research through peer review, but this function will be an "overlay" on the electronic archives. The literature that is still in the form of unrefereed preprints and technical reports will be classified as such, to distinguish it from the refereed literature, which will be tagged with the imprimatur of the journal that refereed and accepted it for publication, as it always has been. It will no longer be necessary for publishers to recover (and research libraries to pay) the substantial costs of producing and distributing paper through ever-higher library subscription prices: Instead, it will be the beneficiaries of the global, unimpeded access to the learned research literature -- the funders of the research and the employers of the researcher -- who will cover the much reduced costs of implementing peer review, editing, and archiving in the electronic medium alone, in the form of minimal page-charges, in exchange for instant, permanent, worldwide access to the research literature for all, for free. If this arrangement strikes you as anomalous, consider that the real anomaly was that the authors of the scientific and scholarly periodical research literature, who, unlike trade authors, never got (or expected) royalties for the sale of their texts -- on the contrary, so important was it to them that their work should reach all potentially interested fellow-researchers that they had long been willing to pay for the printing and mailing of preprints and reprints to those who requested them -- nevertheless had to consent to have access to their work restricted to those who paid for it. This Faustian bargain was unavoidable in the Gutenberg age, because of the need to recover the high cost of producing and disseminating print on paper, but Paul Ginsparg has shown the way to launch the entire learned periodical literature into the PostGutenberg Galaxy, in which scientists and scholars can publish their work in the form of "skywriting": visible and available for free to all. -------------------------------------------------------------------- Stevan Harnad harnad at cogsci.soton.ac.uk Professor of Psychology harnad at princeton.edu Director, phone: +44 1703 592582 Cognitive Sciences Centre fax: +44 1703 594597 Department of Psychology http://www.cogsci.soton.ac.uk/~harnad/ University of Southampton http://www.princeton.edu/~harnad/ Highfield, Southampton ftp://ftp.princeton.edu/pub/harnad/ SO17 1BJ UNITED KINGDOM ftp://cogsci.soton.ac.uk/pub/harnad/ See: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm From shastri at ICSI.Berkeley.EDU Thu Apr 8 21:00:06 1999 From: shastri at ICSI.Berkeley.EDU (Lokendra Shastri) Date: Thu, 08 Apr 1999 18:00:06 PDT Subject: A Biological Grounding of Recruitment Learning and Vicinal Algorithms Message-ID: <199904090100.SAA13020@lassi.ICSI.Berkeley.EDU> Dear Connectionists, The following report may be of interest to you. Best wishes. 
-- Lokendra Shastri http://www.icsi.berkeley.edu/~shastri/psfiles/tr-99-009.ps.gz ------------------------------------------------------------------------------ A Biological Grounding of Recruitment Learning and Vicinal Algorithms Lokendra Shastri International Computer Science Institute Berkeley, CA 94704 TR-99-009 April, 1999 Biological neural networks are capable of gradual learning based on observing a large number of exemplars over time as well as rapidly memorizing specific events as a result of a single exposure. The primary focus of research in connectionist modeling has been on gradual learning, but some researchers have also attempted the computational modeling of rapid (one-shot) learning within a framework described variably as recruitment learning and vicinal algorithms. While general arguments for the neural plausibility of recruitment learning and vicinal algorithms based on notions of neural plasticity have been presented in the past, a specific neural correlate of such learning has not been proposed. Here it is shown that recruitment learning and vicinal algorithms can be firmly grounded in the biological phenomena of long-term potentiation (LTP) and long-term depression (LTD). Toward this end, a computational abstraction of LTP and LTD is presented, and an ``algorithm'' for the recruitment of binding-detector cells is described and evaluated using biologically realistic data. It is shown that binding-detector cells of distinct bindings exhibit low levels of cross-talk even when the bindings overlap. In the proposed grounding, the specification of a vicinal algorithm amounts to specifying an appropriate network architecture and suitable parameter values for the induction of LTP and LTD. KEYWORDS: one-shot learning; memorization; recruitment learning; dynamic bindings; long-term potentiation; binding detection. From harnad at coglit.ecs.soton.ac.uk Fri Apr 9 13:37:32 1999 From: harnad at coglit.ecs.soton.ac.uk (Stevan Harnad) Date: Fri, 9 Apr 1999 18:37:32 +0100 (BST) Subject: EEG AND NEOCORTICAL FUNCTION: BBS Call for Commentators Message-ID: Below is the abstract of a forthcoming BBS target article: NEOCORTICAL DYNAMIC FUNCTION AND EEG by Paul L. Nunez This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL by May 14th to: bbs at cogsci.soton.ac.uk or write to [PLEASE NOTE SLIGHTLY CHANGED ADDRESS]: Behavioral and Brain Sciences ECS: New Zepler Building University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection with a WWW browser, anonymous ftp or gopher according to the instructions that follow after the abstract. _____________________________________________________________ TOWARD A QUANTITATIVE DESCRIPTION OF LARGE SCALE NEOCORTICAL DYNAMIC FUNCTION AND EEG. Paul L. Nunez Permanent Address: Brain Physics Group, Dept. of Biomedical Engineering, Tulane University, New Orleans, Louisiana 70118 pnunez at mailhost.tcs.tulane.edu Temporary Address (6/98 - 6/00): Brain Sciences Institute, Swinburne University of Technology, 400 Burwood Road, Melbourne, Victoria 3122, Australia pnunez at mind.scan.swin.edu.au ABSTRACT: A conceptual framework for large-scale neocortical dynamic behavior is proposed. It is sufficiently general to embrace brain theories applied to different experimental designs, spatial scales and brain states. This framework, based on the work of many scientists, is constructed from anatomical, physiological and EEG data. Neocortical dynamics and correlated behavioral/cognitive brain states are viewed in the context of partly distinct, but interacting local (regionally specific) processes and globally coherent dynamics. Local and regional processes (eg, neural networks) are enabled by functional segregation; global processes are facilitated by functional integration. Global processes can also facilitate synchronous activity in remote cell groups (top down) which function simultaneously at several different spatial scales. At the same time, local processes may help drive (bottom up) macroscopic global dynamics observed with EEG (or MEG). A specific, physiologically based local/global dynamic theory is outlined in the context of this general conceptual framework. It is consistent with a body of EEG data and fits naturally within the proposed conceptual framework. The theory is incomplete since its physiological control parameters are known only approximately. Thus, brain state-dependent contributions of local versus global dynamics cannot be predicted. It is also neutral on properties of neural networks, assumed to be embedded within macroscopic fields. Nevertheless, the purely global part of the theory makes qualitative, and in a few cases, semi-quantitative predictions of the outcomes of several disparate EEG studies in which global contributions to the dynamics appear substantial. Experimental data are used to obtain a variety of measures of traveling and standing wave phenomena, predicted by the pure global theory. The more general local/global theory is also proposed as a "meta-theory," a suggestion of what large-scale quantitative theories of neocortical dynamics may be like when more accurate treatment of local and non-linear effects is achieved. In the proposed local/global theory, the dynamics of excitatory and inhibitory synaptic action fields are described. EEG and MEG are believed to provide large-scale estimates of modulation of these synaptic fields about background levels. Brain state is determined by neuromodulatory control parameters. Some states are dominated by local cell groups, in which EEG frequencies are due to local feedback gains and rise and decay times of post-synaptic potentials. Local frequencies vary with brain location. 
Other states are strongly global, with multiple, closely spaced EEG frequencies, but identical at each cortical location. Coherence at these frequencies is high over large distances. The global mode frequencies are due to a combination of delays in cortico-cortical axons and neocortical boundary conditions. Many states involve dynamic interactions between local networks and the global system, in which case observed EEG frequencies may involve "matching" of local resonant frequencies with one or more of the global frequencies. KEYWORDS: EEG, neocortical dynamics, standing waves, functional integration, spatial scale, binding problem, synchronization, coherence, cell assemblies, limit cycles, pacemakers ____________________________________________________________ To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable from the World Wide Web or by anonymous ftp from the US or UK BBS Archive. Ftp instructions follow below. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. The URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.nunez.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.nunez ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.nunez *** FIVE IMPORTANT ANNOUNCEMENTS *** ------------------------------------------------------------------ (1) There have been some very important developments in the area of Web archiving of scientific papers very recently. Please see: Science: http://www.cogsci.soton.ac.uk/~harnad/science.html Nature: http://www.cogsci.soton.ac.uk/~harnad/nature.html American Scientist: http://www.cogsci.soton.ac.uk/~harnad/amlet.html Chronicle of Higher Education: http://www.chronicle.com/free/v45/i04/04a02901.htm --------------------------------------------------------------------- (2) All authors in the biobehavioral and cognitive sciences are strongly encouraged to archive all their papers (on their Home-Servers as well as) on CogPrints: http://cogprints.soton.ac.uk/ It is extremely simple to do so and will make all of our papers available to all of us everywhere at no cost to anyone. --------------------------------------------------------------------- (3) BBS has a new policy of accepting submissions electronically. Authors can specify whether they would like their submissions archived publicly during refereeing in the BBS under-refereeing Archive, or in a referees-only, non-public archive. Upon acceptance, preprints of final drafts are moved to the public BBS Archive: ftp://ftp.princeton.edu/pub/harnad/BBS/.WWW/index.html http://www.cogsci.soton.ac.uk/bbs/Archive/ -------------------------------------------------------------------- (4) BBS has expanded its annual page quota and is now appearing bimonthly, so the service of Open Peer Commentary can now be be offered to more target articles. The BBS refereeing procedure is also going to be considerably faster with the new electronic submission and processing procedures. 
Authors are invited to submit papers to: Email: bbs at cogsci.soton.ac.uk Web: http://cogprints.soton.ac.uk http://bbs.cogsci.soton.ac.uk/ INSTRUCTIONS FOR AUTHORS: http://www.princeton.edu/~harnad/bbs/instructions.for.authors.html http://www.cogsci.soton.ac.uk/bbs/instructions.for.authors.html --------------------------------------------------------------------- (5) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) journal had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). From bern at cs.umass.edu Fri Apr 9 17:33:18 1999 From: bern at cs.umass.edu (Dan Bernstein) Date: Fri, 09 Apr 1999 17:33:18 -0400 Subject: A tech report on transfer of solutions across multiple RL tasks Message-ID: <199904092133.RAA29163@ganymede.cs.umass.edu> Anouncing a technical report related to solving multiple RL tasks: http://www-anw.cs.umass.edu/~bern/publications/reuse_tech.ps -------------------------------------------------------------------------- Daniel S. Bernstein Adaptive Networks Lab Department of Computer Science University of Massachusetts, Amherst TR-1999-26 April, 1999 We consider the reuse of policies for previous MDPs in learning on a new MDP, under the assumption that the vector of parameters of each MDP is drawn from a fixed probability distribution. We use the options framework, in which an option consists of a set of initiation states, a policy, and a termination condition. We use an option called a \emph{reuse option}, for which the set of initiation states is the set of all states, the policy is a combination of policies from the old MDPs, and the termination condition is based on the number of time steps since the option was initiated. Given policies for $m$ of the MDPs from the distribution, we construct reuse options from the policies and compare performance on an $m+1$st MDP both with and without various reuse options. We find that reuse options can speed initial learning of the $m+1$st task. We also present a distribution of MDPs for which reuse options can slow initial learning. We discuss reasons for this and suggest other ways to design reuse options. Keywords: reinforcement learning, Markov decision processes, options, learning to learn ---------------------------------------------------------------------------- ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Daniel S. 
Bernstein URL: http://www-anw.cs.umass.edu/~bern Department of Computer Science EMAIL: bern at cs.umass.edu University of Massachusetts PHONE: (413)545-1596 [office] Amherst, MA 01003 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ From C.Campbell at bristol.ac.uk Sat Apr 10 04:46:09 1999 From: C.Campbell at bristol.ac.uk (I C G Campbell) Date: Sat, 10 Apr 1999 09:46:09 +0100 (BST) Subject: New Preprint Server (SVMs, COLT, etc) Message-ID: <199904100846.JAA23589@zeus.bris.ac.uk> NEW PREPRINT SERVER We have started up a new preprint server at: http://lara.enm.bris.ac.uk/cig/pubs_nf.htm This contains 16 recent preprints with the titles and authorship listed below. The server would mainly be of interest to researchers in the area of support vector machines and computational learning theory. There are also some further papers on the application of machine learning techniques to medical decision support. Enjoy! Colin Campbell (Bristol University) _________________________________ 1. Data Dependent Structural Risk Minimization for Perceptron Decision Trees. John Shawe-Taylor and Nello Cristianini. 2. Bayesian Voting Schemes and Large Margin Classifiers. Nello Cristianini and John Shawe-Taylor. 3. Bayesian Classifiers are Large Margin Hyperplanes in a Hilbert Space. Nello Cristianini, John Shawe-Taylor and Peter Sykacek, in Shavlik J. (editor) 4. Bayesian Classifiers are Large Margin Hyperplanes in a Hilbert Space. Nello Cristianini, John Shawe-Taylor and Peter Sykacek. 5. Margin Distribution Bounds on Generalisation. John Shawe-Taylor and Nello Cristianini. 6. Robust Bounds on Generalisation from the Margin Distribution. John Shawe-Taylor and Nello Cristianini. 7. Simple Training Algorithms for Support Vector Machines. Colin Campbell and Nello Cristianini. 8. The Kernel-Adatron: a Fast and Simple Learning Procedure for Support Vector Machines. Thilo Friess, Nello Cristianini and Colin Campbell. 9. Large Margin Classification Using the Kernel Adatron Algorithm. Colin Campbell, Thilo Friess and Nello Cristianini. 10. Dynamically Adapting Kernels in Support Vector Machines. Nello Cristianini, Colin Campbell and John Shawe-Taylor. 11. Multiplicative Updatings for Support Vector Machines. Nello Cristianini, Colin Campbell and John Shawe-Taylor. 12. Enlarging the Margin in Perceptron Decision Trees. Kristin Bennett, Nello Cristianini, John Shawe-Taylor and Donghui Wu. 13. Large Margin Decision Trees for Induction and Transduction. Donghui Wu, Kristin Bennett, Nello Cristianini and John Shawe-Taylor. 14. Bayes Point Machines: Estimating the Bayes Point in Kernel Space. Ralf Herbrich, Thore Graepel and Colin Campbell. 15. Bayesian Learning in Reproducing Kernel Hilbert Spaces: The Usefulness of the Bayes Point. Ralf Herbrich, Thore Graepel and Colin Campbell. 16. Further Results on the Margin Distribution. John Shawe-Taylor and Nello Cristianini. From Marco.Budinich at trieste.infn.it Sat Apr 10 07:54:19 1999 From: Marco.Budinich at trieste.infn.it (Marco Budinich (tel. +39-040-676 3391)) Date: Sat, 10 Apr 1999 13:54:19 +0200 Subject: Paper available Message-ID: Dear Colleagues, the following paper, to appear in Neural Computation, is presently available at: http://www.ts.infn.it/~mbh/PubABS.html#Nonu_corr all comments are most welcome.
All the best, Marco Budinich ---------------------------------------------------------------- Adaptive Calibration of Imaging Array Detectors Marco Budinich and Renato Frison Physics Department - University of Trieste, Italy (to appear in Neural Computation) Abstract - In this paper we present two methods for non-uniformity correction of imaging array detectors based on neural networks, both of which exploit image properties to make up for the lack of calibration while maximizing the entropy of the output. The first method uses a self-organizing net that produces a linear correction of the raw data with coefficients that adapt continuously. The second method employs a kind of contrast equalization curve to match pixel distributions. Our work originates from silicon detectors but the treatment is general enough to be applicable to many kinds of array detectors like those used in infrared imaging or in high energy physics. +-----------------------------------------------------------------+ | Marco Budinich | | Dipartimento di Fisica Tel.: +39 040 676 3391 | | Via Valerio 2 Fax.: +39 040 676 3350 | | 34127 Trieste ITALY e-mail: mbh at trieste.infn.it | | | | www: http://www.ts.infn.it/~mbh/MBHgeneral.html | +-----------------------------------------------------------------+ From jfgf at eng.cam.ac.uk Mon Apr 12 06:46:00 1999 From: jfgf at eng.cam.ac.uk (J.F. Gomes De Freitas) Date: Mon, 12 Apr 1999 11:46:00 +0100 (BST) Subject: Software + Papers Message-ID: Dear colleagues, You can find the following papers: 1 Sequential Monte Carlo Methods for Optimisation of Neural Network Models (a similar version to appear in Neural Computation). 2 Nonlinear State Space Estimation with Neural Networks and the EM algorithm (possibly to appear in a special issue of VLSI Signal Processing Systems). and the Matlab software at my Cambridge web site: http://svr-www.eng.cam.ac.uk/~jfgf/software.html I'd be grateful for feedback. The abstracts follow: 1 Sequential Monte Carlo Methods for Optimisation of Neural Network Models: We discuss a novel strategy for training neural networks using sequential Monte Carlo algorithms and propose a new hybrid gradient descent/sampling importance resampling algorithm (HySIR). In terms of both computational time and accuracy, the hybrid SIR is a clear improvement over conventional sequential Monte Carlo techniques. The new algorithm may be viewed as a global optimisation strategy, which allows us to learn the probability distributions of the network weights and outputs in a sequential framework. It is well suited to applications involving on-line, nonlinear and non-Gaussian signal processing. We show how the new algorithm outperforms extended Kalman filter training on several problems. In particular, we address the problem of pricing option contracts, traded in financial markets. In this context, we are able to estimate the one-step-ahead probability density functions of the options prices. 2 Nonlinear State Space Estimation with Neural Networks and the EM algorithm: In this paper, we derive an EM algorithm for nonlinear state space models. We use it to estimate jointly the neural network weights, the model uncertainty and the noise in the data. In the E-step we apply a forward-backward Rauch-Tung-Striebel smoother to compute the network weights. For the M-step, we derive expressions to compute the model uncertainty and the measurement noise. We find that the method is intrinsically very powerful, simple, elegant and stable.
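For readers who want a concrete picture of the sequential Monte Carlo machinery behind the first paper, here is a minimal, purely illustrative sketch (Python/NumPy) of plain sampling-importance-resampling (SIR) applied to the weights of a toy network. It is not the authors' code: it omits the gradient step that distinguishes HySIR, and the network size, noise levels and toy data below are invented for illustration only.

    import numpy as np

    rng = np.random.default_rng(0)
    n_hidden = 5
    dim = 3 * n_hidden + 1          # weights of a 1-input, 5-hidden, 1-output MLP
    n_particles = 200
    q_std, r_std = 0.02, 0.1        # assumed transition and observation noise levels

    def mlp_predict(w, x):
        # unpack a flat weight vector and run the tiny network forward
        W1 = w[:n_hidden].reshape(n_hidden, 1)
        b1 = w[n_hidden:2 * n_hidden]
        W2 = w[2 * n_hidden:3 * n_hidden].reshape(1, n_hidden)
        b2 = w[3 * n_hidden]
        h = np.tanh(W1 @ x + b1)
        return float((W2 @ h)[0] + b2)

    def sir_step(particles, x, y):
        # 1. propagate each weight particle with a random-walk transition
        particles = particles + q_std * rng.standard_normal(particles.shape)
        # 2. weight particles by the Gaussian likelihood of the new observation
        preds = np.array([mlp_predict(w, x) for w in particles])
        logw = -0.5 * ((y - preds) / r_std) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # 3. resample, so high-likelihood weight vectors are duplicated
        idx = rng.choice(n_particles, size=n_particles, p=w)
        return particles[idx]

    particles = 0.1 * rng.standard_normal((n_particles, dim))
    for t in range(200):                      # a toy streaming regression problem
        x = rng.uniform(-1.0, 1.0, size=(1,))
        y = np.sin(2.5 * x[0]) + 0.1 * rng.standard_normal()
        particles = sir_step(particles, x, y)

    preds = [mlp_predict(w, np.array([0.5])) for w in particles]
    print("posterior mean:", np.mean(preds), "spread:", np.std(preds), "target:", np.sin(1.25))

Because the posterior over the weights is carried by the particle cloud, the spread of the per-particle predictions gives a rough predictive density at no extra cost, which is the kind of one-step-ahead density the option-pricing experiments report.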
Best wishes Nando _______________________________________________________________________________ JFG de Freitas (Nando) Speech, Vision and Robotics Group Information Engineering Cambridge University CB2 1PZ England http://svr-www.eng.cam.ac.uk/~jfgf Tel (01223) 302323 (H) (01223) 332754 (W) _______________________________________________________________________________ From michael.j.healy at boeing.com Mon Apr 12 11:08:37 1999 From: michael.j.healy at boeing.com (Michael J. Healy 425-865-3123) Date: Mon, 12 Apr 1999 08:08:37 -0700 Subject: Paper on topological semantics Message-ID: <199904121508.IAA19327@lilith.network-b> This is now out in the latest issue of Connection Science. I have a supply of reprints and will be glad to send one to anyone who is interested: M. J. Healy (1999) "A Topological Semantics for Rule Extraction with Neural Networks", Connection Science, vol. 11, no. 1, pp. 91-113. This is the paper I referred to in the Connectionists on-line discussion on connectionist symbol processing and in the follow-up issue of Neural Computing Surveys. It introduces a mathematical approach to analyzing the semantics of neural networks and their applications. It surveys the point-set-topology version of the approach; a more involved category-theoretic version is intended for a later paper. The main example given in this paper concerns the adequacy of a specific neural network architecture for rule extraction. Items discussed in addition to the mathematical background include issues in learning from data, specialization hierarchies, the concept of a prototype data object, continuous functions and rule-based systems, formal verification, and how the topological approach might be applied to neural network analysis and design. Mike --
===========================================================================
Michael J. Healy                                  e_A
(425)865-3123                             FA ----------> GA
FAX(425)865-2964                          |              |
c/o The Boeing Company                 Ff |              | Gf
PO Box 3707 MS 7L-66                     \|/            \|/
Seattle, WA 98124-2207                    '              '
USA                                       FB ----------> GB
  -or for priority mail-                          e_B
2760 160th Ave SE MS 7L-66               "I'm a natural man."
Bellevue, WA 98008 USA
michael.j.healy at boeing.com
  -or-
mjhealy at u.washington.edu
============================================================================
From ingber at ingber.com Wed Apr 14 06:26:56 1999 From: ingber at ingber.com (Lester Ingber) Date: Wed, 14 Apr 1999 05:26:56 -0500 Subject: Trading Research/Programmer Positions Chicago Message-ID: <19990414052656.A11119@ingber.com> Trading Research/Programmer Positions Chicago * Open R&D Positions in Computational Finance/Chicago DRW Investments, LLC, a proprietary trading firm based at the Chicago Mercantile Exchange, with a branch office in London, is expanding its research department. This R&D group works directly with other traders as well as develops its own automated trading capabilities.
Please email Lester Ingber a resume regarding this position. * Graduate Students -- Part Time We are looking for part-time graduate students in computational finance who might impact our trading practices. We expect that each graduate student will be a full-time PhD student at a local university, spending approximately 10-20 hours/week at DRW. Projects will focus on publishable results. See http://www.ingber.com/ for some papers on current projects. Please email Lester Ingber a resume regarding these positions. * Trading Clerks -- Full Time We are seeking motivated individuals with a strong work ethic and desire to learn about trading. Our primary focus is on financial futures and options. Responsibilities include actively assisting traders in all aspects of trading. Exposure to math including calculus and statistics is desirable. Please email Jeff Levoff a resume or any questions regarding these positions. * Additional Information We are planning our move to larger offices in our present vicinity of the Chicago exchanges before August 1999. Programmer/analyst and graduate student positions will be filled after our move. * Updates Updates on the status of open DRW positions will be placed in http://www.ingber.com/MISC.DIR/drw_open_positions -- /* Lester Ingber http://www.ingber.com/ ftp://ftp.ingber.com * * ingber at ingber.com ingber at alumni.caltech.edu ingber at drwtrading.com * * PO Box 06440 Wacker Dr PO Sears Tower Chicago IL 60606-0440 */ From omlin at waterbug.cs.sun.ac.za Thu Apr 15 08:56:29 1999 From: omlin at waterbug.cs.sun.ac.za (Christian Omlin) Date: Thu, 15 Apr 1999 14:56:29 +0200 Subject: technical report announcement - rule extraction Message-ID: Dear Connectionists, The following technical report (see abstract attached below) A. Vahed, C.W. Omlin, "Rule Extraction from Recurrent Neural Networks using a Symbolic Machine Learning Algorithm" is available from http://www.cs.sun.ac.za/~omlin/papers/iconip_99.paper.ps.gz This paper contains preliminary results and we welcome any comments you may have. Thank you. Best regards, Christian Christian W. Omlin e-mail: omlin at cs.sun.ac.za Department of Computer Science phone (direct): +27-21-808-4210 University of Stellenbosch phone (secretary): +27-21-808-4232 Private Bag X1 fax: +27-21-808-4416 Stellenbosch 7602 http://www.cs.sun.ac.za/people/staff/omlin SOUTH AFRICA http://www.neci.nj.nec.com/homepages/omlin ------------------------------- cut here -------------------------------- Rule Extraction from Recurrent Neural Networks using a Symbolic Machine Learning Algorithm A. Vahed (Department of Computer Science, University of the Western Cape, 7535 Bellville, South Africa, avahed at uwc.ac.za) and C.W. Omlin (Department of Computer Science, University of Stellenbosch, 7600 Stellenbosch, South Africa, omlin at cs.sun.ac.za) This paper addresses the extraction of knowledge from recurrent neural networks trained to behave like deterministic finite-state automata (DFAs). To date, methods used to extract knowledge from such networks have relied on the hypothesis that network states tend to cluster and that clusters of network states correspond to DFA states. The computational complexity of such a cluster analysis has led to heuristics which either limit the number of clusters that may form during training or limit the exploration of the output space of hidden recurrent state neurons. These limitations, while necessary, may lead to decreased fidelity, i.e. the extracted knowledge may not model the true behavior of a trained network, perhaps not even for the training set. The method proposed here uses a polynomial-time, symbolic learning algorithm to infer DFAs solely from the observation of a trained network's input/output behavior. Thus, this method has the potential to increase the fidelity of the extracted knowledge. From dacierno.a at irsip.na.cnr.it Thu Apr 15 09:22:57 1999 From: dacierno.a at irsip.na.cnr.it (Antonio d'Acierno) Date: Thu, 15 Apr 1999 15:22:57 +0200 Subject: New Paper Message-ID: <3715E831.346264AC@irsip.na.cnr.it> [ Reposted due to a computer glitch that truncated the abstract. -- Dave Touretzky, CONNECTIONISTS moderator ] Dear Connectionists, the following paper "Back-Propagation Learning Algorithm and Parallel Computers: The CLEPSYDRA Mapping Scheme" (accepted for publication in Neurocomputing) is available at my web site: http://ultrae.irsip.na.cnr.it/~tonino Abstract This paper deals with the parallel implementation of the back-propagation of errors learning algorithm. To obtain the partitioning of the neural network on the processor network, the author describes a new mapping scheme that uses a mixture of synapse parallelism, neuron parallelism and training examples parallelism (if any). The proposed mapping scheme allows the back-propagation algorithm to be described as a collection of SIMD processes, so that both SIMD and MIMD machines can be used. The main feature of the obtained parallel algorithm is the absence of point-to-point communication; in fact, for each training pattern, an all-to-one broadcasting with an associative operator (combination) and a one-to-all broadcasting (both of which can be realized in log P time) are needed. A performance model is proposed and tested on a ring-connected MIMD parallel computer. Simulation results on MIMD and SIMD parallel machines are also shown and commented on. Keywords: Back-Propagation, Mapping Scheme, MIMD Parallel Computers, SIMD Parallel Computers I welcome any comments and suggestions for improvement! Thank you and regards. -- Antonio d'Acierno IRSIP - CNR via P. Castellino, 111 80131 Napoli Italy tel: + 39 081 5904221 fax: + 39 081 5608330 mobile: 0339 6472723 mailto:dacierno.a at irsip.na.cnr.it mailto:adacierno at yahoo.com http://ultrae.irsip.na.cnr.it/~tonino From tononi at nsi.edu Thu Apr 15 18:42:21 1999 From: tononi at nsi.edu (Giulio Tononi) Date: Thu, 15 Apr 1999 15:42:21 -0700 Subject: Junior Fellow (postdoc) positions, The Neurosciences Institute Message-ID: <000201be8791$39591660$1bb985c6@spud.nsi.edu> THE NEUROSCIENCES INSTITUTE, SAN DIEGO The Neurosciences Institute is an independent, not-for-profit organization at the forefront of research on the brain. Research at the Institute spans levels from the molecular to the behavioral and from the computational to the cognitive. The Institute has a strong tradition in theoretical neurobiology and has recently established new experimental facilities.
The Institute is also the home of the Neurosciences Research Program and serves as an international meeting place for neuroscientists. JUNIOR FELLOW, NEURAL BASIS OF CONSCIOUSNESS. The Institute has a strong tradition in the theoretical and experimental study of consciousness (see Science, 282:1846-1851). Applications are invited for positions as Junior Fellows to collaborate on experimental and theoretical studies of the neural correlates of conscious perception. Applicants should be at the Postdoctoral level with strong backgrounds in cognitive neuroscience, neuroimaging (including MEG, EEG, and fMRI), and theoretical neurobiology. JUNIOR FELLOW IN THEORETICAL NEUROBIOLOGY. Applications are invited for positions as Junior Fellows in Theoretical Neurobiology. Since 1987, the Institute has had a research program dedicated to developing biologically based, experimentally testable theoretical models of neural systems. Current projects include large-scale simulations of neuronal networks and the analysis of functional interactions among brain areas using information-theoretical approaches. Advanced computing facilities are available. Applicants should be at the Postdoctoral level with strong backgrounds in mathematics, statistics, and computer modeling. JUNIOR FELLOW IN EXPERIMENTAL NEUROBIOLOGY. Applications are invited for positions as Junior Fellow in Experimental Neurobiology. A current focus of the Institute is on the action and pharmacological manipulation of neuromodulatory systems with diffuse projections, such as the noradrenergic, serotoninergic, cholinergic, dopaminergic, and histaminergic systems. Another focus is on behavioral state control and the functions of sleep. Applicants should be at the Postdoctoral level with strong backgrounds in the above-mentioned areas. Fellows receive stipends and research support commensurate with qualifications and experience. Positions are now available. Applications for all positions listed should contain a short statement of research interests, a curriculum vitae, and the names of three references and should be sent to: Giulio Tononi, The Neurosciences Institute, 10640 John Jay Hopkins Drive, San Diego, California 92121; Email: tononi at nsi.edu; URL: http:// www.nsi.edu. From giro-ci0 at wpmail.paisley.ac.uk Fri Apr 16 10:06:36 1999 From: giro-ci0 at wpmail.paisley.ac.uk (Mark Girolami) Date: Fri, 16 Apr 1999 14:06:36 +0000 Subject: PhD Studentship Available Message-ID: Could you please post the details of this PhD studentship please. ---------------------------------------------------------------------------------------- Funded PhD Studentship Available Computational Intelligence Research Unit Department of Computing and Information Systems University of Paisley Since the emergence and explosive growth of the World Wide Web (WWW) there has been a commensurate growth in the availability of online information. Efficient searching and retrieval of relevant information from the WWW has lagged behind this growth and intelligent information retrieval methods are now required as a matter of urgency. This particular challenge is attracting great interest from the machine learning research community and many software companies are closely monitoring the research output. There are two main approaches to online information retrieval, (1) query based and (2) taxonomic. The query based approach relies on methods such as search engines; which take a users query and this is compared with an existing document collection to find the most likely match. 
The taxonomic approach relies on manual organisation of the information (online documents) into hierarchic categorisations of the document collection. It is accepted that the design of information retrieval systems utilising a number of intelligent computing paradigms may provide the key to improved information access. This project proposes the fusion of both unsupervised and supervised computational models to adaptively build and maintain suitable document hierarchies (an example being Kohonen's WEBSOM) and then rank and classify existing as well as incoming new documents based on user queries (an example being Bayesian networks). Three years of University funding, for a PhD studentship, is available for this project. This project will be carried out in collaboration with a US based software company. Interested candidates should contact:- Dr Mark Girolami Senior Lecturer Computational Intelligence Research Unit University of Paisley High Street, Paisley PA1 2BE Scotland Tel : +44 141 848 3317 Fax: +44 141 848 3542 giro0ci at paisley.ac.uk From psajda at sarnoff.com Mon Apr 19 17:34:38 1999 From: psajda at sarnoff.com (PAUL SAJDA) Date: Mon, 19 Apr 1999 17:34:38 -0400 Subject: Job: Research Position Message-ID: <371BA16D.C3A25033@sarnoff.com> Sarnoff Corporation Position in Adaptive Image and Signal Processing The Adaptive Image and Signal Processing Group at the Sarnoff Corporation has an immediate opening for a researcher (Member, Technical Staff) in the area of speech and signal processing. Specifically, we are seeking an individual with interest/expertise in speech enhancement, microphone arrays, multi-modal interfaces, and machine learning. Responsibilities will include conducting original research, developing intellectual property in the area of adaptive signal processing, and developing relationships with government and commercial customers. Excellent communication skills and willingness to bring in new business is desired. Experience in MATLAB, C and C++ programming and the UNIX/LINUX operating system is a plus. Education level: PhD. in Electrical Engineering, Computer Engineering, Computer Science or related field. send applications to Dr. Paul Sajda Head Adaptive Image & Signal Processing Sarnoff Corporation CN5300 Princeton, NJ 08533-5300 email: psajda at sarnoff.com Sarnoff Corporation Web page: www.sarnoff.com From sutton at research.att.com Tue Apr 20 14:03:29 1999 From: sutton at research.att.com (Rich Sutton) Date: Tue, 20 Apr 1999 13:03:29 -0500 Subject: Workshop on Reinforcement Learning at UMass/Amherst 4/23 Message-ID: Dear Connectionists: FYI, there will be a workshop on reinforcement learning at the University of Massachusetts this friday, to which the public is welcome. Complete information on the schedule, travel information, etc., is available at http://www-anw.cs.umass.edu/nessrl.99. A crude version of the current schedule (it's better on the web) is given below. Hope to see you there. 
Rich Sutton ---------------------------------------------------------------------------- Current Schedule

Time     Speaker -- Title/Abstract
8:30am   Breakfast (bagels, coffee, etc)
9:00am   Amy McGovern -- Welcome
9:05     Invited Speaker: Manuela Veloso (CMU) -- What to Do Next?: Action Selection in Dynamic Multi-Agent Environments
10:00    Mike Bowling -- A Parallel between Multi-agent Reinforcement Learning and Stochastic Game Theory
         Peter Stone and Manuela Veloso (presented by Manuela Veloso) -- Team-Partitioned Opaque-Transition Reinforcement Learning
         Will Uther -- Structural Generalization for Growing Decision Forests
11:10    Break
11:20    Dan Bernstein -- Reusing Old Policies to Accelerate Learning on New MDPs
         Bryan Singer -- Learning State Features from Policies to Bias Exploration in Reinforcement Learning
         Theodore Perkins and Doina Precup -- Using Options for Knowledge Transfer in Reinforcement Learning
12:30    Lunch
2:00     Michael Kearns -- Sparse Sampling Methods for Learning and Planning in Large POMDPs
         Satinder Singh -- Approximate Planning for Factored POMDPs using Belief State Simplification
         Rich Sutton -- Function Approximation in Reinforcement Learning
         Yishay Mansour -- Finding a near best strategy from a restricted class of strategies
3:10     Nicolas Meuleau -- Learning finite-state controllers for partially-observable environments
         Keith Rogers -- Learning using the G-function
3:55     Break
4:05     Doina Precup -- Eligibility Traces for Off-Policy Policy Evaluation
         Tom Kalt -- An RL approach to statistical natural language parsing
4:50     Andrew Barto -- Closing remarks

From hunter at nlm.nih.gov Tue Apr 20 09:12:30 1999 From: hunter at nlm.nih.gov (Larry Hunter) Date: Tue, 20 Apr 1999 09:12:30 -0400 (EDT) Subject: Computational Biology position at National Cancer Institute Message-ID: <199904201312.JAA08307@work.nlm.nih.gov> ** Fellowships in Computational Biology ** ** Section on Molecular Statistics and Bioinformatics ** ** National Cancer Institute ** April 19, 1999 --- Please distribute widely Applications are invited for two anticipated research fellowships in computational approaches to understanding the molecular nature of cancer. One position is specifically dedicated to developing tools and techniques for analyzing gene expression data from the National Cancer Institute's Advanced Technology Center. The other position is more open-ended, intended for researchers interested in any aspect of the development and application of advanced machine learning or statistical techniques in the molecular biology of cancer. The positions will be in the section on Molecular Statistics and Bioinformatics, a recently formed group under the leadership of Dr. Lawrence Hunter. We are machine learning researchers, statisticians, molecular biologists and physicians working to develop computational methods to take advantage of the rapid growth of molecular biological data about cancer, including sequences of oncogenes, gene expression profiles of neoplastic tissues, high throughput screening of anti-tumor compounds, and allelic variation assays such as SNPs. The lab has abundant computational resources, and ready access to all NIH facilities. Candidates should be highly motivated, with excellent programming and writing skills, and a solid understanding of molecular biology. We expect to fill these positions with post-doctoral researchers, but good candidates at more junior or more senior levels may be accommodated. Salaries will be on the NIH scale, commensurate with experience and area of expertise.
To apply, send a CV, one or two (p)reprints, a brief description of your motivations and goals, and the names, email addresses and phone numbers of at least three references to the address below. Email is preferred, but fax or mail applications are also acceptable. Larry Hunter Molecular Statistics and Bioinformatics National Cancer Institute, MS-9105 7550 Wisconsin Ave., Room 3C06 Bethesda, MD 20892-9015 tel: +1 (301) 402-0389 fax: +1 (301) 480-0223 email: lhunter at nih.gov From jagota at cse.ucsc.edu Tue Apr 20 21:59:41 1999 From: jagota at cse.ucsc.edu (Arun Jagota) Date: Tue, 20 Apr 1999 18:59:41 -0700 (PDT) Subject: new survey publication Message-ID: <199904210159.SAA06212@arapaho.cse.ucsc.edu> New refereed e-publication action editor: Ron Sun K. McGarry, S. Wermter and J. MacIntyre, Hybrid neural systems: from simple coupling to fully integrated neural networks, Neural Computing Surveys 2, 62--93, 1999. 102 references. http://www.icsi.berkeley.edu/~jagota/NCS Abstract: This paper describes techniques for integrating neural networks and symbolic components into powerful hybrid systems. Neural networks have unique processing characteristics that enable tasks to be performed that would be difficult or intractable for a symbolic rule-based system. However, a stand-alone neural network requires an interpretation either by a human or a rule-based system. This motivates the integration of neural/symbolic techniques within a hybrid system. A number of integration possibilities exist: some systems consist of neural network components performing symbolic tasks while other systems are composed of several neural networks and symbolic components, each component acting as a self-contained module communicating with the others. Other hybrid systems are able to transform subsymbolic representations into symbolic ones and vice-versa. This paper provides an overview and evaluation of the state of the art of several hybrid neural systems for rule-based processing. From oby at cs.tu-berlin.de Wed Apr 21 09:25:50 1999 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Wed, 21 Apr 1999 15:25:50 +0200 (MET DST) Subject: positions in CNS Message-ID: <199904211325.PAA27254@pollux.cs.tu-berlin.de> Postdoctoral and Graduate Student Positions in Computational Neuroscience Neural Information Processing Group, Department of Computer Science, Technical University of Berlin, Germany One postdoctoral and one graduate student fellowship are available for candidates, who are interested to work on computational models of primary visual cortex. The successful candidates may choose between the following topics: - Dynamics of cortical circuits including the role of steppy connections and fast synaptic plasticity. - Representation of information in the visual cortex. - Development of neural circuits and the functional organization of visual cortex (cortical maps). The candidates are expected to join an ongoing collaboration with experimental neurobiologists in the group of Jennifer Lund (Institute of Ophthalmology, University College London) Positions will start in summer / fall 1999. The postdoctoral position will be initially for one year, but an extension is possible. The CS department of the Technical University of Berlin approves students with foreign Masters or Diplom degrees for graduate study when certain standards w.r.t. the subject, grades, and the awarding institution are met. The university also allows for a thesis defense in the English language. 
Interested candidates please send their CV, transcripts of their certificates, a short statement of their research interest, and a list of publications to: Prof. Klaus Obermayer FR2-1, NI, Informatik, Technische Universitaet Berlin Franklinstrasse 28/29, 10587 Berlin, Germany phone: ++49-30-314-73120, fax: -73121, email: oby at cs.tu-berlin.de prefereably by email. For a list of relevant publications and an overview of current research projects please refer to our web-page at: http://ni.cs.tu-berlin.de/ Klaus ----------------------------------------------------------------------------- Prof. Dr. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Informatik 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/ From mozer at cs.colorado.edu Thu Apr 22 14:33:25 1999 From: mozer at cs.colorado.edu (Mike Mozer) Date: Thu, 22 Apr 99 12:33:25 -0600 Subject: Positions in Boulder, Colorado Message-ID: <199904221833.MAA17147@neuron.cs.colorado.edu> Athene Software, Inc. Positions in Machine Learning, Statistics, and Data Mining Athene Software has immediate openings for machine learning, statistical, and data mining professionals in Boulder, Colorado. We are seeking qualified candidates to develop and enhance models of subscriber behavior for telecommunications companies. Responsibilities will include statistical investigation of large data sets, application of machine learning algorithms, development and tuning of data representations, and presentation of results to internal and external customers. Strong communication skills are very important. A Ph.D. in Statistics, Computer Science, Electrical Engineering, or related field is desirable. Experience with Java and Oracle is helpful. Send applications to: Richard Wolniewicz, Ph.D. VP Engineering Athene Software, Inc. 2060 Broadway, Suite 300 Boulder, CO 80302 email: richard at athenesoft.com www.athenesoft.com From mark.plumbley at kcl.ac.uk Tue Apr 20 19:34:19 1999 From: mark.plumbley at kcl.ac.uk (Mark Plumbley) Date: Wed, 21 Apr 1999 00:34:19 +0100 (BST) Subject: CoIL Competition [neural nets and machine learning -- Ed.] Message-ID: Is your water safe? There is increased concern at the impact man is having on the environment. In temperate climates, summer algae growth can result in poor water clarity, mass deaths of river fish and the closure of recreational water facilities. To understand this problem, there is a need to identify the crucial chemical control variables for the biological processes. This is the subject of the first Computational Intelligence and Learning (CoIL) competition. CoIL is an EC-funded Cluster of Networks of Excellence (NoEs), formed in Jan 1999 as a collaboration between ERUDIT, EvoNet, MLNET and NEuroNet, representing Fuzzy Logic, Evolutionary Computing, Machine Learning, and Neural Computing respectively. While the techniques and paradigms of interest to these networks are largely distinct (and sometimes complementary), these various techniques can often be used to tackle similar problems or be used together on the same problem. This CoIL competition has been organised through ERUDIT, and is open to all interested parties. ERUDIT has had very successful competitions itself in 1996 and 1998, and the results of these illustrated how a variety of different techniques can be used to tackle any problem. 
Water quality samples were taken from sites on different European rivers over a period of approximately one year. These samples were analysed for various chemical substances, and algae samples were collected to determine the algae population distributions. While the chemical analysis is cheap and easily automated, the biological part involves microscopic examination, requires trained manpower, and is therefore both expensive and slow. The task of the CoIL competition is to predict the algae frequency distributions on the basis of the measured concentrations of the chemical substances and some global information about the season when the sample was taken, the river size and the fluid velocity. The data is a mixture of qualitative and numeric variables, and some of the data is incomplete. The detailed problem description and the data are available from http://www.erudit.de/erudit/committe/fc-ttc/ic-99/index.htm or by ftp from: FTP Server: ftp.mitgmbh.de Username: anonymous Password: Filename: /pub/problem.zip In case of difficulty obtaining the data, contact: ERUDIT Service Center, c/o ELITE Foundation, Promenade 9, 52076 Aachen, Germany. Phone: +49 2408 6969, Fax +49240894582, email: sh at mitgmbh.de A board of referees will declare a winner and a runner-up. The winners will be invited, free of charge, to attend the EUFIT '99 conference to present their solutions during a special session on September 14, 1999 in Aachen, Germany.
Important dates:
Apr 15, 1999: Data available
May 31, 1999: Deadline for submission of solutions
Jul 31, 1999: Announcement of results
Sep 14, 1999: Award of winners at the EUFIT '99 conference in Aachen
Sep 14, 1999: Presentation of the best solutions
For general CoIL information, see http://www.dcs.napier.ac.uk/coil/ ##### STOP PRESS ###### KING'S HAS NEW PHONE AND FAX NUMBERS ##### ------------------------------------------------------------------ Dr Mark Plumbley mark.plumbley at kcl.ac.uk |_/ I N G'S Centre for Neural Networks | \ College Department of Electronic Engineering L O N D O N King's College London, Strand, London, WC2R 2LS, UK Founded 1829 Tel: +44 (0)171 848 2241, Fax: +44 (0)171 848 2932 World Wide Web URL: http://www.eee.kcl.ac.uk/~mdp ------------------------------------------------------------------ From p.j.b.hancock at psych.stir.ac.uk Thu Apr 22 06:48:41 1999 From: p.j.b.hancock at psych.stir.ac.uk (Peter Hancock) Date: Thu, 22 Apr 1999 11:48:41 +0100 (BST) Subject: Faculty position available Message-ID: We have a senior faculty position available, for which I would be delighted to see some modellers or neuroscience people apply. See my website, or that of Bill Phillips, Barbara Webb, Peter Cahusac or Lindsay Wilson, to get an idea of what we are currently doing in this area. Reader / Chair in Psychology The Department wishes to make an appointment to a post at the level of Reader or Chair. The primary role in the early years will be to contribute to research development in one of the Department's existing areas of research strength: Perception; Cognition; Neuroscience; Neuropsychology; Comparative and Developmental Psychology; and Social, Health, Clinical and Community Psychology. Teaching and administrative duties in the early years will be minimal, and there will be start-up funding for equipment and studentships. Salary will be within the Senior Lecturer scale (£30,396-£34,464) or by negotiation on the Professorial scale (minimum £35,170).
Informal enquiries may be made to Professor Lindsay Wilson, Head of Department, on 01786 467640, email j.t.l.wilson at stir.ac.uk. Details of the Department can be found at: www.stir.ac.uk/departments/humansciences/psychology/ Further particulars are available from the Personnel Office, University of Stirling, Stirling, FK9 4LA, tel: (01786) 467028, fax (01786) 466155 or email personnel at stir.ac.uk. Closing date for applications: 13 May 1999. www.stir.ac.uk/departments/admin/personl AN EQUAL OPPORTUNITIES EMPLOYER Peter Hancock Department of Psychology, University of Stirling FK9 4LA Phone 01786 467675 Fax 01786 467641 e-mail pjbh1 at stir.ac.uk http://www-psych.stir.ac.uk/~pjh From steve at cns.bu.edu Thu Apr 22 22:04:45 1999 From: steve at cns.bu.edu (Stephen Grossberg) Date: Thu, 22 Apr 1999 22:04:45 -0400 Subject: Job at Boston University's Department of Cognitive and Neural Systems Message-ID: SYSTEMS ADMINISTRATOR JOB OPENING AT BOSTON UNIVERSITY We are seeking a new Director of the Computation Laboratories for the Department of Cognitive and Neural Systems (CNS) and the Center for Adaptive Systems (CAS) at Boston University, which have active PhD training and research programs in biological and artificial neural network modeling. Both models of how the brain controls behavior and applications of these insights to outstanding technological problems are developed. The job includes responsibility for planning, developing, purchasing, installing, managing, integrating, reconfiguring, updating, and maintaining the CAS/CNS network for high-end scientific and technological neurocomputing. Participation in research projects is possible for qualified applicants. Salary is commensurate with experience. Contact Cindy Bradford (cindy at cns.bu.edu) for more information. Boston University is an equal opportunity employer. From phwusi at islab.brain.riken.go.jp Fri Apr 23 03:31:46 1999 From: phwusi at islab.brain.riken.go.jp (Si Wu) Date: Fri, 23 Apr 1999 16:31:46 +0900 (JST) Subject: A New idea on Support Vector Machine Message-ID: Dear Connectionists, This is to announce our new idea on support vector machines, "Improving Support Vector Machine Classifiers by Modifying Kernel Functions". We observed that few theories exist concerning how to choose a kernel function that fits given data well; this is equivalent to the problem of choosing a smoothing operator, and it is a difficult question. We use the idea of a conformal transformation, suggested by information geometry, to modify a given kernel function. We hope that this gives a new direction for the further development of SVMs. The paper will appear in Neural Networks as a Letter. If you are interested, you can download it freely from http://www.islab.brain.riken.go.jp/~phwusi/Publication.html ************************************ Abstract In this work, we propose a method of modifying a kernel function in a data-dependent way to improve the performance of a support vector machine classifier. This is based on the Riemannian geometrical structure induced by the kernel function. The idea is to enlarge the spatial resolution around the separating boundary surface by a conformal mapping such that the separability between classes is increased. Examples are given specifically for modifying Gaussian Radial Basis Function kernels. Simulation results for both artificial and real data show remarkable improvement in generalization error, supporting our idea.
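To give a concrete feel for the kind of data-dependent modification described above, here is a minimal sketch (not taken from the paper; the form of the conformal factor c(x), the parameters gamma and gamma_c, and the two-stage training loop are all illustrative assumptions): train a first SVM, rescale the kernel as K~(x, x') = c(x) c(x') K(x, x') with c(x) chosen to be large near the estimated decision boundary, and retrain.

# Sketch of a conformal, data-dependent kernel modification for an SVM
# classifier (illustrative only; parameters and the choice of c(x) are
# assumptions, not taken from the announced paper).
import numpy as np
from sklearn.svm import SVC

def rbf(X, Y, gamma=1.0):
    # Gaussian RBF Gram matrix K(x, y) = exp(-gamma * ||x - y||^2)
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def conformal_factor(X, sv, gamma_c=1.0):
    # c(x): large near the (estimated) class boundary, here approximated
    # by proximity to the first-pass support vectors.
    d2 = ((X[:, None, :] - sv[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma_c * d2).sum(axis=1)

def fit_conformal_svm(X, y, gamma=1.0, gamma_c=1.0, C=1.0):
    # Stage 1: ordinary RBF SVM to locate the boundary region.
    base = SVC(kernel=lambda A, B: rbf(A, B, gamma), C=C).fit(X, y)
    sv = X[base.support_]

    # Stage 2: conformally rescaled kernel K~(x, x') = c(x) c(x') K(x, x'),
    # which magnifies the induced Riemannian metric near the boundary.
    def modified_kernel(A, B):
        cA = conformal_factor(A, sv, gamma_c)
        cB = conformal_factor(B, sv, gamma_c)
        return cA[:, None] * rbf(A, B, gamma) * cB[None, :]

    return SVC(kernel=modified_kernel, C=C).fit(X, y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 2))
    y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 1.0).astype(int)  # toy nonlinear labels
    clf = fit_conformal_svm(X, y)
    print("training accuracy:", clf.score(X, y))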
__________________________________________________________________ | | | Tel: 0081-48-462-4267(H), 0081-48-467-9664(O) | | E-mail: phwusi at islab.brain.riken.go.jp | | http://www.islab.brain.riken.go.jp/~phwusi | | Lab. for Information Synthesis, RIKEN Brain Science Institute | | Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN | |________________________________________________________________| From szepes at sol.cc.u-szeged.hu Fri Apr 23 17:05:34 1999 From: szepes at sol.cc.u-szeged.hu (Szepesvari Csaba) Date: Fri, 23 Apr 1999 23:05:34 +0200 (MET DST) Subject: TR announcement Message-ID: Dear Colleagues, The following technical report is available at http://victoria.mindmaker.hu/~szepes/papers/macro-tr99-01.ps.gz All comments are welcome. Best wishes, Csaba Szepesvari ---------------------------------------------------------------- An Evaluation Criterion for Macro Learning and Some Results Zs. Kalmar and Cs. Szepesvari TR99-01, Mindmaker Ltd., Budapest 1121, Konkoly Th. M. u. 29-33 It is known that a well-chosen set of macros makes it possible to considerably speed up the solution of planning problems. Recently, macros have been considered in the planning framework built on Markovian decision problems. However, so far no systematic approach has been put forth to investigate the utility of macros within this framework. In this article we begin to study this problem systematically by introducing the concept of multi-task MDPs, defined with a distribution over the tasks. We propose an evaluation criterion for macro-sets that is based on the expected planning speed-up due to the use of a macro-set, where the expectation is taken over the set of tasks. The consistency of the empirical speed-up maximization algorithm is shown in the finite case. For acyclic systems, the expected planning speed-up is shown to be proportional to the amount of ``time-compression'' due to the macros. Based on these observations, a heuristic algorithm for learning macros is proposed. The algorithm is shown to return macros identical to those that one would like to design by hand in the case of a particular navigation-like multi-task MDP. Some related questions, in particular the problem of breaking up MDPs into multiple tasks, factorizing MDPs, and learning generalizations over actions to enhance the amount of transfer, are also considered briefly at the end of the paper. Keywords: Reinforcement learning, MDPs, planning, macros, empirical speed-up optimization From hirai at is.tsukuba.ac.jp Sun Apr 25 21:11:27 1999 From: hirai at is.tsukuba.ac.jp (Yuzo Hirai) Date: Mon, 26 Apr 1999 10:11:27 +0900 Subject: PDM Digital Neural Network System. Message-ID: <3723BD3F1E0.6E3DHIRAI@poplar.is.tsukuba.ac.jp> Dear Connectionists readers, We have connected our PDM Digital Neural Network System to the Internet. It is a hardware neural network simulator that consists of 1,008 hardware neurons fully interconnected via 1,028,160 7-bit synapses. The system consists of fully digital circuits, and the analog output of each neuron is encoded by Pulse Density Modulation, as in real neurons. The behavior of each neuron is described by a nonlinear first-order differential equation, and the system solves 1,008 simultaneous differential equations in a fully parallel and time-continuous manner. In the best case it can solve a WTA network ten thousand times faster than a current workstation. For details of the system, and for how to obtain permission to use it, visit http://www.viplab.is.tsukuba.ac.jp/ . Best regards.
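As a rough software illustration of the dynamics such a system implements in hardware (the time constant, weights, step size and stochastic pulse encoding below are hypothetical choices, not a description of the actual circuits), each unit can be modelled as a first-order equation whose output is emitted as a pulse-density-modulated bit stream; a small winner-take-all network then looks like this:

# Illustrative sketch of pulse-density-modulated (PDM) neuron dynamics:
# each unit obeys a first-order differential equation and its analog
# output is represented as a stream of pulses whose density tracks the
# firing rate. All parameters here are hypothetical.
import numpy as np

def simulate_pdm(W, ext, tau=10.0, dt=0.1, steps=5000, rng=None):
    rng = rng or np.random.default_rng(0)
    n = W.shape[0]
    u = np.zeros(n)                            # internal state of each neuron
    pulse_counts = np.zeros(n)
    for _ in range(steps):
        rate = np.clip(u, 0.0, 1.0)            # nonlinear output in [0, 1]
        pulses = (rng.random(n) < rate) * 1.0  # PDM: emit a pulse with prob = rate
        du = (-u + W @ pulses + ext) / tau     # first-order dynamics du/dt
        u += dt * du
        pulse_counts += pulses
    return pulse_counts / steps                # measured pulse density per neuron

if __name__ == "__main__":
    # Tiny winner-take-all example: weak self-excitation, mutual inhibition;
    # the unit with the largest external input (index 2) is expected to win.
    n = 4
    W = -0.8 * np.ones((n, n)) + 1.0 * np.eye(n)
    ext = np.array([0.55, 0.50, 0.60, 0.52])
    print(simulate_pdm(W, ext))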
******************** Professor Yuzo Hirai Institute of Information sciences and Electronics University of Tsukuba Address: 1-1-1 Ten-nodai, Tsukuba, Ibaraki 305-8573, Japan Tel: +81-298-53-5519 Fax: +81-298-53-5206 e-mail: hirai at is.tsukuba.ac.jp ***************************************************** From risto at cs.utexas.edu Sun Apr 25 23:06:32 1999 From: risto at cs.utexas.edu (risto@cs.utexas.edu) Date: Sun, 25 Apr 1999 22:06:32 -0500 Subject: neuro-evolution software, papers, web demos available Message-ID: <199904260306.WAA03067@tophat.cs.utexas.edu> The JavaSANE software package for evolving neural networks with genetic algorithms is available from the UTCS Neural Networks Research Group website, http://www.cs.utexas.edu/users/nn. The SANE method has been designed as part of our ongoing research in efficient neuro-evolution. This software is intended to facilitate applying neuro-evolution to new domains and problems, and also as a starting point for future research in neuro-evolution algorithms. Abstracts of recent papers on eugenic evolution, on-line evolution, and non-Markovian control are also included below. Demos of these systems as well as other neuroevolution papers are available at http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html. -- Risto Software: ----------------------------------------------------------------------- JAVASANE: SYMBIOTIC NEURO-EVOLUTION IN SEQUENTIAL DECISION TASKS http://www.cs.utexas.edu/users/nn/pages/software/abstracts.html#javasane Cyndy Matuszek, David Moriarty The JavaSANE package contains the source code for the Hierarchical SANE neuro-evolution method, where a population of neurons is evolved together with network blueprints to find a network for a given task. The method has been shown effective in several sequential decision tasks including robot control, game playing, and resource optimization. JavaSANE is designed especially to make it possible to apply SANE to new tasks with minimal effort. It is also intended to be a platform-independent and parsimonious implementation of SANE, so that can serve as a starting point for further research in neuro-evolution algorithms. (This package is written in Java; an earlier C-version is also available). Papers and Demos: ----------------------------------------------------------------------- SOLVING NON-MARKOVIAN CONTROL TASKS WITH NEUROEVOLUTION Faustino Gomez and Risto Miikkulainen To appear in Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI-99, Stockholm, Sweden) (6 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#gomez.ijcai99.ps.gz The success of evolutionary methods on standard control learning tasks has created a need for new benchmarks. The classic pole balancing problem is no longer difficult enough to serve as a viable yardstick for measuring the learning efficiency of these systems. The double pole case, where two poles connected to the cart must be balanced simultaneously is much more difficult, especially when velocity information is not available. In this article, we demonstrate a neuroevolution system, Enforced Sub-populations (ESP), that is used to evolve a controller for the standard double pole task and a much harder, non-Markovian version. In both cases, our results show that ESP is faster than other neuroevolution methods. In addition, we introduce an incremental method that evolves on a sequence of tasks, and utilizes a local search technique (Delta-Coding) to sustain diversity. 
This method enables the system to solve even more difficult versions of the task where direct evolution cannot. A demo of ESP in the 2-pole balancing task can be seen at http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html. ----------------------------------------------------------------------- REAL-TIME INTERACTIVE NEURO-EVOLUTION Adrian Agogino, Kenneth Stanley, and Risto Miikkulainen Technical Report AI98-266, Department of Computer Sciences, The University of Texas at Austin, 1998 (16 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#agostan.ine.ps.Z In standard neuro-evolution, a population of networks is evolved in the task, and the network that best solves the task is found. This network is then fixed and used to solve future instances of the problem. Networks evolved in this way do not handle real-time interaction very well. It is hard to evolve a solution ahead of time that can cope effectively with all the possible environments that might arise in the future and with all the possible ways someone may interact with it. This paper proposes evolving feedforward neural networks online to create agents that improve their performance through real-time interaction. This approach is demonstrated in a game world where neural-network-controlled individuals play against humans. Through evolution, these individuals learn to react to varying opponents while appropriately taking into account conflicting goals. After initial evaluation offline, the population is allowed to evolve online, and its performance improves considerably. The population not only adapts to novel situations brought about by changing strategies in the opponent and the game layout, but it also improves its performance in situations that it has already seen in offline training. This paper will describe an implementation of online evolution and shows that it is a practical method that exceeds the performance of offline evolution alone. A demo of on-line evolution in the real-time gaming task is at http://www.cs.utexas.edu/users/nn/pages/research/neuroevolution.html. ----------------------------------------------------------------------- EUGENIC EVOLUTION FOR COMBINATORIAL OPTIMIZATION John W. Prior Master's Thesis, Technical Report AI98-268, Department of Computer Sciences, The University of Texas at Austin, 1998 (126 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#prior.eugenic-thesis.ps.Z In the past several years, evolutionary algorithms such as simulated annealing and the genetic algorithm have received increasing recognition for their ability to optimize arbitrary functions. These algorithms rely on the process of Darwinian evolution, which promotes highly successful solutions that result from random variation. This variation is produced by the random operators of mutation and/or recombination. These operators make no attempt to determine which alleles or combinations of alleles are most likely to yield overall fitness improvement. This thesis will explore the benefits that can be gained by utilizing a direct analysis of the correlations between fitness and alleles or allele combinations to intelligently and purposefully design new highly-fit solutions. An algorithm is developed in this thesis that explicitly analyzes allele-fitness distributions and then uses the information gained from this analysis to purposefully construct new individuals ``bit by bit''. 
Explicit measurements of ``gene significance'' (the effect of a particular gene upon fitness) allows the algorithm to adaptively decide when conditional allele-fitness distributions are necessary in order to correctly track important allele interactions. A new operator---the ``restriction'' operator---allows the algorithm to simply and quickly compute allele selection probabilities using these conditional fitness distributions. The resulting feedback from the evaluation of new individuals is used to update the statistics and therefore guide the creation of increasingly better individuals. Since explicit analysis and creation is used to guide this evolutionary process, it is not a form of Darwinian evolution. It is a pro-active, contrived process that attempts to intelligently create better individuals through the use of a detailed analysis of historical data. It is therefore a eugenic evolutionary process, and thus this algorithm is called the ``Eugenic Algorithm'' (EuA). The EuA was tested on a number of benchmark problems (some of which are NP-complete) and compared to widely recognized evolutionary optimization techniques such as simulated annealing and genetic algorithms. The results of these tests are very promising, as the EuA optimized all the problems at a very high level of performance, and did so much more consistently than the other algorithms. In addition, the operation of EuA was very helpful in illustrating the structure of the test problems. The development of the EuA is a very significant step to statistically justified combinatorial optimization, paving the way to the creation of optimization algorithms that make more intelligent use of the information that is available to them. This new evolutionary paradigm, eugenic evolution will lead to faster and more accurate combinatorial optimization and to a greater understanding of the structure of combinatorial optimization problems. ----------------------------------------------------------------------- FAST REINFORCEMENT LEARNING THROUGH EUGENIC NEURO-EVOLUTION Daniel Polani and Risto Miikkulainen Technical Report AI99-277, Department of Computer Sciences, University of Texas at Austin, 1999 (7 pages). http://www.cs.utexas.edu/users/nn/pages/publications/abstracts.html#polani.eusane-99.ps.gz In this paper we introduce EuSANE, a novel reinforcement learning algorithm based on the SANE neuro-evolution method. It uses a global search algorithm, the Eugenic Algorithm, to optimize the selection of neurons to the hidden layer of SANE networks. The performance of EuSANE is evaluated in the two-pole balancing benchmark task, showing that EuSANE is significantly stronger than other reinforcement learning methods to date in this task. From plesser at pegasus.chaos.gwdg.de Mon Apr 26 10:37:38 1999 From: plesser at pegasus.chaos.gwdg.de (Hans Ekkehard Plesser) Date: Mon, 26 Apr 1999 16:37:38 +0200 Subject: Noise in I&F neurons: from stochastic input to escape rates Message-ID: <9904261437.AA04322@pegasus.chaos.gwdg.de> Dear Connectionists! I would like to announce a paper on noisy integrate-and-fire dynamics that has been accepted for publication in Neural Computation: Noise in integrate-and-fire neurons: from stochastic input to escape rates by Hans E. Plesser and Wulfram Gerstner The paper is available on-line at http://www.chaos.gwdg.de/~plesser/publications.html . 
Abstract: We analyze the effect of noise in integrate-and-fire neurons driven by time-dependent input, and compare the diffusion approximation for the membrane potential to escape noise. It is shown that for time- dependent sub-threshold input, diffusive noise can be replaced by escape noise with a hazard function that has a Gaussian dependence upon the distance between the (noise-free) membrane voltage and threshold. The approximation is improved if we add to the hazard function a probability current proportional to the derivative of the voltage. Stochastic resonance in response to periodic input occurs in both noise models and exhibits similar characteristics. Hans E. Plesser ------------------------------------------------------------------ Hans Ekkehard Plesser Nonlinear Dynamics Group Tel. : ++49-551-5176-421 MPI for Fluid Dynamics Fax : ++49-551-5176-409 D-37073 Goettingen, Germany e-mail: plesser at chaos.gwdg.de ------------------------------------------------------------------ From prevete at axpna1.na.infn.it Wed Apr 28 05:41:48 1999 From: prevete at axpna1.na.infn.it (Roberto Prevete) Date: Wed, 28 Apr 1999 11:41:48 +0200 Subject: A new Java package Message-ID: <3726D7DC.7FA23C3E@axpna1.na.infn.it> The Java package it.na.cy.nnet for simulation of neural networks is now available from cybernetics webpage www.na.infn.it/Gener/cyber/report.html It is a package to simulate neural networks composed of many elementary units. Documentation is also included. Any comment will be appreciated. Thanks, Roberto Prevete From jon at syseng.anu.edu.au Thu Apr 29 03:23:11 1999 From: jon at syseng.anu.edu.au (Jonathan Baxter) Date: Thu, 29 Apr 1999 17:23:11 +1000 Subject: Paper available Message-ID: <372808DF.8A5DDC14@syseng.anu.edu.au> The following paper is available from http://wwwsyseng.anu.edu.au/~jon/papers/doom2.ps.gz "Boosting Algorithms as Gradient Descent in Function Space" by Llew Mason, Jonathan Baxter, Peter Bartlett and Marcus Frean Abstract: Much recent attention, both experimental and theoretical, has been focussed on classification algorithms which produce voted combinations of classifiers. Recent theoretical work has shown that the impressive generalization performance of algorithms like AdaBoost can be attributed to the classifier having large margins on the training data. We present abstract algorithms for finding linear and convex combinations of functions that minimize arbitrary cost functionals (i.e functionals that do not necessarily depend on the margin). Many existing voting methods can be shown to be special cases of these abstract algorithms. Then, following previous theoretical results bounding the generalization performance of convex combinations of classifiers in terms of general cost functions of the margin, we present a new algorithm (DOOM II) for performing a gradient descent optimization of such cost functions. Experiments on several data sets from the UC Irvine repository demonstrate that DOOM II generally outperforms AdaBoost, especially in high noise situations. Margin distribution plots verify that DOOM II is willing to `give up' on examples that are too hard in order to avoid overfitting. We also show that the overfitting behavior exhibited by AdaBoost can be quantified in terms of our proposed cost function. 
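For readers who want to see the "gradient descent in function space" view in code, here is a generic AnyBoost-style sketch (the exponential cost, decision-stump weak learner and step-size rule are illustrative assumptions; this is not the authors' DOOM II implementation): each round fits a weak hypothesis to the examples reweighted by the derivative of the cost with respect to their margins, then takes a step along that direction.

# Sketch of boosting as gradient descent on a margin cost functional
# C(F) = sum_i c(y_i F(x_i)); with c(m) = exp(-m) the example weights
# w_i = -c'(y_i F(x_i)) reproduce AdaBoost-like behaviour. Illustrative only.
import numpy as np

def fit_stump(X, y, w):
    # Weighted decision stump h(x) = s * sign(x_j - t): pick the (j, t, s)
    # most correlated with the weighted labels, i.e. the best descent direction.
    best = (-np.inf, 0, 0.0, 1.0)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1.0, -1.0):
                pred = s * np.where(X[:, j] > t, 1.0, -1.0)
                score = np.sum(w * y * pred)
                if score > best[0]:
                    best = (score, j, t, s)
    _, j, t, s = best
    return lambda Z, j=j, t=t, s=s: s * np.where(Z[:, j] > t, 1.0, -1.0)

def boost(X, y, rounds=25):
    F = np.zeros(len(y))                       # current combined function on the data
    hs, alphas = [], []
    for _ in range(rounds):
        w = np.exp(-y * F)                     # -c'(margin) for the exponential cost
        w /= w.sum()
        h = fit_stump(X, y, w)
        pred = h(X)
        err = np.sum(w * (pred != y))
        if err >= 0.5:                         # no direction of descent remains
            break
        alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-12))  # step size
        F += alpha * pred
        hs.append(h)
        alphas.append(alpha)
    return lambda Z: np.sign(sum(a * h(Z) for a, h in zip(alphas, hs)))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))
    y = np.where(X[:, 0] * X[:, 1] > 0, 1.0, -1.0)   # toy labels in {-1, +1}
    vote = boost(X, y)
    print("training accuracy:", np.mean(vote(X) == y))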
From peter at biodiscovery.com Thu Apr 29 03:31:49 1999 From: peter at biodiscovery.com (Peter Kalocsai) Date: Thu, 29 Apr 1999 00:31:49 -0700 Subject: Job announcement Message-ID: <001a01be9212$58b86a20$66d9efd1@default> Data Mining Scientist BioDiscovery, Inc. is a leading gene expression image and data analysis firm with an outstanding client list and a progressive industry stance. We are an early-stage start-up company dedicated to the development of state-of-the-art bioinformatics software tools for molecular biology and genomics research. We are growing rapidly and are looking for talented individuals with the experience and motivation to take on significant responsibility and deliver with minimal supervision. BioDiscovery is an equal opportunity employer and our shop has a friendly, fast-paced atmosphere. We are headquartered in sunny Southern California close to the UCLA campus. We are looking for a talented, creative individual with a strong background in software development, mathematics, statistics, and pattern recognition. Knowledge of biology and genetics is a plus but not necessary. This position involves the development and implementation of pattern recognition and data mining algorithms for a number of ongoing and planned projects. It requires the ability to formulate problem descriptions through interaction with end-user scientists working in various aspects of genomics; these technical issues must then be transformed into innovative and practical algorithmic solutions. We expect our scientists to have outstanding written/oral communication skills and encourage publication in scientific journals. Requirements: A Ph.D. in Computer Science, Electrical Engineering, Mathematics, Biostatistics, or a related field, or equivalent experience, is required. Knowledge of biology and genetics is a plus but not necessary. Experience with MATLAB or Java, or at least 5 years of programming experience. Please send resume and cover letter to: hr at biodiscovery.com ---------------------------------------------- Peter Kalocsai, Ph.D. BioDiscovery, Inc. 11150 W. Olympic Blvd. Suite 805E Los Angeles, CA 90064 Ph: (310) 966-9366 Fax: (310) 966-9346 E-mail: peter at biodiscovery.com From vaina at enga.bu.edu Fri Apr 30 20:28:50 1999 From: vaina at enga.bu.edu (Lucia M. Vaina) Date: Fri, 30 Apr 1999 20:28:50 -0400 Subject: Position in computational fMRI in Brain and Vision Research Lab-BU Message-ID: Computational fMRI, graphics algorithms, and image processing. Full-time position in the Brain and Vision Research Laboratory, Boston University. This exciting new venture in the Brain and Vision Research Laboratory at Boston University, Department of Biomedical Engineering, involves visualising the working (plasticity and restorative plasticity) of the human brain during sensory-motor tasks. Specifically, the postholder will:
* explore the uses of real-time and near-real-time analysis techniques in fMRI studies by applying several existing data analysis packages for fMRI;
* model changes in the functional connectivity of brain activations using structural equation models and functional connectivity models;
* develop and implement motion correction and elastic matching algorithms.
Candidates should have at least an MS in Electrical Engineering, Computer Science, Physics or Mathematics, good communication and interpersonal skills, an excellent background in C programming, and familiarity with the Unix environment.
Good knowledge of college-level mathematics (linear algebra, partial differential equations, statistics and probability) and of signal processing is also required. Knowledge of computer graphics algorithms is a plus. Familiarity with Sun or SGI platforms and PCs is very desirable. Please send a letter of application along with a CV, a publication list if available, a brief statement of current research and background, and two letters of recommendation to Professor Lucia M. Vaina Brain and Vision Research Laboratory Biomedical Engineering Department College of Engineering Boston University 44 Cummington str Boston, MA 02115 USA fax: 617-353-6766 (Please note that I will be away between May 7-15.) Lucia M. Vaina Ph.D., D.Sc. Professor of Biomedical Engineering and Neurology Brain and Vision Research Laboratory Boston University, Department of Biomedical Engineering College of Engineering 44 Cummington str, Room 315 Boston University Boston, MA 02215 USA tel: 617-353-2455 fax: 617-353-6766