From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From Dave.Touretzky at C.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Dave.Touretzky at C.CS.CMU.EDU (Dave.Touretzky@C.CS.CMU.EDU) Date: Sat 16 Apr 88 00:00:30-EDT Subject: analog outputs Message-ID: <12390804231.27.TOURETZKY@C.CS.CMU.EDU> There are plenty of neural net models that produce analog outputs, e.g., backpropagation nets. A more interesting question is whethere there are associative memories for analog vectors. Hopfield nets and BSB (Brain State in a Box), both matrix models, work only for boolean memories; they use nonlinearity to force their units' states to be 0 or 1. There are a few models that can learn analog memories, but they tend to involve competitive learning (winner-take-all nets) and generate a grandmother cell for each pattern to be learned. One example is Hecht-Nielsen's counter-propagation net, which can learn to associate one analog pattern with another and produce an exactly correct output given only an approximate input. Kohonen's self-organizing feature maps are also based on competitive winner-take-all behavior. An interesting generalization on this idea is to allow, say, k distinct winners, and then combine their outputs somehow, e.g., by using a weighted average. This is the principle behind Baum, Moody, and Wilczek's ACAM, or Associative Content-Addressable Memory, which I believe has been generalized to work on analog patterns. Hecht-Nielsen also discusses the idea of generalizing counter-propagation to permit k winners; see his article in the December 1987 issue of Applied Optics. But these multi-grandmother cell approaches are still not as distributed as a Hopfield or backprop model. (Of course one can train ordinary backprop nets to associate analog inputs with analog outputs, but unlike a true associative memory, there is no guarantee that the backprop network will produce the exact same output if the input is perturbed slightly, because it has no attractor states. In fact, this lack of attractor states is being exploited when people use backprop nets to do function approximation by interpolating between training instances.) So the question remains: are there fully-distributed associative memories whose attractor states are analog vectors? Perhaps some of the recent work on generalizing backprop to recurrent networks, introducing the potential for attractor behavior, will lead to a solution to this problem. -- Dave ------- From Dave.Touretzky at C.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Dave.Touretzky at C.CS.CMU.EDU (Dave.Touretzky@C.CS.CMU.EDU) Date: Wed 11 May 88 00:29:19-EDT Subject: NIPS abstract deadline now June 15 Message-ID: <12397363075.25.TOURETZKY@C.CS.CMU.EDU> The deadline for submitting an abstract to the Neural Information Processing Systems conference, to be held in Denver, Nov. 28-Dec. 1, has been extended by one month. Abstracts are now due by June 15th. Please help spread the word: tell your colleagues. Tell your students. Tell your therapist. Tell your cat. See you in November. -- Dave ------- From MOHAN%DWORKIN.usc.edu at oberon.USC.EDU Mon Jun 5 16:42:55 2006 From: MOHAN%DWORKIN.usc.edu at oberon.USC.EDU (Rakesh &) Date: Mon 6 Jun 88 14:30:48-PDT Subject: Dr. Feldman's address Message-ID: I would like to get Dr. J.A. Feldman's current net and/or surface mail address. 
Thanks, Rakesh Mohan mohan at oberon.usc.edu ------- From n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK Mon Jun 5 16:42:55 2006 From: n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK (SHARKEY N (on Essex DEC-10)) Date: Tuesday, 21-Jun-88 11:38:14-BST Subject: No subject Message-ID: <134345-370-206@uk.ac.essex> -------- ******** CONNECTIONIST MODEL OF LANGUAGE PROCESSES ******** I have been commisioned to write a large review of connectionist work on natural language. I know the area but i am sure that there are many tech reports and conference papers that i have not read. i do not wish to leave anyone out, so if you or any of your colleagues have any relevant material, please send it to me at: Noel E. Sharkey, Center for cognitive science, Dept. Language and Linguistics, University of Essex, Wivenhoe Park, Colchester CO4 3SQ Essex, England. sharkey at uk.ac.essex@ucl.cs.nss -------- From CS.WEINBERG at R20.UTEXAS.EDU Mon Jun 5 16:42:55 2006 From: CS.WEINBERG at R20.UTEXAS.EDU (CS.WEINBERG@R20.UTEXAS.EDU) Date: Sat 23 Jul 88 17:59:32-CDT Subject: New Address Message-ID: <12416701697.17.CS.WEINBERG@R20.UTEXAS.EDU> My net address is changing from cs.weinberg.utexas.edu to martha at ratliff.utexas.edu. Please make the appropriate changes. Thanx Martha ------- From KORNACKER-K at OSU-20.IRCC.OHIO-STATE.EDU Mon Jun 5 16:42:55 2006 From: KORNACKER-K at OSU-20.IRCC.OHIO-STATE.EDU (Karl Kornacker) Date: Sun 24 Jul 88 14:34:56-EDT Subject: wanted: co-editor for _Computational Models of NL Acquisition_ Message-ID: <8807241835.AA21058@tut.cis.ohio-state.edu> I am looking for a qualified person to co-edit an approximately 500 page volume on _Computational Models of Natural Language Acquisition_ tentatively scheduled by North Holland for publication in 1989 as part of the _Advances in Psychology_ book series. If you are interested and would like more information please e-mail a reply to me by July 30. Karl Kornacker (kornacker-k at osu-20.ircc.ohio-state.edu) ------- From DOUTHAT at A.ISI.EDU Mon Jun 5 16:42:55 2006 From: DOUTHAT at A.ISI.EDU (Dean Z. Douthat) Date: Wed 17 Aug 88 13:12:10-EDT Subject: Benchmarks Message-ID: <12423192061.47.DOUTHAT@A.ISI.EDU> I am interested in your proposals for benchmarks for machine learning systems. We are doing research on machine learning that is out of the mainstream of connectionist and PDP approaches. One such project involves Genetic Algorithms and Classifiers following the theories of J. Holland et al. Benchmarks that can be standardized and used to compare not only different designs within a given technical approach but also to compare across such approaches are of great interest. Your proposal deserves elaboration and formalization. Would you undertake to do so? Could the FSA benchmark be considered one of a series of increasing difficulty with say, pushdown stack automata next and so on? BTW: what is an "infitary" system? Dean Z. Douthat Institute for the Study of Intelligent Systems [ISIS] P.O. Box 669 Ann Arbor MI 48105-0669 (313)747-9170 ------- From Jim Mon Jun 5 16:42:55 2006 From: Jim (Jim) Date: Wed 17 Aug 88 13:33:52-CDT Subject: Wanted: room to share in Boston.... Message-ID: <12423206934.7.EE.JANDERSON@A20.CC.UTEXAS.EDU> Greetings - I am scheduled to present a poster at the INNS meeting in Boston, and would like to attend the entire conference. However, I can hardly afford the $91/night rate for the entire week. 
Since the double rate is the same as the single rate, I was wondering if anyone who is going to the confernce would be willing to share a double room with me and we'll split the cost. I think you'll find I'm an agreeable roommate. Thanks in advance, Jim Anderson (ECE, U.Texas) ee.janderson at A20.CC.UTEXAS.EDU 1411 Elm Brook Dr. Austin, TX 78758 (512) 834-2092 (home) 474-4526 (Work) ------- From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 30 Aug 1988 13:02-EDT Subject: tech report announcement Message-ID: Using A Neural Network to Learn the Dynamics of the CMU Direct-Drive Arm II Ken Goldberg and Barak Pearlmutter CMU-CS-88-160 ABSTRACT Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We apply a backpropagation network to this problem and measure its performance on the first 2 links of the CMU Direct-Drive Arm II for a family of "pick-and-place" trajectories. Trained on a random sample of actual trajectories, the network is shown to generalize with a root mean square error/standard deviation (RMSS) of 0.10. The resulting weights can be interpreted in terms of the velocity and acceleration filters used in conventional control theory. We also report preliminary results on learning a larger subset of state space for a simulated arm. If you would like a copy, you may either send computer mail to Catherine.Copetas at cs.cmu.edu or physical mail the address below. Please request "technical report CMU-CS-88-160." Catherine Copetas Department of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 From Mark.Derthick at G.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Mark.Derthick at G.GP.CS.CMU.EDU (Mark.Derthick@G.GP.CS.CMU.EDU) Date: 5 Sep 1988 19:29-EDT Subject: Question about Hopfield&Tank nets Message-ID: Using the Hopfield and Tank energy function E = -1/2 SUMi SUMj Tij Vi Vj + SUMi 1/R INTEGRAL g-inverse + SUMi Ii Vi one COULD calculate dE/dVi for each output and do steepest descent. Instead, Hopfield and Tank introduce a new variable, u=g-inverse(V) representing the input voltage to an amplifier with finite resistance and capacitance. The energy function is still a Liapunov function for their circuit, but the circuit doesn't do steepest descent; it moves in a direction obtained from the gradient by warping with the sigmoid function: delta-Vi is shrunk for those Vi which take on values near zero or one. Hopfield and Tanks's motivation seems to be fidelity to real neurons. If one doesn't care about this, is there any reason to prefer their algorithm to steepest descent? Mark From Michael.Witbrock at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Michael.Witbrock at F.GP.CS.CMU.EDU (Michael.Witbrock@F.GP.CS.CMU.EDU) Date: Fri, 16 Sep 1988 14:28-EDT Subject: Layers Message-ID: <590437703/mjw@F.GP.CS.CMU.EDU> There is a completely defined way of counting layers. Note, however, that I am not arguing for its adoption. Let the distance between two units be defined as the *minimal* number of modifiable weights forming a path between them (i.e. the number of weights on the shortest path between the two nodes) . Then the Layer in which a unit lies is the minimal distance between it and an input unit. The number of layers in the network is the maximum value of the distance between any unit and an input unit. See how much damage a little graph theory can do. 
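That counting rule can be read as a multi-source breadth-first search; a minimal sketch in C follows. The adjacency-matrix representation, the restriction to directed weight paths leading out of the input units, and the fixed size limit are assumptions made for illustration only, not part of the definition above.

#define MAX_UNITS 64

/* conn[i][j] is nonzero iff there is a modifiable weight from unit i to
   unit j; units 0 .. n_inputs-1 are the input units. */
int count_layers(int n_units, int n_inputs, int conn[MAX_UNITS][MAX_UNITS])
{
    int layer[MAX_UNITS], queue[MAX_UNITS];
    int head = 0, tail = 0, deepest = 0;

    for (int i = 0; i < n_units; i++)
        layer[i] = -1;                      /* -1 = not yet reached */
    for (int i = 0; i < n_inputs; i++) {    /* all input units lie in layer 0 */
        layer[i] = 0;
        queue[tail++] = i;
    }
    while (head < tail) {                   /* breadth-first search */
        int u = queue[head++];
        for (int v = 0; v < n_units; v++) {
            if (conn[u][v] && layer[v] < 0) {
                layer[v] = layer[u] + 1;    /* one more weight on the shortest path */
                if (layer[v] > deepest)
                    deepest = layer[v];
                queue[tail++] = v;
            }
        }
    }
    return deepest;    /* maximum minimal distance from any unit to an input unit */
}

Under this count the familiar input-hidden-output backpropagation net comes out as a two-layer network, its output units lying in layer 2.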
michael From EPSYNET%UHUPVM1.BITNET at VMA.CC.CMU.EDU Mon Jun 5 16:42:55 2006 From: EPSYNET%UHUPVM1.BITNET at VMA.CC.CMU.EDU (Psychology Newsletter and Bulletin Board) Date: Sun, 25 Sep 88 Subject: Psychnet Message-ID: Hello If you are not a subscriber to the Bitnet Psychology Newsletter a would like to add your name, please send your reply to me at the userid and node above. There is no charge from us for the subscription. If you already have a subscription, please ignore this notice. A sample issue follows below. Yours truly, Robert C. Morecock, Ph.D. Editor ------------------------------------------------------------------------ ! * * * B I T N E T P S Y C H O L O G Y N E W S L E T T E R * * * ! ------------------------------------------------------------------------ ! Volume 3, Number 22, September 11, 1988 Circulation 941 ! ------------------------------------------------------------------------ ! From the Ed. Psych. Dept., University of Houston, Texas 77004 ! ! Robert C. Morecock, Editor ! ------------------------------------------------------------------------ Today's Topics: 1. USSR and Electronic Mail Networking with the Western World 2. Senate Testimony on Educational Electronic Mail Networking 3. Steven Pinker and Stevan Harnad on 'Rules and Learning' 4. New Journal - Interacting with Computers - Paper Call 5. Free Statistical Hardware for Students and Free Clinical Assessment System Software for Students -- Walter Hudson 6. Files Arriving at the Bulletin Board Since the Last Issue 7. How to Retrieve Bulletin Board Files ------------------------------------------------------------------------ (For discussion of above or other topics send your comments to userid Epsynet at node Uhupvm1 on Bitnet.) ------------------------------------------------------------------------ USSR Academy of Sciences and EARN This letter is from the Feedback section of the BITNET publication NetMonth, August 1988 edition, edited by Chris Condon. The topic is that of the USSR Academy of Sciences having requested a connection to the EARN network for their computers ... From: Hank Nussbacher Subject: More on Russia and networking... Some comments on David Hibler's July editorial: First, let me correct you on one point. The Soviet Union has requested connection to the network but not to BITNET - rather to EARN. If you are in favor of open communication paths then perhaps the United States and people within BITNET should stop using geocentricism when assuming that all networks revolve around them. True, many do, but the fact that Russia (and Hungary and Bulgaria) have requested EARN membership and not BITNET membership should say something to you. The major problem of connecting all these communist countries to the network is not a security fear. It is the US Dept of Commerce that forbids it. Whenever any country buys a supercomputer from the United States (Cray or ETA for example) they are required to sign a very stringent agreement with the US Dept of Commerce that that supercomputer will not be made in any way shape or form available to communist countries - which includes via electronic methods. The US Dept of Commerce realized that one way around the trade ban would be for a non- aligned nation to order a Cray XMP/48 and install an M1 (2Mb) line to Moscow. True, the computer never made it over the border, but its computing power would be sent over the border. 
So, all EARN sites (as well as many Canadian sites) that have a super computer connected directly or indirectly to BITNET or EARN would have to *renegotiate* their contract with the US Dept of Commerce. Feelers are being made in that direction, but the game is just in the early innings so it is too early to tell if the US Dept of Commerce will relent and alter the supercomputer licences already issued. EARN has been working over the past year on accepting various new countries to their network. Voting was concluded last year for four new countries and their ratification was formally approved: Algeria - University of Annaba Cyprus - University of Cyprus Luxembourg - CEPS/INSTEAD Yugoslavia - UNESCO International Centre Last month two new countries have been ratified as valid for EARN and they are: Morocco - EMI India - Tata Institute Currently, EARN is discussing requests from 3 eastern countries to join EARN, principal among them is the USSR: Hungary USSR - USSR Academy of Sciences Bulgaria There are various legal problems with this and it may be some time before a formal decision is reached. Just thought I'd let you all know how things are currently rather than the usual speculation and philosophy behind this topic. Hank Nussbacher ------------------------------------------------------------------------ Kenneth M. King, President of EDUCOM, recently testified before the Science, Technology and Space Subcommittee of the United States Senate Committee on Commerce, Science and Transportation. Discussing the formation of a national educational and research network, King stated that "there is a broad consensus among government, education, and industry leaders that creation of a high-speed national research and education computer network is a critical national priority." The complete text of the testimony is available for the Psychology Bulletin Board in the file SENATE TESTIMON . -- Ed. ------------------------------------------------------------------------ STEVEN PINKER AND STEVAN HARNAD ON 'RULES AND LEARNING' Recently Steven Pinker presented a paper on aspects of cognitive psychology to a major international convention. Steven Harnad has replied to that paper, followed by counter-replies from Dr. Pinker. The set of six commentaries are contained in the file HARNAD PRINCE on the Psychology Bulletin Board, and provide a good example of how electronic mail networking can facilitate the work of science by providing rapid communication among scholars. The files are reprinted from another electronic mail list. (Ed.) ------------------------------------------------------------------------ From: mdw at INF.RL.AC.UK INTERACTING WITH COMPUTERS - CALL FOR PAPERS The Interdisciplinary Journal of Human-Computer Interaction INTERACTING WITH COMPUTERS will provide a new international forum for communication about HCI issues betwen academia and industry. It will allow information to be disseminated in a form accessible to all HCI practitioners, not just to academic researchers. This new journal is produced in conjunction with the BCS Human-Computer Interaction Specialist Group. Its aim is to stimulate ideas and provoke widespread discussion with a forward-looking perspective. A dialogue will be built up between theorists, researchers and human factors engineers in academia, industry and commerce thus fostering interdisciplinary dependencies. The journal will initially appear three times a year. The first issue of INTERACTING WITH COMPUTERS will be published in March 1989. 
Each issue will contain a large number of fully refereed papers presented in a form and style suitable for the widest possible audience. All long papers will carry an executive summary for those who would not read the paper in full. Papers may be of any length but content will be substantial. Short applications-directed papers from industrial contributors are actively encouraged. Every paper will be refereed not only by appropriate peers but also by experts outside the area of specialisation. It is intended to support a continuing commentary on published papers by referees and journal readers. The complete call for papers is in the file JOURNAL INT-COMP available from the Psychology Bulletin Board -- Ed. ------------------------------------------------------------------------ From: Walter Hudson AIWWH at ASUACAD FREE STATISTICAL SOFTWARE FOR STUDENTS -- The WALMYR Publishing Co. has released a free Student Edition of the "Statistical Package for the Personal Computer" or SPPC Program. It is NOT public domain software. It is copyrighted and cannot be modified in any manner. However, you may copy the Student Edition of the SPPC Program and give a copy to every student in your class, school, college or university. Or, you may install the SPPC on a local area network or LAN system and thereby make it available to all students. If you would like to have a copy of the Student Edition of the SPPC program, send four formatted blank diskettes and a stamped self-addressed return mailer to the Software Exchange, School of Social Work, Arizona State University, Tempe, AZ 85287. Please note: Diskettes will not be returned unless adequate postage is enclosed. ------------------------------------------------------------------------ From: Walter Hudson AIWWH at ASUACAD FREE CLINICAL ASSESSMENT SYSTEM FOR STUDENTS -- Walter Hudson has just released a FREE Student Version of the "Clinical Assessment System" or CAS program which may be used for classroom or field practicum training in psychiatry, clinical psychology, social work, counseling and other human service professions. It is an extensive system which enables future clinical practitioners to learn computer-based clinical assessment and progress monitoring. It is shipped with 20 clinical scales ready for use and the CAS program is designed for interactive use with clients. Administers and scores the scales and sends graphic output to the screen, disk files, and printer. If you want a copy of the CAS program, send three formatted blank floppies and a stamped self-addressed return mailer to the Software Exchange, School of Social Work, Arizona State University, Tempe, AZ 85287. NOTE: Diskettes will NOT be returned unless adequate postage is enclosed. The Student Version of the CAS program may be copied for and distributed to virtually every student in your school, department or university. The aim of this FREE Student Edition is to encourage students to learn how to monitor and evaluate their clinical practice. 
------------------------------------------------------------------------ ------------------------------------------------------------------------ FILES ARRIVING SINCE THE LAST ISSUE ________________________________________________________________________ FILENAME FILETYPE | (Posting Date) FILE CONTENTS ------------------------------------------------------------------------ AIDSNEWS 57 (09.01.88) AIDS Newsletter AIDSNEWS 58 (09.01.88) AIDS Newsletter AIDSNEWS 59 (09.01.88) AIDS Newsletter AIDSNEWS 60 (09.05.88) AIDS Newsletter AIDSNEWS 61 (09.05.88) AIDS Newsletter AIDSNEWS SIGNUP How to get the latest AIDS news automatically BITNET SERVERS (09.01.88) - other fileservers on Bitnet COMPUTER SOCV3N24 (09.05.88) Computer and Society Digest CRTNET 150 (09.05.88) Communications Research and Theory Newsletter CRTNET 151 (09.05.88) Communications Research and Theory Newsletter CRTNET 152 (09.08.88) Communications Research and Theory Newsletter FONETIKS 880901 (09.09.88) Phonetics Newsletter HARNAD PRINCE (09.10.88) S.Harnad&S.Pinker discuss 'On Rules&Learning' JOURNAL INT-COMP (09.10.88) New Journal Paper Call-'Interacting w/Computers' MEDNEWS VOL1N33 (09.02.88) Health Info-Com Network Newsletter MEDNEWS VOL1N34 (09.05.88) Health Info-Com Network Newsletter MEDNEWS VOL1N35 (09.09.88) Health Info-Com Network Newsletter NETMONTH 1988AUG (09.05.88) Bitnet MONTHLY news magazine ------------------------------------------------------------------------ HOW TO REQUEST FILES Most (but not quite all) Bitnet users of this service can request files interactively from userid UH-INFO at node UHUPVM1. If your request is valid and the links between your node and the University of Houston are all operating, your request will be acknowledged automatically and your file will arrive in a few seconds or minutes, depending on how busy the system is. To make the request use the same method you use to 'chat' or talk interactively with live users at other nodes. From a CMS node this might look like: TELL UH-INFO AT UHUPVM1 PSYCHNET SENDME filename filetype from a VAX system it might look like: SEND/REMOTE UH-INFO at UHUPVM1 PSYCHNET SENDME filename filetype At other Bitnet sites (or if these fail for you) check with your local computer center for the exact syntax. If you are not at a Bitnet site (or if within Bitnet you cannot 'chat' or talk interactively with live people at other nodes) send an electronic mail letter to userid EPSYNET at node UHUPVM1 with your request, including a comment that your site cannot send interactive commands. Bob Morecock will send out your requested file, usually the same day that your letter arrives. ------------------------------------------------------------------------ ** End of Psychology Newsletter ** ------------------------------------------------------------------------ From ANDERSON%BROWNCOG.BITNET at MITVMA.MIT.EDU Mon Jun 5 16:42:55 2006 From: ANDERSON%BROWNCOG.BITNET at MITVMA.MIT.EDU (ANDERSON%BROWNCOG.BITNET@MITVMA.MIT.EDU) Date: 5-OCT-1988 14:49:48.60 Subject: Technical Report Message-ID: A technical report is available from the Brown University Department of Cognitive and Linguistic Sciences: Technical Report 88-01 Department of Cognitive and Linguistic Sciences Brown University Representing Simple Arithmetic in Neural Networks Susan R. Viscuso, James A. Anderson and Kathryn T. Spoehr This report discuses neural network models of qualitative multiplication. We review past research in magnitude representation and cognitive arithmetic. 
We then develop a framework for building neural network models that exhibit behaviors that mimic the empirical results. The simulations show that neural net models can carry out qualitative multiplication given an adequate representation of magnitude information. It is possible to model a number of interesting psychological effects such as associative interference, practice effects, and the symbolic distance effect. However, this set of simulations clearly shows that neural networks are not satisfactory as devices for doing accurate arithmetic. It is possible to spend many hours of supercomputer CPU time teaching multiplication to a network, and still have a system that makes many errors. If, however, instead of accuracy we view this simulation as developing a very simple kind of `number sense,' with the formation and use of internal representations of sizes of numbers, then the simulation is more interesting. When real mathematicians and real physicists think about mathematics and physics, they rarely use logic or formal reasoning, but use past experience and their intuitive understanding of the complex systems they work on. We suspect a useful goal for network models may be to develop similar qualitative intuition in complex problem solving domains. This technical report can be obtained by sending an email message to Anderson at BROWNCOG (BITNET) or a request to: James A. Anderson Department of Cognitive and Linguistic Sciences Box 1978 Brown University Providence, RI 02912 Make sure you give your mailing address in your message. From connectionists-request at cs.cmu.edu Mon Jun 5 16:42:55 2006 From: connectionists-request at cs.cmu.edu (connectionists-request@cs.cmu.edu) Date: Fri, 7 Oct 1988 12:21-EDT Subject: About Re: Technical Report messages Message-ID: <592244469/mjw@F.GP.CS.CMU.EDU> Hello. I am the current connectionists mailing list maintainer. Since "connectionists" is currently sent to 407 addresses, including several which redistribute it to many other addresses, it would be greatly appreciated if readers could exercise extreme care in sending respondses to the authors of messages. In particular, please avoid using the `reply' function of your mailer to respond to posts on "connectionists", in most cases this results in messages going back to connectionists at cs.cmu.edu (whence the messages come) rather than the intended recipient. Thank you very much for your cooperation in this matter. Michael Witbrock connectionists-request at cs.cmu.edu From farrelly%ics at ucsd.edu Mon Jun 5 16:42:55 2006 From: farrelly%ics at ucsd.edu (Kathy Farrelly) Date: 12 October 1988 1458-PDT (Wednesday) Subject: tech report available Message-ID: <8810122158.AA17267@sdics.ICS> If you'd like a copy of the following tech report, please write, call, or send e-mail to: Kathy Farrelly Cognitive Science, C-015 University of California, San Diego La Jolla, CA 92093-0115 (619) 534-6773 farrelly%ics at ucsd.edu Report Info: A LEARNING ALGORITHM FOR CONTINUALLY RUNNING FULLY RECURRENT NEURAL NETWORKS Ronald J. Williams, Northeastern University David Zipser, University of California, San Diego The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived. Practical learning algorithms based on this result are shown to learn complex tasks requiring recurrent connections. In the recurrent networks studied here, any unit can be connected to any other, and any unit can receive external input. 
These networks run continually in the sense that they sample their inputs on every update cycle, and any unit can have a training target on any cycle. The storage required and computation time on each step are independent of time and are completely determined by the size of the network, so no prior knowledge of the temporal structure of the task being learned is required. The algorithm is nonlocal in the sense that each unit must have knowledge of the complete recurrent weight matrix and error vector. The algorithm is computationally intensive in sequential computers, requiring a storage capacity of order the 3rd power of the number of units and computation time on each cycle of order the 4th power the number of units. The simulations include examples in which networks are taught tasks not possible with tapped delay lines; that is, tasks that require the preservation of state. The most complex example of this kind is learning to emulate a Turing machine that does a parenthesis balancing problem. Examples are also given of networks that do feedforward computations with unknown delays, requiring them to organize into the correct number of layers. Finally, examples are given in which networks are trained to oscillate in various ways, including sinusoidal oscillation. From CDTPJB at CR83.NORTH-STAFFS.AC.UK Mon Jun 5 16:42:55 2006 From: CDTPJB at CR83.NORTH-STAFFS.AC.UK (CDTPJB@CR83.NORTH-STAFFS.AC.UK) Date: 13-OCT-1988 11:20:53 Subject: No subject Message-ID: Here in the Computing Department at Staffordshire Polytechnic, we are involved in research, consultancy and teaching in AI. Could you please send me some information about the bulletin board that I have been told that you organise? Phil Bradley. | Post: Computer Centre JANET: cdtpjb at uk.ac.nsp.cr83 | Staffordshire Polytechnic DARPA: cdtpjb%cr83.nsp.ac.uk at cunyvm.cuny.edu | Blackheath Lane Phone: +44 785 53511 | Stafford, UK | ST18 0AD From Connectionists-Request at cs.cmu.edu Mon Jun 5 16:42:55 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Fri, 18 Nov 1988 12:30-EST Subject: A plea from connectionists request. Message-ID: <595877413/mjw@F.GP.CS.CMU.EDU> Please NEVER use the reply function of your mailers to respond to posts on connectionists. Doing so often causes a copy of your message to be sent to connectionists as well as the intended recipient. Once sent to connectionists, it is forwarded to around 430 sites from New Zealand to Finland and from the US to Japan. This is not desirable, since forwarding these messages invariably costs someone something, and often, in the case of researchers outside the US, results in mail reciept charges which must be paid from the individual researcher's budget. This message is not directed at anyone in particular, it is in response to the relatively high numbers of TR and paper requests on connectionists of late. Thank you. 
Michael Witbrock connectionists-request at cs.cmu.edu From Connectionists-Request at cs.cmu.edu Mon Jun 5 16:42:55 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Fri, 18 Nov 1988 17:33-EST Subject: Advice for MH users Message-ID: <595895611/mjw@F.GP.CS.CMU.EDU> Users of the MH mail system can avoid the annoying and costly (for others) mistake of sending carbon copies of messages to a mailing list by making the following changes to their $HOME/.mh_profile file: Create or modify the Alternate-Mailboxes line such that it names each of the redistribution lists you subscribe to, for example: Alternate-Mailboxes: mesard,bargain,connectionists Create or modify the repl line to contain the option "-nocc me". Note that this tells the repl(1) command not to send carbon copies to you or any of the addresses listed as your alternate mailboxes. If you really do want copies of your messages use the -fcc switch. So the line might look something like this: repl: -nocc me -fcc +outbox This advice was kindly supplied by Wayne_Mesard at BBN. Michael Witbrock From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 21 Nov 1988 23:09-EST Subject: Free Recurrent Simulator Message-ID: I wrote a bare bones simulator for recurrent temporally recurrent neural networks in C. It simulates a network of the sort described in "Learning State Space Trajectories in Recurrent Neural Networks", and is named "full". Full simulates only fully connected networks, uses only arrays, and has no user interface at all. It was intended to be easy to translate into other languages, to vectorize and parallelize well, etc. It vectorized fully on the convex on the first try with no source modifications. Although it is short, it is actually usable and it works well. If you wish to use full, I'm allowing access to a compressed tar file through anonymous ftp from host DOGHEN.BOLTZ.CS.CMU.EDU, user "ftpguest", password "oaklisp", file "full/full.tar.Z". Be sure to use the BINARY command, and don't use the CD command or you'll be sorry. I am not going to support full in any way, and I don't have time to mail copies out. If you don't have FTP access perhaps someone with access will post full to the usenet, and perhaps some archive server somewhere will include it. Full is copyrighted, but I'm giving people permission to use if for academic purposes. If someone were to sell a it, modified or not, I'd be really angry. From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 22 Nov 1988 12:55-EST Subject: FTPing full.tar.Z Message-ID: People have been having problems ftping full.tar.Z despite their avoiding the CD command. The solution is to specify the remote and local file names separately: ftp> get remote file: full/full.tar.Z local file: full.tar.Z For the curious, the problem is that when you type "get full/full.tar.Z" to ftp it tries to retrieve the file "full/full.tar.Z" from the remote host and put it in the local file "full/full.tar.Z". If the directory "full/" does not exist at your end you get an error message, and said message does not say which host the file or directory does not exist on. Sorry for the inconvenience. --Barak. 
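For those who just want the flavor of what such a simulator computes, here is a minimal sketch of a fully connected, array-based update step of the general sort described above. It is not the "full" program itself: the discrete-time Euler step, the sigmoid squashing function, the network size, and all of the names are assumptions made purely for illustration.

#include <math.h>

#define N 8                      /* every unit is connected to every unit */

static double squash(double x) { return 1.0 / (1.0 + exp(-x)); }

/* One simulated time step of size dt: each unit's state y[i] relaxes toward
   the squashed weighted sum of all unit states plus its external input. */
void step(double y[N], const double w[N][N], const double ext[N], double dt)
{
    double net[N], y_new[N];

    for (int i = 0; i < N; i++) {
        net[i] = ext[i];
        for (int j = 0; j < N; j++)
            net[i] += w[i][j] * y[j];
    }
    for (int i = 0; i < N; i++)
        y_new[i] = y[i] + dt * (squash(net[i]) - y[i]);   /* Euler step of the relaxation */
    for (int i = 0; i < N; i++)
        y[i] = y_new[i];
}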
From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 22 Nov 1988 22:30-EST Subject: doghen's address Message-ID: A couple people have asked me for the internet address of DOGHEN.BOLTZ.CS.CMU.EDU. The answer is 128.2.222.37. --Barak. From Connectionists-Request at cs.cmu.edu Mon Jun 5 16:42:55 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Tue, 22 Nov 1988 21:32-EST Subject: Please don't access archives Message-ID: <596222655/mjw@F.GP.CS.CMU.EDU> Please refrain from accessing the connectionist archive until I post a new message with directions on how to do so. CS.CMU is changing its systems software to allow anonymous ftp, and I have to change things to reflect these changes. After the change, ftp access will be more convenient. I expect to have things fixed by Mon 28th. Regards, Michael Witbrock connectionists-request at cs.cmu.edu From n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK Mon Jun 5 16:42:55 2006 From: n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK (SHARKEY N (on Essex DEC-10)) Date: Friday, 6-Jan-89 17:47:50-GMT Subject: change of address Message-ID: <134654-573-533@uk.ac.essex> -------- please change my mailing address for the connectionist network to the following* exeter-connect at uk.ac.exeter.cs@ucl.cs.nss thanks, noel sharkey -------- From n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK Mon Jun 5 16:42:55 2006 From: n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK (SHARKEY N (on Essex DEC-10)) Date: Friday, 6-Jan-89 17:53:13-GMT Subject: move Message-ID: <134654-575-456@uk.ac.essex> -------- MOVE AND ADDRESS CHANGE. I have now moved to the Department of Computer Science, University of Exeter, Exeter, Devon, UK. My new e-mail address is noel at uk.ac.exeter.cs@ucl.cs.nss noel sharkey -------- From Dean.Pomerleau at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Dean.Pomerleau at F.GP.CS.CMU.EDU (Dean.Pomerleau@F.GP.CS.CMU.EDU) Date: Thu, 26 Jan 1989 10:29-EST Subject: Tech Report Available Message-ID: <601831793/pomerlea@F.GP.CS.CMU.EDU> A shorter version of the following tech report will appear in the proceedings of the 1988 NIPS Conference. To request a copy, please send e-mail to copetas at cs.cmu.edu and ask for tech report CMU-CS-89-107. Don't forget to send your hard mail address. --Dean --------------------------------------------------------------- ALVINN: An Autonomous Land Vehicle In a Neural Network Dean A. Pomerleau January 1989 CMU-CS-89-107 ABSTRACT ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand. 
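As a rough illustration of the kind of computation a 3-layer network of this sort performs, here is a minimal sketch of a forward pass. It is not Pomerleau's code: the layer sizes, the sigmoid units, the single flattened sensor image, and the most-active-unit readout are all assumptions made only for illustration; the report itself describes the actual architecture.

#include <math.h>

#define N_IN   960   /* flattened, coarsely sampled sensor image (assumed) */
#define N_HID   30
#define N_OUT   45   /* units coding candidate travel directions (assumed) */

static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }

/* Propagates one input image through the two weight layers and returns the
   index of the most active output unit as the suggested direction. */
int forward(const double in[N_IN],
            const double w_ih[N_HID][N_IN + 1],   /* last column holds a bias weight */
            const double w_ho[N_OUT][N_HID + 1],
            double hid[N_HID], double out[N_OUT])
{
    int best = 0;
    for (int h = 0; h < N_HID; h++) {
        double sum = w_ih[h][N_IN];               /* bias */
        for (int i = 0; i < N_IN; i++)
            sum += w_ih[h][i] * in[i];
        hid[h] = sigmoid(sum);
    }
    for (int o = 0; o < N_OUT; o++) {
        double sum = w_ho[o][N_HID];              /* bias */
        for (int h = 0; h < N_HID; h++)
            sum += w_ho[o][h] * hid[h];
        out[o] = sigmoid(sum);
        if (out[o] > out[best])
            best = o;
    }
    return best;
}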
--------------------------------------------------------------- From Connectionists-Request at cs.cmu.edu Mon Jun 5 16:42:55 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Thu, 9 Feb 1989 13:22-EST Subject: FTP access to connectionists. Message-ID: <603051758/mjw@F.GP.CS.CMU.EDU> A brief note: If you are accessing the connectionist archives off b.gp.cs.cmu.edu, the ONLY two directories you wil be able to access are: /usr5/connect/connectionists/bibliographies and /usr5/connect/connectionists/archives. Hope this clarifies some to the problems people have been reporting. Michael Witbrock (connectionist-request at cs.cmu.edu) From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 16 Feb 1989 19:33-EST Subject: Tech Report Announcement Message-ID: The following tech report is available. It is a substantially expanded version of a paper of the same title that appeared in the proceedings of the 1988 CMU Connectionist Models Summer School. Learning State Space Trajectories in Recurrent Neural Networks Barak A. Pearlmutter ABSTRACT We describe a number of procedures for finding $\partial E/\partial w_{ij}$ where $E$ is an error functional of the temporal trajectory of the states of a continuous recurrent network and $w_{ij}$ are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize $E$, so these procedures form the kernels of connectionist learning algorithms. Simulations in which networks are taught to move through limit cycles are shown. We also describe a number of elaborations of the basic idea, such as mutable time delays and teacher forcing, and conclude with a complexity analysis. This type of network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech. Overseas copies are sent first class so there is no need to make special arrangements for rapid delivery. Requests for copies should be sent to Catherine Copetas School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 or Copetas at CS.CMU.EDU by computer mail. Ask for CMU-CS-88-191. From Connectionists-Request at CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Wed, 15 Mar 1989 23:26-EST Subject: I hope this hadn't gone out. Message-ID: <606025575/connect@B.GP.CS.CMU.EDU> This came from someone I just added to the list. I hope I'm not repeating stuff. michael Date: Mon, 13 Mar 89 15:30:04 GMT From: mcvax!inesc!alf!lba at uunet.UU.NET (Luis Borges de Almeida) To: cs.cmu.edu!connectionists-request at inesc.INESC Subject: Workshop announcement EURASIP WORKSHOP ON NEURAL NETWORKS Sesimbra, Portugal February 15-17, 1990 ANNOUNCEMENT AND CALL FOR PAPERS The workshop will be held at the Hotel do Mar in Sesimbra, Portugal. It will take place in 1990, from February 15 morning to 17 noon, and will be sponsored by EURASIP, the European Association for Signal Processing. It will be open to participants from all countries. Contributions from all fields related to the neural network area are welcome. A (non-exclusive) list of topics is given below. Care is being taken to ensure that the workshop will have a high level of quality. Proposed contributions will be evaluated by an international board of experts, and a proceedings volume will be published. The number of participants will be limited to 50. 
Full contributions will take the form of oral presentations, and will correspond to papers in the proceedings. Some short contributions will also be accepted, for presentation of ongoing work, projects (ESPRIT, BRAIN, DARPA,...), etc. They will be presented in poster format, and will not originate any written publication. A small number of non-contributing participants may also be accepted. The official language of the workshop will be English. TOPICS: - signal processing (speech, image,...) - pattern recognition - training procedures (new algorithms, speedups,...) - generalization - implementation - specific applications where NN have been proved better than other approaches - industrial projects and realizations SUBMISSION PROCEDURES Submissions, both for long and for short contributions, will consist of (strictly) 2-page summaries, plus a cover page indicating title, author's name, affiliation, phone no., and e-mail address if possible. Three copies should be sent directly to the Technical Chairman, at the address given below. The calendar for contributions is as follows: Full contributions Short contributions Deadline for submission June 1, 1989 Oct 1, 1989 Notif. of acceptance Sept 1, 1989 Nov 15, 1989 Camera-ready paper Nov 1, 1989 THE LOCATION Sesimbra is a fishermens village, located in a nice region about 30 km south of Lisbon. Special transportation from/to Lisbon will be arranged. The workshop will end on a Saturday at lunch time; therefore, the participants will have the option of either flying back home in the afternoon, or staying for sightseeing for the remainder of the weekend in Sesimbra and/or Lisbon. An optional program for accompanying persons is being organized. For further information, send the coupon below to the general chairman, or contact directly. ORGANIZING COMMITTEE: GENERAL CHAIRMAN Luis B. Almeida INESC Apartado 10105 P-1017 LISBOA CODEX PORTUGAL Phone: +351-1-544607. Fax: +351-1-525843. E-mail: {any backbone, uunet}!mcvax!inesc!lba TECHNICAL CHAIRMAN Christian Wellekens Philips Research Laboratory Av. Van Becelaere 2 Box 8 B-1170 BRUSSELS BELGIUM Phone: +32-2-6742275 TECHNICAL COMMITTEE John Bridle Herve Bourlard Frank Fallside Francoise Fogelman-Soulie Jeanny Herault Larry Jackel Renato de Mori REGISTRATION, FINANCE, LOCAL ARRANGEMENTS Joao Bilhim INESC Apartado 10105 P-1017 LISBOA CODEX PORTUGAL Phone: +351-1-545150. Fax: +351-1-525843. --------------------------------------------------------------------- Please keep me informed about the Sesimbra Workshop on Neural Networks Name: University/Company: Address: Phone: E-mail: [ ] I plan to attend the workshop I plan to submit a contribution [ ] full [ ] short Preliminary title: (send to Luis B. Almeida, INESC, Apartado 10105, P-1017 LISBOA CODEX, PORTUGAL) From Connectionist.Research.Group at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Connectionist.Research.Group at B.GP.CS.CMU.EDU (Connectionist.Research.Group@B.GP.CS.CMU.EDU) Date: Wed, 15 Mar 1989 23:13-EST Subject: Forwarded Message-ID: <606024818/connect@B.GP.CS.CMU.EDU> From: Mahesan Niranjan Date: Mon, 13 Mar 89 10:45:19 GMT Subject: Not Total Squared Error Criterion Re: > Date: Wed, 08 Mar 89 11:36:31 EST > From: thanasis kehagias > Subject: information function vs. squared error > > i am looking for pointers to papers discussing the use of an alternative > criterion to squared error, in back propagation algorithms. the [..] 
> G=sum{i=1}{N} p_i*log(p_i) > Here is a non-causal reference: I have been looking at an error measure based on "approximate distances to class-boundary" instead of the total squared error used in typical supervised learning networks. The idea is motivated by the fact that a large network has an inherent freedom to classify a training set in many ways (and thus poor generalisation!). In my training, an example of a particular class gets a target value depending on where it lies with respect to examples from the other class (in a two class problem). This implies, that the target interpolation function that the network has to construct is a smooth transition from one class to the other (rather than a step-like cross section in the total squared error criterion). The important consequence of doing this is that networks are automatically deprived of the ability to form large weight (- sharp cross section) solutions. niranjan PS: A Tech report will be announced soon. From Connectionists-Request at CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Tue, 21 Mar 1989 12:38-EST Subject: Forward from Cybenko Message-ID: <606505108/connect@B.GP.CS.CMU.EDU> George Cybenko is now on connectionists. Here is his first message. ********************************************************* The paper of mine that Alexis Wieland mentioned in a note is titled "Approximations by superpositions of a sigmoidal function" and it will appear in the journal "Mathematics of Control, Signals and Systems" published by Springer-Verlag soon. People interested in a preprint can write to me at George Cybenko Center for Supercomputing Research and Development 319G Talbot Lab University of Illinois at Urbana Urbana, IL 61801 or call (217) 244-4145 or send email to gc at uicsrd.csrd.uiuc.edu. In addition to the result mentioned by Wieland, there is a treatment of other possible activation functions. George Cybenko From Connectionists-Request at CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Tue, 21 Mar 1989 12:44-EST Subject: Forward call for papers Message-ID: <606505474/connect@B.GP.CS.CMU.EDU> From: "Robert L. Russel" Subject: CALL FOR PAPERS - AIST * * * CALL FOR PAPERS * * * -- The Association for Intelligent Systems Technology (AIST) -- The Association for Intelligent Systems Technology (AIST), a chartered not-for-profit organization, is seeking noteworthy papers to appear in the Spring 1989 issue of the association's official publication, INTELLIGENT SYSTEMS REVIEW. In keeping with the AIST purpose, papers may be one of four kinds: 1. A description of original research accomplished and findings contributing to the advancement of artificial intelligence and neural networks technology. 2. A description of the applications of AI annd neural networks technology to a problem in business, engineering, financial operations, or education. 3. A description of a business engaged in engineering, systems development, financial operations, medicine, education, etc. for which one or more applications of intelligent systems technology has had a significant impact on the effectiveness, productivity or profitability of the business, including a description of the application and how it was implemented. 4. 
Description of an educational program intended to impart knowledge and develop skills on the part of individuals having interest in the application of AI/Neural Networks to business and the professions. The ISR accepts written submissions featuring items such as: -- Original Research: Peer-reviewed, high quality research results representing new and significant contributions to AI/Neural Networks and its applications. -- Articles: Unrefereed technical articles focused on the informative review or tutorials on the author(s)' specialty area, or invited articles as solicited by the ISR editors. -- Letters to the editor: Comments on research papers or articles published in ISR and other matters of interest to AIST. -- Editorials: Commentary on technical/professional issues significant to the AIST community. -- Institutional Research/Project: Introduction of R&D or contract work performed by an organization. Original research papers in the ISR are refereed by one or more peer researchers selected by the Editorial Board. All other articles in the ISR are unrefereed working papers. Authors of the papers accepted for publication will be provided with specific instructions for preparing the final camera-ready manuscript. Author(s) also must sign and date a Transfer of Copyright form to be sent to AIST, Inc. Papers should be about 5000 words (10 pages) in length. They may include line drawings but photography requiring color or gray scale reproduction should not be included. Papers must be submitted by May 15, 1989 to appear in the Spring issue. Contributions are welcomed from any person. All contributions sent to the editors will be assumed to be for consideration for publication unless specified otherwise. The written material will not be returned. Send papers to: For Additional Information Call: Editorial Board Major Bob Russel AIST, Inc. Neural Networks Applications Editor 6310 Fly Road (315)330-7069 E. Syracuse, N.Y. 13057 Mr. Doug White Military (C3I) Applications Editor (315)330-3564 From Connectionists-Request at CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Mon, 10 Apr 1989 11:29-EDT Subject: Can anyone answer this question. Message-ID: <608225360/connect@B.GP.CS.CMU.EDU> Message follows - Michael Date: Wed, 5 Apr 89 11:02:07 EST From: nunez at en.ecn.purdue.edu (Fernando J Nunez) To: Connectionists-Request at cs.cmu.edu Subject: A new VLSI-NN system? I am interested in all kinds of VLSI-NNs. In the process of gathering information about implementations, I found a short and confusing article in BusinessWeek, March 6, 1989, p. 103. I have extracted the following: "Neural net chips just won't be able to take the heat. That's because neural nets are based on analog technology, .... Such microcircuitry is much more temperature-sensitive than digital transistors... So Steven G. Morton,a former ITT Corp. researcher, formed Oxford Computer in Oxford, Conn., to pioneer neural nets based on digital memory-chip technology. Although Morton's ideas were met with widespread skepticism, scientists at MIT recently scrutinized his designs and pronounced them apparently sound." I attribute the contradictions, or at least, inaccuracies to the lack of audit of the article by an expert. I would appreciate if someone could tell me where can I found a description of this work. I hope it won't be in FORBES magazine. Here come my name and electronic address. Fernando J. 
Nunez nunez at en.ecn.purdue.edu From FEILDWB%SERVAX.BITNET at VMA.CC.CMU.EDU Mon Jun 5 16:42:55 2006 From: FEILDWB%SERVAX.BITNET at VMA.CC.CMU.EDU (WILLIAM=FEILDJR) Date: 04/26/89 13:36:09 EST Subject: lodging at conference Message-ID: I am a Phd Candidate at Fla International University in Miami. I will be attending the Neural Network Conference in Washington D.C. 6/18-6/21 at MY OWN EXPENSE! I could use some help in the lodging department. If anyone is planning on attending the conference and would like to share expenses for a room, please contact me. I would greatly appreciate it. I can be reached at: William B Feild Jr Fla International University School of Computer Science University Park Campus Miami Fl 33199 (305) 554-2744 School (305) 595-2017 Home Feildwb at servax. bitnet CC : MAILER at CMUCCVMA From melkman%BENGUS.BITNET at VMA.CC.CMU.EDU Mon Jun 5 16:42:55 2006 From: melkman%BENGUS.BITNET at VMA.CC.CMU.EDU (melkman abraham) Date: Tue, 1 Aug 89 16:16:51-020 Subject: info on character recognizers Message-ID: <8908011816.AA09067@red.bgu.ac.il> I am interested in all information regarding (hand-written) character recognizer learning algorithms, pros and cons of architectures, functional systems etc. Please add citation where possible. I will appreciate your replies, and summarize them if so desired. Thanks, Avraham Melkman From Marco.Zagha at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Marco.Zagha at B.GP.CS.CMU.EDU (Marco.Zagha@B.GP.CS.CMU.EDU) Date: Tue, 29 Aug 1989 09:18-EDT Subject: Connections Per Second Message-ID: <620399884/marcoz@MARCOZ.BOLTZ.CS.CMU.EDU> There are two problem with reporting just connections per second, even if the "standard" method is used. Connections from the input units require 4 floating point operations per pattern, while connections in higher layers require 6 floating operations per pattern (because of the recursive error computation). Comparing just the CPS rating between a NetTalk benchmark and an encoder is not quite fair, since the typical network for NetTalk has many more connections from the input layer. Another problem is that there are different ways of updating weights: every pattern, every epoch, or every N patterns. Unless someone can come up with a good way of capturing these effects in a single number, I would suggest always giving the full details on the network used and the frequency of weight updates. It would also be nice to report a performance model which estimates CPS in terms of frequency of weight updates, number of weights, number of weights, in the input layer, number of patterns, etc. == Marco Zagha (marcoz at cs.cmu.edu) From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From srh at flash.bellcore.COM Mon Jun 5 16:42:55 2006 From: srh at flash.bellcore.COM (srh@flash.bellcore.COM) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: >do not depend on the net's actually being implemented in parallel, >rather than just being serially simulated? Is it only speed and >capacity parameters, or something more? I have a simple answer. 
In practice, I've resisted using parallel machines to run backprop simulations because in deciding how best to parallelize the problem for a given machine, you tend to make choices you're less likely to make with a very fast serial machine. So, for example, if you parallelize over patterns (one machine node processes the entire net for a given pattern) you sacrifice the capability to update the weights after every pattern. These choices tend to be different for different parallel machines. This experience makes me suspect that, in modeling the brain, the specifics of the parallel implementation (e.g., restriction to local connectivity) are likely to determine the nature of information representation and learning algorithms, as well as of what types of information processing the organism is capable. Gale Martin MCC From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: _______________________________ Date: ____________________________ Subject: No subject Message-ID: 4 >>>>>>>>>>>>>>>>>>>>>>>>>>>>> cut here for license <<<<<<<<<<<<<<<<<<<<<<<<< From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Paul R Kersten From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: about Turing equivalence or lack thereof lose some of their force. I would hope that everyone would agree that no finite connectionist hardware could be more powerful than a conventional finite state automaton, and it would be nice if everybody could also agree that no amoeba, no starfish, no koala bear, and no human being can be more powerful than a finite-state automaton either. There is, of course, the question of whether a connectionist machine would be as powerful as a finite-state automaton, but this strikes me as trivial. (Is it?) Some of you may then ask, why anybody bothers with Turing machines or any machines more powerful than finite-state ones. For example, what of all the arguments of Chomsky et al. about NLs not being finite-state? This leads to the next point. (3) Equivalence with respect to I/O behavior is not the sum total of what the theory of computation has taught us. Thus, while I would claim that no physically realized system can be more powerful than a finite-state machine in the weak sense, the same is not true in other senses. The correct view of the matter is, I am pretty sure, that machines more powerful than the finite-state models are useful mathematical tools for proving results which do in fact apply to physically realized (and hence really finite-state) systems, but which would be more difficult to prove otherwise. 
Thus, the claim that NLs are not finite-state, for example, should really be taken to mean that NLs have constructions (such as center-embedding or--much more commonly--repetition) which have the following feature: given a finite-state machine of the usual sort, the size of the machine increases in proportion to an increase in the size of a finite set of tokens of such a construction. Hence, in a sense which can be made more precise, one wants to say that the finite-state machine cannot "know" these patterns. On the other hand, a machine with pushdown storage for center embedding or queue storage for repetition, even if it is strictly finite and so only recognizes a finite set of such tokens, can be modified only slightly to recognize a larger such set (the modification would consist in extending the storage by a cell). In a sense that can be made precise, such a machine "knows" the infinite pattern even though it can only recognize at any given time a finite set of tokens of the pattern. It has always seemed more convenient to state the relevant results in terms of idealized machines which actually have infinite memory, but in reality we are talking about machines that are finite-state in power but have a cleverer kind of design, which allows them in a sense to "know" more than a conventional finite-state machine. Thus, we have to have a broader conception of what mathematical equivalence is. A finite-state machine is weakly equivalent to a pushdown machine with bounded storage, yet there is a well-defined sense in which the latter is more powerful. (4) Hence, it would be useful to know whether connectionist models (the theoretical ones) are equivalent to Turing machines or at least to some class of machines that can handle center-embedding, repetition, and a few other non-finite-state properties of NLs, for example. For the idea would be that this would tell us IN WHAT SENSE connectionist models (those actually implementable) are equivalent to finite-state machines. The crucial point is this: a finite-state machine designed to handle repetition must be drastically altered each time we increase the size of repetitions we want handled. A simple TM (or a queue machine) with a bound on its memory is I/O equivalent to a finite-state machine, but in order for it to handle larger and larger repetitions, it suffices to keep extending its tape (queue), a much less radical change (in a well-defined sense) than that required in the case of a finite-state machine. Turning back to connectionist models, the question then is whether, to handle non-finite-state linguistic constructions (or other such cognitive tasks), they have to be altered as radically as finite-state machines do (in effect, by adding new states) or less radically (as in the case of TMs and other classes of automata traditionally assumed to have infinite memory). (5) Perhaps I should add, by way of analogy, that there are many other situations where it is more clearly understood that the theory of computation deals with a class of formalisms that cannot be physically realized but the results are really intended to tell us about formalisms that are realizable. The case of nondeterminism is an obvious one. Nondeterministic machines (in the sense used in automata theory; this is quite different from nondeterminism in physical theory or in colloquial usage) cannot be physically realized.
But it is convenient to be able to pull tricks such as (a) prove the equivalence of regular grammars to nondeterministic finite-state machines, (b) prove the equivalence of nondeterministic and deterministic finite-state machines, and (c) be able to conclude that regular grammars might be a useful tool for studying physically realizable deterministic finite-state machines. It is much easier to do things this way than to do things directly, in many cases, but the danger is that people (even the initiated and certainly the uninitiated) will then assume either that the impossible devices are really possible or that the theory is chimerical. I claim that this is precisely what has happened with the concept of machines with infinite memory (such as Turing machines). (6) Given what has been said, we can, I think, also make sense of the results that show that certain connectionist models are MORE powerful than Turing machines. This might seem to contradict the Church-Turing thesis (which I have been assuming throughout), but in fact the Church-Turing thesis implicitly refers only to situations where time and memory may be infinite but the algorithm is finitely specified, whereas the results I am alluding to deal with (theoretical) models of nets that are infinite. In more traditional terms, these would correspond to machines like finite-state machines but with an infinite number of states. There is no question but that with an infinite number of states you could do all sorts of things that you cannot do with a finite number, but it is unclear to me that such results offer any comfort to the proponents of net architectures. (8) Finally (at least for now), it seems to me that there remain significant gaps between what the theory of computation tells us about and what we need when we try to understand the behavior of complex systems (not only human beings but even real computer software and hardware!) even after some of the confusions I have been trying to dispel are dispelled. While there are many formal notions of equivalence beyond that of I/O equivalence which can make our discussion of theoretical models of language or cognition more useful (higher-level if you will) without sacrificing precision, I don't think that we have developed the mathematical definitions that we need to really capture what goes on in the highest levels of current theoretical thinking. Thus, I would not want to say that the question of the equivalence of connectionist and standard models can be put to rest with current tools of the theory of computation. Rather, I would say that (a) at certain levels the two are equivalent (assuming somebody can do the necessary work of drafting and proving the theorems), (b) at certain other levels they are not, (c) the level at which most people in the field are thinking about the problem intuitively has probably not been formalized, (d) only if this level is formalized will we know what we are really talking about, and it is only by deepening the mathematical models that we will achieve this, not by scuttling them, and (e) ultimately the answers we want will have to come from a marriage of a more complete mathematical theory with a richer empirical science of cognition. But neither is achievable without the other, for every science gets the math that it deserves, and vice versa.
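A minimal sketch, in Python, of the contrast drawn in point (4) above: a table-driven finite-state recognizer for a^n b^n has to be rebuilt with new states whenever the bound on n grows, while a bounded-counter recognizer with the same input/output behavior only needs its bound raised. The function names and the bound parameter are illustrative, not drawn from any particular published construction.
----------------
def make_fsm(bound):
    # Explicit state-transition table for { a^n b^n : 1 <= n <= bound }.
    # Handling a larger n means building a new, larger table (new states).
    delta = {}
    for k in range(bound):
        delta[('a', k), 'a'] = ('a', k + 1)
    for k in range(1, bound + 1):
        delta[('a', k), 'b'] = ('b', k - 1)
    for j in range(1, bound):
        delta[('b', j), 'b'] = ('b', j - 1)
    return delta

def fsm_accepts(delta, s):
    state = ('a', 0)
    for ch in s:
        if (state, ch) not in delta:
            return False
        state = delta[state, ch]
    return state == ('b', 0)

def counter_accepts(s, memory_bound):
    # Same input/output behavior, organized as a bounded counter: handling
    # deeper nesting only requires a larger bound, not a redesigned machine.
    count, seen_b = 0, False
    for ch in s:
        if ch == 'a' and not seen_b:
            count += 1
            if count > memory_bound:
                return False
        elif ch == 'b' and count > 0:
            seen_b = True
            count -= 1
        else:
            return False
    return seen_b and count == 0

# fsm_accepts(make_fsm(3), 'aabb') and counter_accepts('aabb', 3) are both True;
# accepting 'aaaabbbb' needs a new table for the former, but only a larger
# bound for the latter.
----------------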
Alexis Manaster-Ramer POB 704 IBM Research Yorktown Heights NY 10598 amr at ibm.com (914) 789-7239 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: Wed, 7 Mar 1990 21:10-EST Subject: Linear separability In-Reply-To: Chaim Gotsman's mail message of Wed, 7 Mar 90 22:48:02 +0200 Message-ID: A simple reduction shows that linear separability of functions specified by boolean circuits is NP-complete. If we let g(x_0,...,x_n,x_{n+1},x_{n+2}) = f(x_0,...,x_n) AND (x_{n+1} XOR x_{n+2}) then f is satisfiable if and only if g is not linearly separable, so linear separability of boolean circuits must be NP-complete. Barak Pearlmutter. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: A level is composed of a "MEDIUM that is to be processed, COMPONENTS that provide primitive processing, LAWS OF COMPOSITION that permit components to be assembled into SYSTEMS, and LAWS OF BEHAVIOR that determine how system behavior depends on the component behavior and the structure of the system". The word "level" intuitively conveys the notion of an entity with some AUTONOMY, the fact that there are several such entities, and RELATIONS (hierarchy), i.e. composition laws between them. The word "organization" (within a level) conveys the notion of REGULARITIES, laws (of behavior), within a level. 2.2 Sketch of a formal description : ***************************************************************** Let us consider : - a set S1 and a relation R1 on S1, both making up a structure <S1,R1>; - a set S2 disjoint from S1 and a relation R2 on S2, both making up a structure <S2,R2>; - a bijective function f from S1**n (S1xS1x...xS1) to S2, n>1. Then I suggest to say that : [<S1,R1>,<S2,R2>,f] represents an (atomic) organization hierarchy iff, from the relation R2 on S2, through f-1 (the inverse function of f), it is not possible to infer a relation R1' on S1. ***************************************************************** For instance, if we consider the instruction and bit levels of a computer, from the (logical) relation between two successive instructions it is not possible to infer a relation between the bits corresponding to these instructions. Or, in the language domain, from the (syntactic or semantic) relation between words it is not possible to infer a relation between the letters which make up these words. This definition does not take into account any environment of the object or phenomenon. Only the major notions mentioned above have been considered: - regularities: relations R1, R2; - composition law between levels: f; - autonomy: no relation R1'. Can such a definition be a (syntactic) bootstrap of a formal description of organization levels? From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: There are actually many models in the literature; most of the people studying the problem seem not to be aware of each other's work. In addition to those already mentioned, [Feldman+Ballard] have proposed a model inspired by [Treisman+Gelade]. (References are at the end of this message.) [Strong+Whitehead] have implemented a similar model. [Fukushima]'s model is rather different.
[Koch+Ullman]'s and [Mozer]'s models seem closest to the psychophysical and neurophysiological data to me. I [Chapman] have implemented [Koch+Ullman]'s proposal successfully and used it to model in detail the psychophysically-based theories of visual search due to [Treisman+Geldade, Treisman+Gormican]. My thesis [Chapman] describes these and other models of attention and tries to sort out their relative strengths. It's probably time for someone to write a review article. Cites: David Chapman, {\em Vision, Instruction, and Action.} PhD Thesis, MIT Artificial Intelligence Laboratory, 1990. Jerome A.~Feldman and Dana Ballard, ``Connectionist Models and their Properties.'' {\em Cognitive Science} {\bf 6} (1982) pp.~205--254. Kunihiko Fukushima, ``A Neural Network Model for Selective Attention in Visual Pattern Recognition.'' {\em Biological Cybernetics} {\bf 55} (1986) pp.~5--15. Christof Koch and Shimon Ullman, ``Selecting One Among the Many: A Simple Network Implementing Shifts in Selective Visual Attention.'' {\em Human Neurobiology} {\bf 4} (1985) pp.~219--227. Also published as MIT AI Memo 770/C.B.I.P.~Paper 003, January, 1984. Michael C.~Mozer, ``A connectionist model of selective attention in visual perception.'' {\em Program of the Tenth Annual Conference of the Cognitive Science Society}, Montreal, 1988, pp.~195--201. Gary W.~Strong and Bruce A.~Whitehead, ``A solution to the tag-assignment problem for neural networks.'' {\em Behavioral and Brain Sciences} (1989) {\bf 12}, pp.~381--433. Anne M.~Treisman and Garry Gelade, ``A Feature-Integration Theory of Attention.'' {\em Cognitive Psychology} {\bf 12} (1980), pp.~97--136. Anne Treisman and Stephen Gormican, ``Feature Analysis in Early Vision: Evidence From Search Asymmetries.'' {\em Psychological Review} Vol.~95 (1988), No.~1, pp.~15--48. Some other relevant references: C.~H.~Anderson and D.~C.~Van Essen, ``Shifter circuits: A computational strategy for dynamic aspects of visual processing.'' {\em Proceedings of the National Academy of Sciences, USA}, Vol.~84, pp.~6297--6301, September 1987. Francis Crick, ``Function of the thalamic reticular complex: The searchlight hypothesis.'' {\em Proceedings of the National Academy of Science}, Vol.~81, pp.~4586--4590, July 1984. Jefferey Moran and Robert Desimone, ``Selective attention gates visual processing in the extrastriate cortex.'' {\em Science} {\bf 229} (1985), pp.~782--784. V.~B.~Mountcastle, B.~C.~Motter, M.~A.~Steinmetz, and A.~K.~Sestokas, ``Common and Differential Effects of Attentive Fixation on the Excitability of Parietal and Prestriate (V4) Cortical Visual Neurons in the Macaque Monkey.'' {\em The Journal of Neuroscience}, July 1987, 7(7), pp.~2239--2255. John K.~Tsotsos, ``Analyzing vision at the complexity level.'' To appear, {\em Behavioral and Brain Sciences}. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From melkman%BENGUS.BITNET at vma.CC.CMU.EDU Mon Jun 5 16:42:55 2006 From: melkman%BENGUS.BITNET at vma.CC.CMU.EDU (melkman abraham) Date: Thu, 26 Apr 90 16:44:10-020 Subject: AI and music workshop Message-ID: <9004261844.AA05952@red.bgu.ac.il> ECAI Workshop on Artificial Intelligence and Music Stockholm, Sweden. 
Tuesday, August 7, 1990 This workshop is the successor of the four previous workshops held in the last two years: the AAAI-88 (St.Paul) and IJCAI-89 (Detroit) Workshops, the GMD Workshop that preceded the International Computer Music Conference (Bonn, Sept. 88), and the European Workshop on AI and Music (Genoa, June 89). AI and Music is an emerging discipline that involves such fields as artificial intelligence, music, psychology, philosophy, linguistics, and education. The last four workshops demonstrated a mixture of methods, that range from somewhat technical application of AI methods to music problems, to theoretical cognitive research. This workshop will focus on further deepening our understanding of those AI techniques and approaches that are relevant to music, and on the relevance of music to AI research. The workshop topics include (but are not limited to) the following: Cognitive Musicology Expert Systems and Music Knowledge Representation and Music Tutoring Neural Computation and connectionist approaches in Music Composition, Performance, Analysis tools (based on AI techniques) Multi-media Composition and Performance The Workshop is scheduled during the ECAI Tutorials (August, 7), held immediately before the ECAI Scientific Meeting (August, 8-10). ORGANISING COMMITTEE : Mira Balaban , Dept of Math and Computer Science, Ben-Gurion University, Israel. Antonio Camurri , DIST, University of Genoa, Italy. Gianni De Poli , CSC, University of Padova, Italy. Kemal Ebcioglu , IBM, Thomas J. Watson Research Center, USA. Goffredo Haus , LIM-DSI, University of Milan, Italy. Otto Laske , NEWCOMP, USA. Marc Leman , IPEM, University of Ghent, Belgium. Christoph Lischka , GMD, Federal Republic of Germany. SUBMISSION INFORMATION Submit eight copies of a camera ready manuscript (about 5 single-spaced A4 pages) to Antonio Camurri. Please follow the IJCAI standards for the preparation of the manuscript. Antonio Camurri, DIST - University of Genova Via Opera Pia, 11A - 16145 Genova, Italy e-mail: music at ugdist.UUCP Phone +39 (0)10 3532798 - 3532983; Telefax +39 (0)10 3532948 Important dates May 15, 1990: Deadline for submission. July 1, 1990: Notification of acceptance. For registration and general information on the ECAI-90 conference, please refer to: ECAI-90, c/o Stockholm Convention Bureau, Box 6911 S-102 39 Stockholm, Sweden. Tel. +46-8-230990 FAX. +46-8-348441 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From stjohn%cogsci at ucsd.edu Mon Jun 5 16:42:55 2006 From: stjohn%cogsci at ucsd.edu (Mark St. John) Date: 19 June 1990 1344-PDT (Tuesday) Subject: tech report on story comprehension Message-ID: <9006192044.AA29783@cogsci.ucsd.edu.UCSD.EDU> The Story Gestalt Text Comprehension by Cue-based Constraint Satisfaction Mark F. St. John Department of Cognitive Science, UCSD Abstract: Cue-based constraint satisfaction is an appropriate algorithm for many aspects of story comprehension. Under this view, the text is seen to contain cues that are used as evidence to constrain a full interpretation of a story. Each cue can constrain the interpretation in a number of ways, and cues are easily combined to produce an interpretation. Using this algorithm, a number of comprehension tasks become natural and easy. 
Inferences are drawn and pronouns are resolved automatically as an inherent part of processing the text. The developing interpretation of a story is revised as new information becomes available. Knowledge learned in one context can be shared in new contexts. Cue-based constraint satisfaction is naturally implemented in a recurrent connectionist network where the weights encode the constraints. Propositions are processed sequentially to add constraints to refine the story interpretation. Each of the processes mentioned above is seen as an instance of a general constraint satisfaction process. The model learns its representation of stories in a hidden unit layer called the Story Gestalt. Learning is driven by asking the model questions about a story during processing. Errors in question answering are used to modify the weights in the network via Back Propagation. ------------ The report can be obtained from the neuroprose database by the following procedure. unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): neuron ftp> cd pub/neuroprose ftp> type binary ftp> get (remote-file) stjohn.story.ps.Z (local-file) foo.ps.Z ftp> quit unix> uncompress foo.ps.Z unix> lpr -P(your_local_postscript_printer) foo.ps From
bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Help for a NOAA connectionist "primer" Message-ID: mike, thanks for the input - it seems a cogent summary of the (many) responses I've been getting. However, it seems just about no one has really attempted a one-to-one sort of comparison using traditional pattern recognition benchmarks. Just about everything I hear and read is anecdotal. Would it be fair to say that "neural nets" are more accessible, simply because there is such a plethora of 'sexy' user-friendly packages for sale? Or is back-prop (for example) truly a more flexible and widely-applicable algorithm than other statistical methods with uglier-sounding names? If not, it seems to me that most connectionists should be having a bit of a mid-life crisis about now. rich From Dean.Pomerleau at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Dean.Pomerleau at F.GP.CS.CMU.EDU (Dean.Pomerleau@F.GP.CS.CMU.EDU) Date: Mon, 6 Aug 1990 07:46-EDT Subject: Summary (long): pattern recognition comparisons Message-ID: <649943171/pomerlea@F.GP.CS.CMU.EDU> Leonard Uhr writes > Neural nets using backprop have only handled VERY SIMPLE images, usually in > 8-by-8 arrays. and later > What experimental evidence is there that NN recognize images as complex as those > handled by computer vision and pattern recognition approaches? For the past two years I've been using backpropagation networks with 32x30 and 45x48 pixel retinas and up to ~20,000 connections to autonomously drive a Chevy van. This system, called ALVINN (Autonomous Land Vehicle In a Neural Network), uses a video camera or 2D scanning laser rangefinder as input, and outputs the direction in which the vehicle should steer. The network learns by watching a person drive for about a 1/4 mile. After about 5 MINUTES OF TRAINING, the network is able to take over and continue driving on its own.
Because it is able to learn what image features are important for particular driving situations, ALVINN has been successfully trained to drive in a wider variety of situations than any other single autonomous navigation system, all of which use the traditional vision processing techniques Leonard Uhr refers to. The situations ALVINN networks have been trained to handle include single lane dirt roads, single lane paved bike paths, two lane suburban neighborhood streets, lined two lane highways, and, using the laser range finder as input, parking lot driving. Because of its ability to effectively integrate multiple image features into a single steering command, ALVINN has proven more robust than other autonomous navigation systems which rely on finding one or a small number of features (like a yellow road center line) in the image. Because of the simplicity of the system, it is able to process up to 29 images per second (both training and testing are done using two Sun-4 Sparcstations). ALVINN is currently limited in the speed it can drive by the test vehicle, which has a top speed of 20 MPH. Autonomous navigation was one domain in which traditional vision researchers were initially skeptical that artificial neural networks would work at all, to say nothing of work as well or better than other systems in a wider variety of situations. --Dean Pomerleau, D.A. (1989) Neural network based autonomous navigation. In Vision and Navigation: The CMU Navlab. Charles Thorpe, (Ed.) Kluwer Academic Publishers. Pomerleau, D.A. (1989) ALVINN: An Autonomous Land Vehicle In a Neural Network, Advances in Neural Information Processing Systems, Vol. 1, D.S. Touretzky (ed.), Morgan Kaufmann. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Subject: Last Call for Papers for AGARD Conference We are extending the deadline for the abstracts for the papers to be presented at the AGARD conference until 21 September 1990. In case you have lost the Call for Papers, it is again attached to this message. Your consideration is greatly appreciated. --Dale AGARD ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT 7 RUE ANCELLE - 92200 NEUILLY-SUR-SEINE - FRANCE TELEPHONE: (1)47 38 5765 TELEX: 610176 AGARD TELEFAX: (1)47 38 57 99 AVP/46 2 APRIL 1990 CALL FOR PAPERS for the SPRING, 1991 AVIONICS PANEL SYMPOSIUM ON MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS to be held in LISBON, Portugal 13-16 May 1991 This meeting will be UNCLASSIFIED Abstracts must be received not later than 31 August 1990. Note: US & UK Authors must comply with National Clearance Procedures requirements for Abstracts and Papers. THEME MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS A large amount of research is being conducted to develop and apply Machine Intelligence (MI) technology to aerospace applications. Machine Intelligence research covers the technical areas under the headings of Artificial Intelligence, Expert Systems, Knowledge Representation, Neural Networks and Machine Learning. This list is not all inclusive. It has been suggested that this research will dramatically alter the design of aerospace electronics systems because MI technology enables automatic or semi-automatic operation and control. 
Some of the application areas where MI is being considered inlcude sensor cueing, data and information fusion, command/control/communications/intelligence, navigation and guidance, pilot aiding, spacecraft and launch operations, and logistics support for aerospace electronics. For many routine jobs, it appears that MI systems would provide screened and processed ata as well as recommended courses of action to human operators. MI technology will enable electronics systems or subsystems which adapt or correct for errors and many of the paradigms have parallel implementation or use intelligent algorithms to increase the speed of response to near real time. With all of the interest in MI research and the desire to expedite transition of the technology, it is appropriate to organize a symposium to present the results of efforts applying MI technology to aerospace electronics applications. The symposium will focus on applications research and development to determine the types of MI paradigms which are best suited to the wide variety of aerospace electronics applications. The symposium will be organizaed into separate sessions for the various aerospace electronics application areas. It is tentatively proposed that the sessions be organized as follows: SESSION 1 - Offensive System Electronics (fire control systems, sensor cueing and control, signal/data/information fusion, machine vision, etc.) SESSION 2 - Defensive System electronics (electronic counter measures, radar warning receivers, countermeasure resource management, situation awareness, fusion, etc.) SESSION 3 - Command/Control/Communications/Intelligence - C3I (sensor control, signal/data/information fusion, etc.) SESSION 4 - Navigation System Electronics (data filtering, sensor cueing and control, etc.) SESSION 5 - Space Operations (launch and orbital) SESSION 6 - Logistic Systems to Support Aerospace Electronics (on and off-board systems, embedded training, diagnostics and prognostics, etc.) GENERAL INFORMATION This Meeting, supported by the Avionics Panel will be held in Lisbon, Portugal on 13-16 May 1991. It is expected that 30 to 40 papers will be presented. Each author will normally have 20 minutes for presentation and 10 minutes for questions and discussions. Equipment will be available for projection of viewgraph transparencies, 35 mm slides, and 16 mm films. The audience will include Members of the Avionics Panel and 150 to 200 invited experts from the NATO nations. Attendance at AGARD Meetings is by invitation only from an AGARD National Delegate or Panel Member. Final manuscripts should be limited to no more than 16 pages including figures. Presentations at the meeting should be an extract of the final manuscript and not a reading of it. Complete instructions will be sent to authors of papers selected by the Technical Programme Committee. Authors submitting abstracts should insure that financial support for attendance at the meeting will be available. CLASSIFICATION This meeting will be UNCLASSIFIED LANGUAGES Papers may be written and presented either in English or French. Simultanewous interpretation will be provided between these two languages at all sessions. A copy of your prepared remarks (Oral Presentation) and visual aids should be provided to the AGARD staff at least one month prior to the meeting date. This procedure will ensure correct interpretation of your spoken words. 
ABSTRACTS Abstracts of papers offered for this Symposium are now invited and should conform with the following instructions: LENGTH: 200 to 500 words CONTENT: Scope of the Contribution & Relevance to the Meeting - Your abstract should fully represent your contribution SUMITTAL: To the Technical Programme committee by all authors (US authors must comply with Attachment 1) IDENTIFICATION: Author Information Form (Attachment 2) must be provided with you abstract CLASSIFICATION: Abstracts must be unclassified Your abstracts and Attachment 2 should be mailed in time to reach all members of the Technical Program Committee, and the Executive not later than 31 AUGUST 1990 (Note the exception for the US Authors). This date is important and must be met to ensure that your paper is considered. Abstracts should be submitted in the format shown on the reverse of this page. TITLE OF PAPER Name of Author Organization or Company Affiliation Address Name of Co-Author Organization or Company Affiliation Address The test of your ABSTRACT should start on this line. PUBLICATIONS The proceedings of this meeting will be published in a single volume Conference Proceedings. The Conference Proceedings will include the papers which are presented at the meeting, the questions/discussion following each presentation, and a Technical Evaluation Report of the meeting. It should be noted that AGARD reserves the right to print in the Conference Proceedings any paper or material presented at the Meeting. The Conference Proceedings will be sent to the printer on or about July 1990. NOTE: Authors that fail to provide the required Camera-Ready manuscript by this date may not be published. QUESTIONS concerning the technical programme should be addressed to the Technical Programme Committee. Administrative questions should be sent directly to the Avionics Panel Executive. GENERAL SCHEDULE (Note: Exception for US Authors) EVENT DEADLINE SUBMIT AUTHOR INFORMATION FORM 31 AUG 90 SUBMIT ABSTRACT 31 AUG 90 PROGRAMME COMMITTEE SELECTION OF PAPERS 1 OCT 90 NOTIFICATION OF AUTHORS OCT 90 RETURN AUTHOR REPLY FORM TO AGARD IMMEDIATELY START PUBLICATION/PRESENTATION CLEARANCE PROCEDURE UPON NOTIFICATION AGARD INSTRUCTIONS WILL BE SENT TO CONTRIBUTORS OCT 90 MEETING ANNOUNCEMENT WILL BE PUBLISHED IN JAN 91 SUBMIT CAMERA-READY MANUSCRIPT AND PUBLICATION/ PRESENTATION CLEARANCE CERTIFICATE to arrive at AGARD by 15 MAR 91 SEND ORAL PRESENTATION AND COPIES OF VISUAL AIDS TO THE AVIONICS PANEL EXECUTIVE to arrive at AGARD by 19 APR 91 ALL PAPERS TO BE PRESENTED 13-16 MAY 91 TECHNICAL PROGRAMME COMMITTEE CHAIRMAN Dr Charles H. KRUEGER Jr Director, Systems Avionics Division Wright Research and Development Center (AFSC), ATTN: AAA Wright Patterson Air Force Base Dayton, OH 45433, USA Telephone: (513) 255-5218 Telefax: (513) 476-4020 Mr John J. BART Prof Dr A. Nejat INCE Technical Director, Directorate Burumcuk sokak 7/10 of Reliability & Compatibility P.K. 8 Rome Air Development Center (AFSC) 06752 MALTEPE, ANKARA GRIFFISS AFB, NY 13441 Turkey USA Mr J.M. BRICE Mr Edward M. LASSITER Directeur Technique Vice President THOMSON TMS Space Flight Ops Program Group B.P. 123 P.O. Box 92957 38521 SAINT EGREVE CEDEX LOS ANGELES, CA 90009-2957 France USA Mr L.L. DOPPING-HEPENSTAL Eng. Jose M.B.G. MASCARENHAS Head of Systems Development C-924 BRITISH AEROSPACE PLC, C/O CINCIBERLANT HQ Military Aircraft Limited 2780 OEIRAS WARTON AERODROME Portugal PRESTEN, LANCS PR4 1AX United Kingdom Mr J. 
DOREY Mr Dale NELSON Directeur des Etudes & Syntheses Wright Research & Development Center O.N.E.R.A. ATTN: AAAT 29 Av. de la Division Leclerc Wright Patterson AFB 92320 CHATILLON CEDEX Dayton, OH 45433 France USA Mr David V. GAGGIN Ir. H.A.T. TIMMERS Director Head, Electronics Department U.S. Army Avionics R&D Activity National Aerospace Laboratory ATTN: SAVAA-D P.O. Box 90502 FT MONMOUTH, NJ 07703-5401 1006 BM Amsterdam USA Netherlands AVIONICS PANEL EXECUTIVE LTC James E. CLAY, US Army Telephone Telex Telefax (33) (1) 47-38-57-65 610176 (33) (1) 47-38-57-99 MAILING ADDRESSES: From Europe and Canada From United States AGARD AGARD ATTN: AVIONICS PANEL ATTN: AVIONICS PANEL 7, rue Ancelle APO NY 09777 92200 Neuilly-sur-Seine France ATTACHMENT 1 FOR US AUTHORS ONLY 1. Authors of US papers involving work performed or sponsored by a US Government Agency must receive clearance from their sponsoring agency. These authors should allow at least six weeks for clearance from their sponsoring agency. Abstracts, notices of clearance by sponsoring agencies, and Attachment 2 should be sent to Mr GAGGIN to arrive not later than 15 AUGUST 1990. 2. All other US authors should forward abstracts and Attachment 2 to Mr GAGGIN to arrive before 31 JULY 1990. These contributors should include the following statements in the cover letter: A. The work described was not performed under sponsorship of a US Government Agency. B. The abstract is technically correct. C. The abstract is unclassified. D. The abstract does not violate any proprietary rights. 3. US authors should send their abstracts to Mr GAGGIn and Dr KRUEGER only. Abstracts should NOT be sent to non-US members of the Technical Programme Committee or the Avionics Panel Executive. ABSTRACTS OF PAPERS FROM US AUTHORS CAN ONLY BE SENT TO: Mr David V. GAGGIN and Dr Charles H. KRUEGER Jr Director Director, Avionics Systems Div Avionics Research & Dev Activity Wright Research & Dev Center ATTN: SAVAA-D ATTN: WRDC/AAA Ft Monmouth, NJ 07703-5401 Wright Patterson AFB Dayton, OH 45433 Telephone: (201) 544-4851 Telephone: (513) 255-5218 or AUTOVON: 995-4851 4. US authors should send the Author Information Form (Attachment 2) to the Avionics Panel Executive, Mr GAGGIN, Dr KRUEGER, and each Technical Programme Committee Member, to meet the above deadlines. 5. Authors selected from the United States are remined that their full papers must be cleared by an authorized national clearance office before they can be forwarded to AGARD. Clearance procedures should be started at least 12 weeks before the paper is to be mailed to AGARD. Mr GAGGIN will provide additional information at the appropriate time. AUTHOR INFORMATION FORM FOR AUTHORS SUBMITTING AN ABSTRACT FOR THE AVIONICS PANEL SYMPOSIUM on MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS INSTRUCTIONS 1. Authors should complete this form and send a copy to the Avionics Panel Executive and all Technical Program Committee members by 31 AUGUST 1990. 2. Attach a copy of your abstract to these forms before they are mailed. US Authors must comply with ATTACHMENT 1 requirements. a. Probable Title Paper: ____________________________________________ _______________________________________________________________________ b. Paper most appropriate for Session # ______________________________ c. Full Name of Author to be listed first on Programmee, including Courtesy Title, First Name and/or Initials, Last Name & Nationality. d. 
Name of Organization or Activity: _________________________________ _______________________________________________________________________ e. Address for Return Correspondence: Telephone Number: __________________________________ ____________________ __________________________________ Telefax Number: __________________________________ ____________________ __________________________________ Telex Number: __________________________________ ____________________ f. Names of Co-Authors including Courtesy Titles, First Name and/or Initials, Last Name, their Organization, and their nationality. ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ __________ ____________________ Date Signature DUE NOT LATER THAN 15 AUGUST 1990 From Harry.Printz at IUS1.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Harry.Printz at IUS1.CS.CMU.EDU (Harry.Printz@IUS1.CS.CMU.EDU) Date: Tue, 4 Sep 1990 16:02-EDT Subject: Tech Report Announcement Message-ID: <652478569/hwp@IUS1.CS.CMU.EDU> ************** PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS **************** Foundations of a Computational Theory of Catecholamine Effects Harry Printz and David Servan-Schreiber CMU-CS-90-105 This report provides the mathematical foundation of a theory of catecholamine effects upon human signal detection abilities, as developed in a companion paper[*]. We argue that the performance-enhancing effects of catecholamines are a consequence of improved rejection of internal noise within the brain. To support this claim, we develop a neural network model of signal detection. In this model, the release of a catecholamine is treated as a change in the gain of a neuron's activation function. We prove three theorems about this model. The first theorem asserts that in the case of a network that contains only one unit, changing its gain cannot improve the network's signal detection performance. The second shows that if the network contains enough units connected in parallel, and if their inputs satisfy certain conditions, then uniformly increasing the gain of all units does improve performance. The third says that in a network where the output of one unit is the input to another, under suitable assumptions about the presence of noise along this pathway, increasing the gain improves performance. We discuss the significance of these theorems, and the magnitude of the effects that they predict. The report includes numerical examples and numerous figures, and intuitive explanations for each of the formal results. The proofs are presented in separate sections, which can be omitted from reading without loss of continuity. [*] Servan-Schreiber D, Printz H, Cohen JD. "A network model of catecholamine effects: gain, signal-to-noise ratio, and behavior," Science, 249:892-895 (August 24, 1990) You can obtain this technical report by sending email to copetas at CS.CMU.EDU, or physical mail to Catherine Copetas School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Ask for report number CMU-CS-90-105. There is no charge for this report. Please do not reply to this message. 
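The gain manipulation described in the abstract above can be sketched in a few lines of Python: catecholamine release is modeled as a multiplicative gain on a logistic activation, and detection performance can then be estimated by Monte Carlo for a single unit versus a pool of units that each receive an independently noisy copy of the stimulus. The noise model, parameter values, and the ROC-area measure below are illustrative assumptions, not the formulation actually proved about in the report.
----------------
import math, random

def act(x, gain):
    # Logistic activation with multiplicative gain -- the manipulation the
    # abstract identifies with catecholamine release (toy parameters).
    return 1.0 / (1.0 + math.exp(-gain * x))

def roc_area(signal_mean, n_units, gain, noise_sd=1.0, trials=5000):
    # Monte-Carlo area under the ROC curve for a detector that averages the
    # outputs of n_units units, each seeing the stimulus plus independent
    # Gaussian input noise.  0.5 is chance, 1.0 is perfect detection.
    def pooled(stimulus):
        outs = [act(stimulus + random.gauss(0.0, noise_sd), gain)
                for _ in range(n_units)]
        return sum(outs) / n_units
    signal = [pooled(signal_mean) for _ in range(trials)]
    noise = [pooled(0.0) for _ in range(trials)]
    wins = sum(s > n for s, n in zip(signal, noise))
    ties = sum(s == n for s, n in zip(signal, noise))
    return (wins + 0.5 * ties) / trials
----------------
Comparing, say, roc_area(0.5, 1, gain=1.0) against roc_area(0.5, 1, gain=5.0), and the same pair with n_units=20, gives a rough feel for the single-unit-versus-pool contrast that the theorems in the report address.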
************** PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS **************** From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: usage is more correct than the first. I think we need a better term for describing the kind of locality exhibited by RBF networks. --Tom From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: That tried and tried to write limericks. They didn't really rhyme, And were short one line. (But weren't entirely self-referential.) -- Doug Blank ----------------------------------------------------- A couple with bad information Got into a tight situation. They said, "We don't see --- It just cannot be We only had backpropagation!" -- Mark Weaver ----------------------------------------------------- To these still stuck in the old school, The rule is the ultimate tool. But connectionists know (The others are slow) That the rule is a tool for a fool! -- Tim van Gelder ----------------------------------------------------- These guys Fodor and Pylyshyn Came to town with a mission. But they got in a jam When faced off with RAAM Instead they should just have gone fishin'. -- Dave Chalmers ----------------------------------------------------- Said a network that was rather horny To another, "This might sound too corny, Of all of my mates, You've got the best weights, Let's optimize them until morning." -- Paul Munro ----------------------------------------------------- Once quoted a young naive RAAM "In my 3-2-3 mind I can cram All of human cognition With minor omission" But it's not that much better than SPAM. -- Devin McAuley and Gary McGraw ----------------------------------------------------- In Bloomington, geniuses gather Working a connectionist lather Did they solve the world's woes, Or compound them -- who knows? It's not a decidable matter. -- Dave Touretzky ----------------------------------------------------- "Backprop will usually converge," Rumelhart and Hinton observe. "When some people say `Brains don't work that way' We just smile and flip them the bird!" -- Pete Angeline and Viet-Anh Nguyen ----------------------------------------------------- There once was a recurrent node Who felt he was ready to explode Without stimulation Only self-excitation He swelled up and then shot his load. -- Devin McAuley and Gary McGraw ----------------------------------------------------- Pollack has made this admission Of his neural net's true composition: "I recursively RAAM it With symbols, Goddamnit! So don't pay no mind to Pylyshyn." -- Dave Touretzky ----------------------------------------------------- Minsky and Papert were cruel Which gave the symbolists fuel. Connectionists waited Till they backpropagated And now neural networks are cool. -- Paul Munro ----------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Acquiring Verb Morphology in Children and Connectionist Nets 201 K. Plunkett, V. Marchman, and S.L. Knudsen Parallel Mapping Circuitry in a Phonological Model 220 D.S. Touretzky A Modular Neural Network Model of Attentional Requirements in Sequence Learning 228 P.G. Schyns A Computational Model of Attentional Requirements in Sequence Learning 236 P.J. Jennings and S.W. 
Keele Recall of Sequences of Items by a Neural Network 243 S. Nolfi, D. Parisi, G. Vallar, and C. Burani Binding, Episodic Short-Term Memory, and Selective Attention, Or Why are PDP Models Poor at Symbol Manipulation? 253 R. Goebel Analogical Retrieval Within a Hybrid Spreading-Activation Network 265 T.E. Lange, E.R. Melz, C.M. Wharton, and K.J. Holyoak Appropriate Uses of Hybrid Systems 277 D.E. Rose Cognitive Map Construction and Use: A Parallel Distributed Processing Approach 287 R.L. Chrisley PART VIII SPEECH AND VISION Unsupervised Discovery of Speech Segments Using Recurrent Networks 303 A. Doutiraux and D. Zipser Feature Extraction Using an Unsupervised Neural Network 310 N. Intrator Motor Control for Speech Skills: A Connectionist Approach 319 R. Laboissiere, J-L. Schwartz, and G. Bailly Extracting Features From Faces Using Compression Networks: Face, Identity, Emotion, and Gender Recognition Using Holons328 G.W. Cottrell The Development of Topography and Ocular Dominance 338 G.J. Goodhill On Modeling Some Aspects of Higher Level Vision 350 D. Bennett PART IX BIOLOGY Modeling Cortical Area 7a Using Stochastic Real-Valued (SRV) Units 363 V. Gullapalli Neuronal Signal Strength is Enhanced by Rhythmic Firing 369 A. Heirich and C. Koch PART X VLSI IMPLEMENTATION An Analog VLSI Neural Network Cocktail Party Processor 379 A. Heirich, S. Watkins, M. Alston, P. Chau A VLSI Neural Network with On-Chip Learning 387 S.P. Day and D.S. Camporese Index 401 _________________________________________________________________ Ordering Information: Price is $29.95. Shipping is available at cost, plus a nominal handling fee: In the U.S. and Canada, please add $3.50 for the first book and $2.50 for each additional for surface shipping; for surface shipments to all other areas, please add $6.50 for the first book and $3.50 for each additional book. Air shipment available outside North America for $45.00 on the first book, and $25.00 on each additional book. Master Card, Visa and personal checks drawn on US banks accepted. MORGAN KAUFMANN PUBLISHERS, INC. Department B2 2929 Campus Drive, Suite 260 San Mateo, CA 94403 USA Phone: (800) 745-7323 (in North America) (415) 578-9928 Fax: (415) 578-0672 email: morgan at unix.sri.com From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From Barak.Pearlmutter at F.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: Thu, 13 Dec 1990 11:51-EST Subject: tr announcement: CMU-CS-90-196 Message-ID: <661107088/bap@F.GP.CS.CMU.EDU> *** Please do not forward to other mailing lists or digests. 
*** The following 30 page technical report is now available. It can be FTPed from the neuroprose archives at OSU, under the name pearlmutter.dynets.ps.Z, as shown below, which is the preferred mode of acquisition, or can be ordered by sending a note to School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 USA along with a check for $2 (domestic) or $5 (outside the USA) to help defray the expense of reproduction and mailing. ---------------- Dynamic Recurrent Neural Networks Barak A. Pearlmutter December 1990 CMU-CS-90-196 (supersedes CMU-CS-88-191) We survey learning algorithms for recurrent neural networks with hidden units and attempt to put the various techniques into a common framework. We discuss fixpoint learning algorithms, namely recurrent backpropagation and deterministic Boltzmann Machines, and non-fixpoint algorithms, namely backpropagation through time, Elman's history cutoff nets, and Jordan's output feedback architecture. Forward propagation, an online technique that uses adjoint equations, is also discussed. In many cases, the unified presentation leads to generalizations of various sorts. Some simulations are presented, and at the end, issues of computational complexity are addressed. ---------------- FTP Instructions: ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) Name: anonymous Password: state-your-name-please ftp> cd pub/neuroprose ftp> get pearlmutter.dynets.ps.Z 300374 bytes sent in 9.9 seconds (26 Kbytes/s) ftp> quit unix> zcat pearlmutter.dynets.ps.Z | lpr Unlike some files in the archive, the postscript file has been tested and will print properly on printers without much memory. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Chapter 1 of A New Approach to Pattern Recognition, in Progress in Pattern Recognition 2, eds. L.N.Kanal and A.Rosenfeld, North-Holland, 1985) that the adoption of the VECTOR REPRESENTATION for the objects severely limits the number of the above similarity fields that can be induced naturally in the set of objects. At the same time, it is also useful to remember that the limitations imposed by the vector representation were sufficient to justify the rift between AI and pattern recognition (this is not to say that I am condoning this rift, which was also "politically" motivated). It is not difficult to understand why vector representation is not sufficiently flexible: all features are rigidly fixed, quantitative, and their interrelations are not represented. In reality, the useful features and their relations must emerge dynamically during the learning processes. "Symbolic" representations such as strings, graphs, etc. are more satisfactory from that point of view. Thus, although the NN after the learning process can induce some similarity field in the set of patterns, its capacity to generate various similarity fields is SEVERELY RESTRICTED by the very form of the pattern (object) representation. Furthermore, adopting a new, more dynamic framework for the NN (dynamic NNs) will solve only a small part of the above representational problem. The issue of representation has received considerable attention in computer science, but it appears that people trained in other fields may not fully appreciate its role and importance.
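A toy illustration of the representational point being made here: with a fixed feature-vector representation, the only similarity one can compute is over a rigid, pre-chosen list of quantitative features, whereas a string representation supports measures such as edit distance, in which the relevant correspondences (insertions, repetitions) emerge from the comparison itself. The two Python functions below are standard textbook constructions, not Goldfarb's formal apparatus.
----------------
def euclidean(u, v):
    # Similarity over a fixed feature vector: every object is forced into
    # the same rigid list of quantitative features.
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5

def edit_distance(s, t):
    # Levenshtein distance over strings: which symbols matter and how
    # repetitions line up emerges from the comparison itself.
    prev = list(range(len(t) + 1))
    for i, a in enumerate(s, 1):
        cur = [i]
        for j, b in enumerate(t, 1):
            cur.append(min(prev[j] + 1,              # delete a
                           cur[j - 1] + 1,           # insert b
                           prev[j - 1] + (a != b)))  # substitute
        prev = cur
    return prev[-1]

# edit_distance("abcabc", "abcabcabc") == 3: the repetition structure is
# visible to the string metric but has no fixed slot in a feature vector.
----------------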
-- Lev Goldfarb From zoran at theory.cs.psu.edu Mon Jun 5 16:42:55 2006 From: zoran at theory.cs.psu.edu (Zoran Obradovic) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: IJCNN-91-Seattle Message-ID: Thanks for the complete answer to my question. You can certainly copy my mail and post this to the net. Regards, Zoran See you at IJCNN, folks! Don From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Tue 26 Feb 91 22:54:02-PST Subject: Computists International Message-ID: <667637642.0.LAWS@AI.SRI.COM> *** PLEASE POST *** This is to announce Computists International, a new "networking" association for computer and information scientists. Hi! I'm Ken Laws If this announcement interests you, contact me at internet address laws at ai.sri.com. If you can't get through, my mail address is: Dr. Kenneth I. Laws; 4064 Sutherland Drive, Palo Alto, CA 94303; daytime phone (415) 493-7390. I'm back from two years at the National Science Foundation. I used to run AIList, and I miss it. Now I'm creating a broader service for anyone interested in information (or knowledge), software, databases, algorithms, or doing neat new things with computers. It's a career-oriented association for mutual mentoring about grant and funding sources, information channels, text and software publishing, tenure, career moves, institutions, consulting, business practices, home offices, software packages, taxes, entrepreneurial concerns, and the sociology of work. We can talk about algorithms, too, with a focus on applications. Toward that end, I'm going to edit and publish a weekly+ newsletter, The Computists' Communique. The Communique will be tightly edited, with carefully condensed news and commentary. Content will depend on your contributions, but I will filter, summarize, and generally act like an advice columnist. (Ann Landers?) I'll also suggest lines of discussion, collect "common knowledge" about academia and industry, and help track people and projects. As a bonus, I'll give members whatever behind-the-scenes career help I can. Alas, this won't be free. The charter membership fee for Computists will depend in part on how many people respond to this notice. The Communique itself will be free to all members, FOB Palo Alto; internet delivery incurs no additional charge. To encourage participation, there's a full money-back guarantee (excluding postage). Send me a reply to find out more. -- Ken Computists International and The Computists' Communique are service marks of Kenneth I. Laws. Membership in professional organizations may be a tax-deductible business expense. ------- From Xuedong.Huang at SPEECH2.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Xuedong.Huang at SPEECH2.CS.CMU.EDU (Xuedong.Huang@SPEECH2.CS.CMU.EDU) Date: Tue, 28 May 1991 21:10-EDT Subject: a new book on speech recognition Message-ID: <675479427/xdh@SPEECH2.CS.CMU.EDU> New Book in the Edinburgh Information Technology Series (EDITS 7) ================================================================= X.D. Huang, Y. Ariki, and M. Jack: "Hidden Markov Models for Speech Recognition", Edinburgh University Press, 1990, 30 Pounds. (ISBN 0 7486 0162 7). "Despite the fact that the hidden Markov model approach to speech recognition is now considered a mature technology, there are very few textbooks which cover the subject in any depth. This new addition to the Edinburgh EDITS series is therefore very welcome. ... 
I know of no other comparable work and it is therefore a timely and useful addition to the literature" -- Book review, Computer Speech and Language To order, contact Edinburgh University Press. For more information, contact xdh at speech2.cs.cmu.edu. From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Thu 6 Jun 91 22:02:27-PDT Subject: Distributed Representations Message-ID: <676270947.0.LAWS@AI.SRI.COM> I'm not sure this is the same concept, but there were several papers at the last IJCAI showing that neural networks worked better than decision trees. The reason seemed to be that neural decisions depend on all the data all the time, whereas local decisions use only part of the data at one time. I've never put much stock in the military reliability claims. A bullet through the chip or its power supply will be a real challenge. Noise tolerance is important, though, and I suspect that neural systems really are more tolerant. Terry Sejnowski's original NETtalk work has always bothered me. He used a neural network to set up a mapping from an input bit string to 27 output bits, if I recall. I have never seen a "control" experiment showing similar results for 27 separate discriminant analyses, or for a single multivariate discriminant. I suspect that the results would be far better. The wonder of the net was not that it worked so well, but that it worked at all. I have come to believe strongly in "coarse-coded" representations, which are somewhat distributed. (I have no insight as to whether fully distributed representations might be even better. I suspect that their power is similar to adding quadratic and higher-order terms to a standard statistical model.) The real win in coarse coding occurs if the structure of the code models structure in the data source (or perhaps in the problem to be solved). -- Ken Laws ------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: In this thought-provoking volume, George Kampis argues (among other things) that the Turing-Church Thesis is false, at least for the kinds of physical systems that concern developmental biologists, cognitive scientists, economists, and others of that ilk. [...] This book represents an exciting point of departure from ho-hum traditional works on the philosophy of modeling, especially noteworthy being the fact that instead of offering mere complaints against the status quo, Kampis also provides a paradigm holding out the promise of including both the classical systems of the physicist and engineer and the neoclassical processes of the biologist and psychologist under a single umbrella. As such, the ideas in this pioneering book merit the attention of all philosophers and scientists concerned with the way we create reality in our mathematical representations of the world and the connection those representations have with the way things "truly are". How to order if interested: Order from Pergamon Press plc, Headington Hill Hall, Oxford OX3 0BW, England or a local Pergamon office ISBN 0-08-0369790 100 USD/ 50 pounds sterling Hotline Service: USA (800) 257 5755 elsewhere (+44) 865 743685 FAX (+44) 865 743946 **************************************************************** 2.
Forthcoming: A SPECIAL ISSUE ON EMERGENCE AND CREATIVITY It's a Special Issue of ************************************************************** * World Futures: The Journal of General Evolution * * (Gordon & Breach), to appear August 1991 * * * * Guest Editor: G. Kampis * * * * Title: Creative Evolution in Nature, Mind and Society * ************************************************************** Individual copies will be available (hopefully), at a special rate (under negotiation). List of contents: Kampis, G. Foreword Rustler, E.O. "On Bodyism" (Report 8-80 hta 372) Salthe, S. Varieties of Emergence Csanyi, V. Societal Creativity Kampis, G. Emergent Computations, Life and Cognition Cariani, P. Adaptivity and Emergence in Organisms and Devices Fernandez,J., Moreno,A. and Etxeberria, A. Life as Emergence Heylighen, F. Modelling Emergence Tsuda, I. Chaotic Itinerancy as a Dynamical Basis for Hermeneutics in Brain and Mind Requardt, M. Godel, Turing, Chaitin and the Question of Emergence as a Meta-Principle of Modern Physics. Some Arguments Against Reductionism **************************************************************** 3. preprint from the Special Issue on Emergence EMERGENT COMPUTATIONS, LIFE AND COGNITION by George Kampis Evolutionary Systems Group, Dept. of Ethology L. Eotvos University of Budapest, Hungary and Department of Theoretical Chemistry University of Tubingen, D-7400 Tubingen, FRG ABSTRACT This is a non-technical text discussing general ideas of information generation. A model for emergent processes is given. Emergence is described in accordance with A.N. Whitehead's theory of 'process'. The role of distinctions and variable/observable definitions is discussed. As applications, parallel computations, evolution, and 'component-systems' are discussed with respect to their ability to realize emergence. KEYWORDS: emergence, distinctions, modeling, information set, evolution, cognitive science, theory of computations. AVAILABLE from the author at h1201kam at ella.hu or h1201kam at ella.uucp or by mail at H-1122 Budapest Maros u. 27. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: to distribute this given datum (association vector or whatever) over my representational units so that I can recover it from a partial stimulus. The issue of how the given datum itself is represented is obviously *very* important --- no quarrel on that --- but the question of "internal representations" (as Bo Xu so appropriately calls them) seems more immediate from a connectionist point of view because it relates *directly* to the problem of learning. As we all know, learning from a finite data set is ill-posed and, even with a fixed network topology, can (and will) produce multiple "equally good" solutions. Unlike Bo Xu, I am not at all convinced that "most of the current networks' topology ensures that the internal representations are mixed distributed representations" --- at least not "optimally" so. For the last year and some, I've been working on the problem of classifying "equally good" network solutions to approximation problems by their ability to withstand internal perturbations gracefully. I have found that, while a large enough network often does distribute responsibility, there is some considerable variation from net to net. 
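For concreteness, here is a toy sketch of the kind of lesion test I have in mind; the weights are random stand-ins rather than a trained network, and the sizes and names are arbitrary:

import numpy as np

rng = np.random.default_rng(0)

def forward(x, W1, b1, W2, b2, mask=None):
    # two-layer sigmoid net; `mask` silences selected hidden units
    h = 1.0 / (1.0 + np.exp(-(x @ W1 + b1)))
    if mask is not None:
        h = h * mask
    return h @ W2 + b2

n_in, n_hid, n_out = 8, 16, 3
W1, b1 = rng.normal(size=(n_in, n_hid)), np.zeros(n_hid)
W2, b2 = rng.normal(size=(n_hid, n_out)), np.zeros(n_out)
X = rng.normal(size=(200, n_in))              # probe inputs

baseline = forward(X, W1, b1, W2, b2)
damage = []
for j in range(n_hid):                        # knock out one hidden unit at a time
    mask = np.ones(n_hid)
    mask[j] = 0.0
    lesioned = forward(X, W1, b1, W2, b2, mask)
    damage.append(np.mean((lesioned - baseline) ** 2))

print("mean damage per single-unit lesion: %.4f" % np.mean(damage))
print("worst single-unit lesion:           %.4f" % np.max(damage))

Networks whose worst-case lesion damage stays close to the mean are the ones I would call well-distributed in the sense above.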
My ideal distributed representation would be one that is minimally degraded by the malfunction of individual representative units (weights and neurons) so that it could withstand the maximum trauma better than any other network in its class. Of course, this is a theoretical ideal and the order of the effects I am talking about is insane. However, I think that the internal interactions on which this characterization depends are amenable to relatively simple empirical analysis (!) under simplifying assumptions, and are a "black box" only with respect to exact analysis. In any case, even if they were a black box, the characterization would still be applicable --- we just wouldn't be able to use it. In effect, what I am advocating is already present in most estimation methods under the guise of regularization. An interesting contrast, however, exists with regard to the various "pruning" algorithms used to improve generalization. If things go well and they succeed (most of them do, I think), then the networks they produce have a near-minimal number of representational units, across which the various associations are quite well-distributed. However, precisely because of their minimality, each representational unit has acquired maximal internal relevance, and is now minimally dispensable. Had there been no pruning, and some other method had been used to force well-distributedness, I think that good generalization could have been obtained without losing robustness. In effect, I am saying that instead of using a minimal number of resources as much as possible, we could use all the available resources as little as possible. Both create highly distributed representations, but the latter does so without losing any robustness (indeed, maximizing it). >I want to thank Ali Minai for his comments. All of his comments are very >valuable and thought-stimulating. Ditto for Bo Xu. I've been enjoying this discussion very much. Ali Minai University of Virginia aam9n at Virginia.EDU From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Fri 19 Jul 91 09:24:45-PDT Subject: Simple pictures, tough problems. In-Reply-To: <9107191052.AA11204@uk.ac.stir.cs.nevis> Message-ID: <679940685.0.LAWS@AI.SRI.COM> > This does not leave very much time for any pretty > hierarchies and feedback loops. Feedback loops are not necessarily slow. Analog computers can be much faster than digital ones for many tasks, and I think we're beginning to understand neural bundles in analog terms. -- Ken Laws ------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: the learner can manipulate the environment are important for the survival of the learner. The repertoire of manipulations will (should?) bias the learner towards discovering properties that are invariant under those manipulations. Therefore, the learner will tend to learn concepts that are relevant to the tasks that it can perform and its survival. Disclaimer: I don't know anything about symbol grounding. I am actually working on analogical inference - but I have a nasty feeling that if I ever get half way towards having an analogical inference net I will have to know about symbol grounding to train and test it. Ross Gayler ross at psych.psy.uq.oz.au From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: memories.
But neither of these viewpoints constitutes AI in any real sense. There is a famous saying by Nietzsche that might be adapted to describe the current status of neural networks: Machines "do not become thinkers simply because their memories are too good." Yet there are other aspects of neural networks that have been extremely important. Thus the structural paradigm is of obvious value to the neurophysiologist, the cognitive scientist, and the vision researcher. It would be of value to the computer science community if Information Sciences were to review and critique the original promise of neurocomputing in the light of developments in the past few years. The Special Issue of Information Sciences will do just this. It will provide reviews of this link between neural networks and AI. In other words, the scope of this Issue is much broader than that of the most commonly encountered applications of associative memories or mapping networks. The application areas that the Issue will deal with include neural logic programming, feature detection, knowledge representation, search techniques, and learning. The connectionist approach to AI will be contrasted with the traditional symbolic techniques. Deadline for Submissions: September 30, 1991 Papers may be sent to: Subhash Kak Guest Editor, Information Sciences Department of Electrical and Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Tel: (504) 388-5552 E-mail: kak at max.ee.lsu.edu From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: The University reserves the right not to proceed with any appointment for financial or other reasons. Equal Opportunity is University Policy. From Xuedong.Huang at SPEECH2.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Xuedong.Huang at SPEECH2.CS.CMU.EDU (Xuedong.Huang@SPEECH2.CS.CMU.EDU) Date: Sun, 1 Sep 1991 19:23-EDT Subject: Processing of auditory sequences In-Reply-To: Scott_Fahlman's mail message of Sat, 31 Aug 91 10:24:21 -0400 Message-ID: <683767402/xdh@SPEECH2.CS.CMU.EDU> For the purpose of speech-compression, current technology using vector quantization can compress speech to 2k bits/s without much fidelity loss. Even lower rates (200-800 bits/s) also give acceptable intelligibility. Such coders can be found in many commercial applications. - Xuedong Huang From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Sun 1 Sep 91 23:03:42-PDT Subject: AI/IS/CS Career Newsletter Message-ID: <683791422.0.LAWS@AI.SRI.COM> The Computists' Communique is available at half the standard rate! Computists International is having a membership drive, and our weekly newsletter and discussion list are nearly 50% off through September 30. Reply for full details. -- Ken P.S. You get a no-risk guarantee, of course. -- Dr. Kenneth I. Laws 4064 Sutherland Drive, Palo Alto, CA 94303. laws at ai.sri.com, (415) 493-7390, 11am - 5pm. ------- From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Mon 2 Sep 91 21:03:05-PDT Subject: Apology Message-ID: <683870585.0.LAWS@AI.SRI.COM> I have just learned that I accidentally sent a newsletter offer to the Connectionists list last night. This was inappropriate, and I assure you that I did not mean to use the list as a broadcast medium. I will ensure that I do not make such a slip again.
-- Ken Laws ------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Neural networks can be used to realize DILATION and EROSION operations. Other than using backpropagation algorithm, they can be designed directly. You can see the paper written by Lippmann in IEEE ASSP Magazine, pp.4-22, April 1987. Lin Yin ++++++++++++++++++ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Juliette Mattioli, Michel Schmitt et al. in ICANN91 in Helsinki, Vol.1, pg I-117: Shape discrimination based in Mathematical Morphology and Neural Networks, and Vol 2, pg II-1045 Francois Vallet and Michel Schmitt, Network Configuration and Initialization using Mathematical Morphology: Theoretical Study of Measurement Functions. While above article do not talk about the operations you mentioned, I know that the author is working on these, i.e. Michel Schmitt; his address is: Thomson-CSF, Laboratoire Central de Recherche, F-91404 Orsay Cedex, France Konrad Weigl ++++++++++++++++++ Date: Wed, 28 Aug 91 09:49:01 +0200 From: toet at izf.tno.nl (Lex Toet) there is indeed very little literature on this topic. Some references that may be of use are : S.S. Wilson (1989) Vector morphology and iconic neural networks. IEEE Tr SMC 19, pp. 1636-1644. F.Y. Shih and Jenlong Moh (1989) Image morphological operations by neural circuits. In: IEEE 1989 Symposium on Circuits and Systems, pp. 774-777. M. Scmitt and F. Vallet (1991) Network configuration and initialization using mathematical morphology: theoretical study of measurement functions. In: Artificial Neural networks, T. Kohonen, M. Makisara, O. Simula and J. Kangas, eds. Elsevier Science Publishers B.V. , Amsterdam. ++++++++++++++++++ Date: Thu, 29 Aug 91 22:55:35 -0400 From: "Mark Schmalz" Re: your recent posting to comp.ai.vision -- obtain the recent papers on morphological neural nets by Ritter and Davidson, and Davidson and her students. Published in Proc. SPIE, the papers are indexed in the Computer and Control Abstracts, which you should have in your library. Copies may also be obtained by writing to: Center for Computer Vision Research Department of Computer and Information Science Attn: Dr. Joseph Wilson University of Florida Gainesville, FL 32611 The morpho. net computes over the ring (R,max,+) or (R,max,*), where R denotes the set of reals, max the maximum operation, and * multiplication. In contrast, the more usual McCullogh- Pitts net computes over (R,+,*). Thus, the morpho. net is inherently nonlinear. Additionally, numerous decompositions of the morphological operations into linear operations have been published. Casasent has recently published an interesting paper on the applications of optics to the morphological functions. His work on the hit-or-miss transform would be an interesting topic for neural net implementation. I suggest you obtain the SPIE Proceedings pertaining to the 1990 and 1991 Image Algebra conferences, presented at the San Diego Technical Symposium of SPIE (both years). Morphological image processing is included in the conference, and some good papers have appeared over the last two years. Mark Schmalz ++++++++++++++++++ Christoph Herwig Dept. 
of Electrical and Computer Engineering, Clemson University, SC From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: interpolation vs generalisation In-Reply-To: Message of Sat, 14 Sep 91 09:42:16 ADT from Message-ID: On Sat, 14 Sep 91 09:42:16 ADT Ross Gayler writes: > Analogical inference is a form of > generalisation that is performed on the basis of structural or > relational similarity rather than literal similarity. It is > generalisation, because it involves the application of knowledge from > previously encountered situations to a novel situation. However, the > interpolation does not occur in the space defined by the > input patterns, instead it occurs in the space describing the structural > relationships of the input tokens. The structural relationships > between any set of inputs are not necessarily fixed by those inputs, > but generated dynamically as an 'interpretation' that ties the inputs > to a context. There is an argument that analogical inference is the > basic mode of retrieval from memory, but most connectionist research > has focused on the degenerate case where the structural mapping is an > identity mapping - so the interest is focused on interpolation in the > input space instead of the structural representation space. > > In brief: Generalisation can occur without interpolation in a fixed > data space that you can observe, but it may involve interpolation > in some other space that is constructed internally and dynamically. I also believe that the above point is of critical importance: an intelligent system (at least if it is considered in the course of both micro- and macro- evolution) must have the capacity to generate new metrics based on the structural properties of the object classes. In fact, I find this capacity of an intelligent process to be so important, that I have suggested it to be the basic attribute of an intelligent process. To ensure the presence of this attribute, one can demand from the learning (test) environment some minimum requirement: "The requirement that I propose to adopt can be called structural unboundedness of the environment. Informally, an environment is called structurally unbounded if no finite set of "features", or parameters, is sufficient for specifying all classes of events in the environment." See the paper mentioned in one of my recent postings ("Verifiable characterization of an intelligent process"). If a proposed model can operate successfully in some such environments, then it deserves more serious consideration. -- Lev Goldfarb From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Thagard model. (1) It is not practical to build a new network on the fly for every problem encountered, especially when the mechanism that builds the networks sidesteps a lot of really hard problems by being fed input as neatly pre-digested symbolic structures. (2) The task of finding a mapping from a given base to a given target is a party trick. In real life you are presented with a target (a structure with some gaps) and you have to find the best base (or bases) in long-term memory to support the mapping that best fills the gaps. Another paper on connectionist analogical inference is: Halford, G.S., Wilson, W.H., Guo, J., Wiles, J., & Stewart, J.E.M. (In preparation). Connectionist implications for processing capacity limitations in analogies.
It wouldn't be proper for me to comment on that paper as it is in preparation. I am not aware of any other direct attempts at connectionist implementation of analogical inference (in the sense that interests me: general, dynamic, and practical), but I don't get much time to keep up with the literature and would be pleased to be corrected on this. - References to other work on analogical inference There is a large-ish literature on analogy in psychology, AI, and philosophy. In psychology try: Gentner, D. (1989) The mechanisms of analogical learning. In Vosniadou, S., & Ortony, A. (Eds.), Similarity and analogical reasoning (pp. 199-241). Cambridge: Cambridge University Press. In Artificial Intelligence try: Kedar-Cabelli, S. (1988). Analogy - from a unified perspective. In D.H. Helman (Ed.), Analogical reasoning (pp. 65-103). Dordrecht: Kluwer Academic Publishers. Sorry, I haven't followed the philosophy literature. - References to related connectionist work One of the really hard parts about trying to do connectionist analogical inference is that you need to be able to represent structures, and this is still a wide-open research area. A book that is worth looking at in this area is: Barnden, J.A., & Pollack, J.B. (Eds.) (1991). High-level connectionist models. Norwood NJ: Ablex. For specific techniques to represent structured data you might try to track down the work of Paul Smolensky, Jordan Pollack, and Tony Plate (to name a few). - Lonce Wyse (lwyse at park.bu.edu) says: LW> I was strongly advised against going for such a "high level" LW> cognitive phenomenon [on starting grad school]. I think that is good advice. Connectionist analogical inference is a *really hard* problem (at least if you are aiming for something realistically useful). The solution involves solving a bunch of other problems that are hard in their own right. Doctoral candidates and untenured academics can't afford the risk of attacking something like this because they have to crank out the publications. If you want to get into this area, either keep it as a hobby or carve out an extremely circumscribed subset of the problem (and lose the fun). - Lonce Wyse also says: LW> I think intermodal application of learning in neural networks LW> is a future hot topic. In classical symbolic AI the relationship between a concept and its corresponding symbol is arbitrary and the 'internal structure' of the symbol does not have any effect on the dynamics of processing. In a connectionist symbol processor the symbol<->referent relationship should still be arbitrary (because we need to be able to reconceive the same referent at whim) but the internal structure of the symbol (a vector) DOES affect the dynamics of processing. The tricky part is to pick a symbol that has the correct dynamic effect. The possibility that I am pursuing is to pick a pattern that is analogically consistent with what is already in Long-Term Memory. Extending the theme requires that the pattern be consistent with information about the same referent obtained via other sensory modes. Some while back I used up some bandwidth on the mailing list asking about the role of intermodal learning in symbol grounding. For my purposes the crucial aspect about symbol grounding is that it concerns linking a perceptual icon to a symbol with the desired symbolic linkages.
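As an aside on the structured-representation problem, here is a toy sketch in the spirit of Plate's convolution-based vectors; everything in it (vector sizes, role names) is invented for illustration, and it is not the scheme I am pursuing:

import numpy as np

rng = np.random.default_rng(1)
n = 1024

def bind(a, b):
    # circular convolution, done in the Fourier domain
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def unbind(a, c):
    # circular correlation: approximately inverts bind(a, .)
    return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(c)))

# random vectors standing for two roles and two fillers
agent, object_, dog, bone = (rng.normal(0.0, 1.0 / np.sqrt(n), n) for _ in range(4))

# a structured item stored as a single vector: agent=dog, object=bone
trace = bind(agent, dog) + bind(object_, bone)

# query the structure: who was the agent?
guess = unbind(agent, trace)
for name, v in [("dog", dog), ("bone", bone)]:
    print(name, "%.3f" % float(v @ guess))    # 'dog' should score highest

The point is only that the vector for a compound does have usable internal structure, yet the role and filler vectors themselves remain arbitrary.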
My intuitive belief is that a system with only one perceptual mode and no ability to interact with its environment can learn to approximate that environment but not to build a genuine model of the environment as distinct from the perceiver. So intermodal learning and the ability to interact with the environment are important. - In answer to my assertion that: "Generalisation can occur without interpolation in a data space that you can observe, but it may involve interpolation in some other space that is constructed internally and dynamically" - Lev Goldfarb (goldfarb at unbmvs1.csd.unb.ca ?) says LG> an *intelligent* system must have the capacity to generate new metrics LG> based on the structural properties of the object classes. - and Thomas Hildebrandt (thildebr at athos.csee.lehigh.edu) says TH> Generalization is interpolation in the *Right* space. Psychological scaling studies have shown that the similarity metric over a group of objects depends on the composition of the group. For example, the perceived similarity of an apple and an orange depends on whether they are with other fruit, or other roughly spherical objects. Mike Humphreys at the University of Queensland has stated that items in LTM are essentially orthogonal (and I am sure he would have the studies to back it up) The point is that the metric used to relate objects is induced by the demands of the current task. I like to think of all the items in LTM as (approximately) mutually orthogonal vectors in a very high dimensional space. The STM representation is a projection of the high-D space onto a low-D space of those LTM items that are currently active. The exact projection that is used is dependent on the task demands. Classic back-prop type connectionism attempts to generate a new representation on the hidden layer such that interpolation is also (correct) generalisation. This is done by learning the correct weights into the hidden layer. Unfortunately, the weights are essentially fixed with respect to the time scale of outside events. What is required for analogical inference (in my sense) is that the weights be dynamic and able to vary at least as fast as the environmental input. For this to have even a hope of working without degenerating into an unstable mess there must be lots of constraint: from the architecture, from cross-modal input and from previous knowledge in LTM. - Marek (marek at iuvax.cs.indiana.edu) says M> Would Pentti Kanerva's model of associative memory fit in with M> your definition of analogical inference? - and Geoff Hinton (geoff at ai.toronto.edu) says GH> It seems to me that my family trees example is exactly an example GH> of analogical mapping. Well, my memory of both is rather patchy, but I think not. At least, not in the sense that I am using analogical inference. The reason that I say this lies back in my previous paragraph. Connectionist work, to date, is very static: the net learns *a* mapping and then you use it. A net may learn to generalise on one problem domain in a way that looks like analogy, but I want it to be able to generalise to others on the fly. In order to perform true analogical inference the network must search in real-time for the correct transformation weights instead of learning them over an extended period. Hinton's 1981 network for assigning canonical object-based frames of reference is probably closer in spirit to analogical retrieval. In this model there are objects that must be recognised from arbitrary view-points. 
In this model the network settles simultaneously on the object class and the transformation that maps the perceptual image onto the object class. - Jim Franklin (jim at hydra.maths.unsw.oz.au) says JF> What is the 'argument that analogical inference is the basic mode of JF> retrieval from memory'? I thought I'd be able to slip that one by, but I forgot you were out there Jim. OK, here goes. There is a piece of paper you occasionally find tacked on the walls of labs that gives the translations for phrases used in scientific papers. Amongst others it contains: 'It is believed that ...' => 'In my last paper I said that ...' 'It is generally believed that ...' => 'I asked the person in the next office and she thought so too.' In other words, I can't quote you relevant papers but it appears to have some currency among my academic colleagues in psychology. As you would expect the belief is most strongly held by people who study analogy. People studying other phenomena generally try to structure their experiments so that analogical inference can't happen. The strongest support for the notion probably comes from natural language processing. The AI people have been stymied by unconstrained language being inherently metaphorical. If you read even a technical news report you find metaphorical usage: the market was , trading was . Words don't have meanings, they are gestures towards meanings. Wittgenstein pointed out the impossibility of precise definition. Attempts to make natural language understanding software systems by bolting on a metaphor-processor after synatx and semantics just don't work. Metaphor has to be in from ground level. Similarly, perceptual events don't have unambiguous meanings, they are gestures towards meanings. They must be interpreted in the context of the rest of the perceptual field and the intentions of the perceiver. One of the hallmarks of intelligent behaviour is to be able to perceive things in a context dependent way: usually a filing cabinet is a device for storage of papers, sometimes it is a platform for standing on while replacing a light bulb. Now suppose you have a very incomplete and ambiguous input to a memory device. You want the input to be completed and 'interpreted' in a way that is consistent with the input fragment and with the intentional context. You also have a lot of hard-earned prior knowledge that you should take advantage of. Invoke a fairly standard auto-associator for pattern completion. If your input data is a literal match for a learned item it can be simply completed. If your input data is not a literal match then find a transformation such that it *can* be completed via the auto-associator and re-transformed back into the original pattern. If the transformed-associated-untransformed version matches the original and you have also filled in some of the gaps then you have performed an analogical inference/retrieval. The literal matching case can be seen as a special case of this where the transform and its inverse are the identity mapping. So, if you have a memory mechanism that performs analogical retrieval then you automatically get literal retrieval but if your standard mechanism is literal retrieval then you have to have some other mechanism for analogical inference. I believe that if you can do analogical retrieval you have achieved the crucial step on the way to symbol grounding, natural language understanding, common sense reasoning and genuine artificial intelligence. I shall now step down from the soap box. 
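Before I do, here is a toy sketch of the transform-complete-untransform loop just described, using a Hopfield-style net as the completer and cyclic shifts as a stand-in for the space of candidate transforms; all of it is invented for illustration, and the literal-match case is just the zero shift:

import numpy as np

rng = np.random.default_rng(2)
n, n_mem = 64, 5
memories = rng.choice([-1, 1], size=(n_mem, n)).astype(float)   # items in LTM
W = (memories.T @ memories) / n                                  # Hebbian weights
np.fill_diagonal(W, 0.0)

def complete(x, steps=20):
    # crude auto-associator: repeated thresholded updates
    for _ in range(steps):
        x = np.where(W @ x >= 0.0, 1.0, -1.0)
    return x

def retrieve(probe, known):
    # try each candidate transform; keep the completion that best agrees
    # with the components of the probe we actually trust
    best = (-1.0, None, None)
    for shift in range(n):
        completed = complete(np.roll(probe, shift))
        untransformed = np.roll(completed, -shift)
        agreement = float(np.mean(untransformed[known] == probe[known]))
        if agreement > best[0]:
            best = (agreement, shift, untransformed)
    return best

# probe = a shifted memory with half its components unknown
target = memories[0]
probe = np.roll(target, -7)
known = rng.random(n) < 0.5
probe = np.where(known, probe, 1.0)          # unknowns filled arbitrarily

agreement, shift, recalled = retrieve(probe, known)
print("recovered shift:", shift, " agreement on known bits: %.2f" % agreement)
print("gaps filled correctly:", np.array_equal(recalled, np.roll(target, -7)))

A serious version would of course need a much richer and learnable transform space than cyclic shifts, which is exactly where the hard problems live.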
Ross Gayler ross at psych.psy.uq.oz.au ^^^^^^^^^^^^^^^^^^ <- My mailer lives here, but I live 2,000km south. Any job offers will be gratefully considered - I have a mortgage & dependents. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: memories. But neither of these viewpoints constitutes AI in any real sense. There is a famous saying by Nietzsche that might be adapted to describe the current status of neural networks: Machines "do not become thinkers simply because their memories are too good." Yet there are other aspects of neural networks that have been extremely important. Thus the structural paradigm is of obvious value to the neurophysiologist, the cognitive scientist, and the vision researcher. It would be of value to the computer science community if Information Sciences were to review and critique the original promise of neurocomputing in the light of developments in the past few years. The Special Issue of Information Sciences will do just this. It will provide reviews of this link between neural networks and AI. In other words, the scope of this Issue is much broader than that of the most commonly encountered applications of associative memories or mapping networks. The application areas that the Issue will deal with include neural logic programming, feature detection, knowledge representation, search techniques, and learning. The connectionist approach to AI will be contrasted with the traditional symbolic techniques. Deadline for Submissions: October 30, 1991 Papers may be sent to: Subhash Kak Guest Editor, Information Sciences Department of Electrical and Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: is not much you can parallelize if you do per-sample training. Take the vanilla version of backprop for example, assuming a network has 20 hidden and output units and 300 weights, then all you can do in parallel is evaluating 20 sigmoid functions and 300 multiply-adds (you can't even do that because of the dependencies among the parameters). Thus if you have thousands of processors in a parallel machine, most processors will idle. In the strict per-sample case, sample i+1 needs to use the weight updated by sample i, so you can't run multiple copies of the same network. And that (running multiple copies of the network) is the trick several people came up with (independently) to speed up backprop training on parallel machines. Unless we modify the algorithm a little bit, I can't see a way to run multiple copies of a network in parallel in the per-sample case. - Xiru Zhang From Xuedong.Huang at SPEECH2.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Xuedong.Huang at SPEECH2.CS.CMU.EDU (Xuedong.Huang@SPEECH2.CS.CMU.EDU) Date: Tue, 15 Oct 1991 08:17-EDT Subject: Positions in CMU Speech Group Message-ID: <687529025/xdh@SPEECH2.CS.CMU.EDU> Applications are invited for one full-time research programmer and a few part-time programmer positions in the speech group, School of Computer Science, Carnegie Mellon University, Pittsburgh, beginning November 1, 1991, or later. For the full-time position, BS/MS in CS/EE and excellence in C/Unix programming required. Experience in system integration, speech recognition, hidden Markov modeling, search, and neural nets preferred.
For the part-time positions, we are particulary interested in CMU sophomores. Neat research opportunity for a real speech application. Our Project involves mostly software development, hidden Markov modeling, large-vocabulary search, language modeling, and neural computing. Send all materials including resume, transcripts, and the names of two references to: Dr. Xuedong Huang Research Computer Scientist School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Wed 23 Oct 91 22:36:53-PDT Subject: Continuous vs. Batch learning In-Reply-To: <9110222317.AA22627@sanger.bio.uci.edu> Message-ID: <688282613.0.LAWS@AI.SRI.COM> From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: > I can't think of > any biological examples of batch learning, in which sensory data are > saved until a certain number of them can be somehow averaged together > and conclusions made and remembered. Any ideas? My observation of children is that they remember everything well enough to know if they have seen or heard it before, but they pay little attention to facts or advice that are not repeated. It is the act of repetition that marks a stimulus as one of the important ones that must be learned. -- Ken Laws ------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From jon at johann Mon Jun 5 16:42:55 2006 From: jon at johann (Jonathon Baxter) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Ray White writes: > > Larry Fast writes: > > > I'm expanding the PDP Backprop program (McClelland&Rumlhart version 1.1) to > > compensate for the following problem: > > > As Backprop passes the error back thru multiple layers, the gradient has > > a built in tendency to decay. At the output the maximum slope of > > the 1/( 1 + e(-sum)) activation function is 0.5. > > Each successive layer multiplies this slope by a maximum of 0.5. > ..... > > > It has been suggested (by a couple of sources) that an attempt should be > > made to have each layer learn at the same rate. ... > > > The new error function is: errorPropGain * act * (1 - act) > > This suggests to me that we are too strongly wedded to precisely > f(sum) = 1/( 1 + e(-sum)) as the squashing function. That function > certainly does have a maximum slope of 0.25. > > A nice way to increase that maximum slope is to choose a slightly different > squashing function. For example f(sum) = 1/( 1 + e(-4*sum)) would fill > the bill, or if you'd rather have your output run from -1 to +1, then > tanh(sum) would work. I think that such changes in the squashing function > should automatically improve the maximum-slope situation, essentially by > doing the "errorPropGain" bookkeeping for you. > > Such solutions are static fixes. I suggested a dynamic adjustment of the > learning parameter for recurrent backprop at IJCNN - 90 in San Diego > (The Learning Rate in Back-Propagation Systems: an Application of Newton's > Method, IJCNN 90, vol I, p 679). The method amounts to dividing the > learning rate parameter by the square of the gradient of the output > function (subject to an empirical minimum divisor). One should be able > to do something similar with feedforward systems, perhaps on a layer by > layer basis. 
> > - Ray White (white at teetot.acusd.edu) The fact that the error "decays" when backpropagated through several layers is not a "problem" with the BP algorithm, it's merely a reflection of the fact that earlier weights contribute less to the error than later weights. If you go around changing the formula for the error at each weight then the resulting learning algorithm will no longer be gradient descent, and hence there is no guarantee that your algorithm will reduce the network's error. Ray White's solution is preferable as it will still use gradient descent to improve the network's performance, although doing things on a layer by layer basis would be wrong. I have experimented a little with keeping the magnitude of the error vector constant in feedforward, backprop nets (by dividing the error vector by its magnitude) and have found a significant (*10) speedup in small problems (xor, encoder--decoders, etc). This increase in speed is most noticeable in problems where the "solution" is a set of infinite weights, so that an approximate solution is reached by traversing vast, flat regions of weight space. Presumably there is a lot of literature out there on this kind of thing. Another idea is to calculate the matrix of second derivatives (grad(grad E)) as well as the first derivatives (grad E) and from this information calculate the (unique) parabolic surface in weight space that has the same derivatives. Then the weights should be updated so as to jump to the center (minimum) of the parabola. I haven't coded this idea yet; has anyone else looked at this kind of thing, and if so what are the results? Jon Baxter - jon at degas.cs.flinders.oz.au From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Thu, 14 Nov 1991 11:35-EST Subject: Backprop-simulator for Connection Machine available Message-ID: <690136508/thrun@B.GP.CS.CMU.EDU> Folks, according to the recent discussion concerning parallel implementations of neural networks, I want to inform you that there is very fast backprop code for the Connection Machine CM-2 publicly available. This posting is about a year old, and meanwhile various labs have found and removed (almost) all the bugs in the code, such that it is very reliable now. It should be noted that this implementation is the simplest way of parallelizing backprop (it's data-parallel), and that it works efficiently with large training sets only (in the order of 500 to infinity). But then it works great! --- Sebastian ------------------------------------------------------------------ (original message follows) ------------------------------------------------------------------ The following might be interesting for everybody who works with the PDP backpropagation simulator and has access to a Connection Machine: ******************************************************** ** ** ** PDP-Backpropagation on the Connection Machine ** ** ** ******************************************************** For testing our new Connection Machine CM/2 I extended the PDP backpropagation simulator by Rumelhart, McClelland et al. with a parallel training procedure for the Connection Machine (Interface C/Paris, Version 5). Following some ideas by R.M. Faber and A. Singer I simply made use of the inherent parallelism of the training set: Each processor on the connection machine (there are at most 65536) evaluates the forward and backward propagation phase for one training pattern only.
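(In plain NumPy the same idea looks roughly as follows, with one matrix row playing the role of one processor's training pattern; the little network and all names here are invented for illustration and this is not the Paris/C code itself.)

import numpy as np

rng = np.random.default_rng(3)
P, n_in, n_hid, n_out = 1024, 10, 20, 3               # P training patterns
X = rng.normal(size=(P, n_in))
T = rng.normal(size=(P, n_out))                       # targets
W1 = rng.normal(scale=0.1, size=(n_in, n_hid))
W2 = rng.normal(scale=0.1, size=(n_hid, n_out))

def batch_pass(X, T, W1, W2):
    # forward pass for all P patterns at once: one row per pattern
    H = np.tanh(X @ W1)
    Y = H @ W2
    E = Y - T
    # backward pass, still row-parallel; the matrix products sum the
    # per-pattern gradients into a single batch gradient
    dW2 = H.T @ E
    dW1 = X.T @ ((E @ W2.T) * (1.0 - H ** 2))
    return dW1, dW2, 0.5 * float(np.sum(E ** 2))

lr = 0.1 / P
for epoch in range(100):
    dW1, dW2, loss = batch_pass(X, T, W1, W2)
    W1 -= lr * dW1
    W2 -= lr * dW2
    if epoch in (0, 99):
        print("epoch %3d  summed squared error %.1f" % (epoch, loss))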
Thus the whole training set is evaluated in parallel and the training time does not depend on the size of this set any longer. Especially at large training sets this reduces the training time greatly. For example: I trained a network with 28 nodes, 133 links and 23 biases to approximate the differential equations for the pole balancing task adopted from Anderson's dissertation. With a training set of 16384 patterns, using the conventional "strain" command, one learning epoch took about 110.6 seconds on a SUN 4/110 - the connection machine with this SUN on the frontend managed the same in 0.076 seconds. --> This reduces one week exhaustive training to approximately seven minutes! (By parallelizing the networks themselves similar acceleration can be achieved also with smaller training sets.) -------------- The source is written in C (Interface to Connection Machine: PARIS) and can easily be embedded into the PDP software package. All origin functions of the simulator are not touched - it is also still possible to use the extended version without a Connection Machine. If you want to have the source, please mail me! Sebastian Thrun, thrun at cs.cmu.edu You can also obtain the source via ftp: ftp 129.26.1.90 Name: anonymous Password: ftp> cd pub ftp> cd gmd ftp> get pdp-cm.c ftp> bye From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: about patenting multiplication was misunderstood by many people. I expected that announcing that I had patented multiplication would sound so ridiculous, that it would help make the whole issue of patenting algorithms ridiculous too. Just to make things clear, I am strongly against the patenting of algorithms. Multiplication cannot be patented, because it is in the public domain. In fact, I thought that most people knew this, and that the meaning of the announcement of having patented multiplication would therefore be clear. Apparently, it was not clear to everyone. Sorry. Luis B. Almeida INESC Phone: +351-1-544607 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt lba at inesc.uucp (if you have access to uucp) From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Wed 20 Nov 91 10:07:12-PST Subject: Subtractive network design In-Reply-To: <9111181719.AA00230@poseidon.cs.tulane.edu> Message-ID: <690660432.0.LAWS@AI.SRI.COM> There's been some discussion of whether networks should grow or shrink. This reminds me of the stepwise-inclusion and stepwise-deletion debate for multiple regression. As I recall, there were demonstrable benefits from combining the two. Stepwise inclusion was used for speed, but with stepwise deletion of variables that were thereby made redundant. The selection process was simplified, over the years, by advances in the theory of canonical correlation. The theory of minimal encoding has lately been invoked to improve stopping criteria for the search. Neural-network researchers don't like globally computed statistics or stored states, so you can't set up an A* search within a single network training run. You do seem willing, however, to use genetic algorithms or multiple training runs to find sufficiently good networks for a target application. Brute-force search techniques in the space of permitted connectivities may be necessary. Stepwise growth alternated with stepwise death may be a useful strategy for reducing search time. 
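For what it's worth, here is a toy sketch of the regression version of that alternation (greedy inclusion of the most helpful variable, followed by deletion of variables made redundant); the data, threshold, and names are all invented, and this is only an illustration of the idea, not a network algorithm:

import numpy as np

rng = np.random.default_rng(4)
N, D = 200, 10
X = rng.normal(size=(N, D))
true_w = np.zeros(D)
true_w[[1, 4, 7]] = [2.0, -1.5, 1.0]                  # only three real predictors
y = X @ true_w + 0.1 * rng.normal(size=N)

def sse(cols):
    # least-squares fit on a column subset; returns the sum of squared errors
    if not cols:
        return float(np.sum(y ** 2))
    A = X[:, cols]
    w, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sum((y - A @ w) ** 2))

included, tol = [], 1.0
for _ in range(2 * D):                                # bounded, in case of cycling
    # inclusion step: add the single variable that helps most
    candidates = [j for j in range(D) if j not in included]
    if not candidates:
        break
    gain, j = max((sse(included) - sse(included + [j]), j) for j in candidates)
    if gain < tol:
        break
    included.append(j)
    # deletion step: drop anything the new variable has made redundant
    for k in list(included):
        rest = [c for c in included if c != k]
        if sse(rest) - sse(included) < tol:
            included.remove(k)

print("selected predictors:", sorted(included))       # ideally [1, 4, 7]

The network analogue would alternate adding hidden units with pruning connections made redundant, using a training-run-level criterion in place of the SSE threshold.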
-- Ken ------- From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Tue, 26 Nov 1991 20:14-EST Subject: International Comparison of Learning Algorithms: MONK Message-ID: <691204481/thrun@B.GP.CS.CMU.EDU> Dear Connectionists: This is an announcement of a forthcoming Technical Report. In the last months, we did run a first worldwide comparison of some major learning algorithms on three simple classification problems. Two connectionist learning algorithms were also compared, namely plain Backpropagation and Cascade Correlation. Although a) the problems were taken from a domain which supported (some of the) symbolical algorithms, and b) this comparison is considerably un-biased since the testers did really know the methods they tested, connectionist techniques performed surprisingly well. The final report will be available shortly after NIPS conference, but everyone who is interested in this comparison and attends the conference may feel free to contact me at NIPS. I will bring a few pre-prints. Sebastian Thrun thrun at cs.cmu.edu ---------------------------------------------------------------------- ---------------------------------------------------------------------- The MONK's Problems A Performance Comparison of Different Learning Algorithms S. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang This report summarizes a comparison of different learning techniques which was performed at the 2nd European Summer School on Machine Learning, held in Belgium during summer 1991. A variety of symbolic and non-symbolic learning techniques - namely AQ17-DCI, AQ17-HCI, AQ17-FCLS, AQ14-NT, AQ15-GA, Assistant Professional, mFOIL, ID5R, IDL, ID5R-hat, TDIDT, ID3, AQR, CN2, CLASSWEB, PRISM, Backpropagation, and Cascade Correlation - are compared on three classification problems, the MONK's problems. The MONK's problems are derived from a domain in which each training example is represented by six discrete-valued attributes. Each problem involves learning a binary function defined over this domain, from a sample of training examples of this function. Experiments were performed with and without noise in the training examples. One significant characteristic of this comparison is that it was performed by a collection of researchers, each of whom was an advocate of the technique they tested (often they were the creators of the various methods). In this sense, the results are less biased than in comparisons performed by a single person advocating a specific learning method, and more accurately reflect the generalization behavior of the learning techniques as applied by knowledgeable users. ---------------------------------------------------------------------- ================================ RESULTS - A SHORT OVERVIEW ================================ Problem: MONK-1 MONK-2 MONK-3(noisy) AQ17-DCI 100.0% 100.0% 94.2% AQ17-HCI 100.0% 93.1% 100.0% AQ17-FCLS 92.6% 97.2% AQ14-NT 100.0% AQ15-GA 100.0% 86.8% 100.0% (by J. Bala, E. Bloedorn, K. De Jong, K. Kaufman, R.S. Michalski, P. Pachowicz, H. Vafaie, J. Wnek, and J. Zhang) Assistant Professional 100.0% 81.25% 100.0% (by B. Cestnik, I. Kononenko, and I. Bratko) mFOIL 100.0% 69.2% 100.0% (by S. 
Dzeroski) ID5R 81.7% 61.8% IDL 97.2% 66.2% ID5R-hat 90.3% 65.7% TDIDT 75.7% 66.7% (by W. Van de Velde) ID3 98.6% 67.9% 94.4% ID3, no windowing 83.2% 69.1% 95.6% ID5R 79.7% 69.2% 95.2% AQR 95.9% 79.7% 87.0% CN2 100.0% 69.0% 89.1% CLASSWEB 0.10 71.8% 64.8% 80.8% CLASSWEB 0.15 65.7% 61.6% 85.4% CLASSWEB 0.20 63.0% 57.2% 75.2% (by J. Kreuziger, R. Hamann, and W. Wenzel) PRISM 86.3% 72.7% 90.3% (by S. Keller) Backpropagation 100.0% 100.0% 93.1% (by S. Thrun) Cascade Correlation 100.0% 100.0% 97.2% (by S.E. Fahlman) ---------------------------------------------------------------------- ---------------------------------------------------------------------- ================================ TABLE OF CONTENTS ================================ 1 The MONK's Comparison Of Learning Algorithms -- Introduction and Survey 1 1.1 The problem 2 1.2 Visualization 2 2 Applying Various AQ Programs to the MONK's Problems: Results and Brief Description of the Methods 7 2.1 Introduction 8 2.2 Results for the 1st problem (M1) 9 2.2.1 Rules obtained by AQ17-DCI 9 2.2.2 Rules obtained by AQ17-HCI 10 2.3 Results for the 2nd problem (M2) 11 2.3.1 Rules obtained by AQ17-DCI 11 2.3.2 Rules obtained by AQ17-HCI 11 2.3.3 Rules obtained by AQ17-FCLS 13 2.4 Results for the 3rd problem (M3) 15 2.4.1 Rules obtained by AQ17-HCI 15 2.4.2 Rules obtained by AQ14-NT 16 2.4.3 Rules obtained by AQ17-FCLS 16 2.4.4 Rules obtained by AQ15-GA 17 2.5 A Brief Description of the Programs and Algorithms 18 2.5.1 AQ17-DCI (Data-driven constructive induction) 18 2.5.2 AQ17-FCLS (Flexible concept learning) 19 2.5.3 AQ17-HCI (Hypothesis-driven constructive induction) 19 2.5.4 AQ14-NT (noise-tolerant learning from engineering data) 20 2.5.5 AQ15-GA (AQ15 with attribute selection by a genetic algorithm) 20 2.5.6 The AQ Algorithm that underlies the programs 21 3 The Assistant Professional Inductive Learning System: MONK's Problems 23 3.1 Introduction 24 3.2 Experimental results 24 3.3 Discussion 25 3.4 Literature 25 3.5 Resulting Decision Trees 26 4 mFOIL on the MONK's Problems 29 4.1 Description 30 4.2 Set 1 31 4.3 Set 2 31 4.4 Set 3 32 5 Comparison of Decision Tree-Based Learning Algorithms on the MONK's Problems 33 5.1 IDL: A Brief Introduction 34 5.1.1 Introduction 34 5.1.2 Related Work 35 5.1.3 Conclusion 36 5.2 Experimental Results 40 5.2.1 ID5R on test set 1 43 5.2.2 IDL on test set 1 43 5.2.3 ID5R-HAT on test set 1 44 5.2.4 TDIDT on test set 1 44 5.2.5 ID5R on test set 2 45 5.2.6 IDL on test set 2 46 5.2.7 TDIDT on test set 2 48 5.2.8 TDIDT on test set 1 49 5.2.9 ID5R-HAT on test set 2 50 5.3 Classification diagrams 52 5.4 Learning curves 56 6 Comparison of Inductive Learning Programs 59 6.1 Introduction 60 6.2 Short description of the algorithms 60 6.2.1 ID3 60 6.2.2 ID5R 61 6.2.3 AQR 61 6.2.4 CN2 62 6.2.5 CLASSWEB 62 6.3 Results 63 6.3.1 Training Time 63 6.3.2 Classifier Results 64 6.4 Conclusion 68 6.5 Classification diagrams 69 7 Documentation of Prism -- an Inductive Learning Algorithm 81 7.1 Short Description 82 7.2 Introduction 82 7.3 PRISM: Entropy versus Information Gain 82 7.3.1 Maximizing the information gain 82 7.3.2 Trimming the tree 82 7.4 The Basic Algorithm 83 7.5 The Use of Heuristics 84 7.6 General Considerations and a Comparison with ID3 84 7.7 Implementation 84 7.8 Results on Running PRISM on the MONK's Test Sets 85 7.8.1 Test Set 1 -- Rules 86 7.8.2 Test Set 2 -- Rules 87 7.8.3 Test Set 3 -- Rules 90 7.9 Classification diagrams 92 8 Backpropagation on the MONK's problems 95 8.1 Introduction 96 8.2 Classification diagrams 
97 8.3 Resulting weight matrices 99 9 The Cascade-Correlation Learning Algorithm on the MONK's Problems 101 9.1 The Cascade-Correlation algorithm 102 9.2 Results 103 9.3 Classification diagrams 106 From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Fri, 29 Nov 1991 16:21-EST Subject: International Comparison of Learning Algorithms: MONK Message-ID: <691449698/thrun@B.GP.CS.CMU.EDU> Dear Connectionists, Two days ago I announced the forthcoming TR "The MONK's Problems -- A Performance Comparison of Different Learning Algorithms" on this mailing list. Since then I spend a significant part of my time in answering e-mails. To make things sufficiently clear: - The final TR will be published in 2-3 weeks. - I will make the TR available by ftp (Ohio State-archive). The report in more than 100 pages in length, and we want send hardcopies only to people who cannot access ftp. If you are not able to retrieve it by ftp, please feel free to contact me. _ I also will also copy the "MONK's problems" to the archive. - If this all is done, I will announce the report again on this list. --- Sebastian From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Mon, 16 Dec 1991 13:27-EST Subject: International Comparison of Learning Algorithms: MONK Message-ID: <692908033/thrun@B.GP.CS.CMU.EDU> Dear Connectionists: The technical report "The MONK's Problems - A Performance Comparison of Different Learning Algorithms" is now available via anonymous ftp. Copies of the report as well as the MONK's database can be obtained in the following way: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get thrun.comparison.ps.Z (=report) ftp> get thrun.comparison.dat.Z (=data) ftp> quit unix> uncompress thrun.comparison.ps.Z unix> uncompress thrun.comparison.dat.Z unix> lpr thrun.comparison.ps unix> lpr thrun.comparison.dat If this does not work, send e-mail to reports at cs.cmu.edu asking for the Technical Report CMU-CS-91-197. Sebastian Thrun thrun at cs.cmu.edu SCS, CMU, Pittsburgh PA 15213 ---------------------------------------------------------------------- ---------------------------------------------------------------------- Some things changed - here is the abstract and the table of contents again: The MONK's Problems A Performance Comparison of Different Learning Algorithms S. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang CMU-CS-91-197 This report summarizes a comparison of different learning techniques which was performed at the 2nd European Summer School on Machine Learning, held in Belgium during summer 1991. A variety of symbolic and non-symbolic learning techniques - namely AQ17-DCI, AQ17-HCI, AQ17-FCLS, AQ14-NT, AQ15-GA, Assistant Professional, mFOIL, ID5R, IDL, ID5R-hat, TDIDT, ID3, AQR, CN2, CLASSWEB, PRISM, Backpropagation, and Cascade Correlation - are compared on three classification problems, the MONK's problems. The MONK's problems are derived from a domain in which each training example is represented by six discrete-valued attributes. 
Each problem involves learning a binary function defined over this domain, from a sample of training examples of this function. Experiments were performed with and without noise in the training examples. One significant characteristic of this comparison is that it was performed by a collection of researchers, each of whom was an advocate of the technique they tested (often they were the creators of the various methods). In this sense, the results are less biased than in comparisons performed by a single person advocating a specific learning method, and more accurately reflect the generalization behavior of the learning techniques as applied by knowledgeable users. ---------------------------------------------------------------------- ================================ RESULTS - A SHORT OVERVIEW ================================ MONK-1 MONK-2 MONK-3(noisy) AQ17-DCI 100.0% 100.0% 94.2% AQ17-HCI 100.0% 93.1% 100.0% AQ17-FCLS 92.6% 97.2% AQ14-NT 100.0% AQ15-GA 100.0% 86.8% 100.0% (by J. Bala, E. Bloedorn, K. De Jong, K. Kaufman, R.S. Michalski, P. Pachowicz, H. Vafaie, J. Wnek, and J. Zhang) Assistant Professional 100.0% 81.25% 100.0% (by B. Cestnik, I. Kononenko, and I. Bratko) mFOIL 100.0% 69.2% 100.0% (by S. Dzeroski) ID5R 81.7% 61.8% IDL 97.2% 66.2% ID5R-hat 90.3% 65.7% TDIDT 75.7% 66.7% (by W. Van de Velde) ID3 98.6% 67.9% 94.4% ID3, no windowing 83.2% 69.1% 95.6% ID5R 79.7% 69.2% 95.2% AQR 95.9% 79.7% 87.0% CN2 100.0% 69.0% 89.1% CLASSWEB 0.10 71.8% 64.8% 80.8% CLASSWEB 0.15 65.7% 61.6% 85.4% CLASSWEB 0.20 63.0% 57.2% 75.2% (by J. Kreuziger, R. Hamann, and W. Wenzel) PRISM 86.3% 72.7% 90.3% (by S. Keller) ECOBWEB leaf pred. 71.8% 67.4% 68.2% " plus inform.utility 82.7% 71.3% 68.0% (by Y. Reich and D. Fisher) Backpropagation 100.0% 100.0% 93.1% BP + weight decay 100.0% 100.0% 97.2% (by S. Thrun) Cascade Correlation 100.0% 100.0% 97.2% (by S.E. Fahlman) ---------------------------------------------------------------------- ---------------------------------------------------------------------- 1 The MONK's Comparison Of Learning Algorithms -- Introduction and Survey S.B. Thrun, T. Mitchell, and J. Cheng 1 1.1 The problem 2 1.2 Visualization 2 2 Applying Various AQ Programs to the MONK's Problems: Results and Brief Description of the Methods J. Bala, E. Bloedorn, K. De Jong, K. Kaufman, R.S. Michalski, P. Pachowicz, H. Vafaie, J. Wnek, and J. Zhang 7 2.1 Introduction 8 2.2 Results for the 1st problem (M1) 9 2.2.1 Rules obtained by AQ17-DCI 9 2.2.2 Rules obtained by AQ17-HCI 10 2.3 Results for the 2nd problem (M2) 11 2.3.1 Rules obtained by AQ17-DCI 11 2.3.2 Rules obtained by AQ17-HCI 11 2.3.3 Rules obtained by AQ17-FCLS 13 2.4 Results for the 3rd problem (M3) 15 2.4.1 Rules obtained by AQ17-HCI 15 2.4.2 Rules obtained by AQ14-NT 16 2.4.3 Rules obtained by AQ17-FCLS 16 2.4.4 Rules obtained by AQ15-GA 17 2.5 A Brief Description of the Programs and Algorithms 17 2.5.1 AQ17-DCI (Data-driven constructive induction) 17 2.5.2 AQ17-FCLS (Flexible concept learning) 18 2.5.3 AQ17-HCI (Hypothesis-driven constructive induction) 18 2.5.4 AQ14-NT (noise-tolerant learning from engineering data) 19 2.5.5 AQ15-GA (AQ15 with attribute selection by a genetic algorithm) 20 2.5.6 The AQ Algorithm that underlies the programs 20 3 The Assistant Professional Inductive Learning System: MONK's Problems B. Cestnik, I. Kononenko, and I. Bratko 23 3.1 Introduction 24 3.2 Experimental results 24 3.3 Discussion 25 3.4 Literature 25 3.5 Resulting Decision Trees 26 4 mFOIL on the MONK's Problems S. 
Dzeroski 29 4.1 Description 30 4.2 Set 1 31 4.3 Set 2 31 4.4 Set 3 32 5 Comparison of Decision Tree-Based Learning Algorithms on the MONK's Problems W. Van de Welde 33 5.1 IDL: A Brief Introduction 34 5.1.1 Introduction 34 5.1.2 Related Work 35 5.1.3 Conclusion 36 5.2 Experimental Results 40 5.2.1 ID5R on test set 1 43 5.2.2 IDL on test set 1 43 5.2.3 ID5R-HAT on test set 1 44 5.2.4 TDIDT on test set 1 44 5.2.5 ID5R on test set 2 45 5.2.6 IDL on test set 2 46 5.2.7 TDIDT on test set 2 48 5.2.8 TDIDT on test set 1 49 5.2.9 ID5R-HAT on test set 2 50 5.3 Classification diagrams 52 5.4 Learning curves 56 6 Comparison of Inductive Learning Programs J. Kreuziger, R. Hamann, and W. Wenzel 59 6.1 Introduction 60 6.2 Short description of the algorithms 60 6.2.1 ID3 60 6.2.2 ID5R 61 6.2.3 AQR 61 6.2.4 CN2 62 6.2.5 CLASSWEB 62 6.3 Results 63 6.3.1 Training Time 63 6.3.2 Classifier Results 64 6.4 Conclusion 68 6.5 Classification diagrams 69 7 Documentation of Prism -- an Inductive Learning Algorithm S. Keller 81 7.1 Short Description 82 7.2 Introduction 82 7.3 PRISM: Entropy versus Information Gain 82 7.3.1 Maximizing the information gain 82 7.3.2 Trimming the tree 82 7.4 The Basic Algorithm 83 7.5 The Use of Heuristics 84 7.6 General Considerations and a Comparison with ID3 84 7.7 Implementation 84 7.8 Results on Running PRISM on the MONK's Test Sets 85 7.8.1 Test Set 1 -- Rules 86 7.8.2 Test Set 2 -- Rules 87 7.8.3 Test Set 3 -- Rules 90 7.9 Classification diagrams 92 8 Cobweb and the MONK Problems Y. Reich, and D. Fisher 95 8.1 Cobweb: A brief overview 96 8.2 Ecobweb 97 8.2.1 Characteristics prediction 97 8.2.2 Hierarchy correction mechanism 97 8.2.3 Information utility function 98 8.3 Results 98 8.4 Summary 100 9 Backpropagation on the MONK's Problems S.B. Thrun 101 9.1 Introduction 102 9.2 Classification diagrams 103 9.3 Resulting weight matrices 105 10 The Cascade-Correlation Learning Algorithm on the MONK's Problems S.E. Fahlman 107 10.1 The Cascade-Correlation algorithm 108 10.2 Results 109 10.3 Classification diagrams 112 From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Fri 20 Dec 91 10:34:32-PST Subject: Research topic needed In-Reply-To: <9112200658.AA22681@bluering.cowan.edu.au> Message-ID: <693254072.0.LAWS@AI.SRI.COM> > could any of Neural Gurus help me to identify research topic, please. I don't mean to flame Boon Tan (btan at cowan.edu.au), but shouldn't the question be one of finding a customer? Or a real-world problem? Too many thesis projects serve no purpose beyond getting the degree, one appearance at a major conference, and perhaps a journal article. As long as you're looking for a topic, it would be best to choose one with value outside a single academic department. Neural networks for control of sheep shearing might not interest the Neural Gurus, but at least there be hope of doing the world some good. Or is that total inappropriate at the doctoral level? 
-- Ken ------- From IN% Mon Jun 5 16:42:55 2006 From: IN% (IN%) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From postmaster%BROWNCOG.BITNET at CARNEGIE.BITNET Mon Jun 5 16:42:55 2006 From: postmaster%BROWNCOG.BITNET at CARNEGIE.BITNET (PMDF Mail Server) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Delivery report Message-ID: <01GF3NDHUGF8000BAJ@BROWNCOG.BITNET> ---------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: possible solutions, then some of them will work well for new inputs and others will not work well. So on one training run a network may appear to generalise well to a new input set, while on another it does not. Does this mean that, when connectionists refer to the ability of a network to generalise, they are referring to an average ability over many trials? Has anyone encountered situations in which the same network appeared to generalise well on one learning trial and poorly on another? Reference: Bates, E.A. & Elman, J.L. (1992). Connectionism and the study of change. CRL Technical Report 9202, (February). -- Paul Atkins email: patkins at laurel.mqcc.mq.oz.au School of Behavioural Sciences phone: (02) 805-8606 Macquarie University fax : (02) 805-8062 North Ryde, NSW, 2113 Australia.  From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Sun, 29 Mar 1992 12:16-EST Subject: new papers about exploration in active learning Message-ID: <701889417/thrun@B.GP.CS.CMU.EDU> This is an announcement of three papers about exploration in neurocontrol and reinforcement learning. I copied postscript versions to our neuroprose archive. Thanks to Jordan Pollack - what would connectionism be without him?? Instructions for retrieval can be found at the end of this message. Comments are welcome. --- Sebastian Thrun =========================================================================== ACTIVE EXPLORATION IN DYNAMIC ENVIRONMENTS by S.Thrun and K.Moeller To appear in: Advances in Neural Information Processing Systems 4, J.E. Moody, S.J. Hanson, and R.P. Lippmann (eds.) Morgan Kaufmann, San Mateo, CA, 1992 Whenever an agent learns to control an unknown environment, two opposing principles have to be combined, namely: exploration (long-term optimization) and exploitation (short-term optimization). Many real-valued connectionist approaches to learning control realize exploration by randomness in action selection. This might be disadvantageous when costs are assigned to ``negative experiences.'' The basic idea presented in this paper is to make an agent explore unknown regions in a more directed manner. This is achieved by a so-called competence map, which is trained to predict the controller's accuracy, and is used for guiding exploration. Based on this, a bistable system enables smoothly switching attention between two behaviors -- exploration and exploitation -- depending on expected costs and knowledge gain. The appropriateness of this method is demonstrated by a simple robot navigation task. archive name: thrun.nips91.ps.Z =========================================================================== EFFICIENT EXPLORATION IN REINFORCEMENT LEARNING by S. Thrun Technical Report CMU-CS-92-102, Jan. 1992, Carnegie-Mellon University Exploration plays a fundamental role in any active learning system. 
This study evaluates the role of exploration in active learning and describes several local techniques for exploration in finite, discrete domains, embedded in a reinforcement learning framework (delayed reinforcement). This paper distinguishes between two families of exploration schemes: undirected and directed exploration. While the former family is closely related to random walk exploration, directed exploration techniques memorize exploration-specific knowledge which is used for guiding the exploration search. In many finite deterministic domains, any learning technique based on undirected exploration is inefficient in terms of learning time, i.e. learning time is expected to scale exponentially with the size of the state space [Whitehead 91]. We prove that for all these domains, reinforcement learning using a directed technique can always be performed in polynomial time, demonstrating the important role of exploration in reinforcement learning. Subsequently, several exploration techniques found in recent reinforcement learning and connectionist adaptive control literature are described. In order to trade off efficiently between exploration and exploitation -- a trade-off which characterizes many real-world active learning tasks -- combination methods are described which explore and avoid costs simultaneously. This includes a selective attention mechanism, which allows smooth switching between exploration and exploitation. All techniques are evaluated and compared on a discrete reinforcement learning task (robot navigation). The empirical evaluation is followed by an extensive discussion of benefits and limitations of this work. archive name: thrun.explor-reinforcement.ps.Z =========================================================================== THE ROLE OF EXPLORATION IN LEARNING CONTROL by S. Thrun To appear in: Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches, D.A. White and D.A. Sofge, Van Nostrand Reinhold, Florence, Kentucky 41022 This chapter basically summarizes the results described in the papers above, and surveys recent work on exploration in neurocontrol and reinforcement learning. Here are the issues addressed in this paper: `[...] Let us begin with the questions characterizing exploration and exploitation. Exploration seeks to minimize learning time. Thus, the central question of efficient exploration reads ``How can learning time be minimized?''. Accordingly, the question of exploitation is ``How can costs be minimized?''. These questions are usually opposing, i.e. the smaller the learning time, the larger the costs, and vice versa. But as we will see, pure exploration does not necessarily minimize learning time. This is because pure exploration, as presented in this chapter, maximizes knowledge gain, and thus may waste much time in exploring task-irrelevant parts of the environment. If one is interested in restricting exploration to relevant parts of the environment, it often makes sense to exploit simultaneously. Therefore exploitation is part of efficient exploration. On the other hand, exploration is also part of efficient exploitation, because costs clearly cannot be minimized over time without exploring the environment. The second important question to ask is ``What impact has the exploration rule on the speed and the costs of learning?'', or in other words ``How much time should a designer, who designs an active learning system, spend for designing an appropriate exploration rule?''. 
This question will be extensively discussed, since the impact of the exploration technique on both learning time and learning costs can be enormous. Depending on the structure of the environment, ``wrong'' exploration rules may result in inefficient learning time, even if very efficient learning techniques are employed. The third central question relevant for any implementation of learning control is ``How does one trade-off exploration and exploitation?''. Since exploration and exploitation establish a trade-off, this question needs further specification. For example, one might ask ``How can I find the best controller in a given time?'', or ``How can I find the best controller while not exceeding a certain amount of costs?''. Both questions constrain the trade-off dilemma in such a way that an optimal combination between exploration and exploitation may be found, given that the problem can be solved with these constraints at all. Now assume one has already an efficient exploration and an efficient exploitation technique. This raises the question ``How shall exploration and exploitation be combined?''. Shall each action explore and exploit the environment simultaneously, or shall an agent sometimes focus more on exploration, and sometimes focus more on exploitation?' archive name: thrun.exploration-overview.ps.Z =========================================================================== INSTRUCTIONS FOR RETRIEVAL unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get thrun.nips91.ps.Z (First paper) ftp> get thrun.explor-reinforcement.ps.Z (Second paper) ftp> get thrun.exploration-overview.ps.Z (Third paper) ftp> quit unix> zcat thrun.nips91.ps.Z | lpr unix> zcat thrun.explor-reinforcement.ps.Z | lpr unix> zcat thrun.exploration-overview.ps.Z | lpr If you are unable to ftp and/or print the papers, send mail to thrun at cs.cmu.edu or write to Sebastian Thrun, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA  From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Sat, 4 Apr 1992 20:14-EST Subject: Why does the error rise in a SRN? In-Reply-To: Gary Cottrell's mail message of Fri, 3 Apr 92 18:12:16 PST Message-ID: <702436485/thrun@B.GP.CS.CMU.EDU> Gary writes: > > Yes, it seems that Elman nets can't learn in batch mode. > I have tried recurrent networks with Elman-structure, but with complete gradient descent through time. This was done on a couple of problems including Morse code recognition, handwritten digit recognition, prediction of a ball trajectory. I used the Connection Machine, batch mode, and a very small learning rate (things are fast on a Connection Machine), and I did not observe that the error on the training set started to increase. However, I did observe that the networks often converged to useless local minima. Finding a meaningful representation for the context layer seems to be an order of magnitude more difficult than identifying weight and biases in a feed-forward network. 
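For readers who have not used Elman-structured networks, a minimal sketch of the architecture being discussed may help: the hidden activations are copied into a context layer and fed back in alongside the next input. The layer sizes, the tanh units, and the random input sequence below are arbitrary choices made for illustration, not Thrun's actual setup, and only the forward pass is shown, not the gradient-descent-through-time training he describes.

import numpy as np

# Sketch of an Elman-style simple recurrent network (forward pass only).
# All sizes and the tanh nonlinearity are illustrative assumptions.
n_in, n_hid, n_out = 3, 5, 2
rng = np.random.default_rng(0)
W_ih = rng.uniform(-1, 1, (n_hid, n_in))    # input -> hidden
W_ch = rng.uniform(-1, 1, (n_hid, n_hid))   # context -> hidden
W_ho = rng.uniform(-1, 1, (n_out, n_hid))   # hidden -> output

def run_sequence(xs):
    # The context layer is simply a copy of the previous hidden state;
    # that copy-back step is what makes the network "Elman-structured".
    context = np.zeros(n_hid)
    outputs = []
    for x in xs:
        hidden = np.tanh(W_ih @ x + W_ch @ context)
        outputs.append(W_ho @ hidden)
        context = hidden.copy()
    return outputs

outputs = run_sequence(rng.uniform(-1, 1, (4, n_in)))   # a 4-step toy sequence

The copy-back step is the only structural difference from a plain feed-forward pass, which is also why a useful representation for the context layer is the hard thing to learn.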
Sebastian  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From RUSPINI at ai.sri.com Mon Jun 5 16:42:55 2006 From: RUSPINI at ai.sri.com (Enrique Ruspini) Date: Fri 26 Jun 92 12:34:05-PDT Subject: Call for Papers - ICNN'93 Message-ID: <709587245.0.RUSPINI@AI.SRI.COM> 1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS San Francisco, California, March 28 - April 1, 1993 The IEEE Neural Networks Council is pleased to announce its 1993 International Conference on Neural Networks (ICNN'93) to be held in San Francisco, California from March 28 to April 1, 1993. ICNN'93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'93). Participants will be able to attend the technical events of both meetings. ICNN '93 will be devoted to the discussion of basic advances and applications of neurobiological systems, neural networks, and neural computers. Topics of interest include: * Neurodynamics * Associative Memories * Intelligent Neural Networks * Invertebrate Neural Networks * Neural Fuzzy Systems * Evolutionary Programming * Optical Neurocomputers * Supervised Learning * Unsupervised Learning * Sensation and Perception * Genetic Algorithms * Virtual Reality & Neural Networks * Applications to: - Image Processing and Understanding - Optimization - Control - Robotics and Automation - Signal Processing ORGANIZATION: General Chair: Enrique H. Ruspini Program Chairs: Hamid R. Berenji, Elie Sanchez, Shiro Usui ADVISORY BOARD: S. Amari J. A. Anderson J. C. Bezdek Y. Burnod L. Cooper R. C. Eberhart R. Eckmiller J. Feldman M. Feldman K. Fukushima R. Hecht-Nielsen J. Holland C. Jorgensen T. Kohonen C. Lau C. Mead N. Packard D. Rummelhart B. Skyrms L. Stark A. Stubberud H. Takagi P. Treleaven B. Widrow PROGRAM COMMITTEE: K. Aihara I. Aleksander L.B. Almeida G. Andeen C. Anderson J. A. Anderson A. Andreou P. Antsaklis J. Barhen B. Bavarian H. R. Berenji A. Bergman J. C. Bezdek H. Bourlard D. E. Brown J. Cabestany D. Casasent S. Colombano R. de Figueiredo M. Dufosse R. C. Eberhart R. M. Farber J. Farrell J. Feldman W. Fisher W. Fontana A.A. Frolov T. Fukuda C. Glover K. Goser D. Hammerstrom M. H. Hassoun J. Herault J. Hertz D. Hislop A. Iwata M. Jordan C. Jorgensen L. P. Kaelbling P. Khedkar S. Kitamura B. Kosko J. Koza C. Lau C. Lucas R. J. Marks J. Mendel W.T. Miller M. Mitchell S. Miyake A.F. Murray J.-P. Nadal T. Nagano K. S. Narendra R. Newcomb E. Oja N. Packard A. Pellionisz P. Peretto L. Personnaz A. Prieto D. Psaltis H. Rauch T. Ray M. B. Reid E. Sanchez J. Shavlik B. Sheu S. Shinomoto J. Shynk P. K. Simpson N. Sonehara D. F. Specht A. Stubberud N. Sugie H. Takagi S. Usui D. White H. White R. Williams E. Yodogawa S. Yoshizawa S. W. Zucker ORGANIZING COMMITEE: PUBLICITY: H.R. Berenji EXHIBITS: W. Xu TUTORIALS: J.C. Bezdek VIDEO PROCEEDINGS: A. Bergman FINANCE: R. Tong VOLUNTEERS: A. WORTH SPONSORING SOCIETIES: ICNN'93 is sponsored by the Neural Networks Council. 
Constituent Societies: * IEEE Circuits and Systems Society * IEEE Communications Society * IEEE Computer Society * IEEE Control Systems Society * IEEE Engineering in Medicine & Biology Society * IEEE Industrial Electronics Society * IEEE Industry Applications Society * IEEE Information Theory Society * IEEE Lasers and Electro-Optics Society * IEEE Oceanic Engineering Society * IEEE Power Engineering Society * IEEE Robotics and Automation Society * IEEE Signal Processing Society * IEEE Systems, Man, and Cybernetics Society CALL FOR PAPERS The program committee cordially invites interested authors to submit papers dealing with any aspects of research and applications related to the use of neural models. Papers must be written in English and must be received by SEPTEMBER 21, 1992. Six copies of the paper must be submitted and the paper should not exceed 8 pages including figures, tables, and references. Papers should be prepared on 8.5" x 11" white paper with 1" margins on all sides, using a typewriter or letter quality printer in one column format, in Times or similar style, 10 points or larger, and printed on one side of the paper only. Please include title, authors name(s) and affiliation(s) on top of first page followed by an abstract. FAX submissions are not acceptable. Please send submissions prior to the deadline to: Dr. Hamid Berenji, AI Research Branch, MS 269-2, NASA Ames Research Center, Moffett Field, California 94035 CALL FOR VIDEOS: The IEEE Neural Networks Council is pleased to announce its first Video Proceedings program, intended to present new and significant experimental work in the fields of artificial neural networks and fuzzy systems, so as to enhance and complement results presented in the Conference Proceedings. Interested researchers should submit a 2 to 3 minute video segment (preferred formats: 3/4" Betacam, or Super VHS) and a one page information sheet (including title, author, affiliation, address, a 200-word abstract, 2 to 3 references, and a short acknowledgment, if needed), prior to September 21, 1992, to Meeting Management, 5665 Oberlin Drive, Suite 110, San Diego, CA 92121. We encourage those interested in participating in this program to write to this address for important suggestions to help in the preparation of their submission. TUTORIALS: The Computational Brain: Biological Neural Networks Terrence J. Sejnowski The Salk Institute Evolutionary Programming David Fogel Orincon Corporation Expert Systems and Neural Networks George Lendaris Portland State University Genetic Algorithms and Neural Networks Darrell Whitley Colorado State University Introduction to Biological and Artificial Neural Networks Steven Rogers Air Force Institute of Technology Suggestions from Cognitive Science for Neural Network Applications James A. Anderson Department of Cognitive and Linguistic Sciences Brown University EXHIBITS: ICNN '93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE '93). ICNN '93 and FUZZ-IEEE '93 are the largest conferences and trade shows in their fields. Participants to either conference will be able to attend the combined exhibit program. We anticipate an extraordinary trade show offering a unique opportunity to become acquainted with the latest developments in products based on neural-networks and fuzzy-systems techniques. Interested exhibitors are requested to contact the Chairman, Exhibits, ICNN '93 and FUZZ-IEEE '93, Wei Xu at Telephone (408) 428-1888, FAX (408) 428-1884. 
FOR ADDITIONAL INFORMATION, CONTACT Meeting Management 5665 Oberlin Drive Suite 110 San Diego, CA 92121 Tel. (619) 453-6222 FAX (619) 535-3880 -------

From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Thu 20 Aug 92 21:18:09-PDT Subject: A neat idea from L. Breiman Message-ID: <714370689.0.LAWS@AI.SRI.COM> L. Breiman's "back fitting" sounds very much like the search strategy in some of the fancier stepwise multiple regression programs. At each step, the best remaining variable is added to the regression equation. Then other variables in the equation are tested to see if any can be dropped out. The "repeat until quiescence" search isn't usually performed, but I suppose that it could have its uses. There are also clustering algorithms that have this flavor, notably ISODATA. As clusters are grown or merged, it's possible for data points to drop out. I've also used such an algorithm in image segmentation. I look for pairs of regions that can be merged, and for any region that can be split. The two searches are alternated, approaching a global optimum (one hopes). It's quite different from the usual split-merge algorithm. If you try Breiman's back fitting, watch out for cycles. In my segmentation application, I ran into cycles containing more than a hundred steps. -- Ken Laws -------

From bill at nsma.arizona.edu Mon Jun 5 16:42:55 2006 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Models of LTP and LTD In-Reply-To: fellous@rana.usc.edu's message of 7 Aug 92 01:03:01 GMT Message-ID: Here are a few things for you: Zador A, Koch C, Brown TH: Biophysical model of a Hebbian synapse. {\em Proc Nat Acad Sci USA} 1990, 87:6718-6722. Proposes a specific, experimentally justified model of the dynamics of LTP in hippocampal synapses. Brown TH, Zador AM, Mainen ZF and Claiborne BJ (1991) Hebbian modifications in hippocampal neurons, in ``Long-Term Potentiation: A Debate of Current Issues'' (eds M Baudry and JL Davis) MIT Press, Cambridge MA, 357-389. Summarizes the material in the previous paper, and explores the consequences of the facts of LTP for the representations formed within the hippocampus, using compartmental modeling techniques. Holmes WR, Levy WB: Insights into associative long-term potentiation from computational models of NMDA receptor-mediated calcium influx and intracellular calcium concentration changes. {\em J Neurophysiol} 1990, 63:1148-1168. Regards, -- Bill

From dario at cns.nyu.edu Mon Jun 5 16:42:55 2006 From: dario at cns.nyu.edu (Dario Ringach) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: <9208081613.AA02826@wotan.cns.nyu.edu> For a review see: AUTHOR Baudry, M. TITLE Long-term potentiation : a debate of current issues IMPRINT Cambridge, Mass. : MIT Press, c1991. Hope this helps...
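Returning to Ken Laws' note on Breiman's "back fitting" a few messages above: a rough sketch of the add-then-drop selection loop he describes is given below. The residual-sum-of-squares scoring, the improvement threshold, the round cap (a crude guard against the cycling he warns about), and the toy data are all assumptions made for illustration; this is neither Breiman's procedure nor Laws' code.

import numpy as np

def rss(X, y, cols):
    # Residual sum of squares of a least-squares fit on the chosen columns.
    if not cols:
        return float(y @ y)
    A = X[:, list(cols)]
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ coef
    return float(resid @ resid)

def stepwise(X, y, tol=0.05, max_rounds=50):
    # Forward step: add the single best remaining variable.
    # Backward step: drop any variable whose removal barely hurts the fit.
    # Repeat until nothing changes ("quiescence"); max_rounds guards cycles.
    chosen = []
    for _ in range(max_rounds):
        changed = False
        remaining = [j for j in range(X.shape[1]) if j not in chosen]
        if remaining:
            best = min(remaining, key=lambda j: rss(X, y, chosen + [j]))
            if rss(X, y, chosen + [best]) + tol < rss(X, y, chosen):
                chosen.append(best)
                changed = True
        for j in list(chosen):
            rest = [k for k in chosen if k != j]
            if rss(X, y, rest) <= rss(X, y, chosen) + tol:
                chosen.remove(j)
                changed = True
        if not changed:
            break
    return chosen

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.01 * rng.normal(size=100)
print(stepwise(X, y))   # typically picks columns [0, 2]

The same alternation of growing and pruning is the pattern Laws points to in ISODATA-style clustering and in his split/merge segmentation; the cap on rounds is one blunt way to avoid the long cycles he ran into.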
-- Dario Dario Ringach office: (212) 998-3941 Center for Neural Science home: (212) 727-3941 New York University e-mail: dario at cns.nyu.edu From koch at Iago.Caltech.Edu Mon Jun 5 16:42:55 2006 From: koch at Iago.Caltech.Edu (koch@Iago.Caltech.Edu) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: LTP Message-ID: <920809100145.20401657@Iago.Caltech.Edu> I wrote a recent overview article for Science which you might find of relevance to LTP. "Dendritic spines: convergence of theory and experiment", C. Koch, A. Zador and T. Brown, {\it Science} {\bf 256:} 973-974, Ciao, Christof From coby at shum.huji.ac.il Mon Jun 5 16:42:55 2006 From: coby at shum.huji.ac.il (Yaakov Metzger) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: LTP LTD Message-ID: Hi Regarding your question on the net about LTP and LTD, I guess you might be interested in my MSc thesis, published in: AUTHOR = {Y. Metzger and D. Lehmann}, TITLE = {Learning Temporal Sequences by Local Synaptic Changes}, JOURNAL = {Network}, VOLUME = {Vol 1}, PAGES = {169--188}, YEAR = 1990} We present there some considerations around the nature of LTP and LTD. I'd also like to know what aspects of LTP and LTD you are looking at, and what other answers you got. Please mail me your answer even if you post it because I dont scan the net too often. Coby From: granger at ics.uci.edu Status: RO I and my colleages at U.C. Irvine have done some computational modeling of LTP in the olfactory system and in hippocampus. The following article is based on computational analysis of network-level effects of LTP as it occurs in the olfactory cortex. The incremental strengthening of synapses, in combination with lateral inhibitory activity, led to a computational "clustering" effect; repetitive cyclic activity of the olfactory bulb-cortex system led to sequential hierarchical information emerging from the system: Ambros-Ingerson, J., Granger, R., and Lynch, G. (1990). Simulation of paleocortex performs hierarchical clustering. {\em Science}, 247: 1344-1348. [LTP in olfactory paleocortex was shown by Jung et al., Synapse, 6: 279 (1990) and by Kanter & Haberly, Brain Research, 525: 175 (1990).] The Science article led to specific behavioral and physiological predictions which were tested with positive results, reported in: Granger, R, Staubli, U, Powers, H, Otto, T, Ambros-Ingerson, J, & Lynch, G. (1991). Behavioral tests of a prediction from a cortical network simulation. {\em Psychological Science}, 2: 116-118. McCollum, J, Larson, J, Otto, T, Schottler, F, Granger, R, & Lynch, G. (1991). Short-latency single-unit processing in olfactory cortex. {\em J. Cognitive Neurosci.}, 3: 293-299. These and related results are summarized and reviewed in: Granger, R., and Lynch, G. (1991). Higher olfactory processes: Perceptual learning and memory. {\em Current Biology}, 1: 209-214. LTP varies in its effects in different anatomical regions, such as the distinct subddivisions of the hippocampal formation; possible effects of these different variants of LTP and interactions among the regions expressing them are explored in: Lynch, G. and Granger, R. (1992). Variations in synaptic plasticity and types of memory in cortico-hippocampal networks. {\em J.~Cognitive Neurosci.}, 4: 189-199. 
Larson & Lynch, Brain Research, 489: 49 (1989) showed that synapses due to different afferents all converging on a single target cell become differentially strengthened (potentiated) via synaptic long-term potentiation (LTP) as a function of the order in which the afferents are activated within a time window of about 70 msec. It might be expected that the latest arrivers would coincide with the maximal depolarization of the target cell and thus by a "Hebbian" argument would be most strengthened (potentiated), yet it is in fact the earliest arriving afferents that become most potentiated, and the later arrivers are least potentiated. This enables the cell to become selective to different sequential activation, i.e., to act as a form of sequence detector. This is described in a forthcoming paper accepted in P.N.A.S. (to appear): Granger, Whitson, Larson & Lynch (1992): Non-Hebbian properties of LTP enable high-capacity encoding of temporal sequences. {\em Proc Nat Acad Sci USA} 1992, (in press). Some of these results are briefly summarized in: Anton, P., Granger, R. and Lynch, G. (1992). Temporal information processing in synapses, cells and circuits. In: {\em Single neuron computation}, (T.McKenna, J.Davis and S.Zornetzer, Eds.), NY: Academic Press, pp. 291-313. Hope this is helpful; I'd be happy to provide more information and additional papers if you wish. -Richard Granger Bonney Center University of California Irvine, California 92717 From ted at sbcs.sunysb.edu Mon Jun 5 16:42:55 2006 From: ted at sbcs.sunysb.edu (ted@sbcs.sunysb.edu) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Models of LTP and LTD Message-ID: <9208110155.AA04523@sbstaff2> Take a look at "Biophysical model of a Hebbian synapse" by Zador et al. PNAS 87:6718-22 (Sept 1990). From ted at sbcs.sunysb.edu Mon Jun 5 16:42:55 2006 From: ted at sbcs.sunysb.edu (ted@sbcs.sunysb.edu) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Models of LTP and LTD Message-ID: <9208140324.AA05836@sbstaff2> Here's another, more recent. Brown TH et al.: Hebbian computations in hippocampal dendrites and spines. Chapter 4 in Single Neuron Computation (Academic Press, 1992) ed. McKenna T et al. pp.81-116. BTW, my new address is carnevale-ted at cs.yale.edu --Ted <<<<<<<<<<<<<<<< .  From nsekar at umaxc.weeg.uiowa.edu Mon Jun 5 16:42:55 2006 From: nsekar at umaxc.weeg.uiowa.edu (nangarvaram Sekar) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: <9208080011.AA09590@umaxc.weeg.uiowa.edu> Hi , Tony Zador's work is the only sort of work, which is supposed to model LTP. His paper "Biophysical model of the Hebbian Synapse" doesn't model LTP / LTD perse but it models calcium dynamics on the dendritic spines. Few months back, we were investigating the biophysical plausibility of an algorithm ALOPEX (an optimization algorithm) and we have implemented a rudimental neural circuitry implementing ALOPEX,through LTP and LTD using neuron. This doesn't model LTP to the gory details of calcium dynamics. We had simple hodgkin & huxley membranes , with passive dendrites, NMDA receptors. Our model of LTP and LTD was as follows: increasing the synaptic conductance of NMDA_receptors & Non-NMDA receptors when NMDA was activated and the post synaptic voltage was above a certain threshold. We have implemented it in NEURON. Let me know if you get any additional references. Also you could refer to the book " LTP - a debate of current issues" by davies. There are a couple of chapters on modelling, but none of them as you would like them to be. 
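To make the update rule sketched in the message above concrete, here is a toy version of a threshold-gated conductance change: potentiate both the NMDA and non-NMDA conductances when the NMDA channel is active and the postsynaptic voltage exceeds a threshold. The voltage threshold, step sizes, bounds, and the mild-depression branch are invented placeholders, and this is plain Python for illustration, not the original NEURON implementation.

# Toy potentiation/depression rule; all constants are assumed values.
V_THRESH = -40.0     # mV, postsynaptic voltage threshold (assumed)
LTP_STEP = 0.05      # fractional conductance increase per pairing
LTD_STEP = 0.01      # fractional decrease otherwise (placeholder for LTD)
G_MAX    = 2.0       # cap on the conductance scale

def update_conductances(g_nmda, g_non_nmda, nmda_active, v_post):
    # LTP branch: NMDA activation coincident with sufficient depolarization.
    if nmda_active and v_post > V_THRESH:
        g_nmda     = min(g_nmda * (1.0 + LTP_STEP), G_MAX)
        g_non_nmda = min(g_non_nmda * (1.0 + LTP_STEP), G_MAX)
    else:
        g_nmda     = max(g_nmda * (1.0 - LTD_STEP), 0.0)
        g_non_nmda = max(g_non_nmda * (1.0 - LTD_STEP), 0.0)
    return g_nmda, g_non_nmda

# Repeated pairing of NMDA activation with depolarization drives both
# conductances toward the cap.
g_n, g_a = 1.0, 1.0
for _ in range(20):
    g_n, g_a = update_conductances(g_n, g_a, nmda_active=True, v_post=-30.0)
print(g_n, g_a)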
<<<<<<<<  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Queue this request? y Or if you wish you can get a listing of the available files, by giving the remote filename as: princeton.edu:(D)/pub/harnad Because of traffic delays through the FT-RELAY, still another method can sometimes be recommended, which is to use the Princeton bitftp fileserver described below. Typically, one sends a mail message of the form: FTP princeton.edu UUENCODE USER anonymous LS /pub/harnad GET /pub/harnad/bbs.fischer QUIT (the line beginning LS is required only if you need a listing of available files) to email address BITFTP at EARN.PUCC or to BITFTP at EDU.PRINCETON, and receives the requested file in the form of one or more email messages. [Thanks to Brian Josephson (BDJ10 at UK.AC.CAM.PHX) for the above detailed UK/JANET instructions; similar special instructions for file retrieval from other networks or countries would be appreciated and will be included in updates of these instructions.] --- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From 4 Mon Jun 5 16:42:55 2006 From: 4 (4) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Really-From: Nigel.Gilbert at soc.surrey.ac.uk Date: Sat, 24 Oct 92 15:56:13 BST Call for papers and participation Simulating Societies '93 24-26 July 1993 Approaches to simulating social phenomena and social processes Although the value of simulating complex phenomena in order to come to a better understanding of their nature is well recognised, it is still rare for simulation to be used to understand social processes. This symposium is intended to present original research, review current ideas, compare alternative approaches and suggest directions for future work on the simulation of social processes. It follows the first symposium held in April 1992 at the University of Surrey, UK. It is expected that about a dozen papers will be presented to the symposium and that revised versions will be published as a book. We are now seeking proposals for papers and for participation. Contributions from a range of disciplines including sociology, anthropology, archaeology, ethology, artificial intelligence, and artificial life are very welcome. Papers on the following and related topics are invited: * Discussions of approaches to the simulation of social processes such as those based on distributed artificial intelligence, genetic algorithms and neural networks, non-linear systems, general purpose stochastic simulation systems etc. * Accounts of specific simulations of processes and phenomena, at macro or micro level. * Critical reviews of existing work that has involved the simulation of social processes. 
* Reviews of simulation work in archeology, economics, psychology, geography, demography, etc. with lessons for the simulation of social processes.
* Arguments for or against simulation as an approach to understanding complex social processes.
* Simulations of human, animal and 'possible' societies.
'Social process' may be interpreted widely to include, for example, the rise and fall of nation states, the behaviour of households, the evolution of animal societies, and social interaction. Registration, accommodation and subsistence expenses during the meeting will be met by the sponsors. Participants will need to find their own travel expenses. Proposals for papers are initially invited in the form of an abstract of no more than 300 words. Abstracts should be sent, along with a brief statement of research interests, to the address below by 15th March 1993. Authors of those selected will be invited to submit full papers by 1st June 1993. Those interested in participating, but not wishing to present a paper, should send a letter indicating the contribution they could make to the symposium, also by 15th March 1993. The organisers of the Symposium are Cristiano Castelfranchi (IP-CNR and University of Siena, Italy), Jim Doran (University of Essex, UK), Nigel Gilbert (University of Surrey, UK) and Domenico Parisi (IP-CNR, Roma, Italy). The symposium is sponsored by the University of Siena (Corso di laurea in Scienze della Comunicazione), the Consiglio Nazionale delle Ricerche (Istituto di Psicologia, Roma) and the University of Surrey. The meeting will be held at Certosa di Pontignano near Siena, Italy, a conference centre on the site of a 1400AD monastery. Proposals should be sent to: Prof Nigel Gilbert, Department of Sociology, University of Surrey, Guildford GU2 5XH, United Kingdom Tel: +44 (0)483 509173 Fax: +44 (0)483 306290 Email: gng at soc.surrey.ac.uk

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From jose at tractatus.siemens.com Mon Jun 5 16:42:55 2006 From: jose at tractatus.siemens.com (Steve Hanson) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NIPS*92 and CME travel Message-ID: [ Steve promises that this will be the last NIPS-92 announcement sent to CONNECTIONISTS. Sorry for the profusion of last-minute messages. -- Dave Touretzky, list maintainer ] NIPS*92 Goers: This is an announcement we will try to send out to you in the next week, but the date is so tight that I am sending it on the Net first. Please repost and send to your NIPS colleagues. Thanks. Steve Hanson NIPS*92 General Chair

CME Travel (big mountain picture in background) INVITATION TO THE ROCKIES: On behalf of the NIPS Conference Coordinators, CME and CME Travel would like to welcome you to the Vail Valley. Your organization has selected Colorado Mountain Express to assist with your travel needs while attending the NEURAL INFORMATION PROCESSING SYSTEMS WORKSHOP at the Radisson Resort in Vail, Colorado, December 2-5, 1992. In an effort to provide the most economical and professional service, special discounted airfare and ground transportation rates have been negotiated to fly you into Denver and transfer you on December 3 at 1:30pm from Marriott's City Center Hotel to the Radisson in Vail and return you back to Denver Stapleton Airport upon your requested departure. Colorado Mountain Express, located in the Vail Valley, has been serving the Vail and Beaver Creek Resort since 1983. Your special group code "NIPS" not only provides you access to SPECIAL AIRLINE FARES, negotiated on your behalf, but also makes available preferred ground transfer rates with Colorado Mountain Express or Hertz Car Rental. ***NIPS*** Special Group Code ******Preferred Airline Contracts****** ******Discounted Ground Transportation**** via Colorado Mountain Express or Hertz Car Rental 1-800-525-6363 RSVP by NOVEMBER 18, 1992 We look forward to coordinating your travel arrangements. Please contact a CME Travel Consultant at ext 6100 no later than Nov. 18th to secure your travel plans. Sincerely, Colorado Mountain Express & CME Travel Stephen J. Hanson Learning Systems Department SIEMENS Research 755 College Rd. East Princeton, NJ 08540

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: "The audio synthesizer is built around an integrated-circuit chip from Intel Corporation in Santa Clara, California. The chip, called the Intel 80170NX electrically trainable analog neural network (ETANN), simulates the function of nerve cells in a biological brain." Unlikely, in that we don't yet know how nerve cells in a biological brain function. Is it really necessary, many years (now) into neural net research, to continue to lean on the brain for moral support? Sorry to rhetorically beat a dead horse, but statements like this are annoying to those of us whose primary interest is to understand how the brain works. They also still occur far too frequently, especially in association with products. Jim Bower

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Job opening Message-ID: A non-text attachment was scrubbed... Name: not available Type: multipart Size: 2020 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/20060605/4b9b7794/attachment.ksh

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From David Mon Jun 5 16:42:55 2006 From: David (David) Date: December 1, 1992 Subject: Job Opening at UMass (Amherst) Message-ID: I seek your assistance in finding someone for a position at the University of Massachusetts, Amherst. I have been awarded a Research Scientist Development Award (RSDA) from the National Institute of Mental Health. The award allows me to devote full time to research for 5 years (beginning September 30, 1992). It also provides funds to hire someone to cover the courses I normally teach. The person occupying the replacement position is expected to engage in research that complements my own. My colleagues and I are therefore looking for someone to teach in the area of cognitive/experimental psychology (3 courses per year, typically one graduate and two undergraduate) and to do research related to my interests. Currently, I am working on a computational model of movement selection (primarily for reaching and related behaviors). My students and I are testing predictions of the model with normal adult human subjects, using an Optotrak recording system housed in our Department. Ideally, we would like to find someone with a strong background in cognitive or experimental psychology who is well versed in computational approaches to cognition and performance, especially, but not exclusively, in the domain of motor control.
If you know of such a person and think he or she might be interested in this opportunity, would you please bring it to his or her attention? A copy of the ad, which will be appearing soon in the APA Monitor and APS Observer, is attached. Our Psychology Department is an exciting place for someone with interests in the cognitive substrates of motor control. My colleague, Professor Rachel Clifton, also holds an RSDA; one of her areas of study is infant motor development. We have close ties to biomechanists in the Exercise Science Department, roboticists and connectionist modellers in the Computer Science Department, and neuroscientists in our own department and in Biology. The UMass Psychology Department has a strong faculty in cognitive and neuroscience generally. There are frequent interdisciplinary meetings involving the many people in the greater Amherst area who are concerned with basic and applied issues related to the control of action, and there are many other meetings as well pertaining to other areas of cognitive science. A word about the timing of the appointment is in order. Funds are available to hire someone immediately, although only on a temporary basis; that is, the replacement position cannot be filled permanently until September, following a full affirmative-action search. Anyone hired on a temporary basis will be expected to teach at least 1 and possibly 2 courses in the Spring semester (which begins in late January). Whether the person teaches 1 course or 2 depends on his or her abilities and desires, as well as departmental needs. The temporary appointment can begin earlier than January, as far as I know. In the best of all worlds, the person hired temporarily will then stay on for the full 4 years, but this is not guaranteed. I look forward to hearing from you or someone you might tell about this position. Please feel free to contact me at the above address or at any of the numbers below for further information. It is advisable to respond quickly to this call. Thank you for your kind attention. David A. Rosenbaum Professor of Psychology 413-545-4714 DAVID.ROSENBAUM at PSYCH.UMASS.EDU Here is the ad that will appear soon in the APA Monitor and the APS Observer: COGNITIVE PSYCHOLOGY: The Department of Psychology at the University of Massachusetts/Amherst anticipates an opening for a non-tenure track position at the Assistant or Associate Professor level, starting September 1993 and renewable through August 1997. Preference will be given for individuals with primary interests in the cognitive substrates of human motor control, perceptual-motor integration, or human performance, although candidates focusing on other topics will be considered. Send vita, statement of interest, representative papers, and at least three letters of recommendation to: Dr. David A. Rosenbaum, Search Committee, Department of Psychology, University of Massachusetts, Amherst, MA 01003. Review of applications will begin January 18 and continue until the position is filled. The University of Massachusetts is an Affirmative Action/Equal Opportunity Institution. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Kak> The observer plays a Kak> fundamental role in the measurement problem of quantum mechanics Kak> and several scientists have claimed that physics will remain Kak> incomplete unless consciousness is incorporated into it. The present thread will remain incomplete without reference to what Popper has to say on this matter. For a strong case against the mystification of physics, see Karl R. Popper Quantum Theory and the Schism in Physics from the Postscript to "The Logic of Scientific Discovery" Edited by W. W. Bartley Unwin Hyman: London, 1982 -Shimon p.s. Popper, contrary to what could be expected from his being a dualist about the so-called "mind-body problem", is actually very much a realist about other, more important issues in science. Consequently, one can ignore his "The Self and its Brain" (a bad influence of J. Eccles? :-), and still benefit from the earlier (and better) "The Logic of Scientific Discovery". From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Does backprop need the derivative ?? Message-ID: Hi, I have done some experiments on this and am working on "Robustness of BP to transfer function and derivative" ( tentative title ). I have found that the actual derivative need not be used. So long as the "derivative" equivalent ( whether constant or other functions) indicates the direction of increasing or decreasing value of the transfer function ( whether immediate or potential ) ie if the transfer function is increasing or will increase any positive value for the derivative would do. Hence for sigmoid function one may use a positive constant. For unit step function f(x) = 1 for x >= 0 = 0 for x < 0 we could use some high positive value at x=0 and nearby and some low positive value further away. Although the derivative is zero except at x=0, using zero would jam ( stop ) the whole backprop process since backproped error would be zero in all nodes ( eventually ). Hence using a low value in this case could be interpreted as an indication that if we move in the positive direction we may possibly increase the output of the node. Many variations of the derivative are possible. I have tried many and they work ( most of the time ). One problem with this is that if the output of the node is already "1" then increasing the input would not increase the output as our derivative suggest. What we need to do in this case is to check the backprop error's direction ( ie +ve or -ve ) and have two different values of our derivative depending on thedirection. Still working on it. Hope this helps. Please contact me for any comments / discussion. Regards, Tiong_Hwee Goh  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: derivative CANNOT be replaced with a constant. Below several people indicate that they had trouble when the derivative was replaced. However some people says that it can be replaced. I believe that it is problem dependent. I know of two small problems in which you cannot change the derivative. 
They are: - the XOR-problem (I suppose everyone is familiar with that one) and - the so-called sine-problem: Try to learn a 1-3-1 network the sine in the range from -pi to pi. The output neuron has a linear transfer function, the hidden neurons have a tanh transfer function. Backpropagation with individual update is used to train the network (batch update has problems). I used 37 training samples in the given range and stopped training when the total error (sum over all patterns of .5 times square target output minus actual output) was smaller than 0.001 Furthermore, I would like to say that communicating in such a way is very efficient and I would like to thank everyone for their responses (and upcoming responses). Heini Withagen Department of Electrical Engineering EH 9.29 Eindhoven Technical University P.O. Box 513 5600 MB Eindhoven The Netherlands Phone: 31-40472366 Fax: 31-40455674 E-mail: heiniw at eeb.ele.tue.nl ------------------------------------------------------------------------ David Bisant from Stanford Univ. wrote: Those interested in this problem might want to take a look at an obscure reference by Chen & Mars (Wash., DC IJCNN, Vol 1 pg 601, 1990). They essentially drop the derivative term altogether from the weight update equation for the output layer. They claim that it helps to avoid saturated units. A magazine article (AI Expert July, 1991) has empirically compared this method with Fahlman's and a few others on some toy problems (not a rigorous comparison, but still informative). Here are some other references where an attempt has been made to simplify the activation and/or differential function: Samad IJCNN 90 (Wash DC) & Honeywell Tech Report SSDC-89-14902-3 Rezgui IJCNN 90 (Wash DC) Tepedelenlioglu IEEE ICSE 89 ------------------------------------------------------------------------ Guido Bugmann from King's College London wrote: I have developped a model of formal neuron by using micro-circuits of pRAM neurons. In order to train the parameters of the pRAM's composing the formal neuron, I had to rewrite backpropagation for this case. At some stage, I have found that propagating back only the sign (+1 or -1) of the error was enough. But it turned out that this technique was restricted to cases where the weights had to converge toward their maximum or minimum value. For problems where intermediate weights were optimum, the more refined information of the size of the error for each example was required. (By "error" I mean the whole expression which is backpropagated). ------------------------------------------------------------------------ Scott E. Fahlman from Carnegie Mellon University wrote: Interesting. I just tried this on encoder problems and a couple of other simple things, and leapt to the conclusion that it was a general phenomenon. It seems plausible to me that any "derivative" function that preserves the sign of the error and doesn't have a "flat spot" (stable point of 0 derivative) would work OK, but I don't know of anyone who has made an extensive study of this. ------------------------------------------------------------------------ George Bolt from University of York, U.K. wrote: I've looked at BP learning in MLP's w.r.t. fault tolerance and found that the derivative of the transfer function is used to *stop* learning. Once a unit's weights for some particular input (to that unit rather than the network) are sufficiently developed for it to decide whether to output 0 or 1, then weight changes are approximately zero due to this derivative. 
I would imagine that by setting it to a constant, then a MLP will over- learn certain patterns and be unable to converge to a state of equilibrium, i.e. all patterns are matched to some degree. A better route would be to set the derivative function to a constant over a range [-r,+r], where f[r] - (sorry) f( |r| ) -> 1.0. To make individual units robust with respect to weights, make r=c.a where f( |a| ) -> 1.0 and c is a small constant multiplicative value. ------------------------------------------------------------------------ Joris van Dam from University of Amsterdam wrote: At the University of Amsterdam, we have a single layer feed forward network that computes the probabilities in one occupancy grid given the occupancy probabilities in another grid that is rotated and translated with respect to the former. It turns out that a rather complex activation function needs to be used, which also involves the computation of a complex derivative. (Note: it can be easily computed from the activation). It is clear that in this case the derivative cannot be omitted: LEARNING WOULD BE INCORRECT. The derivative has a clear interpretation in the context of occupancy grids and the learning procedure (with derivative !!!!!) can be related to Monte Carlo estimation procedures. Omission of the derivative can thus be proven to be incorrect and experiments have underlined this theory. In my opinion the omission of the derivative is mathematically incorrect, but can be useful in some applications and may even speed up learning (some derivatives have, like Scott Fahlmann said, zero spots). However, it seems that esp. with complex networks and activation functions, the derivative needs to be used indeed. ------------------------------------------------------------------------ Janvier Movellan wrote: My experience with Boltzmann machines and GRAIN/diffusion networks (the continuous stochastic version of the Boltzmann machine) has been that replacing the real gradient by its sign times a constant accelerates learning DRAMATICALLY. I first saw this technique in one of the original CMU tech reports on the Boltzmann machine. I believe Peterson and Hartman and Peterson and Anderson also used this technique, which they called "Manhattan updating", with the deterministic Mean Field learning algorithm. I believe they had an article in "Complex Systems" comparing Backprop and Mean-Field with both with standard gradient descent and with Manhattan updating. It is my understanding that the Mean-Field/Boltzmann chip developed at Bellcore uses "Manhattan Updating" as its default training method. Josh Allspector is the person to contact about this. At this point I've tried 4 different learning algorithms with continuous and discrete stochastic networks and in all cases Manhattan Updating worked better than straight gradient descent.The question is why Manhattan updating works so well (at least in stochastic and Mean-Field networks) ? One possible interpreation is that Manhattan updating limits the influence of outliers and thus it performs something similar to robust regression. Another interpretation is that Manhattan updating avoids the saturation regions, where the error space becomes almost flat in some dimensions, slowing down learning. One of the disadvantages of Manhattan updating is that sometimes one needs to reduce the weight change constant at the end of learning. But sometimes we also do this in standard gradient descent anyway. ------------------------------------------------------------------------ David G. 
Stork from Ricoh California Research Center wrote: In an in-depth study of a particular hardware implementation of backprop, we investigated the need for the derivative in the learning rule. We found thatit was often essential to have such a derivative. For instance, the XOR problemcould not be so solved. (Incidentally, this analysis led to a patent: "A method employing logical gates for calculating activation function derivatives on stochastically-encoded signals" granted to myself and Ron Keesing, US Patent # 5,157,275.) Without the derivative, one is not guaranteed that you're doing gradient descent in error space. ------------------------------------------------------------------------ Randy Shimabukuro wrote: I am not familiar with Fahlman's paper, but I have looked at approximating the derivative of the transfer function with a step function approximation. I also looked at other approximations which we made to simplify the implementation of back propagation in an integrated circuit. The results were writen up in the following reference. Shimabukuro, Randy L., Shoemaker, Patrick A., Guest, Clark C., & Carlin, Michael J.(1991) Effect of Circuit Parameters on Convergence of Trinary Update Back-Propagation. Proceedings of the 1990 Connectionist Models Summer School, Touretzky, D.S., Elman, J.L., Sejnowski, T.J., and Hinton, G.E., Eds., pp. 152-158. Morgan Kaufmann, San Mateo, CA. ------------------------------------------------------------------------ Marwan Jabri from Sydney University wrote: It is likely as Scott Fahlman suggested any derivative that "preserves" the error sign may do the job. The question however is the implication in terms of convergence speed, and the comparison thereof with perturbation type training methods. ------------------------------------------------------------------------ Radford Neal responded to Marwan Jabri's writing with: One would expect this to work only for BATCH training. On-line training approximates the batch result only if the net result of updating the weights on many training cases mimics the summing of derivatives in the batch scheme. This will not be the case if a training case where the derivative is +0.00001 counts as much as one where it is +10000. This is not to say it might not work in some cases. There's just no reason to think that it will work generally. ------------------------------------------------------------------------ Jonathan Cohen wrote: You might take a look at a paper by Nestor Schmayuk in Psychological Review 1992. The paper is about the role of the hippocampus which, in a word, he argues implements biologically plausible backprop. The algorithm uses a hidden unit's activation rather than its derivative for computing the error. He doesn't give too broad a range of training examples, but you might contact him to find out what else he has tried. Hope this information is helpful. ------------------------------------------------------------------------ Jay McClelland wrote: Some work has been done using the activation rather than the derivative of the activation by Nestor Schmajuk. He is interested in biologically plausible models and tends to keep hidden units in the bottom half of the sigmoid. In that case they can be approximated by exponentials and so the derivative can be approximated by the activation. 
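Since many of the replies above refer to the same basic experiment, a small common reference point may help: batch backprop on XOR in which the term that normally holds the sigmoid derivative can be swapped for a constant, plus a flag for the sign-only ("Manhattan") weight update discussed above. The 2-2-1 architecture, learning rate, epoch count, and constant value are arbitrary choices for illustration; this is nobody's original code, and whether a given run converges still depends on the random starting weights.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_xor(deriv="exact", manhattan=False, epochs=10000, lr=0.5, seed=0):
    # 2-2-1 network, batch updates, squared error.
    rng = np.random.default_rng(seed)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
    T = np.array([[0], [1], [1], [0]], float)
    W1 = rng.uniform(-1, 1, (2, 3))          # hidden weights (+ bias column)
    W2 = rng.uniform(-1, 1, (1, 3))          # output weights (+ bias column)
    Xb = np.hstack([X, np.ones((4, 1))])
    for _ in range(epochs):
        H = sigmoid(Xb @ W1.T)
        Hb = np.hstack([H, np.ones((4, 1))])
        Y = sigmoid(Hb @ W2.T)
        err = T - Y
        # The contested term: true derivative f'(net) or a flat stand-in.
        d_out = Y * (1 - Y) if deriv == "exact" else 0.25
        d_hid = H * (1 - H) if deriv == "exact" else 0.25
        delta_out = err * d_out
        delta_hid = (delta_out @ W2[:, :2]) * d_hid
        g2 = delta_out.T @ Hb                # batch gradient for W2
        g1 = delta_hid.T @ Xb                # batch gradient for W1
        if manhattan:                        # sign-only update; a small lr is advisable
            W2 += lr * np.sign(g2)
            W1 += lr * np.sign(g1)
        else:
            W2 += lr * g2
            W1 += lr * g1
    return float(np.mean((Y > 0.5) == (T > 0.5)))

print(train_xor("exact"), train_xor("constant"))
# The true derivative usually reaches 1.0 here; the constant stand-in
# sometimes works and sometimes stalls, echoing the mixed reports above.

Fahlman's remark above suggests why the constant can work at all: any stand-in that preserves the sign of the error and has no flat spot still moves the weights in a broadly descent-like direction.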
------------------------------------------------------------------------ John Kolen wrote: The quick answer to your question is no, you don't need "the derivative" you can use anything with the general qualitative shape of the derivate. I have some empirical results of training feedforward networks with different learning "functions", i.e different squashing derivatives, combination operators, etc. ------------------------------------------------------------------------ Gary Cottrell wrote: I happen to know it doesn't work for a more complicated encoder problem: Image compression. When Paul Munro & I were first doing image compression back in 86, the error would go down and then back up! Rumelhart said: "there's a bug in your code" and indeed there was: we left out the derivative on the hidden units. ------------------------------------------------------------------------  From pollack at cis.ohio-state.edu Mon Jun 5 16:42:55 2006 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Delays in Neuroprose Message-ID: <9302281147.AA09137@dendrite.cis.ohio-state.edu> ** DO NOT FORWARD TO OTHER GROUPS** To anyone submitting files, please expect short delays in processing of neuroprose files for a couple of weeks. Jordan Pollack Proud father of Dylan Seth Pollack CIS Dept/OSU Born 2/23/93, 7lbs 6oz 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Phone: (614)292-4890 (then * to fax) ** DO NOT FORWARD TO OTHER GROUPS **  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: networks, it is easy to see that an "instantaneous" multi-layer network combined with delays/integrators in the feedback loop can approximate arbitrary discrete/continuous-time dynamical systems. A question of interest is whether it can be done when all the units have intrinsic delays/integrators. The answer is yes, if we use a distributed representation of the state space. (6 pages) ----It is a simple problem someone might have already solved. I appreciate any reference to previous works. **************************************************************** Bifurcations of Recurrent Neural Networks in Gradient Descent Learning Kenji Doya, UCSD Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as ``bifurcation points''. At bifurcation points, the output of a network can change discontinuously with the change of parameters and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations. (11 pages) ----It is (to be) an extended version of "doya.bifurcation.ps.Z". **************************************************************** Dimension Reduction of Biological Neuron Models by Artificial Neural Networks Kenji Doya and Allen I. Selverston, UCSD An artificial neural network approach for dimension reduction of dynamical systems is proposed and applied to conductance-based neuron models. 
Networks with bottleneck layers of continuous-time dynamical units could make a 2-dimensional model from the trajectories of the Hodgkin-Huxley model and a 3-dimensional model from the trajectories of a 6-dimensional bursting neuron model. Nullcline analysis of these reduced models revealed the bifurcations of the neuronal dynamics underlying firing and bursting behaviors. (17 pages) **************************************************************** FTP INSTRUCTIONS
unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
either
ftp> get doya.universality.ps.Z
ftp> get doya.bifurcation2.ps.Z
ftp> get doya.dimension.ps.Z
or
ftp> mget doya.*
rehtie
ftp> bye
unix% zcat doya.universality.ps.Z | lpr
unix% zcat doya.bifurcation2.ps.Z | lpr
unix% zcat doya.dimension.ps.Z | lpr
These files are also available for anonymous ftp from crayfish.ucsd.edu (132.239.70.10), directory "pub/doya".  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: faster if you don't start too close to the origin. That's why I normally use the range [-1,1] for weight initialization. Again, I never ran any extensive tests on that. The input logical values are symmetrical for the same reason that the sigmoid should be symmetrical - avoid the DC component. On the other hand, it is well known that one should not choose the saturation levels of the sigmoid as target logical values, otherwise the weights will tend to grow to infinity. That's why I chose +-.9. The only parameter that I played with, in this case, was the learning rate. I made a few preliminary runs with different values of this parameter, and the value of 1 looked good. Note, however, that these were really just a few runs, not any extensive optimization.

Since the previous informal results generated some discussion, I decided to be a bit more formal, and I report here the results of 51 runs using the framework indicated above, and different seeds for the random number generator. What I give below is the histogram of the number of epochs for convergence. The first figure is the number of epochs, the second one is the number of runs that converged in that number of epochs.

7 - 3, 8 - 1, 9 - 3, 10 - 3, 11 - 2, 12 - 2, 13 - 5, 17 - 5, 18 - 1, 19 - 1, 21 - 2, 22 - 2, 27 - 1, 28 - 1, 36 - 1, 46 - 1, 48 - 1, 50 - 1, 51 - 1, 56 - 1, 72 - 1, >2000 - 12

The ">2000" are the "local minima" (see below). As you can see, the median of this distribution is 19 epochs. Some colleagues around here have been running tests, with results consistent with these. One of them (Jose Amaral) has been studying algorithm convergence speeds, and therefore has software specially designed for this kind of test. He also has similar results for this situation (in fact a median of 19, too). But he also came up with a very surprising result: if you use "tanh(s/2)" as sigmoid, with a step size of .7, the median of the number of epochs is only 4 (!) [I've put the exclamation between parentheses, so that people don't think it is the factorial of 4]. We plan to make available, in a few days, a postscript version of one or two graphs, with a summary of his results for a few different cases. 
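For readers who want to repeat the experiment, here is a minimal sketch of the setup as described above: symmetric (+-1) inputs, targets of +-.9, weights drawn uniformly from [-1,1], and a learning rate of 1. The 2-2-1 architecture, the tanh squashing function, per-pattern (on-line) updates, and the convergence test (correct output sign on all four patterns within 2000 epochs) are my own assumptions where the message does not spell them out, so the epoch counts will not necessarily match the histogram.

# Hedged sketch of the XOR experiment described above.  Architecture, sigmoid,
# update mode and convergence criterion are assumptions, not details from the
# message; the init range, targets and learning rate follow the message.
import numpy as np

def epochs_to_converge(seed, max_epochs=2000, lr=1.0):
    rng = np.random.default_rng(seed)
    X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    T = np.array([-0.9, 0.9, 0.9, -0.9])
    W1 = rng.uniform(-1, 1, (2, 2)); b1 = rng.uniform(-1, 1, 2)
    W2 = rng.uniform(-1, 1, 2);      b2 = rng.uniform(-1, 1)
    for epoch in range(1, max_epochs + 1):
        for x, t in zip(X, T):                   # on-line (per-pattern) updates
            h = np.tanh(W1.T @ x + b1)
            y = np.tanh(W2 @ h + b2)
            dy = (y - t) * (1 - y**2)            # output delta, tanh derivative
            dh = dy * W2 * (1 - h**2)            # hidden deltas
            W2 -= lr * dy * h; b2 -= lr * dy
            W1 -= lr * np.outer(x, dh); b1 -= lr * dh
        out = np.tanh(np.tanh(X @ W1 + b1) @ W2 + b2)
        if np.all(np.sign(out) == np.sign(T)):   # assumed convergence criterion
            return epoch
    return None                                  # plays the role of ">2000" runs

counts = [epochs_to_converge(s) for s in range(51)]
print(sorted(c for c in counts if c is not None))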
A few words about "local minima": I used this expression somewhat informally, as we normally do, meaning that after a large number of epochs (say, 2000) the network has not yet learned the correct outputs for all training patterns, and the cost function is decreasing very slowly, so it appears to be converging to a local minimum. I must say, however, that some years ago I once took one of these "local minima" of the XOR, and allowed it to continue training for a long time. After some 180000 epochs, the net actually learned all 4 patterns correctly. I tried this with one of the "local minima" here, and the same thing happened again (after I reduced the step size to .5, and then to .2). I don't know how many epochs it took: when I left to teach a class, it was above 1 000 000 epochs, with wrong output in one of the patterns. I left it running and when I came back it was at 5 360 000 epochs, and had already learned all 4 patterns. Finally, I am sorry that I cannot publish the simulator code itself. We sell this simulator (we don't make much money with it, but anyway), so I can't make it public. And besides, now that I have told you all my tricks, leave me at least with my little simulator, so that I can earn my living by selling it to those that didn't read this e-mail :) Happy training, Luis B. Almeida INESC Phone: +351-1-544607, +351-1-3100246 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt lba at inesc.uucp (if you have access to uucp)  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From POULSON at freud.sbs.utah.edu Mon Jun 5 16:42:55 2006 From: POULSON at freud.sbs.utah.edu (KIMBERLY POULSON) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: MEMORANDUM TO: Faculty, Graduate Students, Auxiliary Faculty and other interested parties FROM: William Johnston/Charlie Shimp TOPIC: William F. Prokasy Lecture This year's William F. Prokasy Lecturer will be Dr. Irving Biederman. Dr. Biederman will speak on Tuesday, May 11, at 5:00 p.m. in BEH SCI 110. The title is "Shape Recognition in Mind and Brain." Dr. Irving Biederman is the William M. Keck Professor of Cognitive Neuroscience at the University of Southern California, where he is a member of the Departments of Psychology, Computer Science, and Neuroscience and Head of the Cognitive and Behavioral Neuroscience Program. Professor Biederman has proposed a theory of real-time human object recognition that posits that objects and scenes are represented as an arrangement of simple volumetric primitives, termed geons. This theory has undergone extensive assessment in psychophysical experiments. Recently, he has employed neural network models to provide a more biologically based version of the geon-assemblage theory which is currently undergoing tests through single unit recording experiments in monkeys and the study of the impairment of object recognition in patients with a variety of neurological symptoms. Prior to his recent appointment at USC, Dr. Biederman was the Fesler-Lampert Professor of Artificial Intelligence and Cognitive Science at the University of Minnesota. He has been a member of panels for the National Science Foundation, National Research Council, and the Air Force Office of Scientific Research, where he served as the first Program Manager (consulting) for the Cognitive Science Program. Please put these dates on your calendars. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From sgoss at ulb.ac.be Mon Jun 5 16:42:55 2006 From: sgoss at ulb.ac.be (Goss Simon) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: no subject (file transmission) Message-ID: Subject European Conference on Artificial Life (Brussels) Date: Tue, 4 May 93 11:15:07 MET Dear ECAL Participant, (This is an e-mail copy of a letter we're sending by post, and therefore does not include maps). Please find enclosed the programme for ECAL '93, as well as maps and instructions on how to get to the hotels and the conference site. You are all invited to an informal "Welcome to ECAL" drink/check-in session, at the Falstaff cafe, Sunday May 23rd, from 8-10 in the evening. The Falstaff is near La Bourse, at the centre of Bruxelles (see enclosed instructions). If all goes as planned, those of you who reserved their hotel room through us should find a copy of the informal proceedings either waiting for them in their rooms or at the reception desk. Those of you who have made separate arrangements will receive their proceedings either at the Falstaff or at the conference check-in on Monday morning. We hope all goes well with your travel arrangements, and look forward to seeing you at the Falstaff. Yours sincerely Simon Goss (for the organising committee) ________________________________________________________ Program ECAL MONDAY May 24th 09.00 - Inauguration 09.15 - F. Varela : "Organism : a meshwork of selfless selves" 09.45 - G. Nicolis : "Non linear physics and the evolution of complex systems" 10.15 - Coffee Aspects of Autonomous Behaviour 10.30 - M. Tilden "Robot jurassic park : primitives to predators" 11.10 - S. Nolfi, D. Parisi "Auto-teaching : networks that develop their own teaching inputs" 11.40 - B. Webb "Modelling biological behaviour or 'dumb animals and stupid robots'" 12.10 - Lunch Patterns & Rhythms 10.40 - E. Presnov, Z. Agur "Origin and breakdown of synchrony of the cell cycles in early development" 11.10 - A. Hjemfelt, F.W. Schneider, J. Ross "Parallel computation in coupled chemical kinetic systems" 11.40 - P. de Kepper, P. Rudovics, J.J. Perraud, E. Dulos "Experiments on Turing structures" 12.20 - M. Braune, H. Engel "Light-sensitive Belousov-Zhabotinsky reaction - a suitable tool for studies of nonlinear wave dynamics in active media" 12.50 - V.S. Zykov, S.C. Muller "Boundary layer kinematical model of autowave patterns in a two-component reaction-diffusion system" 13.20 - Lunch Aspects of Autonomous Behaviour continued 14.20 - H. Hendriks-Jansen "Natural kinds, autonomous robots and history of use" 14.50 - R. Pfeifer "Studying emotions : fungus eaters" 15.20 - J.C. Rutkowska "Ontogenetic constraints on scaling-up sensory-motor systems" 15.50 - G. Deffuant, E. Monneret "Morphodynamic networks : the example of adaptive fibres" 16.20 - Coffee Patterns and Rhythms continued 14.20 - J. Kosek, P. Pinkas, M. Marek "Spatiotemporal patterns in cyclic arrays with and without time delay" 14.50 - J.J. Perraud "The early years of excitable media from electro-physiology to physical chemistry" 15.20 - D. Thieffry, R. Thomas "Logical synthesis of regulatory models" 15.50 - E. Av-Ron, H. Parnas, L.A. 
Segel "Modelling bursting neurons of the Lobster cardiac network" 16.20 - Coffee Evolutionary Mechanisms 16.50 - T. Ray "Evolution and ecology of digital organisms" 17.30 - R. Hightower, S. Forrest, A.S. Perelson "The evolution of secondary organization in immune system gene libraries" 18.00 - R. Davidge "Looping as a means to survival : playing Russian Roulette in a harsh environment" Patterns and Rhythms Continued 17.00 - G.R. Welch "The computational machinery of the living cell" 17.30 - S. Douady, Y. Couder A physical investigation of the iterative process of botanical growth" 18.00 - V. Gundlach, L. Demetrius "Mutation and selection in non linear growth processes" 18.30 - Beer and Sandwiches 19.30 - C. Langton Title to be communicated 20.10 - D. Lestel, L. Bec, J.-L. Lemoigne "Visible characteristics of living systems: esthetics and artificial life" 20.40 - Discussion on philosophical issues. 22.00 - Close --------------------------------------------------------- TUESDAY May 25th Origins of life & molecular evolution 09.00 - P. Schuster "Sequences and shapes of biopolymers" 09.40 - P.L. Luisi, P.A. Vonmont-Bachmann, M. Fresta "Chemical autopoiesis : Self-replicating micelles and vesicles" 10.10 - C. Biebricher "Requirements for template activity of RNA in RNA replication" 10.50 - Coffee 11.10 - M.A. Huynen, P. Hogeweg "Evolutionary dynamics and the relation between RNA structure and RNA landscapes" 11.40 - W. Fontana "Constructive dynamical systems" 12.20 - Lunch Dynamics of Human Societies 09.00 - B. Huberman, N.S. Glance "Social dilemnas and fluid organizations" 09.40 - T.A. Brown "Political life on a lattice" 10.10 - A. Meier-Koll, E. Bohl "Time-structure analysis in a village community of Columbian Indians" 10.40 - Coffee Multi-Robot Systems 11.10 - R. Beckers, J.L. Deneubourg, S. Goss, R. King "Self-organised groups of interacting Robots" 11.40 - C. Numaoka "Collective alteration of strategic type" 12.10 - S. Rasmussen "Engineering based on self-organisation" 12.40 - T. Shibata, T. Fukuda "Coordinative balancing in evolutionary multi-agent-robot System using genetic algorithm" 13.10 - Lunch 14.00 - 18.00 Poster & Demonstration Session (Robots, Videos, Chemical reactions, ...) 14.10 - A. Collie "A tele-operated robot with local autonomy" 14.40 - F. Hess "Moving sound creatures" 16.00 - Coffee 18.00 - Talk by Professor I. Prigogine 18.45 - Cocktail 20.00 - Banquet ________________________________________________________________________________ WEDNESDAY May 26th Collective Intelligence 09.00 - N.R. Franks "Limited rationality in the organization of societies of ants, robots and men" 09.40 - B. Corbara, A. Drogoul, D. Fresneau, S. Lalande "Simulating the sociogenesis process in ant colonies with manta" 10.10 - J.L. Deneubourg "In search of simplicity" 10.40 - Coffee 11.10 - L. Edelstein Keshet "Trail following as an adaptable mechanism for population behaviour" 11.40 - I. Chase "Hierarchies in animal and human societies" 12.10 - H. Gutowitz "Complexity-seeking ants" 12.40 - Lunch Sensory and Motor Activity 09.00 - H. Cruse, G. Cymbalyuk, J. Dean "A walking machine using coordinating mechanisms of three different animals : stick insect, cray fish and cat" 09.40 - D.E. Brunn, J. Dean, J. Schmitz "Simple rules governing leg placement by the stick insect during walking" 10.10 - D. Cliff, P. Husbands, I. Harvey "Analysis of evolved sensory-motor controllers" 10.40 - Coffee Ecosystems & Evolution 11.00 - M. Nowak "Evolutionary and spatial dynamics of the prisoner's dilemma" 11.40 - K. 
Lindgren, M.G. Nordahl "Evolutionary dynamics of spatial games" 12.10 - P.M. Todd "Artificial death" 12.40 - G. Weisbuch, G. Duchateau "Emergence of mutualism : application of a differential model to endosymbiosis" 13.10 - Lunch Collective Intelligence continued 14.20 - M.J. Mataric, M.J. Marjanovic "Synthesizing complex behaviors by composing simple primitives" 14.50 - O. Miramontes, R.V. Sole, B.C. Goodwin "Antichaos in ants : the excitability metaphor at two hierarchical levels" 15.20 - S. Camazine "Collective intelligence in insect societies by means of self-organization" 16.00 - Coffee 16.30 - S. Focardi "Dynamics of mammal groups" 17.00 - A. Stevens Modelling and simulations of the gliding and aggregation of myxobacteria" 17.30 - O. Steinbock, F. Siegert, C.J. Weijer, S.C. Muller "Rotating cell motion and wave propagation during the developmental cycle of dictyostelium" Ecosystems and Evolution continued 14.20 - S. Kauffman Title to be communicated 14.50 - M. Bedau, A. Bahm "The Evolution of diversity" Theoretical Immunology 15.20 - J. Stewart "The immune system : emergent self-assertion in an autonomous network" 15.50 - Coffee 16.20 - J. Urbain "The dynamics of the immune response" 17.00 - Behn, K. Lippert, C. Muller, L. van Hemmen, B. Sulzer. "Memory in the immune system : synergy of different strategies" 18.00 - Closing Remarks 20.00 - Epistemological Conference (in french, open to the public, room 2215, Campus Solbosch) "La Vie Artificielle : une Vie en dehors des Vivants. Utopie ou Realite?" Intervenants : G. Nicolis, F. Varela, I. Stengers, J. De Rosnay. 22.00 - Close __________________________________________________________________________ Poster Session (Tuesday May 25th, 14.00 - 18.00) Patterns & Rhythms M. Colding-Jorgensen "Chaotic signal processing in nerve cells" M. Dumont, G. Cheron, E. Godaux "Non-linear forecasting of cats eye movement time series" M. Gomez-Gesteira, A. P. Munuzuri, V. P. Munuzuri, V. Perez-Villar "Vortex drift induced by an electric field in excitable media" I. Grabec "Self-organization of formal neurons described by the second maximum entropy principle" A. Hunding "Simulation of 3 dimensional turing patterns related to early biological morphogenesis" Lj. Kolar-Anic, Dj. Misljenovic, S. Anic "Mechanism of the Bray-Liebhafsky reaction : effect of the reduction of iodate ion by hydrogen peroxide" V. Krinsky, K. Aglaze, L. Budriene, G. Ivanitsky, V. Shakhbazyan, M. Tsyganov "Wave mechanisms of pattern formation in microbial populations" J. Luck, H.B. Luck "Can phyllotaxis be controlled by a cellular program ?" E.D. Lumer, B.A. Huberman "Binding hierarchies : a basis for dynamic perceptual grouping" O.C. Martin, J.C. Letelier "Hebbian neural networks which topographically self-organize" J. Maselko "Multiplicity of stationary patterns in an array of chemical oscillators" A.S. Mikhailov, D. Meinkohn "Self-motion in physico-chemical systems far from thermal equilibrium" A.F. Munster, D. Snita, P. Hasal, M. Marek "Spatial and spatiotemporal patterns in the ionic brusselator" P. Pelce "Geometrical dynamics for morphogenesis" A. Wuensche "Memory far from equilibrium" Origins of Life & Molecular Evolution Y. Almirantis, S. Papageorgiou "Long or short range correlations in DNA sequences ?" R. Costalat, J.-P. Morillon, J. Burger "Effect of self-association on the stability of metabolic units" J. Putnam "A primordial soup environment" Epistemological Issues E.W. Bonabeau "On the appeals and dangers of synthetic reductionism" V. 
Calenbuhr "Intelligence under the viewpoint of the concepts of complementary and autopoiesis" D. Mange "Wetware as a bridge between computer engineering and biology" B. Mc Mullin "What is a universal constructor ?" Aspects of Autonomous Behaviour D. Cliff, S. Bullock "Adding 'foveal vision' to Wilson's animat" O. Holland, M. Snaith "Generalization, world modelling, and optimal choice : improving reinforcement learning in real robots" J.J. Merelo, A. Moreno, A. Etxeberria "Artificial organisms with adaptive sensors" F. Mondada, P.F. Verschure "Modelling system-environment interaction : the complementary roles of simulation and real world artifacts" C. Thornton "Statistical factors in behaviour learning" B. Yamauchi, R. Beer "Escaping static and cyclic behaviour in autonomous agents" N. Magome, Y. Yonezawa, K. Yoshikawa "Self-excitable molecula-assembly towards the development of neuro-computer, intelligent sensor and mechanochemical transducer" Evolutionary Mechanisms H. de Garis "Evolving a replicator" G. Kampis "Coevolution in the computer : the necessity and use of distributed code systems" Dynamics of Human Societies S. Margarita, A. Beltratti "Co-evolution of trading strategies in an on-screen stock market" Multi-Robot Systems A. Ali Cherif "Collective behaviour for a micro-colony of robots" S. Goss, J.-L. Deneubourg, R. Beckers, J.-L. Henrotte "Recipes of collective movement" T. Ueyama, T. Fukuda "Structure organization of cellular robot based on genetic information" Collective Intelligence G. De Schutter, E. Nuyts "Birds use self-organized social behaviours to regulate their daily dispersal over wide areas : evidences from gull roosts" M.M. Millonas "Swarm field dynamics and functional morphogenesis" Z. Penzes, I. Karsai "Round shape combs produced by stigmergic scripts in social wasp" P.-Y. Quenette "Collective vigilance as an example of self-organisation : a precise study on the wild boar (Sus scrofa)" R.W. Schmieder "A knowledge tracking algorithm for generating collective behaviour in indivual-based populations" T.R. Stickland, C.M.N. Tofts, N.R. Franks "Algorithms & collective decisions in ants : information exchange, numbers of individuals and search limits" Sensory-Motor Activity S. Giszter "Modelling spinal organization of motor behaviors in the frog" P. Grandguillaume "A new model of visual processing based on space-time coupling in the retina" S. Mikami, H. Tano, Y. Kakazu "An autonomous legged robot that learns to walk through simulated evolution" U. Nehmzov, B. McGonigle "Robot navigation by light" Ecosystems & Evolution G. Baier, J.S. Thomsen, E. Mosekilde "Chaotic hierarchy in a model of competing populations" D. Floreano "Patterns of interactions in shared environments" A. Garliauskas "Theoretical and practical investigations of lake biopopulations" T. Ikegami "Ecology of evolutionary game strategies" T. Kim, K. Stuber "Patterns of cluster formation and evolutionary activity in evolving L-systems" C.C. Maley "The effect of dispersal on the evolution of artificial parasites" B.V. Williams, D.G. Bounds "Learning and evolution in populations of backprop networks" Theoretical Immunology N. Cerf "Fixed point stability in idiotypic immune networks" E. Muraille, M. Kauffman "The role of antigen presentation for the TH1 versus TH2 immune response" E. Vargas-Madrazo, J.C. Almagro, F. Lara-Ochoa, M.A. 
Jimenez-Montano "Amino acids patterns in the recognition site of immunoglobulins" ___________________________________________________________________ How to get from the airport to your hotel 1. Taxi: this is by far the simplest and quickest, but can cost 800-1200 BF ($25-$40). 2. Train link: there is a train station underneath the airport terminal (Zaventem) which takes you to Brussels 3 times per hour (xx.09, xx.24 and xx.46, trip takes 20 min), and costs 80BF ($2). There are 3 stations at Brussels, Nord, Central and Midi. Bruxelles Nord is nearest Hotels President WTC, Palace, and Vendome. Bruxelles Central is nearest Hotels Atlas, Arcade Ste. Catherine, Opera, Orion and Sabina. Bruxelles Midi is nearest Hotel de Paris. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: have, or on other factors, you might care to walk, take a taxi (count 200-300BF), or take the metro. At Bruxelles Nord you might be better advised to take a taxi to the hotel or a tram (platform in the station) to Place Rogier, rather than walk, as this is not the most attractive or safest part of Brussels. If you wish to take the metro (see comments on how to get from the hotel to the conference about buying metro tickets), then: For Hotel President WTC, the hotel is close to the Gare du Nord. On your map, where the street names are marked, it's just off square E1, to the North, on Boulevard Emile Jacqmain (no 180), marked in green and yellow. You can see the World Trade Centre marked in black on square E1. The hotel is not quite opposite, being a bit off the map to the North, further along the Boulevard E. Jacqmain. (To get to the centre, they organise a shuttle in the evenings. It's 15-20 minutes walk.) For Hotels Palace and Vendome, get off at Bruxelles Nord and take the tram (Pre-Metro, in the station) to Place Rogier (just 1 stop). For Hotels President WTC, Palace and Vendome, get off at Bruxelles Nord and take the tram (Pre-Metro, in the station) to Place Rogier (just 1 stop). For Hotels Atlas, Arcade Ste. Catherine, and Orion, get off at the Gare Centrale, follow the signs to the Metro (line 1, station Gare Centrale), and take the metro to station Ste. Catherine (line 1a and 1b, direction Heysel or Bizet, both are fine, just 2 stops). For Hotel Opera get off at the Gare Centrale, follow the signs to the Metro (line 1, station Gare Centrale), and take the metro to station De Brouckere (line 1a and 1b, direction Heysel or Bizet, both are fine, just 1 stop). For Hotel Sabina get off at the Gare Centrale, follow the signs to the Metro (line 1, station Gare Centrale), and take the metro to station Arts-Loi (line 1a and 1b, direction Hermann-Debroux or Stockel, both are fine, just 2 stops). There change to line 2, direction Simonis, and get off att Place Madou (the next stop). Hotel de Paris is just near the railway station Bruxelles-Midi, so its not worth taking a tram. The enclosed colour tourist map has nearly each hotel marked with a black number. How to get to the Falstaff Cafe for the Sunday night reception See enclosed map. The Falstaff (rue H. Maus, 17) is one of Brussels best known cafes (always full), art-deco style, and is right next to La Bourse (the Stock Exchange), which is the centre point of Bruxelles. You can't miss it! We have reserved a room there, which will be signposted. How to get from your hotel to the conference By Metro: By far the easiest way is to take the Metro. 
Buy a 10 trip card at the metro stations or from most newsagents ("Je voudrais une carte de tram s'il vous plait", 290 BF). On the way to the platform insert it in one of the stamping machines in your path, and you can travel anywhere for 1 hour, changing bus, tram or metro without restriction. The University Campus (Campus Plaine, ULB) is directly at station Delta on Line 1b (Direction Hermann-Debroux). It has a common section with line 1a, so don't get on the wrong metro (Direction Stockel). The destination of the metro is indicated on the platform and also inside each car. The line splits at Merode, so if you do get on the one going to Stockel you can always get off at or before Merode and wait on the same platform for the next one going to Hermann-Debroux. Returning to the centre you don't have this problem, all metros direction Heysel are OK. Hotel President WTC: To get to the conference, you can walk to the Place Rogier, and then follow the instructions below for Hotels Palce and Vendome, or else take the pre-metro from the Gare du Nord and change at Place de Brouckere for line Ia (direction Herman- Debroux), and get off at station Delta which is directly at the campus. We will also organise a bus service to and from the conference at about 08.30 in the morning, and after the end of each conference day (hours to be announced).. Hotels Palace and Vendome are nearest Metro station Rogier, line 2. Take direction Bruxelles Midi, and change to line 1a (direction Hermann-Debroux) at station Arts- Loi. Going back to your hotel change at Arts -Loi to line 2 (direction Simonis). Hotels Atlas, Arcade Ste. Catherine, and Orion are nearest Metro station Ste Catherine (line 1). Hotel Opera is nearest Metro station De Brouckere (line 1) Hotel Sabina is nearest Metro station Madou line 2. Take direction Bruxelles Midi, and change to line 1a (direction Hermann-Debroux) at station Arts-Loi. Going back to your hotel change at Arts -Loi to line 2 (direction Simonis). Hotel de Paris is nearest the metro Bruxelles-Midi, line 2. Take direction Simonis, and change to line 1a (direction Hermann-Debroux) at station Arts-Loi. Going back to your hotel change at Arts -Loi to line 2 (direction Bruxelles-Midi. At Delta, the ULB Campus Plaine is signposted (the corridors and escalators takew you right on Campus), and we will have placed ECAL signposts along your route. The conference is at the Forum (see enclosed map). By car: If heavy rush-hour traffic doesn't scare you, follow the signs to Namur (if you can find any), or take la Rue du Trone and Avenue de la Couronne, which will get you close. The Campus is just at the start of the Bruxelles-Namur-Luxembourg autoroute E411. Once you find the Campus, there is a parking lot reserved for you at Access 4. Do not confuse the ULB (Universite Libre de Bruxelles, nearer the autoroute) with its neighbouring cousin on the same Campus, the VUB (Vrij Universiteit Brussel, nearer the centre). The conference is just 30m from there (at the Forum) and will be signposted. How to get from the conference to the Banquet The Banquet is at the other ULB Campus, Campus Solbosch (La Salle de Marbre), 800-1000m from Campus Plaine (see enclosed map). Cross the railway bridge, and go straight, over the roundabout at the Ixelles Cemetry, and still staright on up Avenue de l'Universite to the Campus Plaine, down the middle of the Campus Solbosch (Avenue Paul Heger), and follow the ECAL Banquet signs. In any case we will escort you. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From jose at learning.siemens.com Mon Jun 5 16:42:55 2006 From: jose at learning.siemens.com (Steve Hanson) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NIPS5 Oversight Message-ID: NIPS-5 attendees: Due to an oversight we regret the inadvertent exclusion of 3 papers from the recent NIPS-5 volume. These papers were: Mark Plutowski, Garrison Cottrell and Halbert White: Learning Mackey-Glass from 25 examples, Plus or Minus 2 Yehuda Salu: Classification of Multi-Spectral Pixels by the Binary Diamond Neural Network A. C. Tsoi, D.S.C. So and A. Sergejew: Classification of Electroencephalograms using Artificial Neural Networks We are writing this note to (1) acknowledge our error (2) point out where you can obtain a present copy of the author's papers and (3) inform you that they will appear in their existing form or an updated form in NIPS Vol. 6. Presently, Morgan Kaufmann will be sending a bundle of the 3 formatted papers to all NIPS-5 attendees, these will be marked as NIPS-5 Addendum. You should also be able to retrieve an official copy from NEUROPROSE archive. Again, we apologize for the oversight to the authors. Stephen J. Hanson, General Chair Jack Cowan, Program Chair C. Lee Giles, Publications Chair #!/bin/sh ######################################################################## # usage: ohio # # A Script to get, uncompress, and print postscript # files from the neuroprose directory on cheops.ohio-state.edu # # By Tony Plate & Jordan Pollack ######################################################################## if [ "$1" = "" ] ; then echo usage: $0 " " echo echo The filename must be exactly as it is in the archive, if your echo file is not found the first time, look in the file \"ftp.log\" echo for a list of files in the archive. echo echo The printerflags are used for the optional lpr command that echo is executed after the file is retrieved. A common use would echo be to use -P to specify a particular postscript printer. exit fi ######################################################################## # set up script for ftp ######################################################################## cat > .ftp.script < ftp.log rm -f .ftp.script if [ ! -f /tmp/$1 ] ; then echo Failed to get file - please inspect ftp.log for list of available files exit fi ######################################################################## # Uncompress if necessary ######################################################################## echo Retrieved /tmp/$1 case $1 in *.Z) echo Uncompressing /tmp/$1 uncompress /tmp/$1 FILE=`basename $1 .Z` ;; *) FILE=$1 esac ######################################################################## # query to print file ######################################################################## echo -n "Send /tmp/$FILE to 'lpr $2' (y or n)? 
" read x case $x in [yY]*) echo Printing /tmp/$FILE lpr $2 /tmp/$FILE ;; esac echo File left in /tmp/$FILE From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: October 19-20, 1993 [This Workshop was previously scheduled for April 1993] Program Committee: Michael Arbib (Organizer), George Bekey, Damian Lyons, Paul Rosenbloom, and Ron Sun To design complex technological systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence and perceptual robotics which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks. We seek (both at the level of schema asemblages, and in terms of "modular" neural networks) a distributed model of computation, supporting many concurrent activities for recognition of objects, and the planning and control of different activities. The use, representation, and recall of knowledge is mediated through the activity of a network of interacting computing agents which between them provide processes for going from a particular situation and a particular structure of goals and tasks to a suitable course of action. This action may involve passing of messages, changes of state, instantiation to add new schema instances to the network, deinstantiation to remove instances, and may involve self-modification and self-organization. Schemas provide a form of knowledge representation which differs from frames and scripts by being of a finer granularity. Schema theory is generative: schemas may well be linkedwwww to others to provide yet more comprehensive schemas, whereas frames tend to "build in" from the overall framework. The analysis of interacting computing agents (the schema instances) is intermediate between the overall specification of some behavior and the neural networks that subserve it. The Workshop will focus on different facets of this multi-level methodology. While the emphasis will be on technological systems, papers will also be accepted on biological and cognitive systems. 
Submission of Papers A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models: Schema Theory as a description language for neural networks Modular neural networks Alternative paradigms for modeling symbolic and subsymbolic knowledge Hierarchical and distributed representations: adaptation and coding Linking DAI to Neural Networks to Hybrid Architecture Formal Theories of Schemas Hybrid approaches to integrating planning & reaction Hybrid approaches to learning Hybrid approaches to commonsense reasoning by integrating neural networks and rule-based reasoning (using schemas for the integration) Programming Languages for Schemas and Neural Networks Schema Theory Applied in Cognitive Psychology, Linguistics, and Neuroscience Prospective contributors should send a five-page extended abstract, including figures with informative captions and full references - a hard copy, either by regular mail or fax - by August 15, 1993 to Michael Arbib, Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863, arbib at pollux.usc.edu]. Please include your full address, including fax and email, on the paper. In accepting papers submitted in response to this Call for Papers, preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of a network of interacting schemas, and where some but not necessarily all of the schemas are implemented in neural networks. Papers which present a single neural network for pattern recognition ("perceptual schema") or pattern generation ("motor schema") will not be accepted. It is the development of a methodology to analyze the interaction of multiple functional units that constitutes the distinctive thrust of this Workshop. Notification of acceptance or rejection will be sent by email no later than September 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but (revised versions) of accepted abstracts received prior to October 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. A number of papers have already been accepted for the Workshop. 
These include the following: Arbib: Schemas and Neural Networks: A Tutorial Introduction to Integrating Symbolic and Subsymbolic Approaches to Cooperative Computation Arkin: Reactive Schema-based Robotic Systems: Principles and Practice Heenskerk and Keijzer: A Real-time Neural Implementation of a Schema Driven Toy-Car Leow and Miikkulainen, Representing and Learning Visual Schemas in Neural Networks for Scene Analysis Lyons & Hendriks: Describing and analysing robot behavior with schema theory Murphy, Lyons & Hendriks: Visually Guided Multi-Fingered Robot Hand Grasping as Defined by Schemas and a Reactive System Sun: Neural Schemas and Connectionist Logic: A Synthesis of the Symbolic and the Subsymbolic Weitzenfeld: Hierarchy, Composition, Heterogeneity, and Multi-granularity in Concurrent Object-Oriented Programming for Schemas and Neural Networks Wilson & Hendler: Neural Network Software Modules Bonus Event: The CNE Research Review: Monday, October 18, 1993 The CNE Review will present a day-long sampling of CNE research, with talks by faculty, and students, as well as demos of hardware and software. Special attention will be paid to talks on, and demos in, our new Autonomous Robotics Lab and Neuro-Optical Computing Lab. Fully paid registrants of the Workshop are entitled to attend the CNE Review at no extra charge. Registration The registration fee of $150 ($40 for qualified students who include a "certificate of student status" from their advisor) includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of October 18th. Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $150 ($40 for students and CNE members) together with the following information to Paulina Tagle, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA. --------------------------------------------------- SCHEMAS AND NEURAL NETWORKS Center for Neural Engineering, USC October 19-20, 1992 NAME: ___________________________________________ ADDRESS: _________________________________________ PHONE NO.: _______________ FAX:___________________ EMAIL: ___________________________________________ I intend to submit a paper: YES [ ] NO [ ] I wish to be registered for the CNE Research Review: YES [ ] NO [ ] Accommodation Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 7480043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, fellous at pollux.usc.edu. From jbeard at aip.org Mon Jun 5 16:42:55 2006 From: jbeard at aip.org (jonathan_beard) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: bee learning in Nature Message-ID: entomology Findings about bees' brains could shed light on how people learn URBANA, Ill. - Can honey bees help scientists understand how adult humans learn? Researchers at the University of Illinois are convinced they can. In the July 15 issue of the journal Nature, they describe structural changes that occur in the brains of bees when the insects leave their domestic chores to tackle their most challenging and complex task - foraging for pollen and nectar. 
As part of a doctoral thesis, neuroscience graduate student Ginger S. Withers focused on the "mushroom bodies," a region of the insect brain so named because it appears mushroom-shaped when viewed in cross-section. The region is closely associated with learning and memory. Withers used quantitative neuroanatomical methods to study sections of bee brains to show that the mushroom bodies are reorganized when a bee becomes a forager. Although a honey bee typically switches from hive-keeping tasks, such as rearing younger sisters and caring for the queen, to foraging at about three weeks of age, the brain changes are not simply due to aging. In a key experiment, young honey bees were forced to become foragers by removing older bees from the colony. The mushroom bodies of the precocious foragers, who were only about one week old, mirrored those of normal-aged foragers. The findings suggest that nerve cells in the mushroom bodies receive more informational inputs per cell as the bee learns to forage. In order to be a successful forager, a bee must learn how to navigate to and from its hive and how to collect food efficiently from many different types of flowers. The implications for neuroscience go far beyond the beehive, said the article's co-authors, U. of I. insect biologists Susan E. Fahrbach and Gene E. Robinson. There could be application to human studies, they said, because the structure of bee brains is similar to - but much simpler than - human brains. Fahrbach, whose research has focused on the impact of hormones on the nervous system, was drawn to the honey bee by its sophisticated behavior, small brain and power of concentration. "Honey bees offer an exceptionally powerful model for the study of changes in the brain related to naturally occurring changes in behavior, because, once a bee becomes a forager, it does nothing else," she said. "Because the behavioral shifts are so complete, the changes in brain structure that accompany the behavioral transitions must be related to the performance of the new observed behavior." Robinson, who is director of the U. of I.'s Bee Research Facility and who has previously studied other physiological and genetic aspects of bee behavior, agrees: "This discovery opens a new area of research on the relationship between brain and behavioral plasticity. One fundamental question this research raises is 'which comes first?' Do changes in behavior lead to changes in brain structure? Or do the changes in brain structure occur first, in preparation for the changes in behavior?" As researchers pursue the changes in brain cells that form the underpinnings of learning, the U. of I. scientists say the combination of neuroscience and entomology may yield sweet rewards. Contact: Jim Barlow University of Illinois News Bureau phone: 217-333-5802 fax: 217-244-0161 Compuserve: 72002,630 Internet: jbarlow at ux1.cso.uiuc.edu  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: vs. "randomized" decision rules, as they are called in decision theory ("stochastic learning algorithm" means something different to me, but maybe I'm just misinterpreting your posting). Picking an opinion from a pool of experts randomly is clearly not a particularly good randomized decision rule in most cases. However, there are cases in which properly chosen randomized decision rules are important (any good introduction on Bayesian statistics should discuss this). 
Unless there is an intelligent adversary involved, such cases are probably mostly of theoretical interest, but nonetheless, a randomized decision rule can be "better" than any deterministic one. Thomas.  From avner at elect1.weizmann.ac.il Mon Jun 5 16:42:55 2006 From: avner at elect1.weizmann.ac.il (Priel Avner) Date: Sun, 19 Sep 93 Subject: New paper in neuroprose Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/priel.2_layered_perc.ps.Z The file priel.2_layered_perc.ps.Z is now available for copying from the Neuroprose archive. This is a 41-page paper, submitted for publication in "Physical Review E". A limited number of hardcopies (10) is reserved for those who cannot use the FTP server. Computational Capabilities of Restricted Two Layered Perceptrons by Avner Priel, Marcelo Blatt, Tal Grossman and Eytan Domany Electronics Department, The Weizmann Institute of Science, Rehovot 76100, Israel. and Ido Kanter Department of Physics, Bar Ilan University, 52900 Ramat Gan, Israel. Abstract: We study the extent to which fixing the second layer weights reduces the capacity and generalization ability of a two-layer perceptron. Architectures with $N$ inputs, $K$ hidden units and a single output are considered, with both overlapping and non-overlapping receptive fields. We obtain from simulations one measure of the strength of a network - its critical capacity, $\alpha_c$. Using the ansatz $\tau_{med} \propto (\alpha_c - \alpha)^{-2}$ to describe the manner in which the median learning time diverges as $\alpha_c$ is approached, we estimate $\alpha_c$ in a manner that does not depend on arbitrary impatience parameters. The $CHIR$ learning algorithm is used in our simulations. For $K=3$ and overlapping receptive fields we show that the general machine is equivalent to the Committee with the same architecture. For $K=5$ and the same connectivity the general machine is the union of four distinct networks with fixed second layer weights, of which the Committee is the one with the highest $\alpha_c$. Since the capacity of the union of a finite set of machines equals that of the strongest constituent, the capacity of the general machine with $K=5$ equals that of the Committee. We investigated the internal representations used by different machines, and found that high correlations between the hidden units and the output reduce the capacity. Finally we studied the Boolean functions that can be realized by networks with fixed second layer weights. We discovered that two different machines implement two completely distinct sets of Boolean functions.  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: <01H4KZLPHYCY8WYV8X@buenga.bu.edu> PLEASE POST. Subject: NEUROSCIENCE/BIOMEDICAL ENGINEERING FACULTY POSITION BU BOSTON UNIVERSITY, Department of Biomedical Engineering has openings for SEVERAL tenure-track faculty positions at the junior level. Computational Vision, Medical Image Processing, and Neuroengineering are among the areas of interest. For details see the ad in Science, October 22, 1993. Applicants should submit a CV, a one-page summary of research interests, and names and addresses of at least three references to: Herbert Voigt Ph.D. 
Chairman Department of Biomedical Engineering College of Engineering Boston University 44 Cummington str Boston, Ma 02215-2407 Consideration will be given to applicants who already hold a PHD in a field of engineering or related field (e.g. physics) and have had at least one year of postdoctoral experience.The target starting date for positions is September 1, 1994. Considerations of applications will begin on November 1, 1993 and will continue until the positions are filled.  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: following. Note that some locations may have "firewall" that prevents Xwindows' applications from running. If this procedure fails, you may have to find a machine outside your firewall or use the character-based interface (csb). + xhost +128.96.58.4 (Xwindows display permission for superbook.bellcore.com) + telnet 128.96.58.4 + (login) + TERM=xterms (it is important to use "xterms") + Figure out and enter your machine's IP address(in /etc/hosts or ask an administrator) + gxsb (Xwindows version of SuperBook) 3.1 Overview of Xwindows SuperBook Commands When you login to SuperBook, you will obtain a Library Window. For the IWANNT proceedings, you should select the IWANNT shelf, highlight "Applications of Neural Networks to Telecommunications" and click "Open". The Text Window should be placed on the right side of the screen and the Table-of-Contents Window should be placed on the left. These windows can be resized. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by clicking in the TOC. If the entry contains subsections, it will be marked with a "+". Double-clicking on those entries expands them. Clicking on an expanded entry closes it. Text Window: The text can be scrolled one-line-at-a-time with the Scroll Bar Arrows or a page-at-a-time by clicking on the spaces immediately above or below the Slider. Graphics: Figures, tables, and some equations are presented as bitmaps. The graphics can be viewed by clicking on the blue icons at the right side of the text which pops up a bitmap-viewer. Graphics can be closed by clicking on their "close" button. Some multilemdia applications have been included, but these may not work correctly across the Internet. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* You can also search by clicking on a term in the text (to clear that search and select another, do ctrl-click). Annotations: Annotations are indicated with a pencil icon and can be read by clicking on the icon. Annotations can be created (with conference-attendee logins) by clicking in the text with the left button and then typing in the annotation window. Exiting: Pull down the FILE menu on the Library Window to "QUIT", and release. 4. Remote Access via character-based interface From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: + telnet 128.96.58.4 (for superbook.bellcore.com) + (login) + TERM=(termtype) (use "xterms" for an Xwindow inside a firewall) + csb 4.1 Overview of csb SuperBook Commands The character-based interface resembles emacs. You first enter Library mode. 
After selecting a shelf (make sure you are on the IWANNT shelf) and a book on that shelf (e.g., Applications of Neural Networks to Telecommunications), the screen is split laterally into two parts. The upper window is the TOC and the lower window has the text. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by typing the number beside them in the TOC. If the entry contains subsections, it will be marked with a "+". Text Window: The text can be scrolled one-line-at-a-time with the u/d keys or a page-at-a-time with the U/D keys. Graphics: Most bitmapped graphics will not be available. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* Searches are also possible by posting the cursor over a word and hitting RET. Annotations: Annotations are indicated with an A on the right edge of the screen. These can be read by entering an A on the line on which they are presented. Annotations can be created (given correct permissions) by entering A on any line. Exiting: Enter "Q" From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: following. Note that some locations may have "firewall" that prevents Xwindows' applications from running. If this procedure fails, you may have to find a machine outside your firewall or use the character-based interface (csb). + xhost +128.96.58.4 (Xwindows display permission for superbook.bellcore.com) + telnet 128.96.58.4 + (login) + TERM=xterms (it is important to use "xterms") + enter your email address + Figure out and enter your machine's IP address (in /etc/hosts or ask an administrator) + gxsb (Xwindows version of SuperBook) 3.1 Overview of Xwindows SuperBook Commands When you login to SuperBook, you will obtain a Library Window. For the IWANNT proceedings, you should select the IWANNT shelf, highlight "Applications of Neural Networks to Telecommunications" and click "Open". The Text Window should be placed on the right side of the screen and the Table-of-Contents Window should be placed on the left. These windows can be resized. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by clicking in the TOC. If the entry contains subsections, it will be marked with a "+". Double-clicking on those entries expands them. Clicking on an expanded entry closes it. Text Window: The text can be scrolled one-line-at-a-time with the Scroll Bar Arrows or a page-at-a-time by clicking on the spaces immediately above or below the Slider. Graphics: Figures, tables, and some equations are presented as bitmaps. The graphics can be viewed by clicking on the blue icons at the right side of the text which pops up a bitmap-viewer. Graphics can be closed by clicking on their "close" button. Some multilemdia applications have been included, but these may not work correctly across the Internet. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* You can also search by clicking on a term in the text (to clear that search and select another, do ctrl-click). Annotations: Annotations are indicated with a pencil icon and can be read by clicking on the icon. Annotations can be created (with conference-attendee logins) by clicking in the text with the left button and then typing in the annotation window. Exiting: Pull down the FILE menu on the Library Window to "QUIT", and release. 4. 
Remote Access via character-based interface From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: + telnet 128.96.58.4 (for superbook.bellcore.com) + (login) + TERM=(termtype) (use "xterms" for an Xwindow inside a firewall) + enter your email address + csb 4.1 Overview of csb SuperBook Commands The character-based interface resembles emacs. You first enter Library mode. After selecting a shelf (make sure you are on the IWANNT shelf) and a book on that shelf (e.g., Applications of Neural Networks to Telecommunications), the screen is split laterally into two parts. The upper window is the TOC and the lower window has the text. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by typing the number beside them in the TOC. If the entry contains subsections, it will be marked with a "+". Text Window: The text can be scrolled one-line-at-a-time with the u/d keys or a page-at-a-time with the U/D keys. Graphics: Most bitmapped graphics will not be available. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* Searches are also possible by posting the cursor over a word and hitting RET. Annotations: Annotations are indicated with an A on the right edge of the screen. These can be read by entering an A on the line on which they are presented. Annotations can be created (given correct permissions) by entering A on any line. Exiting: Enter "Q" From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Hebbian learning plus deflation, but this deflation appears to be parallel, from all units to all other units, and therefore not sequential. I would think that the same happens with Leen's algorithms (ref. below, also): there is deflation but it is parallel, not sequential. References: E. Oja, H. Ogawa and J. Wangviwattana, "PCA in Fully Parallel Neural Networks", in I. Aleksander and J. Taylor (eds.), Artificial Neural Networks 2, Elsevier Science Publishers, 1992. T. Leen, "Dynamics of Learning in Recurrent Feature-Discovery Networks", in R. Lippmann, J. Moody and D. Touretzky (eds.), Advances in Neural Information Processing Systems 3, Morgan Kaufmann, 1991. Regards, Luis B. Almeida INESC Phone: +351-1-544607, +351-1-3100246 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: in answering the mail from here. ======================================================================== From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: cutting across traditional disciplines. This conference seeks to bring together researchers from the Australasian region who are actively involved in research in Complex systems for creative discussion, and to provide an introduction to specialised topics for people seeking to know further about the theoretical and practical aspects of research in Complex systems. 
The theme of the conference "Mechanism of Adaptation in Natural, Man Made and Mathematical Systems", invites us to investigate and question the dynamic processes in complex systems, and to compare our overall modelling processes of natural systems. Processes such as evolution, growth and learning are being investigated through genetic algorithms, evolutionary programming and neural networks. How well do these techniques perform and to what extent do the fit an evolutionary paradigm. It also raises the underlying question: "How does order arise in complex systems?" PAPERS: Original papers concerned with both theory and application are solicited. The areas of interest include, but are not limited to the following: * Natural and Artificial Life * Genetic algorithms * Fractals, Chaos and Non-linear Dynamics * Self-organisation * Information and Control Systems * Neural Networks * Parallel and Emergent Computation. * Bio-Complexity DATES: Second Circular Jan 31, 1994 Third Circular Feb 14, 1994 Submission of Abstracts: Mar 14, 1994 Notification of Acceptance: May 16, 1994 Receipt of Camera-ready papers: Jul 25, 1994 CONFERENCE ORGANIZATION: The conference will open with advance registration and a barbecue party on Sunday 25th September. The conference fee of $285 ($130 students) will include morning & afternoon teas and lunch on each day, the opening barbecue, and the conference dinner. Accommodation will be available on campus at the University Residential College and at nearby motels within walking distance of the University. The conference dinner is to be held on Tuesday 27th September. TUTORIALS AND WORKSHOPS: It is planned to hold one or more introductory tutorials on selected topics in complex systems on Sunday 25th September. The aim of these tutorials will be to introduce participants to fundamental concepts in complex systems and to provide them with practical experience. Tentatively the topics covered will include genetic algorithms, cellular automata, chaos, and fractals. The exact content will depend on demand. If interested in attending please indicate your preferences on the attached expression of interest. The number of places will be strictly limited by facilities available. On Wednesday advanced workshops may be held on specialised topics if there is sufficient interest. Suggestions/offers for advanced workshop topics are encouraged. There will be an additional fee for attendance at the tutorials and workshops, which will include lunch and refreshments. SUBMISSION OF PAPERS: Intending authors are requested to submit an extended abstract of about 500 words, containing a clear, concise statement of the significant results of the work. Each abstract will be assessed by two referees. All accepted papers will be published in the conference proceedings. Individual authors may be allocated to either an oral or poster presentation, but contributions in both formats will appear identically in the proceedings. Copies of the proceedings will be provided to participants at the conference in hardcopy form. LaTeX style files and other formating options will be provided to authors of accepted papers. ORGANISING COMMITTEE: Conference Chairperson: Assoc Prof. Russel Stonier, Department of Mathematics and Computing University of Central Queensland Rockhampton Mail Centre 4702 QLD Australia. Tel. 
+61 79 309487 Fax: +61 79 309729 Email: complex at ucq.edu.au Technical Chairperson: Dr Xing Huo Yu, Department of Mathematics and Computing University of Central Queensland Rockhampton Mail Centre 4702 QLD Australia. Tel. +61 79 309865 Fax: +61 79 309729 Email: complex at ucq.edu.au Members: Prof. J. Diederich, Queensland University of Technology; Prof. A.C. Tsoi, University of Queensland; Dr D. Green, Australian National University; Dr T. Bossomaier, Australian National University; Mr S. Smith, University of Central Queensland. -----------%< cut here %<------------- COMPLEX'94 Second Australian National Conference on Complex Systems EXPRESSION OF INTEREST NAME: ______________________________________________ ORGANIZATION:________________________________________ ________________________________________ ADDRESS: ________________________________________ ________________________________________ ________________________________________ TEL: /FAX: ________________________________________ E-MAIL: _________________________________________ [ ] I am interested in attending COMPLEX'94. Please send me a registration form. [ ] I am interested in PRESENTING A PAPER and/or POSTER. Tentative title: ___________________________________________________ ___________________________________________________ ___________________________________________________ [ ] I am interested in ATTENDING A TUTORIAL. Preferences: Genetic Algorithms ______ Cellular Automata ______ Chaos Theory ______ Fractals ______ Distributed Programming ______ Other (specify) ______________________________ [ ] I am interested in attending an advanced WORKSHOP. [ ] I am UNABLE TO ATTEND the conference but would like to be kept informed. -------------%< cut here %<------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: and its Applications was held in Honolulu. Some of the topics that were covered in the symposium are listed below. Circuits and Systems Neural Networks Chaos Dynamics Cellular Neural Networks Fractals Bifurcation Biocybernetics Soliton Oscillations Reactive Phenomena Fuzzy Numerical Methods Pattern Generation Information Dynamics Self-Validating Numerics Time Series Analysis Chua's Circuits Chemistry and Physics Mechanics Fluid Mechanics Acoustics Control Optics Circuit Simulation Communication Economics Digital/analog VLSI circuits Image Processing Power Electronics Power Systems We have extra copies of the proceedings that are on sale for $100 to participants of the conference and $150 to nonparticipants. Checks drawn from US banks and money orders will be accepted. To receive a copy of the proceedings make payments to ``NOLTA 93'' and send to Anthony Kuh Dept. of Electrical Engineering University of Hawaii Honolulu HI 96822 For more information please contact me by email at kuh at wiliki.eng.hawaii.edu or by fax at 808-956-3427, From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: L. Bernard Widrow, Stanford University - SUNDAY, JUNE 5. 
1994, 8 AM-12 PM Adaptive Filters, Adaptive Controls, Adaptive Neural Networks, and Applications M. Gail Carpenter, Boston University - SUNDAY, JUNE 5. 1994, 8 AM-12 PM Adaptive Resonance Theory N. Takeshi Yamakawa, Kyushu Institute of Technology - SATURDAY, JUNE 4. 1994, 6-10 PM What are the Differences and the Similarities Among Fuzzy, Neural, and Chaotic Systems? O. Stephen Grossberg, Boston University - SUNDAY, JUNE 5. 1994, 1-5 PM Autonomous Neurodynamics: From Perception to Action P. Lee Giles, NEC Research Institute - SATURDAY, JUNE 4. 1994, 8 AM-12 PM Dynamically-Driven Recurrent Neural Networks: Models, Training Algorithms, and Applications Q. Alianna Maren, Accurate Automation Corporation - SATURDAY, JUNE 4. 1994, 1-5 PM Introduction to Neural Network Applications R. David Casasent, Carnegie Mellon University - SATURDAY, JUNE 4. 1994, 8 AM-12 PM Pattern Recognition and Neural Networks S. Per Bak, Brookhaven National Laboratory - SATURDAY, JUNE 4. 1994, 1-5 PM Introduction to Self-Organized Criticality T. Melanie Mitchell, Sante Fe Institute - SATURDAY, JUNE 4. 1994, 8 AM-12 PM Genetic Algorithms, Theory and Applications U. Lotfi A. Zadeh, University of California, Berkeley - SUNDAY, JUNE 5. 1994, 1-5 PM Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs V. Nikolay G. Rambidi, International Research Institute for Management Sciences - SUNDAY, JUNE 5. 1994, 6-10 PM Image Processing and Pattern Recognition Based on Molecular Neural Networks __________________________________________________________ PLENARIES: 1. Tuesday, June 7, 1994, 6-7 PM Lotfi A. Zadeh, University of California, Berkeley "Fuzzy Logic, Neural Networks, and Soft Computing" 2. Tuesday, June 7, 1994, 7-8 PM Per Bak, Brookhaven National Laboratory "Introduction to Self-Organized Criticality" 3. Wednesday, June 8, 1994, 6-7 PM Bernard Widrow, Stanford University "Adaptive Inverse Control" 4. Wednesday, June 8, 1994, 7-8 PM Melanie Mitchell, Sante Fe Institute "Genetic Algorithms: Why They Work and What They Can Do For You" 5. Thursday, June 9, 1994, 6-7 PM Paul Werbos, US National Science Foundation "Brain-Like Intelligence in Artificial Models: How Do We Really Get There?" 6. Thursday, June 9, 1994, 7-8 PM John Taylor, King's College London "Capturing What It Is Like to Be: Modelling the Mind with Neural Networks" __________________________________________________________ SPECIAL SESSIONS: Special Session 1: "Biomedical Applications of Neural Networks" Tuesday, June 7, 1994 Co-sponsored by the National Institute of Allergy and Infectious Diseases, U.S. NIH; the Division of Cancer Treatment, National Cancer Institute, U.S. NIH; and the Center for Devices and Radiological Health, U.S. Food and Drug Administration Chairs: David G, Brown, PhD Center for Devices and Radiological Health, FDA John Weinstein, MD, PhD National Cancer Institute, NIH This special session will focus on recent progress in applying neural networks to biomedical problems, both in the research laboratory and in the clinical environment. Applications moving toward early commercial implementation will be highlighted, and working demonstrations will be given. The track will commence with an overview session including invited presentations by Dr. John Weinstein, NCI, NIH on the present status of biomedical applications research and by his co-session chair Professor Shiro Usui, Toyohasi University on biomedical applications in Japan. A second morning session, chaired by Dr. Harry B. Burke, MD, PhD, University of Nevada and by Dr. 
Judith Dayhoff, PhD, University of Maryland, will address neural networks for prediction and other nonimaging applications. The afternoon session, chaired by Dr. Maryellen L. Giger, PhD, University of Chicago, and Dr. Laurie J. Mango, Neuromedical Systems, Inc., will cover biomedical image analysis/image understanding applications. The final session, chaired by Dr. David G. Brown, PhD, CDRH, FDA, is an interactive panel/audience discussion of the promise and pitfalls of neural network biomedical applications. Other prominent invited speakers include Dr. Nozomu Hoshimiya of Tohoku University and Dr. Michael O'Neill of the University of Maryland. Submissions of oral and/or poster presentations are welcomed to complement the invited presentations. Special Session 2: "Commercial and Industrial Applications of Neural Networks" Tuesday, June 7, 1994 Co-sponsored by the Society for Manufacturing Engineers Overall Chair: Bernard Widrow, Stanford University This special session will be divided into four sessions of invited talks and will place its emphasis on commercial and industrial applications working 24 hours a day and making money for their users. The sessions will be organized as follows: Morning Session 1 "Practical Applications of Neural Hardware" Chair: Dan Hammerstrom, Adaptive Solutions, Portland, Oregon, USA Morning Session 2 "Applications of Neural Networks in Pattern Recognition and Prediction" Chair: Kenneth Marko, Ford Motor Company Afternoon Session 1 "Applications of Neural Networks in the Financial Industry" Chair: Ken Otwell, BehavHeuristics, College Park, Maryland, USA Afternoon Session 2 "Applications of Neural Networks in Process Control and Manufacturing" Chair: Tariq Samad, Honeywell Special Session 3: "Financial and Economic Applications of Neural Networks" Wednesday, June 8, 1994 Chair: Guido J. Deboeck, World Bank This special session will focus on the state-of-the-art in financial and economic applications. The track will be split into four sessions: Morning Session 1 "Overview on Major Financial Applications of Neural Networks and Related Advanced Technologies" Morning Session 2 Presentation of Papers: Time-Series, Forecasting, Genetic Algorithms, Fuzzy Logic, Non-Linear Dynamics Afternoon Session 1 Product Presentations Afternoon Session 2 Panel discussion on "Cost and Benefits of Advanced Technologies in Finance" Invited speakers to be announced. Papers submitted to regular sessions may receive consideration for this special session. Special Session 4: "Neural Networks in Chemical Engineering" Thursday, June 9, 1994 Co-sponsored by the American Institute of Chemical Engineers Chair: Thomas McAvoy, University of Maryland This special session on neural networks in the chemical process industries will explore applications to all areas of the process industries including process modelling, both steady state and dynamic, process control, fault detection, soft sensing, sensor validation, and business examples. Contributions from both industry and academia are being solicited. Special Session 5: "Mind, Brain and Consciousness" Thursday, June 9, 1994 Session Chair: John Taylor, King's College London Session Co-chair: Walter Freeman, University of California, Berkeley Session Committee: Stephen Grossberg, Boston University and Gerhardt Roth, Brain Research Institute Invited Speakers include S. Grossberg, P. Werbos, G. Roth, B. Libet, J. 
Taylor. Consciousness and inner experience have suddenly emerged as the centre of activity in psychology, philosophy, and neurobiology. Neural modelling is proceeding apace in this subject. Contributors from all areas are now coming together to move rapidly towards a solution of what might be regarded as one of the deepest problems of human existence. Neural models, and their constraints, will be presented in the session, with an assessment of how far we are from building a machine that can see the world the way we do. _____________________________________________________________ SPECIAL INTEREST GROUP (SIGINNS) SESSIONS INNS Special Interest Groups have been established for interaction between individuals with interests in various subfields of neural networks as well as within geographic areas. Several SIGs have tentatively planned sessions for Wednesday, June 8, 1994 from 8 - 9:30 PM: Automatic Target Recognition - Brian Telfer, Chair A U.S. Civilian Neurocomputing Initiative - Andras Pellionisz, Chair Control, Robotics, and Automation - Kaveh Ashenayi, Chair Electronics/VLSI - Ralph Castain, Chair Higher Level Cognitive Processes - John Barnden, Chair Hybrid Intelligence - Larry Medsker, Chair Mental Function and Dysfunction - Daniel Levine, Chair Midwest US Area - Cihan Dagli, Chair Power Engineering - Dejan Sobajic, Chair ______________________________________________________ NEW in '94! NEURAL NETWORK INDUSTRIAL EXPOSITION The State-of-the-Art in Advanced Technological Applications MONDAY JUNE 6, 1994 - 8 AM to 9 PM SPECIAL EXPOSITION-ONLY REGISTRATION AVAILABLE: $55 * Dedicated to Currently Available Commercial Applications of Neural Nets & Related Technologies * Commercial Hardware and Software Product Demos * Poster Presentations * Panel Conclusions Led by Industry Experts * Funding Panel EXPOSITION CHAIR -- Takeshi Yamakawa, Kyushu Institute of Technology CHAIRS -- Hardware: Dan Hammerstrom, PhD, Adaptive Solutions, Inc. Takeshi Yamakawa, Kyushu Institute of Technology Robert Pap, Accurate Automation Corporation Software: Dr. Robert Hecht-Nielsen, HNC, Inc. Casimir C. Klimasauskas, Co-founder, NeuralWare, Inc. John Sutherland, Chairman and VP Research, AND America, Ltd. Soo-Young Lee, KAIST Asian Liaison Pierre Martineau, Martineau and Associates European Liaison Plus: NEURAL NETWORK CONTEST with $1500 GRAND PRIZE chaired by Bernard Widrow, Harold Szu, and Lotfi Zadeh with a panel of distinguished judges! EXPOSITION SCHEDULE Morning Session: Hardware 8-11 AM Product Demonstration Area and Poster Presentations 11 AM-12 PM Panel Conclusion Afternoon Session: Software 1-4 PM Product Demonstration Area and Poster Presentations 4-5 PM Panel Conclusion Evening Session 6-8 PM Neural Network Contest 8-9 PM Funding Panel HOW TO PARTICIPATE: To demonstrate your hardware or software product, contact James J. Wesolowski, 202-466-4667; (fax) 202-466-2888. For more information on the neural network contest, indicate your interest on the registration form. Further information and contest rules will be sent to all interested parties! Deadline for contest registration is March 1, 1994. ______________________________________________________ FEES at WCNN 1994 REGISTRATION FEE (includes all sessions, plenaries, proceedings, reception, and Industrial Exposition. Separate registration for Short Courses.) 
- INNS Members: US$195 - US$395 - Non Members: US$295 - US$495 - Full Time Students: US$85 - US$135 - Spouse/Guest: US$35 - US$55 SHORT COURSE FEE (Pay for 2 short courses, get the third FREE) - INNS Members: US$225 - US$275 - Non Members: US$275 - US$325 - Full Time Students: US$125 - US$150 CONFERENCE HOTEL: Town and Country Hotel (same site as conference) - Single: US$70 - US$95 - Double: US$80 - US$105 TRAVEL RESERVATIONS: Executive Travel Associates (ETA) has been selected as the official travel company for the World Congress on Neural Networks. ETA offers the lowest available fares on any airline at time of booking when you contact them at US phone number 202-828-3501 or toll free (in the US) at 800-562-0189 and identify yourself as a participant in the Congress. Flights booked on American Airlines, the official airline for this meeting, will result in an additional discount. Please provide the booking agent you use with the code: Star #S0464FS ________________________________________________________ TO RECEIVE CONFERENCE BROCHURES AND REGISTRATION FORMS, HOTEL ACCOMMODATION FORMS, AND FURTHER CONGRESS INFORMATION, CONTACT THE INTERNATIONAL NEURAL NETWORK SOCIETY AT: International Neural Network Society 1250 24th Street, NW Suite 300 Washington, DC 20037 USA phone: 202-466-4667 fax: 202-466-2888 e-mail: 70712.3265 at compuserve.com From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: predictable are those carrying most information [haussler-91]. Suppose that the learning machine was trained on a sequence of examples x1, x2, ... xn and is presented pattern xn+1. If pattern xn+1 is easily predicted (i.e., its error is small), the benefit from learning that pattern is going to be small: the hypothesis space is not going to shrink much. Conversely, if xn+1 is hard to predict, learning it will result in a large shrinking of hypothesis space. Minimax algorithms, which minimize the maximum error instead of the average error, rely on this principle. The solution of minimax algorithms depends only on a number of informative patterns, namely those patterns having maximum error (which other people would call outliers) [boser-92]. What happens when the data is not perfectly clean? Then, outliers can be either very informative, if they correspond to atypical patterns, or very non-informative, if they correspond to garbage patterns. With algorithms that detect the outliers (e.g., minimax algorithms), one can clean the data either automatically or by hand by removing a subset of the outliers. The VC-theory predicts the point of optimal cleaning [matic-92]. Isabelle Guyon --------------------------------- @inproceedings{haussler-91, author = "Haussler, D. and Kearns, M. and Schapire, R.", title = "Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the {VC} Dimension", booktitle = "Computational Learning Theory workshop", organization = "ACM", year = "1991", } @inproceedings{boser-92, author = "Boser, B. and Guyon, I. and Vapnik, V.", title = "A Training Algorithm for Optimal Margin Classifiers", year = "1992", booktitle = "Fifth Annual Workshop on Computational Learning Theory", address = "Pittsburgh", publisher = "ACM", month = "July", pages = "144-152" } @inproceedings{matic-92, author = "Mati\'{c}, N. and Guyon, I. 
and Bottou, L. and Denker, J. and Vapnik, V.", title = "Computer Aided Cleaning of Large Databases for Character Recognition", organization = "IAPR/IEEE", address = "Amsterdam", month = "August", year = 1992, booktitle = "11th International Conference on Pattern Recognition", volume = "II", pages = "330-333", } From Sebastian.Thrun at B.GP.CS.CMU.EDU Mon Jun 5 16:42:55 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Thu, 17 Mar 1994 11:03-EST Subject: CALL FOR PAPERS: Issue on Robot Learning, Machine Learning Journal Message-ID: ************************************************************** ***** CALL FOR PAPERS ****** ************************************************************** Special Issue on ROBOT LEARNING Journal MACHINE LEARNING (edited by J. Franklin and T. Mitchell and S. Thrun) This issue focuses on recent progress in the area of robot learning. The goal is to bring together key research on machine learning techniques designed for and applied to robots, in order to stimulate research in this area. We particularly encourage submission of innovative learning approaches that have been successfully implemented on real robots. Submission deadline: October 1, 1994 Papers should be double spaced and 8,000 to 12,000 words in length, with full-page figures counting for 400 words. All submissions will be subject to the standard review procedure. It is our goal to also publish the issue as a book. Send three (3) copies of submissions to: Sebastian Thrun Universitaet Bonn Institut fuer Informatik III Roemerstr. 164 D-53117 Bonn Germany phone: +49-228-550-373 Fax: +49-228-550-382 E-mail: thrun at cs.bonn.edu, thrun at cmu.edu Also mail five (5) copies of submitted papers to: Karen Cullen MACHINE LEARNING Editorial Office Kluwer Academic Publishers 101 Philip Drive Norwell, MA 02061 USA phone: (617) 871-6300 E-mail: karen at world.std.com Note: Machine Learning is now accepting submission of final copy in electronic form. There is a latex style file and related files available via anonymous ftp from world.std.com. Look in Kluwer/styles/journals for the files README, smjrnl.doc, smjrnl.sty, smjsamp.tex, smjtmpl.tex, or smjstyles.tar (which contains them all).  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: example, to be purely commercial (without parasites of any kind). As a friend of mine, a securities lawyer formerly with the SEC, has pointed out, without a larger body of speculators, there would not be enough liquidity to generate an efficient transfer of risk. All the large commercial hedgers have basically the same information and time goals. It is natural to expect them to be in the same side of the market for the same period. A society that would not permit and reward the role of the speculator could not take advantage of the risk transfer (and low price guarantees) that this large liquid market provides. Someone has to be willing to be on the other side of a zero-sum game. When I addressed the panel recently, I said that the competition is a difficult endeavor, hated by some, enjoyed by others, but of interest to most. 
We all (those interested in real function modeling) want to know how these tools would fare in such a test, with objective measuring tools, compared against the same data, and each tool properly applied. Not a simple task to be done by anyone, but only practical in a setting such as a competition. Furthermore, these results will provide a single source of rich material for future experimentation, all collected in a single place. > Furthermore, it is pretty >clear that the only way to consistently make money with such >a technique would be to keep it secret. I would say that for a specific technique, applied to a specific market and time frame, you are probably right. But the statement is not true in general. Even if it can arguably be the case, the competition promotes two tracks: one with full disclosure, and another one, a little harder to get into, for non-disclosure entries. In answering a negative assertion, all that we need is a counterexample. We did not wish to have commercial claims saying that they know how to make the system work but they wouldn't show it to us. Here is an opportunity for all. Lastly, I would like to point out that our sole objective at this point is not the sordid business of making money by designing trading systems, but rather to study the predictive quality of nonlinear techniques as applied to this specific problem (which has eluded a number of other techniques). There is an ocean of difference between a reasonably accurate predictor and a money-making trading system. The discussion of which is beyond the scope of this note, except to say that one definitely does not imply the other. > >On the other hand, I have a great deal of respect for several >of the people involved in the "Competition", and this leads me >to wonder whether I might be missing some crucial point. Can >anybody help me with this? > > -- Bill > On behalf of the panel, I would like to thank you for the vote of confidence, and say that we will strive to make the most accurate assessments of the quality of the entries. I would like to invite all the real function modeling researchers to participate and to try the best that can be done at this point with these techniques. Here is a good chance to show if any technique can indeed make a difference in this difficult and important problem. If it can, it can certainly be of use in a number of related problems. Science has been based on curiosity and correct methodology. I hope that we all can strive for answers, especially to questions that are enveloped in mysterious folklore... -- Manoel ____________________________________________________________________________ ________________________________________ ___________________________ Manoel Fernando Tenorio Parallel Distributed Structures Lab School of Electrical Engineering Purdue University W. Lafayette, IN 47907 Ph.: 317-494-3482 Fax: 317-494-6440 tenorio at ecn.purdue.edu ============================================================================ = From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: 21 May 1994 (Sat) Subject: No subject Message-ID: The file nascimento.phd.tarZ is now available for copying from the anonymous ftp-site 'slarti.csc.umist.ac.uk' (130.88.116.3): Author: Cairo L. Nascimento Jr. (cairo at csc.umist.ac.uk) PhD thesis title: Artificial Neural Networks in Control and Optimization Submission date: February 1994 Supervisor: Dr. Martin B. Zarrop (zarrop at csc.umist.ac.uk) UMIST - Control Systems Centre P.O. 
Box 88 - Sackville Street Manchester M60 1QD United Kingdom Abstract: This thesis concerns the application of artificial neural networks to solve optimization and dynamical control problems. A general framework for artificial neural networks models is introduced first. Then the main feedforward and feedback models are presented. The IAC (Interactive Activation and Competition) feedback network is analysed in detail. It is shown that the IAC network, like the Hopfield network, can be used to solve quadratic optimization problems. A method that speeds up the training of feedforward artificial neural networks by constraining the location of the decision surfaces defined by the weights arriving at the hidden units is developed. The problem of training artificial neural networks to be fault tolerant to loss of hidden units is mathematically analysed. It is shown that by considering the network fault tolerance the above problem is regularized, that is the number of local minima is reduced. It is also shown that in some cases there is a unique set of weights that minimizes a cost function. The BPS algorithm, a network training algorithm that switches the hidden units on and off, is developed and it is shown that its use results in fault tolerant neural networks. A novel non-standard artificial neural network model is then proposed to solve the extremum control problem for static systems that have an asymmetric performance index. An algorithm to train such a network is developed and it is shown that the proposed network structure can also be applied to the multi-input case. A control structure that integrates feedback control and a feedforward artificial neural network to perform nonlinear control is proposed. It is shown that such a structure performs closed-loop identification of the inverse dynamical system. The technique of adapting the gains of the feedback controller during training is then introduced. Finally it is shown that the BPS algorithm can also be used in this case to increase the fault tolerance of the neural controller in relation to loss of hidden units. Computer simulations are used throughout to illustrate the results. ----------------------------------------------------------------------------------- The thesis is 226 pages (17 preamble + 209 text). Hardcopies are not available at the moment. To obtain a copy of the Postscript files: % ftp slarti.csc.umist.ac.uk > Name: anonymous > Password: > cd /pub/neural/cairo > binary > get nascimento.phd.tarZ > quit The file nascimento.phd.tarZ is a unix TAR file which contains the following postscript files (compressed by the standard unix command "compress"): File Size in bytes nascimento.phd.tarZ 2015232 chap01.ps.Z 36737 chap24.ps.Z 1041029 chap58.ps.Z 928199 When uncompressed the file sizes and number of pages in each file are: File Size in bytes Number of pages chap01.ps 109471 22 chap24.ps 3315662 97 chap58.ps 2913551 107 --------- ----- 6338684 216 To obtain one of the postscript files from the TAR file, use: % tar tvf nascimento.phd.tarZ (list the table of contents of the TAR file) % tar xvf nascimento.phd.tarZ chap24.ps.Z (extracts only the file chap24.ps.Z from the TAR file) % uncompress -v chap24.ps.Z % lpr -s -P chap24.ps (do not delete or compress the PS file until the printing is finished) OBS: 1) The uncompressed postscript files can be viewed using "ghostview" (or "ghostscript"), but I don't know about "pageview". 2) If you have GZIP installed locally, consider compressing the PS files using it. 
Using the command "gzip -9v filename" the size of the compressed PS files will be respectively 24568, 613052, 637269 bytes (total using GZIP -9: 1274889 bytes, total using COMPRESS: 2005965 bytes; 1274889 / 2005965 = 63.6 %). 3) Some of my other publications are available in the same directory. For more details get the file /pub/neural/cairo/INDEX.TXT. ------------------------------------------------------------------------- Cairo L. Nascimento Jr. | E-Mail: cairo at csc.umist.ac.uk UMIST - Control Systems Centre | Tel: +(44)(61) 200-4659, Room C70 P.O. Box 88 - Sackville Street | or +(44)(61) 236-3311, Ext.2821 Manchester M60 1QD | Tel. Home: +(44)(61) 343-3979 United Kingdom | WHOIS handle for ds.internic.net: CLN2 ------------------------------------------------------------------------- After 1st June 1994 my surface address will be: Cairo L. Nascimento Jr. Instituto Tecnologico de Aeronautica, CTA - ITA - IEE- IEEE 12228-900 - Sao Jose' dos Campos - SP Brazil E-mail in Brazil (after 1st June 1994): ita at fpsp.fapesp.br (please, include my name in the subject line). The email address cairo at csc.umist.ac.uk should remain operational for some months after June/94. .................................................................................... END From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Applications}, V. Cherkassky, J.H. Friedman and H. Wechsler (eds.), NATO ASI Series F, Springer-Verlag 1994. ------------------------------------------------------------------------- Prediction Risk and Architecture Selection for Neural Networks John Moody Abstract: We describe two important sets of tools for neural network modeling: prediction risk estimation and network architecture selection. Prediction risk is defined as the expected performance of an estimator in predicting new observations. Estimated prediction risk can be used both for estimating the quality of model predictions and for model selection. Prediction risk estimation and model selection are especially important for problems with limited data. Techniques for estimating prediction risk include data resampling algorithms such as {\em nonlinear cross--validation (NCV)} and algebraic formulae such as the {\em predicted squared error (PSE)} and {\em generalized prediction error (GPE)}. We show that exhaustive search over the space of network architectures is computationally infeasible even for networks of modest size. This motivates the use of {\em heuristic} strategies that dramatically reduce the search complexity. These strategies employ directed search algorithms, such as selecting the number of nodes via {\em sequential network construction (SNC)} and pruning inputs and weights via {\em sensitivity based pruning (SBP)} and {\em optimal brain damage (OBD)} respectively. Keywords: prediction risk, network architecture selection, cross--validation (CV), nonlinear cross--validation (NCV), predicted squared error (PSE), generalized prediction error (GPE), effective number of parameters, heuristic search, sequential network construction (SNC), pruning, sensitivity based pruning (SBP), optimal brain damage (OBD). 
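For readers who want a concrete feel for the cross-validation side of the abstract above, here is a minimal Python sketch. It is not Moody's NCV/SNC/OBD code; it only illustrates, under simplifying assumptions, how prediction risk can be estimated by K-fold cross-validation and used to choose among candidate model sizes. Polynomial degree stands in for network architecture size, and all function and variable names are invented for the example.

# Minimal illustration (not Moody's NCV/SNC/OBD code): estimate prediction risk
# by K-fold cross-validation and use it to pick a model size. Polynomial degree
# stands in for "architecture size"; all names here are made up for the example.
import numpy as np

def cv_prediction_risk(x, y, degree, k=5, seed=0):
    # Average held-out mean squared error over k folds: an estimate of the
    # expected error on new observations (the prediction risk).
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    folds = np.array_split(idx, k)
    errors = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        coeffs = np.polyfit(x[train], y[train], degree)  # fit on k-1 folds
        pred = np.polyval(coeffs, x[test])               # predict the held-out fold
        errors.append(np.mean((y[test] - pred) ** 2))
    return float(np.mean(errors))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    x = np.linspace(-1.0, 1.0, 60)
    y = np.sin(3.0 * x) + 0.1 * rng.standard_normal(x.size)   # small noisy data set
    risks = {d: cv_prediction_risk(x, y, d) for d in range(1, 10)}
    best = min(risks, key=risks.get)                          # lowest estimated risk wins
    print("estimated prediction risk by degree:", risks)
    print("selected model size (degree):", best)

Exhaustively scoring every candidate this way quickly becomes too expensive for real networks, which is exactly the motivation the abstract gives for algebraic estimates such as PSE and GPE and for heuristic, directed search over architectures.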
========================================================================= Retrieval instructions are: unix> ftp neural.cse.ogi.edu login: anonymous password: name at email.address ftp> cd pub/neural ftp> cd papers ftp> get INDEX ftp> binary ftp> get moody94.predictionrisk.ps.Z ftp> quit unix> uncompress *.Z unix> lpr *.ps From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: is the use of patient specific measurements of an epidemiological nature (such as maternal age, past obstetrical history etc) in the forecasting of a number of specific Adverse Pregnancy Outcomes. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: of pattern processing and classification systems which can be trained to forecast problems in pregnancy. These systems will be designed to accept a variety of data formats from project partners throughout the EC, and will be tuned to provide optimum performance for the particular medical task. In neural net terms, such obstetrical problems are similar to financial problems of credit risk prediction. Many leading European obstetrical centers are involved in this project and close collaboration with a number of these will be an essential component of the post offered. The CEC grant to QAMC is likely to be for three years. The post offered would therefore be for one year in the first instance, with the likelihood of renewal up to a maximum of three years, subject to satisfactory performance. The person appointed will work principally in Cambridge and should have already had considerable experience with Neural Networks, ideally up to PhD level. A medical qualification would be desirable, but this is by no means essential. The gross salary will depend on age but the present scale (subject to review) lies within the range of 12828 - 18855 UKP per year. Interviews are likely to be held in Cambridge on 31 August 1994. Closing date for applications is 18 August 94. Further particulars may be obtained from and application forms should be sent to: Dr Kevin Dalton PhD FRCOG OR Dr Richard Prager Dept of Obstetrics and Gynaecology Dept of Engineering Rosie Maternity Hospital Trumpington Street Cambridge CB2 2SW Cambridge CB2 1PZ UK UK Phone 44 223 410250 44 223 332771 FaX 44 223 336873 44 223 332662 Email kjd5 at phx.cam.ac.uk rwp at eng.cam.ac.uk ---------------------- JOB JOB JOB JOB ------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: orthogonal) basic ideas are discussed, regarding computational effort, stability, reliability, sensor requirements, and consistency as well as their useful applications. The first approach is an exact, geometric technique using line representations extracted from the information produced by a laser-range finder. 
The second discussed possibility is a qualitative, topologic mapping of the environment using neural clustering techniques. Both presented classes of environment-modelling strategies are evaluated on the basis of principal arguments and of simulations resp. tests on real robots. Experiences from the MOBOT resp. the ALICE project are discussed together with some related work. ------------------------------------------------------------------------ --- ALICE - Topographic Exploration, Cartography and Adaptive Navigation --- on a Simple Mobile Robot ------------------------------------------------------------------------ --- File name is : Zimmer.ALICE.ps.Z TSRPC '94, Leeuwenhorst, The Netherlands, June 24-26, 1994 ALICE - Topographic Exploration, Cartography and Adaptive Navigation on a Simple Mobile Robot Pascal Lefevre, Andreas Pruess & Uwe R. Zimmer A sub-symbolic, adaptive approach to the basic world-modelling, navigation and exploration tasks of a mobile robot is discussed in this paper. One of the main goals is to adapt a couple of internal representations to a moderate structured and dynamic environment. The main internal world model is a qualitative, topologic map, which is continuously adapted to the actual environment. This adaptation is based on passive light and touch sensors as well as on a internal position calculated by dead-reckoning and by correlation to distinct sensor situations. Due to the fact that ALICE is an embedded system with a continuous flow of sensor-samples (i.e. without the possibility to stop this data-flow), realtime aspects have to be handled. ALICE is implemented as a mobile platform with an on-board computer and as a simulation, where light distributions and position drifts are considered. ------------------------------------------------------------------ FTP-information (anonymous login): FTP-Server is : ag_vp_file_server.informatik.uni-kl.de Mode is : binary Directory is : Neural_Networks/Reports File names are : Zimmer.ALICE.ps.Z Zimmer.Comparison.ps.Z Zimmer.Navigation.ps.Z Zimmer.Topologic.ps.Z Zimmer.Visual_Search.ps.Z Zimmer.Learning_Surfaces.ps.Z Zimmer.SPIN-NFDS.ps.Z .. or ... FTP-Server is : ftp.uni-kl.de Mode is : binary Directory is : reports_uni-kl/computer_science/mobile_robots/... Subdirectory is : 1994/papers File names are : Zimmer.ALICE.ps.Z Zimmer.Comparison.ps.Z Zimmer.Navigation.ps.Z Zimmer.Topologic.ps.Z Zimmer.Visual_Search.ps.Z Subdirectory is : 1993/papers File names are : Zimmer.learning_surfaces.ps.Z Zimmer.SPIN-NFDS.ps.Z Subdirectory is : 1992/papers File name is : Zimmer.rt_communication.ps.Z Subdirectory is : 1991/papers File names are : Edlinger.Pos_Estimation.ps.Z Edlinger.Eff_Navigation.ps.Z Knieriemen.euromicro_91.ps.Z Zimmer.albatross.ps.Z .. or ... FTP-Server is : archive.cis.ohio-state.edu Mode is : binary Directory is : /pub/neuroprose File names are : zimmer.alice.ps.Z zimmer.comparison.ps.Z zimmer.navigation.ps.z zimmer.visual_search.ps.z zimmer.learning_surfaces.ps.z zimmer.spin-nfds.ps.z ------------------------------------------------------------------ ----------------------------------------------------- ----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | Research Group Prof. v. 
Puttkamer | 67663 Kaiserslautern - Germany | -------------------------------------------------------------- | P.O.Box:3049 | Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 6. Time series archive ++++++++++++++++++++++ Various datasets of time series (to be used for prediction learning problems) are available for anonymous ftp from ftp.santafe.edu [192.12.12.1] in /pub/Time-Series". Problems are for example: fluctuations in a far-infrared laser; Physiological data of patients with sleep apnea; High frequency currency exchange rate data; Intensity of a white dwarf star; J.S. Bachs final (unfinished) fugue from "Die Kunst der Fuge" Some of the datasets were used in a prediction contest and are described in detail in the book "Time series prediction: Forecasting the future and understanding the past", edited by Weigend/Gershenfield, Proceedings Volume XV in the Santa Fe Institute Studies in the Sciences of Complexity series of Addison Wesley (1994). Lutz Lutz Prechelt (email: prechelt at ira.uka.de) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get (Voice: ++49/721/608-4068, FAX: ++49/721/694092) | less simple.  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: connectionist strategy can be discerned. Up to now this strategy has been largely implicit, to be found only in some quotes concerning the progress and goals of the connectionist research program, which usually end up somewhere in the introduction or the concluding remarks of an article. Those who are more theoretically inclined do write on connectionism and its role within cognitive science, but their articles mainly concern the differences between connectionism and symbolism and -again- only mention 'the strategy' implicitly. In the thesis I make explicit how connectionist research is striving towards the goal of better understanding cognition. By taking the above mentioned quotes literally, it is possible to construct a model of progress in the connectionist field. This stagewise model is valuable in several ways both within the field and in its relation to the 'outside world'. The main aim of the thesis is to describe this (methodological) stagewise treatment (of the progress) of connectionism, which is to be called STC -Stagewise Treatment of Connectionism-. To further indicate the value of such a model, it will be used to examine several important aspects of connectionism. The first is a closer look at connectionism itself. It is important to place the research that is currently done into a larger connectionist perspective. STC can be used to give an indication of the stronger and weaker points of this field of research and by making these explicit, connectionism can proceed in a more 'self aware' and precise way. The second use of STC lies in a comparison with other research programs, specifically of course the classical, symbolist program, which is considered to be the main competitor within the area of cognitive science. In order to do that I describe progress of the symbolist program in a way similar to STC. 
In the third part of the thesis the practical use of the model is demonstrated by using STC to describe in greater detail one specific sub-area of cognitive research, that of developmental psychology. The main goal is to show the value of STC as a descriptive tool, but after establishing the legitimacy of the model some indication of its prescriptive uses will follow.  From ertel at fbe.fh-weingarten.de Mon Jun 5 16:42:55 2006 From: ertel at fbe.fh-weingarten.de (ertel@fbe.fh-weingarten.de) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: POSITION AVAILABLE in hybrid systems research Message-ID: RESEARCH POSITION --- RESEARCH POSITION --- RESEARCH POSITION Integration of Symbolic and Connectionist Reasoning in the Research Group AUTOMATED REASONING at the Chair Informatik VIII Fakult"at f"ur Informatik Technische Universit"at M"unchen We are an AI research group (consisting of about a dozen researchers) at the University of Technology in Munich. Our major field of research is Automated Reasoning and our main product is the Automated Theorem Prover SETHEO. For a couple of years we are now pursuing research on the combination of symbolic reasoning with neural techniques. We are one of the partners in the ESPRIT project MIX (Modular Integration of Symbolic and Connectionist Processing in Knowledge-Based Systems). Our goal in this project is to design a system which is able to do rule based symbolic reasoning on certain and uncertain knowledge as well as inductive reasoning (generalization) from empirical data. To achieve this goal we employ techniques from Automated Reasoning, Statistics and Neural Networks. Among other applications we are starting to work on a medical expert system for diagnosis in clinical toxicology. Our future colleague should have experience in at least some of the mentioned research fields and should be willing to enter the others as well. She/he shall participate actively in the design of the computational model, in the realization of the application and should represent the project at MIX project meetings. The position is available immediately and limited (with good chances for continuation) until March 1997. Funding is according to BAT IIa or BAT Ib (approx. 65000 -- 70000 DM before tax) depending on qualification and experience. The applicants must have at least a Master's degree in Computer Science or a comparable qualification. Applicants without Ph.D. are expected to prepare a doctoral thesis in the course of their research tasks. Applicants without German as mother tongue should be sincerely willing to learn German. Please send as soon as possible your application documents with references to: Bertram Fronh"ofer Institut f"ur Informatik der Technischen Universit"at 80290 M"unchen Fax.: +49-89/526502 E-mail: fronhoef at informatik.tu-muenchen.de Since we want to fill this vacant position as soon as possible, we would highly appreciate to receive from applicants as soon as possible a short notification of interest (preferably by e-mail) containing a short description of the applicant's qualification: e.g. short CV, a list of publications, summary of master thesis or Ph.D. thesis, etc. _______________________________________________________________________________ Bertram Fronh"ofer Automated Reasoning Group Institut f"ur Informatik at the Lehrstuhl Informatik VIII Technische Universit"at M"unchen Tel.: +49-89/2105-2031 Arcisstr. 
21 Fax.: +49-89/526502 D-80290 Muenchen Email: fronhoef at informatik.tu-muenchen.de _______________________________________________________________________________  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: commercially available products relating to neural network tools and applications. In addition, advanced prototypes of tools and applications developed by public sector research organizations will be demonstrated. To receive a complete exhibitor's package, please contact the Conference Secretariat at the address indicated. ***************************************************************************** TEAR OFF HERE ***************************************************************************** INFORMATION FORM to be returned to: ICANN'95 1 avenue Newton bp 207 92 142 CLAMART Cedex France ICANN ' 95 Paris, October 9-13, 1995 Last name : .......................................................... First Name : ........................................................ Organization or company : ............................................ ...................................................................... ...................................................................... Postal code/Zip code : ............................................... City : ............................................................... Country : ............................................................ Tel : ................................................................ Fax : ................................................................ Electronic mail:...................................................... * I wish to attend the O Scientific conference O Industrial conference * I intend to exhibit * I intend to submit a paper Provisional title.................................................... Author (s) : ........................................................ Brief outline of the subject : ...................................... ..................................................................... Category : * Scientific conference O Theory O Algorithms & architectures O Implementations O Cognitive sciences & AI O Neurobiology O Applications ( please specify) * Industrial conference O Tools O Techniques O Applications ( please specify) ***************************************************************************** TEAR OFF HERE ***************************************************************************** STEERING COMMITTEE Chair F.Fogelman - Sligos (Paris, F) Scientific Program co-chairs G.Dreyfus - ESPCI (Paris, F) M.Weinfeld - Ecole Polytechnique (Palaiseau, F) Industrial Program chair P.Corsi - CEC (Brussels, B) Tutorials & Publications chair P.Gallinari - Universite P.& M.Curie (Paris, F) SCIENTIFIC PROGRAM COMMITTEE (Preliminary) I. Aleksander (UK); L.B. Almeida (P); S.I. Amari (J); E. Bienenstock (USA); C. Bishop (UK); L. Bottou (F); J. M. Buhmann (D); S. Canu (F); V. Cerny (SL); M. Cosnard (F); R. De Mori (CDN); R. Eckmiller (D); N. Franceschini (F); S. Gielen (NL); J. Herault (F); M. Jordan (USA); D. Kayser (F); T. Kohonen (SF); A. Lansner (S); Z. Li (USA); L. Ljung (S); C. von der Malsburg (D); S. Marcos (F); P.Morasso (I); J.P.Nadal(F); E. Oja (SF); P. Peretto (F); C. Peterson (S); L. Personnaz (F); R. Pfeiffer (CH); T. Poggio (USA); P. Puget (F); S. Raudys (LT); H. Ritter (D); M. Saerens ( B); W.von Seelen (D); J.J. Slotine (USA); S. Solla (DK); J.G. Taylor (GB); C. Torras (E); B. 
Victorri (F); A. Weigend (USA). INDUSTRIAL PROGRAM COMMITTEE (Preliminary) M. Boda (S); B. Braunschweig (F); C. Bishop (UK); J.P. Corriou (F); M. Duranton (F); A. Germond (CH); I. Guyon (USA); P. Refenes (UK); S. Thiria (F); C. Wellekens (B); B. Wiggins (UK). ***************************************************************************** From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From tesauro at watson.ibm.com Mon Jun 5 16:42:55 2006 From: tesauro at watson.ibm.com (tesauro@watson.ibm.com) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NIPS*94 registration and hotel deadlines Message-ID: The NIPS*94 conference program, with titles and authors of all talks and posters, is online and can be accessed via the NIPS homepage at: http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html The abstract booklet will also be appearing on the homepage soon. -- Dave Touretzky, NIPS*94 Program Chair ================================================================ This is a reminder that the deadline for early registration for NIPS*94 is this SATURDAY, OCTOBER 29. To obtain a copy of the registration brochure, send e-mail to nips94 at mines.colorado.edu. The brochure is also available on-line via the NIPS*94 Mosaic homepage (http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html), or by anonymous FTP: FTP site: mines.colorado.edu (138.67.1.3) FTP file: /pub/nips94/nips94-registration-brochure.ps The deadlines for hotel reservations in Denver and in Vail are also fast approaching. Information on hotel accommodations is given below. In Denver, the official hotel for NIPS*94 is the Denver Marriott City Center. The NIPS group rate is $74.00 per night single, $84.00 double (plus 11.8% tax). For reservations, call (800)228--9290 and say that you are attending NIPS*94. Cut-off date for reservations is Nov. 11. The Denver Marriott City Center may be contacted directly at (303)297--1300 for further information. The Marriott City Center is located in the heart of Denver and is easily accessible by taxi or local airport shuttle services. In Vail, the official hotel for NIPS*94 is the Marriott Vail Mountain Resort (formerly known as the Radisson Resort Vail, as listed in our previous publicity). The NIPS group rate is $80.00 per night, single or double (plus 8% tax). For reservations, phone (303)476--4444. Cut-off date for reservations is Nov. 1. --Gerry Tesauro NIPS*94 General Chair  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: enough details of proofs so that he/she can judge, on that basis alone, whether it is likely that the proofs in the paper are correct. In this situation it is helpful for the referee if he/she can also take into account how carefully that particular author tends to check his/her math. proofs (of course, if they don't know the author, they should give the benefit of the doubt). It is a fact of life that different authors tend to check their proofs with quite different amounts of care. Unfortunately it is NECESSARY for a referee to make a guess about the correctness of proofs in a theory paper. On the basis of the statements of the theorems and their consequences alone, incorrect theoretical papers often appear to be more exciting (and therefore more "acceptable") than correct ones. 
Hence I am a bit afraid that a "blind reviewing" policy provides an incentive for submitting exciting but only half-finished theoretical papers, and that NIPS ends up publishing more incorrect theory papers. I would like to add that in the well-known (often very selective) conferences in theoretical computer science the submissions are NOT anonymous, and this seems to work satisfactorily. There, the main precaution for avoiding biased decisions lies in a careful selection of referees and program committee members (trying to get researchers who are known for the quality of their own work; but still enforcing a substantial amount of rotation). The results of these policies are certainly not perfect, but quite good. Wolfgang Maass  From ted at SPENCER.CTAN.YALE.EDU Mon Jun 5 16:42:55 2006 From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: "Blind" reviews are impossible Message-ID: Even a nonexpert reviewer can figure out who wrote a paper simply by looking for citations of prior work. The only way to guarantee a "blind" review is to forbid authors from citing anything they've done before, or insist on silly euphemisms when citing such publications. --Ted  From LAWS at ai.sri.com Mon Jun 5 16:42:55 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Tue 20 Dec 94 20:31:39-PST Subject: Open Review In-Reply-To: <199412192128.AA02352@morticia.cnns.unt.edu> Message-ID: <787984299.0.LAWS@AI.SRI.COM> > From David Tam: > I think a totally honest system has to be doubly-open ... I agree. But now you're talking about a "scientific revolution." If reviewers are not guaranteed anonymity, many -- most? -- of the better-qualified people will refuse to review. (Consider legal liability, for instance. Will professional societies indemnify reviewers against malpractice claims? And, liability aside, how many of the "top people" want to spend their time feuding with colleagues or answering challenges from offended authors?) Not that feuding isn't an acceptable alternative. Louis Pasteur feuded bitterly with opponents of his bacterial theory of anthrax and other diseases; eventually he won. Lister fought for antisepsis; Jenner (and Pasteur) fought for immunization. They won, just as Galileo won against the might of the Church. But if that's the system, either it has to be the whole system -- no one can escape just by boycotting one or several conferences -- or you have to pay a few good people to become knowledgeable critics. There's a long tradition of professional critics in art, drama, literature, politics, entertainment, travel services, and fine dining. There's only one reason that we have no such pundits in science -- we offer no financial support for such a career. Professional critics have their own critics, of course -- individually and as an institution -- but that's just part of the doubly-open review system. I think it's healthy and I'd love to see scientific journalism illuminate our field. It won't happen on the initiative of those now in power, as they need the shadows to keep the current system going. (Questions about the quality of graduate education, necessity of the research being done, and exploitation of students and postdocs are best not asked. They won't lead to reform, but to funds being withdrawn from our field.) The revolution will happen, but through grass-roots self-publishing. Tenure committees are still committed to counting papers in prestigious forums, but that will change when the current journals and conferences collapse. 
Online journals and "conferences" will take over -- or evolve from the existing channels -- but self-publication will become an increasingly important way of sharing results. And with that comes the need for amateur and professional reviewers. Unpaid reviews will predominate within each discipline, but paid reviewers, abstracters, journalists, and the like will follow the discussions and report significant findings to researchers in nearby fields -- and to funding agencies and the general public. Reports of these gatekeepers will in turn be reviewed, with some being acknowledged as more reliable than others. Eventually the tenure committees will start looking at the reviews rather than publication counts. Can this work in Computer Science? It already does, in much of the computer hardware world. We may not think of Infoworld or Computerworld as part of the academic press, but they do pick up important stories from time to time. EE Times broke the news of the Pentium bug, and often carries Colin Johnson's reports on neural-network hardware advances. Other articles cover ARPA funding for ANN initiatives, or other news of interest to professional researchers and developers. What distinguishes an industry from a scientific discipline? It is largely the presence of commercial journalism. An industry has at least a weekly trade magazine to keep everyone informed of what's happening, what resources are available, and where the jobs are. Of course there has to be money pumping around also, or the trade magazine couldn't flourish -- but online journalism may be able to operate much more cheaply. (Or may not. The role of advertising hasn't yet been established, and it is advertising that pays for most trade publications.) Before doubly-open review can take hold, with professional journalists, columnists, and the like to contribute and to referee, there's still a bit of pioneering to be done. I'm working on one approach, trying to build a professional association and publication ab initio, tailored to the online age. The association, Computists International, is a mutual-aid society for AI/IS/CS researchers. Our flagship publication is the weekly Computists' Communique, a cross between a newsletter and a news wire (with echoes of Reader's Digest and Johnny Carson). The Communique hasn't grown enough yet to have regular columnists or deep critical analysis of scientific controversies, but it comes closer than most other publications. The connectionists discussion stream is one of many from which I draw material on inference, pattern recognition, and related topics. Last year, I tried to offer connectionists free issues of the Communique -- one per month, in what I call my Full Moon subset. The announcement was refused by your moderator as not being entirely related to neural-network theory. Assuming that this message gets past the gatekeepers, I'd like to make the offer again. Contact me at the address below (or reply to this message, IF that won't send your message back to the connectionists list). Mention "connectionists," ask for the Full Moon subset, and have your full name somewhere in the message. I'll sign you up for one free issue per month, just to introduce my service and to keep in touch with you. It will give you a good look at the kind of "niche journalism" the net will currently support. I hope some of you will take up the torch, starting similar publications that are specific to your own interests. 
The world would be better for having an online high-signal newsletter devoted to connectionism.

-- Ken Laws

Dr. Kenneth I. Laws; Computists International; laws at ai.sri.com. Ask about the free Full Moon subset of the Computists' Communique.

-------

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

in education has been a major focus of our development efforts. The intention has been both to support the increased use of numerical simulations in neuroscience and to provide a basis for more sophisticated approaches to teaching neuroscience in general. With particular respect to the connectionist mailing list, we believe the GENESIS tutorials, in concert with the Book of GENESIS, provide a means to further the neuroscience education of our engineering and neural network colleagues.

For those currently using GENESIS in education, or interested in doing so, we have recently set up a moderated email newsgroup that will enable users of GENESIS in teaching to share ideas, syllabi, exercises, computer lab handouts, and other materials that they may develop when teaching neuroscience and/or neural modeling. Those interested should contact: genesis-teach-request at smaug.bbb.caltech.edu.

************************************************************************

Additional information

Additional information on the Book of GENESIS (including the table of contents) and the free GENESIS distribution is available over the net from Caltech by sending an email request to genesis at cns.caltech.edu, or by accessing the World Wide Web server, http://www.bbb.caltech.edu/GENESIS. The WWW server will also allow you to see "snapshots" of the GENESIS tutorials, take a look at the GENESIS programmers manual, and find information about research which has been or is currently being conducted using GENESIS.

"The Book of GENESIS" is published by TELOS, an "electronic publishing" affiliate of Springer-Verlag, and may be ordered from Springer by phone, mail, fax, email, or through the TELOS WWW page. Here is the relevant ordering information:

The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System, by James M. Bower and David Beeman 1994/450 pages/Hardcover ISBN 0-387-94019-7

Send orders to: Springer-Verlag New York, Inc. PO Box 2485 Secaucus, NJ 07096-2485 Order Desk: 1-800-777-4643 FAX: 201-348-4505 email: info at telospub.com WWW: http://www.telospub.com/genesis.html

-------------------------------------------

***************************************

James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic laboratory address: http://www.bbb.caltech.edu/bowerlab NCSA Mosaic address for GENESIS: http://www.bbb.caltech.edu/GENESIS

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

Further information can be obtained from either organizer: Dr. Henk J. Haarmann: haarmann at psy.cmu.edu tel:(412)-268-2402 Dr.
Marcel Adam Just: just at psy.cmu.edu tel:(412)-268-2791 -------------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Return-Path From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: commercially available products relating to neural network tools and applications. In addition, advanced prototypes of tools and applications developed by public sector research organizations will be demonstrated. To receive a complete exhibitor's package, please contact the Conference Secretariat at the address indicated. ***************************************************************************** TEAR OFF HERE ***************************************************************************** INFORMATION FORM to be returned to: ICANN'95 1 avenue Newton bp 207 92 142 CLAMART Cedex France Fax: +33 - 1 - 41 28 45 84 ICANN ' 95 Paris, October 9-13, 1995 Last name : .......................................................... First Name : ........................................................ Organization or company : ............................................ ..................................................................... ..................................................................... Postal code/Zip code : ............................................... City : ............................................................... Country : ............................................................ Tel : .................................Fax : ......................... Electronic mail:...................................................... * I wish to attend the O Scientific conference O Industrial conference * I intend to exhibit * I intend to submit a paper Provisional title.................................................... Author (s) : ........................................................ Brief outline of the subject : ...................................... .................................................................... Category : * Scientific conference O Theory O Algorithms & architectures O Implementations O Cognitive sciences & AI O Neurobiology O Applications ( please specify) * Industrial conference O Tools O Techniques O Applications ( please specify) ***************************************************************************** TEAR OFF HERE ***************************************************************************** STEERING COMMITTEE Chairs F. Fogelman - Sligos (Paris, F) J.C. Rault - C3ST (Paris, F) Scientific Program co-chairs G. Dreyfus - ESPCI (Paris, F) M. Weinfeld - Ecole Polytechnique (Palaiseau, F) Industrial Program chair P. Corsi - CEC (Brussels, B) Tutorials & Publications chair P. Gallinari - Universite P.& M.Curie (Paris, F) SCIENTIFIC PROGRAM COMMITTEE I. Aleksander (UK); L.B. Almeida (P); S.I. Amari (J); M. Berthod (F); E. Bienenstock (USA); C.M. Bishop (UK); L. Bottou (F); J. M. Buhmann (D); S. Canu (F); V. Cerny (SL); M. Cosnard (F); R. De Mori (CAN); R. Eckmiller (D); N. Franceschini (F); S. Gielen (NL); J.P. Haton (F); J. Herault (F); M. Jordan (USA); D. Kayser (F); T. Kohonen (SF); V. Kurkova (CZ); A. Lansner (S); Z. Li (USA); L. Ljung (S); C. von der Malsburg (D); S. Marcos (F); P.Morasso (I); J.P.Nadal(F); E. Oja (SF); P. Peretto (F); C. Peterson (S); L. Personnaz (F); R. Pfeiffer (CH); T. 
Poggio (USA); P. Puget (F); S. Raudys (LT); H. Ritter (D); M. Saerens ( B); W. von Seelen (D); J.J. Slotine (USA); S. Solla (DK); J.G. Taylor (GB); C. Torras (E); B. Victorri (F); A. Weigend (USA). INDUSTRIAL PROGRAM COMMITTEE (Preliminary) V.Ancona (F); M. Boda (S); B. Braunschweig (F); C. Bishop (UK); J.P. Corriou (F); M. Dougherty (UK); M. Duranton (F); A. Germond (CH); I. Guyon (USA); G. Kuhn (D); H. Noel (F); P. Refenes (UK); S. Thiria (F); C. Wellekens (B); B. Wiggins (UK). **************************************************************************** PROGRAM PLENARY SPEAKERS J. Friedman (USA); M. Kawato (J); T. Kohonen (SF); L. Ljung (S); W. Singer (D) INVITED SPEAKERS C.M. Bishop (UK); H. Bourlard (B); B. Denby (I); I. Guyon (USA); G. Hinton (CAN); A. Konig (D); Y. Le Cun (USA); D. McKay (UK); C. von der Malsburg (D); E. Oja (SF); C. Peterson (S); T. Poggio (USA); S. Raudys (LT); J.G. Taylor (GB); C. Torras (E); V. Vapnik (USA). TUTORIALS C.M. Bishop (UK); L. Bottou (F); J. Friedman (USA); A. Gee (UK); J. Hertz (DK); L. Jackel (USA); L. Ljung (S); E. Oja (SF); L. Personnaz (F); T. Poggio (USA); I. Rivals (F); V. Vapnik (USA). INDUSTRIAL SESSIONS Banking, finance & insurance (P. Refenes); Defense (H. Noel); Document processing, OCR, text retrieval & indexing (I. Guyon); Forecasting & marketing (G. Kuhn); Medicine (J. Demongeot); NN Clubs & Funding Programs (C. Bishop); Oil industry (B. Braunschweig); Power industry (A. Germond); Process engineering, control and monitoring (J.P. Corriou); Robotics (W. von Seelen); Speech processing (C. Wellekens); Telecommunications (M. Boda); Teledetection (S. Thiria); Transportation M. Dougherty); VLSI & dedicated hardware (M. Duranton). ************************************************************************ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: take the airport bus to Stockholm City. It takes approximately 45 minutes. From there, take a taxi to the hotel or conference center. From Sergel Plaza and Strand, IVA is walking distance. If you use the subway you should get off at the Ostermalm station and use the Grev Turegatan exit. Hotels To reserve a hotel at the conference rate call between 8:00am and 4:00pm Stockholm (European) time: Annika Lindqvist e-mail: lme.lmehotel at memo.ericsson.se tel: +46 8 6813590 fax: +46 8 6813585 The following hotels have been recommended although others are available. Prices are in Swed- ish Kroners (Single/Double) including VAT: Hotel Sergel Plaza: Brunkebergstorg 9, tel +46 8 226600 (825/825) Located close to conference. Strand/SAS Hotel: Nybrokajen 9, tel +46 8 6787800 (925/925) Located close to conference. 
Hotel Attache: Cedergrenvagen 16, tel +46 8 181185 (585/585) Approximately 10 minutes by The Underground. Good Morning South: Vastertorpsvagen 131, tel +46 8 180140 (495/495) Approximately 15 minutes by The Underground. Hotel Malmen: Gotgatan 49-51, tel +46 8 226080 (655/655) Located south of city in nice area. Underground station in building. Sharing a Room Because of costs, some may want to share a room. To be part of the Room Share List, send e-mail to timxb at bellcore.com with your name, preferred way to be contacted, and preferences (male/female, non-smoker, etc.). Once you make arrangements, contact timxb again, so that you can be removed from the list. ----------------------------------------------------------------------------- --------------------Registration Form--------------------------------------- International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 Name: Institution: Mailing Address: Telephone: Fax: E-mail: Make check ($400; $500 after May 1, 1995; $200 students) out to IWANNT*95. Please make sure your name is on the check. Registration includes breaks, a boat tour of the Stockholm archipelago, and proceedings available at the conference. Mail to: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South Street Morristown, NJ 07960, USA Voice: (201) 829-4993 Fax: (201) 829-5888 Email: bg1 at faline.bellcore.com From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: approach for the analysis of the Linsker's unsupervised Hebbian learning network. The behavior of this model is determined by the underlying nonlinear dynamics that are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses. These parameters determine the presence or absence of a specific receptive field (also referred to as a connection pattern) as a saturated fixed point attractor of the model. In this paper, we perform a qualitative analysis of the underlying nonlinear dynamics over the parameter space, determine the effects of the system parameters on the emergence of various receptive fields, and provide a rigorous criterion for the parameter regime in which the network will have the potential to develop a specially designated connection pattern. In particular, this approach analytically demonstrates, for the first time, the crucial role played by the synaptic arbor density. For example, our analytic predictions indicate that no structured connection pattern can emerge in a Linsker's network that is fully feedforward connected without localized synaptic arbor density. Our general theorems lead to a complete and precise picture of the parameter space that defines the relationships between the different sets of system parameters and the corresponding fixed point attractors, and yield a method to predict whether a given connection pattern will emerge under a given set of parameters without running a numerical simulation of the model. The theoretical results are corroborated by our examples (including center- surround and certain oriented receptive fields), and match key observations reported in Linsker's numerical simulation. 
The rigorous approach presented here provides a unified treatment of many diverse problems about the dynamical mechanism of a class of models that use the limiter function (also referred to as the piecewise linear sigmoidal function) as the constraint limiting the size of the weight or the state variables, and applies not only to the Linsker's network but also to other learning or retrieval models of this class. ------------------------------------------------------------------------ Key Words: Unsupervised Hebbian learning, Network self-organization, Linsker's developmental model, Brain-State-in-a-Box model, Ontogenesis of primary visual system, Afferent receptive field, Synaptic arbor density, Correlations, Limiter function, Nonlinear dynamics, Qualitative analysis, Parameter space, Coexistence of attractors, Fixed point, Stability. ------------------------------------------------------------------------ Contents: {1} Introduction {1.1} Formulation Of The Linsker's Developmental Model {1.2} Qualitative Analysis Of Nonlinear System And Afferent Receptive Fields {1.3} Summary Of Our Approach {2} General Theorems About Fixed Points And Their Stability {3} The Criterion For The Division Of Parameter Regimes For The Occurrence Of Attractors {3.1} The Necessary And Sufficient Condition For The Emergence Of Afferent Receptive Fields {3.2} The General Principal Parameter Regimes {4} The Afferent Receptive Fields In The First Three Layers {4.1} Description Of The First Three Layers Of The Linsker's Network {4.2} Development Of Connections Between Layers A And B {4.3} Analytic Studies Of Synaptic Density Functions' Influences In The First Three Layers {4.4} Examples Of Structured Afferent Receptive Fields Between Layers B And C {5} Concluding Remarks {5.1} Synaptic Arbor Density Function {5.2} The Linsker's Network And The Brain-State-in-a-Box Model {5.3} Dynamics With Limiter Function {5.4} Intralayer Interaction And Biological Discussion References Appendix A: On the Continuous Version of the Linsker's Model Appendix B: Examples of Structured Afferent Receptive Fields between Layers B and C of the Linsker's Network ------------------------------------------------------------------------ FTP Instructions: unix> ftp archive.cis.ohio-state.edu login: anonymous password: (your e-mail address) ftp> cd pub/neuroprose ftp> binary ftp> get pan.purdue-tr-ee-95-12.ps.Z ftp> quit unix> uncompress pan.purdue-tr-ee-95-12.ps.Z unix> ghostview pan.purdue-tr-ee-95-12.ps (or however you view or print) *************** PLEASE DO NOT FORWARD TO OTHER BBOARDS ***************** From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From harmonme at aa.wpafb.af.mil Mon Jun 5 16:42:55 2006 From: harmonme at aa.wpafb.af.mil (HARMONME) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Paper on Residual Advantage Learning Message-ID: <2455330927061995/A03539/YODA> The following paper, submitted to NIPS-95, is now available via WWW at the following address: http://ace.aa.wpafb.af.mil/~aaat/harmon.html =============================================================================== Residual Advantage Learning Applied to a Differential Game Mance E. Harmon Wright Laboratory WL/AAAT Bldg. 635 2185 Avionics Circle Wright-Patterson Air Force Base, OH 45433-7301 harmonme at aa.wpafb.mil Leemon C. Baird III U.S.Air Force Academy 2354 Fairchild Dr. 
Suite 6K41, USAFA, CO 80840-6234 baird at cs.usafa.af.mil

ABSTRACT

An application of reinforcement learning to a differential game is presented. The reinforcement learning system uses a recently developed algorithm, the residual form of advantage learning. The game is a Markov decision process (MDP) with continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. On each time step each player chooses one of two possible actions: turn left or turn right 90 degrees. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. The reinforcement learning algorithm for optimal control is modified for differential games in order to find the minimax point, rather than the maximum. As far as we know, this is the first time that a reinforcement learning algorithm with guaranteed convergence for general function approximation systems has been demonstrated to work with a general neural network.

===============================================================================

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From eann96 at lpac.qmw.ac.uk Mon Jun 5 16:42:55 2006 From: eann96 at lpac.qmw.ac.uk (Engineering Apps in Neural Nets 96) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification, incremental learning,... The project includes theoretical work on classification algorithms, simulations and benchmarks, especially on realistic industrial data. Hardware implementation, especially the VLSI option, is the last objective.

The set of databases available is to be used for tests and benchmarks of machine-learning classification algorithms. The databases are split into two parts: ARTIFICIALly generated databases, mainly used for preliminary tests, and REAL ones, used for objective benchmarks and comparisons of methods. The choice of the databases has been guided by various parameters, such as availability of published results concerning conventional classification algorithms, size of the database, number of attributes, number of classes, overlapping between classes and non-linearities of the borders,... Results of PCA and DFA preprocessing of the REAL databases are also included, together with several measures useful for database characterization (statistics, fractal dimension, dispersion,...).

All these databases and their preprocessing are available together with a postscript technical report describing in detail the different databases ('Databases.ps.Z' - 45 pages - 777781 bytes) and a report related to the comparative benchmarking studies of various algorithms ('Benchmarks.ps.Z' - 113 pages - 1927571 bytes), either well known to the Statistical and Neural Network communities (MLP, RCE, LVQ, k_NN, GQC) or developed in the framework of the Elena project (IRVQ, PLS).
A LaTeX bibfile containing more than 90 entries corresponding to the Elena partners' bibliography related to the project is also available ('Elena.bib') in the same directory.

All files are available by anonymous ftp from the following directory: ftp://ftp.dice.ucl.ac.be/pub/neural-nets/ELENA/databases

The databases are split into two parts: the 'ARTIFICIAL' ones, generated in order to obtain some defined characteristics, and for which the theoretical Bayes error can be computed, and the 'REAL' ones, collected in existing real-world applications.

The ARTIFICIAL databases ('Gaussian', 'Clouds' and 'Concentric') were generated according to the following requirements:
- heavy intersection of the class distributions,
- high degree of nonlinearity of the class boundaries,
- various dimensions of the vectors,
- already published results on these databases.
They are restricted to two-class problems, since we believe they yield answers to the most essential questions. The ARTIFICIAL databases are mainly used for rapid test purposes on newly developed algorithms.

The REAL databases ('Satimage', 'Texture', 'Iris' and 'Phoneme') were selected according to the following requirements:
- classical databases in the field of classification (Iris),
- already published results on these databases (Phoneme, from the ROARS ESPRIT project and 'Satimage' from the STATLOG ESPRIT project),
- various dimensions of the vectors,
- sufficient number of vectors (to avoid the ``empty space phenomenon''),
- the 'Texture' database, generated at INPG for the Elena project, is interesting for its high number of classes (11).

##############################################################################

###########
# DETAILS #
###########

The 'Benchmarks' technical report
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The 'Benchmarks.ps' Elena report is related to the benchmarking studies of various classifiers. Most of the classifiers which were used for the benchmark comparative studies are well known to the neural network and machine learning community. These are the k-Nearest Neighbour (k_NN) classifier, selected for its powerful probability density estimation properties; the Gaussian Quadratic Classifier (GQC), the most classical statistical parametric simple classification method; the Learning Vector Quantizer (LVQ), a powerful non-linear iterative learning algorithm proposed by Kohonen; the Reduced Coulomb Energy (RCE) algorithm, an incremental Region Of Influence algorithm; and the Inertia Rated Vector Quantizer (IRVQ) and the Piecewise Linear Separation (PLS) classifiers, developed in the framework of the Elena project.

The main objectives of the 'Benchmarks.ps' Elena report are the following:
- to provide an overall comprehensive view of the general problem of comparative benchmarking studies and to propose a useful common test basis for existing and further classification methods,
- to obtain objective comparisons of the different chosen classifiers on the set of databases described in this report (each classifier being used with its optimal configuration for each particular database),
- to study the possible links between the data structures of the databases, as viewed through some characterization parameters, and the behavior of the studied classifiers (mainly the evolution of their optimal configuration parameters),
- to study the links between the preprocessing methods and the classification algorithms from the performance and hardware-constraints point of view (especially the computation times and memory requirements).
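The leave-one-out choice of k reported for the k_NN results can be made concrete with a small sketch. The following is only an illustration, not code from the Elena distribution or from PACKLIB; the file name 'clouds.dat' and the odd-k search range are assumptions, and the data layout is the ASCII format described in the 'Databases format' section below.

import math
from collections import Counter

def load_elena(path):
    # One pattern per line: d floating-point attributes, then an integer label.
    data = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if fields:
                data.append(([float(v) for v in fields[:-1]], int(fields[-1])))
    return data

def knn_predict(train, x, k):
    # Plain Euclidean k-nearest-neighbour vote over the training patterns.
    nearest = sorted(train, key=lambda p: math.dist(p[0], x))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_error(data, k):
    # Leave-one-out error: classify each pattern with all the others as training set.
    errors = sum(knn_predict(data[:i] + data[i + 1:], x, k) != y
                 for i, (x, y) in enumerate(data))
    return errors / len(data)

if __name__ == "__main__":
    data = load_elena("clouds.dat")                         # hypothetical local copy
    best_k = min(range(1, 26, 2), key=lambda k: loo_error(data, k))
    print("k minimizing the leave-one-out error:", best_k)

For the larger databases a full leave-one-out pass is slow (quadratic in the number of patterns), so this is meant as a statement of the protocol rather than an efficient implementation.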
Databases format
~~~~~~~~~~~~~~~~

All the databases available are in the following format (after decompression):
- All files containing the databases are stored as ASCII files for easy editing and checking.
- In a file, each of the n lines holds one vectorial sample (instance), and each line consists of d floating-point numbers (the attributes) followed by the class label (which must be an integer).

Example:
1.51768 12.65 3.56 1.30 73.08 0.61 8.69 0.00 0.14 1
1.51747 12.84 3.50 1.14 73.27 0.56 8.55 0.00 0.00 0
1.51775 12.85 3.48 1.23 72.97 0.61 8.56 0.09 0.22 1
1.51753 12.57 3.47 1.38 73.39 0.60 8.55 0.00 0.06 1
1.51783 12.69 3.54 1.34 72.95 0.57 8.75 0.00 0.00 3
1.51567 13.29 3.45 1.21 72.74 0.56 8.57 0.00 0.00 1

There are NO missing values.

If you desire to get a database, you MUST do it in ftp binary mode. So if you aren't in this mode, simply type 'binary' at the ftp prompt.

EXAMPLE: to get the "phoneme" database:
cd REAL
cd phoneme
binary
get phoneme.txt
get phoneme.dat.Z
get ...
cd ...
...
quit

After your ftp session, you simply have to type 'uncompress phoneme.dat.Z' to get the uncompressed datafile.

Contents of the 'ARTIFICIAL' directory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The databases of this directory contain only the 'ARTIFICIAL' classification problems. The present 'ARTIFICIAL' databases are only two-class problems, since this yields answers to the most essential questions. For each problem, the confusion matrix corresponding to the theoretical Bayes boundary is provided with the confusion matrix obtained by a k_NN classifier (k chosen to reach the minimum of the total Leave-One-Out error). These databases were selected for use in preliminary tests and to study the behavior of the implemented algorithms for some particular problems:
- Overlapping classes: The classifier should have the ability to form a decision boundary that minimizes the amount of misclassification for all of the overlapping classes.
- Nonlinear separability: The classifier should be able to build decision regions that separate classes of any shape and size.

There is one subdirectory for each database. In this subdirectory, there is:
- A text file providing detailed information about the related database ('databasename.txt').
- The compressed database ('databasename.dat.Z'). The different patterns of each database are presented in a random order.
- For bidimensional databases, a postscript file representing the 2-D datasets (those files are in eps format).

For each subdirectory, the directoryname is the same as the name chosen for the concerned database. Here are the directorynames with a brief description.

- 'clouds'
Bidimensional distributions: the class 0 is the sum of three different normal distributions while the class 1 is another normal distribution, overlapping the class 0. 5000 patterns, 2500 in each class. This allows the study of the classifier behavior for heavy intersection of the class distributions and for a high degree of nonlinearity of the class boundaries.

- 'gaussian'
A set of seven databases corresponding to the same problem, but with dimensionality ranging from 2 to 8. This allows the study of the classifier behavior for different dimensionalities of the input vectors, for heavily overlapped distributions and for non-linear separability. These databases were already studied by Kohonen in: Kohonen, T. and Barna, G. and Chrisley, R., "Statistical Pattern Recognition with Neural Networks: Benchmarking Studies", IEEE Int. Conf. on Neural Networks, SOS Printing, San Diego, 1988.
In this paper, the performances of three basic types of neural-like networks (Backpropagation network, Boltzmann machine and Learning Vector Quantization) are evaluated and compared to the theoretical limit.

- 'concentric'
Bidimensional uniform concentric circular distributions. 2500 instances, 1579 in class 1, 921 in class 0. This database may be used to study the linear separability of the classifier when some classes are nested in others without overlapping.

Contents of the 'REAL' directory
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The databases of this directory contain only the real classification problem sets selected for the Elena benchmarking studies. There is one subdirectory for each database. In this subdirectory, there are:
- a text file giving detailed information about the related database (`databasename.txt'),
- the compressed original database in the Elena format (`databasename.dat.Z'); the different patterns of each database being presented in a random order.

By way of a normalization process, each original feature will have the same importance in a subsequent classification process. A typical method is first to center each feature separately and then to reduce it to unit variance; this process has been applied to all the REAL Elena databases in order to build the ``CR'' databases contained in the ``databasename_CR.dat.Z'' files.

The Principal Components Analysis (PCA) is a very classical method in pattern recognition [Duda73]. PCA reduces the sample dimension in a linear way for the best representation in lower dimensions, keeping the maximum of inertia. The best axis for the representation is, however, not necessarily the best axis for the discrimination. After PCA, features are selected according to the percentage of initial inertia which is covered by the different axes, and the number of features is determined according to the percentage of initial inertia to keep for the classification process. This selection method has been applied on every REAL database after centering and reduction (thus on the databasename_CR.dat files). When quasi-linear correlations exist between some initial features, these redundant dimensions are removed by PCA and this preprocessing is then recommended. In this case, before a PCA, the determinant of the data covariance matrix is near zero; such a database is thus badly conditioned for all processes which use this information (the quadratic classifier for example).

The following files, related to PCA, are also available for the REAL databases:
- ``databasename_PCA.dat.Z'', the projection of the ``CR'' database on its principal components (sorted in decreasing order of the related inertia percentage),
- ``databasename_corr_circle.ps.Z'', a graphical representation of the correlation between the initial attributes and the first two principal components,
- ``databasename_proj_PCA.ps.Z'', a graphical representation of the projection of the initial database on the first two principal components,
- ``databasename_EV.dat'', a file with the eigenvalues and associated inertia percentages.

The Discriminant Factorial Analysis (DFA) can be applied to a learning database where each learning sample belongs to a particular class [Duda73]. The number of discriminant features selected by DFA is fixed as a function of the number of classes (c) and of the number of input dimensions (d); this number is equal to the minimum of d and c-1.
In the usual case where d is greater than c, the output dimension is fixed equal to the number of classes minus one, and the discriminant axes are selected in order to maximize the between-variance and to minimize the within-variance of the classes. The discrimination power (ratio of the projected between-variance over the projected within-variance) is not the same for each discriminant axis: this ratio decreases for each axis. So for a problem with many classes, this preprocessing will not always be efficient, as the last output features will not be so discriminant. This analysis uses the information of the inverse of the global covariance matrix, so the covariance matrix must be well conditioned (for example, a preliminary PCA must be applied to remove the linearly correlated dimensions). The DFA preprocessing method has been applied to the first 18 principal components of the 'satimage_PCA' and 'texture_PCA' databases (thus keeping only the first 18 attributes of these databases before applying the DFA preprocessing) in order to build the 'satimage_DFA.dat.Z' and 'texture_DFA.dat.Z' database files, having respectively 5 and 10 dimensions (the 'satimage' database having 6 classes and 'texture' 11).

For each subdirectory, the directoryname is the same as the name chosen for the contained database. Here are the directorynames with a brief numerical description of the available databases.

- phoneme
French and Spanish phoneme recognition problem. The aim is to distinguish between nasal (AN, IN, ON) and oral (A, I, O, E, E') vowels. 5404 patterns, 5 attributes (the normalized amplitudes of the first five harmonics), 2 classes. This database was in use in the European ESPRIT 5516 project ROARS. The aim of this project is the development and the implementation of a real-time analytical system for French and Spanish phoneme recognition.

- texture
The aim is to distinguish between 11 different textures (Grass lawn, Pressed calf leather, Handmade paper, Raffia looped to a high pile, Cotton canvas, ...), each pattern (pixel) being characterised by 40 attributes built by the estimation of fourth-order modified moments in four orientations: 0, 45, 90 and 135 degrees. 5500 patterns, 11 classes of 500 instances (each class refers to a type of texture in the Brodatz album). The original source of this database is: P. Brodatz, "Textures: A Photographic Album for Artists and Designers", Dover Publications, Inc., New York, 1966. This database was generated by the Laboratory of Image Processing and Pattern Recognition (INPG-LTIRF, Grenoble, France) in the development of the Esprit project ELENA No. 6891 and the Esprit working group ATHOS No. 6620.

- satimage (*)
Classification of the multi-spectral values of an image of the Landsat satellite. Each line contains the pixel values in four spectral bands of each of the 9 pixels in a 3x3 neighbourhood and a number indicating the classification label of the central pixel (corresponding to the type of soil: red soil, cotton crop, grey soil, ...). The aim is to predict this classification, given the multi-spectral values. 6435 instances, 36 attributes (4 spectral bands x 9 pixels in neighbourhood), 6 classes. This database was in use in the European StatLog project, which involves comparing the performances of machine learning, statistical, and neural network algorithms on data sets from real-world industrial areas including medicine, finance, image analysis, and engineering design: D. Michie, D.J. Spiegelhalter, and C.C. Taylor, editors.
Machine learning, Neural and Statistical Classification. Ellis Horwood Series In Artificial Intelligence, England, 1994. - iris (*) This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other. 4 attributes (sepal length, sepal width, petal length and petal width). (*) These databases are taken from the ftp anonymous "UCI Repository Of Machine Learning Databases and Domain Theories" (ics.uci.edu: pub/machine-learning-databases): Murphy, P. M. and Aha, D. W. (1992). "UCI Repository of machine learning databases" [Machine-readable data repository]. Irvine, CA: University of California, Department of Information and Computer Science. [Duda73] Duda, R.O. and Hart, P.E., Pattern Classification and Scene Analysis, John Wiley & Sons, 1973. ############################################################################## The ELENA PROJECT ~~~~~~~~~~~~~~~~~ Neural networks are now known as powerful methods for empirical data analysis, especially for approximation (identification, control, prediction) and classification problems. The ELENA project investigates several aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification, incremental learning (control of the network size by adding or removing neurons),... URL: http://www.dice.ucl.ac.be/neural-nets/ELENA/ELENA.html ELENA is an ESPRIT III Basic Research Action project (No. 6891). It involves: INPG (Grenoble, F), UPC (Barcelona, E), EPFL (Lausanne, CH), UCL (Louvain-la-Neuve, B), Thomson-Sintra ASM (Sophia Antipolis, F) EERIE (Nimes, F). The coordinator of the project can be contacted at: Prof. Christian Jutten, INPG-LTIRF, 46 av. Flix Viallet, F-38031 Grenoble Cedex, France Phone: +33 76 57 45 48, Fax: +33 76 57 47 90, e-mail: chris at tirf.inpg.fr A simulation environment (PACKLIB) has been developed in the project; it is a smart graphical tool allowing fast programming and interactive analysis. The PACKLIB environment greatly simplifies the user's task by requiring only to write the basic code of the algorithms, while the whole graphical input, output and relationship framework is handled by the environment itself. PACKLIB is used for extensive benchmarks in the ELENA project and in other situations (image processing, control of mobile robots,...). Currently, PACKLIB is tested by beta users and a demo version available in the public domain. URL: http://www.dice.ucl.ac.be/neural-nets/ELENA/Packlib.html ############################################################################## IF YOU HAVE ANY PROBLEM, QUESTION OR PROPOSITION, PLEASE E_MAIL the following. VOZ Jean-Luc or Michel Verleysen Universite Catholique de Louvain DICE - Lab. 
de Microelectronique 3, place du Levant B-1348 LOUVAIN-LA-NEUVE E_mail : voz at dice.ucl.ac.be verleysen at dice.ucl.ac.be From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From winther at connect.nbi.dk Mon Jun 5 16:42:55 2006 From: winther at connect.nbi.dk (Ole Winther) Date: October 6, 1995 Subject: Paper available: "A mean field approach to Bayes Learning in feed-forward neural networks" Message-ID: FTP-host: connect.nbi.dk FTP-file: neuroprose/opper.bayes.ps.Z WWW-host: http://connect.nbi.dk ---------------------------------------------- The following paper is now available: A mean field approach to Bayes Learning in feed-forward neural networks [12 pages] Manfred Opper Theoretical Physics, University of Wurzburg, Germany Ole Winther CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark Abstract: We propose an algorithm to realise Bayes optimal predictions for feed-forward neural networks which is based on the TAP mean field method developed for the statistical mechanics of disordered systems. We conjecture that our approach will be exact in the thermodynamic limit. The algorithm results in a simple built-in leave-one-out crossvalidation of the predictions. Simulations for the case of the simple perceptron and the committee machine are in excellent agreement with the results of replica theory. Please do not reply directly to this message. ----------------------------------------------- FTP-instructions: unix> ftp connect.nbi.dk (or 130.225.212.30) ftp> Name: anonymous ftp> Password: your e-mail address ftp> cd neuroprose ftp> binary ftp> get opper.bayes.ps.Z ftp> quit unix> uncompress opper.bayes.ps.Z ----------------------------------------------- Ole Winther, Computational Neural Network Center (CONNECT) Niels Bohr Institute Blegdamsvej 17 2100 Copenhagen Denmark Telephone: +45-3532-5200 Direct: +45-3532-5311 Fax: +45-3142-1016 e-mail: winther at connect.nbi.dk  From austin at minster.cs.york.ac.uk Mon Jun 5 16:42:55 2006 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: characterize the needed relationship between the set of generalizers and the prior that allows cross-validation to work. David Wolpert From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Project lies in the use of patient-specific measurements of an epidemiological nature (such as maternal age, past obstetrical history, etc.) as well as fetal heart rate recordings, in the forecasting of a number of specific Adverse Pregnancy Outcomes. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: pattern-processing and classification systems which can be trained to forecast problems in pregnancy. 
This will involve continuation of work on pattern-classification and regression analysis, using neural networks operating on a very large database of about 1.2 million pregnancies from various European countries. Challenging components of the project include dealing with missing and uncertain variables, sensitivity analysis, variable selection procedures and cluster analysis. Many leading European obstetrical centres are involved in the Euro-PUNCH project, and close collaboration with a number of these will be an essential component of the post offered.

Candidates for this post are expected to have a good first degree and preferably a post-graduate degree in a relevant discipline. Some familiarity with medical statistics and neural networks is desirable but not essential. Salary (on the RA scale) will depend on age and experience, and is likely to be in the range of £14,317 to £15,986 per annum. Appointment would be subject to satisfactory health screening.

Applications will close on Friday 8th December 1995. Applications (naming two referees) should be submitted to:

Dr Kevin J Dalton PhD FRCOG Division of Materno-Fetal Medicine, Dept. Obstetrics & Gynaecology University of Cambridge, Addenbrooke's Hospital Cambridge CB2 2QQ Tel: +44-1223-410250 Fax: +44-1223-336873 or 215327 e-mail: kjd5 at cus.cam.ac.uk

Informal enquiries about the project should be directed to: (Obstetric side) Dr Kevin Dalton kjd5 at cus.cam.ac.uk (Engineering Side) Dr Niranjan niranjan at eng.cam.ac.uk (Engineering Side) Dr Richard Prager rwp at eng.cam.ac.uk

-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Saturday, December 2, 1995, 7:30AM-9:30AM and Subject: No subject Message-ID:

Topic and purpose of the workshop
=================================

Proper benchmarking of neural networks on non-toy examples is needed from an application perspective in order to evaluate the relative strengths and weaknesses of proposed algorithms, and from a theoretical perspective in order to validate theoretical predictions and see how they relate to realistic learning tasks. Despite this important role, NN benchmarking is rarely done well enough today:

o Learning tasks: Most researchers use only toy problems and, perhaps, one at least somewhat realistic problem. While this shows that an algorithm works at all, it cannot explore its strengths and weaknesses.
o Design: Often the setup is designed wrongly and cannot produce valid results from a statistical point of view.
o Reproducibility: In many cases, the setup is not described exactly enough to reproduce the experiments. This violates scientific principles.
o Comparability: Hardly ever are two setups of different researchers so similar that one could directly compare the experimental results. This has the effect that even after a large number of experiments with certain algorithms, their differences in learning results may remain unclear.

There are various reasons why we still find this situation:

o unawareness of the importance of proper benchmarking;
o insufficient pressure from reviewers towards good benchmarking;
o unavailability of a sufficient number of standard benchmarking datasets;
o lack of standard benchmarking procedures.

The purpose of the workshop is to address these issues in order to improve research practices, in particular more benchmarking with more and better datasets, better reproducibility, and better comparability.
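As one purely illustrative picture of what a statistically defensible and reproducible setup can look like, the sketch below runs a seeded, repeated train/test split and reports a mean and spread rather than a single number; train_a, train_b, load_dataset, the file name and the model interface are placeholders, not tools provided by the workshop or by its planned benchmark database.

import random
import statistics

def evaluate(train_fn, data, n_trials=10, test_fraction=0.25):
    # Repeated holdout with fixed, reportable seeds; returns mean and std of accuracy.
    accuracies = []
    for trial in range(n_trials):
        rng = random.Random(trial)                 # seed = trial number, so it can be reported
        shuffled = data[:]
        rng.shuffle(shuffled)
        n_test = int(len(shuffled) * test_fraction)
        test, train = shuffled[:n_test], shuffled[n_test:]
        model = train_fn(train, seed=trial)        # placeholder learner with its own seed
        correct = sum(model.predict(x) == y for x, y in test)
        accuracies.append(correct / n_test)
    return statistics.mean(accuracies), statistics.stdev(accuracies)

# Hypothetical usage:
#   data = load_dataset("some_benchmark.dat")      # list of (attributes, label) pairs
#   print("A:", evaluate(train_a, data))
#   print("B:", evaluate(train_b, data))
# The seeds, the split rule, n_trials and the reported spread are exactly the details
# that have to be written down for someone else to reproduce and compare against the run.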
Specific questions to be addressed at the workshop are:

[Concerning the data:]
o What benchmarking facilities (in particular: datasets) are publicly available? For which kinds of domains? How suitable are they?
o What facilities would we like to have? Who is willing to prepare and maintain them?
o Where and how can we get new datasets from real applications?

[Concerning the methodology:]
o When and why would we prefer artificial datasets over real ones and vice versa?
o What data representation is acceptable for general benchmarks?
o What are the most common errors in performing benchmarks? How can we avoid them?
o Real-life benchmarking war stories and lessons learned
o What must be reported for proper reproducibility?
o What are useful general benchmark approaches (broad vs. deep etc.)?
o Can we agree on a small number of standard benchmark setup styles in order to improve comparability? Which styles?

The workshop will focus on two things: launching a new benchmark database that is currently being prepared by some of the workshop chairs, and discussing the above questions in general and in the context of this database. The benchmark database facility is planned to comprise
o datasets,
o data format conversion tools,
o terminological and methodological suggestions, and
o a results database.

Workshop format
===============

We invite anyone who is interested in the above issues to participate in the discussions at the workshop. The workshop will consist of a few talks by invited speakers and extensive discussion periods. The purpose of the discussion is to refine the design and setup of the benchmark collection, to explore questions about its scope, format, and purpose, to motivate potential users and contributors of the facility, and to discuss benchmarking in general.

Workshop program
================

The following talks will be given at the workshop [The list is still preliminary]. After each talk there will be time for discussion. In the morning session we will focus on assessing the state of the practice of benchmarking and discussing an abstract ideal of it. In the afternoon session we will try to make concrete how that ideal might be realized.

o Lutz Prechelt. A quantitative study of current benchmarking practices. A quantitative survey of 400 journal articles on NN algorithms. (15 minutes)
o Tom Dietterich. Experimental Methodology. Benchmarking goals, measures of behavior, correct statistical testing, synthetic versus real-world data. (15 minutes)
o Brian Ripley. What can we learn from the study of the design of experiments? (15 minutes)
o Lutz Prechelt. Available NN benchmarking data collections. CMU nnbench, UCI machine learning databases archive, Proben1, Statlog data, ELENA data (10 minutes).
o Tom Dietterich. Available benchmarking data generators. (10 minutes)
o Break.
o Carl Rasmussen and Geoffrey Hinton. A thoroughly designed benchmark collection. A proposal of data, terminology, and procedures and a facility for the collection of benchmarking results. (45 minutes)
o Panel discussion. The future of benchmarking: purpose and procedures.

The WWW address for this announcement is http://wwwipd.ira.uka.de/~prechelt/NIPS_bench.html

Lutz

Dr. Lutz Prechelt (http://wwwipd.ira.uka.de/~prechelt/)  | Whenever you
Institut fuer Programmstrukturen und Datenorganisation   | complicate things,
Universitaet Karlsruhe; D-76128 Karlsruhe; Germany       | they get
(Phone: +49/721/608-4068, FAX: +49/721/694092)           | less simple.
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Friday, Dec 1 ============= AM 7:30-7:35 Welcome 7:35-8:05 Tom Mitchell (invited talk) - "Situated Learning" 8:05-8:25 Lorien Pratt - "Neural Transfer For Hazardous Waste" 8:25-8:45 Nathan Intrator - "Learning Internal Reps From Multiple Tasks" 8:45-9:05 Rich Caruana - "Where is Multitask Learning Useful?" 9:05-9:30 Panel Debate and Discussion - Topics include: serial vs. parallel transfer, what should be transfered? what domains are ripe for transfer? what are the goals of transfer? (Baxter, Caruana, Intrator, Mitchell, Silver, Pratt, ...) 9:30-4:30 Extracurricular Recreation PM 4:30-5:00 Jude Shavlik (invited talk) - "Talking to Your Neural Net" 5:00-5:20 Leo Breiman (invited talk) - "Curds & Whey" 5:20-5:40 Jonathan Baxter - "Bayesian Model of Learning to Learn" 5:40-6:00 Sebastian Thrun - "Identifying Relevant Tasks" 6:00-6:30 Panel Debate and Discussion - Topics include: transfer human to machine vs. machine to machine, is practice meeting theory? is theory meeting practice? (Baxter, Caruana, Breiman, Thrun, Mitchell, Shavlik, ...) Saturday, Dec 2 =============== AM 7:30-8:00 Noel Sharkey (invited talk) - "Adaptive Generalisation" 8:00-8:20 Anthony Robbins - "Rehearsal and Catastrophic Interference" 8:20-8:40 J. Schmidhuber - "A Theoretical Model of Learning to Learn" 8:40-9:00 Bairaktaris/Levy - "Dual-weight ANNs: Short/Long Term Learning" 9:00-9:30 Panel Debate and Discussion - Topics include: catastrophic interference, is there evidence for transfer in cognition? what can nature/cogsci tell us about transfer? (Bairaktaris, de Sa, Levy, Robbins, Sharkey, Silver, ...) 9:30-4:30 More Extracurricular Recreation PM 4:30-5:00 Tomaso Poggio (invited talk) - "Virtual Examples" 5:00-5:20 Virginia de Sa - "On Segregating Input Dimensions" 5:20-5:40 Chris Thornton - "Learning to be Brave: A Constructive Approach" 5:40-6:00 Mark Ring - "Continual Learning" 6:00-6:25 Panel Debate and Discussion - Topics include: combining supervised and unsupervised learning, where do we go from here? *this space intentionally left flexible* (de Sa, Mitchell, Poggio, Ring, Thornton, ...) 6:25-6:30 Farewell Full titles and abstracts are available on the workshop web page. 20 minute talks are 12 minutes presentation and 8 minutes questions and discussion. 30 minute invited talks are 20 minutes presentation and 10 minutes questions and discussion. There are four 30-minute panels, one for each session. Although topics are listed for each panel, these are intended merely as points of departure. Everyone attending the workshop should feel free to raise any issues during the panels that seem appropriate. We encourage speakers and members of the audience to prepare a terse list (preferably using inflammatory language) of your favorite transfer issues and questions. There are 16 talks, but this is not a conference! If speakers don't abuse their question/discussion time too much, more than 50% of the workshop will be spent on questions and discussion. To promote this, talks will use few slides and will focus on a few key issues. It's a workshop. Come preapred to speak up, be controversial, and have fun. Look forward to seeing you at Vail. -Danny, Jon, Lori, Rich, Sebastian, and Tom. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NFL and practice Message-ID:

Joerg Lemm wrote
> One may discuss NFL for theoretical reasons, but
> the conditions under which NFL-Theorems hold
> are not those which are normally met in practice.

Exactly the opposite. The theory behind NFL is trivial (in some sense). The power of NFL is that it deals directly with what is routinely practiced in the neural network community today.

> 1.) In short, NFL assumes that data, i.e. information of the form y_i=f(x_i),
> do not contain information about function values on a non-overlapping test set.
> This is done by postulating "unrestricted uniform" priors,
> or uniform hyperpriors over nonuniform priors... (with respect to Craig's
     ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> two cases this average would include a third case: target and model are
> anticorrelated so anticrossvalidation works better) and "vertical" likelihoods.
> So, in a NFL setting data never say something about function values
> for new arguments.
> This seems rather trivial under this assumption and one has to ask
> how natural is such a NFL situation.

This is indeed a very trivial and unnatural assumption, which has been criticised by generations of statisticians over several centuries. However, it is exactly what is practiced by a majority of NN researchers.

Consider the claim: "This is an algorithm which will perform well as long as there is some nonuniform prior". If such a claim could ever be true, then the algorithm would also be good for a uniform hyperprior over nonuniform priors. But this is in direct contradiction to NFL. According to NFL, you have to say: "This is an algorithm which will perform well on this particular nonuniform prior (hence it will perform badly on that particular nonuniform prior)".

Similarly, with the Law of Energy Conservation, if you say "I've designed a machine to generate electricity", then you automatically imply that you have designed a machine to consume some other forms of energy. You can't make every term positive in your balance sheet if the grand total is bound to be zero.

> Joerg continued with examples of various priors of practical concern, including smoothness, symmetry, positive correlation, iid samples, etc.

These are indeed very important priors which match the real world, and they are the implicit assumptions behind most algorithms. What NFL tells us is: If your algorithm is designed for such a prior, then say so explicitly so that a user can decide whether to use it. You can't expect it to also be good for any other prior which you have not considered. In fact, in a sense, you should expect it to perform worse than a purely random algorithm on those other priors.

> To conclude:
>
> In many interesting cases "effective" function values contain information
> about other function values and NFL does not hold!

This is like saying "In many interesting cases we do have energy sources, and we can make a machine run forever, so the natural laws against `perpetual motion machines' do not hold." These general principles might not be quite obviously interesting to a user, but they are of fundamental importance to a researcher.
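The uniform-(hyper)prior averaging at issue can be checked at toy scale. The sketch below only illustrates the averaging argument under that uniform-prior assumption; the four-point input space, the train/test split and the three learners are invented for the example and come from neither posting.

from itertools import product

# Average off-training-set accuracy over ALL boolean targets on a 4-point domain
# is 0.5 for every learner, no matter how the learner uses the training data.
X = [0, 1, 2, 3]
train_x, test_x = [0, 1], [2, 3]

def always_zero(train):
    return lambda x: 0

def always_one(train):
    return lambda x: 1

def majority(train):
    # Predict the majority label seen in the training data.
    ones = sum(y for _, y in train)
    guess = 1 if 2 * ones >= len(train) else 0
    return lambda x: guess

targets = list(product([0, 1], repeat=len(X)))     # all 16 possible target functions
for name, learner in [("always-0", always_zero),
                      ("always-1", always_one),
                      ("majority", majority)]:
    total = 0.0
    for target in targets:
        train = [(x, target[x]) for x in train_x]
        h = learner(train)
        total += sum(h(x) == target[x] for x in test_x) / len(test_x)
    print(name, total / len(targets))              # prints 0.5 for every learner

Under a nonuniform prior (for instance one concentrated on constant targets) the three learners do separate, which is exactly the trade-off described by the general principles above.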
They are in fact also of fundamental importance to a user, as he must assume the responsibility of supplying the energy source, or specifying the prior. -- Huaiyu Zhu, PhD email: H.Zhu at aston.ac.uk Neural Computing Research Group http://neural-server.aston.ac.uk/People/zhuh Dept of Computer Science ftp://cs.aston.ac.uk/neural/zhuh and Applied Mathematics tel: +44 121 359 3611 x 5427 Aston University, fax: +44 121 333 6215 Birmingham B4 7ET, UK ----- End Included Message ----- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: where the packages are mirrored. The original location in cochlea.hut.fi is back in effect as soon as the machine is stable enough. Yours, Jari Kangas http://nucleus.hut.fi/~jari/ ------------------------------------------------------------------ ************************************************************************ * * * SOM_PAK * * * * The * * * * Self-Organizing Map * * * * Program Package * * * * Version 3.1 (April 7, 1995) * * * * Prepared by the * * SOM Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1992-1995 * * * ************************************************************************ Updated public-domain programs for Self-Organizing Map (SOM) algorithms are available via anonymous FTP on the Internet. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 9:30- 10:00 Heinz Muehlenbein (GMD, Bonn) Inspiration from Nature vs. Copying nature. Lessons Learned from Genetic Algorithms 10:00- 10:50 Stephen Grossberg (Boston U) Are there Universal Principles of Brain Computation? 10:50-11:30 Discussion and Coffee 11:30-12:00 Anil Nerode (Cornell, Ithaca) Hybrid Systems as a Modelling Substrate for Biological and Cognitive Systems 12:00- 12:30 Daniel Mange (EPFL, Lausanne) Von Neumann Revisited: a Turing Machine with Self-Repair and Self-Reproduction Properties 12:30- 1:30 Lunch Robotics and Autonomous Systems 1:30- 1:50 Lynne Parker (ORNL, Oak Ridge) From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 1:50-2:10 Akira Ito (Kansai Res. Ctr., Kobe) How Selfish Agents Learn to Cooperate 2:10-2:30 Bengt Carlsson (Karlskrona U, Sweden) The War of Attrition Strategy in Multi-Agent Systems 2:30-2:50 Claudio Cesar de Sa (IMA, Brasil) Architecture for a Mobile Agent 2:50-3:10 A.N. Stafylopatis (NTU, Athens) Autonomous Vehicle Navigation Using Evolutionary Reinforcement Learning 3:10-3:30 Jun Tani (Sony, Tokyo) Cognition from Dynamical Systems Perspective: Robot Navigation Learning 3:00-3:30 Discussion and Coffee Mathematical Models 3:30-3:50 Erol Gelenbe (Duke, NC) Genetic Algorithms which Learn 3:50-4:10 Petr Lansky (CTS, Prague University), Jean-Pierre Rospars (INRA) Stochastic Models of the Olfactory System 4:10-4:30 Vladimir Protopopescu (ORNL, Oak Ridge, Tenn.) 
Learning Algorithms Based on Finite Samples 4:30-5:00 Ivan Havel (CTS, Prague University) Interaction of Processes at Different Time Scales 5:00-5:30 Boris Stilman (Univ. of Colorado, Denver) Linguistic Geometry: A Cognitive Model for Autonomous Agents 7:00 Dinner Second Day: March 5, 1996 Neural Control 9:00- 9:30 Kumpati Narendra (Yale, New Haven) Neural Networks and Control 9:30- 10:00 John G. Taylor (King's College, London) Global Control Systems of the Brain 10:00- 10:30 Paul Werbos (NSF) Brain-like Control 10:30- 11:00 Discussion and Coffee 11:00-11:20 Shahid Habib and Mona Zaghloul (NASA and GWU) Concurrent System Identification and Control 11:30-12:00 Harry Klopf (Wright-Patterson AFB) Drive-Reinforcement Learning and Hierarchical Networks of Control Systems as Models of Nervous System Function 12:00-1:00 Lunch Learning 1:00- 1:20 Nestor Schmajuk (Duke, NC) The Psychology of Robots 1:20- 1:40 John Staddon (Duke, NC) Habituation: A Non-Associative Learning Process 1:40-2:00 David Rubin (Duke, NC) A Biologically Inspired Model of Autobiographical Memory 2:00-2:20 Ugur Halici (METU, Ankara) Reward, Punishment and Expectation in Reinforcement Learning for the RNN 2:20-2:40 Daniel Levine (Univ. of Texas, Arlington) Analyzing the Executive: Modeling the Functions of Prefrontal Subcortical Loops 2:40-3:00 Discussion and Coffee Autonomous Systems 3:00-3:15 E. Koerner, U. Koerner (Honda R \& D, Japan) Selforganization of Semantic Constraints for Knowledge Representation in Autonomous Systems: A Model of the Role of an Emotional System in Brains 3:15-3:30 Tetsuya Higuchi et al. (Tsukuba, Japan) Hardware Evolution at Gate and Function Levels 3:30-3:45 Christopher Landauer (The Aerospace Corp., Virginia) Constructing Autonomous Software Systems 3:45-4:00 Robert E. Smith Combined Biological Paradigms: A Neural, Genetics-Based Autonomous Systems Strategy Vision and Imaging 4:00-4:15 Jonathan Marshall (UNC, NC) Self-organization of Triadic Neural Circuits for Anticipatory Visual Receptive Field Shifts under Intended Eye Movements 4:15-4:30 Didem Gokcay, LiMin Fu (Univ. of Florida, Gainesville) Visualization of Functional Magnetic Resonance Images through Self-Organizing Maps 4:30-4:45 S. Guberman, W. Wojtkowski (Paragraph International, California) DD algorithm and Automated Image Comprehension 4:45-5:00 E. Koerner, U. Koerner (Honda R \& D, Japan) Neocortex-like Neural Network Architecture for Autonomous Image Understanding 5:00-5:15 E. Oztop (METU, Ankara) Baseline Extraction on Document Images by Repulsive/Attractive Network 5:15-5:30 Y. Feng, E. Gelenbe (Duke, NC) Detecting Faint Targets in Strong Clutter: A Neural Approach Networking Applications 5:30-5:45 Christopher Cramer et al. (Duke, NC) Adaptive Neural Video Compression 5:45-6:00 Thomas John, Scott Toborg (Southwestern Bel, Austin, Texas) Neural Network Techniques for Fault and Performance Diagnosis of Broadband Networks 6:15-6:30 Philippe de Wilde (Imperial College, London) Equilibria of a Communication Network 6:30-6:45 Jonathan W. Mills (Indiana University) Implementing the McCulloch-Kilmer RETIC Architecture with an Analog VLSI Neural Field Computer End of the Workshop ------------------------------------------------------------------ For further information contact: Margrid Krueger Dept. 
of Electrical and Computer Engineering Duke University email: mak at ee.duke.edu Fax: (919) 660 5293 Tel: (919) 660 5253 From dcrespin at euler.ciens.ucv.ve Mon Jun 5 16:42:55 2006 From: dcrespin at euler.ciens.ucv.ve (Daniel Crespin(UCV) Date: Tue, 13 Feb 96 10:52:30-040 Subject: papers available Message-ID: <9602131452.AA26199@euler.ciens.ucv.ve.ciens.ucv.ve> The preprints abstracted below could be of interest. To obtain the preprints use a WWW browser and go to http://euler.ciens.ucv.ve/Professors/dcrespin/Pub/ [1] Neural Network Formalism: Neural networks are defined using only elementary concepts from set theory, without the usual connectionistic graphs. The typical neural diagrams are derived from these definitions. This approach provides mathematical techniques and insight to develop theory and applications of neural networks. [2] Generalized Backpropagation: Global backpropagation formulas for differentiable neural networks are considered from the viewpoint of minimization of the quadratic error using the gradient method. The gradient of (the quadratic error function of) a processing unit is expressed in terms of the output error and the transposed derivative of the unit with respect to the weight. The gradient of the layer is the product of the gradients of the processing units. The gradient of the network equals the product of the gradients of the layers. Backpropagation provides the desired outputs or targets for the layers. Standard formulas for semilinear networks are deduced as a special case. [3] Geometry of Perceptrons: It is proved that perceptron networks are products of characteristic maps of polyhedra. This gives insight into the geometric structure of these networks. The result also holds for more general (algebraic, etc.) perceptron networks, and suggests a new technique to solve pattern recognition problems. [4] Neural Polyhedra: Explicit formulas to realize any polyhedron as a three layer perceptron neural network. Useful to calculate directly and without training the architecture and weights of a network that executes a given pattern recognition task. [5] Pattern Recognition with Untrained Perceptrons: Gives algorithms to construct polyhedra directly from given pattern recognition data. The perceptron network associated to these polyhedra (see preprint above) solves the recognition problem proposed. No network training is necessary. Daniel Crespin From ertel at fbe.FH-Weingarten.DE Mon Jun 5 16:42:55 2006 From: ertel at fbe.FH-Weingarten.DE (ertel@fbe.FH-Weingarten.DE) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: CFA: Autumn School on HYBRID SYSTEMS Message-ID: ------------------------------------------------------------------------------- * web page: http://www.fh-weingarten.de/homepags/doz/ertel/ashs.htm * LaTeX source: enclosed below ------------------------------------------------------------------------------- AUTUMN SCHOOL on HYBRID SYMBOLIC CONNECTIONIST SYSTEMS Ravensburg -- Weingarten (Germany) September 25--28, 1996 organized by ESPRIT BASIC RESEARCH PROJECT 9119 (MIX) "Modular Integration of Connectionist and Symbolic Processing in Knowledge-Based Systems" The combination of symbolic and connectionist systems is one of the currently most challenging and interesting fields of Artificial Intelligence. This school introduces students as well as active researchers to this new and promising direction of research. 
Most symbolic systems are more or less logic based and therefore perform deductive reasoning on expert knowledge, but have severe problems with inductive reasoning. On the other hand neural networks are good in inductive reasoning based on data, but are less apt to perform deductive reasoning. Another view of this problem is the integration of prior knowledge (e.g. expert knowledge) into an inductive system. The lectures will give an insight into the ongoing research in the field where the fundamental theory as well as practical solutions for concrete applications will be presented. Participants are expected to have basic knowledge of Neural Networks and Artificial Intelligence. LECTURES There will be 6 lectures each consisting of 4 x 45 min lessons. 1) Melanie Hilario (Univ. Geneva), Abderrahim Labbi (Univ. Grenoble), Wolfgang Ertel (FH Weingarten): A Framework for the Modular Integration of Knowledge and Data in Hybrid Systems 2) Alessandro Sperduti (Univ. Pisa): Neural Networks for the Processing of Structures 3) Jose Gonzales and Juan R. Velasco (Univ. Madrid): A Comprehensive View of Fuzzy-Neural Hybrids 4) Michael Kaiser (Univ. Karlsruhe): Combining Symbolic and Connectionist Techniques in Adaptive Control 5) NN 6) NN POSTER SESSION The attendees of the autumn school are encouraged to bring along a poster (size about 40 60 cm) which gives insight into their research work, the project they are working in, etc. which shall be presented in a poster session. DIRECTORS OF THE SCHOOL Wolfgang Ertel, FH Ravensburg-Weingarten Bertram Fronhoefer TU Munich GENERAL INFORMATION PARTICIPATION FEES: - Students: ECU 150.-- - University: ECU 250.-- - Industry: ECU 400.-- DEADLINE FOR APPLICATION: April 12, 1996 NOTIFICATION OF ACCEPTANCE: May 22, 1996 Applications should be sent preferably by email to: ashs at fl-sun00.fbe.fh-weingarten.de Applications should contain a full address and a short statement about the applicants scientific qualification (student, PhD student, industrial researcher, etc.) and his interests in the topics of the autumn school. If email is not available, applications by surface mail should be sent to: Wolfgang Ertel Phone: +49--751--501--721 FH Ravensburg-Weingarten Fax: +49--751--501--749 Postfach 1261 D-88241 Weingarten Attendance to the school will be limited to about 50 participants. LANGUAGE: All lectures will be in English. LECTURE SITE: The lectures will be given in the Informatik Zentrum of the Fachhochschule Ravensburg-Weingarten and will start on September 25 in the morning. ACCOMODATION: Apart from a large range of hotels with prizes from DM 40.-- till DM 200.--, there are also limited occasions for inexpensive student lodging. Low price lunch will be provided by the Mensa (canteen) of the Fachhochschule Ravensburg-Weingarten. LOCATION: Weingarten and its immediate neighbour-city Ravensburg with about 70000 inhabitants represent the economic and cultural heart of Oberschwaben. Above the valley of the river Schussen the famous basilica of Weingarten together with the adjacent old Benedictine abbey is one of the most significant baroque constructions north of the alps. Close to and partly inside the baroque buildings of the abbey is the Fachhochschule, a university for engineering and social sciences where the school will take place. Oberschwaben is a rural pre-alpine area with various little lakes and fens, located in the south-west of Germany, close to the lake of Konstanz and the borders to Austria and Switzerland. 
The alps are not far (an hour by car or train) and the lake of Konstanz provides all facilities for marine outdoor activities. For further information and inquiries concerning participation please send an e-mail message to the above address. This call as well as futher information is available from the WWW-page: http://www.fh-weingarten.de/homepags/doz/ertel/ashs.htm %%%%%%%%%%%%%%%%%%%%%%%%%%%%% LaTeX source %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[12pt]{article} \nonstopmode \parindent=0pt \parskip=4pt \hoffset=-10mm \voffset=-15mm \textheight=240mm \textwidth=175mm \oddsidemargin=0mm \evensidemargin=0mm \topmargin=0mm \newcommand{\tit}[1]{{\em #1.}} \newenvironment{alist}{\begin{list}{}{\itemsep 1mm \parsep 0mm}}{\end{list}} \begin{document} \bibliographystyle{alpha} \begin{centering} {\Large\bf AUTUMN SCHOOL on HYBRID SYMBOLIC CONNECTIONIST SYSTEMS } {\large FH Ravensburg--Weingarten (Germany) September 25--28, 1996 } \bigskip organized by \\[1ex] {\large ESPRIT BASIC RESEARCH PROJECT 9119 (MIX) }\\[1ex] "Modular Integration of \\ Connectionist and Symbolic Processing \\ in Knowledge-Based Systems" \\ \end {centering} \bigskip \bigskip The combination of symbolic and connectionist systems is one of the currently most challenging and interesting fields of Artificial Intelligence. This school introduces students as well as active researchers to this new and promising direction of research. Most symbolic systems are more or less logic based and therefore perform deductive reasoning on expert knowledge, but have severe problems with inductive reasoning. On the other hand neural networks are good in inductive reasoning based on data, but are less apt to perform deductive reasoning. Another view of this problem is the integration of prior knowledge (e.g.\ expert knowledge) into an inductive system. The lectures will give an insight into the ongoing research in the field where the fundamental theory as well as practical solutions for concrete applications will be presented. Participants are expected to have basic knowledge of Neural Networks and Artificial Intelligence. \bigskip \begin{centering} {\bf Lectures} \\ There will be 6 lectures each consisting of 4 x 45 min lessons. \end {centering} \begin{alist} \item[1.] Melanie Hilario (Univ.\ Geneva), Abderrahim Labbi (Univ.\ Grenoble), Wolfgang Ertel \\(FH~Weingarten):\\ \tit{A Framework for the Modular Integration of Knowledge and Data in Hybrid Systems} \item[2.] Alessandro Sperduti (Univ.\ Pisa): \tit{Neural Networks for the Processing of Structures} \item[3.] Jose Gonzales and Juan R. Velasco (Univ.\ Madrid):\\ \tit{A Comprehensive View of Fuzzy-Neural Hybrids} \item[4.] Michael Kaiser (Univ.\ Karlsruhe):\\ \tit{Combining Symbolic and Connectionist Techniques in Adaptive Control} \item[5.] NN \item[6.] NN \end{alist} \medskip \begin{centering} {\bf Poster Session \\} \end {centering} The attendees of the autumn school are encouraged to bring along a poster (size about 40 $\times$ 60 cm) which gives insight into their research work, the project they are working in, etc. which shall be presented in a poster session. 
\newpage \begin{centering} {\bf Directors of the School } Wolfgang Ertel, FH Ravensburg-Weingarten \\ Bertram Fronh\"ofer, TU Munich \\ \bigskip {\bf General Information \\ } \end {centering} {\bf Participation fees:} \\ \begin{tabular}{ll} -- Students : & ECU 150.-- \\ -- University : & ECU 250.-- \\ -- Industry : & ECU 400.-- \end{tabular} {\bf Deadline for Application:} April 12, 1996 {\bf Notification of Acceptance:} May 22, 1996 Applications should be sent preferably by email to: {\tt ashs at fl-sun00.fbe.fh-weingarten.de} \\ Applications should contain a full address and a short statement about the applicants scientific qualification (student, PhD student, industrial researcher, etc.) and his interests in the topics of the autumn school. If email is not available, applications by surface mail should be sent to: % \begin{center} \parbox[t]{6cm}{Wolfgang Ertel\\ FH Ravensburg-Weingarten\\ Postfach 1261\\ D-88241 Weingarten} \parbox[t]{6cm}{Phone: +49--751--501--721\\ Fax: +49--751--501--749} \end{center} % Attendance to the school will be limited to about 50 participants. {\bf Language:} All lectures will be in English. {\bf Lecture site:} The lectures will be given in the Informatik Zentrum of the Fachhochschule Ravensburg-Weingarten and will start on September 25 in the morning. {\bf Accomodation:} Apart from a large range of hotels with prizes from DM 40.-- till DM 200.--, there are also limited occasions for inexpensive student lodging. Low price lunch will be provided by the Mensa (canteen) of the Fachhochschule Ravensburg-Weingarten. {\bf Location:} Weingarten and its immediate neighbour-city Ravensburg with about 70000 inhabitants represent the economic and cultural heart of Oberschwaben. Above the valley of the river Schussen the famous basilica of Weingarten together with the adjacent old Benedictine abbey is one of the most significant baroque constructions north of the alps. Close to and partly inside the baroque buildings of the abbey is the Fachhochschule, a university for engineering and social sciences where the school will take place.\\ Oberschwaben is a rural pre-alpine area with various little lakes and fens, located in the south-west of Germany, close to the lake of Konstanz and the borders to Austria and Switzerland. The alps are not far (an hour by car or train) and the lake of Konstanz provides all facilities for marine outdoor activities. For further information and inquiries concerning participation please send an e-mail message to the above address. This call as well as futher information is available from the WWW-page:\\ {\tt http://www.fh-weingarten.de/homepags/doz/ertel/ashs.htm} \end{document} From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Graduate Scholarship Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Geraint Johnson, L Nealon and Roger O. Lindsay Initialization by Rule Induction Prior to Learning Ralf Salomon DEDEC: A Methodology for Extracting Rules From Trained Artificial Neural Networks Alan B. 
Tickle, Marian Orlowski and Joachim Diederich An Algorithm for Extracting Propositions From Trained Neural Networks Using Multilinear Functions Hiroshi Tsukimoto and Chie Morita Automatic Acquisition of Symbolic Knowledge From Subsymbolic Neural Networks Alfred Ultsch and Dieter Korus Rule Extraction From Trained Neural Networks: Different Techniques for the Determination of Herbicides for the Plant Protection Advisory System PRO_PLANT Ubbo Visser, Alan Tickle, Ross Hayward and Robert Andrews From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: It provides compelling neurobiological evidence for the existence of stable attractor dynamics; at the same time, it forces consideration of the challenging problem of how to shift a stable activity profile. Because the first systematic experimental studies of HD cells were published only in 1990, the current paper includes a relatively complete reference list of the experimental publications, in addition to some immediately related theoretical papers. ____________________________________________________________________________ The paper has appeared in: Journal of Neuroscience 16(6): 2112-2126 (1996) Title: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory Author: Kechen Zhang Department of Cognitive Science University of California, San Diego La Jolla, California 92093-0515 Abstract: The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information. __________________________________________________________________________ Comments and suggestions are welcome.
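A minimal numerical sketch of the even/odd weight-component idea in the abstract (a toy softmax ring network chosen only for illustration, not the published model; N, beta and gamma are arbitrary values): the symmetric cosine kernel holds a localized bump of activity in place, while a small antisymmetric (sine) component makes the bump circulate at a speed set by its strength.

import numpy as np

N = 128                                   # number of "HD cells" around the ring
theta = 2 * np.pi * np.arange(N) / N      # preferred directions
beta = 10.0                               # gain of the softmax nonlinearity
gamma = 0.05                              # strength of the odd (antisymmetric) weight component

dtheta = theta[:, None] - theta[None, :]
W_even = np.cos(dtheta)                   # even component: stabilizes a localized bump
W_odd = np.sin(dtheta)                    # odd component: pushes the bump around the ring
W = W_even + gamma * W_odd

def bump_center(r):
    # circular mean of the activity profile
    return np.angle(np.sum(r * np.exp(1j * theta)))

r = np.exp(np.cos(theta - 1.0))           # start with a bump near 1 radian
r /= r.sum()
for t in range(1, 201):
    h = W @ r                             # recurrent input to each unit
    r = np.exp(beta * h)
    r /= r.sum()                          # divisive normalization keeps the profile localized
    if t % 50 == 0:
        print(t, round(float(bump_center(r)), 3))

With gamma = 0 the printed centre stays fixed; with gamma > 0 it drifts by roughly arctan(gamma) radians per iteration while the bump keeps its shape, i.e. the shift speed is controlled by the odd-weight strength, as the abstract states.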
Email: zhang at salk.edu or kzhang at cogsci.ucsd.edu http://www.cnl.salk.edu/~zhang From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: human cortex is a myth. There is postnatal cortical cell death in rodents, but in primates (including humans) there is only (i) a decreased density of cell packing, and (ii) massive (up to 50%) synapse loss. (The decreased density of cell packing was apparently misinterpreted as cell loss in the past). Of course, there are pathological cases, such as Alzheimer's, in which there is cell loss. I have written a review of human postnatal brain development which I can send out on request. Mark Johnson =============== Mark H. Johnson Senior Research Scientist (Special Appointment) Professor of Psychology, University College London MRC Cognitive Development Unit, 4 Taviton Street, London WC1H OBT, UK tel: 0171-387-4692 fax: 0171-383-0398 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: There have been a number of studies of neuron loss in aging. It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of the cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons are lost by age 60. Using the standard estimate of ten billion neurons in the neocortex, this works out to about one hundred thousand neurons lost per day of adult life. Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey and rodent data" DG Flood & PD Coleman, Neurobiology of Aging, 9, (1988) pp.453-464. -------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I have come across a brief reference to adult neural death that may be of use, or at least a starting point. The book is: Dowling, J.E. 1992 Neurons and Networks. Cambridge: Harvard Univ. In a footnote (!) on page 32, he writes: There is typically a loss of 5-10 percent of brain tissue with age. Assuming a brain loss of 7 percent over a life span of 100 years, and 10^11 neurons (100 billion) to begin with, approximately 200,000 neurons are lost per day. ---------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: As I remember it, the studies showing the marked reduction in nerve cell count with age were done around the turn of the century. The method, then as now, is to obtain brains of deceased persons, fix them, prepare cuts, count cells microscopically in those cuts, and then estimate the total number by multiplying the sampled cells/(volume of cut) with the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramidal cells are turned into that form by fixation.
The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percent?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that the cell loss with age in the CNS is much lower than generally thought. ---------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Moshe Abeles in Corticonics (Cambridge Univ. Press, 1991) writes on page 208 that: "Comparisons of neural densities in the brain of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cells die between the ages of twenty and eighty years (Tomlinson and Gibson, 1980). Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it. Let us assume that every year about 0.5% of the cortical cells die at random...." and goes on to discuss the implications for network robustness. Reference: Gearald H, Tomlinson BE and Gibson PH (1980) "Cell counts in human cerebral cortex in normal adults throughout life using an image analysis computer" J. Neurol., 46, pp. 113-136. ------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: "In search of the Engram" The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of the brain and seeing if it affected memory... Other work was done by a gentleman named Richard F. Thompson. Both speak of the loss of neurons in a system and how integrity was kept. In particular Karl Lashley spoke of the memory as holograms... ------------------------------------------------- Hope it helps... Regards Guido Bugmann ----------------------------- Dr. Guido Bugmann Neurodynamics Research Group School of Computing University of Plymouth Plymouth PL4 8AA United Kingdom ----------------------------- Tel: (+44) 1752 23 25 66 / 41 Fax: (+44) 1752 23 25 40 Email: gbugmann at soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/Staff/GuidBugm/Bugmann.html ----------------------------- From stavrosz at med.auth.gr Mon Jun 5 16:42:55 2006 From: stavrosz at med.auth.gr (Stavros Zanos) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Neuronal Cell Death Message-ID: This recently published (April '96) review about the amount and the possible role of neuronal cell death during development could be of interest to some of the readers of this list. James T. Voyvodic (1996) Cell Death in Cortical Development: How Much? Why? So What?
Neuron 16(4) Stavros Zanos Aristotle University School of Medicine Thessaloniki, Macedonia, Greece "If I Had More Time, I Would Have Written You A Shorter Letter" Pascal From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: algorithms definitely need to design and train nets on their own. (By the way, this is indeed a doable task and all neural network algorithms need to focus on both of these tasks.) We cannot leave design out of our algorithms. Our original intent is to build self-learning systems, not systems that we have to "baby sit" all the time. Such systems are "useless" if we want to build truly autonomous learning systems that can learn own their own. "Learning" includes "design and training". We cannot call them learning algorithms unless they design nets on their own and unless they attempt to generalize (i.e. attempt to build the smallest possible net). I would welcome more thoughts and debate on all of these issues. It would help to see some more response on two of the other premises of classical connectionist learning - local learning and memoryless learning. They have been the key concepts behind algorithm development in this field for the last 40 to 50 years. Again, open and vigorous debate is very healthy for a scientific field. Perhaps more researchers will come forward with facts and ideas on all these two and other issues. ******************************************************** ******************************************************** On May 23 Danny Silver wrote: "Dr. Roy .. It was interesting to read your mail on new criteria for neural network based inductive learning. I am sure that many other readers have at one time or another had similar thoughts or portions thereof. Notwithstanding the need to walk before you run, there is reason to set our sights a little higher then they have been. Along these lines I would like to point you toward a growing body of work on Transfer in Inductive Systems which suggests that a "life long learning" or "learning to learn" approach encomposes much of the criteria which you have outlined. At NIPS*95 a post-conference workshop covered this very topic and heard from some 15 speakers on the subject. All those who are interested should search through the hompages below for additional information." Daniel L. Silver University of Western Ontario, London, Canada = N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b = dsilver at csd.uwo.ca H: (519)473-6168 O: (519)679-2111 (ext.6903) WWW home page .... http://www.csd.uwo.ca/~dsilver = ================================================== Workshop page: http://www.cs.cmu.edu/afs/cs.cmu.edu/usr/caruana/pub/transfer.html Lori Pratt's transfer page: http://vita.mines.colorado.edu:3857/0/lpratt/transfer.html Danny Silver's transfer ref list: http://www.csd.uwo.ca/~dsilver/ltl-ref-list Rich Caruana's transfer ref list: http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transferbib.html ******************************************************** ******************************************************** On May 21 Michael Vanier wrote: "I read your post to the computational neuroscience mailing list with interest. 
I agreed with most of your points about the differences between "brain-like" learning and the learning exhibited by current neural network models. I have a couple of comments, for what it's worth. (On Task A: Perform Network Design Task) As a student of neuroscience (and computational neuroscience), it isn't clear to me what you're referring to when you say that the brain designs an appropriate network for a given task. One take on this is that evolution has done just that, but evolution has operated over millions of years. Biological development can also presumably tune a network in response to inputs (e.g. the development of connectivity in visual cortex in response to the presence or absence of visual stimuli), but again, this is slow and relatively fixed after a certain period, so it would only apply to generic tasks whose nature doesn't change profoundly over time (which presumably is the case for early vision). I know of no example where the brain can massively rewire itself in order to perform some task. However, the kind of learning found in connectionist networks (correlation-based using local learning rules) has a fairly direct analogy to long-term potentiation and depression in the brain, so it's likely that the brain is at least this powerful. This accounts for much of the appeal of local learning rules: you can find them (or something similar to them) in the brain. In fact,despite the practical problems with backprop (which you mention), the most common objection given by biologists to backprop is that even this simple a learning rule would be very difficult to instantiate in a biological system. (On Task C: Quickness in Learning) This is indeed a problem. Interestingly, attractor networks such as the Hopfield net can in principle learn in one trial (although there are other problems involved there too). Hopfield nets are also fundamentally feedback structures, like the brain but unlike most connectionist models. This is not to suggest that Hopfield nets are good models of the brain; they clearly aren't. It's not clear to me what you mean by "storing training examples in memory". Again using the Hopfield net example, in that case the whole purpose of the network is to store patterns in memory. Perhaps what you're suggesting is that feedforward networks take advantage of this to repeatedly play back memorized patterns from attractor networks so as to make learning more rapid. Some researchers believe the hippocampus is performing this function by storing patterns when an animal is awake and playing them back when the animal is asleep. Thanks for an interesting post." ******************************************************** ******************************************************** On May 15 Brendan McCane wrote: " Hi, Just a few comments here. Although I think the points you make are valid and probably desirable, I don't think they can necessarily be applied to the human brain. Following are specific comments about the listed criteria. (On Task A: Design Networks) The neural network architecture of the brain is largely pre-determined. Tuning certainly takes place, but I do not believe that the entire brain architecture is rebuilt for every newborn. This would require tremendous effort and probably end up with people who cannot communicate with each other at all (due to different representations). The human brain system has actually been created with external assistance, namely from evolution. 
(On Task B: Robustness in Learning) I agree that no local-minima would be optimal, but humans almost certainly fall into local minima (due to lack of extensive input or whatever) and only jump out when new input comes to light. (On Task E: Efficiency in Learning.) I don't see why insects or birds could not solve NP-hard problems from an evolutionary point of view. That is, the solution has now been hard-wired into their brains after millions of years of learning. I am not convinced that these characteristics are more brain-like than classical connectionist ones. Certainly they are desirable, and are possibly the holy grail of learning, but I don't think you can make the claim that the brain functions in this way. I think I've addressed all the other points made below in the points above." ******************************************************** ******************************************************** On May 15 Richard Kenyon wrote: "Here are my comments. I think that what you are looking for is something along the lines of a-life type networks which would evolve their design (much like the brain, see Brendans comment), as there is no obvious design for any particular problem in the first place, and a net which can design a network must already know something about the problem, which is why you raise the issue. I think though that design is evolving albeit at the hands of connectionist scienctists, i.e the title of this list is one such step in the evolution. (On Task B: Robustness in Learning) For me one of the key concepts in neural nets is graceful degradation, the idea that when problems arise the networks don;t just fall over. I reckon that networks are still fairly brittle and that a lot needs ot be done in this area. However i agree again with Brendan that our brains suffer local minima more than we would like to admit. (On Task C: Quickness in Learning) Memory is indeed very important, but recurrent neural networks have published a lot on the storage capacity of such devices already, it has not been forgotten. Very idealistic i'm afraid. Humans don't learn as quickly as we might like to think. Our 'education' is a long drawn out process and only every now and again do we experience enlightenment in the grasping of a key concept. This does not happen quickly or that often (relatively). The main factor affecting neural nets (imo) will be parallel computers at which point the net as we know it will not be stand alone but connected to many more, this is the principle i think is the closest theorisation we have to the brains parallelism. This is also why hybrid systems are v interesting, as a parallel system will be able to process output from mnay designs. (On Task D: Efficiency in Learning) Obviously efficiency in learning is important, but for us humans this is often mediated by efficent teaching, as in the case of some algorithms, self organising nets offer some form of autonamy in learning, but often end up doing it the same way over and over again, as do we. Kohonen has interpreted this as a physiological principle, in that it takes a lot of effort to sever old neural connections and etablish a new path for incorporating new ideas. Local minima have muscle. (On Task E: Generalization in Learning) The brain probably accepts some form of redundancy (waste). I agree that the brain is one hell of an optimisation machine. Intelligence whatever task it may be applied to is (again imho) one long optimisation process. 
Generalisation arises (even emerges or is a side effect) as a result of ongoing optimisation, conglomeration, reprocessing etc etc. This is again very important i agree, but i think (i do anyway) we in NN commumnity are aware of this as with much of the above. I thought that apart from point A we were doing all of this already, although to have it explicitly published is very valuable. I may be wrong > A good test for a so-called "brain-like" algorithm is to imagine it > actually being part of a human brain. I don't think that many researchers would claim too much about neural nets being very brain like at all. The simulated neurons, whether sigmoid or tansigmoid etc, do not behave very like real neurons at all, which is why there is a lot of research into biologically plauysible neurons. > Then examine the learning > phenomenon of the algorithm and compare it with that of the > human's. For example, pose the following question: If an algorithm > like back propagation is "planted" in the brain, how will it behave? > Will it be similar to human behavior in every way? Look at the > following simple "model/algorithm" phenomenon when the back- > propagation algorithm is "fitted" to a human brain. You give it a > few learning examples for a simple problem and after a while this > "back prop fitted" brain says: "I am stuck in a local minimum. I > need to relearn this problem. Start over again." And you ask: > "Which examples should I go over again?" And this "back prop > fitted" brain replies: "You need to go over all of them. I agree this is limitation, but how is any net supposed ot know what is relevant to remember or even pay greater attention to. This is in part the frame problem which roboticists are having a great deal of fun discussing. > I don't > remember anything you told me." So you go over the teaching > examples again. And let's say it gets stuck in a local minimum again > and, as usual, does not remember any of the past examples. So you > provide the teaching examples again and this process is repeated a > few times until it learns properly. The obvious questions are as > follows: Is "not remembering" any of the learning examples a brain- > like phenomenon? yes and no, children often need to be told over and over again, and this fielkd is still in its infancy. >Are the interactions with this so-called "brain- > like" algorithm similar to what one would actually encounter with a > human in a similar situation? If the interactions are not similar, then > the algorithm is not brain-like. A so-called brain-like algorithm's > interactions with the external world/teacher cannot be different > from that of the human. > > In the context of this example, it should be noted that > storing/remembering relevant facts and examples is very much a > natural part of the human learning process. Without the ability to > store and recall facts/information and discuss, compare and argue > about them, our ability to learn would be in serious jeopardy. > Information storage facilitates mental comparison of facts and > information and is an integral part of rapid and efficient learning. It > is not biologically justified when "brain-like" algorithms disallow > usage of memory to store relevant information. I did not know they were not allowed, but perhapos they have been left on the sidelines, but again i refer you to recurrent nets. > Another typical phenomenon of classical connectionist learning is > the "external tweaking" of algorithms. How many times do we > "externally tweak" the brain (e.g. 
adjust the net, try a different > parameter setting) for it to learn? Interactions with a brain-like > algorithm have to be brain-like indeed in all respects. An analogy here is perhaps taking a different perspective on a problem; this is a very human parameter that we must tweak to make progress. > It is perhaps time to reexamine the foundations of the neural > network/connectionist field. This mailing list/newsletter provides an > excellent opportunity for participation by all concerned throughout > the world. I am looking forward to a lively debate on these matters. > That is how a scientific field makes real progress. i agree with the last sentiment." ******************************************************** ******************************************************** On May 16 Chris Cleirigh wrote: "hi good luck with your enterprise, i think if you aim to be consistent with biology you have more chance of long term success. i'm no engineer -- i'm a linguist -- but i've read of Edelman's theory of neuronal group selection which seeks to explain categorisation through darwinian processes of variation and selection of populations of neuronal groups in the brain. are you motivated by such models? one thing, you say: For neuroscientists and neuroengineers, it should open the door to development of brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. however, efficient learning does involve external intervention, especially by other brains. consider language learning and the corrective role played by adults in teaching children." ******************************************************** ******************************************************** On May 17 Kevin Gurney wrote: "I read your (provocative) posting to the cogpsy mailing list and would like to make some comments (Your original remarks are enclosed in square brackets). [A. Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem, ... From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - ...] It might be from a neuroengineering point of view but not from a neuroscientific one. Real brains undergo a developmental process, much of which is encoded in the organism's DNA. Thus, the basic mechanisms of structural and trophic development are not thought to be activity driven per se. Mechanisms like Long Term Potentiation (LTP) may be the biological correlate of connectionist learning (Hebb rule) but are not responsible for the overall neural architecture at the modular level, which includes the complex layering of the cortex. I would take issue quite generally with your frequent invocation of the neuroscientists in your programme. They *are* definitely interested in discovering the nature of real brains - rather than super-efficient networks that may be engineered - I will bring this out in subsequent points below. [B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties.] Again, it may be the goal of neuro*engineers* to study ideal devices - it is not the domain of neuroscientists. [C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do.]
" Humans don't, in fact, learn from just a few examples in most cognitive and perceptual tasks - this is a myth. The fine tuning of visual and motor cortex which is a result of the critical period in infanthood is a result of a continuous bombardment of the animal with stimuli and tactile feedback. The same goes for langauge. The same applies for the learning of any new skill in fact (reading, playing a musical instrument ec etc.). These may be executed in an algorithmic, serial processing fashion until they become automatised in the parallel processing of the brain (cf Andy Clarke's von-Neuman emulaton by the brain) Many connectionists have imbued humans with god-like powers which aren't there. It is true that we can learn one-off facts and add them to our episodic memory but this is not usually the kind of things which nets are asked to perform. YD. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and PapertY1988"). It must be able to both design and train an appropriate net in polynomial time." Judd has shown that NN learning is intrinsically NP complex in many instances - there is no `free lunch'. See also the results in computational learning theory by Wolpert and Schaffer. YE. Generalization in Learning: ...That is, it must try to design the smallest possible net, ... This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task." Not true. Visual cortex uses a massive expansion in its coding from the LGN to V1 before it `recompresses' in higher visual centres. This has been described theoretically in terms of PCA etc (ECVP last year - can't recall ref. just now) YAs far as I know, there is no biological evidence for any of the premises of classical connectionist learning." The relation LTP = Hebb rule is a fairly non-contentious statement in the neuroscientific community. I could go on (RP learning and operant conditioning etc)... YSo, who should construct the net for a neural net algorithm? The answer again is very simple: Who else, but the algorithm itself!" The brain uses many `algorithms' to develop - it is these working in concert (genetically deterimined and activity mediated) which ensure the final state YYou give it a few learning examples for a simple problem and after a while this "back prop fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again."" My brain constantly gets stuck in local minima. If not then I would learn everything I tried to do to perfection - I would be an accomplished craftsman/musician/linguist/sporstman etc. In fact I am non of these...but rather have a small amount (local minimum's worth) of ability in each. YThe obvious questions are as follows: Is "not remembering" any of the learning examples a brain- like phenomenon? " There may be some mechanism for storing the `rules' and `examples' in STM or even LTM but even this is not certain (e.g. `now describe to me the perfect tennis backhand......`No - you forget to mention the follow-through - how many more times...') Finally, an engineering point. The claim that previous connectionist algorithms are not able to construct networks is a little brash. There have been several attempts to contruct nets as part of the learning proces (e.g. Cascade correlation). 
In summary: I am pleased to see that people are trying to overcome some of the problems encountered in building neural nets. However, I would urge people not to missappropriate the activities of people in other fields (neuroscience) and to learn a little more about the real capabilities of humans and their brains as described by neuroscientists, and psychologists. I would also ask that more account be taken of some of the teoretical literature on learning be taken into account. I hope this contribution is useful" ******************************************************** ******************************************************** On May 18 Craig Hicks wrote: " Hi, >A. Perform Network Design Task: A neural network/connectionist >learning method must be able to design an appropriate network for >a given problem, since, in general, it is a task performed by the >brain. A pre-designed net should not be provided to the method as >part of its external input, since it never is an external input to the >brain. From a neuroengineering and neuroscience point of view, this >is an essential property for any "stand-alone" learning system - a >system that is expected to learn "on its own" without any external >design assistance. Doesn't this ignore the role of evolution as a "learning" force? It's undisputable that the brain has a highly specialized structure. Obviously, this did not come from nowhere, but is the result of the forces of natural selection." ******************************************************** ******************************************************** On May 23 Dgragan Gamberger wrote: "I read you submission with great interest although (or may because of) I m not working in the field of neural networks. My interests are in the field of inductive learning. The presented ideas seem very attractive to me and in my opinion your criticism of the present systems is fully justified. The only suggestion for improvement is on part C.: > C. Quickness in Learning: The method must be quick in its > learning and learn rapidly from only a few examples, much as > humans do. For example, one which learns from only 10 examples > learns faster than one which requires a 100 or a 1000 examples. Although the statement is not incorrect by itself, in my opinion it reflects the common unawareness of the importance of redundancy for machine, as well as for human learning. In practice neither machine nor human can learn something (except extremely simple concepts) from 10 examples especially if there is noise (errors in training examples). Even for learning of simple concepts it is advisable to use as much as possible training examples (and not only necessary subset) because it can improve quality and (at least) reliability of induced concepts. Especially for handling imperfections in training data (noise) the use of redundant training set is obligatory. In practice, humans can and do induce concepts from a small training set but they are 'aware' of their unreliability and use every occasion (additional examples) to test induced concepts and to refine them if necessary. That is potentially the ideal model of incremental learning." ******************************************************** ******************************************************** On May 25 Guido Bugmann responded to Raj Rao: "A similar question (are there references for 1 millions neurons lost per day ?) came up in a discussion on the topic of robustness on connectionists a few years ago (1992). 
Some of the replies were: ------------------------------------------------------- From Bill Skaggs, bill at nsma.arizona.edu : There have been a number of studies of neuron loss in aging. It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of the cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons are lost by age 60. Using the standard estimate of ten bilion neurons in the neocortex, this works out to about one hunderd thousand neurons lost per day of adult life. Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey and rodent data" DG Flood & PD Coleman, Neurobiology of Aging, 9, (1988) pp.453-464. -------------------------------------------------------- From Arshavir Balckwell, arshavir at crl.ucsd.edu : I have come across a brief reference to adult neural death that may be of use, or at least a starting point. The book is: Dowling, J.E. 1992 Neurons and Networks. Cambridge: Harward Univ. In a footnote (!) on page 32, he writes: There is typically a loss of 5-10 percent of brain tissue with age. Assuming a brain loss of 7 percent over a life span of 100 years, and 10^11 neurons (100 billions) to begin with, approximately 200,000 neurons are lost per day. ---------------------------------------------------------------- From Jan Vorbrueggen, jan at neuroinformatik.ruhr-uni-bochum.de As I remember it, the studies showing the marked reduction in nerve cell count with age were done around the turn of the century. The method, then as now, is to obtain brains of deceased persons, fix them, prepare cuts, count cells microscopically in those cuts, and then estimate the total number by multiplying the sampled cells/(volume of cut) with the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramidal cells are turned into that form by fixation. The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percents?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that the cell loss with age in the CNS is much lower than generally thought. ---------------------------------------------------------------- From Paul King, Paul_King at next.com Moshe Abeles in Corticonics (Cambridge Univ. Press, 1991) writes on page 208 that: "Comparisons of neural densities in the brain of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cell die between the ages of twenty and eighty years (Tomlinson and Gibson, 1980). Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it. 
Let us assume that every year about 0.5% of the cortical cells die at random...." and goes on to discuss the implications for network robustness. Reference: Gearald H, Tomlinson BE and Gibson PH (1980) "Cell counts in human cerebral cortex in normal adults throughout life using an image analysis computer" J. Neurol., 46, pp. 113-136. ------------------------------------------------------------- From Robert A. Santiago, rsantiag at note.nsf.gov "In search of the Engram" The problem of robutsness from a neurobiological perspective seems to originate from works done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of the memory and seeing if it affected memory... Other work was done by a gentlemen named Richard F. Thompson. Both speak of the loss of neurons in a system and how integrity was kept. In particular Karl Lashley spoke of the memory as holograms... ------------------------------------------------- Hope it helps..." From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From pelillo at minster.cs.york.ac.uk Mon Jun 5 16:42:55 2006 From: pelillo at minster.cs.york.ac.uk (pelillo@minster.cs.york.ac.uk) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: April 1995 Subject: No subject Message-ID: Abstract ---------- This paper describes how graph grammars with attributes may be used to grow neural networks. The grammar facilitates a very compact and declarative description of every aspect of a neural architecture; this is important from a software/neural engineering point of view, since the descriptions are much easier to write and maintain than programs written in a high-level language, such as C++, and do not require programming ability. The output of the growth process is a neural network that can be transformed into a Postscript representation for display purposes, or simulated using a separate neural network simulation program, or mapped directly into hardware in some cases. In this approach, there is no separate learning algorithm; learning proceeds (if at all) as an intrinsic part of the network behaviour. This has interesting application in the evolution of neural nets, since now it is possible to evolve all aspects of a network (including the learning `algorithm') within a single unified paradigm. As an example, a grammar is given for growing a multi-layer perceptron with active weights that has the error back-propagation learning algorithm embedded in its structure. This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get esann95.ps ----------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: April 1996 Subject: No subject Message-ID: Abstract ---------- This paper describes a set-based chromosome for describing neural networks. The chromosome etween sets. 
Each set is updated in order, as are the neurons in that set, in accordance with a simple pre-specified algorithm. This allows all details of a neural architecture, including its learning behaviour to be specified in a simple and purely declarative manner. To evolve a learning behaviour for a particular network architecture, certain details of the architecture are pre-specified by defining a chromosome template, with some of the genes fixed, and others allowed to vary. In this paper, a learning perceptron is evolved, by fixing the feedforward and error-computation parts of the chromosome, then evolving the feedback part responsible for computing weight updates. Using this methodology, learning behaviours with similar performance to the delta rule have been evolved. This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get esann96.ps ----------------------------------------------------------------------- Comments and criticisms welcome. Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom http://esewww.essex.ac.uk/~sml Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Feb 1996 Subject: No subject Message-ID: This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get ieevisp.ps ----------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Sep 1996 Subject: No subject Message-ID: This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get iwfhr96.ps ----------------------------------------------------------------------- Comments and criticisms welcome. Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom http://esewww.essex.ac.uk/~sml Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk secretary: Mrs Wendy Ryder (+44) 1206 872437 -------------------------------------------------  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: "The purpose of this book is to introduce multivariate statistical methods to non-mathematicians. It is not intended to be comprehensive. Rather, the intention is to keep the details to a minimum while still conveying a good idea of what can be done. In other words, it is a book to `get you going' in a particular area of statistical methods." 
jay
Jay Moody
Department of Cognitive Science, 0515
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0515
fax: 619-354-1128
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
Here are some thoughts:
One general point is that I'm not entirely sure about what is meant by the global/local distinction. Certainly action at a distance can't take place; something physical happens to the cell/connection in question in order for it to change. As I understand it, the prototypical local learning is a Hebbian rule, where all the information specifying plasticity is in the pre- and post-synaptic cells (i.e. "local" to the connection), while a global learning rule is mediated by something distal to the cell in question (i.e. a neuromodulatory signal). But of course the signal must contact the actual cell via diffusion of a chemical substance (e.g. dopamine). So one different distinction might be how specific the signal is; i.e. in a local rule like LTP the information acts only on the single connection, while a modulatory signal could change all the connections in an area by a similar amount. However, the effects of a neuromodulator could in turn be modulated by the current state of the connection - hence a global signal might act very differently at each connection. Which would make the global signal seem local. So I'm not sure the distinction is clear-cut. Maybe it's better to consider a continuum of physical distance of the signal to change and specificity of the signal at individual connections. A couple of specific comments follow:
> A) Does plasticity imply local learning? > > The physical changes that are observed in synapses/cells in > experimental neuroscience when some kind of external stimuli is > applied to the cells may not result at all from any specific > "learning" at the cells. The cells might simply be responding to a > "signal to change" - that is, to change by a specific amount in a > specific direction. In animal brains, it is possible that the > "actual" learning occurs in some other part(s) of the brain, say > perhaps by a global learning mechanism. This global mechanism can > then send "change signals" to the various cells it is using to > learn a specific task. So it is possible that in these > neuroscience experiments, the external stimuli generates signals > for change similar to those of a global learning agent in the brain > and that > the changes are not due to "learning" at the cells > themselves. Please note that scientific facts/phenomenon like LTP/LTD > or synaptic plasticity can probably be explained equally well by > many theories of learning (e.g. local learning vs. global learning, > etc.). However, the correctness of an explanation would have to > be
I think it would be difficult to explain the actual phenomenon of LTP/LTD as a response to some signal sent by a different part of the brain, since a good amount of the evidence comes from in vitro work. So clearly the "change signals" can't be coming from some distant part of the brain - unless the slices contain the necessary machinery for generating the change signal.
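[Editor's toy sketch, not part of the original post; the array names and the scalar "modulator" are hypothetical.] The local/global contrast above can be made concrete in a few lines: a purely Hebbian update uses only the pre- and post-synaptic activities at each connection, while a "three-factor" variant multiplies the same local term by a single broadcast scalar, so every connection sees the same gating signal.

import numpy as np

rng = np.random.default_rng(0)
pre = rng.random(5)      # presynaptic activities (toy values)
post = rng.random(3)     # postsynaptic activities
eta = 0.1                # learning rate

# Purely local (Hebbian) rule: each weight change depends only on the
# activity of the two cells that the connection joins.
dW_local = eta * np.outer(post, pre)

# Globally gated ("three-factor") variant: the same local term is
# multiplied by one broadcast scalar, e.g. a diffuse neuromodulatory
# signal; every connection in the area sees the same gating value.
modulator = 0.3          # hypothetical global "signal to change"
dW_gated = eta * modulator * np.outer(post, pre)

print(dW_local.shape, dW_gated.shape)   # both (3, 5)

As the post notes, once the gate's effect is allowed to depend on the current state of each connection, the two forms become hard to tell apart from the synapse's point of view.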
Also, its of course possible that LTP/LTD local learning rules act in concert with global signals (as you mention below); these global signals being sent by nonspecific neuromodulators (an idea brought up plenty of times before). I'm not sure about the differences in the LTP/LTD data collected in vivo versus in vitro; I'm sure there are people out there studying it carefully, and this could provide insight. > > B) "Pure" local learning does not explain a number of other > activities that are part of the process of learning!! > > When learning is to take place by means of "local learning" in a > network of cells, the network has to be designed prior to its > training. Setting up the net before "local" learning can proceed > implies that an external mechanism is involved in this part of > the > learning process. This "design" part of learning precedes actual > training or learning by a collection of "local learners" whose > only > knowledge about anything is limited to the local learning law to > use! Of course, changing connection strengths seems to be the last phase of the "learning/development" process. Correct numbers of cells need to be generated, they have to get to their correct locations, proper connections between subpopulations need to be established and refined, and only at this point is there a substrate for "local" learning. All of these can be affected to a certain extent by environment. For example, the number of cells in the spinal cord innervating a peripheral target can be downregulated with limb bud ablation; conversely, the final number can be upregulated with supernumerary limb grafts. Another well known example is the development of ocular dominance columns. Here, physical connections can be removed (in normal development), or new connections can be established (evidence for this from the reverse suture experiments), depending on the given environment. What would be quite interesting would be if all these developmental phases are guided by similar principles, but acting over different spatial and temporal scales, and mediated by different carriers (e.g. chemical versus electrical signals). Alas, if only I had a well-articulated, cogent principle in hand with which to unify these disparate findings; my first Nobel prize would be forthcoming. In lieu of this, we're stuck with my ramblings. > > In order to learn properly and quickly, humans generally collect > and store relevant information in their brains and then "think" > about it (e.g. what problem features are relevant, problem > complexity, etc.). So prior to any "local learning," there must > be processes in the brain that examine this "body of > information/facts" about a problem in order to design the > appropriate network that would fit the problem complexity, select > the problem features that are meaningful, etc. It would be very > difficult to answer the questions "What size net?" and "What > features to use?" without looking at the problem in great detail. > A bunch of "pure" local learners, armed with their local learning > laws, would have no clue to these issues of net design, > generalization and feature selection. > > So, in the whole, there are a "number of activities" that need to > be performed before any kind of "local learning" can take place. > These aforementioned learning activities "cannot" be performed > by a collection of "local learning" cells! There is more to the > process of learning than simple local learning by individual cells. 
> Many learning "decisions/tasks" must precede actual training by > "local learners." A group of independent "local learners" simply > cannot start learning and be able to reproduce the learning > characteristics and processes of an "autonomous system" like the > brain. > > Local learning, however, is still a feasible idea, but only > within a general global learning context. A global learning > mechanism would be the one that "guides" and "exploits" these > local learners. However, it is also possible that the global > mechanism actually does all of the computations (learning) > and "simply sends signals" > to the network cells for appropriate synaptic adjustment. Both of > these possibilities seem logical: (a) a "pure" global mechanism > that learns by itself and then sends signals to the cells to > adjust, or (b) a global/local combination where the global > mechanism performs certain tasks and then uses the local mechanism > for training/learning. > > Note that the global learning mechanism may actually be implemented > with a collection of local learners!! > Notwithstanding the last remark, the above paragraphs perhaps run the risk of positing a little global homunculus that "does all the computations" and simply "sends signals" to the cells. I might be confused by the distinction between local and global learning. All we have to work with are cells that change their properties based on signals impinging upon them, be they chemical or electrical and originating near or far from the synapse, so it seems that a "global" learning mechanism *must* be implemented by local learners. (Again, if by local you specifically mean LTP/LTD or something similar, then I agree - other mechanisms are also at work). > The basic argument being made here is that there are many tasks > in a "learning process" and that a set of "local learners" armed > with their local learning laws is incapable of performing all of > those tasks. So local learning can only exist in the context of > global learning and thus is only "a part" of the total learning > process. > > It will be much easier to develop a consistent learning theory > using the global/local idea. The global/local idea perhaps will > also give us a better handle on the processes that we call > "developmental" and "evolutionary." One last comment. I'm not sure that the "developmental" vs. "learning" distinction is meaningful, either (I'm not hacking on your statements above, Asim; I think this distinction is more or less a tacit assumption in pretty much all neuroscience research). I read these as roughly equivalent to "nature vs. nurture" or "genetics vs. environment". I would claim that to say that any phenomenon is controlled by "genetics" is a scientifically meaningless statement. The claim that such-and-such a phenomenon is genetic is the modern equivalent of saying "The thing is there cause thats how god made it". Genes don't code for behavioral or physical attributes per se, they are simply a string of DNA which code for different proteins. Phenotypes can only arise from the genetic "code" by a complex interaction between cells and signals from their environment. Now these signals can be generated by events outside the organism or within the organism, and I would say that the distinction between development and learning is better thought of as whether the signals for change arise wholly within the organism or if the signals at least in part arise from outside the organism. 
Any explanation of either learning or development has to be couched in terms of what the relevant signals are and how they affect the system in question.
anthony
============================================================
From: Russell Anderson, Ph.D. Smith-Kettlewell Eye Research Institute anderson at skivs.ski.org
I read over the replies you received with interest.
1. In regards to Response #1 (J. Faith): I am not sure how relevant canalization is to your essay, but I wrote a paper on the topic a few years back: "Learning and Evolution: A Quantitative Genetics Approach" J. Theor. Biol. 175:89-101 (1995). Incidentally, the phenomenon known as "canalization" was described much earlier by Baldwin, Osborn, and Morgan (in 1896), and is more generally known as the "Baldwin effect". If you're interested, I could mail you a copy.
2. I take issue with the analogies used by Brendan McCane. His analogy of insect colonies is confused or irrelevant: First, the behavior of insects, for the purpose of this argument, does not indicate any individual (local) learning. Hence, the analogy is inappropriate. Second, the "global" learning occurring in the case of insect colonies operates at the level of natural selection acting on the genes, transmitted by the surviving colonies to new founding Queens. In this sense, individual ants are genetically ballistic ("pure developmental"). The genetics of insect colonies are well-studied in evolutionary biology, and he should be referred to any standard text on the topic (Dawkins, Dennett, Wilson, etc.). The analogy using computer science metaphors is likewise flawed or off-the-subject.
=============================================================
From: Steven M. Kemp | Department of Psychology | email: steve_kemp at unc.edu
Davie Hall, CB# 3270 | University of North Carolina | Chapel Hill, NC 27599-3270 | fax: (919) 962-2537
I do not know if it is quite on point, but Larry Stein at the University of California at Irvine has done fascinating work on a very different type of neural plasticity called In-Vitro Reinforcement (IVR). I have been working on neural networks whose learning algorithm is based on his data and theory. I don't know whether you would call those networks "local" or "global," but they do have the interesting characteristic that all the units in the network receive the same globally distributed binary reinforcement signal. That is, feedback is not passed along the connections, but distributed simultaneously and equally across the network after the fashion of nondirected dopamine release from the ventral tegmental projections. In any event, I will forward the guts of a recent proposal we have written here to give you a taste of the issues involved. I will be happy to provide more information on this research if you are interested.
(Steven Kemp did mail me parts of a recent proposal. It is long, so I did not include it in this posting. Feel free to write to him or me for a copy of it.)
============================================================
From: "K. Char"
I have a few quick comments:
1. The answer to some parts of the discussions seems to lie in the notion of a *SEQUENCE*. That is: global->local->(final) global; clearly the initial global is not the same as the final global. Some of the discussants seem to prefer the sequence: local->global. A number of such possibilities exist.
2. The next question is: who dictates the sequence? Is it a global mechanism or a local mechanism?
3. In the case of the bee, though it had an individual goal, how was this goal arrived at?
4. In the context of neural networks (artificial or real): who dictates the node activation functions, the topology and the learning rules? Does every node find its own activation function?
5. Finally, how do we form concepts? Do the concepts evolve as a result of local interactions at the neuron level or through the interaction of micro-concepts at a global level which then trigger a local mechanism?
6. Here the next question could be: how did these micro-concepts evolve in the very first place?
7. Is it possible that these neural structures provide the *very motivation* for the formation of concepts at the global level in order to adapt these structures effectively? If so, does this motivation arise from the environment itself?
============================================================
Response # 1:
As you mention, neuroscience tends to equate network plasticity with learning. Connectionists tend to do the same. However, this raises a problem with biological systems because it conflates the processes of development and learning. Even the smartest organism starts from an egg, and develops for its entire lifespan - how do we distinguish which changes are learnt, and which are due to development? No one would argue that we *learn* to have a cortex, for instance, even though it is due to massive embryological changes in the central nervous system of the animal. This isn't a problem with artificial nets, because they do not usually have a true developmental process and so there can be no confusion between the two; but it has been a long-standing problem in the ethology literature, where learnt changes are contrasted with "innate" developmental ones. A very interesting recent contribution to this debate is Andre Ariew's "Innateness and Canalization", in Philosophy of Science 63 (Proceedings), in which he identifies non-learnt changes as being due to canalised processes. Canalization was a concept developed by the biologist Waddington in the 40's to describe how many changes seem to have fixed end-goals that are robust against changes in the environment. The relationship between development and learning was also thoroughly explored by Vygotsky (see collected works vol 1, pages 194-210). I'd like to see what other sorts of responses you get,
Joe Faith
Evolutionary and Adaptive Systems Group, School of Cognitive and Computing Sciences, University of Sussex, UK.
=================================================================
Response # 2:
I fully agree with you that local learning is not the one and only ultimate approach - even though it results in very good learning for some domains. I am currently writing a paper on the competitive learning paradigm. I am proposing that this competition, which occurs e.g. within neurons, should be called local competition. The network as a whole gives a global common goal to these local competitors and thus their competition must be regarded as cooperation from a more global point of view. There is a nice paper by Kenton Lynne that integrates the ideas of reinforcement and competition. When external evaluations are present, they can serve as teaching values; if not, the neurons compete locally.
@InProceedings{Lynne88, author = {K.J.\ Lynne}, title = {Competitive Reinforcement Learning}, booktitle = {Proceedings of the 5th International Conference on Machine Learning}, year = {1988}, publisher = {Morgan Kaufmann}, pages = {188--199} } ---------------------------------------------------------- Christoph Herrmann Visiting researcher Hokkaido University Meme Media Laboratory Kita 13 Nishi 8, Kita- Tel: +81 - 11 - 706 - 7253 Sapporo 060 Fax: +81 - 11 - 706 - 7808 Japan Email: chris at meme.hokudai.ac.jp http://aida.intellektik.informatik.th-darmstadt.de/~chris/ ============================================================= Response #3: I've just read your list of questions on local vs. global learning mechanisms. I think I'm sympathatic to the implications or presuppositions of your questions but need to read them more carefully later. Meanwhile, you might find very interesting a two-part article on such a mechanism by Peter G. Burton in the 1990 volume of _Psychobiology_ 18(2).119-161 & 162-194. Steve Chandler =============================================================== Response #4: A few years back, I wrote a review article on issues of local versus global learning w.r.t. synaptic plasticity. (Unfortunately, it has been "in press" for nearly 4 years). Below is an abstract. I can email the paper to you in TeX or postscript format, or mail you a copy, if you're interested. Russell Anderson ------------------------------------------------ "Biased Random-Walk Learning: A Neurobiological Correlate to Trial-and-Error" (In press: Progress in Neural Networks) Russell W. Anderson Smith-Kettlewell Eye Research Institute 2232 Webster Street San Francisco, CA 94115 Office: (415) 561-1715 FAX: (415) 561-1610 anderson at skivs.ski.org Abstract: Neural network models offer a theoretical testbed for the study of learning at the cellular level. The only experimentally verified learning rule, Hebb's rule, is extremely limited in its ability to train networks to perform complex tasks. An identified cellular mechanism responsible for Hebbian-type long-term potentiation, the NMDA receptor, is highly versatile. Its function and efficacy are modulated by a wide variety of compounds and conditions and are likely to be directed by non-local phenomena. Furthermore, it has been demonstrated that NMDA receptors are not essential for some types of learning. We have shown that another neural network learning rule, the chemotaxis algorithm, is theoretically much more powerful than Hebb's rule and is consistent with experimental data. A biased random-walk in synaptic weight space is a learning rule immanent in nervous activity and may account for some types of learning -- notably the acquisition of skilled movement. ========================================================== Response #5: Asim Roy typed ... > > B) "Pure" local learning does not explain a number of other > activities that are part of the process of learning!! .. > > So, in the whole, there are a "number of activities" that need to > be > performed before any kind of "local learning" can take place. > These aforementioned learning activities "cannot" be performed by > a collection of "local learning" cells! There is more to the > process of learning than simple local learning by individual cells. > Many learning "decisions/tasks" must precede actual training by > "local learners." 
A group of independent "local learners" simply > cannot start learning and be able to reproduce the learning > characteristics and processes of an "autonomous system" like the > brain. I cannot see how you can prove the above statement (particularly the last sentence). Do you have any proof. By analogy, consider many insect colonies (bees, ants etc). No-one could claim that one of the insects has a global view of what should happen in the colony. Each insect has its own purpose and goes about that purpose without knowing the global purpose of the colony. Yet an ants nest does get built, and the colony does survive. Similarly, it is difficult to claim that evolution has a master plan, order just seems to develop out of chaos. I am not claiming that one type of learning (local or global) is better than another, but I would like to see some evidence for your somewhat outrageous claims. > Note that the global learning mechanism may actually be implemented > with a collection of local learners!! You seem to contradict yourself here. You first say that local learning cannot cope with many problems of learning, yet global learning can. You then say that global learning can be implemented using local learners. This is like saying that you can implement things in C, that cannot be implemented in assembly!! It may be more convenient to implement it in C (or using global learning), but that doesn't make it impossible for assembly. ------------------------------------------------------------------- Brendan McCane, PhD. Email: mccane at cs.otago.ac.nz Comp.Sci. Dept., Otago University, Phone: +64 3 479 8588. Box 56, Dunedin, New Zealand. There's only one catch - Catch 22. =============================================================== Response #6: In regards to arguments against global learning:I think no one seriously questions this possibility, but think that global learning theories are currently non-verifiable/ non-falsifyable. Part of the point of my paper was that there ARE ways to investigate non-local learning, but it requires changes in current experimental protocols. Anyway, good luck. I look forward to seeing your compilation. Russell Anderson 2415 College Ave. #33 Berkeley, CA 94704 ============================================================== Response #7: I am sorry that it has taken so long for me to reply to your inquiry about plasticity and local/global learning. As I mentioned in my first note to you, I am sympathetic to the view that learning involves some sort of overarching, global mechanism even though the actual information storage may consist of distributed patterns of local information. Because I am sympathetic to such a view, it makes it very difficult for me to try to imagine and anticipate the problems for such views. That's why I am glad to see that you are explicitly trying to find people to point out possible problems; we need the reality check. The Peter Burton articles that I have sent you describes exactly the kind of mechanism implied by your first question: Does plasticity imply local learning? Burton describes a neurological mechanism by which local learning could emerge from a global signal. Essentially he posits that whenever the new perceptual input being attended to at any given moment differs sufficiently from the record of previously recorded experiences to which that new input is being compared, the difference triggers a global "proceed-to-store" signal. 
This signal creates a neural "snapshot" (my term, not Burton's) of the cortical activations at that moment, a global episodic memory (subject to stimulus sampling effects, etc.). Burton goes on to describe how discrete episodic memories could become associated with one another so as to give rise to schematic representations of percepts (personally I don't think that positing this abstraction step is necessary, but Burton does it). As neuroscientists sometimes note, while it is widely assumed that LTP/LTD are local learning mechanisms, the direct evidence for such a hypothesis is pretty slim at best. Of course one of the most serious problems with that view is that the changes don't last very long and thus are not really good candidates for long term (i.e., life long) memory. Now, to my mind, one of the most important possibilities overlooked in LTP studies (inherently so in all in vitro preparations and, so far as I know --which is not very far because this is not my field-- in the in vivo preparations that I have read about) is that LTP/D is either an artifact of the experiment or some sort of short term change which requires a global signal to become consolidated into a long term record. Burton describes one such possible mechanism.
Another motivation for some sort of global mechanism comes from the so-called 'binding problem' addressed especially by the Damasios, but others too. Somehow, somewhere, all the distributed pieces of information about what an orange is, for example, have to be tied together. A number of studies of different sorts have demonstrated repeatedly that such information is distributed throughout cortical areas. Burton distinguishes between "perceptual learning" requiring no external teacher (either locally or globally) and "conceptual learning", which may require the assistance of a 'teacher'. In his model though, both types of learning are activated by global "proceed-to-learn" signals triggered in turn by the global summation of local disparities between remembered episodes and current input.
I'll just mention in closing that I am particularly interested in the empirical adequacy of neuropsychological accounts such as Burton's because I am very interested in "instance-based" or "exemplar-based" models of learning. In particular, Royal Skousen's _Analogical Modeling of Language_ (Kluwer, 1989) describes an explicit, mathematical model for predicting new behavior on analogy to instances stored in long term memory. Burton's model suggests a possible neurological basis for such behavior.
Steve Chandler
==============================================================
Response #8:
*******************************************************************
Fred Wolf E-Mail: fred at chaos.uni-frankfurt.de
Institut fuer Theor. Physik Robert-Mayer-Str. 8 Tel: 069/798-23674
D-60 054 Frankfurt/Main 11 Fax: (49) 69/798-28354 Germany
Could you please point me to a few neuroBIOLOGICAL references that justify your claim that
> A predominant belief in neuroscience is that synaptic plasticity
> and LTP/LTD imply local learning (in your sense).
I think many people appreciate that real learning implies the concerted interplay of a lot of different brain systems and should not even be attempted to be explained by "isolated local learners". See e.g. the series of review papers on memory in a recent volume of PNAS 93 (1996) (http://www.pnas.org/). Good luck with your general theory of global/local learning.
best wishes
Fred Wolf
==============================================================
Response #9:
I have been into neurocomputing for several years. I read your arguments with interest. They certainly deserve further attention. Perhaps some combination of global-local learning agents would be the right choice.
- Vassilis G. Kaburlasos
Aristotle University of Thessaloniki, Greece
==============================================================
===============================================================
Original Memo:
A predominant belief in neuroscience is that synaptic plasticity and LTP/LTD imply local learning. It is a possibility, but it is not the only possibility. Here are some thoughts on some of the other possibilities (e.g. global learning mechanisms or a combination of global/local mechanisms) and some discussion of the problems associated with "pure" local learning. The local learning idea is a core idea that drives research in a number of different fields. I welcome comments on the questions and issues raised here. This note is being sent to many listserves. I will collect all of the responses from different sources and redistribute them to all of the participating listserves. The last such discussion was very productive. It has led to the realization by some key researchers in the connectionist area that "memoryless" learning perhaps is not a very "valid" idea. That recognition by itself will lead to more robust and reliable learning algorithms in the future. Perhaps a more active debate on the local learning issue will help us resolve this issue too.
A) Does plasticity imply local learning?
The physical changes that are observed in synapses/cells in experimental neuroscience when some kind of external stimulus is applied to the cells may not result at all from any specific "learning" at the cells. The cells might simply be responding to a "signal to change" - that is, to change by a specific amount in a specific direction. In animal brains, it is possible that the "actual" learning occurs in some other part(s) of the brain, say perhaps by a global learning mechanism. This global mechanism can then send "change signals" to the various cells it is using to learn a specific task. So it is possible that in these neuroscience experiments, the external stimuli generate signals for change similar to those of a global learning agent in the brain and that the changes are not due to "learning" at the cells themselves. Please note that scientific facts and phenomena like LTP/LTD or synaptic plasticity can probably be explained equally well by many theories of learning (e.g. local learning vs. global learning, etc.). However, the correctness of an explanation would have to be judged from its consistency with other behavioral and biological facts, not just "one single" biological phenomenon or fact.
B) "Pure" local learning does not explain a number of other "activities" that are part of the process of learning!!
When learning is to take place by means of "local learning" in a network of cells, the network has to be designed prior to its training. Setting up the net before "local" learning can proceed implies that an external mechanism is involved in this part of the learning process. This "design" part of learning precedes actual training or learning by a collection of "local learners" whose only knowledge about anything is limited to the local learning law to use!
In addition, these "local learners" may have to be told what type of local learning law to use, given that a variety of different types can be used under different circumstances. Imagine who is to "instruct and set up" such local learners which type of learning law to use? In addition to these, the "passing" of appropriate information to the appropriate set of cells also has to be "coordinated" by some external or global learning mechanism. This coordination cannot just happen by itself, like magic. It has to be directed from some place by some agent or mechanism. In order to learn properly and quickly, humans generally collect and store relevant information in their brains and then "think" about it (e.g. what problem features are relevant, complexity of the problem, etc.). So prior to any "local learning," there must be processes in the brain that "examine" this "body of information/facts" about a problem in order to design the appropriate network that would fit the problem complexity, select the problem features that are meaningful, etc. It would be very difficult to answer the questions "What size net?" and "What features to use?" without looking at the problem (body of information)in great detail. A bunch of "pure" local learners, armed with their local learning laws, would have no clue to these issues of net design, generalization and feature selection. So, in the whole, there are a "number of activities" that need to be performed before any kind of "local learning" can take place. These aforementioned learning activities "cannot" be performed by a collection of "local learning" cells! There is more to the process of learning than simple local learning by individual cells. Many learning "decisions/tasks" must precede actual training by "local learners." A group of independent "local learners" simply cannot start learning and be able to reproduce the learning characteristics and processes of an "autonomous system" like the brain. Local learning or local computation, however, is still a feasible idea, but only within a general global learning context. A global learning mechanism would be the one that "guides" and "exploits" these local learners or computational elements. However, it is also possible that the global mechanism actually does all of the computations (learning) and "simply sends signals" to the network of cells for appropriate synaptic adjustment. Both of these possibilities seem logical: (a) a "pure" global mechanism that learns by itself and then sends signals to the cells to adjust, or (b) a global/local combination where the global mechanism performs certain tasks and then uses the local mechanism for training/learning. Thus note that the global learning mechanism may actually be implemented with a collection of local learners or computational elements!! However, certain "learning decisions" are made in the global sense and not by "pure" local learners. The basic argument being made here is that there are many tasks in a "learning process" and that a set of "local learners" armed with their local learning laws is incapable of performing all of those tasks. So local learning can only exist in the context of global learning and thus is only "a part" of the total learning process. It will be much easier to develop a consistent learning theory using the global/local idea. The global/local idea perhaps will also give us a better handle on the processes that we call "developmental" and "evolutionary." 
And it will, perhaps, allow us to better explain many of the puzzles and inconsistencies in our current body of discoveries about the brain. And, not the least, it will help us construct far better algorithms by removing the "unwarranted restrictions" imposed on us by the current ideas. Any comments on these ideas and possibilities are welcome. Asim Roy Arizona State University  From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: When this book was conceived ten years ago, few scientists realized the width of scope and the power for applicability of the central ideas. Partially because of the enthusiastic reception of the first edition, open problems have been solved and new applications have been developed. We have added new material on the relation between data compression and minimum description length induction, computational learning, and universal prediction; circuit theory; distributed algorithmics; instance complexity; CD compression; computational complexity; Kolmogorov random graphs; shortest encoding of routing tables in communication networks; resource-bounded computable universal distributions; average case properties; the equality of statistical entropy and expected Kolmogorov complexity; and so on. Apart from being used by researchers and as reference work, the book is now commonly used for graduate courses and seminars. In recognition of this fact, the second edition has been produced in textbook style. We have preserved as much as possible the ordering of the material as it was in the first edition. The many exercises bunched together at the ends of some chapters have been moved to the appropriate sections. The comprehensive bibliography on Kolmogorov complexity at the end of the book has been updated, as have the ``History and References'' sections of the chapters. Many readers were kind enough to express their appreciation for the first edition and to send notification of typos, errors, and comments. Their number is too large to thank them individually, so we thank them all collectively. BLURB: Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity---the theory dealing with the quantity of information in individual objects. Kolmogorov complexity is known variously as `algorithmic information', `algorithmic entropy', `Kolmogorov-Chaitin complexity', `descriptional complexity', `shortest program length', `algorithmic randomness', and others. The book is ideal for advanced undergraduate students, graduate students and researchers in computer science, mathematics, cognitive sciences, artificial intelligence, philosophy, statistics and physics. The book is self contained in the sense that it contains the basic requirements of computability theory, probability theory, information theory, and coding. Included are also numerous problem sets, comments, source references and hints to the solutions of problems, course outlines for classroom use, as well as a great deal of new material not included in the first edition. 
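A quick way to get a concrete feel for the quantity the book studies (a toy illustration of mine, not an excerpt from the book): the length of any lossless compression of a string is a computable upper bound, up to the constant size of the decompressor, on that string's Kolmogorov complexity, so highly compressible strings are exactly the "non-random" ones.

import os
import zlib

def complexity_upper_bound(s: bytes) -> int:
    # Length of a lossless compression of s: an upper bound (up to the
    # constant size of the decompressor) on the Kolmogorov complexity of s.
    return len(zlib.compress(s, 9))

regular = b"ab" * 5000            # highly structured, 10000 bytes
random_like = os.urandom(10000)   # incompressible with high probability

print(complexity_upper_bound(regular))       # very small
print(complexity_upper_bound(random_like))   # close to 10000, or slightly above

Kolmogorov complexity itself is uncomputable, so computable upper bounds of this kind are what applications such as minimum description length induction actually work with.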
CONTENTS: Preface to the First Edition v How to Use This Book viii Acknowledgments x Preface to the Second Edition xii Outlines of One-Semester Courses xii List of Figures xix 1 Preliminaries 1 1.1 A Brief Introduction 1 1.2 Prerequisites and Notation 6 1.3 Numbers and Combinatorics 8 1.4 Binary Strings 12 1.5 Asymptotic Notation 15 1.6 Basics of Probability Theory 18 1.7 Basics of Computability Theory 24 1.8 The Roots of Kolmogorov Complexity 47 1.9 Randomness 49 1.10 Prediction and Probability 59 1.11 Information Theory and Coding 65 1.12 State Symbol Complexity 84 1.13 History and References 86 2 Algorithmic Complexity 93 2.1 The Invariance Theorem 96 2.2 Incompressibility 108 2.3 C as an Integer Function 119 2.4 Random Finite Sequences 127 2.5 *Random Infinite Sequences 136 2.6 Statistical Properties of Finite Sequences 158 2.7 Algorithmic Properties of 167 2.8 Algorithmic Information Theory 179 2.9 History and References 185 3 Algorithmic Prefix Complexity 189 3.1 The Invariance Theorem 192 3.2 *Sizes of the Constants 197 3.3 Incompressibility 202 3.4 K as an Integer Function 206 3.5 Random Finite Sequences 208 3.6 *Random Infinite Sequences 211 3.7 Algorithmic Properties of 224 3.8 *Complexity of Complexity 226 3.9 *Symmetry of Algorithmic Information 229 3.10 History and References 237 4 Algorithmic Probability 239 4.1 Enumerable Functions Revisited 240 4.2 Nonclassical Notation of Measures 242 4.3 Discrete Sample Space 245 4.4 Universal Average-Case Complexity 268 4.5 Continuous Sample Space 272 4.6 Universal Average-Case Complexity, Continued 307 4.7 History and References 307 5 Inductive Reasoning 315 5.1 Introduction 315 5.2 Solomonoff's Theory of Prediction 324 5.3 Universal Recursion Induction 335 5.4 Simple Pac-Learning 339 5.5 Hypothesis Identification by Minimum Description Length 351 5.6 History and References 372 6 The Incompressibility Method 379 6.1 Three Examples 380 6.2 High- Probability Properties 385 6.3 Combinatorics 389 6.4 Kolmogorov Random Graphs 396 6.5 Compact Routing 404 6.6 Average-Case Complexity of Heapsort 412 6.7 Longest Common Subsequence 417 6.8 Formal Language Theory 420 6.9 Online CFL Recognition 427 6.10 Turing Machine Time Complexity 432 6.11 Parallel Computation 445 6.12 Switching Lemma 449 6.13 History and References 452 7 Resource-Bounded Complexity 459 7.1 Mathematical Theory 460 7.2 Language Compression 476 7.3 Computational Complexity 488 7.4 Instance Complexity 495 7.5 Kt Complexity and Universal Optimal Search 502 7.6 Time-Limited Universal Distributions 506 7.7 Logical Depth 510 7.8 History and References 516 8 Physics, Information, and Computation 521 8.1 Algorithmic Complexity and Shannon's Entropy 522 8.2 Reversible Computation 528 8.3 Information Distance 537 8.4 Thermodynamics 554 8.5 Entropy Revisited 565 8.6 Compression in Nature 583 8.7 History and References 586 References 591 Index 618 If you are seriously interested in using the text in the course, contact Springer-Verlag's Editor for Computer Science, Martin Gilchrist, for a complimentary copy. Martin Gilchrist marting at springer-sc.com Suite 200, 3600 Pruneridge Ave. (408) 249-9314 Santa Clara, CA 95051 If you are interested in the text but won't be teaching a course, we understand that Springer-Verlag sells the book, too. To order, call toll-free 1-800-SPRINGER (1-800-777-4643); N.J. residents call 201-348-4033. For information regarding examination copies for course adoptions, write Springer-Verlag New York, Inc. , 175 Fifth Avenue, New York,NY 10010. 
You can order through the Web site: "http://www.springer-ny.com/" For U.S.A./Canada/Mexico- e-mail: orders at springer-ny.com or fax an order form to: 201-348-4505. For orders outside U.S.A./Canada/Mexico send this form to: orders at springer.de Or call toll free: 800-SPRINGER - 8:30 am to 5:30 pm ET (that's 777-4643 and 201-348-4033 in NJ). Write to Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY, 10010. Visit your local scientific bookstore. Mail payments may be made by check, purchase order, or credit card (see note below). Prices are payable in U.S. currency or its equivalent and are subject to change without notice. Remember, your 30-day return privilege is always guaranteed! Your complete address is necessary to fulfill your order.
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
three distinct patterns which roughly corresponded to the global brain states W (wakefulness), S (slow wave or deep sleep) and R (REM sleep). The outputs from the self-organizing feature map were subsequently mapped via a Radial Basis Function (RBF) classifier onto three outputs, trained by sections of data on which experts agreed upon the stage W, S, or R. For each input, the resulting network produced probabilities for the three global stages, describing intermediate stages as a combination of three 'mixing fractions'. This general approach, yielding a novel way of describing brain state while exploiting some of the experts' knowledge in a partly supervised method, will be adopted and extended in the following ways:
- Many features extracted from the signals will be considered.
- Instead of the 2-dimensional feature map used by R&T, alternative approaches will be investigated. It has been shown that combinations of other clustering and mapping methods can outperform SOMs. Moreover, since topographic mapping is only exploited for visualization, the general approach can be based on more advanced clustering techniques (e.g. techniques for non-Gaussian clustering (Roberts 1997) or Bayesian-inspired methods).
- In order to cope with the large number of input features to be investigated, active feature selection methods will be applied.
- Techniques for intelligent sensor fusion will be investigated. When multiple sources are combined to lead to classification results, it is not trivial to decide which are the most relevant sources at any given time, or what should happen when sources fail to provide input (e.g. because an electrode is faulty). Approaches based on the computation of running error measures can be employed here.
The Imperial College group will be a leading centre in the theory subgroup of the project. We will be active in researching:
- Mixture density networks and mixtures of experts
- Model estimation and pre-processing
- Active sensor fusion
- Active feature and data selection
- Unsupervised data partitioning methods (clustering)
- Model comparison and validation
References
1. S.J. Roberts, Parametric and Non-parametric Unsupervised Cluster Analysis. Pattern Recognition, 30(2), 1997.
2. S.J. Roberts and L. Tarassenko, The Analysis of the Sleep EEG using a Multi-layer Neural Network with Spatial Organisation. IEE Proceedings Part F, 139(6), 420-425, 1992a.
3. S.J. Roberts and L.
Tarassenko, New Method of Automated Sleep Quantification. Medical and Biological Engineering and Computing, 30(5), 509-517, 1992b.
3) Assessment of cortical vigilance
This project, funded by British Aerospace's Sowerby Research Centre at Bristol, aims to assess and predict lapses in vigilance in the human brain. The brain's electrical activity is to be recorded and analysed. The utility of a device or system which may monitor an individual's level of vigilance is clear in a range of safety-critical environments. We propose that such utility would be enhanced by a system which, as well as monitoring the present state of vigilance, made a prediction as to the likely evolution of vigilance in the near future. To perform both these tasks, i.e. a static pattern assessment and a dynamic tracking and prediction, sophisticated methods of information extraction, sensor fusion and classification/regression must be employed. Over the last decade the theory of `artificial neural' networks has been pitched within the framework of advanced statistical decision theory, and it is within this framework that we intend to work. The aim of the project is to work towards a practical real-time system. The latter should be minimally intrusive and should make predictions of future vigilance states. Where appropriate, therefore, the investigation will assess each technique in the developing system with a view to its implementation in a real-time environment. The project will involve research into:
- New methods of signal complexity and synchronisation estimation
- Information flow estimation in multi-channel environments
- Active sensor fusion
- Prediction and classification
- Error estimation
- State transition detection and state sequence modelling
Reference: Makeig, S., Jung, T-P. and Sejnowski, T. (1996), Using feedforward neural networks to monitor alertness from changes in EEG correlation and coherence, Advances in Neural Information Processing Systems (NIPS), MIT Press, Cambridge, MA.
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
descent in multilayered neural networks it is known that the necessary process of student specialization can be delayed significantly. We demonstrate that this phenomenon also occurs in various models of unsupervised learning. A solvable model of competitive learning is presented, which identifies prototype vectors suitable for the representation of high-dimensional data. The specific case of two overlapping clusters of data and a matching number of prototype vectors exhibits nontrivial behavior like almost stationary plateau configurations. As a second example scenario we investigate the application of Sanger's algorithm for principal component analysis in the presence of two relevant directions in input space. Here, the fast learning of the first principal component may lead to an almost complete loss of initial knowledge about the second one.
---------------------------------------------------------------------
Retrieval procedure:
unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/preprint/1997
ftp> binary
ftp> get WUE-ITP-97-003.ps.gz (*)
ftp> quit
unix> gunzip WUE-ITP-97-003.ps.gz
e.g. unix> lp WUE-ITP-97-003.ps [10 pages]
(*) can be replaced by "get WUE-ITP-97-003.ps". The file will then be uncompressed before transmission (slow!).
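For readers who have not seen it, the update the abstract above refers to as Sanger's algorithm (the generalized Hebbian rule for extracting successive principal components) can be sketched in a few lines; the dimensions, data, and learning rate below are illustrative choices of the editor, not taken from the preprint.

import numpy as np

rng = np.random.default_rng(1)
d, k, eta, n = 5, 2, 0.01, 3000

# Toy data: two dominant directions (variances 9 and 4) plus noise,
# mixed through a random rotation so the principal axes are not trivial.
scales = np.array([3.0, 2.0, 0.5, 0.5, 0.5])
Q, _ = np.linalg.qr(rng.normal(size=(d, d)))   # random orthogonal matrix
X = (rng.normal(size=(n, d)) * scales) @ Q.T
X -= X.mean(axis=0)

W = rng.normal(scale=0.1, size=(k, d))   # k weight vectors, one per output

# Sanger's rule (generalized Hebbian algorithm), one pass over the data:
#   dW_i = eta * y_i * (x - sum_{j <= i} y_j * W_j)
for x in X:
    y = W @ x
    W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

# Compare the learned vectors with the true leading principal directions.
eigvals, eigvecs = np.linalg.eigh(np.cov(X.T))
top = eigvecs[:, ::-1][:, :k].T
for i in range(k):
    cos = abs(W[i] @ top[i]) / np.linalg.norm(W[i])
    print(f"output {i}: |cosine| with principal direction {i+1} = {cos:.3f}")

Tracking the two cosines over the pass would expose the kind of transient, plateau-like behaviour the abstract analyses, where the first direction is learned quickly while knowledge of the second can temporarily degrade.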
_____________________________________________________________________ -- Michael Biehl Institut fuer Theoretische Physik Julius-Maximilians-Universitaet Wuerzburg Am Hubland D-97074 Wuerzburg email: biehl at physik.uni-wuerzburg.de homepage: http://www.physik.uni-wuerzburg.de/~biehl Tel.: (+49) (0)931 888 5865 " " " 5131 Fax : (+49) (0)931 888 5141 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Take the 110 south past downtown LA to Exposition; at the exit, bend soft right to hop through a quick light at Flowers St. Continue straight one block, turn right on Figueroa. The USC campus is now on your left, but do not enter. Continue the length of the campus, turn left on Jefferson, and continue 2/3 of the length of the campus heading west, past Hoover. At the light at McClintock, turn left into the campus. This is the weekend entrance. See instructions for "EVERYBODY" below for the final details. Expected drive time on Saturday: 25 minutes. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Take the 10 west, exit Vermont, turn right (south) and proceed a half mile, turn left on Jefferson and continue to McClintock. The weekend entrance to the USC campus will be on your right. See instructions for "EVERYBODY" below for the final details. Expected drive time on Saturday: 5 minutes from Vermont exit. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Take the (5 N to the) 405 N to the 110 N, exit Exposition. Proceed straight through the light and the next light at the DMV entrance. Bend hard left past the DMV to cross under the freeway. Proceed through the light at Flowers, continue 1 block, and turn right on Figueroa. The USC campus is now on your left, but do not enter. Continue the length of the campus, turn left on Jefferson, and continue 2/3 of the length of the campus heading west, past Hoover. At the light at McClintock, turn left into the campus. This is the weekend entrance. See instructions for "EVERYBODY" below for the final details. Expected drive time on Saturday: 25 minutes. EVERYBODY Enter the USC campus and purchase an all-day parking pass ($6) at the guard booth. Proceed straight south on McClintock past the pool (on right), and playing field (on left) to the corner of 36th Place/Downey. You may park in lot 6 on your right or the large parking structure just ahead on the right. Seeley Mudd (SGM) is a tall brick and concrete building on the NE corner of Downey and McClintock. SGM 124 is a large auditorium on the ground floor. Look for coffee-drinking computational neuroscientists. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: When this book was conceived ten years ago, few scientists realized the width of scope and the power for applicability of the central ideas. Partially because of the enthusiastic reception of the first edition, open problems have been solved and new applications have been developed. We have added new material on the relation between data compression and minimum description length induction, computational learning, and universal prediction; circuit theory; distributed algorithmics; instance complexity; CD compression; computational complexity; Kolmogorov random graphs; shortest encoding of routing tables in communication networks; resource-bounded computable universal distributions; average case properties; the equality of statistical entropy and expected Kolmogorov complexity; and so on. Apart from being used by researchers and as reference work, the book is now commonly used for graduate courses and seminars. In recognition of this fact, the second edition has been produced in textbook style. We have preserved as much as possible the ordering of the material as it was in the first edition. The many exercises bunched together at the ends of some chapters have been moved to the appropriate sections. The comprehensive bibliography on Kolmogorov complexity at the end of the book has been updated, as have the ``History and References'' sections of the chapters. Many readers were kind enough to express their appreciation for the first edition and to send notification of typos, errors, and comments. Their number is too large to thank them individually, so we thank them all collectively. BLURB: Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity---the theory dealing with the quantity of information in individual objects. Kolmogorov complexity is known variously as `algorithmic information', `algorithmic entropy', `Kolmogorov-Chaitin complexity', `descriptional complexity', `shortest program length', `algorithmic randomness', and others. The book is ideal for advanced undergraduate students, graduate students and researchers in computer science, mathematics, cognitive sciences, artificial intelligence, philosophy, statistics and physics. The book is self contained in the sense that it contains the basic requirements of computability theory, probability theory, information theory, and coding. Included are also numerous problem sets, comments, source references and hints to the solutions of problems, course outlines for classroom use, as well as a great deal of new material not included in the first edition. 
CONTENTS:

Preface to the First Edition v
How to Use This Book viii
Acknowledgments x
Preface to the Second Edition xii
Outlines of One-Semester Courses xii
List of Figures xix

1 Preliminaries 1
1.1 A Brief Introduction 1
1.2 Prerequisites and Notation 6
1.3 Numbers and Combinatorics 8
1.4 Binary Strings 12
1.5 Asymptotic Notation 15
1.6 Basics of Probability Theory 18
1.7 Basics of Computability Theory 24
1.8 The Roots of Kolmogorov Complexity 47
1.9 Randomness 49
1.10 Prediction and Probability 59
1.11 Information Theory and Coding 65
1.12 State Symbol Complexity 84
1.13 History and References 86

2 Algorithmic Complexity 93
2.1 The Invariance Theorem 96
2.2 Incompressibility 108
2.3 C as an Integer Function 119
2.4 Random Finite Sequences 127
2.5 *Random Infinite Sequences 136
2.6 Statistical Properties of Finite Sequences 158
2.7 Algorithmic Properties of C 167
2.8 Algorithmic Information Theory 179
2.9 History and References 185

3 Algorithmic Prefix Complexity 189
3.1 The Invariance Theorem 192
3.2 *Sizes of the Constants 197
3.3 Incompressibility 202
3.4 K as an Integer Function 206
3.5 Random Finite Sequences 208
3.6 *Random Infinite Sequences 211
3.7 Algorithmic Properties of K 224
3.8 *Complexity of Complexity 226
3.9 *Symmetry of Algorithmic Information 229
3.10 History and References 237

4 Algorithmic Probability 239
4.1 Enumerable Functions Revisited 240
4.2 Nonclassical Notation of Measures 242
4.3 Discrete Sample Space 245
4.4 Universal Average-Case Complexity 268
4.5 Continuous Sample Space 272
4.6 Universal Average-Case Complexity, Continued 307
4.7 History and References 307

5 Inductive Reasoning 315
5.1 Introduction 315
5.2 Solomonoff's Theory of Prediction 324
5.3 Universal Recursion Induction 335
5.4 Simple Pac-Learning 339
5.5 Hypothesis Identification by Minimum Description Length 351
5.6 History and References 372

6 The Incompressibility Method 379
6.1 Three Examples 380
6.2 High-Probability Properties 385
6.3 Combinatorics 389
6.4 Kolmogorov Random Graphs 396
6.5 Compact Routing 404
6.6 Average-Case Complexity of Heapsort 412
6.7 Longest Common Subsequence 417
6.8 Formal Language Theory 420
6.9 Online CFL Recognition 427
6.10 Turing Machine Time Complexity 432
6.11 Parallel Computation 445
6.12 Switching Lemma 449
6.13 History and References 452

7 Resource-Bounded Complexity 459
7.1 Mathematical Theory 460
7.2 Language Compression 476
7.3 Computational Complexity 488
7.4 Instance Complexity 495
7.5 Kt Complexity and Universal Optimal Search 502
7.6 Time-Limited Universal Distributions 506
7.7 Logical Depth 510
7.8 History and References 516

8 Physics, Information, and Computation 521
8.1 Algorithmic Complexity and Shannon's Entropy 522
8.2 Reversible Computation 528
8.3 Information Distance 537
8.4 Thermodynamics 554
8.5 Entropy Revisited 565
8.6 Compression in Nature 583
8.7 History and References 586

References 591
Index 618

If you are seriously interested in using the text in a course, contact Springer-Verlag's Editor for Computer Science, Martin Gilchrist, for a complimentary copy: Martin Gilchrist, marting at springer-sc.com, Suite 200, 3600 Pruneridge Ave., Santa Clara, CA 95051, (408) 249-9314. If you are interested in the text but won't be teaching a course, we understand that Springer-Verlag sells the book, too. To order, call toll-free 1-800-SPRINGER (1-800-777-4643); N.J. residents call 201-348-4033. For information regarding examination copies for course adoptions, write Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010. 
You can order through the Web site: "http://www.springer-ny.com/" For U.S.A./Canada/Mexico- e-mail: orders at springer-ny.com or fax an order form to: 201-348-4505. For orders outside U.S.A./Canada/Mexico send this form to: orders at springer.de Or call toll free: 800-SPRINGER - 8:30 am to 5:30 pm ET (that's 777-4643 and 201-348-4033 in NJ). Write to Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY, 10010. Visit your local scientific bookstore. Mail payments may be made by check, purchase order, or credit card (see note below). Prices are payable in U.S. currency or its equivalent and are subject to change without notice. Remember, your 30-day return privilege is always guaranteed! Your complete address is necessary to fulfill your order. From horvitz at MICROSOFT.com Mon Jun 5 16:42:55 2006 From: horvitz at MICROSOFT.com (Eric Horvitz) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: UAI '97 program and registration information Message-ID: Dear Colleague: I have appended program and registration information for the Thirteenth Conference on Uncertainty and Artificial Intelligence (UAI '97). More details and an online registration form are linked to the UAI '97 home page at http://cuai97.microsoft.com. UAI '97 will be held at Brown University in Providence, Rhode Island, August 1-3. In addition to the main program, you may find interesting the Full-Day Course on Uncertain Reasoning which will be held on Thursday, July 31. Details on the course can be found at http://cuai97.microsoft.com/course.htm. Please register for the conference and/or the course before early registration comes to an end on May 31, 1997. I would be happy to answer any additional questions about the conference. Best regards, Eric Horvitz Conference Chair ==================================================== Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI '97) http://cuai97.microsoft.com August 1-3, 1997 Brown University Providence, Rhode Island, USA ============================================= ** UAI '97 Conference Program ** ============================================= Thursday, July 31, 1997 Conference and Course Registration 8:00-8:30am http://cuai97.microsoft.com/register/reg.htm Full-Day Course on Uncertain Reasoning 8:30-6:00pm http://cuai97.microsoft.com/course.htm _____________________________________________ Friday, August 1, 1997 Main Conference Registration 8:00-8:25am Opening Remarks Dan Geiger and Prakash P. Shenoy 8:25-8:30am Invited talk I: Local Computation Algorithms Steffen L. Lauritzen 8:30-9:30am Abstract: Inference in probabilistic expert systems has been made possible through the development of efficient algorithms that in one way or another involve message passing between local entities arranged to form a junction tree. Many of these algorithms have a common structure which can be partly formalized in abstract axioms with an algebraic flavor. However, the existing abstract frameworks do not fully capture all interesting cases of such local computation algorithms. The lecture will describe the basic elements of the algorithms, give examples of interesting local computations that are covered by current abstract frameworks, and also examples of interesting computations that are not, with a view towards reaching a fuller exploitation of the potential in these ideas. Invited talk II: Coding Theory and Probability Propagation in Loopy Bayesian Networks Robert J. 
McEliece 9:30-10:30am Abstract: In 1993 a group of coding researchers in France devised, as part of their astonishing "turbo code" breakthrough, a remarkable iterative decoding algorithm. This algorithm can be viewed as an inference algorithm on a Bayesian network, but (a) it is approximate, not exact, and (b) it violates a sacred assumption in Bayesian analysis, viz., that the network should have no loops. Indeed, it is accurate to say that the turbo decoding algorithm is functionally equivalent to Pearl's algorithm applied to a certain directed bipartite graph in which the messages circulate around indefinitely, until either convergence is reached, or (more realistically) for a fixed number of cycles. With hindsight, it is possible to trace a continuous chain of "loopy" belief propagation algorithms within the coding community beginning in 1962 (with Gallager's iterative decoding algorithm for low density parity check codes), continued in 1981 by Tanner and much more recently (1995-1996) by Wiberg and MacKay-Neal. In this talk I'd like to challenge the UAI community to reassess the conventional wisdom that probability propagation only works in trees, since the coding community has now accumulated considerable experimental evidence that in some cases at least, "loopy" belief propagation works, at least approximately. Along the way, I'll do my best to bring the AI audience up to speed on the latest developments in coding. My emphasis will be on convolutional codes, since they are the building blocks for turbo-codes. I will mention that two of the most important (pre-turbo) decoding algorithms, viz. Viterbi (1967) and BCJR (1974), can be stated in orthodox Bayesian network terms. BCJR, for example, is an anticipation of Pearl's algorithm on a special kind of tree, and Viterbi's algorithm gives a solution to the "most probable explanation" problem on the same structure. Thus coding theorists and AI people have been working on, and solving, similar problems for a long time. It would be nice if they became more aware of each other's work. Break 10:30-11:00am ** Plenary Session I: Modeling 11:00am-12:00 noon Object-Oriented Bayesian Networks Daphne Koller and Avi Pfeffer (winner of the best student paper award) Problem-Focused Incremental Elicitation of Multi-Attribute Utility Models Vu Ha and Peter Haddawy Representing Aggregate Belief through the Competitive Equilibrium of a Securities Market David M. Pennock and Michael P. Wellman Lunch 12:00-1:30pm ** Plenary Session II: Learning & Clustering 1:30-3:00pm A Bayesian Approach to Learning Bayesian Networks with Local Structure David Maxwell Chickering and David Heckerman Batch and On-line Parameter Estimation in Bayesian Networks Eric Bauer, Daphne Koller, and Yoram Singer Sequential Update of Bayesian Network Structure Nir Friedman and Moises Goldszmidt An Information-Theoretic Analysis of Hard and Soft Assignment Methods for Clustering Michael Kearns, Yishay Mansour, and Andrew Ng ** Poster Session I: Overview Presentations 3:00-3:30pm * Poster Session I 3:30-5:30pm Algorithms for Learning Decomposable Models and Chordal Graphs Luis M. de Campos and Juan F. Huete Defining Explanation in Probabilistic Systems Urszula Chajewska and Joseph Y. Halpern Exploring Parallelism in Learning Belief Networks T. Chu and Yang Xiang Efficient Induction of Finite State Automata Matthew S. Collins and Jonathon J. 
Oliver A Scheme for Approximating Probabilistic Inference Rina Dechter and Irina Rish Limitations of Skeptical Default Reasoning Jens Doerpmund The Complexity of Plan Existence and Evaluation in Probabilistic Domains Judy Goldsmith, Michael L. Littman, and Martin Mundhenk Learning Bayesian Nets that Perform Well Russell Greiner Model Selection for Bayesian-Network Classifiers David Heckerman and Christopher Meek Time-Critical Action Eric Horvitz and Adam Seiver Composition of Probability Measures on Finite Spaces Radim Jirousek Computational Advantages of Relevance Reasoning in Bayesian Belief Networks Yan Lin and Marek J. Druzdzel Support and Plausibility Degrees in Generalized Functional Models Paul-Andre Monney On Stable Multi-Agent Behavior in Face of Uncertainty Moshe Tennenholtz Cost-Sharing in Bayesian Knowledge Bases Solomon Eyal Shimony, Carmel Domshlak and Eugene Santos Jr. Independence of Causal Influence and Clique Tree Propagation Nevin L. Zhang and Li Yan __________________________________________________________ Saturday, August 2, 1997 Invited talk III: Genetic Linkage Analysis Alejandro A. Schaffer 8:30-9:30am Abstract: Genetic linkage analysis is a collection of statistical techniques used to infer the approximate chromosomal location of disease susceptibility genes using family tree data. Among the widely publicized linkage discoveries in 1996 were the approximate locations of genes conferring susceptibility to Parkinson's disease, prostate cancer, Crohn's disease, and adult-onset diabetes. Most linkage analysis methods are based on maximum likelihood estimation. Parametric linkage analysis methods use probabilistic inference on Bayesian networks, which is also used in the UAI community. I will give a self-contained overview of the genetics, statistics, algorithms, and software used in real linkage analysis studies. ** Plenary Session III: Markov Decision Processes 9:30-10:30am Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes Thomas Dean, Robert Givan and Sonia Leach Incremental Pruning: A Simple, Fast, Exact Algorithm for Partially Observable Markov Decision Processes Anthony Cassandra, Michael L. Littman and Nevin L. Zhang Region-based Approximations for Planing in Stochastic Domains Nevin L. Zhang and Wenju Liu Break 10:30-11:00am * Panel Discussion: 11:00-12:00am Lunch 12:00-1:30pm ** Plenary Session IV: Foundations 1:30-3:00pm Two Senses of Utility Independence Yoav Shoham Probability Update: Conditioning vs. Cross-Entropy Adam J. Grove and Joseph Y. Halpern Probabilistic Acceptance Henry E. Kyburg Jr. Estimation of Effects of Sequential Treatments By Reparameterizing Directed Acyclic Graphs James M. Robins and Larry Wasserman ** Poster Session II: Overview Presentations 3:00-3:30pm * Poster Session II 3:30-5:30pm Network Fragments: Representing Knowledge for Probabilistic Models Kathryn Blackmond Laskey and Suzanne M. Mahoney Correlated Action Effects in Decision Theoretic Regression Craig Boutilier A Standard Approach for Optimizing Belief-Network Inference Adnan Darwiche and Gregory Provan Myopic Value of Information for Influence Diagrams Soren L. Dittmer and Finn V. Jensen Algorithm Portfolio Design Theory vs. Practice Carla P. Gomes and Bart Selman Learning Belief Networks in Domains with Recursively Embedded Pseudo Independent Submodels J. 
Hu and Yang Xiang Relational Bayesian Networks Manfred Jaeger A Target Classification Decision Aid Todd Michael Mansell Structure and Parameter Learning for Causal Independence and Causal Interactions Models Christopher Meek and David Heckerman An Investigation into the Cognitive Processing of Causal Knowledge Richard E. Neapolitan, Scott B. Morris, and Doug Cork Learning Bayesian Networks from Incomplete Databases Marco Ramoni and Paola Sebastiani Incremental Map Generation by Low Cost Robots Based on Possibility/Necessity Grids M. Lopez Sanchez, R. Lopez de Mantaras, and C. Sierra Sequential Thresholds: Evolving Context of Default Extensions Choh Man Teng Score and Information for Recursive Exponential Models with Incomplete Data Bo Thiesson Fast Value Iteration for Goal-Directed Markov Decision Processes Nevin L. Zhang and Weihong Zhang __________________________________________________________ Sunday, August 3, 1997 Invited talk IV: Gaussian processes - a replacement for supervised neural networks? David J.C. MacKay 8:20-9:20am Abstract: Feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems. From a Bayesian perspective, a choice of a neural network model can be viewed as defining a prior probability distribution over non-linear functions, and the neural network's learning process can be interpreted in terms of the posterior probability distribution over the unknown function. (Some learning algorithms search for the function with maximum posterior probability; others use Monte Carlo methods to draw samples from this posterior probability.) In the limit of large but otherwise standard networks, Neal (1996) has shown that the prior distribution over non-linear functions implied by the Bayesian neural network falls in a class of probability distributions known as Gaussian processes. The hyperparameters of the neural network model determine the characteristic lengthscales of the Gaussian process. Neal's observation motivates the idea of discarding parameterized networks and working directly with Gaussian processes. Computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the Gaussian process. In this talk I will review work on this idea by Neal, Williams, Rasmussen, Barber, Gibbs and MacKay, and will assess whether, for supervised regression and classification tasks, the feedforward network has been superseded. * Plenary Session V: Applications of Uncertain Reasoning 9:20-10:40am Bayes Networks for Sonar Sensor Fusion Ami Berler and Solomon Eyal Shimony Image Segmentation in Video Sequences: A Probabilistic Approach Nir Friedman and Stuart Russell Lexical Access for Speech Understanding using Minimum Message Length Encoding Ian Thomas, Ingrid Zukerman, Bhavani Raskutti, Jonathan Oliver, David Albrecht A Decision-Theoretic Approach to Graphics Rendering Eric Horvitz and Jed Lengyel * Break 10:40-11:00am * Panel Discussion: 11:00am-12:00 noon Lunch 12:00-1:30pm ** Plenary Session VI: Developments in Belief and Possibility 1:30-3:00pm Decision-making under Ordinal Preferences and Comparative Uncertainty D. Dubois, H. Fargier, and H. Prade Inference with Idempotent Valuations Luis D. Hernandez and Serafin Moral Corporate Evidential Decision Making in Performance Prediction Domains A.G. Buchner, W. Dubitzky, A. Schuster, P. Lopes, P.G. O'Donoghue, J.G. Hughes, D.A. Bell, K. Adamson, J.A. White, J. Anderson, M.D. 
Mulvenna Exploiting Uncertain and Temporal Information in Correlation John Bigham Break 3:00-3:30pm ** Plenary Session VII: Topics on Inference 3:30-5:00pm Nonuniform Dynamic Discretization in Hybrid Networks Alexander V. Kozlov and Daphne Koller Robustness Analysis of Bayesian Networks with Local Convex Sets of Distributions Fabio Cozman Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks Adrian Y. W. Cheuk and Craig Boutilier Nested Junction Trees Uffe Kjaerulff __________________________________________________________ If you have questions about the UAI '97 program, contact the UAI '97 Program Chairs, Dan Geiger and Prakash P. Shenoy. For other questions about UAI '97, please contact the Conference Chair, Eric Horvitz. * * * UAI '97 Conference Chair Eric Horvitz (horvitz at microsoft.com) Microsoft Research, 9S Redmond, WA, USA http://research.microsoft.com/~horvitz UAI '97 Program Chairs Dan Geiger (dang at cs.technion.ac.il) Computer Science Department Technion, Israel Institute of Technology Prakash Shenoy (pshenoy at ukans.edu) School of Business University of Kansas http://stat1.cc.ukans.edu/~pshenoy/ ==================================================== To register for UAI '97, please use the online registration form at: http://cuai97.microsoft.com/register/reg.htm If you do not have access to the web, please use the appended ASCII form. Detailed information on accommodations can be found at http://cuai97.microsoft.com/#lodge. Several blocks of rooms of on-campus housing at Brown University have been reserved for UAI attendees on a first-come, first-served basis. In addition, there are five hotels within a 1-mile radius of the UAI Conference (see http://www.providenceri.com/as220/hotels.html for additional information on hotels). Travel information is available at: http://cuai97.microsoft.com/#trav ====================================================== ***** UAI '97 Registration Form ***** (If possible, please use the online form at http://cuai97.microsoft.com/register/reg.htm) ------------------------------------------------------------------------ ----------------- * Name (Last, First): _____________________________ * Affiliation: ___________________________ * Email address: ___________________________ * Mailing address: ___________________________ * Telephone: ___________________________ ------------------------------------------------------------------------ ----------------- ** Registration Fees: >>> Main Conference <<< Fees (please circle and tally below): Early Registration: $225, Late Registration (After May 31): $285 Student Registration (certify below): $125, Late Registration (After May 31): $150 * * * >>> Full-Day Course on Uncertain Reasoning (July 31, 1997) <<< * Fees: With Conference Registration: $75, Without Conference: $125 Student (certify below): With Conference Registration: $35, Without Conference: $55 The registration fee includes the conference banquet on August 2nd and a package of three lunches which will be served on campus. * Student certification I am a full-time student at the following institution:____________________ Academic advisor's name:____________________ Academic advisor's email:____________________ * Conference Registration Fees: U.S. $ ________________________ Full-Day Course: U.S. $ ________________________ TOTAL: U.S. 
$ ________________________ ______________________________________________________ Please make check payable to: AUAI or Association for Uncertainty in Artificial Intelligence Or indicate credit card payment(s) enclosed: ______ Mastercard ______ Visa Credit Card No.: _____________________________________________ Exp. Date: ________________________ Signature: ____________________________ For credit card payment, you may fax this form to: (206) 936-1616 Registrations by check/money order should be mailed to: Eric Horvitz Microsoft Research, 9S Redmond, WA 98052-6399 Fax: 206-936-1616 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Neural Networks for modelling and control Message-ID: 
Dear all, Below are descriptions of two recently submitted papers dealing with neural networks for modelling and control. Respective drafts are available via HTTP at http://www.mech.gla.ac.uk/~ericr/research.html Any comments would be greatly appreciated. (1) Eric Ronco and Peter J. Gawthrop, 1997 (submitted). Incremental Model Reference Adaptive Polynomial Controllers Network. IEEE Transactions on Systems, Man and Cybernetics. Abstract: The Incremental Model Reference Adaptive Polynomial Controllers Network (IMRAPCN) is a completely autonomous adaptive nonlinear controller. This algorithm consists of a Polynomial Controllers Network (PCN) and an Incremental Network Construction (INC). The PCN is a network of polynomial controllers, each one being valid for a different operating region of the system. The use of polynomial controllers significantly reduces the number of controllers required to control a nonlinear system while improving the control accuracy, and all of this without any drawbacks, since polynomials are ``linear in parameters'' functions. Such a control system can be used for the control of a possibly discontinuous nonlinear system; it is not affected by the ``stability-plasticity dilemma'' and yet can have a very clear architecture since it is composed of linear controllers. The INC aims to resolve the clustering problem that faces any such multi-controller method. The INC enables a very efficient construction of the network as well as an accurate determination of the region of validity of each controller. Hence, the INC gives the PCN complete autonomy, since the clustering of the operating space can be achieved without any a priori knowledge about the system. These advantages make clear the powerful control potential of the IMRAPCN in the domain of autonomous adaptive control of nonlinear systems. (2) Eric Ronco and Peter J. Gawthrop, 1997 (submitted). Polynomial Models Network for system modelling and control. IEEE Transactions on Neural Networks. Abstract: For the purposes of control, it is essential that the chosen class of models is transparent, in the sense that the model structure and parameters may be interpreted in the context of control system design. The unclear representation of the system developed by most neural networks highly restricts their application to system modelling and control. Local computation tends to bring clarity to the neural representation. The local models network (LMN) applies this method while adapting different models to different operating regions of the system. 
This paper builds on the Local Model Network approach and makes two main contributions: the network is not of a fixed structure, but rather is constructed incrementally on-line, and the models are not linear but rather polynomial in the variables. The resulting network is named the incremental polynomial model network (IPMN). In this paper we show that the transparency of the IPMN's representation makes model analysis and control design straightforward. The many advantages of this approach, set out in the conclusion, demonstrate the powerful capability of the IPMN to model and control nonlinear systems. 
----------------------------------------------------------------------------- | Eric Ronco | | Dept of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | ----------------------------------------------------------------------------- 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 
I found your commentary in the recent Connectionists posting very interesting. I have been working on the logical basis for neural computation for well over a decade now and have what I feel are many exciting results that I think help provide some focus regarding how to approach the problem of understanding and designing artificial neural systems. I have enclosed some references -- not to impress you, but to give you a flavor for the logical foundation that I have struggled with, ultimately to get it into an extremely simple and insightful explanation of neural computation. Anyway, I am very interested in what you talked about and am interested in trying to attend the described panel. I think that, finally, researchers are beginning to ask the right questions. I believe that learning itself is the computational search for the "right" questions, and so perhaps we, collectively, are about to really learn something about the computational objective(s) of the brain and neural computation in general. Sincerely, Prof. Robert L. Fry 
Relevant Publication List 
"A logical basis for neural network design," invited book chapter in Theory and application of neural networks, Academic Press, to be published 1997. 
"Neural Mechanics," invited paper, Proc. Int. Conf. Neur. Info. Proc., Hong Kong, 1996. 
"Rational neural models based on information theory," invited paper, Post-conference workshop on Neural Information Processing, NIPS'95. 
"Observer-participant models of neural processing," IEEE Trans. Neural Networks, July 1995. 
"Rational neural models based on information theory," 1995 Workshop on Maximum Entropy and Bayesian Methods, Santa Fe, NM, July 1995, sponsored by the Santa Fe Institute and Lawrence Livermore Laboratory. 
"Rational neural models," Information theory and the brain workshop, Stirling, Scotland, Sept 1995. 
"Neural processing of information," R. L. Fry, paper presentation at 1994 IEEE International Symposium on Information Theory, Norway. 
"A mathematical basis for an entropy machine that makes rational decisions," APL Internal Memo F1F(1)90-U-094, 1990. 
"Neural models for the distributed processing of information," APL IRAD report, 1991. 
"The principle of cross-entropy applied to neural networks and autonomous target recognition," APL Internal Memo F1F(1)90-U-005, 1990. 
"Maximized mutual information using macrocanonical probability distributions," 1994 IEEE/IMS Workshop on Information Theory and Statistics, Arlington, VA. ============================================================= From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I have been following your discussion and summarizations with some interest over the last few months and would like to provide some input at this point. Many of the issues you raise are quite important - in particular your call for a clear set of external objectives for any "brain-like" learner. However, the general context in which you have framed the question is somewhat limited. As I understand it, you are suggesting that the classical context of local learning and predefined structure is unrealistic. In many ways I agree .. I beleive the larger and more important context involves the issues of what has been called "life-long learning" and "learning to learn" and "consolidation and transfer of knowledge". I have no idea why so many researchers continue to pursue the development of the next best inductive algorithm or archiecture (be it ANN or not) when many of them understand that the percentage gains on predictive accuracy based solely on an example set is marginal in comparison to the use of prior knowledge (selection of inductive bias). The research community has looked very carefully at how we induce ANN models from examples ... but in comparision there has been very little work on the consolidation and transfer of the knowledge contained in previously learned ANN models. Most of the questions you have posed are subsumed by this larger =0Acontext.Primari= ly .. what type of mechanism(s) is required to (1) learn a task taking advantage of previous learning (prior knowledge), and (2) consolidate this new task knowledge for use in future learning. I do not think that the CNS (central nervous system) is successful in doing this - just by chance - there is something unique about it's architecture that allows =0Athis too= occur. Recent work by myself, Lorien Pratt, Sebatian =0AThrun, Rich Caru= ana, Tom Mitchell, Mark Ring, Johnathon Baxter, and others [NIPS 95 work shop on learning to learn] have provided =0Afounda= tions on which to build in this area. I encourage to review some of this material if you are interested and feel it applies. You can start by cheking out my homepage at ww.csd.uuwo.ca/~dsilver It will lead you to other related areas. ================================================================= Daniel L. Silver University of Western Ontario, London, Canada N6A 3K7 - Dept. of Comp. Sci. dsilver at csd.uwo.ca H: (902)582-7558 O: (902)494-1813 WWW home page .... http://www.csd.uwo.ca/~dsilver ================================================================= From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Regarding the panel discussion on "connectionist learning : is it time to reconsider the foundations ?", I read the arguments with interest and I would like to pose, in addition, another issue. That is, to simulate convincingly a biological system we should not probably be dealing solely with vectors of real numbers. The capacity to deal with symbols and other types of data merits also attention. 
In other words, besides memory and more global learning capabilities, it will be advantageous to be able to handle jointly disparate data such as real numbers, fuzzy sets, propositional statements, etc. Therefore, from a model development point of view it might be quite advantageous to consider working on less structured spaces than the conventional N-dimensional Euclidean space. Such a space could be a (mathematical) lattice. Note that all the previously mentioned data are in effect elements of a mathematical lattice. That is, not only is the conventional Euclidean space a lattice, but so are the set of propositional statements, the collection of fuzzy sets on a universe of discourse, etc. My tentative proposition is this: for machine learning purposes only, replace the Euclidean space by a lattice space. Just imagine how much the learning and decision-making robustness of a system would be enhanced if, in addition to memory, the capability to design the net on its own, polynomial complexity, generalization capability, etc., the system in question is also able to handle jointly disparate data. With considerations, Vassilis G. Kaburlasos Aristotle University of Thessaloniki, Greece 
================================================================ 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 
Just to congratulate you for organizing such an important panel. On the foundations, I would like to remind you that even the point that learning must occur by synaptic adjustments is another question that should be discussed, in my opinion. 
=============================================================== 
An additional note from Prof. Weber Martins - weber at eee.ufg.br 
As a researcher with a Computer and Electrical Engineering background, I was not talking about biological plausibility. I am interested in the resolution of complex problems in an efficient way. I work mainly with weightless (Boolean) neural networks. In those models, we usually deal with the adjusting of neuronal "behavior", not synapses. In my point of view, the Neural Network community uses homogeneous networks (with the same type of neuron throughout the network) to simplify mathematical analysis and algorithms. However, from many books I read on NN, it seems that if you don't adjust synapses, you're not doing NN... From a computational point of view, this doesn't look right to me. 
=============================================================== 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 
I presume that the panel organizers really want to focus on these two assumptions of connectionist learning rather than looking at other assumptions that are embedded in the whole connectionist approach. We should recognize, however, that the two assumptions listed above do not begin to exhaust the theoretical commitments that are bound up in most connectionist models. Some of these other assumptions are: 1) Place-coding. Inputs to the network are encoded by spatial patterns of activation of elements in the input layer rather than in the time-structure of the inputs (i.e. rate-place or labelled-line neural codes rather than temporal pattern codes). To the extent that input patterns are encoded temporally, in the standard view a time-delay layer is used to convert this into a place pattern that is handled by a conventional network. 
However, examples of fine time structure have been found in many parts of the brain where there has been a concerted effort to look for it. 2) Scalar signals. There is one signal conveyed per element (no multiplexing of multiple signals onto the same lines), so that inputs to and outputs from each element are scalar quantities (1 and 2 are closely related). 3) Synchronous or near-synchronous operation. The inputs are summed together to produce an output signal for each time step. The standard neurobiological assumption is that neurons function as rate-integrators with integration times comparable to the time-steps of the inputs, rather than as coincidence detectors. But examples of coincidence detection in the brain are many, and there has been an ongoing debate about whether the cortical pyramidal cell is best seen in terms of an integrating element or as a coincidence detector. 4) Fan-out of the same signals to all target elements. Connection weights may differ, but the same signals are being fed to all of their targets. The standard neurobiological assumption is that impulses of spike trains invade all of the daughter branches of axonal trees. However, there may be conduction blocks at branchpoints that mean that some spikes will not propagate into some terminals (so that the targets do not receive all of the spikes that were transmitted through the axon trunk). There are examples of this in the crayfish. To the extent that one or more of these assumptions are altered, one can potentially get neural architectures with different topologies, and with potentially different capacities. I'm not one to quibble over what the topic of a discussion should be -- that's up to the participants. I'd just like to suggest that if we (a very general and inclusive "we") are going to "reconsider the foundations" of connectionism, we might think more broadly about ALL the assumptions, tacit and explicit, that are involved. My own sense is that, in the absence of a much deeper understanding of exactly what kinds of computational operations are being carried out by cortical structures and how these are carried out, we should probably avoid labels like "brain-like" that give the false sense that we understand more than we do about how the brain works. If one is seriously interested in how "brain-like" a given network architecture is, then one needs to get real, detailed neuroanatomical and/or neurophysiological data and make the comparison more directly. Comparisons with what's in the textbooks just won't do. Things get messy very fast when the neural responses are phasic, nonmonotonic, and have a multitude of different kinds of stimuli that produce them. Peter Cariani 
Peter Cariani, Ph.D. Eaton Peabody Laboratory Massachusetts Eye & Ear Infirmary 243 Charles St, Boston MA 02114 tel (617) 573-4243 FAX (617) 720-4408 email peter at epl.meei.harvard.edu 
============================================================ 
Asim Roy's note to Peter Cariani: 
If I understand you correctly, you are saying that we need to broaden the set of questions. I am sure this issue will come up as we grapple with these questions. And one of the issues from the artificial neural network point of view is how exactly to replicate the detailed biological processes. You certainly want to extract the clever biological ideas, but at some point, say 50 years from now, we might do better than biology with our artificial systems. And we might do things differently than in biology. An example is our flying machines. We do better than the birds out there. 
The functionality is there, but we do it differently and do it better. I think the point I am making is that we need not be tied to every biological detail, but we certainly want to pick up the good ideas and then develop them further. And in the end, we would have a system far superior to the biological ones, but not exactly like it in all the details. 
============================================================== 
Peter Cariani's reply to the above note: 
Yes, I think the most important decisions we make regarding how to construct neural nets in the image of the brain are to determine exactly which aspects of the real biological system are functionally relevant and which ones are not. I definitely agree with you that every biological detail is not important, and I myself am trying to work out highly abstracted schemes for how temporally-structured signals might be processed by arrays of coincidence detectors and delay lines. (Usually the standard criticisms from biologists are that not enough of the biological details are included.) What I am saying, however, is that the basic functional organization that is assumed by the standard connectionist models may not be the right one (or it may not be the only one or the canonical one). There are many aspects of connectionist models that really don't seem to couple well to the neurophysiology, so maybe we should go back and re-examine our assumptions about neural coding. I myself think that temporal pattern codes are far more promising than is commonly thought, but that's my idee fixe. (I could send you a review that I wrote on them if you'd like). I definitely agree with you that once we understand the functional organization of the brain as an information processing system, then we will be able to build devices that are far superior to the biological ones. My motto is: "keep your hands wet, but your mind dry" -- it's important to pay attention to the biology, to not project one's preconceptions onto the system, but it's equally important to keep one's eyes on the essentials, to avoid getting bogged down in largely irrelevant details. I've worked on both adaptive systems and neurophysiology, and I know that the dry people tend to get their neural models from textbooks (that present an overly simplified and uncritical view of things), and the wet people tend not to say much about what kinds of general mechanisms are being used (they look to the NN people for general theories). There is a cycling of ideas between the two that we need to be somewhat wary of --- many neuroscientists begin to believe that information processing must be done in the ways that the neural networks people suggest, and consequently, they cast the interpretation of the data in that light. The NN people then use the physiological evidence to suggest that their models are "brain-like". Physiological data gets shoe-horned into a connectionist account (and especially what aspects of neural activity people decide to go out and observe), and the connectionist account is then held up as being an adequate theory of how the brain works. There are very few strong connectionist accounts that I know of that really stand up under scrutiny -- that are grounded in the data, that predict important aspects of the behavior, and that cannot be explained through other sets of assumptions. 
In these discussions you really need both the physiologists, who understand the complexities and limitations of the data, and the theorists, who understand the functional implications, to interact strongly with each other. So, anyway, I wish you the best of luck with your session. 
============================================================== 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 
Your abstract is very interesting. It sounds like it will be a great discussion. The idea of using a different kind of learning which explicitly stores training data is one I've worked on in the past. A few questions that crossed my mind while reading the abstract are listed below: On Wed, 23 Apr 1997, Asim Roy wrote: > Classical connectionist learning is based on two key ideas. >First, no training examples are to be stored by the learning >algorithm in its memory (memoryless learning). I'm a bit unclear about this. Aren't the weights of the network trained to implicitly store ALL the training examples? I would have said that connectionist learning is based on the idea that "ALL training examples are to be stored by the learning algorithm in its memory"! If I understand correctly, the first key idea is that the training examples are not EXPLICITLY stored in a way in which they could be retrieved or reconstructed. Perhaps my confusion lies in the word stored. How would you define that? I would further say that a number of dynamical recurrent networks like those discussed at my NIPS workshop (http://running.dgcd.doc.ca/NIPS/) do explicitly store presented examples. In fact, training algorithms like back-propagation through time have been criticized for having to explicitly store previous input and hidden unit patterns and thus consume extra memory resources. But, I guess you're probably aware of this since you have Lee on your panel. > The second key idea is that of local > learning - that the nodes of a network are autonomous learners. > Local learning embodies the viewpoint that simple, autonomous > learners, such as the single nodes of a network, can in fact > produce complex behavior in a collective fashion. This second >idea, in its purest form, implies a predefined net being provided >to the algorithm for learning, such as in multilayer perceptrons. In what sense are the learners autonomous? In the MLP each learner requires a feedback error value provided by another node (and ultimately an outside source) in order to update. I would say it's NOT autonomous. > Second, strict local learning (e.g. back propagation type >learning) is not a feasible idea for any system, biological >or otherwise. If it is not feasible for "any" learning system then any system which attempts to use it must fail. Therefore, working connectionist networks must not use strict local learning. Therefore, strict local learning cannot be one of the fundamental ideas of the connectionist approach. Therefore why are we discussing it? I must have misunderstood something here... any ideas where I went off-track? ------------- Dr. Stefan C. Kremer, Research Scientist, Artificial Neural Systems Communications Research Centre, 3701 Carling Ave., P.O. 
Box 11490, Station H, Ottawa, Ontario K2H 8S2 WWW: http://running.dgcd.doc.ca/~kremer/index.html Tel: (613)990-8175 Fax: (613)990-8369 E-mail: Stefan.Kremer at crc.doc.ca =============================================================== From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: In response to your invitation for comments on cogpsy at coglab.psy.soton.ac.uk I have the following : I believe it is past time to reconsider the foundations. The critical deficiency in current connectionism is lack of understanding of the meaning of the term "system architecture". Any system which performs a complex function using large numbers of components experiences very severe constraints on its system architecture if building, repairing or adding features to the system are to be feasible. Current electronic systems are in production which use billions of individual transistors. Such systems must have a simple functional architecture which can (for example) relate failures experienced at the high functional level to defects occurring at the device level. The reason the von Neumann architecture is ubiquitous for electronic systems is that it provides the means for such a simple relationship. It achieves this by partitioning functionality into consistent elements at a number of levels of detail. The consistent element is the instruction, and instruction based descriptions can be seen at every level. Typical levels include device (instruction 'open gate'); assembly code (instruction 'jump'); software (instruction 'do:[ ]'); procedure call (instruction 'disconnect x'); through features (instruction 'carry out Y') to major system functions (instruction 'periodically test for problems') and overall function. The requirement for a function to operate within a simple architecture is crucial. To illustrate, if I needed to design a function to connect telephones together, many designs would be possible, and would carry out the function efficiently, in some cases more efficiently than designs actually in use. However, the vast majority of those designs would be useless once it was necessary for them to interact with and support functions like testing (a connection did not work, which component is defective ?) or billing (who should be charged how much for that call ?) or adding features (allow recipient of call to identify caller). Conclusions drawn about human cognition from a simulation which performs (for example) face recognition without considering how that function would fit within the total cognitive/behavioral system are almost certainly invalid. In my 1990 book I argued that although the brain obviously did not have a von Neumann architecture, very similar pressures exist for a simple functional architecture (for example, the need to build many copies from DNA 'blueprints'). I went on to develop a new and complete system architecture based on an element of functionality appropriate to the brain, the pattern extraction/action recommendation element. This architecture separates cognition into a few major functions, which can in turn be partitioned further all the way down to neurons. The functionality required in neurons is defined by the functional partitioning at a higher level, and that functional partitioning is in turn constrained by the information available to individual neurons, the kind of changes which can be made in neuron connectivity, and the timescale of such changes. 
(see a couple of 1997 papers). This architecture shows phenomena which bear a remarkable resemblance to human brain phenomena, including unguided learning by categorization generating declarative memory; dream sleep; procedural memory; emotional arousal; and even internally generated image sequences. All these phenomena play a functional role in generating behavior from sensory input, and have also been demonstrated by electronic simulation (Coward 1996). My response to the questions to be posed to the panelists would be: 1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? In the pattern extraction hierarchy architecture, which appears to me to be the only option other than the obviously inapplicable von Neumann, one major type of learning (associated with a major separation in the architecture) is the process of sorting experience into categories and associating behaviors with those categories. A category is established by extracting and recording a set of patterns from one unfamiliar object, and developed by adding patterns extracted from any subsequent object which contains many of the patterns already in the category. Memory of objects is thus the prerequisite to this type of learning, which is associated with the cortex. Memoryless learning occurs in other major functions and is an appropriate model in those functions. 2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? The real issues here are first to identify what information could feasibly be made available to a neuron (e.g. past firing of the neuron itself; correlated firing of neurons in its neighborhood; correlated firing between the neuron and another neuron; correlated firing within a separate functional group of neurons; feedback from pleasure or pain; or feedback of some expected result). The second issue is to identify the nature of the feasible changes to the neuron which could be produced (e.g. assignment or removal of a neuron; addition or deletion of an input; correlated addition or deletion of a set of inputs; changes in relative strength of inputs; correlated changes in the strength of a set of inputs; general change in effective input strengths (i.e. threshold change); how long a change lasts). Only after these qualitative factors have been defined by higher functional requirements can quantitative algorithms be developed. 3. Who designs the network inside an autonomous learning system such as the brain? Within the pattern extraction hierarchy architecture it is possible to start from random connectivity and sort experienced objects into categories without guidance or feedback. References: Coward L.A. (1990), 'Pattern Thinking', New York: Praeger (Greenwood). Coward L.A. (1996), 'Understanding of Consciousness through Application of Techniques for Design of Extremely Complex Electronic Systems', Towards a Science of Consciousness, Tucson, Arizona. Coward L.A. (1997), 'Unguided Categorization, Direct and Symbolic Representation, and Evolution of Cognition in a Modified Connectionist Theory', to be published in Proceedings of the Conference on New Trends in Cognitive Science, Austria 1997. Coward L.A. (1997), 'The Pattern Extraction Architecture: a Connectionist Alternative to the Von Neumann Architecture', to be published in the Proceedings of International Workshop on Artificial and Natural Neural Networks, Canary Islands 1997. 
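A minimal sketch of the categorization process described in the answer to question 1 above -- record the patterns of an unfamiliar object as a new category, and add an object's patterns to an existing category when enough of them are already there. The Python below is purely illustrative: the use of plain sets as "patterns", the 0.5 overlap threshold, and the function name are assumptions of the sketch, not details taken from Coward's note.

    def categorize(objects, overlap_threshold=0.5):
        # Each object is a set of extracted patterns; each category is a growing
        # set of patterns. An object joins the category sharing the largest
        # fraction of its patterns if that fraction reaches the threshold;
        # otherwise it founds a new category. Either way, its patterns are added.
        categories, assignments = [], []
        for patterns in objects:
            best, best_overlap = None, 0.0
            for i, cat in enumerate(categories):
                overlap = len(patterns & cat) / len(patterns)
                if overlap > best_overlap:
                    best, best_overlap = i, overlap
            if best is None or best_overlap < overlap_threshold:
                categories.append(set(patterns))      # unfamiliar object: new category
                assignments.append(len(categories) - 1)
            else:
                categories[best] |= patterns          # familiar object: develop the category
                assignments.append(best)
        return categories, assignments

    objs = [{"a", "b", "c"}, {"a", "b", "d"}, {"x", "y", "z"}, {"x", "y", "w"}]
    print(categorize(objs)[1])   # expected grouping: [0, 0, 1, 1]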
================================================================ 
An additional note from Prof. Andrew Coward 
I believe the brain has a functional architecture which I label the pattern extraction hierarchy. At the highest level, functionality separates into five major functions. The first extracts constant patterns from the environment (e.g. object color independent of illumination). The second allows the set of patterns which have been extracted from one object to enter the third function. The third function generates a set of alternative behavioral recommendations with respect to the selected object. The fourth function selects one (or perhaps none) of the alternatives to proceed to action, and the fifth function implements the action. This functional separation can be observed in the major physiology (roughly, the primary sensory cortex, the thalamus, the cortex, the basal ganglia, and the cerebellum). There are of course levels of detail and complexity below this. Within each function, the needs of that function determine the functionality required from neurons within the function, subject to what neuron functionality is possible (which in turn is one factor forcing the use of the architecture). I mentioned in the earlier note that functional partitioning is constrained by the possible neuron functionality given limitations in the areas of the information available to individual neurons, the kind of changes which can be made in neuron connectivity, and the timescale of such changes. Expanding on this somewhat: The source of information which controls changes to the extracted pattern could be: feedback from comparison with expected result; feedback from pleasure or pain; past firing of the neuron itself; correlated firing of neurons in the neighbourhood; correlated firing between the neuron and another neuron; correlated firing within a separate functional group of neurons. The nature of the changes to the extracted pattern produced in the neuron could be: assignment or removal of a neuron; addition or deletion of an input; correlated addition or deletion of a set of inputs; changes in relative strength of inputs; correlated changes in the strength of a set of inputs; general change in effective input strengths (i.e. threshold change); changes in sensitivity to other parameters. The permanence of the changes to the extracted pattern could be: change only at the time the source of information is present; change for a limited time following the source of information being present; change for a long period following the source of information being present. In each high level function, the particular combination of information used, changes which occur, and timescale is dictated by the type of high level functionality. For example, in the behavioral alternative generation region the changes which are required include assignment of neurons, biased random assignment of inputs, setting sensitivity to the arousal factor for the assigned region, deletion of inactive inputs, and threshold reduction. The sources of information which ultimately control these changes are correlated firing of neurons in the neighbourhood, correlated firing of neurons in the neighbourhood with firing of an input, correlated firing of a separate functional group of neurons, and firing of the neuron itself. Timescale is short and long for different operations. Each type of change has an associated source(s) of information and timescale. 
This combination of functionality at the neuron level gives rise to all the phenomena of declarative memory at the higher level, including dream sleep. A different combination of neuron parameters is required in the behavioral alternative selection function, and gives rise to learning which is not declarative. In this function, pleasure and pain act on recently firing neurons to modulate the ability of similar firing in the future to gain control of action. I apply the term 'memoryless' to this learning because no record of prior states is preserved (although the memory of pleasure and pain may be preserved in the alternative generation function). I regard the perceptron and even the adaptive resonance neurons as simplistic in the system sense, although it turns out that the Hebbian neuron plays an important system role. The above is a summary of some discussion in the papers which have been accepted for publication in the proceedings of a couple of conferences in the next month or so. I could send copies if you are interested. I appreciate the opportunity to discuss. Andrew Coward. ================================================================ APPENDIX: 1997 International Conference on Neural Networks (ICNN'97) Houston, Texas (June 8 -12, 1997) ---------------------------------------------------------------- Further information on the conference is available on the conference web page: http://www.mindspring.com/~pci-inc/ICNN97/ ------------------------------------------------------------------ PANEL DISCUSSION ON "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ------------------------------------------------------------------- This is to announce that a panel will discuss the above question at ICNN'97 on Monday afternoon (June 9). Below is the abstract for the panel discussion broadly outlining the questions to be addressed. I am also attaching a slightly modified version of a subsequent note sent to the panelist. I think the issues are very broad and the questions are simple. The questions are not tied to any specific "algorithm" or "network architecture" or "task to be performed." However, the answers to these simple questions may have an enormous effect on the "nature of algorithms" that we would call "brain-like" and for the design and construction of autonomous learning systems and robots. I believe these questions also have a bearing on other brain related sciences such as neuroscience, neurobiology and cognitive science. Asim Roy Arizona State University ------------------------- PANEL MEMBERS 1. Igor Aleksander 2. Shunichi Amari 3. Eric Baum 4. Jim Bezdek 5. Rolf Eckmiller 6. Lee Giles 7. Geoffrey Hinton 8. Dan Levine 9. Robert Marks 10. Jean Jacques Slotine 11. John G. Taylor 12. David Waltz 13. Paul Werbos 14. Nicolaos Karayiannis (Panel Moderator, ICNN'97 General Chair) 15. Asim Roy Six of the above members are plenary speakers at the meeting. ------------------------- PANEL TITLE: "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ABSTRACT Classical connectionist learning is based on two key ideas. First, no training examples are to be stored by the learning algorithm in its memory (memoryless learning). It can use and perform whatever computations are needed on any particular training example, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. 
The second key idea is that of local learning - that the nodes of a network are autonomous learners. Local learning embodies the viewpoint that simple, autonomous learners, such as the single nodes of a network, can in fact produce complex behavior in a collective fashion. This second idea, in its purest form, implies a predefined net being provided to the algorithm for learning, such as in multilayer perceptrons. Recently, some questions have been raised about the validity of these classical ideas. The arguments against classical ideas are simple and compelling. For example, it is a common fact that humans do remember and recall information that is provided to them as part of learning. And the task of learning is considerably easier when one remembers relevant facts and information than when one doesn't. Second, strict local learning (e.g. back propagation type learning) is not a feasible idea for any system, biological or otherwise. It implies predefining a network "by the system" without having seen a single training example and without having any knowledge at all of the complexity of the problem. Again, there is no system that can do that in a meaningful way. The other fallacy of the local learning idea is that it acknowledges the existence of a "master" system that provides the design so that autonomous learners can learn. Recent work has shown that much better learning algorithms, in terms of computational properties (e.g. designing and training a network in polynomial time complexity, etc.) can be developed if we don't constrain them with the restrictions of classical learning. It is, therefore, perhaps time to reexamine the ideas of what we call "brain-like learning." This panel will attempt to address some of the following questions on classical connectionist learning: 1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? 2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? 3. Who designs the network inside an autonomous learning system such as the brain? ------------------------- A SUBSEQUENT NOTE SENT TO THE PANELIST The panel abstract was written to question the two pillars of classical connectionist learning - memoryless learning and pure local learning. With regard to memoryless learning, the basic argument against it is that humans do store information (remember facts/information) in order to learn. So memoryless learning, as far as I understand, cannot be justified by any behavioral or biological observations/facts. That does not mean that humans store any and all information provided to them. They are definitely selective and parsimonious in the choice of information/facts to collect and store. We have been arguing that it is the "combination" of memoryless learning and pure local learning that is not feasible for any system, biological or otherwise. Pure local learning, in this context, implies that the system somehow puts together a set of "local learners" that start learning with each learning example given to it (e.g. in back propagation) without having seen a single training example before and without knowing anything about the complexity of the problem. Such a system can be demonstrated to do well in some cases, but would not work in general. Note that not all existing neural network algorithms are of this pure local learning type.
For example, if I understand correctly, in constructive algorithms such as ART, RBF, RCE/hypersphere and others, a "decision" to create a new node is made by a "global decision-maker" based on evidence on performance of the existing system. So there is quite a bit of global coordination and "decision-making" in those algorithms beyond the simple "local learning". Anyway, if we "accept" the idea that memory can indeed be used for the purpose of learning (Paul Werbos indicated so in one of his notes), the terms of the debate/discussion change dramatically. We then open the door to the development of far more robust and reliable learning algorithms with much nicer properties than before. We can then start to develop algorithms that are closer to "normal human learning processes". Normal human learning includes processes such as (1) collection and storage of information about a problem, (2) examination of the information at hand to determine the complexity of the problem, (3) development of trial solutions (nets) for the problem, (4) testing of trial solutions (nets), (5) discarding such trial solutions (nets) if they are not good enough, and (6) repetition of these processes until an acceptable solution is found. And these learning processes are implemented within the brain, without doubt, using local computing mechanisms of different types. But these learning processes cannot exist without allowing for storage of information about the problem. One of the "large" missing pieces in the neural network field is the definition or characterization of an autonomous learning system such as the brain. We have never defined the external behavioral characteristics of our learning algorithms. We have largely pursued algorithm development from an "internal mechanisms" point of view (local learning, memoryless learning) rather than from the point of view of "external behavior or characteristics" of these resulting algorithms. Some of these external characteristics of our learning algorithms might be: (1) the capability to design the net on their own, (2) polynomial time complexity of the algorithm in design and training of the net, (3) generalization capability, and (4) learning from as few examples as possible (quickness in learning). It is perhaps time to define a set of desirable external characteristics for our learning algorithms. We need to define characteristics that are "independent of": (1) a particular architecture, (2) the problem to be solved (function approximation, classification, memory, etc.), (3) local/global learning issues, and (4) issues of whether to use memory or not to learn. We should rather argue about these external properties than issues of global/local learning and of memoryless learning.
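To make the memoryless versus memory-based distinction concrete, here is a minimal sketch (illustrative only, not part of the panel material; the class names and the least-squares fit are my own assumptions): one learner adjusts its weights on each example and then discards it, the other stores every example and can re-fit a model from the whole body of stored information.

import numpy as np

class MemorylessLearner:
    """LMS-style online learner: each example updates the weights and is
    then forgotten (no training example is ever stored)."""
    def __init__(self, n_inputs, lr=0.01):
        self.w = np.zeros(n_inputs)
        self.lr = lr

    def learn_one(self, x, y):
        error = y - self.w @ x
        self.w += self.lr * error * x   # adjust weights, then discard (x, y)

class MemoryBasedLearner:
    """Stores every example; a model can be (re)designed later from the
    whole stored set, e.g. by ordinary least squares."""
    def __init__(self):
        self.examples = []

    def learn_one(self, x, y):
        self.examples.append((x, y))    # explicit storage of the example

    def fit(self):
        X = np.array([x for x, _ in self.examples])
        t = np.array([y for _, y in self.examples])
        w, *_ = np.linalg.lstsq(X, t, rcond=None)
        return w

The stored examples are what allow the second learner to examine the problem's complexity or redesign its net later; that is exactly what the memoryless learner has given up.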
With best regards, Asim Roy Arizona State University From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Neural networks for Modelling and Control Message-ID: Dear all, Just to let you know of the http availability of a new technical report entitled "Neural networks for Modelling and Control" (it is a compressed file. Please, gunzip the file to view or print it). http://www.mech.gla.ac.uk/~ericr/pub/gmnn_rep.ps.gz or http://www.mech.gla.ac.uk/~yunli/reports.htm The report has been written by Eric Ronco and Peter J. Gawthrop. The keywords are: Neural Networks, Control, Modelling, Modularity. Abstract: This report is a review of the main neuro-control technologies. Two main kinds of neuro-control approaches are distinguished. One entails developing a single controller from a neural network and the other one embeds a number of controllers inside a neural network. The single neuro-control approaches are mainly system inverse: the inverse of the system dynamics is used to control the system in an open loop manner. The Multi-Layer Perceptron (MLP) is widely used for this purpose although there is no guarantee that it can succeed in learning to control the plant and, more importantly, the unclear representation it achieves prohibits the analysis of its learned control properties. These problems and the fact that open loop control is not suitable for many systems highly restrict the usefulness of the MLP for control purposes. However, the non-linear modelling capability of the MLP could be exploited to enhance model based predictive control approaches since, essentially, an accurate model of the plant is all that is required to apply this method. The second neuro-control approach can be seen as a modular approach since different controllers are used for the control of different components of the systems. The main modular neuro-controllers are listed. They are all characterised by a "gating system" used to select the modular units (i.e. controllers or models) valid for computing the current input pattern. These neural networks are referred to as the Gated Modular Neural Networks (GMNNs). Two of these networks are particularly suited for modelling-oriented control purposes. They are the Local Model Network (LMN) and the Multiple Switched Models (MSM). Since the local models of the plant are linear, it is fairly easy to transform them into controllers. For the same reason, the analysis of the properties of these networks can be easily performed and it is straightforward to determine the parameter values of the controllers as linear regression methods can be applied. These advantages among others related to a modular architecture reveal the great potential of these GMNNs for the modelling and control of non-linear systems.
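As a rough illustration of the gating idea described in the abstract (a minimal sketch of my own, not code from the report; the Gaussian validity functions, the class name and the array shapes are assumptions), a gated modular model weights the outputs of local affine models by how valid each one is at the current operating point:

import numpy as np

class GatedLocalLinearModels:
    """Minimal gated modular model: each local model is affine and has a
    Gaussian validity function over the operating point; the gate weights
    the local outputs by their normalised validity."""
    def __init__(self, centres, widths, coeffs):
        self.centres = np.asarray(centres)  # operating points of the local models
        self.widths = np.asarray(widths)    # one validity width per local model
        self.coeffs = np.asarray(coeffs)    # one affine coefficient row per model

    def gate(self, x):
        d2 = np.sum((self.centres - x) ** 2, axis=1)
        g = np.exp(-d2 / (2 * self.widths ** 2))
        return g / g.sum()                  # normalised validity of each model

    def predict(self, x):
        local_outputs = self.coeffs @ np.append(x, 1.0)  # affine local models
        return self.gate(x) @ local_outputs

Because each local model is affine in the operating point, its coefficients can be identified by ordinary linear regression over the data falling in its region of validity, which is the property the abstract highlights for the LMN and MSM.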
Regards, Eric Ronco ----------------------------------------------------------------------------- | Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | ----------------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Incremental Polynomial Model-Controller Network: a self organising non-linear controller Message-ID: Dear all, Just to let you know of the http availability of a new technical report entitled "Incremental Polynomial Model-Controller Network: a self organising non-linear controller" (it is a compressed file. Please, gunzip the file to view or print it). http://www.mech.gla.ac.uk/~ericr/pub/csc9710.ps.gz The report has been written by Eric Ronco and Peter J. Gawthrop. The keywords are: Neural Networks, Control, Modelling, Self-Organisation. Abstract: The aim of this study is to present the "Incremental Polynomial Model-Controller Network" (IPMCN). This network is composed of controllers, each one attached to a model used for its indirect design. At each instant the controller connected to the model performing the best is selected. An automatic network construction algorithm is described in this study. It makes the IPMCN a self-organising non-linear controller. However the emphasis is on the polynomial controllers that are the building blocks of the IPMCN. From an analysis of the properties of polynomial functions for system modelling it is shown that multiple low order odd polynomials are very suitable to model non-linear systems. A closed loop reference model method to design a controller from an odd polynomial model is then described. The properties of the IPMCN are illustrated on a second order system in which both system states $y$ and $\dot{y}$ exhibit non-linear behaviour. It shows that as a component of a network or alone, a low order odd polynomial controller performs much better than a linear adaptive controller. Moreover, the number of controllers is significantly reduced with the increase of the polynomial order of the controllers and an improvement of the control performance is proportional to the decrease of the number of controllers. In addition, the clustering-free approach, applied for the selection of the controllers, makes the IPMCN insensitive to the number of quantities involving non-linearity in the system. The use of local controllers capable of handling systems with complex dynamics will make this scheme one of the most effective approaches for the control of non-linear systems. Regards, Eric Ronco ----------------------------------------------------------------------------- | Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | ----------------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: ...
two self-organising non-linear controllers Message-ID: Dear all, Just to let you know of the http availability of a paper submitted to the Journal "Modeling, Identification and Control" entitled "Incremental Controller Networks: a comparative study between two self-organising non-linear controllers" (it is a compressed file. Please, gunzip the file to view or print it). Get the file among other publications in http://www.mech.gla.ac.uk/~ericr/research.html or the file itself http://www.mech.gla.ac.uk/~ericr/pub/csc97011.ps.gz This paper has been written by Eric Ronco and Peter J. Gawthrop. The keywords are: Neural Networks, Control, Modelling, Self-Organisation. Abstract: Two self-organising controller networks are presented in this study. The "Clustered Controller Network" (CCN) uses a spatial clustering approach to select the controllers at each instant. In the other gated controller network, the "Models-Controller Network" (MCN), it is the performance of the model attached to each controller which is used to achieve the controller selection. An algorithm to automatically construct the architecture of both networks is described. It makes the two schemes self-organising. Different examples of control of non-linear systems are considered in order to illustrate the behaviour of the ICCN and the IMCN. These examples make clear that both schemes perform much better than a single adaptive controller. The two main advantages of the ICCN over the IMCN concern the possibilities to use any controller as a building block of its network architecture and to apply the ICCN for modelling purposes. However the ICCN appears to have serious problems coping with non-linear systems having more than a single variable implying non-linear behaviour. The IMCN does not suffer from this problem. This high sensitivity to the clustering space order is the main drawback limiting the use of the ICCN and therefore makes the IMCN a much more suitable approach to control a wide range of non-linear systems. Regards, Eric Ronco ----------------------------------------------------------------------------- | Dr Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | ----------------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: A thesis on self-organising neuro-control Message-ID: Dear all, Just to let you know of the http availability of my thesis (130 pages) entitled "Incremental Polynomial Controller Networks: two self-organising non-linear controllers" (it is a compressed file. Please, gunzip the file to view or print it). Get the file among other publications in http://www.mech.gla.ac.uk/~ericr/research.html or download it directly http://www.mech.gla.ac.uk/~ericr/pub/thesis.ps.gz The keywords are: Neural Networks, Control, Modelling, Self-Organisation. Abstract: A step toward the development of a self-organising approach for the control of non-linear systems has been made by developing two "incremental polynomial controller networks". They constitute two systematic self-organising approaches for the control of non-linear systems with simple dynamics. Each network is composed of controllers having a region of activity over the system operating space.
One is the "Incremental Clustered Controller Network" (ICCN) and the other one is the "Incremental Model-Controller Network" (IMCN). The two controller networks differ in the manner in which they achieve the selection of the currently valid local controllers. In the ICCN the controller selection relies on a spatial clustering of the system operating space. In the IMCN, each controller is selected according to the performance of its connected model. Both these controller networks use an incremental algorithm to construct their architecture automatically. This algorithm is called the "Incremental Network Construction" (INC). It is the INC which makes the ICCN and IMCN self-organising approaches, since no a priori knowledge (except the system order) is required to apply them. Until now, the controller networks were composed of linear controllers. However, since a high number of linear controllers are required to accurately control a significantly non-linear system, the control capabilities of both these controller networks have been further extended by using polynomial controllers as building blocks of the networks. An important advantage of polynomial functions is their capacity to smoothly approximate non-linear systems and yet have their parameters identifiable using linear regression methods (e.g. least squares). It has been shown in this study that odd low order polynomial functions are very suitable to model non-linear systems. Illustrative examples indicated that the use of such a function as the building block of the controller networks implies an important decrease of the number of controllers required to control a system accurately. Moreover an improvement of the control performance was proportional to the decrease of the number of controllers, with the smoothness of the input transients being the main area of improvement. It was clear from various control examples that the incremental polynomial controller networks have a great potential in the control of non-linear systems. However, the IMCN is a more satisfactory approach than the ICCN. This is due to the clustering-free approach applied by the IMCN for the selection of the controllers. It makes the IMCN insensitive to the number of quantities involving non-linearity in the system. It is argued that the use of local controllers capable of handling systems with complex dynamics makes this scheme one of the most effective self-organising approaches for the control of non-linear systems. Best regards, Eric Ronco ----------------------------------------------------------------------------- | Dr Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | ----------------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: A Brain-like Design to Learn Optimal Decision Strategies in Complex Environments (P. J. Werbos); PART VI. KNOWLEDGE DISCOVERY AND INFORMATION RETRIEVAL: Structural Learning and Rule Discovery from Data (M. Ishikawa); Measuring the Significance and Contributions of Inputs in Backpropagation Neural Networks for Rules Extraction and Data Mining (T. D. Gedeon); Applying Connectionist Models to Information Retrieval (S. J.
Cunningham et al.); PART VII : CONSCIOUSNESS IN LIVING AND ARTIFICIAL SYSTEMS: Neural Networks for Consciousness (J. G. Taylor); Platonic Model of Mind as an Approximation to Neurodynamics (W.Duch); Towards Visual Awareness in a Neural System (I. Aleksander et al.) Nov 1997 544pp Hardcover ISBN: 981-3083-58-1 US$79.00 For ordering information: http://www.springer.com.sg Springer-Verlag Singapore Pte Ltd 1 Tannery Road, Cencon I, #04-01 Singapore 347719 Tel : (65) 842 0112 Fax : (65) 842 0107 e-mail : springer at cyberway.com.sg From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From dst at cs.cmu.edu Mon Jun 5 16:42:55 2006 From: dst at cs.cmu.edu (Dave Touretzky) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Book: Neurons, Networks, and Motor Behavior Message-ID: [[ FORWARDED FROM THE COMP-NEURO MAILING LIST -- DST ]] The following is a book which readers of this list might find of interest. For more information please visit http://mitpress.mit.edu/promotions/books/STE1NHF97 Neurons, Networks, and Motor Behavior edited by Paul S. G. Stein, Sten Grillner, Allen I. Selverston, and Douglas G. Stuart Recent advances in motor behavior research rely on detailed knowledge of the characteristics of the neurons and networks that generate motor behavior. At the cellular level, Neurons, Networks, and Motor Behavior describes the computational characteristics of individual neurons and how these characteristics are modified by neuromodulators. At the network and behavioral levels, the volume discusses how network structure is dynamically modulated to produce adaptive behavior. Comparisons of model systems throughout the animal kingdom provide insights into general principles of motor control. Contributors describe how networks generate such motor behaviors as walking, swimming, flying, scratching, reaching, breathing, feeding, and chewing. An emerging principle of organization is that nervous systems are remarkably efficient in constructing neural networks that control multiple tasks and dynamically adapt to change. The volume contains six sections: selection and initiation of motor patterns; generation and formation of motor patterns: cellular and systems properties; generation and formation of motor patterns: computational approaches; modulation and reconfiguration; short-term modulation of pattern generating circuits; and sensory modification of motor output to control whole body orientation. Computational Neuroscience series. A Bradford Book. December 1997 262 pp. ISBN 0-262-19390-6 MIT Press * 5 Cambridge Center * Cambridge, MA 02142 * (617)625-8569 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: appropriate to distinguish between "memory" and "learning". Memory is simple recording of information and facts (word pairs, etc), whereas learning involves generalization (deriving additional general facts from that information). Generalization (e.g. categorization) depends on a body of information. Recording of names of people, objects, places and relations is simple memorization, not learning (generalization). Much of what is cited by Gale Martin, Gary Cottrell and Stefan Schaal is recording of information (memorization). 
And "relevant" information, facts may indeed be recorded "instantaneously" (memorized) by the brain, there is no question about that. The issue here is whether "learning" is instantaneous and permanent in the Hebbian sense. The study clearly indicates that is not so. ------------------- (C) COULD THE "LOSS OF SKILL" RESULT FROM TRAINING OF THE SAME NET, HEBBIAN STYLE (THE INTERFERENCE PROBLEM)? Several people raised this question (Eric Pitcher, Will Penny and others). There are a variety of problems with that argument. First, the same network may not be appropriate for learning the second motor skill. So, in that case, there are two possibilities to consider for any learning system, biological or otherwise. One, it could destroy the previous net and use its free neurons to create a new net (perhaps using more or less neurons than the previous net) to learn the second motor skill. But such distruction of the previous net will result in "total" loss of skill on the first task, not just "partial" loss of skill. So that possibility will not explain the phenomenon at hand. Second, if in fact the same net is used to learn the second motor skill, then one would have the problem of "catastrophic forgetting." As is well-known, catastrophic forgetting is observed in back-propagation and other types of networks when a previously trained network is subsequently trained with examples from a problem that is completely different from the previous one. And catastrophic forgetting is not just limited to pathological cases of learning. Catastrophic forgetting, in fact, is what we "depend on" when we talk about "adaptive learning" - adaptating to a new situation and forgetting the old. So learning of the second skill in the same net would also result in "total loss of skills" (catastrophic forgetting), not just "partial loss of skills." So this does not explain the phenomenon either. So this type of interference is not a good explanation for the phenomenon at hand - that of "partial loss of skills." -------------------- D) WHY DO YOU SAY CLASSICAL CONNECTIONIST LEARNING IS MEMORYLESS? ISN'T THERE MEMORY IN THE WEIGHTS? Several persons raised this issue. So I include this note below from one of my previous memos: "Memoryless learning implies there is no EXPLICIT storage of any learning example in the system in order to learn. In classical connectionist learning, the weights of the net are adjusted whenever a learning example is presented, but it is promptly forgotten by the system. There is no EXPLICIT storage of any presented example in the system. That is the generally accepted view of "adaptive" or "on-line learning systems." Imagine such a system "planted" in some human brain. And suppose we want to train it to learn addition. So we provide the first example - say, 2 + 2 = 4. This system then uses the example to promptly adjust the weights of the net and forgets the particular example. It has done what it is supposed to do - adjust the weights, given a learning example. Suppose, you then ask this "human", fitted with this learning algorithm: "How much is 2 + 2?" Since it has only seen one example and has not yet fully grasped the rule for adding numbers, it probably would give a wrong answer. So you, as the teacher, perhaps might ask at that point: "I just told you 2 + 2 = 4. What do you mean you don't remember?" And this "human" might respond: "Very honestly, I don't recall you ever having said that! I am very sorry." 
And this would continue to happen after every example you present to this "human" until complete learning has taken place!!! So do you think there is memory in those "weights"? Do you think humans are like that?" ------------------------ (E) A LAST NOTE: The arguments I used against Hebbian-style learning did not rely in any way or form on the details of the PET studies. Only the external behavioral facts were used in the arguments. So questions about irreproducibility of PET and fMRI studies are irrelevant to this argument. ------------------------------------------- RESPONSES FROM OTHERS From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: (REZA SHADMEHR is one of the authors of the study referred to in this discussion.) There is little question among the neuroscientists that practice only starts a process that continues to progress long after the presentation of information has stopped. One only has to consider the fact that the half-life of proteins is on the order of minutes to hours, while memories, which are presumably represented as changes in protein dependent synaptic mechanisms, may last a lifetime. How this is done remains a mystery. Perhaps, as our study hints, with time there are system-wide changes in representation of newly acquired memories. There is much more evidence for this in memories that rely on the medial parts of the temporal lobe, the regions where damage causes amnesia. We find evidence that memories that do not depend on the med. temporal lobe structures also show a time dependent stability property and that this property is correlated with changes in brain regions of representation. -------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: You might be right that consolidation phenomena support a claim for memory-based learning. However, your arguments play too fast and loose with a very large literature on human memory and learning, which indicates that human learning is considerably more complex. The following are some examples. >One of the fundamental beliefs in neuroscience, cognitive science >and artificial neural networks is that the brain learns in >real-time. That is, it learns instantaneously from each and every >learning example provided to it by adjusting the synaptic >strengths or connection weights in a network of neurons. This is wrong. Consolidation phenomena have been around for a long time, as has been the assumption that something happens in the first 5 or so hours after learning to cement what has been learned. This has been the traditional explanation of retrograde amnesia--a trauma to the brain can result in loss of memory regarding events that occurred several hours before the trauma. >What are the real implications of this study? One of the most >important facts is that although both groups had identical >training sessions, they had different levels of learning >of the motor task because of what they did subsequent to >practice. From >this fact alone one can conclude with some degree of >certainty that real-time, instantaneous learning is not >used for learning motor skills. ..... >One has to remember that the essence of learning is >generalization. These statements are also wrong, or at least too simplistic.
The existence of consolidation effects (and possibly memory-based learning) does not rule out the existence of real-time, instantaneous learning. There is perhaps a half-century of psychological research on interference effects in learning that argue for a broader view. When one experiences an event, the ability to recall the event is influenced both by what occurs prior to the event (referred to as proactive interference), and what occurs after it (referred to as retroactive interference). Proactive interference effects indicate that the more familiar you are with a stimulus, the less impact an encounter with it will have on your memory of the encounter (see the psychological literature on word-frequency effects on recognition, repetition effects, lag effects, von Restorff effects, and proactive inhibition in paired- associates learning). Many of these effects occur with familiarity that is established within the immediately prior seconds, minutes, or hours of the experiment, so some type of instantaneous learning occurs that impacts longer-term learning. Retroactive interference effects have been studied most thoroughly in the paired-associates learning paradigm. Here, the point is that when you learn, you are often learning an association between a stimulus and a response. The paired-associates learning paradigm involves having subjects learn a list of stimulus-response pairs, such that when they are presented with each of the stimuli in the list, they can retrieve the corresponding response. Retroactive interference effects occur when, after this learning, subjects are given a new list, with either the same or similar stimuli, paired with new responses. What typically happens is that, even if the first list is learned perfectly, learning the second list interferes with retrieving the responses from the first list. These results occur over short periods of time, and over longer periods of time. Hence, the results from the study you cite, might possibly be explained as retroactive interference effects, as well as, or instead of, as consolidation effects. >A logical explanation perhaps for the "loss of >motor skill" phenomenon, as for any other similar phenomenon, is >that the brain has a limited amount of working or short term >memory. And when encountering important new information, the brain >stores it simply by erasing some old information from the working >memory. And the prior information gets erased from the working >memory before the brain has the time to transfer it to a more >permanent or semi-permanent location for actual learning. The problem here is that short-term memory and working memory have more precise meanings associated with them. Basically, they refer to what you can pay attention to, or rehearse internally or externally, at one time. The capacity is very limited, and so you would be continually changing the contents of short-term memory in working on a single task like the one you describe. Thus, for memory-based learning to occur, there must be some form of instantaneous learning that keeps the to-be-remembered stimuli around long enough for consolidation to occur, but this is not what is commonly referred to as short-term or working memory. --------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Thank you for your reply. I think you are right that we probably would agree about a lot of issues in this area. 
Nevertheless, I still feel the need to caution you about making the strong claim that there is no real-time learning, as defined by evidence of generalization. One of the research paradigms used to investigate human category learning involves first creating a prototype. Sometimes the prototype is an image of randomly arranged dots. Sometimes it is a collection of related sentences, or an image of spatially arranged objects. Exemplars of the concept are then created by applying transformations of various sorts to the prototype. In many cases, the exemplars are created so that some are relatively close, or similar, to the prototype, and some are relatively distant, or dissimilar from the prototype. An experiment involves training people to categorize the exemplars, and then after this learning has occurred, testing them on other exemplars, not seen before, and on the prototype as well. Since most psychological experiments are conducted on undergraduates, in a single one-hour session, many of these category learning experiments take place within a single hour. People usually can perform this task, without being exposed to the number of stimuli we would expect that a neural net would need for such learning, and they can usually accomplish the learning within an hour (which argues against a time-consuming consolidation process being the responsible mechanism). Hence, I think it would be relatively easy to disprove your strong claim, using the enormous empirical literature psychologists have generated over the past half-century or so. Nevertheless, I also think it is great that you are making such a strong claim because it centers attention on the fact that people are capable of category learning which would seem to be impossible by our current computational conceptions of learning, due to high input dimensionality, and the complex mapping functions they apparently are able to approximate through category learning. If we can discover how they do this (and I think the psychological literature provides some clues) we may be able to extend such capabilities to artificial neural nets. ---------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: >It shouldn't be too hard too explain the "loss of skill" >phenomenon, from back-to-back instructions on new motor skills, >that was observed in the study. The explanation shouldn't be >different from the one for the "forgetting of instructions" >phenomenon that occurs with back-to-back instructions in any >learning situation. A logical explanation perhaps for the "loss of >motor skill" phenomenon, as for any other similar phenomenon, is >that the brain has a limited amount of working or short term >memory. And when encountering important new information, the brain >stores it simply by erasing some old information from the working >memory. And the prior information gets erased from the working >memory before the brain has the time to transfer it to a more >permanent or semi-permanent location for actual learning. So "loss >of information" in working memory leads to a "loss of skill." Re the above commentary: Couldn't you apply the same sort of argument about finite working memory capacity to finite real-time learning capacity? What about the following hypothesis? The brain has a limited real-time learning capacity (say a few networks in the frontal lobe that can do real-time learning).
This learning is later transferred to other brain areas (the areas that will carry out the task in the future). But these real-time learning networks can be overwritten when people are exposed to sequences of novel tasks. So there could still be a role for real-time learning in the brain. ------------------------------------------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I don't quite understand where you get this idea that Cognitive Scientists believe learning is memoryless. All you have to do is read any introductory text to Cognitive Science to see all of the different kinds of memory there are, and how learning can go on in these different subsystems. Perhaps I am not getting your point - that real-time learning is somehow different? But the simplest kind of learning is rote learning, and one could argue that the hippocampus stores examples (which then train cortex, see papers by Larry Squire). Also, I would guess that most of us believe that there are attentional filters on what gets learned - not *every* example gets to be learned upon. --------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I think most definitions start with rote learning, which is memorization. Thus I don't see how you can call learning memoryless. Even implicit learning requires storage of the skill in your synaptic strengths ("weights"), which is one version of what "implicit memory" is. ------------------------------------------------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: It might interest you to look at the following paper: @Article{mcclelland95, author = "J. L. McClelland and B. L. McNaughton and R. C. O'Reilly", title = "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory.", journal = "Psych. Rev.", year = 1995, volume = 102, pages = "419--457" } ----------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: some comments concerning your posting: > > What are the real implications of this study? One of the most > important facts is that although both groups had identical >training sessions, they had different levels of learning >of the motor task because of what they did subsequent to >practice. From this fact alone one can conclude with some >degree of certainty that real-time, instantaneous learning >is not used for learning motor skills. .... > So real-time, instantaneous and permanent >weight-adjustment > (real-time learning) is contradictory to the results here. I do not get your point. Assume both groups learned the same level of performance. Now they subsequently do something else. One group learns a new motor skill which interferes with the previously learned motor skill in short term motor memory. The other group does unrelated tasks (clearly nothing comparable to Reza's manipulandum task), and this group does not have interference with the short term memory. Why does this exclude real-time learning?
The later consolidation process that puts STM into LTM is not relevant to this question. > Second, from a broader behavioral perspective, all types of > "learning" by the brain involves collection and storage of > information prior to actual learning. As is well known, the > fundamental process of learning involves: (1) collection and > storage of information about a problem, (2) examination of the > information at hand to determine the complexity of the >problem, (3)development of trial solutions (nets) for the >problem, (4) >testing of trial solutions (nets), (5) discarding such >trial solutions (nets) if they are not good enough, and >(6) repetition of these processes until an acceptable >solution is found. Real-time learning is not compatible >with these learning processes. Why would you make this statement about the brain? Nobody really understands how learning in the brain works, and just because the neural network community has this procedure to deal with the bias-variance dilemma, I would not believe that this is the only way to achieve good learning results. We actually worked on a learning algorithm for a while which can achieve incremental learning without all these steps you enumerated. All it needed was a smoothness bias. (ftp://ftp.cc.gatech.edu/pub/people/sschaal/schaal-NC97.ps.gz) > One has to remember that the essence of learning is >generalization. In order to generalize well, one has to >look at the whole body of information relevant to a >problem, not just bits and pieces of the information at a >time as in real-time learning. So the argument against >real-time learning is simple: one cannot >learn (generalize) unless one knows what is there to learn >(generalize). One finds out what is there to learn >(generalize) by collecting and storing information about >the problem. In other >words, no system, biological or otherwise, can prepare >itself to learn (generalize) without having any >information about what is to be learnt (generalized). You are right in saying that one needs prior information for generalization. However, there are classes of problems where general priors will be sufficient to generalize. Nobody can do extrapolation without having strong domain knowledge. But you might be able to do safe interpolation with some generic biases, which nature may have developed. Again, the above paper talks about related topics. > Another fact from the study that is highly significant is >that the brain takes time to learn. Learning is not quick >and instantaneous. But this may depend on the task. Other tasks can be acquired more quickly. I assume it is safe to say that the biological system is only able to learn certain tasks very quickly, and others not. This is why playing good golf or tennis is so hard. But learning to balance a pole happens quite quickly in humans. Interesting arguments, but I do not see how you can make any of your claims about real-time learning. What is your counter-hypothesis? Pure memory-based learning? ---------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I have a small doubt! I think real-time learning need not be memoryless. As an example (in the context of my work) I would say that it is possible to evolve a learning rule for a neural network along with the structures. That is, the coevolution of structure and learning is possible.
The structures eventually implement a long term memory! ------------------------------------------------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: After getting your text from cogsci, I forwarded it to a colleague who works in the theatre world. I'm forwarding you his "gut" reactions. I am working on a neuro-vision of Cognitive Sci, and he is in contemporary theatre. For some time we've been working together on topics of motor planning, etc. If you want to know more of what we do, have a look at our web page: http://user.orbit.net.mt/josch/... I've added (enclosed in [**....**]) some comments as reading notes! Keep in touch...your ideas are exciting to us! From: "Dr John Schranz" To: "Glyn Goodall" 2. Real time learning - WOW! That is right home! And we have much much to say there. For one thing, but I have to think deeply on this, "imitation" in its common understanding is shaken. I can only DO something as *I* see that it "is". Which means that at any one moment of real time I am seeing that thing differently to what it "is" .. and I am more truly engaged in: > >(1) collection and storage of information about a problem, >(2) examination of the information at hand to determine the >complexity of the problem, >(3) development of trial solutions (nets) for the problem, >(4) testing of trial solutions (nets), >(5) discarding such trial solutions (nets) if they are not >good enough, and (6) repetition of these processes until >an acceptable solution is found." Those are much more in the street of the way I understand learning to be, which is PRIMARILY the identification of the problems which are envisaged whenever one sees oneself entering into ANY SORT OF RELATIONSHIP - whether it be with an object (animate or not, it's the same thing) or with a subject (human or not, it's the same thing) ... IN EACH CASE THAT ENVISAGED *RELATIONSHIP* IS SEEN TO BE FRAUGHT WITH POTENTIAL SLIPS (imaginary or not, it's the same thing; minor or not, it's the same thing)... AND WHAT WE CALL *TASKS* ARE, PRECISELY, THE NAVIGATION OF THOSE POTENTIAL SLIPS. This is what Frank [** Cammileri **] and I mean when we address what we refer to as "Alterity"... "Otherness"... "Difference".... in a course of lectures we have given here [** University of Malta **] and at other universities abroad. >Real-time learning is not compatible with these learning >processes. One has to remember that the essence of >learning is generalization. In order to generalize well, >one has to look at the >whole body of information relevant to a problem, not just >bits and pieces of the information at a time as in >real-time learning. Precisely. That is nearly verbatim (in my reading at least) what I have just expounded on above. >So the argument against real-time learning is simple: one cannot >learn (generalize) unless one knows what is there to learn >(generalize). And by "knows" as used above I understand "one only knows by relating that which IS IN one's own experience ALREADY to that which as yet is not ...and we use 'knows' in specifically THIS sense".... And that is "to generalise" .. or in other words to "analogise", "AS IF *this* thing (which I do not 'know') were *that* thing which is in my experience"... >One finds out what is there to learn (generalize) by >collecting and storing information about the problem.
In >other words, no system, biological or otherwise, can >prepare itself to learn (generalize) without having any >information about what is to be learnt (generalized). Precisely what I have just said ... or, anyway, that's how *I* see it!!!!! Then, of course, comes the big discourse on partituras, or scores, as fragments [** sequences of actions that make up a theatrical presentation - which are regularly reworked, and improvised upon, during the entire training and rehearsal period, and even during the performance **] which one learns as such ... in order to then play about with them. The entire discourse of variations and "improvisations" is opened out. The paper I've just written for Wales [** for a conference about the Mime, Ducroux **] taps this quite well ... and the cross topics between the two are VERY interesting... ---------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: London, Canada I enjoyed reading your summary and commentary of the Shadmehr and Holcomb [1997] Science article. Please stay tuned for two technical reports myself and Bob Mercer, at Univ. of Western Ontario, are writing on the functional consolidation and transfer of neural network task knowledge through the use of a Task Rehearsal Mechanism or TRM. By its very nature, TRM assumes that there are long term and short term centres for learning, as has been the thesis of numerous researchers (for example see J.L. McClelland, B.L. McNaughton, and R.C. O'Reilly, 1994). The TRM relies on long-term memory for the production of virtual examples of previously learned task knowledge (background knowledge). A functional transfer method is then used to selectively bias the learning of a new task which is developed in short-term memory. The representation of this short term memory is then transferred to long-term memory where it can be used for learning yet another new task in the future. Notice that explicit examples of a new task need not be stored in long-term memory, only the representation of the task, which can later be used to generate virtual examples. These virtual examples can be used to rehearse previously learned tasks in concert with a new "related" task. The TRM theory has inspired the development of a system, and a series of experiments which will be discussed in the reports. Consolidation of new task knowledge into a representationally efficient long-term memory is not explicitly addressed; however, one has to assume that this process requires time and energy. If that time and energy are interrupted ... well, it makes sense that the learner may suffer in the context of life-long learning. This agrees with the findings of Shadmehr and Holcomb. See also S.R. Quartz and T.J. Sejnowski, 1996, for an article which has very interesting related information on a potential biological mechanism for CNS learning and consolidation. Ref: J.L. McClelland, B.L. McNaughton, and R.C. O'Reilly, "Why there are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory", CMU Technical Report PDP.CNS.94.1, March 1994 S.R. Quartz and T.J. Sejnowski, "The neural basis of cognitive development: A constructivist manifesto", a BBS target article accepted for publication by Cambridge University Press, 1996.
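A minimal sketch of the rehearsal idea (my own illustration under stated assumptions, not the authors' TRM code: the old network is assumed to be any callable mapping inputs to outputs, and uniform random probing of the input space is assumed): virtual examples of a previously learned task are generated from the stored long-term representation and mixed with the new task's data, so the old mapping is rehearsed without storing any of its original training examples.

import numpy as np

def make_virtual_examples(old_net, n_examples, n_inputs, rng):
    """Generate virtual examples of a previously learned task: random inputs
    are labelled by the stored long-term representation (the old network),
    so no original training examples need to be kept."""
    X = rng.uniform(-1.0, 1.0, size=(n_examples, n_inputs))
    y = np.array([old_net(x) for x in X])
    return X, y

def rehearsal_training_set(old_net, X_new, y_new, rng, ratio=1.0):
    """Mix new-task data with virtual examples of the old task, so that
    training on the new task also rehearses (and is biased toward) the old one."""
    n_virtual = int(ratio * len(X_new))
    X_old, y_old = make_virtual_examples(old_net, n_virtual, X_new.shape[1], rng)
    X = np.vstack([X_new, X_old])
    y = np.concatenate([y_new, y_old])
    return X, y

The ratio parameter controls how strongly the old task biases learning of the new one; the TRM's functional transfer is more selective than this uniform mixing, so treat the sketch only as an illustration of the virtual-example idea.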
------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: An alternative hypothesis (which allows for real-time learning): >What are the real implications of this study? One of the most >important facts is that although both groups had identical >training sessions, they had different levels of learning of the >motor task because of what they did subsequent to practice. From >this fact alone one can conclude with some degree of certainty >that real-time, instantaneous learning is not used for learning >motor skills. How can one say that? One can make that conclusion >because if real-time learning was used, there would have been >continuous and instantaneous adjustment of the synaptic strengths >or connection weights during practice in whatever net the brain was >using to learn the motor task. This means that all persons trained >in that particular motor task should have had more or less the >same "trained net," performance-wise, at the end of that training >session, regardless of what they did subsequently. (It is assumed >here that the task was learnable, given enough practice, and that >both groups had enough practice.) With complete, permanent >learning (weight-adjustments) from "real-time learning," there >should have been no substantial differences in the learnt skill >between the two groups resulting from any activity subsequent to >practice. But this study demonstrates the opposite, that there >were differences in the learnt skill simply because of the nature >of subsequent activity. So real-time, instantaneous and permanent >weight-adjustment (real-time learning) is contradictory to the >results here. The results are not contradictory to the idea of real-time learning, if one can assume that the second group was updating (learning in) the same part of the brain (network) during the second training period as the first. It is well demonstrated in most modalities that subsequent 'interference' training will degrade performance on a newly learned skill. From the little bit I know of neural networks, I'm guessing that the same could be shown with models as well. The point is, if two groups of networks (or brains) are trained in an identical fashion, and then one group is trained with a new skill in the same modality, the initial learning will be 'overwritten' to some extent in that group. The same sets of weights will need to be updated. Remember also that PET results are showing _relative_ blood flow, so it cannot be assumed that the cerebellar activity seen after learning was not present during the learning. On the contrary, it was almost certainly necessary for the motor activity to take place. The difference was that the frontal cortex was also highly active, presumably facilitating learning in the motor pathways. Once a subject reached a certain level of proficiency with the task, there would be less need for the frontal cortex to reinforce/facilitate the motor cortex activity, and those (motor) areas would appear to be most active. ------------------------------------------------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Thessaloniki My background is in engineering and learning.
Since the nature of neural computing is interdisciplinary, I've found the experiments and results by Shadmehr and Holcomb [1997] appealing. In the following I am attempting to enhance the question posed. A system with explicit memory capacity is an interesting approach to learning. But I am wondering: isn't that what we really do when, for many neurocomputing paradigms (back-propagation etc.), the data are stored in the memory and are fed again and again until a satisfactory level of performance is achieved? That is, the training set is employed as if it had been stored in the memory of the system. It is true that the essence of learning is generalization. As the cost of both memory and computing time is dropping, it is all the more likely that in the future we will see systems with on-board "memory capacity" with an enhanced learning capacity. Nevertheless learning and behavior will probably not improve dramatically by only adding memory. This is because with on-board memory we will simply be doing faster and more efficiently what we are doing already. My proposition is that, perhaps, brain-like learning behavior could be simulated by changing the type of data we operate on. That is, usually a learning example considers one type of data, and typically from the Euclidean space. Other types of data have also been considered, such as propositional statements. But it is very likely that only some type of hybrid information handling system could simulate the brain convincingly. However, when dealing with disparate data, a "problem" is that such data are usually not handled with mathematical consistency. Hence such issues as "convergence in the limit" are not meaningful. The practical advantage of mathematically consistent hybrid-learning is that such learning could lead to reliable learning models and learning machines with an anticipated behavior. In this context we have treated partially ordered sets, in particular lattices, defined a mathematical metric, and obtained some remarkable learning results with various benchmark data sets. In conclusion it seems to us that a sophisticated learning behavior is only in part a question of memory. It is moreover a question of the type of data being processed. ------------------------------------------------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: > One of the fundamental beliefs in neuroscience, cognitive science > and artificial neural networks is that the brain learns in > real-time. That is, it learns instantaneously from each and every > learning example provided to it by adjusting the synaptic >strengths or connection weights in a network of neurons. The >learning is generally thought to be accomplished using a >Hebbian-style mechanism or some other variation of the idea (a >local learning law). In these scientific fields, real-time >learning also implies memoryless learning. That is a non sequitur. Why should real-time learning imply that there is not also memory-based learning? Both types of 'learning' are surely desirable for higher cognitive behaviour in a real-time environment. By 'learning', I just mean 'connection weight adjustment'.
In memory-based learning in our brains, it would be impossible to store all parameters of an event or training sample as they occur - but obviously we store some transformed, compacted version of events (if we didn't, then we would not have long-term memories), and equally obviously, this is available for reference for learning at a later time (if not, then we would be unable to learn from our long-term memories).
>In memoryless learning, no training examples are stored explicitly >in the memory of the learning system, such as
What does "explicitly" mean in this context?
------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
First, I'd like to point out that hardly anybody considers instantaneous learning in the way you define it. The state of any network will depend on the environment to which it was exposed. I'd say that from what you discuss one cannot discriminate between real-time and delayed learning unless some more analysis is done. Since the two groups performed different tasks after the test, of course they will end up with different networks, memory or not. So in particular, it does not mean that
> all persons trained in that particular motor task should have had >more or less the same "trained net," performance-wise, at the end >of that training session, regardless of what they did >subsequently.
> What are the real implications of this study? One of the most > important facts is that although both groups had identical >training sessions, they had different levels of learning of the >motor task because of what they did subsequent to practice. From >this fact alone one can conclude with some degree of certainty >that real-time, instantaneous learning is not used for learning >motor skills. How can one say that? One can make that conclusion >because if real-time learning was used, there would have been >continuous and instantaneous adjustment of the synaptic strengths >or connection weights during practice in whatever net the brain >was using to learn the motor task. This means that all persons >trained in that particular motor task should have had more or less >the same "trained net," performance-wise, at the end of that >training session, regardless of what they did subsequently. (It is >assumed here that the task was learnable, given enough practice, >and that both groups had enough practice.) With complete, >permanent learning (weight-adjustments) from "real-time learning," >there should have been no substantial differences in the learnt >skill between the two groups resulting from any activity >subsequent to practice. But this study demonstrates the opposite, >that there were differences in the learnt skill simply because of >the nature of subsequent activity. So real-time, instantaneous and >permanent weight-adjustment (real-time learning) is contradictory >to the results here.
I'd like to disagree with you again. There are numerous examples of networks without explicit memory (e.g. any BP net) which generalize pretty well. This is a consequence of their general approximator property.
> One has to remember that the essence of learning is >generalization. In order to generalize well, one has to look at >the whole body of information relevant to a problem, not just bits >and pieces of the information at a time as in real-time learning.
>So the argument against real-time learning is simple: one cannot >learn (generalize) unless one knows what is there to learn >(generalize). One finds out what is there to learn (generalize) >by collecting and storing information about the problem. In other >words, no system, biological or otherwise, can prepare itself to >learn (generalize) without having any information about what is to >be learnt (generalized).
I don't think anyone has claimed otherwise. We all agree it is most likely a statistical process and it needs many examples for learning.
-------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
First of all, you should put these ideas in a paper or tech report and let people access it only if interested. Second, loose arguments that switch wildly between learning of motor skills and language/verbal learning indicate poor understanding of the mechanisms involved. The stuff you posted today is almost as ridiculous as any other oversimplified explanation of human learning, be it the symbol system hypothesis or the Hebbian one or the Chomskian one. Good science happens from precise experiments that conclusively reject or accept a well-defined hypothesis. Terrible arguments are all I see in your post.
---------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
> A recent study by Shadmehr and Holcomb [1997] may lend some > interesting insight on how the brain learns. In this study, a > positron emission tomography (PET) device was used to monitor
PET studies are in general irreproducible. I am working on a paper on the subject, and for it I did a survey of the PET and fMRI studies from the beginning of 1997. Of the ~80 articles that I have already read, there isn't even a single case of a study reproducing a previous study.
> "learning" by the brain involves collection and storage of > information prior to actual learning. As is well known, the > fundamental process of learning involves: (1) collection and > storage of information about a problem, (2) examination of the > information at hand to determine the complexity of the problem, (3) > development of trial solutions (nets) for the problem, (4) >testing of trial solutions (nets), (5) discarding such trial >solutions (nets) if they are not good enough, and (6) repetition >of these processes until an acceptable solution is found. >Real-time learning is not compatible with these learning >processes.
It is not at all 'well known' that the fundamental process of learning involves what you say. You need some argument for this, rather than just asserting it. There are many contrary examples, e.g. a cat learning how to exit from a cage by pulling a lever. This seems to be more relevant to learning by animals, including humans.
> One has to remember that the essence of learning is >generalization.
This assertion is out of place. Learning is changing of behaviour in a consistent and productive way (by some standard).
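Several of the replies above contrast a back-propagation net whose stored training set is replayed for many epochs with "memoryless" learning in which each example produces one weight update and is then discarded. The following is a minimal sketch of that contrast in Python/NumPy. Everything in it is an illustrative assumption (the toy sin() regression task, the two-layer tanh network, the learning rate and the number of epochs are not taken from any post in this thread); it shows only the mechanical difference between the two regimes, not anything about the brain.

# A toy comparison, under assumed settings, of "memoryless" online learning
# (one update per example, example then discarded) versus memory-based
# learning (the stored training set is replayed for many epochs).
import numpy as np

rng = np.random.default_rng(0)

def init(n_in=1, n_hid=20, n_out=1):
    # Small two-layer tanh network; sizes are arbitrary illustrative choices.
    return [rng.normal(0, 0.5, (n_in, n_hid)), np.zeros(n_hid),
            rng.normal(0, 0.5, (n_hid, n_out)), np.zeros(n_out)]

def forward(params, x):
    W1, b1, W2, b2 = params
    h = np.tanh(x @ W1 + b1)
    return h, h @ W2 + b2

def sgd_step(params, x, y, lr=0.05):
    # One gradient step on the mean squared error over the batch x, y.
    W1, b1, W2, b2 = params
    h, yhat = forward(params, x)
    err = yhat - y
    dW2, db2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1.0 - h**2)      # backprop through tanh
    dW1, db1 = x.T @ dh, dh.sum(0)
    for p, g in zip(params, (dW1, db1, dW2, db2)):
        p -= lr * g / len(x)
    return params

def mse(params, x, y):
    return float(np.mean((forward(params, x)[1] - y) ** 2))

# Toy function-approximation task (purely illustrative).
x_tr = rng.uniform(-3, 3, (40, 1)); y_tr = np.sin(x_tr)
x_te = rng.uniform(-3, 3, (200, 1)); y_te = np.sin(x_te)

# (a) Memoryless: a single pass, each example used once and then "forgotten".
net_online = init()
for xi, yi in zip(x_tr, y_tr):
    net_online = sgd_step(net_online, xi[None, :], yi[None, :])

# (b) Memory-based: the stored training set is replayed for many epochs.
net_replay = init()
for _ in range(5000):
    net_replay = sgd_step(net_replay, x_tr, y_tr)

print("test MSE after a single memoryless pass:", mse(net_online, x_te, y_te))
print("test MSE after 5000 epochs of replay  :", mse(net_replay, x_te, y_te))

With settings like these, the single-pass network usually ends up with a noticeably larger test error than the replayed one, which is the sense in which the stored training set is acting as explicit memory.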
------------------------------------------------------------ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
Did you consider the possibility that the training on the second task could have affected the same synapses used for the first task? If this happened, it could be another explanation of what has been observed. To check which hypothesis is the right one, I would suggest repeating the experiment in a slightly different way: the second group should be trained on the second task not immediately after the first task but, for example, ten hours later. Now the following two things can happen: 1) People from this group still have reduced levels of skill on the first task. 2) They perform well on both tasks. In the first case maybe the same synapses have been used to learn both tasks, with or without the temporary and permanent kinds of memory you described. So the new information could have been (partially) written over the previous one. In the second case I can see two choices: yours is the first one. Here is the second one: the brain seems to have a modular and hierarchical structure. Let's assume that different experiences are stored in different modules; so, when there is something new to remember, the "brain's operating system" has to find an "empty" module to store the information. This fact, if it should be real, could explain why the younger a person is, the less time he takes to learn new things: in his brain the number of "empty" modules would be much greater than the number of "full" modules, so it should be quite fast to find the right place. A way to check this hypothesis would be to run the test you wrote about and to consider the age of the people involved. If the time the information takes to move from temporary storage to permanent storage is roughly independent of age, what I wrote can be thrown away. P.S.: Considering I am still a student, please do not take what I wrote too seriously!
------------------------------------------------------------ APPENDIX
COULD THERE BE REAL-TIME, INSTANTANEOUS LEARNING IN THE BRAIN?
One of the fundamental beliefs in neuroscience, cognitive science and artificial neural networks is that the brain "learns" in real-time. That is, it learns "instantaneously" from each and every learning example provided to it by adjusting the synaptic strengths or connection weights in a network of neurons. The learning is generally thought to be accomplished using a Hebbian-style mechanism or some other variation of the idea (a local learning law). In these scientific fields, real-time learning also implies memoryless learning. In memoryless learning, no training examples are stored explicitly in the memory of the learning system, such as the brain. It can use any particular training example presented to it to adjust whatever network it is learning in, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. This section looks at the possibility of real-time learning in the brain from two different perspectives. First, some factual behavioral evidence from a recent neuroscience study on learning of motor skills is examined. Second, the idea of real-time learning is examined from a broader behavioral perspective. A recent study by Shadmehr and Holcomb [1997] may lend some interesting insight on how the brain learns.
In this study, a positron emission tomography (PET) device was used to monitor neural activity in the brain as subjects were taught and then retested on a motor skill. The task required them to manipulate an object on a computer screen by using a motorized robot arm. It required making precise and rapid reaching movements to a series of targets while holding the handle of the robot. And these movements could be learned only through practice. During practice, the blood flow was most active in the prefrontal cerebral cortex of the brain. After the practice session, some of the subjects were allowed to do unrelated routine things for five to six hours and then retested on their recently acquired motor skill. During retesting of this group, it was found that they had learned the motor skill quite well. But it was also found that the blood flow now was most active in a different part of the brain, in the posterior parietal and cerebella areas. The remaining test subjects were trained on a new motor task immediately after practicing the first one. Later, those subjects were retested on the first motor task to find out how much of it they had learnt. It was found that they had reduced levels of skill (learning) on the first task compared to the other group. So Shadmehr and Holcomb [1997] conclude that after practicing a new motor skill, it takes five to six hours for the memory of the new skill to move from a temporary storage site in the front of the brain to a permanent storage site at the back. But if that storage process is interrupted by practicing another new skill, the learning of the first skill is hindered. They also conclude that the shift of location of the memory in the brain is necessary to render it invulnerable and permanent. That is, it is necessary to consolidate the motor skill. What are the real implications of this study? One of the most important facts is that although both groups had identical training sessions, they had different levels of learning of the motor task because of what they did subsequent to practice. From this fact alone one can conclude with some degree of certainty that real-time, instantaneous learning is not used for learning motor skills. How can one say that? One can make that conclusion because if real-time learning was used, there would have been continuous and instantaneous adjustment of the synaptic strengths or connection weights during practice in whatever net the brain was using to learn the motor task. This means that all persons trained in that particular motor task should have had more or less the same "trained net," performance-wise, at the end of that training session, regardless of what they did subsequently. (It is assumed here that the task was learnable, given enough practice, and that both groups had enough practice.) With complete, permanent learning (weight-adjustments) from "real-time learning," there should have been no substantial differences in the learnt skill between the two groups resulting from any activity subsequent to practice. But this study demonstrates the opposite, that there were differences in the learnt skill simply because of the nature of subsequent activity. So real-time, instantaneous and permanent weight-adjustment (real-time learning) is contradictory to the results here. Second, from a broader behavioral perspective, all types of "learning" by the brain involves collection and storage of information prior to actual learning. 
As is well known, the fundamental process of learning involves: (1) collection and storage of information about a problem, (2) examination of the information at hand to determine the complexity of the problem, (3) development of trial solutions (nets) for the problem, (4) testing of trial solutions (nets), (5) discarding such trial solutions (nets) if they are not good enough, and (6) repetition of these processes until an acceptable solution is found. Real-time learning is not compatible with these learning processes. One has to remember that the essence of learning is generalization. In order to generalize well, one has to look at the whole body of information relevant to a problem, not just bits and pieces of the information at a time as in real-time learning. So the argument against real-time learning is simple: one cannot learn (generalize) unless one knows what is there to learn (generalize). One finds out what is there to learn (generalize) by collecting and storing information about the problem. In other words, no system, biological or otherwise, can prepare itself to learn (generalize) without having any information about what is to be learnt (generalized). Learning of motor skills is no exception to this process. The process of training is simply to collect and store information on the skill to be learnt. For example, in learning any sport, one not only remembers the various live demonstrations given by an instructor (pictures are worth a thousand words), but one also remembers the associated verbal explanations and other great words of advise. Instructions, demonstrations and practice of any motor skill are simply meant to provide the rules, exemplars and examples to be used for learning (e.g. a certain type of body, arm or leg movement in order to execute a certain task). During actual practice of a motor skill, humans not only try to follow the rules and exemplars to perform the actual task, but they also observe and store new information about which trial worked (example trial execution of a certain task) and which didn't. One only ought to think back to the days of learning tennis, swimming or some such sport in order to verify information collection and storage by humans to learn motor skills. It shouldn't be too hard too explain the "loss of skill" phenomenon, from back-to-back instructions on new motor skills, that was observed in the study. The explanation shouldn't be different from the one for the "forgetting of instructions" phenomenon that occurs with back-to-back instructions in any learning situation. A logical explanation perhaps for the "loss of motor skill" phenomenon, as for any other similar phenomenon, is that the brain has a limited amount of working or short term memory. And when encountering important new information, the brain stores it simply by erasing some old information from the working memory. And the prior information gets erased from the working memory before the brain has the time to transfer it to a more permanent or semi-permanent location for actual learning. So "loss of information" in working memory leads to a "loss of skill." Another fact from the study that is highly significant is that the brain takes time to learn. Learning is not quick and instantaneous. Reference: Shadmehr, R. and Holcomb, H. (August 1997). "Neural Correlates of Motor Memory Consolidation." Science, Vol. 277, pp. 821-825. 
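The 'interference' reading offered earlier in this thread - the second task being learned in the same weights as the first and partly overwriting it - is easy to reproduce in a toy model. The sketch below is a minimal Python/NumPy illustration under purely assumed conditions: a single linear network trained with online (real-time) delta-rule updates on two arbitrary random mappings that stand in for the two motor tasks. It is in no way a model of the Shadmehr and Holcomb experiment; it only shows that purely real-time weight adjustment is compatible with the observed pattern (good performance on task A after practice, degraded performance on task A after immediately practicing task B), so the behavioral result by itself does not decide between the competing accounts above.

# Toy demonstration, under assumed settings, that online (real-time) weight
# updates in a single shared network reproduce the qualitative pattern in the
# study: task A is learned well, and immediately practicing task B in the
# same weights degrades task A.  Not a model of the actual experiment.
import numpy as np

rng = np.random.default_rng(1)

def train_online(w, X, Y, lr=0.05, steps=1500):
    # Plain online delta-rule updates on a linear map Y ~ X @ w:
    # each randomly drawn example is used once for an update, then discarded.
    for _ in range(steps):
        i = rng.integers(len(X))
        x, y = X[i], Y[i]
        w -= lr * np.outer(x, x @ w - y)
    return w

def err(w, X, Y):
    return float(np.mean((X @ w - Y) ** 2))

# Two arbitrary input-output mappings standing in for the two motor tasks.
X = rng.normal(size=(100, 10))
YA = X @ rng.normal(size=(10, 3))     # "task A"
YB = X @ rng.normal(size=(10, 3))     # "task B"

w = np.zeros((10, 3))                 # one shared set of weights
w = train_online(w, X, YA)
print("task A error after practicing A:", err(w, X, YA))

w = train_online(w, X, YB)            # immediate practice of task B
print("task A error after then practicing B:", err(w, X, YA))
print("task B error after practicing B:", err(w, X, YB))
------------------------------------------------------------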
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
a day; shuttles for the city centre each 30 minutes. By train: regular connections to Toulouse Railway Station by TGV (High Speed Train) several times a day. Bus facilities for the University. Individual delegates will arrange their own transport to their hotel. Details of bus times will be given in the last announcement. Rent-a-car services: Several companies offer car rentals at both Paris and Toulouse-Blagnac airports. It takes about 6 hours to drive from Paris to Toulouse (705 km). Please make arrangements through your travel agent. Visa: Please check with your local travel agent to see if a visa is necessary. Delegates are responsible for their own visa arrangements. Climate and Clothing: The meeting season is winter. The climate can be variable (from 4° to 15°C) in the area (oceanic temperate climate). Bring a raincoat, an umbrella, one or two sweaters or pull-overs. Third and Final Announcement: Those who have responded to the Second Announcement will receive the Third Announcement due out in September 1998.
CONFERENCE COSTS: Early registration (by 31 August 1998): Ordinary delegates: 1,500 FF (US$ 250) Students: 1,000 FF (US$ 170) Accompanying people: 500 FF (US$ 85) Full registration (transferred order after 31 August 1998): Ordinary delegates: 1,800 FF (US$ 300) Students: 1,300 FF (US$ 210) Accompanying people: 800 FF (US$ 130) Please note: Fees must be sent by international transfer order to: ANN Model Toulouse, LEK S. Caisse d'Epargne de Midi-Pyrénées 5 av. Pierre Coupeau 31130 Balma - France Bank Account: 13135 00080 04084374754 24 Prospective delegates with financial queries should address these to Drs. Sovan Lek or Jean-François Guégan when submitting abstracts and/or registration intent. Your registration fee includes the following: access to full abstract proceedings; conference kits; morning and afternoon tea or coffee, Monday to Wednesday; lunches, Monday to Thursday; Conference party (Wednesday night, December 16th); Conference excursion (Thursday, December 17th). The fee for accompanying people includes lunches, coffee breaks, closing banquet and excursion.
ACCOMMODATION (establishment / price§ / address / telephone / fax):
Wilson Trianon** (15 rooms) FF. 240 7, rue Lafaille - 31000 Toulouse 33 5 61 62 74 74 33 5 61 99 15 44
Wilson Square** (15 rooms) FF. 240 12, rue d'Austerlitz - 31000 Toulouse 33 5 61 21 67 57 33 5 61 21 16 23
Hôtel de France** (25 rooms) FF. 265 5, rue d'Austerlitz - 31000 Toulouse 33 5 61 21 88 24 33 5 61 21 99 77
Des Arts* (14 rooms) FF. 170 1bis, rue Cantegril - 31000 Toulouse 33 5 61 62 77 59 33 5 61 12 22 37
Splendid* (14 rooms) FF. 120 13 rue Caffarelli - 31000 Toulouse 33 5 61 62 43 02 33 5 61 40 52 76
IAS Center (30 rooms) FF. 165 23, avenue E. Belin - 31028 Toulouse Cedex 4 33 5 62 17 33 09 33 5 61 55 33 85
§ Reduced fares for participants to the meeting. Please mention your participation in the workshop when booking. All prices are given in French Francs, and are for Bed & Breakfast per night. Please book directly through the addresses indicated.
SOCIAL PROGRAMME Conference Excursion on Thursday, December 17 1998 (costs included in registration fee): o Travel to Carcassonne, a famous Medieval City in the South of France.
Departure at 9:30 a.m. o Lunch o Visit of the Medieval City o Visit of a wine cellar at the end of the afternoon o Dinner and return to Toulouse in the night
---------------------------------------------------------------- Sovan LEK E-mail: lek at cict.fr Doc. Habil. Fish & Non-linear Modelling CNRS - UMR 5576 Tel. : (33) 5 61 55 86 87 CESAC - Bat. 4R3 Fax : (33) 5 61 55 60 96 Univ. Paul Sabatier 118 route de Narbonne 31062 Toulouse cedex France
From david.brown at bbsrc.ac.uk Mon Jun 5 16:42:55 2006 From: david.brown at bbsrc.ac.uk (david.brown) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: <980330143240.16748@mserv.iapc.bbsrc.ac.uk.0>
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
"This book will be very useful for mathematics and engineering students interested in a modern and rigorous systems course, as well as for experts in control theory and applications" --- Mathematical Reviews "An excellent book... gives a thorough and mathematically rigorous treatment of control and system theory" --- Zentralblatt fur Mathematik "The style is mathematically precise... fills an important niche... serves as an excellent bridge (to topics treated in traditional engineering courses). The book succeeds in conveying the important basic ideas of mathematical control theory, with appropriate level and style" --- IEEE Transactions on Automatic Control
Chapter and Section Headings: Introduction What Is Mathematical Control Theory?
Proportional-Derivative Control Digital Control Feedback Versus Precomputed Control State-Space and Spectrum Assignment Outputs and Dynamic Feedback Dealing with Nonlinearity A Brief Historical Background Some Topics Not Covered Systems Basic Definitions I/O Behaviors Discrete-Time Linear Discrete-Time Systems Smooth Discrete-Time Systems Continuous-Time Linear Continuous-Time Systems Linearizations Compute Differentials More on Differentiability Sampling Volterra Expansions Notes and Comments Reachability and Controllability Basic Reachability Notions Time-Invariant Systems Controllable Pairs of Matrices Controllability Under Sampling More on Linear Controllability Bounded Controls First-Order Local Controllability Controllability of Recurrent Nets Piecewise Constant Controls Notes and Comments Nonlinear Controllability Lie Brackets Lie Algebras and Flows Accessibility Rank Condition Ad, Distributions, and Frobenius' Theorem Necessity of Accessibility Rank Condition Additional Problems Notes and Comments Feedback and Stabilization Constant Linear Feedback Feedback Equivalence Feedback Linearization Disturbance Rejection and Invariance Stability and Other Asymptotic Notions Unstable and Stable Modes Lyapunov and Control-Lyapunov Functions Linearization Principle for Stability Introduction to Nonlinear Stabilization Notes and Comments Outputs Basic Observability Notions Time-Invariant Systems Continuous-Time Linear Systems Linearization Principle for Observability Realization Theory for Linear Systems Recursion and Partial Realization Rationality and Realizability Abstract Realization Theory Notes and Comments Observers and Dynamic Feedback Observers and Detectability Dynamic Feedback External Stability for Linear Systems Frequency-Domain Considerations Parametrization of Stabilizers Notes and Comments Optimality: Value Function Dynamic Programming Linear Systems with Quadratic Cost Tracking and Kalman Filtering Infinite-Time (Steady-State) Problem Nonlinear Stabilizing Optimal Controls Notes and Comments Optimality: Multipliers Review of Smooth Dependence Unconstrained Controls Excursion into the Calculus of Variations Gradient-Based Numerical Methods Constrained Controls: Minimum Principle Notes and Comments Optimality: Minimum-Time for Linear Systems Existence Results Maximum Principle for Time-Optimality Applications of the Maximum Principle Remarks on the Maximum Principle Additional Exercises Notes and Comments Appendix: Linear Algebra Operator Norms Singular Values Jordan Forms and Matrix Functions Continuity of Eigenvalues Appendix: Differentials Finite Dimensional Mappings Maps Between Normed Spaces Appendix: Ordinary Differential Equations Review of Lebesgue Measure Theory Initial-Value Problems Existence and Uniqueness Theorem Linear Differential Equations Stability of Linear Equations Bibliography List of Symbols From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From Connectionists-Request at CS.cmu.edu Mon Jun 5 16:42:55 2006 From: Connectionists-Request at CS.cmu.edu (Connectionists-Request@CS.cmu.edu) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Bi-monthly Reminder Message-ID: -------- *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated July 24, 1998 This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. 
CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Mark C. Fuhs --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. 
------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. The files ending with .gz are compressed using the GNU gzip program. In the event that you do not already have gzip, it is available via ftp from "prep.ai.mit.edu" in the "/pub/gnu" directory. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to Access Files from the CONNECTIONISTS Archive --------------------------------------------------- There are two ways to access the CONNECTIONISTS archive: 1. Using your World Wide Web browser. Enter the following location: http://www.cs.cmu.edu/afs/cs/project/connect/connect-archives/ 2. Using an FTP client. a) Open an FTP connection to host FTP.CS.CMU.EDU b) Login as user anonymous with password your username. c) 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: of rules, variables, and dynamic bindings using temporal synchrony. L. Shastri and V. Ajjanagadde (1993). Behavioral and Brain Sciences 16 (3) 417--494. Temporal Synchrony, Dynamic Bindings, and SHRUTI -- a representational but non-classical model of reflexive reasoning. L. Shastri (1996). Behavioral and Brain Sciences 19 (2), 331--337. Robust reasoning: integrating rule-based and similarity-based reasoning. R. Sun, Artificial Intelligence. Vol.75, No.2, pp.241-296. June, 1995. Dave Touretzky mentioned "The Handbook of Brain Theory and Neural Networks" edited by Arbib (MIT Press, 1995). The article on "Structured Connectionist Models" in this handbook lists additional references to work on connectionist symbolic processing. Best Wishes, Lokendra Shastri International Computer Science Institute 1947 Center Street, Suite 600 Berkeley, CA 94704 shastri at icsi.berkeley.edu http://www.icsi.berkeley.edu/~shastri Phone: (510) 642-4274 ext 310 FAX: (510) 643-7684 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: of rules, variables, and dynamic bindings using temporal synchrony. L. Shastri and V. Ajjanagadde (1993). Behavioral and Brain Sciences 16 (3) 417--494. Temporal Synchrony, Dynamic Bindings, and SHRUTI -- a representational but non-classical model of reflexive reasoning. L. Shastri (1996). Behavioral and Brain Sciences 19 (2), 331--337. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: phenomena or patterns of behavior of physical feedback systems (i.e., looking at cognition as essentially a bounded feedback system---bounded under normal conditions, unless the system goes into seizure (explodes mathematically---well, it is still bounded but it tries to explode!), of course.) From this point of view both symbols and fuzziness and every other conceptual representation are neither "true" nor "real" but simply patterns which tend to be, from an information-theoretic point of view, compact and useful or efficient representations. But they are built on a physical substrate of a feedback system, not vice-versa. However, it isn't the symbol, fuzzy or not, which is ultimately general, it is the feedback system, which is ultimately a physical system of course. So, while we may be convinced that your formalism is very good, this does not mean it is more fundamentally powerful than a simulation approach. It may be that your formalism is in fact better for handling symbolic problems, or even problems which require a mixture of fuzzy and discrete logic, etc., but what about problems which are not symbolic at all? What about problems which are both symbolic and non-symbolic (not just fuzzy, but simply not symbolic in any straightforward way?) The fact is, intuitively it seems to me that some connectionist approach is bound to be more general than a more special-purpose approach. This does not necessarily mean it will be as good or fast or easy to use as a specialized approach, such as yours. But it is not at all convincing to me that just because the input space to a connectionist network looks like R(n) in some superficial way, this would imply that somehow a connectionist model would be incapable of doing symbolic processing, or even using your model per se. Mitsu > > > > Two, it may be that the simplest or most efficient > > representation of a given set of rules may include both a continous and a > > discrete component; that is, for example, considering issues such as imprecise > > application of rules, or breaking of rules, and so forth. For example, > > consider poetic speech; the "rules" for interpreting poetry are clearly not > > easily enumerable, yet human beings can read poetry and get something out of > > it. A purely symbolic approach may not be able to easily capture this, > > whereas it seems to me a connectionist approach has a better chance of dealing > > with this kind of situation. > > > > I can see value in your approach, and things that connectionists can learn > > from it, but I do not see that it dooms connectionism by any means. > > See the previous comment. > > Cheers, > Lev From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences 16 (3) 417--494. R. Sun, (1992). On Connectionist variable binding. Connection Science. R. Sun, (1995). Robust reasoning: integrating rule-based and similarity-based reasoning. Artificial Intelligence. Vol.75, No.2, pp.241-296. June, 1995. Lange and Dyer (1989). High Level Inferencing in a Connectionist Network. {\it Connection Science}, 181-217. (See also Lange's chapter in Sun and Bookman (1994).) R. Lacher, S. Hruska, and D. Kunciky, 1992. 
Backpropagation learning in Expert Networks. Technical Report 91-015. Florida State University. also in IEEE TNN. Barnden, Complex Symbol-Processing in Conposit, in: Sun and Bookman (eds.), Architectures incorporating neural and symbolic processes. Kluwer. 1994. J. Barnden and K. Srinivas, Overcoming Rule-Based Rigidity and Connectionist Limitations Through Massively Parallel Case-based Reasoning, {\it International Journal of Man-Machine Studies}, 1992. ------------------------ NATURAL LANGUAGE (Syntactic and semantic processing) Bailey, D., J. Feldman, S. Narayanan, G. Lakoff (1997). Embodied Lexical Development, Proceedings of the Nineteenth Annual Meeting of the Cognitive Science Society COGSCI-97, Aug 9-11, Stanford: Stanford University Press, 1997. J. Henderson. Journal of Psycholinguistic Research, 23(5):353--379, 1994. Connectionist Syntactic Parsing Using Temporal Variable Binding. T. Regier, Cambridge, MA: MIT Press. 1996. The Human Semantic Potential: Spatial Language and Constrained Connectionism, L. Bookman, A Framework for Integrating Relational and Associational Knowledge for Comprehension, in Sun and Bookman (eds.), Architectures incorporating neural and symbolic processes. Kluwer. 1994. S. Wermter, (ed.) Connectionist language processing (?). Springer. ------------------------ LEARNING OF SYMBOLIC KNOWLEDGE (from NNs) Fu, AAAI-91. and IEEE SMC, 1995, 1997. Towell and Shavlik, Machine Learning. 1995. Giles, et al, (1993). in: Connection Science,1993. special issue on hybrid models. (some of these models involve somewhat distributed representation, but that's not the point.) Sun et al (1998), A bottom-up model of skill learning. CogSci'98 proceedings. (Justification: In some instances, such learning/extraction from NNs is better than learning symbolic knowledge directly using symbolic algorithms, in algorithmic or cognitive terms.) ------------------------ RECOGNITION, RECALL Jacobs, A.M. & Grainger, J. (1992). Testing a semistochastic variant of the interactive activation model in different word recognition experiments. Journal of Experimental Psychology: Human Perception and Performance, 18, 1174-1188. Jacobs, A. M., & Grainger, J. (1994). Models of visual word recognition: Sampling the state of the art. Journal of Experimental Psychology: Human Perception and Performance, 20, 1311-1334. McClelland, J. L. & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part I. An account of basic findings. Psychological Review, 88, 375-407. Page, M. & Norris, D. (1998). Modeling immediate serial recall with a localist implementation of the primacy model. In J. Grainger & A.M. Jacobs (Eds.), Localist connectionist approaches to human cognition. Mahwah, NJ.: Erlbaum. ------------------------ MEMORY There are many existing models. See Hintzman 1996 for a review (in Annual Review of Psychology) ------------------------- SKILL LEARNING R. Sun and T. Peterson, A subsymbolic+symbolic model for learning sequential navigation. {\it Proc. of the Fifth International Conference of Simulation of Adaptive Behavior (SAB'98).} Zurich, Switzerland. 1998. MIT Press. R. Sun, E. Merrill, and T. Peterson, A bottom-up model of skill learning. {\it Proc.of 20th Cognitive Science Society Conference}, pp.1037-1042, Lawrence Erlbaum Associates, Mahwah, NJ. 1998. Thompson, Cohen, and Shastri's work (yet to be published, I believe). 
------------------------ I cannot even begin to enumerate all the rationales for using localist models for symbolic processing discussed in these pieces of work. The reasons may include: (1) localist connectionist models are an apt description framework for a variety of cognitive processing (see J. Grainger & A.M. Jacobs (Eds.), Localist connectionist approaches to human cognition. Mahwah, NJ: Erlbaum); (2) the inherent processing characteristics of connectionist models (such as similarity-based processing, which can also be explored in localist models) make them suitable for cognitive processing; (3) learning processes can naturally be applied to localist models (as opposed to learning LISP code), such as gradient descent, EM, etc. (As has been pointed out by many recently, localist models share many features with Bayesian networks. This actually has been recognized very early on; see for example Sun (1990 INNC), Sun (1992), in which a localist network is defined from a collection of hidden Markov models, and the Baum-Welch algorithm was used in learning.) Regards, --Ron p.s. See also the recently published edited collection: R. Sun and F. Alexandre (eds.), {\it Connectionist Symbolic Integration}. Lawrence Erlbaum Associates, Hillsdale, NJ. 1997. ----------------------------------------- Dr. Ron Sun NEC Research Institute 4 Independence Way Princeton, NJ 08540 phone: 609-520-1550 fax: 609-951-2482 email: rsun at cs.ua.edu, rsun at research.nj.nec.com ----------------------------------------- Prof. Ron Sun http://cs.ua.edu/~rsun Department of Computer Science and Department of Psychology phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
"How can a neuron maximize the information it can transduce and match this to its ability to transmit?" Posing this question leads [1] to an unambiguous definition of learning, symbols, input space, and output space based solely on logic and, consequently, probability theory and entropy. Furthermore, it provides an unambiguous computational objective and leads to a neuron or system of neurons that operates inductively. The resulting neural structure is the Hopfield neuron which, regarding optimized transduction, obeys a modified form of Oja's equation for spatial adaptation and performs intradendritic channel delay equilibration of inputs for temporal adaptation. For optimized transmission, the model calls for a subtle adaptation of the firing threshold to optimize its transmission rate. This is not to say that the described model of neural computation is "correct." Correct is in the eyes of the beholder and depends on the application or theoretical goal pursued. However, this example does point out that there are common aspects to all neural computational problems and paradigms that in fact lend themselves to more precise definitions of terms like "learning." These definitions arise more naturally when the perspective of the neuron is taken. That is, it observes its inputs (regardless of how we might represent them), perhaps observes its own outputs, and, using all that it can observe, executes a computational strategy that effects learning in such a manner as to optimize its defined error formulation, formalized objective criterion, or information-theoretic measure.
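For readers who have not met the rule referred to above: the following is a minimal Python/NumPy sketch of the standard Oja rule only; the modified form used in [1] is not reproduced here, and the two-dimensional Gaussian data, the step size and the number of samples are arbitrary illustrative choices. A single linear unit y = w.x updated online with dw = eta * y * (x - y*w) tends, for zero-mean inputs, toward a unit-length weight vector aligned with the first principal component of the input distribution.

# Standard Oja rule (not the modified form of [1]); data and step size are
# arbitrary.  A single linear unit driven by zero-mean inputs converges
# toward the first principal component of the input distribution.
import numpy as np

rng = np.random.default_rng(2)

# Zero-mean 2-D Gaussian inputs with most variance along (1, 1)/sqrt(2).
C = np.array([[3.0, 2.0],
              [2.0, 3.0]])
X = rng.multivariate_normal(mean=[0.0, 0.0], cov=C, size=5000)

w = rng.normal(size=2)
eta = 0.01
for x in X:                        # one online update per input sample
    y = w @ x                      # the unit's output
    w += eta * y * (x - y * w)     # Hebbian term minus Oja's decay term

# Compare with the leading eigenvector of the (known) covariance matrix.
eigvals, eigvecs = np.linalg.eigh(C)
pc1 = eigvecs[:, np.argmax(eigvals)]

print("learned direction:", w / np.linalg.norm(w))
print("first principal component:", pc1)   # may differ from the above by sign

Run as is, the learned direction should agree with the leading eigenvector of the covariance matrix up to sign.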
Any adaptive rule will lead to either (1) a different way of extracting information from its inputs, (2) a different way of generating outputs given the information that has been measured, or (3) both of these. I think that it is a worthwhile goal to try to pursue more rigorous neuron-centric views of the terms used within the neural network community, if for no other reason than to better focus exchanges and debates between members of the community. Bob Fry
[1] "A logical basis for neural network design," in Vol. 3, Techniques and Applications of Artificial Neural Networks, Academic Press 1998.
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
it appears that in neuroscience an important theme on the binding problem (how sensory features are bound together to form a coherent percept) has emerged from his theory. The theory has also started to impact the field of perceptual psychology (e.g. see a Nature paper by M. Usher and N. Donnelly, recently announced on this list). The issue of binding is so fundamental that the final judgement on von der Malsburg's theory is unlikely to be available in the near future. But one would not dispute that his neural network theory has generated major impact on neuroscience. DeLiang Wang
-------------------------------
C. von der Malsburg (1981): "The correlation theory of brain function," Internal Report 81-2. Max-Planck-Institute for Biophysical Chemistry, Gottingen.
P. Milner (1974): "A model for visual shape recognition," Psychological Review, 81, pp. 521-535.
R. Eckhorn et al. (1988): "Coherent oscillations: A mechanism of feature linking in the visual cortex," Biological Cybernetics, 60, pp. 121-130.
C. Gray, P. Konig, A. Engel, and W. Singer (1989): "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties," Nature, 338, pp. 334-337.
W. Phillips and W. Singer (1997), "In search of common foundations for cortical computation," Behavioral and Brain Sciences, 20, pp. 657-722.
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:
neurons 67 R. Segev and E. Ben-Jacob
Application of biological learning theories to mobile robot avoidance and approach behaviors 79 C. Chang and P. Gaudiano
The dynamics of specialization and generalization within biological populations 115 A. J. Spencer, I. D. Couzin and N. R. Franks
FORTHCOMING ARTICLES
Representations of informational properties for complex systems analysis C. Joslyn and G. de Cooman
Optimal control of singular motion H. Nosaka, K. Tsuji and A. Hubler
Social sand piles: purposive agents and self organized criticality S. E. Page, H. Ishii and W. Yi
Synchronization in an array of population dynamic systems D. Postnov, A. Balanov and E. Mosekilde
Dynamics of retinal ganglion cells modeled with hardware coupled nonlinear oscillators A. W. Przybyszewski, P. S. Linsay, P. Gaudiano and C.M. Wilson
Collective choice and mutual knowledge structure D. Richards, B. D. McKay and W. A. Richards
Random walks, fractals and the origins of rainforest diversity Ricard V. Sole and D. Alonso
Language learning and language contact L. Steels
Electrodynamical model of quasi-efficient financial market K.
N. Illinski and A. S. Stepanenko Adaptive agent-driven routing and load balancing in communication networks M. Heusse, D. Snyers, S. Guerin and P. Kuntz On-off intermittency and riddled basins of attraction in a coupled map system J. Laugesen, E. Mosekilde, Yu. L. Maistrenko and V.L. Maistrenko The Journal of Complex Systems is a new journal, published by HERMES Publishing Co. Web page: http://www.santafe.edu/~bonabeau. Subscriptions: hermes at iway.fr, Attn: Subscriptions Dept. _________________ AIMS and SCOPES _________________ The Journal of Complex Systems aims to provide a medium of communication for multidisciplinary approaches, either empirical or theoretical, to the study of complex systems in such diverse fields as biology, physics, engineering, economics, cognitive science and social sciences, so as to promote the cross- fertilization of ideas among all the scientific disciplines having to deal with their own complex systems. By complex system, it is meant a system comprised of a (usually large) number of (usually strongly) interacting entities, processes, or agents, the understanding of which requires the development, or the use of, new scientific tools, nonlinear models, out-of equilibrium descriptions and computer simulations. Understanding the behavior of the whole from the behavior of its constituent units is a recurring theme in modern science, and is the central topic of the Journal of Complex Systems. Papers suitable for publication in the Journal of Complex Systems should deal with complex systems, from an empirical or theoretical viewpoint, or both, in biology, physics, engineering, economics, cognitive science and social sciences. This list is not exhaustive. Papers should have a cross-disciplinary approach or perspective, and have a significant scientific and technical content. ____________________ EDITORIAL BOARD ____________________ Eric Bonabeau (Editor-in-Chief), Santa Fe Institute, USA Yaneer Bar-Yam, NECSI, USA Eshel Ben-Jacob, Tel Aviv University, Israel Jean-Louis Dessalles, Telecom Paris, France Nigel R. Franks, University of Bath, UK Toshio Fukuda, Nagoya University, Japan Paolo Gaudiano, Boston University, USA Alfred Hubler, University of Illinois,Urbana-Champaign, USA Cliff Joslyn, Los Alamos National Laboratory, USA Alan Kirman, GREQAM EHESS, France Erik Mosekilde, The Technical University of Denmark Avidan U. Neumann, Bar-Ilan University, Israel Scott Page, University of Iowa, USA Diana Richards, University of Minnesota, USA Frank Schweitzer, Humboldt University, Berlin, Germany Ricard V. Sole,Universitat Politcnica de Catalunya, Spain Peter Stadler, University of Vienna, Austria Luc Steels, Brussels, Vrije Universiteit Brussel, Belgium Guy Theraulaz, Paul Sabatier University, France Andreas Wagner, Santa Fe Institute, USA Gerard Weisbuch, Ecole Normale Superieure, France David H. Wolpert, NASA Ames Research Center, USA Yi-Cheng Zhang, Universite de Fribourg, Switzerland ______________________________________ INSTRUCTIONS for prospective AUTHORS ______________________________________ Original papers can be submitted as regular papers or as letters. Regular papers report detailed results, whereas letters are short communications, not exceeding 3000 words, that report important new results. Ideally, regular papers should contain between 3000 and 12000 words, but the editors may consider the publication of shorter or longer papers if necessary. Short reviews and tutorials will also be considered. 
Please contact the Editor-in-Chief (bonabeau at santafe.edu) before embarking on a review or tutorial. It is extremely important that papers contain a sufficiently long introduction accessible to a wide readership, and that results be put in broader perspective in the conclusion. However, the main body of a paper should not have a low technical content under the pretext that the journal is multi-disciplinary. The submission of a manuscript will be taken to imply that thematerial is original and has not been and will not be (unless not accepted in the Journal of Complex Systems) submitted in equivalent form for publication elsewhere. Papers will be published in English. The American or the British forms of spelling may be used, but this usage must be consistent throughout the manuscript. Electronic submission is requested. Authors that use LaTex should use the template files which are available at the journal's web page (http://www.santafe.edu/~bonabeau. Authors using word processors should conform to the style given in the sample postscript file (jcs.ps) which can also be downloaded from the journal's web page. A postscript file is sufficient for the first submission. Source files will be requested when the paper is accepted for publication. To submit a paper, upload it by ftp: > ftp ftp.santafe.edu > Name: anonymous > Password: you email address > cd pub/bonabeau/incoming > mput files > bye Files cannot be downloaded from there, and directories cannot be created. After uploading your files, send an email to bonabeau at santafe.edu including a list of all uploaded files, a brief description of the paper and a list of keywords. _______________ SCHEDULE _______________ 4 issues yearly. Subscriptions: hermes at iway.fr, Attn: Subscriptions Dept. From jbower at bbb.caltech.edu Mon Jun 5 16:42:55 2006 From: jbower at bbb.caltech.edu (James M. Bower) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Jim Bower -- 4 messages (and end of thread) Message-ID: MODERATOR'S NOTE: Jim Bower's most recent message to the Connectionists list would not display on some mail readers, due to a glitch with a MIME encapsulation header (my fault, not Jim's.) Therefore I am rebroadcasting that message in a compendium with three other short messages Jim sent to the list yesterday. Since this dialog has gone on for a while now, I would like to end this thread and invite interested parties to communicate amongst themselves by email. 
-- Dave Touretzky, CONNECTIONISTS moderator

================ Message 1 of 4 ================

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

The Isaac Newton Institute for Mathematical Sciences in Cambridge was host to a major international programme entitled "Neural Networks and Machine Learning". Many of the world's leading researchers in the field participated for periods ranging from a few weeks up to six months, and numerous younger scientists also benefited from a variety of conferences and workshops held throughout the programme. The Newton Institute's purpose-designed building provided a superb research environment, as well as an excellent venue for workshops. The first workshop of the six month programme was a two-week NATO Advanced Study Institute on "Generalization in Neural Networks and Machine Learning".
This was heavily over-subscribed and attendance was limited to around 90 by the capacity of the Institute as well as by the desire to maintain an informal, interactive atmosphere. The topic of generalization was chosen as a focal point for the workshop and provided a common theme running through many of the presentations. This book resulted directly from the NATO ASI, and many of the chapters have a significant tutorial component, reflecting the instructional aims of the workshop.

Ordering information:

"Neural Networks and Machine Learning"
Christopher M. Bishop (Ed.)
Springer (ISBN 3-540-64928-X)
Hard cover, 353 pages, NATO ASI Series F, volume 168.

Amazon: http://www.amazon.com/exec/obidos/ASIN/354064928X/qid=931508458/sr=1-3/002-2811424-5515635
Springer: http://www.springer.de/cgi-bin/search_book.pl?isbn=3-540-64928-X
Blackwells: http://bookshop.blackwell.co.uk/

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

Stefano Panzeri, Alessandro Treves, Simon Schultz and Edmund T. Rolls

Short-Term Synaptic Plasticity And Network Behavior
Werner M. Kistler and J. Leo van Hemmen

Synchrony And Desynchrony In Integrate-And-Fire Oscillators
Shannon R. Campbell, DeLiang L. Wang and Ciriyam Jayaprakash

Fast Global Oscillations In Networks Of Integrate-and-Fire Neurons With Low Firing Rates
Nicolas Brunel and Vincent Hakim

Concentration Tuning Mediated By Spare Receptor Capacity In Olfactory Sensory Neurons: A Theoretical Study
Thomas A. Cleland and Christiane Linster

A Computational Model For Visual Selection
Yali Amit and Donald Geman

Associative Memory In A Multi-Modular Network
Nir Levy and David Horn and Eytan Ruppin

Sparse Code Shrinkage: Denoising of Nongaussian Data By Maximum Likelihood Estimation
Aapo Hyvärinen

Improving the Convergence of the Back-Propagation Algorithm Using Learning Rate Adaptation Methods
G. D. Magoulas, M. N. Vrahatis, and G. S. Androulakis

-----

NOTE: Neural Computation is now on-line and issues starting with 11:1 will be available to all for a free trial period:

ON-LINE - http://neco.mitpress.org/
ABSTRACTS - http://mitpress.mit.edu/NECO/

SUBSCRIPTIONS - 1999 - VOLUME 11 - 8 ISSUES

                    USA      Canada*    Other Countries
Student/Retired     $50      $53.50     $84
Individual          $82      $87.74     $116
Institution         $302     $323.14    $336
* includes 7% GST

(Back issues from Volumes 1-10 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.)

MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902.
Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu

-----

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

sciences, 'Statistical Pattern Recognition' shows how closely these fields are related in terms of application. Areas such as database design, artificial neural networks and decision support are common to all. The author also examines the more diverse theoretical topics available to the practitioner or researcher, such as outlier detection and model selection, and concludes each section with a description of the wider range of practical applications and the future developments of theoretical techniques. Providing an introduction to statistical pattern theory and techniques that draws on material from a wide range of fields, 'Statistical Pattern Recognition' is a must for all technical professionals wanting to get up to speed on the recent advances in this dynamic subject area.

Key Features
- Contains descriptions of the most up-to-date pattern processing techniques, including the recent advances in non-parametric approaches to discrimination
- Illustrates the techniques with examples of real-world application studies
- Includes a variety of exercises, from 'open-book' questions to more lengthy projects

Reviews
'... features a 'how to' approach with examples and exercises' - Lavoisier

Contents: Introduction to statistical pattern recognition / Estimation / Density estimation / Linear discriminant analysis / Non-linear discriminant analysis - neural networks / Non-linear discriminant analysis - statistical methods / Classification trees / Feature selection and extraction / Clustering / Additional topics / Measures of dissimilarity / Parameter estimation / Linear algebra / Data / Probability theory.

1999 480pp Paperback ISBN 0 340 74164 3 29.99

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

==============

"Paradigms of Artificial Intelligence" presents a new methodological analysis of the two competing research paradigms of artificial intelligence and cognitive science: the symbolic versus the connectionist paradigm. It argues that much of the discussion put forward for either paradigm misses the point. Most of the arguments in the debates on the two paradigms concentrate on the question whether the nature of intelligence or cognition is properly accommodated by one or the other paradigm.
Opposed to that is the analysis in this book, which concentrates on the question which of the paradigms accommodates the "user" of a developed theory or technique. The "user", who may be an engineer or scientist, has to be able to grasp the theory and to competently apply the methods which are developed. Consequently, besides the nature of intelligence and cognition, the book derives new objectives for future research which will help to integrate aspects of both paradigms to obtain more powerful AI techniques and to promote a deeper understanding of cognition.

The book presents the fundamental ideas of both the symbolic and the connectionist paradigm. Along with an introduction to the philosophical foundations, an exposition of some of the typical techniques of each paradigm is presented in the first two parts. This is followed by the mentioned analysis of the two paradigms in the third part. The book is intended for researchers, practitioners, advanced students, and interested observers of the developing fields of artificial intelligence and cognitive science. Providing accessible introductions to the basic ideas of both paradigms, it is also suitable as a textbook for a subject on the topic at an advanced level in computer science, philosophy, cognitive science, or psychology.

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

=================

The field of artificial intelligence (AI), formally founded in 1956, attempts to understand, model and design intelligent systems. Since the beginning of AI, two alternative approaches were pursued to model intelligence: on the one hand, there was the symbolic approach, which was a mathematically oriented way of abstractly describing processes leading to intelligent behaviour. On the other hand, there was a rather physiologically oriented approach, which favoured the modelling of brain functions in order to reverse-engineer intelligence. Between the late 1960s and the mid-1980s, virtually all research in the field of AI and cognitive science was conducted in the symbolic paradigm. This was due to the highly influential analysis of the capabilities and limitations of the perceptron by [Minsky and Papert, 1969]. The perceptron was a very popular neural model at that time. In the mid-1980s a renaissance of neural networks took place under the new title of connectionism, challenging the dominant symbolic paradigm of AI. The `brain-oriented' connectionist paradigm claims that research in the traditional symbolic paradigm cannot be successful, since symbols are insufficient to model crucial aspects of cognition and intelligence. Since then a debate between the advocates of both paradigms has been taking place, which frequently tends to become polemic in many writings on the virtues and vices of either the symbolic or the connectionist paradigm. Advocates on both sides have often neither appreciated nor really addressed each other's arguments or concerns. Besides this somewhat frustrating state of the debate, the main motivation for writing this book was the methodological analysis of both paradigms, which is presented in part III of this book and which I feel has been long overdue. In part III, I set out to develop criteria which any successful method for building AI systems and any successful theory for understanding cognition has to fulfill.
The main arguments put forward by the advocates on both sides fail to address the methodologically important and ultimately decisive question for or against a paradigm: How feasible is the development of an AI system or the understanding of a theory of cognition? The significance of this question is that it is not only the nature of an intelligent system or the phenomenon of cognition itself which plays the crucial role, but also the human subject who is to perform the design or who wants to understand a theory of cognition. The arguments for or against one of the paradigms have, by and large, completely forgotten the role of the human subject. The specific capabilities and limitations of the human subject to understand a theory or a number of design steps need to be an instrumental criterion in deciding which of the paradigms is more appropriate. Furthermore, the human subject's capabilities and limitations have to provide the guideline for the development of more suitable frameworks for AI and cognitive science. Hence, the major theme of this book is methodological considerations regarding the form and purpose of a theory, which could and should be the outcome of our scientific endeavours in AI and cognitive science.

This book is written for researchers, students, and technically skilled observers of the rapidly evolving fields of AI and cognitive science alike. While the third part puts forward my methodological criticism, parts I and II provide the fundamental ideas and basic techniques of the symbolic and connectionist paradigm respectively. The first two parts are mainly written for those readers who are new to the field, or are only familiar with one of the paradigms, to allow an easy grasp of the essential ideas of both paradigms. Both parts present the kernel of each paradigm without attempting to cover the details of the latest developments, as those do not affect the fundamental ideas. The methodological analysis of both paradigms with respect to their suitability for building AI systems and for understanding cognition is presented in part III.

Available from Springer-Verlag. Price approximately (DEM 98, US$ 49)

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

Ron Sun, Edward Merrill, Todd Peterson
To appear in: Cognitive Science.
http://www.cecs.missouri.edu/~rsun/sun.CS99.ps

ABSTRACT

This paper presents a skill learning model {\sc Clarion}. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun 1995), with a combination of localist and distributed representation. We compare the model with human data in a minefield navigation task, demonstrating some match between the model and human data in several respects.
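[Editorial aside: the bottom-up route from procedural to declarative knowledge described in this abstract can be illustrated with a deliberately simplified sketch. A tabular Q-learner stands in for the low (procedural) level, and state-action pairs whose learned value exceeds a threshold are promoted into explicit rules at the top (declarative) level. This is a generic illustration only, not the Clarion model; the toy task, names, and thresholds are invented for the sketch.]

import random
from collections import defaultdict

# Toy two-level learner in the spirit of a bottom-up architecture:
# a low-level (procedural) Q-table is trained by reinforcement, and
# high-level (declarative) rules are extracted from it afterwards.
# All names, the 1-D walk task, and thresholds are illustrative only.

ACTIONS = ["left", "right"]

def train_bottom_level(episodes=500, alpha=0.1, gamma=0.9, eps=0.3):
    """Low level: tabular Q-learning on a tiny 1-D walk (goal at state 4)."""
    q = defaultdict(float)                      # (state, action) -> value
    for _ in range(episodes):
        state = 0
        while state != 4:
            a = (random.choice(ACTIONS) if random.random() < eps
                 else max(ACTIONS, key=lambda x: q[(state, x)]))
            nxt = min(state + 1, 4) if a == "right" else max(state - 1, 0)
            reward = 1.0 if nxt == 4 else 0.0
            best_next = max(q[(nxt, x)] for x in ACTIONS)
            q[(state, a)] += alpha * (reward + gamma * best_next - q[(state, a)])
            state = nxt
    return q

def extract_top_level(q, threshold=0.5):
    """Top level: turn sufficiently good state-action values into explicit rules."""
    rules = []
    for state in range(5):
        best = max(ACTIONS, key=lambda a: q[(state, a)])
        if q[(state, best)] > threshold:
            rules.append("IF state == %d THEN %s" % (state, best))
    return rules

if __name__ == "__main__":
    q_table = train_bottom_level()
    for rule in extract_top_level(q_table):
        print(rule)

[The point of the sketch is only the ordering of the two levels: the procedural policy is acquired first through trial and error, and the declarative rules are read off afterwards, rather than being programmed in and compiled downwards.]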
Two papers on accounting for consciousness computationally: -------------------------------------------------- Accounting for the Computational Basis of Consciousness: A Connectionist Approach Ron Sun To appear in: Consciousness and Cognition, 1999. http://www.cecs.missouri.edu/~rsun/sun.CC99.ps ABSTRACT This paper argues for an explanation of the mechanistic (computational) basis of consciousness that is based on the distinction between localist (symbolic) representation and distributed representation, the ideas of which have been put forth in the connectionist literature. A model is developed to substantiate and test this approach. The paper also explores the issue of the functional roles of consciousness, in relation to the proposed mechanistic explanation of consciousness. The model, embodying the representational difference, is able to account for the functional role of consciousness, in the form of the synergy between the conscious and the unconscious. The fit between the model and various cognitive phenomena and data (documented in the psychological literatures) is discussed to accentuate the plausibility of the model and its explanation of consciousness. Comparisons with existing models of consciousness are made in the end. -------------------------------------------------- Learning, Action, and Consciousness: A Hybrid Approach toward Modeling Consciousness Ron Sun Appeared in: Neural Networks, 10 (7), pp.1317-1331. 1997. http://www.cecs.missouri.edu/~rsun/sun.nn97.ps ABSTRACT This paper is an attempt at understanding the issue of consciousness through investigating its functional role, especially in learning, and through devising hybrid neural network models that (in a qualitative manner) approximate characteristics of human consciousness. In so doing, the paper examines explicit and implicit learning in a variety of psychological experiments and delineates the conscious/unconscious distinction in terms of the two types of learning and their respective products. The distinctions are captured in a two-level action-based model {\sc Clarion}. Some fundamental theoretical issues are also clarified with the help of the model. Comparisons with existing models of consciousness are made to accentuate the present approach. Finally, a paper on computational analysis of the model: --------------------------------- Autonomous Learning of Sequential Tasks: Experiments and Analyses by Ron Sun, Todd Peterson Appeared in: IEEE Transactions on Neural Networks, Vol.9, No.6, pp.1217-1234. November, 1998. http://www.cecs.missouri.edu/~rsun/sun.tnn98.ps ABSTRACT: This paper presents a novel learning model {\sc Clarion}, which is a hybrid model based on the two-level approach proposed in Sun (1995). The model integrates neural, reinforcement, and symbolic learning methods to perform on-line, bottom-up learning (i.e., learning that goes from neural to symbolic representations). The model utilizes both procedural and declarative knowledge (in neural and symbolic representations respectively), tapping into the synergy of the two types of processes. It was applied to deal with sequential decision tasks. Experiments and analyses in various ways are reported that shed light on the advantages of the model. =========================================================================== Prof. 
Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html http://www.cecs.missouri.edu/~rsun/clarion.html =========================================================================== From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: other information from CNSA, and information about public domain clustering and classification software. The CD will save time and effort in looking for clustering and classification information, and a production run of 1000 CDs will be distributed to key researchers, R&D specialists, and educators, across various disciplines relating to classification and data analysis. The CD is distributed as a supplement to the Journal of Classification and, in addition, will be available on library shelves with the Journal of Classification. Availability on CD also means that information on commercial software, shareware, and clustering- and classification-related services will be available, as well as publications and event information. Just $75 for lists of relevant publications or your exhibition events, with links to your web sites! More information and pricing is available from the CSNA web site, http://www.pitt.edu/~csna (see under 'New developments related to Classification Literature Automated Service'). Now is the time to reserve space on this CD. I look forward to hearing from you, Eva Whitmore CLASS Technical Editor /-----------------------------------------------------------\ Eva Whitmore 14 Calgary St. St. John's, NF A1A 3W2 Tel: 709-739-6252 Email: eva at cs.mun.ca \-----------------------------------------------------------/ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Fellowships Available Message-ID: -------------------------------------------------------------------- Positions Available at All Levels in Advanced Signal Processing and Magnetoencephalography/fMRI Wanted: neuroscientists, programmers, computer scientists, and physicists to join our growing Brain and Computation group in a newly funded functional brain imaging (MEG/fMRI) study of neural plasticity. Over a half dozen fellowships (funded by the National Foundation for Functional Brain Imaging) are available. 
For further details, see http://www.cs.unm.edu/~bap/hiring.html -------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From school at cogs.nbu.acad.bg Mon Jun 5 16:42:55 2006 From: school at cogs.nbu.acad.bg (CogSci Summer School) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: CogSci 2000 Summer School Message-ID: 7th International Summer School in Cognitive Science Sofia, New Bulgarian University, July 10 - 30, 2000 Courses: * Distributed representations and gradual learning processes in cognition - Jay McClelland (Carnegie Mellon University, USA) * Connectionist models of language processing - Jeff Elman (University of California, San Diego, USA) * Brain Organization of Human Memory and Thought - John Gabrieli (Stanford University) * Cognitive Development - Graeme Halford (University of Queensland) * The Human Conceptual System - Lawrence W. Barsalou (Emory University) * Topics in Vision Science - Stephen E. Palmer (University of California, Berkeley) * Cognitive Science: A Basic Science for an Applied Science of Learning - John T. Bruer (James S. McDonnell Foundation) * Psychological Scaling - Encho Gerganov (New Bulgarian University) * Research Methods in Psycholinguistics - Elena Andonova (New Bulgarian University) * Research Methods in Memory and Thinking - Boicho Kokinov (New Bulgarian University) Organised by New Bulgarian University, Bulgarian Academy of Sciences, and Bulgarian Society for Cognitive Science Sponsored by the Open Society Institute in Budapest - HESP Program International Advisory Board Participation Participants will be selected by a Selection Committee on the bases of their submitted documents: * application form (see at the Web page), * CV, * statement of purpose, * copy of diploma; if student - academic transcript * letter of recommendation, * list of publications (if any) and short summary of up to three of them. Apply as soon as possible since the number of participants is restricted. Send applications to: Summer School in Cognitive Science Central and East European Center for Cognitive Science New Bulgarian University 21 Montevideo Str. Sofia 1635, Bulgaria e-mail: school at cogs.nbu.acad.bg For more information look at: http://www.nbu.bg/cogs/events/ss2000.html From ESANN at dice.ucl.ac.be Mon Jun 5 16:42:55 2006 From: ESANN at dice.ucl.ac.be (esann) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Antonio Turiel and Nestor Parga NOTE Exponential Or Polynomial Learning Curves? - Case By Case Studies Hanzhong Gu and Haruhisa Takahashi LETTERS Training Feed-forward Neural Networks With Gain Constraints Eric Hartman Variational Learning for Switching State-Space Models Zoubin Ghahramani and Geoffrey E. Hinton Retrieval Properties of A Hopfield Model With Random Asymmetric Interactions Zhang Chengxiang, Chandan Dasgupta and Manoranjan P. 
Singh On "Natural" Learning And Pruning in Multi-Layered Perceptrons Tom Heskes Synthesis of Generalized Algorithms for the Fast Computation of Synaptic Conductances with Markov Kinetic Models in Large Network Simulations Michele Giugliano Hierarchical Bayesian Models for Regularisation in Sequential Learning J.F.G. de Freitas, M. Niranjan and A. H. Gee Sequential Monte Carlo Methods to Train Neural Network Models J.F.G. de Freitas, M. Niranjan an, A. H. Gee and A. Doucet ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2000 - VOLUME 12 - 12 ISSUES USA Canada* Other Countries Student/Retired $60 $64.20 $108 Individual $88 $94.16 $136 Institution $430 $460.10 $478 * includes 7% GST MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: This thoroughly and thoughtfully revised edtion of a very successful textbook makes the principles and the details of neural network modeling accessible to cognitive scientists of all varieties as well as other scholars interested in these models. Research since the publication of the first edition has been systematically incorporated into a framework of proven pedagogical value. Features of the second edition include: A new section on spatiotemporal pattern processing. Coverage of ARTMAP networks (the supervised version of adaptive resonance networks) and recurrent back-propagation networks. A vastly expanded section on models of specific brain areas, such as the cerebellum, hippocampus, basal ganglia, and visual and motor cortex. Up-to-date coverage of applications of neural networks in areas such as combinatorial optimization and knowledge representation. As in the first edition, the text includes extensive introductions to neuroscience and to differential and difference equations as appendices for students without the requisite background in these areas. As graphically revealed in the flowchart in the front of the book, the text begins with simpler processes and builds up to more complex multilevel functional systems. Table of contents: Chapters 2 through 7 each include equations and exercises (computational, mathematical, and qualitative) at the end of the chapter. The text sections are as follows. Flow Chart of the Book Preface Preface to the Second Edition Chapter 1: Brain and Machine: The Same Principles? What are Neural Networks? What Are Neural Networks? Is Biological Realism a Virtue? What Are Some Principles of Neural Network Theory? Methodological Considerations Chapter 2: Historical Outline 2.1. Digital Approaches The McCulloch-Pitts Network Early Approaches to Modeling Learning: Hull and Hebb Rosenblatt's Perceptrons Some Experiments With Perceptrons The Divergence of Artificial Intelligence and Neural Modeling 2.2. Continuous and Random Net Approaches Rashevsky's Work Early Random Net Models Reconciling Randomness and Specificity Chapter 3: Associative Learning and Synaptic Plasticity 3.1. Physiological Bases for Learning 3.2. Rules for Associative Learning Outstars and Other Early Models of Grossberg Anderson's Connection Matrices Kohonen's Early Work 3.3. Learning Rules Related to Changes in Node Activities Klopf's Hedonistic Neurons and the Sutton-Barto Learning Rule Error Correction and Back Propagation The Differential Hebbian Idea Gated Dipole Theory 3.4. 
Associative Learning of Patterns Kohonen's Recent Work: Autoassociation and Heteroassociation Kosko's Bidirectional Associative Memory Chapter 4: Competition, Lateral Inhibition, and Short-Term Memory 4.1. Contrast Enhancement, Competition, and Normalization Hartline and Ratliff's Work, and Other Early Visual Models Nonrecurrent Versus Recurrent Lateral Inhibition 4.2. Lateral Inhibition and Excitation Between Sensory Representations Wilson and Cowan's Work Work of Grossberg and Colleagues Work of Amari and Colleagues Energy Functions in the Cohen-Grossberg and Hopfield-Tank Models The Implications of Approach to Equilibrium Networks With Synchronized Oscillations 4.3. Visual Pattern Recognition Models Visual Illusions Boundary Detection Versus Feature Detection Binocular and Stereoscopic Vision Visual Motion Comparison of Grossberg's and Marr's Approaches 4.4. Uses of Lateral Inhibition in Higher Level Processing Chapter 5: Conditioning, Attention, and Reinforcement 5.1. Network Models of Classical Conditioning Early Work: Brindley and Uttley Rescorla and Wagner's Psychological Model Grossberg: Drive Representations and Synchronization Aversive Conditioning and Extinction Differential Hebbian Theory Versus Gated Dipole Theory 5.2. Attention and Short-Term Memory in Conditioning Models Grossberg's Approach to Attention Sutton and Barto's Approach: Blocking and Interstimulus Interval Effects Some Contrasts Between the Grossberg and Sutton-Barto Approaches Further Connections With Invertebrate Neurophysiology Further Connections With Vertebrate Neurophysiology Gated Dipoles, Aversive Conditioning, and Timing Chapter 6: Coding and Categorization 6.1. Interactions Between Short- and Long-Term Memory in Code Development Malsburg's Model With Synaptic Conservation Grossberg's Model With Pattern Normalization Mathematical Results of Grossberg and Amari Feature Detection Models With Stochastic Elements From Feature Coding to Categorization 6.2. Supervised Classification Models The Back Propagation Network and its Variants The RCE Model 6.3. Unsupervised Classification Models The Rumelhart-Zipser Competitive Learning Algorithm Adaptive Resonance Theory Edelman and Neural Darwinism 6.4. Models that Combine Supervised and Unsupervised Parts ARTMAP and Other Supervised Adaptive Resonance Networks Brain-State-in-a-Box (BSB) Models 6.5. Translation and Scale Invariance 6.6. Processing Spatiotemporal Patterns Chapter 7 Optimization, Control, Decision, and Knowledge Representation 7.1. Optimization and Control Classical Optimization Problems Simulated Annealing and Boltzmann Machines Motor Control: The Example of Eye Movements Motor Control: Arm Movements Speech Recognition and Synthesis Robotic and Other Industrial Control Problems 7.2. Decision Making and Knowledge Representation What, If Anything, Do Biological Organisms Optimize? Affect, Habit, and Novelty in Neural Network Theories Knowledge Representation: Letters and Words Knowledge Representation: Concepts and Inference 7.3. Neural Control Circuits, Mental Illness, and Brain Areas Overarousal, Underarousal, Parkinsonism, and Depression Frontal Lobe Function and Dysfunction Disruption of Cognitive-Motivational Interactions Impairment of Motor Task Sequencing Disruption of Context Processing Models of Specific Brain Areas Models of the Cerebellum Models of the Hippocampus Models of the Basal Ganglia Models of the Cerebral Cortex Chapter 8: A Few Recent Technical Advances 8.1. 
Some "Toy" and Real World Computing Applications Pattern Recognition Knowledge Engineering Financial Engineering "Oddball" Applications 8.2. Some Neurobiological Discoveries Appendix 1: Basic Facts of Neurobiology The Neuron Synapses, Transmitters, Messengers, and Modulators Invertebrate and Vertebrate Nervous Systems Functions of Vertebrate Subcortical Regions Functions of the Mammalian Cerebral Cortex Appendix 2: Difference And Differential Equations in Neural Networks Example: The Sutton-Barto Difference Equations Differential Versus Difference Equations Outstar Equations: Network Interpretation and Numerical Implementation The Chain Rule and Back Propagation Dynamical Systems: Steady States, Limit Cycles, and Chaos ABOUT THE AUTHOR: Daniel S. Levine is Professor of Psychology at the University of Texas at Arlington. A former president of the International Neural Network Society, he is the organizer of the MIND conferences, which bring together leading neural network researchers from academia and industry. Since 1975, he has written nearly 100 books, articles, and chapters for various audiences interested in neural networks. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Papers on associative memory in neuronal networks Message-ID: <14689.57198.956218.420206@cerebellum> PAPERS ON ASSOCIATIVE MEMORY IN NEURONAL NETWORKS We would like to bring to your attention a series of recent theoretical papers about associative memory in neuronal networks, now available over internet. By analysis and simulation experiments these studies explore the computational function of biophysical mechanisms, such as spike synchronization, gamma oscillations, NMDA transmission characteristics, and activity feedback in local cortical networks or over reciprocal projections. The simulation models vary widely in their degree of biophysical realism ranging from binary sparse associative memories to networks of compartmental neurons. List of manuscripts and abstracts, see below. Postscript versions are available on our web pages: http://www.informatik.uni-ulm.de/ni/mitarbeiter/FSommer/FSommernew.html http://personal-homepages.mis.mpg.de/wenneker/index.html Fritz and Thomas ---------------------------------------------------------------------------- Dr. Friedrich T. Sommer Department of Neural Information Processing University of Ulm D-89069 Ulm Germany Tel. 49(731)502-4154 FAX 49(731)502-4156 FRITZ at NEURO.INFORMATIK.UNI-ULM.DE ---------------------------------------------------------------------------- ____________________________________________________________________________ Dr.Thomas Wennekers Max-Planck-Institute for Mathematics in the Sciences Inselstrasse 22-26 04103 Leipzig Germany Phone: +49-341-9959-533 Fax: +49-341-9959-555 Email: Thomas.Wennekers at mis.mpg.de ____________________________________________________________________________ ----------------------------------------------------------------------------- LIST OF MANUSCRIPTS: (1) Sommer, F.T. and Wennekers, T. Modeling studies on the computational function of fast temporal structure in neuronal network activity submitted to J.Neurophysiol. (Paris) (2) Sommer, F.T. 
and Wennekers, T.: Associative memory in a pair of cortical cell groups with reciprocal connections Acctepted at the Computational Neuroscience Meeting CNS 2000 , Brugge, Belgium, July 2000. (3) Vollmer, U., Wennekers, T. and Sommer, F.T.: Coexistence of short and long term memory in a model network of realistic neurons Accepted at the Computational Neuroscience Meeting CNS 2000, Bruegge, Belgium (4) Sommer, F.T.: On cell assemblies in a cortical column Neurocomputing 2000, to appear (5) Wennekers, T. and Sommer, F.T.: Gamma-oscillations support optimal retrieval in associative memories of two-compartment neurons. Neurocomputing 26-27, 573-578, 1999. (6) Sommer, F.T. and Palm, G.: Improved Bidirectional Retrieval of Sparse Patterns Stored by Hebbian Learning Neural Networks 12 (2) (1999) 281 - 297 (7) Sommer, F.T.; Wennekers, Th.; Palm, G.: Bidirectional completion of cell assemblies in the cortex. In: J.M.Bower (ed) Computational Neuroscience: Trends in Research. Plenum Press, New York, 1998. (8) Sommer, F.T. and Palm, G.: Bidirectional Retrieval from Associative Memory in Advances in Neural Information Processing Systems 10, MIT Press, Cambridge, MA (1998) 675 - 681 ========================================================================================= ABSTRACTS: ----------------------------------------------------------------------------------------- (1) Modeling studies on the computational function of fast temporal structure in cortical circuit activity Friedrich T. Sommer and Thomas Wennekers The interplay between experiments and theoretical approaches can support the exploration of the function of neuronal circuits in the cortex. In this review we exemplify such a proceeding with a study on the functional role of spike timing and gamma-oscillations, and their relation to associative activity feedback through cortex-intrinsical synaptic connections. We first discern the theoretical approaches in general that have been most important in brain research, in particular, those approaches focusing on the biophysical, the functional, and the computational aspect. It is demonstrated how results from computational model studies on different levels of abstraction can constrain the functionality of associative memory expected in real cortical neuronal circuits. These constraints will be used to implement a computational model of associative memory on the base of biophysically elaborated compartmental neurons developed by Pinsky and Rinzel \cite{AN:PinskyRinzel94}. We run simulation experiments for two network architectures: a single interconnected pool of cells (say a cortical column), and two such reciprocally connected pools. In our biophysical model individual cell populations correspond to entities formed by Hebbian coincidence learning. When recalled by stimulating some cells in the population the stored patterns are extremely quickly completed and coded by events of synchronized single spikes. These fast associations are executed with an efficiency comparable to optimally tuned technical associative networks. The maximum repetition frequency for these association processes lies in the gamma-range. If a stimulus changes fast enough to switch between different memory patterns within one gamma period, a single association takes place without periodic firing of individual cells. Gamma-band firing and phase locking are therefore not primary coding features. They appear, however, with tonic stimulation or if feedback loops in the network provide a reverberation. 
The latter can improve (clean up) the recall iteratively. In the reciprocal wiring architecture bidirectional reverberations do not express themselves in a rigid phase locking between the pools. Bursting turns out to be a supportive mechanism for bidirectional associative memory.

Sommer, F.T. and Wennekers, T. Modeling studies on the computational function of fast temporal structure in neuronal network activity submitted to J.Neurophysiol. (Paris)

-----------------------------------------------------------------------------------------

(2) Associative memory in a pair of cortical cell groups with reciprocal projections

Friedrich T. Sommer and Thomas Wennekers

We examine the functional hypothesis of bidirectional associative memory in a pair of reciprocally projecting cortical cell groups. Our simulation model features two-compartment neurons and synaptic weights formed by Hebbian learning of pattern pairs. After stimulation of a learned memory in one group we recorded the network activation. At high synaptic memory load (0.14 bit/synapse) we varied the number of cells receiving stimulation input (input activity). The network ``recalled'' patterns by synchronized regular gamma spiking. Stimulated cells also expressed bursts that facilitated the recall with low input activity. Performance was evaluated for one-step retrieval based on monosynaptic transmission expressed after ca. 35ms, and for {\it bidirectional retrieval} involving iterative activity propagation. One-step retrieval performed comparably to the technical Willshaw model with small input activity, but worse in other cases. In 80\% of the trials with low one-step performance iterative retrieval improved the result. It achieved higher overall performance after recall times of 60--260ms.

Keywords: population coding; associative memory; Hebbian synapses; reciprocal cortical wiring

Sommer, F.T. and Wennekers, T.: Associative memory in a pair of cortical cell groups with reciprocal connections Accepted at the Computational Neuroscience Meeting CNS 2000, Brugge, Belgium, July 2000.

-----------------------------------------------------------------------------------------

(3) Coexistence of short and long term memory in a model network of realistic neurons

Urs Vollmer, Thomas Wennekers, Friedrich T. Sommer

NMDA-mediated synaptic currents are believed to influence LTP. A recent model \cite{Lisman98} demonstrates that they can instead support short term memory based on rhythmic spike activity. We examine this effect in a more realistic model that uses two-compartment neurons experiencing fatigue and also includes long-term memory by synaptic LTP. We find that the network does support both modes of operation without any parameter changes, but depending on the input patterns. Short term memory functionality might facilitate Hebbian learning through LTP by holding a new pattern while synaptic potentiation occurs. We also find that susceptibility of the short term memory against new input is time-dependent and reaches a maximum around the time constant of neuronal fatigue (200--400~ms). This corresponds well to the time scale of the syllabic rhythm and various psychophysical phenomena.

Keywords: Short-term memory; associative memory; population coding; NMDA-activated channels.

Vollmer, U., Wennekers, T.
and Sommer, F.T.: Coexistence of short and long term memory in a model network of realistic neurons Accepted at the Computational Neuroscience Meeting CNS 2000, Bruegge, Belgium ----------------------------------------------------------------------------------------- (4) On cell assemblies in a cortical column Friedrich T. Sommer Recent experimental evidence for temporal coding of cortical cell populations \cite{AN:Riehleetal97,AN:Donoghueetal98} recurs to Hebb's classical cell assembly notion. Here the properties of columnar cell assemblies are estimated, using the assumptions about biological parameters of Wickens \& Miller \cite{FS:WickensMiller97}, but extending and correcting their predictions: Not the combinatorical constraint as they assume, but synaptic saturation and the requirement of low activation outside the assembly limit assembly size and number. As will be shown, i) columnar assembly processing can be still information theoretically efficient, and ii) at efficient parameter settings several assemblies can be ignited in a column at the same time. The feature ii) allows faster and more flexible access to the information contained in the set of stored cell assemblies. Keyword}s: population coding; associative memory; Hebbian synapses, columnar connectivity Sommer, F.T.: On cell assemblies in a cortical column Neurocomputing 2000, to appear ----------------------------------------------------------------------------------------- (5) Gamma-oscillations support optimal retrieval in associative memories of two-compartment neurons Thomas Wennekers and Friedrich T. Sommer Theoretical studies concerning iterative retrieval in conventional associative memories suggest that cortical gamma-oscillations may constitute sequences of fast associative processes each restricted to a single period. By providing a rhythmic threshold modulation suppressing cells that are uncorrelated with a stimulus, interneurons significantly contribute to this process. This hypothesis is tested in the present paper utilizing a network of two-compartment model neurons developed by Pinsky and Rinzel. It is shown that gamma-oscillations can simultaneously support an optimal speed for single pattern retrieval, an optimal repetition frequency for consecutive retrieval processes, and a very high memory capacity. Keywords: gamma-oscillations; threshold control; associative memory Wennekers, T. and Sommer, F.T.: Gamma-oscillations support optimal retrieval in associative memories of two-compartment neurons. Neurocomputing 26-27, 573-578, 1999. ----------------------------------------------------------------------------------------- (6) Improved Bidirectional Retrieval of Sparse Patterns Stored by Hebbian Learning Friedrich T. Sommer and Guenther Palm The Willshaw model is asymptotically the most efficient neural associative memory (NAM), but its finite version is hampered by high retrieval errors. Iterative retrieval has been proposed in a large number of different models to improve performance in auto-association tasks. In this paper bidirectional retrieval for the hetero-associative memory task is considered: We define information efficiency as a general performance measure for bidirectional associative memory (BAM) and determine its asymptotic bound for the bidirectional Willshaw model. 
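[Editorial aside: for readers unfamiliar with the Willshaw scheme referred to throughout these abstracts, the following is a minimal generic sketch of binary hetero-associative storage and one-step retrieval. Pattern pairs are stored by clipped Hebbian learning (a synapse is switched on whenever its pre- and postsynaptic units are both active), and recall thresholds the dendritic sums at the number of active cue units. This is only the textbook baseline, not the crosswise bidirectional (CB) retrieval method proposed in the paper; all sizes and names below are illustrative.]

import numpy as np

# Minimal binary Willshaw-style hetero-associative memory (textbook baseline).
# Storage: clipped Hebbian learning, W is the element-wise OR over outer
# products of the stored pattern pairs.
# Retrieval: one-step thresholding at the number of active units in the cue.
# Generic illustration only, not the CB-retrieval algorithm of the paper.

def store(pairs, n_in, n_out):
    W = np.zeros((n_out, n_in), dtype=np.uint8)
    for x, y in pairs:                   # x, y are sparse 0/1 vectors
        W |= np.outer(y, x).astype(np.uint8)
    return W

def recall(W, x):
    s = W.astype(int) @ x.astype(int)    # dendritic sums
    theta = int(x.sum())                 # threshold = number of active cue units
    return (s >= theta).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_in, n_out, k = 100, 100, 5         # sparse patterns with k active units

    def sparse(n):
        v = np.zeros(n, dtype=np.uint8)
        v[rng.choice(n, k, replace=False)] = 1
        return v

    pairs = [(sparse(n_in), sparse(n_out)) for _ in range(20)]
    W = store(pairs, n_in, n_out)
    x0, y0 = pairs[0]
    print("exact recall:", np.array_equal(recall(W, x0), y0))

[At low memory load this one-step recall is typically exact; the cross-talk errors discussed in the abstracts appear as the number of stored pairs grows, which is what the iterative and bidirectional retrieval strategies are designed to clean up.]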
For the finite Willshaw model an efficient new bidirectional retrieval strategy is proposed, the appropriate combinatorial model analysis is derived, and implications of the proposed sparse BAM for applications and brain theory are discussed. The distribution of the dendritic sum in the finite Willshaw model given by \citet{FS:Buckingham92} allows no fast numerical evaluation. We derive a combinatorial formula with a highly reduced evaluation time that is used in the improved error analysis of the basic model and for estimation of the retrieval error in the naive model extension where bidirectional retrieval is employed in the hetero-associative Willshaw model. The analysis rules out the naive BAM extension as a promising improvement. A new bidirectional retrieval algorithm -- called {\em crosswise bidirectional} (CB) retrieval -- is presented. The cross talk error is significantly reduced without employing more complex learning procedures or dummy augmentation in the pattern coding as proposed in other refined BAM models \citep{FS:Wangetal90,FS:Leungetal95}. The improved performance of CB retrieval is shown by a combinatorial analysis of the first step and by simulation experiments: It allows very efficient hetero-associative mapping as well as auto-associative completion for sparse patterns -- the experimentally achieved information efficiency is close to the asymptotic bound. The different retrieval methods in hetero-associative Willshaw matrix are discussed as Boolean linear optimization problems. The improved BAM model opens interesting new perspectives, for instance, in Information Retrieval it allows efficient data access providing segmentation of ambiguous user input, relevance feedback and relevance ranking. Finally, we discuss BAM models as functional model for reciprocal cortico-cortical pathways, and the implication of this for a more flexible version of Hebbian cell-assemblies. Keywords: Bidirectional associative memory, Hebbian learning, iterative retrieval, combinatorial analysis, cell-assemblies, neural information retrieval (6) Sommer, F.T. and Palm, GT.: Improved Bidirectional Retrieval of Sparse Patterns Stored by Hebbian Learning Neural Networks 12 (2) (1999) 281 - 297 ----------------------------------------------------------------------------------------- (7) Bidirectional Completion of Cell Assemblies in the Cortex Friedrich T. Sommer T. Wennekers and G. Palm Reciprocal pathways are presumedly the dominant wiring organization for cortico-cortical long range projections\refnote{\cite{AN:FellemanVanEssen91}}. This paper examines the hypothesis that synaptic modification and activation flow in a reciprocal cortico-cortical pathway correspond to learning and retrieval in a bidirectional associative memory (BAM): Unidirectional activation flow may provide the fast estimation of stored information, whereas bidirectional activation flow might establish an improved recall mode. The idea is tested in a network of binary neurons where pairs of sparse memory patterns have been stored in bidirectional synapses by fast Hebbian learning (Willshaw model). We assume that cortical long-range connections shall be efficiently used, i.e., in many different hetero-associative projections corresponding in technical terms to a high memory load. 
While the straightforward BAM extension of the Willshaw model does not improve the performance at high memory load, a new bidirectional recall method (CB-retrieval) is proposed, accessing patterns with highly improved fault tolerance and also allowing segmentation of ambiguous input. The improved performance is demonstrated in simulations. The consequences and predictions of such a cortico-cortical pathway model are discussed. A brief outline of the relations between a theory of modular BAM operation and common ideas about cell assemblies is given.

Sommer, F.T.; Wennekers, Th.; Palm, G.: Bidirectional completion of cell assemblies in the cortex. In: J.M.Bower (ed) Computational Neuroscience: Trends in Research. Plenum Press, New York, 1998.

-----------------------------------------------------------------------------------------

(8) Bidirectional Retrieval from Associative Memory

Friedrich T. Sommer and G. Palm

Similarity-based fault-tolerant retrieval in neural associative memories (NAM) has not led to widespread applications. A drawback of the efficient Willshaw model for sparse patterns \cite{FS:Steinbuch61,FS:Willshaw69} is that the high asymptotic information capacity is of little practical use because of high cross talk noise arising in the retrieval for finite sizes. Here a new bidirectional iterative retrieval method for the Willshaw model is presented, called crosswise bidirectional (CB) retrieval, providing enhanced performance. We discuss its asymptotic capacity limit, analyze the first step, and compare it in experiments with the Willshaw model. Applying the very efficient CB memory model either in information retrieval systems or as a functional model for reciprocal cortico-cortical pathways requires more than robustness against random noise in the input: Our experiments also show the segmentation ability of CB-retrieval with addresses containing the superposition of patterns, provided even at high memory load.

Sommer, F.T. and Palm, G.: Bidirectional Retrieval from Associative Memory in Advances in Neural Information Processing Systems 10, MIT Press, Cambridge, MA (1998) 675 - 681

-----------------------------------------------------------------------------------------

=========================================================================================

From evsukoff at LMP.UFRJ.BR Mon Jun 5 16:42:55 2006 From: evsukoff at LMP.UFRJ.BR (Alexandre Evsukoff) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: FINAL CALL FOR DEMONSTRATIONS SBRN'2000 Message-ID: <7C7E8A3FEC@lmp.ufrj.br>

CALL FOR DEMONSTRATIONS
SBRN'2000 - VIth BRAZILIAN SYMPOSIUM ON NEURAL NETWORKS
http://www.iltc.br/sbrn2000
Rio de Janeiro, November 22-25, 2000

SBRN'2000 will host a Demonstration Session, in parallel with the Tutorials, on November 22, 2000. The Demonstration Session showcases state-of-the-art neural software products and provides researchers with an opportunity to show their research in action. Early exhibition of research prototypes is encouraged, but commercial products will also be present. This will be an opportunity to put innovative research prototypes face-to-face with mature commercial products. The Demonstration Session will be an "open-house" event on the first day of the Symposium. However, all accepted research demonstrations will remain available within the Poster Sessions. A CD-ROM containing a short animated version of each demonstration may be pressed and distributed to participants.
Participants are invited to submit proposals to demonstrate their systems, especially those whose papers were accepted for presentation in the conference program. In addition to contact information, proposals must include the following:
- A two-page description of the technical content of the demo, including credits and references.
- An animated version of the demonstration or a demo storyboard (six pages maximum). This will be the primary method of evaluating the proposals.
- A detailed description of hardware and software requirements. The Organising Committee will provide the Demonstration Session with generic PCs and standard software. Unix- and Mac-based demonstrations will be possible.

The Demonstration Session will also allow demonstrations via the web. Anyone interested in participating should include with their proposal a URL that accesses their demo.

Demonstration proposals must be received in their entirety, including any supporting materials, by August 28. Authors will be notified of acceptance by September 29, 2000. Any questions or comments, as well as demonstration proposals, must be sent electronically directly to the Demonstrations Chair.

Demonstrations Chair: Alexandre Evsukoff (UFRJ/ILTC, Brazil) evsukoff at lmp.ufrj.br

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

An Incremental Multivariate Regression Method for Function Approximation from Noisy Data

M. Carozza and S. Rampone
Università del Sannio

Abstract

In this paper we consider the problem of approximating functions from noisy data. We propose an incremental supervised learning algorithm for RBF networks. Hidden Gaussian nodes are added in an iterative manner during the training process. For each new node added, the activation function center and the output connection weight are set according to an extended chained version of the Nadaraya-Watson estimator. Then the variances of the activation functions are determined by an empirical risk driven rule based on a genetic-like optimization technique.

The postscript file is available at http://space.tin.it/scienza/srampone/indexing.htm (click on )

From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID:

financial assets remains a highly controversial question in finance (even if recent publications in leading scientific journals seem to give some credit to such work). From the methodological point of view, financial time series appear to be very challenging. They are often characterized by a lot of noise, problems of stationarity, sudden changes of volatility, ... Neural networks have appeared as new tools in this area in the last decade. This special session will try to bring to light the serious results that we can expect from neural networks in this field and to analyze the methodological issues of their application.

Artificial neural networks and early vision processing
------------------------------------------------------
Organised by D. Charles, C. Fyfe, Univ. of Paisley (Scotland)

It is well known that biological visual systems, and in particular the human visual system, are extraordinarily good at extracting and deciphering very complex visual scenes.
Certainly, if we consider the human visual system to be solving inverse graphics problems, then we have not really come close to building artificial systems which are as effective as biological ones. We have much to learn from studying biological visual architecture, and the implementation of practical vision-based products could be improved by gaining inspiration from these systems. The following are some suggested areas of interest: - Unsupervised preprocessing methods - e.g. development of local filters, edge filtering. - Statistical structure identification - e.g. Independent Component Analysis, Factor Analysis, Principal Components Analysis, Projection pursuit. - Information theoretic techniques for the extraction/preservation of information in visual data. - Coding strategies - e.g. sparse coding, complexity reduction. - Binocular disparity. - Motion, invariances, colour encoding - e.g. optical flow, space/time filters. - Topography preservation. - The practical application of techniques relating to these topics. Artificial neural networks for Web computing -------------------------------------------- Organised by M. Maggini, Univ. di Siena (Italy) The Internet represents a new and challenging field for the application of machine learning techniques to devise systems which improve access to the information available on the web. This domain is particularly appealing since it is easy to collect large amounts of data to be used as training sets, while it is usually difficult to manually write sets of rules that solve interesting tasks. The aim of this special session is to present the state of the art in the field of connectionist systems applied to web computing. The possible fields of application involve distributed information retrieval issues like the design of thematic search engines, user modeling algorithms for the personalization of services to access information on the web, automatic security management, design and improvement of web servers through prediction of request patterns, and so on. In particular the suggested topics are: - Personalization of the access to information on the web - Recommender systems on the web - Crawling policies for search engines - Focussed crawlers - Analysis and prediction of requests to web servers - Intelligent caching and proxies - Security issues (e.g. intrusion detection) Dedicated hardware implementations: perspectives on systems and applications ---------------------------------------------------------------------------- Organised by D. Anguita, M. Valle, Univ. of Genoa (Italy) The aim of this session is to assess new proposals for bridging the gap between algorithms, applications and hardware implementations of neural networks. Usually these three fields are not investigated in close connection: researchers working on dedicated hardware implementations develop simplified versions of otherwise complex neural algorithms or develop dedicated algorithms, and usually these algorithms have not been thoroughly tested on real-world applications. At the same time, many theoretically sound algorithms are not feasible in dedicated hardware, therefore limiting their success to applications where a software solution on a general-purpose system is feasible. The focus of the session will be on the issues related to the hardware implementation of neural algorithms and architectures and their successful application to real-world problems, not on the details of the hardware implementation itself.
The session will both review major achievements in hardware-friendly algorithms and assess major results obtained in the application of dedicated neural hardware to real industrial and/or consumer applications. Novel neural transfer functions ------------------------------- Organised by W. Duch, Nicholas Copernicus Univ. (Poland) It is commonly believed that, because of the universal approximation theorem, sigmoidal functions are sufficient for all applications. This belief has been responsible for slow progress in creating neural networks based on novel transfer functions or using several transfer functions in one network. Transfer functions are as important for creating good neural models as the architectures and the training methods are, because they have a strong influence on rates of convergence and on the complexity of the networks needed to solve the problem at hand. This special session will be devoted to neural models exploring the benefits of using different transfer functions. Papers comparing results obtained with known and novel transfer functions, developing methods of training suitable for heterogeneous function networks, investigating theoretical rates of convergence, or deriving approximations to biological neural activity are strongly encouraged. Neural networks and evolutionary/genetic algorithms - hybrid approaches ----------------------------------------------------------------------- Organised by T. Villmann, Univ. Leipzig (Germany) Artificial neural networks can be regarded as a special kind of learning and self-adapting data-processing system. The ability to handle noisy and high-dimensional data, nonlinear problems, large data sets, etc. using neural techniques has led to numerous applications as well as a solid underlying theory. Another adaptation approach is that of genetic and evolutionary algorithms. One of the main advantages of these methods is the relative independence of the algorithm from the optimization goal defined by the fitness function. The fitness function can comprise traditional constraints but may also include explicit expert knowledge. In recent years several approaches have been developed that combine neural networks and genetic/evolutionary algorithms. These methods range from neural network learning using genetic algorithms and structure adaptation of neural network topologies by genetic algorithms to migration dynamics in evolutionary algorithms inspired by neural network dynamics, among others. Of course, combining both approaches should improve the capability of the resulting hybrid system. Authors are invited to submit current contributions covering this briefly sketched area of hybrid systems combining neural networks and genetic/evolutionary algorithms. New methods and theoretical developments should be emphasized. However, new applications with an interesting theoretical background are also of interest. Possible topics include (but are not restricted to): - neural network adaptation by genetic/evolutionary algorithms - learning in neural networks using genetic/evolutionary algorithms - clustering, fuzzy clustering by genetic/evolutionary algorithms - neural networks for genetic/evolutionary algorithms - applications using hybrid systems ===================================================== ESANN - European Symposium on Artificial Neural Networks http://www.dice.ucl.ac.be/esann * For submissions of papers, reviews,... Michel Verleysen Univ. Cath.
de Louvain - Microelectronics Laboratory 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at dice.ucl.ac.be * Conference secretariat D facto conference services 27 rue du Laekenveld - B-1080 Brussels - Belgium tel: + 32 2 420 37 57 - fax: + 32 2 420 02 55 mailto:esann at dice.ucl.ac.be ===================================================== From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: in-depth discussion on recent research results. Also, at the 7th International Conference on Neural Information Processing on November 2000, special sessions were organized on Blind Adaptive Filtering and Independent Component Analysis and Models of Natural Image Statistics. Also, several post-conference workshops have been organized at Neural Information Processing Systems over the last few years. The papers presented dealt with recent developments on new theories and applications of BSS and ICA, and contribute to the scientific and engineering progress in this important field. Some contributions for the special issue will evolve from the attendants of those meetings. If you were not able to attend and are active in these areas of research, you are highly encouraged to submit your work. Examples of topics relevant to this special issue include : - Multichannel blind deconvolution and equalization - Nonstationary source separation - Nonlinear ICA - Noisy ICA - Variational methods for ICA - PCA/ICA feature extraction - BSS/ICA applications (speech enhancement, Efficient encoding of natural scenes and sound , telecommunication, data mining, medical data processing, etc.) Two copies of the manuscripts should be submitted by March 1st, 2001, to: Dr. V. David Sanchez NEUROCOMPUTING - Editor in Chief - Advanced Computational Intelligent Systems P.O. Box 60130 Pasadena, CA 91116-6130 U.S.A. Fax: +1-626-793-5120 Email: vdavidsanchez at earthlink.net In your submitting letter please clearly write that you are submitting your papers to the Special Issue on BSS/ICA. Guest Editors Dr. Shun-ichi Amari Vice Director, RIKEN Brain Science Institute Laboratory for Mathematical Neuroscience Research Group on Brain-Style Information Systems Wako Japan Tel: +81-(0)48-467-9669 Fax: +81-(0)48-467-9687 E-mail: amari at brain.riken.go.jp Dr. Aapo Hyvarinen Neural Networks Research Centre Helsinki University of Technology P.O. Box 5400 FIN-02015 HUT Finland Tel: +358-9-451-3278 Fax: +358-9-451-3277 Email: Aapo.Hyvarinen at hut.fi Prof. Soo-Young Lee Director, Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr Dr. Te-Won Lee Institute for Neural Computation University of California, San Diego 9500 Gilman Dr. DEPT 0523 La Jolla, CA 92093, USA Phone: (858) 822-1905 Fax: (858) 587-0417 Email: tewon at inc.ucsd.edu Dr. V. David Sanchez NEUROCOMPUTING - Editor in Chief - Advanced Computational Intelligent Systems P.O. Box 60130 Pasadena, CA 91116-6130 U.S.A. Fax: +1-626-793-5120 Email: vdavidsanchez at earthlink.net From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: John.Smith.txt, Message-ID: This will facilitate appropriate filing. Thanks a lot! 
Juergen Schmidhuber http://www.idsia.ch/~juergen/ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Ron Sun Edward Merrill Todd Peterson To appear in: Cognitive Science, Vol.25, No.2. March 2001. http://www.cecs.missouri.edu/~rsun/sun.CS99.ps http://www.cecs.missouri.edu/~rsun/sun.CS99.pdf ABSTRACT This paper presents a skill learning model CLARION. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun 1995), with a combination of localist and distributed representation. We compare the model with human data in a minefield navigation task, demonstrating some match between the model and human data in several respects. A new paper on consciousness: -------------------------------------------------- Computation, Reduction, and Teleology of Consciousness Ron Sun To appear in: {\it Cognitive Systems Research}, Vol.1, No.4, 2001. http://www.cecs.missouri.edu/~rsun/sun.jcsr-cons10.ps http://www.cecs.missouri.edu/~rsun/sun.jcsr-cons10.pdf ABSTRACT This paper aims to explore mechanistic and teleological explanations of consciousness. In terms of mechanistic explanations, it critiques various existing views, especially those embodied by existing computational cognitive models. In this regard, the paper argues in favor of the explanation based on the distinction between localist (symbolic) representation and distributed representation (as formulated in the connectionist literature), which reduces the phenomenological difference to a mechanistic difference. Furthermore, to establish a teleological explanation of consciousness, the paper discusses the issue of the functional role of consciousness on the basis of the afore-mentioned mechanistic explanation. A proposal based on synergistic interaction between the conscious and the unconscious is advanced that encompasses various existing views concerning the functional roles of consciousness. This two-step deepening explanation has some empirical support, in the form of a cognitive model and various cognitive data that it captures. Also, a previous paper on accounting for consciousness computationally: -------------------------------------------------- Accounting for the Computational Basis of Consciousness: A Connectionist Approach Ron Sun Appeared in: Consciousness and Cognition, 1999. http://www.cecs.missouri.edu/~rsun/sun.CC99.ps http://www.cecs.missouri.edu/~rsun/sun.CC99.pdf ABSTRACT This paper argues for an explanation of the mechanistic (computational) basis of consciousness that is based on the distinction between localist (symbolic) representation and distributed representation, the ideas of which have been put forth in the connectionist literature. A model is developed to substantiate and test this approach. The paper also explores the issue of the functional roles of consciousness, in relation to the proposed mechanistic explanation of consciousness. 
The model, embodying the representational difference, is able to account for the functional role of consciousness, in the form of the synergy between the conscious and the unconscious. The fit between the model and various cognitive phenomena and data (documented in the psychological literatures) is discussed to accentuate the plausibility of the model and its explanation of consciousness. Comparisons with existing models of consciousness are made in the end. ----------------------------------------------------------------- Symbol Grounding: A New Look At An Old Idea by Ron Sun Appeared in: Philosophical Psychology, Vol.13, No.2, pp.149-172. 2000. http://www.cecs.missouri.edu/~rsun/sun.PP00.ps http://www.cecs.missouri.edu/~rsun/sun.PP00.pdf ABSTRACT Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or ``objectively". They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. In this paper, we will take a detailed look at this relatively old issue, using a new perspective, aided by our work of computational cognitive model development. Finally, a previous paper on computational aspects of the model: --------------------------------- Autonomous Learning of Sequential Tasks: Experiments and Analyses by Ron Sun, Todd Peterson Appeared in: IEEE Transactions on Neural Networks, Vol.9, No.6, pp.1217-1234. November, 1998. http://www.cecs.missouri.edu/~rsun/sun.tnn98.ps ABSTRACT: This paper presents a novel learning model CLARION, which is a hybrid model based on the two-level approach proposed in Sun (1995). The model integrates neural, reinforcement, and symbolic learning methods to perform on-line, bottom-up learning (i.e., learning that goes from neural to symbolic representations). The model utilizes both procedural and declarative knowledge (in neural and symbolic representations respectively), tapping into the synergy of the two types of processes. It was applied to deal with sequential decision tasks. Experiments and analyses in various ways are reported that shed light on the advantages of the model. =========================================================================== Prof. Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html http://www.cecs.missouri.edu/~rsun/clarion.html =========================================================================== From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: ORGANIZING COMMITTEE Roland Baddeley (Sussex University) William Lowe (Harvard University) John Bullinaria (Reading University) Samantha Harltley (Liverpool University) CONTACT DETAILS For any problems or questions, please send e-mail to Roland Baddeley (ncpw7 at biols.susx.ac.uk) URL: http://www.biols.susx.ac.uk/home/Roland_Baddeley/NCPW7/NCPW7.html AIMS AND OBJECTIVES The Seventh Neural Computation and Psychology Workshop (NCPW7) will be held in Brighton, England from September 17-19, 2001. 
Each year this highly focused conference attracts a select group of (mostly, but not exclusively, European) neural network modellers specifically interested in psychology and neuropsychology. The theme of this year's workshop is neural network modelling in the areas of Cognition and Perception. Between 25 and 30 papers will be accepted as oral presentations. In addition to the high quality of the papers presented, this Workshop is always of limited size and takes place in an informal setting, both of which are explicitly designed to encourage interaction among the researchers present. Although we are particularly interested in models of cognition and perception, we will consider all papers that have something to do with the announced topic, even if rather tangentially. The organisation of the final program will depend on the submissions received. As in previous years, the Workshop will be reasonably small and hopefully very friendly, with no parallel sessions and plenty of time to enjoy Brighton. CALL FOR ABSTRACTS There will be approximately 30-35 paper presentations. Abstracts (approximately 200 words) are due by July 14 and should be emailed to ncpw7 at biols.susx.ac.uk. Notification of acceptance for a paper presentation will be by July 31st. REGISTRATION, ETC. The cost for Registration will be £60.00. This will include breakfast, lunch, tea and biscuits, but not evening meals (with Brighton - why?). Accommodation will be £84 for three nights in 'superior' student accommodation. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: and back again Marc de Kamps and Frank van der Velde Is the integrate-and-fire model good enough? A review Jianfeng Feng ------------------------------------------------------------------ Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp ------------------------------ INNS/ENNS/JNNS Membership includes a subscription to Neural Networks: The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.
---------------------------------------------------------------------------- Membership Type INNS ENNS JNNS ---------------------------------------------------------------------------- membership with $80 or 660 SEK or Y 15,000 [including Neural Networks 2,000 entrance fee] or $55 (student) 460 SEK (student) Y 13,000 (student) [including 2,000 entrance fee] ----------------------------------------------------------------------------- membership without $30 200 SEK not available to Neural Networks non-students (subscribe through another society) Y 5,000 (student) [including 2,000 entrance fee] ----------------------------------------------------------------------------- Institutional rates $1132 2230 NLG Y 149,524 ----------------------------------------------------------------------------- Name: _____________________________________ Title: _____________________________________ Address: _____________________________________ _____________________________________ _____________________________________ Phone: _____________________________________ Fax: _____________________________________ Email: _____________________________________ Payment: [ ] Check or money order enclosed, payable to INNS or ENNS OR [ ] Charge my VISA or MasterCard card number ____________________________ expiration date ________________________ INNS Membership 19 Mantua Road Mount Royal NJ 08061 USA 856 423 0162 (phone) 856 423 3420 (fax) innshq at talley.com http://www.inns.org ENNS Membership University of Skovde P.O. Box 408 531 28 Skovde Sweden 46 500 44 83 37 (phone) 46 500 44 83 99 (fax) enns at ida.his.se http://www.his.se/ida/enns JNNS Membership c/o Professor Tsukada Faculty of Engineering Tamagawa University 6-1-1, Tamagawa Gakuen, Machida-city Tokyo 113-8656 Japan 81 42 739 8431 (phone) 81 42 739 8858 (fax) jnns at jnns.inf.eng.tamagawa.ac.jp http://jnns.inf.eng.tamagawa.ac.jp/home-j.html ----------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: entries of Human Nuclear DNA including a Gene with Complete CDS and with more than one exon have been selected according to assessed selection criteria (file genbank_filtered.inf). 4450 exons and 3752 introns have been extracted from these entries (files exons.seq and introns.seq). Several statistics for such exons and introns (overall nucleotides, average GC content, number of exons/introns including not AGCT bases, number of exons/introns in which the annotated end is not found, exon/intron minimum length, exon/intron maximum length, exon/intron average length, exon/intron length standard deviation, number of introns in which the sequence does not start with GT, number of introns in which the sequence does not end with AG) are reported (files exons.stat and introns.stat). Then 3762 + 3762 donor and acceptor sites have been extracted as windows of 140 nucleotides around each splice site. After discarding sequences not including canonical GT-AG junctions (176 +191), including insufficient data (not enough material for a 140 nucleotide window) (590+547), and including not AGCT bases (30+32), there are 2955+2992 windows (files GT_true.seq and AG_true.seq). Information and several statistics about the splice sites extraction are reported (files GT_true.inf, AG_true.inf, GT_true.stat, and AG_true.stat). 
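To make the window-extraction step just described more concrete, here is a toy sketch of pulling fixed-length windows around candidate GT (donor) and AG (acceptor) dinucleotides out of a DNA string. It is an illustration under assumed conventions (window centring, order of the filters) and is not the actual procedure used to build HS3D, whose exact selection criteria are documented in the .inf files mentioned above.

    # Toy extraction of 140-nucleotide windows around candidate splice-site
    # dinucleotides.  Illustrative only -- not the HS3D construction pipeline.
    import random

    WINDOW = 140
    HALF = WINDOW // 2

    def windows_around(seq, dinucleotide):
        """Return (position, window) pairs for each occurrence of the dinucleotide
        that leaves room for a full window and contains only ACGT bases."""
        hits = []
        for i in range(len(seq) - 1):
            if seq[i:i + 2] == dinucleotide:
                start, end = i - HALF + 1, i + HALF + 1     # 140-nt window around position i
                if start < 0 or end > len(seq):             # insufficient data for a full window
                    continue
                window = seq[start:end]
                if set(window) <= set("ACGT"):              # drop windows with non-ACGT bases
                    hits.append((i, window))
        return hits

    random.seed(0)
    seq = "".join(random.choice("ACGT") for _ in range(2000))   # stand-in for a GenBank entry
    donor_candidates = windows_around(seq, "GT")
    acceptor_candidates = windows_around(seq, "AG")
    print(len(donor_candidates), "GT windows,", len(acceptor_candidates), "AG windows")

In this toy setting every GT or AG is treated as a candidate; HS3D additionally separates annotated true sites from false ones, as the surrounding text describes.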
Finally, there are 287,296+348,370 windows of false splice sites, selected by searching for canonical GT-AG pairs in non-splicing positions. The false sites in a range of +/- 60 from a true splice site are marked as proximal (files GT_false.seq and AG_false.seq) (Related information: GT_false.inf and AG_false.inf). HS3D is available at the Web server of the University of Sannio http://www.sci.unisannio.it/docenti/rampone/ ----------- Salvatore Rampone Facoltà di Scienze MM.FF.NN. and INFM Università del Sannio Via Port'Arsa 11 I-82100 Benevento ITALY E-mail: rampone at unisannio.it From esann at dice.ucl.ac.be Mon Jun 5 16:42:55 2006 From: esann at dice.ucl.ac.be (esann) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From cmbishop at microsoft.com Mon Jun 5 16:42:55 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: RESEARCH POSITION AT MSR CAMBRIDGE A research position, either at researcher level (permanent position) or at postdoc level (2 years fixed term), according to qualifications and experience, is available at MSR Cambridge UK. In addition there is an associated Research Fellowship at Clare Hall College, near the lab. The research area is Machine Learning and Perception (including computer vision, signal processing, pattern recognition and probabilistic inference). Further details are at http://www.research.microsoft.com/mlp/. From scheler at ICSI.Berkeley.EDU Mon Jun 5 16:42:55 2006 From: scheler at ICSI.Berkeley.EDU (scheler@ICSI.Berkeley.EDU) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Parallel Paper Submission Message-ID: We would like to suggest the adoption of a general policy in the Neural Network Community of unlimited parallel paper submission. It is the task of editors and reviewers then to accept or reject papers, and the liberty of authors to select the journal where they want to publish. Gabriele Dorothea Scheler Johann Martin Philipp Schumann ............... From jlm at cnbc.cmu.edu Mon Jun 5 16:42:55 2006 From: jlm at cnbc.cmu.edu (Jay McClelland) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Parallel Paper Submission Message-ID: > > > I wholeheartedly agree with Tom. Parallel submission would create > a huge waste of reviewer time, and would lead to many bad feelings > if a paper is accepted to two outlets. Obviously the problem with > the sequential approach is that review turnaround can be slow. This > is an issue that we all can and should work on. > > -- Jay McClelland
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject In-Reply-To: References: Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: finally come around to the crux of the biggest problem with the present reviewing system: not enough personal incentive for reviewers to do a good job (or to do it quickly, for that matter). While the discussion started out being mostly about speed to publication, the issue of review quality has persistently resurfaced. And while many people have proposed some sort of free-market solution to letting papers compete, what I think would be more helpful would be turning some economic and motivational scrutiny on the reviews themselves. Reviewing is hard, and it should be rewarded, and the reward should ideally be somewhat proportional to quality. Right now the reward is mostly altruism and personal pride in doing good work. There is a little bit of reputation involved in that the editors and sometimes the other reviewers see the results and can attach a name to them, but this is a weak reward signal because of how narrow the audience is. The economic currency of academia is reputation. (There was a short column or article about this somewhere, maybe Science, but I don't remember.) The major motivation for doing good papers in the first place is the effect it has on your reputation. (These papers are part of your "professional voice" as Phil Agre's Networking on the Network document puts it.) This in turn affects funding, job hunting, tenure decisions, etc., so there is plenty of motivation to do it well. It would be nice if there were a way to create a stronger incentive (reward signal) for review quality. This is not too absurd, as it seems only a slight jump away from similar standard practices. Part of a review is quality assessment, but tied in with that is advice on how to improve the work. Advice in some other contexts is amply rewarded in reputational currency. Advisors are partly judged by the accomplishments of students that they have advised. People who give advice on how to improve a paper are often mentioned in an acknowledgements section. Often the job they do is very similar to that of a reviewer; it just isn't coordinated by an editor. Sometimes such people become co-authors and then they get the full benefit of reputational reward for their efforts. Even anonymous reviewers are thanked in acknowledgements sections, though their reputations are not aided by this. Sometimes the line between the contributions of a reviewer and an author is somewhat blurry. Many people probably know of examples where a particularly helpful anonymous reviewer contributed more to a paper than someone who was, due to some obligation, listed as a coauthor. But many reviews are quite unhelpful or are way off on the quality assessment. It would improve the quality more consistently if the reviewer got some academic reputational currency out of doing good reviews (and corresponding potential to look foolish for being very wrong). How best to change the structure of the reviewing system to accomplish this is an open question. Someone mentioned a journal where reviews are published with the articles. This has some benefit, but has some problems. Reviews for articles that are completely rejected are not published.
We don't want people to only agree to review articles they think will get published. Also, while publishing reviews gives a little incentive not to screw up, to fully motivate quality such things would have to become regularly scrutinized in tenure and job decisions as an integral part of the overall publication record. But the field would have to be careful to separate out the quality of the review from the quality and fame of the reviewed material itself, again to not encourage jockeying to review only the papers that look to be the most influential. Clearly I don't have all the answers, but I advocate looking at the problem in terms of economic incentives, in the same way that economists look at other incentive systems such as incentive stock options for corporate employees, which serve a useful purpose but have well-understood drawbacks from an incentive perspective. Note that review quality is a somewhat separate issue than the also important filtering and attention selection issue, such as the software that Geoff Hinton requested. Even a perfect personalized selection mechanism would not completely replace the benefits of a reviewing system. For example, reviews still help authors to improve their work, and thereby the entire field. And realistically no such perfect selection mechanism will ever exist, so selection will always be greatly aided by quality improvement and filtering at the source side. Thus we should be interested in structural mechanisms to improve the quality of reviews (as well as in useful selection mechanisms to tell us what to read). -Karl ------------------------------------------------------------------------------- Karl Pfleger kpfleger at cs.stanford.edu www-cs-students.stanford.edu/~kpfleger/ ------------------------------------------------------------------------------- > From: Bob Damper > > This shortage of good qualified referees is going to continue all the > time there is no tangible reward (other than a warm altruistic feeling) > for the onerous task of reviewing. So, as many others have pointed > out, parallel submissions will exacerbate this situation rather than > improve it. Not a good idea! > > Bob. > > On Tue, 27 Nov 2001, rinkus wrote: > > > > In many instances a particular student may have particular knowledge and > > insight relevant to a particular submission but the proper model here is > > for the advertised reviewer (i.e., whose name appears on the editorial > > board of the publication) to consult with the student about the > > submission (and this should probably be in an indirect fashion so as to > > protect the author's identity and ideas) and then write the review from > > scratch himself. The scientific review process is undoubtedly worse off > > to the extent this kind of accountability is not ensured. We end up > > seeing far too much rehashing of old ideas and not enough new ideas. > > > > Rod Rinkus > > From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Integration and interpolation processes in vision KEYNOTE LECTURE: Erkki Oja (Helsinki University of Technology) Independent component analysis: Recent advances Friday, May 31, 2002 SYMPOSIUM ON LOCALIST AND DISTRIBUTED REPRESENTATIONS IN PERCEPTION AND COGNITION Edward Callaway (The Salk Institute) Cell type specificity of neural circuits in visual cortex James L. 
McClelland (Carnegie Mellon University) Varieties of distributed representation: A complementary learning systems perspective Stephen Grossberg (Boston University) Laminar cortical architecture in perception and cognition Jeffrey Bowers (University of Bristol) Localist coding in neural networks for visual word identification Randall O'Reilly (University of Colorado) Learning and memory in the hippocampus and neocortex: Principles and models Michael Page (University of Hertfordshire) Modeling memory for serial order Saturday, June 1, 2002 CORTICAL CODING AND SENSORY-MOTOR CONTROL: Dana Ballard (University of Rochester) Distributed synchrony: A general model for cortical coding Stephen G. Lisberger (University of California School of Medicine) The inner workings of a cortical motor system Daniel Bullock (Boston University) Neural dynamics of ocular tracking, interceptive reaching, and reach/grasp coordination RECOGNITION, MEMORY, AND REWARD: Edmund Rolls (Oxford University) Neural mechanisms involved in invariant object recognition Lynn Nadel (University of Arizona) The role of the hippocampal complex in recent and remote episodic and semantic memory Wolfram Schultz (University of Cambridge) Multiple reward systems in the brain KEYNOTE LECTURE: Daniel Schacter (Harvard University) The seven sins of memory: A cognitive neuroscience perspective CALL FOR ABSTRACTS Session Topics: * vision * spatial mapping and navigation * object recognition * neural circuit models * image understanding * neural system models * audition * mathematics of neural systems * speech and language * robotics * unsupervised learning * hybrid systems (fuzzy, evolutionary, digital) * supervised learning * neuromorphic VLSI * reinforcement and emotion * industrial applications * sensory-motor control * cognition, planning, and attention * other Contributed abstracts must be received, in English, by January 31, 2002. Notification of acceptance will be provided by email by February 28, 2002. A meeting registration fee must accompany each Abstract. See Registration Information below for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Registration fees of accepted Abstracts will be returned on request only until April 19, 2002. Each Abstract should fit on one 8.5" x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), mailing, and email address(es) should begin each Abstract. An accompanying cover letter should include: Full title of Abstract; corresponding author and presenting author name, address, telephone, fax, and email address; requested preference for oral or poster presentation; and a first and second choice from the topics above, including whether it is biological (B) or technological (T) work. Example: first choice: vision (T); second choice: neural system models (B). (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, VCR, and LCD projector facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. Accepted Abstracts will be printed in the conference proceedings volume. No longer paper will be required. 
The original and 3 copies of each Abstract should be sent to: Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. REGISTRATION INFORMATION: Early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to the address above. If paying by credit card, mail as above, or fax to (617) 353-7755, or email to cindy at cns.bu.edu. The registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. STUDENT FELLOWSHIPS: Fellowships for PhD candidates and postdoctoral fellows are available to help cover meeting travel and living costs. The deadline to apply for fellowship support is January 31, 2002. Applicants will be notified by email by February 28, 2002. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Fellowship applicants who also submit an Abstract need to include the registration fee with their Abstract submission. Those who are awarded fellowships are required to register for and attend both the conference and the day of tutorials. Fellowship checks will be distributed after the meeting. REGISTRATION FORM Sixth International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215 Tutorials: May 29, 2002 Meeting: May 30 - June 1, 2002 FAX: (617) 353-7755 http://www.cns.bu.edu/meetings/ (Please Type or Print) Mr/Ms/Dr/Prof: _____________________________________________________ Name: ______________________________________________________________ Affiliation: _______________________________________________________ Address: ___________________________________________________________ City, State, Postal Code: __________________________________________ Phone and Fax: _____________________________________________________ Email: _____________________________________________________________ The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks. CHECK ONE: ( ) $85 Conference plus Tutorial (Regular) ( ) $55 Conference plus Tutorial (Student) ( ) $60 Conference Only (Regular) ( ) $40 Conference Only (Student) ( ) $25 Tutorial Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). 
Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ From jbf_w_s_hunter at hotmail.com Mon Jun 5 16:42:55 2006 From: jbf_w_s_hunter at hotmail.com (Jose' B. Fonseca) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: extremely large search spaces when viewed in terms of their basic input features. Examples include learning useful behavior for a robot that receives a continuous stream of video input, or learning to play the game of Go. For such problems, an unbiased search is infeasible, and a bias must be employed that focuses the search within the input space so that the size of the problem is effectively reduced. Letting representations develop as part of learning may be viewed as a way of establishing such a bias. Submissions are encouraged on issues including, but not limited to: * How can large search spaces be reduced by introducing a helpful bias? Existing approaches relevant to this include bias learning and learning to learn. * How can related problems become a source for helpful biases? This question is studied in multitask learning, sequential learning, many-layered learning, and lifelong learning. * Architectures for variable representations. * How may representations be used, and when searching the space of representations, what should their evaluation function be? During the revival of neural network research in the mid 1980's, it became clear that internal representations can be learned based on a global feedback signal. However, while this signal is appropriate as an evaluation for a complete system, the representations such systems employ may require a different evaluation: * assessing modularity: Can a representation be used in multiple contexts? Structural vs. functional modularity. * assessing value: How useful is the information a representation extracts to the construction of solutions? This is a credit assignment question, and recent work on establishing stable economies of value may shed new light on this. * Statistical techniques for assessing modularity. The modularity of a representation relates to a reduced dependency on elements that are not part of the representation. * Bayesian techniques for learning representations. * The relationship between statistical techniques and other approaches to credit assignment. * Hierarchy. The size of input spaces than can be handled may be scaled up by constructing representations from existing representations, leading to a hierarchy of representations. * Practical methods for hierarchical Bayesian inference. * Extracting symbols from sensors. How can raw sensor information be used to extract compact representations or symbols? * How may representations and the solutions employing them be developed simultaneously? One approach to this question is studied in the sub-discipline of evolutionary computation known as co-evolution. * Methods for constructive induction. * Development of theoretical terms through, for example, predicate invention. 
* Emerging issues in evolutionary and computational biology on the importance of change of representation in gene expression. * Change of representation that occurs over the lifetime of an embedded agent. WORKSHOP FORMAT The workshop will be organized so as to maximize interaction, discussion, and exchange of ideas. The day will start with an invited talk and will be followed by a series of paper presentations grouped by topic. Each presentation will be short, e.g. 10 or 15 minutes, with 5 minutes allotted to questions on the content of the talk. At the end of each group of papers the presenters will participate in a panel discussion to answer questions of a more general sort related to the topic and the relationship between the papers in that group. We will include a panel discussion on emerging problems in the area of development of representation, and conclude the day by inviting all participants to join in an open discussion with the goal of identifying the main themes of the day and establishing a research agenda. PROGRAM CO-CHAIRS Edwin de Jong Computer Science Department Brandeis University MS018 Waltham, MA 02454-9110 1.781.736.3366 edwin at cs.brandeis.edu Tim Oates CSEE Department University of Maryland Baltimore County 1000 Hilltop Circle Baltimore, MD 21250 1.410.455.3082 oates at cs.umbc.edu PROGRAM COMMITTEE Jonathan Baxter (WhizBang! Labs) Rich Caruana (Cornell) Rod Grupen (University of Massachusetts, Amherst) Tom Heskes (University of Nijmegen, The Netherlands) Leslie Kaelbling (MIT) Justus Piater (INRIA Rhone-Alpes, France) Jude Shavlik (University of Wisconsin, Madison) Paul Utgoff (University of Massachusetts, Amherst) IMPORTANT DATES Deadline for submissions: April 22 Notification to participants: May 10 Camera ready copy due: May 31 SUBMISSION INFORMATION Submissions may be either a full technical paper (up to 8 pages) or a position statement in the form of an extended abstract (one or two pages). Electronic submissions (PostScript, PDF, or HTML) are preferred and should be sent by April 22 to either of the co-chairs (Edwin de Jong at edwin at cs.brandeis.edu or Tim Oates at oates at cs.umbc.edu). Please format your submission according to the ICML-2002 formatting guidelines. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Integration and interpolation processes in vision KEYNOTE LECTURE: Erkki Oja (Helsinki University of Technology) Independent component analysis: Recent advances Friday, May 31, 2002 SYMPOSIUM ON LOCALIST AND DISTRIBUTED REPRESENTATIONS IN PERCEPTION AND COGNITION Edward Callaway (The Salk Institute) Cell type specificity of neural circuits in visual cortex James L. McClelland (Carnegie Mellon University) Semantic cognition: A parallel-distributed processing approach Stephen Grossberg (Boston University) Laminar cortical architecture Jeffrey Bowers (University of Bristol) Localist coding in neural networks for visual word identification Randall O'Reilly (University of Colorado) Learning and memory in the hippocampus and neocortex: Principles and models Michael Page (University of Hertfordshire) Modeling memory for serial order Saturday, June 1, 2002 CORTICAL CODING AND SENSORY-MOTOR CONTROL: Dana Ballard (University of Rochester) Distributed synchrony Stephen G. 
Lisberger (University of California School of Medicine) The inner workings of a cortical motor system Daniel Bullock (Boston University) Neural dynamics of ocular tracking, interceptive reaching, and reach/grasp coordination RECOGNITION, MEMORY, AND REWARD: Edmund Rolls (Oxford University) Neural mechanisms involved in invariant object recognition Lynn Nadel (University of Arizona) The hippocampal formation and episodic memory Wolfram Schultz (University of Cambridge) Multiple reward signals in the brain KEYNOTE LECTURE: Daniel Schacter (Harvard University) The seven sins of memory: A cognitive neuroscience perspective REGISTRATION FORM Sixth International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215 Tutorials: May 29, 2002 Meeting: May 30 - June 1, 2002 FAX: (617) 353-7755 http://www.cns.bu.edu/meetings/ (Please Type or Print) Mr/Ms/Dr/Prof: _____________________________________________________ Name: ______________________________________________________________ Affiliation: _______________________________________________________ Address: ___________________________________________________________ City, State, Postal Code: __________________________________________ Phone and Fax: _____________________________________________________ Email: _____________________________________________________________ The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks. CHECK ONE: ( ) $85 Conference plus Tutorial (Regular) ( ) $55 Conference plus Tutorial (Student) ( ) $60 Conference Only (Regular) ( ) $40 Conference Only (Student) ( ) $25 Tutorial Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: with Neural Networks by Mahesan Niranjan, Sheffield University, UK From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior edited by Bridget Hallam, Dario Floreano, John Hallam, Gillian Hayes, and Jean-Arcady Meyer The Simulation of Adaptive Behavior Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, computer science, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow adaptation and survival in uncertain environments. 
The work presented focuses on robotic and computational experimentation with well-defined models that help to characterize and compare alternative organizational principles or architectures underlying adaptive behavior in both natural animals and synthetic animats. Bridget Hallam is Guest Researcher at the University of Southern Denmark. Dario Floreano is Professor of Evolutionary and Adaptive Systems at the Swiss Federal Institute of Technology. John Hallam and Gillian Hayes are Senior Lecturers in the Institute of Perception, Action, and Behavior at the University of Edinburgh. Hallam is also Guest Professor at the University of Southern Denmark. Jean-Arcady Meyer is Director of the AnimatLab at the Labortatoire d'Informatique de Paris 6. 8 1/2 x 11, 500 pp., paper, ISBN 0-262-58217-1 Complex Adaptive Systems series A Bradford Book ______________________ David Weininger Associate Publicist The MIT Press 5 Cambridge Center, 4th Floor Cambridge, MA 02142 617 253 2079 617 253 1709 fax http://mitpress.mit.edu From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Pages 397-428 Kerstin Dautenhahn, Bernard Ogden and Tom Quick http://www.sciencedirect.com/science/article/B6W6C-45JGW5T-1/1/4ebe8ecc97061de118607f87cbd9916c The physical symbol grounding problem, Pages 429-457 Paul Vogt http://www.sciencedirect.com/science/article/B6W6C-45JY928-2/1/9f933ec9575e320bd3e5df298a18f312 On the dynamics of robot exploration learning, Pages 459-470 Jun Tani and Jun Yamamoto http://www.sciencedirect.com/science/article/B6W6C-45NPDR9-1/1/30232225aeb29ea6f7daea13ed03ffa1 Simulating activities: Relating motives, deliberation, and attentive coordination, Pages 471-499 William J. Clancey http://www.sciencedirect.com/science/article/B6W6C-45JY928-1/1/a63c8d94790efa33f82067ac161aad88 Activity organization and knowledge construction during competitive interaction in table tennis, Pages 501-522 Carole Seve, Jacques Saury, Jacques Theureau and Marc Durand http://www.sciencedirect.com/science/article/B6W6C-45KSPCF-2/1/92cbefc3162526340a56011455284bbe Situatedness in translation studies, Pages 523-533 Hanna Risku http://www.sciencedirect.com/science/article/B6W6C-45HFF6Y-1/1/f294067d693ab4a387717992ba06dbc0 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Pages 535-554 Wolff-Michael Roth http://www.sciencedirect.com/science/article/B6W6C-45HWNG0-2/1/ead6e8b30a4465db2274827f692086ad =============================================================================== If you have questions about ScienceDirect, please locate your nearest Help Desk at http://www.info.sciencedirect.com/contacts. 
=============================================================================== See the following journal Web pages for subscription information for the journal Cognitive Systems Research: http://www.cecs.missouri.edu/~rsun/journal.html http://www.elsevier.com/locate/cogsys =================================================================== Professor Ron Sun, Ph.D CECS Department, 201 EBW phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun =================================================================== From cmbishop at microsoft.com Mon Jun 5 16:42:55 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Postdoctoral Research Fellowship in Adaptive Computing DARWIN COLLEGE CAMBRIDGE Microsoft Research Fellowship The Governing Body of Darwin College Cambridge and Microsoft Research jointly invite applications for a stipendiary Research Fellowship supporting research in the field of adaptive computing (including topics such as pattern recognition, probabilistic inference, statistical learning theory and computer vision). Applicants should hold a PhD or should be expecting to have submitted their thesis prior to commencement of the Fellowship. The Fellowship will be tenable for two years commencing 1 October 2003 or on a date to be agreed. The successful candidate will work at the Microsoft Research Laboratory in Cambridge. Information about the laboratory is available from http://research.microsoft.com/cambridge/. Further details are available from the College website http://www.dar.cam.ac.uk or the Master's Secretary, Darwin College, Cambridge CB3 9EU. The closing date for applications is 10 January 2003. - The College follows an equal opportunities policy - Full information: DARWIN COLLEGE CAMBRIDGE Microsoft Research Fellowship The Governing Body of Darwin College Cambridge, and Microsoft Research Cambridge jointly invite applications for a stipendiary Research Fellowship supporting research in the field of adaptive computing (including topics such as pattern recognition, probabilistic inference, statistical learning theory and computer vision). Eligibility Men and women graduates of any university are eligible to apply, irrespective of age, provided they have a doctorate or an equivalent qualification, or expect to have submitted their thesis before taking up the Fellowship. Tenure The Fellowship will be tenable for two years commencing l October 2003 or on a date to be agreed. Duties The successful candidate will engage in research full-time at the Microsoft Research Laboratory in Cambridge. The Fellow will be a member of the Governing Body of Darwin College and will be subject to the Statutes and Ordinances of the College which may be seen on request to the Bursar. The Statutes include the obligation to reside in or near Cambridge for at least two-thirds of each University term, but the Governing Body will normally excuse absences made necessary by the nature of the research undertaken. Stipend and Emoluments The stipend will be dependent upon age and experience. Membership of the Universities' Superannuation Scheme is optional. In addition the Fellow will be able to take seven meals per week at the College table free of charge and additional meals at his or her own expense. 
Guests may be invited to all meals (within the limits of available accommodation), ten of them free of charge within any quarter of the year. College accommodation will be provided, subject to availability, or an accommodation allowance will be paid in lieu. In addition to a salary the Fellowship provides funding for conference participation. Applications Applications should reach the Master, Darwin College, Cambridge CB3 9EU by 10 January 2003. They should be typed and should include SIX copies of (1) a curriculum vitae, (2) an account, in not more than 1000 words, of the proposed research, including a brief statement of the aims and background to it, (3) the names and addresses of three referees (including telephone, fax and e-mail co-ordinates), WHO SHOULD BE ASKED TO WRITE AT ONCE DIRECT TO THE MASTER indicating the originality of the work and the candidate's scholarly potential, and (4) a list of published or unpublished work that would be available for submission if requested. Testimonials should not be sent. Short-listed candidates may be asked to make themselves available for interview at Darwin College on a date to be arranged in mid-March: election will be made as soon as possible thereafter. In certain circumstances travelling expenses for overseas interviewees may be covered. The College follows an equal opportunities policy From aweigend at amazon.com Mon Jun 5 16:42:55 2006 From: aweigend at amazon.com (Weigend, Andreas) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Do you want to build quantitative models millions of people will use, based on data from the world's largest online laboratory? Are you passionate about formulating relevant questions and producing solutions to initially ill-defined problems? Do the challenges and opportunities of terabytes of data excite you? Can you think abstractly and apply your ideas to the real world? Can you contribute to the big picture and are not afraid to handle the details? Amazon.com is solving incredibly interesting machine learning problems in areas ranging from pricing to personalization, from fraud detection to warehouse picking. Emphasizing measurement and analytics, we build and automate solutions that leverage the Web's scale and instant feedback. We are looking for people with the right blend of vision, intellectual curiosity, and hands-on skills, who want to be part of a highly visible, entrepreneurial team at company headquarters in Seattle. Ideal candidates will have a track record of creating innovative solutions, and typically a Ph.D. in computer science, physics, statistics, or electrical engineering. Significant research experience is desired in fields including active learning, probabilistic graphical models and Bayesian networks, data mining and visualization, Web search and information retrieval, judgment and decision making, consumer modeling, and behavioral economics. If this position excites you, please send your resume, clearly indicating your interests and strengths, to aweigend at amazon.com. Thank you. Andreas S. Weigend, Ph.D. | Chief Scientist, Amazon.com | +1 (917) 697-3800 | www.weigend.com=20 From cburges at microsoft.com Mon Jun 5 16:42:55 2006 From: cburges at microsoft.com (Chris J.C. Burges) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I strongly support the idea of introducing double blind reviewing at NIPS. If bias exists it is insidious and corrosive. Further since reviewing is very subjective, detecting bias can be very hard. 
Worse, it can occur unconsciously. A close friend recently told me a story about his reviewing two similar papers, one from a group he liked and one from a group whose work he did not respect as much. He started with the paper from the second group, and half way through, he'd already formed strong negative opinions on the work. But then he was shocked to discover that the paper was in fact from the first group. He felt that the incident uncovered a bias in his reviewing of which he was not previously aware. Let's look at the objections, so far, to blind reviewing: John Lazzaro uses the example of Jan Hendrik Schon. John is proposing rejecting the paper due to the previous history of the author. This is exactly the kind of problem blind reviewing addresses. Suppose that Schon has mended his ways and his submission is actually ground breaking, high quality research. Do you want to reject it out of hand? No, you want an unbiased, peer reviewed assessment of it. The problem of vetoing a given author's work should be decided by the editors, based on past history, not by the reviewers - unless they themselves find fraud in the submission. Grace objects that writing a paper so as not to give a clue as to your identity distorts the paper. Also she points out that many authors put their papers on their home page, so digging up the authorship of the submission would be easy. Regarding both of these points: even with blind reviewing, authors can still leave a trail of bread crumbs as to their identity if they wish. No one is suggesting that they be forced to make their identity as hard as possible to discern. What is being suggested is that a barrier be erected, so that bias in a review would have to be a much more conscious act than it is now. I don't have to put the paper on my home page if I feel that bias may exist. The only other objection so far is that blind reviewing is costly. But that cost is hugely reduced with electronic submissions. It need not be cumbersome any more. Also, coming up with examples of journals / conferences that do not do blind reviewing is not convincing; one can equally well come up with ones that do, e.g. ICCV, CHI, JASA (according to http://www.acm.org/sigmod/record/issues/0003/chair.pdf , about 20% of ACM sponsored conferences are double blind, so it can't be that hard). -- Chris Burges From cburges at microsoft.com Mon Jun 5 16:42:55 2006 From: cburges at microsoft.com (Chris J.C. Burges) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: I was wrong - double blind reviewing does require that all authors do their best to remain anonymous, in order to prevent positive (e.g. authors using their fame, or that of their institution, to get an easier review) as well as negative bias. However re. Grace's point that authors like to put their papers up on Web pages before publication - this year NIPS had a wonderful feature of making draft papers available electronically after they were accepted. So given this, authors would have to wait at most a couple of months. -- Chris Burges From wahba at stat.wisc.edu Mon Jun 5 16:42:55 2006 From: wahba at stat.wisc.edu (Grace Wahba) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NIPS & double blind reviewing Message-ID: > > Have you ever tried to write a paper without giving any > clue to your identity? ("In xxx we proved yyy and in this > paper we extend those results"). > It can seriously distort the > paper. Furthermore, many (most?)
people submitting to > NIPS put their paper on their home page and even circulate > it on this list, so a reviewer would have no trouble > finding out who the author was by using, for instance, > google. I fail to see any positives to blind reviewing > and a lot of negatives. > From Barak Mon Jun 5 16:42:55 2006 From: Barak (Barak) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NIPS & double blind reviewing Message-ID: I've heard three objections to blinded reviews. To my mind, none of them quite hold water. OBJECTION 1: It is hard to conceal the authors' identity against the industrious/perceptive/clueful reviewer. Sometimes clues are unavoidable. WHY IT DOESN'T HOLD WATER: So what? In that case blinding isn't any different from the current situation, so why are you objecting? Not all reviewers have these abilities, so blinding will work completely on them. Besides, even the most perceptive reviewer won't figure it out for all papers, only for some. And even when they think they've figured it out, being 80% sure of the author is, psychologically, very different from being 100% sure. Plus, starting an active search for the author's identity might give a reviewer pause ... OBJECTION 2: Sometimes the reviewer actually needs to know the author, eg for theory papers where whether a proof sketch is believable depends on the author. WHY IT DOESN'T HOLD WATER: Err, really? Well, if the reviewer feels themselves to be in that situation, they can either say so in the review, or ask the program committee for the author's name with a brief explanation as to why. It certainly seems healthy, particularly in this (surely quite rare, and therefore low amortized overhead) situation, to have the first pass through the paper be blind! OBJECTION 3: The author might be a well known plagiarist/crackpot/liar. WHY IT DOESN'T HOLD WATER: This is the program committee's job. Anyway it would be easy enough to reveal the authors' names to the reviewers *after* they have their reviews in, so they can bring such an extraordinary situation to the program committee's attention. From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: REASON * November 1997 Orchestral Maneuvers By Nick Gillespie A recent study from the National Bureau of Economic Research applies the concept of a level playing field to the symphonic stage. In "Orchestrating Impartiality," economists Claudia Goldin and Cecelia Rouse demonstrate that female orchestra musicians have benefitted hugely from the use of "blind" auditions, in which candidates perform out of the sight of evaluators. In 1970 female musicians made up only 5 percent of players in the country's top orchestras... But beginning in the '70s and '80s, more and more of the orchestras switched to blind auditions, partly to avoid charges of such bias. Female musicians currently make up 25 percent of the "Big Five." Through an analysis of orchestral management files and audition records, Goldin and Rouse conclude that blind auditions increased by 50 percent the probability that a woman would make it out of early rounds. And, they say, the procedure explains between 25 percent and 46 percent of the increase in women in orchestras from 1970 to 1996. 
From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems. Such insights are pertinent to experimental and computational neuroscientists and to engineers, physicists, and computer scientists interested in how their quantitative tools relate to the brain. The authors present three principles of neural engineering based on the representation of signals by neural ensembles, transformations of these representations through neuronal coupling weights, and the integration of control theory and neural dynamics. Through detailed examples and in-depth discussion, they make the case that these guiding principles constitute a useful theory for generating large-scale models of neurobiological function. A software package written in MatLab for use with their methodology, as well as examples, course notes, exercises, documentation, and other material, are available on the Web. "In this brilliant volume, Eliasmith and Anderson present a novel theoretical framework for understanding the functional organization and operation of nervous systems, from the cellular level to the level of large-scale networks" John P. Miller, Center for Computational Biology, University of Montana "This book represents a significant advance in computational neuroscience. Eliasmith and Anderson have developed an elegant framework for understanding representation, computation, and dynamics in neurobiological systems. The book is beautifully written, and it should be accessible to a wide variety of readers." Bruno A. Olshausen, Center for Neuroscience, University of California, Davis "From principle component analysis to Kalman filters, information theory to attractor dynamics, this book is a brilliant introduction to the mathematical and engineering methods used to analyze neural function." Leif Finkel, Neuroengineering Research Laboratories, University of Pennsylvania http://mitpress.mit.edu/catalog/item/default.asp?sid=29DD45EE-EFE7-4C7F-BCF3 -40C91D6B2635&ttype=2&tid=9538 From cburges at microsoft.com Mon Jun 5 16:42:55 2006 From: cburges at microsoft.com (Chris J.C. Burges) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Sue Becker writes: >> Two of the key factors NIPS reviewers are asked to comment >> on are a paper's significance and originality. Very often >> work is submitted to NIPS that is only a marginal >> advancement over the author's previous work, or worse yet, >> the same paper has already appeared at another conference or >> in a journal. In the course of reviewing for NIPS I have >> often looked at an author's web page, past NIPS proceedings >> etc to assess the closeness to the author's previously >> published work. Double-blind reviewing would make it much >> more difficult to detect this sort of thing. 
To me, this is the first compelling argument against double blind reviewing put forward in the debate so far (it is not in Dale Schuurmans' list). However I think the issue Sue raises can be addressed as follows. Require that, if authors have closely related work that has been published or submitted elsewhere, they send in a copy of the single closest work to that submitted to NIPS, together with a VERY brief description of how the NIPS submission is different. The session chair (not the reviewer) then incorporates this into his/her decision. If an author abuses this trust, a penalty can be applied, much as the IEEE applies a (severe) penalty in similar circumstances (immediate rejection, immediate withdrawal of all submitted manuscripts by any of the authors, and prohibitions against all of the authors in any IEEE publication for one year: see e.g. http://www.ieee.org/organizations/society/sp/infotsa.html ). Yes, this requires a bit more effort on the session chair's part (although only some submissions will need to do this). But actually whether or not double blind reviewing is adopted, I think this is a separate issue, and maybe a good idea for NIPS anyway. In previous years, NIPS encouraged submission of work that had appeared in part elsewhere, provided it would be new and interesting to the NIPS community. This year a different policy was adopted, requiring stricter originality, and perhaps it will need some enforcement policy for it to work. After all, regardless of the blind reviewing issue, Sue's method - checking up on the author's web page - won't work for people who don't have web pages or who do not put recently submitted material on their web page (as many people don't). -- Chris Burges From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: For posting in connectionists emailing list, thank you. ----------------------------------------------------------------------------- Research Positions in Bioinformatics Several research associate (RA) and research fellow (RF) positions are available at the newly formed BioInformatics Research Centre (BIRC), Nanyang Technological University, Singapore. The current research projects at BIRC are in the areas of · Comparative genomics · Gene expression data analysis · Protein structure prediction · Neuroinformatics M.Sc. or Ph.D. degree in a related field is required for the positions. Salary ranges from S$3,000-5,000 per month, depending on qualifications. Interested candidates should email their CVs to BIRC (birc at ntu.edu.sg), indicating their interest. Preference shall be given to those having experience in the above areas. Only selected candidates will be asked to submit formal applications. Sincerely, -- Jagath C. Rajapakse, Ph.D., SrMIEEE Deputy Director, BioInformatics Research Centre (BIRC) Associate Professor, School of Computer Engineering Nanyang Technological University Block N4, 2a-32 Nanyang Avenue Singapore 639798 Phone: +65 67905802; Fax: +65 67926559 Email: asjagath at ntu.edu.sg URL: http://www.ntu.edu.sg/home/asjagath/home.htm From josephsirosh at fairisaac.com Mon Jun 5 16:42:55 2006 From: josephsirosh at fairisaac.com (Sirosh, Joseph (Joe)) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Fair Isaac (NYSE: FIC) is cultivating a center of excellence in analytics covering the broad fields of machine learning, statistics and artificial intelligence.
We have several open positions for highly talented scientists in these fields in our Advanced Technologies (AT) unit based in San Diego. AT performs research into frontier applications of machine learning and artificial intelligence in predictive analytics and scoring, intelligent agents, information retrieval, bioinformatics, video analysis, natural language question answering, uncertain reasoning, and various applications of statistical pattern recognition. Interested candidates must have an MS/Ph.D. in Computer Science, Engineering, Mathematics, Physics, or Statistics and a strong background in machine learning and demonstrable past successes. Three years' experience with industry applications is desired. Excellent oral and written communication skills required. Compensation based on achievement, seniority & experience. Fair Isaac is the preeminent provider of creative analytic applications. We offer attractive compensation packages including stock options, stock purchase plans, 401(k), medical & other benefits. Website: http://www.fairisaac.com; e-mail: dawnridz at fairisaac.com. Address: Fair Isaac & Company, 5935 Cornerstone Ct. West, San Diego, CA 92121. FAX: 858-799-8062. Please reference job posting number 1821 or 1824. ============================================== Joseph Sirosh, PhD Fair Isaac & Company Vice President 5935 Cornerstone Court W Advanced Technology San Diego, CA 92121 Phone: (858) 799 8320 Main: (858) 799 8000 Fax: (858) 799 2850 http://www.fairisaac.com From Peter.Andras at newcastle.ac.uk Mon Jun 5 16:42:55 2006 From: Peter.Andras at newcastle.ac.uk (Peter Andras) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Research Associate, School of Computing Science, £18,265 - £27,339 Medical Research Council funded postdoctoral position based in the School of Computing Science, University of Newcastle, UK. A postdoctoral research associate is required to work on the development of knowledge discovery and knowledge management tools and applications for GRID-enabled neuroinformatics. The candidates should have a PhD in computer science, neuroinformatics, neuroscience, or related areas, and good knowledge and experience of object-oriented software design and development (e.g., Java, C/C++). Experience in any of the following areas is beneficial: using artificial intelligence methods (e.g., neural networks, text and data mining), working with web-databases (e.g., neuroscience databases), developing distributed systems (e.g., distributed databases). The post is for up to three years. The salary depends on experience and it is on the RA1A scale range: £18,265 - £27,339. For further enquiries e-mail Dr Peter Andras at peter.andras at ncl.ac.uk. Applications including an application form (download from the web-site), a CV, and names and addresses of two referees should be sent to Mrs A. Jackson, School of Computing Science, Claremont Tower, Claremont Road, Newcastle upon Tyne NE1 7RU, or by email to: Anke.Jackson at ncl.ac.uk. Closing date is 24 February 2003. Job reference: D520R Web: http://www.ncl.ac.uk/vacancies/vacancy.phtml?ref=D520R ----------------- Dr Peter Andras Lecturer Claremont Tower School of Computing Science University of Newcastle Newcastle upon Tyne NE1 7RU UK Tel. +44-191-2227946 Fax.
+44-191-2228232 Web: www.staff.ncl.ac.uk/peter.andras From P.Culverhouse at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: P.Culverhouse at plymouth.ac.uk (Phil Culverhouse) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: [My sincere apologies if you receive multiple copies of this email] DEPT OF COMMUNICATION & ELECTRONIC ENGINEERING, University of Plymouth Ref: HAB-buoy/TECH RESEARCH ASSISTANT/FELLOW Salary from £17,624 pa - RA/RF scale An exciting 24-MONTH post is IMMEDIATELY available for a Vision Scientist/Engineer. You will assist the development and integration of a neural network based natural object categoriser for field and laboratory use. The existing prototype (Windows platform) is capable of categorising 23 species of marine plankton, but has to be further developed and a user interface tailored to Marine Ecologists for real-time operation. You should have a working knowledge of neural networks and current machine vision techniques. Familiarity with visual perception and multi-dimensional clustering statistics would be valuable. You should ideally be familiar with Windows operating systems as well as being a C++ programmer. The POST IS AVAILABLE February/March and will involve some European travel. For informal enquiries regarding this post, please contact Dr P Culverhouse on +44 (0) 1752 233517 or email: pculverhouse at plymouth.ac.uk 16th January 2003 From R.Roy at cranfield.ac.uk Mon Jun 5 16:42:55 2006 From: R.Roy at cranfield.ac.uk (Roy, Rajkumar) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: -------------------------------------------- LAST DATE for APPLICATION: 15th February 2003 ! -------------------------------------------- RESEARCH OPPORTUNITIES at Cranfield ----------------------------------- INDUSTRY CASE Studentship (EPSRC) Title: Customer Characterisation for Decision Engineering Industrial Sponsor: BT Exact Duration: March 2003 - March 2006 There is a FULLY FUNDED PhD Studentship (Industry CASE) in the above-mentioned area. Cranfield University is actively involved with a number of companies to research in the areas of Decision Engineering. This research will be an extension of existing work in the Applied Soft Computing area. Soft Computing is a computing paradigm to handle real life complexities such as imprecision. The paradigm utilises a combination of techniques including Fuzzy Logic, Neural Networks and Evolutionary Computing to address the challenge. The research will investigate different soft computing techniques to characterise customer behaviour and preference within a Contact Centre Environment. The project will involve close collaboration with BT Exact as the industrial sponsor. This is a pan industry project, where the student is expected to develop generic tools and techniques to analyse data from different industrial contexts. The research will involve data analysis, classification and presentation to improve the efficiency of the Contact Centres. The tools developed in the project will integrate with the existing Contact Centre Environment to provide real time decision support to the staff working at the Centre. EPSRC is expected to pay tuition fees to Cranfield. The student would receive around 11K pounds sterling tax-free per annum for the three years. Interested graduate/postgraduate students with a computing/engineering background are invited to submit their CV for an informal discussion over telephone or email. Additional background in data analysis and Soft Computing will be beneficial.
The minimum academic requirement for entrants to the degree is an upper second class honours degree or its equivalent. Please note that the funding is restricted to British Nationals, in special cases it may be offered to an EC national. For informal enquiries and application (detailed CV), please contact: Dr. Rajkumar Roy at your earliest: Dr. Rajkumar Roy Senior Lecturer and Course Director, IT for Product Realisation Department of Enterprise Integration, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedford, MK43 0AL, United Kingdom. Tel: +44 (0)1234 754072 or +44 (0)1234 750111 Ext. 2423 Fax: +44 (0)1234 750852 Email: r.roy at cranfield.ac.uk or r.roy at ieee.org URL: http://www.cranfield.ac.uk/sims/staff/royr.htm http://www.cranfield.ac.uk/sims/cim/people/roy.htm -------------------------------------------- LAST DATE for APPLICATION: 15th February 2003 -------------------------------------------- From M.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Postdoctoral Research Fellowship Centre for Theoretical and Computational Neuroscience University of Plymouth, UK A Postdoctoral Research Fellowship is available for candidates who have just completed or are about to complete a PhD in a suitable area of study, to carry out research within the Centre for Theoretical and Computational Neuroscience. The Fellowship will be for two years initially, at a salary level on the University's scales commensurate with experience and age. The Centre specialises in the application of rigorous quantitative, mathematical and physical approaches, including mathematical and computational modelling and psychophysics, to understanding information coding, processing, storage and transmission in the brain and its manifestation in perception and action. Areas of study include: visual and auditory perception and psychophysics; sensory-motor control, in particular oculomotor control; and mathematical and computational modelling of the cortical neural circuitry underlying perception, attention, learning and memory, and motor control. The appointed Research Fellow will work under the supervision of one of the following academic staff in the Centre: Prof Jochen Braun (vision); Dr Susan Denham (audition); Prof Chris Harris (sensory-motor control); Prof Roman Borisyuk (mathematical and computational modelling); Prof Mike Denham (mathematical and computational modelling). The Centre for Theoretical and Computational Neuroscience is a new research centre in the University of Plymouth, emerging from the previous Centre for Neural and Adaptive Systems (http://www.tech.plym.ac.uk/soc/research/neural/research.html), where the home pages of the above academic staff can be found (the new centre's website is currently under construction). There is currently a thriving community of five postdocs and ten research students in the Centre, working in the above fields. The Centre has a number of externally-funded research programmes and strong international links, including with the Institute of Neuroinformatics at ETH, Zurich, and the Koch laboratory at Caltech. The Centre will be located from April 2003 in a brand new building complex on the University campus which will also house the departments of Computing, Psychology and Biological Sciences and part of the new Medical School. 
The new self-contained accommodation for the Centre will include office space for all academic staff, postdocs and research students, a library and meeting room, a 40-seater seminar room, and vision, audition and sensory-motor psychophysics labs. The University of Plymouth is one of the largest UK universities, with about 25,000 students, some 16,000 of which are accommodated in the city centre campus in Plymouth. It is located in a beautiful part of the southwest of England, close to outstanding countryside, moorland, river estuaries, historical towns and villages and excellent beaches, including some of the best surfing beaches in Europe. It also offers extensive water sports facilities, including diving and sailing. Interested applicants for this Research Fellowship should in the first instance send an email to the Head of the Centre, Professor Mike Denham (mdenham at plym.ac.uk), including a brief statement of research interests and a short curriculum vitae, plus postal address. Applicants will then be sent a formal application form. Note: The closing date for applications for the Research Fellowship is 31st March 2003. Professor Mike Denham Centre for Theoretical and Computational Neuroscience University of Plymouth Plymouth PL4 8AA UK tel: +44 (0)1752 232547 fax: +44 (0)1752 232540 email: mdenham at plym.ac.uk From M.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Research Scholarships Centre for Theoretical and Computational Neuroscience University of Plymouth, UK A number of University Research Scholarships are available for students wishing to study for a PhD in the Centre starting in September/October 2003. The scholarships cover full tuition fees for three years plus an annual living-expenses stipend of £9,000. Further support for living expenses is usually available via teaching assistantships. The Centre specialises in the application of rigorous quantitative, mathematical and physical approaches, including mathematical and computational modelling and psychophysics, to understanding information coding, processing, storage and transmission in the brain and its manifestation in perception and action. Areas of study include: visual and auditory perception and psychophysics; sensory-motor control, in particular oculomotor control; and mathematical and computational modelling of the cortical neural circuitry underlying perception, attention, learning and memory, and motor control. PhD students will work under the supervision of one of the following academic staff in the Centre: Prof Jochen Braun (vision); Dr Susan Denham (audition); Prof Chris Harris (sensory-motor control); Prof Roman Borisyuk (mathematical and computational modelling); Prof Mike Denham (mathematical and computational modelling). The Centre for Theoretical and Computational Neuroscience is a new research centre in the University of Plymouth, emerging from the previous Centre for Neural and Adaptive Systems (http://www.tech.plym.ac.uk/soc/research/neural/research.html), where the home pages of the above academic staff can be found (the new centre's website is currently under construction). There is currently a thriving community of five postdocs and ten research students in the Centre, working in the above fields.
The Centre has a number of externally-funded research programmes and strong international links, including with the Institute of Neuroinformatics at ETH, Zurich, and the Koch laboratory at Caltech. The Centre will be located from April 2003 in a brand new building complex on the University campus which will also house the departments of Computing, Psychology and Biological Sciences and part of the new Medical School. The new self-contained accommodation for the Centre will include office space for all academic staff, postdocs and research students, a library and meeting room, a 40-seater seminar room, and vision, audition and sensory-motor psychophysics labs. The University of Plymouth is one of the largest UK universities, with about 25,000 students, some 16,000 of which are accommodated in the city centre campus in Plymouth. It is located in a beautiful part of the southwest of England, close to outstanding countryside, moorland, river estuaries, historical towns and villages and excellent beaches, including some of the best surfing beaches in Europe. It also offers extensive water sports facilities, including diving and sailing. Interested applicants for these Research Scholarships must first make application to the University and to the Centre for admission to its PhD programme. Initially this can be done by sending an email to the Head of the Centre, Professor Mike Denham (mdenham at plym.ac.uk), including a brief statement of research interests and a short curriculum vitae, plus postal address. Applicants will then be sent formal admission application forms. Note: The closing date for University Scholarship applications is 31st March 2003. Applications for admission to the University's PhD programme should be made well in advance of this date, ideally by the end of February. Professor Mike Denham Centre for Theoretical and Computational Neuroscience University of Plymouth Plymouth PL4 8AA UK tel: +44 (0)1752 232547 fax: +44 (0)1752 232540 email: mdenham at plym.ac.uk From cmbishop at microsoft.com Mon Jun 5 16:42:55 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Ninth International Workshop on Artificial Intelligence and Statistics January 3-6, 2003, Hyatt Hotel, Key West, Florida Electronic proceedings of this workshop are available on-line at: http://research.microsoft.com/conferences/aistats2003/proceedings These proceedings include all contributed papers in both Postscript and PDF format, together with the viewgraphs from the invited speakers in PDF format. Chris Bishop Brendan Frey (workshop organisers) From UE001861 at guest.telecomitalia.it Mon Jun 5 16:42:55 2006 From: UE001861 at guest.telecomitalia.it (Corsini Filippo) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: *********************************************************************** First European School on Neuroengineering "Massimo Grattarola" Venice 16-20 June 2003 Telecom Italia Learning Services (TILS), and the University of Genoa (DIBE, DIST, Bioengineering course) are currently organizing the first edition of a European Summer School on Neuroengineering. The school will be named after Massimo Grattarola. The first edition, which will last for five days, will be held from June 16 to June 20, 2003 at Telecom Italia's Future Center in Venice. The School will cover the following main themes: 1.
Neural code and plasticity o Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems o Cortical computational paradigms for perception and action 2. Brain-like adaptive information processing systems o Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems o Models of learning, representation and adaptability based on knowledge of the nervous system. o Exploration of the capabilities of natural neurobiological systems as flexible computational devices. o Use of information from nervous systems to engineer new control techniques and new artificial systems. o Development of highly innovative Artificial Neural Networks capable of reproducing the functioning of vertebrate nervous systems. 3. Bio-artificial systems o Development of novel brain-computer interfaces o Development of new techniques for neuro-rehabilitation and neuro-prostheses o Hybrid silicon/biological systems Speakers: Abbruzzese G. Non-invasive exploration of human cerebral cortex by transcranial magnetic stimulation Benfenati F. Molecular dissection of neurotransmitter release mechanisms: a key to understand short-term memory processes Torre V., Ruaro M. E., Bonifazi P. Towards the neurocomputer: image processing and learning with neuronal cultures Aleksander I. Digital Neuromodelling Based on the Architecture of the Brain: Basics and Applications Destexhe A. The stochastic integrative properties of neocortical neurons in vivo Gielen S. Quantitative models to explain the degree-of-freedom problem in motor control Le Masson G. Cyborgs: from fiction to science Rutten W. Neuro-electronic interface engineering and neuronal network learning Van Pelt J. Computational and experimental approaches in neuronal morphogenesis and network formation Mussa-Ivaldi F. The Engineering of motor learning and adaptive control Sandini G. Cognitive Development in Robot Cubs Morasso P. Motor control of unstable tasks Two Practical Laboratories: Davide F., Stillo G, Morabito F. Chosen exempla of neuronal signals decoding, analysis and features extractions Renaud-Le Masson Bioengineering: on-silicon solutions for biologically realistic artificial neural networks (state-of-the-art and development perspectives for integrated ANN) Scientific Board: Shun-ichi Amari RIKEN, Brain Science Institute Laboratory for Mathematical Neuroscience, Japan Fabrizio Davide Telecom Italia Learning Services, Italy Walter J. Freeman University of California, Berkeley, USA Stephen Grossberg Boston University, USA Milena Koudelka-Hep IMT, Institute of Microtechnology, University of Neuchatel, Switzerland Gwendal Le Masson INSERM, French Institute of Health and Medical Research, France Sergio Martinoia DIBE, University of Genoa, Italy Pietro Morasso DIST, University of Genoa, Italy Sylvie RENAUD-LE MASSON ENSEIRB - IXL Bordeaux, France Wim Rutten University of Twente, Netherlands Jaap van Pelt Netherlands Institute for Brain Research, Netherlands LOCATION Future Center, Telecom Italia Lab San Marco, 4826 - Campo San Salvador 30124 Venezia Tel. 041 5213 223 Fax 011 228 8228 Italy http://fc.telecomitalialab.com/ CONTACTS Filippo Corsini Telecom Italia- Learning Services S.p.A.
Business Development Viale Parco de' Medici, 61, 00148 Roma Italy tel: +39.06.368.70402 fax: +39.06.368.80101 e-mail: UE001861 at guest.telecomitalia.it CREDITS In response to current reforms in training, at the Italian and European levels, the Summer School on Neuroengineering is certified to grant credits. These include: ECM (Educazione Continua in Medicina) credits recognized by the Italian Ministry of Health ECTS (European Credit Transfer System) credits recognized by all European Universities Registration Fee*: Undergraduate and graduate: 100 EUR PhD students and young researcher: 250 EUR Business and medical professional: 500 EUR * The School registration fee includes the meeting program, reception, two coffee breaks and a lunch each day, and meeting proceedings. For information you can contact: Filippo Corsini ue001861 at guest.telecomitalia.it All the information will be available on the website www.neuroengineering.org (under construction) _________________________________________________ Dr. Filippo Corsini Telecom Italia- Learning Services S.p.A. Business Development Viale Parco de' Medici, 61, 00148 Roma Italy tel: +39.06.368.70402 fax: +39.06.368.80101 e-mail: UE001861 at guest.telecomitalia.it _________________________________________________ This e-mail, including any attachments, may contain private or confidential information. If you think you may not be the intended recipient, or if you have received this e-mail in error, please contact the sender immediately and delete all copies of this e-mail. If you are not the intended recipient, you must not reproduce any part of this e-mail or disclose its contents to any other party. From christiane.debono at epfl.ch Mon Jun 5 16:42:55 2006 From: christiane.debono at epfl.ch (Christiane Debono) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Dear Colleague, An update of faculty positions, group leader positions, postdocs and PhD positions open at the new Brain Mind Institute at the EPFL in Lausanne is now available in this second call for applications. Please pass this email on to anyone who may be interested in any of the positions. Nominations of candidates are also welcome. Thank you. Yours, Henry Markram From cmbishop at microsoft.com Mon Jun 5 16:42:55 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Postdoctoral Research Positions at Microsoft Research Cambridge Computer vision, machine learning, and information retrieval Applications are invited for postdoctoral research positions at Microsoft Research Cambridge (MSRC) in the fields of computer vision, machine learning and information retrieval. These positions are for just under two years starting from a mutually agreeable date, generally no later than 1 January 2004. Applicants must have completed the requirements for a PhD, including submission of their thesis, prior to joining MSRC. Postdoctoral researchers receive a competitive salary, together with a benefits package, and will be eligible for relocation expenses. MSRC is Microsoft's European research laboratory, and is housed in a brand new purpose-designed building on Cambridge University's West Cambridge site, adjacent to the Computer Science and Physics departments, and close to the Mathematics departments and to the centre of town.
It currently employs 65 researchers of many different nationalities working in a broad range of areas including computer vision, machine learning, information retrieval, hardware devices, programming languages, security, systems, networking and distributed computing. MSRC provides a vibrant research environment with an open publications policy and with close links to Cambridge University and many other academic institutions across Europe. Further information about the lab can be found at: http://www.research.microsoft.com/aboutmsr/labs/cambridge/ The closing date for applications is 9 May 2003. To apply please send a full CV (including a list of publications) in PDF, Postscript or Word format, together with the names and contact details for 3 referees, to: cambhr at microsoft.com with the subject line "Application for postdoctoral research position". From silvia at sa.infn.it Mon Jun 5 16:42:55 2006 From: silvia at sa.infn.it (Silvia Scarpetta) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: ------------------------------------------------------ NEW DEADLINE 10 June 2003 ------------------------------------------------------ International School on Neural Nets "E.R. Caianiello" 8th Course: Computational Neuroscience: CORTICAL DYNAMICS 31 Oct.- 6 Nov. 2003 Ettore Majorana Centre Erice (Sicily) ITALY Jointly organized by IIASS International Institute for Advanced Scientific Studies and EMFCSC Ettore Majorana Foundation and Center for Scientific Culture Course homepage: http://www.sa.infn.it/NeuralGroup/CorticalDynamicsSchool2003/ *Directors of the Course:* Maria Marinaro (Dept. of Physics "E.R. Caianiello", Univ. of Salerno, Italy) Peter Erdi (Kalamazoo College, USA & KFKI Res. Inst. Part. and Nucl. Phys. Hung. Acad. Sci. Hungary) *Confirmed Lecturers:* Luigi Agnati - Dept. of Neurosc. Karolinska Inst. Sweden & Modena Univ. Italy Peter Dayan - Gatsby Computational Neuroscience Unit, UCL, UK Peter Erdi - CCSS Kalamazoo College USA & KFKI Hung. Acad. of Sciences Hungary - Codirector Bruce P Graham - Dept. of Computer Science and Mathem., Univ. of Stirling UK John Hertz - Nordita, DK Zhaoping Li - Univ. College London, UK Ronen Segev - School of Physics and Astronomy, Tel Aviv University, Israel Ivan Soltesz - Dept. of Anatomy and Neurobiology, Univ. of California, USA Misha Tsodyks - Dept. of Neurobiology Weizmann Institute of Science, Israel Ichiro Tsuda - Dept. of Mathematics, Hokkaido University, Japan Alessandro Treves - Sissa, Cognitive Neuroscience, Trieste, It Fortunato Tito Arecchi - University of Firenze and INOA, Italy Laszlo Zaborszky - Center Mol. & Behav. Neurosc., Rutgers Univer., New Jersey Hiroshi Fujii - Dept. of Infor. & Communic. Sciences - Kyoto Sangyo Univer., Japan **Purpose of the Course:** The School is devoted to people from different scientific backgrounds (including physics, neuroscience, mathematics and biology) who want to learn about recent developments in computational neuroscience and cortical dynamics. The basic concepts will be introduced, with emphasis on common principles. Cortical dynamics play an important role in functions such as those related to memory, sensory processing and motor control. A systematic description of cortical organization and computational models of the cortex will be given, with emphasis on connections between experimental evidence and biologically-based as well as more abstract models.
The Course is organized as a series of lectures complemented by short seminars that will focus on recent developments and open problems. We also aim to promote a relaxed atmosphere which will encourage informal interactions between all participants and hopefully will lead to new professional relationships which will last beyond the School. **Registrations:** Applications must be received before June 10 2003 in order to be considered by the selection committee. Registration fee of 900 Euro includes accomodation with full board. Application form and additional information are available from http://www.sa.infn.it/NeuralGroup/CorticalDynamicsSchool2003/ Applications should be sent by ordinary mail to the codirector of the school: Prof. Maria Marinaro IIASS Via Pellegrino 19, I-84019 Vietri sul Mare (Sa) Italy or by fax to: +39 089 761 189 (att.ne: Prof. M. Marinaro) or by electronic mail to: iiass.vietri at tin.it subject: summer school **Location** The "Ettore Majorana" International Centre for Scientific Culture takes its inspiration from the outstanding Italian physicist, after whom the Centre was named. Embracing 110 Schools, covering all branches of Science, the Centre is situated in the old pre-mediaeval city of Erice where three restored monasteries provide an appropriate setting for high intellectual endeavour. These monasteries are now named after great Scientists and strong supporters of the "Ettore Majorana" Centre. There are living quarters in all three Monasteries for people attending the Courses of the Centre Please visit: http://www.sa.infn.it/NeuralGroup/CorticalDynamicsSchool2003/ From aburkitt at bionicear.org Mon Jun 5 16:42:55 2006 From: aburkitt at bionicear.org (Anthony BURKITT) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: The following paper on neuronal gain in the leaky integrate-and-fire neuron with conductance synapses has been published by Biological Cybernetics and is now available online: http://link.springer.de/link/service/journals/00422/first/bibs/s00422-003-0408-8.htm "Study of neuronal gain in a conductance-based leaky integrate-and-fire neuron model with balanced excitatory and inhibitory synaptic input" A. N. Burkitt, H. Meffin, and D. B. Grayden Abstract: Neurons receive a continual stream of excitatory and inhibitory synaptic inputs. A conductance-based neuron model is used to investigate how the balanced component of this input modulates the amplitude of neuronal responses. The output spiking-rate is well-described by a formula involving three parameters: the mean $\mu$ and variance $\sigma$ of the membrane potential and the effective membrane time constant $\tau_{\mbox{\tiny Q}}$. This expression shows that, for sufficiently small $\tau_{\mbox{\tiny Q}}$, the level of balanced excitatory-inhibitory input has a non-linear modulatory effect on the neuronal gain. A copy is also available from my web page: http://www.medoto.unimelb.edu.au/people/burkitta/Burkitt_BC_2003.pdf Tony Burkitt ====================ooOOOoo==================== Anthony N. 
Burkitt The Bionic Ear Institute 384-388 Albert Street East Melbourne, VIC 3002 Australia Email: a.burkitt at medoto.unimelb.edu.au http://www.medoto.unimelb.edu.au/people/burkitta Phone: +61 - 3 - 9663 4453 Fax: +61 - 3 - 9667 7518 =====================ooOOOoo=================== From kamps at fsw.LeidenUniv.nl Mon Jun 5 16:42:55 2006 From: kamps at fsw.LeidenUniv.nl (kamps) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: NeuroIT.net workshop, July 8th, Alicante Message-ID: <3EC1170A@webmail2.fsw.LeidenUniv.nl> The NeuroIT.net workshop is organized as a satellite workshop of the CNS2003 conference in Alicante. It is held on July 8th. People who are visiting the CNS conference are also welcome to attend the workshop. There is no need to register and access to the workshop is free. As the programme shows, the workshop focusses on applications of concepts of neuroscience in IT and engineering. ----------------------------------------------------------------------------- NeuroIT.net workshop in Alicante Location: University campus (Universidad Miguel Hernández, Campus de San Juan de Alicante, Carretera de Valencia N-332 s/n) (on the floor above the smaller rooms that will be hosting in parallel various workshops of the CNS*03 meeting. Transportation by bus will be provided.) Introduction 10.00 - 10.10 Introduction EU (Pekka Karp) 10.10 - 10.30 Neuro-IT.net (Alois Knoll) Project presentations 10.30 - 10.50 AMOTH A fleet of artificial chemosensing moths for distributed environmental monitoring. 10.50 - 11.10 NEUROBIT A bioartificial brain with an artificial body: training a cultured neural tissue to support the purposive behavior of an artificial body 11.10 - 11.30 APEREST APEREST will develop a coding and representation scheme of perceptual information based on chaotic dynamics and involving collection of data from animal brain recordings. 11.30 - 11.50 break 11.50 - 12.10 BIOLOCH BIOLOCH aims at understanding of perception and locomotion of animals moving in wet and slippery areas, e.g. the gut or wetlands. 12.10 - 12.30 CYBERHAND CYBERHAND and ROSANA cover similar areas of problems related to the construction of neuroprostheses. ROSANA focuses on different ways of stimulating sensorial receptors equivalent to natural stimuli and studying the representation of such stimuli in the central nervous system. CYBERHAND aims at the construction of an artificial hand capable of producing a natural feeling of touch and grip. Key note speaker (Netherlands Institute for Brain Research): Computational and Experimental Approaches in Neuronal Morphogenesis and Network formation Siesta break Introduction 17.00 - 17.10 announcements Project presentations 17.10 - 17.30 CIRCE CIRCE aims at constructing a miniature bat head for active bio-inspired echolocation. 17.30 - 17.50 CICADA CICADA studies the mechanoreceptor hairs of a cricket and its response to predators for constructing bio-inspired MEMS devices. 17.50 - 18.10 ROSANA See Cyberhand 18.10 - 18.30 MIRRORBOT MirrorBot studies biomimetic multimodal learning in a mirror neuron-based robot. 18.30 - 18.50 SENSEMAKER SENSEMAKER aims at integration and unified representations of multisensory information. 18.50 - 19.10 SPIKEFORCE SpikeFORCE will develop real-time spiking networks for robot control based on a model of the cerebellum.
Key note speaker 19.10 - 19.45 Eduardo Fernandez (Universidad Miguel Hernández) Designing a Brain-Machine interface for direct communication with visual cortex neurons Closure 19.45 - 19.55 From esalinas at wfubmc.edu Mon Jun 5 16:42:55 2006 From: esalinas at wfubmc.edu (Emilio Salinas) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Postdoctoral position available in Theoretical Neuroscience The project investigates the dynamics of neural networks in which units may affect each other's gain. The successful candidate is expected to develop models of neuronal circuits within this broad framework (see Neural Computation 16(7):1439, 2003). Applicants should be interested in quantitative approaches to Neuroscience and should have, or be near completing, a PhD in a relevant discipline - Neuroscience, Physics, Math, etc. The position is for one to three years, with salary starting at $32k and going upward depending on experience. Applicants should email a CV, the names and email addresses of three references, and a description of their research background and interests to Emilio Salinas esalinas at wfubmc.edu Department of Neurobiology and Anatomy Wake Forest University Health Sciences Winston-Salem NC 27157 Affirmative Action/Equal Opportunity Employer. From jonathan.tepper at ntu.ac.uk Mon Jun 5 16:42:55 2006 From: jonathan.tepper at ntu.ac.uk (Tepper, Jonathan) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: PhD STUDENTSHIP AVAILABLE: Neural Networks for Natural Language Processing (£9,000 per year) ========================================================================== The School of Computing and Mathematics, The Nottingham Trent University is pleased to announce the immediate availability of a PhD studentship funded for three years. The successful candidate will join the Intelligent Recognition Systems Research Group within the School. They will pursue research into the use of neural networks for broad coverage syntactic parsing of English texts using large text corpora. The focus will be to build on existing work carried out in the research group which has produced a parser that is at the forefront of this research area. The candidate will be expected to present their findings in academic papers as part of their research programme. We are seeking highly motivated candidates with the following essential qualifications: -a good honours degree in a Computing subject or Computational Linguistics -strong programming skills -an aptitude for mathematics with a willingness to learn advanced topics -good communication skills in English Ideally, the candidate will also have the following desirable qualifications: -knowledge of neural networks -knowledge or experience in natural language processing or Computational Linguistics -a working knowledge of C or C++ -a Master's thesis in a Computing subject or Computational Linguistics The studentship will cover tax-free living expenses of £9,000 per year plus tuition fees, and will commence on 22nd September 2003. Informal enquiries may be made to either Dr Jon Tepper via tel. +44 (0)115 848 2255 email: jonathan.tepper at ntu.ac.uk or Dr Heather Powell via tel. +44 (0)115 848 2598 email: heather.powell at ntu.ac.uk.
For an application form, please contact: Mrs Doreen Corlett Faculty of Construction, Computing & Technology The Nottingham Trent University Burton Street Nottingham NG1 4BU, UK Email: doreen.corlett at ntu.ac.uk, Telephone: +44 (0)115 848 2301, Fax: +44 (0)115 848 6867 Candidates must send a completed application form, their curriculum vitae and a covering letter stating why they are applying for the post and why they meet the essential qualifications (mentioned above) to Mrs Corlett by 26th August 2003. Applications by CV only will not be accepted. Please forward to interested students. Many thanks, Jon -------- Dr. Jon Tepper Senior Lecturer School of Computing and Mathematics The Nottingham Trent University Email: Jonathan.Tepper at ntu.ac.uk WWW: http://dcm.ntu.ac.uk/5_staff/staff_jt.htm http://dcm.ntu.ac.uk/2_research/iris/index.htm Tel. no. +44 (0) 115 848 2255 Fax. no. +44 (0) 115 848 6518 From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Dear Colleagues, Preprints of two papers on spike timing-dependent synaptic plasticity are available: "Spike timing-dependent plasticity: The relationship to rate-based learning for models with weight dynamics determined by a stable fixed-point", A.N. Burkitt, H. Meffin, and D.B. Grayden, to appear in Neural Computation, is available at http://www.bionicear.org/people/burkitta/Burkitt_NC_2003.pdf "How synapses in the auditory system wax and wane: Theoretical perspectives", A.N. Burkitt and L.J. van Hemmen, to appear in Biological Cybernetics, is available at http://www.bionicear.org/people/burkitta/BurkittvH_BC_2003.pdf Regards, Tony Burkitt =============================================== "Spike timing-dependent plasticity: The relationship to rate-based learning for models with weight dynamics determined by a stable fixed-point", A.N. Burkitt, H. Meffin, and D.B. Grayden, to appear in Neural Computation, Abstract: --------- Experimental evidence indicates that synaptic modification depends upon the timing relationship between the presynaptic inputs and the output spikes that they generate. In this paper results are presented for models of spike timing-dependent plasticity (STDP) whose weight dynamics is determined by a stable fixed-point. Four classes of STDP are identified on the basis of the time-extent of their input-output interactions. The effect upon the potentiation of synapses with different rates of input is investigated to elucidate the relationship of STDP with classical studies of LTP/LTD and rate-based Hebbian learning. The selective potentiation of higher-rate synaptic inputs is found only for models where the time-extent of the input-output interactions are ``input restricted'' (i.e., restricted to time domains delimited by adjacent synaptic inputs) and that have a time-asymmetric learning window with a longer time constant for depression than for potentiation. The analysis provides an account of learning dynamics determined by an input-selective stable fixed-point. The effect of suppressive interspike interactions upon STDP are also analyzed and shown to modify the synaptic dynamics. http://www.bionicear.org/people/burkitta/Burkitt_NC_2003.pdf =============================================== "How synapses in the auditory system wax and wane: Theoretical perspectives", A.N. Burkitt and L.J. 
van Hemmen, to appear in Biological Cybernetics, Abstract: --------- Spike timing-dependent synaptic plasticity has recently provided an account of both the acuity of sound localization and the development of temporal-feature maps in the avian auditory system. The dynamics of the resulting learning equation, which describes the evolution of the synaptic weights, is governed by an unstable fixed-point. We outline the derivation of the learning equation for both the Poisson neuron model and the leaky integrate-and-fire neuron with conductance synapses. The asymptotic solutions of the learning equation can be described by a spectral representation based on a biorthogonal expansion. http://www.bionicear.org/people/burkitta/BurkittvH_BC_2003.pdf ====================ooOOOoo==================== Anthony N. Burkitt The Bionic Ear Institute 384-388 Albert Street East Melbourne, VIC 3002 Australia Email: aburkitt at bionicear.org http://www.bionicear.org/people/burkitta Phone: +61 - 3 - 9663 4453 Fax: +61 - 3 - 9667 7518 =====================ooOOOoo===================
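Both abstracts above turn on the shape of the spike-timing-dependent learning window, in particular a time-asymmetric window with a longer time constant for depression than for potentiation. As a rough, generic illustration of such a window (this is not code from the Burkitt et al. papers; the amplitudes and time constants are arbitrary placeholder values), a minimal Python sketch:

import numpy as np

# Illustrative exponential STDP window, time-asymmetric as described in the
# abstracts above: depression uses a longer time constant than potentiation.
A_PLUS, A_MINUS = 0.005, 0.00525   # placeholder amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 40.0   # ms; depression window deliberately longer

def stdp_dw(delta_t_ms):
    """Weight change for a spike-time difference delta_t = t_post - t_pre (ms)."""
    if delta_t_ms > 0:   # presynaptic spike before postsynaptic spike -> potentiation
        return A_PLUS * np.exp(-delta_t_ms / TAU_PLUS)
    return -A_MINUS * np.exp(delta_t_ms / TAU_MINUS)   # otherwise -> depression

if __name__ == "__main__":
    for dt in (-50.0, -10.0, 5.0, 30.0):
        print(dt, stdp_dw(dt))

In models of this family, the window shape, together with the input statistics and the neuron model, helps determine whether the resulting weight dynamics are governed by a stable fixed-point (first paper) or an unstable one (second paper).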
From esann at dice.ucl.ac.be Mon Jun 5 16:42:55 2006 From: esann at dice.ucl.ac.be (esann) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From aetzold at neuro.uni-bremen.de Mon Jun 5 16:42:55 2006 From: aetzold at neuro.uni-bremen.de (Axel Etzold) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Paper available about construction of robust tuning curves Message-ID: From ASWDuch at ntu.edu.sg Mon Jun 5 16:42:55 2006 From: ASWDuch at ntu.edu.sg (Wlodzislaw Duch (Dr)) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Dear Connectionists, Here are five of my recent computational intelligence papers for your comments: 1. Duch W (2003) Support Vector Neural Training. Submitted to IEEE Transactions on Neural Networks (submitted 11.2003) http://www.phys.uni.torun.pl/publications/kmk/03-SVNT.html 2. Duch W (2003) Uncertainty of data, fuzzy membership functions, and multi-layer perceptrons. Submitted to IEEE Transactions on Neural Networks (submitted 11.2003) http://www.phys.uni.torun.pl/publications/kmk/03-uncert.html 3. Duch W (2003) Coloring black boxes: visualization of neural network decisions. International Joint Conference on Neural Networks http://www.phys.uni.torun.pl/publications/kmk/03-IJCNN.html 4. Kordos M, Duch W (2003) On Some Factors Influencing MLP Error Surface. The Seventh International Conference on Artificial Intelligence and Soft Computing (ICAISC) http://www.phys.uni.torun.pl/publications/kmk/03-MLPerrs.html 5. Duch W (2003) Brain-inspired conscious computing architecture. Journal of Mind and Behavior (submitted 10/03) http://www.phys.uni.torun.pl/publications/kmk/03-Brainins.html All these papers (and quite a few more) are linked to my page: http://www.phys.uni.torun.pl/~duch/cv/papall.html Here are the abstracts: 1. Support Vector Neural Training. Neural networks are usually trained on all available data. Support Vector Machines start from all data but near the end of the training use only a small subset of vectors near the decision border. The same learning strategy may be used in neural networks, independently of the actual optimization method used. A feedforward step is used to identify vectors that will not contribute to optimization. The threshold for acceptance of useful vectors for training is dynamically adjusted during learning to avoid excessive oscillations in the number of support vectors. Benefits of such an approach include faster training, higher accuracy of final solutions and identification of a small number of support vectors near decision borders. Results on satellite image classification and hypothyroid disease obtained with this type of training are better than any other neural network results published so far. 2. Uncertainty of data, fuzzy membership functions, and multi-layer perceptrons. The probability that a crisp logical rule applied to imprecise input data is true may be computed using a fuzzy membership function. All reasonable assumptions about input uncertainty distributions lead to membership functions of sigmoidal shape. Convolution of several inputs with uniform uncertainty leads to bell-shaped Gaussian-like uncertainty functions. Relations between input uncertainties and fuzzy rules are systematically explored and several new types of membership functions are discovered. Multi-layered perceptron (MLP) networks are shown to be a particular implementation of hierarchical sets of fuzzy threshold logic rules based on sigmoidal membership functions. They are equivalent to crisp logical networks applied to input data with uncertainty. Leaving fuzziness on the input side makes the networks or the rule systems easier to understand. Practical applications of these ideas are presented for analysis of questionnaire data and gene expression data. 3. Coloring black boxes: visualization of neural network decisions. Neural networks are commonly regarded as black boxes performing incomprehensible functions. For classification problems networks provide maps from high dimensional feature space to K-dimensional image space. Images of training vectors are projected on polygon vertices, providing visualization of network function. Such visualization may show the dynamics of learning, allow for comparison of different networks, display training vectors around which potential problems may arise, show differences due to regularization and optimization procedures, investigate stability of network classification under perturbation of original vectors, and place a new data sample in relation to training data, allowing for estimation of confidence in classification of a given sample. Illustrative examples for the three-class Wine data and five-class Satimage data are described. The visualization method proposed here is applicable to any black box system that provides continuous outputs. 4. Kordos M, Duch W (2003) Visualization of MLP error surfaces helps to understand the influence of network structure and training data on neural learning dynamics. PCA is used to determine two orthogonal directions that capture almost all variance in the weight space. 3-dimensional plots show many aspects of the original error surfaces. 5. Duch W (2003) Brain-inspired conscious computing architecture. What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious were given in the form of a specific architecture named the articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states. Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions and in principle the structure of their experiences may be arbitrarily close to human. With best regards for the coming year, Wlodzislaw Duch Dept. of Informatics, Nicolaus Copernicus University Dept. of Computer Science, SCE NTU, Singapore http://www.phys.uni.torun.pl/~duch http://www.ntu.edu.sg/home/aswduch/
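As a rough sketch of the vector-selection idea in the first abstract above (this is not Duch's code; the margin criterion and the threshold-adjustment rule are assumptions made purely for illustration), a forward pass can be used to keep only the training vectors that are misclassified or lie near the decision border, with the acceptance threshold nudged so the size of the selected subset stays roughly stable:

import numpy as np

def select_support_vectors(outputs, labels, threshold):
    """outputs: predicted probabilities for class 1, shape (n,);
    labels: 0/1 integer targets. Keep misclassified vectors and those
    whose confidence margin |p - 0.5| falls below the threshold."""
    margin = np.abs(outputs - 0.5)
    misclassified = (outputs > 0.5).astype(int) != labels
    return np.where(misclassified | (margin < threshold))[0]

def adjust_threshold(threshold, n_selected, n_total, target_fraction=0.2, rate=0.05):
    # Hypothetical adjustment rule: shrink the window if too many vectors are
    # selected, widen it if too few, to damp oscillations in the subset size.
    if n_selected > target_fraction * n_total:
        return max(threshold * (1.0 - rate), 1e-3)
    return min(threshold * (1.0 + rate), 0.5)

Training would then proceed on the returned subset only, repeating the selection after each epoch.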
From aweigend at amazon.com Mon Jun 5 16:42:55 2006 From: aweigend at amazon.com (Weigend, Andreas) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Software Development Positions at Amazon.com in Seattle, WA in Machine Learning, Statistical Analysis, Fraud Detection, Computational Marketing etc. Do you want to build quantitative models millions of people will use, based on data from the world's largest online laboratory? Are you passionate about formulating relevant questions, and producing solutions to initially ill-defined problems? Do the challenges and opportunities of terabytes of data excite you? Can you think abstractly, and apply your ideas to the real world? Can you contribute to the big picture, and are not afraid to handle the details? Amazon.com is solving incredibly interesting problems in areas including consumer behavior modeling, pricing and promotions, personalization and recommendations, reputation management, fraud detection, computational marketing, customer acquisition and retention. Emphasizing measurement and analytics, we build and automate solutions that leverage instant feedback and the Web's scale. We are looking for people with the right blend of vision, curiosity, and hands-on skills, who want to be part of a highly visible, intellectually vibrant, entrepreneurial team. Ideal candidates will have a track record of creating innovative solutions. They will typically have a graduate degree in computer science, physics, statistics, electrical engineering, bioinformatics, or another computational science. More information can be found at www.weigend.com/amazonjobs.html. If this interests you, please email your resume by January 31, 2004 directly to amazonjobs at weigend.com, clearly indicating your interests and strengths. Thank you. Best regards, -- Andreas Weigend ........................... Andreas Weigend Chief Scientist, Amazon.com Mobile: +1 (917) 697-3800 Info: www.weigend.com ........................... From Peter.Andras at newcastle.ac.uk Mon Jun 5 16:42:55 2006 From: Peter.Andras at newcastle.ac.uk (Peter Andras) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Dear Colleague We would hereby like to invite you to attend the symposium entitled 'Human Language: cognitive, neuroscientific and dynamical systems perspectives' at the University of Newcastle upon Tyne, UK, that will take place between 20 and 22 February 2004.
Further details are available at the symposium website: http://www.staff.ncl.ac.uk/peter.andras/lingsymp.htm If you are interested in coming, please email either: Peter Andras: peter.andras at ncl.ac.uk Hermann Moisl: hermann.moisl at ncl.ac.uk The aim is to keep the attendance fairly small to promote effective discussion, so an early reply would be appreciated. Best regards, Peter Andras http://www.staff.ncl.ac.uk/peter.andras/ Gary Green http://www.staff.ncl.ac.uk/gary.green/ Hermann Moisl http://www.staff.ncl.ac.uk/hermann.moisl/ From tkelley at arl.army.mil Mon Jun 5 16:42:55 2006 From: tkelley at arl.army.mil (Kelley, Troy (Civ,ARL/HRED)) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: The Army Research Laboratory's (ARL) Human Research and Engineering Directorate (HRED) is seeking post-doctoral researchers to join us in a variety of areas, particularly modeling human cognitive processes using current architectures such as the Atomic Components of Thought-Rational (ACT-R), cognitive modeling using high performance computer assets, and developing new cognitive processes for robotic agents. We have post-doctoral positions available through the National Research Council (NRC) and American Society for Engineering Education (ASEE). The NRC positions have open windows for applications, the soonest being February 2nd. The ASEE positions are open on a continuing basis. A background in cognitive psychology, computational cognitive models, and/or in neural networks or artificial intelligence (AI) is required. Post-doctoral positions usually last a year, with an option of an extra year. Many post-doctoral candidates eventually become employees with ARL. ARL HRED is located at Aberdeen Proving Ground, in Northern Maryland between Baltimore and Philadelphia on the shores of the Chesapeake Bay. We are midway between Maryland's Appalachian mountains and the ocean shore. Please contact Troy Kelley or Laurel Allender if you are interested in learning more about these research opportunities. There is a deadline for NRC post docs of Feb 2, so if you are interested, please respond soon. tkelley at arl.army.mil lallende at arl.army.mil Troy Kelley U.S. Army Research Laboratory Human Research and Engineering Directorate AMSRL-HR-SE, APG, MD 21005 Tel: 410-278-5859 Fax: 410-278-9694 email: tkelley at arl.army.mil From levys at wlu.edu Mon Jun 5 16:42:55 2006 From: levys at wlu.edu (Simon Levy) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Call for Papers: AAAI Fall 2004 symposium on Compositional Connectionism in Cognitive Science Message-ID: From esann at dice.ucl.ac.be Mon Jun 5 16:42:55 2006 From: esann at dice.ucl.ac.be (esann) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: 2004 European Capital of Culture (www.genova-2004.it) This year, the School will place a special emphasis on exploring the implications of neuroengineering for neurological rehabilitation. The School will feature lectures, given by renowned experts in the field (organized into thematic sessions), on both theoretical and technical aspects; lab activities; and student presentations (one dedicated session per day).
The topics covered will include: 1) Neural code and neural plasticity Coding and decoding of information in neural systems Neurophysiological basis of learning and memory Computational paradigms for perception and action Capabilities of natural neurobiological systems as computational devices 2) Neural interfaces and bio-artificial systems EEG, trans-cranial magnetic stimulation, multi-site recordings Intra-cranial, EEG-based and peripheral brain-computer interfaces Neural prostheses Hybrid silicon/biological systems 3) Neuroengineering and rehabilitation Haptic devices in neurological rehabilitation Virtual reality and multimedia for rehabilitation Functional electrical stimulation and biofeedback 4) Neuroengineering of mind Using robots to understand development of higher functions Neural models of higher functions Large scale brain models CONFIRMED SPEAKERS Giovanni Abbruzzese, University of Genova (ITALY) Fabio Babiloni, University of Rome I (ITALY) Marco Bove, University of Genova (ITALY) Andreas K. Engel, Hamburg University (GERMANY) Rainer Goebel, University of Maastricht (THE NETHERLANDS) Peter König, University of Osnabrueck (GERMANY) Shimon Marom, Technion, Haifa (ISRAEL) Sergio Martinoia, University of Genova (ITALY) Pietro G. Morasso, University of Genova (ITALY) Miguel A. Nicolelis, Duke University, Durham (USA) David J. Ostry, McGill University, Montreal (CANADA) Silvio P. Sabatini, University of Genova (ITALY) Vincenzo Tagliasco, University of Genova (ITALY) John G. Taylor, King's College, London (UK) REGISTRATION FEES (*) PhD students and postdocs: 150 euros (**) Business and medical professionals: 300 euros (*) includes program, reception, two coffee breaks and lunch each day, and lecture notes (**) Early registration. Increase by 20% after May 15th. SCIENTIFIC ORGANIZATION: Dr. Vittorio Sanguineti University of Genova Via Opera Pia 13 16145 Genova (ITALY) E-mail: vittorio.sanguineti at unige.it Phone: +39-010-3536487 Fax: +39-010-3532154 ADMINISTRATIVE INFORMATION: Filippo Corsini Telecom Italia - Learning Services S.p.A. Viale Parco de' Medici, 61, 00148 Roma Italy tel: +39.06.368.72379 fax: +39.06.368.80101 e-mail: UE001861 at guest.telecomitalia.it From M.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Professor / Reader in Theoretical and Computational Neuroscience Applications are invited for this permanent position within the Centre for Theoretical and Computational Neuroscience (www.plymneuro.org.uk) at the University of Plymouth, England. Applicants must possess a record of high quality, internationally significant research, in any specialist area, e.g. vision, audition, motor control, ideally with interests in both mathematical/computational modelling and human psychophysics/imaging. Interested persons are invited to contact the Head of the Centre, Professor Mike Denham (email: mdenham at plym.ac.uk; tel: +44 (0)1752 232547), for further details and to discuss the post on an informal basis.
Professor Mike Denham Centre for Theoretical and Computational Neuroscience Room A223 Portland Square University of Plymouth Drake Circus Plymouth PL4 8AA UK tel: +44 (0)1752 232547/233359 fax: +44 (0)1752 233349 email: mdenham at plym.ac.uk www.plymneuro.org.uk From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: From A.Cangelosi at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: A.Cangelosi at plymouth.ac.uk (Angelo Cangelosi) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Call for Abstracts 9th NEURAL COMPUTATION AND PSYCHOLOGY WORKSHOP (NCPW 9) Modelling Language, Cognition and Action Plymouth (UK), 8-10 September 2004 http://www.plymouth.ac.uk/ncpw9 The 9th Neural Computation and Psychology Workshop (NCPW9) will be held in Plymouth (England), September 8-10, 2004. Each year this lively forum brings together researchers from diverse disciplines such as psychology, artificial intelligence, cognitive science, computer science, robotics, neuroscience, and philosophy. The special theme of this year's workshop is "Neural Network Modelling of Language, Cognition and Action". PAPER SUBMISSIONS are INVITED in this and other areas covering the wider subject of neural modelling of cognitive and psychological processes. Papers will be considered for oral and poster presentations. After the conference, participants will be invited to submit a paper for the post-conference proceedings. This Workshop has always been characterized by the presentation of high quality papers, its limited size and the fact that it takes place in an informal setting. These features are explicitly designed to encourage interaction among the participants. KEYNOTE SPEAKERS (confirmed) Bob French (University of Liege) Art Glenberg (University of Wisconsin - Madison) Deb Roy (MIT) Luc Steels (VUB University Brussels and SONY Paris) Daniel Wolpert (Institute of Neurology, UCL London) CALL FOR ABSTRACTS --------------------- One-page abstracts are now solicited. DEADLINE: 14 JUNE 2004 FORMAT: Each abstract should conform to the following specifications: Length: a single page of A4 with 2.5cm margins all round. Font size 12pt or larger, single-spaced Title centred, 14pts Any reference list and diagram(s) must fit on this single page AUTHORSHIP AND AFFILIATION: The top of the A4 page must contain: Title of paper, Author name(s), Author affiliation(s) in brief (1 line), Email address of principal author. SEND: Word or PDF file to ncpw9 at plymouth.ac.uk. Publication ----------- Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific (to be confirmed).
Conference Organisers --------------------- Angelo Cangelosi (University of Plymouth) Guido Bugmann (University of Plymouth) Roman Borisyuk (University of Plymouth) John Bullinaria (University of Birmingham) Important Dates --------------- Deadline for submission of abstracts: June 14th 2004 Notification of acceptance/rejection: July 9th 2004 Submission of full papers: October 15th, 2004 Website ------- More details can be found on the conference website, http://www.plymouth.ac.uk/ncpw9 ------------- Angelo Cangelosi, PhD ------------- Principal Lecturer Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth Portland Square (A316) Drake Circus Plymouth PL4 8AA (UK) E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/staff/angelo (tel) +44 1752 232559 (fax) +44 1752 232540
From calls at bbsonline.org Mon Jun 5 16:42:55 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Depue & Morrone-Strupinsky/A neurobehavioral model of affiliative bonding: BBS Call for Commentators Message-ID: Below the instructions please find the abstract, keywords, and full text link to the forthcoming BBS target article: A neurobehavioral model of affiliative bonding: Implications for conceptualizing a human trait of affiliation by Richard A. Depue and Jeannine V. Morrone-Strupinsky This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= ** IMPORTANT ** ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon.
Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable at the URL that follows the abstract, keywords below. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: A neurobehavioral model of affiliative bonding: Implications for conceptualizing a human trait of affiliation AUTHORS: Richard A. Depue and Jeannine V. Morrone-Strupinsky ABSTRACT: Because little is known about the human trait of affiliation, we provide a novel neurobehavioral model of affiliative bonding. Discussion is organized around processes of reward and memory formation that occur during approach and consummatory phases of affiliation. Appetitive and consummatory reward processes are mediated independently by the activity of the ventral tegmental area (VTA) dopamine (DA)-nucleus accumbens shell (NAS) pathway and the central corticolimbic projections of the mu-opiate system of the medial basal arcuate nucleus, respectively, although these two projection systems functionally interact across time. We next explicate the manner in which DA and glutamate interact in both the VTA and NAS to form incentive-encoded contextual memory ensembles that are predictive of reward derived from affiliative objects. Affiliative stimuli, in particular, are incorporated within contextual ensembles predictive of affiliative reward via a) the binding of affiliative stimuli in the rostral circuit of the medial extended amygdala and subsequent transmission to the NAS shell; b) affiliative stimulus-induced opiate potentiation of DA processes in the VTA and NAS; and c) permissive or facilitatory effects of gonadal steroids, oxytocin (in interaction with DA), and vasopressin on (i) sensory, perceptual, and attentional processing of affiliative stimuli and (ii) formation of social memories. Among these various processes, we propose that the capacity to experience affiliative reward via opiate functioning has a disproportionate weight in determining individual differences in affiliation. We delineate sources of these individual differences, and provide the first human data that support an association between opiate functioning and variation in trait affiliation. KEYWORDS: affiliation, social bonds, social memory, personality, appetitive reward, consummatory reward, dopamine, mu-opiates, oxytocin, vasopressin, corticolimbic-striatal networks FULL TEXT: http://www.bbsonline.org/Preprints/Depue-07232002/Referees/ ======================================================================= ======================================================================= *** SUPPLEMENTARY ANNOUNCEMENT *** (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review.
(Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, which is why you received this email. If you do not wish to receive further BBS Calls please email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From calls at bbsonline.org Mon Jun 5 16:42:55 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Arbib/From monkey-like action recognition to human language: BBS Call for Commentators Message-ID: Below please find the abstract, keywords, and a link to the full text of the forthcoming BBS target article: From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics Michael A. Arbib This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. 
Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics AUTHORS: Michael A. Arbib ABSTRACT: The article analyzes the neural and functional grounding of language skills as well as their emergence in hominid evolution, hypothesizing stages leading from abilities known to exist in monkeys and apes and presumed to exist in our hominid ancestors right through to modern spoken and signed languages. The starting point is the observation that both premotor area F5 in monkeys and Broca's area in humans contain a "mirror system" active for both execution and observation of manual actions, and that F5 and Broca's area are homologous brain regions. This grounded the Mirror System Hypothesis of Rizzolatti & Arbib (1998), which offers the mirror system for grasping as a key neural "missing link" between the abilities of our non-human ancestors of 20 million years ago and modern human language, with manual gestures rather than a system for vocal communication providing the initial seed for this evolutionary process. The present article, however, goes "beyond the mirror" to offer hypotheses on evolutionary changes within and outside the mirror systems which may have occurred to equip Homo sapiens with a language-ready brain. Crucial to the early stages of this progression is the mirror system for grasping and its extension to permit imitation. Imitation is seen as evolving via a so-called "simple" system such as that found in chimpanzees (which allows imitation of complex "object-oriented" sequences but only as the result of extensive practice) to a so-called "complex" system found in humans (which allows rapid imitation even of complex sequences, under appropriate conditions) which supports pantomime. This is hypothesized to provide the substrate for the development of protosign, a combinatorially open repertoire of manual gestures, which then provides the scaffolding for the emergence of protospeech (which thus owes little to non-human vocalizations), with protosign and protospeech then developing in an expanding spiral. It is argued that these stages involve biological evolution of both brain and body. By contrast, it is argued that the progression from protosign and protospeech to languages with full-blown syntax and compositional semantics was a historical phenomenon in the development of Homo sapiens, involving few if any further biological changes. KEYWORDS: gestures; hominids; language evolution; mirror system; neurolinguistics; primates; protolanguage; sign language; speech; vocalization http://www.bbsonline.org/Preprints/Arbib-05012002/Referees/ ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota.
BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- Message-Id: <200404082226.i38MQNB1020437 at ursa.services.brown.edu> X-Priority: From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Dear Connectionists, Either RA post below may appeal to someone with a background in coding and/or analysis of audio using e.g. neural networks. Please forward to anyone who may be interested. Many thanks, Mark Plumbley ---------------------------------------------------------- Centre for Digital Music Queen Mary, University of London Two Post-Doctoral Research Assistants for EPSRC Projects (1) Object-based Coding of Musical Audio and (2) Advanced Subband Systems for Audio Source Separation The Centre for Digital Music is at the forefront of research related to digital music and audio analysis, modeling and processing, including work on digital audio effects, music analysis, music information retrieval, and audio coding. Research Assistants are required for two new EPSRC projects in the Centre. * RA Post 1: Object-Based Coding of Musical Audio (Ref: 04097/DP) The aim of this project is to develop a way to encode musical audio using high-level "sound objects" such as musical notes or chords. This will allow musical audio to be compressed using very low bit rates, over e.g. MPEG4 Structured Audio, with the audio resynthesized at the receiver. The project will develop and investigate methods to encode monophonic (single-note) music and polyphonic music (with several notes at once), and will compare the quality and efficiency of these coding methods with existing methods such as transform coding and parametric coding. * RA Post 2: Advanced Subband Systems for Audio Source Separation * (Ref: 04098/DP) Humans primarily use phase information to localize sounds at low frequency, whereas in the upper frequencies intensity differences dominate due to inherent phase ambiguities. The aim of this project is to create new algorithmic solutions for blind source separation (BSS) for speech and audio that can deal with real acoustic environments in a similar manner to human hearing. 
The algorithms need to be able to deal with real noisy and reverberant environments and be able to track individual sources as they move and appear/disappear. Such systems will be key in future electronic devices, such as digital hearing aids and hands-free tele-conferencing. The project will also focus on the construction of a real-time prototype for system evaluation and demonstration. The salary for the posts will be up to £24,325 per annum, inclusive of London Allowance, on the RA1A scale. Further details about the Department are on the web site http://www.elec.qmul.ac.uk/ and about the College on http://www.qmul.ac.uk. Further details and an application form can be obtained from http://www.elec.qmul.ac.uk/department/vacancies/ Completed application forms should be returned to Theresa Willis, Department of Electronic Engineering, Queen Mary, University of London, Mile End Road, London E1 4NS (email: theresa.willis at elec.qmul.ac.uk), by Wednesday 21 April 2004. Working Towards Equal Opportunities --- Dr Mark D Plumbley Centre for Digital Music Department of Electronic Engineering Queen Mary University of London Mile End Road, London E1 4NS, UK Tel: +44 (0)20 7882 7518 Fax: +44 (0)20 7882 7997 Email: mark.plumbley at elec.qmul.ac.uk
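As a small, generic illustration of the binaural cues mentioned in the second project description above (none of this comes from the project itself; it is only a sketch of the standard cues), an interaural time difference can be estimated from the lag of the cross-correlation peak between the two channels, and an interaural level difference from their energy ratio:

import numpy as np

def interaural_cues(left, right, fs):
    """Estimate ITD (seconds) and ILD (dB) from two equal-length signal frames.
    left, right: 1-D numpy arrays for the two channels; fs: sample rate in Hz."""
    # ITD: lag (in samples) of the cross-correlation peak, converted to seconds
    corr = np.correlate(left, right, mode="full")
    lag = int(np.argmax(corr)) - (len(right) - 1)
    itd = lag / float(fs)
    # ILD: level difference in decibels between the two channels
    eps = 1e-12
    ild = 10.0 * np.log10((np.mean(left ** 2) + eps) / (np.mean(right ** 2) + eps))
    return itd, ild

In practice such cues are computed per frequency band, with the phase/timing cue most informative at low frequencies and the level cue at high frequencies, which is the division of labour the announcement alludes to.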
From calls at bbsonline.org Mon Jun 5 16:42:55 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Vallortigara & Rogers/Survival with an asymmetrical brain: BBS Call for Commentators Message-ID: Below the proposal instructions please find the abstract, keywords, and a link to the full text of the forthcoming BBS target article: Survival with an asymmetrical brain: Advantages and disadvantages of cerebral lateralization Giorgio Vallortigara and Lesley J. Rogers This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: Survival with an asymmetrical brain: Advantages and disadvantages of cerebral lateralization AUTHORS: Giorgio Vallortigara and Lesley J. Rogers ABSTRACT: Recent evidence in natural and semi-natural settings has revealed a variety of left-right perceptual asymmetries among vertebrates. This includes preferential use of the left or right visual hemifield during activities such as searching for food, agonistic responses or escape from predators in animals as different as fish, amphibians, reptiles, birds and mammals. There are obvious disadvantages in showing such directional asymmetries because relevant stimuli may happen to be located to the animal's left or right at random; there is no a priori association between the meaning of a stimulus (e.g., its being a predator or a food item) and its being located to the animal's left or right. Moreover, other organisms (e.g. predators) could exploit the predictability of behavior that arises from population-level lateral biases. It might be argued that lateralization of function can enhance cognitive capacity and efficiency of the brain, thus counteracting the ecological disadvantages of lateral biases in behavior. However, such an increase in brain efficiency could be obtained by each individual being lateralized without any need to align the direction of the asymmetry in the majority of the individuals of the population. Here we argue that the alignment of the direction of behavioral asymmetries at the population level arises as an evolutionarily stable strategy under "social" pressures, i.e. when individually asymmetrical organisms must coordinate their behavior with the behavior of other asymmetrical organisms of the same or different species. KEYWORDS: Asymmetry, lateralization of behavior, brain evolution, brain lateralization, evolution of lateralization, evolutionarily stable strategy, hemispheric specialization, laterality, social behavior, development FULL TEXT: http://www.bbsonline.org/Preprints/Vallortigara-12152003/Referees/ ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review.
(Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From R.Borisyuk at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: R.Borisyuk at plymouth.ac.uk (Roman Borisyuk) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: New announcements on the 9th Neural Computation and Psychology Workshop (NCPW9) Plymouth UK, 8-10 September 2004 www.plymouth.ac.uk/ncpw9 1. Thanks to the sponsorship of the UK councils BBSRC and EPSRC, there is now a limited number of travel grants. You can apply, after submitting your abstract, by writing to ncpw9 at plymouth.ac.uk and explaining the motivation for your request. 2. The abstract submission deadline has been extended to JULY 1st. Please see the full Call for Abstracts below ========================================================= Call for Abstracts - Extended Deadline 9th NEURAL COMPUTATION AND PSYCHOLOGY WORKSHOP (NCPW 9) Modelling Language, Cognition and Action Plymouth (UK), 8-10 September 2004 http://www.plymouth.ac.uk/ncpw9 The 9th Neural Computation and Psychology Workshop (NCPW9) will be held in Plymouth (England), September 8-10, 2004. Each year this lively forum brings together researchers from diverse disciplines such as psychology, artificial intelligence, cognitive science, computer science, robotics, neuroscience, and philosophy. The special theme of this year's workshop is "Neural Network Modelling of Language, Cognition and Action". PAPER SUBMISSIONS are INVITED in this and other areas covering the wider subject of neural modelling of cognitive and psychological processes. Papers will be considered for oral and poster presentations. After the conference, participants will be invited to submit a paper for the post-conference proceedings. This Workshop has always been characterized by the presentation of high quality papers, its limited size and the fact that it takes place in an informal setting. These features are explicitly designed to encourage interaction among the participants. KEYNOTE SPEAKERS (confirmed) Nick Chater (Warwick University, UK) Bob French (University of Liege, Belgium) Art Glenberg (University of Wisconsin - Madison, USA) Deb Roy (MIT, USA) Stefan Wermter (Sunderland University, UK) Daniel Wolpert (Institute of Neurology, UCL London, UK) CALL FOR ABSTRACTS --------------------- One-page abstracts are now solicited. DEADLINE: JULY 1st, 2004 FORMAT: Each abstract should conform to the following specifications: Length: a single page of A4 with 2.5cm margins all round.
Font size 12pt or larger, single-spaced Title centred, 14pts Any reference list and diagram(s) must fit on this single page AUTHORSHIP AND AFFILIATION: The top of the A4 page must contain: Title of paper, Author name(s), Author affiliation(s) in brief (1 line), Email address of principal author. SEND: Word or PDF file to ncpw9 at plymouth.ac.uk. Publication ----------- Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific (to be confirmed). Conference Organisers --------------------- Angelo Cangelosi (University of Plymouth) Guido Bugmann (University of Plymouth) Roman Borisyuk (University of Plymouth) John Bullinaria (University of Birmingham) Important Dates --------------- Deadline for submission of abstracts: July 1st 2004 Notification of acceptance/rejection: July 9th 2004 Submission of full papers: October 15th, 2004 Website ------- More details can be found on the conference website, http://www.plymouth.ac.uk/ncpw9 From arthur at tuebingen.mpg.de Mon Jun 5 16:42:55 2006 From: arthur at tuebingen.mpg.de (Arthur Gretton) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: [repost] extended deadline for ICML/COLT session on kernel methods Message-ID: [ This message is reposted due to an editing error. Apologies. -- moderator ] IMPROMPTU KERNEL PAPERS -- COLT/ICML 2004 Kernel Day, Banff, Canada, July 4 Extended Deadline for Submissions: June 25, 2004 There remain some slots free in the impromptu poster session on kernel methods, to be held during the kernel day at ICML/COLT on 4 July 2004. As in previous years, we are looking for incomplete or unusual ideas, as well as promising directions or problems for new research. If you would like to submit to this session, please send an abstract to Arthur Gretton (arthur at tuebingen.mpg.de) before June 25, 2004. Please do not send posters or long documents. -- Arthur Gretton Mobile : +49 1762 3210867 MPI for Biological Cybernetics Office : +49 7071 601562 Spemannstr 38, 72076 Home : +49 7071 305346 Tuebingen, Germany I used to believe I was a Bayesian, but now I'm not so sure. From A.Cangelosi at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: A.Cangelosi at plymouth.ac.uk (Angelo Cangelosi) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Connection Science Journal Call for Papers A Special Issue on 'The Emergence of Language: Neural and Adaptive Agent Models ' Guest Editor: Angelo Cangelosi Connection Science is calling papers for a special issue entitled 'The Emergence of Language: Neural and Adaptive Agent Models'. Studies of the emergence of language focus on the evolutionary and/or developmental factors that affect the acquisition and auto-organisation of a linguistic communication system. Both language-specific abilities (e.g. speech, semantics, syntax) and other cognitive, sensorimotor and social abilities (e.g. category learning, action and embodiment, social networks) contribute to the emergence of language. 
Key research issues and topics in the area include: * Emergentism as an alternative to the nativism/empiricism dichotomy * Identification of basic processes producing language complexity * Grammaticalization and emergence of syntax * Emergent models of language acquisition * Evolution and origins of language * Pidgin, creole and second language acquisition * Neural bases of emergent language processes * Auto-organization of shared lexicons in groups of individuals/agents * Grounding of symbols and language in perception and action The main aims of this special issue are to foster interdisciplinary and multi-methodological approaches to modelling the emergence of language, and to identify key research directions for the future. Models based on neural networks (connectionism, computational neuroscience) and adaptive agent methodologies (artificial life, multi-agent systems, robotics), or integrated neural/agent approaches, are particularly encouraged. The submitted papers are expected to: (i) focus on one or more related research issues (see list above), (ii) explain the importance of the topic, the open problems and the different approaches discussed in the literature, (iii) discuss the advantages and drawbacks of the neural and adaptive agent approaches with respect to other methodologies (including experimental research) and (iv) present original models and/or significant new results. Review papers may also be considered. Invited Papers The special issue will include two invited papers, one from Brian MacWhinney (Carnegie Mellon University) and one from Luc Steels (VUB University Brussels and SONY Computer Labs Paris). The invited papers are: * Brian MacWhinney , 'Emergent Linguistic Structures and the Problem of Time' (focus on neural network modeling) * Luc Steels , 'Mirror Learning and the Self-Organisation of Languages' (focus on adaptive agent modeling) Submission Instructions and Deadline Manuscripts, either full papers or shorter research notes (up to 4000 words), following the Connection Science guidelines (http://www.tandf.co.uk/journals/authors/ccosauth.asp ) should be emailed to the guest editor (acangelosi at plymouth.ac.uk) by December 1, 2004. Reviews will be completed by March 1, 2005, and final drafts will be accepted no later than May 1, 2005. The special issue will be published in September 2005. Guest Editor Angelo Cangelosi Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth, Plymouth PL4 8AA, UK Tel: +44 (0) 1752 232559 Fax: +44 (0) 1752 232540 E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/research/ABC/EmergenceLanguage/ Related and Sample Papers Cangelosi, A., and Parisi, D., 1998, The emergence of a 'language' in an evolving population of neural networks. Connection Science, 10(2): 83-97. Cangelosi, A., and Parisi, D., 2004, The processing of verbs and nouns in neural networks: Insights from synthetic brain imaging. Brain and Language, 89(2): 401-408. Elman, J.L, 1999, The emergence of language: A conspiracy theory. In B. MacWhinney (ed.), Emergence of Language (Hillsdale, NJ: LEA). Knight, C., Hurford, J.R., and Studdert-Kennedy, M., (eds), 2000, The evolutionary emergence of language: social function and the origins of linguistic form (Cambridge: Cambridge University Press). MacWhinney, B., 1998, Models of the emergence of language. Annual Review of Psychology, 49: 199-227. Plunkett, K., Sinha, C., Moller, M. 
F., and Strandsby, O., 1992, Symbol grounding or the emergence of symbols? Vocabulary growth in children and a connectionist net. Connection Science, 4(3-4): 293-312. Roy, D., and Pentland, A., 2002, Learning words from sights and sounds: A computational model, Cognitive Science, 26: 113-146. Steels, L., 2003, Evolving grounded communication for robots. Trends in Cognitive Sciences, 7(7): 308-312. Wermter, S., Elshaw, M., and Farrand, S., 2003, A modular approach to self-organization of robot control based on language instruction. Connection Science, 15(2-3): 73-94. ---------------- Angelo Cangelosi, PhD ---------------- Reader in Artificial Intelligence and Cognition Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth Portland Square Building (A316) Plymouth PL4 8AA (UK) E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/staff/angelo (tel) +44 1752 232559 (fax) +44 1752 232540 From R.Borisyuk at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: R.Borisyuk at plymouth.ac.uk (Roman Borisyuk) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: University of Plymouth Faculty of Science - School of Earth, Ocean and Environmental Science Research Fellow in stochastic modelling, European Lifestyles and Marine Ecosystems Contact person for information: Prof. Laurence Mee (lmee at plymouth.ac.uk) Further information available at Plymouth University is one of the UK's leading institutions conducting multidisciplinary research on marine and coastal environmental policy worldwide. The Marine and Coastal Policy Research Group aims to provide a sound scientific, social, legal and economic basis for improved policy for the management, sustainable use and protection of the marine and coastal environment. Due to a recent Framework Six project award it has become necessary to recruit a research fellow to develop the innovative models necessary for predictive scenarios on the state of Europe's seas. The overall project involves the cooperation of 28 research groups in 14 countries, and modelling will examine causal relationships between agents of social and economic change and impacts on the environment. It will develop predictive scenarios of future changes to the marine environment. The new post, available with immediate effect for a maximum duration of 30 months, will assist the project leader, Prof. Laurence Mee. The work is expected to lead to high quality research publications and should provide an important career step for a young researcher. The successful candidate must be an experienced and adaptable postdoctoral statistician skilled in empirical data analysis and modelling techniques. Approaches that may be explored include multi-criteria analysis, Bayesian networks and neural networks. He/she will have a proven track record of quality research publications and should demonstrate capability for a multidisciplinary approach involving teamwork. The appointee should be able to work autonomously, have good communication skills and experience of working with large datasets. The indicative salary for the post is £20,311-22,191 pa pro-rata.
Application forms can be obtained from: The Personnel Department University of Plymouth Drake Circus Plymouth PL4 8AA Vacancy hotline: (24 hour) 01752 232168 Email: personnel at plymouth.ac.uk Closing date 20 August 2004 From calls at bbsonline.org Mon Jun 5 16:42:55 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Steels & Belpaeme/Coordinating Perceptually Grounded Categories through Language: BBS Call for Commentators Message-ID: Below the instructions please find the abstract, keywords, and full text link to the forthcoming BBS target article: Coordinating Perceptually Grounded Categories through Language. A Case Study for Colour. by Luc Steels and Tony Belpaeme This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= ** IMPORTANT ** ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable at the URL that follows the abstract, keywords below. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: Coordinating Perceptually Grounded Categories through Language. A Case Study for Colour. 
AUTHORS: Luc Steels and Tony Belpaeme ABSTRACT: The paper proposes a number of models to examine through what mechanisms a population of autonomous agents could arrive at a repertoire of perceptually grounded categories that is sufficiently shared to allow successful communication. The models are inspired by the main approaches to human categorisation being discussed in the literature: nativism, empiricism, and culturalism. Colour is taken as a case study. Although the paper takes no stance on which position is to be accepted as final truth with respect to human categorisation and naming, it points to theoretical constraints that make each position more or less likely and contains clear suggestions on what the best engineering solution would be. Specifically, it argues that the collective choice of a shared repertoire must integrate multiple constraints, including constraints coming from communication. KEYWORDS: Autonomous agents, symbol grounding, colour categorisation, colour naming, genetic evolution, connectionism, memes, cultural evolution, self-organisation, origins of language, semiotic dynamics FULL TEXT: http://www.bbsonline.org/Preprints/Steels-09262002/Referees/ ======================================================================= ======================================================================= *** SUPPLEMENTARY ANNOUNCEMENT *** (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, which is why you received this email. If you do not wish to receive further BBS Calls please email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: processes are in a closed loop, where the input to the system depends on its output. In contrast, most in vitro preparations are not. Thanks to recent advances in real-time computing, we can artificially close the loop and stimulate the system according to its current state. Such a closed-loop approach blurs the border between experiments and simulations, and it allows us to peek into the inner workings of the brain that are not accessible by any other means. This symposium considers neuronal systems ranging from single cells, to small circuits, to the whole organism. 
It emphasizes "dynamic clamp" approach to study the role of ion channels in orchestrating behavior, and extends this closed-loop concept to networks, neural prostheses and therapeutic interventions. Invited Speakers: Eve E. Marder, Brandeis University, How Good is Good Enough? Using the Dynamic Clamp to Understand Parameter Regulation in Network Function Robert Butera, Georgia Institute of Technology, Dynamic Clamp: Technological Implementations and Algorithmic Development Gwendal le Masson, INSERM, Paris, Biological-Artificial Interactions: Evolution of Techniques and Emerging Concepts in Network Neurosciences Farzan Nadim, Rutgers University, Synaptic Depression Mediates Bistability in Neuronal Networks with Feedback Inhibition Alex Reyes, New York University Controlling the Spread of Synchrony with Inhibition Shimon Marom, Israel Institute of Technology (Technion), Haifa Learning in Networks of Cortical Neurons Yang Dan, University of California, Berkeley Timing-Dependent Plasticity in Visual Cortex Moshe Abeles, Bar Ilan University, Ramat Gan, Israel Spatial and Temporal Organization of Activity in Motor Cortex Rafael Yuste, Columbia University, Imaging the Spontaneous and Evoked Dynamics of the Cortical Microcircuit Theodore W. Berger, University of Southern California Nonlinear Dynamic Models of Neural Systems as the Basis for Neural Prostheses: Application to Hippocampus Michael Dickinson, California Institute of Technology The Organization of Visual Motion Reflexes in Flies and their Role in Flight Control Andrew Schwartz, University of Pittsburgh Useful Signals from Motor Cortex Peter A. Tass, Institute of Medicine, J=FClich, and University of Cologne, Germany Model-Based Development of Desynchronizing Deep Brain Stimulation Keynote Address: Mayada Akil, National Institute of Mental Health Putting it All Together: Schizophrenia, from Phenotype to Genotype and Back Poster sessions will be held during both days of the meeting. Program agenda may be accessed via the NIMH website located at: http://www.nimh.nih.gov/scientificmeetings/dynamics2004.cfm For further information, registration and other logistics, contact Matt Burdetsky at Capital Meeting Planning, Inc., 6521 Arlington Blvd., Suite 505, Falls Church, VA 22042 (703) 536-4993; Fax: (703) 536-4991; E-mail: matt at cmpinc.net From Hualou.Liang at uth.tmc.edu Mon Jun 5 16:42:55 2006 From: Hualou.Liang at uth.tmc.edu (Hualou Liang) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: COMPUTATIONAL COGNITIVE NEUROSCIENCE POSTDOCTORAL POSITION AVAILABLE University of Texas Health Science Center at Houston Applications are invited for a postdoctoral position currently open in the group of Dr. Hualou Liang (http://www.sahs.uth.tmc.edu/hliang/) at University of Texas Health Science Center at Houston to participate in an ongoing research project studying the cortical dynamics of visual selective attention. The project involves the application of modern signal processing techniques to multielectrode neuronal recordings. The ideal candidate should have, or be about to receive, a Ph.D. in relevant discipline with substantial mathematical/computational experience (especially in signal processing, time series analysis, dynamical systems, multivariate statistics). Programming skills in C and Matlab are essential. Experience in neuroscience is advantageous but not required. Interested individuals should email a curriculum vitae, a brief statement of research interests and the names of three references to Dr. 
Hualou Liang at Hualou.liang at uth.tmc.edu PS: I will be available at the SFN meeting in San Diego. Potential candidates are welcome to discuss this position at the meeting. From R.Roy at Cranfield.ac.uk Mon Jun 5 16:42:55 2006 From: R.Roy at Cranfield.ac.uk (Roy, Rajkumar) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Development of a Soft Computing Approach to Predict Roll Life in Long Product Rolling Industry CASE Studentship at Cranfield University Proposed by: Dr. Rajkumar Roy, Cranfield University Industrial Sponsor: Corus UK November 2004 - October 2007 Outline of the Project Rolls are estimated to contribute about 5-15% of overall production costs in long product rolling. Roll life in long product rolling is dependent on the rate of wear of the rolls. Any prediction of roll life will require an understanding of the roll wear mechanisms and a model for the wear. It is observed that, after many years of research, scientists and engineers are still working on developing such a model. On the other hand, expert operators on the shop floor often take corrective actions to improve roll life. Through experience they have developed a mental model of the roll wear behaviour and therefore of the roll life. In the absence of a quantitative model for roll wear and roll life prediction, it is proposed that this research will develop an approach utilizing Soft Computing techniques (Neural Networks and Fuzzy Logic) to predict roll life for long product rolling. Soft Computing techniques have proven successful in many domains for modelling a complex environment using empirical data and human expertise. It is expected that the research will utilize historical data available within the industry to establish any relationship between certain key roll, component and production variables (quantitative) and the actual life of the roll. Neural networks and statistical approaches can be used at this stage of the research. In parallel, the research will investigate how expert operators adjust machine and roll parameters to improve roll life. This would involve an extensive knowledge capture exercise. It is expected that a fuzzy logic based representation will allow this knowledge to be made explicit. The fuzzy model will incorporate the qualitative variables involved in roll life prediction. The third phase of the research will focus on integrating the quantitative and qualitative models to develop a complete model for roll life prediction. EPSRC is expected to pay tuition fees to Cranfield. The student would receive around 11K pounds sterling tax-free per annum for the three years. Interested graduate/postgraduate students with a manufacturing/mechanical engineering background are invited to submit their CV for an informal discussion over telephone or email. Additional background in knowledge capture and Soft Computing will be beneficial. The minimum academic requirement for entrants to the degree is an upper second class honours degree or its equivalent. Please note that the funding is restricted to British Nationals; in special cases it may be offered to an EC national. Please respond by 30th Nov. 2004. For informal enquiries and application (detailed CV), please contact Dr. Rajkumar Roy at your earliest convenience: Dr. Rajkumar Roy Senior Lecturer and Course Director, IT for Product Realisation Department of Enterprise Integration, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedford, MK43 0AL, United Kingdom.
Tel: +44 (0)1234 754072 or +44 (0)1234 750111 Ext. 2423 Fax: +44 (0)1234 750852 Email: r.roy at cranfield.ac.uk or r.roy at ieee.org URL: http://www.cranfield.ac.uk/sims/staff/royr.htm http://www.cranfield.ac.uk/sims/cim/people/roy.htm From bogus@does.not.exist.com Mon Jun 5 16:42:55 2006 From: bogus@does.not.exist.com () Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Large-Scale Dynamics of the Visual Cortex", will be given by Prof. Michael Shelley (Courant Institute, New York), in Barcelona, within the Ph. D. Program of Applied Mathematics of the Technical University of Catalonia. This program has a so-called "Quality Mention", so Ph.D. students can apply for grants from the Spanish Ministry of Education and Science. http://wwwn.mec.es/univ/jsp/plantillaAncho.jsp?id=26 To register for the courses please contact Mrs. Carme Capdevila at the PhD office at the Faculty of Mathematics and Statistics of the UPC at Carmec at fme.upc.edu or at the phone number +34 93 401 58 61. This course is also part of a research program of the Centre de Recerca Matemàtica, http://www.crm.es/CONTROL2005, which can offer a reduced number of accommodation grants for Ph. D. students interested in the course. Please fill in the application form for lodging. The deadline for sending applications is December 15. http://www.crm.es/CONTROL2005/ControlFinancial_form.htm We will be grateful if you can spread this announcement. Sincerely, Amadeu Delshams Toni Guillamon Department of Applied Mathematics I Technical University of Catalonia, UPC, Barcelona From M.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Postdoctoral Fellowship The School of Psychology and the Centre for Theoretical and Computational Neuroscience have jointly been awarded a 5-Year Academic Fellowship from Research Councils UK. This fellowship scheme is designed to provide a route into an academic career for researchers with outstanding potential. At the end of the fellowship period, the University will offer a permanent academic post to the Fellow, subject to successful completion of standard academic probation within the five years of the fellowship. You should be a postdoctoral researcher of high quality, able to take an active role in research projects using behavioural experimentation, computational modelling and EEG/ERP techniques to investigate cognition. For informal enquiries in the first instance, please contact Professor Mike Denham on +44 (0)1752 232547 or email mike.denham at plym.ac.uk, although applications must be made in accordance with the details shown below. Professor Mike Denham Centre for Theoretical and Computational Neuroscience Room A223 Portland Square University of Plymouth Drake Circus Plymouth PL4 8AA UK tel: +44 (0)1752 232547/233359 fax: +44 (0)1752 233349 email: mdenham at plym.ac.uk From cmbishop at microsoft.com Mon Jun 5 16:42:55 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Each year the Microsoft Research lab in Cambridge, U.K. offers around 40+ PhD internships, typically of 12 weeks duration, covering research areas of interest to the lab including machine learning, computer vision and information retrieval. These internships are aimed at PhD students who have completed at least a year (preferably two or three) of their PhD studies.
Competition for places is strong, so we have set a deadline of 28 February for receipt applications (including references) for internships in 2005. Detailed information about the internships, as well as information on the applications procedure, is available at: http://www.research.microsoft.com/aboutmsr/jobs/internships/cambridge.aspx Chris Bishop Professor Christopher M. Bishop FREng Assistant Director Microsoft Research Ltd 7 J J Thomson Avenue Cambridge CB3 0FB, U.K. Tel. +44 (0)1223 479 783 Fax: +44 (0)1223 479 999 cmbishop at microsoft.com http://research.microsoft.com/~cmbishop From altun at tti-c.org Mon Jun 5 16:42:55 2006 From: altun at tti-c.org (Yasemin Altun) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Machine Learning Summer School at TTI-C/UC Message-ID: From calls at bbsonline.org Mon Jun 5 16:42:55 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: van der Velde & de Kamps/Neural blackboard architectures: BBS Call for Commentators Message-ID: The Online Commentary Proposal System is currently unavailable due to technical difficulties. Until the Online Commentary Proposal System is reactivated, please send all commentary proposals (with relevant expertise) and commentator suggestions to calls at bbsonline.org. --------------------------------------------------------------------------------- Below the proposal instructions please find the abstract, keywords, and a link to the full text of the forthcoming BBS target article: "Neural blackboard architectures of combinatorial structures in cognition" Frank van der Velde and Marc de Kamps http://www.bbsonline.org/Preprints/VanderVelde-11132003/Referees/ This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL by March 24, 2005. calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) 
======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. Please reply by EMAIL to by March 24, 2005 ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: Neural blackboard architectures of combinatorial structures in cognition AUTHORS: Frank van der Velde and Marc de Kamps ABSTRACT: Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural 'blackboard' architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception. Perspectives and potential developments of the architectures are discussed. KEYWORDS: Binding, blackboard architectures, combinatorial structure, compositionality, language, dynamic system, neurocognition, sentence complexity, sentence structure, working memory, variables, vision FULL TEXT: http://www.bbsonline.org/Preprints/VanderVelde-11132003/Referees/ ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. 
BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From M.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: University of Plymouth Centre for Theoretical and Computational Neuroscience Postdoctoral Research Fellow Five Year Fixed Term Appointment (salary range £23,643 - £29,479 pa) Applications are invited for the post of Postdoctoral Research Fellow in the Centre for Theoretical and Computational Neuroscience at the University of Plymouth, UK. The post has been made available through the award of a major new £1.8M five-year research project funded by the UK Engineering and Physical Sciences Research Council entitled: "A Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex". Collaborators on the project include Manchester University (Stefano Panzeri, Piotr Dudek, Steve Furber), University College London (Michael Hausser, Arnd Roth), Edinburgh University (Mark van Rossum, David Willshaw), Oxford University (Jan Schnupp), and London University School of Pharmacy (Alex Thomson), plus a number of leading European research groups. Applicants for the post must have a PhD in a relevant subject area and possess an expert knowledge of the field of theoretical and computational neuroscience, or of a closely related area with some knowledge of theoretical and computational neuroscience. They must be able to provide evidence of an ability to conduct high quality research in this or a closely related research area, eg a strong publication record and peer recognition. Evidence of the ability to work successfully in collaboration with other research groups on joint projects would be particularly advantageous. The work of the Research Fellow will be specifically concerned with the staged development of the cortical microcircuit model, in collaboration with all the participants in the project, on a large scale Linux cluster based simulation facility. Maintaining a close level of collaboration will involve the Research Fellow in spending short periods of time in the laboratories of the collaborators, both in the UK and in Europe.
The Research Fellow will also conduct research into methods for combining different levels/scales of model description which emerge as the project progresses, in order to build an integrated cortical microcircuit model. These tasks will require a postdoctoral researcher with extensive research experience in neurobiological modelling and the knowledge of theoretical and computational neuroscience at the neurobiological level, ie detailed modelling of neurons and neural circuitry, necessary to maintain an in-depth understanding of the activities in all of the research areas of the project. The post is available from 1st June 2005 and an appointment will be made as soon as possible after this date. The appointment will be for a fixed term of five years, and will be subject to a probationary period of twelve months. Informal enquiries should be made in the first instance to Professor Mike Denham, Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK; tel: +44 (0)1752 232547; email: mdenham at plym.ac.uk, from whom further details are available. From calls at bbsonline.org Mon Jun 5 16:42:55 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: Striedter/Principles of Brain Evolution: BBS Multiple Book Review Message-ID: The Online Commentary Proposal System is currently unavailable due to technical difficulties. Until the Online Commentary Proposal System is reactivated, please send all commentary proposals (with relevant expertise) and commentator suggestions to calls at bbsonline.org. ======================================================================= BBS MULTIPLE BOOK REVIEW - CALL FOR COMMENTATORS ======================================================================= Below is a link to the forthcoming precis of a book accepted for Multiple Book Review in Behavioral and Brain Sciences (BBS). PRECIS OF: Principles of Brain Evolution AUTHOR: Georg F. Striedter Behavioral and Brain Sciences (BBS), is an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Please note that it is the *BOOK*, not the precis, that is to be reviewed. Reviewers must be BBS Associates or nominated by a BBS Associate. To be considered as a reviewer for this book, to suggest other appropriate reviewers, or for information about how to become a BBS Associate, please send an EMAIL to calls at bbsonline.org by March 25, 2005. The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to nominate someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. A full electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) 
To help you decide whether you would be an appropriate reviewer for this book, an electronic draft of the precis (only) is retrievable at the URL that follows the abstract below. ======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. Please reply by EMAIL to calls at bbsonline.org by March 25, 2005 ======================================================================= *** BOOK PRECIS INFORMATION *** ======================================================================= PRECIS OF: Principles of Brain Evolution Author: Georg F. Striedter ABSTRACT: Brain evolution is a complex weave of species similarities and differences, bound by diverse rules and principles. This book is a detailed examination of these principles, using data from a wide array of vertebrates but minimizing technical details and terminology. It is written for advanced undergraduates, graduate students, and more senior scientists who already know something about the brain, but want a deeper understanding of how diverse brains evolved. The book's central theme is that evolutionary changes in absolute brain size tend to correlate with many other aspects of brain structure and function, including the proportional size of individual brain regions, their complexity, and their neuronal connections. To explain these correlations, the book delves into rules of brain development and asks how changes in brain structure impact function and behavior. Two chapters focus specifically on how mammal brains diverged from other brains and how Homo sapiens evolved a very large and special brain. KEYWORDS: Neocortex, Development, Homology, Parcellation, Mammal, Primate, Lamination, Cladistics, Hippocampus, Basal Ganglia, Neuromere PRECIS TEXT: http://www.bbsonline.org/Preprints/Striedter-01132005/Referees ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!).
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Ralph BBS ------------------------------------------------------------------- Ralph DeMarco Editorial Coordinator Behavioral and Brain Sciences Journals Department Cambridge University Press 40 West 20th Street New York, NY 10011-4211 UNITED STATES bbs at bbsonline.org http://www.bbsonline.org Tel: +001 212 924 3900 ext.374 Fax: +001 212 645 5960 ------------------------------------------------------------------- From M.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: Apologies for this second posting - the financial information was not properly reproduced on the first one. University of Plymouth Centre for Theoretical and Computational Neuroscience Postdoctoral Research Fellow Five Year Fixed Term Appointment (salary range 23,643 - 29,479 UK pounds pa) Applications are invited for the post of Postdoctoral Research Fellow in the Centre for Theoretical and Computational Neuroscience at the University of Plymouth, UK. The post has come available through the award of a major new 1.8M UK pounds five-year research project funded by the UK Engineering and Physical Sciences Research Council entitled: "A Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex". Collaborators on the project include Manchester University (Stefano Panzeri, Piotr Dudek, Steve Furber), University College London (Michael Hausser, Arnd Roth), Edinburgh University (Mark van Rossum, David Willshaw), Oxford University (Jan Schnupp), and London University School of Pharmacy (Alex Thomson), plus a number of leading European research groups. Applicants for the post must have a PhD in a relevant subject area and possess an expert knowledge of the field of theoretical and computational neuroscience, or of a closely related area with some knowledge of theoretical and computational neuroscience. They must be able to provide evidence of an ability to conduct high quality research in this or a closely related research area, eg a strong publication record and peer recognition. Evidence of the ability to work successfully in collaboration with other research groups on joint projects would be particularly advantageous. The work of the Research Fellow will be specifically concerned with the staged development of the cortical microcircuit model, in collaboration with all the participants in the project, on a large scale Linux cluster based simulation facility. Maintaining a close level of collaboration will involve the Research Fellow in spending short periods of time in the laboratories of the collaborators, both in the UK and in Europe.
The Research Fellow will also conduct research into methods for combining different levels/scales of model description which emerge as the project progresses, in order to build an integrated cortical microcircuit model. These tasks will require a postdoctoral researcher with extensive research experience in neurobiological modelling and the knowledge of theoretical and computational neuroscience at the neurobiological level, ie detailed modelling of neurons and neural circuitry, necessary to maintain an in-depth understanding of the activities in all of the research areas of the project. The post is available from 1st June 2005 and an appointment will be made as soon as possible after this date. The appointment will be for a fixed term of five years, and will be subject to a probationary period of twelve months. Informal enquiries should be made in the first instance to Professor Mike Denham, Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK; tel: +44 (0)1752 232547; email: mdenham at plym.ac.uk, from whom further details are available. From sml at essex.ac.uk Mon Jun 5 16:42:55 2006 From: sml at essex.ac.uk (Lucas, Simon M) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: IEEE Symposium on Computational Intelligence and Games April 4-6 2005 Essex University, Colchester, Essex http://cigames.org Call for Participation Keynote Speakers ================ - Jordan Pollack: Is progress possible? - Martin Mueller: Challenges in computer Go - Risto Miikkulainen: Creating Intelligent Agents through Neuroevolution - Jaap van den Herik: Opponent Modelling and Commercial Games Tutorials (Sunday 3rd April, afternoon) ========= - Andries P. Engelbrecht: Particle Swarm Optimisation for Learning Game Strategies - Evan J. Hughes: Coevolving game strategies: How to win and how to lose - Thomas P. Runarsson: Temporal Difference Learning for Game Strategy Acquisition 28 Oral Papers(single stream) 10 Poster Papers More details at http://cigames.org Demonstrations ============== We encourage delegates to bring demonstrations (e.g. on Laptops) of their latest research in this area, to be exhibited in the poster session). -------------------------------------------------- Dr. Simon Lucas Department of Computer Science University of Essex Colchester CO4 3SQ United Kingdom Email: sml at essex.ac.uk http://cswww.essex.ac.uk/staff/lucas/lucas.htm -------------------------------------------------- From S.Denham at plymouth.ac.uk Mon Jun 5 16:42:55 2006 From: S.Denham at plymouth.ac.uk (Sue Denham) Date: Mon, 05 Jun 2006 20:42:55 -0000 Subject: No subject Message-ID: The Centre for Theoretical and Computational Neuroscience and the Computer Music Research Group, University of Plymouth, UK, are looking for highly qualified candidates for 2 Post-Doctoral and 2 Research Assistant positions to work on a 3-year research project in the field of Computational Neuroscience & Music Cognition, entitled Emergent Cognition through Active Perception. 
The project is funded by the Sixth Framework Programme of the European Union and involves a consortium led by Dr Sue Denham, Prof Mike Denham and Dr Eduardo Miranda (University of Plymouth), in collaboration with Dr Henkjan Honing (University of Amsterdam, Institute for Logic, Language and Computation), Prof István Winkler (Institute for Psychology, Hungarian Academy of Sciences), and Prof Gustavo Deco and Prof Xavier Serra (University Pompeu Fabra, Barcelona, Music Technology Group & Computational Neuroscience Group). The goal of the project is to investigate how complex cognitive behaviour in artificial systems can emerge through interacting with an environment, and how, by becoming sensitive to the properties of the environment, such systems can develop effective representations and processing structures autonomously. Music is an ideal domain in which to investigate complex cognitive behaviour, since music, like language, is a universal phenomenon containing complex abstractions and temporally extended structures, whose organisation is constrained by underlying rules or conventions that participants need to understand for effective cognition and interaction. We will investigate the development of music cognition by combining the complementary approaches of perceptual experiments using human subjects, functional and neurocomputational modelling, and the implementation of an interactive embodied cognitive system. Provisional project start date: 1 October 2005 More details are available here: http://neuromusic.soc.plymouth.ac.uk/EmCAP.html Alternatively contact: Sue Denham s.denham at plymouth.ac.uk Or Eduardo R Miranda eduardo.miranda at plymouth.ac.uk From d.mandic at imperial.ac.uk Fri Jun 2 05:36:27 2006 From: d.mandic at imperial.ac.uk (Danilo P. Mandic) Date: Fri, 2 Jun 2006 10:36:27 +0100 Subject: Connectionists: Postdoctoral position in Nonlinear Multidimensional Signal Processing at Imperial College Message-ID: <006401c68628$17383050$0a1efea9@MandicLaptop> Dear Connectionists, may I draw your attention to the opening for a postdoc working on modelling of real world signals by multidimensional Recurrent Neural Networks. More details can be found at http://www.commsp.ee.ic.ac.uk/~mandic The deadline is quite close, sorry for the short notice. Danilo ======================================== Dr Danilo P. Mandic Reader in Signal Processing Department of Electrical and Electronic Engineering Imperial College London Exhibition Road, SW7 2BT London Phone: +44 (0)207 594 6271 Fax: +44 (0)207 594 6234 E-mail: d.mandic at imperial.ac.uk www.commsp.ee.ic.ac.uk/~mandic From nicolas.brunel at univ-paris5.fr Fri Jun 2 08:02:18 2006 From: nicolas.brunel at univ-paris5.fr (Nicolas Brunel) Date: Fri, 02 Jun 2006 14:02:18 +0200 Subject: Connectionists: Postdoctoral position in optical microscopy for neuroscience Message-ID: <448028CA.5030904@univ-paris5.fr> Postdoctoral position in optical microscopy for neuroscience Ecole Normale Supérieure, Paris, France A postdoctoral position is available in October 2006 at the Laboratory of Molecular and Cellular Neurobiology of the Ecole Normale Supérieure, Paris, France, to combine two-photon microscopy and adaptive optics to improve depth of imaging in scattering samples such as brain tissues. The project involves the design of a two-photon microscope including correction of wavefront distortions due to the large scale structures in brain tissues. Adaptive corrections based on wavefront measurement and on a genetic algorithm will be used.
This project aims at improving depth of imaging /in vivo/ and this optical set-up will be used to analyze, in young rats, multiple single unit activity in layer IV barrel cortex under controlled stimulation of the corresponding principal whisker. We are looking for an applicant with a background in optics and microscopy. Experience with laser optics, two-photon microscopy and adaptive optics is desirable. Please send your CV, a cover letter describing your research interests, and the names and e-mail addresses of 2 references to: Laurent Bourdieu, Ecole Normale Supérieure, Département de Biologie, Laboratoire de Neurobiologie Moléculaire et Cellulaire, UMR CNRS 8544, 46 rue d'Ulm, 75005 Paris. Web site: http://www.biologie.ens.fr/neuroctx/. Email: laurent.bourdieu at ens.fr From nicolas.brunel at univ-paris5.fr Fri Jun 2 08:03:26 2006 From: nicolas.brunel at univ-paris5.fr (Nicolas Brunel) Date: Fri, 02 Jun 2006 14:03:26 +0200 Subject: Connectionists: Postdoctoral position in systems neuroscience Message-ID: <4480290E.7030108@univ-paris5.fr> Postdoctoral position in systems neuroscience Ecole Normale Supérieure, Paris, France A postdoctoral position is available in October 2006 at the Laboratory of Molecular and Cellular Neurobiology of the Ecole Normale Supérieure, Paris, France, to study the coding mechanisms underlying auditory-somatosensory associations in the rat cortex. The project involves behavioral training of rodents and recording of multiple single units, either using two-photon microscopy or multi-electrodes. It aims at investigating neuronal interactions during multimodal associations in the rat cerebral cortex. We are looking for an applicant with a background in integrative neuroscience. Experience with either two-photon microscopy or multi-electrode recordings and with quantitative spike-train data analysis is desirable. Please send your CV, a cover letter describing your research interests, and the names and e-mail addresses of 2 references to: Laurent Bourdieu, Ecole Normale Supérieure, Département de Biologie, Laboratoire de Neurobiologie Moléculaire et Cellulaire, UMR CNRS 8544, 46 rue d'Ulm, 75005 Paris. Web site: http://www.biologie.ens.fr/neuroctx/. Email: laurent.bourdieu at ens.fr. From saighi at ixl.fr Fri Jun 2 09:40:39 2006 From: saighi at ixl.fr (Sylvain Saighi) Date: Fri, 02 Jun 2006 15:40:39 +0200 Subject: Connectionists: PhD position in the field of Engineering of Neuromorphic Systems Message-ID: <44803FD7.40603@ixl.fr> *PhD position in the field of Engineering of Neuromorphic Systems* *IXL - CNRS, Bordeaux, France* In the framework of the European Union's Marie Curie network for human resources and mobility activity, a new project "NeuroVers-IT" investigating Neuro-Cognitive Science and Information Technology has been set up. The project aims at collaborative, highly multidisciplinary research between 11 well-known European research institutions in the areas of neuro-/cognitive sciences/biophysics and robotics/information technologies/mathematics. For this project, the IXL Microelectronics Laboratory in Bordeaux, France, is looking for an *Early-Stage Researcher* (holding a Master's degree entitling him/her to pursue a PhD degree, beginning September 2006). The ideal candidate should have a university degree in electrical engineering. Expertise in mixed-analog IC design and the VHDL language, a good knowledge of spoken and written English and a strong interest in computational neuroscience are required.
The project concerns the development of VLSI circuits and electronic systems of high biological relevance that emulate, in real time, multi-conductance neurons and neural networks with adaptive properties. The work will be conducted at the IXL Microelectronics laboratory (www.ixl.fr), a CNRS institution associated with ENSEIRB-Université Bordeaux 1. IXL is located on the Bordeaux Science campus. The PhD student will join the research group "Engineering of Neuromorphic Systems". To be eligible, the candidate shall not be a French citizen and shall not have resided in France for more than 12 months in the last 3 years. For information about this position you may contact Prof. Sylvie RENAUD IXL Laboratory, CNRS, ENSEIRB, Université Bordeaux 1 email: renaud at ixl.fr Tel: +33 540 002 796 Please send your written application including your CV and other relevant material From hommel at fsw.leidenuniv.nl Sat Jun 3 12:18:12 2006 From: hommel at fsw.leidenuniv.nl (Bernhard Hommel) Date: Sat, 03 Jun 2006 18:18:12 +0200 Subject: Connectionists: Postdoc position Message-ID: <00bb01c68729$4f851b10$4501a8c0@bhome> The Cognitive Psychology Section, Department of Psychology at Leiden University, and the Leiden Institute for Brain and Cognition invites applications for a Postdoc position. The position is embedded into a large-scale four-year project (PACO+, http://www.paco-plus.org/) funded by the European Union and carried out in cooperation with 7 partner labs in Europe (DE, DK, ES, SE, SI, UK). The project aims at developing a neurophysiologically inspired computational model of the acquisition and representation of object-action complexes (OACs, event files), and the subsequent use of such OACs for the selection and control of contextually adapted actions in a behaving humanoid robot (Armar III). A basic tenet of the project is that objects and actions are inseparably intertwined, and that OACs can only emerge through interaction with the environment. Behavioral, anatomical, and physiological knowledge about the human brain will be exploited to build a restricted but biologically plausible cognitive system that will be implemented in real robotic systems. Experimental studies will be conducted to evaluate the models by comparing human and robot behavior. The research will be carried out at the Department of Psychology, Leiden University, and supervised by Prof. Dr. Bernhard Hommel. The project group also comprises two PhD students who have just started. Leiden University is the oldest and most prestigious university in the Netherlands and the group of Dr. Hommel has an excellent international reputation in the area of attention and action control. Leiden is a beautiful historical university city in the vicinity of Den Haag, Amsterdam and Schiphol airport and only 5 km from the North Sea beach. A PhD in cognitive psychology, neuroscience, cognitive neuroscience, computer science, or another related discipline is a prerequisite. Experience in programming (e.g., C++, Matlab) and neural modeling is also required. General knowledge in the areas of visual perception and action selection, visual attention, cognition, or cognitive neuroscience is recommended. The position is for four years, starting as soon as possible. There is no deadline; applications will be continuously evaluated on a first come, first served basis. Please send applications and CVs to Prof. Dr. Bernhard Hommel (hommel at fsw.leidenuniv.nl).
=================================== Bernhard Hommel Leiden University Department of Psychology Cognitive Psychology Unit & Leiden Institute for Brain and Cognition Wassenaarseweg 52, Room 2B05 P.O. Box 9555 2300 RB Leiden, The Netherlands Phone: +(0)629023062 http://home.planet.nl/~homme247/bh.htm
Message-ID: <12423206934.7.EE.JANDERSON@A20.CC.UTEXAS.EDU> Greetings - I am scheduled to present a poster at the INNS meeting in Boston, and would like to attend the entire conference. However, I can hardly afford the $91/night rate for the entire week. Since the double rate is the same as the single rate, I was wondering if anyone who is going to the conference would be willing to share a double room with me and we'll split the cost. I think you'll find I'm an agreeable roommate. Thanks in advance, Jim Anderson (ECE, U.Texas) ee.janderson at A20.CC.UTEXAS.EDU 1411 Elm Brook Dr. Austin, TX 78758 (512) 834-2092 (home) 474-4526 (Work) ------- From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 30 Aug 1988 13:02-EDT Subject: tech report announcement Message-ID: Using A Neural Network to Learn the Dynamics of the CMU Direct-Drive Arm II Ken Goldberg and Barak Pearlmutter CMU-CS-88-160 ABSTRACT Computing the inverse dynamics of a robot arm is an active area of research in the control literature. We apply a backpropagation network to this problem and measure its performance on the first 2 links of the CMU Direct-Drive Arm II for a family of "pick-and-place" trajectories. Trained on a random sample of actual trajectories, the network is shown to generalize with a root mean square error/standard deviation (RMSS) of 0.10. The resulting weights can be interpreted in terms of the velocity and acceleration filters used in conventional control theory. We also report preliminary results on learning a larger subset of state space for a simulated arm. If you would like a copy, you may either send computer mail to Catherine.Copetas at cs.cmu.edu or physical mail to the address below. Please request "technical report CMU-CS-88-160." Catherine Copetas Department of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 From Mark.Derthick at G.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Mark.Derthick at G.GP.CS.CMU.EDU (Mark.Derthick@G.GP.CS.CMU.EDU) Date: 5 Sep 1988 19:29-EDT Subject: Question about Hopfield&Tank nets Message-ID: Using the Hopfield and Tank energy function $E = -\frac{1}{2} \sum_i \sum_j T_{ij} V_i V_j + \sum_i \frac{1}{R} \int g^{-1}(V)\,dV + \sum_i I_i V_i$ one COULD calculate $dE/dV_i$ for each output and do steepest descent. Instead, Hopfield and Tank introduce a new variable, $u = g^{-1}(V)$, representing the input voltage to an amplifier with finite resistance and capacitance. The energy function is still a Liapunov function for their circuit, but the circuit doesn't do steepest descent; it moves in a direction obtained from the gradient by warping with the sigmoid function: delta-V_i is shrunk for those V_i which take on values near zero or one. Hopfield and Tank's motivation seems to be fidelity to real neurons. If one doesn't care about this, is there any reason to prefer their algorithm to steepest descent? Mark From Michael.Witbrock at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Michael.Witbrock at F.GP.CS.CMU.EDU (Michael.Witbrock@F.GP.CS.CMU.EDU) Date: Fri, 16 Sep 1988 14:28-EDT Subject: Layers Message-ID: <590437703/mjw@F.GP.CS.CMU.EDU> There is a completely defined way of counting layers. Note, however, that I am not arguing for its adoption. Let the distance between two units be defined as the *minimal* number of modifiable weights forming a path between them (i.e. the number of weights on the shortest path between the two nodes).
Then the Layer in which a unit lies is the minimal distance between it and an input unit. The number of layers in the network is the maximum value of the distance between any unit and an input unit. See how much damage a little graph theory can do. michael From EPSYNET%UHUPVM1.BITNET at VMA.CC.CMU.EDU Tue Jun 6 06:52:25 2006 From: EPSYNET%UHUPVM1.BITNET at VMA.CC.CMU.EDU (Psychology Newsletter and Bulletin Board) Date: Sun, 25 Sep 88 Subject: Psychnet Message-ID: Hello If you are not a subscriber to the Bitnet Psychology Newsletter and would like to add your name, please send your reply to me at the userid and node above. There is no charge from us for the subscription. If you already have a subscription, please ignore this notice. A sample issue follows below. Yours truly, Robert C. Morecock, Ph.D. Editor ------------------------------------------------------------------------ ! * * * B I T N E T P S Y C H O L O G Y N E W S L E T T E R * * * ! ------------------------------------------------------------------------ ! Volume 3, Number 22, September 11, 1988 Circulation 941 ! ------------------------------------------------------------------------ ! From the Ed. Psych. Dept., University of Houston, Texas 77004 ! ! Robert C. Morecock, Editor ! ------------------------------------------------------------------------ Today's Topics: 1. USSR and Electronic Mail Networking with the Western World 2. Senate Testimony on Educational Electronic Mail Networking 3. Steven Pinker and Stevan Harnad on 'Rules and Learning' 4. New Journal - Interacting with Computers - Paper Call 5. Free Statistical Software for Students and Free Clinical Assessment System Software for Students -- Walter Hudson 6. Files Arriving at the Bulletin Board Since the Last Issue 7. How to Retrieve Bulletin Board Files ------------------------------------------------------------------------ (For discussion of above or other topics send your comments to userid Epsynet at node Uhupvm1 on Bitnet.) ------------------------------------------------------------------------ USSR Academy of Sciences and EARN This letter is from the Feedback section of the BITNET publication NetMonth, August 1988 edition, edited by Chris Condon. The topic is that of the USSR Academy of Sciences having requested a connection to the EARN network for their computers ... From: Hank Nussbacher Subject: More on Russia and networking... Some comments on David Hibler's July editorial: First, let me correct you on one point. The Soviet Union has requested connection to the network but not to BITNET - rather to EARN. If you are in favor of open communication paths then perhaps the United States and people within BITNET should stop using geocentrism when assuming that all networks revolve around them. True, many do, but the fact that Russia (and Hungary and Bulgaria) have requested EARN membership and not BITNET membership should say something to you. The major problem of connecting all these communist countries to the network is not a security fear. It is the US Dept of Commerce that forbids it. Whenever any country buys a supercomputer from the United States (Cray or ETA for example) they are required to sign a very stringent agreement with the US Dept of Commerce that that supercomputer will not be made in any way, shape or form available to communist countries - which includes via electronic methods. The US Dept of Commerce realized that one way around the trade ban would be for a non-aligned nation to order a Cray XMP/48 and install an M1 (2Mb) line to Moscow.
True, the computer never made it over the border, but its computing power would be sent over the border. So, all EARN sites (as well as many Canadian sites) that have a supercomputer connected directly or indirectly to BITNET or EARN would have to *renegotiate* their contract with the US Dept of Commerce. Feelers are being put out in that direction, but the game is just in the early innings so it is too early to tell if the US Dept of Commerce will relent and alter the supercomputer licences already issued. EARN has been working over the past year on accepting various new countries to their network. Voting was concluded last year for four new countries and their ratification was formally approved: Algeria - University of Annaba Cyprus - University of Cyprus Luxembourg - CEPS/INSTEAD Yugoslavia - UNESCO International Centre Last month two new countries were ratified as valid for EARN and they are: Morocco - EMI India - Tata Institute Currently, EARN is discussing requests from 3 eastern countries to join EARN, principal among them the USSR: Hungary USSR - USSR Academy of Sciences Bulgaria There are various legal problems with this and it may be some time before a formal decision is reached. Just thought I'd let you all know how things are currently rather than the usual speculation and philosophy behind this topic. Hank Nussbacher ------------------------------------------------------------------------ Kenneth M. King, President of EDUCOM, recently testified before the Science, Technology and Space Subcommittee of the United States Senate Committee on Commerce, Science and Transportation. Discussing the formation of a national educational and research network, King stated that "there is a broad consensus among government, education, and industry leaders that creation of a high-speed national research and education computer network is a critical national priority." The complete text of the testimony is available from the Psychology Bulletin Board in the file SENATE TESTIMON. -- Ed. ------------------------------------------------------------------------ STEVEN PINKER AND STEVAN HARNAD ON 'RULES AND LEARNING' Recently Steven Pinker presented a paper on aspects of cognitive psychology to a major international convention. Stevan Harnad has replied to that paper, followed by counter-replies from Dr. Pinker. The set of six commentaries is contained in the file HARNAD PRINCE on the Psychology Bulletin Board, and provides a good example of how electronic mail networking can facilitate the work of science by providing rapid communication among scholars. The files are reprinted from another electronic mail list. (Ed.) ------------------------------------------------------------------------ From: mdw at INF.RL.AC.UK INTERACTING WITH COMPUTERS - CALL FOR PAPERS The Interdisciplinary Journal of Human-Computer Interaction INTERACTING WITH COMPUTERS will provide a new international forum for communication about HCI issues between academia and industry. It will allow information to be disseminated in a form accessible to all HCI practitioners, not just to academic researchers. This new journal is produced in conjunction with the BCS Human-Computer Interaction Specialist Group. Its aim is to stimulate ideas and provoke widespread discussion with a forward-looking perspective. A dialogue will be built up between theorists, researchers and human factors engineers in academia, industry and commerce, thus fostering interdisciplinary dependencies. The journal will initially appear three times a year.
The first issue of INTERACTING WITH COMPUTERS will be published in March 1989. Each issue will contain a large number of fully refereed papers presented in a form and style suitable for the widest possible audience. All long papers will carry an executive summary for those who would not read the paper in full. Papers may be of any length but content will be substantial. Short applications-directed papers from industrial contributors are actively encouraged. Every paper will be refereed not only by appropriate peers but also by experts outside the area of specialisation. It is intended to support a continuing commentary on published papers by referees and journal readers. The complete call for papers is in the file JOURNAL INT-COMP available from the Psychology Bulletin Board -- Ed. ------------------------------------------------------------------------ From: Walter Hudson AIWWH at ASUACAD FREE STATISTICAL SOFTWARE FOR STUDENTS -- The WALMYR Publishing Co. has released a free Student Edition of the "Statistical Package for the Personal Computer" or SPPC Program. It is NOT public domain software. It is copyrighted and cannot be modified in any manner. However, you may copy the Student Edition of the SPPC Program and give a copy to every student in your class, school, college or university. Or, you may install the SPPC on a local area network or LAN system and thereby make it available to all students. If you would like to have a copy of the Student Edition of the SPPC program, send four formatted blank diskettes and a stamped self-addressed return mailer to the Software Exchange, School of Social Work, Arizona State University, Tempe, AZ 85287. Please note: Diskettes will not be returned unless adequate postage is enclosed. ------------------------------------------------------------------------ From: Walter Hudson AIWWH at ASUACAD FREE CLINICAL ASSESSMENT SYSTEM FOR STUDENTS -- Walter Hudson has just released a FREE Student Version of the "Clinical Assessment System" or CAS program which may be used for classroom or field practicum training in psychiatry, clinical psychology, social work, counseling and other human service professions. It is an extensive system which enables future clinical practitioners to learn computer-based clinical assessment and progress monitoring. It is shipped with 20 clinical scales ready for use and the CAS program is designed for interactive use with clients. It administers and scores the scales and sends graphic output to the screen, disk files, and printer. If you want a copy of the CAS program, send three formatted blank floppies and a stamped self-addressed return mailer to the Software Exchange, School of Social Work, Arizona State University, Tempe, AZ 85287. NOTE: Diskettes will NOT be returned unless adequate postage is enclosed. The Student Version of the CAS program may be copied for and distributed to virtually every student in your school, department or university. The aim of this FREE Student Edition is to encourage students to learn how to monitor and evaluate their clinical practice.
------------------------------------------------------------------------ ------------------------------------------------------------------------ FILES ARRIVING SINCE THE LAST ISSUE ________________________________________________________________________
FILENAME FILETYPE    | (Posting Date) FILE CONTENTS
------------------------------------------------------------------------
AIDSNEWS 57            (09.01.88) AIDS Newsletter
AIDSNEWS 58            (09.01.88) AIDS Newsletter
AIDSNEWS 59            (09.01.88) AIDS Newsletter
AIDSNEWS 60            (09.05.88) AIDS Newsletter
AIDSNEWS 61            (09.05.88) AIDS Newsletter
AIDSNEWS SIGNUP        How to get the latest AIDS news automatically
BITNET SERVERS         (09.01.88) - other fileservers on Bitnet
COMPUTER SOCV3N24      (09.05.88) Computer and Society Digest
CRTNET 150             (09.05.88) Communications Research and Theory Newsletter
CRTNET 151             (09.05.88) Communications Research and Theory Newsletter
CRTNET 152             (09.08.88) Communications Research and Theory Newsletter
FONETIKS 880901        (09.09.88) Phonetics Newsletter
HARNAD PRINCE          (09.10.88) S.Harnad&S.Pinker discuss 'On Rules&Learning'
JOURNAL INT-COMP       (09.10.88) New Journal Paper Call-'Interacting w/Computers'
MEDNEWS VOL1N33        (09.02.88) Health Info-Com Network Newsletter
MEDNEWS VOL1N34        (09.05.88) Health Info-Com Network Newsletter
MEDNEWS VOL1N35        (09.09.88) Health Info-Com Network Newsletter
NETMONTH 1988AUG       (09.05.88) Bitnet MONTHLY news magazine
------------------------------------------------------------------------ HOW TO REQUEST FILES Most (but not quite all) Bitnet users of this service can request files interactively from userid UH-INFO at node UHUPVM1. If your request is valid and the links between your node and the University of Houston are all operating, your request will be acknowledged automatically and your file will arrive in a few seconds or minutes, depending on how busy the system is. To make the request use the same method you use to 'chat' or talk interactively with live users at other nodes. From a CMS node this might look like: TELL UH-INFO AT UHUPVM1 PSYCHNET SENDME filename filetype from a VAX system it might look like: SEND/REMOTE UH-INFO at UHUPVM1 PSYCHNET SENDME filename filetype At other Bitnet sites (or if these fail for you) check with your local computer center for the exact syntax. If you are not at a Bitnet site (or if within Bitnet you cannot 'chat' or talk interactively with live people at other nodes) send an electronic mail letter to userid EPSYNET at node UHUPVM1 with your request, including a comment that your site cannot send interactive commands. Bob Morecock will send out your requested file, usually the same day that your letter arrives. ------------------------------------------------------------------------ ** End of Psychology Newsletter ** ------------------------------------------------------------------------ From ANDERSON%BROWNCOG.BITNET at MITVMA.MIT.EDU Tue Jun 6 06:52:25 2006 From: ANDERSON%BROWNCOG.BITNET at MITVMA.MIT.EDU (ANDERSON%BROWNCOG.BITNET@MITVMA.MIT.EDU) Date: 5-OCT-1988 14:49:48.60 Subject: Technical Report Message-ID: A technical report is available from the Brown University Department of Cognitive and Linguistic Sciences: Technical Report 88-01 Department of Cognitive and Linguistic Sciences Brown University Representing Simple Arithmetic in Neural Networks Susan R. Viscuso, James A. Anderson and Kathryn T. Spoehr This report discusses neural network models of qualitative multiplication. We review past research in magnitude representation and cognitive arithmetic.
We then develop a framework for building neural network models that exhibit behaviors that mimic the empirical results. The simulations show that neural net models can carry out qualitative multiplication given an adequate representation of magnitude information. It is possible to model a number of interesting psychological effects such as associative interference, practice effects, and the symbolic distance effect. However, this set of simulations clearly shows that neural networks are not satisfactory as devices for doing accurate arithmetic. It is possible to spend many hours of supercomputer CPU time teaching multiplication to a network, and still have a system that makes many errors. If, however, instead of accuracy we view this simulation as developing a very simple kind of `number sense,' with the formation and use of internal representations of sizes of numbers, then the simulation is more interesting. When real mathematicians and real physicists think about mathematics and physics, they rarely use logic or formal reasoning, but use past experience and their intuitive understanding of the complex systems they work on. We suspect a useful goal for network models may be to develop similar qualitative intuition in complex problem solving domains. This technical report can be obtained by sending an email message to Anderson at BROWNCOG (BITNET) or a request to: James A. Anderson Department of Cognitive and Linguistic Sciences Box 1978 Brown University Providence, RI 02912 Make sure you give your mailing address in your message. From connectionists-request at cs.cmu.edu Tue Jun 6 06:52:25 2006 From: connectionists-request at cs.cmu.edu (connectionists-request@cs.cmu.edu) Date: Fri, 7 Oct 1988 12:21-EDT Subject: About Re: Technical Report messages Message-ID: <592244469/mjw@F.GP.CS.CMU.EDU> Hello. I am the current connectionists mailing list maintainer. Since "connectionists" is currently sent to 407 addresses, including several which redistribute it to many other addresses, it would be greatly appreciated if readers could exercise extreme care in sending responses to the authors of messages. In particular, please avoid using the `reply' function of your mailer to respond to posts on "connectionists"; in most cases this results in messages going back to connectionists at cs.cmu.edu (whence the messages come) rather than the intended recipient. Thank you very much for your cooperation in this matter. Michael Witbrock connectionists-request at cs.cmu.edu From farrelly%ics at ucsd.edu Tue Jun 6 06:52:25 2006 From: farrelly%ics at ucsd.edu (Kathy Farrelly) Date: 12 October 1988 1458-PDT (Wednesday) Subject: tech report available Message-ID: <8810122158.AA17267@sdics.ICS> If you'd like a copy of the following tech report, please write, call, or send e-mail to: Kathy Farrelly Cognitive Science, C-015 University of California, San Diego La Jolla, CA 92093-0115 (619) 534-6773 farrelly%ics at ucsd.edu Report Info: A LEARNING ALGORITHM FOR CONTINUALLY RUNNING FULLY RECURRENT NEURAL NETWORKS Ronald J. Williams, Northeastern University David Zipser, University of California, San Diego The exact form of a gradient-following learning algorithm for completely recurrent networks running in continually sampled time is derived. Practical learning algorithms based on this result are shown to learn complex tasks requiring recurrent connections. In the recurrent networks studied here, any unit can be connected to any other, and any unit can receive external input.
These networks run continually in the sense that they sample their inputs on every update cycle, and any unit can have a training target on any cycle. The storage required and computation time on each step are independent of time and are completely determined by the size of the network, so no prior knowledge of the temporal structure of the task being learned is required. The algorithm is nonlocal in the sense that each unit must have knowledge of the complete recurrent weight matrix and error vector. The algorithm is computationally intensive on sequential computers, requiring a storage capacity of order the 3rd power of the number of units and computation time on each cycle of order the 4th power of the number of units. The simulations include examples in which networks are taught tasks not possible with tapped delay lines; that is, tasks that require the preservation of state. The most complex example of this kind is learning to emulate a Turing machine that does a parenthesis balancing problem. Examples are also given of networks that do feedforward computations with unknown delays, requiring them to organize into the correct number of layers. Finally, examples are given in which networks are trained to oscillate in various ways, including sinusoidal oscillation. From CDTPJB at CR83.NORTH-STAFFS.AC.UK Tue Jun 6 06:52:25 2006 From: CDTPJB at CR83.NORTH-STAFFS.AC.UK (CDTPJB@CR83.NORTH-STAFFS.AC.UK) Date: 13-OCT-1988 11:20:53 Subject: No subject Message-ID: Here in the Computing Department at Staffordshire Polytechnic, we are involved in research, consultancy and teaching in AI. Could you please send me some information about the bulletin board that I have been told you organise? Phil Bradley. | Post: Computer Centre JANET: cdtpjb at uk.ac.nsp.cr83 | Staffordshire Polytechnic DARPA: cdtpjb%cr83.nsp.ac.uk at cunyvm.cuny.edu | Blackheath Lane Phone: +44 785 53511 | Stafford, UK | ST18 0AD From Connectionists-Request at cs.cmu.edu Tue Jun 6 06:52:25 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Fri, 18 Nov 1988 12:30-EST Subject: A plea from connectionists request. Message-ID: <595877413/mjw@F.GP.CS.CMU.EDU> Please NEVER use the reply function of your mailers to respond to posts on connectionists. Doing so often causes a copy of your message to be sent to connectionists as well as the intended recipient. Once sent to connectionists, it is forwarded to around 430 sites from New Zealand to Finland and from the US to Japan. This is not desirable, since forwarding these messages invariably costs someone something, and often, in the case of researchers outside the US, results in mail receipt charges which must be paid from the individual researcher's budget. This message is not directed at anyone in particular; it is in response to the relatively high numbers of TR and paper requests on connectionists of late. Thank you.
Michael Witbrock connectionists-request at cs.cmu.edu From Connectionists-Request at cs.cmu.edu Tue Jun 6 06:52:25 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Fri, 18 Nov 1988 17:33-EST Subject: Advice for MH users Message-ID: <595895611/mjw@F.GP.CS.CMU.EDU> Users of the MH mail system can avoid the annoying and costly (for others) mistake of sending carbon copies of messages to a mailing list by making the following changes to their $HOME/.mh_profile file: Create or modify the Alternate-Mailboxes line such that it names each of the redistribution lists you subscribe to, for example: Alternate-Mailboxes: mesard,bargain,connectionists Create or modify the repl line to contain the option "-nocc me". Note that this tells the repl(1) command not to send carbon copies to you or any of the addresses listed as your alternate mailboxes. If you really do want copies of your messages use the -fcc switch. So the line might look something like this: repl: -nocc me -fcc +outbox This advice was kindly supplied by Wayne_Mesard at BBN. Michael Witbrock From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 21 Nov 1988 23:09-EST Subject: Free Recurrent Simulator Message-ID: I wrote a bare-bones simulator for temporally recurrent neural networks in C. It simulates a network of the sort described in "Learning State Space Trajectories in Recurrent Neural Networks", and is named "full". Full simulates only fully connected networks, uses only arrays, and has no user interface at all. It was intended to be easy to translate into other languages, to vectorize and parallelize well, etc. It vectorized fully on the Convex on the first try with no source modifications. Although it is short, it is actually usable and it works well. If you wish to use full, I'm allowing access to a compressed tar file through anonymous ftp from host DOGHEN.BOLTZ.CS.CMU.EDU, user "ftpguest", password "oaklisp", file "full/full.tar.Z". Be sure to use the BINARY command, and don't use the CD command or you'll be sorry. I am not going to support full in any way, and I don't have time to mail copies out. If you don't have FTP access, perhaps someone with access will post full to the usenet, and perhaps some archive server somewhere will include it. Full is copyrighted, but I'm giving people permission to use it for academic purposes. If someone were to sell it, modified or not, I'd be really angry. From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 22 Nov 1988 12:55-EST Subject: FTPing full.tar.Z Message-ID: People have been having problems ftping full.tar.Z despite their avoiding the CD command. The solution is to specify the remote and local file names separately: ftp> get remote file: full/full.tar.Z local file: full.tar.Z For the curious, the problem is that when you type "get full/full.tar.Z" to ftp, it tries to retrieve the file "full/full.tar.Z" from the remote host and put it in the local file "full/full.tar.Z". If the directory "full/" does not exist at your end you get an error message, and said message does not say which host the file or directory does not exist on. Sorry for the inconvenience. --Barak.
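For readers who have never seen a simulator of this kind, the following is a minimal sketch, in Python rather than C and not taken from the "full" distribution, of the sort of forward dynamics a fully connected, temporally continuous recurrent network simulator integrates; the function names, parameter values, and the simple Euler integration scheme are all assumptions made purely for illustration.

    # Illustrative sketch only (not Pearlmutter's "full"): Euler integration of
    # dy/dt = (-y + sigmoid(w @ y + x_t)) / tau for a fully connected network.
    import numpy as np

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def simulate(w, x, y0, tau=1.0, dt=0.1):
        """w: (n, n) weights; x: (steps, n) input sampled every cycle;
        y0: (n,) initial states.  Returns the (steps, n) state trajectory."""
        y = y0.copy()
        trajectory = np.empty_like(x)
        for t in range(x.shape[0]):
            y = y + dt * (-y + sigmoid(w @ y + x[t])) / tau
            trajectory[t] = y
        return trajectory

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        n, steps = 5, 200
        w = rng.normal(scale=1.0, size=(n, n))   # random weights
        x = np.zeros((steps, n))                 # no external input
        traj = simulate(w, x, y0=rng.uniform(size=n))
        print(traj[-1])                          # settled unit states

A learning procedure of the kind described in the tech report would differentiate an error functional of such a trajectory with respect to the weights; this sketch shows only the forward pass.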
From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 22 Nov 1988 22:30-EST Subject: doghen's address Message-ID: A couple people have asked me for the internet address of DOGHEN.BOLTZ.CS.CMU.EDU. The answer is 128.2.222.37. --Barak. From Connectionists-Request at cs.cmu.edu Tue Jun 6 06:52:25 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Tue, 22 Nov 1988 21:32-EST Subject: Please don't access archives Message-ID: <596222655/mjw@F.GP.CS.CMU.EDU> Please refrain from accessing the connectionist archive until I post a new message with directions on how to do so. CS.CMU is changing its systems software to allow anonymous ftp, and I have to change things to reflect these changes. After the change, ftp access will be more convenient. I expect to have things fixed by Mon 28th. Regards, Michael Witbrock connectionists-request at cs.cmu.edu From n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK Tue Jun 6 06:52:25 2006 From: n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK (SHARKEY N (on Essex DEC-10)) Date: Friday, 6-Jan-89 17:47:50-GMT Subject: change of address Message-ID: <134654-573-533@uk.ac.essex> -------- please change my mailing address for the connectionist network to the following* exeter-connect at uk.ac.exeter.cs@ucl.cs.nss thanks, noel sharkey -------- From n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK Tue Jun 6 06:52:25 2006 From: n.sharkey%essex.ac.uk at NSS.Cs.Ucl.AC.UK (SHARKEY N (on Essex DEC-10)) Date: Friday, 6-Jan-89 17:53:13-GMT Subject: move Message-ID: <134654-575-456@uk.ac.essex> -------- MOVE AND ADDRESS CHANGE. I have now moved to the Department of Computer Science, University of Exeter, Exeter, Devon, UK. My new e-mail address is noel at uk.ac.exeter.cs@ucl.cs.nss noel sharkey -------- From Dean.Pomerleau at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Dean.Pomerleau at F.GP.CS.CMU.EDU (Dean.Pomerleau@F.GP.CS.CMU.EDU) Date: Thu, 26 Jan 1989 10:29-EST Subject: Tech Report Available Message-ID: <601831793/pomerlea@F.GP.CS.CMU.EDU> A shorter version of the following tech report will appear in the proceedings of the 1988 NIPS Conference. To request a copy, please send e-mail to copetas at cs.cmu.edu and ask for tech report CMU-CS-89-107. Don't forget to send your hard mail address. --Dean --------------------------------------------------------------- ALVINN: An Autonomous Land Vehicle In a Neural Network Dean A. Pomerleau January 1989 CMU-CS-89-107 ABSTRACT ALVINN (Autonomous Land Vehicle In a Neural Network) is a 3-layer back-propagation network designed for the task of road following. Currently ALVINN takes images from a camera and a laser range finder as input and produces as output the direction the vehicle should travel in order to follow the road. Training has been conducted using simulated road images. Successful tests on the Carnegie Mellon autonomous navigation test vehicle indicate that the network can effectively follow real roads under certain field conditions. The representation developed to perform the task differs dramatically when the network is trained under various conditions, suggesting the possibility of a novel adaptive autonomous navigation system capable of tailoring its processing to the conditions at hand. 
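As a rough, generic illustration of the kind of architecture the abstract above describes (a small three-layer back-propagation network mapping an input image to a vector of steering-direction outputs), here is a short Python sketch. It is not Pomerleau's code; the layer sizes, the synthetic training data, and the training loop are invented purely for illustration.

    # Generic 3-layer backprop sketch in the spirit of a road-following net.
    # NOT the ALVINN implementation; all sizes and data here are made up.
    import numpy as np

    rng = np.random.default_rng(0)
    n_in, n_hidden, n_out = 30 * 32, 29, 45   # illustrative sizes only
    lr = 0.1
    w1 = rng.normal(scale=0.1, size=(n_in, n_hidden))
    w2 = rng.normal(scale=0.1, size=(n_hidden, n_out))

    def sigmoid(a):
        return 1.0 / (1.0 + np.exp(-a))

    def train_step(image, target):
        """One backprop step on a single (image, steering-target) pair."""
        global w1, w2
        h = sigmoid(image @ w1)                   # hidden activations
        o = sigmoid(h @ w2)                       # steering-unit activations
        err_o = (o - target) * o * (1 - o)        # output deltas (squared error)
        err_h = (err_o @ w2.T) * h * (1 - h)      # hidden deltas
        w2 -= lr * np.outer(h, err_o)
        w1 -= lr * np.outer(image, err_h)
        return np.mean((o - target) ** 2)

    # Train on synthetic "images" whose target is a bump over one steering unit.
    for _ in range(100):
        image = rng.uniform(size=n_in)
        target = np.zeros(n_out)
        target[n_out // 2] = 1.0                  # pretend the road is straight ahead
        mse = train_step(image, target)
    print("final training MSE on synthetic data:", mse)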
--------------------------------------------------------------- From Connectionists-Request at cs.cmu.edu Tue Jun 6 06:52:25 2006 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Thu, 9 Feb 1989 13:22-EST Subject: FTP access to connectionists. Message-ID: <603051758/mjw@F.GP.CS.CMU.EDU> A brief note: If you are accessing the connectionist archives off b.gp.cs.cmu.edu, the ONLY two directories you will be able to access are: /usr5/connect/connectionists/bibliographies and /usr5/connect/connectionists/archives. Hope this clarifies some of the problems people have been reporting. Michael Witbrock (connectionist-request at cs.cmu.edu) From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: 16 Feb 1989 19:33-EST Subject: Tech Report Announcement Message-ID: The following tech report is available. It is a substantially expanded version of a paper of the same title that appeared in the proceedings of the 1988 CMU Connectionist Models Summer School. Learning State Space Trajectories in Recurrent Neural Networks Barak A. Pearlmutter ABSTRACT We describe a number of procedures for finding $\partial E/\partial w_{ij}$ where $E$ is an error functional of the temporal trajectory of the states of a continuous recurrent network and $w_{ij}$ are the weights of that network. Computing these quantities allows one to perform gradient descent in the weights to minimize $E$, so these procedures form the kernels of connectionist learning algorithms. Simulations in which networks are taught to move through limit cycles are shown. We also describe a number of elaborations of the basic idea, such as mutable time delays and teacher forcing, and conclude with a complexity analysis. This type of network seems particularly suited for temporally continuous domains, such as signal processing, control, and speech. Overseas copies are sent first class, so there is no need to make special arrangements for rapid delivery. Requests for copies should be sent to Catherine Copetas School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 or Copetas at CS.CMU.EDU by computer mail. Ask for CMU-CS-88-191. From Connectionists-Request at CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Wed, 15 Mar 1989 23:26-EST Subject: I hope this hadn't gone out. Message-ID: <606025575/connect@B.GP.CS.CMU.EDU> This came from someone I just added to the list. I hope I'm not repeating stuff. michael Date: Mon, 13 Mar 89 15:30:04 GMT From: mcvax!inesc!alf!lba at uunet.UU.NET (Luis Borges de Almeida) To: cs.cmu.edu!connectionists-request at inesc.INESC Subject: Workshop announcement EURASIP WORKSHOP ON NEURAL NETWORKS Sesimbra, Portugal February 15-17, 1990 ANNOUNCEMENT AND CALL FOR PAPERS The workshop will be held at the Hotel do Mar in Sesimbra, Portugal. It will take place in 1990, from the morning of February 15 to noon on February 17, and will be sponsored by EURASIP, the European Association for Signal Processing. It will be open to participants from all countries. Contributions from all fields related to the neural network area are welcome. A (non-exclusive) list of topics is given below. Care is being taken to ensure that the workshop will have a high level of quality. Proposed contributions will be evaluated by an international board of experts, and a proceedings volume will be published. The number of participants will be limited to 50.
Full contributions will take the form of oral presentations, and will correspond to papers in the proceedings. Some short contributions will also be accepted, for presentation of ongoing work, projects (ESPRIT, BRAIN, DARPA,...), etc. They will be presented in poster format, and will not originate any written publication. A small number of non-contributing participants may also be accepted. The official language of the workshop will be English. TOPICS: - signal processing (speech, image,...) - pattern recognition - training procedures (new algorithms, speedups,...) - generalization - implementation - specific applications where NNs have been proved better than other approaches - industrial projects and realizations SUBMISSION PROCEDURES Submissions, both for long and for short contributions, will consist of (strictly) 2-page summaries, plus a cover page indicating title, author's name, affiliation, phone no., and e-mail address if possible. Three copies should be sent directly to the Technical Chairman, at the address given below. The calendar for contributions is as follows:
                          Full contributions    Short contributions
Deadline for submission   June 1, 1989          Oct 1, 1989
Notif. of acceptance      Sept 1, 1989          Nov 15, 1989
Camera-ready paper        Nov 1, 1989
THE LOCATION Sesimbra is a fishermen's village, located in a nice region about 30 km south of Lisbon. Special transportation from/to Lisbon will be arranged. The workshop will end on a Saturday at lunch time; therefore, the participants will have the option of either flying back home in the afternoon, or staying for sightseeing for the remainder of the weekend in Sesimbra and/or Lisbon. An optional program for accompanying persons is being organized. For further information, send the coupon below to the general chairman, or contact him directly. ORGANIZING COMMITTEE: GENERAL CHAIRMAN Luis B. Almeida INESC Apartado 10105 P-1017 LISBOA CODEX PORTUGAL Phone: +351-1-544607. Fax: +351-1-525843. E-mail: {any backbone, uunet}!mcvax!inesc!lba TECHNICAL CHAIRMAN Christian Wellekens Philips Research Laboratory Av. Van Becelaere 2 Box 8 B-1170 BRUSSELS BELGIUM Phone: +32-2-6742275 TECHNICAL COMMITTEE John Bridle Herve Bourlard Frank Fallside Francoise Fogelman-Soulie Jeanny Herault Larry Jackel Renato de Mori REGISTRATION, FINANCE, LOCAL ARRANGEMENTS Joao Bilhim INESC Apartado 10105 P-1017 LISBOA CODEX PORTUGAL Phone: +351-1-545150. Fax: +351-1-525843. --------------------------------------------------------------------- Please keep me informed about the Sesimbra Workshop on Neural Networks Name: University/Company: Address: Phone: E-mail: [ ] I plan to attend the workshop I plan to submit a contribution [ ] full [ ] short Preliminary title: (send to Luis B. Almeida, INESC, Apartado 10105, P-1017 LISBOA CODEX, PORTUGAL) From Connectionist.Research.Group at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Connectionist.Research.Group at B.GP.CS.CMU.EDU (Connectionist.Research.Group@B.GP.CS.CMU.EDU) Date: Wed, 15 Mar 1989 23:13-EST Subject: Forwarded Message-ID: <606024818/connect@B.GP.CS.CMU.EDU> From: Mahesan Niranjan Date: Mon, 13 Mar 89 10:45:19 GMT Subject: Not Total Squared Error Criterion Re: > Date: Wed, 08 Mar 89 11:36:31 EST > From: thanasis kehagias > Subject: information function vs. squared error > > i am looking for pointers to papers discussing the use of an alternative > criterion to squared error, in back propagation algorithms. the [..]
> G=sum{i=1}{N} p_i*log(p_i) > Here is a non-causal reference: I have been looking at an error measure based on "approximate distances to class-boundary" instead of the total squared error used in typical supervised learning networks. The idea is motivated by the fact that a large network has an inherent freedom to classify a training set in many ways (and thus poor generalisation!). In my training, an example of a particular class gets a target value depending on where it lies with respect to examples from the other class (in a two class problem). This implies that the target interpolation function that the network has to construct is a smooth transition from one class to the other (rather than a step-like cross section in the total squared error criterion). The important consequence of doing this is that networks are automatically deprived of the ability to form large-weight (i.e., sharp cross section) solutions. niranjan PS: A Tech report will be announced soon. From Connectionists-Request at CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Tue, 21 Mar 1989 12:38-EST Subject: Forward from Cybenko Message-ID: <606505108/connect@B.GP.CS.CMU.EDU> George Cybenko is now on connectionists. Here is his first message. ********************************************************* The paper of mine that Alexis Wieland mentioned in a note is titled "Approximations by superpositions of a sigmoidal function" and it will appear in the journal "Mathematics of Control, Signals and Systems" published by Springer-Verlag soon. People interested in a preprint can write to me at George Cybenko Center for Supercomputing Research and Development 319G Talbot Lab University of Illinois at Urbana Urbana, IL 61801 or call (217) 244-4145 or send email to gc at uicsrd.csrd.uiuc.edu. In addition to the result mentioned by Wieland, there is a treatment of other possible activation functions. George Cybenko From Connectionists-Request at CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Tue, 21 Mar 1989 12:44-EST Subject: Forward call for papers Message-ID: <606505474/connect@B.GP.CS.CMU.EDU> From: "Robert L. Russel" Subject: CALL FOR PAPERS - AIST * * * CALL FOR PAPERS * * * -- The Association for Intelligent Systems Technology (AIST) -- The Association for Intelligent Systems Technology (AIST), a chartered not-for-profit organization, is seeking noteworthy papers to appear in the Spring 1989 issue of the association's official publication, INTELLIGENT SYSTEMS REVIEW. In keeping with the AIST purpose, papers may be one of four kinds: 1. A description of original research accomplished and findings contributing to the advancement of artificial intelligence and neural networks technology. 2. A description of the applications of AI and neural networks technology to a problem in business, engineering, financial operations, or education. 3. A description of a business engaged in engineering, systems development, financial operations, medicine, education, etc. for which one or more applications of intelligent systems technology has had a significant impact on the effectiveness, productivity or profitability of the business, including a description of the application and how it was implemented. 4.
Description of an educational program intended to impart knowledge and develop skills on the part of individuals having interest in the application of AI/Neural Networks to business and the professions. The ISR accepts written submissions featuring items such as: -- Original Research: Peer-reviewed, high quality research results representing new and significant contributions to AI/Neural Networks and its applications. -- Articles: Unrefereed technical articles focused on informative reviews or tutorials in the author(s)' specialty area, or invited articles as solicited by the ISR editors. -- Letters to the editor: Comments on research papers or articles published in ISR and other matters of interest to AIST. -- Editorials: Commentary on technical/professional issues significant to the AIST community. -- Institutional Research/Project: Introduction of R&D or contract work performed by an organization. Original research papers in the ISR are refereed by one or more peer researchers selected by the Editorial Board. All other articles in the ISR are unrefereed working papers. Authors of the papers accepted for publication will be provided with specific instructions for preparing the final camera-ready manuscript. Author(s) also must sign and date a Transfer of Copyright form to be sent to AIST, Inc. Papers should be about 5000 words (10 pages) in length. They may include line drawings but photography requiring color or gray scale reproduction should not be included. Papers must be submitted by May 15, 1989 to appear in the Spring issue. Contributions are welcomed from any person. All contributions sent to the editors will be assumed to be for consideration for publication unless specified otherwise. The written material will not be returned. Send papers to: Editorial Board, AIST, Inc., 6310 Fly Road, E. Syracuse, N.Y. 13057. For Additional Information Call: Major Bob Russel, Neural Networks Applications Editor, (315)330-7069, or Mr. Doug White, Military (C3I) Applications Editor, (315)330-3564. From Connectionists-Request at CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Mon, 10 Apr 1989 11:29-EDT Subject: Can anyone answer this question. Message-ID: <608225360/connect@B.GP.CS.CMU.EDU> Message follows - Michael Date: Wed, 5 Apr 89 11:02:07 EST From: nunez at en.ecn.purdue.edu (Fernando J Nunez) To: Connectionists-Request at cs.cmu.edu Subject: A new VLSI-NN system? I am interested in all kinds of VLSI-NNs. In the process of gathering information about implementations, I found a short and confusing article in BusinessWeek, March 6, 1989, p. 103. I have extracted the following: "Neural net chips just won't be able to take the heat. That's because neural nets are based on analog technology, .... Such microcircuitry is much more temperature-sensitive than digital transistors... So Steven G. Morton, a former ITT Corp. researcher, formed Oxford Computer in Oxford, Conn., to pioneer neural nets based on digital memory-chip technology. Although Morton's ideas were met with widespread skepticism, scientists at MIT recently scrutinized his designs and pronounced them apparently sound." I attribute the contradictions, or at least inaccuracies, to the lack of audit of the article by an expert. I would appreciate it if someone could tell me where I can find a description of this work. I hope it won't be in FORBES magazine. Here are my name and electronic address. Fernando J.
Nunez nunez at en.ecn.purdue.edu From FEILDWB%SERVAX.BITNET at VMA.CC.CMU.EDU Tue Jun 6 06:52:25 2006 From: FEILDWB%SERVAX.BITNET at VMA.CC.CMU.EDU (WILLIAM=FEILDJR) Date: 04/26/89 13:36:09 EST Subject: lodging at conference Message-ID: I am a Phd Candidate at Fla International University in Miami. I will be attending the Neural Network Conference in Washington D.C. 6/18-6/21 at MY OWN EXPENSE! I could use some help in the lodging department. If anyone is planning on attending the conference and would like to share expenses for a room, please contact me. I would greatly appreciate it. I can be reached at: William B Feild Jr Fla International University School of Computer Science University Park Campus Miami Fl 33199 (305) 554-2744 School (305) 595-2017 Home Feildwb at servax.bitnet CC : MAILER at CMUCCVMA From melkman%BENGUS.BITNET at VMA.CC.CMU.EDU Tue Jun 6 06:52:25 2006 From: melkman%BENGUS.BITNET at VMA.CC.CMU.EDU (melkman abraham) Date: Tue, 1 Aug 89 16:16:51-020 Subject: info on character recognizers Message-ID: <8908011816.AA09067@red.bgu.ac.il> I am interested in all information regarding (hand-written) character recognizer learning algorithms, pros and cons of architectures, functional systems etc. Please add citations where possible. I will appreciate your replies, and summarize them if so desired. Thanks, Avraham Melkman From Marco.Zagha at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Marco.Zagha at B.GP.CS.CMU.EDU (Marco.Zagha@B.GP.CS.CMU.EDU) Date: Tue, 29 Aug 1989 09:18-EDT Subject: Connections Per Second Message-ID: <620399884/marcoz@MARCOZ.BOLTZ.CS.CMU.EDU> There are two problems with reporting just connections per second, even if the "standard" method is used. Connections from the input units require 4 floating point operations per pattern, while connections in higher layers require 6 floating point operations per pattern (because of the recursive error computation). Comparing just the CPS rating between a NetTalk benchmark and an encoder is not quite fair, since the typical network for NetTalk has many more connections from the input layer. Another problem is that there are different ways of updating weights: every pattern, every epoch, or every N patterns. Unless someone can come up with a good way of capturing these effects in a single number, I would suggest always giving the full details on the network used and the frequency of weight updates. It would also be nice to report a performance model which estimates CPS in terms of frequency of weight updates, number of weights, number of weights in the input layer, number of patterns, etc. == Marco Zagha (marcoz at cs.cmu.edu) From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: >do not depend on the net's actually being implemented in parallel, >rather than just being serially simulated? Is it only speed and >capacity parameters, or something more? I have a simple answer.
In practice, I've resisted using parallel machines to run backprop simulations because in deciding how best to parallelize the problem for a given machine, you tend to make choices you're less likely to make with a very fast serial machine. So, for example, if you parallelize over patterns (one machine node processes the entire net for a given pattern) you sacrifice the capability to update the weights after every pattern. These choices tend to be different for different parallel machines. This experience makes me suspect that, in modeling the brain, the specifics of the parallel implementation (e.g., restriction to local connectivity) are likely to determine the nature of information representation and learning algorithms, as well as of what types of information processing the organism is capable. Gale Martin MCC From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: about Turing equivalence or lack thereof lose some of their force. I would hope that everyone would agree that no finite connectionist hardware could be more powerful than a conventional finite state automaton, and it would be nice if everybody could also agree that no amoeba, no starfish, no koala bear, and no human being can be more powerful than a finite-state automaton either. There is, of course, the question of whether a connectionist machine would be as powerful as a finite-state automaton, but this strikes me as trivial. (Is it?) Some of you may then ask, why anybody bothers with Turing machines or any machines more powerful than finite-state ones. For example, what of all the arguments of Chomsky et al. about NLs not being finite-state? This leads to the next point. (3) Equivalence with respect to I/O behavior is not the sum total of what the theory of computation has taught us. Thus, while I would claim that no physically realized system can be more powerful than a finite-state machine in the weak sense, the same is not true in other senses. The correct view of the matter is, I am pretty sure, that machines more powerful than the finite-state models are useful mathematical tools for proving results which do in fact apply to physically realized (and hence really finite-state) systems, but which would be more difficult to prove otherwise.
Thus, the claim that NLs are not finite-state, for example, should really be taken to mean that NLs have constructions (such as center-embedding or--much more commonly--repetition) which have the following feature: given a finite-state machine of the usual sort, the size of the machine increases in proportion to an increase in the size of a finite set of tokens of such a construction. Hence, in a sense which can be made more precise, one wants to say that the finite-state machine cannot "know" these patterns. On the other hand, a machine with pushdown storage for center embedding or queue storage for repetition, even if it is strictly finite and so only recognizes a finite set of such tokens, can be modified only slightly to recognize a larger such set (the modification would consist in extending the storage by a cell). In a sense that can be made precise, such a machine "knows" the infinite pattern even though it can only recognize at any given time a finite set of tokens of the pattern. It has always seemed more convenient to state the relevant results in terms of idealized machines which actually have infinite memory, but in reality we are talking about machines that are finite-state in power but have a cleverer kind of design, which allows them in a sense to "know" more than a conventional finite-state machine. Thus, we have to have a broader conception of what mathematical equivalence is. A finite-state machine is weakly equivalent to a pushdown machine with bounded storage, yet there is a well-defined sense in which the latter is more powerful. (4) Hence, it would be useful to know whether connectionist models (the theoretical ones) are equivalent to Turing machines or at least to some class of machines that can handle center-embedding, repetition, and a few other non-finite-state properties of NLs, for example. For the idea would be that this would tell us IN WHAT SENSE connectionist models (those actually implementable) are equivalent to finite-state machines. The crucial point is this: a finite-state machine designed to handle repetition must be drastically altered each time we increase the size of repetitions we want handled. A simple TM (or a queue machine) with a bound on its memory is I/O equivalent to a finite-state machine, but in order for it to handle larger and larger repetitions, it suffices to keep extending its tape (queue), a much less radical change (in a well-defined sense) than that required in the case of a finite-state machine. Turning back to connectionist models, the question then is whether, to handle non-finite-state linguistic constructions (or other such cognitive tasks), they have to be altered as radically as finite-state machines do (in effect, by adding new states) or less radically (as in the case of TMs and other classes of automata traditionally assumed to have infinite memory). (5) Perhaps I should add, by way of analogy, that there are many other situations where it is more clearly understood that the theory of computation deals with a class of formalisms that cannot be physically realized but the results are really intended to tell us about formalisms that are realizable. The case of nondeterminism is an obvious one. Nondeterministic machines (in the sense used in automata theory; this is quite different from nondeterminism in physical theory or in colloquial usage) cannot be physically realized.
But it is convenient to be able to pull tricks such as (a) prove the equivalence of regular grammars to nondeterministic finite-state machines, (b) prove the equivalence of nondeterministic and deterministic finite-state machines, and (c) be able to conclude that regular grammars might be a useful tool for studying physically realizable deterministic finite-state machines. It is much easier to do things this way than to do things directly, in many cases, but the danger is that people (even the initiated and certainly the uninitiated) will then assume either that the impossible devices are really possible or that the theory is chimerical. I claim that this precisely what has happened with the concept of machines with infinite memory (such as Turing machines). (6) Given what has been said, we can I think also make sense of the results that show that certain connectionist models are MORE powerful than Turing machines. This might seem to contradict the Church-Turing thesis (which I have been assuming throughout), but in fact the Church-Turing thesis implicitly refers only to situations where only time and memory are infinite, but where the algorithm is finitely specified, whereas the results I am alluding to deal with (theoretical) models of nets that are infinite. In more traditional terms, these would correspond to machines like finite-state machines but with an infinite number of states. There is no question but that with an infinite number of states you could do all sorts of things that you cannot do with a finite number but it is unclear to me that such results offer any comfort to the proponents of net architectures. (8) Finally (at least for now), it seems to me that there remain significant gaps between what the theory of computation tells us about and what we need when we try to understand the behavior of complex systems (not only human beings but even real computer software and hardware!) even after some of the confusions I have been trying to dispel are dispelled. While there are many formal notions of equivalence beyond that of I/O equivalence which can make our discussion of theoretical models of language or cognition more useful (higher-level if you will) without sacrificing precision, I don't think that we have developed the mathematical definitions that we need to really capture what goes on in the highest levels of current theoretical thinking. Thus, I would not want to say that the question of the equivalence of connectionist and standard models can be put to rest with current tools of the theory of computation. Rather, I would say that (a) at certain levels the two are equivalent (assuming somebody can do the necessary work of drafting and proving the theorems) (b) at certain other levels they are not, (c) the level at which most people in the field are thinking about the problem intuitively has probably not been formalized, (d) only if this level is formalized will we know what we are really talking about and it is only by deepening the mathematical models that we will achieve this, not by scuttling them, and (e) ultimately the answers we want will have to come from a marriage of a more complete mathematical theory with a richer empirical science of cognition. But neither is achievable without the other, for Every science gets the math that it deserves, and vice versa. 
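A small Python sketch (not part of the original posting, and purely illustrative) of the point above about bounded storage: a recognizer for the repetition-like pattern a^n b^n can be built around a single bounded counter standing in for pushdown storage, and enlarging the finite set of tokens it accepts is just a matter of raising its capacity parameter, whereas a conventional finite-state machine would have to be redesigned with new states.

    def accepts(string, capacity):
        """Accept a^n b^n for 0 <= n <= capacity, reject everything else."""
        count = 0
        i = 0
        while i < len(string) and string[i] == 'a':
            count += 1
            if count > capacity:          # bounded storage exhausted
                return False
            i += 1
        while i < len(string) and string[i] == 'b':
            count -= 1
            if count < 0:
                return False
            i += 1
        return i == len(string) and count == 0

    assert accepts("aaabbb", capacity=5)
    assert not accepts("aaabbb", capacity=2)   # same machine, storage too small
    assert not accepts("aabbb", capacity=5)    # not an instance of the pattern
    assert accepts("aaaabbbb", capacity=8)     # "knows" the pattern once extended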
Alexis Manaster-Ramer POB 704 IBM Research Yorktown Heights NY 10598 amr at ibm.com (914) 789-7239 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: Wed, 7 Mar 1990 21:10-EST Subject: Linear separability In-Reply-To: Chaim Gotsman's mail message of Wed, 7 Mar 90 22:48:02 +0200 Message-ID: A simple reduction shows that linear separability of functions specified by boolean circuits is NP-complete. If we let g(x_0,...,x_n,x_{n+1},x_{n+2}) = f(x_0,...,x_n) AND (x_{n+1} XOR x_{n+2}) then f is satisfiable if and only if g is not linearly separable, so linear separability of boolean circuits must be NP-complete. Barak Pearlmutter. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: A level is composed of a "MEDIUM that is to be processed, COMPONENTS that provide primitive processing, LAWS OF COMPOSITION that permit components to be assembled into SYSTEMS, and LAWS OF BEHAVIOR that determine how system behavior depends on the component behavior and the structure of the system". Through the word "level" appears, intuitively, the notion of an entity with some AUTONOMY, the fact that there are several such entities, and RELATIONS (hierarchy), composition laws between them. Through the word "organization" (within a level) appears the notion of REGULARITIES, laws (of behavior), within a level. 2.2 Sketch of a formal description : ***************************************************************** Let us consider : - a set S1 and a relation R1 on S1, both making up a structure <S1,R1>; - a set S2, disjoint from S1, and a relation R2 on S2, both making up a structure <S2,R2>; - a bijective function f from S1**n (S1xS1x...xS1) to S2, n>1; then I suggest to say that : [<S1,R1>,<S2,R2>,f] represents an (atomic) organization hierarchy iff from the relation R2 on S2, through f-1 (the inverse function of f), it is not possible to infer a relation R1' on S1. ***************************************************************** For instance, if we consider the instruction and bit levels of a computer, from a (logic) relation between two successive instructions it is not possible to infer a relation between the bits corresponding to these instructions. Or, in the language domain, from a (syntactic or semantic) relation between words it is not possible to infer a relation between the letters which make up these words. This definition does not take into account any environment of the object or phenomenon. Only the major notions mentioned above : - regularities: relations R1, R2; - composition law between levels: f; - autonomy: no R1' relation. have been considered. Can such a definition be a (syntactic) bootstrap of a formal description of organization levels ? From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: There are actually many models in the literature; most of the people studying the problem seem not to be aware of each other's work. In addition to those already mentioned, [Feldman+Ballard] have proposed a model inspired by [Treisman+Gelade]. (References are at the end of this message.) [Strong+Whitehead] have implemented a similar model. [Fukushima]'s model is rather different.
[Koch+Ullman]'s and [Mozer]'s models seem closest to the psychophysical and neurophysiological data to me. I [Chapman] have implemented [Koch+Ullman]'s proposal successfully and used it to model in detail the psychophysically-based theories of visual search due to [Treisman+Geldade, Treisman+Gormican]. My thesis [Chapman] describes these and other models of attention and tries to sort out their relative strengths. It's probably time for someone to write a review article. Cites: David Chapman, {\em Vision, Instruction, and Action.} PhD Thesis, MIT Artificial Intelligence Laboratory, 1990. Jerome A.~Feldman and Dana Ballard, ``Connectionist Models and their Properties.'' {\em Cognitive Science} {\bf 6} (1982) pp.~205--254. Kunihiko Fukushima, ``A Neural Network Model for Selective Attention in Visual Pattern Recognition.'' {\em Biological Cybernetics} {\bf 55} (1986) pp.~5--15. Christof Koch and Shimon Ullman, ``Selecting One Among the Many: A Simple Network Implementing Shifts in Selective Visual Attention.'' {\em Human Neurobiology} {\bf 4} (1985) pp.~219--227. Also published as MIT AI Memo 770/C.B.I.P.~Paper 003, January, 1984. Michael C.~Mozer, ``A connectionist model of selective attention in visual perception.'' {\em Program of the Tenth Annual Conference of the Cognitive Science Society}, Montreal, 1988, pp.~195--201. Gary W.~Strong and Bruce A.~Whitehead, ``A solution to the tag-assignment problem for neural networks.'' {\em Behavioral and Brain Sciences} (1989) {\bf 12}, pp.~381--433. Anne M.~Treisman and Garry Gelade, ``A Feature-Integration Theory of Attention.'' {\em Cognitive Psychology} {\bf 12} (1980), pp.~97--136. Anne Treisman and Stephen Gormican, ``Feature Analysis in Early Vision: Evidence From Search Asymmetries.'' {\em Psychological Review} Vol.~95 (1988), No.~1, pp.~15--48. Some other relevant references: C.~H.~Anderson and D.~C.~Van Essen, ``Shifter circuits: A computational strategy for dynamic aspects of visual processing.'' {\em Proceedings of the National Academy of Sciences, USA}, Vol.~84, pp.~6297--6301, September 1987. Francis Crick, ``Function of the thalamic reticular complex: The searchlight hypothesis.'' {\em Proceedings of the National Academy of Science}, Vol.~81, pp.~4586--4590, July 1984. Jefferey Moran and Robert Desimone, ``Selective attention gates visual processing in the extrastriate cortex.'' {\em Science} {\bf 229} (1985), pp.~782--784. V.~B.~Mountcastle, B.~C.~Motter, M.~A.~Steinmetz, and A.~K.~Sestokas, ``Common and Differential Effects of Attentive Fixation on the Excitability of Parietal and Prestriate (V4) Cortical Visual Neurons in the Macaque Monkey.'' {\em The Journal of Neuroscience}, July 1987, 7(7), pp.~2239--2255. John K.~Tsotsos, ``Analyzing vision at the complexity level.'' To appear, {\em Behavioral and Brain Sciences}. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From melkman%BENGUS.BITNET at vma.CC.CMU.EDU Tue Jun 6 06:52:25 2006 From: melkman%BENGUS.BITNET at vma.CC.CMU.EDU (melkman abraham) Date: Thu, 26 Apr 90 16:44:10-020 Subject: AI and music workshop Message-ID: <9004261844.AA05952@red.bgu.ac.il> ECAI Workshop on Artificial Intelligence and Music Stockholm, Sweden. 
Tuesday, August 7, 1990 This workshop is the successor of the four previous workshops held in the last two years: the AAAI-88 (St.Paul) and IJCAI-89 (Detroit) Workshops, the GMD Workshop that preceded the International Computer Music Conference (Bonn, Sept. 88), and the European Workshop on AI and Music (Genoa, June 89). AI and Music is an emerging discipline that involves such fields as artificial intelligence, music, psychology, philosophy, linguistics, and education. The last four workshops demonstrated a mixture of methods, that range from somewhat technical application of AI methods to music problems, to theoretical cognitive research. This workshop will focus on further deepening our understanding of those AI techniques and approaches that are relevant to music, and on the relevance of music to AI research. The workshop topics include (but are not limited to) the following: Cognitive Musicology Expert Systems and Music Knowledge Representation and Music Tutoring Neural Computation and connectionist approaches in Music Composition, Performance, Analysis tools (based on AI techniques) Multi-media Composition and Performance The Workshop is scheduled during the ECAI Tutorials (August, 7), held immediately before the ECAI Scientific Meeting (August, 8-10). ORGANISING COMMITTEE : Mira Balaban , Dept of Math and Computer Science, Ben-Gurion University, Israel. Antonio Camurri , DIST, University of Genoa, Italy. Gianni De Poli , CSC, University of Padova, Italy. Kemal Ebcioglu , IBM, Thomas J. Watson Research Center, USA. Goffredo Haus , LIM-DSI, University of Milan, Italy. Otto Laske , NEWCOMP, USA. Marc Leman , IPEM, University of Ghent, Belgium. Christoph Lischka , GMD, Federal Republic of Germany. SUBMISSION INFORMATION Submit eight copies of a camera ready manuscript (about 5 single-spaced A4 pages) to Antonio Camurri. Please follow the IJCAI standards for the preparation of the manuscript. Antonio Camurri, DIST - University of Genova Via Opera Pia, 11A - 16145 Genova, Italy e-mail: music at ugdist.UUCP Phone +39 (0)10 3532798 - 3532983; Telefax +39 (0)10 3532948 Important dates May 15, 1990: Deadline for submission. July 1, 1990: Notification of acceptance. For registration and general information on the ECAI-90 conference, please refer to: ECAI-90, c/o Stockholm Convention Bureau, Box 6911 S-102 39 Stockholm, Sweden. Tel. +46-8-230990 FAX. +46-8-348441 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From stjohn%cogsci at ucsd.edu Tue Jun 6 06:52:25 2006 From: stjohn%cogsci at ucsd.edu (Mark St. John) Date: 19 June 1990 1344-PDT (Tuesday) Subject: tech report on story comprehension Message-ID: <9006192044.AA29783@cogsci.ucsd.edu.UCSD.EDU> The Story Gestalt Text Comprehension by Cue-based Constraint Satisfaction Mark F. St. John Department of Cognitive Science, UCSD Abstract: Cue-based constraint satisfaction is an appropriate algorithm for many aspects of story comprehension. Under this view, the text is seen to contain cues that are used as evidence to constrain a full interpretation of a story. Each cue can constrain the interpretation in a number of ways, and cues are easily combined to produce an interpretation. Using this algorithm, a number of comprehension tasks become natural and easy. 
Inferences are drawn and pronouns are resolved automatically as an inherent part of processing the text. The developing interpretation of a story is revised as new information becomes available. Knowledge learned in one context can be shared in new contexts. Cue-based constraint satisfaction is naturally implemented in a recurrent connectionist network where the weights encode the constraints. Propositions are processed sequentially to add constraints to refine the story interpretation. Each of the processes mentioned above is seen as an instance of a general constraint satisfaction process. The model learns its representation of stories in a hidden unit layer called the Story Gestalt. Learning is driven by asking the model questions about a story during processing. Errors in question answering are used to modify the weights in the network via Back Propagation. ------------ The report can be obtained from the neuroprose database by the following procedure. unix> ftp cheops.cis.ohio-state.edu # (or ftp 128.146.8.62) Name (cheops.cis.ohio-state.edu:): anonymous Password (cheops.cis.ohio-state.edu:anonymous): neuron ftp> cd pub/neuroprose ftp> type binary ftp> get (remote-file) stjohn.story.ps.Z (local-file) foo.ps.Z ftp> quit unix> uncompress foo.ps.Z unix> lpr -P(your_local_postscript_printer) foo.ps From
bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Help for a NOAA connectionist "primer" Message-ID: mike, thanks for the input - it seems a cogent summary of the (many) responses I've been getting. However, it seems just about noone has really attempted a one-to-one sort of comparison using traditional pattern recognition benchmarks. Just about everything I hear and read is anecdotal. Would it be fair to say that "neural nets" are more accessible, simply because there is such a plethora of 'sexy' user-friendly packages for sale? Or is back-prop (for example) truly a more flexible and widely-applicable algorithm than other statistical methods with uglier-sounding names? If not, it seems to me that most connectionists should be having a bit of a mid-life crisis about now. rich From Dean.Pomerleau at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Dean.Pomerleau at F.GP.CS.CMU.EDU (Dean.Pomerleau@F.GP.CS.CMU.EDU) Date: Mon, 6 Aug 1990 07:46-EDT Subject: Summary (long): pattern recognition comparisons Message-ID: <649943171/pomerlea@F.GP.CS.CMU.EDU> Leonard Uhr writes > Neural nets using backprop have only handled VERY SIMPLE images, usually in > 8-by-8 arrays. and later > What experimental evidence is there that NN recognize images as complex as those > handled by computer vision and pattern recognition approaches? For the past two years I've been using backpropagation networks with 32x30 and 45x48 pixel retinas and up to ~20,000 connections to autonomously drive a Chevy van. This system, called ALVINN (Autonomous Land Vehicle In a Neural Network), uses a video camera or 2D scanning laser rangefinder as input, and outputs the direction in which the vehicle should steer. The network learns by watching a person drive for about a 1/4 mile. After about 5 MINUTES OF TRAINING, the network is able to take over and continue driving on its own.
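To give a rough, concrete picture of the kind of mapping described here, a minimal sketch (not ALVINN's actual code: the layer sizes, the 1-of-9 steering encoding, the random stand-in data, and all names are illustrative assumptions). It is a single-hidden-layer backprop net from a flattened low-resolution image to a handful of steering directions:

import numpy as np

RETINA = 30 * 32     # flattened low-resolution camera image (assumed size)
HIDDEN = 5           # a few hidden units
STEER = 9            # coarse steering directions (hard left ... hard right)

rng = np.random.default_rng(0)
W1 = rng.normal(0.0, 0.1, (HIDDEN, RETINA))
W2 = rng.normal(0.0, 0.1, (STEER, HIDDEN))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def forward(x):
    h = sigmoid(W1 @ x)              # hidden activations
    y = sigmoid(W2 @ h)              # one output unit per steering direction
    return h, y

def train_step(x, target, lr=0.1):
    """One backpropagation step on a single (image, steering) example."""
    global W1, W2
    h, y = forward(x)
    delta_y = (y - target) * y * (1.0 - y)       # output-layer error terms
    delta_h = (W2.T @ delta_y) * h * (1.0 - h)   # hidden-layer error terms
    W2 -= lr * np.outer(delta_y, h)
    W1 -= lr * np.outer(delta_h, x)

# "Watching a person drive": each frame is paired with the steering direction
# the driver actually chose, encoded as a 1-of-STEER target vector.
for _ in range(200):
    frame = rng.random(RETINA)                   # stand-in for a digitized frame
    target = np.zeros(STEER)
    target[4] = 1.0                              # stand-in for the observed steering
    train_step(frame, target)

_, out = forward(rng.random(RETINA))
print("steering choice:", int(np.argmax(out)))   # index of the most active output

ALVINN itself used a much larger network and live sensor input, but the training signal has the same shape: the current image as input and the human driver's observed steering as the target.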
Because it is able to learn what image features are important for particular driving situations, ALVINN has been successfully trained to drive in a wider variety of situations than any other single autonomous navigation system, all of which use the traditional vision processing techniques Leonard Uhr refers to. The situations ALVINN networks have been trained to handle include single lane dirt roads, single lane paved bike paths, two lane suburban neighborhood streets, lined two lane highways, and, using the laser range finder as input, parking lot driving. Because of its ability to effectively integrate multiple image features into a single steering command, ALVINN has proven more robust than other autonomous navigation systems which rely on finding one or a small number of features (like a yellow road center line) in the image. Because of the simplicity of the system, it is able to process up to 29 images per second (both training and testing are done using two Sun-4 Sparcstations). ALVINN is currently limited in the speed it can drive by the test vehicle, which has a top speed of 20 MPH. Autonomous navigation was one domain in which traditional vision researchers were initially skeptical that artificial neural networks would work at all, to say nothing of work as well or better than other systems in a wider variety of situations. --Dean Pomerleau, D.A. (1989) Neural network based autonomous navigation. In Vision and Navigation: The CMU Navlab. Charles Thorpe, (Ed.) Kluwer Academic Publishers. Pomerleau, D.A. (1989) ALVINN: An Autonomous Land Vehicle In a Neural Network, Advances in Neural Information Processing Systems, Vol. 1, D.S. Touretzky (ed.), Morgan Kaufmann. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Subject: Last Call for Papers for AGARD Conference We are extending the deadline for the abstracts for the papers to be presented at the AGARD conference until 21 September 1990. In case you have lost the Call for Papers, it is again attached to this message. Your consideration is greatly appreciated. --Dale AGARD ADVISORY GROUP FOR AEROSPACE RESEARCH AND DEVELOPMENT 7 RUE ANCELLE - 92200 NEUILLY-SUR-SEINE - FRANCE TELEPHONE: (1)47 38 5765 TELEX: 610176 AGARD TELEFAX: (1)47 38 57 99 AVP/46 2 APRIL 1990 CALL FOR PAPERS for the SPRING, 1991 AVIONICS PANEL SYMPOSIUM ON MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS to be held in LISBON, Portugal 13-16 May 1991 This meeting will be UNCLASSIFIED Abstracts must be received not later than 31 August 1990. Note: US & UK Authors must comply with National Clearance Procedures requirements for Abstracts and Papers. THEME MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS A large amount of research is being conducted to develop and apply Machine Intelligence (MI) technology to aerospace applications. Machine Intelligence research covers the technical areas under the headings of Artificial Intelligence, Expert Systems, Knowledge Representation, Neural Networks and Machine Learning. This list is not all inclusive. It has been suggested that this research will dramatically alter the design of aerospace electronics systems because MI technology enables automatic or semi-automatic operation and control. 
Some of the application areas where MI is being considered inlcude sensor cueing, data and information fusion, command/control/communications/intelligence, navigation and guidance, pilot aiding, spacecraft and launch operations, and logistics support for aerospace electronics. For many routine jobs, it appears that MI systems would provide screened and processed ata as well as recommended courses of action to human operators. MI technology will enable electronics systems or subsystems which adapt or correct for errors and many of the paradigms have parallel implementation or use intelligent algorithms to increase the speed of response to near real time. With all of the interest in MI research and the desire to expedite transition of the technology, it is appropriate to organize a symposium to present the results of efforts applying MI technology to aerospace electronics applications. The symposium will focus on applications research and development to determine the types of MI paradigms which are best suited to the wide variety of aerospace electronics applications. The symposium will be organizaed into separate sessions for the various aerospace electronics application areas. It is tentatively proposed that the sessions be organized as follows: SESSION 1 - Offensive System Electronics (fire control systems, sensor cueing and control, signal/data/information fusion, machine vision, etc.) SESSION 2 - Defensive System electronics (electronic counter measures, radar warning receivers, countermeasure resource management, situation awareness, fusion, etc.) SESSION 3 - Command/Control/Communications/Intelligence - C3I (sensor control, signal/data/information fusion, etc.) SESSION 4 - Navigation System Electronics (data filtering, sensor cueing and control, etc.) SESSION 5 - Space Operations (launch and orbital) SESSION 6 - Logistic Systems to Support Aerospace Electronics (on and off-board systems, embedded training, diagnostics and prognostics, etc.) GENERAL INFORMATION This Meeting, supported by the Avionics Panel will be held in Lisbon, Portugal on 13-16 May 1991. It is expected that 30 to 40 papers will be presented. Each author will normally have 20 minutes for presentation and 10 minutes for questions and discussions. Equipment will be available for projection of viewgraph transparencies, 35 mm slides, and 16 mm films. The audience will include Members of the Avionics Panel and 150 to 200 invited experts from the NATO nations. Attendance at AGARD Meetings is by invitation only from an AGARD National Delegate or Panel Member. Final manuscripts should be limited to no more than 16 pages including figures. Presentations at the meeting should be an extract of the final manuscript and not a reading of it. Complete instructions will be sent to authors of papers selected by the Technical Programme Committee. Authors submitting abstracts should insure that financial support for attendance at the meeting will be available. CLASSIFICATION This meeting will be UNCLASSIFIED LANGUAGES Papers may be written and presented either in English or French. Simultanewous interpretation will be provided between these two languages at all sessions. A copy of your prepared remarks (Oral Presentation) and visual aids should be provided to the AGARD staff at least one month prior to the meeting date. This procedure will ensure correct interpretation of your spoken words. 
ABSTRACTS Abstracts of papers offered for this Symposium are now invited and should conform with the following instructions: LENGTH: 200 to 500 words CONTENT: Scope of the Contribution & Relevance to the Meeting - Your abstract should fully represent your contribution SUMITTAL: To the Technical Programme committee by all authors (US authors must comply with Attachment 1) IDENTIFICATION: Author Information Form (Attachment 2) must be provided with you abstract CLASSIFICATION: Abstracts must be unclassified Your abstracts and Attachment 2 should be mailed in time to reach all members of the Technical Program Committee, and the Executive not later than 31 AUGUST 1990 (Note the exception for the US Authors). This date is important and must be met to ensure that your paper is considered. Abstracts should be submitted in the format shown on the reverse of this page. TITLE OF PAPER Name of Author Organization or Company Affiliation Address Name of Co-Author Organization or Company Affiliation Address The test of your ABSTRACT should start on this line. PUBLICATIONS The proceedings of this meeting will be published in a single volume Conference Proceedings. The Conference Proceedings will include the papers which are presented at the meeting, the questions/discussion following each presentation, and a Technical Evaluation Report of the meeting. It should be noted that AGARD reserves the right to print in the Conference Proceedings any paper or material presented at the Meeting. The Conference Proceedings will be sent to the printer on or about July 1990. NOTE: Authors that fail to provide the required Camera-Ready manuscript by this date may not be published. QUESTIONS concerning the technical programme should be addressed to the Technical Programme Committee. Administrative questions should be sent directly to the Avionics Panel Executive. GENERAL SCHEDULE (Note: Exception for US Authors) EVENT DEADLINE SUBMIT AUTHOR INFORMATION FORM 31 AUG 90 SUBMIT ABSTRACT 31 AUG 90 PROGRAMME COMMITTEE SELECTION OF PAPERS 1 OCT 90 NOTIFICATION OF AUTHORS OCT 90 RETURN AUTHOR REPLY FORM TO AGARD IMMEDIATELY START PUBLICATION/PRESENTATION CLEARANCE PROCEDURE UPON NOTIFICATION AGARD INSTRUCTIONS WILL BE SENT TO CONTRIBUTORS OCT 90 MEETING ANNOUNCEMENT WILL BE PUBLISHED IN JAN 91 SUBMIT CAMERA-READY MANUSCRIPT AND PUBLICATION/ PRESENTATION CLEARANCE CERTIFICATE to arrive at AGARD by 15 MAR 91 SEND ORAL PRESENTATION AND COPIES OF VISUAL AIDS TO THE AVIONICS PANEL EXECUTIVE to arrive at AGARD by 19 APR 91 ALL PAPERS TO BE PRESENTED 13-16 MAY 91 TECHNICAL PROGRAMME COMMITTEE CHAIRMAN Dr Charles H. KRUEGER Jr Director, Systems Avionics Division Wright Research and Development Center (AFSC), ATTN: AAA Wright Patterson Air Force Base Dayton, OH 45433, USA Telephone: (513) 255-5218 Telefax: (513) 476-4020 Mr John J. BART Prof Dr A. Nejat INCE Technical Director, Directorate Burumcuk sokak 7/10 of Reliability & Compatibility P.K. 8 Rome Air Development Center (AFSC) 06752 MALTEPE, ANKARA GRIFFISS AFB, NY 13441 Turkey USA Mr J.M. BRICE Mr Edward M. LASSITER Directeur Technique Vice President THOMSON TMS Space Flight Ops Program Group B.P. 123 P.O. Box 92957 38521 SAINT EGREVE CEDEX LOS ANGELES, CA 90009-2957 France USA Mr L.L. DOPPING-HEPENSTAL Eng. Jose M.B.G. MASCARENHAS Head of Systems Development C-924 BRITISH AEROSPACE PLC, C/O CINCIBERLANT HQ Military Aircraft Limited 2780 OEIRAS WARTON AERODROME Portugal PRESTEN, LANCS PR4 1AX United Kingdom Mr J. 
DOREY Mr Dale NELSON Directeur des Etudes & Syntheses Wright Research & Development Center O.N.E.R.A. ATTN: AAAT 29 Av. de la Division Leclerc Wright Patterson AFB 92320 CHATILLON CEDEX Dayton, OH 45433 France USA Mr David V. GAGGIN Ir. H.A.T. TIMMERS Director Head, Electronics Department U.S. Army Avionics R&D Activity National Aerospace Laboratory ATTN: SAVAA-D P.O. Box 90502 FT MONMOUTH, NJ 07703-5401 1006 BM Amsterdam USA Netherlands AVIONICS PANEL EXECUTIVE LTC James E. CLAY, US Army Telephone Telex Telefax (33) (1) 47-38-57-65 610176 (33) (1) 47-38-57-99 MAILING ADDRESSES: From Europe and Canada From United States AGARD AGARD ATTN: AVIONICS PANEL ATTN: AVIONICS PANEL 7, rue Ancelle APO NY 09777 92200 Neuilly-sur-Seine France ATTACHMENT 1 FOR US AUTHORS ONLY 1. Authors of US papers involving work performed or sponsored by a US Government Agency must receive clearance from their sponsoring agency. These authors should allow at least six weeks for clearance from their sponsoring agency. Abstracts, notices of clearance by sponsoring agencies, and Attachment 2 should be sent to Mr GAGGIN to arrive not later than 15 AUGUST 1990. 2. All other US authors should forward abstracts and Attachment 2 to Mr GAGGIN to arrive before 31 JULY 1990. These contributors should include the following statements in the cover letter: A. The work described was not performed under sponsorship of a US Government Agency. B. The abstract is technically correct. C. The abstract is unclassified. D. The abstract does not violate any proprietary rights. 3. US authors should send their abstracts to Mr GAGGIn and Dr KRUEGER only. Abstracts should NOT be sent to non-US members of the Technical Programme Committee or the Avionics Panel Executive. ABSTRACTS OF PAPERS FROM US AUTHORS CAN ONLY BE SENT TO: Mr David V. GAGGIN and Dr Charles H. KRUEGER Jr Director Director, Avionics Systems Div Avionics Research & Dev Activity Wright Research & Dev Center ATTN: SAVAA-D ATTN: WRDC/AAA Ft Monmouth, NJ 07703-5401 Wright Patterson AFB Dayton, OH 45433 Telephone: (201) 544-4851 Telephone: (513) 255-5218 or AUTOVON: 995-4851 4. US authors should send the Author Information Form (Attachment 2) to the Avionics Panel Executive, Mr GAGGIN, Dr KRUEGER, and each Technical Programme Committee Member, to meet the above deadlines. 5. Authors selected from the United States are remined that their full papers must be cleared by an authorized national clearance office before they can be forwarded to AGARD. Clearance procedures should be started at least 12 weeks before the paper is to be mailed to AGARD. Mr GAGGIN will provide additional information at the appropriate time. AUTHOR INFORMATION FORM FOR AUTHORS SUBMITTING AN ABSTRACT FOR THE AVIONICS PANEL SYMPOSIUM on MACHINE INTELLIGENCE FOR AEROSPACE ELECTRONICS SYSTEMS INSTRUCTIONS 1. Authors should complete this form and send a copy to the Avionics Panel Executive and all Technical Program Committee members by 31 AUGUST 1990. 2. Attach a copy of your abstract to these forms before they are mailed. US Authors must comply with ATTACHMENT 1 requirements. a. Probable Title Paper: ____________________________________________ _______________________________________________________________________ b. Paper most appropriate for Session # ______________________________ c. Full Name of Author to be listed first on Programmee, including Courtesy Title, First Name and/or Initials, Last Name & Nationality. d. 
Name of Organization or Activity: _________________________________ _______________________________________________________________________ e. Address for Return Correspondence: Telephone Number: __________________________________ ____________________ __________________________________ Telefax Number: __________________________________ ____________________ __________________________________ Telex Number: __________________________________ ____________________ f. Names of Co-Authors including Courtesy Titles, First Name and/or Initials, Last Name, their Organization, and their nationality. ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ ___________________________________________________________________ __________ ____________________ Date Signature DUE NOT LATER THAN 15 AUGUST 1990 From Harry.Printz at IUS1.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Harry.Printz at IUS1.CS.CMU.EDU (Harry.Printz@IUS1.CS.CMU.EDU) Date: Tue, 4 Sep 1990 16:02-EDT Subject: Tech Report Announcement Message-ID: <652478569/hwp@IUS1.CS.CMU.EDU> ************** PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS **************** Foundations of a Computational Theory of Catecholamine Effects Harry Printz and David Servan-Schreiber CMU-CS-90-105 This report provides the mathematical foundation of a theory of catecholamine effects upon human signal detection abilities, as developed in a companion paper[*]. We argue that the performance-enhancing effects of catecholamines are a consequence of improved rejection of internal noise within the brain. To support this claim, we develop a neural network model of signal detection. In this model, the release of a catecholamine is treated as a change in the gain of a neuron's activation function. We prove three theorems about this model. The first theorem asserts that in the case of a network that contains only one unit, changing its gain cannot improve the network's signal detection performance. The second shows that if the network contains enough units connected in parallel, and if their inputs satisfy certain conditions, then uniformly increasing the gain of all units does improve performance. The third says that in a network where the output of one unit is the input to another, under suitable assumptions about the presence of noise along this pathway, increasing the gain improves performance. We discuss the significance of these theorems, and the magnitude of the effects that they predict. The report includes numerical examples and numerous figures, and intuitive explanations for each of the formal results. The proofs are presented in separate sections, which can be omitted from reading without loss of continuity. [*] Servan-Schreiber D, Printz H, Cohen JD. "A network model of catecholamine effects: gain, signal-to-noise ratio, and behavior," Science, 249:892-895 (August 24, 1990) You can obtain this technical report by sending email to copetas at CS.CMU.EDU, or physical mail to Catherine Copetas School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 Ask for report number CMU-CS-90-105. There is no charge for this report. Please do not reply to this message. 
************** PLEASE DO NOT FORWARD TO OTHER BULLETIN BOARDS **************** From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: usage is more correct than the first. I think we need a better term for describing the kind of locality exhibited by RBF networks. --Tom From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: That tried and tried to write limericks. They didn't really rhyme, And were short one line. (But weren't entirely self-referential.) -- Doug Blank ----------------------------------------------------- A couple with bad information Got into a tight situation. They said, "We don't see --- It just cannot be We only had backpropagation!" -- Mark Weaver ----------------------------------------------------- To these still stuck in the old school, The rule is the ultimate tool. But connectionists know (The others are slow) That the rule is a tool for a fool! -- Tim van Gelder ----------------------------------------------------- These guys Fodor and Pylyshyn Came to town with a mission. But they got in a jam When faced off with RAAM Instead they should just have gone fishin'. -- Dave Chalmers ----------------------------------------------------- Said a network that was rather horny To another, "This might sound too corny, Of all of my mates, You've got the best weights, Let's optimize them until morning." -- Paul Munro ----------------------------------------------------- Once quoted a young naive RAAM "In my 3-2-3 mind I can cram All of human cognition With minor omission" But it's not that much better than SPAM. -- Devin McAuley and Gary McGraw ----------------------------------------------------- In Bloomington, geniuses gather Working a connectionist lather Did they solve the world's woes, Or compound them -- who knows? It's not a decidable matter. -- Dave Touretzky ----------------------------------------------------- "Backprop will usually converge," Rumelhart and Hinton observe. "When some people say `Brains don't work that way' We just smile and flip them the bird!" -- Pete Angeline and Viet-Anh Nguyen ----------------------------------------------------- There once was a recurrent node Who felt he was ready to explode Without stimulation Only self-excitation He swelled up and then shot his load. -- Devin McAuley and Gary McGraw ----------------------------------------------------- Pollack has made this admission Of his neural net's true composition: "I recursively RAAM it With symbols, Goddamnit! So don't pay no mind to Pylyshyn." -- Dave Touretzky ----------------------------------------------------- Minsky and Papert were cruel Which gave the symbolists fuel. Connectionists waited Till they backpropagated And now neural networks are cool. -- Paul Munro ----------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Acquiring Verb Morphology in Children and Connectionist Nets 201 K. Plunkett, V. Marchman, and S.L. Knudsen Parallel Mapping Circuitry in a Phonological Model 220 D.S. Touretzky A Modular Neural Network Model of Attentional Requirements in Sequence Learning 228 P.G. Schyns A Computational Model of Attentional Requirements in Sequence Learning 236 P.J. Jennings and S.W. 
Keele Recall of Sequences of Items by a Neural Network 243 S. Nolfi, D. Parisi, G. Vallar, and C. Burani Binding, Episodic Short-Term Memory, and Selective Attention, Or Why are PDP Models Poor at Symbol Manipulation? 253 R. Goebel Analogical Retrieval Within a Hybrid Spreading-Activation Network 265 T.E. Lange, E.R. Melz, C.M. Wharton, and K.J. Holyoak Appropriate Uses of Hybrid Systems 277 D.E. Rose Cognitive Map Construction and Use: A Parallel Distributed Processing Approach 287 R.L. Chrisley PART VIII SPEECH AND VISION Unsupervised Discovery of Speech Segments Using Recurrent Networks 303 A. Doutiraux and D. Zipser Feature Extraction Using an Unsupervised Neural Network 310 N. Intrator Motor Control for Speech Skills: A Connectionist Approach 319 R. Laboissiere, J-L. Schwartz, and G. Bailly Extracting Features From Faces Using Compression Networks: Face, Identity, Emotion, and Gender Recognition Using Holons328 G.W. Cottrell The Development of Topography and Ocular Dominance 338 G.J. Goodhill On Modeling Some Aspects of Higher Level Vision 350 D. Bennett PART IX BIOLOGY Modeling Cortical Area 7a Using Stochastic Real-Valued (SRV) Units 363 V. Gullapalli Neuronal Signal Strength is Enhanced by Rhythmic Firing 369 A. Heirich and C. Koch PART X VLSI IMPLEMENTATION An Analog VLSI Neural Network Cocktail Party Processor 379 A. Heirich, S. Watkins, M. Alston, P. Chau A VLSI Neural Network with On-Chip Learning 387 S.P. Day and D.S. Camporese Index 401 _________________________________________________________________ Ordering Information: Price is $29.95. Shipping is available at cost, plus a nominal handling fee: In the U.S. and Canada, please add $3.50 for the first book and $2.50 for each additional for surface shipping; for surface shipments to all other areas, please add $6.50 for the first book and $3.50 for each additional book. Air shipment available outside North America for $45.00 on the first book, and $25.00 on each additional book. Master Card, Visa and personal checks drawn on US banks accepted. MORGAN KAUFMANN PUBLISHERS, INC. Department B2 2929 Campus Drive, Suite 260 San Mateo, CA 94403 USA Phone: (800) 745-7323 (in North America) (415) 578-9928 Fax: (415) 578-0672 email: morgan at unix.sri.com From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From Barak.Pearlmutter at F.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Barak.Pearlmutter at F.GP.CS.CMU.EDU (Barak.Pearlmutter@F.GP.CS.CMU.EDU) Date: Thu, 13 Dec 1990 11:51-EST Subject: tr announcement: CMU-CS-90-196 Message-ID: <661107088/bap@F.GP.CS.CMU.EDU> *** Please do not forward to other mailing lists or digests. 
*** The following 30 page technical report is now available. It can be FTPed from the neuroprose archives at OSU, under the name pearlmutter.dynets.ps.Z, as shown below, which is the preferred mode of acquisition, or can be ordered by sending a note to School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 USA along with a check for $2 (domestic) or $5 (outside the USA) to help defray the expense of reproduction and mailing. ---------------- Dynamic Recurrent Neural Networks Barak A. Pearlmutter December 1990 CMU-CS-90-196 (supersedes CMU-CS-88-191) We survey learning algorithms for recurrent neural networks with hidden units and attempt to put the various techniques into a common framework. We discuss fixpoint learning algorithms, namely recurrent backpropagation and deterministic Boltzmann Machines, and non-fixpoint algorithms, namely backpropagation through time, Elman's history cutoff nets, and Jordan's output feedback architecture. Forward propagation, an online technique that uses adjoint equations, is also discussed. In many cases, the unified presentation leads to generalizations of various sorts. Some simulations are presented, and at the end, issues of computational complexity are addressed. ---------------- FTP Instructions: ftp cheops.cis.ohio-state.edu (or ftp 128.146.8.62) Name: anonymous Password: state-your-name-please ftp> cd pub/neuroprose ftp> get pearlmutter.dynets.ps.Z 300374 bytes sent in 9.9 seconds (26 Kbytes/s) ftp> quit unix> zcat pearlmutter.dynets.ps.Z | lpr Unlike some files in the archive, the postscript file has been tested and will print properly on printers without much memory. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Chapter 1 of A New Approach to Pattern Recognition, in Progress in Pattern Recognition 2, eds. L.N.Kanal and A.Rosenfeld, North-Holland, 1985) that the adoption of the VECTOR REPRESENTATION for the objects severely limits the number of the above similarity fields that can be induced naturally in the set of objects. At the same time, it is also useful to remember that the limitations imposed by the vector representation were sufficient to justify the rift between AI and pattern recognition (this is not to say that I am condoning this rift, which was also "politically" motivated). It is not difficult to understand why vector representation is not sufficiently flexible: all features are rigidly fixed, quantitative, and their interrelations are not represented. In reality, the useful features and their relations must emerge dynamically during the learning processes. "Symbolic" representations such as strings, graphs, etc. are more satisfactory from that point of view. Thus, although the NN after the learning process can induce some similarity field in the set of patterns, its capacity to generate various similarity fields is SEVERELY RESTRICTED by the very form of the pattern (object) representation. Furthermore, adopting a new, more dynamic framework for the NN (dynamic NNs) will solve only a small part of the above representational problem. The issue of representation has received considerable attention in computer science, but it appears that people trained in other fields may not fully appreciate its role and importance.
-- Lev Goldfarb From zoran at theory.cs.psu.edu Tue Jun 6 06:52:25 2006 From: zoran at theory.cs.psu.edu (Zoran Obradovic) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: IJCNN-91-Seattle Message-ID: Thanks for the complete answer to my question. You can certainly copy my mail and post this to the net. Regards, Zoran See you at IJCNN, folks! Don From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Tue 26 Feb 91 22:54:02-PST Subject: Computists International Message-ID: <667637642.0.LAWS@AI.SRI.COM> *** PLEASE POST *** This is to announce Computists International, a new "networking" association for computer and information scientists. Hi! I'm Ken Laws If this announcement interests you, contact me at internet address laws at ai.sri.com. If you can't get through, my mail address is: Dr. Kenneth I. Laws; 4064 Sutherland Drive, Palo Alto, CA 94303; daytime phone (415) 493-7390. I'm back from two years at the National Science Foundation. I used to run AIList, and I miss it. Now I'm creating a broader service for anyone interested in information (or knowledge), software, databases, algorithms, or doing neat new things with computers. It's a career-oriented association for mutual mentoring about grant and funding sources, information channels, text and software publishing, tenure, career moves, institutions, consulting, business practices, home offices, software packages, taxes, entrepreneurial concerns, and the sociology of work. We can talk about algorithms, too, with a focus on applications. Toward that end, I'm going to edit and publish a weekly+ newsletter, The Computists' Communique. The Communique will be tightly edited, with carefully condensed news and commentary. Content will depend on your contributions, but I will filter, summarize, and generally act like an advice columnist. (Ann Landers?) I'll also suggest lines of discussion, collect "common knowledge" about academia and industry, and help track people and projects. As a bonus, I'll give members whatever behind-the-scenes career help I can. Alas, this won't be free. The charter membership fee for Computists will depend in part on how many people respond to this notice. The Communique itself will be free to all members, FOB Palo Alto; internet delivery incurs no additional charge. To encourage participation, there's a full money-back guarantee (excluding postage). Send me a reply to find out more. -- Ken Computists International and The Computists' Communique are service marks of Kenneth I. Laws. Membership in professional organizations may be a tax-deductible business expense. ------- From Xuedong.Huang at SPEECH2.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Xuedong.Huang at SPEECH2.CS.CMU.EDU (Xuedong.Huang@SPEECH2.CS.CMU.EDU) Date: Tue, 28 May 1991 21:10-EDT Subject: a new book on speech recognition Message-ID: <675479427/xdh@SPEECH2.CS.CMU.EDU> New Book in the Edinburgh Information Technology Series (EDITS 7) ================================================================= X.D. Huang, Y. Ariki, and M. Jack: "Hidden Markov Models for Speech Recognition", Edinburgh University Press, 1990, 30 Pounds. (ISBN 0 7486 0162 7). "Despite the fact that the hidden Markov model approach to speech recognition is now considered a mature technology, there are very few textbooks which cover the subject in any depth. This new addition to the Edinburgh EDITS series is therefore very welcome. ... 
I know of no other comparable work and it is therefore a timely and userful addition to the literature" -- Book review, Computer Speech and Language To order, contact Edinburgh University Press. For more information, contact xdh at speech2.cs.cmu.edu. From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Thu 6 Jun 91 22:02:27-PDT Subject: Distributed Representations Message-ID: <676270947.0.LAWS@AI.SRI.COM> I'm not sure this is the same concept, but there were several papers at the last IJCAI showing that neural networks worked better than decision trees. The reason seemed to be that neural decisions depend on all the data all the time, whereas local decisions use only part of the data at one time. I've never put much stock in the military reliability claims. A bullet through the chip or its power supply will be a real challenge. Noise tolerance is important, though, and I suspect that neural systems really are more tolerant. Terry Sejnowski's original NETtalk work has always bothered me. He used a neural network to set up a mapping from an input bit string to 27 output bits, if I recall. I have never seen a "control" experiment showing similar results for 27 separate discriminant analyses, or for a single multivariate discriminant. I suspect that the results would be far better. The wonder of the net was not that it worked so well, but that it worked at all. I have come to believe strongly in "coarse-coded" representations, which are somewhat distributed. (I have no insight as to whether fully distributed representations might be even better. I suspect that their power is similar to adding quadratic and higher-order terms to a standard statistical model.) The real win in coarse coding occurs if the structure of the code models structure in the data source (or perhaps in the problem to be solved). -- Ken Laws ------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: In this thought-provoking volume, George Kampis argues (among other things) that the Turing-Church Thesis is false, at least for the kinds of physical systems that concern developmental biologists, cognitive scientists, economists, and other of that ilk. [...] This book represents an exciting point of departure from ho- hum traditional works on the philosophy of modeling, especially noteworthy being the fact that instead of offering mere complaints against the status quo, Kampis also provides a paradigm holding out the promise of including both the classical systems of the physicist and engineer and the neoclassical processes of the biologist and psychologist under a single umbrella. As such, the ideas in this pioneering book merit the attention of all philosophers and scientists concerned with the way we create reality in our mathematical representations of the world and the connection those representation have with the way things "truly are". How to order if interested: Order from Pergamon Press plc, Headington Hill Hall, Oxford OX3 0BW, England or a local Pergamon office ISBN 0-08-0369790 100 USD/ 50 pound sterlings Hotline Service: USA (800) 257 5755 elsewhere (+44) 865 743685 FAX (+44) 865 743946 **************************************************************** 2. 
Forthcoming: A SPECIAL ISSUE ON EMERGENCE AND CREATIVITY It's a Special Issue of ************************************************************** * World Futures: The Journal of General Evolution * * (Gordon & Breach), to appear August 1991 * * * * Guest Editor: G. Kampis * * * * Title: Creative Evolution in Nature, Mind and Society * ************************************************************** Individual copies will be available (hopefully), at a special rate (under negotiation). List of contents: Kampis, G. Foreword Rustler, E.O. "On Bodyism" (Report 8-80 hta 372) Salthe, S. Varieties of Emergence Csanyi, V. Societal Creativity Kampis, G. Emergent Computations, Life and Cognition Cariani, P. Adaptivity and Emergence in Organisms and Devices Fernandez,J., Moreno,A. and Etxeberria, A. Life as Emergence Heylighen, F. Modelling Emergence Tsuda, I. Chaotic Itinerancy as a Dynamical Basis for Hermeneutics in Brain and Mind Requardt, M. Godel, Turing, Chaitin and the Question of Emergence as a Meta-Principle of Modern Physics. Some Arguments Against Reductionism **************************************************************** 3. preprint from the Special Issue on Emergence EMERGENT COMPUTATIONS, LIFE AND COGNITION by George Kampis Evolutionary Systems Group, Dept. of Ethology L. Eotvos University of Budapest, Hungary and Department of Theoretical Chemistry University of Tubingen, D-7400 Tubingen, FRG ABSTRACT This is a non-technical text discussing general ideas of information generation. A model for emergent processes is given. Emergence is described in accordance with A.N. Whitehead's theory of 'process'. The role of distinctions and variable/observable definitions is discussed. As applications, parallel computations, evolution, and 'component-systems' are discussed with respect to their ability to realize emergence. KEYWORDS: emergence, distinctions, modeling, information set, evolution, cognitive science, theory of computations. AVAILABLE from the author at h1201kam at ella.hu or h1201kam at ella.uucp or by mail at H-1122 Budapest Maros u. 27. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: to distribute this given datum (association vector or whatever) over my representational units so that I can recover it from a partial stimulus. The issue of how the given datum itself is represented is obviously *very* important --- no quarrel on that --- but the question of "internal representations" (as Bo Xu so appropriately calls them) seems more immediate from a connectionist point of view because it relates *directly* to the problem of learning. As we all know, learning from a finite data set is ill-posed and, even with a fixed network topology, can (and will) produce multiple "equally good" solutions. Unlike Bo Xu, I am not at all convinced that "most of the current networks' topology ensures that the internal representations are mixed distributed representations" --- at least not "optimally" so. For the last year and some, I've been working on the problem of classifying "equally good" network solutions to approximation problems by their ability to withstand internal perturbations gracefully. I have found that, while a large enough network often does distribute responsibility, there is some considerable variation from net to net. 
My ideal distributed representation would be one that is minimally degraded by the malfunction of individual representative units (weights and neurons) so that it could withstand the maximum trauma better than any other network in its class. Of course, this is a theoretical ideal and the order of the effects I am talking about is insane. However, I think that the internal interactions on which this characterization depends are amenable to relatively simple empirical analysis (!) under simplifying assumptions, and are a "black box" only with respect to exact analysis. In any case, even if they were a black box, the characterization would still be applicable --- we just wouldn't be able to use it. In effect, what I am advocating is already present in most estimation methods under the guise of regularization. An interesting contrast, however, exists with regard to the various "pruning" algorithms used to improve generalization. If things go well and they succeed (most of them do, I think), then the networks they produce have a near-minimal number of representational units, across which the various associastions are quite well-distributed. However, precisely because of their minimality, each representational units has acquired maximal internal relevance, and is now minimally dispensable. Had there been no pruning, and some other method had been used to force well-distributedness, I think that good generalization could have been obtained without losing robustness. In effect, I am saying that instead of using a minimal number of resources as much as possible, we could use all the available resources as little as possible. Both create highly distributed representations, but the latter does so without losing any robustness (indeed, maximizing it). >I want to thank Ali Minai for his comments. All of his comments are very >valuable and thought-stimulating. Ditto for Bo Xu. I've been enjoying this discussion very much. Ali Minai University of Virginia aam9n at Virginia.EDU From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Fri 19 Jul 91 09:24:45-PDT Subject: Simple pictures, tough problems. In-Reply-To: <9107191052.AA11204@uk.ac.stir.cs.nevis> Message-ID: <679940685.0.LAWS@AI.SRI.COM> > This does not leave very much time for any pretty > hierachies and feedback loops. Feedback loops are not necessarily slow. Analog computers can be much faster than digital ones for many tasks, and I think we're beginning to understand neural bundles in analog terms. -- Ken Laws ------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: the learner can manipulate the environment are important for the survival of the learner. The repertoire of manipulations will (should?) bias the learner towards discovering properties that are invariant under those manipulations. Therefore, the learner will tend to learn concepts that are relevant to the tasks that it can perform and its survival. Disclaimer: I don't know anything about symbol grounding. I am actually working on analogical inference - but I have a nasty feeling that if I ever get half way towards having an analogical inference net I will have to know about symbol grounding to train and test it. Ross Gayler ross at psych.psy.uq.oz.au From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: memories. 
But neither of these viewpoints constitutes AI in any real sense. There is a famous saying by Nietzsche that might be adapted to describe the current status of neural networks: Machines "do not become thinkers simply because their memories are too good." Yet there are other aspects of neural networks that have been extremely important. Thus the structural paradigm is of obvious value to the neurophysiologist, the cognitive scientist, and the vision researcher. It would be of value to the computer science community if Information Sciences were to review and critique the original promise of neurocomputing in the light of developments in the past few years. The Special Issue of Information Sciences will do just this. It will provide reviews of this link between neural networks and AI. In other words, the scope of this Issue is much broader than that of the most commonly encountered applications of associative memories or mapping networks. The application areas that the Issue will deal with include neural logic programming, feature detection, knowledge representation, search techniques, and learning. The connectionist approach to AI will be contrasted with the traditional symbolic techniques. Deadline for Submissions: September 30, 1991 Papers may be sent to: Subhash Kak Guest Editor, Information Sciences Department of Electrical and Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Tel: (504) 388-5552 E-mail: kak at max.ee.lsu.edu From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: The University reserves the right not to proceed with any appointment for financial or other reasons. Equal Opportunity is University Policy. From Xuedong.Huang at SPEECH2.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Xuedong.Huang at SPEECH2.CS.CMU.EDU (Xuedong.Huang@SPEECH2.CS.CMU.EDU) Date: Sun, 1 Sep 1991 19:23-EDT Subject: Processing of auditory sequences In-Reply-To: Scott_Fahlman's mail message of Sat, 31 Aug 91 10:24:21 -0400 Message-ID: <683767402/xdh@SPEECH2.CS.CMU.EDU> For the purpose of speech compression, current technology using vector quantization can compress speech to 2k bits/s without much fidelity loss. Even lower rates (200-800 bits/s) also have acceptable intelligibility. These techniques can be found in many commercial applications. - Xuedong Huang From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Sun 1 Sep 91 23:03:42-PDT Subject: AI/IS/CS Career Newsletter Message-ID: <683791422.0.LAWS@AI.SRI.COM> The Computists' Communique is available at half the standard rate! Computists International is having a membership drive, and our weekly newsletter and discussion list are nearly 50% off through September 30. Reply for full details. -- Ken P.S. You get a no-risk guarantee, of course. -- Dr. Kenneth I. Laws 4064 Sutherland Drive, Palo Alto, CA 94303. laws at ai.sri.com, (415) 493-7390, 11am - 5pm. ------- From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Mon 2 Sep 91 21:03:05-PDT Subject: Apology Message-ID: <683870585.0.LAWS@AI.SRI.COM> I have just learned that I accidentally sent a newsletter offer to the Connectionists list last night. This was inappropriate, and I assure you that I did not mean to use the list as a broadcast medium. I will ensure that I do not make such a slip again.
-- Ken Laws ------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Neural networks can be used to realize DILATION and EROSION operations. Other than using backpropagation algorithm, they can be designed directly. You can see the paper written by Lippmann in IEEE ASSP Magazine, pp.4-22, April 1987. Lin Yin ++++++++++++++++++ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Juliette Mattioli, Michel Schmitt et al. in ICANN91 in Helsinki, Vol.1, pg I-117: Shape discrimination based in Mathematical Morphology and Neural Networks, and Vol 2, pg II-1045 Francois Vallet and Michel Schmitt, Network Configuration and Initialization using Mathematical Morphology: Theoretical Study of Measurement Functions. While above article do not talk about the operations you mentioned, I know that the author is working on these, i.e. Michel Schmitt; his address is: Thomson-CSF, Laboratoire Central de Recherche, F-91404 Orsay Cedex, France Konrad Weigl ++++++++++++++++++ Date: Wed, 28 Aug 91 09:49:01 +0200 From: toet at izf.tno.nl (Lex Toet) there is indeed very little literature on this topic. Some references that may be of use are : S.S. Wilson (1989) Vector morphology and iconic neural networks. IEEE Tr SMC 19, pp. 1636-1644. F.Y. Shih and Jenlong Moh (1989) Image morphological operations by neural circuits. In: IEEE 1989 Symposium on Circuits and Systems, pp. 774-777. M. Scmitt and F. Vallet (1991) Network configuration and initialization using mathematical morphology: theoretical study of measurement functions. In: Artificial Neural networks, T. Kohonen, M. Makisara, O. Simula and J. Kangas, eds. Elsevier Science Publishers B.V. , Amsterdam. ++++++++++++++++++ Date: Thu, 29 Aug 91 22:55:35 -0400 From: "Mark Schmalz" Re: your recent posting to comp.ai.vision -- obtain the recent papers on morphological neural nets by Ritter and Davidson, and Davidson and her students. Published in Proc. SPIE, the papers are indexed in the Computer and Control Abstracts, which you should have in your library. Copies may also be obtained by writing to: Center for Computer Vision Research Department of Computer and Information Science Attn: Dr. Joseph Wilson University of Florida Gainesville, FL 32611 The morpho. net computes over the ring (R,max,+) or (R,max,*), where R denotes the set of reals, max the maximum operation, and * multiplication. In contrast, the more usual McCullogh- Pitts net computes over (R,+,*). Thus, the morpho. net is inherently nonlinear. Additionally, numerous decompositions of the morphological operations into linear operations have been published. Casasent has recently published an interesting paper on the applications of optics to the morphological functions. His work on the hit-or-miss transform would be an interesting topic for neural net implementation. I suggest you obtain the SPIE Proceedings pertaining to the 1990 and 1991 Image Algebra conferences, presented at the San Diego Technical Symposium of SPIE (both years). Morphological image processing is included in the conference, and some good papers have appeared over the last two years. Mark Schmalz ++++++++++++++++++ Christoph Herwig Dept. 
of Electrical and Computer Engineering, Clemson University, SC From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: interpolation vs generalisation In-Reply-To: Message of Sat, 14 Sep 91 09:42:16 ADT from Message-ID: On Sat, 14 Sep 91 09:42:16 ADT Ross Gayler writes: > Analogical inference is a form of > generalisation that is performed on the basis of structural or > relational similarity rather than literal similarity. It is > generalisation, because it involves the application of knowledge from > previously encountered situations to a novel situation. However, the > interpolation does not occur in the space defined by the > input patterns, instead it occurs in the space describing the structural > relationships of the input tokens. The structural relationships > between any set of inputs is not necessarily fixed by those inputs, > but generated dynamically as an 'interpretation' that ties the inputs > to a context. There is an argument that analogical inference is the > basic mode of retrieval from memory, but most connectionist research > has focused on the degenerate case where the structural mapping is an > identity mapping - so the interest is focused on interpolation in the > input space instead of the structural representation space. > > In brief: Generalisation can occur without interpolation in a fixed> data space that you can observe, but it may involve interpolation > in some other space that is constructed internally and dynamically. I also believe that the above point is of critical importance: an intelligent system (at least if it is considered in the course of both micro- and macro- evolution) must have the capacity to generate new metrics based on the structural properties of the object classes. In fact, I find this capacity of an intelligent process to be so important, that I have suggested it to be the basic attribute of an intelligent process. To ensure the presence of this attribute, one can demand from the learning (test) environment some minimum requirement: "The requirement that I propose to adopt can be called structural unboundedness of the environment. Informally, an environment is called structurally unbounded if no finite set of "features", or parameters, is sufficient for specifying all classes of events in the environment." See the paper mentioned in one of my resent postings ("Verifiable characterization of an intelligent process"). If a proposed model can operate successfully in some such environments, then it deserves a more serious consideration. -- Lev Goldfarb From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Thagard model. (1) It is not practical to build a new network on the fly for every problem encountered, especially when the mechanism that builds the networks sidesteps a lot of really hard problems by being fed input as neatly pre-digested symbolic structures. (2) The task of finding a mapping from a given base to a given target is a party trick. In real life you are presented with a target (a structure with some gaps) and you have to find the best base (or bases) in long-term memory to support the mapping that best fills the gaps. Another paper on connectionist analogical inference is: Halford, G.S., Wilson, W.H., Guo, J., Wiles, J., & Stewart, J.E.M. (In preparation). Connectionist implications for processing capacity limitations in analogies. 
I wouldn't be proper for me to comment on that paper as it is in preparation. I am not aware of any other direct attempts at connectionist implementation of analogical inference (in the sense that interests me: general, dynamic, and practical), but I don't get much time to keep up with the literature and would be pleased to be corrected on this. - References to other work on analogical inference There is a large-ish literature on analogy in psychology, AI, and philosophy. In psychology try: Gentner, D. (1989) The mechanisms of analogical learning. In Vosniadou, S., & Ortony, A. (Eds.), Similarity and analogical reasoning (pp. 199-241). Cambridge: Cambridge University Press. In Artificial Intelligence try: Kedar-Cabelli, S. (1988). Analogy - from a unified perspective. In D.H. Helman (Ed.), Analogical reasoning (pp. 65-103). Dordrecht: Kluwer Academic Publishers. Sorry, I haven't followed the philosophy literature. - References to related connectionist work One of the really hard parts about trying to do connectionist analogical inference is that you need to be able to represent structures, and this is still a wide-open research area. A book that is worth looking at in this area is: Barnden, J.A., & Pollack, J.B. (Eds.) (1991). High-level connectionist models. Norwood NJ: Ablex. For specific techniques to represent structured data you might try to track down the work of Paul Smolensky, Jordan Pollack, and Tony Plate (to name a few). - Lonce Wyse (lwyse at park.bu.edu) says: LW> I was strongly advised against going for such a "high level" LW> cognitive phenomenon [on starting grad school]. I think that is good advice. Connectionist analogical inference is a *really hard* problem (at least if you are aiming for something realistically useful). The solution involves solving a bunch of other problems that are hard in their own rights. Doctoral candidates and untenured academics can't afford the risk of attacking something like this because they have to crank out the publications. If you want to get into this area, either keep it as a hobby or carve out an extremely circumscribed subset of the problem (and lose the fun). - Lonce Wyse also says: LW> I think intermodal application of learning in neural networks LW> is a future hot topic. In classical symbolic AI the relationship between a concept and its corresponding symbol is arbitrary and the 'internal structure' of the symbol does not have any effect on the dynamics of processing. In a connectionist symbol processor the symbol<->referent relationship should still be arbitrary (because we need to be able to reconceive the same referent at whim) but the internal structure of the symbol (a vector) DOES effect the dynamics of processing. The tricky part is to pick a symbol that has the correct dynamic effect. The possibility that I am pursuing is to pick a pattern that is analogically consistent with what is already in Long-Term Memory. Extending the theme requires that the pattern be consistent with information about the same referent obtained via other sensory modes. Some while back I used up some bandwidth on the mailing list asking about the role of intermodal learning in symbol grounding. For my purposes the crucial aspect about symbol grounding is that it concerns linking a perceptual icon to a symbol with the desired symbolic linkages. 
My intuitive belief is that a system with only one perceptual mode and no ability to interact with its environment can learn to approximate that environment but not to build a genuine model of the environment as distinct from the perceiver. So intermodal learning and the ability to interact with the environment are important. - In answer to my assertion that: "Generalisation can occur without interpolation in a data space that you can observe, but it may involve interpolation in some other space that is constructed internally and dynamically" - Lev Goldfarb (goldfarb at unbmvs1.csd.unb.ca ?) says LG> an *intelligent* system must have the capacity to generate new metrics LG> based on the structural properties of the object classes. - and Thomas Hildebrandt (thildebr at athos.csee.lehigh.edu) says TH> Generalization is interpolation in the *Right* space. Psychological scaling studies have shown that the similarity metric over a group of objects depends on the composition of the group. For example, the perceived similarity of an apple and an orange depends on whether they are with other fruit, or other roughly spherical objects. Mike Humphreys at the University of Queensland has stated that items in LTM are essentially orthogonal (and I am sure he would have the studies to back it up) The point is that the metric used to relate objects is induced by the demands of the current task. I like to think of all the items in LTM as (approximately) mutually orthogonal vectors in a very high dimensional space. The STM representation is a projection of the high-D space onto a low-D space of those LTM items that are currently active. The exact projection that is used is dependent on the task demands. Classic back-prop type connectionism attempts to generate a new representation on the hidden layer such that interpolation is also (correct) generalisation. This is done by learning the correct weights into the hidden layer. Unfortunately, the weights are essentially fixed with respect to the time scale of outside events. What is required for analogical inference (in my sense) is that the weights be dynamic and able to vary at least as fast as the environmental input. For this to have even a hope of working without degenerating into an unstable mess there must be lots of constraint: from the architecture, from cross-modal input and from previous knowledge in LTM. - Marek (marek at iuvax.cs.indiana.edu) says M> Would Pentti Kanerva's model of associative memory fit in with M> your definition of analogical inference? - and Geoff Hinton (geoff at ai.toronto.edu) says GH> It seems to me that my family trees example is exactly an example GH> of analogical mapping. Well, my memory of both is rather patchy, but I think not. At least, not in the sense that I am using analogical inference. The reason that I say this lies back in my previous paragraph. Connectionist work, to date, is very static: the net learns *a* mapping and then you use it. A net may learn to generalise on one problem domain in a way that looks like analogy, but I want it to be able to generalise to others on the fly. In order to perform true analogical inference the network must search in real-time for the correct transformation weights instead of learning them over an extended period. Hinton's 1981 network for assigning canonical object-based frames of reference is probably closer in spirit to analogical retrieval. In this model there are objects that must be recognised from arbitrary view-points. 
In this model the network settles simultaneously on the object class and the transformation that maps the perceptual image onto the object class. - Jim Franklin (jim at hydra.maths.unsw.oz.au) says JF> What is the 'argument that analogical inference is the basic mode of JF> retrieval from memory'? I thought I'd be able to slip that one by, but I forgot you were out there Jim. OK, here goes. There is a piece of paper you occasionally find tacked on the walls of labs that gives the translations for phrases used in scientific papers. Amongst others it contains: 'It is believed that ...' => 'In my last paper I said that ...' 'It is generally believed that ...' => 'I asked the person in the next office and she thought so too.' In other words, I can't quote you relevant papers but it appears to have some currency among my academic colleagues in psychology. As you would expect the belief is most strongly held by people who study analogy. People studying other phenomena generally try to structure their experiments so that analogical inference can't happen. The strongest support for the notion probably comes from natural language processing. The AI people have been stymied by unconstrained language being inherently metaphorical. If you read even a technical news report you find metaphorical usage: the market was , trading was . Words don't have meanings, they are gestures towards meanings. Wittgenstein pointed out the impossibility of precise definition. Attempts to make natural language understanding software systems by bolting on a metaphor-processor after synatx and semantics just don't work. Metaphor has to be in from ground level. Similarly, perceptual events don't have unambiguous meanings, they are gestures towards meanings. They must be interpreted in the context of the rest of the perceptual field and the intentions of the perceiver. One of the hallmarks of intelligent behaviour is to be able to perceive things in a context dependent way: usually a filing cabinet is a device for storage of papers, sometimes it is a platform for standing on while replacing a light bulb. Now suppose you have a very incomplete and ambiguous input to a memory device. You want the input to be completed and 'interpreted' in a way that is consistent with the input fragment and with the intentional context. You also have a lot of hard-earned prior knowledge that you should take advantage of. Invoke a fairly standard auto-associator for pattern completion. If your input data is a literal match for a learned item it can be simply completed. If your input data is not a literal match then find a transformation such that it *can* be completed via the auto-associator and re-transformed back into the original pattern. If the transformed-associated-untransformed version matches the original and you have also filled in some of the gaps then you have performed an analogical inference/retrieval. The literal matching case can be seen as a special case of this where the transform and its inverse are the identity mapping. So, if you have a memory mechanism that performs analogical retrieval then you automatically get literal retrieval but if your standard mechanism is literal retrieval then you have to have some other mechanism for analogical inference. I believe that if you can do analogical retrieval you have achieved the crucial step on the way to symbol grounding, natural language understanding, common sense reasoning and genuine artificial intelligence. I shall now step down from the soap box. 
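A toy sketch of the transform-complete-untransform loop just described may help. Everything below is an invented illustration, not Gayler's model: the stored patterns are random, the auto-associator is a plain Hopfield-style net, and the "transformations" searched are simply cyclic shifts, with the zero shift playing the role of literal retrieval.

# Toy sketch (invented for illustration): analogical retrieval as
# transform -> auto-associative completion -> inverse transform.
import numpy as np

N = 24
rng = np.random.default_rng(1)
memories = np.sign(rng.normal(size=(3, N)))           # three stored +/-1 patterns

W = sum(np.outer(p, p) for p in memories) / N         # Hebbian storage
np.fill_diagonal(W, 0.0)

def complete(x, steps=20):
    """Hopfield-style pattern completion; zero entries mean 'unknown'."""
    s = x.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1
    return s

def retrieve(probe):
    """Try every cyclic shift; keep the completion that, after shifting back,
    best agrees with the known (non-zero) part of the probe."""
    known = probe != 0
    best = None
    for k in range(N):                                 # k = 0 is literal retrieval
        shifted = np.roll(probe, k)                    # transform
        filled = complete(shifted)                     # associate / complete
        candidate = np.roll(filled, -k)                # inverse transform
        score = np.sum(candidate[known] == probe[known])
        if best is None or score > best[0]:
            best = (score, k, candidate)
    return best

# Probe: a shifted fragment of memory 0, with half of its entries blanked out.
probe = np.roll(memories[0], 5)
probe[:N // 2] = 0
score, shift, answer = retrieve(probe)
print("best shift:", shift, " agreement on known entries:", int(score))
print("agreement with the full shifted memory:",
      float(np.mean(answer == np.roll(memories[0], 5))))

The point of the sketch is only the control structure: literal retrieval falls out as the special case in which the winning transform is the identity, exactly as argued above, while a realistic system would need a far richer, task-driven family of transformations searched by the network itself rather than by exhaustive enumeration.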
Ross Gayler ross at psych.psy.uq.oz.au ^^^^^^^^^^^^^^^^^^ <- My mailer lives here, but I live 2,000km south. Any job offers will be gratefully considered - I have a mortgage & dependents. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: memories. But neither of these viewpoints constitutes AI in any real sense. There is a famous saying by Nietzsche that might be adapted to describe the current status of neural networks: Machines "do not become thinkers simply because their memories are too good." Yet there are other aspects of neural networks that have been extremely important. Thus the structural paradigm is of obvious value to the neurophysiologist, the cognitive scientist, and the vision researcher. It would be of value to the computer science community if Information Sciences were to review and critique the original promise of neurocomputing in the light of developments in the past few years. The Special Issue of Information Sciences will do just this. It will provide reviews of this link between neural networks and AI. In other words, the scope of this Issue is much broader than that of the most commonly encountered applications of associative memories or mapping networks. The application areas that the Issue will deal with include neural logic programming, feature detection, knowledge representation, search techniques, and learning. The connectionist approach to AI will be contrasted with the traditional symbolic techniques. Deadline for Submissions: October 30, 1991 Papers may be sent to: Subhash Kak Guest Editor, Information Sciences Department of Electrical and Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: is not much you can parallelize if you do per-sample training. Take the vanilla version of backprop for example: assuming a network has 20 hidden and output units and 300 weights, all you can do in parallel is evaluate 20 sigmoid functions and 300 multiply-adds (you can't even do that because of the dependencies among the parameters). Thus if you have thousands of processors in a parallel machine, most processors will idle. In the strict per-sample case, sample i+1 needs to use the weights updated by sample i, so you can't run multiple copies of the same network. And that is the trick several people came up with (independently) to speed up backprop training on parallel machines. Unless we modify the algorithm a little bit, I can't see a way to run multiple copies of a network in parallel in the per-sample case. - Xiru Zhang From Xuedong.Huang at SPEECH2.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Xuedong.Huang at SPEECH2.CS.CMU.EDU (Xuedong.Huang@SPEECH2.CS.CMU.EDU) Date: Tue, 15 Oct 1991 08:17-EDT Subject: Positions in CMU Speech Group Message-ID: <687529025/xdh@SPEECH2.CS.CMU.EDU> Applications are invited for one full-time research programmer and a few part-time programmer positions in the speech group, School of Computer Science, Carnegie Mellon University, Pittsburgh, beginning November 1, 1991, or later. For the full-time position, BS/MS in CS/EE and excellence in C/Unix programming required. Experience in system integration, speech recognition, hidden Markov modeling, search, and neural nets preferred.
For the part-time positions, we are particulary interested in CMU sophomores. Neat research opportunity for a real speech application. Our Project involves mostly software development, hidden Markov modeling, large-vocabulary search, language modeling, and neural computing. Send all materials including resume, transcripts, and the names of two references to: Dr. Xuedong Huang Research Computer Scientist School of Computer Science Carnegie Mellon University Pittsburgh, PA 15213 From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Wed 23 Oct 91 22:36:53-PDT Subject: Continuous vs. Batch learning In-Reply-To: <9110222317.AA22627@sanger.bio.uci.edu> Message-ID: <688282613.0.LAWS@AI.SRI.COM> From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: > I can't think of > any biological examples of batch learning, in which sensory data are > saved until a certain number of them can be somehow averaged together > and conclusions made and remembered. Any ideas? My observation of children is that they remember everything well enough to know if they have seen or heard it before, but they pay little attention to facts or advice that are not repeated. It is the act of repetition that marks a stimulus as one of the important ones that must be learned. -- Ken Laws ------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From jon at johann Tue Jun 6 06:52:25 2006 From: jon at johann (Jonathon Baxter) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Ray White writes: > > Larry Fast writes: > > > I'm expanding the PDP Backprop program (McClelland&Rumlhart version 1.1) to > > compensate for the following problem: > > > As Backprop passes the error back thru multiple layers, the gradient has > > a built in tendency to decay. At the output the maximum slope of > > the 1/( 1 + e(-sum)) activation function is 0.5. > > Each successive layer multiplies this slope by a maximum of 0.5. > ..... > > > It has been suggested (by a couple of sources) that an attempt should be > > made to have each layer learn at the same rate. ... > > > The new error function is: errorPropGain * act * (1 - act) > > This suggests to me that we are too strongly wedded to precisely > f(sum) = 1/( 1 + e(-sum)) as the squashing function. That function > certainly does have a maximum slope of 0.25. > > A nice way to increase that maximum slope is to choose a slightly different > squashing function. For example f(sum) = 1/( 1 + e(-4*sum)) would fill > the bill, or if you'd rather have your output run from -1 to +1, then > tanh(sum) would work. I think that such changes in the squashing function > should automatically improve the maximum-slope situation, essentially by > doing the "errorPropGain" bookkeeping for you. > > Such solutions are static fixes. I suggested a dynamic adjustment of the > learning parameter for recurrent backprop at IJCNN - 90 in San Diego > (The Learning Rate in Back-Propagation Systems: an Application of Newton's > Method, IJCNN 90, vol I, p 679). The method amounts to dividing the > learning rate parameter by the square of the gradient of the output > function (subject to an empirical minimum divisor). One should be able > to do something similar with feedforward systems, perhaps on a layer by > layer basis. 
> > - Ray White (white at teetot.acusd.edu) The fact that the error "decays" when backpropagated through several layers is not a "problem" with the BP algorithm, its merely a reflection of the fact that earlier weights contribute less to the error than later weights. If you go around changing the formula for the error at each weight then the resulting learning algorithm will no longer be gradient descent, and hence there is no guarantee that your algorithm will reduce the network's error. Ray White's solution is preferable as it will still use gradient descent to improve the network's performance, although doing things on a layer by layer basis would be wrong. I have experimented a little with keeping the magnitude of the error vector constant in feedforward, backprop nets (by dividing the error vector by its magnitude) and have found a significant (*10) speedup in small problems (xor, encoder--decoders, etc). This increase in speed is most noticable in problems where the "solution" is a set of infinite weights, so that an approximate solution is reached by traversing vast, flat regions of weight space. Presumably there is a lot of literature out there on this kind of thing. Another idea is to calculate the matrix of second derivatives (grad(grad E)) as well as the first derivatives (grad E) and from this information calculate the (unique) parabolic surface in weight space that has the same derivatives. Then the weights should be updated so as to jump to the center (minimum) of the parabola. I haven't coded this idea yet, has anyone else looked at this kind of thing, and if so what are the results? Jon Baxter - jon at degas.cs.flinders.oz.au From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Thu, 14 Nov 1991 11:35-EST Subject: Backprop-simulator for Connection Machine available Message-ID: <690136508/thrun@B.GP.CS.CMU.EDU> Folks, according to the recent discussion concerning parallel implementations of neural networks, I want to inform you that there is a very fast code of backprop for the Connection Machine CM-2 public available. This posting is about a year old, and meanwhile various labs have found and removed (almost) all the bugs in the code, such that it is very reliable now. It should be noted that this implementation is the most simple way of parallelizing backprop (it's data-parallel), and that it works efficiently with large training sets only (in the order of 500 to infinity). But then it works great! --- Sebastian ------------------------------------------------------------------ (original message follows) ------------------------------------------------------------------ The following might be interesting for everybody who works with the PDP backpropagation simulator and has access to a Connection Machine: ******************************************************** ** ** ** PDP-Backpropagation on the Connection Machine ** ** ** ******************************************************** For testing our new Connection Machine CM/2 I extended the PDP backpropagation simulator by Rumelhart, McClelland et al. with a parallel training procedure for the Connection Machine (Interface C/Paris, Version 5). Following some ideas by R.M. Faber and A. Singer I simply made use of the inherent parallelism of the training set: Each processor on the connection machine (there are at most 65536) evaluates the forward and backward propagation phase for one training pattern only. 
Thus the whole training set is evaluated in parallel and the training time does not depend on the size of this set any longer. Especially at large training sets this reduces the training time greatly. For example: I trained a network with 28 nodes, 133 links and 23 biases to approximate the differential equations for the pole balancing task adopted from Anderson's dissertation. With a training set of 16384 patterns, using the conventional "strain" command, one learning epoch took about 110.6 seconds on a SUN 4/110 - the connection machine with this SUN on the frontend managed the same in 0.076 seconds. --> This reduces one week exhaustive training to approximately seven minutes! (By parallelizing the networks themselves similar acceleration can be achieved also with smaller training sets.) -------------- The source is written in C (Interface to Connection Machine: PARIS) and can easily be embedded into the PDP software package. All origin functions of the simulator are not touched - it is also still possible to use the extended version without a Connection Machine. If you want to have the source, please mail me! Sebastian Thrun, thrun at cs.cmu.edu You can also obtain the source via ftp: ftp 129.26.1.90 Name: anonymous Password: ftp> cd pub ftp> cd gmd ftp> get pdp-cm.c ftp> bye From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: about patenting multiplication was misunderstood by many people. I expected that announcing that I had patented multiplication would sound so ridiculous, that it would help make the whole issue of patenting algorithms ridiculous too. Just to make things clear, I am strongly against the patenting of algorithms. Multiplication cannot be patented, because it is in the public domain. In fact, I thought that most people knew this, and that the meaning of the announcement of having patented multiplication would therefore be clear. Apparently, it was not clear to everyone. Sorry. Luis B. Almeida INESC Phone: +351-1-544607 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt lba at inesc.uucp (if you have access to uucp) From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Wed 20 Nov 91 10:07:12-PST Subject: Subtractive network design In-Reply-To: <9111181719.AA00230@poseidon.cs.tulane.edu> Message-ID: <690660432.0.LAWS@AI.SRI.COM> There's been some discussion of whether networks should grow or shrink. This reminds me of the stepwise-inclusion and stepwise-deletion debate for multiple regression. As I recall, there were demonstrable benefits from combining the two. Stepwise inclusion was used for speed, but with stepwise deletion of variables that were thereby made redundant. The selection process was simplified, over the years, by advances in the theory of canonical correlation. The theory of minimal encoding has lately been invoked to improve stopping criteria for the search. Neural-network researchers don't like globally computed statistics or stored states, so you can't set up an A* search within a single network training run. You do seem willing, however, to use genetic algorithms or multiple training runs to find sufficiently good networks for a target application. Brute-force search techniques in the space of permitted connectivities may be necessary. Stepwise growth alternated with stepwise death may be a useful strategy for reducing search time. 
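A rough sketch of what such alternation might look like for a one-hidden-layer regression network follows. It is an invented illustration prompted by the regression analogy, not a tested recipe: the toy data, the growth step (one fresh unit per round), and the magnitude-based deletion threshold of 0.05 are all arbitrary choices.

# Rough sketch (invented for illustration): alternating stepwise growth and
# stepwise pruning of hidden units in a one-hidden-layer regression network.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-1, 1, size=(200, 2))
y = np.sin(3 * X[:, :1]) * X[:, 1:]                    # toy target function

def net(X, W1, W2):
    h = np.tanh(X @ W1)
    return h @ W2, h

def train(W1, W2, epochs=500, lr=0.05):
    """Plain full-batch gradient descent; updates W1 and W2 in place."""
    for _ in range(epochs):
        out, h = net(X, W1, W2)
        err = out - y
        W2 -= lr * h.T @ err / len(X)
        W1 -= lr * X.T @ ((err @ W2.T) * (1 - h ** 2)) / len(X)

W1 = rng.normal(scale=0.5, size=(2, 2))                # start with 2 hidden units
W2 = rng.normal(scale=0.5, size=(2, 1))

for step in range(6):
    train(W1, W2)
    # Grow: add one freshly initialised hidden unit.
    W1 = np.hstack([W1, rng.normal(scale=0.5, size=(2, 1))])
    W2 = np.vstack([W2, rng.normal(scale=0.5, size=(1, 1))])
    train(W1, W2)
    # Prune: delete hidden units made redundant (tiny outgoing weights).
    keep = np.abs(W2[:, 0]) > 0.05
    keep[np.argmax(np.abs(W2[:, 0]))] = True           # never delete every unit
    W1, W2 = W1[:, keep], W2[keep, :]
    mse = np.mean((net(X, W1, W2)[0] - y) ** 2)
    print(f"step {step}: {W1.shape[1]} hidden units, mse {mse:.4f}")

Whether this beats pure growth or pure pruning on a given problem is an empirical question; the sketch only shows how cheaply the two steps can be interleaved within ordinary training.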
-- Ken ------- From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Tue, 26 Nov 1991 20:14-EST Subject: International Comparison of Learning Algorithms: MONK Message-ID: <691204481/thrun@B.GP.CS.CMU.EDU> Dear Connectionists: This is an announcement of a forthcoming Technical Report. In the last months, we did run a first worldwide comparison of some major learning algorithms on three simple classification problems. Two connectionist learning algorithms were also compared, namely plain Backpropagation and Cascade Correlation. Although a) the problems were taken from a domain which supported (some of the) symbolical algorithms, and b) this comparison is considerably un-biased since the testers did really know the methods they tested, connectionist techniques performed surprisingly well. The final report will be available shortly after NIPS conference, but everyone who is interested in this comparison and attends the conference may feel free to contact me at NIPS. I will bring a few pre-prints. Sebastian Thrun thrun at cs.cmu.edu ---------------------------------------------------------------------- ---------------------------------------------------------------------- The MONK's Problems A Performance Comparison of Different Learning Algorithms S. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang This report summarizes a comparison of different learning techniques which was performed at the 2nd European Summer School on Machine Learning, held in Belgium during summer 1991. A variety of symbolic and non-symbolic learning techniques - namely AQ17-DCI, AQ17-HCI, AQ17-FCLS, AQ14-NT, AQ15-GA, Assistant Professional, mFOIL, ID5R, IDL, ID5R-hat, TDIDT, ID3, AQR, CN2, CLASSWEB, PRISM, Backpropagation, and Cascade Correlation - are compared on three classification problems, the MONK's problems. The MONK's problems are derived from a domain in which each training example is represented by six discrete-valued attributes. Each problem involves learning a binary function defined over this domain, from a sample of training examples of this function. Experiments were performed with and without noise in the training examples. One significant characteristic of this comparison is that it was performed by a collection of researchers, each of whom was an advocate of the technique they tested (often they were the creators of the various methods). In this sense, the results are less biased than in comparisons performed by a single person advocating a specific learning method, and more accurately reflect the generalization behavior of the learning techniques as applied by knowledgeable users. ---------------------------------------------------------------------- ================================ RESULTS - A SHORT OVERVIEW ================================ Problem: MONK-1 MONK-2 MONK-3(noisy) AQ17-DCI 100.0% 100.0% 94.2% AQ17-HCI 100.0% 93.1% 100.0% AQ17-FCLS 92.6% 97.2% AQ14-NT 100.0% AQ15-GA 100.0% 86.8% 100.0% (by J. Bala, E. Bloedorn, K. De Jong, K. Kaufman, R.S. Michalski, P. Pachowicz, H. Vafaie, J. Wnek, and J. Zhang) Assistant Professional 100.0% 81.25% 100.0% (by B. Cestnik, I. Kononenko, and I. Bratko) mFOIL 100.0% 69.2% 100.0% (by S. 
Dzeroski) ID5R 81.7% 61.8% IDL 97.2% 66.2% ID5R-hat 90.3% 65.7% TDIDT 75.7% 66.7% (by W. Van de Velde) ID3 98.6% 67.9% 94.4% ID3, no windowing 83.2% 69.1% 95.6% ID5R 79.7% 69.2% 95.2% AQR 95.9% 79.7% 87.0% CN2 100.0% 69.0% 89.1% CLASSWEB 0.10 71.8% 64.8% 80.8% CLASSWEB 0.15 65.7% 61.6% 85.4% CLASSWEB 0.20 63.0% 57.2% 75.2% (by J. Kreuziger, R. Hamann, and W. Wenzel) PRISM 86.3% 72.7% 90.3% (by S. Keller) Backpropagation 100.0% 100.0% 93.1% (by S. Thrun) Cascade Correlation 100.0% 100.0% 97.2% (by S.E. Fahlman) ---------------------------------------------------------------------- ---------------------------------------------------------------------- ================================ TABLE OF CONTENTS ================================ 1 The MONK's Comparison Of Learning Algorithms -- Introduction and Survey 1 1.1 The problem 2 1.2 Visualization 2 2 Applying Various AQ Programs to the MONK's Problems: Results and Brief Description of the Methods 7 2.1 Introduction 8 2.2 Results for the 1st problem (M1) 9 2.2.1 Rules obtained by AQ17-DCI 9 2.2.2 Rules obtained by AQ17-HCI 10 2.3 Results for the 2nd problem (M2) 11 2.3.1 Rules obtained by AQ17-DCI 11 2.3.2 Rules obtained by AQ17-HCI 11 2.3.3 Rules obtained by AQ17-FCLS 13 2.4 Results for the 3rd problem (M3) 15 2.4.1 Rules obtained by AQ17-HCI 15 2.4.2 Rules obtained by AQ14-NT 16 2.4.3 Rules obtained by AQ17-FCLS 16 2.4.4 Rules obtained by AQ15-GA 17 2.5 A Brief Description of the Programs and Algorithms 18 2.5.1 AQ17-DCI (Data-driven constructive induction) 18 2.5.2 AQ17-FCLS (Flexible concept learning) 19 2.5.3 AQ17-HCI (Hypothesis-driven constructive induction) 19 2.5.4 AQ14-NT (noise-tolerant learning from engineering data) 20 2.5.5 AQ15-GA (AQ15 with attribute selection by a genetic algorithm) 20 2.5.6 The AQ Algorithm that underlies the programs 21 3 The Assistant Professional Inductive Learning System: MONK's Problems 23 3.1 Introduction 24 3.2 Experimental results 24 3.3 Discussion 25 3.4 Literature 25 3.5 Resulting Decision Trees 26 4 mFOIL on the MONK's Problems 29 4.1 Description 30 4.2 Set 1 31 4.3 Set 2 31 4.4 Set 3 32 5 Comparison of Decision Tree-Based Learning Algorithms on the MONK's Problems 33 5.1 IDL: A Brief Introduction 34 5.1.1 Introduction 34 5.1.2 Related Work 35 5.1.3 Conclusion 36 5.2 Experimental Results 40 5.2.1 ID5R on test set 1 43 5.2.2 IDL on test set 1 43 5.2.3 ID5R-HAT on test set 1 44 5.2.4 TDIDT on test set 1 44 5.2.5 ID5R on test set 2 45 5.2.6 IDL on test set 2 46 5.2.7 TDIDT on test set 2 48 5.2.8 TDIDT on test set 1 49 5.2.9 ID5R-HAT on test set 2 50 5.3 Classification diagrams 52 5.4 Learning curves 56 6 Comparison of Inductive Learning Programs 59 6.1 Introduction 60 6.2 Short description of the algorithms 60 6.2.1 ID3 60 6.2.2 ID5R 61 6.2.3 AQR 61 6.2.4 CN2 62 6.2.5 CLASSWEB 62 6.3 Results 63 6.3.1 Training Time 63 6.3.2 Classifier Results 64 6.4 Conclusion 68 6.5 Classification diagrams 69 7 Documentation of Prism -- an Inductive Learning Algorithm 81 7.1 Short Description 82 7.2 Introduction 82 7.3 PRISM: Entropy versus Information Gain 82 7.3.1 Maximizing the information gain 82 7.3.2 Trimming the tree 82 7.4 The Basic Algorithm 83 7.5 The Use of Heuristics 84 7.6 General Considerations and a Comparison with ID3 84 7.7 Implementation 84 7.8 Results on Running PRISM on the MONK's Test Sets 85 7.8.1 Test Set 1 -- Rules 86 7.8.2 Test Set 2 -- Rules 87 7.8.3 Test Set 3 -- Rules 90 7.9 Classification diagrams 92 8 Backpropagation on the MONK's problems 95 8.1 Introduction 96 8.2 Classification diagrams 
97 8.3 Resulting weight matrices 99 9 The Cascade-Correlation Learning Algorithm on the MONK's Problems 101 9.1 The Cascade-Correlation algorithm 102 9.2 Results 103 9.3 Classification diagrams 106 From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Fri, 29 Nov 1991 16:21-EST Subject: International Comparison of Learning Algorithms: MONK Message-ID: <691449698/thrun@B.GP.CS.CMU.EDU> Dear Connectionists, Two days ago I announced the forthcoming TR "The MONK's Problems -- A Performance Comparison of Different Learning Algorithms" on this mailing list. Since then I spend a significant part of my time in answering e-mails. To make things sufficiently clear: - The final TR will be published in 2-3 weeks. - I will make the TR available by ftp (Ohio State-archive). The report in more than 100 pages in length, and we want send hardcopies only to people who cannot access ftp. If you are not able to retrieve it by ftp, please feel free to contact me. _ I also will also copy the "MONK's problems" to the archive. - If this all is done, I will announce the report again on this list. --- Sebastian From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Mon, 16 Dec 1991 13:27-EST Subject: International Comparison of Learning Algorithms: MONK Message-ID: <692908033/thrun@B.GP.CS.CMU.EDU> Dear Connectionists: The technical report "The MONK's Problems - A Performance Comparison of Different Learning Algorithms" is now available via anonymous ftp. Copies of the report as well as the MONK's database can be obtained in the following way: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get thrun.comparison.ps.Z (=report) ftp> get thrun.comparison.dat.Z (=data) ftp> quit unix> uncompress thrun.comparison.ps.Z unix> uncompress thrun.comparison.dat.Z unix> lpr thrun.comparison.ps unix> lpr thrun.comparison.dat If this does not work, send e-mail to reports at cs.cmu.edu asking for the Technical Report CMU-CS-91-197. Sebastian Thrun thrun at cs.cmu.edu SCS, CMU, Pittsburgh PA 15213 ---------------------------------------------------------------------- ---------------------------------------------------------------------- Some things changed - here is the abstract and the table of contents again: The MONK's Problems A Performance Comparison of Different Learning Algorithms S. Thrun, J. Bala, E. Bloedorn, I. Bratko, B. Cestnik, J. Cheng, K. De Jong, S. Dzeroski, S.E. Fahlman, D. Fisher, R. Hamann, K. Kaufman, S. Keller, I. Kononenko, J. Kreuziger, R.S. Michalski, T. Mitchell, P. Pachowicz, Y. Reich, H. Vafaie, W. Van de Welde, W. Wenzel, J. Wnek, and J. Zhang CMU-CS-91-197 This report summarizes a comparison of different learning techniques which was performed at the 2nd European Summer School on Machine Learning, held in Belgium during summer 1991. A variety of symbolic and non-symbolic learning techniques - namely AQ17-DCI, AQ17-HCI, AQ17-FCLS, AQ14-NT, AQ15-GA, Assistant Professional, mFOIL, ID5R, IDL, ID5R-hat, TDIDT, ID3, AQR, CN2, CLASSWEB, PRISM, Backpropagation, and Cascade Correlation - are compared on three classification problems, the MONK's problems. The MONK's problems are derived from a domain in which each training example is represented by six discrete-valued attributes. 
Each problem involves learning a binary function defined over this domain, from a sample of training examples of this function. Experiments were performed with and without noise in the training examples. One significant characteristic of this comparison is that it was performed by a collection of researchers, each of whom was an advocate of the technique they tested (often they were the creators of the various methods). In this sense, the results are less biased than in comparisons performed by a single person advocating a specific learning method, and more accurately reflect the generalization behavior of the learning techniques as applied by knowledgeable users. ---------------------------------------------------------------------- ================================ RESULTS - A SHORT OVERVIEW ================================ MONK-1 MONK-2 MONK-3(noisy) AQ17-DCI 100.0% 100.0% 94.2% AQ17-HCI 100.0% 93.1% 100.0% AQ17-FCLS 92.6% 97.2% AQ14-NT 100.0% AQ15-GA 100.0% 86.8% 100.0% (by J. Bala, E. Bloedorn, K. De Jong, K. Kaufman, R.S. Michalski, P. Pachowicz, H. Vafaie, J. Wnek, and J. Zhang) Assistant Professional 100.0% 81.25% 100.0% (by B. Cestnik, I. Kononenko, and I. Bratko) mFOIL 100.0% 69.2% 100.0% (by S. Dzeroski) ID5R 81.7% 61.8% IDL 97.2% 66.2% ID5R-hat 90.3% 65.7% TDIDT 75.7% 66.7% (by W. Van de Velde) ID3 98.6% 67.9% 94.4% ID3, no windowing 83.2% 69.1% 95.6% ID5R 79.7% 69.2% 95.2% AQR 95.9% 79.7% 87.0% CN2 100.0% 69.0% 89.1% CLASSWEB 0.10 71.8% 64.8% 80.8% CLASSWEB 0.15 65.7% 61.6% 85.4% CLASSWEB 0.20 63.0% 57.2% 75.2% (by J. Kreuziger, R. Hamann, and W. Wenzel) PRISM 86.3% 72.7% 90.3% (by S. Keller) ECOBWEB leaf pred. 71.8% 67.4% 68.2% " plus inform.utility 82.7% 71.3% 68.0% (by Y. Reich and D. Fisher) Backpropagation 100.0% 100.0% 93.1% BP + weight decay 100.0% 100.0% 97.2% (by S. Thrun) Cascade Correlation 100.0% 100.0% 97.2% (by S.E. Fahlman) ---------------------------------------------------------------------- ---------------------------------------------------------------------- 1 The MONK's Comparison Of Learning Algorithms -- Introduction and Survey S.B. Thrun, T. Mitchell, and J. Cheng 1 1.1 The problem 2 1.2 Visualization 2 2 Applying Various AQ Programs to the MONK's Problems: Results and Brief Description of the Methods J. Bala, E. Bloedorn, K. De Jong, K. Kaufman, R.S. Michalski, P. Pachowicz, H. Vafaie, J. Wnek, and J. Zhang 7 2.1 Introduction 8 2.2 Results for the 1st problem (M1) 9 2.2.1 Rules obtained by AQ17-DCI 9 2.2.2 Rules obtained by AQ17-HCI 10 2.3 Results for the 2nd problem (M2) 11 2.3.1 Rules obtained by AQ17-DCI 11 2.3.2 Rules obtained by AQ17-HCI 11 2.3.3 Rules obtained by AQ17-FCLS 13 2.4 Results for the 3rd problem (M3) 15 2.4.1 Rules obtained by AQ17-HCI 15 2.4.2 Rules obtained by AQ14-NT 16 2.4.3 Rules obtained by AQ17-FCLS 16 2.4.4 Rules obtained by AQ15-GA 17 2.5 A Brief Description of the Programs and Algorithms 17 2.5.1 AQ17-DCI (Data-driven constructive induction) 17 2.5.2 AQ17-FCLS (Flexible concept learning) 18 2.5.3 AQ17-HCI (Hypothesis-driven constructive induction) 18 2.5.4 AQ14-NT (noise-tolerant learning from engineering data) 19 2.5.5 AQ15-GA (AQ15 with attribute selection by a genetic algorithm) 20 2.5.6 The AQ Algorithm that underlies the programs 20 3 The Assistant Professional Inductive Learning System: MONK's Problems B. Cestnik, I. Kononenko, and I. Bratko 23 3.1 Introduction 24 3.2 Experimental results 24 3.3 Discussion 25 3.4 Literature 25 3.5 Resulting Decision Trees 26 4 mFOIL on the MONK's Problems S. 
Dzeroski 29 4.1 Description 30 4.2 Set 1 31 4.3 Set 2 31 4.4 Set 3 32 5 Comparison of Decision Tree-Based Learning Algorithms on the MONK's Problems W. Van de Welde 33 5.1 IDL: A Brief Introduction 34 5.1.1 Introduction 34 5.1.2 Related Work 35 5.1.3 Conclusion 36 5.2 Experimental Results 40 5.2.1 ID5R on test set 1 43 5.2.2 IDL on test set 1 43 5.2.3 ID5R-HAT on test set 1 44 5.2.4 TDIDT on test set 1 44 5.2.5 ID5R on test set 2 45 5.2.6 IDL on test set 2 46 5.2.7 TDIDT on test set 2 48 5.2.8 TDIDT on test set 1 49 5.2.9 ID5R-HAT on test set 2 50 5.3 Classification diagrams 52 5.4 Learning curves 56 6 Comparison of Inductive Learning Programs J. Kreuziger, R. Hamann, and W. Wenzel 59 6.1 Introduction 60 6.2 Short description of the algorithms 60 6.2.1 ID3 60 6.2.2 ID5R 61 6.2.3 AQR 61 6.2.4 CN2 62 6.2.5 CLASSWEB 62 6.3 Results 63 6.3.1 Training Time 63 6.3.2 Classifier Results 64 6.4 Conclusion 68 6.5 Classification diagrams 69 7 Documentation of Prism -- an Inductive Learning Algorithm S. Keller 81 7.1 Short Description 82 7.2 Introduction 82 7.3 PRISM: Entropy versus Information Gain 82 7.3.1 Maximizing the information gain 82 7.3.2 Trimming the tree 82 7.4 The Basic Algorithm 83 7.5 The Use of Heuristics 84 7.6 General Considerations and a Comparison with ID3 84 7.7 Implementation 84 7.8 Results on Running PRISM on the MONK's Test Sets 85 7.8.1 Test Set 1 -- Rules 86 7.8.2 Test Set 2 -- Rules 87 7.8.3 Test Set 3 -- Rules 90 7.9 Classification diagrams 92 8 Cobweb and the MONK Problems Y. Reich, and D. Fisher 95 8.1 Cobweb: A brief overview 96 8.2 Ecobweb 97 8.2.1 Characteristics prediction 97 8.2.2 Hierarchy correction mechanism 97 8.2.3 Information utility function 98 8.3 Results 98 8.4 Summary 100 9 Backpropagation on the MONK's Problems S.B. Thrun 101 9.1 Introduction 102 9.2 Classification diagrams 103 9.3 Resulting weight matrices 105 10 The Cascade-Correlation Learning Algorithm on the MONK's Problems S.E. Fahlman 107 10.1 The Cascade-Correlation algorithm 108 10.2 Results 109 10.3 Classification diagrams 112 From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Fri 20 Dec 91 10:34:32-PST Subject: Research topic needed In-Reply-To: <9112200658.AA22681@bluering.cowan.edu.au> Message-ID: <693254072.0.LAWS@AI.SRI.COM> > could any of Neural Gurus help me to identify research topic, please. I don't mean to flame Boon Tan (btan at cowan.edu.au), but shouldn't the question be one of finding a customer? Or a real-world problem? Too many thesis projects serve no purpose beyond getting the degree, one appearance at a major conference, and perhaps a journal article. As long as you're looking for a topic, it would be best to choose one with value outside a single academic department. Neural networks for control of sheep shearing might not interest the Neural Gurus, but at least there be hope of doing the world some good. Or is that total inappropriate at the doctoral level? 
-- Ken ------- From IN% Tue Jun 6 06:52:25 2006 From: IN% (IN%) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From postmaster%BROWNCOG.BITNET at CARNEGIE.BITNET Tue Jun 6 06:52:25 2006 From: postmaster%BROWNCOG.BITNET at CARNEGIE.BITNET (PMDF Mail Server) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Delivery report Message-ID: <01GF3NDHUGF8000BAJ@BROWNCOG.BITNET> ---------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: possible solutions, then some of them will work well for new inputs and others will not work well. So on one training run a network may appear to generalise well to a new input set, while on another it does not. Does this mean that, when connectionists refer to the ability of a network to generalise, they are referring to an average ability over many trials? Has anyone encountered situations in which the same network appeared to generalise well on one learning trial and poorly on another? Reference: Bates, E.A. & Elman, J.L. (1992). Connectionism and the study of change. CRL Technical Report 9202, (February). -- Paul Atkins email: patkins at laurel.mqcc.mq.oz.au School of Behavioural Sciences phone: (02) 805-8606 Macquarie University fax : (02) 805-8062 North Ryde, NSW, 2113 Australia.  From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Sun, 29 Mar 1992 12:16-EST Subject: new papers about exploration in active learning Message-ID: <701889417/thrun@B.GP.CS.CMU.EDU> This is an announcement of three papers about exploration in neurocontrol and reinforcement learning. I copied postscript versions to our neuroprose archive. Thanks to Jordan Pollack - what would connectionism be without him?? Instructions for retrieval can be found at the end of this message. Comments are welcome. --- Sebastian Thrun =========================================================================== ACTIVE EXPLORATION IN DYNAMIC ENVIRONMENTS by S.Thrun and K.Moeller To appear in: Advances in Neural Information Processing Systems 4, J.E. Moody, S.J. Hanson, and R.P. Lippmann (eds.) Morgan Kaufmann, San Mateo, CA, 1992 Whenever an agent learns to control an unknown environment, two opposing principles have to be combined, namely: exploration (long-term optimization) and exploitation (short-term optimization). Many real-valued connectionist approaches to learning control realize exploration by randomness in action selection. This might be disadvantageous when costs are assigned to ``negative experiences.'' The basic idea presented in this paper is to make an agent explore unknown regions in a more directed manner. This is achieved by a so-called competence map, which is trained to predict the controller's accuracy, and is used for guiding exploration. Based on this, a bistable system enables smoothly switching attention between two behaviors -- exploration and exploitation -- depending on expected costs and knowledge gain. The appropriateness of this method is demonstrated by a simple robot navigation task. archive name: thrun.nips91.ps.Z =========================================================================== EFFICIENT EXPLORATION IN REINFORCEMENT LEARNING by S. Thrun Technical Report CMU-CS-92-102, Jan. 1992, Carnegie-Mellon University Exploration plays a fundamental role in any active learning system. 
This study evaluates the role of exploration in active learning and describes several local techniques for exploration in finite, discrete domains, embedded in a reinforcement learning framework (delayed reinforcement). This paper distinguishes between two families of exploration schemes: undirected and directed exploration. While the former family is closely related to random walk exploration, directed exploration techniques memorize exploration-specific knowledge which is used for guiding the exploration search. In many finite deterministic domains, any learning technique based on undirected exploration is inefficient in terms of learning time, i.e. learning time is expected to scale exponentially with the size of the state space [Whitehead 91]. We prove that for all these domains, reinforcement learning using a directed technique can always be performed in polynomial time, demonstrating the important role of exploration in reinforcement learning. Subsequently, several exploration techniques found in recent reinforcement learning and connectionist adaptive control literature are described. In order to trade off efficiently between exploration and exploitation -- a trade-off which characterizes many real-world active learning tasks -- combination methods are described which explore and avoid costs simultaneously. This includes a selective attention mechanism, which allows smooth switching between exploration and exploitation. All techniques are evaluated and compared on a discrete reinforcement learning task (robot navigation). The empirical evaluation is followed by an extensive discussion of benefits and limitations of this work. archive name: thrun.explor-reinforcement.ps.Z =========================================================================== THE ROLE OF EXPLORATION IN LEARNING CONTROL by S. Thrun To appear in: Handbook of Intelligent Control: Neural, Fuzzy and Adaptive Approaches, D.A. White and D.A. Sofge, Van Nostrand Reinhold, Florence, Kentucky 41022 This chapter basically summarizes the results described in the papers above, and surveys recent work on exploration in neurocontrol and reinforcement learning. Here are the issues addressed in this paper: `[...] Let us begin with the questions characterizing exploration and exploitation. Exploration seeks to minimize learning time. Thus, the central question of efficient exploration reads ``How can learning time be minimized?''. Accordingly, the question of exploitation is ``How can costs be minimized?''. These questions are usually opposing, i.e. the smaller the learning time, the larger the costs, and vice versa. But as we will see, pure exploration does not necessarily minimize learning time. This is because pure exploration, as presented in this chapter, maximizes knowledge gain, and thus may waste much time in exploring task-irrelevant parts of the environment. If one is interested in restricting exploration to relevant parts of the environment, it often makes sense to exploit simultaneously. Therefore exploitation is part of efficient exploration. On the other hand, exploration is also part of efficient exploitation, because costs clearly cannot be minimized over time without exploring the environment. The second important question to ask is ``What impact has the exploration rule on the speed and the costs of learning?'', or in other words ``How much time should a designer, who designs an active learning system, spend for designing an appropriate exploration rule?''. 
This question will be extensively discussed, since the impact of the exploration technique on both learning time and learning costs can be enormous. Depending on the structure of the environment, ``wrong'' exploration rules may result in inefficient learning time, even if very efficient learning techniques are employed. The third central question relevant for any implementation of learning control is ``How does one trade-off exploration and exploitation?''. Since exploration and exploitation establish a trade-off, this question needs further specification. For example, one might ask ``How can I find the best controller in a given time?'', or ``How can I find the best controller while not exceeding a certain amount of costs?''. Both questions constrain the trade-off dilemma in such a way that an optimal combination between exploration and exploitation may be found, given that the problem can be solved with these constraints at all. Now assume one has already an efficient exploration and an efficient exploitation technique. This raises the question ``How shall exploration and exploitation be combined?''. Shall each action explore and exploit the environment simultaneously, or shall an agent sometimes focus more on exploration, and sometimes focus more on exploitation?' archive name: thrun.exploration-overview.ps.Z =========================================================================== INSTRUCTIONS FOR RETRIEVAL unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get thrun.nips91.ps.Z (First paper) ftp> get thrun.explor-reinforcement.ps.Z (Second paper) ftp> get thrun.exploration-overview.ps.Z (Third paper) ftp> quit unix> zcat thrun.nips91.ps.Z | lpr unix> zcat thrun.explor-reinforcement.ps.Z | lpr unix> zcat thrun.exploration-overview.ps.Z | lpr If you are unable to ftp and/or print the papers, send mail to thrun at cs.cmu.edu or write to Sebastian Thrun, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA 15213, USA  From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Sat, 4 Apr 1992 20:14-EST Subject: Why does the error rise in a SRN? In-Reply-To: Gary Cottrell's mail message of Fri, 3 Apr 92 18:12:16 PST Message-ID: <702436485/thrun@B.GP.CS.CMU.EDU> Gary writes: > > Yes, it seems that Elman nets can't learn in batch mode. > I have tried recurrent networks with Elman-structure, but with complete gradient descent through time. This was done on a couple of problems including Morse code recognition, handwritten digit recognition, prediction of a ball trajectory. I used the Connection Machine, batch mode, and a very small learning rate (things are fast on a Connection Machine), and I did not observe that the error on the training set started to increase. However, I did observe that the networks often converged to useless local minima. Finding a meaningful representation for the context layer seems to be an order of magnitude more difficult than identifying weight and biases in a feed-forward network. 
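For readers unfamiliar with the architecture being discussed, here is a minimal sketch of an Elman-style simple recurrent network trained with the usual truncated scheme, in which the context layer is just a copy of the previous hidden layer and gradients are cut off at the copy. This is not the complete gradient descent through time (nor the Connection Machine batch setup) described above; the toy next-bit prediction task and every parameter below are invented for the illustration.

# Minimal Elman SRN sketch with truncated (one-step) gradients; the context
# units hold a copy of the previous hidden state and are treated as fixed
# inputs during backprop. Task: predict the next bit of a repeating pattern.
import numpy as np

rng = np.random.default_rng(0)
N_IN, N_HID, N_OUT = 1, 4, 1
Wxh = rng.uniform(-0.5, 0.5, (N_HID, N_IN))
Whh = rng.uniform(-0.5, 0.5, (N_HID, N_HID))   # context -> hidden
Why = rng.uniform(-0.5, 0.5, (N_OUT, N_HID))
bh, by = np.zeros(N_HID), np.zeros(N_OUT)
lr = 0.1

seq = np.array([0, 0, 1, 0, 0, 1] * 50, dtype=float)   # repeating pattern

for epoch in range(200):
    context = np.zeros(N_HID)
    err = 0.0
    for t in range(len(seq) - 1):
        x, target = seq[t:t+1], seq[t+1:t+2]
        h = np.tanh(Wxh @ x + Whh @ context + bh)
        y = 1.0 / (1.0 + np.exp(-(Why @ h + by)))
        e = y - target
        err += float(e @ e)
        # backprop one step only; no gradient flows into 'context'
        dy = e * y * (1 - y)
        dh = (Why.T @ dy) * (1 - h * h)
        Why -= lr * np.outer(dy, h); by -= lr * dy
        Wxh -= lr * np.outer(dh, x); Whh -= lr * np.outer(dh, context); bh -= lr * dh
        context = h                     # copy hidden state into the context units
    if epoch % 50 == 0:
        print(f"epoch {epoch}: summed squared error {err:.3f}")

A full-gradient version would additionally backpropagate through the context copy across time steps, which is exactly where the representational difficulties mentioned above arise.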
Sebastian  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From RUSPINI at ai.sri.com Tue Jun 6 06:52:25 2006 From: RUSPINI at ai.sri.com (Enrique Ruspini) Date: Fri 26 Jun 92 12:34:05-PDT Subject: Call for Papers - ICNN'93 Message-ID: <709587245.0.RUSPINI@AI.SRI.COM> 1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS San Francisco, California, March 28 - April 1, 1993 The IEEE Neural Networks Council is pleased to announce its 1993 International Conference on Neural Networks (ICNN'93) to be held in San Francisco, California from March 28 to April 1, 1993. ICNN'93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE'93). Participants will be able to attend the technical events of both meetings. ICNN '93 will be devoted to the discussion of basic advances and applications of neurobiological systems, neural networks, and neural computers. Topics of interest include: * Neurodynamics * Associative Memories * Intelligent Neural Networks * Invertebrate Neural Networks * Neural Fuzzy Systems * Evolutionary Programming * Optical Neurocomputers * Supervised Learning * Unsupervised Learning * Sensation and Perception * Genetic Algorithms * Virtual Reality & Neural Networks * Applications to: - Image Processing and Understanding - Optimization - Control - Robotics and Automation - Signal Processing ORGANIZATION: General Chair: Enrique H. Ruspini Program Chairs: Hamid R. Berenji, Elie Sanchez, Shiro Usui ADVISORY BOARD: S. Amari J. A. Anderson J. C. Bezdek Y. Burnod L. Cooper R. C. Eberhart R. Eckmiller J. Feldman M. Feldman K. Fukushima R. Hecht-Nielsen J. Holland C. Jorgensen T. Kohonen C. Lau C. Mead N. Packard D. Rummelhart B. Skyrms L. Stark A. Stubberud H. Takagi P. Treleaven B. Widrow PROGRAM COMMITTEE: K. Aihara I. Aleksander L.B. Almeida G. Andeen C. Anderson J. A. Anderson A. Andreou P. Antsaklis J. Barhen B. Bavarian H. R. Berenji A. Bergman J. C. Bezdek H. Bourlard D. E. Brown J. Cabestany D. Casasent S. Colombano R. de Figueiredo M. Dufosse R. C. Eberhart R. M. Farber J. Farrell J. Feldman W. Fisher W. Fontana A.A. Frolov T. Fukuda C. Glover K. Goser D. Hammerstrom M. H. Hassoun J. Herault J. Hertz D. Hislop A. Iwata M. Jordan C. Jorgensen L. P. Kaelbling P. Khedkar S. Kitamura B. Kosko J. Koza C. Lau C. Lucas R. J. Marks J. Mendel W.T. Miller M. Mitchell S. Miyake A.F. Murray J.-P. Nadal T. Nagano K. S. Narendra R. Newcomb E. Oja N. Packard A. Pellionisz P. Peretto L. Personnaz A. Prieto D. Psaltis H. Rauch T. Ray M. B. Reid E. Sanchez J. Shavlik B. Sheu S. Shinomoto J. Shynk P. K. Simpson N. Sonehara D. F. Specht A. Stubberud N. Sugie H. Takagi S. Usui D. White H. White R. Williams E. Yodogawa S. Yoshizawa S. W. Zucker ORGANIZING COMMITEE: PUBLICITY: H.R. Berenji EXHIBITS: W. Xu TUTORIALS: J.C. Bezdek VIDEO PROCEEDINGS: A. Bergman FINANCE: R. Tong VOLUNTEERS: A. WORTH SPONSORING SOCIETIES: ICNN'93 is sponsored by the Neural Networks Council. 
Constituent Societies: * IEEE Circuits and Systems Society * IEEE Communications Society * IEEE Computer Society * IEEE Control Systems Society * IEEE Engineering in Medicine & Biology Society * IEEE Industrial Electronics Society * IEEE Industry Applications Society * IEEE Information Theory Society * IEEE Lasers and Electro-Optics Society * IEEE Oceanic Engineering Society * IEEE Power Engineering Society * IEEE Robotics and Automation Society * IEEE Signal Processing Society * IEEE Systems, Man, and Cybernetics Society CALL FOR PAPERS The program committee cordially invites interested authors to submit papers dealing with any aspects of research and applications related to the use of neural models. Papers must be written in English and must be received by SEPTEMBER 21, 1992. Six copies of the paper must be submitted and the paper should not exceed 8 pages including figures, tables, and references. Papers should be prepared on 8.5" x 11" white paper with 1" margins on all sides, using a typewriter or letter quality printer in one column format, in Times or similar style, 10 points or larger, and printed on one side of the paper only. Please include title, authors name(s) and affiliation(s) on top of first page followed by an abstract. FAX submissions are not acceptable. Please send submissions prior to the deadline to: Dr. Hamid Berenji, AI Research Branch, MS 269-2, NASA Ames Research Center, Moffett Field, California 94035 CALL FOR VIDEOS: The IEEE Neural Networks Council is pleased to announce its first Video Proceedings program, intended to present new and significant experimental work in the fields of artificial neural networks and fuzzy systems, so as to enhance and complement results presented in the Conference Proceedings. Interested researchers should submit a 2 to 3 minute video segment (preferred formats: 3/4" Betacam, or Super VHS) and a one page information sheet (including title, author, affiliation, address, a 200-word abstract, 2 to 3 references, and a short acknowledgment, if needed), prior to September 21, 1992, to Meeting Management, 5665 Oberlin Drive, Suite 110, San Diego, CA 92121. We encourage those interested in participating in this program to write to this address for important suggestions to help in the preparation of their submission. TUTORIALS: The Computational Brain: Biological Neural Networks Terrence J. Sejnowski The Salk Institute Evolutionary Programming David Fogel Orincon Corporation Expert Systems and Neural Networks George Lendaris Portland State University Genetic Algorithms and Neural Networks Darrell Whitley Colorado State University Introduction to Biological and Artificial Neural Networks Steven Rogers Air Force Institute of Technology Suggestions from Cognitive Science for Neural Network Applications James A. Anderson Department of Cognitive and Linguistic Sciences Brown University EXHIBITS: ICNN '93 will be held concurrently with the Second IEEE International Conference on Fuzzy Systems (FUZZ-IEEE '93). ICNN '93 and FUZZ-IEEE '93 are the largest conferences and trade shows in their fields. Participants to either conference will be able to attend the combined exhibit program. We anticipate an extraordinary trade show offering a unique opportunity to become acquainted with the latest developments in products based on neural-networks and fuzzy-systems techniques. Interested exhibitors are requested to contact the Chairman, Exhibits, ICNN '93 and FUZZ-IEEE '93, Wei Xu at Telephone (408) 428-1888, FAX (408) 428-1884. 
FOR ADDITIONAL INFORMATION, CONTACT Meeting Management 5665 Oberlin Drive Suite 110 San Diego, CA 92121 Tel. (619) 453-6222 FAX (619) 535-3880 ------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Thu 20 Aug 92 21:18:09-PDT Subject: A neat idea from L. Breiman Message-ID: <714370689.0.LAWS@AI.SRI.COM> L. Breiman's "back fitting" sounds very much like the search strategy in some of the fancier stepwise multiple regression programs. At each step, the best remaining variable is added to the regression equation. Then other variables in the equation are tested to see if any can be dropped out. The "repeat until quiescence" search isn't usually performed, but I suppose that it could have its uses. There are also clustering algorithms that have this flavor, notably ISODATA. As clusters are grown or merged, it's possible for data points to drop out. I've also used such an algorithm in image segmentation. I look for pairs of regions that can be merged, and for any region that can be split. The two searches are alternated, approaching a global optimum (one hopes). It's quite different from the usual split-merge algorithm. If you try Breiman's back fitting, watch out for cycles. In my segmentation application, I ran into cycles containing more than a hundred steps. -- Ken Laws ------- From bill at nsma.arizona.edu Tue Jun 6 06:52:25 2006 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Models of LTP and LTD In-Reply-To: fellous@rana.usc.edu's message of 7 Aug 92 01:03:01 GMT Message-ID: Here are a few things for you: Zador A, Koch C, Brown TH: Biophysical model of a Hebbian synapse. {\em Proc Nat Acad Sci USA} 1990, 87:6718-6722. Proposes a specific, experimentally justified model of the dynamics of LTP in hippocampal synapses. Brown TH, Zador AM, Mainen ZF and Claiborne BJ (1991) Hebbian modifications in hippocampal neurons, in ``Long-Term Potentiation: A Debate of Current Issues'' (eds M Baudry and JL Davis) MIT Press, Cambridge MA, 357-389. Summarizes the material in the previous paper, and explores the consequences of the facts of LTP for the representations formed within the hippocampus, using compartmental modeling techniques. Holmes WR, Levy WB: Insights into associative long-term potentiation from computational models of NMDA receptor-mediated calcium influx and intracellular calcium concentration changes. {\em J Neurophysiol} 1990, 63:1148-1168. Regards, -- Bill From dario at cns.nyu.edu Tue Jun 6 06:52:25 2006 From: dario at cns.nyu.edu (Dario Ringach) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: <9208081613.AA02826@wotan.cns.nyu.edu> For a review see: AUTHOR Baudry, M. TITLE Long-term potentiation : a debate of current issues IMPRINT Cambridge, Mass. : MIT Press, c1991. Hope this helps...
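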
-- Dario Dario Ringach office: (212) 998-3941 Center for Neural Science home: (212) 727-3941 New York University e-mail: dario at cns.nyu.edu From koch at Iago.Caltech.Edu Tue Jun 6 06:52:25 2006 From: koch at Iago.Caltech.Edu (koch@Iago.Caltech.Edu) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: LTP Message-ID: <920809100145.20401657@Iago.Caltech.Edu> I wrote a recent overview article for Science which you might find of relevance to LTP. "Dendritic spines: convergence of theory and experiment", C. Koch, A. Zador and T. Brown, {\it Science} {\bf 256:} 973-974, Ciao, Christof From coby at shum.huji.ac.il Tue Jun 6 06:52:25 2006 From: coby at shum.huji.ac.il (Yaakov Metzger) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: LTP LTD Message-ID: Hi Regarding your question on the net about LTP and LTD, I guess you might be interested in my MSc thesis, published in: AUTHOR = {Y. Metzger and D. Lehmann}, TITLE = {Learning Temporal Sequences by Local Synaptic Changes}, JOURNAL = {Network}, VOLUME = {Vol 1}, PAGES = {169--188}, YEAR = 1990} We present there some considerations around the nature of LTP and LTD. I'd also like to know what aspects of LTP and LTD you are looking at, and what other answers you got. Please mail me your answer even if you post it because I dont scan the net too often. Coby From: granger at ics.uci.edu Status: RO I and my colleages at U.C. Irvine have done some computational modeling of LTP in the olfactory system and in hippocampus. The following article is based on computational analysis of network-level effects of LTP as it occurs in the olfactory cortex. The incremental strengthening of synapses, in combination with lateral inhibitory activity, led to a computational "clustering" effect; repetitive cyclic activity of the olfactory bulb-cortex system led to sequential hierarchical information emerging from the system: Ambros-Ingerson, J., Granger, R., and Lynch, G. (1990). Simulation of paleocortex performs hierarchical clustering. {\em Science}, 247: 1344-1348. [LTP in olfactory paleocortex was shown by Jung et al., Synapse, 6: 279 (1990) and by Kanter & Haberly, Brain Research, 525: 175 (1990).] The Science article led to specific behavioral and physiological predictions which were tested with positive results, reported in: Granger, R, Staubli, U, Powers, H, Otto, T, Ambros-Ingerson, J, & Lynch, G. (1991). Behavioral tests of a prediction from a cortical network simulation. {\em Psychological Science}, 2: 116-118. McCollum, J, Larson, J, Otto, T, Schottler, F, Granger, R, & Lynch, G. (1991). Short-latency single-unit processing in olfactory cortex. {\em J. Cognitive Neurosci.}, 3: 293-299. These and related results are summarized and reviewed in: Granger, R., and Lynch, G. (1991). Higher olfactory processes: Perceptual learning and memory. {\em Current Biology}, 1: 209-214. LTP varies in its effects in different anatomical regions, such as the distinct subddivisions of the hippocampal formation; possible effects of these different variants of LTP and interactions among the regions expressing them are explored in: Lynch, G. and Granger, R. (1992). Variations in synaptic plasticity and types of memory in cortico-hippocampal networks. {\em J.~Cognitive Neurosci.}, 4: 189-199. 
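The clustering effect mentioned above can be illustrated, very crudely, by plain competitive learning: incremental strengthening of the winning unit's synapses under winner-take-all inhibition pulls each unit's weight vector toward one cluster of inputs. The sketch below shows only that generic principle on invented data; it is not the paleocortex simulation of the papers cited, and all names and constants are assumptions.

# Bare-bones competitive-learning sketch: incremental strengthening of the
# winning unit's weights plus winner-take-all "inhibition" clusters inputs.
import numpy as np

rng = np.random.default_rng(1)
# three hypothetical "odor" clusters in a 10-dimensional input space
centers = rng.uniform(0, 1, (3, 10))
data = np.vstack([c + 0.05 * rng.standard_normal((30, 10)) for c in centers])
rng.shuffle(data)

n_units, lr = 3, 0.1
W = rng.uniform(0, 1, (n_units, 10))            # synaptic weights

for epoch in range(20):
    for x in data:
        winner = np.argmax(W @ x)               # winner-take-all: single active unit
        W[winner] += lr * (x - W[winner])       # incremental strengthening toward input

# each unit's weight vector should end up near one cluster center
for i, c in enumerate(centers):
    dists = np.linalg.norm(W - c, axis=1)
    print(f"cluster {i}: closest unit {dists.argmin()}, distance {dists.min():.3f}")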
Larson & Lynch, Brain Research, 489: 49 (1989) showed that synapses due to different afferents all converging on a single target cell become differentially strengthened (potentiated) via synaptic long-term potentiation (LTP) as a function of the order in which the afferents are activated within a time window of about 70 msec. It might be expected that the latest arrivers would coincide with the maximal depolarization of the target cell and thus by a "Hebbian" argument would be most strengthened (potentiated), yet it is in fact the earliest arriving afferents that become most potentiated, and the later arrivers are least potentiated. This enables the cell to become selective to different sequential activation, i.e., to act as a form of sequence detector. This is described in a forthcoming paper accepted in P.N.A.S. (to appear): Granger, Whitson, Larson & Lynch (1992): Non-Hebbian properties of LTP enable high-capacity encoding of temporal sequences. {\em Proc Nat Acad Sci USA} 1992, (in press). Some of these results are briefly summarized in: Anton, P., Granger, R. and Lynch, G. (1992). Temporal information processing in synapses, cells and circuits. In: {\em Single neuron computation}, (T.McKenna, J.Davis and S.Zornetzer, Eds.), NY: Academic Press, pp. 291-313. Hope this is helpful; I'd be happy to provide more information and additional papers if you wish. -Richard Granger Bonney Center University of California Irvine, California 92717 From ted at sbcs.sunysb.edu Tue Jun 6 06:52:25 2006 From: ted at sbcs.sunysb.edu (ted@sbcs.sunysb.edu) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Models of LTP and LTD Message-ID: <9208110155.AA04523@sbstaff2> Take a look at "Biophysical model of a Hebbian synapse" by Zador et al. PNAS 87:6718-22 (Sept 1990). From ted at sbcs.sunysb.edu Tue Jun 6 06:52:25 2006 From: ted at sbcs.sunysb.edu (ted@sbcs.sunysb.edu) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Models of LTP and LTD Message-ID: <9208140324.AA05836@sbstaff2> Here's another, more recent. Brown TH et al.: Hebbian computations in hippocampal dendrites and spines. Chapter 4 in Single Neuron Computation (Academic Press, 1992) ed. McKenna T et al. pp.81-116. BTW, my new address is carnevale-ted at cs.yale.edu --Ted <<<<<<<<<<<<<<<< .  From nsekar at umaxc.weeg.uiowa.edu Tue Jun 6 06:52:25 2006 From: nsekar at umaxc.weeg.uiowa.edu (nangarvaram Sekar) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: <9208080011.AA09590@umaxc.weeg.uiowa.edu> Hi , Tony Zador's work is the only sort of work, which is supposed to model LTP. His paper "Biophysical model of the Hebbian Synapse" doesn't model LTP / LTD perse but it models calcium dynamics on the dendritic spines. Few months back, we were investigating the biophysical plausibility of an algorithm ALOPEX (an optimization algorithm) and we have implemented a rudimental neural circuitry implementing ALOPEX,through LTP and LTD using neuron. This doesn't model LTP to the gory details of calcium dynamics. We had simple hodgkin & huxley membranes , with passive dendrites, NMDA receptors. Our model of LTP and LTD was as follows: increasing the synaptic conductance of NMDA_receptors & Non-NMDA receptors when NMDA was activated and the post synaptic voltage was above a certain threshold. We have implemented it in NEURON. Let me know if you get any additional references. Also you could refer to the book " LTP - a debate of current issues" by davies. There are a couple of chapters on modelling, but none of them as you would like them to be. 
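A minimal sketch of the kind of rule just described, stripped of the Hodgkin-Huxley membranes and dendritic detail: when the NMDA receptor is active and the postsynaptic voltage is above a threshold, both the NMDA and non-NMDA conductances at that synapse are incremented. The threshold and increments below are invented, and the mild depression branch is an added assumption rather than part of the original model.

# Toy abstraction of a voltage-gated Hebbian conductance update; not the
# NEURON implementation described above. All constants are hypothetical.
V_THRESH = -40.0               # mV, hypothetical potentiation threshold
DG_LTP, DG_LTD = 0.02, 0.005   # conductance increment/decrement (arbitrary units)
G_MAX, G_MIN = 1.0, 0.0

def update_synapse(g_nmda, g_ampa, nmda_active, v_post):
    """Return updated (g_nmda, g_ampa) for one synapse after one time step."""
    if nmda_active and v_post > V_THRESH:
        g_nmda = min(G_MAX, g_nmda + DG_LTP)     # potentiate both conductances
        g_ampa = min(G_MAX, g_ampa + DG_LTP)
    elif nmda_active:
        g_nmda = max(G_MIN, g_nmda - DG_LTD)     # sub-threshold activity: mild depression
        g_ampa = max(G_MIN, g_ampa - DG_LTD)
    return g_nmda, g_ampa

# example: repeated pairing of presynaptic activity with a depolarized cell
g_n, g_a = 0.1, 0.1
for t in range(10):
    g_n, g_a = update_synapse(g_n, g_a, nmda_active=True, v_post=-30.0)
print(f"after pairing: g_nmda={g_n:.2f}, g_ampa={g_a:.2f}")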
<<<<<<<<  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Queue this request? y Or if you wish you can get a listing of the available files, by giving the remote filename as: princeton.edu:(D)/pub/harnad Because of traffic delays through the FT-RELAY, still another method can sometimes be recommended, which is to use the Princeton bitftp fileserver described below. Typically, one sends a mail message of the form: FTP princeton.edu UUENCODE USER anonymous LS /pub/harnad GET /pub/harnad/bbs.fischer QUIT (the line beginning LS is required only if you need a listing of available files) to email address BITFTP at EARN.PUCC or to BITFTP at EDU.PRINCETON, and receives the requested file in the form of one or more email messages. [Thanks to Brian Josephson (BDJ10 at UK.AC.CAM.PHX) for the above detailed UK/JANET instructions; similar special instructions for file retrieval from other networks or countries would be appreciated and will be included in updates of these instructions.] --- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). ------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From 4 Tue Jun 6 06:52:25 2006 From: 4 (4) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Really-From: Nigel.Gilbert at soc.surrey.ac.uk Date: Sat, 24 Oct 92 15:56:13 BST Call for papers and participation Simulating Societies '93 24-26 July 1993 Approaches to simulating social phenomena and social processes Although the value of simulating complex phenomena in order to come to a better understanding of their nature is well recognised, it is still rare for simulation to be used to understand social processes. This symposium is intended to present original research, review current ideas, compare alternative approaches and suggest directions for future work on the simulation of social processes. It follows the first symposium held in April 1992 at the University of Surrey, UK. It is expected that about a dozen papers will be presented to the symposium and that revised versions will be published as a book. We are now seeking proposals for papers and for participation. Contributions from a range of disciplines including sociology, anthropology, archaeology, ethology, artificial intelligence, and artificial life are very welcome. Papers on the following and related topics are invited: * Discussions of approaches to the simulation of social processes such as those based on distributed artificial intelligence, genetic algorithms and neural networks, non-linear systems, general purpose stochastic simulation systems etc. * Accounts of specific simulations of processes and phenomena, at macro or micro level. * Critical reviews of existing work that has involved the simulation of social processes. 
* Reviews of simulation work in archeology, economics, psychology, geography, demography, etc. with lessons for the simulation of social processes. * Arguments for or against simulation as an approach to understanding complex social processes. * Simulations of human, animal and 'possible' societies. 'Social process' may be interpreted widely to include, for example, the rise and fall of nation states, the behaviour of households, the evolution of animal societies, and social interaction. Registration, accommodation and subsistence expenses during the meeting will be met by the sponsors. Partic ipants will need to find their own travel expenses. Proposals for papers are initially invited in the form of an abstract of no more than 300 words. Abstracts should be sent, along with a brief statement of research interests, to the address below by 15th March 1993. Authors of those selected will be invited to submit full papers by 1st June 1993. Those interested in participat ing, but not wishing to present a paper, should send a letter indicating the contribution they could make to the symposium, also by 15th March 1993. The organisers of the Symposium are Cristiano Castelfranchi (IP-CNR and University of Siena, Italy), Jim Doran (University of Essex, UK), Nigel Gilbert (University of Surrey, UK) and Domenico Parisi (IP- CNR, Roma, Italy). The symposium is sponsored by the University of Siena (Corso di laurea in Scienze della Comunicazione), the Consiglio Nazionale delle Ricerche (Istituto di Psicologia, Roma) and the University of Surrey. The meeting will be held at Certosa di Pontignano near Siena, Italy, a conference centre on the site of a 1400AD monastery. Proposals should be sent to: Prof Nigel Gilbert, Department of Sociology, University of Surrey, Guildford GU2 5XH, United Kingdom Tel: +44 (0)483 509173 Fax: +44 (0)483 306290 Email: gng at soc.surrey.ac.uk From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From jose at tractatus.siemens.com Tue Jun 6 06:52:25 2006 From: jose at tractatus.siemens.com (Steve Hanson) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NIPS*92 and CME travel Message-ID: [ Steve promises that this will be the last NIPS-92 announcement sent to CONNECTIONISTS. Sorry for the profusion of last-minute messages. -- Dave Touretzky, list maintainer ] NIPS*92 Goers: This is a annoucement will we try and send out to you in the next week, but the date is so tight that I am sending it on the Net first. Please repost and send to your NIPS colleagues. Thanks. Steve Hanson NIPS*92 General Chair CME Travel (big mountain picture in background) INVITATION TO ROCKIES: On behalf of the NIPS Conference Coordinators, CME and CME Travel would like to welcome you to the Vail Valley. Your organization has selected Colorado Mountain Express to assit with your travel needs while attending the NEURAL INFORMATION PROCESSING SYSTEMS WORKSHOP at the Radissson Resort in Vail Colorado, December 2-5, 1992. In an effort to provide the most economic and professional service, speical discounted airfare and ground transportation rates have been negotiated to fly you into Denver and transfer you on December 3 at 1:30pm from Marriott's City Center Hotel to the Radisson in Vail and return you back to Denver Stapleton Airport upon your requested departure. Colorado Mountain Express located in the VAil Valley, has been serving the Vail and Beaver Creek Resort since 1983. 
Your speical group code "NIPS" not only provides you access to SPECIAL AIRLINE FARES, negotiaed on your behalf but also makes available preferred gound transfer rates with Colorado Mountona Express or Hertz Car Rental. ***NIPS*** Special Group Code ******Preferred Airline Contracts****** ******Discounted Ground Transportation**** via Colorado Mountain Express or Hertz Car Rental 1-800-525-6363 RSVP by NOVEMBER 18, 1992 We look forward to coordinating your travel arrangements. Please contact a CME travel Consultant at ext 6100 no later than Nov. 18th to secure your travel plans. Sincerely, Colorado Mountain Express & CME Travel Stephen J. Hanson Learning Systems Department SIEMENS Research 755 College Rd. East Princeton, NJ 08540 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: "The audio synthesizer is built around an integrated-circuit chip from Intel Corporation in Santa Clara, California. The chip, called the Intel 80170NX electrically trainable analog neural network (ETANN), simulates the function of nerve cells in a biological brain." Unlikely in that we don't yet know how nerve cells in a biological brain function. Is it really necessary many years (now) into neural net research to continue to lean on the brain for moral support? Sorry to retorically beat a dead horse, but statements like this are annoying to those of us whose primary interest is to understand how the brain works. They also still occur far to frequently especially in association with products. Jim Bower From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Job opening Message-ID: A non-text attachment was scrubbed... Name: not available Type: multipart Size: 2020 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/20060606/a43c86f7/attachment.ksh From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From David Tue Jun 6 06:52:25 2006 From: David (David) Date: December 1, 1992 Subject: Job Opening at UMass (Amherst) Message-ID: I seek your assistance in finding someone for a position at the University of Massachusetts, Amherst. I have been awarded a Research Scientist Development Award (RSDA) from the National Institute of Mental Health. The award allows me to devote full time to research for 5 years (beginning September 30, 1992). It also provides funds to hire someone to cover the courses I normally teach. The person occupying the replacement position is expected to engage in research that complements my own. My colleagues and I are therefore looking for someone to teach in the area of cognitive/experimental psychology (3 courses per year, typically one graduate and two undergraduate) and to do research related to my interests. Currently, I am working on a computational model of movement selection (primarily for reaching and related behaviors). My students and I are testing predictions of the model with normal adult human subjects, using an Optotrak recording system housed in our Department. Ideally, we would like to find someone with a strong background in cognitive or experimental psychology who is well versed in computational approaches to cognition and performance, especially, but not exclusively, in the domain of motor control. 
If you know of such a person and think he or she might be interested in this opportunity, would you please bring it to his or her attention? A copy of the ad, which will be appearing soon in the APA Monitor and APS Observer, is attached. Our Psychology Department is an exciting place for someone with interests in the cognitive substrates of motor control. My colleague, Professor Rachel Clifton, also holds an RSDA; one of her areas of study is infant motor development. We have close ties to biomechanists in the Exercise Science Department, roboticists and connectionist modellers in the Computer Science Department, and neuroscientists in our own department and in Biology. The UMass Psychology Department has a strong faculty in cognitive and neuroscience generally. There are frequent interdisciplinary meetings involving the many people in the greater Amherst area who are concerned with basic and applied issues related to the control of action, and there are many other meetings as well pertaining to other areas of cognitive science. A word about the timing of the appointment is in order. Funds are available to hire someone immediately, although only on a temporary basis; that is, the replacement position cannot be filled permanently until September, following a full affirmative-action search. Anyone hired on a temporary basis will be expected to teach at least 1 and possibly 2 courses in the Spring semester (which begins in late January). Whether the person teaches 1 course or 2 depends on his or her abilities and desires, as well as departmental needs. The temporary appointment can begin earlier than January, as far as I know. In the best of all worlds, the person hired temporarily will then stay on for the full 4 years, but this is not guaranteed. I look forward to hearing from you or someone you might tell about this position. Please feel free to contact me at the above address or at any of the numbers below for further information. It is advisable to respond quickly to this call. Thank you for your kind attention. David A. Rosenbaum Professor of Psychology 413-545-4714 DAVID.ROSENBAUM at PSYCH.UMASS.EDU Here is the ad that will appear soon in the APA Monitor and the APS Observer: COGNITIVE PSYCHOLOGY: The Department of Psychology at the University of Massachusetts/Amherst anticipates an opening for a non-tenure track position at the Assistant or Associate Professor level, starting September 1993 and renewable through August 1997. Preference will be given for individuals with primary interests in the cognitive substrates of human motor control, perceptual-motor integration, or human performance, although candidates focusing on other topics will be considered. Send vita, statement of interest, representative papers, and at least three letters of recommendation to: Dr. David A. Rosenbaum, Search Committee, Department of Psychology, University of Massachusetts, Amherst, MA 01003. Review of applications will begin January 18 and continue until the position is filled. The University of Massachusetts is an Affirmative Action/Equal Opportunity Institution. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Kak> The observer plays a Kak> fundamental role in the measurement problem of quantum mechanics Kak> and several scientists have claimed that physics will remain Kak> incomplete unless consciousness is incorporated into it. The present thread will remain incomplete without reference to what Popper has to say on this matter. For a strong case against the mystification of physics, see Karl R. Popper Quantum Theory and the Schism in Physics from the Postscript to "The Logic of Scientific Discovery" Edited by W. W. Bartley Unwin Hyman: London, 1982 -Shimon p.s. Popper, contrary to what could be expected from his being a dualist about the so-called "mind-body problem", is actually very much a realist about other, more important issues in science. Consequently, one can ignore his "The Self and its Brain" (a bad influence of J. Eccles? :-), and still benefit from the earlier (and better) "The Logic of Scientific Discovery". From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Does backprop need the derivative ?? Message-ID: Hi, I have done some experiments on this and am working on "Robustness of BP to transfer function and derivative" ( tentative title ). I have found that the actual derivative need not be used. So long as the "derivative" equivalent ( whether constant or other functions) indicates the direction of increasing or decreasing value of the transfer function ( whether immediate or potential ) ie if the transfer function is increasing or will increase any positive value for the derivative would do. Hence for sigmoid function one may use a positive constant. For unit step function f(x) = 1 for x >= 0 = 0 for x < 0 we could use some high positive value at x=0 and nearby and some low positive value further away. Although the derivative is zero except at x=0, using zero would jam ( stop ) the whole backprop process since backproped error would be zero in all nodes ( eventually ). Hence using a low value in this case could be interpreted as an indication that if we move in the positive direction we may possibly increase the output of the node. Many variations of the derivative are possible. I have tried many and they work ( most of the time ). One problem with this is that if the output of the node is already "1" then increasing the input would not increase the output as our derivative suggest. What we need to do in this case is to check the backprop error's direction ( ie +ve or -ve ) and have two different values of our derivative depending on thedirection. Still working on it. Hope this helps. Please contact me for any comments / discussion. Regards, Tiong_Hwee Goh  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: derivative CANNOT be replaced with a constant. Below several people indicate that they had trouble when the derivative was replaced. However some people says that it can be replaced. I believe that it is problem dependent. I know of two small problems in which you cannot change the derivative. 
They are: - the XOR-problem (I suppose everyone is familiar with that one) and - the so-called sine-problem: Try to learn a 1-3-1 network the sine in the range from -pi to pi. The output neuron has a linear transfer function, the hidden neurons have a tanh transfer function. Backpropagation with individual update is used to train the network (batch update has problems). I used 37 training samples in the given range and stopped training when the total error (sum over all patterns of .5 times square target output minus actual output) was smaller than 0.001 Furthermore, I would like to say that communicating in such a way is very efficient and I would like to thank everyone for their responses (and upcoming responses). Heini Withagen Department of Electrical Engineering EH 9.29 Eindhoven Technical University P.O. Box 513 5600 MB Eindhoven The Netherlands Phone: 31-40472366 Fax: 31-40455674 E-mail: heiniw at eeb.ele.tue.nl ------------------------------------------------------------------------ David Bisant from Stanford Univ. wrote: Those interested in this problem might want to take a look at an obscure reference by Chen & Mars (Wash., DC IJCNN, Vol 1 pg 601, 1990). They essentially drop the derivative term altogether from the weight update equation for the output layer. They claim that it helps to avoid saturated units. A magazine article (AI Expert July, 1991) has empirically compared this method with Fahlman's and a few others on some toy problems (not a rigorous comparison, but still informative). Here are some other references where an attempt has been made to simplify the activation and/or differential function: Samad IJCNN 90 (Wash DC) & Honeywell Tech Report SSDC-89-14902-3 Rezgui IJCNN 90 (Wash DC) Tepedelenlioglu IEEE ICSE 89 ------------------------------------------------------------------------ Guido Bugmann from King's College London wrote: I have developped a model of formal neuron by using micro-circuits of pRAM neurons. In order to train the parameters of the pRAM's composing the formal neuron, I had to rewrite backpropagation for this case. At some stage, I have found that propagating back only the sign (+1 or -1) of the error was enough. But it turned out that this technique was restricted to cases where the weights had to converge toward their maximum or minimum value. For problems where intermediate weights were optimum, the more refined information of the size of the error for each example was required. (By "error" I mean the whole expression which is backpropagated). ------------------------------------------------------------------------ Scott E. Fahlman from Carnegie Mellon University wrote: Interesting. I just tried this on encoder problems and a couple of other simple things, and leapt to the conclusion that it was a general phenomenon. It seems plausible to me that any "derivative" function that preserves the sign of the error and doesn't have a "flat spot" (stable point of 0 derivative) would work OK, but I don't know of anyone who has made an extensive study of this. ------------------------------------------------------------------------ George Bolt from University of York, U.K. wrote: I've looked at BP learning in MLP's w.r.t. fault tolerance and found that the derivative of the transfer function is used to *stop* learning. Once a unit's weights for some particular input (to that unit rather than the network) are sufficiently developed for it to decide whether to output 0 or 1, then weight changes are approximately zero due to this derivative. 
I would imagine that by setting it to a constant, then a MLP will over- learn certain patterns and be unable to converge to a state of equilibrium, i.e. all patterns are matched to some degree. A better route would be to set the derivative function to a constant over a range [-r,+r], where f[r] - (sorry) f( |r| ) -> 1.0. To make individual units robust with respect to weights, make r=c.a where f( |a| ) -> 1.0 and c is a small constant multiplicative value. ------------------------------------------------------------------------ Joris van Dam from University of Amsterdam wrote: At the University of Amsterdam, we have a single layer feed forward network that computes the probabilities in one occupancy grid given the occupancy probabilities in another grid that is rotated and translated with respect to the former. It turns out that a rather complex activation function needs to be used, which also involves the computation of a complex derivative. (Note: it can be easily computed from the activation). It is clear that in this case the derivative cannot be omitted: LEARNING WOULD BE INCORRECT. The derivative has a clear interpretation in the context of occupancy grids and the learning procedure (with derivative !!!!!) can be related to Monte Carlo estimation procedures. Omission of the derivative can thus be proven to be incorrect and experiments have underlined this theory. In my opinion the omission of the derivative is mathematically incorrect, but can be useful in some applications and may even speed up learning (some derivatives have, like Scott Fahlmann said, zero spots). However, it seems that esp. with complex networks and activation functions, the derivative needs to be used indeed. ------------------------------------------------------------------------ Janvier Movellan wrote: My experience with Boltzmann machines and GRAIN/diffusion networks (the continuous stochastic version of the Boltzmann machine) has been that replacing the real gradient by its sign times a constant accelerates learning DRAMATICALLY. I first saw this technique in one of the original CMU tech reports on the Boltzmann machine. I believe Peterson and Hartman and Peterson and Anderson also used this technique, which they called "Manhattan updating", with the deterministic Mean Field learning algorithm. I believe they had an article in "Complex Systems" comparing Backprop and Mean-Field with both with standard gradient descent and with Manhattan updating. It is my understanding that the Mean-Field/Boltzmann chip developed at Bellcore uses "Manhattan Updating" as its default training method. Josh Allspector is the person to contact about this. At this point I've tried 4 different learning algorithms with continuous and discrete stochastic networks and in all cases Manhattan Updating worked better than straight gradient descent.The question is why Manhattan updating works so well (at least in stochastic and Mean-Field networks) ? One possible interpreation is that Manhattan updating limits the influence of outliers and thus it performs something similar to robust regression. Another interpretation is that Manhattan updating avoids the saturation regions, where the error space becomes almost flat in some dimensions, slowing down learning. One of the disadvantages of Manhattan updating is that sometimes one needs to reduce the weight change constant at the end of learning. But sometimes we also do this in standard gradient descent anyway. ------------------------------------------------------------------------ David G. 
Stork from Ricoh California Research Center wrote: In an in-depth study of a particular hardware implementation of backprop, we investigated the need for the derivative in the learning rule. We found thatit was often essential to have such a derivative. For instance, the XOR problemcould not be so solved. (Incidentally, this analysis led to a patent: "A method employing logical gates for calculating activation function derivatives on stochastically-encoded signals" granted to myself and Ron Keesing, US Patent # 5,157,275.) Without the derivative, one is not guaranteed that you're doing gradient descent in error space. ------------------------------------------------------------------------ Randy Shimabukuro wrote: I am not familiar with Fahlman's paper, but I have looked at approximating the derivative of the transfer function with a step function approximation. I also looked at other approximations which we made to simplify the implementation of back propagation in an integrated circuit. The results were writen up in the following reference. Shimabukuro, Randy L., Shoemaker, Patrick A., Guest, Clark C., & Carlin, Michael J.(1991) Effect of Circuit Parameters on Convergence of Trinary Update Back-Propagation. Proceedings of the 1990 Connectionist Models Summer School, Touretzky, D.S., Elman, J.L., Sejnowski, T.J., and Hinton, G.E., Eds., pp. 152-158. Morgan Kaufmann, San Mateo, CA. ------------------------------------------------------------------------ Marwan Jabri from Sydney University wrote: It is likely as Scott Fahlman suggested any derivative that "preserves" the error sign may do the job. The question however is the implication in terms of convergence speed, and the comparison thereof with perturbation type training methods. ------------------------------------------------------------------------ Radford Neal responded to Marwan Jabri's writing with: One would expect this to work only for BATCH training. On-line training approximates the batch result only if the net result of updating the weights on many training cases mimics the summing of derivatives in the batch scheme. This will not be the case if a training case where the derivative is +0.00001 counts as much as one where it is +10000. This is not to say it might not work in some cases. There's just no reason to think that it will work generally. ------------------------------------------------------------------------ Jonathan Cohen wrote: You might take a look at a paper by Nestor Schmayuk in Psychological Review 1992. The paper is about the role of the hippocampus which, in a word, he argues implements biologically plausible backprop. The algorithm uses a hidden unit's activation rather than its derivative for computing the error. He doesn't give too broad a range of training examples, but you might contact him to find out what else he has tried. Hope this information is helpful. ------------------------------------------------------------------------ Jay McClelland wrote: Some work has been done using the activation rather than the derivative of the activation by Nestor Schmajuk. He is interested in biologically plausible models and tends to keep hidden units in the bottom half of the sigmoid. In that case they can be approximated by exponentials and so the derivative can be approximated by the activation. 
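To make the "sign of the gradient" idea concrete, here is a small sketch contrasting an ordinary batch gradient step (with the sigmoid derivative included) against Manhattan updating, where every weight moves by a fixed step in the direction of the sign of its gradient. The single logistic unit and the data are invented; this is not the code used in any of the experiments quoted in this thread.

# Plain gradient descent vs. Manhattan updating for one logistic unit
# on an invented, linearly separable task.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))
targets = (X @ np.array([1.5, -2.0, 0.5]) > 0).astype(float)   # invented task

def train(manhattan, step=0.05, epochs=200):
    w = np.zeros(3)
    for _ in range(epochs):
        y = 1.0 / (1.0 + np.exp(-(X @ w)))
        # batch gradient of summed squared error, sigmoid derivative included
        grad = X.T @ ((y - targets) * y * (1.0 - y))
        w -= step * (np.sign(grad) if manhattan else grad)
    y = 1.0 / (1.0 + np.exp(-(X @ w)))
    return np.mean((y > 0.5) == (targets > 0.5))

print("gradient descent accuracy :", train(manhattan=False))
print("Manhattan updating accuracy:", train(manhattan=True))

As noted above, whether the fixed-step variant helps or hurts appears to depend strongly on the problem and on whether updates are made in batch or per pattern.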
------------------------------------------------------------------------ John Kolen wrote: The quick answer to your question is no, you don't need "the derivative" you can use anything with the general qualitative shape of the derivate. I have some empirical results of training feedforward networks with different learning "functions", i.e different squashing derivatives, combination operators, etc. ------------------------------------------------------------------------ Gary Cottrell wrote: I happen to know it doesn't work for a more complicated encoder problem: Image compression. When Paul Munro & I were first doing image compression back in 86, the error would go down and then back up! Rumelhart said: "there's a bug in your code" and indeed there was: we left out the derivative on the hidden units. ------------------------------------------------------------------------  From pollack at cis.ohio-state.edu Tue Jun 6 06:52:25 2006 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Delays in Neuroprose Message-ID: <9302281147.AA09137@dendrite.cis.ohio-state.edu> ** DO NOT FORWARD TO OTHER GROUPS** To anyone submitting files, please expect short delays in processing of neuroprose files for a couple of weeks. Jordan Pollack Proud father of Dylan Seth Pollack CIS Dept/OSU Born 2/23/93, 7lbs 6oz 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Phone: (614)292-4890 (then * to fax) ** DO NOT FORWARD TO OTHER GROUPS **  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: networks, it is easy to see that an "instantaneous" multi-layer network combined with delays/integrators in the feedback loop can approximate arbitrary discrete/continuous-time dynamical systems. A question of interest is whether it can be done when all the units have intrinsic delays/integrators. The answer is yes, if we use a distributed representation of the state space. (6 pages) ----It is a simple problem someone might have already solved. I appreciate any reference to previous works. **************************************************************** Bifurcations of Recurrent Neural Networks in Gradient Descent Learning Kenji Doya, UCSD Asymptotic behavior of a recurrent neural network changes qualitatively at certain points in the parameter space, which are known as ``bifurcation points''. At bifurcation points, the output of a network can change discontinuously with the change of parameters and therefore convergence of gradient descent algorithms is not guaranteed. Furthermore, learning equations used for error gradient estimation can be unstable. However, some kinds of bifurcations are inevitable in training a recurrent network as an automaton or an oscillator. Some of the factors underlying successful training of recurrent networks are investigated, such as choice of initial connections, choice of input patterns, teacher forcing, and truncated learning equations. (11 pages) ----It is (to be) an extended version of "doya.bifurcation.ps.Z". **************************************************************** Dimension Reduction of Biological Neuron Models by Artificial Neural Networks Kenji Doya and Allen I. Selverston, UCSD An artificial neural network approach for dimension reduction of dynamical systems is proposed and applied to conductance-based neuron models. 
Networks with bottleneck layers of continuous-time dynamical units could make a 2-dimensional model from the trajectories of the Hodgkin-Huxley model and a 3-dimensional model from the trajectories of a 6-dimensional bursting neuron model. Nullcline analysis of these reduced models revealed the bifurcations of the neuronal dynamics underlying firing and bursting behaviors. (17 pages) **************************************************************** FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary either ftp> get doya.universality.ps.Z ftp> get doya.bifurcation2.ps.Z ftp> get doya.dimension.ps.Z or ftp> mget doya.* rehtie ftp> bye unix% zcat doya.universality.ps.Z | lpr unix% zcat doya.bifurcation2.ps.Z | lpr unix% zcat doya.dimension.ps.Z | lpr These files are also available for anonymous ftp from crayfish.ucsd.edu (132.239.70.10), directory "pub/doya".  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: faster if you don't start too close to the origin. That's why I normally use the range [-1,1] for weight initialization. Again, I never ran any extensive tests on that. The input logical values are symmetrical for the same reason that the sigmoid should be symmetrical - avoid the DC component. On the other hand, it is well known that one should not choose the saturation levels of the sigmoid as target logical values, otherwise the weights will tend to grow to infinity. That's why I chose +-.9 . The only parameter that I played with, in this case, was the learning rate. I made a few preliminary runs with different values of this parameter, and the value of 1 looked good. Note, however, that these were really just a few runs, not any extensive optimization. Since the previous informal results generated some discussion, I decided to be a bit more formal, and I report here the results of 51 runs using the framework indicated above, and different seeds for the random number generator. What I give below is the histogram of the number of epochs for convergence. The first figure is the number of epochs, the second one is the number of runs that converged in that number of epochs. 7 - 3 22 - 2 8 - 1 27 - 1 9 - 3 28 - 1 10 - 3 36 - 1 11 - 2 46 - 1 12 - 2 48 - 1 13 - 5 50 - 1 17 - 5 51 - 1 18 - 1 56 - 1 19 - 1 72 - 1 21 - 2 >2000 - 12 The ">2000" are the "local minima" (see below). As you can see, the median of this distribution is 19 epochs. Some colleagues around here have been running tests, with results consistent with these. One of them (Jose Amaral) has been studying algorithm convergence speeds, and therefore has software specially designed for this kind of tests. He also has similar results for this situation (in fact a median of 19, too). But he also came up with a very surprising result: if you use "tanh(s/2)" as sigmoid, with a step size of .7, the median of the number of epochs is only 4 (!) [I've put the exclamation between parentheses, so that people don't think it is the factorial of 4]. We plan to make available, in a few days, a postscript version of one or two graphs, with a summary of his results for a few different cases. 
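For anyone who wants to try numbers of this kind themselves, here is a rough reconstruction of the XOR setup from the description above, with assumptions filled in where the message is silent: a 2-2-1 architecture (not stated explicitly), tanh hidden and output units, inputs of -1/+1, targets of -0.9/+0.9, weights initialised uniformly in [-1,1], learning rate 1, per-pattern updates, and stopping when the total error falls below 0.001. Since the simulator details differ, the epoch counts will not match the histogram exactly.

# Reconstructed XOR experiment under the assumptions stated above.
import numpy as np

def run_xor(seed, lr=1.0, max_epochs=2000):
    rng = np.random.default_rng(seed)
    X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], dtype=float)
    T = np.array([-0.9, 0.9, 0.9, -0.9])
    W1 = rng.uniform(-1, 1, (2, 3))        # hidden weights (last column is bias)
    W2 = rng.uniform(-1, 1, 3)             # output weights (last entry is bias)
    for epoch in range(1, max_epochs + 1):
        total = 0.0
        for x, t in zip(X, T):
            xb = np.append(x, 1.0)
            h = np.tanh(W1 @ xb)
            hb = np.append(h, 1.0)
            y = np.tanh(W2 @ hb)
            e = y - t
            total += 0.5 * e * e
            dy = e * (1 - y * y)
            dh = W2[:2] * dy * (1 - h * h)
            W2 -= lr * dy * hb
            W1 -= lr * np.outer(dh, xb)
        if total < 0.001:
            return epoch
    return max_epochs            # counted as a "local minimum" run
epochs = sorted(run_xor(seed) for seed in range(20))
print("epochs to converge over 20 seeds:", epochs)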
A few words about "local minima": I used this expression somewhat informally, as we normally do, meaning that after a large number of epochs (say, 2000) the network has not yet learned the correct outputs for all training patterns, and the cost function is decreasing very slowly, so it appears to be converging to a local minimum. I must say, however, that some years ago I once took one of these "local minima" of the XOR, and allowed it to continue training for a long time. After some 180000 epochs, the net actually learned all 4 patterns correctly. I tried this with one of the "local minima" here, and the same thing happened again (after I reduced the step size to .5, and then to .2). I don't know how many epochs it took: when I left to teach a class, it was above 1 000 000 epochs, with wrong output in one of the patterns. I left it running and when I came back it was at 5 360 000 epochs, and had already learned all 4 patterns. Finally, I am sorry that I cannot publish the simulator code itself. We sell this simulator (we don't make much money with it, but anyway), so I can't make it public. And besides, now that I have told you all my tricks, leave me at least with my little simulator, so that I can earn my living by selling it to those that didn't read this e-mail :) Happy training, Luis B. Almeida INESC Phone: +351-1-544607, +351-1-3100246 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt lba at inesc.uucp (if you have access to uucp)  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From POULSON at freud.sbs.utah.edu Tue Jun 6 06:52:25 2006 From: POULSON at freud.sbs.utah.edu (KIMBERLY POULSON) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: MEMORANDUM TO: Faculty, Graduate Students, Auxiliary Faculty and other interested parties FROM: William Johnston/Charlie Shimp TOPIC: William F. Prokasy Lecture This year's William F. Prokasy Lecturer will be Dr. Irving Biederman. Dr. Biederman will speak on Tuesday, May 11, at 5:00 p.m. in BEH SCI 110. The title is "Shape Recognition in Mind and Brain." Dr. Irving Biederman is the William M. Keck Professor of Cognitive Neuroscience at the University of Southern California, where he is a member of the Departments of Psychology, Computer Science, and Neuroscience and Head of the Cognitive and Behavioral Neuroscience Program. Professor Biederman has proposed a theory of real-time human object recognition that posits that objects and scenes are represented as an arrangement of simple volumetric primitives, termed geons. This theory has undergone extensive assessment in psychophysical experiments. Recently, he has employed neural network models to provide a more biologically based version of the geon-assemblage theory which is currently undergoing tests through single unit recording experiments in monkeys and the study of the impairment of object recognition in patients with a variety of neurological symptoms. Prior to his recent appointment at USC, Dr. Biederman was the Fesler-Lampert Professor of Artificial Intelligence and Cognitive Science at the University of Minnesota. He has been a member of panels for the National Science Foundation, National Research Council, and the Air Force Office of Scientific Research, where he served as the first Program Manager (consulting) for the Cognitive Science Program. Please put these dates on your calendars. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From sgoss at ulb.ac.be Tue Jun 6 06:52:25 2006 From: sgoss at ulb.ac.be (Goss Simon) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: no subject (file transmission) Message-ID: Subject European Conference on Artificial Life (Brussels) Date: Tue, 4 May 93 11:15:07 MET Dear ECAL Participant, (This is an e-mail copy of a letter we're sending by post, and therefore does not include maps). Please find enclosed the programme for ECAL '93, as well as maps and instructions on how to get to the hotels and the conference site. You are all invited to an informal "Welcome to ECAL" drink/check-in session, at the Falstaff cafe, Sunday May 23rd, from 8-10 in the evening. The Falstaff is near La Bourse, at the centre of Bruxelles (see enclosed instructions). If all goes as planned, those of you who reserved their hotel room through us should find a copy of the informal proceedings either waiting for them in their rooms or at the reception desk. Those of you who have made separate arrangements will receive their proceedings either at the Falstaff or at the conference check-in on Monday morning. We hope all goes well with your travel arrangements, and look forward to seeing you at the Falstaff. Yours sincerely Simon Goss (for the organising committee) ________________________________________________________ Program ECAL MONDAY May 24th 09.00 - Inauguration 09.15 - F. Varela : "Organism : a meshwork of selfless selves" 09.45 - G. Nicolis : "Non linear physics and the evolution of complex systems" 10.15 - Coffee Aspects of Autonomous Behaviour 10.30 - M. Tilden "Robot jurassic park : primitives to predators" 11.10 - S. Nolfi, D. Parisi "Auto-teaching : networks that develop their own teaching inputs" 11.40 - B. Webb "Modelling biological behaviour or 'dumb animals and stupid robots'" 12.10 - Lunch Patterns & Rhythms 10.40 - E. Presnov, Z. Agur "Origin and breakdown of synchrony of the cell cycles in early development" 11.10 - A. Hjemfelt, F.W. Schneider, J. Ross "Parallel computation in coupled chemical kinetic systems" 11.40 - P. de Kepper, P. Rudovics, J.J. Perraud, E. Dulos "Experiments on Turing structures" 12.20 - M. Braune, H. Engel "Light-sensitive Belousov-Zhabotinsky reaction - a suitable tool for studies of nonlinear wave dynamics in active media" 12.50 - V.S. Zykov, S.C. Muller "Boundary layer kinematical model of autowave patterns in a two-component reaction-diffusion system" 13.20 - Lunch Aspects of Autonomous Behaviour continued 14.20 - H. Hendriks-Jansen "Natural kinds, autonomous robots and history of use" 14.50 - R. Pfeifer "Studying emotions : fungus eaters" 15.20 - J.C. Rutkowska "Ontogenetic constraints on scaling-up sensory-motor systems" 15.50 - G. Deffuant, E. Monneret "Morphodynamic networks : the example of adaptive fibres" 16.20 - Coffee Patterns and Rhythms continued 14.20 - J. Kosek, P. Pinkas, M. Marek "Spatiotemporal patterns in cyclic arrays with and without time delay" 14.50 - J.J. Perraud "The early years of excitable media from electro-physiology to physical chemistry" 15.20 - D. Thieffry, R. Thomas "Logical synthesis of regulatory models" 15.50 - E. Av-Ron, H. Parnas, L.A. 
Segel "Modelling bursting neurons of the Lobster cardiac network" 16.20 - Coffee Evolutionary Mechanisms 16.50 - T. Ray "Evolution and ecology of digital organisms" 17.30 - R. Hightower, S. Forrest, A.S. Perelson "The evolution of secondary organization in immune system gene libraries" 18.00 - R. Davidge "Looping as a means to survival : playing Russian Roulette in a harsh environment" Patterns and Rhythms Continued 17.00 - G.R. Welch "The computational machinery of the living cell" 17.30 - S. Douady, Y. Couder A physical investigation of the iterative process of botanical growth" 18.00 - V. Gundlach, L. Demetrius "Mutation and selection in non linear growth processes" 18.30 - Beer and Sandwiches 19.30 - C. Langton Title to be communicated 20.10 - D. Lestel, L. Bec, J.-L. Lemoigne "Visible characteristics of living systems: esthetics and artificial life" 20.40 - Discussion on philosophical issues. 22.00 - Close --------------------------------------------------------- TUESDAY May 25th Origins of life & molecular evolution 09.00 - P. Schuster "Sequences and shapes of biopolymers" 09.40 - P.L. Luisi, P.A. Vonmont-Bachmann, M. Fresta "Chemical autopoiesis : Self-replicating micelles and vesicles" 10.10 - C. Biebricher "Requirements for template activity of RNA in RNA replication" 10.50 - Coffee 11.10 - M.A. Huynen, P. Hogeweg "Evolutionary dynamics and the relation between RNA structure and RNA landscapes" 11.40 - W. Fontana "Constructive dynamical systems" 12.20 - Lunch Dynamics of Human Societies 09.00 - B. Huberman, N.S. Glance "Social dilemnas and fluid organizations" 09.40 - T.A. Brown "Political life on a lattice" 10.10 - A. Meier-Koll, E. Bohl "Time-structure analysis in a village community of Columbian Indians" 10.40 - Coffee Multi-Robot Systems 11.10 - R. Beckers, J.L. Deneubourg, S. Goss, R. King "Self-organised groups of interacting Robots" 11.40 - C. Numaoka "Collective alteration of strategic type" 12.10 - S. Rasmussen "Engineering based on self-organisation" 12.40 - T. Shibata, T. Fukuda "Coordinative balancing in evolutionary multi-agent-robot System using genetic algorithm" 13.10 - Lunch 14.00 - 18.00 Poster & Demonstration Session (Robots, Videos, Chemical reactions, ...) 14.10 - A. Collie "A tele-operated robot with local autonomy" 14.40 - F. Hess "Moving sound creatures" 16.00 - Coffee 18.00 - Talk by Professor I. Prigogine 18.45 - Cocktail 20.00 - Banquet ________________________________________________________________________________ WEDNESDAY May 26th Collective Intelligence 09.00 - N.R. Franks "Limited rationality in the organization of societies of ants, robots and men" 09.40 - B. Corbara, A. Drogoul, D. Fresneau, S. Lalande "Simulating the sociogenesis process in ant colonies with manta" 10.10 - J.L. Deneubourg "In search of simplicity" 10.40 - Coffee 11.10 - L. Edelstein Keshet "Trail following as an adaptable mechanism for population behaviour" 11.40 - I. Chase "Hierarchies in animal and human societies" 12.10 - H. Gutowitz "Complexity-seeking ants" 12.40 - Lunch Sensory and Motor Activity 09.00 - H. Cruse, G. Cymbalyuk, J. Dean "A walking machine using coordinating mechanisms of three different animals : stick insect, cray fish and cat" 09.40 - D.E. Brunn, J. Dean, J. Schmitz "Simple rules governing leg placement by the stick insect during walking" 10.10 - D. Cliff, P. Husbands, I. Harvey "Analysis of evolved sensory-motor controllers" 10.40 - Coffee Ecosystems & Evolution 11.00 - M. Nowak "Evolutionary and spatial dynamics of the prisoner's dilemma" 11.40 - K. 
Lindgren, M.G. Nordahl "Evolutionary dynamics of spatial games" 12.10 - P.M. Todd "Artificial death" 12.40 - G. Weisbuch, G. Duchateau "Emergence of mutualism : application of a differential model to endosymbiosis" 13.10 - Lunch Collective Intelligence continued 14.20 - M.J. Mataric, M.J. Marjanovic "Synthesizing complex behaviors by composing simple primitives" 14.50 - O. Miramontes, R.V. Sole, B.C. Goodwin "Antichaos in ants : the excitability metaphor at two hierarchical levels" 15.20 - S. Camazine "Collective intelligence in insect societies by means of self-organization" 16.00 - Coffee 16.30 - S. Focardi "Dynamics of mammal groups" 17.00 - A. Stevens Modelling and simulations of the gliding and aggregation of myxobacteria" 17.30 - O. Steinbock, F. Siegert, C.J. Weijer, S.C. Muller "Rotating cell motion and wave propagation during the developmental cycle of dictyostelium" Ecosystems and Evolution continued 14.20 - S. Kauffman Title to be communicated 14.50 - M. Bedau, A. Bahm "The Evolution of diversity" Theoretical Immunology 15.20 - J. Stewart "The immune system : emergent self-assertion in an autonomous network" 15.50 - Coffee 16.20 - J. Urbain "The dynamics of the immune response" 17.00 - Behn, K. Lippert, C. Muller, L. van Hemmen, B. Sulzer. "Memory in the immune system : synergy of different strategies" 18.00 - Closing Remarks 20.00 - Epistemological Conference (in french, open to the public, room 2215, Campus Solbosch) "La Vie Artificielle : une Vie en dehors des Vivants. Utopie ou Realite?" Intervenants : G. Nicolis, F. Varela, I. Stengers, J. De Rosnay. 22.00 - Close __________________________________________________________________________ Poster Session (Tuesday May 25th, 14.00 - 18.00) Patterns & Rhythms M. Colding-Jorgensen "Chaotic signal processing in nerve cells" M. Dumont, G. Cheron, E. Godaux "Non-linear forecasting of cats eye movement time series" M. Gomez-Gesteira, A. P. Munuzuri, V. P. Munuzuri, V. Perez-Villar "Vortex drift induced by an electric field in excitable media" I. Grabec "Self-organization of formal neurons described by the second maximum entropy principle" A. Hunding "Simulation of 3 dimensional turing patterns related to early biological morphogenesis" Lj. Kolar-Anic, Dj. Misljenovic, S. Anic "Mechanism of the Bray-Liebhafsky reaction : effect of the reduction of iodate ion by hydrogen peroxide" V. Krinsky, K. Aglaze, L. Budriene, G. Ivanitsky, V. Shakhbazyan, M. Tsyganov "Wave mechanisms of pattern formation in microbial populations" J. Luck, H.B. Luck "Can phyllotaxis be controlled by a cellular program ?" E.D. Lumer, B.A. Huberman "Binding hierarchies : a basis for dynamic perceptual grouping" O.C. Martin, J.C. Letelier "Hebbian neural networks which topographically self-organize" J. Maselko "Multiplicity of stationary patterns in an array of chemical oscillators" A.S. Mikhailov, D. Meinkohn "Self-motion in physico-chemical systems far from thermal equilibrium" A.F. Munster, D. Snita, P. Hasal, M. Marek "Spatial and spatiotemporal patterns in the ionic brusselator" P. Pelce "Geometrical dynamics for morphogenesis" A. Wuensche "Memory far from equilibrium" Origins of Life & Molecular Evolution Y. Almirantis, S. Papageorgiou "Long or short range correlations in DNA sequences ?" R. Costalat, J.-P. Morillon, J. Burger "Effect of self-association on the stability of metabolic units" J. Putnam "A primordial soup environment" Epistemological Issues E.W. Bonabeau "On the appeals and dangers of synthetic reductionism" V. 
Calenbuhr "Intelligence under the viewpoint of the concepts of complementary and autopoiesis" D. Mange "Wetware as a bridge between computer engineering and biology" B. Mc Mullin "What is a universal constructor ?" Aspects of Autonomous Behaviour D. Cliff, S. Bullock "Adding 'foveal vision' to Wilson's animat" O. Holland, M. Snaith "Generalization, world modelling, and optimal choice : improving reinforcement learning in real robots" J.J. Merelo, A. Moreno, A. Etxeberria "Artificial organisms with adaptive sensors" F. Mondada, P.F. Verschure "Modelling system-environment interaction : the complementary roles of simulation and real world artifacts" C. Thornton "Statistical factors in behaviour learning" B. Yamauchi, R. Beer "Escaping static and cyclic behaviour in autonomous agents" N. Magome, Y. Yonezawa, K. Yoshikawa "Self-excitable molecula-assembly towards the development of neuro-computer, intelligent sensor and mechanochemical transducer" Evolutionary Mechanisms H. de Garis "Evolving a replicator" G. Kampis "Coevolution in the computer : the necessity and use of distributed code systems" Dynamics of Human Societies S. Margarita, A. Beltratti "Co-evolution of trading strategies in an on-screen stock market" Multi-Robot Systems A. Ali Cherif "Collective behaviour for a micro-colony of robots" S. Goss, J.-L. Deneubourg, R. Beckers, J.-L. Henrotte "Recipes of collective movement" T. Ueyama, T. Fukuda "Structure organization of cellular robot based on genetic information" Collective Intelligence G. De Schutter, E. Nuyts "Birds use self-organized social behaviours to regulate their daily dispersal over wide areas : evidences from gull roosts" M.M. Millonas "Swarm field dynamics and functional morphogenesis" Z. Penzes, I. Karsai "Round shape combs produced by stigmergic scripts in social wasp" P.-Y. Quenette "Collective vigilance as an example of self-organisation : a precise study on the wild boar (Sus scrofa)" R.W. Schmieder "A knowledge tracking algorithm for generating collective behaviour in indivual-based populations" T.R. Stickland, C.M.N. Tofts, N.R. Franks "Algorithms & collective decisions in ants : information exchange, numbers of individuals and search limits" Sensory-Motor Activity S. Giszter "Modelling spinal organization of motor behaviors in the frog" P. Grandguillaume "A new model of visual processing based on space-time coupling in the retina" S. Mikami, H. Tano, Y. Kakazu "An autonomous legged robot that learns to walk through simulated evolution" U. Nehmzov, B. McGonigle "Robot navigation by light" Ecosystems & Evolution G. Baier, J.S. Thomsen, E. Mosekilde "Chaotic hierarchy in a model of competing populations" D. Floreano "Patterns of interactions in shared environments" A. Garliauskas "Theoretical and practical investigations of lake biopopulations" T. Ikegami "Ecology of evolutionary game strategies" T. Kim, K. Stuber "Patterns of cluster formation and evolutionary activity in evolving L-systems" C.C. Maley "The effect of dispersal on the evolution of artificial parasites" B.V. Williams, D.G. Bounds "Learning and evolution in populations of backprop networks" Theoretical Immunology N. Cerf "Fixed point stability in idiotypic immune networks" E. Muraille, M. Kauffman "The role of antigen presentation for the TH1 versus TH2 immune response" E. Vargas-Madrazo, J.C. Almagro, F. Lara-Ochoa, M.A. 
Jimenez-Montano "Amino acids patterns in the recognition site of immunoglobulins" ___________________________________________________________________ How to get from the airport to your hotel 1. Taxi: this is by far the simplest and quickest, but can cost 800-1200 BF ($25-$40). 2. Train link: there is a train station underneath the airport terminal (Zaventem) which takes you to Brussels 3 times per hour (xx.09, xx.24 and xx.46, trip takes 20 min), and costs 80BF ($2). There are 3 stations at Brussels, Nord, Central and Midi. Bruxelles Nord is nearest Hotels President WTC, Palace, and Vendome. Bruxelles Central is nearest Hotels Atlas, Arcade Ste. Catherine, Opera, Orion and Sabina. Bruxelles Midi is nearest Hotel de Paris. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: have, or on other factors, you might care to walk, take a taxi (count 200-300BF), or take the metro. At Bruxelles Nord you might be better advised to take a taxi to the hotel or a tram (platform in the station) to Place Rogier, rather than walk, as this is not the most attractive or safest part of Brussels. If you wish to take the metro (see comments on how to get from the hotel to the conference about buying metro tickets), then: For Hotel President WTC, the hotel is close to the Gare du Nord. On your map, where the street names are marked, it's just off square E1, to the North, on Boulevard Emile Jacqmain (no 180), marked in green and yellow. You can see the World Trade Centre marked in black on square E1. The hotel is not quite opposite, being a bit off the map to the North, further along the Boulevard E. Jacqmain. (To get to the centre, they organise a shuttle in the evenings. It's 15-20 minutes walk.) For Hotels Palace and Vendome, get off at Bruxelles Nord and take the tram (Pre-Metro, in the station) to Place Rogier (just 1 stop). For Hotels President WTC, Palace and Vendome, get off at Bruxelles Nord and take the tram (Pre-Metro, in the station) to Place Rogier (just 1 stop). For Hotels Atlas, Arcade Ste. Catherine, and Orion, get off at the Gare Centrale, follow the signs to the Metro (line 1, station Gare Centrale), and take the metro to station Ste. Catherine (line 1a and 1b, direction Heysel or Bizet, both are fine, just 2 stops). For Hotel Opera get off at the Gare Centrale, follow the signs to the Metro (line 1, station Gare Centrale), and take the metro to station De Brouckere (line 1a and 1b, direction Heysel or Bizet, both are fine, just 1 stop). For Hotel Sabina get off at the Gare Centrale, follow the signs to the Metro (line 1, station Gare Centrale), and take the metro to station Arts-Loi (line 1a and 1b, direction Hermann-Debroux or Stockel, both are fine, just 2 stops). There change to line 2, direction Simonis, and get off att Place Madou (the next stop). Hotel de Paris is just near the railway station Bruxelles-Midi, so its not worth taking a tram. The enclosed colour tourist map has nearly each hotel marked with a black number. How to get to the Falstaff Cafe for the Sunday night reception See enclosed map. The Falstaff (rue H. Maus, 17) is one of Brussels best known cafes (always full), art-deco style, and is right next to La Bourse (the Stock Exchange), which is the centre point of Bruxelles. You can't miss it! We have reserved a room there, which will be signposted. How to get from your hotel to the conference By Metro: By far the easiest way is to take the Metro. 
Buy a 10 trip card at the metro stations or from most newsagents ("Je voudrais une carte de tram s'il vous plait", 290 BF). On the way to the platform insert it in one of the stamping machines in your path, and you can travel anywhere for 1 hour, changing bus, tram or metro without restriction. The University Campus (Campus Plaine, ULB) is directly at station Delta on Line 1b (Direction Hermann-Debroux). It has a common section with line 1a, so don't get on the wrong metro (Direction Stockel). The destination of the metro is indicated on the platform and also inside each car. The line splits at Merode, so if you do get on the one going to Stockel you can always get off at or before Merode and wait on the same platform for the next one going to Hermann-Debroux. Returning to the centre you don't have this problem, all metros direction Heysel are OK. Hotel President WTC: To get to the conference, you can walk to the Place Rogier, and then follow the instructions below for Hotels Palce and Vendome, or else take the pre-metro from the Gare du Nord and change at Place de Brouckere for line Ia (direction Herman- Debroux), and get off at station Delta which is directly at the campus. We will also organise a bus service to and from the conference at about 08.30 in the morning, and after the end of each conference day (hours to be announced).. Hotels Palace and Vendome are nearest Metro station Rogier, line 2. Take direction Bruxelles Midi, and change to line 1a (direction Hermann-Debroux) at station Arts- Loi. Going back to your hotel change at Arts -Loi to line 2 (direction Simonis). Hotels Atlas, Arcade Ste. Catherine, and Orion are nearest Metro station Ste Catherine (line 1). Hotel Opera is nearest Metro station De Brouckere (line 1) Hotel Sabina is nearest Metro station Madou line 2. Take direction Bruxelles Midi, and change to line 1a (direction Hermann-Debroux) at station Arts-Loi. Going back to your hotel change at Arts -Loi to line 2 (direction Simonis). Hotel de Paris is nearest the metro Bruxelles-Midi, line 2. Take direction Simonis, and change to line 1a (direction Hermann-Debroux) at station Arts-Loi. Going back to your hotel change at Arts -Loi to line 2 (direction Bruxelles-Midi. At Delta, the ULB Campus Plaine is signposted (the corridors and escalators takew you right on Campus), and we will have placed ECAL signposts along your route. The conference is at the Forum (see enclosed map). By car: If heavy rush-hour traffic doesn't scare you, follow the signs to Namur (if you can find any), or take la Rue du Trone and Avenue de la Couronne, which will get you close. The Campus is just at the start of the Bruxelles-Namur-Luxembourg autoroute E411. Once you find the Campus, there is a parking lot reserved for you at Access 4. Do not confuse the ULB (Universite Libre de Bruxelles, nearer the autoroute) with its neighbouring cousin on the same Campus, the VUB (Vrij Universiteit Brussel, nearer the centre). The conference is just 30m from there (at the Forum) and will be signposted. How to get from the conference to the Banquet The Banquet is at the other ULB Campus, Campus Solbosch (La Salle de Marbre), 800-1000m from Campus Plaine (see enclosed map). Cross the railway bridge, and go straight, over the roundabout at the Ixelles Cemetry, and still staright on up Avenue de l'Universite to the Campus Plaine, down the middle of the Campus Solbosch (Avenue Paul Heger), and follow the ECAL Banquet signs. In any case we will escort you. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From jose at learning.siemens.com Tue Jun 6 06:52:25 2006 From: jose at learning.siemens.com (Steve Hanson) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NIPS5 Oversight Message-ID: NIPS-5 attendees: Due to an oversight we regret the inadvertent exclusion of 3 papers from the recent NIPS-5 volume. These papers were: Mark Plutowski, Garrison Cottrell and Halbert White: Learning Mackey-Glass from 25 examples, Plus or Minus 2 Yehuda Salu: Classification of Multi-Spectral Pixels by the Binary Diamond Neural Network A. C. Tsoi, D.S.C. So and A. Sergejew: Classification of Electroencephalograms using Artificial Neural Networks We are writing this note to (1) acknowledge our error (2) point out where you can obtain a present copy of the author's papers and (3) inform you that they will appear in their existing form or an updated form in NIPS Vol. 6. Presently, Morgan Kaufmann will be sending a bundle of the 3 formatted papers to all NIPS-5 attendees, these will be marked as NIPS-5 Addendum. You should also be able to retrieve an official copy from NEUROPROSE archive. Again, we apologize for the oversight to the authors. Stephen J. Hanson, General Chair Jack Cowan, Program Chair C. Lee Giles, Publications Chair #!/bin/sh ######################################################################## # usage: ohio # # A Script to get, uncompress, and print postscript # files from the neuroprose directory on cheops.ohio-state.edu # # By Tony Plate & Jordan Pollack ######################################################################## if [ "$1" = "" ] ; then echo usage: $0 " " echo echo The filename must be exactly as it is in the archive, if your echo file is not found the first time, look in the file \"ftp.log\" echo for a list of files in the archive. echo echo The printerflags are used for the optional lpr command that echo is executed after the file is retrieved. A common use would echo be to use -P to specify a particular postscript printer. exit fi ######################################################################## # set up script for ftp ######################################################################## cat > .ftp.script < ftp.log rm -f .ftp.script if [ ! -f /tmp/$1 ] ; then echo Failed to get file - please inspect ftp.log for list of available files exit fi ######################################################################## # Uncompress if necessary ######################################################################## echo Retrieved /tmp/$1 case $1 in *.Z) echo Uncompressing /tmp/$1 uncompress /tmp/$1 FILE=`basename $1 .Z` ;; *) FILE=$1 esac ######################################################################## # query to print file ######################################################################## echo -n "Send /tmp/$FILE to 'lpr $2' (y or n)? 
" read x case $x in [yY]*) echo Printing /tmp/$FILE lpr $2 /tmp/$FILE ;; esac echo File left in /tmp/$FILE From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: October 19-20, 1993 [This Workshop was previously scheduled for April 1993] Program Committee: Michael Arbib (Organizer), George Bekey, Damian Lyons, Paul Rosenbloom, and Ron Sun To design complex technological systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence and perceptual robotics which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks. We seek (both at the level of schema asemblages, and in terms of "modular" neural networks) a distributed model of computation, supporting many concurrent activities for recognition of objects, and the planning and control of different activities. The use, representation, and recall of knowledge is mediated through the activity of a network of interacting computing agents which between them provide processes for going from a particular situation and a particular structure of goals and tasks to a suitable course of action. This action may involve passing of messages, changes of state, instantiation to add new schema instances to the network, deinstantiation to remove instances, and may involve self-modification and self-organization. Schemas provide a form of knowledge representation which differs from frames and scripts by being of a finer granularity. Schema theory is generative: schemas may well be linkedwwww to others to provide yet more comprehensive schemas, whereas frames tend to "build in" from the overall framework. The analysis of interacting computing agents (the schema instances) is intermediate between the overall specification of some behavior and the neural networks that subserve it. The Workshop will focus on different facets of this multi-level methodology. While the emphasis will be on technological systems, papers will also be accepted on biological and cognitive systems. 
Submission of Papers A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models: Schema Theory as a description language for neural networks Modular neural networks Alternative paradigms for modeling symbolic and subsymbolic knowledge Hierarchical and distributed representations: adaptation and coding Linking DAI to Neural Networks to Hybrid Architecture Formal Theories of Schemas Hybrid approaches to integrating planning & reaction Hybrid approaches to learning Hybrid approaches to commonsense reasoning by integrating neural networks and rule-based reasoning (using schemas for the integration) Programming Languages for Schemas and Neural Networks Schema Theory Applied in Cognitive Psychology, Linguistics, and Neuroscience Prospective contributors should send a five-page extended abstract, including figures with informative captions and full references - a hard copy, either by regular mail or fax - by August 15, 1993 to Michael Arbib, Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863, arbib at pollux.usc.edu]. Please include your full address, including fax and email, on the paper. In accepting papers submitted in response to this Call for Papers, preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of a network of interacting schemas, and where some but not necessarily all of the schemas are implemented in neural networks. Papers which present a single neural network for pattern recognition ("perceptual schema") or pattern generation ("motor schema") will not be accepted. It is the development of a methodology to analyze the interaction of multiple functional units that constitutes the distinctive thrust of this Workshop. Notification of acceptance or rejection will be sent by email no later than September 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but (revised versions) of accepted abstracts received prior to October 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. A number of papers have already been accepted for the Workshop. 
These include the following:

Arbib: Schemas and Neural Networks: A Tutorial Introduction to Integrating Symbolic and Subsymbolic Approaches to Cooperative Computation
Arkin: Reactive Schema-based Robotic Systems: Principles and Practice
Heenskerk and Keijzer: A Real-time Neural Implementation of a Schema Driven Toy-Car
Leow and Miikkulainen: Representing and Learning Visual Schemas in Neural Networks for Scene Analysis
Lyons & Hendriks: Describing and analysing robot behavior with schema theory
Murphy, Lyons & Hendriks: Visually Guided Multi-Fingered Robot Hand Grasping as Defined by Schemas and a Reactive System
Sun: Neural Schemas and Connectionist Logic: A Synthesis of the Symbolic and the Subsymbolic
Weitzenfeld: Hierarchy, Composition, Heterogeneity, and Multi-granularity in Concurrent Object-Oriented Programming for Schemas and Neural Networks
Wilson & Hendler: Neural Network Software Modules

Bonus Event: The CNE Research Review: Monday, October 18, 1993
The CNE Review will present a day-long sampling of CNE research, with talks by faculty and students, as well as demos of hardware and software. Special attention will be paid to talks on, and demos in, our new Autonomous Robotics Lab and Neuro-Optical Computing Lab. Fully paid registrants of the Workshop are entitled to attend the CNE Review at no extra charge.

Registration
The registration fee of $150 ($40 for qualified students who include a "certificate of student status" from their advisor) includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of October 18th. Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $150 ($40 for students and CNE members) together with the following information to Paulina Tagle, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA.
---------------------------------------------------
SCHEMAS AND NEURAL NETWORKS
Center for Neural Engineering, USC
October 19-20, 1993
NAME: ___________________________________________
ADDRESS: _________________________________________
PHONE NO.: _______________ FAX: ___________________
EMAIL: ___________________________________________
I intend to submit a paper: YES [ ] NO [ ]
I wish to be registered for the CNE Research Review: YES [ ] NO [ ]

Accommodation
Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, fellous at pollux.usc.edu.

From jbeard at aip.org Tue Jun 6 06:52:25 2006 From: jbeard at aip.org (jonathan_beard) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: bee learning in Nature Message-ID:

entomology

Findings about bees' brains could shed light on how people learn

URBANA, Ill. - Can honey bees help scientists understand how adult humans learn? Researchers at the University of Illinois are convinced they can. In the July 15 issue of the journal Nature, they describe structural changes that occur in the brains of bees when the insects leave their domestic chores to tackle their most challenging and complex task - foraging for pollen and nectar.
As part of a doctoral thesis, neuroscience graduate student Ginger S. Withers focused on the "mushroom bodies," a region of the insect brain so named because it appears mushroom-shaped when viewed in cross-section. The region is closely associated with learning and memory. Withers used quantitative neuroanatomical methods to study sections of bee brains to show that the mushroom bodies are reorganized when a bee becomes a forager. Although a honey bee typically switches from hive-keeping tasks, such as rearing younger sisters and caring for the queen, to foraging at about three weeks of age, the brain changes are not simply due to aging. In a key experiment, young honey bees were forced to become foragers by removing older bees from the colony. The mushroom bodies of the precocious foragers, who were only about one week old, mirrored those of normal-aged foragers. The findings suggest that nerve cells in the mushroom bodies receive more informational inputs per cell as the bee learns to forage. In order to be a successful forager, a bee must learn how to navigate to and from its hive and how to collect food efficiently from many different types of flowers. The implications for neuroscience go far beyond the beehive, said the article's co-authors, U. of I. insect biologists Susan E. Fahrbach and Gene E. Robinson. There could be application to human studies, they said, because the structure of bee brains is similar to - but much simpler than - human brains. Fahrbach, whose research has focused on the impact of hormones on the nervous system, was drawn to the honey bee by its sophisticated behavior, small brain and power of concentration. "Honey bees offer an exceptionally powerful model for the study of changes in the brain related to naturally occurring changes in behavior, because, once a bee becomes a forager, it does nothing else," she said. "Because the behavioral shifts are so complete, the changes in brain structure that accompany the behavioral transitions must be related to the performance of the new observed behavior." Robinson, who is director of the U. of I.'s Bee Research Facility and who has previously studied other physiological and genetic aspects of bee behavior, agrees: "This discovery opens a new area of research on the relationship between brain and behavioral plasticity. One fundamental question this research raises is 'which comes first?' Do changes in behavior lead to changes in brain structure? Or do the changes in brain structure occur first, in preparation for the changes in behavior?" As researchers pursue the changes in brain cells that form the underpinnings of learning, the U. of I. scientists say the combination of neuroscience and entomology may yield sweet rewards. Contact: Jim Barlow University of Illinois News Bureau phone: 217-333-5802 fax: 217-244-0161 Compuserve: 72002,630 Internet: jbarlow at ux1.cso.uiuc.edu  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: vs. "randomized" decision rules, as they are called in decision theory ("stochastic learning algorithm" means something different to me, but maybe I'm just misinterpreting your posting). Picking an opinion from a pool of experts randomly is clearly not a particularly good randomized decision rule in most cases. However, there are cases in which properly chosen randomized decision rules are important (any good introduction on Bayesian statistics should discuss this). 
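The adversarial case can be made concrete with a matching-pennies toy example; the setup below (a predictor guessing a bit chosen by an adversary who knows the predictor's rule) is an illustration added here, not part of the original discussion.

# Toy illustration: against an adversary who knows your rule, every deterministic
# guess can be made wrong all the time, while the uniform randomized rule
# guarantees 50% accuracy - strictly better in the worst case.
def worst_case_accuracy(p_guess_one):
    # the adversary picks the bit that minimizes the predictor's expected accuracy
    return min(p_guess_one if bit == 1 else 1.0 - p_guess_one for bit in (0, 1))

for p in (0.0, 1.0, 0.5):          # two deterministic rules, then the fair coin
    print("P(guess=1) =", p, " worst-case accuracy =", worst_case_accuracy(p))
# prints 0.0 for both deterministic rules and 0.5 for the randomized one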
Unless there is an intelligent adversary involved, such cases are probably mostly of theoretical interest, but nonetheless, a randomized decision rule can be "better" than any deterministic one. Thomas.  From avner at elect1.weizmann.ac.il Tue Jun 6 06:52:25 2006 From: avner at elect1.weizmann.ac.il (Priel Avner) Date: Sun, 19 sep 93 Subject: New paper in neuroprose Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/priel.2_layered_perc.ps.Z The file priel.2_layered_perc.ps.Z is now available for copying from the Neuroprose archive. This is a 41-page long paper. This paper was submitted for publication in " Physical Review E ". A limited number of Hardcopies (10) is reserved for those who can not use the FTP server. Computational Capabilities of Restricted Two Layered Perceptrons by Avner Priel, Marcelo Blatt, Tal Grossman and Eytan Domany Electronics Department, The Weizmann Institute of Science, Rehovot 76100, Israel. and Ido Kanter Department of Physics, Bar Ilan University, 52900 Ramat Gan, Israel. Abstract: We study the extent to which fixing the second layer weights reduces the capacity and generalization ability of a two-layer perceptron. Architectures with $N$ inputs, $K$ hidden units and a single output are considered, with both overlapping and non-overlapping receptive fields. We obtain from simulations one measure of the strength of a network - its critical capacity, $\alpha_c$. Using the ansatz $\tau_{med} \propto (\alpha_c - \alpha)^{-2}$ to describe the manner in which the median learning time diverges as $\alpha_c$ is approached, we estimate $\alpha_c$ in a manner that does not depend on arbitrary impatience parameters. The $CHIR$ learning algorithm is used in our simulations. For $K=3$ and overlapping receptive fields we show that the general machine is equivalent to the Committee with the same architecture. For $K=5$ and the same connectivity the general machine is the union of four distinct networks with fixed second layer weights, of which the Committee is the one with the highest $\alpha_c$. Since the capacity of the union of a finite set of machines equals that of the strongest constituent, the capacity of the general machine with $K=5$ equals that of the Committee. We investigated the internal representations used by different machines, and found that high correlations between the hidden units and the output reduce the capacity. Finally we studied the Boolean functions that can be realized by networks with fixed second layer weights. We discovered that two different machines implement two completely distinct sets of Boolean functions.  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: <01H4KZLPHYCY8WYV8X@buenga.bu.edu> PLEASE POST. Subject: NEUROSCIENCE/BIOMEDICAL ENGINEERING FACULTY POSITION BU BOSTON UNIVERSITY, Department of Biomedical Engineering has openings for SEVERAL tenure-track faculty positions at the junior level. Coumputational Vision, Medical Image Processing, Neuroengineering, are among the areas of interest. For details see the add in Science-- October 22 1993. Applicants should submit a CV, a one page summary of research interests, and names and addresses of at least three references to: Herbert Voigt Ph.D. 
Chairman Department of Biomedical Engineering College of Engineering Boston University 44 Cummington str Boston, Ma 02215-2407 Consideration will be given to applicants who already hold a PHD in a field of engineering or related field (e.g. physics) and have had at least one year of postdoctoral experience.The target starting date for positions is September 1, 1994. Considerations of applications will begin on November 1, 1993 and will continue until the positions are filled.  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: following. Note that some locations may have "firewall" that prevents Xwindows' applications from running. If this procedure fails, you may have to find a machine outside your firewall or use the character-based interface (csb). + xhost +128.96.58.4 (Xwindows display permission for superbook.bellcore.com) + telnet 128.96.58.4 + (login) + TERM=xterms (it is important to use "xterms") + Figure out and enter your machine's IP address(in /etc/hosts or ask an administrator) + gxsb (Xwindows version of SuperBook) 3.1 Overview of Xwindows SuperBook Commands When you login to SuperBook, you will obtain a Library Window. For the IWANNT proceedings, you should select the IWANNT shelf, highlight "Applications of Neural Networks to Telecommunications" and click "Open". The Text Window should be placed on the right side of the screen and the Table-of-Contents Window should be placed on the left. These windows can be resized. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by clicking in the TOC. If the entry contains subsections, it will be marked with a "+". Double-clicking on those entries expands them. Clicking on an expanded entry closes it. Text Window: The text can be scrolled one-line-at-a-time with the Scroll Bar Arrows or a page-at-a-time by clicking on the spaces immediately above or below the Slider. Graphics: Figures, tables, and some equations are presented as bitmaps. The graphics can be viewed by clicking on the blue icons at the right side of the text which pops up a bitmap-viewer. Graphics can be closed by clicking on their "close" button. Some multilemdia applications have been included, but these may not work correctly across the Internet. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* You can also search by clicking on a term in the text (to clear that search and select another, do ctrl-click). Annotations: Annotations are indicated with a pencil icon and can be read by clicking on the icon. Annotations can be created (with conference-attendee logins) by clicking in the text with the left button and then typing in the annotation window. Exiting: Pull down the FILE menu on the Library Window to "QUIT", and release. 4. Remote Access via character-based interface From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: + telnet 128.96.58.4 (for superbook.bellcore.com) + (login) + TERM=(termtype) (use "xterms" for an Xwindow inside a firewall) + csb 4.1 Overview of csb SuperBook Commands The character-based interface resembles emacs. You first enter Library mode. 
After selecting a shelf (make sure you are on the IWANNT shelf) and a book on that shelf (e.g., Applications of Neural Networks to Telecommunications), the screen is split laterally into two parts. The upper window is the TOC and the lower window has the text. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by typing the number beside them in the TOC. If the entry contains subsections, it will be marked with a "+". Text Window: The text can be scrolled one-line-at-a-time with the u/d keys or a page-at-a-time with the U/D keys. Graphics: Most bitmapped graphics will not be available. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* Searches are also possible by posting the cursor over a word and hitting RET. Annotations: Annotations are indicated with an A on the right edge of the screen. These can be read by entering an A on the line on which they are presented. Annotations can be created (given correct permissions) by entering A on any line. Exiting: Enter "Q" From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: following. Note that some locations may have "firewall" that prevents Xwindows' applications from running. If this procedure fails, you may have to find a machine outside your firewall or use the character-based interface (csb). + xhost +128.96.58.4 (Xwindows display permission for superbook.bellcore.com) + telnet 128.96.58.4 + (login) + TERM=xterms (it is important to use "xterms") + enter your email address + Figure out and enter your machine's IP address (in /etc/hosts or ask an administrator) + gxsb (Xwindows version of SuperBook) 3.1 Overview of Xwindows SuperBook Commands When you login to SuperBook, you will obtain a Library Window. For the IWANNT proceedings, you should select the IWANNT shelf, highlight "Applications of Neural Networks to Telecommunications" and click "Open". The Text Window should be placed on the right side of the screen and the Table-of-Contents Window should be placed on the left. These windows can be resized. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by clicking in the TOC. If the entry contains subsections, it will be marked with a "+". Double-clicking on those entries expands them. Clicking on an expanded entry closes it. Text Window: The text can be scrolled one-line-at-a-time with the Scroll Bar Arrows or a page-at-a-time by clicking on the spaces immediately above or below the Slider. Graphics: Figures, tables, and some equations are presented as bitmaps. The graphics can be viewed by clicking on the blue icons at the right side of the text which pops up a bitmap-viewer. Graphics can be closed by clicking on their "close" button. Some multilemdia applications have been included, but these may not work correctly across the Internet. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* You can also search by clicking on a term in the text (to clear that search and select another, do ctrl-click). Annotations: Annotations are indicated with a pencil icon and can be read by clicking on the icon. Annotations can be created (with conference-attendee logins) by clicking in the text with the left button and then typing in the annotation window. Exiting: Pull down the FILE menu on the Library Window to "QUIT", and release. 4. 
Remote Access via character-based interface From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: + telnet 128.96.58.4 (for superbook.bellcore.com) + (login) + TERM=(termtype) (use "xterms" for an Xwindow inside a firewall) + enter your email address + csb 4.1 Overview of csb SuperBook Commands The character-based interface resembles emacs. You first enter Library mode. After selecting a shelf (make sure you are on the IWANNT shelf) and a book on that shelf (e.g., Applications of Neural Networks to Telecommunications), the screen is split laterally into two parts. The upper window is the TOC and the lower window has the text. Table-of-Contents (TOC): Books, articles, and sections within books can be selected by typing the number beside them in the TOC. If the entry contains subsections, it will be marked with a "+". Text Window: The text can be scrolled one-line-at-a-time with the u/d keys or a page-at-a-time with the U/D keys. Graphics: Most bitmapped graphics will not be available. Searching: Terms can be searched in the text by typing them into the Search-Window. Wild-card searches are possible as term* Searches are also possible by posting the cursor over a word and hitting RET. Annotations: Annotations are indicated with an A on the right edge of the screen. These can be read by entering an A on the line on which they are presented. Annotations can be created (given correct permissions) by entering A on any line. Exiting: Enter "Q" From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Hebbian learning plus deflation, but this deflation appears to be parallel, from all units to all other units, and therefore not sequential. I would think that the same happens with Leen's algorithms (ref. below, also): there is deflation but it is parallel, not sequential. References: E. Oja, H. Ogawa and J. Wangviwattana, "PCA in Fully Parallel Neural Networks", in I. Aleksander and J. Taylor (eds.), Artificial Neural Networks 2, Elsevier Science Publishers, 1992. T. Leen, "Dynamics of Learning in Recurrent Feature-Discovery Networks", in R. Lippmann, J. Moody and D. Touretzky (eds.), Advances in Neural Information Processing Systems 3, Morgan Kaufmann, 1991. Regards, Luis B. Almeida INESC Phone: +351-1-544607, +351-1-3100246 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: in answering the mail from here. ======================================================================== From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: cutting across traditional disciplines. This conference seeks to bring together researchers from the Australasian region who are actively involved in research in Complex systems for creative discussion, and to provide an introduction to specialised topics for people seeking to know further about the theoretical and practical aspects of research in Complex systems. 
The theme of the conference "Mechanism of Adaptation in Natural, Man Made and Mathematical Systems", invites us to investigate and question the dynamic processes in complex systems, and to compare our overall modelling processes of natural systems. Processes such as evolution, growth and learning are being investigated through genetic algorithms, evolutionary programming and neural networks. How well do these techniques perform and to what extent do the fit an evolutionary paradigm. It also raises the underlying question: "How does order arise in complex systems?" PAPERS: Original papers concerned with both theory and application are solicited. The areas of interest include, but are not limited to the following: * Natural and Artificial Life * Genetic algorithms * Fractals, Chaos and Non-linear Dynamics * Self-organisation * Information and Control Systems * Neural Networks * Parallel and Emergent Computation. * Bio-Complexity DATES: Second Circular Jan 31, 1994 Third Circular Feb 14, 1994 Submission of Abstracts: Mar 14, 1994 Notification of Acceptance: May 16, 1994 Receipt of Camera-ready papers: Jul 25, 1994 CONFERENCE ORGANIZATION: The conference will open with advance registration and a barbecue party on Sunday 25th September. The conference fee of $285 ($130 students) will include morning & afternoon teas and lunch on each day, the opening barbecue, and the conference dinner. Accommodation will be available on campus at the University Residential College and at nearby motels within walking distance of the University. The conference dinner is to be held on Tuesday 27th September. TUTORIALS AND WORKSHOPS: It is planned to hold one or more introductory tutorials on selected topics in complex systems on Sunday 25th September. The aim of these tutorials will be to introduce participants to fundamental concepts in complex systems and to provide them with practical experience. Tentatively the topics covered will include genetic algorithms, cellular automata, chaos, and fractals. The exact content will depend on demand. If interested in attending please indicate your preferences on the attached expression of interest. The number of places will be strictly limited by facilities available. On Wednesday advanced workshops may be held on specialised topics if there is sufficient interest. Suggestions/offers for advanced workshop topics are encouraged. There will be an additional fee for attendance at the tutorials and workshops, which will include lunch and refreshments. SUBMISSION OF PAPERS: Intending authors are requested to submit an extended abstract of about 500 words, containing a clear, concise statement of the significant results of the work. Each abstract will be assessed by two referees. All accepted papers will be published in the conference proceedings. Individual authors may be allocated to either an oral or poster presentation, but contributions in both formats will appear identically in the proceedings. Copies of the proceedings will be provided to participants at the conference in hardcopy form. LaTeX style files and other formating options will be provided to authors of accepted papers. ORGANISING COMMITTEE: Conference Chairperson: Assoc Prof. Russel Stonier, Department of Mathematics and Computing University of Central Queensland Rockhampton Mail Centre 4702 QLD Australia. Tel. 
+61 79 309487 Fax: +61 79 309729 Email: complex at ucq.edu.au Technical Chairperson: Dr Xing Huo Yu, Department of Mathematics and Computing University of Central Queensland Rockhampton Mail Centre 4702 QLD Australia. Tel. +61 79 309865 Fax: +61 79 309729 Email: complex at ucq.edu.au Members: Prof. J. Diederich, Queensland University of Technology; Prof. A.C. Tsoi, University of Queensland; Dr D. Green, Australian National University; Dr T. Bossomaier, Australian National University; Mr S. Smith, University of Central Queensland. -----------%< cut here %<------------- COMPLEX'94 Second Australian National Conference on Complex Systems EXPRESSION OF INTEREST NAME: ______________________________________________ ORGANIZATION:________________________________________ ________________________________________ ADDRESS: ________________________________________ ________________________________________ ________________________________________ TEL: /FAX: ________________________________________ E-MAIL: _________________________________________ [ ] I am interested in attending COMPLEX'94. Please send me a registration form. [ ] I am interested in PRESENTING A PAPER and/or POSTER. Tentative title: ___________________________________________________ ___________________________________________________ ___________________________________________________ [ ] I am interested in ATTENDING A TUTORIAL. Preferences: Genetic Algorithms ______ Cellular Automata ______ Chaos Theory ______ Fractals ______ Distributed Programming ______ Other (specify) ______________________________ [ ] I am interested in attending an advanced WORKSHOP. [ ] I am UNABLE TO ATTEND the conference but would like to be kept informed. -------------%< cut here %<------------------ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: and its Applications was held in Honolulu. Some of the topics that were covered in the symposium are listed below. Circuits and Systems Neural Networks Chaos Dynamics Cellular Neural Networks Fractals Bifurcation Biocybernetics Soliton Oscillations Reactive Phenomena Fuzzy Numerical Methods Pattern Generation Information Dynamics Self-Validating Numerics Time Series Analysis Chua's Circuits Chemistry and Physics Mechanics Fluid Mechanics Acoustics Control Optics Circuit Simulation Communication Economics Digital/analog VLSI circuits Image Processing Power Electronics Power Systems We have extra copies of the proceedings that are on sale for $100 to participants of the conference and $150 to nonparticipants. Checks drawn from US banks and money orders will be accepted. To receive a copy of the proceedings make payments to ``NOLTA 93'' and send to Anthony Kuh Dept. of Electrical Engineering University of Hawaii Honolulu HI 96822 For more information please contact me by email at kuh at wiliki.eng.hawaii.edu or by fax at 808-956-3427, From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: L. Bernard Widrow, Stanford University - SUNDAY, JUNE 5. 
1994, 8 AM-12 PM Adaptive Filters, Adaptive Controls, Adaptive Neural Networks, and Applications M. Gail Carpenter, Boston University - SUNDAY, JUNE 5. 1994, 8 AM-12 PM Adaptive Resonance Theory N. Takeshi Yamakawa, Kyushu Institute of Technology - SATURDAY, JUNE 4. 1994, 6-10 PM What are the Differences and the Similarities Among Fuzzy, Neural, and Chaotic Systems? O. Stephen Grossberg, Boston University - SUNDAY, JUNE 5. 1994, 1-5 PM Autonomous Neurodynamics: From Perception to Action P. Lee Giles, NEC Research Institute - SATURDAY, JUNE 4. 1994, 8 AM-12 PM Dynamically-Driven Recurrent Neural Networks: Models, Training Algorithms, and Applications Q. Alianna Maren, Accurate Automation Corporation - SATURDAY, JUNE 4. 1994, 1-5 PM Introduction to Neural Network Applications R. David Casasent, Carnegie Mellon University - SATURDAY, JUNE 4. 1994, 8 AM-12 PM Pattern Recognition and Neural Networks S. Per Bak, Brookhaven National Laboratory - SATURDAY, JUNE 4. 1994, 1-5 PM Introduction to Self-Organized Criticality T. Melanie Mitchell, Sante Fe Institute - SATURDAY, JUNE 4. 1994, 8 AM-12 PM Genetic Algorithms, Theory and Applications U. Lotfi A. Zadeh, University of California, Berkeley - SUNDAY, JUNE 5. 1994, 1-5 PM Fuzzy Logic and Calculi of Fuzzy Rules and Fuzzy Graphs V. Nikolay G. Rambidi, International Research Institute for Management Sciences - SUNDAY, JUNE 5. 1994, 6-10 PM Image Processing and Pattern Recognition Based on Molecular Neural Networks __________________________________________________________ PLENARIES: 1. Tuesday, June 7, 1994, 6-7 PM Lotfi A. Zadeh, University of California, Berkeley "Fuzzy Logic, Neural Networks, and Soft Computing" 2. Tuesday, June 7, 1994, 7-8 PM Per Bak, Brookhaven National Laboratory "Introduction to Self-Organized Criticality" 3. Wednesday, June 8, 1994, 6-7 PM Bernard Widrow, Stanford University "Adaptive Inverse Control" 4. Wednesday, June 8, 1994, 7-8 PM Melanie Mitchell, Sante Fe Institute "Genetic Algorithms: Why They Work and What They Can Do For You" 5. Thursday, June 9, 1994, 6-7 PM Paul Werbos, US National Science Foundation "Brain-Like Intelligence in Artificial Models: How Do We Really Get There?" 6. Thursday, June 9, 1994, 7-8 PM John Taylor, King's College London "Capturing What It Is Like to Be: Modelling the Mind with Neural Networks" __________________________________________________________ SPECIAL SESSIONS: Special Session 1: "Biomedical Applications of Neural Networks" Tuesday, June 7, 1994 Co-sponsored by the National Institute of Allergy and Infectious Diseases, U.S. NIH; the Division of Cancer Treatment, National Cancer Institute, U.S. NIH; and the Center for Devices and Radiological Health, U.S. Food and Drug Administration Chairs: David G, Brown, PhD Center for Devices and Radiological Health, FDA John Weinstein, MD, PhD National Cancer Institute, NIH This special session will focus on recent progress in applying neural networks to biomedical problems, both in the research laboratory and in the clinical environment. Applications moving toward early commercial implementation will be highlighted, and working demonstrations will be given. The track will commence with an overview session including invited presentations by Dr. John Weinstein, NCI, NIH on the present status of biomedical applications research and by his co-session chair Professor Shiro Usui, Toyohasi University on biomedical applications in Japan. A second morning session, chaired by Dr. Harry B. Burke, MD, PhD, University of Nevada and by Dr. 
Judith Dayhoff, PhD, University of Maryland, will address neural networks for prediction and other nonimaging applications. The afternoon session, chaired by Dr. Maryellen L. Giger, PhD, University of Chicago, and Dr. Laurie J. Mango, Neuromedical Systems, Inc., will cover biomedical image analysis/image understanding applications. The final session, chaired by Dr. David G. Brown, PhD, CDRH, FDA, is an interactive panel/audience discussion of the promise and pitfalls of neural network biomedical applications. Other prominent invited speakers include Dr. Nozomu Hoshimiya of Tohoku University and Dr. Michael O'Neill of the University of Maryland. Submissions of oral and/or poster presentations are welcome to complement the invited presentations. Special Session 2: "Commercial and Industrial Applications of Neural Networks" Tuesday, June 7, 1994 Co-sponsored by the Society for Manufacturing Engineers Overall Chair: Bernard Widrow, Stanford University This special session will be divided into four sessions of invited talks and will place its emphasis on commercial and industrial applications working 24 hours a day and making money for their users. The sessions will be organized as follows: Morning Session 1 "Practical Applications of Neural Hardware" Chair: Dan Hammerstrom, Adaptive Solutions, Portland, Oregon, USA Morning Session 2 "Applications of Neural Networks in Pattern Recognition and Prediction" Chair: Kenneth Marko, Ford Motor Company Afternoon Session 1 "Applications of Neural Networks in the Financial Industry" Chair: Ken Otwell, BehavHeuristics, College Park, Maryland, USA Afternoon Session 2 "Applications of Neural Networks in Process Control and Manufacturing" Chair: Tariq Samad, Honeywell Special Session 3: "Financial and Economic Applications of Neural Networks" Wednesday, June 8, 1994 Chair: Guido J. Deboeck, World Bank This special session will focus on the state-of-the-art in financial and economic applications. The track will be split into four sessions: Morning Session 1 "Overview on Major Financial Applications of Neural Networks and Related Advanced Technologies" Morning Session 2 Presentation of Papers: Time-Series, Forecasting, Genetic Algorithms, Fuzzy Logic, Non-Linear Dynamics Afternoon Session 1 Product Presentations Afternoon Session 2 Panel discussion on "Cost and Benefits of Advanced Technologies in Finance" Invited speakers to be announced. Papers submitted to regular sessions may receive consideration for this special session. Special Session 4: "Neural Networks in Chemical Engineering" Thursday, June 9, 1994 Co-sponsored by the American Institute of Chemical Engineers Chair: Thomas McAvoy, University of Maryland This special session on neural networks in the chemical process industries will explore applications to all areas of the process industries including process modelling, both steady state and dynamic, process control, fault detection, soft sensing, sensor validation, and business examples. Contributions from both industry and academia are being solicited. Special Session 5: "Mind, Brain and Consciousness" Thursday, June 9, 1994 Session Chair: John Taylor, King's College London Session Co-chair: Walter Freeman, University of California, Berkeley Session Committee: Stephen Grossberg, Boston University and Gerhardt Roth, Brain Research Institute Invited Speakers include S. Grossberg, P. Werbos, G. Roth, B. Libet, J.
Taylor Consciousness and inner experience have suddenly emerged as the centre of activity in psychology, philosophy, and neurobiology. Neural modelling is proceeding apace in this subject. Contributors from all areas are now coming together to move rapidly towards a solution of what might be regarded as one of the deepest problems of human existence. Neural models, and their constraints, will be presented in the session, with an assessment of how far we are from building a machine that can see the world the way we do. _____________________________________________________________ SPECIAL INTEREST GROUP (SIGINNS) SESSIONS INNS Special Interest Groups have been established for interaction between individuals with interests in various subfields of neural networks as well as within geographic areas. Several SIGs have tentatively planned sessions for Wednesday, June 8, 1994 from 8 - 9:30 PM: Automatic Target Recognition - Brian Telfer, Chair A U.S. Civilian Neurocomputing Initiative - Andras Pellionisz, Chair Control, Robotics, and Automation - Kaveh Ashenayi, Chair Electronics/VLSI - Ralph Castain, Chair Higher Level Cognitive Processes - John Barnden, Chair Hybrid Intelligence - Larry Medsker, Chair Mental Function and Dysfunction - Daniel Levine, Chair Midwest US Area - Cihan Dagli, Chair Power Engineering - Dejan Sobajic, Chair ______________________________________________________ NEW in '94! NEURAL NETWORK INDUSTRIAL EXPOSITION The State-of-the-Art in Advanced Technological Applications MONDAY JUNE 6, 1994 - 8 AM to 9 PM SPECIAL EXPOSITION-ONLY REGISTRATION AVAILABLE: $55 * Dedicated to Currently Available Commercial Applications of Neural Nets & Related Technologies * Commercial Hardware and Software Product Demos * Poster Presentations * Panel Conclusions Led by Industry Experts * Funding Panel EXPOSITION CHAIR -- Takeshi Yamakawa, Kyushu Institute of Technology CHAIRS -- Hardware: Dan Hammerstrom, PhD, Adaptive Solutions, Inc. Takeshi Yamakawa, Kyushu Institute of Technology Robert Pap, Accurate Automation Corporation Software: Dr. Robert Hecht-Nielsen, HNC, Inc. Casimir C. Klimasauskas, Co-founder, NeuralWare, Inc. John Sutherland, Chairman and VP Research, AND America, Ltd. Soo-Young Lee, KAIST Asian Liaison Pierre Martineau, Martineau and Associates European Liaison Plus: NEURAL NETWORK CONTEST with $1500 GRAND PRIZE chaired by Bernard Widrow, Harold Szu, and Lotfi Zadeh with a panel of distinguished judges! EXPOSITION SCHEDULE Morning Session: Hardware 8-11 AM Product Demonstration Area and Poster Presentations 11 AM-12 PM Panel Conclusion Afternoon Session: Software 1-4 PM Product Demonstration Area and Poster Presentations 4-5 PM Panel Conclusion Evening Session 6-8 PM Neural Network Contest 8-9 PM Funding Panel HOW TO PARTICIPATE: To demonstrate your hardware or software product contact James J. Wesolowski, 202-466-4667; (fax) 202-466-2888. For more information on the neural network contest, indicate your interest on the registration form. Further information and contest rules will be sent to all interested parties! Deadline for contest registration is March 1, 1994. ______________________________________________________ FEES at WCNN 1994 REGISTRATION FEE (includes all sessions, plenaries, proceedings, reception, and Industrial Exposition. Separate registration for Short Courses.)
- INNS Members: US$195 - US$395 - Non Members: US$295 - US$495 - Full Time Students: US$85 - US$135 - Spouse/Guest: US$35 - US$55 SHORT COURSE FEE (Pay for 2 short courses, get the third FREE) - INNS Members: US$225 - US$275 - Non Members: US$275 - US$325 - Full Time Students: US$125 - US$150 CONFERENCE HOTEL: Town and Country Hotel (same site as conference) - Single: US$70 - US$95 - Double: US$80 - US$105 TRAVEL RESERVATIONS: Executive Travel Associates (ETA) has been selected as the official travel company for the World Congress on Neural Networks. ETA offers the lowest available fares on any airline at time of booking when you contact them at US phone number 202-828-3501 or toll free (in the US) at 800-562-0189 and identify yourself as a participant in the Congress. Flights booked on American Airlines, the official airline for this meeting, will result in an additional discount. Please provide the booking agent you use with the code: Star #S0464FS ________________________________________________________ TO RECEIVE CONFERENCE BROCHURES AND REGISTRATION FORMS, HOTEL ACCOMMODATION FORMS, AND FURTHER CONGRESS INFORMATION, CONTACT THE INTERNATIONAL NEURAL NETWORK SOCIETY AT: International Neural Network Society 1250 24th Street, NW Suite 300 Washington, DC 20037 USA phone: 202-466-4667 fax: 202-466-2888 e-mail: 70712.3265 at compuserve.com From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: predictable are those carrying most information [haussler-91]. Suppose that the learning machine was trained on a sequence of examples x1, x2, ... xn and is presented pattern xn+1. If pattern xn+1 is easily predicted (i.e., its error is small) the benefit from learning that pattern is going to be small: the hypothesis space is not going to shrink much. Conversely, if xn+1 is hard to predict, learning it will result in a large shrinking of the hypothesis space. Minimax algorithms, which minimize the maximum error instead of the average error, rely on this principle. The solution of minimax algorithms depends only on a number of informative patterns, namely those patterns having maximum error (which other people would call outliers) [boser-92]. What happens when the data is not perfectly clean? Then, outliers can be either very informative, if they correspond to atypical patterns, or very non-informative, if they correspond to garbage patterns. With algorithms that detect the outliers (e.g. minimax algorithms) one can clean the data either automatically or by hand by removing a subset of the outliers. The VC-theory predicts the point of optimal cleaning [matic-92]. Isabelle Guyon
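To make the informativeness criterion above concrete, here is a minimal numerical sketch (Python with NumPy; the toy linear hypothesis, the hinge-style error and all names are illustrative assumptions, not taken from the cited papers): score each pattern by the error the current hypothesis makes on it and look at the largest-error patterns first, either as the most informative examples or as candidate outliers to clean.

import numpy as np

def pattern_errors(w, X, y):
    # error of the current linear hypothesis w on each labelled pattern (x_i, y_i)
    margins = y * (X @ w)                      # > 0 means correctly classified
    return np.maximum(0.0, 1.0 - margins)      # hinge-style per-pattern error

def most_informative(w, X, y, k=5):
    # the hardest-to-predict patterns (largest error) are the most informative
    # ones, and also the first candidates to inspect as possible outliers
    errors = pattern_errors(w, X, y)
    order = np.argsort(-errors)
    return order[:k], errors[order[:k]]

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
y = np.sign(X[:, 0] + 0.2 * rng.normal(size=100))
w = np.array([1.0, 0.0, 0.0])                  # some current hypothesis
indices, errors = most_informative(w, X, y)
print(indices, errors)

With noisy data the same ranking can be read the other way around: the top of the list is where one would start looking for garbage patterns to remove.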
--------------------------------- @inproceedings{haussler-91, author = "Haussler, D. and Kearns, M. and Schapire, R.", title = "Bounds on the Sample Complexity of Bayesian Learning Using Information Theory and the {VC} Dimension", booktitle = "Computational Learning Theory workshop", organization = "ACM", year = "1991", } @inproceedings{boser-92, author = "Boser, B. and Guyon, I. and Vapnik, V.", title = "A Training Algorithm for Optimal Margin Classifiers", year = "1992", booktitle = "Fifth Annual Workshop on Computational Learning Theory", address = "Pittsburgh", publisher = "ACM", month = "July", pages = "144-152" } @inproceedings{matic-92, author = "Mati\'{c}, N. and Guyon, I. and Bottou, L. and Denker, J. and Vapnik, V.", title = "Computer Aided Cleaning of Large Databases for Character Recognition", organization = "IAPR/IEEE", address = "Amsterdam", month = "August", year = 1992, booktitle = "11th International Conference on Pattern Recognition", volume = "II", pages = "330-333", } From Sebastian.Thrun at B.GP.CS.CMU.EDU Tue Jun 6 06:52:25 2006 From: Sebastian.Thrun at B.GP.CS.CMU.EDU (Sebastian.Thrun@B.GP.CS.CMU.EDU) Date: Thu, 17 Mar 1994 11:03-EST Subject: CALL FOR PAPERS: Issue on Robot Learning, Machine Learning Journal Message-ID: ************************************************************** ***** CALL FOR PAPERS ****** ************************************************************** Special Issue on ROBOT LEARNING Journal MACHINE LEARNING (edited by J. Franklin and T. Mitchell and S. Thrun) This issue focuses on recent progress in the area of robot learning. The goal is to bring together key research on machine learning techniques designed for and applied to robots, in order to stimulate research in this area. We particularly encourage submission of innovative learning approaches that have been successfully implemented on real robots. Submission deadline: October 1, 1994 Papers should be double spaced and 8,000 to 12,000 words in length, with full-page figures counting for 400 words. All submissions will be subject to the standard review procedure. It is our goal to also publish the issue as a book. Send three (3) copies of submissions to: Sebastian Thrun Universitaet Bonn Institut fuer Informatik III Roemerstr. 164 D-53117 Bonn Germany phone: +49-228-550-373 Fax: +49-228-550-382 E-mail: thrun at cs.bonn.edu, thrun at cmu.edu Also mail five (5) copies of submitted papers to: Karen Cullen MACHINE LEARNING Editorial Office Kluwer Academic Publishers 101 Philip Drive Norwell, MA 02061 USA phone: (617) 871-6300 E-mail: karen at world.std.com Note: Machine Learning is now accepting submission of final copy in electronic form. There is a latex style file and related files available via anonymous ftp from world.std.com. Look in Kluwer/styles/journals for the files README, smjrnl.doc, smjrnl.sty, smjsamp.tex, smjtmpl.tex, or smjstyles.tar (which contains them all).  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:
We all (those interested in real function modeling) want to know how these tools would fare in such a test, with objective measuring tools, compared against the same data, and each tool properly applied. Not a simple task to be done by anyone, but only practical in a setting such as a competition. Furthermore, these results will provide a single source of rich material for future experimentation, all collected in a single place. > Furthermore, it is pretty >clear that the only way to consistently make money with such >a technique would be to keep it secret. I would say that for a specific technique, applied to a specific market and time frame, you are probably right. But the statement is not true in general. Even if it can arguably be the case, the competition promotes two tracks: one with full disclosure, and another one, a little harder to get in, for non-disclosure entries. In answering a negative assertion, all that we need is a counterexample. We did not wish to have commercial claims from people who say they know how to make the system work but would not show it to us. Here is an opportunity for all. Lastly, I would like to point out that our sole objective at this point is not the sordid business of making money by designing trading systems, but rather to study the predictive quality of nonlinear techniques as applied to this specific problem (which has eluded a number of other techniques). There is an ocean of difference between a reasonably accurate predictor and a money making trading system. That discussion is beyond the scope of this note, except to say that one definitely does not imply the other. > >On the other hand, I have a great deal of respect for several >of the people involved in the "Competition", and this leads me >to wonder whether I might be missing some crucial point. Can >anybody help me with this? > > -- Bill > On behalf of the panel, I would like to thank you for the vote of confidence, and say that we will strive to make the most accurate assessments of the quality of the entries. I would like to invite all the real function modeling researchers to participate and to try the best that can be done at this point with these techniques. Here is a good chance to show if any technique can indeed make a difference in this difficult and important problem. If it can, it can certainly be of use in a number of related problems. Science has been based on curiosity and correct methodology. I hope that we all can strive for answers, especially to questions that are enveloped in mysterious folklore... -- Manoel ____________________________________________________________________________ ________________________________________ ___________________________ Manoel Fernando Tenorio Parallel Distributed Structures Lab School of Electrical Engineering Purdue University W. Lafayette, IN 47907 Ph.: 317-494-3482 Fax: 317-494-6440 tenorio at ecn.purdue.edu ============================================================================ = From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: 21 May 1994 (Sat) Subject: No subject Message-ID: The file nascimento.phd.tarZ is now available for copying from the anonymous ftp-site 'slarti.csc.umist.ac.uk' (130.88.116.3): Author: Cairo L. Nascimento Jr. (cairo at csc.umist.ac.uk) PhD thesis title: Artificial Neural Networks in Control and Optimization Submission date: February 1994 Supervisor: Dr. Martin B. Zarrop (zarrop at csc.umist.ac.uk) UMIST - Control Systems Centre P.O.
Box 88 - Sackville Street Manchester M60 1QD United Kingdom Abstract: This thesis concerns the application of artificial neural networks to solve optimization and dynamical control problems. A general framework for artificial neural networks models is introduced first. Then the main feedforward and feedback models are presented. The IAC (Interactive Activation and Competition) feedback network is analysed in detail. It is shown that the IAC network, like the Hopfield network, can be used to solve quadratic optimization problems. A method that speeds up the training of feedforward artificial neural networks by constraining the location of the decision surfaces defined by the weights arriving at the hidden units is developed. The problem of training artificial neural networks to be fault tolerant to loss of hidden units is mathematically analysed. It is shown that by considering the network fault tolerance the above problem is regularized, that is the number of local minima is reduced. It is also shown that in some cases there is a unique set of weights that minimizes a cost function. The BPS algorithm, a network training algorithm that switches the hidden units on and off, is developed and it is shown that its use results in fault tolerant neural networks. A novel non-standard artificial neural network model is then proposed to solve the extremum control problem for static systems that have an asymmetric performance index. An algorithm to train such a network is developed and it is shown that the proposed network structure can also be applied to the multi-input case. A control structure that integrates feedback control and a feedforward artificial neural network to perform nonlinear control is proposed. It is shown that such a structure performs closed-loop identification of the inverse dynamical system. The technique of adapting the gains of the feedback controller during training is then introduced. Finally it is shown that the BPS algorithm can also be used in this case to increase the fault tolerance of the neural controller in relation to loss of hidden units. Computer simulations are used throughout to illustrate the results. ----------------------------------------------------------------------------------- The thesis is 226 pages (17 preamble + 209 text). Hardcopies are not available at the moment. To obtain a copy of the Postscript files: % ftp slarti.csc.umist.ac.uk > Name: anonymous > Password: > cd /pub/neural/cairo > binary > get nascimento.phd.tarZ > quit The file nascimento.phd.tarZ is a unix TAR file which contains the following postscript files (compressed by the standard unix command "compress"): File Size in bytes nascimento.phd.tarZ 2015232 chap01.ps.Z 36737 chap24.ps.Z 1041029 chap58.ps.Z 928199 When uncompressed the file sizes and number of pages in each file are: File Size in bytes Number of pages chap01.ps 109471 22 chap24.ps 3315662 97 chap58.ps 2913551 107 --------- ----- 6338684 216 To obtain one of the postscript files from the TAR file, use: % tar tvf nascimento.phd.tarZ (list the table of contents of the TAR file) % tar xvf nascimento.phd.tarZ chap24.ps.Z (extracts only the file chap24.ps.Z from the TAR file) % uncompress -v chap24.ps.Z % lpr -s -P chap24.ps (do not delete or compress the PS file until the printing is finished) OBS: 1) The uncompressed postscript files can be viewed using "ghostview" (or "ghostscript"), but I don't know about "pageview". 2) If you have GZIP installed locally, consider compressing the PS files using it. 
Using the command "gzip -9v filename" the size of the compressed PS files will be respectively 24568, 613052, 637269 bytes (total using GZIP -9: 1274889 bytes, total using COMPRESS: 2005965 bytes; 1274889 / 2005965 = 63.6 %). 3) Some of my other publications are available in the same directory. For more details get the file /pub/neural/cairo/INDEX.TXT. ------------------------------------------------------------------------- Cairo L. Nascimento Jr. | E-Mail: cairo at csc.umist.ac.uk UMIST - Control Systems Centre | Tel: +(44)(61) 200-4659, Room C70 P.O. Box 88 - Sackville Street | or +(44)(61) 236-3311, Ext.2821 Manchester M60 1QD | Tel. Home: +(44)(61) 343-3979 United Kingdom | WHOIS handle for ds.internic.net: CLN2 ------------------------------------------------------------------------- After 1st June 1994 my surface address will be: Cairo L. Nascimento Jr. Instituto Tecnologico de Aeronautica, CTA - ITA - IEE- IEEE 12228-900 - Sao Jose' dos Campos - SP Brazil E-mail in Brazil (after 1st June 1994): ita at fpsp.fapesp.br (please, include my name in the subject line). The email address cairo at csc.umist.ac.uk should remain operational for some months after June/94. .................................................................................... END From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Applications}, V. Cherkassky, J.H. Friedman and H. Wechsler (eds.), NATO ASI Series F, Springer-Verlag 1994. ------------------------------------------------------------------------- Prediction Risk and Architecture Selection for Neural Networks John Moody Abstract: We describe two important sets of tools for neural network modeling: prediction risk estimation and network architecture selection. Prediction risk is defined as the expected performance of an estimator in predicting new observations. Estimated prediction risk can be used both for estimating the quality of model predictions and for model selection. Prediction risk estimation and model selection are especially important for problems with limited data. Techniques for estimating prediction risk include data resampling algorithms such as {\em nonlinear cross--validation (NCV)} and algebraic formulae such as the {\em predicted squared error (PSE)} and {\em generalized prediction error (GPE)}. We show that exhaustive search over the space of network architectures is computationally infeasible even for networks of modest size. This motivates the use of {\em heuristic} strategies that dramatically reduce the search complexity. These strategies employ directed search algorithms, such as selecting the number of nodes via {\em sequential network construction (SNC)} and pruning inputs and weights via {\em sensitivity based pruning (SBP)} and {\em optimal brain damage (OBD)} respectively. Keywords: prediction risk, network architecture selection, cross--validation (CV), nonlinear cross--validation (NCV), predicted squared error (PSE), generalized prediction error (GPE), effective number of parameters, heuristic search, sequential network construction (SNC), pruning, sensitivity based pruning (SBP), optimal brain damage (OBD). 
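As a rough illustration of what an algebraic prediction-risk estimate looks like, here is a minimal sketch (Python; the function name and all numbers are illustrative assumptions, and the estimators described in the paper may differ in detail). A common form of the predicted squared error adds a complexity penalty of 2*sigma^2*p/n to the training error, where p is the number of parameters, n the number of training examples, and sigma^2 an estimate of the noise variance; the GPE mentioned in the abstract replaces p by an effective number of parameters.

def predicted_squared_error(train_mse, n_examples, n_params, sigma2):
    # algebraic prediction-risk estimate: training error plus a
    # complexity penalty that grows with the number of parameters
    return train_mse + 2.0 * sigma2 * n_params / n_examples

# hypothetical comparison: a small and a large network fit to the same data
# (100 examples, noise variance estimate 0.05)
small = predicted_squared_error(train_mse=0.040, n_examples=100, n_params=10, sigma2=0.05)
large = predicted_squared_error(train_mse=0.030, n_examples=100, n_params=40, sigma2=0.05)
print(small, large)   # 0.05 vs 0.07: the larger net fits better but is estimated to predict worse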
========================================================================= Retrieval instructions are: unix> ftp neural.cse.ogi.edu login: anonymous password: name at email.address ftp> cd pub/neural ftp> cd papers ftp> get INDEX ftp> binary ftp> get moody94.predictionrisk.ps.Z ftp> quit unix> uncompress *.Z unix> lpr *.ps From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: is the use of patient specific measurements of an epidemiological nature (such as maternal age, past obstetrical history etc) in the forecasting of a number of specific Adverse Pregnancy Outcomes. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: of pattern processing and classification systems which can be trained to forecast problems in pregnancy. These systems will be designed to accept a variety of data formats from project partners throughout the EC, and will be tuned to provide optimum performance for the particular medical task. In neural net terms, such obstetrical problems are similar to financial problems of credit risk prediction. Many leading European obstetrical centers are involved in this project and close collaboration with a number of these will be an essential component of the post offered. The CEC grant to QAMC is likely to be for three years. The post offered would therefore be for one year in the first instance, with the likelihood of renewal up to a maximum of three years, subject to satisfactory performance. The person appointed will work principally in Cambridge and should have already had considerable experience with Neural Networks, ideally up to PhD level. A medical qualification would be desirable, but this is by no means essential. The gross salary will depend on age but the present scale (subject to review) lies within the range of 12828 - 18855 UKP per year. Interviews are likely to be held in Cambridge on 31 August 1994. Closing date for applications is 18 August 94. Further particulars may be obtained from and application forms should be sent to: Dr Kevin Dalton PhD FRCOG OR Dr Richard Prager Dept of Obstetrics and Gynaecology Dept of Engineering Rosie Maternity Hospital Trumpington Street Cambridge CB2 2SW Cambridge CB2 1PZ UK UK Phone 44 223 410250 44 223 332771 FaX 44 223 336873 44 223 332662 Email kjd5 at phx.cam.ac.uk rwp at eng.cam.ac.uk ---------------------- JOB JOB JOB JOB ------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: orthogonal) basic ideas are discussed, regarding computational effort, stability, reliability, sensor requirements, and consistency as well as their useful applications. The first approach is an exact, geometric technique using line representations extracted from the information produced by a laser-range finder. 
The second possibility discussed is a qualitative, topologic mapping of the environment using neural clustering techniques. Both presented classes of environment-modelling strategies are evaluated on the basis of principal arguments and of simulations and tests on real robots. Experiences from the MOBOT and the ALICE projects, respectively, are discussed together with some related work. ------------------------------------------------------------------------ --- ALICE - Topographic Exploration, Cartography and Adaptive Navigation --- on a Simple Mobile Robot ------------------------------------------------------------------------ --- File name is : Zimmer.ALICE.ps.Z TSRPC '94, Leeuwenhorst, The Netherlands, June 24-26, 1994 ALICE - Topographic Exploration, Cartography and Adaptive Navigation on a Simple Mobile Robot Pascal Lefevre, Andreas Pruess & Uwe R. Zimmer A sub-symbolic, adaptive approach to the basic world-modelling, navigation and exploration tasks of a mobile robot is discussed in this paper. One of the main goals is to adapt a couple of internal representations to a moderately structured and dynamic environment. The main internal world model is a qualitative, topologic map, which is continuously adapted to the actual environment. This adaptation is based on passive light and touch sensors as well as on an internal position calculated by dead-reckoning and by correlation to distinct sensor situations. Due to the fact that ALICE is an embedded system with a continuous flow of sensor-samples (i.e. without the possibility to stop this data-flow), real-time aspects have to be handled. ALICE is implemented as a mobile platform with an on-board computer and as a simulation, where light distributions and position drifts are considered.
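For readers who want a feel for what a continuously adapted topologic map of this general kind involves, here is a minimal sketch (Python with NumPy; the thresholded nearest-prototype clustering below is a generic stand-in and an assumption, not the algorithm actually used on ALICE): each node stores a prototype sensor situation together with the dead-reckoned position at which it was seen, nodes are created or adapted on-line, and successively visited nodes are linked into a graph.

import numpy as np

class TopologicMap:
    # nodes: list of (sensor prototype, estimated position); edges: traversed transitions
    def __init__(self, new_node_threshold=1.0, learning_rate=0.1):
        self.nodes, self.edges = [], set()
        self.threshold, self.lr = new_node_threshold, learning_rate
        self.last = None

    def update(self, sensors, position):
        sensors, position = np.asarray(sensors, float), np.asarray(position, float)
        if self.nodes:
            dists = [np.linalg.norm(sensors - p) for p, _ in self.nodes]
            best = int(np.argmin(dists))
        if not self.nodes or dists[best] > self.threshold:
            self.nodes.append((sensors.copy(), position.copy()))   # new sensor situation
            best = len(self.nodes) - 1
        else:
            p, q = self.nodes[best]                                 # adapt the winning node
            self.nodes[best] = (p + self.lr * (sensors - p),
                                q + self.lr * (position - q))
        if self.last is not None and self.last != best:
            self.edges.add((self.last, best))                       # remember the traversed link
        self.last = best
        return best

# toy usage: a robot cycling through four distinct sensor situations
rng = np.random.default_rng(1)
m = TopologicMap()
for t in range(40):
    corner = t % 4
    sensors = np.eye(4)[corner] + 0.05 * rng.normal(size=4)
    position = np.array([np.cos(corner), np.sin(corner)])
    m.update(sensors, position)
print(len(m.nodes), sorted(m.edges))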
------------------------------------------------------------------ FTP-information (anonymous login): FTP-Server is : ag_vp_file_server.informatik.uni-kl.de Mode is : binary Directory is : Neural_Networks/Reports File names are : Zimmer.ALICE.ps.Z Zimmer.Comparison.ps.Z Zimmer.Navigation.ps.Z Zimmer.Topologic.ps.Z Zimmer.Visual_Search.ps.Z Zimmer.Learning_Surfaces.ps.Z Zimmer.SPIN-NFDS.ps.Z .. or ... FTP-Server is : ftp.uni-kl.de Mode is : binary Directory is : reports_uni-kl/computer_science/mobile_robots/... Subdirectory is : 1994/papers File names are : Zimmer.ALICE.ps.Z Zimmer.Comparison.ps.Z Zimmer.Navigation.ps.Z Zimmer.Topologic.ps.Z Zimmer.Visual_Search.ps.Z Subdirectory is : 1993/papers File names are : Zimmer.learning_surfaces.ps.Z Zimmer.SPIN-NFDS.ps.Z Subdirectory is : 1992/papers File name is : Zimmer.rt_communication.ps.Z Subdirectory is : 1991/papers File names are : Edlinger.Pos_Estimation.ps.Z Edlinger.Eff_Navigation.ps.Z Knieriemen.euromicro_91.ps.Z Zimmer.albatross.ps.Z .. or ... FTP-Server is : archive.cis.ohio-state.edu Mode is : binary Directory is : /pub/neuroprose File names are : zimmer.alice.ps.Z zimmer.comparison.ps.Z zimmer.navigation.ps.z zimmer.visual_search.ps.z zimmer.learning_surfaces.ps.z zimmer.spin-nfds.ps.z ------------------------------------------------------------------ ----------------------------------------------------- ----- Uwe R. Zimmer --- University of Kaiserslautern - Computer Science Department | Research Group Prof. v. Puttkamer | 67663 Kaiserslautern - Germany | -------------------------------------------------------------- | P.O.Box:3049 | Phone:+49 631 205 2624 | Fax:+49 631 205 2803 | From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: 6. Time series archive ++++++++++++++++++++++ Various datasets of time series (to be used for prediction learning problems) are available for anonymous ftp from ftp.santafe.edu [192.12.12.1] in /pub/Time-Series. Problems are for example: fluctuations in a far-infrared laser; Physiological data of patients with sleep apnea; High frequency currency exchange rate data; Intensity of a white dwarf star; J.S. Bach's final (unfinished) fugue from "Die Kunst der Fuge". Some of the datasets were used in a prediction contest and are described in detail in the book "Time series prediction: Forecasting the future and understanding the past", edited by Weigend/Gershenfeld, Proceedings Volume XV in the Santa Fe Institute Studies in the Sciences of Complexity series of Addison Wesley (1994). Lutz Lutz Prechelt (email: prechelt at ira.uka.de) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; 76128 Karlsruhe; Germany | they get (Voice: ++49/721/608-4068, FAX: ++49/721/694092) | less simple.  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: connectionist strategy can be discerned. Up to now this strategy has been largely implicit, to be found only in some quotes concerning the progress and goals of the connectionist research program, which usually end up somewhere in the introduction or the concluding remarks of an article. Those who are more theoretically inclined do write on connectionism and its role within cognitive science, but their articles mainly concern the differences between connectionism and symbolism and -again- only mention 'the strategy' implicitly. In the thesis I make explicit how connectionist research is striving towards the goal of better understanding cognition. By taking the above-mentioned quotes literally, it is possible to construct a model of progress in the connectionist field. This stagewise model is valuable in several ways both within the field and in its relation to the 'outside world'. The main aim of the thesis is to describe this (methodological) stagewise treatment (of the progress) of connectionism, which is to be called STC -Stagewise Treatment of Connectionism-. To further indicate the value of such a model, it will be used to examine several important aspects of connectionism. The first is a closer look at connectionism itself. It is important to place the research that is currently done into a larger connectionist perspective. STC can be used to give an indication of the stronger and weaker points of this field of research and by making these explicit, connectionism can proceed in a more 'self-aware' and precise way. The second use of STC lies in a comparison with other research programs, specifically of course the classical, symbolist program, which is considered to be the main competitor within the area of cognitive science. In order to do that I describe progress of the symbolist program in a way similar to STC.
In the third part of the thesis the practical use of the model is demonstrated by using STC to describe in greater detail one specific sub-area of cognitive research, that of developmental psychology. The main goal is to show the value of STC as a descriptive tool, but after establishing the legitimacy of the model some indication of its prescriptive uses will follow.  From ertel at fbe.fh-weingarten.de Tue Jun 6 06:52:25 2006 From: ertel at fbe.fh-weingarten.de (ertel@fbe.fh-weingarten.de) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: POSITION AVAILABLE in hybrid systems research Message-ID: RESEARCH POSITION --- RESEARCH POSITION --- RESEARCH POSITION Integration of Symbolic and Connectionist Reasoning in the Research Group AUTOMATED REASONING at the Chair Informatik VIII Fakult"at f"ur Informatik Technische Universit"at M"unchen We are an AI research group (consisting of about a dozen researchers) at the University of Technology in Munich. Our major field of research is Automated Reasoning and our main product is the Automated Theorem Prover SETHEO. For a couple of years now we have been pursuing research on the combination of symbolic reasoning with neural techniques. We are one of the partners in the ESPRIT project MIX (Modular Integration of Symbolic and Connectionist Processing in Knowledge-Based Systems). Our goal in this project is to design a system which is able to do rule-based symbolic reasoning on certain and uncertain knowledge as well as inductive reasoning (generalization) from empirical data. To achieve this goal we employ techniques from Automated Reasoning, Statistics and Neural Networks. Among other applications we are starting to work on a medical expert system for diagnosis in clinical toxicology. Our future colleague should have experience in at least some of the mentioned research fields and should be willing to enter the others as well. She/he shall participate actively in the design of the computational model and in the realization of the application, and should represent the project at MIX project meetings. The position is available immediately and limited (with good chances for continuation) until March 1997. Funding is according to BAT IIa or BAT Ib (approx. 65000 -- 70000 DM before tax) depending on qualification and experience. The applicants must have at least a Master's degree in Computer Science or a comparable qualification. Applicants without a Ph.D. are expected to prepare a doctoral thesis in the course of their research tasks. Applicants without German as their mother tongue should be sincerely willing to learn German. Please send your application documents with references as soon as possible to: Bertram Fronh"ofer Institut f"ur Informatik der Technischen Universit"at 80290 M"unchen Fax.: +49-89/526502 E-mail: fronhoef at informatik.tu-muenchen.de Since we want to fill this vacant position as soon as possible, we would highly appreciate receiving from applicants a short notification of interest (preferably by e-mail) containing a short description of the applicant's qualifications: e.g. a short CV, a list of publications, a summary of the master's thesis or Ph.D. thesis, etc. _______________________________________________________________________________ Bertram Fronh"ofer Automated Reasoning Group Institut f"ur Informatik at the Lehrstuhl Informatik VIII Technische Universit"at M"unchen Tel.: +49-89/2105-2031 Arcisstr.
21 Fax.: +49-89/526502 D-80290 Muenchen Email: fronhoef at informatik.tu-muenchen.de _______________________________________________________________________________  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: commercially available products relating to neural network tools and applications. In addition, advanced prototypes of tools and applications developed by public sector research organizations will be demonstrated. To receive a complete exhibitor's package, please contact the Conference Secretariat at the address indicated. ***************************************************************************** TEAR OFF HERE ***************************************************************************** INFORMATION FORM to be returned to: ICANN'95 1 avenue Newton bp 207 92 142 CLAMART Cedex France ICANN ' 95 Paris, October 9-13, 1995 Last name : .......................................................... First Name : ........................................................ Organization or company : ............................................ ...................................................................... ...................................................................... Postal code/Zip code : ............................................... City : ............................................................... Country : ............................................................ Tel : ................................................................ Fax : ................................................................ Electronic mail:...................................................... * I wish to attend the O Scientific conference O Industrial conference * I intend to exhibit * I intend to submit a paper Provisional title.................................................... Author (s) : ........................................................ Brief outline of the subject : ...................................... ..................................................................... Category : * Scientific conference O Theory O Algorithms & architectures O Implementations O Cognitive sciences & AI O Neurobiology O Applications ( please specify) * Industrial conference O Tools O Techniques O Applications ( please specify) ***************************************************************************** TEAR OFF HERE ***************************************************************************** STEERING COMMITTEE Chair F.Fogelman - Sligos (Paris, F) Scientific Program co-chairs G.Dreyfus - ESPCI (Paris, F) M.Weinfeld - Ecole Polytechnique (Palaiseau, F) Industrial Program chair P.Corsi - CEC (Brussels, B) Tutorials & Publications chair P.Gallinari - Universite P.& M.Curie (Paris, F) SCIENTIFIC PROGRAM COMMITTEE (Preliminary) I. Aleksander (UK); L.B. Almeida (P); S.I. Amari (J); E. Bienenstock (USA); C. Bishop (UK); L. Bottou (F); J. M. Buhmann (D); S. Canu (F); V. Cerny (SL); M. Cosnard (F); R. De Mori (CDN); R. Eckmiller (D); N. Franceschini (F); S. Gielen (NL); J. Herault (F); M. Jordan (USA); D. Kayser (F); T. Kohonen (SF); A. Lansner (S); Z. Li (USA); L. Ljung (S); C. von der Malsburg (D); S. Marcos (F); P.Morasso (I); J.P.Nadal(F); E. Oja (SF); P. Peretto (F); C. Peterson (S); L. Personnaz (F); R. Pfeiffer (CH); T. Poggio (USA); P. Puget (F); S. Raudys (LT); H. Ritter (D); M. Saerens ( B); W.von Seelen (D); J.J. Slotine (USA); S. Solla (DK); J.G. Taylor (GB); C. Torras (E); B. 
Victorri (F); A. Weigend (USA). INDUSTRIAL PROGRAM COMMITTEE (Preliminary) M. Boda (S); B. Braunschweig (F); C. Bishop (UK); J.P. Corriou (F); M. Duranton (F); A. Germond (CH); I. Guyon (USA); P. Refenes (UK); S. Thiria (F); C. Wellekens (B); B. Wiggins (UK). ***************************************************************************** From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From tesauro at watson.ibm.com Tue Jun 6 06:52:25 2006 From: tesauro at watson.ibm.com (tesauro@watson.ibm.com) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NIPS*94 registration and hotel deadlines Message-ID: The NIPS*94 conference program, with titles and authors of all talks and posters, is online and can be accessed via the NIPS homepage at: http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html The abstract booklet will also be appearing on the homepage soon. -- Dave Touretzky, NIPS*94 Program Chair ================================================================ This is a reminder that the deadline for early registration for NIPS*94 is this SATURDAY, OCTOBER 29. To obtain a copy of the registration brochure, send e-mail to nips94 at mines.colorado.edu. The brochure is also available on-line via the NIPS*94 Mosaic homepage (http://www.cs.cmu.edu:8001/afs/cs/project/cnbc/nips/NIPS.html), or by anonymous FTP: FTP site: mines.colorado.edu (138.67.1.3) FTP file: /pub/nips94/nips94-registration-brochure.ps The deadlines for hotel reservations in Denver and in Vail are also fast approaching. Information on hotel accommodations is given below. In Denver, the official hotel for NIPS*94 is the Denver Marriott City Center. The NIPS group rate is $74.00 per night single, $84.00 double (plus 11.8% tax). For reservations, call (800)228--9290 and say that you are attending NIPS*94. Cut-off date for reservations is Nov. 11. The Denver Marriott City Center may be contacted directly at (303)297--1300 for further information. The Marriott City Center is located in the heart of Denver and is easily accessible by taxi or local airport shuttle services. In Vail, the official hotel for NIPS*94 is the Marriott Vail Mountain Resort (formerly known as the Radisson Resort Vail, as listed in our previous publicity). The NIPS group rate is $80.00 per night, single or double (plus 8% tax). For reservations, phone (303)476--4444. Cut-off date for reservations is Nov. 1. --Gerry Tesauro NIPS*94 General Chair  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: enough details of proofs so that he/she can judge, on that basis alone, whether it is likely that the proofs in the paper are correct. In this situation it is helpful for the referee if he/she can also take into account how carefully that particular author tends to check his/her math. proofs (of course, if they don't know the author, they should give the benefit of the doubt). It is a fact of life that different authors tend to check their proofs with quite different amounts of care. Unfortunately it is NECESSARY for a referee to make a guess about the correctness of proofs in a theory paper. On the basis of the statements of the theorems and their consequences alone, incorrect theoretical papers often appear to be more exciting (and therefore more "acceptable") than correct ones.
Hence I am a bit afraid that a "blind reviewing" policy provides an incentive for submitting exciting but only half-finished theoretical papers, and that NIPS ends up publishing more incorrect theory-papers. I would like to add that in the well-known (often very selective) conferences in theoretical computer science the submissions are NOT anonymous, and this seems to work satisfactorily. There, the main precaution for avoiding biased decisions lies in a careful selection of referees and program committee members (trying to get researchers who are known for the quality of their own work; but still enforcing a substantial amount of rotation). The results of these policies are certainly not perfect, but quite good. Wolfgang Maass  From ted at SPENCER.CTAN.YALE.EDU Tue Jun 6 06:52:25 2006 From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: "Blind" reviews are impossible Message-ID: Even a nonexpert reviewer can figure out who wrote a paper simply by looking for citations of prior work. The only way to guarantee a "blind" review is to forbid authors from citing anything they've done before, or insist on silly euphemisms when citing such publications. --Ted  From LAWS at ai.sri.com Tue Jun 6 06:52:25 2006 From: LAWS at ai.sri.com (Ken Laws) Date: Tue 20 Dec 94 20:31:39-PST Subject: Open Review In-Reply-To: <199412192128.AA02352@morticia.cnns.unt.edu> Message-ID: <787984299.0.LAWS@AI.SRI.COM> > From David Tam: > I think a totally honest system has to be doubly-open ... I agree. But now you're talking about a "scientific revolution." If reviewers are not guaranteed anonymity, many -- most? -- of the better-qualified people will refuse to review. (Consider legal liability, for instance. Will professional societies indemnify reviewers against malpractice claims? And, liability aside, how many of the "top people" want to spend their time feuding with colleagues or answering challenges from offended authors?) Not that feuding isn't an acceptable alternative. Louis Pasteur feuded bitterly with opponents of his bacterial theory of anthrax and other diseases; eventually he won. Lister fought for antisepsis; Jenner (and Pasteur) fought for immunization. They won, just as Galileo won against the might of the Church. But if that's the system, either it has to be the whole system -- no one can escape just by boycotting one or several conferences -- or you have to pay a few good people to become knowledgeable critics. There's a long tradition of professional critics in art, drama, literature, politics, entertainment, travel services, and fine dining. There's only one reason that we have no such pundits in science -- we offer no financial support for such a career. Professional critics have their own critics, of course -- individually and as an institution -- but that's just part of the doubly-open review system. I think it's healthy and I'd love to see a scientific journalism illuminate our field. It won't happen on the initiative of those now in power, as they need the shadows to keep the current system going. (Questions about the quality of graduate education, necessity of the research being done, and exploitation of students and postdocs are best not asked. They won't lead to reform, but to funds being withdrawn from our field.) The revolution will happen, but through grass-roots self-publishing. Tenure committees are still committed to counting papers in prestigious forums, but that will change when the current journals and conferences collapse.
Online journals and "conferences" will take over -- or evolve from the existing channels -- but self-publication will become an increasingly important way of sharing results. And with that comes the need for amateur and professional reviewers. Unpaid reviews will predominate within each discipline, but paid reviewers, abstracters, journalists, and the like will follow the discussions and report significant findings to researchers in nearby fields -- and to funding agencies and the general public. Reports of these gatekeepers will in turn be reviewed, with some being acknowledged as more reliable than others. Eventually the tenure committees will start looking at the reviews rather than publication counts. Can this work in Computer Science? It already does, in much of the computer hardware world. We may not think of Infoworld or Computerworld as part of the academic press, but they do pick up important stories from time to time. EE Times broke the news of the Pentium bug, and often carries Colin Johnson's reports on neural-network hardware advances. Other articles cover ARPA funding for ANN initiatives, or other news of interest to professional researchers and developers. What distinguishes an industry from a scientific discipline? It is largely the presence of commercial journalism. An industry has at least a weekly trade magazine to keep everyone informed of what's happening, what resources are available, and where the jobs are. Of course there has to be money pumping around also, or the trade magazine couldn't flourish -- but online journalism may be able to operate much more cheaply. (Or may not. The role of advertising hasn't yet been established, and it is advertising that pays for most trade publications.) Before doubly-open review can take hold, with professional journalists, columnists, and the like to contribute and to referee, there's still a bit of pioneering to be done. I'm working on one approach, trying to build a professional association and publication ab initio, tailored to the online age. The association, Computists International, is a mutual-aid society for AI/IS/CS researchers. Our flagship publication is the weekly Computists' Communique, a cross between a newsletter and a news wire (with echoes of Reader's Digest and Johnny Carson). The Communique hasn't grown enough yet to have regular columnists or deep critical analysis of scientific controversies, but it comes closer than most other publications. The connectionists discussion stream is one of many from which I draw material on inference, pattern recognition, and related topics. Last year, I tried to offer connectionists free issues of the Communique -- one per month, in what I call my Full Moon subset. The announcement was refused by your moderator as not being entirely related to neural-network theory. Assuming that this message gets past the gatekeepers, I'd like to make the offer again. Contact me at the address below (or reply to this message, IF that won't send your message back to the connectionists list). Mention "connectionists," ask for the Full Moon subset, and have your full name somewhere in the message. I'll sign you up for one free issue per month, just to introduce my service and to keep in touch with you. It will give you a good look at the kind of "niche journalism" the net will currently support. I hope some of you will take up the torch, starting similar publications that are specific to your own interests.
The world would be better for having an online high-signal newsletter devoted to connectionism. -- Ken Laws Dr. Kenneth I. Laws; Computists International; laws at ai.sri.com. Ask about the free Full Moon subset of the Computists' Communique. -------  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: in education has been a major focus of our development efforts. The intention has been both to support the increased use of numerical simulations in neuroscience and to provide a basis for more sophisticated approaches to teaching neuroscience in general. With particular respect to the connectionist mailing list, we believe the GENESIS tutorials, in concert with the Book of GENESIS, provide a means to further the neuroscience education of our engineering and neural network colleagues. For those currently using GENESIS in education, or interested in doing so, we have recently set up a moderated email newsgroup that will enable users of GENESIS in teaching to share ideas, syllabi, exercises, computer lab handouts, and other materials that they may develop when teaching neuroscience and/or neural modeling. Those interested should contact: genesis-teach-request at smaug.bbb.caltech.edu. ************************************************************************ Additional information Additional information on the Book of GENESIS (including the table of contents) and the free GENESIS distribution is available over the net from Caltech by sending an email request to genesis at cns.caltech.edu, or by accessing the World Wide Web server, http://www.bbb.caltech.edu/GENESIS. The WWW server will also allow you to see "snapshots" of the GENESIS tutorials, take a look at the GENESIS programmer's manual, and find information about research which has been or is currently being conducted using GENESIS. "The Book of GENESIS" is published by TELOS, an "electronic publishing" affiliate of Springer-Verlag, and may be ordered from Springer by phone, mail, fax, email, or through the TELOS WWW page. Here is the relevant ordering information: The Book of GENESIS: Exploring Realistic Neural Models with the GEneral NEural SImulation System, by James M. Bower and David Beeman 1994/450 pages/Hardcover ISBN 0-387-94019-7 Send orders to: Springer-Verlag New York, Inc. PO Box 2485 Secaucus, NJ 07096-2485 Order Desk: 1-800-777-4643 FAX: 201-348-4505 email: info at telospub.com WWW: http://www.telospub.com/genesis.html ------------------------------------------- *************************************** James M. Bower Division of Biology Mail code: 216-76 Caltech Pasadena, CA 91125 (818) 395-6817 (818) 449-0679 FAX NCSA Mosaic laboratory address: http://www.bbb.caltech.edu/bowerlab NCSA Mosaic address for GENESIS: http://www.bbb.caltech.edu/GENESIS  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Further information can be obtained from either organizer: Dr. Henk J. Haarmann: haarmann at psy.cmu.edu tel:(412)-268-2402 Dr.
Marcel Adam Just: just at psy.cmu.edu tel:(412)-268-2791 -------------------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Return-Path From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: commercially available products relating to neural network tools and applications. In addition, advanced prototypes of tools and applications developed by public sector research organizations will be demonstrated. To receive a complete exhibitor's package, please contact the Conference Secretariat at the address indicated. ***************************************************************************** TEAR OFF HERE ***************************************************************************** INFORMATION FORM to be returned to: ICANN'95 1 avenue Newton bp 207 92 142 CLAMART Cedex France Fax: +33 - 1 - 41 28 45 84 ICANN ' 95 Paris, October 9-13, 1995 Last name : .......................................................... First Name : ........................................................ Organization or company : ............................................ ..................................................................... ..................................................................... Postal code/Zip code : ............................................... City : ............................................................... Country : ............................................................ Tel : .................................Fax : ......................... Electronic mail:...................................................... * I wish to attend the O Scientific conference O Industrial conference * I intend to exhibit * I intend to submit a paper Provisional title.................................................... Author (s) : ........................................................ Brief outline of the subject : ...................................... .................................................................... Category : * Scientific conference O Theory O Algorithms & architectures O Implementations O Cognitive sciences & AI O Neurobiology O Applications ( please specify) * Industrial conference O Tools O Techniques O Applications ( please specify) ***************************************************************************** TEAR OFF HERE ***************************************************************************** STEERING COMMITTEE Chairs F. Fogelman - Sligos (Paris, F) J.C. Rault - C3ST (Paris, F) Scientific Program co-chairs G. Dreyfus - ESPCI (Paris, F) M. Weinfeld - Ecole Polytechnique (Palaiseau, F) Industrial Program chair P. Corsi - CEC (Brussels, B) Tutorials & Publications chair P. Gallinari - Universite P.& M.Curie (Paris, F) SCIENTIFIC PROGRAM COMMITTEE I. Aleksander (UK); L.B. Almeida (P); S.I. Amari (J); M. Berthod (F); E. Bienenstock (USA); C.M. Bishop (UK); L. Bottou (F); J. M. Buhmann (D); S. Canu (F); V. Cerny (SL); M. Cosnard (F); R. De Mori (CAN); R. Eckmiller (D); N. Franceschini (F); S. Gielen (NL); J.P. Haton (F); J. Herault (F); M. Jordan (USA); D. Kayser (F); T. Kohonen (SF); V. Kurkova (CZ); A. Lansner (S); Z. Li (USA); L. Ljung (S); C. von der Malsburg (D); S. Marcos (F); P.Morasso (I); J.P.Nadal(F); E. Oja (SF); P. Peretto (F); C. Peterson (S); L. Personnaz (F); R. Pfeiffer (CH); T. 
Poggio (USA); P. Puget (F); S. Raudys (LT); H. Ritter (D); M. Saerens ( B); W. von Seelen (D); J.J. Slotine (USA); S. Solla (DK); J.G. Taylor (GB); C. Torras (E); B. Victorri (F); A. Weigend (USA). INDUSTRIAL PROGRAM COMMITTEE (Preliminary) V.Ancona (F); M. Boda (S); B. Braunschweig (F); C. Bishop (UK); J.P. Corriou (F); M. Dougherty (UK); M. Duranton (F); A. Germond (CH); I. Guyon (USA); G. Kuhn (D); H. Noel (F); P. Refenes (UK); S. Thiria (F); C. Wellekens (B); B. Wiggins (UK). **************************************************************************** PROGRAM PLENARY SPEAKERS J. Friedman (USA); M. Kawato (J); T. Kohonen (SF); L. Ljung (S); W. Singer (D) INVITED SPEAKERS C.M. Bishop (UK); H. Bourlard (B); B. Denby (I); I. Guyon (USA); G. Hinton (CAN); A. Konig (D); Y. Le Cun (USA); D. McKay (UK); C. von der Malsburg (D); E. Oja (SF); C. Peterson (S); T. Poggio (USA); S. Raudys (LT); J.G. Taylor (GB); C. Torras (E); V. Vapnik (USA). TUTORIALS C.M. Bishop (UK); L. Bottou (F); J. Friedman (USA); A. Gee (UK); J. Hertz (DK); L. Jackel (USA); L. Ljung (S); E. Oja (SF); L. Personnaz (F); T. Poggio (USA); I. Rivals (F); V. Vapnik (USA). INDUSTRIAL SESSIONS Banking, finance & insurance (P. Refenes); Defense (H. Noel); Document processing, OCR, text retrieval & indexing (I. Guyon); Forecasting & marketing (G. Kuhn); Medicine (J. Demongeot); NN Clubs & Funding Programs (C. Bishop); Oil industry (B. Braunschweig); Power industry (A. Germond); Process engineering, control and monitoring (J.P. Corriou); Robotics (W. von Seelen); Speech processing (C. Wellekens); Telecommunications (M. Boda); Teledetection (S. Thiria); Transportation M. Dougherty); VLSI & dedicated hardware (M. Duranton). ************************************************************************ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: take the airport bus to Stockholm City. It takes approximately 45 minutes. From there, take a taxi to the hotel or conference center. From Sergel Plaza and Strand, IVA is walking distance. If you use the subway you should get off at the Ostermalm station and use the Grev Turegatan exit. Hotels To reserve a hotel at the conference rate call between 8:00am and 4:00pm Stockholm (European) time: Annika Lindqvist e-mail: lme.lmehotel at memo.ericsson.se tel: +46 8 6813590 fax: +46 8 6813585 The following hotels have been recommended although others are available. Prices are in Swed- ish Kroners (Single/Double) including VAT: Hotel Sergel Plaza: Brunkebergstorg 9, tel +46 8 226600 (825/825) Located close to conference. Strand/SAS Hotel: Nybrokajen 9, tel +46 8 6787800 (925/925) Located close to conference. 
Hotel Attache: Cedergrenvagen 16, tel +46 8 181185 (585/585) Approximately 10 minutes by The Underground. Good Morning South: Vastertorpsvagen 131, tel +46 8 180140 (495/495) Approximately 15 minutes by The Underground. Hotel Malmen: Gotgatan 49-51, tel +46 8 226080 (655/655) Located south of city in nice area. Underground station in building. Sharing a Room Because of costs, some may want to share a room. To be part of the Room Share List, send e-mail to timxb at bellcore.com with your name, preferred way to be contacted, and preferences (male/female, non-smoker, etc.). Once you make arrangements, contact timxb again, so that you can be removed from the list. ----------------------------------------------------------------------------- --------------------Registration Form--------------------------------------- International Workshop on Applications of Neural Networks to Telecommunications (IWANNT*95) Stockholm, Sweden May 22-24, 1995 Name: Institution: Mailing Address: Telephone: Fax: E-mail: Make check ($400; $500 after May 1, 1995; $200 students) out to IWANNT*95. Please make sure your name is on the check. Registration includes breaks, a boat tour of the Stockholm archipelago, and proceedings available at the conference. Mail to: Betty Greer, IWANNT*95 Bellcore, MRE 2P-295 445 South Street Morristown, NJ 07960, USA Voice: (201) 829-4993 Fax: (201) 829-5888 Email: bg1 at faline.bellcore.com From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: approach for the analysis of the Linsker's unsupervised Hebbian learning network. The behavior of this model is determined by the underlying nonlinear dynamics that are parameterized by a set of parameters originating from the Hebbian rule and the arbor density of the synapses. These parameters determine the presence or absence of a specific receptive field (also referred to as a connection pattern) as a saturated fixed point attractor of the model. In this paper, we perform a qualitative analysis of the underlying nonlinear dynamics over the parameter space, determine the effects of the system parameters on the emergence of various receptive fields, and provide a rigorous criterion for the parameter regime in which the network will have the potential to develop a specially designated connection pattern. In particular, this approach analytically demonstrates, for the first time, the crucial role played by the synaptic arbor density. For example, our analytic predictions indicate that no structured connection pattern can emerge in a Linsker's network that is fully feedforward connected without localized synaptic arbor density. Our general theorems lead to a complete and precise picture of the parameter space that defines the relationships between the different sets of system parameters and the corresponding fixed point attractors, and yield a method to predict whether a given connection pattern will emerge under a given set of parameters without running a numerical simulation of the model. The theoretical results are corroborated by our examples (including center- surround and certain oriented receptive fields), and match key observations reported in Linsker's numerical simulation. 
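A minimal numerical sketch of how such saturated fixed points arise may help make the abstract concrete. It is purely illustrative: the parameter values k1, k2 and eta below, and the stand-in covariance matrix Q, are hypothetical and are not Linsker's; the point is only that a linear Hebbian flow combined with clipping drives most weights to the saturation limits.

import numpy as np

def limiter(w, lo=-1.0, hi=1.0):
    # Piecewise-linear "limiter" (clipped identity), bounding the weights.
    return np.clip(w, lo, hi)

# Toy caricature of a Linsker-style Hebbian development step:
#   w <- limiter(w + eta * (k1 + (Q + k2) @ w))
# Q stands in for the input covariance seen through the synaptic arbor;
# k1, k2 and eta are illustrative values only.
rng = np.random.default_rng(0)
n = 20
Q = np.cov(rng.normal(size=(n, 500)))      # stand-in covariance matrix
k1, k2, eta = 0.1, -0.05, 0.01
w = rng.uniform(-0.1, 0.1, size=n)
for _ in range(2000):
    w = limiter(w + eta * (k1 + (Q + k2) @ w))
print(np.round(w, 2))                      # most components saturate at +1 or -1

The saturated weight configurations reached in this toy run play the role of the fixed point attractors whose parameter regimes the report characterizes analytically.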
The rigorous approach presented here provides a unified treatment of many diverse problems about the dynamical mechanism of a class of models that use the limiter function (also referred to as the piecewise linear sigmoidal function) as the constraint limiting the size of the weight or the state variables, and applies not only to the Linsker's network but also to other learning or retrieval models of this class. ------------------------------------------------------------------------ Key Words: Unsupervised Hebbian learning, Network self-organization, Linsker's developmental model, Brain-State-in-a-Box model, Ontogenesis of primary visual system, Afferent receptive field, Synaptic arbor density, Correlations, Limiter function, Nonlinear dynamics, Qualitative analysis, Parameter space, Coexistence of attractors, Fixed point, Stability. ------------------------------------------------------------------------ Contents: {1} Introduction {1.1} Formulation Of The Linsker's Developmental Model {1.2} Qualitative Analysis Of Nonlinear System And Afferent Receptive Fields {1.3} Summary Of Our Approach {2} General Theorems About Fixed Points And Their Stability {3} The Criterion For The Division Of Parameter Regimes For The Occurrence Of Attractors {3.1} The Necessary And Sufficient Condition For The Emergence Of Afferent Receptive Fields {3.2} The General Principal Parameter Regimes {4} The Afferent Receptive Fields In The First Three Layers {4.1} Description Of The First Three Layers Of The Linsker's Network {4.2} Development Of Connections Between Layers A And B {4.3} Analytic Studies Of Synaptic Density Functions' Influences In The First Three Layers {4.4} Examples Of Structured Afferent Receptive Fields Between Layers B And C {5} Concluding Remarks {5.1} Synaptic Arbor Density Function {5.2} The Linsker's Network And The Brain-State-in-a-Box Model {5.3} Dynamics With Limiter Function {5.4} Intralayer Interaction And Biological Discussion References Appendix A: On the Continuous Version of the Linsker's Model Appendix B: Examples of Structured Afferent Receptive Fields between Layers B and C of the Linsker's Network ------------------------------------------------------------------------ FTP Instructions: unix> ftp archive.cis.ohio-state.edu login: anonymous password: (your e-mail address) ftp> cd pub/neuroprose ftp> binary ftp> get pan.purdue-tr-ee-95-12.ps.Z ftp> quit unix> uncompress pan.purdue-tr-ee-95-12.ps.Z unix> ghostview pan.purdue-tr-ee-95-12.ps (or however you view or print) *************** PLEASE DO NOT FORWARD TO OTHER BBOARDS ***************** From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From harmonme at aa.wpafb.af.mil Tue Jun 6 06:52:25 2006 From: harmonme at aa.wpafb.af.mil (HARMONME) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Paper on Residual Advantage Learning Message-ID: <2455330927061995/A03539/YODA> The following paper, submitted to NIPS-95, is now available via WWW at the following address: http://ace.aa.wpafb.af.mil/~aaat/harmon.html =============================================================================== Residual Advantage Learning Applied to a Differential Game Mance E. Harmon Wright Laboratory WL/AAAT Bldg. 635 2185 Avionics Circle Wright-Patterson Air Force Base, OH 45433-7301 harmonme at aa.wpafb.mil Leemon C. Baird III U.S.Air Force Academy 2354 Fairchild Dr. 
Suite 6K41, USAFA, CO 80840-6234 baird at cs.usafa.af.mil ABSTRACT An application of reinforcement learning to a differential game is presented. The reinforcement learning system uses a recently developed algorithm, the residual form of advantage learning. The game is a Markov decision process (MDP) with continuous states and nonlinear dynamics. The game consists of two players, a missile and a plane; the missile pursues the plane and the plane evades the missile. On each time step each player chooses one of two possible actions; turn left or turn right 90 degrees. Reinforcement is given only when the missile hits the plane or the plane reaches an escape distance from the missile. The advantage function is stored in a single-hidden-layer sigmoidal network. The reinforcement learning algorithm for optimal control is modified for differential games in order to find the minimax point, rather than the maximum. As far as we know, this is the first time that a reinforcement learning algorithm with guaranteed convergence for general function approximation systems has been demonstrated to work with a general neural network. =============================================================================== From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From eann96 at lpac.qmw.ac.uk Tue Jun 6 06:52:25 2006 From: eann96 at lpac.qmw.ac.uk (Engineering Apps in Neural Nets 96) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification, incremental learning,... The project includes theoretical work on classification algorithms, simulations and benchmarks, especially on realistic industrial data. Hardware implementation, especially the VLSI option, is the final objective. The set of databases available is to be used for tests and benchmarks of machine-learning classification algorithms. The databases are split into two parts: ARTIFICIALly generated databases, mainly used for preliminary tests, and REAL ones, used for objective benchmarks and comparisons of methods. The choice of the databases has been guided by various parameters, such as availability of published results concerning conventional classification algorithms, size of the database, number of attributes, number of classes, overlapping between classes and non-linearities of the borders,... Results of PCA and DFA preprocessing of the REAL databases are also included, together with several measures useful for characterizing the databases (statistics, fractal dimension, dispersion,...). All these databases and their preprocessing are available together with a postscript technical report describing in detail the different databases ('Databases.ps.Z' - 45 pages - 777781 bytes) and a report on the comparative benchmarking studies of various algorithms ('Benchmarks.ps.Z' - 113 pages - 1927571 bytes), algorithms either well known to the Statistical and Neural Network communities (MLP, RCE, LVQ, k_NN, GQC) or developed in the framework of the Elena project (IRVQ, PLS).
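For readers unfamiliar with one of the classifiers listed above, Kohonen's Learning Vector Quantizer follows a very simple prototype-update rule. The sketch below is plain LVQ1 on synthetic two-class data, not the Elena implementation; the number of prototypes, their initialisation and the learning rate are arbitrary illustrative choices.

import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.05, epochs=20):
    # Plain LVQ1: pull the nearest prototype toward a sample of the same
    # class, push it away otherwise.
    P = prototypes.copy()
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            j = np.argmin(((P - xi) ** 2).sum(axis=1))
            sign = 1.0 if proto_labels[j] == yi else -1.0
            P[j] += sign * lr * (xi - P[j])
    return P

def lvq1_predict(X, prototypes, proto_labels):
    d = ((X[:, None, :] - prototypes[None, :, :]) ** 2).sum(axis=-1)
    return proto_labels[np.argmin(d, axis=1)]

# Synthetic two-class data as a stand-in for an Elena-style dataset.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(3, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
idx = rng.choice(len(X), 6, replace=False)   # crude prototype initialisation
protos = lvq1_train(X, y, X[idx].copy(), y[idx])
print("training accuracy:", (lvq1_predict(X, protos, y[idx]) == y).mean())

Everything the benchmark report tunes per database (prototype count, initialisation, learning-rate schedule) sits outside this bare update rule.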
A LaTeX bibfile containing more than 90 entries corresponding to the Elena partners' bibliography related to the project is also available ('Elena.bib') in the same directory. All files are available by anonymous ftp from the following directory: ftp://ftp.dice.ucl.ac.be/pub/neural-nets/ELENA/databases The databases are split into two parts: the 'ARTIFICIAL' ones, generated in order to obtain some defined characteristics and for which the theoretical Bayes error can be computed, and the 'REAL' ones, collected from existing real-world applications. The ARTIFICIAL databases ('Gaussian', 'Clouds' and 'Concentric') were generated according to the following requirements: - heavy intersection of the class distributions, - high degree of nonlinearity of the class boundaries, - various dimensions of the vectors, - already published results on these databases. They are restricted to two-class problems, since we believe this restriction yields answers to the most essential questions. The ARTIFICIAL databases are mainly used for rapid test purposes on newly developed algorithms. The REAL databases ('Satimage', 'Texture', 'Iris' and 'Phoneme') were selected according to the following requirements: - classical databases in the field of classification (Iris), - already published results on these databases (Phoneme, from the ROARS ESPRIT project and 'Satimage' from the STATLOG ESPRIT project), - various dimensions of the vectors, - sufficient number of vectors (to avoid the ``empty space phenomenon''). - the 'Texture' database, generated at INPG for the Elena project, is interesting for its high number of classes (11). ############################################################################## ########### # DETAILS # ########### The 'Benchmarks' technical report ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The 'Benchmarks.ps' Elena report is related to the benchmarking studies of various classifiers. Most of the classifiers which were used for the benchmark comparative studies are well known to the neural network and machine learning community. These are the k-Nearest Neighbour (k_NN) classifier, selected for its powerful probability density estimation properties; the Gaussian Quadratic Classifier (GQC), the most classical simple parametric statistical classification method; the Learning Vector Quantizer (LVQ), a powerful non-linear iterative learning algorithm proposed by Kohonen; the Reduced Coulomb Energy (RCE) algorithm, an incremental Region Of Influence algorithm; the Inertia Rated Vector Quantizer (IRVQ) and the Piecewise Linear Separation (PLS) classifiers, developed in the framework of the Elena project. The main objectives of the 'Benchmarks.ps' Elena report are the following: - to provide an overall comprehensive view of the general problem of comparative benchmarking studies and to propose a useful common test basis for existing and further classification methods, - to obtain objective comparisons of the different chosen classifiers on the set of databases described in this report (each classifier being used with its optimal configuration for each particular database), - to study the possible links between the data structures of the databases, viewed through some parameters, and the behavior of the studied classifiers (mainly the evolution of their optimal configuration parameters), - to study the links between the preprocessing methods and the classification algorithms from the performance and hardware-constraints point of view (especially the computation times and memory requirements).
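To make such a comparison concrete, here is a minimal, unofficial sketch of reading one of these files and scoring a single classifier on it, once the file has been fetched and uncompressed as described under 'Databases format' below. The file name 'phoneme.dat' and the 50/50 hold-out split are only examples; the report itself uses carefully tuned protocols such as leave-one-out.

import numpy as np

def load_elena(path):
    # One ELENA-style ASCII file: d float attributes per line followed by an
    # integer class label (the format detailed in the next section).
    data = np.loadtxt(path)
    return data[:, :-1], data[:, -1].astype(int)

def knn_error(Xtr, ytr, Xte, yte, k=5):
    # Hold-out error of a plain k-nearest-neighbour classifier.
    wrong = 0
    for xi, yi in zip(Xte, yte):
        d = ((Xtr - xi) ** 2).sum(axis=1)
        pred = np.bincount(ytr[np.argsort(d)[:k]]).argmax()
        wrong += int(pred != yi)
    return wrong / len(yte)

# 'phoneme.dat' is the file obtained by fetching and uncompressing
# phoneme.dat.Z over ftp as shown below; the 50/50 split is illustrative only.
X, y = load_elena("phoneme.dat")
rng = np.random.default_rng(0)
perm = rng.permutation(len(X))
half = len(X) // 2
print("k-NN hold-out error:",
      knn_error(X[perm[:half]], y[perm[:half]], X[perm[half:]], y[perm[half:]]))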
Databases format ~~~~~~~~~~~~~~~~ All the databases available are in the following format (after decompression): - All files containing the databases are stored as ASCII files for easy editing and checking. - In a file, each of the n lines holds one vectorial sample (instance), and each line consists of d floating-point numbers (the attributes) followed by the class label (which must be an integer). Example: 1.51768 12.65 3.56 1.30 73.08 0.61 8.69 0.00 0.14 1 1.51747 12.84 3.50 1.14 73.27 0.56 8.55 0.00 0.00 0 1.51775 12.85 3.48 1.23 72.97 0.61 8.56 0.09 0.22 1 1.51753 12.57 3.47 1.38 73.39 0.60 8.55 0.00 0.06 1 1.51783 12.69 3.54 1.34 72.95 0.57 8.75 0.00 0.00 3 1.51567 13.29 3.45 1.21 72.74 0.56 8.57 0.00 0.00 1 There are NO missing values. To retrieve a database you MUST use ftp binary mode; if you are not in this mode, simply type 'binary' at the ftp prompt. EXAMPLE: to get the "phoneme" database: cd REAL cd phoneme binary get phoneme.txt get phoneme.dat.Z get ... cd ... ... quit After your ftp session, you simply have to type 'uncompress phoneme.dat.Z' to get the uncompressed datafile. Contents of the 'ARTIFICIAL' directory ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The databases of this directory contain only the 'ARTIFICIAL' classification problems. The present 'ARTIFICIAL' databases are only two-class problems, since this yields answers to the most essential questions. For each problem, the confusion matrix corresponding to the theoretical Bayes boundary is provided together with the confusion matrix obtained by a k_NN classifier (k chosen to reach the minimum of the total Leave-One-Out error). These databases were selected for preliminary tests and to study the behavior of the implemented algorithms on some particular problems: - Overlapping classes: The classifier should have the ability to form a decision boundary that minimizes the amount of misclassification for all of the overlapping classes. - Nonlinear separability: The classifier should be able to build decision regions that separate classes of any shape and size. There is one subdirectory for each database. In this subdirectory, there is: - A text file providing detailed information about the related database ('databasename.txt'). - The compressed database ('databasename.dat.Z'). The different patterns of each database are presented in a random order. - For bidimensional databases, a postscript file representing the 2-D datasets (those files are in eps format). For each subdirectory, the directory name is the same as the name chosen for the concerned database. Here are the directory names with a brief description. - 'clouds' Bidimensional distributions: class 0 is the sum of three different normal distributions while class 1 is another normal distribution, overlapping class 0. 5000 patterns, 2500 in each class. This allows the study of the classifier behavior for heavy intersection of the class distributions and for a high degree of nonlinearity of the class boundaries. - 'gaussian' A set of seven databases corresponding to the same problem, but with dimensionality ranging from 2 to 8. This allows the study of the classifier behavior for different dimensionalities of the input vectors, for heavily overlapping distributions and for non-linear separability. These databases were already studied by Kohonen in: Kohonen, T. and Barna, G. and Chrisley, R., "Statistical Pattern Recognition with Neural Networks: Benchmarking Studies", IEEE Int. Conf. on Neural Networks, SOS Printing, San Diego, 1988.
In this paper, the performance of three basic types of neural-like networks (backpropagation network, Boltzmann machine and Learning Vector Quantization) is evaluated and compared to the theoretical limit. - 'concentric' Bidimensional uniform concentric circular distributions. 2500 instances, 1579 in class 1, 921 in class 0. This database may be used to study the linear separability of the classifier when some classes are nested in others without overlapping. Contents of the 'REAL' directory ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ The databases of this directory contain only the real classification problem sets selected for the Elena benchmarking studies. There is one subdirectory for each database. In this subdirectory, there are: - a text file giving detailed information about the related database (`databasename.txt'), - the compressed original database in the Elena format (`databasename.dat.Z'); the different patterns of each database being presented in a random order. - Through a normalization process, each original feature is given the same importance in a subsequent classification process. A typical method is first to center each feature separately and then to reduce it to unit variance; this process has been applied to all the REAL Elena databases in order to build the ``CR'' databases contained in the ``databasename_CR.dat.Z'' files. Principal Components Analysis (PCA) is a very classical method in pattern recognition [Duda73]. PCA reduces the sample dimension in a linear way, giving the best representation in lower dimensions by keeping the maximum of inertia. The best axis for representation is, however, not necessarily the best axis for discrimination. After PCA, features are selected according to the percentage of initial inertia which is covered by the different axes, and the number of features is determined according to the percentage of initial inertia to keep for the classification process. This selection method has been applied to every REAL database after centering and reduction (thus on the databasename_CR.dat files). When quasi-linear correlations exist between some initial features, these redundant dimensions are removed by PCA and this preprocessing is then recommended. In this case, before a PCA, the determinant of the data covariance matrix is near zero; such a database is thus badly conditioned for all processes which use this information (the quadratic classifier for example). The following files, related to PCA, are also available for the REAL databases: - ``databasename_PCA.dat.Z'', the projection of the ``CR'' database on its principal components (sorted in decreasing order of the related inertia percentage), - ``databasename_corr_circle.ps.Z'', a graphical representation of the correlation between the initial attributes and the first two principal components, - ``databasename_proj_PCA.ps.Z'', a graphical representation of the projection of the initial database on the first two principal components, - ``databasename_EV.dat'', a file with the eigenvalues and associated inertia percentages. Discriminant Factorial Analysis (DFA) can be applied to a learning database where each learning sample belongs to a particular class [Duda73]. The number of discriminant features selected by DFA is fixed as a function of the number of classes (c) and of the number of input dimensions (d); this number is equal to the minimum of d and c-1.
In the usual case where d is greater than c, the output dimension is fixed equal to the number of classes minus one, and the discriminant axes are selected in order to maximize the between-class variance and to minimize the within-class variance. The discrimination power (ratio of the projected between-variance over the projected within-variance) is not the same for each discriminant axis: this ratio decreases from one axis to the next. So for a problem with many classes, this preprocessing will not always be efficient, as the last output features will not be very discriminant. This analysis uses the inverse of the global covariance matrix, so the covariance matrix must be well conditioned (for example, a preliminary PCA must be applied to remove the linearly correlated dimensions). The DFA preprocessing method has been applied to the first 18 principal components of the 'satimage_PCA' and 'texture_PCA' databases (thus keeping only the first 18 attributes of these databases before applying the DFA preprocessing) in order to build the 'satimage_DFA.dat.Z' and 'texture_DFA.dat.Z' database files, having respectively 5 and 10 dimensions (the 'satimage' database having 6 classes and 'texture' 11). For each subdirectory, the directory name is the same as the name chosen for the contained database. Here are the directory names with a brief numerical description of the available databases. - phoneme French and Spanish phoneme recognition problem. The aim is to distinguish between nasal (AN, IN, ON) and oral (A, I, O, E, E') vowels. 5404 patterns, 5 attributes (the normalized amplitudes of the first five harmonics), 2 classes. This database was in use in the European ESPRIT 5516 project ROARS. The aim of this project is the development and implementation of a real-time analytical system for French and Spanish phoneme recognition. - texture The aim is to distinguish between 11 different textures (Grass lawn, Pressed calf leather, Handmade paper, Raffia looped to a high pile, Cotton canvas, ...), each pattern (pixel) being characterised by 40 attributes built by the estimation of fourth-order modified moments in four orientations: 0, 45, 90 and 135 degrees. 5500 patterns, 11 classes of 500 instances (each class refers to a type of texture in the Brodatz album). The original source of this database is: P. Brodatz, "Textures: A Photographic Album for Artists and Designers", Dover Publications, Inc., New York, 1966. This database was generated by the Laboratory of Image Processing and Pattern Recognition (INPG-LTIRF Grenoble, France) in the development of the Esprit project ELENA No. 6891 and the Esprit working group ATHOS No. 6620. - satimage (*) Classification of the multi-spectral values of an image from the Landsat satellite. Each line contains the pixel values in four spectral bands of each of the 9 pixels in a 3x3 neighbourhood and a number indicating the classification label of the central pixel (corresponding to the type of soil: red soil, cotton crop, grey soil, ...). The aim is to predict this classification, given the multi-spectral values. 6435 instances, 36 attributes (4 spectral bands x 9 pixels in neighbourhood), 6 classes. This database was in use in the European StatLog project, which involves comparing the performances of machine learning, statistical, and neural network algorithms on data sets from real-world industrial areas including medicine, finance, image analysis, and engineering design: D. Michie, D.J. Spiegelhalter, and C.C. Taylor, editors.
Machine learning, Neural and Statistical Classification. Ellis Horwood Series In Artificial Intelligence, England, 1994. - iris (*) This is perhaps the best known database to be found in the pattern recognition literature. Fisher's paper is a classic in the field and is referenced frequently to this day. (See Duda & Hart, for example.) The data set contains 3 classes of 50 instances each, where each class refers to a type of iris plant. One class is linearly separable from the other 2; the latter are NOT linearly separable from each other. 4 attributes (sepal length, sepal width, petal length and petal width). (*) These databases are taken from the ftp anonymous "UCI Repository Of Machine Learning Databases and Domain Theories" (ics.uci.edu: pub/machine-learning-databases): Murphy, P. M. and Aha, D. W. (1992). "UCI Repository of machine learning databases" [Machine-readable data repository]. Irvine, CA: University of California, Department of Information and Computer Science. [Duda73] Duda, R.O. and Hart, P.E., Pattern Classification and Scene Analysis, John Wiley & Sons, 1973. ############################################################################## The ELENA PROJECT ~~~~~~~~~~~~~~~~~ Neural networks are now known as powerful methods for empirical data analysis, especially for approximation (identification, control, prediction) and classification problems. The ELENA project investigates several aspects of classification by neural networks, including links between neural networks and Bayesian statistical classification, incremental learning (control of the network size by adding or removing neurons),... URL: http://www.dice.ucl.ac.be/neural-nets/ELENA/ELENA.html ELENA is an ESPRIT III Basic Research Action project (No. 6891). It involves: INPG (Grenoble, F), UPC (Barcelona, E), EPFL (Lausanne, CH), UCL (Louvain-la-Neuve, B), Thomson-Sintra ASM (Sophia Antipolis, F) EERIE (Nimes, F). The coordinator of the project can be contacted at: Prof. Christian Jutten, INPG-LTIRF, 46 av. Flix Viallet, F-38031 Grenoble Cedex, France Phone: +33 76 57 45 48, Fax: +33 76 57 47 90, e-mail: chris at tirf.inpg.fr A simulation environment (PACKLIB) has been developed in the project; it is a smart graphical tool allowing fast programming and interactive analysis. The PACKLIB environment greatly simplifies the user's task by requiring only to write the basic code of the algorithms, while the whole graphical input, output and relationship framework is handled by the environment itself. PACKLIB is used for extensive benchmarks in the ELENA project and in other situations (image processing, control of mobile robots,...). Currently, PACKLIB is tested by beta users and a demo version available in the public domain. URL: http://www.dice.ucl.ac.be/neural-nets/ELENA/Packlib.html ############################################################################## IF YOU HAVE ANY PROBLEM, QUESTION OR PROPOSITION, PLEASE E_MAIL the following. VOZ Jean-Luc or Michel Verleysen Universite Catholique de Louvain DICE - Lab. 
de Microelectronique 3, place du Levant B-1348 LOUVAIN-LA-NEUVE E_mail : voz at dice.ucl.ac.be verleysen at dice.ucl.ac.be From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From winther at connect.nbi.dk Tue Jun 6 06:52:25 2006 From: winther at connect.nbi.dk (Ole Winther) Date: October 6, 1995 Subject: Paper available: "A mean field approach to Bayes Learning in feed-forward neural networks" Message-ID: FTP-host: connect.nbi.dk FTP-file: neuroprose/opper.bayes.ps.Z WWW-host: http://connect.nbi.dk ---------------------------------------------- The following paper is now available: A mean field approach to Bayes Learning in feed-forward neural networks [12 pages] Manfred Opper Theoretical Physics, University of Wurzburg, Germany Ole Winther CONNECT, The Niels Bohr Institute, University of Copenhagen, Denmark Abstract: We propose an algorithm to realise Bayes optimal predictions for feed-forward neural networks which is based on the TAP mean field method developed for the statistical mechanics of disordered systems. We conjecture that our approach will be exact in the thermodynamic limit. The algorithm results in a simple built-in leave-one-out crossvalidation of the predictions. Simulations for the case of the simple perceptron and the committee machine are in excellent agreement with the results of replica theory. Please do not reply directly to this message. ----------------------------------------------- FTP-instructions: unix> ftp connect.nbi.dk (or 130.225.212.30) ftp> Name: anonymous ftp> Password: your e-mail address ftp> cd neuroprose ftp> binary ftp> get opper.bayes.ps.Z ftp> quit unix> uncompress opper.bayes.ps.Z ----------------------------------------------- Ole Winther, Computational Neural Network Center (CONNECT) Niels Bohr Institute Blegdamsvej 17 2100 Copenhagen Denmark Telephone: +45-3532-5200 Direct: +45-3532-5311 Fax: +45-3142-1016 e-mail: winther at connect.nbi.dk  From austin at minster.cs.york.ac.uk Tue Jun 6 06:52:25 2006 From: austin at minster.cs.york.ac.uk (austin@minster.cs.york.ac.uk) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: characterize the needed relationship between the set of generalizers and the prior that allows cross-validation to work. David Wolpert From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Project lies in the use of patient-specific measurements of an epidemiological nature (such as maternal age, past obstetrical history, etc.) as well as fetal heart rate recordings, in the forecasting of a number of specific Adverse Pregnancy Outcomes. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: pattern-processing and classification systems which can be trained to forecast problems in pregnancy. 
This will involve continuation of work on pattern-classification and regression analysis, using neural networks operating on a very large database of about 1.2 million pregnancies from various European countries. Challenging components of the project include dealing with missing and uncertain variables, sensitivity analysis, variable selection procedures and cluster analysis. Many leading European obstetrical centres are involved in the Euro-PUNCH project, and close collaboration with a number of these will be an essential component of the post offered. Candidates for this post are expected to have a good first degree and preferably a post-graduate degree in a relevant discipline. Some familiarity with medical statistics and neural networks is desirable but not essential. Salary (on the RA scale) will depend on age and experience, and is likely to be in the range of #14,317 to #15,986 per annum. Appointment would be subject to satisfactory health screening. Applications will close on Friday 8th December 1995. Applications (naming two referees) should be submitted to: Dr Kevin J Dalton PhD FRCOG Division of Materno-Fetal Medicine, Dept. Obstetrics & Gynaecology University of Cambridge, Addenbrooke's Hospital Cambridge CB2 2QQ Tel: +44-1223-410250 Fax: +44-1223-336873 or 215327 e-mail: kjd5 at cus.cam.ac.uk Informal enquiries about the project should be directed to: (Obstetric side) Dr Kevin Dalton kjd5 at cus.cam.ac.uk (Engineering Side) Dr Niranjan niranjan at eng.cam.ac.uk (Engineering Side) Dr Richard Prager rwp at eng.cam.ac.uk -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Saturday, December 2, 1995, 7:30AM-9:30AM and Subject: No subject Message-ID: Topic and purpose of the workshop ================================= Proper benchmarking of neural networks on non-toy examples is needed from an application perspective in order to evaluate the relative strengths and weaknesses of proposed algorithms, and from a theoretical perspective in order to validate theoretical predictions and see how they relate to realistic learning tasks. Despite this important role, NN benchmarking is rarely done well enough today: o Learning tasks: Most researchers use only toy problems and, perhaps, one at least somewhat realistic problem. While this shows that an algorithm works at all, it cannot explore its strengths and weaknesses. o Design: Often the setup is designed wrongly and cannot produce valid results from a statistical point of view. o Reproducibility: In many cases, the setup is not described exactly enough to reproduce the experiments. This violates scientific principles. o Comparability: Hardly ever are two setups of different researchers so similar that one could directly compare the experimental results. This has the effect that even after a large number of experiments with certain algorithms, their differences in learning results may remain unclear. There are various reasons why we still find this situation: o unawareness of the importance of proper benchmarking; o insufficient pressure from reviewers towards good benchmarking; o unavailability of a sufficient number of standard benchmarking datasets; o lack of standard benchmarking procedures. The purpose of the workshop is to address these issues in order to improve research practices, in particular more benchmarking with more and better datasets, better reproducibility, and better comparability.
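The "Design" point above can be illustrated in miniature. The following sketch is not part of the workshop material: it simply evaluates two learners on identical random train/test splits and reports the paired difference in test error together with its standard error, so that an observed difference can be judged against split-to-split noise. The learners (1-NN versus 7-NN) and the synthetic data are stand-ins.

import numpy as np

def nn_error(k):
    # Returns a test-error function for a plain k-nearest-neighbour rule.
    def err(Xtr, ytr, Xte, yte):
        wrong = 0
        for xi, yi in zip(Xte, yte):
            d = ((Xtr - xi) ** 2).sum(axis=1)
            wrong += int(np.bincount(ytr[np.argsort(d)[:k]]).argmax() != yi)
        return wrong / len(yte)
    return err

def paired_comparison(err_a, err_b, X, y, n_splits=10, test_frac=0.3, seed=0):
    # Evaluate both learners on identical random splits; report the mean
    # paired difference in test error and its standard error.
    rng = np.random.default_rng(seed)
    diffs = []
    for _ in range(n_splits):
        perm = rng.permutation(len(X))
        cut = int(len(X) * (1 - test_frac))
        tr, te = perm[:cut], perm[cut:]
        diffs.append(err_a(X[tr], y[tr], X[te], y[te]) -
                     err_b(X[tr], y[tr], X[te], y[te]))
    diffs = np.array(diffs)
    return diffs.mean(), diffs.std(ddof=1) / np.sqrt(n_splits)

# Synthetic stand-in data; a real benchmark would use a published dataset.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(1, 1, (200, 5))])
y = np.array([0] * 200 + [1] * 200)
mean_diff, se = paired_comparison(nn_error(1), nn_error(7), X, y)
print("error(1-NN) - error(7-NN) = %.3f +/- %.3f" % (mean_diff, se))

Reporting only a single split, or splits that differ between the two algorithms, is exactly the kind of setup that cannot support a valid comparison.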
Specific questions to be addressed on the workshop are [Concerning the data:] o What benchmarking facilities (in particular: datasets) are publicly available? For which kinds of domains? How suitable are they? o What facilities would we like to have? Who is willing to prepare and maintain them? o Where and how can we get new datasets from real applications? [Concerning the methodology:] o When and why would we prefer artificial datasets over real ones and vice versa? o What data representation is acceptable for general benchmarks? o What are the most common errors in performing benchmarks? How can we avoid them? o Real-life benchmarking warstories and lessons learned o What must be reported for proper reproducibility? o What are useful general benchmark approaches (broad vs. deep etc.)? o Can we agree on a small number of standard benchmark setup styles in order to improve comparability? Which styles? The workshop will focus on two things: Launching a new benchmark database that is currently being prepared by some of the workshop chairs and discussing the above questions in general and in the context of this database. The benchmark database facility is planned to comprise o datasets, o data format conversion tools, o terminological and methodological suggestions, and o a results database. Workshop format =============== We invite anyone who is interested in the above issues to participate in the discussions at the workshop. The workshop will consist of a few talks by invited speakers and extensive discussion periods. The purpose of the discussion is to refine the design and setup of the benchmark collection, to explore questions about its scope, format, and purpose, to motivate potential users and contributors of the facility, and to discuss benchmarking in general. Workshop program ================ The following talks will be given at the workshop [The list is still preliminary]. After each talk there will be time for discussion. In the morning session we will focus on assessing the state of the practice of benchmarking and discussing an abstract ideal of it. In the afternoon session we will try to become concrete how that ideal might be realized. o Lutz Prechelt. A quantitative study of current benchmarking practices. A quantitative survey of 400 journal articles on NN algorithms. (15 minutes) o Tom Dietterich. Experimental Methodology. Benchmarking goals, measures of behavior, correct statistical testing, synthetic versus real-world data. (15 minutes) o Brian Ripley. What can we learn from the study of the design of experiments? (15 minutes) o Lutz Prechelt. Available NN benchmarking data collections. CMU nnbench, UCI machine learning databases archive, Proben1, Statlog data, ELENA data (10 minutes). o Tom Dietterich. Available benchmarking data generators. (10 minutes) o Break. o Carl Rasmussen and Geoffrey Hinton. A thoroughly designed benchmark collection. A proposal of data, terminology, and procedures and a facility for the collection of benchmarking results. (45 minutes) o Panel discussion. The future of benchmarking: purpose and procedures The WWW adress for this announcement is http://wwwipd.ira.uka.de/~prechelt/NIPS_bench.html Lutz Dr. Lutz Prechelt (http://wwwipd.ira.uka.de/~prechelt/) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; D-76128 Karlsruhe; Germany | they get (Phone: +49/721/608-4068, FAX: +49/721/694092) | less simple. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Friday, Dec 1 ============= AM 7:30-7:35 Welcome 7:35-8:05 Tom Mitchell (invited talk) - "Situated Learning" 8:05-8:25 Lorien Pratt - "Neural Transfer For Hazardous Waste" 8:25-8:45 Nathan Intrator - "Learning Internal Reps From Multiple Tasks" 8:45-9:05 Rich Caruana - "Where is Multitask Learning Useful?" 9:05-9:30 Panel Debate and Discussion - Topics include: serial vs. parallel transfer, what should be transfered? what domains are ripe for transfer? what are the goals of transfer? (Baxter, Caruana, Intrator, Mitchell, Silver, Pratt, ...) 9:30-4:30 Extracurricular Recreation PM 4:30-5:00 Jude Shavlik (invited talk) - "Talking to Your Neural Net" 5:00-5:20 Leo Breiman (invited talk) - "Curds & Whey" 5:20-5:40 Jonathan Baxter - "Bayesian Model of Learning to Learn" 5:40-6:00 Sebastian Thrun - "Identifying Relevant Tasks" 6:00-6:30 Panel Debate and Discussion - Topics include: transfer human to machine vs. machine to machine, is practice meeting theory? is theory meeting practice? (Baxter, Caruana, Breiman, Thrun, Mitchell, Shavlik, ...) Saturday, Dec 2 =============== AM 7:30-8:00 Noel Sharkey (invited talk) - "Adaptive Generalisation" 8:00-8:20 Anthony Robbins - "Rehearsal and Catastrophic Interference" 8:20-8:40 J. Schmidhuber - "A Theoretical Model of Learning to Learn" 8:40-9:00 Bairaktaris/Levy - "Dual-weight ANNs: Short/Long Term Learning" 9:00-9:30 Panel Debate and Discussion - Topics include: catastrophic interference, is there evidence for transfer in cognition? what can nature/cogsci tell us about transfer? (Bairaktaris, de Sa, Levy, Robbins, Sharkey, Silver, ...) 9:30-4:30 More Extracurricular Recreation PM 4:30-5:00 Tomaso Poggio (invited talk) - "Virtual Examples" 5:00-5:20 Virginia de Sa - "On Segregating Input Dimensions" 5:20-5:40 Chris Thornton - "Learning to be Brave: A Constructive Approach" 5:40-6:00 Mark Ring - "Continual Learning" 6:00-6:25 Panel Debate and Discussion - Topics include: combining supervised and unsupervised learning, where do we go from here? *this space intentionally left flexible* (de Sa, Mitchell, Poggio, Ring, Thornton, ...) 6:25-6:30 Farewell Full titles and abstracts are available on the workshop web page. 20 minute talks are 12 minutes presentation and 8 minutes questions and discussion. 30 minute invited talks are 20 minutes presentation and 10 minutes questions and discussion. There are four 30-minute panels, one for each session. Although topics are listed for each panel, these are intended merely as points of departure. Everyone attending the workshop should feel free to raise any issues during the panels that seem appropriate. We encourage speakers and members of the audience to prepare a terse list (preferably using inflammatory language) of your favorite transfer issues and questions. There are 16 talks, but this is not a conference! If speakers don't abuse their question/discussion time too much, more than 50% of the workshop will be spent on questions and discussion. To promote this, talks will use few slides and will focus on a few key issues. It's a workshop. Come preapred to speak up, be controversial, and have fun. Look forward to seeing you at Vail. -Danny, Jon, Lori, Rich, Sebastian, and Tom. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NFL and practice Message-ID: Joerg Lemm wrote > One may discuss NFL for theoretical reasons, but > the conditions under which NFL-Theorems hold > are not those which are normally met in practice. Exactly the opposite. The theory behind NFL is trivial (in some sense). The power of NFL is that it deals directly with what is rountinely practiced in the neural network community today. > 1.) In short, NFL assumes that data, i.e. information of the form y_i=f(x_i), > do not contain information about function values on a non-overlapping test set. > This is done by postulating "unrestricted uniform" priors, > or uniform hyperpriors over nonumiform priors... (with respect to Craig's ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ > two cases this average would include a third case: target and model are > anticorrelated so anticrossvalidation works better) and "vertical" likelihoods. > So, in a NFL setting data never say something about function values > for new arguments. > This seems rather trivial under this assumption and one has to ask > how natural is such a NFL situation. This is indeed a very trivial and unnatural assumption, which has been criticised by generations of statisticians over several centuries. However, it is exactly what is practiced by a majority of NN researchers. Consider the claim: "This is an algorithm which will perform well as long as there is some nonuniform prior". If such a claim could ever be true, then the algorithm would also be good for a uniform hyperprior over nonuniform priors. But this is in direct contradiction to NFL. According to NFL, you have to say:"This is an algorithm which will perform well on this particular nonuniform prior, (hence it will perform badly on that particular nonuniform prior)". Similarly, with the Law of Energy Conservation, if you say "I've designed a machine to generate electricity", then you automatically imply that you have designed a machine to consume some other forms of energy. You can't make every term positive in your balance sheet, if the grand total is bound to be zero. > Joerg continued with examples of various priors of practical concern, including smoothness, symmetry, positive correlation, iid samples, etc. These are indeed very important priors which match the real world, and they are the implicit assumptions behind most algorithms. What NFL tells us is: If your algorithm is designed for such a prior, then say so explicitly so that a user can decide whether to use it. You can't expect it to be also good for any other prior which you have not considered. In fact, in a sense, you should expect it to perform worse than a purely random algorithm on those other priors. > To conclude: > > In many interesting cases "effective" function values contain information > about other function values and NFL does not hold! This is like saying "In many interesting cases we do have energy sources, and we can make a machine running forever, so the natural laws against `perpetual motion machines' do not hold." These general principles might not be quite obviously interesting to a user, but they are of fundamental importance to a researcher. 
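The central NFL claim in this exchange, that averaged over all target functions no learner beats any other off the training set, can be checked directly on a toy domain. A minimal sketch (toy learners and a four-point input space, chosen only for illustration): it enumerates all 16 Boolean targets, trains on three of the four points, and shows that a majority-vote learner and its deliberately perverse "anti" counterpart both score exactly 0.5 on the unseen point.

import itertools

train_x, test_x = [0, 1, 2], 3          # four-point input space, one unseen point

def majority(train_pairs):
    # Predict the majority training label (ties -> 0).
    labels = [y for _, y in train_pairs]
    return 1 if sum(labels) * 2 > len(labels) else 0

def anti_majority(train_pairs):
    # The deliberately perverse learner: predict the opposite.
    return 1 - majority(train_pairs)

for learner in (majority, anti_majority):
    hits = 0
    targets = list(itertools.product([0, 1], repeat=4))   # all 16 Boolean targets
    for f in targets:
        train_pairs = [(x, f[x]) for x in train_x]
        hits += int(learner(train_pairs) == f[test_x])
    print(learner.__name__, hits / len(targets))          # both print 0.5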
They are in fact also of fundamental importance to a user, as he must assume the responsibility of supplying the energy source, or specifying the prior. -- Huaiyu Zhu, PhD email: H.Zhu at aston.ac.uk Neural Computing Research Group http://neural-server.aston.ac.uk/People/zhuh Dept of Computer Science ftp://cs.aston.ac.uk/neural/zhuh and Applied Mathematics tel: +44 121 359 3611 x 5427 Aston University, fax: +44 121 333 6215 Birmingham B4 7ET, UK ----- End Included Message ----- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: where the packages are mirrored. The original location in cochlea.hut.fi is back in effect as soon as the machine is stable enough. Yours, Jari Kangas http://nucleus.hut.fi/~jari/ ------------------------------------------------------------------ ************************************************************************ * * * SOM_PAK * * * * The * * * * Self-Organizing Map * * * * Program Package * * * * Version 3.1 (April 7, 1995) * * * * Prepared by the * * SOM Programming Team of the * * Helsinki University of Technology * * Laboratory of Computer and Information Science * * Rakentajanaukio 2 C, SF-02150 Espoo * * FINLAND * * * * Copyright (c) 1992-1995 * * * ************************************************************************ Updated public-domain programs for Self-Organizing Map (SOM) algorithms are available via anonymous FTP on the Internet. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: 9:30- 10:00 Heinz Muehlenbein (GMD, Bonn) Inspiration from Nature vs. Copying nature. Lessons Learned from Genetic Algorithms 10:00- 10:50 Stephen Grossberg (Boston U) Are there Universal Principles of Brain Computation? 10:50-11:30 Discussion and Coffee 11:30-12:00 Anil Nerode (Cornell, Ithaca) Hybrid Systems as a Modelling Substrate for Biological and Cognitive Systems 12:00- 12:30 Daniel Mange (EPFL, Lausanne) Von Neumann Revisited: a Turing Machine with Self-Repair and Self-Reproduction Properties 12:30- 1:30 Lunch Robotics and Autonomous Systems 1:30- 1:50 Lynne Parker (ORNL, Oak Ridge) From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: 1:50-2:10 Akira Ito (Kansai Res. Ctr., Kobe) How Selfish Agents Learn to Cooperate 2:10-2:30 Bengt Carlsson (Karlskrona U, Sweden) The War of Attrition Strategy in Multi-Agent Systems 2:30-2:50 Claudio Cesar de Sa (IMA, Brasil) Architecture for a Mobile Agent 2:50-3:10 A.N. Stafylopatis (NTU, Athens) Autonomous Vehicle Navigation Using Evolutionary Reinforcement Learning 3:10-3:30 Jun Tani (Sony, Tokyo) Cognition from Dynamical Systems Perspective: Robot Navigation Learning 3:00-3:30 Discussion and Coffee Mathematical Models 3:30-3:50 Erol Gelenbe (Duke, NC) Genetic Algorithms which Learn 3:50-4:10 Petr Lansky (CTS, Prague University), Jean-Pierre Rospars (INRA) Stochastic Models of the Olfactory System 4:10-4:30 Vladimir Protopopescu (ORNL, Oak Ridge, Tenn.) 
Learning Algorithms Based on Finite Samples 4:30-5:00 Ivan Havel (CTS, Prague University) Interaction of Processes at Different Time Scales 5:00-5:30 Boris Stilman (Univ. of Colorado, Denver) Linguistic Geometry: A Cognitive Model for Autonomous Agents 7:00 Dinner Second Day: March 5, 1996 Neural Control 9:00- 9:30 Kumpati Narendra (Yale, New Haven) Neural Networks and Control 9:30- 10:00 John G. Taylor (King's College, London) Global Control Systems of the Brain 10:00- 10:30 Paul Werbos (NSF) Brain-like Control 10:30- 11:00 Discussion and Coffee 11:00-11:20 Shahid Habib and Mona Zaghloul (NASA and GWU) Concurrent System Identification and Control 11:30-12:00 Harry Klopf (Wright-Patterson AFB) Drive-Reinforcement Learning and Hierarchical Networks of Control Systems as Models of Nervous System Function 12:00-1:00 Lunch Learning 1:00- 1:20 Nestor Schmajuk (Duke, NC) The Psychology of Robots 1:20- 1:40 John Staddon (Duke, NC) Habituation: A Non-Associative Learning Process 1:40-2:00 David Rubin (Duke, NC) A Biologically Inspired Model of Autobiographical Memory 2:00-2:20 Ugur Halici (METU, Ankara) Reward, Punishment and Expectation in Reinforcement Learning for the RNN 2:20-2:40 Daniel Levine (Univ. of Texas, Arlington) Analyzing the Executive: Modeling the Functions of Prefrontal Subcortical Loops 2:40-3:00 Discussion and Coffee Autonomous Systems 3:00-3:15 E. Koerner, U. Koerner (Honda R \& D, Japan) Selforganization of Semantic Constraints for Knowledge Representation in Autonomous Systems: A Model of the Role of an Emotional System in Brains 3:15-3:30 Tetsuya Higuchi et al. (Tsukuba, Japan) Hardware Evolution at Gate and Function Levels 3:30-3:45 Christopher Landauer (The Aerospace Corp., Virginia) Constructing Autonomous Software Systems 3:45-4:00 Robert E. Smith Combined Biological Paradigms: A Neural, Genetics-Based Autonomous Systems Strategy Vision and Imaging 4:00-4:15 Jonathan Marshall (UNC, NC) Self-organization of Triadic Neural Circuits for Anticipatory Visual Receptive Field Shifts under Intended Eye Movements 4:15-4:30 Didem Gokcay, LiMin Fu (Univ. of Florida, Gainesville) Visualization of Functional Magnetic Resonance Images through Self-Organizing Maps 4:30-4:45 S. Guberman, W. Wojtkowski (Paragraph International, California) DD algorithm and Automated Image Comprehension 4:45-5:00 E. Koerner, U. Koerner (Honda R \& D, Japan) Neocortex-like Neural Network Architecture for Autonomous Image Understanding 5:00-5:15 E. Oztop (METU, Ankara) Baseline Extraction on Document Images by Repulsive/Attractive Network 5:15-5:30 Y. Feng, E. Gelenbe (Duke, NC) Detecting Faint Targets in Strong Clutter: A Neural Approach Networking Applications 5:30-5:45 Christopher Cramer et al. (Duke, NC) Adaptive Neural Video Compression 5:45-6:00 Thomas John, Scott Toborg (Southwestern Bel, Austin, Texas) Neural Network Techniques for Fault and Performance Diagnosis of Broadband Networks 6:15-6:30 Philippe de Wilde (Imperial College, London) Equilibria of a Communication Network 6:30-6:45 Jonathan W. Mills (Indiana University) Implementing the McCulloch-Kilmer RETIC Architecture with an Analog VLSI Neural Field Computer End of the Workshop ------------------------------------------------------------------ For further information contact: Margrid Krueger Dept. 
of Electrical and Computer Engineering Duke University email: mak at ee.duke.edu Fax: (919) 660 5293 Tel: (919) 660 5253 From dcrespin at euler.ciens.ucv.ve Tue Jun 6 06:52:25 2006 From: dcrespin at euler.ciens.ucv.ve (Daniel Crespin(UCV) Date: Tue, 13 Feb 96 10:52:30-040 Subject: papers available Message-ID: <9602131452.AA26199@euler.ciens.ucv.ve.ciens.ucv.ve> The preprints abstracted below could be of interest. To obtain the preprints use a WWW browser and go to http://euler.ciens.ucv.ve/Professors/dcrespin/Pub/ [1] Neural Network Formalism: Neural networks are defined using only elementary concepts from set theory, without the usual connectionistic graphs. The typical neural diagrams are derived from these definitions. This approach provides mathematical techniques and insight to develop theory and applications of neural networks. [2] Generalized Backpropagation: Global backpropagation formulas for differentiable neural networks are considered from the viewpoint of minimization of the quadratic error using the gradient method. The gradient of (the quadratic error function of) a processing unit is expressed in terms of the output error and the transposed derivative of the unit with respect to the weight. The gradient of the layer is the product of the gradients of the processing units. The gradient of the network equals the product of the gradients of the layers. Backpropagation provides the desired outputs or targets for the layers. Standard formulas for semilinear networks are deduced as a special case. [3] Geometry of Perceptrons: It is proved that perceptron networks are products of characteristic maps of polyhedra. This gives insight into the geometric structure of these networks. The result also holds for more general (algebraic, etc.) perceptron networks, and suggests a new technique to solve pattern recognition problems. [4] Neural Polyhedra: Explicit formulas to realize any polyhedron as a three layer perceptron neural network. Useful to calculate directly and without training the architecture and weights of a network that executes a given pattern recognition task. [5] Pattern Recognition with Untrained Perceptrons: Gives algorithms to construct polyhedra directly from given pattern recognition data. The perceptron network associated to these polyhedra (see preprint above) solves the recognition problem proposed. No network training is necessary. Daniel Crespin From ertel at fbe.FH-Weingarten.DE Tue Jun 6 06:52:25 2006 From: ertel at fbe.FH-Weingarten.DE (ertel@fbe.FH-Weingarten.DE) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: CFA: Autumn School on HYBRID SYSTEMS Message-ID: ------------------------------------------------------------------------------- * web page: http://www.fh-weingarten.de/homepags/doz/ertel/ashs.htm * LaTeX source: enclosed below ------------------------------------------------------------------------------- AUTUMN SCHOOL on HYBRID SYMBOLIC CONNECTIONIST SYSTEMS Ravensburg -- Weingarten (Germany) September 25--28, 1996 organized by ESPRIT BASIC RESEARCH PROJECT 9119 (MIX) "Modular Integration of Connectionist and Symbolic Processing in Knowledge-Based Systems" The combination of symbolic and connectionist systems is one of the currently most challenging and interesting fields of Artificial Intelligence. This school introduces students as well as active researchers to this new and promising direction of research. 
Most symbolic systems are more or less logic based and therefore perform deductive reasoning on expert knowledge, but have severe problems with inductive reasoning. On the other hand neural networks are good in inductive reasoning based on data, but are less apt to perform deductive reasoning. Another view of this problem is the integration of prior knowledge (e.g. expert knowledge) into an inductive system. The lectures will give an insight into the ongoing research in the field where the fundamental theory as well as practical solutions for concrete applications will be presented. Participants are expected to have basic knowledge of Neural Networks and Artificial Intelligence. LECTURES There will be 6 lectures each consisting of 4 x 45 min lessons. 1) Melanie Hilario (Univ. Geneva), Abderrahim Labbi (Univ. Grenoble), Wolfgang Ertel (FH Weingarten): A Framework for the Modular Integration of Knowledge and Data in Hybrid Systems 2) Alessandro Sperduti (Univ. Pisa): Neural Networks for the Processing of Structures 3) Jose Gonzales and Juan R. Velasco (Univ. Madrid): A Comprehensive View of Fuzzy-Neural Hybrids 4) Michael Kaiser (Univ. Karlsruhe): Combining Symbolic and Connectionist Techniques in Adaptive Control 5) NN 6) NN POSTER SESSION The attendees of the autumn school are encouraged to bring along a poster (size about 40 60 cm) which gives insight into their research work, the project they are working in, etc. which shall be presented in a poster session. DIRECTORS OF THE SCHOOL Wolfgang Ertel, FH Ravensburg-Weingarten Bertram Fronhoefer TU Munich GENERAL INFORMATION PARTICIPATION FEES: - Students: ECU 150.-- - University: ECU 250.-- - Industry: ECU 400.-- DEADLINE FOR APPLICATION: April 12, 1996 NOTIFICATION OF ACCEPTANCE: May 22, 1996 Applications should be sent preferably by email to: ashs at fl-sun00.fbe.fh-weingarten.de Applications should contain a full address and a short statement about the applicants scientific qualification (student, PhD student, industrial researcher, etc.) and his interests in the topics of the autumn school. If email is not available, applications by surface mail should be sent to: Wolfgang Ertel Phone: +49--751--501--721 FH Ravensburg-Weingarten Fax: +49--751--501--749 Postfach 1261 D-88241 Weingarten Attendance to the school will be limited to about 50 participants. LANGUAGE: All lectures will be in English. LECTURE SITE: The lectures will be given in the Informatik Zentrum of the Fachhochschule Ravensburg-Weingarten and will start on September 25 in the morning. ACCOMODATION: Apart from a large range of hotels with prizes from DM 40.-- till DM 200.--, there are also limited occasions for inexpensive student lodging. Low price lunch will be provided by the Mensa (canteen) of the Fachhochschule Ravensburg-Weingarten. LOCATION: Weingarten and its immediate neighbour-city Ravensburg with about 70000 inhabitants represent the economic and cultural heart of Oberschwaben. Above the valley of the river Schussen the famous basilica of Weingarten together with the adjacent old Benedictine abbey is one of the most significant baroque constructions north of the alps. Close to and partly inside the baroque buildings of the abbey is the Fachhochschule, a university for engineering and social sciences where the school will take place. Oberschwaben is a rural pre-alpine area with various little lakes and fens, located in the south-west of Germany, close to the lake of Konstanz and the borders to Austria and Switzerland. 
The alps are not far (an hour by car or train) and the lake of Konstanz provides all facilities for marine outdoor activities. For further information and inquiries concerning participation please send an e-mail message to the above address. This call as well as futher information is available from the WWW-page: http://www.fh-weingarten.de/homepags/doz/ertel/ashs.htm %%%%%%%%%%%%%%%%%%%%%%%%%%%%% LaTeX source %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% \documentstyle[12pt]{article} \nonstopmode \parindent=0pt \parskip=4pt \hoffset=-10mm \voffset=-15mm \textheight=240mm \textwidth=175mm \oddsidemargin=0mm \evensidemargin=0mm \topmargin=0mm \newcommand{\tit}[1]{{\em #1.}} \newenvironment{alist}{\begin{list}{}{\itemsep 1mm \parsep 0mm}}{\end{list}} \begin{document} \bibliographystyle{alpha} \begin{centering} {\Large\bf AUTUMN SCHOOL on HYBRID SYMBOLIC CONNECTIONIST SYSTEMS } {\large FH Ravensburg--Weingarten (Germany) September 25--28, 1996 } \bigskip organized by \\[1ex] {\large ESPRIT BASIC RESEARCH PROJECT 9119 (MIX) }\\[1ex] "Modular Integration of \\ Connectionist and Symbolic Processing \\ in Knowledge-Based Systems" \\ \end {centering} \bigskip \bigskip The combination of symbolic and connectionist systems is one of the currently most challenging and interesting fields of Artificial Intelligence. This school introduces students as well as active researchers to this new and promising direction of research. Most symbolic systems are more or less logic based and therefore perform deductive reasoning on expert knowledge, but have severe problems with inductive reasoning. On the other hand neural networks are good in inductive reasoning based on data, but are less apt to perform deductive reasoning. Another view of this problem is the integration of prior knowledge (e.g.\ expert knowledge) into an inductive system. The lectures will give an insight into the ongoing research in the field where the fundamental theory as well as practical solutions for concrete applications will be presented. Participants are expected to have basic knowledge of Neural Networks and Artificial Intelligence. \bigskip \begin{centering} {\bf Lectures} \\ There will be 6 lectures each consisting of 4 x 45 min lessons. \end {centering} \begin{alist} \item[1.] Melanie Hilario (Univ.\ Geneva), Abderrahim Labbi (Univ.\ Grenoble), Wolfgang Ertel \\(FH~Weingarten):\\ \tit{A Framework for the Modular Integration of Knowledge and Data in Hybrid Systems} \item[2.] Alessandro Sperduti (Univ.\ Pisa): \tit{Neural Networks for the Processing of Structures} \item[3.] Jose Gonzales and Juan R. Velasco (Univ.\ Madrid):\\ \tit{A Comprehensive View of Fuzzy-Neural Hybrids} \item[4.] Michael Kaiser (Univ.\ Karlsruhe):\\ \tit{Combining Symbolic and Connectionist Techniques in Adaptive Control} \item[5.] NN \item[6.] NN \end{alist} \medskip \begin{centering} {\bf Poster Session \\} \end {centering} The attendees of the autumn school are encouraged to bring along a poster (size about 40 $\times$ 60 cm) which gives insight into their research work, the project they are working in, etc. which shall be presented in a poster session. 
\newpage \begin{centering} {\bf Directors of the School } Wolfgang Ertel, FH Ravensburg-Weingarten \\ Bertram Fronh\"ofer, TU Munich \\ \bigskip {\bf General Information \\ } \end {centering} {\bf Participation fees:} \\ \begin{tabular}{ll} -- Students : & ECU 150.-- \\ -- University : & ECU 250.-- \\ -- Industry : & ECU 400.-- \end{tabular} {\bf Deadline for Application:} April 12, 1996 {\bf Notification of Acceptance:} May 22, 1996 Applications should be sent preferably by email to: {\tt ashs at fl-sun00.fbe.fh-weingarten.de} \\ Applications should contain a full address and a short statement about the applicants scientific qualification (student, PhD student, industrial researcher, etc.) and his interests in the topics of the autumn school. If email is not available, applications by surface mail should be sent to: % \begin{center} \parbox[t]{6cm}{Wolfgang Ertel\\ FH Ravensburg-Weingarten\\ Postfach 1261\\ D-88241 Weingarten} \parbox[t]{6cm}{Phone: +49--751--501--721\\ Fax: +49--751--501--749} \end{center} % Attendance to the school will be limited to about 50 participants. {\bf Language:} All lectures will be in English. {\bf Lecture site:} The lectures will be given in the Informatik Zentrum of the Fachhochschule Ravensburg-Weingarten and will start on September 25 in the morning. {\bf Accomodation:} Apart from a large range of hotels with prizes from DM 40.-- till DM 200.--, there are also limited occasions for inexpensive student lodging. Low price lunch will be provided by the Mensa (canteen) of the Fachhochschule Ravensburg-Weingarten. {\bf Location:} Weingarten and its immediate neighbour-city Ravensburg with about 70000 inhabitants represent the economic and cultural heart of Oberschwaben. Above the valley of the river Schussen the famous basilica of Weingarten together with the adjacent old Benedictine abbey is one of the most significant baroque constructions north of the alps. Close to and partly inside the baroque buildings of the abbey is the Fachhochschule, a university for engineering and social sciences where the school will take place.\\ Oberschwaben is a rural pre-alpine area with various little lakes and fens, located in the south-west of Germany, close to the lake of Konstanz and the borders to Austria and Switzerland. The alps are not far (an hour by car or train) and the lake of Konstanz provides all facilities for marine outdoor activities. For further information and inquiries concerning participation please send an e-mail message to the above address. This call as well as futher information is available from the WWW-page:\\ {\tt http://www.fh-weingarten.de/homepags/doz/ertel/ashs.htm} \end{document} From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Graduate Scholarship Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Geraint Johnson, L Nealon and Roger O. Lindsay Initialization by Rule Induction Prior to Learning Ralf Salomon DEDEC: A Methodology for Extracting Rules From Trained Artificial Neural Networks Alan B. 
Tickle, Marian Orlowski and Joachim Diederich An Algorithm for Extracting Propositions From Trained Neural Networks Using Multilinear Functions Hiroshi Tsukimoto and Chie Morita Automatic Acquisition of Symbolic Knowledge From Subsymbolic Neural Networks Alfred Ultsch and Dieter Korus Rule Extraction From Trained Neural Networks: Different Techniques for the Determination of Herbicides for the Plant Protection Advisory System PRO_PLANT Ubbo Visser, Alan Tickle, Ross Hayward and Robert Andrews From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: It provides compelling neurobiological evidence for the existence of stable attractor dynamics; at the same time, it forces consideration of the challenging problem of how to shift a stable activity profile. Because the first systematic experimental studies of HD cells were published only in 1990, the current paper includes a relatively complete reference list of the experimental publications, in addition to some immediately related theoretical papers.
____________________________________________________________________________
The paper has appeared in: Journal of Neuroscience 16(6): 2112-2126 (1996)
Title: Representation of spatial orientation by the intrinsic dynamics of the head-direction cell ensemble: A theory
Author: Kechen Zhang Department of Cognitive Science University of California, San Diego La Jolla, California 92093-0515
Abstract: The head-direction (HD) cells found in the limbic system in freely moving rats represent the instantaneous head direction of the animal in the horizontal plane regardless of the location of the animal. The internal direction represented by these cells uses both self-motion information for inertially based updating and familiar visual landmarks for calibration. Here, a model of the dynamics of the HD cell ensemble is presented. The stability of a localized static activity profile in the network and a dynamic shift mechanism are explained naturally by synaptic weight distribution components with even and odd symmetry, respectively. Under symmetric weights or symmetric reciprocal connections, a stable activity profile close to the known directional tuning curves will emerge. By adding a slight asymmetry to the weights, the activity profile will shift continuously without disturbances to its shape, and the shift speed can be accurately controlled by the strength of the odd-weight component. The generic formulation of the shift mechanism is determined uniquely within the current theoretical framework. The attractor dynamics of the system ensures modality-independence of the internal representation and facilitates the correction for cumulative error by the putative local-view detectors. The model offers a specific one-dimensional example of a computational mechanism in which a truly world-centered representation can be derived from observer-centered sensory inputs by integrating self-motion information.
__________________________________________________________________________
Comments and suggestions are welcome.
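To make the even/odd weight decomposition concrete, the following minimal sketch simulates a ring network of the kind the abstract describes. It is not Zhang's published model: the cosine/sine weight profile, the threshold-linear rate dynamics and every parameter value below are illustrative assumptions, chosen only so that a stable activity bump persists under purely even (symmetric) weights and rotates once a small odd (antisymmetric) component is added.

import numpy as np

N = 120                                        # cells around the ring of head directions
theta = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)
dtheta = theta[:, None] - theta[None, :]       # pairwise angular differences

J0, J1, gamma = -1.0, 3.0, 0.15                # assumed values, not taken from the paper
W_even = (J0 + J1 * np.cos(dtheta)) / N        # even (symmetric) part: holds the bump
W_odd = gamma * np.sin(dtheta) / N             # odd (antisymmetric) part: shifts the bump

def run(W, steps=500, dt=0.1):
    r = np.exp(np.cos(theta - np.pi))          # start with activity centred near 180 degrees
    for _ in range(steps):
        drive = W @ r + 1.0                    # recurrent input plus uniform excitation
        r += dt * (-r + np.maximum(drive, 0.0))   # threshold-linear rate dynamics
    return r

static = run(W_even)                           # bump settles and stays near 180 degrees
moving = run(W_even + W_odd)                   # same bump shape, displaced around the ring

print("peak with even weights only: %.0f deg" % np.degrees(theta[np.argmax(static)]))
print("peak with odd component added: %.0f deg" % np.degrees(theta[np.argmax(moving)]))

Making gamma larger moves the printed peak farther in the same amount of simulated time, which is the sense in which the strength of the odd-weight component controls the shift speed in this toy version.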
Email: zhang at salk.edu or kzhang at cogsci.ucsd.edu http://www.cnl.salk.edu/~zhang From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: human cortex is a myth. There is postnatal cortical cell death in rodents, but in primates (including humans) there is only (i) a decreased density of cell packing, and (ii) massive (up to 50%) synapse loss. (The decreased density of cell packing was apparently misinterpreted as cell loss in the past.) Of course, there are pathological cases, such as Alzheimer's, in which there is cell loss. I have written a review of human postnatal brain development which I can send out on request. Mark Johnson =============== Mark H. Johnson Senior Research Scientist (Special Appointment) Professor of Psychology, University College London MRC Cognitive Development Unit, 4 Taviton Street, London WC1H OBT, UK tel: 0171-387-4692 fax: 0171-383-0398 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: There have been a number of studies of neuron loss in aging. It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of the cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons are lost by age 60. Using the standard estimate of ten billion neurons in the neocortex, this works out to about one hundred thousand neurons lost per day of adult life. Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey and rodent data" DG Flood & PD Coleman, Neurobiology of Aging, 9, (1988) pp. 453-464. -------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: I have come across a brief reference to adult neural death that may be of use, or at least a starting point. The book is: Dowling, J.E. 1992 Neurons and Networks. Cambridge: Harvard Univ. In a footnote (!) on page 32, he writes: There is typically a loss of 5-10 percent of brain tissue with age. Assuming a brain loss of 7 percent over a life span of 100 years, and 10^11 neurons (100 billion) to begin with, approximately 200,000 neurons are lost per day. ---------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: As I remember it, the studies showing the marked reduction in nerve cell count with age were done around the turn of the century. The method, then as now, is to obtain brains of deceased persons, fix them, prepare cuts, count cells microscopically in those cuts, and then estimate the total number by multiplying the sampled cells/(volume of cut) with the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramidal cells are turned into that form by fixation.
The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percent?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that the cell loss with age in the CNS is much lower than generally thought. ---------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Moshe Abeles in Corticonics (Cambridge Univ. Press, 1991) writes on page 208 that: "Comparisons of neural densities in the brain of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cells die between the ages of twenty and eighty years (Tomlinson and Gibson, 1980). Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it. Let us assume that every year about 0.5% of the cortical cells die at random...." and goes on to discuss the implications for network robustness. Reference: Henderson G, Tomlinson BE and Gibson PH (1980) "Cell counts in human cerebral cortex in normal adults throughout life using an image analysis computer" J. Neurol. Sci., 46, pp. 113-136. ------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: "In Search of the Engram" The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of the brain and seeing if it affected memory... Other work was done by a gentleman named Richard F. Thompson. Both speak of the loss of neurons in a system and how integrity was kept. In particular Karl Lashley spoke of memory as holograms... ------------------------------------------------- Hope it helps... Regards Guido Bugmann ----------------------------- Dr. Guido Bugmann Neurodynamics Research Group School of Computing University of Plymouth Plymouth PL4 8AA United Kingdom ----------------------------- Tel: (+44) 1752 23 25 66 / 41 Fax: (+44) 1752 23 25 40 Email: gbugmann at soc.plym.ac.uk http://www.tech.plym.ac.uk/soc/Staff/GuidBugm/Bugmann.html ----------------------------- From stavrosz at med.auth.gr Tue Jun 6 06:52:25 2006 From: stavrosz at med.auth.gr (Stavros Zanos) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Neuronal Cell Death Message-ID: This recently published (April '96) review about the amount and the possible role of neuronal cell death during development could be of interest to some of the readers of this list. James T. Voyvodic (1996) Cell Death in Cortical Development: How Much? Why? So What?
Neuron 16(4) Stavros Zanos Aristotle University School of Medicine Thessaloniki, Macedonia, Greece "If I Had More Time, I Would Have Written You A Shorter Letter" Pascal From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: algorithms definitely need to design and train nets on their own. (By the way, this is indeed a doable task and all neural network algorithms need to focus on both of these tasks.) We cannot leave design out of our algorithms. Our original intent is to build self-learning systems, not systems that we have to "baby sit" all the time. Such systems are "useless" if we want to build truly autonomous learning systems that can learn on their own. "Learning" includes "design and training". We cannot call them learning algorithms unless they design nets on their own and unless they attempt to generalize (i.e. attempt to build the smallest possible net). I would welcome more thoughts and debate on all of these issues. It would help to see some more response on two of the other premises of classical connectionist learning - local learning and memoryless learning. They have been the key concepts behind algorithm development in this field for the last 40 to 50 years. Again, open and vigorous debate is very healthy for a scientific field. Perhaps more researchers will come forward with facts and ideas on these and other issues. ******************************************************** ******************************************************** On May 23 Danny Silver wrote: "Dr. Roy .. It was interesting to read your mail on new criteria for neural network based inductive learning. I am sure that many other readers have at one time or another had similar thoughts or portions thereof. Notwithstanding the need to walk before you run, there is reason to set our sights a little higher than they have been. Along these lines I would like to point you toward a growing body of work on Transfer in Inductive Systems which suggests that a "life long learning" or "learning to learn" approach encompasses many of the criteria which you have outlined. At NIPS*95 a post-conference workshop covered this very topic and heard from some 15 speakers on the subject. All those who are interested should search through the homepages below for additional information." Daniel L. Silver University of Western Ontario, London, Canada = N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b = dsilver at csd.uwo.ca H: (519)473-6168 O: (519)679-2111 (ext.6903) WWW home page .... http://www.csd.uwo.ca/~dsilver = ================================================== Workshop page: http://www.cs.cmu.edu/afs/cs.cmu.edu/usr/caruana/pub/transfer.html Lori Pratt's transfer page: http://vita.mines.colorado.edu:3857/0/lpratt/transfer.html Danny Silver's transfer ref list: http://www.csd.uwo.ca/~dsilver/ltl-ref-list Rich Caruana's transfer ref list: http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transferbib.html ******************************************************** ******************************************************** On May 21 Michael Vanier wrote: "I read your post to the computational neuroscience mailing list with interest.
I agreed with most of your points about the differences between "brain-like" learning and the learning exhibited by current neural network models. I have a couple of comments, for what it's worth. (On Task A: Perform Network Design Task) As a student of neuroscience (and computational neuroscience), it isn't clear to me what you're referring to when you say that the brain designs an appropriate network for a given task. One take on this is that evolution has done just that, but evolution has operated over millions of years. Biological development can also presumably tune a network in response to inputs (e.g. the development of connectivity in visual cortex in response to the presence or absence of visual stimuli), but again, this is slow and relatively fixed after a certain period, so it would only apply to generic tasks whose nature doesn't change profoundly over time (which presumably is the case for early vision). I know of no example where the brain can massively rewire itself in order to perform some task. However, the kind of learning found in connectionist networks (correlation-based using local learning rules) has a fairly direct analogy to long-term potentiation and depression in the brain, so it's likely that the brain is at least this powerful. This accounts for much of the appeal of local learning rules: you can find them (or something similar to them) in the brain. In fact,despite the practical problems with backprop (which you mention), the most common objection given by biologists to backprop is that even this simple a learning rule would be very difficult to instantiate in a biological system. (On Task C: Quickness in Learning) This is indeed a problem. Interestingly, attractor networks such as the Hopfield net can in principle learn in one trial (although there are other problems involved there too). Hopfield nets are also fundamentally feedback structures, like the brain but unlike most connectionist models. This is not to suggest that Hopfield nets are good models of the brain; they clearly aren't. It's not clear to me what you mean by "storing training examples in memory". Again using the Hopfield net example, in that case the whole purpose of the network is to store patterns in memory. Perhaps what you're suggesting is that feedforward networks take advantage of this to repeatedly play back memorized patterns from attractor networks so as to make learning more rapid. Some researchers believe the hippocampus is performing this function by storing patterns when an animal is awake and playing them back when the animal is asleep. Thanks for an interesting post." ******************************************************** ******************************************************** On May 15 Brendan McCane wrote: " Hi, Just a few comments here. Although I think the points you make are valid and probably desirable, I don't think they can necessarily be applied to the human brain. Following are specific comments about the listed criteria. (On Task A: Design Networks) The neural network architecture of the brain is largely pre-determined. Tuning certainly takes place, but I do not believe that the entire brain architecture is rebuilt for every newborn. This would require tremendous effort and probably end up with people who cannot communicate with each other at all (due to different representations). The human brain system has actually been created with external assistance, namely from evolution. 
(On Task B: Robustness in Learning) I agree that no local minima would be optimal, but humans almost certainly fall into local minima (due to lack of extensive input or whatever) and only jump out when new input comes to light. (On Task E: Efficiency in Learning.) I don't see why insects or birds could not solve NP-hard problems from an evolutionary point of view. That is, the solution has now been hard-wired into their brains after millions of years of learning. I am not convinced that these characteristics are more brain-like than classical connectionist ones. Certainly they are desirable, and are possibly the holy grail of learning, but I don't think you can make the claim that the brain functions in this way. I think I've addressed all the other points made below in the points above." ******************************************************** ******************************************************** On May 15 Richard Kenyon wrote: "Here are my comments. I think that what you are looking for is something along the lines of a-life type networks which would evolve their design (much like the brain, see Brendan's comment), as there is no obvious design for any particular problem in the first place, and a net which can design a network must already know something about the problem, which is why you raise the issue. I think though that design is evolving, albeit at the hands of connectionist scientists; i.e. the title of this list is one such step in the evolution. (On Task B: Robustness in Learning) For me one of the key concepts in neural nets is graceful degradation, the idea that when problems arise the networks don't just fall over. I reckon that networks are still fairly brittle and that a lot needs to be done in this area. However I agree again with Brendan that our brains suffer local minima more than we would like to admit. (On Task C: Quickness in Learning) Memory is indeed very important, but a lot has already been published on the storage capacity of recurrent neural networks; it has not been forgotten. Very idealistic I'm afraid. Humans don't learn as quickly as we might like to think. Our 'education' is a long drawn out process and only every now and again do we experience enlightenment in the grasping of a key concept. This does not happen quickly or that often (relatively). The main factor affecting neural nets (imo) will be parallel computers, at which point the net as we know it will not be stand-alone but connected to many more; this is the principle I think is the closest theorisation we have to the brain's parallelism. This is also why hybrid systems are very interesting, as a parallel system will be able to process output from many designs. (On Task D: Efficiency in Learning) Obviously efficiency in learning is important, but for us humans this is often mediated by efficient teaching, as in the case of some algorithms; self-organising nets offer some form of autonomy in learning, but often end up doing it the same way over and over again, as do we. Kohonen has interpreted this as a physiological principle, in that it takes a lot of effort to sever old neural connections and establish a new path for incorporating new ideas. Local minima have muscle. (On Task E: Generalization in Learning) The brain probably accepts some form of redundancy (waste). I agree that the brain is one hell of an optimisation machine. Intelligence, whatever task it may be applied to, is (again imho) one long optimisation process.
Generalisation arises (even emerges or is a side effect) as a result of ongoing optimisation, conglomeration, reprocessing etc. etc. This is again very important, I agree, but I think (I do anyway) we in the NN community are aware of this, as with much of the above. I thought that apart from point A we were doing all of this already, although to have it explicitly published is very valuable. I may be wrong. > A good test for a so-called "brain-like" algorithm is to imagine it > actually being part of a human brain. I don't think that many researchers would claim too much about neural nets being very brain-like at all. The simulated neurons, whether sigmoid or tansigmoid etc., do not behave very like real neurons at all, which is why there is a lot of research into biologically plausible neurons. > Then examine the learning > phenomenon of the algorithm and compare it with that of the > human's. For example, pose the following question: If an algorithm > like back propagation is "planted" in the brain, how will it behave? > Will it be similar to human behavior in every way? Look at the > following simple "model/algorithm" phenomenon when the back-propagation algorithm is "fitted" to a human brain. You give it a > few learning examples for a simple problem and after a while this > "back prop fitted" brain says: "I am stuck in a local minimum. I > need to relearn this problem. Start over again." And you ask: > "Which examples should I go over again?" And this "back prop > fitted" brain replies: "You need to go over all of them. I agree this is a limitation, but how is any net supposed to know what is relevant to remember or even pay greater attention to? This is in part the frame problem, which roboticists are having a great deal of fun discussing. > I don't > remember anything you told me." So you go over the teaching > examples again. And let's say it gets stuck in a local minimum again > and, as usual, does not remember any of the past examples. So you > provide the teaching examples again and this process is repeated a > few times until it learns properly. The obvious questions are as > follows: Is "not remembering" any of the learning examples a brain-like phenomenon? Yes and no; children often need to be told over and over again, and this field is still in its infancy. > Are the interactions with this so-called "brain-like" algorithm similar to what one would actually encounter with a > human in a similar situation? If the interactions are not similar, then > the algorithm is not brain-like. A so-called brain-like algorithm's > interactions with the external world/teacher cannot be different > from that of the human. > > In the context of this example, it should be noted that > storing/remembering relevant facts and examples is very much a > natural part of the human learning process. Without the ability to > store and recall facts/information and discuss, compare and argue > about them, our ability to learn would be in serious jeopardy. > Information storage facilitates mental comparison of facts and > information and is an integral part of rapid and efficient learning. It > is not biologically justified when "brain-like" algorithms disallow > usage of memory to store relevant information. I did not know they were not allowed, but perhaps they have been left on the sidelines; but again I refer you to recurrent nets. > Another typical phenomenon of classical connectionist learning is > the "external tweaking" of algorithms. How many times do we > "externally tweak" the brain (e.g.
adjust the net, try a different > parameter setting) for it to learn? Interactions with a brain-like > algorithm have to be brain-like indeed in all respects. An analogy here is perhaps taking a different perspective on a problem; this is a very human parameter that we must tweak to make progress. > It is perhaps time to reexamine the foundations of the neural > network/connectionist field. This mailing list/newsletter provides an > excellent opportunity for participation by all concerned throughout > the world. I am looking forward to a lively debate on these matters. > That is how a scientific field makes real progress. I agree with the last sentiment." ******************************************************** ******************************************************** On May 16 Chris Cleirigh wrote: "Hi, good luck with your enterprise. I think if you aim to be consistent with biology you have more chance of long-term success. I'm no engineer -- I'm a linguist -- but I've read of Edelman's theory of neuronal group selection, which seeks to explain categorisation through Darwinian processes of variation and selection of populations of neuronal groups in the brain. Are you motivated by such models? One thing, you say: For neuroscientists and neuroengineers, it should open the door to development of brain-like systems they have always wanted - those that can learn on their own without any external intervention or assistance, much like the brain. However, efficient learning does involve external intervention, especially by other brains. Consider language learning and the corrective role played by adults in teaching children." ******************************************************** ******************************************************** On May 17 Kevin Gurney wrote: "I read your (provocative) posting to the cogpsy mailing list and would like to make some comments. (Your original remarks are enclosed in square brackets.) [A. Perform Network Design Task: A neural network/connectionist learning method must be able to design an appropriate network for a given problem,... From a neuroengineering and neuroscience point of view, this is an essential property for any "stand-alone" learning system - ...] It might be from a neuroengineering point of view but not from a neuroscientific one. Real brains undergo a developmental process, much of which is encoded in the organism's DNA. Thus, the basic mechanisms of structural and trophic development are not thought to be activity driven per se. Mechanisms like Long Term Potentiation (LTP) may be the biological correlate of connectionist learning (Hebb rule) but are not responsible for the overall neural architecture at the modular level, which includes the complex layering of the cortex. I would take issue quite generally with your frequent invocation of the neuroscientists in your programme. They *are* definitely interested in discovering the nature of real brains - rather than super-efficient networks that may be engineered - I will bring this out in subsequent points below. [B. Robustness in Learning: The method must be robust so as not to have the local minima problem, the problems of oscillation and catastrophic forgetting, the problem of recall or lost memories and similar learning difficulties.] Again, it may be the goal of neuro*engineers* to study ideal devices - it is not the domain of neuroscientists. [C. Quickness in Learning: The method must be quick in its learning and learn rapidly from only a few examples, much as humans do.]
" Humans don't, in fact, learn from just a few examples in most cognitive and perceptual tasks - this is a myth. The fine tuning of visual and motor cortex which is a result of the critical period in infanthood is a result of a continuous bombardment of the animal with stimuli and tactile feedback. The same goes for langauge. The same applies for the learning of any new skill in fact (reading, playing a musical instrument ec etc.). These may be executed in an algorithmic, serial processing fashion until they become automatised in the parallel processing of the brain (cf Andy Clarke's von-Neuman emulaton by the brain) Many connectionists have imbued humans with god-like powers which aren't there. It is true that we can learn one-off facts and add them to our episodic memory but this is not usually the kind of things which nets are asked to perform. YD. Efficiency in Learning: The method must be computationally efficient in its learning when provided with a finite number of training examples (Minsky and PapertY1988"). It must be able to both design and train an appropriate net in polynomial time." Judd has shown that NN learning is intrinsically NP complex in many instances - there is no `free lunch'. See also the results in computational learning theory by Wolpert and Schaffer. YE. Generalization in Learning: ...That is, it must try to design the smallest possible net, ... This property is based on the notion that the brain could not be wasteful of its limited resources, so it must be trying to design the smallest possible net for every task." Not true. Visual cortex uses a massive expansion in its coding from the LGN to V1 before it `recompresses' in higher visual centres. This has been described theoretically in terms of PCA etc (ECVP last year - can't recall ref. just now) YAs far as I know, there is no biological evidence for any of the premises of classical connectionist learning." The relation LTP = Hebb rule is a fairly non-contentious statement in the neuroscientific community. I could go on (RP learning and operant conditioning etc)... YSo, who should construct the net for a neural net algorithm? The answer again is very simple: Who else, but the algorithm itself!" The brain uses many `algorithms' to develop - it is these working in concert (genetically deterimined and activity mediated) which ensure the final state YYou give it a few learning examples for a simple problem and after a while this "back prop fitted" brain says: "I am stuck in a local minimum. I need to relearn this problem. Start over again."" My brain constantly gets stuck in local minima. If not then I would learn everything I tried to do to perfection - I would be an accomplished craftsman/musician/linguist/sporstman etc. In fact I am non of these...but rather have a small amount (local minimum's worth) of ability in each. YThe obvious questions are as follows: Is "not remembering" any of the learning examples a brain- like phenomenon? " There may be some mechanism for storing the `rules' and `examples' in STM or even LTM but even this is not certain (e.g. `now describe to me the perfect tennis backhand......`No - you forget to mention the follow-through - how many more times...') Finally, an engineering point. The claim that previous connectionist algorithms are not able to construct networks is a little brash. There have been several attempts to contruct nets as part of the learning proces (e.g. Cascade correlation). 
In summary: I am pleased to see that people are trying to overcome some of the problems encountered in building neural nets. However, I would urge people not to misappropriate the activities of people in other fields (neuroscience) and to learn a little more about the real capabilities of humans and their brains as described by neuroscientists and psychologists. I would also ask that more account be taken of the theoretical literature on learning. I hope this contribution is useful." ******************************************************** ******************************************************** On May 18 Craig Hicks wrote: "Hi, >A. Perform Network Design Task: A neural network/connectionist >learning method must be able to design an appropriate network for >a given problem, since, in general, it is a task performed by the >brain. A pre-designed net should not be provided to the method as >part of its external input, since it never is an external input to the >brain. From a neuroengineering and neuroscience point of view, this >is an essential property for any "stand-alone" learning system - a >system that is expected to learn "on its own" without any external >design assistance. Doesn't this ignore the role of evolution as a "learning" force? It's indisputable that the brain has a highly specialized structure. Obviously, this did not come from nowhere, but is the result of the forces of natural selection." ******************************************************** ******************************************************** On May 23 Dragan Gamberger wrote: "I read your submission with great interest although (or maybe because of the fact that) I'm not working in the field of neural networks. My interests are in the field of inductive learning. The presented ideas seem very attractive to me and in my opinion your criticism of the present systems is fully justified. The only suggestion for improvement is on part C.: > C. Quickness in Learning: The method must be quick in its > learning and learn rapidly from only a few examples, much as > humans do. For example, one which learns from only 10 examples > learns faster than one which requires a 100 or a 1000 examples. Although the statement is not incorrect by itself, in my opinion it reflects the common unawareness of the importance of redundancy for machine, as well as for human, learning. In practice neither machine nor human can learn something (except extremely simple concepts) from 10 examples, especially if there is noise (errors in training examples). Even for learning of simple concepts it is advisable to use as many training examples as possible (and not only the necessary subset) because it can improve the quality and (at least) the reliability of induced concepts. Especially for handling imperfections in training data (noise), the use of a redundant training set is obligatory. In practice, humans can and do induce concepts from a small training set, but they are 'aware' of their unreliability and use every occasion (additional examples) to test induced concepts and to refine them if necessary. That is potentially the ideal model of incremental learning." ******************************************************** ******************************************************** On May 25 Guido Bugmann responded to Raj Rao: "A similar question (are there references for 1 million neurons lost per day?) came up in a discussion on the topic of robustness on connectionists a few years ago (1992).
Some of the replies were: ------------------------------------------------------- From Bill Skaggs, bill at nsma.arizona.edu : There have been a number of studies of neuron loss in aging. It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of the cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons are lost by age 60. Using the standard estimate of ten bilion neurons in the neocortex, this works out to about one hunderd thousand neurons lost per day of adult life. Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey and rodent data" DG Flood & PD Coleman, Neurobiology of Aging, 9, (1988) pp.453-464. -------------------------------------------------------- From Arshavir Balckwell, arshavir at crl.ucsd.edu : I have come across a brief reference to adult neural death that may be of use, or at least a starting point. The book is: Dowling, J.E. 1992 Neurons and Networks. Cambridge: Harward Univ. In a footnote (!) on page 32, he writes: There is typically a loss of 5-10 percent of brain tissue with age. Assuming a brain loss of 7 percent over a life span of 100 years, and 10^11 neurons (100 billions) to begin with, approximately 200,000 neurons are lost per day. ---------------------------------------------------------------- From Jan Vorbrueggen, jan at neuroinformatik.ruhr-uni-bochum.de As I remember it, the studies showing the marked reduction in nerve cell count with age were done around the turn of the century. The method, then as now, is to obtain brains of deceased persons, fix them, prepare cuts, count cells microscopically in those cuts, and then estimate the total number by multiplying the sampled cells/(volume of cut) with the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramidal cells are turned into that form by fixation. The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percents?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that the cell loss with age in the CNS is much lower than generally thought. ---------------------------------------------------------------- From Paul King, Paul_King at next.com Moshe Abeles in Corticonics (Cambridge Univ. Press, 1991) writes on page 208 that: "Comparisons of neural densities in the brain of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cell die between the ages of twenty and eighty years (Tomlinson and Gibson, 1980). Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it. 
Let us assume that every year about 0.5% of the cortical cells die at random...." and goes on to discuss the implications for network robustness. Reference: Gearald H, Tomlinson BE and Gibson PH (1980) "Cell counts in human cerebral cortex in normal adults throughout life using an image analysis computer" J. Neurol., 46, pp. 113-136. ------------------------------------------------------------- From Robert A. Santiago, rsantiag at note.nsf.gov "In search of the Engram" The problem of robutsness from a neurobiological perspective seems to originate from works done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of the memory and seeing if it affected memory... Other work was done by a gentlemen named Richard F. Thompson. Both speak of the loss of neurons in a system and how integrity was kept. In particular Karl Lashley spoke of the memory as holograms... ------------------------------------------------- Hope it helps..." From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From pelillo at minster.cs.york.ac.uk Tue Jun 6 06:52:25 2006 From: pelillo at minster.cs.york.ac.uk (pelillo@minster.cs.york.ac.uk) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: April 1995 Subject: No subject Message-ID: Abstract ---------- This paper describes how graph grammars with attributes may be used to grow neural networks. The grammar facilitates a very compact and declarative description of every aspect of a neural architecture; this is important from a software/neural engineering point of view, since the descriptions are much easier to write and maintain than programs written in a high-level language, such as C++, and do not require programming ability. The output of the growth process is a neural network that can be transformed into a Postscript representation for display purposes, or simulated using a separate neural network simulation program, or mapped directly into hardware in some cases. In this approach, there is no separate learning algorithm; learning proceeds (if at all) as an intrinsic part of the network behaviour. This has interesting application in the evolution of neural nets, since now it is possible to evolve all aspects of a network (including the learning `algorithm') within a single unified paradigm. As an example, a grammar is given for growing a multi-layer perceptron with active weights that has the error back-propagation learning algorithm embedded in its structure. This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get esann95.ps ----------------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: April 1996 Subject: No subject Message-ID: Abstract ---------- This paper describes a set-based chromosome for describing neural networks. The chromosome etween sets. 
Each set is updated in order, as are the neurons in that set, in accordance with a simple pre-specified algorithm. This allows all details of a neural architecture, including its learning behaviour to be specified in a simple and purely declarative manner. To evolve a learning behaviour for a particular network architecture, certain details of the architecture are pre-specified by defining a chromosome template, with some of the genes fixed, and others allowed to vary. In this paper, a learning perceptron is evolved, by fixing the feedforward and error-computation parts of the chromosome, then evolving the feedback part responsible for computing weight updates. Using this methodology, learning behaviours with similar performance to the delta rule have been evolved. This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get esann96.ps ----------------------------------------------------------------------- Comments and criticisms welcome. Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom http://esewww.essex.ac.uk/~sml Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk secretary: Mrs Wendy Ryder (+44) 1206 872437 ------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Feb 1996 Subject: No subject Message-ID: This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get ieevisp.ps ----------------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Sep 1996 Subject: No subject Message-ID: This paper is available through my web page: http://esewww.essex.ac.uk/~sml or via anonymous ftp: ftp tarifa.essex.ac.uk cd /images/sml/reports get iwfhr96.ps ----------------------------------------------------------------------- Comments and criticisms welcome. Simon Lucas ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom http://esewww.essex.ac.uk/~sml Tel: (+44) 1206 872935 Fax: (+44) 1206 872900 Email: sml at essex.ac.uk secretary: Mrs Wendy Ryder (+44) 1206 872437 -------------------------------------------------  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: "The purpose of this book is to introduce multivariate statistical methods to non-mathematicians. It is not intended to be comprehensive. Rather, the intention is to keep the details to a minimum while still conveying a good idea of what can be done. In other words, it is a book to `get you going' in a particular area of statistical methods." 
jay Jay Moody Department of Cognitive Science, 0515 University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0515 fax: 619-354-1128 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Here are some thoughts: One general point is that I'm not entirely sure about what is meant by theglobal/local distinction. Certainly action at a distance can't take place; something physical happens to the cell/connection in question in order for it to change. As I understand it, the prototypical local learning is a Hebbian rule, where all the information specifying plasticity is in the pre and post-synaptic cells (ie "local" to the connection), while a global learning rule is mediated by something distal to the cell in question (i.e. a neuromodulatory signal). But of course the signal must contact the actual cell via diffusion of a chemical substance (e.g. dopamine). So one different distinction might be how specific the signal is; i.e. in a local rule like LTP the information acts only on the single connection, while a modulatory signal could change all the connections in an area by a similar amount. However, the effects of a neuromodulator could in turn be modulated by the current state of the connection - hence a global signal might act very differently at each connection. Which would make the global signal seem local. So I'm not sure the distinction is clearcut. Maybe its better to consider a continuum of physical distance of the signal to change and specificity of the signal at individual connections. A couple of specific comments follow: > A) Does plasticity imply local learning? > > The physical changes that are observed in synapses/cells in > experimental neuroscience when some kind of external stimuli is > applied to the cells may not result at all from any specific > "learning" at the cells.The cells might simply be responding to a > "signal to change" - that is, to change by a specific amount in a > specific direction. In animal brains, it is possible that the > "actual" learning occurs in some other part(s) of the brain, say > perhaps by a global learning mechanism. This global mechanism can > then send "change signals" to the various cells it is using to > learn a specific task. So it is possible that in these > neuroscience experiments, the external stimuli generates signals > for change similar to those of a global learning agent in the brain > and that > the changes are not due to "learning" at the cells > themselves. Please note that scientific facts/phenomenon like LTP/LTD > or synaptic plasticity can probably be explained equally well by > many theories of learning (e.g. local learning vs. global learning, > etc.). However, the correctness of an explanation would have to > be I think it would be difficult to explain the actual phenomenon of LTP/LTD as a response to some signal sent by a different part of the brain, since a good amount of the evidence comes from in vitro work. So clearly the "change signals" can't be coming from some distant part of the brain - unless the slices contain the necessary machinery for generating the change signal. 
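To pin down the local/global contrast being discussed here, the following is a toy sketch of the two kinds of weight update for a single model cell. The rates, the reward scalar and the weight-dependent gating are all invented for illustration and make no claim about any particular experiment, transmitter or pathway.

import numpy as np

rng = np.random.default_rng(1)
n_pre = 20
w = rng.normal(0.0, 0.1, n_pre)             # synaptic weights onto one model cell
x = rng.random(n_pre)                        # presynaptic firing rates
post = 1.0 / (1.0 + np.exp(-(w @ x)))        # postsynaptic rate
lr = 0.01

# "Local" Hebbian update: uses only the pre- and postsynaptic activity
# that is available at each synapse.
dw_local = lr * post * x

# "Global" update: the same correlational term gated by one broadcast scalar,
# standing in for a diffuse neuromodulatory signal such as dopamine.
reward = 1.0                                 # hypothetical global "change signal"
dw_global = lr * reward * post * x

# The same broadcast scalar acts differently at each synapse once its effect
# is gated by the current local state, which is the blurring of the
# local/global distinction described above.
dw_state_dependent = lr * reward * post * x * (1.0 - np.abs(w))

In the first rule everything the synapse needs is local; in the second the decisive factor is a single area-wide signal; the third shows how a nominally global signal can still end up having synapse-specific effects.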
Also, its of course possible that LTP/LTD local learning rules act in concert with global signals (as you mention below); these global signals being sent by nonspecific neuromodulators (an idea brought up plenty of times before). I'm not sure about the differences in the LTP/LTD data collected in vivo versus in vitro; I'm sure there are people out there studying it carefully, and this could provide insight. > > B) "Pure" local learning does not explain a number of other > activities that are part of the process of learning!! > > When learning is to take place by means of "local learning" in a > network of cells, the network has to be designed prior to its > training. Setting up the net before "local" learning can proceed > implies that an external mechanism is involved in this part of > the > learning process. This "design" part of learning precedes actual > training or learning by a collection of "local learners" whose > only > knowledge about anything is limited to the local learning law to > use! Of course, changing connection strengths seems to be the last phase of the "learning/development" process. Correct numbers of cells need to be generated, they have to get to their correct locations, proper connections between subpopulations need to be established and refined, and only at this point is there a substrate for "local" learning. All of these can be affected to a certain extent by environment. For example, the number of cells in the spinal cord innervating a peripheral target can be downregulated with limb bud ablation; conversely, the final number can be upregulated with supernumerary limb grafts. Another well known example is the development of ocular dominance columns. Here, physical connections can be removed (in normal development), or new connections can be established (evidence for this from the reverse suture experiments), depending on the given environment. What would be quite interesting would be if all these developmental phases are guided by similar principles, but acting over different spatial and temporal scales, and mediated by different carriers (e.g. chemical versus electrical signals). Alas, if only I had a well-articulated, cogent principle in hand with which to unify these disparate findings; my first Nobel prize would be forthcoming. In lieu of this, we're stuck with my ramblings. > > In order to learn properly and quickly, humans generally collect > and store relevant information in their brains and then "think" > about it (e.g. what problem features are relevant, problem > complexity, etc.). So prior to any "local learning," there must > be processes in the brain that examine this "body of > information/facts" about a problem in order to design the > appropriate network that would fit the problem complexity, select > the problem features that are meaningful, etc. It would be very > difficult to answer the questions "What size net?" and "What > features to use?" without looking at the problem in great detail. > A bunch of "pure" local learners, armed with their local learning > laws, would have no clue to these issues of net design, > generalization and feature selection. > > So, in the whole, there are a "number of activities" that need to > be performed before any kind of "local learning" can take place. > These aforementioned learning activities "cannot" be performed > by a collection of "local learning" cells! There is more to the > process of learning than simple local learning by individual cells. 
> Many learning "decisions/tasks" must precede actual training by > "local learners." A group of independent "local learners" simply > cannot start learning and be able to reproduce the learning > characteristics and processes of an "autonomous system" like the > brain. > > Local learning, however, is still a feasible idea, but only > within a general global learning context. A global learning > mechanism would be the one that "guides" and "exploits" these > local learners. However, it is also possible that the global > mechanism actually does all of the computations (learning) > and "simply sends signals" > to the network cells for appropriate synaptic adjustment. Both of > these possibilities seem logical: (a) a "pure" global mechanism > that learns by itself and then sends signals to the cells to > adjust, or (b) a global/local combination where the global > mechanism performs certain tasks and then uses the local mechanism > for training/learning. > > Note that the global learning mechanism may actually be implemented > with a collection of local learners!! > Notwithstanding the last remark, the above paragraphs perhaps run the risk of positing a little global homunculus that "does all the computations" and simply "sends signals" to the cells. I might be confused by the distinction between local and global learning. All we have to work with are cells that change their properties based on signals impinging upon them, be they chemical or electrical and originating near or far from the synapse, so it seems that a "global" learning mechanism *must* be implemented by local learners. (Again, if by local you specifically mean LTP/LTD or something similar, then I agree - other mechanisms are also at work). > The basic argument being made here is that there are many tasks > in a "learning process" and that a set of "local learners" armed > with their local learning laws is incapable of performing all of > those tasks. So local learning can only exist in the context of > global learning and thus is only "a part" of the total learning > process. > > It will be much easier to develop a consistent learning theory > using the global/local idea. The global/local idea perhaps will > also give us a better handle on the processes that we call > "developmental" and "evolutionary." One last comment. I'm not sure that the "developmental" vs. "learning" distinction is meaningful, either (I'm not hacking on your statements above, Asim; I think this distinction is more or less a tacit assumption in pretty much all neuroscience research). I read these as roughly equivalent to "nature vs. nurture" or "genetics vs. environment". I would claim that to say that any phenomenon is controlled by "genetics" is a scientifically meaningless statement. The claim that such-and-such a phenomenon is genetic is the modern equivalent of saying "The thing is there cause thats how god made it". Genes don't code for behavioral or physical attributes per se, they are simply a string of DNA which code for different proteins. Phenotypes can only arise from the genetic "code" by a complex interaction between cells and signals from their environment. Now these signals can be generated by events outside the organism or within the organism, and I would say that the distinction between development and learning is better thought of as whether the signals for change arise wholly within the organism or if the signals at least in part arise from outside the organism. 
Any explanation of either learning or development has to be couched in terms of what the relevant signals are and how they affect the system in question. anthony ============================================================ From: Russell Anderson, Ph.D. Smith-Kettlewell Eye Research Institute anderson at skivs.ski.org I read over the replies you received with interest. 1. In regards to Response #1 (J. Faith): I am not sure how relevant canalization is to your essay, but I wrote a paper on the topic a few years back: "Learning and Evolution: A Quantitative Genetics Approach" J. Theor. Biol. 175:89-101 (1995). Incidentally, the phenomenon known as "canalization" was described much earlier by Baldwin, Osborn, and Morgan (in 1896), and is more generally known as the "Baldwin effect." If you're interested, I could mail you a copy. 2. I take issue with the analogies used by Brendan McCane. His analogy of insect colonies is confused or irrelevant: First, the behavior of insects, for the purpose of this argument, does not indicate any individual (local) learning. Hence, the analogy is inappropriate. Second, the "global" learning occurring in the case of insect colonies operates at the level of natural selection acting on the genes, transmitted by the surviving colonies to new founding Queens. In this sense, individual ants are genetically ballistic ("pure developmental"). The genetics of insect colonies are well-studied in evolutionary biology, and he should be referred to any standard text on the topic (Dawkins, Dennett, Wilson, etc.). The analogy using computer science metaphors is likewise flawed or off the subject. ============================================================= From: Steven M. Kemp | Department of Psychology | email: steve_kemp at unc.edu Davie Hall, CB# 3270 | University of North Carolina | Chapel Hill, NC 27599-3270 | fax: (919) 962-2537 I do not know if it is quite on point, but Larry Stein at the University of California at Irvine has done fascinating work on a very different type of neural plasticity called In-Vitro Reinforcement (IVR). I have been working on neural networks whose learning algorithm is based on his data and theory. I don't know whether you would call those networks "local" or "global," but they do have the interesting characteristic that all the units in the network receive the same globally distributed binary reinforcement signal. That is, feedback is not passed along the connections, but distributed simultaneously and equally across the network after the fashion of nondirected dopamine release from the ventral tegmental projections. In any event, I will forward the guts of a recent proposal we have written here to give you a taste of the issues involved. I will be happy to provide more information on this research if you are interested. (Steven Kemp did mail me parts of a recent proposal. It is long, so I did not include it in this posting. Feel free to write to him or me for a copy of it.) ============================================================ From: "K. Char" I have a few quick comments: 1. The answer to some parts of the discussions seems to lie in the notion of a *SEQUENCE*. That is: global->local->(final) global; clearly the initial global is not the same as the final global. Some of the discussants seem to prefer the sequence: local->global. A number of such possibilities exist. 2. The next question is: who dictates the sequence? Is it a global mechanism or a local mechanism? 3.
In the case of the bee, though it had an individual goal, how was this goal arrived at? 4. In the context of neural networks (artificial or real): who dictates the node activation functions, the topology and the learning rules? Does every node find its own activation function? 5. Finally, how do we form concepts? Do the concepts evolve as a result of local interactions at the neuron level or through the interaction of micro-concepts at a global level which then trigger a local mechanism? 6. Here the next question could be: how did these micro-concepts evolve in the very first place? 7. Is it possible that these neural structures provide the *very motivation* for the formation of concepts at the global level in order to adapt these structures effectively? If so, does this motivation arise from the environment itself? ============================================================ Response # 1: As you mention, neuroscience tends to equate network plasticity with learning. Connectionists tend to do the same. However, this raises a problem with biological systems because this conflates the processes of development and learning. Even the smartest organism starts from an egg, and develops for its entire lifespan - how do we distinguish which changes are learnt, and which are due to development? No one would argue that we *learn* to have a cortex, for instance, even though it is due to massive embryological changes in the central nervous system of the animal. This isn't a problem with artificial nets, because they do not usually have a true developmental process and so there can be no confusion between the two; but it has been a long-standing problem in the ethology literature, where learnt changes are contrasted with "innate" developmental ones. A very interesting recent contribution to this debate is Andre Ariew's "Innateness and Canalization", in Philosophy of Science 63 (Proceedings), in which he identifies non-learnt changes as being due to canalised processes. Canalization was a concept developed by the biologist Waddington in the 1940s to describe how many changes seem to have fixed end-goals that are robust against changes in the environment. The relationship between development and learning was also thoroughly explored by Vygotsky (see collected works vol 1, pages 194-210). I'd like to see what other sorts of responses you get, Joe Faith Evolutionary and Adaptive Systems Group, School of Cognitive and Computing Sciences, University of Sussex, UK. ================================================================= Response # 2: I fully agree with you that local learning is not the one and only ultimate approach - even though it results in very good learning for some domains. I am currently writing a paper on the competitive learning paradigm. I am proposing that this competition that occurs e.g. within neurons should be called local competition. The network as a whole gives a global common goal to these local competitors and thus their competition must be regarded as cooperation from a more global point of view. There is a nice paper by Kenton Lynne that integrates the ideas of reinforcement and competition. When external evaluations are present, they can serve as teaching values; if not, the neurons compete locally.
@InProceedings{Lynne88, author = {K.J.\ Lynne}, title = {Competitive Reinforcement Learning}, booktitle = {Proceedings of the 5th International Conference on Machine Learning}, year = {1988}, publisher = {Morgan Kaufmann}, pages = {188--199} } ---------------------------------------------------------- Christoph Herrmann Visiting researcher Hokkaido University Meme Media Laboratory Kita 13 Nishi 8, Kita- Tel: +81 - 11 - 706 - 7253 Sapporo 060 Fax: +81 - 11 - 706 - 7808 Japan Email: chris at meme.hokudai.ac.jp http://aida.intellektik.informatik.th-darmstadt.de/~chris/ ============================================================= Response #3: I've just read your list of questions on local vs. global learning mechanisms. I think I'm sympathatic to the implications or presuppositions of your questions but need to read them more carefully later. Meanwhile, you might find very interesting a two-part article on such a mechanism by Peter G. Burton in the 1990 volume of _Psychobiology_ 18(2).119-161 & 162-194. Steve Chandler =============================================================== Response #4: A few years back, I wrote a review article on issues of local versus global learning w.r.t. synaptic plasticity. (Unfortunately, it has been "in press" for nearly 4 years). Below is an abstract. I can email the paper to you in TeX or postscript format, or mail you a copy, if you're interested. Russell Anderson ------------------------------------------------ "Biased Random-Walk Learning: A Neurobiological Correlate to Trial-and-Error" (In press: Progress in Neural Networks) Russell W. Anderson Smith-Kettlewell Eye Research Institute 2232 Webster Street San Francisco, CA 94115 Office: (415) 561-1715 FAX: (415) 561-1610 anderson at skivs.ski.org Abstract: Neural network models offer a theoretical testbed for the study of learning at the cellular level. The only experimentally verified learning rule, Hebb's rule, is extremely limited in its ability to train networks to perform complex tasks. An identified cellular mechanism responsible for Hebbian-type long-term potentiation, the NMDA receptor, is highly versatile. Its function and efficacy are modulated by a wide variety of compounds and conditions and are likely to be directed by non-local phenomena. Furthermore, it has been demonstrated that NMDA receptors are not essential for some types of learning. We have shown that another neural network learning rule, the chemotaxis algorithm, is theoretically much more powerful than Hebb's rule and is consistent with experimental data. A biased random-walk in synaptic weight space is a learning rule immanent in nervous activity and may account for some types of learning -- notably the acquisition of skilled movement. ========================================================== Response #5: Asim Roy typed ... > > B) "Pure" local learning does not explain a number of other > activities that are part of the process of learning!! .. > > So, in the whole, there are a "number of activities" that need to > be > performed before any kind of "local learning" can take place. > These aforementioned learning activities "cannot" be performed by > a collection of "local learning" cells! There is more to the > process of learning than simple local learning by individual cells. > Many learning "decisions/tasks" must precede actual training by > "local learners." 
A group of independent "local learners" simply > cannot start learning and be able to reproduce the learning > characteristics and processes of an "autonomous system" like the > brain. I cannot see how you can prove the above statement (particularly the last sentence). Do you have any proof. By analogy, consider many insect colonies (bees, ants etc). No-one could claim that one of the insects has a global view of what should happen in the colony. Each insect has its own purpose and goes about that purpose without knowing the global purpose of the colony. Yet an ants nest does get built, and the colony does survive. Similarly, it is difficult to claim that evolution has a master plan, order just seems to develop out of chaos. I am not claiming that one type of learning (local or global) is better than another, but I would like to see some evidence for your somewhat outrageous claims. > Note that the global learning mechanism may actually be implemented > with a collection of local learners!! You seem to contradict yourself here. You first say that local learning cannot cope with many problems of learning, yet global learning can. You then say that global learning can be implemented using local learners. This is like saying that you can implement things in C, that cannot be implemented in assembly!! It may be more convenient to implement it in C (or using global learning), but that doesn't make it impossible for assembly. ------------------------------------------------------------------- Brendan McCane, PhD. Email: mccane at cs.otago.ac.nz Comp.Sci. Dept., Otago University, Phone: +64 3 479 8588. Box 56, Dunedin, New Zealand. There's only one catch - Catch 22. =============================================================== Response #6: In regards to arguments against global learning:I think no one seriously questions this possibility, but think that global learning theories are currently non-verifiable/ non-falsifyable. Part of the point of my paper was that there ARE ways to investigate non-local learning, but it requires changes in current experimental protocols. Anyway, good luck. I look forward to seeing your compilation. Russell Anderson 2415 College Ave. #33 Berkeley, CA 94704 ============================================================== Response #7: I am sorry that it has taken so long for me to reply to your inquiry about plasticity and local/global learning. As I mentioned in my first note to you, I am sympathetic to the view that learning involves some sort of overarching, global mechanism even though the actual information storage may consist of distributed patterns of local information. Because I am sympathetic to such a view, it makes it very difficult for me to try to imagine and anticipate the problems for such views. That's why I am glad to see that you are explicitly trying to find people to point out possible problems; we need the reality check. The Peter Burton articles that I have sent you describes exactly the kind of mechanism implied by your first question: Does plasticity imply local learning? Burton describes a neurological mechanism by which local learning could emerge from a global signal. Essentially he posits that whenever the new perceptual input being attended to at any given moment differs sufficiently from the record of previously recorded experiences to which that new input is being compared, the difference triggers a global "proceed-to-store" signal. 
This signal creates a neural "snapshot" (my term, not Burton's) of the cortical activations at that moment, a global episodic memory (subject to stimulus sampling effects, etc.). Burton goes on to describe how discrete episodic memories could become associated with one another so as to give rise to schematic representations of percepts (personally I don't think that positing this abstraction step is necessary, but Burton does it). As neuroscientists sometimes note, while it is widely assumed that LTP/LTD are local learning mechanisms, the direct evidence for such a hypothesis is pretty slim at best. Of course one of the most serious problems with that view is that the changes don't last very long and thus are not really good candidates for long-term (i.e., lifelong) memory. Now, to my mind, one of the most important possibilities overlooked in LTP studies (inherently so in all in vitro preparations and so far as I know --which is not very far because this is not my field--in the in vivo preparations that I have read about) is that LTP/D is either an artifact of the experiment or some sort of short-term change which requires a global signal to become consolidated into a long-term record. Burton describes one such possible mechanism. Another motivation for some sort of global mechanism comes from the so-called 'binding problem' addressed especially by the Damasios, but others too. Somehow, somewhere, all the distributed pieces of information about what an orange is, for example, have to be tied together. A number of studies of different sorts have demonstrated repeatedly that such information is distributed throughout cortical areas. Burton distinguishes between "perceptual learning" requiring no external teacher (either locally or globally) and "conceptual learning", which may require the assistance of a 'teacher'. In his model though, both types of learning are activated by global "proceed-to-learn" signals triggered in turn by the global summation of local disparities between remembered episodes and current input. I'll just mention in closing that I am particularly interested in the empirical adequacy of neuropsychological accounts such as Burton's because I am very interested in "instance-based" or "exemplar-based" models of learning. In particular, Royal Skousen's _Analogical Modeling of Language_ (Kluwer, 1989) describes an explicit, mathematical model for predicting new behavior on analogy to instances stored in long-term memory. Burton's model suggests a possible neurological basis for such behavior. Steve Chandler ============================================================== Response #8: ******************************************************************* Fred Wolf E-Mail: fred at chaos.uni-frankfurt.de Institut fuer Theor. Physik Robert-Mayer-Str. 8 Tel: 069/798-23674 D-60 054 Frankfurt/Main 11 Fax: (49) 69/798-28354 Germany Could you please point me to a few neuroBIOLOGICAL references that justify your claim that > > A predominant belief in neuroscience is that synaptic plasticity > and LTP/LTD imply local learning (in your sense). > I think many people appreciate that real learning implies the concerted interplay of a lot of different brain systems and should not even be attempted to be explained by "isolated local learners". See e.g. the series of review papers on memory in a recent volume of PNAS 93 (1996) (http://www.pnas.org/). Good luck with your general theory of global/local learning.
best wishes Fred Wolf ============================================================== Response #9: I am into neurocomputing for several years. I read your arguments with interest. They certainly deserve further attention. Perhaps some combination of global-local learning agents would be the right choice. - Vassilis G. Kaburlasos Aristotle University of Thessaloniki, Greece ============================================================== =============================================================== Original Memo: A predominant belief in neuroscience is that synaptic plasticity and LTP/LTD imply local learning. It is a possibility, but it is not the only possibility. Here are some thoughts on some of the other possibilities (e.g. global learning mechanisms or a combination of global/local mechanisms) and some discussion on the problems associated with "pure" local learning. The local learning idea is a very core idea that drives research in a number of different fields. I welcome comments on the questions and issues raised here. This note is being sent to many listserves. I will collect all of the responses from different sources and redistribute them to all of the participating listserves. The last such discussion was very productive. It has led to the realization by some key researchers in the connectionist area that "memoryless" learning perhaps is not a very "valid" idea. That recognition by itself will lead to more robust and reliable learning algorithms in the future. Perhaps a more active debate on the local learning issue will help us resolve this issue too. A) Does plasticity imply local learning? The physical changes that are observed in synapses/cells in experimental neuroscience when some kind of external stimuli is applied to the cells may not result at all from any specific "learning" at the cells. The cells might simply be responding to a "signal to change" - that is, to change by a specific amount in a specific direction. In animal brains, it is possible that the "actual" learning occurs in some other part(s) of the brain, say perhaps by a global learning mechanism. This global mechanism can then send "change signals" to the various cells it is using to learn a specific task. So it is possible that in these neuroscience experiments, the external stimuli generates signals for change similar to those of a global learning agent in the brain and that the changes are not due to "learning" at the cells themselves. Please note that scientific facts and phenomenon like LTP/LTD or synaptic plasticity can probably be explained equally well by many theories of learning (e.g. local learning vs. global learning, etc.). However, the correctness of an explanation would have to be judged from its consistency with other behavioral and biological facts, not just "one single" biological phenomemon or fact. B) "Pure" local learning does not explain a number of other "activities" that are part of the process of learning!! When learning is to take place by means of "local learning" in a network of cells, the network has to be designed prior to its training. Setting up the net before "local" learning can proceed implies that an external mechanism is involved in this part of the learning process. This "design" part of learning precedes actual training or learning by a collection of "local learners" whose only knowledge about anything is limited to the local learning law to use! 
In addition, these "local learners" may have to be told what type of local learning law to use, given that a variety of different types can be used under different circumstances. Imagine who is to "instruct and set up" such local learners which type of learning law to use? In addition to these, the "passing" of appropriate information to the appropriate set of cells also has to be "coordinated" by some external or global learning mechanism. This coordination cannot just happen by itself, like magic. It has to be directed from some place by some agent or mechanism. In order to learn properly and quickly, humans generally collect and store relevant information in their brains and then "think" about it (e.g. what problem features are relevant, complexity of the problem, etc.). So prior to any "local learning," there must be processes in the brain that "examine" this "body of information/facts" about a problem in order to design the appropriate network that would fit the problem complexity, select the problem features that are meaningful, etc. It would be very difficult to answer the questions "What size net?" and "What features to use?" without looking at the problem (body of information)in great detail. A bunch of "pure" local learners, armed with their local learning laws, would have no clue to these issues of net design, generalization and feature selection. So, in the whole, there are a "number of activities" that need to be performed before any kind of "local learning" can take place. These aforementioned learning activities "cannot" be performed by a collection of "local learning" cells! There is more to the process of learning than simple local learning by individual cells. Many learning "decisions/tasks" must precede actual training by "local learners." A group of independent "local learners" simply cannot start learning and be able to reproduce the learning characteristics and processes of an "autonomous system" like the brain. Local learning or local computation, however, is still a feasible idea, but only within a general global learning context. A global learning mechanism would be the one that "guides" and "exploits" these local learners or computational elements. However, it is also possible that the global mechanism actually does all of the computations (learning) and "simply sends signals" to the network of cells for appropriate synaptic adjustment. Both of these possibilities seem logical: (a) a "pure" global mechanism that learns by itself and then sends signals to the cells to adjust, or (b) a global/local combination where the global mechanism performs certain tasks and then uses the local mechanism for training/learning. Thus note that the global learning mechanism may actually be implemented with a collection of local learners or computational elements!! However, certain "learning decisions" are made in the global sense and not by "pure" local learners. The basic argument being made here is that there are many tasks in a "learning process" and that a set of "local learners" armed with their local learning laws is incapable of performing all of those tasks. So local learning can only exist in the context of global learning and thus is only "a part" of the total learning process. It will be much easier to develop a consistent learning theory using the global/local idea. The global/local idea perhaps will also give us a better handle on the processes that we call "developmental" and "evolutionary." 
And it will, perhaps, allow us to better explain many of the puzzles and inconsistencies in our current body of discoveries about the brain. And, not the least, it will help us construct far better algorithms by removing the "unwarranted restrictions" imposed on us by the current ideas. Any comments on these ideas and possibilities are welcome. Asim Roy Arizona State University  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: When this book was conceived ten years ago, few scientists realized the width of scope and the power for applicability of the central ideas. Partially because of the enthusiastic reception of the first edition, open problems have been solved and new applications have been developed. We have added new material on the relation between data compression and minimum description length induction, computational learning, and universal prediction; circuit theory; distributed algorithmics; instance complexity; CD compression; computational complexity; Kolmogorov random graphs; shortest encoding of routing tables in communication networks; resource-bounded computable universal distributions; average case properties; the equality of statistical entropy and expected Kolmogorov complexity; and so on. Apart from being used by researchers and as reference work, the book is now commonly used for graduate courses and seminars. In recognition of this fact, the second edition has been produced in textbook style. We have preserved as much as possible the ordering of the material as it was in the first edition. The many exercises bunched together at the ends of some chapters have been moved to the appropriate sections. The comprehensive bibliography on Kolmogorov complexity at the end of the book has been updated, as have the ``History and References'' sections of the chapters. Many readers were kind enough to express their appreciation for the first edition and to send notification of typos, errors, and comments. Their number is too large to thank them individually, so we thank them all collectively. BLURB: Written by two experts in the field, this is the only comprehensive and unified treatment of the central ideas and their applications of Kolmogorov complexity---the theory dealing with the quantity of information in individual objects. Kolmogorov complexity is known variously as `algorithmic information', `algorithmic entropy', `Kolmogorov-Chaitin complexity', `descriptional complexity', `shortest program length', `algorithmic randomness', and others. The book is ideal for advanced undergraduate students, graduate students and researchers in computer science, mathematics, cognitive sciences, artificial intelligence, philosophy, statistics and physics. The book is self contained in the sense that it contains the basic requirements of computability theory, probability theory, information theory, and coding. Included are also numerous problem sets, comments, source references and hints to the solutions of problems, course outlines for classroom use, as well as a great deal of new material not included in the first edition. 
CONTENTS: Preface to the First Edition v How to Use This Book viii Acknowledgments x Preface to the Second Edition xii Outlines of One-Semester Courses xii List of Figures xix 1 Preliminaries 1 1.1 A Brief Introduction 1 1.2 Prerequisites and Notation 6 1.3 Numbers and Combinatorics 8 1.4 Binary Strings 12 1.5 Asymptotic Notation 15 1.6 Basics of Probability Theory 18 1.7 Basics of Computability Theory 24 1.8 The Roots of Kolmogorov Complexity 47 1.9 Randomness 49 1.10 Prediction and Probability 59 1.11 Information Theory and Coding 65 1.12 State Symbol Complexity 84 1.13 History and References 86 2 Algorithmic Complexity 93 2.1 The Invariance Theorem 96 2.2 Incompressibility 108 2.3 C as an Integer Function 119 2.4 Random Finite Sequences 127 2.5 *Random Infinite Sequences 136 2.6 Statistical Properties of Finite Sequences 158 2.7 Algorithmic Properties of 167 2.8 Algorithmic Information Theory 179 2.9 History and References 185 3 Algorithmic Prefix Complexity 189 3.1 The Invariance Theorem 192 3.2 *Sizes of the Constants 197 3.3 Incompressibility 202 3.4 K as an Integer Function 206 3.5 Random Finite Sequences 208 3.6 *Random Infinite Sequences 211 3.7 Algorithmic Properties of 224 3.8 *Complexity of Complexity 226 3.9 *Symmetry of Algorithmic Information 229 3.10 History and References 237 4 Algorithmic Probability 239 4.1 Enumerable Functions Revisited 240 4.2 Nonclassical Notation of Measures 242 4.3 Discrete Sample Space 245 4.4 Universal Average-Case Complexity 268 4.5 Continuous Sample Space 272 4.6 Universal Average-Case Complexity, Continued 307 4.7 History and References 307 5 Inductive Reasoning 315 5.1 Introduction 315 5.2 Solomonoff's Theory of Prediction 324 5.3 Universal Recursion Induction 335 5.4 Simple Pac-Learning 339 5.5 Hypothesis Identification by Minimum Description Length 351 5.6 History and References 372 6 The Incompressibility Method 379 6.1 Three Examples 380 6.2 High- Probability Properties 385 6.3 Combinatorics 389 6.4 Kolmogorov Random Graphs 396 6.5 Compact Routing 404 6.6 Average-Case Complexity of Heapsort 412 6.7 Longest Common Subsequence 417 6.8 Formal Language Theory 420 6.9 Online CFL Recognition 427 6.10 Turing Machine Time Complexity 432 6.11 Parallel Computation 445 6.12 Switching Lemma 449 6.13 History and References 452 7 Resource-Bounded Complexity 459 7.1 Mathematical Theory 460 7.2 Language Compression 476 7.3 Computational Complexity 488 7.4 Instance Complexity 495 7.5 Kt Complexity and Universal Optimal Search 502 7.6 Time-Limited Universal Distributions 506 7.7 Logical Depth 510 7.8 History and References 516 8 Physics, Information, and Computation 521 8.1 Algorithmic Complexity and Shannon's Entropy 522 8.2 Reversible Computation 528 8.3 Information Distance 537 8.4 Thermodynamics 554 8.5 Entropy Revisited 565 8.6 Compression in Nature 583 8.7 History and References 586 References 591 Index 618 If you are seriously interested in using the text in the course, contact Springer-Verlag's Editor for Computer Science, Martin Gilchrist, for a complimentary copy. Martin Gilchrist marting at springer-sc.com Suite 200, 3600 Pruneridge Ave. (408) 249-9314 Santa Clara, CA 95051 If you are interested in the text but won't be teaching a course, we understand that Springer-Verlag sells the book, too. To order, call toll-free 1-800-SPRINGER (1-800-777-4643); N.J. residents call 201-348-4033. For information regarding examination copies for course adoptions, write Springer-Verlag New York, Inc. , 175 Fifth Avenue, New York,NY 10010. 
You can order through the Web site: "http://www.springer-ny.com/" For U.S.A./Canada/Mexico- e-mail: orders at springer-ny.com or fax an order form to: 201-348-4505. For orders outside U.S.A./Canada/Mexico send this form to: orders at springer.de Or call toll free: 800-SPRINGER - 8:30 am to 5:30 pm ET (that's 777-4643 and 201-348-4033 in NJ). Write to Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY, 10010. Visit your local scientific bookstore. Mail payments may be made by check, purchase order, or credit card (see note below). Prices are payable in U.S. currency or its equivalent and are subject to change without notice. Remember, your 30-day return privilege is always guaranteed! Your complete address is necessary to fulfill your order.  From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: three distinct patterns which roughly corresponded to the global brain states W (wakefulness), S (slow wave or deep sleep) and R (REM sleep). The outputs from the self-organizing feature map were subsequently mapped via a Radial Basis Function (RBF) classifier onto three outputs, trained on sections of data on which experts agreed upon the stage W, S, or R. For each input, the resulting network produced probabilities for the three global stages, describing intermediate stages as a combination of three 'mixing fractions'. This general approach, yielding a novel way of describing brain state while exploiting some of the experts' knowledge in a partly supervised method, will be adopted and extended in the following ways:
- Many features extracted from the signals will be considered.
- Instead of the 2-dimensional feature map used by R&T, alternative approaches will be investigated. It has been shown that combinations of other clustering and mapping methods can outperform SOMs. Moreover, since topographic mapping is only exploited for visualization, the general approach can be based on more advanced clustering techniques (e.g. techniques for non-Gaussian clustering (Roberts 1997) or Bayesian-inspired methods).
- In order to cope with the large number of input features to be investigated, active feature selection methods will be applied.
- Techniques for intelligent sensor fusion will be investigated. When multiple sources are combined to lead to classification results, it is not trivial to decide which are the most relevant sources at any given time, or what should happen when sources fail to provide input (e.g. because an electrode is faulty). Approaches based on the computation of running error measures can be employed here.
The Imperial College group will be a leading centre in the theory subgroup of the project. We will be active in researching:
- Mixture density networks and mixtures of experts
- Model estimation and pre-processing
- Active sensor fusion
- Active feature and data selection
- Unsupervised data partitioning methods (clustering)
- Model comparison and validation
References 1. S.J. Roberts, Parametric and Non-parametric Unsupervised Cluster Analysis. Pattern Recognition, 30(2), 1997. 2. S.J. Roberts and L. Tarassenko, The Analysis of the Sleep EEG using a Multi-layer Neural Network with Spatial Organisation. IEE Proceedings Part F, 139(6), 420-425, 1992a. 3. S.J. Roberts and L.
Tarassenko, New Method of Automated Sleep Quantification. Medical and Biological Engineering and Computing, 30(5), 509-517, 1992b. 3) Assessment of cortical vigilance This project, funded by British Aerospace's Sowerby Research Centre at Bristol, aims to assess and predict lapses in vigilance in the human brain. Recordings of the brain's electrical activity are to be made and analysed. The utility of a device or system which may monitor an individual's level of vigilance is clear in a range of safety-critical environments. We propose that such utility would be enhanced by a system which, as well as monitoring the present state of vigilance, made a prediction as to the likely evolution of vigilance in the near future. To perform both these tasks, i.e. a static pattern assessment and a dynamic tracking and prediction, sophisticated methods of information extraction, sensor fusion and classification/regression must be employed. Over the last decade the theory of artificial neural networks has been pitched within the framework of advanced statistical decision theory, and it is within this framework that we intend to work. The aim of the project is to work towards a practical real-time system. The latter should be minimally intrusive and should make predictions of future vigilance states. Where appropriate, therefore, the investigation will assess each technique in the developing system with a view to its implementation in a real-time environment. The project will involve research into:
- New methods of signal complexity and synchronisation estimation
- Information flow estimation in multi-channel environments
- Active sensor fusion
- Prediction and classification
- Error estimation
- State transition detection and state sequence modelling
Reference Makeig, S. and Jung, T-P. and Sejnowski, T. (1996), Using feedforward neural networks to monitor alertness from changes in EEG correlation and coherence, Advances in Neural Information Processing Systems (NIPS), MIT Press, Cambridge, MA. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: descent in multilayered neural networks it is known that the necessary process of student specialization can be delayed significantly. We demonstrate that this phenomenon also occurs in various models of unsupervised learning. A solvable model of competitive learning is presented, which identifies prototype vectors suitable for the representation of high-dimensional data. The specific case of two overlapping clusters of data and a matching number of prototype vectors exhibits non-trivial behavior like almost stationary plateau configurations. As a second example scenario we investigate the application of Sanger's algorithm for principal component analysis in the presence of two relevant directions in input space. Here, the fast learning of the first principal component may lead to an almost complete loss of initial knowledge about the second one. --------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1997 ftp> binary ftp> get WUE-ITP-97-003.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-97-003.ps.gz e.g. unix> lp WUE-ITP-97-003.ps [10 pages] (*) can be replaced by "get WUE-ITP-97-003.ps". The file will then be uncompressed before transmission (slow!).
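For a concrete feel for the plateau effect described in the abstract above, the following sketch trains two prototypes by plain winner-take-all competitive learning on two overlapping clusters. Everything in it - the data, dimensionality, learning rate, and near-symmetric initialization - is invented for illustration and is not the paper's solvable model: for an initial phase both prototypes track the common mean of the data, and how long that unspecialized plateau lasts depends on the cluster separation, the learning rate, and the initial asymmetry.

import numpy as np

# Generic on-line competitive (winner-take-all) learning on two overlapping
# Gaussian clusters. Illustrative only: all parameters are invented here.
rng = np.random.default_rng(1)
dim, n_steps, lr = 20, 10000, 0.02
centers = np.stack([np.ones(dim), -np.ones(dim)]) * 0.25   # weakly separated cluster centers

# Both prototypes start near the overall mean: an almost symmetric,
# unspecialized configuration, i.e. the plateau.
prototypes = rng.normal(scale=1e-3, size=(2, dim))

for t in range(n_steps):
    cluster = rng.integers(2)                       # draw a cluster label at random
    x = centers[cluster] + rng.normal(size=dim)     # noisy example from that cluster
    winner = np.argmin(((prototypes - x) ** 2).sum(axis=1))
    prototypes[winner] += lr * (x - prototypes[winner])     # move the winner toward x
    if t % 2000 == 0:
        # Overlap of each prototype with each cluster center: on the plateau both
        # rows look alike; specialization, once it sets in, shows up as a clearly
        # diagonal pattern (one prototype per cluster).
        print(t, np.round(prototypes @ centers.T, 2))

The abstract reports a related effect for Sanger's rule, where fast learning of the first principal component can wipe out initial knowledge about the second.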
_____________________________________________________________________ -- Michael Biehl Institut fuer Theoretische Physik Julius-Maximilians-Universitaet Wuerzburg Am Hubland D-97074 Wuerzburg email: biehl at physik.uni-wuerzburg.de homepage: http://www.physik.uni-wuerzburg.de/~biehl Tel.: (+49) (0)931 888 5865 " " " 5131 Fax : (+49) (0)931 888 5141 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Take the 110 south past downtown LA to Exposition; at the exit, bend soft right to hop through a quick light at Flowers St. Continue straight one block, turn right on Figueroa. The USC campus is now on your left, but do not enter. Continue the length of the campus, turn left on Jefferson, and continue 2/3 of the length of the campus heading west, past Hoover. At the light at McClintock, turn left into the campus. This is the weekend entrance. See instructions for "EVERYBODY" below for the final details. Expected drive time on Saturday: 25 minutes. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Take the 10 west, exit Vermont, turn right (south) and proceed a half mile, turn left on Jefferson and continue to McClintock. The weekend entrance to the USC campus will be on your right. See instructions for "EVERYBODY" below for the final details. Expected drive time on Saturday: 5 minutes from Vermont exit. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Take the (5 N to the) 405 N to the 110 N, exit Exposition. Proceed straight through the light and the next light at the DMV entrance. Bend hard left past the DMV to cross under the freeway. Proceed through the light at Flowers, continue 1 block, and turn right on Figueroa. The USC campus is now on your left, but do not enter. Continue the length of the campus, turn left on Jefferson, and continue 2/3 of the length of the campus heading west, past Hoover. At the light at McClintock, turn left into the campus. This is the weekend entrance. See instructions for "EVERYBODY" below for the final details. Expected drive time on Saturday: 25 minutes. EVERYBODY Enter the USC campus and purchase an all-day parking pass ($6) at the guard booth. Proceed straight south on McClintock past the pool (on right), and playing field (on left) to the corner of 36th Place/Downey. You may park in lot 6 on your right or the large parking structure just ahead on the right. Seeley Mudd (SGM) is a tall brick and concrete building on the NE corner of Downey and McClintock. SGM 124 is a large auditorium on the ground floor. Look for coffee-drinking computational neuroscientists. 
From horvitz at MICROSOFT.com Tue Jun 6 06:52:25 2006 From: horvitz at MICROSOFT.com (Eric Horvitz) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: UAI '97 program and registration information Message-ID: Dear Colleague: I have appended program and registration information for the Thirteenth Conference on Uncertainty and Artificial Intelligence (UAI '97). More details and an online registration form are linked to the UAI '97 home page at http://cuai97.microsoft.com. UAI '97 will be held at Brown University in Providence, Rhode Island, August 1-3. In addition to the main program, you may find interesting the Full-Day Course on Uncertain Reasoning which will be held on Thursday, July 31. Details on the course can be found at http://cuai97.microsoft.com/course.htm. Please register for the conference and/or the course before early registration comes to an end on May 31, 1997. I would be happy to answer any additional questions about the conference. Best regards, Eric Horvitz Conference Chair ==================================================== Thirteenth Conference on Uncertainty in Artificial Intelligence (UAI '97) http://cuai97.microsoft.com August 1-3, 1997 Brown University Providence, Rhode Island, USA ============================================= ** UAI '97 Conference Program ** ============================================= Thursday, July 31, 1997 Conference and Course Registration 8:00-8:30am http://cuai97.microsoft.com/register/reg.htm Full-Day Course on Uncertain Reasoning 8:30-6:00pm http://cuai97.microsoft.com/course.htm _____________________________________________ Friday, August 1, 1997 Main Conference Registration 8:00-8:25am Opening Remarks Dan Geiger and Prakash P. Shenoy 8:25-8:30am Invited talk I: Local Computation Algorithms Steffen L. Lauritzen 8:30-9:30am Abstract: Inference in probabilistic expert systems has been made possible through the development of efficient algorithms that in one way or another involve message passing between local entities arranged to form a junction tree. Many of these algorithms have a common structure which can be partly formalized in abstract axioms with an algebraic flavor. However, the existing abstract frameworks do not fully capture all interesting cases of such local computation algorithms. The lecture will describe the basic elements of the algorithms, give examples of interesting local computations that are covered by current abstract frameworks, and also examples of interesting computations that are not, with a view towards reaching a fuller exploitation of the potential in these ideas. Invited talk II: Coding Theory and Probability Propagation in Loopy Bayesian Networks Robert J.
McEliece 9:30-10:30am Abstract: In 1993 a group of coding researchers in France devised, as part of their astonishing "turbo code" breakthrough, a remarkable iterative decoding algorithm. This algorithm can be viewed as an inference algorithm on a Bayesian network, but (a) it is approximate, not exact, and (b) it violates a sacred assumption in Bayesian analysis, viz., that the network should have no loops. Indeed, it is accurate to say that the turbo decoding algorithm is functionally equivalent to Pearl's algorithm applied to a certain directed bipartite graph in which the messages circulate around indefinitely, until either convergence is reached, or (more realistically) for a fixed number of cycles. With hindsight, it is possible to trace a continuous chain of "loopy" belief propagation algorithms within the coding community beginning in 1962 (with Gallager's iterative decoding algorithm for low density parity check codes), continued in 1981 by Tanner and much more recently (1995-1996) by Wiberg and MacKay-Neal. In this talk I'd like to challenge the UAI community to reassess the conventional wisdom that probability propagation only works in trees, since the coding community has now accumulated considerable experimental evidence that in some cases at least, "loopy" belief propagation works, at least approximately. Along the way, I'll do my best to bring the AI audience up to speed on the latest developments in coding. My emphasis will be on convolutional codes, since they are the building blocks for turbo-codes. I will mention that two of the most important (pre-turbo) decoding algorithms, viz. Viterbi (1967) and BCJR (1974), can be stated in orthodox Bayesian network terms. BCJR, for example, is an anticipation of Pearl's algorithm on a special kind of tree, and Viterbi's algorithm gives a solution to the "most probable explanation" problem on the same structure. Thus coding theorists and AI people have been working on, and solving, similar problems for a long time. It would be nice if they became more aware of each other's work.

Break 10:30-11:00am

** Plenary Session I: Modeling 11:00am-12:00pm Object-Oriented Bayesian Networks Daphne Koller and Avi Pfeffer (winner of the best student paper award) Problem-Focused Incremental Elicitation of Multi-Attribute Utility Models Vu Ha and Peter Haddawy Representing Aggregate Belief through the Competitive Equilibrium of a Securities Market David M. Pennock and Michael P. Wellman

Lunch 12:00-1:30pm

** Plenary Session II: Learning & Clustering 1:30-3:00pm A Bayesian Approach to Learning Bayesian Networks with Local Structure David Maxwell Chickering and David Heckerman Batch and On-line Parameter Estimation in Bayesian Networks Eric Bauer, Daphne Koller, and Yoram Singer Sequential Update of Bayesian Network Structure Nir Friedman and Moises Goldszmidt An Information-Theoretic Analysis of Hard and Soft Assignment Methods for Clustering Michael Kearns, Yishay Mansour, and Andrew Ng

** Poster Session I: Overview Presentations 3:00-3:30pm * Poster Session I 3:30-5:30pm Algorithms for Learning Decomposable Models and Chordal Graphs Luis M. de Campos and Juan F. Huete Defining Explanation in Probabilistic Systems Urszula Chajewska and Joseph Y. Halpern Exploring Parallelism in Learning Belief Networks T. Chu and Yang Xiang Efficient Induction of Finite State Automata Matthew S. Collins and Jonathon J.
Oliver A Scheme for Approximating Probabilistic Inference Rina Dechter and Irina Rish Limitations of Skeptical Default Reasoning Jens Doerpmund The Complexity of Plan Existence and Evaluation in Probabilistic Domains Judy Goldsmith, Michael L. Littman, and Martin Mundhenk Learning Bayesian Nets that Perform Well Russell Greiner Model Selection for Bayesian-Network Classifiers David Heckerman and Christopher Meek Time-Critical Action Eric Horvitz and Adam Seiver Composition of Probability Measures on Finite Spaces Radim Jirousek Computational Advantages of Relevance Reasoning in Bayesian Belief Networks Yan Lin and Marek J. Druzdzel Support and Plausibility Degrees in Generalized Functional Models Paul-Andre Monney On Stable Multi-Agent Behavior in Face of Uncertainty Moshe Tennenholtz Cost-Sharing in Bayesian Knowledge Bases Solomon Eyal Shimony, Carmel Domshlak and Eugene Santos Jr. Independence of Causal Influence and Clique Tree Propagation Nevin L. Zhang and Li Yan __________________________________________________________

Saturday, August 2, 1997

Invited talk III: Genetic Linkage Analysis Alejandro A. Schaffer 8:30-9:30am Abstract: Genetic linkage analysis is a collection of statistical techniques used to infer the approximate chromosomal location of disease susceptibility genes using family tree data. Among the widely publicized linkage discoveries in 1996 were the approximate locations of genes conferring susceptibility to Parkinson's disease, prostate cancer, Crohn's disease, and adult-onset diabetes. Most linkage analysis methods are based on maximum likelihood estimation. Parametric linkage analysis methods use probabilistic inference on Bayesian networks, which is also used in the UAI community. I will give a self-contained overview of the genetics, statistics, algorithms, and software used in real linkage analysis studies.

** Plenary Session III: Markov Decision Processes 9:30-10:30am Model Reduction Techniques for Computing Approximately Optimal Solutions for Markov Decision Processes Thomas Dean, Robert Givan and Sonia Leach Incremental Pruning: A Simple, Fast, Exact Algorithm for Partially Observable Markov Decision Processes Anthony Cassandra, Michael L. Littman and Nevin L. Zhang Region-based Approximations for Planning in Stochastic Domains Nevin L. Zhang and Wenju Liu

Break 10:30-11:00am * Panel Discussion: 11:00am-12:00pm Lunch 12:00-1:30pm

** Plenary Session IV: Foundations 1:30-3:00pm Two Senses of Utility Independence Yoav Shoham Probability Update: Conditioning vs. Cross-Entropy Adam J. Grove and Joseph Y. Halpern Probabilistic Acceptance Henry E. Kyburg Jr. Estimation of Effects of Sequential Treatments By Reparameterizing Directed Acyclic Graphs James M. Robins and Larry Wasserman

** Poster Session II: Overview Presentations 3:00-3:30pm * Poster Session II 3:30-5:30pm Network Fragments: Representing Knowledge for Probabilistic Models Kathryn Blackmond Laskey and Suzanne M. Mahoney Correlated Action Effects in Decision Theoretic Regression Craig Boutilier A Standard Approach for Optimizing Belief-Network Inference Adnan Darwiche and Gregory Provan Myopic Value of Information for Influence Diagrams Soren L. Dittmer and Finn V. Jensen Algorithm Portfolio Design: Theory vs. Practice Carla P. Gomes and Bart Selman Learning Belief Networks in Domains with Recursively Embedded Pseudo Independent Submodels J.
Hu and Yang Xiang Relational Bayesian Networks Manfred Jaeger A Target Classification Decision Aid Todd Michael Mansell Structure and Parameter Learning for Causal Independence and Causal Interactions Models Christopher Meek and David Heckerman An Investigation into the Cognitive Processing of Causal Knowledge Richard E. Neapolitan, Scott B. Morris, and Doug Cork Learning Bayesian Networks from Incomplete Databases Marco Ramoni and Paola Sebastiani Incremental Map Generation by Low Cost Robots Based on Possibility/Necessity Grids M. Lopez Sanchez, R. Lopez de Mantaras, and C. Sierra Sequential Thresholds: Evolving Context of Default Extensions Choh Man Teng Score and Information for Recursive Exponential Models with Incomplete Data Bo Thiesson Fast Value Iteration for Goal-Directed Markov Decision Processes Nevin L. Zhang and Weihong Zhang __________________________________________________________

Sunday, August 3, 1997

Invited talk IV: Gaussian processes - a replacement for supervised neural networks? David J.C. MacKay 8:20-9:20am Abstract: Feedforward neural networks such as multilayer perceptrons are popular tools for nonlinear regression and classification problems. From a Bayesian perspective, a choice of a neural network model can be viewed as defining a prior probability distribution over non-linear functions, and the neural network's learning process can be interpreted in terms of the posterior probability distribution over the unknown function. (Some learning algorithms search for the function with maximum posterior probability; others use Monte Carlo methods to draw samples from this posterior probability.) In the limit of large but otherwise standard networks, Neal (1996) has shown that the prior distribution over non-linear functions implied by the Bayesian neural network falls in a class of probability distributions known as Gaussian processes. The hyperparameters of the neural network model determine the characteristic lengthscales of the Gaussian process. Neal's observation motivates the idea of discarding parameterized networks and working directly with Gaussian processes. Computations in which the parameters of the network are optimized are then replaced by simple matrix operations using the covariance matrix of the Gaussian process. In this talk I will review work on this idea by Neal, Williams, Rasmussen, Barber, Gibbs and MacKay, and will assess whether, for supervised regression and classification tasks, the feedforward network has been superseded.

* Plenary Session V: Applications of Uncertain Reasoning 9:20-10:40am Bayes Networks for Sonar Sensor Fusion Ami Berler and Solomon Eyal Shimony Image Segmentation in Video Sequences: A Probabilistic Approach Nir Friedman and Stuart Russell Lexical Access for Speech Understanding using Minimum Message Length Encoding Ian Thomas, Ingrid Zukerman, Bhavani Raskutti, Jonathan Oliver, David Albrecht A Decision-Theoretic Approach to Graphics Rendering Eric Horvitz and Jed Lengyel

* Break 10:40-11:00am * Panel Discussion: 11:00am-12:00pm Lunch 12:00-1:30pm

** Plenary Session VI: Developments in Belief and Possibility 1:30-3:00pm Decision-making under Ordinal Preferences and Comparative Uncertainty D. Dubois, H. Fargier, and H. Prade Inference with Idempotent Valuations Luis D. Hernandez and Serafin Moral Corporate Evidential Decision Making in Performance Prediction Domains A.G. Buchner, W. Dubitzky, A. Schuster, P. Lopes, P.G. O'Donoghue, J.G. Hughes, D.A. Bell, K. Adamson, J.A. White, J. Anderson, M.D.
Mulvenna Exploiting Uncertain and Temporal Information in Correlation John Bigham

Break 3:00-3:30pm

** Plenary Session VII: Topics on Inference 3:30-5:00pm Nonuniform Dynamic Discretization in Hybrid Networks Alexander V. Kozlov and Daphne Koller Robustness Analysis of Bayesian Networks with Local Convex Sets of Distributions Fabio Cozman Structured Arc Reversal and Simulation of Dynamic Probabilistic Networks Adrian Y. W. Cheuk and Craig Boutilier Nested Junction Trees Uffe Kjaerulff __________________________________________________________

If you have questions about the UAI '97 program, contact the UAI '97 Program Chairs, Dan Geiger and Prakash P. Shenoy. For other questions about UAI '97, please contact the Conference Chair, Eric Horvitz. * * * UAI '97 Conference Chair Eric Horvitz (horvitz at microsoft.com) Microsoft Research, 9S Redmond, WA, USA http://research.microsoft.com/~horvitz UAI '97 Program Chairs Dan Geiger (dang at cs.technion.ac.il) Computer Science Department Technion, Israel Institute of Technology Prakash Shenoy (pshenoy at ukans.edu) School of Business University of Kansas http://pshenoy at stat1.cc.ukans.edu/~pshenoy/ ====================================================

To register for UAI '97, please use the online registration form at: http://cuai97.microsoft.com/register/reg.htm If you do not have access to the web, please use the appended ascii form. Detailed information on accommodations can be found at http://cuai97.microsoft.com/#lodge. Several blocks of rooms in on-campus housing at Brown University have been reserved for UAI attendees on a first-come, first-served basis. In addition, there are five hotels within a 1-mile radius of the UAI Conference (see http://www.providenceri.com/as220/hotels.html for additional information on hotels). Travel information is available at: http://cuai97.microsoft.com/#trav ======================================================

***** UAI '97 Registration Form ***** (If possible, please use the online form at http://cuai97.microsoft.com/register/reg.htm) ------------------------------------------------------------------------ ----------------- * Name (Last, First): _____________________________ * Affiliation: ___________________________ * Email address: ___________________________ * Mailing address: ___________________________ * Telephone: ___________________________ ------------------------------------------------------------------------ -----------------

** Registration Fees: >>> Main Conference <<< Fees (please circle and tally below): Early Registration: $225, Late Registration (After May 31): $285 Student Registration (certify below): $125, Late Registration (After May 31): $150 * * * >>> Full-Day Course on Uncertain Reasoning (July 31, 1997) <<< * Fees: With Conference Registration: $75, Without Conference: $125 Student (certify below): With Conference Registration: $35, Without Conference: $55 The registration fee includes the conference banquet on August 2nd and a package of three lunches which will be served on campus. * Student certification I am a full-time student at the following institution:____________________ Academic advisor's name:____________________ Academic advisor's email:____________________ * Conference Registration Fees: U.S. $ ________________________ Full-Day Course: U.S. $ ________________________ TOTAL: U.S. $ ________________________
______________________________________________________ Please make check payable to: AUAI or Association for Uncertainty in Artificial Intelligence Or indicate credit card payment(s) enclosed: ______ Mastercard ______ Visa Credit Card No.: _____________________________________________ Exp. Date: ________________________ Signature: ____________________________ For credit card payment, you may fax this form to: (206) 936-1616 Registrations by check/money order should be mailed to: Eric Horvitz Microsoft Research, 9S Redmond, WA 98052-6399 Fax: 206-936-1616

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Neural Networks for modelling and control Message-ID:

Dear all, Below are descriptions of two recently submitted papers dealing with neural networks for modelling and control. Respective drafts are available over http at http://www.mech.gla.ac.uk/~ericr/research.html Any comments would be greatly appreciated.

(1) Eric Ronco and Peter J. Gawthrop, 1997 (submitted). Incremental Model Reference Adaptive Polynomial Controllers Network. IEEE Transactions on Systems, Man, and Cybernetics. Abstract: The Incremental Model Reference Adaptive Polynomial Controllers Network (IMRAPCN) is a completely autonomous adaptive non-linear controller. This algorithm consists of a Polynomial Controllers Network (PCN) and an Incremental Network Construction (INC). The PCN is a network of polynomial controllers, each valid for a different operating region of the system. The use of polynomial controllers significantly reduces the number of controllers required to control a non-linear system while improving control accuracy, and does so without any drawbacks, since polynomials are 'linear in the parameters' functions. Such a control system can be used for the control of a possibly discontinuous non-linear system; it is not affected by the 'stability-plasticity dilemma' and yet can have a very clear architecture, since it is composed of linear controllers. The INC aims to resolve the clustering problem that faces any such multi-controller method. The INC enables a very efficient construction of the network as well as an accurate determination of the region of validity of each controller. Hence, the INC gives the PCN complete autonomy, since the clustering of the operating space can be achieved without any a priori knowledge about the system. These advantages make clear the powerful control potential of the IMRAPCN in the domain of autonomous adaptive control of non-linear systems.

(2) Eric Ronco and Peter J. Gawthrop, 1997 (submitted). Polynomial Models Network for system modelling and control. IEEE Transactions on Neural Networks. Abstract: For the purposes of control, it is essential that the chosen class of models is transparent, in the sense that the model structure and parameters may be interpreted in the context of control system design. The unclear representation of the system developed by most neural networks highly restricts their application to system modelling and control. Local computation tends to bring clarity to the neural representation. The local models network (LMN) applies this method while adapting different models to different operating regions of the system.
This paper builds on the Local Model Network approach and makes two main contributions: the network is not of a fixed structure, but rather is constructed incrementally online, and the models are not linear but rather polynomial in the variables. The resulting network is named the incremental polynomial model network (IPMN). In this paper we show that the transparency of the IPMN's representation makes model analysis and control design straightforward. The many advantages of this approach, set out in the conclusion, demonstrate the powerful capability of the IPMN to model and control non-linear systems.

----------------------------------------------------------------------------- | Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | -----------------------------------------------------------------------------

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

I found your commentary in the recent Connectionists posting very interesting. I have been working on the logical basis for neural computation for well over a decade now and have what I feel are many exciting results that I think help provide some focus regarding how to approach the problem of understanding and designing artificial neural systems. I have enclosed some references -- not to impress you, but to give you a flavor for the logical foundation that I have struggled with, ultimately getting it into an extremely simple and insightful explanation of neural computation. Anyway, I am very interested in what you talked about and am interested in trying to attend the described panel. I think that, finally, researchers are beginning to ask the right questions. I believe that learning itself is the computational search for the "right" questions, and so perhaps we, collectively, are about to really learn something about the computational objective(s) of the brain and of neural computation in general. Sincerely, Prof. Robert L. Fry

Relevant Publication List "A logical basis for neural network design," invited book chapter in Theory and application of neural networks, Academic Press, to be published 1997. "Neural Mechanics," invited paper, Proc. Int. Conf. Neur. Info. Proc., Hong Kong, 1996. "Rational neural models based on information theory," invited paper, Post-conference workshop on Neural Information Processing, NIPS'95. "Observer-participant models of neural processing," IEEE Trans. Neural Networks, July 1995. "Rational neural models based on information theory," 1995 Workshop on Maximum Entropy and Bayesian Methods, Santa Fe, NM, July 1995, sponsored by the Santa Fe Institute and Lawrence Livermore Laboratory. "Rational neural models," Information theory and the brain workshop, Stirling, Scotland, Sept 1995. "Neural processing of information," R. L. Fry, paper presentation at 1994 IEEE International Symposium on Information Theory, Norway. "A mathematical basis for an entropy machine that makes rational decisions," APL Internal Memo F1F(1)90-U-094, 1990. "Neural models for the distributed processing of information," APL IRAD report, 1991. "The principle of cross-entropy applied to neural networks and autonomous target recognition," APL Internal Memo F1F(1)90-U-005, 1990.
"Maximized mutual information using macrocanonical probability distributions," 1994 IEEE/IMS Workshop on Information Theory and Statistics, Arlington, VA. ============================================================= From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: I have been following your discussion and summarizations with some interest over the last few months and would like to provide some input at this point. Many of the issues you raise are quite important - in particular your call for a clear set of external objectives for any "brain-like" learner. However, the general context in which you have framed the question is somewhat limited. As I understand it, you are suggesting that the classical context of local learning and predefined structure is unrealistic. In many ways I agree .. I beleive the larger and more important context involves the issues of what has been called "life-long learning" and "learning to learn" and "consolidation and transfer of knowledge". I have no idea why so many researchers continue to pursue the development of the next best inductive algorithm or archiecture (be it ANN or not) when many of them understand that the percentage gains on predictive accuracy based solely on an example set is marginal in comparison to the use of prior knowledge (selection of inductive bias). The research community has looked very carefully at how we induce ANN models from examples ... but in comparision there has been very little work on the consolidation and transfer of the knowledge contained in previously learned ANN models. Most of the questions you have posed are subsumed by this larger =0Acontext.Primari= ly .. what type of mechanism(s) is required to (1) learn a task taking advantage of previous learning (prior knowledge), and (2) consolidate this new task knowledge for use in future learning. I do not think that the CNS (central nervous system) is successful in doing this - just by chance - there is something unique about it's architecture that allows =0Athis too= occur. Recent work by myself, Lorien Pratt, Sebatian =0AThrun, Rich Caru= ana, Tom Mitchell, Mark Ring, Johnathon Baxter, and others [NIPS 95 work shop on learning to learn] have provided =0Afounda= tions on which to build in this area. I encourage to review some of this material if you are interested and feel it applies. You can start by cheking out my homepage at ww.csd.uuwo.ca/~dsilver It will lead you to other related areas. ================================================================= Daniel L. Silver University of Western Ontario, London, Canada N6A 3K7 - Dept. of Comp. Sci. dsilver at csd.uwo.ca H: (902)582-7558 O: (902)494-1813 WWW home page .... http://www.csd.uwo.ca/~dsilver ================================================================= From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Regarding the panel discussion on "connectionist learning : is it time to reconsider the foundations ?", I read the arguments with interest and I would like to pose, in addition, another issue. That is, to simulate convincingly a biological system we should not probably be dealing solely with vectors of real numbers. The capacity to deal with symbols and other types of data merits also attention. 
In other words, besides memory and more global learning capabilities, it will be advantageous to be able to handle jointly disparate data such as real numbers, fuzzy sets, propositional statements, etc. Therefore, from a model-development point of view, it might be quite advantageous to consider working on less structured spaces than the conventional N-dimensional Euclidean space. Such a space could be a (mathematical) lattice. Note that all the previously mentioned data are in effect elements of a mathematical lattice. That is, not only is the conventional Euclidean space a lattice, but so are the set of propositional statements, the collection of fuzzy sets on a universe of discourse, etc. My tentative proposition is this: For machine learning purposes only, replace the Euclidean space by a lattice space. Just imagine how much the learning and decision-making robustness of a system would be enhanced if, in addition to memory, the capability to design the net on its own, polynomial complexity, generalization capability, etc., the system in question is also able to handle jointly disparate data. With considerations, Vassilis G. Kaburlasos Aristotle University of Thessaloniki, Greece ================================================================

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

Just to congratulate you for organizing such an important panel. On the foundations, I would like to remind you that even the point that learning must occur by synaptic adjustments is, in my opinion, another question that should be discussed. ===============================================================

An additional note from Prof. Weber Martins - weber at eee.ufg.br As a researcher with a Computer and Electrical Engineering background, I was not talking about biological plausibility. I am interested in the resolution of complex problems in an efficient way. I work mainly with weightless (Boolean) neural networks. In those models, we usually deal with the adjusting of neuronal "behavior", not synapses. In my point of view, the Neural Network community uses homogeneous networks (with the same type of neuron throughout the network) to simplify mathematical analysis and algorithms. However, from many books I read on NN, it seems that if you don't adjust synapses, you're not doing NN... From a computational point of view, this doesn't look right to me. ===============================================================

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

I presume that the panel organizers really want to focus on these two assumptions of connectionist learning rather than looking at other assumptions that are embedded in the whole connectionist approach. We should recognize, however, that the two assumptions listed above do not begin to exhaust the theoretical commitments that are bound up in most connectionist models. Some of these other assumptions are: 1) Place-coding. Inputs to the network are encoded by spatial patterns of activation of elements in the input layer rather than in the time-structure of the inputs (i.e. rate-place or labelled-line neural codes rather than temporal pattern codes). To the extent that input patterns are encoded temporally, in the std view a time-delay layer is used to convert this into a place pattern that is handled by a conventional network.
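As a minimal sketch of that "standard view" conversion (not from Cariani's message; the window length and the toy signal are illustrative assumptions), a tapped delay line turns the recent time structure of an input into a spatial, place-coded vector that a conventional network could consume:

# Sketch only: convert a temporally coded signal into place-coded vectors
# by sliding a fixed window (a "time-delay layer") over it.

def delay_layer(signal, n_taps):
    """Return, for each step, the last n_taps samples as one spatial vector."""
    vectors = []
    for t in range(n_taps - 1, len(signal)):
        vectors.append(signal[t - n_taps + 1 : t + 1])  # temporal pattern -> place pattern
    return vectors

if __name__ == "__main__":
    spike_counts = [0, 1, 0, 0, 2, 1, 0, 3]   # hypothetical temporally coded input
    for v in delay_layer(spike_counts, n_taps=3):
        print(v)                               # each vector would feed a conventional net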
However, examples of fine time structure have been found in many parts of the brain where there has been a concerted effort to look for it. 2) Scalar signals. There is one signal conveyed per element (no multiplexing of multiple signals onto the same lines), so that inputs to and outputs from each element are scalar quantities (1 and 2 are closely related). 3) Synchronous or near-synchronous operation. The inputs are summed together to produce an output signal for each time step. The std. neurobiological assumption is that neurons function as rate-integrators with integration times comparable to the time-steps of the inputs rather than coincidence detectors. But examples of coincidence detection in the brain are many, and there has been an ongoing debate about whether the cortical pyramidal cell is best seen in terms of an integrating element or as a coincidence detector. 4) Fan-out of the same signals to all target elements. Connection weights may differ, but the same signals are being fed to all of their targets. The std. neurobiological assumption is that impulses of spike trains invade all of the daughter branches of axonal trees. However, there may be conduction blocks at branchpoints that mean that some spikes will not propagate into some terminals (so that the targets do not receive all of the spikes that were transmitted through the axon trunk). There are examples of this in the crayfish. To the extent that one or more of these assumptions are altered, one can potentially get neural architectures with different topologies, and with potentially different capacities. I'm not one to quibble over what the topic of a discussion should be -- that's up to the participants. I'd just like to suggest that if we (a very general and inclusive "we") are going to "reconsider the foundations" of connectionism, we might think more broadly about ALL the assumptions, tacit and explicit, that are involved. My own sense is that, in the absence of a much deeper understanding of exactly what kinds of computational operations are being carried out by cortical structures and how these are carried out, we should probably avoid labels like "brain-like" that give the false sense that we understand more than we do about how the brain works. If one is seriously interested in how "brain-like" a given network architecture is, then one needs to get real, detailed neuroanatomical and/or neurophysiological data and make the comparison more directly. Comparisons with what's in the textbooks just won't do. Things get messy very fast when the neural responses are phasic, nonmonotonic, and have a multitude of different kinds of stimuli that produce them. Peter Cariani Peter Cariani, Ph.D. Eaton Peabody Laboratory Massachusetts Eye & Ear Infirmary 243 Charles St, Boston MA 02114 tel (617) 573-4243 FAX (617) 720-4408 email peter at epl.meei.harvard.edu ============================================================

Asim Roy's note to Peter Cariani: If I understand you correctly, you are saying that we need to broaden the set of questions. I am sure this issue will come up as we grapple with these questions. And one of the issues from the artificial neural network point of view is how exactly you replicate the detailed biological processes. You certainly want to extract the clever biological ideas, but at some point, say 50 years from now, we might do better than biology with our artificial systems. And we might do things differently than in biology. An example is our flying machines. We do better than the birds out there.
The functionality is there, but we do it differently and do it better. I think the point I am making is that we need not be tied to every biological detail, but we certainly want to pick up the good ideas and then develop them further. And in the end, we would have a system far superior to the biological ones, but not exactly like them in all the details. ==============================================================

Peter Cariani's reply to the above note: Yes, I think the most important decisions we make regarding how to construct neural nets in the image of the brain are to determine exactly which aspects of the real biological system are functionally relevant and which ones are not. I definitely agree with you that every biological detail is not important, and I myself am trying to work out highly abstracted schemes for how temporally-structured signals might be processed by arrays of coincidence detectors and delay lines. (Usually the standard criticisms from biologists are that not enough of the biological details are included.) What I am saying, however, is that the basic functional organization that is assumed by the standard connectionist models may not be the right one (or it may not be the only one or the canonical one). There are many aspects of connectionist models that really don't seem to couple well to the neurophysiology, so maybe we should go back and re-examine our assumptions about neural coding. I myself think that temporal pattern codes are far more promising than is commonly thought, but that's my idee fixe. (I could send you a review that I wrote on them if you'd like.) I definitely agree with you that once we understand the functional organization of the brain as an information processing system, then we will be able to build devices that are far superior to the biological ones. My motto is: "keep your hands wet, but your mind dry" -- it's important to pay attention to the biology, to not project one's preconceptions onto the system, but it's equally important to keep one's eyes on the essentials, to avoid getting bogged down in largely irrelevant details. I've worked on both adaptive systems and neurophysiology, and I know that the dry people tend to get their neural models from textbooks (that present an overly simplified and uncritical view of things), and the wet people tend not to say much about what kinds of general mechanisms are being used (they look to the NN people for general theories). There is a cycling of ideas between the two that we need to be somewhat wary of --- many neuroscientists begin to believe that information processing must be done in the ways that the neural networks people suggest, and consequently, they cast the interpretation of the data in that light. The NN people then use the physiological evidence to suggest that their models are "brain-like". Physiological data gets shoe-horned into a connectionist account (and especially what aspects of neural activity people decide to go out and observe), and the connectionist account is then held up as being an adequate theory of how the brain works. There are very few strong connectionist accounts that I know of that really stand up under scrutiny -- that are grounded in the data, that predict important aspects of the behavior, and that cannot be explained through other sets of assumptions.
In these discussions you really need both the physiologists, who understand the complexities and limitations of the data, and the theorists, who understand the functional implications, to interact strongly with each other. So, anyway, I wish you the best of luck with your session. ==============================================================

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

Your abstract is very interesting. It sounds like it will be a great discussion. The idea of using a different kind of learning which explicitly stored training data is one I've worked on in the past. A few questions that crossed my mind while reading the abstract are listed below:

On Wed, 23 Apr 1997, Asim Roy wrote:
> Classical connectionist learning is based on two key ideas.
> First, no training examples are to be stored by the learning
> algorithm in its memory (memoryless learning).

I'm a bit unclear about this. Aren't the weights of the network trained to implicitly store ALL the training examples? I would have said that connectionist learning is based on the idea that "ALL training examples are to be stored by the learning algorithm in its memory"! If I understand correctly, the first key idea is that the training examples are not EXPLICITLY stored in a way in which they could be retrieved or reconstructed. Perhaps my confusion lies in the word stored. How would you define that? I would further say that a number of dynamical recurrent networks like those discussed at my NIPS workshop (http://running.dgcd.doc.ca/NIPS/) do explicitly store presented examples. In fact, training algorithms like back-propagation through time have been criticized for having to explicitly store previous input and hidden unit patterns and thus consume extra memory resources. But, I guess you're probably aware of this since you have Lee on your panel.

> The second key idea is that of local
> learning - that the nodes of a network are autonomous learners.
> Local learning embodies the viewpoint that simple, autonomous
> learners, such as the single nodes of a network, can in fact
> produce complex behavior in a collective fashion. This second
> idea, in its purest form, implies a predefined net being provided
> to the algorithm for learning, such as in multilayer perceptrons.

In what sense are the learners autonomous? In the MLP each learner requires a feedback error value provided by another node (and ultimately an outside source) in order to update. I would say it's NOT autonomous.

> Second, strict local learning (e.g. back propagation type
> learning) is not a feasible idea for any system, biological
> or otherwise.

If it is not feasible for "any" learning system, then any system which attempts to use it must fail. Therefore, working connectionist networks must not use strict local learning. Therefore, strict local learning cannot be one of the fundamental ideas of the connectionist approach. Therefore why are we discussing it? I must have misunderstood something here...any ideas where I went off-track? ------------- Dr. Stefan C. Kremer, Research Scientist, Artificial Neural Systems Communications Research Centre, 3701 Carling Ave., P.O.
Box 11490, Station H, Ottawa, Ontario K2H 8S2 WWW: http://running.dgcd.doc.ca/~kremer/index.html Tel: (613)990-8175 Fax: (613)990-8369 E-mail: Stefan.Kremer at crc.doc.ca =============================================================== From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: In response to your invitation for comments on cogpsy at coglab.psy.soton.ac.uk I have the following : I believe it is past time to reconsider the foundations. The critical deficiency in current connectionism is lack of understanding of the meaning of the term "system architecture". Any system which performs a complex function using large numbers of components experiences very severe constraints on its system architecture if building, repairing or adding features to the system are to be feasible. Current electronic systems are in production which use billions of individual transistors. Such systems must have a simple functional architecture which can (for example) relate failures experienced at the high functional level to defects occurring at the device level. The reason the von Neumann architecture is ubiquitous for electronic systems is that it provides the means for such a simple relationship. It achieves this by partitioning functionality into consistent elements at a number of levels of detail. The consistent element is the instruction, and instruction based descriptions can be seen at every level. Typical levels include device (instruction 'open gate'); assembly code (instruction 'jump'); software (instruction 'do:[ ]'); procedure call (instruction 'disconnect x'); through features (instruction 'carry out Y') to major system functions (instruction 'periodically test for problems') and overall function. The requirement for a function to operate within a simple architecture is crucial. To illustrate, if I needed to design a function to connect telephones together, many designs would be possible, and would carry out the function efficiently, in some cases more efficiently than designs actually in use. However, the vast majority of those designs would be useless once it was necessary for them to interact with and support functions like testing (a connection did not work, which component is defective ?) or billing (who should be charged how much for that call ?) or adding features (allow recipient of call to identify caller). Conclusions drawn about human cognition from a simulation which performs (for example) face recognition without considering how that function would fit within the total cognitive/behavioral system are almost certainly invalid. In my 1990 book I argued that although the brain obviously did not have a von Neumann architecture, very similar pressures exist for a simple functional architecture (for example, the need to build many copies from DNA 'blueprints'). I went on to develop a new and complete system architecture based on an element of functionality appropriate to the brain, the pattern extraction/action recommendation element. This architecture separates cognition into a few major functions, which can in turn be partitioned further all the way down to neurons. The functionality required in neurons is defined by the functional partitioning at a higher level, and that functional partitioning is in turn constrained by the information available to individual neurons, the kind of changes which can be made in neuron connectivity, and the timescale of such changes. 
(see a couple of 1997 papers). This architecture shows phenomena which bear a remarkable resemblance to human brain phenomena, including unguided learning by categorization generating declarative memory; dream sleep; procedural memory; emotional arousal; and even internally generated image sequences. All these phenomena play a functional role in generating behavior from sensory input, and have also been demonstrated by electronic simulation (Coward 1996). My response to the questions to be posed to the panelists would be:

1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? In the pattern extraction hierarchy architecture, which appears to me to be the only option other than the obviously inapplicable von Neumann, one major type of learning (associated with a major separation in the architecture) is the process of sorting experience into categories and associating behaviors with those categories. A category is established by extracting and recording a set of patterns from one unfamiliar object, and developed by adding patterns extracted from any subsequent object which contains many of the patterns already in the category. Memory of objects is thus the prerequisite to this type of learning, which is associated with the cortex. Memoryless learning occurs in other major functions and is an appropriate model in those functions.

2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? The real issues here are first to identify what information could feasibly be made available to a neuron (e.g. past firing of the neuron itself; correlated firing of neurons in its neighborhood; correlated firing between the neuron and another neuron; correlated firing within a separate functional group of neurons; feedback from pleasure or pain; or feedback of some expected result). The second issue is to identify the nature of the feasible changes to the neuron which could be produced (e.g. assignment or removal of a neuron; addition or deletion of an input; correlated addition or deletion of a set of inputs; changes in relative strength of inputs; correlated changes in the strength of a set of inputs; general change in effective input strengths (i.e. threshold change); how long a change lasts). Only after these qualitative factors have been defined by higher functional requirements can quantitative algorithms be developed.

3. Who designs the network inside an autonomous learning system such as the brain? Within the pattern extraction hierarchy architecture it is possible to start from random connectivity and sort experienced objects into categories without guidance or feedback.

References: Coward L.A. (1990), 'Pattern Thinking', New York: Praeger (Greenwood). Coward L.A. (1996), 'Understanding of Consciousness through Application of Techniques for Design of Extremely Complex Electronic Systems', Towards a Science of Consciousness, Tucson, Arizona. Coward L.A. (1997), 'Unguided Categorization, Direct and Symbolic Representation, and Evolution of Cognition in a Modified Connectionist Theory', to be published in Proceedings of the Conference on New Trends in Cognitive Science, Austria 1997. Coward L.A. (1997), 'The Pattern Extraction Architecture: a Connectionist Alternative to the Von Neumann Architecture', to be published in the Proceedings of International Workshop on Artificial and Natural Neural Networks, Canary Islands 1997.
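The categorization process described in answer 1 above (a category is a recorded pattern set from one unfamiliar object, extended by later objects that share many of its patterns) can be sketched roughly as follows. This is only an illustrative reading of that description, not Coward's own algorithm: the overlap threshold and the use of feature sets are assumptions.

# Sketch of unguided categorization by pattern overlap: each category is a
# set of recorded patterns; a new object develops the category it overlaps
# most, otherwise it founds a new one.

def categorize(objects, threshold=0.5):
    categories = []                       # each category: a set of patterns
    for patterns in objects:              # patterns: features extracted from one object
        best, best_overlap = None, 0.0
        for cat in categories:
            overlap = len(cat & patterns) / max(len(patterns), 1)
            if overlap > best_overlap:
                best, best_overlap = cat, overlap
        if best is not None and best_overlap >= threshold:
            best |= patterns              # develop the category with the new patterns
        else:
            categories.append(set(patterns))  # unfamiliar object founds a new category
    return categories

if __name__ == "__main__":
    objs = [{"a", "b", "c"}, {"b", "c", "d"}, {"x", "y"}, {"y", "z"}]
    print(categorize(objs))   # two categories emerge without any guidance or feedback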
================================================================

An additional note from Prof. Andrew Coward: I believe the brain has a functional architecture which I label the pattern extraction hierarchy. At the highest level, functionality separates into five major functions. The first extracts constant patterns from the environment (e.g. object color independent of illumination). The second allows the set of patterns which have been extracted from one object to enter the third function. The third function generates a set of alternative behavioral recommendations with respect to the selected object. The fourth function selects one (or perhaps none) of the alternatives to proceed to action, and the fifth function implements the action. This functional separation can be observed in the major physiology (roughly, the primary sensory cortex, the thalamus, the cortex, the basal ganglia, and the cerebellum). There are of course levels of detail and complexity below this. Within each function, the needs of that function determine the functionality required from neurons within the function, subject to what neuron functionality is possible (which in turn is one factor forcing the use of the architecture). I mentioned in the earlier note that functional partitioning is constrained by the possible neuron functionality, given limitations in the areas of the information available to individual neurons, the kind of changes which can be made in neuron connectivity, and the timescale of such changes. Expanding on this somewhat: The source of information which controls changes to an extracted pattern could be: feedback from comparison with an expected result; feedback from pleasure or pain; past firing of the neuron itself; correlated firing of neurons in the neighbourhood; correlated firing between the neuron and another neuron; correlated firing within a separate functional group of neurons. The nature of the changes to the extracted pattern produced in the neuron could be: assignment or removal of a neuron; addition or deletion of an input; correlated addition or deletion of a set of inputs; changes in relative strength of inputs; correlated changes in the strength of a set of inputs; general change in effective input strengths (i.e. threshold change); changes in sensitivity to other parameters. The permanence of the changes to the extracted pattern could be: change only at the time the source of information is present; change for a limited time following the source of information being present; change for a long period following the source of information being present. In each high level function, the particular combination of information used, changes which occur, and timescale is dictated by the type of high level functionality. For example, in the behavioral alternative generation region the changes which are required include assignment of neurons, biased random assignment of inputs, setting sensitivity to the arousal factor for the assigned region, deletion of inactive inputs, and threshold reduction. The sources of information which ultimately control these changes are correlated firing of neurons in the neighbourhood, correlated firing of neurons in the neighbourhood with firing of an input, correlated firing of a separate functional group of neurons, and firing of the neuron itself. Timescale is short and long for different operations. Each type of change has an associated source(s) of information and timescale.
This combination of functionality at the neuron level gives rise to all the phenomena of declarative memory at the higher level, including dream sleep. A different combination of neuron parameters is required in the behavioral alternative selection function, and gives rise to learning which is not declarative. In this function, pleasure and pain act on recently firing neurons to modulate the ability of similar firing in the future to gain control of action. I apply the term 'memoryless' to this learning because no record of prior states is preserved (although the memory of pleasure and pain may be preserved in the alternative generation function). I regard the perceptron and even the adaptive resonance neurons as simplistic in the system sense, although it turns out that the Hebbian neuron plays an important system role. The above is a summary of some discussion in the papers which have been accepted for publication in the proceedings of a couple of conferences in the next month or so. I could send copies if you are interested. I appreciate the opportunity to discuss. Andrew Coward. ================================================================ APPENDIX: 1997 International Conference on Neural Networks (ICNN'97) Houston, Texas (June 8 -12, 1997) ---------------------------------------------------------------- Further information on the conference is available on the conference web page: http://www.mindspring.com/~pci-inc/ICNN97/ ------------------------------------------------------------------ PANEL DISCUSSION ON "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ------------------------------------------------------------------- This is to announce that a panel will discuss the above question at ICNN'97 on Monday afternoon (June 9). Below is the abstract for the panel discussion broadly outlining the questions to be addressed. I am also attaching a slightly modified version of a subsequent note sent to the panelist. I think the issues are very broad and the questions are simple. The questions are not tied to any specific "algorithm" or "network architecture" or "task to be performed." However, the answers to these simple questions may have an enormous effect on the "nature of algorithms" that we would call "brain-like" and for the design and construction of autonomous learning systems and robots. I believe these questions also have a bearing on other brain related sciences such as neuroscience, neurobiology and cognitive science. Asim Roy Arizona State University ------------------------- PANEL MEMBERS 1. Igor Aleksander 2. Shunichi Amari 3. Eric Baum 4. Jim Bezdek 5. Rolf Eckmiller 6. Lee Giles 7. Geoffrey Hinton 8. Dan Levine 9. Robert Marks 10. Jean Jacques Slotine 11. John G. Taylor 12. David Waltz 13. Paul Werbos 14. Nicolaos Karayiannis (Panel Moderator, ICNN'97 General Chair) 15. Asim Roy Six of the above members are plenary speakers at the meeting. ------------------------- PANEL TITLE: "CONNECTIONIST LEARNING: IS IT TIME TO RECONSIDER THE FOUNDATIONS?" ABSTRACT Classical connectionist learning is based on two key ideas. First, no training examples are to be stored by the learning algorithm in its memory (memoryless learning). It can use and perform whatever computations are needed on any particular training example, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. 
The second key idea is that of local learning - that the nodes of a network are autonomous learners. Local learning embodies the viewpoint that simple, autonomous learners, such as the single nodes of a network, can in fact produce complex behavior in a collective fashion. This second idea, in its purest form, implies a predefined net being provided to the algorithm for learning, such as in multilayer perceptrons. Recently, some questions have been raised about the validity of these classical ideas. The arguments against classical ideas are simple and compelling. For example, it is a common fact that humans do remember and recall information that is provided to them as part of learning. And the task of learning is considerably easier when one remembers relevant facts and information than when one doesn't. Second, strict local learning (e.g. back propagation type learning) is not a feasible idea for any system, biological or otherwise. It implies predefining a network "by the system" without having seen a single training example and without having any knowledge at all of the complexity of the problem. Again, there is no system that can do that in a meaningful way. The other fallacy of the local learning idea is that it acknowledges the existence of a "master" system that provides the design so that autonomous learners can learn. Recent work has shown that much better learning algorithms, in terms of computational properties (e.g. designing and training a network in polynomial time complexity, etc.) can be developed if we don't constrain them with the restrictions of classical learning. It is, therefore, perhaps time to reexamine the ideas of what we call "brain-like learning." This panel will attempt to address some of the following questions on classical connectionist learning: 1. Should memory be used for learning? Is memoryless learning an unnecessary restriction on learning algorithms? 2. Is local learning a sensible idea? Can better learning algorithms be developed without this restriction? 3. Who designs the network inside an autonomous learning system such as the brain? -------------------------

A SUBSEQUENT NOTE SENT TO THE PANELISTS The panel abstract was written to question the two pillars of classical connectionist learning - memoryless learning and pure local learning. With regard to memoryless learning, the basic argument against it is that humans do store information (remember facts/information) in order to learn. So memoryless learning, as far as I understand, cannot be justified by any behavioral or biological observations/facts. That does not mean that humans store any and all information provided to them. They are definitely selective and parsimonious in the choice of information/facts to collect and store. We have been arguing that it is the "combination" of memoryless learning and pure local learning that is not feasible for any system, biological or otherwise. Pure local learning, in this context, implies that the system somehow puts together a set of "local learners" that start learning with each learning example given to it (e.g. in back propagation) without having seen a single training example before and without knowing anything about the complexity of the problem. Such a system can be demonstrated to do well in some cases, but would not work in general. Note that not all existing neural network algorithms are of this pure local learning type.
For example, if I understand correctly, in constructive algorithms such as ART, RBF, RCE/hypersphere and others, a "decision" to create a new node is made by a "global decision-maker" based on evidence about the performance of the existing system. So there is quite a bit of global coordination and "decision-making" in those algorithms beyond the simple "local learning". Anyway, if we "accept" the idea that memory can indeed be used for the purpose of learning (Paul Werbos indicated so in one of his notes), the terms of the debate/discussion change dramatically. We then open the door to the development of far more robust and reliable learning algorithms with much nicer properties than before. We can then start to develop algorithms that are closer to "normal human learning processes". Normal human learning includes processes such as (1) collection and storage of information about a problem, (2) examination of the information at hand to determine the complexity of the problem, (3) development of trial solutions (nets) for the problem, (4) testing of trial solutions (nets), (5) discarding such trial solutions (nets) if they are not good enough, and (6) repetition of these processes until an acceptable solution is found. And these learning processes are implemented within the brain, without doubt, using local computing mechanisms of different types. But these learning processes cannot exist without allowing for storage of information about the problem. One of the "large" missing pieces in the neural network field is the definition or characterization of an autonomous learning system such as the brain. We have never defined the external behavioral characteristics of our learning algorithms. We have largely pursued algorithm development from an "internal mechanisms" point of view (local learning, memoryless learning) rather than from the point of view of "external behavior or characteristics" of these resulting algorithms. Some of these external characteristics of our learning algorithms might be: (1) the capability to design the net on their own, (2) polynomial time complexity of the algorithm in design and training of the net, (3) generalization capability, and (4) learning from as few examples as possible (quickness in learning). It is perhaps time to define a set of desirable external characteristics for our learning algorithms. We need to define characteristics that are "independent of": (1) a particular architecture, (2) the problem to be solved (function approximation, classification, memory, etc.), (3) local/global learning issues, and (4) issues of whether to use memory or not to learn. We should rather argue about these external properties than issues of global/local learning and of memoryless learning.
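The constructive step mentioned at the start of this note, a global decision-maker that remembers the training examples and adds a new node only when the stored evidence shows the current net is not good enough, can be sketched roughly as follows. The RBF-style units, the error tolerance, and the simple rule of centring a new unit on the worst-fit example are illustrative assumptions of this sketch, not taken from any of the algorithms cited above.

import math

# Rough sketch of a globally coordinated constructive learner: examples are
# stored in memory, and a *global* decision, based on the worst remaining
# error over the stored examples, adds a new Gaussian unit when needed.

def predict(units, x):
    return sum(w * math.exp(-((x - c) ** 2) / (2 * s ** 2)) for c, s, w in units)

def build_network(examples, tolerance=0.05, width=0.5, max_units=20):
    memory = list(examples)            # training examples are remembered, not discarded
    units = []                         # (centre, width, weight) triples
    while len(units) < max_units:
        errors = [(abs(y - predict(units, x)), x, y) for x, y in memory]
        worst_err, x, y = max(errors)  # global examination of the stored evidence
        if worst_err <= tolerance:     # global decision: the current net is good enough
            break
        units.append((x, width, y - predict(units, x)))  # new unit corrects the worst case
    return units

if __name__ == "__main__":
    data = [(x / 10.0, math.sin(x / 10.0)) for x in range(0, 63, 3)]
    net = build_network(data)
    print(len(net), "units; error at 1.0 =", abs(math.sin(1.0) - predict(net, 1.0)))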
With best regards,
Asim Roy
Arizona State University

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Neural networks for Modelling and Control Message-ID:

Dear all, Just to let you know of the http availability of a new technical report entitled "Neural networks for Modelling and Control" (it is a compressed file. Please, gunzip the file to view or print it). http://www.mech.gla.ac.uk/~ericr/pub/gmnn_rep.ps.gz or http://www.mech.gla.ac.uk/~yunli/reports.htm The report has been written by Eric Ronco and Peter J. Gawthrop. The keywords are: Neural Networks, Control, Modelling, Modularity.

Abstract: This report is a review of the main neuro-control technologies. Two main kinds of neuro-control approaches are distinguished. One entails developing a single controller from a neural network and the other one embeds a number of controllers inside a neural network. The single neuro-control approaches are mainly system inverse: the inverse of the system dynamics is used to control the system in an open loop manner. The Multi-Layer Perceptron (MLP) is widely used for this purpose, although there is no guarantee that it can succeed in learning to control the plant and, more importantly, the unclear representation it achieves prohibits the analysis of its learned control properties. These problems, and the fact that open loop control is not suitable for many systems, highly restrict the usefulness of the MLP for control purposes. However, the non-linear modelling capability of the MLP could be exploited to enhance model based predictive control approaches since, essentially, an accurate model of the plant is all that is required to apply this method. The second neuro-control approach can be seen as a modular approach since different controllers are used for the control of different components of the systems. The main modular neuro-controllers are listed. They are all characterised by a "gating system" used to select the modular units (i.e. controllers or models) valid for the computing of the current input pattern. These neural networks are referred to as the Gated Modular Neural Networks (GMNNs). Two of these networks are particularly suited for modelling oriented control purposes. They are the Local Model Network (LMN) and the Multiple Switched Models (MSM). Since the local models of the plant are linear, it is fairly easy to transform them into controllers. For the same reason, the analysis of the properties of these networks can be easily performed and it is straightforward to determine the parameter values of the controllers as linear regression methods can be applied. These advantages, among others related to a modular architecture, reveal the great potential of these GMNNs for the modelling and control of non-linear systems.
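The model-performance-based gating used by networks such as the MSM can be sketched in a few lines of Python. This is a toy illustration with assumed interfaces (lists of model and controller callables), not code from the report; it only shows the selection principle of picking, at each instant, the controller whose paired local model currently predicts the plant best.

import numpy as np

class GatedModularControl:
    def __init__(self, models, controllers):
        self.models = models            # callables: model(x) -> predicted output
        self.controllers = controllers  # callables: controller(x) -> control action

    def control(self, x, y_measured):
        # gating signal: one-step prediction error of each local model
        errors = [abs(m(x) - y_measured) for m in self.models]
        best = int(np.argmin(errors))   # the best-performing model "wins"
        return self.controllers[best](x)

def fit_linear_model(X, y):
    # local linear models can be fitted by ordinary least squares
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda x: float(x @ w)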
Regards,
Eric Ronco
----------------------------------------------------------------------------- | Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | -----------------------------------------------------------------------------

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Incremental Polynomial Model-Controller Network: a self organising non-linear controller Message-ID:

Dear all, Just to let you know of the http availability of a new technical report entitled "Incremental Polynomial Model-Controller Network: a self organising non-linear controller" (it is a compressed file. Please, gunzip the file to view or print it). http://www.mech.gla.ac.uk/~ericr/pub/csc9710.ps.gz The report has been written by Eric Ronco and Peter J. Gawthrop. The keywords are: Neural Networks, Control, Modelling, Self-Organisation.

Abstract: The aim of this study is to present the "Incremental Polynomial Model-Controller Network" (IPMCN). This network is composed of controllers, each one attached to a model used for its indirect design. At each instant the controller connected to the model performing the best is selected. An automatic network construction algorithm is described in this study. It makes the IPMCN a self-organising non-linear controller. However, the emphasis is on the polynomial controllers that are the building blocks of the IPMCN. From an analysis of the properties of polynomial functions for system modelling it is shown that multiple low order odd polynomials are very suitable to model non-linear systems. A closed loop reference model method to design a controller from an odd polynomial model is then described. The properties of the IPMCN are illustrated on a second order system having both system states $y$ and $\dot{y}$ involving non-linear behaviour. It shows that, as a component of a network or alone, a low order odd polynomial controller performs much better than a linear adaptive controller. Moreover, the number of controllers is significantly reduced with the increase of the polynomial order of the controllers, and an improvement of the control performance is proportional to the decrease of the number of controllers. In addition, the clustering free approach, applied for the selection of the controllers, makes the IPMCN insensitive to the number of quantities involving non-linearity in the system. The use of local controllers capable of handling systems with complex dynamics will make this scheme one of the most effective approaches for the control of non-linear systems.

Regards,
Eric Ronco
----------------------------------------------------------------------------- | Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | -----------------------------------------------------------------------------

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: ...
two self-organising non-linear controllers Message-ID:

Dear all, Just to let you know of the http availability of a paper submitted to the Journal "Modeling, Identification and Control" entitled "Incremental Controller Networks: a comparative study between two self-organising non-linear controllers" (it is a compressed file. Please, gunzip the file to view or print it). Get the file among other publications in http://www.mech.gla.ac.uk/~ericr/research.html or the file itself http://www.mech.gla.ac.uk/~ericr/pub/csc97011.ps.gz This paper has been written by Eric Ronco and Peter J. Gawthrop. The keywords are: Neural Networks, Control, Modelling, Self-Organisation.

Abstract: Two self-organising controller networks are presented in this study. The "Clustered Controller Network" (CCN) uses a spatial clustering approach to select the controllers at each instant. In the other gated controller network, the "Models-Controller Network" (MCN), it is the performance of the model attached to each controller which is used to achieve the controller selection. An algorithm to automatically construct the architecture of both networks is described. It makes the two schemes self-organising. Different examples of control of non-linear systems are considered in order to illustrate the behaviour of the ICCN and the IMCN. This makes clear that both these schemes perform much better than a single adaptive controller. The two main advantages of the ICCN over the IMCN concern the possibilities to use any controller as a building block of its network architecture and to apply the ICCN for modelling purposes. However, the ICCN appears to have serious problems coping with non-linear systems having more than a single variable implying a non-linear behaviour. The IMCN does not suffer from this trouble. This high sensitivity to the clustering space order is the main drawback limiting the use of the ICCN and therefore makes the IMCN a much more suitable approach to control a wide range of non-linear systems.

Regards,
Eric Ronco
----------------------------------------------------------------------------- | Dr Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | -----------------------------------------------------------------------------

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: A thesis on self-organising neuro-control Message-ID:

Dear all, Just to let you know of the http availability of my thesis (130 pages) entitled "Incremental Polynomial Controller Networks: two self-organising non-linear controllers" (it is a compressed file. Please, gunzip the file to view or print it). Get the file among other publications in http://www.mech.gla.ac.uk/~ericr/research.html or download it directly http://www.mech.gla.ac.uk/~ericr/pub/thesis.ps.gz The keywords are: Neural Networks, Control, Modelling, Self-Organisation.

Abstract: A step toward the development of a self-organising approach for the control of non-linear systems has been made by developing two "incremental polynomial controller networks". They constitute two systematic self-organising approaches for the control of non-linear systems with simple dynamics. Each network is composed of controllers having a region of activity over the system operating space.
One is the "Incremental Clustered Controller Network" (ICCN) and the other one is the "Incremental Model-Controller Network" (IMCN). The two controller networks differ in the manner in which they achieve the selection of the currently valid local controllers. In the ICCN the controller selection relies on a spatial clustering of the system operating space. In the IMCN, each controller is selected according to the performance of its connected model. Both these controller networks use an incremental algorithm to construct their architecture automatically. This algorithm is called the "Incremental Network Construction" (INC). It is the INC which makes the ICCN and IMCN self-organising approaches, since no a priori knowledge (except the system order) is required to apply them. Until now, the controller networks were composed of *linear* controllers. However, since a high number of linear controllers are required to accurately control a significantly non-linear system, the control capabilities of both these controller networks have been further extended by using *polynomial* controllers as the building blocks of the networks. An important advantage of polynomial functions is their capacity to smoothly approximate non-linear systems and yet have their parameters identifiable using linear regression methods (e.g. least squares). It has been shown in this study that odd low order polynomial functions are very suitable to model non-linear systems. Illustrating examples indicated that the use of such a function as the building block of the controller networks implies an important decrease of the number of controllers required to accurately control a system. Moreover, an improvement of the control performance was proportional to the decrease of the number of controllers, with the smoothness of the input transients being the main area of improvement. It was clear from various control examples that the incremental polynomial controller networks have a great potential in the control of non-linear systems. However, the IMCN is a more satisfactory approach than the ICCN. This is due to the clustering free approach applied by the IMCN for the selection of the controllers. It makes the IMCN insensitive to the number of quantities involving non-linearity in the system. It is argued that the use of local controllers capable of handling systems with complex dynamics makes this scheme one of the most effective self-organising approaches for the control of non-linear systems.

Best regards,
Eric Ronco
----------------------------------------------------------------------------- | Dr Eric Ronco | | Dt of Mechanical Engineering E.mail : ericr at mech.gla.ac.uk | | James Watt Building WWW : http://www.mech.gla.ac.uk/~ericr | | Glasgow University Tel : (44) (0)141 330 4370 | | Glasgow G12 8QQ Fax : (44) (0)141 330 4343 | | Scotland, UK | -----------------------------------------------------------------------------

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

A Brain-like Design to Learn Optimal Decision Strategies in Complex Environments (P. J. Werbos); PART VI. KNOWLEDGE DISCOVERY AND INFORMATION RETRIEVAL: Structural Learning and Rule Discovery from Data (M. Ishikawa); Measuring the Significance and Contributions of Inputs in Backpropagation Neural Networks for Rules Extraction and Data Mining (T. D. Gedeon); Applying Connectionist Models to Information Retrieval (S. J.
Cunningham et al.); PART VII : CONSCIOUSNESS IN LIVING AND ARTIFICIAL SYSTEMS: Neural Networks for Consciousness (J. G. Taylor); Platonic Model of Mind as an Approximation to Neurodynamics (W.Duch); Towards Visual Awareness in a Neural System (I. Aleksander et al.) Nov 1997 544pp Hardcover ISBN: 981-3083-58-1 US$79.00 For ordering information: http://www.springer.com.sg Springer-Verlag Singapore Pte Ltd 1 Tannery Road, Cencon I, #04-01 Singapore 347719 Tel : (65) 842 0112 Fax : (65) 842 0107 e-mail : springer at cyberway.com.sg From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From dst at cs.cmu.edu Tue Jun 6 06:52:25 2006 From: dst at cs.cmu.edu (Dave Touretzky) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Book: Neurons, Networks, and Motor Behavior Message-ID: [[ FORWARDED FROM THE COMP-NEURO MAILING LIST -- DST ]] The following is a book which readers of this list might find of interest. For more information please visit http://mitpress.mit.edu/promotions/books/STE1NHF97 Neurons, Networks, and Motor Behavior edited by Paul S. G. Stein, Sten Grillner, Allen I. Selverston, and Douglas G. Stuart Recent advances in motor behavior research rely on detailed knowledge of the characteristics of the neurons and networks that generate motor behavior. At the cellular level, Neurons, Networks, and Motor Behavior describes the computational characteristics of individual neurons and how these characteristics are modified by neuromodulators. At the network and behavioral levels, the volume discusses how network structure is dynamically modulated to produce adaptive behavior. Comparisons of model systems throughout the animal kingdom provide insights into general principles of motor control. Contributors describe how networks generate such motor behaviors as walking, swimming, flying, scratching, reaching, breathing, feeding, and chewing. An emerging principle of organization is that nervous systems are remarkably efficient in constructing neural networks that control multiple tasks and dynamically adapt to change. The volume contains six sections: selection and initiation of motor patterns; generation and formation of motor patterns: cellular and systems properties; generation and formation of motor patterns: computational approaches; modulation and reconfiguration; short-term modulation of pattern generating circuits; and sensory modification of motor output to control whole body orientation. Computational Neuroscience series. A Bradford Book. December 1997 262 pp. ISBN 0-262-19390-6 MIT Press * 5 Cambridge Center * Cambridge, MA 02142 * (617)625-8569 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: appropriate to distinguish between "memory" and "learning". Memory is simple recording of information and facts (word pairs, etc), whereas learning involves generalization (deriving additional general facts from that information). Generalization (e.g. categorization) depends on a body of information. Recording of names of people, objects, places and relations is simple memorization, not learning (generalization). Much of what is cited by Gale Martin, Gary Cottrell and Stefan Schaal is recording of information (memorization). 
And "relevant" information, facts may indeed be recorded "instantaneously" (memorized) by the brain, there is no question about that. The issue here is whether "learning" is instantaneous and permanent in the Hebbian sense. The study clearly indicates that is not so. ------------------- (C) COULD THE "LOSS OF SKILL" RESULT FROM TRAINING OF THE SAME NET, HEBBIAN STYLE (THE INTERFERENCE PROBLEM)? Several people raised this question (Eric Pitcher, Will Penny and others). There are a variety of problems with that argument. First, the same network may not be appropriate for learning the second motor skill. So, in that case, there are two possibilities to consider for any learning system, biological or otherwise. One, it could destroy the previous net and use its free neurons to create a new net (perhaps using more or less neurons than the previous net) to learn the second motor skill. But such distruction of the previous net will result in "total" loss of skill on the first task, not just "partial" loss of skill. So that possibility will not explain the phenomenon at hand. Second, if in fact the same net is used to learn the second motor skill, then one would have the problem of "catastrophic forgetting." As is well-known, catastrophic forgetting is observed in back-propagation and other types of networks when a previously trained network is subsequently trained with examples from a problem that is completely different from the previous one. And catastrophic forgetting is not just limited to pathological cases of learning. Catastrophic forgetting, in fact, is what we "depend on" when we talk about "adaptive learning" - adaptating to a new situation and forgetting the old. So learning of the second skill in the same net would also result in "total loss of skills" (catastrophic forgetting), not just "partial loss of skills." So this does not explain the phenomenon either. So this type of interference is not a good explanation for the phenomenon at hand - that of "partial loss of skills." -------------------- D) WHY DO YOU SAY CLASSICAL CONNECTIONIST LEARNING IS MEMORYLESS? ISN'T THERE MEMORY IN THE WEIGHTS? Several persons raised this issue. So I include this note below from one of my previous memos: "Memoryless learning implies there is no EXPLICIT storage of any learning example in the system in order to learn. In classical connectionist learning, the weights of the net are adjusted whenever a learning example is presented, but it is promptly forgotten by the system. There is no EXPLICIT storage of any presented example in the system. That is the generally accepted view of "adaptive" or "on-line learning systems." Imagine such a system "planted" in some human brain. And suppose we want to train it to learn addition. So we provide the first example - say, 2 + 2 = 4. This system then uses the example to promptly adjust the weights of the net and forgets the particular example. It has done what it is supposed to do - adjust the weights, given a learning example. Suppose, you then ask this "human", fitted with this learning algorithm: "How much is 2 + 2?" Since it has only seen one example and has not yet fully grasped the rule for adding numbers, it probably would give a wrong answer. So you, as the teacher, perhaps might ask at that point: "I just told you 2 + 2 = 4. What do you mean you don't remember?" And this "human" might respond: "Very honestly, I don't recall you ever having said that! I am very sorry." 
And this would continue to happen after every example you present to this "human" until complete learning has taken place!!! So do you think there is memory in those "weights"? Do you think humans are like that?"

------------------------
(E) A LAST NOTE:

The arguments I used against Hebbian-style learning did not rely in any way or form on the details of the PET studies. Only the external behavioral facts were used in the arguments. So questions about irreproducibility of PET and fMRI studies are irrelevant to this argument.

-------------------------------------------
RESPONSES FROM OTHERS

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

(REZA SHADMEHR is one of the authors of the study referred to in this discussion.) There is little question among the neuroscientists that practice only starts a process that continues to progress long after the presentation of information has stopped. One only has to consider the fact that the half-lives of proteins are on the order of minutes to hours, while memories, which are presumably represented as changes in protein-dependent synaptic mechanisms, may last a lifetime. How this is done remains a mystery. Perhaps, as our study hints, with time there are system-wide changes in representation of newly acquired memories. There is much more evidence for this in memories that rely on the medial parts of the temporal lobe, the regions where damage causes amnesia. We find evidence that memories that do not depend on the med. temporal lobe structures also show a time-dependent stability property and that this property is correlated with changes in brain regions of representation.
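The interference question raised in note (C) above, and the consolidation account given here, can both be caricatured in a small simulation. The sketch below, with an assumed scikit-learn regressor standing in for the motor net, only illustrates how back-to-back training on a second task degrades a single shared net's performance on the first; it is not a model of the PET study.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (200, 2))
task_a = np.sin(3 * X[:, 0]) + X[:, 1]   # "first motor skill"
task_b = np.cos(3 * X[:, 1]) - X[:, 0]   # "interfering second skill"

def retention_error(train_second_task):
    net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000, random_state=0)
    net.fit(X, task_a)                    # identical practice session for both groups
    if train_second_task:
        for _ in range(200):              # immediate back-to-back practice on task B
            net.partial_fit(X, task_b)
    return np.mean((net.predict(X) - task_a) ** 2)

print("error on first skill, rest group        :", retention_error(False))
print("error on first skill, interference group:", retention_error(True))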
The existence of consolidation effects (and possibly memory-based learning) does not rule out the existence of real-time, instantaneous learning. There is perhaps a half-century of psychological research on interference effects in learning that argue for a broader view. When one experiences an event, the ability to recall the event is influenced both by what occurs prior to the event (referred to as proactive interference), and what occurs after it (referred to as retroactive interference). Proactive interference effects indicate that the more familiar you are with a stimulus, the less impact an encounter with it will have on your memory of the encounter (see the psychological literature on word-frequency effects on recognition, repetition effects, lag effects, von Restorff effects, and proactive inhibition in paired- associates learning). Many of these effects occur with familiarity that is established within the immediately prior seconds, minutes, or hours of the experiment, so some type of instantaneous learning occurs that impacts longer-term learning. Retroactive interference effects have been studied most thoroughly in the paired-associates learning paradigm. Here, the point is that when you learn, you are often learning an association between a stimulus and a response. The paired-associates learning paradigm involves having subjects learn a list of stimulus-response pairs, such that when they are presented with each of the stimuli in the list, they can retrieve the corresponding response. Retroactive interference effects occur when, after this learning, subjects are given a new list, with either the same or similar stimuli, paired with new responses. What typically happens is that, even if the first list is learned perfectly, learning the second list interferes with retrieving the responses from the first list. These results occur over short periods of time, and over longer periods of time. Hence, the results from the study you cite, might possibly be explained as retroactive interference effects, as well as, or instead of, as consolidation effects. >A logical explanation perhaps for the "loss of >motor skill" phenomenon, as for any other similar phenomenon, is >that the brain has a limited amount of working or short term >memory. And when encountering important new information, the brain >stores it simply by erasing some old information from the working >memory. And the prior information gets erased from the working >memory before the brain has the time to transfer it to a more >permanent or semi-permanent location for actual learning. The problem here is that short-term memory and working memory have more precise meanings associated with them. Basically, they refer to what you can pay attention to, or rehearse internally or externally, at one time. The capacity is very limited, and so you would be continually changing the contents of short-term memory in working on a single task like the one you describe. Thus, for memory-based learning to occur, there must be some form of instantaneous learning that keeps the to-be-remembered stimuli around long enough for consolidation to occur, but this is not what is commonly referred to as short-term or working memory. --------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Thank you for your reply. I think you are right that we probably would agree about a lot of issues in this area. 
Nevertheless, I still feel the need to caution you about making the strong claim that there is no real time learning, as defined by evidence of generalization. One of the research paradigms used to investigate human category learning involves first creating a prototype. Sometimes the prototype is an image of randomly arranged dots. Sometimes it is a collection of related sentences, or an image of spatially arranged objects. Exemplars of the concept are then created by applying transformations of various sorts to the prototype. In many cases, the exemplars are created so that some are relatively close, or similar, to the prototype, and some are relatively distant, or dissimilar from the prototype. An experiment involves training people to categorize the exemplars, and then after this learning has occurred, testing them on other exemplars, not seen before, and on the prototype as well. Since most psychological experiments are conducted on undergraduates, in a single one-hour session, many of these category learning experiments take place within a single hour. People usually can perform this task, without being exposed to the number of stimuli we would expect that a neural net would need for such learning, and they can usually accomplish the learning within an hour (which argues against a time-consuming consolidation process being the responsible mechanism). Hence, I think it would be relatively easy to disprove your strong claim, using the enormous empirical literature psychologists have generated over the past half-century or so. Nevertheless, I also think it is great that you are making such a strong claim because it centers attention on the fact that people are capable of category learning which would seem to be impossible by our current computational conceptions of learning, due to high input dimensionality, and the complex mapping functions they apparently are able to approximate through category learning. If we can discover how they do this (and I think the psychological literature provides some clues) we may be able to extend such capabilities to artificial neural nets.

----------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

>It shouldn't be too hard too explain the "loss of skill" >phenomenon, from back-to-back instructions on new motor skills, >that was observed in the study. The explanation shouldn't be >different from the one for the "forgetting of instructions" >phenomenon that occurs with back-to-back instructions in any >learning situation. A logical explanation perhaps for the "loss of >motor skill" phenomenon, as for any other similar phenomenon, is >that the brain has a limited amount of working or short term >memory. And when encountering important new information, the brain >stores it simply by erasing some old information from the working >memory. And the prior information gets erased from the working >memory before the brain has the time to transfer it to a more >permanent or semi-permanent location for actual learning. So "loss >of information" in working memory leads to a "loss of skill."

Re the above commentary: Couldn't you apply the same sort of argument about finite working memory capacity to finite real-time learning capacity? What about the following hypothesis? The brain has a limited real-time learning capacity (say a few networks in the frontal lobe that can do real-time learning).
This learning is later transferred to other brain areas (the areas that will carry out the task in the future). But these real-time learning networks can be overwritten when people are exposed to sequences of novel tasks. So there could still be a role for real-time learning in the brain.

------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

I don't understand quite where you get this idea that Cognitive Scientists believe learning is memoryless. All you have to do is read any introductory text to Cognitive Science to see all of the different kinds of memory there are, and how learning can go on in these different subsystems. Perhaps I am not getting your point - that real-time learning is somehow different? But the simplest kind of learning is rote learning, and one could argue that the hippocampus stores examples (which then train cortex, see papers by Larry Squire). Also, I would guess that most of us believe that there are attentional filters on what gets learned - not *every* example gets to be learned upon.

---------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

I think most definitions start with rote learning, which is memorization. Thus I don't see how you can call learning memoryless. Even implicit learning requires storage of the skill in your synaptic strengths ("weights"), which is one version of what "implicit memory" is.

------------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

It might interest you to look at the following paper:

@Article{mcclelland95,
  author  = "J. L. McClelland and B. L. McNaughton and R. C. O'Reilly",
  title   = "Why there are complementary learning systems in the hippocampus and neocortex: Insights from the successes and failures of connectionist models of learning and memory.",
  journal = "Psych. Rev.",
  year    = 1995,
  volume  = 102,
  pages   = "419--457"
}

-----------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

some comments concerning your posting:

> > What are the real implications of this study? One of the most > important facts is that although both groups had identical >training sessions, they had different levels of learning of the >motor task because of what they did subsequent to practice. From >this fact alone one can conclude with some degree of certainty >that real-time, instantaneous learning >is not used for learning motor skills. .... > So real-time, instantaneous and permanent >weight-adjustment > (real-time learning) is contradictory to the results here.

I do not get your point. Assume both groups learned the same level of performance. Now they subsequently do something else. One group learns a new motor skill which interferes with the previously learned motor skill in short term motor memory. The other group does unrelated tasks (clearly nothing comparable to Reza's manipulandum task), and this group does not have interference with the short term memory. Why does this exclude real-time learning?
The consolidation process later to put STM into LTM is not relevant to this question.

> Second, from a broader behavioral perspective, all types of > "learning" by the brain involves collection and storage of > information prior to actual learning. As is well known, the > fundamental process of learning involves: (1) collection and > storage of information about a problem, (2) examination of the > information at hand to determine the complexity of the >problem, (3)development of trial solutions (nets) for the >problem, (4) >testing of trial solutions (nets), (5) discarding such >trial solutions (nets) if they are not good enough, and >(6) repetition of these processes until an acceptable >solution is found. Real-time learning is not compatible >with these learning processes.

Why would you make this statement about the brain? Nobody really understands how learning in the brain works, and just because the neural network community has this procedure to deal with the bias-variance dilemma, I would not believe that this is the only way to achieve good learning results. We actually worked on a learning algorithm for a while which can achieve incremental learning without all these steps you enumerated. All it needed is a smoothness bias. (ftp://ftp.cc.gatech.edu/pub/people/sschaal/schaal-NC97.ps.gz)

> One has to remember that the essence of learning is >generalization. In order to generalize well, one has to >look at the whole body of information relevant to a >problem, not just bits and pieces of the information at a >time as in real-time learning. So the argument against >real-time learning is simple: one cannot >learn (generalize) unless one knows what is there to learn >(generalize). One finds out what is there to learn >(generalize) by collecting and storing information about >the problem. In other >words, no system, biological or otherwise, can prepare >itself to learn (generalize) without having any >information about what is to be learnt (generalized).

You are right in saying that one needs prior information for generalization. However, there are classes of problems where general priors will be sufficient to generalize. Nobody can do extrapolation without having strong domain knowledge. But you might be able to do safe interpolation with some generic biases, which nature may have developed. Again, the above paper talks about related topics.

> Another fact from the study that is highly significant is >that the brain takes time to learn. Learning is not quick >and instantaneous.

But this may depend on the task. Other tasks can be acquired more quickly. I assume it is safe to say that the biological system is only able to learn certain tasks very quickly, and others not. This is why playing good golf or tennis is so hard. But learning to balance a pole happens quite quickly in humans. Interesting arguments, but I do not see how you can make any of your claims about real-time learning. What is your counter-hypothesis? Pure memory-based learning?

------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

I have a small doubt! I think real-time learning need not be memoryless. As an example (in the context of my work) I would say that it is possible to evolve a learning rule for a neural network along with the structures. That is, the coevolution of structure and learning is possible.
The structures eventually implement a long term memory! ------------------------------------------------------------ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: After getting your text from cogsci, I forwarded it to a collegue who works in the theatre world. I'm forwarding you his "gut" reactions. I am working on a neuro-vision of Cognitive Sci, and he is in contemporary theatre. For some time we've been working together on topics of motor planning, etc. If you want to know more of what we do, have a look at our web page : http://user.orbit.net.mt/josch/... I've added (enclosed in [**....**]) some comments as reading notes! Keep in touch...your ideas are exciting to us! From: "Dr John Schranz" To: "Glyn Goodall" 2. Real time learning - WOW! That is right home! And we have much much to say there. For one thing, but I have to think deeply on this, "imitation" in its common understanding is shaken. I can only DO something as *I* see that it "is". Which means that at any one moment of real time I am seeing that thing differently to what it "is" .. and I am more truly engaged in: > >(1) collection and storage of information about a problem, >(2) examination of the information at hand to determine the >complexity of the problem, >(3) development of trial solutions (nets) for the problem, >(4) testing of trial solutions (nets), >(5) discarding such trial solutions (nets) if they are not >good enough, and (6) repetition of these processes until >an acceptable solution is found." Those are much more in the street of the way I understand learning to be, which is PRIMARILY the identification of the problems which are envisaged whenever one sees onself entering into ANY SORT OF RELATIONSHIP - whether it be with an object (animate or not, it's the same thing) or with a subject (human or not, it's the same thing) ... IN EACH CASE THAT ENVISAGED *RELATIONSHIP* IS SEEN TO BE FRAUGHT WITH POTENTIAL SLIPS (imaginary or not, it's the same thing; minor or not, it's the same thing)... AND WHAT WE CALL *TASKS* ARE, PRECISELY, THE NAVIGATION OF THOSE POTENTIAL SLIPS. This is what Frank [**Cammileri **]- and I mean when we address what we refer to as "Alterity"... "Otherness"... "Difference".... in a course of lectures we have given here [** University of Malta **] and at other universities abroad. >Real-time learning is not compatible with these learning >processes. One has to remember that the essence of >learning is generalization.In order to generalize well, >one has to look at the >whole body of information relevant to a problem, not just >bits and pieces of the information at a time as in >real-time learning. Precisely. That is nearly verbatim (in my reading at least) what I have just expounded on above. >So the argument against real-time learning is simple: one cannot >learn (generalize) unless one knows what is there to learn >(generalize). And by "knows" as used above I understand "one only knows by relating that which IS IN one's own experience ALREADY to that which as yet is not ...and we use 'knows' in specifically THIS sense".... And that is "to generalise" .. or in other words to "analogise", "AS IF *this* thing (which I do not 'know') were *that* thing which is in my experience"... >One finds out what is there to learn (generalize) by >collecting and storing information about the problem. 
In >other words, no system, biological or otherwise, can >prepare itself to learn (generalize) without having any >information about what is to be learnt (generalized).

Precisely what I have just said ... or, anyway, that's how *I* see it!!!!! Then, of course, comes the big discourse on partituras, or scores, as fragments [** sequences of actions that make up a theatrical presentation - which are regularly reworked, and improvised upon, during the entire training and rehearsal period, and even during the performance **] which one learns as such ... in order to then play about with them. The entire discourse of variations and "improvisations" is opened out. The paper I've just written for Wales [** for a conference about the Mime, Ducroux **] taps this quite well ... and the cross topics between the two are VERY interesting...

----------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

London, Canada

I enjoyed reading your summary and commentary of the Shadmehr and Holcomb [1997] Science article. Please stay tuned for two technical reports myself and Bob Mercer, at Univ. of Western Ontario, are writing on the functional consolidation and transfer of neural network task knowledge through the use of a Task Rehearsal Mechanism or TRM. By its very nature, TRM assumes that there are long term and short term centres for learning, as has been the thesis of numerous researchers (for example see J.L. McClelland, B.L. McNaughton, and R.C. O'Reilly, 1994). The TRM relies on long-term memory for the production of virtual examples of previously learned task knowledge (background knowledge). A functional transfer method is then used to selectively bias the learning of a new task which is developed in short-term memory. The representation of this short-term memory is then transferred to long-term memory where it can be used for learning yet another new task in the future. Notice that explicit examples of a new task need not be stored in long-term memory, only the representation of the task, which can later be used to generate virtual examples. These virtual examples can be used to rehearse previously learned tasks in concert with a new "related" task. The TRM theory has inspired the development of a system, and a series of experiments, which will be discussed in the reports. Consolidation of new task knowledge into a representationally efficient long-term memory is not explicitly addressed, however one has to assume that this process requires time and energy. If that time and energy are interrupted ... well, it makes sense that the learner may suffer in the context of life-long learning. This agrees with the findings of Shadmehr and Holcomb. See also S.R. Quartz and T.J. Sejnowski, 1996, for an article which has very interesting related information on a potential biological mechanism for CNS learning and consolidation.

Ref:
J.L. McClelland, B.L. McNaughton, and R.C. O'Reilly, "Why there are Complementary Learning Systems in the Hippocampus and Neocortex: Insights from the Successes and Failures of Connectionist Models of Learning and Memory", CMU Technical Report PDP.CNS.94.1, March, 1994
S.R. Quartz and T.J. Sejnowski, "The neural basis of cognitive development: A constructivist manifesto", a BBS target article accepted for publication by Cambridge University Press, 1996.
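As a rough guess at how rehearsal from stored task representations might look in code (this is only an illustrative sketch, not the TRM system described in the forthcoming reports), a network held in "long-term memory" can label freshly generated inputs to produce virtual examples of an old task, which are then interleaved with the examples of a new task:

import numpy as np
from sklearn.neural_network import MLPRegressor

def rehearse_and_learn(old_net, X_new, y_new, n_virtual=200, input_dim=2):
    # 'old_net' plays the role of a task representation retained in long-term
    # memory: it labels random inputs, yielding virtual examples of the old
    # task without any stored training data.
    rng = np.random.default_rng(1)
    X_old = rng.uniform(-1, 1, (n_virtual, input_dim))
    y_old = old_net.predict(X_old)        # rehearsed old-task examples
    # tag each example with a task indicator so one net can hold both mappings
    X = np.vstack([np.hstack([X_new, np.ones((len(X_new), 1))]),
                   np.hstack([X_old, np.zeros((n_virtual, 1))])])
    y = np.concatenate([y_new, y_old])
    new_net = MLPRegressor(hidden_layer_sizes=(20,), max_iter=3000)
    return new_net.fit(X, y)              # the "short-term" learner sees both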
-------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

An alternative hypothesis (which allows for real-time learning):

>What are the real implications of this study? One of the most >important facts is that although both groups had identical >training sessions, they had different levels of learning of the >motor task because of what they did subsequent to practice. From >this fact alone one can conclude with some degree of certainty >that real-time, instantaneous learning is not used for learning >motor skills. How can one say that? One can make that conclusion >because if real-time learning was used, there would have been >continuous and instantaneous adjustment of the synaptic strengths >or connection weights during practice in whatever net the brain was >using to learn the motor task. This means that all persons trained >in that particular motor task should have had more or less the >same "trained net," performance-wise, at the end of that training >session, regardless of what they did subsequently. (It is assumed >here that the task was learnable, given enough practice, and that >both groups had enough practice.) With complete, permanent >learning (weight-adjustments) from "real-time learning," there >should have been no substantial differences in the learnt skill >between the two groups resulting from any activity subsequent to >practice. But this study demonstrates the opposite, that there >were differences in the learnt skill simply because of the nature >of subsequent activity. So real-time, instantaneous and permanent >weight-adjustment (real-time learning) is contradictory to the >results here.

The results are not contradictory to the idea of real-time learning, if one can assume that the second group was updating (learning in) the same part of the brain (network) during the second training period as the first. It is well demonstrated in most modalities that subsequent 'interference' training will degrade performance on a newly learned skill. From the little bit I know of neural networks, I'm guessing that the same could be shown with models as well. The point is, if two groups of networks (or brains) are trained in an identical fashion, and then one group is trained with a new skill in the same modality, the initial learning will be 'overwritten' to some extent in that group. The same sets of weights will need to be updated. Remember also that PET results are showing _relative_ blood flow, so it cannot be assumed that the cerebellar activity seen after learning was not present during the learning. On the contrary, it was almost certainly necessary for the motor activity to take place. The difference was that the frontal cortex was also highly active, presumably facilitating learning in the motor pathways. Once a subject reached a certain level of proficiency with the task, there would be less need for the frontal cortex to reinforce/facilitate the motor cortex activity, and those (motor) areas would appear to be most active.

------------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

Thessaloniki

My background is in engineering and learning.
Since the nature of neural computing is interdisciplinary, I've found the experiments and results by Shadmehr and Holcomb [1997] appealing. In the following I am attempting to enhance the question posed. A system with explicit memory capacity is an interesting approach to learning. But I am wondering: isn't that what we really do when, for many neurocomputing paradigms (back-propagation etc.), the data are stored in the memory and are fed again and again until a satisfactory level of performance is achieved? That is, the training set is employed as if it had been stored in the memory of the system. It is true that the essence of learning is generalization. As the cost of both memory and computing time is dropping, it is all the more likely to see in the future systems with on-board "memory capacity" with an enhanced learning capacity. Nevertheless learning and behavior will probably not improve dramatically by only adding memory. This is because with on-board memory we will simply be doing faster and more efficiently what we are doing already. My proposition is that, perhaps, brain-like learning behavior could be simulated by changing the type of data we operate on. That is, usually a learning example considers one type of data, typically from the Euclidean space. Other types of data have also been considered, such as propositional statements. But it is very likely that only some type of hybrid information handling system could simulate the brain convincingly. However, when dealing with disparate data, a "problem" is that such data are usually not handled with mathematical consistency. Hence such issues as "convergence in the limit" are not meaningful. The practical advantage of mathematically consistent hybrid-learning is that such learning could lead to reliable learning models and learning machines with an anticipated behavior. In this context we have treated partially ordered sets, in particular lattices, we defined a mathematical metric, and we have obtained some remarkable learning results with various benchmark data sets. In conclusion it seems to us that a sophisticated learning behavior is only in part a question of memory. It is moreover a question of the type of data being processed.

------------------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

> One of the fundamental beliefs in neuroscience, cognitive science > and artificial neural networks is that the brain learns in > real-time. That is, it learns instantaneously from each and every > learning example provided to it by adjusting the synaptic >strengths or connection weights in a network of neurons. The >learning is generally thought to be accomplished using a >Hebbian-style mechanism or some other variation of the idea (a >local learning law). In these scientific fields, real-time >learning also implies memoryless learning.

That is a non sequitur. Why should real-time learning imply that there is not also memory-based learning? Both types of 'learning' are surely desirable for higher cognitive behaviour in a real-time environment. By 'learning', I just mean 'connection weight adjustment'.
In memory-based learning in our brains, it would be impossible to store all parameters of an event or training sample as they occur - but obviously we store some transformed, compacted version of events (if we didn't, then we would not have long term memories) and equally obviously, this is available for reference for learning at a later time (if not, then we would be unable to learn from our long-term memories).

>In memoryless learning, no training examples are stored explicitly >in the memory of the learning system, such as

What does "explicitly" mean in this context?

-------------------------------------------------
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

First, I'd like to point out that hardly anybody considers instantaneous learning in the way you define it. The state of any network will depend on the environment to which it was exposed. I'd say that from what you discuss one cannot discriminate between real-time and delayed learning unless some more analysis is done. Since the two groups performed different tasks after the test, of course they will end up with different networks, memory or not. So in particular, it does not mean that
>So the argument against real-time learning is simple: one cannot >learn (generalize) unless one knows what is there to learn >(generalize). One finds out what is there to learn (generalize) >by collecting and storing information about the problem. In other >words, no system, biological or otherwise, can prepare itself to >learn (generalize) without having any information about what is to >be learnt (generalized). > I don't think anyone has claimed otherwise. We all agree it is most likely a statistical process and it needs many examples for learning. -------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: First of all, you should put these ideas in a paper or tech report and let people access it only if interested. Second, loose arguments that switch wildly between learning of motor skills and language/verbal learning indicate poor understanding of mechanisms involved. The stuff you posted today is almost as ridiculous as any other oversimplified explanation of human learning, be it the symbol system hypothesis or the Hebbian one or the Chomskian one. Good science happens from precise experiments that conclusively reject or accept a well-defined hypothesis. Terrible arguments is all I see in your post. ---------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: mailing list > A recent study by Shadmehr and Holcomb [1997] may lend some > interesting insight on how the brain learns. In this study, a > positron emission tomography (PET) device was used to monitor PET studies are in general irreproducible. I am working on a paper on the subject, and for it I did a survey of the PET and fMRI studies from the beginning of 1997. Of the ~80 articles that I have already read, There isn't even a single case of a study reproducing a previous study. > "learning" by the brain involves collection and storage of > information prior to actual learning. As is well known, the > fundamental process of learning involves: (1) collection and > storage of information about a problem, (2) examination of the > information at hand to determine the complexity of the problem, (3) > development of trial solutions (nets) for the problem, (4) >testing of trial solutions (nets), (5) discarding such trial >solutions (nets) if they are not good enough, and (6) repetition >of these processes until an acceptable solution is found. >Real-time learning is not compatible with these learning >processes. It is not at all 'well known' that the fundamental process of learning involves what you say. You need some srgument for this, rather than just asserting it. There are many contrary examples, e.g a cat learning how to exit from a cage by pulling a lever. This seems to be more relevant to learning by animals, including humans. > One has to remember that the essence of learning is >generalization. This assertion is out of place. Learning is changing of behaviour in a consistent and productive way (by some standard). 
------------------------------------------------------------ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Did you consider the possibility that the training on the second task could have affected the same synapses used for the first task? If this happened, it could be another explanation of what was observed. To check which hypothesis is the right one, I would suggest repeating the experiment in a slightly different way: the second group should be trained on the second task not immediately after the first task, but, for example, ten hours later. Now the following two things can happen: 1) People from this group still have reduced levels of skill on the first task. 2) They perform well on both tasks. In the first case, maybe the same synapses have been used to learn both tasks, with or without the temporary and permanent kinds of memory you described. So the new information could have (partially) overwritten the previous one. In the second case I can see two choices: yours is the first one. Here is the second one: the brain seems to have a modular and hierarchical structure. Let's assume that different experiences are stored in different modules; so, when there is something new to remember, the "brain's operating system" has to find an "empty" module to store the information. If this were real, it could explain why the younger a person is, the less time he takes to learn new things: in his brain the number of "empty" modules would be much greater than the number of "full" modules, so it should be quite fast to find the right place. A way to check this hypothesis would be to run the test you wrote about, and to consider the age of the people involved. If the time the information takes to move from temporary storage to permanent storage is roughly independent of age, what I wrote can be thrown away. P.S.: Considering I am still a student, please do not take what I wrote too seriously! ------------------------------------------------------------ APPENDIX COULD THERE BE REAL-TIME, INSTANTANEOUS LEARNING IN THE BRAIN? One of the fundamental beliefs in neuroscience, cognitive science and artificial neural networks is that the brain "learns" in real-time. That is, it learns "instantaneously" from each and every learning example provided to it by adjusting the synaptic strengths or connection weights in a network of neurons. The learning is generally thought to be accomplished using a Hebbian-style mechanism or some other variation of the idea (a local learning law). In these scientific fields, real-time learning also implies memoryless learning. In memoryless learning, no training examples are stored explicitly in the memory of the learning system, such as the brain. It can use any particular training example presented to it to adjust whatever network it is learning in, but must forget that example before examining others. The idea is to obviate the need for large amounts of memory to store a large number of training examples. This section looks at the possibility of real-time learning in the brain from two different perspectives. First, some factual behavioral evidence from a recent neuroscience study on learning of motor skills is examined. Second, the idea of real-time learning is examined from a broader behavioral perspective. A recent study by Shadmehr and Holcomb [1997] may lend some interesting insight on how the brain learns.
In this study, a positron emission tomography (PET) device was used to monitor neural activity in the brain as subjects were taught and then retested on a motor skill. The task required them to manipulate an object on a computer screen by using a motorized robot arm. It required making precise and rapid reaching movements to a series of targets while holding the handle of the robot. And these movements could be learned only through practice. During practice, the blood flow was most active in the prefrontal cerebral cortex of the brain. After the practice session, some of the subjects were allowed to do unrelated routine things for five to six hours and then retested on their recently acquired motor skill. During retesting of this group, it was found that they had learned the motor skill quite well. But it was also found that the blood flow now was most active in a different part of the brain, in the posterior parietal and cerebella areas. The remaining test subjects were trained on a new motor task immediately after practicing the first one. Later, those subjects were retested on the first motor task to find out how much of it they had learnt. It was found that they had reduced levels of skill (learning) on the first task compared to the other group. So Shadmehr and Holcomb [1997] conclude that after practicing a new motor skill, it takes five to six hours for the memory of the new skill to move from a temporary storage site in the front of the brain to a permanent storage site at the back. But if that storage process is interrupted by practicing another new skill, the learning of the first skill is hindered. They also conclude that the shift of location of the memory in the brain is necessary to render it invulnerable and permanent. That is, it is necessary to consolidate the motor skill. What are the real implications of this study? One of the most important facts is that although both groups had identical training sessions, they had different levels of learning of the motor task because of what they did subsequent to practice. From this fact alone one can conclude with some degree of certainty that real-time, instantaneous learning is not used for learning motor skills. How can one say that? One can make that conclusion because if real-time learning was used, there would have been continuous and instantaneous adjustment of the synaptic strengths or connection weights during practice in whatever net the brain was using to learn the motor task. This means that all persons trained in that particular motor task should have had more or less the same "trained net," performance-wise, at the end of that training session, regardless of what they did subsequently. (It is assumed here that the task was learnable, given enough practice, and that both groups had enough practice.) With complete, permanent learning (weight-adjustments) from "real-time learning," there should have been no substantial differences in the learnt skill between the two groups resulting from any activity subsequent to practice. But this study demonstrates the opposite, that there were differences in the learnt skill simply because of the nature of subsequent activity. So real-time, instantaneous and permanent weight-adjustment (real-time learning) is contradictory to the results here. Second, from a broader behavioral perspective, all types of "learning" by the brain involves collection and storage of information prior to actual learning. 
As is well known, the fundamental process of learning involves: (1) collection and storage of information about a problem, (2) examination of the information at hand to determine the complexity of the problem, (3) development of trial solutions (nets) for the problem, (4) testing of trial solutions (nets), (5) discarding such trial solutions (nets) if they are not good enough, and (6) repetition of these processes until an acceptable solution is found. Real-time learning is not compatible with these learning processes. One has to remember that the essence of learning is generalization. In order to generalize well, one has to look at the whole body of information relevant to a problem, not just bits and pieces of the information at a time as in real-time learning. So the argument against real-time learning is simple: one cannot learn (generalize) unless one knows what is there to learn (generalize). One finds out what is there to learn (generalize) by collecting and storing information about the problem. In other words, no system, biological or otherwise, can prepare itself to learn (generalize) without having any information about what is to be learnt (generalized). Learning of motor skills is no exception to this process. The process of training is simply to collect and store information on the skill to be learnt. For example, in learning any sport, one not only remembers the various live demonstrations given by an instructor (pictures are worth a thousand words), but one also remembers the associated verbal explanations and other great words of advice. Instructions, demonstrations and practice of any motor skill are simply meant to provide the rules, exemplars and examples to be used for learning (e.g. a certain type of body, arm or leg movement in order to execute a certain task). During actual practice of a motor skill, humans not only try to follow the rules and exemplars to perform the actual task, but they also observe and store new information about which trial worked (example trial execution of a certain task) and which didn't. One only has to think back to the days of learning tennis, swimming or some such sport in order to verify information collection and storage by humans learning motor skills. It shouldn't be too hard to explain the "loss of skill" phenomenon, from back-to-back instruction on new motor skills, that was observed in the study. The explanation shouldn't be different from the one for the "forgetting of instructions" phenomenon that occurs with back-to-back instructions in any learning situation. Perhaps a logical explanation for the "loss of motor skill" phenomenon, as for any other similar phenomenon, is that the brain has a limited amount of working or short-term memory. And when encountering important new information, the brain stores it simply by erasing some old information from working memory. And the prior information gets erased from working memory before the brain has had time to transfer it to a more permanent or semi-permanent location for actual learning. So "loss of information" in working memory leads to a "loss of skill." Another highly significant fact from the study is that the brain takes time to learn. Learning is not quick and instantaneous. Reference: Shadmehr, R. and Holcomb, H. (August 1997). "Neural Correlates of Motor Memory Consolidation." Science, Vol. 277, pp. 821-825.
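Since the appendix above leans heavily on the contrast between "real-time, memoryless" learning and learning from collected, stored information, the following minimal sketch (Python/NumPy, with made-up data and a plain linear model, so purely illustrative) may help pin the distinction down operationally. The memoryless learner updates its weights from each example once and then forgets it; the memory-based learner first stores all the examples and only then computes a fit from the whole body of data.

import numpy as np

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])                 # invented generating weights
X = rng.normal(size=(200, 2))                  # invented "training experience"
y = X @ true_w + 0.1 * rng.normal(size=200)

# Real-time / memoryless learning: each example adjusts the weights
# immediately and is then discarded; only the current weight vector persists.
w_online = np.zeros(2)
for xi, yi in zip(X, y):
    w_online += 0.05 * (yi - xi @ w_online) * xi   # delta-rule step

# Memory-based learning: collect and store everything first, then fit in one
# batch step from the whole stored set.
w_batch, *_ = np.linalg.lstsq(X, y, rcond=None)

print("memoryless (online) estimate:", w_online)
print("memory-based (batch) estimate:", w_batch)

On a simple, stationary problem like this the two estimates come out nearly the same; the sketch is only meant to make the two regimes concrete, not to decide which one the brain uses.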
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: a day; shuttles for the city centre each 30 minutes. By train: regular connections to Toulouse Railway Station by TGV (High Speed Train) several times a day. Bus facilities for the University. Individual delegates will arrange their own transport to their hotel. Details of bus times will be given in the last announcement. Rent-a-car services: Several companies offer car rentals at both Paris and Toulouse-Blagnac airports. It takes about 6 hours to drive from Paris to Toulouse (705 km). Please make arrangements through your travel agent. Visa: Please check with your local travel agent to see if a visa is necessary. Delegates are responsible for their own visa arrangements. Climate and Clothing: The meeting season is winter. The climate can be variable (from 4° to 15°C) in the area (oceanic temperate climate). Bring a raincoat, an umbrella, and one or two sweaters or pull-overs. Third and Final Announcement: Those who have responded to the Second Announcement will receive the Third Announcement, due out in September 1998.
CONFERENCE COSTS:
Early registration (by 31 August 1998): Ordinary delegates: 1,500 FF (US$ 250); Students: 1,000 FF (US$ 170); Accompanying people: 500 FF (US$ 85)
Full registration (transferred order after 31 August 1998): Ordinary delegates: 1,800 FF (US$ 300); Students: 1,300 FF (US$ 210); Accompanying people: 800 FF (US$ 130)
Please note: Fees must be sent by international transfer order to: ANN Model Toulouse, LEK S. Caisse d'Epargne de Midi-Pyrénées 5 av. Pierre Coupeau 31130 Balma - France Bank Account: 13135 00080 04084374754 24 Prospective delegates with financial queries should address these to Drs. Sovan Lek or Jean-François Guégan when submitting abstracts and/or registration intent. Your registration fee includes the following: access to full abstract proceedings; conference kits; morning and afternoon tea or coffee, Monday to Wednesday; lunches, Monday to Thursday; Conference party (Wednesday night, December 16th); Conference excursion (Thursday, December 17th). The fee for accompanying people includes lunches, coffee breaks, the closing banquet and the excursion.
ACCOMMODATION (establishment / price§ / address / telephone / fax):
Wilson Trianon** (15 rooms) / FF. 240 / 7, rue Lafaille - 31000 Toulouse / 33 5 61 62 74 74 / 33 5 61 99 15 44
Wilson Square** (15 rooms) / FF. 240 / 12, rue d'Austerlitz - 31000 Toulouse / 33 5 61 21 67 57 / 33 5 61 21 16 23
Hôtel de France** (25 rooms) / FF. 265 / 5, rue d'Austerlitz - 31000 Toulouse / 33 5 61 21 88 24 / 33 5 61 21 99 77
Des Arts* (14 rooms) / FF. 170 / 1bis, rue Cantegril - 31000 Toulouse / 33 5 61 62 77 59 / 33 5 61 12 22 37
Splendid* (14 rooms) / FF. 120 / 13 rue Caffarelli - 31000 Toulouse / 33 5 61 62 43 02 / 33 5 61 40 52 76
IAS Center (30 rooms) / FF. 165 / 23, avenue E. Belin - 31028 Toulouse Cedex 4 / 33 5 62 17 33 09 / 33 5 61 55 33 85
§ Reduced fares for participants to the meeting. Please mention your participation in the workshop when booking. All prices are given in French Francs, and are for Bed & Breakfast per night. Please book directly through the addresses indicated.
SOCIAL PROGRAMME: Conference Excursion on Thursday, December 17, 1998 (costs included in the registration fee):
o Travel to Carcassonne, a famous Medieval City in the South of France.
Departure at 9:30 a.m.
o Lunch
o Visit of the Medieval City
o Visit of a wine cellar at the end of the afternoon
o Dinner and return to Toulouse in the night
---------------------------------------------------------------- Sovan LEK E-mail: lek at cict.fr Doc. Habil. Fish & Non-linear Modelling CNRS - UMR 5576 Tel. : (33) 5 61 55 86 87 CESAC - Bat. 4R3 Fax : (33) 5 61 55 60 96 Univ. Paul Sabatier 118 route de Narbonne 31062 Toulouse cedex France From david.brown at bbsrc.ac.uk Tue Jun 6 06:52:25 2006 From: david.brown at bbsrc.ac.uk (david.brown) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: <980330143240.16748@mserv.iapc.bbsrc.ac.uk.0> From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: "This book will be very useful for mathematics and engineering students interested in a modern and rigorous systems course, as well as for experts in control theory and applications" --- Mathematical Reviews "An excellent book... gives a thorough and mathematically rigorous treatment of control and system theory" --- Zentralblatt für Mathematik "The style is mathematically precise... fills an important niche... serves as an excellent bridge (to topics treated in traditional engineering courses). The book succeeds in conveying the important basic ideas of mathematical control theory, with appropriate level and style" --- IEEE Transactions on Automatic Control Chapter and Section Headings: Introduction What Is Mathematical Control Theory?
Proportional-Derivative Control Digital Control Feedback Versus Precomputed Control State-Space and Spectrum Assignment Outputs and Dynamic Feedback Dealing with Nonlinearity A Brief Historical Background Some Topics Not Covered Systems Basic Definitions I/O Behaviors Discrete-Time Linear Discrete-Time Systems Smooth Discrete-Time Systems Continuous-Time Linear Continuous-Time Systems Linearizations Compute Differentials More on Differentiability Sampling Volterra Expansions Notes and Comments Reachability and Controllability Basic Reachability Notions Time-Invariant Systems Controllable Pairs of Matrices Controllability Under Sampling More on Linear Controllability Bounded Controls First-Order Local Controllability Controllability of Recurrent Nets Piecewise Constant Controls Notes and Comments Nonlinear Controllability Lie Brackets Lie Algebras and Flows Accessibility Rank Condition Ad, Distributions, and Frobenius' Theorem Necessity of Accessibility Rank Condition Additional Problems Notes and Comments Feedback and Stabilization Constant Linear Feedback Feedback Equivalence Feedback Linearization Disturbance Rejection and Invariance Stability and Other Asymptotic Notions Unstable and Stable Modes Lyapunov and Control-Lyapunov Functions Linearization Principle for Stability Introduction to Nonlinear Stabilization Notes and Comments Outputs Basic Observability Notions Time-Invariant Systems Continuous-Time Linear Systems Linearization Principle for Observability Realization Theory for Linear Systems Recursion and Partial Realization Rationality and Realizability Abstract Realization Theory Notes and Comments Observers and Dynamic Feedback Observers and Detectability Dynamic Feedback External Stability for Linear Systems Frequency-Domain Considerations Parametrization of Stabilizers Notes and Comments Optimality: Value Function Dynamic Programming Linear Systems with Quadratic Cost Tracking and Kalman Filtering Infinite-Time (Steady-State) Problem Nonlinear Stabilizing Optimal Controls Notes and Comments Optimality: Multipliers Review of Smooth Dependence Unconstrained Controls Excursion into the Calculus of Variations Gradient-Based Numerical Methods Constrained Controls: Minimum Principle Notes and Comments Optimality: Minimum-Time for Linear Systems Existence Results Maximum Principle for Time-Optimality Applications of the Maximum Principle Remarks on the Maximum Principle Additional Exercises Notes and Comments Appendix: Linear Algebra Operator Norms Singular Values Jordan Forms and Matrix Functions Continuity of Eigenvalues Appendix: Differentials Finite Dimensional Mappings Maps Between Normed Spaces Appendix: Ordinary Differential Equations Review of Lebesgue Measure Theory Initial-Value Problems Existence and Uniqueness Theorem Linear Differential Equations Stability of Linear Equations Bibliography List of Symbols From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From Connectionists-Request at CS.cmu.edu Tue Jun 6 06:52:25 2006 From: Connectionists-Request at CS.cmu.edu (Connectionists-Request@CS.cmu.edu) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Bi-monthly Reminder Message-ID: -------- *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated July 24, 1998 This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. 
CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Mark C. Fuhs --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. 
------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. The files ending with .gz are compressed using the GNU gzip program. In the event that you do not already have gzip, it is available via ftp from "prep.ai.mit.edu" in the "/pub/gnu" directory. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. ------------------------------------------------------------------------------- How to Access Files from the CONNECTIONISTS Archive --------------------------------------------------- There are two ways to access the CONNECTIONISTS archive: 1. Using your World Wide Web browser. Enter the following location: http://www.cs.cmu.edu/afs/cs/project/connect/connect-archives/ 2. Using an FTP client. a) Open an FTP connection to host FTP.CS.CMU.EDU b) Login as user anonymous with password your username. c) 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: of rules, variables, and dynamic bindings using temporal synchrony. L. Shastri and V. Ajjanagadde (1993). Behavioral and Brain Sciences 16 (3) 417--494. Temporal Synchrony, Dynamic Bindings, and SHRUTI -- a representational but non-classical model of reflexive reasoning. L. Shastri (1996). Behavioral and Brain Sciences 19 (2), 331--337. Robust reasoning: integrating rule-based and similarity-based reasoning. R. Sun, Artificial Intelligence. Vol.75, No.2, pp.241-296. June, 1995. Dave Touretzky mentioned "The Handbook of Brain Theory and Neural Networks" edited by Arbib (MIT Press, 1995). The article on "Structured Connectionist Models" in this handbook lists additional references to work on connectionist symbolic processing. Best Wishes, Lokendra Shastri International Computer Science Institute 1947 Center Street, Suite 600 Berkeley, CA 94704 shastri at icsi.berkeley.edu http://www.icsi.berkeley.edu/~shastri Phone: (510) 642-4274 ext 310 FAX: (510) 643-7684 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: of rules, variables, and dynamic bindings using temporal synchrony. L. Shastri and V. Ajjanagadde (1993). Behavioral and Brain Sciences 16 (3) 417--494. Temporal Synchrony, Dynamic Bindings, and SHRUTI -- a representational but non-classical model of reflexive reasoning. L. Shastri (1996). Behavioral and Brain Sciences 19 (2), 331--337. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: phenomena or patterns of behavior of physical feedback systems (i.e., looking at cognition as essentially a bounded feedback system---bounded under normal conditions, unless the system goes into seizure (explodes mathematically---well, it is still bounded but it tries to explode!), of course.) From this point of view both symbols and fuzziness and every other conceptual representation are neither "true" nor "real" but simply patterns which tend to be, from an information-theoretic point of view, compact and useful or efficient representations. But they are built on a physical substrate of a feedback system, not vice-versa. However, it isn't the symbol, fuzzy or not, which is ultimately general, it is the feedback system, which is ultimately a physical system of course. So, while we may be convinced that your formalism is very good, this does not mean it is more fundamentally powerful than a simulation approach. It may be that your formalism is in fact better for handling symbolic problems, or even problems which require a mixture of fuzzy and discrete logic, etc., but what about problems which are not symbolic at all? What about problems which are both symbolic and non-symbolic (not just fuzzy, but simply not symbolic in any straightforward way?) The fact is, intuitively it seems to me that some connectionist approach is bound to be more general than a more special-purpose approach. This does not necessarily mean it will be as good or fast or easy to use as a specialized approach, such as yours. But it is not at all convincing to me that just because the input space to a connectionist network looks like R(n) in some superficial way, this would imply that somehow a connectionist model would be incapable of doing symbolic processing, or even using your model per se. Mitsu > > > > Two, it may be that the simplest or most efficient > > representation of a given set of rules may include both a continous and a > > discrete component; that is, for example, considering issues such as imprecise > > application of rules, or breaking of rules, and so forth. For example, > > consider poetic speech; the "rules" for interpreting poetry are clearly not > > easily enumerable, yet human beings can read poetry and get something out of > > it. A purely symbolic approach may not be able to easily capture this, > > whereas it seems to me a connectionist approach has a better chance of dealing > > with this kind of situation. > > > > I can see value in your approach, and things that connectionists can learn > > from it, but I do not see that it dooms connectionism by any means. > > See the previous comment. > > Cheers, > Lev From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: of rules, variables, and dynamic bindings using temporal synchrony. Behavioral and Brain Sciences 16 (3) 417--494. R. Sun, (1992). On Connectionist variable binding. Connection Science. R. Sun, (1995). Robust reasoning: integrating rule-based and similarity-based reasoning. Artificial Intelligence. Vol.75, No.2, pp.241-296. June, 1995. Lange and Dyer (1989). High Level Inferencing in a Connectionist Network. {\it Connection Science}, 181-217. (See also Lange's chapter in Sun and Bookman (1994).) R. Lacher, S. Hruska, and D. Kunciky, 1992. 
Backpropagation learning in Expert Networks. Technical Report 91-015. Florida State University. also in IEEE TNN. Barnden, Complex Symbol-Processing in Conposit, in: Sun and Bookman (eds.), Architectures incorporating neural and symbolic processes. Kluwer. 1994. J. Barnden and K. Srinivas, Overcoming Rule-Based Rigidity and Connectionist Limitations Through Massively Parallel Case-based Reasoning, {\it International Journal of Man-Machine Studies}, 1992. ------------------------ NATURAL LANGUAGE (Syntactic and semantic processing) Bailey, D., J. Feldman, S. Narayanan, G. Lakoff (1997). Embodied Lexical Development, Proceedings of the Nineteenth Annual Meeting of the Cognitive Science Society COGSCI-97, Aug 9-11, Stanford: Stanford University Press, 1997. J. Henderson. Journal of Psycholinguistic Research, 23(5):353--379, 1994. Connectionist Syntactic Parsing Using Temporal Variable Binding. T. Regier, Cambridge, MA: MIT Press. 1996. The Human Semantic Potential: Spatial Language and Constrained Connectionism, L. Bookman, A Framework for Integrating Relational and Associational Knowledge for Comprehension, in Sun and Bookman (eds.), Architectures incorporating neural and symbolic processes. Kluwer. 1994. S. Wermter, (ed.) Connectionist language processing (?). Springer. ------------------------ LEARNING OF SYMBOLIC KNOWLEDGE (from NNs) Fu, AAAI-91. and IEEE SMC, 1995, 1997. Towell and Shavlik, Machine Learning. 1995. Giles, et al, (1993). in: Connection Science,1993. special issue on hybrid models. (some of these models involve somewhat distributed representation, but that's not the point.) Sun et al (1998), A bottom-up model of skill learning. CogSci'98 proceedings. (Justification: In some instances, such learning/extraction from NNs is better than learning symbolic knowledge directly using symbolic algorithms, in algorithmic or cognitive terms.) ------------------------ RECOGNITION, RECALL Jacobs, A.M. & Grainger, J. (1992). Testing a semistochastic variant of the interactive activation model in different word recognition experiments. Journal of Experimental Psychology: Human Perception and Performance, 18, 1174-1188. Jacobs, A. M., & Grainger, J. (1994). Models of visual word recognition: Sampling the state of the art. Journal of Experimental Psychology: Human Perception and Performance, 20, 1311-1334. McClelland, J. L. & Rumelhart, D. E. (1981). An interactive activation model of context effects in letter perception: Part I. An account of basic findings. Psychological Review, 88, 375-407. Page, M. & Norris, D. (1998). Modeling immediate serial recall with a localist implementation of the primacy model. In J. Grainger & A.M. Jacobs (Eds.), Localist connectionist approaches to human cognition. Mahwah, NJ.: Erlbaum. ------------------------ MEMORY There are many existing models. See Hintzman 1996 for a review (in Annual Review of Psychology) ------------------------- SKILL LEARNING R. Sun and T. Peterson, A subsymbolic+symbolic model for learning sequential navigation. {\it Proc. of the Fifth International Conference of Simulation of Adaptive Behavior (SAB'98).} Zurich, Switzerland. 1998. MIT Press. R. Sun, E. Merrill, and T. Peterson, A bottom-up model of skill learning. {\it Proc.of 20th Cognitive Science Society Conference}, pp.1037-1042, Lawrence Erlbaum Associates, Mahwah, NJ. 1998. Thompson, Cohen, and Shastri's work (yet to be published, I believe). 
------------------------ I cannot even begin to enumerate all the rationales for using localist models for symbolic processing discussed in these pieces of work. The reasons may include: (1) localist connectionist models are an apt description framework for a variety of cognitive processing (see J. Grainger & A.M. Jacobs (Eds.), Localist connectionist approaches to human cognition. Mahwah, NJ: Erlbaum.); (2) the inherent processing characteristics of connectionist models (such as similarity-based processing, which can also be explored in localist models) make them suitable for cognitive processing; (3) learning processes can naturally be applied to localist models (as opposed to learning LISP code), such as gradient descent, EM, etc. (As has been pointed out by many recently, localist models share many features with Bayesian networks. This actually has been recognized very early on; see, for example, Sun (1990 INNC), Sun (1992), in which a localist network is defined from a collection of hidden Markov models, and the Baum-Welch algorithm was used in learning.) Regards, --Ron p.s. See also the recently published edited collection: R. Sun and F. Alexandre (eds.), {\it Connectionist Symbolic Integration}. Lawrence Erlbaum Associates, Hillsdale, NJ. 1997. ----------------------------------------- Dr. Ron Sun NEC Research Institute 4 Independence Way Princeton, NJ 08540 phone: 609-520-1550 fax: 609-951-2482 email: rsun at cs.ua.edu, rsun at research.nj.nec.com ----------------------------------------- Prof. Ron Sun http://cs.ua.edu/~rsun Department of Computer Science and Department of Psychology phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: "How can a neuron maximize the information it can transduce and match this to its ability to transmit?" Posing this question leads [1] to an unambiguous definition of learning, symbols, input space and output space based solely on logic, and consequently on probability theory and entropy. Furthermore, it provides an unambiguous computational objective and leads to a neuron or system of neurons that operate inductively. The resulting neural structure is the Hopfield neuron, which, regarding optimized transduction, obeys a modified form of Oja's equation for spatial adaptation and performs intradendritic channel delay equilibration of inputs for temporal adaptation. For optimized transmission, the model calls for a subtle adaptation of the firing threshold to optimize its transmission rate. This is not to say that the described model of neural computation is "correct." Correctness is in the eye of the beholder and depends on the application or theoretical goal pursued. However, this example does point out that there are common aspects to all neural computational problems and paradigms that in fact lend themselves to more precise definitions of terms like "learning." These definitions arise more naturally when the perspective of the neuron is taken. That is, it observes its inputs (regardless of how we might represent them), perhaps observes its own outputs, and, using all that it can observe, executes a computational strategy that effects learning in such a manner as to optimize its defined error formulation, formalized objective criterion, or information-theoretic measure.
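For readers who have not seen it, here is a minimal sketch of the standard, textbook Oja rule referred to above, written in Python/NumPy on made-up data. The modified form used in the model described in [1], as well as the delay-equilibration and threshold-adaptation components, are not spelled out in this posting, so nothing below should be read as that model; it only shows the unmodified rule that serves as its starting point.

import numpy as np

rng = np.random.default_rng(0)
# Made-up inputs: independent components with unequal variances, so the
# direction of largest variance is simply the first coordinate axis.
X = rng.normal(size=(2000, 5)) * np.array([3.0, 1.0, 1.0, 0.5, 0.5])

w = rng.normal(size=5)             # synaptic weight vector
eta = 0.01
for x in X:
    y = w @ x                      # linear response of the unit
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term y*x minus decay (y**2)*w

# w settles near unit length, aligned with the input direction that carries
# the most variance (the first principal component of the inputs).
print(np.round(w, 3))

The decay term is what keeps the plain Hebbian update from growing without bound, which is presumably part of what makes the rule attractive as a starting point for "optimized transduction" here.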
Any adaptive rule will lead to either (1) a different way of extracting information from its inputs, (2) a different way of generating outputs given the information that has been measured, or (3) both of these. I think that it is a worthwhile goal to try to pursue more rigorous neuron-centric views of the terms used within the neural network community if for no other reason than to better focus exhanges and debates between members of the community. Bob Fry [1] "A logical basis for neural network design," in Vol. 3, Techniques and Applications of Artificial Neural Networks, Academic Press 1998. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: it appears that in neuroscience an important theme on the binding problem (how sensory features are bound together to form a coherent percept) has emerged from his theory. The theory has also started to impact the field of perceptual psychology (e.g. see a Nature paper by M. Usher and N. Donnelly, recently announced on this list). The issue of binding is so fundamental that the final judgement on von der Malsburg's theory is unlikely to be available in the near future. But one would not dispute that his neural network theory has generated major impact on neuroscience. DeLiang Wang ------------------------------- C. von der Malsburg (1981): "The correlatoin theory of brain function," Internal Report 81-2. Max-Planck-Institute for Biophysical Chemistry, Gottingen. P. Milner (1974): "A model for visual shape recognition," Psychological Review, 81, pp. 521-535. R. Eckhorn et al. (1988): "Coherent oscillations: A mechanism of feature linking in the visual cortex," Biological Cybernetics, 60, pp. 121-130. C. Gray, P. Konig, A. Engel, and W. Singer (1989): "Oscillatory responses in cat visual cortex exhibit inter-columnar synchronization which reflects global stimulus properties," Nature, 338, pp. 334-337. W. Phillips and W. Singer (1997), "In search of common foundations for cortical computation," Behavioral and Brain Sciences, 20, pp. 657-722. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: neurons 67 R. Segev and E. Ben-Jacob Application of biological learning theories to mobile robot avoidance and approach behaviors 79 C. Chang and P. Gaudiano The dynamics of specialization and generalization within biological populations 115 A. J. Spencer, I. D. Couzin and N. R. Franks FORTHCOMING ARTICLES Representations of informational properties for complex systems analysis C. Joslyn and G. de Cooman Optimal control of singular motion H. Nosaka, K. Tsuji and A. Hubler Social sand piles: purposive agents and self organized criticality S. E. Page, H. Ishii and W. Yi Synchronization in an array of population dynamic systems D. Postnov, A. Balanov and E. Mosekilde Dynamics of retinal ganglion cells modeled with hardware coupled nonlinear oscillators A. W. Przybyszewski, P. S. Linsay, P. Gaudiano and C.M. Wilson Collective choice and mutual knowledge structure D. Richards, B. D. McKay and W. A. Richards Random walks, fractals and the origins of rainforest diversity Ricard V. Sole and D. Alonso Language learning and language contact L. Steels Electrodynamical model of quasi-efficient financial market K. 
N. Illinski and A. S. Stepanenko Adaptive agent-driven routing and load balancing in communication networks M. Heusse, D. Snyers, S. Guerin and P. Kuntz On-off intermittency and riddled basins of attraction in a coupled map system J. Laugesen, E. Mosekilde, Yu. L. Maistrenko and V.L. Maistrenko The Journal of Complex Systems is a new journal, published by HERMES Publishing Co. Web page: http://www.santafe.edu/~bonabeau. Subscriptions: hermes at iway.fr, Attn: Subscriptions Dept. _________________ AIMS and SCOPES _________________ The Journal of Complex Systems aims to provide a medium of communication for multidisciplinary approaches, either empirical or theoretical, to the study of complex systems in such diverse fields as biology, physics, engineering, economics, cognitive science and social sciences, so as to promote the cross- fertilization of ideas among all the scientific disciplines having to deal with their own complex systems. By complex system, it is meant a system comprised of a (usually large) number of (usually strongly) interacting entities, processes, or agents, the understanding of which requires the development, or the use of, new scientific tools, nonlinear models, out-of equilibrium descriptions and computer simulations. Understanding the behavior of the whole from the behavior of its constituent units is a recurring theme in modern science, and is the central topic of the Journal of Complex Systems. Papers suitable for publication in the Journal of Complex Systems should deal with complex systems, from an empirical or theoretical viewpoint, or both, in biology, physics, engineering, economics, cognitive science and social sciences. This list is not exhaustive. Papers should have a cross-disciplinary approach or perspective, and have a significant scientific and technical content. ____________________ EDITORIAL BOARD ____________________ Eric Bonabeau (Editor-in-Chief), Santa Fe Institute, USA Yaneer Bar-Yam, NECSI, USA Eshel Ben-Jacob, Tel Aviv University, Israel Jean-Louis Dessalles, Telecom Paris, France Nigel R. Franks, University of Bath, UK Toshio Fukuda, Nagoya University, Japan Paolo Gaudiano, Boston University, USA Alfred Hubler, University of Illinois,Urbana-Champaign, USA Cliff Joslyn, Los Alamos National Laboratory, USA Alan Kirman, GREQAM EHESS, France Erik Mosekilde, The Technical University of Denmark Avidan U. Neumann, Bar-Ilan University, Israel Scott Page, University of Iowa, USA Diana Richards, University of Minnesota, USA Frank Schweitzer, Humboldt University, Berlin, Germany Ricard V. Sole,Universitat Politcnica de Catalunya, Spain Peter Stadler, University of Vienna, Austria Luc Steels, Brussels, Vrije Universiteit Brussel, Belgium Guy Theraulaz, Paul Sabatier University, France Andreas Wagner, Santa Fe Institute, USA Gerard Weisbuch, Ecole Normale Superieure, France David H. Wolpert, NASA Ames Research Center, USA Yi-Cheng Zhang, Universite de Fribourg, Switzerland ______________________________________ INSTRUCTIONS for prospective AUTHORS ______________________________________ Original papers can be submitted as regular papers or as letters. Regular papers report detailed results, whereas letters are short communications, not exceeding 3000 words, that report important new results. Ideally, regular papers should contain between 3000 and 12000 words, but the editors may consider the publication of shorter or longer papers if necessary. Short reviews and tutorials will also be considered. 
Please contact the Editor-in-Chief (bonabeau at santafe.edu) before embarking on a review or tutorial. It is extremely important that papers contain a sufficiently long introduction accessible to a wide readership, and that results be put in broader perspective in the conclusion. However, the main body of a paper should not have a low technical content under the pretext that the journal is multi-disciplinary. The submission of a manuscript will be taken to imply that thematerial is original and has not been and will not be (unless not accepted in the Journal of Complex Systems) submitted in equivalent form for publication elsewhere. Papers will be published in English. The American or the British forms of spelling may be used, but this usage must be consistent throughout the manuscript. Electronic submission is requested. Authors that use LaTex should use the template files which are available at the journal's web page (http://www.santafe.edu/~bonabeau. Authors using word processors should conform to the style given in the sample postscript file (jcs.ps) which can also be downloaded from the journal's web page. A postscript file is sufficient for the first submission. Source files will be requested when the paper is accepted for publication. To submit a paper, upload it by ftp: > ftp ftp.santafe.edu > Name: anonymous > Password: you email address > cd pub/bonabeau/incoming > mput files > bye Files cannot be downloaded from there, and directories cannot be created. After uploading your files, send an email to bonabeau at santafe.edu including a list of all uploaded files, a brief description of the paper and a list of keywords. _______________ SCHEDULE _______________ 4 issues yearly. Subscriptions: hermes at iway.fr, Attn: Subscriptions Dept. From jbower at bbb.caltech.edu Tue Jun 6 06:52:25 2006 From: jbower at bbb.caltech.edu (James M. Bower) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Jim Bower -- 4 messages (and end of thread) Message-ID: MODERATOR'S NOTE: Jim Bower's most recent message to the Connectionists list would not display on some mail readers, due to a glitch with a MIME encapsulation header (my fault, not Jim's.) Therefore I am rebroadcasting that message in a compendium with three other short messages Jim sent to the list yesterday. Since this dialog has gone on for a while now, I would like to end this thread and invite interested parties to communicate amongst themselves by email. 
-- Dave Touretzky, CONNECTIONISTS moderator ================ Message 1 of 4 ================ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: The Isaac Newton Institute for Mathematical Sciences in Cambridge was host to a major international programme entitled "Neural Networks and Machine Learning". Many of the world's leading researchers in the field participated for periods ranging from a few weeks up to six months, and numerous younger scientists also benefited from a variety of conferences and workshops held throughout the programme. The Newton Institute's purpose-designed building provided a superb research environment, as well as an excellent venue for workshops. The first workshop of the six month programme was a two-week NATO Advanced Study Institute on "Generalization in Neural Networks and Machine Learning".
This was heavily over-subscribed and attendance was limited to around 90 by the capacity of the Institute as well as by the desire to maintain an informal, interactive atmosphere. The topic of generalization was chosen as a focal point for the workshop and provided a common theme running through many of the presentations. This book resulted directly from the NATO ASI, and many of the chapters have a significant tutorial component, reflecting the instructional aims of the workshop. Ordering information: "Neural Networks and Machine Learning" Christopher M. Bishop (Ed.) Springer (ISBN 3-540-64928-X) Hard cover, 353 pages, NATO ASI Series F, volume 168. Amazon: http://www.amazon.com/exec/obidos/ASIN/354064928X/qid=931508458/sr=1-3/002-2 811424-5515635 Springer: http://www.springer.de/cgi-bin/search_book.pl?isbn=3-540-64928-X Blackwells: http://bookshop.blackwell.co.uk/cgi-bin/bv.cgi?BV_EngineID=dalfdglgimmbemhcf hecflkdghl.13&form%25oid=1024412&BV_Operation=Dyn_ProductReceive&form%25cnt_ type=0&form%25observe_selected=1&form%25observe_selected=0&form%25observe_ch ose=0&BV_SessionID=861199645&form%25position=unspecified&form%25destination= %2fbob%2fbob_product_detail.html.tmpl&form%25store_id=0&form%25classname=TC_ ProductLink&form%25destination_type=template&submit%25form=&form%25action=se lect&BV_ServiceName=Mall&form%25table=&form%25data= From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Stefano Panzeri, Alessandro Treves, Simon Schultz and Edmund T. Rolls Short-Term Synaptic Plasticity And Network Behavior Werner M. Kistler and J. Leo van Hemmen Synchrony And Desynchrony In Integrate-And-Fire Oscillators Shannon R. Campbell, DeLiang L. Wang and Ciriyam Jayaprakash Fast Global Oscillations In Networks Of Integrate-and-Fire Neurons With Low Firing Rates Nicolas Brunel and Vincent Hakim Concentration Tuning Mediated By Spare Receptor Capacity In Olfactory Sensory Neurons: A Theoretical Study Thomas A. Cleland and Christiane Linster A Computational Model For Visual Selection Yali Amit and Donald Geman Associative Memory In A Multi-Modular Network Nir Levy and David Horn and Eytan Ruppin Sparse Code Shrinkage: Denoising of Nongaussian Data By Maximum Likelihood Estimation Aapo Hyverinen Improving the Convergence of the Back-Propagation Algorithm Using Learning Rate Adatation Methods G. D. Magoulas, M. N. Vrahatis, and G. S. Androulakis ----- NOTE: Neural Computation is now on-line and issues starting with 11:1 will be available to all for a free trial period: ON-LINE - http://neco.mitpress.org/ ABSTRACTS - http://mitpress.mit.edu/NECO/ SUBSCRIPTIONS - 1999 - VOLUME 11 - 8 ISSUES USA Canada* Other Countries Student/Retired $50 $53.50 $84 Individual $82 $87.74 $116 Institution $302 $323.14 $336 * includes 7% GST (Back issues from Volumes 1-10 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA and Canada. Add +7% GST for Canada.) MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. 
Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu -----

sciences, 'Statistical Pattern Recognition' shows how closely these fields are related in terms of application. Areas such as database design, artificial neural networks and decision support are common to all. The author also examines the more diverse theoretical topics available to the practitioner or researcher, such as outlier detection and model selection, and concludes each section with a description of the wider range of practical applications and the future developments of theoretical techniques. Providing an introduction to statistical pattern recognition theory and techniques that draws on material from a wide range of fields, 'Statistical Pattern Recognition' is a must for all technical professionals wanting to get up to speed on the recent advances in this dynamic subject area.

Key Features:
- Contains descriptions of the most up-to-date pattern processing techniques, including the recent advances in non-parametric approaches to discrimination
- Illustrates the techniques with examples of real-world application studies
- Includes a variety of exercises, from 'open-book' questions to more lengthy projects

Reviews: '... features a "how to" approach with examples and exercises' - Lavoisier

Contents: Introduction to statistical pattern recognition / Estimation / Density estimation / Linear discriminant analysis / Non-linear discriminant analysis - neural networks / Non-linear discriminant analysis - statistical methods / Classification trees / Feature selection and extraction / Clustering / Additional topics / Measures of dissimilarity / Parameter estimation / Linear algebra / Data / Probability theory.

1999 480pp Paperback ISBN 0 340 74164 3 £29.99

==============

"Paradigms of Artificial Intelligence" presents a new methodological analysis of the two competing research paradigms of artificial intelligence and cognitive science: the symbolic versus the connectionist paradigm. It argues that much of the discussion put forward for either paradigm misses the point. Most of the arguments in the debates on the two paradigms concentrate on the question whether the nature of intelligence or cognition is properly accommodated by one or the other paradigm.
Opposed to that is the analysis in this book, which concentrates on the question which of the paradigms accommodates the "user" of a developed theory or technique. The "user", who may be an engineer or scientist, has to be able to grasp the theory and to competently apply the methods which are developed. Consequently, besides the nature of intelligence and cognition, the book derives new objectives for future research which will help to integrate aspects of both paradigms to obtain more powerful AI techniques and to promote a deeper understanding of cognition. The book presents the fundamental ideas of both the symbolic and the connectionist paradigm. Along with an introduction to the philosophical foundations, an exposition of some of the typical techniques of each paradigm is presented in the first two parts. This is followed by the mentioned analysis of the two paradigms in the third part. The book is intended for researchers, practitioners, advanced students, and interested observers of the developing fields of artificial intelligence and cognitive science. Providing accessible introductions to the basic ideas of both paradigms, it is also suitable as a textbook for a subject on the topic at an advanced level in computer science, philosophy, cognitive science, or psychology.

=================

The field of artificial intelligence (AI), formally founded in 1956, attempts to understand, model and design intelligent systems. Since the beginning of AI, two alternative approaches were pursued to model intelligence: on the one hand, there was the symbolic approach, which was a mathematically oriented way of abstractly describing processes leading to intelligent behaviour. On the other hand, there was a rather physiologically oriented approach, which favoured the modelling of brain functions in order to reverse-engineer intelligence. Between the late 1960s and the mid-1980s, virtually all research in the field of AI and cognitive science was conducted in the symbolic paradigm. This was due to the highly influential analysis of the capabilities and limitations of the perceptron by [Minsky and Papert, 1969]. The perceptron was a very popular neural model at that time. In the mid-1980s a renaissance of neural networks took place under the new title of connectionism, challenging the dominant symbolic paradigm of AI. The `brain-oriented' connectionist paradigm claims that research in the traditional symbolic paradigm cannot be successful since symbols are insufficient to model crucial aspects of cognition and intelligence. Since then a debate between the advocates of both paradigms has been taking place, which frequently tends to become polemic in many writings on the virtues and vices of either the symbolic or the connectionist paradigm. Advocates on both sides have often neither appreciated nor really addressed each other's arguments or concerns. Besides this somewhat frustrating state of the debate, the main motivation for writing this book was the methodological analysis of both paradigms, which is presented in part III of this book and which I feel has been long overdue. In part III, I set out to develop criteria which any successful method for building AI systems and any successful theory for understanding cognition has to fulfill.
The main arguments put forward by the advocates on both sides fail to address the methodologically important and ultimately decisive question for or against a paradigm: How feasible is the development of an AI system or the understanding of a theory of cognition? The significance of this question is: it is not only the nature of an intelligent system or the phenomenon of cognition itself which plays the crucial role, but also the human subject who is to perform the design or who wants to understand a theory of cognition. The arguments for or against one of the paradigms have, by and large, completely forgotten the role of the human subject. The specific capabilities and limitations of the human subject to understand a theory or a number of design steps need to be an instrumental criterion in deciding which of the paradigms is more appropriate. Furthermore, the human subject's capabilities and limitations have to provide the guideline for the development of more suitable frameworks for AI and cognitive science. Hence, the major theme of this book is methodological considerations regarding the form and purpose of a theory, which could and should be the outcome of our scientific endeavours in AI and cognitive science. This book is written for researchers, students, and technically skilled observers of the rapidly evolving fields of AI and cognitive science alike. While the third part puts forward my methodological criticism, parts I and II provide the fundamental ideas and basic techniques of the symbolic and connectionist paradigm respectively. The first two parts are mainly written for those readers who are new to the field, or are only familiar with one of the paradigms, to allow an easy grasp of the essential ideas of both paradigms. Both parts present the kernel of each paradigm without attempting to cover the details of the latest developments, as those do not affect the fundamental ideas. The methodological analysis of both paradigms with respect to their suitability for building AI systems and for understanding cognition is presented in part III. Available from Springer-Verlag. Price approximately (DEM 98, US$ 49)

Ron Sun Edward Merrill Todd Peterson To appear in: Cognitive Science. http://www.cecs.missouri.edu/~rsun/sun.CS99.ps

ABSTRACT This paper presents a skill learning model {\sc Clarion}. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun 1995), with a combination of localist and distributed representation. We compare the model with human data in a minefield navigation task, demonstrating some match between the model and human data in several respects.
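To make the bottom-up idea in the abstract above concrete, here is a minimal sketch in Python (not the authors' CLARION code): a procedural bottom level learns action values by tabular Q-learning, standing in for the connectionist/reinforcement level, while a declarative top level extracts an explicit state-action rule once the corresponding implicit value grows large. The task, RULE_THRESHOLD, and all parameter values are illustrative assumptions only.

----------------------------------------------------------------------
# Minimal sketch of bottom-up skill learning: implicit Q-values develop
# first, explicit rules are extracted later. Everything here is illustrative.
import random
from collections import defaultdict

ACTIONS = ["left", "right", "forward"]      # toy action set (assumed)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2       # illustrative learning constants
RULE_THRESHOLD = 0.5                        # extract a rule once Q exceeds this

Q = defaultdict(float)                      # bottom level: implicit Q(state, action)
rules = {}                                  # top level: explicit state -> action rules

def choose_action(state):
    """Prefer an extracted rule when one exists; otherwise epsilon-greedy on Q."""
    if state in rules and random.random() > EPSILON:
        return rules[state]
    if random.random() < EPSILON:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(state, a)])

def learn(state, action, reward, next_state):
    """Bottom level: one Q-learning update; top level: bottom-up rule extraction."""
    best_next = max(Q[(next_state, a)] for a in ACTIONS)
    Q[(state, action)] += ALPHA * (reward + GAMMA * best_next - Q[(state, action)])
    if Q[(state, action)] > RULE_THRESHOLD:
        rules[state] = action               # declarative knowledge appears later

# Toy usage: only "forward" is rewarded from state "start".
for _ in range(200):
    a = choose_action("start")
    learn("start", a, 1.0 if a == "forward" else 0.0, "goal")
print(rules)                                # typically {'start': 'forward'}
----------------------------------------------------------------------

Run this way, explicit rules only show up after the implicit values have been learned, which is the procedural-first, declarative-later ordering the abstract argues for.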
Two papers on accounting for consciousness computationally: -------------------------------------------------- Accounting for the Computational Basis of Consciousness: A Connectionist Approach Ron Sun To appear in: Consciousness and Cognition, 1999. http://www.cecs.missouri.edu/~rsun/sun.CC99.ps ABSTRACT This paper argues for an explanation of the mechanistic (computational) basis of consciousness that is based on the distinction between localist (symbolic) representation and distributed representation, the ideas of which have been put forth in the connectionist literature. A model is developed to substantiate and test this approach. The paper also explores the issue of the functional roles of consciousness, in relation to the proposed mechanistic explanation of consciousness. The model, embodying the representational difference, is able to account for the functional role of consciousness, in the form of the synergy between the conscious and the unconscious. The fit between the model and various cognitive phenomena and data (documented in the psychological literatures) is discussed to accentuate the plausibility of the model and its explanation of consciousness. Comparisons with existing models of consciousness are made in the end. -------------------------------------------------- Learning, Action, and Consciousness: A Hybrid Approach toward Modeling Consciousness Ron Sun Appeared in: Neural Networks, 10 (7), pp.1317-1331. 1997. http://www.cecs.missouri.edu/~rsun/sun.nn97.ps ABSTRACT This paper is an attempt at understanding the issue of consciousness through investigating its functional role, especially in learning, and through devising hybrid neural network models that (in a qualitative manner) approximate characteristics of human consciousness. In so doing, the paper examines explicit and implicit learning in a variety of psychological experiments and delineates the conscious/unconscious distinction in terms of the two types of learning and their respective products. The distinctions are captured in a two-level action-based model {\sc Clarion}. Some fundamental theoretical issues are also clarified with the help of the model. Comparisons with existing models of consciousness are made to accentuate the present approach. Finally, a paper on computational analysis of the model: --------------------------------- Autonomous Learning of Sequential Tasks: Experiments and Analyses by Ron Sun, Todd Peterson Appeared in: IEEE Transactions on Neural Networks, Vol.9, No.6, pp.1217-1234. November, 1998. http://www.cecs.missouri.edu/~rsun/sun.tnn98.ps ABSTRACT: This paper presents a novel learning model {\sc Clarion}, which is a hybrid model based on the two-level approach proposed in Sun (1995). The model integrates neural, reinforcement, and symbolic learning methods to perform on-line, bottom-up learning (i.e., learning that goes from neural to symbolic representations). The model utilizes both procedural and declarative knowledge (in neural and symbolic representations respectively), tapping into the synergy of the two types of processes. It was applied to deal with sequential decision tasks. Experiments and analyses in various ways are reported that shed light on the advantages of the model. =========================================================================== Prof. 
Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html http://www.cecs.missouri.edu/~rsun/clarion.html =========================================================================== From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: other information from CNSA, and information about public domain clustering and classification software. The CD will save time and effort in looking for clustering and classification information, and a production run of 1000 CDs will be distributed to key researchers, R&D specialists, and educators, across various disciplines relating to classification and data analysis. The CD is distributed as a supplement to the Journal of Classification and, in addition, will be available on library shelves with the Journal of Classification. Availability on CD also means that information on commercial software, shareware, and clustering- and classification-related services will be available, as well as publications and event information. Just $75 for lists of relevant publications or your exhibition events, with links to your web sites! More information and pricing is available from the CSNA web site, http://www.pitt.edu/~csna (see under 'New developments related to Classification Literature Automated Service'). Now is the time to reserve space on this CD. I look forward to hearing from you, Eva Whitmore CLASS Technical Editor /-----------------------------------------------------------\ Eva Whitmore 14 Calgary St. St. John's, NF A1A 3W2 Tel: 709-739-6252 Email: eva at cs.mun.ca \-----------------------------------------------------------/ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Fellowships Available Message-ID: -------------------------------------------------------------------- Positions Available at All Levels in Advanced Signal Processing and Magnetoencephalography/fMRI Wanted: neuroscientists, programmers, computer scientists, and physicists to join our growing Brain and Computation group in a newly funded functional brain imaging (MEG/fMRI) study of neural plasticity. Over a half dozen fellowships (funded by the National Foundation for Functional Brain Imaging) are available. 
For further details, see http://www.cs.unm.edu/~bap/hiring.html -------------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From school at cogs.nbu.acad.bg Tue Jun 6 06:52:25 2006 From: school at cogs.nbu.acad.bg (CogSci Summer School) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: CogSci 2000 Summer School Message-ID: 7th International Summer School in Cognitive Science Sofia, New Bulgarian University, July 10 - 30, 2000 Courses: * Distributed representations and gradual learning processes in cognition - Jay McClelland (Carnegie Mellon University, USA) * Connectionist models of language processing - Jeff Elman (University of California, San Diego, USA) * Brain Organization of Human Memory and Thought - John Gabrieli (Stanford University) * Cognitive Development - Graeme Halford (University of Queensland) * The Human Conceptual System - Lawrence W. Barsalou (Emory University) * Topics in Vision Science - Stephen E. Palmer (University of California, Berkeley) * Cognitive Science: A Basic Science for an Applied Science of Learning - John T. Bruer (James S. McDonnell Foundation) * Psychological Scaling - Encho Gerganov (New Bulgarian University) * Research Methods in Psycholinguistics - Elena Andonova (New Bulgarian University) * Research Methods in Memory and Thinking - Boicho Kokinov (New Bulgarian University) Organised by New Bulgarian University, Bulgarian Academy of Sciences, and Bulgarian Society for Cognitive Science Sponsored by the Open Society Institute in Budapest - HESP Program International Advisory Board Participation Participants will be selected by a Selection Committee on the bases of their submitted documents: * application form (see at the Web page), * CV, * statement of purpose, * copy of diploma; if student - academic transcript * letter of recommendation, * list of publications (if any) and short summary of up to three of them. Apply as soon as possible since the number of participants is restricted. Send applications to: Summer School in Cognitive Science Central and East European Center for Cognitive Science New Bulgarian University 21 Montevideo Str. Sofia 1635, Bulgaria e-mail: school at cogs.nbu.acad.bg For more information look at: http://www.nbu.bg/cogs/events/ss2000.html From ESANN at dice.ucl.ac.be Tue Jun 6 06:52:25 2006 From: ESANN at dice.ucl.ac.be (esann) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Antonio Turiel and Nestor Parga NOTE Exponential Or Polynomial Learning Curves? - Case By Case Studies Hanzhong Gu and Haruhisa Takahashi LETTERS Training Feed-forward Neural Networks With Gain Constraints Eric Hartman Variational Learning for Switching State-Space Models Zoubin Ghahramani and Geoffrey E. Hinton Retrieval Properties of A Hopfield Model With Random Asymmetric Interactions Zhang Chengxiang, Chandan Dasgupta and Manoranjan P. 
Singh On "Natural" Learning And Pruning in Multi-Layered Perceptrons Tom Heskes Synthesis of Generalized Algorithms for the Fast Computation of Synaptic Conductances with Markov Kinetic Models in Large Network Simulations Michele Giugliano Hierarchical Bayesian Models for Regularisation in Sequential Learning J.F.G. de Freitas, M. Niranjan and A. H. Gee Sequential Monte Carlo Methods to Train Neural Network Models J.F.G. de Freitas, M. Niranjan an, A. H. Gee and A. Doucet ----- ON-LINE - http://neco.mitpress.org/ SUBSCRIPTIONS - 2000 - VOLUME 12 - 12 ISSUES USA Canada* Other Countries Student/Retired $60 $64.20 $108 Individual $88 $94.16 $136 Institution $430 $460.10 $478 * includes 7% GST MIT Press Journals, 5 Cambridge Center, Cambridge, MA 02142-9902. Tel: (617) 253-2889 FAX: (617) 258-6779 mitpress-orders at mit.edu ----- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: This thoroughly and thoughtfully revised edtion of a very successful textbook makes the principles and the details of neural network modeling accessible to cognitive scientists of all varieties as well as other scholars interested in these models. Research since the publication of the first edition has been systematically incorporated into a framework of proven pedagogical value. Features of the second edition include: A new section on spatiotemporal pattern processing. Coverage of ARTMAP networks (the supervised version of adaptive resonance networks) and recurrent back-propagation networks. A vastly expanded section on models of specific brain areas, such as the cerebellum, hippocampus, basal ganglia, and visual and motor cortex. Up-to-date coverage of applications of neural networks in areas such as combinatorial optimization and knowledge representation. As in the first edition, the text includes extensive introductions to neuroscience and to differential and difference equations as appendices for students without the requisite background in these areas. As graphically revealed in the flowchart in the front of the book, the text begins with simpler processes and builds up to more complex multilevel functional systems. Table of contents: Chapters 2 through 7 each include equations and exercises (computational, mathematical, and qualitative) at the end of the chapter. The text sections are as follows. Flow Chart of the Book Preface Preface to the Second Edition Chapter 1: Brain and Machine: The Same Principles? What are Neural Networks? What Are Neural Networks? Is Biological Realism a Virtue? What Are Some Principles of Neural Network Theory? Methodological Considerations Chapter 2: Historical Outline 2.1. Digital Approaches The McCulloch-Pitts Network Early Approaches to Modeling Learning: Hull and Hebb Rosenblatt's Perceptrons Some Experiments With Perceptrons The Divergence of Artificial Intelligence and Neural Modeling 2.2. Continuous and Random Net Approaches Rashevsky's Work Early Random Net Models Reconciling Randomness and Specificity Chapter 3: Associative Learning and Synaptic Plasticity 3.1. Physiological Bases for Learning 3.2. Rules for Associative Learning Outstars and Other Early Models of Grossberg Anderson's Connection Matrices Kohonen's Early Work 3.3. Learning Rules Related to Changes in Node Activities Klopf's Hedonistic Neurons and the Sutton-Barto Learning Rule Error Correction and Back Propagation The Differential Hebbian Idea Gated Dipole Theory 3.4. 
Associative Learning of Patterns Kohonen's Recent Work: Autoassociation and Heteroassociation Kosko's Bidirectional Associative Memory Chapter 4: Competition, Lateral Inhibition, and Short-Term Memory 4.1. Contrast Enhancement, Competition, and Normalization Hartline and Ratliff's Work, and Other Early Visual Models Nonrecurrent Versus Recurrent Lateral Inhibition 4.2. Lateral Inhibition and Excitation Between Sensory Representations Wilson and Cowan's Work Work of Grossberg and Colleagues Work of Amari and Colleagues Energy Functions in the Cohen-Grossberg and Hopfield-Tank Models The Implications of Approach to Equilibrium Networks With Synchronized Oscillations 4.3. Visual Pattern Recognition Models Visual Illusions Boundary Detection Versus Feature Detection Binocular and Stereoscopic Vision Visual Motion Comparison of Grossberg's and Marr's Approaches 4.4. Uses of Lateral Inhibition in Higher Level Processing Chapter 5: Conditioning, Attention, and Reinforcement 5.1. Network Models of Classical Conditioning Early Work: Brindley and Uttley Rescorla and Wagner's Psychological Model Grossberg: Drive Representations and Synchronization Aversive Conditioning and Extinction Differential Hebbian Theory Versus Gated Dipole Theory 5.2. Attention and Short-Term Memory in Conditioning Models Grossberg's Approach to Attention Sutton and Barto's Approach: Blocking and Interstimulus Interval Effects Some Contrasts Between the Grossberg and Sutton-Barto Approaches Further Connections With Invertebrate Neurophysiology Further Connections With Vertebrate Neurophysiology Gated Dipoles, Aversive Conditioning, and Timing Chapter 6: Coding and Categorization 6.1. Interactions Between Short- and Long-Term Memory in Code Development Malsburg's Model With Synaptic Conservation Grossberg's Model With Pattern Normalization Mathematical Results of Grossberg and Amari Feature Detection Models With Stochastic Elements From Feature Coding to Categorization 6.2. Supervised Classification Models The Back Propagation Network and its Variants The RCE Model 6.3. Unsupervised Classification Models The Rumelhart-Zipser Competitive Learning Algorithm Adaptive Resonance Theory Edelman and Neural Darwinism 6.4. Models that Combine Supervised and Unsupervised Parts ARTMAP and Other Supervised Adaptive Resonance Networks Brain-State-in-a-Box (BSB) Models 6.5. Translation and Scale Invariance 6.6. Processing Spatiotemporal Patterns Chapter 7 Optimization, Control, Decision, and Knowledge Representation 7.1. Optimization and Control Classical Optimization Problems Simulated Annealing and Boltzmann Machines Motor Control: The Example of Eye Movements Motor Control: Arm Movements Speech Recognition and Synthesis Robotic and Other Industrial Control Problems 7.2. Decision Making and Knowledge Representation What, If Anything, Do Biological Organisms Optimize? Affect, Habit, and Novelty in Neural Network Theories Knowledge Representation: Letters and Words Knowledge Representation: Concepts and Inference 7.3. Neural Control Circuits, Mental Illness, and Brain Areas Overarousal, Underarousal, Parkinsonism, and Depression Frontal Lobe Function and Dysfunction Disruption of Cognitive-Motivational Interactions Impairment of Motor Task Sequencing Disruption of Context Processing Models of Specific Brain Areas Models of the Cerebellum Models of the Hippocampus Models of the Basal Ganglia Models of the Cerebral Cortex Chapter 8: A Few Recent Technical Advances 8.1. 
Some "Toy" and Real World Computing Applications Pattern Recognition Knowledge Engineering Financial Engineering "Oddball" Applications 8.2. Some Neurobiological Discoveries Appendix 1: Basic Facts of Neurobiology The Neuron Synapses, Transmitters, Messengers, and Modulators Invertebrate and Vertebrate Nervous Systems Functions of Vertebrate Subcortical Regions Functions of the Mammalian Cerebral Cortex Appendix 2: Difference And Differential Equations in Neural Networks Example: The Sutton-Barto Difference Equations Differential Versus Difference Equations Outstar Equations: Network Interpretation and Numerical Implementation The Chain Rule and Back Propagation Dynamical Systems: Steady States, Limit Cycles, and Chaos ABOUT THE AUTHOR: Daniel S. Levine is Professor of Psychology at the University of Texas at Arlington. A former president of the International Neural Network Society, he is the organizer of the MIND conferences, which bring together leading neural network researchers from academia and industry. Since 1975, he has written nearly 100 books, articles, and chapters for various audiences interested in neural networks. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Papers on associative memory in neuronal networks Message-ID: <14689.57198.956218.420206@cerebellum> PAPERS ON ASSOCIATIVE MEMORY IN NEURONAL NETWORKS We would like to bring to your attention a series of recent theoretical papers about associative memory in neuronal networks, now available over internet. By analysis and simulation experiments these studies explore the computational function of biophysical mechanisms, such as spike synchronization, gamma oscillations, NMDA transmission characteristics, and activity feedback in local cortical networks or over reciprocal projections. The simulation models vary widely in their degree of biophysical realism ranging from binary sparse associative memories to networks of compartmental neurons. List of manuscripts and abstracts, see below. Postscript versions are available on our web pages: http://www.informatik.uni-ulm.de/ni/mitarbeiter/FSommer/FSommernew.html http://personal-homepages.mis.mpg.de/wenneker/index.html Fritz and Thomas ---------------------------------------------------------------------------- Dr. Friedrich T. Sommer Department of Neural Information Processing University of Ulm D-89069 Ulm Germany Tel. 49(731)502-4154 FAX 49(731)502-4156 FRITZ at NEURO.INFORMATIK.UNI-ULM.DE ---------------------------------------------------------------------------- ____________________________________________________________________________ Dr.Thomas Wennekers Max-Planck-Institute for Mathematics in the Sciences Inselstrasse 22-26 04103 Leipzig Germany Phone: +49-341-9959-533 Fax: +49-341-9959-555 Email: Thomas.Wennekers at mis.mpg.de ____________________________________________________________________________ ----------------------------------------------------------------------------- LIST OF MANUSCRIPTS: (1) Sommer, F.T. and Wennekers, T. Modeling studies on the computational function of fast temporal structure in neuronal network activity submitted to J.Neurophysiol. (Paris) (2) Sommer, F.T. 
and Wennekers, T.: Associative memory in a pair of cortical cell groups with reciprocal connections Acctepted at the Computational Neuroscience Meeting CNS 2000 , Brugge, Belgium, July 2000. (3) Vollmer, U., Wennekers, T. and Sommer, F.T.: Coexistence of short and long term memory in a model network of realistic neurons Accepted at the Computational Neuroscience Meeting CNS 2000, Bruegge, Belgium (4) Sommer, F.T.: On cell assemblies in a cortical column Neurocomputing 2000, to appear (5) Wennekers, T. and Sommer, F.T.: Gamma-oscillations support optimal retrieval in associative memories of two-compartment neurons. Neurocomputing 26-27, 573-578, 1999. (6) Sommer, F.T. and Palm, G.: Improved Bidirectional Retrieval of Sparse Patterns Stored by Hebbian Learning Neural Networks 12 (2) (1999) 281 - 297 (7) Sommer, F.T.; Wennekers, Th.; Palm, G.: Bidirectional completion of cell assemblies in the cortex. In: J.M.Bower (ed) Computational Neuroscience: Trends in Research. Plenum Press, New York, 1998. (8) Sommer, F.T. and Palm, G.: Bidirectional Retrieval from Associative Memory in Advances in Neural Information Processing Systems 10, MIT Press, Cambridge, MA (1998) 675 - 681 ========================================================================================= ABSTRACTS: ----------------------------------------------------------------------------------------- (1) Modeling studies on the computational function of fast temporal structure in cortical circuit activity Friedrich T. Sommer and Thomas Wennekers The interplay between experiments and theoretical approaches can support the exploration of the function of neuronal circuits in the cortex. In this review we exemplify such a proceeding with a study on the functional role of spike timing and gamma-oscillations, and their relation to associative activity feedback through cortex-intrinsical synaptic connections. We first discern the theoretical approaches in general that have been most important in brain research, in particular, those approaches focusing on the biophysical, the functional, and the computational aspect. It is demonstrated how results from computational model studies on different levels of abstraction can constrain the functionality of associative memory expected in real cortical neuronal circuits. These constraints will be used to implement a computational model of associative memory on the base of biophysically elaborated compartmental neurons developed by Pinsky and Rinzel \cite{AN:PinskyRinzel94}. We run simulation experiments for two network architectures: a single interconnected pool of cells (say a cortical column), and two such reciprocally connected pools. In our biophysical model individual cell populations correspond to entities formed by Hebbian coincidence learning. When recalled by stimulating some cells in the population the stored patterns are extremely quickly completed and coded by events of synchronized single spikes. These fast associations are executed with an efficiency comparable to optimally tuned technical associative networks. The maximum repetition frequency for these association processes lies in the gamma-range. If a stimulus changes fast enough to switch between different memory patterns within one gamma period, a single association takes place without periodic firing of individual cells. Gamma-band firing and phase locking are therefore not primary coding features. They appear, however, with tonic stimulation or if feedback loops in the network provide a reverberation. 
The latter can improve (clean up) the recall iteratively. In the reciprocal wiring architecture bidirectional reverberations do not express in a rigid phase locking between the pools. Bursting turns out as a supportive mechanism for bidirectional associative memory. Sommer, F.T. and Wennekers, T. Modeling studies on the computational function of fast temporal structure in neuronal network activity submitted to J.Neurophysiol. (Paris) ----------------------------------------------------------------------------------------- (2) Associative memory in a pair of cortical cell groups with reciprocal projections Friedrich T. Sommer and Thomas Wennekers We examine the functional hypothesis of bidirectional associative memory in a pair of reciprocally projecting cortical cell groups. Our simulation model features two-compartment neurons and synaptic weights formed by Hebbian learning of pattern pairs. After stimulation of a learned memory in one group we recorded the network activation. At high synaptic memory load (0.14 bit/synapse) we varied the number of cells receiving stimulation input (input activity). The network ``recalled'' patterns by synchronized regular gamma spiking. Stimulated cells also expressed bursts that fascilitated the recall with low input activity. Performance was evaluated for one-step retrieval based on monosynaptic transmission expressed after ca. 35ms, and for {\it bidirectional retrieval} involving iterative activity propagation. One-step retrieval performed comparably to the technical Willshaw model with small input activity, but worse in other cases. In 80\% of the trials with low one-step performance iterative retrieval improved the result. It achieved higher overall performance after recall times of 60--260ms. Keyword: population coding; associative memory; Hebbian synapses; reciprocal cortical wiring Sommer, F.T. and Wennekers, T.: Associative memory in a pair of cortical cell groups with reciprocal connections Acctepted at the Computational Neuroscience Meeting CNS 2000 , Brugge, Belgium, July 2000. ----------------------------------------------------------------------------------------- (3) Coexistence of short and long term memory in a model network of realistic neurons Urs Vollmer, Thomas Wennekers, Friedrich T. Sommer NMDA-mediated synaptic currents are believed to influence LTP. A recent model \cite{Lisman98} demonstrates that they can instead support short term memory based on rhythmic spike activity. We examine this effect in a more realistic model that uses two-compartment neurons experiencing fatigue and also includes long-term memory by synaptic LTP. We find that the network does support both modes of operation without any parameter changes, but depending on the input patterns. Short term memory functionality might facilitate Hebbian learning through LTP by holding a new pattern while synaptic potentiation occurs. We also find that susceptibility of the short term memory against new input is time-dependent and reaches a maximum around the time constant of neuronal fatigue (200--400~ms). This corresponds well to the time scale of the syllabic rhythm and various psychophysical phenomena. Keywords: Short-term memory; associative memory; population coding; NMDA-activated channels. Vollmer, U., Wennekers, T. 
and Sommer, F.T.: Coexistence of short and long term memory in a model network of realistic neurons Accepted at the Computational Neuroscience Meeting CNS 2000, Bruegge, Belgium ----------------------------------------------------------------------------------------- (4) On cell assemblies in a cortical column Friedrich T. Sommer Recent experimental evidence for temporal coding of cortical cell populations \cite{AN:Riehleetal97,AN:Donoghueetal98} recurs to Hebb's classical cell assembly notion. Here the properties of columnar cell assemblies are estimated, using the assumptions about biological parameters of Wickens \& Miller \cite{FS:WickensMiller97}, but extending and correcting their predictions: Not the combinatorical constraint as they assume, but synaptic saturation and the requirement of low activation outside the assembly limit assembly size and number. As will be shown, i) columnar assembly processing can be still information theoretically efficient, and ii) at efficient parameter settings several assemblies can be ignited in a column at the same time. The feature ii) allows faster and more flexible access to the information contained in the set of stored cell assemblies. Keyword}s: population coding; associative memory; Hebbian synapses, columnar connectivity Sommer, F.T.: On cell assemblies in a cortical column Neurocomputing 2000, to appear ----------------------------------------------------------------------------------------- (5) Gamma-oscillations support optimal retrieval in associative memories of two-compartment neurons Thomas Wennekers and Friedrich T. Sommer Theoretical studies concerning iterative retrieval in conventional associative memories suggest that cortical gamma-oscillations may constitute sequences of fast associative processes each restricted to a single period. By providing a rhythmic threshold modulation suppressing cells that are uncorrelated with a stimulus, interneurons significantly contribute to this process. This hypothesis is tested in the present paper utilizing a network of two-compartment model neurons developed by Pinsky and Rinzel. It is shown that gamma-oscillations can simultaneously support an optimal speed for single pattern retrieval, an optimal repetition frequency for consecutive retrieval processes, and a very high memory capacity. Keywords: gamma-oscillations; threshold control; associative memory Wennekers, T. and Sommer, F.T.: Gamma-oscillations support optimal retrieval in associative memories of two-compartment neurons. Neurocomputing 26-27, 573-578, 1999. ----------------------------------------------------------------------------------------- (6) Improved Bidirectional Retrieval of Sparse Patterns Stored by Hebbian Learning Friedrich T. Sommer and Guenther Palm The Willshaw model is asymptotically the most efficient neural associative memory (NAM), but its finite version is hampered by high retrieval errors. Iterative retrieval has been proposed in a large number of different models to improve performance in auto-association tasks. In this paper bidirectional retrieval for the hetero-associative memory task is considered: We define information efficiency as a general performance measure for bidirectional associative memory (BAM) and determine its asymptotic bound for the bidirectional Willshaw model. 
For the finite Willshaw model an efficient new bidirectional retrieval strategy is proposed, the appropriate combinatorial model analysis is derived, and implications of the proposed sparse BAM for applications and brain theory are discussed. The distribution of the dendritic sum in the finite Willshaw model given by \citet{FS:Buckingham92} allows no fast numerical evaluation. We derive a combinatorial formula with a highly reduced evaluation time that is used in the improved error analysis of the basic model and for estimation of the retrieval error in the naive model extension where bidirectional retrieval is employed in the hetero-associative Willshaw model. The analysis rules out the naive BAM extension as a promising improvement. A new bidirectional retrieval algorithm -- called {\em crosswise bidirectional} (CB) retrieval -- is presented. The cross talk error is significantly reduced without employing more complex learning procedures or dummy augmentation in the pattern coding as proposed in other refined BAM models \citep{FS:Wangetal90,FS:Leungetal95}. The improved performance of CB retrieval is shown by a combinatorial analysis of the first step and by simulation experiments: It allows very efficient hetero-associative mapping as well as auto-associative completion for sparse patterns -- the experimentally achieved information efficiency is close to the asymptotic bound. The different retrieval methods in hetero-associative Willshaw matrix are discussed as Boolean linear optimization problems. The improved BAM model opens interesting new perspectives, for instance, in Information Retrieval it allows efficient data access providing segmentation of ambiguous user input, relevance feedback and relevance ranking. Finally, we discuss BAM models as functional model for reciprocal cortico-cortical pathways, and the implication of this for a more flexible version of Hebbian cell-assemblies. Keywords: Bidirectional associative memory, Hebbian learning, iterative retrieval, combinatorial analysis, cell-assemblies, neural information retrieval (6) Sommer, F.T. and Palm, GT.: Improved Bidirectional Retrieval of Sparse Patterns Stored by Hebbian Learning Neural Networks 12 (2) (1999) 281 - 297 ----------------------------------------------------------------------------------------- (7) Bidirectional Completion of Cell Assemblies in the Cortex Friedrich T. Sommer T. Wennekers and G. Palm Reciprocal pathways are presumedly the dominant wiring organization for cortico-cortical long range projections\refnote{\cite{AN:FellemanVanEssen91}}. This paper examines the hypothesis that synaptic modification and activation flow in a reciprocal cortico-cortical pathway correspond to learning and retrieval in a bidirectional associative memory (BAM): Unidirectional activation flow may provide the fast estimation of stored information, whereas bidirectional activation flow might establish an improved recall mode. The idea is tested in a network of binary neurons where pairs of sparse memory patterns have been stored in bidirectional synapses by fast Hebbian learning (Willshaw model). We assume that cortical long-range connections shall be efficiently used, i.e., in many different hetero-associative projections corresponding in technical terms to a high memory load. 
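For readers who have not met the Willshaw model before, a minimal sketch of the basic binary scheme referred to in these abstracts may be useful (written in Python; the pattern sizes, sparseness, and the k-winner threshold are assumptions, and the crosswise bidirectional retrieval proposed in the papers is not implemented here):

----------------------------------------------------------------------
# Minimal Willshaw-style hetero-associative memory: clipped Hebbian
# (outer-product) storage of sparse binary pattern pairs, followed by
# thresholded one-step retrieval. All sizes are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_x, n_y, k, n_pairs = 256, 256, 8, 40   # input units, output units, active bits, stored pairs

def sparse_pattern(n, k):
    """Binary vector of length n with exactly k active units."""
    v = np.zeros(n, dtype=int)
    v[rng.choice(n, size=k, replace=False)] = 1
    return v

xs = [sparse_pattern(n_x, k) for _ in range(n_pairs)]
ys = [sparse_pattern(n_y, k) for _ in range(n_pairs)]

# Storage: a binary synapse is switched on if its pre- and post-synaptic
# units are ever active together in one of the stored pairs.
W = np.zeros((n_y, n_x), dtype=int)
for x, y in zip(xs, ys):
    W = np.maximum(W, np.outer(y, x))

def retrieve(x, k):
    """One-step retrieval: dendritic sums, then keep the k strongest output
    units (a k-winner threshold -- one common choice, assumed here)."""
    sums = W @ x
    threshold = np.sort(sums)[-k]
    return (sums >= threshold).astype(int)

recalled = retrieve(xs[0], k)
print("overlap with stored target:", int(recalled @ ys[0]), "out of", k)
----------------------------------------------------------------------

At this low memory load one-step recall is essentially perfect; raising n_pairs produces the cross-talk errors that motivate the iterative and bidirectional retrieval schemes analyzed in the papers above.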
While the straight-forward BAM extension of the Willshaw model does not improve the performance at high memory load, a new bidirectional recall method (CB-retrieval) is proposed accessing patterns with highly improved fault tolerance and also allowing segmentation of ambiguous input. The improved performance is demonstrated in simulations. The consequences and predictions of such a cortico-cortical pathway model are discussed. A brief outline of the relations between a theory of modular BAM operation and common ideas about cell assemblies is given. Sommer, F.T.; Wennekers, Th.; Palm, G.: Bidirectional completion of cell assemblies in the cortex. In: J.M.Bower (ed) Computational Neuroscience: Trends in Research. Plenum Press, New York, 1998. ----------------------------------------------------------------------------------------- (8) Bidirectional Retrieval from Associative Memory Friedrich T. Sommer and G. Palm Similarity based fault tolerant retrieval in neural associative memories (NAM) has not lead to wiedespread applications. A drawback of the efficient Willshaw model for sparse patterns \cite{FS:Steinbuch61,FS:Willshaw69}, is that the high asymptotic information capacity is of little practical use because of high cross talk noise arising in the retrieval for finite sizes. Here a new bidirectional iterative retrieval method for the Willshaw model is presented, called crosswise bidirectional (CB) retrieval, providing enhanced performance. We discuss its asymptotic capacity limit, analyze the first step, and compare it in experiments with the Willshaw model. Applying the very efficient CB memory model either in information retrieval systems or as a functional model for reciprocal cortico-cortical pathways requires more than robustness against random noise in the input: Our experiments show also the segmentation ability of CB-retrieval with addresses containing the superposition of pattens, provided even at high memory load. Sommer, F.T. and Palm, G.: Bidirectional Retrieval from Associative Memory in Advances in Neural Information Processing Systems 10, MIT Press, Cambridge, MA (1998) 675 - 681 ----------------------------------------------------------------------------------------- ========================================================================================= From evsukoff at LMP.UFRJ.BR Tue Jun 6 06:52:25 2006 From: evsukoff at LMP.UFRJ.BR (Alexandre Evsukoff) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: FINAL CALL FOR DEMONSTRATIONS SBRN'2000 Message-ID: <7C7E8A3FEC@lmp.ufrj.br> CALL FOR DEMONSTRATIONS SBRN'2000 - VIth BRAZILIAN SYMPOSIUM ON NEURAL NETWORKS http://www.iltc.br/sbrn2000 Rio de Janeiro, November 22-25, 2000 The SBRN'2000 will host a Demonstration Session, in parallel with Tutorials in November 22, 2000. The Demonstration Session showcases state-of-the-art Neural software products and provides researchers with an opportunity to show their research in action. Early exhibition of research prototypes are encouraged but commercial products will also be present. This will be an opportunity to put face-to-face innovative research prototypes and mature commercial products. The Demonstration Session will be an "open-house" event in the first day of the Symposium. However, all accepted research demonstrations will be left available within the Poster Sessions. A CD-ROM containing a short animated version of each demonstration will be possibly pressed and distributed to participants. 
Participants are invited to submit proposals to demonstrate their systems, especially those whose papers were accepted for presentation at the conference program. In addition to contact information, proposals must include the following:
- A two-page description of the technical content of the demo, including credits and references.
- An animated version of the demonstration or a demo storyboard (six pages maximum). This will be the primary method of evaluating the proposals.
- A detailed description of hardware and software requirements.
The Organising Committee will provide the Demonstration Session with generic PCs and standard software. Unix- and Mac-based demonstrations will be possible. The Demonstration Session will also allow demonstrations via the web. Anyone interested in participating should include with their proposal a URL that accesses their demo. Demonstration proposals must be received in their entirety, including any supporting materials, by August 28. Authors will be notified of acceptance by September 29, 2000. Any questions or comments, as well as demonstration proposals, must be sent electronically directly to the Demonstrations Chair. Demonstrations Chair: Alexandre Evsukoff (UFRJ/ILTC, Brazil) evsukoff at lmp.ufrj.br

An Incremental Multivariate Regression Method for Function Approximation from Noisy Data
M. Carozza and S. Rampone, Università del Sannio

Abstract: In this paper we consider the problem of approximating functions from noisy data. We propose an incremental supervised learning algorithm for RBF networks. Hidden Gaussian nodes are added in an iterative manner during the training process. For each new node added, the activation function center and the output connection weight are set according to an extended chained version of the Nadaraya-Watson estimator. Then the variances of the activation functions are determined by an empirical risk driven rule based on a genetic-like optimization technique. The postscript file is available at http://space.tin.it/scienza/srampone/indexing.htm

financial assets remains a highly controversial question in finance (even if recent publications in main scientific references seem to give some credit to those works). From the methodological point of view, financial time series appear to be very challenging. They are often characterized by a lot of noise, problems of stationarity, sudden changes of volatility, ... Neural networks have appeared as new tools in this area in the last decade. This special session will try to bring to light the serious results that we can expect from neural networks in this field and to analyze the methodological issues of their application.

Artificial neural networks and early vision processing
------------------------------------------------------
Organised by D. Charles, C. Fyfe, Univ. of Paisley (Scotland)

It is well known that biological visual systems, and in particular the human visual system, are extraordinarily good at deciphering very complex visual scenes.
Certainly, if consider the human visual system to be solving inverse graphic problems then we have not really come close to building artificial systems which are as effective as biological ones. We have much to learn from studying biological visual architecture and the implementation of practical vision based products could be improved by gaining inspiration from these systems. The following are some suggested areas of interest: - Unsupervised preprocessing methods - e.g. development of local filters, edge filtering. - Statistical structure identification - e.g. Independent Component Analysis, Factor Analysis, Principal Components Analysis, Projection pursuit. - Information theoretic techniques for the extraction/preservation of information in visual data. - Coding strategies - e.g. sparse coding, complexity reduction. - Binocular disparity. - Motion, invariances, colour encoding - e.g. optical flow, space/time filters. - Topography preservation. - The practical application of techniques relating to these topics. Artificial neural networks for Web computing -------------------------------------------- Organised by M. Maggini, Univ. di Siena (Italy) The Internet represents a new challenging field for the application of machine learning techniques to devise systems which improve the accessibility to the information available on the web. This domain is particular appealing since it is easy to collect large amounts of data to be used as training sets while it is usually difficult to write manually sets of rules that solve interesting tasks. The aim of this special session is to present the state of the art in the field of connectionist systems applied to web computing. The possible fields for applications involve distributed information retrieval issues like the design of thematic search engines, user modeling algorithms for the personalization of services to access information on the web, automatic security management, design and improvement of web servers through prediction of request patterns, and so on. In particular the suggested topics are: - Personalization of the access to information on the web - Recommender systems on the web - Crawling policies for search engines Focussed crawlers - Analysis and prediction of requests to web servers - Intelligent chaching and proxies - Security issues (e.g. intrusion detection) Dedicated hardware implementations: perspectives on systems and applications ---------------------------------------------------------------------------- Organised by D. Anguita, M. Valle, Univ. of Genoa (Italy) The aim of this session is to assess new proposals for bridging the gap between algorithms, applications and hardware implementations of neural networks. Usually these three fields are not investigated in close connection: researchers working in the development of dedicated hardware implementations develop simplified versions of otherwise complex neural algorithms or develop dedicated algorithms: usually these algorithms have not been thoroughly tested on real-world applications. At the same time, many theoretically sound algorithms are not feasible in dedicated hardware, therefore limiting their success only to applications where a software solution on a general-purpose system is feasible. The focus of the session will be on the issues related to the hardware implementation of neural algorithms and architectures and their successful application to real world-problems, not on the details of the hardware implementation itself. 
The session will review both major achievements in hardware friendly algorithms and assess major results obtained in the application of dedicated neural hardware to real industrial and/or consumer applications. Novel neural transfer functions ------------------------------- Organised by W. Duch, Nicholas Copernicus Univ. (Poland) It is commonly believed that because of universal approximation theorem sigmoidal functions are sufficient for all applications. This belief has been responsible for a slow progress in creating neural networks based on novel transfer functions or using several transfer functions in one network. Transfer functions are as important for creating good neural models as the architectures and the training methods are because they have strong influence on rates of convergence and on complexity of networks needed to solve the problem at hand. This special session will be devoted to neural models exploring the benefits of using different transfer functions. Papers comparing results obtained with known and novel transfer functions, developing methods of training suitable for heterogeneous function networks, investigating theoretical rates of convergence or deriving approximations to biological neural activity are strongly encouraged. Neural networks and evolutionary/genetic algorithms - hybrid approaches ----------------------------------------------------------------------- Organised by T. Villmann, Univ. Leipzig (Germany) Artificial neural networks can be taken as a special kind of learning and self-adapting data processing systems. The abilities to handle noisy and high-dimensional data, nonlinear problems, large data sets etc. using neural techniques have lead to an innumerous number of applications as well as a good theory behind. An other adaptation approach is the approach of genetic and evolutionary algorithms. One of the most advantages of these methods is the relative independence of the algorithm according to the optimization goal defined by the fitness function. The fitness function can comprise traditional restrictions but may also include explicit expert knowledge. In the last years several approaches were developed combining both neural networks and genetic/evolutionary algorithms. Thereby, the methods ranging from neural network learning using genetic algorithms and structure adaptation of neural network topologies by genetic algorithms to migration dynamic in evolutionary algorithms according to neural network dynamics and other. Of coarse, combining both approaches should improve the capability of the resulting hybrid system. Authors of this special session are invited to submit actual contributions which cover the above shortly but not completely explained area of hybrid systems combining neural networks and genetic/evolutionary algorithms. Thereby new methods and theoretical developments should be emphasized. However, new applications with an interesting theoretical background are also of interest. Possible topics may be (but not restricted for further): - neural network adaptation by genetic/evolutionary algorithms - learning in neural networks using genetic/evolutionary algorithms - clustering, fuzzy clustering by genetic/evolutionary algorithms - neural networks for genetic/evolutionary algorithms - applications using hybrid systems ===================================================== ESANN - European Symposium on Artificial Neural Networks http://www.dice.ucl.ac.be/esann * For submissions of papers, reviews,... Michel Verleysen Univ. Cath. 
de Louvain - Microelectronics Laboratory 3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium tel: +32 10 47 25 51 - fax: + 32 10 47 25 98 mailto:esann at dice.ucl.ac.be * Conference secretariat D facto conference services 27 rue du Laekenveld - B-1080 Brussels - Belgium tel: + 32 2 420 37 57 - fax: + 32 2 420 02 55 mailto:esann at dice.ucl.ac.be ===================================================== From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: in-depth discussion on recent research results. Also, at the 7th International Conference on Neural Information Processing on November 2000, special sessions were organized on Blind Adaptive Filtering and Independent Component Analysis and Models of Natural Image Statistics. Also, several post-conference workshops have been organized at Neural Information Processing Systems over the last few years. The papers presented dealt with recent developments on new theories and applications of BSS and ICA, and contribute to the scientific and engineering progress in this important field. Some contributions for the special issue will evolve from the attendants of those meetings. If you were not able to attend and are active in these areas of research, you are highly encouraged to submit your work. Examples of topics relevant to this special issue include : - Multichannel blind deconvolution and equalization - Nonstationary source separation - Nonlinear ICA - Noisy ICA - Variational methods for ICA - PCA/ICA feature extraction - BSS/ICA applications (speech enhancement, Efficient encoding of natural scenes and sound , telecommunication, data mining, medical data processing, etc.) Two copies of the manuscripts should be submitted by March 1st, 2001, to: Dr. V. David Sanchez NEUROCOMPUTING - Editor in Chief - Advanced Computational Intelligent Systems P.O. Box 60130 Pasadena, CA 91116-6130 U.S.A. Fax: +1-626-793-5120 Email: vdavidsanchez at earthlink.net In your submitting letter please clearly write that you are submitting your papers to the Special Issue on BSS/ICA. Guest Editors Dr. Shun-ichi Amari Vice Director, RIKEN Brain Science Institute Laboratory for Mathematical Neuroscience Research Group on Brain-Style Information Systems Wako Japan Tel: +81-(0)48-467-9669 Fax: +81-(0)48-467-9687 E-mail: amari at brain.riken.go.jp Dr. Aapo Hyvarinen Neural Networks Research Centre Helsinki University of Technology P.O. Box 5400 FIN-02015 HUT Finland Tel: +358-9-451-3278 Fax: +358-9-451-3277 Email: Aapo.Hyvarinen at hut.fi Prof. Soo-Young Lee Director, Brain Science Research Center Korea Advanced Institute of Science and Technology 373-1 Kusong-dong, Yusong-gu Taejon 305-701 Korea Tel: +82-42-869-3431 Fax: +82-42-869-8570 E-mail: sylee at ee.kaist.ac.kr Dr. Te-Won Lee Institute for Neural Computation University of California, San Diego 9500 Gilman Dr. DEPT 0523 La Jolla, CA 92093, USA Phone: (858) 822-1905 Fax: (858) 587-0417 Email: tewon at inc.ucsd.edu Dr. V. David Sanchez NEUROCOMPUTING - Editor in Chief - Advanced Computational Intelligent Systems P.O. Box 60130 Pasadena, CA 91116-6130 U.S.A. Fax: +1-626-793-5120 Email: vdavidsanchez at earthlink.net From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: John.Smith.txt, Message-ID: This will facilitate appropriate filing. Thanks a lot! 
Juergen Schmidhuber http://www.idsia.ch/~juergen/ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Ron Sun Edward Merrill Todd Peterson To appear in: Cognitive Science, Vol.25, No.2. March 2001. http://www.cecs.missouri.edu/~rsun/sun.CS99.ps http://www.cecs.missouri.edu/~rsun/sun.CS99.pdf ABSTRACT This paper presents a skill learning model CLARION. Different from existing models of mostly high-level skill learning that use a top-down approach (that is, turning declarative knowledge into procedural knowledge through practice), we adopt a bottom-up approach toward low-level skill learning, where procedural knowledge develops first and declarative knowledge develops later. Our model is formed by integrating connectionist, reinforcement, and symbolic learning methods to perform on-line reactive learning. It adopts a two-level dual-representation framework (Sun 1995), with a combination of localist and distributed representation. We compare the model with human data in a minefield navigation task, demonstrating some match between the model and human data in several respects. A new paper on consciousness: -------------------------------------------------- Computation, Reduction, and Teleology of Consciousness Ron Sun To appear in: {\it Cognitive Systems Research}, Vol.1, No.4, 2001. http://www.cecs.missouri.edu/~rsun/sun.jcsr-cons10.ps http://www.cecs.missouri.edu/~rsun/sun.jcsr-cons10.pdf ABSTRACT This paper aims to explore mechanistic and teleological explanations of consciousness. In terms of mechanistic explanations, it critiques various existing views, especially those embodied by existing computational cognitive models. In this regard, the paper argues in favor of the explanation based on the distinction between localist (symbolic) representation and distributed representation (as formulated in the connectionist literature), which reduces the phenomenological difference to a mechanistic difference. Furthermore, to establish a teleological explanation of consciousness, the paper discusses the issue of the functional role of consciousness on the basis of the afore-mentioned mechanistic explanation. A proposal based on synergistic interaction between the conscious and the unconscious is advanced that encompasses various existing views concerning the functional roles of consciousness. This two-step deepening explanation has some empirical support, in the form of a cognitive model and various cognitive data that it captures. Also, a previous paper on accounting for consciousness computationally: -------------------------------------------------- Accounting for the Computational Basis of Consciousness: A Connectionist Approach Ron Sun Appeared in: Consciousness and Cognition, 1999. http://www.cecs.missouri.edu/~rsun/sun.CC99.ps http://www.cecs.missouri.edu/~rsun/sun.CC99.pdf ABSTRACT This paper argues for an explanation of the mechanistic (computational) basis of consciousness that is based on the distinction between localist (symbolic) representation and distributed representation, the ideas of which have been put forth in the connectionist literature. A model is developed to substantiate and test this approach. The paper also explores the issue of the functional roles of consciousness, in relation to the proposed mechanistic explanation of consciousness. 
The model, embodying the representational difference, is able to account for the functional role of consciousness, in the form of the synergy between the conscious and the unconscious. The fit between the model and various cognitive phenomena and data (documented in the psychological literatures) is discussed to accentuate the plausibility of the model and its explanation of consciousness. Comparisons with existing models of consciousness are made in the end. ----------------------------------------------------------------- Symbol Grounding: A New Look At An Old Idea by Ron Sun Appeared in: Philosophical Psychology, Vol.13, No.2, pp.149-172. 2000. http://www.cecs.missouri.edu/~rsun/sun.PP00.ps http://www.cecs.missouri.edu/~rsun/sun.PP00.pdf ABSTRACT Symbols should be grounded, as has been argued before. But we insist that they should be grounded not only in subsymbolic activities, but also in the interaction between the agent and the world. The point is that concepts are not formed in isolation (from the world), in abstraction, or ``objectively". They are formed in relation to the experience of agents, through their perceptual/motor apparatuses, in their world and linked to their goals and actions. In this paper, we will take a detailed look at this relatively old issue, using a new perspective, aided by our work of computational cognitive model development. Finally, a previous paper on computational aspects of the model: --------------------------------- Autonomous Learning of Sequential Tasks: Experiments and Analyses by Ron Sun, Todd Peterson Appeared in: IEEE Transactions on Neural Networks, Vol.9, No.6, pp.1217-1234. November, 1998. http://www.cecs.missouri.edu/~rsun/sun.tnn98.ps ABSTRACT: This paper presents a novel learning model CLARION, which is a hybrid model based on the two-level approach proposed in Sun (1995). The model integrates neural, reinforcement, and symbolic learning methods to perform on-line, bottom-up learning (i.e., learning that goes from neural to symbolic representations). The model utilizes both procedural and declarative knowledge (in neural and symbolic representations respectively), tapping into the synergy of the two types of processes. It was applied to deal with sequential decision tasks. Experiments and analyses in various ways are reported that shed light on the advantages of the model. =========================================================================== Prof. Ron Sun http://www.cecs.missouri.edu/~rsun CECS Department phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 201 Engineering Building West Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun http://www.cecs.missouri.edu/~rsun/journal.html http://www.cecs.missouri.edu/~rsun/clarion.html =========================================================================== From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: ORGANIZING COMMITTEE Roland Baddeley (Sussex University) William Lowe (Harvard University) John Bullinaria (Reading University) Samantha Harltley (Liverpool University) CONTACT DETAILS For any problems or questions, please send e-mail to Roland Baddeley (ncpw7 at biols.susx.ac.uk) URL: http://www.biols.susx.ac.uk/home/Roland_Baddeley/NCPW7/NCPW7.html AIMS AND OBJECTIVES The Seventh Neural Computation and Psychology Workshop (NCPW7) will be held in Brighton, England from September 17-19, 2001. 
Each year this highly focused conference attracts a select group of (mostly, but not exclusively, European) neural network modellers specifically interested in psychology and neuropsychology. The theme of this year's workshop is neural network modelling in the areas of Cognition and Perception. Between 25 and 30 papers will be accepted as oral presentations. In addition to the high quality of the papers presented, this Workshop is always of limited size and takes place in an informal setting, both of which are explicitly designed to encourage interaction among the researchers present. Although we are particularly interested in models of cognition and perception, we will consider all papers that have something to do with the announced topic, even if rather tangentially. The organisation of the final program will depend on the submissions received. As in previous years, the Workshop will be reasonably small and hopefully very friendly, with no parallel sessions and plenty of time to enjoy Brighton.

CALL FOR ABSTRACTS
There will be approximately 30-35 paper presentations. Abstracts (approximately 200 words) are due by July 14 and should be emailed to ncpw7 at biols.susx.ac.uk. Notification of acceptance for a paper presentation will be by July 31st.

REGISTRATION, ETC.
The cost of registration will be £60.00. This will include breakfast, lunch, tea and biscuits, but not evening meals (with Brighton, why?). Accommodation will be £84 for three nights in "superior" student accommodation.

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: and back again
Marc de Kamps and Frank van der Velde
Is the integrate-and-fire model good enough? A review
Jianfeng Feng
------------------------------------------------------------------
Electronic access: www.elsevier.com/locate/neunet/. Individuals can look up instructions, aims & scope, see news, tables of contents, etc. Those who are at institutions which subscribe to Neural Networks get access to full article text as part of the institutional subscription. Sample copies can be requested for free and back issues can be ordered through the Elsevier customer support offices: nlinfo-f at elsevier.nl usinfo-f at elsevier.com or info at elsevier.co.jp
------------------------------
INNS/ENNS/JNNS Membership includes a subscription to Neural Networks:
The International (INNS), European (ENNS), and Japanese (JNNS) Neural Network Societies are associations of scientists, engineers, students, and others seeking to learn about and advance the understanding of the modeling of behavioral and brain processes, and the application of neural modeling concepts to technological problems. Membership in any of the societies includes a subscription to Neural Networks, the official journal of the societies. Application forms should be sent to all the societies you want to apply to (for example, one as a member with subscription and the other one or two as a member without subscription). The JNNS does not accept credit cards or checks; to apply to the JNNS, send in the application form and wait for instructions about remitting payment. The ENNS accepts bank orders in Swedish Crowns (SEK) or credit cards. The INNS does not invoice for payment.
----------------------------------------------------------------------------
Membership Type        INNS                 ENNS                  JNNS
----------------------------------------------------------------------------
membership with        $80 or               660 SEK or            Y 15,000 [including
Neural Networks        $55 (student)        460 SEK (student)     2,000 entrance fee] or
                                                                  Y 13,000 (student)
                                                                  [including 2,000
                                                                  entrance fee]
-----------------------------------------------------------------------------
membership without     $30                  200 SEK               not available to
Neural Networks                                                   non-students (subscribe
                                                                  through another society)
                                                                  Y 5,000 (student)
                                                                  [including 2,000
                                                                  entrance fee]
-----------------------------------------------------------------------------
Institutional rates    $1132                2230 NLG              Y 149,524
-----------------------------------------------------------------------------

Name: _____________________________________
Title: _____________________________________
Address: _____________________________________
_____________________________________
_____________________________________
Phone: _____________________________________
Fax: _____________________________________
Email: _____________________________________
Payment: [ ] Check or money order enclosed, payable to INNS or ENNS
OR [ ] Charge my VISA or MasterCard
card number ____________________________
expiration date ________________________

INNS Membership
19 Mantua Road
Mount Royal NJ 08061 USA
856 423 0162 (phone)
856 423 3420 (fax)
innshq at talley.com
http://www.inns.org

ENNS Membership
University of Skovde
P.O. Box 408
531 28 Skovde
Sweden
46 500 44 83 37 (phone)
46 500 44 83 99 (fax)
enns at ida.his.se
http://www.his.se/ida/enns

JNNS Membership
c/o Professor Tsukada
Faculty of Engineering
Tamagawa University
6-1-1, Tamagawa Gakuen, Machida-city
Tokyo 113-8656 Japan
81 42 739 8431 (phone)
81 42 739 8858 (fax)
jnns at jnns.inf.eng.tamagawa.ac.jp
http://jnns.inf.eng.tamagawa.ac.jp/home-j.html
-----------------------------------------------------------------

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: entries of Human Nuclear DNA including a Gene with Complete CDS and with more than one exon have been selected according to assessed selection criteria (file genbank_filtered.inf). 4450 exons and 3752 introns have been extracted from these entries (files exons.seq and introns.seq). Several statistics for these exons and introns (overall nucleotides, average GC content, number of exons/introns containing non-AGCT bases, number of exons/introns in which the annotated end is not found, exon/intron minimum length, exon/intron maximum length, exon/intron average length, exon/intron length standard deviation, number of introns in which the sequence does not start with GT, number of introns in which the sequence does not end with AG) are reported (files exons.stat and introns.stat). Then 3762 + 3762 donor and acceptor sites have been extracted as windows of 140 nucleotides around each splice site. After discarding sequences not containing canonical GT-AG junctions (176+191), with insufficient data (not enough material for a 140-nucleotide window) (590+547), or containing non-AGCT bases (30+32), there are 2955+2992 windows (files GT_true.seq and AG_true.seq). Information and several statistics about the splice site extraction are reported (files GT_true.inf, AG_true.inf, GT_true.stat, and AG_true.stat).
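For illustration only (this sketch is not part of the HS3D distribution; the donor_windows name and the placement of the GT dinucleotide at the window centre are assumptions made for the example), extracting fixed-length donor-site windows from a sequence could look roughly like this in Python:

    import re

    def donor_windows(seq, window=140):
        # Collect fixed-length windows around candidate donor (GT) sites.
        # seq: DNA string (A/C/G/T expected); the GT is assumed to sit at the
        # window centre, an illustrative choice rather than the HS3D convention.
        half = window // 2
        out = []
        for m in re.finditer("GT", seq):
            start = m.start() - half
            end = start + window
            if start < 0 or end > len(seq):
                continue  # not enough material for a full 140-nucleotide window
            w = seq[start:end]
            if re.search("[^ACGT]", w):
                continue  # discard windows containing non-AGCT bases
            out.append((m.start(), w))
        return out

    # toy usage: a repetitive sequence yields one window per in-range GT
    windows = donor_windows("ACGT" * 80)

Acceptor (AG) windows would follow the same pattern with the search string changed.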
Finally, there are 287,296+348,370 windows of false splice sites, selected by searching for canonical GT-AG pairs in non-splicing positions. The false sites within a range of +/- 60 from a true splice site are marked as proximal (files GT_false.seq and AG_false.seq) (related information: GT_false.inf and AG_false.inf). HS3D is available at the Web server of the University of Sannio http://www.sci.unisannio.it/docenti/rampone/

-----------
Salvatore Rampone
Facoltà di Scienze MM.FF.NN. and INFM
Università del Sannio
Via Port'Arsa 11
I-82100 Benevento ITALY
E-mail: rampone at unisannio.it

From esann at dice.ucl.ac.be Tue Jun 6 06:52:25 2006 From: esann at dice.ucl.ac.be (esann) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID:

From cmbishop at microsoft.com Tue Jun 6 06:52:25 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: RESEARCH POSITION AT MSR CAMBRIDGE

A research position, either at researcher level (permanent position) or at postdoc level (2-year fixed term), according to qualifications and experience, is available at MSR Cambridge UK. In addition there is an associated Research Fellowship at Clare Hall College, near the lab. The research area is Machine Learning and Perception (including computer vision, signal processing, pattern recognition and probabilistic inference). Further details are at http://www.research.microsoft.com/mlp/.

From scheler at ICSI.Berkeley.EDU Tue Jun 6 06:52:25 2006 From: scheler at ICSI.Berkeley.EDU (scheler@ICSI.Berkeley.EDU) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Parallel Paper Submission Message-ID: We would like to suggest the adoption of a general policy in the Neural Network Community of unlimited parallel paper submission. It is then the task of editors and reviewers to accept or reject papers, and the liberty of authors to select the journal where they want to publish.

Gabriele Dorothea Scheler
Johann Martin Philipp Schumann
...............

From jlm at cnbc.cmu.edu Tue Jun 6 06:52:25 2006 From: jlm at cnbc.cmu.edu (Jay McClelland) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Parallel Paper Submission Message-ID: > > > I wholeheartedly agree with Tom. Parallel submission would create > a huge waste of reviewer time, and would lead to many bad feelings > if a paper is accepted to two outlets. Obviously the problem with > the sequential approach is that review turnaround can be slow. This > is an issue that we all can and should work on. > > -- Jay McClelland > >
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject In-Reply-To: References: Message-ID:

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: finally come around to the crux of the biggest problem with the present reviewing system: not enough personal incentive for reviewers to do a good job (or to do it quickly, for that matter). While the discussion started out being mostly about speed to publication, the issue of review quality has persistently resurfaced. And while many people have proposed some sort of free-market solution to letting papers compete, what I think would be more helpful would be turning some economic and motivational scrutiny on the reviews themselves.

Reviewing is hard, and it should be rewarded, and the reward should ideally be somewhat proportional to quality. Right now the reward is mostly altruism and personal pride in doing good work. There is a little bit of reputation involved in that the editors and sometimes the other reviewers see the results and can attach a name to them, but this is a weak reward signal because of how narrow the audience is.

The economic currency of academia is reputation. (There was a short column or article about this somewhere, maybe Science, but I don't remember.) The major motivation for doing good papers in the first place is the effect it has on your reputation. (These papers are part of your "professional voice", as Phil Agre's Networking on the Network document puts it.) This in turn affects funding, job hunting, tenure decisions, etc., so there is plenty of motivation to do it well. It would be nice if there were a way to create a stronger incentive (reward signal) for review quality. This is not too absurd, as it seems only a slight jump away from similar standard practices.

Part of a review is quality assessment, but tied in with that is advice on how to improve the work. Advice in some other contexts is amply rewarded in reputational currency. Advisors are partly judged by the accomplishments of students that they have advised. People who give advice on how to improve a paper are often mentioned in an acknowledgements section. Often the job they do is very similar to that of a reviewer; it just isn't coordinated by an editor. Sometimes such people become co-authors and then they get the full benefit of reputational reward for their efforts. Even anonymous reviewers are thanked in acknowledgements sections, though their reputations are not aided by this. Sometimes the line between the contributions of a reviewer and an author is somewhat blurry. Many people probably know of examples where a particularly helpful anonymous reviewer contributed more to a paper than someone who was, due to some obligation, listed as a coauthor. But many reviews are quite unhelpful or are way off on the quality assessment. It would improve the quality more consistently if the reviewer got some academic reputational currency out of doing good reviews (and a corresponding potential to look foolish for being very wrong).

How best to change the structure of the reviewing system to accomplish this is an open question. Someone mentioned a journal where reviews are published with the articles. This has some benefit, but also some problems. Reviews for articles that are completely rejected are not published.
We don't want people to only agree to review articles they think will get published. Also, while publishing reviews gives a little incentive not to screw up, to fully motivate quality such things would have to become regularly scrutinized in tenure and job decisions as an integral part of the overall publication record. But the field would have to be careful to separate out the quality of the review from the quality and fame of the reviewed material itself, again to not encourage jockeying to review only the papers that look to be the most influential. Clearly I don't have all the answers, but I advocate looking at the problem in terms of economic incentives, in the same way that economists look at other incentive systems such as incentive stock options for corporate employees, which serve a useful purpose but have well-understood drawbacks from an incentive perspective. Note that review quality is a somewhat separate issue than the also important filtering and attention selection issue, such as the software that Geoff Hinton requested. Even a perfect personalized selection mechanism would not completely replace the benefits of a reviewing system. For example, reviews still help authors to improve their work, and thereby the entire field. And realistically no such perfect selection mechanism will ever exist, so selection will always be greatly aided by quality improvement and filtering at the source side. Thus we should be interested in structural mechanisms to improve the quality of reviews (as well as in useful selection mechanisms to tell us what to read). -Karl ------------------------------------------------------------------------------- Karl Pfleger kpfleger at cs.stanford.edu www-cs-students.stanford.edu/~kpfleger/ ------------------------------------------------------------------------------- > From: Bob Damper > > This shortage of good qualified referees is going to continue all the > time there is no tangible reward (other than a warm altruistic feeling) > for the onerous task of reviewing. So, as many others have pointed > out, parallel submissions will exacerbate this situation rather than > improve it. Not a good idea! > > Bob. > > On Tue, 27 Nov 2001, rinkus wrote: > > > > In many instances a particular student may have particular knowledge and > > insight relevant to a particular submission but the proper model here is > > for the advertised reviewer (i.e., whose name appears on the editorial > > board of the publication) to consult with the student about the > > submission (and this should probably be in an indirect fashion so as to > > protect the author's identity and ideas) and then write the review from > > scratch himself. The scientific review process is undoubtedly worse off > > to the extent this kind of accountability is not ensured. We end up > > seeing far too much rehashing of old ideas and not enough new ideas. > > > > Rod Rinkus > > From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Integration and interpolation processes in vision KEYNOTE LECTURE: Erkki Oja (Helsinki University of Technology) Independent component analysis: Recent advances Friday, May 31, 2002 SYMPOSIUM ON LOCALIST AND DISTRIBUTED REPRESENTATIONS IN PERCEPTION AND COGNITION Edward Callaway (The Salk Institute) Cell type specificity of neural circuits in visual cortex James L. 
McClelland (Carnegie Mellon University) Varieties of distributed representation: A complementary learning systems perspective Stephen Grossberg (Boston University) Laminar cortical architecture in perception and cognition Jeffrey Bowers (University of Bristol) Localist coding in neural networks for visual word identification Randall O'Reilly (University of Colorado) Learning and memory in the hippocampus and neocortex: Principles and models Michael Page (University of Hertfordshire) Modeling memory for serial order Saturday, June 1, 2002 CORTICAL CODING AND SENSORY-MOTOR CONTROL: Dana Ballard (University of Rochester) Distributed synchrony: A general model for cortical coding Stephen G. Lisberger (University of California School of Medicine) The inner workings of a cortical motor system Daniel Bullock (Boston University) Neural dynamics of ocular tracking, interceptive reaching, and reach/grasp coordination RECOGNITION, MEMORY, AND REWARD: Edmund Rolls (Oxford University) Neural mechanisms involved in invariant object recognition Lynn Nadel (University of Arizona) The role of the hippocampal complex in recent and remote episodic and semantic memory Wolfram Schultz (University of Cambridge) Multiple reward systems in the brain KEYNOTE LECTURE: Daniel Schacter (Harvard University) The seven sins of memory: A cognitive neuroscience perspective CALL FOR ABSTRACTS Session Topics: * vision * spatial mapping and navigation * object recognition * neural circuit models * image understanding * neural system models * audition * mathematics of neural systems * speech and language * robotics * unsupervised learning * hybrid systems (fuzzy, evolutionary, digital) * supervised learning * neuromorphic VLSI * reinforcement and emotion * industrial applications * sensory-motor control * cognition, planning, and attention * other Contributed abstracts must be received, in English, by January 31, 2002. Notification of acceptance will be provided by email by February 28, 2002. A meeting registration fee must accompany each Abstract. See Registration Information below for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Registration fees of accepted Abstracts will be returned on request only until April 19, 2002. Each Abstract should fit on one 8.5" x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), mailing, and email address(es) should begin each Abstract. An accompanying cover letter should include: Full title of Abstract; corresponding author and presenting author name, address, telephone, fax, and email address; requested preference for oral or poster presentation; and a first and second choice from the topics above, including whether it is biological (B) or technological (T) work. Example: first choice: vision (T); second choice: neural system models (B). (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, VCR, and LCD projector facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. Accepted Abstracts will be printed in the conference proceedings volume. No longer paper will be required. 
The original and 3 copies of each Abstract should be sent to: Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. REGISTRATION INFORMATION: Early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to the address above. If paying by credit card, mail as above, or fax to (617) 353-7755, or email to cindy at cns.bu.edu. The registration fee will help to pay for a reception, 6 coffee breaks, and the meeting proceedings. STUDENT FELLOWSHIPS: Fellowships for PhD candidates and postdoctoral fellows are available to help cover meeting travel and living costs. The deadline to apply for fellowship support is January 31, 2002. Applicants will be notified by email by February 28, 2002. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Fellowship applicants who also submit an Abstract need to include the registration fee with their Abstract submission. Those who are awarded fellowships are required to register for and attend both the conference and the day of tutorials. Fellowship checks will be distributed after the meeting. REGISTRATION FORM Sixth International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215 Tutorials: May 29, 2002 Meeting: May 30 - June 1, 2002 FAX: (617) 353-7755 http://www.cns.bu.edu/meetings/ (Please Type or Print) Mr/Ms/Dr/Prof: _____________________________________________________ Name: ______________________________________________________________ Affiliation: _______________________________________________________ Address: ___________________________________________________________ City, State, Postal Code: __________________________________________ Phone and Fax: _____________________________________________________ Email: _____________________________________________________________ The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks. CHECK ONE: ( ) $85 Conference plus Tutorial (Regular) ( ) $55 Conference plus Tutorial (Student) ( ) $60 Conference Only (Regular) ( ) $40 Conference Only (Student) ( ) $25 Tutorial Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). 
Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ From jbf_w_s_hunter at hotmail.com Tue Jun 6 06:52:25 2006 From: jbf_w_s_hunter at hotmail.com (Jose' B. Fonseca) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: extremely large search spaces when viewed in terms of their basic input features. Examples include learning useful behavior for a robot that receives a continuous stream of video input, or learning to play the game of Go. For such problems, an unbiased search is infeasible, and a bias must be employed that focuses the search within the input space so that the size of the problem is effectively reduced. Letting representations develop as part of learning may be viewed as a way of establishing such a bias. Submissions are encouraged on issues including, but not limited to: * How can large search spaces be reduced by introducing a helpful bias? Existing approaches relevant to this include bias learning and learning to learn. * How can related problems become a source for helpful biases? This question is studied in multitask learning, sequential learning, many-layered learning, and lifelong learning. * Architectures for variable representations. * How may representations be used, and when searching the space of representations, what should their evaluation function be? During the revival of neural network research in the mid 1980's, it became clear that internal representations can be learned based on a global feedback signal. However, while this signal is appropriate as an evaluation for a complete system, the representations such systems employ may require a different evaluation: * assessing modularity: Can a representation be used in multiple contexts? Structural vs. functional modularity. * assessing value: How useful is the information a representation extracts to the construction of solutions? This is a credit assignment question, and recent work on establishing stable economies of value may shed new light on this. * Statistical techniques for assessing modularity. The modularity of a representation relates to a reduced dependency on elements that are not part of the representation. * Bayesian techniques for learning representations. * The relationship between statistical techniques and other approaches to credit assignment. * Hierarchy. The size of input spaces than can be handled may be scaled up by constructing representations from existing representations, leading to a hierarchy of representations. * Practical methods for hierarchical Bayesian inference. * Extracting symbols from sensors. How can raw sensor information be used to extract compact representations or symbols? * How may representations and the solutions employing them be developed simultaneously? One approach to this question is studied in the sub-discipline of evolutionary computation known as co-evolution. * Methods for constructive induction. * Development of theoretical terms through, for example, predicate invention. 
* Emerging issues in evolutionary and computational biology on the importance of change of representation in gene expression. * Change of representation that occurs over the lifetime of an embedded agent. WORKSHOP FORMAT The workshop will be organized so as to maximize interaction, discussion, and exchange of ideas. The day will start with an invited talk and will be followed by a series of paper presentations grouped by topic. Each presentation will be short, e.g. 10 or 15 minutes, with 5 minutes allotted to questions on the content of the talk. At the end of each group of papers the presenters will participate in a panel discussion to answer questions of a more general sort related to the topic and the relationship between the papers in that group. We will include a panel discussion on emerging problems in the area of development of representation, and conclude the day by inviting all participants to join in an open discussion with the goal of identifying the main themes of the day and establishing a research agenda. PROGRAM CO-CHAIRS Edwin de Jong Computer Science Department Brandeis University MS018 Waltham, MA 02454-9110 1.781.736.3366 edwin at cs.brandeis.edu Tim Oates CSEE Department University of Maryland Baltimore County 1000 Hilltop Circle Baltimore, MD 21250 1.410.455.3082 oates at cs.umbc.edu PROGRAM COMMITTEE Jonathan Baxter (WhizBang! Labs) Rich Caruana (Cornell) Rod Grupen (University of Massachusetts, Amherst) Tom Heskes (University of Nijmegen, The Netherlands) Leslie Kaelbling (MIT) Justus Piater (INRIA Rhone-Alpes, France) Jude Shavlik (University of Wisconsin, Madison) Paul Utgoff (University of Massachusetts, Amherst) IMPORTANT DATES Deadline for submissions: April 22 Notification to participants: May 10 Camera ready copy due: May 31 SUBMISSION INFORMATION Submissions may be either a full technical paper (up to 8 pages) or a position statement in the form of an extended abstract (one or two pages). Electronic submissions (PostScript, PDF, or HTML) are preferred and should be sent by April 22 to either of the co-chairs (Edwin de Jong at edwin at cs.brandeis.edu or Tim Oates at oates at cs.umbc.edu). Please format your submission according to the ICML-2002 formatting guidelines. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Integration and interpolation processes in vision KEYNOTE LECTURE: Erkki Oja (Helsinki University of Technology) Independent component analysis: Recent advances Friday, May 31, 2002 SYMPOSIUM ON LOCALIST AND DISTRIBUTED REPRESENTATIONS IN PERCEPTION AND COGNITION Edward Callaway (The Salk Institute) Cell type specificity of neural circuits in visual cortex James L. McClelland (Carnegie Mellon University) Semantic cognition: A parallel-distributed processing approach Stephen Grossberg (Boston University) Laminar cortical architecture Jeffrey Bowers (University of Bristol) Localist coding in neural networks for visual word identification Randall O'Reilly (University of Colorado) Learning and memory in the hippocampus and neocortex: Principles and models Michael Page (University of Hertfordshire) Modeling memory for serial order Saturday, June 1, 2002 CORTICAL CODING AND SENSORY-MOTOR CONTROL: Dana Ballard (University of Rochester) Distributed synchrony Stephen G. 
Lisberger (University of California School of Medicine) The inner workings of a cortical motor system Daniel Bullock (Boston University) Neural dynamics of ocular tracking, interceptive reaching, and reach/grasp coordination RECOGNITION, MEMORY, AND REWARD: Edmund Rolls (Oxford University) Neural mechanisms involved in invariant object recognition Lynn Nadel (University of Arizona) The hippocampal formation and episodic memory Wolfram Schultz (University of Cambridge) Multiple reward signals in the brain KEYNOTE LECTURE: Daniel Schacter (Harvard University) The seven sins of memory: A cognitive neuroscience perspective REGISTRATION FORM Sixth International Conference on Cognitive and Neural Systems Department of Cognitive and Neural Systems Boston University 677 Beacon Street Boston, Massachusetts 02215 Tutorials: May 29, 2002 Meeting: May 30 - June 1, 2002 FAX: (617) 353-7755 http://www.cns.bu.edu/meetings/ (Please Type or Print) Mr/Ms/Dr/Prof: _____________________________________________________ Name: ______________________________________________________________ Affiliation: _______________________________________________________ Address: ___________________________________________________________ City, State, Postal Code: __________________________________________ Phone and Fax: _____________________________________________________ Email: _____________________________________________________________ The conference registration fee includes the meeting program, reception, two coffee breaks each day, and meeting proceedings. The tutorial registration fee includes tutorial notes and two coffee breaks. CHECK ONE: ( ) $85 Conference plus Tutorial (Regular) ( ) $55 Conference plus Tutorial (Student) ( ) $60 Conference Only (Regular) ( ) $40 Conference Only (Student) ( ) $25 Tutorial Only (Regular) ( ) $15 Tutorial Only (Student) METHOD OF PAYMENT (please fax or mail): [ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges. [ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only). Name as it appears on the card: _____________________________________ Type of card: _______________________________________________________ Account number: _____________________________________________________ Expiration date: ____________________________________________________ Signature: __________________________________________________________ From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: with Neural Networks by Mahesan Niranjan, Sheffield University, UK From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Proceedings of the Seventh International Conference on Simulation of Adaptive Behavior edited by Bridget Hallam, Dario Floreano, John Hallam, Gillian Hayes, and Jean-Arcady Meyer The Simulation of Adaptive Behavior Conference brings together researchers from ethology, psychology, ecology, artificial intelligence, artificial life, robotics, computer science, engineering, and related fields to further understanding of the behaviors and underlying mechanisms that allow adaptation and survival in uncertain environments. 
The work presented focuses on robotic and computational experimentation with well-defined models that help to characterize and compare alternative organizational principles or architectures underlying adaptive behavior in both natural animals and synthetic animats. Bridget Hallam is Guest Researcher at the University of Southern Denmark. Dario Floreano is Professor of Evolutionary and Adaptive Systems at the Swiss Federal Institute of Technology. John Hallam and Gillian Hayes are Senior Lecturers in the Institute of Perception, Action, and Behavior at the University of Edinburgh. Hallam is also Guest Professor at the University of Southern Denmark. Jean-Arcady Meyer is Director of the AnimatLab at the Labortatoire d'Informatique de Paris 6. 8 1/2 x 11, 500 pp., paper, ISBN 0-262-58217-1 Complex Adaptive Systems series A Bradford Book ______________________ David Weininger Associate Publicist The MIT Press 5 Cambridge Center, 4th Floor Cambridge, MA 02142 617 253 2079 617 253 1709 fax http://mitpress.mit.edu From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Pages 397-428 Kerstin Dautenhahn, Bernard Ogden and Tom Quick http://www.sciencedirect.com/science/article/B6W6C-45JGW5T-1/1/4ebe8ecc97061de118607f87cbd9916c The physical symbol grounding problem, Pages 429-457 Paul Vogt http://www.sciencedirect.com/science/article/B6W6C-45JY928-2/1/9f933ec9575e320bd3e5df298a18f312 On the dynamics of robot exploration learning, Pages 459-470 Jun Tani and Jun Yamamoto http://www.sciencedirect.com/science/article/B6W6C-45NPDR9-1/1/30232225aeb29ea6f7daea13ed03ffa1 Simulating activities: Relating motives, deliberation, and attentive coordination, Pages 471-499 William J. Clancey http://www.sciencedirect.com/science/article/B6W6C-45JY928-1/1/a63c8d94790efa33f82067ac161aad88 Activity organization and knowledge construction during competitive interaction in table tennis, Pages 501-522 Carole Seve, Jacques Saury, Jacques Theureau and Marc Durand http://www.sciencedirect.com/science/article/B6W6C-45KSPCF-2/1/92cbefc3162526340a56011455284bbe Situatedness in translation studies, Pages 523-533 Hanna Risku http://www.sciencedirect.com/science/article/B6W6C-45HFF6Y-1/1/f294067d693ab4a387717992ba06dbc0 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Pages 535-554 Wolff-Michael Roth http://www.sciencedirect.com/science/article/B6W6C-45HWNG0-2/1/ead6e8b30a4465db2274827f692086ad =============================================================================== If you have questions about ScienceDirect, please locate your nearest Help Desk at http://www.info.sciencedirect.com/contacts. 
=============================================================================== See the following journal Web pages for subscription information for the journal Cognitive Systems Research: http://www.cecs.missouri.edu/~rsun/journal.html http://www.elsevier.com/locate/cogsys =================================================================== Professor Ron Sun, Ph.D CECS Department, 201 EBW phone: (573) 884-7662 University of Missouri-Columbia fax: (573) 882 8318 Columbia, MO 65211-2060 email: rsun at cecs.missouri.edu http://www.cecs.missouri.edu/~rsun =================================================================== From cmbishop at microsoft.com Tue Jun 6 06:52:25 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Postdoctoral Research Fellowship in Adaptive Computing DARWIN COLLEGE CAMBRIDGE Microsoft Research Fellowship The Governing Body of Darwin College Cambridge and Microsoft Research jointly invite applications for a stipendiary Research Fellowship supporting research in the field of adaptive computing (including topics such as pattern recognition, probabilistic inference, statistical learning theory and computer vision). Applicants should hold a PhD or should be expecting to have submitted their thesis prior to commencement of the Fellowship. The Fellowship will be tenable for two years commencing 1 October 2003 or on a date to be agreed. The successful candidate will work at the Microsoft Research Laboratory in Cambridge. Information about the laboratory is available from http://research.microsoft.com/cambridge/. Further details are available from the College website http://www.dar.cam.ac.uk or the Master's Secretary, Darwin College, Cambridge CB3 9EU. The closing date for applications is 10 January 2003. - The College follows an equal opportunities policy - Full information: DARWIN COLLEGE CAMBRIDGE Microsoft Research Fellowship The Governing Body of Darwin College Cambridge, and Microsoft Research Cambridge jointly invite applications for a stipendiary Research Fellowship supporting research in the field of adaptive computing (including topics such as pattern recognition, probabilistic inference, statistical learning theory and computer vision). Eligibility Men and women graduates of any university are eligible to apply, irrespective of age, provided they have a doctorate or an equivalent qualification, or expect to have submitted their thesis before taking up the Fellowship. Tenure The Fellowship will be tenable for two years commencing l October 2003 or on a date to be agreed. Duties The successful candidate will engage in research full-time at the Microsoft Research Laboratory in Cambridge. The Fellow will be a member of the Governing Body of Darwin College and will be subject to the Statutes and Ordinances of the College which may be seen on request to the Bursar. The Statutes include the obligation to reside in or near Cambridge for at least two-thirds of each University term, but the Governing Body will normally excuse absences made necessary by the nature of the research undertaken. Stipend and Emoluments The stipend will be dependent upon age and experience. Membership of the Universities' Superannuation Scheme is optional. In addition the Fellow will be able to take seven meals per week at the College table free of charge and additional meals at his or her own expense. 
Guests may be invited to all meals (within the limits of available accommodation), ten of them free of charge within any quarter of the year. College accommodation will be provided, subject to availability, or an accommodation allowance will be paid in lieu. In addition to a salary the Fellowship provides funding for conference participation. Applications Applications should reach the Master, Darwin College, Cambridge CB3 9EU by 10 January 2003. They should be typed and should include SIX copies of (1) a curriculum vitae, (2) an account, in not more than 1000 words, of the proposed research, including a brief statement of the aims and background to it, (3) the names and addresses of three referees (including telephone, fax and e-mail co-ordinates), WHO SHOULD BE ASKED TO WRITE AT ONCE DIRECT TO THE MASTER indicating the originality of the work and the candidate's scholarly potential, and (4) a list of published or unpublished work that would be available for submission if requested. Testimonials should not be sent. Short-listed candidates may be asked to make themselves available for interview at Darwin College on a date to be arranged in mid-March: election will be made as soon as possible thereafter. In certain circumstances travelling expenses for overseas interviewees may be covered. The College follows an equal opportunities policy From aweigend at amazon.com Tue Jun 6 06:52:25 2006 From: aweigend at amazon.com (Weigend, Andreas) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Do you want to build quantitative models millions of people will use, based on data from the world's largest online laboratory? Are you passionate about formulating relevant questions and producing solutions to initially ill-defined problems? Do the challenges and opportunities of terabytes of data excite you? Can you think abstractly and apply your ideas to the real world? Can you contribute to the big picture and are not afraid to handle the details? Amazon.com is solving incredibly interesting machine learning problems in areas ranging from pricing to personalization, from fraud detection to warehouse picking. Emphasizing measurement and analytics, we build and automate solutions that leverage the Web's scale and instant feedback. We are looking for people with the right blend of vision, intellectual curiosity, and hands-on skills, who want to be part of a highly visible, entrepreneurial team at company headquarters in Seattle. Ideal candidates will have a track record of creating innovative solutions, and typically a Ph.D. in computer science, physics, statistics, or electrical engineering. Significant research experience is desired in fields including active learning, probabilistic graphical models and Bayesian networks, data mining and visualization, Web search and information retrieval, judgment and decision making, consumer modeling, and behavioral economics. If this position excites you, please send your resume, clearly indicating your interests and strengths, to aweigend at amazon.com. Thank you. Andreas S. Weigend, Ph.D. | Chief Scientist, Amazon.com | +1 (917) 697-3800 | www.weigend.com=20 From cburges at microsoft.com Tue Jun 6 06:52:25 2006 From: cburges at microsoft.com (Chris J.C. Burges) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: I strongly support the idea of introducing double blind reviewing at NIPS. If bias exists it is insidious and corrosive. Further since reviewing is very subjective, detecting bias can be very hard. 
Worse, it can occur unconsciously. A close friend recently told me a story about his reviewing two similar papers, one from a group he liked and one from a group whose work he did not respect as much. He started with the paper from the second group, and half way through, he'd already formed strong negative opinions on the work. But then he was shocked to discover that the paper was in fact from the first group. He felt that the incident uncovered a bias in his reviewing of which he was not previously aware. Let's look at the objections, so far, to blind reviewing: John Lazzaro uses the example of Jan Hendrik Schon. John is proposing rejecting the paper due to the previous history of the author. This is exactly the kind of problem blind reviewing addresses. Suppose that Schon has mended his ways and his submission is actually ground breaking, high quality research. Do you want to reject it out of hand? No, you want an unbiased, peer reviewed assessment of it. The problem of vetoing a given authors work should be decided by the editors, based on past history, not by the reviewers - unless they themselves find fraud in the submission. Grace objects that writing a paper so as not to give a clue as to your identity distorts the paper. Also she points out that many authors put their papers on their home page, so digging up the authorship of the submission would be easy. Regarding both of these points: even with blind reviewing, authors can still leave a trail of bread crumbs as to their identity if they wish. No one is suggesting that they be forced to make their identity as hard as possible to discern. What is being suggested, is that a barrier be erected, so that bias in a review would have to be a much more conscious act that it is now. I don't have to put the paper on my home page if I feel that bias may exist. The only other objection so far is that blind reviewing is costly. But that cost is hugely reduced with electronic submissions. It need not be cumbersome any more. Also, coming up with examples of journals / conferences that do not do blind reviewing is not convincing; one can equally well come up with ones that do, e.g. ICCV, CHI, JASA (according to http://www.acm.org/sigmod/record/issues/0003/chair.pdf , about 20% of ACM sponsored conferences are double blind, so it can't be that hard). -- Chris Burges From cburges at microsoft.com Tue Jun 6 06:52:25 2006 From: cburges at microsoft.com (Chris J.C. Burges) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: I was wrong - double blind reviewing does require that all authors do their best to remain anonymous, in order to prevent positive (e.g. authors using their fame, or that of their institution, to get an easier review) as well as negative bias. However re. Grace's point that authors like to put their papers up on Web pages before publication - this year NIPS had a wonderful feature of making draft papers available electronically after they were accepted. So given this, authors would have to wait at most a couple of months. -- Chris Burges From wahba at stat.wisc.edu Tue Jun 6 06:52:25 2006 From: wahba at stat.wisc.edu (Grace Wahba) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NIPS & double blind reviewing Message-ID: > > Have you ever tried to write a paper without giving any > clue to your identity? ("In xxx we proved yyy and in this > paper we extend those results"). > It can seriously distort the > paper. Furthermore, many (most?) 
people submitting to > NIPS put their paper on their home page and even circulate > it on this list, so a reviewer would have no trouble > finding out who the author was by using, for instance, > google. I fail to see any positives to blind reviewing > and a lot of negatives. > From Barak Tue Jun 6 06:52:25 2006 From: Barak (Barak) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NIPS & double blind reviewing Message-ID: I've heard three objections to blinded reviews. To my mind, none of them quite hold water. OBJECTION 1: It is hard to conceal the authors' identity against the industrious/perceptive/clueful reviewer. Sometimes clues are unavoidable. WHY IT DOESN'T HOLD WATER: So what? In that case blinding isn't any different from the current situation, so why are you objecting? Not all reviewers have these abilities, so blinding will work completely on them. Besides, even the most perceptive reviewer won't figure it out for all papers, only for some. And even when they think they've figured it out, being 80% sure of the author is, psychologically, very different from being 100% sure. Plus, starting an active search for the author's identity might give a reviewer pause ... OBJECTION 2: Sometimes the reviewer actually needs to know the author, eg for theory papers where whether a proof sketch is believable depends on the author. WHY IT DOESN'T HOLD WATER: Err, really? Well, if the reviewer feels themselves to be in that situation, they can either say so in the review, or ask the program committee for the author's name with a brief explanation as to why. It certainly seems healthy, particularly in this (surely quite rare, and therefore low amortized overhead) situation, to have the first pass through the paper be blind! OBJECTION 3: The author might be a well known plagiarist/crackpot/liar. WHY IT DOESN'T HOLD WATER: This is the program committee's job. Anyway it would be easy enough to reveal the authors' names to the reviewers *after* they have their reviews in, so they can bring such an extraordinary situation to the program committee's attention. From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: REASON * November 1997 Orchestral Maneuvers By Nick Gillespie A recent study from the National Bureau of Economic Research applies the concept of a level playing field to the symphonic stage. In "Orchestrating Impartiality," economists Claudia Goldin and Cecelia Rouse demonstrate that female orchestra musicians have benefitted hugely from the use of "blind" auditions, in which candidates perform out of the sight of evaluators. In 1970 female musicians made up only 5 percent of players in the country's top orchestras... But beginning in the '70s and '80s, more and more of the orchestras switched to blind auditions, partly to avoid charges of such bias. Female musicians currently make up 25 percent of the "Big Five." Through an analysis of orchestral management files and audition records, Goldin and Rouse conclude that blind auditions increased by 50 percent the probability that a woman would make it out of early rounds. And, they say, the procedure explains between 25 percent and 46 percent of the increase in women in orchestras from 1970 to 1996. 
From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: For years, researchers have used the theoretical tools of engineering to understand neural systems, but much of this work has been conducted in relative isolation. In Neural Engineering, Chris Eliasmith and Charles Anderson provide a synthesis of the disparate approaches current in computational neuroscience, incorporating ideas from neural coding, neural computation, physiology, communications theory, control theory, dynamics, and probability theory. This synthesis, they argue, enables novel theoretical and practical insights into the functioning of neural systems. Such insights are pertinent to experimental and computational neuroscientists and to engineers, physicists, and computer scientists interested in how their quantitative tools relate to the brain. The authors present three principles of neural engineering based on the representation of signals by neural ensembles, transformations of these representations through neuronal coupling weights, and the integration of control theory and neural dynamics. Through detailed examples and in-depth discussion, they make the case that these guiding principles constitute a useful theory for generating large-scale models of neurobiological function. A software package written in MatLab for use with their methodology, as well as examples, course notes, exercises, documentation, and other material, are available on the Web. "In this brilliant volume, Eliasmith and Anderson present a novel theoretical framework for understanding the functional organization and operation of nervous systems, from the cellular level to the level of large-scale networks" John P. Miller, Center for Computational Biology, University of Montana "This book represents a significant advance in computational neuroscience. Eliasmith and Anderson have developed an elegant framework for understanding representation, computation, and dynamics in neurobiological systems. The book is beautifully written, and it should be accessible to a wide variety of readers." Bruno A. Olshausen, Center for Neuroscience, University of California, Davis "From principle component analysis to Kalman filters, information theory to attractor dynamics, this book is a brilliant introduction to the mathematical and engineering methods used to analyze neural function." Leif Finkel, Neuroengineering Research Laboratories, University of Pennsylvania http://mitpress.mit.edu/catalog/item/default.asp?sid=29DD45EE-EFE7-4C7F-BCF3 -40C91D6B2635&ttype=2&tid=9538 From cburges at microsoft.com Tue Jun 6 06:52:25 2006 From: cburges at microsoft.com (Chris J.C. Burges) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Sue Becker writes: >> Two of the key factors NIPS reviewers are asked to comment >> on are a paper's significance and originality. Very often >> work is submitted to NIPS that is only a marginal >> advancement over the author's previous work, or worse yet, >> the same paper has already appeared at another conference or >> in a journal. In the course of reviewing for NIPS I have >> often looked at an author's web page, past NIPS proceedings >> etc to assess the closeness to the author's previously >> published work. Double-blind reviewing would make it much >> more difficult to detect this sort of thing. 
To me, this is the first compelling argument against double blind reviewing put forward in the debate so far (it is not in Dale Schuurmans' list). However, I think the issue Sue raises can be addressed as follows. Require that, if authors have closely related work that has been published or submitted elsewhere, they send in a copy of the single closest work to that submitted to NIPS, together with a VERY brief description of how the NIPS submission is different. The session chair (not the reviewer) then incorporates this into his/her decision. If an author abuses this trust, a penalty can be applied, much as the IEEE applies a (severe) penalty in similar circumstances (immediate rejection, immediate withdrawal of all submitted manuscripts by any of the authors, and prohibitions against all of the authors in any IEEE publication for one year: see e.g. http://www.ieee.org/organizations/society/sp/infotsa.html ). Yes, this requires a bit more effort on the session chair's part (although only some submissions will need to do this). But actually whether or not double blind reviewing is adopted, I think this is a separate issue, and maybe a good idea for NIPS anyway. In previous years, NIPS encouraged submission of work that had appeared in part elsewhere, provided it would be new and interesting to the NIPS community. This year a different policy was adopted, requiring stricter originality, and perhaps it will need some enforcement policy for it to work. After all, regardless of the blind reviewing issue, Sue's method - checking up on the author's web page - won't work for people who don't have web pages or who do not put recently submitted material on their web page (as many people don't). -- Chris Burges From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: For posting in the connectionists mailing list, thank you. ----------------------------------------------------------------------------- Research Positions in Bioinformatics Several research associate (RA) and research fellow (RF) positions are available at the newly formed BioInformatics Research Centre (BIRC), Nanyang Technological University, Singapore. The current research projects at BIRC are in the areas of · Comparative genomics · Gene expression data analysis · Protein structure prediction · Neuroinformatics An M.Sc. or Ph.D. degree in a related field is required for the positions. Salary ranges from S$3,000-5,000 per month, depending on qualifications. Interested candidates should email their CVs to BIRC (birc at ntu.edu.sg), indicating their interest. Preference shall be given to those having experience in the above areas. Only selected candidates will be asked to submit formal applications. Sincerely, -- Jagath C. Rajapakse, Ph.D., SrMIEEE Deputy Director, BioInformatics Research Centre (BIRC) Associate Professor, School of Computer Engineering Nanyang Technological University Block N4, 2a-32 Nanyang Avenue Singapore 639798 Phone: +65 67905802; Fax: +65 67926559 Email: asjagath at ntu.edu.sg URL: http://www.ntu.edu.sg/home/asjagath/home.htm From josephsirosh at fairisaac.com Tue Jun 6 06:52:25 2006 From: josephsirosh at fairisaac.com (Sirosh, Joseph (Joe)) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Fair Isaac (NYSE: FIC) is cultivating a center of excellence in analytics covering the broad fields of machine learning, statistics and artificial intelligence.
We have several open positions for highly talented scientists in these fields in our Advanced Technologies (AT) unit based in San Diego. AT performs research into frontier applications of machine learning and artificial intelligence in predictive analytics and scoring, intelligent agents, information retrieval, bioinformatics, video analysis, natural language question answering, uncertain reasoning, and various applications of statistical pattern recognition. Interested candidates must have an MS/Ph.D. in Computer Science, Engineering, Mathematics, Physics, or Statistics and a strong background in machine learning and demonstrable past successes. Three years' experience with industry applications is desired. Excellent oral and written communication skills required. Compensation based on achievement, seniority & experience. Fair Isaac is the preeminent provider of creative analytic applications. We offer attractive compensation packages including stock options, stock purchase plans, 401(k), medical & other benefits. Website: http://www.fairisaac.com; e-mail: dawnridz at fairisaac.com. Address: Fair Isaac & Company, 5935 Cornerstone Ct. West, San Diego, CA 92121. FAX: 858-799-8062. Please reference job posting number 1821 or 1824. ============================================== Joseph Sirosh, PhD Fair Isaac & Company Vice President 5935 Cornerstone Court W Advanced Technology San Diego, CA 92121 Phone: (858) 799 8320 Main: (858) 799 8000 Fax: (858) 799 2850 http://www.fairisaac.com From Peter.Andras at newcastle.ac.uk Tue Jun 6 06:52:25 2006 From: Peter.Andras at newcastle.ac.uk (Peter Andras) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Research Associate, School of Computing Science, £18,265 - £27,339 Medical Research Council funded postdoctoral position based in the School of Computing Science, University of Newcastle, UK. A postdoctoral research associate is required to work on the development of knowledge discovery and knowledge management tools and applications for GRID-enabled neuroinformatics. The candidates should have a PhD in computer science, neuroinformatics, neuroscience, or related areas, and good knowledge and experience of object-oriented software design and development (e.g., Java, C/C++). Experience in any of the following areas is beneficial: using artificial intelligence methods (e.g., neural networks, text and data mining), working with web-databases (e.g., neuroscience databases), developing distributed systems (e.g., distributed databases). The post is for up to three years. The salary depends on experience and it is on the RA1A scale range: £18,265 - £27,339. For further enquiries e-mail Dr Peter Andras at peter.andras at ncl.ac.uk. Applications including an application form (download from the web-site), a CV, and names and addresses of two referees should be sent to Mrs A. Jackson, School of Computing Science, Claremont Tower, Claremont Road, Newcastle upon Tyne NE1 7RU, or by email to: Anke.Jackson at ncl.ac.uk. Closing date is 24 February 2003. Job reference: D520R Web: http://www.ncl.ac.uk/vacancies/vacancy.phtml?ref=D520R ----------------- Dr Peter Andras Lecturer Claremont Tower School of Computing Science University of Newcastle Newcastle upon Tyne NE1 7RU UK Tel. +44-191-2227946 Fax.
+44-191-2228232 Web: www.staff.ncl.ac.uk/peter.andras From P.Culverhouse at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: P.Culverhouse at plymouth.ac.uk (Phil Culverhouse) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: [My sincere apologies if you receive multiple copies of this email] DEPT OF COMMUNICATION & ELECTRONIC ENGINEERING, University of Plymouth Ref: HAB-buoy/TECH RESEARCH ASSISTANT/FELLOW Salary from £17,624 pa - RA/RF scale An exciting 24-MONTH post is IMMEDIATELY available for a Vision Scientist/Engineer. You will assist the development and integration of a neural network-based natural object categoriser for field and laboratory use. The existing prototype (Windows platform) is capable of categorising 23 species of marine plankton, but has to be further developed and a user interface tailored to Marine Ecologists for real-time operation. You should have a working knowledge of neural networks and current machine vision techniques. Familiarity with visual perception and multi-dimensional clustering statistics would be valuable. You should ideally be familiar with Windows operating systems as well as being a C++ programmer. The POST IS AVAILABLE February/March and will involve some European travel. For informal enquiries regarding this post, please contact Dr P Culverhouse on +44 (0) 1752 233517 or email: pculverhouse at plymouth.ac.uk 16th January 2003 From R.Roy at cranfield.ac.uk Tue Jun 6 06:52:25 2006 From: R.Roy at cranfield.ac.uk (Roy, Rajkumar) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: -------------------------------------------- LAST DATE for APPLICATION: 15th February 2003 ! -------------------------------------------- RESEARCH OPPORTUNITIES at Cranfield ----------------------------------- INDUSTRY CASE Studentship (EPSRC) Title: Customer Characterisation for Decision Engineering Industrial Sponsor: BT Exact Duration: March 2003 - March 2006 There is a FULLY FUNDED PhD Studentship (Industry CASE) in the above-mentioned area. Cranfield University is actively involved with a number of companies to research in the areas of Decision Engineering. This research will be an extension of existing work in the Applied Soft Computing area. Soft Computing is a computing paradigm to handle real-life complexities such as imprecision. The paradigm utilises a combination of techniques including Fuzzy Logic, Neural Networks and Evolutionary Computing to address the challenge. The research will investigate different soft computing techniques to characterise customer behaviour and preference within a Contact Centre Environment. The project will involve close collaboration with BT Exact as the industrial sponsor. This is a pan-industry project, where the student is expected to develop generic tools and techniques to analyse data from different industrial contexts. The research will involve data analysis, classification and presentation to improve the efficiency of the Contact Centres. The tools developed in the project will integrate with the existing Contact Centre Environment to provide real-time decision support to the staff working at the Centre. EPSRC is expected to pay tuition fees to Cranfield. The student would receive around 11K pounds sterling tax-free per annum for the three years. Interested graduate/postgraduate students with a computing/engineering background are invited to submit their CV for an informal discussion over telephone or email. Additional background in data analysis and Soft Computing will be beneficial.
The minimum academic requirement for entrants to the degree is an upper second class honours degree or its equivalent. Please note that the funding is restricted to British Nationals, in special cases it may be offered to an EC national. For informal enquiries and application (detailed CV), please contact: Dr. Rajkumar Roy at your earliest: Dr. Rajkumar Roy Senior Lecturer and Course Director, IT for Product Realisation Department of Enterprise Integration, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedford, MK43 0AL, United Kingdom. Tel: +44 (0)1234 754072 or +44 (0)1234 750111 Ext. 2423 Fax: +44 (0)1234 750852 Email: r.roy at cranfield.ac.uk or r.roy at ieee.org URL: http://www.cranfield.ac.uk/sims/staff/royr.htm http://www.cranfield.ac.uk/sims/cim/people/roy.htm -------------------------------------------- LAST DATE for APPLICATION: 15th February 2003 -------------------------------------------- From M.Denham at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Postdoctoral Research Fellowship Centre for Theoretical and Computational Neuroscience University of Plymouth, UK A Postdoctoral Research Fellowship is available for candidates who have just completed or are about to complete a PhD in a suitable area of study, to carry out research within the Centre for Theoretical and Computational Neuroscience. The Fellowship will be for two years initially, at a salary level on the University's scales commensurate with experience and age. The Centre specialises in the application of rigorous quantitative, mathematical and physical approaches, including mathematical and computational modelling and psychophysics, to understanding information coding, processing, storage and transmission in the brain and its manifestation in perception and action. Areas of study include: visual and auditory perception and psychophysics; sensory-motor control, in particular oculomotor control; and mathematical and computational modelling of the cortical neural circuitry underlying perception, attention, learning and memory, and motor control. The appointed Research Fellow will work under the supervision of one of the following academic staff in the Centre: Prof Jochen Braun (vision); Dr Susan Denham (audition); Prof Chris Harris (sensory-motor control); Prof Roman Borisyuk (mathematical and computational modelling); Prof Mike Denham (mathematical and computational modelling). The Centre for Theoretical and Computational Neuroscience is a new research centre in the University of Plymouth, emerging from the previous Centre for Neural and Adaptive Systems (http://www.tech.plym.ac.uk/soc/research/neural/research.html), where the home pages of the above academic staff can be found (the new centre's website is currently under construction). There is currently a thriving community of five postdocs and ten research students in the Centre, working in the above fields. The Centre has a number of externally-funded research programmes and strong international links, including with the Institute of Neuroinformatics at ETH, Zurich, and the Koch laboratory at Caltech. The Centre will be located from April 2003 in a brand new building complex on the University campus which will also house the departments of Computing, Psychology and Biological Sciences and part of the new Medical School. 
The new self-contained accommodation for the Centre will include office space for all academic staff, postdocs and research students, a library and meeting room, a 40-seater seminar room, and vision, audition and sensory-motor psychophysics labs. The University of Plymouth is one of the largest UK universities, with about 25,000 students, some 16,000 of which are accommodated in the city centre campus in Plymouth. It is located in a beautiful part of the southwest of England, close to outstanding countryside, moorland, river estuaries, historical towns and villages and excellent beaches, including some of the best surfing beaches in Europe. It also offers extensive water sports facilities, including diving and sailing. Interested applicants for this Research Fellowship should in the first instance send an email to the Head of the Centre, Professor Mike Denham (mdenham at plym.ac.uk), including a brief statement of research interests and a short curriculum vitae, plus postal address. Applicants will then be sent a formal application form. Note: The closing date for applications for the Research Fellowship is 31st March 2003. Professor Mike Denham Centre for Theoretical and Compuational Neuroscience University of Plymouth Plymouth PL4 8AA UK tel: +44 (0)1752 232547 fax: +44 (0)1752 232540 email: mdenham at plym.ac.uk From M.Denham at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Research Scholarships Centre for Theoretical and Computational Neuroscience University of Plymouth, UK A number of University Research Scholarships are available for students wishing to study for a PhD in the Centre starting in September/October 2003. The scholarships cover full tuition fees for three years plus an annual living-expenses stipend of 9000. Further support for living expenses is usually available via teaching assistantships. The Centre specialises in the application of rigorous quantitative, mathematical and physical approaches, including mathematical and computational modelling and psychophysics, to understanding information coding, processing, storage and transmission in the brain and its manifestation in perception and action. Areas of study include: visual and auditory perception and psychophysics; sensory-motor control, in particular oculomotor control; and mathematical and computational modelling of the cortical neural circuitry underlying perception, attention, learning and memory, and motor control. PhD students will work under the supervision of one of the following academic staff in the Centre: Prof Jochen Braun (vision); Dr Susan Denham (audition); Prof Chris Harris (sensory-motor control); Prof Roman Borisyuk (mathematical and computational modelling); Prof Mike Denham (mathematical and computational modelling). The Centre for Theoretical and Computational Neuroscience is a new research centre in the University of Plymouth, emerging from the previous Centre for Neural and Adaptive Systems (http://www.tech.plym.ac.uk/soc/research/neural/research.html), where the home pages of the above academic staff can be found (the new centre's website is currently under construction). There is currently a thriving community of five postdocs and ten research students in the Centre, working in the above fields. 
The Centre has a number of externally-funded research programmes and strong international links, including with the Institute of Neuroinformatics at ETH, Zurich, and the Koch laboratory at Caltech. The Centre will be located from April 2003 in a brand new building complex on the University campus which will also house the departments of Computing, Psychology and Biological Sciences and part of the new Medical School. The new self-contained accommodation for the Centre will include office space for all academic staff, postdocs and research students, a library and meeting room, a 40-seater seminar room, and vision, audition and sensory-motor psychophysics labs. The University of Plymouth is one of the largest UK universities, with about 25,000 students, some 16,000 of which are accommodated in the city centre campus in Plymouth. It is located in a beautiful part of the southwest of England, close to outstanding countryside, moorland, river estuaries, historical towns and villages and excellent beaches, including some of the best surfing beaches in Europe. It also offers extensive water sports facilities, including diving and sailing. Interested applicants for these Research Scholarships must first make application to the University and to the Centre for admission to its PhD programme. Initially this can be done by sending an email to the Head of the Centre, Professor Mike Denham (mdenham at plym.ac.uk), including a brief statement of research interests and a short curriculum vitae, plus postal address. Applicants will then be sent formal admission application forms. Note: The closing date for University Scholarship applications is 31st March 2003. Applications for admission to the University's PhD programme should be made well in advance of this date, ideally by the end of February. Professor Mike Denham Centre for Theoretical and Compuational Neuroscience University of Plymouth Plymouth PL4 8AA UK tel: +44 (0)1752 232547 fax: +44 (0)1752 232540 email: mdenham at plym.ac.uk From cmbishop at microsoft.com Tue Jun 6 06:52:25 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Ninth International Workshop on Artificial Intelligence and Statistics January 3-6, 2003, Hyatt Hotel, Key West, Florida Electronic proceedings of this workshop are available on-line at: http://research.microsoft.com/conferences/aistats2003/proceedings=20 These proceedings include all contributed papers in both Postscript and PDF format, together with the viewgraphs from the invited speakers in PDF format. Chris Bishop Brendan Frey (workshop organisers) From UE001861 at guest.telecomitalia.it Tue Jun 6 06:52:25 2006 From: UE001861 at guest.telecomitalia.it (Corsini Filippo) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: *********************************************************************** First European School on Neuroengineering "Massimo Grattarola" Venice 16-20 June 2003 Telecom Italia Learning Services (TILS), and the University of Genoa (DIBE, DIST, Bioengineering course) are currently organizing the first edition an European Summer School on Neuroengineering. The school will be entitled to Massimo Grattarola. The first edition , which will last for five days, will be held from June 16 to June 20, 2003 at Telecom Italia's Future Center in Venice. The School will cover the following main themes: 1. 
Neural code and plasticity o Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems o Cortical computational paradigms for perception and action 2. Brain-like adaptive information processing systems o Development and implementation of methods to identify, represent and analyze hierarchical and self-organizing systems o Models of learning, representation and adaptability based on knowledge of the nervous system. o Exploration of the capabilities of natural neurobiological systems as flexible computational devices. o Use of information from nervous systems to engineer new control techniques and new artificial systems. o Development of highly innovative Artificial Neural Networks capable of reproducing the functioning of vertebrate nervous systems. 3. Bio-artificial systems o Development of novel brain-computer interfaces o Development of new techniques for neuro-rehabilitation and neuro-prostheses o Hybrid silicon/biological systems Speakers: Abbruzzese G. Non-invasive exploration of human cerebral cortex by transcranial magnetic stimulation Benfenati F. Molecular dissection of neurotransmitter release mechanisms: a key to understand short-term memory processes Torre V., Ruaro M. E., Bonifazi P. Towards the neurocomputer: image processing and learning with neuronal cultures Aleksander I. Digital Neuromodelling Based on the Architecture of the Brain: Basics and Applications Destexhe A. The stochastic integrative properties of neocortical neurons in vivo Gielen S. Quantitative models to explain the degree-of-freedom problem in motor control Le Masson G. Cyborgs : from fiction to science Rutten W. Neuro-electronic interface engineering and neuronal network learning Van Pelt J. Computational and experimental approaches in neuronal morphogenesis and network formation Mussa-Ivaldi F. The Engineering of motor learning and adaptive control Sandini G. Cognitive Development in Robot Cubs Morasso P. Motor control of unstable tasks Two Practical Laboratory: Davide F., Stillo G, Morabito F. Chosen exempla of neuronal signals decoding, analysis and features extractions Renaud-Le Masson Bioengineering: on-silicon solutions for biologically realistic artificial neural networks (state-of-the-art and development perspectives for integrated ANN Scientific Board: Sun -ichi Amari RIKKEN, Brain Science Institute Laboratory for Mathematical Neuroscience, Japan Fabrizio Davide Telecom Italia Learning Services, Italy Walter J. Freeman University of Berkeley, USA Stephen Grossberg University of Boston, USA Milena Koudelka-Hep IMT, Institute of Microtechnology, University of Neuchatel, Switzerland Gwendal Le Masson INSERM, French Institute of Health and Medical Research, France Sergio Martinoia DIBE, Unversity of Genoa, Italy Pietro Morasso DIST, University of Genoa, Italy Sylvie RENAUD-LE MASSON ENSEIRB - IXL Bordeaux, France Wim Rutten University of Twente, Netherlands Japp Van Pelt Netherlands Institute for Brain Research, Netherlands LOCATION Future Center, Telecom Italia Lab San Marco, 4826 - Campo San Salvador 30124 Venezia Tel. 041 5213 223 Fax 011 228 8228 Italy http://fc.telecomitalialab.com/ CONTACTS Filippo Corsini Telecom Italia- Learning Services S.p.A. 
Business Development Viale Parco de' Medici, 61, 00148 Roma Italy tel: +39.06.368.70402 fax: +39.06.368.80101 e-mail: UE001861 at guest.telecomitalia.it CREDITS In response to current reforms in training, at the Italian and European levels, the Summer School on Neuroengineering is certified to grant credits. These include: ECM (Educazione Continua in Medicina) credits recognized by the Italian Ministry of Health ECTS (European Credit Transfer System) credits recognized by all European Universities Registration Fee*: Undergraduate and graduate: 100 EUR PhD students and young researcher: 250 EUR Business and medical professional: 500 EUR * The School registration fee includes the meeting program, reception, two coffee breaks and a lunch each day, and meeting proceedings. For informations you can contact: Filippo Corsini ue001861 at guest.telecomitalia.it All the informations will be avaliable on the website www.neuroengineering.org (under construction) _________________________________________________ Dr. Filippo Corsini Telecom Italia- Learning Services S.p.A. Business Development Viale Parco de' Medici, 61, 00148 Roma Italy tel: +39.06.368.70402 fax: +39.06.368.80101 e-mail: UE001861 at guest.telecomitalia.it _________________________________________________ This e-mail, including any attachments, may contain private or confidential information. If you think you may not be the intended recipient, or if you have received this e-mail in error, please contact the sender immediately and delete all copies of this e-mail. If you are not the intended recipient, you must not reproduce any part of this e-mail or disclose its contents to any other party. From christiane.debono at epfl.ch Tue Jun 6 06:52:25 2006 From: christiane.debono at epfl.ch (Christiane Debono) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Dear Colleague, An update of faculty positions , group leader positions, Potsdocs and PhD positions open at the new Brain Mind Institute at the EPFL in Lausanne is now available in this second call for applications. Please pass this email on to anyone that may be interested in any of the positions. Nominations of candidates are also welcome. Thank you. Yours, Henry Markram From cmbishop at microsoft.com Tue Jun 6 06:52:25 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Postdoctoral Research Positions at Microsoft Research Cambridge Computer vision, machine learning, and information retrieval Applications are invited for postdoctoral research positions at Microsoft Research Cambridge (MSRC) in the fields of computer vision, machine learning and information retrieval. These positions are for just under two years starting from a mutually agreeable date, generally no later than 1 January 2004. Applicants must have completed the requirements for a PhD, including submission of their thesis, prior to joining MSRC. Postdoctoral researchers receive a competitive salary, together with a benefits package, and will be eligible for relocation expenses. MSRC is Microsoft's European research laboratory, and is housed in a brand new purpose-designed building on Cambridge University's West Cambridge site, adjacent to the Computer Science and Physics departments, and close to the Mathematics departments and to the centre of town. 
It currently employs 65 researchers of many different nationalities working in a broad range of areas including computer vision, machine learning, information retrieval, hardware devices, programming languages, security, systems, networking and distributed computing. MSRC provides a vibrant research environment with an open publications policy and with close links to Cambridge University and many other academic institutions across Europe. Further information about the lab can be found at: http://www.research.microsoft.com/aboutmsr/labs/cambridge/ The closing date for applications is 9 May 2003. To apply please send a full CV (including a list of publications) in PDF, Postscript or Word format, together with the names and contact details for 3 referees, to: cambhr at microsoft.com with the subject line "Application for postdoctoral research position".=A0 =A0 From silvia at sa.infn.it Tue Jun 6 06:52:25 2006 From: silvia at sa.infn.it (Silvia Scarpetta) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: ------------------------------------------------------ NEW DEADLINE 10 June 2003 ------------------------------------------------------ International School on Neural Nets "E.R. Caianiello" 8th Course: Computational Neuroscience: CORTICAL DYNAMICS 31 Oct.- 6 Nov. 2003 Ettore Majorana Centre Erice (Sicily) ITALY Jointly organized by IIASS International Institute for Advanced Scientific Studies and EMFCSC Ettore Majorana Foundation and Center for Scientific Culture Course homepage: http://www.sa.infn.it/NeuralGroup/CorticalDynamicsSchool2003/ *Directors of the Course:* Maria Marinaro (Dept. of Physics "E.R. Caianiello", Univ. of Salerno, Italy) Peter Erdi (Kalamazoo College, USA & KFKI Res. Inst. Part. and Nucl. Phys. Hung. Acad. Sci. Hungary) *Confirmed Lecturers:* Luigi Agnati - Dept.of Neurosc. Karolinka Inst.Sweden & Modena Univ.Italy Peter Dayan - Gatsby Computational Neuroscience Unit, UCL, UK Peter Erdi - CCSS Kalamazoo College USA & KFKI Hung. Accad.of Science Hungary - Codirector Bruce P Graham - Dept.of Computer Science and Mathem., Univ. of Stirling UK John Hertz - Nordita, DK Zhaoping Li - Univ. College of London, UK Ronen Segev - School of Physics and Astronomy, Tel Aviv University, Israel Ivan Soltesz - Dept. of Anatomy and Neurobiology, Univ. of California, USA Misha Tsodyks - Dept. of Neurobiology Weizmann Institute of Science, Israel Ichiro Tsuda - Dept. of Mathematics, Hokkaido University, Japan Alessandro Treves - Sissa, Cognitive Neuroscience, Trieste, It Fortunato Tito Arecchi - University of Firenze and INOA, Italy Laszlo Zaborszky -Center Mol.& Behav.Neurosc.,Rutgers Univer.,New Jersey Hiroshi Fujii - Dept.of Infor.&Communic. Sciences -Kyoto Sangyo Univer.Japan **Purpose of the Course:** The School is devoted to people from different scientific background (including physics, neuroscience, mathematics and biology) who want to learn about recent developments in computational neuroscience and cortical dynamics. The basic concepts will be introduced, with emphasis on common principles. Cortical dynamics play an important role in important functions such as those related to memory, sensory processing and motor control. A systematic description of cortical organization and computational models of the cortex will be given, with emphasis on connections between experimental evidence and biologically-based as well as more abstract models. 
The Course is organized as a series of lectures complemented by short seminars that will focus on recent developments and open problems. We also aim to promote a relaxed atmosphere which will encourage informal interactions between all participants and hopefully will lead to new professional relationships which will last beyond the School. **Registrations:** Applications must be received before June 10 2003 in order to be considered by the selection committee. Registration fee of 900 Euro includes accomodation with full board. Application form and additional information are available from http://www.sa.infn.it/NeuralGroup/CorticalDynamicsSchool2003/ Applications should be sent by ordinary mail to the codirector of the school: Prof. Maria Marinaro IIASS Via Pellegrino 19, I-84019 Vietri sul Mare (Sa) Italy or by fax to: +39 089 761 189 (att.ne: Prof. M. Marinaro) or by electronic mail to: iiass.vietri at tin.it subject: summer school **Location** The "Ettore Majorana" International Centre for Scientific Culture takes its inspiration from the outstanding Italian physicist, after whom the Centre was named. Embracing 110 Schools, covering all branches of Science, the Centre is situated in the old pre-mediaeval city of Erice where three restored monasteries provide an appropriate setting for high intellectual endeavour. These monasteries are now named after great Scientists and strong supporters of the "Ettore Majorana" Centre. There are living quarters in all three Monasteries for people attending the Courses of the Centre Please visit: http://www.sa.infn.it/NeuralGroup/CorticalDynamicsSchool2003/ From aburkitt at bionicear.org Tue Jun 6 06:52:25 2006 From: aburkitt at bionicear.org (Anthony BURKITT) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: The following paper on neuronal gain in the leaky integrate-and-fire neuron with conductance synapses has been published by Biological Cybernetics and is now available online: http://link.springer.de/link/service/journals/00422/first/bibs/s00422-003-0408-8.htm "Study of neuronal gain in a conductance-based leaky integrate-and-fire neuron model with balanced excitatory and inhibitory synaptic input" A. N. Burkitt, H. Meffin, and D. B. Grayden Abstract: Neurons receive a continual stream of excitatory and inhibitory synaptic inputs. A conductance-based neuron model is used to investigate how the balanced component of this input modulates the amplitude of neuronal responses. The output spiking-rate is well-described by a formula involving three parameters: the mean $\mu$ and variance $\sigma$ of the membrane potential and the effective membrane time constant $\tau_{\mbox{\tiny Q}}$. This expression shows that, for sufficiently small $\tau_{\mbox{\tiny Q}}$, the level of balanced excitatory-inhibitory input has a non-linear modulatory effect on the neuronal gain. A copy is also available from my web page: http://www.medoto.unimelb.edu.au/people/burkitta/Burkitt_BC_2003.pdf Tony Burkitt ====================ooOOOoo==================== Anthony N. 
Burkitt The Bionic Ear Institute 384-388 Albert Street East Melbourne, VIC 3002 Australia Email: a.burkitt at medoto.unimelb.edu.au http://www.medoto.unimelb.edu.au/people/burkitta Phone: +61 - 3 - 9663 4453 Fax: +61 - 3 - 9667 7518 =====================ooOOOoo=================== From kamps at fsw.LeidenUniv.nl Tue Jun 6 06:52:25 2006 From: kamps at fsw.LeidenUniv.nl (kamps) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: NeuroIT.net workshop, July 8th, Alicante Message-ID: <3EC1170A@webmail2.fsw.LeidenUniv.nl> The NeuroIT.net workshop is organized as a sattelite workshop of the CNS2003 conference in Alicante. It is held on July 8th. People who are visting the CNS conference are also welcome to attend the workshop. There is no need to register and access to the workshop is free. As the programme shows, the workshop focusses on applications of concepts of neuroscience in IT and engineering. ----------------------------------------------------------------------------- NeuroIT.net workshop in Alicante Location: University campus (Universidad Miguel Hernández, Campus de San Juan de Alicante, Carretera de Valencia N-332 s/n) (on the floor above the smaller rooms that will be hosting in parallel various workshops of the CNS*03 meeting. Transportation by bus will be provided.) Introduction 10.00 - 10.10 Introduction EU (Pekka Karp) 10.10 - 10.30 Neuro-IT.net (Alois Knoll) Project presentations 10.30 - 10.50 AMOTH A fleet of articifical chemosensing moths for distributed environmental monitoring. 10.50 - 11.10 NEUROBIT A bioartificial brain with an artificial body: training a cultured neural tissue to support the purposivebehavior of an artificial body 11.10 - 11.30 APEREST APEREST will develop a coding and representation scheme of perceptual information based on chaotic dynamics and involving collection of data from animal brain recordings. 11.30 - 11.50 break 11.50 - 12.10 BIOLOCH BIOLOCH aims at understanding of perception and locomotion of animals moving in wet and slippery areas, e.g. the gut or wetlands. 12.10 - 12.30 CYBERHAND CYBERHAND and ROSANA cover similar areas of problems related to the construction of neuroprostheses. ROSANA focuses on different ways of stimulating sensorial receptors equivalent to natural stimuli and studying the representation of such stimuli in the central nervous system. CYBERHAND aims at the construction of an artificial hand capable of producing a natural feeling of touch and grip. Key not for Brain Research) Computational and Experimental Approaches in Neuronal Morphogenesis and Network formation Siesta break Introduction 17.00 - 17.10 announcements Project presentations 17.10 - 17.30 CIRCE CIRCE aims at constructing a miniature bat head for active bio-inspired echolocation. 17.30 - 17.50 CICADA CICADA studies the mechanoreceptor hairs of a cricket and itsresponse to predators for constructing bio-inspired MEMS devices. 17.50 - 18.10 ROSANA See Cyberhand 18.10 - 18.30 MIRRORBOT MirrorBot studies biomimetic multimodal learning in a mirror neuron-based robot. 18.30 - 18.50 SENSEMAKER SENSEMAKER aims at integration and unified representations ofmultisensory information. 18.50 - 19.10 SPIKEFORCE SpikeFORCE will develop with real-time spiking networks for robot control based on a model of the cerebellum. 
Key note speaker 19.10 - 19.45 Eduardo Fernandez (Universidad Miguel Hernández) Designing a Brain-Machine interface for direct communication with visual cortex neurons Closure 19.45 - 19.55 From esalinas at wfubmc.edu Tue Jun 6 06:52:25 2006 From: esalinas at wfubmc.edu (Emilio Salinas) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Postdoctoral position available in Theoretical Neuroscience The project investigates the dynamics of neural networks in which units may affect each other's gain. The successful candidate is expected to develop models of neuronal circuits within this broad framework (see Neural Computation 16(7):1439, 2003). Applicants should be interested in quantitative approaches to Neuroscience and should have, or be near completing, a PhD in a relevant discipline - Neuroscience, Physics, Math, etc. The position is for one to three years, with salary starting at $32k and going upward depending on experience. Applicants should email a CV, the names and email addresses of three references, and a description of their research background and interests to Emilio Salinas esalinas at wfubmc.edu Department of Neurobiology and Anatomy Wake Forest University Health Sciences Winston-Salem NC 27157 Affirmative Action/Equal Opportunity Employer. From jonathan.tepper at ntu.ac.uk Tue Jun 6 06:52:25 2006 From: jonathan.tepper at ntu.ac.uk (Tepper, Jonathan) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: PhD STUDENTSHIP AVAILABLE: Neural Networks for Natural Language Processing (£9,000 per year) ========================================================================== The School of Computing and Mathematics, The Nottingham Trent University is pleased to announce the immediate availability of a PhD studentship funded for three years. The successful candidate will join the Intelligent Recognition Systems Research Group within the School. They will pursue research into the use of neural networks for broad-coverage syntactic parsing of English texts using large text corpora. The focus will be to build on existing work carried out in the research group, which has produced a parser that is at the forefront of this research area. The candidate will be expected to present their findings in academic papers as part of their research programme. We are seeking highly motivated candidates with the following essential qualifications: -a good honours degree in a Computing subject or Computational Linguistics -strong programming skills -an aptitude for mathematics with a willingness to learn advanced topics -good communication skills in English Ideally, the candidate will also have the following desirable qualifications: -knowledge of neural networks -knowledge or experience in natural language processing or Computational Linguistics -a working knowledge of C or C++ -a Master's thesis in a Computing subject or Computational Linguistics The studentship will cover tax-free living expenses of £9,000 per year plus tuition fees, and will commence on 22nd September 2003. Informal enquiries may be made to either Dr Jon Tepper via tel. +44 (0)115 848 2255 email: jonathan.tepper at ntu.ac.uk or Dr Heather Powell via tel. +44 (0)115 848 2598 email: heather.powell at ntu.ac.uk.
For an application form, please contact: Mrs Doreen Corlett Faculty of Construction, Computing & Technology The Nottingham Trent University Burton Street Nottingham NG1 4BU, UK Email: doreen.corlett at ntu.ac.uk, Telephone: +44 (0)115 848 2301, Fax: +44 (0)115 848 6867 Candidates must send a completed application form, their curriculum vitae and a covering letter stating why they are applying for the post and why they meet the essential qualifications (mentioned above) to Mrs Corlett by 26th August 2003. Applications by CV only will not be accepted. Please forward to interested students. Many thanks, Jon -------- Dr. Jon Tepper Senior Lecturer School of Computing and Mathematics The Nottingham Trent University Email: Jonathan.Tepper at ntu.ac.uk WWW: http://dcm.ntu.ac.uk/5_staff/staff_jt.htm http://dcm.ntu.ac.uk/2_research/iris/index.htm Tel. no. +44 (0) 115 848 2255 Fax. no. +44 (0) 115 848 6518 From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Dear Colleagues, Preprints of two papers on spike timing-dependent synaptic plasticity are available: "Spike timing-dependent plasticity: The relationship to rate-based learning for models with weight dynamics determined by a stable fixed-point", A.N. Burkitt, H. Meffin, and D.B. Grayden, to appear in Neural Computation, is available at http://www.bionicear.org/people/burkitta/Burkitt_NC_2003.pdf "How synapses in the auditory system wax and wane: Theoretical perspectives", A.N. Burkitt and L.J. van Hemmen, to appear in Biological Cybernetics, is available at http://www.bionicear.org/people/burkitta/BurkittvH_BC_2003.pdf Regards, Tony Burkitt =============================================== "Spike timing-dependent plasticity: The relationship to rate-based learning for models with weight dynamics determined by a stable fixed-point", A.N. Burkitt, H. Meffin, and D.B. Grayden, to appear in Neural Computation, Abstract: --------- Experimental evidence indicates that synaptic modification depends upon the timing relationship between the presynaptic inputs and the output spikes that they generate. In this paper results are presented for models of spike timing-dependent plasticity (STDP) whose weight dynamics is determined by a stable fixed-point. Four classes of STDP are identified on the basis of the time-extent of their input-output interactions. The effect upon the potentiation of synapses with different rates of input is investigated to elucidate the relationship of STDP with classical studies of LTP/LTD and rate-based Hebbian learning. The selective potentiation of higher-rate synaptic inputs is found only for models where the time-extent of the input-output interactions are ``input restricted'' (i.e., restricted to time domains delimited by adjacent synaptic inputs) and that have a time-asymmetric learning window with a longer time constant for depression than for potentiation. The analysis provides an account of learning dynamics determined by an input-selective stable fixed-point. The effect of suppressive interspike interactions upon STDP are also analyzed and shown to modify the synaptic dynamics. http://www.bionicear.org/people/burkitta/Burkitt_NC_2003.pdf =============================================== "How synapses in the auditory system wax and wane: Theoretical perspectives", A.N. Burkitt and L.J. 
van Hemmen, to appear in Biological Cybernetics, Abstract: --------- Spike timing-dependent synaptic plasticity has recently provided an account of both the acuity of sound localization and the development of temporal-feature maps in the avian auditory system. The dynamics of the resulting learning equation, which describes the evolution of the synaptic weights, is governed by an unstable fixed-point. We outline the derivation of the learning equation for both the Poisson neuron model and the leaky integrate-and-fire neuron with conductance synapses. The asymptotic solutions of the learning equation can be described by a spectral representation based on a biorthogonal expansion. http://www.bionicear.org/people/burkitta/BurkittvH_BC_2003.pdf ====================ooOOOoo==================== Anthony N. Burkitt The Bionic Ear Institute 384-388 Albert Street East Melbourne, VIC 3002 Australia Email: aburkitt at bionicear.org http://www.bionicear.org/people/burkitta Phone: +61 - 3 - 9663 4453 Fax: +61 - 3 - 9667 7518 =====================ooOOOoo=================== From esann at dice.ucl.ac.be Tue Jun 6 06:52:25 2006 From: esann at dice.ucl.ac.be (esann) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From aetzold at neuro.uni-bremen.de Tue Jun 6 06:52:25 2006 From: aetzold at neuro.uni-bremen.de (Axel Etzold) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Paper available about construction of robust tuning curves Message-ID: From ASWDuch at ntu.edu.sg Tue Jun 6 06:52:25 2006 From: ASWDuch at ntu.edu.sg (Wlodzislaw Duch (Dr)) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Dear Connectionists, Here are five of my recent computational intelligence papers for your comments: 1. Duch W (2003) Support Vector Neural Training. Submitted to IEEE Transactions on Neural Networks (submitted 11.2003) http://www.phys.uni.torun.pl/publications/kmk/03-SVNT.html 2. Duch W (2003) Uncertainty of data, fuzzy membership functions, and multi-layer perceptrons. Submitted to IEEE Transactions on Neural Networks (submitted 11.2003) http://www.phys.uni.torun.pl/publications/kmk/03-uncert.html 3. Duch W (2003) Coloring black boxes: visualization of neural network decisions. International Joint Conference on Neural Networks http://www.phys.uni.torun.pl/publications/kmk/03-IJCNN.html 4. Kordos M, Duch W (2003) On Some Factors Influencing MLP Error Surface. The Seventh International Conference on Artificial Intelligence and Soft Computing (ICAISC) http://www.phys.uni.torun.pl/publications/kmk/03-MLPerrs.html 5. Duch W (2003) Brain-inspired conscious computing architecture. Journal of Mind and Behavior (submitted 10/03) http://www.phys.uni.torun.pl/publications/kmk/03-Brainins.html All these papers (and quite a few more) are linked to my page: http://www.phys.uni.torun.pl/~duch/cv/papall.html Here are the abstracts: 1. Support Vector Neural Training. Neural networks are usually trained on all available data. Support Vector Machines start from all data but near the end of the training use only a small subset of vectors near the decision border. The same learning strategy may be used in neural networks, independently of the actual optimization method used. Feedforward step is used to identify vectors that will not contribute to optimization. Threshold for acceptance of useful vectors for training is dynamically adjusted during learning to avoid excessive oscillations in the number of support vectors.
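As a rough, hypothetical illustration of this selection mechanism (an editorial sketch of the general idea only, not code from the paper; the toy data, variable names, learning rate, and threshold schedule below are all assumptions), a minimal NumPy rendering might look like this:

# Illustrative sketch: selective "support vector" training of a single
# sigmoid unit on a toy 2-D problem.  Only examples whose current output
# lies close to the decision border (|y - 0.5| < theta) contribute to the
# weight update; theta shrinks slowly toward a floor so that the set of
# selected vectors neither collapses nor oscillates late in training.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two Gaussian clouds with labels 0 and 1.
X = np.vstack([rng.normal(-1.0, 0.7, size=(100, 2)),
               rng.normal(+1.0, 0.7, size=(100, 2))])
t = np.hstack([np.zeros(100), np.ones(100)])

w = rng.normal(0.0, 0.1, size=2)   # weights
b = 0.0                            # bias
eta = 0.1                          # learning rate
theta = 0.5                        # initial acceptance band around the border

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for epoch in range(200):
    y = sigmoid(X @ w + b)                  # feedforward pass on all data
    selected = np.abs(y - 0.5) < theta      # candidate "support vectors"
    n_sel = int(selected.sum())
    if n_sel > 0:                           # update only on selected vectors
        err = (y - t)[selected]
        w -= eta * (X[selected].T @ err) / n_sel
        b -= eta * err.mean()
    theta = max(0.05, theta * 0.995)        # slowly tighten the band

y = sigmoid(X @ w + b)
print("vectors still near the border:", int((np.abs(y - 0.5) < theta).sum()))

The floor on theta plays the stabilising role mentioned above: without it the acceptance band could shrink until almost no vectors qualify, producing exactly the oscillations the dynamic adjustment is meant to avoid.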
Benefits of such an approach include faster training, higher accuracy of final solutions and identification of a small number of support vectors near decision borders. Results on satellite image classification and hypothyroid disease obtained with this type of training are better than any other neural network results published so far. 2. Uncertainty of data, fuzzy membership functions, and multi-layer perceptrons. The probability that a crisp logical rule applied to imprecise input data is true may be computed using a fuzzy membership function. All reasonable assumptions about input uncertainty distributions lead to membership functions of sigmoidal shape. Convolution of several inputs with uniform uncertainty leads to bell-shaped Gaussian-like uncertainty functions. Relations between input uncertainties and fuzzy rules are systematically explored and several new types of membership functions discovered. Multi-layered perceptron (MLP) networks are shown to be a particular implementation of hierarchical sets of fuzzy threshold logic rules based on sigmoidal membership functions. They are equivalent to crisp logical networks applied to input data with uncertainty. Leaving fuzziness on the input side makes the networks or the rule systems easier to understand. Practical applications of these ideas are presented for analysis of questionnaire data and gene expression data. 3. Coloring black boxes: visualization of neural network decisions. Neural networks are commonly regarded as black boxes performing incomprehensible functions. For classification problems networks provide maps from high-dimensional feature space to K-dimensional image space. Images of training vectors are projected onto polygon vertices, providing visualization of network function. Such visualization may show the dynamics of learning, allow for comparison of different networks, display training vectors around which potential problems may arise, show differences due to regularization and optimization procedures, investigate stability of network classification under perturbation of original vectors, and place a new data sample in relation to training data, allowing for estimation of confidence in classification of a given sample. Illustrative examples for the three-class Wine data and five-class Satimage data are described. The visualization method proposed here is applicable to any black box system that provides continuous outputs. 4. Kordos M, Duch W (2003) Visualization of MLP error surfaces helps to understand the influence of network structure and training data on neural learning dynamics. PCA is used to determine two orthogonal directions that capture almost all variance in the weight space. 3-dimensional plots show many aspects of the original error surfaces. 5. Duch W (2003) Brain-inspired conscious computing architecture. What type of artificial systems will claim to be conscious and will claim to experience qualia? The ability to comment upon the physical states of a brain-like dynamical system coupled with its environment seems to be sufficient to make such claims. The flow of internal states in such a system, guided and limited by associative memory, is similar to the stream of consciousness. Minimal requirements for an artificial system that will claim to be conscious were given in the form of a specific architecture named articon. Nonverbal discrimination of the working memory states of the articon gives it the ability to experience different qualities of internal states.
Analysis of the inner state flows of such a system during a typical behavioral process shows that qualia are inseparable from perception and action. The role of consciousness in the learning of skills, when conscious information processing is replaced by subconscious processing, is elucidated. Arguments confirming that phenomenal experience is a result of cognitive processes are presented. Possible philosophical objections based on the Chinese room and other arguments are discussed, but they are insufficient to refute the articon's claims. Conditions for genuine understanding that go beyond the Turing test are presented. Articons may fulfill such conditions and in principle the structure of their experiences may be arbitrarily close to human. With best regards for the coming year, Wlodzislaw Duch Dept. of Informatics, Nicholaus Copernicus University Dept. of Computer Science, SCE NTU, Singapore http://www.phys.uni.torun.pl/~duch http://www.ntu.edu.sg/home/aswduch/ From aweigend at amazon.com Tue Jun 6 06:52:25 2006 From: aweigend at amazon.com (Weigend, Andreas) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Software Development Positions at Amazon.com in Seattle, WA in Machine Learning, Statistical Analysis, Fraud Detection, Computational Marketing etc. Do you want to build quantitative models millions of people will use, based on data from the world's largest online laboratory? Are you passionate about formulating relevant questions, and producing solutions to initially ill-defined problems? Do the challenges and opportunities of terabytes of data excite you? Can you think abstractly, and apply your ideas to the real world? Can you contribute to the big picture, and are not afraid to handle the details? Amazon.com is solving incredibly interesting problems in areas including consumer behavior modeling, pricing and promotions, personalization and recommendations, reputation management, fraud detection, computational marketing, customer acquisition and retention. Emphasizing measurement and analytics, we build and automate solutions that leverage instant feedback and the Web's scale. We are looking for people with the right blend of vision, curiosity, and hands-on skills, who want to be part of a highly visible, intellectually vibrant, entrepreneurial team. Ideal candidates will have a track record of creating innovative solutions. They will typically have a graduate degree in computer science, physics, statistics, electrical engineering, bioinformatics, or another computational science. More information can be found at www.weigend.com/amazonjobs.html. If this interests you, please email your resume by January 31, 2004 directly to amazonjobs at weigend.com, clearly indicating your interests and strengths. Thank you. Best regards, -- Andreas Weigend ........................... Andreas Weigend Chief Scientist, Amazon.com Mobile: +1 (917) 697-3800 Info: www.weigend.com ........................... From Peter.Andras at newcastle.ac.uk Tue Jun 6 06:52:25 2006 From: Peter.Andras at newcastle.ac.uk (Peter Andras) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Dear Colleague, We would hereby like to invite you to attend the symposium entitled 'Human Language: cognitive, neuroscientific and dynamical systems perspectives' at the University of Newcastle upon Tyne, UK, that will take place on 20-22 February 2004.
Further details are available at the symposium website: http://www.staff.ncl.ac.uk/peter.andras/lingsymp.htm If you are interested in coming, please email either: Peter Andras: peter.andras at ncl.ac.uk Hermann Moisl: hermann.moisl at ncl.ac.uk The aim is to keep the attendance fairly small to promote effective discussion, so an early reply would be appreciated. Best regards, Peter Andras http://www.staff.ncl.ac.uk/peter.andras/ Gary Green http://www.staff.ncl.ac.uk/gary.green/ Hermann Moisl http://www.staff.ncl.ac.uk/hermann.moisl/ From tkelley at arl.army.mil Tue Jun 6 06:52:25 2006 From: tkelley at arl.army.mil (Kelley, Troy (Civ,ARL/HRED)) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: The Army Research Laboratory's (ARL) Human Research and Engineering Directorate (HRED) is seeking post-doctoral researchers to join us in a variety of areas, particularly modeling human cognitive processes using current architectures such as the Atomic Components of Thought-Rational (ACT-R), cognitive modeling using high performance computer assets, and developing new cognitive processes for robotic agents. We have post-doctoral positions available through the National Research Council (NRC) and American Society for Engineering Education (ASEE). The NRC positions have open windows for applications, the soonest being February 2nd. The ASEE positions are open on a continuing basis. A background in cognitive psychology, computational cognitive models, and/or in neural networks or artificial intelligence (AI) is required. Post-doctoral positions usually last a year, with an option of an extra year. Many post-doctoral candidates eventually become employees with ARL. ARL HRED is located at Aberdeen Proving Ground, in Northern Maryland between Baltimore and Philadelphia on the shores of the Chesapeake Bay. We are midway between Maryland's Appalachian mountains and the ocean shore. Please contact Troy Kelley or Laurel Allender if you are interested in learning more about these research opportunities. There is a deadline for NRC post docs of Feb 2, so if you are interested, please respond soon. tkelley at arl.army.mil lallende at arl.army.mil Troy Kelley U.S. Army Research Laboratory Human Research and Engineering Directorate AMSRL-HR-SE, APG, MD 21005 Tel: 410-278-5859 Fax: 410-278-9694 email: tkelley at arl.army.mil From levys at wlu.edu Tue Jun 6 06:52:25 2006 From: levys at wlu.edu (Simon Levy) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Call for Papers: AAAI Fall 2004 symposium on Compositional Connectionism in Cognitive Science Message-ID: From esann at dice.ucl.ac.be Tue Jun 6 06:52:25 2006 From: esann at dice.ucl.ac.be (esann) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: 2004 European Capital of Culture (www.genova-2004.it) This year, the School will place a special emphasis on exploring the implications of neuroengineering for neurological rehabilitation. The School will feature lectures held by renowned experts in the field (organized into thematic sessions) on both theoretical and technical aspects, lab activities, and student presentations (one dedicated session per day).
The topics covered will include: 1) Neural code and neural plasticity Coding and decoding of information in neural systems Neurophysiological basis of learning and memory Computational paradigms for perception and action Capabilities of natural neurobiological systems as computational devices 2) Neural interfaces and bio-artificial systems EEG, trans-cranial magnetic stimulation, multi-site recordings Intra-cranial, EEG-based and peripheral brain-computer interfaces Neural prostheses Hybrid silicon/biological systems 3) Neuroengineering and rehabilitation Haptic devices in neurological rehabilitation Virtual reality and multimedia for rehabilitation Functional electrical stimulation and biofeedback 4) Neuroengineering of mind Using robots to understand development of higher functions Neural models of higher functions Large scale brain models CONFIRMED SPEAKERS Giovanni Abbruzzese, University of Genova (ITALY) Fabio Babiloni, University of Rome I (ITALY) Marco Bove, University of Genova (ITALY) Andreas K. Engel, Hamburg University (GERMANY) Rainer Goebel, University of Maastricht (THE NETHERLANDS) Peter König, University of Osnabrueck (GERMANY) Shimon Marom, Technion, Haifa (ISRAEL) Sergio Martinoia, University of Genova (ITALY) Pietro G. Morasso, University of Genova (ITALY) Miguel A. Nicolelis, Duke University, Durham (USA) David J. Ostry, McGill University, Montreal (CANADA) Silvio P. Sabatini, University of Genova (ITALY) Vincenzo Tagliasco, University of Genova (ITALY) John G. Taylor, King's College, London (UK) REGISTRATION FEES (*) PhD students and postdocs: 150 euros (**) Business and medical professionals: 300 euros (*) includes program, reception, two coffee breaks and lunch each day, and lecture notes (**) Early registration; fees increase by 20% after May 15th. SCIENTIFIC ORGANIZATION: Dr. Vittorio Sanguineti University of Genova Via Opera Pia 13 16145 Genova (ITALY) E-mail: vittorio.sanguineti at unige.it Phone: +39-010-3536487 Fax: +39-010-3532154 ADMINISTRATIVE INFORMATION: Filippo Corsini Telecom Italia - Learning Services S.p.A. Viale Parco de' Medici, 61, 00148 Roma Italy tel: +39.06.368.72379 fax: +39.06.368.80101 e-mail: UE001861 at guest.telecomitalia.it From M.Denham at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Professor / Reader in Theoretical and Computational Neuroscience Applications are invited for this permanent position within the Centre for Theoretical and Computational Neuroscience (www.plymneuro.org.uk) at the University of Plymouth, England. Applicants must possess a record of high quality, internationally significant research in any specialist area, e.g. vision, audition, motor control, ideally with interests in both mathematical/computational modelling and human psychophysics/imaging. Interested persons are invited to contact the Head of the Centre, Professor Mike Denham (email: mdenham at plym.ac.uk; tel: +44 (0)1752 232547), for further details and to discuss the post on an informal basis.
Professor Mike Denham Centre for Theoretical and Computational Neuroscience Room A223 Portland Square University of Plymouth Drake Circus Plymouth PL4 8AA UK tel: +44 (0)1752 232547/233359 fax: +44 (0)1752 233349 email: mdenham at plym.ac.uk www.plymneuro.org.uk From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: From A.Cangelosi at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: A.Cangelosi at plymouth.ac.uk (Angelo Cangelosi) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Call for Abstracts 9th NEURAL COMPUTATION AND PSYCHOLOGY WORKSHOP (NCPW 9) Modelling Language, Cognition and Action Plymouth (UK), 8-10 September 2004 http://www.plymouth.ac.uk/ncpw9 The 9th Neural Computation and Psychology Workshop (NCPW9) will be held in Plymouth (England), September 8-10, 2004. Each year this lively forum brings together researchers from diverse disciplines such as psychology, artificial intelligence, cognitive science, computer science, robotics, neuroscience, and philosophy. The special theme of this year's workshop is "Neural Network Modelling of Language, Cognition and Action". PAPER SUBMISSIONS are INVITED in this and others areas covering the wider subject of neural modelling of cognitive and psychological processes. Papers will be considered for oral and poster presentations. After the conference, participants will be invited to submit a paper for the post-conference proceedings. This Workshop has always been characterized by the presentation of high quality papers, its limited size and the fact that it takes place in an informal setting. These features are explicitly designed to encourage interaction among the participants. KEYNOTE SPEAKERS (confirmed) Bob French (University of Liege) Art Glenberg (University of Wisconsin - Madison) Deb Roy (MIT) Luc Steels (VUB University Brussels and SONY Paris) Daniel Wolpert (Institute of Neurology, UCL London) CALL FOR ABSTRACTS --------------------- One-page abstracts are now solicited. DEADLINE: 14 JUNE 2004 FORMAT: Each abstract should conform to the following specifications: Length: a single page of A4 with 2.5cm margins all round. Font size 12pt or larger, single-spaced Title centred, 14pts Any reference list and diagram(s) must fit on this single page AUTHORSHIP AND AFFILIATION: The top of the A4 page must contain: Title of paper, Author name(s), Author affiliation(s) in brief (1 line), Email address of principal author. SEND: Word or PDF file to ncpw9 at plymouth.ac.uk. Publication ----------- Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific (to be confirmed). 
Conference Organisers --------------------- Angelo Cangelosi (University of Plymouth) Guido Bugmann (University of Plymouth) Roman Borisyuk (University of Plymouth) John Bullinaria (University of Birmingham) Important Dates --------------- Deadline for submission of abstracts: June 14th 2004 Notification of acceptance/rejection: July 9th 2004 Submission of full papers: October 15th, 2004 Website ------- More details can be found on the conference website, http://www.plymouth.ac.uk/ncpw9 ------------- Angelo Cangelosi, PhD ------------- Principal Lecturer Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth Portland Square (A316) Drake Circus Plymouth PL4 8AA (UK) E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/staff/angelo (tel) +44 1752 232559 (fax) +44 1752 232540 From A.Cangelosi at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: A.Cangelosi at plymouth.ac.uk (Angelo Cangelosi) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Apologies if you receive more than one copy of this email. ========================================================= Call for Abstracts 9th NEURAL COMPUTATION AND PSYCHOLOGY WORKSHOP (NCPW 9) Modelling Language, Cognition and Action Plymouth (UK), 8-10 September 2004 http://www.plymouth.ac.uk/ncpw9 The 9th Neural Computation and Psychology Workshop (NCPW9) will be held in Plymouth (England), September 8-10, 2004. Each year this lively forum brings together researchers from diverse disciplines such as psychology, artificial intelligence, cognitive science, computer science, robotics, neuroscience, and philosophy. The special theme of this year's workshop is "Neural Network Modelling of Language, Cognition and Action". PAPER SUBMISSIONS are INVITED in this and others areas covering the wider subject of neural modelling of cognitive and psychological processes. Papers will be considered for oral and poster presentations. After the conference, participants will be invited to submit a paper for the post-conference proceedings. This Workshop has always been characterized by the presentation of high quality papers, its limited size and the fact that it takes place in an informal setting. These features are explicitly designed to encourage interaction among the participants. KEYNOTE SPEAKERS (confirmed) Bob French (University of Liege) Art Glenberg (University of Wisconsin - Madison) Deb Roy (MIT) Luc Steels (VUB University Brussels and SONY Paris) Daniel Wolpert (Institute of Neurology, UCL London) CALL FOR ABSTRACTS --------------------- One-page abstracts are now solicited. DEADLINE: 14 JUNE 2004 FORMAT: Each abstract should conform to the following specifications: Length: a single page of A4 with 2.5cm margins all round. Font size 12pt or larger, single-spaced Title centred, 14pts Any reference list and diagram(s) must fit on this single page AUTHORSHIP AND AFFILIATION: The top of the A4 page must contain: Title of paper, Author name(s), Author affiliation(s) in brief (1 line), Email address of principal author. SEND: Word or PDF file to ncpw9 at plymouth.ac.uk. Publication ----------- Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific (to be confirmed). 
Conference Organisers --------------------- Angelo Cangelosi (University of Plymouth) Guido Bugmann (University of Plymouth) Roman Borisyuk (University of Plymouth) John Bullinaria (University of Birmingham) Important Dates --------------- Deadline for submission of abstracts: June 14th 2004 Notification of acceptance/rejection: July 9th 2004 Submission of full papers: October 15th, 2004 Website ------- More details can be found on the conference website, http://www.plymouth.ac.uk/ncpw9 ------------- Angelo Cangelosi, PhD ------------- Principal Lecturer Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth Portland Square (A316) Drake Circus Plymouth PL4 8AA (UK) E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/staff/angelo (tel) +44 1752 232559 (fax) +44 1752 232540 From calls at bbsonline.org Tue Jun 6 06:52:25 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Depue & Morrone-Strupinsky/A neurobehavioral model of affiliative bonding: BBS Call for Commentators Message-ID: Below the instructions please find the abstract, keywords, and full text link to the forthcoming BBS target article: A neurobehavioral model of affiliative bonding: Implications for conceptualizing a human trait of affiliation by Richard A. Depue and Jeannine V. Morrone-Strupinsky This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= ** IMPORTANT ** ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. 
Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable at the URL that follows the abstract, keywords below. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: A neurobehavioral model of affiliative bonding: Implications for conceptualizing a human trait of affiliation AUTHORS: Richard A. Depue and Jeannine V. Morrone-Strupinsky ABSTRACT: Because little is known about the human trait of affiliation, we provide a novel neurobehavioral model of affiliative bonding. Discussion is organized around processes of reward and memory formation that occur during approach and consummatory phases of affiliation. Appetitive and consummatory reward processes are mediated independently by the activity of the ventral tegmental area (VTA) dopamine (DA)-nucleus accumbens shell (NAS) pathway and the central corticolimbic projections of the mu-opiate system of the medial basal arcuate nucleus, respectively, although these two projection systems functionally interact across time. We next explicate the manner in which DA and glutamate interact in both the VTA and NAS to form incentive-encoded contextual memory ensembles that are predictive of reward derived from affiliative objects. Affiliative stimuli, in particular, are incorporated within contextual ensembles predictive of affiliative reward via a) the binding of affiliative stimuli in the rostral circuit of the medial extended amygdala and subsequent transmission to the NAS shell; b) affiliative stimulus-induced opiate potentiation of DA processes in the VTA and NAS; and c) permissive or facilitatory effects of gonadal steroids, oxytocin (in interaction with DA), and vasopressin on (i) sensory, perceptual, and attentional processing of affiliative stimuli and (ii) formation of social memories. Among these various processes, we propose that the capacity to experience affiliative reward via opiate functioning has a disproportionate weight in determining individual differences in affiliation. We delineate sources of these individual differences, and provide the first human data that support an association between opiate functioning and variation in trait affiliation. KEYWORDS: affiliation, social bonds, social memory, personality, appetitive reward, consummatory reward, dopamine, mu-opiates, oxytocin, vasopressin, corticolimbic-striatal networks FULL TEXT: http://www.bbsonline.org/Preprints/Depue-07232002/Referees/ ======================================================================= ======================================================================= *** SUPPLEMENTARY ANNOUNCEMENT *** (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review.
(Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, which is why you received this email. If you do not wish to receive further BBS Calls please email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From calls at bbsonline.org Tue Jun 6 06:52:25 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Arbib/From monkey-like action recognition to human language: BBS Call for Commentators Message-ID: Below please find the abstract, keywords, and a link to the full text of the forthcoming BBS target article: From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics Michael A. Arbib This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. 
Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: From monkey-like action recognition to human language: An evolutionary framework for neurolinguistics AUTHORS: Michael A. Arbib ABSTRACT: The article analyzes the neural and functional grounding of language skills as well as their emergence in hominid evolution, hypothesizing stages leading from abilities known to exist in monkeys and apes and presumed to exist in our hominid ancestors right through to modern spoken and signed languages. The starting point is the observation that both premotor area F5 in monkeys and Broca's area in humans contain a "mirror system" active for both execution and observation of manual actions, and that F5 and Broca's area are homologous brain regions. This grounded the Mirror System Hypothesis of Rizzolatti & Arbib (1998) which offers the mirror system for grasping as a key neural "missing link" between the abilities of our non-human ancestors of 20 million years ago and modern human language, with manual gestures rather than a system for vocal communication providing the initial seed for this evolutionary process. The present article, however, goes "beyond the mirror" to offer hypotheses on evolutionary changes within and outside the mirror systems which may have occurred to equip Homo sapiens with a language-ready brain. Crucial to the early stages of this progression is the mirror system for grasping and its extension to permit imitation. Imitation is seen as evolving via a so-called "simple" system such as that found in chimpanzees (which allows imitation of complex "object-oriented" sequences but only as the result of extensive practice) to a so-called "complex" system found in humans (which allows rapid imitation even of complex sequences, under appropriate conditions) which supports pantomime. This is hypothesized to provide the substrate for the development of protosign, a combinatorially open repertoire of manual gestures, which then provides the scaffolding for the emergence of protospeech (which thus owes little to non-human vocalizations), with protosign and protospeech then developing in an expanding spiral. It is argued that these stages involve biological evolution of both brain and body. By contrast, it is argued that the progression from protosign and protospeech to languages with full-blown syntax and compositional semantics was a historical phenomenon in the development of Homo sapiens, involving few if any further biological changes. KEYWORDS: gestures; hominids; language evolution; mirror system; neurolinguistics; primates; protolanguage; sign language; speech; vocalization http://www.bbsonline.org/Preprints/Arbib-05012002/Referees/ ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota.
BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- Message-Id: <200404082226.i38MQNB1020437 at ursa.services.brown.edu> X-Priority: From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Dear Connectionists, Either RA post below may appeal to someone with a background in coding and/or analysis of audio using e.g. neural networks. Please forward to anyone who may be interested. Many thanks, Mark Plumbley ---------------------------------------------------------- Centre for Digital Music Queen Mary, University of London Two Post-Doctoral Research Assistants for EPSRC Projects (1) Object-based Coding of Musical Audio and (2) Advanced Subband Systems for Audio Source Separation The Centre for Digital Music is at the forefront of research related to digital music and audio analysis, modeling and processing, including work on digital audio effects, music analysis, music information retrieval, and audio coding. Research Assistants are required for two new EPSRC projects in the Centre. * RA Post 1: Object-Based Coding of Musical Audio (Ref: 04097/DP) The aim of this project is to develop a way to encode musical audio using high-level "sound objects" such as musical notes or chords. This will allow musical audio to be compressed using very low bit rates, over e.g. MPEG4 Structured Audio, with the audio resynthesized at the receiver. The project will develop and investigate methods to encode monophonic (single-note) music and polyphonic music (with several notes at once), and will compare the quality and efficiency of these coding methods with existing methods such as transform coding and parametric coding. * RA Post 2: Advanced Subband Systems for Audio Source Separation * (Ref: 04098/DP) Humans primarily use phase information to localize sounds at low frequency, whereas in the upper frequencies intensity differences dominate due to inherent phase ambiguities. The aim of this project is to create new algorithmic solutions for blind source separation (BSS) for speech and audio that can deal with real acoustic environments in a similar manner to human hearing. 
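As a toy illustration of these two localization cues (purely illustrative, not part of the project; the signals, delay and gain below are made up), the timing cue can be estimated from the peak of a cross-correlation between the two ear signals, while the intensity cue is simply an energy ratio:

import numpy as np

fs = 16000
t = np.arange(0, 0.05, 1 / fs)

# Fake two-microphone ("two-ear") signals: right channel delayed and attenuated.
delay_samples = 8
src = np.sin(2 * np.pi * 500 * t) + 0.3 * np.random.default_rng(1).normal(size=t.size)
left = src
right = 0.6 * np.roll(src, delay_samples)

# Interaural time difference from the cross-correlation peak (phase/timing cue, dominant at low frequencies).
lags = np.arange(-20, 21)
xcorr = [np.dot(left, np.roll(right, -lag)) for lag in lags]
itd_samples = lags[int(np.argmax(xcorr))]

# Interaural level difference as an energy ratio in decibels (intensity cue, dominant at high frequencies).
ild_db = 10 * np.log10(np.sum(left ** 2) / np.sum(right ** 2))

print("estimated ITD (samples):", itd_samples, " ILD (dB):", round(ild_db, 1))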
The algorithms need to be able to deal with real noisy and reverberant environments and be able to track individual sources as they move and appear/disappear. Such systems will be key in future electronic devices, such as digital hearing aids and hands-free tele-conferencing. The project will also focus on the construction of a real time prototype for system evaluation and demonstration. The salary for the posts will be up to £24,325 per annum, inclusive of London Allowance, on the RA1A scale. Further details about the Department are on the web site http://www.elec.qmul.ac.uk/ and about the College on http://www.qmul.ac.uk. Further details and an application form can be obtained from http://www.elec.qmul.ac.uk/department/vacancies/ Completed application forms should be returned to Theresa Willis, Department of Electronic Engineering, Queen Mary, University of London, Mile End Road, London E1 4NS (email: theresa.willis at elec.qmul.ac.uk), by Wednesday 21 April 2004. Working Towards Equal Opportunities --- Dr Mark D Plumbley Centre for Digital Music Department of Electronic Engineering Queen Mary University of London Mile End Road, London E1 4NS, UK Tel: +44 (0)20 7882 7518 Fax: +44 (0)20 7882 7997 Email: mark.plumbley at elec.qmul.ac.uk From calls at bbsonline.org Tue Jun 6 06:52:25 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Vallortigara & Rogers/Survival with an asymmetrical brain: BBS Call for Commentators Message-ID: Below the proposal instructions please find the abstract, keywords, and a link to the full text of the forthcoming BBS target article: Survival with an asymmetrical brain: Advantages and disadvantages of cerebral lateralization Giorgio Vallortigara and Lesley J. Rogers This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.)
======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: Survival with an asymmetrical brain: Advantages and disadvantages of cerebral lateralization AUTHORS: Giorgio Vallortigara and Lesley J. Rogers ABSTRACT: Recent evidence in natural and semi-natural settings has revealed a variety of left-right perceptual asymmetries among vertebrates. This includes preferential use of the left or right visual hemifield during activities such as searching for food, agonistic responses or escape from predators in animals as different as fish, amphibians, reptiles, birds and mammals. There are obvious disadvantages in showing such directional asymmetries because relevant stimuli may happen to be located to the animals left or right at random; there is no a priori association between the meaning of a stimulus (e.g., its being a predator or a food item) and its being located to the animal's left or right. Moreover, other organisms (e.g. predators) could exploit the predictability of behavior that arises from population-level lateral biases. It might be argued that lateralization of function can enhance cognitive capacity and efficiency of the brain, thus counteracting the ecological disadvantages of lateral biases in behavior. However, such an increase in brain efficiency could be obtained by each individual being lateralized without any need to align the direction of the asymmetry in the majority of the individuals of the population. Here we argue that the alignment of the direction of behavioral asymmetries at the population level arises as an evolutionarily stable strategy under "social" pressures, i.e. when individually asymmetrical organisms must coordinate their behavior with the behavior of other asymmetrical organisms of the same or different species. KEYWORDS: Asymmetry, lateralization of behavior, brain evolution, brain lateralization, evolution of lateralization, evolutionarily stable strategy, hemispheric specialization, laterality, social behavior, development FULL TEXT: http://www.bbsonline.org/Preprints/Vallortigara-12152003/Referees/ ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. 
(Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From R.Borisyuk at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: R.Borisyuk at plymouth.ac.uk (Roman Borisyuk) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: New announcements on 9th Neural Computation and Psychology Workshop (NCPW9) Plymouth UK, 8-10 September 2004 www.plymouth.ac.uk/ncpw9 1. Thanks to the sponsorship of the UK councils BBSRC and EPSRC, there is now a limited number of travel grants. You can apply, after submitting your abstract, by writing to ncpw9 at plymouth.ac.uk and explaining the motivation for your request. 2. The abstract submission deadline has been extended to JULY 1st. Please see full Call for Abstract below ========================================================= Call for Abstracts - Extended Deadline 9th NEURAL COMPUTATION AND PSYCHOLOGY WORKSHOP (NCPW 9) Modelling Language, Cognition and Action Plymouth (UK), 8-10 September 2004 http://www.plymouth.ac.uk/ncpw9 The 9th Neural Computation and Psychology Workshop (NCPW9) will be held in Plymouth (England), September 8-10, 2004. Each year this lively forum brings together researchers from diverse disciplines such as psychology, artificial intelligence, cognitive science, computer science, robotics, neuroscience, and philosophy. The special theme of this year's workshop is "Neural Network Modelling of Language, Cognition and Action". PAPER SUBMISSIONS are INVITED in this and others areas covering the wider subject of neural modelling of cognitive and psychological processes. Papers will be considered for oral and poster presentations. After the conference, participants will be invited to submit a paper for the post-conference proceedings. This Workshop has always been characterized by the presentation of high quality papers, its limited size and the fact that it takes place in an informal setting. These features are explicitly designed to encourage interaction among the participants. KEYNOTE SPEAKERS (confirmed) Nick Chater (Warwick University, UK) Bob French (University of Liege, Belgium) Art Glenberg (University of Wisconsin - Madison, USA) Deb Roy (MIT, USA) Stefan Wermter (Sunderland University, UK) Daniel Wolpert (Institute of Neurology, UCL London, UK) CALL FOR ABSTRACTS --------------------- One-page abstracts are now solicited. DEADLINE: JULY 1st, 2004 FORMAT: Each abstract should conform to the following specifications: Length: a single page of A4 with 2.5cm margins all round. 
Font size 12pt or larger, single-spaced Title centred, 14pts Any reference list and diagram(s) must fit on this single page AUTHORSHIP AND AFFILIATION: The top of the A4 page must contain: Title of paper, Author name(s), Author affiliation(s) in brief (1 line), Email address of principal author. SEND: Word or PDF file to ncpw9 at plymouth.ac.uk. Publication ----------- Proceedings of the workshop will appear in the series Progress in Neural Processing, which is published by World Scientific (to be confirmed). Conference Organisers --------------------- Angelo Cangelosi (University of Plymouth) Guido Bugmann (University of Plymouth) Roman Borisyuk (University of Plymouth) John Bullinaria (University of Birmingham) Important Dates --------------- Deadline for submission of abstracts: July 1st 2004 Notification of acceptance/rejection: July 9th 2004 Submission of full papers: October 15th, 2004 Website ------- More details can be found on the conference website, http://www.plymouth.ac.uk/ncpw9 From arthur at tuebingen.mpg.de Tue Jun 6 06:52:25 2006 From: arthur at tuebingen.mpg.de (Arthur Gretton) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: [repost] extended deadline for ICML/COLT session on kernel methods Message-ID: [ This message is reposted due to an editing error. Apologies. -- moderator ] IMPROMPTU KERNEL PAPERS -- COLT/ICML 2004 Kernel Day, Banff, Canada, July 4 Extended Deadline for Submissions: June 25, 2004 There remain some slots free in the impromptu poster session on kernel methods, to be held during the kernel day at ICML/COLT on 4 July 2004. As in previous years, we are looking for incomplete or unusual ideas, as well as promising directions or problems for new research. If you would like to submit to this session, please send an abstract to Arthur Gretton (arthur at tuebingen.mpg.de) before June 25, 2004. Please do not send posters or long documents. -- Arthur Gretton Mobile : +49 1762 3210867 MPI for Biological Cybernetics Office : +49 7071 601562 Spemannstr 38, 72076 Home : +49 7071 305346 Tuebingen, Germany I used to believe I was a Bayesian, but now I'm not so sure. From A.Cangelosi at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: A.Cangelosi at plymouth.ac.uk (Angelo Cangelosi) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Connection Science Journal Call for Papers A Special Issue on 'The Emergence of Language: Neural and Adaptive Agent Models ' Guest Editor: Angelo Cangelosi Connection Science is calling papers for a special issue entitled 'The Emergence of Language: Neural and Adaptive Agent Models'. Studies of the emergence of language focus on the evolutionary and/or developmental factors that affect the acquisition and auto-organisation of a linguistic communication system. Both language-specific abilities (e.g. speech, semantics, syntax) and other cognitive, sensorimotor and social abilities (e.g. category learning, action and embodiment, social networks) contribute to the emergence of language. 
Key research issues and topics in the area include: * Emergentism as an alternative to the nativism/empiricism dichotomy * Identification of basic processes producing language complexity * Grammaticalization and emergence of syntax * Emergent models of language acquisition * Evolution and origins of language * Pidgin, creole and second language acquisition * Neural bases of emergent language processes * Auto-organization of shared lexicons in groups of individuals/agents * Grounding of symbols and language in perception and action The main aims of this special issue are to foster interdisciplinary and multi-methodological approaches to modelling the emergence of language, and to identify key research directions for the future. Models based on neural networks (connectionism, computational neuroscience) and adaptive agent methodologies (artificial life, multi-agent systems, robotics), or integrated neural/agent approaches, are particularly encouraged. The submitted papers are expected to: (i) focus on one or more related research issues (see list above), (ii) explain the importance of the topic, the open problems and the different approaches discussed in the literature, (iii) discuss the advantages and drawbacks of the neural and adaptive agent approaches with respect to other methodologies (including experimental research) and (iv) present original models and/or significant new results. Review papers may also be considered. Invited Papers The special issue will include two invited papers, one from Brian MacWhinney (Carnegie Mellon University) and one from Luc Steels (VUB University Brussels and SONY Computer Labs Paris). The invited papers are: * Brian MacWhinney , 'Emergent Linguistic Structures and the Problem of Time' (focus on neural network modeling) * Luc Steels , 'Mirror Learning and the Self-Organisation of Languages' (focus on adaptive agent modeling) Submission Instructions and Deadline Manuscripts, either full papers or shorter research notes (up to 4000 words), following the Connection Science guidelines (http://www.tandf.co.uk/journals/authors/ccosauth.asp ) should be emailed to the guest editor (acangelosi at plymouth.ac.uk) by December 1, 2004. Reviews will be completed by March 1, 2005, and final drafts will be accepted no later than May 1, 2005. The special issue will be published in September 2005. Guest Editor Angelo Cangelosi Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth, Plymouth PL4 8AA, UK Tel: +44 (0) 1752 232559 Fax: +44 (0) 1752 232540 E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/research/ABC/EmergenceLanguage/ Related and Sample Papers Cangelosi, A., and Parisi, D., 1998, The emergence of a 'language' in an evolving population of neural networks. Connection Science, 10(2): 83-97. Cangelosi, A., and Parisi, D., 2004, The processing of verbs and nouns in neural networks: Insights from synthetic brain imaging. Brain and Language, 89(2): 401-408. Elman, J.L, 1999, The emergence of language: A conspiracy theory. In B. MacWhinney (ed.), Emergence of Language (Hillsdale, NJ: LEA). Knight, C., Hurford, J.R., and Studdert-Kennedy, M., (eds), 2000, The evolutionary emergence of language: social function and the origins of linguistic form (Cambridge: Cambridge University Press). MacWhinney, B., 1998, Models of the emergence of language. Annual Review of Psychology, 49: 199-227. Plunkett, K., Sinha, C., Moller, M. 
F., and Strandsry, O., 1992, Symbol grounding or the emergence of symbols? Vocabulary growth in children and a connectionist net. Connection Science, 4(3-4): 293-312. Roy, D., and Pentland, A., 2002, Learning words from sights and sounds: A computational model, Cognitive Science, 26: 113-146. Steels, L., 2003, Evolving grounded communication for robots. Trends in Cognitive Sciences, 7(7): 308-312. Wermter, S., Elshaw, M., and Farrand, S., 2003, A modular approach to self-organization of robot control based on language instruction. Connection Science, 15(2-3): 73-94. ---------------- Angelo Cangelosi, PhD ---------------- Reader in Artificial Intelligence and Cognition Adaptive Behaviour and Cognition Research Group School of Computing, Communication & Electronics University of Plymouth Portland Square Building (A316) Plymouth PL4 8AA (UK) E-mail: acangelosi at plymouth.ac.uk http://www.tech.plym.ac.uk/soc/staff/angelo (tel) +44 1752 232559 (fax) +44 1752 232540 From R.Borisyuk at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: R.Borisyuk at plymouth.ac.uk (Roman Borisyuk) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: University of Plymouth Faculty of Science - School of Earth, Ocean and Environmental Science Research Fellow in stochastic modelling, European Lifestyles and Marine Ecosystems Contact person for information: Prof. Laurence Mee (lmee at plymouth.ac.uk) Further information available at Plymouth University is one of the UK's leading institutions conducting multidisciplinary research on marine and coastal environmental policy worldwide. The Marine and Coastal Policy Research Group aims to: provide a sound scientific, social, legal and economic basis for improved policy for the management, sustainable use and protection of the marine and coastal environment. Due to a recent Framework Six project award it has become necessary to recruit a research fellow to develop the innovative models necessary for predictive scenarios on the state of Europe's seas. The overall project involves the cooperation of 28 research groups in 14 countries and modelling will examine causal relationships between agents of social and economic change and impacts on the environment. It will develop predictive scenarios of future changes to the marine environment. The new post, available with immediate effect for a maximum duration of 30 months, will assist the project leader, Prof. Laurence Mee. The work is expected to lead to high quality research publications and should provide an important career step for a young researcher. The successful candidate must be an experienced and adaptable postdoctoral statistician skilled in empirical data analysis and modelling techniques. Approaches that may be explored include multi-criteria analysis, Bayesian networks and neural networks. He/she will have a proven track record of quality research publications and should demonstrate capability for a multidisciplinary approach involving teamwork. The appointee should be able to work autonomously, have good communication skills and experience of working with large datasets. The indicative salary for the post is £20,311-22,191 pa pro rata.
Application forms can be obtained from: The Personnel Department University of Plymouth Drake Circus Plymouth PL4 8AA Vacancy hotline: (24 hour) 01752 232168 Email: personnel at plymouth.ac.uk Closing date 20 August 2004 From calls at bbsonline.org Tue Jun 6 06:52:25 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Steels & Belpaeme/Coordinating Perceptually Grounded Categories through Language: BBS Call for Commentators Message-ID: Below the instructions please find the abstract, keywords, and full text link to the forthcoming BBS target article: Coordinating Perceptually Grounded Categories through Language. A Case Study for Colour. by Luc Steels and Tony Belpaeme This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL within three (3) weeks to: calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) ======================================================================= ** IMPORTANT ** ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable at the URL that follows the abstract, keywords below. ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: Coordinating Perceptually Grounded Categories through Language. A Case Study for Colour. 
AUTHORS: Luc Steels and Tony Belpaeme ABSTRACT: The paper proposes a number of models to examine through what mechanisms a population of autonomous agents could arrive at a repertoire of perceptually grounded categories that is sufficiently shared to allow successful communication. The models are inspired by the main approaches to human categorisation being discussed in the literature: nativism, empiricism, and culturalism. Colour is taken as a case study. Although the paper takes no stance on which position is to be accepted as final truth with respect to human categorisation and naming, it points to theoretical constraints that make each position more or less likely and contains clear suggestions on what the best engineering solution would be. Specifically, it argues that the collective choice of a shared repertoire must integrate multiple constraints, including constraints coming from communication. KEYWORDS: Autonomous agents, symbol grounding, colour categorisation, colour naming, genetic evolution, connectionism, memes, cultural evolution, self-organisation, origins of language, semiotic dynamics FULL TEXT: http://www.bbsonline.org/Preprints/Steels-09262002/Referees/ ======================================================================= ======================================================================= *** SUPPLEMENTARY ANNOUNCEMENT *** (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, which is why you received this email. If you do not wish to receive further BBS Calls please email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: processes are in a closed loop, where the input to the system depends on its output. In contrast, most in vitro preparations are not. Thanks to recent advances in real-time computing, we can artificially close the loop and stimulate the system according to its current state. Such a closed-loop approach blurs the border between experiments and simulations, and it allows us to peek into the inner workings of the brain that are not accessible by any other means. This symposium considers neuronal systems ranging from single cells, to small circuits, to the whole organism. 
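To make the closed-loop idea concrete: in the "dynamic clamp" approach mentioned below, the injected current is recomputed at every time step from the membrane potential that has just been measured, so the stimulus always depends on the system's current state. A minimal sketch, with a simulated leaky integrate-and-fire cell standing in for the real neuron and all parameters invented purely for illustration:

import numpy as np  # only used here to keep the example in a familiar numerical idiom

dt = 0.1                                # time step in ms
C, g_leak, E_leak = 1.0, 0.1, -70.0     # toy membrane parameters
E_syn, g_syn = 0.0, 0.05                # artificial excitatory conductance
V, spikes = -70.0, []

for step in range(5000):
    t = step * dt
    # "Measure" the membrane potential (here: read it from the simulated cell).
    V_measured = V
    # Closed loop: the injected current depends on the state just measured.
    I_inject = g_syn * (E_syn - V_measured)        # dynamic-clamp style artificial synapse
    # Update the simulated cell (leaky integrate-and-fire dynamics).
    dV = (g_leak * (E_leak - V) + I_inject) / C
    V += dt * dV
    if V > -50.0:                                  # threshold crossing -> spike, then reset
        spikes.append(t)
        V = E_leak

print(f"{len(spikes)} spikes in 500 ms of simulated closed-loop stimulation")

In an open-loop experiment the injected current would instead be a fixed, pre-computed waveform, independent of V_measured; the difference is exactly the point made above.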
The symposium emphasizes the "dynamic clamp" approach to study the role of ion channels in orchestrating behavior, and extends this closed-loop concept to networks, neural prostheses and therapeutic interventions.

Invited Speakers:
Eve E. Marder, Brandeis University, How Good is Good Enough? Using the Dynamic Clamp to Understand Parameter Regulation in Network Function
Robert Butera, Georgia Institute of Technology, Dynamic Clamp: Technological Implementations and Algorithmic Development
Gwendal le Masson, INSERM, Paris, Biological-Artificial Interactions: Evolution of Techniques and Emerging Concepts in Network Neurosciences
Farzan Nadim, Rutgers University, Synaptic Depression Mediates Bistability in Neuronal Networks with Feedback Inhibition
Alex Reyes, New York University, Controlling the Spread of Synchrony with Inhibition
Shimon Marom, Israel Institute of Technology (Technion), Haifa, Learning in Networks of Cortical Neurons
Yang Dan, University of California, Berkeley, Timing-Dependent Plasticity in Visual Cortex
Moshe Abeles, Bar Ilan University, Ramat Gan, Israel, Spatial and Temporal Organization of Activity in Motor Cortex
Rafael Yuste, Columbia University, Imaging the Spontaneous and Evoked Dynamics of the Cortical Microcircuit
Theodore W. Berger, University of Southern California, Nonlinear Dynamic Models of Neural Systems as the Basis for Neural Prostheses: Application to Hippocampus
Michael Dickinson, California Institute of Technology, The Organization of Visual Motion Reflexes in Flies and their Role in Flight Control
Andrew Schwartz, University of Pittsburgh, Useful Signals from Motor Cortex
Peter A. Tass, Institute of Medicine, Jülich, and University of Cologne, Germany, Model-Based Development of Desynchronizing Deep Brain Stimulation

Keynote Address:
Mayada Akil, National Institute of Mental Health, Putting it All Together: Schizophrenia, from Phenotype to Genotype and Back

Poster sessions will be held during both days of the meeting. The program agenda may be accessed via the NIMH website located at: http://www.nimh.nih.gov/scientificmeetings/dynamics2004.cfm For further information, registration and other logistics, contact Matt Burdetsky at Capital Meeting Planning, Inc., 6521 Arlington Blvd., Suite 505, Falls Church, VA 22042 (703) 536-4993; Fax: (703) 536-4991; E-mail: matt at cmpinc.net

From Hualou.Liang at uth.tmc.edu Tue Jun 6 06:52:25 2006 From: Hualou.Liang at uth.tmc.edu (Hualou Liang) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: COMPUTATIONAL COGNITIVE NEUROSCIENCE POSTDOCTORAL POSITION AVAILABLE University of Texas Health Science Center at Houston Applications are invited for a postdoctoral position currently open in the group of Dr. Hualou Liang (http://www.sahs.uth.tmc.edu/hliang/) at the University of Texas Health Science Center at Houston to participate in an ongoing research project studying the cortical dynamics of visual selective attention. The project involves the application of modern signal processing techniques to multielectrode neuronal recordings. The ideal candidate should have, or be about to receive, a Ph.D. in a relevant discipline with substantial mathematical/computational experience (especially in signal processing, time series analysis, dynamical systems, multivariate statistics). Programming skills in C and Matlab are essential. Experience in neuroscience is advantageous but not required. Interested individuals should email a curriculum vitae, a brief statement of research interests and the names of three references to Dr.
Hualou Liang at Hualou.liang at uth.tmc.edu PS: I will be available at the SFN meeting in San Diego. Potential candidates are welcome to discuss this position at the meeting.

From R.Roy at Cranfield.ac.uk Tue Jun 6 06:52:25 2006 From: R.Roy at Cranfield.ac.uk (Roy, Rajkumar) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Development of a Soft Computing Approach to Predict Roll Life in Long Product Rolling Industry CASE Studentship at Cranfield University Proposed by: Dr. Rajkumar Roy, Cranfield University Industrial Sponsor: Corus UK November 2004 - October 2007

Outline of the Project: Rolls are estimated to contribute about 5-15% of overall production costs in long product rolling. Roll life in long product rolling is dependent on the rate of wear of the rolls. Any prediction of roll life will require an understanding of the roll wear mechanisms and a model for the wear. Even after many years of research, scientists and engineers are still working on developing such a model. On the other hand, expert operators on the shop floor often take corrective actions to improve roll life. Through experience they have developed a mental model of the roll wear behaviour and therefore of the roll life. In the absence of a quantitative model for roll wear and roll life prediction, it is proposed that this research will develop an approach utilizing Soft Computing techniques (Neural Networks and Fuzzy Logic) to predict roll life for long product rolling. Soft Computing techniques have proven successful in many domains at modelling a complex environment using empirical data and human expertise. It is expected that the research will utilize historical data available within the industry to establish any relationship between certain key roll, component and production variables (quantitative) and the actual life of the roll. Neural networks and statistical approaches can be used at this stage of the research. In parallel, the research will investigate how expert operators adjust machine and roll parameters to improve roll life. This will involve an extensive knowledge capture exercise. It is expected that a fuzzy logic based representation will allow the knowledge to be made explicit. The fuzzy model will incorporate the qualitative variables involved in roll life prediction. The third phase of the research will focus on integrating the quantitative and qualitative models to develop a complete model for roll life prediction.

EPSRC is expected to pay tuition fees to Cranfield. The student would receive around 11K pounds sterling tax-free per annum for the three years. Interested graduate/postgraduate students with a manufacturing/mechanical engineering background are invited to submit their CV for an informal discussion over telephone or email. Additional background in knowledge capture and Soft Computing will be beneficial. The minimum academic requirement for entrants to the degree is an upper second class honours degree or its equivalent. Please note that the funding is restricted to British Nationals; in special cases it may be offered to an EC national. Please respond by 30th Nov. 2004. For informal enquiries and applications (detailed CV), please contact Dr. Rajkumar Roy at your earliest convenience: Dr. Rajkumar Roy, Senior Lecturer and Course Director, IT for Product Realisation, Department of Enterprise Integration, School of Industrial and Manufacturing Science, Cranfield University, Cranfield, Bedford, MK43 0AL, United Kingdom.
Tel: +44 (0)1234 754072 or +44 (0)1234 750111 Ext. 2423 Fax: +44 (0)1234 750852 Email: r.roy at cranfield.ac.uk or r.roy at ieee.org URL: http://www.cranfield.ac.uk/sims/staff/royr.htm http://www.cranfield.ac.uk/sims/cim/people/roy.htm

From bogus@does.not.exist.com Tue Jun 6 06:52:25 2006 From: bogus@does.not.exist.com () Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Large-Scale Dynamics of the Visual Cortex", will be given by Prof. Michael Shelley (Courant Institute, New York), in Barcelona, within the Ph.D. Program of Applied Mathematics of the Technical University of Catalonia. This program has a so-called "Quality Mention", so Ph.D. students can apply for grants from the Spanish Ministry of Education and Science. http://wwwn.mec.es/univ/jsp/plantillaAncho.jsp?id=26 To register for the courses, please contact Mrs. Carme Capdevila at the PhD office of the Faculty of Mathematics and Statistics of the UPC at Carmec at fme.upc.edu or at the phone number: + 34 93 401 58 61. This course is also part of a research program of the Centre de Recerca Matemàtica, http://www.crm.es/CONTROL2005, which can offer a reduced number of accommodation grants for Ph.D. students interested in the course. Please fill in the application form for lodging. The deadline for sending applications is December 15. http://www.crm.es/CONTROL2005/ControlFinancial_form.htm We will be grateful if you can spread this announcement. Sincerely, Amadeu Delshams and Toni Guillamon, Department of Applied Mathematics I, Technical University of Catalonia, UPC, Barcelona

From M.Denham at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Postdoctoral Fellowship The School of Psychology and the Centre for Theoretical and Computational Neuroscience have jointly been awarded a 5-Year Academic Fellowship from Research Councils UK. This fellowship scheme is designed to provide a route into an academic career for researchers with outstanding potential. At the end of the fellowship period, the University will offer a permanent academic post to the Fellow, subject to successful completion of standard academic probation within the five years of the fellowship. You should be a postdoctoral researcher of high quality, able to take an active role in research projects using behavioural experimentation, computational modelling and EEG/ERP techniques to investigate cognition. For informal enquiries in the first instance, please contact Professor Mike Denham on +44 (0)1752 232547 or email mike.denham at plym.ac.uk, although applications must be made in accordance with the details shown below. Professor Mike Denham Centre for Theoretical and Computational Neuroscience Room A223 Portland Square University of Plymouth Drake Circus Plymouth PL4 8AA UK tel: +44 (0)1752 232547/233359 fax: +44 (0)1752 233349 email: mdenham at plym.ac.uk

From cmbishop at microsoft.com Tue Jun 6 06:52:25 2006 From: cmbishop at microsoft.com (Christopher Bishop) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: Each year the Microsoft Research lab in Cambridge, U.K. offers around 40 PhD internships, typically of 12 weeks' duration, covering research areas of interest to the lab including machine learning, computer vision and information retrieval. These internships are aimed at PhD students who have completed at least a year (preferably two or three) of their PhD studies.
Competition for places is strong, so we have set a deadline of 28 February for receipt applications (including references) for internships in 2005. Detailed information about the internships, as well as information on the applications procedure, is available at: http://www.research.microsoft.com/aboutmsr/jobs/internships/cambridge.aspx Chris Bishop Professor Christopher M. Bishop FREng Assistant Director Microsoft Research Ltd 7 J J Thomson Avenue Cambridge CB3 0FB, U.K. Tel. +44 (0)1223 479 783 Fax: +44 (0)1223 479 999 cmbishop at microsoft.com http://research.microsoft.com/~cmbishop From altun at tti-c.org Tue Jun 6 06:52:25 2006 From: altun at tti-c.org (Yasemin Altun) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Machine Learning Summer School at TTI-C/UC Message-ID: From calls at bbsonline.org Tue Jun 6 06:52:25 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: van der Velde & de Kamps/Neural blackboard architectures: BBS Call for Commentators Message-ID: The Online Commentary Proposal System is currently unavailable due to technical difficulties. Until the Online Commentary Proposal System is reactivated, please send all commentary proposals (with relevant expertise) and commentator suggestions to calls at bbsonline.org. --------------------------------------------------------------------------------- Below the proposal instructions please find the abstract, keywords, and a link to the full text of the forthcoming BBS target article: "Neural blackboard architectures of combinatorial structures in cognition" Frank van der Velde and Marc de Kamps http://www.bbsonline.org/Preprints/VanderVelde-11132003/Referees/ This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or suggested by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please reply by EMAIL by March 24, 2005. calls at bbsonline.org The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to suggest someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. An electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) 
======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. Please reply by EMAIL to by March 24, 2005 ======================================================================= *** TARGET ARTICLE INFORMATION *** ======================================================================= TITLE: Neural blackboard architectures of combinatorial structures in cognition AUTHORS: Frank van der Velde and Marc de Kamps ABSTRACT: Human cognition is unique in the way in which it relies on combinatorial (or compositional) structures. Language provides ample evidence for the existence of combinatorial structures, but they can also be found in visual cognition. To understand the neural basis of human cognition, it is therefore essential to understand how combinatorial structures can be instantiated in neural terms. In his recent book on the foundations of language, Jackendoff described four fundamental problems for a neural instantiation of combinatorial structures: the massiveness of the binding problem, the problem of 2, the problem of variables and the transformation of combinatorial structures from working memory to long-term memory. This paper aims to show that these problems can be solved by means of neural 'blackboard' architectures. For this purpose, a neural blackboard architecture for sentence structure is presented. In this architecture, neural structures that encode for words are temporarily bound in a manner that preserves the structure of the sentence. It is shown that the architecture solves the four problems presented by Jackendoff. The ability of the architecture to instantiate sentence structures is illustrated with examples of sentence complexity observed in human language performance. Similarities exist between the architecture for sentence structure and blackboard architectures for combinatorial structures in visual cognition, derived from the structure of the visual cortex. These architectures are briefly discussed, together with an example of a combinatorial structure in which the blackboard architectures for language and vision are combined. In this way, the architecture for language is grounded in perception. Perspectives and potential developments of the architectures are discussed. KEYWORDS: Binding, blackboard architectures, combinatorial structure, compositionality, language, dynamic system, neurocognition, sentence complexity, sentence structure, working memory, variables, vision FULL TEXT: http://www.bbsonline.org/Preprints/VanderVelde-11132003/Referees/ ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. 
BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!). *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Barbara Finlay - Editor Paul Bloom - Editor Behavioral and Brain Sciences bbs at bbsonline.org http://www.bbsonline.org ------------------------------------------------------------------- From M.Denham at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: University of Plymouth Centre for Theoretical and Computational Neuroscience Postdoctoral Research Fellow Five Year Fixed Term Appointment (salary range £23,643 - £29,479 pa) Applications are invited for the post of Postdoctoral Research Fellow in the Centre for Theoretical and Computational Neuroscience at the University of Plymouth, UK. The post has been made available through the award of a major new £1.8M five-year research project funded by the UK Engineering and Physical Sciences Research Council entitled: "A Novel Computing Architecture for Cognitive Systems based on the Laminar Microcircuitry of the Neocortex". Collaborators on the project include Manchester University (Stefano Panzeri, Piotr Dudek, Steve Furber), University College London (Michael Hausser, Arnd Roth), Edinburgh University (Mark van Rossum, David Willshaw), Oxford University (Jan Schnupp), and London University School of Pharmacy (Alex Thomson), plus a number of leading European research groups. Applicants for the post must have a PhD in a relevant subject area and possess an expert knowledge of the field of theoretical and computational neuroscience, or of a closely related area with some knowledge of theoretical and computational neuroscience. They must be able to provide evidence of an ability to conduct high quality research in this or a closely related research area, eg a strong publication record and peer recognition. Evidence of the ability to work successfully in collaboration with other research groups on joint projects would be particularly advantageous. The work of the Research Fellow will be specifically concerned with the staged development of the cortical microcircuit model, in collaboration with all the participants in the project, on a large scale Linux cluster based simulation facility. Maintaining a close level of collaboration will involve the Research Fellow in spending short periods of time in the laboratories of the collaborators, both in the UK and in Europe.
The Research Fellow will also conduct research into methods for combining different levels/scales of model description which emerge as the project progresses, in order to build an integrated cortical microcircuit model. These tasks will require a postdoctoral researcher with extensive research experience in neurobiological modelling and the knowledge of theoretical and computational neuroscience at the neurobiological level, ie detailed modelling of neurons and neural circuitry, necessary to maintain an in-depth understanding of the activities in all of the research areas of the project. The post is available from 1st June 2005 and an appointment will be made as soon as possible after this date. The appointment will be for a fixed term of five years, and will be subject to a probationary period of twelve months. Informal enquiries should be made in the first instance to Professor Mike Denham, Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK; tel: +44 (0)1752 232547; email: mdenham at plym.ac.uk, from whom further details are available. From calls at bbsonline.org Tue Jun 6 06:52:25 2006 From: calls at bbsonline.org (Behavioral & Brain Sciences) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: Striedter/Principles of Brain Evolution: BBS Multiple Book Review Message-ID: The Online Commentary Proposal System is currently unavailable due to technical difficulties. Until the Online Commentary Proposal System is reactivated, please send all commentary proposals (with relevant expertise) and commentator suggestions to calls at bbsonline.org. ======================================================================= BBS MULTIPLE BOOK REVIEW - CALL FOR COMMENTATORS ======================================================================= Below is a link to the forthcoming precis of a book accepted for Multiple Book Review in Behavioral and Brain Sciences (BBS). PRECIS OF: Principles of Brain Evolution AUTHOR: Georg F. Striedter Behavioral and Brain Sciences (BBS), is an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Please note that it is the *BOOK*, not the precis, that is to be reviewed. Reviewers must be BBS Associates or nominated by a BBS Associate. To be considered as a reviewer for this book, to suggest other appropriate reviewers, or for information about how to become a BBS Associate, please send an EMAIL to calls at bbsonline.org by March 25, 2005. The Calls are sent to 10,000 BBS Associates, so there is no expectation (indeed, it would be calamitous) that each recipient should comment on every occasion! Hence there is no need to reply except if you wish to comment, or to nominate someone to comment. If you are not a BBS Associate, please approach a current BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work to nominate you. All past BBS authors, referees and commentators are eligible to become BBS Associates. A full electronic list of current BBS Associates is available at this location to help you select a name: http://www.bbsonline.org/Instructions/assoclist.html If no current BBS Associate knows your work, please send us your Curriculum Vitae and BBS will circulate it to appropriate Associates to ask whether they would be prepared to nominate you. (In the meantime, your name, address and email address will be entered into our database as an unaffiliated investigator.) 
To help you decide whether you would be an appropriate reviewer for this book, an electronic draft of the precis (only) is retrievable at the URL that follows the abstract below. ======================================================================= COMMENTARY PROPOSAL INSTRUCTIONS ======================================================================= To help us put together a balanced list of commentators, it would be most helpful if you would send us an indication of the relevant expertise you would bring to bear on the paper, and what aspect of the paper you would anticipate commenting upon. Please DO NOT prepare a commentary until you receive a formal invitation, indicating that it was possible to include your name on the final list, which is constructed so as to balance areas of expertise and frequency of prior commentaries in BBS. Please reply by EMAIL to calls at bbsonline.org by March 25, 2005 ======================================================================= *** BOOK PRECIS INFORMATION *** ======================================================================= PRECIS OF: Principles of Brain Evolution AUTHOR: Georg F. Striedter ABSTRACT: Brain evolution is a complex weave of species similarities and differences, bound by diverse rules and principles. This book is a detailed examination of these principles, using data from a wide array of vertebrates but minimizing technical details and terminology. It is written for advanced undergraduates, graduate students, and more senior scientists who already know something about the brain, but want a deeper understanding of how diverse brains evolved. The book's central theme is that evolutionary changes in absolute brain size tend to correlate with many other aspects of brain structure and function, including the proportional size of individual brain regions, their complexity, and their neuronal connections. To explain these correlations, the book delves into rules of brain development and asks how changes in brain structure impact function and behavior. Two chapters focus specifically on how mammal brains diverged from other brains and how Homo sapiens evolved a very large and special brain. KEYWORDS: Neocortex, Development, Homology, Parcellation, Mammal, Primate, Lamination, Cladistics, Hippocampus, Basal Ganglia, Neuromere PRECIS TEXT: http://www.bbsonline.org/Preprints/Striedter-01132005/Referees ======================================================================= SUPPLEMENTARY ANNOUNCEMENT ======================================================================= (1) Call for Book Nominations for BBS Multiple Book Review In the past, Behavioral and Brain Sciences (BBS) had only been able to do 1-2 BBS multiple book treatments per year, because of our limited annual page quota. BBS's new expanded page quota will make it possible for us to increase the number of books we treat per year, so this is an excellent time for BBS Associates and biobehavioral/cognitive scientists in general to nominate books you would like to see accorded BBS multiple book review. (Authors may self-nominate, but books can only be selected on the basis of multiple nominations.) It would be very helpful if you indicated in what way a BBS Multiple Book Review of the book(s) you nominate would be useful to the field (and of course a rich list of potential reviewers would be the best evidence of its potential impact!).
*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Please note: Your email address has been added to our user database for Calls for Commentators, the reason you received this email. If you do not wish to receive further Calls, please feel free to change your mailshot status through your User Login link on the BBSPrints homepage, using your username and password. Or, email a response with the word "remove" in the subject line. *-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-*-* Ralph BBS ------------------------------------------------------------------- Ralph DeMarco Editorial Coordinator Behavioral and Brain Sciences Journals Department Cambridge University Press 40 West 20th Street New York, NY 10011-4211 UNITED STATES bbs at bbsonline.org http://www.bbsonline.org Tel: +001 212 924 3900 ext.374 Fax: +001 212 645 5960 -------------------------------------------------------------------
From sml at essex.ac.uk Tue Jun 6 06:52:25 2006 From: sml at essex.ac.uk (Lucas, Simon M) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: IEEE Symposium on Computational Intelligence and Games April 4-6, 2005 Essex University, Colchester, Essex http://cigames.org Call for Participation

Keynote Speakers
================
- Jordan Pollack: Is progress possible?
- Martin Mueller: Challenges in computer Go
- Risto Miikkulainen: Creating Intelligent Agents through Neuroevolution
- Jaap van den Herik: Opponent Modelling and Commercial Games

Tutorials (Sunday 3rd April, afternoon)
=========
- Andries P. Engelbrecht: Particle Swarm Optimisation for Learning Game Strategies
- Evan J. Hughes: Coevolving game strategies: How to win and how to lose
- Thomas P. Runarsson: Temporal Difference Learning for Game Strategy Acquisition

28 Oral Papers (single stream) 10 Poster Papers More details at http://cigames.org

Demonstrations
==============
We encourage delegates to bring demonstrations (e.g. on laptops) of their latest research in this area, to be exhibited in the poster session. -------------------------------------------------- Dr. Simon Lucas Department of Computer Science University of Essex Colchester CO4 3SQ United Kingdom Email: sml at essex.ac.uk http://cswww.essex.ac.uk/staff/lucas/lucas.htm --------------------------------------------------

From S.Denham at plymouth.ac.uk Tue Jun 6 06:52:25 2006 From: S.Denham at plymouth.ac.uk (Sue Denham) Date: Tue, 06 Jun 2006 10:52:25 -0000 Subject: No subject Message-ID: The Centre for Theoretical and Computational Neuroscience and the Computer Music Research Group, University of Plymouth, UK, are looking for highly qualified candidates for 2 Post-Doctoral and 2 Research Assistant positions to work on a 3-year research project in the field of Computational Neuroscience & Music Cognition, entitled Emergent Cognition through Active Perception.
The project is funded by the Sixth Framework Programme of the European Union and involves a consortium led by Dr Sue Denham, Prof Mike Denham and Dr Eduardo Miranda (University of Plymouth), in collaboration with Dr Henkjan Honing (University of Amsterdam, Institute for Logic, Language and Computation), Prof István Winkler (Institute for Psychology, Hungarian Academy of Sciences), and Prof Gustavo Deco and Prof Xavier Serra (University Pompeu Fabra, Barcelona, Music Technology Group & Computational Neuroscience Group). The goal of the project is to investigate how complex cognitive behaviour in artificial systems can emerge through interacting with an environment, and how, by becoming sensitive to the properties of the environment, such systems can develop effective representations and processing structures autonomously. Music is an ideal domain in which to investigate complex cognitive behaviour, since music, like language, is a universal phenomenon containing complex abstractions and temporally extended structures, whose organisation is constrained by underlying rules or conventions that participants need to understand for effective cognition and interaction. We will investigate the development of music cognition by combining the complementary approaches of perceptual experiments using human subjects, functional and neurocomputational modelling, and the implementation of an interactive embodied cognitive system. Provisional project start date: 1 October 2005 More details are available here: http://neuromusic.soc.plymouth.ac.uk/EmCAP.html Alternatively contact: Sue Denham s.denham at plymouth.ac.uk or Eduardo R Miranda eduardo.miranda at plymouth.ac.uk

From d.mandic at imperial.ac.uk Fri Jun 2 05:36:27 2006 From: d.mandic at imperial.ac.uk (Danilo P. Mandic) Date: Fri, 2 Jun 2006 10:36:27 +0100 Subject: Connectionists: Postdoctoral position in Nonlinear Multidimensional Signal Processing at Imperial College Message-ID: <006401c68628$17383050$0a1efea9@MandicLaptop> Dear Connectionists, may I draw your attention to the opening for a postdoc working on the modelling of real-world signals by multidimensional Recurrent Neural Networks. More details can be found at http://www.commsp.ee.ic.ac.uk/~mandic The deadline is quite close; sorry for the short notice. Danilo ======================================== Dr Danilo P. Mandic Reader in Signal Processing Department of Electrical and Electronic Engineering Imperial College London Exhibition Road, SW7 2BT London Phone: +44 (0)207 594 6271 Fax: +44 (0)207 594 6234 E-mail: d.mandic at imperial.ac.uk www.commsp.ee.ic.ac.uk/~mandic

From nicolas.brunel at univ-paris5.fr Fri Jun 2 08:02:18 2006 From: nicolas.brunel at univ-paris5.fr (Nicolas Brunel) Date: Fri, 02 Jun 2006 14:02:18 +0200 Subject: Connectionists: Postdoctoral position in optical microscopy for neuroscience Message-ID: <448028CA.5030904@univ-paris5.fr> Postdoctoral position in optical microscopy for neuroscience Ecole Normale Supérieure, Paris, France A postdoctoral position is available in October 2006 at the Laboratory of Molecular and Cellular Neurobiology of the Ecole Normale Supérieure, Paris, France, to combine two-photon microscopy and adaptive optics to improve the depth of imaging in scattering samples such as brain tissues. The project involves the design of a two-photon microscope including correction of wavefront distortions due to the large-scale structures in brain tissues. Adaptive corrections based on wavefront measurement and on a genetic algorithm will be used.
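As a rough, purely illustrative sketch of the kind of genetic-algorithm wavefront optimisation just mentioned (this is not the lab's implementation; the number of actuators, population size and the image-quality metric below are all hypothetical), one can evolve a vector of deformable-mirror commands so as to maximise a measured image-brightness score:

import numpy as np

# Schematic genetic algorithm for wavefront correction (illustrative only).
# Individuals are vectors of mirror-actuator commands; the fitness function
# is a stand-in for the measured two-photon image brightness.

rng = np.random.default_rng(0)
N_ACTUATORS = 12          # assumed number of deformable-mirror elements
POP_SIZE, N_GEN = 30, 50

def image_brightness(coeffs):
    # Placeholder metric: a real setup would acquire an image with the
    # mirror set to `coeffs` and return its total intensity.
    target = np.linspace(-1.0, 1.0, N_ACTUATORS)   # unknown optimal shape
    return -np.sum((coeffs - target) ** 2)

pop = rng.uniform(-1, 1, size=(POP_SIZE, N_ACTUATORS))
for gen in range(N_GEN):
    fitness = np.array([image_brightness(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[::-1][:POP_SIZE // 2]]   # truncation selection
    children = []
    while len(children) < POP_SIZE - len(parents):
        a, b = parents[rng.integers(len(parents), size=2)]
        mask = rng.random(N_ACTUATORS) < 0.5                   # uniform crossover
        children.append(np.where(mask, a, b) + rng.normal(0, 0.05, N_ACTUATORS))
    pop = np.vstack([parents, np.array(children)])

best = pop[np.argmax([image_brightness(ind) for ind in pop])]
print("best actuator settings:", np.round(best, 2))

In the real instrument, evaluating each candidate requires acquiring an image, so such search-based corrections are usually combined with direct wavefront measurement, as the announcement indicates.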
This project aims at improving the depth of imaging in vivo, and this optical set-up will be used to analyze, in young rats, multiple single-unit activity in layer IV of the barrel cortex under controlled stimulation of the corresponding principal whisker. We are looking for an applicant with a background in optics and microscopy. Experience with laser optics, two-photon microscopy and adaptive optics is desirable. Please send your CV, a cover letter describing your research interests, and the names and e-mail addresses of 2 references to: Laurent Bourdieu, Ecole Normale Supérieure, Département de Biologie, Laboratoire de Neurobiologie Moléculaire et Cellulaire, UMR CNRS 8544, 46 rue d'Ulm, 75005 Paris. Web site: http://www.biologie.ens.fr/neuroctx/. Email: laurent.bourdieu at ens.fr

From nicolas.brunel at univ-paris5.fr Fri Jun 2 08:03:26 2006 From: nicolas.brunel at univ-paris5.fr (Nicolas Brunel) Date: Fri, 02 Jun 2006 14:03:26 +0200 Subject: Connectionists: Postdoctoral position in systems neuroscience Message-ID: <4480290E.7030108@univ-paris5.fr> Postdoctoral position in systems neuroscience Ecole Normale Supérieure, Paris, France A postdoctoral position is available in October 2006 at the Laboratory of Molecular and Cellular Neurobiology of the Ecole Normale Supérieure, Paris, France, to study the coding mechanisms underlying auditory-somatosensory associations in the rat cortex. The project involves behavioral training of rodents and recording of multiple single units, using either two-photon microscopy or multi-electrodes. It aims at investigating neuronal interactions during multimodal associations in the rat cerebral cortex. We are looking for an applicant with a background in integrative neuroscience. Experience with either two-photon microscopy or multi-electrode recordings and with quantitative spike-train data analysis is desirable. Please send your CV, a cover letter describing your research interests, and the names and e-mail addresses of 2 references to: Laurent Bourdieu, Ecole Normale Supérieure, Département de Biologie, Laboratoire de Neurobiologie Moléculaire et Cellulaire, UMR CNRS 8544, 46 rue d'Ulm, 75005 Paris. Web site: http://www.biologie.ens.fr/neuroctx/. Email: laurent.bourdieu at ens.fr.

From saighi at ixl.fr Fri Jun 2 09:40:39 2006 From: saighi at ixl.fr (Sylvain Saighi) Date: Fri, 02 Jun 2006 15:40:39 +0200 Subject: Connectionists: PhD position in the field of Engineering of Neuromorphic Systems Message-ID: <44803FD7.40603@ixl.fr> PhD position in the field of Engineering of Neuromorphic Systems IXL - CNRS, Bordeaux, France In the framework of the European Union's Marie Curie network for human resources and mobility activity, a new project "NeuroVers-IT" investigating Neuro-Cognitive Science and Information Technology has been set up. The project aims at collaborative, highly multidisciplinary research between 11 well-known European research institutions in the areas of neuro-/cognitive sciences/biophysics and robotics/information technologies/mathematics. For this project, the IXL Microelectronics Laboratory in Bordeaux, France, is looking for an Early-Stage Researcher (holding a Master's degree entitling him/her to pursue a PhD degree, beginning September 2006). The ideal candidate should have a university degree in electrical engineering. An expertise in mixed-analog IC design and VHDL language, a good knowledge of spoken and written English and a strong interest in computational neuroscience are required.
The project concerns the development of VLSI circuits and electronic systems of high biological relevance that emulate, in real time, multi-conductance neurons and neural networks with adaptive properties. The work will be conducted at the IXL Microelectronics laboratory (www.ixl.fr), a CNRS institution associated with ENSEIRB-Université Bordeaux 1. IXL is located on the Bordeaux Science campus. The PhD student will join the research group "Engineering of Neuromorphic Systems". To be eligible, the candidate shall not be a French citizen and shall not have resided in France for more than 12 months in the last 3 years. For information about this position you may contact Prof. Sylvie RENAUD, IXL Laboratory, CNRS, ENSEIRB, Université Bordeaux 1, email: renaud at ixl.fr, Tel: +33 540 002 796. Please send your written application including your CV and other relevant material

From hommel at fsw.leidenuniv.nl Sat Jun 3 12:18:12 2006 From: hommel at fsw.leidenuniv.nl (Bernhard Hommel) Date: Sat, 03 Jun 2006 18:18:12 +0200 Subject: Connectionists: Postdoc position Message-ID: <00bb01c68729$4f851b10$4501a8c0@bhome> The Cognitive Psychology Section, Department of Psychology at Leiden University, and the Leiden Institute for Brain and Cognition invite applications for a Postdoc position. The position is embedded in a large-scale four-year project (PACO+, http://www.paco-plus.org/) funded by the European Union and carried out in cooperation with 7 partner labs in Europe (DE, DK, ES, SE, SI, UK). The project aims at developing a neurophysiologically inspired computational model of the acquisition and representation of object-action complexes (OACs, event files), and the subsequent use of such OACs for the selection and control of contextually adapted actions in a behaving humanoid robot (Armar III). A basic tenet of the project is that objects and actions are inseparably intertwined, and that OACs can only emerge through interaction with the environment. Behavioral, anatomical, and physiological knowledge about the human brain will be exploited to build a restricted but biologically plausible cognitive system that will be implemented in real robotic systems. Experimental studies will be conducted to evaluate the models by comparing human and robot behavior. The research will be carried out at the Department of Psychology, Leiden University, and supervised by Prof. Dr. Bernhard Hommel. The project group also comprises two PhD students who have just started. Leiden University is the oldest and most prestigious university in the Netherlands and the group of Dr. Hommel has an excellent international reputation in the area of attention and action control. Leiden is a beautiful historical university city in the vicinity of Den Haag, Amsterdam and Schiphol airport and only 5 km from the North Sea beach. A PhD in cognitive psychology, neuroscience, cognitive neuroscience, computer science, or another related discipline is a prerequisite. Experience in programming (e.g., C++, Matlab) and neural modeling is also required. General knowledge in the areas of visual perception and action selection, visual attention, cognition, or cognitive neuroscience is recommended. The position is for four years, starting as soon as possible. There is no deadline; applications will be continuously evaluated on a first come, first served basis. Please send applications and CVs to Prof. Dr. Bernhard Hommel (hommel at fsw.leidenuniv.nl).
=================================== Bernhard Hommel Leiden University Department of Psychology Cognitive Psychology Unit & Leiden Institute for Brain and Cognition Wassenaarseweg 52, Room 2B05 P.O. Box 9555 2300 RB Leiden, The Netherlands Phone: +(0)629023062 http://home.planet.nl/~homme247/bh.htm

From P.D.Moerland at amc.uva.nl Mon Jun 5 17:11:43 2006 From: P.D.Moerland at amc.uva.nl (P.D. Moerland) Date: Mon, 05 Jun 2006 23:11:43 +0200 Subject: Connectionists: Job vacancy: PhD student reconstruction of biological networks Message-ID: <8a0c9489cae8.89cae88a0c94@amc.uva.nl> The Bioinformatics Lab of the Academic Medical Center (AMC) in Amsterdam (http://www.amc.nl) seeks qualified applicants for a PhD position in a project on the identification of missing genes in metabolic pathways and the reconstruction of eukaryotic biological pathways. The candidate will develop machine learning methods for the annotation and reconstruction of biological networks using different types of experimental data. Such an integrative machine learning approach will be pursued for the development of ranking methods to identify missing genes in metabolic pathways. Furthermore, the candidate will develop kernel-based methods for the reconstruction of biological networks. The PhD position is part of the BioRange (funding from the Dutch government) project "Understanding and reconstruction of eukaryotic biological pathways from microarray data", which is a collaboration between the Bioinformatics Laboratory (AMC), the department of Human Genetics (AMC), the BiGCat Bioinformatics group (University of Maastricht) and the department of Biomedical Engineering (Eindhoven University of Technology). The candidate is expected to actively engage in this collaboration. This project is tightly linked to ongoing AMC research projects generating various types of genomic data. The ideal candidate should have a Master's degree in computer science, mathematics or physics. (S)he should have a strong background in machine learning, more specifically in kernel methods and semi-supervised learning. Knowledge of bioinformatics and molecular biology, programming skills (C, Java, SQL, UNIX), and experience in a high-level language such as R(/Bioconductor) are an asset. Appointment for a PhD position is for four years. The monthly gross salary ranges from EUR 1,942 (first year) to EUR 2,465 (last year). The starting date is immediate. For more information on the BioRange project, see http://biorange.amc.nl/pathways/ or contact Perry Moerland at p.d.moerland(at)amc.uva.nl. Candidates should send their detailed CV and an application letter to p.d.moerland(at)amc.uva.nl. --- Perry Moerland, PhD Bioinformatics Lab, Room J1B-206 Academic Medical Center, University of Amsterdam Postbus 22660, 1100 DD Amsterdam, The Netherlands tel: +31 20 5664660 p.d.moerland(at)amc.uva.nl http://www.amc.uva.nl/

From psycoco at st-andrews.ac.uk Mon Jun 5 12:47:35 2006 From: psycoco at st-andrews.ac.uk (psycoco) Date: Mon, 05 Jun 2006 17:47:35 +0100 Subject: Connectionists: Lectureship and 2 Academic Fellowships at St Andrews, U.K. Message-ID: <44846027.7020105@st-andrews.ac.uk> The School of Psychology, University of St. Andrews, rated 5*(A) for research and excellent for teaching, supports a research and teaching strategy which ensures that students get excellent instruction from staff who are at the forefront in their field whilst also ensuring that staff have sufficient time to devote to their research activities.
These vacancies offer opportunities for early-career staff to thrive in a vibrant and well-resourced research environment. http://psy.st-andrews.ac.uk/ Lectureship £27,929 - £36,959 pa You will be an active researcher, with evidence of, or the potential to establish, an independent research program in biological psychology. You must have some experience teaching psychology to undergraduate students. This post is for four years from 1 September 2006 (or as soon as possible thereafter). Ref: SL204/06. http://www.st-andrews.ac.uk/hr/recruitment/vacancies/vacancy-list/Vacancy.2006-06-01.4959 Academic Fellowships (2 posts) £20,044 - £36,959 pa Under a new scheme funded by the joint Research Councils, we are offering two five-year research fellowships with a lecturing position to follow, with a start date of 1 October 2006. The fellowships are for research in (1) perception and (2) animal cognition (including behavioural neuroscience). These posts initially carry only limited teaching obligations, though you will play an active part in the academic culture of the School of Psychology. You must already hold a PhD in psychology or a cognate discipline before taking up the position and have evidence of the potential to establish an independent research career. Ref: SL205/06. http://www.st-andrews.ac.uk/hr/recruitment/vacancies/vacancy-list/Vacancy.2006-06-01.5345 Informal enquiries to the Head of School, Professor Verity J Brown (vjb at st-and.ac.uk). Application forms and further particulars are available from Human Resources, University of St Andrews, College Gate, North Street, St Andrews, Fife KY16 9AJ (tel: +44 1334 462571, fax: +44 1334 462570, or e-mail: jobline at st-andrews.ac.uk). The advertisement and further particulars can be viewed at http://www.st-andrews.ac.uk/hr/recruitment/vacancies Please quote the appropriate reference number on all correspondence. Closing date for all posts: 21 June 2006.

From prasad at kitp.ucsb.edu Tue Jun 6 17:26:27 2006 From: prasad at kitp.ucsb.edu (Ila Fiete) Date: Tue, 06 Jun 2006 14:26:27 -0700 Subject: Connectionists: paper on encoding of rat position by triangular lattice neurons Message-ID: Our paper on the encoding of rat position by triangular lattice neurons (grid cells) in the rat brain is now available for download from the q-bio arXiv, at: http://arxiv.org/PS_cache/q-bio/pdf/0606/0606005.pdf The title and abstract are given below. We welcome your comments! Best regards, Ila Fiete Triangular lattice neurons may implement an advanced numeral system to precisely encode rat position over large ranges Authors: Yoram Burak, Ted Brookings, Ila Fiete We argue by observation of the neural data that neurons in area dMEC of rats, which fire whenever the rat is on any vertex of a regular triangular lattice that tiles 2-d space, may be using an advanced numeral system to reversibly encode rat position. We interpret measured dMEC properties within the framework of a residue number system (RNS), and describe how RNS encoding -- which breaks the non-periodic variable of rat position into a set of narrowly distributed periodic variables -- allows a small set of cells to compactly represent and efficiently update rat position with high resolution over a large range.
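To illustrate the residue-number-system idea just described, here is a small one-dimensional sketch in Python. The grid periods, noise level and search range are made-up illustration values, not numbers from the paper (which treats 2-d position and triangular lattices); the point is only that a position stored as its phase within each of a few short periods can still be recovered uniquely over a much larger range.

import numpy as np

PERIODS = np.array([0.30, 0.42, 0.55])   # hypothetical grid periods, in metres

def encode(x):
    # One periodic variable per "grid module": the phase of x within each period.
    return np.mod(x, PERIODS)

def decode(phases, x_max=8.0, step=0.001):
    # Brute-force maximum-consistency decoder over candidate positions,
    # using circular distance between candidate and stored phases.
    candidates = np.arange(0.0, x_max, step)
    diff = np.abs(np.mod(candidates[:, None], PERIODS) - phases)
    err = np.minimum(diff, PERIODS - diff).sum(axis=1)
    return candidates[np.argmin(err)]

x_true = 5.432
noisy_phases = encode(x_true) + np.random.default_rng(1).normal(0, 0.002, 3)
print(decode(noisy_phases))   # recovers ~5.432, although no single period exceeds 0.55 m

With these three illustrative periods the code is unambiguous over roughly 23 m (the least common multiple of the periods), far beyond any individual period.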
We show that the uniquely useful properties of RNS encoding still hold when the encoded and encoding quantities are relaxed to be real numbers with built-in uncertainties, and provide a numerical and functional estimate of the range and resolution of rat positions that can be uniquely encoded in dMEC. The use of a compact, `arithmetic-friendly' numeral system to encode a metric variable, as we propose is happening in dMEC, is qualitatively different from all previously identified examples of coding in the brain. We discuss the numerous neurobiological implications and predictions of our hypothesis.

From erdi at sunserv.kfki.hu Wed Jun 7 12:46:34 2006 From: erdi at sunserv.kfki.hu (Erdi Peter) Date: Wed, 7 Jun 2006 18:46:34 +0200 (MEST) Subject: Connectionists: Job announcement: Postdoctoral and graduate student positions in Budapest Message-ID: Postdoctoral and graduate student positions available at the Computational Neuroscience Group, KFKI, Budapest, Hungary A postdoctoral position and a graduate student position in computational neuroscience are available at the CNS Group Budapest, to study and model spatial navigation strategies. The group works at the KFKI campus in Budapest, Hungary, as part of the Research Institute for Particle and Nuclear Physics, one of the research institutes of the Hungarian Academy of Sciences. Details about the group can be found on the "http://cneuro.rmki.kfki.hu/" homepage. The research will be part of the four-year European research project "Integrating Cognition, Emotion and Autonomy" (ICEA, "http://www.his.se/icea"). The project involves cognitive scientists, neuroscientists, psychologists, computer scientists and roboticists, and aims at developing a cognitive systems architecture integrating cognitive, emotional and autonomic/bioregulatory processes in the control of robotic cognitive systems. These positions will be focused on modelling different spatial orientation strategies and the cooperation/competition between them. The key aim of this work is the functional and realistic modelling of the hippocampus and other brain structures (e.g. entorhinal and prefrontal cortex) to give an efficient navigation algorithm to be used in real robots. The model will be built in close cooperation with behavioral electrophysiologists, and their data will be used to test the biological reliability of the algorithm. Qualified applicants should have a doctoral degree (postdoctoral position applicants) or a B.Sc. (graduate position applicants) in computational neuroscience, computer science or a related field. Applicants for this position will be evaluated according to the following criteria:
* experience in computational neuroscience
* programming skills
* experience in analysis of electrophysiological data
* broad interest in learning related fields
* general ability to participate in teamwork and perform tasks
* administrative and other skills that are relevant to the position
The salary will be commensurate with the Hungarian Academic guidelines. For further information please contact Peter Erdi, group leader, or Zoltan Somogyvari, ICEA project coordinator. Applicants are invited to send their CV, two letters of recommendation, and a brief list of scientific activities and publications to Péter Érdi (erdi at rmki.kfki.hu) and to Zoltán Somogyvari (soma at sunserv.kfki.hu) via email no later than 20 June, 2006.

From jf218 at cam.ac.uk Thu Jun 8 04:31:30 2006 From: jf218 at cam.ac.uk (Prof. J.
Feng) Date: 08 Jun 2006 09:31:30 +0100 Subject: Connectionists: position open Message-ID: Research Councils UK Academic Fellowship (RCUKF) 27,929 - 32,490 pounds pa Fixed-term contract for 5 years, leading to a permanent Lectureship dependent upon successful completion of the probationary period. Following the recent RCUK competition, the University of Warwick has successfully bid for Academic Fellowships in key research areas of strategic importance for the University (a total of eight fellowships). The Fellowships will provide experienced postdoctoral researchers with the opportunity to establish a permanent career path. All Fellows will be offered a permanent academic position at the end of the Fellowship, subject to the successful completion of the University's probationary period. During the period of your Fellowship, you will be expected to apply actively for funding for your research. The University of Warwick has a reputation for excellence and innovation in research. With an emphasis on interdisciplinary approaches and an entrepreneurial culture, Warwick is one of the leading research institutions in the UK. The University was ranked 5th in the last RAE, with 25 out of 26 departments gaining a 5 rating or above. Closing date for applications: 30 June 2006. Applications are now invited for one Academic Fellow in the following area: you will focus on complex phenomena arising from modelling data-rich biological systems and computational neuroscience, closely interacting with colleagues in Mathematics, Physics and Biological Sciences (Neurosciences). Informal Enquiries: Prof. Jianfeng Feng (Jianfeng.Feng at warwick.ac.uk)

From juergen at idsia.ch Thu Jun 8 05:03:28 2006 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Thu, 8 Jun 2006 11:03:28 +0200 Subject: Connectionists: Senior Position at the Swiss AI lab IDSIA Message-ID: We are seeking an outstanding researcher (professor or postdoc) with experience / interest in topics such as: sequence learning, adaptive robotics, universal AI, probabilistic models, recurrent networks, etc. Candidates are expected to build their own little research group by acquiring grants from Swiss or EU funding agencies. (Previous IDSIA Seniors include Marco Dorigo, father of the widely used Artificial Ants (working with Luca Gambardella), and Marcus Hutter, father of the asymptotically fastest algorithm for all well-defined problems.) Salary: Commensurate with experience - roughly SFR 85,000 per year, or US$ 70,000 as of May 2006. Low taxes. Interviews are possible during the AI 50 summit in Ascona (near IDSIA), July 9-14, 2006. Details: http://www.idsia.ch/~juergen/senior2006.html Juergen Schmidhuber TU Munich & IDSIA http://www.idsia.ch/~juergen/whatsnew.html

From oreilly at psych.colorado.edu Wed Jun 7 00:19:05 2006 From: oreilly at psych.colorado.edu (Randall C. O'Reilly) Date: Tue, 6 Jun 2006 22:19:05 -0600 Subject: Connectionists: CCNC Conference Deadline Approaching: June 15 In-Reply-To: <200605191141.43513.Randy.OReilly@colorado.edu> References: <200604142347.38871.Randy.OReilly@colorado.edu> <200605191141.43513.Randy.OReilly@colorado.edu> Message-ID: <200606062219.06440.oreilly@psych.colorado.edu> REMINDER AND UPDATES Updates: * Online registration now open: www.ccnconference.org * Symposia topics selected --- see below * Brain Research has agreed to publish selected papers from this meeting, possibly a special issue.
---------------------------------------------------------------------------- ~ Call for Abstracts ~ 2ND ANNUAL CONFERENCE ON COMPUTATIONAL COGNITIVE NEUROSCIENCE www.ccnconference.org To be held in conjunction with the 2006 PSYCHONOMIC SOCIETY CONFERENCE, November 16-19, 2006 at the Hilton Americas hotel in Houston, TX. CONFERENCE DATES: Wed-Thu November 15 & 16, 2006 The inaugural CCNC 2005 meeting held prior to Society for Neuroscience (SfN) in Washington DC was a great success with approximately 250 attendees 60 presented posters and strongly positive reviews. In future years it will continue to be held on a rotating basis with other meetings such as (tentative list): Cognitive Neuroscience Society (CNS) Organization for Human Brain Mapping (OHBM) Cognitive Science Society (CogSci) Neural Information Processing Systems (NIPS) and Computational and Systems Neuroscience (COSYNE). ___________________________________________________________________________ DEADLINE FOR SUBMISSION OF ABSTRACTS: June 15, 2006 Abstracts to be submitted online via the website: www.ccnconference.org Like last year, there will be two categories of submissions: *Poster only *Poster, plus short talk (15 min) to highlight the poster Abstracts should be no more than 500 words. Women and underrepresented minorities are especially encouraged to apply. Reviewing for posters will be inclusive and only to ensure appropriateness to the meeting. Short talks will be selected on the basis of research quality relevance to conference theme and expected accessibility in a talk format. Abstracts not selected for short talks will still be accepted as posters as long as they meet appropriateness criteria. NOTIFICATION OF ACCEPTANCE: July 15, 2005 The journal Brain Research has agreed to publish selected papers from this meeting as a dedicated section, and possibly special issue of the journal. Presenting authors can elect to have their work considered for this purpose. Final selections will be made by the program committee shortly after the meeting. __________________________________________________________________________ Preliminary Program: __________________________________________________________________________ * 2006 Keynote Speakers (confirmed): Mike Kahana, University of Pennsylvania Mark Seidenberg, University of Wisconsin Madison ___________________________________________________________________________ * 3 Symposia (2 hours each): 1) Face/Object Recognition: Are Faces Special, or Just a Special Case? Computational models of face and object processing Speakers: Gary Cottrell, UCSD (Moderator) Kalanit Grill-Spector, Stanford Alice O'Toole, UT Dallas Maximilian Riesenhuber, Georgetown Description: What can computational models tell us about human visual object processing? We have excellent models that explain how we may recognize objects at multiple scales and orientations, while other models explain why faces may or may not be "special," or simply a special case. The goal of this symposium is to summarize what we understand with some degree of confidence, what is still not understood, and to what degree what we understand meshes with data on human and animal visual processing, including behavioral, fMRI, neurophysiological, and neuropsychological data. 2) Semantics: Development and Brain Organization of Conceptual Knowledge: Computational and Experimental Investigations. 
Speakers: Jay McClelland, Stanford University (Moderator) Linda Smith, Indiana University Tim Rogers, University of Wisconsin Alex Martin, National Institute of Mental Health Description: The symposium is predicated on the assumption that there are links between conceptual structure, experience, conceptual development, and brain organization of conceptual knowledge. ? Jay McClelland will begin with a computational perspective on conceptual development, followed by Linda Smith with an empirical perspective. ? We would then switch to the subject of brain organization of conceptual knowledge, beginning with a computational perspective by Tim Rogers followed by an empirical perspective from Alex Martin. 3) Cognitive Control: Computational and Empirical Investigations Speakers: Mike Mozer, University of Colorado (Moderator) Others not yet confirmed ___________________________________________________________________________ * 12 short talks featuring selected posters * Poster sessions (2) ___________________________________________________________________________ 2006 Planning Committee: Suzanna Becker, McMaster University Jonathan Cohen, Princeton University Yuko Munakata, University of Colorado, Boulder David Noelle, Vanderbilt University Randall O'Reilly, University of Colorado, Boulder Maximilian Riesenhuber, Georgetown University Medical Center Executive Organizer: Thomas Hazy, University of Colorado, Boulder For more information and to sign up for the mailing list visit: www.ccnconference.org ___________________________________________________________________________ From cns at cnsorg.org Sun Jun 11 14:11:49 2006 From: cns at cnsorg.org (CNS) Date: Sun, 11 Jun 2006 11:11:49 -0700 Subject: Connectionists: CNS 2005 Early registration Message-ID: <20060611180953.M59105@cnsorg.org> CNS 2006 will be held from July 16th-20th in Edinburgh, UK. The scientific program for the main meeting and workshops is posted at www.cnsorg.org. Early registration is open until June 15th, after which registration costs will increase. -- CNS - Organization for Computational Neurosciences From tsukada at eng.tamagawa.ac.jp Mon Jun 12 02:23:05 2006 From: tsukada at eng.tamagawa.ac.jp (Tsukada) Date: Mon, 12 Jun 2006 15:23:05 +0900 Subject: Connectionists: To post a message to all the list members Message-ID: <6.2.0.14.2.20060612152031.04fdc698@eng.tamagawa.ac.jp> ===========================The Second CFP for DBF'07====================== The Second Announcement and Call for Papers ????????? 10th Tamagawa-Riken Dynamic Brain Forum-DBF'07 5-9 March 2007 Hakuba Tokyu Hotel in Hakuba Village, Nagano Prefecture, Japan Conference Framework: The 10th Tamagawa-Riken Dynamic Brain Forum (DBF'07) will be held on March 5-9, 2007 at Hakuba Tokyu Hotel, URL: http://www.tokyuhotel.co.jp/en/TR/TR_HAKUB/index.html in Hakuba Village, Nagano Prefecture, Japan. The Dynamic Brain Forum (DBF) is an annual international forum organized by the Tamagawa University Brain Science Research Center. In 2007, the DBF will be co-sponsored by Riken Brain Institute, Integrative Brain Research Project, Japanese Neural Network Society, Aihara Complexity Modelling Project and by the 21st Century COE Programs in Tamagawa University, Kyushu Institute of Technology and Hokkaido University. The theme of DBF'07 is Cortical Dynamics: Physiology, Theory and Applications. The forum is organized in several sessions focused on different aspects of the theme, with lectures in each session followed by a thematic discussion. 
In addition, there will be poster presentations on newest results by the discussants and participants. Posters will be up during first two days of meeting, allowing comprehensive discussions by participants. Preceding the forum, there also be two full days of Tutorial Programs, primarily oriented toward young researchers and Ph.D. students interested in the theory and applications of various kinds of dynamic brain function. Tutorial presenters and topics will include, Dr. S. Amari: Mathematical Theories of Dynamics of Neural Information Processing, Dr. I. Tsuda: Chaotic dynamics reality in brain dynamics, Dr. K. Doya: Neural implementation of reinforcement learning, Dr. W. Freeman: Recent advances in high-resolution analysis of EEG and MEG, Dr. A. Aertsen: Cortical Network Dynamics - Precision in a Noisy Environment ? Dr. G. L. Gerstein: Coding dynamics- (a) repeating firing patterns, (b) firing irregularity, Dr. M. Abeles: Brain structure and neural-network architecture, Application dates and Program schedule * PDF-format is mandatory for all documments submitted. * Other formats (Word, LATEX, etc) are never accepted. 31 Jul., 2006: Deadline for registration to attend Tutorials and/or DBF 31 Aug., 2006: Deadline for electronic submission of abstracts (less than 400 words, English) 01 Sep. - 30 Oct., 2006: Oral/poster/acceptance or rejection notification 31 Oct., 2006: Deadline for electronic submission of paper (4 pages on A4-size, English) 30 Nov., 2006: Electronic submission of final paper revisions. 04 - 06 Mar., 2007: Reception of Tutorials and/or DBF at Hakuba Tokyu Hotel 05 - 06 Mar., 2007: Tutorials 07 - Mar., 2007: DBF and poster presentations 07 - 08 Mar., 2007: Poster presentations 07 - 09 Mar., 2007: DBF meeting Registration fee and Travel Grant Tutorial: Free of charge DBF: US$200(US$160 for your early bird registration by end of Jun., 2006) includes banquet fee. Financial support: Max. US$1,500 travel cost is financially supported for excellent papers awarded by Program Committee, submitted by Graduate students and Post-docs. Registration and further information For registration or further information, please send e-mail to Secretary of the Organizing Committee. Secretary: S. Nagayama: nagayama at lab.tamagawa.ac.jp Address: Department of Intelligent Information Systems, Faculty of Engineering, Tamagawa University, 6-1 Tamagawagakuen, Machida, Tokyo, 194-8160, Japan And, also visit our web-site: "10th Tamagawa-Riken Dynamic Brain Forum - DBF'07" URL: http://www.tamagawa.ac.jp/sisetu/gakujutu/brain/dbf2007/index.html Committee: Advisory Committee, Chairman: Shun-ichi Amari: Riken Brain Science Institute, Japan. -Jun Tanji: Tamagawa University, Japan. -Mitsuo Kawato: ATR, Japan. -Takeshi Yamakawa: Kyusyu Institute of Technology, Japan. -Walter Freeman, University of California at Berkeley, USA. Organizing Committee, Chairman: Minoru Tsukada: Tamagawa University, Japan. Vice-Chairman: Ichiro Tsuda: Hokkaido University, Japan. -Keiji Tanaka: Riken Brain Science Institute, Japan. -Ad Aertsen: University of Freiburg, Germany. -Aike Guo: Chinese Academy of Science, China. - Fanji Gu: Fudan University, China. -Gert Hauske: Munich University of Technology, Germany. -Antonio Roque: University of Sao Paulo, Brazil. -Edger Koemer: HONDA R&D Europe, Germany. - James Wright, Univ. of Auckland, NZ Executive Committee, Chairman: Masamichi Sakagami: Tamagawa University, Japan. -Christph Schreiner: University of California, San Francisco, USA. 
-Takeshi Kasai, Osaka Univ., Japan -Hiroshi Fujii: Kyoto Sangyo University, Japan. -Kenji Doya: Okinawa Institute of Science and Technology, Japan. -Shuji Aou, Kyushu Institute of Tech, Japan -Hiroshi Kojima: Tamagawa University, Japan. -Takeshi Aihara: Tamagawa University, Japan. -Yutaka Sakai: Tamagawa University, Japan. Program Committee, Chairman: Shigetoshi Nara: Okayama University, Japan -Kazuyuki Aihara: University of Tokyo, Japan. -Shozo Yasui: Kyusyu Institute of Technology, Japan. -:Hatsuo Hayashi: Kyusyu Institute of Technology, Japan. -Hiroyuki Ito: Kyoto Sangyo University, Japan. -Tomoki Fukai: Riken Brain Science Institute, Japan. -Guy Sandner, INSERM, Strasburg, France ==================================================================== From reza at bme.jhu.edu Mon Jun 12 13:45:13 2006 From: reza at bme.jhu.edu (Reza Shadmehr) Date: Mon, 12 Jun 2006 13:45:13 -0400 Subject: Connectionists: Advances in computational motor control Message-ID: <000001c68e47$f649ed20$3e1d81a2@Observer> Dear colleagues: We would like to invite you to the fifth computational motor control symposium at the Society for Neuroscience conference. The symposium will take place on Friday, Oct. 13 2006 at the Atlanta convention center. The purpose of the meeting is to highlight computational modeling and theories in motor control. This is an opportunity to meet and hear from some of the bright minds in the field. The program consists of two distinguished speakers and 12 contributed talks, selected from the submitted abstracts. This year the speakers are: Mike Shadlen, University of Washington Chris Atkeson, Carnegie Mellon University We encourage you to consider submitting an abstract. The abstracts will be reviewed by a panel and ranked. The top 10-12 abstracts will be selected for oral presentation. We encourage oral presentation by students who have had a major role in the work described in the abstracts. More information is available here: www.bme.jhu.edu/acmc The deadline for abstract submission is September 1. Abstracts should in a single PDF file, and should be no more than two pages in length, including figures and references. With our best wishes, Reza Shadmehr and Emo Todorov From ishikawa at brain.kyutech.ac.jp Mon Jun 12 20:03:56 2006 From: ishikawa at brain.kyutech.ac.jp (Masumi Ishikawa) Date: Tue, 13 Jun 2006 09:03:56 +0900 Subject: Connectionists: 3rd CFP for BrainIT2006 -- Deadline extended Message-ID: <6.0.0.20.2.20060613084438.023df5f0@mail.brain.kyutech.ac.jp> [Apologies if you receive this more than once] ====================================================== The Third Announcement and Call for Papers for BrainIT2006 ====================================================== Brain-Inspired Information Technology (BrainIT2006) Kitakyushu, Fukuoka, Japan September 27-29, 2006 Welcome to BrainIT 2006 The third international conference, BrainIT 2006, will be held in Kitakyushu, Japan, on September 27-29, 2006, in order to establish the foundations of the Brain-Inspired Information Technology. All working at the frontiers of Brain Science to Information Technology including Robotics are invited to participate in the third international conference, BrainIT 2006. At this conference, we will organize special sessions on the results of our COE research program in addition to invited papers from a wide range of fields from Brain Science to Information Technology. 
INVITED SPEAKERS Special Session Mitsuo Kawato (Advanced Telecommunications Research Institute International (ATR), Japan) "Manipulative neuroscience based on brain network interface" Invited Sessions Asla Pitkanen (University of Kuopio, Finland) "Molecular and cellular pathways mediating amygdala-hippocampal dialogue during fear conditioning" Andreas G. Andreou (Johns Hopkins University, USA) "Silicon Brains in 3D SOI-CMOS Technology" Helge Ritter (Bielefeld University, Germany) "Making Human-Computer Interfaces Brain-adequate" Rodney Douglas (University/ETH Zurich, Switzerland) "Computations performed by collections of recurrently connected neurons, and their implementation in hybrid CMOS VLSI electronic systems" Yoshihiko Nakamura (The University of Tokyo, Japan) ?Toward Intuitive Communication using Bodies Between Human and Humanoid Robots? Lamp Session Ryohei Kanzaki (The University of Tokyo, Japan) ?How the microbrain generates adaptive behavior??-- from gene and neurons to neural networks and behavior -- IMPORTANT DATES Abstract (for presentation) Submission Deadline: June 19, 2006(EXTENDED!!) Notification of Acceptance: July 14, 2006 Early Registration Deadline: September 5, 2006 SCOPE AND TOPICS BrainIT 2006 solicits experimental, computational, theoretical as well as engineering papers related to the topics in the following non-exhaustive, non-exclusive categories. Categories: 1. Vision systems 2. Other sensory systems 3. Cognition?& Languages 4. Learning and Memory 5. Behavior & Emotion 6. Motor controls 7. Dynamics 8. Neural computation 9. Neural networks 10.Brain-inspired intelligent machines Papers that bridge brain science and information technology are especially welcome. Regular papers may include speculative discussions on Brain-Inspired Information Technology. BrainIT 2006 is open to all working at the frontiers of Brain Science to Information Technology (modeling and hardware realization) and provides the opportunity for presenting and discussing ideas that pave the way for the new field, Brain-Inspired Information Technology. Instructions for Authors Authors for poster presentation must submit a 1-page A4-sized abstract electronically via our web site, though submissions by e-mail are also available. Only PDF files are acceptable.?Each abstract will be independently reviewed by two reviewers.?Abstracts selected by reviewers are asked to present in a Selected Paper Session, an oral session. BrainIT2006 will publish an edited book as before. For further information, please refer to our web site. Registration Registration is free of charge. However, we recommend your early registration as the number of abstract books and other materials may be limited. 
Sponsors - "World of brain computing interwoven out of animals and robots": The 21st Century Center of Excellence Program of the Ministry of Education, Culture, Sports, Science and Technology, Japan - Kyushu Institute of Technology - Fuzzy Logic Systems Institute (FLSI) - Kitakyushu Foundation for the Advancement of Industry, Science and Technology (FAIS) Contact us Secretariat: Tetsuo FURUKAWA, PhD, Professor Phone: +81-93-695-6124, Fax: +81-93-695-6134 E-mail: secretariat at brain-it.brain.kyutech.ac.jp For more information, please visit our web site: http://conf.lsse.kyutech.ac.jp/~brain-it/ Masumi Ishikawa Department of Brain Science and Engineering Graduate School of Life Science and Systems Engineering Kyushu Institute of Technology 2-4 Hibikino, Wakamatsu, Kitakyushu 808-0196, Japan Tel and Fax: +81-93-695-6106 Email: ishikawa at brain.kyutech.ac.jp URL: http://www.brain.kyutech.ac.jp/~ishikawa URL: http://www.lsse.kyutech.ac.jp/ From mark.plumbley at elec.qmul.ac.uk Mon Jun 12 06:06:19 2006 From: mark.plumbley at elec.qmul.ac.uk (Mark Plumbley) Date: Mon, 12 Jun 2006 11:06:19 +0100 Subject: Connectionists: Lectureship position at Centre for Digital Music, Queen Mary Message-ID: <9D47A2D30B0BFB4C920786C25EC693457E4964@staff-mail.vpn.elec.qmul.ac.uk> Dear Connectionists, Those of you working on audio source separation or machine listening might be interested in this Lectureship in our group. (UK "Lecturer" approximates to US/Canada "Assistant Professor".) Best wishes, Mark Plumbley. ------------------------------------------------------------------------ Queen Mary University of London Department of Electronic Engineering Lectureship in Digital Music The Centre for Digital Music at Queen Mary, University of London is one of the world's leading research teams specializing in Music Informatics, and DSP for Music and Audio. We regularly work with leading UK researchers and with teams across Europe, in the USA and in Japan, and host international conferences in the area. We are pleased to offer the opportunity to join the team as a Lecturer. If appointed you will contribute specialised modules to the department's Masters programmes in Digital Signal Processing and Digital Music Processing, as well as new undergraduate degrees in Audio Systems Engineering and Digital Audio & Music Systems Engineering. You will also be expected to contribute to the Centre's research, bringing your own unique skills to the exciting topics we specialize in - see www.elec.qmul.ac.uk/digitalmusic for more details of our current research. Looking forward, we expect to expand activities in Internet Audio, 3D sound, Haptic and novel interfaces, Performance and Real-time processing. Fuller details of the position and person specification, together with application form are available from http://www.elec.qmul.ac.uk Informal enquiries about this post are encouraged and may be made to Professor Mark Sandler preferably by telephone (+44 (0)20 7882 7680) or by email (mark.sandler at elec.qmul.ac.uk). Completed application forms including the names and addresses of three referees, a CV and publications list, should be returned to Ms Sharon Cording, Department of Electronic Engineering, Queen Mary, University of London, Mile End Road, London E1 4NS by 15th July 2006. 
------------------------------------------------------------------------ --- Dr Mark D Plumbley Centre for Digital Music Department of Electronic Engineering Queen Mary University of London Mile End Road, London E1 4NS, UK Tel: +44 (0)20 7882 7518 Fax: +44 (0)20 7882 7997 Email: mark.plumbley at elec.qmul.ac.uk http://www.elec.qmul.ac.uk/people/markp/ From ramamurthy2 at yahoo.com Tue Jun 13 11:58:51 2006 From: ramamurthy2 at yahoo.com (Rama Garimella) Date: Tue, 13 Jun 2006 08:58:51 -0700 (PDT) Subject: Connectionists: Paper Submission for IJCAI workshop on NEURAL NETWORKS Message-ID: <20060613155851.1785.qmail@web30914.mail.mud.yahoo.com> Dear Neural Networks researchers, 1. Please submit papers to the following workshop: IJCAI-2007 (International Joint Conference on Artificial Intelligence) will be organized in HYDERABAD, INDIA. IJCAI is a top notch conference. You are invited to submit papers in the general area of NEURAL NETWORKS. The details are also available on the IJCAI-2007 Workshop proposal link at http://www.iiit.net/faculty/rammurthy.php With Regards, Rama Murthy Rama Murthy, Garimella, c/o G.Venkataramanamma, Flat#23, Block 'E', Swagruha Apartments, Opposite KPHB Colony, Hyderabad-500072 __________________________________________________ Do You Yahoo!? Tired of spam? Yahoo! Mail has the best spam protection around http://mail.yahoo.com From M.Pontil at cs.ucl.ac.uk Tue Jun 13 07:41:50 2006 From: M.Pontil at cs.ucl.ac.uk (Massi Pontil) Date: Tue, 13 Jun 2006 12:41:50 +0100 (BST) Subject: Connectionists: OPEN HOUSE on Multi-task and Complex Outputs Learning Message-ID: (We apologise for multiple copies of this message) ******************************************************************** The registration for the following event is now open (note that there is no registration fee): There are still some slots available for giving talks on Friday July 14. Please let us know if you wish to present your own work. Massimiliano Pontil and John Shawe-Taylor ******************************************************************** O P E N H O U S E on Multi-task and Complex Outputs Learning http://www.cs.ucl.ac.uk/staff/M.Pontil/open-house.html hosted at the Centre for Computational Statistics and Machine Learning (University College London) July 10--14, 2006, London, UK Sponsored by the PASCAL Network of excellence http://www.pascal-network.org/ ********************************************************************* From michael.spratling at kcl.ac.uk Wed Jun 14 09:37:53 2006 From: michael.spratling at kcl.ac.uk (Michael Spratling) Date: Wed, 14 Jun 2006 14:37:53 +0100 Subject: Connectionists: Post-doc position, King's College London Message-ID: <44901131.6000702@kcl.ac.uk> Postdoctoral Research Associate King's College London An enthusiastic and well-qualified post-doctoral researcher is required to develop a biologically inspired neural network model that will be used to explore neural mechanisms underlying cognitive and behavioural development. The research will involve extending an existing hierarchical neural network in order to simulate the learning of visual-spatial representations in the dorsal pathway. The model will be used to incrementally learn to control visually guided behaviour in a stereo-vision robot head and will be evaluated by simulating infant behavioural data. This post is funded by EPSRC grant EP/D062225/1 "Exploring Mechanisms of Cognitive and Behavioural Development in Humans and Machines". 
Applicants should have a proven ability to carry out high quality research, have a genuine interest in the neural mechanisms underlying visual perception and visually guided behaviour, and be keen to carry out research in epigenetic/developmental robotics. The successful applicant is expected to have a PhD in a relevant area, have a good knowledge of neural networks and/or computational neuroscience, be proficient in C++ programming, and ideally should have experience in robotics and machine vision. The position is available from the 1st September 2006 for a period of 48 months. The starting salary is at SP6 on the RA1A scale, currently £24,612 per annum inclusive of London Allowance. Further particulars and an application pack can be downloaded directly from the King's College London website www.kcl.ac.uk/jobs or can be obtained by contacting the Personnel Office, King's College London, Strand, London WC2R 2LS, strand-recruitment at kcl.ac.uk. Please quote reference W1/CEE/84/06 on all correspondence. Informal inquiries can be made to Dr Michael Spratling via e-mail at: michael.spratling at kcl.ac.uk. The closing date for the receipt of applications: 14th July 2006. Equality of opportunity is College policy. From netta at comp.leeds.ac.uk Wed Jun 14 14:26:06 2006 From: netta at comp.leeds.ac.uk (N Cohen) Date: Wed, 14 Jun 2006 19:26:06 +0100 (BST) Subject: Connectionists: Postdoctoral and Doctoral positions in Leeds, UK Message-ID: One Postdoctoral and one Doctoral Opening Biosystems Group School of Computing University of Leeds, UK 1. Postdoctoral Research Fellowship: ------------------------------------ Modelling of biological neural networks and neural computation Applications are invited for an EPSRC-funded postdoctoral Research Fellowship in the Biosystems Group, School of Computing, University of Leeds. The Engineering and Physical Sciences Research Council (EPSRC) has awarded a grant of £1.8M for research into Amorphous Computing, Random Graphs and Complex Biological Systems (AMORPH). AMORPH is an interdisciplinary research collaboration between the Universities of Leeds, Sheffield, Southampton, Royal Holloway and King's College, London, and industrial partner BT. We focus on the study and development of network models in a broad range of biological (neuroscience, epidemics and gene regulation) and communication (telephone, internet) domains. By applying random graph theory methods and taking inspiration from the organisation of complex biological systems, we will develop methods for amorphous computing. The successful candidate will work with Dr. Netta Cohen in the Biosystems Group, School of Computing at Leeds. Areas of research include computational modelling, simulations and bio-inspired applications of biological neural networks: from abstract models of development, learning and memory, to the study of rhythms and coordination in simple motor networks in invertebrates. Current research approaches are diverse and based both on experimental data and on abstract modelling and simulations. Funding is available for 40 months. All highly motivated candidates with a PhD in Physical, Mathematical or Computer sciences, or in other areas relevant to the project, are encouraged to apply. International applications are welcome. Applications received before 10 July, 2006 are guaranteed full consideration. Informal enquiries to Dr Netta Cohen, netta at comp.leeds.ac.uk To apply, visit http://jobs.leeds.ac.uk, click on 'Research' on the right hand menu, and see full advert under Job Reference 312067. 2.
PhD: Amorphous Computing, Random Graphs and Complex Biological Systems ------------------------------------------------------------------------- Amorph is an exciting multidisciplinary research project, aimed at studying a range of biological networks that can be described as performing `amorphous computing'. Our goal is to develop an appropriate language and tools to model the dynamics of and on these networks, within the context of the theory of random graphs. This is a collaborative project, including biology, mathematics and computer science, and spanning a range of biological domains (from the molecular level, through neuroscience to epidemiology). One PhD studentship (for UK/EU candidates) is available to work with Dr Netta Cohen (Biosystems) and Prof Martin Dyer (Algorithms and Complexity) at the School of Computing, University of Leeds, England, in collaboration with Dr Colin Cooper in Kings College, London. Possible areas of research for this PhD project span mathematical and computational investigations of biological neural networks, gene, protein and/or metabolic networks, as well as bio-inspired applications thereof. Opportunities also exist for collaborations across the Amorph consortium. All highly motivated candidates with strong backgrounds in physics, mathematics, biology, computer science, or other related areas, and who have an interest in complex networks are encouraged to apply. Some research experience is desirable. Background in biology is not a prerequisite. For further information, please email Netta Cohen, enclosing a CV. To apply, please visit http://www.comp.leeds.ac.uk/research/research_opps/phd_apply2.shtml Applications received before 3 July 2006, are guaranteed full consideration. ====================================================================== Netta Cohen BioSystems Group School of Computing Phone: +44 (0)113 3436789 University of Leeds Fax: +44 (0)113 3435457 Leeds, LS2 9JT Email: netta at comp.leeds.ac.uk United Kingdom www.comp.leeds.ac.uk/netta/ From CL243 at cornell.edu Thu Jun 15 09:51:48 2006 From: CL243 at cornell.edu (Christiane Linster) Date: Thu, 15 Jun 2006 09:51:48 -0400 Subject: Connectionists: Postdoctoral position available References: <20060615132311.M7722@cnsorg.org> Message-ID: <004d01c69082$d8dba7a0$0501a8c0@cpl.cornell.edu> Postdoctoral position in Computational Neuroscience laboratory The laboratory of Christiane Linster and Thom Cleland at Cornell University in Ithaca, NY is looking to hire a postdoctoral associate for a project on the role of noradrenergic modulation for olfactory processing and memory. The project involves in vivo electrophysiology, computational modeling and behavioral pharmacology studies. We are looking for an accomplished electrophysiologist with interests in computational neuroscience. The postdoctoral associate will be expected to perform in vivo electrophysiology in rodents and will have the possibility to be trained and participate in Computational and Behavioral techniques. Please contact Christiane Linster at CL243 at cornell.edu if you are interested in this position. The position is open to non US citizens and can start as early as July 1st, 2006. ----------------------------------------------------------- Christiane Linster Associate Prof. 
Neurobiology and Behavior Cornell University 607 - 2544331 CL243 at cornell.edu From M.Denham at plymouth.ac.uk Thu Jun 15 07:26:27 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Thu, 15 Jun 2006 12:26:27 +0100 Subject: Connectionists: Research Assistant/ PhD position in Plymouth, UK Message-ID: <52A8091888A23F47A013223014B6E9FE0AE2620D@03-CSEXCH.uopnet.plymouth.ac.uk> Centre for Theoretical and Computational Neuroscience, University of Plymouth, UK Research Assistant - Three Year Fixed Term Appointment (salary range 15,194 - 18,129 UK pounds per annum) Applications are invited for the post of Research Assistant in the Centre for Theoretical and Computational Neuroscience at the University of Plymouth, UK. The RA will work as part of a Europe-wide research team on a new EU-funded Integrated Project entitled "FACETS: Fast Analog Computing with Transient States in Neural Architectures. This is a second RA post in Plymouth on the FACETS project and not a readvertisement. The FACETS project is a major 11M euros four-year research project funded by the European Union under the Framework 6 Information Society Technology (IST) priority as part of the Future and Emerging Technologies (FET) programme in Bio-Inspired Information Systems. The stated objective of FACETS is to explore and exploit the yet unknown computational principles that constitute the basis of information processing in the brain. The project involves experimental neuroscience, the construction of models and analytical descriptions for neural cells and networks and the construction of very large scale neural circuits in VLSI technology. The FACETS consortium includes fifteen of the major laboratories in Europe in these areas. The research assistant in Plymouth will work specifically on the construction of computational models of neural processing of visual information in the neocortex, in particular in relation to the perception of motion. The person appointed will be expected to register for the MPhil/PhD programme in the Centre. The normal tuition fees for the PhD programme will be waived for the duration of the three-year appointment, and the research assistant will receive a salary at a level commensurate with the normal annual stipend for UK research students. Applicants for the post must have a sound educational background at first degree level in a relevant subject area, with a standard of achievement consistent with entering into a PhD programme, ie at least equivalent to a UK 2.1 honours degree classification. They should possess a strong background in mathematics or physics, and an interest in the field of computational modelling of the brain. Possession of a Masters degree and/or relevant postgraduate experience would be an advantage. The post is expected to be available from 1st September 2006 and an appointment will be made as soon as possible after this date. The appointment will be for a fixed term of up to three years, ending 31st August 2009, and will be subject to a probationary period of six months. Informal enquiries in the first instance should be made to Professor Mike Denham, Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK; tel: +44 (0)1752 232547; email: mdenham at plym.ac.uk. Further details will then be given on how to apply formally for the post. 
Professor Mike Denham Centre for Theoretical and Computational Neuroscience Room A223 Portland Square University of Plymouth Drake Circus Plymouth PL4 8AA UK tel: +44 (0)1752 232547/233359 fax: +44 (0)1752 233349 email: mdenham at plym.ac.uk www.plymneuro.org.uk From M.Denham at plymouth.ac.uk Thu Jun 15 08:05:16 2006 From: M.Denham at plymouth.ac.uk (Mike Denham) Date: Thu, 15 Jun 2006 13:05:16 +0100 Subject: Connectionists: PhD position in Plymouth, UK Message-ID: <52A8091888A23F47A013223014B6E9FE0AE2621E@03-CSEXCH.uopnet.plymouth.ac.uk> Centre for Theoretical and Computational Neuroscience, University of Plymouth, UK PhD Studentship Applications are invited for a PhD studentship in theoretical and computational neuroscience in the Centre for Theoretical and Computational Neuroscience at the University of Plymouth, UK, for up to four years starting in September 2006. The studentship is funded by a Doctoral Training Grant to the Centre from the Engineering and Physical Sciences Research Council (EPSRC). Applicants must normally hold a first or upper second class honours degree or equivalent qualification, or a masters degree, in an appropriate discipline, and have a strong interest in studying information processing in the brain using mathematical and computational modelling methods in combination with physiological, psychophysical, EEG/ERP or fMRI experimental methods/data. Projects are available in a number of research areas within the interests of the Centre (see www.plymneuro.org.uk for further details). Examples of available projects include: - mathematical and computational modelling of the basal ganglia and Deep Brain Stimulation - the neural correlates of auditory perceptual organisation: a combined fMRI and computational modelling study - dynamical data analysis and network simulation of neural activity and its relation to active memory functions. - multiscale modelling of neocortical information processing To be eligible for the full studentship, ie full tuition fees and a maintenance award of at least ?12500 pa, candidates must demonstrate a relevant connection with the UK, usually through being ordinarily resident for a period of 3 years immediately prior to the date of application for an award. Nationality or country of origin is not a criterion for eligibility. Nationals of member states of the European Union other than the UK are eligible for fees-only awards if they are resident in their own country in the same way that other candidates are required to be resident in the UK. Informal enquiries should be made to Professor Mike Denham, Centre for Theoretical and Computational Neuroscience, University of Plymouth, Drake Circus, Plymouth, PL4 8AA, UK; tel: +44 (0)1752 232547; email: mdenham at plym.ac.uk. Further particulars concerning the studentship and an application form may be found at (http://www.plymouth.ac.uk/pghowtoapply) or by contacting Ann Treeby (ann.treeby at plymouth.ac.uk ). Please return all applications to: Mrs Ann Treeby, Faculty of Science Research Admin Assistant, University of Plymouth, Rm A504, Portland Square, Plymouth, PL4 8AA The closing date for applications is 12 noon on 7th July 2006. Interviews will take place in July. Applicants who have not received an invitation for interview by the end of July 2006 should consider their application as unsuccessful. 
Professor Mike Denham Centre for Theoretical and Computational Neuroscience Room A223 Portland Square University of Plymouth Drake Circus Plymouth PL4 8AA UK tel: +44 (0)1752 232547/233359 fax: +44 (0)1752 233349 email: mdenham at plym.ac.uk www.plymneuro.org.uk From hitzler at aifb.uni-karlsruhe.de Fri Jun 16 05:25:02 2006 From: hitzler at aifb.uni-karlsruhe.de (Pascal Hitzler) Date: Fri, 16 Jun 2006 11:25:02 +0200 Subject: Connectionists: CfP: IJCAI-07 Workshop on Neural-Symbolic Learning and Reasoning, NeSy'07 Message-ID: <449278EE.503@aifb.uni-karlsruhe.de> Third International Workshop on Neural-Symbolic Learning and Reasoning Workshop at IJCAI-07, Hyderabad, India, January 2007 NeSy'05 took place at IJCAI-05 NeSy'06 took place at ECAI2006 Call for Papers --------------- Artificial Intelligence researchers continue to face huge challenges in their quest to develop truly intelligent systems. The recent developments in the field of neural-symbolic integration bring an opportunity to integrate well-founded symbolic artificial intelligence with robust neural computing machinery to help tackle some of these challenges. The Workshop on Neural-Symbolic Learning and Reasoning is intended to create an atmosphere of exchange of ideas, providing a forum for the presentation and discussion of the key topics related to neural-symbolic integration. Topics of interest include: * The representation of symbolic knowledge by connectionist systems; * Learning in neural-symbolic systems; * Extraction of symbolic knowledge from trained neural networks; * Reasoning in neural-symbolic systems; * Biological inspiration for neural-symbolic integration; * Applications in robotics, semantic web, engineering, bioinformatics, etc. Submission Researchers and practitioners are invited to submit original papers that have not been submitted for review or published elsewhere. Submitted papers must be written in English and should not exceed 6 pages in the case of research and experience papers, and 2 pages in the case of position papers (including figures, bibliography and appendices) in IJCAI-07 format as described in the IJCAI-07 Call for Papers. All submitted papers will be judged based on their quality, relevance, originality, significance, and soundness. Papers must be submitted directly by email in PDF format to nesy at soi.city.ac.uk Presentation Selected papers will have to be presented during the workshop. The workshop will include extra time for audience discussion of the presentation allowing the group to have a better understanding of the issues, challenges, and ideas being presented. Please note that the number of participants will be strictly limited. Publication Accepted papers will be published in official workshop proceedings, which will be distributed during the workshop. Authors of the best papers will be invited to submit a revised and extended version of their papers to the journal of logic and computation, OUP. Important Dates Deadline for submission: 22nd of September, 2006 Notification of acceptance: 23rd of October, 2006 Camera-ready paper due: 3rd of November, 2006 Workshop date: 6th, 7th or 8th of January, 2007 IJCAI-07 main conference dates: 6th of January 2007 to 12th of January, 2007. Workshop Organisers Artur d'Avila Garcez (City University London, UK) Pascal Hitzler (University Karlsruhe, Germany) Guglielmo Tamburrini (Universit? 
di Napoli, Italy) Programme Committee (still incomplete) Artur d'Avila Garcez (City University London, UK) Sebastian Bader (TU Dresden, Germany) Howard Blair (Syracuse University, USA) Marco Gori (University of Siena, Italy) Barbara Hammer (TU Clausthal, Germany) Ioannis Hatzilygeroudis (University of Patras, Greece) Pascal Hitzler (University of Karlsruhe, Germany) Kai-Uwe K?hnberger (University of Osnabr?ck, Germany) Luis Lamb (Federal University of Rio Grande do Sul, Brazil) Vasile Palade (Oxford University, UK) Jude W. Shavlik (University of Wisconsin-Madison, USA) Ron Sun (Rensselaer Polytechnic Institute, USA) Guglielmo Tamburrini (Universit? di Napoli Feredico II, Italy) Stefan Wermter (University of Sunderland, UK) Gerson Zaverucha (Federal University of Rio de Janeiro, Brazil) Invited speakers (still incomplete) Additional Information General questions concerning the workshop should be addressed to nesy at soi.city.ac.uk. You are also invited to subscribe to the neural-symbolic integration mailing list. -- Dr. habil. Pascal Hitzler Institute AIFB, University of Karlsruhe, 76128 Karlsruhe email: hitzler at aifb.uni-karlsruhe.de fax: +49 721 608 6580 web: http://www.pascal-hitzler.de phone: +49 721 608 4751 http://www.neural-symbolic.org From tobias at jupiter.chaos.gwdg.de Fri Jun 16 09:40:42 2006 From: tobias at jupiter.chaos.gwdg.de (Tobias Niemann) Date: Fri, 16 Jun 2006 15:40:42 +0200 Subject: Connectionists: Tutorial course on COMPUTATIONAL NEUROSCIENCE at Goettingen, Germany, September 26-30, 2006 Message-ID: <4492B4DA.2090406@chaos.gwdg.de> Applications are invited for a tutorial course on COMPUTATIONAL NEUROSCIENCE at Goettingen, Germany September 26 - 30, 2006 organized by J. M. Herrmann, T. Geisel, F. Woergoetter The course is intended to provide graduate students and young researchers from all parts of neuroscience with working knowledge of theoretical and computational methods in neuroscience and to acquaint them with recent developments in this field. The course includes tutorials and lectures on the following topics: * Carlos Brody: Simple neural models that combine working memory with decision making * Konrad Koerding: Which computational problems does the nervous system solve: A Bayesian view * Alexander Gail: Movement planning in the cortex -- Implications for the design of neuroprosthetic devices * Andreas Herz: Biophysics, information processing and biological function of a small auditory system The course takes place at the Department of Nonlinear Dynamics of the Max Planck Institute for Dynamics and Selforganization, Bunsenstr. 10, D-37073 Goettingen. A course fee of 100 Euro includes participation in the tutorials, study materials, and part of the social events. The number of participants is limited to about 30. Course language is English. To apply please fill in the application form at: http://www.bccn-goettingen.de/CNS-course/index.htm by June 30, 2006. For further information please contact: cns-course at chaos.gwdg.de ********************************************************************* * Dr. J. Michael Herrmann Georg August University Goettingen * * Tel. : +49 (0)551 5176424 Institute for Nonlinear Dynamics * * Fax : +49 (0)551 5176439 Bunsenstrasse 10 * * mobil: 0176 2800 4268 D-37073 Goettingen, Germany * * EMail: michael at chaos.gwdg.de http://www.chaos.gwdg.de * ********************************************************************* From Randy.OReilly at colorado.edu Sat Jun 17 01:46:47 2006 From: Randy.OReilly at colorado.edu (Randall C. 
O'Reilly) Date: Fri, 16 Jun 2006 23:46:47 -0600 Subject: Connectionists: CCNC Conference Deadline Extended to July 1 In-Reply-To: <200606062219.06440.oreilly@psych.colorado.edu> References: <200604142347.38871.Randy.OReilly@colorado.edu> <200605191141.43513.Randy.OReilly@colorado.edu> <200606062219.06440.oreilly@psych.colorado.edu> Message-ID: <200606162346.49042.Randy.OReilly@colorado.edu> ABSTRACT SUBMISSION DEADLINE EXTENDED Due to popular request and the inadvertent conflict of the deadline with the OHBM conference currently going on, the organizing committee has extended the deadline for abstract submission to July 1, 2006. Notification of acceptance will be correspondingly extended to August 1, 2006. Other reminders: * Online registration open: www.ccnconference.org * Symposia topics selected --- see below * Brain Research has agreed to publish selected papers from this meeting, possibly a special issue. ---------------------------------------------------------------------------- ~ Final Call for Abstracts ~ 2ND ANNUAL CONFERENCE ON COMPUTATIONAL COGNITIVE NEUROSCIENCE www.ccnconference.org To be held in conjunction with the 2006 PSYCHONOMIC SOCIETY CONFERENCE, November 16-19, 2006 at the Hilton Americas hotel in Houston, TX. CONFERENCE DATES: Wed-Thu November 15 & 16, 2006 The inaugural CCNC 2005 meeting held prior to Society for Neuroscience (SfN) in Washington DC was a great success with approximately 250 attendees 60 presented posters and strongly positive reviews. In future years it will continue to be held on a rotating basis with other meetings such as (tentative list): Cognitive Neuroscience Society (CNS) Organization for Human Brain Mapping (OHBM) Cognitive Science Society (CogSci) Neural Information Processing Systems (NIPS) and Computational and Systems Neuroscience (COSYNE). ___________________________________________________________________________ DEADLINE FOR SUBMISSION OF ABSTRACTS: July 1, 2006 (EXTENDED) Abstracts to be submitted online via the website: www.ccnconference.org Like last year, there will be two categories of submissions: *Poster only *Poster, plus short talk (15 min) to highlight the poster Abstracts should be no more than 500 words. Women and underrepresented minorities are especially encouraged to apply. Reviewing for posters will be inclusive and only to ensure appropriateness to the meeting. Short talks will be selected on the basis of research quality relevance to conference theme and expected accessibility in a talk format. Abstracts not selected for short talks will still be accepted as posters as long as they meet appropriateness criteria. NOTIFICATION OF ACCEPTANCE: August 1, 2005 (EXTENDED) The journal Brain Research has agreed to publish selected papers from this meeting as a dedicated section, and possibly special issue of the journal. Presenting authors can elect to have their work considered for this purpose. Final selections will be made by the program committee shortly after the meeting. __________________________________________________________________________ Preliminary Program: __________________________________________________________________________ * 2006 Keynote Speakers (confirmed): Mike Kahana, University of Pennsylvania Mark Seidenberg, University of Wisconsin Madison ___________________________________________________________________________ * 3 Symposia (2 hours each): 1) Face/Object Recognition: Are Faces Special, or Just a Special Case? 
Computational models of face and object processing Speakers: Gary Cottrell, UCSD (Moderator) Kalanit Grill-Spector, Stanford Alice O'Toole, UT Dallas Maximilian Riesenhuber, Georgetown Description: What can computational models tell us about human visual object processing? We have excellent models that explain how we may recognize objects at multiple scales and orientations, while other models explain why faces may or may not be "special," or simply a special case. The goal of this symposium is to summarize what we understand with some degree of confidence, what is still not understood, and to what degree what we understand meshes with data on human and animal visual processing, including behavioral, fMRI, neurophysiological, and neuropsychological data. 2) Semantics: Development and Brain Organization of Conceptual Knowledge: Computational and Experimental Investigations. Speakers: Jay McClelland, Stanford University (Moderator) Linda Smith, Indiana University Tim Rogers, University of Wisconsin Alex Martin, National Institute of Mental Health Description: The symposium is predicated on the assumption that there are links between conceptual structure, experience, conceptual development, and brain organization of conceptual knowledge. Jay McClelland will begin with a computational perspective on conceptual development, followed by Linda Smith with an empirical perspective. We would then switch to the subject of brain organization of conceptual knowledge, beginning with a computational perspective by Tim Rogers followed by an empirical perspective from Alex Martin. 3) Cognitive Control: Computational and Empirical Investigations Speakers: Mike Mozer, University of Colorado (Moderator) Others not yet confirmed ___________________________________________________________________________ * 12 short talks featuring selected posters * Poster sessions (2) ___________________________________________________________________________ 2006 Planning Committee: Suzanna Becker, McMaster University Jonathan Cohen, Princeton University Yuko Munakata, University of Colorado, Boulder David Noelle, Vanderbilt University Randall O'Reilly, University of Colorado, Boulder Maximilian Riesenhuber, Georgetown University Medical Center Executive Organizer: Thomas Hazy, University of Colorado, Boulder For more information and to sign up for the mailing list visit: www.ccnconference.org From mjhealy at ece.unm.edu Tue Jun 13 21:07:12 2006 From: mjhealy at ece.unm.edu (mjhealy@ece.unm.edu) Date: Tue, 13 Jun 2006 19:07:12 -0600 Subject: Connectionists: Ontologies and Neural Networks Message-ID: <20060613190712.e8tsr0cl4w4c0gc8@webmail.ece.unm.edu> The paper M. J. Healy and T. P. Caudell, "Ontologies and Worlds in Category Theory: Implications for Neural Systems", Axiomathes, vol. 16, no. 1, 2006 is available for download at http://www.ece.unm.edu/~mjhealy . Please let me know if you have a problem with the download. Abstract: We propose category theory, the mathematical theory of structure, as a vehicle for defining ontologies in an unambiguous language with analytical and constructive features. Specifically, we apply categorical logic and model theory, based upon viewing an ontology as a sub-category of a category of theories expressed in a formal logic. In addition to providing mathematical rigor, this approach has several advantages. 
It allows the incremental analysis of ontologies by basing them in an interconnected hierarchy of theories, with an operation on the hierarchy that expresses the formation of complex theories from simple theories that express first principles. Another operation forms abstractions expressing the shared concepts in an array of theories. The use of categorical model theory makes possible the incremental analysis of possible worlds, or instances, for the theories, and the mapping of instances of a theory to instances of its more abstract parts. We describe the theoretical approach by applying it to the semantics of neural networks. Key words: category, cognition, colimit, functor, limit, natural transformation, neural network, semantics Regards, Mike From litin at iont.ru Mon Jun 19 07:45:48 2006 From: litin at iont.ru (Litinskii) Date: Mon, 19 Jun 2006 15:45:48 +0400 Subject: Connectionists: from Leonid Litinskii, IONT RAS Message-ID: <17417670015.20060619154548@iont.ru> Dear colleagues, I am analyzing the problem of minimizing a quadratic functional of N binary variables. In the Hopfield model this is the problem of minimizing the energy of a state. In physical terms, this is finding the ground state of an Ising model. The procedure for local minimization is well known: at time t we calculate the local field h(i,t) acting on the ith spin s(i,t), and if the spin is dissatisfied (in other words, if s(i,t)h(i,t) < 0), then at the next time step the ith spin turns over: s(i,t+1) = sign(h(i,t)). In this case the energy of the state decreases by the value 4|h(i,t)|. Usually, in the standard approach, the first dissatisfied spin encountered is the one turned over. The question is: what if we turn over the most dissatisfied spin, that is, the dissatisfied spin for which |h(i,t)| has the maximal value in the given state? Do you know of work on this theme? Has anybody investigated such dynamics? I have failed to find such works. I would be very grateful for references on this theme. Leonid Litinskii, Institute of Optical-Neural Technologies, Russian Academy of Sciences -- Best regards, Litinskii mailto:litin at iont.ru From lubica.benuskova at aut.ac.nz Sun Jun 18 01:08:45 2006 From: lubica.benuskova at aut.ac.nz (Luba Benuskova) Date: Sun, 18 Jun 2006 17:08:45 +1200 Subject: Connectionists: CFP: Hybrid Intelligent Systems, Neuro-Computing and Evolving Intelligence Message-ID: <4494DFDD.90501@aut.ac.nz> CALL FOR PAPERS 4th Conference on Neuro-Computing and Evolving Intelligence (NCEI'06) and 6th International Conference on Hybrid Intelligent Systems (HIS'06) When: 13-15 December 2006, Where: Auckland, New Zealand Important dates: 10 August 2006 - paper submission deadline 20 September 2006 - final accepted papers submission deadline http://www.aut.ac.nz/research/research_institutes/kedri/conferences.htm General Chairs: Nik Kasabov, KEDRI, AUT, New Zealand (nkasabov at aut.ac.nz) Mario Koppen, Fraunhofer IPK, Germany (mkoeppen at ieee.org) Programme Co-Chairs: Andreas Koenig, University of Kaiserslautern, Germany (Koenig at eit.uni-kl.de) Ajith Abraham, Chung-Ang University, Korea (ajith.abraham at ieee.org) Qun Song, KEDRI, AUT, New Zealand (qsong at aut.ac.nz) Objective: The emphasis of this joint conference will be on adaptive, learning knowledge-based systems and on evolving intelligent systems, i.e. information systems that develop, unfold and evolve their structure and functionality over time through interaction with the environment.
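A note on Leonid Litinskii's question above: the greedy rule he describes, flipping at each step the single dissatisfied spin with the largest |local field|, can be sketched in a few lines. The sketch below is only illustrative: the coupling matrix is random toy data, the function names are made up, and it assumes the energy convention E(s) = -1/2 s^T J s, under which a flip lowers the energy by 2|h(i)| (the factor of 4 in the message corresponds to the convention without the 1/2).

```python
# Toy sketch (not from the original message): greedy "most dissatisfied spin"
# dynamics for minimizing the Hopfield/Ising energy E(s) = -1/2 * s^T J s.
import numpy as np

rng = np.random.default_rng(0)

def make_couplings(n):
    """Random symmetric coupling matrix with zero diagonal (toy example)."""
    J = rng.normal(size=(n, n))
    J = (J + J.T) / 2
    np.fill_diagonal(J, 0.0)
    return J

def energy(J, s):
    return -0.5 * s @ J @ s

def greedy_descent(J, s):
    """Repeatedly flip the dissatisfied spin with the largest |local field|."""
    s = s.copy()
    while True:
        h = J @ s                      # local fields h(i)
        dissatisfied = s * h < 0       # spins with s(i) h(i) < 0
        if not dissatisfied.any():
            return s                   # local minimum: every spin is satisfied
        idx = np.where(dissatisfied)[0]
        i = idx[np.argmax(np.abs(h[idx]))]  # most dissatisfied spin
        s[i] = -s[i]                   # flip; energy drops by 2*|h(i)| under this convention

if __name__ == "__main__":
    n = 50
    J = make_couplings(n)
    s0 = rng.choice([-1.0, 1.0], size=n)
    s_min = greedy_descent(J, s0)
    print("initial energy:", energy(J, s0))
    print("final energy:  ", energy(J, s_min))
```

Whether this steepest single-spin descent reaches better local minima than flipping the first dissatisfied spin encountered is exactly the open question posed in the message.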
The three-day event will include tutorials, invited talks, oral presentations, poster presentations and various demonstrations of neuro-computing and hybrid systems for bioinformatics and biomedical applications, biometrics and security, brain study and cognitive engineering, agriculture, environment, decision support, business and finance, speech, image and multimodal information processing, process control, and arts and design. Venue: Auckland, the City of Sails. The Auckland region is an antipasto of environments laid out on a huge platter to make one amazing city, boasting three harbours, two mountain ranges, 48 volcanic cones and more than 50 islands. Auckland's population is approximately 1.3 million, making it by far the largest city in New Zealand, with one third of the country's entire population. http://thenewzealandsite.com/photogalleries/ From nips06pub at hotmail.com Tue Jun 20 03:57:13 2006 From: nips06pub at hotmail.com (M.O. Franz) Date: Tue, 20 Jun 2006 09:57:13 +0200 Subject: Connectionists: NIPS: Call for Workshops and Call for Demonstrations Message-ID: Neural Information Processing Systems Conference and Workshops Vancouver and Whistler, BC, Canada December 4-9, 2006 The Call for Workshops and the Call for Demonstrations are now available online at: http://www.nips.cc The submission deadlines are as follows: Call for Workshops: August 4, 2006 Call for Demonstrations: September 24, 2006 NIPS Administration nipsinfo at salk.edu From markus.butz at uni-bielefeld.de Tue Jun 20 10:48:26 2006 From: markus.butz at uni-bielefeld.de (Markus Butz) Date: Tue, 20 Jun 2006 16:48:26 +0200 Subject: Connectionists: CNS*2006: workshop on structural plasticity and development Message-ID: We would like to draw your attention to the upcoming CNS meeting workshop on structural plasticity: Workshop announcement CNS*2006 meeting, Edinburgh Modelling structural plasticity and neural development M. Butz, Neuroanatomy, University of Bielefeld & A. van Ooyen, CNCR, Vrije Universiteit Amsterdam Structural plasticity, in terms of neurite outgrowth, synapse formation, synaptic regression and turnover as well as neurogenesis and apoptosis, is crucial for neural development during ontogeny and even for network reorganization in the mature brain. During development, progressive and regressive events go hand-in-hand to form a homeostatically stable network structure. The workshop is intended to discuss the biological background as well as current theoretical approaches that shed light on the dynamics of these morphogenetic processes. Besides a diverse set of theoretical and experimental talks (30 minutes each), there will be plenty of room for discussion (15 minutes after each talk). You will find the latest version of the workshop program in the attached PDF file. Further information and registration at www.cnsorg.org Best regards, Markus Butz p.p. Arjen van Ooyen _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ Official: Dept. of Neuroanatomy Faculty of Biology University of Bielefeld Universitaetsstr.
25 33615 Bielefeld Germany Phone: +49/(0)521-106-5715 e-mail: markus.butz at uni-bielefeld.de From etuci at ulb.ac.be Wed Jun 21 11:12:18 2006 From: etuci at ulb.ac.be (Elio Tuci) Date: Wed, 21 Jun 2006 17:12:18 +0200 Subject: Connectionists: Postdoc position in Swarmanoid project - Brussels Message-ID: <449961D2.2080902@ulb.ac.be> PostDoc position available IRIDIA Institut de Recherches Interdisciplinaires et de Développements en Intelligence Artificielle Université Libre de Bruxelles Brussels, Belgium http://iridia.ulb.ac.be/ Swarmanoid project: Postdoctoral research position available Start date: October 1st, 2006, or later. Position will remain open until a suitable candidate is found. The Swarmanoid project is a Future and Emerging Technologies (FET-OPEN) project funded by the European Commission. The main scientific objective of this research project is the design, implementation and control of a novel distributed robotic system. The system will be made up of heterogeneous, dynamically connected, small autonomous robots. Collectively, these robots will form what we call a swarmanoid. The swarmanoid that we intend to build will be comprised of numerous (about 60) autonomous robots of three types: eye-bots, handbots, and foot-bots. The Swarmanoid project is the successor project to the SWARM-BOTS project, and will build on the results obtained during the SWARM-BOTS project. Information on the SWARM-BOTS project can be found at the following address: www.swarm-bots.org. IRIDIA, the Artificial Intelligence laboratory of the Université Libre de Bruxelles and co-ordinator of the Swarmanoid project, has a vacancy for a postdoctoral research position. The successful applicant can start any time from October 1st 2006 onwards. The vacancy will remain open until a suitable candidate is found. There is, therefore, no submission deadline. The area of competence and/or interest of the person we are looking for should be in at least one of the following disciplines: Computational Intelligence, Autonomous Robotics, Self-organizing Systems. He or she should be an experienced researcher with publications in the above-mentioned fields. The working language of the laboratory is English, though French, German and Italian are spoken by most of its members. The right person will have a commitment to research and publication, and possess good communication and presentation skills (in English). He or she will be available for traveling between European labs participating in the research project. The ability to work as part of an international team producing pre-defined deliverables to fixed deadlines is essential. The project is coordinated by Professor Marco Dorigo, director of the IRIDIA lab. Other participants include Prof. Mondada and Prof. Floreano (both at EPFL, Lausanne), Prof. Gambardella (IDSIA, Università della Svizzera Italiana, Lugano), and Prof. Nolfi (CNR, Rome, Italy). IRIDIA is dedicated to equal opportunities. Indeed, we explicitly encourage female researchers to apply for this position. We guarantee that the selection process will be based solely on scientific merit. If you wish to apply, please send an email containing (preferably as a PDF attachment) your CV and a research interests statement to: Dr. Elio Tuci, Ph.D.
Email: swarmanoid at iridia.ulb.ac.be The full call for positions is available at: http://www.swarmanoid.org/Swarmanoid_vacancy.pdf From rod at dcs.gla.ac.uk Wed Jun 21 06:43:06 2006 From: rod at dcs.gla.ac.uk (Roderick Murray-Smith) Date: Wed, 21 Jun 2006 11:43:06 +0100 Subject: Connectionists: Post-doc position using machine learning in interaction design - Hamilton Institute, NUIM Message-ID: Post-doctoral Position Applications are invited from well-qualified candidates for a two-year post-doctoral position at the Hamilton Institute, NUI Maynooth, Ireland. (30 minutes from Dublin) http://www.hamilton.ie The successful candidate will be an outstanding researcher, with a Ph.D. qualification, who can demonstrate an exceptional research track record or significant research potential at international level. We are looking for self-motivated, innovative researchers, who are able to take fundamental, novel ideas for interaction design and create prototype systems to test them, and run user evaluations to judge the quality of the ideas. A strong commitment to research excellence, an ability to work in an interdisciplinary team, mathematical modeling skills and programming ability are very important. The successful candidate will probably have a background in one or more of: machine learning, control theory, signal processing and human-computer interaction design. The successful candidate will be expected to take up the post no later than September 1st, 2006. Appointments will be at €45,000 p/a. Informal enquiries regarding the post can be made to Roderick Murray-Smith (rod.murraysmith at nuim.ie). Applications with CV including contact details of three referees and two significant papers should be sent electronically to hamilton at nuim.ie. Project: A dynamic systems approach to Negotiated Interaction on mobile devices http://www.dcs.gla.ac.uk/~rod/DSNegInt.htm Project Description: This project will develop a novel approach to interaction design, based on closed-loop system design and probabilistic reasoning. One of the motivations of the approach is to be able to include machine learning models seamlessly in the architecture at multiple levels. The approach makes interaction into a negotiation process, and is especially relevant for systems instrumented with sensors. We will be looking at location-aware, gestural, and multi-user interaction. It includes a dynamic simulation approach to gestural interaction which improves learnability, and robustness to user variability, while allowing users to become 'masters' with fluent, skilled ability to navigate information spaces in a continuous fashion. We shall build on our current sensor platform to create and test applications in spatial and gesturally controlled systems, with multimodal (vibrotactile and audio) feedback. The implementation will be primarily in C/Python on PocketPCs, and Series60 Nokia telephones, with sensor packs. For examples of the group's research, papers and videos of existing demonstrations, please see http://www.dcs.gla.ac.uk/~rod/dynamics.html For a more complete description of the project, please visit http://www.dcs.gla.ac.uk/~rod/DSNegInt.htm Roderick Murray-Smith Hamilton Institute NUI Maynooth Co. Kildare Ireland email: hamilton at nuim.ie to be received no later than 1st July, 2006. Roderick Murray-Smith Department of Computing Science Glasgow University Glasgow G12 8QQ Scotland Tel. (direct line) +44 141 330 4984 Skype callto:roderickmurraysmith Tel. (secretarial) +44 141 330 4256 Fax.
+44 141 330 4913 http://www.dcs.gla.ac.uk/~rod Also Senior Researcher at: Hamilton Institute NUI Maynooth, Co. Kildare Ireland http://www.hamilton.may.ie From erik at tnb.ua.ac.be Wed Jun 21 12:07:06 2006 From: erik at tnb.ua.ac.be (Erik De Schutter) Date: Wed, 21 Jun 2006 18:07:06 +0200 Subject: Connectionists: CNS*2006: Neuro-IT workshop on Interoperability of Neural Simulators Message-ID: We would like to draw your attention to the upcoming CNS meeting workshop: Neuro-IT workshop on Interoperability of Simulators Edinburgh, United Kingdom, July 19-20 2006 For quite some time interoperability of neural simulators has been considered an important goal of the modeling community. But is this more than a buzzword? Is it possible/easy to transfer models from one simulator program to another? And how should any problems be solved? By making simulators interact with each other or by defining models in a simulator-independent way? This workshop brings together several outstanding simulator developers to present and discuss their views on the following 3 questions: - one big simulator versus specialized simulators - interoperability of specialized simulators - simulator independent model specification Further information and program can be found at http://www.neuroinf.org/CNS.shtml Best regards, Erik De Schutter From jutta.kretzberg at uni-oldenburg.de Fri Jun 23 03:34:41 2006 From: jutta.kretzberg at uni-oldenburg.de (Jutta Kretzberg) Date: Fri, 23 Jun 2006 09:34:41 +0200 Subject: Connectionists: PhD position in computational neuroscience: retinal ensemble coding Message-ID: <449B9991.9060403@uni-oldenburg.de> PhD position in computational neuroscience: Retinal ensemble coding The sensory physiology group at the University of Oldenburg, Germany, is looking for a motivated PhD student (BAT IIa/2) to participate in the newly established research unit "Dynamics and stability in retinal processing". The project will be focused on the question of how visual information is encoded in the spike activity of retinal ganglion cell ensembles. The PhD student will be mainly concerned with developing and applying mathematical data analysis methods for signal detection and signal estimation based on multi-electrode recordings. Moreover, he / she will perform computational modeling and participate in electrophysiological multi-electrode recordings from the turtle retina. Being involved in the research group offers the PhD student the opportunity to become acquainted with a broad range of neuroscientific methods and approaches by collaborating with several research groups in Oldenburg, Frankfurt and Heidelberg. In addition, he / she can benefit from participating in activities of the international graduate school "Neurosensory science and systems". Applicants should have a theoretical background, be highly interested in neuroscientific and coding questions, and be willing to learn experimental techniques. Basic computer and programming skills are expected; experience in neuroscience is preferable but not required. It is mandatory to hold a master's degree or Diplom. The application should include cover letter, CV, list of publications, university certificates and names of two possible referees. Please email your application preferably as PDF document(s) to jutta.kretzberg at uni-oldenburg.de We cannot guarantee to consider applications arriving after July 12, 2006.
For more information on the position and our research (and also for links to more open experimental positions in the research unit) please see http://www.uni-oldenburg.de/sinnesphysiologie/en/ and http://www.uni-oldenburg.de/fg-retina/en/ Jutta Kretzberg University of Oldenburg Institute of Biology and Environmental Sciences Carl-von-Ossietzky-Str. 9-11 26129 Oldenburg Germany From nando at cs.ubc.ca Fri Jun 23 15:13:54 2006 From: nando at cs.ubc.ca (Nando de Freitas) Date: Fri, 23 Jun 2006 12:13:54 -0700 Subject: Connectionists: N-body methods software release Message-ID: <449C3D72.3050800@cs.ubc.ca> Hi, Software for N-body problems, including KD-trees, dual trees, distance transforms and fast multipole methods, is now available at: http://www.cs.ubc.ca/~awll/nbody_methods.html It's written in C, but can be called from MATLAB and it's pretty easy to use. There are some demos for marginal particle filtering and particle smoothing. The software can also be easily applied to Gaussian processes and other kernel methods for dimensionality reduction and clustering. I'd be happy to answer questions about it at ICML. Enjoy. Nando From d.mareschal at bbk.ac.uk Wed Jun 28 08:58:20 2006 From: d.mareschal at bbk.ac.uk (Denis Mareschal) Date: Wed, 28 Jun 2006 13:58:20 +0100 Subject: Connectionists: Academic Fellowship in Developmental and Cognitive Neuroscience Message-ID: This position may be of interest to readers of this list. Please do not respond directly to me. Best regards, Denis Mareschal Birkbeck (University of London) RCUK Academic Fellowship in Developmental and Cognitive Neuroscience We are seeking a talented individual to conduct research within the Centre for Brain & Cognitive Development and/or the new Wolfson Institute for Brain & Function and Development, both within the School of Psychology, Birkbeck. The RCUK Fellowship will build on new and exciting initiatives in developmental and cognitive neuroscience within the School, including a new research MRI facility scheduled to open later this year. We seek applicants who have interests that consolidate the collaborative links between members of the new Wolfson Institute, the CBCD, and the new MRI facility. We are particularly interested in candidates whose research centres on developmental functional or structural MRI. However, other candidates in developmental and cognitive neuroscience who would help to establish and reinforce collaborative bridges within the grouping will also be considered. While primarily a research fellow in the initial years, you may contribute to teaching at graduate and postgraduate level. This is initially a three-year research post leading to a permanent academic appointment, at an appropriate level, when the fellowship ends. The fellowship is jointly funded by the RCUK Academic Fellowship scheme and Birkbeck. Salary will be £22,492 rising to £32,450 per annum at RA1A of the Research Staff Salary Scales. Initial salary award will be dependent on skills and experience. Informal enquiries should be made to Professor Mark Johnson, Director of the Centre for Brain & Cognitive Development, by email at mark.Johnson at bbk.ac.uk Download the job description and application form by clicking on 'Further details' below or email humanresources at bbk.ac.uk quoting reference APS153. Closing date: 14 August 2006 Birkbeck is an equal opportunities employer. Further details can be obtained from: http://www.jobs.ac.uk/fp/CF400/fp_index.html -- ================================================= Dr.
Denis Mareschal Centre for Brain and Cognitive Development School of Psychology Birkbeck College University of London Malet St., London WC1E 7HX, UK tel +44 (0)20 7631-6582/6226 reception: 6207 fax +44 (0)20 7631-6312 http://www.psyc.bbk.ac.uk/people/academic/mareschal_d/ ================================================= From t.heskes at science.ru.nl Fri Jun 30 05:59:58 2006 From: t.heskes at science.ru.nl (Tom Heskes) Date: Fri, 30 Jun 2006 11:59:58 +0200 Subject: Connectionists: Neurocomputing volume 69 (issues 13-15) Message-ID: <44A4F61E.3070306@science.ru.nl> Neurocomputing volume 69 (issues 13-15) ------- SPECIAL PAPERS (Blind Source Separation and Independent Component Analysis edited by Carlos G. Puntonet and Elmar W. Lang) Blind source separation and independent component analysis Carlos G. Puntonet and Elmar W. Lang Quasi-optimal EASI algorithm based on the Score Function Difference (SFD) Samareh Samadi, Massoud Babaie-Zadeh and Christian Jutten Maximization of statistical moments for blind separation of sources revisited Susana Hornillo-Mellado, Rubén Martín-Clemente, Carlos G. Puntonet, José I. Acha and Juan Manuel Górriz-Sáez The use of ICA in multiplicative noise D. Blanco, B. Mulgrew, S. McLaughlin, D.P. Ruiz and M.C. Carrion Optimizing blind source separation with guided genetic algorithms J.M. Górriz, C.G. Puntonet, F. Rojas, R. Martin, S. Hornillo and E.W. Lang Sparse ICA via cluster-wise PCA Massoud Babaie-Zadeh, Christian Jutten and Ali Mansour Application of the mutual information minimization to speaker recognition/identification improvement Jordi Solé-Casals and Marcos Faundez-Zanuy Blind separation of spatio-temporal Synfire sources and visualization of neural cliques Hilit Unger and Yehoshua Y. Zeevi Denoising using local projective subspace methods P. Gruber, K. Stadlthanner, M. Böhm, F.J. Theis, E.W. Lang, A.M. Tomé, A.R. Teixeira, C.G. Puntonet and J.M. Gorriz Sáez Spatio-temporal dynamics in fMRI recordings revealed with complex independent component analysis Jörn Anemüller, Jeng-Ren Duann, Terrence J. Sejnowski and Scott Makeig Capturing nonlinear dependencies in natural images using ICA and mixture of Laplacian distribution Hyun-Jin Park and Te-Won Lee Low-complexity ICA based blind multiple-input multiple-output OFDM receivers Luciano Sarperi, Xu Zhu and Asoke K. Nandi ------- REGULAR PAPERS A sequential algorithm for feed-forward neural networks with optimal coefficients and interacting frequencies Enrique Romero and René Alquézar A neurocomputational model of stochastic resonance and aging Shu-Chen Li, Timo von Oertzen and Ulman Lindenberger Evolving networks of integrate-and-fire neurons Francisco J. Veredas, Francisco J. Vico and José M. Alonso Improving RBF networks performance in regression tasks by means of a supervised fuzzy clustering Antonino Staiano, Roberto Tagliaferri and Witold Pedrycz First-order approximation of Gram-Schmidt orthonormalization beats deflation in coupled PCA learning rules Ralf Möller A bio-inspired visual collision detection mechanism for cars: Optimisation of a model of a locust neuron to a novel environment Shigang Yue, F. Claire Rind, Matthias S. Keil, Jorge Cuadri and Richard Stafford Speed estimation with propagation maps C.
Rasche From outliers to prototypes: Ordering data Stefan Harmeling, Guido Dornhege, David Tax, Frank Meinecke and Klaus-Robert Müller Exponential stability and periodic solutions of fuzzy cellular neural networks with time-varying delays Kun Yuan, Jinde Cao and Jianming Deng Chaotic dynamics for multi-value content addressable memory Liang Zhao, Juan C.G. Cáceres, Antonio P.G. Damiance Jr. and Harold Szu A generalized addressing concept for correlative memory and neural networks V.V. Smirnov Stability of feedback error learning method with time delay Aiko Miyamura Ideta ------- BRIEF PAPERS Improved sparse least-squares support vector machine classifiers Yuangui Li, Chen Lin and Weidong Zhang Training sparse MS-SVR with an expectation-maximization algorithm D.N. Zheng, J.X. Wang and Y.N. Zhao An improved discrete Hopfield neural network for Max-Cut problems Jiahai Wang An experimental comparison of ensemble of classifiers for biometric data Loris Nanni and Alessandra Lumini An iterative algorithm for entropy regularized likelihood learning on Gaussian mixture with automatic model selection Zhiwu Lu An approach for improving face recognition in presence of inaccurate detection Loris Nanni and Alessandra Lumini A novel dimensionality-reduction approach for face recognition Fengxi Song, David Zhang and Jingyu Yang MppS: An ensemble of support vector machine based on multiple physicochemical properties of amino acids Loris Nanni and Alessandra Lumini SVM-based CDMA receiver with incremental active learning Elisa Ricci, Luca Rugini and Renzo Perfetti Locally principal component learning for face representation and recognition Jian Yang, David Zhang and Jing-yu Yang Random Bands: A novel ensemble for fingerprint matching Loris Nanni and Alessandra Lumini An advanced multi-modal method for human authentication featuring biometrics data and tokenised random numbers Alessandra Lumini and Loris Nanni Diagonal Fisher linear discriminant analysis for efficient face recognition S. Noushath, G. Hemantha Kumar and P. Shivakumara Rigid medical image registration using PCA neural network Lifeng Shang, Jian Cheng Lv and Zhang Yi Selective attention-based novelty scene detection in dynamic environments Sang-Woo Ban and Minho Lee Improved pruning strategy for radial basis function networks with dynamic decay adjustment Elisa Ricci and Renzo Perfetti An alternative formulation of kernel LPP with application to image recognition Guiyu Feng, Dewen Hu, David Zhang and Zongtan Zhou A reliable method for designing an automatic karyotyping system Loris Nanni An optimal kernel feature extractor and its application to EEG signal classification Shiliang Sun and Changshui Zhang Estimation of software project effort with support vector regression Adriano L.I. Oliveira Support vector machine interpretation A. Navia-Vázquez and E. Parrado-Hernández On the almost periodic solution of generalized Hopfield neural networks with time-varying delays Yiguang Liu, Zhisheng You and Liping Cao ISOLLE: LLE with geodesic distance Claudio Varini, Andreas Degenhard and Tim W.
Nattkemper The Population-Based Incremental Learning Algorithm converges to local optima Reza Rastegar and Arash Hariri Stability in static delayed neural networks: A nonlinear measure approach Ping Li and Jinde Cao Palmprint recognition using FastICA algorithm and radial basis probabilistic neural network Li Shang, De-Shuang Huang, Ji-Xiang Du and Chun-Hou Zheng Hyperchaos and bifurcation in a new class of four-dimensional Hopfield neural networks Yan Huang and Xiao-Song Yang Sub-intrapersonal space analysis for face recognition Xiaoyang Tan, Jun Liu and Songcan Chen Experimental modeling using modified cascade correlation RBF networks for a four DOF tilt rotor aircraft platform Changjie Yu, Jihong Zhu, Jinchun Hu and Zengqi Sun A reformative kernel Fisher discriminant algorithm and its application to face recognition Yu-jie Zheng, Jian Yang, Jing-yu Yang and Xiao-jun Wu ------- JOURNAL SITE: http://www.elsevier.com/locate/neucom SCIENCE DIRECT: http://www.sciencedirect.com/science/issue/5660-2006-999309986-626228 From nicola.cancedda at xrce.xerox.com Tue Jun 27 12:57:51 2006 From: nicola.cancedda at xrce.xerox.com (Nicola Cancedda) Date: Tue, 27 Jun 2006 18:57:51 +0200 Subject: Connectionists: Post-doc position at XRCE, Grenoble Message-ID: <44A1638F.5030101@xrce.xerox.com> Position: postdoc researcher, 18-month contract The Learning and Content Analysis (LCA) group of the Xerox Research Centre Europe (XRCE) is developing data mining technologies, some of which have been delivered to and are now being used by several Xerox business groups, including text clustering and categorization, in both monolingual and multilingual settings. LCA currently focuses on developing new solutions for: 1. Multilingual applications: multilingual lexicon extraction, cross-language information retrieval, inference of machine translation systems from multilingual corpora 2. Image & text mining: categorization, clustering and retrieval of text and image data (this project is conducted in collaboration with the Image Processing group of XRCE) 3. Device mining, where devices are considered either in isolation or connected via a network. This activity aims, among other things, at building systems for diagnosing devices, for preventive maintenance, for monitoring the evolution of devices over time, and for issuing early warnings of failure. XRCE is seeking a postdoc researcher to contribute to the multilingual application activities in the context of the EU-funded project "Statistical Multilingual Analysis for Retrieval and Translation" (SMART). This project is focused on advancing the state of the art in Statistical Machine Translation and Cross-Language Textual Information Access technologies by means of modern Statistical Learning. XRCE is involved in the whole spectrum of SMART activities: * Advanced models for Statistical Machine Translation * Advanced Language Models * Translation and Language Model Adaptation and Combination * Cross-Language Textual Information Access Contributions to some or all of these activities are expected, in close collaboration with other members of the group. XRCE maintains a high level of publications (in both journals and conferences) and patents. We expect the successful candidate to be part of these efforts. Required experience and qualifications: * PhD in computer science, statistics, mathematics or optimization with excellent knowledge of machine learning (especially statistical learning).
* Experience in the implementation of scalable optimization algorithms and/or in statistical machine translation is a definite plus. * Good programming skills in C, C++, Python or Matlab. * A good command of English is required, as well as open-mindedness and the will to collaborate with a team. Applications, accompanied by a CV and covering letter, should be sent to xrce-candidates at xrce.xerox.com The duration of the contract is 18 months, starting October 1st, 2006. Xerox Research Centre Europe (XRCE) is a young, dynamic research organization, which creates innovative new business opportunities for Xerox in the digital and Internet markets. XRCE is a multicultural and multidisciplinary organization set in Grenoble, France. We have renowned expertise in natural language processing, work practice studies, image processing and document structure. The variety of both cultures and disciplines at XRCE makes it both an interesting and stimulating environment to work in, leading to often unexpected discoveries! XRCE is part of the Xerox Innovation Group made up of 1000 researchers and engineers in five world-renowned research and technology centers. The Grenoble site is set in a park in the heart of the French Alps in a stunning location only a few kilometres from the city centre. The city of Grenoble has a large scientific community made up of national research institutes (CNRS, Universities, INRIA) and private industries. Grenoble is close to both the Swiss and Italian borders and is the ideal place for skiing, climbing, hang gliding and all types of mountain sports. -- ---------------------------------------------------------+ Nicola Cancedda, Xerox Research Centre Europe 6, Chemin de Maupertuis, 38240 Meylan France Tel.: +33 (0)4 76.61.51.59 Fax.: +33 (0)4 76.61.50.99 E-mail: Nicola.Cancedda at xrce.xerox.com Home Page: http://www.xrce.xerox.com/people/cancedda/ From oby at cs.tu-berlin.de Fri Jun 30 05:42:58 2006 From: oby at cs.tu-berlin.de (Klaus Obermayer) Date: Fri, 30 Jun 2006 11:42:58 +0200 (MEST) Subject: Connectionists: simultaneous matrix diagonalization Message-ID: Dear All, I would like to announce the implementation of the QDIAG algorithm for simultaneous matrix diagonalization (e.g. for source separation problems). The software is available for downloading via the web address: http://ni.cs.tu-berlin.de/software/ using the link "QDIAG". The QDIAG method is described in detail in the upcoming publication: R. Vollgraf and K. Obermayer, Quadratic Optimization for Simultaneous Matrix Diagonalization, IEEE Transactions on Signal Processing, 2006, in press. The abstract is attached. All the best Klaus ------------------------------------------------------------------------ Quadratic Optimization for Simultaneous Matrix Diagonalization R. Vollgraf and K. Obermayer Simultaneous diagonalization of a set of matrices is a technique which has numerous applications in statistical signal processing and multivariate statistics. Although objective functions in a least squares sense can be easily formulated, their minimization is not trivial, because constraints and 4th order terms are usually involved. Most known optimization algorithms are, therefore, subject to certain restrictions on the class of problems: orthogonal transformations, sets of symmetric, Hermitian or positive definite matrices, to name a few. In this work we present a new algorithm called QDIAG, which splits the overall optimization problem into a sequence of simpler second order sub-problems.
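(To make the abstract's notion of joint diagonalization concrete, here is a small NumPy illustration of the usual least-squares off-diagonality objective for a set of matrices that share a common diagonalizer. This is only a sketch of the objective being minimized, not the QDIAG algorithm itself, and all names are made up for the example.)

    import numpy as np

    def offdiag_cost(W, Cs):
        # Sum of squared off-diagonal entries of W C_k W^T over all C_k.
        # W may be rectangular; Cs is a list of square matrices.
        total = 0.0
        for C in Cs:
            D = W @ C @ W.T
            total += np.sum(D ** 2) - np.sum(np.diag(D) ** 2)
        return total

    # Toy check: matrices built from a common mixing matrix A are jointly
    # diagonalized exactly by W = inv(A), so the cost drops to ~0 there.
    rng = np.random.default_rng(1)
    n, K = 4, 6
    A = rng.normal(size=(n, n))
    Cs = [A @ np.diag(rng.uniform(0.1, 1.0, size=n)) @ A.T for _ in range(K)]
    print(offdiag_cost(np.eye(n), Cs))         # generally large
    print(offdiag_cost(np.linalg.inv(A), Cs))  # close to zero

(QDIAG's contribution, as described in the abstract, is an efficient way of minimizing this kind of objective without the usual restrictions on the transformation or on the matrices.)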
There are no restrictions imposed on the transformation matrix, which may be non-orthogonal, indefinite or even rectangular, and there are no restrictions, except for one, imposed on the matrices to be diagonalized, regarding their symmetry or definiteness. We apply the new method to Second Order Blind Source Separation and show that the algorithm converges fast and reliably. It allows for an implementation with a complexity independent of the number of matrices and, therefore, is particularly suitable for problems dealing with large sets of matrices. ------------------------------------------------------------------------ Prof. Dr. Klaus Obermayer phone: 49-30-314-73442 FR2-1, NI, Fakultaet IV 49-30-314-73120 Technische Universitaet Berlin fax: 49-30-314-73121 Franklinstrasse 28/29 e-mail: oby at cs.tu-berlin.de 10587 Berlin, Germany http://ni.cs.tu-berlin.de/ From t.zito at biologie.hu-berlin.de Fri Jun 30 08:53:05 2006 From: t.zito at biologie.hu-berlin.de (Tiziano Zito) Date: Fri, 30 Jun 2006 14:53:05 +0200 Subject: Connectionists: MDP-2.0 released Message-ID: <20060630125305.GC16597@itb.biologie.hu-berlin.de> MDP version 2.0 has been released! What is it? ----------- Modular toolkit for Data Processing (MDP) is a data processing framework written in Python. From the user's perspective, MDP consists of a collection of trainable supervised and unsupervised algorithms that can be combined into data processing flows. The base of readily available algorithms includes Principal Component Analysis, two flavors of Independent Component Analysis, Slow Feature Analysis, Gaussian Classifiers, Growing Neural Gas, Fisher Discriminant Analysis, and Factor Analysis. From the developer's perspective, MDP is a framework to make the implementation of new algorithms easier. MDP takes care of tedious tasks like numerical type and dimensionality checking, leaving the developer free to concentrate on the implementation of the training and execution phases. The new elements then automatically integrate with the rest of the library. As its user base is increasing, MDP might be a good candidate for becoming a common repository of user-supplied, freely available, Python-implemented data processing algorithms. Resources --------- Download: http://sourceforge.net/project/showfiles.php?group_id=116959 Homepage: http://mdp-toolkit.sourceforge.net Mailing list: http://sourceforge.net/mail/?group_id=116959 What's new in version 2.0? -------------------------- MDP 2.0 introduces some important structural changes. It is now possible to implement nodes with multiple training phases and even nodes with an undetermined number of phases. This allows, for example, the implementation of algorithms that need to collect some statistics on the whole input before proceeding with the actual training, or others that need to iterate over a training phase until a convergence criterion is satisfied. The ability to train each phase using chunks of input data is maintained if the chunks are generated with iterators. Nodes that require supervised training can be defined in a very straightforward way by passing additional arguments (e.g., labels or a target output) to the 'train' method. New algorithms have been added, expanding the base of readily available basic data processing elements. MDP is now based exclusively on the NumPy Python numerical extension.
-- Tiziano Zito Institute for Theoretical Biology Humboldt-Universitaet zu Berlin Invalidenstrasse, 43 D-10115 Berlin, Germany Pietro Berkes Gatsby Computational Neuroscience Unit Alexandra House, 17 Queen Square London WC1N 3AR, United Kingdom
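(A minimal usage sketch of the flow interface described in the MDP announcement above. It follows the way the MDP tutorial documents the Flow and node classes; the exact node names and call signatures, such as PCANode, SFANode and output_dim, are assumptions to be checked against the documentation linked in the announcement.)

    import numpy as np
    import mdp

    # Toy data: 1000 observations of a 10-dimensional signal
    x = np.random.random((1000, 10))

    # Chain two trainable nodes into a flow: PCA down to 5 components,
    # followed by Slow Feature Analysis on the reduced signal.
    flow = mdp.Flow([mdp.nodes.PCANode(output_dim=5), mdp.nodes.SFANode()])

    # Train the nodes in sequence on the data, then run the trained flow.
    flow.train(x)
    y = flow.execute(x)
    print(y.shape)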