From Connectionists-Request at CS.CMU.EDU Mon Mar 1 00:05:16 1993 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Mon, 01 Mar 93 00:05:16 EST Subject: Bi-monthly Reminder Message-ID: <17897.730962316@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated January 4, 1993. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. Happy hacking. -- Dave Touretzky & David Redish --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new text books related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. 
A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. - Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stand for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8). 2. Login as user anonymous with password your username. 3. 'cd' directly to one of the following directories: /usr/connect/connectionists/archives /usr/connect/connectionists/bibliographies 4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory. 5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- How to FTP Files from the Neuroprose Archive -------------------------------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints or articles in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. (Along this line, single spaced versions, if possible, will help!) To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. If you do offer hard copies, be prepared for an onslaught. 
One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! Experience dictates the preferred paradigm is to announce an FTP only version with a prominent "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your announcement to the connectionist mailing list. Current naming convention is author.title.filetype[.Z] where title is enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. Very large files (e.g. over 200k) must be squashed (with either a sigmoid function :) or the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transfering files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is attached as an appendix, and a shell script called Getps in the directory can perform the necessary retrival operations. For further questions contact: Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Phone: (614) 292-4890 Here is an example of naming and placing a file: gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps gvax> compress rosenblatt.reborn.ps gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put rosenblatt.reborn.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. The INDEX sentence is "Boastful statements by the deceased leader of the neurocomputing field." Please let me know when it is ready to announce to Connectionists at cmu. BTW, I enjoyed reading your review of the new edition of Perceptrons! Frank ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "nn-bench-request at cs.cmu.edu".  From prechelt at ira.uka.de Mon Mar 1 09:53:53 1993 From: prechelt at ira.uka.de (prechelt@ira.uka.de) Date: Mon, 01 Mar 93 15:53:53 +0100 Subject: Putting a NN on 16k processors In-Reply-To: Your message of Tue, 23 Feb 93 18:56:11 +0000. <9302231856.AA22594@cato.robots.ox.ac.uk> Message-ID: > Subject: Re: Squashing functions (continued) .. > Concerning memory requirements (eg, MasPar MP1). I don't see why I need 4 bytes .. 
> Apart of that, seems to be quite complicated to put a nn on 16K processors ... > how do you do that ? 1. Use the Nettalk-architecture which consumes up to two processors per connection. 2. Additionally, replicate the net and train on several examples at once. 3. Don't use TOO small nets and training sets. I am currently working on a neural algorithm programming language and an optimizing compiler for it that use exactly these techniques. Lutz Lutz Prechelt (email: prechelt at ira.uka.de) | Whenever you Institut fuer Programmstrukturen und Datenorganisation | complicate things, Universitaet Karlsruhe; D-7500 Karlsruhe 1; Germany | they get (Voice: ++49/721/608-4068, FAX: ++49/721/694092) | less simple.  From port at cs.indiana.edu Mon Mar 1 22:18:20 1993 From: port at cs.indiana.edu (Robert Port) Date: Mon, 1 Mar 1993 22:18:20 -0500 Subject: mspt at neuroprose on schemes of composition Message-ID: Recently posted to neuroprose: `Beyond Symbolic: Prolegomena to a Kama-Sutra of Compositionality' by Timothy van Gelder and Robert Port Cognitive Science Program, Indiana University, Bloomington, Indiana, 47405 (tgelder at indiana.edu, port at indiana.edu) Compositionality, in the sense intended here, refers to the way that a complex representation is physically constructed of its parts. Most representations used in cognitive science exhibit variations of one specific kind of compositionality, that which is exhibited by printed sentences and LISP data structures. There are, however, many other kinds. In this paper, we describe six important dimensions for describing compositionality schemes: first, their TOKENS may be (1) static or dynamic, (2) analog or digital, and (3) arbitrary or not. Then, their COMBINATION may be (4) by concatenation or not, (5) static or temporal (that is, in-time), and may involve (6) strict syntactic conformity or not. These dimensions define a huge space of possible kinds of compositionality. Many of these kinds may turn out to be useful in modeling human cognition. In our discussion, we highlight a particular variety of compositionality employing what we call `dynamic representations'. These representations differ from traditional `symbolic' representations along all six of the critical dimensions. In general, we urge cognitive scientists to be more adventurous in exploring the variety of exotic techniques that are available for composing complex structures from simple ones. The postscript file for this paper (17 pages including 3 figures) may be obtained by ftp from the neuroprose archive at Ohio State University maintained by the proud new father of Dylan Seth Pollack, named Jordan. The file is: vangelder.kamasutra.Z To pick it up, do: unix> ftp archive.cis.ohio-state.edu (login as user:'anonymous' using your e-mail address as the password) ftp> binary ftp> cd pub/neuroprose ftp> get vangelder.kamasutra.ps.Z ftp> quit unix> vangelder.kamasutra.ps.Z unix> uncompress vangelder.kamasutra.ps unix> lpr vangelder.kamasutra.ps (for a postscript printer)  From tedwards at wam.umd.edu Tue Mar 2 01:43:37 1993 From: tedwards at wam.umd.edu (Technoshaman Tom) Date: Tue, 2 Mar 1993 01:43:37 -0500 (EST) Subject: Putting a NN on 16k processors In-Reply-To: <9303012333.AA16677@zippy.cs.UMD.EDU> Message-ID: On Mon, 1 Mar 1993 prechelt at ira.uka.de wrote: > > Apart of that, seems to be quite complicated to put a nn on 16K processors ... > > how do you do that ? > 1. Use the Nettalk-architecture which consumes up to two processors per > connection. > 2. 
>    Additionally, replicate the net and train on several examples at once.
> 3. Don't use TOO small nets and training sets.

The method you use for parallelizing a NN depends on the size of your nets,
the size of your training set, and restrictions on training mechanisms. One
fairly obvious method, which is easy to code in a high-level language, is
batch training described by linear algebra equations (matrix multiplication,
transposition, addition, etc.). However, in many situations your net will be
small enough to replicate on each processor, since usually you will have many
examples to train on relative to the size of the net.

-Thomas Edwards
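To make the matrix formulation above concrete, here is a rough sketch of one
batch-training step for a two-layer net, written entirely as matrix
multiplications, transpositions and additions. This is our illustration, not
code from the thread: NumPy is used only as convenient notation, and the layer
sizes, batch size and learning rate are invented.

import numpy as np

rng = np.random.default_rng(0)
n_examples, n_in, n_hid, n_out = 16384, 20, 10, 5   # invented sizes ("16K" rows)

X = rng.normal(size=(n_examples, n_in))      # one training example per row
T = rng.normal(size=(n_examples, n_out))     # targets (dummy data here)
W1 = rng.normal(0, 0.1, (n_in, n_hid))       # input-to-hidden weights
W2 = rng.normal(0, 0.1, (n_hid, n_out))      # hidden-to-output weights
lr = 0.1

# Forward pass: every example is pushed through the same weights at once.
H = np.tanh(X @ W1)                          # hidden activations, (N, n_hid)
Y = np.tanh(H @ W2)                          # outputs,            (N, n_out)

# Backward pass: deltas and weight gradients are again just matrix algebra.
dY = (Y - T) * (1.0 - Y ** 2)                # output-layer deltas
dH = (dY @ W2.T) * (1.0 - H ** 2)            # hidden-layer deltas
W2 -= lr * (H.T @ dY) / n_examples           # batch gradient updates
W1 -= lr * (X.T @ dH) / n_examples

print("batch mean squared error:", float(np.mean((Y - T) ** 2)))

Each row of X, H and Y is one training example, so this is the
replicate-the-net, one-example-per-processor scheme mentioned above; on a SIMD
machine such as the MasPar each row would live on its own processor, and the
same few matrix products are all that is needed.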
From marwan at sedal.su.OZ.AU Tue Mar 2 09:14:36 1993
From: marwan at sedal.su.OZ.AU (Marwan Jabri)
Date: Wed, 3 Mar 1993 01:14:36 +1100
Subject: Beta sites for MUME
Message-ID: <9303021414.AA14590@sedal.sedal.su.OZ.AU>

We are seeking university beta sites for testing a new release (0.6) of a
multi-net multi-algorithm connectionist simulator (MUME) for the following
platforms:

- HPs 9000/700
- SGIs
- DEC Alphas
- PC DOS (with DJGCC)

If interested, please send your name, email, affiliation, address and fax
number to my email address below. Note that starting with release 0.6, MUME
(including source code) will be made available to universities through FTP,
but only after signature of a license protecting the University of Sydney and
the authors.

Marwan
-------------------------------------------------------------------
Marwan Jabri                                Email: marwan at sedal.su.oz.au
Senior Lecturer                             Tel: (+61-2) 692-2240
SEDAL, Electrical Engineering,              Fax: 660-1228
Sydney University, NSW 2006, Australia      Mobile: (+61-18) 259-086

From reggia at cs.UMD.EDU Tue Mar 2 09:31:31 1993
From: reggia at cs.UMD.EDU (James A. Reggia)
Date: Tue, 2 Mar 93 09:31:31 -0500
Subject: New Postdoc Position in Neural Modelling
Message-ID: <9303021431.AA27194@avion.cs.UMD.EDU>

             Post-Doctoral Position in Neural Modelling

A new post-doctoral position in computational neuroscience will be available
at the University of Maryland, College Park, MD starting between June 1 and
Sept. 1, 1993. This research position will center on modelling neocortical
self-organization and plasticity. Requirements are a PhD in computer science,
neuroscience, applied math, or a related area by the time the position starts,
experience with neural modelling, and familiarity with the language C. The
position will last one or (preferably) two years. The University of Maryland
campus is located just outside of Washington, DC.

If you would like to be considered for this position, please send via regular
mail a cover letter expressing your interest and desired starting date, a copy
of your cv, the names of two possible references (with their address, phone
number, fax number, and email address), and any other information you feel
would be relevant to

     James Reggia
     Dept. of Computer Science
     A. V. Williams Bldg.
     University of Maryland
     College Park, MD 20742

or send this information via FAX at (301)405-6707. (Applications will NOT be
accepted via email.) Closing date for receipt of applications is March 26,
1993. If you have questions about the position please send email to
reggia at cs.umd.edu.

From rswiniar%saturn at sdsu.edu Tue Mar 2 12:34:39 1993
From: rswiniar%saturn at sdsu.edu (Dr. Roman Swiniarski)
Date: Tue, 2 Mar 93 09:34:39 PST
Subject: Neural, fuzzy, rough systems
Message-ID: <9303021734.AA02913@saturn.SDSU.EDU>

Dear Madam/Sir/Professor,

I would like to provide information about the following short course. We will
be very happy to introduce these distinguished world-class scientists and our
friends: Professor L.K. Hansen, Professor W. Pedrycz and Professor A. Skowron.

Best regards,
Roman Swiniarski

------------------------------------------------------------------------------
     NEURAL NETWORKS. FUZZY AND ROUGH SYSTEMS. THEORY AND APPLICATIONS.
                   Friday, April 2, 1993, room BAM 341

The short course is sponsored by the Interdisciplinary Research Center for
Scientific Modeling and Computation at the Department of Mathematical
Sciences, San Diego State University.

8:15-11:30    Professor L. K. Hansen, Technical University of Denmark, Denmark
              1. Introduction to neural networks.
              2. Neural Networks for Signal Processing, Prediction and Image
                 Processing.

11:30-1:00 pm R. Swiniarski, San Diego State University
              1. Application of neural networks to systems, adaptive control,
                 and genetics.

Break

2:00-4:00     Professor W. Pedrycz, University of Manitoba, Canada
              1. Introduction to Fuzzy Sets.
              2. Application of Fuzzy Sets:
                 - knowledge based computations and logic-oriented
                   neurocomputing
                 - fuzzy modeling
                 - models of fuzzy reasoning

4:00-6:30     Professor A. Skowron, University of Warsaw, Poland
              1. Introduction to Rough Sets and Decision Systems.
              2. Applications of Rough Sets and Decision Systems.
              3. Neural Networks, Fuzzy Systems, Rough Sets and Evidence Theory.

There will be an $80 (students $40) preregistration fee. To register, please
send your name and affiliation along with a check to

     Interdisciplinary Research Center
     Department of Mathematical Sciences
     San Diego State University
     San Diego, California 92182-0314, U.S.A.

The check should be made out to SDSU Interdisciplinary Research Center. The
registration fee after March 18 will be $100. The number of participants is
limited. Should you need further information, please contact Roman Swiniarski
(619) 594-5538 rswiniar at saturn.sdsu.edu or Jose Castillo (619) 594-7205
castillo at math.sdsu.edu.

You are cordially invited to participate in the short course.

From tsung at cs.ucsd.edu Tue Mar 2 15:12:27 1993
From: tsung at cs.ucsd.edu (Fu-Sheng Tsung)
Date: Tue, 2 Mar 93 12:12:27 -0800
Subject: paper available: learning attractors
Message-ID: <9303022012.AA26472@roland>

The following paper has been placed in the Neuroprose archive. Comments and
questions are welcome.

*******************************************************************
              Phase-Space Learning for Recurrent Networks

                Fu-Sheng Tsung and Garrison W. Cottrell
                              0114 CSE
                            UC San Diego
                         La Jolla, CA 92093

Abstract: We study the problem of learning nonstatic attractors in recurrent
networks. With concepts from dynamical systems theory, we show that this
problem can be reduced to three sub-problems: (a) embedding the temporal
trajectory in phase space, (b) approximating the local vector field, and (c)
function approximation using feedforward networks. This general framework
overcomes problems with traditional methods by providing more appropriate
error gradients and enforcing stability explicitly. We describe an online
version of our method, called ARTISTE, which can learn periodic attractors
without teacher forcing.
*******************************************************************

Thanks to Jordan Pollack for providing this service, despite being the father
of a new baby boy!
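To give a concrete picture of the three sub-problems listed in the abstract,
here is a small numerical sketch. It is ours, not the authors' code: the
signal, embedding delay, network size and learning rate are all invented for
illustration. A periodic scalar signal is delay-embedded into a two-dimensional
phase space, a one-hidden-layer feedforward net is fit to the map from each
embedded state to the next (a sampled version of the local vector field), and
the learned map is then iterated autonomously, i.e. without teacher forcing.

import numpy as np

rng = np.random.default_rng(0)

# (a) Delay-embed a periodic scalar signal in a 2-D phase space.
x = np.sin(np.arange(0, 100, 0.1))
tau = 5                                           # embedding delay, in samples
S      = np.stack([x[tau:-1],   x[:-tau - 1]], axis=1)   # state s_k
S_next = np.stack([x[tau + 1:], x[1:-tau]],    axis=1)   # state s_{k+1}

# (b), (c) Fit a one-hidden-layer feedforward net to the map s_k -> s_{k+1},
# i.e. approximate the local vector field of the attractor.
n_hid, lr = 16, 0.05
W1 = rng.normal(0, 0.5, (2, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0, 0.5, (n_hid, 2)); b2 = np.zeros(2)
for epoch in range(2000):
    H = np.tanh(S @ W1 + b1)
    P = H @ W2 + b2                               # predicted next states
    err = (P - S_next) / len(S)
    dH = (err @ W2.T) * (1 - H ** 2)
    W2 -= lr * (H.T @ err); b2 -= lr * err.sum(0)
    W1 -= lr * (S.T @ dH);  b1 -= lr * dH.sum(0)

# Iterate the learned map autonomously (no teacher forcing) and inspect
# whether the orbit stays near the training trajectory.
s = S[0].copy()
for _ in range(300):
    s = np.tanh(s @ W1 + b1) @ W2 + b2
print("a point on the learned orbit:", np.round(s, 3))

In practice the embedding dimension and delay have to be chosen for the data
at hand, and whether the autonomous orbit stays near the original attractor is
precisely the stability question the paper addresses.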
---------------------------------------------------------------- FTP INSTRUCTIONS "Getps tsung.phase.ps.Z" if you have the shell script, or unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get tsung.phase.ps.Z ftp> bye unix% zcat tsung.phase.ps.Z | lpr (the paper is 18 pages) ----------------------------------------------------------------  From gary at cs.ucsd.edu Tue Mar 2 20:44:04 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Tue, 2 Mar 93 17:44:04 -0800 Subject: Seminar announcement Message-ID: <9303030144.AA21268@odin.ucsd.edu> SEMINAR Cascading Maps: Reorganization of Cortex in the Dog Garrison W. Cottrell Department of Dog Science Southern California Condominium College "My dog is an old dog. A major difference between owning a young dog and owning an old dog is walks. Young dog walks exercise the master; old dog walks exercise the master's patience. The major reason for this is _s_n_i_f_f_u_l_a _a_d _n_a_u_s_e_u_o_s_o, Latin for "this dog can't stop sniffing stuff". Lest you think I am talking about something illegal, this dog just will not stop sniffing a bush, a tree, or even a bare spot on the sidewalk. You could be there for 10 minutes and he wouldn't be done yet." --Professor Ebeneezer Huggenbotham The old dog sniffing problem (ODSP), or "Huggenbotham's tenth problem", has provided a rich source of data for recent advances in connectionist dog modeling. Ever since McDonnell & Pew's (McD & P) "Brain state in a Ball"[1] model of the dog olfactory bulb as an multi-state attractor, a debate has sprung up around the issue of whether a connectionist net can actually exhibit dog-like behavior, or whether you needed to be a dog to possess dog-like behavior (McDonnell & Pew, 1986; Peepee, 1988; Pluckit & Walkman, 1989). McD & P's model hypothesizes that since the brain state is in a ball, it can't ever get stuck in a corner, so it just wanders the surface of the sphere[2]. Thus the network never habituates to incoming signals. Peepee's critique of McD & P's model was that since McD & P's model only contained 3 units, it could never represent the variety of smells available to old dogs. Pluckit & Walkman showed that indeed, there were an infinite number of points on a hypersphere, so anything was representable. Further, they showed that If one assumed the models had started with many more units in a hypercube, and lost them decrementally, converging on a 3-D sphere, that one could account for many of the _d_e_g_r_a_d_a_t_i_o_n_a_l _a_s_p_e_c_t_s of the old dog's mind. While these models accounted for many of the psychological findings, the present paper seeks to integrate recent neurophysiological findings into a new understanding of old dog behavior. One of the most striking phenomena found today in the cortical map literature is the amazingly fast reorganization of cortical maps. In monkeys whose fingers have been severed, the somatosensory map reorganizes to represent the other fingers more than before[3]. Fortunately for dogs, they do not have fingers. Fortunately for monkeys, it is also found that the map will reorganize without vivisection. If the monkey is simply required to use a particular fingertip for some task, the map will allocate more space to that fingertip. The surprising thing about this cortical reorganization is that it is 1) fast, happening over hours or days and 2) present in adults[4]. This suggests that our cortical maps may be constantly reorganizing. 
Furthermore, since this appears to be a Hebbian-based reorganization, dependent upon activity, other maps connected to this one should also reorganize. That is, reorganization will not be confined to somatosensory maps, but will _c_a_s_c_a_d_e to other areas. These observations suggest a new theory about representation in the elder dog's cortex. As we all know, _s_m_e_l_l is the sense most associated with memory. Since input from the eyes and ears degrades with age, the olfactory input will begin to dominate brain activity in the older dog. The visual and auditory maps will reorganize to respond to smell. They will not of course, _r_e_p_r_e_s_e_n_t smell, but will be driven more by smell than by eyes because of activity dependent remapping. This will cause re-activation of vivid scenes associated with those smells. Hence, this suggests that the reason older dogs spend an order of magnitude greater time than a younger dog sniffing the same spot is that they are _r_e_m_i_n_i_s_c_i_n_g. ____________________ [1]Unlike Anderson's model of human cortex as a "Brain State in a Box", the states of the network are not allowed to extend outside of a _h_y_p_e_r_s_p_h_e_r_e. This explains why hu- mans are smarter than dogs: Humans can reach the _c_o_r_n_e_r_s of the hypercube. [2]Recently, more statistically based models have argued that the Kullback-Leibler information transmitted by an old nose was on the order of 1 bit per second (Chapel, 93), sug- gesting the behavior is entirely a peripheral deficit. Ex- perimental 1200 baud "nodem"'s are being implanted in several dogs as a possible cure. [3]This line of research suggests that some scientists did not pull enough legs off of spiders when they were younger. [4]This also suggests that modern American adult males, whose somatosensory maps overrepresent certain areas, are capable of change.  From dayan at helmholtz.sdsc.edu Wed Mar 3 20:42:24 1993 From: dayan at helmholtz.sdsc.edu (Peter Dayan) Date: Wed, 3 Mar 93 17:42:24 PST Subject: Paper : Convergence of TD(lambda) Message-ID: <9303040142.AA16154@helmholtz.sdsc.edu> A postscript version of the following paper has been placed in the neuroprose archive. It has been submitted to Machine Learning, and comments/questions/refutations are eagerly solicited. Hard-copies are not available. ***************************************************************** TD(lambda) Converges with Probability 1 Peter Dayan and Terrence J Sejnowski CNL, The Salk Institute 10010 North Torrey Pines Road La Jolla, CA 92037 The methods of temporal differences allow agents to learn accurate predictions about stationary stochastic future outcomes. The learning is effectively stochastic approximation based on samples extracted from the process generating an agent's future. Sutton has proved that for a special case of temporal differences, the expected values of the predictions converge to their correct values, as larger samples are taken, and this proof has been extended to the case of general lambda. This paper proves the stronger result that the predictions of a slightly modified form of temporal difference learning converge with probability one, and shows how to quantify the rate of convergence. 
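For readers who have not seen temporal differences written out, the sketch
below shows the tabular TD(lambda) update with eligibility traces that the
convergence result concerns, on the standard bounded random-walk prediction
task. It is not taken from the paper: the Markov chain, lambda and step-size
schedule are invented for illustration. The decreasing step size is the kind
of schedule (summing to infinity, with summable squares) that probability-one
convergence results typically assume.

import numpy as np

# Tabular TD(lambda) on a 5-state random walk (illustrative example only).
# States 1..5 are nonterminal, 0 and 6 absorb; the outcome is 1 if the walk
# exits on the right, 0 on the left.  The true predictions are V[s] = s/6.
rng = np.random.default_rng(1)
lam = 0.8                                    # the lambda in TD(lambda)
V = np.full(7, 0.5)                          # value estimates
V[0] = V[6] = 0.0                            # terminal states carry no value

for episode in range(5000):
    alpha = 10.0 / (100.0 + episode)         # decreasing step size
    e = np.zeros(7)                          # eligibility traces
    s = 3                                    # start in the middle
    while s not in (0, 6):
        s_next = s + rng.choice((-1, 1))
        r = 1.0 if s_next == 6 else 0.0
        delta = r + V[s_next] - V[s]         # TD error (undiscounted)
        e *= lam                             # decay all traces
        e[s] += 1.0                          # bump the trace of the visited state
        V += alpha * delta * e               # TD(lambda) update
        s = s_next

print(np.round(V[1:6], 3), "vs. true values", np.round(np.arange(1, 6) / 6, 3))

Setting lambda = 0 gives TD(0); lambda = 1 makes each episode behave like a
Monte Carlo estimate of the final outcome.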
*****************************************************************

----------------------------------------------------------------
FTP INSTRUCTIONS

    "Getps dayan.tdl.ps.Z" if you have the shell script, or

    unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password: neuron
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get dayan.tdl.ps.Z
    ftp> bye
    unix% zcat dayan.tdl.ps.Z | lpr
----------------------------------------------------------------

From mb at tce.ing.uniroma1.it Thu Mar 4 06:55:16 1993
From: mb at tce.ing.uniroma1.it (mb@tce.ing.uniroma1.it)
Date: Thu, 4 Mar 1993 12:55:16 +0100
Subject: Economic forecasting by NN
Message-ID: <9303041155.AA06237@tce.ing.uniroma1.it>

Concerning requests for references on the topic of economic forecasting using
neural networks, many papers may be found in the proceedings of the workshop
on Parallel Applications in Statistics and Economics (PASE'92), held in
Prague, Czechoslovakia, December 7-8, 1992. They have been published in a
special issue of Neural Network World (vol. 2, no. 6, 1992). This journal is
published in Prague by the Institute of Computer and Information Science of
the Czechoslovak Academy of Sciences, edited by Prof. M. Novak. Their e-mail
is CVS15 at CSPGCS11.BITNET

Happy reading!

Marco Balsi
Dipartimento di Ingegneria Elettronica, Universita' di Roma "La Sapienza"
mb at tce.ing.uniroma1.it

From george at psychmips.york.ac.uk Thu Mar 4 11:19:37 1993
From: george at psychmips.york.ac.uk (George Bolt)
Date: Thu, 4 Mar 93 16:19:37 +0000 (GMT)
Subject: Fault Tolerance in ANN's - thesis
Message-ID:

Thesis Title: "Fault Tolerance in Artificial Neural Networks"
              by George Bolt, University of York

Available via ftp, instructions at end of abstract.

Abstract:

This thesis has examined the resilience of artificial neural networks to the
effects of faults. In particular, it addressed the question of whether neural
networks are inherently fault tolerant. Neural networks were visualised from
an abstract functional level rather than a physical implementation level to
allow their computational fault tolerance to be assessed. This high-level
approach required a methodology to be developed for the construction of fault
models. Instead of abstracting the effects of physical defects, the system
itself was abstracted and fault modes extracted from this description.
Requirements for suitable measures to assess a neural network's reliability
in the presence of faults were given, and general measures constructed. Also,
simulation frameworks were evolved which could allow comparative studies to
be made between different architectures and models.

It was found that a major influence on the reliability of neural networks is
the uniform distribution of information. Without this property, critical
faults may cause failure for certain regions of input space. This led to new
techniques being developed which ensure uniform storage.

It was shown that the basic perceptron unit possesses a degree of fault
tolerance related to the characteristics of its input data. This implied that
complex perceptron-based neural networks can be inherently fault tolerant
given suitable training algorithms. However, it was then shown that back-error
propagation for multi-layer perceptron networks (MLP's) does not produce a
suitable weight configuration. A technique involving the injection of
transient faults during back-error propagation training of MLP's was studied.
The computational factor in the resulting MLP's causing their resilience to
faults was then identified. This led to a much simpler construction method
which does not involve lengthy training times. It was then shown why the
conventional back-error propagation algorithm does not produce fault tolerant
MLP's. It was concluded that a potential for inherent fault tolerance does
exist in neural network architectures, but it is not exploited by current
training algorithms.
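As a rough illustration of what injecting transient faults during training can
look like, the sketch below zeroes a random subset of weights -- a transient
stuck-at-zero fault -- on every presentation of a backprop training run, so
that no single weight can become solely responsible for any part of the
mapping. This is only our guess at the flavour of such a scheme, not the
procedure used in the thesis; the fault model, fault rate, task and network
sizes are all invented.

import numpy as np

rng = np.random.default_rng(2)
n_in, n_hid, n_out = 8, 12, 1
fault_rate, lr = 0.05, 0.5

# Toy task (invented): majority function on random binary inputs.
X = rng.integers(0, 2, size=(256, n_in)).astype(float)
T = (X.sum(axis=1) > n_in / 2).astype(float).reshape(-1, 1)

W1 = rng.normal(0, 0.5, (n_in, n_hid))
W2 = rng.normal(0, 0.5, (n_hid, n_out))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for presentation in range(3000):
    # Transient faults: an independent random mask of weights is zeroed for
    # this presentation only; the underlying stored weights are untouched.
    M1 = (rng.random(W1.shape) > fault_rate).astype(float)
    M2 = (rng.random(W2.shape) > fault_rate).astype(float)
    H = sigmoid(X @ (W1 * M1))               # forward pass with faulted weights
    Y = sigmoid(H @ (W2 * M2))
    dY = (Y - T) * Y * (1.0 - Y)             # backprop through the same faults
    dH = (dY @ (W2 * M2).T) * H * (1.0 - H)
    W2 -= lr * (H.T @ dY) * M2 / len(X)
    W1 -= lr * (X.T @ dH) * M1 / len(X)

print("final batch MSE on the toy task:", float(np.mean((Y - T) ** 2)))

Many other fault models (sign flips, saturated weights, dead units) could be
injected the same way; the point is only that training with faults present
discourages solutions in which individual weights are critical. The ftp
session below, from the announcement, retrieves the full thesis.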
$ ftp minster.york.ac.uk
Connected to minster.york.ac.uk.
220 minster.york.ac.uk FTP server (York Tue Aug 25 11:09:10 BST 1992).
Name (minster.york.ac.uk:root): anonymous
331 Guest login ok, send email address as password.
Password: < insert your email address here >
230 Guest login ok, access restrictions apply.
ftp> cd reports
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get YCST-93.zoo
200 PORT command successful.
150 Opening BINARY mode data connection for YCST-93.zoo (1041762 bytes).
226 Transfer complete.
local: YCST-93.zoo remote: YCST-93.zoo
1041762 bytes received in 19 seconds (55 Kbytes/s)
ftp> quit
221 Goodbye
$ zoo -extract YCST-93.zoo *
$ printout

A version of "zoo" compiled for Sun 3's is also available in this directory,
just enter command "get zoo" before quitting ftp. If you have any problems,
please contact me via email.

- George Bolt
email: george at psychmips.york.ac.uk
smail: Dept. of Psychology
       University of York
       Heslington, York YO1 5DD
       U.K.
tel: +904-433155
fax: +904-433181

From dlukas at PARK.BU.EDU Thu Mar 4 16:49:13 1993
From: dlukas at PARK.BU.EDU (dlukas@PARK.BU.EDU)
Date: Thu, 4 Mar 93 16:49:13 -0500
Subject: World Conference on Neural Networks '93 (July 11-15, Portland, Oregon, USA)
Message-ID: <9303042149.AA26736@retina.bu.edu>

                    WORLD CONGRESS ON NEURAL NETWORKS
        1993 Annual Meeting of the International Neural Network Society
                    July 11-15, 1993, Portland, Oregon

WCNN'93 is the largest and most inter-disciplinary forum in the neural network
field today.

COOPERATING SOCIETIES:
  American Association for Artificial Intelligence
  American Mathematical Society
  American Physical Society
  American Psychological Society
  Association for Behavior Analysis
  Classification Society of North America
  Cognitive Science Society
  European Neural Network Society
  IEEE Computer Society
  IEEE Neural Networks Council
  International Fuzzy Systems Association
  Japanese Neural Network Society
  Society for Mathematical Biology
  Society for Mathematical Psychology
  Society of Manufacturing Engineers

PLENARY SPEAKERS INCLUDE:
  Stephen Grossberg, 3-D Vision and Figure-Ground Pop-Out
  Bart Kosko, Neural Fuzzy Systems
  Carver Mead, Real-Time On-Chip Learning in Analog VLSI Networks
  Kumpati Narendra, Intelligent Control Using Neural Networks
  Wolf Singer, Coherence as an Organizing Principle of Cortical Function

TUTORIALS INCLUDE:
  Gail Carpenter, Adaptive Resonance Theory
  Robert Desimone, Cognitive Neuroscience
  Walter Freeman, Neurobiology and Chaos
  Robert Hecht-Nielsen, Practical Applications of Neural Network Theory
  Michael Kuperstein, Neural Control and Robotics
  S.Y. Kung, Structural and Mathematical Approaches to Signal Processes
  V.S. Ramachandran, Biological Vision
  David Rumelhart, Cognitive Science
  Eric Schwartz, Neural Computation and VLSI
  Fred Watkins, Neural Fuzzy Systems
  Hal White, Supervised Learning

INVITED SPEAKERS INCLUDE:
  James A. Anderson, Programming in Associative Memory
  Gail A. Carpenter, Adaptive Resonance Theory: Recent Research and Applications
  Michael A.
Cohen, Recent Results in Neural Models of Speech and Language Perception and Recognition Judith E. Dayhoff, Applications of Temporal and Molecular Structures in Neural Systems Walter Daugherty, A Partially Self-Training System for the Protein Folding Problem Kunihiko Fukushima, Improvement of the Neocognitron and the Selective Attention Model Armin Fuchs, Brain Signals during Qualitative Changes in Patterns of Coordinated Movements Stephen Grossberg, Learning, Recognition, Reinforcement, Attention, and Timing in a Thalamo-Cortico-Hippocampal Model Dan Hammerstrom, Whither Electronic Neurocomputing? R. Hecht-Nielsen, Towards a General Theory of Data Compression James C. Houk, Spatiotemporal Patterns of Activity in an In Vitro Recurrent Network Mitsuo Kawato, Existence of an Inverse Dynamics Model in the Cerebellum Teuvo Kohonen, Boosting the Computing Power in Pattern Recognition by Unconventional Architectures S.Y. Kung, On Training Temporal Neural Networks Michael Kuperstein, Neural Controller for Catching Moving Objects in 3-D Daniel Levine, A Gated Dipole Architecture for Multi-Drive, Multi- Attribute Decision Making Erkki Oja, Nonlinear PCA: Algorithms and Applications Michael P. Perrone, Learning from what's been Learned: Supervised Learning in Multi-Neural Network Systems Michael T. Posner, Tracing Network Processes in Real Time with Scalp Electrodes Robert Sekuler, Perception of Motion: How the Brain Manages Those Thousand Points of Light John G. Taylor, M Forms of Memory Thomas P. Vogl, From Electrophysiology to a Stable Associative Learning Algorithm Allen Waxman, Rats, Robots, Monkeys and Missiles: Neural Pathways in Robot Intelligence Paul J. Werbos, Supervised Learning: Can We Escape from its Local Optimum? Bernard Widrow, Adaptive Signal Processing Shuji Yoshizawa, Dynamics and Capacity of Neural Models of Associative Memory Hussein Youssef, Comparison of Several Neural Networks in Nonlinear Dynamic System Modeling Lotfi A. Zadeh, Soft Computing, Fuzzy Logic and the Calculus of Fuzzy Graphs GENERAL CHAIR: George G. Lendaris MAIN PROGRAM CHAIRS: Stephen Grossberg and Bart Kosko SME/INNS TRACK PROGRAM CHAIRS: Kenneth Marko and Bernard Widrow IFSA/INNS TRACK PROGRAM CHAIRS: Ronald Yager and Paul Werbos COOPERATING SOCIETIES CHAIR: Mark Kon INNS OFFICERS: President: Harold Szu President-Elect: Walter Freeman Past President: Paul Werbos Executive Director: Morgan Downey BOARD OF GOVERNORS: Shun-ichi Amari Richard Andersen James A. Anderson Andrew Barto Gail Carpenter Leon Cooper Judith Dayhoff Kunihiko Fukushima Lee Giles Stephen Grossberg Mitsuo Kawato Christof Koch Teuvo Kohonen Bart Kosko C. von der Malsburg David Rumelhart John Taylor Bernard Widrow Lotfi Zadeh FOR REGISTRATION AND ADDITIONAL INFORMATION PLEASE CONTACT: WCNN'93 Talley Management Group 875 Kings Highway, Suite 200 West Deptford, NJ 08096 Tel: (609) 845-1720 FAX: (609) 853-0411 e-mail: registration at wcnn93.ee.pdx.edu Please do not reply to this account. Please use the telephone number, fax number, U.S. Mail address, or email address listed above.  From doya at crayfish.UCSD.EDU Thu Mar 4 22:24:18 1993 From: doya at crayfish.UCSD.EDU (Kenji Doya) Date: Thu, 4 Mar 93 19:24:18 PST Subject: three papers on recurrent networks Message-ID: <9303050324.AA11596@crayfish.UCSD.EDU> The following manuscripts have been placed in the Neuroprose archive. 
doya.universality.ps.Z: Universality of Fully-Connected Recurrent Neural Networks doya.bifurcation2.ps.Z: Bifurcations of Recurrent Neural Networks in Gradient Descent Learning doya.dimension.ps.Z: Dimension Reduction of Biological Neuron Models by Artificial Neural Networks Please find the abstracts and retrieving instructions below. Comments and questions are welcome. Also, some of the figures in my older papers doya.bifurcation.ps.Z (1992 IEEE Symp. on Circuits and Systems) doya.synchronization.ps.Z (NIPS 4) have been replaced. They caused printer errors in several sites. Thanks to Jordan Pollack for maintaining this fast growing archive. Kenji Doya Department of Biology, University of California, San Diego La Jolla, CA 92093-0322, USA Phone: (619)534-3954/5548 Fax: (619)534-0301 **************************************************************** Universality of Fully-Connected Recurrent Neural Networks Kenji Doya, UCSD From george at psychmips.york.ac.uk Fri Mar 5 05:37:14 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Fri, 5 Mar 93 10:37:14 +0000 (GMT) Subject: Fault Tolerance in ANN's - thesis Message-ID: Since some people have been unable to uncompress the postscript files, I have changed the method by which they are stored. In directory "reports" the thesis postscript files are held in a tar file "YCST-93.tar". The contents are also compressed using the (hopefully) standard Unix compress routine. The tar file is just under a megabyte in size. Ftp instructions: $ ftp minster.york.ac.uk Connected to minster.york.ac.uk. 220 minster.york.ac.uk FTP server (York Tue Aug 25 11:09:10 BST 1992). Name (minster.york.ac.uk:root): anonymous 331 Guest login ok, send email address as password. Password: < insert your email address here > 230 Guest login ok, access restrictions apply. ftp> cd reports 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get YCST-93.tar 200 PORT command successful. 150 Opening BINARY mode data connection for YCST-93.zoo (1041762 bytes). 226 Transfer complete. local: YCST-93.zoo remote: YCST-93.zoo 1041762 bytes received in 19 seconds (55 Kbytes/s) ftp> quit 221 Goodbye $ tar -xvf - < YCST-93.tar $ uncompress *.Z $ printout If anyone has any further problems, please contact me via email. - George Bolt email: george at psychmips.york.ac.uk smail: Dept. of Psychology University of York Heslington, York YO1 5DD U.K.  From mikewj at signal.dra.hmg.gb Fri Mar 5 12:14:04 1993 From: mikewj at signal.dra.hmg.gb (mikewj@signal.dra.hmg.gb) Date: Fri, 5 Mar 93 17:14:04 GMT Subject: The best neural networks for classification Message-ID: AA15142@ravel.dra.hmg.gb Dear Connectionists, Many connectionist simulators are geared towards comparing one neural algorithm with another. You can set different numbers of nodes, learning parameters etc. and do 20 runs (say) to get good statistical measurements on the performance of the algorithms on a dataset. I am working on the European Commission funded Statlog project, comparing a whole host of pattern classification techniques on some large, dirty, and difficult industrial data sets; techniques being tested include various neural net paradigms, as well as about 20 statistical and inductive inference methods. Instead of the usual rigorous performance figures useful for comparing different neural nets, I have to produce the BEST NETWORK I CAN, and report performance on training and test sets. 
In order to do well on the test set, I need to hold back some of the training data, and use this to evaluate the performance of networks of different sizes and trained for different lengths of time, on the remaining portion of the training data. Moreover, I would like to find a simulator which uses faster training algorithms such as conjugate gradients, which can cope with big datasets without having memory or network nightmares, and which will do the hold-out cross-validation itself, automatically. My other options are to do this by hand (which is fine), but I see greater benefits for the project, for "neural nets" as a standard technique, and for industrial users unfamiliar with the trickeries of data preparation and so on, if I can simply recommend a simulator which will find the best network for an application, without a great deal of intervention for evaluation, chopping of data and so on. Thanks for your interest; any suggestions? Mike Wynne-Jones. mikewj at signal.dra.hmg.gb  From rich at gte.com Fri Mar 5 15:07:43 1993 From: rich at gte.com (Rich Sutton) Date: Fri, 5 Mar 93 15:07:43 -0500 Subject: Reinforcement Learning workshop to follow ML93 -- Call for participation Message-ID: <9303052007.AA17647@bunny.gte.com> Call for Participation "REINFORCEMENT LEARNING: What We Know, What We Need" an Informal Workshop to follow ML93 June 30 & July 1, University of Massachusetts, Amherst Reinforcement learning is a simple way of framing the problem of an autonomous agent learning and interacting with the world to achieve a goal. This has been an active area of machine learning research for the last 5 years. The objective of this workshop is to present concisely the current state of the art in reinforcement learning and to identify and highlight critical open problems. The intended audience is all learning researchers interested in reinforcement learning; little prior knowledge of the area will be assumed. The first half of the workshop will consist mainly of tutorial presentations, and the second half will define and explore outstanding problems. The entire workshop will last approximately one and a half days. Attendance will be open to all those registered for the main part of ML93 (June 27-29). Program Committee: Rich Sutton (chair), Nils Nilsson, Leslie Kaelbling, Satinder Singh, Sridhar Mahadevan, Andy Barto, Steve Whitehead CALL FOR PAPERS. Papers are solicited that lay out relevant problem areas, i.e., for the second half of the workshop. Proposals are also solicited for polished tutorial presentations on basic topics of reinforcement learning for the first portion of the workshop. The program has yet to be established, but will probably look something like the following (all names are provisional): Session 1: Introduction The Challenge of Reinforcement Learning, by Rich Sutton History of RL, by Harry Klopf Q-learning Planning and Action Models, by Long-Ji Lin Session 2: Theory Dynamic Programming, by Andy Barto Convergence of Q-learning and TD(lambda), by Peter Dayan Session 3: Applications TD-Gammon, by Gerry Tesauro Robotics, by Sridhar Mahadevan Session 4: Extensions Prioritized Sweeping, by Andrew Moore Eligibility Traces, by Rich Sutton Sessions 5 & 6: Open Problems Generalization Hidden State (short-term memory) Hierarchical RL Search Control Incorporating Prior Knowledge Exploration ... 
If you are interested in attending the RL workshop, please register by sending a note with your name, email and physical addresses, level of interest, and a brief description of your current level of knowledge about reinforcement learning, to: sutton at gte.com OR Rich Sutton GTE Labs, MS-44 40 Sylvan Road Waltham, MA 02254  From giles at research.nj.nec.com Fri Mar 5 16:09:31 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 5 Mar 93 16:09:31 EST Subject: Reprint: First-Order vs. Second-Order Single Layer Recurrent NN Message-ID: <9303052109.AA01113@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- First-Order vs. Second-Order Single Layer Recurrent Neural Networks Mark W. Goudreau (Princeton University and NEC Research Institute, Inc.) C. Lee Giles (NEC Research Institute, Inc. and University of Maryland) Srimat T. Chakradhar (C&CRL, NEC USA, Inc.) D. Chen (University of Maryland) ABSTRACT We examine the representational capabilities of first-order and second-order Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforwardneurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs. ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get SLRNN.ps.Z ftp> quit unix> uncompress SLRNN.ps.Z --------------------------------------------------------------------------------. -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From kak at max.ee.lsu.edu Fri Mar 5 12:54:05 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Fri, 5 Mar 93 11:54:05 CST Subject: The New Training Alg for Feedforward Networks Message-ID: <9303051754.AA27581@max.ee.lsu.edu> Hard copies of the below-mentioned paper are now available [until they are exhausted]. -Subhash Kak ------------------------------------------------------------- Pramana - Journal of Physics, vol. 40, January 1993, pp. 35-42 -------------------------------------------------------------- On Training Feedforward Neural Networks Subhash C. Kak Department of Electrical & Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Abstract: A new algorithm that maps n-dimensional binary vectors into m-dimensional binary vectors using 3-layered feedforward neural networks is described. The algorithm is based on a representation of the mapping in terms of the corners of the n-dimensional signal cube. The weights to the hidden layer are found by a corner classification algorithm and the weights to the output layer are all equal to 1. Two corner classification algorithms are described. The first one is based on the perceptron algorithm and it performs generalization. 
The computing power of this algorithm may be gauged from the example that the exclusive-Or problem that requires several thousand iterative steps using the backpropagation algorithm was solved in 8 steps. For problems with 30 to 100 neurons we have found a speedup advantage ranging from 100 to more than a thousand fold. Another corner classification algorithm presented in this paper does not require any computations to find the weights. However, in its basic form this second classification procedure does not perform generalization. The new algorithm described in this paper is guaranteed to find the solution to any mapping problem. The effectiveness of this algorithm is due to the structured nature of the final design. This algorithm can also be applied to analog data. The new algorithm is computationally extremely efficient compared to the backpropagation algorithm. It is also biologically plausible since the computations required to train the network are extremely simple.  From rupa at dendrite.cs.colorado.edu Fri Mar 5 16:05:17 1993 From: rupa at dendrite.cs.colorado.edu (Sreerupa Das) Date: Fri, 5 Mar 1993 14:05:17 -0700 Subject: Preprint available Message-ID: <199303052105.AA06067@pons.cs.Colorado.EDU> The following paper has been placed in the Neuroprose archive. Instructions for retrieving and printing follow the abstract. This is a preprint of the paper to appear in Advances in Neural Information Processing Systems 5, 1993. Comments and questions are welcome. Thanks to Jordan Pollack for maintaining the archive! Sreerupa Das Department of Computer Science University of Colorado, Boulder CO-80309-0430 email: rupa at cs.colorado.edu ************************************************************************** USING PRIOR KNOWLEDGE IN AN NNPDA TO LEARN CONTEXT-FREE LANGUAGES Sreerupa Das C. Lee Giles Guo-Zheng Sun Dept. of Comp. Sc.& NEC Research Inst. Inst. for Adv. Comp. Studies Inst. of Cognitive Sc. 4 Independence Way University of Maryland University of Colorado Princeton, NJ 08540 College Park, MD 20742 Boulder, CO 80309-0430 ABSTRACT Although considerable interest has been shown in language inference and automata induction using recurrent neural networks, success of these models has mostly been limited to regular languages. We have previously demonstrated that Neural Network Pushdown Automaton (NNPDA) model is capable of learning deterministic context-free languages (e.g., a^n b^n and parenthesis languages) from examples. However, the learning task is computationally intensive. In this paper we discuss some ways in which {\em a priori} knowledge about the task and data could be used for efficient learning. We also observe that such knowledge is often an experimental prerequisite for learning nontrivial languages (eg. a^n b^n c b^m a^m). ************************************************************************** -------------------------------------------------------------------------- FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get das.prior_knowledge.ps.Z ftp> bye unix% zcat das.prior_knowledge.ps.Z | lpr  From giles at research.nj.nec.com Fri Mar 5 12:22:26 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 5 Mar 93 12:22:26 EST Subject: Reprint: Experimental Comparison of the Effect of Order ... Message-ID: <9303051722.AA00702@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. 
Instructions for retrieval from the archive follow the summary. -------------------------------------------------------------------------------- Experimental Comparison of the Effect of Order in Recurrent Neural Networks Clifford B. Miller(a) and C.L. Giles(a,b) (a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (b) Institute for Advanced Computer Studies, U. Maryland, College Park, MD20742 ABSTRACT There has been much interest in increasing the computational power of neural networks. In addition there has been much interest in "designing" neural networks to better suit particular problems. Increasing the "order" of the connectivity of a neural network permits both. Though order has played a significant role in feedforward neural networks, it role in dynamically driven recurrent networks is still being understood. This work explores the effect of order in learning grammars. We present an experimental comparison of first-order and second-order recurrent neural networks, as applied to the task of grammatical inference. We show that for the small grammars studied these two neural net architectures have comparable learning and generalization power, and that both are reasonably capable of extracting the correct finite state automata for the language in question. However, for a larger randomly-generated, ten-state grammar second-order networks significantly outperformed the first-order networks, both in convergence time and generalization capability. We show that these networks learn faster the more neurons they have (our experiments used up to 10 hidden neurons), but that the solutions found by smaller networks are usually of better quality (in terms of generalization performance after training). Second-order nets have the advantage that they converge more quickly to a solution and can find it more reliably than first-order nets, but that the second-order solutions tend to be of poorer quality than those of first-order if both architectures are trained to the same error tolerance. Despite this, second-order nets can more successfully extract finite state machines using heuristic clustering techniques applied to the internal state representations. We speculate that this may be due to restrictions on the ability of first-order architecture to fully make use of its internal state representation power and that this may have implications for the performance of the two architectures when scaled up to larger problems. List of key words: recurrent neural networks, higher order, learning, generalization, automata, grammatical inference, grammars. -------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get miller-giles-ijprai.ps.Z ftp> quit unix> uncompress miller-giles-ijprai.ps.Z --------------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From giles at research.nj.nec.com Sun Mar 7 12:09:12 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Sun, 7 Mar 93 12:09:12 EST Subject: Reprint: Constructive Learning of Recurrent Neural Networks Message-ID: <9303071709.AA02977@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. 
---------------------------------------------------------------------------------- "Constructive Learning of Recurrent Neural Networks" D. Chen, C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee, M.W. Goudreau University of Maryland, College Park NEC Research Institute, Princeton, NJ ABSTRACT Recurrent neural networks are a natural model for learning and predicting temporal signals. In addition, simple recurrent networks have been shown to be both theoretically and experimentally capable of learning finite state automata {cleeremans89,giles92a,minsky67,pollack91, siegelmann92}. However, it is difficult to determine what is the minimal neural network structure for a particular automaton. Using a large recurrent network, which would be versatile in theory, in practice proves to be very difficult to train. Constructive or destructive recurrent methods might offer a solution to this problem. We prove that one current method, Recurrent Cascade Correlation, has fundamental limitations in representation and thus in its learning capabilities. We give a preliminary approach on how to get around these limitations by devising a ``simple" constructive training method that adds neurons during training while still preserving the powerful fully recurrent structure. Through simulations we show that such a method can learn many types of regular grammars that the Recurrent Cascade Correlation method is unable to learn. ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get icnn_93_contructive.ps.Z ftp> quit unix> uncompress icnn_93_contructive.ps.Z ---------------------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From sgoss at ulb.ac.be Fri Mar 5 17:26:14 1993 From: sgoss at ulb.ac.be (Goss Simon) Date: Fri, 5 Mar 93 17:26:14 MET Subject: 2nd European Conference on Artificial Life Message-ID: ECAL '93 2nd European Conference on Artificial Life SELF-ORGANIZATION AND LIFE, FROM SIMPLE RULES TO GLOBAL COMPLEXITY Brussels, May 24-26th, 1993 Natural and artificial systems governed by simple rules exhibit self- organisation leading to autonomy, self-adaptation and evolution. While these phenomena interest an increasing number of scientists, much remains to be done to encourage the cross-fertilisation of ideas and techniques. The aim of this conference is to bring together scientists from different fields in the search for common rules and algorithms underlying different systems. The following themes have been selected : - Origin of life and molecular evolution - Patterns and rhythms in chemical and biochemical systems and interacting cells (neural network, immune system, morphogenesis). - Sensory and motor activities in animals and robots. - Collective intelligence in natural and artificial groups - Ecological communities and evolution . - Ecological computation. - Epistemology We are also planning demonstrations of computer programmes, robots and physico-chemical reactions, both in vivo and in video. Invited Speakers C. Biebricher (Germany), S. Camazine (USA), H. Cruse, P. De Kepper (France), W. Fontana, N. Franks (UK), F. Hess (Holland), B. Huberman (USA), S. Kauffman (USA), C. Langton (USA), M. Nowak (UK), T. Ray (USA), P. Schuster (Germany), M. Tilden (Canada), J. Urbain (Belgium), F. Varela (France). 
Organising committee:
J.L. Deneubourg, H. Bersini, S. Goss, G. Nicolis (Universite Libre de Bruxelles), R. Dagonnier (Universite de Mons-Hainaut).

International Program committee:
A. Babloyantz (Belgium), G. Beni (USA), P. Borckmans (Belgium), P. Bourgine (France), H. Cruse (Germany), G. Demongeot (France), G. Dewel (Belgium), P. De Kepper (France), S. Forrest (USA), N. Franks (UK), T. Fukuda (Japan), B. Goodwin (UK), P. Hogeweg (Holland), M. Kauffman (Belgium), C. Langton (USA), R. Lefever (Belgium), P. Maes (USA), J.-A. Meyer (France), T. Ray (USA), P. Schuster (Austria), T. Smithers (Belgium), F. Varela (France), R. Wehner (Germany).

Address: ECAL '93, Centre for Non-Linear Phenomena and Complex Systems, CP 231, Universite Libre de Bruxelles, Bld. du Triomphe, 1050 Brussels, Belgium.
Fax: 32-2-6505767; Phone: 32-2-6505776, 32-2-6505796; EMAIL: sgoss at ulb.ac.be
________________________________________________________________________________

REGISTRATION

The registration fees for ECAL '93 (May 24-26) are as follows ($1 = 34 BF):

                    Payment before May 1st    Payment after May 1st
  Academic:         10.000 BF                 12.000 BF
  Non-Academic:     12.000 BF                 14.000 BF
  Student:           6.500 BF                  7.500 BF

a) I authorise payment of          BF by the following credit card:
   American Express / Visa / Eurocard / Master (please indicate which card!)
   Card Name                         Card No
   Valid from:           to:
   Signature

b) I enclose a Eurocheque for          BF

c) I have ordered my bank to make a draft of          BF to:
   ECAL '93
   Account no: 034-1629733-01
   CGER (Caisse Generale d'Epargne et de Retraite)
   Agence Ixelles-Universite
   Chaussee de Boondael 466
   1050 Bruxelles, Belgium

_________________________________________________________   ____________
Signature                                                   Date
Name
Telephone                  Fax                  e-mail
Address
________________________________________________________________________________

ECAL '93
Self-Organisation and Life: From Simple Rules to Global Complexity
Brussels, May 24-26

(Very) Provisional Program (16 invited speakers, 40 oral communications, 50 posters)
Two sessions, A and B, run in parallel for most of the day.

Monday May 24th
   9.00  Inauguration
   9.10  Opening remarks
   9.45  Coffee
  10.15  A: Origins of life and molecular evolution.
            Invited speakers: C. Biebricher, W. Fontana, P. Schuster.
         B: Chemical patterns and rhythms.
            Invited speaker: P. De Kepper
  12.20  Lunch (A); 13.10 Lunch (B)
  14.10  A: Theoretical biology and artificial life.
            Invited speaker: C. Langton
         B: Collective intelligence in animal groups (social insects).
            Invited speaker: N.R. Franks
  15.50  Coffee
  16.20  B: Collective intelligence in animal groups (social insects).
            Invited speaker: S. Camazine
  16.30  A: Theoretical biology and artificial life
  18.00  Beer and sandwiches
  19.30  Theoretical biology and artificial life: General discussion.
            Invited speaker: F. Varela
  22.00  Close

Tuesday May 25th
   9.00  A: Individual behaviour.
            Invited speaker: M. Tilden
         B: Patterns and rhythms in human societies.
            Invited speaker: B. Huberman
  10.40  Coffee
  11.10  A: Individual behaviour
         B: Multi-robot systems
  12.10  Lunch (A); 13.10 Lunch (B)
  14.00  Posters and demonstrations (robots, simulations, videos, chemical reactions, ...).
            Invited speaker: F. Hess.
  18.00  Cocktail
  20.00  Banquet

Wednesday May 26th
   9.00  A: Evolution.
            Invited speakers: T. Ray, S. Kauffman.
         B: Sensory and motor activities in animals and robots.
            Invited speaker: H. Cruse
  10.20  Coffee (A); 10.40 Coffee (B)
  11.10  A: (title not listed)
  11.40  B: Ecological communities and evolution.
            Invited speaker: M. Nowak
  12.10  Lunch (A); 13.10 Lunch (B)
  14.10  A: Collective pattern in living systems
         B: Patterns and rhythms in the immune system.
            Invited speaker: J. Urbain
  15.50  Coffee
  16.20  B: Patterns and rhythms in the immune system.
  16.30  A: Collective pattern in living systems
  17.30  Closing remarks
________________________________________________________________________________

*** You may need a physical copy of the BTR hotel reservation form or the ***
*** Brussels Hotel Guide. See below for details.                          ***

HOTEL ACCOMMODATION FOR ECAL '93

We have reserved a number of rooms in the centre of Brussels (see attached list), not far from the Metro line 1a which will take you to the conference (ULB, Campus Plaine, Metro Station Delta, Metro Ligne 1a, direction Hermann Debroux). There are unfortunately no hotels close to the University. All prices include breakfast, TV and bathroom (see the enclosed official hotel guide for more details). Hotels are rather expensive in Brussels, and you will see that we have not been able to reserve many low-priced rooms. The earlier you make your reservation, therefore, the surer you are of having one. Another possibility is to arrange to share a double room with a fellow conference participant. It is important that you try to reserve before the 15th of April, otherwise our options on the rooms may be cancelled. Please note that there are 3 ways of reserving your room, depending on which hotel you choose.

1. Hotels President (World Trade Centre) and Palace

For the Hotels President and Palace you will see on the enclosed ECAL selected hotel list that we have been able to negotiate a substantial reduction on normal rates. To do so we have had to agree to pay for the rooms in one lump sum, including a down payment. Therefore, if you wish to take a room in these hotels you must make the reservation through us. We will only agree to do this if you pay the necessary sum in advance (we will refund cancellations, though these two hotels might impose a cancellation charge if there are too many last-minute cancellations). Please reserve before April 15th. After this date we cannot guarantee that the hotel will maintain our unused reservations and group rate. We enclose a special form for the registration and pre-payment of these rooms.

2. Other hotels on the selected ECAL list

For all the other hotels on our selected list, please fill in the enclosed BTR reservation form and return it to us. We will forward it to:
   Mr. Freddy Meerkens
   Group reservations
   Belgian Tourist Reservations
   Bld. Anspach 111, B4
   B-1000 Bruxelles
   Tel (322) 513.7484; Fax (322) 513.9277
He will make the reservation and should also notify you that the reservation has been made. There is no charge for this service, BTR being a state agency. You can, if you wish, send or fax your reservation form directly to M. Meerkens, group reservations, BTR. In this case, please do not forget to mention in large letters that you are attending the ECAL '93 conference (group reservation), and please send us a copy (marked "COPY") of your reservation form, so that we can keep track of where everyone is staying.

3. Independent Reservations

Finally, for those of you who are more independently minded, who wish to find a cheaper hotel, or who have other reasons, we enclose the Brussels Hotel Guide 1993, which lists all the hotels in Brussels. If you choose one that is not on the enclosed ECAL selected hotel list, you can then reserve through Belgian Tourist Reservations (see above), using the enclosed BTR form (you do not need in this case to mention that you are attending ECAL).
We would nevertheless like you to send us a copy (marked "COPY") of your BTR reservation form, so that we can keep track of where everyone is staying. ECAL selected Hotel list (do not contact these hotels directly - see attached instructions) ($1=34BF approx.) Hotel President (World Trade Centre) (100 rooms) ***** 180 Bld. E. Jacqmain, 1210 Bruxelles Tel: (322) 217.2020; Fax: (322) 218.8402 single room = double room: ECAL price: 4000 BF (<< Normal price: 7500 BF) Hotel Palace (lots of rooms) **** 3 Rue Gineste, 1210 Bruxelles Tel: (322) 217.6200; Fax: (322) 218.1269 single room: ECAL price: 3780 BF (<< Normal price: 6000 BF) double room: ECAL price: 4520 BF (<< Normal price: 7000 BF) Hotel Atlas (30 singles, 10 doubles) **** 30-34 Rue du Vieux Marche-aux-Grains, 1000 Bruxelles Tel: (322) 502.6006 single room: ECAL price: 3360 BF (just < Normal price: 3500 BF) double room: ECAL price: 3780 BF (just < Normal price: 4100 BF) Hotel Arcade Sainte Catherine (30 singles, 10 doubles) *** 2 Rue Joseph Plateau, 1000 Bruxelles Tel (322) 513.7620; Fax (322) 514.2214 single room = double room: ECAL price: 3900 BF (= Normal price) Hotel Orion (15 singles) *** 51 Quai au Bois a Bruler, 1000 Bruxelles Tel (322) 221.1411; Fax (322) 221.1599 single room: ECAL price: 3120 BF (= Normal price) Hotel Vendome (30 singles) *** 96 Bld. Adolphe Max, 1000 Bruxelles Tel (322) 218.00070; Fax (322) 218.0683 single room: ECAL price: 2875 BF (= Normal price) Hotel Opera (20 singles) ** 53 Rue Gretry, 1000 Bruxelles Tel (322) 219.4343; Fax (322) 219.1720 single room: ECAL price: 2200 BF (= Normal price) Hotel de Paris (20 singles) ** (shower not bathroom) 800 Bld. Poincarre, 1070 Bruxelles Tel (322) 527.0920; Fax (322) 528.8153 single room: ECAL price: 1800 BF (= Normal price) Reservation and pre-payment for Hotel President (World Trade Centre) or Hotel Palace I would like to reserve a single / double room at the Hotel President / Hotel Palace for the nights of: Signature Date Name Telephone Fax e-mail Address _________________________________________________________ ____________ a) I authorise payment to ECAL '93 of BF by the following credit card: American Express Visa/Eurocard/Master (please indicate which card!) Card Name Card No Valid from: to: Signature b) I enclose a Eurocheque (made put to ECAL '93) for BF c) I have ordered my bank to make a draft of BF to : ECAL '93 Account no: 034-1629733-01 CGER (Caisse Generale d'Epargne et de Retraite) Agence Ixelles-Universite Chaussee de Boondael 466 1050 Bruxelles, Belgium -- Simon Goss Unit of Behavioural Ecology Center for Non-Linear Phenomena and Complex Systems CP 231, Campus Plaine Universite Libre de Bruxelles Boulevard du Triomphe 1050 Bruxelles Belgium Tel: 32-2-650.5776 Fax: 32-2-650.5767 E-mail: sgoss at ulb.ac.be -- Simon Goss Unit of Behavioural Ecology Center for Non-Linear Phenomena and Complex Systems CP 231, Campus Plaine Universite Libre de Bruxelles Boulevard du Triomphe 1050 Bruxelles Belgium Tel: 32-2-650.5776 Fax: 32-2-650.5767 E-mail: sgoss at ulb.ac.be  From fellous%sapo.usc.edu at usc.edu Tue Mar 9 10:57:50 1993 From: fellous%sapo.usc.edu at usc.edu (Jean-Marc Fellous) Date: Tue, 9 Mar 93 07:57:50 PST Subject: USC/CNE Workshop - Rescheduling. 
Message-ID: <9303091557.AA09414@sapo.usc.edu>

Thank you for posting the following notice:

**************************** RESCHEDULING ****************************

SCHEMAS AND NEURAL NETWORKS
INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION

A Workshop sponsored by the Center for Neural Engineering
University of Southern California
Los Angeles, CA 90089-2520

The Program Committee has now evaluated the submissions to the proposed Workshop on 'Schemas and Neural Networks: Integrating Symbolic and Subsymbolic Approaches to Cooperative Computation', and found that rather few were on the proposed topic, although there were several excellent submissions on Connectionist Implementation of Semantic Networks; and Schemas, Neural Networks, and Reactive Implementations of Robots. We have thus decided to postpone the meeting until October so that we may explicitly solicit the strongest possible contributions on an expanded theme - still on the topic of schemas and neural nets, but with the perspective now broadened to include the two areas most closely related to this particular combination of low and high-level technologies: schemas plus other low-level technologies (such as reactive robot control); and neural nets plus other high-level technologies (such as rule systems and semantic nets). We regret whatever inconvenience this delay may cause you, but believe that for most contributors and participants this will mean a much stronger and more exciting meeting.

New Schedule: October 19-20, 1993

With Best Wishes
Michael Arbib

**************************************************************************

From sontag at control.rutgers.edu Tue Mar 9 15:21:36 1993
From: sontag at control.rutgers.edu (Eduardo Sontag)
Date: Tue, 9 Mar 93 15:21:36 EST
Subject: AVAILABLE IN NEUROPROSE: "Uniqueness of weights for neural networks"
Message-ID: <9303092021.AA16170@control.rutgers.edu>

TITLE: "Uniqueness of weights for neural networks"
AUTHORS: Francesca Albertini, Eduardo D. Sontag, and Vincent Maillot
FILE: sontag.uniqueness.ps.Z

ABSTRACT

This short paper surveys various results dealing with the weight-uniqueness question for neural nets. In essence, these results show that, under various technical assumptions, neuron exchanges and sign flips are the only transformations that (generically) leave the input/output behavior invariant. An alternative proof is given of Sussmann's theorem (Neural Networks, 1992) for single-hidden layer nets, and his result (for the standard logistic, or equivalently tanh(x)) is generalized to a wide class of activations. Also, several theorems for recurrent nets are discussed.

(NOTE: The uniqueness theorem extends, with a simple proof, to single-hidden layer nets which employ the Elliott/Georgiou/Koutsougeras/... activation

    s(u) = u / (1 + |u|)

This is not discussed in, and is not an immediate consequence of, the results in the paper, but is an easy exercise.)

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name : anonymous
Password:
ftp> cd pub/neuroprose
ftp> binary
ftp> get sontag.uniqueness.ps.Z
ftp> quit
unix> uncompress sontag.uniqueness.ps.Z
unix> lpr -Pps sontag.uniqueness.ps (or however you print PostScript)

(With many thanks to Jordan Pollack for providing this valuable service!)

Eduardo D.
Sontag Department of Mathematics Rutgers Center for Systems and Control (SYCON) Rutgers University New Brunswick, NJ 08903, USA  From sontag at control.rutgers.edu Tue Mar 9 15:24:05 1993 From: sontag at control.rutgers.edu (Eduardo Sontag) Date: Tue, 9 Mar 93 15:24:05 EST Subject: AVAILABLE IN NEUROPROSE: paper on finite VC dimension for NN Message-ID: <9303092024.AA16173@control.rutgers.edu> TITLE: "Finiteness results for sigmoidal `neural' networks" AUTHORS: Angus Macintyre and Eduardo D. Sontag (To appear in Proc. 25th Annual Symp.Theory Computing, San Diego, May 1993) FILE: sontag.vc.ps.Z ABSTRACT This paper deals with analog neural nets. It establishes the finiteness of VC dimension, teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the ``standard sigmoid'' commonly used in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general analytic gate functions.) Applications to learnability of sparse polynomials are also mentioned. **** To retrieve: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name : anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get sontag.vc.ps.Z ftp> quit unix> uncompress sontag.vc.ps.Z unix> lpr -Pps sontag.vc.ps (or however you print PostScript) (With many thanks to Jordan Pollack for providing this valuable service!) Eduardo D. Sontag Department of Mathematics Rutgers Center for Systems and Control (SYCON) Rutgers University New Brunswick, NJ 08903, USA  From piero at dist.dist.unige.it Mon Mar 8 19:44:53 1993 From: piero at dist.dist.unige.it (Piero Morasso) Date: Mon, 8 Mar 93 19:44:53 MET Subject: ENNS membership Message-ID: <9303081844.AA16490@dist.dist.unige.it> ============================================== E N N S European Neural Network Society ============================================= ENNS is the Society which organizes every year the ICANN Conferences (Helsinki'91, Brighton'92, Amsterdam'93, Sorrento'94, ...) ENNS membership allows reduced registration to ICANN, subscription to the Neural Networks journal and the reception of a Newsletter. The ENNS membership application form is available via ftp. In order to get it, proceed as follows: 1) you type "ftp dist.unige.it" 2) upon the request "login:" you type "anonymous" 3) upon the request "password:" you type your email address 4) if this is OK, you are inside the dist machine; your home directory should be /home1/ftp; check it with "pwd" 5) you go to the ENNS directory (type "cd pub/ENNS") 6) you set the transmission to "binary" 7) you get the file by "get member_appl_form.ps.Z" 8) quit ftp 9) type "uncompress member_appl_form.ps" File member_appl_form.ps is now ready to print. 
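For readers who prefer to script the retrieval, the nine steps above can be reproduced with Python's standard ftplib. This is only a sketch of the same anonymous-FTP session (the e-mail address used as the password is a placeholder); the final uncompress step is still done with the usual unix tool.

from ftplib import FTP

# Scripted version of steps 1-8 above: anonymous FTP to dist.unige.it,
# change to pub/ENNS, binary transfer of the membership application form.
ftp = FTP("dist.unige.it")
ftp.login(user="anonymous", passwd="your_email@your.site")   # placeholder password
ftp.cwd("pub/ENNS")
with open("member_appl_form.ps.Z", "wb") as f:
    ftp.retrbinary("RETR member_appl_form.ps.Z", f.write)    # binary "get"
ftp.quit()
# Step 9 ("uncompress member_appl_form.ps.Z") is still done outside Python.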
-- Pietro Morasso ENNS secretary E-mail: morasso at dist.unige.it mail: DIST-University of Genova Via Opera Pia, 11A I-16145 Genova (ITALY) phone: +39 10 3532749/3532983 fax: +39 10 3532948  From giles at research.nj.nec.com Tue Mar 9 19:00:02 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Tue, 9 Mar 93 19:00:02 EST Subject: Reprint: Routing in Optical Interconnection Networks Using NNs Message-ID: <9303100000.AA06263@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- "Routing in Optical Multistage Interconnection Networks: A Neural Network Solution" C. Lee Giles NEC Research Institute, Inc. and UMIACS, University of Maryland and Mark W. Goudreau NEC Research Institute, Inc. ABSTRACT There has been much interest in using optics to implement computer interconnection networks. However, there has been little discussion of routing methodologies besides those already used in electronics. In this paper, a neural network routing methodology is proposed that can generate control bits for an optical multistage interconnection network (OMIN). Though we present no optical implementation of this methodology, we illustrate its control for an optical interconnection network. These OMINs can be used as communication media for shared memory, distributed computing systems. The routing methodology makes use of an Artificial Neural Network (ANN) that functions as a parallel computer for generating the routes. The neural network routing scheme can be applied to electrical as well as optical interconnection networks. However, since the ANN can be implemented using optics, this routing approach is especially appealing for an optical computing environment. The parallel nature of the ANN computation might make this routing scheme faster than conventional routing approaches, especially for OMINs that are irregular. Furthermore, the neural network routing scheme is fault-tolerant. Results are shown for generating routes in a 16 by 16, 3 stage OMIN. ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get optics_long.ps.Z ftp> quit unix> uncompress optics_long.ps.Z ------------------------------------------------------------------------------------ -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From lba at sara.inesc.pt Tue Mar 9 13:00:30 1993 From: lba at sara.inesc.pt (Luis B. Almeida) Date: Tue, 9 Mar 93 19:00:30 +0100 Subject: The New Training Alg for Feedforward Networks In-Reply-To: "Dr. S. Kak"'s message of Fri, 5 Mar 93 11:54:05 CST <9303051754.AA27581@max.ee.lsu.edu> Message-ID: <9303091800.AA11872@sara.inesc.pt> Dr. Subhash C. Kak writes, in his message: The computing power of this algorithm may be gauged from the example that the exclusive-Or problem that requires several thousand iterative steps using the backpropagation algorithm was solved in 8 steps. I cannot agree with the assertions made about the speed of backpropagation in the XOR problem. 
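For anyone who wants to check such timing claims directly, here is a minimal sketch of the kind of run being discussed in this exchange: plain batch backpropagation (no momentum, no acceleration) on XOR with a 2-2-1 network of logistic units. It is not the code used by either author; the learning rate, weight initialisation and stopping criterion are illustrative assumptions, and epoch counts will vary with all three.

import numpy as np

# Plain batch gradient descent on summed squared error for XOR, 2-2-1 net.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1 = rng.uniform(-0.5, 0.5, (2, 2)); b1 = np.zeros(2)   # input -> hidden
W2 = rng.uniform(-0.5, 0.5, (2, 1)); b2 = np.zeros(1)   # hidden -> output
lr = 2.0                                                 # assumed learning rate

for epoch in range(1, 10001):
    H = sigmoid(X @ W1 + b1)          # forward pass over the whole batch
    Y = sigmoid(H @ W2 + b2)
    E = Y - T
    if np.all(np.abs(E) < 0.4):       # every pattern on the right side of 0.5
        print("converged after", epoch, "epochs")
        break
    dY = E * Y * (1.0 - Y)            # backward pass
    dH = (dY @ W2.T) * H * (1.0 - H)
    W2 -= lr * H.T @ dY; b2 -= lr * dY.sum(axis=0)
    W1 -= lr * X.T @ dH; b1 -= lr * dH.sum(axis=0)
else:
    print("stopped in a local minimum or ran out of epochs")

Depending on the random seed, runs of this kind either converge in a few tens of epochs or get stuck in a local minimum, which is exactly the behaviour described below.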
Just to be sure, I have just run a few tests, using plain backpropagation, in the batch mode, without any acceleration technique (not even momentum), and using an architecture with 2 input units, 2 hidden units and 1 output unit (more details available on request). The runs that didn't stop at local minima, all converged between 12 and 30 epochs. About 1/3 of the runs fell in local minima. Of course, this comment is not intended at denying the qualities of the algorithm proposed by Dr. Kak, it is just intended at putting backpropagation in its actual stand. Luis B. Almeida INESC Phone: +351-1-544607, +351-1-3100246 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt lba at inesc.uucp (if you have access to uucp)  From vijay at envy.cs.umass.edu Wed Mar 10 11:24:03 1993 From: vijay at envy.cs.umass.edu (vijay@envy.cs.umass.edu) Date: Wed, 10 Mar 93 11:24:03 -0500 Subject: AVAILABLE IN NEUROPROSE: "Learning control under extreme uncertainty" Message-ID: <9303101624.AA18303@sloth.cs.umass.edu> The following paper has been placed in the Neuroprose archive. Thanks to Jordan Pollack for providing this service. Comments and questions are welcome. ******************************************************************* Learning Control Under Extreme Uncertainty Vijaykumar Gullapalli Computer Science Department University of Massachusetts Amherst, MA 01003 Abstract A peg-in-hole insertion task is used as an example to illustrate the utility of direct associative reinforcement learning methods for learning control under real-world conditions of uncertainty and noise. Task complexity due to the use of an unchamfered hole and a clearance of less than $0.2mm$ is compounded by the presence of positional uncertainty of magnitude exceeding $10$ to $50$ times the clearance. Despite this extreme degree of uncertainty, our results indicate that direct reinforcement learning can be used to learn a robust reactive control strategy that results in skillful peg-in-hole insertions. ********************************************************************* FTP INSTRUCTIONS unix% Getps gullapalli.uncertainty-nips5.ps.Z if you have the shell script, or unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get gullapalli.uncertainty-nips5.ps.Z ftp> bye unix% zcat gullapalli.uncertainty-nips5.ps.Z | lpr  From cowan at synapse.uchicago.edu Wed Mar 10 12:46:50 1993 From: cowan at synapse.uchicago.edu (Jack Cowan) Date: Wed, 10 Mar 93 11:46:50 -0600 Subject: NIPS*93 Message-ID: <9303101746.AA00848@synapse> FIRST CALL FOR PAPERS Neural Information Processing Systems -Natural and Synthetic- Monday, November 29 - Thursday, December 2, 1993 Denver, Colorado This is the seventh meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. There will be an afternoon of tutorial presentations (Nov 29) preceding the regular session and two days of focused workshops will follow at a nearby ski area (Dec 3-4). Major categories and examples of subcategories for paper submissions are the following: Neuroscience: Studies and Analyses of Neurobiological Systems, Inhibition in cortical circuits, Signals and noise in neural computation, Theoretical Biology and Biophysics. 
Theory: Computational Learning Theory, Complexity Theory, Dynamical Systems, Statistical Mechanics, Probability and Statistics, Approximation Theory. Implementation & Simulation: VLSI, Optical, Software Simulators, Implementation Languages, Parallel Processor Design and Benchmarks. Algorithms & Architectures: Learning Algorithms, Constructive and Pruning Algorithms, Localized Basis Functions, Tree Structured Networks, Performance Comparisons, Recurrent Networks, Combinatorial Optimization, Genetic Algorithms. Cognitive Science & AI: Natural Language, Human Learning and Memory, Perception and Psychophysics, Symbolic Reasoning. Visual Processing: Stereopsis, Visual Motion, Recognition, Image Coding and Classification. Speech & Signal Processing: Speech Recognition, Coding, and Synthesis, Text-to-Speech, Adaptive Equalization, Nonlinear Noise Removal. Control, Navigation, & Planning: Navigation and Planning, Learning Internal Models of the World, Trajectory Planning, Robotic Motor Control, Process Control. Applications: Medical Diagnosis or Data Analysis, Financial and Economic Analysis, Timeseries Prediction, Protein Structure Prediction, Music Processing, Expert Systems. Technical Program: Plenary, contributed and poster sessions will be held. There will be no parallel sessions. The full text of presented papers will be published. Submission Procedures: Original research contributions are solicited, and will be carefully refereed. Authors must submit six copies of both a 1000-word (or less) summary and six copies of a separate single-page 50-100 word abstract clearly stating their results postmarked by May 22, 1993 (express mail is not necessary). Accepted abstracts will be published in the conference program. Summaries are for program committee use only. At the bottom of each abstract page and on the first summary page indicate preference for oral or poster presentation and specify one of the above nine broad categories and, if appropriate, sub-categories (For example: Poster, Applications, Expert Systems; Oral, Implementation-Analog VLSI). Include addresses of all authors at the front of the summary and the abstract and indicate to which author correspondence should be addressed. Submissions will not be considered that lack category information, separate abstract sheets, the required six copies, author addresses, or are late. Mail submissions To: Gerry Tesauro The Salk Institute, CNL 10010 North Torrey Pines Rd. La Jolla, CA 92037 Mail for registration material To: NIPS*93 Registration NIPS Foundation PO Box 60035 Pasadena, CA 91116-6035 All submitting authors will be sent registration material automatically. Program committee decisions will be sent to the correspondence author only. NIPS*93 Organizing Committee: General Chair, Jack Cowan, University of Chicago; Publications Chair, Joshua Alspector, Bellcore; Publicity Chair, Bartlett Mel, CalTech; Program Chair, Gerry Tesauro, Salk Institute; Treasurer, Rodney Goodman, CalTech; Local Arrangements, Chuck Anderson, Colorado State University; Tutorials Chair, Dave Touretzky, Carnegie-Mellon, Workshop Chair, Mike Mozer, University of Colorado, Government & Corporate Liaison, Lee Giles, NEC Research Institute Inc. DEADLINE FOR SUMMARIES & ABSTRACTS IS MAY 22, 1993 (POSTMARKED)  From eric at research.nj.nec.com Wed Mar 10 17:38:19 1993 From: eric at research.nj.nec.com (Eric B. 
Baum) Date: Wed, 10 Mar 93 17:38:19 EST Subject: No subject Message-ID: <9303102238.AA02774@yin> Preprint: Best Play for Imperfect Players and Game Tree Search The following preprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ----------------------------------------------------------------- Best Play for Imperfect Players and Game Tree Search Eric B. Baum and Warren D. Smith NEC Research Institute 4 Independence Way Princeton NJ 08540 ABSTRACT We propose a new approach to game tree search. We train up an evaluation function which returns, rather than a single number estimating the value of a position, a probability distribution $P_L(x)$. $P_L(x)$ is the probability that if we expanded leaf $L$ to some depth, the backed up value of leaf $L$ would then be found to be $x$. We describe how to propagate these distributions efficiently up the tree so that at any node n we compute without approximation the probability node n's negamax value is x given that a value is assigned to each leaf from its distribution. After we are done expanding the tree, the best move is the child of the root whose distribution has highest mean. Note that we take means at the child of the root {\it after} propagating, whereas the normal (Shannon) approach takes the mean at the leaves before propagating, which throws away information. Now we model the expansion of a leaf as selection of one value from its distribution. The total utility of all possible expansion is defined as the ensemble sum over those possible leaf configurations for which the current favorite move is inferior to some alternate move, weighted by the probability of the leaf configuration and the amount the current favorite move is inferior. We propose as the natural measure of the expansion importance of leaf L, the expected absolute change in this utility when we expand leaf L. We support this proposal with several arguments including an approximation theorem valid in the limit that one expands until the remaining utility of expansion becomes small. In summary, we gather distributions at the leaves, propagate exactly all this information to the root, and incrementally grow a tree expanding approximately the most interesting leaf at each step. Under reasonable conditions, we accomplish all of this in time $O(N)$, where N is the number of leaves in the tree when we are done expanding. That is, we pay only a small constant factor overhead for all of our bookkeeping. ---------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/eric/papers ftp> binary ftp> get game.ps.Z ftp> quit unix> uncompress game.ps.Z ----------------------------------------------------------------------- Eric Baum NEC Research Institute 4 Independence Way Princeton NJ 08540 Inet: eric at research.nj.nec.com UUCP: princeton!nec!eric MAIL: 4 Independence Way, Princeton NJ 08540 PHONE: (609) 951-2712 FAX: (609) 951-2482  From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Thu Mar 11 11:59:00 1993 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Thu, 11 Mar 93 11:59:00 EST Subject: The New Training Alg for Feedforward Networks In-Reply-To: Your message of Tue, 09 Mar 93 19:00:30 +0100. <9303091800.AA11872@sara.inesc.pt> Message-ID: Dr. Subhash C. 
Kak writes, in his message:

The computing power of this algorithm may be gauged from the example that the exclusive-Or problem that requires several thousand iterative steps using the backpropagation algorithm was solved in 8 steps.

I cannot agree with the assertions made about the speed of backpropagation in the XOR problem. Just to be sure, I have just run a few tests, using plain backpropagation, in the batch mode, without any acceleration technique (not even momentum), and using an architecture with 2 input units, 2 hidden units and 1 output unit (more details available on request). The runs that didn't stop at local minima all converged between 12 and 30 epochs. About 1/3 of the runs fell in local minima.

I was also going to comment on Kak's statement that backprop takes "several thousand" iterative steps to converge. It's not clear to me whether this means epochs or pattern presentations, but in any case that number is too high. One of the very first papers on backprop (Rumelhart, Hinton, and Williams, in the PDP books) refers to a study by Chauvin that solved XOR in an average of 245 epochs. So Kak's figure is a bit high.

On the other hand, Luis Almeida's figure of 12-30 epochs for 2-2-1 XOR with vanilla backprop is much, much better than other reported results for backprop. If he is actually getting those times, something very interesting -- perhaps even earth-shaking -- is going on. I can get times like that with Quickprop, but I've never seen claims under 100 epochs for 2-2-1 backprop, with or without momentum. More details on this experiment would be of interest to many of us, I think.

-- Scott

===========================================================================
Scott E. Fahlman                          Internet: sef+ at cs.cmu.edu
Senior Research Scientist                 Phone: 412 268-2575
School of Computer Science                Fax: 412 681-5739
Carnegie Mellon University                Latitude: 40:26:33 N
5000 Forbes Avenue                        Longitude: 79:56:48 W
Pittsburgh, PA 15213
===========================================================================

From mozer at dendrite.cs.colorado.edu Thu Mar 11 16:40:57 1993
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Thu, 11 Mar 1993 14:40:57 -0700
Subject: NIPS*93 workshops
Message-ID: <199303112140.AA23316@neuron.cs.colorado.edu>

CALL FOR PROPOSALS
NIPS*93 Post-Conference Workshops
December 3 and 4, 1993
Vail, Colorado

Following the regular program of the Neural Information Processing Systems 1993 conference, workshops on current topics in neural information processing will be held on December 3 and 4, 1993, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: active learning and control; architectural issues; attention; bayesian analysis; benchmarking neural network applications; computational complexity issues; computational neuroscience; fast training techniques; genetic algorithms; music; neural network dynamics; optimization; recurrent nets; rules and connectionist models; self-organization; sensory biophysics; speech; time series prediction; vision; and VLSI and optical implementations.

The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics.
Individuals proposing to chair a workshop will have responsibilities including: arranging short informal presentations by experts working on the topic, moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the "gong show"), and writing a brief (2 page) summary. Submission Procedure: Interested parties should submit a short proposal for a workshop of interest postmarked by May 22, 1993. (Express mail is *not* necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, and the proposed length of the workshop (one day or two days). It should motivate why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest. Mail submissions to: Mike Mozer NIPS*93 Workshops Chair Department of Computer Science University of Colorado Boulder, CO 80309-0430 USA (e-mail: mozer at cs.colorado.edu) Name, mailing address, phone number, fax number, and e-mail net address should be on all submissions. PROPOSALS MUST BE POSTMARKED BY MAY 22, 1993 Please Post  From burrow at gradient.cis.upenn.edu Thu Mar 11 13:04:43 1993 From: burrow at gradient.cis.upenn.edu (Thomas Fontaine) Date: Thu, 11 Mar 93 13:04:43 EST Subject: Preprint: Recognizing Handprinted Digit Strings Message-ID: <9303111804.AA12711@gradient.cis.upenn.edu> ************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ************* The following paper, to be presented at the Fifteenth Annual Meeting of the Cognitive Science Society (June 1993), has been placed in the neuroprose archives at Ohio State University: RECOGNIZING HANDPRINTED DIGIT STRINGS: A HYBRID CONNECTIONIST/PROCEDURAL APPROACH Thomas Fontaine and Lokendra Shastri Computer and Information Science Department 200 South 33rd Street University of Pennsylvania Philadelphia, PA 19104-6389 We describe an alternative approach to handprinted word recognition using a hybrid of procedural and connectionist techniques. We utilize two connectionist components: one to concurrently make recognition and segmentation hypotheses, and another to perform refined recognition of segmented characters. Both networks are governed by a procedural controller which incorporates systematic domain knowledge and procedural algorithms to guide recognition. We employ an approach wherein an image is processed over time by a spatiotemporal connectionist network. The scheme offers several attractive features including shift-invariance and retention of local spatial relationships along the dimension being temporalized, a reduction in the number of free parameters, and the ability to process arbitrarily long images. Recognition results on a set of real-world isolated ZIP code digits are comparable to the best reported to date, with a 96.0\% recognition rate and a rate of 99.0\% when 9.5\% of the images are rejected.} ***************** How to obtain a copy of the report ***************** I'm sorry, but hardcopies are not available. 
To obtain via anonymous ftp: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get fontaine.wordrec.ps.Z ftp> quit unix> uncompress fontaine.wordrec.ps.Z unix> lpr fontaine.wordrec.ps (or however you print Postscript)  From giles at research.nj.nec.com Thu Mar 11 08:52:27 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 11 Mar 93 08:52:27 EST Subject: Reprint: Rule Refinement with Recurrent Neural Networks Message-ID: <9303111352.AA08333@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- "Rule Refinement with Recurrent Neural Networks" C. Lee Giles(a,b) and Christian W. Omlin(a,c) (a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (b) Institute for Advanced Computer Studies, U. of Maryland, College Park, MD 20742 (c) Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180 ABSTRACT Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFA's) and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, we show that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that not only do the networks preserve correct prior knowledge, but that they are able to correct through training inserted prior knowledge which was wrong. (By wrong, we mean that the inserted rules were not the ones in the randomly generated grammar.) ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get rule_refinement.ps.Z ftp> quit unix> uncompress rule_refinement.ps.Z ---------------------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From rubio at hal.ugr.es Thu Mar 11 12:20:29 1993 From: rubio at hal.ugr.es (Antonio J. Rubio Ayuso) Date: Thu, 11 Mar 93 17:20:29 GMT Subject: No subject Message-ID: <9303111720.AA15547@hal.ugr.es> LAST Announcement: NATO Advanced Study Institute (Deadline: April 1, 1993) ------------------------------------------------------------------------------ NEW ADVANCES and TRENDS in SPEECH RECOGNITION and CODING 28 June-10 July 1993. Bubion (Granada), SPAIN. Institute Director: Dr. Antonio Rubio-Ayuso, Dept. de Electronica. Facultad de Ciencias. Universidad de Granada. E-18071 GRANADA, SPAIN. tel. 34-58-243193 FAX. 34-58-243230 e-mail ASI at hal.ugr.es Organizing Committee: Dr. Jean-Paul Haton, CRIN / INRIA, France. Dr. Pietro Laface, Politecnico di Torino, Italy. Dr. Renato De Mori, McGill University, Canada. OBJECTIVES, AGENDA and PARTICIPANTS A series of most successful ASIs on Speech Science (the last ones in Bonas, France; Bad Windsheim, Germany; Cetraro, Italy) created a fruitful and stimulating environment to learn about scientific methods, exchange of results, and discussions of new ideas. 
The goal of this ASI is to congregate the most important experts on Speech Recognition and Coding to discuss and disseminate their most recent findings, in order to spread them among the European and American Centers of Excellence, as well as among a good selection of qualified students. A two-week programme is planned with invited tutorial lectures, and contributed papers by selected students (maximum 65). The proceedings of the ASI will be published by Springer-Verlag. TOPICS The Institute will focus on the new methodologies and techniques that have been recently developed in the speech communication area. Main topics of interest will be: -Low Delay and Wideband Speech Coding. -Very Low bit Rate and Half-Rate Speech Coding. -Speech coding over noisy channels. -Continuous Speech and Isolated word Recognition. -Neural Networks for Speech Recognition and Coding. -Language Modeling. -Speech Analysis, Synthesis and data bases. Any other related topic will also be considered. INVITED LECTURERS A. Gersho (UCSB, USA): "Speech coding." B. H. Juang (AT&T, USA): "Statistical and discriminative methods for speech recognition - from design objectives to implementation." J. Bridle (RSRU, UK): "Neural networks." G. Chollet (Paris Telecom): "Evaluation of ASR systems, algorithms and databases." E. Vidal (UPV, Spain): "Syntactic learning techniques in language modeling and acoustic-phonetic decoding." J. P. Adoul (U. Sherbrooke, Canada): "Lattice and trellis coded quantizations for efficient coding of speech." R. De Mori (McGill Univ, Canada): "Language models based on stochastic grammars and their use in automatic speech recognition." R. Pieraccini (AT&T, USA): "Speech understanding and dialog, a stochastic approach." F. Jelinek (IBM, USA): "New approaches to language modeling for speech recognition." L. Rabiner (AT&T, USA): "Applications of Voice Processing Technology in Telecommunications." N. Farvardin (UMD, USA): "Speech coding over noisy channels." J. P. Haton (CRIN/INRIA, France): "Methods for the automatic recognition of speech in adverse conditions." R. Schwartz (BBN, USA): "Search algorithms of real-time recognition with high accuracy." H. Niemann (Erlangen-Nurnberg Univ., Germany): "Statistical Modeling of segmental and suprasegmental information." I. Trancoso (INESC, Portugal): "An overview of recent advances on CELP." C. H. Lee (AT&T, USA): "Adaptive learning for acoustic and language modeling." P. Laface (Poli. Torino, Italy) H. Ney (Phillips, Germany): "Search Strategies for Very Large Vocabulary, Continuous Speech Recognition." A. Waibel (CMU, USA): "JANUS, A speech translation system." ATTENDANCE, COSTS and FUNDING Participation from as many NATO countries as possible is desired. Additionally, prospective participants from Greece, Portugal and Turkey are especially encouraged to apply.A small number of students from non-NATO countries may be accepted. The estimated cost of hotel accommodation and meals for the two-week duration of the ASI is US$1,000. A limited number of scholarships are available for academic participants from NATO countries. In the case of industrial or commercial participants a US$500 fee will be charged. Participants are responsible for their own health or accident insurance. A deposit of US$200 is required for living expenses. This deposit is non-refundable in the case of late cancelation (after 10 June, 1993). 
The NATO Institute will be held in the hospitable village of Bubion (Granada), set on Las Alpujarras, a peaceful mountain region with incomparable landscapes. HOW TO REGISTER Each application should include: 1) Full address (including e-mail and FAX). 2) An abstract of the proposed contribution (1-3 pages). 3) Curriculum vitae of the prospective participant (including birthdate). 4) Indication of whether the attendance to the ASI is conditioned to obtaining a NATO grant. For junior applicants, support letters from senior members of the professional speech community would strengthen the application. This application must be sent to the Institute Director address mentioned above (before 1 April 1993). SCHEDULE Submission of proposals (1-3 pages): To be received by 1 April 1993. Notification of acceptance: To be mailed out on 1 May 1993. Submission of the paper: To be received by 10 June 1993.  From plaut+ at cmu.edu Fri Mar 12 16:56:30 1993 From: plaut+ at cmu.edu (David Plaut) Date: Fri, 12 Mar 1993 16:56:30 -0500 Subject: Preprint: Generalization with Componential Attractors Message-ID: <14183.731973390@crab.psy.cmu.edu> ******************* PLEASE DO NOT FORWARD TO OTHER BBOARDS ******************* The following preprint is available via local anonymous ftp (*not* from neuroprose). Instructions on how to retrieve it are at the end of this messages. The paper will appear in this year's Cognitive Science Society Conference Proceedings. -Dave Generalization with Componential Attractors: Word and Nonwords Reading in an Attractor Network David C. Plaut and James L. McClelland Department of Psychology Carnegie Mellon University Networks that learn to make familiar activity patterns into stable attractors have proven useful in accounting for many aspects of normal and impaired cognition. However, their ability to generalize is questionable, particularly in quasiregular tasks that involve both regularities and exceptions, such as word reading. We trained an attractor network to pronounce virtually all of a large corpus of monosyllabic words, including both regular and exception words. When tested on the lists of pronounceable nonwords used in several empirical studies, its accuracy was closely comparable to that of human subjects. The network generalizes because the attractors it developed for regular words are componential---they have substructure that reflects common sublexical correspondences between orthography and phonology. This componentiality is faciliated by the use of orthographic and phonological representations that make explicit the structured relationship between written and spoken words. Furthermore, the componential attractors for regular words coexist with much less componential attractors for exception words. These results demonstrate that attractors can support effective generalization, challenging ``dual-route'' assumptions that multiple, independent mechanisms are required for quasiregular tasks. unix> ftp hydra.psy.cmu.edu # or 128.2.248.152 Name: anonymous Password: ftp> cd pub/plaut ftp> binary ftp> get plaut.componential.cogsci93.ps.Z ftp> quit unix> zcat plaut.componential.cogsci93.ps.Z | lpr - I'd like to thank Jordan Pollack for not maintaining this archive.... 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= David Plaut plaut+ at cmu.edu Department of Psychology 412/268-5145 Carnegie Mellon University FAX: 412/268-5060 Pittsburgh, PA 15213-3890 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  From dhw at santafe.edu Fri Mar 12 17:37:55 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Fri, 12 Mar 93 15:37:55 MST Subject: new paper Message-ID: <9303122237.AA05085@zia> *** DO NOT FORWARD TO ANY OTHER LISTS *** The following file has been placed in connectionists, under the name wolpert.ex_learning.ps.Z. AN INVESTIGATION OF EXHAUSTIVE LEARNING by David H. Wolpert, Alan Lapedes Abstract: An extended version of the Bayesian formalism is reviewed. We use this formalism to investigate the "exhaustive learning" scenario, first introduced by Schwartz et al. This scenario is perhaps the simplest possible supervised learning scenario. It is identical to the noise-free "Gibbs learning" scenario studied recently by Haussler et al., and can also be viewed as the zero-temperature limit of the "statistical mechanics" work of Tishby et al. We prove that the crucial "self-averaging" assumption invoked in the conventional analysis of exhaustive learning does not hold in the simplest non-trivial implementation of exhaustive learning. Therefore the central result of that analysis, that generalization accuracy necessarily rises as training set size is increased, is not generic. More importantly, we show that if one (reasonably) changes the definition of "generalization accuracy", to reflect only the error on inputs outside of the training set, then this central result does not hold even when the self-averaging assumption is valid, and even in the limit of an infinite input space. This implies that the central result is a reflection of the following simple phenomenon: if you add an input/output pair to the training set, the number of distinct input values on which you know exactly how you should guess has either increased or stayed the same, and therefore your generalization accuracy will either increase or stay the same. In addition to using the extended Bayesian formalism to analyze the central result of the conventional analysis of exhaustive learning, we also use it to extend the results of exhaustive learning, to issues not considered in previous analyses of the subject. To retrieve this file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get wolpert.ex_learning.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for wolpert.ex_learning.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. unix> uncompress wolpert.ex_learning.ps.Z unix> lpr wolpert.ex_learning.ps (or however you print postscript) Thanks to Jordan Pollack for maintaining this list.  From dhw at santafe.edu Fri Mar 12 17:23:38 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Fri, 12 Mar 93 15:23:38 MST Subject: No subject Message-ID: <9303122223.AA05058@zia> Dr.'s Kak and Almeida talk about training issues concerning the XOR problem. One should be careful not to focus too heavilly on the XOR problem. 
Two points which I believe have been made previously on connectionists bear repeating: Consider the n-dimensional version of XOR, namely n-bit parity. 1) All "local" algorithms (e.g., weighted nearest neighbor) perform badly on parity. More precisely, as the number of training examples goes up, their off-training set generalization *degrades*, asymptoting at 100% errors. This is also true for backprop run on neural nets, at least for n = 6. 2) There are algorithms which perform *perfectly* (0 training or generalizing errors) for the parity problem. Said algorithms are not designed in any way with parity in mind. In other words, in some senses, for all the problems it causes local algorithms, parity is not "difficult". David Wolpert  From hinton at cs.toronto.edu Mon Mar 15 13:00:18 1993 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Mon, 15 Mar 1993 13:00:18 -0500 Subject: No subject In-Reply-To: Your message of Fri, 12 Mar 93 17:23:38 -0500. Message-ID: <93Mar15.130033edt.567@neuron.ai.toronto.edu> Wolpert says: >All "local" algorithms (e.g., weighted nearest neighbor) perform >badly on parity. More precisely, as the number of training examples >goes up, their off-training set generalization *degrades*, asymptoting >at 100% errors. This is also true for backprop run on neural nets, at >least for n = 6. This seems very plausible but its not quite right. First, consider K nearest neighbors, where a validation set is used to pick K and all K neighbors get to vote on the answer. It seems fair to call this a "local" algorithm. If the training data contains a fraction p of all the possible cases of n-bit parity, then each novel test case will have about pn neighbors in the training set that differ by one bit. It will also have about pn(n-1)/2 training neighbors that differ by 2 bits, and so on. So for reasonably large n we will get correct performance by picking K so that we tend to get all the training cases that differ by 1 bit and most of the far more numerous training cases that differ by 2 bits. For very small values of p we need to consider more distant neighbors to get this effect to work, and this requires larger values of n. Second, backprop generalizes parity pretty well (on NOVEL test cases) provided you give it a chance. If we use the "standard" net with n hidden units (we can get by with less) we have (n+1)^2 connections. To get several bits of constraint per connection we need p 2^n >> (n+1)^2 where p is the fraction of all possible cases that are used for training. For n=6 there are only 64 possible cases and we use 49 connections so this isnt possible. For n=10, if we train on 512 cases we only get around 3 or 4% errors on the remaining cases. Of course training is slow for 10 bit parity: About 2000 epochs even with adaptive learning rates on each connection (and probably a million epochs if you choose a bad enough version of backprop.) Neither of these points is intended to detract from the second point that Wolpert makes. There are indeed other very interesting learning algorthms that do very well on parity and other tasks. Geoff  From dhw at santafe.edu Mon Mar 15 15:03:22 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Mon, 15 Mar 93 13:03:22 MST Subject: No subject Message-ID: <9303152003.AA07060@zia> Hinton says: >>> First, consider K nearest neighbors, where a validation set is used to pick K and all K neighbors get to vote on the answer. It seems fair to call this a "local" algorithm. 
If the training data contains a fraction p of all the possible cases of n-bit parity, then each novel test case will have about pn neighbors in the training set that differ by one bit. It will also have about pn(n-1)/2 training neighbors that differ by 2 bits, and so on. So for reasonably large n we will get correct performance by picking K so that we tend to get all the training cases that differ by 1 bit and most of the far more numerous training cases that differ by 2 bits. For very small values of p we need to consider more distant neighbors to get this effect to work, and this requires larger values of n. >>> This seems very plausible but is not quite right. First off, as I stated in my posting, I think we've been here before, about a year ago ... my main point in my posting was that how backprop does on XOR is only of historical interest. But since the broth has been stirred up again ... One can make a strong argument that using cross-validation, as in Geoff's scheme, is, by definition, non-local. When describing a learning algorithm as "local", one (or at least I) implicitly means that the guess it makes in response to a novel input test value depends only on nearest neighbors in the training set. K nearest neighbor for fixed (small) K is a local learning algorithm, as I stated in my original posting. Geoff wishes to claim that the learning algorithm which chooses K = K* via cross-validation and then uses K* nearest neighbor is also local. Such a learning algorithm is manifestly global however - on average, changes in the training set far away will affect (perhaps drastically) how the algorithm responds to a novel test input, since they will affect calculated cross-validation errors, and therefore will affect choice of K*. In short, one should not be misled by concentrating on the fact that an overall algorithm has *as one of its parts*, an algorithm which, if used by itself, is local (namely, K* nearest neighbor). It is the *entire* algorithm which is clearly the primary object of interest. And if that algorithm uses cross-validation, it is not "local" in the (reasonable) way it's defined above. Indeed, one might argue that it is precisely this global character which makes cross-validation such a useful heuristic - it allows information from the entire training set to affect one's guess, and it does so w/o resulting in a fit going through all those points in the training set, with all the attendant "overtraining" dangers of such a fit. On the other hand, if K had been set beforehand, rather than via cross-validation, and if for some reason one had set K particularly high, then, as Geoff correctly points out, the algorithm wouldn't perform poorly on parity at all. Moreover, consider changing the definition of "local", along the lines of "a learning algorithm is local if, on average, the the pairs in the set {single input-output pairs from the training set such that changing any single one of those pairs has a big effect on the guess} all lie close to the test input value", with some appropriate definition of "big". For such a definition, cross-validation-based K nearest neighbor might be local. (The idea is that for a big enough training set, changing a single point far away will have little affect, on average, on calculated cross-validation errors. On the other hand, for K* large, changing a single nearby point will also have little effect, so it's not clear that this modified definition of "local" will save Hinton's argument.) 
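The locality argument being debated here is easy to check numerically. The sketch below (not code from either poster) runs fixed-K nearest-neighbour voting, with Hamming distance, on n-bit parity: with K small the vote is dominated by training points one bit away, which all have the opposite parity, while a K large enough to reach the two-bit neighbours behaves as described in the quoted passage. The values of n, K and the training fraction are illustrative assumptions.

import itertools, random
import numpy as np

# K-nearest-neighbour voting on n-bit parity, evaluated only on inputs
# that are NOT in the training set (off-training-set generalization).
def parity(x):
    return int(np.sum(x) % 2)

def knn_accuracy(n=10, frac=0.5, K=1, seed=0):
    rng = random.Random(seed)
    all_inputs = [np.array(b) for b in itertools.product([0, 1], repeat=n)]
    rng.shuffle(all_inputs)
    split = int(frac * len(all_inputs))
    train, test = all_inputs[:split], all_inputs[split:]
    train_labels = [parity(x) for x in train]
    correct = 0
    for x in test:
        d = [int(np.sum(np.abs(x - t))) for t in train]   # Hamming distances
        nearest = np.argsort(d)[:K]                        # indices of the K closest
        vote = sum(train_labels[i] for i in nearest)
        guess = 1 if 2 * vote > K else 0
        correct += (guess == parity(x))
    return correct / len(test)

print("K=1 :", knn_accuracy(K=1))    # near 0: 1-bit neighbours have opposite parity
print("K=25:", knn_accuracy(K=25))   # much better once 2-bit neighbours dominate the vote

Note that K is set by hand in this sketch, so it illustrates the fixed-K case and the large-K case separately; it does not itself choose K by cross-validation.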
I didn't feel any of this was worth getting into in detail in my original posting. In particular, I did not discuss cross-validation or other such schemes, because the people to whom I was responding did not discuss cross-validation. For the record though, in that posting I was thinking of K nearest neighbor where K is fixed, and on the order of n. For such a scenario, everything I said is true, as Geoff's own reasoning shows. Geoff goes on >>> Second, backprop generalizes parity pretty well (on NOVEL test cases) provided you give it a chance. If we use the "standard" net with n hidden units (we can get by with less) we have (n+1)^2 connections. To get several bits of constraint per connection we need p 2^n >> (n+1)^2 where p is the fraction of all possible cases that are used for training. For n=6 there are only 64 possible cases and we use 49 connections so this isnt possible. For n=10, if we train on 512 cases we only get around 3 or 4% errors on the remaining cases. Of course training is slow for 10 bit parity: About 2000 epochs even with adaptive learning rates on each connection (and probably a million epochs if you choose a bad enough version of backprop.) >>> By and large, I agree. However I think this misses the point. XOR is parity for low n, not high n, and my comments were based on extending XOR (since that's what the people I was responding to were talking about.) Accordingly, it's the low n case which was of interest, and as Geoff agrees, backprop dies a gruesome death for low n. Again, I didn't want to get into all this in my original posting. However, while we're on the subject of cross-validation and the like, I'd like to direct the attention of the connectionist community to an article in the Feb. '93 ML journal by Schaffer which suggests that cross-validation fails as often as it succeeds, on average. So it is by no means a panacea. Formal arguments on this topic can be found in a Complex Systems paper of mine from last year, and also in a new paper, directly addressing Schaffer's experiments, which I plan to post in a week or so. David Wolpert  From lpratt at franklinite.Mines.Colorado.EDU Mon Mar 15 18:42:47 1993 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt) Date: Mon, 15 Mar 93 16:42:47 -0700 Subject: The spring 1993 Colorado Machine Learning Colloquium Series Message-ID: <9303152342.AA01804@franklinite.Mines.Colorado.EDU> THE CSM DEPARTMENTS OF MATHEMATICAL AND COMPUTER SCIENCES, GEOPHYSICS, DIVISION OF ENGINEERING, AND CRIS* announce: The spring, 1993 Colorado Machine Learning Colloquium Series Room 110, Stratton Hall, on the CSM campus in Golden, Colorado All talks at 5:30 pm. Machine learning and neural networks are increasingly important areas of applied computer science and engineering research. These methods allow systems to improve their performance over time with reduced input from a human operator. In the last few years, these methods have demonstrated their usefulness in a wide variety of problems. At CSM, an interdisciplinary atmosphere has fostered several projects that use these technologies for problems in geophysics, materials science, and electrical engineering. In Colorado as a whole, exploration of machine learning and neural networks is widespread. This colloquium series fosters the development of these technologies through presentations of recent applied and basic research. At least a third of each talk will be accessible to a general scientific audience. 
Schedule: Tuesday, March 16: Aaron Gordon, CSM: Dynamic Recurrent Neural Networks Tuesday, March 23: Darrell Whitley, CSU Ft. Collins: Executable Models of Genetic Algorithms Tuesday March 30: Chidambar Ganesh, CSM: Some Experiences with Data Preprocessing in Neural Network applications Monday April 5: Chuck Anderson, CSU Ft. Collins: Reinforcement Learning and Control Tuesday April 13: Marijke Augusteign, UC Colorado Springs: Solving Classification Problems with Cascade-Correlation Tuesday April 20: John Steele, CSM: Predicting Degree Of Cure Of Epoxy Resins Using Dielectric Sensor Data and Artificial Neural Networks Thursday April 22: Michael Mozer, CU Boulder: Neural network approaches to formal language induction Open to the Public, Refreshments to be Served For more information (including background readings prior to talks), contact Dr. L. Y. Pratt, CSM Dept. of Mathematical and Computer Sciences, lpratt at mines.colorado.edu, (303) 273-3878 *The mission of the proposed new Center for Robotics and Intelligent Systems (CRIS) at the Colorado School of Mines (CSM) is to facilitate the application of advanced computer science research in neural networks, robotics, and artificial intelligence to specific problem areas of concern at CSM. By bringing diverse interdisciplinary expertise to bear on problems in materials, natural resources, the environment, energy, transportation, information, and communications, the center will facilitate the development of novel computational approaches to difficult problems. When fully operational, the center's activities will include: 1) sponsoring colloquia, 2) publishing a technical report series, 3) aiding researchers in the pursuit of government and private grants, 4) promoting education a) by coordinating CSM courses related to robotics, neural networks and artificial intelligence, and b) by maintaining minors both at the undergraduate and graduate levels, 5) promoting research, and 6) supporting industrial interaction.  From lpratt at franklinite.Mines.Colorado.EDU Mon Mar 15 18:52:28 1993 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt) Date: Mon, 15 Mar 93 16:52:28 -0700 Subject: Darrell Whitley to speak in Colorado Machine Learning series Message-ID: <9303152352.AA01846@franklinite.Mines.Colorado.EDU> The spring, 1993 Colorado Machine Learning Colloquium Series presents: Dr. Darrell Whitley Department of Computer Science Colorado State University, Fort Collins EXECUTABLE MODELS OF GENETIC ALGORITHMS Tuesday March 23, 1993 Room 110, Stratton Hall, on the CSM campus 5:30 pm Abstract A set of executable equations are defined which provide an exact model of a simple genetic algorithm. The equations assume an infinitely large population and require the enumeration of all points in the search space.The predictive behavior of the executable equations is examined in the context of deceptive functions. In addition, these equations can be used to study the computational behavior of parallel genetic algorithms. Suggested background reading: A Genetic Algorithm Tutorial, Darrell Whitley. Open to the Public Refreshments to be served at 5:00pm, prior to talk For more information (including background readings prior to talks, and a schedule of all talks in this series), contact: L. Y. Pratt, CSM Dept. 
of Mathematical and Computer Sciences, lpratt at mines.colorado.edu, (303) 273-3878 Sponsored by: THE CSM DEPARTMENTS OF MATHEMATICAL AND COMPUTER SCIENCES, GEOPHYSICS, DIVISION OF ENGINEERING, AND CRIS (The Center for Robotics and Intelligent Systems at the Colorado School of Mines)  From lba at sara.inesc.pt Mon Mar 15 12:49:19 1993 From: lba at sara.inesc.pt (Luis B. Almeida) Date: Mon, 15 Mar 93 18:49:19 +0100 Subject: Training XOR with BP Message-ID: <9303151749.AA20880@sara.inesc.pt> I must say I am a bit surprised myself, with the XOR discussion, but in the opposite sense: for me, the XOR has always converged rather fast. Let me be more specific: I already had the idea in my mind, from previous informal tests, that the XOR usually converged in much less than 100 epochs (with a relatively large percentage of runs that fell into "local minima" - more about this below). The difference with other people's results may come from implementation details, which I will give below. The experiments I reported were made with a BP simulator developed here at Inesc, which has a lot of facilities (adaptive step sizes, cross-validation, optional entropy error, weight decay, momentum, etc.). For these experiments I've set the parameters so that all these features were disabled. And to be sure, I've just been looking into the essential parts of the code, and didn't find any bugs - the thing appears to be really doing plain BP, without any tricks. So, here are the details: Problem: XOR (2 inputs) No. of training patterns: 4 Input logical levels: -1 for FALSE, 1 for TRUE Target output logical levels: -.9 for FALSE, .9 for TRUE Network: 2 inputs, 2 hidden, 1 output Interconnection: Full between successive layers, no direct links from inputs to output Unit non-linearity: Scaled arctangent, i.e. 2/Pi * arctan(s), where "s" is the input sum Learning method: Backpropagation, batch mode, no momentum Step size (learning rate): 1 Cost function: Squared error, summed over the 4 training patterns Weight initialization: Random, uniform in [-1,1] Stopping criterion: When the sign of the output is correct for all 4 training patterns Why did I choose these parameters? It is relatively well known that symmetrical sigmoids (e.g. varying between -1 and 1) give faster learning than unsymmetrical ones (e.g. varying between 0 and 1) [Yann Le Cun had a poster on the reasons for that, in one of the NIPS conferences, two or three years ago]. On the other hand, I thought that "arctan" probably learned faster than "tanh", because of its slower saturation, but I never ran any extensive tests on that - and see below, about results with "tanh(s/2)". From J.R.Chen at durham.ac.uk Tue Mar 16 13:13:01 1993 From: J.R.Chen at durham.ac.uk (J.R.Chen@durham.ac.uk) Date: Tue, 16 Mar 93 18:13:01 GMT Subject: Modelling of Nonlinear Systems Message-ID: IS INPUT_OUTPUT EQUATION A UNIVERSAL MODEL OF NONLINEAR SYSTEM ? About two weeks ago, Kenji Doya announced a paper "Universality of Fully-Connected Recurrent Neural Networks" on this mail list, which showed that if all the state variables are available then any discrete or continuous-time dynamical system can be modeled by a fully-connected discrete or continuous-time recurrent network respectively, provide the network consists of enough units. This is interesting. However in the real situation, it is more likely that the number of observable variables is less than the degree of freedom of the dynamical system. It could be that only one output signal is available for measurement. 
So the question is: if only the input signal and one output signal are available from a dynamical system, is it possible to reconstruct the original dynamics of the system? This problem has been well studied for linear systems, and the theory is well established. For nonlinear systems, it seems to be still a partially open question. There have been a lot of publications on using recurrent neural networks, MLP nets or whatever other nets to model nonlinear time-series or for nonlinear system identification. This kind of approach is based on an assumption that a nonlinear system can be modelled by an input-output recursive equation just like a linear system can be modelled by an ARMA model. A typical argument goes like this: "Because the n variables {X_k(t)} satisfy a set of first-order differential equations, successive differentiation in time reduces the problem to a single (generally highly nonlinear) differential equation of nth order for one of these variables." One can say something similar for discrete systems. It sounds quite straightforward, and in most equation-specific cases it does work that way. However, this is obviously not a rigorous proof. To my knowledge, the most rigorous results on this problem are presented in F. Takens [1] and I.J. Leontaritis and S.A. Billings [2]. [1] mainly discusses autonomous systems and is widely referenced. It is the theoretical foundation of almost all the work on chaotic time-series modelling or prediction. In [2] it has been proved under some conditions that a discrete nonlinear system can be represented by an input-output recursive equation in a restricted region of operation around the zero equilibrium point. I don't know whether any global results exist. If not, the question would be: is this mainly a mathematical difficulty, or is it something more fundamental? One might speculate that for a generic nonlinear dynamical system, there might be no unique input-output recursive equation representation; it may need a set of equations for different operation regions in the state space. If this is true, the modelling of nonlinear dynamical systems with input-output equations has to be based on an on-line approach. The parameters have to be updated quickly enough to follow the movement of the operating point. The off-line modelling or identification may have convergence problems. [1] F. Takens "Detecting strange attractors in turbulence" in Springer Lecture Notes in Mathematics Vol.893 p366 edited by D.A.Rand and L.S.Young 1981 [2] I.J.Leontaritis and S.A.Billings "Input-output parametric models for non-linear systems" Part-1 and Part-2 INT.J.CONTROL. Vol.41 No.2 pp303-328 and pp329-344. 1985. J R Chen SECS University of Durham, UK

From kak at max.ee.lsu.edu Tue Mar 16 15:00:04 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Tue, 16 Mar 93 14:00:04 CST Subject: The New Training Alg for Feedforward Networks Message-ID: <9303162000.AA20136@max.ee.lsu.edu> Drs. Almeida and Fahlman have commented on how the attribution that backpropagation takes several thousand steps for the XOR problem (whereas my new algorithm takes only 8 steps) may not be fair. This attribution was not supposed to refer to the best BP algorithm for the problem; it was taken from page 332 of PDP, Vol. 1, and it was meant to illustrate the differences in the two algorithms. As has been posted by others here, the new algorithm seems to give a speedup of 100 to 1000 for neurons in the range of 50 to 100. Certainly further tests are called for.
The introduction of a learning rate in the new algorithm and learning with respect to an error criterion improve the performance of the new algorithm. These modifications will be described in a forthcoming report. -Subhash Kak  From kuh at spectra.eng.hawaii.edu Tue Mar 16 10:12:48 1993 From: kuh at spectra.eng.hawaii.edu (Anthony Kuh) Date: Tue, 16 Mar 93 10:12:48 HST Subject: NOLTA: call for papers Message-ID: <9303162012.AA18434@spectra.eng.hawaii.edu> Call for Papers 1993 International Symposium on Nonlinear Theory and its Applications Sheraton Waikiki Hotel, HAWAII December 5 - 9, 1993 The 1993 International Symposium on Nonlinear Theory and its Applications(NOLTA'93) will be held at the Sheraton Waikiki Hotel, Hawaii, on Dec. 5 - 9, 1993. The conference is open to all the world. Papers describing original work in all aspects of Nonlinear Theory and its Applications are invited. Possible topics include, but are not limited to the following: Circuits and Systems Neural Networks Chaos Dynamics Cellular Neural Networks Fractals Bifurcation Biocybernetics Soliton Oscillations Reactive Phenomena Fuzzy Numerical Methods Pattern Generation Information Dynamics Self-Validating Numerics Time Series Analysis Chua's Circuits Chemistry and Physics Mechanics Fluid Mechanics Acoustics Control Optics Circuit Simulation Communication Economics Digital/analog VLSI circuits Image Processing Power Electronics Power Systems Other Related Areas Organizers: Research Society of Nonlinear Theory and its Applications, IEICE Dept. of Elect. Engr., Univ. of Hawaii In cooperation with: IEEE Hawaii Section IEEE Circuits and Systems Society IEEE Neural Networks Council International Neural Network Society IEEE CAS Technical Committee on Nonlinear Circuits and Systems Technical Group of Nonlinear Problems, IEICE Technical Group of Circuits and Systems, IEICE Authors are invited to submit three copies of a summary of 2 or 3 pages to: Technical Program Chairman Prof. Shun-ichi Amari Faculty of Engr., University of Tokyo, Bunkyo-ku, Tokyo, 113 Japan Telefax: +81-3-5689-5752 e-mail: amari at sat.t.u-tokyo.ac.jp The summary should include the author's name(s), affiliation(s) and complete return address(es). The authors should also indicate one or more of the above categories that best describe the topic of the paper. Deadline for submission of summaries: August 15, 1993 Notification of acceptance: Before September 15, 1993 Deadline for camera-ready manuscripts: November 1, 1993 HONORARY CHAIRMEN Kazuo Horiuchi (Waseda Univ.) Masao Iri (Univ. of Tokyo) CO-CHAIRMEN Shun-ichi Amari (Univ. of Tokyo) Anthony Kuh (Univ. of Hawaii) Shinsaku Mori (Keio Univ.) TECHNICAL PROGRAM CHAIRMAN Shun-ichi Amari (Univ. of Tokyo) PUBLICITY Shinsaku Mori (Keio Univ.) LOCAL ARRANGEMENT Anthony Kuh (Dept. of Electrical Engr., Univ. of Hawaii, Manoa, Honolulu, Hawaii, 96822 U.S.A. Phone: +1-808-956-7527 Telefax. +1-808-956-3427 e-mail: kuh at wiliki.eng.hawaii.edu) ADVISORY L. O. Chua (U.C.Berkeley) R. Eberhart (Research Triangle Inst.) A. Fettweis (Ruhr Univ.) L. Fortuna (Univ. of Catania) W.J. Freeman (U.C.Berkeley) M. Hasler (Swiss Fed. Inst. of Tech. Lausanne) Tatsuo Higuchi (Tohoku Univ.) Kazumasa Hirai (Kobe Univ.) Ryogo Hirota (Waseda Univ.) E.S. Kuh (U.C.Berkeley) Hiroshi Kawakami (Tokushima Univ.) Tosiro Koga (Kyushu Univ.) Tohru Kohda (Kyushu Univ.) Masami Kuramitsu(Kyoto Univ.) R.W. Liu (Univ. of Notre Dame) Tadashi Matsumoto (Fukui Univ.) A.I. Mees (Univ. of Western Australia ) Michitada Morisue (Saitama Univ.) 
Tomomasa Nagashima (Muroran Inst. Tech.) Tetsuo Nishi (Kyushu Univ.) J.A. Nossek (Technical University Munich) Kohshi Okumura (Kyoto Univ.) T. Roska (Hungarian Academy of Sciences) Junkichi Satsuma (Univ. of Tokyo) I.W. Sandberg (Univ. of Texas at Austin) Chikara Sato (Keio Univ.) Yasuji Sawada (Tohoku Univ.) V.V. Shakhgildian (Russian Engr. Academy) Yoshisuke Ueda (Kyoto Univ.) Akio Ushida (Tokushima Univ.) J. Vandewalle (Catholic Univ. of Leuven, Heverlee) P. Werbos (National Science Foundation) A.N. Willson, Jr (U.C.L.A.) Shuji Yoshizawa (Univ. of Tokyo) A.H.Zemanian (State Univ. of NY at Stony Brook) SECRETARIATS Shin'ichi Oishi (Waseda Univ.) Mamoru Tanaka (Sophia Univ.) INFORMATION CONTACT Mamoru Tanaka Dept. of Electrical and Electronics Eng., Sophia Univ. Kioicho 7-1, Chiyoda-ku, Tokyo 102 JAPAN Fax: +81-3-3238-3321 e-mail: tanaka at mamoru.ee.sophia.ac.jp  From mm at santafe.edu Wed Mar 17 17:51:39 1993 From: mm at santafe.edu (mm@santafe.edu) Date: Wed, 17 Mar 93 15:51:39 MST Subject: paper available Message-ID: <9303172251.AA16722@lyra> Though not about connectionist networks, the following TR may be of interest to readers of this list: ----------------------------- The following paper is available by public ftp. Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations Melanie Mitchell Peter T. Hraber James P. Crutchfield Santa Fe Institute Santa Fe Institute University of California, Berkeley Santa Fe Institute Working Paper 93-03-014 Abstract We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near ``critical'' lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. To obtain an electronic copy: ftp santafe.edu login: anonymous password: cd /pub/Users/mm binary get rev-edge.ps.Z quit Then at your system: uncompress rev-edge.ps.Z lpr -P rev-edge.ps To obtain a hard copy, send a request to mm at santafe.edu.  From bnglaser at tohu0.weizmann.ac.il Wed Mar 17 08:25:20 1993 From: bnglaser at tohu0.weizmann.ac.il (Daniel Glaser) Date: Wed, 17 Mar 93 15:25:20 +0200 Subject: XOR and BP Message-ID: <9303171325.AA04285@tohu0.weizmann.ac.il> Forgive my ignorance, but isn't back-prop with a learning rate of 1 (see Luis B. Almeida's posting of 15.3.93) doing something quite a lot like random walk ? David Wolpert writes (15.3.93) "how back-prop does on XOR is only of historical interest". 
Is this not because, with XOR, in order to avoid the local minima you HAVE to do a lot more random walking than gradient descending? It is believed that this is not necessary when using back-prop on most interesting problems. Historically, XOR has been a standard-bearer for back-prop, as a simple, intuitive function which a perceptron cannot learn. Could it now appear that the whole technique is tainted by association with this pathological case? Daniel Glaser.

From marshall at cs.unc.edu Thu Mar 18 13:02:11 1993 From: marshall at cs.unc.edu (Jonathan A. Marshall) Date: Thu, 18 Mar 93 13:02:11 -0500 Subject: Paper available: Unsmearing Visual Motion Message-ID: <9303181802.AA10618@marshall.cs.unc.edu> The following paper is available via ftp from the neuroprose archive at Ohio State (instructions for retrieval follow the abstract). -------------------------------------------------------------------------- Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections Kevin E. Martin and Jonathan A. Marshall Department of Computer Science, CB 3175, Sitterson Hall University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A. Human vision systems integrate information nonlocally, across long spatial ranges. For example, a moving stimulus appears smeared when viewed briefly (30 ms), yet sharp when viewed for a longer exposure (100 ms) (Burr, 1980). This suggests that visual systems combine information along a trajectory that matches the motion of the stimulus. Our self-organizing neural network model shows how developmental exposure to moving stimuli can direct the formation of horizontal trajectory-specific motion integration pathways that unsmear representations of moving stimuli. These results account for Burr's data and can potentially also model other phenomena, such as visual inertia. (In press; to appear in S.J. Hanson, J.D. Cowan, & C.L. Giles, Eds., Advances in Neural Information Processing Systems, 5. San Mateo, CA: Morgan Kaufmann Publishers, 1993.) -------------------------------------------------------------------------- To get a copy of the paper, do the following: unix> ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52) login: anonymous password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get martin.unsmearing.ps.Z ftp> quit unix> uncompress martin.unsmearing.ps.Z unix> lpr martin.unsmearing.ps If you have trouble printing the file on a Postscript-compatible printer, send me e-mail (marshall at cs.unc.edu) with your postal address, and I'll have a hardcopy mailed to you (may take several weeks for delivery, though). --------------------------------------------------------------------------

From kehagias at eng.auth.gr Thu Mar 18 02:18:56 1993 From: kehagias at eng.auth.gr (Thanos Kehagias) Date: Thu, 18 Mar 93 09:18:56 +0200 Subject: Modelling nonlinear systems Message-ID: <9303180718.AA07252@vergina.eng.auth.gr> Regarding J.R. Chen's paper: I think it is important to define in what sense "modelling" is understood. I have not read the Doya paper, but my guess is that it is an approximation result (rather than exact representation). If it is an approximation result, the sense of approximation (norm or metric used) is important. For instance: in the stochastic context, there is a well-known statistical theorem, the Wold theorem, which says that every continuous valued, finite second moment, stochastic process can be approximated by ARMA models. The models are (as one would expect) of increasing order (finite but unbounded).
The approximation is in the L2 sense (l.i.m., limit in the mean), that is E([X-X_n]^2) goes to 0, where X is the original process and X_n, n=1,2, ... is the approximating ARMA process. I expect this can also handle stochastic input/ output processes, if the input output pair (X,U) is considered as a joint process. I have proved a similar result in my thesis about approximating finite state stoch. processes with Hidden Markov Models. The approximation is in two senses: weak (approximation of measures) and cross entropy. Since for every HMM it is easy to build an output equivalent network of finite automata, this gets really close to the notion of recurrent networks with sigmoid neurons. Of course this is all for stochastic networks/ probabilistic processes. In the deterministic case one would probably be interested in a different sense of approximation, e.g. L2 or L-infinity approximation. Is the Doya paper in the ohio archive?  From dasgupta at cs.umn.edu Thu Mar 18 13:56:59 1993 From: dasgupta at cs.umn.edu (Bhaskar Dasgupta) Date: Thu, 18 Mar 93 12:56:59 CST Subject: NIPS-92 paper in neuroprose. Message-ID: <9303181857.AA04358@deca.cs.umn.edu> The following file has been placed in connectionists, under the name georg.nips92.ps.Z (to appear in NIPS-92 proceedings). Any questions or comments will be highly appreciated. The Power of Approximating: a Comparison of Activation Functions Bhaskar DasGupta (a) Georg Schnitger (b,c) (a) Department of Computer Science, University of Minnesota, Minneapolis, MN 55455-0159 (b) Department of Computer Science, The Pennsylvania State University, University Park, PA 16802 (c) Department of Mathematics and Computer Science, University of Paderborn, Postfach 1621, 4790 Paderborn, Germany ABSTRACT -------- We compare activation functions in terms of the approximation power of their feedforward nets. We consider the case of analog as well as boolean input. To retrieve this file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: your email address 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get georg.nips92.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for georg.nips92.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. unix> uncompress georg.nips92.ps.Z unix> lpr georg.nips92.ps.Z (or however you print postscript) The paper is exactly 8 pages (no proofs appear due to space limitation). Many thanks to Jordan Pollack for maintaining this list. Bhaskar Dasgupta Department of Computer and Information Science University of Minnesota Minneapolis, MN 55455-0159 email :dasgupta at cs.umn.edu  From wray at ptolemy.arc.nasa.gov Thu Mar 18 22:21:29 1993 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Thu, 18 Mar 93 19:21:29 PST Subject: neuroprose paper on 2nd derivatives and their use in BP Message-ID: <9303190321.AA25438@ptolemy.arc.nasa.gov> The following paper is available by public ftp from Jordan Pollack's wonderful neuroprose collection. Details below. ------------------ Computing Second Derivatives in Feed-Forward Networks: a Review Wray L. Buntine Andreas S. Weigend RIACS & NASA Ames Research Center Xerox PARC To appear in IEEE Trans. of Neural Networks Abstract. 
The calculation of second derivatives is required by recent training and analyses techniques of connectionist networks, such as the elimination of superfluous weights, and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate algorithms for calculating second derivatives. For networks with $|w|$ weights, simply writing the full matrix of second derivatives requires $O(|w|^2)$ operations. For networks of radial basis units or sigmoid units, exact calculation of the necessary intermediate terms requires of the order of $2h+2$ backward/forward-propagation passes where $h$ is the number of hidden units in the network. We also review and compare three approximations (ignoring some components of the second derivative, numerical differentiation, and scoring). Our algorithms apply to arbitrary activation functions, networks, and error functions (for instance, with connections that skip layers, or radial basis functions, or cross-entropy error and Softmax units, etc.). ----------------------------- The paper is buntine.second.ps.Z in the neuroprose archives. The INDEX sentence is A review of computing second derivatives in feed-forward networks. To retrieve this file from the neuroprose archives: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:wray): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> get buntine.second.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for buntine.second.ps.Z . ftp> quit 221 Goodbye. unix> uncompress buntine.second.ps.Z unix> lpr buntine.second.ps ---------------------- If you cannot ftp or print the postscript file, please send email to silva at parc.xerox.com or write to Nicole Silva Xerox PARC 3333 Coyote Hill Rd Palo Alto, CA 94304 USA  From doya at crayfish.UCSD.EDU Thu Mar 18 19:45:28 1993 From: doya at crayfish.UCSD.EDU (Kenji Doya) Date: Thu, 18 Mar 93 16:45:28 PST Subject: Modelling of Nonlinear Systems In-Reply-To: J.R.Chen@durham.ac.uk's message of Tue, 16 Mar 93 18:13:01 GMT Message-ID: <9303190045.AA08642@crayfish.UCSD.EDU> As Dr. Chen says, the fact that there exists a recurrent network that models any given dynamical system [1] does not mean that it can be achieved readily by learning, such as output error gradient descent. This may sound similar to the case of learning parity in feed-forward networks, but there are some additional problems that arise from nonlinear dynamics of the network, which I tried to discuss in another paper I posted in Neuroprose (Bifurcations of ...). Takens' result shows that an n-dimensional attractor dynamics can be reconstructed from its scalar output sequence x(t) as (for example) x(t) = F( x(t-1),...,x(t-m)) for m > 2n. Therefore, a conservative connectionist approach to modeling nonlinear dynamics is to prepare a long enough tapped delay line in the input layer and then to train a feed-forward network to simulate the function F. But it may not be the best approach because the same system can look very simple or complex depending on how we take the state vectors. Whether a recurrent network can find an efficient representation of the state space by learning is still an open problem. Another problem is the stability of the reconstructed trajectories. In many cases, the training set consists of specific trajectories like fixed points and limit cycles and no information is explicitly given about how the nearby trajectories should behave [2]. 
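As an aside on the tapped-delay-line approach described above, the following small Python sketch builds the delay vectors x(t-1), ..., x(t-m) from a scalar series and uses a nearest-neighbor lookup in the reconstructed state space as a crude stand-in for the trained feed-forward map F. The choice of example series (a logistic map), the embedding length m, and the series length are the editor's illustrative assumptions, not anything from the posting.

    import numpy as np

    def logistic_series(T=3000, x0=0.3, r=3.9):
        # a simple chaotic scalar series used only as example data
        x = np.empty(T)
        x[0] = x0
        for t in range(1, T):
            x[t] = r * x[t - 1] * (1.0 - x[t - 1])
        return x

    def delay_vectors(x, m):
        # rows are (x(t-m), ..., x(t-1)); targets are x(t)
        X = np.array([x[t - m:t] for t in range(m, len(x))])
        return X, x[m:]

    x = logistic_series()
    X, y = delay_vectors(x, m=3)          # m > 2n, with n = 1 for this map
    X_tr, y_tr, X_te, y_te = X[:2000], y[:2000], X[2000:], y[2000:]
    # nearest neighbour in the reconstructed state space plays the role of F
    d = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(axis=2)
    pred = y_tr[d.argmin(axis=1)]
    print("one-step prediction RMS error:", float(np.sqrt(np.mean((pred - y_te) ** 2))))

The same construction, with past inputs appended to each delay vector, gives the input-output recursive models discussed in Chen's posting above.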
It has been shown empirically that fixed points and "simple" limit cycles (e.g. sinusoids) tend to be stable, presumably by virtue of squashing functions. However, that is not true for complex trajectories. Since we know that the target trajectories are sampled from attractors (otherwise we can't observe them), we should somehow impose this constraint in training a network. About on-line/off-line training: What we want the network to do is to model a global, nonlinear vector field. On-line learning is not attractive (to me) if the network learns a local, (almost) linear vector field quickly and forgets about the rest of the state space. [1] Dr. Sontag has sent me a paper: H.T. Siegelmann and E.D. Sontag: Some recent results on computing with "neural nets". IEEE Conf. on Decision and Control, Tucson, Dec. 1992. It includes a more formal proof of the universality of recurrent networks. [2] In a recent Neuroprose paper, Tsung and Cottrell explicitly taught the network where the trajectories around a limit cycle should go. Kenji Doya Department of Biology, University of California, San Diego La Jolla, CA 92093-0322, USA Phone: (619)534-3954/5548 Fax: (619)534-0301

From schmidhu at informatik.tu-muenchen.de Fri Mar 19 11:42:34 1993 From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Fri, 19 Mar 1993 17:42:34 +0100 Subject: XOR and BP Message-ID: <93Mar19.174241met.42274@papa.informatik.tu-muenchen.de> Daniel Glaser writes:
>> ..................., but isn't back-prop with a learning rate of 1
>> (see Luis B. Almeida's posting of 15.3.93) doing something quite a lot
>> like random walk ?
Probably not really. I ran a couple of simulations using the 2-2-1 (+ true unit) architecture but doing random search in weight space (instead of backprop). On average, I had to generate 1500 random weight initializations before hitting the first XOR solution (with a uniform distribution for each weight between -10.0 and +10.0). Different architectures and different initialization conditions influence the average number of trials, of course. Since there are only 16 mappings from the set of 4 input patterns to a single binary output, a hypothetical bias-free architecture allowing only such mappings would require about 16 random search trials on average. The results above seem to imply that Luis' backprop procedure had to fight against a `negative' architectural bias. The success of any learning system depends so much on the right bias. Of course, there are architectures and corresponding learning algorithms that solve XOR in a single `epoch'. Juergen Schmidhuber Institut fuer Informatik Technische Universitaet Muenchen Arcisstr. 21, 8000 Muenchen 2, Germany schmidhu at informatik.tu-muenchen.de

From kolen-j at cis.ohio-state.edu Thu Mar 18 05:09:06 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Thu, 18 Mar 93 05:09:06 -0500 Subject: Training XOR with BP In-Reply-To: "Luis B. Almeida"'s message of Mon, 15 Mar 93 18:49:19 +0100 <9303151749.AA20880@sara.inesc.pt> Message-ID: <9303181009.AA14632@pons.cis.ohio-state.edu> You can find the same type of graphs in (Kolen and Goel, 1991) where we reported, among other things, results on experiments testing the effects of the initial weight range on training XOR on 2-2-1 ffwd nets with backprop. This work was expanded in (Kolen and Pollack, 1990) where we examined the boundaries of t-convergent (the network reaches some convergence criterion in t epochs) regions in weight space.
What we found was that boundary was not smooth, ie increase t and you get more rings of convergent regions, but very "cliffy" and small differences in initial weights can mean the difference between converging in 50 epochs and many more than any of us are willing to wait for. I agree with Luis, "local minima" is overused in the connectionist community to describe networks which take a VERY long time to converge. A good example of this is a 2-2-2-1 network learning XOR which are started with very small weights selected from a uniform distribution between (-0.1,0.1). These networks take a long time to learn the target mapping, but not because of a local minima. Rather, it's stuck in a very flat region near the saddle point at all zero weights. Refs Kolen, J. F. and Goel, A. K., (1991). Learning in PDP networks: Computational Complexity and information content. _IEEE Transactions on Systems, Man, and Cybernetics_. 21, pg 359-367. (Available through neuroprose: kolen.pdplearn*) John. F. Kolen and Jordan. B. Pollack, 1990. Backpropagation is Sensitive to Initial Conditions. _Complex Systems_. 4:3. pg 269-280. (Available through neuroprose: kolen.bpsic*)  From ingber at alumni.cco.caltech.edu Mon Mar 22 08:01:04 1993 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Mon, 22 Mar 1993 05:01:04 -0800 Subject: Modelling nonlinear systems Message-ID: <9303221301.AA00706@alumni.cco.caltech.edu> In the context of modeling discussed in the two postings referenced below, it should be noted that multiplicative noise many times is quite robust in modeling stochastic systems that have hidden variables and/or that otherwise would be modeled by much higher-order ARMA models. "Multiplicative" noise means that the typical Gaussian-Markovian noise terms added to introduce noise to sets of differential equations have additional factors which can be quite general functions of the other "deterministic" variables. Some nice work illustrating this is in %A K. Kishida %T Physical Langevin model and the time-series model in systems far from equilibrium %J Phys. Rev. A %V 25 %D 1982 %P 496-507 and %A K. Kishida %T Equivalent random force and time-series model in systems far from equilibrium %J J. Math. Phys. %V 25 %D 1984 %P 1308-1313 A very detailed reference that properly handles such systems is A F. Langouche %A D. Roekaerts %A E. Tirapegui %T Functional Integration and Semiclassical Expansions %I Reidel %C Dordrecht, The Netherlands %D 1982 Modelers' preferences for simple systems aside, it should be noted that most physical systems that can reasonably be assumed to possess Gaussian-Markovian noise should also be assumed to at least have multiplicative noise as well. Such arguments are given in %A N.G. van Kampen %T Stochastic Processes in Physics and Chemistry %I North-Holland %C Amsterdam %D 1981 In the context of neural systems, such multiplicative noise systems arise quite naturally, as I have described in %A L. Ingber %T Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography %J Phys. Rev. A %N 6 %V 44 %P 4017-4060 %D 1991 and in %A L. Ingber %T Generic mesoscopic neural networks based on statistical mechanics of neocortical interactions %J Phys. Rev. A %V 45 %N 4 %P R2183-R2186 %D 1992 }Article 2057 of mlist.connectionists: }From: Thanos Kehagias }Subject: Modelling nonlinear systems }Date: Mon, 22 Mar 93 07:03:12 GMT }Approved: news at cco.caltech.edu } } }Regarding J.R. 
Chen's paper: } }I think it is important to define in what sense "modelling" is understtood. }I have not read the Doya paper, but my guess is that it is an approximation }result (rather than exat representation). If it is an approximation }result, the sense of approximation (norm or metric used) is important. } }For instance: in the stochastic context, there is a well known statistical }theorem, the Wold theorem, which says that every continuous valued, finite }second moment, stochastic process can be approximated by ARMA models. The }models are (as one would expect) of increasing order (finite but unbounded). }The approximation is in the L2 sense (l.i.m., limit in the mean), that is }E([X-X_n]^2) goes to 0, where X is the original process and X_n, n=1,2, ... is }the approximating ARMA process. I expect this can also handle stochastic input/ }output processes, if the input output pair (X,U) is considered as a joint }process. } }I have proved a similar result in my thesis about approximating finite state }stoch. processes with Hidden Markov Models. The approximation is in two senses: }weak (approximation of measures) and cross entropy. Since for every HMM it is }easy to build an output equivalent network of finite automata, this gets really close to the notion of recurrent networks with sigmoid neurons. || Prof. Lester Ingber [10ATT]0-700-L-INGBER || || Lester Ingber Research Fax: 0-700-4-INGBER || || P.O. Box 857 Voice Mail: 1-800-VMAIL-LI || || McLean, VA 22101 EMail: ingber at alumni.caltech.edu ||  From mel at cns.caltech.edu Fri Mar 19 19:26:45 1993 From: mel at cns.caltech.edu (Bartlett Mel) Date: Fri, 19 Mar 93 16:26:45 PST Subject: Preprint Message-ID: <9303200026.AA07747@plato.cns.caltech.edu> /*******PLEASE DO NOT POST TO OTHER B-BOARDS*************/ Announcing two preprints now available in the neuroprose archive: 1. Synaptic Integration in an Excitable Dendritic Tree by Bartlett W. Mel 2. Memory Capacity of an Excitable Dendritic Tree by Bartlett W. Mel Abstracts and ftp instructions follow. Hardcopies are not available, unless you're desperate. -Bartlett Division of Biology Caltech 216-76 Pasadena, CA 91125 mel at caltech.edu (818)356-3643, fax: (818)796-8876 ------------------------------------------------------------------ SYNAPTIC INTEGRATION IN AN EXCITABLE DENDRITIC TREE Bartlett W. Mel Computation and Neural Systems California Institute of Technology Compartmental modeling experiments were carried out in an anatomically characterized neocortical pyramidal cell to study the integrative behavior of a complex dendritic tree containing active membrane mechanisms. Building on a hypothesis presented in (Mel 1992a), this work provides further support for a novel principle of dendritic information processing, that could underlie a capacity for nonlinear pattern discrimination and/or sensory-processing within the dendritic trees of individual nerve cells. It was previously demonstrated that when excitatory synaptic input to a pyramidal cell is dominated by voltage-dependent NMDA-type channels, the cell responds more strongly when synaptic drive is concentrated within several dendritic regions than when it is delivered diffusely across the dendritic arbor (Mel 1992a). This effect, called dendritic ``cluster sensitivity'', persisted under wide ranging parameter variations, and directly implicated the spatial ordering of afferent synaptic connections onto the dendritic tree as an important determinant of neuronal response selectivity. 
In this work, the sensitivity of neocortical dendrites to spatially clustered synaptic drive has been further studied with fast sodium and slow calcium spiking mechanisms present in the dendritic membrane. Several spatial distributions of the dendritic spiking mechanisms were tested, with and without NMDA synapses. Results of numerous simulations reveal that dendritic cluster sensitivity is a highly robust phenomenon in dendrites containing a sufficiency of excitatory membrane mechanisms, and is only weakly dependent on their detailed spatial distribution, peak conductances, or kinetics. Factors that either work against or make irrelevant the dendritic cluster sensitivity effect include 1) very high-resistance spine necks, 2) very large synaptic conductances, 3) very high baseline levels of synaptic activity, or 4) large fluctuations in level of synaptic activity on short time scales. The functional significance of dendritic cluster-sensitivity has been previously discussed in the context of associative learning and memory (Mel 1992ab). Here it is demonstrated that the dendritic tree of a cluster-sensitive neuron implements an approximative spatial correlation, or sum of products, operation, such as that which may underlie nonlinear disparity tuning in binocular visual neurons. ------------------------------------------------------------------ MEMORY CAPACITY OF AN EXCITABLE DENDRITIC TREE Bartlett W. Mel Computation and Neural Systems California Institute of Technology Previous compartmental modeling studies have shown that the dendritic trees of neocortical pyramidal cells may be ``cluster-sensitive'', i.e. selectively responsive to spatially clustered, rather than diffuse, patterns of synaptic activation. The local nonlinear interactions among synaptic inputs in a cluster sensitive neuron are crudely analogous to a layer of hidden units in a neural network, and permit nonlinear pattern discriminations to be carried out within the dendritic tree of a single cell (Mel 1992ab). These studies have suggested that the spatial permutation of synaptic connections onto the dendritic tree is a crucial determinant of a cell's response selectivity. In this paper, the storage capacity of a single cluster sensitive neuron is examined empirically. As in (Mel 1992b), an abstract model neuron, called a ``clusteron'', was used to explore biologically-plausible Hebb-type learning rules capable of manipulating the ordering of synaptic inputs onto cluster-sensitive dendrites. Comparisons are made between the storage capacity of a clusteron, a simple perceptron, and a modeled pyramidal cell with either a passive or electrically excitable dendritic tree. Based on the empirically demonstrated storage capacity of a single biophysically-modeled pyramidal cell, it is estimated that a 5 x 5 mm slab of neocortex can ``memorize'' on the order of 100,000 sparse random input-output associations. Finally, the neurobiological relevance of cluster-sensitive dendritic processing and learning rules is considered.
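To give a feel for the "sum of products" operation mentioned in the abstracts, here is a small Python sketch of a toy cluster-sensitive unit: each active synapse contributes its input multiplied by the summed input within a short stretch of dendrite around it, so the same number of active inputs produces a larger response when they are clustered than when they are scattered. The neighborhood size, the number of sites, and the exact functional form are the editor's assumptions for illustration, not Mel's model code.

    import numpy as np

    def sum_of_products_response(active_sites, n_sites=100, radius=3):
        x = np.zeros(n_sites)
        x[list(active_sites)] = 1.0
        response = 0.0
        for i in range(n_sites):
            lo, hi = max(0, i - radius), min(n_sites, i + radius + 1)
            response += x[i] * x[lo:hi].sum()   # local "product" term for site i
        return response

    clustered = range(40, 50)          # ten active synapses side by side
    scattered = range(0, 100, 10)      # the same ten spread along the dendrite
    print("clustered drive:", sum_of_products_response(clustered))
    print("scattered drive:", sum_of_products_response(scattered))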
------------------------------------------------------------------ To get these papers by ftp: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get mel.synaptic.tar.Z ftp> get mel.memory.ps.Z ftp> quit unix> uncompress mel*Z unix> tar xvf mel*tar unix> lpr -s mel.synaptic.ps1 (or however you print postscript) unix> lpr -s mel.synaptic.ps2 unix> lpr -s mel.synaptic.ps3 unix> lpr -s mel.memory.ps /*******PLEASE DO NOT POST TO OTHER B-BOARDS*************/  From POCHEC%unb.ca at UNBMVS1.csd.unb.ca Mon Mar 22 14:36:46 1993 From: POCHEC%unb.ca at UNBMVS1.csd.unb.ca (POCHEC%unb.ca@UNBMVS1.csd.unb.ca) Date: Mon, 22 Mar 93 15:36:46 AST Subject: Call for Papers Message-ID: ================================================================== ================================================================== Final Call for Participation The 5th UNB AI Symposium ********************************* * * * Theme: * * ARE WE MOVING AHEAD? * * * ********************************* August 11-14, 1993 Sheraton Inn, Fredericton New Brunswick Canada Advisory Committee ================== N. Ahuja, Univ.of Illinois, Urbana W. Bibel, ITH, Darmstadt D. Bobrow, Xerox PARC M. Fischler, SRI P. Gardenfors, Lund Univ. S. Grossberg, Boston Univ. J. Haton, CRIN T. Kanade, CMU R. Michalski, George Mason Univ. T. Poggio, MIT Z. Pylyshyn, Univ. of Western Ontario O. Selfridge, GTE Labs Y. Shirai, Osaka Univ. Program Committee ================= The international program committee will consist of approximately 40 members from all main fields of AI and from Cognitive Science. We invite researchers from the various areas of Artificial Intelligence, Cognitive Science and Pattern Recognition, including Vision, Learning, Knowledge Representation and Foundations, to submit articles which assess or review the progress made so far in their respective areas, as well as the relevance of that progress to the whole enterprise of AI. Other papers which do not address the theme are also invited. Feature ======= Four 70 minute invited talks and five panel discussions are devoted to the chosen topic: "Are we moving ahead: Lessons from Computer Vision." The speakers include (in alphabetical order) * Lev Goldfarb * Stephen Grossberg * Robert Haralick * Tomaso Poggio Such a concentrated analysis of the area will be undertaken for the first time. We feel that the "Lessons from Computer Vision" are of relevance to the entire AI community. Information for Authors ======================= Now: Fill out the form below and email it. --- March 30, 1993: -------------- Four copies of an extended abstract (maximum of 4 pages including references) should be sent to the conference chair. May 15, 1993: ------------- Notification of acceptance will be mailed. July 1, 1993: ------------- Camera-ready copy of paper is due. Conference Chair: Lev Goldfarb Email: goldfarb at unb.ca Mailing address: Faculty of Computer Science University of New Brunswick P. O. Box 4400 Fredericton, New Brunswick Canada E3B 5A3 Phone: (506) 453-4566 FAX: (506) 453-3566 Symposium location The symposium will be held in the Sheraton Inn, Fredericton which overlooks the beautiful Saint John River. IMMEDIATE REPLY FORM ==================== (please email to goldfarb at unb.ca) I would like to submit a paper. Title: _____________________________________ _____________________________________ _____________________________________ I would like to organize a session. 
Title: _____________________________________ _____________________________________ _____________________________________ Name: _____________________________________ _____________________________________ Department: _____________________________________ University/Company: _____________________________________ _____________________________________ _____________________________________ Address: _____________________________________ _____________________________________ _____________________________________ Prov/State: _____________________________________ Country: _____________________________________ Telephone: _____________________________________ Email: _____________________________________ Fax: _____________________________________  From shultz at hebb.psych.mcgill.ca Tue Mar 23 09:17:11 1993 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 23 Mar 93 09:17:11 EST Subject: No subject Message-ID: <9303231417.AA21457@hebb.psych.mcgill.ca> Subject: Abstract Date: 23 March '93 Please do not forward this announcement to other boards. Thank you. ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: A Connectionist Model of the Development of Seriation Denis Mareschal Department of Experimental Psychology University of Oxford Thomas R. Shultz Department of Psychology McGill University Abstract Seriation is the ability to order a set of objects on some dimension such as size. Psychological research on the child's development of seriation has uncovered both cognitive stages and perceptual constraints. A generative connectionist algorithm, cascade- correlation, is used to successfully model these psychological regularities. Previous rule-based models of seriation have been unable to capture either stage progressions or perceptual effects. The present simulations provide a number of insights about possible processing mechanisms for seriation, the nature of seriation stage transitions, and the opportunities provided by the environment for learning about seriation. This paper will be presented at the Fifteenth Annual Conference of the Cognitive Science Society, University of Colorado, 1993. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get mareschal.seriate.ps.Z ftp> quit unix> uncompress mareschal.seriate.ps.Z Tom Shultz Department of Psychology McGill University 1205 Penfield Avenue Montreal, Quebec H3A 1B1 Canada shultz at psych.mcgill.ca  From unni at neuro.cs.gmr.com Tue Mar 23 17:34:01 1993 From: unni at neuro.cs.gmr.com (K.P.Unnikrishnan) Date: Tue, 23 Mar 93 17:34:01 EST Subject: Tech report: MNNs for adaptive control Message-ID: <9303232234.AA00453@neuro.cs.gmr.com> The following technical report is now available. For a hard copy, please send your surface mailing address to sastry at neuro.cs.gmr.com. ftp versions of the paper and the actual code for simulations may be available in future. Unnikrishnan -------------------------------------------------------------- Memory Neuron Networks for Identification and Control of Dynamical Systems P. S. Sastry, G. Santharam Indian Institute of Science and K. P. 
Unnikrishnan General Motors Research Laboratories This paper presents Memory Neuron Networks as models for identification and adaptive control of nonlinear dynamical systems. These are a class of recurrent networks obtained by adding trainable temporal elements to feed-forward networks which makes the output history sensitive. By virtue of this capability, these networks can identify dynamical systems without having to be explicitly fed with past inputs and outputs. Thus, they can identify systems whose order is unknown or systems with unknown delay. It is argued that for satisfactory modeling of dynamical systems, neural networks should be endowed with such internal memory. The paper presents a preliminary analysis of the learning algorithm, providing theoretical justification for the identification method. Methods for adaptive control of nonlinear systems using these networks are presented. Through extensive simulations, these models are shown to be effective both for identification and model reference adaptive control of nonlinear systems.  From inmanh at cogs.sussex.ac.uk Wed Mar 24 05:15:57 1993 From: inmanh at cogs.sussex.ac.uk (Inman Harvey) Date: Wed, 24 Mar 93 10:15:57 GMT Subject: Evolutionary Robotics - Tech. Reports Message-ID: <9921.9303241015@rsuna.crn.cogs.susx.ac.uk> Evolutionary Robotics at Sussex -- Technical Reports =============================== The following six technical reports describe our recent work in using genetic algorithms to develop neural-network controllers for a simulated simple visually-guided robot. Currently only hard-copies are available. To request copies, mail one of: inmanh at cogs.susx.ac.uk or davec at cogs.susx.ac.uk or philh at cogs.susx.ac.uk giving a surface mail address and the CSRP numbers of the reports you want. or write to us at: School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH England, UK. ------------ABSTRACTS-------------------- Genetic convergence in a species of evolved robot control architectures I. Harvey, P. Husbands, D. Cliff Cognitive Science Research Paper CSRP267 February 1993 We analyse how the project of evolving 'neural' network controller for autonomous visually guided robots is significantly different from the usual function optimisation problems standard genetic algorithms are asked to tackle. The need to have open ended increase in complexity of the controllers, to allow for an indefinite number of new tasks to be incrementally added to the robot's capabilities in the long term, means that genotypes of arbitrary length need to be allowed. This results in populations being genetically converged as new tasks are added, and needs a change to usual genetic algorithm practices. Results of successful runs are shown, and the population is analysed in terms of genetic convergence and movement in time across sequence space. Analysing recurrent dynamical networks evolved for robot control P. Husbands, I. Harvey, D. Cliff Cognitive Science Research Paper CSRP265 January 1993 This paper shows how a mixture of qualitative and quantitative analysis can be used to understand a particular brand of arbitrarily recurrent continuous dynamical neural network used to generate robust behaviours in autonomous mobile robots. These networks have been evolved in an open-ended way using an extended form of genetic algorithm. After briefly covering the background to our research, properties of special frequently occurring subnetworks are analysed mathematically. 
Networks evolved to control simple robots with low resolution sensing are then analysed, using a combination of knowledge of these mathematical properties and careful interpretation of time plots of sensor, neuron and motor activities. Analysis of evolved sensory-motor controllers D. Cliff, P. Husbands, I. Harvey Cognitive Science Research Paper CSRP264 December 1992 We present results from the concurrent evolution of visual sensing morphologies and sensory-motor controller-networks for visually guided robots. In this paper we analyse two (of many) networks which result from using incremental evolution with variable-length genotypes. The two networks come from separate populations, evolved using a common fitness function. The observable behaviours of the two robots are very similar, and close to the optimal behaviour. However, the underlying sensing morphologies and sensory-motor controllers are strikingly different. This is a case of convergent evolution at the behavioural level, coupled with divergent evolution at the morphological level. The action of the evolved networks is described. We discuss the process of analysing evolved artificial networks, a process which bears many similarities to analysing biological nervous systems in the field of neuroethology. Incremental evolution of neural network architectures for adaptive behaviour D. Cliff, I. Harvey, P. Husbands Cognitive Science Research Paper CSRP256 December 1992 This paper describes aspects of our ongoing work in evolving recurrent dynamical artificial neural networks which act as sensory-motor controllers, generating adaptive behaviour in artificial agents. We start with a discussion of the rationale for our approach. Our approach involves the use of recurrent networks of artificial neurons with rich dynamics, resilience to noise (both internal and external); and separate excitation and inhibition channels. The networks allow artificial agents (simulated or robotic) to exhibit adaptive behaviour. The complexity of designing networks built from such units leads us to use our own extended form of genetic algorithm, which allows for incremental automatic evolution of controller-networks. Finally, we review some of our recent results, applying our methods to work with simple visually-guided robots. The genetic algorithm generates useful network architectures from an initial set of randomly-connected networks. During evolution, uniform noise was added to the activation of each neuron. After evolution, we studied two evolved networks, to see how their performance varied when the noise range was altered. Significantly, we discovered that when the noise was eliminated, the performance of the networks degraded: the networks use noise to operate efficiently. Evolving visually guided robots D. Cliff, P. Husbands, I. Harvey Cognitive Science Research Paper CSRP220 July 1992 We have developed a methodology grounded in two beliefs: that autonomous agents need visual processing capabilities, and that the approach of hand-designing control architectures for autonomous agents is likely to be superseded by methods involving the artificial evolution of comparable architectures. In this paper we present results which demonstrate that neural-network control architectures can be evolved for an accurate simulation model of a visually guided robot. 
The simulation system involves detailed models of the physics of a real robot built at Sussex; and the simulated vision involves ray-tracing computer graphics, using models of optical systems which could readily be constructed from discrete components. The control-network architecture is entirely under genetic control, as are parameters governing the optical system. Significantly, we demonstrate that robust visually-guided control systems evolve from evaluation functions which do not explicitly involve monitoring visual input. The latter part of the paper discusses work now under development, which allows us to engage in long-term fundamental experiments aimed at thoroughly exploring the possibilities of concurrently evolving control networks and visual sensors for navigational tasks. This involves the construction of specialised visual-robotic equipment which eliminates the need for simulated sensing. Issues in evolutionary robotics I. Harvey, P. Husbands, D. Cliff Cognitive Science Research Paper CSRP219 July 1992 In this paper we propose and justify a methodology for the development of the control systems, or `cognitive architectures', of autonomous mobile robots. We argue that the design by hand of such control systems becomes prohibitively difficult as complexity increases. We discuss an alternative approach, involving artificial evolution, where the basic building blocks for cognitive architectures are adaptive noise-tolerant dynamical neural networks, rather than programs. These networks may be recurrent, and should operate in real time. Evolution should be incremental, using an extended and modified version of genetic algorithms. We finally propose that, sooner rather than later, visual processing will be required in order for robots to engage in non-trivial navigation behaviours. Time constraints suggest that initial architecture evaluations should be largely done in simulation. The pitfalls of simulations compared with reality are discussed, together with the importance of incorporating noise. To support our claims and proposals, we present results from some preliminary experiments where robots which roam office-like environments are evolved.  From usui at tut.ac.jp Thu Mar 25 00:15:44 1993 From: usui at tut.ac.jp (usui@tut.ac.jp) Date: Thu, 25 Mar 93 00:15:44 JST Subject: IJCNN'93-NAGOYA Call For Papers Message-ID: <9303241515.AA26419@bpel.tutics.tut.ac.jp> ======================================================================== CALL FOR PAPERS (Second Version) IJCNN'93-NAGOYA, JAPAN INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS NAGOYA CONGRESS CENTER, JAPAN OCTOBER 25-29,1993 IJCNN'93-NAGOYA co-sponsored by the Japanese Neural Network Society (JNNS), the IEEE Neural Networks Council (NNC), the International Neural Network Society (INNS), the European Neural Network Society (ENNS), the Society of Instrument and Control Engineers (SICE, Japan), the Institute of Electronics, Information and Communication Engineers (IEICE, Japan), the Nagoya Industrial Science Research Institute, the Aichi Prefectural Government and the Nagoya Municipal Government cordially invite interested authors to submit papers in the field of neural networks for presentation at the Conference. Nagoya is a historical city famous for Nagoya Castle and is located in the central major industrial area of Japan. There is frequent direct air service from most countries. Nagoya is 2 hours away from Tokyo or 1 hour from Osaka by bullet train. CONFERENCE SCHEDULE: AM PM Evening '93.10.25(Mon.) 
Registration (AM, PM); Tutorials (AM, PM)
10.26 (Tue.) Opening Ceremony (AM); Industry Forum (PM); Reception (Evening)
10.27 (Wed.) Technical Sessions (Oral, Poster)
10.28 (Thu.) Technical Sessions (Oral, Poster); Banquet (Evening)
10.29 (Fri.) Technical Sessions (Oral, Poster); Closing
KEYNOTE SPEAKERS INCLUDE: David E. Rumelhart, Methods for Improving Generalization in Connectionist Networks Shun-ichi Amari, Brain and Computer - A Perspective PLENARY SPEAKERS INCLUDE: Rodney Brooks, (TBD) Edmund T. Rolls, Neural Networks in the Hippocampus and Cerebral Cortex Involved in Memory Kunihiko Fukushima, Improved Generalization Ability Using Constrained Neural Network Architectures INVITED SPEAKERS INCLUDE: Keiji Tanaka, Neural Mechanisms of Visual Recognition Tomaso Poggio, Visual Learning: From Object Recognition to Computer Graphics Mitsuo Kawato, Inverse Dynamics Model in the Cerebellum Teuvo Kohonen, Generalization of the Self-Organizing Map Michael I. Jordan, Learning in Hierarchical Networks Rolf Eckmiller, Information Processing in Biology-inspired Pulse Coded Neural Networks Shigenobu Kobayashi, Hybrid Systems of Natural and Artificial Intelligence Kazuo Kyuma, Optical Neural Networks / Optical Neurodevices TECHNICAL SESSIONS: Papers may be submitted for consideration as oral or poster presentations in the following areas: Neurobiological Systems; Self-organization; Cognitive Science; Learning & Memory; Image Processing & Vision; Robotics & Control; Speech, Hearing & Language; Hybrid Systems (Fuzzy, Genetic, Expert Systems, AI); Sensorimotor Systems; Implementation (Electronic, Optical, Bio-chips); Neural Network Architectures; Network Dynamics; Optimization; Other Applications (Medical and Social Systems, Art, Economy, etc.; please specify the area of the application). Four (4) page papers MUST be received by April 30, 1993. Papers received after that date will be returned unopened. International authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. All submissions will be acknowledged by mail. Papers will be reviewed by senior researchers in the field, and all authors will be informed of the decisions at the end of the review process by June 30, 1993. A limited number of papers will be accepted for oral and poster presentations. No poster sessions are scheduled in parallel with oral sessions. All accepted papers will be published as submitted in the conference proceedings, which should be available at the conference for distribution to all regular conference registrants. Please submit six (6) copies (one camera-ready original and five copies) of the paper. Do not fold or staple the original camera-ready copy. The four-page papers, including figures, tables, and references, should be written in English. Papers exceeding four pages will be charged 30,000 yen per extra page. Papers should be submitted on 210mm x 297mm (A4) or 8-1/2" x 11" (letter size) white paper with one inch margins on all four sides (actual space to be allowed to type is 165mm (W) x 228mm (H) or 6-1/2" x 9"). They should be prepared by typewriter or letter-quality printer in one or two-column format, single-spaced, in Times or similar font of 10 points or larger, and printed on one side of the page only. Please be sure that all text, figures, captions, and references are clean, sharp, readable, and of high contrast. Fax submissions are not acceptable.
Centered at the top of the first page should be the complete title, author(s), affiliation(s), and mailing address(es), followed by a blank space and then an abstract, not to exceed 15 lines, followed by the text. Send papers to: IJCNN'93-NAGOYA Secretariat. In an accompanying letter, the following should be included:
Full Title of the Paper
Presentation Preferred (Oral or Poster)
Corresponding Author: Name, Mailing address, Telephone and FAX numbers, E-mail address
Presenter*: Name, Mailing address, Telephone and FAX numbers, E-mail address
Technical Session: 1st and 2nd choices
Audio Visual Requirements: e.g., 35mm Slide, OHP, VCR
* Students who wish to apply for the Student Award, please specify and enclose a verification letter of status from the Department head. TUTORIALS INCLUDE: Prof. Edmund T. Rolls (TBD) Prof. H.-N. L. Teodorescu (TBD) Prof. Haim Sompolinsky (TBD) ============================== Models for the development of the visual system Professor Michael P. Stryker University of California ============================== Optical Neural Networks Demetri Psaltis, California Institute of Technology ============================= Self-Organizing Neural Architectures for Adaptive Sensory-Motor Control Stephen Grossberg, Boston University ============================= Biology-Inspired Image Preprocessing: the How and the Why Gart Hauske, Technische Universitat Munchen ============================= Possible Roles of Stimulus-dominated and Cortex-dominated Synchronizations in the Visual Cortex Prof. Dr. Reinhard Eckhorn Philipps University Marburg ============================= Genetic Algorithm Kenneth De Jong George Mason University ============================= Networks of Behavior Based Robots Prof. Rodney Brooks AI Lab, MIT ============================= Pattern and Speech Recognition by Discriminative Methods B.H. Juang, AT&T Bell Labs. ============================= Developments of modular learning systems Michael I. Jordan MIT ============================= VLSI Implementation of Neural Networks Federico Faggin Synaptics, Inc. ============================= Time Series Prediction and Analysis Dr. Andreas Weigend Palo Alto Research Center ============================= The chaotic dynamics of large networks R.S. MacKay University of Warwick ============================= Synaptic coding of spike trains Jose Pedro Segundo University of California ============================= NEURAL NETWORK BASICS: APPLICATIONS, EXAMPLES AND STANDARDS Mary Lou Padgett Auburn University ============================= Analog Neural Networks - Techniques, Circuits and Learning - Alan F. Murray University of Edinburgh ============================= Methods to adapt neural or fuzzy networks for control. Paul J. Werbos National Science Foundation ============================= Pattern Recognition with Fuzzy Sets and Neural Nets James C. Bezdek, U. of W. Florida ============================= Learning, Approximation, and Networks Tomaso Poggio and Federico Girosi Tutorials for IJCNN'93-NAGOYA will be held on Monday, October 25, 1993. Each tutorial will be three hours long. The tutorials should be designed as such and not as expanded talks. They should lead the student at the college senior level through a pedagogically understandable development of the subject matter. Experts in neural networks and related fields are encouraged to submit proposed topics for tutorials. INDUSTRY FORUM SPEAKERS INCLUDE: Guido J.
Deboeck, Robert Hecht-Nielsen, Toshirou Fujiwara, Tsuneharu Nitta.
A major industry forum will be held in the afternoon on Tuesday, October 26, 1993. Speakers will include representatives from industry, government, and academia. The aim of the forum is to permit attendees to understand more fully possible industrial applications of neural networks, discuss problems that have arisen in industrial applications, and to delineate new areas of research and development of neural network applications. EXHIBIT INFORMATION: Exhibitors are encouraged to present the latest innovations in neural networks, including electronic and optical neuro computers, fuzzy neural networks, neural network VLSI chips and development systems, neural network design and simulation tools, software systems, and application demonstration systems. A large group of vendors and participants from academia, industry and government are expected. We believe that IJCNN'93-NAGOYA will be the largest neural network conference and trade-show in Japan in which to exhibit your products. Potential exhibitors should plan to sign up for exhibit booths before April 30, 1993, since exhibit space is limited. Vendors may contact the IJCNN'93-NAGOYA Secretariat.
COMMITTEES & CHAIRS:
Advisory Chair: Fumio Harashima, University of Tokyo
Vice-cochairs: Russell Eberhart (IEEE NNC), Research Triangle Institute; Paul Werbos (INNS), National Science Foundation; Teuvo Kohonen (ENNS), Helsinki University of Technology
Organizing Chair: Shun-ichi Amari, University of Tokyo
Program Chair: Kunihiko Fukushima, Osaka University
Cochairs: Robert J. Marks, II (IEEE NNC), University of Washington; Harold H. Szu (INNS), Naval Surface Warfare Center; Rolf Eckmiller (ENNS), University of Dusseldorf; Noboru Sugie, Nagoya University
Steering Chair: Toshio Fukuda, Nagoya University
General Affair Chair: Fumihito Arai, Nagoya University
Finance Chairs: Hide-aki Saito, Tamagawa University; Roy S. Nutter, Jr, West Virginia University
Publicity Chairs: Shiro Usui, Toyohashi University of Technology; Evangelia Micheli-Tzanakou, Rutgers University
Publication Chair: Yoichi Okabe, University of Tokyo
Local Arrangement Chair: Yoshiki Uchikawa, Nagoya University
Exhibits Chairs: Masanori Idesawa, Riken; Shigeru Okuma, Nagoya University
Industry Forum Chairs: Noboru Ohnishi, Nagoya University; Hisato Kobayashi, Hosei University
Social Event Chair: Kazuhiro Kosuge, Nagoya University
Tutorial Chair: Minoru Tsukada, Tamagawa University
Technical Tour Chair: Hideki Hashimoto, University of Tokyo
REGISTRATION: Registration Fee
Full conference registration fee includes admission to all sessions, exhibit area, welcome reception and proceedings. Tutorials and banquet are NOT included.
Membership     Before Aug. 31 '93   After Sept. 1 '93   On-site
Member*        45,000 yen           55,000 yen          60,000 yen
Non-Member     55,000 yen           65,000 yen          70,000 yen
Student**      12,000 yen           15,000 yen          20,000 yen
Tutorial Registration Fee
Tutorials will be held on Monday, October 25, 1993, 10:00 am - 1:00 pm and 3:00 pm - 6:00 pm. The complete list of tutorials will be available in the June mailing.
Membership   Option     Before August 31 '93                  After Sept. 1 '93
                        Industrial    Univ. & Nonprofit Inst.
Member*      Half day   20,000 yen    7,000 yen                40,000 yen
             Full day   30,000 yen    10,000 yen               60,000 yen
Non-Member   Half day   30,000 yen    10,000 yen               50,000 yen
             Full day   45,000 yen    15,000 yen               80,000 yen
Student**    Half day   ------------  5,000 yen                20,000 yen
             Full day   ------------  7,500 yen                30,000 yen
* A member of co-sponsoring and co-operating societies.
**Students must submit a verification letter of full-time status from the Department head.
Banquet
The IJCNN'93-NAGOYA Banquet will be held on Thursday, October 28, 1993. Note that the Banquet ticket (5,000 yen/person) is not included in the registration fee. Pre-registration is recommended, since the number of seats is limited. The registration for the Banquet can be made at the same time as the conference registration.
Payment and Remittance
Payment for registration and tutorial fees should be in one of the following forms:
1. A bank transfer to the following bank account: Name of Bank: Tokai Bank, Nagoya Ekimae-Branch; Name of Account: Travel Plaza International Chubu, Inc. EC-ka; Account No.: 1079574; Address: 6F Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan
2. Credit Cards (American Express, Diners, Visa, Master Card) are acceptable except for domestic registrants. Please indicate your card number and expiration date on the Registration Form.
Note: When making remittance, please send the Registration Form to the IJCNN'93-NAGOYA Secretariat together with a copy of your bank's receipt for the transfer. Personal checks will not be accepted, and no currency other than Japanese yen will be accepted.
Confirmation and Receipt
Upon receiving your Registration Form and confirming your payment, the IJCNN'93-NAGOYA Secretariat will send you a confirmation / receipt. This confirmation should be retained and presented at the registration desk of the conference site.
Cancellation and Refund of the Fees
All financial transactions for the conference are being handled by the IJCNN'93-NAGOYA Secretariat. Please send a written notification of cancellation directly to the office. For cancellations received on or before September 30, 1993, a 50% cancellation fee will be charged. We regret that no refunds for registration can be made after October 1, 1993. All refunds will be processed after the conference.
NAGOYA:
The City of Nagoya, with a population of over two million, is the principal city of central Japan and lies at the heart of one of the three leading areas of the country. The area in and around the city contains a large number of high-tech industries with names known worldwide, such as Toyota, Mitsubishi, Honda, Sony and Brother. The city's central location gives it excellent road and rail links to the rest of the country; there exist direct air services to 18 other cities in Japan and 26 cities abroad. Nagoya enjoys a temperate climate and agriculture flourishes on the fertile plain surrounding the city. The area has a long history; Nagoya is the birthplace of two of Japan's greatest heroes: the Lords Oda Nobunaga and Toyotomi Hideyoshi, who did much to bring the 'Warring States' period to an end. Tokugawa Ieyasu, who completed the task and established the Edo period, was also born in the area. Nagoya flourished under the benevolent rule of this lord and his descendants.
Climate and Clothing
The climate in Nagoya in late October is usually agreeable and stable, with an average temperature of 16-23 C (60-74 F). Heavy clothing is not necessary; however, a light sweater is recommended. A business suit as well as casual clothing is appropriate.
TRAVEL INFORMATION: Official Travel Agent
Travel Plaza International Chubu, Inc. (TPI) has been appointed as the Official Travel Agent for IJCNN'93-NAGOYA, JAPAN to handle all travel arrangements in Japan. All inquiries and application forms for hotel accommodations described herein should be addressed as follows: Travel Plaza International Chubu, Inc.
Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya 450, Japan. Tel: +81-52-561-9880/8655 Fax: +81-52-561-1241
Airline Transportation
Participants from Europe and North America who are planning to come to Japan by air are advised to get in touch with the following travel agents, who can provide information on discount fares. Departure cities are Los Angeles, Washington, New York, Paris, and London.
Japan Travel Bureau U.K. Inc., 9 Kingsway, London WC2B 6XF, England, U.K. Tel: (01)836-9393 Fax: (01)836-6215
Japan Travel Bureau International Inc., Equitable Tower 11th Floor, New York, N.Y. 10019, U.S.A. Tel: (212)698-4955 Fax: (212)246-5607
Japan Travel Bureau Paris, 91 Rue du Faubourg Saint-Honore, 75008 Paris, France. Tel: (01)4265-1500 Fax: (01)4265-1132
Japan Travel Bureau International Inc., Suite 1410, One Wilshire Bldg., 624 South Grand Ave, Los Angeles, CA 90017, U.S.A. Tel: (213)687-9881 Fax: (213)621-2318
Japan Rail Pass
The JAPAN RAIL PASS is a special ticket that is available only to travellers visiting Japan from foreign countries for sight-seeing. To be eligible to purchase a JAPAN RAIL PASS, you must purchase an Exchange Order from an authorized sales office or agent before you come to Japan. Please contact JTB offices or your travel agent for details. Note: The rail pass is a flash pass good on most of the trains and ferries in Japan. It provides very significant savings on transportation costs within Japan if you plan to travel more than just from Tokyo to Nagoya and return. Japan Railway seat reservations cannot be made until the Japan Rail Pass has been issued in Japan.
Access to Nagoya
Direct flights to Nagoya are available from the following cities: Seoul, Taipei, Pusan, Hong Kong, Singapore, Bangkok, Cheju, Jakarta, Denpasar, Kuala Lumpur, Honolulu, Portland, Los Angeles, Guam, Saipan, Toronto, Vancouver, Rio de Janeiro, Sao Paulo, Moscow, Frankfurt, Paris, London, Brisbane, Cairns, Sydney and Auckland. Participants flying from the U.S.A. are urged to fly to Los Angeles, CA, or Portland, OR, and transfer to direct flights to Nagoya on Delta Airlines, or fly to Seoul, Korea, for a connecting flight to Nagoya. For participants from other countries, flights to Narita (the New Tokyo International Airport) or Osaka International Airport are recommended. Domestic flights are available from Narita to Nagoya, but not from Osaka. The bullet train, "Shinkansen", is a fast and convenient way to get to Nagoya from either Osaka or Tokyo.
Transportation from Nagoya International Airport
Bus service to the Nagoya JR train station is available every 15 minutes. The bus stop (signed as No. 1) is to your left as you exit the terminal. The trip takes about 1 hour.
Transportation from Narita International Airport
To reach the Tokyo JR train station (to connect with the Shinkansen), two routes are recommended: 1. An express train from the airport to the Tokyo JR train station. This train has reserved seating only; buy tickets before boarding. Follow the signs in the airport to JR Narita station. The trip takes 1 hour. 2. A non-stop limousine bus service is available, leaving Narita airport every 15 minutes. The trip will take between one and one and a half hours or more, depending on traffic conditions. The limousine buses have reserved seating, so it is necessary to purchase a ticket before boarding. If you plan to stay in Tokyo overnight before proceeding to Nagoya, other limousine buses to major Tokyo hotels are available.
Transportation from Osaka International Airport
Non-stop bus service to the Shin-Osaka JR train station is available every 15 min.
Foreign Exchange and Traveller's Checks
Purchase of traveller's checks in Japanese yen or U.S. dollars before departure is recommended. The conference secretariat and most stores will accept only Japanese yen in cash. Major credit cards are accepted in a number of shops and hotels. Foreign currency exchange and cashing of traveller's checks are available at the New Tokyo International Airport, the Osaka International Airport and major hotels. Major banks that handle foreign currencies are located in the downtown area. Banks are open from 9:00 to 15:00 on weekdays and are closed on Saturday and Sunday.
Electricity
100 volts, 60 Hz.
For registration and additional information please contact: IJCNN'93-NAGOYA Secretariat: Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 ________________________________________________________________________________ Please do not reply to this account. Please use the telephone number, fax number or mail address listed above. --- Shiro Usui (usui at tut.ac.jp) Biological and Physiological Engineering Lab. Department of Information and Computer Sciences Toyohashi University of Technology Toyohashi 441, Japan TEL & FAX 0532-46-7806  From rohwerrj at cs.aston.ac.uk Wed Mar 24 13:54:45 1993 From: rohwerrj at cs.aston.ac.uk (rohwerrj) Date: Wed, 24 Mar 93 18:54:45 GMT Subject: postdoctoral research opportunity Message-ID: <1787.9303241854@cs.aston.ac.uk> *************************************************************************** POSTDOCTORAL RESEARCH OPPORTUNITY Dept. of Computer Science and Applied Mathematics Aston University *************************************************************************** The Department of Computer Science and Applied Mathematics at Aston University is seeking a Postdoctoral research assistant under SERC grant GR/J17814, "Training Algorithms based on Adaptive Critics". The successful applicant will work with Richard Rohwer in the Neural Computing research group at Aston to develop and study the use of Adaptive Critic credit assignment techniques for training several types of neural network models. This involves transplanting techniques developed mainly for control applications into a different setting. The applicant must hold a PhD degree in Computer Science, Physics, Mathematics, Electrical Engineering, or a similar quantitative science. Mathematical skill, programming experience (preferably with C or C++ under UNIX), and familiarity with neural network models are essential. Aston University's growing neural networks group currently consists of three academic staff and about 10 research students and visiting researchers. The group has access to about 50 networked Sparc stations in the Computer Science Department, in addition to 5 of its own, and further major acquisitions are in progress. The post will be for a period of two years, which can commence at any time between 15 May 1993 and 15 November 1993. Starting salary will be within the range 12638 to 14962 pounds per annum. Application forms and further particulars may be obtained from the Personnel Officer (Academic Staff), quoting Ref: 9307, Aston University, Aston Triangle, Birmingham B4 7ET, England. (Tel: (44 or 0)21 359-0870 (24 hour answerphone), FAX: (44 or 0)21 359-6470).
The closing date for receipt of applications is 30 April 1993.  From harnad at Princeton.EDU Wed Mar 24 18:12:57 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Wed, 24 Mar 93 18:12:57 EST Subject: MATHEMATICAL PRINCIPLES OF REINFORCEMENT: BBS Call for Commentators Message-ID: <9303242312.AA15204@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by PETER KILLEEN on MATHEMATICAL PRINCIPLES OF REINFORCEMENT, that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ MATHEMATICAL PRINCIPLES OF REINFORCEMENT Peter R. Killeen Department of Psychology University of Arizona Tempe, AZ 85281-1104 KEYWORDS: reinforcement, memory, coupling, contingency, contiguity, tuning curves, activation, schedules, trajectories, response rate, mathematical models. ABSTRACT: Effective conditioning requires a correlation between the experimenter's definition of a response and an organism's, but an animal's perception of its own behavior differs from ours. Various definitions of the response are explored experimentally using the slopes of learning curves to infer which comes closest to the organism's definition. The resulting exponentially weighted moving average provides a model of memory which grounds a quantitative theory of reinforcement in which incentives excite behavior and focus the excitement on the responses present in memory at the same time. The correlation between the organism's memory and the behavior measured by the experimenter is given by coupling coefficients derived for various schedules of reinforcement. For simple schedules these coefficients can be concatenated to predict the effects of complex schedules and can be inserted into a generic model of arousal and temporal constraint to predict response rates under any scheduling arrangement. According to the theory, the decay of memory is response-indexed rather than time-indexed. Incentives displace memory for the responses that occur before them and may truncate the representation of the response that brings them about. This contiguity-weighted correlation model bridges opposing views of the reinforcement process and can be extended in a straightforward way to the classical conditioning of stimuli. Placing the short-term memory of behavior in so central a role provides a behavioral account of a key cognitive process. 
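The exponentially weighted moving average that serves as the memory model here is simple to state concretely. The short sketch below is only an illustration of such a response-indexed trace; the decay parameter beta, the 0/1 coding of responses, and the update form are assumptions made for this example, not Killeen's own notation or parameter values.

# Illustrative sketch (not Killeen's notation): an exponentially weighted
# moving average over successive responses.  The trace is updated once per
# response emitted, not per unit of time, echoing the abstract's point that
# memory decay is response-indexed rather than time-indexed.

def update_trace(trace, response, beta=0.2):
    """Blend the newest response into the memory trace; beta is the weight
    given to the newest response, so older responses are discounted
    geometrically with each further response."""
    return beta * response + (1.0 - beta) * trace

# Example: the trace after a run of target responses (coded 1) followed by
# other behaviour (coded 0).
trace = 0.0
for r in [1, 1, 1, 0, 0, 1]:
    trace = update_trace(trace, r)
    print(f"response={r}  trace={trace:.3f}")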
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.killeen). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.killeen When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From lpratt at franklinite.Mines.Colorado.EDU Mon Mar 22 18:13:18 1993 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt) Date: Mon, 22 Mar 93 16:13:18 -0700 Subject: Chidambar Ganesh to speak on preprocessing in neural networks Message-ID: <9303222313.AA10350@franklinite.Mines.Colorado.EDU> The spring, 1993 Colorado Machine Learning Colloquium Series Dr. Chidambar Ganesh Division of Engineering Colorado School of Mines, Golden Colorado Some Experiences with Data Preprocessing in Neural Network applications Tuesday March 30, 1993 Room 110, Stratton Hall, on the CSM campus 5:30 pm ABSTRACT In the application of Artificial Neural Systems (ANS) to engineering problems, appropriate representation of the measured sensor signals to be input to the ANS and the desired output response from the ANS are critical to successful network development. In this seminar, three different applications are presented wherein the representation of the input-output data sets proved to be a central issue in training neural networks effectively. The examples to be considered are : 1. Object identification based on ultrasonic measurements. 2. Real-time defect detection in an arc welding process from acoustic emission sensors. 3. Aluminum can color quality based on spectroscopic measurements. The first example deals with classification of 2-D objects from an ultrasonic mapping, and provides a simple yet striking illustration of the concept that utilizing data compression techniques can successfully resolve network learning problems. The latter two case studies relate to process monitoring and control in manufacturing. In both situations, an in-depth understanding of the physical process underlying the generated data was essential to developing meaningful representation schemes, ultimately resulting in useful networks. Suggested background readings: A Neural Network-Based Object Identification System..C. Ganesh, D. Morse, E. Wetherell and J. P. H. Steele. 
Development of an Intelligent Acoustic Emission Sensor Data Processing System -- Final Report. C. Ganesh and Steven M. Lassek. A Neural Network-Based Can Color Diagnostician. C. Ganesh, L. Easton, and J. Jones. These readings are available on reserve at the Arthur Lakes Library at CSM. Ask for the reserve package for MACS570, subject: Ganesh. Non-students can check materials out on reserve by providing a driver's license. Open to the Public. Refreshments to be served at 5:00pm, prior to the talk. For more information (including a schedule of all talks in this series), contact: Dr. L. Y. Pratt, CSM Dept. of Mathematical and Computer Sciences, lpratt at mines.colorado.edu, (303) 273-3878 Sponsored by: THE CSM DEPARTMENTS OF MATHEMATICAL AND COMPUTER SCIENCES, GEOPHYSICS, DIVISION OF ENGINEERING, AND CRIS The Center for Robotics and Intelligent Systems at the Colorado School of Mines  From dhw at santafe.edu Tue Mar 23 15:44:42 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Tue, 23 Mar 93 13:44:42 MST Subject: new paper in neuroprose Message-ID: <9303232044.AA13789@zia> **DO NOT FORWARD TO OTHER GROUPS** The following file has been placed in neuroprose, under the name wolpert.overfitting.ps.Z. Thanks to Jordan Pollack for maintaining this very useful system. ON OVERFITTING AVOIDANCE AS BIAS by David H. Wolpert, The Santa Fe Institute Abstract: In supervised learning it is commonly believed that Occam's razor works, i.e., that penalizing complex functions helps one avoid "overfitting" functions to data, and therefore improves generalization. It is also commonly believed that cross-validation is an effective way to choose amongst algorithms for fitting functions to data. In a recent paper, Schaffer (1993) presents experimental evidence disputing these claims. The current paper consists of a formal analysis of these contentions of Schaffer's. It proves that his contentions are valid, although some of his experiments must be interpreted with caution. Keywords: overfitting avoidance, cross-validation, decision tree pruning, inductive bias, extended Bayesian analysis, uniform priors. To retrieve the file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get wolpert.overfitting.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for wolpert.overfitting.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. unix> uncompress wolpert.overfitting.ps.Z unix> lpr wolpert.overfitting.ps (or however you print postscript)  From tsaih at sun7mis Fri Mar 19 12:32:03 1993 From: tsaih at sun7mis (teache tsaih) Date: 19 Mar 1993 11:32:03 -0600 (CST) Subject: Training XOR with BP Message-ID: <9303190332.AA03805@mis.nccu.edu.tw> People who are surprised by the empirical observation above might be interested in the paper "The Saddle Stationary point In the Back Propagation Networks", in IJCNN'92 Beijing, II, pages 886-892. There I showed mathematically that the "local minimum" phenomenon in the vicinity of the origin point is due to the nature of the degenerate saddle stationary point. That is, the origin point in the XOR problem is a degenerate saddle stationary point.
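The stationarity claim is easy to probe numerically. The sketch below assumes a standard 2-2-1 logistic network with biases and sum-squared error over the four XOR patterns (the paper's exact setup may differ); it checks that the gradient vanishes at the all-zero weight vector and that the Hessian there is singular, so the usual second-order test cannot classify the point. The higher-order analysis that identifies it as a degenerate saddle is the subject of the paper itself.

# Numerical check that the all-zero weight vector of a 2-2-1 logistic
# network is a degenerate stationary point of the XOR error surface.
# Assumed setup (not necessarily identical to the paper's): sum-squared
# error, logistic units with biases, finite-difference derivatives.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([0.0, 1.0, 1.0, 0.0])            # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def error(theta):
    """Sum-squared error; theta packs W1 (2x2), b1 (2), w2 (2), b2 (1)."""
    W1, b1 = theta[0:4].reshape(2, 2), theta[4:6]
    w2, b2 = theta[6:8], theta[8]
    h = sigmoid(X @ W1.T + b1)                # hidden activations
    y = sigmoid(h @ w2 + b2)                  # network outputs
    return 0.5 * np.sum((y - T) ** 2)

def num_grad(f, x, eps=1e-5):
    g = np.zeros_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def num_hessian(f, x, eps=1e-4):
    H = np.zeros((x.size, x.size))
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        H[:, i] = (num_grad(f, x + d) - num_grad(f, x - d)) / (2 * eps)
    return 0.5 * (H + H.T)                    # symmetrize

origin = np.zeros(9)
print("error at origin:", error(origin))                               # 0.5
print("gradient norm at origin:", np.linalg.norm(num_grad(error, origin)))
print("Hessian eigenvalues:",
      np.round(np.linalg.eigvalsh(num_hessian(error, origin)), 4))
# The gradient is (numerically) zero and the Hessian has one positive
# eigenvalue with the rest near zero: a stationary point whose curvature
# test is inconclusive, consistent with the degenerate-saddle analysis.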
Ray Tsaih tsaih at mis.nccu.edu.tw  From harry at neuronz.Jpl.Nasa.Gov Thu Mar 25 12:56:03 1993 From: harry at neuronz.Jpl.Nasa.Gov (Harry Langenbacher) Date: Thu, 25 Mar 93 09:56:03 PST Subject: Post-Doc Position Announcement - JPL Message-ID: <9303251756.AA09331@neuronz.Jpl.Nasa.Gov> The Concurrent Processing Devices Group at JPL has two post-doc positions for recent PhD's with neural-net, analog/digital VLSI and/or opto-electronic experience. The positions will have a duration of one to two years. USA citizenship or permanent-resident status is required. One position will be in our Electronic Concurrent Processing Devices Group to concentrate on applications of neural networks and parallel processing devices to problems such as pattern recognition, resource allocation, and optimization, using custom VLSI designs and custom designs of computer sub-systems. The other position will be in our optical-processing group, to work with lasers, computer-generated holograms, and Acousto-Optic Tunable Filters for applications in pattern recognition and other neural-net architectures. We currently work with (analog, digital, and optical) neural net and concurrent processing devices and hardware systems. We build special-purpose and general-purpose analog, digital, mixed-signal, and opto-electronic chips. We develop neural net algorithms that suit our applications and our hardware. For over 7 years we have been a leader in hardware neural nets. If you're interested, please send me a ONE PAGE summary of your qualifications in the above-mentioned fields, by e-mail (preferred), US mail, or FAX. Lab: 818-354-9513, FAX: 818-393-4540 e-mail: harry%neuron6 at jpl-mil.jpl.nasa.gov Harry Langenbacher JPL, Mail-Stop 302-231 4800 Oak Grove Dr Pasadena, CA 91109 USA  From harnad at Princeton.EDU Thu Mar 25 14:24:25 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 25 Mar 93 14:24:25 EST Subject: VISUAL STABILITY ACROSS SACCADIC EYE MOVEMENTS: BBS Call for Comm. Message-ID: <9303251924.AA20674@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by BRUCE BRIDGEMAN et al on VISUAL STABILITY ACROSS SACCADIC EYE MOVEMENTS that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ A THEORY OF VISUAL STABILITY ACROSS SACCADIC EYE MOVEMENTS Bruce Bridgeman Program in Experimental Psychology University of California Santa Cruz, CA 95064 A.H.C.
van der Heijden Department of Psychology Leiden University Wassenaarsweg 52 2333 AK Leiden, The Netherlands Boris M Velichkovsky Department for Psychology and Knowledge Engineering Moscow State University Moscow 103009, Russia KEYWORDS: space constancy, proprioception, efference copy, space perception, saccade, eye movement, modularity, visual stability. ABSTRACT: We identify two aspects of the problem of how there is perceptual stability despite an observer's eye movements. The first, visual direction constancy, is the (egocentric) stability of apparent positions of objects in the visual world relative to the perceiver. The second, visual position constancy, is the (exocentric) stability of positions of objects relative to each other. We analyze the constancy of visual direction despite saccadic eye movements. Three information sources have been proposed to enable the visual system to achieve stability: the structure of the visual field, proprioceptive inflow, and a copy of neural efference or outflow to the extraocular muscles. None of these sources by itself provides adequate information to achieve visual direction constancy; present evidence indicates that all three are used. Our final question concerns the information processing operations that result in a stable world. The three traditional solutions involve elimination, translation, and evaluation. All are rejected. From a review of the physiological and psychological evidence we conclude that no subtraction, compensation or evaluation need take place. The problem for which these solutions were developed turns out to be a false one. We propose a "calibration" solution: correct spatiotopic positions are calculated anew for each fixation. Inflow, outflow, and retinal sources are used in this calculation: saccadic suppression of displacement bridges the errors between these sources and the actual extent of movement. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.bridgeman). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.bridgeman When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. A more elaborate version of these instructions for the U.K. is available on request (thanks to Brian Josephson). ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). 
--------------------------------------------------------------  From harnad at Princeton.EDU Thu Mar 25 14:15:03 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 25 Mar 93 14:15:03 EST Subject: MOTOR INTENTION, IMAGERY AND REPRESENTATION: BBS Call for Commentators Message-ID: <9303251915.AA20557@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by MARC JEANNEROD, on MOTOR INTENTION, IMAGERY AND REPRESENTATION, that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ THE REPRESENTING BRAIN: NEURAL CORRELATES OF MOTOR INTENTION AND IMAGERY Marc Jeannerod Vision et Motricite INSERM Unite 94 16 avenue du Doyen Lepine 69500 Bron France KEYWORDS: affordances, goals, intention, motor imagery, motor schemata, neural codes, object manipulation, planning, posterior parietal cortex, premotor cortex, representation. ABSTRACT: This target article concerns how motor actions are neurally represented and coded. Action planning and motor preparation can be studied using motor imagery. A close functional equivalence between motor imagery and motor preparation is suggested by the positive effects of imagining movements on motor learning, the similarity between the neural structures involved, and the similar physiological correlates observed in both imagining and preparing. The content of motor representations can be inferred from motor images at a macroscopic level: from global aspects of the action (the duration and amount of effort involved) and from the motor rules and constraints which predict the spatial path and kinematics of movements. A microscopic neural account of the representation of object-oriented action is described. Object attributes are processed in different neural pathways depending on the kind of task the subject is performing. During object-oriented action, a pragmatic representation is activated in which object affordances are transformed into specific motor schemata independently of other tasks such as object recognition. Animal as well as clinical data implicate posterior parietal and premotor cortical areas in schema instantiation. A mechanism is proposed that is able to encode the desired goal of the action and is applicable to different levels of representational organization.
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.jeannerod). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.jeannerod When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. A more elaborate version of these instructions for the U.K. is available on request (thanks to Brian Josephson)> ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From rose at apple.com Fri Mar 26 13:29:10 1993 From: rose at apple.com (Dan Rose) Date: Fri, 26 Mar 1993 10:29:10 -0800 Subject: Job openings at Apple (permanent and summer intern) Message-ID: <9303261827.AA23745@taurus.apple.com> The Information Technology project in Apple's Advanced Technology Group is now hiring for one permanent position and two summer internships. One of the intern positions (#2 listed below) is specifically aimed at people with neural net experience; the other two may also be of interest to this audience. Note: E-mail submissions are STRONGLY preferred. ASCII files only, please. (More time unbinhexing, latexing, etc. means less time for us to read your resume!) Apple Computer has a corporate commitment to the principle of diversity. In that spirit, we welcome applications from all individuals. Women, minorities, veterans and disabled individuals are encouraged to apply. --------------------------- PERMANENT POSITION ------------------------- ENGINEER/SCIENTIST Job description: Join a team conducting research on new approaches to finding, sharing, organizing, and manipulating information for content-aware systems. Emphasis on implementation of experimental information and communication systems. Requires: MS in Computer Science or BS with equivalent experience with strong programming skills. Experience in information retrieval, hypertext, interface design, or related field. Preferred: Knowledge of Macintosh Toolbox, dynamic languages (LISP, Smalltalk, etc.), GUI programming. Familiarity with common text-indexing methods. E-mail resumes to infotech-recruit at apple.com, or send to InfoTech Recruiting c/o Nancy Massung Apple Computer, Inc. 
MS 301-4A One Infinite Loop Cupertino, CA 95014 ----------------------------- SUMMER POSITIONS ------------------------------ ENGINEER/SCIENTIST Intern (summer) #1 Job description: Work with senior researchers on the application of numerical methods to information retrieval (IR) systems. Assist on the design, implementation, user testing and performance evaluation of such systems. Requires: Graduate or upper division undergraduate student in computer science, cognitive science, information retrieval or other relevant program. Macintosh programming experience, the candidate should be able to write an application program. MPW C. Basic knowledge on numerical linear algebra. Preferred: Background on numerical methods and/or statistics. Smalltalk programming, familiarity with common text-indexing techniques. Some exposure to human-computer interaction issues. Knowledge on the following topics would be ideal: the vector model in IR, singular value decomposition and factor analysis. ENGINEER/SCIENTIST Intern (summer) #2 Job Description: Work with senior researchers to experiment with the use of neural network and other learning methods for information retrieval and organization. Requires: Graduate or upper division undergraduate student with experience in neural networks. Lisp programming with CLOS or other object system. Interest in information retrieval, hypertext, corpus linguistics, or related field. Preferred: Macintosh programming experience. Some exposure to human-computer interaction issues. Use of mapping techniques such as vector quantization or multidimensional scaling. Familiarity with common text-indexing methods. E-mail resumes to infotech-intern-recruit at apple.com, or send to InfoTech Internships c/o Nancy Massung Apple Computer, Inc. MS 301-4A One Infinite Loop Cupertino, CA 95014 Please indicate which position you are interested in.  From heiniw at sun1.eeb.ele.tue.nl Fri Mar 26 05:27:41 1993 From: heiniw at sun1.eeb.ele.tue.nl (Heini Withagen) Date: Fri, 26 Mar 1993 11:27:41 +0100 (MET) Subject: Paper on RBF-networks available Message-ID: <9303261027.AA01424@sun1.eeb.ele.tue.nl> A non-text attachment was scrubbed... Name: not available Type: text Size: 2588 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/244a65b7/attachment.ksh From rreilly at nova.ucd.ie Thu Mar 25 09:41:48 1993 From: rreilly at nova.ucd.ie (Ronan Reilly) Date: Thu, 25 Mar 1993 14:41:48 +0000 Subject: Two preprints available Message-ID: The following two papers, to be presented at the upcoming Cognitive Science conference in Boulder, have been placed in Jordan Pollack's Neuroprose archive. Details on how to retrieve the papers, which are in compressed postscript form, are given at the end of this message. The papers are short (six pages), as required for inclusion in the CogSci'93 proceedings. Longer versions of both papers are currently in preparation: ------------------------------------------------------------------------------- Boundary effects in the linguistic representations of simple recurrent networks Ronan Reilly Dept. of Computer Science University College Dublin Belfield, Dublin 4 Ireland Abstract This paper describes a number of simulations which show that SRN representations exhibit interactions between memory and sentence and clause boundaries reminiscent of effects described in the early psycholinguistic literature (Jarvella, 1971; Caplan, 1972). 
Moreover, these effects can be accounted for by the intrinsic properties of SRN representations without the need to invoke external memory mechanisms as has conventionally been done. ------------------------------------------------------------------------------- A Connectionist Attentional Shift Model of Eye-Movement Control in Reading Ronan Reilly Department of Computer Science University College Dublin Belfield, Dublin 4, Ireland Abstract A connectionist attentional-shift model of eye-movement control (CASMEC) in reading is described. The model provides an integrated account of a range of saccadic control effects found in reading, such as word-skipping, refixation, and of course normal saccadic progression. -------------------------------------------------------------------------------- FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get reilly.boundary.ps.Z ftp> get reilly.eyemove.ps.Z ftp> quit unix% zcat reilly.boundary.ps.Z | lpr -P unix% zcat reilly.eyemove.ps.Z | lpr -P ------------------------- Ronan Reilly Dept. of Computer Science University College Dublin Belfield, Dublin 4 IRELAND rreilly at ccvax.ucd.ie  From unni at neuro.cs.gmr.com Sat Mar 27 01:45:54 1993 From: unni at neuro.cs.gmr.com (K.P.Unnikrishnan) Date: Sat, 27 Mar 93 01:45:54 EST Subject: Preprint- Alopex: A corr. based learning alg. Message-ID: <9303270645.AA02269@neuro.cs.gmr.com> The following tech report is now available. For a hard copy, please send your surface mail address to venu at neuro.cs.gmr.com. Unnikrishnan ------------------------------------------------------------ Alopex: A Correlation-Based Learning Algorithm for Feed-Forward and Recurrent Neural Networks K. P. Unnikrishnan General Motors Research Laboratories and K. P. Venugopal Florida Atlantic University We present a learning algorithm for neural networks, called Alopex. Instead of error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about transfer functions of individual neurons, and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feed- forward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and it uses a `temperature' parameter in a manner similar to that in simulated annealing. A heuristic `annealing schedule' is presented which is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feed-forward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes and advantages of appropriate error measures are illustrated using a variety of problems.  
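For readers who would like to see the flavour of such a rule before the report arrives, the sketch below implements one plausible correlation-based stochastic update of the kind the abstract describes. It is an assumption-laden illustration rather than the algorithm from the report: the fixed step size delta, the logistic acceptance function, the running-average temperature heuristic, and the toy quadratic error are all choices made here for brevity.

# Sketch of a correlation-based stochastic weight update in the spirit of
# the abstract above (NOT the report's exact algorithm).  Every weight
# moves by +/-delta each iteration; a move is more likely to be repeated
# when it was anti-correlated with the change in the global error measure.
import numpy as np

rng = np.random.default_rng(0)

# Toy scalar error measure: any scalar error would do, which is the point
# of a correlation-based, gradient-free rule.
target = np.array([1.0, -2.0, 0.5])
def error(w):
    return float(np.sum((w - target) ** 2))

delta = 0.02                       # fixed step magnitude for every weight
T = 1.0                            # temperature, annealed below
w = np.zeros(3)
prev_w, prev_E = w.copy(), error(w)
w = w + delta * rng.choice([-1.0, 1.0], size=w.shape)   # first move: random

for n in range(3000):
    E = error(w)
    dw, dE = w - prev_w, E - prev_E          # previous weight and error changes
    C = dw * dE                              # per-weight correlation
    # Repeat a move with high probability if it was anti-correlated with
    # the error change, i.e. if it helped; otherwise tend to reverse it.
    p_repeat = 1.0 / (1.0 + np.exp(C / max(T, 1e-12)))
    repeat = rng.random(w.shape) < p_repeat
    direction = np.where(repeat, np.sign(dw), -np.sign(dw))
    prev_w, prev_E = w.copy(), E
    w = w + delta * direction                # all weights updated in parallel
    T = 0.9 * T + 0.1 * np.mean(np.abs(C))   # heuristic 'annealing schedule'

print("final weights:", np.round(w, 2), "  final error:", round(error(w), 4))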
From POCHEC%unb.ca at UNBMVS1.csd.unb.ca Tue Mar 30 14:15:51 1993 From: POCHEC%unb.ca at UNBMVS1.csd.unb.ca (POCHEC%unb.ca@UNBMVS1.csd.unb.ca) Date: Tue, 30 Mar 93 15:15:51 AST Subject: Call for Papers (revised deadline) Message-ID: ================================================================== ================================================================== Final Call for Participation The 5th UNB AI Symposium ********************************* * * * Theme: * * ARE WE MOVING AHEAD? * * * ********************************* August 11-14, 1993 Sheraton Inn, Fredericton New Brunswick Canada Advisory Committee ================== N. Ahuja, Univ.of Illinois, Urbana W. Bibel, ITH, Darmstadt D. Bobrow, Xerox PARC M. Fischler, SRI P. Gardenfors, Lund Univ. S. Grossberg, Boston Univ. J. Haton, CRIN T. Kanade, CMU R. Michalski, George Mason Univ. T. Poggio, MIT Z. Pylyshyn, Univ. of Western Ontario O. Selfridge, GTE Labs Y. Shirai, Osaka Univ. Program Committee ================= The international program committee will consist of approximately 40 members from all main fields of AI and from Cognitive Science. We invite researchers from the various areas of Artificial Intelligence, Cognitive Science and Pattern Recognition, including Vision, Learning, Knowledge Representation and Foundations, to submit articles which assess or review the progress made so far in their respective areas, as well as the relevance of that progress to the whole enterprise of AI. Other papers which do not address the theme are also invited. Feature ======= Four 70 minute invited talks and five panel discussions are devoted to the chosen topic: "Are we moving ahead: Lessons from Computer Vision." The speakers include (in alphabetical order) * Lev Goldfarb * Stephen Grossberg * Robert Haralick * Tomaso Poggio Such a concentrated analysis of the area will be undertaken for the first time. We feel that the "Lessons from Computer Vision" are of relevance to the entire AI community. Information for Authors ======================= Now: Fill out the form below and email it. --- April 10, 1993: -------------- Four copies of an extended abstract (maximum of 4 pages including references) should be sent to the conference chair. May 15, 1993: ------------- Notification of acceptance will be mailed. July 1, 1993: ------------- Camera-ready copy of paper is due. Conference Chair: Lev Goldfarb Email: goldfarb at unb.ca Mailing address: Faculty of Computer Science University of New Brunswick P. O. Box 4400 Fredericton, New Brunswick Canada E3B 5A3 Phone: (506) 453-4566 FAX: (506) 453-3566 Symposium location The symposium will be held in the Sheraton Inn, Fredericton which overlooks the beautiful Saint John River. IMMEDIATE REPLY FORM ==================== (please email to goldfarb at unb.ca) I would like to submit a paper. Title: _____________________________________ _____________________________________ _____________________________________ I would like to organize a session. 
Title: _____________________________________
       _____________________________________
       _____________________________________

Name: _____________________________________
      _____________________________________

Department: _____________________________________

University/Company: _____________________________________
                    _____________________________________
                    _____________________________________

Address: _____________________________________
         _____________________________________
         _____________________________________

Prov/State: _____________________________________

Country: _____________________________________

Telephone: _____________________________________

Email: _____________________________________

Fax: _____________________________________

From FEHLAUER at msscc.med.utah.edu Tue Mar 30 18:55:00 1993
From: FEHLAUER at msscc.med.utah.edu (FEHLAUER@msscc.med.utah.edu)
Date: Tue, 30 Mar 93 16:55 MST
Subject: Research and Clinical Positions at Univ. Utah and VA GRECC
Message-ID: <5B5B5A7A317F007AA3@msscc.med.utah.edu>

Colleagues, the following announcement represents an exciting opportunity to participate in a well-funded, multidisciplinary research and clinical program. Please feel free to contact me with questions.

Steve Fehlauer, M.D.
Research Investigator, SLC VAMC GRECC
Assistant Professor of Medicine, University of Utah School of Medicine
VA Phone 801-582-1565 Ext 2468
Univ Phone 801-581-2628
E-Mail Fehlauer at msscc.med.utah.edu

*****************************************************************************
Geriatric Internal Medicine

The Salt Lake City Geriatric Research, Education and Clinical Center (GRECC) and the University of Utah School of Medicine are recruiting individuals to join the faculty of the GRECC/University program in Geriatric Internal Medicine. Candidates must be BE/BC in Internal Medicine and Geriatrics. Facilities include outpatient clinics, an inpatient Geriatric Evaluation and Management Unit, an outpatient Geriatric Medicine/Psychiatry Program, "wet labs" with capabilities in cell and bone marrow culture, cell signalling and molecular biology, and computation facilities including Unix RISC workstations, pen-based clinical computers, PC workstations and a LAN. Interdepartmental collaborative research is performed in Medical Computation and Modelling (artificial neural networks, expert systems, fuzzy logic, and semantic networks), real-time clinical decision support, nursing and medical information system design, computer-assisted medical education, biology of aging, cytokines and immunity during aging, aerobic exercise and cognition during aging, and cellular neuroscience. Low cost of living, excellent recreation and arts abound in Salt Lake City. Appointments will be in the SLC GRECC and the University of Utah Division of Human Development and Aging. Faculty rank dependent upon qualifications.

Send curriculum vitae to:
Personnel Service (05)
Attn: Pruett
VA Medical Center
Salt Lake City, UT 84148

For more information, call: (801) 582-1565 Ext. 2475

The Department of Veterans Affairs and the University of Utah are Affirmative Action / Equal Opportunity Employers
******************************************************************************
From prechelt at ira.uka.de Mon Mar 1 09:53:53 1993
From: prechelt at ira.uka.de (prechelt@ira.uka.de)
Date: Mon, 01 Mar 93 15:53:53 +0100
Subject: Putting a NN on 16k processors
In-Reply-To: Your message of Tue, 23 Feb 93 18:56:11 +0000. <9302231856.AA22594@cato.robots.ox.ac.uk>
Message-ID:

> Subject: Re: Squashing functions (continued)
..
> Concerning memory requirements (eg, MasPar MP1). I don't see why I need 4 bytes
..
> Apart of that, seems to be quite complicated to put a nn on 16K processors ...
> how do you do that ?

1. Use the Nettalk-architecture which consumes up to two processors per connection.
2. Additionally, replicate the net and train on several examples at once.
3. Don't use TOO small nets and training sets.
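Point (2) is easiest to picture if the arithmetic is written in the batch, matrix form that SIMD hardware parallelises naturally: every example in a batch goes through the same matrix products at once, and each element of those products can in principle be handed to its own processor. The sketch below shows one such batched backprop step for a one-hidden-layer sigmoid net; plain numpy stands in for the parallel hardware, and the sizes and names are invented for illustration rather than taken from this thread.

  import numpy as np

  def batch_backprop_step(W1, W2, X, Y, lr=0.5):
      """One gradient step for a 1-hidden-layer sigmoid net, with the whole
      batch of examples processed at once as matrix operations."""
      sig = lambda a: 1.0 / (1.0 + np.exp(-a))
      H = sig(X @ W1)                   # hidden activations, (n_examples, n_hidden)
      O = sig(H @ W2)                   # outputs,            (n_examples, n_out)
      dO = (O - Y) * O * (1.0 - O)      # output deltas for every example at once
      dH = (dO @ W2.T) * H * (1.0 - H)  # hidden deltas for every example at once
      W2 -= lr * H.T @ dO               # the weight updates are again matrix products
      W1 -= lr * X.T @ dH
      return W1, W2

  # toy usage: several examples trained at once on a small random mapping
  rng = np.random.default_rng(0)
  X = rng.random((8, 5)); Y = rng.random((8, 2))
  W1 = 0.1 * rng.normal(size=(5, 10)); W2 = 0.1 * rng.normal(size=(10, 2))
  for _ in range(1000):
      W1, W2 = batch_backprop_step(W1, W2, X, Y)

Replicating the net (point 2) then amounts to giving each processor, or group of processors, its own slice of the example dimension of these matrices.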
I am currently working on a neural algorithm programming language and an optimizing compiler for it that use exactly these techniques.

  Lutz

Lutz Prechelt (email: prechelt at ira.uka.de)             | Whenever you
Institut fuer Programmstrukturen und Datenorganisation   | complicate things,
Universitaet Karlsruhe; D-7500 Karlsruhe 1; Germany      | they get
(Voice: ++49/721/608-4068, FAX: ++49/721/694092)         | less simple.

From port at cs.indiana.edu Mon Mar 1 22:18:20 1993
From: port at cs.indiana.edu (Robert Port)
Date: Mon, 1 Mar 1993 22:18:20 -0500
Subject: mspt at neuroprose on schemes of composition
Message-ID:

Recently posted to neuroprose:

`Beyond Symbolic: Prolegomena to a Kama-Sutra of Compositionality'

by Timothy van Gelder and Robert Port
Cognitive Science Program, Indiana University, Bloomington, Indiana, 47405
(tgelder at indiana.edu, port at indiana.edu)

Compositionality, in the sense intended here, refers to the way that a complex representation is physically constructed of its parts. Most representations used in cognitive science exhibit variations of one specific kind of compositionality, that which is exhibited by printed sentences and LISP data structures. There are, however, many other kinds. In this paper, we describe six important dimensions for describing compositionality schemes: first, their TOKENS may be (1) static or dynamic, (2) analog or digital, and (3) arbitrary or not. Then, their COMBINATION may be (4) by concatenation or not, (5) static or temporal (that is, in-time), and may involve (6) strict syntactic conformity or not. These dimensions define a huge space of possible kinds of compositionality. Many of these kinds may turn out to be useful in modeling human cognition. In our discussion, we highlight a particular variety of compositionality employing what we call `dynamic representations'. These representations differ from traditional `symbolic' representations along all six of the critical dimensions. In general, we urge cognitive scientists to be more adventurous in exploring the variety of exotic techniques that are available for composing complex structures from simple ones.

The postscript file for this paper (17 pages including 3 figures) may be obtained by ftp from the neuroprose archive at Ohio State University, maintained by the proud new father of Dylan Seth Pollack, named Jordan. The file is: vangelder.kamasutra.ps.Z

To pick it up, do:
unix> ftp archive.cis.ohio-state.edu
(login as user 'anonymous' using your e-mail address as the password)
ftp> binary
ftp> cd pub/neuroprose
ftp> get vangelder.kamasutra.ps.Z
ftp> quit
unix> uncompress vangelder.kamasutra.ps.Z
unix> lpr vangelder.kamasutra.ps     (for a postscript printer)

From tedwards at wam.umd.edu Tue Mar 2 01:43:37 1993
From: tedwards at wam.umd.edu (Technoshaman Tom)
Date: Tue, 2 Mar 1993 01:43:37 -0500 (EST)
Subject: Putting a NN on 16k processors
In-Reply-To: <9303012333.AA16677@zippy.cs.UMD.EDU>
Message-ID:

On Mon, 1 Mar 1993 prechelt at ira.uka.de wrote:
> > Apart of that, seems to be quite complicated to put a nn on 16K processors ...
> > how do you do that ?
> 1. Use the Nettalk-architecture which consumes up to two processors per
> connection.
> 2. Additionally, replicate the net and train on several examples at once.
> 3. Don't use TOO small nets and training sets.

The method you use for a parallel NN depends on the size of your nets, the size of your training set, and restrictions on training mechanisms.
One fairly obvious method, which is easy to code in a high-level language, is batch training described by linear algebra equations (matrix multiplication, transposition, addition, etc.). However, in many situations the size of your net will be small enough to replicate the net on each processor, since you will usually have more examples to train on relative to the size of the net.

-Thomas Edwards

From marwan at sedal.su.OZ.AU Tue Mar 2 09:14:36 1993
From: marwan at sedal.su.OZ.AU (Marwan Jabri)
Date: Wed, 3 Mar 1993 01:14:36 +1100
Subject: Beta sites for MUME
Message-ID: <9303021414.AA14590@sedal.sedal.su.OZ.AU>

We are seeking university beta sites for testing a new release (0.6) of a multi-net, multi-algorithm connectionist simulator (MUME) for the following platforms:

- HPs 9000/700
- SGIs
- DEC Alphas
- PC DOS (with DJGCC)

If interested, please send name, email, affiliation, address and fax number to my email address below. Note that starting with release 0.6, MUME (including source code) will be made available to universities through FTP, but following the signature of a license protecting the University of Sydney and the authors.

Marwan
-------------------------------------------------------------------
Marwan Jabri                              Email: marwan at sedal.su.oz.au
Senior Lecturer                           Tel: (+61-2) 692-2240
SEDAL, Electrical Engineering             Fax: 660-1228
Sydney University, NSW 2006, Australia    Mobile: (+61-18) 259-086

From reggia at cs.UMD.EDU Tue Mar 2 09:31:31 1993
From: reggia at cs.UMD.EDU (James A. Reggia)
Date: Tue, 2 Mar 93 09:31:31 -0500
Subject: New Postdoc Position in Neural Modelling
Message-ID: <9303021431.AA27194@avion.cs.UMD.EDU>

Post-Doctoral Position in Neural Modelling

A new post-doctoral position in computational neuroscience will be available at the University of Maryland, College Park, MD starting between June 1 and Sept. 1, 1993. This research position will center on modelling neocortical self-organization and plasticity. Requirements are a PhD in computer science, neuroscience, applied math, or a related area by the time the position starts, experience with neural modelling, and familiarity with the language C. The position will last one or (preferably) two years. The University of Maryland campus is located just outside of Washington, DC.

If you would like to be considered for this position, please send via regular mail services a cover letter expressing your interest and desired starting date, a copy of your cv, the names of two possible references (with their address, phone number, fax number, and email address), and any other information you feel would be relevant to

James Reggia
Dept. of Computer Science
A. V. Williams Bldg.
University of Maryland
College Park, MD 20742

or send this information via FAX at (301)405-6707. (Applications will NOT be accepted via email.) Closing date for receipt of applications is March 26, 1993. If you have questions about the position please send email to reggia at cs.umd.edu .

From rswiniar%saturn at sdsu.edu Tue Mar 2 12:34:39 1993
From: rswiniar%saturn at sdsu.edu (Dr. Roman Swiniarski)
Date: Tue, 2 Mar 93 09:34:39 PST
Subject: Neural, fuzzy, rough systems
Message-ID: <9303021734.AA02913@saturn.SDSU.EDU>

Dear Madam/Sir/Professor, I would like to provide information about the short course below. We will be very happy to introduce the distinguished world-class scientists and our friends: Professor L.K. Hansen, Professor W. Pedrycz and Professor A. Skowron.
Best regards,
Roman Swiniarski

------------------------------------------------------------------------------
NEURAL NETWORKS. FUZZY AND ROUGH SYSTEMS. THEORY AND APPLICATIONS.

Friday, April 2, 1993, room BAM 341

A short course sponsored by the Interdisciplinary Research Center for Scientific Modeling and Computation at the Department of Mathematical Sciences, San Diego State University.

8:15-11:30    Professor L. K. Hansen, Technical University of Denmark, Denmark
              1. Introduction to neural networks.
              2. Neural Networks for Signal Processing, Prediction and Image Processing.
11:30-1:00 pm R. Swiniarski, San Diego State University
              1. Application of neural networks to systems, adaptive control, and genetics.
Break
2:00-4:00     Professor W. Pedrycz, University of Manitoba, Canada
              1. Introduction to Fuzzy Sets.
              2. Application of Fuzzy Sets:
                 - knowledge-based computations and logic-oriented neurocomputing
                 - fuzzy modeling
                 - models of fuzzy reasoning
4:00-6:30     Professor A. Skowron, University of Warsaw, Poland
              1. Introduction to Rough Sets and Decision Systems.
              2. Applications of Rough Sets and Decision Systems.
              3. Neural Networks, Fuzzy Systems, Rough Sets and Evidence Theory.

There will be an $80 (students $40) preregistration fee. To register, please send your name and affiliation along with a check to:

Interdisciplinary Research Center
Department of Mathematical Sciences
San Diego State University
San Diego, California 92182-0314, U.S.A.

The check should be made out to SDSU Interdisciplinary Research Center. The registration fee after March 18 will be $100. The number of participants is limited. Should you need further information, please contact Roman Swiniarski (619) 594-5538 rswiniar at saturn.sdsu.edu or Jose Castillo (619) 594-7205 castillo at math.sdsu.edu. You are cordially invited to participate in the short course.

From tsung at cs.ucsd.edu Tue Mar 2 15:12:27 1993
From: tsung at cs.ucsd.edu (Fu-Sheng Tsung)
Date: Tue, 2 Mar 93 12:12:27 -0800
Subject: paper available: learning attractors
Message-ID: <9303022012.AA26472@roland>

The following paper has been placed in the Neuroprose archive. Comments and questions are welcome.

*******************************************************************
Phase-Space Learning for Recurrent Networks

Fu-Sheng Tsung and Garrison W. Cottrell
0114 CSE
UC San Diego
La Jolla, CA 92093

Abstract: We study the problem of learning nonstatic attractors in recurrent networks. With concepts from dynamical systems theory, we show that this problem can be reduced to three sub-problems: (a) embedding the temporal trajectory in phase space, (b) approximating the local vector field, and (c) function approximation using feedforward networks. This general framework overcomes problems with traditional methods by providing more appropriate error gradients and enforcing stability explicitly. We describe an online version of our method, called ARTISTE, that can learn periodic attractors without teacher forcing.
*******************************************************************

Thanks to Jordan Pollack for providing this service, despite being the father of a new baby boy!
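The recipe in the abstract can be mimicked in a few lines: delay-embed the observed trajectory into phase-space points, fit any feedforward approximator to the map from each point to the next, and then iterate the fitted map from a seed point to see whether it reproduces the attractor on its own. In the toy sketch below a linear least-squares fit stands in for the feedforward network, and the signal, embedding dimension and names are placeholders chosen here for illustration, not details of the paper.

  import numpy as np

  def embed(series, dim):
      """Delay-embed a scalar series into dim-dimensional phase-space points,
      returning (current state, next state) training pairs."""
      pts = np.array([series[i:i + dim] for i in range(len(series) - dim)])
      return pts[:-1], pts[1:]

  t = np.linspace(0, 20 * np.pi, 2000)
  series = np.sin(t)                           # a periodic trajectory to learn

  X, Y = embed(series, dim=3)
  W = np.linalg.lstsq(X, Y, rcond=None)[0]     # stand-in for the feedforward map

  state, prediction = X[0], []
  for _ in range(500):                         # iterate the learned map freely
      state = state @ W
      prediction.append(state[-1])             # newest coordinate = next value

Because the map is fitted as an ordinary input-output regression in phase space, iterating it does not rely on teacher forcing, which is in the spirit of the abstract's closing remark.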
---------------------------------------------------------------- FTP INSTRUCTIONS "Getps tsung.phase.ps.Z" if you have the shell script, or unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get tsung.phase.ps.Z ftp> bye unix% zcat tsung.phase.ps.Z | lpr (the paper is 18 pages) ----------------------------------------------------------------  From gary at cs.ucsd.edu Tue Mar 2 20:44:04 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Tue, 2 Mar 93 17:44:04 -0800 Subject: Seminar announcement Message-ID: <9303030144.AA21268@odin.ucsd.edu> SEMINAR Cascading Maps: Reorganization of Cortex in the Dog Garrison W. Cottrell Department of Dog Science Southern California Condominium College "My dog is an old dog. A major difference between owning a young dog and owning an old dog is walks. Young dog walks exercise the master; old dog walks exercise the master's patience. The major reason for this is _s_n_i_f_f_u_l_a _a_d _n_a_u_s_e_u_o_s_o, Latin for "this dog can't stop sniffing stuff". Lest you think I am talking about something illegal, this dog just will not stop sniffing a bush, a tree, or even a bare spot on the sidewalk. You could be there for 10 minutes and he wouldn't be done yet." --Professor Ebeneezer Huggenbotham The old dog sniffing problem (ODSP), or "Huggenbotham's tenth problem", has provided a rich source of data for recent advances in connectionist dog modeling. Ever since McDonnell & Pew's (McD & P) "Brain state in a Ball"[1] model of the dog olfactory bulb as an multi-state attractor, a debate has sprung up around the issue of whether a connectionist net can actually exhibit dog-like behavior, or whether you needed to be a dog to possess dog-like behavior (McDonnell & Pew, 1986; Peepee, 1988; Pluckit & Walkman, 1989). McD & P's model hypothesizes that since the brain state is in a ball, it can't ever get stuck in a corner, so it just wanders the surface of the sphere[2]. Thus the network never habituates to incoming signals. Peepee's critique of McD & P's model was that since McD & P's model only contained 3 units, it could never represent the variety of smells available to old dogs. Pluckit & Walkman showed that indeed, there were an infinite number of points on a hypersphere, so anything was representable. Further, they showed that If one assumed the models had started with many more units in a hypercube, and lost them decrementally, converging on a 3-D sphere, that one could account for many of the _d_e_g_r_a_d_a_t_i_o_n_a_l _a_s_p_e_c_t_s of the old dog's mind. While these models accounted for many of the psychological findings, the present paper seeks to integrate recent neurophysiological findings into a new understanding of old dog behavior. One of the most striking phenomena found today in the cortical map literature is the amazingly fast reorganization of cortical maps. In monkeys whose fingers have been severed, the somatosensory map reorganizes to represent the other fingers more than before[3]. Fortunately for dogs, they do not have fingers. Fortunately for monkeys, it is also found that the map will reorganize without vivisection. If the monkey is simply required to use a particular fingertip for some task, the map will allocate more space to that fingertip. The surprising thing about this cortical reorganization is that it is 1) fast, happening over hours or days and 2) present in adults[4]. This suggests that our cortical maps may be constantly reorganizing. 
Furthermore, since this appears to be a Hebbian-based reorganization, dependent upon activity, other maps connected to this one should also reorganize. That is, reorganization will not be confined to somatosensory maps, but will _c_a_s_c_a_d_e to other areas. These observations suggest a new theory about representation in the elder dog's cortex. As we all know, _s_m_e_l_l is the sense most associated with memory. Since input from the eyes and ears degrades with age, the olfactory input will begin to dominate brain activity in the older dog. The visual and auditory maps will reorganize to respond to smell. They will not of course, _r_e_p_r_e_s_e_n_t smell, but will be driven more by smell than by eyes because of activity dependent remapping. This will cause re-activation of vivid scenes associated with those smells. Hence, this suggests that the reason older dogs spend an order of magnitude greater time than a younger dog sniffing the same spot is that they are _r_e_m_i_n_i_s_c_i_n_g. ____________________ [1]Unlike Anderson's model of human cortex as a "Brain State in a Box", the states of the network are not allowed to extend outside of a _h_y_p_e_r_s_p_h_e_r_e. This explains why hu- mans are smarter than dogs: Humans can reach the _c_o_r_n_e_r_s of the hypercube. [2]Recently, more statistically based models have argued that the Kullback-Leibler information transmitted by an old nose was on the order of 1 bit per second (Chapel, 93), sug- gesting the behavior is entirely a peripheral deficit. Ex- perimental 1200 baud "nodem"'s are being implanted in several dogs as a possible cure. [3]This line of research suggests that some scientists did not pull enough legs off of spiders when they were younger. [4]This also suggests that modern American adult males, whose somatosensory maps overrepresent certain areas, are capable of change.  From dayan at helmholtz.sdsc.edu Wed Mar 3 20:42:24 1993 From: dayan at helmholtz.sdsc.edu (Peter Dayan) Date: Wed, 3 Mar 93 17:42:24 PST Subject: Paper : Convergence of TD(lambda) Message-ID: <9303040142.AA16154@helmholtz.sdsc.edu> A postscript version of the following paper has been placed in the neuroprose archive. It has been submitted to Machine Learning, and comments/questions/refutations are eagerly solicited. Hard-copies are not available. ***************************************************************** TD(lambda) Converges with Probability 1 Peter Dayan and Terrence J Sejnowski CNL, The Salk Institute 10010 North Torrey Pines Road La Jolla, CA 92037 The methods of temporal differences allow agents to learn accurate predictions about stationary stochastic future outcomes. The learning is effectively stochastic approximation based on samples extracted from the process generating an agent's future. Sutton has proved that for a special case of temporal differences, the expected values of the predictions converge to their correct values, as larger samples are taken, and this proof has been extended to the case of general lambda. This paper proves the stronger result that the predictions of a slightly modified form of temporal difference learning converge with probability one, and shows how to quantify the rate of convergence. 
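For readers who have not seen the algorithm being analysed, the following is the standard tabular TD(lambda) prediction update with accumulating eligibility traces, given here only as an illustration; the slightly modified form whose convergence the paper actually proves may differ in detail, and the environment and names below are invented for the example.

  import numpy as np

  def td_lambda(episodes, n_states, alpha=0.1, gamma=1.0, lam=0.8):
      """Tabular TD(lambda) with accumulating eligibility traces.
      Each episode is a list of (state, reward, next_state, terminal) steps."""
      V = np.zeros(n_states)
      for episode in episodes:
          e = np.zeros(n_states)                    # eligibility traces, reset per episode
          for s, r, s_next, terminal in episode:
              target = r + (0.0 if terminal else gamma * V[s_next])
              delta = target - V[s]                 # temporal-difference error
              e *= gamma * lam                      # decay every trace
              e[s] += 1.0                           # bump the state just visited
              V += alpha * delta * e                # all traced states share the correction
      return V

  # toy usage: 5-state random walk, reward 1 for falling off the right end
  rng = np.random.default_rng(0)
  def random_walk_episode():
      s, steps = 2, []
      while True:
          s2 = s + rng.choice([-1, 1])
          if s2 < 0:
              steps.append((s, 0.0, 0, True)); return steps
          if s2 > 4:
              steps.append((s, 1.0, 0, True)); return steps
          steps.append((s, 0.0, s2, False)); s = s2
  V = td_lambda([random_walk_episode() for _ in range(2000)], n_states=5)

The resulting V approximates the probability of terminating on the right from each state, which is the kind of stationary stochastic prediction the abstract refers to.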
***************************************************************** ---------------------------------------------------------------- FTP INSTRUCTIONS "Getps dayan.tdl.ps.Z" if you have the shell script, or unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get dayan.tdl.ps.Z ftp> bye unix% zcat dayan.tdl.ps.Z | lpr ----------------------------------------------------------------  From mb at tce.ing.uniroma1.it Thu Mar 4 06:55:16 1993 From: mb at tce.ing.uniroma1.it (mb@tce.ing.uniroma1.it) Date: Thu, 4 Mar 1993 12:55:16 +0100 Subject: Economic forecasting by NN Message-ID: <9303041155.AA06237@tce.ing.uniroma1.it> Concerning requests of references on the topic of economic forecasting using neural networks, many papers may be found in the proceedings of the workshop on Parallel Applications in Statistics and Economics (PASE'92), held in Prague, Czechoslovakia, December 7-8,1992. Thay have been published in a special issue of Neural Network World (vol.2, no.6, 1992). This journal is published in Prague, by the Institute of Computer and Information Science of the Czechoslovak Academy of Sciences, edited by Prof. M. Novak. Their e-mail is CVS15 at CSPGCS11.BITNET Have a good reading! Marco Balsi Dipartimento di Ingegneria Elettronica, Universita' di Roma "La Sapienza" mb at tce.ing.uniroma1.it  From george at psychmips.york.ac.uk Thu Mar 4 11:19:37 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Thu, 4 Mar 93 16:19:37 +0000 (GMT) Subject: Fault Tolerance in ANN's - thesis Message-ID: Thesis Title "Fault Tolerance in Artificial Neural Networks" ------------ by George Bolt, University of York Available via ftp, instructions at end of abstract. Abstract: This thesis has examined the resilience of artificial neural networks to the effect of faults. In particular, it addressed the question of whether neural networks are inherently fault tolerant. Neural networks were visualised from an abstract functional level rather than a physical implementation level to allow their computational fault tolerance to be assessed. This high-level approach required a methodology to be developed for the construction of fault models. Instead of abstracting the effects of physical defects, the system itself was abstracted and fault modes extracted from this description. Requirements for suitable measures to assess a neural network's reliability in the presence of faults were given, and general measures constructed. Also, simulation frameworks were evolved which could allow comparative studies to be made between different architectures and models. It was found that a major influence on the reliability of neural networks is the uniform distribution of information. Critical faults may cause failure for certain regions of input space without this property. This lead to new techniques being developed which ensure uniform storage. It was shown that the basic perceptron unit possesses a degree of fault tolerance related to the characteristics of its input data. This implied that complex perceptron based neural networks can be inherently fault tolerant given suitable training algorithms. However, it was then shown that back-error propagation for multi-layer perceptron networks (MLP's) does not produce a suitable weight configuration. A technique involving the injection of transient faults during back-error propagation training of MLP's was studied. 
The computational factor in the resulting MLP's causing their resilience to faults was then identified. This lead to a much simpler construction method which does not involve lengthy training times. It was then shown why the conventional back-error propagation algorithm does not produce fault tolerant MLP's. It was concluded that a potential for inherent fault tolerance does exist in neural network architectures, but it is not exploited by current training algorithms. $ ftp minster.york.ac.uk Connected to minster.york.ac.uk. 220 minster.york.ac.uk FTP server (York Tue Aug 25 11:09:10 BST 1992). Name (minster.york.ac.uk:root): anonymous 331 Guest login ok, send email address as password. Password: < insert your email address here > 230 Guest login ok, access restrictions apply. ftp> cd reports 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get YCST-93.zoo 200 PORT command successful. 150 Opening BINARY mode data connection for YCST-93.zoo (1041762 bytes). 226 Transfer complete. local: YCST-93.zoo remote: YCST-93.zoo 1041762 bytes received in 19 seconds (55 Kbytes/s) ftp> quit 221 Goodbye $ zoo -extract YCST-93.zoo * $ printout A version of "zoo" compiled for Sun 3's is also available in this directory, just enter command "get zoo" before quitting ftp. If you have any problems, please contact me via email. - George Bolt email: george at psychmips.york.ac.uk smail: Dept. of Psychology University of York Heslington, York YO1 5DD U.K. tel: +904-433155 fax: +904-433181  From dlukas at PARK.BU.EDU Thu Mar 4 16:49:13 1993 From: dlukas at PARK.BU.EDU (dlukas@PARK.BU.EDU) Date: Thu, 4 Mar 93 16:49:13 -0500 Subject: World Conference on Neural Networks '93 (July 11-15, Portland, Oregon, USA) Message-ID: <9303042149.AA26736@retina.bu.edu> WORLD CONGRESS ON NEURAL NETWORKS 1993 Annual Meeting of the International Neural Network Society July 11-15, 1993, Portland, Oregon WCNN'93 is the largest and most inter-disciplinary forum in the neural network field today. COOPERATING SOCIETIES: American Association for Artificial Cognitive Science Society Intelligence European Neural Network Society American Mathematical Society IEEE Computer Society American Physical Society IEEE Neural Networks Council American Psychological Society International Fuzzy Systems Association Association for Behavior Analysis Japanese Neural Network Society Classification Society of North Society for Mathematical Biology America Society for Mathematical Psychology Society of Manufacturing Engineers PLENARY SPEAKERS INCLUDE: Stephen Grossberg, 3-D Vision and Figure-Ground Pop-Out Bart Kosko, Neural Fuzzy Systems Carver Mead, Real-Time On-Chip Learning in Analog VLSI Networks Kumpati Narendra, Intelligent Control Using Neural Networks Wolf Singer, Coherence as an Organizing Principle of Cortical Function TUTORIALS INCLUDE: Gail Carpenter, Adaptive Resonance Theory Robert Desimone, Cognitive Neuroscience Walter Freeman, Neurobiology and Chaos Robert Hecht-Nielsen, Practical Applications of Neural Network Theory Michael Kuperstein, Neural Control and Robotics S.Y.Kung, Structural and Mathematical Approaches to Signal Processes V.S. Ramachandran, Biological Vision David Rumelhart, Cognitive Science Eric Schwartz, Neural Computation and VLSI Fred Watkins, Neural Fuzzy Systems Hal White, Supervised Learning INVITED SPEAKERS INCLUDE: James A. Anderson, Programming in Associative Memory Gail A. Carpenter, Adaptive Resonance Theory: Recent Research and Applications Michael A. 
Cohen, Recent Results in Neural Models of Speech and Language Perception and Recognition Judith E. Dayhoff, Applications of Temporal and Molecular Structures in Neural Systems Walter Daugherty, A Partially Self-Training System for the Protein Folding Problem Kunihiko Fukushima, Improvement of the Neocognitron and the Selective Attention Model Armin Fuchs, Brain Signals during Qualitative Changes in Patterns of Coordinated Movements Stephen Grossberg, Learning, Recognition, Reinforcement, Attention, and Timing in a Thalamo-Cortico-Hippocampal Model Dan Hammerstrom, Whither Electronic Neurocomputing? R. Hecht-Nielsen, Towards a General Theory of Data Compression James C. Houk, Spatiotemporal Patterns of Activity in an In Vitro Recurrent Network Mitsuo Kawato, Existence of an Inverse Dynamics Model in the Cerebellum Teuvo Kohonen, Boosting the Computing Power in Pattern Recognition by Unconventional Architectures S.Y. Kung, On Training Temporal Neural Networks Michael Kuperstein, Neural Controller for Catching Moving Objects in 3-D Daniel Levine, A Gated Dipole Architecture for Multi-Drive, Multi- Attribute Decision Making Erkki Oja, Nonlinear PCA: Algorithms and Applications Michael P. Perrone, Learning from what's been Learned: Supervised Learning in Multi-Neural Network Systems Michael T. Posner, Tracing Network Processes in Real Time with Scalp Electrodes Robert Sekuler, Perception of Motion: How the Brain Manages Those Thousand Points of Light John G. Taylor, M Forms of Memory Thomas P. Vogl, From Electrophysiology to a Stable Associative Learning Algorithm Allen Waxman, Rats, Robots, Monkeys and Missiles: Neural Pathways in Robot Intelligence Paul J. Werbos, Supervised Learning: Can We Escape from its Local Optimum? Bernard Widrow, Adaptive Signal Processing Shuji Yoshizawa, Dynamics and Capacity of Neural Models of Associative Memory Hussein Youssef, Comparison of Several Neural Networks in Nonlinear Dynamic System Modeling Lotfi A. Zadeh, Soft Computing, Fuzzy Logic and the Calculus of Fuzzy Graphs GENERAL CHAIR: George G. Lendaris MAIN PROGRAM CHAIRS: Stephen Grossberg and Bart Kosko SME/INNS TRACK PROGRAM CHAIRS: Kenneth Marko and Bernard Widrow IFSA/INNS TRACK PROGRAM CHAIRS: Ronald Yager and Paul Werbos COOPERATING SOCIETIES CHAIR: Mark Kon INNS OFFICERS: President: Harold Szu President-Elect: Walter Freeman Past President: Paul Werbos Executive Director: Morgan Downey BOARD OF GOVERNORS: Shun-ichi Amari Richard Andersen James A. Anderson Andrew Barto Gail Carpenter Leon Cooper Judith Dayhoff Kunihiko Fukushima Lee Giles Stephen Grossberg Mitsuo Kawato Christof Koch Teuvo Kohonen Bart Kosko C. von der Malsburg David Rumelhart John Taylor Bernard Widrow Lotfi Zadeh FOR REGISTRATION AND ADDITIONAL INFORMATION PLEASE CONTACT: WCNN'93 Talley Management Group 875 Kings Highway, Suite 200 West Deptford, NJ 08096 Tel: (609) 845-1720 FAX: (609) 853-0411 e-mail: registration at wcnn93.ee.pdx.edu Please do not reply to this account. Please use the telephone number, fax number, U.S. Mail address, or email address listed above.  From doya at crayfish.UCSD.EDU Thu Mar 4 22:24:18 1993 From: doya at crayfish.UCSD.EDU (Kenji Doya) Date: Thu, 4 Mar 93 19:24:18 PST Subject: three papers on recurrent networks Message-ID: <9303050324.AA11596@crayfish.UCSD.EDU> The following manuscripts have been placed in the Neuroprose archive. 
doya.universality.ps.Z: Universality of Fully-Connected Recurrent Neural Networks doya.bifurcation2.ps.Z: Bifurcations of Recurrent Neural Networks in Gradient Descent Learning doya.dimension.ps.Z: Dimension Reduction of Biological Neuron Models by Artificial Neural Networks Please find the abstracts and retrieving instructions below. Comments and questions are welcome. Also, some of the figures in my older papers doya.bifurcation.ps.Z (1992 IEEE Symp. on Circuits and Systems) doya.synchronization.ps.Z (NIPS 4) have been replaced. They caused printer errors in several sites. Thanks to Jordan Pollack for maintaining this fast growing archive. Kenji Doya Department of Biology, University of California, San Diego La Jolla, CA 92093-0322, USA Phone: (619)534-3954/5548 Fax: (619)534-0301 **************************************************************** Universality of Fully-Connected Recurrent Neural Networks Kenji Doya, UCSD From george at psychmips.york.ac.uk Fri Mar 5 05:37:14 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Fri, 5 Mar 93 10:37:14 +0000 (GMT) Subject: Fault Tolerance in ANN's - thesis Message-ID: Since some people have been unable to uncompress the postscript files, I have changed the method by which they are stored. In directory "reports" the thesis postscript files are held in a tar file "YCST-93.tar". The contents are also compressed using the (hopefully) standard Unix compress routine. The tar file is just under a megabyte in size. Ftp instructions: $ ftp minster.york.ac.uk Connected to minster.york.ac.uk. 220 minster.york.ac.uk FTP server (York Tue Aug 25 11:09:10 BST 1992). Name (minster.york.ac.uk:root): anonymous 331 Guest login ok, send email address as password. Password: < insert your email address here > 230 Guest login ok, access restrictions apply. ftp> cd reports 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get YCST-93.tar 200 PORT command successful. 150 Opening BINARY mode data connection for YCST-93.zoo (1041762 bytes). 226 Transfer complete. local: YCST-93.zoo remote: YCST-93.zoo 1041762 bytes received in 19 seconds (55 Kbytes/s) ftp> quit 221 Goodbye $ tar -xvf - < YCST-93.tar $ uncompress *.Z $ printout If anyone has any further problems, please contact me via email. - George Bolt email: george at psychmips.york.ac.uk smail: Dept. of Psychology University of York Heslington, York YO1 5DD U.K.  From mikewj at signal.dra.hmg.gb Fri Mar 5 12:14:04 1993 From: mikewj at signal.dra.hmg.gb (mikewj@signal.dra.hmg.gb) Date: Fri, 5 Mar 93 17:14:04 GMT Subject: The best neural networks for classification Message-ID: AA15142@ravel.dra.hmg.gb Dear Connectionists, Many connectionist simulators are geared towards comparing one neural algorithm with another. You can set different numbers of nodes, learning parameters etc. and do 20 runs (say) to get good statistical measurements on the performance of the algorithms on a dataset. I am working on the European Commission funded Statlog project, comparing a whole host of pattern classification techniques on some large, dirty, and difficult industrial data sets; techniques being tested include various neural net paradigms, as well as about 20 statistical and inductive inference methods. Instead of the usual rigorous performance figures useful for comparing different neural nets, I have to produce the BEST NETWORK I CAN, and report performance on training and test sets. 
In order to do well on the test set, I need to hold back some of the training data, and use this to evaluate the performance of networks of different sizes and trained for different lengths of time, on the remaining portion of the training data. Moreover, I would like to find a simulator which uses faster training algorithms such as conjugate gradients, which can cope with big datasets without having memory or network nightmares, and which will do the hold-out cross-validation itself, automatically. My other options are to do this by hand (which is fine), but I see greater benefits for the project, for "neural nets" as a standard technique, and for industrial users unfamiliar with the trickeries of data preparation and so on, if I can simply recommend a simulator which will find the best network for an application, without a great deal of intervention for evaluation, chopping of data and so on. Thanks for your interest; any suggestions? Mike Wynne-Jones. mikewj at signal.dra.hmg.gb  From rich at gte.com Fri Mar 5 15:07:43 1993 From: rich at gte.com (Rich Sutton) Date: Fri, 5 Mar 93 15:07:43 -0500 Subject: Reinforcement Learning workshop to follow ML93 -- Call for participation Message-ID: <9303052007.AA17647@bunny.gte.com> Call for Participation "REINFORCEMENT LEARNING: What We Know, What We Need" an Informal Workshop to follow ML93 June 30 & July 1, University of Massachusetts, Amherst Reinforcement learning is a simple way of framing the problem of an autonomous agent learning and interacting with the world to achieve a goal. This has been an active area of machine learning research for the last 5 years. The objective of this workshop is to present concisely the current state of the art in reinforcement learning and to identify and highlight critical open problems. The intended audience is all learning researchers interested in reinforcement learning; little prior knowledge of the area will be assumed. The first half of the workshop will consist mainly of tutorial presentations, and the second half will define and explore outstanding problems. The entire workshop will last approximately one and a half days. Attendance will be open to all those registered for the main part of ML93 (June 27-29). Program Committee: Rich Sutton (chair), Nils Nilsson, Leslie Kaelbling, Satinder Singh, Sridhar Mahadevan, Andy Barto, Steve Whitehead CALL FOR PAPERS. Papers are solicited that lay out relevant problem areas, i.e., for the second half of the workshop. Proposals are also solicited for polished tutorial presentations on basic topics of reinforcement learning for the first portion of the workshop. The program has yet to be established, but will probably look something like the following (all names are provisional): Session 1: Introduction The Challenge of Reinforcement Learning, by Rich Sutton History of RL, by Harry Klopf Q-learning Planning and Action Models, by Long-Ji Lin Session 2: Theory Dynamic Programming, by Andy Barto Convergence of Q-learning and TD(lambda), by Peter Dayan Session 3: Applications TD-Gammon, by Gerry Tesauro Robotics, by Sridhar Mahadevan Session 4: Extensions Prioritized Sweeping, by Andrew Moore Eligibility Traces, by Rich Sutton Sessions 5 & 6: Open Problems Generalization Hidden State (short-term memory) Hierarchical RL Search Control Incorporating Prior Knowledge Exploration ... 
If you are interested in attending the RL workshop, please register by sending a note with your name, email and physical addresses, level of interest, and a brief description of your current level of knowledge about reinforcement learning, to: sutton at gte.com OR Rich Sutton GTE Labs, MS-44 40 Sylvan Road Waltham, MA 02254  From giles at research.nj.nec.com Fri Mar 5 16:09:31 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 5 Mar 93 16:09:31 EST Subject: Reprint: First-Order vs. Second-Order Single Layer Recurrent NN Message-ID: <9303052109.AA01113@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- First-Order vs. Second-Order Single Layer Recurrent Neural Networks Mark W. Goudreau (Princeton University and NEC Research Institute, Inc.) C. Lee Giles (NEC Research Institute, Inc. and University of Maryland) Srimat T. Chakradhar (C&CRL, NEC USA, Inc.) D. Chen (University of Maryland) ABSTRACT We examine the representational capabilities of first-order and second-order Single Layer Recurrent Neural Networks (SLRNNs) with hard-limiting neurons. We show that a second-order SLRNN is strictly more powerful than a first-order SLRNN. However, if the first-order SLRNN is augmented with output layers of feedforwardneurons, it can implement any finite-state recognizer, but only if state-splitting is employed. When a state is split, it is divided into two equivalent states. The judicious use of state-splitting allows for efficient implementation of finite-state recognizers using augmented first-order SLRNNs. ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get SLRNN.ps.Z ftp> quit unix> uncompress SLRNN.ps.Z --------------------------------------------------------------------------------. -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From kak at max.ee.lsu.edu Fri Mar 5 12:54:05 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Fri, 5 Mar 93 11:54:05 CST Subject: The New Training Alg for Feedforward Networks Message-ID: <9303051754.AA27581@max.ee.lsu.edu> Hard copies of the below-mentioned paper are now available [until they are exhausted]. -Subhash Kak ------------------------------------------------------------- Pramana - Journal of Physics, vol. 40, January 1993, pp. 35-42 -------------------------------------------------------------- On Training Feedforward Neural Networks Subhash C. Kak Department of Electrical & Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Abstract: A new algorithm that maps n-dimensional binary vectors into m-dimensional binary vectors using 3-layered feedforward neural networks is described. The algorithm is based on a representation of the mapping in terms of the corners of the n-dimensional signal cube. The weights to the hidden layer are found by a corner classification algorithm and the weights to the output layer are all equal to 1. Two corner classification algorithms are described. The first one is based on the perceptron algorithm and it performs generalization. 
The computing power of this algorithm may be gauged from the example that the exclusive-Or problem that requires several thousand iterative steps using the backpropagation algorithm was solved in 8 steps. For problems with 30 to 100 neurons we have found a speedup advantage ranging from 100 to more than a thousand fold. Another corner classification algorithm presented in this paper does not require any computations to find the weights. However, in its basic form this second classification procedure does not perform generalization. The new algorithm described in this paper is guaranteed to find the solution to any mapping problem. The effectiveness of this algorithm is due to the structured nature of the final design. This algorithm can also be applied to analog data. The new algorithm is computationally extremely efficient compared to the backpropagation algorithm. It is also biologically plausible since the computations required to train the network are extremely simple.  From rupa at dendrite.cs.colorado.edu Fri Mar 5 16:05:17 1993 From: rupa at dendrite.cs.colorado.edu (Sreerupa Das) Date: Fri, 5 Mar 1993 14:05:17 -0700 Subject: Preprint available Message-ID: <199303052105.AA06067@pons.cs.Colorado.EDU> The following paper has been placed in the Neuroprose archive. Instructions for retrieving and printing follow the abstract. This is a preprint of the paper to appear in Advances in Neural Information Processing Systems 5, 1993. Comments and questions are welcome. Thanks to Jordan Pollack for maintaining the archive! Sreerupa Das Department of Computer Science University of Colorado, Boulder CO-80309-0430 email: rupa at cs.colorado.edu ************************************************************************** USING PRIOR KNOWLEDGE IN AN NNPDA TO LEARN CONTEXT-FREE LANGUAGES Sreerupa Das C. Lee Giles Guo-Zheng Sun Dept. of Comp. Sc.& NEC Research Inst. Inst. for Adv. Comp. Studies Inst. of Cognitive Sc. 4 Independence Way University of Maryland University of Colorado Princeton, NJ 08540 College Park, MD 20742 Boulder, CO 80309-0430 ABSTRACT Although considerable interest has been shown in language inference and automata induction using recurrent neural networks, success of these models has mostly been limited to regular languages. We have previously demonstrated that Neural Network Pushdown Automaton (NNPDA) model is capable of learning deterministic context-free languages (e.g., a^n b^n and parenthesis languages) from examples. However, the learning task is computationally intensive. In this paper we discuss some ways in which {\em a priori} knowledge about the task and data could be used for efficient learning. We also observe that such knowledge is often an experimental prerequisite for learning nontrivial languages (eg. a^n b^n c b^m a^m). ************************************************************************** -------------------------------------------------------------------------- FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get das.prior_knowledge.ps.Z ftp> bye unix% zcat das.prior_knowledge.ps.Z | lpr  From giles at research.nj.nec.com Fri Mar 5 12:22:26 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 5 Mar 93 12:22:26 EST Subject: Reprint: Experimental Comparison of the Effect of Order ... Message-ID: <9303051722.AA00702@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. 
Instructions for retrieval from the archive follow the summary. -------------------------------------------------------------------------------- Experimental Comparison of the Effect of Order in Recurrent Neural Networks Clifford B. Miller(a) and C.L. Giles(a,b) (a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (b) Institute for Advanced Computer Studies, U. Maryland, College Park, MD20742 ABSTRACT There has been much interest in increasing the computational power of neural networks. In addition there has been much interest in "designing" neural networks to better suit particular problems. Increasing the "order" of the connectivity of a neural network permits both. Though order has played a significant role in feedforward neural networks, it role in dynamically driven recurrent networks is still being understood. This work explores the effect of order in learning grammars. We present an experimental comparison of first-order and second-order recurrent neural networks, as applied to the task of grammatical inference. We show that for the small grammars studied these two neural net architectures have comparable learning and generalization power, and that both are reasonably capable of extracting the correct finite state automata for the language in question. However, for a larger randomly-generated, ten-state grammar second-order networks significantly outperformed the first-order networks, both in convergence time and generalization capability. We show that these networks learn faster the more neurons they have (our experiments used up to 10 hidden neurons), but that the solutions found by smaller networks are usually of better quality (in terms of generalization performance after training). Second-order nets have the advantage that they converge more quickly to a solution and can find it more reliably than first-order nets, but that the second-order solutions tend to be of poorer quality than those of first-order if both architectures are trained to the same error tolerance. Despite this, second-order nets can more successfully extract finite state machines using heuristic clustering techniques applied to the internal state representations. We speculate that this may be due to restrictions on the ability of first-order architecture to fully make use of its internal state representation power and that this may have implications for the performance of the two architectures when scaled up to larger problems. List of key words: recurrent neural networks, higher order, learning, generalization, automata, grammatical inference, grammars. -------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get miller-giles-ijprai.ps.Z ftp> quit unix> uncompress miller-giles-ijprai.ps.Z --------------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From giles at research.nj.nec.com Sun Mar 7 12:09:12 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Sun, 7 Mar 93 12:09:12 EST Subject: Reprint: Constructive Learning of Recurrent Neural Networks Message-ID: <9303071709.AA02977@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. 
From giles at research.nj.nec.com Sun Mar 7 12:09:12 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Sun, 7 Mar 93 12:09:12 EST Subject: Reprint: Constructive Learning of Recurrent Neural Networks Message-ID: <9303071709.AA02977@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- "Constructive Learning of Recurrent Neural Networks" D. Chen, C.L. Giles, G.Z. Sun, H.H. Chen, Y.C. Lee, M.W. Goudreau University of Maryland, College Park NEC Research Institute, Princeton, NJ ABSTRACT Recurrent neural networks are a natural model for learning and predicting temporal signals. In addition, simple recurrent networks have been shown to be both theoretically and experimentally capable of learning finite state automata {cleeremans89,giles92a,minsky67,pollack91, siegelmann92}. However, it is difficult to determine the minimal neural network structure for a particular automaton. A large recurrent network, which would be versatile in theory, in practice proves to be very difficult to train. Constructive or destructive recurrent methods might offer a solution to this problem. We prove that one current method, Recurrent Cascade Correlation, has fundamental limitations in representation and thus in its learning capabilities. We give a preliminary approach on how to get around these limitations by devising a ``simple'' constructive training method that adds neurons during training while still preserving the powerful fully recurrent structure. Through simulations we show that such a method can learn many types of regular grammars that the Recurrent Cascade Correlation method is unable to learn. ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get icnn_93_contructive.ps.Z ftp> quit unix> uncompress icnn_93_contructive.ps.Z ---------------------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From sgoss at ulb.ac.be Fri Mar 5 17:26:14 1993 From: sgoss at ulb.ac.be (Goss Simon) Date: Fri, 5 Mar 93 17:26:14 MET Subject: 2nd European Conference on Artificial Life Message-ID: ECAL '93 2nd European Conference on Artificial Life SELF-ORGANIZATION AND LIFE, FROM SIMPLE RULES TO GLOBAL COMPLEXITY Brussels, May 24-26th, 1993 Natural and artificial systems governed by simple rules exhibit self-organisation leading to autonomy, self-adaptation and evolution. While these phenomena interest an increasing number of scientists, much remains to be done to encourage the cross-fertilisation of ideas and techniques. The aim of this conference is to bring together scientists from different fields in the search for common rules and algorithms underlying different systems. The following themes have been selected: - Origin of life and molecular evolution - Patterns and rhythms in chemical and biochemical systems and interacting cells (neural network, immune system, morphogenesis) - Sensory and motor activities in animals and robots - Collective intelligence in natural and artificial groups - Ecological communities and evolution - Ecological computation - Epistemology We are also planning demonstrations of computer programmes, robots and physico-chemical reactions, both in vivo and in video. Invited Speakers C. Biebricher (Germany), S. Camazine (USA), H. Cruse, P. De Kepper (France), W. Fontana, N. Franks (UK), F. Hess (Holland), B. Huberman (USA), S. Kauffman (USA), C. Langton (USA), M. Nowak (UK), T. Ray (USA), P. Schuster (Germany), M. Tilden (Canada), J. Urbain (Belgium), F. Varela (France).
Organising committee: J.L. Deneubourg, H. Bersini, S. Goss, G. Nicolis (Universite Libre de Bruxelles), R. Dagonnier (Universite de Mons-Hainaut).

International Program committee: A. Babloyantz (Belgium), G. Beni (USA), P. Borckmans (Belgium), P. Bourgine (France), H. Cruse (Germany), G. Demongeot (France), G. Dewel (Belgium), P. De Kepper (France), S. Forrest (USA), N. Franks (UK), T. Fukuda (Japan), B. Goodwin (UK), P. Hogeweg (Holland), M. Kauffman (Belgium), C. Langton (USA), R. Lefever (Belgium), P. Maes (USA), J.-A. Meyer (France), T. Ray (USA), P. Schuster (Austria), T. Smithers (Belgium), F. Varela (France), R. Wehner (Germany).

Address: ECAL '93, Centre for Non-Linear Phenomena and Complex Systems, CP 231, Universite Libre de Bruxelles, Bld. du Triomphe, 1050 Brussels, Belgium. Fax: 32-2-6505767; Phone: 32-2-6505776; 32-2-6505796; EMAIL: sgoss at ulb.ac.be
________________________________________________________________________________
REGISTRATION

The registration fees for ECAL '93 (May 24-26) are as follows ($1 = 34 BF):

                   Payment before May 1st    Payment after May 1st
  Academic:             10.000 BF                 12.000 BF
  Non-Academic:         12.000 BF                 14.000 BF
  Student:               6.500 BF                  7.500 BF

a) I authorise payment of ______ BF by the following credit card:
   American Express / Visa / Eurocard / Master (please indicate which card!)
   Card Name          Card No          Valid from:       to:
   Signature
b) I enclose a Eurocheque for ______ BF
c) I have ordered my bank to make a draft of ______ BF to:
   ECAL '93, Account no: 034-1629733-01
   CGER (Caisse Generale d'Epargne et de Retraite)
   Agence Ixelles-Universite, Chaussee de Boondael 466, 1050 Bruxelles, Belgium

Signature          Date
Name               Telephone          Fax          e-mail
Address
________________________________________________________________________________
ECAL '93: Self-Organisation and Life, From Simple Rules to Global Complexity. Brussels, May 24-26

(Very) Provisional Program (16 invited speakers, 40 oral communications, 50 posters; parallel sessions)

Monday May 24th
 9.00  Inauguration
 9.10  Opening remarks
 9.45  Coffee
10.15  Origins of life and molecular evolution. Invited speakers: C. Biebricher, W. Fontana, P. Schuster.
10.15  Chemical patterns and rhythms. Invited speaker: P. De Kepper
12.20 / 13.10  Lunch
14.10  Theoretical biology and artificial life. Invited speaker: C. Langton
14.10  Collective intelligence in animal groups (social insects). Invited speaker: N.R. Franks
15.50  Coffee
16.30  Theoretical biology and artificial life
16.20  Collective intelligence in animal groups (social insects). Invited speaker: S. Camazine
18.00  Beer and sandwiches
19.30  Theoretical biology and artificial life: General discussion. Invited speaker: F. Varela
22.00  Close

Tuesday May 25th
 9.00  Individual behaviour. Invited speaker: M. Tilden
 9.00  Patterns and rhythms in human societies. Invited speaker: B. Huberman
10.40  Coffee
11.10  Individual behaviour
11.10  Multi-robot systems
12.10 / 13.10  Lunch
14.00  Posters and demonstrations (robots, simulations, videos, chemical reactions, ...). Invited speaker: F. Hess.
18.00  Cocktail
20.00  Banquet

Wednesday May 26th
 9.00  Evolution. Invited speakers: T. Ray, S. Kauffman.
 9.00  Sensory and motor activities in animals and robots. Invited speaker: H. Cruse
10.20 / 10.40  Coffee
11.10
11.40  Ecological communities and evolution. Invited speaker: M. Nowak
12.10 / 13.10  Lunch
14.10  Collective pattern in living systems
14.10  Patterns and rhythms in the immune system. Invited speaker: J. Urbain
15.50  Coffee
16.30  Collective pattern in living systems
16.20  Patterns and rhythms in the immune system
17.30  Closing remarks
________________________________________________________________________________
*** you may need a physical copy of the BIT hotel reservation form or the ***
*** Brussels Hotel Guide. See below for details ***

HOTEL ACCOMMODATION FOR ECAL '93

We have reserved a number of rooms in the centre of Brussels (see attached list), not far from the Metro line 1a which will take you to the conference (ULB, Campus Plaine, Metro Station Delta, Metro Ligne 1a, direction Hermann Debroux). There are unfortunately no hotels close to the University. All prices include breakfast, TV, bathroom (see enclosed official hotel guide for more details). Hotels are rather expensive in Brussels, and you will see that we have not been able to reserve many low-priced rooms. The earlier you make your reservation, therefore, the surer you are of having one. Another possibility is that you arrange to share a double room with a fellow conference participant. It is important that you try to reserve before the 15th of April, otherwise our options on the rooms may be cancelled. Please note that there are 3 ways of reserving your room, depending on which hotel you choose.

1. Hotels President (World Trade Centre) and Palace
For the Hotels President and Palace you will see on the enclosed ECAL selected hotel list that we have been able to negotiate a substantial reduction on normal rates. To do so we have had to agree to pay for the rooms in one lump sum, including a down payment. Therefore, if you wish to take a room in these hotels you must make the reservation through us. We will only agree to do this if you pay the necessary sum in advance (we will refund cancellations, though these two hotels might impose a cancellation charge if there are too many last-minute cancellations). Please reserve before April 15th. After this date we cannot guarantee that the hotel will maintain our unused reservations and group rate. We enclose a special form for the registration and pre-payment of these rooms.

2. Other hotels on selected ECAL list
For all the other hotels on our selected list, please fill in the enclosed BTR reservation form and return it to us. We will forward it to:
  Mr. Freddy Meerkens
  Group reservations
  Belgian Tourist Reservations
  Bld. Anspach 111, B4
  B-1000 Bruxelles
  Tel (322) 513.7484; Fax (322) 513.9277
He will make the reservation and should also notify you that the reservation has been made. There is no charge for this service, BTR being a state agency. You can if you wish send or fax your reservation form directly to M. Meerkens, group reservations, BTR. In this case, please do not forget to mention in large letters that you are attending the ECAL '93 conference (group reservation), and please send us a copy (marked "COPY") of your reservation form, so that we can keep track of where everyone is staying.

3. Independent Reservations
Finally, for those of you that are more independently minded, who wish to find a cheaper hotel, or have other reasons, we enclose the Brussels Hotel Guide 1993, which lists all the hotels in Brussels. If you choose one that is not on the enclosed ECAL selected hotel list, you can then reserve through Belgian Tourist Reservations (see above), using the enclosed BTR form (you do not need in this case to mention that you are attending ECAL).
We would nevertheless like you to send us a copy (marked "COPY") of your BTR reservation form, so that we can keep track of where everyone is staying.

ECAL selected Hotel list (do not contact these hotels directly - see attached instructions) ($1 = 34 BF approx.)

Hotel President (World Trade Centre) (100 rooms) *****
  180 Bld. E. Jacqmain, 1210 Bruxelles. Tel: (322) 217.2020; Fax: (322) 218.8402
  single room = double room: ECAL price: 4000 BF (<< Normal price: 7500 BF)

Hotel Palace (lots of rooms) ****
  3 Rue Gineste, 1210 Bruxelles. Tel: (322) 217.6200; Fax: (322) 218.1269
  single room: ECAL price: 3780 BF (<< Normal price: 6000 BF)
  double room: ECAL price: 4520 BF (<< Normal price: 7000 BF)

Hotel Atlas (30 singles, 10 doubles) ****
  30-34 Rue du Vieux Marche-aux-Grains, 1000 Bruxelles. Tel: (322) 502.6006
  single room: ECAL price: 3360 BF (just < Normal price: 3500 BF)
  double room: ECAL price: 3780 BF (just < Normal price: 4100 BF)

Hotel Arcade Sainte Catherine (30 singles, 10 doubles) ***
  2 Rue Joseph Plateau, 1000 Bruxelles. Tel (322) 513.7620; Fax (322) 514.2214
  single room = double room: ECAL price: 3900 BF (= Normal price)

Hotel Orion (15 singles) ***
  51 Quai au Bois a Bruler, 1000 Bruxelles. Tel (322) 221.1411; Fax (322) 221.1599
  single room: ECAL price: 3120 BF (= Normal price)

Hotel Vendome (30 singles) ***
  96 Bld. Adolphe Max, 1000 Bruxelles. Tel (322) 218.00070; Fax (322) 218.0683
  single room: ECAL price: 2875 BF (= Normal price)

Hotel Opera (20 singles) **
  53 Rue Gretry, 1000 Bruxelles. Tel (322) 219.4343; Fax (322) 219.1720
  single room: ECAL price: 2200 BF (= Normal price)

Hotel de Paris (20 singles) ** (shower not bathroom)
  800 Bld. Poincarre, 1070 Bruxelles. Tel (322) 527.0920; Fax (322) 528.8153
  single room: ECAL price: 1800 BF (= Normal price)

Reservation and pre-payment for Hotel President (World Trade Centre) or Hotel Palace

I would like to reserve a single / double room at the Hotel President / Hotel Palace for the nights of:

Signature          Date
Name               Telephone          Fax          e-mail
Address
_________________________________________________________ ____________
a) I authorise payment to ECAL '93 of ______ BF by the following credit card:
   American Express / Visa / Eurocard / Master (please indicate which card!)
   Card Name          Card No          Valid from:       to:
   Signature
b) I enclose a Eurocheque (made out to ECAL '93) for ______ BF
c) I have ordered my bank to make a draft of ______ BF to:
   ECAL '93, Account no: 034-1629733-01
   CGER (Caisse Generale d'Epargne et de Retraite)
   Agence Ixelles-Universite, Chaussee de Boondael 466, 1050 Bruxelles, Belgium

-- Simon Goss Unit of Behavioural Ecology Center for Non-Linear Phenomena and Complex Systems CP 231, Campus Plaine Universite Libre de Bruxelles Boulevard du Triomphe 1050 Bruxelles Belgium Tel: 32-2-650.5776 Fax: 32-2-650.5767 E-mail: sgoss at ulb.ac.be

From fellous%sapo.usc.edu at usc.edu Tue Mar 9 10:57:50 1993 From: fellous%sapo.usc.edu at usc.edu (Jean-Marc Fellous) Date: Tue, 9 Mar 93 07:57:50 PST Subject: USC/CNE Workshop - Rescheduling.
Message-ID: <9303091557.AA09414@sapo.usc.edu> Thank you for posting the following notice: **************************** RESCHEDULING **************************** SCHEMAS AND NEURAL NETWORKS INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION A Workshop sponsored by the Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520 The Program Committee has now evaluated the submissions to the proposed Workshop on 'Schemas and Neural Networks: Integrating Symbolic and Subsymbolic Approaches to Cooperative Computation', and found that rather few were on the proposed topic, although there were several excellent submissions on Connectionist Implementation of Semantic Networks; and Schemas, Neural Networks, and Reactive Implementations of Robots. We have thus decided to postpone the meeting until October so that we may explicitly solicit the strongest possible contributions on an expanded theme - still on the topic of schemas and neural nets, but with the perspective now broadened to include the two areas most closely related to this particular combination of low- and high-level technologies: schemas plus other low-level technologies (such as reactive robot control); and neural nets plus other high-level technologies (such as rule systems and semantic nets). We regret whatever inconvenience this delay may cause you, but believe that for most contributors and participants, this will mean a much stronger and more exciting meeting. New Schedule: October 19-20, 1993 With Best Wishes Michael Arbib **************************************************************************  From sontag at control.rutgers.edu Tue Mar 9 15:21:36 1993 From: sontag at control.rutgers.edu (Eduardo Sontag) Date: Tue, 9 Mar 93 15:21:36 EST Subject: AVAILABLE IN NEUROPROSE: "Uniqueness of weights for neural networks" Message-ID: <9303092021.AA16170@control.rutgers.edu> TITLE: "Uniqueness of weights for neural networks" AUTHORS: Francesca Albertini, Eduardo D. Sontag, and Vincent Maillot FILE: sontag.uniqueness.ps.Z ABSTRACT This short paper surveys various results dealing with the weight-uniqueness question for neural nets. In essence, these results show that, under various technical assumptions, neuron exchanges and sign flips are the only transformations that (generically) leave the input/output behavior invariant. An alternative proof is given of Sussmann's theorem (Neural Networks, 1992) for single-hidden layer nets, and his result (for the standard logistic, or equivalently tanh(x)) is generalized to a wide class of activations. Also, several theorems for recurrent nets are discussed. (NOTE: The uniqueness theorem extends, with a simple proof, to single-hidden layer nets which employ the Elliott/Georgiou/Koutsougeras/... activation s(u) = u / (1 + |u|). This is not discussed in, and is not an immediate consequence of, the results in the paper, but is an easy exercise.) unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name : anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get sontag.uniqueness.ps.Z ftp> quit unix> uncompress sontag.uniqueness.ps.Z unix> lpr -Pps sontag.uniqueness.ps (or however you print PostScript) (With many thanks to Jordan Pollack for providing this valuable service!) Eduardo D. Sontag Department of Mathematics Rutgers Center for Systems and Control (SYCON) Rutgers University New Brunswick, NJ 08903, USA
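As a quick numerical check of the invariance described in the abstract above (a sketch, not code from the paper; the network size and random weights are arbitrary), flipping the signs of one hidden unit's incoming weights, bias and outgoing weight, or exchanging two hidden units, leaves a single-hidden-layer tanh network's input/output map unchanged:

import numpy as np

rng = np.random.default_rng(1)
n_in, n_hid = 3, 5
A = rng.normal(size=(n_hid, n_in))   # input-to-hidden weights
b = rng.normal(size=n_hid)           # hidden biases
c = rng.normal(size=n_hid)           # hidden-to-output weights

def net(x, A, b, c):
    return c @ np.tanh(A @ x + b)

# Sign flip of hidden unit 2: negate its incoming weights, its bias and its
# outgoing weight.  Because tanh is odd, the two minus signs cancel.
A2, b2, c2 = A.copy(), b.copy(), c.copy()
A2[2], b2[2], c2[2] = -A2[2], -b2[2], -c2[2]

# Exchange of hidden units 0 and 1: apply the same permutation everywhere.
perm = [1, 0, 2, 3, 4]
A3, b3, c3 = A[perm], b[perm], c[perm]

x = rng.normal(size=n_in)
print(net(x, A, b, c), net(x, A2, b2, c2), net(x, A3, b3, c3))  # three equal values

The same check goes through for any odd activation, in particular s(u) = u / (1 + |u|).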
From sontag at control.rutgers.edu Tue Mar 9 15:24:05 1993 From: sontag at control.rutgers.edu (Eduardo Sontag) Date: Tue, 9 Mar 93 15:24:05 EST Subject: AVAILABLE IN NEUROPROSE: paper on finite VC dimension for NN Message-ID: <9303092024.AA16173@control.rutgers.edu> TITLE: "Finiteness results for sigmoidal `neural' networks" AUTHORS: Angus Macintyre and Eduardo D. Sontag (To appear in Proc. 25th Annual Symp. on Theory of Computing, San Diego, May 1993) FILE: sontag.vc.ps.Z ABSTRACT This paper deals with analog neural nets. It establishes the finiteness of VC dimension, teaching dimension, and several other measures of sample complexity which arise in learning theory. It also shows that the equivalence of behaviors, and the loading problem, are effectively decidable, modulo a widely believed conjecture in number theory. The results, the first ones that are independent of weight size, apply when the gate function is the ``standard sigmoid'' commonly used in neural networks research. The proofs rely on very recent developments in the elementary theory of real numbers with exponentiation. (Some weaker conclusions are also given for more general analytic gate functions.) Applications to learnability of sparse polynomials are also mentioned. **** To retrieve: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name : anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get sontag.vc.ps.Z ftp> quit unix> uncompress sontag.vc.ps.Z unix> lpr -Pps sontag.vc.ps (or however you print PostScript) (With many thanks to Jordan Pollack for providing this valuable service!) Eduardo D. Sontag Department of Mathematics Rutgers Center for Systems and Control (SYCON) Rutgers University New Brunswick, NJ 08903, USA  From piero at dist.dist.unige.it Mon Mar 8 19:44:53 1993 From: piero at dist.dist.unige.it (Piero Morasso) Date: Mon, 8 Mar 93 19:44:53 MET Subject: ENNS membership Message-ID: <9303081844.AA16490@dist.dist.unige.it> ============================================== E N N S European Neural Network Society ============================================= ENNS is the Society which organizes the ICANN Conferences every year (Helsinki'91, Brighton'92, Amsterdam'93, Sorrento'94, ...). ENNS membership allows reduced registration to ICANN, subscription to the Neural Networks journal and the reception of a Newsletter. The ENNS membership application form is available via ftp. In order to get it, proceed as follows: 1) you type "ftp dist.unige.it" 2) upon the request "login:" you type "anonymous" 3) upon the request "password:" you type your email address 4) if this is OK, you are inside the dist machine; your home directory should be /home1/ftp; check it with "pwd" 5) you go to the ENNS directory (type "cd pub/ENNS") 6) you set the transmission to "binary" 7) you get the file by "get member_appl_form.ps.Z" 8) quit ftp 9) type "uncompress member_appl_form.ps" File member_appl_form.ps is now ready to print.
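For reference, the same steps written out in the transcript style used for the other archives in this digest (the prompts shown are illustrative; the host, directory and file names are those given above):

unix> ftp dist.unige.it
Name: anonymous
Password: (your email address)
ftp> cd pub/ENNS
ftp> binary
ftp> get member_appl_form.ps.Z
ftp> quit
unix> uncompress member_appl_form.ps.Z
unix> lpr member_appl_form.ps (or however you print PostScript)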
-- Pietro Morasso ENNS secretary E-mail: morasso at dist.unige.it mail: DIST-University of Genova Via Opera Pia, 11A I-16145 Genova (ITALY) phone: +39 10 3532749/3532983 fax: +39 10 3532948  From giles at research.nj.nec.com Tue Mar 9 19:00:02 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Tue, 9 Mar 93 19:00:02 EST Subject: Reprint: Routing in Optical Interconnection Networks Using NNs Message-ID: <9303100000.AA06263@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- "Routing in Optical Multistage Interconnection Networks: A Neural Network Solution" C. Lee Giles NEC Research Institute, Inc. and UMIACS, University of Maryland and Mark W. Goudreau NEC Research Institute, Inc. ABSTRACT There has been much interest in using optics to implement computer interconnection networks. However, there has been little discussion of routing methodologies besides those already used in electronics. In this paper, a neural network routing methodology is proposed that can generate control bits for an optical multistage interconnection network (OMIN). Though we present no optical implementation of this methodology, we illustrate its control for an optical interconnection network. These OMINs can be used as communication media for shared memory, distributed computing systems. The routing methodology makes use of an Artificial Neural Network (ANN) that functions as a parallel computer for generating the routes. The neural network routing scheme can be applied to electrical as well as optical interconnection networks. However, since the ANN can be implemented using optics, this routing approach is especially appealing for an optical computing environment. The parallel nature of the ANN computation might make this routing scheme faster than conventional routing approaches, especially for OMINs that are irregular. Furthermore, the neural network routing scheme is fault-tolerant. Results are shown for generating routes in a 16 by 16, 3 stage OMIN. ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get optics_long.ps.Z ftp> quit unix> uncompress optics_long.ps.Z ------------------------------------------------------------------------------------ -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From lba at sara.inesc.pt Tue Mar 9 13:00:30 1993 From: lba at sara.inesc.pt (Luis B. Almeida) Date: Tue, 9 Mar 93 19:00:30 +0100 Subject: The New Training Alg for Feedforward Networks In-Reply-To: "Dr. S. Kak"'s message of Fri, 5 Mar 93 11:54:05 CST <9303051754.AA27581@max.ee.lsu.edu> Message-ID: <9303091800.AA11872@sara.inesc.pt> Dr. Subhash C. Kak writes, in his message: The computing power of this algorithm may be gauged from the example that the exclusive-Or problem that requires several thousand iterative steps using the backpropagation algorithm was solved in 8 steps. I cannot agree with the assertions made about the speed of backpropagation in the XOR problem. 
Just to be sure, I have just run a few tests, using plain backpropagation, in the batch mode, without any acceleration technique (not even momentum), and using an architecture with 2 input units, 2 hidden units and 1 output unit (more details available on request). The runs that didn't stop at local minima all converged between 12 and 30 epochs. About 1/3 of the runs fell in local minima. Of course, this comment is not intended to deny the qualities of the algorithm proposed by Dr. Kak; it is just intended to put backpropagation in its proper place. Luis B. Almeida INESC Phone: +351-1-544607, +351-1-3100246 Apartado 10105 Fax: +351-1-525843 P-1017 Lisboa Codex Portugal lba at inesc.pt lba at inesc.uucp (if you have access to uucp)  From vijay at envy.cs.umass.edu Wed Mar 10 11:24:03 1993 From: vijay at envy.cs.umass.edu (vijay@envy.cs.umass.edu) Date: Wed, 10 Mar 93 11:24:03 -0500 Subject: AVAILABLE IN NEUROPROSE: "Learning control under extreme uncertainty" Message-ID: <9303101624.AA18303@sloth.cs.umass.edu> The following paper has been placed in the Neuroprose archive. Thanks to Jordan Pollack for providing this service. Comments and questions are welcome. ******************************************************************* Learning Control Under Extreme Uncertainty Vijaykumar Gullapalli Computer Science Department University of Massachusetts Amherst, MA 01003 Abstract A peg-in-hole insertion task is used as an example to illustrate the utility of direct associative reinforcement learning methods for learning control under real-world conditions of uncertainty and noise. Task complexity due to the use of an unchamfered hole and a clearance of less than $0.2mm$ is compounded by the presence of positional uncertainty of magnitude exceeding $10$ to $50$ times the clearance. Despite this extreme degree of uncertainty, our results indicate that direct reinforcement learning can be used to learn a robust reactive control strategy that results in skillful peg-in-hole insertions. ********************************************************************* FTP INSTRUCTIONS unix% Getps gullapalli.uncertainty-nips5.ps.Z if you have the shell script, or unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get gullapalli.uncertainty-nips5.ps.Z ftp> bye unix% zcat gullapalli.uncertainty-nips5.ps.Z | lpr  From cowan at synapse.uchicago.edu Wed Mar 10 12:46:50 1993 From: cowan at synapse.uchicago.edu (Jack Cowan) Date: Wed, 10 Mar 93 11:46:50 -0600 Subject: NIPS*93 Message-ID: <9303101746.AA00848@synapse> FIRST CALL FOR PAPERS Neural Information Processing Systems -Natural and Synthetic- Monday, November 29 - Thursday, December 2, 1993 Denver, Colorado This is the seventh meeting of an inter-disciplinary conference which brings together neuroscientists, engineers, computer scientists, cognitive scientists, physicists, and mathematicians interested in all aspects of neural processing and computation. There will be an afternoon of tutorial presentations (Nov 29) preceding the regular session and two days of focused workshops will follow at a nearby ski area (Dec 3-4). Major categories and examples of subcategories for paper submissions are the following: Neuroscience: Studies and Analyses of Neurobiological Systems, Inhibition in cortical circuits, Signals and noise in neural computation, Theoretical Biology and Biophysics.
Theory: Computational Learning Theory, Complexity Theory, Dynamical Systems, Statistical Mechanics, Probability and Statistics, Approximation Theory. Implementation & Simulation: VLSI, Optical, Software Simulators, Implementation Languages, Parallel Processor Design and Benchmarks. Algorithms & Architectures: Learning Algorithms, Constructive and Pruning Algorithms, Localized Basis Functions, Tree Structured Networks, Performance Comparisons, Recurrent Networks, Combinatorial Optimization, Genetic Algorithms. Cognitive Science & AI: Natural Language, Human Learning and Memory, Perception and Psychophysics, Symbolic Reasoning. Visual Processing: Stereopsis, Visual Motion, Recognition, Image Coding and Classification. Speech & Signal Processing: Speech Recognition, Coding, and Synthesis, Text-to-Speech, Adaptive Equalization, Nonlinear Noise Removal. Control, Navigation, & Planning: Navigation and Planning, Learning Internal Models of the World, Trajectory Planning, Robotic Motor Control, Process Control. Applications: Medical Diagnosis or Data Analysis, Financial and Economic Analysis, Timeseries Prediction, Protein Structure Prediction, Music Processing, Expert Systems. Technical Program: Plenary, contributed and poster sessions will be held. There will be no parallel sessions. The full text of presented papers will be published. Submission Procedures: Original research contributions are solicited, and will be carefully refereed. Authors must submit six copies of both a 1000-word (or less) summary and six copies of a separate single-page 50-100 word abstract clearly stating their results postmarked by May 22, 1993 (express mail is not necessary). Accepted abstracts will be published in the conference program. Summaries are for program committee use only. At the bottom of each abstract page and on the first summary page indicate preference for oral or poster presentation and specify one of the above nine broad categories and, if appropriate, sub-categories (For example: Poster, Applications, Expert Systems; Oral, Implementation-Analog VLSI). Include addresses of all authors at the front of the summary and the abstract and indicate to which author correspondence should be addressed. Submissions will not be considered that lack category information, separate abstract sheets, the required six copies, author addresses, or are late. Mail submissions To: Gerry Tesauro The Salk Institute, CNL 10010 North Torrey Pines Rd. La Jolla, CA 92037 Mail for registration material To: NIPS*93 Registration NIPS Foundation PO Box 60035 Pasadena, CA 91116-6035 All submitting authors will be sent registration material automatically. Program committee decisions will be sent to the correspondence author only. NIPS*93 Organizing Committee: General Chair, Jack Cowan, University of Chicago; Publications Chair, Joshua Alspector, Bellcore; Publicity Chair, Bartlett Mel, CalTech; Program Chair, Gerry Tesauro, Salk Institute; Treasurer, Rodney Goodman, CalTech; Local Arrangements, Chuck Anderson, Colorado State University; Tutorials Chair, Dave Touretzky, Carnegie-Mellon, Workshop Chair, Mike Mozer, University of Colorado, Government & Corporate Liaison, Lee Giles, NEC Research Institute Inc. DEADLINE FOR SUMMARIES & ABSTRACTS IS MAY 22, 1993 (POSTMARKED)  From eric at research.nj.nec.com Wed Mar 10 17:38:19 1993 From: eric at research.nj.nec.com (Eric B. 
Baum) Date: Wed, 10 Mar 93 17:38:19 EST Subject: No subject Message-ID: <9303102238.AA02774@yin> Preprint: Best Play for Imperfect Players and Game Tree Search The following preprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ----------------------------------------------------------------- Best Play for Imperfect Players and Game Tree Search Eric B. Baum and Warren D. Smith NEC Research Institute 4 Independence Way Princeton NJ 08540 ABSTRACT We propose a new approach to game tree search. We train up an evaluation function which returns, rather than a single number estimating the value of a position, a probability distribution $P_L(x)$. $P_L(x)$ is the probability that if we expanded leaf $L$ to some depth, the backed up value of leaf $L$ would then be found to be $x$. We describe how to propagate these distributions efficiently up the tree so that at any node n we compute without approximation the probability node n's negamax value is x given that a value is assigned to each leaf from its distribution. After we are done expanding the tree, the best move is the child of the root whose distribution has highest mean. Note that we take means at the child of the root {\it after} propagating, whereas the normal (Shannon) approach takes the mean at the leaves before propagating, which throws away information. Now we model the expansion of a leaf as selection of one value from its distribution. The total utility of all possible expansion is defined as the ensemble sum over those possible leaf configurations for which the current favorite move is inferior to some alternate move, weighted by the probability of the leaf configuration and the amount the current favorite move is inferior. We propose as the natural measure of the expansion importance of leaf L, the expected absolute change in this utility when we expand leaf L. We support this proposal with several arguments including an approximation theorem valid in the limit that one expands until the remaining utility of expansion becomes small. In summary, we gather distributions at the leaves, propagate exactly all this information to the root, and incrementally grow a tree expanding approximately the most interesting leaf at each step. Under reasonable conditions, we accomplish all of this in time $O(N)$, where N is the number of leaves in the tree when we are done expanding. That is, we pay only a small constant factor overhead for all of our bookkeeping. ---------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/eric/papers ftp> binary ftp> get game.ps.Z ftp> quit unix> uncompress game.ps.Z ----------------------------------------------------------------------- Eric Baum NEC Research Institute 4 Independence Way Princeton NJ 08540 Inet: eric at research.nj.nec.com UUCP: princeton!nec!eric MAIL: 4 Independence Way, Princeton NJ 08540 PHONE: (609) 951-2712 FAX: (609) 951-2482  From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Thu Mar 11 11:59:00 1993 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Thu, 11 Mar 93 11:59:00 EST Subject: The New Training Alg for Feedforward Networks In-Reply-To: Your message of Tue, 09 Mar 93 19:00:30 +0100. <9303091800.AA11872@sara.inesc.pt> Message-ID: Dr. Subhash C. 
Kak writes, in his message: The computing power of this algorithm may be gauged from the example that the exclusive-Or problem that requires several thousand iterative steps using the backpropagation algorithm was solved in 8 steps. I cannot agree with the assertions made about the speed of backpropagation in the XOR problem. Just to be sure, I have just run a few tests, using plain backpropagation, in the batch mode, without any acceleration technique (not even momentum), and using an architecture with 2 input units, 2 hidden units and 1 output unit (more details available on request). The runs that didn't stop at local minima all converged between 12 and 30 epochs. About 1/3 of the runs fell in local minima. I was also going to comment on Kak's statement that backprop takes "several thousand" iterative steps to converge. It's not clear to me whether this means epochs or pattern presentations, but in any case that number is too high. One of the very first papers on backprop (Rumelhart, Hinton, and Williams, in the PDP books) refers to a study by Chauvin that solved XOR in an average of 245 epochs. So Kak's figure is a bit high. On the other hand, Luis Almeida's figure of 12-30 epochs for 2-2-1 XOR with vanilla backprop is much, much better than other reported results for backprop. If he is actually getting those times, something very interesting -- perhaps even earth-shaking -- is going on. I can get times like that with Quickprop, but I've never seen claims under 100 epochs for 2-2-1 backprop, with or without momentum. More details on this experiment would be of interest to many of us, I think. -- Scott =========================================================================== Scott E. Fahlman Internet: sef+ at cs.cmu.edu Senior Research Scientist Phone: 412 268-2575 School of Computer Science Fax: 412 681-5739 Carnegie Mellon University Latitude: 40:26:33 N 5000 Forbes Avenue Longitude: 79:56:48 W Pittsburgh, PA 15213 ===========================================================================  From mozer at dendrite.cs.colorado.edu Thu Mar 11 16:40:57 1993 From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer) Date: Thu, 11 Mar 1993 14:40:57 -0700 Subject: NIPS*93 workshops Message-ID: <199303112140.AA23316@neuron.cs.colorado.edu> CALL FOR PROPOSALS NIPS*93 Post-Conference Workshops December 3 and 4, 1993 Vail, Colorado Following the regular program of the Neural Information Processing Systems 1993 conference, workshops on current topics in neural information processing will be held on December 3 and 4, 1993, in Vail, Colorado. Proposals by qualified individuals interested in chairing one of these workshops are solicited. Past topics have included: active learning and control; architectural issues; attention; bayesian analysis; benchmarking neural network applications; computational complexity issues; computational neuroscience; fast training techniques; genetic algorithms; music; neural network dynamics; optimization; recurrent nets; rules and connectionist models; self-organization; sensory biophysics; speech; time series prediction; vision; and VLSI and optical implementations. The goal of the workshops is to provide an informal forum for researchers to discuss important issues of current interest. Sessions will meet in the morning and in the afternoon of both days, with free time in between for ongoing individual exchange or outdoor activities. Concrete open and/or controversial issues are encouraged and preferred as workshop topics.
Individuals proposing to chair a workshop will have responsibilities including: arranging short informal presentations by experts working on the topic, moderating or leading the discussion and reporting its high points, findings, and conclusions to the group during evening plenary sessions (the "gong show"), and writing a brief (2 page) summary. Submission Procedure: Interested parties should submit a short proposal for a workshop of interest postmarked by May 22, 1993. (Express mail is *not* necessary. Submissions by electronic mail will also be accepted.) Proposals should include a title, a description of what the workshop is to address and accomplish, and the proposed length of the workshop (one day or two days). It should motivate why the topic is of interest or controversial, why it should be discussed and what the targeted group of participants is. In addition, please send a brief resume of the prospective workshop chair, a list of publications and evidence of scholarship in the field of interest. Mail submissions to: Mike Mozer NIPS*93 Workshops Chair Department of Computer Science University of Colorado Boulder, CO 80309-0430 USA (e-mail: mozer at cs.colorado.edu) Name, mailing address, phone number, fax number, and e-mail net address should be on all submissions. PROPOSALS MUST BE POSTMARKED BY MAY 22, 1993 Please Post  From burrow at gradient.cis.upenn.edu Thu Mar 11 13:04:43 1993 From: burrow at gradient.cis.upenn.edu (Thomas Fontaine) Date: Thu, 11 Mar 93 13:04:43 EST Subject: Preprint: Recognizing Handprinted Digit Strings Message-ID: <9303111804.AA12711@gradient.cis.upenn.edu> ************** PLEASE DO NOT FORWARD TO OTHER NEWSGROUPS ************* The following paper, to be presented at the Fifteenth Annual Meeting of the Cognitive Science Society (June 1993), has been placed in the neuroprose archives at Ohio State University: RECOGNIZING HANDPRINTED DIGIT STRINGS: A HYBRID CONNECTIONIST/PROCEDURAL APPROACH Thomas Fontaine and Lokendra Shastri Computer and Information Science Department 200 South 33rd Street University of Pennsylvania Philadelphia, PA 19104-6389 We describe an alternative approach to handprinted word recognition using a hybrid of procedural and connectionist techniques. We utilize two connectionist components: one to concurrently make recognition and segmentation hypotheses, and another to perform refined recognition of segmented characters. Both networks are governed by a procedural controller which incorporates systematic domain knowledge and procedural algorithms to guide recognition. We employ an approach wherein an image is processed over time by a spatiotemporal connectionist network. The scheme offers several attractive features including shift-invariance and retention of local spatial relationships along the dimension being temporalized, a reduction in the number of free parameters, and the ability to process arbitrarily long images. Recognition results on a set of real-world isolated ZIP code digits are comparable to the best reported to date, with a 96.0\% recognition rate and a rate of 99.0\% when 9.5\% of the images are rejected.} ***************** How to obtain a copy of the report ***************** I'm sorry, but hardcopies are not available. 
To obtain via anonymous ftp: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get fontaine.wordrec.ps.Z ftp> quit unix> uncompress fontaine.wordrec.ps.Z unix> lpr fontaine.wordrec.ps (or however you print Postscript)  From giles at research.nj.nec.com Thu Mar 11 08:52:27 1993 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 11 Mar 93 08:52:27 EST Subject: Reprint: Rule Refinement with Recurrent Neural Networks Message-ID: <9303111352.AA08333@fuzzy> The following reprint is available via the NEC Research Institute ftp archive external.nj.nec.com. Instructions for retrieval from the archive follow the summary. ---------------------------------------------------------------------------------- "Rule Refinement with Recurrent Neural Networks" C. Lee Giles(a,b) and Christian W. Omlin(a,c) (a) NEC Research Institute, 4 Independence Way, Princeton, NJ 08540 (b) Institute for Advanced Computer Studies, U. of Maryland, College Park, MD 20742 (c) Computer Science Department, Rensselaer Polytechnic Institute, Troy, NY 12180 ABSTRACT Recurrent neural networks can be trained to behave like deterministic finite-state automata (DFA's) and methods have been developed for extracting grammatical rules from trained networks. Using a simple method for inserting prior knowledge of a subset of the DFA state transitions into recurrent neural networks, we show that recurrent neural networks are able to perform rule refinement. The results from training a recurrent neural network to recognize a known non-trivial, randomly generated regular grammar show that not only do the networks preserve correct prior knowledge, but that they are able to correct through training inserted prior knowledge which was wrong. (By wrong, we mean that the inserted rules were not the ones in the randomly generated grammar.) ------------------------------------------------------------------------------------- FTP INSTRUCTIONS unix> ftp external.nj.nec.com (138.15.10.100) Name: anonymous Password: (your_userid at your_site) ftp> cd pub/giles/papers ftp> binary ftp> get rule_refinement.ps.Z ftp> quit unix> uncompress rule_refinement.ps.Z ---------------------------------------------------------------------------------------- -- C. Lee Giles / NEC Research Institute / 4 Independence Way Princeton, NJ 08540 / 609-951-2642 / Fax 2482 ==  From rubio at hal.ugr.es Thu Mar 11 12:20:29 1993 From: rubio at hal.ugr.es (Antonio J. Rubio Ayuso) Date: Thu, 11 Mar 93 17:20:29 GMT Subject: No subject Message-ID: <9303111720.AA15547@hal.ugr.es> LAST Announcement: NATO Advanced Study Institute (Deadline: April 1, 1993) ------------------------------------------------------------------------------ NEW ADVANCES and TRENDS in SPEECH RECOGNITION and CODING 28 June-10 July 1993. Bubion (Granada), SPAIN. Institute Director: Dr. Antonio Rubio-Ayuso, Dept. de Electronica. Facultad de Ciencias. Universidad de Granada. E-18071 GRANADA, SPAIN. tel. 34-58-243193 FAX. 34-58-243230 e-mail ASI at hal.ugr.es Organizing Committee: Dr. Jean-Paul Haton, CRIN / INRIA, France. Dr. Pietro Laface, Politecnico di Torino, Italy. Dr. Renato De Mori, McGill University, Canada. OBJECTIVES, AGENDA and PARTICIPANTS A series of most successful ASIs on Speech Science (the last ones in Bonas, France; Bad Windsheim, Germany; Cetraro, Italy) created a fruitful and stimulating environment to learn about scientific methods, exchange of results, and discussions of new ideas. 
The goal of this ASI is to congregate the most important experts on Speech Recognition and Coding to discuss and disseminate their most recent findings, in order to spread them among the European and American Centers of Excellence, as well as among a good selection of qualified students. A two-week programme is planned with invited tutorial lectures, and contributed papers by selected students (maximum 65). The proceedings of the ASI will be published by Springer-Verlag. TOPICS The Institute will focus on the new methodologies and techniques that have been recently developed in the speech communication area. Main topics of interest will be: -Low Delay and Wideband Speech Coding. -Very Low bit Rate and Half-Rate Speech Coding. -Speech coding over noisy channels. -Continuous Speech and Isolated word Recognition. -Neural Networks for Speech Recognition and Coding. -Language Modeling. -Speech Analysis, Synthesis and data bases. Any other related topic will also be considered. INVITED LECTURERS A. Gersho (UCSB, USA): "Speech coding." B. H. Juang (AT&T, USA): "Statistical and discriminative methods for speech recognition - from design objectives to implementation." J. Bridle (RSRU, UK): "Neural networks." G. Chollet (Paris Telecom): "Evaluation of ASR systems, algorithms and databases." E. Vidal (UPV, Spain): "Syntactic learning techniques in language modeling and acoustic-phonetic decoding." J. P. Adoul (U. Sherbrooke, Canada): "Lattice and trellis coded quantizations for efficient coding of speech." R. De Mori (McGill Univ, Canada): "Language models based on stochastic grammars and their use in automatic speech recognition." R. Pieraccini (AT&T, USA): "Speech understanding and dialog, a stochastic approach." F. Jelinek (IBM, USA): "New approaches to language modeling for speech recognition." L. Rabiner (AT&T, USA): "Applications of Voice Processing Technology in Telecommunications." N. Farvardin (UMD, USA): "Speech coding over noisy channels." J. P. Haton (CRIN/INRIA, France): "Methods for the automatic recognition of speech in adverse conditions." R. Schwartz (BBN, USA): "Search algorithms of real-time recognition with high accuracy." H. Niemann (Erlangen-Nurnberg Univ., Germany): "Statistical Modeling of segmental and suprasegmental information." I. Trancoso (INESC, Portugal): "An overview of recent advances on CELP." C. H. Lee (AT&T, USA): "Adaptive learning for acoustic and language modeling." P. Laface (Poli. Torino, Italy) H. Ney (Phillips, Germany): "Search Strategies for Very Large Vocabulary, Continuous Speech Recognition." A. Waibel (CMU, USA): "JANUS, A speech translation system." ATTENDANCE, COSTS and FUNDING Participation from as many NATO countries as possible is desired. Additionally, prospective participants from Greece, Portugal and Turkey are especially encouraged to apply.A small number of students from non-NATO countries may be accepted. The estimated cost of hotel accommodation and meals for the two-week duration of the ASI is US$1,000. A limited number of scholarships are available for academic participants from NATO countries. In the case of industrial or commercial participants a US$500 fee will be charged. Participants are responsible for their own health or accident insurance. A deposit of US$200 is required for living expenses. This deposit is non-refundable in the case of late cancelation (after 10 June, 1993). 
The NATO Institute will be held in the hospitable village of Bubion (Granada), set on Las Alpujarras, a peaceful mountain region with incomparable landscapes. HOW TO REGISTER Each application should include: 1) Full address (including e-mail and FAX). 2) An abstract of the proposed contribution (1-3 pages). 3) Curriculum vitae of the prospective participant (including birthdate). 4) Indication of whether the attendance to the ASI is conditioned to obtaining a NATO grant. For junior applicants, support letters from senior members of the professional speech community would strengthen the application. This application must be sent to the Institute Director address mentioned above (before 1 April 1993). SCHEDULE Submission of proposals (1-3 pages): To be received by 1 April 1993. Notification of acceptance: To be mailed out on 1 May 1993. Submission of the paper: To be received by 10 June 1993.  From plaut+ at cmu.edu Fri Mar 12 16:56:30 1993 From: plaut+ at cmu.edu (David Plaut) Date: Fri, 12 Mar 1993 16:56:30 -0500 Subject: Preprint: Generalization with Componential Attractors Message-ID: <14183.731973390@crab.psy.cmu.edu> ******************* PLEASE DO NOT FORWARD TO OTHER BBOARDS ******************* The following preprint is available via local anonymous ftp (*not* from neuroprose). Instructions on how to retrieve it are at the end of this messages. The paper will appear in this year's Cognitive Science Society Conference Proceedings. -Dave Generalization with Componential Attractors: Word and Nonwords Reading in an Attractor Network David C. Plaut and James L. McClelland Department of Psychology Carnegie Mellon University Networks that learn to make familiar activity patterns into stable attractors have proven useful in accounting for many aspects of normal and impaired cognition. However, their ability to generalize is questionable, particularly in quasiregular tasks that involve both regularities and exceptions, such as word reading. We trained an attractor network to pronounce virtually all of a large corpus of monosyllabic words, including both regular and exception words. When tested on the lists of pronounceable nonwords used in several empirical studies, its accuracy was closely comparable to that of human subjects. The network generalizes because the attractors it developed for regular words are componential---they have substructure that reflects common sublexical correspondences between orthography and phonology. This componentiality is faciliated by the use of orthographic and phonological representations that make explicit the structured relationship between written and spoken words. Furthermore, the componential attractors for regular words coexist with much less componential attractors for exception words. These results demonstrate that attractors can support effective generalization, challenging ``dual-route'' assumptions that multiple, independent mechanisms are required for quasiregular tasks. unix> ftp hydra.psy.cmu.edu # or 128.2.248.152 Name: anonymous Password: ftp> cd pub/plaut ftp> binary ftp> get plaut.componential.cogsci93.ps.Z ftp> quit unix> zcat plaut.componential.cogsci93.ps.Z | lpr - I'd like to thank Jordan Pollack for not maintaining this archive.... 
=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= David Plaut plaut+ at cmu.edu Department of Psychology 412/268-5145 Carnegie Mellon University FAX: 412/268-5060 Pittsburgh, PA 15213-3890 =-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=  From dhw at santafe.edu Fri Mar 12 17:37:55 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Fri, 12 Mar 93 15:37:55 MST Subject: new paper Message-ID: <9303122237.AA05085@zia> *** DO NOT FORWARD TO ANY OTHER LISTS *** The following file has been placed in connectionists, under the name wolpert.ex_learning.ps.Z. AN INVESTIGATION OF EXHAUSTIVE LEARNING by David H. Wolpert, Alan Lapedes Abstract: An extended version of the Bayesian formalism is reviewed. We use this formalism to investigate the "exhaustive learning" scenario, first introduced by Schwartz et al. This scenario is perhaps the simplest possible supervised learning scenario. It is identical to the noise-free "Gibbs learning" scenario studied recently by Haussler et al., and can also be viewed as the zero-temperature limit of the "statistical mechanics" work of Tishby et al. We prove that the crucial "self-averaging" assumption invoked in the conventional analysis of exhaustive learning does not hold in the simplest non-trivial implementation of exhaustive learning. Therefore the central result of that analysis, that generalization accuracy necessarily rises as training set size is increased, is not generic. More importantly, we show that if one (reasonably) changes the definition of "generalization accuracy", to reflect only the error on inputs outside of the training set, then this central result does not hold even when the self-averaging assumption is valid, and even in the limit of an infinite input space. This implies that the central result is a reflection of the following simple phenomenon: if you add an input/output pair to the training set, the number of distinct input values on which you know exactly how you should guess has either increased or stayed the same, and therefore your generalization accuracy will either increase or stay the same. In addition to using the extended Bayesian formalism to analyze the central result of the conventional analysis of exhaustive learning, we also use it to extend the results of exhaustive learning to issues not considered in previous analyses of the subject. To retrieve this file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get wolpert.ex_learning.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for wolpert.ex_learning.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. unix> uncompress wolpert.ex_learning.ps.Z unix> lpr wolpert.ex_learning.ps (or however you print postscript) Thanks to Jordan Pollack for maintaining this list.
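A toy numerical illustration of the distinction drawn in the abstract above (a sketch, not taken from the paper): take the class of all Boolean functions on a small input space in place of a network, so that exhaustive learning reduces to fitting the training pairs exactly and guessing at random elsewhere. Conventional generalization accuracy, measured over all inputs, then rises with training set size, while accuracy restricted to inputs outside the training set stays at chance.

import numpy as np

rng = np.random.default_rng(0)
n_inputs = 2 ** 8                        # size of the input space
target = rng.integers(0, 2, n_inputs)    # a random target function

for m in (16, 64, 128, 192):             # training set sizes
    train = rng.choice(n_inputs, size=m, replace=False)
    off_train = np.setdiff1d(np.arange(n_inputs), train)
    acc_all, acc_off = [], []
    for _ in range(200):                 # sample hypotheses consistent with the data
        h = rng.integers(0, 2, n_inputs) # a random Boolean function ...
        h[train] = target[train]         # ... forced to agree on the training set
        acc_all.append(np.mean(h == target))
        acc_off.append(np.mean(h[off_train] == target[off_train]))
    print(m, round(float(np.mean(acc_all)), 3), round(float(np.mean(acc_off)), 3))
# Accuracy over all inputs climbs roughly as (m + (2^8 - m)/2) / 2^8,
# while off-training-set accuracy stays near 0.5 for every m.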
From dhw at santafe.edu Fri Mar 12 17:23:38 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Fri, 12 Mar 93 15:23:38 MST Subject: No subject Message-ID: <9303122223.AA05058@zia> Drs. Kak and Almeida talk about training issues concerning the XOR problem. One should be careful not to focus too heavily on the XOR problem. Two points which I believe have been made previously on connectionists bear repeating: Consider the n-dimensional version of XOR, namely n-bit parity. 1) All "local" algorithms (e.g., weighted nearest neighbor) perform badly on parity. More precisely, as the number of training examples goes up, their off-training set generalization *degrades*, asymptoting at 100% errors. This is also true for backprop run on neural nets, at least for n = 6. 2) There are algorithms which perform *perfectly* (0 training or generalizing errors) for the parity problem. Said algorithms are not designed in any way with parity in mind. In other words, in some senses, for all the problems it causes local algorithms, parity is not "difficult". David Wolpert  From hinton at cs.toronto.edu Mon Mar 15 13:00:18 1993 From: hinton at cs.toronto.edu (Geoffrey Hinton) Date: Mon, 15 Mar 1993 13:00:18 -0500 Subject: No subject In-Reply-To: Your message of Fri, 12 Mar 93 17:23:38 -0500. Message-ID: <93Mar15.130033edt.567@neuron.ai.toronto.edu> Wolpert says: >All "local" algorithms (e.g., weighted nearest neighbor) perform >badly on parity. More precisely, as the number of training examples >goes up, their off-training set generalization *degrades*, asymptoting >at 100% errors. This is also true for backprop run on neural nets, at >least for n = 6. This seems very plausible but it's not quite right. First, consider K nearest neighbors, where a validation set is used to pick K and all K neighbors get to vote on the answer. It seems fair to call this a "local" algorithm.
If the training data contains a fraction p of all the possible cases of n-bit parity, then each novel test case will have about pn neighbors in the training set that differ by one bit. It will also have about pn(n-1)/2 training neighbors that differ by 2 bits, and so on. So for reasonably large n we will get correct performance by picking K so that we tend to get all the training cases that differ by 1 bit and most of the far more numerous training cases that differ by 2 bits. For very small values of p we need to consider more distant neighbors to get this effect to work, and this requires larger values of n. >>> This seems very plausible but is not quite right. First off, as I stated in my posting, I think we've been here before, about a year ago ... my main point in my posting was that how backprop does on XOR is only of historical interest. But since the broth has been stirred up again ... One can make a strong argument that using cross-validation, as in Geoff's scheme, is, by definition, non-local. When describing a learning algorithm as "local", one (or at least I) implicitly means that the guess it makes in response to a novel input test value depends only on nearest neighbors in the training set. K nearest neighbor for fixed (small) K is a local learning algorithm, as I stated in my original posting. Geoff wishes to claim that the learning algorithm which chooses K = K* via cross-validation and then uses K* nearest neighbor is also local. Such a learning algorithm is manifestly global however - on average, changes in the training set far away will affect (perhaps drastically) how the algorithm responds to a novel test input, since they will affect calculated cross-validation errors, and therefore will affect choice of K*. In short, one should not be misled by concentrating on the fact that an overall algorithm has *as one of its parts*, an algorithm which, if used by itself, is local (namely, K* nearest neighbor). It is the *entire* algorithm which is clearly the primary object of interest. And if that algorithm uses cross-validation, it is not "local" in the (reasonable) way it's defined above. Indeed, one might argue that it is precisely this global character which makes cross-validation such a useful heuristic - it allows information from the entire training set to affect one's guess, and it does so w/o resulting in a fit going through all those points in the training set, with all the attendant "overtraining" dangers of such a fit. On the other hand, if K had been set beforehand, rather than via cross-validation, and if for some reason one had set K particularly high, then, as Geoff correctly points out, the algorithm wouldn't perform poorly on parity at all. Moreover, consider changing the definition of "local", along the lines of "a learning algorithm is local if, on average, the pairs in the set {single input-output pairs from the training set such that changing any single one of those pairs has a big effect on the guess} all lie close to the test input value", with some appropriate definition of "big". For such a definition, cross-validation-based K nearest neighbor might be local. (The idea is that for a big enough training set, changing a single point far away will have little effect, on average, on calculated cross-validation errors. On the other hand, for K* large, changing a single nearby point will also have little effect, so it's not clear that this modified definition of "local" will save Hinton's argument.)
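A minimal sketch (in Python; not part of either posting, and all names and parameter choices below are illustrative) of the algorithm under discussion: K-nearest-neighbour on n-bit parity, with Hamming distance as the metric and leave-one-out cross-validation standing in for the validation set that picks K. With K fixed and small, off-training-set guesses are dominated by distance-1 neighbours of opposite parity, as Wolpert describes; the cross-validated K tends to be large enough to include the more numerous distance-2 neighbours, as Hinton describes.

import itertools, random
from collections import Counter

def parity(x):
    # target output: 1 if an odd number of input bits are set, else 0
    return sum(x) % 2

def knn_guess(x, train, k):
    # majority vote of the k training points nearest in Hamming distance
    nearest = sorted(train, key=lambda t: sum(a != b for a, b in zip(x, t[0])))[:k]
    return Counter(label for _, label in nearest).most_common(1)[0][0]

def loo_error(train, k):
    # leave-one-out cross-validation error of k-nearest-neighbour on the training set
    wrong = sum(knn_guess(x, train[:i] + train[i + 1:], k) != y
                for i, (x, y) in enumerate(train))
    return wrong / len(train)

n = 8
all_inputs = list(itertools.product([0, 1], repeat=n))
train_inputs = set(random.sample(all_inputs, len(all_inputs) // 2))   # p = 0.5
train = [(x, parity(x)) for x in train_inputs]
test = [x for x in all_inputs if x not in train_inputs]

for k in (1, 3):   # fixed, small K: off-training-set error close to 100%
    err = sum(knn_guess(x, train, k) != parity(x) for x in test) / len(test)
    print("fixed K =", k, "off-training-set error =", err)

k_star = min(range(1, 40, 2), key=lambda k: loo_error(train, k))      # cross-validated K
err = sum(knn_guess(x, train, k_star) != parity(x) for x in test) / len(test)
print("cross-validated K* =", k_star, "off-training-set error =", err)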
I didn't feel any of this was worth getting into in detail in my original posting. In particular, I did not discuss cross-validation or other such schemes, because the people to whom I was responding did not discuss cross-validation. For the record though, in that posting I was thinking of K nearest neighbor where K is fixed, and on the order of n. For such a scenario, everything I said is true, as Geoff's own reasoning shows. Geoff goes on >>> Second, backprop generalizes parity pretty well (on NOVEL test cases) provided you give it a chance. If we use the "standard" net with n hidden units (we can get by with less) we have (n+1)^2 connections. To get several bits of constraint per connection we need p 2^n >> (n+1)^2 where p is the fraction of all possible cases that are used for training. For n=6 there are only 64 possible cases and we use 49 connections so this isnt possible. For n=10, if we train on 512 cases we only get around 3 or 4% errors on the remaining cases. Of course training is slow for 10 bit parity: About 2000 epochs even with adaptive learning rates on each connection (and probably a million epochs if you choose a bad enough version of backprop.) >>> By and large, I agree. However I think this misses the point. XOR is parity for low n, not high n, and my comments were based on extending XOR (since that's what the people I was responding to were talking about.) Accordingly, it's the low n case which was of interest, and as Geoff agrees, backprop dies a gruesome death for low n. Again, I didn't want to get into all this in my original posting. However, while we're on the subject of cross-validation and the like, I'd like to direct the attention of the connectionist community to an article in the Feb. '93 ML journal by Schaffer which suggests that cross-validation fails as often as it succeeds, on average. So it is by no means a panacea. Formal arguments on this topic can be found in a Complex Systems paper of mine from last year, and also in a new paper, directly addressing Schaffer's experiments, which I plan to post in a week or so. David Wolpert  From lpratt at franklinite.Mines.Colorado.EDU Mon Mar 15 18:42:47 1993 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt) Date: Mon, 15 Mar 93 16:42:47 -0700 Subject: The spring 1993 Colorado Machine Learning Colloquium Series Message-ID: <9303152342.AA01804@franklinite.Mines.Colorado.EDU> THE CSM DEPARTMENTS OF MATHEMATICAL AND COMPUTER SCIENCES, GEOPHYSICS, DIVISION OF ENGINEERING, AND CRIS* announce: The spring, 1993 Colorado Machine Learning Colloquium Series Room 110, Stratton Hall, on the CSM campus in Golden, Colorado All talks at 5:30 pm. Machine learning and neural networks are increasingly important areas of applied computer science and engineering research. These methods allow systems to improve their performance over time with reduced input from a human operator. In the last few years, these methods have demonstrated their usefulness in a wide variety of problems. At CSM, an interdisciplinary atmosphere has fostered several projects that use these technologies for problems in geophysics, materials science, and electrical engineering. In Colorado as a whole, exploration of machine learning and neural networks is widespread. This colloquium series fosters the development of these technologies through presentations of recent applied and basic research. At least a third of each talk will be accessible to a general scientific audience. 
Schedule: Tuesday, March 16: Aaron Gordon, CSM: Dynamic Recurrent Neural Networks Tuesday, March 23: Darrell Whitley, CSU Ft. Collins: Executable Models of Genetic Algorithms Tuesday March 30: Chidambar Ganesh, CSM: Some Experiences with Data Preprocessing in Neural Network applications Monday April 5: Chuck Anderson, CSU Ft. Collins: Reinforcement Learning and Control Tuesday April 13: Marijke Augusteign, UC Colorado Springs: Solving Classification Problems with Cascade-Correlation Tuesday April 20: John Steele, CSM: Predicting Degree Of Cure Of Epoxy Resins Using Dielectric Sensor Data and Artificial Neural Networks Thursday April 22: Michael Mozer, CU Boulder: Neural network approaches to formal language induction Open to the Public, Refreshments to be Served For more information (including background readings prior to talks), contact Dr. L. Y. Pratt, CSM Dept. of Mathematical and Computer Sciences, lpratt at mines.colorado.edu, (303) 273-3878 *The mission of the proposed new Center for Robotics and Intelligent Systems (CRIS) at the Colorado School of Mines (CSM) is to facilitate the application of advanced computer science research in neural networks, robotics, and artificial intelligence to specific problem areas of concern at CSM. By bringing diverse interdisciplinary expertise to bear on problems in materials, natural resources, the environment, energy, transportation, information, and communications, the center will facilitate the development of novel computational approaches to difficult problems. When fully operational, the center's activities will include: 1) sponsoring colloquia, 2) publishing a technical report series, 3) aiding researchers in the pursuit of government and private grants, 4) promoting education a) by coordinating CSM courses related to robotics, neural networks and artificial intelligence, and b) by maintaining minors both at the undergraduate and graduate levels, 5) promoting research, and 6) supporting industrial interaction.  From lpratt at franklinite.Mines.Colorado.EDU Mon Mar 15 18:52:28 1993 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt) Date: Mon, 15 Mar 93 16:52:28 -0700 Subject: Darrell Whitley to speak in Colorado Machine Learning series Message-ID: <9303152352.AA01846@franklinite.Mines.Colorado.EDU> The spring, 1993 Colorado Machine Learning Colloquium Series presents: Dr. Darrell Whitley Department of Computer Science Colorado State University, Fort Collins EXECUTABLE MODELS OF GENETIC ALGORITHMS Tuesday March 23, 1993 Room 110, Stratton Hall, on the CSM campus 5:30 pm Abstract A set of executable equations are defined which provide an exact model of a simple genetic algorithm. The equations assume an infinitely large population and require the enumeration of all points in the search space.The predictive behavior of the executable equations is examined in the context of deceptive functions. In addition, these equations can be used to study the computational behavior of parallel genetic algorithms. Suggested background reading: A Genetic Algorithm Tutorial, Darrell Whitley. Open to the Public Refreshments to be served at 5:00pm, prior to talk For more information (including background readings prior to talks, and a schedule of all talks in this series), contact: L. Y. Pratt, CSM Dept. 
of Mathematical and Computer Sciences, lpratt at mines.colorado.edu, (303) 273-3878 Sponsored by: THE CSM DEPARTMENTS OF MATHEMATICAL AND COMPUTER SCIENCES, GEOPHYSICS, DIVISION OF ENGINEERING, AND CRIS (The Center for Robotics and Intelligent Systems at the Colorado School of Mines)  From lba at sara.inesc.pt Mon Mar 15 12:49:19 1993 From: lba at sara.inesc.pt (Luis B. Almeida) Date: Mon, 15 Mar 93 18:49:19 +0100 Subject: Training XOR with BP Message-ID: <9303151749.AA20880@sara.inesc.pt> I must say I am a bit surprised myself, with the XOR discussion, but in the opposite sense: for me, the XOR has always converged rather fast. Let me be more specific: I already had the idea in my mind, from previous informal tests, that the XOR usually converged in much less than 100 epochs (with a relatively large percentage of runs that fell into "local minima" - more about this below). The difference with other people's results may come from implementation details, which I will give below. The experiments I reported were made with a BP simulator developed here at Inesc, which has a lot of facilities (adaptive step sizes, cross-validation, optional entropy error, weight decay, momentum, etc.). For these experiments I've set the parameters so that all these features were disabled. And to be sure, I've just been looking into the essential parts of the code, and didn't find any bugs - the thing appears to be really doing plain BP, without any tricks. So, here are the details: Problem: XOR (2 inputs) No. of training patterns: 4 Input logical levels: -1 for FALSE, 1 for TRUE Target output logical levels: -.9 for FALSE, .9 for TRUE Network: 2 inputs, 2 hidden, 1 output Interconnection: Full between successive layers, no direct links from inputs to output Unit non-linearity: Scaled arctangent, i.e. 2/Pi * arctan(s), where "s" is the input sum Learning method: Backpropagation, batch mode, no momentum Step size (learning rate): 1 Cost function: Squared error, summed over the 4 training patterns Weight initialization: Random, uniform in [-1,1] Stopping criterion: When the sign of the output is correct for all 4 training patterns Why did I choose these parameters? It is relatively well known that symmetrical sigmoids (e.g. varying between -1 and 1) give faster learning than unsymmetrical ones (e.g. varying between 0 and 1) [Yann Le Cun had a poster on the reasons for that, in one of the NIPS conferences, two or three years ago]. On the other hand, I thought that "arctan" probably learned faster than "tanh", because of its slower saturation, but I never ran any extensive tests on that - and see below, about results with "tanh(s/2)". From J.R.Chen at durham.ac.uk Tue Mar 16 13:13:01 1993 From: J.R.Chen at durham.ac.uk (J.R.Chen@durham.ac.uk) Date: Tue, 16 Mar 93 18:13:01 GMT Subject: Modelling of Nonlinear Systems Message-ID: IS INPUT_OUTPUT EQUATION A UNIVERSAL MODEL OF NONLINEAR SYSTEM ? About two weeks ago, Kenji Doya announced a paper "Universality of Fully-Connected Recurrent Neural Networks" on this mail list, which showed that if all the state variables are available then any discrete or continuous-time dynamical system can be modeled by a fully-connected discrete or continuous-time recurrent network respectively, provide the network consists of enough units. This is interesting. However in the real situation, it is more likely that the number of observable variables is less than the degree of freedom of the dynamical system. It could be that only one output signal is available for measurement. 
So the question is: if only an input signal and one output signal are available from a dynamical system, is it possible to reconstruct the original dynamics of the system? This problem has been well studied for linear systems, and the theory is well established. For nonlinear systems, it seems to be still a partially open question. There have been a lot of publications on using recurrent neural networks, MLP nets or whatever other nets to model nonlinear time-series or for nonlinear system identification. This kind of approach is based on an assumption that a nonlinear system can be modelled by an input-output recursive equation just like a linear system can be modelled by an ARMA model. A typical argument goes like this: "Because the n variables {X_k(t)} satisfy a set of first-order differential equations, successive differentiation in time reduces the problem to a single (generally highly nonlinear) differential equation of nth order for one of these variables". One can say something similar for discrete systems. It sounds quite straightforward, and in most specific cases it does work that way. However, this is obviously not a rigorous proof. To my knowledge, the most rigorous results on this problem are presented in F. Takens [1] and I.J. Leontaritis and S.A. Billings [2]. [1] mainly discusses autonomous systems and is widely referenced. It is the theoretical foundation of almost all the work on chaotic time-series modelling or prediction. In [2] it has been proved under some conditions that a discrete nonlinear system can be represented by an input-output recursive equation in a restricted region of operation around the zero equilibrium point. I don't know whether any global results exist. If not, the question would be whether this is mainly a difficulty of mathematics, or whether it is more fundamental. One might speculate that for a generic nonlinear dynamical system there might be no unique input-output recursive equation representation; it may need a set of equations for different operating regions in the state space. If this is true, the modelling of nonlinear dynamical systems with input-output equations has to be based on an on-line approach. The parameters have to be updated quickly enough to follow the movement of the operating point. The off-line modelling or identification may have convergence problems. [1] F. Takens "Detecting strange attractors in turbulence" in Springer Lecture Notes in Mathematics Vol.893 p366 edited by D.A.Rand and L.S.Young 1981 [2] I.J.Leontaritis and S.A.Billings "Input-output parametric models for non-linear systems" Part-1 and Part-2 INT.J.CONTROL. Vol.41 No.2 pp303-328 and pp329-344. 1985. J R Chen SECS University of Durham, UK  From kak at max.ee.lsu.edu Tue Mar 16 15:00:04 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Tue, 16 Mar 93 14:00:04 CST Subject: The New Training Alg for Feedforward Networks Message-ID: <9303162000.AA20136@max.ee.lsu.edu> Drs. Almeida and Fahlman have commented on how the attribution that backpropagation takes several thousand steps for the XOR problem (whereas my new algorithm takes only 8 steps) may not be fair. This attribution was not supposed to refer to the best BP algorithm for the problem; it was taken from page 332 of PDP, Vol. 1, and it was meant to illustrate the differences in the two algorithms. As has been posted by others here, the new algorithm seems to give a speedup of 100 to 1000 for neurons in the range of 50 to 100. Certainly further tests are called for.
The introduction of a learning rate in the new algorithm and learning with respect to an error criterion improve the performance of the new algorithm. These modifications will be described in a forthcoming report. -Subhash Kak  From kuh at spectra.eng.hawaii.edu Tue Mar 16 10:12:48 1993 From: kuh at spectra.eng.hawaii.edu (Anthony Kuh) Date: Tue, 16 Mar 93 10:12:48 HST Subject: NOLTA: call for papers Message-ID: <9303162012.AA18434@spectra.eng.hawaii.edu> Call for Papers 1993 International Symposium on Nonlinear Theory and its Applications Sheraton Waikiki Hotel, HAWAII December 5 - 9, 1993 The 1993 International Symposium on Nonlinear Theory and its Applications(NOLTA'93) will be held at the Sheraton Waikiki Hotel, Hawaii, on Dec. 5 - 9, 1993. The conference is open to all the world. Papers describing original work in all aspects of Nonlinear Theory and its Applications are invited. Possible topics include, but are not limited to the following: Circuits and Systems Neural Networks Chaos Dynamics Cellular Neural Networks Fractals Bifurcation Biocybernetics Soliton Oscillations Reactive Phenomena Fuzzy Numerical Methods Pattern Generation Information Dynamics Self-Validating Numerics Time Series Analysis Chua's Circuits Chemistry and Physics Mechanics Fluid Mechanics Acoustics Control Optics Circuit Simulation Communication Economics Digital/analog VLSI circuits Image Processing Power Electronics Power Systems Other Related Areas Organizers: Research Society of Nonlinear Theory and its Applications, IEICE Dept. of Elect. Engr., Univ. of Hawaii In cooperation with: IEEE Hawaii Section IEEE Circuits and Systems Society IEEE Neural Networks Council International Neural Network Society IEEE CAS Technical Committee on Nonlinear Circuits and Systems Technical Group of Nonlinear Problems, IEICE Technical Group of Circuits and Systems, IEICE Authors are invited to submit three copies of a summary of 2 or 3 pages to: Technical Program Chairman Prof. Shun-ichi Amari Faculty of Engr., University of Tokyo, Bunkyo-ku, Tokyo, 113 Japan Telefax: +81-3-5689-5752 e-mail: amari at sat.t.u-tokyo.ac.jp The summary should include the author's name(s), affiliation(s) and complete return address(es). The authors should also indicate one or more of the above categories that best describe the topic of the paper. Deadline for submission of summaries: August 15, 1993 Notification of acceptance: Before September 15, 1993 Deadline for camera-ready manuscripts: November 1, 1993 HONORARY CHAIRMEN Kazuo Horiuchi (Waseda Univ.) Masao Iri (Univ. of Tokyo) CO-CHAIRMEN Shun-ichi Amari (Univ. of Tokyo) Anthony Kuh (Univ. of Hawaii) Shinsaku Mori (Keio Univ.) TECHNICAL PROGRAM CHAIRMAN Shun-ichi Amari (Univ. of Tokyo) PUBLICITY Shinsaku Mori (Keio Univ.) LOCAL ARRANGEMENT Anthony Kuh (Dept. of Electrical Engr., Univ. of Hawaii, Manoa, Honolulu, Hawaii, 96822 U.S.A. Phone: +1-808-956-7527 Telefax. +1-808-956-3427 e-mail: kuh at wiliki.eng.hawaii.edu) ADVISORY L. O. Chua (U.C.Berkeley) R. Eberhart (Research Triangle Inst.) A. Fettweis (Ruhr Univ.) L. Fortuna (Univ. of Catania) W.J. Freeman (U.C.Berkeley) M. Hasler (Swiss Fed. Inst. of Tech. Lausanne) Tatsuo Higuchi (Tohoku Univ.) Kazumasa Hirai (Kobe Univ.) Ryogo Hirota (Waseda Univ.) E.S. Kuh (U.C.Berkeley) Hiroshi Kawakami (Tokushima Univ.) Tosiro Koga (Kyushu Univ.) Tohru Kohda (Kyushu Univ.) Masami Kuramitsu(Kyoto Univ.) R.W. Liu (Univ. of Notre Dame) Tadashi Matsumoto (Fukui Univ.) A.I. Mees (Univ. of Western Australia ) Michitada Morisue (Saitama Univ.) 
Tomomasa Nagashima (Muroran Inst. Tech.) Tetsuo Nishi (Kyushu Univ.) J.A. Nossek (Technical University Munich) Kohshi Okumura (Kyoto Univ.) T. Roska (Hungarian Academy of Sciences) Junkichi Satsuma (Univ. of Tokyo) I.W. Sandberg (Univ. of Texas at Austin) Chikara Sato (Keio Univ.) Yasuji Sawada (Tohoku Univ.) V.V. Shakhgildian (Russian Engr. Academy) Yoshisuke Ueda (Kyoto Univ.) Akio Ushida (Tokushima Univ.) J. Vandewalle (Catholic Univ. of Leuven, Heverlee) P. Werbos (National Science Foundation) A.N. Willson, Jr (U.C.L.A.) Shuji Yoshizawa (Univ. of Tokyo) A.H.Zemanian (State Univ. of NY at Stony Brook) SECRETARIATS Shin'ichi Oishi (Waseda Univ.) Mamoru Tanaka (Sophia Univ.) INFORMATION CONTACT Mamoru Tanaka Dept. of Electrical and Electronics Eng., Sophia Univ. Kioicho 7-1, Chiyoda-ku, Tokyo 102 JAPAN Fax: +81-3-3238-3321 e-mail: tanaka at mamoru.ee.sophia.ac.jp  From mm at santafe.edu Wed Mar 17 17:51:39 1993 From: mm at santafe.edu (mm@santafe.edu) Date: Wed, 17 Mar 93 15:51:39 MST Subject: paper available Message-ID: <9303172251.AA16722@lyra> Though not about connectionist networks, the following TR may be of interest to readers of this list: ----------------------------- The following paper is available by public ftp. Revisiting the Edge of Chaos: Evolving Cellular Automata to Perform Computations Melanie Mitchell Peter T. Hraber James P. Crutchfield Santa Fe Institute Santa Fe Institute University of California, Berkeley Santa Fe Institute Working Paper 93-03-014 Abstract We present results from an experiment similar to one performed by Packard (1988), in which a genetic algorithm is used to evolve cellular automata (CA) to perform a particular computational task. Packard examined the frequency of evolved CA rules as a function of Langton's lambda parameter (Langton, 1990), and interpreted the results of his experiment as giving evidence for the following two hypotheses: (1) CA rules able to perform complex computations are most likely to be found near ``critical'' lambda values, which have been claimed to correlate with a phase transition between ordered and chaotic behavioral regimes for CA; (2) When CA rules are evolved to perform a complex computation, evolution will tend to select rules with lambda values close to the critical values. Our experiment produced very different results, and we suggest that the interpretation of the original results is not correct. We also review and discuss issues related to lambda, dynamical-behavior classes, and computation in CA. The main constructive results of our study are identifying the emergence and competition of computational strategies and analyzing the central role of symmetries in an evolutionary system. In particular, we demonstrate how symmetry breaking can impede the evolution toward higher computational capability. To obtain an electronic copy: ftp santafe.edu login: anonymous password: cd /pub/Users/mm binary get rev-edge.ps.Z quit Then at your system: uncompress rev-edge.ps.Z lpr -P rev-edge.ps To obtain a hard copy, send a request to mm at santafe.edu.  From bnglaser at tohu0.weizmann.ac.il Wed Mar 17 08:25:20 1993 From: bnglaser at tohu0.weizmann.ac.il (Daniel Glaser) Date: Wed, 17 Mar 93 15:25:20 +0200 Subject: XOR and BP Message-ID: <9303171325.AA04285@tohu0.weizmann.ac.il> Forgive my ignorance, but isn't back-prop with a learning rate of 1 (see Luis B. Almeida's posting of 15.3.93) doing something quite a lot like random walk ? David Wolpert writes (15.3.93) "how back-prop does on XOR is only of historical interest". 
Is this not because, with XOR, in order to avoid the local minima you HAVE to do a lot more random walking than gradient descending? It is believed that this is not necessary when using back-prop on most interesting problems. Historically, XOR has been a standard-bearer for back-prop, as a simple, intuitive function which a perceptron cannot learn. Could it now appear that the whole technique is tainted by association with this pathological case? Daniel Glaser.  From marshall at cs.unc.edu Thu Mar 18 13:02:11 1993 From: marshall at cs.unc.edu (Jonathan A. Marshall) Date: Thu, 18 Mar 93 13:02:11 -0500 Subject: Paper available: Unsmearing Visual Motion Message-ID: <9303181802.AA10618@marshall.cs.unc.edu> The following paper is available via ftp from the neuroprose archive at Ohio State (instructions for retrieval follow the abstract). -------------------------------------------------------------------------- Unsmearing Visual Motion: Development of Long-Range Horizontal Intrinsic Connections Kevin E. Martin and Jonathan A. Marshall Department of Computer Science, CB 3175, Sitterson Hall University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A. Human vision systems integrate information nonlocally, across long spatial ranges. For example, a moving stimulus appears smeared when viewed briefly (30 ms), yet sharp when viewed for a longer exposure (100 ms) (Burr, 1980). This suggests that visual systems combine information along a trajectory that matches the motion of the stimulus. Our self-organizing neural network model shows how developmental exposure to moving stimuli can direct the formation of horizontal trajectory-specific motion integration pathways that unsmear representations of moving stimuli. These results account for Burr's data and can potentially also model other phenomena, such as visual inertia. (In press; to appear in S.J. Hanson, J.D. Cowan, & C.L. Giles, Eds., Advances in Neural Information Processing Systems, 5. San Mateo, CA: Morgan Kaufmann Publishers, 1993.) -------------------------------------------------------------------------- To get a copy of the paper, do the following: unix> ftp archive.cis.ohio-state.edu (or ftp 128.146.8.52) login: anonymous password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get martin.unsmearing.ps.Z ftp> quit unix> uncompress martin.unsmearing.ps.Z unix> lpr martin.unsmearing.ps If you have trouble printing the file on a Postscript-compatible printer, send me e-mail (marshall at cs.unc.edu) with your postal address, and I'll have a hardcopy mailed to you (may take several weeks for delivery, though). --------------------------------------------------------------------------  From kehagias at eng.auth.gr Thu Mar 18 02:18:56 1993 From: kehagias at eng.auth.gr (Thanos Kehagias) Date: Thu, 18 Mar 93 09:18:56 +0200 Subject: Modelling nonlinear systems Message-ID: <9303180718.AA07252@vergina.eng.auth.gr> Regarding J.R. Chen's paper: I think it is important to define in what sense "modelling" is understood. I have not read the Doya paper, but my guess is that it is an approximation result (rather than exact representation). If it is an approximation result, the sense of approximation (norm or metric used) is important. For instance: in the stochastic context, there is a well known statistical theorem, the Wold theorem, which says that every continuous valued, finite second moment, stochastic process can be approximated by ARMA models. The models are (as one would expect) of increasing order (finite but unbounded).
The approximation is in the L2 sense (l.i.m., limit in the mean), that is E([X-X_n]^2) goes to 0, where X is the original process and X_n, n=1,2, ... is the approximating ARMA process. I expect this can also handle stochastic input/ output processes, if the input output pair (X,U) is considered as a joint process. I have proved a similar result in my thesis about approximating finite state stoch. processes with Hidden Markov Models. The approximation is in two senses: weak (approximation of measures) and cross entropy. Since for every HMM it is easy to build an output equivalent network of finite automata, this gets really close to the notion of recurrent networks with sigmoid neurons. Of course this is all for stochastic networks/ probabilistic processes. In the deterministic case one would probably be interested in a different sense of approximation, e.g. L2 or L-infinity approximation. Is the Doya paper in the ohio archive?  From dasgupta at cs.umn.edu Thu Mar 18 13:56:59 1993 From: dasgupta at cs.umn.edu (Bhaskar Dasgupta) Date: Thu, 18 Mar 93 12:56:59 CST Subject: NIPS-92 paper in neuroprose. Message-ID: <9303181857.AA04358@deca.cs.umn.edu> The following file has been placed in connectionists, under the name georg.nips92.ps.Z (to appear in NIPS-92 proceedings). Any questions or comments will be highly appreciated. The Power of Approximating: a Comparison of Activation Functions Bhaskar DasGupta (a) Georg Schnitger (b,c) (a) Department of Computer Science, University of Minnesota, Minneapolis, MN 55455-0159 (b) Department of Computer Science, The Pennsylvania State University, University Park, PA 16802 (c) Department of Mathematics and Computer Science, University of Paderborn, Postfach 1621, 4790 Paderborn, Germany ABSTRACT -------- We compare activation functions in terms of the approximation power of their feedforward nets. We consider the case of analog as well as boolean input. To retrieve this file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: your email address 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get georg.nips92.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for georg.nips92.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. unix> uncompress georg.nips92.ps.Z unix> lpr georg.nips92.ps.Z (or however you print postscript) The paper is exactly 8 pages (no proofs appear due to space limitation). Many thanks to Jordan Pollack for maintaining this list. Bhaskar Dasgupta Department of Computer and Information Science University of Minnesota Minneapolis, MN 55455-0159 email :dasgupta at cs.umn.edu  From wray at ptolemy.arc.nasa.gov Thu Mar 18 22:21:29 1993 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Thu, 18 Mar 93 19:21:29 PST Subject: neuroprose paper on 2nd derivatives and their use in BP Message-ID: <9303190321.AA25438@ptolemy.arc.nasa.gov> The following paper is available by public ftp from Jordan Pollack's wonderful neuroprose collection. Details below. ------------------ Computing Second Derivatives in Feed-Forward Networks: a Review Wray L. Buntine Andreas S. Weigend RIACS & NASA Ames Research Center Xerox PARC To appear in IEEE Trans. of Neural Networks Abstract. 
The calculation of second derivatives is required by recent training and analyses techniques of connectionist networks, such as the elimination of superfluous weights, and the estimation of confidence intervals both for weights and network outputs. We here review and develop exact and approximate algorithms for calculating second derivatives. For networks with $|w|$ weights, simply writing the full matrix of second derivatives requires $O(|w|^2)$ operations. For networks of radial basis units or sigmoid units, exact calculation of the necessary intermediate terms requires of the order of $2h+2$ backward/forward-propagation passes where $h$ is the number of hidden units in the network. We also review and compare three approximations (ignoring some components of the second derivative, numerical differentiation, and scoring). Our algorithms apply to arbitrary activation functions, networks, and error functions (for instance, with connections that skip layers, or radial basis functions, or cross-entropy error and Softmax units, etc.). ----------------------------- The paper is buntine.second.ps.Z in the neuroprose archives. The INDEX sentence is A review of computing second derivatives in feed-forward networks. To retrieve this file from the neuroprose archives: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:wray): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> get buntine.second.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for buntine.second.ps.Z . ftp> quit 221 Goodbye. unix> uncompress buntine.second.ps.Z unix> lpr buntine.second.ps ---------------------- If you cannot ftp or print the postscript file, please send email to silva at parc.xerox.com or write to Nicole Silva Xerox PARC 3333 Coyote Hill Rd Palo Alto, CA 94304 USA  From doya at crayfish.UCSD.EDU Thu Mar 18 19:45:28 1993 From: doya at crayfish.UCSD.EDU (Kenji Doya) Date: Thu, 18 Mar 93 16:45:28 PST Subject: Modelling of Nonlinear Systems In-Reply-To: J.R.Chen@durham.ac.uk's message of Tue, 16 Mar 93 18:13:01 GMT Message-ID: <9303190045.AA08642@crayfish.UCSD.EDU> As Dr. Chen says, the fact that there exists a recurrent network that models any given dynamical system [1] does not mean that it can be achieved readily by learning, such as output error gradient descent. This may sound similar to the case of learning parity in feed-forward networks, but there are some additional problems that arise from nonlinear dynamics of the network, which I tried to discuss in another paper I posted in Neuroprose (Bifurcations of ...). Takens' result shows that an n-dimensional attractor dynamics can be reconstructed from its scalar output sequence x(t) as (for example) x(t) = F( x(t-1),...,x(t-m)) for m > 2n. Therefore, a conservative connectionist approach to modeling nonlinear dynamics is to prepare a long enough tapped delay line in the input layer and then to train a feed-forward network to simulate the function F. But it may not be the best approach because the same system can look very simple or complex depending on how we take the state vectors. Whether a recurrent network can find an efficient representation of the state space by learning is still an open problem. Another problem is the stability of the reconstructed trajectories. In many cases, the training set consists of specific trajectories like fixed points and limit cycles and no information is explicitly given about how the nearby trajectories should behave [2]. 
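A minimal sketch (not from the posting; the logistic map, the 1-nearest-neighbour predictor, and all names below are illustrative stand-ins) of the "conservative" tapped-delay-line approach described above: delay vectors built from the m previous scalar observations are used to predict the next value, x(t) ~ F(x(t-1), ..., x(t-m)), with the lookup standing in for a trained feed-forward net.

def series(T, x0=0.3):
    # scalar observations from a known one-dimensional system (the logistic map)
    xs = [x0]
    for _ in range(T - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

def embed(xs, m):
    # delay vectors: the m previous observations paired with the value to predict
    return [(tuple(xs[t - m:t]), xs[t]) for t in range(m, len(xs))]

def predict(v, data):
    # 1-nearest-neighbour lookup, standing in for a trained feed-forward net F
    return min(data, key=lambda d: sum((a - b) ** 2 for a, b in zip(v, d[0])))[1]

m = 3   # delay-line length; Takens' condition m > 2n holds with n = 1 here
xs = series(2000)
train, test = embed(xs[:1500], m), embed(xs[1500:], m)
mse = sum((predict(v, train) - y) ** 2 for v, y in test) / len(test)
print("mean squared one-step prediction error:", mse)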
It has been shown empirically that fixed points and "simple" limit cycles (e.g. sinusoids) tend to be stable, presumably by virtue of squashing functions. However, that is not true for complex trajectories. Since we know that the target trajectories are sampled from attractors (otherwise we can't observe them), we should somehow impose this constraint in training a network. About on-line/off-line training: What we want the network to do is to model a global, nonlinear vector field. On-line learning is not attractive (to me) if the network learns a local, (almost) linear vector field quickly and forgets about the rest of the state space. [1] Dr. Sontag has sent me a paper: H.T. Siegelmann and E.D. Sontag: Some recent results on computing with "neural nets". IEEE Conf. on Decision and Control, Tucson, Dec. 1992. It includes a more formal proof of the universality of recurrent networks. [2] In a recent Neuroprose paper, Tsung and Cottrell explicitly taught the network where the trajectories around a limit cycle should go. Kenji Doya Department of Biology, University of California, San Diego La Jolla, CA 92093-0322, USA Phone: (619)534-3954/5548 Fax: (619)534-0301  From schmidhu at informatik.tu-muenchen.de Fri Mar 19 11:42:34 1993 From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Fri, 19 Mar 1993 17:42:34 +0100 Subject: XOR and BP Message-ID: <93Mar19.174241met.42274@papa.informatik.tu-muenchen.de> Daniel Glaser writes: >> ..................., but isn't back-prop with a learning rate of 1 >> (see Luis B. Almeida's posting of 15.3.93) doing something quite a lot >> like random walk ? Probably not really. I ran a couple of simulations using the 2-2-1 (+ true unit) architecture but doing random search in weight space (instead of backprop). On average, I had to generate 1500 random weight initializations before hitting the first XOR solution (with a uniform distribution for each weight between -10.0 and +10.0). Different architectures and different initialization conditions influence the average number of trials, of course. Since there are only 16 mappings from the set of 4 input patterns to a single binary output, a hypothetical bias-free architecture allowing only such mappings would require about 16 random search trials on average. The results above seem to imply that Luis' backprop procedure had to fight against a `negative' architectural bias. The success of any learning system depends so much on the right bias. Of course, there are architectures and corresponding learning algorithms that solve XOR in a single `epoch'. Juergen Schmidhuber Institut fuer Informatik Technische Universitaet Muenchen Arcisstr. 21, 8000 Muenchen 2, Germany schmidhu at informatik.tu-muenchen.de  From kolen-j at cis.ohio-state.edu Thu Mar 18 05:09:06 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Thu, 18 Mar 93 05:09:06 -0500 Subject: Training XOR with BP In-Reply-To: "Luis B. Almeida"'s message of Mon, 15 Mar 93 18:49:19 +0100 <9303151749.AA20880@sara.inesc.pt> Message-ID: <9303181009.AA14632@pons.cis.ohio-state.edu> You can find the same type of graphs in (Kolen and Goel, 1991) where we reported, among other things, results on experiments testing the effects of the initial weight range on training XOR on 2-2-1 ffwd nets with backprop. This work was expanded in (Kolen and Pollack, 1990) where we examined the boundaries of t-convergent (the network reaches some convergence criterion in t epochs) regions in weight space.
What we found was that the boundary was not smooth (i.e., increase t and you get more rings of convergent regions) but very "cliffy": small differences in initial weights can mean the difference between converging in 50 epochs and many more than any of us are willing to wait for. I agree with Luis: "local minima" is overused in the connectionist community to describe networks which take a VERY long time to converge. A good example of this is a 2-2-2-1 network learning XOR which is started with very small weights selected from a uniform distribution between (-0.1,0.1). Such networks take a long time to learn the target mapping, but not because of a local minimum. Rather, they are stuck in a very flat region near the saddle point at all zero weights. Refs Kolen, J. F. and Goel, A. K., (1991). Learning in PDP networks: Computational Complexity and information content. _IEEE Transactions on Systems, Man, and Cybernetics_. 21, pg 359-367. (Available through neuroprose: kolen.pdplearn*) John. F. Kolen and Jordan. B. Pollack, 1990. Backpropagation is Sensitive to Initial Conditions. _Complex Systems_. 4:3. pg 269-280. (Available through neuroprose: kolen.bpsic*)  From ingber at alumni.cco.caltech.edu Mon Mar 22 08:01:04 1993 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Mon, 22 Mar 1993 05:01:04 -0800 Subject: Modelling nonlinear systems Message-ID: <9303221301.AA00706@alumni.cco.caltech.edu> In the context of modeling discussed in the two postings referenced below, it should be noted that multiplicative noise is often quite robust in modeling stochastic systems that have hidden variables and/or that otherwise would be modeled by much higher-order ARMA models. "Multiplicative" noise means that the typical Gaussian-Markovian noise terms added to introduce noise to sets of differential equations have additional factors which can be quite general functions of the other "deterministic" variables. Some nice work illustrating this is in %A K. Kishida %T Physical Langevin model and the time-series model in systems far from equilibrium %J Phys. Rev. A %V 25 %D 1982 %P 496-507 and %A K. Kishida %T Equivalent random force and time-series model in systems far from equilibrium %J J. Math. Phys. %V 25 %D 1984 %P 1308-1313 A very detailed reference that properly handles such systems is %A F. Langouche %A D. Roekaerts %A E. Tirapegui %T Functional Integration and Semiclassical Expansions %I Reidel %C Dordrecht, The Netherlands %D 1982 Modelers' preferences for simple systems aside, it should be noted that most physical systems that can reasonably be assumed to possess Gaussian-Markovian noise should also be assumed to at least have multiplicative noise as well. Such arguments are given in %A N.G. van Kampen %T Stochastic Processes in Physics and Chemistry %I North-Holland %C Amsterdam %D 1981 In the context of neural systems, such multiplicative noise systems arise quite naturally, as I have described in %A L. Ingber %T Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography %J Phys. Rev. A %N 6 %V 44 %P 4017-4060 %D 1991 and in %A L. Ingber %T Generic mesoscopic neural networks based on statistical mechanics of neocortical interactions %J Phys. Rev. A %V 45 %N 4 %P R2183-R2186 %D 1992 }Article 2057 of mlist.connectionists: }From: Thanos Kehagias }Subject: Modelling nonlinear systems }Date: Mon, 22 Mar 93 07:03:12 GMT }Approved: news at cco.caltech.edu } } }Regarding J.R.
Chen's paper: } }I think it is important to define in what sense "modelling" is understtood. }I have not read the Doya paper, but my guess is that it is an approximation }result (rather than exat representation). If it is an approximation }result, the sense of approximation (norm or metric used) is important. } }For instance: in the stochastic context, there is a well known statistical }theorem, the Wold theorem, which says that every continuous valued, finite }second moment, stochastic process can be approximated by ARMA models. The }models are (as one would expect) of increasing order (finite but unbounded). }The approximation is in the L2 sense (l.i.m., limit in the mean), that is }E([X-X_n]^2) goes to 0, where X is the original process and X_n, n=1,2, ... is }the approximating ARMA process. I expect this can also handle stochastic input/ }output processes, if the input output pair (X,U) is considered as a joint }process. } }I have proved a similar result in my thesis about approximating finite state }stoch. processes with Hidden Markov Models. The approximation is in two senses: }weak (approximation of measures) and cross entropy. Since for every HMM it is }easy to build an output equivalent network of finite automata, this gets really close to the notion of recurrent networks with sigmoid neurons. || Prof. Lester Ingber [10ATT]0-700-L-INGBER || || Lester Ingber Research Fax: 0-700-4-INGBER || || P.O. Box 857 Voice Mail: 1-800-VMAIL-LI || || McLean, VA 22101 EMail: ingber at alumni.caltech.edu ||  From mel at cns.caltech.edu Fri Mar 19 19:26:45 1993 From: mel at cns.caltech.edu (Bartlett Mel) Date: Fri, 19 Mar 93 16:26:45 PST Subject: Preprint Message-ID: <9303200026.AA07747@plato.cns.caltech.edu> /*******PLEASE DO NOT POST TO OTHER B-BOARDS*************/ Announcing two preprints now available in the neuroprose archive: 1. Synaptic Integration in an Excitable Dendritic Tree by Bartlett W. Mel 2. Memory Capacity of an Excitable Dendritic Tree by Bartlett W. Mel Abstracts and ftp instructions follow. Hardcopies are not available, unless you're desperate. -Bartlett Division of Biology Caltech 216-76 Pasadena, CA 91125 mel at caltech.edu (818)356-3643, fax: (818)796-8876 ------------------------------------------------------------------ SYNAPTIC INTEGRATION IN AN EXCITABLE DENDRITIC TREE Bartlett W. Mel Computation and Neural Systems California Institute of Technology Compartmental modeling experiments were carried out in an anatomically characterized neocortical pyramidal cell to study the integrative behavior of a complex dendritic tree containing active membrane mechanisms. Building on a hypothesis presented in (Mel 1992a), this work provides further support for a novel principle of dendritic information processing, that could underlie a capacity for nonlinear pattern discrimination and/or sensory-processing within the dendritic trees of individual nerve cells. It was previously demonstrated that when excitatory synaptic input to a pyramidal cell is dominated by voltage-dependent NMDA-type channels, the cell responds more strongly when synaptic drive is concentrated within several dendritic regions than when it is delivered diffusely across the dendritic arbor (Mel 1992a). This effect, called dendritic ``cluster sensitivity'', persisted under wide ranging parameter variations, and directly implicated the spatial ordering of afferent synaptic connections onto the dendritic tree as an important determinant of neuronal response selectivity. 
In this work, the sensitivity of neocortical dendrites to spatially clustered synaptic drive has been further studied with fast sodium and slow calcium spiking mechanisms present in the dendritic membrane. Several spatial distributions of the dendritic spiking mechanisms were tested, with and without NMDA synapses. Results of numerous simulations reveal that dendritic cluster sensitivity is a highly robust phenomenon in dendrites containing a sufficiency of excitatory membrane mechanisms, and is only weakly dependent on their detailed spatial distribution, peak conductances, or kinetics. Factors that either work against or make irrelevant the dendritic cluster sensitivity effect include 1) very high-resistance spine necks, 2) very large synaptic conductances, 3) very high baseline levels of synaptic activity, or 4) large fluctuations in level of synaptic activity on short time scales. The functional significance of dendritic cluster-sensitivity has been previously discussed in the context of associative learning and memory (Mel 1992ab). Here it is demonstrated that the dendritic tree of a cluster-sensitive neuron implements an approximative spatial correlation, or sum of products, operation, such as that which may underlie nonlinear disparity tuning in binocular visual neurons. ------------------------------------------------------------------ MEMORY CAPACITY OF AN EXCITABLE DENDRITIC TREE Bartlett W. Mel Computation and Neural Systems California Institute of Technology Previous compartmental modeling studies have shown that the dendritic trees of neocortical pyramidal cells may be ``cluster-sensitive'', i.e. selectively responsive to spatially clustered, rather than diffuse, patterns of synaptic activation. The local nonlinear interactions among synaptic inputs in a cluster sensitive neuron are crudely analogous to a layer of hidden units in a neural network, and permit nonlinear pattern discriminations to be carried out within the dendritic tree of a single cell (Mel 1992ab). These studies have suggested that the spatial permutation of synaptic connections onto the dendritic tree is a crucial determinant of a cell's response selectivity. In this paper, the storage capacity of a single cluster sensitive neuron is examined empirically. As in (Mel 1992b), an abstract model neuron, called a ``clusteron'', was used to explore biologically-plausible Hebb-type learning rules capable of manipulating the ordering of synaptic inputs onto cluster-sensitive dendrites. Comparisons are made between the storage capacity of a clusteron, a simple perceptron, and a modeled pyramidal cell with either a passive or electrically excitable dendritic tree. Based on the empirically demonstrated storage capacity of a single biophysically-modeled pyramidal cell, it is estimated that a 5 x 5 mm slab of neocortex can ``memorize'' on the order of 100,000 sparse random input-output associations. Finally, the neurobiological relevance of cluster-sensitive dendritic processing and learning rules is considered.
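A toy numerical illustration (in Python; this is not Mel's compartmental or clusteron model, and the local interaction rule and all constants below are assumptions made only for illustration) of the cluster-sensitivity idea in the abstracts above: each active synapse is boosted by other active synapses within a local neighbourhood of the dendrite, so the same number of active inputs produces a larger response when they arrive in tight spatial clusters than when they are scattered.

import random

N, RADIUS = 100, 3   # synapse sites along a one-dimensional "dendrite"

def response(active):
    # each active input contributes 1 plus the number of other active inputs
    # within RADIUS sites of it, a crude stand-in for local synaptic synergy
    act = set(active)
    total = 0.0
    for i in act:
        neighbours = sum(1 for j in act if j != i and abs(i - j) <= RADIUS)
        total += 1.0 + neighbours
    return total

clustered = list(range(20, 30)) + list(range(60, 70))   # 20 inputs in two tight clusters
diffuse = random.sample(range(N), 20)                   # 20 inputs scattered at random
print("clustered input response:", response(clustered))
print("diffuse input response:  ", response(diffuse))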
------------------------------------------------------------------ To get these papers by ftp: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get mel.synaptic.tar.Z ftp> get mel.memory.ps.Z ftp> quit unix> uncompress mel*Z unix> tar xvf mel*tar unix> lpr -s mel.synaptic.ps1 (or however you print postscript) unix> lpr -s mel.synaptic.ps2 unix> lpr -s mel.synaptic.ps3 unix> lpr -s mel.memory.ps /*******PLEASE DO NOT POST TO OTHER B-BOARDS*************/  From POCHEC%unb.ca at UNBMVS1.csd.unb.ca Mon Mar 22 14:36:46 1993 From: POCHEC%unb.ca at UNBMVS1.csd.unb.ca (POCHEC%unb.ca@UNBMVS1.csd.unb.ca) Date: Mon, 22 Mar 93 15:36:46 AST Subject: Call for Papers Message-ID: ================================================================== ================================================================== Final Call for Participation The 5th UNB AI Symposium ********************************* * * * Theme: * * ARE WE MOVING AHEAD? * * * ********************************* August 11-14, 1993 Sheraton Inn, Fredericton New Brunswick Canada Advisory Committee ================== N. Ahuja, Univ.of Illinois, Urbana W. Bibel, ITH, Darmstadt D. Bobrow, Xerox PARC M. Fischler, SRI P. Gardenfors, Lund Univ. S. Grossberg, Boston Univ. J. Haton, CRIN T. Kanade, CMU R. Michalski, George Mason Univ. T. Poggio, MIT Z. Pylyshyn, Univ. of Western Ontario O. Selfridge, GTE Labs Y. Shirai, Osaka Univ. Program Committee ================= The international program committee will consist of approximately 40 members from all main fields of AI and from Cognitive Science. We invite researchers from the various areas of Artificial Intelligence, Cognitive Science and Pattern Recognition, including Vision, Learning, Knowledge Representation and Foundations, to submit articles which assess or review the progress made so far in their respective areas, as well as the relevance of that progress to the whole enterprise of AI. Other papers which do not address the theme are also invited. Feature ======= Four 70 minute invited talks and five panel discussions are devoted to the chosen topic: "Are we moving ahead: Lessons from Computer Vision." The speakers include (in alphabetical order) * Lev Goldfarb * Stephen Grossberg * Robert Haralick * Tomaso Poggio Such a concentrated analysis of the area will be undertaken for the first time. We feel that the "Lessons from Computer Vision" are of relevance to the entire AI community. Information for Authors ======================= Now: Fill out the form below and email it. --- March 30, 1993: -------------- Four copies of an extended abstract (maximum of 4 pages including references) should be sent to the conference chair. May 15, 1993: ------------- Notification of acceptance will be mailed. July 1, 1993: ------------- Camera-ready copy of paper is due. Conference Chair: Lev Goldfarb Email: goldfarb at unb.ca Mailing address: Faculty of Computer Science University of New Brunswick P. O. Box 4400 Fredericton, New Brunswick Canada E3B 5A3 Phone: (506) 453-4566 FAX: (506) 453-3566 Symposium location The symposium will be held in the Sheraton Inn, Fredericton which overlooks the beautiful Saint John River. IMMEDIATE REPLY FORM ==================== (please email to goldfarb at unb.ca) I would like to submit a paper. Title: _____________________________________ _____________________________________ _____________________________________ I would like to organize a session. 
Title: _____________________________________ _____________________________________ _____________________________________ Name: _____________________________________ _____________________________________ Department: _____________________________________ University/Company: _____________________________________ _____________________________________ _____________________________________ Address: _____________________________________ _____________________________________ _____________________________________ Prov/State: _____________________________________ Country: _____________________________________ Telephone: _____________________________________ Email: _____________________________________ Fax: _____________________________________  From shultz at hebb.psych.mcgill.ca Tue Mar 23 09:17:11 1993 From: shultz at hebb.psych.mcgill.ca (Tom Shultz) Date: Tue, 23 Mar 93 09:17:11 EST Subject: No subject Message-ID: <9303231417.AA21457@hebb.psych.mcgill.ca> Subject: Abstract Date: 23 March '93 Please do not forward this announcement to other boards. Thank you. ------------------------------------------------------------- The following paper has been placed in the Neuroprose archive at Ohio State University: A Connectionist Model of the Development of Seriation Denis Mareschal Department of Experimental Psychology University of Oxford Thomas R. Shultz Department of Psychology McGill University Abstract Seriation is the ability to order a set of objects on some dimension such as size. Psychological research on the child's development of seriation has uncovered both cognitive stages and perceptual constraints. A generative connectionist algorithm, cascade- correlation, is used to successfully model these psychological regularities. Previous rule-based models of seriation have been unable to capture either stage progressions or perceptual effects. The present simulations provide a number of insights about possible processing mechanisms for seriation, the nature of seriation stage transitions, and the opportunities provided by the environment for learning about seriation. This paper will be presented at the Fifteenth Annual Conference of the Cognitive Science Society, University of Colorado, 1993. Instructions for ftp retrieval of this paper are given below. If you are unable to retrieve and print it and therefore wish to receive a hardcopy, please send e-mail to shultz at psych.mcgill.ca Please do not reply directly to this message. FTP INSTRUCTIONS: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get mareschal.seriate.ps.Z ftp> quit unix> uncompress mareschal.seriate.ps.Z Tom Shultz Department of Psychology McGill University 1205 Penfield Avenue Montreal, Quebec H3A 1B1 Canada shultz at psych.mcgill.ca  From unni at neuro.cs.gmr.com Tue Mar 23 17:34:01 1993 From: unni at neuro.cs.gmr.com (K.P.Unnikrishnan) Date: Tue, 23 Mar 93 17:34:01 EST Subject: Tech report: MNNs for adaptive control Message-ID: <9303232234.AA00453@neuro.cs.gmr.com> The following technical report is now available. For a hard copy, please send your surface mailing address to sastry at neuro.cs.gmr.com. ftp versions of the paper and the actual code for simulations may be available in future. Unnikrishnan -------------------------------------------------------------- Memory Neuron Networks for Identification and Control of Dynamical Systems P. S. Sastry, G. Santharam Indian Institute of Science and K. P. 
Unnikrishnan General Motors Research Laboratories This paper presents Memory Neuron Networks as models for identification and adaptive control of nonlinear dynamical systems. These are a class of recurrent networks obtained by adding trainable temporal elements to feed-forward networks which makes the output history sensitive. By virtue of this capability, these networks can identify dynamical systems without having to be explicitly fed with past inputs and outputs. Thus, they can identify systems whose order is unknown or systems with unknown delay. It is argued that for satisfactory modeling of dynamical systems, neural networks should be endowed with such internal memory. The paper presents a preliminary analysis of the learning algorithm, providing theoretical justification for the identification method. Methods for adaptive control of nonlinear systems using these networks are presented. Through extensive simulations, these models are shown to be effective both for identification and model reference adaptive control of nonlinear systems.  From inmanh at cogs.sussex.ac.uk Wed Mar 24 05:15:57 1993 From: inmanh at cogs.sussex.ac.uk (Inman Harvey) Date: Wed, 24 Mar 93 10:15:57 GMT Subject: Evolutionary Robotics - Tech. Reports Message-ID: <9921.9303241015@rsuna.crn.cogs.susx.ac.uk> Evolutionary Robotics at Sussex -- Technical Reports =============================== The following six technical reports describe our recent work in using genetic algorithms to develop neural-network controllers for a simulated simple visually-guided robot. Currently only hard-copies are available. To request copies, mail one of: inmanh at cogs.susx.ac.uk or davec at cogs.susx.ac.uk or philh at cogs.susx.ac.uk giving a surface mail address and the CSRP numbers of the reports you want. or write to us at: School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH England, UK. ------------ABSTRACTS-------------------- Genetic convergence in a species of evolved robot control architectures I. Harvey, P. Husbands, D. Cliff Cognitive Science Research Paper CSRP267 February 1993 We analyse how the project of evolving 'neural' network controller for autonomous visually guided robots is significantly different from the usual function optimisation problems standard genetic algorithms are asked to tackle. The need to have open ended increase in complexity of the controllers, to allow for an indefinite number of new tasks to be incrementally added to the robot's capabilities in the long term, means that genotypes of arbitrary length need to be allowed. This results in populations being genetically converged as new tasks are added, and needs a change to usual genetic algorithm practices. Results of successful runs are shown, and the population is analysed in terms of genetic convergence and movement in time across sequence space. Analysing recurrent dynamical networks evolved for robot control P. Husbands, I. Harvey, D. Cliff Cognitive Science Research Paper CSRP265 January 1993 This paper shows how a mixture of qualitative and quantitative analysis can be used to understand a particular brand of arbitrarily recurrent continuous dynamical neural network used to generate robust behaviours in autonomous mobile robots. These networks have been evolved in an open-ended way using an extended form of genetic algorithm. After briefly covering the background to our research, properties of special frequently occurring subnetworks are analysed mathematically. 
Networks evolved to control simple robots with low resolution sensing are then analysed, using a combination of knowledge of these mathematical properties and careful interpretation of time plots of sensor, neuron and motor activities. Analysis of evolved sensory-motor controllers D. Cliff, P. Husbands, I. Harvey Cognitive Science Research Paper CSRP264 December 1992 We present results from the concurrent evolution of visual sensing morphologies and sensory-motor controller-networks for visually guided robots. In this paper we analyse two (of many) networks which result from using incremental evolution with variable-length genotypes. The two networks come from separate populations, evolved using a common fitness function. The observable behaviours of the two robots are very similar, and close to the optimal behaviour. However, the underlying sensing morphologies and sensory-motor controllers are strikingly different. This is a case of convergent evolution at the behavioural level, coupled with divergent evolution at the morphological level. The action of the evolved networks is described. We discuss the process of analysing evolved artificial networks, a process which bears many similarities to analysing biological nervous systems in the field of neuroethology. Incremental evolution of neural network architectures for adaptive behaviour D. Cliff, I. Harvey, P. Husbands Cognitive Science Research Paper CSRP256 December 1992 This paper describes aspects of our ongoing work in evolving recurrent dynamical artificial neural networks which act as sensory-motor controllers, generating adaptive behaviour in artificial agents. We start with a discussion of the rationale for our approach. Our approach involves the use of recurrent networks of artificial neurons with rich dynamics, resilience to noise (both internal and external); and separate excitation and inhibition channels. The networks allow artificial agents (simulated or robotic) to exhibit adaptive behaviour. The complexity of designing networks built from such units leads us to use our own extended form of genetic algorithm, which allows for incremental automatic evolution of controller-networks. Finally, we review some of our recent results, applying our methods to work with simple visually-guided robots. The genetic algorithm generates useful network architectures from an initial set of randomly-connected networks. During evolution, uniform noise was added to the activation of each neuron. After evolution, we studied two evolved networks, to see how their performance varied when the noise range was altered. Significantly, we discovered that when the noise was eliminated, the performance of the networks degraded: the networks use noise to operate efficiently. Evolving visually guided robots D. Cliff, P. Husbands, I. Harvey Cognitive Science Research Paper CSRP220 July 1992 We have developed a methodology grounded in two beliefs: that autonomous agents need visual processing capabilities, and that the approach of hand-designing control architectures for autonomous agents is likely to be superseded by methods involving the artificial evolution of comparable architectures. In this paper we present results which demonstrate that neural-network control architectures can be evolved for an accurate simulation model of a visually guided robot. 
The simulation system involves detailed models of the physics of a real robot built at Sussex; and the simulated vision involves ray-tracing computer graphics, using models of optical systems which could readily be constructed from discrete components. The control-network architecture is entirely under genetic control, as are parameters governing the optical system. Significantly, we demonstrate that robust visually-guided control systems evolve from evaluation functions which do not explicitly involve monitoring visual input. The latter part of the paper discusses work now under development, which allows us to engage in long-term fundamental experiments aimed at thoroughly exploring the possibilities of concurrently evolving control networks and visual sensors for navigational tasks. This involves the construction of specialised visual-robotic equipment which eliminates the need for simulated sensing. Issues in evolutionary robotics I. Harvey, P. Husbands, D. Cliff Cognitive Science Research Paper CSRP219 July 1992 In this paper we propose and justify a methodology for the development of the control systems, or `cognitive architectures', of autonomous mobile robots. We argue that the design by hand of such control systems becomes prohibitively difficult as complexity increases. We discuss an alternative approach, involving artificial evolution, where the basic building blocks for cognitive architectures are adaptive noise-tolerant dynamical neural networks, rather than programs. These networks may be recurrent, and should operate in real time. Evolution should be incremental, using an extended and modified version of genetic algorithms. We finally propose that, sooner rather than later, visual processing will be required in order for robots to engage in non-trivial navigation behaviours. Time constraints suggest that initial architecture evaluations should be largely done in simulation. The pitfalls of simulations compared with reality are discussed, together with the importance of incorporating noise. To support our claims and proposals, we present results from some preliminary experiments where robots which roam office-like environments are evolved.  From usui at tut.ac.jp Thu Mar 25 00:15:44 1993 From: usui at tut.ac.jp (usui@tut.ac.jp) Date: Thu, 25 Mar 93 00:15:44 JST Subject: IJCNN'93-NAGOYA Call For Papers Message-ID: <9303241515.AA26419@bpel.tutics.tut.ac.jp> ======================================================================== CALL FOR PAPERS (Second Version) IJCNN'93-NAGOYA, JAPAN INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS NAGOYA CONGRESS CENTER, JAPAN OCTOBER 25-29,1993 IJCNN'93-NAGOYA co-sponsored by the Japanese Neural Network Society (JNNS), the IEEE Neural Networks Council (NNC), the International Neural Network Society (INNS), the European Neural Network Society (ENNS), the Society of Instrument and Control Engineers (SICE, Japan), the Institute of Electronics, Information and Communication Engineers (IEICE, Japan), the Nagoya Industrial Science Research Institute, the Aichi Prefectural Government and the Nagoya Municipal Government cordially invite interested authors to submit papers in the field of neural networks for presentation at the Conference. Nagoya is a historical city famous for Nagoya Castle and is located in the central major industrial area of Japan. There is frequent direct air service from most countries. Nagoya is 2 hours away from Tokyo or 1 hour from Osaka by bullet train. CONFERENCE SCHEDULE: AM PM Evening '93.10.25(Mon.) 
Registration and Tutorials (AM and PM)
10.26(Tue.) Opening Ceremony (AM), Industry Forum (PM), Reception (Evening)
10.27(Wed.) Technical Sessions (Oral, Poster)
10.28(Thu.) Technical Sessions (Oral, Poster), Banquet (Evening)
10.29(Fri.) Technical Sessions (Oral, Poster), Closing
KEYNOTE SPEAKERS INCLUDE:
David E. Rumelhart, Methods for Improving Generalization in Connectionist Networks
Shun-ichi Amari, Brain and Computer - A Perspective
PLENARY SPEAKERS INCLUDE:
Rodney Brooks, (TBD)
Edmund T. Rolls, Neural Networks in the Hippocampus and Cerebral Cortex Involved in Memory
Kunihiko Fukushima, Improved Generalization Ability Using Constrained Neural Network Architectures
INVITED SPEAKERS INCLUDE:
Keiji Tanaka, Neural Mechanisms of Visual Recognition
Tomaso Poggio, Visual Learning: From Object Recognition to Computer Graphics
Mitsuo Kawato, Inverse Dynamics Model in the Cerebellum
Teuvo Kohonen, Generalization of the Self-Organizing Map
Michael I. Jordan, Learning in Hierarchical Networks
Rolf Eckmiller, Information Processing in Biology-inspired Pulse Coded Neural Networks
Shigenobu Kobayashi, Hybrid Systems of Natural and Artificial Intelligence
Kazuo Kyuma, Optical Neural Networks / Optical Neurodevices
TECHNICAL SESSIONS: Papers may be submitted for consideration as oral or poster presentations in the following areas: Neurobiological Systems; Self-organization; Cognitive Science; Learning & Memory; Image Processing & Vision; Robotics & Control; Speech, Hearing & Language; Hybrid Systems (Fuzzy, Genetic, Expert Systems, AI); Sensorimotor Systems; Implementation (Electronic, Optical, Bio-chips); Neural Network Architectures; Network Dynamics; Optimization; Other Applications (Medical and Social Systems, Art, Economy, etc. Please specify the area of the application). Four (4) page papers MUST be received by April 30, 1993. Papers received after that date will be returned unopened. International authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. All submissions will be acknowledged by mail. Papers will be reviewed by senior researchers in the field, and all authors will be informed of the decisions at the end of the review process by June 30, 1993. A limited number of papers will be accepted for oral and poster presentations. No poster sessions are scheduled in parallel with oral sessions. All accepted papers will be published as submitted in the conference proceedings, which should be available at the conference for distribution to all regular conference registrants. Please submit six (6) copies (one camera-ready original and five copies) of the paper. Do not fold or staple the original camera-ready copy. The four-page papers, including figures, tables, and references, should be written in English. Papers exceeding four pages will be charged 30,000 YEN per extra page. Papers should be submitted on 210mm x 297mm (A4) or 8-1/2" x 11" (letter size) white paper with one inch margins on all four sides (actual space to be allowed to type is 165mm (W) x 228mm (H) or 6-1/2" x 9"). They should be prepared by typewriter or letter-quality printer in one or two-column format, single-spaced, in Times or similar font of 10 points or larger, and printed on one side of the page only. Please be sure that all text, figures, captions, and references are clean, sharp, readable, and of high contrast. Fax submissions are not acceptable.
Centered at the top of the first page should be the complete title, author(s), affiliation(s), and mailing address(es), followed by a blank space and then an abstract, not to exceed 15 lines, followed by the text. In an accompanying letter, the following should be included: the full title of the paper; the presentation preferred (oral or poster); for both the corresponding author and the presenter*, the name, mailing address, telephone and FAX numbers, and e-mail address; the technical session (1st and 2nd choices); and audio-visual requirements (e.g., 35mm Slide, OHP, VCR). * Students who wish to apply for the Student Award, please specify and enclose a verification letter of status from the Department head. Send papers to the IJCNN'93-NAGOYA Secretariat. TUTORIALS INCLUDE: Prof. Edmund T. Rolls (TBD) Prof. H.-N. L. Teodorescu (TBD) Prof. Haim Sompolinsky (TBD) ============================== Models for the development of the visual system Professor Michael P. Stryker University of California ============================== Optical Neural Networks Demetri Psaltis, California Institute of Technology ============================= Self-Organizing Neural Architectures for Adaptive Sensory-Motor Control Stephen Grossberg, Boston University ============================= Biology-Inspired Image Preprocessing: the How and the Why Gart Hauske, Technischen Universitat Munchen ============================= Possible Roles of Stimulus-dominated and Cortex Dominated Synchronizations in the Visual Cortex Prof. Dr. Reinhard Eckhorn Philipps University Marburg ============================= Genetic Algorithm Kenneth De Jong George Mason University ============================= Networks of Behavior Based Robots Prof. Rodney Brooks AI Lab, MIT ============================= Pattern and Speech Recognition by Discriminative Methods B.H. Juang, AT&T Bell Labs. ============================= Developments of modular learning systems Michael I. Jordan MIT ============================= VLSI Implementation of Neural Networks Federico Faggin Synaptics, Inc. ============================= Time Series Prediction and Analysis Dr. Andreas Weigend Palo Alto Research Center ============================= The chaotic dynamics of large networks R. S. MacKay University of Warwick ============================= Synaptic coding of spike trains Jose Pedro Segundo University of California ============================= NEURAL NETWORK BASICS: APPLICATIONS, EXAMPLES AND STANDARDS Mary Lou Padgett Auburn University ============================= Analog Neural Networks - Techniques, Circuits and Learning - Alan F. Murray University of Edinburgh ============================= Methods to adapt neural or fuzzy networks for control. Paul J. Werbos National Science Foundation ============================= Pattern Recognition with Fuzzy Sets and Neural Nets James C. Bezdek, U. of W. Florida ============================= Learning, Approximation, and Networks Tomaso Poggio and Federico Girosi Tutorials for IJCNN'93-NAGOYA will be held on Monday, October 25, 1993. Each tutorial will be three hours long. The tutorials should be designed as such and not as expanded talks. They should lead the student at the college senior level through a pedagogically understandable development of the subject matter. Experts in neural networks and related fields are encouraged to submit proposed topics for tutorials. INDUSTRY FORUM SPEAKERS INCLUDE: Guido J.
Deboeck Robert Hecht-Nielsen Toshirou Fujiwara Tsuneharu Nitta A major industry forum will be held in the afternoon on Tuesday, October 26, 1993. Speakers will include representatives from industry, government, and academia. The aim of the forum is to permit attendees to understand more fully the possible industrial applications of neural networks, to discuss problems that have arisen in industrial applications, and to delineate new areas of research and development of neural network applications. EXHIBIT INFORMATION: Exhibitors are encouraged to present the latest innovations in neural networks, including electronic and optical neuro computers, fuzzy neural networks, neural network VLSI chips and development systems, neural network design and simulation tools, software systems, and application demonstration systems. A large group of vendors and participants from academia, industry and government is expected. We believe that IJCNN'93-NAGOYA will be the largest neural network conference and trade-show in Japan in which to exhibit your products. Potential exhibitors should plan to sign up for exhibit booths before April 30, 1993, since exhibit space is limited. Vendors may contact the IJCNN'93-NAGOYA Secretariat. COMMITTEES & CHAIRS: Advisory Chair: Fumio Harashima, University of Tokyo Vice-cochairs: Russell Eberhart (IEEE NNC), Research Triangle Institute Paul Werbos (INNS), National Science Foundation Teuvo Kohonen (ENNS), Helsinki University of Technology Organizing Chair: Shun-ichi Amari, University of Tokyo Program Chair: Kunihiko Fukushima, Osaka University Cochairs: Robert J. Marks, II (IEEE NNC), University of Washington Harold H. Szu (INNS), Naval Surface Warfare Center Rolf Eckmiller (ENNS), University of Dusseldorf Noboru Sugie, Nagoya University Steering Chair: Toshio Fukuda, Nagoya University General Affair Chair: Fumihito Arai, Nagoya University Finance Chairs: Hide-aki Saito, Tamagawa University Roy S. Nutter, Jr., West Virginia University Publicity Chairs: Shiro Usui, Toyohashi University of Technology Evangelia Micheli-Tzanakou, Rutgers University Publication Chair: Yoichi Okabe, University of Tokyo Local Arrangement Chair: Yoshiki Uchikawa, Nagoya University Exhibits Chairs: Masanori Idesawa, Riken Shigeru Okuma, Nagoya University Industry Forum Chairs: Noboru Ohnishi, Nagoya University Hisato Kobayashi, Hosei University Social Event Chair: Kazuhiro Kosuge, Nagoya University Tutorial Chair: Minoru Tsukada, Tamagawa University Technical Tour Chair: Hideki Hashimoto, University of Tokyo REGISTRATION: Registration Fee Full conference registration fee includes admission to all sessions, exhibit area, welcome reception and proceedings. Tutorials and banquet are NOT included.
Member*: 45,000 yen before Aug. 31 '93; 55,000 yen after Sept. 1 '93; 60,000 yen on-site
Non-Member: 55,000 yen before Aug. 31 '93; 65,000 yen after Sept. 1 '93; 70,000 yen on-site
Student**: 12,000 yen before Aug. 31 '93; 15,000 yen after Sept. 1 '93; 20,000 yen on-site
Tutorial Registration Fee Tutorials will be held on Monday, October 25, 1993, 10:00 am - 1:00 pm and 3:00 pm - 6:00 pm. The complete list of tutorials will be available in the June mailing. Member-ship Option Before August 31 '93 After Sept. 1 '93 Industrial Univ.& Nonprofit Inst. Member* Half day 20,000 yen 7,000 yen 40,000 yen Full day 30,000 yen 10,000 yen 60,000 yen Non-Member Half day 30,000 yen 10,000 yen 50,000 yen Non-Member Full day 45,000 yen 15,000 yen 80,000 yen Student** Half day ------------ 5,000 yen 20,000 yen Full day ------------ 7,500 yen 30,000 yen * A member of co-sponsoring and co-operating societies.
**Students must submit a verification letter of full-time status from the Department head. Banquet The IJCNN'93-NAGOYA Banquet will be held on Thursday, October 28, 1993. Note that the Banquet ticket (5,000 yen/person) is not included in the registration fee. Pre-registration is recommended, since the number of seats is limited. The registration for the Banquet can be made at the same time as the conference registration. Payment and Remittance Payment for registration and tutorial fees should be in one of the following forms: 1. A bank transfer to the following bank account: Name of Bank: Tokai Bank, Nagoya Ekimae-Branch Name of Account: Travel Plaza International Chubu, Inc. EC-ka Account No.: 1079574 Address: 6F Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan 2. Credit Cards (American Express, Diners, Visa, Master Card) are acceptable except for domestic registrants. Please indicate your card number and expiration date on the Registration Form. Note: When making remittance, please send the Registration Form to the IJCNN'93-NAGOYA Secretariat together with a copy of your bank's receipt for the transfer. Personal checks are not accepted, and no currency other than Japanese yen will be accepted. Confirmation and Receipt Upon receiving your Registration Form and confirming your payment, the IJCNN'93-NAGOYA Secretariat will send you a confirmation / receipt. This confirmation should be retained and presented at the registration desk of the conference site. Cancellation and Refund of the Fees All financial transactions for the conference are being handled by the IJCNN'93-NAGOYA Secretariat. Please send a written notification of cancellation directly to the office. For cancellations received on or before September 30, 1993, a 50% cancellation fee will be charged. We regret that no refunds for registration can be made after October 1, 1993. All refunds will be processed after the conference. NAGOYA: The City of Nagoya, with a population of over two million, is the principal city of central Japan and lies at the heart of one of the three leading areas of the country. The area in and around the city contains a large number of high-tech industries with names known worldwide, such as Toyota, Mitsubishi, Honda, Sony and Brother. The city's central location gives it excellent road and rail links to the rest of the country; there are direct air services to 18 other cities in Japan and 26 cities abroad. Nagoya enjoys a temperate climate and agriculture flourishes on the fertile plain surrounding the city. The area has a long history; Nagoya is the birthplace of two of Japan's greatest heroes: the Lords Oda Nobunaga and Toyotomi Hideyoshi, who did much to bring the 'Warring States' period to an end. Tokugawa Ieyasu, who completed the task and established the Edo period, was also born in the area. Nagoya flourished under the benevolent rule of this lord and his descendants. Climate and Clothing The climate in Nagoya in late October is usually agreeable and stable, with an average temperature of 16-23 C (60-74 F). Heavy clothing is not necessary; however, a light sweater is recommended. Business suits as well as casual clothing are appropriate. TRAVEL INFORMATION: Official Travel Agent Travel Plaza International Chubu, Inc. (TPI) has been appointed as the Official Travel Agent for IJCNN'93-NAGOYA, JAPAN to handle all travel arrangements in Japan. All inquiries and application forms for hotel accommodations described herein should be addressed as follows: Travel Plaza International Chubu, Inc.
Shirakawa Dai-san Bldg. 4-8-10 Meieki, Nakamura-ku Nagoya 450, Japan Tel: +81-52-561-9880/8655 Fax: +81-52-561-1241 Airline Transportation Participants from Europe and North America who are planning to come to Japan by air are advised to get in touch with the following travel agents, who can provide information on discount fares. Departure cities are Los Angeles, Washington, New York, Paris, and London. Japan Travel Bureau U.K. Inc. 9 Kingsway London WC2B 6XF, England, U.K. Tel: (01)836-9393 Fax: (01)836-6215 Japan Travel Bureau International Inc. Equitable Tower 11th Floor New York, N.Y. 10019 U.S.A. Tel: (212)698-4955 Fax: (212)246-5607 Japan Travel Bureau Paris 91 Rue du Faubourg Saint-Honore 750008 Paris France Tel: (01)4265-1500 Fax: (01)4265-1132 Japan Travel Bureau International Inc. Suite 1410, One Wilshire Bldg. 624 South Grand Ave, Los Angeles, CA 90017 U.S.A. Tel: (213)687-9881 Fax: (213)621-2318 Japan Rail Pass The JAPAN RAIL PASS is a special ticket that is available only to travellers visiting Japan from foreign countries for sight-seeing. To be eligible to purchase a JAPAN RAIL PASS, you must purchase an Exchange Order from an authorized sales office or agent before you come to Japan. Please contact JTB offices or your travel agent for details. Note: The rail pass is a flash pass good on most of the trains and ferries in Japan. It provides very significant savings on transportation costs within Japan if you plan to travel more than just from Tokyo to Nagoya and return. Bookings of Japan Railway tickets cannot be made before the Japan Rail Pass has been issued in Japan. Access to Nagoya Direct flights to Nagoya are available from the following cities: Seoul, Taipei, Pusan, Hong Kong, Singapore, Bangkok, Cheju, Jakarta, Denpasar, Kuala Lumpur, Honolulu, Portland, Los Angeles, Guam, Saipan, Toronto, Vancouver, Rio de Janeiro, Sao Paulo, Moscow, Frankfurt, Paris, London, Brisbane, Cairns, Sydney and Auckland. Participants flying from the U.S.A. are urged to fly to Los Angeles, CA, or Portland, OR, and transfer to direct flights to Nagoya on Delta Airlines, or to fly to Seoul, Korea, for a connecting flight to Nagoya. For participants from other countries, flights to Narita (the New Tokyo International Airport) or Osaka International Airport are recommended. Domestic flights are available from Narita to Nagoya, but not from Osaka. The bullet train, "Shinkansen", is a fast and convenient way to get to Nagoya from either Osaka or Tokyo. Transportation from Nagoya International Airport Bus service to the Nagoya JR train station is available every 15 minutes. The bus stop (signed as No. 1) is to your left as you exit the terminal. The trip takes about 1 hour. Transportation from Narita International Airport Two ways to get from Narita to the Tokyo JR train station (to connect with the Shinkansen) are recommended: 1. An express train from the airport to the Tokyo JR train station. This is an all-reserved-seat train; buy tickets before boarding. Follow the signs in the airport to the JR Narita station. The trip takes 1 hour. 2. A non-stop limousine bus service is available, leaving Narita airport every 15 minutes. The trip will take between one and one and a half hours or more, depending on traffic conditions. The limousine buses have reserved seating, so it is necessary to purchase a ticket before boarding. If you plan to stay in Tokyo overnight before proceeding to Nagoya, other limousine buses to major Tokyo hotels are available.
Transportation from Osaka International Airport Non-stop bus service to the Shin-Osaka JR train station is available every 15 min. Foreign Exchange and Traveller's Checks Purchase of traveller's checks in Japanese yen or U.S. dollars before departure is recommended. The conference secretariat and most stores will accept only Japanese yen in cash. Major credit cards are accepted in a number of shops and hotels. Foreign currency exchange and cashing of traveller's checks are available at the New Tokyo International Airport, the Osaka International Airport and major hotels. Major banks that handle foreign currencies are located in the downtown area. Banks are open from 9:00 to 15:00 on weekdays and are closed on Saturday and Sunday. Electricity 100 volts, 60 Hz. For registration and additional information please contact: IJCNN'93-NAGOYA Secretariat: Travel Plaza International Chubu, Inc. Shirakawa Dai-san Bldg., 4-8-10 Meieki, Nakamura-ku, Nagoya, 450 Japan Phone: +81-52-561-9880/8655 Fax: +81-52-561-1241 ________________________________________________________________________________ Please do not reply to this account. Please use the telephone number, fax number or mail address listed above. --- Shiro Usui (usui at tut.ac.jp) Biological and Physiological Engineering Lab. Department of Information and Computer Sciences Toyohashi University of Technology Toyohashi 441, Japan TEL & FAX 0532-46-7806  From rohwerrj at cs.aston.ac.uk Wed Mar 24 13:54:45 1993 From: rohwerrj at cs.aston.ac.uk (rohwerrj) Date: Wed, 24 Mar 93 18:54:45 GMT Subject: postdoctoral research opportunity Message-ID: <1787.9303241854@cs.aston.ac.uk> *************************************************************************** POSTDOCTORAL RESEARCH OPPORTUNITY Dept. of Computer Science and Applied Mathematics Aston University *************************************************************************** The Department of Computer Science and Applied Mathematics at Aston University is seeking a postdoctoral research assistant under SERC grant GR/J17814, "Training Algorithms based on Adaptive Critics". The successful applicant will work with Richard Rohwer in the Neural Computing research group at Aston to develop and study the use of Adaptive Critic credit assignment techniques for training several types of neural network models. This involves transplanting techniques developed mainly for control applications into a different setting. The applicant must hold a PhD degree in Computer Science, Physics, Mathematics, Electrical Engineering, or a similar quantitative science. Mathematical skill, programming experience (preferably with C or C++ under UNIX), and familiarity with neural network models are essential. Aston University's growing neural networks group currently consists of three academic staff and about 10 research students and visiting researchers. The group has access to about 50 networked Sparc stations in the Computer Science Department, in addition to 5 of its own, and further major acquisitions are in progress. The post will be for a period of two years, which can commence at any time between 15 May 1993 and 15 November 1993. Starting salary will be within the range 12638 to 14962 pounds per annum. Application forms and further particulars may be obtained from the Personnel Officer (Academic Staff), quoting Ref: 9307, Aston University, Aston Triangle, Birmingham B4 7ET, England. (Tel: (44 or 0)21 359-0870 (24 hour answerphone), FAX: (44 or 0)21 359-6470).
The closing date for receipt of applications is 30 April 1993.  From harnad at Princeton.EDU Wed Mar 24 18:12:57 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Wed, 24 Mar 93 18:12:57 EST Subject: MATHEMATICAL PRINCIPLES OF REINFORCEMENT: BBS Call for Commentators Message-ID: <9303242312.AA15204@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by PETER KILLEEN on MATHEMATICAL PRINCIPLES OF REINFORCEMENT, that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ MATHEMATICAL PRINCIPLES OF REINFORCEMENT Peter R. Killeen Department of Psychology University of Arizona Tempe, AZ 85281-1104 KEYWORDS: reinforcement, memory, coupling, contingency, contiguity, tuning curves, activation, schedules, trajectories, response rate, mathematical models. ABSTRACT: Effective conditioning requires a correlation between the experimenter's definition of a response and an organism's, but an animal's perception of its own behavior differs from ours. Various definitions of the response are explored experimentally using the slopes of learning curves to infer which comes closest to the organism's definition. The resulting exponentially weighted moving average provides a model of memory which grounds a quantitative theory of reinforcement in which incentives excite behavior and focus the excitement on the responses present in memory at the same time. The correlation between the organism's memory and the behavior measured by the experimenter is given by coupling coefficients derived for various schedules of reinforcement. For simple schedules these coefficients can be concatenated to predict the effects of complex schedules and can be inserted into a generic model of arousal and temporal constraint to predict response rates under any scheduling arrangement. According to the theory, the decay of memory is response-indexed rather than time-indexed. Incentives displace memory for the responses that occur before them and may truncate the representation of the response that brings them about. This contiguity-weighted correlation model bridges opposing views of the reinforcement process and can be extended in a straightforward way to the classical conditioning of stimuli. Placing the short-term memory of behavior in so central a role provides a behavioral account of a key cognitive process. 
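A toy illustration of the memory model sketched in the abstract (an illustrative sketch only, not Killeen's parameterization; the decay constant lam and the 0/1 response codes are made up for the example): the trace is an exponentially weighted moving average that is updated once per response, so it decays with the number of intervening responses rather than with elapsed time.

# Illustrative sketch only (not Killeen's model or parameters): a memory trace
# kept as an exponentially weighted moving average, updated once per response,
# so decay is indexed by responses rather than by clock time.
def update_trace(trace, response_code, lam=0.8):
    # lam is an assumed decay constant; response_code is what the current
    # response contributes to memory (here just 0 or 1).
    return lam * trace + (1.0 - lam) * response_code

trace = 0.0
for code in [1, 0, 0, 1, 1]:      # a hypothetical sequence of responses
    trace = update_trace(trace, code)
print(round(trace, 3))            # each past response is weighted by lam**k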
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.killeen). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.killeen When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From lpratt at franklinite.Mines.Colorado.EDU Mon Mar 22 18:13:18 1993 From: lpratt at franklinite.Mines.Colorado.EDU (Lorien Y. Pratt) Date: Mon, 22 Mar 93 16:13:18 -0700 Subject: Chidambar Ganesh to speak on preprocessing in neural networks Message-ID: <9303222313.AA10350@franklinite.Mines.Colorado.EDU> The spring, 1993 Colorado Machine Learning Colloquium Series Dr. Chidambar Ganesh Division of Engineering Colorado School of Mines, Golden Colorado Some Experiences with Data Preprocessing in Neural Network applications Tuesday March 30, 1993 Room 110, Stratton Hall, on the CSM campus 5:30 pm ABSTRACT In the application of Artificial Neural Systems (ANS) to engineering problems, appropriate representation of the measured sensor signals to be input to the ANS and the desired output response from the ANS are critical to successful network development. In this seminar, three different applications are presented wherein the representation of the input-output data sets proved to be a central issue in training neural networks effectively. The examples to be considered are : 1. Object identification based on ultrasonic measurements. 2. Real-time defect detection in an arc welding process from acoustic emission sensors. 3. Aluminum can color quality based on spectroscopic measurements. The first example deals with classification of 2-D objects from an ultrasonic mapping, and provides a simple yet striking illustration of the concept that utilizing data compression techniques can successfully resolve network learning problems. The latter two case studies relate to process monitoring and control in manufacturing. In both situations, an in-depth understanding of the physical process underlying the generated data was essential to developing meaningful representation schemes, ultimately resulting in useful networks. Suggested background readings: A Neural Network-Based Object Identification System..C. Ganesh, D. Morse, E. Wetherell and J. P. H. Steele. 
Development of an Intelligent Acoustic Emission Sensor Data Processing System -- Final Report. C. Ganesh and Steven M. Lassek. A Neural Network-Based Can Color Diagnostician C. Ganesh, L. Easton, and J. Jones. These readings are available on reserve at the Arthur Lakes Library at CSM. Ask for the reserve package for MACS570, subject: Ganesh. Non-students can check materials out on reserve by providing a driver's license. Open to the Public Refreshments to be served at 5:00pm, prior to the talk. For more information (including a schedule of all talks in this series), contact: Dr. L. Y. Pratt, CSM Dept. of Mathematical and Computer Sciences, lpratt at mines.colorado.edu, (303) 273-3878 Sponsored by: THE CSM DEPARTMENTS OF MATHEMATICAL AND COMPUTER SCIENCES, GEOPHYSICS, DIVISION OF ENGINEERING, AND CRIS The Center for Robotics and Intelligent Systems at the Colorado School of Mines  From dhw at santafe.edu Tue Mar 23 15:44:42 1993 From: dhw at santafe.edu (dhw@santafe.edu) Date: Tue, 23 Mar 93 13:44:42 MST Subject: new paper in neuroprose Message-ID: <9303232044.AA13789@zia> **DO NOT FORWARD TO OTHER GROUPS** The following file has been placed in neuroprose, under the name wolpert.overfitting.ps.Z. Thanks to Jordan Pollack for maintaining this very useful system. ON OVERFITTING AVOIDANCE AS BIAS by David H. Wolpert, The Santa Fe Institute Abstract: In supervised learning it is commonly believed that Occam's razor works, i.e., that penalizing complex functions helps one avoid "overfitting" functions to data, and therefore improves generalization. It is also commonly believed that cross-validation is an effective way to choose amongst algorithms for fitting functions to data. In a recent paper, Schaffer (1993) presents experimental evidence disputing these claims. The current paper consists of a formal analysis of these contentions of Schaffer's. It proves that his contentions are valid, although some of his experiments must be interpreted with caution. Keywords: overfitting avoidance, cross-validation, decision tree pruning, inductive bias, extended Bayesian analysis, uniform priors. To retrieve the file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get wolpert.overfitting.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for wolpert.overfitting.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. unix> uncompress wolpert.overfitting.ps.Z unix> lpr wolpert.overfitting.ps (or however you print postscript)  From tsaih at sun7mis Fri Mar 19 12:32:03 1993 From: tsaih at sun7mis (teache tsaih) Date: 19 Mar 1993 11:32:03 -0600 (CST) Subject: Training XOR with BP Message-ID: <9303190332.AA03805@mis.nccu.edu.tw> People who are surprised by the empirical observation above might be interested in the paper "The Saddle Stationary point In the Back Propagation Networks", in IJCNN'92 Beijing, II, pages 886-892. There I showed mathematically that the "local minimum" phenomenon in the vicinity of the origin point is due to the nature of the degenerate saddle stationary point. That is, the origin point in the XOR problem is a degenerate saddle stationary point.
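A quick numerical check of this claim (an illustrative sketch, not taken from the paper; it assumes a standard 2-2-1 sigmoid network with biases and the error E = 0.5 * sum((y - t)^2) over the four XOR patterns): with all weights and biases set to zero, every unit outputs 0.5 on every pattern and the backpropagated gradient vanishes exactly, which is consistent with the stalled training often reported when weights start at or near the origin.

import numpy as np

# Sketch (assumed 2-2-1 sigmoid architecture, E = 0.5 * sum((y - t)^2)):
# verify that the error gradient is exactly zero at the origin of weight space.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
t = np.array([0., 1., 1., 0.])                 # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gradients(W1, b1, w2, b2):
    h = sigmoid(X @ W1 + b1)                   # hidden activations, shape (4, 2)
    y = sigmoid(h @ w2 + b2)                   # network outputs, shape (4,)
    d_out = (y - t) * y * (1.0 - y)            # output deltas
    g_w2 = h.T @ d_out                         # grad wrt hidden-to-output weights
    g_b2 = d_out.sum()                         # grad wrt output bias
    d_hid = np.outer(d_out, w2) * h * (1.0 - h)
    g_W1 = X.T @ d_hid                         # grad wrt input-to-hidden weights
    g_b1 = d_hid.sum(axis=0)                   # grad wrt hidden biases
    return g_W1, g_b1, g_w2, g_b2

# At the origin every sigmoid outputs 0.5 for every pattern.
grads = gradients(np.zeros((2, 2)), np.zeros(2), np.zeros(2), 0.0)
print([float(np.max(np.abs(g))) for g in grads])   # -> [0.0, 0.0, 0.0, 0.0]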
Ray Tsaih tsaih at mis.nccu.edu.tw  From harry at neuronz.Jpl.Nasa.Gov Thu Mar 25 12:56:03 1993 From: harry at neuronz.Jpl.Nasa.Gov (Harry Langenbacher) Date: Thu, 25 Mar 93 09:56:03 PST Subject: Post-Doc Position Announcement - JPL Message-ID: <9303251756.AA09331@neuronz.Jpl.Nasa.Gov> The Concurrent Processing Devices Group at JPL has two post-doc positions for recent PhD's with neural-net, analog/digital VLSI and/or opto-electronic experience. The positions will have a duration of one to two years. USA citizenship or permanent-resident status is required. One position will be in our Electronic Concurrent Processing Devices Group to concentrate on applications of neural networks and parallel processing devices to problems such as pattern recognition, resource allocation, and optimization, using custom VLSI designs and custom designs of computer sub-systems. The other position will be in our optical-processing group, to work with lasers, computer-generated holograms, and Acousto-Optic Tunable Filters for applications in pattern recognition and other neural-net architectures. We currently work with (analog, digital, and optical) neural net and concurrent processing devices and hardware systems. We build special-purpose and general-purpose analog, digital, mixed-signal, and opto-electronic chips. We develop neural net algorithms that suit our applications and our hardware. For over 7 years we have been a leader in hardware neural nets. If you're interested, please send me a ONE PAGE summary of your qualifications in the above mentioned fields, by e-mail (preferred), US mail, or FAX. Lab: 818-354-9513, FAX: 818-393-4540 e-mail: harry%neuron6 at jpl-mil.jpl.nasa.gov Harry Langenbacher JPL, Mail-Stop 302-231 4800 Oak Grove Dr Pasadena, CA 91109 USA  From harnad at Princeton.EDU Thu Mar 25 14:24:25 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 25 Mar 93 14:24:25 EST Subject: VISUAL STABILITY ACROSS SACCADIC EYE MOVEMENTS: BBS Call for Comm. Message-ID: <9303251924.AA20674@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by BRUCE BRIDGEMAN et al on VISUAL STABILITY ACROSS SACCADIC EYE MOVEMENTS that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ A THEORY OF VISUAL STABILITY ACROSS SACCADIC EYE MOVEMENTS Bruce Bridgeman Program in Experimental Psychology University of California Santa Cruz, CA 95064 A.H.C.
van der Heijden Department of Psychology Leiden University Wassenaarsweg 52 2333 AK Leiden, The Netherlands Boris M Velichkovsky Department for Psychology and Knowledge Engineering Moscow State University Moscow 103009, Russia KEYWORDS: space constancy, proprioception, efference copy, space perception, saccade, eye movement, modularity, visual stability. ABSTRACT: We identify two aspects of the problem of how there is perceptual stability despite an observer's eye movements. The first, visual direction constancy, is the (egocentric) stability of apparent positions of objects in the visual world relative to the perceiver. The second, visual position constancy, is the (exocentric) stability of positions of objects relative to each other. We analyze the constancy of visual direction despite saccadic eye movements. Three information sources have been proposed to enable the visual system to achieve stability: the structure of the visual field, proprioceptive inflow, and a copy of neural efference or outflow to the extraocular muscles. None of these sources by itself provides adequate information to achieve visual direction constancy; present evidence indicates that all three are used. Our final question concerns the information processing operations that result in a stable world. The three traditional solutions involve elimination, translation, and evaluation. All are rejected. From a review of the physiological and psychological evidence we conclude that no subtraction, compensation or evaluation need take place. The problem for which these solutions were developed turns out to be a false one. We propose a "calibration" solution: correct spatiotopic positions are calculated anew for each fixation. Inflow, outflow, and retinal sources are used in this calculation: saccadic suppression of displacement bridges the errors between these sources and the actual extent of movement. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.bridgeman). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.bridgeman When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. A more elaborate version of these instructions for the U.K. is available on request (thanks to Brian Josephson). ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). 
-------------------------------------------------------------  From harnad at Princeton.EDU Thu Mar 25 14:15:03 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 25 Mar 93 14:15:03 EST Subject: MOTOR INTENTION, IMAGERY AND REPRESENTATION: BBS Call for Commentators Message-ID: <9303251915.AA20557@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by MARC JEANNEROD, on MOTOR INTENTION, IMAGERY AND REPRESENTATION, that has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ THE REPRESENTING BRAIN: NEURAL CORRELATES OF MOTOR INTENTION AND IMAGERY Marc Jeannerod Vision et Motricite INSERM Unite 94 16 avenue du Doyen Lepine 69500 Bron France KEYWORDS: affordances, goals, intention, motor imagery, motor schemata, neural codes, object manipulation, planning, posterior parietal cortex, premotor cortex, representation. ABSTRACT: This target article concerns how motor actions are neurally represented and coded. Action planning and motor preparation can be studied using motor imagery. A close functional equivalence between motor imagery and motor preparation is suggested by the positive effects of imagining movements on motor learning, the similarity between the neural structures involved, and the similar physiological correlates observed in both imagining and preparing. The content of motor representations can be inferred from motor images at a macroscopic level: from global aspects of the action (the duration and amount of effort involved) and from the motor rules and constraints which predict the spatial path and kinematics of movements. A microscopic neural account of the representation of object-oriented action is described. Object attributes are processed in different neural pathways depending on the kind of task the subject is performing. During object-oriented action, a pragmatic representation is activated in which object affordances are transformed into specific motor schemata independently of other tasks such as object recognition. Animal as well as clinical data implicate posterior parietal and premotor cortical areas in schema instantiation. A mechanism is proposed that is able to encode the desired goal of the action and is applicable to different levels of representational organization.
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.jeannerod). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- To retrieve a file by ftp from a Unix/Internet site, type either: ftp princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as per instructions (make sure to include the specified @), and then change directories with: cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.jeannerod When you have the file(s) you want, type: quit In case of doubt or difficulty, consult your system manager. A more elaborate version of these instructions for the U.K. is available on request (thanks to Brian Josephson)> ---------- Where the above procedures are not available (e.g. from Bitnet or other networks), there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From rose at apple.com Fri Mar 26 13:29:10 1993 From: rose at apple.com (Dan Rose) Date: Fri, 26 Mar 1993 10:29:10 -0800 Subject: Job openings at Apple (permanent and summer intern) Message-ID: <9303261827.AA23745@taurus.apple.com> The Information Technology project in Apple's Advanced Technology Group is now hiring for one permanent position and two summer internships. One of the intern positions (#2 listed below) is specifically aimed at people with neural net experience; the other two may also be of interest to this audience. Note: E-mail submissions are STRONGLY preferred. ASCII files only, please. (More time unbinhexing, latexing, etc. means less time for us to read your resume!) Apple Computer has a corporate commitment to the principle of diversity. In that spirit, we welcome applications from all individuals. Women, minorities, veterans and disabled individuals are encouraged to apply. --------------------------- PERMANENT POSITION ------------------------- ENGINEER/SCIENTIST Job description: Join a team conducting research on new approaches to finding, sharing, organizing, and manipulating information for content-aware systems. Emphasis on implementation of experimental information and communication systems. Requires: MS in Computer Science or BS with equivalent experience with strong programming skills. Experience in information retrieval, hypertext, interface design, or related field. Preferred: Knowledge of Macintosh Toolbox, dynamic languages (LISP, Smalltalk, etc.), GUI programming. Familiarity with common text-indexing methods. E-mail resumes to infotech-recruit at apple.com, or send to InfoTech Recruiting c/o Nancy Massung Apple Computer, Inc. 
MS 301-4A One Infinite Loop Cupertino, CA 95014 ----------------------------- SUMMER POSITIONS ------------------------------ ENGINEER/SCIENTIST Intern (summer) #1 Job description: Work with senior researchers on the application of numerical methods to information retrieval (IR) systems. Assist on the design, implementation, user testing and performance evaluation of such systems. Requires: Graduate or upper division undergraduate student in computer science, cognitive science, information retrieval or other relevant program. Macintosh programming experience, the candidate should be able to write an application program. MPW C. Basic knowledge on numerical linear algebra. Preferred: Background on numerical methods and/or statistics. Smalltalk programming, familiarity with common text-indexing techniques. Some exposure to human-computer interaction issues. Knowledge on the following topics would be ideal: the vector model in IR, singular value decomposition and factor analysis. ENGINEER/SCIENTIST Intern (summer) #2 Job Description: Work with senior researchers to experiment with the use of neural network and other learning methods for information retrieval and organization. Requires: Graduate or upper division undergraduate student with experience in neural networks. Lisp programming with CLOS or other object system. Interest in information retrieval, hypertext, corpus linguistics, or related field. Preferred: Macintosh programming experience. Some exposure to human-computer interaction issues. Use of mapping techniques such as vector quantization or multidimensional scaling. Familiarity with common text-indexing methods. E-mail resumes to infotech-intern-recruit at apple.com, or send to InfoTech Internships c/o Nancy Massung Apple Computer, Inc. MS 301-4A One Infinite Loop Cupertino, CA 95014 Please indicate which position you are interested in.  From heiniw at sun1.eeb.ele.tue.nl Fri Mar 26 05:27:41 1993 From: heiniw at sun1.eeb.ele.tue.nl (Heini Withagen) Date: Fri, 26 Mar 1993 11:27:41 +0100 (MET) Subject: Paper on RBF-networks available Message-ID: <9303261027.AA01424@sun1.eeb.ele.tue.nl> A non-text attachment was scrubbed... Name: not available Type: text Size: 2588 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/244a65b7/attachment-0001.ksh From rreilly at nova.ucd.ie Thu Mar 25 09:41:48 1993 From: rreilly at nova.ucd.ie (Ronan Reilly) Date: Thu, 25 Mar 1993 14:41:48 +0000 Subject: Two preprints available Message-ID: The following two papers, to be presented at the upcoming Cognitive Science conference in Boulder, have been placed in Jordan Pollack's Neuroprose archive. Details on how to retrieve the papers, which are in compressed postscript form, are given at the end of this message. The papers are short (six pages), as required for inclusion in the CogSci'93 proceedings. Longer versions of both papers are currently in preparation: ------------------------------------------------------------------------------- Boundary effects in the linguistic representations of simple recurrent networks Ronan Reilly Dept. of Computer Science University College Dublin Belfield, Dublin 4 Ireland Abstract This paper describes a number of simulations which show that SRN representations exhibit interactions between memory and sentence and clause boundaries reminiscent of effects described in the early psycholinguistic literature (Jarvella, 1971; Caplan, 1972). 
Moreover, these effects can be accounted for by the intrinsic properties of SRN representations without the need to invoke external memory mechanisms as has conventionally been done. ------------------------------------------------------------------------------- A Connectionist Attentional Shift Model of Eye-Movement Control in Reading Ronan Reilly Department of Computer Science University College Dublin Belfield, Dublin 4, Ireland Abstract A connectionist attentional-shift model of eye-movement control (CASMEC) in reading is described. The model provides an integrated account of a range of saccadic control effects found in reading, such as word-skipping, refixation, and of course normal saccadic progression. -------------------------------------------------------------------------------- FTP INSTRUCTIONS unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: ftp> cd pub/neuroprose ftp> binary ftp> get reilly.boundary.ps.Z ftp> get reilly.eyemove.ps.Z ftp> quit unix% zcat reilly.boundary.ps.Z | lpr -P unix% zcat reilly.eyemove.ps.Z | lpr -P ------------------------- Ronan Reilly Dept. of Computer Science University College Dublin Belfield, Dublin 4 IRELAND rreilly at ccvax.ucd.ie  From unni at neuro.cs.gmr.com Sat Mar 27 01:45:54 1993 From: unni at neuro.cs.gmr.com (K.P.Unnikrishnan) Date: Sat, 27 Mar 93 01:45:54 EST Subject: Preprint- Alopex: A corr. based learning alg. Message-ID: <9303270645.AA02269@neuro.cs.gmr.com> The following tech report is now available. For a hard copy, please send your surface mail address to venu at neuro.cs.gmr.com. Unnikrishnan ------------------------------------------------------------ Alopex: A Correlation-Based Learning Algorithm for Feed-Forward and Recurrent Neural Networks K. P. Unnikrishnan General Motors Research Laboratories and K. P. Venugopal Florida Atlantic University We present a learning algorithm for neural networks, called Alopex. Instead of error gradient, Alopex uses local correlations between changes in individual weights and changes in the global error measure. The algorithm does not make any assumptions about transfer functions of individual neurons, and does not explicitly depend on the functional form of the error measure. Hence, it can be used in networks with arbitrary transfer functions and for minimizing a large class of error measures. The learning algorithm is the same for feed- forward and recurrent networks. All the weights in a network are updated simultaneously, using only local computations. This allows complete parallelization of the algorithm. The algorithm is stochastic and it uses a `temperature' parameter in a manner similar to that in simulated annealing. A heuristic `annealing schedule' is presented which is effective in finding global minima of error surfaces. In this paper, we report extensive simulation studies illustrating these advantages and show that learning times are comparable to those for standard gradient descent methods. Feed-forward networks trained with Alopex are used to solve the MONK's problems and symmetry problems. Recurrent networks trained with the same algorithm are used for solving temporal XOR problems. Scaling properties of the algorithm are demonstrated using encoder problems of different sizes and advantages of appropriate error measures are illustrated using a variety of problems.  
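For readers who want a concrete feel for the update rule described in the abstract, here is a minimal sketch (not the authors' code; the quadratic toy error, the step size delta, and the annealing factor are assumptions, and the acceptance rule is a logistic function of C/T with correlation C = delta_w * delta_E, which is one common statement of the Alopex rule):

import numpy as np

# Minimal Alopex-style sketch (illustration only, not the authors' implementation).
# Each weight moves by a fixed step +/- delta; the direction is chosen
# stochastically from the correlation between the previous weight change and
# the previous change in the global error.
rng = np.random.default_rng(0)

def toy_error(w):
    # Stand-in error measure; Alopex only ever sees this scalar value.
    return float(np.sum((w - np.array([1.0, -2.0, 0.5])) ** 2))

delta = 0.1                        # fixed step size (assumed)
T = 0.1                            # "temperature": large T gives nearly random steps
w = rng.normal(size=3)
prev_w, prev_E = w.copy(), toy_error(w)

for _ in range(3000):
    E = toy_error(w)
    C = (w - prev_w) * (E - prev_E)            # per-weight correlation term
    p = 1.0 / (1.0 + np.exp(-C / T))           # probability of stepping by -delta
    step = np.where(rng.random(w.shape) < p, -delta, delta)
    prev_w, prev_E = w.copy(), E
    w = w + step
    T = max(1e-4, T * 0.995)                   # crude stand-in for the annealing schedule

print(toy_error(w))                # error should end up small (limited by the step size)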
From POCHEC%unb.ca at UNBMVS1.csd.unb.ca Tue Mar 30 14:15:51 1993
From: POCHEC%unb.ca at UNBMVS1.csd.unb.ca (POCHEC%unb.ca@UNBMVS1.csd.unb.ca)
Date: Tue, 30 Mar 93 15:15:51 AST
Subject: Call for Papers (revised deadline)
Message-ID:

==================================================================
==================================================================

                    Final Call for Participation

                      The 5th UNB AI Symposium

                 *********************************
                 *                               *
                 *            Theme:             *
                 *      ARE WE MOVING AHEAD?     *
                 *                               *
                 *********************************

                        August 11-14, 1993
                    Sheraton Inn, Fredericton
                      New Brunswick, Canada

Advisory Committee
==================
N. Ahuja, Univ. of Illinois, Urbana
W. Bibel, ITH, Darmstadt
D. Bobrow, Xerox PARC
M. Fischler, SRI
P. Gardenfors, Lund Univ.
S. Grossberg, Boston Univ.
J. Haton, CRIN
T. Kanade, CMU
R. Michalski, George Mason Univ.
T. Poggio, MIT
Z. Pylyshyn, Univ. of Western Ontario
O. Selfridge, GTE Labs
Y. Shirai, Osaka Univ.

Program Committee
=================
The international program committee will consist of approximately 40 members from all main fields of AI and from Cognitive Science.

We invite researchers from the various areas of Artificial Intelligence, Cognitive Science and Pattern Recognition, including Vision, Learning, Knowledge Representation and Foundations, to submit articles that assess or review the progress made so far in their respective areas, as well as the relevance of that progress to the whole enterprise of AI. Other papers that do not address the theme are also invited.

Feature
=======
Four 70-minute invited talks and five panel discussions are devoted to the chosen topic: "Are we moving ahead: Lessons from Computer Vision." The speakers include (in alphabetical order)

   * Lev Goldfarb
   * Stephen Grossberg
   * Robert Haralick
   * Tomaso Poggio

Such a concentrated analysis of the area will be undertaken for the first time. We feel that the "Lessons from Computer Vision" are of relevance to the entire AI community.

Information for Authors
=======================
Now: Fill out the form below and email it.
---
April 10, 1993:
--------------
Four copies of an extended abstract (maximum of 4 pages including references) should be sent to the conference chair.

May 15, 1993:
-------------
Notification of acceptance will be mailed.

July 1, 1993:
-------------
Camera-ready copy of the paper is due.

Conference Chair: Lev Goldfarb
Email: goldfarb at unb.ca
Mailing address:
   Faculty of Computer Science
   University of New Brunswick
   P. O. Box 4400
   Fredericton, New Brunswick
   Canada E3B 5A3
Phone: (506) 453-4566
FAX: (506) 453-3566

Symposium location:
The symposium will be held at the Sheraton Inn, Fredericton, which overlooks the beautiful Saint John River.

IMMEDIATE REPLY FORM
====================
(please email to goldfarb at unb.ca)

I would like to submit a paper.

Title: _____________________________________
       _____________________________________
       _____________________________________

I would like to organize a session.
Title: _____________________________________
       _____________________________________
       _____________________________________

Name: _____________________________________
      _____________________________________

Department: _____________________________________

University/Company: _____________________________________
                    _____________________________________
                    _____________________________________

Address: _____________________________________
         _____________________________________
         _____________________________________

Prov/State: _____________________________________

Country: _____________________________________

Telephone: _____________________________________

Email: _____________________________________

Fax: _____________________________________

From FEHLAUER at msscc.med.utah.edu Tue Mar 30 18:55:00 1993
From: FEHLAUER at msscc.med.utah.edu (FEHLAUER@msscc.med.utah.edu)
Date: Tue, 30 Mar 93 16:55 MST
Subject: Research and Clinical Positions at Univ. Utah and VA GRECC
Message-ID: <5B5B5A7A317F007AA3@msscc.med.utah.edu>

Colleagues,

The following announcement represents an exciting opportunity to participate in a well-funded, multidisciplinary research and clinical program. Please feel free to contact me with questions.

Steve Fehlauer, M.D.
Research Investigator, SLC VAMC GRECC
Assistant Professor of Medicine, University of Utah School of Medicine
VA Phone 801-582-1565 Ext 2468
Univ Phone 801-581-2628
E-Mail Fehlauer at msscc.med.utah.edu

*****************************************************************************

Geriatric Internal Medicine

The Salt Lake City Geriatric Research, Education and Clinical Center (GRECC) and the University of Utah School of Medicine are recruiting individuals to join the faculty of the GRECC/University program in Geriatric Internal Medicine. Candidates must be BE/BC in Internal Medicine and Geriatrics.

Facilities include outpatient clinics, an inpatient Geriatric Evaluation and Management Unit, an outpatient Geriatric Medicine/Psychiatry Program, "wet labs" with capabilities in cell and bone marrow culture, cell signalling and molecular biology, and computation facilities including Unix RISC workstations, pen-based clinical computers, PC workstations and a LAN.

Interdepartmental collaborative research is performed in Medical Computation and Modelling (artificial neural networks, expert systems, fuzzy logic, and semantic networks), real-time clinical decision support, nursing and medical information system design, computer-assisted medical education, biology of aging, cytokines and immunity during aging, aerobic exercise and cognition during aging, and cellular neuroscience.

Salt Lake City offers a low cost of living and abundant recreation and arts.

Appointments will be in the SLC GRECC and the University of Utah Division of Human Development and Aging. Faculty rank is dependent upon qualifications.

Send curriculum vitae to:
   Personnel Service (05)
   Attn: Pruett
   VA Medical Center
   Salt Lake City, UT 84148

For more information, call: (801) 582-1565 Ext. 2475

The Department of Veterans Affairs and the University of Utah are Affirmative Action / Equal Opportunity Employers

******************************************************************************