From janetw at cs.uq.oz.au Mon Nov 1 14:57:39 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Mon, 1 Nov 93 14:57:39 EST Subject: Position available Message-ID: <9311010457.AA25521@client> The following advert is for a position in the Department of Computer Science, at the University of Queensland, at the highest level of the academic scale. It is open to any area of computing, and hence may interest researchers in cognitive science and/or neural networks. UQ has a strong inter-disciplinary cognitive science program between the departments of computer science, psychology, linguistics and philosophy, and neural network research groups in computer science, psychology and engineering. The University is one of the best in Australia, and Brisbane has a delightful climate, situated on the coastal plain between the mountains and the sea. Inquiries can be directed to Professor Andrew Lister, as mentioned below, or I am happy to answer informal questions about the Department, University or other aspects of academic life in Brisbane. Janet Wiles Departments of Computer Science and Psychology University of Queensland QLD 4072 AUSTRALIA email: janetw at cs.uq.oz.au ------------------------------- UNIVERSITY OF QUEENSLAND PROFESSOR OF COMPUTER SCIENCE The successful applicant will have an outstanding record of research leadership and achievement in Computer Science. Teaching experience is expected, as is demonstrable capacity for collaboration with industry and attraction of external funds. The appointee will be expected to contribute substantially to Departmental research, preferably in a field which can exploit or extend current strengths. He or she will also be expected to teach at both undergraduate and postgraduate levels, and to contribute to Departmental policy making. Capacity and willingness to assume the Department headship at an appropriate time will be an important selection criterion. The Department is one of the strongest in Australia with 26 full-time academic staff, including 5 other full Professors, over 40 research staff, and 23 support staff. There are around 500 equivalent full-time students, with a large postgraduate school including 55 PhD students. The Department has been designated by the Federal Government as the Key Centre for Teaching and Research in Software Technology. The Department also contains the Software Verification Research Centre, a Special Research Centre of the Australian Research Council, and is a major partner in the Cooperative Research Centre for Distributed Systems Technology. Current research strengths include formal methods and tools for software development, distributed systems, information systems, programming languages, cognitive science, and algorithm design and analysis. Salary: $77,900 plus superannuation. A market loading may be payable in some circumstances. For further information please contact the Head of Department, Professor Andrew Lister (lister at cs.uq.oz.au), 07-365 3168 or international +61 7 365 3168. Applications: (4 copies) should be made to the Director, Personnel Services, The University of Queensland, St Lucia, Queensland 4072, Australia. Closing Date: 10 Jan 1994 From Connectionists-Request at cs.cmu.edu Mon Nov 1 00:05:14 1993 From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu) Date: Mon, 01 Nov 93 00:05:14 EST Subject: Bi-monthly Reminder Message-ID: <15574.752130314@B.GP.CS.CMU.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** This note was last updated January 4, 1993. 
This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. Happy hacking. -- Dave Touretzky & David Redish --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new text books related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. 
- Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe. ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stands for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. ------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8). 2. Login as user anonymous with password your username. 3. 'cd' directly to one of the following directories: /usr/connect/connectionists/archives /usr/connect/connectionists/bibliographies 4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory. 5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.
Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Assistant Professor CIS Dept/OSU Laboratory for AI Research 2036 Neil Ave Email: pollack at cis.ohio-state.edu Columbus, OH 43210 Phone: (614) 292-4890 APPENDIX: Here is an example of naming and placing a file: gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps gvax> compress rosenblatt.reborn.ps gvax> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. ftp> put rosenblatt.reborn.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. Here is the INDEX entry: rosenblatt.reborn.ps.Z rosenblatt at gvax.cs.cornell.edu 17 pages. Boastful statements by the deceased leader of the neurocomputing field. Let me know when it is in place so I can announce it to Connectionists at cmu. 
Frank ^D AFTER FRANK RECEIVES THE GO-AHEAD, AND HAS A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: gvax> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/rosenblatt.reborn.ps.Z The file rosenblatt.reborn.ps.Z is now available for copying from the Neuroprose repository: Born Again Perceptrons (17 pages) Frank Rosenblatt Cornell University ABSTRACT: In this unpublished paper, I review the historical facts regarding my death at sea: Was it an accident or suicide? Moreover, I look over the past 23 years of work and find that I was right in my initial overblown assessments of the field of neural networks. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "nn-bench-request at cs.cmu.edu". From Philip.Resnik at East.Sun.COM Mon Nov 1 10:28:18 1993 From: Philip.Resnik at East.Sun.COM (Philip Resnik - Sun Microsystems Labs BOS) Date: Mon, 1 Nov 93 10:28:18 EST Subject: ACL-94 Call for papers Message-ID: <9311011528.AA10418@caesar.East.Sun.COM> Hi, I'd like to follow up on Gary Cottrell's note about the ACL-94 conference with two brief comments. First, statistical (and statistical-symbolic) approaches to NLP are an important area right now, and people doing that kind of work (of which I am one) share a great many concerns with members of this list. Nonetheless, there seems to be little communication between the groups. As an example, David Wolpert and David Wolf's recent reports (on estimating entropy, etc. given finite samples), publicized on this list, target an issue of central concern to people doing statistical NLP, yet I suspect that few computational linguists have come across them. Second, I want to draw your attention to the ACL conference's student sessions (buried in the middle of the call for papers). These sessions are intended to give students a chance to present work in progress, as opposed to completed work, particularly so they can get feedback from more senior members of the computational linguistics community with whom they might not otherwise come into contact. The deadline is somewhat later than for the ACL main sessions (February 1), and the submissions are reviewed by a committee comprising both students and faculty members. It's a very useful forum, by no means inferior to the main sessions of the conference, and I STRONGLY encourage students doing language-related connectionist work to get involved. Philip From sbh at eng.cam.ac.uk Tue Nov 2 08:27:25 1993 From: sbh at eng.cam.ac.uk (S.B. Holden) Date: Tue, 2 Nov 93 13:27:25 GMT Subject: Technical Report Message-ID: <10707.9311021327@tw700.eng.cam.ac.uk> The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department. Quantifying Generalization in Linearly Weighted Neural Networks Sean B. 
Holden and Martin Anthony Technical Report CUED/F-INFENG/TR113 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract The Vapnik-Chervonenkis Dimension has proven to be of great use in the theoretical study of generalization in artificial neural networks. The `probably approximately correct' learning framework is described and the importance of the VC dimension is illustrated. We then investigate the VC dimension of certain types of linearly weighted neural networks. First, we obtain bounds on the VC dimensions of radial basis function networks with basis functions of several types. Secondly, we calculate the VC dimension of polynomial discriminant functions defined over both real and binary-valued inputs. ************************ How to obtain a copy ************************ a) Via FTP: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get holden_tr113.ps.Z ftp> quit unix> uncompress holden_tr113.ps.Z unix> lpr holden_tr113.ps (or however you print PostScript) b) Via postal mail: Request a hardcopy from Sean B. Holden Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England. or email me: sbh at eng.cam.ac.uk This report also appears as London School of Economics Mathematics Preprint number LSE-MPS-42, December, 1992. From tgd at chert.CS.ORST.EDU Tue Nov 2 16:28:35 1993 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Tue, 2 Nov 93 13:28:35 PST Subject: papers of interest in Machine Learning 13:2/3 Message-ID: <9311022128.AA27174@curie.CS.ORST.EDU> Machine Learning Volume 13: 2/3 is a special issue devoted to Genetic Algorithms (J. Grefenstette, Ed.) Two of the papers in the issue are of potential interest to this list: Genetic Reinforcement Learning for Neurocontrol Problems D. Whitley, S. Dominic, R. Das, C. W. Anderson What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation S. Forrest, M. Mitchell For ordering information, contact Kluwer at world.std.com --Tom From jbower at smaug.bbb.caltech.edu Tue Nov 2 18:25:08 1993 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Tue, 2 Nov 93 15:25:08 PST Subject: Call for papers CNS*94 Message-ID: <9311022325.AA28306@smaug.bbb.caltech.edu> CALL FOR PAPERS Third Annual Computation and Neural Systems Meeting CNS*94 July 21 - 25, 1994 Monterey, California DEADLINE FOR SUMMARIES & ABSTRACTS IS January 26, 1994 This is the third annual meeting of an interdisciplinary conference intended to address the broad range of research approaches and issues involved in the field of computational neuroscience. The last two years' meetings, in San Francisco (CNS*92) and Washington DC (CNS*93), brought experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians together to consider the functioning of biological nervous systems. Peer reviewed papers were presented at the meeting on a range of subjects related to understanding how biological neural systems compute. As in previous years, the meeting is intended to equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. The main body of the meeting will take place at the Monterey Doubletree hotel and include plenary, contributed and poster sessions. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume.
Following the regular session, there will be two days of focused workshops at the natural ocean side setting of the Asilomar Conference Center on the Monterey Peninsula. With this announcement we solicit the submission of presented papers to the meeting. All papers will be refereed. Authors should send original research contributions in the form of a 1000-word (or less) summary and a separate single page 50-100 word abstract clearly stating their results. Summaries are for program committee use only. Abstracts will be published in the conference program. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list: Presentation categories: A. Theory and Analysis B. Modeling and Simulation C. Experimental D. Tools and Techniques Themes: A. Development B. Cell Biology C. Excitable Membranes and Synaptic Mechanisms D. Neurotransmitters, Modulators, Receptors E. Sensory Systems 1. Somatosensory 2. Visual 3. Auditory 4. Olfactory 5. Other systems F. Motor Systems and Sensory Motor Integration G. Learning and Memory H. Behavior I. Cognitive J. Disease Include addresses of all authors on the front of the summary and the abstract including the E-mail address for EACH author. Indicate on the front of the summary to which author correspondence should be addressed. Program committee decisions will be sent to the correspondence author only. Submissions will not be considered if they lack category information, separate abstract sheets, author addresses, or are late. Submissions can be made by surface mail ONLY by sending 6 copies of the abstract and summary to: CNS*94 Submissions Division of Biology 216-76 Caltech Pasadena, CA. 91125 Submissions must be postmarked by January 26th, 1994. Registration information: All submitting authors will be sent registration material automatically. Others interested in obtaining registration material once it becomes available should surface mail to the above address or email to: cp at smaug.cns.caltech.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CNS*94 Organizing Committee: Co-meeting chair logistics - John Miller, UC Berkeley Co-meeting chair program - Jim Bower, Caltech Program committee John Rinzel, NIDDK/NIH Gwen Jacobs, UC Berkeley Catherine Carr, University of Maryland, College Park Dennis Glanzman, NIMH/NIH Charles Wilson, University of Tennessee, Memphis Proceedings - Frank Eeckman, Lawrence Livermore National Labs. Workshops - Mike Hasselmo, Harvard University European organizer - Erik DeSchutter (Belgium) Middle Eastern organizer - Idan Segev, Jerusalem Down under organizer - Mike Paulin (New Zealand) South American organizer - Renato Sabbatini (Brazil) ============================================================ In each of the past two years, the meeting has been able to offer travel grants to students presenting papers with support from the National Science Foundation. Potential participants interested in the content of last year's meeting can ftp last year's agenda as follows (you enter text in quotes): yourhost% "ftp 131.215.137.69" Connected to 131.215.137.69. 220 mordor FTP server (SunOS 4.1) ready. Name (131.215.137.69:): "ftp" 331 Guest login ok, send ident as password. Password: "yourname at yourhost.yourside.yourdomain" 230 Guest login ok, access restrictions apply. ftp> "cd cns94" 250 CWD command successful. ftp> "get agenda93" 200 PORT command successful.
150 ASCII data connection for agenda93 (131.215.137.60,2916) (12761 bytes). 226 ASCII Transfer complete. local: agenda93 remote: agenda93 13145 bytes received in 0.26 seconds (49 Kbytes/s) ftp> "quit" 221 Goodbye. yourhost% (use any editor to look at the file) ======================================================= DEADLINE FOR SUMMARIES & ABSTRACTS IS January 26, 1994 please post From graham at charles-cross.plymouth.ac.uk Wed Nov 3 11:17:07 1993 From: graham at charles-cross.plymouth.ac.uk (Graham Smith) Date: Wed, 3 Nov 93 16:17:07 GMT Subject: No subject Message-ID: <97.9311031617@cx.plym.ac.uk> Subject: Dynamic Binding Cc: neuron-request at edu.upenn.psych.cattell graham It strikes me that an obvious solution to the binding problem has been overlooked in our rush to study phase locking in oscillatory networks. ;-) I have recently trained a simple multi-layer feed-forward network using back-propagation to auto-associate patterns which simultaneously describe two items. The patterns consist of four features (red, blue, square, triangle) which are enumerated over the two items. The domain allows 16 two-item patterns (e.g. "red square and blue triangle" or "blue triangle and blue square") and 4 single-item patterns. The network had 8 input units, 4 hidden units and 8 output units. It was successfully trained to auto-associate 15 of the 20 patterns and was able to correctly auto-associate the 5 previously unseen patterns. This result is unsurprising: the network has simply learned the regularities of the set of bit patterns. But I will argue that the network can be described at the symbolic level as performing dynamic binding. Binding does not occur at either the input or output layer as both of these representations are enumerated. However, a hidden layer activation pattern is a transformation of the input pattern which contains sufficient information to allow its transformation back to the original by the hidden to output layer of weights. Such descriptions are dubbed holistic representations by RAAM enthusiasts. Furthermore, van Gelder argues that holistic representations are functionally compositional and that truly connectionist representations have functional rather than concatenative compositionality. Phase synchrony is a concatenative compositional putative binding mechanism and is a hybrid approach rather than connectionist. The holistic representation of "red square and blue triangle" is not ambiguous. It could not be confused for the holistic representation of "blue square and red triangle". The holistic representation is performing binding. To be more accurate, binding does not literally take place at the subsymbolic level. No variables are concatenatively bound to constants. Subsets of features are not "glued" together. Rather dynamic binding is a symbolic level approximate description of the subsymbolic process and subsymbolic "binding" is a functionally compositional state-space representation. I hope to publish the above-mentioned work but before doing so I shall be grateful for some feedback either to reassure myself that there is something here worth publishing or to spare my blushes with a wider audience.
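For anyone who wants to try the experiment for themselves, here is a minimal NumPy sketch of the kind of 8-4-8 auto-associator described above. The 8-4-8 architecture and the red/blue/square/triangle feature coding are taken from the description; the exact input encoding, learning rate, epoch count and the particular 15/5 training split are illustrative assumptions rather than the original setup, and whether all five held-out patterns come out correctly will depend on the random seed and training schedule.

import numpy as np

rng = np.random.default_rng(0)

# Feature order per item: red, blue, square, triangle.
colours = [(1, 0), (0, 1)]   # red, blue
shapes = [(1, 0), (0, 1)]    # square, triangle
items = [c + s for c in colours for s in shapes]   # 4 possible items

# 16 two-item patterns plus 4 single-item patterns (empty second slot) = 20.
pairs = [a + b for a in items for b in items]
singles = [i + (0, 0, 0, 0) for i in items]
patterns = np.array(pairs + singles, dtype=float)

perm = rng.permutation(len(patterns))
train, test = patterns[perm[:15]], patterns[perm[15:]]   # train on 15, hold out 5

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

n_in, n_hid, n_out = 8, 4, 8
W1 = rng.normal(0.0, 0.5, (n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(0.0, 0.5, (n_hid, n_out)); b2 = np.zeros(n_out)
lr = 0.5

for epoch in range(20000):
    h = sigmoid(train @ W1 + b1)        # hidden-layer ("holistic") representation
    o = sigmoid(h @ W2 + b2)            # reconstruction of the two-item scene
    err = o - train                     # auto-association: target equals input
    d_o = err * o * (1.0 - o)           # standard back-propagation of squared error
    d_h = (d_o @ W2.T) * h * (1.0 - h)
    W2 -= lr * h.T @ d_o;     b2 -= lr * d_o.sum(axis=0)
    W1 -= lr * train.T @ d_h; b1 -= lr * d_h.sum(axis=0)

# Generalisation test: are the held-out scenes reconstructed with the
# correct feature-to-item assignments?
h_test = sigmoid(test @ W1 + b1)
print(np.round(sigmoid(h_test @ W2 + b2)))
print(test)

The interesting check is the hidden layer itself: the activation patterns for, say, "red square and blue triangle" and "blue square and red triangle" must differ if the hidden layer really is carrying the binding rather than just the unordered feature set.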
Graham Smith Centre for Intelligent Systems University of Plymouth England From mwitten at chpc.utexas.edu Wed Nov 3 12:37:13 1993 From: mwitten at chpc.utexas.edu (mwitten@chpc.utexas.edu) Date: Wed, 3 Nov 93 11:37:13 CST Subject: URGENT: DEADLINE CHANGE FOR WORLD CONGRESS Message-ID: <9311031737.AA08913@morpheus.chpc.utexas.edu> UPDATE ON DEADLINES FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE, PUBLIC HEALTH, AND BIOTECHNOLOGY 24-28 April 1994 Hyatt Regency Hotel Austin, Texas ----- (Feel Free To Cross Post This Announcement) ---- Due to a confusion in the electronic distribution of the congress announcement and deadlines, as well as incorrect deadlines appearing in a number of society newsletters and journals, we are extending the abstract submission deadline for this congress to 31 December 1993. We apologize to those who were confused over the differing deadline announcements and hope that this change will allow everyone to participate. For congress details: To contact the congress organizers for any reason use any of the following pathways: ELECTRONIC MAIL - compmed94 at chpc.utexas.edu FAX (USA) - (512) 471-2445 PHONE (USA) - (512) 471-2472 GOPHER: log into the University of Texas System-CHPC select the Computational Medicine and Allied Health menu choice ANONYMOUS FTP: ftp.chpc.utexas.edu cd /pub/compmed94 (all documents and forms are stored here) POSTAL: Compmed 1994 University of Texas System CHPC Balcones Research Center 10100 Burnet Road, 1.154CMS Austin, Texas 78758-4497 SUBMISSION PROCEDURES: Authors must submit 5 copies of a single-page 50-100 word abstract clearly discussing the topic of their presentation. In addition, authors must clearly state their choice of poster, contributed paper, tutorial, exhibit, focused workshop or birds of a feather group along with a discussion of their presentation. Abstracts will be published as part of the preliminary conference material. To notify the congress organizing committee that you would like to participate and to be put on the congress mailing list, please fill out and return the form that follows this announcement. You may use any of the contact methods above. If you wish to organize a contributed paper session, tutorial session, focused workshop, or birds of a feather group, please contact the conference director at mwitten at chpc.utexas.edu . The abstract may be submitted electronically to compmed94 at chpc.utexas.edu or by mail or fax. There is no official format. If you need further details, please contact me. Matthew Witten Congress Chair mwitten at chpc.utexas.edu From kolen-j at cis.ohio-state.edu Thu Nov 4 10:25:44 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Thu, 4 Nov 93 10:25:44 -0500 Subject: Dynamic Binding In-Reply-To: Graham Smith's message of Wed, 3 Nov 93 16:17:07 GMT <97.9311031617@cx.plym.ac.uk> Message-ID: <9311041525.AA12055@pons.cis.ohio-state.edu> As Graham Smith writes: > It strikes me that an obvious solution to the binding problem has been > overlooked in our rush to study phase locking in oscillatory networks. ;-) He's right. > I hope to publish the above-mentioned work but before doing so I shall be > grateful for some feedback either to reassure myself that there is > something here worth publishing or to spare my blushes with a wider > audience. A wider audience? Those who read connectionists (1000+ ??, only DT knows for sure) are pretty much the ones who really count. Before you do publish this result, check the proceedings of the last CogSci Society meeting. 
Janet Wiles presented a similar variable binding method using autoassociative (aa) encoders. She found that strict aa mappings are too difficult to learn for very large networks. In response, the strict aa mapping was replaced by a mapping of 1-in-n slots to n/2-in-n slots. The representational shift helped learning tremendously, as the standard encoder network can easily produce this mapping. The citation for this paper is: Wiles, J., (1993) "Representation of variables and their values in neural networks", In Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. June 18-21, 1993. Boulder, CO. Lawrence Erlbaum Associates, Hillsdale, NJ. I've sent this to connectionists, rather than individually to Graham Smith, because I think the Wiles paper was perhaps the best all-around neural network paper at CogSci this year. John From rsun at athos.cs.ua.edu Fri Nov 5 13:41:15 1993 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Fri, 5 Nov 1993 12:41:15 -0600 Subject: Dynamic Binding Message-ID: <9311051841.AA12690@athos.cs.ua.edu> 1. It is unclear to me from your message how you plan to use the dynamic binding (formed at the hidden layer of your autoassociator) to do some reasoning or whatever. The point of doing dynamic binding is to be able to use the binding in other processing tasks, for example, high level reasoning, especially in rule-based (or rule-like) reasoning. In such reasoning, bindings are constructed and deconstructed, passed around, checked against some constraints, and unbound or rebound to something else. Simply forming an association is NOT the point. 2. There are a variety of existing methods for dynamic binding; phase synchronization is just one of them, and definitely not the simplest one possible. As a matter of fact, the sign (or signature) propagation method can easily handle that, and it can also simulate phase synchronization without the need for temporal properties as in the nodes using phase synchronization. --Ron From stjohn at cogsci.UCSD.EDU Fri Nov 5 15:17:27 1993 From: stjohn at cogsci.UCSD.EDU (Mark St. John) Date: Fri, 5 Nov 1993 12:17:27 -0800 Subject: Dynamic binding Message-ID: <9311052017.AA06008@cogsci.UCSD.EDU> The sort of "holistic" binding that happens in the hidden layer of a 3-layer network and that Graham Smith describes is not particularly new, at least if you look in the right place. As Smith says, this is just the sort of binding that happens in one of Jordan Pollack's RAAM networks (1990, Artificial Intelligence), and it's much the same as the sort of binding that happens in the hidden layer in the language comprehension systems that Jay McClelland and I (1990, Artificial Intelligence) have developed and that Risto Miikkulainen and Michael Dyer (1991, Cognitive Science) have developed. An example of binding in these models would be to give them a sentence like, "Ricky played a trick on Lucy," and observe that the model correctly binds Ricky to the agent role and Lucy to the recipient role. Then you give the opposite sentence (Lucy played a trick on Ricky) and observe that the bindings are reversed. One serious issue/limitation of this holistic binding method, however, is how well it generalizes to novel cases: How many of the total possible sentences have to be trained so that the remaining sentences will be processed correctly? What happens is that the sentences missing from the training set create regularities that the model can learn.
These regularities are sentences that do not (and so as far as the hidden units are concerned, cannot) occur. The question, then, once the network has been trained, is which force is stronger: the generalization to novel cases, or the regularity the network learned that that novel case cannot happen? For example, say the network never trained on "Lucy played a trick on Ricky." The model learns many other sentences that suggest how to map sentence constituents to thematic roles, but it also learns that playing tricks is a one-way sort of deal because Lucy never seems to play them. Now if we give the Lucy sentence as a generalization test case, part of the model wants to generalize and activate the systematic meaning based on all that it learned about sentence comprehension, but another part of the model wants to correct this obvious error in the input because it knows that this sentence and meaning are unlikely. The model can "correct the error" by flipping the agent and recipient to the better known arrangement or by changing the verb to a more likely alternative, etc. Which part of the model (or better put, which influence) wins depends on many factors, such as the number of hidden units, the cost function, the combinatorial nature of the training corpus, the size of the training corpus, and so on. It turns out that to achieve the sort of systematic mapping we want we need to use A LOT of hidden units (yes, more is better in this case of generalization), a large training set, and critically, a reasonably combinatorial corpus so that each element/word is paired with some variety of other elements (in statistical terms, you need to break as many high-order regularities as possible). See St. John (1993, Cognitive Science Proceedings) for some discussion. I'm a little embarrassed to toot my own horn, but I've thought about this some, and these papers may be of some interest -- in addition to Janet Wiles' paper (1993, Cognitive Science Proceedings) that John Kolen mentioned. One final point I'd like to raise is that this tension between generalization (along the lines of a systematic mapping) and "error correction" is not all bad. There is considerable psycholinguistic evidence that these sorts of "error corrections" happen with some frequency. People mis-hear, mis-read, mis-remember, see what they want to see, etc. all the time. On the other hand, we can all understand the infamous sentence "John ate the guitar" even though we've presumably never seen such a thing before and it's pretty unlikely. This ability, however, may simply attest to the wide variety and combinatorics of our language training. Why it is that we mis-read on some occasions and comprehend the systematic meaning, like with John eating the guitar, on other occasions is not well understood. Training is probably involved, and attention is probably involved, to name two factors. We're currently working on models and human experiments to understand this issue better. -Mark St. John Dept. of Cognitive Science, UCSD From eric at research.nj.nec.com Fri Nov 5 16:30:44 1993 From: eric at research.nj.nec.com (Eric B. Baum) Date: Fri, 5 Nov 93 16:30:44 EST Subject: C Programmer Wanted Message-ID: <9311052130.AA18527@yin> C Programmer Wanted. Note- this job may be more interesting than most postdocs. May pay better too, if successful applicant has substantial commercial experience. Prerequisites: Experience in getting large programs to work.
Some mathematical sophistication, *at least* equivalent of a good undergraduate degree in math, physics, theoretical computer science, or related field. Salary: Depends on experience. Job: Implementing various novel algorithms. For example, implementing an entirely new approach to game tree search. Conceivably this could lead into a major effort to produce a championship chess program based on novel strategy, and on novel use of learning algorithms. Another example, implementing novel approaches to the Travelling Salesman Problem. Another example, experiments with RTDP (TD learning.) Algorithms are *not* exclusively neural. These projects are at the leading edge of algorithm research, so expect the work to be both interesting and challenging. Term-contract position. To apply please send cv, cover letter and list of references to: Eric Baum, NEC Research Institute, 4 Independence Way, Princeton NJ 08540, or PREFERABLY by internet to eric at research.nj.nec.com Equal Opportunity Employer M/F/D/V ------------------------------------- Eric Baum NEC Research Institute, 4 Independence Way, Princeton NJ 08540 PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric at research.nj.nec.com From roger at eccles.psych.nwu.edu Mon Nov 8 10:44:27 1993 From: roger at eccles.psych.nwu.edu (Roger Ratcliff) Date: Mon, 8 Nov 1993 09:44:27 -0600 Subject: Dynamic binding Message-ID: <9311081544.AA24404@eccles.psych.nwu.edu> We have some experimental data on human subjects that might be interesting (challenging) to model for sentence matching for the kinds of binding examples given (by Mark). If subjects study "John hit Bill" and other active and passive sentences of this type with different names ("Helen was attracted by Jeff") and then are given a true/false test (is this sentence true according to those you studied), then we find evidence for availability of different kinds of information as a function of processing time. We used a response signal procedure in which subjects were interrupted at one of several times (typically 50, 150, 250, 400, 800, 2000 msec) and were required to respond immediately, within 200 to 300 ms. The probability of responding yes for "John hit Bill" and "Bill hit John" increased at the same rate from 400 ms to 700 ms of processing time, then after 700 ms, information about the relationship (JhB or BhJ) became available and the two curves split apart. We concluded that overall match (the three words were the same or something like that) was available early in processing and later information about the precise form of the relationship became available. These data may be useful when examining the results of matching a sentence against the representation that is built in the model. Ratcliff & McKoon, (1989) Similarity information versus relational information: differences in the time course of retrieval. Cognitive Psychology, 21, 139-155.
Roger Ratcliff Psychology dept Northwestern From Christian.Lehmann at di.epfl.ch Mon Nov 8 11:05:42 1993 From: Christian.Lehmann at di.epfl.ch (Christian Lehmann) Date: Mon, 8 Nov 93 17:05:42 +0100 Subject: PERAC'94 CALL FOR PAPER Message-ID: <9311081605.AA24743@lamisun.epfl.ch> --- From Perception to Action --- xxxxxx PerAc'94 Lausanne xxxxxxx A state-of-the-art conference on perceptive processing, artificial life, autonomous agents, emergent behaviours and micro-robotic systems Lausanne, Switzerland, 7-9 September 1994 Swarm intelligence Micro-robotics Evolution, genetic processes Competition and cooperation Learning machines Self organization Active perception Sensory/motor loops Emergent behavior Cognition ---------------------------------------------- | Call for Papers, Call for Posters | | Call for Demonstrations, Call for Videos | | Contest | ---------------------------------------------- Contributions can be made in the following categories: -- Papers -- (30 to 45 minutes). 2-page abstracts should be submitted by February 1, 1994. The conference will have no parallel sessions, and a didactically structured program. Most of the papers will be solicited. The submitted abstracts should attempt a synthetic approach from sensing to action. Selected authors will have to adapt their presentation to the general conference program and prepare a complete well-structured text before June 94. -- Posters -- 4-page short papers that will be published in the proceedings and presented as posters are due by June 1, 1994. Posters will be displayed during the whole Conference and enough time will be provided to promote interaction with the authors. A jury will thoroughly examine them and the two best posters will be presented as a paper in the closing session (20' presentation). -- Demonstrations -- Robotic demonstrations are considered as posters. In addition to the 4-page abstract describing the scientific interest of the demonstration, the submission should include a 1-page requirement for demonstration space and support. -- Videos -- 5 minute video clips are accepted in Super-VHS or VHS (preferably PAL, NTSC leads to a poorer quality). Tapes together with a 2-page description should be submitted before June 1, 1994. Clips will be edited and distributed at the conference. -- Contest -- A robotic contest will be organized the day before the conference. Teams participating in the contest will be able to follow the conference freely. The contest will consist of searching for and collecting or stacking 36mm film cans. One or several mobile robots or robotic arms can be used for this task. The rules and preliminary registration forms will be sent upon request by air-mail only as soon as definitive (end of October 93). For further information: Prof J.D. Nicoud, LAMI-EPFL, CH-1015 Lausanne Fax ++41 21 693-5263, Email perac at di.epfl.ch Program Committee and referees (September 93) L. Bengtsson, Uni Halmstad, S. -- R. Brooks, MIT, Cambridge, USA. P. Dario, Santa Anna, Pisa, I. -- J.L. Deneubourg, ULB, Bruxelles, B R. Eckmiller, Uni, Düsseldorf, D. -- N. Franceschini, Marseilles, F T. Fukuda, Uni, Nagoya, JP. -- S. Grossberg, Uni, Boston, USA J.A. Meyer, Uni, Paris, F. -- R. Pfeifer, Uni, Zürich, CH L. Steels, VUB, Brussels, B. -- A. Treisman, Uni, Princeton, USA F. Varela, Polytechnique, Paris, F. -- E. Vittoz, CSEM, Neuchâtel, CH J. Albus, NIST, Gaithersburg, USA. -- D.J. Amit, Uni, Jerusalem, Israel X. Arreguit, CSEM, Neuchâtel, CH. -- H. Asama, Riken, Wako, JP R. Beer, Case Western, Cleveland, USA.
-- G. Beni, Uni, Riverside, USA P. Bourgine, Cemagref, Antony, F. -- Y. Burnod, Uni VI, Paris, F D. Cliff, Uni Sussex, Brighton, UK Ph. Gaussier, LAMI, Lausanne, CH. -- P. Husbands, Uni Sussex, Brighton, UK O. Kubler, ETH, Zürich, CH. -- C.G. Langton, Santa Fe Inst, USA I. Masaki, MIT, Cambridge, USA. -- E. Mazer, LIFIA, Grenoble, F M. Mataric, MIT, Cambridge, USA. -- H. Miura, Uni, Tokyo, JP S. Rasmussen, Los Alamos, USA. -- G. Sandini, Uni, Genova, I T. Smithers, Uni, San Sebastian, E. -- J. Stewart, Inst. Pasteur, Paris, F L. Tarassenko, Uni, Oxford, UK. -- C. Touzet, EERIE, Nîmes, F P. Vershure, NSI, La Jolla, USA. From rba at bellcore.com Mon Nov 8 12:54:19 1993 From: rba at bellcore.com (Bob Allen) Date: Mon, 8 Nov 93 12:54:19 -0500 Subject: No subject Message-ID: <9311081754.AA00674@vintage.bellcore.com> Subject: IWANNT'93 Electronic Proceedings Electronic Proceedings for 1993 International Workshop on Applications of Neural Networks to Telecommunications 1. Electronic Proceedings (EPROCS) The Proceedings for the 1993 International Workshop on Applications of Neural Networks to Telecommunications (IWANNT'93) have been converted to electronic form and are available in the SuperBook(TM) document browsing system. In addition to the IWANNT'93 proceedings, you will be able to access abstracts from the 1992 Bellcore Workshop on Applications of Neural Networks to Telecommunications and pictures of several of the conference attendees. We would appreciate your feedback about the use of this system. In addition, if you have questions, or would like a personal account, please contact Robert B. Allen (iwannt_allen at bellcore.com or rba at bellcore.com). 2. Accounts and Passwords Public access is available with the account name: iwan_pub Annotations made by iwan_pub will be removed. Individual accounts and passwords were given to conference participants. The difference between public and individual accounts is that individual accounts have permission to make annotations. 3. Remote Access Via Xwindows From BATTITI at itnvax.science.unitn.it Tue Nov 9 04:21:17 1993 From: BATTITI at itnvax.science.unitn.it (BATTITI@itnvax.science.unitn.it) Date: 09 Nov 1993 09:21:17 +0000 Subject: Tech. Reports about REACTIVE TABU SEARCH (RTS) Message-ID: <01H53UJE0RBAKY5DVE@itnvax.science.unitn.it> The following technical reports in the area of combinatorial optimization and neural nets training are available by anonymous ftp from the Mathematics Department archive at Trento University. _______________________________________________________________________ The Reactive Tabu Search Roberto Battiti and Giampietro Tecchiolli Technical Report UTM 405, October 1992 Dipartimento di Matematica, Univ. di Trento 38050 Povo (Trento) - Italia Abstract We propose an algorithm for combinatorial optimization where an explicit check for the repetition of configurations is added to the basic scheme of Tabu search. In our Tabu scheme the appropriate size of the list is learned in an automated way by reacting to the occurrence of cycles. In addition, if the search appears to be repeating an excessive number of solutions excessively often, then the search is diversified by making a number of random moves proportional to a moving average of the cycle length. The reactive scheme is compared to a "strict" Tabu scheme, which forbids the repetition of configurations, and to schemes with a fixed or randomly varying list size.
From the implementation point of view we show that the Hashing or Digital Tree techniques can be used in order to search for repetitions in a time that is approximately constant. We present the results obtained for a series of computational tests on a benchmark function, on the 0-1 Knapsack Problem, and on the Quadratic Assignment Problem. _______________________________________________________________________ Training Neural Nets with the Reactive Tabu Search Roberto Battiti and Giampietro Tecchiolli Technical Report UTM 421, November 1993 Dipartimento di Matematica, Univ. di Trento 38050 Povo (Trento) - Italia Abstract In this paper the task of training sub-symbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the Reactive Tabu Search (RTS) proposed by the authors and based on F. Glover's Tabu Search. An iterative optimization process based on a ``modified greedy search'' component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory in a limited portion of the search space. The possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse the ones applied in the most recent part of the search, for a prohibition period that is adapted in an automated way. The confinement is avoided and a proper exploration is obtained by activating a diversification strategy when too many configurations are repeated excessively often. The RTS method is applicable to non-differentiable functions, it is robust with respect to the random initialization and effective in continuing the search after local minima. The limited memory and processing required make RTS a competitive candidate for special-purpose VLSI implementations. We present and discuss four tests of the technique on feedforward and feedback systems. _______________________________________________________________________ ******> how to obtain a copy via FTP: unix> ftp volterra.science.unitn.it (130.186.34.16) Name: anonymous Password: (type your email address) ftp> cd pub ftp> binary ftp> get reactive-tabu-search.ps.Z ftp> get rts-neural-nets.ps.Z ftp> quit unix> uncompress *.ps.Z unix> lpr *.ps (or however you print PostScript) note: gnu-zipped files are available as file.gz reactive-tabu-search.ps : 27 pages rts-neural-nets.ps : 45 pages both papers contain complex figures so that printing can be slow on some printers ******> A limited number of hardcopies are available (only if you don't have access to FTP!) from: Roberto Battiti Dip. di Matematica Universita' di Trento 38050 Povo (Trento) Italy e-mail: battiti at itnvax.science.unitn.it From janetw at cs.uq.oz.au Tue Nov 9 20:20:39 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Tue, 9 Nov 93 20:20:39 EST Subject: Dynamic Binding Message-ID: <9311091020.AA08508@client> On binding using hidden layers, Graham Smith writes: > I have recently trained a simple multi-layer feed-forward network using > back-propagation, to auto-associate patterns which simultaneously describe > two items. The patterns consist of four features (red, blue, square, > ... Graham's query has generated a reference to my work, and below I summarize some of the issues as I see them, and give references to related work. The use of a hidden layer for binding is at least implicit (and sometimes explicit) in several uses of nets in the late 1980s (as Mark St John points out). 
The experiments Graham describes are a very clean version of the problem, useful for highlighting the explicit issues in binding, combinatorial structure and generalisation. They are also known as multi-variable encoders (MVEs) or Nk-Mk-Nk encoders, where k is the number of "variables" (or "features" or "components"), N is the number of "values" (or "elements") per variable and M is the number of hidden units per variable. At least two groups have published simulation results that I know about- 1. Olivier Brousse and Paul Smolensky worked on a variation (see Brousse's PhD thesis "Generativity and systematicity in neural network combinatorial learning", Oct, 1993, originally presented in 1989, at Cog Sci 11 or possibly earlier, though I don't have a reference). The papers related to this work addressed the issue of generalisation in combinatorial domains, in which the binding problem is an implicit part. The experiments are described using Smolensky's tensor notation, and hence the connection to the MVE task is perhaps not obvious for those unfamiliar with tensors. The results are very clear, showing massive generalisation from very few training samples. Brousse's thesis goes into issues in compositionality and systematicity in detail. 2. Mark Ollila and I presented an analysis of the representations in hidden unit (HU) space formed in a colour-shape-location mapping task (1992, NIPS "Intersecting regions: The key to combinatorial structure in HU space"). As Graham Smith described in his simulations, HU space forms a representation of a particular "scene". In our simulations we were interested in the range of classifications possible in HU space. Eg. Could a single hyperplane distinguish between all the scenes with a blue object on the left? a blue object anywhere? a blue square and a red triangle anywhere? These questions are asking whether it is possible to partition the HU space by a single hyperplane so that all the patterns relevant to the query (eg all the blue scenes) can be distinguished from all the others. The answer took us into an analysis of possible structures in HU space. If HU representations are structured so that any classification can be made using a single hyperplane, then the patterns must lie at the corners of a hypertetrahedron (a generalised triangle). This is a geometric way of thinking about VC-dimension. The MVE tasks do not require nearly such capacity in HU space - we can think of them as compressed representations of the general hypertetrahedron. For each variable, the task only requires a 1-of-N selection. Hence the structure of HU space can be organized into a hypertetrahedron over the number of variables, (rather than the variables x values). This structure would allow combination of any colour in location one, with any colour in location two, but not both blue and green in location one, since it is not a legal pattern. The bottom line is that instead of needing N^k-1 hidden units, where N is the number of values per variable, and k is the number of variables, we only need Mk, where M is the number of HU required per variable (M is a constant). This year's Cog Sci paper (which John Kolen mentioned) began as an extension to the NIPS92 study, asking: What is the minimum number of hidden units required for such a mapping task, and how easily is it learned? These tasks are "ultra-tight" MVEs, since there are several variables but a minimum number of HUs. 
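To put illustrative numbers on that bottom line (an assumed example, not figures taken from any of the papers cited above): with k = 3 variables of N = 4 values each there are 4^3 = 64 legal scenes, so the general hypertetrahedron arrangement would call for N^k - 1 = 63 hidden units, whereas the ultra-tight MVE structure needs only Mk = 2 x 3 = 6. In code:

# Illustrative head-count only; N, k and M below are assumed example values.
N, k, M = 4, 3, 2                               # 4 values per variable, 3 variables, 2 HUs per variable
print("general hypertetrahedron:", N**k - 1)    # 63 hidden units
print("ultra-tight MVE:", M * k)                # 6 hidden units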
Given the ultra-tight bottleneck, the hidden units divide into pairs, each pair encoding the internal representation of one variable. (The theoretical minimum is 2 HU per variable, based on a proof by Kruglyak on the N-2-N encoder). Caveats: The decomposition of the hidden layer into pairs of cooperating units only occurs (in fact is only possible from a coding theory point of view) when there is a specific structure in the patterns for each variable. The traditional "local" codes used in encoder tasks satisfy this criterion, but are hard to learn. In our simulations, we used block codes (eg 1111100000) for each variable in the output and local codes (eg 100000000) for each variable in the input (Bakker et al, 1993, ICANN). A second caveat is that the bottleneck of 2 HU/variable does not provide sufficient capacity for a net to represent the probability distribution of the input patterns (eg co-occurrence between variables), and hence generalises to all possible combinations of the variables. Sometimes this may be desirable, sometimes not, depending on the task. In later work, Steven Phillips and I showed that ultra-tight encoders with block-structured outputs can be learned "efficiently" by standard bp, ie. a polynomial number of patterns in the training set (1993, IJCNN Japan). Steve is continuing this work for his PhD. ----- Several people have asked for copies of my CogSci paper in response to comments generated by Graham's query. It is not online -- hard copy is available for people who do not have access to the Proceedings. ------------------------ Janet Wiles Departments of Computer Science and Psychology University of Queensland QLD 4072 AUSTRALIA email: janetw at cs.uq.oz.au From janetw at cs.uq.oz.au Tue Nov 9 20:25:30 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Tue, 9 Nov 93 20:25:30 EST Subject: Dynamic Binding Message-ID: <9311091025.AA08553@client> On binding using tensors: A second approach to the binding problem (which also differs from the phase approach) is the use of tensors or their compressed representations in convolution/correlation models. This approach has been used since the early 1980s for modeling temporal and contextual bindings in human memory. When dealing with several variables, it can help to think of HU space as a compressed representation of their tensor product (see Method 2 below). The terminology differs across disciplines, which makes it harder to find appropriate connections. The following are some that I have come across: Tensors: Smolensky (papers go back at least to 1984 with Riley; see 1990, Artificial Intelligence); Pike (1984, Psych Review) A comparison of convolution and matrix distributed memory systems; Humphreys, Bain and Pike (1989, Psych Review) - called 3D matrices; showed the use of a tensor for binding context, cue and target in memory storage, and how to access both semantic and episodic information from the memory. Sloman and Rumelhart had a memory model that looked like a feedforward net with sigma-pi units which was essentially the same underlying maths with respect to the binding aspects (date?, ed. Healey, The Estes volume); Halford et al use tensors as the mapping process in analogical reasoning tasks, specifically looking at the limits of human capacity for processing interacting dimensions (1993, eds Holyoak and Barnden, Advances in Connectionist and Neural Computational Theory Vol 2); Wiles et al (1992, Univ of Qld TR218) reviews the models of Halford et al and Humphreys et al and shows the link between them.
Convolution/correlation models: Murdock (1982, Psych Review); Eich (1982, 1985, Psych Review); Plate (1991, IJCAI, and 1992 & 93 NIPS) Holographic reduced representations.

NB. There are differences in the use of tensors. Eg. to encode a predicate F(x,y,z), where x, y and z are vectors:

Method 1: Smolensky would create roles for each variable in the triple, r1, r2, r3, and then create a representation, T, of the triple as

    T = r1*x + r2*y + r3*z

where * is the tensor (or outer) product operation and + is the usual addition of vectors (also called linear superposition). A composite memory would require a vector to specify each item, eg i1, and then superimpose all such representations, ie,

    M = i1*(r1*x1 + r2*y1 + r3*z1) + i2*(r1*x2 + r2*y2 + r3*z2) + ...

Method 1 allows binding of arbitrary sized tuples using a tensor, M, of rank 3, but does not represent the interactions between variables. It seems plausible that phase locking would be one way of implementing method 1.

Method 2: In the approach of Humphreys et al and Halford et al, the tensor would be the outer product of all three variables (like a 3D matrix), ie

    T = x*y*z

The memory would be formed by superimposing all such tensor representations,

    M = x1*y1*z1 + x2*y2*z2 + x3*y3*z3 + x4*y4*z4 + ...

Method 2 does not require a unique vector for each item, nor role vectors, and the interactions between variables are accessible. But there are practical limits to the size of the tensor - Halford estimates that humans can process up to 4 independent variables in parallel - which he models as a tensor of rank 5. In the memory work of Humphreys et al, tensors of rank 3 are used (context, cue and target). If a context is not available, then a unit vector is substituted, effectively accessing the average of all the other items (a "semantic" memory). This allows both context-sensitive and context-insensitive access processes over the same information.

------------------------
Janet Wiles
Departments of Computer Science and Psychology
University of Queensland QLD 4072 AUSTRALIA
email: janetw at cs.uq.oz.au

From luttrell at signal.dra.hmg.gb Tue Nov 9 06:43:56 1993
From: luttrell at signal.dra.hmg.gb (luttrell@signal.dra.hmg.gb)
Date: Tue, 09 Nov 93 11:43:56 +0000
Subject: New preprint in neuroprose
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/luttrell.part-mixture.ps.Z

The file luttrell.part-mixture.ps.Z is now available for copying from the Neuroprose repository (22 pages). This paper has been submitted to a Special Issue of IEE Proceedings on Vision, Image and Signal Processing. An early version of this paper appeared in the Proceedings of the IEE International Conference on Artificial Neural Networks, Brighton, 1993, pp. 313-316.

The Partitioned Mixture Distribution: An Adaptive Bayesian Network for Low-Level Image Processing

Steve P Luttrell
Adaptive Systems Theory Section
Defence Research Agency
Malvern, Worcs, United Kingdom, WR14 3PS
e-mail: luttrell at signal.dra.hmg.gb

ABSTRACT

Bayesian methods are used to analyse the problem of training a model to make predictions about the probability distribution of data that has yet to be received. Mixture distributions emerge naturally from this framework, but are not well-matched to high-dimensional problems such as image processing. An extension, called a partitioned mixture distribution (PMD), is presented, which is essentially a set of overlapping mixture distributions. An expectation-maximisation training algorithm is derived.
Finally, the results of some numerical simulations are presented, which demonstrate that lateral inhibition arises naturally in PMDs, and that the nodes in a PMD co-operate in such a way that each mixture distribution in the PMD receives the necessary complement of machinery for it to compute its mixture distribution.

From rba at bellcore.com Tue Nov 9 12:45:40 1993
From: rba at bellcore.com (Bob Allen)
Date: Tue, 9 Nov 93 12:45:40 -0500
Subject: IWANNT-EPROCS - note that the correct public login is iwan_pub
Message-ID: <9311091745.AA01986@vintage.bellcore.com>

Subject: IWANNT'93 Electronic Proceedings

Electronic Proceedings for the 1993 International Workshop on Applications of Neural Networks to Telecommunications

1. Electronic Proceedings (EPROCS)

The Proceedings for the 1993 International Workshop on Applications of Neural Networks to Telecommunications (IWANNT'93) have been converted to electronic form and are available in the SuperBook(TM) document browsing system. In addition to the IWANNT'93 proceedings, you will be able to access abstracts from the 1992 Bellcore Workshop on Applications of Neural Networks to Telecommunications and pictures of several of the conference attendees. We would appreciate your feedback about the use of this system. In addition, if you have questions or would like a personal account, please contact Robert B. Allen (iwannt_allen at bellcore.com or rba at bellcore.com).

2. Accounts and Passwords

Public access is available with the account name: iwan_pub

Individual accounts and passwords were given to conference participants. Annotations made by iwan_pub may be edited by the electronic proceedings editor.

3. Remote Access Via Xwindows

From schmidhu at informatik.tu-muenchen.de Thu Nov 11 07:37:22 1993
From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber)
Date: Thu, 11 Nov 1993 13:37:22 +0100
Subject: dynamic variable binding
Message-ID: <93Nov11.133729met.42241@papa.informatik.tu-muenchen.de>

Recently, several people on this list mentioned dynamic variable binding. A general approach to dynamic variable binding needs to address *temporary* bindings in *time-varying* environments. The following reference shows how a system with time-varying inputs and ``fast weights'' can learn to create useful *temporary* bindings.

@article{S92,
  author = {J. Schmidhuber},
  title = {Learning to Control Fast-Weight Memories: An Alternative to Recurrent Nets},
  journal = {Neural Computation},
  volume = {4},
  number = {1},
  pages = {131-139},
  year = {1992}}

-------------------------------------------------
Juergen Schmidhuber
Institut fuer Informatik, H2
Technische Universitaet Muenchen
80290 Muenchen, Germany
schmidhu at informatik.tu-muenchen.de

From RAMPO at SALERNO.INFN.IT Thu Nov 11 13:26:00 1993
From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT)
Date: Thu, 11 NOV 93 18:26 GMT
Subject: ICANN'94
Message-ID: <6849@SALERNO.INFN.IT>

-----------------------------------------------------------
To the CONNECTIONISTS mailing list:

Some of you may have received an outdated version of the ICANN'94 Registration Form and Call for Papers. This is the final official version. Please do not take into account any previous one.
----------------------------------------------------------- -------------------------------------------------------------------- | ************************************************ | | * * | | * EUROPEAN NEURAL NETWORK SOCIETY * | | *----------------------------------------------* | | * R E G I S T R A T I O N F O R M * | | *----------------------------------------------* | | * I C A N N ' 94 - SORRENTO * | | * * | | ************************************************ | | | | ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS)| | is the fourth Annual Conference of ENNS and it comes after | | ICANN'91(Helsinki), ICANN'92 (Brighton), ICANN'93 (Amsterdam). | | It is co-sponsored by INNS, IEEE-NC, JNNS. | | It will take place at the Sorrento Congress Center, near Naples, | | Italy, on May 26-29, 1994. | |------------------------------------------------------------------| | R E G I S T R A T I O N F O R M | |------------------------------------------------------------------| | FAMILY NAME ____________________________________________________ | | FIRST NAME, MIDDLE INITIAL _____________________________________ | | AFFILIATION ____________________________________________________ | | MAILING ADDRESS ________________________________________________ | | ZIP CODE, CITY, COUNTRY ________________________________________ | | FAX ____________________________________________________________ | | PHONE __________________________________________________________ | | EMAIL __________________________________________________________ | | ACCOMPANIED BY _________________________________________________ | | MEMBERSHIP (Regular/ENNS member/Student) _______________________ | | ENNS MEMBERSHIP NO. ____________________________________________ | | REGISTRATION FEE _______________________________________________ | | TUTORIAL FEE ___________________________________________________ | | DATE ______________________ SIGNATURE __________________________ | | | |------------------------------------------------------------------| | C O N F E R E N C E R E G I S T R A T I O N F E E S (in LIT) | |------------------------------------------------------------------| | MEMBERSHIP | Before 15/12/93 | Before 15/2/94 | On site | |--------------|-------------------|------------------|------------| | REGULAR | 650,000 | 800,000 | 950,000 | | ENNS MEMBER | 550,000 | 700,000 | 850,000 | | STUDENT | 200,000 | 250,000 | 300,000 | |------------------------------------------------------------------| | T U T O R I A L F E E S (in LIT) | |------------------------------------------------------------------| | | Before 15/2/94 | On site | |--------------|-------------------|-------------------------------| | REGULAR | 250,000 | 350,000 | | STUDENT | 100,000 | 150,000 | |------------------------------------------------------------------| | - Regular registrants become ENNS members. | | - Student registrants must provide an official certification of | | their status. | | - Pre-registration payment: Remittance in LIT to | | BANCO DI NAPOLI, Branch of FISCIANO, FISCIANO (SALERNO), ITALY| | on the Account of "Dipartimento di Fisica Teorica e S.M.S.A." | | clearly stating the motivation (Registration Fee for ICANN'94) | | and the attendee name. | | - On-site payment: cash. | | - The registration form together with a copy of the bank | | remittance must be mailed to: | | Prof. Roberto Tagliaferri, Dept. Informatics, Univ. 
Salerno, | | I-84081 Baronissi, Salerno, Italy | | Fax +39 89 822275 | | - Accepted papers will be included in the Proceedings only if | | the authors have registered in advance. | |------------------------------------------------------------------| | H O T E L R E S E R V A T I O N | |------------------------------------------------------------------| | The official travel agent is (fax for a booking form): | | RUSSO TRAVEL srl | | Via S. Antonio, I-80067 Sorrento, Italy | | Fax: +39 81 807 1367 Phone: +39 81 807 1845 | |------------------------------------------------------------------| | S U B M I S S I O N | |------------------------------------------------------------------| | Interested authors are cordially invited to present their work | | in one of the following "Scientific Areas" (A-Cognitive Science; | | B-Mathematical Models; C- Neurobiology; D-Fuzzy Systems; | | E-Neurocomputing), indicating also an "Application domain" | | (1-Motor Control;2-Speech;3-Vision;4-Natural Language; | | 5-Process Control;6-Robotics;7-Signal Processing; | | 8-Pattern Recognition;9-Hybrid Systems;10-Implementation). | | | | DEADLINE for CAMERA-READY COPIES: December 15, 1993. | | ---------------------------------------------------- | | Papers received after that date will be returned unopened. | | Papers will be reviewed by senior researchers in the field | | and the authors will be informed of their decision by the end | | of January 1994. Accepted papers will be included in the | | Proceedings only if the authors have registered in advance. | | | | SIZE: 4 pages, including figures, tables, and references. | | LANGUAGE: English. | | COPIES: submit a camera-ready original and 3 copies. | | (Accepted papers cannot be edited.) | | EMAIL where to send correspondence (not papers): | | iiass at salerno.infn.it | | ADDRESS where to send the papers: | | IIASS (Intl. Inst. Adv. Sci. Studies), ICANN'94, | | Via Pellegrino 19, Vietri sul Mare (Salerno), 84019 Italy. | | ADDRESS where to send correspondence (not papers): | | Prof. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, | | I-84081 Baronissi, Salerno, Italy - Fax +39 89 822275 | | EMAIL where to get LaTeX files: listserv at dist.unige.it | |------------------------------------------------------------------| | P R O G R A M C O M M I T T E E | |------------------------------------------------------------------| | | | I. Aleksander (UK), D. Amit (ISR), L. B. Almeida (P), | | S.I. Amari (J), E. Bizzi (USA), E. Caianiello (I), | | L. Cotterill (DK), R. De Mori (CAN), R. Eckmiller (D), | | F. Fogelman Soulie (F), W. Freeman (USA), S. Gielen (NL), | | S. Grossberg (USA), R. Hecht-Nielsen (USA), J. Herault (F), | | M. Jordan (USA), M. Kawato (J), T. Kohonen (SF), | | V. Lopez Martinez (E), R.J. Marks II (USA), P. Morasso (I), | | E. Oja (SF), T. Poggio (USA), H. Ritter (D), H. Szu (USA), | | L. Stark (USA), J. G. Taylor (UK), S. Usui (J), L. Zadeh (USA) | | | | Conference Chair: Prof. Eduardo R. Caianiello, Univ. Salerno, | | Italy, Dept. Theoretic Physics; email: iiass at salerno.infn.it | | | | Conference Co-Chair: Prof. Pietro G. Morasso, Univ. Genova, | | Italy, Dept. Informatics, Systems, Telecommunication; | | email: morasso at dist.unige.it; fax: +39 10 3532948 | |------------------------------------------------------------------| | T U T O R I A L S | |------------------------------------------------------------------| | 1) Introduction to neural networks (D. Gorse), 2) Advanced | | techniques in supervised learning (F. 
Fogelman Soulie`), | | 3) Advanced techniques for self-organizing maps (T. Kohonen) | | 4) Weightless neural nets (I. Aleksander), 5) Applications of | | neural networks (R. Hecht-Nielsen), 6) Neurobiological modelling | | (J.G. Taylor), 7) Information theory and neural networks | | (M. Plumbley). | | Tutorial Chair: Prof. John G. Taylor, King's College, London, UK | | fax: +44 71 873 2017 | |------------------------------------------------------------------| | T E C H N I C A L E X H I B I T I O N | |------------------------------------------------------------------| | Industrial Liaison Chair: Dr. Roberto Serra, Ferruzzi | | Finanziaria, Ravenna, fax: +39 544 35692/32358 | |------------------------------------------------------------------| ******************************************************************** ******************************************************************** ******************************************************************** ******************************************************************** -------------------------------------------------------------------- | ************************************************ | | * * | | * EUROPEAN NEURAL NETWORK SOCIETY * | | *----------------------------------------------* | | * C A L L F O R P A P E R S * | | *----------------------------------------------* | | * I C A N N ' 94 - SORRENTO * | | * * | | ************************************************ | | | | ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS)| | is the fourth Annual Conference of ENNS and it comes after | | ICANN'91(Helsinki), ICANN'92 (Brighton), ICANN'93 (Amsterdam). | | It is co-sponsored by INNS, IEEE-NC, JNNS. | | It will take place at the Sorrento Congress Center, near Naples, | | Italy, on May 26-29, 1994. | | | |------------------------------------------------------------------| | S U B M I S S I O N | |------------------------------------------------------------------| | Interested authors are cordially invited to present their work | | in one of the following "Scientific Areas" (A-Cognitive Science; | | B-Mathematical Models; C- Neurobiology; D-Fuzzy Systems; | | E-Neurocomputing), indicating also an "Application domain" | | (1-Motor Control;2-Speech;3-Vision;4-Natural Language; | | 5-Process Control;6-Robotics;7-Signal Processing; | | 8-Pattern Recognition;9-Hybrid Systems;10-Implementation). | | | | DEADLINE for CAMERA-READY COPIES: December 15, 1993. | | ---------------------------------------------------- | | Papers received after that date will be returned unopened. | | Papers will be reviewed by senior researchers in the field | | and the authors will be informed of their decision by the end | | of January 1994. Accepted papers will be included in the | | Proceedings only if the authors have registered in advance. | | Allocation of accepted papers to oral or poster sessions will | | not be performed as a function of technical merit but only with | | the aim of coherently clustering different contributions in | | related topics; for this reason there will be no overlap of | | oral and poster sessions with the same denomination. Conference | | proceedings, that include all the accepted (and regularly | | registered) papers, will be distributed at the Conference desk | | to all regular registrants. | | | | SIZE: 4 pages, including figures, tables, and references. | | LANGUAGE: English. | | COPIES: submit a camera-ready original and 3 copies. | | (Accepted papers cannot be edited.) 
| | EMAIL where to send correspondence (not papers): | | iiass at salerno.infn.it | | ADDRESS where to send the papers: | | IIASS (Intl. Inst. Adv. Sci. Studies), ICANN'94, | | Via Pellegrino 19, Vietri sul Mare (Salerno), 84019 Italy. | | ADDRESS where to send correspondence (not papers): | | Prof. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, | | Fax +39 89 822275 | | EMAIL where to get LaTeX files: listserv at dist.unige.it | | | | In an accompanying letter, the following should be included: | | (i) title of the paper, (ii) corresponding author, | | (iii) presenting author, (iv) scientific area and application | | domain (e.g. "B-7"), (vi) preferred presentation (oral/poster), | | (vii) audio-visual requirements. | | | |------------------------------------------------------------------| | F O R M A T | |------------------------------------------------------------------| | The 4 pages of the manuscripts should be prepared on A4 white | | paper with a typewriter or letter- quality printer in | | one-column format, single-spaced, justified on both sides and | | printed on one side of the page only, without page numbers | | or headers/footers. Printing area: 120 mm x 195 mm. | | | | Authors are encouraged to use LaTeX. For LaTeX users, the LaTeX | | style-file and an example-file can be obtained via email as | | follows: | | - send an email message to the address "listserv at dist.unige.it" | | - the first two lines of the message must be: | | get ICANN94 icann94.sty | | get ICANN94 icann94-example.tex | | If problems arise, please contact the conference co-chair below. | | Non LaTeX users can ask for a specimen of the paper layout, | | to be sent via fax. | | | |------------------------------------------------------------------| | P R O G R A M C O M M I T T E E | |------------------------------------------------------------------| | The preliminary program committee is as follows: | | | | I. Aleksander (UK), D. Amit (ISR), L. B. Almeida (P), | | S.I. Amari (J), E. Bizzi (USA), E. Caianiello (I), | | L. Cotterill (DK), R. De Mori (CAN), R. Eckmiller (D), | | F. Fogelman Soulie (F), S. Gielen (NL), S. Grossberg (USA), | | J. Herault (F), M. Jordan (USA), M. Kawato (J), T. Kohonen (SF), | | V. Lopez Martinez (E), R.J. Marks II (USA), P. Morasso (I), | | E. Oja (SF), T. Poggio (USA), H. Ritter (D), H. Szu (USA), | | L. Stark (USA), J. G. Taylor (UK), S. Usui (J), L. Zadeh (USA) | | | | Conference Chair: Prof. Eduardo R. Caianiello, Univ. Salerno, | | Italy, Dept. Theoretic Physics; email: iiass at salerno.infn.it | | | | Conference Co-Chair: Prof. Pietro G. Morasso, Univ. Genova, | | Italy, Dept. Informatics, Systems, Telecommunication; | | email: morasso at dist.unige.it; fax: +39 10 3532948 | | | |------------------------------------------------------------------| | T U T O R I A L S | |------------------------------------------------------------------| | The preliminary list of tutorials is as follows: | | 1) Introduction to neural networks (D. Gorse), 2) Advanced | | techniques in supervised learning (F. Fogelman Soulie`), | | 3) Advanced techniques for self-organizing maps (T. Kohonen) | | 4) Weightless neural nets (I. Aleksander), 5) Applications of | | neural networks (R. Hecht-Nielsen), 6) Neurobiological modelling | | (J.G. Taylor), 7) Information theory and neural networks | | (M. Plumbley). | | Tutorial Chair: Prof. John G. 
Taylor, King's College, London, UK | | fax: +44 71 873 2017 | | | |------------------------------------------------------------------| | T E C H N I C A L E X H I B I T I O N | |------------------------------------------------------------------| | A technical exhibition will be organized for presenting the | | literature on neural networks and related fields, neural networks| | design and simulation tools, electronic and optical | | implementation of neural computers, and application | | demonstration systems. Potential exhibitors are kindly requested | | to contact the industrial liaison chair. | | | | Industrial Liaison Chair: Dr. Roberto Serra, Ferruzzi | | Finanziaria, Ravenna, fax: +39 544 35692/32358 | | | |------------------------------------------------------------------| | S O C I A L P R O G R A M | |------------------------------------------------------------------| | Social activities will include a welcome party, a banquet, and | | post-conference tours to some of the many possible targets of | | the area (participants will also have no difficulty to | | self-organize a la carte). | -------------------------------------------------------------------- From hayit at micro.caltech.edu Thu Nov 11 14:32:34 1993 From: hayit at micro.caltech.edu (Hayit Greenspan) Date: Thu, 11 Nov 93 11:32:34 PST Subject: NIPS_WORKSHOP Message-ID: <9311111932.AA13157@electra.caltech.edu> NIPS*93 - Post Meeting workshop: --------------------------------------------------------------- --------------------------------------------------------------- Learning in Computer Vision and Image Understanding - An advantage over classical techniques? Dec 4th, 1993 --------------------------------------------------------------- --------------------------------------------------------------- Organizer: Hayit Greenspan (hayit at micro.caltech.edu) --------- Dept. of Electrical Engineering California Institute of Technology Pasadena, CA 91125 Program Committee: T. Poggio(MIT), R. Chellappa(Maryland), P. Smyth(JPL) ----------------- Intended Audience: ------------------- Researchers in the field of Learning and in Vision and those interested in the combination of both for pattern-recognition, computer-vision and image-understanding tasks. Abstract: --------- There is an increasing interest in the area of Learning in Computer Vision and Image Understanding, both from researchers in the learning community and from researchers involved with the computer vision world. The field is characterized by a shift away from the classical, purely model-based computer vision techniques, towards data-driven learning paradigms for solving real-world vision problems. Classical computer-vision techniques have to a large extent neglected learning, which is an important component for robust and flexible vision systems. Meanwhile, there is real-world demand for automated image handling for scientific and commercial purposes, and a growing need for automated image understanding and recognition, in which learning can play a key role. Applications include remote-sensing imagery analysis, automated inspection, difficult recognition tasks such as face recognition, autonomous navigation systems which use vision as part of their sensors, and the field of automated imagery data-base analysis. Some of the issues for general discussion: o Where do classical computer-vision techniques fail - and what are the main issues to be solved? o What does learning mean in a vision context? 
Is it tuning an existing model (defined a priori) via its parameters, or trying to learn the model (extract most relevant features etc)? o Can existing learning techniques help in their present format or do we need vision-specific learning methods? For example, is learning in vision a practical prospect without one "biasing" the learning models with lots of prior knowledge ? The major emphasis of the workshop will be on integrating viewpoints from a variety of backgrounds (theory, applications, pattern recognition, computer-vision, learning, neurobiology). The goal is to forge some common ground between the different perspectives, and arrive at a set of open questions and challenges in the field. Program: ---------- Morning session ---------------- 7:30-7:35 Introduction to the workshop 7:35-8:00 Keynote speaker: Poggio/Girosi (MIT) - Learning and Vision 8:00-8:15 Combining Geometric Reasoning and Artificial Neural Networks for Machine Vision Dean Pomerleau (CMU) 8:15-8:45 Discussion: o AAAI forum on Machine Learning in Computer Vision- relevant issues, Rich Zemel (Salk Institute) o What is going on in the vision and learning worlds 8:45-8:55 Combining classical and learning-based approaches into a recognition framework for texture and shape Hayit Greenspan (Caltech) 8:55-9:05 Visual Processing: Bag of tricks or Unified Theory? Jonathan Marshall (Univ. of N. Carolina) 9:05-9:15 Learning in 3D object recognition- An extreme approach Bartlett Mel (Caltech) 9:15-9:30 Discussion: o Learning in the 1D vs. 2D vs. 3D worlds Afternoon session ----------------- 4:30-4:45 The window registration problem in unsupervised learning of visual features Eric Saund (XEROX) 4:45-4:55 Unsupervised learning of object models Chris Williams (Toronto) 4:55-5:15 Discussion: o The role of unsupervised learning in vision 5:15-5:30 Network architectures and learning algorithms for word reading Yann Le Cun (AT&T) 5:30-5:40 Challenges for vision and learning in the context of large scientific image databases Padhraic Smyth (JPL) 5:40-5:50 Elastic Matching and learning for face recognition Joachim Buhmann (BONN) 5:50-6:30 Discussion: o What are the difficult challenges in vision applications? o Summary of the main research objectives in the field today, as discussed in the workshop. ------------------------------------------------------------------------------- From plunkett at dragon.psych Thu Nov 11 13:43:43 1993 From: plunkett at dragon.psych (plunkett (Kim Plunkett)) Date: Thu, 11 Nov 93 18:43:43 GMT Subject: No subject Message-ID: <9311111843.AA10257@dragon.psych.pdp> UNIVERSITY OF OXFORD MRC BRAIN AND BEHAVIOUR CENTRE McDONNELL-PEW CENTRE FOR COGNITIVE NEUROSCIENCE SUMMER SCHOOL ON CONNECTIONIST MODELLING Department of Experimental Psychology University of Oxford 11-23 September 1994 Applications are invited for participation in a 2-week residential Summer School on techniques in connectionist modelling of cognitive and biological phenomena. The course is aimed primarily at researchers who wish to exploit neural network models in their teaching and/or research. It will provide a general introduction to connectionist modelling through lectures and exercises on PCs. The instructors with primary responsibility for teaching the course are Kim Plunkett and Edmund Rolls. No prior knowledge of computational modelling will be required though simple word processing skills will be assumed. Participants will be encouraged to start work on their own modelling projects during the Summer School. 
The Summer School is sponsored (jointly) by the University of Oxford McDonnell-Pew Centre for Cognitive Neuroscience and the MRC Brain and Behaviour Centre. The cost of participation in the summer school is 500 pounds, to include accommodation (bed and breakfast at St. John's College) and summer school registration. Participants will be expected to cover their own travel and meal costs. A small number of graduate student scholarships may be available. Applicants should indicate whether they wish to be considered for a graduate student scholarship but are advised to seek their own funding as well, since in previous years the number of graduate student applications has far exceeded the number of scholarships available.

If you are interested in participating in the Summer School, please contact:

Mrs. Sue King
Department of Experimental Psychology
University of Oxford
South Parks Road
Oxford OX1 3UD
Tel: (0865) 271353
Email: sking at uk.ac.oxford.psy

Please send a brief description of your background with an explanation of why you would like to attend the Summer School (one page maximum) no later than 1 April 1994.

From shastri at ICSI.Berkeley.EDU Thu Nov 11 20:54:11 1993
From: shastri at ICSI.Berkeley.EDU (Lokendra Shastri)
Date: Thu, 11 Nov 1993 17:54:11 PST
Subject: Dynamic bindings
Message-ID: <9311120154.AA12851@icsib20.ICSI.Berkeley.EDU>

Several recent messages about the binding problem have mentioned solutions based on temporal synchrony. Those interested in finding out more about this proposal and its implications for cognitive models may want to take a look at the following article that appears in the recent issue of "Behavioral and Brain Sciences":

From simple associations to systematic reasoning: a connectionist encoding of rules, variables and dynamic bindings using temporal synchrony. Shastri, L. and V. Ajjanagadde. BBS, 16 (3) 1993.

A number of issues raised in the discussion are covered in the article, accompanying commentaries and the authors' response. The proposed solution leads to a number of predictions about the constraints on automatic (reflexive) processing. Some of these are:

1. A large number of instances of relations/schemas/events/.... can be active simultaneously. In production system (PS) terms: the working memory capacity underlying automatic/reflexive processing is very large.

2. A very large number of systematic mappings between relations/schemas/.. may be computed simultaneously. In PS terms: a large number of rules (even those containing variables) may fire in parallel.

BUT

3. The maximum number of entities (individuals) that can be referenced by instances active in the working memory is small (~10).

4. Multiple instances of the same schema/relation... may be active simultaneously, but no schema/relation... may be instantiated more than ~3 times during an episode of reflexive reasoning.

-- Shastri

Lokendra Shastri
International Computer Science Institute
1947 Center Street, Suite 600
Berkeley, CA 94707-1105
shastri at icsi.berkeley.edu
(510) 642-4274 ext 310

From harnad at Princeton.EDU Thu Nov 11 22:36:48 1993
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Thu, 11 Nov 93 22:36:48 EST
Subject: Artificial Life vs.
Artificial Intelligence: Conference
Message-ID: <9311120336.AA17606@clarity.Princeton.EDU>

From inmanh at cogs.susx.ac.uk Fri Nov 12 04:19:00 1993
From: inmanh at cogs.susx.ac.uk (Inman Harvey)
Date: Fri, 12 Nov 93 09:19 GMT
Subject: SAB94 CFP
Message-ID:

==============================================================================
Conference Announcement and FINAL Call For Papers

FROM ANIMALS TO ANIMATS
Third International Conference on Simulation of Adaptive Behavior (SAB94)
Brighton, UK, August 8-12, 1994

The object of the conference is to bring together researchers in ethology, psychology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on well-defined models, computer simulations, and built robots in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals.

Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis:

  Individual and collective behavior
  Autonomous robots
  Neural correlates of behavior
  Hierarchical and parallel organizations
  Perception and motor control
  Emergent structures and behaviors
  Motivation and emotion
  Problem solving and planning
  Action selection and behavioral sequences
  Goal directed behavior
  Neural networks and evolutionary computation
  Ontogeny, learning and evolution
  Internal world models and cognitive processes
  Characterization of environments
  Applied adaptive behavior

Authors should make every effort to suggest implications of their work for both natural and artificial animals. Papers which do not deal explicitly with adaptive behavior will be rejected.

Submission Instructions

Authors are requested to send five copies (hard copy only) of a full paper to the Program Chair (Dave Cliff). Papers should not exceed 10 pages (excluding the title page), with 1 inch margins all around, and no smaller than 10 pt (12 pitch) type (Times Roman preferred). A LaTeX template is available by email, see below. This is the same format as SAB90 and SAB92. Each paper must include a title page containing the following: (1) Full names, postal addresses, phone numbers, email addresses (if available), and fax numbers for each author, (2) A 100-200 word abstract, (3) The topic area(s) in which the paper could be reviewed (see list above). Camera ready versions of the papers, in two-column format, will be required after acceptance.

Computer, video, and robotic demonstrations are also invited. Please contact Phil Husbands to make arrangements for demonstrations. Other program proposals will also be considered.

Conference committee

Conference Chair:
  Philip HUSBANDS, School of Cognitive and Comp. Sciences, University of Sussex, Brighton BN1 9QH, UK, philh at cogs.susx.ac.uk
  Jean-Arcady MEYER, Groupe de Bioinformatique, Ecole Normale Superieure, 46 rue d'Ulm, 75230 Paris Cedex 05, meyer at wotan.ens.fr
  Stewart WILSON, The Rowland Institute for Science, 100 Cambridge Parkway, Cambridge, MA 02142, USA, wilson at smith.rowland.org

Program Chair:
  David CLIFF, School of Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, UK, e-mail: davec at cogs.susx.ac.uk

Financial Chair: P. Husbands, H. Roitblat
Local Arrangements: I. Harvey, P. Husbands

Program Committee
M. Arbib, USA R. Arkin, USA R. Beer, USA A.
Berthoz, France L. Booker, USA R. Brooks, USA P. Colgan, Canada T. Collett, UK H. Cruse, Germany J. Delius, Germany J. Ferber, France N. Franceschini, France S. Goss, Belgium J. Halperin, Canada I. Harvey, UK I. Horswill, USA A. Houston, UK L. Kaelbling, USA H. Klopf, USA L-J. Lin, USA P. Maes, USA M. Mataric, USA D. McFarland, UK G. Miller, UK R. Pfeifer, Switzerland H. Roitblat, USA J. Slotine, USA O. Sporns, USA J. Staddon, USA F. Toates, UK P. Todd, USA S. Tsuji, Japan W. Uttal, USA D. Waltz, USA. Official Language: English Publisher: MIT Press/Bradford Books Conference Information The conference will be held in the centre of Brighton, on the South Coast. This is a resort town, less than one hour from London, only 30 mins from London Gatwick airport. A number of invited speakers will be giving tutorial talks in subject areas covered by the conference. Through sponsorship, conference fees will be kept to a minimum and there should also be some travel grants available. We have made arrangements for the Proceedings to be available at the conference, which requires efficient processing of submitted papers; hence if possible first submissions should be made using LaTex template available by email. Email Information Email sab94 at cogs.susx.ac.uk with subject line "Subscribe mail-list" to be put on our mailing list and be sent further information about conference arrangements when available. Email sab94 at cogs.susx.ac.uk with subject line "LaTex template" to be sent LaTex template for camera-ready and for initial submissions. Important Dates =============== JAN 5, 1994: Submission deadline MAR 10: Notification of acceptance or rejection APR 10: Camera ready revised versions due MAY 1: Early registration deadline JUL 8: Regular registration deadline AUG 8-12: Conference dates General queries to: sab94 at cogs.susx.ac.uk ============================================================================== From george at psychmips.york.ac.uk Mon Nov 15 08:28:07 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Mon, 15 Nov 93 13:28:07 +0000 (GMT) Subject: CHI '94 Workshop Message-ID: CHI '94 Workshops Workshops provide an opportunity for small groups of participants who share a technical interest to meet for 1 to 2 days of dialogue on their areas of common concern. Workshops are different than paper sessions, panels and posters, in that the focus of workshops is on group discussion of topics rather than presentations of individuals' positions with follow-up questions. All workshops require pre-conference activity by the participants. CHI '94 offers 10 workshops covering a range of research and applied topics. These workshops will be held Sunday, April 24 and Monday, April 25. Results of the workshop can be presented both during and after the conference. During the conference, it is possible to present results and expand discussion by holding a Special Interest Group Meeting (see information on SIGs in this program). After the conference, each organizer provides an article summarizing the workshop for publication in the SIGCHI Bulletin. Several SIGCHI workshops have further presented their contents by publishing books and journal articles. Participation in a Workshop: To ensure a small enough group for open interchange, each workshop is limited to a maximum of 20 participants including the organizers. Participants are chosen before the conference on the basis of position papers sent to the workshop organizers. 
Unless stated otherwise in the individual workshop descriptions below, the position papers are 2-3 page statements on the workshop theme. All position papers are due to all workshop organizers by February 18th, 1994. Submitters will be notified of their selection by March 4, 1994 and must confirm their participation by March 18, 1994. Fees: The fees are $25 for a 1-day workshop, $40 for a 1.5-day workshop, and $50 for a 2-day workshop. ************************************************************************* Pattern Recognition in Human-Computer Interaction: A Viable Approach? All day Sunday, April 24 and Monday, April 25 Janet Finlay University of Huddersfield, UK Alan Dix University of York, UK George Bolt University of York, UK In 1991, a SIGCHI workshop entitled "Neural Networks and Pattern Recognition in Human-Computer Interaction" was held, involving researchers using novel techniques, such as machine learning and neural networks, on human-computer interaction (HCI) problems. Three years on it is still unclear whether such an approach is viable for realistic applications. This workshop will address this question, bringing together researchers in pattern recognition and HCI researchers who are investigating problems involving the analysis of traces of interaction (e.g. evaluation, user modelling, error diagnosis). The emphasis of the workshop will be active research: Participants will attempt to apply pattern recognition techniques to derive solutions to identified HCI problems. Its aim is twofold: to initiate interdisciplinary research in the area and to consider the scope of these methods. This will be a two-day workshop, limited to 16 participants who will be involved either in pattern recognition (statistical, inductive or neural) or in relevant HCI research, but not necessarily both. Position statements (2-3 pages) from pattern recognition researchers should describe the technique, its strengths and limitations, and any computer tools. HCI researchers should identify their problem area and the pattern recognition issues it raises. Participants will begin discussion and establish research "teams" prior to the workshop itself, and applicants should be prepared to take part in this preliminary work. A book based on the workshop activities is planned and participants will be asked to submit papers for inclusion at a later date. Contact: Dr. 
Janet Finlay School of Computing and Mathematics University of Huddersfield Queensgate Huddersfield, HD1 3DH, UK Voice: +44-484 472147 Answering Machine: +44-484 649108 Fax: +44-484 421106 janet at zeus.hud.ac.uk ****************************************************************** From B344DSL at UTARLG.UTA.EDU Tue Nov 16 12:24:07 1993 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: 16 Nov 1993 12:24:07 -0500 (CDT) Subject: Conference announcement (Turkey) Message-ID: CALL FOR PAPERS TAINN III The Third Turkish Symposium on ARTIFICIAL INTELLIGENCE & NEURAL NETWORKS June 22-24, 1994, METU, Ankara, Turkey Organized by Middle East Technical University & Bilkent University in cooperation with Bogazici University, TUBITAK INNS Turkey SIG, IEEE Computer Society Turkey Chapter, ACM SIGART Turkey Chapter, Conference Chair: Nese Yalabik (METU), nese at vm.cc.metu.edu.tr Program Committee Co-chairs: Cem Bozsahin (METU), bozsahin at vm.cc.metu.edu.tr Ugur Halici (METU), halici at vm.cc.metu.edu.tr Kemal Oflazer (Bilkent), ko at cs.bilkent.edu.tr Organization Committee Chair: Gokturk Ucoluk (METU) , ucoluk at vm.cc.metu.edu.tr Program Comittee: L. Akin (Bosphorus), V. Akman (Bilkent), E. Alpaydin (Bosphorus), S.I. Amari (Tokyo), I. Aybay (METU), B. Buckles (Tulane), G. Carpenter (Boston), I. iekli (Bilkent), C. Dagli (Missouri-Rolla), D.Davenport (Bilkent), G. Ernst (Case Western), A. Erkmen (METU) N. Findler (Arizona State), E. Gelenbe (Duke), M. Guler (METU), A. Guvenir (Bilkent), S. Kocabas (TUBITAK), R. Korf (UCLA), S. Kuru (Bosphorus), D. Levine (Texas Arlington), R. Lippmann (MIT), K. Narendra (Yale), H. Ogmen (Houston), U. Sengupta (Arizona State), R. Parikh (CUNY), F. Petry (Tulane), C. Say (Bosphorus), A. Yazici (METU), G. Ucoluk (METU), P. Werbos (NSF), N. Yalabik (METU), L. Zadeh (California), W. Zadrozny (IBM TJ Watson) Organization Committee: A. Guloksuz, O. Izmirli, E. Ersahin, I. Ozturk, . Turhan Scope of the Symposium * Commonsense Reasoning * Expert Systems * Knowledge Representation * Natural Language Processing * AI Programming Environments and Tools * Automated Deduction * Computer Vision * Speech Recognition * Control and Planning * Machine Learning and Knowledge Acquisition * Robotics * Social, Legal, Ethical Issues * Distributed AI * Intelligent Tutoring Systems * Search * Cognitive Models * Parallel and Distributed Processing * Genetic Algorithms * NN Applications * NN Simulation Environments * Fuzzy Logic * Novel NN Models * Theoretical Aspects of NN * Pattern Recognition * Other Related Topics on AI and NN Paper Submission: Submit five copies of full papers (in English or Turkish) limited to 10 pages by January 31, 1994 to : TAINN III, Cem Bozsahin Department of Computer Engineering Middle East Technical University, 06531, Ankara, Turkey Authors will be notified of acceptance by April 1, 1994. Accepted papers will be published in the symposium proceedings. The conference will be held on the campus of Middle East Technical University (METU) in Ankara, Turkey. A limited number of free lodging facilities will be provided on campus for student participants. If there is sufficient interest, sightseeing tours to the nearby Cappadocia region known for its mystical underground cities and fairy chimneys, to the archaeological remains at Alacahoyuk , the capital of the Hittite empire, and to local museums will be organized. 
For further information and announcements contact:

TAINN, Ugur Halici
Department of Electrical Engineering
Middle East Technical University
06531, Ankara, Turkey
EMAIL: TAINN at VM.CC.METU.EDU.TR (AFTER JANUARY 1994)
       HALICI at VM.CC.METU.EDU.TR (BEFORE)

---------------------------------------------------------------------
YOUR HELP IN DISTRIBUTING THIS ANNOUNCEMENT ON OTHER BULLETIN BOARDS AND LISTS WHICH HAVE AN AUDIENCE ON ARTIFICIAL INTELLIGENCE OR NEURAL NETWORKS IS HIGHLY APPRECIATED.

From jagota at cs.Buffalo.EDU Mon Nov 15 20:28:36 1993
From: jagota at cs.Buffalo.EDU (Arun Jagota)
Date: Mon, 15 Nov 93 20:28:36 EST
Subject: NIPS workshop schedule
Message-ID: <9311160128.AA18065@pegasus.cs.Buffalo.EDU>

NIPS*93 Workshop: Neural Network Methods for Optimization Problems
================
December 4, Vail, CO, USA

Intended Audience: Researchers interested in Connectionist solution of
=================  optimization problems.

Organizer: Arun Jagota
=========  jagota at cs.buffalo.edu

Program:
=======
Ever since the work of Hopfield and Tank, neural networks have found increasing use for the approximate solution of hard optimization problems. The successes in the past have, however, been limited when compared to traditional methods. In this workshop, speakers will present state of the art research on neural network methods for optimization problems. This ranges from specific algorithms to specific applications to general methodologies to theoretical issues to experimental studies to comparisons with conventional approaches. We hope to examine strengths and weaknesses of current algorithms, and discuss potential areas for improvement. We hope to exchange views and computational experiences on the merits and deficiencies of particular algorithms. We hope to carefully study some of the broad theoretical and methodological issues. We hope to discuss significant applications. We hope to discuss parallel implementation experiences. A fair amount of time is reserved in the afternoon session for informal discussion and audience participation on the above topics (see below).

Morning Session:
7:30 - 8:00 N. Peterfreund, Technion
            Trajectory Control of Convergent Networks with Applications to TSP
8:00 - 8:30 Bruce Rosen, UT San Antonio
            Training Feedforward NN Quickly and Accurately with Very Fast Simulated Annealing Methods
8:30 - 9:00 Tal Grossman, Los Alamos National Lab
            A Neural Network Approach to the General Minimal Cover Problem
9:00 - 9:30 Eric Mjolsness, Yale
            Algebraic and Grammatical Design of Relaxation Nets

Afternoon Session:
4:30 - 5:00 Yoshiyasu Takefuji, Case Western Reserve University
            Neural Computing for Optimization and Combinatorics
5:00 - 5:30 Arun Jagota, Memphis State
            Report on the DIMACS Combinatorial Optimization Challenge: A Comparison of Neural Network Methods With Several Others
5:30 - 6:00 Daniel S. Levine, UT Arlington
            Optimality in Biological and Artificial Neural Networks
6:00 - 6:25 Informal Discussion
6:25 - 6:30 Arun Jagota, Closing Remarks

Arun Jagota

From cga at ai.mit.edu Tue Nov 16 15:08:16 1993
From: cga at ai.mit.edu (Christopher G.
Atkeson) Date: Tue, 16 Nov 93 15:08:16 EST Subject: NIPS Workshop Message-ID: <9311162008.AA01469@mulch> NIPS*93 Workshop: Memory-based Methods for Regression and Classification ================= Intended Audience: Researchers interested in memory-based methods, locality in learning ================== Organizers: =========== Chris Atkeson Tom Dietterich Andrew Moore Dietrich Wettschereck cga at ai.mit.edu tgd at cs.orst.edu awm at cs.cmu.edu wettscd at cs.orst.edu Program: ======== Local, memory-based learning methods store all or most of the training data and predict new points by analyzing nearby training points (e.g., nearest neighbor, radial-basis functions, local linear methods). The purpose of this workshop is to determine the state of the art in memory-based learning methods and to assess current progress on important open problems. Specifically, we will consider such issues as how to determine distance metrics and smoothing parameters, how to regularize memory-based methods, how to obtain error bars on predictions, and how to scale to large data sets. We will also compare memory-based methods with methods (such as multi-layer perceptrons) that construct global decision boundaries or regression surfaces, and we will explore current theoretical models of local learning methods. By the close of the workshop, we will have assembled an agenda of open problems requiring further research. This workshop meets both days. Friday will be devoted primarily to classification tasks, and Saturday will be devoted primarily to regression tasks. Please send us email if you would like to present something. Current schedule: Friday: 7:30-7:45 Introduction (Dietterich) 7:45-8:15 Leon Bottou 8:15-8:30 Discussion 8:30-9:00 David Lowe 9:00-9:30 Discussion 4:30-5:00 Patrice Simard 5:00-5:15 Discussion 5:15-5:45 Dietrich Wettschereck 5:45-6:15 John Platt 6:15-6:30 Discussion Saturday: 7:30-8:00 Trevor Hastie 8:00-8:15 Discussion 8:15-8:45 Doyne Farmer 8:45-9:00 Discussion 9:00-9:30 5-minute descriptions by other participants 4:30-5:00 Chris Atkeson 5:00-5:30 Frederico Girosi 5:30-6:00 Andrew Moore 6:00-6:30 Discussion From jordan at psyche.mit.edu Tue Nov 16 15:15:42 1993 From: jordan at psyche.mit.edu (Michael Jordan) Date: Tue, 16 Nov 93 15:15:42 EST Subject: faculty opening at MIT Message-ID: The MIT Department of Brain and Cognitive Sciences anticipates making a tenure-track appointment in computational brain and cognitive science at the ASSISTANT PROFESSOR level. Candidates should have a strong mathematical background and an active research interest in the mathematical modeling of specific neural or cognitive phenomena. Individuals whose research focuses on learning and memory are especially encouraged to apply. Responsibilities include graduate and undergraduate teaching and research supervision. Applications should include a brief cover letter stating the candidate's research and teaching interests, a vita, three letters of recommendation and representative reprints. Send applications by January 15, 1994 to: Michael I. Jordan, Chair Faculty Search Committee E10-018 MIT Cambridge, MA 02139 Qualified women and minority candidates are especially encouraged to apply. MIT is an Affirmative Action/Equal Opportunity employer. 
From fellous at rana.usc.edu Tue Nov 16 17:24:57 1993
From: fellous at rana.usc.edu (Jean-Marc Fellous)
Date: Tue, 16 Nov 93 14:24:57 PST
Subject: Please forward this to connectionists-ml and any other appropriate
Message-ID: <9311162224.AA14052@rana.usc.edu>

Could you please post this announcement ....

ASSISTANT/ASSOCIATE PROFESSOR
BIOMEDICAL ENGINEERING/NEUROSCIENCE
UNIVERSITY OF SOUTHERN CALIFORNIA

A tenure-track faculty position is available in the Department of Biomedical Engineering at the University of Southern California. This is a new position, created to strengthen the concentration of neuroscience research within the Department. Applicants should be capable of establishing an externally funded research program that includes a rigorous, quantitative approach to functional aspects of the nervous system. A combined theoretical and experimental approach is preferred, though applicants with purely theoretical research programs will be considered. Multiple opportunities for interdisciplinary research are fostered by USC academic and research programs such as the Biomedical Simulations Resource, the Program in Neuroscience, and the Center for Neural Computing. Send curriculum vitae, three letters of recommendation, and a description of current and future research by January 1, 1994 to Search Committee, Department of Biomedical Engineering, 530 Olin Hall, University of Southern California, Los Angeles, CA 90089-1451.

From mm at SANTAFE.EDU Tue Nov 16 17:52:02 1993
From: mm at SANTAFE.EDU (Melanie Mitchell)
Date: Tue, 16 Nov 93 15:52:02 MST
Subject: paper available
Message-ID: <9311162252.AA25934@wupatki>

The following paper (available via anonymous ftp) may be of interest to some on this list:

Evolving Cellular Automata to Perform Computations: Mechanisms and Impediments

Melanie Mitchell (Santa Fe Institute), James P. Crutchfield (UC Berkeley), Peter T. Hraber (Santa Fe Institute)

Santa Fe Institute Working Paper 93-11-071
Submitted to Physica D
October 18, 1993

Abstract

We present results from experiments in which a genetic algorithm was used to evolve cellular automata (CAs) to perform a particular computational task---one-dimensional density classification. We look in detail at the evolutionary mechanisms producing the GA's behavior on this task and the impediments faced by the GA. In particular, we identify four ``epochs of innovation'' in which new CA strategies for solving the problem are discovered by the GA, describe how these strategies are implemented in CA rule tables, and identify the GA mechanisms underlying their discovery. The epochs are characterized by a breaking of the task's symmetries on the part of the GA. The symmetry breaking results in a short-term fitness gain but ultimately prevents the discovery of the most highly fit strategies. We discuss the extent to which symmetry breaking and other impediments are general phenomena in any GA search.

To obtain an electronic copy of this paper: Note that the paper (44 pages) is broken up into two halves that must be retrieved separately.

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get sfi-93-11-071.part1.ps.Z
get sfi-93-11-071.part2.ps.Z
quit

Then at your system:
uncompress sfi-93-11-071.part1.ps.Z
uncompress sfi-93-11-071.part2.ps.Z
lpr -P sfi-93-11-071.part1.ps
lpr -P sfi-93-11-071.part2.ps

If you cannot obtain an electronic copy, send a request for a hard copy to dlu at santafe.edu.
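[Editorial note: to make the density-classification task in the abstract above concrete, here is a minimal sketch in Python -- not code from the paper, and using a hand-written radius-1 "local majority" rule rather than any of the evolved radius-3 rules the authors study -- of how a one-dimensional binary CA is run and scored on the task.]

import random

def majority_rule(left, centre, right):
    # Output 1 iff the three-cell neighbourhood contains two or more 1s.
    return 1 if (left + centre + right) >= 2 else 0

def step(lattice):
    n = len(lattice)
    return [majority_rule(lattice[(i - 1) % n], lattice[i], lattice[(i + 1) % n])
            for i in range(n)]

def classify(lattice, max_steps=200):
    # Run the CA; return 1 if it settles to all 1s, 0 if all 0s, None otherwise.
    for _ in range(max_steps):
        lattice = step(lattice)
        if all(c == 1 for c in lattice):
            return 1
        if all(c == 0 for c in lattice):
            return 0
    return None

initial = [random.randint(0, 1) for _ in range(149)]  # odd length, so no density ties
print("majority of 1s in initial state:", sum(initial) * 2 > len(initial))
print("CA answer (None = never settled):", classify(initial))

[On most random initial conditions this naive rule freezes into stable blocks of 0s and 1s rather than relaxing to a uniform state, which is precisely the kind of failure that rules evolved for the task have to overcome.]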
From mm at santafe.edu Tue Nov 16 18:28:12 1993 From: mm at santafe.edu (Melanie Mitchell) Date: Tue, 16 Nov 93 16:28:12 MST Subject: paper available Message-ID: <9311162328.AA26121@wupatki> The following paper (available via anonymous ftp) may be of interest to readers of this list: Genetic Algorithms and Artificial Life Melanie Mitchell Stephanie Forrest Santa Fe Institute University of New Mexico Santa Fe Institute Working Paper 93-11-072 To appear in _Artificial Life_ Abstract Genetic algorithms are computational models of evolution that play a central role in many artificial-life models. We review the history and current scope of research on genetic algorithms in artificial life, using illustrative examples in which the genetic algorithm is used to study how learning and evolution interact, and to model ecosystems, immune system, cognitive systems, and social systems. We also outline a number of open questions and future directions for genetic algorithms in artificial-life research. To obtain an electronic copy of this paper: ftp ftp.santafe.edu login: anonymous password: cd /pub/Users/mm binary get sfi-93-11-072.ps.Z quit Then at your system: uncompress sfi-93-11-072.ps.Z lpr -P sfi-93-11-072.ps If you cannot obtain an electronic copy, send a request for a hard copy to dlu at santafe.edu. From tds at ai.mit.edu Tue Nov 16 19:52:23 1993 From: tds at ai.mit.edu (Terence D. Sanger) Date: Tue, 16 Nov 93 19:52:23 EST Subject: Principal Components algorithms Message-ID: <9311170052.AA18287@rice-chex> Dear Connectionists, Recently, several people have asked me about the relationship between Kung&Diamantaras's APEX algorithm and the GHA algorithm I proposed in 1988. I can summarize my view of algorithms for Principal Components Analysis (PCA) by grouping them into 3 categories: 1) Algorithms to find the first principal component 2) Algorithms to find a set of vectors which spans the subspace of the first m principal components 3) Algorithms which find the first m principal components (eigenvectors) directly (I can provide a fairly extensive bibliography of these to anyone who is interested.) APEX and GHA (among many others) are in category 3. Most algorithms in category 3 make use of an algorithm from category 1 (such as Oja's method) combined with a deflation procedure which removes components from the input as they are discovered by the network. In other words, an algorithm to find the first principal component will actually find the second one once the first component is removed. Continuing this procedure allows all components to be extracted. Note that pure "Hebbian" algorithms usually fall into category 1. I propose the following hypothesis: "All algorithms for PCA which are based on a Hebbian learning rule must use sequential deflation to extract components beyond the first." All category 3 algorithms that I know about conform to the hypothesis. The easiest example is GHA, which was specifically designed as a differential-equation implementation of sequential deflation. It turns out that APEX also conforms to the hypothesis, despite the apparent use of lateral connections instead of deflation. Some fairly straightforward mathematics shows that the APEX equations can be rewritten in such a way that they are almost equivalent to the GHA equations. (Instructions for obtaining a Latex file with the derivation are below.) 
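[Editorial note: to make the "category 1 plus deflation" structure concrete, here is a minimal sketch in Python -- the toy data and parameter choices are assumptions, and this is neither the GHA nor the APEX code discussed here -- of Oja's single-unit Hebbian rule combined with explicit deflation of the input.]

import numpy as np

rng = np.random.default_rng(0)
# Toy data whose principal axes are approximately the coordinate axes,
# with standard deviations 2.0 > 1.5 > 1.0 > 0.5 along successive axes.
X = rng.standard_normal((5000, 4)) * np.array([2.0, 1.5, 1.0, 0.5])

def oja_first_component(data, lr=0.005, epochs=20):
    # Category 1: Oja's single-unit Hebbian rule, which converges to the
    # first principal component of `data`.
    w = rng.standard_normal(data.shape[1])
    w /= np.linalg.norm(w)
    for _ in range(epochs):
        for x in data:
            y = w @ x
            w += lr * y * (x - y * w)  # Hebbian term y*x minus Oja's decay y^2*w
    return w / np.linalg.norm(w)

components = []
residual = X.copy()
for _ in range(3):
    w = oja_first_component(residual)
    components.append(w)
    # Deflation: subtract the component just found, so the *same* rule
    # now converges to the next principal component of the residual data.
    residual = residual - np.outer(residual @ w, w)

print(np.round(np.array(components), 2))

[GHA folds this deflation step into its weight-update equations, and APEX achieves the equivalent effect through its lateral connections; the sketch only illustrates the structural point of the hypothesis, namely that a Hebbian rule which by itself finds the first component yields the later components once earlier ones are removed from the input.]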
The use of lateral connections in APEX may indicate a more "biologically plausible" implementation than the original GHA formulation, but the performance of the two algorithms should be almost identical. Another apparently different algorithm which nevertheless conforms to the hypothesis is Brockett's. It is possible to think of GHA as a "thresholded" form of Brockett's algorithm. (Instructions for Latex file are below.) I appreciate all comments/opinions/counterexamples, so please feel free! Regards to all, Terry Sanger (tds at ai.mit.edu) Instructions for retrieving latex document: ftp ftp.ai.mit.edu login: anonymous password: your-net-address cd pub/sanger-papers get apex.tex get brockett.tex quit latex apex latex brockett lpr apex.dvi lpr brockett.dvi From shs at yantra.ernet.in Tue Nov 16 05:02:42 1993 From: shs at yantra.ernet.in (S H Srinivasan) Date: Tue, 16 Nov 93 10:52:42+050 Subject: dynamic binding Message-ID: <9311160552.AA01283@yantra.noname> There is yet another solution to dynamic binding which I have been working on. Conceptually we can solve the problem of dynamic binding if we use binary *vector* of activations for each *unit* instead of the usual binary ({0,1}) activations. Taking the example of Graham Smith assume that there are four features - red, blue, square, and triangle. Also assume that we want to represent two patterns simultaneously. Each unit now takes activation in {0,1}^{2} so that the activation pattern ( (1 0) (0 1) (1 0) (0 1)) represents "red square and blue triangle" and ( (0 0) (1 1) (0 1) (1 0)) represents "blue triangle and blue square". As Ron Sun observes: > The point of doing dynamic binding is to be able to use the > binding in other processing tasks, for example, high level > reasoning, especially in rule-based (or rule-like) reasoning. > In such reasoning, bindings are constructed and deconstructed, > passed around, checked against some constraints, and unbound > or rebound to something else. Simply forming an association > is NOT the point. the whole point of dynamic binding is the ability to *use* it. Using binary vector activations for units, it is possible to do tasks like multiple content-addressable memory (MCAM) - in which multiple patterns are retrieved simultaneously - in a straightforward manner. We have also looked into a (conceptual) *implementation* of the above scheme using complex-valued activations for the units. It is possible to represent about five objects using complex activations. It is also possible to perform tasks like MCAM. Finally, a question to neurobiologists: Can the existence of multiple neurotransmitters in neurons be related to the binary vector of activations idea? S H Srinivasan Center for AI & Robotics Bangalore - 560 012, INDIA. From reza at ai.mit.edu Wed Nov 17 08:26:38 1993 From: reza at ai.mit.edu (Reza Shadmehr) Date: Wed, 17 Nov 93 08:26:38 EST Subject: NSF Postdoctoral Fellowships Message-ID: <9311171326.AA16455@corpus-callosum.ai.mit.edu> Here's information on a Postdoctoral Fellowship program from NSF for Computational Sciences. 
with best wishes, Reza Shadmehr reza at ai.mit.edu --------------------------------------- CISE Postdoctoral Research Associates in Computational Science and Engineering and, in Experimental Science Program Announcement DIVISION OF ADVANCED SCIENTIFIC COMPUTING OFFICE OF CROSS-DISCIPLINARY ACTIVITIES DEADLINE: NOVEMBER 29, 1993 NATIONAL SCIENCE FOUNDATION CISE Postdoctoral Research Associates in Computational Science and Engineering CISE Postdoctoral Research Associates in Experimental Science The Computer and Information Science and Engineering (CISE) Directorate of the National Science Foundation plans a limited number of grants for support of Postdoctoral Research Associateships contingent upon available funding. The Associates are of two types: - Associateships in Computational Science and Engineering (CS&E Associates) supported by the New Technologies Program in the Division of Advanced Scientific Computing (DASC) in cooperation with other NSF CS&E disciplines (CS&E Associates). The objective of these Associateship awards is to increase expertise in the development of innovative methods and software for applying high performance, scalable parallel computing systems in solving large scale CS&E problems. - Associateships in Experimental Science (ES Associates) supported by the Office of Cross Disciplinary Activities (CDA) . The objective of the ES Associateship awards is to increase expertise in CISE experimental science by providing opportunities for associates to work in established laboratories performing experimental research in one or more of the research areas supported by the CISE Directorate. These awards provide opportunities for recent Ph.D.s to broaden their knowledge and experience and to prepare them for significant research careers on the frontiers of contemporary computational science and engineering and experimental science. It is assumed that CS&E Associates will conduct their research at academic research institutions or other centers or institutions which provide access, either on site or by network, to high performance, scalable parallel computing systems and will be performing research associated with those systems. It is assumed that ES Associates will conduct their research in academic research institutions or other institutions devoted to experimental science in one or more of the research areas supported by the CISE Directorate. Who may submit Universities, colleges, and other research institutions as described in Grants for Research and Education in Science and Engineering (GRESE), (NSF 92-89) are eligible to submit proposals to this program. For CS&E Associateships the institution must have access to high performance, emerging parallel computing systems. For ES Associateships, the institution should have an established laboratory performing research in CISE experimental areas (as described in Guide to Programs (NSF 92-78)). Associateship awards will be based on proposals submitted by the sponsoring institution. The principal investigator will serve as an unreimbursed scientific advisor for the research associate. Research associates should not be listed as co-principal investigators. Each proposal must include a research and training plan for the proposed research associate in an activity of computational science and engineering in any of the fields supported by DASC, other NSF CS&E programs or experimental research supported by the CISE Directorate. 
To be eligible for this support, individuals must; (1) be eligible to be appointed as a research associate or research assistant professor in the institution which has submitted the proposal, (2) fulfill the requirement for the doctoral degree in computational science and engineering, computer science or a closely related discipline by September 30, 1994. Award Amounts, Stipends and Research Expense Allowances Awards will range from $36,200-$46,200 for a 24 month period. The award will include $32,000-$42,000 to support the Research Associate (to be matched equally by the sponsoring institution). There will also be an allowance of $4,200 to the sponsoring institution, in lieu of indirect costs, as partial reimbursement for expenses incurred in support of the research. The annual award to the research associate will be composed of two parts; an annual stipend (salary and benefits) that may range from $28,000-$38,000, and a $4,000 per year research expense allowance expendable at the Associate's discretion for travel, publication expenses, and other research-related costs. There is no allowance for dependents. The effective date of the award cannot be later than January 1995. Matching Funds The institution must match the NSF award on a dollar for dollar basis excluding the $4,200 granted in lieu of indirect costs. Matching funds may come from grants from other NSF programs, other agencies programs, or from other institutional resources. Matching fund arrangements are the responsibility of the submitting institution and must be detailed in the budget request. To the extent that the sponsoring institution increases its cost sharing by providing additional stipend beyond the level of $38,000 over the 24 month award period, the CISE Postdoctoral Associates program will not provide additional funds. Evaluation and Selection Proposals will be reviewed by panel in accordance with established Foundation procedures and the general criteria described in the GRESE brochure. In particular, the review panel will consider: the candidate's ability, accomplishments, potential as evidenced by the quality and significance of past research, long range career goals, the likely impact of the proposed postdoctoral training and research on the future career goals, the likely impact of the proposed postdoctoral training and research on the future scientific development of the applicant and on the parallel computing infrastructure of the US (for CS&E Associates) or on Experimental Science in CISE disciplines (for ES Associates), and the adequacy of the sponsoring institutions access to high performance and/or experimental computational resources to support the proposed research. The selection of the Research Associates will be made by the National Science Foundation on the basis of panel reviews, with due consideration of the effect of the awards on the infrastructure of CS&E and experimental computer science research in the US. Copies of the GRESE brochure and other NSF publications are available at no cost from the NSF Forms and Publication Unit, phone (703) 306-1130, or via e-mail (Bitnet:pub at nsf or Internet:pubs at nsf.gov). Application Procedures and Proposal Materials To be eligible for consideration, a proposal must contain forms which can be found in the GRESE brochure. 
Required are a Supplementary Application Information Form (NSF Form 1225-one copy), a Current and Pending Support Form (NSF Form 1239-one copy) to be completed by the Principal Investigator (the scientific advisor), and one original and twelve copies of: (a) Cover page with institutional certificates (Form 1207). Title should indicate whether the proposal is an CS&E Postdoctoral Associate or ES Postdoctoral Associate. (b) Budget (Form 1030). (c) Statement with details regarding matching funds and their source. (d) Personal career goals statement not to exceed one single- spaced page, written by the research associate applicant, that describes the career goals of the applicant and what role the chosen research, scientific advisor and sponsoring institution will play in enhancing the realization of these long-range career goals. (e) Statement of results from prior NSF support (of the Principal Investigator) related to the proposed research. (f) Biographical sketch of the principal investigator as called for in the GRESE brochure. (g) Up-to-date curriculum vitae of the research associate applicant including a complete list of publications, but no reprints (a thesis should not be included, but a thesis abstract may be included). (h) Proposal abstract, less than 250 words, of the training and research plan. (i) Training and research plan (not to exceed three single- spaced typewritten pages). This should propose research which could be carried out during the award period. The creativity, description and essential elements of the research proposal must be those of the research associate applicant. (j) Statement from the proposed postdoctoral advisor nominating the research associate indicating the nature of the postdoctoral supervision to be given if the award is made. (k) Statement from the advisor clearly describing the computing facilities and resources that will be available to support the proposed research. (l) Three recommendations (normally including one from the doctoral advisor). Training and research plans should be provided to your references to assist their recommendations. Please note that the research description page limit is less than the research description page limit specified in GRESE. All application materials must be: (1) received by NSF no later than the deadline date November 29, 1993; (2) be postmarked no later than five (5) days prior to the deadline date; or (3) be sent via commercial overnight mail no later than two (2) days prior to the deadline date; to be considered for award. Send completed proposals with supporting application materials to: National Science Foundation - PPU Announcement No. 93-150 4201 Wilson Blvd. Arlington, VA 22230 Additional Information If you wish additional information, please contact Dr. Robert G.Voigt, Program Director, New Technologies, DASC, at 202-357-7727 (e-mail: rvoigt at nsf.gov) for CS&E Associates or Dr. Tse-Yun Feng, Program Director, CDA at (202) 357-7349 (e-mail: tfeng at nsf.gov) for ES Associates. After November 19, 1993, the phone numbers are respectively 703-306-1962 and 703-306-1980. Copies of most program announcements are available electronically using the Science and Technology Information System (STIS). The full text can be searched on-line, and copied from the system. Instructions for use of the system are in NSF 91-10 "STIS Flyer." The printed copy is available from the Forms and Publications Unit. An electronic copy may be requested by sending a message to "stis at nsf" (bitnet) or "stis at nsf.gov" (Internet). 
The Foundation provides awards for research in the sciences and engineering. The awardee is wholly responsible for the conduct of such research and preparation of the results for publication. The Foundation does not assume responsibility for such findings or their interpretation. The Foundation welcomes proposals on behalf of all qualified scientists and engineers and strongly encourages women, minorities, and persons with disabilities to compete fully in any of the research and research-related programs described in this document. Facilitation Awards for Scientists and Engineers with Disabilities provide funding for special assistance or equipment to enable persons with disabilities (investigators and other staff, including student research assistants) to work on an NSF project. See program announcement (NSF 91-54), or contact the program coordinator (703) 306-1697 for more information. In accordance with Federal statutes and regulations and NSF policies, no person on grounds of race, color, age, sex, national origin, or disability shall be excluded from participation in, denied the benefits of, or be subject to discrimination under any program or activity receiving financial assistance from the National Science Foundation. NSF has TDD (Telephone Device for the Deaf) capability which enables individuals with hearing impairments to communicate with the Division of Human Resource Management for information relating to NSF programs, employment, or general information. This number is (703) 306-0090. Grants awarded as a result of this announcement are administered in accordance with the terms and conditions of NSF GC-1, Grant General Conditions, or FDP-II, Federal Demonstration Project General Terms and Conditions, depending on the grantee organization. Copies of these documents are available at no cost from the NSF Forms and Publications Unit, phone (703) 306-1130, or via e-mail (Bitnet:pubs at nsf or Internet:pubs at nsf.gov). More comprehensive information is contained in the NSF Grant Policy Manual (July 1989) for sale through the Superintendent of Documents, Government Printing Office, Washington, DC 20402. From pjh at compsci.stirling.ac.uk Wed Nov 17 05:40:39 1993 From: pjh at compsci.stirling.ac.uk (Peter J.B. Hancock) Date: 17 Nov 93 10:40:39 GMT (Wed) Subject: NCPW94 Message-ID: <9311171040.AA02456@uk.ac.stir.cs.nevis> Preliminary Announcement and first call for papers 3rd Neural Computation and Psychology Workshop University of Stirling Scotland 31 August - 2 September 1994 The theme of next year's workshop will be models of perception: general vision, faces, music etc. There will be invited and contributed talks and posters. It is hoped that a proceedings will be published after the event. Participation from postgraduates is particularly encouraged. Papers will be selected on the basis of abstracts of at most 1000 words. Deadline for submission: 1 June 1994. For further information contact: Peter Hancock, Department of Psychology, pjh at uk.ac.stir.cs, 0786-467659 Leslie Smith, Department of Computing Science and Mathematics, lss at uk.ac.stir.cs, 0786-467435 From george at psychmips.york.ac.uk Wed Nov 17 10:03:22 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Wed, 17 Nov 93 15:03:22 +0000 (GMT) Subject: CHI '94 Workshop - HCI & Neural Networks Message-ID: I neglected to mention where the workshop is to be held in my posting. It will be with the main HCI conference in Boston, MA, USA. - George Bolt Dept. of Psychology, University of York, UK. 
From mozer at dendrite.cs.colorado.edu Wed Nov 17 12:12:13 1993 From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer) Date: Wed, 17 Nov 1993 10:12:13 -0700 Subject: book announcement Message-ID: <199311171712.AA01090@neuron.cs.colorado.edu> In case you don't already have enough to read, the following volume is now available: Mozer, M., Smolensky, P., Touretzky, D., Elman, J., & Weigend, A. (Eds.). (1994). _Proceedings of the 1993 Connectionist Models Summer School_. Hillsdale, NJ: Erlbaum Associates. The table of contents is listed below. For prepaid orders by check or credit card, the price is $49.95 US. Orders may be made by e-mail to "orders at leanhq.mhs.compuserve.com", by fax to (201) 666 2394, or by calling 1 (800) 926 6579. Include your credit card number, type, expiration date, and refer to "ISBN 1590-2".
-------------------------------------------------------------------------------
Proceedings of the 1993 Connectionist Models Summer School
Table of Contents
-------------------------------------------------------------------------------
NEUROSCIENCE
Sigma-pi properties of spiking neurons / Thomas Rebotier and Jacques Droulez
Towards a computational theory of rat navigation / Hank S. Wan, David S. Touretzky, and A. David Redish
Evaluating connectionist models in psychology and neuroscience / H. Tad Blair
VISION
Self-organizing feature maps with lateral connections: Modeling ocular dominance / Joseph Sirosh and Risto Miikkulainen
Joint solution of low, intermediate, and high level vision tasks by global optimization: Application to computer vision at low SNR / Anoop K. Bhattacharjya and Badrinath Roysam
COGNITIVE MODELING
Learning global spatial structures from local associations / Thea B. Ghiselli-Crippa and Paul W. Munro
A connectionist model of auditory Morse code perception / David Ascher
A competitive neural network model for the process of recurrent choice / Valentin Dragoi and J. E. R. Staddon
A neural network simulation of numerical verbal-to-arabic transcoding / A. Margrethe Lindemann
Combining models of single-digit arithmetic and magnitude comparison / Thomas Lund
Neural network models as tools for understanding high-level cognition: Developing paradigms for cognitive interpretation of neural network models / Itiel E. Dror
LANGUAGE
Modeling language as sensorimotor coordination / F. James Eisenhart
Structure and content in word production: Why it's hard to say dlorm / Anita Govindjee and Gary Dell
Investigating phonological representations: A modeling agenda / Prahlad Gupta
Part-of-speech tagging using a variable context Markov model / Hinrich Schutze and Yoram Singer
Quantitative predictions from a constraint-based theory of syntactic ambiguity resolution / Michael Spivey-Knowlton
Optimality semantics / Bruce B. Tesar
SYMBOLIC COMPUTATION AND RULES
What's in a rule? The past tense by some other name might be called a connectionist net / Kim G. Daugherty and Mary Hare
On the proper treatment of symbolism--A lesson from linguistics / Amit Almor and Michael Rindner
Structure sensitivity in connectionist models / Lars F. Niklasson
Looking for structured representations in recurrent networks / Mihail Crucianu
Back propagation with understandable results / Irina Tchoumatchenko
Understanding neural networks via rule extraction and pruning / Mark W. Craven and Jude W. Shavlik
Rule learning and extraction with self-organizing neural networks / Ah-Hwee Tan
RECURRENT NETWORKS AND TEMPORAL PATTERN PROCESSING
Recurrent networks: State machines or iterated function systems? / John F. Kolen
On the treatment of time in recurrent neural networks / Fred Cummins and Robert F. Port
Finding metrical structure in time / J. Devin McAuley
Representations of tonal music: A case study in the development of temporal relationships / Catherine Stevens and Janet Wiles
Applications of radial basis function fitting to the analysis of dynamical systems / Michael A. S. Potts, D. S. Broomhead, and J. P. Huke
Event prediction: Faster learning in a layered Hebbian network with memory / Michael E. Young and Todd M. Bailey
CONTROL
Issues in using function approximation for reinforcement learning / Sebastian Thrun and Anton Schwartz
Approximating Q-values with basis function representations / Philip Sabes
Efficient learning of multiple degree-of-freedom control problems with quasi-independent Q-agents / Kevin L. Markey
Neural adaptive control of systems with drifting parameters / Anya L. Tascillo and Victor A. Skormin
LEARNING ALGORITHMS AND ARCHITECTURES
Temporally local unsupervised learning: The MaxIn algorithm for maximizing input information / Randall C. O'Reilly
Minimizing disagreement for self-supervised classification / Virginia R. de Sa
Comparison of two unsupervised neural network models for redundancy reduction / Stefanie Natascha Lindstaedt
Solving inverse problems using an EM approach to density estimation / Zoubin Ghahramani
Estimating a-posteriori probabilities using stochastic network models / Michael Finke and Klaus-Robert Muller
LEARNING THEORY
On overfitting and the effective number of hidden units / Andreas S. Weigend
Increase of apparent complexity is due to decrease of training set error / Robert Dodier
Momentum and optimal stochastic search / Genevieve B. Orr and Todd K. Leen
Scheme to improve the generalization error / Rodrigo Garces
General averaging results for convex optimization / Michael P. Perrone
Multitask connectionist learning / Richard A. Caruana
Estimating learning performance using hints / Zehra Cataltepe and Yaser S. Abu-Mostafa
SIMULATION TOOLS
A simulator for asynchronous Hopfield models / Arun Jagota
An object-oriented dataflow approach for better designs of neural net architectures / Alexander Linden
From mayer at Heuristicrat.COM Tue Nov 16 11:59:58 1993 From: mayer at Heuristicrat.COM (Andrew Mayer) Date: Tue, 16 Nov 1993 08:59:58 -0800 Subject: Announcement: Summer Institute on Probabilistic Reasoning in AI Message-ID: <199311161659.AA16863@euclid.Heuristicrat.COM> Summer Institute on Probabilistic Reasoning in Artificial Intelligence Corvallis, Oregon July 22 - 27, 1994 WHAT: An intensive short course in modern probabilistic modeling, Bayesian inference, and decision theory designed for advanced PhD students, recent PhDs, and government and industry researchers. WHY: In the last decade, researchers have made significant breakthroughs in techniques for representing and reasoning about uncertain information. Many now feel that probabilistic reasoning and rational decision making provide a sound and practical foundation needed for a variety of problems in artificial intelligence. In fact, these techniques now form the basis of state-of-the-art applications in search, planning, machine learning, diagnosis, vision, robotics, and speech understanding. The field has reached a level of maturity where the basic techniques are well understood and ready for dissemination. At the same time, there is a wealth of potential applications and open research topics. Thus, the time is ripe to train the next generation of researchers.
WHERE, WHEN, HOW: The first Summer Institute will be held at Oregon State University, Corvallis, Oregon from July 22-27, 1994. A distinguished faculty will lecture on foundations of probabilistic reasoning and decision theory, knowledge acquisition, learning, and inference methods. Case studies will be presented on implemented applications. The Institute will provide housing, and expects to be able to provide limited travel funds. The Institute is sponsored by the Air Force Office of Scientific Research. WHO: Faculty include:
Jack Breese, Microsoft
Wray Buntine, NASA Ames
Bruce D'Ambrosio, Oregon State
Thomas Dean, Brown
Robert Fung, Lumina Decision Systems
Othar Hansson, HRI & UC Berkeley
David Heckerman, Microsoft
Max Henrion, Lumina Decision Systems
Keiji Kanazawa, UC Berkeley
Tod Levitt, IET & Stanford
Andrew Mayer, HRI & UC Berkeley
Judea Pearl, UCLA
Mark Peot, Stanford
Ross Shachter, Stanford
Michael Wellman, Univ. Michigan
and others to be announced at a later date
TO APPLY: For information and applications please contact the recruiting chair: Andrew Mayer Heuristicrats Research, Inc. 1678 Shattuck Avenue, Suite 310 Berkeley, CA 94709-1631 (510) 845-5810, x629 mayer at heuristicrat.com APPLICATIONS MUST BE RECEIVED BY FEBRUARY 15, 1994. Early application is encouraged to aid our planning process. From lba at ilusion.inesc.pt Fri Nov 19 05:29:39 1993 From: lba at ilusion.inesc.pt (Luis B. Almeida) Date: Fri, 19 Nov 93 11:29:39 +0100 Subject: Principal Components algorithms In-Reply-To: <9311170052.AA18287@rice-chex> (tds@ai.mit.edu) Message-ID: <9311191029.AA07506@ilusion.inesc.pt> Dear Terence, dear Connectionists, Terence writes: > I propose the following hypothesis: > "All algorithms for PCA which are based on a Hebbian learning rule must > use sequential deflation to extract components beyond the first." I agree with that hypothesis, in what concerns most of the PCA algorithms. However, I am not sure of that for all algorithms. One of them is the "weighted subspace" algorithm of Oja et al. (see ref. below). The simplest way I have found to interpret this algorithm is as a weighted combination of Williams' error-correction learning (or the plain subspace algorithm, which is the same) and Oja's original Hebbian rule. If one takes into account the relative weights of both, which are different from one unit to another, it is rather easy to understand that the algorithm should extract the principal components. I can give more detail on this interpretation, if people find it useful. From granger at ics.uci.edu Fri Nov 19 14:32:38 1993 From: granger at ics.uci.edu (Rick Granger) Date: Fri, 19 Nov 1993 11:32:38 -0800 Subject: Principal Components algorithms In-Reply-To: Your message of "Tue, 16 Nov 1993 19:52:23 EST." <9311170052.AA18287@rice-chex> Message-ID: <6305.753737558@ics.uci.edu> Terry, you point out that correlational or "Hebbian" algorithms identify the first principal component, and that "sequential deflation" (i.e., successive removal of the prior component) will then iteratively discover the subsequent components. Based on our bottom-up simulation studies some years ago of the combined olfactory bulb - olfactory paleocortex system, we arrived at an algorithm that we identified as performing this class of function.
In brief, cortex processes feedforward inputs from bulb, cortical feedback to bulb inhibits a portion of the input, the feedforward remainder from bulb is then processed by cortex, and the process iterates, successively removing portions of the input and then processing them. We described our findings in a 1990 article (Ambros-Ingerson, Granger and Lynch, Science, 247: 1344-1348, 1990). Moreover, in that paper we identified a superset of this class of functions, which includes families of algorithms both for PCA and for the disparate statistical function of hierarchical clustering. Intuitively, in a network that computes principal components, all the target cells (or any single target cell) will respond to all inputs, and with correlational learning the weight vectors coverge to the first principal component. Consider instead the same network but with a lateral inhibition (or "competitive" or "winner-take-all") performance rule (see, e.g., Coultrip et al., Neural Networks, 5: 47-54). In this version, instead of all the cells acting in concert computing the principal component, each individual "winning" cell will move to the mean of just that subset of inputs that it wins on. Then the response corresponds to the statistical operation of clustering (as in the work of Von der Malsburg '73, Grossberg '76, Zipser and Rumelhart '86, and many others). Then an operation of "sequential deflation" (in this case, successive removal of the cluster via inhibitory feedback from cortex to bulb) identifies sub-clusters (and then sub-sub-clusters, etc.), iteratively descending a hierarchy that successively approximates the inputs, performing the operation of hierarchical clustering. Thus these two operations, hierarchical clustering and principal components analysis, fall out as special cases of the same general class of "successive removal" (or "sequential deflation") algorithms. A formal characterization of this finding appears in the 1990 Science paper. (We have hypothesized that an operation of this kind may be performed by the olfactory bulb-cortex system, and have tested some physiological and behavioral predictions from the model: e.g., McCollum et al., J.Cog.Neurosci., 3: 293-299, 1991; Granger et al., Psychol.Sci., 2: 116-118, 1991. This and related work is reviewed in Gluck and Granger, Annual Review of Neurosci., 16: 667-706, 1993. Anyone interested in reprints is welcome to send me a mail or email request.) - Rick Granger Center for the Neurobiology of Learning and Memory University of California Irvine, California 92717 granger at ics.uci.edu From tds at ai.mit.edu Fri Nov 19 14:59:30 1993 From: tds at ai.mit.edu (Terence D. Sanger) Date: Fri, 19 Nov 93 14:59:30 EST Subject: PCA bibliography Message-ID: <9311191959.AA05559@rice-chex> Dear Connectionists, Since I sent out the offer to supply a PCA bibliography, I have received so many requests that I now realize I should have included it with the original mailing! My mistake. A file called "pca.bib" is now available via anonymous ftp from the same site (instructions below). This file is in a not-very-clean BibTex format. I won't even pretend that it is a complete bibliography, since many people are currently working in this field and I don't yet have all the most recent reprints. If I've missed anyone or there are mistakes, please send me some email and I'll update the bibliography for everyone. 
Thanks, Terry Sanger Instructions for retrieving bibliography database: ftp ftp.ai.mit.edu login: anonymous password: yourname at yoursite cd pub/sanger-papers get pca.bib quit P.S.: Several people have commented to me that the way I phrased the hypothesis seems to imply the use of time-sequential deflation. In other words, it sounds as if the first eigenvector must be found and removed from the data, before the second is found. Most algorithms do not do this, and instead deflate the first learned component while it is being learned. Thus learning of all components continues simultaneously "in parallel". I meant to include this case, but I could not think of any succinct way to say it! Technically, it is not very different, since most convergence proofs assume sequential learning of the outputs. But in practice, algorithms which learn all outputs in parallel seem to perform faster than those that learn one output at a time. I have certainly found this to be true for GHA, and people have mentioned that it holds true for other algorithms as well. From smagt at fwi.uva.nl Fri Nov 19 16:40:03 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Fri, 19 Nov 1993 22:40:03 +0100 Subject: CFP: book on neural systems for robotics Message-ID: <199311192140.AA03441@zijde.fwi.uva.nl> NOTE THE DEADLINES! =================== PROGRESS IN NEURAL NETWORKS series editor O. M. Omidvar CALL FOR PAPERS Special Volume: NEURAL SYSTEMS FOR ROBOTICS Editor: P. Patrick van der Smagt This series will review state-of-the-art research in neural networks, natural and synthetic. Contributions from leading researchers and practitioners will be sought. This series will help shape and define academic and professional programs in this area. This series is intended for a wide audience: those professionally involved in neural network research, such as lecturers and primary investigators in neural computing, neural modeling, neural learning, neural memory, and neurocomputers. The upcoming volume, NEURAL SYSTEMS FOR ROBOTICS, will focus on research in natural and artificial neural systems directly related to robotics and robot control. Authors are invited to submit original manuscripts describing recent progress in neural network research directly applicable to robotics. Manuscripts may be survey or tutorial in nature. Suggested topics include, but are not limited to: * Neural control systems for visually guided robots * Manipulator trajectory control * Sensor feedback systems & sensor data fusion * Obstacle avoidance * Biologically inspired robot systems * Identification of kinematics and dynamics Implementation of algorithms in non-simulated environments (i.e., *real* robots) is encouraged. The papers will be refereed and uniformly typeset. Ablex and the Progress Series editors invite you to submit an abstract, extended summary or manuscript proposal, directly to the Special Volume Editor: P. Patrick van der Smagt, Dept. of Computer Systems, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, THE NETHERLANDS Tel: +31 20 525-7524 Fax: +31 20 525-7490 Email: smagt at fwi.uva.nl or to the Series Editor: Dr. Omid M. Omidvar, Computer Science Dept., University of the District of Columbia, Washington DC 20008 Tel: (202)282-7345 Fax: (202)282-3677 Email: OOMIDVAR at UDCVAX.BITNET The deadline for extended abstracts is December 31, 1993. Author notification is by February, 1994. Final submissions, which should not exceed 50 double-spaced pages in length, should be in by May 31, 1994.
The Publisher is Ablex Publishing Corporation, Norwood, NJ. From henders at linc.cis.upenn.edu Thu Nov 18 13:19:53 1993 From: henders at linc.cis.upenn.edu (Jamie Henderson) Date: Thu, 18 Nov 1993 13:19:53 -0500 Subject: dynamic binding Message-ID: <199311181819.NAA11159@linc.cis.upenn.edu> The vector representation of dynamic bindings suggested by S H Srinivasan can be thought of as a spatial form of the temporal synchrony representation of dynamic bindings. Under an idealized form of the temporal synchrony model, the network cycles through the set of entities one at a time. One period of this cycle corresponds to one computation step in the vector processing model. I can imagine other forms of "vector position synchrony" that would correspond to less idealized versions of temporal synchrony. One advantage of the temporal synchrony model over the vector model is that if a rule is learned for an entity in one "position" of the vector, the temporal synchrony model inherently generalizes this rule to all "positions". Because the temporal synchrony model cycles through entities, the piece of network which implements a rule is time-multiplexed across all entities. The vector model would need a special mechanism for enforcing this property of rules. - Jamie James Henderson Computer and Information Science University of Pennsylvania From sparre at connect.nbi.dk Thu Nov 18 16:13:05 1993 From: sparre at connect.nbi.dk (Jacob Sparre Andersen) Date: Thu, 18 Nov 93 16:13:05 MET Subject: Reference list Message-ID: We're writing a paper on learning strategic games with neural nets and other optimization methods. We've collected some references, but we hope that we can get some help improving our reference list. Regards, Jacob Sparre Andersen and Peer Sommerlund Here's our list of references (some not complete): Justin A. Boyan (1992): "Modular Neural Networks for Learning Context-Dependent Game Strategies", Department of Engineering and Computer Laboratory, University of Cambridge, 1992, Cambridge, England Bernd Bruegmann (1993): "Monte Carlo Go", unpublished? Herbert Enderton (1989?): "The Golem Go Program" B. Freisleben (1992): "Teaching a Neural Network to Play GO-MOKU," Artificial Neural Networks 2, proceedings of ICANN '92, editors: I. Aleksander and J. Taylor, pp. 1659-1662, Elsevier Science Publishers, 1992 W.T.Katz and S.P.Pham (1991): "Experience-Based Learning Experiments using Go-moku", Proc. of the 1991 IEEE International Conference on Systems, Man, and Cybernetics, 2: 1405-1410, October 1991. M. Kohle & F. Schonbauer (19??): "Experience gained with a neural network that learns to play bridge", Proc. of the 5th Austrian Artificial Intelligence meeting, pp. 224-229. Kai-Fu Lee and Sanjoy Mahajan (1988): "A Pattern Classification Approach to Evaluation Function Learning", Artificial Intelligence, 1988, vol 36, pp. 1-25. Barney Pell (1992?): "" Pell has done some work in machine learning for GO. Article available by ftp. A.L. Samuel (1959): "Some studies in machine learning using the game of checkers", IBM journal of Research and Development, vol 3, nr. 3, pp. 210-229, 1959. A.L. Samuel (1967): "Some studies in machine learning using the game of checkers 2 - recent progress", IBM journal of Research and Development, vol 11, nr. 6, pp. 601-616, 1967. David Stoutamire (19??): has written a thesis on machine learning applied to Go. G. Tesauro (1989): "Connectionist learning of expert preferences by comparison training", Advances in NIPS 1, 99-106 1989 G. Tesauro & T.J. 
Sejnowski (1989): "A Parallel Network that learns to play Backgammon", Artificial Intelligence, vol 39, pp. 357-390, 1989. G. Tesauro & T.J. Sejnowski (1990): "Neurogammon: A Neural Network Backgammon Program", IJCNN Proceedings, vol 3, pp. 33-39, 1990. In Machine Learning is this article, in which he comments on temporal difference learning (i.e. training a net from scratch by playing a copy of itself). The program he develops is called "TD-gammon": G. Tesauro (1991): "Practical Issues in Temporal Difference Learning", IBM Research Report RC17223(#76307 submitted) 9/30/91; see also the special issue on Reinforcement Learning of the Machine Learning Journal 1992, where it also appears. He Yo, Zhen Xianjun, Ye Yizheng, Li Zhongrong (1990): "Knowledge acquisition and reasoning based on neural networks - the research of a bridge bidding system", INNC '90, Paris, vol 1, pp. 416-423. The annual computer olympiad involves tournaments in a variety of games. These publications contain a wealth of interesting articles: Heuristic Programming in Artificial Intelligence - the first computer olympiad D.N.L. Levy & D.F. Beal eds. Ellis Horwood ltd, 1989. Heuristic Programming in Artificial Intelligence 2 - the second computer olympiad D.N.L. Levy & D.F. Beal eds. Ellis Horwood, 1991. Heuristic Programming in Artificial Intelligence 3 - the third computer olympiad H.J. van den Herik & L.V. Allis eds. Ellis Horwood, 1992. -------------------------------------------------------------------------- Jacob Sparre Andersen, Niels Bohr Institute, University of Copenhagen. E-mail: sparre at connect.nbi.dk - Fax: (+45) 35 32 04 60 -------------------------------------------------------------------------- Peer Sommerlund, Department of Computer science, University of Copenhagen. E-mail: peso at connect.nbi.dk -------------------------------------------------------------------------- We're writing a paper on learning strategic games with neural nets and other optimization methods. -------------------------------------------------------------------------- From anguita at dibe.unige.it Sat Nov 20 16:14:48 1993 From: anguita at dibe.unige.it (Davide Anguita) Date: Sat, 20 Nov 93 16:14:48 MEZ Subject: Matrix Back Prop (MBP) available Message-ID: Matrix Back Propagation v1.1 is finally available. This code implements (in C language) the algorithms described in: D.Anguita, G.Parodi, R.Zunino - An efficient implementation of BP on RISC- based workstations. Neurocomputing, in press. D.Anguita, G.Parodi, R.Zunino - Speed improvement of the BP on current generation workstations. WCNN '93, Portland. D.Anguita, G.Parodi, R.Zunino - YPROP: yet another accelerating technique for the bp. ICANN '93, Amsterdam. To retrieve the code: ftp risc6000.dibe.unige.it <130.251.89.154> anonymous cd pub bin get MBPv1.1.tar.Z quit uncompress MBPv1.1.tar.Z tar -xvf MBPv1.1.tar Then print the file mbpv11.ps (PostScript). Send comments (or flames) to the address below. Good luck. Davide. ======================================================================== Davide Anguita DIBE Phone: +39-10-3532192 University of Genova Fax: +39-10-3532175 Via all'Opera Pia 11a e-mail: anguita at dibe.unige.it 16145 Genova, ITALY From gary at cs.ucsd.edu Sat Nov 20 15:07:23 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Sat, 20 Nov 93 12:07:23 -0800 Subject: PCA bibliography Message-ID: <9311202007.AA29326@desi> RE: >P.S.: Several people have commented to me that the way I phrased the >hypothesis seems to imply the use of time-sequential deflation. 
In other >words, it sounds as if the first eigenvector must be found and removed from >the data, before the second is found. Most algorithms do not do this, and >instead deflate the first learned component while it is being learned. Thus >learning of all components continues simultaneously "in parallel". In fact, that's how it works also for the "Category 2" systems mentioned in your travelogue of methods. Straight LMS with hidden units will learn the principal components at different rates, with the highest rate on the first, then the second, etc., up to the number of hidden units. Of course, these systems only span the principal subspace rather than learning the pc's directly, but I find that an advantage. (See Baldi & Hornik 1989, and Cottrell & Munro, SPIE88 paper). Also, if you add more hidden layers to a nonlinear system, as in DeMers & Cottrell (93), you can learn better representations, in the sense that you can find the actual dimensionality of the system you are modeling, with respect to your error criterion. So for example, a helix in 3 space will be found to be one dimensional instead of 3, data from 3.5D Mackey Glass will be found to be either 3 or 4 dimensional depending on your reconstruction fidelity required. We don't know how to do a half a hidden unit yet, though! Gary Cottrell 619-534-6640 Reception: 619-534-6005 FAX: 619-534-7029 "Only connect" Computer Science and Engineering 0114 University of California San Diego -E.M. Forster La Jolla, Ca. 92093 gary at cs.ucsd.edu (INTERNET) gcottrell at ucsd.edu (BITNET, almost anything) ..!uunet!ucsd!gcottrell (UUCP) References: Baldi, P. and Hornik, K., (1989) Neural Networks and Principal Component Analysis: Learning from Examples without Local Minima, Neural Networks 2, 53--58. Cottrell, G.W. and Munro, P. (1988) Principal components analysis of images via back propagation. Invited paper in Proceedings of the Society of Photo-Optical Instrumentation Engineers, Cambridge, MA. Available from erica at cs.ucsd.edu DeMers, D. & Cottrell, G.W. (1993) Nonlinear dimensionality reduction. In Hanson, Cowan & Giles (Eds.), Advances in neural information processing systems 5, pp. 580-587, San Mateo, CA: Morgan Kaufmann. Available on neuroprose as demers.nips92-nldr.ps.Z. From phatak at zonker.ecs.umass.edu Sun Nov 21 16:27:40 1993 From: phatak at zonker.ecs.umass.edu (Dhananjay S Phatak) Date: Sun, 21 Nov 93 16:27:40 -0500 Subject: Technical report on fault tolerance of feedforward ANNs available. Message-ID: <9311212127.AA12281@zonker.ecs.umass.edu> This is the abstract of Technical Report No. TR-92-CSE-26, ECE Dept., Univ. of Massachusetts, Amherst, MA 01003. The report is titled "Complete and Partial Fault Tolerance of Feedforward Neural Nets". It is available in the neuroprose archive (the compressed PostScript file name is phatak.nn-fault-tolerance.ps.Z). An abbreviated version of this report is in print in the IEEE Transactions on Neural Nets (to appear in 1994). I would be glad to get feedback about the issues discussed and the results presented. Thanks! ---------------------- ABSTRACT --------------------------------------- A method is proposed to estimate the fault tolerance of feedforward Artificial Neural Nets (ANNs) and synthesize robust nets. The fault model abstracts a variety of failure modes of hardware implementations to permanent stuck--at type faults of single components. A procedure is developed to build fault tolerant ANNs by replicating the hidden units.
It exploits the intrinsic weighted summation operation performed by the processing units in order to overcome faults. It is simple, robust and is applicable to any feedforward net. Based on this procedure, metrics are devised to quantify the fault tolerance as a function of redundancy. Furthermore, a lower bound on the redundancy required to tolerate all possible single faults is analytically derived. This bound demonstrates that less than Triple Modular Redundancy (TMR) cannot provide complete fault tolerance for all possible single faults. This general result establishes a NECESSARY condition that holds for ALL feedforward nets, irrespective of the network topology or the task it is trained on. Analytical as well as extensive simulation results indicate that the actual redundancy needed to SYNTHESIZE a completely fault tolerant net is specific to the problem at hand and is usually much higher than that dictated by the general lower bound. The data implies that the conventional TMR scheme of triplication and majority vote is the best way to achieve complete fault tolerance in most ANNs. Although the redundancy needed for complete fault tolerance is substantial, the results do show that ANNs exhibit good partial fault tolerance to begin with (i.e., without any extra redundancy) and degrade gracefully. The first replication is seen to yield maximum enhancement in partial fault tolerance compared to later, successive replications. For large nets, exhaustive testing of all possible single faults is prohibitive. Hence, the strategy of randomly testing a small fraction of the total number links is adopted. It yields partial fault tolerance estimates that are very close to those obtained by exhaustive testing. Moreover, when the fraction of links tested is held fixed, the accuracy of the estimate generated by random testing is seen to improve as the net size grows. ------------------------------------------------------------------------------- From murray at DBresearch-berlin.de Mon Nov 22 09:06:00 1993 From: murray at DBresearch-berlin.de (R. Murray-Smith) Date: Mon, 22 Nov 93 09:06 MET Subject: IEE Colloq. Neural networks: CALL FOR PAPERS Message-ID: CALL FOR PAPERS --------------- IEE Colloquium on Advances in Neural Networks for Control and Systems 26-27 May 1994 To be held at a location in Central Europe A colloquium on `Advances in neural networks for control and systems' is being organised by the control committees of the Institution of Electrical Engineers with additional support from Daimler-Benz Systems Technology Research, Berlin. This two-day meeting will be held on 26-27 May 1994 at a central european location. The programme will comprise a mix of invited papers and papers received in response to this call. Invited speakers include leading international academic workers in the field and major industrial companies who will present recent applications of neural methods, and outline the latest theoretical advances. Neural networks have been seen for some years now as providing considerable promise for application in nonlinear control and systems problems. This promise stems from the theoretical ability of networks of various types to approximate arbitrarily well continuous nonlinear mappings. The aim of this colloquium is to evaluate the state-of-the-art in this very popular field from the engineering perspective. The colloquium will cover both theoretical and applied aspects. 
A major goal of the workshop will be to examine ways of improving the engineering involved in neural network modelling and control, so that the theoretical power of learning systems can be harnessed for practical applications. This includes questions such as: - Which network architecture for which application? - Can constructive learning algorithms capture the underlying dynamics while avoiding overfitting? - How can we introduce a priori knowledge or models into neural networks? - Can experiment design and active learning be used to automatically create 'optimal' training sets? - How can we validate a neural network model? In line with this goal of better engineering methods, the colloquium will also place emphasis on real industrial applications of the technology; applied papers are most welcome. Prospective authors are invited to submit three copies of a 500-word abstract by Friday 25 February 1994 to Dr K J Hunt, Daimler-Benz AG, Alt-Moabit 91 B, D-10559 Berlin, Germany (tel: + 49 30 399 82 275, FAX: + 49 30 399 82 107, E-mail: hunt at DBresearch-berlin.de). From announce at PARK.BU.EDU Mon Nov 22 09:50:43 1993 From: announce at PARK.BU.EDU (announce@PARK.BU.EDU) Date: Mon, 22 Nov 93 09:50:43 -0500 Subject: Faculty position in Cognitive and Neural Systems at Boston University Message-ID: <9311221450.AA20171@retina.bu.edu> NEW SENIOR FACULTY IN COGNITIVE AND NEURAL SYSTEMS AT BOSTON UNIVERSITY Boston University seeks an associate or full professor starting in Fall 1994 for its graduate Department of Cognitive and Neural Systems. This Department offers an integrated curriculum of psychological, neurobiological, and computational concepts, models, and methods in the fields of neural networks, computational neuroscience, and connectionist cognitive science in which Boston University is a leader. Candidates should have an international research reputation, preferably including extensive analytic or computational research experience in modeling a broad range of nonlinear neural networks, especially in one or more of the areas: vision and image processing, visual cognition, spatial orientation, adaptive pattern recognition, and cognitive information processing. Send a complete curriculum vitae and three letters of recommendation to Search Committee, Department of Cognitive and Neural Systems, Room 240, 111 Cummington Street, Boston University, Boston, MA 02215. Boston University is an Equal Opportunity/Affirmative Action employer. From announce at PARK.BU.EDU Mon Nov 22 09:58:31 1993 From: announce at PARK.BU.EDU (announce@PARK.BU.EDU) Date: Mon, 22 Nov 93 09:58:31 -0500 Subject: Graduate study in Cognitive and Neural Systems at Boston University Message-ID: <9311221458.AA20329@retina.bu.edu> (please post) *********************************************** * * * DEPARTMENT OF * * COGNITIVE AND NEURAL SYSTEMS (CNS) * * AT BOSTON UNIVERSITY * * * *********************************************** Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive advanced training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1994 admission and financial aid are now being accepted for both the MA and PhD degree programs.
To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: Department of Cognitive & Neural Systems Boston University 111 Cummington Street, Room 240 Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via email your full name and mailing address to: cns at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self- organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of twelve interdisciplinary graduate courses each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Nine additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. 
Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. 1993-94 CAS MEMBERS and CNS FACULTY: Jacob Beck Daniel H. Bullock Gail A. Carpenter Chan-Sup Chung Michael A. Cohen H. Steven Colburn Paolo Gaudiano Stephen Grossberg Frank H. Guenther Thomas G. Kincaid Nancy Kopell Ennio Mingolla Heiko Neumann Alan Peters Adam Reeves Eric L. Schwartz Allen Waxman Jeremy Wolfe From oja at dendrite.hut.fi Tue Nov 23 09:50:35 1993 From: oja at dendrite.hut.fi (Erkki Oja) Date: Tue, 23 Nov 93 16:50:35 +0200 Subject: No subject Message-ID: <9311231450.AA17347@dendrite.hut.fi.hut.fi> RE: PCA in neural networks Dear Terry + connectionists: I will continue shortly the discussion about PCA, Hebbian rule and deflation. I would also like to point out the "Weighted Subspace algorithm" which Luis Almeida already mentioned. A more comprehensive reference is given at the end of this note. While in the GHA and SGA methods, the j-th weight vector depends on all the others up to index j, the Weighted Subspace algorithm is homogeneous in the sense that the j-th weight vector depends on all the others. I would say that the latter type is not deflation. There is an extension, mentioned in ref. (2) below, which is totally symmetrical, but there is a nonlinearity in the "feedback" term of the algorithm. This is sufficient to drive the vectors to the true eigenvectors. This was pointed out to me first by L. Xu. The references: 1. E. Oja, H. Ogawa and J. Wangviwattana: "Principal component analysis by homogeneous neural networks, Part I: The weighted subspace criterion". IEICE Trans. Inf. and Systems, vol. E75-D, no. 3, May 1992, pp. 366 - 375. 2. E. Oja, H. Ogawa and J. Wangviwattana: "Principal component analysis by homogeneous neural networks, Part II: Analysis and extensions of the learning algorithms. Same as above, pp. 376 - 382. 
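To make the distinction between the homogeneous rule and deflation concrete, here is a small Python/NumPy sketch written for this digest; it is not taken from the papers cited above, and the exact update, the choice of the constants theta_j, and the convergence conditions are given in those references. Every unit sees the same feedback sum, and only the per-unit weighting theta_j breaks the symmetry.

import numpy as np

def weighted_subspace_step(W, x, theta, lr=0.01):
    # W: (m, d) array; row j is the weight vector of unit j.
    # theta: (m,) distinct positive constants (choice/ordering per the papers).
    # x: (d,) zero-mean input sample.
    y = W @ x
    feedback = W.T @ y            # the same sum over *all* units (homogeneous)
    for j in range(W.shape[0]):
        W[j] += lr * y[j] * (x - theta[j] * feedback)
    return W

With all theta_j equal, this reduces to the plain (unweighted) subspace rule, which only spans the principal subspace; making the theta_j distinct is what pulls the weight vectors toward individual eigenvector directions rather than an arbitrary rotation of them.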
Regards, Erkki Oja (Erkki.Oja at hut.fi) From biehl at physik.uni-wuerzburg.de Tue Nov 23 16:00:13 1993 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Tue, 23 Nov 93 16:00:13 MEZ Subject: preprint available Message-ID: <9311231500.AA14860@wptx08.physik.uni-wuerzburg.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/biehlmietzner.pancakes.ps.Z The following paper has been placed in the Neuroprose archive (see above for ftp-host) as a compressed postscript file named biehlmietzner.pancakes.ps.Z (15 pages of output) email addresses of authors : biehl at physik.uni-wuerzburg.de mietzner at physik.uni-wuerzburg.de **** Hardcopies cannot be provided **** ------------------------------------------------------------------ "Statistical Mechanics of Unsupervised Structure Recognition" Michael Biehl and Andreas Mietzner Physikalisches Institut der Universitaet Am Hubland D-97074 Wuerzburg Germany Abstract: A model of unsupervised learning is studied, where the environment provides N-dimensional input examples that are drawn from two overlapping Gaussian clouds. We consider the optimization of two different objective functions: the search for the direction of the largest variance in the data and the largest separating gap (stability) respectively. By means of a statistical mechanics analysis, we investigate how well the underlying structure is inferred from a set of examples. The performance of the learning algorithms depends crucially on the actual shape of the input distribution. A generic result is the existence of a critical number of examples needed for successful learning. The learning strategies are compared with methods different in spirit, such as the estimation of parameters in a model distribution and an information theoretic approach. ---------------------------------------------------------------------- From radford at cs.toronto.edu Tue Nov 23 12:05:43 1993 From: radford at cs.toronto.edu (Radford Neal) Date: Tue, 23 Nov 1993 12:05:43 -0500 Subject: Review of Markov chain Monte Carlo methods Message-ID: <93Nov23.120551edt.638@neuron.ai.toronto.edu> The following review paper is now available via ftp, as described below. PROBABILISTIC INFERENCE USING MARKOV CHAIN MONTE CARLO METHODS Radford M. Neal Department of Computer Science University of Toronto 25 September 1993 Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The ``Metropolis algorithm'' has been used to solve difficult problems in statistical physics for over forty years, and, in the last few years, the related method of ``Gibbs sampling'' has been applied to problems of statistical inference. Concurrently, an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the ``hybrid Monte Carlo'' method. 
In computer science, Markov chain sampling is the basis of the heuristic optimization technique of ``simulated annealing'', and has recently been used in randomized algorithms for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature that have not yet seen wide application in artificial intelligence, but which appear relevant. As illustrative examples, I use the problems of probabilistic inference in expert systems, discovery of latent classes from data, and Bayesian learning for neural networks. TABLE OF CONTENTS 1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 1 2 Probabilistic Inference for Artificial Intelligence. . . . . . 4 2.1 Probabilistic inference with a fully-specified model . . . 5 2.2 Statistical inference for model parameters . . . . . . . . 13 2.3 Bayesian model comparison. . . . . . . . . . . . . . . . . 23 2.4 Statistical physics. . . . . . . . . . . . . . . . . . . . 25 3 Background on the Problem and its Solution . . . . . . . . . . 30 3.1 Definition of the problem. . . . . . . . . . . . . . . . . 30 3.2 Approaches to solving the problem. . . . . . . . . . . . . 32 3.3 Theory of Markov chains . . . . . . . . . . . . . . . . . 36 4 The Metropolis and Gibbs Sampling Algorithms . . . . . . . . . 47 4.1 Gibbs sampling . . . . . . . . . . . . . . . . . . . . . . 47 4.2 The Metropolis algorithm . . . . . . . . . . . . . . . . . 54 4.3 Variations on the Metropolis algorithm . . . . . . . . . . 59 4.4 Analysis of the Metropolis and Gibbs sampling algorithms . 64 5 The Dynamical and Hybrid Monte Carlo Methods . . . . . . . . . 70 5.1 The stochastic dynamics method . . . . . . . . . . . . . . 70 5.2 The hybrid Monte Carlo algorithm . . . . . . . . . . . . . 77 5.3 Other dynamical methods. . . . . . . . . . . . . . . . . . 81 5.4 Analysis of the hybrid Monte Carlo algorithm . . . . . . . 83 6 Extensions and Refinements . . . . . . . . . . . . . . . . . . 87 6.1 Simulated annealing. . . . . . . . . . . . . . . . . . . . 87 6.2 Free energy estimation . . . . . . . . . . . . . . . . . . 94 6.3 Error assessment and reduction . . . . . . . . . . . . . . 102 6.4 Parallel implementation. . . . . . . . . . . . . . . . . . 114 7 Directions for Research. . . . . . . . . . . . . . . . . . . . 116 7.1 Improvements in the algorithms . . . . . . . . . . . . . . 116 7.2 Scope for applications . . . . . . . . . . . . . . . . . . 118 8 Annotated Bibliography . . . . . . . . . . . . . . . . . . . . 121 Total length: 144 pages The paper may be obtained in PostScript form as follows: unix> ftp ftp.cs.toronto.edu (log in as user 'anonymous', your e-mail address as password) ftp> cd pub/radford ftp> binary ftp> get review.ps.Z ftp> quit unix> uncompress review.ps.Z unix> lpr review.ps (or however you print PostScript) The files review[0123].ps.Z in the same directory contain the same paper in smaller chunks; these may prove useful if your printer cannot digest the paper all at once. 
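For readers new to the area, here is a minimal sketch of the Metropolis algorithm mentioned in the abstract; it is not taken from the paper, and uses a symmetric random-walk proposal on a one-dimensional Gaussian target (Python/NumPy, illustrative step size):

import numpy as np

def log_target(x):
    return -0.5 * x * x              # log of an unnormalized N(0, 1) density

rng = np.random.default_rng(0)
x, samples = 0.0, []
for _ in range(10000):
    proposal = x + rng.normal(scale=0.5)           # symmetric random-walk proposal
    if np.log(rng.uniform()) < log_target(proposal) - log_target(x):
        x = proposal                               # accept with probability min(1, ratio)
    samples.append(x)                              # otherwise keep the current state

print(np.mean(samples), np.var(samples))           # roughly 0 and 1 after burn-in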
Radford Neal radford at cs.toronto.edu From malsburg at neuroinformatik.ruhr-uni-bochum.de Tue Nov 23 12:43:18 1993 From: malsburg at neuroinformatik.ruhr-uni-bochum.de (malsburg@neuroinformatik.ruhr-uni-bochum.de) Date: Tue, 23 Nov 93 18:43:18 +0100 Subject: Graham Smith's suggestion Message-ID: <9311231743.AA03176@circe.neuroinformatik.ruhr-uni-bochum.de> If I understand the text correctly, both on the input level and the output level he has two cells for each of the four feature types, one for each object. I presume that after learning, there are no connections in the system that confuse cells belonging to different objects; there will be, for instance, no hidden unit to fire in response to the combination ``red-a - square-b'' (if -a and -b stand for the two object identities), and correspondingly the output could not fire erroneously to this false conjunction. I have actually discussed this ``solution'' to the binding problem in my article ``Am I thinking assemblies?'' (Proceedings of the Trieste Meeting on Brain Theory, October 1984. G.Palm and A.Aertsen, eds. Springer: Berlin Heidelberg (1986), {\it pp} 161--176). I then talked about the ``box solution'' (keeping things not to be confused with each other in separate boxes with no confusing connections between them). The main problem with that ``solution'' is creating those boxes in the first place (Graham Smith solved this by learning). Sorting features into boxes appropriately is another one (this problem is not solved in his scheme at all, features being sorted already into the -a and -b boxes in the input patterns). A third problem is that rigid boxes keep the system from generalizing appropriately. The beauty of temporal binding is that a system can deal with a feature combination even if it occurs for the first time. For instance, if a ``red'' unit and a ``square'' unit have been learned to send activation to some output cell, and a red square occurs for the first time in life, the two units correlate their signals in time and summate on the output cell, whereas with a red triangle and a blue square, the signals will not be synchronized and cannot summate on the output. From rjw at ccs.neu.edu Tue Nov 23 13:48:58 1993 From: rjw at ccs.neu.edu (Ronald J Williams) Date: Tue, 23 Nov 1993 13:48:58 -0500 Subject: paper available Message-ID: <9311231848.AA10376@walden.ccs.neu.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/williams.perf-bound.ps.Z **PLEASE DO NOT FORWARD TO OTHER GROUPS** The following paper is now available in the neuroprose directory. It is 17 pages long. For those unable to obtain the file by ftp, hardcopies can be obtained by contacting: Diane Burke, College of Computer Science, 161 CN, Northeastern University, Boston, MA 02115, USA. Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions Northeastern University College of Computer Science Technical Report NU-CCS-93-13 Ronald J. Williams College of Computer Science Northeastern University rjw at ccs.neu.edu Abstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. 
Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning, and in this case it is also shown that this bound is tight in general. One significant application of this result is to problems where a function approximator is used to learn a value function, with training of the approximator based on trying to minimize the Bellman residual across states or state-action pairs. When control is based on the use of the resulting value function, this result provides a link between how well the objectives of function approximator training are met and the quality of the resulting control. To obtain a copy: ftp cheops.cis.ohio-state.edu login: anonymous password: cd pub/neuroprose binary get williams.perf-bound.ps.Z quit Then at your system: uncompress williams.perf-bound.ps.Z lpr -P williams.perf-bound.ps --------------------------------------------------------------------------- Ronald J. Williams | email: rjw at ccs.neu.edu College of Computer Science, 161 CN | Phone: (617) 373-8683 Northeastern University | Fax: (617) 373-5121 Boston, MA 02115, USA | --------------------------------------------------------------------------- From esann at dice.ucl.ac.be Tue Nov 23 15:47:34 1993 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Tue, 23 Nov 93 21:47:34 +0100 Subject: ESANN'94: European Symposium on ANNs Message-ID: <9311232047.AA02167@ns1.dice.ucl.ac.be> ____________________________________________________________________ ____________________________________________________________________ ESANN'94 European Symposium on Artificial Neural Networks Brussels - April 20-21-22, 1994 FINAL CALL FOR PAPERS __________________________________________________ ! Authors, prospective authors or participants ! ! PLEASE URGENTLY READ THIS E-MAIL !!! ! __________________________________________________ ____________________________________________________________________ ____________________________________________________________________ Due to post delivery problems in Belgium, the organizers of ESANN'94 have decided to accept submitted papers till beginning of December. But to avoid lost papers, authors or prospective authors are strongly invited to make them known by sending an E-mail to the address 'esann at dice.ucl.ac.be', before November 30th, mentioning their name(s), the title and authors of the paper, and an E-mail address or fax number where they can be reached if the paper is not received. ESANN'94 is the second symposium on fundamental aspects of artificial neural networks. 
Papers are still welcome in the following areas (this list is not restrictive): theory, models and architectures, mathematics, learning algorithms, biologically plausible artificial networks, neurobiological systems, adaptive behavior, signal processing, statistics, vector quantization, self-organization, evolutive learning. Accepted papers will cover new results in one or several of these aspects or will be of tutorial nature. Papers emphasizing the relations between artificial neural networks and classical methods of information processing, signal processing or statistics are encouraged. People interested in ESANN'94 who have not received the call for papers are invited to contact the conference secretariat for further information. _____________________________ Michel Verleysen D facto conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 E-mail: esann at dice.ucl.ac.be _____________________________ From smagt at fwi.uva.nl Wed Nov 24 15:45:29 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Wed, 24 Nov 1993 21:45:29 +0100 Subject: TR: neural robot hand-eye coordination Message-ID: <199311242045.AA10159@carol.fwi.uva.nl> The following technical report has been put in the connectionists archive. Robot hand-eye coordination using neural networks P. Patrick van der Smagt, Frans C. A. Groen, and Ben J. A. Kr\"ose Department of Computer Systems University of Amsterdam TR CS--93--10 A self-learning, adaptive control system for a robot arm using a vision system in a feedback loop is described both in simulation and in practice. The task of the control system is to position the end-effector as accurately as possible directly above a target object, so that it can be grasped. The camera of the vision system is positioned in the end-effector and visual information is used to control the robot. Knowledge of the size of the object is used for obtaining 3D information from a single camera. The neural controller is shown to exhibit `real-time' learning behaviour, and adaptivity to unknown changes in the robot configuration. ------------------------------------------------------------------- A postscript version of the paper can be obtained as follows: unix> ftp archive.cis.ohio-state.edu ftp> login name: anonymous ftp> password: xxx at yyy.zzz ftp> cd pub/neuroprose ftp> bin ftp> get smagt.hand-eye.ps.Z ftp> bye The technical report is 23 pages long, about 2M. Many Unix-systems may require printing using lpr -s smagt.hand-eye.ps to prevent the print spooler from overflowing. The paper contains two bitmap photographs (on pages 4 and 9), which may confuse some printers. If you have trouble printing the postscript file, remove those pictures as follows: unix> sed -e '/photographstart/,/photographend/d' < smagt.hand-eye.ps > mu.ps (or remove the blocks which are enclosed between the lines containing "photographstart" and "photographend" in the ps file by hand) which will mutilate figures 1 and 6. Then print mu.ps. Patrick From kolen-j at cis.ohio-state.edu Wed Nov 24 16:32:35 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Wed, 24 Nov 93 16:32:35 -0500 Subject: Reprint Announcement Message-ID: <9311242132.AA03867@pons.cis.ohio-state.edu> While this paper does not directly talk about neural networks, it does have plenty of implications for cognitive science and neural network research. These implications are addressed in my upcoming dissertation.
John ----------------------------------------------------------------------- This is an announcement of a newly available paper in neuroprose: The Observers' Paradox: Apparent Computational Complexity in Physical Systems John F. Kolen Jordan B. Pollack Laboratory for Artificial Intelligence Research The Ohio State University Columbus, OH 43210 Many researchers in AI and cognitive science believe that the complexity of a behavioral description reflects the underlying information processing complexity of the mechanism producing the behavior. This paper explores the foundations of this complexity argument. We first distinguish two types of complexity judgments that can be applied to these descriptions and then argue that neither type can be an intrinsic property of the underlying physical system. In short, we demonstrate how changes in the method of observation can radically alter both the number of apparent states and the apparent generative class of a system's behavioral description. From these examples we conclude that the act of observation can suggest frivolous computational explanations of physical phenomena, up to and including cognition. This paper will appear in The Journal of Experimental and Theoretical Artificial Intelligence. ************************ How to obtain a copy ************************ Via Anonymous FTP: unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: (type your email address) ftp> cd pub/neuroprose ftp> binary ftp> get kolen.paradox.ps.Z ftp> quit unix> uncompress kolen.paradox.ps.Z unix> lpr kolen.paradox.ps (or what you normally do to print PostScript) From hayit at micro.caltech.edu Wed Nov 24 19:38:45 1993 From: hayit at micro.caltech.edu (Hayit Greenspan) Date: Wed, 24 Nov 93 16:38:45 PST Subject: NIPS_workshop Message-ID: <9311250038.AA01847@electra.caltech.edu> NIPS*93 - Post Meeting workshop: -------------------------------- Learning in Computer Vision and Image Understanding - An advantage over classical techniques? To those interested: ********************** The workshop schedule and list of abstracts are now available via anonymous ftp: FTP-host: helper.systems.caltech.edu filenames: /pub/nips/NIPS_prog /pub/nips/NIPSabs.ps.Z Hayit Greenspan ****************************************************** From rsun at athos.cs.ua.edu Wed Nov 24 21:10:21 1993 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Wed, 24 Nov 1993 20:10:21 -0600 Subject: Graham Smith's suggestion Message-ID: <9311250210.AA11097@athos.cs.ua.edu> You can use a simpler means to achieve the same result, computationally. You can use a distinct activation value to represent bindings. This distinct value can be equated to phase. So if two nodes receive the same value, then they are bound to the same thing. This scheme does not require temporal processing, but it can have all the nice properties you mentioned.
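A minimal sketch of this signature idea, not Ron Sun's own code; the feature names and signature values are purely illustrative:

import itertools

signatures = itertools.count(1)            # one fresh value per object: 1, 2, 3, ...
obj_a, obj_b = next(signatures), next(signatures)

# each feature node carries the signature of the object it currently describes
activation = {"red": obj_a, "square": obj_a, "blue": obj_b, "triangle": obj_b}

def bound_together(f1, f2):
    # two features are bound to the same object iff they carry the same value
    return activation[f1] == activation[f2]

print(bound_together("red", "square"))     # True:  a red square is present
print(bound_together("red", "triangle"))   # False: no red triangle in this scene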
--Ron From cells at tce.ing.uniroma1.it Thu Nov 25 14:54:50 1993 From: cells at tce.ing.uniroma1.it (cells@tce.ing.uniroma1.it) Date: Thu, 25 Nov 1993 20:54:50 +0100 Subject: Cellular Neural Networks mailing list Message-ID: <9311251954.AA13248@tce.ing.uniroma1.it> ************************************************************ * ANNOUNCING A NEW MAILING LIST ON * * CELLULAR NEURAL NETWORKS: * * cells at tce.ing.uniroma1.it * ************************************************************ Cellular Neural Networks (CNN) are continuous-time dynamical systems, consisting of a grid of processing elements (neurons, or cells) connected only to neighbors within a given (typically small) distance. It is therefore a class of recurrent neural networks, whose particular topology is most suited for integrated circuit realization. In fact, while in typical realizations of other neural systems most of silicon area is taken by connections, in this case connection area is neglectible, so that processor density can be much larger. Since their first definition by L.O. Chua and L. Yang in 1988, many applications were proposed, mainly in the field of image processing. In most cases a space-invariant weight pattern is used (i.e. weights are defined by a template, which repeats identically for all cells), and neurons are characterized by simple first order dynamics. However, many different kinds of dynamics (e.g. oscillatory and chaotic) have also been used for special purposes. A recent extension of the model is obtained by integrating the analog CNN with some simple logic components, leading to the realization of a universal programmable "analogic" machine. Essential bibliography: 1) L.O. Chua & L. Yang, "Cellular Neural Networks: Theory", IEEE Trans. on Circ. and Systems, CAS-35(10), p. 1257, 1988 2) -----, "Cellular Neural Networks: Applications", ibid., p. 1273 3) Proc. of IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-90), Budapest, Hungary, Dec. 16-19, 1990 4) Proc. of IEEE Second International Workshop on Cellular Neural Networks and their Applications (CNNA-92), Munich, Germany, Oct. 14-16, 1992 5) International Journal of Circuit Theory and Applications, vol.20, no. 5 (1992), special issue on Cellular Neural Networks 6) IEEE Transactions on Circuits and Systems, parts I & II, vol.40, no. 3 (1993), special issue on Cellular Neural Networks 7) T. Roska, L.O. Chua, "The CNN Universal Machine: an Analogic Array Computer", IEEE Trans. on Circ. and Systems, II, 40(3), 1993, p. 163 8) V. Cimagalli, M. Balsi, "Cellular Neural Networks: a Review", Proc. of Sixth Italian Workshop on Parallel Architectures and Neural Networks, Vietri sul Mare, Italy, May 12-14, 1993. (E. Caianiello, ed.), World Scientific, Singapore. Our research group at "La Sapienza" University of Rome, Italy, has been involved in CNN research for several years, and will host next IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94), which will be held in Rome, December 18-21, 1994. We are now announcing the start of a new mailing list dedicated to Cellular Neural Networks. It will give the opportunity of discussing current research, exchanging news, submitting questions. Due to memory shortage, we are currently not able to offer an archive service, and hope that some other group will be able to volunteer for the establishment of this means of fast distribution of recent reports and papers. The list will not be moderated, at least as long as the necessity does not arise. 
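A minimal sketch of the cell dynamics described above, not taken from the announcement: first-order cells with the standard piecewise-linear output, a space-invariant 3x3 template pair, and simple Euler integration (Python/NumPy); the template values and constants are illustrative, not a tuned design:

import numpy as np
from scipy.signal import convolve2d

def f(x):                                  # standard piecewise-linear output function
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

A = np.array([[0.0, 0.0, 0.0],             # feedback template (acts on the outputs y)
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[-1.0, -1.0, -1.0],          # control template (acts on the fixed input u)
              [-1.0,  8.0, -1.0],
              [-1.0, -1.0, -1.0]])
I, dt = -0.5, 0.1

rng = np.random.default_rng(0)
u = np.where(rng.random((32, 32)) > 0.5, 1.0, -1.0)   # toy input image in {-1, +1}
x = np.zeros_like(u)                                  # cell states

for _ in range(200):                       # Euler steps of  dx/dt = -x + A*y + B*u + I
    y = f(x)
    x += dt * (-x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + I)

Because the same small template repeats identically across the grid, only the template entries and the bias I need to be stored or implemented in silicon, which is the point made above about connection area.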
THOSE INTERESTED IN BEING INCLUDED IN THE LIST SHOULD SEND A MESSAGE to Marco Balsi (who will be supervising the functioning of the list) at address mb at tce.ing.uniroma1.it (151.100.8.30). This is the address to which any communication not intended to go to all subscribers of the list should be sent. We would also appreciate it if you let us know the address of colleagues who might be interested in the list (rather than just forward the announcement directly), so that we can send them this announcement and keep track of those who were contacted, avoiding duplications. TO SEND MESSAGES TO ALL SUBSCRIBERS PLEASE USE THE FOLLOWING ADDRESS: cells at tce.ing.uniroma1.it (151.100.8.30) We hope that this service will encourage communication and foster collaboration among researchers working on CNNs and related topics. We are looking forward to your comments and subscriptions to the list! Yours, Prof. V. Cimagalli Dipartimento di Ingegneria Elettronica Universita' "La Sapienza" di Roma via Eudossiana, 18, 00184 Roma Italy fax: +39-6-4742647 From David.Green at anu.edu.au Sun Nov 28 19:30:44 1993 From: David.Green at anu.edu.au (David G Green) Date: Mon, 29 Nov 1993 11:30:44 +1100 Subject: COMPLEX'94 Message-ID: <199311290030.AA06144@anusf.anu.edu.au> COMPLEX'94 Second Australian National Conference Sponsored by the University of Central Queensland Australian National University September 26-28th, 1994 University of Central Queensland Rockhampton Queensland Australia FIRST CIRCULAR AND CALL FOR PAPERS The inaugural Australian National Conference on Complex Systems was held at the Australian National University in 1992. Recognising the need to maintain and stimulate research interest in these topics, the University of Central Queensland is hosting the Second Australian National Conference on Complex Systems in Rockhampton. Rockhampton is situated on the Tropic of Capricorn in Queensland on the east coast of Australia and is 35kms from the Central Queensland Coast. It is within easy access of tourist resorts including the resort island, Great Keppel, the Central Queensland Hinterland, and the Great Barrier Reef. This first circular is intended to provide basic information about the conference and to canvass expressions of interest, both in attending and in presenting papers or posters. A second circular, to be distributed in late January/early February, will provide details of keynote speakers, the program, registration procedures, etc. Please pass on this notice to interested colleagues. For further information contact the organisers (see below). AIMS: Complex systems are systems whose evolution is dominated by non-linearity or interactions between their components. Such systems may be very simple but reproduce very complex phenomena. Terms such as artificial life, biocomplexity, chaos, criticality, fractals, learning systems, neural networks, non-linear dynamics, parallel computation, percolation, self-organization have become commonplace. From sampat at CVAX.IPFW.INDIANA.EDU Mon Nov 29 12:47:51 1993 From: sampat at CVAX.IPFW.INDIANA.EDU (Pulin) Date: Mon, 29 Nov 1993 12:47:51 EST Subject: fifth neural network conference proceedings... Message-ID: <0097643E.6B752BA0.1678@CVAX.IPFW.INDIANA.EDU> The Proceedings of the Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University at Fort Wayne, held April 9-11, 1992 are now available. They can be ordered ($9 + $1 U.S.
mail cost; make checks payable to IPFW) from: Secretary, Department of Physics FAX: (219)481-6880 Voice: (219)481-6306 OR 481-6157 Indiana University Purdue University Fort Wayne email: proceedings at ipfwcvax.bitnet Fort Wayne, IN 46805-1499 The following papers are included in the Proceedings of the Fifth Conference: Tutorials Phil Best, Miami University, Processing of Spatial Information in the Brain William Frederick, Indiana-Purdue University, Introduction to Fuzzy Logic Helmut Heller and K. Schulten, University of Illinois, Parallel Distributed Computing for Molecular Dynamics: Simulation of Large Hetrogenous Systems on a Systolic Ring of Transputer Krzysztof J. Cios, University Of Toledo, An Algorithm Which Self-Generates Neural Network Architecture - Summary of Tutorial Biological and Cooperative Phenomena Optimization Ljubomir T. Citkusev & Ljubomir J. Buturovic, Boston University, Non- Derivative Network for Early Vision M.B. Khatri & P.G. Madhavan, Indiana-Purdue University, Indianapolis, ANN Simulation of the Place Cell Phenomenon Using Cue Size Ratio J. Wu, M. Penna, P.G. Madhavan, & L. Zheng, Purdue University at Indianapolis, Cognitive Map Building and Navigation J. Wu, C. Zhu, Michael A. Penna & S. Ochs, Purdue University at Indianapolis, Using the NADEL to Solve the Correspondence Problem Arun Jagota, SUNY-Buffalo, On the Computational Complexity of Analyzing a Hopfield-Clique Network Network Analysis M.R. Banan & K.D. Hjelmstad, University of Illinois at Urbana-Champaign, A Supervised Training Environment Based on Local Adaptation, Fuzzyness, and Simulation Pranab K. Das II & W.C. Schieve, University of Texas at Austin, Memory in Small Hopfield Neural Networks: Fixed Points, Limit Cycles and Chaos Arun Maskara & Andrew Noetzel, Polytechnic University, Forced Learning in Simple Recurrent Neural Networks Samir I. Sayegh, Indiana-Purdue University, Neural Networks Sequential vs Cumulative Update: An * Expansion D.A. Brown, P.L.N. Murthy, & L. Berke, The College of Wooster, Self- Adaptation in Backpropagation Networks Through Variable Decomposition and Output Set Decomposition Sandip Sen, University of Michigan, Noise Sensitivity in a Simple Classifier System Xin Wang, University of Southern California, Complex Dynamics of Discrete- Time Neural Networks Zhenni Wang and Christine di Massimo, University of Newcastle, A Procedure for Determining the Canonical Structure of Multilayer Feedforward Neural Networks Srikanth Radhakrishnan and C, Koutsougeras, Tulane University, Pattern Classification Using the Hybrid Coulomb Energy Network Applications K.D. Hooks, A. Malkani, & L. C. Rabelo, Ohio University, Application of Artificial Neural Networks in Quality Control Charts B.E. Stephens & P.G. Madhavan, Purdue University at Indianapolis, Simple Nonlinear Curve Fitting Using the Artificial Neural Network Nasser Ansari & Janusz A. Starzyk, Ohio University, Distance Field Approach to Handwritten Character Recognition Thomas L. Hemminger & Yoh-Han Pao, Case Western Reserve University, A Real-Time Neural-Net Computing Approach to the Detection and Classification of Underwater Acoustic Transients Seibert L. Murphy & Samir I. Sayegh, Indiana-Purdue University, Analysis of the Classification Performance of a Back Propagation Neural Network Designed for Acoustic Screening S. Keyvan, L. C. Rabelo, & A. 
Malkani, Ohio University, Nuclear Diagnostic Monitoring System Using Adaptive Resonance Theory From rjw at ccs.neu.edu Tue Nov 30 09:04:42 1993 From: rjw at ccs.neu.edu (Ronald J Williams) Date: Tue, 30 Nov 1993 09:04:42 -0500 Subject: revised paper available Message-ID: <9311301404.AA24614@walden.ccs.neu.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/williams.perf-bound.ps.Z **PLEASE DO NOT FORWARD TO OTHER GROUPS** The following improved version of the recently announced paper with the same title is now available in the neuroprose directory. The compressed file has the same name and supersedes the earlier version. In addition to containing some new material, this new version has a corrected TR number and an added co-author, and it is 20 pages long. For those unable to obtain the file by ftp, hardcopies can be obtained by contacting: Diane Burke, College of Computer Science, 161 CN, Northeastern University, Boston, MA 02115, USA. Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions Northeastern University College of Computer Science Technical Report NU-CCS-93-14 Ronald J. Williams Leemon C. Baird, III College of Computer Science Wright Laboratory Northeastern University Wright-Patterson Air Force Base rjw at ccs.neu.edu bairdlc at wL.wpafb.af.mil Abstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a tight bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning. One significant application of these results is to problems where a function approximator is used to learn a value function, with training of the approximator based on trying to minimize the Bellman residual across states or state-action pairs. When control is based on the use of the resulting value function, this result provides a link between how well the objectives of function approximator training are met and the quality of the resulting control. 
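A minimal sketch, not the authors' code, of the quantities named in the abstract: an imperfect value function on a small random MDP, its max-norm Bellman residual, and the loss of the greedy policy it induces (Python/NumPy; the MDP size and noise level are illustrative):

import numpy as np

rng = np.random.default_rng(0)
nS, nA, gamma = 6, 3, 0.9
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a] = distribution over next states
R = rng.random((nS, nA))                        # rewards

def backup(V):
    # one-step lookahead: Q(s, a) = R(s, a) + gamma * sum_s' P(s'|s, a) V(s')
    return R + gamma * P @ V

def value_of(policy):
    # exact value of a deterministic policy: solve (I - gamma * P_pi) V = R_pi
    P_pi, R_pi = P[np.arange(nS), policy], R[np.arange(nS), policy]
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, R_pi)

V_opt = np.zeros(nS)
for _ in range(2000):                           # value iteration to get the optimal values
    V_opt = backup(V_opt).max(axis=1)

V_hat = V_opt + rng.normal(scale=0.05, size=nS)                 # an imperfect value function
residual = np.max(np.abs(backup(V_hat).max(axis=1) - V_hat))    # max-norm Bellman residual
greedy = backup(V_hat).argmax(axis=1)                           # greedy policy w.r.t. V_hat
loss = np.max(V_opt - value_of(greedy))                         # its distance from optimal
print(residual, loss)

The bound in the report controls a loss of this kind by a multiple of the Bellman residual divided by (1 - gamma); the exact constant, and the tightness result for the state-action case, are derived in the paper.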
To obtain a copy: ftp cheops.cis.ohio-state.edu login: anonymous password: cd pub/neuroprose binary get williams.perf-bound.ps.Z quit Then at your system: uncompress williams.perf-bound.ps.Z lpr -P williams.perf-bound.ps From icip at pine.ece.utexas.edu Tue Nov 30 22:01:04 1993 From: icip at pine.ece.utexas.edu (International Conf on Image Processing Mail Box) Date: Tue, 30 Nov 93 21:01:04 CST Subject: No subject Message-ID: <9312010301.AA11738@pine.ece.utexas.edu> PLEASE POST PLEASE POST PLEASE POST PLEASE POST *************************************************************** FIRST IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING November 13-16, 1994 Austin Convention Center, Austin, Texas, USA CALL FOR PAPERS Sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society, ICIP-94 is the inaugural international conference on theoretical, experimental and applied image processing. It will provide a centralized, high-quality forum for presentation of technological advances and research results by scientists and engineers working in Image Processing and associated disciplines such as multimedia and video technology. Also encouraged are image processing applications in areas such as the biomedical sciences and geosciences. SCOPE: 1. IMAGE PROCESSING: Coding, Filtering, Enhancement, Restoration, Segmentation, Multiresolution Processing, Multispectral Processing, Image Representation, Image Analysis, Interpolation and Spatial Transformations, Motion Detection and Estimation, Image Sequence Processing, Video Signal Processing, Neural Networks for image processing and model-based compression, Noise Modeling, Architectures and Software. 2. COMPUTED IMAGING: Acoustic Imaging, Radar Imaging, Tomography, Magnetic Resonance Imaging, Geophysical and Seismic Imaging, Radio Astronomy, Speckle Imaging, Computer Holography, Confocal Microscopy, Electron Microscopy, X-ray Crystallography, Coded-Aperture Imaging, Real-Aperture Arrays. 3. IMAGE SCANNING DISPLAY AND PRINTING: Scanning and Sampling, Quantization and Halftoning, Color Reproduction, Image Representation and Rendering, Graphics and Fonts, Architectures and Software for Display and Printing Systems, Image Quality, Visualization. 4. VIDEO: Digital video, Multimedia, HD video and packet video, video signal processor chips. 5. APPLICATIONS: Application of image processing technology to any field. PROGRAM COMMITTEE: GENERAL CHAIR: Alan C. Bovik, U. Texas, Austin TECHNICAL CHAIRS: Tom Huang, U. Illinois, Champaign and John W. Woods, Rensselaer, Troy SPECIAL SESSIONS CHAIR: Mike Orchard, U. Illinois, Champaign EAST EUROPEAN LIASON: Henri Maitre, TELECOM, Paris FAR EAST LIASON: Bede Liu, Princeton University SUBMISSION PROCEDURES Prospective authors are invited to propose papers for lecture or poster presentation in any of the technical areas listed above. To submit a proposal, prepare a summary of the paper using no more than 3 pages including figures and references. Send five copies of the paper summary along with a cover sheet stating the paper title, technical area(s) and contact address to: John W. Woods Center for Image Processing Research Rensselaer Polytechnic Institute Troy, NY 12180-3590, USA. Each selected paper (five-page limit) will be published in the Proceedings of ICIP-94, using high-quality paper for good image reproduction. Style files in LaTeX will be provided for the convenience of the authors. 
SCHEDULE Paper summaries/abstracts due*: 15 February 1994 Notification of Acceptance: 1 May 1994 Camera-Ready papers: 15 July 1994 Conference: 13-16 November 1994 *For an automatic electronic reminder, send a "reminder please" message to: icip at pine.ece.utexas.edu CONFERENCE ENVIRONMENT ICIP-94 will be held in the recently completed state-of-the-art Convention Center in downtown Austin. The Convention Center is situated two blocks from the Town Lake, and is only 12 minutes from Robert Meuller Airport. It is surrounded by many modern hotels that provide comfortable accommodation for $75-$125 per night. Austin, the state capital, is renowned for its natural hill-country beauty and an active cultural scene. Within walking distance of the Convention Center are several hiking and jogging trails, as well as opportunities for a variety of aquatic sports. Live bands perform in various clubs around the city and at night spots along Sixth Street, offering a range of jazz, blues, country/Western, reggae, swing and rock music. Day temperatures are typically in the upper sixties in mid-November. An exciting range of EXHIBITS, TUTORIALS, SPECIAL PRODUCT SESSIONS, and SOCIAL EVENTS will be offered. For further details about ICIP-94, please contact: Conference Management Services 3024 Thousand Oaks Drive Austin, Texas 78746 Tel: 512/327/4012; Fax:512/327/8132 or email: icip at pine.ece.utexas.edu FINAL CALL FOR PAPERS FIRST IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING November 13-16, 1994 Austin Convention Center, Austin, Texas, USA
From Philip.Resnik at East.Sun.COM Mon Nov 1 10:28:18 1993 From: Philip.Resnik at East.Sun.COM (Philip Resnik - Sun Microsystems Labs BOS) Date: Mon, 1 Nov 93 10:28:18 EST Subject: ACL-94 Call for papers Message-ID: <9311011528.AA10418@caesar.East.Sun.COM> Hi, I'd like to follow up on Gary Cottrell's note about the ACL-94 conference with two brief comments. First, statistical (and statistical-symbolic) approaches to NLP are an important area right now, and people doing that kind of work (of which I am one) share a great many concerns with members of this list. Nonetheless, there seems to be little communication between the groups. As an example, David Wolpert and David Wolf's recent reports (on estimating entropy, etc. given finite samples), publicized on this list, target an issue of central concern to people doing statistical NLP, yet I suspect that few computational linguists have come across them.
Second, I want to draw your attention to the ACL conference's student sessions (buried in the middle of the call for papers). These sessions are intended to give students a chance to present work in progress, as opposed to completed work, particularly so they can get feedback from more senior members of the computational linguistics community with whom they might not otherwise come into contact. The deadline is somewhat later than for the ACL main sessions (February 1), and the submissions are reviewed by a committee comprising both students and faculty members. It's a very useful forum, by no means inferior to the main sessions of the conference, and I STRONGLY encourage students doing language-related connectionist work to get involved. Philip From sbh at eng.cam.ac.uk Tue Nov 2 08:27:25 1993 From: sbh at eng.cam.ac.uk (S.B. Holden) Date: Tue, 2 Nov 93 13:27:25 GMT Subject: Technical Report Message-ID: <10707.9311021327@tw700.eng.cam.ac.uk> The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department. Quantifying Generalization in Linearly Weighted Neural Networks Sean B. Holden and Martin Anthony Technical Report CUED/F-INFENG/TR113 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract The Vapnik-Chervonenkis Dimension has proven to be of great use in the theoretical study of generalization in artificial neural networks. The `probably approximately correct' learning framework is described and the importance of the VC dimension is illustrated. We then investigate the VC dimension of certain types of linearly weighted neural networks. First, we obtain bounds on the VC dimensions of radial basis function networks with basis functions of several types. Secondly, we calculate the VC dimension of polynomial discriminant functions defined over both real and binary-valued inputs. ************************ How to obtain a copy ************************ a) Via FTP: unix> ftp svr-ftp.eng.cam.ac.uk Name: anonymous Password: (type your email address) ftp> cd reports ftp> binary ftp> get holden_tr113.ps.Z ftp> quit unix> uncompress holden_tr113.ps.Z unix> lpr holden_tr113.ps (or however you print PostScript) b) Via postal mail: Request a hardcopy from Sean B. Holden Cambridge University Engineering Department, Trumpington Street, Cambridge CB2 1PZ, England. or email me: sbh at eng.cam.ac.uk This report also appears as London School of Economics Mathematics Preprint number LSE-MPS-42, December, 1992. From tgd at chert.CS.ORST.EDU Tue Nov 2 16:28:35 1993 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Tue, 2 Nov 93 13:28:35 PST Subject: papers of interest in Machine Learning 13:2/3 Message-ID: <9311022128.AA27174@curie.CS.ORST.EDU> Machine Learning Volume 13: 2/3 is a special issue devoted to Genetic Algorithms (J. Grefenstette, Ed.) Two of the papers in the issue are of potential interest to this list: Genetic Reinforcement Learning for Neurocontrol Problems D. Whitley, S. Dominic, R. Das, C. W. Anderson What makes a problem hard for a genetic algorithm? Some anomalous results and their explanation S. Forrest, M. 
Mitchell For ordering information, contact Kluwer at world.std.com --Tom From jbower at smaug.bbb.caltech.edu Tue Nov 2 18:25:08 1993 From: jbower at smaug.bbb.caltech.edu (Jim Bower) Date: Tue, 2 Nov 93 15:25:08 PST Subject: Call for papers CNS*94 Message-ID: <9311022325.AA28306@smaug.bbb.caltech.edu> CALL FOR PAPERS Third Annual Computation and Neural Systems Meeting CNS*94 July 21 - 25, 1994 Monterey, California DEADLINE FOR SUMMARIES & ABSTRACTS IS January 26, 1993 This is the third annual meeting of an interdisciplinary conference intended to address the broad range of research approaches and issues involved in the field of computational neuroscience. The last two year's meetings, in San Francisco (CNS*92) and Washington DC (CNS*93), brought experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians together to consider the functioning of biological nervous systems. Peer reviewed papers were presented at the meeting on a range of subjects related to understanding how biological neural systems compute. As in previous years, the meeting is intended to equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. The main body of the meeting will take place at the Monterey Doubletree hotel and include plenary, contributed and poster sessions. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume. Following the regular session, there will be two days of focused workshops at the natural ocean side setting of the Asilomar Conference Center on the Monterey Peninsula. With this announcement we solicit the submission of presented papers to the meeting. All papers will be refereed. Authors should send original research contributions in the form of a 1000-word (or less) summary and a separate single page 50-100 word abstract clearly stating their results. Summaries are for program committee use only. Abstracts will be published in the conference program. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list: Presentation categories: A. Theory and Analysis B. Modeling and Simulation C. Experimental D. Tools and Techniques Themes: A. Development B. Cell Biology C. Excitable Membranes and Synaptic Mechanisms D. Neurotransmitters, Modulators, Receptors E. Sensory Systems 1. Somatosensory 2. Visual 3. Auditory 4. Olfactory 5. Other systems F. Motor Systems and Sensory Motor Integration G. Learning and Memory H. Behavior I. Cognitive J. Disease Include addresses of all authors on the front of the summary and the abstract including the E-mail address for EACH author. Indicate on the front of the summary to which author correspondence should be addressed. Program committee decisions will be sent to the correspondence author only. Submissions will not be considered if they lack category information, separate abstract sheets, author addresses, or are late. Submissions can be made by surface mail ONLY by sending 6 copies of the abstract and summary to: CNS*94 Submissions Division of Biology 216-76 Caltech Pasadena, CA. 91125 Submissions must be postmarked by January 26th, 1993. Registration information: All submitting authors will be sent registration material automatically. 
Others interested in obtaining registration material once they become available should surface mail to the above address or email to: cp at smaug.cns.caltech.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CNS*94 Organizing Committee: Co-meeting chair logistics - John Miller, UC Berkeley Co-meeting chair program - Jim Bower, Caltech Program committee John Rinzel, NIDDK/NIH Gwen Jacobs, UC Berkeley Catherine Carr, University of Maryland, College Park Dennis Glanzman, NIMH/NIH Charles Wilson, University of Tennessee, Memphis Proceedings - Frank Eeckman, Lawrence Livermore National Labs. Workshops - Mike Hasselmo, Harvard University European organizer - Erik DeSchutter (Belgium) Middle Eastern organizer - Idan Segev, Jerusalem Down under organizer - Mike Paulin (New Zealand) South American organizer - Renato Sabbatini (Brazil) ============================================================ In each of the past two years, the meeting has been able to offer travel grants to students presenting papers with support from the National Science Foundation. Potential participants interested in the content of last year's meeting can ftp last year's agenda as follows (you enter text in quotes): yourhost% "ftp 131.215.137.69" Connected to 131.215.137.69. 220 mordor FTP server (SunOS 4.1) ready. Name (131.215.137.69:): "ftp" 331 Guest login ok, send ident as password. Password: "yourname at yourhost.yourside.yourdomain" 230 Guest login ok, access restrictions apply. ftp> "cd cns94" 250 CWD command successful. ftp> "get.agenda93" 200 PORT command successful. 150 ASCII data connection for agenda93 (131.215.137.60,2916) (12761 bytes). 226 ASCII Transfer complete. local: agenda93 remote: agenda93 13145 bytes received in 0.26 seconds (49 Kbytes/s) ftp> "quit" 221 Goodbye. yourhost% (use any editor to look at the file) ======================================================= DEADLINE FOR SUMMARIES & ABSTRACTS IS January 26, 1994 please post From graham at charles-cross.plymouth.ac.uk Wed Nov 3 11:17:07 1993 From: graham at charles-cross.plymouth.ac.uk (Graham Smith) Date: Wed, 3 Nov 93 16:17:07 GMT Subject: No subject Message-ID: <97.9311031617@cx.plym.ac.uk> Subject: Dynamic Binding Cc: neuron-request at edu.upenn.psych.cattell graham It strikes me that an obvious solution to the binding problem has been overlooked in our rush to study phase locking in oscillatory networks. ;-) I have recently trained a simple multi-layer feed-forward network using back-propagation, to auto-associate patterns which simultaneously describe two items. The patterns consist of four features (red, blue, square, triangle) which are enumerated over the two items. The domain allows 16 two item patterns (e.g. "red square and blue triangle" or "blue triangle and blue square") and 4 single item patterns. The network had 8 input units, 4 hidden units and 8 output units. It was successfully trained to auto-associate 15 of the 20 patterns and was able to correctly auto-associate the 5 previously unseen patterns. This result is unsurprising, the network has simply learned the regularities of the set of bit patterns. But I will argue that the network can be described at the symbolic level as performing dynamic binding. Binding does not occur at either the input or output layer as both of these representations are enumerated. 
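A minimal sketch of the set-up just described, not the original code: the enumerated 8-bit patterns and an 8-4-8 auto-associator trained by plain backpropagation (Python/NumPy). The learning rate, number of updates and train/test split are illustrative, biases are omitted, and the four single-item patterns are left out for brevity; note that the input and output representations remain enumerated, as stated above:

import itertools
import numpy as np

features = ["red", "blue", "square", "triangle"]

def encode(item_a, item_b):
    # slots 0-3 enumerate the features of item a, slots 4-7 those of item b
    v = np.zeros(8)
    for slot, (colour, shape) in enumerate([item_a, item_b]):
        v[slot * 4 + features.index(colour)] = 1.0
        v[slot * 4 + features.index(shape)] = 1.0
    return v

items = list(itertools.product(["red", "blue"], ["square", "triangle"]))
patterns = np.array([encode(a, b) for a, b in itertools.product(items, items)])  # 16 two-item patterns

rng = np.random.default_rng(0)
W1, W2 = rng.normal(0.0, 0.5, (8, 4)), rng.normal(0.0, 0.5, (4, 8))
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(20000):                     # plain backpropagation on 15 of the 16 patterns
    x = patterns[rng.integers(15)]
    h = sigmoid(x @ W1)
    o = sigmoid(h @ W2)
    d_o = (o - x) * o * (1 - o)
    d_h = (d_o @ W2.T) * h * (1 - h)
    W2 -= 0.5 * np.outer(h, d_o)
    W1 -= 0.5 * np.outer(x, d_h)

print(np.round(sigmoid(sigmoid(patterns[15] @ W1) @ W2)))   # the held-out pattern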
However, a hidden-layer activation pattern is a transformation of the input pattern which contains sufficient information to allow its transformation back to the original by the hidden-to-output layer of weights. Such descriptions are dubbed holistic representations by RAAM enthusiasts. Furthermore, van Gelder argues that holistic representations are functionally compositional and that truly connectionist representations have functional rather than concatenative compositionality. Phase synchrony is a concatenatively compositional putative binding mechanism, and is therefore a hybrid approach rather than a connectionist one. The holistic representation of "red square and blue triangle" is not ambiguous. It could not be confused with the holistic representation of "blue square and red triangle". The holistic representation is performing binding. To be more accurate, binding does not literally take place at the subsymbolic level. No variables are concatenatively bound to constants. Subsets of features are not "glued" together. Rather, dynamic binding is a symbolic-level approximate description of the subsymbolic process, and subsymbolic "binding" is a functionally compositional state-space representation. I hope to publish the above-mentioned work, but before doing so I shall be grateful for some feedback, either to reassure myself that there is something here worth publishing or to spare my blushes with a wider audience. Graham Smith Centre for Intelligent Systems University of Plymouth England From mwitten at chpc.utexas.edu Wed Nov 3 12:37:13 1993 From: mwitten at chpc.utexas.edu (mwitten@chpc.utexas.edu) Date: Wed, 3 Nov 93 11:37:13 CST Subject: URGENT: DEADLINE CHANGE FOR WORLD CONGRESS Message-ID: <9311031737.AA08913@morpheus.chpc.utexas.edu> UPDATE ON DEADLINES FIRST WORLD CONGRESS ON COMPUTATIONAL MEDICINE, PUBLIC HEALTH, AND BIOTECHNOLOGY 24-28 April 1994 Hyatt Regency Hotel Austin, Texas ----- (Feel Free To Cross Post This Announcement) ---- Due to confusion in the electronic distribution of the congress announcement and deadlines, as well as incorrect deadlines appearing in a number of society newsletters and journals, we are extending the abstract submission deadline for this congress to 31 December 1993. We apologize to those who were confused by the differing deadline announcements and hope that this change will allow everyone to participate. For congress details: To contact the congress organizers for any reason, use any of the following pathways: ELECTRONIC MAIL - compmed94 at chpc.utexas.edu FAX (USA) - (512) 471-2445 PHONE (USA) - (512) 471-2472 GOPHER: log into the University of Texas System-CHPC and select the Computational Medicine and Allied Health menu choice ANONYMOUS FTP: ftp.chpc.utexas.edu cd /pub/compmed94 (all documents and forms are stored here) POSTAL: Compmed 1994 University of Texas System CHPC Balcones Research Center 10100 Burnet Road, 1.154CMS Austin, Texas 78758-4497 SUBMISSION PROCEDURES: Authors must submit 5 copies of a single-page 50-100 word abstract clearly discussing the topic of their presentation. In addition, authors must clearly state their choice of poster, contributed paper, tutorial, exhibit, focused workshop or birds-of-a-feather group, along with a discussion of their presentation. Abstracts will be published as part of the preliminary conference material. To notify the congress organizing committee that you would like to participate and to be put on the congress mailing list, please fill out and return the form that follows this announcement. 
You may use any of the contact methods above. If you wish to organize a contributed paper session, tutorial session, focused workshop, or birds-of-a-feather group, please contact the conference director at mwitten at chpc.utexas.edu. The abstract may be submitted electronically to compmed94 at chpc.utexas.edu or by mail or fax. There is no official format. If you need further details, please contact me. Matthew Witten Congress Chair mwitten at chpc.utexas.edu From kolen-j at cis.ohio-state.edu Thu Nov 4 10:25:44 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Thu, 4 Nov 93 10:25:44 -0500 Subject: Dynamic Binding In-Reply-To: Graham Smith's message of Wed, 3 Nov 93 16:17:07 GMT <97.9311031617@cx.plym.ac.uk> Message-ID: <9311041525.AA12055@pons.cis.ohio-state.edu> As Graham Smith writes: > It strikes me that an obvious solution to the binding problem has been > overlooked in our rush to study phase locking in oscillatory networks. ;-) He's right. > I hope to publish the above-mentioned work but before doing so I shall be > grateful for some feedback either to reassure myself that there is > something here worth publishing or to spare my blushes with a wider > audience. A wider audience? Those who read connectionists (1000+ ??, only DT knows for sure) are pretty much the ones who really count. Before you do publish this result, check the proceedings of the last CogSci Society meeting. Janet Wiles presented a similar variable-binding method using autoassociative (aa) encoders. She found that strict aa mappings are too difficult to learn for very large networks. In response, the strict aa mapping was replaced by a mapping of 1-in-n slots to n/2-in-n slots. The representational shift helped learning tremendously, as the standard encoder network can easily produce this mapping. The citation for this paper is: Wiles, J., (1993) "Representation of variables and their values in neural networks", In Proceedings of the Fifteenth Annual Meeting of the Cognitive Science Society. June 18-21, 1993. Boulder, CO. Lawrence Erlbaum Associates, Hillsdale, NJ. I've sent this to connectionists, rather than individually to Graham Smith, because I think the Wiles paper was perhaps the best all-around neural network paper at CogSci this year. John From rsun at athos.cs.ua.edu Fri Nov 5 13:41:15 1993 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Fri, 5 Nov 1993 12:41:15 -0600 Subject: Dynamic Binding Message-ID: <9311051841.AA12690@athos.cs.ua.edu> 1. It is unclear to me from your message how you plan to use the dynamic binding (formed at the hidden layer of your autoassociator) to do some reasoning or whatever. The point of doing dynamic binding is to be able to use the binding in other processing tasks, for example, high-level reasoning, especially in rule-based (or rule-like) reasoning. In such reasoning, bindings are constructed and deconstructed, passed around, checked against some constraints, and unbound or rebound to something else. Simply forming an association is NOT the point. 2. There are a variety of existing methods for dynamic binding; phase synchronization is just one of them, and definitely not the simplest one possible. As a matter of fact, the sign (or signature) propagation method can easily handle that, and it can also simulate phase synchronization without the need for the temporal properties of the nodes used in phase synchronization. --Ron From stjohn at cogsci.UCSD.EDU Fri Nov 5 15:17:27 1993 From: stjohn at cogsci.UCSD.EDU (Mark St. 
John) Date: Fri, 5 Nov 1993 12:17:27 -0800 Subject: Dynamic binding Message-ID: <9311052017.AA06008@cogsci.UCSD.EDU> The sort of "holistic" binding that happens in the hidden layer of a 3-layer network and that Graham Smith describes is not particularly new, at least if you look in the right place. As Smith says, this is just the sort of binding that happens in one of Jordan Pollack's RAAM networks (1990, Artificial Intelligence), and it's much the same as the sort of binding that happens in the hidden layer in the language comprehension systems that Jay McClelland and I (1990, Artificial Intelligence) have developed and that Risto Miikkulainen and Michael Dyer (1991, Cognitive Science) have developed. An example of binding in these models would be to give them a sentence like, "Ricky played a trick on Lucy," and observe that the model correctly binds Ricky to the agent role and Lucy to the recipient role. Then you give the opposite sentence (Lucy played a trick on Ricky) and observe that the bindings are reversed. One serious issue/limitation of this holistic binding method, however, is how well it generalizes to novel cases: how many of the total possible sentences have to be trained so that the remaining sentences will be processed correctly? What happens is that the sentences missing from the training set create regularities that the model can learn. These regularities are sentences that do not (and so, as far as the hidden units are concerned, cannot) occur. The question, then, once the network has been trained, is which force is stronger: the generalization to novel cases, or the regularity the network learned that that novel case cannot happen? For example, say the network never trained on "Lucy played a trick on Ricky." The model learns many other sentences that suggest how to map sentence constituents to thematic roles, but it also learns that playing tricks is a one-way sort of deal because Lucy never seems to play them. Now if we give the Lucy sentence as a generalization test case, part of the model wants to generalize and activate the systematic meaning based on all that it learned about sentence comprehension, but another part of the model wants to correct this obvious error in the input because it knows that this sentence and meaning are unlikely. The model can "correct the error" by flipping the agent and recipient to the better-known arrangement or by changing the verb to a more likely alternative, etc. Which part of the model (or, better put, which influence) wins depends on many factors, such as the number of hidden units, the cost function, the combinatorial nature of the training corpus, the size of the training corpus, and so on. It turns out that to achieve the sort of systematic mapping we want, we need to use A LOT of hidden units (yes, more is better in this case of generalization), a large training set, and critically, a reasonably combinatorial corpus so that each element/word is paired with some variety of other elements (in statistical terms, you need to break as many high-order regularities as possible). See St. John (1993, Cognitive Science Proceedings) for some discussion. I'm a little embarrassed to toot my own horn, but I've thought about this some, and these papers may be of some interest -- in addition to Janet Wiles' paper (1993, Cognitive Science Proceedings) that John Kolen mentioned. One final point I'd like to raise is that this tension between generalization (along the lines of a systematic mapping) and "error correction" is not all bad. 
There is considerable psycholinguistic evidence that these sorts of "error corrections" happen with some frequency. People mis-hear, mis-read, mis-remember, see what they want to see, etc. all the time. On the other hand, we can all understand the infamous sentence "John ate the guitar" even though we've presumably never seen such a thing before and it's pretty unlikely. This ability, however, may simply attest to the wide variety and combinatorics of our language training. Why it is that we mis-read on some occasions and comprehend the systematic meaning, like with John eating the guitar, on other occasions is not well understood. Training is probably involved, and attention is probably involved, to name two factors. We're currently working on models and human experiments to understand this issue better. -Mark St. John Dept. of Cognitive Science, UCSD From eric at research.nj.nec.com Fri Nov 5 16:30:44 1993 From: eric at research.nj.nec.com (Eric B. Baum) Date: Fri, 5 Nov 93 16:30:44 EST Subject: C Programmer Wanted Message-ID: <9311052130.AA18527@yin> C Programmer Wanted. Note: this job may be more interesting than most postdocs. It may pay better too, if the successful applicant has substantial commercial experience. Prerequisites: Experience in getting large programs to work. Some mathematical sophistication, *at least* the equivalent of a good undergraduate degree in math, physics, theoretical computer science, or a related field. Salary: Depends on experience. Job: Implementing various novel algorithms. For example, implementing an entirely new approach to game tree search. Conceivably this could lead into a major effort to produce a championship chess program based on a novel strategy, and on novel use of learning algorithms. Another example: implementing novel approaches to the Travelling Salesman Problem. Another example: experiments with RTDP (TD learning). Algorithms are *not* exclusively neural. These projects are at the leading edge of algorithm research, so expect the work to be both interesting and challenging. Term-contract position. To apply, please send a cv, cover letter and list of references to: Eric Baum, NEC Research Institute, 4 Independence Way, Princeton NJ 08540, or PREFERABLY by internet to eric at research.nj.nec.com Equal Opportunity Employer M/F/D/V ------------------------------------- Eric Baum NEC Research Institute, 4 Independence Way, Princeton NJ 08540 PHONE:(609) 951-2712, FAX:(609) 951-2482, Inet:eric at research.nj.nec.com From roger at eccles.psych.nwu.edu Mon Nov 8 10:44:27 1993 From: roger at eccles.psych.nwu.edu (Roger Ratcliff) Date: Mon, 8 Nov 1993 09:44:27 -0600 Subject: Dynamic binding Message-ID: <9311081544.AA24404@eccles.psych.nwu.edu> We have some experimental data on human subjects that might be interesting (challenging) to model for sentence matching, for the kinds of binding examples given (by Mark). If subjects study "John hit Bill" and other active and passive sentences of this type with different names ("Helen was attracted by Jeff") and are then given a true/false test (is this sentence true according to those you studied), then we find evidence for availability of different kinds of information as a function of processing time. We used a response-signal procedure in which subjects were interrupted at one of several times (typically 50, 150, 250, 400, 800, 2000 msec) and were required to respond immediately, within 200 to 300 ms. 
The probability of responding yes for "John hit Bill" and "Bill hit John" increased at the same rate from 400 ms to 700 ms of processing time, then after 700 ms, information about the relationship (JhB or BhJ) became available and the two curves split apart. We concluded that overall match (the three words were the same or something like that) was available early in processing and later information about the precise form of the relationship became available. These data may be useful when examining the results of matching a sentence against the representation that is built in the model. Ratcliff & McKoon, (1989) Similarity information versus relational information: differences in the time course of retrieval. Cognitive Psychology, 21, 139-155. Roger Ratcliff Psychology dept Northwestern From Christian.Lehmann at di.epfl.ch Mon Nov 8 11:05:42 1993 From: Christian.Lehmann at di.epfl.ch (Christian Lehmann) Date: Mon, 8 Nov 93 17:05:42 +0100 Subject: PERAC'94 CALL FOR PAPER Message-ID: <9311081605.AA24743@lamisun.epfl.ch> --- From Perception to Action --- xxxxxx PerAc'94 Lausanne xxxxxxx A state of the art conference on perceptive processing, artificial life, autonomous agents, emergent behaviours and micro-robotic systems Lausanne, Switzerland, 7-9 september 1994 Swarm intelligence Micro-robotics Evolution, genetic processes Competition and cooperation Learning machines Self organization Active perception Sensory/motor loops Emergent behavior Cognition ---------------------------------------------- | Call for Papers, Call for Posters | | Call for Demonstrations, Call for Videos | | Contest | ---------------------------------------------- Contributions can be made in the following categories: -- Papers -- (30 to 45 minutes). 2-page abstracts should be submitted by February 1, 1994. The conference will have no parallel sessions, and a didactically structured program. Most of the papers will be solicited. The submitted abstracts should attempt a synthetic approach from sensing to action. Selected authors will have to adapt their presentation to the general conference program and prepare a complete well-structured text before June 94. -- Posters -- 4-page short papers that will be published in the proceedings and presented as posters are due for June 1, 1994. Posters will be displayed during the whole Conference and enough time will be provided to promote interaction with the authors. A jury will thoroughly examine them and the two best posters will be presented as a paper in the closing session (20' presentation). -- Demonstrations -- Robotic demonstrations are considered as posters. In addition to the 4-page abstract describing the scientific interest of the demonstration, the submission should include a 1-page requirement for demonstration space and support. -- Videos -- 5 minute video clips are accepted in Super-VHS or VHS (preferably PAL, NTSC leads to a poorer quality). Tapes together with a 2-page description should be submitted before June 1, 1994. Clips will be edited and distributed at the conference. -- Contest -- A robotic contest will be organized the day before the conference. Teams participating to the contest will be able to follow the conference freely. The contest will consist in searching for and collecting or stacking 36mm film cans. One or several mobile robots or robotic arms can be used for this task. The rules and preliminary registration forms will be sent upon request by air-mail only as soon as definitive (end of October 93). For further information: Prof J.D. 
Nicoud, LAMI-EPFL, CH-1015 Lausanne Fax ++41 21 693-5263, Email perac at di.epfl.ch Program Committee and referees (September 93) L. Bengtsson, Uni Halmstad, S. -- R. Brooks, MIT, Cambridge, USA. P. Dario, Santa Anna, Pisa, I. -- J.L. Deneubourg, ULB, Bruxelles, B R. Eckmiller, Uni, Düsseldorf, D. -- N. Franceschini, Marseilles, F T. Fukuda, Uni, Nagoya, JP. -- S. Grossberg, Uni, Boston, USA J.A. Meyer, Uni, Paris, F. -- R. Pfeifer, Uni, Zürich, CH L. Steels, VUB, Brussels, B. -- A. Treisman, Uni, Princeton, USA F. Varela, Polytechnique, Paris, F. -- E. Vittoz, CSEM, Neuchâtel, CH J. Albus, NIST, Gaithersburg, USA. -- D.J. Amit, Uni, Jerusalem, Israel X. Arreguit, CSEM, Neuchâtel, CH. -- H. Asama, Riken, Wako, JP R. Beer, Case Western, Cleveland, USA. -- G. Beni, Uni, Riverside, USA P. Bourgine, Cemagref, Antony, F. -- Y. Burnod, Uni VI, Paris, F D. Cliff, Uni Sussex, Brighton, UK Ph. Gaussier, LAMI, Lausanne, CH. -- P. Husbands, Uni Sussex, Brighton, UK O. Kubler, ETH, Zürich, CH. -- C.G. Langton, Santa Fe Inst, USA I. Masaki, MIT, Cambridge, USA. -- E. Mazer, LIFIA, Grenoble, F M. Mataric, MIT, Cambridge, USA. -- H. Miura, Uni, Tokyo, JP S. Rasmussen, Los Alamos, USA. -- G. Sandini, Uni, Genova, I T. Smithers, Uni, San Sebastian, E. -- J. Stewart, Inst. Pasteur, Paris, F L. Tarassenko, Uni, Oxford, UK. -- C. Touzet, EERIE, Nîmes, F P. Vershure, NSI, La Jolla, USA. From rba at bellcore.com Mon Nov 8 12:54:19 1993 From: rba at bellcore.com (Bob Allen) Date: Mon, 8 Nov 93 12:54:19 -0500 Subject: No subject Message-ID: <9311081754.AA00674@vintage.bellcore.com> Subject: IWANNT'93 Electronic Proceedings Electronic Proceedings for 1993 International Workshop on Applications of Neural Networks to Telecommunications 1. Electronic Proceedings (EPROCS) The Proceedings for the 1993 International Workshop on Applications of Neural Networks to Telecommunications (IWANNT'93) have been converted to electronic form and are available in the SuperBook(TM) document browsing system. In addition to the IWANNT'93 proceedings, you will be able to access abstracts from the 1992 Bellcore Workshop on Applications of Neural Networks to Telecommunications and pictures of several of the conference attendees. We would appreciate your feedback about the use of this system. In addition, if you have questions, or would like a personal account, please contact Robert B. Allen (iwannt_allen at bellcore.com or rba at bellcore.com). 2. Accounts and Passwords Public access is available with the account name: iwan_pub Annotations made by iwan_pub will be removed. Individual accounts and passwords were given to conference participants. The difference between public and individual accounts is that individual accounts have permission to make annotations. 3. Remote Access Via Xwindows From BATTITI at itnvax.science.unitn.it Tue Nov 9 04:21:17 1993 From: BATTITI at itnvax.science.unitn.it (BATTITI@itnvax.science.unitn.it) Date: 09 Nov 1993 09:21:17 +0000 Subject: Tech. Reports about REACTIVE TABU SEARCH (RTS) Message-ID: <01H53UJE0RBAKY5DVE@itnvax.science.unitn.it> The following technical reports in the area of combinatorial optimization and neural nets training are available by anonymous ftp from the Mathematics Department archive at Trento University. _______________________________________________________________________ The Reactive Tabu Search Roberto Battiti and Giampietro Tecchiolli Technical Report UTM 405, October 1992 Dipartimento di Matematica, Univ. 
di Trento 38050 Povo (Trento) - Italia Abstract We propose an algorithm for combinatorial optimization where an explicit check for the repetition of configurations is added to the basic scheme of Tabu search. In our Tabu scheme the appropriate size of the list is learned in an automated way by reacting to the occurrence of cycles. In addition, if the search appears to be repeating an excessive number of solutions excessively often, then the search is diversified by making a number of random moves proportional to a moving average of the cycle length. The reactive scheme is compared to a "strict" Tabu scheme, that forbids the repetition of configurations and to schemes with a fixed or randomly varying list size. From the implementation point of view we show that the Hashing or Digital Tree techniques can be used in order to search for repetitions in a time that is approximately constant. We present the results obtained for a series of computational tests on a benchmark function, on the 0-1 Knapsack Problem, and on the Quadratic Assignment Problem. _______________________________________________________________________ Training Neural Nets with the Reactive Tabu Search Roberto Battiti and Giampietro Tecchiolli Technical Report UTM 421, November 1993 Dipartimento di Matematica, Univ. di Trento 38050 Povo (Trento) - Italia Abstract In this paper the task of training sub-symbolic systems is considered as a combinatorial optimization problem and solved with the heuristic scheme of the Reactive Tabu Search (RTS) proposed by the authors and based on F. Glover's Tabu Search. An iterative optimization process based on a ``modified greedy search'' component is complemented with a meta-strategy to realize a discrete dynamical system that discourages limit cycles and the confinement of the search trajectory in a limited portion of the search space. The possible cycles are discouraged by prohibiting (i.e., making tabu) the execution of moves that reverse the ones applied in the most recent part of the search, for a prohibition period that is adapted in an automated way. The confinement is avoided and a proper exploration is obtained by activating a diversification strategy when too many configurations are repeated excessively often. The RTS method is applicable to non-differentiable functions, it is robust with respect to the random initialization and effective in continuing the search after local minima. The limited memory and processing required make RTS a competitive candidate for special-purpose VLSI implementations. We present and discuss four tests of the technique on feedforward and feedback systems. _______________________________________________________________________ ******> how to obtain a copy via FTP: unix> ftp volterra.science.unitn.it (130.186.34.16) Name: anonymous Password: (type your email address) ftp> cd pub ftp> binary ftp> get reactive-tabu-search.ps.Z ftp> get rts-neural-nets.ps.Z ftp> quit unix> uncompress *.ps.Z unix> lpr *.ps (or however you print PostScript) note: gnu-zipped files are available as file.gz reactive-tabu-search.ps : 27 pages rts-neural-nets.ps : 45 pages both papers contain complex figures so that printing can be slow on some printers ******> A limited number of hardcopies are available (only if you don't have access to FTP!) from: Roberto Battiti Dip. 
di Matematica Universita' di Trento 38050 Povo (Trento) Italy e-mail: battiti at itnvax.science.unitn.it From janetw at cs.uq.oz.au Tue Nov 9 20:20:39 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Tue, 9 Nov 93 20:20:39 EST Subject: Dynamic Binding Message-ID: <9311091020.AA08508@client> On binding using hidden layers, Graham Smith writes: > I have recently trained a simple multi-layer feed-forward network using > back-propagation, to auto-associate patterns which simultaneously describe > two items. The patterns consist of four features (red, blue, square, > ... Graham's query has generated a reference to my work, and below I summarize some of the issues as I see them, and give references to related work. The use of a hidden layer for binding is at least implicit (and sometimes explicit) in several uses of nets in the late 1980s (as Mark St John points out). The experiments Graham describes are a very clean version of the problem, useful for highlighting the explicit issues in binding, combinatorial structure and generalisation. They are also known as multi-variable encoders (MVEs) or Nk-Mk-Nk encoders, where k is the number of "variables" (or "features" or "components"), N is the number of "values" (or "elements") per variable and M is the number of hidden units per variable. At least two groups have published simulation results that I know about- 1. Olivier Brousse and Paul Smolensky worked on a variation (see Brousse's PhD thesis "Generativity and systematicity in neural network combinatorial learning", Oct, 1993, originally presented in 1989, at Cog Sci 11 or possibly earlier, though I don't have a reference). The papers related to this work addressed the issue of generalisation in combinatorial domains, in which the binding problem is an implicit part. The experiments are described using Smolensky's tensor notation, and hence the connection to the MVE task is perhaps not obvious for those unfamiliar with tensors. The results are very clear, showing massive generalisation from very few training samples. Brousse's thesis goes into issues in compositionality and systematicity in detail. 2. Mark Ollila and I presented an analysis of the representations in hidden unit (HU) space formed in a colour-shape-location mapping task (1992, NIPS "Intersecting regions: The key to combinatorial structure in HU space"). As Graham Smith described in his simulations, HU space forms a representation of a particular "scene". In our simulations we were interested in the range of classifications possible in HU space. Eg. Could a single hyperplane distinguish between all the scenes with a blue object on the left? a blue object anywhere? a blue square and a red triangle anywhere? These questions are asking whether it is possible to partition the HU space by a single hyperplane so that all the patterns relevant to the query (eg all the blue scenes) can be distinguished from all the others. The answer took us into an analysis of possible structures in HU space. If HU representations are structured so that any classification can be made using a single hyperplane, then the patterns must lie at the corners of a hypertetrahedron (a generalised triangle). This is a geometric way of thinking about VC-dimension. The MVE tasks do not require nearly such capacity in HU space - we can think of them as compressed representations of the general hypertetrahedron. For each variable, the task only requires a 1-of-N selection. 
Hence the structure of HU space can be organized into a hypertetrahedron over the number of variables, (rather than the variables x values). This structure would allow combination of any colour in location one, with any colour in location two, but not both blue and green in location one, since it is not a legal pattern. The bottom line is that instead of needing N^k-1 hidden units, where N is the number of values per variable, and k is the number of variables, we only need Mk, where M is the number of HU required per variable (M is a constant). This year's Cog Sci paper (which John Kolen mentioned) began as an extension to the NIPS92 study, asking: What is the minimum number of hidden units required for such a mapping task, and how easily is it learned? These tasks are "ultra-tight" MVEs, since there are several variables but a minimum number of HUs. Given the ultra-tight bottleneck, the hidden units divide into pairs, each pair encoding the internal representation of one variable. (The theoretical minimum is 2 HU per variable, based on a proof by Kruglyak on the N-2-N encoder). Caveats: The decomposition of the hidden layer into pairs of cooperating units only occurs (in fact is only possible from a coding theory point of view) when there is a specific structure in the patterns for each variable. The traditional "local" codes used in encoder tasks satisfy this criterion, but are hard to learn. In our simulations, we used block codes (eg 1111100000) for each variable in the output and local codes (eg 100000000) for each variable in the input (Bakker et al, 1993, ICANN). A second caveat is that the bottleneck of 2 HU/variable does not provide sufficient capacity for a net to represent the probability distribution of the input patterns, (eg co-occurrence between variables) and hence generalises to all possible combinations of the variables. Sometimes this may be desirable, sometimes not, depending on the task. In later work, Steven Phillips and I showed that ultra-tight encoders with block-structured outputs can be learned "efficiently" by standard bp. ie. polynomial number of patterns in the training set (1993, IJCNN Japan). Steve is continuing this work for his PhD. ----- Several people have asked for copies of my CogSci paper in response to to comments generated by Graham's query. It is not online -- hard copy is available for people who do not have access to the Proceedings. ------------------------ Janet Wiles Departments of Computer Science and Psychology University of Queensland QLD 4072 AUSTRALIA email: janetw at cs.uq.oz.au From janetw at cs.uq.oz.au Tue Nov 9 20:25:30 1993 From: janetw at cs.uq.oz.au (janetw@cs.uq.oz.au) Date: Tue, 9 Nov 93 20:25:30 EST Subject: Dynamic Binding Message-ID: <9311091025.AA08553@client> On binding using tensors: A second approach to the binding problem (which also differs from the phase approach) is the use of tensors or their compressed representations in convolution/correlation models. This approach has been used since the early 1980s for modeling temporal and contextual bindings in human memory. When dealing with several variables, it can help to think of HU space as a compressed representation of their tensor product (see Method 2 below). The terminology differs across disciplines, which makes it harder to find appropriate connections. 
The following are some that I have come across: Tensors: Smolensky (papers go back at least to 1984 with Riley; see 1990, Artificial Intelligence); Pike (1984, Psych Review) A comparison of convolution and matrix distributed memory systems; Humphreys, Bain and Pike (1989, Psych Review) - called 3D matrices; showed the use of a tensor for binding context, cue and target in memory storage, and how to access both semantic and episodic information from the memory. Sloman and Rumelhart had a memory model that looked like a feedforward net with sigma-pi units which was essentially the same underlying maths with respect to the binding aspects (date?, ed. Healey, The Estes volume); Halford et al use tensors as the mapping process in analogical reasoning tasks, specifically looking at the limits of human capacity for processing interacting dimensions (1993, eds Holyoak and Barnden, Advances in Connectionist and Neural Computational Theory Vol 2); Wiles et al (1992, Univ of Qld TR218) reviews the models of Halford et al and Humphreys et al and shows the link between them. Convolution/correlation models: Murdock (1982, Psych Review) Eich (1982, 1985, Psych review) Plate (1991, IJCAI, and 1992 & 93 NIPS) Holographic reduced representations. NB. There are differences in the use of tensors. Eg. to encode a predicate F(x,y,z), where x, y and z are vectors: Method 1: Smolensky would create roles for each variable in the triple, r1, r2, r3, and then create a representation, T, of the triple as T = r1*x + r2*y + r3*z where * is the tensor (or outer) product operation and + is the usual addition of vectors (also called linear superposition). A composite memory would require a vector to specify each item, eg i1, and then superimpose all such representations, ie, M = i1*(r1*x1 + r2*y1 + r3*z1) + i2*(r1*x2 + r2*y2 + r3*z2) + ... Method 1 allows binding of arbitrary sized tuples using a tensor, M, of rank 3, but does not represent the interactions between variables. It seems plausible that phase locking would be one way of implementing method 1. Method 2: In the approach of Humphreys et al and Halford et al, the tensor would be the outer product of all three variables (like a 3D matrix), ie T = x*y*z The memory would be formed by superimposing all such tensor representations, M = x1*y1*z1 + x2*y2*z2 + x3*y3*z3 + x4*y4*z4 + ... Method 2 does not require a unique vector for each item, nor role vectors, and the interactions between variables are accessible. But, there are practical limits to the size of the tensor - Halford estimates that humans can process up to 4 independent variables in parallel - which he models as a tensor of rank 5. In the memory work of Humphreys et al, tensors of rank 3 are used (context, cue and target). If a context is not available, then a unit vector is substituted, effectively accessing the average of all the other items (a "semantic" memory). This allows both context sensitive and context- insensitive access processes over the same information. 
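To make the two constructions concrete, here is a small Python/numpy sketch; the dimensionality, the random example vectors, and the near-orthogonality assumption used for retrieval in Method 1 are illustrative choices, not part of the original proposals.

    import numpy as np

    rng = np.random.default_rng(0)
    d = 16
    x, y, z = rng.standard_normal((3, d))      # filler vectors for one triple F(x, y, z)
    r1, r2, r3 = rng.standard_normal((3, d))   # role vectors, used only by Method 1

    # Method 1: bind each filler to its role and superimpose (a rank-2 tensor).
    T1 = np.outer(r1, x) + np.outer(r2, y) + np.outer(r3, z)
    # Probing with a role recovers its filler approximately, because with random
    # high-dimensional roles the cross-terms (r1.r2) and (r1.r3) are small.
    x_hat = (r1 @ T1) / (r1 @ r1)

    # Method 2: outer product of the variables themselves (a rank-3 tensor).
    # A composite memory would superimpose several such tensors, as in the post.
    T2 = np.einsum('i,j,k->ijk', x, y, z)
    # Contracting with two of the variables recovers the third exactly, up to
    # the scale factor (x.x)(y.y).
    z_hat = np.einsum('i,j,ijk->k', x, y, T2) / ((x @ x) * (y @ y))

    print(np.corrcoef(x, x_hat)[0, 1])   # close to 1
    print(np.allclose(z, z_hat))         # True
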
------------------------ Janet Wiles Departments of Computer Science and Psychology University of Queensland QLD 4072 AUSTRALIA email: janetw at cs.uq.oz.au From luttrell at signal.dra.hmg.gb Tue Nov 9 06:43:56 1993 From: luttrell at signal.dra.hmg.gb (luttrell@signal.dra.hmg.gb) Date: Tue, 09 Nov 93 11:43:56 +0000 Subject: New preprint in neuroprose Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/luttrell.part-mixture.ps.Z The file luttrell.part-mixture.ps.Z is now available for copying from the Neuroprose repository (22 pages). This paper has been submitted to a Special Issue of IEE Proceedings on Vision, Image and Signal Processing. An early version of this paper appeared in the Proceedings of the IEE International Conference on Artificial Neural Networks, Brighton, 1993, pp. 313-316. The Partitioned Mixture Distribution: An Adaptive Bayesian Network for Low-Level Image Processing Steve P Luttrell Adaptive Systems Theory Section Defence Research Agency Malvern, Worcs, United Kingdom, WR14 3PS e-mail: luttrell at signal.dra.hmg.gb ABSTRACT Bayesian methods are used to analyse the problem of training a model to make predictions about the probability distribution of data that has yet to be received. Mixture distributions emerge naturally from this framework, but are not well-matched to high-dimensional problems such as image processing. An extension, called a partitioned mixture distribution (PMD) is presented, which is essentially a set of overlapping mixture distributions. An expectation-maximisation training algorithm is derived. Finally, the results of some numerical simulations are presented, which demonstrate that lateral inhibition arises naturally in PMDs, and that the nodes in a PMD co-operate in such a way that each mixture distribution in the PMD receives the necessary complement of machinery for it to compute its mixture distribution. From rba at bellcore.com Tue Nov 9 12:45:40 1993 From: rba at bellcore.com (Bob Allen) Date: Tue, 9 Nov 93 12:45:40 -0500 Subject: IWANNT-EPROCS - note that the correct public login is iwan_pub Message-ID: <9311091745.AA01986@vintage.bellcore.com> Subject: IWANNT'93 Electronic Proceedings Electronic Proceedings for 1993 International Workshop on Applications of Neural Networks to Telecommunications 1. Electronic Proceedings (EPROCS) The Proceedings for the 1993 International Workshop on Applications of Neural Networks to Telecommunications (IWANNT'93) have been converted to electronic form and are available in the SuperBook(TM) document browsing system. In addition to the IWANNT'93 proceedings, you will be able to access abstracts from the 1992 Bellcore Workshop on Applications of Neural Networks to Telecommunications and pictures of several of the conference attendees. We would appreciate your feedback about the use of this system. In addition, if you have questions, or would like a personal account, please contact Robert B. Allen (iwannt_allen at bellcore.com or rba at bellcore.com). 2. Accounts and Passwords Public access is available with the account name: iwan_pub Individual accounts and passwords were given to conference participants. Annotations made by iwan_pub may be edited by the electonic proceedings editor. 3. 
Remote Access Via Xwindows From schmidhu at informatik.tu-muenchen.de Thu Nov 11 07:37:22 1993 From: schmidhu at informatik.tu-muenchen.de (Juergen Schmidhuber) Date: Thu, 11 Nov 1993 13:37:22 +0100 Subject: dynamic variable binding Message-ID: <93Nov11.133729met.42241@papa.informatik.tu-muenchen.de> Recently, several people on this list mentioned dynamic variable binding. A general approach to dynamic variable binding needs to address *temporary* bindings in *time-varying* environments. The following reference shows how a system with time-varying inputs and ``fast weights'' can learn to create useful *temporary* bindings. @article{S92, author = {J. Schmidhuber}, title = { Learning to Control Fast-Weight Memories: An Alternative to Recurrent Nets}, journal={Neural Computation}, volume = {4}, number = {1}, pages={131-139}, year = {1992}} ------------------------------------------------- Juergen Schmidhuber Institut fuer Informatik, H2 Technische Universitaet Muenchen 80290 Muenchen, Germany schmidhu at informatik.tu-muenchen.de From RAMPO at SALERNO.INFN.IT Thu Nov 11 13:26:00 1993 From: RAMPO at SALERNO.INFN.IT (RAMPO@SALERNO.INFN.IT) Date: Thu, 11 NOV 93 18:26 GMT Subject: ICANN'94 Message-ID: <6849@SALERNO.INFN.IT> ----------------------------------------------------------- To CONNECTIONISTS mailing list. Someone of you may have received an outdated version of the ICANN'94 Registration Form and Call for Papers. This is the final official version. Please, do not take in account any previous one. ----------------------------------------------------------- -------------------------------------------------------------------- | ************************************************ | | * * | | * EUROPEAN NEURAL NETWORK SOCIETY * | | *----------------------------------------------* | | * R E G I S T R A T I O N F O R M * | | *----------------------------------------------* | | * I C A N N ' 94 - SORRENTO * | | * * | | ************************************************ | | | | ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS)| | is the fourth Annual Conference of ENNS and it comes after | | ICANN'91(Helsinki), ICANN'92 (Brighton), ICANN'93 (Amsterdam). | | It is co-sponsored by INNS, IEEE-NC, JNNS. | | It will take place at the Sorrento Congress Center, near Naples, | | Italy, on May 26-29, 1994. | |------------------------------------------------------------------| | R E G I S T R A T I O N F O R M | |------------------------------------------------------------------| | FAMILY NAME ____________________________________________________ | | FIRST NAME, MIDDLE INITIAL _____________________________________ | | AFFILIATION ____________________________________________________ | | MAILING ADDRESS ________________________________________________ | | ZIP CODE, CITY, COUNTRY ________________________________________ | | FAX ____________________________________________________________ | | PHONE __________________________________________________________ | | EMAIL __________________________________________________________ | | ACCOMPANIED BY _________________________________________________ | | MEMBERSHIP (Regular/ENNS member/Student) _______________________ | | ENNS MEMBERSHIP NO. 
____________________________________________ | | REGISTRATION FEE _______________________________________________ | | TUTORIAL FEE ___________________________________________________ | | DATE ______________________ SIGNATURE __________________________ | | | |------------------------------------------------------------------| | C O N F E R E N C E R E G I S T R A T I O N F E E S (in LIT) | |------------------------------------------------------------------| | MEMBERSHIP | Before 15/12/93 | Before 15/2/94 | On site | |--------------|-------------------|------------------|------------| | REGULAR | 650,000 | 800,000 | 950,000 | | ENNS MEMBER | 550,000 | 700,000 | 850,000 | | STUDENT | 200,000 | 250,000 | 300,000 | |------------------------------------------------------------------| | T U T O R I A L F E E S (in LIT) | |------------------------------------------------------------------| | | Before 15/2/94 | On site | |--------------|-------------------|-------------------------------| | REGULAR | 250,000 | 350,000 | | STUDENT | 100,000 | 150,000 | |------------------------------------------------------------------| | - Regular registrants become ENNS members. | | - Student registrants must provide an official certification of | | their status. | | - Pre-registration payment: Remittance in LIT to | | BANCO DI NAPOLI, Branch of FISCIANO, FISCIANO (SALERNO), ITALY| | on the Account of "Dipartimento di Fisica Teorica e S.M.S.A." | | clearly stating the motivation (Registration Fee for ICANN'94) | | and the attendee name. | | - On-site payment: cash. | | - The registration form together with a copy of the bank | | remittance must be mailed to: | | Prof. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, | | I-84081 Baronissi, Salerno, Italy | | Fax +39 89 822275 | | - Accepted papers will be included in the Proceedings only if | | the authors have registered in advance. | |------------------------------------------------------------------| | H O T E L R E S E R V A T I O N | |------------------------------------------------------------------| | The official travel agent is (fax for a booking form): | | RUSSO TRAVEL srl | | Via S. Antonio, I-80067 Sorrento, Italy | | Fax: +39 81 807 1367 Phone: +39 81 807 1845 | |------------------------------------------------------------------| | S U B M I S S I O N | |------------------------------------------------------------------| | Interested authors are cordially invited to present their work | | in one of the following "Scientific Areas" (A-Cognitive Science; | | B-Mathematical Models; C- Neurobiology; D-Fuzzy Systems; | | E-Neurocomputing), indicating also an "Application domain" | | (1-Motor Control;2-Speech;3-Vision;4-Natural Language; | | 5-Process Control;6-Robotics;7-Signal Processing; | | 8-Pattern Recognition;9-Hybrid Systems;10-Implementation). | | | | DEADLINE for CAMERA-READY COPIES: December 15, 1993. | | ---------------------------------------------------- | | Papers received after that date will be returned unopened. | | Papers will be reviewed by senior researchers in the field | | and the authors will be informed of their decision by the end | | of January 1994. Accepted papers will be included in the | | Proceedings only if the authors have registered in advance. | | | | SIZE: 4 pages, including figures, tables, and references. | | LANGUAGE: English. | | COPIES: submit a camera-ready original and 3 copies. | | (Accepted papers cannot be edited.) 
| | EMAIL where to send correspondence (not papers): | | iiass at salerno.infn.it | | ADDRESS where to send the papers: | | IIASS (Intl. Inst. Adv. Sci. Studies), ICANN'94, | | Via Pellegrino 19, Vietri sul Mare (Salerno), 84019 Italy. | | ADDRESS where to send correspondence (not papers): | | Prof. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, | | I-84081 Baronissi, Salerno, Italy - Fax +39 89 822275 | | EMAIL where to get LaTeX files: listserv at dist.unige.it | |------------------------------------------------------------------| | P R O G R A M C O M M I T T E E | |------------------------------------------------------------------| | | | I. Aleksander (UK), D. Amit (ISR), L. B. Almeida (P), | | S.I. Amari (J), E. Bizzi (USA), E. Caianiello (I), | | L. Cotterill (DK), R. De Mori (CAN), R. Eckmiller (D), | | F. Fogelman Soulie (F), W. Freeman (USA), S. Gielen (NL), | | S. Grossberg (USA), R. Hecht-Nielsen (USA), J. Herault (F), | | M. Jordan (USA), M. Kawato (J), T. Kohonen (SF), | | V. Lopez Martinez (E), R.J. Marks II (USA), P. Morasso (I), | | E. Oja (SF), T. Poggio (USA), H. Ritter (D), H. Szu (USA), | | L. Stark (USA), J. G. Taylor (UK), S. Usui (J), L. Zadeh (USA) | | | | Conference Chair: Prof. Eduardo R. Caianiello, Univ. Salerno, | | Italy, Dept. Theoretic Physics; email: iiass at salerno.infn.it | | | | Conference Co-Chair: Prof. Pietro G. Morasso, Univ. Genova, | | Italy, Dept. Informatics, Systems, Telecommunication; | | email: morasso at dist.unige.it; fax: +39 10 3532948 | |------------------------------------------------------------------| | T U T O R I A L S | |------------------------------------------------------------------| | 1) Introduction to neural networks (D. Gorse), 2) Advanced | | techniques in supervised learning (F. Fogelman Soulie`), | | 3) Advanced techniques for self-organizing maps (T. Kohonen) | | 4) Weightless neural nets (I. Aleksander), 5) Applications of | | neural networks (R. Hecht-Nielsen), 6) Neurobiological modelling | | (J.G. Taylor), 7) Information theory and neural networks | | (M. Plumbley). | | Tutorial Chair: Prof. John G. Taylor, King's College, London, UK | | fax: +44 71 873 2017 | |------------------------------------------------------------------| | T E C H N I C A L E X H I B I T I O N | |------------------------------------------------------------------| | Industrial Liaison Chair: Dr. Roberto Serra, Ferruzzi | | Finanziaria, Ravenna, fax: +39 544 35692/32358 | |------------------------------------------------------------------| ******************************************************************** ******************************************************************** ******************************************************************** ******************************************************************** -------------------------------------------------------------------- | ************************************************ | | * * | | * EUROPEAN NEURAL NETWORK SOCIETY * | | *----------------------------------------------* | | * C A L L F O R P A P E R S * | | *----------------------------------------------* | | * I C A N N ' 94 - SORRENTO * | | * * | | ************************************************ | | | | ICANN'94 (INTERNATIONAL CONFERENCE ON ARTIFICIAL NEURAL NETWORKS)| | is the fourth Annual Conference of ENNS and it comes after | | ICANN'91(Helsinki), ICANN'92 (Brighton), ICANN'93 (Amsterdam). | | It is co-sponsored by INNS, IEEE-NC, JNNS. 
| | It will take place at the Sorrento Congress Center, near Naples, | | Italy, on May 26-29, 1994. | | | |------------------------------------------------------------------| | S U B M I S S I O N | |------------------------------------------------------------------| | Interested authors are cordially invited to present their work | | in one of the following "Scientific Areas" (A-Cognitive Science; | | B-Mathematical Models; C- Neurobiology; D-Fuzzy Systems; | | E-Neurocomputing), indicating also an "Application domain" | | (1-Motor Control;2-Speech;3-Vision;4-Natural Language; | | 5-Process Control;6-Robotics;7-Signal Processing; | | 8-Pattern Recognition;9-Hybrid Systems;10-Implementation). | | | | DEADLINE for CAMERA-READY COPIES: December 15, 1993. | | ---------------------------------------------------- | | Papers received after that date will be returned unopened. | | Papers will be reviewed by senior researchers in the field | | and the authors will be informed of their decision by the end | | of January 1994. Accepted papers will be included in the | | Proceedings only if the authors have registered in advance. | | Allocation of accepted papers to oral or poster sessions will | | not be performed as a function of technical merit but only with | | the aim of coherently clustering different contributions in | | related topics; for this reason there will be no overlap of | | oral and poster sessions with the same denomination. Conference | | proceedings, that include all the accepted (and regularly | | registered) papers, will be distributed at the Conference desk | | to all regular registrants. | | | | SIZE: 4 pages, including figures, tables, and references. | | LANGUAGE: English. | | COPIES: submit a camera-ready original and 3 copies. | | (Accepted papers cannot be edited.) | | EMAIL where to send correspondence (not papers): | | iiass at salerno.infn.it | | ADDRESS where to send the papers: | | IIASS (Intl. Inst. Adv. Sci. Studies), ICANN'94, | | Via Pellegrino 19, Vietri sul Mare (Salerno), 84019 Italy. | | ADDRESS where to send correspondence (not papers): | | Prof. Roberto Tagliaferri, Dept. Informatics, Univ. Salerno, | | Fax +39 89 822275 | | EMAIL where to get LaTeX files: listserv at dist.unige.it | | | | In an accompanying letter, the following should be included: | | (i) title of the paper, (ii) corresponding author, | | (iii) presenting author, (iv) scientific area and application | | domain (e.g. "B-7"), (vi) preferred presentation (oral/poster), | | (vii) audio-visual requirements. | | | |------------------------------------------------------------------| | F O R M A T | |------------------------------------------------------------------| | The 4 pages of the manuscripts should be prepared on A4 white | | paper with a typewriter or letter- quality printer in | | one-column format, single-spaced, justified on both sides and | | printed on one side of the page only, without page numbers | | or headers/footers. Printing area: 120 mm x 195 mm. | | | | Authors are encouraged to use LaTeX. For LaTeX users, the LaTeX | | style-file and an example-file can be obtained via email as | | follows: | | - send an email message to the address "listserv at dist.unige.it" | | - the first two lines of the message must be: | | get ICANN94 icann94.sty | | get ICANN94 icann94-example.tex | | If problems arise, please contact the conference co-chair below. | | Non LaTeX users can ask for a specimen of the paper layout, | | to be sent via fax. 
| | | |------------------------------------------------------------------| | P R O G R A M C O M M I T T E E | |------------------------------------------------------------------| | The preliminary program committee is as follows: | | | | I. Aleksander (UK), D. Amit (ISR), L. B. Almeida (P), | | S.I. Amari (J), E. Bizzi (USA), E. Caianiello (I), | | L. Cotterill (DK), R. De Mori (CAN), R. Eckmiller (D), | | F. Fogelman Soulie (F), S. Gielen (NL), S. Grossberg (USA), | | J. Herault (F), M. Jordan (USA), M. Kawato (J), T. Kohonen (SF), | | V. Lopez Martinez (E), R.J. Marks II (USA), P. Morasso (I), | | E. Oja (SF), T. Poggio (USA), H. Ritter (D), H. Szu (USA), | | L. Stark (USA), J. G. Taylor (UK), S. Usui (J), L. Zadeh (USA) | | | | Conference Chair: Prof. Eduardo R. Caianiello, Univ. Salerno, | | Italy, Dept. Theoretic Physics; email: iiass at salerno.infn.it | | | | Conference Co-Chair: Prof. Pietro G. Morasso, Univ. Genova, | | Italy, Dept. Informatics, Systems, Telecommunication; | | email: morasso at dist.unige.it; fax: +39 10 3532948 | | | |------------------------------------------------------------------| | T U T O R I A L S | |------------------------------------------------------------------| | The preliminary list of tutorials is as follows: | | 1) Introduction to neural networks (D. Gorse), 2) Advanced | | techniques in supervised learning (F. Fogelman Soulie`), | | 3) Advanced techniques for self-organizing maps (T. Kohonen) | | 4) Weightless neural nets (I. Aleksander), 5) Applications of | | neural networks (R. Hecht-Nielsen), 6) Neurobiological modelling | | (J.G. Taylor), 7) Information theory and neural networks | | (M. Plumbley). | | Tutorial Chair: Prof. John G. Taylor, King's College, London, UK | | fax: +44 71 873 2017 | | | |------------------------------------------------------------------| | T E C H N I C A L E X H I B I T I O N | |------------------------------------------------------------------| | A technical exhibition will be organized for presenting the | | literature on neural networks and related fields, neural networks| | design and simulation tools, electronic and optical | | implementation of neural computers, and application | | demonstration systems. Potential exhibitors are kindly requested | | to contact the industrial liaison chair. | | | | Industrial Liaison Chair: Dr. Roberto Serra, Ferruzzi | | Finanziaria, Ravenna, fax: +39 544 35692/32358 | | | |------------------------------------------------------------------| | S O C I A L P R O G R A M | |------------------------------------------------------------------| | Social activities will include a welcome party, a banquet, and | | post-conference tours to some of the many possible targets of | | the area (participants will also have no difficulty to | | self-organize a la carte). | -------------------------------------------------------------------- From hayit at micro.caltech.edu Thu Nov 11 14:32:34 1993 From: hayit at micro.caltech.edu (Hayit Greenspan) Date: Thu, 11 Nov 93 11:32:34 PST Subject: NIPS_WORKSHOP Message-ID: <9311111932.AA13157@electra.caltech.edu> NIPS*93 - Post Meeting workshop: --------------------------------------------------------------- --------------------------------------------------------------- Learning in Computer Vision and Image Understanding - An advantage over classical techniques? 
Dec 4th, 1993 --------------------------------------------------------------- --------------------------------------------------------------- Organizer: Hayit Greenspan (hayit at micro.caltech.edu) --------- Dept. of Electrical Engineering California Institute of Technology Pasadena, CA 91125 Program Committee: T. Poggio(MIT), R. Chellappa(Maryland), P. Smyth(JPL) ----------------- Intended Audience: ------------------- Researchers in the field of Learning and in Vision and those interested in the combination of both for pattern-recognition, computer-vision and image-understanding tasks. Abstract: --------- There is an increasing interest in the area of Learning in Computer Vision and Image Understanding, both from researchers in the learning community and from researchers involved with the computer vision world. The field is characterized by a shift away from the classical, purely model-based computer vision techniques, towards data-driven learning paradigms for solving real-world vision problems. Classical computer-vision techniques have to a large extent neglected learning, which is an important component for robust and flexible vision systems. Meanwhile, there is real-world demand for automated image handling for scientific and commercial purposes, and a growing need for automated image understanding and recognition, in which learning can play a key role. Applications include remote-sensing imagery analysis, automated inspection, difficult recognition tasks such as face recognition, autonomous navigation systems which use vision as part of their sensors, and the field of automated imagery data-base analysis. Some of the issues for general discussion: o Where do classical computer-vision techniques fail - and what are the main issues to be solved? o What does learning mean in a vision context? Is it tuning an existing model (defined a priori) via its parameters, or trying to learn the model (extract most relevant features etc)? o Can existing learning techniques help in their present format or do we need vision-specific learning methods? For example, is learning in vision a practical prospect without one "biasing" the learning models with lots of prior knowledge ? The major emphasis of the workshop will be on integrating viewpoints from a variety of backgrounds (theory, applications, pattern recognition, computer-vision, learning, neurobiology). The goal is to forge some common ground between the different perspectives, and arrive at a set of open questions and challenges in the field. Program: ---------- Morning session ---------------- 7:30-7:35 Introduction to the workshop 7:35-8:00 Keynote speaker: Poggio/Girosi (MIT) - Learning and Vision 8:00-8:15 Combining Geometric Reasoning and Artificial Neural Networks for Machine Vision Dean Pomerleau (CMU) 8:15-8:45 Discussion: o AAAI forum on Machine Learning in Computer Vision- relevant issues, Rich Zemel (Salk Institute) o What is going on in the vision and learning worlds 8:45-8:55 Combining classical and learning-based approaches into a recognition framework for texture and shape Hayit Greenspan (Caltech) 8:55-9:05 Visual Processing: Bag of tricks or Unified Theory? Jonathan Marshall (Univ. of N. Carolina) 9:05-9:15 Learning in 3D object recognition- An extreme approach Bartlett Mel (Caltech) 9:15-9:30 Discussion: o Learning in the 1D vs. 2D vs. 
3D worlds Afternoon session ----------------- 4:30-4:45 The window registration problem in unsupervised learning of visual features Eric Saund (XEROX) 4:45-4:55 Unsupervised learning of object models Chris Williams (Toronto) 4:55-5:15 Discussion: o The role of unsupervised learning in vision 5:15-5:30 Network architectures and learning algorithms for word reading Yann Le Cun (AT&T) 5:30-5:40 Challenges for vision and learning in the context of large scientific image databases Padhraic Smyth (JPL) 5:40-5:50 Elastic Matching and learning for face recognition Joachim Buhmann (BONN) 5:50-6:30 Discussion: o What are the difficult challenges in vision applications? o Summary of the main research objectives in the field today, as discussed in the workshop. ------------------------------------------------------------------------------- From plunkett at dragon.psych Thu Nov 11 13:43:43 1993 From: plunkett at dragon.psych (plunkett (Kim Plunkett)) Date: Thu, 11 Nov 93 18:43:43 GMT Subject: No subject Message-ID: <9311111843.AA10257@dragon.psych.pdp> UNIVERSITY OF OXFORD MRC BRAIN AND BEHAVIOUR CENTRE McDONNELL-PEW CENTRE FOR COGNITIVE NEUROSCIENCE SUMMER SCHOOL ON CONNECTIONIST MODELLING Department of Experimental Psychology University of Oxford 11-23 September 1994 Applications are invited for participation in a 2-week residential Summer School on techniques in connectionist modelling of cognitive and biological phenomena. The course is aimed primarily at researchers who wish to exploit neural network models in their teaching and/or research. It will provide a general introduction to connectionist modelling through lectures and exercises on PCs. The instructors with primary responsibility for teaching the course are Kim Plunkett and Edmund Rolls. No prior knowledge of computational modelling will be required, though simple word processing skills will be assumed. Participants will be encouraged to start work on their own modelling projects during the Summer School. The Summer School is sponsored (jointly) by the University of Oxford McDonnell-Pew Centre for Cognitive Neuroscience and the MRC Brain and Behaviour Centre. The cost of participation in the summer school is 500 pounds, which includes accommodation (bed and breakfast at St. John's College) and summer school registration. Participants will be expected to cover their own travel and meal costs. A small number of graduate student scholarships may be available. Applicants should indicate whether they wish to be considered for a graduate student scholarship but are advised to seek their own funding as well, since in previous years the number of graduate student applications has far exceeded the number of scholarships available. If you are interested in participating in the Summer School, please contact: Mrs. Sue King Department of Experimental Psychology University of Oxford South Parks Road Oxford OX1 3UD Tel: (0865) 271353 Email: sking at uk.ac.oxford.psy Please send a brief description of your background with an explanation of why you would like to attend the Summer School (one page maximum) no later than 1 April 1994. From shastri at ICSI.Berkeley.EDU Thu Nov 11 20:54:11 1993 From: shastri at ICSI.Berkeley.EDU (Lokendra Shastri) Date: Thu, 11 Nov 1993 17:54:11 PST Subject: Dynamic bindings Message-ID: <9311120154.AA12851@icsib20.ICSI.Berkeley.EDU> Several recent messages about the binding problem have mentioned solutions based on temporal synchrony.
Those interested in finding out more about this proposal and its implications for cognitive models may want to take a look at the following article that appears in the recent issue of "Behavioral and Brain Sciences": From simple associations to systematic reasoning: a connectionist encoding of rules, variables and dynamic bindings using temporal synchrony. Shastri, L. and V. Ajjanagadde. BBS, 16 (3) 1993. A number of issues raised in the discussion are covered in the article, the accompanying commentaries, and the authors' response. The proposed solution leads to a number of predictions about the constraints on automatic (reflexive) processing, some of which are: 1. A large number of instances of relations/schemas/events/.... can be active simultaneously. In production system (PS) terms: the working memory capacity underlying automatic/reflexive processing is very large. 2. A very large number of systematic mappings between relations/schemas/.. may be computed simultaneously. In PS terms: a large number of rules (even those containing variables) may fire in parallel. BUT 3. The maximum number of entities (individuals) that can be referenced by instances active in the working memory is small (~10). 4. Multiple instances of the same schema/relation... may be active simultaneously, but no schema/relation... may be instantiated more than ~3 times during an episode of reflexive reasoning. -- Shastri Lokendra Shastri International Computer Science Institute 1947 Center Street, Suite 600 Berkeley, CA 94707-1105 shastri at icsi.berkeley.edu (510) 642-4274 ext 310 From harnad at Princeton.EDU Thu Nov 11 22:36:48 1993 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 11 Nov 93 22:36:48 EST Subject: Artificial Life vs. Artificial Intelligence: Conference Message-ID: <9311120336.AA17606@clarity.Princeton.EDU> From inmanh at cogs.susx.ac.uk Fri Nov 12 04:19:00 1993 From: inmanh at cogs.susx.ac.uk (Inman Harvey) Date: Fri, 12 Nov 93 09:19 GMT Subject: SAB94 CFP Message-ID: ============================================================================== Conference Announcement and FINAL Call For Papers FROM ANIMALS TO ANIMATS Third International Conference on Simulation of Adaptive Behavior (SAB94) Brighton, UK, August 8-12, 1994 The object of the conference is to bring together researchers in ethology, psychology, ecology, cybernetics, artificial intelligence, robotics, and related fields so as to further our understanding of the behaviors and underlying mechanisms that allow animals and, potentially, robots to adapt and survive in uncertain environments. The conference will focus particularly on well-defined models, computer simulations, and built robots in order to help characterize and compare various organizational principles or architectures capable of inducing adaptive behavior in real or artificial animals. Contributions treating any of the following topics from the perspective of adaptive behavior will receive special emphasis.
Individual and collective behavior
Neural correlates of behavior
Perception and motor control
Motivation and emotion
Action selection and behavioral sequences
Ontogeny, learning and evolution
Internal world models
Applied adaptive behavior
Autonomous robots
Hierarchical and parallel organizations
Emergent structures and behaviors
Problem solving and planning
Goal directed behavior
Neural networks and evolutionary computation
Characterization of environments and cognitive processes

Authors should make every effort to suggest implications of their work for both natural and artificial animals. Papers which do not deal explicitly with adaptive behavior will be rejected.

Submission Instructions
Authors are requested to send five copies (hard copy only) of a full paper to the Program Chair (Dave Cliff). Papers should not exceed 10 pages (excluding the title page), with 1 inch margins all around, and no smaller than 10 pt (12 pitch) type (Times Roman preferred). A LaTeX template is available by email; see below. This is the same format as SAB90 and SAB92. Each paper must include a title page containing the following: (1) Full names, postal addresses, phone numbers, email addresses (if available), and fax numbers for each author, (2) A 100-200 word abstract, (3) The topic area(s) in which the paper could be reviewed (see list above). Camera-ready versions of the papers, in two-column format, will be required after acceptance. Computer, video, and robotic demonstrations are also invited. Please contact Phil Husbands to make arrangements for demonstrations. Other program proposals will also be considered.

Conference committee
Conference Chairs:
Philip HUSBANDS, School of Cognitive and Comp. Sciences, University of Sussex, Brighton BN1 9QH, UK; philh at cogs.susx.ac.uk
Jean-Arcady MEYER, Groupe de Bioinformatique, Ecole Normale Superieure, 46 rue d'Ulm, 75230 Paris Cedex 05; meyer at wotan.ens.fr
Stewart WILSON, The Rowland Institute for Science, 100 Cambridge Parkway, Cambridge, MA 02142, USA; wilson at smith.rowland.org
Program Chair: David CLIFF, School of Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, UK; e-mail: davec at cogs.susx.ac.uk
Financial Chair: P. Husbands, H. Roitblat
Local Arrangements: I. Harvey, P. Husbands

Program Committee
M. Arbib, USA; R. Arkin, USA; R. Beer, USA; A. Berthoz, France; L. Booker, USA; R. Brooks, USA; P. Colgan, Canada; T. Collett, UK; H. Cruse, Germany; J. Delius, Germany; J. Ferber, France; N. Franceschini, France; S. Goss, Belgium; J. Halperin, Canada; I. Harvey, UK; I. Horswill, USA; A. Houston, UK; L. Kaelbling, USA; H. Klopf, USA; L-J. Lin, USA; P. Maes, USA; M. Mataric, USA; D. McFarland, UK; G. Miller, UK; R. Pfeifer, Switzerland; H. Roitblat, USA; J. Slotine, USA; O. Sporns, USA; J. Staddon, USA; F. Toates, UK; P. Todd, USA; S. Tsuji, Japan; W. Uttal, USA; D. Waltz, USA.

Official Language: English
Publisher: MIT Press/Bradford Books

Conference Information
The conference will be held in the centre of Brighton, on the South Coast. This is a resort town, less than one hour from London, only 30 mins from London Gatwick airport. A number of invited speakers will be giving tutorial talks in subject areas covered by the conference. Through sponsorship, conference fees will be kept to a minimum and there should also be some travel grants available. We have made arrangements for the Proceedings to be available at the conference, which requires efficient processing of submitted papers; hence, if possible, first submissions should be made using the LaTeX template available by email.
Email Information Email sab94 at cogs.susx.ac.uk with subject line "Subscribe mail-list" to be put on our mailing list and be sent further information about conference arrangements when available. Email sab94 at cogs.susx.ac.uk with subject line "LaTex template" to be sent LaTex template for camera-ready and for initial submissions. Important Dates =============== JAN 5, 1994: Submission deadline MAR 10: Notification of acceptance or rejection APR 10: Camera ready revised versions due MAY 1: Early registration deadline JUL 8: Regular registration deadline AUG 8-12: Conference dates General queries to: sab94 at cogs.susx.ac.uk ============================================================================== From george at psychmips.york.ac.uk Mon Nov 15 08:28:07 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Mon, 15 Nov 93 13:28:07 +0000 (GMT) Subject: CHI '94 Workshop Message-ID: CHI '94 Workshops Workshops provide an opportunity for small groups of participants who share a technical interest to meet for 1 to 2 days of dialogue on their areas of common concern. Workshops are different than paper sessions, panels and posters, in that the focus of workshops is on group discussion of topics rather than presentations of individuals' positions with follow-up questions. All workshops require pre-conference activity by the participants. CHI '94 offers 10 workshops covering a range of research and applied topics. These workshops will be held Sunday, April 24 and Monday, April 25. Results of the workshop can be presented both during and after the conference. During the conference, it is possible to present results and expand discussion by holding a Special Interest Group Meeting (see information on SIGs in this program). After the conference, each organizer provides an article summarizing the workshop for publication in the SIGCHI Bulletin. Several SIGCHI workshops have further presented their contents by publishing books and journal articles. Participation in a Workshop: To ensure a small enough group for open interchange, each workshop is limited to a maximum of 20 participants including the organizers. Participants are chosen before the conference on the basis of position papers sent to the workshop organizers. Unless stated otherwise in the individual workshop descriptions below, the position papers are 2-3 page statements on the workshop theme. All position papers are due to all workshop organizers by February 18th, 1994. Submitters will be notified of their selection by March 4, 1994 and must confirm their participation by March 18, 1994. Fees: The fees are $25 for a 1-day workshop, $40 for a 1.5-day workshop, and $50 for a 2-day workshop. ************************************************************************* Pattern Recognition in Human-Computer Interaction: A Viable Approach? All day Sunday, April 24 and Monday, April 25 Janet Finlay University of Huddersfield, UK Alan Dix University of York, UK George Bolt University of York, UK In 1991, a SIGCHI workshop entitled "Neural Networks and Pattern Recognition in Human-Computer Interaction" was held, involving researchers using novel techniques, such as machine learning and neural networks, on human-computer interaction (HCI) problems. Three years on it is still unclear whether such an approach is viable for realistic applications. This workshop will address this question, bringing together researchers in pattern recognition and HCI researchers who are investigating problems involving the analysis of traces of interaction (e.g. 
evaluation, user modelling, error diagnosis). The emphasis of the workshop will be active research: Participants will attempt to apply pattern recognition techniques to derive solutions to identified HCI problems. Its aim is twofold: to initiate interdisciplinary research in the area and to consider the scope of these methods. This will be a two-day workshop, limited to 16 participants who will be involved either in pattern recognition (statistical, inductive or neural) or in relevant HCI research, but not necessarily both. Position statements (2-3 pages) from pattern recognition researchers should describe the technique, its strengths and limitations, and any computer tools. HCI researchers should identify their problem area and the pattern recognition issues it raises. Participants will begin discussion and establish research "teams" prior to the workshop itself, and applicants should be prepared to take part in this preliminary work. A book based on the workshop activities is planned and participants will be asked to submit papers for inclusion at a later date. Contact: Dr. Janet Finlay School of Computing and Mathematics University of Huddersfield Queensgate Huddersfield, HD1 3DH, UK Voice: +44-484 472147 Answering Machine: +44-484 649108 Fax: +44-484 421106 janet at zeus.hud.ac.uk ****************************************************************** From B344DSL at UTARLG.UTA.EDU Tue Nov 16 12:24:07 1993 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: 16 Nov 1993 12:24:07 -0500 (CDT) Subject: Conference announcement (Turkey) Message-ID: CALL FOR PAPERS TAINN III The Third Turkish Symposium on ARTIFICIAL INTELLIGENCE & NEURAL NETWORKS June 22-24, 1994, METU, Ankara, Turkey Organized by Middle East Technical University & Bilkent University in cooperation with Bogazici University, TUBITAK INNS Turkey SIG, IEEE Computer Society Turkey Chapter, ACM SIGART Turkey Chapter, Conference Chair: Nese Yalabik (METU), nese at vm.cc.metu.edu.tr Program Committee Co-chairs: Cem Bozsahin (METU), bozsahin at vm.cc.metu.edu.tr Ugur Halici (METU), halici at vm.cc.metu.edu.tr Kemal Oflazer (Bilkent), ko at cs.bilkent.edu.tr Organization Committee Chair: Gokturk Ucoluk (METU) , ucoluk at vm.cc.metu.edu.tr Program Comittee: L. Akin (Bosphorus), V. Akman (Bilkent), E. Alpaydin (Bosphorus), S.I. Amari (Tokyo), I. Aybay (METU), B. Buckles (Tulane), G. Carpenter (Boston), I. iekli (Bilkent), C. Dagli (Missouri-Rolla), D.Davenport (Bilkent), G. Ernst (Case Western), A. Erkmen (METU) N. Findler (Arizona State), E. Gelenbe (Duke), M. Guler (METU), A. Guvenir (Bilkent), S. Kocabas (TUBITAK), R. Korf (UCLA), S. Kuru (Bosphorus), D. Levine (Texas Arlington), R. Lippmann (MIT), K. Narendra (Yale), H. Ogmen (Houston), U. Sengupta (Arizona State), R. Parikh (CUNY), F. Petry (Tulane), C. Say (Bosphorus), A. Yazici (METU), G. Ucoluk (METU), P. Werbos (NSF), N. Yalabik (METU), L. Zadeh (California), W. Zadrozny (IBM TJ Watson) Organization Committee: A. Guloksuz, O. Izmirli, E. Ersahin, I. Ozturk, . 
Turhan

Scope of the Symposium: * Commonsense Reasoning * Expert Systems * Knowledge Representation * Natural Language Processing * AI Programming Environments and Tools * Automated Deduction * Computer Vision * Speech Recognition * Control and Planning * Machine Learning and Knowledge Acquisition * Robotics * Social, Legal, Ethical Issues * Distributed AI * Intelligent Tutoring Systems * Search * Cognitive Models * Parallel and Distributed Processing * Genetic Algorithms * NN Applications * NN Simulation Environments * Fuzzy Logic * Novel NN Models * Theoretical Aspects of NN * Pattern Recognition * Other Related Topics on AI and NN

Paper Submission: Submit five copies of full papers (in English or Turkish), limited to 10 pages, by January 31, 1994 to: TAINN III, Cem Bozsahin, Department of Computer Engineering, Middle East Technical University, 06531, Ankara, Turkey. Authors will be notified of acceptance by April 1, 1994. Accepted papers will be published in the symposium proceedings. The conference will be held on the campus of Middle East Technical University (METU) in Ankara, Turkey. A limited number of free lodging facilities will be provided on campus for student participants. If there is sufficient interest, sightseeing tours will be organized to the nearby Cappadocia region, known for its mystical underground cities and fairy chimneys, to the archaeological remains at Alacahoyuk, the capital of the Hittite empire, and to local museums.

For further information and announcements contact: TAINN, Ugur Halici, Department of Electrical Engineering, Middle East Technical University, 06531, Ankara, Turkey. EMAIL: TAINN at VM.CC.METU.EDU.TR (AFTER JANUARY 1994) HALICI at VM.CC.METU.EDU.TR (BEFORE)
---------------------------------------------------------------------
YOUR HELP IN DISTRIBUTING THIS ANNOUNCEMENT ON OTHER BULLETIN BOARDS AND LISTS WITH AN AUDIENCE INTERESTED IN ARTIFICIAL INTELLIGENCE OR NEURAL NETWORKS IS HIGHLY APPRECIATED.

From jagota at cs.Buffalo.EDU Mon Nov 15 20:28:36 1993 From: jagota at cs.Buffalo.EDU (Arun Jagota) Date: Mon, 15 Nov 93 20:28:36 EST Subject: NIPS workshop schedule Message-ID: <9311160128.AA18065@pegasus.cs.Buffalo.EDU>

NIPS*93 Workshop: Neural Network Methods for Optimization Problems
December 4, Vail, CO, USA
Intended Audience: Researchers interested in Connectionist solution of optimization problems.
Organizer: Arun Jagota (jagota at cs.buffalo.edu)

Program: Ever since the work of Hopfield and Tank, neural networks have found increasing use for the approximate solution of hard optimization problems. The successes in the past have, however, been limited when compared to traditional methods. In this workshop, speakers will present state-of-the-art research on neural network methods for optimization problems, ranging from specific algorithms and applications to general methodologies, theoretical issues, experimental studies, and comparisons with conventional approaches. We hope to examine the strengths and weaknesses of current algorithms and discuss potential areas for improvement, to exchange views and computational experience on the merits and deficiencies of particular algorithms, to study some of the broader theoretical and methodological issues, and to discuss significant applications and parallel implementation experiences.
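As background for readers who have not met the Hopfield-Tank style of formulation mentioned above, the sketch below (Python) shows the basic idea in its simplest form: binary units perform asynchronous descent on a quadratic energy function, used here as a local-search heuristic for MaxCut on a toy graph. The graph, the restart scheme, and all parameter values are invented for illustration; this is not any of the algorithms to be presented at the workshop.

import numpy as np

def hopfield_maxcut(adj, n_restarts=20, seed=0):
    # Spins s_i in {-1,+1}; Hopfield weights W = -adj (symmetric, zero diagonal).
    # The energy E(s) = -1/2 s^T W s equals (#uncut edges) - (#cut edges), so
    # asynchronous threshold updates, which never increase E, tend to enlarge the cut.
    rng = np.random.default_rng(seed)
    n = adj.shape[0]
    W = -adj.astype(float)
    best = (-1, None)
    for _ in range(n_restarts):
        s = rng.choice([-1.0, 1.0], size=n)          # random initial state
        changed = True
        while changed:                                # stops at a local energy minimum
            changed = False
            for i in rng.permutation(n):
                h = W[i] @ s                          # local field at unit i
                new = s[i] if h == 0 else (1.0 if h > 0 else -1.0)
                if new != s[i]:                       # flip only when energy strictly decreases
                    s[i] = new
                    changed = True
        cut = int(((1 - np.outer(s, s)) * adj).sum() // 4)   # number of edges crossing the partition
        if cut > best[0]:
            best = (cut, s.copy())
    return best

if __name__ == "__main__":
    # toy example: a 5-cycle, whose maximum cut contains 4 of the 5 edges
    n = 5
    adj = np.zeros((n, n), dtype=int)
    for i in range(n):
        adj[i, (i + 1) % n] = adj[(i + 1) % n, i] = 1
    cut, partition = hopfield_maxcut(adj)
    print("cut size:", cut, "partition:", partition)

On small instances such restarts usually reach the optimum; how far energy-descent heuristics of this kind can be pushed on hard instances, and how they compare with conventional methods, is exactly the question the workshop addresses.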
A fair amount of time is reserved in the afternoon session for informal discussion and audience participation on the above topics (see below). Morning Session: 7:30 - 8:00 N. Peterfreund, Technion Trajectory Control of Convergent Networks with Applications to TSP 8:00 - 8:30 Bruce Rosen, UT San Antonio Training Feedforward NN Quickly and Accurately with Very Fast Simulated Annealing Methods 8:30 - 9:00 Tal Grossman, Los Alamos National Lab A Neural Network Approach to the General Minimal Cover Problem 9:00 - 9:30 Eric Mjolsness, Yale Algebraic and Grammatical Design of Relaxation Nets Afternoon Session: 4:30 - 5:00 Yoshiyasu Takefuji, Case Western Reserve University Neural Computing for Optimization and Combinatorics 5:00 - 5:30 Arun Jagota, Memphis State Report on the DIMACS Combinatorial Optimization Challenge: A Comparison of Neural Network Methods With Several Others 5:30 - 6:00 Daniel S. Levine, UT Arlington Optimality in Biological and Artificial Neural Networks 6:00 - 6:25 Informal Discussion 6:25 - 6:30 Arun Jagota Closing Remarks Arun Jagota From cga at ai.mit.edu Tue Nov 16 15:08:16 1993 From: cga at ai.mit.edu (Christopher G. Atkeson) Date: Tue, 16 Nov 93 15:08:16 EST Subject: NIPS Workshop Message-ID: <9311162008.AA01469@mulch> NIPS*93 Workshop: Memory-based Methods for Regression and Classification ================= Intended Audience: Researchers interested in memory-based methods, locality in learning ================== Organizers: =========== Chris Atkeson Tom Dietterich Andrew Moore Dietrich Wettschereck cga at ai.mit.edu tgd at cs.orst.edu awm at cs.cmu.edu wettscd at cs.orst.edu Program: ======== Local, memory-based learning methods store all or most of the training data and predict new points by analyzing nearby training points (e.g., nearest neighbor, radial-basis functions, local linear methods). The purpose of this workshop is to determine the state of the art in memory-based learning methods and to assess current progress on important open problems. Specifically, we will consider such issues as how to determine distance metrics and smoothing parameters, how to regularize memory-based methods, how to obtain error bars on predictions, and how to scale to large data sets. We will also compare memory-based methods with methods (such as multi-layer perceptrons) that construct global decision boundaries or regression surfaces, and we will explore current theoretical models of local learning methods. By the close of the workshop, we will have assembled an agenda of open problems requiring further research. This workshop meets both days. Friday will be devoted primarily to classification tasks, and Saturday will be devoted primarily to regression tasks. Please send us email if you would like to present something. 
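For readers outside the area, the following minimal sketch (Python, with invented toy data and parameter settings) shows the two simplest memory-based predictors, k-nearest-neighbour classification and Nadaraya-Watson kernel regression; the distance metric, the value of k, and the kernel bandwidth are precisely the smoothing choices listed above as open questions.

import numpy as np

def knn_classify(X_train, y_train, x, k=3):
    # Majority vote among the k stored examples nearest to the query x.
    d = np.linalg.norm(X_train - x, axis=1)         # Euclidean distance metric
    nearest = np.argsort(d)[:k]
    return int(np.bincount(y_train[nearest]).argmax())

def kernel_regression(X_train, y_train, x, bandwidth=0.5):
    # Nadaraya-Watson estimate: kernel-weighted average of the stored targets.
    d = np.linalg.norm(X_train - x, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)         # Gaussian kernel; bandwidth = smoothing parameter
    return float(w @ y_train / (w.sum() + 1e-12))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # toy two-class problem: clusters centred at (+2, 0) and (-2, 0)
    Xc = rng.normal(size=(40, 2)) + np.array([[2.0, 0.0]] * 20 + [[-2.0, 0.0]] * 20)
    yc = np.array([0] * 20 + [1] * 20)
    print("predicted class of (1.5, 0):", knn_classify(Xc, yc, np.array([1.5, 0.0])))
    # toy regression problem: noisy samples of sin(x)
    Xr = rng.uniform(0.0, 2.0 * np.pi, size=(100, 1))
    yr = np.sin(Xr[:, 0]) + 0.1 * rng.normal(size=100)
    print("estimate of sin(pi/2):", kernel_regression(Xr, yr, np.array([np.pi / 2])))

Nothing is fitted in advance: all training examples are stored and each prediction is computed from the examples near the query, which is what distinguishes these methods from models such as multi-layer perceptrons that build a single global decision boundary or regression surface.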
Current schedule: Friday: 7:30-7:45 Introduction (Dietterich) 7:45-8:15 Leon Bottou 8:15-8:30 Discussion 8:30-9:00 David Lowe 9:00-9:30 Discussion 4:30-5:00 Patrice Simard 5:00-5:15 Discussion 5:15-5:45 Dietrich Wettschereck 5:45-6:15 John Platt 6:15-6:30 Discussion Saturday: 7:30-8:00 Trevor Hastie 8:00-8:15 Discussion 8:15-8:45 Doyne Farmer 8:45-9:00 Discussion 9:00-9:30 5-minute descriptions by other participants 4:30-5:00 Chris Atkeson 5:00-5:30 Frederico Girosi 5:30-6:00 Andrew Moore 6:00-6:30 Discussion From jordan at psyche.mit.edu Tue Nov 16 15:15:42 1993 From: jordan at psyche.mit.edu (Michael Jordan) Date: Tue, 16 Nov 93 15:15:42 EST Subject: faculty opening at MIT Message-ID: The MIT Department of Brain and Cognitive Sciences anticipates making a tenure-track appointment in computational brain and cognitive science at the ASSISTANT PROFESSOR level. Candidates should have a strong mathematical background and an active research interest in the mathematical modeling of specific neural or cognitive phenomena. Individuals whose research focuses on learning and memory are especially encouraged to apply. Responsibilities include graduate and undergraduate teaching and research supervision. Applications should include a brief cover letter stating the candidate's research and teaching interests, a vita, three letters of recommendation and representative reprints. Send applications by January 15, 1994 to: Michael I. Jordan, Chair Faculty Search Committee E10-018 MIT Cambridge, MA 02139 Qualified women and minority candidates are especially encouraged to apply. MIT is an Affirmative Action/Equal Opportunity employer. From fellous at rana.usc.edu Tue Nov 16 17:24:57 1993 From: fellous at rana.usc.edu (Jean-Marc Fellous) Date: Tue, 16 Nov 93 14:24:57 PST Subject: Please forward this to connectionists-ml and any other appropriate Message-ID: <9311162224.AA14052@rana.usc.edu> Could you please post this announcement .... ASSISTANT/ASSOCIATE PROFESSOR BIOMEDICAL ENGINEERING/NEUROSCIENCE UNIVERSITY OF SOUTHERN CALIFORNIA A tenure-track faculty position is available in the Department of Biomedical Engineering at the University of Southern California. This is a new position, created to strengthen the concentration of neuroscience research within the Department. Applicants should be capable of establishing an externally funded research program that includes a rigorous, quantitative approach to functional aspects of the nervous system. A combined theoretical and experimental approach is preferred, though applicants with purely theoretical research programs will be considered. Multiple opportunities for interdisciplinary research are fostered by USC academic and research programs such as the Biomedical Simulations Resource, the Program in Neuroscience, and the Center for Neural Computing. Send curriculum vitae, three letters of recommendation, and a description of current and future research by January 1, 1994 to Search Committee, Department of Biomedical Engineering, 530 Olin Hall, University of Southern California, Los Angeles, CA 90089-1451. From mm at SANTAFE.EDU Tue Nov 16 17:52:02 1993 From: mm at SANTAFE.EDU (Melanie Mitchell) Date: Tue, 16 Nov 93 15:52:02 MST Subject: paper available Message-ID: <9311162252.AA25934@wupatki> The following paper (available via anonymous ftp) may be of interest to some on this list: Evolving Cellular Automata to Perform Computations: Mechanisms and Impediments, by Melanie Mitchell (Santa Fe Institute), James P. Crutchfield (UC Berkeley), and Peter T. Hraber (Santa Fe Institute).
Santa Fe Institute Working Paper 93-11-071. Submitted to Physica D, October 18, 1993.

Abstract: We present results from experiments in which a genetic algorithm was used to evolve cellular automata (CAs) to perform a particular computational task---one-dimensional density classification. We look in detail at the evolutionary mechanisms producing the GA's behavior on this task and the impediments faced by the GA. In particular, we identify four ``epochs of innovation'' in which new CA strategies for solving the problem are discovered by the GA, describe how these strategies are implemented in CA rule tables, and identify the GA mechanisms underlying their discovery. The epochs are characterized by a breaking of the task's symmetries on the part of the GA. The symmetry breaking results in a short-term fitness gain but ultimately prevents the discovery of the most highly fit strategies. We discuss the extent to which symmetry breaking and other impediments are general phenomena in any GA search.

To obtain an electronic copy of this paper (note that the paper, 44 pages, is broken up into two halves that must be retrieved separately):

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get sfi-93-11-071.part1.ps.Z
get sfi-93-11-071.part2.ps.Z
quit

Then at your system:

uncompress sfi-93-11-071.part1.ps.Z
uncompress sfi-93-11-071.part2.ps.Z
lpr -P sfi-93-11-071.part1.ps
lpr -P sfi-93-11-071.part2.ps

If you cannot obtain an electronic copy, send a request for a hard copy to dlu at santafe.edu.

From mm at santafe.edu Tue Nov 16 18:28:12 1993 From: mm at santafe.edu (Melanie Mitchell) Date: Tue, 16 Nov 93 16:28:12 MST Subject: paper available Message-ID: <9311162328.AA26121@wupatki> The following paper (available via anonymous ftp) may be of interest to readers of this list: Genetic Algorithms and Artificial Life, by Melanie Mitchell (Santa Fe Institute) and Stephanie Forrest (University of New Mexico and Santa Fe Institute). Santa Fe Institute Working Paper 93-11-072. To appear in _Artificial Life_.

Abstract: Genetic algorithms are computational models of evolution that play a central role in many artificial-life models. We review the history and current scope of research on genetic algorithms in artificial life, using illustrative examples in which the genetic algorithm is used to study how learning and evolution interact, and to model ecosystems, immune systems, cognitive systems, and social systems. We also outline a number of open questions and future directions for genetic algorithms in artificial-life research.

To obtain an electronic copy of this paper:

ftp ftp.santafe.edu
login: anonymous
password:
cd /pub/Users/mm
binary
get sfi-93-11-072.ps.Z
quit

Then at your system:

uncompress sfi-93-11-072.ps.Z
lpr -P sfi-93-11-072.ps

If you cannot obtain an electronic copy, send a request for a hard copy to dlu at santafe.edu.

From tds at ai.mit.edu Tue Nov 16 19:52:23 1993 From: tds at ai.mit.edu (Terence D. Sanger) Date: Tue, 16 Nov 93 19:52:23 EST Subject: Principal Components algorithms Message-ID: <9311170052.AA18287@rice-chex> Dear Connectionists, Recently, several people have asked me about the relationship between Kung&Diamantaras's APEX algorithm and the GHA algorithm I proposed in 1988.
I can summarize my view of algorithms for Principal Components Analysis (PCA) by grouping them into 3 categories: 1) Algorithms to find the first principal component 2) Algorithms to find a set of vectors which spans the subspace of the first m principal components 3) Algorithms which find the first m principal components (eigenvectors) directly (I can provide a fairly extensive bibliography of these to anyone who is interested.) APEX and GHA (among many others) are in category 3. Most algorithms in category 3 make use of an algorithm from category 1 (such as Oja's method) combined with a deflation procedure which removes components from the input as they are discovered by the network. In other words, an algorithm to find the first principal component will actually find the second one once the first component is removed. Continuing this procedure allows all components to be extracted. Note that pure "Hebbian" algorithms usually fall into category 1. I propose the following hypothesis: "All algorithms for PCA which are based on a Hebbian learning rule must use sequential deflation to extract components beyond the first." All category 3 algorithms that I know about conform to the hypothesis. The easiest example is GHA, which was specifically designed as a differential-equation implementation of sequential deflation. It turns out that APEX also conforms to the hypothesis, despite the apparent use of lateral connections instead of deflation. Some fairly straightforward mathematics shows that the APEX equations can be rewritten in such a way that they are almost equivalent to the GHA equations. (Instructions for obtaining a Latex file with the derivation are below.) The use of lateral connections in APEX may indicate a more "biologically plausible" implementation than the original GHA formulation, but the performance of the two algorithms should be almost identical. Another apparently different algorithm which nevertheless conforms to the hypothesis is Brockett's. It is possible to think of GHA as a "thresholded" form of Brockett's algorithm. (Instructions for Latex file are below.) I appreciate all comments/opinions/counterexamples, so please feel free! Regards to all, Terry Sanger (tds at ai.mit.edu) Instructions for retrieving latex document: ftp ftp.ai.mit.edu login: anonymous password: your-net-address cd pub/sanger-papers get apex.tex get brockett.tex quit latex apex latex brockett lpr apex.dvi lpr brockett.dvi From shs at yantra.ernet.in Tue Nov 16 05:02:42 1993 From: shs at yantra.ernet.in (S H Srinivasan) Date: Tue, 16 Nov 93 10:52:42+050 Subject: dynamic binding Message-ID: <9311160552.AA01283@yantra.noname> There is yet another solution to dynamic binding which I have been working on. Conceptually we can solve the problem of dynamic binding if we use binary *vector* of activations for each *unit* instead of the usual binary ({0,1}) activations. Taking the example of Graham Smith assume that there are four features - red, blue, square, and triangle. Also assume that we want to represent two patterns simultaneously. Each unit now takes activation in {0,1}^{2} so that the activation pattern ( (1 0) (0 1) (1 0) (0 1)) represents "red square and blue triangle" and ( (0 0) (1 1) (0 1) (1 0)) represents "blue triangle and blue square". As Ron Sun observes: > The point of doing dynamic binding is to be able to use the > binding in other processing tasks, for example, high level > reasoning, especially in rule-based (or rule-like) reasoning. 
> In such reasoning, bindings are constructed and deconstructed, > passed around, checked against some constraints, and unbound > or rebound to something else. Simply forming an association > is NOT the point. the whole point of dynamic binding is the ability to *use* it. Using binary vector activations for units, it is possible to do tasks like multiple content-addressable memory (MCAM) - in which multiple patterns are retrieved simultaneously - in a straightforward manner. We have also looked into a (conceptual) *implementation* of the above scheme using complex-valued activations for the units. It is possible to represent about five objects using complex activations. It is also possible to perform tasks like MCAM. Finally, a question to neurobiologists: Can the existence of multiple neurotransmitters in neurons be related to the binary vector of activations idea? S H Srinivasan Center for AI & Robotics Bangalore - 560 012, INDIA. From reza at ai.mit.edu Wed Nov 17 08:26:38 1993 From: reza at ai.mit.edu (Reza Shadmehr) Date: Wed, 17 Nov 93 08:26:38 EST Subject: NSF Postdoctoral Fellowships Message-ID: <9311171326.AA16455@corpus-callosum.ai.mit.edu> Here's information on a Postdoctoral Fellowship program from NSF for Computational Sciences. with best wishes, Reza Shadmehr reza at ai.mit.edu --------------------------------------- CISE Postdoctoral Research Associates in Computational Science and Engineering and, in Experimental Science Program Announcement DIVISION OF ADVANCED SCIENTIFIC COMPUTING OFFICE OF CROSS-DISCIPLINARY ACTIVITIES DEADLINE: NOVEMBER 29, 1993 NATIONAL SCIENCE FOUNDATION CISE Postdoctoral Research Associates in Computational Science and Engineering CISE Postdoctoral Research Associates in Experimental Science The Computer and Information Science and Engineering (CISE) Directorate of the National Science Foundation plans a limited number of grants for support of Postdoctoral Research Associateships contingent upon available funding. The Associates are of two types: - Associateships in Computational Science and Engineering (CS&E Associates) supported by the New Technologies Program in the Division of Advanced Scientific Computing (DASC) in cooperation with other NSF CS&E disciplines (CS&E Associates). The objective of these Associateship awards is to increase expertise in the development of innovative methods and software for applying high performance, scalable parallel computing systems in solving large scale CS&E problems. - Associateships in Experimental Science (ES Associates) supported by the Office of Cross Disciplinary Activities (CDA) . The objective of the ES Associateship awards is to increase expertise in CISE experimental science by providing opportunities for associates to work in established laboratories performing experimental research in one or more of the research areas supported by the CISE Directorate. These awards provide opportunities for recent Ph.D.s to broaden their knowledge and experience and to prepare them for significant research careers on the frontiers of contemporary computational science and engineering and experimental science. It is assumed that CS&E Associates will conduct their research at academic research institutions or other centers or institutions which provide access, either on site or by network, to high performance, scalable parallel computing systems and will be performing research associated with those systems. 
It is assumed that ES Associates will conduct their research in academic research institutions or other institutions devoted to experimental science in one or more of the research areas supported by the CISE Directorate. Who may submit Universities, colleges, and other research institutions as described in Grants for Research and Education in Science and Engineering (GRESE), (NSF 92-89) are eligible to submit proposals to this program. For CS&E Associateships the institution must have access to high performance, emerging parallel computing systems. For ES Associateships, the institution should have an established laboratory performing research in CISE experimental areas (as described in Guide to Programs (NSF 92-78)). Associateship awards will be based on proposals submitted by the sponsoring institution. The principal investigator will serve as an unreimbursed scientific advisor for the research associate. Research associates should not be listed as co-principal investigators. Each proposal must include a research and training plan for the proposed research associate in an activity of computational science and engineering in any of the fields supported by DASC, other NSF CS&E programs or experimental research supported by the CISE Directorate. To be eligible for this support, individuals must; (1) be eligible to be appointed as a research associate or research assistant professor in the institution which has submitted the proposal, (2) fulfill the requirement for the doctoral degree in computational science and engineering, computer science or a closely related discipline by September 30, 1994. Award Amounts, Stipends and Research Expense Allowances Awards will range from $36,200-$46,200 for a 24 month period. The award will include $32,000-$42,000 to support the Research Associate (to be matched equally by the sponsoring institution). There will also be an allowance of $4,200 to the sponsoring institution, in lieu of indirect costs, as partial reimbursement for expenses incurred in support of the research. The annual award to the research associate will be composed of two parts; an annual stipend (salary and benefits) that may range from $28,000-$38,000, and a $4,000 per year research expense allowance expendable at the Associate's discretion for travel, publication expenses, and other research-related costs. There is no allowance for dependents. The effective date of the award cannot be later than January 1995. Matching Funds The institution must match the NSF award on a dollar for dollar basis excluding the $4,200 granted in lieu of indirect costs. Matching funds may come from grants from other NSF programs, other agencies programs, or from other institutional resources. Matching fund arrangements are the responsibility of the submitting institution and must be detailed in the budget request. To the extent that the sponsoring institution increases its cost sharing by providing additional stipend beyond the level of $38,000 over the 24 month award period, the CISE Postdoctoral Associates program will not provide additional funds. Evaluation and Selection Proposals will be reviewed by panel in accordance with established Foundation procedures and the general criteria described in the GRESE brochure. 
In particular, the review panel will consider: the candidate's ability, accomplishments, potential as evidenced by the quality and significance of past research, long range career goals, the likely impact of the proposed postdoctoral training and research on the future career goals, the likely impact of the proposed postdoctoral training and research on the future scientific development of the applicant and on the parallel computing infrastructure of the US (for CS&E Associates) or on Experimental Science in CISE disciplines (for ES Associates), and the adequacy of the sponsoring institutions access to high performance and/or experimental computational resources to support the proposed research. The selection of the Research Associates will be made by the National Science Foundation on the basis of panel reviews, with due consideration of the effect of the awards on the infrastructure of CS&E and experimental computer science research in the US. Copies of the GRESE brochure and other NSF publications are available at no cost from the NSF Forms and Publication Unit, phone (703) 306-1130, or via e-mail (Bitnet:pub at nsf or Internet:pubs at nsf.gov). Application Procedures and Proposal Materials To be eligible for consideration, a proposal must contain forms which can be found in the GRESE brochure. Required are a Supplementary Application Information Form (NSF Form 1225-one copy), a Current and Pending Support Form (NSF Form 1239-one copy) to be completed by the Principal Investigator (the scientific advisor), and one original and twelve copies of: (a) Cover page with institutional certificates (Form 1207). Title should indicate whether the proposal is an CS&E Postdoctoral Associate or ES Postdoctoral Associate. (b) Budget (Form 1030). (c) Statement with details regarding matching funds and their source. (d) Personal career goals statement not to exceed one single- spaced page, written by the research associate applicant, that describes the career goals of the applicant and what role the chosen research, scientific advisor and sponsoring institution will play in enhancing the realization of these long-range career goals. (e) Statement of results from prior NSF support (of the Principal Investigator) related to the proposed research. (f) Biographical sketch of the principal investigator as called for in the GRESE brochure. (g) Up-to-date curriculum vitae of the research associate applicant including a complete list of publications, but no reprints (a thesis should not be included, but a thesis abstract may be included). (h) Proposal abstract, less than 250 words, of the training and research plan. (i) Training and research plan (not to exceed three single- spaced typewritten pages). This should propose research which could be carried out during the award period. The creativity, description and essential elements of the research proposal must be those of the research associate applicant. (j) Statement from the proposed postdoctoral advisor nominating the research associate indicating the nature of the postdoctoral supervision to be given if the award is made. (k) Statement from the advisor clearly describing the computing facilities and resources that will be available to support the proposed research. (l) Three recommendations (normally including one from the doctoral advisor). Training and research plans should be provided to your references to assist their recommendations. 
Please note that the research description page limit is less than the research description page limit specified in GRESE. All application materials must be: (1) received by NSF no later than the deadline date November 29, 1993; (2) be postmarked no later than five (5) days prior to the deadline date; or (3) be sent via commercial overnight mail no later than two (2) days prior to the deadline date; to be considered for award. Send completed proposals with supporting application materials to: National Science Foundation - PPU Announcement No. 93-150 4201 Wilson Blvd. Arlington, VA 22230 Additional Information If you wish additional information, please contact Dr. Robert G.Voigt, Program Director, New Technologies, DASC, at 202-357-7727 (e-mail: rvoigt at nsf.gov) for CS&E Associates or Dr. Tse-Yun Feng, Program Director, CDA at (202) 357-7349 (e-mail: tfeng at nsf.gov) for ES Associates. After November 19, 1993, the phone numbers are respectively 703-306-1962 and 703-306-1980. Copies of most program announcements are available electronically using the Science and Technology Information System (STIS). The full text can be searched on-line, and copied from the system. Instructions for use of the system are in NSF 91-10 "STIS Flyer." The printed copy is available from the Forms and Publications Unit. An electronic copy may be requested by sending a message to "stis at nsf" (bitnet) or "stis at nsf.gov" (Internet). The Foundation provides awards for research in the sciences and engineering. The awardee is wholly responsible for the conduct of such research and preparation of the results for publication. The Foundation does not assume responsibility for such findings or their interpretation. The Foundation welcomes proposals on behalf of all qualified scientists and engineers and strongly encourages women, minorities, and persons with disabilities to compete fully in any of the research and research-related programs described in this document. Facilitation Awards for Scientists and Engineers with Disabilities provide funding for special assistance or equipment to enable persons with disabilities (investigators and other staff, including student research assistants) to work on an NSF project. See program announcement (NSF 91-54), or contact the program coordinator (703) 306-1697 for more information. In accordance with Federal statutes and regulations and NSF policies, no person on grounds of race, color, age, sex, national origin, or disability shall be excluded from participation in, denied the benefits of, or be subject to discrimination under any program or activity receiving financial assistance from the National Science Foundation. NSF has TDD (Telephone Device for the Deaf) capability which enables individuals with hearing impairments to communicate with the Division of Human Resource Management for information relating to NSF programs, employment, or general information. This number is (703) 306-0090. Grants awarded as a result of this announcement are administered in accordance with the terms and conditions of NSF GC-1, Grant General Conditions, or FDP-II, Federal Demonstration Project General Terms and Conditions, depending on the grantee organization. Copies of these documents are available at no cost from the NSF Forms and Publications Unit, phone (703) 306-1130, or via e-mail (Bitnet:pubs at nsf or Internet:pubs at nsf.gov). 
More comprehensive information is contained in the NSF Grant Policy Manual (July 1989) for sale through the Superintendent of Documents, Government Printing Office, Washington, DC 20402. From pjh at compsci.stirling.ac.uk Wed Nov 17 05:40:39 1993 From: pjh at compsci.stirling.ac.uk (Peter J.B. Hancock) Date: 17 Nov 93 10:40:39 GMT (Wed) Subject: NCPW94 Message-ID: <9311171040.AA02456@uk.ac.stir.cs.nevis> Preliminary Announcement and first call for papers 3rd Neural Computation and Psychology Workshop University of Stirling Scotland 31 August - 2 September 1994 The theme of next year's workshop will be models of perception: general vision, faces, music etc. There will be invited and contributed talks and posters. It is hoped that a proceedings will be published after the event. Participation from postgraduates is particularly encouraged. Papers will be selected on the basis of abstracts of at most 1000 words. Deadline for submission: 1 June 1994. For further information contact: Peter Hancock, Department of Psychology, pjh at uk.ac.stir.cs, 0786-467659 Leslie Smith, Department of Computing Science and Mathematics, lss at uk.ac.stir.cs, 0786-467435 From george at psychmips.york.ac.uk Wed Nov 17 10:03:22 1993 From: george at psychmips.york.ac.uk (George Bolt) Date: Wed, 17 Nov 93 15:03:22 +0000 (GMT) Subject: CHI '94 Workshop - HCI & Neural Networks Message-ID: I neglected to mention where the workshop is to be held in my posting. It will be with the main HCI conference in Boston, MA, USA. - George Bolt Dept. of Psychology, University of York, UK. From mozer at dendrite.cs.colorado.edu Wed Nov 17 12:12:13 1993 From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer) Date: Wed, 17 Nov 1993 10:12:13 -0700 Subject: book announcement Message-ID: <199311171712.AA01090@neuron.cs.colorado.edu> In case you don't already have enough to read, the following volume is now available: Mozer, M., Smolensky, P., Touretzky, D., Elman, J., & Weigend, A. (Eds.). (1994). _Proceedings of the 1993 Connectionist Models Summer School_. Hillsdale, NJ: Erlbaum Associates. The table of contents is listed below. For prepaid orders by check or credit card, the price is $49.95 US. Orders may be made by e-mail to "orders at leanhq.mhs.compuserve.com", by fax to (201) 666 2394, or by calling 1 (800) 926 6579. Include your credit card number, type, expiration date, and refer to "ISBN 1590-2". ------------------------------------------------------------------------------- Proceedings of the 1993 Connectionist Models Summer School Table of Contents ------------------------------------------------------------------------------- NEUROSCIENCE Sigma-pi properties of spiking neurons / Thomas Rebotier and Jacques Droulez Towards a computational theory of rat navigation / Hank S. Wan, David S. Touretzky, and A. David Redish Evaluating connectionist models in psychology and neuroscience / H. Tad Blair VISION Self-organizing feature maps with lateral connections: Modeling ocular dominance / Joseph Sirosh and Risto Miikkulainen Joint solution of low, intermediate, and high level vision tasks by global optimization: Application to computer vision at low SNR / Anoop K. Bhattacharjya and Badrinath Roysam COGNITIVE MODELING Learning global spatial structures from local associations / Thea B. Ghiselli-Crippa and Paul W. Munro A connectionist model of auditory Morse code perception / David Ascher A competitive neural network model for the process of recurrent choice / Valentin Dragoi and J. E. R. 
Staddon A neural network simulation of numerical verbal-to-arabic transcoding / A. Margrethe Lindemann Combining models of single-digit arithmetic and magnitude comparison / Thomas Lund Neural network models as tools for understanding high-level cognition: Developing paradigms for cognitive interpretation of neural network models / Itiel E. Dror LANGUAGE Modeling language as sensorimotor coordination F. James Eisenhart Structure and content in word production: Why it's hard to say dlorm Anita Govindjee and Gary Dell Investigating phonological representations: A modeling agenda Prahlad Gupta Part-of-speech tagging using a variable context Markov model Hinrich Schutze and Yoram Singer Quantitative predictions from a constraint-based theory of syntactic ambiguity resolution Michael Spivey-Knowlton Optimality semantics Bruce B. Tesar SYMBOLIC COMPUTATION AND RULES What's in a rule? The past tense by some other name might be called a connectionist net Kim G. Daugherty and Mary Hare On the proper treatment of symbolism--A lesson from linguistics Amit Almor and Michael Rindner Structure sensitivity in connectionist models Lars F. Niklasson Looking for structured representations in recurrent networks Mihail Crucianu Back propagation with understandable results Irina Tchoumatchenko Understanding neural networks via rule extraction and pruning Mark W. Craven and Jude W. Shavlik Rule learning and extraction with self-organizing neural networks Ah-Hwee Tan RECURRENT NETWORKS AND TEMPORAL PATTERN PROCESSING Recurrent networks: State machines or iterated function systems? John F. Kolen On the treatment of time in recurrent neural networks Fred Cummins and Robert F. Port Finding metrical structure in time J. Devin McAuley Representations of tonal music: A case study in the development of temporal relationships Catherine Stevens and Janet Wiles Applications of radial basis function fitting to the analysis of dynamical systems Michael A. S. Potts, D. S. Broomhead, and J. P. Huke Event prediction: Faster learning in a layered Hebbian network with memory Michael E. Young and Todd M. Bailey CONTROL Issues in using function approximation for reinforcement learning Sebastian Thrun and Anton Schwartz Approximating Q-values with basis function representations Philip Sabes Efficient learning of multiple degree-of-freedom control problems with quasi-independent Q-agents Kevin L. Markey Neural adaptive control of systems with drifting parameters Anya L. Tascillo and Victor A. Skormin LEARNING ALGORITHMS AND ARCHITECTURES Temporally local unsupervised learning: The MaxIn algorithm for maximizing input information Randall C. O'Reilly Minimizing disagreement for self-supervised classification Virginia R. de Sa Comparison of two unsupervised neural network models for redundancy reduction Stefanie Natascha Lindstaedt Solving inverse problems using an EM approach to density estimation Zoubin Ghahramani Estimating a-posteriori probabilities using stochastic network models Michael Finke and Klaus-Robert Muller LEARNING THEORY On overfitting and the effective number of hidden units Andreas S. Weigend Increase of apparent complexity is due to decrease of training set error Robert Dodier Momentum and optimal stochastic search Genevieve B. Orr and Todd K. Leen Scheme to improve the generalization error Rodrigo Garces General averaging results for convex optimization Michael P. Perrone Multitask connectionist learning Richard A. Caruana Estimating learning performance using hints Zehra Cataltepe and Yaser S. 
Abu-Mostafa SIMULATION TOOLS A simulator for asynchronous Hopfield models Arun Jagota An object-oriented dataflow approach for better designs of neural net architectures Alexander Linden From mayer at Heuristicrat.COM Tue Nov 16 11:59:58 1993 From: mayer at Heuristicrat.COM (Andrew Mayer) Date: Tue, 16 Nov 1993 08:59:58 -0800 Subject: Announcement: Summer Institute on Probabilistic Reasoning in AI Message-ID: <199311161659.AA16863@euclid.Heuristicrat.COM> Summer Institute on Probabilistic Reasoning in Artificial Intelligence Corvallis, Oregon July 22 - 27, 1994 WHAT: An intensive short course in modern probabilistic modeling, Bayesian inference, and decision theory designed for advanced PhD students, recent PhDs, and government and industry researchers. WHY: In the last decade, researchers have made significant breakthroughs in techniques for representing and reasoning about uncertain information. Many now feel that probabilistic reasoning and rational decision making provide a sound and practical foundation for a variety of problems in artificial intelligence. In fact, these techniques now form the basis of state-of-the-art applications in search, planning, machine learning, diagnosis, vision, robotics, and speech understanding. The field has reached a level of maturity where the basic techniques are well understood and ready for dissemination. At the same time, there is a wealth of potential applications and open research topics. Thus, the time is ripe to train the next generation of researchers. WHERE, WHEN, HOW: The first Summer Institute will be held at Oregon State University, Corvallis, Oregon from July 22-27, 1994. A distinguished faculty will lecture on the foundations of probabilistic reasoning and decision theory, knowledge acquisition, learning, and inference methods. Case studies will be presented on implemented applications. The Institute will provide housing, and expects to be able to provide limited travel funds. The Institute is sponsored by the Air Force Office of Scientific Research. WHO: Faculty include: Jack Breese, Microsoft; Wray Buntine, NASA Ames; Bruce D'Ambrosio, Oregon State; Thomas Dean, Brown; Robert Fung, Lumina Decision Systems; Othar Hansson, HRI & UC Berkeley; David Heckerman, Microsoft; Max Henrion, Lumina Decision Systems; Keiji Kanazawa, UC Berkeley; Tod Levitt, IET & Stanford; Andrew Mayer, HRI & UC Berkeley; Judea Pearl, UCLA; Mark Peot, Stanford; Ross Shachter, Stanford; Michael Wellman, Univ. Michigan; and others to be announced at a later date. TO APPLY: For information and applications please contact the recruiting chair: Andrew Mayer, Heuristicrats Research, Inc., 1678 Shattuck Avenue, Suite 310, Berkeley, CA 94709-1631; (510) 845-5810, x629; mayer at heuristicrat.com. APPLICATIONS MUST BE RECEIVED BY FEBRUARY 15, 1994. Early application is encouraged to aid our planning process. From lba at ilusion.inesc.pt Fri Nov 19 05:29:39 1993 From: lba at ilusion.inesc.pt (Luis B. Almeida) Date: Fri, 19 Nov 93 11:29:39 +0100 Subject: Principal Components algorithms In-Reply-To: <9311170052.AA18287@rice-chex> (tds@ai.mit.edu) Message-ID: <9311191029.AA07506@ilusion.inesc.pt> Dear Terence, dear Connectionists, Terence writes: > I propose the following hypothesis: > "All algorithms for PCA which are based on a Hebbian learning rule must > use sequential deflation to extract components beyond the first." I agree with that hypothesis where most of the PCA algorithms are concerned. However, I am not sure of that for all algorithms.
One of them is the "weighted subspace" algorithm of Oja et al. (see ref. below). The simplest way I have found to interpret this algorithm is as a weighted combination of Williams' error-correction learning (or the plain subspace algorithm, which is the same) and Oja's original Hebbian rule. If one takes into account the relative weights of both, which are different from one unit to another, it is rather easy to understand that the algorithm should extract the principal components. I can give more detail on this interpretation, if people find it useful.

From granger at ics.uci.edu Fri Nov 19 14:32:38 1993
From: granger at ics.uci.edu (Rick Granger)
Date: Fri, 19 Nov 1993 11:32:38 -0800
Subject: Principal Components algorithms
In-Reply-To: Your message of "Tue, 16 Nov 1993 19:52:23 EST." <9311170052.AA18287@rice-chex>
Message-ID: <6305.753737558@ics.uci.edu>

Terry, you point out that correlational or "Hebbian" algorithms identify the first principal component, and that "sequential deflation" (i.e., successive removal of the prior component) will then iteratively discover the subsequent components. Based on our bottom-up simulation studies some years ago of the combined olfactory bulb - olfactory paleocortex system, we arrived at an algorithm that we identified as performing this class of function. In brief, cortex processes feedforward inputs from bulb, cortical feedback to bulb inhibits a portion of the input, the feedforward remainder from bulb is then processed by cortex, and the process iterates, successively removing portions of the input and then processing them. We described our findings in a 1990 article (Ambros-Ingerson, Granger and Lynch, Science, 247: 1344-1348, 1990). Moreover, in that paper we identified a superset of this class of functions, which includes families of algorithms both for PCA and for the disparate statistical function of hierarchical clustering. Intuitively, in a network that computes principal components, all the target cells (or any single target cell) will respond to all inputs, and with correlational learning the weight vectors converge to the first principal component. Consider instead the same network but with a lateral inhibition (or "competitive" or "winner-take-all") performance rule (see, e.g., Coultrip et al., Neural Networks, 5: 47-54). In this version, instead of all the cells acting in concert computing the principal component, each individual "winning" cell will move to the mean of just that subset of inputs that it wins on. Then the response corresponds to the statistical operation of clustering (as in the work of Von der Malsburg '73, Grossberg '76, Zipser and Rumelhart '86, and many others). Then an operation of "sequential deflation" (in this case, successive removal of the cluster via inhibitory feedback from cortex to bulb) identifies sub-clusters (and then sub-sub-clusters, etc.), iteratively descending a hierarchy that successively approximates the inputs, performing the operation of hierarchical clustering. Thus these two operations, hierarchical clustering and principal components analysis, fall out as special cases of the same general class of "successive removal" (or "sequential deflation") algorithms. A formal characterization of this finding appears in the 1990 Science paper.
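[Editor's illustration] The contrast described here -- a correlational rule whose units all converge on the first principal component, versus a winner-take-all rule whose units converge on cluster means -- can be seen in a few lines of code. The following is only a minimal numerical sketch under idealized assumptions (Oja's single-unit Hebbian rule and a plain competitive rule on synthetic two-cluster data); it is not the olfactory bulb-cortex model discussed in the message, and all parameter values are invented.

# Minimal sketch: a correlational (Oja-style Hebbian) unit versus two
# winner-take-all units on synthetic two-cluster data.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal([ 2.0, 0.0], [1.0, 0.3], size=(500, 2)),
               rng.normal([-2.0, 0.0], [1.0, 0.3], size=(500, 2))])
rng.shuffle(X)

# Oja's Hebbian rule: dw = eta * y * (x - y * w), with y = w . x
w = rng.normal(size=2)
for _ in range(5):                         # a few passes over the data
    for x in X:
        y = w @ x
        w += 0.01 * y * (x - y * w)
print("Hebbian unit, approx. first PC direction:", w / np.linalg.norm(w))

# Winner-take-all rule: only the nearest unit moves toward each input,
# so each unit converges toward the mean of the inputs it wins on.
W = np.array([[1.0, 0.0], [-1.0, 0.0]])    # start the two units apart
for x in X:
    k = np.argmin(np.linalg.norm(W - x, axis=1))
    W[k] += 0.05 * (x - W[k])
print("Competitive units, approx. cluster means:\n", W)

On data like this the Hebbian unit aligns with the dominant axis of variation, while the two competitive units settle near the two cluster centres; sequential deflation would then repeat either operation on what remains.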
(We have hypothesized that an operation of this kind may be performed by the olfactory bulb-cortex system, and have tested some physiological and behavioral predictions from the model: e.g., McCollum et al., J.Cog.Neurosci., 3: 293-299, 1991; Granger et al., Psychol.Sci., 2: 116-118, 1991. This and related work is reviewed in Gluck and Granger, Annual Review of Neurosci., 16: 667-706, 1993. Anyone interested in reprints is welcome to send me a mail or email request.) - Rick Granger Center for the Neurobiology of Learning and Memory University of California Irvine, California 92717 granger at ics.uci.edu From tds at ai.mit.edu Fri Nov 19 14:59:30 1993 From: tds at ai.mit.edu (Terence D. Sanger) Date: Fri, 19 Nov 93 14:59:30 EST Subject: PCA bibliography Message-ID: <9311191959.AA05559@rice-chex> Dear Connectionists, Since I sent out the offer to supply a PCA bibliography, I have received so many requests that I now realize I should have included it with the original mailing! My mistake. A file called "pca.bib" is now available via anonymous ftp from the same site (instructions below). This file is in a not-very-clean BibTex format. I won't even pretend that it is a complete bibliography, since many people are currently working in this field and I don't yet have all the most recent reprints. If I've missed anyone or there are mistakes, please send me some email and I'll update the bibliography for everyone. Thanks, Terry Sanger Instructions for retrieving bibliography database: ftp ftp.ai.mit.edu login: anonymous password: yourname at yoursite cd pub/sanger-papers get pca.bib quit P.S.: Several people have commented to me that the way I phrased the hypothesis seems to imply the use of time-sequential deflation. In other words, it sounds as if the first eigenvector must be found and removed from the data, before the second is found. Most algorithms do not do this, and instead deflate the first learned component while it is being learned. Thus learning of all components continues simultaneously "in parallel". I meant to include this case, but I could not think of any succinct way to say it! Technically, it is not very different, since most convergence proofs assume sequential learning of the outputs. But in practice, algorithms which learn all outputs in parallel seem to perform faster than those that learn one output at a time. I have certainly found this to be true for GHA, and people have mentioned that it holds true for other algorithms as well. From smagt at fwi.uva.nl Fri Nov 19 16:40:03 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Fri, 19 Nov 1993 22:40:03 +0100 Subject: CFP: book on neural systems for robotics Message-ID: <199311192140.AA03441@zijde.fwi.uva.nl> NOTE THE DEADLINES! =================== PROGRESS IN NEURAL NETWORKS series editor O. M. Omidvar CALL FOR PAPERS Special Volume: NEURAL SYSTEMS FOR ROBOTICS Editor: P. Patrick van der Smagt This series will review state-of-the-art research in neural networks, natural and synthetic. Contributions from leading researchers and practitioners will be sought. This series will help shape and define academic and professional programs in this area. This series is intended for a wide audience; those professionally involved in neural network research, such as lecturers and primary investigators in neural computing, neural modeling, neural learning, neural memory, and neurocomputers. 
The upcoming volume, NEURAL SYSTEMS FOR ROBOTICS, will focus on research in natural and artificial neural systems directly related to robotics and robot control. Authors are invited to submit original manuscripts describing recent progress in neural network research directly applicable to robotics. Manuscripts may be survey or tutorial in nature. Suggested topics include, but are not limited to: * Neural control systems for visually guided robots * Manipulator trajectory control * Sensor feedback systems & sensor data fusion * Obstacle avoidance * Biologically inspired robot systems * Identification of kinematics and dynamics Implementation of algoritms in non-simulated environments (i.e., *real* robots) is encouraged. The papers will be refereed and uniformly typeset. Ablex and the Progress Series editors invite you to submit an abstract, extended summary or manuscript proposal, directly to the Special Volume Editor: P. Patrick van der Smagt, Dept. of Computer Systems, University of Amsterdam, Kruislaan 403, 1098 SJ Amsterdam, THE NETHERLANDS Tel: +31 20 525-7524 Fax: +31 20 525-7490 Email: smagt at fwi.uva.nl or to the Series Editor: Dr. Omid M. Omidvar, Computer Science Dept., University of the District of Columbia, Washington DC 20008 Tel: (202)282-7345 Fax: (202)282-3677 Email: OOMIDVAR at UDCVAX.BITNET The deadline for extended abstracts is December 31, 1993. Author notification is by February, 1994. Final submissions, which should not exceed 50 double-spaced pages in length, should be in by May 31, 1994. The Publisher is Ablex Publishing Corporation, Norwood, NJ. From henders at linc.cis.upenn.edu Thu Nov 18 13:19:53 1993 From: henders at linc.cis.upenn.edu (Jamie Henderson) Date: Thu, 18 Nov 1993 13:19:53 -0500 Subject: dynamic binding Message-ID: <199311181819.NAA11159@linc.cis.upenn.edu> The vector representation of dynamic bindings suggested by S H Srinivasan can be thought of as a spatial form of the temporal synchrony representation of dynamic bindings. Under an idealized form of the temporal synchrony model, the network cycles through the set of entities one at a time. One period of this cycle corresponds to one computation step in the vector processing model. I can imagine other forms of "vector position synchrony" that would correspond to less idealized versions of temporal synchrony. One advantage of the temporal synchrony model over the vector model is that if a rule is learned for an entity in one "position" of the vector, the temporal synchrony model inherently generalizes this rule to all "positions". Because the temporal synchrony model cycles through entities, the piece of network which implements a rule is time-multiplexed across all entities. The vector model would need a special mechanism for enforcing this property of rules. - Jamie James Henderson Computer and Information Science University of Pennsylvania From sparre at connect.nbi.dk Thu Nov 18 16:13:05 1993 From: sparre at connect.nbi.dk (Jacob Sparre Andersen) Date: Thu, 18 Nov 93 16:13:05 MET Subject: Reference list Message-ID: We're writing a paper on learning strategic games with neural nets and other optimization methods. We've collected some references, but we hope that we can get some help improving our reference list. Regards, Jacob Sparre Andersen and Peer Sommerlund Here's our list of references (some not complete): Justin A. 
Boyan (1992): "Modular Neural Networks for Learning Context-Dependent Game Strategies", Department of Engineering and Computer Laboratory, University of Cambridge, 1992, Cambridge, England Bernd Bruegmann (1993): "Monte Carlo Go", unpublished? Herbert Enderton (1989?): "The Golem Go Program" B. Freisleben (1992): "Teaching a Neural Network to Play GO-MOKU," Artificial Neural Networks 2, proceedings of ICANN '92, editors: I. Aleksander and J. Taylor, pp. 1659-1662, Elsevier Science Publishers, 1992 W.T.Katz and S.P.Pham (1991): "Experience-Based Learning Experiments using Go-moku", Proc. of the 1991 IEEE International Conference on Systems, Man, and Cybernetics, 2: 1405-1410, October 1991. M. Kohle & F. Schonbauer (19??): "Experience gained with a neural network that learns to play bridge", Proc. of the 5th Austrian Artificial Intelligence meeting, pp. 224-229. Kai-Fu Lee and Sanjoy Mahajan (1988): "A Pattern Classification Approach to Evaluation Function Learning", Artificial Intelligence, 1988, vol 36, pp. 1-25. Barney Pell (1992?): "" Pell has done some work in machine learning for GO. Article available by ftp. A.L. Samuel (1959): "Some studies in machine learning using the game of checkers", IBM journal of Research and Development, vol 3, nr. 3, pp. 210-229, 1959. A.L. Samuel (1967): "Some studies in machine learning using the game of checkers 2 - recent progress", IBM journal of Research and Development, vol 11, nr. 6, pp. 601-616, 1967. David Stoutamire (19??): has written a thesis on machine learning applied to Go. G. Tesauro (1989): "Connectionist learning of expert preferences by comparison training", Advances in NIPS 1, 99-106 1989 G. Tesauro & T.J. Sejnowski (1989): "A Parallel Network that learns to play Backgammon", Artificial Intelligence, vol 39, pp. 357-390, 1989. G. Tesauro & T.J. Sejnowski (1990): "Neurogammon: A Neural Network Backgammon Program", IJCNN Proceedings, vol 3, pp. 33-39, 1990. In Machine Learning is this article, in which he comments on temporal difference learning (i.e. training a net from scratch by playing a copy of itself). The program he develops is called "TD-gammon": G. Tesauro (1991): "Practical Issues in Temporal Difference Learning", IBM Research Report RC17223(#76307 submitted) 9/30/91; see also the special issue on Reinforcement Learning of the Machine Learning Journal 1992, where it also appears. He Yo, Zhen Xianjun, Ye Yizheng, Li Zhongrong (1990): "Knowledge acquisition and reasoning based on neural networks - the research of a bridge bidding system", INNC '90, Paris, vol 1, pp. 416-423. The annual computer olympiad involves tournaments in a variety of games. These publications contain a wealth of interesting articles: Heuristic Programming in Artificial Intelligence - the first computer olympiad D.N.L. Levy & D.F. Beal eds. Ellis Horwood ltd, 1989. Heuristic Programming in Artificial Intelligence 2 - the second computer olympiad D.N.L. Levy & D.F. Beal eds. Ellis Horwood, 1991. Heuristic Programming in Artificial Intelligence 3 - the third computer olympiad H.J. van den Herik & L.V. Allis eds. Ellis Horwood, 1992. -------------------------------------------------------------------------- Jacob Sparre Andersen, Niels Bohr Institute, University of Copenhagen. E-mail: sparre at connect.nbi.dk - Fax: (+45) 35 32 04 60 -------------------------------------------------------------------------- Peer Sommerlund, Department of Computer science, University of Copenhagen. 
E-mail: peso at connect.nbi.dk -------------------------------------------------------------------------- We're writing a paper on learning strategic games with neural nets and other optimization methods. -------------------------------------------------------------------------- From anguita at dibe.unige.it Sat Nov 20 16:14:48 1993 From: anguita at dibe.unige.it (Davide Anguita) Date: Sat, 20 Nov 93 16:14:48 MEZ Subject: Matrix Back Prop (MBP) available Message-ID: Matrix Back Propagation v1.1 is finally available. This code implements (in C language) the algorithms described in: D.Anguita, G.Parodi, R.Zunino - An efficient implementation of BP on RISC- based workstations. Neurocomputing, in press. D.Anguita, G.Parodi, R.Zunino - Speed improvement of the BP on current generation workstations. WCNN '93, Portland. D.Anguita, G.Parodi, R.Zunino - YPROP: yet another accelerating technique for the bp. ICANN '93, Amsterdam. To retrieve the code: ftp risc6000.dibe.unige.it <130.251.89.154> anonymous cd pub bin get MBPv1.1.tar.Z quit uncompress MBPv1.1.tar.Z tar -xvf MBPv1.1.tar Then print the file mbpv11.ps (PostScript). Send comments (or flames) to the address below. Good luck. Davide. ======================================================================== Davide Anguita DIBE Phone: +39-10-3532192 University of Genova Fax: +39-10-3532175 Via all'Opera Pia 11a e-mail: anguita at dibe.unige.it 16145 Genova, ITALY From gary at cs.ucsd.edu Sat Nov 20 15:07:23 1993 From: gary at cs.ucsd.edu (Gary Cottrell) Date: Sat, 20 Nov 93 12:07:23 -0800 Subject: PCA bibliography Message-ID: <9311202007.AA29326@desi> RE: >P.S.: Several people have commented to me that the way I phrased the >hypothesis seems to imply the use of time-sequential deflation. In other >words, it sounds as if the first eigenvector must be found and removed from >the data, before the second is found. Most algorithms do not do this, and >instead deflate the first learned component while it is being learned. Thus >learning of all components continues simultaneously "in parallel". In fact, that's how it works also for the "Category 2" systems mentioned in your travelogue of methods. Straight LMS with hidden units will learn the principal components at different rates, with the highest rate on the first, then the second, etc., up to the number of hidden units. Of course, these systems only span the principal subspace rather than learning the pc's directly, but I find that an advantage. (See Baldi & Hornik 1989, and Cottrell & Munro, SPIE88 paper). Also, if you add more hidden layers to a nonlinear system, as in DeMers & Cottrell (93), you can learn better representations, in the sense that you can find the actual dimensionality of the system you are modeling, with respect to your error criterion. So for example, a helix in 3 space will be found to be one dimensional instead of 3, data from 3.5D Mackey Glass will be found to be either 3 or 4 dimensional depending on your reconstruction fidelity required. We don't know how to do a half a hidden unit yet, though! Gary Cottrell 619-534-6640 Reception: 619-534-6005 FAX: 619-534-7029 "Only connect" Computer Science and Engineering 0114 University of California San Diego -E.M. Forster La Jolla, Ca. 92093 gary at cs.ucsd.edu (INTERNET) gcottrell at ucsd.edu (BITNET, almost anything) ..!uunet!ucsd!gcottrell (UUCP) References: Baldi, P. and Hornik, K., (1989) Neural Networks and Principal Component Analysis: Learning from Examples without Local Minima, Neural Networks 2, 53--58. Cottrell, G.W. 
and Munro, P. (1988) Principal components analysis of images via back propagation. Invited paper in Proceedings of the Society of Photo-Optical Instrumentation Engineers, Cambridge, MA. Available from erica at cs.ucsd.edu DeMers, D. & Cottrell, G.W. (1993) Nonlinear dimensionality reduction. In Hanson, Cowan & Giles (Eds.), Advances in neural information processing systems 5, pp. 580-587, San Mateo, CA: Morgan Kaufmann. Available on neuroprose as demers.nips92-nldr.ps.Z. From phatak at zonker.ecs.umass.edu Sun Nov 21 16:27:40 1993 From: phatak at zonker.ecs.umass.edu (Dhananjay S Phatak) Date: Sun, 21 Nov 93 16:27:40 -0500 Subject: Tecnhical report on fault tolerance of feedforward ANNs available. Message-ID: <9311212127.AA12281@zonker.ecs.umass.edu> This is the abstract of Technical Report No. TR-92-CSE-26, ECE Dept., Univ. of Massachusetts, Amherst, MA 01003. The report is titled "Complete and Partial Fault Tolerance of Feedforward Neural Nets". It is available in the neuroprose archive (the compressed post script file name is phatak.nn-fault-tolerance.ps.Z). An abbreviated version of this report is in print in the IEEE Transactions on Neural Nets (to appear in 1994). I would be glad to get feedback about the issues discussed and the results presented. Thanks ! ---------------------- ABSTRACT --------------------------------------- A method is proposed to estimate the fault tolerance of feedforward Artificial Neural Nets (ANNs) and synthesize robust nets. The fault model abstracts a variety of failure modes of hardware implementations to permanent stuck--at type faults of single components. A procedure is developed to build fault tolerant ANNs by replicating the hidden units. It exploits the intrinsic weighted summation operation performed by the processing units in order to overcome faults. It is simple, robust and is applicable to any feedforward net. Based on this procedure, metrics are devised to quantify the fault tolerance as a function of redundancy. Furthermore, a lower bound on the redundancy required to tolerate all possible single faults is analytically derived. This bound demonstrates that less than Triple Modular Redundancy (TMR) cannot provide complete fault tolerance for all possible single faults. This general result establishes a NECESSARY condition that holds for ALL feedforward nets, irrespective of the network topology or the task it is trained on. Analytical as well as extensive simulation results indicate that the actual redundancy needed to SYNTHESIZE a completely fault tolerant net is specific to the problem at hand and is usually much higher than that dictated by the general lower bound. The data implies that the conventional TMR scheme of triplication and majority vote is the best way to achieve complete fault tolerance in most ANNs. Although the redundancy needed for complete fault tolerance is substantial, the results do show that ANNs exhibit good partial fault tolerance to begin with (i.e., without any extra redundancy) and degrade gracefully. The first replication is seen to yield maximum enhancement in partial fault tolerance compared to later, successive replications. For large nets, exhaustive testing of all possible single faults is prohibitive. Hence, the strategy of randomly testing a small fraction of the total number links is adopted. It yields partial fault tolerance estimates that are very close to those obtained by exhaustive testing. 
Moreover, when the fraction of links tested is held fixed, the accuracy of the estimate generated by random testing is seen to improve as the net size grows. ------------------------------------------------------------------------------- From murray at DBresearch-berlin.de Mon Nov 22 09:06:00 1993 From: murray at DBresearch-berlin.de (R. Murray-Smith) Date: Mon, 22 Nov 93 09:06 MET Subject: IEE Colloq. Neural networks: CALL FOR PAPERS Message-ID: CALL FOR PAPERS --------------- IEE Colloquium on Advances in Neural Networks for Control and Systems 26-27 May 1994 To be held at a location in Central Europe A colloquium on `Advances in neural networks for control and systems' is being organised by the control committees of the Institution of Electrical Engineers with additional support from Daimler-Benz Systems Technology Research, Berlin. This two-day meeting will be held on 26-27 May 1994 at a central european location. The programme will comprise a mix of invited papers and papers received in response to this call. Invited speakers include leading international academic workers in the field and major industrial companies who will present recent applications of neural methods, and outline the latest theoretical advances. Neural networks have been seen for some years now as providing considerable promise for application in nonlinear control and systems problems. This promise stems from the theoretical ability of networks of various types to approximate arbitrarily well continuous nonlinear mappings. The aim of this colloquium is to evaluate the state-of-the-art in this very popular field from the engineering perspective. The colloquium will cover both theoretical and applied aspects. A major goal of the workshop will be to examine ways of improving the engineering involved in neural network modelling and control, so that the theoretical power of learning systems can be harnessed for practical applications. This includes questions such as: - Which network architecture for which application? - Can constructive learning algorithms capture the underlying dynamics while avoiding overfitting? - How can we introduce a priori knowledge or models into neural networks? - Can experiment design and active learning be used to automatically create 'optimal' training sets? - How can we validate a neural network model? In line with this goal of better engineering methods, the colloquium will also place emphasis on real industrial applications of the technology; applied papers are most welcome. Prospective authors are invited to submit three copies of a 500-word abstract by Friday 25 February 1994 to Dr K J Hunt, Daimler-Benz AG, Alt-Moabit 91 B, D-10559 Berlin, Germany (tel: + 49 30 399 82 275, FAX: + 49 30 399 82 107, E-mail: hunt at DBresearch-berlin.de). From announce at PARK.BU.EDU Mon Nov 22 09:50:43 1993 From: announce at PARK.BU.EDU (announce@PARK.BU.EDU) Date: Mon, 22 Nov 93 09:50:43 -0500 Subject: Faculty position in Cognitive and Neural Systems at Boston University Message-ID: <9311221450.AA20171@retina.bu.edu> NEW SENIOR FACULTY IN COGNITIVE AND NEURAL SYSTEMS AT BOSTON UNIVERSITY Boston University seeks an associate or full professor starting in Fall 1994 for its graduate Department of Cognitive and Neural Systems. This Department offers an integrated curriculum of psychological, neurobiological, and computational concepts, models, and methods in the fields of neural networks, computational neuroscience, and connectionist cognitive science in which Boston University is a leader. 
Candidates should have an international research reputation, preferably including extensive analytic or computational research experience in modeling a broad range of nonlinear neural networks, especially in one or more of the areas: vision and image processing, visual cognition, spatial orientation, adaptive pattern recognition, and cognitive information processing. Send a complete curriculum vitae and three letters of recommendation to Search Committee, Department of Cognitive and Neural Systems, Room 240, 111 Cummington Street, Boston University, Boston, MA 02215. Boston University is an Equal Opportunity/Affirmative Action employer. From announce at PARK.BU.EDU Mon Nov 22 09:58:31 1993 From: announce at PARK.BU.EDU (announce@PARK.BU.EDU) Date: Mon, 22 Nov 93 09:58:31 -0500 Subject: Graduate study in Cognitive and Neural Systems at Boston University Message-ID: <9311221458.AA20329@retina.bu.edu> (please post) *********************************************** * * * DEPARTMENT OF * * COGNITIVE AND NEURAL SYSTEMS (CNS) * * AT BOSTON UNIVERSITY * * * *********************************************** Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies The Boston University Department of Cognitive and Neural Systems offers comprehensive advanced training in the neural and computational principles, mechanisms, and architectures that underly human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1994 admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: Department of Cognitive & Neural Systems Boston University 111 Cummington Street, Room 240 Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via email your full name and mailing address to: cns at cns.bu.edu Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. 
Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self- organization; associative learning and long-term memory; computational neuroscience; nerve cell biophysics; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; active vision; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of twelve interdisciplinary graduate courses each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Nine additional advanced courses, including research seminars, are also offered. Each course is typically taught once a week in the evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as Biology, Computer Science, Engineering, Mathematics, and Psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems (CAS). Students interested in neural network hardware work with researchers in CNS, the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the Department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. 1993-94 CAS MEMBERS and CNS FACULTY: Jacob Beck Daniel H. Bullock Gail A. Carpenter Chan-Sup Chung Michael A. Cohen H. Steven Colburn Paolo Gaudiano Stephen Grossberg Frank H. Guenther Thomas G. Kincaid Nancy Kopell Ennio Mingolla Heiko Neumann Alan Peters Adam Reeves Eric L. Schwartz Allen Waxman Jeremy Wolfe From oja at dendrite.hut.fi Tue Nov 23 09:50:35 1993 From: oja at dendrite.hut.fi (Erkki Oja) Date: Tue, 23 Nov 93 16:50:35 +0200 Subject: No subject Message-ID: <9311231450.AA17347@dendrite.hut.fi.hut.fi> RE: PCA in neural networks Dear Terry + connectionists: I will continue shortly the discussion about PCA, Hebbian rule and deflation. I would also like to point out the "Weighted Subspace algorithm" which Luis Almeida already mentioned. 
A more comprehensive reference is given at the end of this note. While in the GHA and SGA methods, the j-th weight vector depends on all the others up to index j, the Weighted Subspace algorithm is homogeneous in the sense that the j-th weight vector depends on all the others. I would say that the latter type is not deflation. There is an extension, mentioned in ref. (2) below, which is totally symmetrical, but there is a nonlinearity in the "feedback" term of the algorithm. This is sufficient to drive the vectors to the true eigenvectors. This was pointed out to me first by L. Xu. The references: 1. E. Oja, H. Ogawa and J. Wangviwattana: "Principal component analysis by homogeneous neural networks, Part I: The weighted subspace criterion". IEICE Trans. Inf. and Systems, vol. E75-D, no. 3, May 1992, pp. 366 - 375. 2. E. Oja, H. Ogawa and J. Wangviwattana: "Principal component analysis by homogeneous neural networks, Part II: Analysis and extensions of the learning algorithms. Same as above, pp. 376 - 382. Regards, Erkki Oja (Erkki.Oja at hut.fi) From biehl at physik.uni-wuerzburg.de Tue Nov 23 16:00:13 1993 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Tue, 23 Nov 93 16:00:13 MEZ Subject: preprint available Message-ID: <9311231500.AA14860@wptx08.physik.uni-wuerzburg.de> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/biehlmietzner.pancakes.ps.Z The following paper has been placed in the Neuroprose archive (see above for ftp-host) as a compressed postscript file named biehlmietzner.pancakes.ps.Z (15 pages of output) email addresses of authors : biehl at physik.uni-wuerzburg.de mietzner at physik.uni-wuerzburg.de **** Hardcopies cannot be provided **** ------------------------------------------------------------------ "Statistical Mechanics of Unsupervised Structure Recognition" Michael Biehl and Andreas Mietzner Physikalisches Institut der Universitaet Am Hubland D-97074 Wuerzburg Germany Abstract: A model of unsupervised learning is studied, where the environment provides N-dimensional input examples that are drawn from two overlapping Gaussian clouds. We consider the optimization of two different objective functions: the search for the direction of the largest variance in the data and the largest separating gap (stability) respectively. By means of a statistical mechanics analysis, we investigate how well the underlying structure is inferred from a set of examples. The performance of the learning algorithms depends crucially on the actual shape of the input distribution. A generic result is the existence of a critical number of examples needed for successful learning. The learning strategies are compared with methods different in spirit, such as the estimation of parameters in a model distribution and an information theoretic approach. ---------------------------------------------------------------------- From radford at cs.toronto.edu Tue Nov 23 12:05:43 1993 From: radford at cs.toronto.edu (Radford Neal) Date: Tue, 23 Nov 1993 12:05:43 -0500 Subject: Review of Markov chain Monte Carlo methods Message-ID: <93Nov23.120551edt.638@neuron.ai.toronto.edu> The following review paper is now available via ftp, as described below. PROBABILISTIC INFERENCE USING MARKOV CHAIN MONTE CARLO METHODS Radford M. Neal Department of Computer Science University of Toronto 25 September 1993 Probabilistic inference is an attractive approach to uncertain reasoning and empirical learning in artificial intelligence. 
Computational difficulties arise, however, because probabilistic models with the necessary realism and flexibility lead to complex distributions over high-dimensional spaces. Related problems in other fields have been tackled using Monte Carlo methods based on sampling using Markov chains, providing a rich array of techniques that can be applied to problems in artificial intelligence. The ``Metropolis algorithm'' has been used to solve difficult problems in statistical physics for over forty years, and, in the last few years, the related method of ``Gibbs sampling'' has been applied to problems of statistical inference. Concurrently, an alternative method for solving problems in statistical physics by means of dynamical simulation has been developed as well, and has recently been unified with the Metropolis algorithm to produce the ``hybrid Monte Carlo'' method. In computer science, Markov chain sampling is the basis of the heuristic optimization technique of ``simulated annealing'', and has recently been used in randomized algorithms for approximate counting of large sets. In this review, I outline the role of probabilistic inference in artificial intelligence, present the theory of Markov chains, and describe various Markov chain Monte Carlo algorithms, along with a number of supporting techniques. I try to present a comprehensive picture of the range of methods that have been developed, including techniques from the varied literature that have not yet seen wide application in artificial intelligence, but which appear relevant. As illustrative examples, I use the problems of probabilistic inference in expert systems, discovery of latent classes from data, and Bayesian learning for neural networks. TABLE OF CONTENTS 1 Introduction. . . . . . . . . . . . . . . . . . . . . . . . . 1 2 Probabilistic Inference for Artificial Intelligence. . . . . . 4 2.1 Probabilistic inference with a fully-specified model . . . 5 2.2 Statistical inference for model parameters . . . . . . . . 13 2.3 Bayesian model comparison. . . . . . . . . . . . . . . . . 23 2.4 Statistical physics. . . . . . . . . . . . . . . . . . . . 25 3 Background on the Problem and its Solution . . . . . . . . . . 30 3.1 Definition of the problem. . . . . . . . . . . . . . . . . 30 3.2 Approaches to solving the problem. . . . . . . . . . . . . 32 3.3 Theory of Markov chains . . . . . . . . . . . . . . . . . 36 4 The Metropolis and Gibbs Sampling Algorithms . . . . . . . . . 47 4.1 Gibbs sampling . . . . . . . . . . . . . . . . . . . . . . 47 4.2 The Metropolis algorithm . . . . . . . . . . . . . . . . . 54 4.3 Variations on the Metropolis algorithm . . . . . . . . . . 59 4.4 Analysis of the Metropolis and Gibbs sampling algorithms . 64 5 The Dynamical and Hybrid Monte Carlo Methods . . . . . . . . . 70 5.1 The stochastic dynamics method . . . . . . . . . . . . . . 70 5.2 The hybrid Monte Carlo algorithm . . . . . . . . . . . . . 77 5.3 Other dynamical methods. . . . . . . . . . . . . . . . . . 81 5.4 Analysis of the hybrid Monte Carlo algorithm . . . . . . . 83 6 Extensions and Refinements . . . . . . . . . . . . . . . . . . 87 6.1 Simulated annealing. . . . . . . . . . . . . . . . . . . . 87 6.2 Free energy estimation . . . . . . . . . . . . . . . . . . 94 6.3 Error assessment and reduction . . . . . . . . . . . . . . 102 6.4 Parallel implementation. . . . . . . . . . . . . . . . . . 114 7 Directions for Research. . . . . . . . . . . . . . . . . . . . 116 7.1 Improvements in the algorithms . . . . . . . . . . . . . . 
116 7.2 Scope for applications . . . . . . . . . . . . . . . . . . 118 8 Annotated Bibliography . . . . . . . . . . . . . . . . . . . . 121 Total length: 144 pages The paper may be obtained in PostScript form as follows: unix> ftp ftp.cs.toronto.edu (log in as user 'anonymous', your e-mail address as password) ftp> cd pub/radford ftp> binary ftp> get review.ps.Z ftp> quit unix> uncompress review.ps.Z unix> lpr review.ps (or however you print PostScript) The files review[0123].ps.Z in the same directory contain the same paper in smaller chunks; these may prove useful if your printer cannot digest the paper all at once. Radford Neal radford at cs.toronto.edu From malsburg at neuroinformatik.ruhr-uni-bochum.de Tue Nov 23 12:43:18 1993 From: malsburg at neuroinformatik.ruhr-uni-bochum.de (malsburg@neuroinformatik.ruhr-uni-bochum.de) Date: Tue, 23 Nov 93 18:43:18 +0100 Subject: Graham Smith's suggestion Message-ID: <9311231743.AA03176@circe.neuroinformatik.ruhr-uni-bochum.de> If I understand the text correctly, both on the input level and the output level he has two cells for each of the four feature types, one for each object. I presume that after learning, there are no connections in the system that confuse cells belonging to different objects; there will be, for instance, no hidden unit to fire in response to the combination ``red-a - square-b'' (if -a and -b stand for the two object identities), and correspondingly the output could not fire erroneously to this false conjunction. I have actually discussed this ``solution'' to the binding problem in my article ``Am I thinking assemblies?'' (Proceedings of the Trieste Meeting on Brain Theory, October 1984. G.Palm and A.Aertsen, eds. Springer: Berlin Heidelberg (1986), {\it pp} 161--176). I then talked about the ``box solution'' (keeping things not to be confused with each other in separate boxes with no confusing connections between them). The main problem with that ``solution'' is creating those boxes in the first place (Graham Smith solved this by learning). Sorting features into boxes appropriately is another one (this problem is not solved in his scheme at all, features being sorted already into the -a and -b boxes in the input patterns). A third problem is that rigid boxes keep the system from generalizing appropriately. The beauty of temporal binding is that a system can deal with a feature combination even if it occurs for the first time. For instance, if a ``red'' unit and a ``square'' unit have been learned to send activation to some output cell, and a red square occurs for the first time in life, the two units correlate their signals in time and summate on the output cell, whereas with a red triangle and a blue square, the signals will not be synchronized and cannot summate on the output. From rjw at ccs.neu.edu Tue Nov 23 13:48:58 1993 From: rjw at ccs.neu.edu (Ronald J Williams) Date: Tue, 23 Nov 1993 13:48:58 -0500 Subject: paper available Message-ID: <9311231848.AA10376@walden.ccs.neu.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/williams.perf-bound.ps.Z **PLEASE DO NOT FORWARD TO OTHER GROUPS** The following paper is now available in the neuroprose directory. It is 17 pages long. For those unable to obtain the file by ftp, hardcopies can be obtained by contacting: Diane Burke, College of Computer Science, 161 CN, Northeastern University, Boston, MA 02115, USA. 
Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions Northeastern University College of Computer Science Technical Report NU-CCS-93-13 Ronald J. Williams College of Computer Science Northeastern University rjw at ccs.neu.edu Abstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning, and in this case it is also shown that this bound is tight in general. One significant application of this result is to problems where a function approximator is used to learn a value function, with training of the approximator based on trying to minimize the Bellman residual across states or state-action pairs. When control is based on the use of the resulting value function, this result provides a link between how well the objectives of function approximator training are met and the quality of the resulting control. To obtain a copy: ftp cheops.cis.ohio-state.edu login: anonymous password: cd pub/neuroprose binary get williams.perf-bound.ps.Z quit Then at your system: uncompress williams.perf-bound.ps.Z lpr -P williams.perf-bound.ps --------------------------------------------------------------------------- Ronald J. Williams | email: rjw at ccs.neu.edu College of Computer Science, 161 CN | Phone: (617) 373-8683 Northeastern University | Fax: (617) 373-5121 Boston, MA 02115, USA | --------------------------------------------------------------------------- From esann at dice.ucl.ac.be Tue Nov 23 15:47:34 1993 From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be) Date: Tue, 23 Nov 93 21:47:34 +0100 Subject: ESANN'94: European Symposium on ANNs Message-ID: <9311232047.AA02167@ns1.dice.ucl.ac.be> ____________________________________________________________________ ____________________________________________________________________ ESANN'94 European Symposium on Artificial Neural Networks Brussels - April 20-21-22, 1994 FINAL CALL FOR PAPERS __________________________________________________ ! Authors, prospective authors or participants ! ! PLEASE URGENTLY READ THIS E-MAIL !!! ! __________________________________________________ ____________________________________________________________________ ____________________________________________________________________ Due to post delivery problems in Belgium, the organizers of ESANN'94 have decided to accept submitted papers till beginning of December. But to avoid lost papers, authors or prospective authors are strongly invited to make them known by sending an E-mail to the address 'esann at dice.ucl.ac.be', before November 30th, mentioning their name(s), the title and authors of the paper, and an E-mail address or fax number where they can be reached if the paper is not received. 
ESANN'94 is the second symposium on fundamental aspects of artificial neural networks. Papers are still welcome in the following areas (this list is not restrictive): works: theory models and architectures mathematics learning algorithms biologically plausible artificial networks neurobiological systems adaptive behavior signal processing statistics vector quantization self-organization evolutive learning Accepted papers will cover new results in one or several of these aspects or will be of tutorial nature. Papers insisting on the relations between artificial neural networks and classical methods of information processing, signal processing or statistics are encouraged. People interested in ESANN'94 and having not received the call for papers are invited to contact the conference secretariat for further information. _____________________________ Michel Verleysen D facto conference services 45 rue Masui 1210 Brussels Belgium tel: +32 2 245 43 63 fax: +32 2 245 46 94 E-mail: esann at dice.ucl.ac.be _____________________________ From smagt at fwi.uva.nl Wed Nov 24 15:45:29 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Wed, 24 Nov 1993 21:45:29 +0100 Subject: TR: neural robot hand-eye coordination Message-ID: <199311242045.AA10159@carol.fwi.uva.nl> The following technical report has been put in the connectionists archive. Robot hand-eye coordination using neural networks P. Patrick van der Smagt, Frans C. A. Groen, and Ben J. A. Kr\"ose Department of Computer Systems University of Amsterdam TR CS--93--10 A self-learning, adaptive control system for a robot arm using a vision system in a feedback loop is described both in simulation and in practice. The task of the control system is to position the end-effector as accurately as possible directly above a target object, so that it can be grasped. The camera of the vision system is positioned in the end-effector and visual information is used to control the robot. Knowledge of the size of the object is used for obtaining 3D information from a single camera. The neural controller is shown to exhibit `real-time' learning behaviour, and adaptivity to unknown changes in the robot configuration. ------------------------------------------------------------------- A postscript version of the paper can be obtained as follows: unix> ftp archive.cis.ohio-state.edu ftp> login name: anonymous ftp> password: xxx at yyy.zzz ftp> cd pub/neuroprose ftp> bin ftp> get smagt.hand-eye.ps.Z ftp> bye The technical report is 23 pages long, about 2M. Many Unix-systems may require printing using lpr -s smagt.hand-eye.ps to prevent the print spooler from overflowing. The paper contains two bitmap photographs (on pages 4 and 9), which may confuse some printers. If you have trouble printing the postscript file, remove those pictures as follows: unix> sed -e '/photographstart/,/photographend/d' < smagt.hand-eye.ps > mu.ps (or remove the blocks which are enclosed between the lines containing "photographstart" and "photographend" in the ps file by hand) which will mutilate figures 1 and 6. Then print mu.ps. Patrick From kolen-j at cis.ohio-state.edu Wed Nov 24 16:32:35 1993 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Wed, 24 Nov 93 16:32:35 -0500 Subject: Reprint Announcement Message-ID: <9311242132.AA03867@pons.cis.ohio-state.edu> While this paper does not directly talk about neural networks, it does have plenty of implications for cognitive science and neural network research. Implications addressed in my upcoming dissertation. 
John

-----------------------------------------------------------------------

This is an announcement of a newly available paper in neuroprose:

The Observers' Paradox: Apparent Computational Complexity in Physical Systems

John F. Kolen
Jordan B. Pollack
Laboratory for Artificial Intelligence Research
The Ohio State University
Columbus, OH 43210

Many researchers in AI and cognitive science believe that the complexity of a behavioral description reflects the underlying information processing complexity of the mechanism producing the behavior. This paper explores the foundations of this complexity argument. We first distinguish two types of complexity judgments that can be applied to these descriptions and then argue that neither type can be an intrinsic property of the underlying physical system. In short, we demonstrate how changes in the method of observation can radically alter both the number of apparent states and the apparent generative class of a system's behavioral description. From these examples we conclude that the act of observation can suggest frivolous computational explanations of physical phenomena, up to and including cognition.

This paper will appear in The Journal of Experimental and Theoretical Artificial Intelligence.

************************ How to obtain a copy ************************

Via Anonymous FTP:

unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password: (type your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get kolen.paradox.ps.Z
ftp> quit
unix> uncompress kolen.paradox.ps.Z
unix> lpr kolen.paradox.ps
(or what you normally do to print PostScript)

From hayit at micro.caltech.edu Wed Nov 24 19:38:45 1993
From: hayit at micro.caltech.edu (Hayit Greenspan)
Date: Wed, 24 Nov 93 16:38:45 PST
Subject: NIPS_workshop
Message-ID: <9311250038.AA01847@electra.caltech.edu>

NIPS*93 - Post Meeting workshop:
--------------------------------
Learning in Computer Vision and Image Understanding - An advantage over classical techniques?

To those interested:
**********************
The workshop schedule and list of abstracts are now available via anonymous ftp :
FTP-host: helper.systems.caltech.edu
filenames: /pub/nips/NIPS_prog /pub/nips/NIPSabs.ps.Z

Hayit Greenspan
******************************************************

From rsun at athos.cs.ua.edu Wed Nov 24 21:10:21 1993
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Wed, 24 Nov 1993 20:10:21 -0600
Subject: Graham Smith's suggestion
Message-ID: <9311250210.AA11097@athos.cs.ua.edu>

you can use a simpler means to achieve the same result, computationally. you can use a distinct activation value to represent bindings. this distinct value can be equated to phase. so if two nodes receive the same value, then they are bound to the same thing. This scheme does not require temporal processing, but it can have all the nice properties you mentioned.
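[Editor's illustration] As a toy sketch of the value-based binding scheme described in this message, the snippet below uses a shared scalar "signature" per object; the dictionary representation and all names and values are invented for illustration and are not taken from any of the models under discussion.

# Binding-by-value: each feature node carries the scalar "signature" of the
# object it currently describes; two nodes count as bound exactly when their
# signatures match.
signatures = {"obj-a": 0.3, "obj-b": 0.7}      # one distinct value per object

activations = {
    "red":    signatures["obj-a"],             # "red" is a feature of obj-a
    "square": signatures["obj-b"],             # "square" is a feature of obj-b
    "blue":   signatures["obj-b"],             # "blue" is a feature of obj-b
}

def bound_together(node1, node2, eps=1e-6):
    """Two feature nodes are bound iff they carry the same signature value."""
    return abs(activations[node1] - activations[node2]) < eps

print(bound_together("red", "square"))   # False: features of different objects
print(bound_together("blue", "square"))  # True: features of the same object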
--Ron From cells at tce.ing.uniroma1.it Thu Nov 25 14:54:50 1993 From: cells at tce.ing.uniroma1.it (cells@tce.ing.uniroma1.it) Date: Thu, 25 Nov 1993 20:54:50 +0100 Subject: Cellular Neural Networks mailing list Message-ID: <9311251954.AA13248@tce.ing.uniroma1.it> ************************************************************ * ANNOUNCING A NEW MAILING LIST ON * * CELLULAR NEURAL NETWORKS: * * cells at tce.ing.uniroma1.it * ************************************************************ Cellular Neural Networks (CNN) are continuous-time dynamical systems, consisting of a grid of processing elements (neurons, or cells) connected only to neighbors within a given (typically small) distance. It is therefore a class of recurrent neural networks, whose particular topology is most suited for integrated circuit realization. In fact, while in typical realizations of other neural systems most of silicon area is taken by connections, in this case connection area is neglectible, so that processor density can be much larger. Since their first definition by L.O. Chua and L. Yang in 1988, many applications were proposed, mainly in the field of image processing. In most cases a space-invariant weight pattern is used (i.e. weights are defined by a template, which repeats identically for all cells), and neurons are characterized by simple first order dynamics. However, many different kinds of dynamics (e.g. oscillatory and chaotic) have also been used for special purposes. A recent extension of the model is obtained by integrating the analog CNN with some simple logic components, leading to the realization of a universal programmable "analogic" machine. Essential bibliography: 1) L.O. Chua & L. Yang, "Cellular Neural Networks: Theory", IEEE Trans. on Circ. and Systems, CAS-35(10), p. 1257, 1988 2) -----, "Cellular Neural Networks: Applications", ibid., p. 1273 3) Proc. of IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-90), Budapest, Hungary, Dec. 16-19, 1990 4) Proc. of IEEE Second International Workshop on Cellular Neural Networks and their Applications (CNNA-92), Munich, Germany, Oct. 14-16, 1992 5) International Journal of Circuit Theory and Applications, vol.20, no. 5 (1992), special issue on Cellular Neural Networks 6) IEEE Transactions on Circuits and Systems, parts I & II, vol.40, no. 3 (1993), special issue on Cellular Neural Networks 7) T. Roska, L.O. Chua, "The CNN Universal Machine: an Analogic Array Computer", IEEE Trans. on Circ. and Systems, II, 40(3), 1993, p. 163 8) V. Cimagalli, M. Balsi, "Cellular Neural Networks: a Review", Proc. of Sixth Italian Workshop on Parallel Architectures and Neural Networks, Vietri sul Mare, Italy, May 12-14, 1993. (E. Caianiello, ed.), World Scientific, Singapore. Our research group at "La Sapienza" University of Rome, Italy, has been involved in CNN research for several years, and will host next IEEE International Workshop on Cellular Neural Networks and their Applications (CNNA-94), which will be held in Rome, December 18-21, 1994. We are now announcing the start of a new mailing list dedicated to Cellular Neural Networks. It will give the opportunity of discussing current research, exchanging news, submitting questions. Due to memory shortage, we are currently not able to offer an archive service, and hope that some other group will be able to volunteer for the establishment of this means of fast distribution of recent reports and papers. The list will not be moderated, at least as long as the necessity does not arise. 
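[Editor's illustration] For readers new to the model, the first-order, space-invariant cell dynamics referred to in the announcement above can be sketched in a few lines. The standard Chua-Yang state equation is assumed here, and the 3x3 templates, bias, and step size are made-up illustrative values rather than a recommended design.

# One Euler step of dx/dt = -x + A*y + B*u + I, where '*' denotes a local
# (neighborhood) convolution with the space-invariant templates A and B.
import numpy as np
from scipy.signal import convolve2d

def cnn_step(x, u, A, B, I, dt=0.05):
    y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))   # piecewise-linear cell output
    dx = -x + convolve2d(y, A, mode="same") + convolve2d(u, B, mode="same") + I
    return x + dt * dx

u = (np.random.rand(32, 32) > 0.5).astype(float)    # a binary input image
x = np.zeros_like(u)                                 # initial cell states
A = np.array([[0.0, 0.0, 0.0],                       # feedback template
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])
B = np.array([[-1.0, -1.0, -1.0],                    # control (input) template
              [-1.0,  8.0, -1.0],
              [-1.0, -1.0, -1.0]])
for _ in range(200):                                 # integrate to a settled state
    x = cnn_step(x, u, A, B, I=-0.5)
y = 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))        # settled outputs in [-1, 1]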
THOSE INTERESTED IN BEING INCLUDED IN THE LIST SHOULD SEND A MESSAGE to Marco Balsi (who will be supervising the functioning of the list) at address mb at tce.ing.uniroma1.it (151.100.8.30). This is the address to which any communication not intended to go to all subscribers of the list should be sent. We would also appreciate if you let us know the address of colleagues who might be interested in the list (rather than just forward the announcement directly), so that we can send them this announcement and keep track of those that were contacted, avoiding duplications. TO SEND MESSAGES TO ALL SUBSCRIBERS PLEASE USE THE FOLLOWING ADDRESS: cells at tce.ing.uniroma1.it (151.100.8.30) We hope that this service will encourage communication and foster collaboration among researchers working on CNNs and related topics. We are looking forward for your comments, and subscriptions to the list! Yours, Prof. V. Cimagalli Dipartimento di Ingegneria Elettronica Universita' "La Sapienza" di Roma via Eudossiana, 18, 00184 Roma Italy fax: +39-6-4742647 From David.Green at anu.edu.au Sun Nov 28 19:30:44 1993 From: David.Green at anu.edu.au (David G Green) Date: Mon, 29 Nov 1993 11:30:44 +1100 Subject: COMPLEX'94 Message-ID: <199311290030.AA06144@anusf.anu.edu.au> COMPLEX'94 Second Australian National Conference Sponsored by the University of Central Queensland Australian National University September 26-28th, 1994 University of Central Queensland Rockhampton Queensland Australia FIRST CIRCULAR AND CALL FOR PAPERS The inaugural Australian National Conference on Complex Systems was held at the Australian National University in 1992. Recognising the need to maintain and stimulate research interest in these topics the University of Central Queensland is hosting the Second Australian National Conference on Complex Systems in Rockhampton. Rockhampton is situated on the Tropic of Capricorn in Queensland on the east coast of Australia and is 35kms from the Central Queensland Coast. It is within easy access of tourist resorts including the resort island, Great Keppel, the Central Queensland Hinterland, and the Great Barrier Reef. This first circular is intended to provide basic information about the conference and to canvas expressions of interest, both in attending and in presenting papers or posters. A second circular, to be distributed in late January/early February, will provide details of keynote speakers, the program, registration procedures, etc. Please pass on this notice to interested colleagues. For further information contact the organisers (see below). AIMS: Complex systems are systems whose evolution is dominated by non-linearity or interactions between their components. Such systems may be very simple but reproduce very complex phenomena. Terms such as artificial life, biocomplexity, chaos, criticality, fractals, learning systems, neural networks, non-linear dynamics, parallel computation, percolation, self-organization have become common place. From sampat at CVAX.IPFW.INDIANA.EDU Mon Nov 29 12:47:51 1993 From: sampat at CVAX.IPFW.INDIANA.EDU (Pulin) Date: Mon, 29 Nov 1993 12:47:51 EST Subject: fifth neural network conference proceedings... Message-ID: <0097643E.6B752BA0.1678@CVAX.IPFW.INDIANA.EDU> The Proceedings of the Fifth Conference on Neural Networks and Parallel Distributed Processing at Indiana University-Purdue University at Fort Wayne, held April 9-11, 1992 are now available. They can be ordered ($9 + $1 U.S. 
mail cost; make checks payable to IPFW) from:

Secretary, Department of Physics
Indiana University Purdue University Fort Wayne
Fort Wayne, IN 46805-1499
FAX: (219) 481-6880
Voice: (219) 481-6306 or 481-6157
email: proceedings at ipfwcvax.bitnet

The following papers are included in the Proceedings of the Fifth Conference:

Tutorials
Phil Best, Miami University, Processing of Spatial Information in the Brain
William Frederick, Indiana-Purdue University, Introduction to Fuzzy Logic
Helmut Heller and K. Schulten, University of Illinois, Parallel Distributed Computing for Molecular Dynamics: Simulation of Large Heterogeneous Systems on a Systolic Ring of Transputer
Krzysztof J. Cios, University of Toledo, An Algorithm Which Self-Generates Neural Network Architecture - Summary of Tutorial

Biological and Cooperative Phenomena Optimization
Ljubomir T. Citkusev & Ljubomir J. Buturovic, Boston University, Non-Derivative Network for Early Vision
M.B. Khatri & P.G. Madhavan, Indiana-Purdue University, Indianapolis, ANN Simulation of the Place Cell Phenomenon Using Cue Size Ratio
J. Wu, M. Penna, P.G. Madhavan, & L. Zheng, Purdue University at Indianapolis, Cognitive Map Building and Navigation
J. Wu, C. Zhu, Michael A. Penna & S. Ochs, Purdue University at Indianapolis, Using the NADEL to Solve the Correspondence Problem
Arun Jagota, SUNY-Buffalo, On the Computational Complexity of Analyzing a Hopfield-Clique Network

Network Analysis
M.R. Banan & K.D. Hjelmstad, University of Illinois at Urbana-Champaign, A Supervised Training Environment Based on Local Adaptation, Fuzziness, and Simulation
Pranab K. Das II & W.C. Schieve, University of Texas at Austin, Memory in Small Hopfield Neural Networks: Fixed Points, Limit Cycles and Chaos
Arun Maskara & Andrew Noetzel, Polytechnic University, Forced Learning in Simple Recurrent Neural Networks
Samir I. Sayegh, Indiana-Purdue University, Neural Networks Sequential vs Cumulative Update: An * Expansion
D.A. Brown, P.L.N. Murthy, & L. Berke, The College of Wooster, Self-Adaptation in Backpropagation Networks Through Variable Decomposition and Output Set Decomposition
Sandip Sen, University of Michigan, Noise Sensitivity in a Simple Classifier System
Xin Wang, University of Southern California, Complex Dynamics of Discrete-Time Neural Networks
Zhenni Wang and Christine di Massimo, University of Newcastle, A Procedure for Determining the Canonical Structure of Multilayer Feedforward Neural Networks
Srikanth Radhakrishnan and C. Koutsougeras, Tulane University, Pattern Classification Using the Hybrid Coulomb Energy Network

Applications
K.D. Hooks, A. Malkani, & L. C. Rabelo, Ohio University, Application of Artificial Neural Networks in Quality Control Charts
B.E. Stephens & P.G. Madhavan, Purdue University at Indianapolis, Simple Nonlinear Curve Fitting Using the Artificial Neural Network
Nasser Ansari & Janusz A. Starzyk, Ohio University, Distance Field Approach to Handwritten Character Recognition
Thomas L. Hemminger & Yoh-Han Pao, Case Western Reserve University, A Real-Time Neural-Net Computing Approach to the Detection and Classification of Underwater Acoustic Transients
Seibert L. Murphy & Samir I. Sayegh, Indiana-Purdue University, Analysis of the Classification Performance of a Back Propagation Neural Network Designed for Acoustic Screening
S. Keyvan, L. C. Rabelo, & A.
Malkani, Ohio University, Nuclear Diagnostic Monitoring System Using Adaptive Resonance Theory From rjw at ccs.neu.edu Tue Nov 30 09:04:42 1993 From: rjw at ccs.neu.edu (Ronald J Williams) Date: Tue, 30 Nov 1993 09:04:42 -0500 Subject: revised paper available Message-ID: <9311301404.AA24614@walden.ccs.neu.edu> FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/williams.perf-bound.ps.Z **PLEASE DO NOT FORWARD TO OTHER GROUPS** The following improved version of the recently announced paper with the same title is now available in the neuroprose directory. The compressed file has the same name and supersedes the earlier version. In addition to containing some new material, this new version has a corrected TR number and an added co-author, and it is 20 pages long. For those unable to obtain the file by ftp, hardcopies can be obtained by contacting: Diane Burke, College of Computer Science, 161 CN, Northeastern University, Boston, MA 02115, USA. Tight Performance Bounds on Greedy Policies Based on Imperfect Value Functions Northeastern University College of Computer Science Technical Report NU-CCS-93-14 Ronald J. Williams Leemon C. Baird, III College of Computer Science Wright Laboratory Northeastern University Wright-Patterson Air Force Base rjw at ccs.neu.edu bairdlc at wL.wpafb.af.mil Abstract: Consider a given value function on states of a Markov decision problem, as might result from applying a reinforcement learning algorithm. Unless this value function equals the corresponding optimal value function, at some states there will be a discrepancy, which is natural to call the Bellman residual, between what the value function specifies at that state and what is obtained by a one-step lookahead along the seemingly best action at that state using the given value function to evaluate all succeeding states. This paper derives a tight bound on how far from optimal the discounted return for a greedy policy based on the given value function will be as a function of the maximum norm magnitude of this Bellman residual. A corresponding result is also obtained for value functions defined on state-action pairs, as are used in Q-learning. One significant application of these results is to problems where a function approximator is used to learn a value function, with training of the approximator based on trying to minimize the Bellman residual across states or state-action pairs. When control is based on the use of the resulting value function, this result provides a link between how well the objectives of function approximator training are met and the quality of the resulting control. 
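As a rough illustration of the quantities named in the abstract, the sketch below builds a small made-up discounted MDP, measures the max-norm Bellman residual of an imperfect value function, forms the greedy policy with respect to that value function, and compares the greedy policy's loss against a bound of the form 2*gamma*eps/(1-gamma), which is the shape in which results of this kind are usually quoted. Both the toy MDP and that explicit formula are assumptions for illustration, not quotations from the technical report.

    # Illustrative sketch: Bellman residual vs. greedy-policy loss on a toy MDP.
    # The MDP, the perturbed value function, and the bound formula used in the
    # final comparison are assumptions, not taken from the tech report.
    import numpy as np

    gamma = 0.9
    # Toy MDP: 3 states, 2 actions; P[a, s, s'] transition probabilities, R[a, s] rewards.
    P = np.array([[[0.8, 0.2, 0.0], [0.0, 0.6, 0.4], [0.3, 0.0, 0.7]],
                  [[0.1, 0.9, 0.0], [0.5, 0.0, 0.5], [0.0, 0.2, 0.8]]])
    R = np.array([[1.0, 0.0, 0.5],
                  [0.0, 2.0, 0.0]])

    def q_backup(V):
        """One-step lookahead values Q[a, s] = R[a, s] + gamma * sum_s' P[a, s, s'] V[s']."""
        return R + gamma * P @ V

    def policy_value(pi):
        """Exact value of a deterministic policy pi (array of actions), by solving the linear system."""
        n = len(pi)
        Ppi = P[pi, np.arange(n)]                   # n x n transition matrix under pi
        Rpi = R[pi, np.arange(n)]
        return np.linalg.solve(np.eye(n) - gamma * Ppi, Rpi)

    # Optimal value function via value iteration.
    Vstar = np.zeros(3)
    for _ in range(2000):
        Vstar = q_backup(Vstar).max(axis=0)

    V = Vstar + np.array([0.3, -0.2, 0.1])               # an imperfect value function
    eps = np.max(np.abs(q_backup(V).max(axis=0) - V))    # max-norm Bellman residual of V
    greedy = q_backup(V).argmax(axis=0)                  # greedy policy with respect to V
    loss = np.max(Vstar - policy_value(greedy))          # suboptimality of the greedy policy
    print(loss, 2 * gamma * eps / (1 - gamma))           # loss should not exceed the assumed bound

The same computation carried out on state-action values (acting greedily with respect to an imperfect Q) would illustrate the Q-learning version mentioned in the abstract.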
To obtain a copy:

ftp cheops.cis.ohio-state.edu
login: anonymous
password: <your e-mail address>
cd pub/neuroprose
binary
get williams.perf-bound.ps.Z
quit

Then at your system:

uncompress williams.perf-bound.ps.Z
lpr -P<printer name> williams.perf-bound.ps

From icip at pine.ece.utexas.edu Tue Nov 30 22:01:04 1993
From: icip at pine.ece.utexas.edu (International Conf on Image Processing Mail Box)
Date: Tue, 30 Nov 93 21:01:04 CST
Subject: No subject
Message-ID: <9312010301.AA11738@pine.ece.utexas.edu>

PLEASE POST PLEASE POST PLEASE POST PLEASE POST

***************************************************************
FIRST IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING
November 13-16, 1994
Austin Convention Center, Austin, Texas, USA

CALL FOR PAPERS

Sponsored by the Institute of Electrical and Electronics Engineers (IEEE) Signal Processing Society, ICIP-94 is the inaugural international conference on theoretical, experimental and applied image processing. It will provide a centralized, high-quality forum for the presentation of technological advances and research results by scientists and engineers working in image processing and associated disciplines such as multimedia and video technology. Also encouraged are image processing applications in areas such as the biomedical sciences and geosciences.

SCOPE:

1. IMAGE PROCESSING: Coding, Filtering, Enhancement, Restoration, Segmentation, Multiresolution Processing, Multispectral Processing, Image Representation, Image Analysis, Interpolation and Spatial Transformations, Motion Detection and Estimation, Image Sequence Processing, Video Signal Processing, Neural Networks for image processing and model-based compression, Noise Modeling, Architectures and Software.

2. COMPUTED IMAGING: Acoustic Imaging, Radar Imaging, Tomography, Magnetic Resonance Imaging, Geophysical and Seismic Imaging, Radio Astronomy, Speckle Imaging, Computer Holography, Confocal Microscopy, Electron Microscopy, X-ray Crystallography, Coded-Aperture Imaging, Real-Aperture Arrays.

3. IMAGE SCANNING, DISPLAY AND PRINTING: Scanning and Sampling, Quantization and Halftoning, Color Reproduction, Image Representation and Rendering, Graphics and Fonts, Architectures and Software for Display and Printing Systems, Image Quality, Visualization.

4. VIDEO: Digital video, Multimedia, HD video and packet video, video signal processor chips.

5. APPLICATIONS: Application of image processing technology to any field.

PROGRAM COMMITTEE:
GENERAL CHAIR: Alan C. Bovik, U. Texas, Austin
TECHNICAL CHAIRS: Tom Huang, U. Illinois, Champaign and John W. Woods, Rensselaer, Troy
SPECIAL SESSIONS CHAIR: Mike Orchard, U. Illinois, Champaign
EAST EUROPEAN LIAISON: Henri Maitre, TELECOM, Paris
FAR EAST LIAISON: Bede Liu, Princeton University

SUBMISSION PROCEDURES

Prospective authors are invited to propose papers for lecture or poster presentation in any of the technical areas listed above. To submit a proposal, prepare a summary of the paper using no more than 3 pages, including figures and references. Send five copies of the paper summary along with a cover sheet stating the paper title, technical area(s) and contact address to:

John W. Woods
Center for Image Processing Research
Rensselaer Polytechnic Institute
Troy, NY 12180-3590, USA.

Each selected paper (five-page limit) will be published in the Proceedings of ICIP-94, using high-quality paper for good image reproduction. Style files in LaTeX will be provided for the convenience of the authors.
SCHEDULE

Paper summaries/abstracts due*: 15 February 1994
Notification of Acceptance: 1 May 1994
Camera-Ready papers: 15 July 1994
Conference: 13-16 November 1994

*For an automatic electronic reminder, send a "reminder please" message to: icip at pine.ece.utexas.edu

CONFERENCE ENVIRONMENT

ICIP-94 will be held in the recently completed state-of-the-art Convention Center in downtown Austin. The Convention Center is situated two blocks from Town Lake and is only 12 minutes from Robert Mueller Airport. It is surrounded by many modern hotels that provide comfortable accommodation for $75-$125 per night. Austin, the state capital, is renowned for its natural hill-country beauty and an active cultural scene. Within walking distance of the Convention Center are several hiking and jogging trails, as well as opportunities for a variety of aquatic sports. Live bands perform in various clubs around the city and at night spots along Sixth Street, offering a range of jazz, blues, country/Western, reggae, swing and rock music. Day temperatures are typically in the upper sixties in mid-November.

An exciting range of EXHIBITS, TUTORIALS, SPECIAL PRODUCT SESSIONS, and SOCIAL EVENTS will be offered. For further details about ICIP-94, please contact:

Conference Management Services
3024 Thousand Oaks Drive
Austin, Texas 78746
Tel: 512/327/4012; Fax: 512/327/8132
or email: icip at pine.ece.utexas.edu

FINAL CALL FOR PAPERS
FIRST IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING
November 13-16, 1994
Austin Convention Center, Austin, Texas, USA