From Connectionists-Request at CS.CMU.EDU Tue Sep 1 00:05:22 1992
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Tue, 01 Sep 92 00:05:22 -0400
Subject: Bi-monthly Reminder
Message-ID: <8690.715320322@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers.

Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

Happy hacking. -- Dave Touretzky & David Redish

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------
- The list is primarily intended to support the discussion of technical issues relating to neural computation.
- We encourage people to post the abstracts of their latest papers and tech reports.
- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.
- Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:
  WRONG WAY: "Can someone please mail me all references to cascade correlation?"
  RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."
- Announcements of job openings related to neural computation.
- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------
- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu".
- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster.
- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.
- Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------
All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are arch.yymm, where yymm stands for the year and month. Thus the earliest available data are in the file arch.8802. Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------
1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8).
2. Log in as user anonymous, with your username as the password.
3. 'cd' directly to one of the following directories:
   /usr/connect/connectionists/archives
   /usr/connect/connectionists/bibliographies
4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory.
5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------
How to FTP Files from the Neuroprose Archive
--------------------------------------------
Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu). Researchers may place electronic versions of their preprints or articles in this directory and announce their availability; other interested researchers can then rapidly retrieve and print the PostScript files. This saves copying, postage and handling by having the interested reader supply the paper. (Along this line, single-spaced versions, if possible, will help!)

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. If you do offer hard copies, be prepared for an onslaught.
One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! Experience dictates that the preferred paradigm is to announce an FTP-only version with a prominent "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your announcement to the connectionist mailing list.

The current naming convention is author.title.filetype[.Z], where title is enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. Very large files (e.g. over 200k) must be squashed (with either a sigmoid function :) or the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is attached as an appendix, and a shell script called Getps in the directory can perform the necessary retrieval operations.

For further questions contact:
Jordan Pollack, Assistant Professor
CIS Dept/OSU, Laboratory for AI Research
2036 Neil Ave, Columbus, OH 43210
Email: pollack at cis.ohio-state.edu
Phone: (614) 292-4890

Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.
Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. The INDEX sentence is "Boastful statements by the deceased leader of the neurocomputing field." Please let me know when it is ready to announce to Connectionists at cmu. BTW, I enjoyed reading your review of the new edition of Perceptrons!
Frank

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------
1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous", with your username as the password.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "nn-bench-request at cs.cmu.edu".

From gem at cogsci.indiana.edu Tue Sep 1 15:49:44 1992
From: gem at cogsci.indiana.edu (Gary McGraw)
Date: Tue, 1 Sep 92 14:49:44 EST
Subject: Technical report available (Letter Spirit)
Message-ID:

The following technical report has been placed in the neuroprose archive as mcgraw.letter_spirit.ps.Z and is available via ftp (from 128.146.8.52).
Letter Spirit: Recognition and Creation of Letterforms Based on Fluid Concepts

by Gary McGraw
Center for Research on Concepts and Cognition
TR 61

Although this work is not really connectionism per se, the approach it represents has much in common with connectionist ideas. Here is an abstract:

The Letter Spirit project is an attempt to model central aspects of human creativity on a computer. We believe that creativity flourishes in the mind because of the flexible and context-sensitive nature of concepts (which we call fluid concepts to reflect this essential plasticity). We believe that creativity is a by-product of the fluidity of concepts, and that a reasonable model of conceptual fluidity can shed much light on creativity. Letter Spirit explores creativity through the art of letter design. The aim of Letter Spirit is to model how the 26 lowercase letters of the Roman alphabet can be rendered in a uniform style. The program will start with one or more seed letters representing a style, and create the rest of the letters in such a way that they share the same style, or spirit. Letter Spirit involves a blend of high-level perception and conceptual play that will allow it to create in a cognitively plausible fashion. High-level perception involves processing information to the level of meaning by accessing concepts and making sense of sensory data at a conceptual level.

----------------------------------------------------------------------

The paper is also available via ftp from cogsci.indiana.edu as /pub/mcgraw.letter_spirit.ps. E-mail versions may be available to interested parties without ftp access. Send inquiries to gem at cogsci.indiana.edu (or mcgrawg at moose.cs.indiana.edu).

From RIANI at GENOVA.INFN.IT Thu Sep 3 05:43:00 1992
From: RIANI at GENOVA.INFN.IT (RIANI@GENOVA.INFN.IT)
Date: 03 Sep 1992 09:43 +0000 (GMT)
Subject: research fellowship
Message-ID: <2216@GENOVA.INFN.IT>

Subj: Research fellowship.

A post-doc fellowship funded by the European Community Commission (program Mobility and Human Capital) could be assigned to the research group on Neural Networks of the Unita' di Genova del Consorzio INFM for a period of 6 to 12 months. The salary of the fellowship will be 3600 ECU/month (insurance fees and taxes included). Candidates must be citizens of an EC country other than Italy.

The research topic of the fellowship will be one of:
(a) neural networks for handwriting recognition;
(b) studies of neural network algorithms for molecular electronic systems.

Interested candidates should send a curriculum vitae, a publication list and a letter of interest to me.

Prof. Massimo Riani
Unita' di Genova del Consorzio INFM
Via Dodecaneso 33
16146 Genova - Italy
email: riani at genova.infn.it
fax: +39-10-314218

From lange at CS.UCLA.EDU Wed Sep 2 07:21:33 1992
From: lange at CS.UCLA.EDU (Trent Lange)
Date: Wed, 2 Sep 92 04:21:33 PDT
Subject: Papers in Neuroprose Archive
Message-ID: <920902.112133z.10403.lange@lanai.cs.ucla.edu>

The following two reprints have been placed in the Neuroprose Archives at Ohio State University:

======================================================================

Lexical and Pragmatic Disambiguation and Reinterpretation in Connectionist Networks

Trent E. Lange
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles

Lexical and pragmatic ambiguity is a major source of uncertainty in natural language understanding.
Symbolic models can make high-level inferences necessary for understanding text, but handle ambiguity poorly, especially when later context requires a reinterpretation of the input. Structured connectionist networks, on the other hand, can use their graded levels of activation to perform lexical disambiguation, but have trouble performing the variable bindings and inferencing necessary for language understanding. We have previously described a structured spreading-activation model, ROBIN, which overcomes many of these problems and allows the massively-parallel application of a large class of general knowledge rules. This paper describes how ROBIN uses these abilities and the contextual evidence from its semantic networks to disambiguate words and infer the most plausible plan/goal analysis of the input, while using the same mechanism to smoothly reinterpret the input if later context makes an alternative interpretation more likely. We present several experiments illustrating these abilities and comparing them to those of other connectionist models, and discuss several directions in which we are extending the model.

* Appears in International Journal of Man-Machine Studies, 36: 191-220, 1992.

======================================================================

REMIND: Retrieval From Episodic Memory by INferencing and Disambiguation

Trent E. Lange
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles

Charles M. Wharton
Department of Psychology
University of California, Los Angeles

Most AI simulations have modeled memory retrieval separately from language understanding, even though both activities seem to use many of the same processes. This paper describes REMIND (Retrieval from Episodic Memory through INferencing and Disambiguation), a structured spreading-activation model of integrated text comprehension and episodic reminding. In REMIND, activation is spread through a semantic network that performs dynamic inferencing and disambiguation to infer a conceptual representation of an input cue. Because stored episodes are associated with concepts used to understand them, the spreading-activation process also activates any memory episodes in the network that share features or knowledge structures with the cue. After the cue's conceptual representation is formed, the network recalls the memory episode having the highest activation. Since the inferences made from a cue often include actors' plans and goals only implied in a cue's text, REMIND is able to get abstract, analogical remindings that would not be possible without an integrated understanding and retrieval model.

* To appear in J. Barnden and K. Holyoak (Eds.), Advances in Connectionist and Neural Computation Theory, Volume II: Analogical Connections. Norwood, NJ: Ablex.

======================================================================

Both papers are broken into two (large) postscript files stored in compressed tarfiles.
To obtain a copy via FTP (courtesy of Jordan Pollack):

unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (type your E-mail address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get lange.disambiguation.tar.Z
ftp> get lange.remind.tar.Z
ftp> quit
unix% zcat lange.disambiguation.tar.Z | tar -xvf -
unix% lpr -s lange.disambiguation1.ps
unix% lpr -s lange.disambiguation2.ps
unix% zcat lange.remind.tar.Z | tar -xvf -
unix% lpr -s lange.remind1.ps
unix% lpr -s lange.remind2.ps

Note that the -s option for lpr is needed for most printers because of the large size of the uncompressed postscript files (~ 1 meg each). Sorry, no hard copies available.

Trent Lange
Artificial Intelligence Laboratory
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90024
E-Mail Address: lange at cs.ucla.edu

From tgd at chert.CS.ORST.EDU Sun Sep 6 11:56:44 1992
From: tgd at chert.CS.ORST.EDU (Tom Dietterich)
Date: Sun, 6 Sep 92 08:56:44 PDT
Subject: A neat idea from L. Breiman
In-Reply-To: Henrik Klagges's message of Mon, 31 Aug 92 14:01:11 PDT <9208312101.AA25658@tazdevil.llnl.gov>
Message-ID: <9209061556.AA24983@research.CS.ORST.EDU>

> hold {d_2, ..., d_{k-1}} constant ...by re-making decision d_1
How can you hold d_2 ... etc constant if they might depend on d_1, as in a game tree?
Cheers, Henrik
IBM Research
Lawrence Livermore National Labs

Suppose d_1' is an alternative way of making d_1. You can now evaluate the "board position" corresponding to {d_1', d_2, ..., d_{k-1}}. In some cases, of course, this will not be a legal position (and it should get a bad evaluation), but in many situations, it will be legal and possibly superior to {d_1, d_2, ..., d_{k-1}}.

--Tom

From hunter at nlm.nih.gov Tue Sep 8 10:42:31 1992
From: hunter at nlm.nih.gov (Larry Hunter)
Date: Tue, 8 Sep 92 10:42:31 -0400
Subject: Call for Papers: Intelligent Systems for Molecular Biology
Message-ID: <9209081442.AA04678@work.nlm.nih.gov>

***************** CALL FOR PAPERS *****************

The First International Conference on Intelligent Systems for Molecular Biology
July 7-9, 1993
Washington, DC

Organizing Committee
--------------------
Lawrence Hunter, National Library of Medicine
David Searls, University of Pennsylvania
Jude Shavlik, University of Wisconsin

Schedule
--------
Papers and Tutorial Proposals Due: February 15, 1993
Replies to Authors: March 29, 1993
Revised Papers Due: April 26, 1993

Program Committee
-----------------
D. Brutlag, Stanford
B. Buchanan, U. of Pittsburgh
C. Burks, Los Alamos
F. Cohen, UC-SF
C. Fields, TIGR
M. Gribskov, UC-SD
P. Karp, SRI
A. Lapedes, Los Alamos
R. Lathrop, MIT
C. Lawrence, Baylor
M. Mavrovouniotis, U-Md
G. Michaels, NIH/DCRT
H. Morowitz, George Mason
K. Nitta, ICOT
M. Noordewier, Rutgers
R. Overbeek, Argonne
C. Rawlings, ICRF
D. States, NLM, NIH
G. Stormo, U. of Colorado
E. Uberbacher, Oak Ridge
D. Waltz, Thinking Machines

Sponsors: American Association for Artificial Intelligence, National Library of Medicine

The First International Conference on Intelligent Systems for Molecular Biology will take place in Washington, DC, July 7-9, 1993. The conference will bring together scientists who are applying the technologies of artificial intelligence, robotics, neural networks, massively parallel computing, advanced data modelling, and related methods to problems in molecular biology.
Participation is invited from both producers and consumers of any novel computational or robotic system, provided it supports a biological task that is cognitively challenging, involves a synthesis of information from multiple sources at multiple levels, or in some other way exhibits the abstraction and emergent properties of an "intelligent system."

The three-day conference, to be held in the attractive conference facilities of the Lister Hill Center, National Library of Medicine, National Institutes of Health, will feature both introductory tutorials and original, refereed papers, to be published in an archival Proceedings. The conference will immediately precede the Eleventh National Conference of the American Association for Artificial Intelligence, also in Washington.

Papers should be 12 pages, single-spaced and set in 12 point type, including title, abstract, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, telephone, and FAX numbers. Submit 6 copies to the address shown. For more information, contact ISMB at nlm.nih.gov.

Jude Shavlik
Computer Sciences Dept
University of Wisconsin
1210 W. Dayton Street
Madison, WI 53706

*****************************************************************

From tgd at chert.CS.ORST.EDU Wed Sep 9 19:49:43 1992
From: tgd at chert.CS.ORST.EDU (Tom Dietterich)
Date: Wed, 9 Sep 92 16:49:43 PDT
Subject: Machine Learning 9:2/3
Message-ID: <9209092349.AA21744@research.CS.ORST.EDU>

Machine Learning
July 1992, Volume 9, Issues 2/3
Special Issue on Computational Learning Theory

Introduction
J. Case and A. Blumer

Lower Bound Methods and Separation Results for On-Line Learning Models
W. Maass and G. Turan

Learning Conjunctions of Horn Clauses
D. Angluin, M. Frazier, and L. Pitt

A Learning Criterion for Stochastic Rules
K. Yamanishi

On the Computational Complexity of Approximating Distributions by Probabilistic Automata
N. Abe and M. K. Warmuth

A Universal Method of Scientific Inquiry
D.N. Osherson, M. Stob, and S. Weinstein

-----
Subscriptions - Volumes 8-9 (8 issues) include postage and handling.
$140 Individual
$88 AAAI Member
$301 Institutional

Kluwer Academic Publishers
P.O. Box 358, Accord Station
Hingham, MA 02018-0358 USA

or

Kluwer Academic Publishers Group
P.O. Box 322
3300 AH Dordrecht
THE NETHERLANDS

From RREILLY at ccvax.ucd.ie Wed Sep 9 06:33:00 1992
From: RREILLY at ccvax.ucd.ie (RREILLY@ccvax.ucd.ie)
Date: 09 Sep 1992 10:33 +0000 (WET)
Subject: Post-doctoral Fellowship
Message-ID:

Human Capital and Mobility Programme of the Commission of the European Communities

Postdoctoral Fellowship
==============================================================

Applications are invited for an EC-funded post-doctoral fellowship with the connectionist research group in the Dept. of Computer Science, University College Dublin, Ireland. The duration of the fellowship may be between 6 and 12 months. Remuneration will be at a rate of 3,255 ECU/month (this covers subsistence, tax, social insurance, etc.). The fellowship is open to EC citizens other than citizens of Ireland.

The research topics are: (1) the connectionist modelling of eye-movement control in reading, and (2) the connectionist modelling of natural language processing.

Interested candidates should send me a letter of application, a CV, and a list of their publications. They should also indicate which research topic, and what particular aspects of it, they are interested in working on.
Since the closing date for receipt of applications is September 25, candidates are encouraged to send their applications either by e-mail or FAX.

Ronan Reilly
Dept. of Computer Science
University College Dublin
Belfield
Dublin 4
IRELAND
Tel.: +353.1.7062475
Fax: +353.1.2697262
e-mail: rreilly at ccvax.ucd.ie

=====================================================================

From ingber at alumni.cco.caltech.edu Sat Sep 12 14:47:12 1992
From: ingber at alumni.cco.caltech.edu (Lester Ingber)
Date: Sat, 12 Sep 1992 11:47:12 -0700
Subject: 2nd Request for (p)Reprints on Simulated Annealing
Message-ID: <9209121847.AA05098@alumni.cco.caltech.edu>

2nd Request for (p)Reprints on Simulated Annealing

I posted the text below in July, and have received many interesting papers which I will at least mention in my review. It is clear that many researchers use something "like" simulated annealing (SA) in their work to approach quite difficult computational problems. They take advantage of the ease of including complex constraints and nonlinearities in an SA approach that requires quite simple and small code, especially relative to many other search algorithms.

However, the bulk of the papers I have seen use standard Boltzmann annealing, for which a logarithmic annealing schedule for the temperature parameter has been proven sufficient to statistically achieve a globally optimal solution. This can require a great deal of CPU time to implement, and so these papers actually "quench" their searches by using much faster temperature schedules, too fast to theoretically claim they are achieving the global optimum. Instead they have defined their own method of simulated quenching (SQ). In many of their problems this really is not much of an issue, as there is enough additional information about their system to be able to claim that their SQ is good enough, and the ease of implementation certainly warrants its use. Indeed, anyone familiar with trying to use other "standard" methods of nonlinear optimization on difficult problems will appreciate this. I also appreciate that faster SA methods, such as I have published myself, are not as easily implemented.

I would like to have more examples of:
(1) papers that have really used SA instead of SQ in difficult problems.
(2) proposed/tested improvements to SA which still have the important feature of establishing at least a heuristic argument that a global optimum can indeed be reached, e.g., some kind of ergodic argument.

The review is on SA, and I do not have the allotted space or intention to compare SA to other important and interesting algorithms. Thanks. Lester

}I have accepted an invitation to prepare a review article on simulated
}annealing for Statistics and Computing. The first draft is due 15
}Jan 93.
}
}If you or your colleagues have performed some unique work using
}this methodology that you think could be included in this review,
}please send me (p)reprints via regular mail. As I will be
}making an effort to prepare a coherent article, not necessarily an
}all inclusive one, please do not be too annoyed if I must choose not
}to include/reference work you suggest. Of course, I will formally
}reference or acknowledge any inclusion of your suggestions/material
}in this paper. While there has been work done, and much more remains
}to be done, on rigorous proofs and pedagogical examples/comparisons,
}I plan on stressing the use of this approach on complex, nonlinear
}and even stochastic systems.
}
}I am a "proponent" of a statistical mechanical approach to selected
}problems in several fields; some recent reprints are available via
}anonymous ftp from ftp.umiacs.umd.edu [128.8.120.23] in the pub/ingber
}directory. I am not a hardened "proponent" of simulated annealing;
}I welcome papers criticizing or comparing simulated annealing to
}other approaches. I already plan on including some references that
}are openly quite hostile to this approach.

# Prof. Lester Ingber #
# ingber at alumni.caltech.edu #
# P.O. Box 857 #
# McLean, VA 22101 [10ATT]0-700-L-INGBER #

From harnad at Princeton.EDU Sat Sep 12 17:33:57 1992
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Sat, 12 Sep 92 17:33:57 EDT
Subject: Express Saccades & Attention: BBS Call for Commentators
Message-ID: <9209122133.AA08081@clarity.Princeton.EDU>

Below is the abstract of a forthcoming target article by B. Fischer & H. Weber on express saccadic eye movements and attention. It has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to:

harnad at clarity.princeton.edu or harnad at pucc.bitnet

or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract.

____________________________________________________________________

EXPRESS SACCADES AND VISUAL ATTENTION

B. Fischer and H. Weber
Department of Neurophysiology
Hansastr. 9
D - 78 Freiburg
Germany
aiple at sun1.ruf.uni-freiburg.de (c/o Franz Aiple)

KEYWORDS: Eye movements, Saccade, Express Saccade, Vision, Fixation, Attention, Cortex, Reaction Time, Dyslexia

ABSTRACT: One of the most intriguing and controversial observations in oculomotor research in recent years is the phenomenon of express saccades in man and monkey. These are saccades with such extremely short reaction times (100 ms in man, 70 ms in monkey) that some experts on eye movements still regard them as artifacts or anticipatory reactions that do not need any further explanation. On the other hand, some research groups consider them to be not only authentic but also a valuable means of investigating the mechanisms of saccade generation, the coordination of vision and eye movements, and the mechanisms of visual attention. This target article puts together pieces of experimental evidence in oculomotor and related research - with special emphasis on the express saccade - in order to enhance our present understanding of the coordination of vision, visual attention, and eye movements necessary for visual perception and cognition. We hypothesize that an optomotor reflex is responsible for the occurrence of express saccades, one that is controlled by higher brain functions of disengaged visual attention and decision making.
We describe a neural network as a basis for more elaborate mathematical models and computer simulations of the optomotor system in primates.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.fischer). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

-------------------------------------------------------------

To retrieve a file by ftp from a Unix/Internet site, type either:
ftp princeton.edu
or
ftp 128.112.128.1

When you are asked for your login, type:
anonymous

Enter password as per instructions (make sure to include the specified @), and then change directories with:
cd /pub/harnad

To show the available files, type:
ls

Next, retrieve the file you want with (for example):
get bbs.fischer

When you have the file(s) you want, type:
quit

Certain non-Unix/Internet sites have a facility you can use that is equivalent to the above. Sometimes the procedure for connecting to princeton.edu will be a two-step process such as:
ftp
followed at the prompt by:
open princeton.edu
or
open 128.112.128.1

In case of doubt or difficulty, consult your system manager.

----------

JANET users who do not have an ftp facility for interactive file transfer (this requires a JIPS connection on your local machine - consult your system manager if in doubt) can use a similar facility available at JANET site UK.AC.NSF.SUN (numeric equivalent 000040010180), logging in using 'guestftp' as both login and password. The online help information gives details of the transfer procedure, which is similar to the above. The file received on the NSF.SUN machine needs to be transferred to your home machine to read it, which can be done either using a 'push' command on the NSF.SUN machine, or (usually faster) by initiating the file transfer from your home machine. In the latter case the file on the NSF.SUN machine must be referred to as directory-name/filename (the directory name to use being that provided by you when you logged on to UK.AC.NSF.SUN). To be sociable (since NSF.SUN is short of disc space), once you have received the file on your own machine you should delete the file from the UK.AC.NSF.SUN machine.

This facility is very often overloaded, and an off-line relay facility at site UK.AC.FT-RELAY (which is simpler to use in any case) can be used as an alternative. The process is almost identical to file transfer within JANET, and the general method is illustrated in the following example. With some machines, filenames and the username need to be placed within quotes to prevent unacceptable transposition to upper case (as may apply also to the transfer from NSF.SUN described above).

transfer
Send or Fetch: f

From holm at nordita.dk Mon Sep 14 11:26:45 1992
From: holm at nordita.dk (Holm Schwarze)
Date: Mon, 14 Sep 92 17:26:45 +0200
Subject: paper available in neuroprose
Message-ID: <9209141526.AA01594@norsci0.nordita.dk>

** DO NOT FORWARD TO OTHER GROUPS **

The following paper has been placed in the Neuroprose archive in file schwarze.committee.ps.Z. Retrieval instructions follow the abstract. Hardcopies are not available.
-- Holm Schwarze (holm at nordita.dk)

-------------------------------------------------------------------------

GENERALIZATION IN FULLY CONNECTED COMMITTEE MACHINES

H. Schwarze and J. Hertz
CONNECT, The Niels Bohr Institute and Nordita
Blegdamsvej 17, DK-2100 Copenhagen, Denmark

ABSTRACT

We study supervised learning in a fully connected committee machine trained to implement a rule of the same structure. The generalization error as a function of the number of training examples per weight is calculated within the annealed approximation. For binary weights we find a discontinuous transition from poor to perfect generalization. Beyond this transition metastable states exist even for large training sets. The scaling of the order parameters with the number of hidden units depends on the size of the training set. For continuous weights we find a discontinuous transition from a committee-symmetric solution to one with specialized hidden units.

-------------------------------------------------------------------------

To retrieve the paper by anonymous ftp:

unix> ftp archive.cis.ohio-state.edu # (128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get schwarze.committee.ps.Z
ftp> quit
unix> uncompress schwarze.committee.ps.Z
unix> lpr schwarze.committee.ps

-------------------------------------------------------------------------

From ie-list at cs.ucl.ac.uk Mon Sep 14 07:13:32 1992
From: ie-list at cs.ucl.ac.uk (IE Digest Moderator)
Date: Mon, 14 Sep 92 12:13:32 +0100
Subject: Intelligent systems for Economics digest
Message-ID:

Announcing the Intelligent systems for Economics digest (IE-digest)
-------------------------------------------------------------------

The Intelligent systems for Economics digest aims to act as a forum for exchanging ideas on using `intelligent' techniques to model economic and financial systems. Techniques which were originally developed to model psychological and biological processes are now receiving considerable attention as tools for modelling and understanding economic and financial processes. These techniques, which include neural networks, genetic algorithms and expert systems, are now being used in a wide variety of applications, including the modelling of economic cycles, modelling of artificial economies, portfolio optimisation and credit evaluation.

The IE-digest will carry announcements of papers, calls for papers, and requests for information, and will act as a medium for researchers to exchange ideas in this rapidly growing research area. The format of the IE-digest is similar to other moderated forums such as the "neuron-digest". A repository has been set up for papers, bibliographies, and software, which can be accessed via FTP. Past issues of the IE-digest will also be kept there.

* The Relevant Technologies

Neural Networks, Genetic Algorithms, Classifier Systems, Expert Systems, Fuzzy Logic, Rule Induction, Dynamical Systems Theory (Chaos Theory), Artificial Life techniques and Hybrid Systems combining these technologies.

* The IE-digest welcomes postings on the application of these technologies in the following areas. (The list is not exhaustive.)

Economic Applications: Modelling artificial economies, forecasting economic time series, modelling behavioural decision making, modelling the evolution of economic webs, modelling economic development, modelling structural changes in economies, and artificial adaptive agents.
Financial Applications: Portfolio optimisation, forecasting and modelling financial markets, understanding financial news, risk management, trading systems, credit evaluation, bond rating, modelling artificial traders and markets, and other related applications.

Send administrative requests (additions, deletions to the list, etc.) to:
IE-list-request at cs.ucl.ac.uk

Send contributions to:
IE-list at cs.ucl.ac.uk

(For users in the UK: IE-list-request at uk.ac.ucl.cs and IE-list at uk.ac.ucl.cs)

The archive for papers, software, and back issues can be accessed via anonymous ftp at cs.ucl.ac.uk (128.16.5.31); the directory name is: ie. [The documents are also available by FTAM, NIFTP and the info-server.]

List Moderator: Suran Goonatilake, Dept. of Computer Science, University College London, Gower St., London WC1E 6BT, UK
surang at cs.ucl.ac.uk

From wolff at cache.crc.ricoh.com Mon Sep 14 12:09:58 1992
From: wolff at cache.crc.ricoh.com (Gregory J. Wolff)
Date: Mon, 14 Sep 92 09:09:58 -0700
Subject: Paper available on Neuroprose: Stork.obs.ps.Z
Message-ID: <9209141609.AA09662@styx.crc.ricoh.com>

The following paper has been placed on the neuroprose archive as stork.obs.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will be presented at NIPS-92.

=========================================================================

Second Order Derivatives for Network Pruning: Optimal Brain Surgeon

Babak Hassibi and David G. Stork, Ricoh California Research Center

ABSTRACT: We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization and increase the speed of further training. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods, which can often remove the wrong weights. OBS also represents a major improvement over other methods, such as Optimal Brain Damage [Le Cun, Denker and Solla, 1990], because ours uses the full off-diagonal information of the Hessian matrix H. Crucial to OBS is a recursion relation for calculating the inverse of H from training data and structural information of the net. We illustrate OBS on standard benchmark problems: the MONK's problems. The most successful method in a recent competition in machine learning [Thrun et al., 1991] was backpropagation using weight decay, which yielded a network with 58 weights for one MONK's problem. OBS requires only 14 weights for the same performance accuracy. On the two other MONK's problems, our method required only 38% and 10% of the weights found by magnitude-based pruning.

===========================================================================

Here is an example of how to retrieve this file:

gvax> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron at wherever
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose
250 CWD command successful.
ftp> get stork.obs.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for stork.obs.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> uncompress stork.obs.ps.Z
gvax> lpr stork.obs.ps

From mherrmann at informatik.uni-leipzig.dbp.de Mon Sep 14 11:21:48 1992
From: mherrmann at informatik.uni-leipzig.dbp.de (mherrmann@informatik.uni-leipzig.dbp.de)
Date: Mon, 14 Sep 1992 17:21:48 +0200
Subject: context sensitivity of representations
Message-ID: <920914172146*/S=mherrmann/OU=informatik/PRMD=UNI-LEIPZIG/ADMD=DBP/C=DE/@MHS>

Andy Clark (andyc at cogs.sussex.ac.uk) asks:

> Must [the representational] process bottom out somewhere in a
> set of microfeatures which are genuinely SEMANTIC (genuinely
> contentful but which are NOT prone to contextual infection)?

I believe that it is possible to build a representational system in which all the components are context-sensitive, and that it *may* be necessary to do so in order to achieve sophisticated behaviour such as analogical reasoning. (This belief is not based on empirical work - so those who don't like speculation on this mailing list will probably want to leave right now.)

I want to break up Andy's question into four areas: bottoming out of data structures, bottoming out of meaning, context sensitivity, and semantic primitives.

BOTTOMING OUT OF DATA STRUCTURES

My interest is in analogical reasoning - a domain that demands sophisticated data structures. So I will assume that we are talking about representational systems capable of implementing complex structures. Traditional symbolic AI systems often have tree-structured representations. Repeated application of a decomposition operator (CAR or CDR for LISP) will eventually bottom out at a leaf of the tree. However, this needn't always be the case: e.g. the underlying structure might be a network. If you wanted a system that reasoned about unobserved parts of instances you might have a tree representing the component structure of an instance and a semantic net encoding default class information. Application of a decomposition operator to a leaf of the instance tree would cause the leaf to be expanded with a fringe of information resulting from binding instance information into semantic net information. Thus the tree would *appear* to be unbounded.

In the connectionist domain, Bruce MacLennan (1991) discussed knowledge representation in infinite-dimensional vector spaces. A decomposition operator is just a function that can be applied to any pattern to yield a component pattern of the same dimensionality as the parent. Repeated decomposition *may* yield nonsense patterns - but, in principle, it should be possible to construct a cyclic structure that never bottoms out, like a semantic net. MacLennan also points out the possibility of multiple decomposition operators (not necessarily restricted to the inverse of a single composition operator). I suspect that systems designed to generate multiply decomposable representations will be very interesting.

BOTTOMING OUT OF MEANING

Consider a dictionary. The definitions form a network with each word defined in terms of other words. There are no semantic primitives here. Each word is as fuzzily defined as any other, although analysis may reveal that some words are more central than others. The amazing fact is that dictionaries are actually useful - it is possible to gain some understanding of the meaning of a word from a dictionary even though it contains no semantic primitives. But this only works if there is an environment that is understood by the reader and represented by the dictionary.
Stevan Harnad (1990), in discussing the symbol-grounding problem, makes the point that it is impossible to learn Chinese from a Chinese-to-Chinese dictionary. Purely formal systems (like maths) are predicated on internal consistency, can be understood with no reference to external environments, and would be nonsense without semantic primitives. Representational systems *can* be based on semantic primitives, as in classical AI, but I suspect that they are crippled when used to represent an environment of greater intrinsic complexity than the complexity of the representation language. Representational systems *can* be built that are not dependent on semantic primitives, as in the case of the dictionary, provided that an external environment is available for grounding of the meaning.

CONTEXT SENSITIVITY

Andy Clark gives an example of context sensitivity where the pattern representing the to-be-represented object varies depending on the context in which the object occurs. In this case the micro-structure of the representation varies (although this begs the question of how the system 'knows' that the different patterns represent the same thing).

It seems to me that there are other types of context-sensitivity. Looking at the outermost perceptual end of the system, there are context sensitivities in the encoding process. For example, there are many perceptual adaptation phenomena and simultaneous contrast phenomena. From the point of view of an external observer, the encoding of some external quantity into a neural firing rate is dependent on the environmental context. From the point of view of up-stream processing, a given firing level does not have a constant mapping to the external physical quantity.

The process of transduction from the perceptual to cognitive domains is also context dependent. Dave Chalmers et al (1991) argue very strongly for the context sensitivity of the representational process. Their contention is that the information extracted from the perceptual flow *must* depend on the cognitive states (current goals, beliefs etc) of the system. In this case the individual components of the representation are *not necessarily* context dependent, but the overall representational structure that is extracted from the perceptual data must depend on the context. Similarly, cognitive activities (such as reasoning) might be seen as similar to the perceptual process (interpreting one structure to yield another) and could also be expected to be context-sensitive.

Returning to the Chinese dictionary example given earlier, it is obvious that the interpretation of the components is context-sensitive (cf. Wittgenstein on the impossibility of precise definition) but the actual words themselves (representational atoms) are discrete and context-free.

Someone with a flair for maths might be able to come up with a proof of the inevitability of context-sensitivity. An organism is constrained to make inferences under conditions of extreme uncertainty: it must respond to an environment that is more complex than can be represented exactly, it operates under resource constraints of computational power and response time, and the number of observations available is far too small to uniquely constrain a mental model. Under such conditions the best strategy may well be to allow underconstrained mental models, with the low-bandwidth input being used as a source of confirmation of model predictions rather than being directly transduced into the model.
SEMANTIC PRIMITIVES

Andy couches his explanation of context-sensitivity in terms of microfeatures. Conceiving of representations in this way makes it difficult to see how representations can be based on anything other than context-free atoms, because microfeatures *are* context-free atoms (if we ignore the low-level perceptual context-sensitivity mentioned above). The term 'micro-features' conjures up associations of grandmother cells and hand-coded representations of the type argued against by Chalmers et al.

It should be obvious from my earlier comments that I don't think semantic primitives are necessary for grounded systems. This begs the question of how the system could be grounded. The prime requirement is that the environmental input can be predicted (or at least checked for consistency) from the representation. This obviously doesn't *necessarily* require direct representation of the environmental patterns in the cognitive structure - only that such patterns can be generated or checked at the point of transduction. Many symbolic AI representations are based on the notion of objects (and I believe that to be a useful abstraction), and while there may be objects in the real world, each individual perceptual input says next to nothing about objects. That is, objects are an internal construction, not a simple re-coding of the perceptual input.

Another possibility is that the representations are based on system dynamics rather than direct encoding of an 'external reality'. The usual view is to see representations as passive objects (like computer data structures) that are acted upon - but it is also possible to see representations as active entities that transform other representations (a little akin to procedural representations in AI). Janet Wiles et al (1991) have argued that activation patterns in recurrent networks can be seen dually as static representations and as dynamic operators. The trick with this approach is that the representations must be functional and cannot be arbitrary - so the hard part is to learn/construct representations so that the dynamic effect when the representation is applied as an operator has the correct semantics, as confirmed by the environment.

The same argument can be applied to perceptual inputs. The usual conception of perceptual processing has a direct signal path from lower to higher levels, with the signal being recoded along the way. A more indirect, context-sensitive, interpretive approach would view the main signal flow as being confined within the higher levels and view the perceptual input as modulating the processing parameters of the high-level signal transformations rather than directly inserting signals into that flow. Confirmation of the accuracy of modelling the environment could be by the perceptual signals modulating the parameters of a cost function on the cognitive representation rather than entering directly into the cost function.

SUMMARY

Data structures don't have to bottom out.
Meaning doesn't have to bottom out in semantic primitives.
You do need an environment to ground the symbols.
There are different types of context-sensitivity.
Everything *should* be context-sensitive if you want to build a sophisticated system.
The representation doesn't have to *directly* represent the environment.

REFERENCES

Chalmers, D.J., French, R.M., & Hofstadter, D.R. (1991). High-level perception, representation and analogy: A critique of artificial intelligence methodology. Indiana University, Bloomington, CRCC technical report 49.

Harnad, S.
(1990) The symbol grounding problem. Physica D 42: 335-346.

MacLennan, B. (1991) Continuous symbol systems: The logic of connectionism. University of Tennessee, Knoxville, Department of Computer Science technical report CS-91-145.

Wiles, J., Stewart, J.E.M., & Bloesch, A. (1991) Patterns of activations are operators in recurrent networks. Proceedings of the 2nd Australian Conference on Neural Networks, 44-48.

----------------

I will let you know when I get this to work. Don't hold your breath.

Ross Gayler
ross at psych.psy.uq.oz.au

From kak at max.ee.lsu.edu Tue Sep 15 11:23:24 1992
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Tue, 15 Sep 92 10:23:24 CDT
Subject: No subject
Message-ID: <9209151523.AA11940@max.ee.lsu.edu>

----------------------
Papers for the Sessions on Neural Networks at FT&T [First International Conference on Fuzzy Theory & Technology, October 14-18, 1992, Durham, NC]

General Chair: Professor Paul P. Wang, Dept of Electrical Engrg, Duke University, Durham, NC 27706
----------------------

--------
Session 1: October 15, 1992, 2:15 PM - 3:55 PM
Chairman: Professor W.A. Porter, Univ of Alabama at Huntsville

H. Kim, University of Missouri-Rolla, Designing of Reliable Feedforward Neural Networks Based On Fault-Tolerant Neurons.
W.A. Porter, C. Bowden, W. Liu, University of Alabama at Huntsville and U.S. Army Missile Command, Alphabet Character Recognition with a Generalizing Neural Network.
V. Kurkova, P.C. Kainen, Czechoslovak Academy of Sciences and Industrial Math, Univ of Maryland, Fuzzy Orthogonal Dimension and Error-Correcting Classification by Perceptron Type Networks.
G. Georgiou, California State University, San Bernardino, Activation Functions for Neural Networks in the Complex Domain.
S.C. Kak, LSU, A New Learning Algorithm for Feedforward Neural Networks.

--------------
Session 2: October 16, 1992, 9:45 AM - 11:30 AM
Chairman: Professor George Georgiou, California State University, San Bernardino

S. Saha and J.P. Christensen, LSU, Genetic Design of Sparse Neural Networks.
H.L. Hiew and C.P. Tsang, Univ of Western Australia, An Adaptive Fuzzy System for Modelling Chaos.
F. Lin and K. Lee, Santa Clara University and Cirrus Logic, A Parallel Computation Network for the Maximum Clique Problem.
S. Sivasubramaniam, Acutec, Ft. Lauderdale, A Feature Extraction Heuristic for Neural Networks.
W.A. Porter, S.X. Zheng, and W. Liu, Univ of Alabama at Huntsville, A Neural Controller for Discrete Plants with Unknown Noise.
C. Cramer, LSU, Pruning Hidden Neurons in the Kak Algorithm.

From ajr at eng.cam.ac.uk Tue Sep 15 15:42:51 1992
From: ajr at eng.cam.ac.uk (Tony Robinson)
Date: Tue, 15 Sep 92 15:42:51 BST
Subject: The sigmoid is the posterior distribution from Gaussian likelihoods
Message-ID: <14990.9209151442@dsl.eng.cam.ac.uk>

The subject line says it all. Given N classes, each of which has a Gaussian distribution in the input space (with common covariance matrix), it is reasonably well known that the discriminant function is a hyperplane (e.g. Kohonen's book, section 7.2). But what I didn't know until a month ago is that if you calculate the posterior probabilities using Bayes' rule from the Gaussian likelihoods, then you end up with a weighted sum computation and the Potts/softmax activation function for N classes, or the sigmoid for the two-class case. This is exactly the same function as computed in the last layer of a multi-layer perceptron used for classification.
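In the two-class case the algebra is short enough to sketch here. A minimal derivation, with assumed notation (class means mu_1 and mu_2, shared covariance Sigma, priors P(c_1) and P(c_2) - none of these symbols come from the posting itself), written in LaTeX:

\begin{eqnarray*}
P(c_1 \mid x) & = & \frac{p(x \mid c_1) P(c_1)}{p(x \mid c_1) P(c_1) + p(x \mid c_2) P(c_2)}
  \; = \; \frac{1}{1 + e^{-a}} \; = \; \sigma(a), \\
a & = & \ln \frac{p(x \mid c_1) P(c_1)}{p(x \mid c_2) P(c_2)} \; = \; w^T x + b, \\
w & = & \Sigma^{-1} (\mu_1 - \mu_2), \\
b & = & -{\textstyle\frac{1}{2}} \mu_1^T \Sigma^{-1} \mu_1
        + {\textstyle\frac{1}{2}} \mu_2^T \Sigma^{-1} \mu_2
        + \ln \frac{P(c_1)}{P(c_2)}.
\end{eqnarray*}

The quadratic term x^T Sigma^{-1} x cancels because the covariance matrix is common to both classes, which is what makes a linear in x; for N classes the same manipulation gives P(c_i | x) = e^{a_i} / sum_j e^{a_j}, i.e. the softmax.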
One nice corollary is that the "bias" weight contains the log of the prior for the class, and so may be adjusted to compensate for different training/testing environments. Another is that provided the data near the class boundary can accurately be modelled as Gaussian, the sigmoid gives a good estimate of the posterior probabilities. From this viewpoint, the function of the lower levels of a multi-layer perceptron is to generate Gaussian distributions with identical covariance matrices.

Feedback from veterans of the field has been "yes, of course I knew that", but in case this is new to you, as it was to me, I have written it up as part of a tutorial paper which is available from the anonymous ftp site svr-ftp.cam.eng.ac.uk as file reports/robinson_cnnss92.ps.Z. The same directory carries an INDEX file detailing other reports which may be of interest.

Tony [Robinson]

From rohwerrj at cs.aston.ac.uk Tue Sep 15 18:05:18 1992
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Tue, 15 Sep 92 18:05:18 BST
Subject: studentships available
Message-ID: <4383.9209151705@cs.aston.ac.uk>

*****************************************************************************
PhD STUDENTSHIPS AVAILABLE in NEURAL NETWORKS
Dept. of Computer Science and Applied Mathematics
Aston University
*****************************************************************************

Funding has unexpectedly become available at the last minute for 1 or possibly 2 PhD studentships in the Neural Networks group at Aston University. Ideally the students would enroll in October 1992. The group currently consists of Professor David Bounds, lecturers Richard Rohwer and Alan Harget, and 7 PhD students. Current research projects are drawn from Genetic Algorithms and Artificial Life, as well as main-line neural network subjects such as local basis function techniques and training algorithm research, with an emphasis on recurrent networks. For further information please contact me at the address below.

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
Tel: (44 or 0) (21) 359-3611 x4688 (failing that, leave message at x4243)
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs <-- email communication preferred.

From rohwerrj at cs.aston.ac.uk Tue Sep 15 18:03:13 1992
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Tue, 15 Sep 92 18:03:13 BST
Subject: Senior Academic Post Available
Message-ID: <4379.9209151703@cs.aston.ac.uk>

**************************************************************************
Senior Academic Post Available
Dept. of Computer Science and Applied Mathematics
Aston University
**************************************************************************

The Aston University Department of Computer Science and Applied Mathematics is building a research group in neural networks, genetic algorithms and related subjects. The group, led by the department chairman, Professor David Bounds, and lecturers Richard Rohwer and Alan Harget, currently has 7 PhD students. The department is seeking a new senior faculty member, preferably at Reader or Professorial level, to augment this group. The candidate must have proven skills as a research leader. The appointee will also be involved in some teaching and fundraising, and will be expected to actively build upon Aston's close relationship with industry. There is no prescribed timetable for filling this post.

The Department has substantial computing resources, including a Sequent Symmetry and 2 large Sun networks.
Space has been set aside for expansion. Aston University is in Birmingham, a convenient central England location with easy access to the rest of England and Wales. Inquiries should be directed to:

Professor David Bounds
CSAM
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
(44 or 0) (21) 359-3611 x4243

From hinton at ai.toronto.edu Wed Sep 16 11:03:53 1992
From: hinton at ai.toronto.edu (Geoffrey Hinton)
Date: Wed, 16 Sep 1992 11:03:53 -0400
Subject: The sigmoid is the posterior distribution from Gaussian likelihoods
In-Reply-To: Your message of Tue, 15 Sep 92 10:42:51 -0400.
Message-ID: <92Sep16.110402edt.441@neuron.ai.toronto.edu>

This result (for a single output unit) was published by Hinton and Nowlan in 1990 in Neural Computation (Vol. 2, page 359). But it is unlikely that this was the first publication.

Geoff

From ajr at eng.cam.ac.uk Wed Sep 16 16:31:38 1992
From: ajr at eng.cam.ac.uk (Tony Robinson)
Date: Wed, 16 Sep 92 16:31:38 BST
Subject: The sigmoid is the posterior distribution from Gaussian likelihoods
Message-ID: <27646.9209161531@dsl.eng.cam.ac.uk>

Geoff Hinton writes:
>This result (for a single output unit) was published by Hinton and Nowlan in
>1990 in Neural Computation (Vol. 2, page 359). But it is unlikely that this was
>the first publication.

Indeed. There is a pretty good analysis in Duda and Hart, "Pattern Classification and Scene Analysis", Wiley Interscience (1973). I'm reliably informed that the forthcoming second edition will be even better.

Also, I made a mistake in my original posting: the ftp address is really svr-ftp.eng.cam.ac.uk, sorry about that.

Tony [Robinson]

From jaap.murre at mrc-apu.cam.ac.uk Tue Sep 15 17:45:08 1992
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Tue, 15 Sep 92 17:45:08 BST
Subject: Request references on volume and connectivity
Message-ID: <11996.9209151645@sirius.mrc-apu.cam.ac.uk>

Request for references

I have recently been working on some theories on the implementation of neural networks. For example, it can be shown that a fully connected brain would measure over 10x10x10 meters. I am also interested in the volume of randomly connected and modular neural networks. I am not sure whether I am working within the correct framework at the moment, so I would like to verify my results. Does anyone know of similar work? The only reference I have come across is by Nelson and Bower (1990) in TINS. They calculated that a fully connected brain with neurons on the surface of a sphere would have a 10 km radius! This is clearly not a useful framework for approximation as it leads to gross overestimates.

Thanks,
Jaap Murre

Jacob M.J. Murre
MRC Applied Psychology Unit
15 Chaucer Road
Cambridge CB2 2EF
United Kingdom

From robtag at udsab.dia.unisa.it Thu Sep 17 13:06:03 1992
From: robtag at udsab.dia.unisa.it (Tagliaferri Roberto)
Date: Thu, 17 Sep 92 19:06:03 +0200
Subject: Postdoctoral Fellowship
Message-ID: <9209171706.AA14953@udsab.dia.unisa.it>

Human Capital and Mobility Programme of the Commission of the European Communities
Postdoctoral Fellowship
==============================================================

Applications are invited for 3 EC-funded post-doctoral fellowships with the INFM (Italian Institute of Matter Physics) Section of Salerno University, Italy. The duration of the fellowships is 12 months. Remuneration will be at a rate of 3,644 ECU/month (this covers subsistence, tax, social insurance, etc.). The fellowships are open to EC citizens other than citizens of Italy.
The research topics are:
(1) Electronic states in normal and superconducting systems with strong correlations
(2) Neural Networks for Signal Processing
(3) Superconductivity at high critical temperature

Interested candidates should send a letter of application, a CV, and a list of their publications to:

Dr. Alfonso Romano
Dept. Fisica Teorica
Univ. Salerno
I-84081 Baronissi (SA)
Italia
E-mail: alforom at salerno.infn.it
fax: +39 89 822275

Since the closing date for receipt of applications is September 24, candidates are encouraged to send their applications either by e-mail or FAX.

=====================================================================
A more detailed description of the research activities (in Latex) follows:

\magnification=1200 \baselineskip 15pt \tolerance 10000 \hsize 150truemm \vsize 220truemm \nopagenumbers
\noindent {\bf 1) Supervisor: Prof.$\,$M.Marinaro} \vskip 1truecm
\noindent Prof.$\,$M.Marinaro offers to supervise one postdoctoral fellow for a period of 12 months. The activity will be theoretical and based on the use of Quantum Field Theory methods applied to Condensed Matter Physics.
\noindent The fellow will work in a group of 4 experienced people, who collaborate with other scientists, such as Prof.$\,$H.Matsumoto (Sendai, Japan), Prof.$\,$R.Micnas (Poznan, Poland), Prof.$\,$G.Iadonisi (Naples, Italy).
\noindent The activity proposed to the prospective fellow is the following: \vskip 0.2truecm
\noindent {\bf Name of activity} \vskip 0.2truecm
\item {} Electronic states in normal and superconducting systems with strong correlations \vskip 0.2truecm
\noindent {\bf Objectives of activity}
\item {} The physical properties of systems with strong electronic correlations have so far been studied within the framework of the periodic Anderson model, by means of a perturbative expansion in the kinetic term of the conduction electrons. Special attention has been devoted to the structure of the electronic density of states and to the study of the singlet and triplet superconducting solutions generated by the inclusion of an attractive off-site interaction between correlated electrons.
\item {} The continuation of this kind of analysis and the application of similar techniques to other correlated electron models, such as those used in the theory of high-$T_c$ superconductors, represents the research activity planned for the near future. \vskip 0.2truecm
\noindent See, for example:
\noindent M.Marinaro, C.Noce and A.Romano, J. Phys.: Cond. Matt. {\bf 3}, 3719 (1991); Il Nuovo Cimento D, in press (September or October 1992 issue)
%\picture 1 5 {}
\vfill\eject
\noindent {\bf 2) Supervisor: Prof.$\,$E.R.Caianiello} \vskip 1truecm
\noindent Prof.$\,$E.R.Caianiello offers to supervise one postdoctoral fellow for a period of 12 months. The activity will be mainly experimental and based on the use of Neural Networks for adaptive signal processing and feature extraction.
\noindent The fellow will work in a group of 5 experienced people, who collaborate with other scientists from the Universities of Rome and Pavia, IRST in Trento, and MIT in Boston.
\noindent The activity proposed to the prospective fellow is the following: \vskip 0.2truecm
\noindent {\bf Name of activity} \vskip 0.2truecm
\item {} Neural Networks for Signal Processing \vskip 0.2truecm
\noindent {\bf Objectives of activity}
\item {} Study of learning in neural networks to obtain the best performance in complex hybrid systems for signal processing.
The nets are used in the phases of filtering, feature extraction and classification, for either speech processing or 2D pattern recognition.
\vfill\eject
\noindent {\bf 3) Supervisor: Prof.$\,$F.Mancini} \vskip 1truecm
\noindent Prof.$\,$F.Mancini offers to supervise one postdoctoral fellow for a period of 12 months. The activity will be theoretical and based on the use of Quantum Field Theory techniques applied to Condensed Matter Physics.
\noindent The research project is part of a common program between the Institute of Materials Research at Tohoku University, Sendai (Japan) and the Department of Theoretical Physics at the University of Salerno.
\noindent The activity proposed to the prospective fellow is the following: \vskip 0.2truecm
\noindent {\bf Name of activity} \vskip 0.2truecm
\item {} Superconductivity at high critical temperature \vskip 0.2truecm
\noindent {\bf Objectives of activity}
\item {} The phenomenon of superconductivity in the new materials which exhibit a high critical temperature is still not well understood from a theoretical point of view. By making use of the p-d model, our purpose is to investigate the fundamental mechanism which induces superconductivity in the new superconductor oxides. At the first stage, the theoretical effort is concentrated on understanding the electronic structure realized in the proximity of the metal-insulator transition.
\bye

From shawn at helmholtz.sdsc.edu Thu Sep 17 13:08:42 1992
From: shawn at helmholtz.sdsc.edu (Shawn Lockery)
Date: Thu, 17 Sep 92 10:08:42 PDT
Subject: No subject
Message-ID: <9209171708.AA25081@helmholtz.sdsc.edu>

POSTDOCTORAL POSITION
INSTITUTE OF NEUROSCIENCE
UNIVERSITY OF OREGON

I am looking for an electrophysiologist experienced in intracellular and voltage-clamp recording with an interest in distributed processing and network modeling. Projects include identification of interneurons, measurement of synaptic transfer functions, measurement of parameters for compartmental models of identified neurons, and compartmental and neural network modeling. Please send letter and CV via email.

Shawn R. Lockery
Present address:
CNL
Salk Institute
Box 85800
San Diego, CA 92186-5800
shawn at helmholtz.sdsc.edu
fax: (619) 587-0417

GENERAL DESCRIPTION OF THE RESEARCH INTERESTS

Research in the Lockery lab investigates the distributed processing of sensory information in well-defined invertebrate networks. Distributed representations occur in a great many neural systems, but how they are integrated in the production of behavior is poorly understood. This problem is addressed by analyzing the neural basis of behavior and learning in two relatively simple distributed processing behaviors: the local bending reflex of the leech and the chemotactic response of the nematode C. elegans. The local bending circuit is composed of a small number of repeatably identifiable sensory neurons, motor neurons, and interneurons, and computes a sensory-motor input-output function using a population of interneurons, each with many sensory inputs and motor outputs. Lockery and co-workers record this input-output function intracellularly and use the recordings as input to neural network training algorithms such as backpropagation to adjust synaptic connections in models of the reflex. The models predict as-yet-undiscovered interneurons and possible sites of synaptic plasticity underlying nonassociative conditioning. These predictions are tested in physiological experiments to measure the connections of identified interneurons in normal and conditioned animals.
Previous anatomical studies have described the complete wiring diagram of the nervous system of C. elegans. The anatomy shows that interneurons receive input from several chemosensory neurons with differing chemical sensitivities and have outputs to many different motor neurons. To understand how the network controlling chemotaxis operates, we train models of the anatomically defined circuitry to reproduce observed chemotactic behavior. The models are constrained by parameters that can be measured physiologically and predict the results of experiments in which particular neurons are ablated in the behaving animal.

From edelman at wisdom.weizmann.ac.il Fri Sep 18 02:11:26 1992
From: edelman at wisdom.weizmann.ac.il (Edelman Shimon)
Date: Fri, 18 Sep 92 08:11:26 +0200
Subject: Request references on volume and connectivity
In-Reply-To: Jaap Murre's message of Tue, 15 Sep 92 17:45:08 BST <11996.9209151645@sirius.mrc-apu.cam.ac.uk>
Message-ID: <9209180611.AA12293@white.wisdom.weizmann.ac.il>

In a recent issue of TINS, Mitchison discussed some related questions:

@article{Mitchison92,
  author="G. Mitchison",
  title="Axonal trees and cortical architecture",
  journal="Trends in Neurosciences",
  volume="15",
  pages="122-126",
  year="1992"
}

There is also a relevant paper by Young, on the pattern of area interconnection in primate visual cortex:

@article{Young92,
  author="M. P. Young",
  title="Objective analysis of the topological organization of the primate cortical visual system",
  journal="Nature",
  volume="358",
  pages="152-155",
  year="1992"
}

-Shimon

From anshu at lexington.rutgers.edu Fri Sep 18 15:31:59 1992
From: anshu at lexington.rutgers.edu (anshu@lexington.rutgers.edu)
Date: Fri, 18 Sep 92 15:31:59 EDT
Subject: Neural Network Workshop
Message-ID: <9209181931.AA24968@lexington.rutgers.edu>

The CAIP Center, Rutgers University, and the FAA announce the
2nd NEURAL NETWORK WORKSHOP
presenting
* The state of the art in Neural Network theory and applications
* With some of the most eminent people in the field, including two Nobel laureates and a Fields Medal winner
(Attendance is limited and on a first-come, first-served basis)

NEURAL NETWORK WORKSHOP
Richard Mammone, Chairman
Sponsored by FAA Technical Center
Hosted by the Center for Computer Aids for Industrial Productivity (CAIP)

TENTATIVE PROGRAM
TUESDAY - THURSDAY, 27-29 OCTOBER 1992

THE STATE UNIVERSITY OF NEW JERSEY - RUTGERS
Center for Computer Aids for Industrial Productivity (CAIP)
Frelinghuysen Road - P.O. Box 1390 - Piscataway - New Jersey 08855-1390
Tel: 908/932-4208 - FAX: 908/932-4775
A New Jersey Commission on Science and Technology Center

Tuesday, 27 October 1992
**************************
8:30 a.m. Registration; Coffee
8:45 a.m. Opening Remarks
  Leo T. Powell, FAA Technical Center
  Richard Mammone - Workshop Chairman, Rutgers University
8:55 a.m. Neural Networks for Speech Processing and Language
  Session Chairman, Allen Gorin - AT&T Bell Laboratories
9:00 a.m. Neural Networks in the Acquisition of Speech by Machine
  Frank Fallside, Cambridge University, U.K.
9:30 a.m. The Nervous System: Fantasy and Reality
  Nelson Kiang - Massachusetts Eye and Ear
10:10 a.m. Coffee Break
10:30 a.m. Processing of Speech Segments in the Auditory Periphery
  Oded Ghitza - AT&T Bell Labs
10:50 a.m. Is There a Role for Neural Networks in Speech Recognition?
  John Bridle - Dragon
11:10 a.m.
Some Relationships Between Artificial Neural Nets and Hidden Markov Models
  Arthur Nadas - IBM T. J. Watson Research Center
11:30 a.m. Lunch
1:30 p.m. The Neuropsychology of Word Reading: A Connectionist Approach
  David Plaut - Carnegie Mellon University
1:50 p.m. States Versus Stacks: Representing Grammatical Structure in a Recurrent Neural Network
  Jeffrey Elman - UCSD
2:10 p.m. Connections and Associations in Language Acquisition
  Naftali Tishby - Hebrew University, Israel
2:30 p.m. Recurrent Neural Networks and Sequential Machines
  Lee Giles - NEC
2:50 p.m. Coffee Break
3:10 p.m. A Self-Learning Neural Tree Network for Phoneme Classification
  Mazin Rahim - CAIP Center, Rutgers University
3:30 p.m. Decision Feedback Learning of Neural Networks
  Fred Juang - AT&T Bell Laboratories
3:50 p.m. An Experiment in Spoken Language Acquisition
  Allen Gorin, Session Chairman - AT&T Bell Laboratories
4:10 p.m. Visual Focus of Attention in Language Acquisition
  Ananth Sankar - AT&T Bell Laboratories
4:30 p.m. Integrating Segmental Neural Nets with Hidden Markov Models for Continuous Speech Recognition
  John Makhoul, George Zavaliagkos, Richard Schwartz, Steve Austin - BBN Systems and Technologies, Cambridge, MA
4:50 p.m. Panel Discussion - The Future of Neural Nets for Speech Processing
  Steve Levinson, Chairman; John Makhoul, Ester Levine, Naftali Tishby, John Bridle
5:40 p.m. Decision Making Using Conventional Calculations Versus Neural Nets for Advanced Explosive Detection Systems
  Thomas Miller - Tensor Tech. Assoc.
6:00 p.m. Dinner
7:30 p.m. Break Out Groups
  Room 1: What Are the Most Successful Applications of Neural Networks?
    Chris Scofield (Chairman), Philip Gouin, Larry Jackel, Eric Schwartz, Ed DeRouin
  Room 2: What Theoretical Contributions Have Neural Network Researchers Made?
    Eduardo Sontag (Chairman), Georg Schnitzer, Fred Girosi, S. Venkatesh, Steven Judd, Jeff Vitter, Wolfgang Maass, Charles Fefferman, Kurt Hornik
  Room 3: What Is the Impact of Government Support on the Development of Networks?
    Wagih Makky (Chairman), Shiu Cheung, Richard Ricart, John Cozzens, Steve Suddarth

Wednesday, 28 October 1992
****************************
8:55 a.m. Neural Network Applications in Vision
  Session Chairman, Chris Scofield - Nestor
9:00 a.m. Integrated Segmentation and Recognition of Handprinted Characters
  James Keeler - MCC
9:20 a.m. Neural Net Image Analysis for Postal Applications: From Locating Address Blocks to Determining Zip Codes
  Larry Jackel - AT&T Bell Laboratories
9:40 a.m. Space Invariant Active Vision
  Eric Schwartz - Brain Research
10:00 a.m. Coffee Break
10:30 a.m. Engineering Document Processing with Neural Networks
  Philip Gouin - Nestor, Inc.
10:50 a.m. Goal-Oriented Training of Neural Networks
  Ed DeRouin - Thought Processes, Inc.
11:10 a.m. Hybrid Neural Networks and Image Restoration
  K.V. Prasad - CalTech
11:30 a.m. Neural Networks for Vision Session
  K.V. Prasad, Session Chairman - CalTech
11:50 a.m. A Discrete Radon Transform Method for Invariant Image Analysis Using Artificial Neural Networks
  John Doherty - Iowa State University
12:10 p.m. Lunch
1:30 p.m. (Title to be announced)
  Leon Cooper - Brown University
1:50 p.m. Dynamic Systems and Perception
  Alexander Pentland - Massachusetts Institute of Technology
2:00 p.m.
Deterministic Annealing for Optimization
  Alan Yuille - Harvard University
2:10 p.m. Neural Networks in Vision
  Yehoshua Zeevi - Technion, Israel
2:30 p.m. A Neural Chip Set for Supervised Learning and CAM
  Josh Alspector - Bellcore
2:50 p.m. Cortical Dynamics of Feature Binding & Reset: Control of Visual Persistence
  Ennio Mingolla, Gregory Francis, Stephen Grossberg
3:10 p.m. Coffee Break
3:30 p.m. Face Recognition Using an NTN
  Joseph Wilder - CAIP
3:50 p.m. Bounds for the Computational Power and Learning Complexity of Analog Neural Nets
  Wolfgang Maass - Graz, Austria
4:10 p.m. Computational Issues in Neural Networks
  George Cybenko - Dartmouth College
4:30 p.m. Title to be announced
  Kurt Hornik - Wien University, Austria
4:50 p.m. Technical Discussions
6:00 p.m. Dinner and Celebration in Honor of Jim Flanagan for Receiving The Marconi International Fellowship Award

Thursday, 29 October 1992
***************************
8:45 a.m. Recurrent Network Sessions
  Session Chairman, Richard Ricart - Booz Allen
8:50 a.m. To be announced
  S. Y. Kung - Princeton
9:10 a.m. Comparison of Feedforward and Recurrent Sensitivity
  Gary Kuhn - Siemens
9:30 a.m. Short Term Memory Mechanisms for Recurrent Neural Networks
  Bert DeVries, John Pearson - David Sarnoff Research Center
9:50 a.m. Recurrent Neural Networks for Speaker Recognition
  Richard Ricart - Booz Allen
10:10 a.m. Processing of Complex Stimuli in the Mammalian Cochlear Nucleus
  Eric Young - Johns Hopkins
10:30 a.m. Coffee Break
10:50 a.m. Applications of Neural Networks
  Session Chairman, Richard Mammone - Rutgers University
11:10 a.m. Neural Networks for the Detection of Plastic Explosives in Airline Baggage
  Richard Mammone
11:30 a.m. Non-Literal Transfer of Information Among Inductive Learners
  Lorien Pratt - Colorado School of Mines
12:00 p.m. Lunch
1:30 p.m. Neural Networks for Identification and Control of Nonlinear Systems
  Eduardo Sontag - Rutgers University
1:50 p.m. Using Neural Networks to Identify DNA Sequences
  Mick Noordewier - Rutgers University
2:10 p.m. Large Scale Holographic Optical Neural Network for Data Fusion and Signal Processing
  Taiwei Lu - Physical Optics Corp.
2:30 p.m. A Biologically Based Synthetic Nervous System for a Real World Device
  George Reeke, Jr., Gerald Edelman - The Neurosciences Institute
2:50 p.m. Title to be announced
  Shigeru Katagiri - ATR, Japan
3:10 p.m. "Learning by Learning" in Neural Networks
  Devang Naik - Rutgers University
3:30 p.m. Relabeling Methods of Learning
  Wen Wu - CAIP
3:50 p.m. Long Term Memory for Neural Networks
  Anshu Agarwal - Rutgers University
4:10 p.m. Wavelet Neural Networks
  Toufic Boubez - Rutgers University
4:30 p.m. End of Workshop

-------------------------------------------------------------------------------
NEURAL NETWORK WORKSHOP
27-29 October 1992

WORKSHOP REGISTRATION FORM

YES! I want to attend the Neural Network Workshop, October 27-29, 1992. I understand my registration fee includes all sessions, dinners, refreshment breaks, reception and working materials.
Name _______________________________________________________
Company ____________________________________________________
Address ____________________________________________________
City/State/Zip _____________________________________________
Telephone No. ______________________________________________

REGISTRATION IS LIMITED! APPLICATIONS WILL ONLY BE CONSIDERED WHEN ACCOMPANIED BY PAYMENT. MAKE CHECKS PAYABLE TO THE CAIP CENTER, RUTGERS UNIVERSITY.

Registration:
  Non-member fee ($395) $____________
  Member fee for participants from CAIP member organizations ($295) $____________

EARLY REGISTRATION IS ADVISED!
Mail form & payment to: CAIP Center, Rutgers University, 7th floor, CoRE Bldg., P.O. Box 1390, Piscataway, NJ 08855.

HOTEL REGISTRATION FORM

Name _______________________________________________________
Company ____________________________________________________
Address ____________________________________________________
Daytime Phone No. __________________________________________

A block of rooms for this conference has been reserved at a special University room rate of $81 per single/double room per night. Hotel reservations will be made through the CAIP Center.

I will require room(s):
  Monday, October 26 ( )
  Tuesday, October 27 ( )
  Wednesday, October 28 ( )
  Thursday, October 29 ( )
-------------------------------------------------------------------------------

From giles at research.nj.nec.com Fri Sep 18 17:55:07 1992
From: giles at research.nj.nec.com (Lee Giles)
Date: Fri, 18 Sep 92 17:55:07 EDT
Subject: Research Position
Message-ID: <9209182155.AA05466@fuzzy>

POSITION: RESEARCH ASSOCIATE

The NEC Research Institute in Princeton, NJ has an immediate opening for a RESEARCH ASSOCIATE in the area of neural networks/connectionism and dynamics/control. Research is currently underway to better understand dynamic neural networks and their computational capabilities. Towards this end, we are looking for a research associate who will contribute to this research effort and work closely with the research group. The successful candidate must have experience in basic research and be able to effectively communicate research results. He or she should have experience in using computer simulations, preferably in the area of artificial neural networks. In addition, his or her background should include extensive experience in programming in the UNIX/C environment (nearly all work is performed on Silicon Graphics workstations). Tasks in this area will also involve code maintenance, modification and enhancement as required by the research program.

Interested applicants should send their resumes by mail, fax or email with 2 references to:

Dr. C. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540
Phone: 609-951-2642
FAX: 609-951-2482
email: giles at research.nj.nec.com

Applicants must show DOCUMENTATION OF ELIGIBILITY FOR EMPLOYMENT. NEC is an equal opportunity employer: M/F/H/V.

C.
Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540 USA
Internet: giles at research.nj.nec.com
UUCP: princeton!nec!giles
PHONE: (609) 951-2642
FAX: (609) 951-2482

From B344DSL at UTARLG.UTA.EDU Sun Sep 20 21:40:00 1992
From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU)
Date: Sun, 20 Sep 1992 20:40 CDT
Subject: The sigmoid is the posterior distribution from Gaussian likelihoods
Message-ID: <01GP0RBZFW6G000O13@utarlg.uta.edu>

I think there was some remark about the sigmoid as the distribution function arising from a Gaussian density in Grossberg's paper in Studies in Applied Mathematics, 1973, and/or in one of that paper's two direct sequels: by Ellias and Grossberg in Biological Cybernetics 1975, and Grossberg and Levine in Journal of Theoretical Biology 1975.

Dan Levine

From ATGOS at ASUVM.INRE.ASU.EDU Fri Sep 18 17:50:01 1992
From: ATGOS at ASUVM.INRE.ASU.EDU (Arizona State U.)
Date: Fri, 18 Sep 92 14:50:01 MST
Subject: Job Opening
Message-ID:

JOB ANNOUNCEMENT
Experimental Psychologist

Arizona State University is recruiting an associate or assistant professor in experimental psychology. The successful candidate must have a Ph.D. in psychology and a strong publication record. Specialization in any area of cognitive or experimental psychology is acceptable, including cognitive development. Special consideration will be given to candidates whose research applies to connectionist/adaptive dynamical systems/neural modeling and biomedical issues (broadly conceived). The position will begin in August 1993. Send vita, reprints, and three letters of reference to Dr. Guy Van Orden, Experimental Psychology Search Committee, Department of Psychology, Arizona State University, Tempe, AZ 85287-1104. Review of applications will begin December 1, 1992, and continue every two weeks thereafter until the position is filled. ASU is an equal opportunity and affirmative action employer.

From rsun at athos.cs.ua.edu Mon Sep 21 12:34:25 1992
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Mon, 21 Sep 1992 11:34:25 -0500
Subject: No subject
Message-ID: <9209211634.AA12933@athos.cs.ua.edu>

CALL FOR PAPERS
ARCHITECTURES FOR INTEGRATING NEURAL AND SYMBOLIC PROCESSES
A Special Issue of Connection Science: a journal of AI, cognitive science and neurocomputing

Although there has been a great deal of research in integrating neural and symbolic processes, both from a cognitive and an applications viewpoint, there has been relatively little effort in comparing, categorizing and combining these fairly isolated approaches, especially from a cognitive perspective. This special issue is intended to address the cognitive architectural aspects of this integration: the issue will bring together various architectural approaches as well as focus on specific architectures that solve particular problems, that exhibit cognitive plausibility, that yield new insights, and that show potential for scaling up.

Papers are expected to address the following questions, but are not limited to such questions:
* What have we achieved so far by integrating neural and symbolic processes?
* What are the relative advantages/disadvantages of each approach?
* How cognitively plausible is each proposed approach?
* Is there any commonality among various architectural approaches? Should we try to synthesize existing approaches? How do we synthesize these approaches? (Does there exist a generic and uniquely correct cognitive architecture?)
* What are the problems, difficulties and outstanding issues in integrating neural and symbolic processes?
* How do symbolic representation and connectionist learning schemes interact in integrated systems?

The papers can be either theoretical or experimental in scope, and can comment on the current state of affairs and address what advances are necessary so that continued progress can be made. However, prospective authors should emphasize the principles involved, along with an explanation of why the particular model works or does not work, and what it is we can learn from the model. For example, does the model predict some testable behavior which can lead to new insights?

All papers will be rigorously refereed, and should conform to the following rules, in addition to the usual requirements of the journal. Authors must submit five (5) printed copies of their papers to either of the addresses listed below by January 5, 1993. Notification of receipt will be electronically mailed to the first author (or designated author) soon after receipt. Notification of acceptance or rejection of submitted papers will be mailed to the first author (or designated author) by March 31, 1993. Final versions of accepted papers will be due May 28, 1993. All 5 copies of a submitted paper must be clearly legible. Neither computer files nor fax submissions are acceptable. Submissions must be printed on 8 1/2 in. x 11 in. or A4 paper using 12 point type (10 characters per inch for typewriters). Each copy of the paper must have a title page (separate from the body of the paper) containing the title of the paper, the names and addresses of all authors, including e-mail addresses, and a short (less than 200 word) abstract.

Review Criteria
[Significance:] How important is the work reported? Does it attack an important/difficult problem or a peripheral/simple one? Does the approach offered advance the state of the art?
[Originality:] Has this or similar work been previously reported? Are the problems and approaches new? Is this a novel combination of familiar techniques? Does the paper point out differences from related research? Is it re-inventing the wheel using new terminology?
[Quality:] Is the paper technically sound? Does it carefully evaluate the strengths and limitations of its contribution? How are its claims backed up?
[Clarity:] Is the paper clearly written? Does it motivate the research? Does the paper properly situate itself with respect to previous work? Are the results described and evaluated? Is the paper organized in a logical fashion?

Submissions should be delivered to one of the following addresses:

Dr. Lawrence Bookman
Sun Microsystems Laboratories
Two Federal Street
Billerica, MA 01821, USA
Net: lbookman at east.sun.com

Prof. Ron Sun
Department of Computer Science
The University of Alabama
Tuscaloosa, AL 35487, USA
Net: rsun at athos.cs.ua.edu

From jaap.murre at mrc-apu.cam.ac.uk Tue Sep 22 12:21:36 1992
From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre)
Date: Tue, 22 Sep 92 12:21:36 BST
Subject: Listing of references on connectivity
Message-ID: <11780.9209221121@sirius.mrc-apu.cam.ac.uk>

In the past week, I have received the following references on connectivity and volume in the brain (and in parallel hardware):

Braitenberg, V., & Schuz, A. (1990). Anatomy of the cortex: statistics and geometry. Berlin: Springer-Verlag.
Cherniak, C. (1990). The bounded brain: toward quantitative neuroanatomy. Journal of Cognitive Neuroscience, 2, 58-68.
Hofman, M.A. (1985). Neuronal correlates of corticalization in mammals: a theory. Journal of Theoretical Biology, 112, 77-95.
Mitchison, G. (1992).
Axonal trees and cortical architecture. Trends in Neurosciences, 15, 122-126.
Schuz, A., & Palm, G. (1989). Density of neurons and synapses in the cerebral cortex of the mouse. Journal of Comparative Neurology, 286, 442-455.
Vitanyi, P.M.B. (1988). Locality, communication and interconnect length in multicomputers. SIAM Journal on Computing, 17, 659-672.
Waltz, D.L. (1988). The prospects for building truly intelligent machines. Daedalus (Proc. American Academy of Arts and Sciences), 117, 191-212.
Young, M.P. (1992). Objective analysis of the topological organization of the primate cortical visual system. Nature, 358, 152-155.

I want to thank everyone for responding to my request.

Jaap Murre

From hamps at shannon.ECE.CMU.EDU Tue Sep 22 09:35:15 1992
From: hamps at shannon.ECE.CMU.EDU (John B. Hampshire II)
Date: Tue, 22 Sep 92 09:35:15 EDT
Subject: sigmoid <=> Gaussian a posteriori distributions
Message-ID: <9209221335.AA08182@shannon.ece.cmu.edu.ECE.CMU.EDU>

The proof of this linkage (like so many proofs associated with connectionist models) goes way back... probably to Gauss. Before neural networks were in vogue, the sigmoid was associated with linear classifiers in the form of the exponential logistic. So, for example, guys like Lachenbruch showed the proof in the context of linear regression/classification in the 1960s. This is meant only to inform, not to kick sand in anybody's face.

-John

From B344DSL at UTARLG.UTA.EDU Tue Sep 22 11:53:00 1992
From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU)
Date: Tue, 22 Sep 1992 10:53 CDT
Subject: Sigmoid Posterior
Message-ID: <01GP2ZEOMJOW000UUC@utarlg.uta.edu>

The statement I made about sigmoids being discussed in early Grossberg papers was off the cuff, but I did find it in my own joint paper with Grossberg (Grossberg & Levine, Journal of Theoretical Biology, 53, 341-380, 1975). On p. 343, the function f(x) is introduced as a term that appears repeatedly in the basic equations for shunting recurrent on-center off-surround interactions (the equations themselves introduced on p. 342). We state, "In vivo, f(w) is often a sigmoid function of w (Kernell, 1965a, b; Rall, 1955), and such a function arises from integrating a Gaussian, Cauchy, or other similar distribution of thresholds within a population."

Dan Levine

From cowan at synapse.uchicago.edu Tue Sep 22 12:26:39 1992
From: cowan at synapse.uchicago.edu (Jack Cowan)
Date: Tue, 22 Sep 92 11:26:39 CDT
Subject: The sigmoid is the posterior distribution from Gaussian likelihoods
Message-ID: <9209221626.AA07536@synapse>

Geoff: Apropos Dan Levine's remarks, I introduced the sigmoid in 1965 at the Wiener Memorial Meeting in Genoa, as a smooth approximation to the firing rate vs. current curve of a single neuron with a shot noise input (published in 1968-71). Later, with Hugh Wilson in 1972-1973, we redefined the sigmoid as arising in a population of neurons with varying thresholds, as the integral of a unimodal probability density.
Jack Cowan
From marcus at ips102.desy.de Wed Sep 23 10:57:09 1992
From: marcus at ips102.desy.de (Marcus Speh)
Date: Wed, 23 Sep 92 16:57:09 +0200
Subject: Neural Multilevel Scheme for Disordered Systems (Preprint)
Message-ID:

The following paper will eventually appear in the International Journal of Modern Physics C [Physics and Computers]

--------------------------------------------------------------------
"Neural multigrid for gauge theories and other disordered systems"
M. Baeker, T. Kalkreuter, G. Mack and M. Speh
II. Institut f. Theoretische Physik, Universitaet Hamburg

The preprint is available via anonymous FTP or as a hard copy (free, see below). It contains our contribution to the conference "Physics Computing '92" in Prague, Czechoslovakia, August 24-28, 1992.
--------------------------------------------------------------------

ABSTRACT
We present evidence that multigrid works for wave equations in disordered systems, e.g. in the presence of gauge fields, no matter how strong the disorder, but one needs to introduce a "neural computations" point of view into large scale simulations: first, the system must learn how to do the simulations efficiently, then do the simulation (fast). The method can also be used to provide smooth interpolation kernels which are needed in multigrid Monte Carlo updates.

Keywords: Multigrid, Neural Networks, Disordered Systems, Gauge Fields, Neural Multigrid

For comments, questions or suggestions, please contact:
Marcus Speh (marcus at ips102.desy.de)
--------------------------------------------------------------------
To obtain a copy via FTP (9 pages with figures appended) use the standard procedure:

ftp cheops.cis.ohio-state.edu
anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get speh.neuralmg.ps.Z
ftp> quit
zcat speh.neuralmg.ps.Z | lpr
------------------------------------------------------------------------
If FTP is impossible, a free hard copy can be obtained by sending a request via e-mail, FAX or snailmail to:

Marcus Speh
II. Inst. Theor. Physik/DESY
Universitaet Hamburg
Luruper Chaussee 149
2000 Hamburg 50
Tel. (0049)(40)8998 2260
FAX. (0049)(40)8998 2267
------------------------------------------------------------------------

From jwk1 at forth.stirling.ac.uk Thu Sep 24 05:14:44 1992
From: jwk1 at forth.stirling.ac.uk (Dr James W Kay)
Date: Thu, 24 Sep 92 10:14:44 +0100
Subject: sigmoid as a posterior
Message-ID: <9209240914.AA06094@forth.stirling.ac.uk>

Note the connection with the work of the late J. A. Anderson within the statistical community.
In his paper on logistic discrimination (Biometrika, 1972, 59, 19-35), he modelled the posterior probabilities directly using logistic functions. While this includes the Gaussian, equal-covariance situation as a special case, it is more general, and of course his formulation could today be directly implemented as an ANN. In his paper he cites earlier related work by Cox, Day and Kerridge. He uses maximum likelihood to "learn the weights", but later he advocated the use of penalised maximum likelihood (or Bayesian, or regularisation, by other names). It is easy to see how additional hidden units could be incorporated within this framework, and this would be essentially a version of projection pursuit logistic discrimination.

Jim Kay

From cramer at max.ee.lsu.edu Wed Sep 23 11:57:14 1992
From: cramer at max.ee.lsu.edu (Chris Cramer)
Date: Wed, 23 Sep 92 10:57:14 CDT
Subject: No subject
Message-ID: <9209231557.AA27340@max.ee.lsu.edu>

The following technical report is available. If you would like to have copies, do let me know.

Pruning Hidden Neurons in the Kak Algorithm
Chris Cramer

ABSTRACT
The Kak algorithm is an important new approach to training a feed-forward network. Kak has shown that it is possible to compute weights for a single corner of an input space by inspection. In this paper, the author will show that a facet classification algorithm, capable of mapping any input space using fewer hidden neurons, is also possible by combining a trial-and-error method with direct computation. This facet algorithm allows for several input/output sequences to be covered by a single weight vector, thus pruning the necessary number of hidden neurons. This is achieved by summing the weights, given by the Kak algorithm, for the various corners of the input space which are mapped to one. Once the weights have been computed, the threshold weight may be determined. This algorithm allows for the network to be trained during operation, after the initial training. The author will demonstrate the superiority of the facet classification algorithm over the perceptron and backpropagation algorithms in computing a weight vector.

Technical Report ECE 92-09, LSU. September 22, 1992

From mike at PARK.BU.EDU Fri Sep 25 16:13:28 1992
From: mike at PARK.BU.EDU (mike@PARK.BU.EDU)
Date: Fri, 25 Sep 92 16:13:28 -0400
Subject: World Congress on Neural Networks
Message-ID: <9209252013.AA11148@fenway.bu.edu>

WORLD CONGRESS ON NEURAL NETWORKS
1993 INTERNATIONAL NEURAL NETWORK ANNUAL MEETING
July 11-15, 1993
Portland Convention Center, Portland, Oregon

This international research conference will be the largest and most interdisciplinary meeting in 1993 covering all the areas relevant to neural network research. Neural network models in psychology and cognitive science, neuroscience and neuropsychology, engineering and design, technology and applications, and computational and mathematical analysis will all be featured. The meeting structure will particularly emphasize the dynamic interplay of neurobiological modelling with advanced engineering and technological applications. Hybrid systems wherein neural network models are linked to fuzzy, genetic, and symbolic models are also most welcome.
GENERAL CHAIR: George Lendaris
PROGRAM CHAIRS: Stephen Grossberg and Bart Kosko
COOPERATING SOCIETIES CHAIR: Mark Kon

Plenary Lectures include:
-------------------------
3-D VISION AND FIGURE-GROUND POP-OUT, Stephen Grossberg
COHERENCE AS AN ORGANIZING PRINCIPLE OF CORTICAL FUNCTION, Wolf Singer
REAL-TIME ON-CHIP LEARNING IN ANALOG VLSI NETWORKS, Carver Mead
INTELLIGENT CONTROL USING NEURAL NETWORKS, Kumpati Narendra
NEURAL FUZZY SYSTEMS, Bart Kosko

Tutorials, which will be offered on Sunday, July 11, 1993, include:
-------------------------------------------------------------------
ADAPTIVE RESONANCE THEORY, Gail Carpenter
BIOLOGICAL VISION, V.S. Ramachandran
COGNITIVE SCIENCE, David Rumelhart
COGNITIVE NEUROSCIENCE, Robert Desimone
NEURAL COMPUTATION AND VLSI, Eric Schwartz
NEURAL CONTROL AND ROBOTICS, Michael Kuperstein
NEURAL FUZZY SYSTEMS, Fred Watkins
NEUROBIOLOGY AND CHAOS, Walter Freeman
PRACTICAL APPLICATIONS OF NEURAL NETWORK THEORY, Robert Hecht-Nielsen
STRUCTURAL AND MATHEMATICAL APPROACHES TO SIGNAL PROCESSING, S.Y. Kung
SUPERVISED LEARNING, Hal White

PROGRAM COMMITTEE:
------------------
D. Alkon, S. Amari, J. Anderson, P. Baldi, A. Barto, D. Bullock, J. Byrne, G. Carpenter, D. Casasent, T. Caudell, R. Chellappa, M. Cohen, L. Cooper, W. Daugherty, J. Daugman, J. Dayhoff, R. Desimone, R. Eckmiller, B. Ermentrout, K. Fukushima, S. Gielen, L. Giles, P. Gochin, R. Granger, S. Grossberg, A. Guez, D. Hammerstrom, R. Hecht-Nielsen, J. Houk, W. Karplus, S. Kelso, B. Kosko, S.Y. Kung, M. Kuperstein, D. Levine, C. von der Malsburg, E. Marder, A. Maren, J. Marshall, J. McClelland, E. Mingolla, K. Narendra, H. Ogmen, E. Oja, L. Optican, F. Pineda, V.S. Ramachandran, D. Rumelhart, E. Schwartz, M. Seibert, J. Shynk, D. Specht, H. Szu, R. Taber, Y. Takefuji, J. Taylor, P. Werbos, H. White, B. Widrow, R. Williams

Technical Sessions include (topic: session chairs):
---------------------------------------------------
Biological Vision: C. von der Malsburg, V.S. Ramachandran
Machine Vision: R. Chellappa, K. Fukushima
Speech and Language: M. Cohen, D. Rumelhart
Biological Sensory-Motor Control: A. Barto, S. Kelso
Robotics and Control: M. Kuperstein, K. Narendra
Supervised Learning: L. Cooper, P. Werbos
Unsupervised Learning: G. Carpenter, E. Oja
Pattern Recognition: T. Kohonen, D. Specht
Local Circuit Neurobiology: J. Byrne, J. Houk
Cognitive Neuroscience: R. Desimone, L. Optican
Intelligent Neural Systems: S. Grossberg, D. Levine
Neural Fuzzy Systems: W. Daugherty, B. Kosko
Signal Processing: S.Y. Kung, B. Widrow
Neurodynamics: S. Amari, H. White
Electro-Optical Neurocomputers: L. Giles, H. Szu
Associative Memory: J. Anderson, J. Taylor
Applications: J. Dayhoff, R. Hecht-Nielsen

International Neural Network Society
------------------------------------
President: Paul Werbos
President-Elect & Treasurer: Harold Szu
Secretary: Judith Dayhoff

Board of Governors
------------------
Shun-ichi Amari, Richard Andersen, James A. Anderson, Andrew Barto, Gail Carpenter, Walter Freeman, Kunihiko Fukushima, Lee Giles, Stephen Grossberg, Mitsuo Kawato, Christof Koch, Teuvo Kohonen, Bart Kosko, Christoph von der Malsburg, David Rumelhart, Bernard Widrow

Cooperating Societies include:
------------------------------
European Neural Network Society
Japanese Neural Network Society
IEEE Neural Networks Council
IEEE Computer Society
International Fuzzy Systems Association

Call for Papers
---------------
Papers must be received by January 15, 1993.
International authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. All submissions will be acknowledged by mail, and accepted papers will be published as submitted. Papers will be reviewed by session co-chairs and the program committee, and all authors will be informed of the decision. All papers accepted for presentation will be published in full in the Conference Proceedings, which is expected to be available at the conference for distribution to all regular conference registrants.

Six (6) copies (one original and five copies) of the paper are required for submission. Do not fold or staple the original camera-ready copy. The paper must be complete within 4 pages, including figures, tables, and references, and should be written in English. There will be a charge of $20 per page for papers exceeding 4 pages. Checks for over-length charges should be made payable to WCNN'93, and must be included with the submitted paper; if the paper is not accepted, the check will be returned. Only complete papers will be considered. Papers must be submitted camera-ready on 8-1/2" x 11" white paper with one-inch margins on all four sides. Papers should be prepared by typewriter or letter quality printer in one-column format, single-spaced, in Times Roman or similar type style of 10 points or larger, and printed on one side of the page only. All text, figures, captions, and references must be clean, sharp, readable, and high contrast. FAX submissions are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s), and mailing address(es). This is to be followed by a blank space, then the abstract (up to 15 lines), followed by the text.

In an accompanying letter, the following information must be included:
  Full title of the paper
  Corresponding author name, mailing address, telephone and fax numbers
  Technical session (1st and 2nd choices)
  Oral or poster session preferred
  Presenter name, mailing address, telephone and fax numbers
  Audio/Visual requirements

Send papers to:
---------------
WCNN'93
Talley Management Group Inc.
1825 I Street NW, Suite 400
Washington, DC 20006
TEL: (609) 845-1720
FAX: (609) 853-0411

Registration Fees:
------------------
MEETING:
                      Before      Before      After
                      01/15/93    06/15/93    06/15/93
  Member              $175        $270        $350
  Non-Member *        $275        $370        $450
  Student             $50         $75         $95

* Includes a 1993 INNS membership and a 1-year subscription to the INNS journal Neural Networks

TUTORIALS:
  Member/Non-Member   $225        $295        $345
  Student             $50         $75         $95

For additional information regarding the meeting, including special travel rates, hotels, the planning of exhibits, or INNS membership, please call or fax the numbers listed above.

From P.Refenes at cs.ucl.ac.uk Mon Sep 28 13:22:43 1992
From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk)
Date: Mon, 28 Sep 92 18:22:43 +0100
Subject: PRE-PRINT: Financial modeling using neural networks
Message-ID:

The following preprint is available - hard copies by surface mail only.

-----------------------------------------------
FINANCIAL FORECASTING USING NEURAL NETWORKS

A. N. REFENES, M. AZEMA-BARAC & P. C. TRELEAVEN
Department of Computer Science, University College London, Gower Street WC1 6BT, London, UK.

ABSTRACT
Modeling of financial systems has traditionally been done with models assuming partial equilibrium.
Such models have been very useful in expanding our understanding of the capital markets; nevertheless, many empirical financial anomalies have remained unexplainable. It is possible that this may be due to the partial equilibrium nature of these models. Attempting to model the capital markets in a general equilibrium framework still remains analytically intractable. Because of their inductive nature, dynamical systems such as neural networks can bypass the step of theory formulation, and they can infer complex non-linear relationships between input and output variables. Neural networks have now been applied to a number of live systems and have demonstrated far better performance than conventional approaches. In this paper we review the state of the art in financial modeling using neural networks and describe typical applications in key areas of univariate time series forecasting, multivariate data analysis, classification, and pattern recognition. The applications cover areas such as asset allocation, foreign exchange, stock ranking and bond trading. We describe the parameters that influence neural performance, and identify intervals of parameter values over which statistical stability can be achieved.
--------------------------------------------------------

From marwan at sedal.su.oz.au Tue Sep 29 05:37:04 1992
From: marwan at sedal.su.oz.au (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: +61-2 692 2240)
Date: Tue, 29 Sep 1992 19:37:04 +1000
Subject: Multi-module Neural Computing Environment
Message-ID: <9209290937.AA04629@sedal.sedal.su.OZ.AU>

Multi-Module Neural Computing Environment (MUME)

MUME is a simulation environment for multi-module neural computing. It provides an object-oriented facility for the simulation and training of multiple nets with various architectures and learning algorithms. MUME includes a library of network architectures including feedforward, simple recurrent, and continuously running recurrent neural networks. Each architecture is supported by a variety of learning algorithms. MUME can be used for large-scale neural network simulations as it provides support for learning in multi-net environments. It also provides pre- and post-processing facilities.

The object-oriented structure makes it simple to add new network classes and new learning algorithms. New classes/algorithms can simply be added to the library or compiled into a program at run-time. The interface between classes is performed using Network Service Functions, which can be easily created for a new class/algorithm. The architectures and learning algorithms currently available are:

Class                                 Learning algorithms
------------                          -------------------
MLP                                   backprop, weight perturbation, node perturbation, summed weight perturbation
SRN                                   backprop through time, weight update driven node splitting, history-bound nets
CRRN                                  Williams and Zipser
Programmable limited-precision nets   weight perturbation, Combined Search Algorithm, Simulated Annealing

Other general-purpose classes include (viewed as nets):
  o DC source
  o Time delays
  o Random source
  o FIFOs and LIFOs
  o Winner-take-all
  o X out of Y classifiers

The modules are provided in a library. Several "front-ends" or clients are also available. MUME can be used to include non-neural computing modules (decision trees, ...) in applications. The software is the product of a number of staff and postgraduate students at the Machine Intelligence Group at Sydney University Electrical Engineering.
It is currently being used in research, research and development, and teaching, in ECG and ICEG classification and in speech and image recognition. As such, we are interested in institutions that can exploit the tool (especially in educational courses) and build upon it.

The software is written in 'C' and is being used on Sun and DEC workstations. Efforts are underway to port it to the Fujitsu VP2200 vector processor using the VCC vectorising C compiler.

MUME is made available to research institutions on media/doc/postage cost arrangements. Information on how to acquire it may be obtained by writing (or email) to:

Marwan Jabri
SEDAL
Sydney University Electrical Engineering
NSW 2006 Australia
Tel: (+61-2) 692-2240
Fax: 660-1228
Email: marwan at sedal.su.oz.au

From rba at bellcore.com Tue Sep 29 12:19:53 1992
From: rba at bellcore.com (Bob Allen)
Date: Tue, 29 Sep 92 12:19:53 -0400
Subject: No subject
Message-ID: <9209291619.AA07630@vintage.bellcore.com>

NIPS92, December 1-3, 1992, Denver, Colorado

STUDENT FINANCIAL SUPPORT

Modest financial support for travel to attend the NIPS conference in Denver may be available to students and other young researchers who have worked on neural networks. Those requesting support should post a one-page summary of their background and research interests, a curriculum vitae and their e-mail address to:

Dr. Robert B. Allen
NIPS92 Treasurer
Bellcore MRE 2A-367
445 South Street
Morristown, NJ 07962-1910

The support will be $250 for North America and $500 for overseas. Travel grant checks for those receiving awards will be available at the conference registration desk. Qualifying requests will be filled in the order they are received. In the event that requests exceed available funds, additional requests may be paid later, based on the financial success of the conference.

From cowan at synapse.uchicago.edu Tue Sep 29 15:58:17 1992
From: cowan at synapse.uchicago.edu (Jack Cowan)
Date: Tue, 29 Sep 92 14:58:17 CDT
Subject: Werner Reichardt
Message-ID: <9209291958.AA09691@synapse>

It is with great regret that I have to announce the death of Werner Reichardt. Werner was a student in Berlin at the outbreak of WW II and fought against the Nazis in the German underground. He was captured by the Gestapo but saved by the Russians shortly before his scheduled execution. Werner began his career as a postdoc with Max Delbruck at CalTech, but first became known for his work with Bernhard Hassenstein on motion detection. He set up the Max Planck Institute for Biological Cybernetics in the early 60s and founded the journal Kybernetik, now known as Biological Cybernetics. He produced a great deal of excellent pioneering work on fly vision, especially with Tommy Poggio. He will be greatly missed by his many friends, of whom I count myself fortunate to have been one.

Jack Cowan

From KOCH at IAGO.CALTECH.EDU Wed Sep 30 12:28:03 1992
From: KOCH at IAGO.CALTECH.EDU (KOCH@IAGO.CALTECH.EDU)
Date: 30 Sep 1992 09:28:03 -0700 (PDT)
Subject: Werner Reichardt
Message-ID: <01GPE2JF64HE934Z0G@IAGO.CALTECH.EDU>

I would like to second what Jack wrote about the importance of Werner Reichardt's work. His second-order correlation model (known today simply as the Reichardt model) for motion detection in beetles and flies (first postulated in 1956 in a joint publication with Hassenstein) is, together with the Hodgkin-Huxley equations, one of the oldest and most successful models in neurobiology.
Over the last 30 years, Reichardt and his group amassed both behavioral and electrophysiological evidence supporting such a model for the fly. More recent work on the intensity-based, short-range motion perception system in humans (Adelson-Bergen, Watson-Ahumada, Van Santen-Sperling) uses the same formalism as does the fly correlation model. Furthermore, at the electrophysiological level, a number of studies support the notion of such detectors in area 17 in cats. One could therefore argue that we have good evidence that Reichardt's correlation model---in which the linearly filtered output of one receptor is multiplied by the spatially offset and temporally delayed filtered output of a neighbouring receptor---describes the first stage in the motion pathway, from flies to humans. That's quite a legacy to leave behind.

Christof

From mpadgett at eng.auburn.edu Wed Sep 30 04:02:30 1992 From: mpadgett at eng.auburn.edu (Mary Lou Padgett) Date: Wed, 30 Sep 92 03:02:30 CDT Subject: SimTec92*WNN92*FNN92 Message-ID: <9209300802.AA28083@eng.auburn.edu>

SimTec92 * WNN92/Houston * FNN92 Symposium
Nov. 4-7, 1992, South Shore Harbour Resort, Clear Lake, TX -- near NASA/JSC

SimTec: Aerospace, Emerging Technologies, Simulation Applications and Life Sciences

WNN Conference / Workshop on NEURAL NETWORKS and FUZZY LOGIC: Wednesday, November 4 - Friday, November 6

FNN Symposium: TUTORIALS on Fuzzy Logic, Neural Networks, Standards: Saturday, November 7

WNN is sponsored by The Society for Computer Simulation, International, co-sponsored by NASA/JSC, GSFC, & LaRC in cooperation with SPIE and INNS. The IEEE Neural Networks Council is a participating society.

PRELIMINARY SESSION AGENDA

Wednesday, November 4

8:00 am - ...        Registration
8:30 am - 9:00 am    WELCOME
9:00 am - 10:00 am   KEYNOTE SPEAKER: Story Musgrave, MD
10:15 am - 12:15 pm  PLENARY: Simulation & Space Station Freedom
12:15 pm - 1:30 pm   NETS USERS GROUP: R. Shelton, NASA/JSC
1:30 pm - 3:00 pm    PARALLEL SESSIONS

  STANDARDS, George Rogers, NSWC
    A Neurocomputing Benchmark for Digital Computers
      George W. Rogers, Jeffrey L. Solka, John Ellis, and Harold D. Szu, NSWC
    Comparison of Artificial Neural Networks and Traditional Classifiers via the Two-Spiral Problem
      Witoon Suewatanakul, UT Austin, Austin, TX
    Performance Comparison of Some Neural Network Paradigms for Solving the Seismic Phase Identification Problem
      Gyu-Sang Jan, Farid Dowla, V. Vemuri, UC Davis, Livermore, CA

  ARCHITECTURES, Kevin Reilly, UAB
    Realization of a Modified CMAC Architecture Using Reconfigurable Logic Devices
      Aleksander Kolcz, N.M. Allinson, U York, UK
    Hybrid Systems Approaches with Ascertainment Learning in Neural Networks
      Kevin D. Reilly, M.F. Villa, UAB; Y. Hayashi, Ibaraki U.
    A Hybrid Neural Network System with Serial Learning and Associative Components
      V. Anumolu, Kevin D. Reilly, N. W. Bray, UAB

1:30 pm - 4:30 pm    SCS Process Controls Standards
3:30 pm - 5:00 pm    PARALLEL SESSIONS

  OPTIMIZATION & LEARNING I, Vasant Honavar, Iowa State U.
    An Empirical Comparison of Flat-spot Elimination Techniques in Backpropagation Networks
      Karthik Balakrishnan, Rajesh Parekh, Vasant A. Honavar, Iowa State U.
    A Fast Algorithm with a Guarantee to Learn: Binary Synaptic Weights Algorithm on Neural Networks
      Figen Ulgen, Norio Akamatsu, Tokushima U., Japan
    Function Preserving Weight Transformations of Perceptron Type Networks
      Vera Kurkova, Inst of Computer & Information, Czechoslovakia
    Modelling of Two-Dimensional Incompressible Potential Flows by Programmable Feedforward Networks
      Andrew J. Meade, Jr., Rice U.
4:30 pm - 5:30 pm    STANDARDS ACTIVITIES OVERVIEW - Come and Go
  Joseph J. Cynamon, The Mitre Corp., SCS Associate VP for Standards
  Robert Shelton, NASA/JSC, SimTec/WNN/FNN Standards Chair
5:30 pm - 7:00 pm    Exhibitors Reception

Thursday, November 5

8:30 am - 10:00 am   PARALLEL SESSIONS

  TEMPORAL MODELING, James Lo, U. Maryland
    Time-Delay Neural Network (TDNN) Simulator to Detect Time-Variant Signals
      E. J. Carroll, N. P. Coleman, Jr., Picatinny Arsenal; G. N. Reddy, Lamar U.
    Synthetic Approach to Optimal Filtering
      James Ting-Ho Lo, U. Maryland
    Perceptual Linear Prediction and Neural Networks for Speech Analysis
      Sheryl L. Knotts, James A. Freeman, Loral Space Information Systems; Thomas L. Harman, U Houston-Clear Lake
    A Neural Network for Temporal Pattern Classification
      Narasimhan S. Kumar, A. Dale Whittaker, Texas A&M

  VISION, Tim Cleghorn, NASA/JSC
    Adaptive Mixture Neural Networks for Functional Estimation
      George W. Rogers, Carey E. Priebe, David J. Marchette, Jeffrey L. Solka, NSWC
    Kernel Estimators and Mixture Models in Artificial Neural Networks
      Carey E. Priebe, David J. Marchette, George W. Rogers and Jeffrey L. Solka, NSWC
    A Neural Net Based 2D-Vision System for Real-Time Applications
      G.N. Reddy, S. Vaithilingham, W. C. Bean, Lamar U, Picatinny Arsenal; J. M. Mazzu, Charles River Analytics, Inc.
    A Hybrid Neural Network System for Robotic Object Recognition and Pose Determination
      J. M. Mazzu, A. K. Caglaran, Charles River Analytics, Inc.

8:30 am - 10:00 am   IEEE-NNC Hardware Interface Standards
10:30 am - 12:00 pm  PARALLEL SESSIONS

  CONTROLS I, Claire McCullough, Redstone Arsenal
    A Manufacturing Cell Control Strategy Using Neural Networks and Discrete Event Simulation
      Qi Wan, Jeffrey K. Cochran, Arizona State U.
    Adaptive Control of Noisy Nonlinear Systems Using Neural Networks
      Claire L. McCullough, UAH
    An Adaptive Neural Network Controller of Robot Manipulator
      Youcef Derbal, M. M. Bayoumi, Queen's U., Canada
    A Neural Network Control System for Unit Food Dehydration Processes
      A. Dale Whittaker, Texas A&M

  NEURAL NETWORKS SIMULATION, David Tam, U North Texas
    A Generalizable Object-Oriented Neural Simulator for Reconstructing Functional Properties of Biological Neuronal Networks
      David Tam, U North Texas, Denton
    A Novel Vectorial Phase-Space Analysis of Spatio-Temporal Firing Patterns
      David Tam, U North Texas, Denton
    Exploring Class Reuse and Integration for Hybrid Simulation in C++
      Teresa L. Hitt, Jefferson State Community College, and Kevin D. Reilly, UAB

10:30 am - 12:00 pm  IEEE-NNC Software Interface Standards
12:00 pm - 1:30 pm   NETS Users II: Meet and Go Out to Eat
1:30 pm - 3:00 pm    PARALLEL SESSIONS

  CONTROLS II, Robert Shelton, NASA/JSC
    Using Functional Link Neural Nets for General Linear Least Squares Model Fitting
      Alfredo Somolinos, Mercy College
    Stabilization of Tethered Satellites using Parametric Avalanche Neural Network
      Robert Dawes, Ajay Patrikar, Martingale Research Corp.
    A Tree-Addressed Local Neural Network
      Robert Shelton, NASA/JSC
    A Neural Net Controller to Balance an Inverted Pendulum
      Lin Bin, Gongyuan Ding, G. N. Reddy, Lamar U.

  OPTIMIZATION & LEARNING II, G. N. Reddy, Lamar U., and James Villareal, NASA/JSC
    Modified Simulated Annealing Using Sample Distribution from the Energy Space of the Problem Instance
      G. Sampath, Marist College
    An Improved Neural Network Simulator for Solving the Travelling Salesman Problem
      Vinay Saxena, G. N. Reddy, Wendell C. Bean, Lamar U.
    An Empirical Analysis of the Expected Source Values Rule
      Richard Spartz, Vasant A. Honavar, Iowa State U.
1:30 pm - 3:00 pm    SCS Standards Board: Procedures
  J. F. Cynamon, The Mitre Corp., SCS Associate VP for Standards
3:30 pm - 5:00 pm    PARALLEL SESSIONS

  FUZZY LOGIC, Robert Lea, NASA/JSC
    Fuzzy Logic: A Brief Introduction
      Steve Marsh, Duberly Mazuelos, Motorola, Austin, TX
    Correlation-Recall Encoding Operations for Fuzzy Associative Memories
      Sergey Aityan, Texas A&M
    A Simple Fuzzy Logic Real-Time Camera Tracking System
      Kevin N. Magee, John B. Cheatham, Jr., Rice U.

  PATTERN RECOGNITION & APPLICATIONS I, A. Martin Wildberger, EPRI
    Overview of Neural Network Projects at the Electric Power Research Institute
      A. Martin Wildberger, EPRI
    A Solution to Large-Scale Group Technology Problems: ART1 Neural Network Approach
      Cihan H. Dagli, Cheng-Fo Sen, U Missouri-Rolla

3:30 pm - 6:00 pm    IEEE-NNC Standards Overview (Come and Go)
  Walter J. Karplus, UCLA, Chair, IEEE-NNC Standards Committee
  Performance Evaluation Working Group
7:00 pm - ...        Dutch Dinner on the Town
  A Fuzzy/Neural Network Planning and Networking Activity
  Sponsored by: SCS KBS Applications to Simulators TAC, SCS NN and Simulation Standards Committee, IEEE-NNC Standards Committee and INNS Standards SIG

Friday, November 6

7:00 am - 8:30 am    IEEE-NNC Glossary Working Group
8:30 am - 10:00 am   PARALLEL SESSIONS

  PATTERN RECOGNITION & APPLICATIONS II, S. Piche, Thought Processes, Inc., and R. Shelton, NASA/JSC
    Freeway Incident Detection Using Advanced Technology
      Edmond Ching-Ping Chang, Texas A&M
    Integration of Local and Global Neural Classifiers for Passive Sonar Signals
      Joydeep Ghosh, Kagan Turner, UT Austin; Steven Beck, Larry Deuser, Tracor Applied Sciences
    A Neural Network Approach for the Investigation of Chemical Phenomena
      Jerry A. Darsey, ORNL; Bobby Sumpter, Coral Getino, Donald W. Noid, U Arkansas-Little Rock

8:30 am - 10:00 am   SCS NN and Simulation Standards Discussion
  "Embedding Fuzzy Neural Control Modules into Simulations"
    Mary Lou Padgett, Auburn U.; Troy Henson, IBM Corp.; and Michelle Izygon, Barrios Technology, Inc.

10:30 am - 12:00 pm  PARALLEL SESSIONS

  NEURAL NETWORKS AND FUZZY LOGIC, David Bendell Hertz, U. Miami
    Fuzzy-Neuro Controller for Back-Propagation Networks
      David Bendell Hertz, Qing Hu, U Miami
    Controlling Training of Neural Classifiers with Fuzzy Logic
      Ed DeRouin, Joe Brown, Thought Processes, Inc.
    Neural Networks Applications: Techniques for Increasing Validity
      Mary Lou Padgett, Auburn University

10:30 am - 12:00 pm  SCS Human Factors Discussion
12:30 pm - 2:00 pm   Awards Luncheon at Gilruth Center on NASA/JSC
2:00 pm - 5:00 pm    TOURS of NASA/JSC Simulation Facilities
5:30 pm - 7:00 pm    KBS Reception for Lotfi Zadeh and Paper Award Winners (Guests $10.00)

FNN Symposium: Tutorials on Fuzzy Logic, Neural Networks, Standards
Saturday, November 7
Chairs: Joseph Mica, NASA/GSFC, and Robert Savely, NASA/JSC

8:00 am              Registration
8:30 am - 9:45 am    "Fuzzy Logic, Neural Networks and Soft Computing" Lotfi Zadeh, UC Berkeley
9:45 am - 10:00 am   Break
10:00 am - 11:15 am  "Fuzzy Applications" Yashvant Jani, Togai Infralogic
11:15 am - 12:30 pm  "Fuzzy Helicopter Control" Captain Greg Walker, NASA Langley
12:30 pm - 1:30 pm   Dutch Lunch
1:30 pm - 2:00 pm    "Analysis of Stability of Fuzzy Control and Connections between Fuzzy Control and Nonlinear Control" R. Langari, Texas A&M
2:00 pm - 5:00 pm    NEURAL NETWORKS: Mary Lou Padgett, Auburn U.
2:00 pm - 2:30 pm    "Fuzzy Neural Relationships"
2:30 pm - 3:30 pm    "Neural Networks Basics"
3:30 pm - 3:45 pm    Break
3:45 pm - 4:45 pm    "Neural Networks Applications"
  Proposed Standard Glossary, Backpropagation Examples, NASA/JSC NETS Executable & examples to be given
4:45 pm - 5:00 pm    "Neural Network Futures"
  Sign up for SCS, IEEE, INNS Activities
5:00 pm - ...        "NASA NETS Users Group Meeting SignUp"

R E G I S T R A T I O N

SimTec'92 1992 International Simulation Technology Conference * WNN92/Houston and FNN Symposium
November 4-6, 1992 * South Shore Harbour Resort, Clear Lake, Texas (near Houston and NASA/JSC)

Name: ________________________________________________________
Organization:_________________________________________________
Street Address:_______________________________________________
______________________________________________________________
City:_______________________ State:_____ Zip:________________
Country:____________________ Phone:__________________________
Member #:___________________ Fax:____________________________
Email:________________________________________________________

Early Registration (through October 9, 1992)
  ___ $315 Author/Member*   ___ $50 Student   ___ $395 Non-member
Late Registration (after October 9, 1992)
  ___ $375 Author/Member*   ___ $50 Student   ___ $460 Non-member

Note: Each presentation requires payment of the full registration fee by an author.
* Member rates apply to members of SCS, IEEE, INNS and NASA.

FNN Symposium: Saturday, Nov 7, 8:00 am - 5:00 pm (Note Separate Fee)
  Fuzzy Logic (Zadeh) and Neural Networks (Padgett)

Professional Development Seminars, Saturday, Nov. 7 (8-5)
  A[ ] Computer Performance Evaluation, M. Obaidat
  B[ ] Essentials of Project Management, R. Meermans

Method of Payment: (No Cash Accepted)
  ___ VISA   ___ Mastercard   ___ American Express
  Card Number:________________________ Exp. Date_______________
  Authorizing Signature:________________________________________
  ___ Check   ___ Company Purchase Order   ___ Gov't DD Form 1556

Advance registration fees are transferable. The student rate applies to FULL-TIME STUDENTS only. A faculty signature is required to certify the student's enrollment status.

  __________________________________________________
  Faculty Member Signature

Conference Registration Fee (SimTec & WNN92/Houston)
  (includes Reception, Proceedings, coffee breaks; for Full Registrants only)   $__________
New Membership ($60.00)                                                         $__________
FNN Symposium or PDS Registration (per Seminar, Sat. Nov. 7)
  [ ] Early $185   [ ] Late $275   PDS: A[ ]  PDS: B[ ]  FNN: [ ]               $__________
Additional Copies of Proceedings @ $75 each (Regular Rate $100)                 $__________
TOTAL AMOUNT REMITTED                                                           $__________

Please send the Conference Registration Form along with your registration fee to: 1992 International Simulation Technology Conference, c/o SCS, 4838 Ronson Ct., Suite L, San Diego, CA 92111. Phone: (619) 277-3888; FAX: (619) 277-3930. Checks must be drawn on U.S. banks and in U.S. dollars. Make check payable to "SCS" (reference 1992 SimTec).

CONTACT SCS OFFICE for the complete Preliminary Program containing papers for ALL SimTec tracks (Aerospace, Emerging Technologies, Simulation Applications and Life Sciences). Travel arrangement information is included in this program. Directions from Hobby Airport are given. MAKE HOTEL ARRANGEMENTS SEPARATELY. Contact South Shore Harbour Resort, 2500 South Shore Blvd., League City, TX 77573. Phone: (713) 334-1000 or (800) 442-5005; FAX: (713) 334-1157. Conference rates (before October 1, 1992) are $90 Single or Double.
Program Committee for SimTec92: General Chair: Tony Sava, IBM Corp.; Associate General Chair: Robin Kirkham, Lockheed/ESC; Program Chair, Troy Henson, IBM Corp.; Technical Editor and SCS Associate VP for SimTec, Mary Lou Padgett, Auburn University; Local Arrangements and Associate Technical Editor: Ankur Hajare, Mitre Corp.; Exhibits Chair: Wade Webster, Lockheed/ESC; NASA Representatives: Robert Savely, NASA/JSC; Joseph Mica, NASA/GSFC; ESA Representative: Juan Miro, ESA ************************************************************************ ************************************************************************ CALL FOR PAPERS SimTec93/WNN93/FNN93 in SAN FRANCISCO November 7-10, 1993 San Francisco Airport Marriott SimTec 93 1993 International Simulation Technology Conference Emerging Technologies * Simulation Applications * Aerospace Visualization, Circuit Simulation, Intelligent Programming Tools and Techniques Multimedia in Simulation Pattern Recognition, Controls, Microelectronics, Life Sciences, Management, Supercomputing, Parallel & Distributed Processing Simulation Facilities, Training Command & Control Paper competitions in Academic, Industry and Government Categories Awards Luncheon * Exhibitors Reception TOUR of NASA/AMES PANELS, DISCUSSIONS, PROFESSIONAL ACTIVITIES, STANDARDS, SOFTWARE EXCHANGES JOIN US in BEAUTIFUL SAN FRANCISCO . . . Bring your family . . . Sponsored by The Society for Computer Simulation, International Co-sponsored by NASA/JSC,GSFC and LRC in cooperation with SPIE WNN93/San Francisco Workshop on Neural Networks and Fuzzy Logic WNN is an informal conference on neural networks, fuzzy logic and related applications. Artificial networks and life sciences foundations are of interest. The meeting features workshops, standards discussions and paper contests. In addition to SimTec sponsors, the IEEE Neural Networks Council is a participating society and INNS is cooperating. Mary Lou Padgett, Auburn University; Robert Shelton, NASA/JSC; Walter J. Karplus, UCLA; Bart Kosko, USC and Paul Werbos, NSF; NASA Representative: Robert Savely, NASA/JSC FNN93: Fuzzy Neural Networks Symposium with Tutorials and Standards A collection of presentations on fuzzy logic theory and applications, neural fuzzy control and basic neural networks concepts and applications will be featured on Sunday. Anyone interested in these concepts should benefit from participating. Proposed Neural Networks and Fuzzy Logic Standards will be explained by Padgett. NASA/NETS software executable and examples will be included in this tutorial. Sponsored by SCS, co-sponsored by NASA/JSC,GSFC. SimTec93 * WNN93 * FNN93 DEADLINES: Abstracts/Draft Papers: May 1, 1993; Camera-Ready: June 30, 1993. TO SUBMIT AN ABSTRACT, PROPOSE A SESSION OR SUGGEST A TOPIC OF INTEREST . . . Contact: Mary Lou Padgett, SCS Associate VP for SimTec, Auburn University, 1165 Owens Road, Auburn, AL 36830. Phone: (205) 821-2472/3488 Fax: (205) 844-1809 Email: mpadgett at eng.auburn.edu. SCS Office: Phone: (619) 277-3888 Fax: (619) 277-3930. General Chair: Ted Lambert, Program Chair: Martin Dost; Associate Program Chair: Ralph Huntsinger, UC Chico; NASA Representatives: Robert Savely, NASA/JSC and Joseph Mica, NASA/GSFC; ESA Representative: Juan Miro Committee Includes: Bill Cameron; Paul Luker, UC Chico; Norman Pobanz, Bechtel; Stuart Schlessinger; A. 
Martin Wildberger; Tim Cleghorn, NASA/JSC; Robert Lea, NASA/JSC
************************************************************************
************************************************************************

SimTec 93
1993 International Simulation Technology Conference
WNN93/San Francisco & FNN93 Symposium
November 7-10, 1993, San Francisco Airport Marriott

If you wish to receive further information about SimTec, WNN and FNN, please return (preferably by Email) the form printed below:

NAME:
AFFILIATION:
ADDRESS:
PHONE:
FAX:
EMAIL:

Please send more information on registration ( ) optional tours ( ). I intend to submit a paper ( ), a tutorial ( ), an abstract only ( ). I may give a demonstration ( ) or exhibit ( ).

Return to: Mary Lou Padgett, 1165 Owens Rd., Auburn, AL 36830. email: mpadgett at eng.auburn.edu
======= SCS OFFICE: (619) 277-3888 =======
SimTec 93 * WNN93/San Francisco * FNN93
Society for Computer Simulation, International
P.O. Box 17900, San Diego, CA 92177
************************************************************************
From gem at cogsci.indiana.edu Tue Sep 1 15:49:44 1992 From: gem at cogsci.indiana.edu (Gary McGraw) Date: Tue, 1 Sep 92 14:49:44 EST Subject: Technical report available (Letter Spirit) Message-ID:

The following technical report has been placed in the neuroprose archive as mcgraw.letter_spirit.ps.Z and is available via ftp (from 128.146.8.52).

Letter Spirit: Recognition and Creation of Letterforms Based on Fluid Concepts
by Gary McGraw, Center for Research on Concepts and Cognition, TR 61

Although this work is not really connectionism per se, the approach it represents has much in common with connectionist ideas. Here is an abstract:

The Letter Spirit project is an attempt to model central aspects of human creativity on a computer. We believe that creativity flourishes in the mind because of the flexible and context-sensitive nature of concepts (which we call fluid concepts to reflect this essential plasticity). We believe that creativity is a by-product of the fluidity of concepts, and that a reasonable model of conceptual fluidity can shed much light on creativity. Letter Spirit explores creativity through the art of letter design. The aim of Letter Spirit is to model how the 26 lowercase letters of the roman alphabet can be rendered in a uniform style. The program will start with one or more seed letters representing a style, and create the rest of the letters in such a way that they share the same style, or spirit. Letter Spirit involves a blend of high-level perception and conceptual play that will allow it to create in a cognitively plausible fashion. High-level perception involves processing information to the level of meaning by accessing concepts and making sense of sensory data at a conceptual level.

----------------------------------------------------------------------

The paper is also available via ftp from cogsci.indiana.edu as /pub/mcgraw.letter_spirit.ps. E-mail versions may be available to interested parties without ftp access. Send inquiries to gem at cogsci.indiana.edu (or mcgrawg at moose.cs.indiana.edu).

From RIANI at GENOVA.INFN.IT Thu Sep 3 05:43:00 1992 From: RIANI at GENOVA.INFN.IT (RIANI@GENOVA.INFN.IT) Date: 03 Sep 1992 09:43 +0000 (GMT) Subject: research fellowship Message-ID: <2216@GENOVA.INFN.IT> Subj: Research fellowship.
A post-doc fellowship funded by the European Community Commission (program Mobility and Human Capital) could be assigned to the research group on Neural Networks of the Unita' di Genova del Consorzio INFM for a period of 6 to 12 months. The salary of the fellowship will be 3600 Ecu/month (insurance fees and taxes included). Candidates must be citizens of an EC country other than Italy. The research topic of the fellowship will be one of the following: (a) neural networks for handwriting recognition; (b) studies of neural network algorithms for molecular electronic systems. Interested candidates should send a curriculum vitae, a publication list and a letter of interest to me.

Prof. Massimo Riani
Unita' di Genova del Consorzio INFM
Via Dodecaneso 33
16146 Genova - Italy
email: riani at genova.infn.it
fax: +39-10-314218

From lange at CS.UCLA.EDU Wed Sep 2 07:21:33 1992 From: lange at CS.UCLA.EDU (Trent Lange) Date: Wed, 2 Sep 92 04:21:33 PDT Subject: Papers in Neuroprose Archive Message-ID: <920902.112133z.10403.lange@lanai.cs.ucla.edu>

The following two reprints have been placed in the Neuroprose Archives at Ohio State University:

======================================================================

Lexical and Pragmatic Disambiguation and Reinterpretation in Connectionist Networks

Trent E. Lange, Artificial Intelligence Laboratory, Computer Science Department, University of California, Los Angeles

Lexical and pragmatic ambiguity is a major source of uncertainty in natural language understanding. Symbolic models can make high-level inferences necessary for understanding text, but handle ambiguity poorly, especially when later context requires a reinterpretation of the input. Structured connectionist networks, on the other hand, can use their graded levels of activation to perform lexical disambiguation, but have trouble performing the variable bindings and inferencing necessary for language understanding. We have previously described a structured spreading-activation model, ROBIN, which overcomes many of these problems and allows the massively-parallel application of a large class of general knowledge rules. This paper describes how ROBIN uses these abilities and the contextual evidence from its semantic networks to disambiguate words and infer the most plausible plan/goal analysis of the input, while using the same mechanism to smoothly reinterpret the input if later context makes an alternative interpretation more likely. We present several experiments illustrating these abilities and comparing them to those of other connectionist models, and discuss several directions in which we are extending the model.

* Appears in International Journal of Man-Machine Studies, 36: 191-220. 1992.

======================================================================

REMIND: Retrieval From Episodic Memory by INferencing and Disambiguation

Trent E. Lange, Artificial Intelligence Laboratory, Computer Science Department, University of California, Los Angeles
Charles M. Wharton, Department of Psychology, University of California, Los Angeles

Most AI simulations have modeled memory retrieval separately from language understanding, even though both activities seem to use many of the same processes. This paper describes REMIND (Retrieval from Episodic Memory through INferencing and Disambiguation), a structured spreading-activation model of integrated text comprehension and episodic reminding.
In REMIND, activation is spread through a semantic network that performs dynamic inferencing and disambiguation to infer a conceptual representation of an input cue. Because stored episodes are associated with concepts used to understand them, the spreading-activation process also activates any memory episodes in the network that share features or knowledge structures with the cue. After the cue's conceptual representation is formed, the network recalls the memory episode having the highest activation. Since the inferences made from a cue often include actors' plans and goals only implied in a cue's text, REMIND is able to get abstract, analogical remindings that would not be possible without an integrated understanding and retrieval model.

* To appear in J. Barnden and K. Holyoak (Eds.), Advances in Connectionist and Neural Computation Theory, Volume II: Analogical Connections. Norwood, NJ: Ablex.

======================================================================

Both papers are broken into two (large) postscript files stored in compressed tarfiles. To obtain a copy via FTP (courtesy of Jordan Pollack):

unix% ftp archive.cis.ohio-state.edu   (or 128.146.8.52)
Name: anonymous
Password: (type your E-mail address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get lange.disambiguation.tar.Z
ftp> get lange.remind.tar.Z
ftp> quit
unix% zcat lange.disambiguation.tar.Z | tar -xvf -
unix% lpr -s lange.disambiguation1.ps
unix% lpr -s lange.disambiguation2.ps
unix% zcat lange.remind.tar.Z | tar -xvf -
unix% lpr -s lange.remind1.ps
unix% lpr -s lange.remind2.ps

Note that the -s option for lpr is needed for most printers because of the large size of the uncompressed postscript files (~1 meg each). Sorry, no hard copies available.

Trent Lange, Artificial Intelligence Laboratory, Computer Science Department, University of California, Los Angeles, Los Angeles, CA 90024. E-Mail Address: lange at cs.ucla.edu

From tgd at chert.CS.ORST.EDU Sun Sep 6 11:56:44 1992 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Sun, 6 Sep 92 08:56:44 PDT Subject: A neat idea from L. Breiman In-Reply-To: Henrik Klagges's message of Mon, 31 Aug 92 14:01:11 PDT <9208312101.AA25658@tazdevil.llnl.gov> Message-ID: <9209061556.AA24983@research.CS.ORST.EDU>

   > hold {d_2, ..., d_{k-1}} constant ...by re-making decision d_1

   How can you hold d_2 ... etc constant if they might depend on d_1,
   like in a game tree?

   Cheers, Henrik
   IBM Research / Lawrence Livermore National Labs

Suppose d_1' is an alternative way of making d_1. You can now evaluate the "board position" corresponding to {d_1', d_2, ..., d_{k-1}}. In some cases, of course, this will not be a legal position (and it should get a bad evaluation), but in many situations, it will be legal and possibly superior to {d_1, d_2, ..., d_{k-1}}.

--Tom

From hunter at nlm.nih.gov Tue Sep 8 10:42:31 1992 From: hunter at nlm.nih.gov (Larry Hunter) Date: Tue, 8 Sep 92 10:42:31 -0400 Subject: Call for Papers: Intelligent Systems for Molecular Biology Message-ID: <9209081442.AA04678@work.nlm.nih.gov>

***************** CALL FOR PAPERS *****************

The First International Conference on Intelligent Systems for Molecular Biology
July 7-9, 1993, Washington, DC

Organizing Committee
---------------------
Lawrence Hunter, National Library of Medicine
David Searls, University of Pennsylvania
Jude Shavlik, University of Wisconsin

Program Committee
-----------------------------
D. Brutlag, Stanford; B. Buchanan, U. of Pittsburgh; C. Burks, Los Alamos; F. Cohen, UC-SF; C. Fields, TIGR; M. Gribskov, UC-SD; P. Karp, SRI; A. Lapedes, Los Alamos; R. Lathrop, MIT; C. Lawrence, Baylor; M. Mavrovouniotis, U-Md; G. Michaels, NIH/DCRT; H. Morowitz, George Mason; K. Nitta, ICOT; M. Noordewier, Rutgers; R. Overbeek, Argonne; C. Rawlings, ICRF; D. States, NLM, NIH; G. Stormo, U. of Colorado; E. Uberbacher, Oak Ridge; D. Waltz, Thinking Machines

Schedule
---------------------
Papers and Tutorial Proposals Due: February 15, 1993
Replies to Authors: March 29, 1993
Revised Papers Due: April 26, 1993

Sponsors: American Association for Artificial Intelligence, National Library of Medicine

The First International Conference on Intelligent Systems for Molecular Biology will take place in Washington, DC, July 7-9, 1993. The conference will bring together scientists who are applying the technologies of artificial intelligence, robotics, neural networks, massively parallel computing, advanced data modelling, and related methods to problems in molecular biology. Participation is invited from both producers and consumers of any novel computational or robotic system, provided it supports a biological task that is cognitively challenging, involves a synthesis of information from multiple sources at multiple levels, or in some other way exhibits the abstraction and emergent properties of an "intelligent system." The three-day conference, to be held in the attractive conference facilities of the Lister Hill Center, National Library of Medicine, National Institutes of Health, will feature both introductory tutorials and original, refereed papers, to be published in an archival Proceedings. The conference will immediately precede the Eleventh National Conference of the American Association for Artificial Intelligence, also in Washington.

Papers should be 12 pages, single-spaced and set in 12 point type, including title, abstract, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, telephone, and FAX numbers. Submit 6 copies to the address shown. For more information, contact ISMB at nlm.nih.gov.

Jude Shavlik, Computer Sciences Dept, University of Wisconsin, 1210 W. Dayton Street, Madison, WI 53706

*****************************************************************

From tgd at chert.CS.ORST.EDU Wed Sep 9 19:49:43 1992 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Wed, 9 Sep 92 16:49:43 PDT Subject: Machine Learning 9:2/3 Message-ID: <9209092349.AA21744@research.CS.ORST.EDU>

Machine Learning, July 1992, Volume 9, Issues 2/3
Special Issue on Computational Learning Theory

  Introduction
    J. Case and A. Blumer
  Lower Bound Methods and Separation Results for On-Line Learning Models
    W. Maass and G. Turan
  Learning Conjunctions of Horn Clauses
    D. Angluin, M. Frazier, and L. Pitt
  A Learning Criterion for Stochastic Rules
    K. Yamanishi
  On the Computational Complexity of Approximating Distributions by Probabilistic Automata
    N. Abe and M. K. Warmuth
  A Universal Method of Scientific Inquiry
    D.N. Osherson, M. Stob, and S. Weinstein

-----
Subscriptions - Volume 8-9 (8 issues), includes postage and handling.
  $140 Individual
  $88 Member AAAI
  $301 Institutional

Kluwer Academic Publishers, P.O. Box 358, Accord Station, Hingham, MA 02018-0358 USA, or Kluwer Academic Publishers Group P.O.
Box 322 3300 AH Dordrecht THE NETHERLANDS From RREILLY at ccvax.ucd.ie Wed Sep 9 06:33:00 1992 From: RREILLY at ccvax.ucd.ie (RREILLY@ccvax.ucd.ie) Date: 09 Sep 1992 10:33 +0000 (WET) Subject: Post-doctoral Fellowship Message-ID: Human Capital and Mobility Programme of the Commission of the European Communities Postdoctoral Fellowship ============================================================== Applications are invited for an EC funded post-doctoral fellowship with the connectionist research group in the Dept. of Computer Science, University College Dublin, Ireland. The duration of the fellowship may be between 6-12 months. Remuneration will be at a rate of 3,255 ECU/month (this covers subsistence, tax, social insurance, etc.). The fellowship is open to EC citizens other than citizens of Ireland. The research topics are: (1) The connectionist modelling of eye-movement control in reading, and (2) The connectionist modelling of natural language processing. Interested candidates should send me a letter of application, a CV, and a list of their publications. They should also indicate which research topic, and what particular aspects of it, they are interested in working on. Since the closing date for receipt of applications is September 25, candidates are encouraged to send their applications either by e-mail or FAX. Ronan Reilly Dept. of Computer Science University College Dublin Belfield Dublin 4 IRELAND Tel.: +353.1.7062475 Fax : +353.1.2697262 e-mail: rreilly at ccvax.ucd.ie ===================================================================== From ingber at alumni.cco.caltech.edu Sat Sep 12 14:47:12 1992 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Sat, 12 Sep 1992 11:47:12 -0700 Subject: 2nd Request for (p)Reprints on Simulated Annealing Message-ID: <9209121847.AA05098@alumni.cco.caltech.edu> 2nd Request for (p)Reprints on Simulated Annealing I posted the text below in July, and have received many interesting papers which I will at least mention in my review. It is clear that many researchers use something "like" simulated annealing (SA) in their work to approach quite difficult computational problems. They take advantage of the ease of including complex constraints and nonlinearities into an SA approach that requires a quite simple and small code, especially relative to many other search algorithms. However, the bulk of the papers I have seen use the standard Boltzmann annealing, for which it has been proven sufficient to only use a log annealing schedule for the temperature parameter in order to statistically achieve a global optimal solution. This can require a great deal of CPU time to implement, and so these papers actually "quench" their searches by using much faster temperature schedules, too fast to theoretically claim they are achieving the global optimum. Instead they have defined their own method of simulated quenching (SQ). In many of their problems this really is not much of an issue, as there is enough additional information about their system to be able to claim that their SQ is good enough, and the ease of implementation certainly warrants its use. I.e., anyone familiar with trying to use other "standard" methods of nonlinear optimization on difficult problems will appreciate this. I also appreciate that faster SA methods, such as I have published myself, are not as easily implemented. I would like to have more examples of: (1) papers that have really used SA instead of SQ in difficult problems. 
(2) proposed/tested improvements to SA which still have the important feature of establishing at least a heuristic argument that a global optimum can indeed be reached, e.g., some kind of ergodic argument. The review is on SA, and I do not have the allotted space or intention to compare SA to other important and interesting algorithms. Thanks. Lester }I have accepted an invitation to prepare a review article on simulated }annealing for Statistics and Computing. The first draft is due 15 }Jan 93. } }If you or your colleagues have performed some unique work using }this methodology that you think could be included in this review, }please send me (p)reprints via regular mail. As I will be }making an effort to prepare a coherent article, not necessarily an }all inclusive one, please do not be too annoyed if I must choose not }to include/reference work you suggest. Of course, I will formally }reference or acknowledge any inclusion of your suggestions/material }in this paper. While there has been work done, and much more remains }to be done, on rigorous proofs and pedagogical examples/comparisons, }I plan on stressing the use of this approach on complex, nonlinear }and even stochastic systems. } }I am a "proponent" of a statistical mechanical approach to selected }problems in several fields; some recent reprints are available via }anonymous ftp from ftp.umiacs.umd.edu [128.8.120.23] in the pub/ingber }directory. I am not a hardened "proponent" of simulated annealing; }I welcome papers criticizing or comparing simulated annealing to }other approaches. I already plan on including some references that }are openly quite hostile to this approach. # Prof. Lester Ingber # # ingber at alumni.caltech.edu # # P.O. Box 857 # # McLean, VA 22101 [10ATT]0-700-L-INGBER # From harnad at Princeton.EDU Sat Sep 12 17:33:57 1992 From: harnad at Princeton.EDU (Stevan Harnad) Date: Sat, 12 Sep 92 17:33:57 EDT Subject: Express Saccades & Attention: BBS Call for Commentators Message-ID: <9209122133.AA08081@clarity.Princeton.EDU> Below is the abstract of a forthcoming target article by B. Fischer & H. Weber on express saccadic eye movements and attention. It has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at clarity.princeton.edu or harnad at pucc.bitnet or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp according to the instructions that follow after the abstract. ____________________________________________________________________ EXPRESS SACCADES AND VISUAL ATTENTION B. Fischer and H. Weber Department Neurophysiology Hansastr. 
9 D - 78 Freiburg Germany
aiple at sun1.ruf.uni-freiburg.de (c/o Franz Aiple)

KEYWORDS: Eye movements, Saccade, Express Saccade, Vision, Fixation, Attention, Cortex, Reaction Time, Dyslexia

ABSTRACT: One of the most intriguing and controversial observations in oculomotor research in recent years is the phenomenon of express saccades in man and monkey. These are saccades with such extremely short reaction times (100 ms in man, 70 ms in monkey) that some experts on eye movements still regard them as artifacts or anticipatory reactions that do not need any further explanation. On the other hand, some research groups consider them to be not only authentic but also a valuable means of investigating the mechanisms of saccade generation, the coordination of vision and eye movements, and the mechanisms of visual attention. This target article puts together pieces of experimental evidence in oculomotor and related research - with special emphasis on the express saccade - in order to enhance our present understanding of the coordination of vision, visual attention, and eye movements necessary for visual perception and cognition. We hypothesize that an optomotor reflex is responsible for the occurrence of express saccades, one that is controlled by higher brain functions of disengaged visual attention and decision making. We describe a neural network as a basis for more elaborate mathematical models and computer simulations of the optomotor system in primates.

--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from princeton.edu according to the instructions below (the filename is bbs.fischer). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article.

-------------------------------------------------------------

To retrieve a file by ftp from a Unix/Internet site, type either:
   ftp princeton.edu
or
   ftp 128.112.128.1
When you are asked for your login, type:
   anonymous
Enter password as per instructions (make sure to include the specified @), and then change directories with:
   cd /pub/harnad
To show the available files, type:
   ls
Next, retrieve the file you want with (for example):
   get bbs.fischer
When you have the file(s) you want, type:
   quit

Certain non-Unix/Internet sites have a facility you can use that is equivalent to the above. Sometimes the procedure for connecting to princeton.edu will be a two-step process such as:
   ftp
followed at the prompt by:
   open princeton.edu
or
   open 128.112.128.1
In case of doubt or difficulty, consult your system manager.

----------

JANET users who do not have an ftp facility for interactive file transfer (this requires a JIPS connection on your local machine - consult your system manager if in doubt) can use a similar facility available at JANET site UK.AC.NSF.SUN (numeric equivalent 000040010180), logging in using 'guestftp' as both login and password. The online help information gives details of the transfer procedure, which is similar to the above. The file received on the NSF.SUN machine needs to be transferred to your home machine to read it, which can be done either using a 'push' command on the NSF.SUN machine, or (usually faster) by initiating the file transfer from your home machine.
In the latter case the file on the NSF.SUN machine must be referred to as directory-name/filename (the directory name to use being that provided by you when you logged on to UK.AC.NSF.SUN). To be sociable (since NSF.SUN is short of disc space), once you have received the file on your own machine you should delete the file from the UK.AC.NSF.SUN machine. This facility is very often overloaded, and an off-line relay facility at site UK.AC.FT-RELAY (which is simpler to use in any case) can be used as an alternative. The process is almost identical to file transfer within JANET, and the general method is illustrated in the following example. With some machines, filenames and the username need to be placed within quotes to prevent unacceptable transposition to upper case (as may apply also to the transfer from NSF.SUN described above).

   transfer
   Send or Fetch: f

From holm at nordita.dk Mon Sep 14 11:26:45 1992 From: holm at nordita.dk (Holm Schwarze) Date: Mon, 14 Sep 92 17:26:45 +0200 Subject: paper available in neuroprose Message-ID: <9209141526.AA01594@norsci0.nordita.dk>

** DO NOT FORWARD TO OTHER GROUPS **

The following paper has been placed in the Neuroprose archive in the file schwarze.committee.ps.Z. Retrieval instructions follow the abstract. Hardcopies are not available. -- Holm Schwarze (holm at nordita.dk)

-------------------------------------------------------------------------

GENERALIZATION IN FULLY CONNECTED COMMITTEE MACHINES

H. Schwarze and J. Hertz
CONNECT, The Niels Bohr Institute and Nordita
Blegdamsvej 17, DK-2100 Copenhagen, Denmark

ABSTRACT

We study supervised learning in a fully connected committee machine trained to implement a rule of the same structure. The generalization error as a function of the number of training examples per weight is calculated within the annealed approximation. For binary weights we find a discontinuous transition from poor to perfect generalization. Beyond this transition metastable states exist even for large training sets. The scaling of the order parameters with the number of hidden units depends on the size of the training set. For continuous weights we find a discontinuous transition from a committee-symmetric solution to one with specialized hidden units.

-------------------------------------------------------------------------

To retrieve the paper by anonymous ftp:

unix> ftp archive.cis.ohio-state.edu   # (128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get schwarze.committee.ps.Z
ftp> quit
unix> uncompress schwarze.committee.ps.Z
unix> lpr schwarze.committee.ps

-------------------------------------------------------------------------

From ie-list at cs.ucl.ac.uk Mon Sep 14 07:13:32 1992 From: ie-list at cs.ucl.ac.uk (IE Digest Moderator) Date: Mon, 14 Sep 92 12:13:32 +0100 Subject: Intelligent systems for Economics digest Message-ID:

Announcing the Intelligent systems for Economics digest (IE-digest)
-------------------------------------------------------------------

The Intelligent systems for Economics digest aims to act as a forum to exchange ideas on using `intelligent' techniques to model economic and financial systems. Techniques which were originally developed to model psychological and biological processes are now receiving considerable attention as tools for modelling and understanding economic and financial processes.
These techniques, which include neural networks, genetic algorithms and expert systems, are now being used in a wide variety of applications, including the modelling of economic cycles, modelling of artificial economies, portfolio optimisation and credit evaluation. The IE-digest will carry announcements of papers, calls for papers, and requests for information, and will act as a medium for researchers to exchange ideas in this rapidly growing research area. The format of the IE-digest is similar to other moderated forums such as the "neuron-digest". A repository has been set up for papers, bibliographies, and software, which can be accessed via FTP. Past issues of the IE-digest will also be kept there. * The Relevant Technologies Neural networks, Genetic Algorithms, Classifier Systems, Expert Systems, Fuzzy Logic, Rule Induction, Dynamical Systems Theory (Chaos Theory), Artificial Life techniques and Hybrid Systems combining these technologies. * The IE-digest welcomes postings on the application of these technologies in the following areas. (The list is not exhaustive). Economic Applications: Modelling artificial economies, Forecasting economic time series, modelling behavioural Decision Making, modelling the evolution of economic webs, modelling economic development, modelling structural changes in economies and Artificial Adaptive Agents. Financial Applications: Portfolio Optimisation, Forecasting and modelling Financial Markets, Understanding Financial News, Risk Management, Trading Systems, Credit Evaluation, Bond Rating, Modelling Artificial Traders and Markets, and other related applications. Send administrative requests (additions, deletions to the list, etc.) to: IE-list-request at cs.ucl.ac.uk Send contributions to: IE-list at cs.ucl.ac.uk (For users in the UK, IE-list-request at uk.ac.ucl.cs IE-list at uk.ac.ucl.cs) The archive for papers, software, and back issues can be accessed via anonymous ftp at cs.ucl.ac.uk - the directory name is: ie (128.16.5.31) [The documents are available by FTAM and can be obtained by NIFTP and info-server too.] List Moderator: Suran Goonatilake, Dept. of Computer Science, University College London, Gower St., London WC1E 6BT, UK surang at cs.ucl.ac.uk From wolff at cache.crc.ricoh.com Mon Sep 14 12:09:58 1992 From: wolff at cache.crc.ricoh.com (Gregory J. Wolff) Date: Mon, 14 Sep 92 09:09:58 -0700 Subject: Paper available on Neuroprose: Stork.obs.ps.Z Message-ID: <9209141609.AA09662@styx.crc.ricoh.com> The following paper has been placed on the neuroprose archive as stork.obs.ps.Z and is available via anonymous ftp (from archive.cis.ohio-state.edu in the pub/neuroprose directory). This paper will be presented at NIPS-92. ========================================================================= Second Order Derivatives for Network Pruning: Optimal Brain Surgeon Babak Hassibi and David G. Stork, Ricoh California Research Center ABSTRACT: We investigate the use of information from all second order derivatives of the error function to perform network pruning (i.e., removing unimportant weights from a trained network) in order to improve generalization and increase the speed of further training. Our method, Optimal Brain Surgeon (OBS), is significantly better than magnitude-based methods, which can often remove the wrong weights. OBS also represents a major improvement over other methods, such as Optimal Brain Damage [Le Cun, Denker and Solla, 1990], because ours uses the full off-diagonal information of the Hessian matrix H. Crucial to OBS is a recursion relation for calculating H inverse from training data and structural information of the net. We illustrate OBS on standard benchmark problems: the MONK's problems. The most successful method in a recent competition in machine learning [Thrun et al., 1991] was backpropagation using weight decay, which yielded a network with 58 weights for one MONK's problem. OBS requires only 14 weights for the same performance accuracy. On two other MONK's problems, our method required only 38% and 10% of the weights found by magnitude-based pruning.
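[Editorial aside: the pruning step described in the abstract is easy to state. The following is an illustrative sketch only, not the authors' code; it assumes an inverse Hessian Hinv has already been obtained, e.g. via the recursion the paper describes.]

import numpy as np

def obs_prune_step(w, Hinv):
    # Saliency of weight q: L_q = w_q^2 / (2 [H^-1]_qq)
    saliency = w**2 / (2.0 * np.diag(Hinv))
    q = int(np.argmin(saliency))           # weight whose removal costs least
    # Second-order update of ALL remaining weights, using the full
    # off-diagonal information: dw = -(w_q / [H^-1]_qq) * Hinv e_q
    w = w - (w[q] / Hinv[q, q]) * Hinv[:, q]
    w[q] = 0.0                             # enforce exact removal of weight q
    return w, q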
=========================================================================== Here is an example of how to retrieve this file: gvax> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: neuron at wherever 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get stork.obs.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for stork.obs.ps.Z 226 Transfer complete. 100000 bytes sent in 3.14159 seconds ftp> quit 221 Goodbye. gvax> uncompress stork.obs.ps.Z gvax> lpr stork.obs.ps From mherrmann at informatik.uni-leipzig.dbp.de Mon Sep 14 11:21:48 1992 From: mherrmann at informatik.uni-leipzig.dbp.de (mherrmann@informatik.uni-leipzig.dbp.de) Date: Mon, 14 Sep 1992 17:21:48 +0200 Subject: context sensitivity of representations Message-ID: <920914172146*/S=mherrmann/OU=informatik/PRMD=UNI-LEIPZIG/ADMD=DBP/C=DE/@MHS> Andy Clark (andyc at cogs.sussex.ac.uk) asks: > Must [the representational] process bottom out somewhere in a > set of microfeatures which are genuinely SEMANTIC (genuinely > contentful) but which are NOT prone to contextual infection? I believe that it is possible to build a representational system in which all the components are context-sensitive, and that it *may* be necessary to do so in order to achieve sophisticated behaviour such as analogical reasoning. (This belief is not based on empirical work - so those who don't like speculation on this mailing list will probably want to leave right now.) I want to break up Andy's question into four areas: bottoming out of data structures, bottoming out of meaning, context sensitivity, and semantic primitives. BOTTOMING OUT OF DATA STRUCTURES My interest is in analogical reasoning - a domain that demands sophisticated data structures. So I will assume that we are talking about representational systems capable of implementing complex structures. Traditional symbolic AI systems often have tree-structured representations. Repeated application of a decomposition operator (CAR or CDR for LISP) will eventually bottom out at a leaf of the tree. However, this needn't always be the case: e.g. the underlying structure might be a network. If you wanted a system that reasoned about unobserved parts of instances you might have: a tree representing the component structure of an instance and a semantic net encoding default class information. Application of a decomposition operator to a leaf of the instance tree would cause the leaf to be expanded with a fringe of information resulting from binding instance information into semantic net information. Thus the tree would *appear* to be unbounded. In the connectionist domain, Bruce MacLennan (1991) discussed knowledge representation in infinite-dimensional vector spaces. A decomposition operator is just a function that can be applied to any pattern to yield a component pattern of the same dimensionality as the parent. Repeated decomposition *may* yield nonsense patterns - but, in principle, it should be possible to construct a cyclic structure that never bottoms out, like a semantic net.
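[A discrete caricature of this point, as a sketch; the two-concept 'net' is invented for illustration and is not MacLennan's continuous formalism. Over a cyclic structure, repeated decomposition never reaches a leaf:]

# Toy illustration: a decomposition operator over a cyclic structure
# never bottoms out. Each concept decomposes into other concepts.
net = {"buy": ("give", "money"), "give": ("buy", "gift")}   # cyclic definitions

def decompose(concept):
    # Decomposition operator: return the components of a concept.
    return net.get(concept, (concept, concept))

c = "buy"
for _ in range(4):
    c = decompose(c)[0]
    print(c)            # give, buy, give, buy - no leaf is ever reached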
MacLennan also points out the possibility of multiple decomposition operators (not necessarily restricted to the inverse of a single composition operator). I suspect that systems designed to generate multiply decomposable representations will be very interesting. BOTTOMING OUT OF MEANING Consider a dictionary. The definitions form a network with each word defined in terms of other words. There are no semantic primitives here. Each word is as fuzzily defined as any other, although analysis may reveal that some words are more central than others. The amazing fact is that dictionaries are actually useful - it is possible to gain some understanding of the meaning of a word from a dictionary even though it contains no semantic primitives. But this only works if there is an environment that is understood by the reader and represented by the dictionary. Stevan Harnad (1990), in discussing the symbol-grounding problem, makes the point that it is impossible to learn Chinese from a Chinese-to-Chinese dictionary. Purely formal systems (like maths) are predicated on internal consistency, can be understood with no reference to external environments, and would be nonsense without semantic primitives. Representational systems *can* be based on semantic primitives, as in classical AI, but I suspect that they are crippled when used to represent an environment of greater intrinsic complexity than the complexity of the representation language. Representational systems *can* be built that are not dependent on semantic primitives, as in the case of the dictionary, provided that an external environment is available for grounding of the meaning. CONTEXT SENSITIVITY Andy Clark gives an example of context sensitivity where the pattern representing the to-be-represented object varies depending on the context in which the object occurs. In this case the micro-structure of the representation varies (although this raises the question of how the system 'knows' that the different patterns represent the same thing). It seems to me that there are other types of context-sensitivity. Looking at the outermost perceptual end of the system, there are context sensitivities in the encoding process. For example there are many perceptual adaptation phenomena and simultaneous contrast phenomena. From the point of view of an external observer, the encoding of some external quantity into a neural firing rate is dependent on the environmental context. From the point of view of up-stream processing, a given firing level does not have a constant mapping to the external physical quantity. The process of transduction from the perceptual to cognitive domains is also context dependent. Dave Chalmers et al (1991) argue very strongly for the context sensitivity of the representational process. Their contention is that the information extracted from the perceptual flow *must* depend on the cognitive states (current goals, beliefs etc) of the system. In this case the individual components of the representation are *not necessarily* context dependent, but the overall representational structure that is extracted from the perceptual data must depend on the context.
Similarly, cognitive activities (such as reasoning) might be seen as similar to the perceptual process (interpreting one structure to yield another) and could also be expected to be context-sensitive. Returning to the Chinese dictionary example given earlier, it is obvious that the interpretation of the components is context-sensitive (cf. Wittgenstein on the impossibility of precise definition) but the actual words themselves (representational atoms) are discrete and context-free. Someone with a flair for maths might be able to come up with a proof of the inevitability of context-sensitivity. An organism is constrained to make inferences under conditions of extreme uncertainty: it must respond to an environment that is more complex than can be represented exactly, it operates under resource constraints of computational power and response time, and the number of observations available is far too small to uniquely constrain a mental model. Under such conditions the best strategy may well be to allow underconstrained mental models, with the low-bandwidth input being used as a source of confirmation of model predictions rather than being directly transduced into the model. SEMANTIC PRIMITIVES Andy couches his explanation of context-sensitivity in terms of microfeatures. Conceiving of representations in this way makes it difficult to see how representations can be based on anything other than context-free atoms, because microfeatures *are* context-free atoms (if we ignore the low-level perceptual context-sensitivity mentioned above). The term 'microfeatures' conjures up associations of grandmother cells and hand-coded representations of the type argued against by Chalmers et al. It should be obvious from my earlier comments that I don't think semantic primitives are necessary for grounded systems. This raises the question of how the system could be grounded. The prime requirement is that the environmental input can be predicted (or at least checked for consistency) from the representation. This obviously doesn't *necessarily* require direct representation of the environmental patterns in the cognitive structure - only that such patterns can be generated or checked at the point of transduction. Many symbolic AI representations are based on the notion of objects (and I believe that to be a useful abstraction), and while there may be objects in the real world, each individual perceptual input says next to nothing about objects. That is, objects are an internal construction, not a simple re-coding of the perceptual input. Another possibility is that the representations are based on system dynamics rather than direct encoding of an 'external reality'. The usual view is to see representations as passive objects (like computer data structures) that are acted upon - but it is also possible to see representations as active entities that transform other representations (a little akin to procedural representations in AI). Janet Wiles et al (1991) have argued that activation patterns in recurrent networks can be seen dually as static representations and as dynamic operators. The trick with this approach is that the representations must be functional and cannot be arbitrary - so the hard part is to learn/construct representations so that the dynamic effect when the representation is applied as an operator has the correct semantics as confirmed by the environment. The same argument can be applied to perceptual inputs.
The usual conception of perceptual processing has a direct signal path from lower to higher levels, with the signal being recoded along the way. A more indirect, context-sensitive, interpretive approach would view the main signal flow as being confined within the higher levels, and view the perceptual input as modulating the processing parameters of the high-level signal transformations rather than directly inserting signals into that flow. Confirmation of the accuracy of modelling the environment could be achieved by the perceptual signals modulating the parameters of a cost function on the cognitive representation, rather than entering directly into the cost function. SUMMARY Data structures don't have to bottom out. Meaning doesn't have to bottom out in semantic primitives. You do need an environment to ground the symbols. There are different types of context-sensitivity. Everything *should* be context-sensitive if you want to build a sophisticated system. The representation doesn't have to *directly* represent the environment. REFERENCES Chalmers, D.J., French, R.M., & Hofstadter, D.R. (1991). High-level perception, representation and analogy: A critique of artificial intelligence methodology. Indiana University, Bloomington, CRCC Technical Report 49. Harnad, S. (1990). The symbol grounding problem. Physica D, 42, 335-346. MacLennan, B. (1991). Continuous symbol systems: The logic of connectionism. University of Tennessee, Knoxville, Department of Computer Science Technical Report CS-91-145. Wiles, J., Stewart, J.E.M., & Bloesch, A. (1991). Patterns of activations are operators in recurrent networks. Proceedings of the 2nd Australian Conference on Neural Networks, 44-48. ---------------- I will let you know when I get this to work. Don't hold your breath. Ross Gayler ross at psych.psy.uq.oz.au From kak at max.ee.lsu.edu Tue Sep 15 11:23:24 1992 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Tue, 15 Sep 92 10:23:24 CDT Subject: No subject Message-ID: <9209151523.AA11940@max.ee.lsu.edu> ---------------------- Papers for the Sessions on Neural Networks at FT&T [First International Conference on Fuzzy Theory & Technology, October 14-18, 1992, Durham, NC] General Chair: Professor Paul P. Wang, Dept of Electrical Engrg, Duke University, Durham, NC 27706 ---------------------- -------- Session 1: October 15, 1992, 2:15 PM - 3:55 PM Chairman: Professor W.A. Porter, Univ of Alabama at Huntsville H. Kim, University of Missouri-Rolla, Designing of Reliable Feedforward Neural Networks Based On Fault-Tolerant Neurons. W.A. Porter, C. Bowden, W. Liu, University of Alabama at Huntsville and U.S. Army Missile Command, Alphabet Character Recognition with a Generalizing Neural Network. V. Kurkova, P.C. Kainen, Czechoslovak Academy of Sciences and Industrial Math, Univ of Maryland, Fuzzy Orthogonal Dimension and Error-Correcting Classification by Perceptron Type Networks. G. Georgiou, California State University, San Bernardino, Activation Functions for Neural Networks in the Complex Domain. S.C. Kak, LSU, A New Learning Algorithm for Feedforward Neural Networks. -------------- Session 2: October 16, 1992, 9:45 AM - 11:30 AM Chairman: Professor George Georgiou, California State University, San Bernardino S. Saha and J.P. Christensen, LSU, Genetic Design of Sparse Neural Networks. H.L. Hiew and C.P. Tsang, Univ of Western Australia, An Adaptive Fuzzy System for Modelling Chaos. F. Lin and K. Lee, Santa Clara University and Cirrus Logic, A Parallel Computation Network for the Maximum Clique Problem. S. Sivasubramaniam, Acutec, Ft. Lauderdale, A Feature Extraction Heuristic for Neural Networks.
W.A. Porter, S.X. Zheng, and W. Liu, Univ of Alabama at Huntsville, A Neural Controller for Discrete Plants with Unknown Noise. C. Cramer, LSU, Pruning Hidden Neurons in the Kak Algorithm. From ajr at eng.cam.ac.uk Tue Sep 15 15:42:51 1992 From: ajr at eng.cam.ac.uk (Tony Robinson) Date: Tue, 15 Sep 92 15:42:51 BST Subject: The sigmoid is the posterior distribution from Gaussian likelihoods Message-ID: <14990.9209151442@dsl.eng.cam.ac.uk> The subject line says it all. Given N classes, each of which has a Gaussian distribution in the input space (with common covariance matrix), it is reasonably well known that the discriminant function is a hyperplane (e.g. Kohonen's book, section 7.2). But what I didn't know until a month ago is that if you calculate the posterior probabilities using Bayes rule from the Gaussian likelihoods, then you end up with a weighted sum computation and the Potts/softmax activation function for N classes, or the sigmoid for the two class case. This is exactly the same function as computed in the last layer of a multi-layer perceptron used for classification. One nice corollary is that the "bias" weight contains the log of the prior for the class, and so may be adjusted to compensate for different training/testing environments. Another is that provided the data near the class boundary can accurately be modelled as Gaussian, the sigmoid gives a good estimate of the posterior probabilities. From this viewpoint, the function of the lower levels of a multi-layer perceptron is to generate Gaussian distributions with identical covariance matrices. Feedback from veterans of the field has been "yes, of course I knew that", but in case this is new to you like it was to me, I have written it up as part of a tutorial paper which is available from the anonymous ftp site svr-ftp.cam.eng.ac.uk as file reports/robinson_cnnss92.ps.Z. The same directory carries an INDEX file detailing other reports which may be of interest. Tony [Robinson] From rohwerrj at cs.aston.ac.uk Tue Sep 15 18:05:18 1992 From: rohwerrj at cs.aston.ac.uk (rohwerrj) Date: Tue, 15 Sep 92 18:05:18 BST Subject: studentships available Message-ID: <4383.9209151705@cs.aston.ac.uk> ***************************************************************************** PhD STUDENTSHIPS AVAILABLE in NEURAL NETWORKS Dept. of Computer Science and Applied Mathematics Aston University ***************************************************************************** Funding has unexpectedly become available at the last minute for 1 or possibly 2 PhD studentships in the Neural Networks group at Aston University. Ideally the students would enroll in October 1992. The group currently consists of Professor David Bounds, lecturers Richard Rohwer and Alan Harget, and 7 PhD students. Current research projects are drawn from Genetic Algorithms and Artificial Life, as well as main-line neural network subjects such as local basis function techniques and training algorithm research, with an emphasis on recurrent networks. For further information please contact me at the address below. Richard Rohwer Dept. of Computer Science and Applied Mathematics Aston University Aston Triangle Birmingham B4 7ET ENGLAND Tel: (44 or 0) (21) 359-3611 x4688 (failing that, leave message at x4243) FAX: (44 or 0) (21) 333-6215 rohwerrj at uk.ac.aston.cs <-- email communication preferred.
From rohwerrj at cs.aston.ac.uk Tue Sep 15 18:03:13 1992 From: rohwerrj at cs.aston.ac.uk (rohwerrj) Date: Tue, 15 Sep 92 18:03:13 BST Subject: Senior Academic Post Available Message-ID: <4379.9209151703@cs.aston.ac.uk> ************************************************************************** Senior Academic Post Available Dept. of Computer Science and Applied Mathematics Aston University ************************************************************************** The Aston University Department of Computer Science and Applied Mathematics is building a research group in neural networks, genetic algorithms and related subjects. The group, led by the department chairman Professor David Bounds and lecturers Richard Rohwer and Alan Harget, currently has 7 PhD students. The department is seeking a new senior faculty member, preferably at Reader or Professorial level, to augment this group. The candidate must have proven skills as a research leader. The appointee will also be involved in some teaching and fundraising, and will be expected to actively build upon Aston's close relationship with industry. There is no prescribed timetable for filling this post. The Department has substantial computing resources, including a Sequent Symmetry and 2 large Sun networks. Space has been set aside for expansion. Aston University is in Birmingham, a convenient central England location with easy access to the rest of England and Wales. Inquiries should be directed to: Professor David Bounds CSAM Aston University Aston Triangle Birmingham B4 7ET ENGLAND (44 or 0) (21) 359-3611 x4243 From hinton at ai.toronto.edu Wed Sep 16 11:03:53 1992 From: hinton at ai.toronto.edu (Geoffrey Hinton) Date: Wed, 16 Sep 1992 11:03:53 -0400 Subject: The sigmoid is the posterior distribution from Gaussian likelihoods In-Reply-To: Your message of Tue, 15 Sep 92 10:42:51 -0400. Message-ID: <92Sep16.110402edt.441@neuron.ai.toronto.edu> This result (for a single output unit) was published by Hinton and Nowlan in 1990 in Neural Computation (Vol 2, page 359). But it is unlikely that this was the first publication. Geoff From ajr at eng.cam.ac.uk Wed Sep 16 16:31:38 1992 From: ajr at eng.cam.ac.uk (Tony Robinson) Date: Wed, 16 Sep 92 16:31:38 BST Subject: The sigmoid is the posterior distribution from Gaussian likelihoods Message-ID: <27646.9209161531@dsl.eng.cam.ac.uk> Geoff Hinton writes: >This result (for a single output unit) was published by Hinton and Nowlan in >1990 in Neural Computation (Vol 2 page 359). But it is unlikely that this was >the first publication. Indeed. There is a pretty good analysis in Duda and Hart, "Pattern Classification and Scene Analysis", Wiley Interscience (1973). I'm reliably informed that the forthcoming second edition will be even better. Also, I made a mistake in my original posting: the ftp address is really svr-ftp.eng.cam.ac.uk, sorry about that. Tony [Robinson] From jaap.murre at mrc-apu.cam.ac.uk Tue Sep 15 17:45:08 1992 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Tue, 15 Sep 92 17:45:08 BST Subject: Request references on volume and connectivity Message-ID: <11996.9209151645@sirius.mrc-apu.cam.ac.uk> Request for references I have recently been working on some theories on the implementation of neural networks. For example, it can be shown that a fully connected brain would measure over 10x10x10 meters. I am also interested in the volume of randomly connected and modular neural networks.
I am not sure whether I am working within the correct framework at the moment, so I would like to verify my results. Does anyone know of similar work? The only reference I have come across is by Nelson and Bower (1990) in TINS. They calculated that a fully connected brain with neurons on the surface of a sphere would have a 10 km radius! This is clearly not a useful framework for approximation, as it leads to gross overestimates. Thanks, Jaap Murre Jacob M.J. Murre MRC Applied Psychology Unit 15 Chaucer Road Cambridge CB2 2EF United Kingdom From robtag at udsab.dia.unisa.it Thu Sep 17 13:06:03 1992 From: robtag at udsab.dia.unisa.it (Tagliaferri Roberto) Date: Thu, 17 Sep 92 19:06:03 +0200 Subject: Postdoctoral Fellowship Message-ID: <9209171706.AA14953@udsab.dia.unisa.it> Human Capital and Mobility Programme of the Commission of the European Communities Postdoctoral Fellowship ============================================================== Applications are invited for 3 EC-funded post-doctoral fellowships with the INFM (Italian Institute of Matter Physics) Section of Salerno University, Italia. The duration of each fellowship is 12 months. Remuneration will be at a rate of 3,644 ECU/month (this covers subsistence, tax, social insurance, etc.). The fellowships are open to EC citizens other than citizens of Italy. The research topics are: (1) Electronic states in normal and superconducting systems with strong correlations (2) Neural Networks for Signal Processing (3) Superconductivity at high critical temperature Interested candidates should send a letter of application, a CV, and a list of their publications to: Dr. Alfonso Romano Dept. Fisica Teorica Univ. Salerno I-84081 Baronissi (SA) Italia E-mail alforom at salerno.infn.it fax +39 89 822275 Since the closing date for receipt of applications is September 24, candidates are encouraged to send their applications either by e-mail or FAX. ===================================================================== A more detailed description of the research activities (in LaTeX) follows: \magnification=1200 \baselineskip 15pt \tolerance 10000 \hsize 150truemm \vsize 220truemm \nopagenumbers \noindent {\bf 1) Supervisor: Prof.$\,$M.Marinaro} \vskip 1truecm \noindent Prof.$\,$M.Marinaro offers to supervise one postdoctoral fellow for a period of 12 months. The activity will be theoretical and based on the use of Quantum Field Theory methods applied to Condensed Matter Physics. \noindent The fellow will work in a group of 4 experienced people, who collaborate with other scientists, such as Prof.$\,$H.Matsumoto (Sendai, Japan), Prof.$\,$R.Micnas (Poznan, Poland), Prof.$\,$G.Iadonisi (Naples, Italy). \noindent The activity proposed to the prospective fellow is the following: \vskip 0.2truecm \noindent {\bf Name of activity} \vskip 0.2truecm \item {} Electronic states in normal and superconducting systems with strong correlations \vskip 0.2truecm \noindent {\bf Objectives of activity} \item {} The physical properties of systems with strong electronic correlations have been so far studied within the framework of the periodic Anderson model, by means of a perturbative expansion in the kinetic term of the conduction electrons. Special attention has been devoted to the structure of the electronic density of states and to the study of the singlet and triplet superconducting solutions generated by the inclusion of an attractive off-site interaction between correlated electrons.
\item {} The continuation of this kind of analysis and the application of similar techniques to other correlated electron models, such as those used in the theory of high-$T_c$ superconductors, represents the research activity planned for the near future. \vskip 0.2truecm \noindent See, for example: \noindent M.Marinaro, C.Noce and A.Romano, J. Phys.: Cond. Matt. {\bf 3}, 3719 (1991); Il Nuovo Cimento D, in press (September or October 1992 issue) %\picture 1 5 {} \vfill\eject \noindent {\bf 2) Supervisor: Prof.$\,$E.R.Caianiello} \vskip 1truecm \noindent Prof.$\,$E.R.Caianiello offers to supervise one postdoctoral fellow for a period of 12 months. The activity will be mainly experimental and based on the use of Neural Networks for adaptive signal processing and feature extraction. \noindent The fellow will work in a group of 5 experienced people, who collaborate with other scientists from the Universities of Rome and Pavia, IRST of Trento, and MIT of Boston. \noindent The activity proposed to the prospective fellow is the following: \vskip 0.2truecm \noindent {\bf Name of activity} \vskip 0.2truecm \item {} Neural Networks for Signal Processing \vskip 0.2truecm \noindent {\bf Objectives of activity} \item {} Study of learning in neural networks to obtain the best performance in complex hybrid systems for signal processing. The nets are used in the phases of filtering, feature extraction and classification, either for speech processing or for 2D pattern recognition. \vfill\eject \noindent {\bf 3) Supervisor: Prof.$\,$F.Mancini} \vskip 1truecm \noindent Prof.$\,$F.Mancini offers to supervise one postdoctoral fellow for a period of 12 months. The activity will be theoretical and based on the use of Quantum Field Theory techniques applied to Condensed Matter Physics. \noindent The research project is a part of a common program between the Institute of Materials Research at Tohoku University, Sendai (Japan) and the Department of Theoretical Physics at the University of Salerno. \noindent The activity proposed to the prospective fellow is the following: \vskip 0.2truecm \noindent {\bf Name of activity} \vskip 0.2truecm \item {} Superconductivity at high critical temperature \vskip 0.2truecm \noindent {\bf Objectives of activity} \item {} The phenomenon of superconductivity in the new materials which exhibit a high critical temperature is still not well understood from a theoretical point of view. By making use of the p-d model, our purpose is to investigate the fundamental mechanism which induces superconductivity in the new superconductor oxides. At the first stage, the theoretical effort is concentrated on understanding the electronic structure realized in proximity of the metal-insulator transition. \bye From shawn at helmholtz.sdsc.edu Thu Sep 17 13:08:42 1992 From: shawn at helmholtz.sdsc.edu (Shawn Lockery) Date: Thu, 17 Sep 92 10:08:42 PDT Subject: No subject Message-ID: <9209171708.AA25081@helmholtz.sdsc.edu> POSTDOCTORAL POSITION INSTITUTE OF NEUROSCIENCE UNIVERSITY OF OREGON I am looking for an electrophysiologist experienced in intracellular and voltage-clamp recording with an interest in distributed processing and network modeling. Projects include identification of interneurons, measurement of synaptic transfer functions, measurement of parameters for compartmental models of identified neurons, and compartmental and neural network modeling. Please send letter and CV via email. Shawn R. Lockery
Present address: CNL Salk Institute Box 85800 San Diego, CA 92186-5800 shawn at helmholtz.sdsc.edu fax: (619) 587-0417 GENERAL DESCRIPTION OF THE RESEARCH INTERESTS Research in the Lockery lab investigates the distributed processing of sensory information in well-defined invertebrate networks. Distributed representations occur in a great many neural systems, but how they are integrated in the production of behavior is poorly understood. This problem is addressed by analyzing the neural basis of behavior and learning in two relatively simple distributed processing behaviors: the local bending reflex of the leech and the chemotactic response of the nematode C. elegans. Composed of a small number of repeatably identifiable sensory, motor, and interneurons, the local bending reflex computes a sensory-motor input-output function using a population of interneurons, each with many sensory inputs and motor outputs. Lockery and co-workers record this input-output function intracellularly and use the recordings as input to neural network training algorithms such as backpropagation to adjust synaptic connections in models of the reflex. The models predict as-yet-undiscovered interneurons and possible sites of synaptic plasticity underlying nonassociative conditioning. These predictions are tested in physiological experiments to measure the connections of identified interneurons in normal and conditioned animals. Previous anatomical studies have described the complete wiring diagram of the nervous system of C. elegans. The anatomy shows that interneurons receive input from several chemosensory neurons with differing chemical sensitivities and have outputs to many different motor neurons. To understand how the network controlling chemotaxis operates, we train models of the anatomically defined circuitry to reproduce observed chemotactic behavior. The models are constrained by parameters that can be measured physiologically and predict the results of experiments in which particular neurons are ablated in the behaving animal. From edelman at wisdom.weizmann.ac.il Fri Sep 18 02:11:26 1992 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Fri, 18 Sep 92 08:11:26 +0200 Subject: Request references on volume and connectivity In-Reply-To: Jaap Murre's message of Tue, 15 Sep 92 17:45:08 BST <11996.9209151645@sirius.mrc-apu.cam.ac.uk> Message-ID: <9209180611.AA12293@white.wisdom.weizmann.ac.il> In a recent issue of TINS Mitchison discussed some related questions: @article{Mitchison92, author="G. Mitchison", title="Axonal trees and cortical architecture", journal="Trends in Neurosciences", volume="15", pages="122-126", year="1992" } There is also a relevant paper by Young, on the pattern of area interconnection in primate visual cortex: @article{Young92, author="M. P. Young",
Young", title="Objective analysis of the topological organization of the primate cortical visual system", journal="Nature", volume="358", pages="152-155", year="1992" } -Shimon From anshu at lexington.rutgers.edu Fri Sep 18 15:31:59 1992 From: anshu at lexington.rutgers.edu (anshu@lexington.rutgers.edu) Date: Fri, 18 Sep 92 15:31:59 EDT Subject: Neural Network Workshop Message-ID: <9209181931.AA24968@lexington.rutgers.edu> CAIP Center, Rutgers University & FAA announces IInd NEURAL NETWORK WORKSHOP presenting * The state of the art in Neural Network theory and applications * With some of the most eminent people in the field including two Nobel laureates and a Field's Medal winner (Attendance is limited and on a first-come-first basis) NEURAL NETWORK WORKSHOP Richard Mammone, Chairman Sponsored by FAA Technical Center Hosted by the Center for Computer Aids for Industrial Productivity (CAIP) TENTATIVE PROGRAM TUESDAY - THURSDAY 27 - 29 OCTOBER, 1992 _____________________________________ THE STATE UNIVERSITY OF NEW JERSEY RUTGERS ______________________________________ Center for Computer Aids for Industrial Productivity (CAIP) Frelinghuysen Road - P.O. Box 1390 - Piscataway - New Jersey 08855-1390 Tel: 908/932-4208 - FAX: 908/932-4775 A New Jersey Commission on Science and Technology Center Tuesday, 27 October 1992 ************************** 8:30 a.m. _____________________Registration; Coffee____________________ 8:45 a.m. Opening Remarks Leo T. Powell, FAA Technical Center Richard Mammone - Workshop Chairman,Rutgers University 8: 55 a.m. Neural Networks for Speech Processing and Language Session Chairman, Allen Gorin, - AT&T Bell Laboratories 9:00 a.m. Neural Networks in the Acquisition of Speech by Machine Frank Fallside, Cambridge University, U.K. 9:30 a.m. The Nervous System: Fantasy and Reality Nelson Kiang - Massachusetts Eye and Ear 10:10 a.m. ________________________Coffee Break________________________ 10:30 a.m. Processing of Speech Segments in the Auditory Periphery Oded Ghitza - AT&T Bell Labs 10:50 a.m. Is There a Role for Neural Networks in Speech Recognition? John Bridle - Dragon 11:10 a.m. Some Relationships Between Artificial Neural Nets and Hidden Markov Models Arthur Nadas - IBM T. J. Watson Research Center 11:30 p.m. _____________________________Lunch_______________________ 1:30 p.m. The Neuropsychology of Word Reading: A Connectionist Approach David Plaut - Carnegie Mellon University 1:50 p.m. States Versus Stacks: Representing Grammatical Structure in a Recurrent Neural Network Jeffrey Elman - UCSD 2:10 p.m. Connections and Associations in Language Acquisition Naftali Tishby - Hebrew University, Israel 2:30 p.m. Recurrent Neural Networks and Sequential Machines Lee Giles - NEC 2:50 p.m. _________________________Coffee Break_______________________ 3:10 p.m. A Self-Learning Neural Tree Network for Phoneme Classification Mazin Rahim - CAIP Center, Rutgers University 3:30 p.m. Decision Feedback Learning of Neural Networks Fred Juang - AT&T Bell Laboratories 3:50 p.m. An Experiment in Spoken Language Acquisition Allen Gorin, Session Chairman - AT&T Bell Laboratories 4:10 p.m. Visual Focus of Attention in Language Acquisition Ananth Sankar - AT&T Bell Laboratories 4:30 p.m. Integrating Segmental Neural Nets with Hidden Markov Models for Continuous Speech Recognition John Makhoul, George Zaualiagkos, Richard Schwartz, Steve Austin - BBN Systems and Technologies, Cambridge, MA 4:50 p.m. 
Panel Discussion - The Future of Neural Nets for Speech Processing Steve Levinson, Chairman; John Makhoul, Ester Levine, Naftali Tishby, John Bridle 5:40 p.m. Decision Making Using Conventional Calculations Versus Neural Nets for Advanced Explosive Detection Systems Thomas Miller - Tensor Tech. Assoc. 6:00 p.m. _____________________________Dinner________________________ 7:30 p.m. Break-Out Groups Room 1: What Are the Most Successful Applications of Neural Networks? Chris Scofield (Chairman), Philip Gouin, Larry Jackel, Eric Schwartz, Ed DeRouin Room 2: What Theoretical Contributions Have Neural Network Researchers Made? Eduardo Sontag (Chairman), Georg Schnitzer, Fred Girosi, S. Venkatesh, Steven Judd, Jeff Vitter, Wolfgang Maass, Charles Fefferman, Kurt Hornik Room 3: What Is the Impact of Government Support on the Development of Networks? Wagih Makky (Chairman), Shiu Cheung, Richard Ricart, John Cozzens, Steve Suddarth Wednesday, 28 October 1992 **************************** 8:55 a.m. Neural Network Applications in Vision Session Chairman, Chris Scofield - Nestor 9:00 a.m. Integrated Segmentation and Recognition of Handprinted Characters James Keeler - MCC 9:20 a.m. Neural Net Image Analysis for Postal Applications: From Locating Address Blocks to Determining Zip Codes Larry Jackel - AT&T Bell Laboratories 9:40 a.m. Space Invariant Active Vision Eric Schwartz - Brain Research 10:00 a.m. _________________________Coffee Break_______________________ 10:30 a.m. Engineering Document Processing with Neural Networks Philip Gouin - Nestor, Inc. 10:50 a.m. Goal-Oriented Training of Neural Networks Ed DeRouin - Thought Processes, Inc. 11:10 a.m. Hybrid Neural Networks and Image Restoration K.V. Prasad - CalTech 11:30 a.m. Neural Networks for Vision Session K.V. Prasad, Session Chairman - CalTech 11:50 a.m. A Discrete Radon Transform Method for Invariant Image Analysis Using Artificial Neural Networks John Doherty - Iowa State University 12:10 p.m. _____________________________Lunch________________________ 1:30 p.m. (Title to be announced) Leon Cooper - Brown University 1:50 p.m. Dynamic Systems and Perception Alexander Pentland - Massachusetts Institute of Technology 2:00 p.m. Deterministic Annealing for Optimization Alan Yuille - Harvard University 2:10 p.m. Neural Networks in Vision Yehoshua Zeevi - Technion, Israel 2:30 p.m. A Neural Chip Set for Supervised Learning and CAM Josh Alspector - Bellcore 2:50 p.m. Cortical Dynamics of Feature Binding & Reset: Control of Visual Persistence Ennio Mingolla, Gregory Francis, Stephen Grossberg 3:10 p.m. _________________________Coffee Break_______________________ 3:30 p.m. Face Recognition Using an NTN Joseph Wilder - CAIP 3:50 p.m. Bounds for the Computational Power and Learning Complexity of Analog Neural Nets Wolfgang Maass - Graz, Austria 4:10 p.m. Computational Issues in Neural Networks George Cybenko - Dartmouth College 4:30 p.m. Title to be announced Kurt Hornik - Wien University, Austria 4:50 p.m. Technical Discussions 6:00 p.m. Dinner and Celebration in Honor of Jim Flanagan for Receiving The Marconi International Fellowship Award Thursday, 29 October 1992 *************************** 8:45 a.m. Recurrent Network Sessions Session Chairman, Richard Ricart - Booz Allen 8:50 a.m. To be announced S. Y. Kung - Princeton 9:10 a.m. Comparison of Feedforward and Recurrent Sensitivity Gary Kuhn - Siemens 9:30 a.m. Short Term Memory Mechanisms for Recurrent Neural Networks Bert DeVries, John Pearson - David Sarnoff Research Center 9:50 a.m.
Recurrent Neural Networks for Speaker Recognition Richard Ricart - Booz Allen 10:10 a.m. Processing of Complex Stimuli in the Mammalian Cochlear Nucleus Eric Young - Johns Hopkins 10:30 a.m. _________________________Coffee Break_______________________ 10:50 a.m. Applications of Neural Networks Session Chairman, Richard Mammone - Rutgers University 11:10 a.m. Neural Networks for the Detection of Plastic Explosives in Airline Baggage Richard Mammone 11:30 a.m. Non-Literal Transfer of Information Among Inductive Learners Lorien Pratt - Colorado School of Mines 12:00 p.m. _____________________________Lunch________________________ 1:30 p.m. Neural Networks for Identification and Control of Nonlinear Systems Eduardo Sontag - Rutgers University 1:50 p.m. Using Neural Networks to Identify DNA Sequences Mick Noordeweir - Rutgers University 2:10 p.m. Large Scale Holographic Optical Neural Network for Data Fusion and Signal Processing Taiwei Lu - Physical Optics Corp. 2:30 p.m. A Biologically Based Synthetic Nervous System for a Real World Device George Reeke, Jr., Gerald Edelman - The Neurosciences Institute 2:50 p.m. Title to be announced Shigeru Katagiri - ATR, Japan 3:10 p.m. "Learning by Learning" in Neural Networks Devang Naik - Rutgers University 3:30 p.m. Relabeling Methods of Learning Wen Wu - CAIP 3:50 p.m. Long Term Memory for Neural Networks Anshu Agarwal - Rutgers University 4:10 p.m. Wavelet Neural Networks Toufic Boubez - Rutgers University 4:30 p.m. End of Workshop ---------*----*----*-------- ------------------------------------------------------------------------------- ------------------------------------------------------------------------------ NEURAL NETWORK WORKSHOP 27-29 October, 1992 |--------------------------------------------------------------------| | WORKSHOP REGISTRATION FORM | | | | YES! I want to attend the Neural Network Workshop, October 27-29, | | 1992. I understand my registration fee includes all sessions, | | dinners, refreshment breaks, reception and working materials. | | | | Name ___________________________________________________________ | | | | Company ________________________________________________________ | | | | Address ________________________________________________________ | | | | City/State/Zip _________________________________________________ | | | | Telephone No. __________________________________________________ | | | |--------------------------------------------------------------------| REGISTRATION IS LIMITED! APPLICATIONS WILL ONLY BE CONSIDERED WHEN ACCOMPANIED BY PAYMENT. MAKE CHECKS PAYABLE TO THE CAIP CENTER, RUTGERS UNIVERSITY. Registration: Non-member fee ($395) $____________ Member fee for participants from CAIP member organizations ($295) $____________ EARLY REGISTRATION IS ADVISED! Mail form & payment to: CAIP Center, Rutgers Univ, 7th floor, CoRE Bldg., PO Box 1390, Piscataway, NJ 08855. ........................................................................... |--------------------------------------------------------------------| | HOTEL REGISTRATION FORM | | | | Name ___________________________________________________________ | | | | Company ________________________________________________________ | | | | Address ________________________________________________________ | | | | Daytime Phone No. ______________________________________________ | | | | A block of rooms for this conference has been reserved at a special| | University room rate of $81 per single/double room per night.
| | Hotel Reservations will be made through the CAIP Center. | | ------------------------------------------------------- | | I will require room(s): | | Monday, October 26 ( ) | | Tuesday, October 27 ( ) | | Wednesday, October 28 ( ) | | Thursday, October 29 ( ) | |--------------------------------------------------------------------| ------------------------------------------------------------------------------- From giles at research.nj.nec.com Fri Sep 18 17:55:07 1992 From: giles at research.nj.nec.com (Lee Giles) Date: Fri, 18 Sep 92 17:55:07 EDT Subject: Research Position Message-ID: <9209182155.AA05466@fuzzy> POSITION: RESEARCH ASSOCIATE The NEC Research Institute in Princeton, NJ has an immediate opening for a RESEARCH ASSOCIATE in the area of neural networks/connectionism and dynamics/control. Research is currently underway to better understand dynamic neural networks and their computational capabilities. Towards this end, we are looking for a research associate who will contribute to this research effort and work closely with the research group. The successful candidate must have experience in basic research and be able to effectively communicate research results. He or she should have experience in using computer simulations, preferably in the area of artificial neural networks. In addition, his or her background should include extensive experience in programming in the UNIX/C environment (nearly all work is performed on Silicon Graphics workstations). Tasks in this area will also involve code maintenance, modification and enhancement as required by the research program. Interested applicants should send their resumes by mail, fax or email with 2 references to: Dr. C. Lee Giles NEC Research Institute 4 Independence Way Princeton, NJ 08540 Phone: 609-951-2642 FAX: 609-951-2482 email: giles at research.nj.nec.com Applicants must show DOCUMENTATION OF ELIGIBILITY FOR EMPLOYMENT. NEC is an equal opportunity employer: M/F/H/V. C. Lee Giles NEC Research Institute 4 Independence Way Princeton, NJ 08540 USA Internet: giles at research.nj.nec.com UUCP: princeton!nec!giles PHONE: (609) 951-2642 FAX: (609) 951-2482 From B344DSL at UTARLG.UTA.EDU Sun Sep 20 21:40:00 1992 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Sun, 20 Sep 1992 20:40 CDT Subject: The sigmoid is the posterior distribution from Gaussian likelihoods Message-ID: <01GP0RBZFW6G000O13@utarlg.uta.edu> I think there was some remark about the sigmoid as the distribution function arising from a Gaussian density in Grossberg's paper in Studies in Applied Math, 1973, and/or in one of that paper's two direct sequels: by Ellias and Grossberg in Biological Cybernetics 1975, and Grossberg and Levine in Journal of Theoretical Biology 1975. Dan Levine From ATGOS at ASUVM.INRE.ASU.EDU Fri Sep 18 17:50:01 1992 From: ATGOS at ASUVM.INRE.ASU.EDU (Arizona State U.) Date: Fri, 18 Sep 92 14:50:01 MST Subject: Job Opening Message-ID: JOB ANNOUNCEMENT Experimental Psychologist Arizona State University is recruiting an associate or assistant professor in experimental psychology. The successful candidate must have a Ph.D. in psychology and a strong publication record. Specialization in any area of cognitive or experimental psychology is acceptable, including cognitive development. Special consideration will be given to candidates whose research applies to connectionist/adaptive dynamical systems/neural modeling and biomedical issues (broadly conceived). The position will begin in August 1993.
Send vita, reprints, and three letters of reference to Dr. Guy Van Orden, Experimental Psychology Search Committee, Department of Psychology, Arizona State University, Tempe AZ 85287-1104. The deadline for application is December 1, 1992; applications will be reviewed every two weeks thereafter until the position is filled. ASU is an equal opportunity and affirmative action employer. From rsun at athos.cs.ua.edu Mon Sep 21 12:34:25 1992 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Mon, 21 Sep 1992 11:34:25 -0500 Subject: No subject Message-ID: <9209211634.AA12933@athos.cs.ua.edu> CALL FOR PAPERS ARCHITECTURES FOR INTEGRATING NEURAL AND SYMBOLIC PROCESSES A Special Issue of Connection Science: a journal of AI, cognitive science and neurocomputing Although there has been a great deal of research in integrating neural and symbolic processes, from a cognitive and/or an applications viewpoint, there has been relatively little effort in comparing, categorizing and combining these fairly isolated approaches, especially from a cognitive perspective. This special issue is intended to address the cognitive architectural aspects of this integration: the issue will bring together various architectural approaches as well as focus on specific architectures that solve particular problems, that exhibit cognitive plausibility, that yield new insights, and that show potential for scaling up. Papers are expected to address the following questions, but are not limited to such questions: * What have we achieved so far by integrating neural and symbolic processes? * What are the relative advantages/disadvantages of each approach? * How cognitively plausible is each proposed approach? * Is there any commonality among various architectural approaches? Should we try to synthesize existing approaches? How do we synthesize these approaches? (Does there exist a generic and uniquely correct cognitive architecture?) * What are the problems, difficulties and outstanding issues in integrating neural and symbolic processes? * How do symbolic representation and connectionist learning schemes interact in integrated systems? The papers can be either theoretical or experimental in scope, and can comment on the current state of affairs and address what advances are necessary so that continued progress can be made. However, prospective authors should emphasize the principles involved, along with an explanation of why the particular model works or does not work, and what it is we can learn from the model. For example, does the model predict some testable behavior which can lead to new insights? All papers will be rigorously refereed, and should conform to the following rules, in addition to the usual requirements of the journal. Authors must submit five (5) printed copies of their papers to either of the addresses listed below by January 5, 1993. Notification of receipt will be electronically mailed to the first author (or designated author) soon after receipt. Notification of acceptance or rejection of submitted papers will be mailed to the first author (or designated author) by March 31, 1993. Final versions of accepted papers will be due May 28, 1993. All 5 copies of a submitted paper must be clearly legible. Neither computer files nor fax submissions are acceptable. Submissions must be printed on 8 1/2 in. x 11 in. or A4 paper using 12 point type (10 characters per inch for typewriters).
Each copy of the paper must have a title page (separate from the body of the paper) containing the title of the paper, the names and addresses of all authors, including e-mail addresses, and a short (less than 200 word) abstract. Review Criteria [Significance:] How important is the work reported? Does it attack an important/difficult problem or a peripheral/simple one? Does the approach offered advance the state of the art? [Originality:] Has this or similar work been previously reported? Are the problems and approaches new? Is this a novel combination of familiar techniques? Does the paper point out differences from related research? Is it re-inventing the wheel using new terminology? [Quality:] Is the paper technically sound? Does it carefully evaluate the strengths and limitations of its contribution? How are its claims backed up? [Clarity:] Is the paper clearly written? Does it motivate the research? Does the paper properly situate itself with respect to previous work? Are the results described and evaluated? Is the paper organized in a logical fashion? Submissions should be delivered to one of the following addresses: Dr. Lawrence Bookman Prof. Ron Sun Sun Microsystems Laboratories Department of Computer Science Two Federal Street The University of Alabama Billerica MA 01821, USA Tuscaloosa, AL 35487, USA Net: lbookman at east.sun.com Net: rsun at athos.cs.ua.edu From jaap.murre at mrc-apu.cam.ac.uk Tue Sep 22 12:21:36 1992 From: jaap.murre at mrc-apu.cam.ac.uk (Jaap Murre) Date: Tue, 22 Sep 92 12:21:36 BST Subject: Listing of references on connectivity Message-ID: <11780.9209221121@sirius.mrc-apu.cam.ac.uk> In the past week, I have received the following references on connectivity and volume in the brain (and in parallel hardware): Braitenberg, V., & Schuz, A. (1990). Anatomy of the cortex: statistics and geometry. Berlin: Springer-Verlag. Cherniak, C. (1990). The bounded brain: toward quantitative neuroanatomy. Journal of Cognitive Neuroscience, 2, 58-68. Hofman, M.A. (1985). Neuronal correlates of corticalization in mammals: a theory. Journal of Theoretical Biology, 112, 77-95. Mitchison, G. (1992). Axonal trees and cortical architecture. Trends in Neurosciences, 15, 122-126. Schuz, A., & Palm, G. (1989). Density of neurons and synapses in the cerebral cortex of the mouse. Journal of Comparative Neurology, 286, 442-455. Vitanyi, P.M.B. (1988). Locality, communication and interconnect length in multicomputers. SIAM Journal on Computing, 17, 659-672. Waltz, D.L. (1988). The prospects for building truly intelligent machines. Daedalus (Proc. American Academy of Arts and Sciences), 117, 191-212. Young, M.P. (1992). Objective analysis of the topological organization of the primate cortical visual system. Nature, 358, 152-155. I want to thank everyone for reacting to my request. Jaap Murre From hamps at shannon.ECE.CMU.EDU Tue Sep 22 09:35:15 1992 From: hamps at shannon.ECE.CMU.EDU (John B. Hampshire II) Date: Tue, 22 Sep 92 09:35:15 EDT Subject: sigmoid <=> Gaussian a posteriori distributions Message-ID: <9209221335.AA08182@shannon.ece.cmu.edu.ECE.CMU.EDU> The proof of this linkage (like so many proofs associated with connectionist models) goes way back... probably to Gauss. Before neural networks were in vogue, the sigmoid was associated with linear classifiers in the form of the exponential logistic. So, for example, guys like Lachenbruch showed the proof in the context of linear regression/classification in the 60's. This is meant only to inform, not to kick sand in anybody's face. -John
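[A minimal numerical sketch of the result under discussion - an editorial illustration, not code from any of the postings; all parameter values are invented. For two Gaussian classes with a shared covariance matrix, the Bayes posterior for class 1 is exactly a sigmoid of a linear function of the input, with the log prior ratio appearing in the bias, just as Robinson's corollary states:]

import numpy as np

# Two Gaussian classes with shared covariance Sigma and prior p1 for class 1.
rng = np.random.default_rng(0)
d = 3
mu0, mu1 = rng.normal(size=d), rng.normal(size=d)
A = rng.normal(size=(d, d))
Sigma = A @ A.T + d * np.eye(d)       # any symmetric positive definite matrix
Sinv = np.linalg.inv(Sigma)
p1 = 0.3

def likelihood(x, mu):
    # Unnormalized Gaussian; the common normalization cancels in Bayes rule.
    return np.exp(-0.5 * (x - mu) @ Sinv @ (x - mu))

# Weights and bias predicted by the algebra: note log(p1/p0) in the bias.
w = Sinv @ (mu1 - mu0)
b = -0.5 * (mu1 @ Sinv @ mu1 - mu0 @ Sinv @ mu0) + np.log(p1 / (1 - p1))

x = rng.normal(size=d)
posterior = p1 * likelihood(x, mu1) / (
    p1 * likelihood(x, mu1) + (1 - p1) * likelihood(x, mu0))
sigmoid = 1.0 / (1.0 + np.exp(-(w @ x + b)))
print(posterior, sigmoid)             # the two numbers agree to machine precision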
From B344DSL at UTARLG.UTA.EDU Tue Sep 22 11:53:00 1992 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Tue, 22 Sep 1992 10:53 CDT Subject: Sigmoid Posterior Message-ID: <01GP2ZEOMJOW000UUC@utarlg.uta.edu> The statement I made about the sigmoids being discussed in early Grossberg papers was off the cuff, but I did find it in my own joint paper with Grossberg (Grossberg-Levine, Journal of Theoretical Biology 53, 341-380, 1975). On p. 343, the function f(x) is introduced as a term that appears repeatedly in the basic equations for shunting recurrent on-center off-surround interactions (the equations themselves introduced on p. 342). We state, "In vivo, f(w) is often a sigmoid function of w (Kernell, 1965a, b; Rall, 1955), and such a function arises from integrating a Gaussian, Cauchy, or other similar distribution of thresholds within a population." Dan Levine From cowan at synapse.uchicago.edu Tue Sep 22 12:26:39 1992 From: cowan at synapse.uchicago.edu (Jack Cowan) Date: Tue, 22 Sep 92 11:26:39 CDT Subject: The sigmoid is the posterior distribution from Gaussian likelihoods Message-ID: <9209221626.AA07536@synapse> Geoff: Apropos Dan Levine's remarks, I introduced the sigmoid in 1965 at the Wiener Memorial Meeting in Genoa, as a smooth approximation to the firing rate vs. current curve of a single neuron with a shot noise input. (Published in 1968-71.) Later, with Hugh Wilson in 1972-1973, we redefined the sigmoid as arising in a population of neurons with varying thresholds, as the integral of a unimodal probability density. Jack Cowan
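[A quick empirical illustration of the population-of-thresholds point made by Levine and Cowan - my sketch, with invented numbers: the fraction of threshold units firing at a given input level is the integral of the threshold density up to that level, which traces out a sigmoidal curve.]

import numpy as np

# Population of simple threshold units; thresholds drawn from a Gaussian.
rng = np.random.default_rng(1)
thresholds = rng.normal(loc=0.0, scale=1.0, size=100_000)

for s in [-2.0, -1.0, 0.0, 1.0, 2.0]:
    frac = np.mean(thresholds < s)    # empirical integral of the density up to s
    print(s, frac)                    # rises smoothly from ~0.02 through 0.5 to ~0.98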
From rsun at athos.cs.ua.edu Tue Sep 22 17:24:51 1992
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Tue, 22 Sep 1992 16:24:51 -0500
Subject: No subject
Message-ID: <9209222124.AA12158@athos.cs.ua.edu>

CALL FOR PAPERS

ARCHITECTURES FOR INTEGRATING NEURAL AND SYMBOLIC PROCESSES
A Special Issue of Connection Science: a journal of AI, cognitive science and neurocomputing

Although there has been a great deal of research in integrating neural and symbolic processes, from both cognitive and applications viewpoints, there has been relatively little effort in comparing, categorizing and combining these fairly isolated approaches, especially from a cognitive perspective. This special issue is intended to address the cognitive architectural aspects of this integration: the issue will bring together various architectural approaches as well as focus on specific architectures that solve particular problems, that exhibit cognitive plausibility, that yield new insights, and that show potential for scaling up.

Papers are expected to address the following questions, but are not limited to them:

* What have we achieved so far by integrating neural and symbolic processes?
* What are the relative advantages/disadvantages of each approach?
* How cognitively plausible is each proposed approach?
* Is there any commonality among various architectural approaches? Should we try to synthesize existing approaches? How do we synthesize these approaches? (Does there exist a generic and uniquely correct cognitive architecture?)
* What are the problems, difficulties and outstanding issues in integrating neural and symbolic processes?
* How do symbolic representation and connectionist learning schemes interact in integrated systems?

The papers can be either theoretical or experimental in scope, and can comment on the current state of affairs and address what advances are necessary so that continued progress can be made. However, prospective authors should emphasize the principles involved, along with an explanation of why the particular model works or does not work and what it is we can learn from the model. For example, does the model predict some testable behavior which can lead to new insights?

All papers will be rigorously refereed, and should conform to the following rules, in addition to the usual requirements of the journal.

Authors must submit five (5) printed copies of their papers to either of the addresses listed below by January 5, 1993. Notification of receipt will be electronically mailed to the first author (or designated author) soon after receipt. Notification of acceptance or rejection of submitted papers will be mailed to the first author (or designated author) by March 31, 1993. Final versions of accepted papers will be due May 28, 1993.

All 5 copies of a submitted paper must be clearly legible. Neither computer files nor fax submissions are acceptable. Submissions must be printed on 8 1/2 in. x 11 in. or A4 paper using 12 point type (10 characters per inch for typewriters). Each copy of the paper must have a title page (separate from the body of the paper) containing the title of the paper, the names and addresses of all authors, including e-mail addresses, and a short (less than 200 word) abstract.

Review Criteria

[Significance:] How important is the work reported? Does it attack an important/difficult problem or a peripheral/simple one? Does the approach offered advance the state of the art?
[Originality:] Has this or similar work been previously reported? Are the problems and approaches new? Is this a novel combination of familiar techniques? Does the paper point out differences from related research? Is it re-inventing the wheel using new terminology?
[Quality:] Is the paper technically sound? Does it carefully evaluate the strengths and limitations of its contribution? How are its claims backed up?
[Clarity:] Is the paper clearly written? Does it motivate the research? Does the paper properly situate itself with respect to previous work? Are the results described and evaluated? Is the paper organized in a logical fashion?

Submissions should be delivered to one of the following addresses:

Dr. Lawrence Bookman                  Prof. Ron Sun
Sun Microsystems Laboratories         Department of Computer Science
Two Federal Street                    The University of Alabama
Billerica MA 01821, USA               Tuscaloosa, AL 35487, USA
Net: lbookman at east.sun.com         Net: rsun at athos.cs.ua.edu

From marcus at ips102.desy.de Wed Sep 23 10:57:09 1992
From: marcus at ips102.desy.de (Marcus Speh)
Date: Wed, 23 Sep 92 16:57:09 +0200
Subject: Neural Multilevel Scheme for Disordered Systems (Preprint)
Message-ID:

The following paper will eventually appear in the International Journal of Modern Physics C [Physics and Computers].

--------------------------------------------------------------------
"Neural multigrid for gauge theories and other disordered systems"

M. Baeker, T. Kalkreuter, G. Mack and M. Speh
II. Institut f. Theoretische Physik, Universitaet Hamburg

The preprint is available via anonymous FTP or as a hard copy (free, see below). It contains our contribution to the conference "Physics Computing '92" in Prague, Czechoslovakia, August 24-28, 1992.
--------------------------------------------------------------------

ABSTRACT

We present evidence that multigrid works for wave equations in disordered systems, e.g.
in the presence of gauge fields, no matter how strong the disorder, but one needs to introduce a "neural computations" point of view into large scale simulations: first, the system must learn how to do the simulations efficiently, then do the simulation (fast). The method can also be used to provide smooth interpolation kernels which are needed in multigrid Monte Carlo updates.

Keywords: Multigrid, Neural Networks, Disordered Systems, Gauge Fields, Neural Multigrid

For comments, questions or suggestions, please contact: Marcus Speh (marcus at ips102.desy.de)

--------------------------------------------------------------------
To obtain a copy via FTP (9 pages with figures appended) use the standard procedure:

ftp cheops.cis.ohio-state.edu
Name: anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get speh.neuralmg.ps.Z
ftp> quit
zcat speh.neuralmg.ps.Z | lpr

------------------------------------------------------------------------
If FTP is impossible, a free hard copy can be obtained by sending a request via e-mail, FAX or snailmail to:

Marcus Speh
II. Inst. Theor. Physik/DESY
Universitaet Hamburg
Luruper Chaussee 149
2000 Hamburg 50
Tel. (0049)(40)8998 2260
FAX. (0049)(40)8998 2267
------------------------------------------------------------------------

From jwk1 at forth.stirling.ac.uk Thu Sep 24 05:14:44 1992
From: jwk1 at forth.stirling.ac.uk (Dr James W Kay)
Date: Thu, 24 Sep 92 10:14:44 +0100
Subject: sigmoid as a posterior
Message-ID: <9209240914.AA06094@forth.stirling.ac.uk>

Note the connection with the work of the late J.A. Anderson within the statistical community. In his paper on logistic discrimination (Biometrika 1972, 59, 19-35), he modelled the posterior probabilities directly using logistic functions. While this includes the Gaussian, equal covariance, situation as a special case, it is more general... and of course his formulation could today be directly implemented as an ANN. In his paper he cites earlier related work by Cox, Day and Kerridge. He uses maximum likelihood to "learn the weights", but later he advocated the use of penalised maximum likelihood (or Bayesian, or regularisation, by other names). It is easy to see how additional hidden units could be incorporated within this framework, and this would be essentially a version of projection pursuit logistic discrimination.

Jim Kay
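Anderson's scheme is, in modern terms, a single sigmoid unit fitted by (penalised) maximum likelihood. The sketch below is a hedged illustration with made-up data, penalty and step size, not anything taken from the Biometrika paper:

# Logistic discrimination in the Anderson style: model P(class 1 | x)
# directly as a logistic of a linear function and fit the weights by
# penalised maximum likelihood -- i.e. what a single sigmoid unit trained
# on cross-entropy with weight decay does.  Data here are illustrative.
import numpy as np

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(+1, 1, (50, 2)),        # class 1 samples
               rng.normal(-1, 1, (50, 2))])       # class 0 samples
y = np.r_[np.ones(50), np.zeros(50)]
X = np.hstack([X, np.ones((100, 1))])             # append a bias input

w = np.zeros(3)
lam = 0.01                                        # penalty ("weight decay")
for _ in range(2000):
    p = 1.0 / (1.0 + np.exp(-X @ w))              # predicted posteriors
    grad = X.T @ (y - p) - lam * w                # grad of penalised log-lik.
    w += 0.01 * grad                              # gradient ascent step

print("weights:", w)                              # linear discriminant + bias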
From cramer at max.ee.lsu.edu Wed Sep 23 11:57:14 1992
From: cramer at max.ee.lsu.edu (Chris Cramer)
Date: Wed, 23 Sep 92 10:57:14 CDT
Subject: No subject
Message-ID: <9209231557.AA27340@max.ee.lsu.edu>

The following technical report is available. If you would like to have copies, do let me know.

Pruning Hidden Neurons in the Kak Algorithm
Chris Cramer

ABSTRACT

The Kak algorithm is an important new approach to training a feed-forward network. Kak has shown that it is possible to compute weights for a single corner of an input space by inspection. In this paper, the author will show that a facet classification algorithm, capable of mapping any input space using fewer hidden neurons, is also possible by combining a trial-and-error method with direct computation. This facet algorithm allows for several input/output sequences to be covered by a single weight vector, thus pruning the necessary number of hidden neurons. This is achieved by summing the weights, given by the Kak algorithm, for the various corners of the input space which are mapped to one. Once the weights have been computed, the threshold weight may be determined. This algorithm allows the network to be trained during operation, after the initial training. The author will demonstrate the superiority of the facet classification algorithm over the perceptron and backpropagation algorithms in computing a weight vector.

Technical Report ECE 92-09, LSU. September 22, 1992

From mike at PARK.BU.EDU Fri Sep 25 16:13:28 1992
From: mike at PARK.BU.EDU (mike@PARK.BU.EDU)
Date: Fri, 25 Sep 92 16:13:28 -0400
Subject: World Congress on Neural Networks
Message-ID: <9209252013.AA11148@fenway.bu.edu>

WORLD CONGRESS ON NEURAL NETWORKS
1993 INTERNATIONAL NEURAL NETWORK ANNUAL MEETING
July 11--15, 1993
Portland Convention Center, Portland, Oregon

This international research conference will be the largest and most interdisciplinary meeting in 1993 covering all the areas relevant to neural network research. Neural network models in psychology and cognitive science, neuroscience and neuropsychology, engineering and design, technology and applications, and computational and mathematical analysis will all be featured. The meeting structure will particularly emphasize the dynamic interplay of neurobiological modelling with advanced engineering and technological applications. Hybrid systems wherein neural network models are linked to fuzzy, genetic, and symbolic models are also most welcome.

GENERAL CHAIR: George Lendaris
PROGRAM CHAIRS: Stephen Grossberg and Bart Kosko
COOPERATING SOCIETIES CHAIR: Mark Kon

Plenary Lectures include:
-------------------------
3-D VISION AND FIGURE-GROUND POP-OUT, Stephen Grossberg
COHERENCE AS AN ORGANIZING PRINCIPLE OF CORTICAL FUNCTION, Wolf Singer
REAL-TIME ON-CHIP LEARNING IN ANALOG VLSI NETWORKS, Carver Mead
INTELLIGENT CONTROL USING NEURAL NETWORKS, Kumpati Narendra
NEURAL FUZZY SYSTEMS, Bart Kosko

Tutorials, which will be offered on Sunday, July 11, 1993, include:
-------------------------------------------------------------------
ADAPTIVE RESONANCE THEORY, Gail Carpenter
BIOLOGICAL VISION, V.S. Ramachandran
COGNITIVE SCIENCE, David Rumelhart
COGNITIVE NEUROSCIENCE, Robert Desimone
NEURAL COMPUTATION AND VLSI, Eric Schwartz
NEURAL CONTROL AND ROBOTICS, Michael Kuperstein
NEURAL FUZZY SYSTEMS, Fred Watkins
NEUROBIOLOGY AND CHAOS, Walter Freeman
PRACTICAL APPLICATIONS OF NEURAL NETWORK THEORY, Robert Hecht-Nielsen
STRUCTURAL AND MATHEMATICAL APPROACHES TO SIGNAL PROCESSING, S.Y. Kung
SUPERVISED LEARNING, Hal White

PROGRAM COMMITTEE:
------------------
D. Alkon, S. Amari, J. Anderson, P. Baldi, A. Barto, D. Bullock, J. Byrne, G. Carpenter, D. Casasent, T. Caudell, R. Chellappa, M. Cohen, L. Cooper, W. Daugherty, J. Daugman, J. Dayhoff, R. Desimone, R. Eckmiller, B. Ermentrout, K. Fukushima, S. Gielen, L. Giles, P. Gochin, R. Granger, S. Grossberg, A. Guez, D. Hammerstrom, R. Hecht-Nielsen, J. Houk, W. Karplus, S. Kelso, B. Kosko, S.Y. Kung, M. Kuperstein, D. Levine, C. von der Malsburg, E. Marder, A. Maren, J. Marshall, J. McClelland, E. Mingolla, K. Narendra, H. Ogmen, E. Oja, L. Optican, F. Pineda, V.S. Ramachandran, D. Rumelhart, E. Schwartz, M. Seibert, J. Shynk, D. Specht, H. Szu, R. Taber, Y. Takefugi, J. Taylor, P. Werbos, H. White, B. Widrow, R. Williams

Technical Sessions include:
---------------------------
TOPIC                                SESSION CHAIRS
-----                                --------------
Biological Vision                    C. von der Malsburg, V.S. Ramachandran
Machine Vision                       R. Chellappa, K. Fukushima
Speech and Language                  M. Cohen, D. Rumelhart
Biological Sensory-Motor Control     A. Barto, S. Kelso
Robotics and Control                 M. Kuperstein, K. Narendra
Supervised Learning                  L. Cooper, P. Werbos
Unsupervised Learning                G. Carpenter, E. Oja
Pattern Recognition                  T. Kohonen, D. Specht
Local Circuit Neurobiology           J. Byrne, J. Houk
Cognitive Neuroscience               R. Desimone, L. Optican
Intelligent Neural Systems           S. Grossberg, D. Levine
Neural Fuzzy Systems                 W. Daugherty, B. Kosko
Signal Processing                    S.Y. Kung, B. Widrow
Neurodynamics                        S. Amari, H. White
Electro-Optical Neurocomputers       L. Giles, H. Szu
Associative Memory                   J. Anderson, J. Taylor
Applications                         J. Dayhoff, R. Hecht-Nielsen

International Neural Network Society
------------------------------------
President: Paul Werbos
President-Elect & Treasurer: Harold Szu
Secretary: Judith Dayhoff

Board of Governors
------------------
Shun-ichi Amari, Stephen Grossberg, Richard Andersen, Mitsuo Kawato, James A. Anderson, Christof Koch, Andrew Barto, Teuvo Kohonen, Gail Carpenter, Bart Kosko, Walter Freeman, Christoph von der Malsburg, Kunihiko Fukushima, David Rumelhart, Lee Giles, Bernard Widrow

Cooperating Societies include:
------------------------------
European Neural Network Society
Japanese Neural Network Society
IEEE Neural Networks Council
IEEE Computer Society
International Fuzzy Systems Association

Call for Papers
---------------
Papers must be received by January 15, 1993. International authors should submit their work via Air Mail or Express Courier so as to ensure timely arrival. All submissions will be acknowledged by mail, and accepted papers will be published as submitted. Papers will be reviewed by session co-chairs and the program committee, and all authors will be informed of the decision. All papers accepted for presentation will be published in full in the Conference Proceedings, which is expected to be available at the conference for distribution to all regular conference registrants.

Six (6) copies (one original and five copies) of the paper are required for submission. Do not fold or staple the original camera-ready copy. The paper must be complete within 4 pages, including figures, tables, and references, and should be written in English. There will be a charge of $20 per page for papers exceeding 4 pages. Checks for over-length charges should be made payable to WCNN'93 and must be included with the submitted paper; if the paper is not accepted, the check will be returned. Only complete papers will be considered.

Papers must be submitted camera-ready on 8-1/2" x 11" white paper with one-inch margins on all four sides. Papers should be prepared by typewriter or letter quality printer in one-column format, single-spaced, in Times Roman or similar type style of 10 points or larger, and printed on one side of the page only. All text, figures, captions, and references must be clean, sharp, readable, and high contrast. FAX submissions are not acceptable. Centered at the top of the first page should be the complete title, author name(s), affiliation(s), and mailing address(es). This is to be followed by a blank space, then the abstract (up to 15 lines), followed by the text.

In an accompanying letter, the following information must be included:
Full title of the paper
Corresponding author name, mailing address, telephone and fax numbers
Technical session (1st and 2nd choices)
Oral or poster session preferred
Presenter name, mailing address, telephone and fax numbers
Audio/Visual requirements

Send papers to:
---------------
WCNN'93
Talley Management Group Inc.
1825 I Street NW, Suite 400
Washington, DC 20006
TEL: (609) 845-1720
FAX: (609) 853-0411

Registration Fees:
------------------
MEETING:
                  Before      Before      After
                  01/15/93    06/15/93    06/15/93
Member            $175        $270        $350
Non-Member *      $275        $370        $450
Student           $50         $75         $95

* Includes a 1993 INNS membership and a 1-year subscription to the INNS journal Neural Networks

TUTORIALS:
Member/
Non-Member        $225        $295        $345
Student           $50         $75         $95

For additional information regarding the meeting, including special travel rates, hotels, the planning of exhibits, or INNS membership, please call or fax the numbers listed above.

From P.Refenes at cs.ucl.ac.uk Mon Sep 28 13:22:43 1992
From: P.Refenes at cs.ucl.ac.uk (P.Refenes@cs.ucl.ac.uk)
Date: Mon, 28 Sep 92 18:22:43 +0100
Subject: PRE-PRINT: Financial modeling using neural networks
Message-ID:

The following preprint is available - hard copies by surface mail only.
-----------------------------------------------
FINANCIAL FORECASTING USING NEURAL NETWORKS

A. N. REFENES, M. AZEMA-BARAC & P. C. TRELEAVEN
Department of Computer Science, University College London,
Gower Street WC1 6BT, London UK.

ABSTRACT

Modeling of financial systems has traditionally been done with models assuming partial equilibrium. Such models have been very useful in expanding our understanding of the capital markets; nevertheless, many empirical financial anomalies have remained unexplainable. It is possible that this may be due to the partial equilibrium nature of these models. Attempting to model the capital markets in a general equilibrium framework still remains analytically intractable. Because of their inductive nature, dynamical systems such as neural networks can bypass the step of theory formulation, and they can infer complex non-linear relationships between input and output variables. Neural networks have now been applied to a number of live systems and have demonstrated far better performance than conventional approaches. In this paper we review the state of the art in financial modeling using neural networks and describe typical applications in key areas of univariate time series forecasting, multivariate data analysis, classification, and pattern recognition. The applications cover areas such as asset allocation, foreign exchange, stock ranking and bond trading. We describe the parameters that influence neural performance, and identify intervals of parameter values over which statistical stability can be achieved.
--------------------------------------------------------
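As a toy instance of the univariate time-series forecasting task the abstract mentions: a small one-hidden-layer network fitted to a synthetic series by gradient descent. All data and hyperparameters below are assumptions for illustration, not taken from the preprint.

# Hedged sketch of univariate time-series forecasting with a small
# network: predict y(t) from the previous 3 values of a synthetic series.
import numpy as np

rng = np.random.default_rng(0)
series = np.sin(np.arange(300) * 0.1) + 0.1 * rng.normal(size=300)

lag = 3
X = np.array([series[i:i+lag] for i in range(len(series) - lag)])
y = series[lag:]

# One hidden layer (tanh), linear output, trained by plain gradient descent.
W1 = rng.normal(0, 0.5, (lag, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);        b2 = 0.0
lr = 0.01
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)                # hidden activations
    pred = h @ W2 + b2
    err = pred - y                          # gradient of MSE w.r.t. pred
    gW2 = h.T @ err / len(y); gb2 = err.mean()
    gh = np.outer(err, W2) * (1 - h**2)     # backprop through tanh
    gW1 = X.T @ gh / len(y); gb1 = gh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

print("final MSE:", np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))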
From marwan at sedal.su.oz.au Tue Sep 29 05:37:04 1992
From: marwan at sedal.su.oz.au (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: +61-2 692 2240)
Date: Tue, 29 Sep 1992 19:37:04 +1000
Subject: Multi-module Neural Computing Environment
Message-ID: <9209290937.AA04629@sedal.sedal.su.OZ.AU>

Multi-Module Neural Computing Environment (MUME)

MUME is a simulation environment for multi-module neural computing. It provides an object-oriented facility for the simulation and training of multiple nets with various architectures and learning algorithms. MUME includes a library of network architectures including feedforward, simple recurrent, and continuously running recurrent neural networks. Each architecture is supported by a variety of learning algorithms. MUME can be used for large-scale neural network simulations as it provides support for learning in multi-net environments. It also provides pre- and post-processing facilities.

The object-oriented structure makes the addition of new network classes and new learning algorithms simple. New classes/algorithms can be added to the library or compiled into a program at run-time. The interface between classes is performed using Network Service Functions, which can be easily created for a new class/algorithm. The architectures and learning algorithms currently available are:

Class                                  Learning algorithms
------------                           -------------------
MLP                                    backprop, weight perturbation, node perturbation, summed weight perturbation
SRN                                    backprop through time, weight update driven node splitting, history bound nets
CRRN                                   Williams and Zipser
Programmable limited-precision nets    weight perturbation, Combined Search Algorithm, Simulated Annealing

Other general purpose classes include (viewed as nets):
o DC source
o Time delays
o Random source
o FIFOs and LIFOs
o Winner-take-all
o X out of Y classifiers

The modules are provided in a library. Several "front-ends" or clients are also available. MUME can be used to include non-neural computing modules (decision trees, ...) in applications.

The software is the product of a number of staff and postgraduate students at the Machine Intelligence Group at Sydney University Electrical Engineering. It is currently being used in research, research and development, and teaching, in ECG and ICEG classification, and in speech and image recognition. As such, we are interested in institutions that can exploit the tool (especially in educational courses) and build on it.

The software is written in 'C' and is being used on Sun and DEC workstations. Efforts are underway to port it to the Fujitsu VP2200 vector processor using the VCC vectorising C compiler.

MUME is made available to research institutions on media/doc/postage cost arrangements. Information on how to acquire it may be obtained by writing (or email) to:

Marwan Jabri
SEDAL
Sydney University Electrical Engineering
NSW 2006 Australia
Tel: (+61-2) 692-2240
Fax: 660-1228
Email: marwan at sedal.su.oz.au
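The multi-module idea is easier to see in code than in prose. The sketch below is hypothetical Python, not MUME's actual 'C' interface: each module exposes the same small set of service functions (here just forward/backward), so a container can wire heterogeneous modules together without knowing their internals.

# Hypothetical sketch of a multi-module net interface (illustrative only,
# not MUME's real API): every module class implements the same service
# functions, so a generic container can chain arbitrary modules.
import numpy as np

class Module:
    def forward(self, x): raise NotImplementedError
    def backward(self, grad): raise NotImplementedError  # grad w.r.t. input

class Linear(Module):
    def __init__(self, n_in, n_out, lr=0.1):
        self.W = np.random.randn(n_out, n_in) * 0.1
        self.lr = lr
    def forward(self, x):
        self.x = x
        return self.W @ x
    def backward(self, grad):
        gx = self.W.T @ grad
        self.W -= self.lr * np.outer(grad, self.x)   # local learning step
        return gx

class Sigmoid(Module):
    def forward(self, x):
        self.y = 1 / (1 + np.exp(-x)); return self.y
    def backward(self, grad):
        return grad * self.y * (1 - self.y)

class Chain(Module):
    # A multi-net container: modules interact only via forward/backward.
    def __init__(self, *mods): self.mods = mods
    def forward(self, x):
        for m in self.mods: x = m.forward(x)
        return x
    def backward(self, grad):
        for m in reversed(self.mods): grad = m.backward(grad)
        return grad

net = Chain(Linear(2, 3), Sigmoid(), Linear(3, 1), Sigmoid())
out = net.forward(np.array([0.5, -0.2]))
net.backward(out - 1.0)   # push an error signal back through all modules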
From rba at bellcore.com Tue Sep 29 12:19:53 1992
From: rba at bellcore.com (Bob Allen)
Date: Tue, 29 Sep 92 12:19:53 -0400
Subject: No subject
Message-ID: <9209291619.AA07630@vintage.bellcore.com>

NIPS92, December 1-3, 1992, Denver, Colorado

STUDENT FINANCIAL SUPPORT

Modest financial support for travel to attend the NIPS conference in Denver may be available to students and other young researchers who have worked on neural networks. Those requesting support should post a one-page summary of their background and research interests, a curriculum vitae and their e-mail address to:

Dr. Robert B. Allen
NIPS92 Treasurer
Bellcore MRE 2A-367
445 South Street
Morristown, NJ 07962-1910

The support will be $250 for North America and $500 for overseas. Travel grant checks for those receiving awards will be available at the conference registration desk. Qualifying requests will be filled in the order they are received. In the event that requests exceed available funds, additional requests may be paid later, based on the financial success of the conference.

From cowan at synapse.uchicago.edu Tue Sep 29 15:58:17 1992
From: cowan at synapse.uchicago.edu (Jack Cowan)
Date: Tue, 29 Sep 92 14:58:17 CDT
Subject: Werner Reichardt
Message-ID: <9209291958.AA09691@synapse>

It is with great regret that I have to announce the death of Werner Reichardt. Werner was a student in Berlin at the outbreak of WW II and fought against the Nazis in the German underground. He was captured by the Gestapo but saved by the Russians shortly before his scheduled execution. Werner began his career as a postdoc with Max Delbruck at CalTech, but first became known for his work with Bernard Hassenstein on motion detection. He set up the Max Planck Institute for Biological Cybernetics in the early 60s and founded the journal Kybernetik, now known as Biological Cybernetics. He produced a great deal of excellent pioneering work on fly vision, especially with Tommy Poggio. He will be greatly missed by his many friends, of whom I count myself fortunate to have been one.

Jack Cowan

From KOCH at IAGO.CALTECH.EDU Wed Sep 30 12:28:03 1992
From: KOCH at IAGO.CALTECH.EDU (KOCH@IAGO.CALTECH.EDU)
Date: 30 Sep 1992 09:28:03 -0700 (PDT)
Subject: Werner Reichardt
Message-ID: <01GPE2JF64HE934Z0G@IAGO.CALTECH.EDU>

I would like to second what Jack wrote about the importance of Werner Reichardt's work. His second-order, correlation model (known today simply as the Reichardt model) for motion detection in beetles and flies (first postulated in 1956 in a joint publication with Hassenstein) is, together with the Hodgkin-Huxley equations, one of the oldest and most successful models in neurobiology. He and his group over the last 30 years amassed both behavioral and electrophysiological evidence supporting such a model for the fly. More recent work on the intensity-based, short-range motion perception system in humans (Adelson-Bergen, Watson-Ahumada, Van Santen-Sperling) uses the same formalism as does the fly correlation model. Furthermore, at the electrophysiological level, a number of studies support the notion of such detectors in area 17 in cats. One could therefore argue that we have good evidence that Reichardt's correlation model---in which the linearly filtered output of one receptor is multiplied by the spatially offset and temporally delayed filtered output of a neighbouring receptor---describes the first stage in the motion pathway, from flies to humans. That's quite a legacy to leave behind.

Christof
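That one-sentence description of the correlator is complete enough to simulate. A minimal sketch with assumed parameters, using a first-order low-pass filter as the delay stage: the time-averaged opponent output changes sign with the direction of a moving grating.

# Minimal Hassenstein-Reichardt correlator (illustrative parameters):
# the low-pass-filtered (delayed) signal from one receptor is multiplied
# by the undelayed signal from its neighbour; the mirror-image product is
# subtracted, so the time-averaged output signals motion direction.
import numpy as np

dt, tau = 0.001, 0.05                 # time step, low-pass time constant (s)
t = np.arange(0, 2, dt)
f, k, dx = 2.0, 2 * np.pi, 0.1        # temporal freq (Hz), spatial freq, spacing

def response(direction):
    # Sinusoidal grating moving right (+1) or left (-1), sampled at two
    # neighbouring receptors.
    s1 = np.sin(2 * np.pi * f * t)
    s2 = np.sin(2 * np.pi * f * t - direction * k * dx)
    lp1 = np.zeros_like(s1); lp2 = np.zeros_like(s2)
    for i in range(1, len(t)):        # first-order low-pass = "delay" filter
        lp1[i] = lp1[i-1] + dt / tau * (s1[i] - lp1[i-1])
        lp2[i] = lp2[i-1] + dt / tau * (s2[i] - lp2[i-1])
    return np.mean(lp1 * s2 - lp2 * s1)   # opponent (subtracted) arms

print(response(+1))   # positive mean output for one direction
print(response(-1))   # negative (equal magnitude) for the other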
From mpadgett at eng.auburn.edu Wed Sep 30 04:02:30 1992
From: mpadgett at eng.auburn.edu (Mary Lou Padgett)
Date: Wed, 30 Sep 92 03:02:30 CDT
Subject: SimTec92*WNN92*FNN92
Message-ID: <9209300802.AA28083@eng.auburn.edu>

SimTec92 * WNN92/Houston * FNN92 Symposium
Nov. 4-7, 1992
South Shore Harbour Resort, Clear Lake, TX -- near NASA/JSC

SimTec: Aerospace, Emerging Technologies, Simulation Applications and Life Sciences
WNN Conference / Workshop on NEURAL NETWORKS and FUZZY LOGIC, Wednesday, November 4 - Friday, November 6
FNN Symposium: TUTORIALS on Fuzzy Logic, Neural Networks, Standards, Saturday, November 7

WNN is sponsored by The Society for Computer Simulation, International, co-sponsored by NASA/JSC, GSFC, & LaRC in cooperation with SPIE and INNS. The IEEE Neural Networks Council is a participating society.

PRELIMINARY SESSION AGENDA

Wednesday, November 4
8:00 am - ...       Registration
8:30 am - 9:00 am   WELCOME
9:00 am - 10:00 am  KEYNOTE SPEAKER: Story Musgrave, MD
10:15 am - 12:15 pm PLENARY: Simulation & Space Station Freedom
12:15 pm - 1:30 pm  NETS USERS GROUP: R. Shelton, NASA/JSC
1:30 pm - 3:00 pm   PARALLEL SESSIONS
  STANDARDS, George Rogers, NSWC
    A Neurocomputing Benchmark for Digital Computers -- George W. Rogers, Jeffrey L. Solka, John Ellis, and Harold D. Szu, NSWC
    Comparison of Artificial Neural Networks and Traditional Classifiers via the Two-Spiral Problem -- Witoon Suewatanakul, UT Austin, Austin, TX
    Performance Comparison of Some Neural Network Paradigms for Solving the Seismic Phase Identification Problem -- Gyu-Sang Jan, Farid Dowla, V. Vemuri, UC Davis, Livermore, CA
  ARCHITECTURES, Kevin Reilly, UAB
    Realization of a Modified CMAC Architecture Using Reconfigurable Logic Devices -- Aleksander Kolcz, N.M. Allinson, U York, UK
    Hybrid Systems Approaches with Ascertainment Learning in Neural Networks -- Kevin D. Reilly, M.F. Villa, UAB; Y. Hayashi, Ibaraki U.
    A Hybrid Neural Network System with Serial Learning and Associative Components -- V. Anumolu, Kevin D. Reilly, N. W. Bray, UAB
1:30 pm - 4:30 pm   SCS Process Controls Standards
3:30 pm - 5:00 pm   PARALLEL SESSIONS
  OPTIMIZATION & LEARNING I, Vasant Honavar, Iowa State U.
    An Empirical Comparison of Flat-spot Elimination Techniques in BackPropagation Networks -- Karthik Balakrishan, Rajesh Parekh, Vasant A. Honavar, Iowa State U.
    A Fast Algorithm with a Guarantee to Learn: Binary Synaptic Weights Algorithm on Neural Networks -- Figen Ulgen, Norio Akamatsu, Tokushima U., Japan
    Function Preserving Weight Transformations of Perceptron Type Networks -- Vera Kurkova, Inst of Computer & Information, Czechoslovakia
    Modelling of Two-Dimensional Incompressible Potential Flows by Programmable Feedforward Networks -- Andrew J. Meade, Jr., Rice U.
4:30 pm - 5:30 pm   STANDARDS ACTIVITIES OVERVIEW - Come and Go
  Joseph J. Cynamon, The Mitre Corp., SCS Associate VP for Standards
  Robert Shelton, NASA/JSC, SimTec/WNN/FNN Standards Chair
5:30 pm - 7:00 pm   Exhibitors Reception

Thursday, November 5
8:30 am - 10:00 am  PARALLEL SESSIONS
  TEMPORAL MODELING, James Lo, U. Maryland
    Time-Delay Neural Network (TDNN) Simulator to Detect Time-Variant Signals -- E. J. Carroll, N. P. Coleman, Jr., Picatinny Arsenal; G. N. Reddy, Lamar U.
    Synthetic Approach to Optimal Filtering -- James Ting-Ho Lo, U. Maryland
    Perceptual Linear Prediction and Neural Networks for Speech Analysis -- Sheryl L. Knotts, James A. Freeman, Loral Space Information Systems; Thomas L. Harman, U Houston-ClearLake
    A Neural Network for Temporal Pattern Classification -- Narasimhan S. Kumar, A. Dale Whittaker, Texas A&M
  VISION, Tim Cleghorn, NASA/JSC
    Adaptive Mixture Neural Networks for Functional Estimation -- George W. Rogers, Carey E. Priebe, David J. Marchette, Jeffrey L. Solka, NSWC
    Kernel Estimators and Mixture Models in Artificial Neural Networks -- Carey E. Priebe, David J. Marchette, George W. Rogers and Jeffrey L. Solka, NSWC
    A Neural Net Based 2D-Vision System for Real-Time Applications -- G.N. Reddy, S. Vaithilingham, W. C. Bean, Lamar U, Picatinny Arsenal; J. M. Mazzu, Charles River Analytics, Inc.
    A Hybrid Neural Network System for Robotic Object Recognition and Pose Determination -- J. M. Mazzu, A. K. Caglayan, Charles River Analytics, Inc.
8:30 am - 10:00 am  IEEE-NNC Hardware Interface Standards
10:30 am - 12:00 pm PARALLEL SESSIONS
  CONTROLS I, Claire McCullough, Redstone Arsenal
    A Manufacturing Cell Control Strategy Using Neural Networks and Discrete Event Simulation -- Qi Wan, Jeffrey K. Cochran, Arizona State U.
    Adaptive Control of Noisy Nonlinear Systems Using Neural Networks -- Claire L. McCullough, UAH
    An Adaptive Neural Network Controller of Robot Manipulator -- Youcef Derbal, M. M. Bayoumi, Queen's U., Canada
    A Neural Network Control System for Unit Food Dehydration Processes -- A. Dale Whittaker, Texas A&M
  NEURAL NETWORKS SIMULATION, David Tam, U North Texas
    A Generalizable Object-Oriented Neural Simulator for Reconstructing Functional Properties of Biological Neuronal Networks -- David Tam, U North Texas, Denton
    A Novel Vectorial Phase-Space Analysis of Spatio-Temporal Firing Patterns -- David Tam, U North Texas, Denton
    Exploring Class Reuse and Integration for Hybrid Simulation in C++ -- Teresa L. Hitt, Jefferson State Community College, and Kevin D. Reilly, UAB
10:30 am - 12:00 pm IEEE-NNC Software Interface Standards
12:00 pm - 1:30 pm  NETS Users II - Meet and Go Out to Eat
1:30 pm - 3:00 pm   PARALLEL SESSIONS
  CONTROLS II, Robert Shelton, NASA/JSC
    Using Functional Link Neural Nets for General Linear Least Squares Model Fitting -- Alfredo Somolinos, Mercy College
    Stabilization of Tethered Satellites using Parametric Avalanche Neural Network -- Robert Dawes, Ajay Patrikar, Martingale Research Corp.
    A Tree-Addressed Local Neural Network -- Robert Shelton, NASA/JSC
    A Neural Net Controller to Balance an Inverted Pendulum -- Lin Bin, Gongyuan Ding, G. N. Reddy, Lamar U.
  OPTIMIZATION & LEARNING II, G. N. Reddy, Lamar U., and James Villareal, NASA/JSC
    Modified Simulated Annealing Using Sample Distribution from the Energy Space of the Problem Instance -- G. Sampath, Marist College
    An Improved Neural Network Simulator for Solving the Travelling Salesman Problem -- Vinay Saxena, G. N. Reddy, Wendell C. Bean, Lamar U.
    An Empirical Analysis of the Expected Source Values Rule -- Richard Spartz, Vasant A. Honavar, Iowa State U.
1:30 pm - 3:00 pm   SCS Standards Board: Procedures -- J. F. Cynamon, The Mitre Corp., SCS Associate VP for Standards
3:30 pm - 5:00 pm   PARALLEL SESSIONS
  FUZZY LOGIC, Robert Lea, NASA/JSC
    Fuzzy Logic: A Brief Introduction -- Steve Marsh, Duberly Mazuelos, Motorola, Austin, TX
    Correlation-Recall Encoding Operations for Fuzzy Associative Memories -- Sergey Aityan, Texas A&M
    A Simple Fuzzy Logic Real-Time Camera Tracking System -- Kevin N. Magee, John B. Cheatham, Jr., Rice U.
  PATTERN RECOGNITION & APPLICATIONS I, A. Martin Wildberger, EPRI
    Overview of Neural Network Projects at the Electric Power Research Institute -- A. Martin Wildberger, EPRI
    A Solution to Large-Scale Group Technology Problems: Art1 Neural Network Approach -- Cihan H. Dagli, Cheng-Fo Sen, U Missouri-Rolla
3:30 pm - 6:00 pm   IEEE-NNC Standards Overview (Come and Go) -- Walter J. Karplus, UCLA, Chair, IEEE-NNC Standards Committee; Performance Evaluation Working Group
7:00 pm - ...       Dutch Dinner on the Town -- A Fuzzy/Neural Network Planning and Networking Activity, sponsored by: SCS KBS Applications to Simulators TAC, SCS NN and Simulation Standards Committee, IEEE-NNC Standards Committee and INNS Standards SIG

Friday, November 6
7:00 am - 8:30 am   IEEE-NNC Glossary Working Group
8:30 am - 10:00 am  PARALLEL SESSIONS
  PATTERN RECOGNITION & APPLICATIONS II, S. Piche, Thought Processes, Inc., and R. Shelton, NASA/JSC
    Freeway Incident Detection Using Advanced Technology -- Edmond Ching-Ping Chang, Texas A&M
    Integration of Local and Global Neural Classifiers for Passive Sonar Signals -- Joydeep Ghosh, Kagan Turner, UT Austin; Steven Beck, Larry Deuser, Tracor Applied Sciences
    A Neural Network Approach for the Investigation of Chemical Phenomena -- Jerry A. Darsey, ORNL; Bobby Sumpter, Coral Getino, Donald W. Noid, U Arkansas-Little Rock
8:30 am - 10:00 am  SCS NN and Simulation Standards Discussion -- "Embedding Fuzzy Neural Control Modules into Simulations" Mary Lou Padgett, Auburn U., Troy Henson, IBM Corp. and Michelle Izygon, Barrios Technology, Inc.
10:30 am - 12:00 pm PARALLEL SESSIONS
  NEURAL NETWORKS AND FUZZY LOGIC, David Bendell Hertz, U. Miami
    Fuzzy-Neuro Controller for Back-Propagation Networks -- David Bendell Hertz, Qing Hu, U Miami
    Controlling Training of Neural Classifiers with Fuzzy Logic -- Ed DeRouin, Joe Brown, Thought Processes, Inc.
    Neural Networks Applications: Techniques for Increasing Validity -- Mary Lou Padgett, Auburn University
10:30 am - 12:00 pm SCS Human Factors Discussion
12:30 pm - 2:00 pm  Awards Luncheon at Gilruth Center on NASA/JSC
2:00 pm - 5:00 pm   TOURS of NASA/JSC Simulation Facilities
5:30 pm - 7:00 pm   KBS Reception for Lotfi Zadeh and Paper Award Winners (Guests $10.00)

FNN Symposium: Tutorials on Fuzzy Logic, Neural Networks, Standards
Saturday, November 7
Chairs: Joseph Mica, NASA/GSFC and Robert Savely, NASA/JSC

8:00 am             Registration
8:30 am - 9:45 am   "Fuzzy Logic, Neural Networks and Soft Computing" Lotfi Zadeh, UC Berkeley
9:45 am - 10:00 am  Break
10:00 am - 11:15 am "Fuzzy Applications" Yashvant Jani, Togai Infralogic
11:15 am - 12:30 pm "Fuzzy Helicopter Control" Captain Greg Walker, NASA Langley
12:30 pm - 1:30 pm  Dutch Lunch
1:30 pm - 2:00 pm   "Analysis of Stability of Fuzzy Control and Connections between Fuzzy Control and Nonlinear Control" R. Langari, Texas A&M
2:00 pm - 5:00 pm   NEURAL NETWORKS: Mary Lou Padgett, Auburn U.
2:00 pm - 2:30 pm   "Fuzzy Neural Relationships"
2:30 pm - 3:30 pm   "Neural Networks Basics"
3:30 pm - 3:45 pm   Break
3:45 pm - 4:45 pm   "Neural Networks Applications" (Proposed Standard Glossary, Backpropagation Examples; NASA/JSC NETS executable & examples to be given)
4:45 pm - 5:00 pm   "Neural Network Futures" Sign up for SCS, IEEE, INNS Activities
5:00 pm - ...       "NASA NETS Users Group Meeting SignUp"

R E G I S T R A T I O N
SimTec'92 1992 International Simulation Technology Conference * WNN92/Houston and FNN Symposium
November 4-6, 1992 * South Shore Harbour Resort, Clear Lake, Texas (near Houston and NASA/JSC)

Name: ________________________________________________________
Organization:_________________________________________________
Street Address:_______________________________________________
______________________________________________________________
City:_______________________ State:_____ Zip:________________
Country:____________________ Phone:__________________________
Member #:___________________ Fax:____________________________
Email:________________________________________________________

Early Registration (through October 9, 1992)
___ $315 Author/Member*   ___ $50 Student   ___ $395 Non-member
Late Registration (after October 9, 1992)
___ $375 Author/Member*   ___ $50 Student   ___ $460 Non-member

Note: Each presentation requires payment of full registration fee by an author.
* Member rates apply to members of SCS, IEEE, INNS and NASA.

FNN Symposium: Saturday, Nov 7, 8:00 am - 5:00 pm (Note Separate Fee)
Fuzzy Logic (Zadeh) and Neural Networks (Padgett)

Professional Development Seminars, Saturday, Nov. 7 (8-5)
A[ ] Computer Performance Evaluation, M. Obaidat
B[ ] Essentials of Project Management, R. Meermans

Method of Payment: (No Cash Accepted)
___ VISA ___ Mastercard ___ American Express
Card Number:________________________ Exp. Date_______________
Authorizing Signature:________________________________________
___ Check ___ Company Purchase Order ___ Gov't DD Form 1556

Advance Registration Fees are transferable. Student rate applies to FULL TIME STUDENTS only. A faculty signature is required to certify the student's enrollment status.
__________________________________________________
Faculty Member Signature

Conference Registration Fee (SimTec & WNN92/Houston)
(includes Reception, Proceedings, coffee breaks; for Full Registrants only)   $__________
New Membership ($60.00)                                                       $__________
FNN Symposium or PDS Registration (per Seminar, Sat. Nov. 7)
[ ] Early $185  [ ] Late $275                                                 $__________
PDS: A[ ]  PDS: B[ ]  FNN: [ ]
Additional Copies of Proceedings @ $75 each (Regular Rates are $100)          $__________
TOTAL AMOUNT REMITTED                                                         $__________

Please send the Conference Registration Form along with your registration fee to: 1992 International Simulation Technology Conference, c/o SCS, 4838 Ronson Ct., Suite L, San Diego, CA 92111. Phone: (619) 277-3888; FAX (619) 277-3930. Checks must be drawn on U.S. banks and in U.S. dollars. Make check payable to "SCS" (reference 1992 SimTec).

CONTACT SCS OFFICE for the complete Preliminary Program containing papers for ALL SimTec tracks (Aerospace, Emerging Technologies, Simulation Applications and Life Sciences). Travel arrangement information is included in this program. Directions from Hobby Airport are given.

MAKE HOTEL ARRANGEMENTS SEPARATELY. Contact South Shore Harbour Resort, 2500 South Shore Blvd., League City, TX 77573, Phone: (713) 334-1000 or (800) 442-5005; FAX: (713) 334-1157. Conference rates (before October 1, 1992) are $90 Single or Double.

Program Committee for SimTec92: General Chair: Tony Sava, IBM Corp.; Associate General Chair: Robin Kirkham, Lockheed/ESC; Program Chair: Troy Henson, IBM Corp.; Technical Editor and SCS Associate VP for SimTec: Mary Lou Padgett, Auburn University; Local Arrangements and Associate Technical Editor: Ankur Hajare, Mitre Corp.; Exhibits Chair: Wade Webster, Lockheed/ESC; NASA Representatives: Robert Savely, NASA/JSC; Joseph Mica, NASA/GSFC; ESA Representative: Juan Miro, ESA

************************************************************************
************************************************************************

CALL FOR PAPERS
SimTec93/WNN93/FNN93 in SAN FRANCISCO
November 7-10, 1993
San Francisco Airport Marriott

SimTec 93: 1993 International Simulation Technology Conference
Emerging Technologies * Simulation Applications * Aerospace
Visualization, Circuit Simulation, Intelligent Programming Tools and Techniques, Multimedia in Simulation, Pattern Recognition, Controls, Microelectronics, Life Sciences, Management, Supercomputing, Parallel & Distributed Processing, Simulation Facilities, Training, Command & Control

Paper competitions in Academic, Industry and Government Categories
Awards Luncheon * Exhibitors Reception * TOUR of NASA/AMES
PANELS, DISCUSSIONS, PROFESSIONAL ACTIVITIES, STANDARDS, SOFTWARE EXCHANGES

JOIN US in BEAUTIFUL SAN FRANCISCO . . . Bring your family . . .
Sponsored by The Society for Computer Simulation, International
Co-sponsored by NASA/JSC, GSFC and LaRC in cooperation with SPIE

WNN93/San Francisco Workshop on Neural Networks and Fuzzy Logic
WNN is an informal conference on neural networks, fuzzy logic and related applications. Artificial networks and life sciences foundations are of interest. The meeting features workshops, standards discussions and paper contests. In addition to SimTec sponsors, the IEEE Neural Networks Council is a participating society and INNS is cooperating.
Organizers: Mary Lou Padgett, Auburn University; Robert Shelton, NASA/JSC; Walter J.
Karplus, UCLA; Bart Kosko, USC; and Paul Werbos, NSF; NASA Representative: Robert Savely, NASA/JSC

FNN93: Fuzzy Neural Networks Symposium with Tutorials and Standards
A collection of presentations on fuzzy logic theory and applications, neural fuzzy control and basic neural networks concepts and applications will be featured on Sunday. Anyone interested in these concepts should benefit from participating. Proposed Neural Networks and Fuzzy Logic Standards will be explained by Padgett. NASA/NETS software executable and examples will be included in this tutorial. Sponsored by SCS, co-sponsored by NASA/JSC, GSFC.

SimTec93 * WNN93 * FNN93 DEADLINES: Abstracts/Draft Papers: May 1, 1993; Camera-Ready: June 30, 1993.

TO SUBMIT AN ABSTRACT, PROPOSE A SESSION OR SUGGEST A TOPIC OF INTEREST, contact: Mary Lou Padgett, SCS Associate VP for SimTec, Auburn University, 1165 Owens Road, Auburn, AL 36830. Phone: (205) 821-2472/3488 Fax: (205) 844-1809 Email: mpadgett at eng.auburn.edu. SCS Office: Phone: (619) 277-3888 Fax: (619) 277-3930.

General Chair: Ted Lambert; Program Chair: Martin Dost; Associate Program Chair: Ralph Huntsinger, UC Chico; NASA Representatives: Robert Savely, NASA/JSC and Joseph Mica, NASA/GSFC; ESA Representative: Juan Miro
Committee Includes: Bill Cameron; Paul Luker, UC Chico; Norman Pobanz, Bechtel; Stuart Schlessinger; A. Martin Wildberger; Tim Cleghorn, NASA/JSC; Robert Lea, NASA/JSC

************************************************************************
************************************************************************

SimTec 93: 1993 International Simulation Technology Conference
WNN93/San Francisco & FNN93 Symposium
November 7-10, 1993
San Francisco Airport Marriott

If you wish to receive further information about SimTec, WNN and FNN, please return (preferably by Email) the form printed below:

NAME:
AFFILIATION:
ADDRESS:
PHONE:
FAX:
EMAIL:

Please send more information on registration ( ) optional tours ( ).
I intend to submit a paper ( ), a tutorial ( ), an abstract only ( ).
I may give a demonstration ( ) or exhibit ( ).

Return to: Mary Lou Padgett, 1165 Owens Rd., Auburn, AL 36830. email: mpadgett at eng.auburn.edu
======= SCS OFFICE: (619) 277-3888 =======

SimTec 93 * WNN93/San Francisco * FNN93
Society for Computer Simulation, International
P.O. Box 17900, San Diego, CA 92177
************************************************************************