From Connectionists-Request at cs.cmu.edu Fri Nov 1 00:05:14 1996
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Fri, 01 Nov 96 00:05:14 EST
Subject: Bi-monthly Reminder
Message-ID: <29745.846824714@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

-- Dave Touretzky & Lisa Saksida

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and tech reports.

- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:

  WRONG WAY: "Can someone please mail me all references to cascade correlation?"

  RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help.
A phone call to the appropriate institution may sometimes be simpler and faster.

- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

  arch.yymm

where yymm stand for the obvious thing. Thus the earliest available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with password your username.
3. 'cd' directly to the following directory:
     /afs/cs/project/connect/connect-archives

The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following url:
  http://www.cs.cmu.edu/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory.

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu). Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z, where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files.
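As a concrete illustration, a retrieval session typically looks like the sketch below (the file name someauthor.sometitle.ps.Z is hypothetical; substitute the name given in an announcement):

  unix> ftp archive.cis.ohio-state.edu
  Name: anonymous
  Password: <your e-mail address>
  ftp> binary
  ftp> cd pub/neuroprose
  ftp> get someauthor.sometitle.ps.Z
  ftp> quit

The "binary" command matters: .Z files are binary data, and an ASCII-mode transfer will corrupt them.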
After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "Paper available: Solving Towers of Hanoi with ART-4".

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

  FTP-host: archive.cis.ohio-state.edu
  FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems, will also be posted as debugged and available. At any time, for any reason, the author may request that their paper be updated or removed.

For further questions contact:

  Jordan Pollack, Associate Professor
  Computer Science Department
  Center for Complex Systems
  Brandeis University
  Waltham, MA 02254
  Phone: (617) 736-2713/* to fax
  email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry:

myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read

Let me know when it is in place so I can announce it to Connectionists at cmu.
^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

unix> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/myname.title.ps.Z

The file myname.title.ps.Z is now available for copying from the Neuroprose repository:

Random Paper (12 pages)
Somebody Somewhere
Cornell University

ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem.

~r .signature
^D

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".

From bruno at redwood.ucdavis.edu Fri Nov 1 22:52:40 1996
From: bruno at redwood.ucdavis.edu (Bruno A. Olshausen)
Date: Fri, 1 Nov 1996 19:52:40 -0800
Subject: NIPS96 workshop on natural images
Message-ID: <199611020352.TAA09325@redwood.ucdavis.edu>

NIPS96 workshop announcement:
-----------------------------

The structure of natural images and efficient image coding

Organized by Dan Ruderman and Bruno Olshausen

This one-day workshop will cover recent work on natural image statistics and their relation to visual system design. The discovery of scaling in natural images (Burton and Moorhead 1987, Field 1987, Tolhurst et al 1992, Ruderman and Bialek 1994, Dong and Atick 1995, van der Schaaf and van Hateren 1996) has led to much interest in their statistical structure as well as the constraints these statistics place on efficient coding in the visual system. The work of Atick and Redlich (1990) in particular has demonstrated how the statistics of images may be combined with the efficiency principles of Attneave and Barlow to make quantitative predictions about the properties of ganglion cell receptive fields. Many researchers since have followed suit with other optimization strategies, such as sparse coding (Olshausen and Field 1996, Fyfe and Baddeley 1995) and information maximization (Bell and Sejnowski 1996), in an attempt to relate the response properties of cortical cells to the statistics of natural images in terms of efficient coding. This forum will offer the first open informal discussion of this rapidly-evolving approach toward understanding the visual system.

Participants:

Roland Baddeley, Oxford University
Tony Bell, Salk Institute
Dawei Dong, Caltech
David Field, Cornell University
Jack Gallant, U.C. Berkeley
Hans van Hateren, University of Groningen
David Mumford, Harvard University
Penio Penev, Rockefeller University
Pam Reinagel, Caltech
Harel Shouval, Brown University
Michael Webster, University of Nevada, Reno
Tony Zador, Salk Institute

See the web site, http://redwood.ucdavis.edu/NIPS96/abstracts.html, for abstracts and further information.
From fdoyle at ecn.purdue.edu Sat Nov 2 07:41:31 1996
From: fdoyle at ecn.purdue.edu (Frank Doyle)
Date: Sat, 2 Nov 1996 07:41:31 -0500 (EST)
Subject: Postdoctoral Position
Message-ID: <199611021241.HAA19362@volterra.ecn.purdue.edu>

POSTDOCTORAL POSITION (Biosystems Analysis and Control)
School of Chemical Engineering, Purdue University

This position is part of an ONR- and NSF-funded project to study the baroreflex for possible applications to process control systems. In addition, the project involves strong collaborations with the DuPont company, and extended interactions with the DuPont group are an intended part of this project. The project involves experimental and computational studies aimed at understanding local modulation of cardiac behavior for applications to locally intelligent control systems design. Specific sub-projects include: (i) second messenger modeling of the mechanisms responsible for modulation in a ganglion cell; (ii) computational modeling of the local and central reflex interactions; (iii) abstraction of control principles from the reflex model for chemical process control applications; and (iv) [possibly] experimental studies to validate the local model.

The ideal applicant will be able to contribute to the cellular modeling from both a computational and (possibly) an experimental perspective. Familiarity with control systems engineering is also desirable. More information about the laboratory can be found at the URL http://volterra.ecn.purdue.edu/~fdoyle/neuro.html

The position is available immediately and will last a minimum of one year, with the possibility of extension for another year. Applicants should have a PhD in a relevant discipline and solid experience in biochemical engineering and control systems. This work will also involve some amount of cellular neurophysiology. While a strong background in experimental neurophysiology is not a requisite, the candidate should be willing to become acquainted with these techniques. Purdue University offers competitive salary and benefits.

Applicants should send a CV, a statement of their professional interests (not longer than 1 page), and the names, addresses and telephone numbers of at least two references to Frank Doyle either via email (fdoyle at ecn.purdue.edu) or via surface mail at the following address:

Frank Doyle
School of Chemical Engineering
Purdue University
West Lafayette IN 47907-1283

Purdue University is an equal opportunity, affirmative action educator and employer.

From FRYRL at f1groups.fsd.jhuapl.edu Mon Nov 4 09:37:00 1996
From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.)
Date: Mon, 04 Nov 96 09:37:00 EST
Subject: New paper available
Message-ID: <327D7EE5@fsdsmtpgw.fsd.jhuapl.edu>

A paper entitled "Neuromechanics" has been placed in the Neuroprose neural network paper storage site. It was presented as an invited paper at the 1996 International Conference on Neural Information Processing, Hong Kong, in September.

Title: Neuromechanics

Abstract: Elements of classical, statistical, and even quantum mechanics can be found in the described neural model through analogous constructs of position, momentum, Gibbs distributions, partition functions, and perhaps most importantly, observability. Such analogies suggest that the subject model represents a type of neural mechanics that is distinguished from other mechanical formulations of physics in two important regards.
First, physical constants are not constant, but rather represent Lagrange factors that vary over time in response to learning. Secondly, neural systems attempt to optimize the very information-theoretic objective functions upon which their structure is founded. This paper provides an overview of an approach to neural modeling and understanding and highlights correspondences between this model, mechanical formulations of physics, and computational neurophysiology.

Retrieval: Perform an anonymous logon to archive.cis.ohio-state.edu. The paper is located in the directory /pub/neuroprose with the file name "fry.neurmech.ps.Z". The paper is encapsulated postscript in compressed format; it can be uncompressed using the UNIX "uncompress" utility. No hardcopies are available.

R. Fry

Robert L. Fry
Johns Hopkins Road
Laurel, MD 20723-6099
robert_fry at jhuapl.edu

From ojensen at cajal.ccs.brandeis.edu Mon Nov 4 18:58:00 1996
From: ojensen at cajal.ccs.brandeis.edu (Ole Jensen - Lisman Lab)
Date: Mon, 04 Nov 1996 18:58:00 -0500 (EST)
Subject: Physiologically Realistic Memory Network
Message-ID: <9611042358.AA04582@cajal.ccs.brandeis.edu>

Physiologically Realistic Memory Network
========================================

The following 4 papers have appeared together as a series in the journal Learning and Memory (1996) 3: 243-287. They can be downloaded in postscript format from http://eliza.cc.brandeis.edu/people/ojensen/

We have attempted to construct a physiologically realistic memory model based on nested theta/gamma oscillations. The network model can explain important aspects of data from human memory psychology (Lisman and Idiart, Science 267:1512-15) and place cell recordings (PAPER 4).

Ole Jensen (ojensen at cajal.ccs.brandeis.edu)
Volen Center for Complex Systems
Brandeis University
Waltham MA 02254

Come see our poster at the Neuroscience meeting, Nov 19, Tuesday 1:00 PM, X-$, 549:14.

PAPER 1:
--------

Physiologically Realistic Formation of Autoassociative Memory in Networks with Theta/Gamma Oscillations: Role of Fast NMDA Channels. Learning and Memory (1996) 3:243-256.

Ole Jensen, Marco A. P. Idiart, and John E. Lisman

Recordings from brain regions involved in memory function show dual oscillations in which each cycle of a low frequency theta oscillation (5-8Hz) is subdivided into about 7 subcycles by high frequency gamma oscillations (20-60Hz). It has been proposed (Lisman and Idiart 1995) that such networks are a multiplexed short-term memory (STM) buffer that can actively maintain about 7 memories, a capability of human STM. A memory is encoded by a subset of principal neurons that fire synchronously in a particular gamma subcycle. Firing is maintained by a membrane process intrinsic to each cell. We now extend this model by incorporating recurrent connections with modifiable synapses to store long-term memory (LTM). The repetition provided by STM gradually modifies synapses in a physiologically realistic way. Because different memories are active in different gamma subcycles, the formation of autoassociative LTM requires that synaptic modification depend on NMDA channels having a time-constant of deactivation that is of the same order as the duration of a gamma subcycle (15-50 msec). Many types of NMDA channels have longer time-constants (150 msec), as for instance those found in the hippocampus, but both fast and slow NMDA channels are present in cortex. This is the first proposal for the special role of these fast NMDA channels.
The STM for novel items must depend on activity-dependent changes intrinsic to neurons rather than recurrent connections, which have not developed the required selectivity. Because these intrinsic mechanisms are not error correcting, STM will become slowly corrupted by noise. This limits the accuracy with which LTM can become encoded after a single presentation. Accurate encoding of items in LTM can be achieved by multiple presentations, provided different memory items are presented in a varied interleaved order. Our results indicate that a limited memory capacity STM model can be integrated in the same network with a high capacity LTM model.

PAPER 2:
--------

Novel Lists of 7+/-2 Known Items Can Be Reliably Stored in an Oscillatory Short-Term Memory Network: Interaction with Long-Term Memory. Learning and Memory (1996) 3:257-263.

Ole Jensen and John E. Lisman

This paper proposes a model for the short-term memory (STM) of unique lists of known items, as for instance a phone number. We show that the ability to accurately store such lists in STM depends strongly on interaction with the pre-existing long-term memory (LTM) for individual items (e.g. digits). We have examined this interaction in computer simulations of a network based on physiologically realistic membrane conductances, synaptic plasticity processes and brain oscillations. In the model, seven short-term memories can be kept active, each in a different gamma-frequency subcycle of a theta frequency oscillation. Each STM is maintained and timed by an activity-dependent ramping process. LTM is stored by the strength of synapses in recurrent collaterals. The presence of pre-existing LTM for an item greatly enhances the ability of the network to store an item in STM. Without LTM, the precise timing required to keep cells firing within a given gamma subcycle cannot be maintained and STM is gradually degraded. With LTM, timing errors can be corrected and the accuracy and order of items is maintained. This attractor property of STM storage is remarkable because it occurs even though there is no LTM that identifies which items are on the list or their order. Multiple known items can be stored in STM, even though their representation is overlapping. However, multiple identical memories cannot be stored in STM, consistent with the psychophysical demonstration of repetition blindness. Our results indicate that meaningful computation (memory completion) can occur in the millisecond range during an individual gamma cycle.

PAPER 3:
--------

Theta/Gamma Networks with Slow NMDA Channels Learn Sequences and Encode Episodic Memory: Role of NMDA Channels in Recall. Learning and Memory (1996) 3:264-278.

Ole Jensen and John E. Lisman

This paper examines the role of slow NMDA channels (deactivation about 150 msec) in networks that multiplex different memories in different gamma subcycles of a low frequency theta oscillation. The NMDA channels are in the synapses of recurrent collaterals and govern synaptic modification in accord with known physiological properties. Because slow NMDA channels have a time-constant that spans several gamma cycles, synaptic connections will form between cells that represent different memories. This enables brain structures that have slow NMDA channels to store heteroassociative sequence information in long-term memory (LTM). Recall of this stored sequence information can be initiated by presentation of initial elements of the sequence. The remaining sequence is then recalled at a rate of 1 memory every gamma cycle.
A new role for the NMDA channel suggested by our finding is that recall at gamma frequency works well if slow NMDA channels provide the dominant component of the EPSP at the synapse of recurrent collaterals: the slow onset of these channels and their long duration allows the firing of one memory during one gamma cycle to trigger the next memory during the subsequent gamma cycle. An interesting feature of the readout mechanism is that the activation of a given memory is due to cumulative input from multiple previous memories in the stored sequence, not just the previous one. The network thus stores sequence information in a doubly redundant way: activation of a memory depends on the strength of synaptic inputs from multiple cells of multiple previous memories. The cumulative property of sequence storage has support from the psychophysical literature. Cumulative learning also provides a solution to the disambiguation problem that occurs when different sequences have a region of overlap. In a final set of simulations, we show how coupling an autoassociative network to a heteroassociative network allows the storage of episodic memories (a unique sequence of briefly occurring known items). The autoassociative network (cortex) captures the sequence in short-term memory (STM) and provides the accurate, time-compressed repetition required to drive synaptic modification in the heteroassociative network (hippocampus). This is the first mechanistically detailed model showing how known brain properties, including network oscillations, recurrent collaterals, AMPA channels, NMDA channel subtypes, the ADP, and the AHP can act together to accomplish memory storage and recall.

PAPER 4:
--------

Hippocampal CA3 Region Predicts Memory Sequences: Accounting for the Phase Precession of Place Cells. Learning and Memory (1996) 3:279-287.

Ole Jensen and John E. Lisman

Hippocampal recordings show that different place cells fire at different phases during the same theta oscillation, probably at the peak of different gamma cycles. As the rat moves through the place field of a given cell, the phase of firing during the theta cycle advances progressively (O'Keefe and Recce 1993; Skaggs et al. 1996). In this paper we have sought to determine whether a recently developed model of hippocampal and cortical memory function can explain this phase advance and other properties of place cells. According to this physiologically based model, the CA3 network stores information about the sequence of places traversed during learning. Here we show that the phase advance can be understood if it is assumed that the hippocampus is in a recall mode that operates when the animal is already familiar with a path. In this mode, sensory information about the current position triggers recall of the upcoming 5-6 places (memories) in the path at a rate of one memory per gamma cycle. The model predicts that the average phase advance will be one gamma cycle per theta cycle, a value in reasonable agreement with the data. The model also correctly accounts for 1) the fact that the firing of a place cell occurs during ~7 theta cycles (on average) as the animal crosses the place field; 2) the observation that the phase of place cell firing depends more systematically on position than on time; and 3) the fact that traversal of an already familiar path produces further modifications (shifts the firing of a cell to an earlier position in the path).
This latter finding suggests that recall of previously stored information strengthens the memory of that information. In the model, this occurs because of a novel role of NMDA channels in recall. The general success of the model provides support for the idea that the hippocampus stores sequence information and makes predictions of expected positions during gamma-frequency recall.

From lbl at nagoya.bmc.riken.go.jp Mon Nov 4 21:27:12 1996
From: lbl at nagoya.bmc.riken.go.jp (Bao-Liang Lu)
Date: Tue, 5 Nov 1996 11:27:12 +0900
Subject: Paper available: Parallel and modular Multi-sieving Neural Net
Message-ID: <9611050227.AA10631@xian>

The following paper, which was published in Proc. of the 1996 IEEE International Conference on Systems, Man, and Cybernetics, Beijing, China, Oct. 14-17, is available via FTP.

FTP-host: ftp.bmc.riken.go.jp
FTP-file: /pub/publish/Lu/lu-ieee-smc96.ps.Z

==========================================================================

TITLE: A Parallel and Modular Multi-Sieving Neural Network Architecture with Multiple Control Networks

AUTHORS: Bao-Liang Lu (1), Koji Ito (2)

ORGANISATIONS: (1) The Institute of Physical and Chemical Research (2) Tokyo Institute of Technology

ABSTRACT: In our previous work we proposed a constructive learning method, called multi-sieving learning, for implementing automatic decomposition of learning tasks, together with a parallel and modular multi-sieving network architecture. In this paper we present a new parallel and modular multi-sieving neural network architecture into which multiple control networks are introduced. In this architecture the learning task for a control network is decomposed into a finite set of manageable subtasks, and each of the subtasks is learned by an individual control sub-network. An important advantage of this architecture is that the learning tasks for control networks can be learned efficiently, and therefore automatic decomposition of complex learning tasks can be achieved easily.

(6 pages. No hard copies available.)

Bao-Liang Lu
---------------------------------------------
Bio-Mimetic Control Research Center,
The Institute of Physical and Chemical Research (RIKEN)
3-8-31 Rokuban, Atsuta-ku, Nagoya 456, Japan
Phone: +81-52-654-9137
Fax: +81-52-654-9138
Email: lbl at nagoya.bmc.riken.go.jp

From payman at u.washington.edu Mon Nov 4 23:14:08 1996
From: payman at u.washington.edu (Payman Arabshahi)
Date: Mon, 4 Nov 1996 20:14:08 -0800 (PST)
Subject: CIFEr'97 deadline extension
Message-ID: <199611050414.UAA22115@saul4.u.washington.edu>

!!!! Deadline for submission of summaries has been extended to December 2 !!!!
IEEE/IAFE 1997 CIFEr

Visit us on the web at http://www.ieee.org/nnc/cifer97

------------------------------------
Call for Papers
------------------------------------

Conference on Computational Intelligence for Financial Engineering (CIFEr)
Crowne Plaza Manhattan, New York City
March 23-25, 1997

Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers

The IEEE/IAFE CIFEr Conference is the third annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management.

------------------------------------
Conference Topics
------------------------------------

Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following:

Financial Engineering Applications:
* Risk Management
* Pricing of Structured Securities
* Asset Allocation
* Trading Systems
* Forecasting
* Hedging Strategies
* Risk Arbitrage
* Exotic Options

Computer & Engineering Applications & Models:
* Neural Networks
* Probabilistic Modeling/Inference
* Fuzzy Systems and Rough Sets
* Genetic and Dynamic Optimization
* Intelligent Trading Agents
* Trading Room Simulation
* Time Series Analysis
* Non-linear Dynamics

------------------------------------------------------------------------------
Instructions for Authors, Special Sessions, Tutorials, & Exhibits
------------------------------------------------------------------------------

All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at Meeting Management by December 2, 1996. Our intention is to publish a book with the best selection of papers accepted.

Authors (For Conference Oral Sessions)

One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Meeting Management by December 2, 1996. Centered at the top of the first page should be the paper's complete title, author name(s), affiliation(s), and mailing address(es). Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously, and is not under consideration for publication elsewhere. In the letter accompanying the submission, the following information should be included:

* Topic(s)
* Full title of paper
* Corresponding author's name
* Mailing address
* Telephone and fax
* E-mail (if available)
* Presenter (if different from corresponding author, please provide name, mailing address, etc.)

Authors will be notified of acceptance of the Extended Summary by January 10, 1997. Complete papers (not exceeding seven pages of 8.5 inch by 11 inch size) will be due by February 14, 1997, and will be published in the conference proceedings.

----------------------------------------------------------------------------
Special Sessions

A limited number of special sessions will address subjects within the topical scope of the conference.
Each special session will consist of from four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include:

* Topic(s)
* Title of Special Session
* Name, address, phone, fax, and email of the Session Organizer
* List of paper titles with authors' names and addresses
* One page of summaries of all papers

Notification of acceptance of special session proposals will be on January 10, 1997. If a proposal for a special session is accepted, the authors will be required to submit a camera-ready copy of their paper for the conference proceedings by February 14, 1997.

----------------------------------------------------------------------------
Panel Proposals

Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities. Notification of acceptance of panel session proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Tutorial Proposals

Proposals for tutorials addressing subjects within the topical scope of the conference will be considered. Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered. Notification of acceptance of tutorial proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Exhibit Information

Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr secretariat, Barbara Klemm.

----------------------------------------------------------------------------
Contact Information

More information on registration and the program will be provided as soon as it becomes available. For further details, please contact:

Barbara Klemm
CIFEr'97 Secretariat
Meeting Management
IEEE/IAFE Computational Intelligence for Financial Engineering
2603 Main Street, Suite # 690
Irvine, California 92714

Tel: (714) 752-8205 or (800) 321-6338
Fax: (714) 752-7444
Email: Meetingmgt at aol.com
Web: http://www.ieee.org/nnc/cifer97

Sponsors

Sponsorship for CIFEr'97 is being provided by the IAFE (International Association of Financial Engineers) and the IEEE Neural Networks Council. The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and application of computational techniques to finance.

----------------------------------------------------------------------------

Payman Arabshahi
CIFEr'97 Organizational Chair
Tel: (206) 644-8026
Dept. of Electrical Eng./Box 352500
University of Washington
Seattle, WA 98195
Fax: (206) 543-3842
Email: payman at ee.washington.edu

----------------------------------------------------------------------------

From jose at kreizler.rutgers.edu Tue Nov 5 10:29:39 1996
From: jose at kreizler.rutgers.edu (Stephen J. Hanson)
Date: Tue, 5 Nov 1996 10:29:39 -0500
Subject: RUTGERS (Newark Campus) Psychology Department-Two Tenure Track
Message-ID: <199611051529.KAA03937@kreizler.rutgers.edu>

The Department of Psychology of Rutgers University-Newark Campus anticipates making TWO tenure-track appointments in Cognitive Science or Cognitive Psychology at the Assistant Professor level. The Psychology Department is interested in expanding its program in the area of Cognitive Science. There are two focus clusters for the searches. In the first cluster, candidates should have an active research program in one or more of the following areas: memory, learning, attention, action, high-level vision. Particular interest will be given to candidates who combine one or more of the research interests above with mathematical or computational approaches, with special emphasis on connectionist modeling (for example, as related to programs in Cognitive Neuroscience). Candidates in the second cluster should have an active research program in one or more of the following areas: human-computer interaction, cognitive engineering, cognitive modeling, IT systems, learning systems, CSCW, distance learning or multimedia systems. The positions call for candidates with an active research program who are effective teachers at both the graduate and undergraduate levels. Review of applications will begin on February 1, 1997, but applications will continue to be accepted until the positions are filled. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply. Send CV and three letters of recommendation to Professor S. J. Hanson, Chair, Department of Psychology - Cognitive Search, Rutgers University, Newark, NJ 07102. Email enquiries can be made to cogsci at psychology.rutgers.edu

From sontag at control.rutgers.edu Tue Nov 5 11:08:00 1996
From: sontag at control.rutgers.edu (Eduardo Sontag)
Date: Tue, 5 Nov 1996 11:08:00 -0500
Subject: TR available - Controllability of Recurrent Nets
Message-ID: <199611051608.LAA20840@control.rutgers.edu>

COMPLETE CONTROLLABILITY OF CONTINUOUS-TIME RECURRENT NEURAL NETWORKS

Eduardo D. Sontag and Hector J. Sussmann
Rutgers Center for Systems and Control (SYCON)
Department of Mathematics, Rutgers University

This paper presents a characterization of controllability for the class of (continuous-time) recurrent neural networks. These are described by differential equations of the following type:

   x'(t) = S [ Ax(t) + Bu(t) ]

where "S" is a diagonal mapping

   S [a,b,c,...] = (s(a),s(b),s(c),...)

and "s" is a scalar real map called the activation function of the network. Each coordinate of the vector x(t) is a real-valued variable which represents the internal state of a neuron, and each coordinate of u(t) is an external input signal applied at time t. Recurrent networks whose activation s is the identity function s(x)=x are precisely the linear systems studied in control theory. With nonlinear s, one obtains general families of recurrent nets. Controllability means that any state x can be transformed into any other possible state z, by means of a suitable input signal u(t) applied on some time interval.
When s is the identity (the case typical in control theory), controllability can be checked by means of a simple algebraic test due to Kalman (1960). The current paper provides a simple characterization for recurrent networks when s(x) = tanh(x) is the activation typically used in neural network practice. The condition is very different from the one that applies to linear systems.

============================================================================

The paper is available starting from Eduardo Sontag's WWW HomePage at URL:

   http://www.math.rutgers.edu/~sontag/

(follow link to "online papers"). Many other related papers can also be found at this site. If Web access is inconvenient, it is also possible to use anonymous FTP:

   ftp math.rutgers.edu
   login: anonymous
   cd pub/sontag
   bin
   get reach-sigmoid.ps.gz

Once the file is retrieved, use gunzip to uncompress and then print as postscript.

============================================================================

Comments welcome.

From c.k.i.williams at aston.ac.uk Wed Nov 6 13:37:18 1996
From: c.k.i.williams at aston.ac.uk (Chris Williams)
Date: Wed, 06 Nov 1996 18:37:18 +0000
Subject: NIPS*96 post-conference workshop on Model Complexity
Message-ID: <3734.199611061837@sun.aston.ac.uk>

Note the call for short presentations near the bottom of this message.

NIPS*96 Post-conference Workshop
MODEL COMPLEXITY
Snowmass (Aspen), Colorado USA
Friday Dec 6th, 1996

ORGANIZERS:
Chris Williams (Aston University, UK, c.k.i.williams at aston.ac.uk)
Joachim Utans (London Business School, UK, J.Utans at lbs.lon.ac.uk)

OVERVIEW:

One of the most important difficulties in using neural networks for real-world problems is the issue of model complexity, and how it affects the generalization performance. One approach states that model complexity should be tailored to the amount of training data available, e.g. by using architectures with small numbers of adaptable parameters, or by penalizing the fit of larger models (e.g. AIC, BIC, Structural Risk Minimization, GPE). Alternatively, computationally expensive numerical estimates of the generalization performance (cross-validation (CV), Bootstrap, and related methods) can be used to compare and select models (for example Moody and Utans, 1994). Methods based on regularization restrict model complexity by reducing the "effective" number of parameters (Moody 1992). On the other hand, Bayesian methods see no need to limit model complexity, as overfitting is obviated by marginalization, where predictions are made by averaging over the posterior weight distribution. As Neal (1995) has argued, there may be no reason to believe that neural network models for real-world problems should be limited to nets containing only a "small" number of hidden units. In the limit of an infinite number of hidden units neural networks become Gaussian processes, and hence are closely related to the splines approach (Wahba, 1990). Another important aspect of model building is the selection of a subset of relevant input variables to include in the model: for instance, in a regression context, the subset of independent variables, or lagged values for a time series problem. The aim of this workshop is to present the different ideas on these topics, and to provide guidance to those confronted with the problem of model complexity on real-world problems.
SPEAKERS:

Leo Breiman (University of California Berkeley)
Federico Girosi (MIT)
Trevor Hastie (Stanford)
Michael Kearns (AT&T Laboratories Research)
John Moody (Oregon Graduate Institute)
Grace Wahba (University of Wisconsin at Madison)
Hal White (University of California San Diego)
Huaiyu Zhu (Santa Fe Institute)

WORKSHOP FORMAT:

Of the 6 hours scheduled, about 4 will be taken up with presentations from the speakers listed above. We are very keen to make sure that there is time for discussion of the points raised. However, we also want to provide an opportunity for others to make short presentations or raise questions; we are considering making available a limited number of mini-slots of approx. 5-10 minutes (2-3 overheads plus time for a short discussion) for presentations on relevant topics. Because the workshop is scheduled for one day only, and depending on the number of proposals received, we may schedule the short presentations to extend beyond the regular morning session.

CALL FOR PARTICIPATION:

If you would like to make a 5-10 minute presentation, please email the organizers by Thursday 12 December, giving your name, a title for your presentation and a short abstract. We will be finalizing the program in the following week.

WEB PAGE:

The workshop web page is located at http://www.ncrg.aston.ac.uk/nips96/ It includes abstracts for the invited talks.

From salomon at ifi.unizh.ch Thu Nov 7 08:35:42 1996
From: salomon at ifi.unizh.ch (Ralf Salomon)
Date: Thu, 7 Nov 1996 14:35:42 +0100 (MET)
Subject: Open Post-Doc Position
Message-ID: <"josef.ifi..426:07.10.96.13.35.43"@ifi.unizh.ch>

POSTDOCTORAL RESEARCH FELLOWSHIP
--------------------------------
AI Lab, Department of Computer Science
University of Zurich, Switzerland

The AI Lab at the University of Zurich participates in the VIRGO project (Vision-Based Robot Navigation Research Network), which is sponsored by the TMR research program of the European Union. For this project, we are looking for a highly motivated individual for an 18-month postdoctoral research position. The goal of VIRGO is to coordinate European research and postgraduate training activities that address the development of intelligent robotic systems able to navigate in (partially) unknown and possibly changing environments. For further details, please visit VIRGO's home page http://www.ics.forth.gr/virgo/ .

The ideal candidate will have good programming skills (C/C++) and a strong background in neural networks. Furthermore, working in this interdisciplinary project requires good interpersonal skills and the ability to adopt new perspectives. The main focus of the research will be in the field of insect navigation, mimicking principles of biological systems. The work also involves robot hardware: assembling and repairing robots, sensor and actuator systems, building controllers, etc.

The position is open immediately, but the actual starting date can be negotiated. The salary will be according to local university regulations for postdocs and can be expected to be about SFr 60.000 (approximately USD 50.000) per year. Since the project is sponsored by the European Union, the candidate should be a European citizen.

To apply for this position, send your curriculum vitae including a list of references, a list of publications, and two or three representative publications either by e-mail or surface mail to pfeifer at ifi.unizh.ch or

Prof. Rolf Pfeifer
AI Lab, Department of Computer Science
University of Zurich
Winterthurerstr. 190
8057 Zurich
Switzerland

From finndag at ira.uka.de Thu Nov 7 19:41:46 1996
From: finndag at ira.uka.de (Finn Dag Buoe)
Date: Thu, 07 Nov 1996 19:41:46 -0500
Subject: Ph.D thesis on connectionist natural language processing
Message-ID: <"irafs2.ira.704:07.11.96.18.45.11"@ira.uka.de>

The following doctoral thesis (and 3 of my related papers for COLING96, ECAI96, and ICSLP96) are available at the WWW page:

   http://werner.ira.uka.de/ISL.speech.publications.html

--------------------------------------------------------------------------

FEASPAR - A FEATURE STRUCTURE PARSER LEARNING TO PARSE SPONTANEOUS SPEECH
(120 pages)

Finn Dag Buo
Ph.D. thesis, University of Karlsruhe

Abstract

Traditionally, automatic natural language parsing and translation have been performed with various symbolic approaches. Many of these have the advantage of a highly specific output formalism, allowing fine-grained parse analyses and, therefore, very precise translations. Within the last decade, statistical and connectionist techniques have been proposed to learn the parsing task, in order to avoid the tedious manual modeling of grammar and malformation. How to learn a detailed output representation, and how to learn to parse robustly even ill-formed input, have until now remained open questions. This thesis provides an answer to these questions by presenting a connectionist parser that needs a small corpus and a minimum of hand modeling, that learns, and that is robust towards spontaneous speech and speech recognizer effects. The parser delivers feature structure parses, and has a performance comparable to a good hand-modeled unification-based parser.

The connectionist parser FeasPar consists of several neural networks and a Consistency Checking Search. The number of, architecture of, and other parameters of the neural networks are automatically derived from the training data. The search finds the combination of the neural net outputs that produces the most probable consistent analysis. To demonstrate learnability and robustness, FeasPar is trained with transcribed sentences from the English Spontaneous Scheduling Task and evaluated for network, overall parse, and translation performance, with transcribed and speech data. The latter contains speech recognition errors. FeasPar requires only minor human effort and performs better than or comparably to a good symbolic parser developed with a 2-year human expert effort. A key result is obtained by using speech data to evaluate the JANUS speech-to-speech translation system with different parsers. With FeasPar, acceptable translation performance is 60.5 %, versus 60.8 % with a GLR* parser. FeasPar requires two weeks of human labor to prepare the lexicon and 600 sentences of training data, whereas the GLR* parser required significant human expert grammar modeling.

Presented in this thesis are the Chunk'n'Label Principle, showing how to divide the entire parsing task into several small tasks performed by neural networks, as well as the FeasPar architecture, and various methods for network performance improvement. Further, a knowledge analysis and two methods for improving the overall parsing performance are presented. Several evaluations and comparisons with a GLR* parser, producing exactly the same output formalism, illustrate FeasPar's advantages.
================================================================================
Finn Dag Buo
SAP AG, Germany
finn.buoe at sap-ag.de
================================================================================

From jagota at cse.ucsc.edu Fri Nov 8 21:08:02 1996
From: jagota at cse.ucsc.edu (Arun Jagota)
Date: Fri, 8 Nov 1996 18:08:02 -0800
Subject: NIPS96 optimization workshop
Message-ID: <199611090208.SAA08654@bristlecone.cse.ucsc.edu>

This is an announcement and call for participation. Those interested in the topic and wishing to contribute a half-hour talk (some slots are open) may e-mail me a title and brief abstract. Or stop by at the venue and participate in other ways.

Arun Jagota

Nature Inspired Algorithms for Combinatorial Optimization
NIPS*96 Postconference Workshop
December 7, Saturday, 1996, Snowmass, Colorado
7:30-10:30 AM, 4-7 PM

Organizer: Arun Jagota, jagota at cse.ucsc.edu

The 1980s was a decade of intense activity in the application of nature inspired methods for the approximate solution of difficult combinatorial optimization problems. Many such problems are NP-hard, yet need to be solved (at least approximately) in real-world applications. This workshop will discuss the application of four nature inspired paradigms to combinatorial optimization: neural nets, evolutionary computing, methods rooted in physics, and DNA biocomputing. The workshop will consist of talks and discussions. The talks will present snap-shots of the state-of-the-art in these areas; the discussions will focus on common themes and differences across them.

FORMAT: Eight 30 minute talks; two 60 minute discussions. Some elasticity possible. One of the discussions is intended to focus on conceptual comparisons: common themes, differences, relative strengths and weaknesses. The other will focus on benchmarks to facilitate cross-paradigm comparisons.

SPEAKERS:

Shumeet Baluja (CMU): TBA
Jan van den Berg (Erasmus U, Rotterdam): Physics-Based Neural Optimization Methods
Max Garzon (U of Memphis): The Reliability of DNA based Solutions to Optimization
Arun Jagota (UCSC): Heuristic Primal-Target NN Methods on Some Hypergraph Problems
Juergen Quittek (ICSI): Balancing Graph Mappings by Self-Organization

--------------------------------------------------------------------------

From georgiou at wiley.csusb.edu Sun Nov 10 19:31:54 1996
From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu)
Date: Sun, 10 Nov 1996 16:31:54 -0800
Subject: LCFP: ICCIN'97 Deadline Revision
Message-ID: <199611110031.QAA09683@wiley.csusb.edu>

Please note the revised deadline for submissions: December 6, 1996. It was changed by popular demand so that it is in line with the deadline of the general JCIS'97 Conference.

------------------------------------------------------------------------

Last Call for Papers

2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
http://www.csci.csusb.edu/iccin
Sheraton Imperial Hotel & Convention Center
Research Triangle Park, North Carolina
March 2-5, 1997

Conference Co-chairs:
Subhash C. Kak, Louisiana State University
Jeffrey P. Sutton, Harvard University

This conference is part of the Third Joint Conference on Information Sciences.

Plenary Speakers include the following:

James S. Albus
James Anderson
Roger Brockett
Earl Dowell
David E. Goldberg
Stephen Grossberg
Y. C. Ho
John H. Holland
Zdzislaw Pawlak
Lotfi A. Zadeh
Areas for which papers are sought include:

o Artificial Life
o Artificially Intelligent NNs
o Associative Memory
o Cognitive Science
o Computational Intelligence
o Efficiency/Robustness Comparisons
o Evolutionary Computation for Neural Networks
o Feature Extraction & Pattern Recognition
o Implementations (Electronic, Optical, Biochips)
o Intelligent Control
o Learning and Memory
o Neural Network Architectures
o Neurocognition
o Neurodynamics
o Optimization
o Parallel Computer Applications
o Theory of Evolutionary Computation

Summary Submission Deadline: December 6, 1996
Decision & Notification: January 1, 1997

Send summaries to:

George M. Georgiou
Computer Science Department
California State University
San Bernardino, CA 92407-2397
georgiou at csci.csusb.edu

More information on the Conference Web site: http://www.csci.csusb.edu/iccin

From oby at cs.tu-berlin.de Mon Nov 11 07:26:36 1996
From: oby at cs.tu-berlin.de (Klaus Obermayer)
Date: Mon, 11 Nov 1996 13:26:36 +0100 (MET)
Subject: Visual Cortex Workshop
Message-ID: <199611111226.NAA11116@pollux.cs.tu-berlin.de>

The Graduiertenkolleg "Signalling Chains in Living Systems" (Randolf Menzel, PI) is organizing a visual cortex workshop, which takes place in Berlin, Germany, in December. The theme of the workshop is the interaction between experimentalists and modellers in the area of biological vision. Participation is free, and all interested people are welcome. The workshop is sponsored by the German Science Foundation via the Graduiertenkolleg, by HFSPO and by the FU and TU Berlin.

Klaus Obermayer

******************************************************************
******************************************************************

Workshop on Experiments and Models of Visual Cortex

Friday, 13th of December, 1996
Institute for Computer Science, Free University of Berlin, Takustrasse 9, 14195 Berlin, Germany

------------------------------------------------------------------

PROGRAM:

 9.00  Welcome and Introduction
 9.10  Ulf Eysel, U. Bochum: Lateral signal processing, response specificity and maps in the visual cortex.
10.00  David Somers, MIT: Local and long-range circuit modeling of primary visual cortex.
10.50 - 11.10  Break
11.10  Jack Cowan, U. Chicago: A simple model of orientation tuning and its consequences.
12.00  Bartlett Mel, USC: Translation-invariant orientation tuning in visual `complex' cells could derive from intradendritic computations.
12.50 - 14.30  Lunch
14.30  Rodney Douglas, ETH Zuerich: Computational principles in the microcircuits of the neocortex.
15.20  Nikos Logothetis, Baylor College: On the neural mechanisms of perceptual multistability.
16.10 - 16.30  Break
16.30  Heinrich Buelthoff, MPI biol. Kybernetik: The view-based approach to high level vision.
17.20  Dana Ballard, U. Rochester: The visual cortex as a hierarchical Kalman predictor.
18.10  Adjourn

------------------------------------------------------------------

Abstracts and further information can be obtained via WWW at:
http://www.kon.cs.tu-berlin.de/colloq/symposium-dec96.html

------------------------------------------------------------------
Prof. Klaus Obermayer
FR2-1, KON, Informatik
Technische Universitaet Berlin
Franklinstrasse 28/29
10587 Berlin, Germany
phone: 49-30-314-73442, 49-30-314-73120
fax: 49-30-314-73121
e-mail: oby at cs.tu-berlin.de

From szepes at sol.cc.u-szeged.hu Wed Nov 13 05:53:15 1996
From: szepes at sol.cc.u-szeged.hu (Szepesvari Csaba)
Date: Wed, 13 Nov 1996 11:53:15 +0100 (MET)
Subject: CFP: Theoretical Approaches to Adaptive Intelligent Control
Message-ID:

CALL FOR PAPERS

Those interested in the topic and wishing to contribute a half-hour talk (some slots are open) may e-mail me a title and brief abstract.

Csaba Szepesvari

Theoretical Approaches to Adaptive Intelligent Control
session at the conference
COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
March 2--5, 1997, Research Triangle Park, North Carolina

Organizer: Csaba Szepesvari, szepes at math.u-szeged.hu

The aim of this session is to bring together researchers in Reinforcement Learning (RL), Adaptive Control and NeuroControl (AC, NC) in order to facilitate discussion among the people working in these fields, survey the research done in them, and develop connections between them. Although the frameworks and ultimate goals of these fields are different, the theoretical problems that arise share many common features: stability (AC) = convergence (RL), performance optimization (AC) = the exploration-exploitation dilemma (RL), observability and controllability (AC) = hidden state and perceptual aliasing (RL), etc. Ideally, the contributions should discuss the current problems arising in these fields from a theoretical point of view. In case of sufficient interest, a panel discussion at the end of the session will be included.

Summaries on Adaptive Intelligent Control are solicited for this session. Summaries should be sent to me and will be published in the proceedings. After the conference, speakers may extend their summaries to full papers, which will go through the usual refereeing process. Accepted papers will be published as journal articles. For more information on the conference see the Web site http://www.csci.csusb.edu/iccin.

SUBMISSION DEADLINE: December 6, 1996

Csaba Szepesvari
Session Organizer on Theoretical Approaches to Adaptive Control
Computational Intelligence and Neuroscience
szepes at math.u-szeged.hu
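The RL side of these correspondences fits in a dozen lines. Below is a minimal tabular Q-learning sketch on a toy chain-walk task (an illustration added here, not material from the session): convergence of the Q-table is the RL counterpart of stability, and the eps parameter is the exploration-exploitation dilemma made explicit.

    import random

    def q_learning(n_states, n_actions, step, episodes=300,
                   alpha=0.1, gamma=0.95, eps=0.1):
        """Tabular Q-learning; `step(s, a) -> (next_state, reward, done)`."""
        Q = [[0.0] * n_actions for _ in range(n_states)]
        for _ in range(episodes):
            s, done, t = 0, False, 0
            while not done and t < 1000:       # cap episode length for safety
                t += 1
                # epsilon-greedy with random tie-breaking: explore vs. exploit
                if random.random() < eps or max(Q[s]) == min(Q[s]):
                    a = random.randrange(n_actions)
                else:
                    a = max(range(n_actions), key=lambda i: Q[s][i])
                s2, r, done = step(s, a)
                target = r + (0.0 if done else gamma * max(Q[s2]))
                Q[s][a] += alpha * (target - Q[s][a])  # stochastic approximation
                s = s2
        return Q

    def chain_step(s, a, n=10):
        """Toy fully observable chain: move left/right; reward at the far end."""
        s2 = max(0, min(n - 1, s + (1 if a == 1 else -1)))
        return s2, (1.0 if s2 == n - 1 else 0.0), s2 == n - 1

    Q = q_learning(10, 2, chain_step)
    print([round(max(q), 2) for q in Q])  # values rise toward the goal state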
From marney at ai.mit.edu Sun Nov 10 16:34:12 1996
From: marney at ai.mit.edu (Marney Smyth)
Date: Sun, 10 Nov 1996 16:34:12 -0500 (EST)
Subject: Learning Methods for Prediction, Classification,
Message-ID: <9611102134.AA04438@carpentras.ai.mit.edu>

**************************************************************
***                                                        ***
***   Learning Methods for Prediction, Classification,     ***
***    Novelty Detection and Time Series Analysis          ***
***                                                        ***
***      Los Angeles, CA, December 14-15, 1996             ***
***                                                        ***
***      Geoffrey Hinton, University of Toronto            ***
***      Michael Jordan, Massachusetts Inst. of Tech.      ***
***                                                        ***
**************************************************************

A two-day intensive Tutorial on Advanced Learning Methods will be held on December 14 and 15, 1996, at the Loews Hotel, Santa Monica, CA. Space is available for up to 50 participants for the course.

The course will provide an in-depth discussion of the large collection of new tools that have become available in recent years for developing autonomous learning systems and for aiding in the analysis of complex multivariate data. These tools include neural networks, hidden Markov models, belief networks, decision trees and memory-based methods, as well as increasingly sophisticated combinations of these architectures. Applications include prediction, classification, fault detection, time series analysis, diagnosis, optimization, system identification and control, exploratory data analysis and many other problems in statistics, machine learning and data mining.

The course will be devoted equally to the conceptual foundations of recent developments in machine learning and to the deployment of these tools in applied settings. Case studies will be described to show how learning systems can be developed in real-world settings. Architectures and algorithms will be presented in some detail, but with a minimum of mathematical formalism and with a focus on intuitive understanding. Emphasis will be placed on using machine learning methods as tools that can be combined to solve the problem at hand.

WHO SHOULD ATTEND THIS COURSE?

The course is intended for engineers, data analysts, scientists, managers and others who would like to understand the basic principles underlying learning systems. The focus will be on neural network models and related graphical models such as mixture models, hidden Markov models, Kalman filters and belief networks. No previous exposure to machine learning algorithms is necessary, although a degree in engineering or science (or equivalent experience) is desirable. Those attending can expect to gain an understanding of the current state of the art in machine learning and be in a position to make informed decisions about whether this technology is relevant to specific problems in their area of interest.

COURSE OUTLINE

Overview of learning systems; LMS, perceptrons and support vectors; generalized linear models; multilayer networks; recurrent networks; weight decay, regularization and committees; optimization methods; active learning; applications to prediction, classification and control

Graphical models: Markov random fields and Bayesian belief networks; junction trees and probabilistic message passing; calculating most probable configurations; Boltzmann machines; influence diagrams; structure learning algorithms; applications to diagnosis, density estimation, novelty detection and sensitivity analysis

Clustering; mixture models; mixtures of experts models; the EM algorithm; decision trees; hidden Markov models; variations on hidden Markov models; applications to prediction, classification and time series modeling

Subspace methods; mixtures of principal component modules; factor analysis and its relation to PCA; Kalman filtering; switching mixtures of Kalman filters; tree-structured Kalman filters; applications to novelty detection and system identification

Approximate methods: sampling methods, variational methods; graphical models with sigmoid units and noisy-OR units; factorial HMMs; the Helmholtz machine; computationally efficient upper and lower bounds for graphical models
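To give a feel for one of the building blocks listed above, here is a minimal illustrative EM sketch for a two-component Gaussian mixture in one dimension (a toy added for illustration, not course material; the initialization and iteration count are arbitrary):

    import math

    def em_two_gaussians(xs, iters=50):
        """EM for a mixture of two 1-D Gaussians (weights, means, variances)."""
        mu = [min(xs), max(xs)]        # crude but effective initialization
        var = [1.0, 1.0]
        pi = [0.5, 0.5]
        for _ in range(iters):
            # E-step: posterior responsibility of each component for each point
            resp = []
            for x in xs:
                p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                     / math.sqrt(2 * math.pi * var[k]) for k in (0, 1)]
                z = p[0] + p[1]
                resp.append((p[0] / z, p[1] / z))
            # M-step: re-estimate parameters from weighted sufficient statistics
            for k in (0, 1):
                nk = sum(r[k] for r in resp)
                pi[k] = nk / len(xs)
                mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
                var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                                 for r, x in zip(resp, xs)) / nk, 1e-6)
        return pi, mu, var

    data = [0.1, -0.2, 0.05, 0.3, 4.9, 5.2, 5.1, 4.8]
    print(em_two_gaussians(data))  # the means should land near 0 and 5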
REGISTRATION

Standard Registration: $700
Student Registration: $400

Cancellation Policy: Cancellation before Friday December 6th, 1996, incurs a penalty of $150.00. Cancellation after Friday December 6th, 1996, incurs a penalty of one-half of the Registration Fee.

The Registration Fee includes Course Materials, breakfast, coffee breaks, and lunch on Saturday December 14th.

On-site Registration is possible. Payment of on-site registration must be in US Dollar amounts, by Money Order or Check (preferably drawn on a US Bank account).

Those interested in participating should return the completed Registration Form and Fee as soon as possible, as the total number of places is limited by the size of the venue. Please print this form, and fill in the hard copy to return by mail.

REGISTRATION FORM

Learning Methods for Prediction, Classification,
Novelty Detection and Time Series Analysis

Saturday, December 14 - Sunday, December 15, 1996
Santa Monica, CA, USA.

--------------------------------------

Please complete this form (type or print)

Name ___________________________________________________
     Last                First               Middle

Firm or Institution ______________________________________

Standard Registration ____   Student Registration ____

Mailing Address (for receipt) _________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
Country             Phone                 FAX
__________________________________________________________
email address

(Lunch Menu, Saturday December 14th - tick as appropriate):
___ Vegetarian   ___ Non-Vegetarian

Fee payment must be made by MONEY ORDER or PERSONAL CHECK. All amounts are given in US dollar figures. Make the fee payable to Prof. Michael Jordan and mail it, together with this completed Registration Form, to:

Professor Michael Jordan
Dept. of Brain and Cognitive Sciences
M.I.T. E10-034D
77 Massachusetts Avenue
Cambridge, MA 02139 USA

HOTEL ACCOMMODATION

Hotel accommodations are the personal responsibility of each participant. The Tutorial will be held at the Loews Santa Monica Beach Hotel, 1700 Ocean Avenue, Santa Monica, CA 90401, (310) 458-6700, FAX (310) 458-0020, on December 14 and 15, 1996. The hotel has reserved a block of rooms for participants of the course. The special room rates for participants are:

U.S. $170.00 (city view) per night + tax
U.S. $250.00 (full ocean view) per night + tax

Please be aware that these prices do not include State or City taxes. Participants may wish to take advantage of the discounted overnight parking rates of $13.30 (self) and $15.50 (valet).

ADDITIONAL INFORMATION

A registration form is available from the course's WWW page at
http://www.ai.mit.edu/projects/cbcl/web-pis/jordan/course/index.html

Marney Smyth
Phone: 617 258-8928
Fax: 617 258-6779
E-mail: marney at ai.mit.edu

From amari at zoo.riken.go.jp Thu Nov 14 02:08:57 1996
From: amari at zoo.riken.go.jp (Shunichi Amari)
Date: Thu, 14 Nov 1996 16:08:57 +0900
Subject: Neural networks awards
Message-ID: <9611140708.AA22444@zoo.riken.go.jp>

INNS Awards Call For Nominations

The International Neural Network Society has established an Awards Program to recognize INNS members who have made outstanding contributions in the field of Neural Networks. Nominations for candidates are sought in the following categories:

The Hebb, Helmholtz and Gabor Awards. Each year, two awards of $500, among the three categories, will be presented to senior members of INNS for outstanding contributions made in the field of Neural Networks.

Young Investigator Award. Each year, two awards of $250.00 will be presented for significant contributions in the field of Neural Networks to members with no more than five years of postdoctoral experience and under 40 years of age.

The Award Committee should receive nominations, from someone other than the nominee, of no more than two A4 pages in length, outlining the reasons for the award to the nominee, along with a list of at least five important published papers of the nominee.
The nominations must be made by mail, fax or e-mail no later than February 28, 1997. The member who submits the nomination should also provide the Committee with the following information for both the nominee and themselves: name, address, position/title, phone, fax and e-mail address.

Awards will be presented at ICNN'97, June 9-12, 1997, at the Westin Galleria Hotel, Houston, Texas.

Nominations should be sent to:

INNS
c/o Talley Management Group
875 Kings Highway, Suite 200
Woodbury, NJ 08096
Phone: 609/845-9094
Fax: 609/853-0411
E-Mail: headquarters at WCNN.ccmail.CompuServe.Com

------- End of forwarded message -------

From tibs at utstat.toronto.edu Thu Nov 14 09:38:00 1996
From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu)
Date: Thu, 14 Nov 96 09:38 EST
Subject: new tech report
Message-ID:

Classification by pairwise coupling

Trevor Hastie, Stanford University
Robert Tibshirani, University of Toronto

We discuss a strategy for polychotomous classification that involves estimating probabilities for each pair of classes and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure on real and simulated datasets. Classifiers used include linear discriminants, nearest neighbours, and the support vector machine.

Available at
http://utstat.toronto.edu/tibs/research.html
ftp://utstat.toronto.edu/pub/tibs/coupling.ps

Comments welcome!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rob Tibshirani, Dept of Preventive Med & Biostats, and Dept of Statistics
Univ of Toronto, Toronto, Canada M5S 1A8.
Phone: 416-978-4642 (PMB), 416-978-0673 (stats). FAX: 416-978-8299
computer fax 416-978-1525 (please call or email me to inform)
tibs at utstat.toronto.edu
ftp: //utstat.toronto.edu/pub/tibs
http://www.utstat.toronto.edu/~tibs
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
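The coupling step itself is compact enough to sketch. Given pairwise estimates r[i][j], roughly P(class i | class i or j), one can iteratively rescale class probabilities p so that the induced pairwise ratios p_i/(p_i + p_j) match the r's. A minimal illustrative version of the Bradley-Terry-style iteration (a reading of the idea assuming equal pairwise sample sizes; see the paper for the full procedure):

    def couple_pairwise(r, iters=200):
        """Couple pairwise class-probability estimates into one vector p.
        r[i][j] ~ P(class i | class i or j), with r[i][j] + r[j][i] == 1."""
        k = len(r)
        p = [1.0 / k] * k
        for _ in range(iters):
            for i in range(k):
                num = sum(r[i][j] for j in range(k) if j != i)
                den = sum(p[i] / (p[i] + p[j]) for j in range(k) if j != i)
                p[i] *= num / den
            z = sum(p)
            p = [pi / z for pi in p]   # renormalize after each sweep
        return p

    # three classes; class 0 wins both of its pairwise contests
    r = [[0.0, 0.6, 0.8],
         [0.4, 0.0, 0.7],
         [0.2, 0.3, 0.0]]
    print(couple_pairwise(r))  # p[0] comes out largest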
From robtag at dia.unisa.it Thu Nov 14 11:45:50 1996
From: robtag at dia.unisa.it (Tagliaferri Roberto)
Date: Thu, 14 Nov 1996 17:45:50 +0100
Subject: Wirn 97 First Call for Paper
Message-ID: <9611141645.AA08295@udsab.dia.unisa.it>

***************** CALL FOR PAPERS *****************

The 9th Italian Workshop on Neural Nets
WIRN VIETRI-97
May 22-24, 1997
Vietri Sul Mare, Salerno ITALY

**************** FIRST ANNOUNCEMENT *****************

Organizing - Scientific Committee
--------------------------------------------------
B. Apolloni (Univ. Milano)
A. Bertoni (Univ. Milano)
D. D. Caviglia (Univ. Genova)
P. Campadelli (Univ. Milano)
M. Ceccarelli (CNR Napoli)
A. Colla (ELSAG Bailey Genova)
M. Frixione (I.I.A.S.S.)
C. Furlanello (IRST Trento)
G. M. Guazzo (I.I.A.S.S.)
M. Gori (Univ. Firenze)
F. Lauria (Univ. Napoli)
M. Marinaro (Univ. Salerno)
F. Masulli (Univ. Genova)
P. Morasso (Univ. Genova)
G. Orlandi (Univ. Roma)
T. Parisini (Univ. Trieste)
E. Pasero (Politecnico Torino)
A. Petrosino (I.I.A.S.S.)
M. Protasi (Univ. Roma II)
S. Rampone (Univ. Salerno)
R. Serra (Gruppo Ferruzzi Ravenna)
F. Sorbello (Univ. Palermo)
R. Stefanelli (Politecnico Milano)
R. Tagliaferri (Univ. Salerno)
R. Vaccaro (CNR Napoli)

Topics
----------------------------------------------------
Mathematical Models
Architectures and Algorithms
Hardware and Software Design
Hybrid Systems
Pattern Recognition and Signal Processing
Industrial and Commercial Applications
Fuzzy Techniques for Neural Networks

Schedule
-----------------------
Papers Due: January 31, 1997
Replies to Authors: March 31, 1997
Revised Papers Due: May 24, 1997

Sponsors
------------------------------------------------------------------------------
International Institute for Advanced Scientific Studies (IIASS)
Dept. of Fisica Teorica, University of Salerno
Dept. of Informatica e Applicazioni, University of Salerno
Dept. of Scienze dell'Informazione, University of Milano
Istituto per la Ricerca dei Sistemi Informatici Paralleli (IRSIP - CNR)
Societa' Italiana Reti Neuroniche (SIREN)
Istituto Italiano per gli Studi Filosofici, Napoli

The 9th Italian Workshop on Neural Nets (WIRN VIETRI-97) will take place in Vietri Sul Mare, Salerno, ITALY, May 22-24, 1997. The conference will bring together scientists who are studying several topics related to neural networks. The three-day conference, to be held in the I.I.A.S.S., will feature both introductory tutorials and original, refereed papers, to be published by World Scientific Publishing.

Papers should be 6 pages, including title, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, and telephone and FAX numbers, indicating oral or poster presentation. The camera-ready format will be sent with the acceptance letter from the referees. Submit 3 copies and a 1-page abstract (containing keywords, postal and electronic mailing addresses, and telephone and FAX numbers, with no more than 300 words) to the address shown (WIRN 97 c/o IIASS). An electronic copy of the abstract should be sent to the E-mail address below.

During the Workshop the "Premio E.R. Caianiello" will be assigned to the best Ph.D. thesis in the area of Neural Nets and related fields by Italian researchers. The amount is 2.000.000 Italian Lire. Interested researchers (with a Ph.D. degree obtained in 1994, 1995, 1996, or until February 28, 1997) must send 3 copies of a c.v. and of the thesis to "Premio Caianiello" WIRN 97 c/o IIASS before February 28, 1997. It is possible to compete for the prize at most twice.

For more information, contact the Secretary of I.I.A.S.S.:

I.I.A.S.S
Via G. Pellegrino, 19
84019 Vietri Sul Mare (SA) ITALY
Tel. +39 89 761167
Fax +39 89 761189
E-Mail robtag at udsab.dia.unisa.it

or the www pages at the address below:
http://www-dsi.ing.unifi.it/neural

*****************************************************************

From gaudiano at cns.bu.edu Thu Nov 14 17:25:16 1996
From: gaudiano at cns.bu.edu (Paolo Gaudiano)
Date: Thu, 14 Nov 1996 17:25:16 -0500
Subject: Call for papers: Intelligent Robotics
Message-ID: <199611142225.RAA25167@mattapan.bu.edu>

CALL FOR PAPERS

Special Session on Intelligent Robotics
2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
(http://www.csci.csusb.edu/iccin)
Sheraton Imperial Hotel & Convention Center
Research Triangle Park, North Carolina
March 2-5, 1997

SUBMISSION DEADLINE: December 6, 1996.

Organizer: Paolo Gaudiano, Boston University Neurobotics Lab, gaudiano at cns.bu.edu

This session will include a combination of invited and submitted articles in the area of intelligent robotics.
We welcome submissions describing applied work utilizing traditional AI, neural networks, fuzzy logic, reinforcement learning, or any other technique relevant to the main themes of the conference. Preference will be given to papers describing work done on real robotic systems, though simulator results will be acceptable when the potential applicability to real robots is clear.

Submissions need to conform to the format specified for ICCIN'97: summary papers shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by November 15, 1996. After acceptance, a deposit check of $150 must be received by January 31, 1997, to guarantee the publication of the 4-page summary in the Proceedings. The $150 can be deducted from the registration fee later.

Your papers should be RECEIVED at the following address by December 6th, 1996:

Paolo Gaudiano
Boston University
Dept. of Cognitive and Neural Systems
677 Beacon Street
Boston, MA 02215 USA

Alternatively, you may submit your camera-ready paper electronically. Postscript is preferred. If you have a different format (e.g., Word, Frame, ...), please send e-mail ahead of time to gaudiano at cns.bu.edu to see if an electronic submission is possible.

If you have already submitted a relevant paper in response to the original ICCIN call for papers, please notify the coordinator (George Georgiou, georgiou at csci.csusb.edu) that you would like to be considered for this session.

--
Paolo Gaudiano
Dept. of Cognitive & Neural Systems, Boston University
677 Beacon Street, Boston, MA 02215 USA
Phone: 617-353-9482   Fax: 617-353-7755
e-mail: gaudiano at cns.bu.edu   WEB URL: http://cns-web.bu.edu/
Neurobotics Lab Phone: 617-353-1347   WEB URL: http://neurobotics.bu.edu/

From esann at dice.ucl.ac.be Fri Nov 15 07:53:36 1996
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Fri, 15 Nov 1996 14:53:36 +0200
Subject: ESANN'97 final call for papers
Message-ID: <199611151351.OAA05052@ns1.dice.ucl.ac.be>

---------------------------------------------------
|          European Symposium                     |
|     on Artificial Neural Networks               |
|                                                 |
|      Bruges - April 16-17-18, 1997              |
|                                                 |
|  Final announcement and call for papers         |
---------------------------------------------------

Dear colleagues,

This is to remind you that the deadline for the submission of papers to ESANN'97, the European Symposium on Artificial Neural Networks, is November 29, 1996. All information about this conference and the submission of papers is available on the ESANN WWW server:

http://www.dice.ucl.ac.be/neural-nets/esann

or can be sent by e-mail upon request.

If you intend to submit a paper to ESANN'97, and you think that you will have difficulty meeting the deadline exactly, please send a fax to the conference secretariat with the title of the paper, the authors and the abstract (even if not definitive); this will accelerate the processing of your paper after reception, and will give us the possibility to contact you if we do not receive it.

Thank you in advance for your contribution to ESANN'97!

Sincerely yours,
Michel Verleysen

_____________________________
D facto publications - conference services
45 rue Masui
1000 Brussels, Belgium
tel: +32 2 203 43 63
fax: +32 2 203 42 94
esann at dice.ucl.ac.be

Michel Verleysen
Univ. Cath. de Louvain - DICE
3, pl. du Levant
B-1348 Louvain-la-Neuve, Belgium
tel: +32 10 47 25 51
fax: +32 10 47 25 98
verleysen at dice.ucl.ac.be

http://www.dice.ucl.ac.be/neural-nets/esann
_____________________________

From Dimitris.Dracopoulos at ens-lyon.fr Fri Nov 15 09:06:41 1996
From: Dimitris.Dracopoulos at ens-lyon.fr (Dimitris Dracopoulos)
Date: Fri, 15 Nov 1996 15:06:41 +0100 (MET)
Subject: Last CFP: Neural and Evolutionary Algorithms for Intelligent Control
Message-ID: <199611151406.PAA01059@banyuls.ens-lyon.fr>

NEURAL AND EVOLUTIONARY ALGORITHMS FOR INTELLIGENT CONTROL
----------------------------------------------------------
L A S T   C A L L   F O R   P A P E R S

Special Session in: "15th IMACS World Congress 1997 on Scientific Computation, Modelling and Applied Mathematics", August 24-29 1997, Berlin, Germany

Special Session Organizer-Chair: Dimitri C. Dracopoulos
(Ecole Normale Superieure de Lyon, LIP and Brunel University, London)

Scope:
-----
The focus of the session will be on the latest developments in state-of-the-art neurocontrol and evolutionary techniques. Today, many advanced intelligent control applications utilize methods like the above, and papers describing these are most welcome. Theoretical discussions of how these techniques can be proven stable are also highly welcome.

Topics:
------
-Neurocontrollers
 * optimization over time
 * adaptive critic designs
 * brain-like neurocontrollers
-Evolutionary techniques as pure controllers
 * genetic algorithms
 * evolutionary programming
 * genetic programming
-Hybrid methods (neural nets + evolutionary algorithms)
-Theoretical and stability issues for neuro-evolutionary control
-Advanced control applications

Paper Contributions:
--------------------
Each paper will be published in the Proceedings of the IMACS'97 World Congress. The accepted papers will be orally presented (25 minutes each, including 5 min for discussion).

Important dates:
----------------
December 5, 1996: Deadline for receiving papers.
January 10, 1997: Notification of acceptance.
February 1997: Author typing instructions, for camera-ready copies.

Submission guidelines:
---------------------
One hardcopy, 6-page limit, 10pt font, should be sent to the Session Chair:

Professor Dimitri C. Dracopoulos
Laboratoire de l'Informatique du Parallelisme (LIP)
Ecole Normale Superieure de Lyon
46 Allee d'Italie
69364 Lyon - Cedex 07, France.

In the case of multiple authors, the paper should indicate which author is to receive correspondence. The corresponding author is requested to include in the cover letter: complete postal address, e-mail address, phone number, fax number, and a list of keywords (no more than 5).

** Electronic submissions (in postscript format) will be accepted. **

More information (preliminary) on the "15th IMACS World Congress 1997" can be found at: http://www.first.gmd.de/imacs97/. Please note that special discounted registration fees (proceedings but no social program) will be available.

--
Professor Dimitris C. Dracopoulos
Laboratoire de l'Informatique du Parallelisme (LIP)
Ecole Normale Superieure de Lyon
46 Allee d'Italie
69364 Lyon - Cedex 07, France
Telephone: +33 (0) 472728504
Fax: +33 (0) 472728080
E-mail: Dimitris.Dracopoulos at ens-lyon.fr
From akg at enterprise.arl.psu.edu Fri Nov 15 17:14:47 1996
From: akg at enterprise.arl.psu.edu (Amulya K. Garga)
Date: Fri, 15 Nov 1996 17:14:47 -0500 (EST)
Subject: CFP: ICNN'97 Special Session on NN for Monitoring Complex Systems
Message-ID: <199611152214.RAA27310@wisdom.arl.psu.edu>

CALL FOR PAPERS

Special Session on Neural Networks Applications for Monitoring Complex Systems
at the International Conference on Neural Networks 1997
Westin Galleria Hotel, Houston, Texas
9-12 June 1997

SUBMISSION DEADLINE: 30 November 1996.

ORGANIZER: Amulya K. Garga
Applied Research Lab, The Pennsylvania State University
garga at psu.edu

This session will consist of invited and submitted papers in the area of Neural Networks Applications for Monitoring Complex Systems. We welcome submissions describing applied work utilizing neural networks, fuzzy logic, AI, or any other techniques relevant to the main themes of ICNN'97.

SUMMARY: In recent years, extensive neural network research has focused on the problem of monitoring complex systems. Applications include condition-based maintenance for complex machinery (e.g., in which mechanical systems are monitored to determine their current state and predict their remaining useful life), monitoring of industrial processes, and medical applications such as health monitoring and diagnosis. These applications involve the need for pattern recognition, non-linear predictive modeling, representation of imprecise information, and implicit approximate reasoning. A number of researchers have investigated the application of neural networks to these problems. These applications have proven to be particularly challenging because of such factors as poor observability (e.g., low signal-to-noise ratios), the need to identify so-called rare events (e.g., failure events), multiple time-scale phenomena, the need to recognize context-based changes to operating conditions, and the general lack of available training data. The applications provide a showcase for demonstrating the utility of neural networks, as well as unique challenges whose solutions require advances in neural network theory and practice. Realistic solutions require a multi-disciplinary approach involving physical modeling of non-linear systems and of sensor technologies, multi-sensor data fusion, statistical signal processing, and artificial intelligence methods. The solution of these application problems should have international societal impact, including improved safety for mechanical systems, reduced costs, and the potential for improved health care through semi-automated monitoring and diagnosis. To date, this rapidly evolving research has only been reported in specialized conferences, without wide recognition or attention. Research is being performed in many countries, including the U.S., Canada, Japan, Australia, and many European countries. This special session is important and timely, and is intended to provide a forum where the international neural network community as well as the application community can present mutually applicable and beneficial contributions. In addition, the session would provide the international research community with access to data sets for future neural network research.
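A baseline worth keeping in mind for the rare-event detection problem described above is a simple statistical model of normal operation: fit per-channel statistics on healthy data and flag readings that deviate strongly. A generic illustrative sketch (not drawn from any system or paper in the session):

    def fit_normal_model(samples):
        """Per-channel mean and standard deviation from healthy-operation data."""
        n, dims = len(samples), len(samples[0])
        mean = [sum(s[d] for s in samples) / n for d in range(dims)]
        std = [max((sum((s[d] - mean[d]) ** 2 for s in samples) / n) ** 0.5, 1e-9)
               for d in range(dims)]
        return mean, std

    def novelty_score(x, mean, std):
        """Largest per-channel z-score; large values suggest a rare event."""
        return max(abs(x[d] - mean[d]) / std[d] for d in range(len(x)))

    # two sensor channels recorded under healthy conditions
    healthy = [(0.90, 10.1), (1.10, 9.8), (1.00, 10.0), (0.95, 10.2)]
    mean, std = fit_normal_model(healthy)
    print(novelty_score((1.0, 10.1), mean, std))  # small: normal reading
    print(novelty_score((3.0, 10.1), mean, std))  # large: candidate failure event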
PAPER SUBMISSION: Submissions need to conform to the format specified for ICNN'97: Six copies (one original and five copies) of the paper must be submitted. Papers must be camera-ready on 8 1/2 by 11 white paper, one-column format in Times or similar font style, 10 points or larger, with one-inch margins on all four sides. Do not fold or staple the original camera-ready copy. Four pages are encouraged; however, the paper must not exceed six pages, including figures, tables, and references, and should be written in English. Submissions that do not adhere to the guidelines above will be returned unreviewed.

Centered at the top of the first page should be the complete title, author name(s), and postal and electronic mailing addresses. In the accompanying letter, the following information must be included:

Full Title of the Paper
Technical Area (First and Second Choices)
Corresponding Author (Name, Postal and E-Mail Addresses, Tel. & FAX Nos)

Your papers should be RECEIVED at the following address by 30 NOVEMBER 1996:

Ms. Leanne Zindler
Applied Research Laboratory
The Pennsylvania State University
P.O. Box 30, North Atherton Street
State College, PA 16804-0030, USA
Phone: +1 814-863-4344

If you have already submitted a relevant paper in response to the original ICNN'97 call for papers, please notify the ICNN'97 technical program coordinator (Prof. J. M. Keller, keller at ece.missouri.edu) that you would like to be considered for this session.

PAPER REVIEW: The papers will undergo a review process similar to that of the other papers being considered for ICNN'97. Accepted papers will also appear in the proceedings of ICNN'97.

TIMETABLE:
Paper submission: 30 November 1996
Review complete: 2 January 1997
Revised submission: 1 February 1997

ICNN'97 is sponsored by: IEEE Neural Network Council (NNC) and International Neural Network Society (INNS).

URLs:
This CFP: http://wisdom.arl.psu.edu/People/akg/NNMonCS.html
ICNN'97: http://www.eng.auburn.edu/department/ee/ICNN97/icnn.htm
         http://www.mindspring.com/~pci-inc/ICNN97/
         http://wwweng.uwyo.edu/icnn97/
IEEE NNC: http://www.ieee.org/nnc/
INNS: http://cns-web.bu.edu/inns/

--
Amulya K. Garga, Ph.D.       | Email: garga at psu.edu
Applied Research Laboratory  | http://wisdom.arl.psu.edu/People/Amulya_Garga
P.O. Box 30                  | Phone: 814-863-5841  Fax: 814-863-0673
State College, PA 16804-0030 |

From doya at erato.atr.co.jp Fri Nov 15 23:03:29 1996
From: doya at erato.atr.co.jp (Kenji Doya)
Date: Sat, 16 Nov 1996 13:03:29 +0900
Subject: Position available: Kawato Dynamic Brain Project
Message-ID: <199611160403.NAA02154@dorothy.erato.atr.co.jp>

Research Position, Kawato Dynamic Brain Project, JSTC

Postdoctoral research positions will be available in April 1997 in the Kawato Dynamic Brain Project, a part of the Exploratory Research for Advanced Technology (ERATO) program run by the Japan Science and Technology Corporation, a government-sponsored agency. The goal of this five-year project (October 1996 - September 2001) is to understand the brain mechanisms of human cognition and sensory-motor learning to the extent that we can reproduce them as computer programs and robotic systems.

Candidates must have a strong background in mathematical, computational, neurobiological, cognitive and/or robotic sciences, and should have broad interests in one or more of the following research areas:

(1) Computational Neurobiology: models of cerebellum, basal ganglia and cortical motor areas; dynamic computation in visual processing; encoding of hierarchical sequences in the brain (speech and songs); learning of rhythmic and transient motor patterns (stand up to walk).

(2) Computational Psychology: process of visuo-motor coordinate transformation and trajectory planning; objective functions for natural biological motion; functional brain imaging (fMRI, PET, MEG); human motor psychophysical experiments.
(3) Computational Learning: modular function approximation algorithms; reinforcement learning in real time; principles of nonlinear dynamics for perceptual-motor coordination; robot learning from human demonstration; building of a humanoid robot with dextrous arms and oculomotor systems; motor psychophysical experiments which compare robot and human behavior.

Detailed descriptions of the project are available on our Web page:
http://www.erato.atr.co.jp/

The project is currently located at the Advanced Telecommunications Research Institute International (ATR-I), where a considerably multi-national, multi-lingual community has self-organized.

Applicants must have a Ph.D. or equivalent degree. Appointments will be made for one or two years with possible extensions. Salaries are competitive. Please send a CV, a list of publications, copies of up to three major publications, names and addresses of up to three references, and a cover letter describing your research interests to:

Search Committee
Kawato Dynamic Brain Project, JSTC
2-2 Hikaridai, Seika-cho, Soraku-gun
Kyoto 619-02, Japan

In order to receive full consideration, applications must be received by January 15, 1997. The search, however, will continue beyond this date until all positions are filled. Please feel free to inquire about the specifics of the positions by sending mail to:

email: search at erato.atr.co.jp
fax: +81-774-95-3001

For general information about ERATO research positions, see
http://www2.jst-c.go.jp/jst/erato/erato-e/NewPositions/

****************************************************************
Kenji Doya
Computational Neurobiology Group
Kawato Dynamic Brain Project, Japan Science and Technology Corp.
2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan
tel: +81-774-95-1210
fax: +81-774-95-3001
email: doya at erato.atr.co.jp
http://www.erato.atr.co.jp/~doya

From ping at cogsci.richmond.edu Sat Nov 16 13:12:33 1996
From: ping at cogsci.richmond.edu (Ping Li)
Date: Sat, 16 Nov 1996 13:12:33 -0500 (EST)
Subject: Connection Science Vol. 8 (1)
Message-ID: <199611161812.NAA14915@cogsci.richmond.edu.urich.edu>

[The table of contents was sent as an attachment that was not preserved in the archive: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/eed1a4ec/attachment.ksh]

From harnad at cogsci.soton.ac.uk Sat Nov 16 15:25:18 1996
From: harnad at cogsci.soton.ac.uk (Stevan Harnad)
Date: Sat, 16 Nov 96 20:25:18 GMT
Subject: Long-Term Potentiation: BBS Call for Commentators
Message-ID: <6298.9611162025@cogsci.ecs.soton.ac.uk>

Below is the abstract of a forthcoming BBS target article on:

LONG-TERM POTENTIATION: WHAT'S LEARNING GOT TO DO WITH IT?
by Tracey J. Shors & Louis D. Matzel

This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ LONG-TERM POTENTIATION: WHAT'S LEARNING GOT TO DO WITH IT? Tracey J. Shors & Louis D. Matzel Department of Psychology and Program in Neuroscience, Princeton University, Princeton, New Jersey 08544 shors at pucc.princeton.edu Department of Psychology, Program in Biopsychology and Behavioral Neuroscience, Rutgers University, New Brunswick, New Jersey 08903 matzel at rci.rutgers.edu KEYWORDS: NMDA, synaptic plasticity, Hebbian synapses, calcium, hippocampus, theta rhythm, spatial learning, classical conditioning, attention, arousal, memory systems ABSTRACT: Long-term potentiation (LTP) is operationally defined as a long-lasting increase in synaptic efficacy which follows high-frequency stimulation of afferent fibers. Since the first full description of the phenomenon in 1973, exploration of the mechanisms underlying LTP induction has been one of the most active areas of research in neuroscience. Of principal interest to those who study LTP, particularly LTP in the mammalian hippocampus, is its presumed role in the establishment of stable memories, a role consistent with "Hebbian" descriptions of memory formation. Other characteristics of LTP, including its rapid induction, persistence, and correlation with natural brain rhythms, provide circumstantial support for this connection to memory storage. Nonetheless, there is little empirical evidence that directly links LTP to the storage of memories. In this commentary, we review a range of cellular and behavioral characteristics of LTP, and evaluate whether those characteristics are consistent with the purported role of hippocampal LTP in memory formation. We suggest that much of the present focus on LTP reflects a preconception that LTP is a learning mechanism, although the empirical evidence often suggests that LTP is unsuitable for such a role. As an alternative to serving as a memory storage device, we propose that LTP may serve as a neural equivalent to an arousal or attention device in the brain. Accordingly, LTP is suggested to nonspecifically increase the effective salience of discrete external stimuli and thereby is capable of facilitating the induction of memories at distant synapses. 
In an environment open to critical inquiry, other hypotheses regarding the functional utility of this intensely studied mechanism are conceivable; the intent of this article is not exclusively to promote a single hypothesis, but rather to stimulate discussion about the neural mechanisms that are likely to underlie memory storage, and to appraise whether LTP can reasonably be considered a viable candidate for such a mechanism. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.shors). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.shors.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.shors ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.shors gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.shors When you have the file(s) you want, type: quit From harnad at cogsci.soton.ac.uk Sat Nov 16 15:29:06 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sat, 16 Nov 96 20:29:06 GMT Subject: Embodied Cognition: BBS Call for Commentators Message-ID: <6310.9611162029@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming BBS target article on: DEICTIC CODES FOR THE EMBODIMENT OF COGNITION by Dana H. Ballard, Mary M. Hayhoe, Polly K. Pook, & Rajesh P. N. Rao This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ DEICTIC CODES FOR THE EMBODIMENT OF COGNITION Dana H. Ballard, Mary M. Hayhoe, Polly K. Pook, and Rajesh P. N. Rao Computer Science Department University of Rochester Rochester, NY 14627, USA {dana, mary, pook, rao}@cs.rochester.edu KEYWORDS: deictic computations; embodiment; working memory; natural tasks; eye movements; brain computation; binding; sensory-motor tasks; pointers. ABSTRACT: To describe phenomena that occur at different time scales, computational models of the brain must necessarily incorporate different levels of abstraction. We argue that at time scales of approximately one-third of a second, orienting movements of the body play a crucial role in cognition and form a useful computational level. This level is more abstract than that used to capture neural phenomena yet is framed at a level of abstraction below that traditionally used to study high-level cognitive processes such as reasoning. We term this level the embodiment level. At the embodiment level, the constraints of the physical system determine the nature of cognitive operations. The key synergy is that, at time scales of about one-third second, the natural sequentiality of body movements can be matched to the natural computational economies of sequential decision systems. The way this is done is through a system of implicit reference termed deictic, whereby pointing movements are used to bind objects in the world to cognitive programs. The focus of this paper is to study how deictic bindings enable the solution of natural tasks. We show how deictic computation provides a mechanism for representing the essential features that link external sensory data with internal cognitive programs and motor actions. In particular, we argue that one of the central features of cognition, working memory, can be related to moment-by-moment dispositions of body features such as eye movements and hand movements. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.ballard). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. 
Here are some of the URLs you can use to get to the BBS Archive:

http://www.princeton.edu/~harnad/bbs/
http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.ballard.html
ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.ballard
ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.ballard
gopher://gopher.princeton.edu:70/11/.libraries/.pujournals

To retrieve a file by ftp from an Internet site, type either:
ftp ftp.princeton.edu
or
ftp 128.112.128.1

When you are asked for your login, type: anonymous
Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@")
cd /pub/harnad/BBS
To show the available files, type: ls
Next, retrieve the file you want with (for example): get bbs.ballard
When you have the file(s) you want, type: quit

----------

From maruoka at maruoka.ecei.tohoku.ac.jp Mon Nov 18 12:25:19 1996
From: maruoka at maruoka.ecei.tohoku.ac.jp (Akira Maruoka)
Date: Mon, 18 Nov 96 12:25:19 JST
Subject: ALT97 first ANNOUNCEMENT
Message-ID: <9611180325.AA08640@taihei.maruoka.ecei.tohoku.ac.jp>

This CFP was sent to several mailing lists. Please accept my apologies if you receive multiple copies.

Akira Maruoka

----------------------------------------------------------------------
CALL FOR PAPERS---ALT 97
The Eighth International Workshop on Algorithmic Learning Theory
Sendai, Japan
October 6-8, 1997
______________________________________________________________________

The 8th International Workshop on Algorithmic Learning Theory (ALT'97) will be held in Sendai, Japan during October 6-8, 1997. The workshop is sponsored by the Japanese Society for Artificial Intelligence (JSAI) and Tohoku University.

We invite submissions to ALT'97 in all areas related to algorithmic learning theory, including (but not limited to): the design and analysis of learning algorithms, the theory of machine learning, computational logic of/for machine discovery, inductive inference, learning via queries, artificial and biological neural networks, pattern recognition, learning by analogy, Bayesian/MDL/MML estimation, statistical learning, inductive logic programming, and application of learning to databases and biological sequence analysis. In addition to the above theoretical topics, we invite submissions to two special tracks on data mining and case-based learning, aimed at promoting applications of theoretical ideas.

INVITED TALKS. Invited talks will be given by Manuel Blum (UC Berkeley and City Univ. Hong Kong), Wolfgang Maass (Tech. Univ. Graz), Lenny Pitt (Univ. Illinois), and Masahiko Sato (Kyoto Univ.).

SUBMISSIONS. Authors may either e-mail postscript files of their abstracts to mli at cs.cityu.edu.hk, or submit nine copies of their extended abstracts to:

Professor Ming Li - ALT'97
Department of Computer Science
City University of Hong Kong
Tat Chee Avenue
Kowloon, Hong Kong

Abstracts must be received by April 1, 1997. Notification of acceptance or rejection will be (e)mailed to the first (or designated) author by May 19, 1997. Camera-ready copy of accepted papers will be due June 16, 1997.

FORMAT. The submitted abstract should consist of a cover page with title, author names, postal and e-mail addresses, and an approximately 200-word summary, plus a body not longer than ten (10) pages of size A4 or 7x10.5 inches in twelve-point font. You may use appendices to include long but important proofs. If you submit hardcopies, double-sided printing is encouraged.

POLICY.
Each submitted abstract will be reviewed by the members of the program committee and judged on clarity, significance, and originality. Joint submissions to other conferences with published proceedings are not allowed. Papers that have appeared in journals or other conferences are not appropriate for ALT'97.

Proceedings will be published as a volume in the Lecture Notes in Artificial Intelligence, Springer-Verlag, and will be available at the conference. Selected papers of ALT'97 will be invited to a special issue of the journal Theoretical Computer Science. One scholarship of US$500, sponsored by IFIP TC 1.4, will be awarded to a student author (please mark student authors) to help attend ALT'97.

Conference chair:
Professor Akira Maruoka
Tohoku University
Sendai, Japan 980
maruoka at ecei.tohoku.ac.jp

Program committee chair:
Ming Li (City Univ. HK and Univ. Waterloo)

Program Committee:
Naoki Abe (NEC, Japan)
Nader Bshouty (Univ. Calgary, Canada)
Nicolo Cesa-Bianchi (Milano Univ., Italy)
Makoto Haraguchi (Hokkaido Univ., Japan)
Hiroki Ishizaka (Kyushu Tech., Japan)
Klaus P. Jantke (HTWK Leipzig, Germany)
Philip Long (Nat. Univ. Singapore, Singapore)
Shinichi Morishita (IBM Japan, Japan)
Hiroshi Motoda (Osaka Univ., Japan)
Yasubumi Sakakibara (Tokyo Denki Univ., Japan)
Arun Sharma (New South Wales, Australia)
Ayumi Shinohara (Kyushu Univ., Japan)
Carl Smith (Univ. Maryland, USA)
Frank Stephan (RKU, Germany)
Naftali Tishby (Hebrew Univ., Israel)
Paul Vitanyi (CWI, Netherlands)
Les Valiant (Harvard, USA)
Osamu Watanabe (Titech., Japan)
Takashi Yokomori (UEC, Japan)
Bin Yu (UC Berkeley, USA)

Local arrangements chair:
Professor Hirotomo Aso
Graduate School of Engineering
Tohoku University
Sendai, Japan 980
alt97 at maruoka.ecei.tohoku.ac.jp

For more information, contact:
Email: alt97 at maruoka.ecei.tohoku.ac.jp
Homepage: http://www.maruoka.ecei.tohoku.ac.jp/~alt97

From lopez at physik.uni-wuerzburg.de Mon Nov 18 07:30:18 1996
From: lopez at physik.uni-wuerzburg.de (Bernardo Lopez)
Date: Mon, 18 Nov 1996 13:30:18 +0100 (MEZ)
Subject: Paper available: Learning by dilution in a Neural Network
Message-ID: <199611181230.NAA13542@wptx14.physik.uni-wuerzburg.de>

FTP-host: ftp.physik.uni-wuerzburg.de
FTP-filename: /pub/preprint/1996/WUE-ITP-96-028.ps.gz

The following manuscript is now available via anonymous ftp:
(See below for the retrieval procedure)

------------------------------------------------------------------
"Learning by dilution in a Neural Network"
B. Lopez and W. Kinzel
Ref. WUE-ITP-96-028

Abstract

A perceptron with N random weights can store of the order of N patterns by removing a fraction of the weights without changing their strengths. The critical storage capacity as a function of the concentration of the remaining bonds, for random outputs and for outputs given by a teacher perceptron, is calculated. A simple Hebb-like dilution algorithm is presented which, in the teacher case, reaches the optimal generalization ability.
---------------------------------------------------------------------

Retrieval procedure:

unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/preprint/1996
ftp> binary
ftp> get WUE-ITP-96-028.ps.gz  (*)
ftp> quit
unix> gunzip WUE-ITP-96-028.ps.gz
e.g. unix> lp WUE-ITP-96-028.ps  [15 pages]

(*) can be replaced by "get WUE-ITP-96-028.ps". The file will then be uncompressed before transmission (slow!).
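The flavor of a Hebb-like dilution rule is easy to convey in a toy script (an illustration of the general idea only, not the algorithm from the paper): form Hebbian couplings from teacher-labelled examples, then delete the weakest bonds, leaving the strengths of the survivors unchanged.

    import random

    def hebb_then_dilute(examples, keep=0.5):
        """Hebbian couplings J_i = sum_mu y^mu x_i^mu, then remove the
        weakest |J_i| without changing the strengths of the kept bonds."""
        n = len(examples[0][0])
        J = [sum(y * x[i] for x, y in examples) for i in range(n)]
        order = sorted(range(n), key=lambda i: abs(J[i]), reverse=True)
        kept = set(order[:int(keep * n)])
        return [J[i] if i in kept else 0.0 for i in range(n)]

    def output(J, x):
        return 1 if sum(Ji * xi for Ji, xi in zip(J, x)) >= 0 else -1

    # labels generated by a random teacher perceptron
    random.seed(2)
    n, p = 100, 50
    teacher = [random.choice([-1.0, 1.0]) for _ in range(n)]
    examples = []
    for _ in range(p):
        x = [random.choice([-1.0, 1.0]) for _ in range(n)]
        examples.append((x, output(teacher, x)))
    J = hebb_then_dilute(examples, keep=0.5)
    agree = sum(output(J, x) == y for x, y in examples)
    print(f"agreement with teacher labels on the training set: {agree}/{p}")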
_____________________________________________________________________

From fayyad at MICROSOFT.com Sun Nov 17 23:47:27 1996
From: fayyad at MICROSOFT.com (Usama Fayyad)
Date: Sun, 17 Nov 1996 20:47:27 -0800
Subject: Data Mining & knowledge Discovery Journal: contents vol 1:1
Message-ID:

ANNOUNCEMENT and CALL FOR PAPERS

Below are the contents of the first issue of the new journal: Data Mining and Knowledge Discovery, Kluwer Academic Publishers. The journal is accepting submissions of work from a wide variety of fields that relate to data mining and knowledge discovery in databases (KDD). We accept regular research contributions, survey articles, and detailed application papers, as well as short (2-page) application summaries.

The goal is for Data Mining and Knowledge Discovery to become the premier forum for publishing high-quality original work from the wide variety of fields on which KDD draws, including: statistics, pattern recognition, database research and systems, modelling uncertainty and decision making, neural networks, machine learning, OLAP, data warehousing, high-performance and parallel computing, and visualization. The goal is to create a reference resource where researchers and practitioners in the area can look up and communicate relevant work from a wide variety of fields.

The journal's homepage provides a detailed call for papers, a description of the journal and its scope, and a list of the Editorial Board. Abstracts of the articles in the first issue and the editorial are also on-line. The home page is maintained at:
http://www.research.microsoft.com/research/datamine

- If you are interested in submitting a paper, please visit the homepage http://www.research.microsoft.com/research/datamine to look up instructions.
- If you would like a free sample issue sent to you, click on the link in http://www.research.microsoft.com/research/datamine and provide an address via the on-line form.

Usama Fayyad, co-Editor-in-Chief
Data Mining and Knowledge Discovery
(datamine at microsoft.com)

=======================================================================
Data Mining and Knowledge Discovery
http://www.research.microsoft.com/research/datamine

CONTENTS OF: Volume 1, Issue 1
==============================
For more details, abstracts, and an on-line version of the Editorial, see
http://www.research.microsoft.com/research/datamine/vol1-1

===========Volume 1, Number 1, March 1997===========

EDITORIAL by Usama Fayyad

PAPERS
======
Statistical Themes and Lessons for Data Mining
  Clark Glymour, David Madigan, Daryl Pregibon, Padhraic Smyth

Data Cube: A Relational Aggregation Operator Generalizing Group-by, Cross-Tab, and Sub Totals
  Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart, Murali Venkatrao, Frank Pellow (IBM, Toronto), Hamid Pirahesh

On Bias, Variance, 0/1-loss, and the Curse-of-Dimensionality
  Jerome H. Friedman

Bayesian Networks for Data Mining
  David Heckerman

BRIEF APPLICATIONS SUMMARIES:
============================
Advanced Scout: Data Mining and Knowledge Discovery in NBA data
  Ed Colet, Inderpal Bhandari, Jennifer Parker, Zachary Pines, Rajiv Pratap, Krishnakumar Ramanujam

------------------------------------------------------------------------
To get a free sample copy of the above issue, visit the web page at
http://www.research.microsoft.com/research/datamine

Those who do not have web access may send their address to Kluwer by e-mail at: sdelman at wkap.com
From kremer at running.dgcd.doc.ca Mon Nov 18 13:33:17 1996
From: kremer at running.dgcd.doc.ca (Stefan C. Kremer)
Date: Mon, 18 Nov 1996 13:33:17 -0500 (EST)
Subject: NIPS 96 Workshop Announcement: Dynamical Recurrent Networks, Day 2
Message-ID:

NIPS 96 Workshop Announcement:
==============================

Dynamical Recurrent Networks
Post Conference Workshop Day 2
Organized by John Kolen and Stefan Kremer
Saturday, December 7, 1996
Snowmass, Colorado

Introduction:

There has been significant interest in recent years in dynamic recurrent neural networks and their application to control, system identification, signal processing, and time series analysis and prediction. Much of this work is simply an extension of techniques which work well for feedforward networks to recurrent networks. However, when dynamics are added to a system there are many complex issues which are not relevant to the study of feedforward nets, such as the existence of attractors and questions of stability, controllability, and observability. In addition, the architectures and learning algorithms that work well for feedforward systems are not necessarily useful or efficient in recurrent systems.

The first day of the workshop highlights the use of traditional results from systems theory and nonlinear dynamics to analyze the behavior of recurrent networks. The aim of the workshop is to expose recurrent network designers to the traditional frameworks available in these well-established fields. A clearer understanding of the known results and open problems in these fields, as they relate to recurrent networks, will hopefully enable people working with recurrent networks to design more robust systems which can be more efficiently trained. This session will overview known results from systems theory and nonlinear dynamics which are relevant to recurrent networks, discuss their significance in the context of recurrent networks, and highlight open problems. (More information about Day 1 of the workshop can be found at: http://flute.lanl.gov/NIS-7_home_pages/jhowse/talk_abstracts.html).

The second day of the workshop addresses the issues of designing and selecting architectures and algorithms for dynamic recurrent networks. Unlike previous workshops, which have typically focussed on reporting the results of applying specific network architectures to specific problems, this session is intended to assist both users and developers of recurrent networks in selecting appropriate architectures and algorithms for specific tasks. In addition, this session will provide a backward flow of information -- a forum where researchers can listen to the needs of application developers. The wide variety, rapid development and diverse applications of recurrent networks are sure to make for exciting and controversial discussions.
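The stability issues mentioned above are easy to demonstrate numerically. The toy sketch below (a generic illustration, unrelated to any of the talks) iterates a simple recurrent map h <- tanh(W h); when a norm of W is below 1 the map is a contraction, so the state is guaranteed to settle into a unique fixed-point attractor, while larger weights may oscillate or wander:

    import math
    import random

    def iterate_rnn(W, h, steps=200):
        """Iterate the autonomous recurrent map h <- tanh(W h)."""
        for _ in range(steps):
            h = [math.tanh(sum(w * hj for w, hj in zip(row, h))) for row in W]
        return h

    def inf_norm(W):
        """Maximum absolute row sum: a simple sufficient bound for contraction."""
        return max(sum(abs(w) for w in row) for row in W)

    random.seed(0)
    n = 4
    for scale in (0.2, 2.0):
        W = [[random.uniform(-scale, scale) for _ in range(n)] for _ in range(n)]
        h_a = iterate_rnn(W, [random.uniform(-1, 1) for _ in range(n)])
        h_b = iterate_rnn(W, [random.uniform(-1, 1) for _ in range(n)])
        gap = max(abs(a - b) for a, b in zip(h_a, h_b))
        # inf_norm(W) < 1 implies both trajectories reach the same attractor
        print(f"norm bound {inf_norm(W):.2f} -> final state gap {gap:.4f}")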
Presenters have provided 1 or 2 references to electronically accessible papers. At the workshop itself, the presenter will be asked to briefly (10 minutes) discuss highlights, conclusions, controversial issues or open problems of their research. This presentation will be followed by a 20 minute discussion period during which the expression of contrary opinions, related problems and speculation regarding solutions to open problems will be encouraged. The workshop will conclude with a one hour panel discussion. Important Note to People Attending this Workshop: The goal of this workshop is to offer an opportunity for an open discussion of important issues in the area of dynamic networks. To achieve this goal, the presenters have been asked not to give a detailed description of their work but rather to give only a very brief synopsis in order to maximize the available discussion time. Attendees will get the most from this workshop if they are already familiar with the details of the work to be discussed. To make this possible, the presenters have made papers relevant to the discussions available electronically via the links on the workshop's web page. Attendees who are not already familiar with the work of the presenters at the workshop are encouraged to examine the workshop's web page (at: "http://running.dgcd.doc.ca/NIPS96/"), and to retrieve and examine the papers prior to attending the workshop. List of Talk Titles and Speakers: Learning Markovian Models for Sequence Processing Yoshua Bengio, University of Montreal / AT&T Labs - Research Guessing Can Outperform Many Long Time Lag Algorithms Jürgen Schmidhuber, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale. Sepp Hochreiter, Fakultät für Informatik, Technische Universität München. Optimal Learning of Data Structure. Marco Gori, Universita' di Firenze How Embedded Memory in Recurrent Neural Network Architectures Helps Learning Long-term Temporal Dependencies. T. Lin, B. Horne & C. Lee Giles, NEC Research Institute, Princeton, NJ. Discovering the time scale of trends and periodic structure. Michael Mozer and Kelvin Fedrick, University of Colorado. Title to be announced. Lee Feldkamp, Ford Motor Co. Labs. Representation and learning issues for RNNs learning context free languages Janet Wiles and Brad Tonkes, Departments of Computer Science and Psychology, University of Queensland Title to be announced. Speaker to be announced. Long Short Term Memory. Sepp Hochreiter, Fakultät für Informatik, Technische Universität München. Jürgen Schmidhuber, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale. Web page: Please note: more detailed and up-to-date information regarding this workshop, as well as the reference papers described above, can be found at the Workshop's web page located at: http://running.dgcd.doc.ca/NIPS96/ -- Dr. Stefan C. Kremer, Research Scientist, Artificial Neural Systems Communications Research Centre, 3701 Carling Ave., P.O.
Box 11490, Station H, Ottawa, Ontario K2H 8S2 WWW: http://running.dgcd.doc.ca/~kremer/index.html Tel: (613)990-8175 Fax: (613)990-8369 E-mail: Stefan.Kremer at crc.doc.ca From devries at sarnoff.com Mon Nov 18 16:00:15 1996 From: devries at sarnoff.com (Aalbert De Vries x2456) Date: Mon, 18 Nov 96 16:00:15 EST Subject: NNSP*97 Workshop Announcement Message-ID: <9611182100.AA27194@peanut.sarnoff.com> ************************************************************* * We apologize for multiple deliveries of this announcement * ************************************************************* 1997 IEEE Workshop on Neural Networks for Signal Processing 24-26 September 1997 Amelia Island Plantation, Florida FIRST ANNOUNCEMENT AND CALL FOR PAPERS Thanks to the sponsorship of the IEEE Signal Processing Society and the co-sponsorship of the IEEE Neural Network Council, we are proud to announce the seventh in a series of IEEE Workshops on Neural Networks for Signal Processing. Papers are solicited for, but not limited to, the following topics: * Paradigms: artificial neural networks, Markov models, fuzzy logic, inference nets, evolutionary computation, nonlinear signal processing, and wavelets * Application areas: speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition * Theories: generalization, design algorithms, optimization, parameter estimation, and network architectures * Implementations: parallel and distributed implementation, hardware design, and other general implementation technologies Instructions for submitting papers Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers, and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. Submissions should be sent to: Dr. Jose C. Principe IEEE NNSP'97 444 CSE Bldg #42 P.O. Box 116130 University of Florida Gainesville, FL 32611 Important Dates: **************************************************** * Submission of extended summary: January 27, 1997 * **************************************************** * Notification of acceptance: March 31, 1997 * Submission of photo-ready accepted paper: April 26, 1997 * Advance registration: before July 1, 1997 Further Information Local Organizer Ms. Sharon Bosarge Telephone: 352-392-2585 Fax: 352-392-0044 e-mail: sharon at ee1.ee.ufl.edu World Wide Web http://www.cnel.ufl.edu/nnsp97/ Organization General Chairs Lee Giles (giles at research.nj.nec.com), NEC Research Nelson Morgan (morgan at icsi.berkeley.edu), UC Berkeley Proceedings Chair Elizabeth J. Wilson (bwilson at ed.ray.com), Raytheon Co. Publicity Chair Bert DeVries (bdevries at sarnoff.com), David Sarnoff Research Center Program Chair Jose Principe (principe at synapse.ee.ufl.edu), University of Florida Program Committee Les ATLAS Andrew BACK A.
CONSTANTINIDES Federico GIROSI Lars Kai HANSEN Allen GORIN Yu-Hen HU Jenq-Neng HWANG Biing-Hwang JUANG Shigeru KATAGIRI Gary KUHN Sun-Yuan KUNG Richard LIPPMANN John MAKHOUL Elias MANOLAKOS Erkki OJA Tomaso POGGIO Tulay ADALI Volker TRESP John SORENSEN Takao WATANABE Raymond WATROUS Andreas WEIGEND Christian WELLEKENS About Amelia Island Plantation Amelia Island is in extreme northeast Florida, across the St. Mary's River. The island is just 29 miles from Jacksonville International Airport, which is served by all major airlines. Amelia Island Plantation is a 1,250-acre resort/paradise that offers something for every traveler. The Plantation offers 33,000 square feet of workable meeting space and a staff dedicated to providing an efficient, yet relaxed atmosphere. The many amenities of the Plantation include 45 holes of championship golf, 23 Har-Tru tennis courts, modern fitness facilities, an award-winning children's program, more than 7 miles of flora-filled bike and jogging trails, 21 swimming pools, diverse accommodations, exquisite dining opportunities, and of course, miles of glistening Atlantic beach front. From td at elec.uq.edu.au Mon Nov 18 21:52:32 1996 From: td at elec.uq.edu.au (Tom Downs) Date: Tue, 19 Nov 1996 12:52:32 +1000 (EST) Subject: postdoc available Message-ID: <199611190252.MAA21705@s4.elec.uq.edu.au> A non-text attachment was scrubbed... Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/c69d0cc0/attachment.ksh From marwan at ee.usyd.edu.au Mon Nov 18 23:23:59 1996 From: marwan at ee.usyd.edu.au (Marwan Jabri) Date: Tue, 19 Nov 1996 15:23:59 +1100 (EST) Subject: Research Positions (posted for a colleague) Message-ID: The research positions below are posted for a colleague, Prof. Max Bennett (maxb at physiol.su.oz.au) ------------------------------------------------------------------------ Research positions for Electrical Engineers in Neurobiology. Two positions are available for Honours Graduates in Electrical Engineering to participate in a research program funded by the National Health and Medical Research Council for a minimum of three years. This program involves theoretical analysis and modelling of how currents are generated at nerve terminals and subsequently flow in neurones and muscle cells to change their excitability. The program also involves experimental work in which the electrical properties of neurones and muscle cells are determined in order to provide quantitative evaluation of the parameters used in the electrical modelling. Incorporating some of this research for a PhD is also possible. Remuneration will be in the range of $25,000 to $30,000 per annum. For further information, contact Prof. Max Bennett, Neurobiology Laboratory, Dept. of Physiology, University of Sydney, NSW 2006 Australia or at maxb at physiol.su.oz.au From hu at eceserv0.ece.wisc.edu Tue Nov 19 16:47:31 1996 From: hu at eceserv0.ece.wisc.edu (Yu Hen Hu) Date: Tue, 19 Nov 1996 15:47:31 -0600 Subject: No subject Message-ID: <199611192147.AA08557@eceserv0.ece.wisc.edu> Submitted by Yu Hen Hu (hu at engr.wisc.edu) Please forgive me if you receive multiple copies of this posting.
******************************************************************** * LAST CALL FOR PAPERS DEADLINE 12/1/96 * * * * A Special Issue of IEEE Transactions on Signal Processing: * * Applications of Neural Networks to Signal Processing * * * ******************************************************************** Expected Publication Date: November 1997 Issue *** Submission Deadline: December 1, 1996 *** Guest Editors: A. G. Constantinides, Simon Haykin, Yu Hen Hu, Jenq-Neng Hwang, Shigeru Katagiri, Sun-Yuan Kung, T. A. Poggio Significant progress has been made in applying artificial neural network (ANN) techniques to signal processing. From a signal processing perspective, it is imperative to understand how neural network based algorithms are related to more conventional approaches in terms of performance, cost, and practical implementation issues. Questions like these demand honest, pragmatic, innovative, and imaginative answers. This special issue offers a unique forum for researchers and practitioners in this field to present their views on these important questions. We seek the highest quality manuscripts which focus on the signal processing aspects of a neural network based algorithm, application, or implementation. Topics of interest include, but are not limited to: Neural network based signal detection, classification, and understanding algorithms. Nonlinear system identification, signal prediction, modeling, adaptive filtering, and neural network learning algorithms. Neural network applications to biomedical signal processing, including medical imaging, electrocardiogram, EEG, and related topics. Signal processing algorithms for biological neural system modeling. Comparison of neural network based approaches with conventional signal processing algorithms for solving real world signal processing tasks. Real world signal processing applications based on neural networks. Fast and parallel algorithms for efficient implementation of neural network based signal processing systems. Prospective authors are encouraged to SUBMIT MANUSCRIPTS BY DECEMBER 1, 1996 to: Professor Yu-Hen Hu E-mail: hu at engr.wisc.edu Univ. of Wisconsin - Madison, Phone: (608) 262-6724 Dept. of Electrical and Computer Engineering Fax: (608) 262-1267 1415 Engineering Drive Madison, WI 53706-1691 U.S.A. On the cover letter, indicate that the manuscript is submitted to the special issue on neural networks for signal processing. All manuscripts should conform to the submission guidelines detailed in the "Information for Authors" printed in each issue of the IEEE Transactions on Signal Processing. Specifically, the length of each manuscript should not exceed 30 double-spaced pages. SCHEDULE Manuscript received by: December 1, 1996 Completion of initial review: March 31, 1997 Final manuscript received by: June 30, 1997 Expected publication date: November, 1997 DISTINGUISHED GUEST EDITORS Prof. A. G. Constantinides, Imperial College, UK, a.constantinides at romeo.ic.ac.uk Prof. Simon Haykin, McMaster University, Canada, haykin at synapse.crl.mcmaster.ca Prof. Yu Hen Hu, Univ. of Wisconsin, U.S.A., hu at engr.wisc.edu Prof. Jenq-Neng Hwang, University of Washington, U.S.A., hwang at ee.washington.edu Dr. Shigeru Katagiri, ATR, JAPAN, katagiri at hip.atr.co.jp Prof. Sun-Yuan Kung, Princeton University, U.S.A., kung at princeton.edu Prof. T. A. Poggio, Massachusetts Inst.
of Tech., U.S.A., tp-temp at ai.mit.edu From ping at cogsci.richmond.edu Tue Nov 19 22:55:17 1996 From: ping at cogsci.richmond.edu (Ping Li) Date: Tue, 19 Nov 1996 22:55:17 -0500 (EST) Subject: Connection Science Message-ID: <199611200355.WAA20946@cogsci.richmond.edu.urich.edu> A non-text attachment was scrubbed... Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/be435af1/attachment.ksh From td at elec.uq.edu.au Wed Nov 20 18:36:48 1996 From: td at elec.uq.edu.au (Tom Downs) Date: Thu, 21 Nov 1996 09:36:48 +1000 (EST) Subject: PostDoc Available Message-ID: <199611202336.JAA29483@print.elec.uq.edu.au> POSTDOCTORAL RESEARCH FELLOWSHIP Neural Networks Lab, Department of Electrical and Computer Engineering, University of Queensland, Brisbane, Australia, 4072. TOPIC: Estimating generalization performance in feedforward neural networks This 3-year position arises following the award of an Australian Research Council grant for a project in the area of generalization performance estimation. The ideal candidate will have a strong background in applied probability and mathematical statistics and will be capable of building upon the recent contributions to the field made by engineers, computer scientists and physicists. Good programming skills (preferably in C or C++) and good interpersonal skills will also be expected. The position is available from early in 1997. Salary will be at the level of a University of Queensland postdoctoral position and will start at around A$38,000 per annum with annual increments. To apply for this position, please send your CV and the names of three referees either to Prof T Downs at the above address or, by email, to td at elec.uq.edu.au. -- regards Dept. of Electrical and Computer Engineering, Tom Downs University of Queensland, QLD, Australia, 4072 Phone: +61-7-365-3869 Fax: +61-7-365-4999 INTERNET: td at elec.uq.edu.au From raffaele at caio.irmkant.rm.cnr.it Thu Nov 21 21:00:14 1996 From: raffaele at caio.irmkant.rm.cnr.it (Raffaele Calabretta) Date: Thu, 21 Nov 1996 20:00:14 -0600 (CST) Subject: paper available on diploid neural networks Message-ID: The following paper (to appear in Neural Processing Letters) is now available via anonymous ftp: ------------------------------------------------------------------------- "Two is better than one: a diploid genotype for neural networks" --------------------------------------------------------------- Raffaele Calabretta (1,3), Riccardo Galbiati (2), Stefano Nolfi (1) and Domenico Parisi (1) 1 Department of Neural Systems and Artificial Life Institute of Psychology, National Research Council e-mail: raffaele at caio.irmkant.rm.cnr.it 2 Department of Biology, University "Tor Vergata" 3 Centro di Studio per la Chimica del Farmaco, National Research Council Department of Pharmaceutical Studies, University "La Sapienza" Rome, Italy --------------------------------------------------------------------------- Abstract: In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments.
We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer environmental change better; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population, one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings. Key words: adaptation, diploidy, genetic algorithms, genotype-phenotype mapping, neural networks. _________________________________________________________________ FTP-host: gracco.irmkant.rm.cnr.it FTP-filename: /pub/raffaele/calabretta.diploidy.ps.Z The paper has been placed in the anonymous-ftp archive (see above for ftp-host) and is now available as a compressed postscript file named: calabretta.diploidy.ps.Z Retrieval procedure: unix> ftp gracco.irmkant.rm.cnr.it Name: anonymous Password: {your e-mail address} ftp> cd pub/raffaele ftp> bin ftp> get calabretta.diploidy.ps.Z ftp> quit unix> uncompress calabretta.diploidy.ps.Z e.g. unix> lpr calabretta.diploidy.ps (8 pages of output) The paper is also available on the World Wide Web: http://kant.irmkant.rm.cnr.it/gral.html Comments welcome Raffaele Calabretta e-mail address: raffaele at caio.irmkant.rm.cnr.it
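To make the idea of a diploid genotype concrete, here is a minimal Python sketch of the kind of haploid-versus-diploid comparison the abstract describes. It is not the authors' code: the dominance rule, fitness function, mutation rate, and population sizes are all illustrative assumptions.

import random

random.seed(1)
GENES = 8              # assumed genome length (illustrative)
MUTATION_RATE = 0.05   # illustrative mutation rate
POP = 50

def random_chromosome():
    return [random.uniform(-1.0, 1.0) for _ in range(GENES)]

def express(diploid):
    # Genotype -> phenotype mapping: at each locus, express the allele with
    # the larger magnitude (one of many possible dominance rules).
    a, b = diploid
    return [x if abs(x) >= abs(y) else y for x, y in zip(a, b)]

def fitness(phenotype, target):
    # Illustrative fitness: negative squared distance to an environmental target.
    return -sum((p - t) ** 2 for p, t in zip(phenotype, target))

def mutate(chrom):
    return [x + random.gauss(0.0, 0.1) if random.random() < MUTATION_RATE else x
            for x in chrom]

def offspring(parent):
    # Asexual reproduction for brevity: a parent passes on mutated copies of
    # both chromosomes; recessive alleles survive unexpressed.
    a, b = parent
    return (mutate(a), mutate(b))

target = [0.5] * GENES
population = [(random_chromosome(), random_chromosome()) for _ in range(POP)]
for generation in range(40):
    if generation == 20:
        target = [-0.5] * GENES   # environmental change mid-run
    population.sort(key=lambda d: fitness(express(d), target), reverse=True)
    best = fitness(express(population[0]), target)
    population = [offspring(random.choice(population[:10])) for _ in range(POP)]
print("best fitness after environmental change:", best)

Replacing each two-chromosome tuple with a single chromosome gives the corresponding haploid population, which is the comparison the paper studies.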
From jhowse at squid.lanl.gov Thu Nov 21 14:48:38 1996 From: jhowse at squid.lanl.gov (James Howse) Date: Thu, 21 Nov 96 12:48:38 MST Subject: NIPS*96 Workshop Announcement: Dynamical Recurrent Networks, Day 1 Message-ID: <9611211948.AA19897@squid.lanl.gov> NIPS*96 Workshop Announcement Dynamical Recurrent Networks NIPS*96 Postconference Workshop Day 1 Organized by James Howse and Bill Horne Friday, December 6, 1996 Snowmass, Colorado Workshop Abstract There has been significant interest in recent years in dynamic recurrent neural networks and their application to control, system identification, signal processing, and time series analysis and prediction. Much of this work is simply an extension of techniques which work well for feedforward networks to recurrent networks. However, when dynamics are added to a system there are many complex issues which are not relevant to the study of feedforward nets, such as the existence of attractors and questions of stability, controllability, and observability. In addition, the architectures and learning algorithms that work well for feedforward systems are not necessarily useful or efficient in recurrent systems. The first day of the workshop highlights the use of traditional results from systems theory and nonlinear dynamics to analyze the behavior of recurrent networks. The aim of the workshop is to expose recurrent network designers to the traditional frameworks available in these well established fields. A clearer understanding of the known results and open problems in these fields, as they relate to recurrent networks, will hopefully enable people working with recurrent networks to design more robust systems which can be more efficiently trained. This session will overview known results from systems theory and nonlinear dynamics which are relevant to recurrent networks, discuss their significance in the context of recurrent networks, and highlight open problems. The second day of the workshop addresses the issues of designing and selecting architectures and algorithms for dynamic recurrent networks. Unlike previous workshops, which have typically focussed on reporting the results of applying specific network architectures to specific problems, this session is intended to assist both users and developers of recurrent networks to select appropriate architectures and algorithms for specific tasks. In addition, this session will provide a backward flow of information -- a forum where researchers can listen to the needs of application developers. The wide variety, rapid development and diverse applications of recurrent networks are sure to make for exciting and controversial discussions. ::::::::::::: Format for Day 1 The format for this session is a series of 30 minute talks with 5 minutes for specific questions, followed by time for open discussion after all of the talks. The talks will give a tutorial overview of traditional results from systems theory or nonlinear dynamics, discuss their relationship to some problem in recurrent neural networks, and then outline unresolved problems related to these results. The discussions will center around possible ways to resolve the open problems, as well as clarifying the understanding of established results. The goal of this session is to introduce more of the NIPS community to ideas from control theory and nonlinear dynamics, and to illustrate the utility of these ideas in analyzing and synthesizing recurrent networks. ::::::::::::: Web Sites for the Workshop Additional information concerning Day 1 can be found at http://flute.lanl.gov/NIS-7_home_pages/jhowse/talk_abstracts.html. Information about Day 2 can be obtained at http://running.dgcd.doc.ca/NIPS96/. ::::::::::::: Schedule for Friday, December 6th Morning Session (7:30-10:30am) Structural Neural Dynamics and Computation Xin Wang Dynamical Recognizers: What Languages Can Recurrent Neural Networks Recognize in Real Time? Cris Moore Decoding Discrete Structures from Fixed Points of Analog Hopfield Networks Arun Jagota Recurrent Networks and Supervised Learning Jennie Si Afternoon Session (4:00-7:00pm) System Theory of Recurrent Networks Eduardo D. Sontag Learning Controllers for Complex Behavioral Systems Shankar Sastry and Lara Crawford Neural Network Verification of Hybrid Dynamical System Stability Michael Lemmon ::::::::::::: Talk Abstracts Title: Structural Neural Dynamics and Computation Author: Xin Wang Xerox Corporation Abstract: Dynamics and computation of neural networks can be regarded as two types of meaning of the mathematical equations that are used to describe the dynamical and computational behaviors of the networks. They parallel the operational semantics and denotational semantics of computer programs written in programming languages. Lessons learned in the study of formal semantics, and the impact of the structured programming and object-oriented programming methodologies, tell us that a structural approach has to be taken in order to deal with the complexity in analysis and synthesis caused by large-sized neural networks. This talk will start by presenting some small-sized networks that possess very rich dynamical and bifurcational behaviors, ranging from convergent to chaotic and from saddle to period-doubling bifurcations, and then examine some conditions under which these types of behaviors are preserved by standard constructions such as Cartesian product and cascade. ---------- Title: Dynamical Recognizers: What Languages Can Recurrent Neural Networks Recognize in Real Time?
Author: Cris Moore Computation, Dynamics, and Inference Santa Fe Institute Abstract: There has been considerable interest recently in using recurrent neural networks as dynamical models of language, complementary to the standard symbolic and grammatical approaches. Numerous researchers have shown that RNNs can recognize regular, context-free, and even context-sensitive languages in real time. We place these results in a mathematical framework by treating RNNs with varying activation functions as iterated maps with varying functional forms. We relate the classes of languages recognizable in real time by these different types of RNNs directly to "classical" language classes from computational complexity theory. We prove, for instance, that there are languages recognizable in real time with piecewise-linear or quadratic activations that networks with linear activations cannot recognize, and that there are languages recognizable with exponential or sinusoidal activations that are not recognizable by polynomial activations of any degree. Our methods are closely related to the Vapnik-Chervonenkis dimension. We also relate these results to Blum, Shub and Smale's definition of analog computation, as well as Siegelmann and Sontag's. ---------- Title: Decoding Discrete Structures from Fixed Points of Analog Hopfield Networks Author: Arun Jagota Department of Computer Science University of California, Santa Cruz Abstract: In this talk we examine the relationship between the fixed points of certain specialized families of binary Hopfield networks and certain stable regions of their associated analog Hopfield network families. More specifically, consider some specialized family F of binary Hopfield networks whose fixed points have some well-characterized structure. We consider an analog version of the family F obtained by replacing the hard-threshold neurons by sigmoidal ones and replacing the discrete dynamics of the binary model by a continuous one. We ask the question: can discrete structures identical or similar to those that are fixed points in the binary family F be recovered from certain stable regions of the associated analog family? We obtain revealing answers for certain families. Our results lead to a better understanding of the recoverability of discrete structures from stable regions of analog networks. They have applications to solving discrete problems via analog networks. We also discuss many open mathematical problems that our studies reveal. Several of the results were obtained in joint work with Fernanda Botelho and Max Garzon. ----------
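As a concrete companion to the abstract above, the following minimal Python sketch builds the binary/analog Hopfield pair it refers to. The Hebbian weights storing a single pattern, the gain, and the step size are illustrative assumptions, not the speaker's construction: the sketch checks that the stored pattern is a fixed point of the hard-threshold dynamics and then decodes it back from the relaxed state of the sigmoidal version.

import math

pattern = [1, -1, 1, 1, -1, -1, 1, -1]
n = len(pattern)
# Hebbian outer-product weights with zero diagonal (assumed construction)
W = [[0 if i == j else pattern[i] * pattern[j] / n for j in range(n)]
     for i in range(n)]

def binary_update(state):
    # Hard-threshold (binary) Hopfield dynamics
    return [1 if sum(W[i][j] * state[j] for j in range(n)) >= 0 else -1
            for i in range(n)]

def analog_step(x, gain=4.0, dt=0.1):
    # Analog counterpart: x_i' = -x_i + tanh(gain * sum_j W_ij x_j)
    return [xi + dt * (-xi + math.tanh(gain * sum(W[i][j] * x[j] for j in range(n))))
            for i, xi in enumerate(x)]

assert binary_update(pattern) == pattern   # stored pattern is a binary fixed point

x = [0.1 * p for p in pattern]   # weak analog version of the pattern
for _ in range(200):
    x = analog_step(x)
decoded = [1 if xi >= 0 else -1 for xi in x]
print("decoded == stored pattern:", decoded == pattern)

The talk's question is when this kind of decoding is guaranteed to work for whole families of networks, not just for a single stored pattern as here.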
Title: Recurrent Networks and Supervised Learning Author: Jennie Si Department of Electrical Engineering Arizona State University Abstract: After several years of adventure, researchers in the field of artificial neural networks have reached a common consensus about what neural networks can do and what their limitations are. In particular, there have been some fundamental results on the existence of artificial neural networks for function approximation and nonlinear dynamic system modeling, and on neural networks for associative memory applications. Some theoretical advances were also made in neural networks for control applications, in an adaptive setting. In this talk, the emphasis is given to some recent progress aiming at a quantitative evaluation of neural network performance for some fundamental tasks, e.g., static and dynamic approximation, and to computational issues in training neural networks, characterized by both memory and computation complexities. All the above discussions will be based on neural network models representing nonlinear static and dynamic input-output systems as well as state space nonlinear dynamic systems. Further applications of the fundamental neural network theory to simulation-based approximation techniques for nonlinear dynamic programming will also be discussed. This technique may represent an important and practically applicable dynamic programming solution to complex problems that involve the dual curse of large dimension and lack of an accurate mathematical model. ---------- Title: System Theory of Recurrent Networks Author: Eduardo D. Sontag Department of Mathematics Rutgers University Abstract: We consider general recurrent networks. These are described by the differential equations x' = S(Ax+Bu), y = Cx, in continuous time, or the analogous discrete-time version. Here S(.) is a diagonal mapping of the form S(a,b,c,...) = (s(a),s(b),s(c),...), where s(.) is a scalar real map called the "activation" of the network. The vector x represents the state of the system, u is the time-dependent input signal, and y represents the measurements or outputs of the system. Recurrent networks whose activation s(.) is the identity function s(x)=x are precisely the linear systems studied in control theory. It is perhaps an amazing fact that a nontrivial and interesting system theory can be developed for recurrent nets whose activation is the one typically used in neural net practice, s(x)=tanh(x). (One reason that makes this fact surprising is that recurrent nets with this activation are, in a suitable sense, universal approximators for arbitrary nonlinear systems.) This talk will survey recent results by the speaker and several coauthors (Albertini, Dasgupta, Koiran, Koplon, Siegelmann, Sussmann) regarding issues of parameter identifiability, controllability, observability, system approximation, computability, parameter reconstruction, and sample complexity for learning and generalization. We provide simple algebraic tests for many properties, expressed in terms of the "weight" or parameter matrices (A,B,C) that characterize the system. ----------
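The discrete-time version of the system in the abstract above is easy to state in code. The following Python sketch simply simulates x(t+1) = S(Ax(t) + Bu(t)), y(t) = Cx(t) with s(x) = tanh(x); the dimensions, random weights, and input signal are arbitrary illustrative choices, not tied to any result in the talk.

import math
import random

random.seed(0)
n, m, p = 3, 1, 1   # state, input and output dimensions (assumed)
A = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
B = [[random.uniform(-1, 1) for _ in range(m)] for _ in range(n)]
C = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(p)]

def step(x, u):
    # x(t+1) = S(A x(t) + B u(t)); S applies s = tanh coordinatewise
    pre = [sum(A[i][j] * x[j] for j in range(n)) +
           sum(B[i][k] * u[k] for k in range(m)) for i in range(n)]
    return [math.tanh(v) for v in pre]

def output(x):
    # y(t) = C x(t)
    return [sum(C[i][j] * x[j] for j in range(n)) for i in range(p)]

x = [0.0] * n
for t in range(10):
    u = [math.sin(0.3 * t)]   # arbitrary input signal
    x = step(x, u)
    print(t, output(x))

Replacing math.tanh with the identity in step turns the same loop into an ordinary linear system, which is exactly the special case the abstract singles out.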
Title: Learning Controllers for Complex Behavioral Systems Authors: Shankar Sastry and Lara Crawford Electronics Research Laboratory University of California, Berkeley Abstract: Biological control systems routinely guide complex dynamical systems, such as the human body, through complicated tasks, such as running or diving. Conventional control techniques, however, stumble with these problems, which have complex dynamics, many degrees of freedom, and an only partially specified desired task (e.g., "move forward fast," or "execute a one-and-one-half-somersault dive"). To address behaviorally-specified problems like these, we are using a biologically-inspired, hierarchical control structure, in which network-based controllers learn the controls required at each level of the hierarchy, and no system model is required. The encoding and decoding of the information passed between hierarchical levels, including both controller commands and behavioral feedback, is an important design issue affecting both the size of the controller network needed and the ease with which it can learn; we have used biological encoding schemes for inspiration wherever possible. For example, the lowest-level controller outputs an encoded torque profile; the encoding is based on the way biological pattern generators for single-joint movements restrict the allowed control torque profiles to a particular parametrized control family. Such an encoding removes all time dependence from the controller's consideration, simplifying the learning task considerably to one of function approximation. The implementation of the controller networks themselves could take several forms, but we have chosen to use radial basis functions, which have some advantages over conventional networks. Through a learning architecture with good encodings for both the controls and the desired behaviors, many of the difficulties in controlling complex behavioral systems can be overcome. In this talk, we apply the control structure described above, with 800-element networks and a form of supervised learning, to the problem of controlling a human diver. The system learns open-loop controls to steer a 16-DOF human model through various dives, including a one-and-one-half somersault pike and a one-and-one-half somersault with a full twist. ---------- Title: Neural Network Verification of Hybrid Dynamical System Stability Author: Michael Lemmon Department of Electrical Engineering University of Notre Dame Abstract: Hybrid dynamical systems (HDS) can occur when a smooth dynamical system is supervised by a discrete-event dynamical system. Such systems are frequently found in computer-controlled systems. A key issue in the development of hybrid system controllers concerns verifying that the system possesses certain generic properties such as safety, stability, and optimality. It has been possible to study the verifiability of restricted classes of hybrid systems. Examples of such systems include switched systems consisting of first-order integrators [Alur et al.], hybrid systems whose "switching" surfaces satisfy certain invariance properties [Lemmon et al.], and planar hybrid systems [Guckenheimer]. The extension of these verification methods to more general systems [Deshpande et al.], however, appears to be computationally intractable. This is due in large part to the complex behaviours that such systems can demonstrate. Simulation experiments with a simple system consisting of switched integrators (relative degree greater than 2) suggest that the $\omega$-limit sets of these systems can be single fixed points, periodic points, or Cantor sets. Neural networks may provide one method for assisting in the analysis of hybrid systems. A neural network can be used to approximate the Poincare map of a switched hybrid system. Such methods can be extremely useful in verifying whether a given HDS exhibits asymptotically stable periodic behaviours. The purpose of this talk is twofold. First, a summary of the principal results and open research areas in hybrid systems will be given. Second, the talk will discuss recent results on the use of neural networks in the verification of hybrid system stability. From thimm at idiap.ch Fri Nov 22 07:54:44 1996 From: thimm at idiap.ch (Georg Thimm) Date: Fri, 22 Nov 1996 13:54:44 +0100 Subject: CFP: Session at KES'97: Knowledge Extraction from and with Neural Networks Message-ID: <199611221254.NAA11805@avoi.idiap.ch> Call for Papers Knowledge Extraction from and with Neural Networks A session organized by G.
Thimm at the First International Conference on Conventional and Knowledge-Based Intelligent Electronic Systems, KES '97 21st - 23rd May 1997, Adelaide, Australia Electronics Association of South Australia (please see below for the KES call for papers) Successfully trained neural networks contain a certain kind of knowledge: applied to previously unseen data, they often give a correct answer. However, this knowledge is usually difficult to access, although an intelligible qualitative or quantitative representation is of interest in research and development: - The performance on untrained data can be evaluated (as a complement to statistical methods). - The extracted knowledge can be used in the development of more efficient algorithms. - The knowledge is of scientific interest. Suggested topics for papers are: - Knowledge extraction techniques from neural networks. - Neural network architectures designed for knowledge extraction. - Applications in which extracted knowledge plays a role. - Ways to represent knowledge extracted from neural networks. - Performance estimation of neural networks using extracted knowledge. - Methods that use knowledge extracted from a neural network (but not the network). Authors are invited (but not required) to point out how the extracted knowledge is used and why neural networks as an intermediate representation of knowledge are advantageous. SUBMISSION OF PAPERS * Papers must be written in English (5 to 10 pages maximum). * Paper presentations are about 20 minutes each, including questions and discussion. * Include the corresponding author with full name, address, telephone and fax numbers, and E-Mail address. * Include the presenter's address and his/her 4-line resume for introduction purposes only. * Fax or E-Mail copies are not acceptable. * Please submit one original and three copies of the camera-ready paper (A4 size), two-column format in Times or a similar font style, 10 points, with one inch margins on all four sides, for review to: Georg Thimm IDIAP Rue de Simplon 4 C.P. 592 CH-1920 Switzerland Email: thimm at idiap.ch Tel: ++41 27 721 77 39 Fax: ++41 27 721 77 12 DEADLINES Receipt of papers for this session January 15, 1997 Notification of acceptance February 15, 1997 FURTHER INFORMATION Please look at http://www.idiap.ch/~thimm/KES_ses.html or contact G. Thimm for information on this special session, or http://www.kes97.conf.au for the main conference.
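To fix ideas, here is a toy Python sketch of the simplest kind of knowledge extraction the session addresses: reading an IF-THEN rule off a single trained unit by keeping only its strong weights. The feature names, weights, and cutoff below are invented for illustration and do not come from any method in the session.

# Toy decompositional rule extraction from one trained sigmoid unit
# (illustrative only; a real use would take the weights from a network).
WEIGHTS = {"fever": 2.1, "cough": 1.8, "age_over_60": 0.05, "vaccinated": -1.6}
CUTOFF = 1.0   # assumed threshold separating "strong" from negligible weights

def extract_rule(weights, cutoff):
    pos = [f for f, w in weights.items() if w >= cutoff]
    neg = ["NOT " + f for f, w in weights.items() if w <= -cutoff]
    # Weak weights (here age_over_60) simply drop out of the rule.
    return "IF " + " AND ".join(pos + neg) + " THEN class = positive"

print(extract_rule(WEIGHTS, CUTOFF))
# -> IF fever AND cough AND NOT vaccinated THEN class = positive

Once extracted, such a rule can be evaluated, refined, or used without the network, which is exactly the distinction drawn in the last suggested topic above.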
================================================================== FIRST INTERNATIONAL CONFERENCE ON CONVENTIONAL AND KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS, KES '97 21st - 23rd May 1997, Adelaide, Australia Electronics Association of South Australia CALL FOR PARTICIPATION The aim of this conference is to provide an international forum for the presentation of recent results in the general areas of Electronic Systems Design and Industrial Applications and Information Technology. Honorary Chair I. Sethi, WSA, USA Conference Chair L.C. Jain, UniSA, Australia General Chair C. Pay, EASA, Australia Conference Advisor R.P. Johnson, DSTO, Australia Conference Director N.M. Martin, DSTO, Australia Publicity Chair R.K. Jain, UA, Australia Publications Chair G.N. Allen, KES, Australia Austria Liaison Chairs F. Leisch, TUW K. Hornik, TUW Canada Liaison Chair C.W. de Silva, UBC New Zealand Liaison Chair N. Kasabov, UO Korea Liaison Chair J.H. Kim, KAIST Japan Liaison Chairs K. Hirota, TIT T. Tanaka, FIT Y. Sato, HU England Liaison Chair M.J. Taylor, UL France Liaison Chair E. Sanchez, Neurinfo USA Liaison Chairs N. Nayak, IBM Watson C.L. Karr, UA Poland Liaison Chair J. Kacprzyk, PAS Romania Liaison Chair M. Negoita, GEF Russia Liaison Chair V.I. Neprintsev, VSU Singapore Liaison Chair D. Mital, NTU The Netherlands Liaison Chair W. van Luenen, URL India Liaison Chairs B.S. Sonde, IISc A.N. Bannore, RLT Germany Liaison Chair U. Seiffert, UM Hungary Liaison Chair L.T. Koczy, TUB Italy Liaison Chair G. Guida, UB The conference will consist of plenary sessions, contributory sessions, poster papers, workshops and an exhibition mainly on the theory and applications of conventional and knowledge-based intelligent systems using: . ARTIFICIAL NEURAL NETS . FUZZY SYSTEMS . EVOLUTIONARY COMPUTING . CHAOS THEORY THE TOPICS OF INTEREST The topics of interest include, but are not limited to: Biomedical engineering; Consumer electronics; Electronic communication systems; Electronic control systems; Electronic production systems; Electronic security; Education and training; Industrial electronics; Knowledge-based intelligent engineering systems using expert systems, neural networks, fuzzy logic, evolutionary programming and chaos theory; Marketing; Mechatronics; Multimedia; Microelectronics; Optical electronics; Sensor technology; Signal processing; Virtual reality. SUBMISSION OF PAPERS * Papers must be written in English (5 to 10 pages maximum). * Paper presentations are about 20 minutes each, including questions and discussion. * Include the corresponding author with full name, address, telephone and fax numbers, and E-Mail address. * Include the presenter's address and his/her 4-line resume for introduction purposes only. * Fax or E-Mail copies are not acceptable. * Please submit one original and three copies of the camera-ready paper (A4 size), two-column format in Times or a similar font style, 10 points, with one inch margins on all four sides, for review to: Dr. L.C. Jain, Knowledge-based Intelligent Engineering Systems, School of Electronic Engineering, University of South Australia, Adelaide, The Levels, S.A., 5095, Australia. Tel: 61 8 302 3315 Fax: 61 8 302 3384 E-Mail etLCJ at Levels.UniSA.Edu.Au INVITED LECTURES The conference committee is also soliciting proposals for invited sessions focussing on new or emerging electronic technologies. Researchers, application engineers and managers are invited to submit proposals to Dr L.C. Jain by 31st October 1996. KEY DATES Conference and Exhibition - 22nd and 23rd May 1997 Workshops - 21st May 1997 Conference Dinner and Industry Awards for Excellence - 22nd May 1997 DEADLINES Receipt of paper - 31st December 1996 Receipt of workshop proposals - 31st October 1996 Notification of acceptance - 30th January 1997 REGISTRATION FEE ( Conference Early Registration (until 28 . 2 . 97) AU$ 300 ( Conference Registration (after 28 . 2 . 97) AU$ 350 ( Conference Early Registration for full-time student (until 28 . 2 . 97) AU$ 200 ( Conference Registration for full-time student (after 28 . 2 .
97) AU$ 250 ( Workshop Registration (AU$ 150 for one workshop) AU$ 150 ( Conference Dinner AU$ 65 ______________________________________________________________________ TOTAL AU$ _________ KES '97 REGISTRATION FORM Name: _______________________________________________ Title: ________________________________ Position: ________________ Organisation: _________________________________________________ Address: ________________________________________________________________________ ________________________________________________________________________ Tel: Fax: E-mail: PAYMENT DETAILS Please debit the following account in the amount of $_____________ ( Mastercard ( Bank card CARD NUMBER Expiry Date: Name of card holder __________________________________ Signature ______________________ OR . Please send cheque payable to: EASA Conference Account Conference Secretariat Knowledge-Based Intelligent Engineering Systems University of South Australia Adelaide, The Levels, S.A. 5095 Australia OR . Transfer the amount directly to the Bank Account Name: EASA Conference Account Account Number: 735 - 038 50 - 0833 Westpac Bank 56 O'Connel Street North Adelaide, S.A. 5006 Australia All participants are required to fill in the registration form and register for this Conference. From Tony.Plate at Comp.VUW.AC.NZ Thu Nov 21 19:38:36 1996 From: Tony.Plate at Comp.VUW.AC.NZ (Tony Plate) Date: Fri, 22 Nov 1996 13:38:36 +1300 Subject: Faculty position Message-ID: <199611220038.NAA08761@rialto.comp.vuw.ac.nz> Department of Computer Science Victoria University of Wellington LECTURESHIP in COMPUTER SCIENCE Position No: 634 November 1996 ---------------------------------------------------------------------------- The University invites applications from suitably qualified persons for a lectureship in Computer Science. Applicants should hold a PhD in computer science and show evidence of strong research potential and excellence in teaching. The Department is seeking to appoint within its established research areas of software engineering (including databases), concurrent and distributed systems, and artificial intelligence. The position is permanent, subject to a probationary period. Teaching programmes include the PhD, both research and professional Masters Degrees, and a BSc. The Department has 13 academic staff supported by 6 programming staff, 20-30 graduate students, and about 90 undergraduates per year. Further information about the Department is available at http://www.comp.vuw.ac.nz, or from the chairperson Peter.Andreae at vuw.ac.nz. Victoria University is situated in Wellington, the capital city of New Zealand. The city offers an outstanding combination of recreational and cultural activities. The University has an enrolment of about 10,000 and teaches comprehensive programmes in the sciences, arts, commerce, education, law and architecture. The salary scale for Lecturers is currently NZ$41,820-NZ$49,470 per annum, where there is a bar; then NZ$41,000-NZ$52,530 per annum. Enquiries and applications should be sent to the address below by the closing date of 30 January 1997. Applications should include the following: 1. name 2. address and telephone/fax numbers; email address if applicable 3. academic qualifications 4. present position 5. details of appointments held, with special reference to teaching appointments 6. research experience 7. field in which specially qualified 8. publications, preferably under appropriate headings, e.g., books, articles, monographs 9.
names and addresses (fax numbers and/or email addresses if possible) of three persons from whom references can be requested (in addition to naming referees, you can include recent testimonials if you wish) 10. date on which able to commence duties ---------------------------------------------------------------------------- In honouring the Treaty of Waitangi, the University welcomes applications from Tangata Whenua. It also welcomes applications from women, Pacific Island peoples, ethnic minorities, and people with disabilities. ---------------------------------------------------------------------------- Appointments Administrator Human Resources Directorate Victoria University of Wellington PO Box 600, Wellington New Zealand tel: +64 4 495-5272 fax: +64 4 495 5238 Peter.Gargiulo at vuw.ac.nz ------------- From jabri at valaga.salk.edu Sun Nov 24 23:05:56 1996 From: jabri at valaga.salk.edu (Marwan Jabri) Date: Sun, 24 Nov 1996 20:05:56 -0800 Subject: Postdoctoral Research Fellowships Message-ID: <199611250405.UAA01190@lopina.salk.edu> Postdoctoral Research Fellowships (closing date Dec 13, 1996) Neuromorphic Systems Research SEDAL, Department of Electrical Engineering University of Sydney The University of Sydney through its U2000 program has 15 postdoctoral fellowships available for open competition. These fellowships are for recent PhD graduates (PhD awarded within the last five years or about to be awarded) who wish to carry out full-time research at the University of Sydney. The fellowships can be taken up at the University's Neuromorphic Systems Research Group, within the Systems Engineering & Design Automation Laboratory (SEDAL) at the Department of Electrical Engineering. The range of current research activities/projects includes: o Modelling of visual, auditory, olfactory and somatosensory pathways o Modelling of the superior colliculus o Optical character recognition o Machine learning o Auditory localization o VLSI implementations of neural systems o Learning on silicon o Biologically inspired signal processing The fellowships are available for a minimum of three years, with a possible further one year extension. They are available from February 1997 and must be taken up by June 1997. Fellowships carry a salary of $A38,092 to $A40,889 and include the cost of a return airfare to Sydney (fares for dependants, and removal expenses, will not be provided) and an initial setting-up grant for the research project of $A25,000. Applications close in Sydney on 13 December 1996 and decisions about awards will be made in January 1997. If you are interested, please first contact: Professor Marwan Jabri Phone: +61-2-9351 2240 Fax: +61-2-9351 7209 marwan at sedal.usyd.edu.au http://www.sedal.usyd.edu.au/~marwan from whom an application form can be obtained (postscript or html).
Applicants should complete the application form and provide, in addition, the following information: 1. an up-to-date curriculum vitae 2. a list of publications (distinguishing clearly between full publications and abstracts of papers presented to learned societies) 3. an outline of the proposed research project in no more than two pages 4. the names and addresses of two referees who will be forwarding testimonials 5. overseas applicants must also provide details (names, ages of children) of any family members who will accompany the applicant to Australia, if successful (this information is necessary for the University to assist with visa arrangements) then post the completed application form and additional information/documents to: The Director Research and Scholarships Office The University of Sydney 2006 Australia From robert at fit.qut.edu.au Mon Nov 25 00:02:09 1996 From: robert at fit.qut.edu.au (Robert Andrews) Date: Mon, 25 Nov 1996 15:02:09 +1000 Subject: NIPS*96 Rule Extraction W'Shop Message-ID: <199611250452.OAA14649@sky.fit.qut.edu.au> Tenth Annual Conference on NEURAL INFORMATION PROCESSING SYSTEMS RULE EXTRACTION FROM TRAINED ARTIFICIAL NEURAL NETWORKS Snowmass, CO. Friday Dec 6th 1996 WORKSHOP PROGRAMME 7:00 - 7:30 Rule Refinement and Local Function Networks Robert Andrews (Queensland University of Technology) 7:30 - 8:00 A Systematic Method for Decompositional Rule Extraction From Neural Networks R. Krishnan (Centre for AI and Robotics) 8:00 - 8:30 Rule Extraction as Learning Mark Craven (Carnegie Mellon University) 8:30 - 9:00 MITER: Mutual Information and Template for Extracting Rules Tayeb Nedjari (Institut Galilee, Univ Paris Nord) 9:00 - 9:30 Recurrent Neural Networks and Rule Extraction Lee Giles (NEC Research Institute, Princeton) 9:30 - 10:00 Beyond Finite State Machines: Steps Towards Representing and Extracting Context Free Languages From Recurrent Neural Networks Janet Wiles (University of Queensland) 4:00 - 4:30 Explanation Based Generalisation and Connectionist Systems Joachim Diederich (Queensland University of Technology) 4:30 - 5:00 Law Discovery Using Neural Networks Kazumi Saito (NTT Communication Laboratories) 5:00 - 5:30 A Comparison Between Two Rule Extraction Methods for Continuous Input Data Ishwar Sethi (Wayne State University) 5:30 - 7:00 Panel Discussion Panel Members: R Andrews, J Diederich, L Giles, M Craven, I Sethi, J Wiles From feopper at wicc.weizmann.ac.il Mon Nov 25 01:04:15 1996 From: feopper at wicc.weizmann.ac.il (Opper Manfred) Date: Mon, 25 Nov 1996 08:04:15 +0200 (WET) Subject: preprint Message-ID: <199611250604.IAA69428@wishful.weizmann.ac.il> A non-text attachment was scrubbed... Url: https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/ff25035a/attachment.ksh From mam at kassandra.informatik.uni-dortmund.de Mon Nov 25 10:19:11 1996 From: mam at kassandra.informatik.uni-dortmund.de (Martin Mandischer) Date: Mon, 25 Nov 1996 16:19:11 +0100 Subject: 2nd CFP: Euromicro-Workshop on Computational Intelligence, 97 Message-ID: <199611251519.QAA01037@kassandra.informatik.uni-dortmund.de> SECOND CALL FOR PAPERS ====================== Euromicro-Workshop on Computational Intelligence Budapest, Hungary September 3-4, 1997 CONFERENCE SCOPE ================ Artificial Neural Networks (NN), Fuzzy-Logic Systems (FL), and Evolutionary Algorithms (EA) have been investigated for three decades.
Their broad application, however, had to wait until powerful computers became available within the last decade. Their potential is by no means exhausted today. On the contrary, more and more they are jointly applied to solve hard real-world problems. The term Computational Intelligence has been coined for such combinations, and the first CI World Congress in 1994 was witness to the boom in this challenging field, the next one being scheduled for 1998. This Workshop aims at bringing together developers and users of CI methods in order to enhance the synergetic potential. Besides pure subsymbolic knowledge processing, combinations of symbolic and subsymbolic approaches will also be addressed. Topics of interest include, but are not restricted to: Evolutionary Algorithms: theoretical aspects of evolutionary computation, modifications and extensions to evolutionary algorithms such as evolution strategies, evolutionary programming, and genetic algorithms, recent trends in applications. Fuzzy-Logic: mathematical foundations of fuzzy logic and control, semantics of fuzzy rule bases, fuzzy clustering, decision making, image processing, recent trends in applications. Neural Networks: advances in supervised and unsupervised learning algorithms, constructive algorithms for networks, prediction and non-linear dynamics of networks, pattern recognition, data analysis, associative memory. Combined CI-Methods: evolutionary algorithms for neural networks, fuzzy neural networks, evolutionary optimized fuzzy systems, and any combination with classical as well as novel methods. The Workshop will have just one stream of oral and poster sessions. This will give ample time for open discussions. GENERAL INFORMATION =================== The Workshop will be organized in parallel with the annual Euromicro Conference, the 23rd of which will be held in Budapest, the well-known capital of Hungary. Budapest is famous for many reasons, and there are many attractions to be visited. Local arrangements and tourist information will be provided in the final program of the Workshop. Joint registration for the Workshop and the 23rd Euromicro Conference will be available. Organizing Chairperson ---------------------- Prof. Ferenc Vajda, Hungarian Academy of Science, Budapest, Hungary Program Chairpersons -------------------- Prof. Hans-Paul Schwefel Prof. Bernd Reusch Department of Computer Science University of Dortmund 44221 Dortmund Germany Deputy Program Chairpersons --------------------------- Martin Mandischer, University of Dortmund, Germany Karl-Heinz Temme, University of Dortmund, Germany Programme Committee ------------------- Monica Alderighi (Italy) Thomas Baeck (Germany) Gusz Eiben (The Netherlands) Kenneth De Jong (USA) David Fogel (USA) Ralf Garionis (Germany) Antonio Gonzalez (Spain) Tadeusz Grabowiecki (Poland) Karl Goser (Germany) Lech Jozwiak (The Netherlands) Kurt P. Judmann (Austria) Harro Kiendl (Germany) Hiroaki Kitano (Japan) Erich Peter Klement (Austria) Matti Kurki (Finland) L. Koczy (Hungary) Rudolf Kruse (Germany) Reinhard Maenner (Germany) Bernard Manderick (Belgium) Martin Mandischer (Germany) Stefano Messina (Italy) Zbigniew Michalewicz (USA) Antonio Nunez (Spain) Adam Postula (Australia) Bernd Reusch (Germany) Guenter Rudolph (Germany) Ulrich Rueckert (Germany) Werner von Seelen (Germany) Marc Schoenauer (France) Hans-Paul Schwefel (Germany) Karl-Heinz Temme (Germany) J. L.
Verdegay (Spain) Klaus Waldschmidt (Germany) Lotfi Zadeh (USA) Andreas Zell (Germany) SUBMISSION OF PAPERS ==================== Prospective authors are encouraged to send a PostScript version of their full paper (not exceeding 4000 words in length and including a 150-200 word abstract) through WWW (http://LS11-www.informatik.uni-dortmund.de/EUROMICRO). Alternatively, submissions may be sent as hardcopies by postal mail. In that case, five copies should be sent to one of the program chairpersons. The following information should be included in the submission: All necessary clearances have been obtained for the publication of this paper. If accepted, the author(s) will prepare the final camera-ready manuscript in time for inclusion in the proceedings and will personally present the paper at the Workshop. The closing date for submissions is February 1st, 1997. Authors will be notified of acceptance by April 1st, 1997. Camera-ready versions will be required by June 1st, 1997. The proceedings will be published by IEEE Press. Note: We aim at high-quality papers rather than a full program. MORE INFORMATION ================ Information on the Workshop and the Euromicro Conference is available through WWW: Workshop: http://LS11-www.informatik.uni-dortmund.de/EUROMICRO Conference: http://www.elet.polimi.it/pub/data/Nello.Scarabottolo/www_docs/em97 Budapest: http://www.fsz.bme.hu/hungary/budapest/budapest.html IMPORTANT DATES =============== Submission of papers: February 1st, 1997 Notification of acceptance: April 1st, 1997 Camera-ready papers due: June 1st, 1997 --------------------------------------------------------- Martin Mandischer Informatik Centrum Dortmund (ICD) University of Dortmund Center for Applied Systems Analysis (CASA) Department of Computer Science Joseph-von-Fraunhofer-Str. 20 44221 Dortmund 44227 Dortmund Germany Room 2.73 Phone: 0231-9700-369 Fax: 0231-9700-959 E-Mail: mandischer at LS11.informatik.uni-dortmund.de --------------- WWW: http://LS11-www.informatik.uni-dortmund.de/people/mam/ From moody at chianti.cse.ogi.edu Mon Nov 25 11:56:12 1996 From: moody at chianti.cse.ogi.edu (John Moody) Date: Mon, 25 Nov 96 08:56:12 -0800 Subject: Computational Finance MS Programs at OGI Message-ID: <9611251656.AA12007@chianti.cse.ogi.edu> ======================================================================= COMPUTATIONAL FINANCE at Oregon Graduate Institute of Science & Technology (OGI) Master of Science Concentrations in Computer Science & Engineering (CSE) Electrical Engineering (EE) Now Reviewing MS Applications for Fall 1997. New: Certificate Program Designed for Part-Time Students. For more information, contact OGI Admissions at (503)690-1027 or admissions at admin.ogi.edu, or visit our Web site at: http://www.cse.ogi.edu/CompFin/ Students with an interest in Neural Networks, Machine Learning, or Data Mining are particularly encouraged to apply. ======================================================================= Computational Finance Overview: Advances in computing technology now enable the widespread use of sophisticated, computationally intensive analysis techniques applied to finance and financial markets. The real-time analysis of tick-by-tick financial market data and the real-time management of portfolios of thousands of securities are now sweeping the financial industry. This has opened up new job opportunities for scientists, engineers, and computer science professionals in the field of Computational Finance.
Scientists and engineers with training in neural networks, machine learning, nonparametric statistics, and time series analysis are particularly well positioned for doing state-of-the-art quantitative analysis in the financial industry. A number of major financial institutions now use neural networks and related techniques to manage billions of dollars in assets. Curriculum: The strong demand within the financial industry for technically sophisticated graduates is addressed at OGI by the Master of Science and Certificate Programs in Computational Finance. Unlike a standard two-year MBA, OGI's intensive 12-month programs are directed at training scientists, engineers, and technically oriented financial professionals in the area of quantitative finance. The master's programs lead to a Master of Science in Computer Science and Engineering (CSE track) or in Electrical Engineering (EE track). The MS programs can be completed within 12 months on a full-time basis. In addition, OGI has introduced a Certificate program designed to provide professionals in engineering and finance with a means of upgrading their skills or acquiring new skills in quantitative finance on a part-time basis. The Computational Finance MS concentrations feature a unique combination of courses that provides a solid foundation in finance at a non-trivial, quantitative level, plus the essential core knowledge and skill sets of computer science or the information technology areas of electrical engineering. These skills are important for advanced analysis of markets and for the development of state-of-the-art investment analysis, portfolio management, trading, derivatives pricing, and risk management systems. The MS in CSE is ideal preparation for students interested in securing positions in information systems in the financial industry, while the MS in EE provides rigorous training for students interested in pursuing careers as quantitative analysts at leading-edge financial firms. In addition to the core courses in Computational Finance, CS, and EE, students can take elective courses in neural networks, nonparametric statistics, pattern recognition, and adaptive signal processing. The curriculum is strongly project-oriented, using state-of-the-art computing facilities and live/historical data from the world's major financial markets provided by Dow Jones Telerate. Students are trained in the use of high-level numerical and analytical software packages for analyzing financial data. OGI has established itself as a leading institution in research and education in Computational Finance. Moreover, OGI has strong research programs in a number of areas that are highly relevant for work in quantitative analysis and information systems in the financial industry. These include neural networks, nonparametric statistics, signal processing, time series analysis, human computer interaction, database systems, transaction processing, object-oriented programming, and software engineering. ----------------------------------------------------------------------- Admissions ----------------------------------------------------------------------- Applications for entrance into the Computational Finance MS programs for Fall Quarter 1997 are currently being considered.
The deadlines for receipt of applications are:
January 15 (Early Decision Deadline, decisions by February 15)
March 15 (Final Deadline, decisions by April 15)
A candidate must hold a bachelor's degree in computer science, engineering, mathematics, statistics, one of the biological or physical sciences, finance, econometrics, or one of the quantitative social sciences. Candidates who hold advanced degrees in these fields or who have experience in the financial industry are also encouraged to apply. Applications for the Certificate Program are considered on an ongoing basis for entrance in any quarter.

----------------------------------------------------------------------
Contact Information
----------------------------------------------------------------------
For general information and admissions materials: Visit our web site at: http://www.cse.ogi.edu/CompFin/ or contact: Office of Admissions, Oregon Graduate Institute, P.O. Box 91000, Portland, OR 97291-1000. E-mail: admissions at admin.ogi.edu Phone: (503)690-1027
For special inquiries: E-mail: compfin at cse.ogi.edu
======================================================================

From gorr at willamette.edu Tue Nov 26 17:29:01 1996 From: gorr at willamette.edu (Jenny Orr) Date: Tue, 26 Nov 1996 14:29:01 -0800 (PST) Subject: NIPS 96 Workshop Announcement Message-ID: <199611262229.OAA01916@mirror.willamette.edu>

NIPS 96 Workshop Announcement:
==============================
Tricks of the Trade
Workshop Schedule: Friday, December 6, 1996, Snowmass, CO, 7:30am-10:30am and 4pm-7pm
----------------------------------------------------------------------------
ORGANIZERS: Jenny Orr, Willamette University, gorr at willamette.edu; Klaus Muller, GMD First, Germany, klaus at first.gmd.de; Rich Caruana, Carnegie Mellon, caruana at cs.cmu.edu
----------------------------------------------------------------------------
OBJECTIVES: Using neural networks to solve difficult problems often requires as much art as science. Researchers and practitioners acquire, through experience and word-of-mouth, techniques and heuristics that help them succeed. Often these ``tricks'' are theoretically well motivated. Sometimes they're the result of trial and error. In this workshop we ask you to share the ``tricks'' you have found helpful. Our focus will be mainly on regression and classification. For abstracts on talks and other information, see our web page at http://www.willamette.edu/~gorr/nipsws.htm
----------------------------------------------------------------------------
Morning Session
7:30am: Jenny Orr, Welcome
7:35am: Yann LeCun, To be announced
8:20am: Nicol Schraudolph, Bettering Backprop
8:35am: Martin Schlang, Stable on-line adaptation or initial learning
8:50am: 20 minute discussion and break
9:10am: Larry Yaeger, Reducing A Priori Biases Improves Recognition Accuracy (at the Expense of Classification Accuracy)
9:50am: Steve Lawrence, Neural Network Classification and Unequal Prior Class Probabilities
10:05am: Shumeet Baluja, Sampling Negative Instances by Collecting False Positives
10:15am: Discussion and morning wrap-up
----------------------------------------------------------------------------
Afternoon Session
4:00pm: Hans Georg Zimmermann, Training of a neural net in time series analysis
4:40pm: Tony Plate, Convergence of hyperparameters in Mackay's Bayesian Backpropagation
4:55pm: Jan Larsen, Design and Regularization of Neural Networks: The Optimal Use of a Validation Set
5:10pm: Renee S. Renner, Optimization techniques for improving performance and training of a constructive neural network
5:25pm: Patrick van der Smagt, Optimisation in feed-forward neural networks: On conjugate gradient, network size, and local minima
5:40pm: 15 minute discussion and break
5:55pm: David Horn, Optimal Ensemble Averaging of Neural Networks
6:10pm: Timothy X Brown, Jump Connectivity
6:20pm: Chan Lai-Wan, Tricks that make recurrent networks work
6:30pm: Rich Caruana, 101 Fun (and Useful) Things to Do With Extra Outputs
6:45pm: Discussion and workshop wrap-up

From atick at monaco.rockefeller.edu Wed Nov 27 09:33:42 1996 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Wed, 27 Nov 1996 09:33:42 -0500 Subject: Network: CNS, November Issue, TOC Message-ID: <9611270933.ZM20082@monaco.rockefeller.edu>

The November issue of Network is now online. Those with institutional subscriptions should be able to access the full journal from their desktop computer. FYI, here is the table of contents. Also, as of the next issue, we will use incremental publishing, so a paper will be available online as soon as it is ready! Be sure to check out the topical review in this issue on ** human colour perception and its adaptation ** by M Webster -- it is one of the most comprehensive reviews in this area.

NETWORK: COMPUTATION IN NEURAL SYSTEMS Volume 7 (1996) Issue 4, Pages: 587--758
Editorial: Looking Ahead
TOPICAL REVIEW
587 Human colour perception and its adaptation, M A Webster
PAPERS
635 Neural model of visual stereomatching: slant, transparency and clouds, J A Marshall, G J Kalarickal and E B Graves
671 A coupled attractor model of the rodent head direction system, A D Redish, A N Elga and D S Touretzky
687 A single spike suffices: the simplest form of stochastic resonance in model neurons, M Stemmler
717 Estimate of mutual information carried by neuronal responses from small data samples, P Gurzi, G Biella and A Spalvieri
727 Topology selection for self-organizing maps, A Utsugi
741 A search for the optimal thresholding sequence in an associative memory, H Hirase and M Recce
757 AUTHOR INDEX (with titles), Volume 7

-- Joseph J. Atick, Rockefeller University, 1230 York Avenue, New York, NY 10021. Tel: 212 327 7421 Fax: 212 327 7422

From linster at berg.HARVARD.EDU Wed Nov 27 13:22:46 1996 From: linster at berg.HARVARD.EDU (Christiane Linster) Date: Wed, 27 Nov 1996 13:22:46 -0500 (EST) Subject: NIPS WS Program Message-ID:

WORKSHOP PROGRAM
NIPS'96 Postconference Workshop
NEURAL MODULATION AND NEURAL INFORMATION PROCESSING
Snowmass (Aspen), Colorado USA, Friday Dec 6th, 1996
Akaysha Tang and Christiane Linster

OBJECTIVES
Neural modulation is ubiquitous in the nervous system and can provide the neural system with additional computational power that has yet to be characterized. From a computational point of view, the effects of neuromodulation on neural information processing can be far more sophisticated than the simple increased/decreased gain control assumed by many modelers. We would like to bring together scientists from diverse fields of studies, including psychopharmacology, behavioral genetics, neurophysiology, neural networks, and computational neuroscience.
We hope, through sessions of highly critical, interactive and interdisciplinary discussions,
* to identify the strengths and weaknesses of existing research methodology and practices within each of the fields;
* to work out a series of strategies to increase the interactions between experimental and theoretical research; and
* to further our understanding of the role of neuromodulation in neural information processing.

PROGRAM
Morning session
1. Vladimir Brezina, Mt. Sinai School of Medicine "Functional consequences of divergence and convergence in physiological signaling pathways"
2. Sacha Nelson, Brandeis University "Neuromodulation of short-term plasticity at visual cortical synapses" and Afterpotentials: Seeing their Functional Role"
3. Michael Hasselmo and Christiane Linster, Harvard University "Acetylcholine, Noradrenaline and Memory"
4. Sue Becker, McMaster University "On the Computational Utility of Contextually Modulated Plasticity: A Model, some Empirical Results and Speculations on Cortical Function"
Afternoon session
5. Eytan Ruppin, Tel Aviv University "Synaptic Runaway in Associative Networks and The Pathogenesis of Schizophrenic Psychosis"
6. Dennis Feeney, University of New Mexico "Late Noradrenergic Pharmacotherapy Promotes Functional Recovery After Cortical Injury"
7. Thomas Brozoski, Grinnell College "Forebrain Dopamine Fluctuations During Pavlovian Conditioning"
8. Terrence Sejnowski, Salk Institute "Dopamine and Temporal-Difference Learning in the Basal Ganglia"

From moore at santafe.edu Wed Nov 27 18:09:03 1996 From: moore at santafe.edu (Cris Moore) Date: Wed, 27 Nov 1996 16:09:03 -0700 (MST) Subject: preprint available Message-ID:

A preprint is available, entitled: Predicting Non-linear Cellular Automata Quickly by Decomposing them into Linear Ones, C. Moore and T. Pnin. http://www.santafe.edu/~moore ftp://ftp.santafe.edu/pub/moore/semi.ps

Abstract: We show that a wide variety of non-linear cellular automata (CAs) can be decomposed into a {\em quasidirect product} of linear ones. These CAs can be predicted by parallel circuits of depth $\ord(\log^2 t)$ using gates with binary inputs, or $\ord(\log t)$ depth if ``sum mod $p$'' gates with an unbounded number of inputs are allowed. Thus these CAs can be predicted by (idealized) parallel computers much faster than by explicit simulation, even though they are non-linear. This class includes any CA whose rule, when written as an algebra, is a solvable group. We also show that CAs based on nilpotent groups can be predicted in depth $\ord(\log t)$ or $\ord(1)$ by circuits with binary or ``sum mod $p$'' gates respectively. We use these techniques to give an efficient algorithm for a CA rule which, like elementary CA rule 18, has diffusing defects that annihilate in pairs. This can be used to predict the motion of defects in rule 18 in $\ord(\log^2 t)$ parallel time.

- Cris moore at santafe.edu
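The linear building block that the abstract above relies on can be made concrete with elementary rule 90, an additive CA over GF(2): its state after t steps has a closed form (C(t,k) mod 2 via Lucas' theorem), so it can be predicted without simulating the t intermediate steps. Below is a minimal Python sketch of that linear special case only; it is not the quasidirect-product construction of the paper, and the lattice width, seed, and horizon are arbitrary choices:

    import numpy as np

    def rule90_step(x):
        # explicit update: x_i(t+1) = x_{i-1}(t) XOR x_{i+1}(t), periodic lattice
        return np.roll(x, 1) ^ np.roll(x, -1)

    def rule90_fast(x0, t):
        # Linearity over GF(2): state(t) = XOR over k of C(t,k) * shift(x0, t - 2k),
        # and C(t,k) is odd exactly when the bits of k are a subset of t (Lucas).
        out = np.zeros_like(x0)
        for k in range(t + 1):
            if (t & k) == k:                    # C(t, k) mod 2 == 1
                out ^= np.roll(x0, t - 2 * k)
        return out

    x0 = np.zeros(64, dtype=np.uint8)
    x0[32] = 1                                  # single seed cell
    x = x0.copy()
    for _ in range(25):
        x = rule90_step(x)
    print(np.array_equal(x, rule90_fast(x0, 25)))   # True

Only the terms with C(t,k) odd contribute, so the closed form touches roughly 2^popcount(t) shifted copies of the initial state instead of performing t full sweeps.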
From brossmann at firewall.sni-usa.com Wed Nov 27 16:05:22 1996 From: brossmann at firewall.sni-usa.com (Frank Brossmann) Date: Wed, 27 Nov 1996 16:05:22 -0500 Subject: Job interview at NIPS'96 for Advanced Technologies Group, Siemens Nixdorf Information Systems, Inc. Message-ID: <199611272105.QAA07127@passer.sni-usa.com>

SIEMENS NIXDORF INFORMATION SYSTEMS, INC. ADVANCED TECHNOLOGIES GROUP
NEURAL NETWORK BASED MODELING
OPPORTUNITY IN RESEARCH / CONSULTING / MARKETING

During NIPS'96, we will be interviewing for a challenging position at a newly founded startup enterprise based in Boston, MA. The successful applicant for this job should have a Masters, Ph.D. or MBA in computer science, mathematics, statistics, physics, economics, finance, or a related field. Experience in using statistical artificial intelligence, such as neural networks and other approximation and modeling methods, is crucial, as is an interest in finance, economics, and their underlying structural processes. The applicant should be willing to work in close collaboration with other group members, should be full of creativity, enthusiasm and flexibility, and should also enjoy traveling to conferences and customers for consulting purposes.

In the first phase, we will develop some case studies in finance, marketing, and Internet-related applications using the advanced simulator platform SENN (Software Environment for Neural Networks). This engaging work will leave space for the applicant's ideas and creativity. It will focus on solutions for real-world problems. The possibility of collaboration with Prof. Andreas Weigend, New York (Stern School of Business, NYU), and Dr. Georg Zimmermann, Munich (Siemens AG Corporate Research and Development) exists. Subsequent opportunities include advising top US companies in their data modeling efforts, as well as introducing state-of-the-art research and its software implementation to the relevant communities. In the longer run, we also expect significant contributions to the development and implementation of a sales and marketing strategy for SENN. We expect the applicant to start working on January 1, 1997.

If you are interested, please contact at NIPS'96: Georg Zimmermann, Ralph Neuneier, Michiaki Taniguchi, or Martin Schlang. If you are not attending NIPS'96, please email your resume by December 5 to brossmann at sni-usa.com (plain text or MS Word).

Frank Brossmann, Director, Advanced Technologies Group, Siemens Nixdorf Information Systems, Inc., 200 Wheeler Road, Burlington, MA 01803. Email: brossmann at sni-usa.com

From back at zoo.riken.go.jp Wed Nov 27 23:03:29 1996 From: back at zoo.riken.go.jp (Andrew Back) Date: Thu, 28 Nov 1996 13:03:29 +0900 (JST) Subject: NIPS*96 Workshop - Blind Signal Processing Message-ID:

Dear colleagues, Please find attached the schedule for the NIPS*96 workshop on Blind Signal Processing. Further information, including abstracts and some papers, is available on the workshop WWW homepage: http://www.bip.riken.go.jp/absl/back/nips96ws/nips96ws.html
Andrzej Cichocki and Andrew Back
----------------------------------------------------------------------------
NIPS*96 Workshop
Blind Signal Processing and Their Applications (Neural Information Processing Approaches)
Saturday Dec 7, 1996, Snowmass (Aspen), Colorado

Workshop Organizers: Andrzej Cichocki and Andrew D. Back, Brain Information Processing Group, Frontier Research Program, RIKEN, The Institute of Physical and Chemical Research, Hirosawa 2-1, Wako-shi, Saitama, 351-01, Japan. Phone: +81-48-462-1111 ext: 6733 Fax: +81-48-462-4633 Email: cia at hare.riken.go.jp back at zoo.riken.go.jp

Blind Signal Processing is an emerging area of research in neural networks and image/signal processing with many potential applications. It originated in France in the late 1980s, and since then there has continued to be strong and growing interest in the field.
Blind signal processing problems can be classified into three areas: (1) blind signal separation of sources and/or independent component analysis (ICA), (2) blind channel identification, and (3) blind deconvolution and blind equalization. These areas will be addressed in this workshop. See the objectives below for further details.
----------------------------------------------------------------------------
Objectives

The main objectives of this workshop are to:
- Give presentations by experts in the field on the state of the art in this exciting area of research.
- Compare the performance of recently developed adaptive unsupervised learning algorithms for neural networks.
- Discuss issues surrounding prospective applications and the suitability of current neural network models. Hence we seek to provide a forum for better understanding the current limitations of neural network models.
- Examine issues surrounding local, online adaptive learning algorithms, their robustness, and their biological plausibility or justification.
- Discuss issues concerning effective computer simulation programs.
- Discuss open problems and perspectives for future research in this area.

In particular, we intend to discuss the following items:
1. Criteria for blind separation and blind deconvolution problems (both for time and frequency domain approaches).
2. Natural (or relative) gradient approach to blind signal processing (a small sketch of this update follows the list).
3. Neural networks for blind separation of time delayed and convolved signals.
4. On-line adaptive learning algorithms for blind signal processing with variable learning rate (learning of learning rate).
5. Open problems, e.g. dynamic on-line determination of the number of sources (more sources than sensors), influence of noise, robustness of algorithms, stability, convergence, identifiability, non-causal, non-stationary dynamic systems.
6. Applications in different areas of science and engineering, e.g., non-invasive medical diagnosis (EEG, ECG), telecommunication, voice recognition problems, image processing and enhancement.
----------------------------------------------------------------------------
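For item 2, the natural (or relative) gradient update popularized by Amari and colleagues takes the compact form W <- W + eta (I - phi(y) y^T) W, with y = W x and phi a suitable score nonlinearity. Here is a minimal batch sketch in Python; the Laplacian sources, mixing matrix, step size, and iteration count are illustrative assumptions, not anything specified in this announcement:

    import numpy as np

    rng = np.random.default_rng(0)

    n = 5000
    S = rng.laplace(size=(2, n))                # two heavy-tailed sources
    A = np.array([[1.0, 0.6], [0.4, 1.0]])      # unknown mixing matrix
    X = A @ S                                   # observed mixtures

    W = np.eye(2)                               # unmixing matrix to be learned
    eta = 0.01
    for _ in range(300):
        Y = W @ X
        phi = np.tanh(Y)                        # score for super-Gaussian sources
        W += eta * (np.eye(2) - (phi @ Y.T) / n) @ W   # natural-gradient step

    print(np.round(W @ A, 2))                   # roughly a scaled permutation of I

The trailing multiplication by W is what makes the gradient "relative": the update is equivariant, so convergence behavior does not depend on how badly conditioned the unknown mixing matrix is.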
Workshop Schedule
7:30-7:50 A Review of Blind Signal Processing: Results and Open Issues. Andrzej Cichocki and Andrew Back, Brain Information Processing Group, Frontier Research Program, RIKEN, Japan
7:50-8:10 Natural Gradient in Blind Separation and Deconvolution - Information Geometrical Approach. Shun-ichi Amari, Brain Information Processing Group, Frontier Research Program, RIKEN, Japan
8:10-8:30 Entropic Contrasts for Blind Source Separation. Jean-Francois Cardoso, Ecole Nationale Superieure des Telecommunications, Paris, France
8:30-8:40 Coffee Break/Discussion Time
8:40-9:00 Several Theorems on Information Theoretic Independent Component Analysis. Lei Xu, J. Ruan and Shun-ichi Amari, The Chinese University of Hong Kong, Hong Kong; Brain Information Processing Group, FRP, RIKEN, Japan
9:00-9:20 From Neural PCA to Neural ICA. Erkki Oja, Juha Karhunen and Aapo Hyvarinen, Helsinki University of Technology, Finland
9:20-9:40 Local Adaptive Algorithms and their Convergence Analysis for Decorrelation and Blind Equalization/Deconvolution. Scott Douglas and Andrzej Cichocki, Department of EE, University of Utah, USA; FRP, RIKEN, Japan
9:40-10:00 Negentropy and Kurtosis as Projection Pursuit Indices Provide Generalized ICA Algorithms. Mark Girolami and Colin Fyfe, The University of Paisley, Scotland
10:00-10:15 Bussgang Methods for Separation of Multipath Mixtures. Russell Lambert, Dept of Electrical Engineering, University of Southern California, USA
10:15-10:30 Discussion Time
4:00-4:20 Blind Signal Separation by Output Decorrelation. Dominic C.B. Chan, Simon J. Godsill and Peter J.W. Rayner, University of Cambridge, United Kingdom
4:20-4:40 Temporal Decorrelation Using Teacher Forcing Anti-Hebbian Learning and its Application in Adaptive Blind Source Separation. Jose C. Principe, Chuan Wang, and Hsiao-Chun Wu, University of Florida, USA
4:40-5:00 A Direct Adaptive Blind Equalizer for Multi-Channel Transmission. Seungjin Choi and Ruey-wen Liu, University of Notre Dame, USA
5:00-5:10 Coffee Break/Discussion Time
5:10-5:30 IIR Filters for Blind Deconvolution Using Information Maximization. Kari Torkkola, Motorola Phoenix Corporate Research, USA
5:30-5:40 Information Maximization and Independent Component Analysis: Is there a difference? D. Obradovic and G. Deco, Siemens AG, Corporate Research and Development, Germany
5:40-5:50 Convergence Properties of Cichocki's Extension of the Herault-Jutten Source Separation Neural Network. Yannick Deville, Laboratoires d'Electronique Philips S.A.S. (LEP), France
5:50-6:10 Independent Component Analysis of EEG and ERP Data. Tzyy-Ping Jung, Scott Makeig, Anthony J. Bell and Terrence J. Sejnowski, Computational Neurobiology Laboratory, The Salk Institute, CNL, USA
6:10-6:20 Blind separation of delayed and convolved sources - the problem. Tony Bell and Te-Won Lee, Computational Neurobiology Laboratory, The Salk Institute, CNL, USA
6:20-6:30 Information Back-propagation for Blind Separation of Sources from Non-linear Mixture. Howard H. Yang, Shun-ichi Amari and Andrzej Cichocki, Brain Information Processing Group, FRP, RIKEN, Japan
6:30-7:00 Discussion Time
--

From janetw at cs.uq.edu.au Thu Nov 28 03:44:08 1996 From: janetw at cs.uq.edu.au (Janet Wiles) Date: Thu, 28 Nov 1996 18:44:08 +1000 (EST) Subject: 6-month Postdoc Available - AUSTRALIA Message-ID:

6-MONTH RESEARCH FELLOWSHIP (POSTDOC LEVEL)
Cognitive Science Group, Department of Computer Science, University of Queensland, Brisbane, Australia, 4072.
TOPIC: Recurrent neural networks and language processing

This short-term position is to study computational properties of neural networks in language processing. The ideal candidate will have a strong background in neural networks or dynamic systems theory and preferably some experience with linguistics or formal language theory. Good programming skills and familiarity with Unix will be expected. The position is available for six months during 1997. Salary will be at the level of a University of Queensland postdoctoral position (around A$38,000). Relocation expenses to and from Brisbane cannot be provided on this grant.
To apply for this position, please send your CV and the names of three referees to Dr J Wiles at the address below, or email janetw at cs.uq.edu.au

-------------------------------------------------------------
Dr Janet Wiles                                  _-_|\
Director of the Cognitive Science Program      /     *
Depts of Computer Science and Psychology       \_.-._/
The University of Queensland                         v
Brisbane QLD 4072 AUSTRALIA
http://psy.uq.oz.au/CogPsych/home.html
-------------------------------------------------------------

From maass at igi.tu-graz.ac.at Thu Nov 28 07:53:50 1996 From: maass at igi.tu-graz.ac.at (Wolfgang Maass) Date: Thu, 28 Nov 96 13:53:50 +0100 Subject: new paper on the effect of analog noise in neural computation Message-ID: <199611281253.AA03322@figids03.tu-graz.ac.at>

The following paper is now available for copying from http://www.math.jyu.fi/~orponen/papers/noisyac.ps The paper has 19 pages.

"On the Effect of Analog Noise in Discrete-Time Analog Computations" by Wolfgang Maass and Pekka Orponen

Wolfgang Maass, Inst. for Theor. Comp. Sci., Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria. maass at igi.tu-graz.ac.at
Pekka Orponen, Department of Mathematics, University of Jyvaskyla, P.O. Box 35, Jyvaskyla, Finland. orponen at math.jyu.fi

Abstract: We introduce a model for analog noise in analog computations with discrete time that is flexible enough to cover the most important concrete cases, such as analog noise in sigmoidal neural nets and networks of spiking neurons. The noise model can also be applied to cases where there are dependencies among the noise-sources, and to hybrid analog/digital systems. In contrast to previous models for noise in analog computations (which demand that the output of the computation has to be 100% reliable), we assume that the output of a noisy analog computation has to be correct only with a certain probability (which may be chosen to be very high). We believe that this convention is more adequate for the analysis of "real world" analog computations. In addition this convention is consistent with the common models for noisy digital computations in computational complexity theory. We show that under very general conditions the presence of analog noise reduces the power of analog computational models to that of a finite automaton, and we exhibit bounds for the number of states of such a finite automaton. We also prove a new type of upper bound for the VC-dimension of computational models with analog noise. In the case of a feedforward sigmoidal neural net this bound does not depend on the total number of units in the net. An extended abstract of this paper will appear in the Proceedings of NIPS '96.
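The finite-automaton result above has a simple flavor that can be seen in a toy simulation: when a bounded state variable is perturbed by bounded noise at every step, only values separated by much more than the accumulated noise remain reliably distinguishable, so only finitely many reliably distinct states fit into the state space. The following Python fragment illustrates this intuition only; it is not the authors' model or proof, and the map, noise level, horizon, and trial count are arbitrary choices:

    import numpy as np

    rng = np.random.default_rng(0)

    def distinguishable(x0a, x0b, eps=0.02, T=200, trials=4000):
        # Try to hold an analog value with a noisy identity map
        # x <- clip(x + noise); report how often a midpoint threshold
        # still recovers which of the two initial values was stored.
        mid = 0.5 * (x0a + x0b)
        correct = 0.0
        for x0, sign in ((x0a, -1), (x0b, +1)):
            x = np.full(trials, x0)
            for _ in range(T):
                x = np.clip(x + rng.uniform(-eps, eps, trials), 0.0, 1.0)
            correct += np.mean(sign * (x - mid) > 0)
        return correct / 2

    print(distinguishable(0.49, 0.51))   # ~0.5: too close, indistinguishable
    print(distinguishable(0.10, 0.90))   # ~1.0: coarsely separated states survive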
From niranjan at eng.cam.ac.uk Thu Nov 28 14:54:59 1996 From: niranjan at eng.cam.ac.uk (niranjan@eng.cam.ac.uk) Date: Thu, 28 Nov 96 19:54:59 GMT Subject: CallForPapers Message-ID: <9611281954.9193@baby.eng.cam.ac.uk>

Last Call, with new deadline *** 20 December 1996 ***
========================================================================
IEE ANN 97 - CALL FOR PAPERS
Fifth International Conference on "Artificial Neural Networks"
Churchill College, University of Cambridge, UK: 7-9 July 1997

Objectives: This is the fifth in a series of successful conferences bringing together up-to-date research in the field of artificial neural networks. This includes theoretical advances in statistical aspects of learning, dynamical systems theory and function approximation in the context of artificial neural networks, as well as practical applications that have benefited from the use of this technology. Topics in these areas are:
Architectures and Learning Algorithms: Computational Learning Theory; Learning Algorithms; Function Approximation; Approaches to Model Selection; Density Estimation; Model Validation and Verification; Comparisons with Classical Techniques.
Applications: Vision and Robotics; Speech and Language Processing; Monitoring Complex Systems such as Engines; Medical Diagnostics; Financial Systems Modelling.
Advances in Implementation: Parallel Hardware; Software Systems.

Contributions: Papers reporting original research related to the above areas are invited. A selection will be made based on originality of the contribution and clarity of presentation. In addition to oral presentations, a selection of papers submitted will be presented as posters. Poster presentations will be seen as offering better interaction with participants. Please indicate in the reply slip if you would prefer to present your submission as a poster. It is expected that papers presented in this form will be linked to particular sessions, and the Chairman of each session will give a brief introduction to the posters. Prospective authors are required to make their submissions in the form of complete draft papers of a maximum of six pages by the deadline (see below). A clear summary and full details for corresponding with the authors (including email) should be provided. Authors of accepted papers will be required to provide a camera-ready version of their full paper of up to six pages. A LaTeX style file and/or instructions on the layout will be made available nearer the time.

Tutorial on "An Introduction to Neural Computing Applications and Techniques": On the morning of Monday, 7 July 1997, there will be a half-day tutorial session prior to the formal opening of the Conference. Tutorials will cover basic concepts in neural computing and their applications.

Deadlines: Intending authors should note the following dates:
Closing date for the submission of full papers: Friday, 20 December 1996
Notification of acceptance: February 1997
Deadline for any revised papers: Friday, 14 March 1997

Venue: The Conference will be held at Churchill College, Cambridge. Cambridge is a city in the English countryside that is world renowned for its University. There are now some thirty colleges, the earliest, Peterhouse, having been founded by the Bishop of Ely in 1284; and their rich architectural heritage is there to be enjoyed. Today Cambridge combines its academic heritage with the atmosphere of a modern business and tourist centre. There are varied aspects of the city to be enjoyed, from the historic colleges to the bustling market square, browsing along the busy shopping streets, punting on the River Cam, visiting the University museums, or taking in the delights of the surrounding landscape.

Committee: Dr M Niranjan, University of Cambridge (Chairman); Dr J Austin, University of York; Dr S Collins, Defence Research Agency; Dr P H Cowley, Rolls Royce Applied Science Laboratories; Professor C J Harris, University of Southampton; Dr I T Nabney, Aston University; Dr S Olafsson, BT Laboratories.
Corresponding Members (To be confirmed): Professor F Fogelman-Soulie, SLIGOS, France; Dr C L Giles, NEC Research Institute, USA; Dr R M Goodman, Caltech, USA; Dr D McMichael, CSSIP, Australia; Dr M Plumbley, NEuroNet (Kings College, UK); Professor U Rueckert, University of Paderborn, Germany; Dr C J Satchwell, Aragorn Systems Ltd, Monaco.

Organisers: The Computing and Control Division of the IEE.
The following organisations have been invited to co-sponsor the event: The British Computer Society; British Machine Vision Association; Department of Trade and Industry - Electronics and Engineering Division; Engineering and Physical Sciences Research Council - Informatics Division; European Neural Networks Society; EURASIP; The Institute of Mathematics and its Applications; The Institute of Physics; Neural Computing Applications Forum.

PAPER FORMAT: Authors wishing to contribute to the Conference should send an original and four copies of each paper by 20 December 1996 to the ANN97 Secretariat. Papers should be submitted on A4 sheets in a 2-column format. Each column is 6.69 cm in width with a spacing of 1.27 cm between columns. Text should be typed in 10 point Times-Roman font with single line spacing. The first page should incorporate (first line) the title, (double line space) author(s), (double line space) and affiliations. Only six pages are allowed per paper (inclusive of figures, tables, etc.). Please contact the ANN97 Secretariat should you require a sample of the required layout. If presented in good quality, suitable for camera-ready copying, successful contributions will be published from the original copy submitted in the first instance. Otherwise, authors will be requested to re-submit. All contributions will be published as part of the IEE Conference Proceedings series and will be available to all delegates at the event.
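Until the official style file appears, the stated geometry can be approximated in LaTeX along the following lines. This is an unofficial sketch derived only from the numbers quoted above, not the organisers' style file:

    \documentclass[10pt,a4paper,twocolumn]{article}
    \usepackage{times}                          % 10 point Times-Roman text
    \usepackage[a4paper,textwidth=14.65cm]{geometry}
    \setlength{\columnsep}{1.27cm}              % 2 x 6.69cm columns + 1.27cm gap
    \begin{document}
    \title{Paper Title}
    \author{A. Author \\ Affiliation}
    \date{}
    \maketitle
    Body text, single-spaced, at most six pages including figures and tables.
    \end{document}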
EXHIBITION: It is proposed to organise an exhibition in conjunction with the Conference. For further details please contact the ANN 97 Secretariat.
_______________________________________________________________________
Please complete in BLOCK capitals and return to: ANN97 Secretariat, IEE Conference Services, Savoy Place, London WC2R 0BL, UK. Fax: +44 (0)171 240 8830, Email: nashley at iee.org.uk OR joconnell at iee.org.uk
I am interested in the ANN 97 conference and require ... copies of the provisional programme and registration form.
I wish to offer a contribution provisionally entitled: .......................................................
Which falls within topic area number: ..................
* 1. Vision * 2. Speech * 3. Control and robotics * 4. Biomedical * 5. Financial and business * 6. Signal processing * 7. Radar/sonar * 8. Data fusion * 9. Analogue * 10. Digital * 11. Optical * 12. Learning algorithms * 13. Network architectures * 14. Functional approximations * 15. Statistical Methods * 16. None of the above
I wish to offer a contribution in the form of ( ) an oral presentation ( ) a poster presentation
I require further details on ( ) Scholarship ( ) Exhibition ( ) Tutorial
Personal details: Name: Organisation: Address: Postcode/Zipcode: Country: Telephone: Fax: Email:
========================================================================

From mcasey at volen.brandeis.edu Fri Nov 29 19:18:18 1996 From: mcasey at volen.brandeis.edu (Mike Casey) Date: Fri, 29 Nov 1996 19:18:18 -0500 (EST) Subject: new paper on the effect of analog noise in neural computation In-Reply-To: <199611281253.AA03322@figids03.tu-graz.ac.at> Message-ID:

Dear Connectionists, It seems that the paper "On the Effect of Analog Noise in Discrete-Time Analog Computations" by Wolfgang Maass and Pekka Orponen has misrepresented the work in my recent Neural Computation article "The Dynamics of Discrete-Time Computation, with Application to Recurrent Neural Networks and Finite State Machine Extraction" (Volume 8, number 6, pp. 1135-1178, 1996). Maass and Orponen repeatedly take credit for the following result:

> We show that under very general conditions the presence of analog
> noise reduces the power of analog computational models to that of a
> finite automaton....

This was shown in Corollary 3.1 of my paper. Maass and Orponen fail to mention this in their paper. My goal, in the NC paper, was to investigate the representational powers of RNNs, and to do so I included analog noise to avoid unrealistic assumptions and conclusions about computation in physical systems (including physically instantiated RNNs). The model of analog noise that I used was originated by the mathematician Rufus Bowen and includes the model in Maass and Orponen's paper as a special case. So the discussion in their paper about the generality of their model of analog noise as opposed to those previously used is based on a misinterpretation of Bowen's pseudo-orbit formalism, which makes no assumptions about the distribution of the noise process beyond its boundedness (which Maass and Orponen also assume in their paper). If you would like to verify these claims, my paper is available in Neural Computation or on the web at http://eliza.ccs.brandeis.edu/people/mcasey/papers.html Thanks for your time and attention. Best regards, Mike

******************************************************************** Mike Casey Volen Center for Complex Systems Studies Brandeis University Waltham, MA 02254 email: mcasey at volen.brandeis.edu http://eliza.cc.brandeis.edu/people/mcasey (617) 736-3144 (voice) (617) 736-3142 (fax)

From mcasey at volen.brandeis.edu Sat Nov 30 20:00:10 1996 From: mcasey at volen.brandeis.edu (Mike Casey) Date: Sat, 30 Nov 1996 20:00:10 -0500 (EST) Subject: analog noise In-Reply-To: <199611300118.AA05387@figids03.tu-graz.ac.at> Message-ID:

Regarding the RNN+noise implying only finite state machine power result, I completely agree that their result is a nice extension of the corollary that I proved (which I recently found to be a conjecture/assumption of Turing in his 1936 paper), but they failed to mention that I had already proved something very similar. It seemed that a discussion of the type posted here belonged in the paper.

> In addition we have also relaxed the definition of analog noise in
> our model to include also Gaussian noise etc, but this is perhaps
> a less essential point.

This is still incorrect. Their "clipped" Gaussian noise is a special case of bounded noise with arbitrary distribution (Bowen's pseudo-orbit formalism), so there's no sense in which they "relaxed" the definition of analog noise. If the state space is bounded, and the noise is clipped to keep the state of the system in the state space, then there is no loss of generality in using bounded noise. Furthermore, none of their results depend on the noise being unbounded (so even if it were unbounded, it wouldn't lead to anything interesting). Finally, in section 4 of their paper where they concretely discuss RNNs performing computations, they assume that the noise is bounded and that the computation is done with perfect reliability (which were precisely the assumptions that I used which they have spent so much time discrediting in other parts of the paper). I sincerely apologize for taking up more bandwidth with this discussion. I hope that it is of some interest to the community. Any further discussion on my part will take place off-line.
Best regards, Mike

******************************************************************** Mike Casey Volen Center for Complex Systems Studies Brandeis University Waltham, MA 02254 email: mcasey at volen.brandeis.edu http://eliza.cc.brandeis.edu/people/mcasey (617) 736-3144 (voice) (617) 736-3142 (fax)
From bruno at redwood.ucdavis.edu Fri Nov 1 22:52:40 1996 From: bruno at redwood.ucdavis.edu (Bruno A. Olshausen) Date: Fri, 1 Nov 1996 19:52:40 -0800 Subject: NIPS96 workshop on natural images Message-ID: <199611020352.TAA09325@redwood.ucdavis.edu>

NIPS96 workshop announcement:
-----------------------------
The structure of natural images and efficient image coding
Organized by Dan Ruderman and Bruno Olshausen

This one-day workshop will cover recent work on natural image statistics and their relation to visual system design. The discovery of scaling in natural images (Burton and Moorhead 1987, Field 1987, Tolhurst et al 1992, Ruderman and Bialek 1994, Dong and Atick 1995, van der Schaaf and van Hateren 1996) has led to much interest in their statistical structure as well as the constraints these statistics place on efficient coding in the visual system. The work of Atick and Redlich (1990) in particular has demonstrated how the statistics of images may be combined with the efficiency principles of Attneave and Barlow to make quantitative predictions about the properties of ganglion cell receptive fields. Many researchers since have followed suit with other optimization strategies, such as sparse coding (Olshausen and Field 1996, Fyfe and Baddeley 1995) and information maximization (Bell and Sejnowski 1996), in an attempt to relate the response properties of cortical cells to the statistics of natural images in terms of efficient coding. This forum will offer the first open informal discussion of this rapidly-evolving approach toward understanding the visual system.
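The "scaling" referred to above is the empirical finding that the orientation-averaged power spectrum of natural scenes falls off roughly as 1/f^2 with spatial frequency f. A short Python sketch for checking this on any grayscale image follows; the helper name and commented usage are illustrative, and the roughly -2 log-log slope is the typical published finding rather than a guarantee for any particular image:

    import numpy as np

    def radial_power_spectrum(img):
        # radially averaged 2-D power spectrum of a zero-mean image
        f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
        power = np.abs(f) ** 2
        cy, cx = power.shape[0] // 2, power.shape[1] // 2
        y, x = np.indices(power.shape)
        r = np.hypot(y - cy, x - cx).astype(int)
        sums = np.bincount(r.ravel(), weights=power.ravel())
        counts = np.maximum(np.bincount(r.ravel()), 1)
        return (sums / counts)[1:]              # drop the DC bin

    # img = ...                                 # any 2-D float array (natural image)
    # spec = radial_power_spectrum(img)
    # freqs = np.arange(1, len(spec) + 1)
    # print(np.polyfit(np.log(freqs), np.log(spec), 1)[0])   # typically near -2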
Participants: Roland Baddeley, Oxford University; Tony Bell, Salk Institute; Dawei Dong, Caltech; David Field, Cornell University; Jack Gallant, U.C. Berkeley; Hans van Hateren, University of Groningen; David Mumford, Harvard University; Penio Penev, Rockefeller University; Pam Reinagel, Caltech; Harel Shouval, Brown University; Michael Webster, University of Nevada, Reno; Tony Zador, Salk Institute

See the web site, http://redwood.ucdavis.edu/NIPS96/abstracts.html, for abstracts and further information.

From fdoyle at ecn.purdue.edu Sat Nov 2 07:41:31 1996 From: fdoyle at ecn.purdue.edu (Frank Doyle) Date: Sat, 2 Nov 1996 07:41:31 -0500 (EST) Subject: Postdoctoral Position Message-ID: <199611021241.HAA19362@volterra.ecn.purdue.edu>

POSTDOCTORAL POSITION (Biosystems Analysis and Control)
School of Chemical Engineering, Purdue University

This position is part of an ONR- and NSF-funded project to study the baroreflex for possible applications to process control systems. In addition, the project involves strong collaborations with the DuPont company, and extended interactions with the DuPont group are an intended part of this project. The project involves experimental and computational studies aimed at understanding local modulation of cardiac behavior for applications to locally intelligent control systems design. Specific sub-projects include: (i) second messenger modeling of the mechanisms responsible for modulation in a ganglion cell; (ii) computational modeling of the local and central reflex interactions; (iii) abstraction of control principles from the reflex model for chemical process control applications; and (iv) [possibly] experimental studies to validate the local model. The ideal applicant will be able to contribute to the cellular modeling from both a computational and (possibly) an experimental perspective. Familiarity with control systems engineering is also desirable. More information about the laboratory can be found at the URL http://volterra.ecn.purdue.edu/~fdoyle/neuro.html

The position is available immediately and will last a minimum of one year, with the possibility of extension for another year. Applicants should have a PhD in a relevant discipline and solid experience in biochemical engineering and control systems. This work will also involve some amount of cellular neurophysiology; while a strong background in experimental neurophysiology is not a prerequisite, the candidate should be willing to become acquainted with these techniques. Purdue University offers competitive salary and benefits. Applicants should send a CV, a statement of their professional interests (not longer than 1 page), and the names, addresses and telephone numbers of at least two references to Frank Doyle, either via email (fdoyle at ecn.purdue.edu) or via surface mail at the following address: Frank Doyle, School of Chemical Engineering, Purdue University, West Lafayette IN 47907-1283. Purdue University is an equal opportunity, affirmative action educator and employer.

From FRYRL at f1groups.fsd.jhuapl.edu Mon Nov 4 09:37:00 1996 From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.) Date: Mon, 04 Nov 96 09:37:00 EST Subject: New paper available Message-ID: <327D7EE5@fsdsmtpgw.fsd.jhuapl.edu>

A paper entitled "Neuromechanics" has been placed in the Neuroprose neural network paper storage site. It was an invited paper at the 1996 International Conference on Neural Information Processing, Hong Kong, in September.
Title: Neuromechanics

Abstract: Elements of classical, statistical, and even quantum mechanics can be found in the described neural model through analogous constructs of position, momentum, Gibbs distributions, partition functions, and perhaps most importantly, observability. Such analogies suggest that the subject model represents a type of neural mechanics that is distinguished from other mechanical formulations of physics in two important regards. First, physical constants are not constant, but rather represent Lagrange factors that vary over time in response to learning. Secondly, neural systems attempt to optimize the very information-theoretic objective functions upon which their structure is founded. This paper provides an overview of an approach to neural modeling and understanding and highlights correspondences between this model, mechanical formulations of physics, and computational neurophysiology.

Retrieval: Perform an anonymous logon to archive.cis.ohio-state.edu. The paper is located in the directory /pub/neuroprose with the file name "fry.neurmech.ps.Z". The paper is encapsulated PostScript in compressed format; it can be uncompressed by using the UNIX "uncompress" utility. No hardcopies are available.

R. Fry Robert L. Fry Johns Hopkins Road Laurel, MD 20723-6099 robert_fry at jhuapl.edu

From ojensen at cajal.ccs.brandeis.edu Mon Nov 4 18:58:00 1996 From: ojensen at cajal.ccs.brandeis.edu (Ole Jensen - Lisman Lab) Date: Mon, 04 Nov 1996 18:58:00 -0500 (EST) Subject: Physiologically Realistic Memory Network Message-ID: <9611042358.AA04582@cajal.ccs.brandeis.edu>

Physiologically Realistic Memory Network
========================================

The following 4 papers have appeared together as a series in the journal Learning and Memory (1996) 3: 243-287. They can be downloaded in PostScript format from http://eliza.cc.brandeis.edu/people/ojensen/ We have attempted to construct a physiologically realistic memory model based on nested theta/gamma oscillations. The network model can explain important aspects of data from human memory psychology (Lisman and Idiart, Science 267:1512-15) and place cell recordings (PAPER 4). Ole Jensen (ojensen at cajal.ccs.brandeis.edu), Volen Center for Complex Systems, Brandeis University, Waltham MA 02254. Come see our poster at the Neuroscience meeting, Nov 19, Tuesday 1:00 PM, X-$, 549:14.

PAPER 1:
--------
Physiologically Realistic Formation of Autoassociative Memory in Networks with Theta/Gamma Oscillations: Role of Fast NMDA Channels. Learning and Memory (1996) 3:243-256. Ole Jensen, Marco A. P. Idiart, and John E. Lisman

Recordings from brain regions involved in memory function show dual oscillations in which each cycle of a low frequency theta oscillation (5-8 Hz) is subdivided into about 7 subcycles by high frequency gamma oscillations (20-60 Hz). It has been proposed (Lisman and Idiart 1995) that such networks are a multiplexed short-term memory (STM) buffer that can actively maintain about 7 memories, a capability of human STM. A memory is encoded by a subset of principal neurons that fire synchronously in a particular gamma subcycle. Firing is maintained by a membrane process intrinsic to each cell. We now extend this model by incorporating recurrent connections with modifiable synapses to store long-term memory (LTM). The repetition provided by STM gradually modifies synapses in a physiologically realistic way. Because different memories are active in different gamma subcycles, the formation of autoassociative LTM requires that synaptic modification depend on NMDA channels having a time-constant of deactivation that is of the same order as the duration of a gamma subcycle (15-50 msec). Many types of NMDA channels have longer time-constants (150 msec), as for instance those found in the hippocampus, but both fast and slow NMDA channels are present in cortex. This is the first proposal for the special role of these fast NMDA channels. The STM for novel items must depend on activity-dependent changes intrinsic to neurons rather than recurrent connections, which have not developed the required selectivity. Because these intrinsic mechanisms are not error correcting, STM will become slowly corrupted by noise. This limits the accuracy with which LTM can become encoded after a single presentation. Accurate encoding of items in LTM can be achieved by multiple presentations, provided different memory items are presented in a varied interleaved order. Our results indicate that a limited memory capacity STM model can be integrated in the same network with a high capacity LTM model.
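The buffer proposed in PAPER 1 -- roughly seven items held at once, each firing in its own gamma subcycle of every theta cycle -- can be caricatured as a data structure. The toy Python sketch below captures only the multiplexing bookkeeping, not the intrinsic ramping, synaptic, or noise dynamics the paper actually models; all names here are invented for illustration:

    GAMMA_PER_THETA = 7                     # about 7 gamma subcycles per theta cycle

    class ThetaGammaBuffer:
        """Toy STM buffer: each stored item fires in its own gamma subcycle."""
        def __init__(self):
            self.slots = []                 # ordered items, one per subcycle
        def present(self, item):
            if len(self.slots) < GAMMA_PER_THETA:
                self.slots.append(item)     # a new item claims the next subcycle
        def theta_cycle(self):
            # every theta cycle replays all stored items once, in order
            return list(enumerate(self.slots))

    buf = ThetaGammaBuffer()
    for digit in [4, 1, 5, 9, 2, 6, 8]:     # a seven-item "phone number"
        buf.present(digit)
    for subcycle, item in buf.theta_cycle():
        print("gamma subcycle", subcycle, "-> item", item)

In this caricature the capacity limit of about seven falls directly out of the number of gamma subcycles per theta cycle, which is the core of the Lisman and Idiart proposal.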
PAPER 2:
--------
Novel Lists of 7+/-2 Known Items Can Be Reliably Stored in an Oscillatory Short-Term Memory Network: Interaction with Long-Term Memory. Learning and Memory (1996) 3:257-263. Ole Jensen and John E. Lisman

This paper proposes a model for the short-term memory (STM) of unique lists of known items, as for instance a phone number. We show that the ability to accurately store such lists in STM depends strongly on interaction with the pre-existing long-term memory (LTM) for individual items (e.g. digits). We have examined this interaction in computer simulations of a network based on physiologically realistic membrane conductances, synaptic plasticity processes and brain oscillations. In the model, seven short-term memories can be kept active, each in a different gamma-frequency subcycle of a theta frequency oscillation. Each STM is maintained and timed by an activity-dependent ramping process. LTM is stored by the strength of synapses in recurrent collaterals. The presence of pre-existing LTM for an item greatly enhances the ability of the network to store an item in STM. Without LTM, the precise timing required to keep cells firing within a given gamma subcycle cannot be maintained and STM is gradually degraded. With LTM, timing errors can be corrected and the accuracy and order of items is maintained. This attractor property of STM storage is remarkable because it occurs even though there is no LTM that identifies which items are on the list or their order. Multiple known items can be stored in STM, even though their representation is overlapping. However, multiple identical memories cannot be stored in STM, consistent with the psychophysical demonstration of repetition blindness. Our results indicate that meaningful computation (memory completion) can occur in the millisecond range during an individual gamma cycle.

PAPER 3:
--------
Theta/Gamma Networks with Slow NMDA Channels Learn Sequences and Encode Episodic Memory: Role of NMDA Channels in Recall. Learning and Memory (1996) 3:264-278. Ole Jensen and John E. Lisman

This paper examines the role of slow NMDA channels (deactivation about 150 msec) in networks that multiplex different memories in different gamma subcycles of a low frequency theta oscillation.
The NMDA channels are in the synapses of recurrent collaterals and govern synaptic modification in accord with known physiological properties. Because slow NMDA channels have a time-constant that spans several gamma cycles, synaptic connections will form between cells that represent different memories. This enables brain structures that have slow NMDA channels to store heteroassociative sequence information in long-term memory (LTM). Recall of this stored sequence information can be initiated by presentation of initial elements of the sequence. The remaining sequence is then recalled at a rate of one memory every gamma cycle. A new role for the NMDA channel suggested by our finding is that recall at gamma frequency works well if slow NMDA channels provide the dominant component of the EPSP at the synapse of recurrent collaterals: the slow onset of these channels and their long duration allow the firing of one memory during one gamma cycle to trigger the next memory during the subsequent gamma cycle. An interesting feature of the readout mechanism is that the activation of a given memory is due to cumulative input from multiple previous memories in the stored sequence, not just the previous one. The network thus stores sequence information in a doubly redundant way: activation of a memory depends on the strength of synaptic inputs from multiple cells of multiple previous memories. The cumulative property of sequence storage has support from the psychophysical literature. Cumulative learning also provides a solution to the disambiguation problem that occurs when different sequences have a region of overlap. In a final set of simulations, we show how coupling an autoassociative network to a heteroassociative network allows the storage of episodic memories (a unique sequence of briefly occurring known items). The autoassociative network (cortex) captures the sequence in short-term memory (STM) and provides the accurate, time-compressed repetition required to drive synaptic modification in the heteroassociative network (hippocampus). This is the first mechanistically detailed model showing how known brain properties, including network oscillations, recurrent collaterals, AMPA channels, NMDA channel subtypes, the ADP, and the AHP, can act together to accomplish memory storage and recall.

PAPER 4:
--------
Hippocampal CA3 Region Predicts Memory Sequences: Accounting for the Phase Precession of Place Cells. Learning and Memory (1996) 3:279-287.
Ole Jensen and John E. Lisman

Hippocampal recordings show that different place cells fire at different phases during the same theta oscillation, probably at the peak of different gamma cycles. As the rat moves through the place field of a given cell, the phase of firing during the theta cycle advances progressively (O'Keefe and Recce 1993; Skaggs et al. 1996). In this paper we have sought to determine whether a recently developed model of hippocampal and cortical memory function can explain this phase advance and other properties of place cells. According to this physiologically based model, the CA3 network stores information about the sequence of places traversed during learning. Here we show that the phase advance can be understood if it is assumed that the hippocampus is in a recall mode that operates when the animal is already familiar with a path. In this mode, sensory information about the current position triggers recall of the upcoming 5-6 places (memories) in the path at a rate of one memory per gamma cycle.
The model predicts that the average phase advance will be one gamma cycle per theta cycle, a value in reasonable agreement with the data. The model also correctly accounts for 1) the fact that the firing of a place cell occurs during ~7 theta cycles (on average) as the animal crosses the place field; 2) the observation that the phase of place cell firing depends more systematically on position than on time; 3) the fact that traversal of an already familiar path produces further modifications (shifts the firing of a cell to an earlier position in the path). This latter finding suggests that recall of previously stored information strengthens the memory of that information. In the model, this occurs because of a novel role of NMDA channels in recall. The general success of the model provides support for the idea that the hippocampus stores sequence information and makes predictions of expected positions during gamma-frequency recall.
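[Editor's illustration] The recall-mode readout in PAPER 4 can be made concrete with a toy sketch. This is our illustration under stated assumptions (7 gamma subcycles per theta cycle, a lookahead of 6 places, unit-speed traversal), not the authors' conductance-based simulation: on each theta cycle the network recalls the upcoming places, one per gamma subcycle, so a given place cell fires one gamma subcycle earlier on each successive theta cycle.

# Toy sketch of gamma-frequency sequence recall and the resulting
# phase precession. All constants are illustrative assumptions.
GAMMA_PER_THETA = 7
LOOKAHEAD = 6  # places recalled ahead of the current position

def firing_subcycle(cell_place, position):
    """Gamma subcycle (1 = earliest) in which the cell coding
    `cell_place` fires while the rat is at `position`; None if the
    place lies outside the recall window."""
    lag = cell_place - position  # how far ahead the cell's place lies
    return lag if 1 <= lag <= LOOKAHEAD else None

# As the rat approaches place 6, the cell's firing slot advances by
# one gamma subcycle per theta cycle (6, 5, 4, ..., then out of range).
for position in range(8):
    print(position, firing_subcycle(6, position))

Running the loop shows the firing slot moving from the last gamma subcycle toward the first as the animal approaches the cell's place, i.e. the one-gamma-cycle-per-theta-cycle phase advance the model predicts.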
From lbl at nagoya.bmc.riken.go.jp Mon Nov 4 21:27:12 1996
From: lbl at nagoya.bmc.riken.go.jp (Bao-Liang Lu)
Date: Tue, 5 Nov 1996 11:27:12 +0900
Subject: Paper available: Parallel and modular Multi-sieving Neural Net
Message-ID: <9611050227.AA10631@xian>

The following paper, which was published in Proc. of 1996 IEEE International Conference on Systems, Man, and Cybernetics, Beijing, China, Oct. 14-17, is available via FTP.
FTP-host: ftp.bmc.riken.go.jp
FTP-file: /pub/publish/Lu/lu-ieee-smc96.ps.Z
==========================================================================
TITLE: A Parallel and Modular Multi-Sieving Neural Network Architecture with Multiple Control Networks
AUTHORS: Bao-Liang Lu (1), Koji Ito (2)
ORGANISATIONS: (1) The Institute of Physical and Chemical Research; (2) Tokyo Institute of Technology

ABSTRACT: In previous work we proposed a constructive learning method called multi-sieving learning for automatic decomposition of learning tasks, together with a parallel and modular multi-sieving network architecture. In this paper we present a new parallel and modular multi-sieving neural network architecture into which multiple control networks are introduced. In this architecture the learning task for a control network is decomposed into a finite set of manageable subtasks, and each of the subtasks is learned by an individual control sub-network. An important advantage of this architecture is that the learning tasks for control networks can be learned efficiently, and therefore automatic decomposition of complex learning tasks can be achieved easily. (6 pages. No hard copies available.)

Bao-Liang Lu
---------------------------------------------
Bio-Mimetic Control Research Center,
The Institute of Physical and Chemical Research (RIKEN)
3-8-31 Rokuban, Atsuta-ku, Nagoya 456, Japan
Phone: +81-52-654-9137 Fax: +81-52-654-9138
Email: lbl at nagoya.bmc.riken.go.jp

From payman at u.washington.edu Mon Nov 4 23:14:08 1996
From: payman at u.washington.edu (Payman Arabshahi)
Date: Mon, 4 Nov 1996 20:14:08 -0800 (PST)
Subject: CIFEr'97 deadline extension
Message-ID: <199611050414.UAA22115@saul4.u.washington.edu>

!!!! Deadline for submission of summaries has been extended to December 2 !!!!

IEEE/IAFE 1997
[CIFEr ASCII-art banner]

Visit us on the web at http://www.ieee.org/nnc/cifer97

Call for Papers

IEEE/IAFE Conference on Computational Intelligence for Financial Engineering (CIFEr)
Crowne Plaza Manhattan, New York City
March 23-25, 1997

Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers

The IEEE/IAFE CIFEr Conference is the third annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management.

Conference Topics

Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following:

Financial Engineering Applications:
* Risk Management
* Pricing of Structured Securities
* Asset Allocation
* Trading Systems
* Forecasting
* Hedging Strategies
* Risk Arbitrage
* Exotic Options

Computer & Engineering Applications & Models:
* Neural Networks
* Probabilistic Modeling/Inference
* Fuzzy Systems and Rough Sets
* Genetic and Dynamic Optimization
* Intelligent Trading Agents
* Trading Room Simulation
* Time Series Analysis
* Non-linear Dynamics

------------------------------------------------------------------------------
Instructions for Authors, Special Sessions, Tutorials, & Exhibits
------------------------------------------------------------------------------

All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at Meeting Management by December 2, 1996. We intend to publish a book with a selection of the best accepted papers.

Authors (For Conference Oral Sessions)

One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Meeting Management by December 2, 1996. Centered at the top of the first page should be the paper's complete title, author name(s), affiliation(s), and mailing address(es). Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously, and is not under consideration for publication elsewhere. In the letter accompanying the submission, the following information should be included:
* Topic(s)
* Full title of paper
* Corresponding Author's name
* Mailing address
* Telephone and fax
* E-mail (if available)
* Presenter (If different from corresponding author, please provide name, mailing address, etc.)

Authors will be notified of acceptance of the Extended Summary by January 10, 1997. Complete papers (not exceeding seven pages of 8.5 inch by 11 inch size) will be due by February 14, 1997, and will be published in the conference proceedings.

----------------------------------------------------------------------------
Special Sessions

A limited number of special sessions will address subjects within the topical scope of the conference.
Each special session will consist of four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include:
* Topic(s)
* Title of Special Session
* Name, address, phone, fax, and email of the Session Organizer
* List of paper titles with authors' names and addresses
* One page of summaries of all papers

Notification of acceptance of special session proposals will be on January 10, 1997. If a proposal for a special session is accepted, the authors will be required to submit a camera-ready copy of their paper for the conference proceedings by February 14, 1997.

----------------------------------------------------------------------------
Panel Proposals

Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities. Notification of acceptance of panel session proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Tutorial Proposals

Proposals for tutorials addressing subjects within the topical scope of the conference will be considered. Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered. Notification of acceptance of tutorial proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Exhibit Information

Businesses with activities related to financial engineering, including software & hardware vendors, publishers and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr secretariat, Barbara Klemm.

----------------------------------------------------------------------------
Contact Information

More information on registration and the program will be provided as soon as it becomes available. For further details, please contact:

Barbara Klemm
CIFEr'97 Secretariat
Meeting Management
IEEE/IAFE Computational Intelligence for Financial Engineering
2603 Main Street, Suite # 690
Irvine, California 92714
Tel: (714) 752-8205 or (800) 321-6338
Fax: (714) 752-7444
Email: Meetingmgt at aol.com
Web: http://www.ieee.org/nnc/cifer97

Sponsors

Sponsorship for CIFEr'97 is being provided by the IAFE (International Association of Financial Engineers) and the IEEE Neural Networks Council. The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and application of computational techniques to finance.

----------------------------------------------------------------------------
Payman Arabshahi
CIFEr'97 Organizational Chair
Dept. Electrical Eng./Box 352500
University of Washington
Seattle, WA 98195
Tel: (206) 644-8026
Fax: (206) 543-3842
Email: payman at ee.washington.edu
----------------------------------------------------------------------------

From jose at kreizler.rutgers.edu Tue Nov 5 10:29:39 1996
From: jose at kreizler.rutgers.edu (Stephen J. Hanson)
Date: Tue, 5 Nov 1996 10:29:39 -0500
Subject: RUTGERS (Newark Campus) Psychology Department-Two Tenure Track
Message-ID: <199611051529.KAA03937@kreizler.rutgers.edu>

The Department of Psychology of Rutgers University-Newark Campus anticipates making TWO tenure-track appointments in Cognitive Science or Cognitive Psychology at the Assistant Professor level. The Psychology Department is interested in expanding its program in the area of Cognitive Science. There are two focus clusters for the searches. In the first cluster, candidates should have an active research program in one or more of the following areas: memory, learning, attention, action, high-level vision. Particular interest will be given to candidates who combine one or more of the research interests above with mathematical or computational approaches, with special emphasis on connectionist modeling (as related, for example, to programs in Cognitive Neuroscience). Candidates in the second cluster should have an active research program in one or more of the following areas: human-computer interaction, cognitive engineering, cognitive modeling, IT systems, learning systems, CSCW, distance learning or multimedia systems. The positions call for candidates who have an active research program and who are effective teachers at both the graduate and undergraduate levels. Review of applications will begin on February 1, 1997, but applications will continue to be accepted until the positions are filled. Rutgers University is an equal opportunity/affirmative action employer. Qualified women and minority candidates are especially encouraged to apply. Send CV and three letters of recommendation to Professor S. J. Hanson, Chair, Department of Psychology - Cognitive Search, Rutgers University, Newark, NJ 07102. Email enquiries can be made to cogsci at psychology.rutgers.edu

From sontag at control.rutgers.edu Tue Nov 5 11:08:00 1996
From: sontag at control.rutgers.edu (Eduardo Sontag)
Date: Tue, 5 Nov 1996 11:08:00 -0500
Subject: TR available - Controllability of Recurrent Nets
Message-ID: <199611051608.LAA20840@control.rutgers.edu>

COMPLETE CONTROLLABILITY OF CONTINUOUS-TIME RECURRENT NEURAL NETWORKS
Eduardo D. Sontag and Hector J. Sussmann
Rutgers Center for Systems and Control (SYCON)
Department of Mathematics, Rutgers University

This paper presents a characterization of controllability for the class of (continuous-time) recurrent neural networks. These are described by differential equations of the following type:

    x'(t) = S [ Ax(t) + Bu(t) ]

where "S" is a diagonal mapping S [a,b,c,...] = (s(a),s(b),s(c),...) and "s" is a scalar real map called the activation function of the network. Each coordinate of the vector x(t) is a real-valued variable which represents the internal state of a neuron, and each coordinate of u(t) is an external input signal applied at time t. Recurrent networks whose activation s is the identity function s(x)=x are precisely the linear systems studied in control theory. With nonlinear s, one obtains general families of recurrent nets. Controllability means that any state x can be transformed into any other possible state z, by means of a suitable input signal u(t) applied on some time interval.
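[Editor's illustration] For readers who want to experiment with this class of systems, here is a minimal sketch (ours, not the authors'; the matrices, step size, and input are arbitrary assumptions) that Euler-integrates the network equation above and, for the linear case s(x) = x, applies the Kalman rank test mentioned in the next paragraph.

import numpy as np

def simulate_rnn(A, B, inputs, x0, dt=0.01, s=np.tanh):
    """Euler integration of x'(t) = S[Ax(t) + Bu(t)], where S applies
    the scalar activation s coordinate-wise (s = tanh by default)."""
    x, traj = np.array(x0, dtype=float), []
    for u in inputs:
        x = x + dt * s(A @ x + B @ u)
        traj.append(x.copy())
    return np.array(traj)

def kalman_controllable(A, B):
    """Kalman (1960) rank test for the linear case s(x) = x:
    (A, B) is controllable iff [B, AB, ..., A^(n-1)B] has rank n."""
    n = A.shape[0]
    blocks, M = [B], B
    for _ in range(n - 1):
        M = A @ M
        blocks.append(M)
    return np.linalg.matrix_rank(np.hstack(blocks)) == n

# Hypothetical two-neuron, single-input example.
A = np.array([[0.0, 1.0], [-1.0, 0.0]])
B = np.array([[0.0], [1.0]])
print(kalman_controllable(A, B))  # True for this pair
traj = simulate_rnn(A, B, np.ones((500, 1)), [0.0, 0.0])

The rank test applies only when s is the identity; the point of the paper is precisely that the tanh case requires a different condition.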
When s is the identity (the case typical in control theory), controllability can be checked by means of a simple algebraic test due to Kalman (1960). The current paper provides a simple characterization for recurrent networks when s(x) = tanh(x) is the activation typically used in neural network practice. The condition is very different from the one that applies to linear systems.
============================================================================
The paper is available starting from Eduardo Sontag's WWW HomePage at URL:
http://www.math.rutgers.edu/~sontag/
(follow link to "online papers"). Many other related papers can also be found at this site.

If Web access is inconvenient, it is also possible to use anonymous FTP:
ftp math.rutgers.edu
login: anonymous
cd pub/sontag
bin
get reach-sigmoid.ps.gz
Once the file is retrieved, use gunzip to uncompress and then print as postscript.
============================================================================
Comments welcome.

From c.k.i.williams at aston.ac.uk Wed Nov 6 13:37:18 1996
From: c.k.i.williams at aston.ac.uk (Chris Williams)
Date: Wed, 06 Nov 1996 18:37:18 +0000
Subject: NIPS*96 post-conference workshop on Model Complexity
Message-ID: <3734.199611061837@sun.aston.ac.uk>

Note the call for short presentations near the bottom of this message.

NIPS*96 Post-conference Workshop
MODEL COMPLEXITY
Snowmass (Aspen), Colorado USA
Friday Dec 6th, 1996

ORGANIZERS:
Chris Williams (Aston University, UK, c.k.i.williams at aston.ac.uk)
Joachim Utans (London Business School, UK, J.Utans at lbs.lon.ac.uk)

OVERVIEW:
One of the most important difficulties in using neural networks for real-world problems is the issue of model complexity, and how it affects generalization performance. One approach states that model complexity should be tailored to the amount of training data available, e.g. by using architectures with small numbers of adaptable parameters, or by penalizing the fit of larger models (e.g. AIC, BIC, Structural Risk Minimization, GPE). Alternatively, computationally expensive numerical estimates of the generalization performance (cross-validation (CV), Bootstrap, and related methods) can be used to compare and select models (for example Moody and Utans, 1994). Methods based on regularization restrict model complexity by reducing the "effective" number of parameters (Moody 1992). On the other hand, Bayesian methods see no need to limit model complexity, as overfitting is obviated by marginalization, where predictions are made by averaging over the posterior weight distribution. As Neal (1995) has argued, there may be no reason to believe that neural network models for real-world problems should be limited to nets containing only a "small" number of hidden units. In the limit of an infinite number of hidden units neural networks become Gaussian processes, and hence are closely related to the splines approach (Wahba, 1990). Another important aspect of model building is the selection of a subset of relevant input variables to include in the model: for instance, in a regression context, the subset of independent variables, or lagged values for a time series problem. The aim of this workshop is to present the different ideas on these topics, and to provide guidance to those confronted with the problem of model complexity on real-world problems.
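[Editor's illustration] As a concrete reference point for the cross-validation approach mentioned in the overview, here is a minimal sketch (our illustration; polynomial regression is an arbitrary stand-in for a family of networks of increasing size): the complexity parameter is chosen by k-fold CV error.

import numpy as np

def kfold_cv_error(fit, predict, X, y, k=5, seed=0):
    """Estimate generalization error of one model class by k-fold CV."""
    folds = np.array_split(np.random.RandomState(seed).permutation(len(y)), k)
    errs = []
    for i in range(k):
        test = folds[i]
        train = np.concatenate([f for j, f in enumerate(folds) if j != i])
        params = fit(X[train], y[train])
        errs.append(np.mean((predict(params, X[test]) - y[test]) ** 2))
    return np.mean(errs)

# Polynomial degree as a stand-in complexity parameter.
rng = np.random.RandomState(1)
X = rng.uniform(-1, 1, 60)
y = np.sin(3 * X) + 0.2 * rng.randn(60)
cv = {d: kfold_cv_error(lambda Xt, yt, d=d: np.polyfit(Xt, yt, d),
                        np.polyval, X, y) for d in range(1, 11)}
best = min(cv, key=cv.get)  # degree with smallest estimated test error

Penalized criteria such as AIC or BIC replace the inner CV loop with a closed-form complexity penalty on the training fit; the Bayesian view discussed above avoids the selection step altogether by averaging over models.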
SPEAKERS:
Leo Breiman (University of California Berkeley)
Federico Girosi (MIT)
Trevor Hastie (Stanford)
Michael Kearns (AT&T Laboratories Research)
John Moody (Oregon Graduate Institute)
Grace Wahba (University of Wisconsin at Madison)
Hal White (University of California San Diego)
Huaiyu Zhu (Santa Fe Institute)

WORKSHOP FORMAT:
Of the 6 hours scheduled, about 4 will be taken up with presentations from the speakers listed above. We are very keen to make sure that there is time for discussion of the points raised. However, we also want to provide an opportunity for others to make short presentations or raise questions; we are considering making available a limited number of mini-slots of approx. 5-10 minutes (2-3 overheads plus time for a short discussion) for presentations on relevant topics. Because the workshop is scheduled for one day only, and depending on the number of proposals received, we may schedule the short presentations to extend beyond the regular morning session.

CALL FOR PARTICIPATION:
If you would like to make a 5-10 minute presentation please email the organizers by Thursday 12 December, giving your name, a title for your presentation and a short abstract. We will be finalizing the program in the following week.

WEB PAGE:
The workshop web page is located at http://www.ncrg.aston.ac.uk/nips96/
It includes abstracts for the invited talks.

From salomon at ifi.unizh.ch Thu Nov 7 08:35:42 1996
From: salomon at ifi.unizh.ch (Ralf Salomon)
Date: Thu, 7 Nov 1996 14:35:42 +0100 (MET)
Subject: Open Post-Doc Position
Message-ID: <"josef.ifi..426:07.10.96.13.35.43"@ifi.unizh.ch>

POSTDOCTORAL RESEARCH FELLOWSHIP
--------------------------------
AI Lab, Department of Computer Science
University of Zurich, Switzerland

The AI Lab at the University of Zurich participates in the VIRGO project (Vision-Based Robot Navigation Research Network), which is sponsored by the TMR research program of the European Union. For this project, we are looking for a highly motivated individual for an 18-month postdoctoral research position. The goal of VIRGO is to coordinate European research and postgraduate training activities that address the development of intelligent robotic systems able to navigate in (partially) unknown and possibly changing environments. For further details, please visit VIRGO's home page http://www.ics.forth.gr/virgo/ .

The ideal candidate would have good programming skills (C/C++) and a strong background in neural networks. Furthermore, working in this interdisciplinary project requires good interpersonal skills and the ability to adopt new perspectives. The main focus of the research will be in the field of insect navigation, mimicking principles of biological systems. The work also involves robot hardware, such as assembling and repairing robots, sensor and actuator systems, and building controllers.

The position is open immediately, but the actual starting date can be negotiated. The salary will be according to local university regulations for postdocs and can be expected to be about SFr 60.000 (approximately USD 50.000) per year. Since the project is sponsored by the European Union, the candidate should be a European citizen.

To apply for this position, send your curriculum vitae including a list of references, a list of publications, and two or three representative publications either by e-mail or surface mail to pfeifer at ifi.unizh.ch or

Prof. Rolf Pfeifer
AI Lab, Department of Computer Science
University of Zurich
Winterthurerstr. 190
8057 Zurich
Switzerland
From finndag at ira.uka.de Thu Nov 7 19:41:46 1996
From: finndag at ira.uka.de (Finn Dag Buoe)
Date: Thu, 07 Nov 1996 19:41:46 -0500
Subject: Ph.D thesis on connectionist natural language processing
Message-ID: <"irafs2.ira.704:07.11.96.18.45.11"@ira.uka.de>

The following doctoral thesis (and 3 of my related papers for COLING96, ECAI96, and ICSLP96) are available at the WWW page:
http://werner.ira.uka.de/ISL.speech.publications.html
--------------------------------------------------------------------------
FEASPAR - A FEATURE STRUCTURE PARSER LEARNING TO PARSE SPONTANEOUS SPEECH
(120 pages)
Finn Dag Buo
Ph.D thesis, University of Karlsruhe

Abstract

Traditionally, automatic natural language parsing and translation have been performed with various symbolic approaches. Many of these have the advantage of a highly specific output formalism, allowing fine-grained parse analyses and, therefore, very precise translations. Within the last decade, statistical and connectionist techniques have been proposed to learn the parsing task, in order to avoid the tedious manual modeling of grammar and malformation. How to learn a detailed output representation, and how to learn to parse robustly even with ill-formed input, have until now remained open questions. This thesis provides an answer to these questions by presenting a connectionist parser that needs a small corpus and a minimum of hand modeling, that learns, and that is robust to spontaneous speech and speech recognizer effects. The parser delivers feature structure parses, and its performance is comparable to that of a good hand-modeled unification-based parser.

The connectionist parser FeasPar consists of several neural networks and a Consistency Checking Search. The number, architecture, and other parameters of the neural networks are automatically derived from the training data. The search finds the combination of the neural net outputs that produces the most probable consistent analysis. To demonstrate learnability and robustness, FeasPar is trained with transcribed sentences from the English Spontaneous Scheduling Task and evaluated for network, overall parse, and translation performance, with transcribed and speech data. The latter contains speech recognition errors. FeasPar requires only minor human effort and performs better than or comparably to a good symbolic parser developed with a two-year human expert effort.

A key result is obtained by using speech data to evaluate the JANUS speech-to-speech translation system with different parsers. With FeasPar, acceptable translation performance is 60.5%, versus 60.8% with a GLR* parser. FeasPar requires two weeks of human labor to prepare the lexicon and 600 sentences of training data, whereas the GLR* parser required significant human expert grammar modeling.

Presented in this thesis are the Chunk'n'Label Principle, showing how to divide the entire parsing task into several small tasks performed by neural networks, as well as the FeasPar architecture and various methods for network performance improvement. Further, a knowledge analysis and two methods for improving the overall parsing performance are presented. Several evaluations and comparisons with a GLR* parser, producing exactly the same output formalism, illustrate FeasPar's advantages.
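[Editor's illustration] The "several networks plus consistency-checking search" idea admits a toy sketch. Everything below is hypothetical (the feature names, scores, and constraint are invented for illustration; this is not FeasPar's actual network output or feature-structure formalism): each "network" scores the values of one feature, and the search returns the most probable combination that a consistency predicate accepts.

import itertools

# Hypothetical per-feature distributions produced by separate nets.
net_outputs = {
    "speech-act": {"suggest": 0.7, "accept": 0.3},
    "frame":      {"meeting": 0.6, "free-time": 0.4},
    "day":        {"tuesday": 0.8, "none": 0.2},
}

def consistent(assign):
    # Invented constraint: an "accept" act introduces no new day.
    return not (assign["speech-act"] == "accept" and assign["day"] != "none")

def best_consistent(outputs):
    """Exhaustive search for the highest-probability consistent
    combination of per-feature values (probabilities multiplied as
    if the networks were independent)."""
    feats = list(outputs)
    best, best_p = None, -1.0
    for values in itertools.product(*(outputs[f] for f in feats)):
        assign = dict(zip(feats, values))
        p = 1.0
        for f in feats:
            p *= outputs[f][assign[f]]
        if consistent(assign) and p > best_p:
            best, best_p = assign, p
    return best, best_p

print(best_consistent(net_outputs))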
================================================================================
Finn Dag Buo
SAP AG, Germany
finn.buoe at sap-ag.de
================================================================================

From jagota at cse.ucsc.edu Fri Nov 8 21:08:02 1996
From: jagota at cse.ucsc.edu (Arun Jagota)
Date: Fri, 8 Nov 1996 18:08:02 -0800
Subject: NIPS96 optimization workshop
Message-ID: <199611090208.SAA08654@bristlecone.cse.ucsc.edu>

This is an announcement and call for participation. Those interested in the topic and wishing to contribute a half-hour talk (some slots are open) may e-mail me a title and brief abstract. Or stop by at the venue and participate in other ways.

Arun Jagota

Nature Inspired Algorithms for Combinatorial Optimization
NIPS*96 Postconference Workshop
December 7, Saturday, 1996, Snowmass, Colorado
7:30-10:30 AM, 4-7 PM
Organizer: Arun Jagota, jagota at cse.ucsc.edu

The 1980s was a decade of intense activity in the application of nature inspired methods for the approximate solution of difficult combinatorial optimization problems. Many such problems are NP-hard, yet need to be solved (at least approximately) in real-world applications. This workshop will discuss the application of four nature inspired paradigms to combinatorial optimization: neural nets, evolutionary computing, methods rooted in physics, and DNA biocomputing. The workshop will consist of talks and discussions. The talks will present snap-shots of the state-of-the-art in these areas; the discussions will focus on common themes and differences across them.

FORMAT: Eight 30 minute talks; two 60 minute discussions. Some elasticity possible. One of the discussions is intended to focus on conceptual comparisons: common themes, differences, relative strengths and weaknesses. The other will focus on benchmarks to facilitate cross-paradigm comparisons.

SPEAKERS:
Shumeet Baluja (CMU): TBA
Jan van den Berg (Erasmus U, Rotterdam): Physics-Based Neural Optimization Methods
Max Garzon (U of Memphis): The Reliability of DNA based Solutions to Optimization
Arun Jagota (UCSC): Heuristic Primal-Target NN Methods on Some Hypergraph Problems
Juergen Quittek (ICSI): Balancing Graph Mappings by Self-Organization

--------------------------------------------------------------------------

From georgiou at wiley.csusb.edu Sun Nov 10 19:31:54 1996
From: georgiou at wiley.csusb.edu (georgiou@wiley.csusb.edu)
Date: Sun, 10 Nov 1996 16:31:54 -0800
Subject: LCFP: ICCIN'97 Deadline Revision
Message-ID: <199611110031.QAA09683@wiley.csusb.edu>

Please note the revised deadline for submissions: December 6, 1996. It was changed by popular demand, to bring it into line with the deadline of the general JCIS'97 Conference.

------------------------------------------------------------------------

Last Call for Papers
2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
http://www.csci.csusb.edu/iccin
Sheraton Imperial Hotel & Convention Center
Research Triangle Park, North Carolina
March 2-5, 1997

Conference Co-chairs: Subhash C. Kak, Louisiana State University; Jeffrey P. Sutton, Harvard University

This conference is part of the Third Joint Conference on Information Sciences.

Plenary Speakers include the following: James S. Albus, James Anderson, Roger Brockett, Earl Dowell, David E. Goldberg, Stephen Grossberg, Y. C. Ho, John H. Holland, Zdzislaw Pawlak, Lotfi A. Zadeh
Areas for which papers are sought include:
o Artificial Life
o Artificially Intelligent NNs
o Associative Memory
o Cognitive Science
o Computational Intelligence
o Efficiency/Robustness Comparisons
o Evolutionary Computation for Neural Networks
o Feature Extraction & Pattern Recognition
o Implementations (Electronic, Optical, Biochips)
o Intelligent Control
o Learning and Memory
o Neural Network Architectures
o Neurocognition
o Neurodynamics
o Optimization
o Parallel Computer Applications
o Theory of Evolutionary Computation

Summary Submission Deadline: December 6, 1996
Decision & Notification: January 1, 1997

Send summaries to:
George M. Georgiou
Computer Science Department
California State University
San Bernardino, CA 92407-2397
georgiou at csci.csusb.edu

More information on Conference Web site: http://www.csci.csusb.edu/iccin

From oby at cs.tu-berlin.de Mon Nov 11 07:26:36 1996
From: oby at cs.tu-berlin.de (Klaus Obermayer)
Date: Mon, 11 Nov 1996 13:26:36 +0100 (MET)
Subject: Visual Cortex Workshop
Message-ID: <199611111226.NAA11116@pollux.cs.tu-berlin.de>

The Graduiertenkolleg "Signalling Chains in Living Systems" (Randolf Menzel, PI) is organizing a visual cortex workshop which takes place in Berlin, Germany, in December. The theme of the workshop is the interaction between experimentalists and modellers in the area of biological vision. Participation is free, and all interested people are welcome. The workshop is sponsored by the German Science Foundation via the Graduiertenkolleg, by HFSPO and by the FU and TU Berlin.

Klaus Obermayer

******************************************************************
******************************************************************

Workshop on Experiments and Models of Visual Cortex
Friday, 13th of December, 1996
Institute for Computer Science, Free University of Berlin, Takustrasse 9, 14195 Berlin, Germany

------------------------------------------------------------------
PROGRAM:

9.00  Welcome and Introduction
9.10  Ulf Eysel, U. Bochum: Lateral signal processing, response specificity and maps in the visual cortex.
10.00 David Somers, MIT: Local and long-range circuit modeling of primary visual cortex.
10.50-11.10 Break
11.10 Jack Cowan, U. Chicago: A simple model of orientation tuning and its consequences.
12.00 Bartlett Mel, USC: Translation-invariant orientation tuning in visual `complex' cells could derive from intradendritic computations.
12.50-14.30 Lunch
14.30 Rodney Douglas, ETH Zuerich: Computational principles in the microcircuits of the neocortex.
15.20 Nikos Logothetis, Baylor College: On the neural mechanisms of perceptual multistability.
16.10-16.30 Break
16.30 Heinrich Buelthoff, MPI biol. Kybernetik: The view-based approach to high level vision.
17.20 Dana Ballard, U. Rochester: The visual cortex as a hierarchical Kalman predictor.
18.10 Adjourn

------------------------------------------------------------------
Abstracts and further information can be obtained via WWW at:
http://www.kon.cs.tu-berlin.de/colloq/symposium-dec96.html
------------------------------------------------------------------
Prof. Klaus Obermayer
FR2-1, KON, Informatik
Technische Universitaet Berlin
Franklinstrasse 28/29
10587 Berlin, Germany
phone: 49-30-314-73442, 49-30-314-73120
fax: 49-30-314-73121
e-mail: oby at cs.tu-berlin.de

From szepes at sol.cc.u-szeged.hu Wed Nov 13 05:53:15 1996
From: szepes at sol.cc.u-szeged.hu (Szepesvari Csaba)
Date: Wed, 13 Nov 1996 11:53:15 +0100 (MET)
Subject: CFP: Theoretical Approaches to Adaptive Intelligent Control
Message-ID:

CALL FOR PAPERS

Those interested in the topic and wishing to contribute a half-hour talk (some slots are open) may e-mail me a title and brief abstract.

Csaba Szepesvari

Theoretical Approaches to Adaptive Intelligent Control
session at the conference COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
March 2-5, 1997, Research Triangle Park, North Carolina
Organizer: Csaba Szepesvari, szepes at math.u-szeged.hu

The aim of this session is to bring together researchers in Reinforcement Learning (RL), Adaptive Control and NeuroControl (AC, NC) in order to facilitate discussion between the people working in these fields, overview the research done in them, and develop connections between them. Although the frameworks and the ultimate goals of these fields are different, the theoretical problems that arise share many common features (stability (AC) = convergence (RL); performance optimization (AC) = exploration-exploitation dilemma (RL); observability and controllability (AC) = hidden state, perceptual aliasing (RL); etc.). Ideally, the contributions should discuss the current problems arising in these fields from a theoretical point of view. In case of sufficient interest a panel discussion at the end of the session will be included.

Summaries on Adaptive Intelligent Control are solicited for this session. Summaries should be sent to me and will be published in the proceedings. After the conference, speakers may extend their summaries to full papers, which will go through the usual refereeing process. Accepted papers will be published as journal articles. For more information on the conference see the Web site http://www.csci.csusb.edu/iccin.

SUBMISSION DEADLINE: December 6, 1996

Csaba Szepesvari
Session Organizer, Theoretical Approaches to Adaptive Control
Computational Intelligence and Neuroscience
szepes at math.u-szeged.hu

From marney at ai.mit.edu Sun Nov 10 16:34:12 1996
From: marney at ai.mit.edu (Marney Smyth)
Date: Sun, 10 Nov 1996 16:34:12 -0500 (EST)
Subject: Learning Methods for Prediction, Classification,
Message-ID: <9611102134.AA04438@carpentras.ai.mit.edu>

**************************************************************
***                                                        ***
***   Learning Methods for Prediction, Classification,    ***
***   Novelty Detection and Time Series Analysis          ***
***                                                        ***
***   Los Angeles, CA, December 14-15, 1996                ***
***                                                        ***
***   Geoffrey Hinton, University of Toronto              ***
***   Michael Jordan, Massachusetts Inst. of Tech.        ***
***                                                        ***
**************************************************************

A two-day intensive Tutorial on Advanced Learning Methods will be held on December 14 and 15, 1996, at Loews Hotel, Santa Monica, CA. Space is available for up to 50 participants for the course. The course will provide an in-depth discussion of the large collection of new tools that have become available in recent years for developing autonomous learning systems and for aiding in the analysis of complex multivariate data.
These tools include neural networks, hidden Markov models, belief networks, decision trees, memory-based methods, as well as increasingly sophisticated combinations of these architectures. Applications include prediction, classification, fault detection, time series analysis, diagnosis, optimization, system identification and control, exploratory data analysis and many other problems in statistics, machine learning and data mining. The course will be devoted equally to the conceptual foundations of recent developments in machine learning and to the deployment of these tools in applied settings. Case studies will be described to show how learning systems can be developed in real-world settings. Architectures and algorithms will be presented in some detail, but with a minimum of mathematical formalism and with a focus on intuitive understanding. Emphasis will be placed on using machine learning methods as tools that can be combined to solve the problem at hand.

WHO SHOULD ATTEND THIS COURSE?

The course is intended for engineers, data analysts, scientists, managers and others who would like to understand the basic principles underlying learning systems. The focus will be on neural network models and related graphical models such as mixture models, hidden Markov models, Kalman filters and belief networks. No previous exposure to machine learning algorithms is necessary, although a degree in engineering or science (or equivalent experience) is desirable. Those attending can expect to gain an understanding of the current state-of-the-art in machine learning and be in a position to make informed decisions about whether this technology is relevant to specific problems in their area of interest.

COURSE OUTLINE

Overview of learning systems; LMS, perceptrons and support vectors; generalized linear models; multilayer networks; recurrent networks; weight decay, regularization and committees; optimization methods; active learning; applications to prediction, classification and control

Graphical models: Markov random fields and Bayesian belief networks; junction trees and probabilistic message passing; calculating most probable configurations; Boltzmann machines; influence diagrams; structure learning algorithms; applications to diagnosis, density estimation, novelty detection and sensitivity analysis

Clustering; mixture models; mixtures of experts models; the EM algorithm; decision trees; hidden Markov models; variations on hidden Markov models; applications to prediction, classification and time series modeling

Subspace methods; mixtures of principal component modules; factor analysis and its relation to PCA; Kalman filtering; switching mixtures of Kalman filters; tree-structured Kalman filters; applications to novelty detection and system identification

Approximate methods: sampling methods, variational methods; graphical models with sigmoid units and noisy-OR units; factorial HMMs; the Helmholtz machine; computationally efficient upper and lower bounds for graphical models

REGISTRATION

Standard Registration: $700
Student Registration: $400

Cancellation Policy: Cancellation before Friday December 6th, 1996, incurs a penalty of $150.00. Cancellation after Friday December 6th, 1996, incurs a penalty of one-half of the Registration Fee. The Registration Fee includes Course Materials, breakfast, coffee breaks, and lunch on Saturday December 14th. On-site Registration is possible. Payment of on-site registration must be in US Dollar amounts, by Money Order or Check (preferably drawn on a US Bank account).
Those interested in participating should return the completed Registration Form and Fee as soon as possible, as the total number of places is limited by the size of the venue. Please print this form, and fill in the hard copy to return by mail.

REGISTRATION FORM

Learning Methods for Prediction, Classification, Novelty Detection and Time Series Analysis
Saturday, December 14 - Sunday, December 15, 1996
Santa Monica, CA, USA

--------------------------------------
Please complete this form (type or print)

Name ___________________________________________________
     Last              First             Middle

Firm or Institution ______________________________________

Standard Registration ____   Student Registration ____

Mailing Address (for receipt) _________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
Country            Phone              FAX
__________________________________________________________
email address

(Lunch Menu, Saturday December 14th - tick as appropriate):
___ Vegetarian   ___ Non-Vegetarian

Fee payment must be made by MONEY ORDER or PERSONAL CHECK. All amounts are given in US dollar figures. Make the fee payable to Prof. Michael Jordan. Mail it, together with this completed Registration Form, to:

Professor Michael Jordan
Dept. of Brain and Cognitive Sciences
M.I.T. E10-034D
77 Massachusetts Avenue
Cambridge, MA 02139 USA

HOTEL ACCOMMODATION

Hotel accommodations are the personal responsibility of each participant. The Tutorial will be held at the Loews Santa Monica Beach Hotel, 1700 Ocean Avenue, Santa Monica CA 90401, (310) 458-6700, FAX (310) 458-0020, on December 14 and 15, 1996. The hotel has reserved a block of rooms for participants of the course. The special room rates for participants are:

U.S. $170.00 (city view) per night + tax
U.S. $250.00 (full ocean view) per night + tax

Please be aware that these prices do not include State or City taxes. Participants may wish to avail themselves of the discounted overnight parking rates of $13.30 (self) and $15.50 (valet).

ADDITIONAL INFORMATION

A registration form is available from the course's WWW page at
http://www.ai.mit.edu/projects/cbcl/web-pis/jordan/course/index.html

Marney Smyth
Phone: 617 258-8928
Fax: 617 258-6779
E-mail: marney at ai.mit.edu

From amari at zoo.riken.go.jp Thu Nov 14 02:08:57 1996
From: amari at zoo.riken.go.jp (Shunichi Amari)
Date: Thu, 14 Nov 1996 16:08:57 +0900
Subject: Neural networks awards
Message-ID: <9611140708.AA22444@zoo.riken.go.jp>

INNS Awards Call For Nominations

The International Neural Network Society has established an Awards Program to recognize INNS members who have made outstanding contributions in the field of Neural Networks. Nominations of candidates are sought in the following categories:

The Hebb, Helmholtz and Gabor Awards. Two of the three awards, of $500 each, will be presented each year to senior members of INNS for outstanding contributions made in the field of Neural Networks.

Young Investigator Award. Each year two awards of $250.00 will be presented for significant contributions in the field of Neural Networks to members with no more than five years of postdoctoral experience and under 40 years of age.

The Award Committee should receive nominations, from other than the nominee, of no more than two A4 pages in length, outlining reasons for the award to the nominee, along with a list of at least five important and published papers of the nominee.
The nominations must be made by mail, fax or e-mail no later than February 28, 1997. The member who submits the nomination should also provide the Committee with the following information for both the nominee and themselves: name, address, position/title, phone, fax and e-mail address. Awards will be presented at ICNN'97, June 9-12, 1997, at the Westin Galleria Hotel, Houston, Texas.

Nominations should be sent to:
INNS c/o Talley Management Group
875 Kings Highway, Suite 200
Woodbury, NJ 08096
Phone: 609/845-9094
Fax: 609/853-0411
E-Mail: headquarters at WCNN.ccmail.CompuServe.Com

------- End of forwarded message -------

From tibs at utstat.toronto.edu Thu Nov 14 09:38:00 1996
From: tibs at utstat.toronto.edu (tibs@utstat.toronto.edu)
Date: Thu, 14 Nov 96 09:38 EST
Subject: new tech report
Message-ID:

Classification by pairwise coupling

Trevor Hastie, Stanford University
Robert Tibshirani, University of Toronto

We discuss a strategy for polychotomous classification that involves estimating probabilities for each pair of classes and then coupling the estimates together. The coupling model is similar to the Bradley-Terry method for paired comparisons. We study the nature of the class probability estimates that arise, and examine the performance of the procedure in real and simulated datasets. Classifiers used include linear discriminants, nearest neighbours, and the support vector machine.

Available at:
http://utstat.toronto.edu/tibs/research.html
ftp://utstat.toronto.edu/pub/tibs/coupling.ps

Comments welcome!

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Rob Tibshirani, Dept of Preventive Med & Biostats, and Dept of Statistics, Univ of Toronto, Toronto, Canada M5S 1A8.
Phone: 416-978-4642 (PMB), 416-978-0673 (stats). FAX: 416-978-8299; computer fax 416-978-1525 (please call or email me to inform).
tibs at utstat.toronto.edu
ftp://utstat.toronto.edu/pub/tibs
http://www.utstat.toronto.edu/~tibs
+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
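[Editor's illustration] The coupling step admits a compact sketch. The following is our reading of the iterative coupling scheme the abstract refers to (the variable names, the uniform initialization, and the example numbers are our assumptions): given pairwise estimates r[i, j] of P(class i | class i or j) and pair counts n[i, j], it finds class probabilities p whose implied pairwise ratios best match the estimates.

import numpy as np

def pairwise_couple(r, n, sweeps=50):
    """Couple pairwise class-probability estimates into a single
    probability vector p. r[i, j] estimates P(class i | class i or j),
    with r[j, i] = 1 - r[i, j]; n[i, j] is the number of training
    cases in classes i and j combined (used as a weight)."""
    K = r.shape[0]
    p = np.full(K, 1.0 / K)  # uniform start (an assumption)
    for _ in range(sweeps):
        for i in range(K):
            j = np.arange(K) != i
            mu = p[i] / (p[i] + p[j])  # model's implied pairwise prob.
            p[i] *= np.sum(n[i, j] * r[i, j]) / np.sum(n[i, j] * mu)
            p /= p.sum()               # renormalize after each update
    return p

# Tiny 3-class example with hypothetical, slightly inconsistent estimates.
r = np.array([[0.0, 0.6, 0.7],
              [0.4, 0.0, 0.6],
              [0.3, 0.4, 0.0]])
n = np.full((3, 3), 50)
print(pairwise_couple(r, n))  # roughly ordered p[0] > p[1] > p[2]

In a full classifier, r would come from one-vs-one base classifiers (e.g. the linear discriminants or support vector machines mentioned in the abstract), and p would be computed per test point.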
From robtag at dia.unisa.it Thu Nov 14 11:45:50 1996
From: robtag at dia.unisa.it (Tagliaferri Roberto)
Date: Thu, 14 Nov 1996 17:45:50 +0100
Subject: Wirn 97 First Call for Paper
Message-ID: <9611141645.AA08295@udsab.dia.unisa.it>

***************** CALL FOR PAPERS *****************
The 9-th Italian Workshop on Neural Nets
WIRN VIETRI-97
May 22-24, 1997
Vietri Sul Mare, Salerno ITALY
**************** FIRST ANNOUNCEMENT *****************

Organizing - Scientific Committee
--------------------------------------------------
B. Apolloni (Univ. Milano), A. Bertoni (Univ. Milano), D. D. Caviglia (Univ. Genova), P. Campadelli (Univ. Milano), M. Ceccarelli (CNR Napoli), A. Colla (ELSAG Bailey Genova), M. Frixione (I.I.A.S.S.), C. Furlanello (IRST Trento), G. M. Guazzo (I.I.A.S.S.), M. Gori (Univ. Firenze), F. Lauria (Univ. Napoli), M. Marinaro (Univ. Salerno), F. Masulli (Univ. Genova), P. Morasso (Univ. Genova), G. Orlandi (Univ. Roma), T. Parisini (Univ. Trieste), E. Pasero (Politecnico Torino), A. Petrosino (I.I.A.S.S.), M. Protasi (Univ. Roma II), S. Rampone (Univ. Salerno), R. Serra (Gruppo Ferruzzi Ravenna), F. Sorbello (Univ. Palermo), R. Stefanelli (Politecnico Milano), R. Tagliaferri (Univ. Salerno), R. Vaccaro (CNR Napoli)

Topics
----------------------------------------------------
Mathematical Models, Architectures and Algorithms, Hardware and Software Design, Hybrid Systems, Pattern Recognition and Signal Processing, Industrial and Commercial Applications, Fuzzy Techniques for Neural Networks

Schedule
-----------------------
Papers Due: January 31, 1997
Replies to Authors: March 31, 1997
Revised Papers Due: May 24, 1997

Sponsors
------------------------------------------------------------------------------
International Institute for Advanced Scientific Studies (IIASS)
Dept. of Fisica Teorica, University of Salerno
Dept. of Informatica e Applicazioni, University of Salerno
Dept. of Scienze dell'Informazione, University of Milano
Istituto per la Ricerca dei Sistemi Informatici Paralleli (IRSIP - CNR)
Societa' Italiana Reti Neuroniche (SIREN)
Istituto Italiano per gli Studi Filosofici, Napoli

The 9-th Italian Workshop on Neural Nets (WIRN VIETRI-97) will take place in Vietri Sul Mare, Salerno ITALY, May 22-24, 1997. The conference will bring together scientists who are studying several topics related to neural networks. The three-day conference, to be held in the I.I.A.S.S., will feature both introductory tutorials and original, refereed papers, to be published by World Scientific Publishing.

Papers should be 6 pages, including title, figures, tables, and bibliography. The first page should give keywords, postal and electronic mailing addresses, telephone and FAX numbers, indicating oral or poster presentation. The camera-ready format will be sent with the referees' acceptance letter. Submit 3 copies and a 1-page abstract (containing keywords, postal and electronic mailing addresses, telephone and FAX numbers, with no more than 300 words) to the address shown (WIRN 97 c/o IIASS). An electronic copy of the abstract should be sent to the E-mail address below.

During the Workshop the "Premio E.R. Caianiello" will be assigned to the best Ph.D. thesis in the area of Neural Nets and related fields by an Italian researcher. The amount is 2.000.000 Italian Lire. Interested researchers (who received the Ph.D. degree between 1994 and February 28, 1997) must send 3 copies of a c.v. and of the thesis to "Premio Caianiello" WIRN 97 c/o IIASS before February 28, 1997. It is possible to compete for the prize at most twice.

For more information, contact the Secretary of I.I.A.S.S.:
I.I.A.S.S., Via G. Pellegrino, 19, 84019 Vietri Sul Mare (SA), ITALY
Tel. +39 89 761167  Fax +39 89 761189
E-Mail: robtag at udsab.dia.unisa.it
or the www pages at the address below: http://www-dsi.ing.unifi.it/neural
*****************************************************************

From gaudiano at cns.bu.edu Thu Nov 14 17:25:16 1996
From: gaudiano at cns.bu.edu (Paolo Gaudiano)
Date: Thu, 14 Nov 1996 17:25:16 -0500
Subject: Call for papers: Intelligent Robotics
Message-ID: <199611142225.RAA25167@mattapan.bu.edu>

CALL FOR PAPERS
Special Session on Intelligent Robotics
2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
(http://www.csci.csusb.edu/iccin)
Sheraton Imperial Hotel & Convention Center
Research Triangle Park, North Carolina
March 2-5, 1997

SUBMISSION DEADLINE: December 6, 1996
Organizer: Paolo Gaudiano, Boston University Neurobotics Lab, gaudiano at cns.bu.edu

This session will include a combination of invited and submitted articles in the area of intelligent robotics.
We welcome submissions describing applied work utilizing traditional AI, neural networks, fuzzy logic, reinforcement learning, or any other technique relevant to the main themes of the conference. Preference will be given to papers describing work done on real robotic systems, though simulator results will be acceptable when the potential applicability to real robots is clear.

Submissions need to conform to the format specified for ICCIN'97: summary papers shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by November 15, 1996. After acceptance, a $150 deposit check must be received by January 31, 1997, to guarantee the publication of the 4-page summary in the Proceedings. The $150 can be deducted from the registration fee later.

Your papers should be RECEIVED at the following address by December 6th, 1996:
Paolo Gaudiano
Boston University
Dept. of Cognitive and Neural Systems
677 Beacon Street
Boston, MA 02215 USA

Alternatively, you may submit your camera-ready paper electronically. Postscript is preferred. If you have a different format (e.g., Word, Frame, ...), please send me e-mail ahead of time at gaudiano at cns.bu.edu to see if an electronic submission is possible.

If you have already submitted a relevant paper in response to the original ICCIN call for papers, please notify the coordinator (George Georgiou, georgiou at csci.csusb.edu) that you would like to be considered for this session.

--
Paolo Gaudiano
Dept. of Cognitive & Neural Systems, Boston University
677 Beacon Street, Boston, MA 02215 USA
Phone: 617-353-9482  Fax: 617-353-7755
e-mail: gaudiano at cns.bu.edu
WEB URL: http://cns-web.bu.edu/
Neurobotics Lab Phone: 617-353-1347
WEB URL: http://neurobotics.bu.edu/

From esann at dice.ucl.ac.be Fri Nov 15 07:53:36 1996
From: esann at dice.ucl.ac.be (esann@dice.ucl.ac.be)
Date: Fri, 15 Nov 1996 14:53:36 +0200
Subject: ESANN'97 final call for papers
Message-ID: <199611151351.OAA05052@ns1.dice.ucl.ac.be>

---------------------------------------------------
| European Symposium                              |
| on Artificial Neural Networks                   |
|                                                 |
| Bruges - April 16-17-18, 1997                   |
|                                                 |
| Final announcement and call for papers          |
---------------------------------------------------

Dear colleagues,

This is to remind you that the deadline for the submission of papers to ESANN'97, the European Symposium on Artificial Neural Networks, is November 29, 1996. All information about this conference and the submission of papers is available on the ESANN WWW server:
http://www.dice.ucl.ac.be/neural-nets/esann
or can be sent by e-mail upon request.

If you intend to submit a paper to ESANN'97, and if you think that you will have difficulties meeting the deadline exactly, please send a fax to the conference secretariat with the title of the paper, the authors and the abstract (even if not definitive); this will accelerate the processing of your paper after reception, and will give us the possibility to contact you if we do not receive it.

Thank you in advance for your contribution to ESANN'97!

Sincerely yours,
Michel Verleysen

D facto publications - conference services
45 rue Masui
1000 Brussels, Belgium
tel: +32 2 203 43 63
fax: +32 2 203 42 94
esann at dice.ucl.ac.be

Michel Verleysen
Univ. Cath. de Louvain - DICE
3, pl. du Levant
B-1348 Louvain-la-Neuve, Belgium
tel: +32 10 47 25 51
fax: +32 10 47 25 98
verleysen at dice.ucl.ac.be

http://www.dice.ucl.ac.be/neural-nets/esann
From Dimitris.Dracopoulos at ens-lyon.fr Fri Nov 15 09:06:41 1996
From: Dimitris.Dracopoulos at ens-lyon.fr (Dimitris Dracopoulos)
Date: Fri, 15 Nov 1996 15:06:41 +0100 (MET)
Subject: Last CFP: Neural and Evolutionary Algorithms for Intelligent Control
Message-ID: <199611151406.PAA01059@banyuls.ens-lyon.fr>

NEURAL AND EVOLUTIONARY ALGORITHMS FOR INTELLIGENT CONTROL
----------------------------------------------------------
L A S T   C A L L   F O R   P A P E R S

Special Session in: "15th IMACS World Congress 1997 on Scientific Computation, Modelling and Applied Mathematics", August 24-29 1997, Berlin, Germany

Special Session Organizer-Chair: Dimitri C. Dracopoulos
(Ecole Normale Superieure de Lyon, LIP and Brunel University, London)

Scope:
-----
The focus of the session will be on the latest developments in state-of-the-art neurocontrol and evolutionary techniques. Today, many advanced intelligent control applications utilize such methods, and papers describing these applications are most welcome. Theoretical discussions of how these techniques can be proved to be stable are also highly welcome.

Topics:
------
- Neurocontrollers
  * optimization over time
  * adaptive critic designs
  * brain-like neurocontrollers
- Evolutionary techniques as pure controllers
  * genetic algorithms
  * evolutionary programming
  * genetic programming
- Hybrid methods (neural nets + evolutionary algorithms)
- Theoretical and stability issues for neuro-evolutionary control
- Advanced control applications

Paper Contributions:
--------------------
Each paper will be published in the Proceedings of the IMACS'97 World Congress. The accepted papers will be orally presented (25 minutes each, including 5 min for discussion).

Important dates:
----------------
December 5, 1996: Deadline for receiving papers.
January 10, 1997: Notification of acceptance.
February 1997: Author typing instructions, for camera-ready copies.

Submission guidelines:
---------------------
One hardcopy (6-page limit, 10pt font) should be sent to the Session Chair:
Professor Dimitri C. Dracopoulos
Laboratoire de l'Informatique du Parallelisme (LIP)
Ecole Normale Superieure de Lyon
46 Allee d'Italie
69364 Lyon - Cedex 07, France

In the case of multiple authors, the paper should indicate which author is to receive correspondence. The corresponding author is requested to include in the cover letter: complete postal address, e-mail address, phone number, fax number, and a list of keywords (no more than 5).

** Electronic submissions (in postscript format) will be accepted. **

More (preliminary) information on the "15th IMACS World Congress 1997" can be found at: "http://www.first.gmd.de/imacs97/". Please note that special discounted registration fees (proceedings but no social program) will be available.

--
Professor Dimitris C. Dracopoulos
Laboratoire de l'Informatique du Parallelisme (LIP)
Ecole Normale Superieure de Lyon
46 Allee d'Italie
69364 Lyon - Cedex 07, France
Telephone: +33 (0) 472728504
Fax: +33 (0) 472728080
E-mail: Dimitris.Dracopoulos at ens-lyon.fr

From akg at enterprise.arl.psu.edu Fri Nov 15 17:14:47 1996
From: akg at enterprise.arl.psu.edu (Amulya K. Garga)
Garga) Date: Fri, 15 Nov 1996 17:14:47 -0500 (EST) Subject: CFP: ICNN'97 Special Session on NN for Monitoring Complex Systems Message-ID: <199611152214.RAA27310@wisdom.arl.psu.edu> CALL FOR PAPERS Special Session on Neural Networks Applications for Monitoring Complex Systems at the International Conference on Neural Networks 1997 Westin Galleria Hotel Houston, Texas 9-12 June 1997 SUBMISSION DEADLINE: 30 November 1996. ORGANIZER: Amulya K. Garga Applied Research Lab, The Pennsylvania State University garga at psu.edu This session will consist of invited and submitted papers in the area of Neural Networks Applications for Monitoring Complex Systems. We welcome submissions describing applied work utilizing neural networks, fuzzy logic, AI, or any other techniques relevant to the main themes of ICNN'97. SUMMARY: In recent years, extensive neural network research has focused on the problem of monitoring complex systems. Applications include condition-based maintenance for complex machinery (e.g., in which mechanical systems are monitored to determine their current state and predict their remaining useful life), monitoring of industrial processes, and medical applications such as health monitoring and diagnosis. These applications involve the need for pattern recognition, non-linear predictive modeling, representation of imprecise information, and implicit approximate reasoning. A number of researchers have investigated the application of neural networks to these problems. These applications have proven to be particularly challenging because of such factors as poor observability (e.g., low signal-to-noise ratios), the need to identify so-called rare events (e.g., failure events), multiple time-scale phenomena, the need to recognize context-based changes to operating conditions, and the general lack of available training data. The applications provide both a showcase for demonstrating the utility of neural networks, as well as unique challenges whose solutions require advances in neural network theory and practice. Realistic solutions require a multi-disciplinary approach involving physical modeling of non-linear systems and of sensor technologies, multi-sensor data fusion, statistical signal processing, and artificial intelligence methods. The solution of these application problems should have international societal impact, including improved safety for mechanical systems, reduced costs, and the potential for improved health care through semi-automated monitoring and diagnosis. To date this rapidly evolving research has only been reported in specialized conferences, without wide recognition or attention. Research is being performed in many countries including the U.S., Canada, Japan, Australia, and many European countries. This special session is important and timely and is intended to provide a forum where the international neural network community as well as the application community can present mutually applicable and beneficial contributions. In addition, the session would provide the international research community with access to data sets for future neural network research. PAPER SUBMISSION: Submissions need to conform to the format specified for ICNN'97: Six copies (one original and five copies) of the paper must be submitted. Papers must be camera-ready on 8 1/2 by 11 white paper, one-column format in Times or similar font style, 10 points or larger, with one inch margins on all four sides. Do not fold or staple the original camera-ready copy.
Four pages are encouraged; however, the paper must not exceed six pages, including figures, tables, and references, and should be written in English. Submissions that do not adhere to the guidelines above will be returned unreviewed. Centered at the top of the first page should be the complete title, author name(s), and postal and electronic mailing addresses. In the accompanying letter, the following information must be included: Full Title of the Paper Technical Area (First and Second Choices) Corresponding Author (Name, Postal and E-Mail Addresses, Tel. & FAX Nos) Your papers should be RECEIVED at the following address by 30 NOVEMBER 1996: Ms. Leanne Zindler Applied Research Laboratory The Pennsylvania State University P.O. Box 30 North Atherton Street State College, PA 16804-0030, USA Phone: +1 814-863-4344 If you have already submitted a relevant paper in response to the original ICNN'97 call for papers, please notify the ICNN'97 technical program coordinator (Prof. J. M. Keller, keller at ece.missouri.edu) that you would like to be considered for this session. PAPER REVIEW: The papers will undergo a review process similar to the other papers being considered for ICNN'97. Accepted papers will also appear in the proceedings of ICNN'97. TIMETABLE: Paper submission: 30 November 1996 Review complete: 2 January 1997 Revised submission: 1 February 1997 ICNN'97 is sponsored by: IEEE Neural Network Council (NNC) and International Neural Network Society (INNS). URLs: This CFP: http://wisdom.arl.psu.edu/People/akg/NNMonCS.html ICNN'97: http://www.eng.auburn.edu/department/ee/ICNN97/icnn.htm http://www.mindspring.com/~pci-inc/ICNN97/ http://wwweng.uwyo.edu/icnn97/ IEEE NNC: http://www.ieee.org/nnc/ INNS: http://cns-web.bu.edu/inns/ -- Amulya K. Garga, Ph.D. | Email: garga at psu.edu Applied Research Laboratory | http://wisdom.arl.psu.edu/People/Amulya_Garga P.O. Box 30 | Phone: 814-863-5841 Fax: 814-863-0673 State College, PA 16804-0030 | From doya at erato.atr.co.jp Fri Nov 15 23:03:29 1996 From: doya at erato.atr.co.jp (Kenji Doya) Date: Sat, 16 Nov 1996 13:03:29 +0900 Subject: Position available: Kawato Dynamic Brain Project Message-ID: <199611160403.NAA02154@dorothy.erato.atr.co.jp> Research Position, Kawato Dynamic Brain Project, JSTC Postdoctoral research positions will be available from April 1997 in the Kawato Dynamic Brain Project, a part of the Exploratory Research for Advanced Technology (ERATO) program run by the Japan Science and Technology Corporation, a government-sponsored agency. The goal of this five-year project (October 1996 - September 2001) is to understand the brain mechanisms of human cognition and sensory-motor learning to the extent that we can reproduce them as computer programs and robotic systems. Candidates must have a strong background in mathematical, computational, neurobiological, cognitive and/or robotic sciences, and should have broad interests in one or more of the following research areas: (1) Computational Neurobiology: models of cerebellum, basal ganglia and cortical motor areas; dynamic computation in visual processing; encoding of hierarchical sequences in the brain (speech and songs); learning of rhythmic and transient motor patterns (stand up to walk). (2) Computational Psychology: process of visuo-motor coordinate transformation and trajectory planning; objective functions for natural biological motion; functional brain imaging (fMRI, PET, MEG); human motor psychophysical experiments.
(3) Computational Learning: modular function approximation algorithms; reinforcement learning in real time; principles of nonlinear dynamics for perceptual-motor coordination; robot learning from human demonstration; building of a humanoid robot with dextrous arms and oculomotor systems; motor psychophysical experiments which compare robot and human behavior. A detailed description of the project is available on our Web page: http://www.erato.atr.co.jp/ The project is currently located in the Advanced Telecommunications Research Institute International (ATR-I), where a considerably multi-national, multi-lingual community has self-organized. Applicants must have a Ph.D. or equivalent degree. Appointments will be made for one or two years with possible extensions. Salaries are competitive. Please send a CV, a list of publications, copies of up to three major publications, names and addresses of up to three references, and a cover letter describing your research interests to: Search Committee Kawato Dynamic Brain Project, JSTC 2-2 Hikaridai, Seika-cho, Soraku-gun Kyoto 619-02, Japan In order to receive full consideration, applications must be received by January 15, 1997. The search, however, will continue beyond this date until all positions are filled. Please feel free to inquire about the specifics of the positions by sending mail to: email: search at erato.atr.co.jp fax: +81-774-95-3001 For general information about ERATO research positions, see http://www2.jst-c.go.jp/jst/erato/erato-e/NewPositions/ **************************************************************** Kenji Doya Computational Neurobiology Group Kawato Dynamic Brain Project, Japan Science and Technology Corp. 2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 619-02, Japan tel:+81-774-95-1210 email:doya at erato.atr.co.jp fax:+81-774-95-3001 http://www.erato.atr.co.jp/~doya From ping at cogsci.richmond.edu Sat Nov 16 13:12:33 1996 From: ping at cogsci.richmond.edu (Ping Li) Date: Sat, 16 Nov 1996 13:12:33 -0500 (EST) Subject: Connection Science Vol. 8 (1) Message-ID: <199611161812.NAA14915@cogsci.richmond.edu.urich.edu> From harnad at cogsci.soton.ac.uk Sat Nov 16 15:25:18 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sat, 16 Nov 96 20:25:18 GMT Subject: Long-Term Potentiation: BBS Call for Commentators Message-ID: <6298.9611162025@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming BBS target article on: LONG-TERM POTENTIATION: WHAT'S LEARNING GOT TO DO WITH IT? by Tracey J. Shors & Louis D. Matzel This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ LONG-TERM POTENTIATION: WHAT'S LEARNING GOT TO DO WITH IT? Tracey J. Shors & Louis D. Matzel Department of Psychology and Program in Neuroscience, Princeton University, Princeton, New Jersey 08544 shors at pucc.princeton.edu Department of Psychology, Program in Biopsychology and Behavioral Neuroscience, Rutgers University, New Brunswick, New Jersey 08903 matzel at rci.rutgers.edu KEYWORDS: NMDA, synaptic plasticity, Hebbian synapses, calcium, hippocampus, theta rhythm, spatial learning, classical conditioning, attention, arousal, memory systems ABSTRACT: Long-term potentiation (LTP) is operationally defined as a long-lasting increase in synaptic efficacy which follows high-frequency stimulation of afferent fibers. Since the first full description of the phenomenon in 1973, exploration of the mechanisms underlying LTP induction has been one of the most active areas of research in neuroscience. Of principal interest to those who study LTP, particularly LTP in the mammalian hippocampus, is its presumed role in the establishment of stable memories, a role consistent with "Hebbian" descriptions of memory formation. Other characteristics of LTP, including its rapid induction, persistence, and correlation with natural brain rhythms, provide circumstantial support for this connection to memory storage. Nonetheless, there is little empirical evidence that directly links LTP to the storage of memories. In this commentary, we review a range of cellular and behavioral characteristics of LTP, and evaluate whether those characteristics are consistent with the purported role of hippocampal LTP in memory formation. We suggest that much of the present focus on LTP reflects a preconception that LTP is a learning mechanism, although the empirical evidence often suggests that LTP is unsuitable for such a role. As an alternative to serving as a memory storage device, we propose that LTP may serve as a neural equivalent to an arousal or attention device in the brain. Accordingly, LTP is suggested to nonspecifically increase the effective salience of discrete external stimuli and thereby is capable of facilitating the induction of memories at distant synapses. 
In an environment open to critical inquiry, other hypotheses regarding the functional utility of this intensely studied mechanism are conceivable; the intent of this article is not exclusively to promote a single hypothesis, but rather to stimulate discussion about the neural mechanisms that are likely to underlie memory storage, and to appraise whether LTP can reasonably be considered a viable candidate for such a mechanism. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.shors). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.shors.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.shors ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.shors gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.shors When you have the file(s) you want, type: quit From harnad at cogsci.soton.ac.uk Sat Nov 16 15:29:06 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sat, 16 Nov 96 20:29:06 GMT Subject: Embodied Cognition: BBS Call for Commentators Message-ID: <6310.9611162029@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming BBS target article on: DEICTIC CODES FOR THE EMBODIMENT OF COGNITION by Dana H. Ballard, Mary M. Hayhoe, Polly K. Pook, & Rajesh P. N. Rao This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at cogsci.soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/ ftp://ftp.princeton.edu/pub/harnad/BBS/ ftp://ftp.cogsci.soton.ac.uk/pub/bbs/ gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. 
To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ DEICTIC CODES FOR THE EMBODIMENT OF COGNITION Dana H. Ballard, Mary M. Hayhoe, Polly K. Pook, and Rajesh P. N. Rao Computer Science Department University of Rochester Rochester, NY 14627, USA {dana, mary, pook, rao}@cs.rochester.edu KEYWORDS: deictic computations; embodiment; working memory; natural tasks; eye movements; brain computation; binding; sensory-motor tasks; pointers. ABSTRACT: To describe phenomena that occur at different time scales, computational models of the brain must necessarily incorporate different levels of abstraction. We argue that at time scales of approximately one-third of a second, orienting movements of the body play a crucial role in cognition and form a useful computational level. This level is more abstract than that used to capture neural phenomena yet is framed at a level of abstraction below that traditionally used to study high-level cognitive processes such as reasoning. We term this level the embodiment level. At the embodiment level, the constraints of the physical system determine the nature of cognitive operations. The key synergy is that, at time scales of about one-third second, the natural sequentiality of body movements can be matched to the natural computational economies of sequential decision systems. The way this is done is through a system of implicit reference termed deictic, whereby pointing movements are used to bind objects in the world to cognitive programs. The focus of this paper is to study how deictic bindings enable the solution of natural tasks. We show how deictic computation provides a mechanism for representing the essential features that link external sensory data with internal cognitive programs and motor actions. In particular, we argue that one of the central features of cognition, working memory, can be related to moment-by-moment dispositions of body features such as eye movements and hand movements. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.ballard). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. 
Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs/ http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.ballard.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.ballard ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.ballard gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.ballard When you have the file(s) you want, type: quit ---------- From maruoka at maruoka.ecei.tohoku.ac.jp Mon Nov 18 12:25:19 1996 From: maruoka at maruoka.ecei.tohoku.ac.jp (Akira Maruoka) Date: Mon, 18 Nov 96 12:25:19 JST Subject: ALT97 first ANNOUNCEMENT Message-ID: <9611180325.AA08640@taihei.maruoka.ecei.tohoku.ac.jp> This CFP was sent to several mailing lists. Please accept my apologies if you receive multiple copies. Akira Maruoka ---------------------------------------------------------------------- CALL FOR PAPERS---ALT 97 The Eighth International Workshop on Algorithmic Learning Theory Sendai, Japan October 6-8, 1997 ______________________________________________________________________ The 8th International Workshop on Algorithmic Learning Theory (ALT'97) will be held in Sendai, Japan during October 6-8, 1997. The workshop is sponsored by the Japanese Society for Artificial Intelligence (JSAI) and Tohoku University. We invite submissions to ALT'97 in all areas related to algorithmic learning theory including (but not limited to): the design and analysis of learning algorithms, the theory of machine learning, computational logic of/for machine discovery, inductive inference, learning via queries, artificial and biological neural networks, pattern recognition, learning by analogy, Bayesian/MDL/MML estimation, statistical learning, inductive logic programming, application of learning to databases and biological sequence analysis. In addition to the above theoretical topics, we invite submissions to two special tracks on data mining and case-based learning, aimed at promoting applications of theoretical ideas. INVITED TALKS. Invited talks will be given by Manuel Blum (UC Berkeley and City Univ. Hong Kong), Wolfgang Maass (Tech. Univ. Graz), Lenny Pitt (Univ. Illinois), and Masahiko Sato (Kyoto Univ.). SUBMISSIONS. Authors may either e-mail postscript files of their abstracts to mli at cs.cityu.edu.hk, or submit nine copies of their extended abstracts to: Professor Ming Li - ALT'97 Department of Computer Science City University of Hong Kong Tat Chee Avenue Kowloon, Hong Kong Abstracts must be received by April 1, 1997. Notification of acceptance or rejection will be (e)mailed to the first (or designated) author by May 19, 1997. Camera-ready copy of accepted papers will be due June 16, 1997. FORMAT. The submitted abstract should consist of a cover page with title, author names, postal and e-mail addresses, an approximately 200 word summary, and a body not longer than ten (10) pages of size A4 or 7x10.5 inches in twelve-point font. You may use appendices to include long but major proofs. If you submit hardcopies, double-sided printing is encouraged. POLICY.
Each submitted abstract will be reviewed by the members of the program committee, and be judged on clarity, significance, and originality. Joint submissions to other conferences with published proceedings are not allowed. Papers that have appeared in journals or other conferences are not appropriate for ALT'97. Proceedings will be published as a volume in the Lecture Notes in Artificial Intelligence, Springer-Verlag, and will be available at the conference. Selected papers of ALT'97 will be invited to a special issue of the journal Theoretical Computer Science. One scholarship of $500US sponsored by IFIP TC 1.4 will be awarded to a student author (please mark student authors) in order to attend ALT'97. Conference chair: Professor Akira Maruoka Tohoku University Sendai, Japan 980 maruoka at ecei.tohoku.ac.jp Program committee chair: Ming Li (City Univ. HK and Univ. Waterloo) Program Committee: Naoki Abe (NEC, Japan) Nader Bshouty (Univ. Calgary, Canada) Nicolo Cesa-Bianchi (Milano Univ., Italy) Makoto Haraguchi (Hokkaido Univ., Japan) Hiroki Ishizaka (Kyushu Tech., Japan) Klaus P. Jantke (HTWK Leipzig, Germany) Philip Long (Nat. Univ. Singapore, Singapore) Shinichi Morishita (IBM Japan, Japan) Hiroshi Motoda (Osaka Univ., Japan) Yasubumi Sakakibara (Tokyo Denki Univ., Japan) Arun Sharma (New South Wales, Australia) Ayumi Shinohara (Kyushu Univ., Japan) Carl Smith (Univ. Maryland, USA) Frank Stephan (RKU, Germany) Naftali Tishby (Hebrew Univ., Israel) Paul Vitanyi (CWI, Netherlands) Les Valiant (Harvard, USA) Osamu Watanabe (Titech., Japan) Takashi Yokomori (UEC, Japan) Bin Yu (UC Berkeley, USA) Local arrangements chair: Professor Hirotomo Aso Graduate School of Engineering Tohoku University Sendai, Japan 980 alt97 at maruoka.ecei.tohoku.ac.jp For more information, contact: Email: alt97 at maruoka.ecei.tohoku.ac.jp Homepage: http://www.maruoka.ecei.tohoku.ac.jp/~alt97  From lopez at physik.uni-wuerzburg.de Mon Nov 18 07:30:18 1996 From: lopez at physik.uni-wuerzburg.de (Bernardo Lopez) Date: Mon, 18 Nov 1996 13:30:18 +0100 (MEZ) Subject: Paper available: Learning by dilution in a Neural Network Message-ID: <199611181230.NAA13542@wptx14.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1996/WUE-ITP-96-028.ps.gz The following manuscript is now available via anonymous ftp: (See below for the retrieval procedure) ------------------------------------------------------------------ "Learning by dilution in a Neural Network" B. Lopez and W. Kinzel Ref. WUE-ITP-96-028 Abstract A perceptron with N random weights can store of the order of N patterns by removing a fraction of the weights without changing their strengths. The critical storage capacity as a function of the concentration of the remaining bonds for random outputs and for outputs given by a teacher perceptron is calculated. A simple Hebb--like dilution algorithm is presented which in the teacher case reaches the optimal generalization ability. --------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1996 ftp> binary ftp> get WUE-ITP-96-028.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-96-028.ps.gz e.g. unix> lp WUE-ITP-96-028.ps [15 pages] (*) can be replaced by "get WUE-ITP-96-028.ps". The file will then be uncompressed before transmission (slow!). 
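The abstract above describes storage purely by pruning: the surviving weights keep their original random strengths, and only the choice of which bonds to cut is learned. As a rough toy sketch of that general idea in Python/NumPy -- not the algorithm of the paper, with the Hebbian pruning criterion and all sizes invented purely for illustration -- a fixed random "student" perceptron can be diluted by keeping only the bonds whose sign agrees with a Hebbian estimate computed from teacher-labelled examples:

import numpy as np

rng = np.random.default_rng(0)
N, P = 501, 1000                       # number of weights and of training examples (arbitrary)
teacher = rng.choice([-1.0, 1.0], N)   # teacher perceptron defining the target rule
student = rng.choice([-1.0, 1.0], N)   # student weights: random, and never retrained

X = rng.choice([-1.0, 1.0], (P, N))    # random binary training inputs
y = np.sign(X @ teacher)               # labels supplied by the teacher

h = X.T @ y                            # Hebbian field: correlation of each input component with the label
keep = np.sign(h) == np.sign(student)  # dilution mask: keep bonds whose sign matches the field
diluted = student * keep               # cut bonds become zero; kept strengths are unchanged

X_test = rng.choice([-1.0, 1.0], (5000, N))
agree = np.mean(np.sign(X_test @ diluted) == np.sign(X_test @ teacher))
print(f"bonds kept: {keep.mean():.2f}  test agreement with teacher: {agree:.2f}")

With these toy sizes the mask removes roughly half of the bonds, and the diluted student agrees with the teacher well above the 50% chance level of the undiluted random student, so the pruning pattern alone carries the learning.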
_____________________________________________________________________ From fayyad at MICROSOFT.com Sun Nov 17 23:47:27 1996 From: fayyad at MICROSOFT.com (Usama Fayyad) Date: Sun, 17 Nov 1996 20:47:27 -0800 Subject: Data Mining & knowledge Discovery Journal: contents vol 1:1 Message-ID: ANNOUNCEMENT and CALL FOR PAPERS Below are the contents of the first issue of the new journal: Data Mining and Knowledge Discovery, Kluwer Academic Publishers. The journal is accepting submissions of works from a wide variety of fields that relate to data mining and knowledge discovery in databases (KDD). We accept regular research contributions, survey articles, application details papers, as well as short (2-page) application summaries. The goal is for Data Mining and Knowledge Discovery to become the premier forum for publishing high quality original work from the wide variety of fields on which KDD draws, including: statistics, pattern recognition, database research and systems, modelling uncertainty and decision making, neural networks, machine learning, OLAP, data warehousing, high-performance and parallel computing, and visualization. The goal is to create a reference resource where researchers and practitioners in the area can look up and communicate relevant work from a wide variety of fields. The journal's homepage provides a detailed call for papers, a description of the journal and its scope, and a list of the Editorial Board. Abstracts of the articles in the first issue and the editorial are also on-line. The home page is maintained at: http://www.research.microsoft.com/research/datamine - If you are interested in submitting a paper, please visit the homepage: http://www.research.microsoft.com/research/datamine to look up instructions. - If you would like a free sample issue sent to you, click on the link in http://www.research.microsoft.com/research/datamine and provide an address via the on-line form. Usama Fayyad, co-Editor-in-Chief Data Mining and Knowledge Discovery (datamine at microsoft.com) ======================================================================= Data Mining and Knowledge Discovery http://www.research.microsoft.com/research/datamine CONTENTS OF: Volume 1, Issue 1 ============================== For more details, abstracts, and an on-line version of the Editorial, see http://www.research.microsoft.com/research/datamine/vol1-1 ===========Volume 1, Number 1, March 1997=========== EDITORIAL by Usama Fayyad PAPERS ====== Statistical Themes and Lessons for Data Mining Clark Glymour, David Madigan, Daryl Pregibon, Padhraic Smyth Data Cube: A Relational Aggregation Operator Generalizing Group-by, Cross-Tab, and Sub Totals Jim Gray, Surajit Chaudhuri, Adam Bosworth, Andrew Layman, Don Reichart, Murali Venkatrao, Frank Pellow (IBM, Toronto), Hamid Pirahesh On Bias, Variance, 0/1 - loss, and the Curse-of-Dimensionality Jerome H. Friedman Bayesian Networks for Data Mining David Heckerman BRIEF APPLICATION SUMMARIES: ============================ Advanced Scout: Data Mining and Knowledge Discovery in NBA data Ed Colet, Inderpal Bhandari, Jennifer Parker, Zachary Pines, Rajiv Pratap, Krishnakumar Ramanujam ------------------------------------------------------------------------ To get a free sample copy of the above issue, visit the web page at http://www.research.microsoft.com/research/datamine Those who do not have web access may send their address to Kluwer by e-mail at: sdelman at wkap.com From kremer at running.dgcd.doc.ca Mon Nov 18 13:33:17 1996 From: kremer at running.dgcd.doc.ca (Stefan C.
Kremer) Date: Mon, 18 Nov 1996 13:33:17 -0500 (EST) Subject: NIPS 96 Workshop Announcement: Dynamical Recurrent Networks, Day 2 Message-ID: NIPS 96 Workshop Announcement: ============================== Dynamical Recurrent Networks Post Conference Workshop Day 2 Organized by John Kolen and Stefan Kremer Saturday, December 7, 1996 Snowmass, Colorado Introduction: There has been significant interest in recent years in dynamic recurrent neural networks and their application to control, system identification, signal processing, and time series analysis and prediction. Much of this work is simply an extension of techniques which work well for feedforward networks to recurrent networks. However, when dynamics are added to a system there are many complex issues which are not relevant to the study of feedforward nets, such as the existence of attractors and questions of stability, controllability, and observability. In addition, the architectures and learning algorithms that work well for feedforward systems are not necessarily useful or efficient in recurrent systems. The first day of the workshop highlights the use of traditional results from systems theory and nonlinear dynamics to analyze the behavior of recurrent networks. The aim of the workshop is to expose recurrent network designers to the traditional frameworks available in these well established fields. A clearer understanding of the known results and open problems in these fields, as they relate to recurrent networks, will hopefully enable people working with recurrent networks to design more robust systems which can be more efficiently trained. This session will overview known results from systems theory and nonlinear dynamics which are relevant to recurrent networks, discuss their significance in the context of recurrent networks, and highlight open problems. (More information about Day 1 of the workshop can be found at: http://flute.lanl.gov/NIS-7_home_pages/jhowse/talk_abstracts.html). The second day of the workshop addresses the issues of designing and selecting architectures and algorithms for dynamic recurrent networks. Unlike previous workshops, which have typically focussed on reporting the results of applying specific network architectures to specific problems, this session is intended to assist both users and developers of recurrent networks to select appropriate architectures and algorithms for specific tasks. In addition, this session will provide a backward flow of information -- a forum where researchers can listen to the needs of application developers. The wide variety, rapid development and diverse applications of recurrent networks are sure to make for exciting and controversial discussions. Day 1, Friday, Dec. 6, 1996 =========================== More information about Day 1 of the workshop can be found at: http://flute.lanl.gov/NIS-7_home_pages/jhowse/talk_abstracts.html Day 2, Saturday, Dec. 7, 1996 ============================= Target Audience: This workshop is targeted at two groups. First, application developers faced with the task of selecting an appropriate tool for their problem will find this workshop invaluable. Second, researchers interested in studying and extending the capabilities of dynamic recurrent networks will wish to communicate their findings and observations to other researchers in the area. In addition, these researchers will have an opportunity to listen to the needs of their technology's users. Format: The format of the second day is designed to encourage open discussion.
Presenters have provided 1 or 2 references to electronically accessible papers. At the workshop itself, the presenter will be asked to briefly (10 minutes) discuss highlights, conclusions, controversial issues or open problems of their research. This presentation will be followed by a 20 minute discussion period during which the expression of contrary opinions, related problems and speculation regarding solutions to open problems will be encouraged. The workshop will conclude with a one hour panel discussion. Important Note to People Attending this Workshop: The goal of this workshop is to offer an opportunity for an open discussion of important issues in the area of dynamic networks. To achieve this goal, the presenters have been asked not to give a detailed description of their work but rather to give only a very brief synopsis in order to maximize the available discussion time. Attendees will get the most from this workshop if they are already familiar with the details of the work to be discussed. To make this possible, the presenters have made papers relevant to the discussions available electronically via the links on the workshop's web-page. Attendees who are not already familiar with the work of the presenters at the workshop are encouraged to examine the workshop's web page (at: "http://running.dgcd.doc.ca/NIPS96/"), and to retrieve and examine the papers prior to attending the workshop. List of Talk Titles and Speakers: Learning Markovian Models for Sequence Processing Yoshua Bengio, University of Montreal / AT&T Labs - Research Guessing Can Outperform Many Long Time Lag Algorithms Jürgen Schmidhuber, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale. Sepp Hochreiter, Fakultät für Informatik, Technische Universität München. Optimal Learning of Data Structure. Marco Gori, Universita' di Firenze How Embedded Memory in Recurrent Neural Network Architectures Helps Learning Long-term Temporal Dependencies. T. Lin, B. Horne & C. Lee Giles, NEC Research Institute, Princeton, NJ. Discovering the time scale of trends and periodic structure. Michael Mozer and Kelvin Fedrick, University of Colorado. Title to be announced. Lee Feldkamp, Ford Motor Co. Labs. Representation and learning issues for RNNs learning context free languages Janet Wiles and Brad Tonkes, Departments of Computer Science and Psychology, University of Queensland Title to be announced. Speaker to be Announced Long Short Term Memory. Sepp Hochreiter, Fakultät für Informatik, Technische Universität München. Jürgen Schmidhuber, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale. Web page: Please note: more detailed and up-to-date information regarding this workshop, as well as the reference papers described above, can be found at the Workshop's web page located at: http://running.dgcd.doc.ca/NIPS96/ -- Dr. Stefan C. Kremer, Research Scientist, Artificial Neural Systems Communications Research Centre, 3701 Carling Ave., P.O.
Box 11490, Station H, Ottawa, Ontario K2H 8S2 WWW: http://running.dgcd.doc.ca/~kremer/index.html Tel: (613)990-8175 Fax: (613)990-8369 E-mail: Stefan.Kremer at crc.doc.ca From devries at sarnoff.com Mon Nov 18 16:00:15 1996 From: devries at sarnoff.com (Aalbert De Vries x2456) Date: Mon, 18 Nov 96 16:00:15 EST Subject: NNSP*97 Workshop Announcement Message-ID: <9611182100.AA27194@peanut.sarnoff.com> ************************************************************* * We apologize for multiple deliveries of this announcement * ************************************************************* 1997 IEEE Workshop on Neural Networks for Signal Processing 24-26 September 1997 Amelia Island Plantation, Florida FIRST ANNOUNCEMENT AND CALL FOR PAPERS Thanks to the sponsorship of the IEEE Signal Processing Society and the co-sponsorship of the IEEE Neural Network Council, we are proud to announce the seventh of a series of IEEE Workshops on Neural Networks for Signal Processing. Papers are solicited for, but not limited to, the following topics: * Paradigms: artificial neural networks, Markov models, fuzzy logic, inference net, evolutionary computation, nonlinear signal processing, and wavelets * Application areas: speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition * Theories: generalization, design algorithms, optimization, parameter estimation, and network architectures * Implementations: parallel and distributed implementation, hardware design, and other general implementation technologies Instructions for submitting papers Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. Submissions should be sent to: Dr. Jose C. Principe IEEE NNSP'97 444 CSE Bldg #42 P.O. Box 116130 University of Florida Gainesville, FL 32611 Important Dates: **************************************************** * Submission of extended summary: January 27, 1997 * **************************************************** * Notification of acceptance: March 31, 1997 * Submission of photo-ready accepted paper: April 26, 1997 * Advance registration: before July 1, 1997 Further Information Local Organizer Ms. Sharon Bosarge Telephone: 352-392-2585 Fax: 352-392-0044 e-mail: sharon at ee1.ee.ufl.edu World Wide Web http://www.cnel.ufl.edu/nnsp97/ Organization General Chairs Lee Giles (giles at research.nj.nec.com), NEC Research Nelson Morgan (morgan at icsi.berkeley.edu), UC Berkeley Proceedings Chair Elizabeth J. Wilson (bwilson at ed.ray.com), Raytheon Co. Publicity Chair Bert DeVries (bdevries at sarnoff.com), David Sarnoff Research Center Program Chair Jose Principe (principe at synapse.ee.ufl.edu), University of Florida Program Committee Les ATLAS Andrew BACK A.
CONSTANTINIDES Federico GIROSI Lars Kai HANSEN Allen GORIN Yu-Hen HU Jenq-Neng HWANG Biing-Hwang JUANG Shigeru KATAGIRI Gary KUHN Sun-Yuan KUNG Richard LIPPMANN John MAKHOUL Elias MANOLAKOS Erkki OJA Tomaso POGGIO Tulay ADALI Volker TRESP John SORENSEN Takao WATANABE Raymond WATROUS Andreas WEIGEND Christian WELLEKENS About Amelia Island Plantation Amelia Island is in extreme northeast Florida, across the St. Mary's river. The island is just 29 miles from Jacksonville International Airport, which is served by all major airlines. Amelia Island Plantation is a 1,250 acre resort/paradise that offers something for every traveler. The Plantation offers 33,000 square feet of workable meeting space and a staff dedicated to providing an efficient, yet relaxed atmosphere. The many amenities of the Plantation include 45 holes of championship golf, 23 Har-Tru tennis courts, modern fitness facilities, an award-winning children's program, more than 7 miles of flora-filled bike and jogging trails, 21 swimming pools, diverse accommodations, exquisite dining opportunities, and of course, miles of glistening Atlantic beach front. From td at elec.uq.edu.au Mon Nov 18 21:52:32 1996 From: td at elec.uq.edu.au (Tom Downs) Date: Tue, 19 Nov 1996 12:52:32 +1000 (EST) Subject: postdoc available Message-ID: <199611190252.MAA21705@s4.elec.uq.edu.au> From marwan at ee.usyd.edu.au Mon Nov 18 23:23:59 1996 From: marwan at ee.usyd.edu.au (Marwan Jabri) Date: Tue, 19 Nov 1996 15:23:59 +1100 (EST) Subject: Research Positions (posted for a colleague) Message-ID: The research positions below are posted for a colleague, Prof. Max Bennett (maxb at physiol.su.oz.au) ------------------------------------------------------------------------ Research positions for Electrical Engineers in Neurobiology. Two positions are available for Honours Graduates in Electrical Engineering to participate in a research program funded by the National Health and Medical Research Council for a minimum of three years. This program involves theoretical analysis and modelling of how currents are generated at nerve terminals and subsequently flow in neurones and muscle cells to change their excitability. The program also involves experimental work in which the electrical properties of neurones and muscle cells are determined in order to provide quantitative evaluation of the parameters used in the electrical modelling. Incorporating some of this research for a PhD is also possible. Remuneration will be in the range of $25,000 to $30,000 per annum. For further information, contact Prof. Max Bennett, Neurobiology Laboratory, Dept. of Physiology, University of Sydney, NSW 2006 Australia or at maxb at physiol.su.oz.au From hu at eceserv0.ece.wisc.edu Tue Nov 19 16:47:31 1996 From: hu at eceserv0.ece.wisc.edu (Yu Hen Hu) Date: Tue, 19 Nov 1996 15:47:31 -0600 Subject: No subject Message-ID: <199611192147.AA08557@eceserv0.ece.wisc.edu> Submitted by Yu Hen Hu (hu at engr.wisc.edu) Please forgive me if you receive multiple copies of this posting.
******************************************************************** LAST CALL FOR PAPERS - DEADLINE 12/1/96 A Special Issue of IEEE Transactions on Signal Processing: Applications of Neural Networks to Signal Processing ******************************************************************** Expected Publication Date: November 1997 Issue *** Submission Deadline: December 1, 1996 *** Guest Editors: A. G. Constantinides, Simon Haykin, Yu Hen Hu, Jenq-Neng Hwang, Shigeru Katagiri, Sun-Yuan Kung, T. A. Poggio Significant progress has been made applying artificial neural network (ANN) techniques to signal processing. From a signal processing perspective, it is imperative to understand how neural network based algorithms are related to more conventional approaches in terms of performance, cost, and practical implementation issues. Questions like these demand honest, pragmatic, innovative, and imaginative answers. This special issue offers a unique forum for researchers and practitioners in this field to present their views on these important questions. We seek the highest quality manuscripts which focus on the signal processing aspects of a neural network based algorithm, application, or implementation. Topics of interest include, but are not limited to: Neural network based signal detection, classification, and understanding algorithms. Nonlinear system identification, signal prediction, modeling, adaptive filtering, and neural network learning algorithms. Neural network applications to biomedical signal processing, including medical imaging, Electrocardiogram, EEG, and related topics. Signal processing algorithms for biological neural system modeling. Comparison of neural network based approaches with conventional signal processing algorithms for solving real world signal processing tasks. Real world signal processing applications based on neural networks. Fast and parallel algorithms for efficient implementation of neural networks based signal processing systems. Prospective authors are encouraged to SUBMIT MANUSCRIPTS BY DECEMBER 1, 1996 to: Professor Yu-Hen Hu, Dept. of Electrical and Computer Engineering, Univ. of Wisconsin - Madison, 1415 Engineering Drive, Madison, WI 53706-1691 U.S.A. E-mail: hu at engr.wisc.edu Phone: (608) 262-6724 Fax: (608) 262-1267 In the cover letter, indicate that the manuscript is submitted to the special issue on neural networks for signal processing. All manuscripts should conform to the submission guideline detailed in the "information for authors" printed in each issue of the IEEE Transactions on Signal Processing. Specifically, the length of each manuscript should not exceed 30 double-spaced pages. SCHEDULE Manuscript received by: December 1, 1996 Completion of initial review: March 31, 1997 Final manuscript received by: June 30, 1997 Expected publication date: November, 1997 DISTINGUISHED GUEST EDITORS Prof. A. G. Constantinides, Imperial College, UK, a.constantinides at romeo.ic.ac.uk Prof. Simon Haykin, McMaster University, Canada, haykin at synapse.crl.mcmaster.ca Prof. Yu Hen Hu, Univ. of Wisconsin, U.S.A., hu at engr.wisc.edu Prof. Jenq-Neng Hwang, University of Washington, U.S.A., hwang at ee.washington.edu Dr. Shigeru Katagiri, ATR, JAPAN, katagiri at hip.atr.co.jp Prof. Sun-Yuan Kung, Princeton University, U.S.A., kung at princeton.edu Prof. T. A. Poggio, Massachusetts Inst.
of Tech., U.S.A., tp-temp at ai.mit.edu From ping at cogsci.richmond.edu Tue Nov 19 22:55:17 1996 From: ping at cogsci.richmond.edu (Ping Li) Date: Tue, 19 Nov 1996 22:55:17 -0500 (EST) Subject: Connection Science Message-ID: <199611200355.WAA20946@cogsci.richmond.edu.urich.edu> From td at elec.uq.edu.au Wed Nov 20 18:36:48 1996 From: td at elec.uq.edu.au (Tom Downs) Date: Thu, 21 Nov 1996 09:36:48 +1000 (EST) Subject: PostDoc Available Message-ID: <199611202336.JAA29483@print.elec.uq.edu.au> POSTDOCTORAL RESEARCH FELLOWSHIP Neural Networks Lab, Department of Electrical and Computer Engineering, University of Queensland, Brisbane, Australia, 4072. TOPIC: Estimating generalization performance in feedforward neural networks This 3-year position arises following the award of an Australian Research Council grant for a project in the area of generalization performance estimation. The ideal candidate will have a strong background in applied probability and mathematical statistics and will be capable of building upon the recent contributions to the field made by engineers, computer scientists and physicists. Good programming skills (preferably in C or C++) and good interpersonal skills will also be expected. The position is available from early in 1997. Salary will be at the level of a University of Queensland postdoctoral position and will start at around A$38,000 per annum with annual increments. To apply for this position, please send your CV and the names of three referees either to Prof T Downs at the above address or, by email, to td at elec.uq.edu.au. -- regards, Tom Downs Dept. of Electrical and Computer Engineering, University of Queensland, QLD, Australia, 4072 Phone: +61-7-365-3869 Fax: +61-7-365-4999 INTERNET: td at elec.uq.edu.au From raffaele at caio.irmkant.rm.cnr.it Thu Nov 21 21:00:14 1996 From: raffaele at caio.irmkant.rm.cnr.it (Raffaele Calabretta) Date: Thu, 21 Nov 1996 20:00:14 -0600 (CST) Subject: paper available on diploid neural networks Message-ID: The following paper (to appear in Neural Processing Letters) is now available via anonymous ftp: ------------------------------------------------------------------------- "Two is better than one: a diploid genotype for neural networks" --------------------------------------------------------------- Raffaele Calabretta (1,3), Riccardo Galbiati (2), Stefano Nolfi (1) and Domenico Parisi (1) 1 Department of Neural Systems and Artificial Life Institute of Psychology, National Research Council e-mail: raffaele at caio.irmkant.rm.cnr.it 2 Department of Biology, University "Tor Vergata" 3 Centro di Studio per la Chimica del Farmaco, National Research Council Department of Pharmaceutical Studies, University "La Sapienza" Rome, Italy --------------------------------------------------------------------------- Abstract: In nature the genotype of many organisms exhibits diploidy, i.e., it includes two copies of every gene. In this paper we describe the results of simulations comparing the behavior of haploid and diploid populations of ecological neural networks living in both fixed and changing environments.
We show that diploid genotypes create more variability in fitness in the population than haploid genotypes and buffer environmental change better; as a consequence, if one wants to obtain good results for both average and peak fitness in a single population, one should choose a diploid population with an appropriate mutation rate. Some results of our simulations parallel biological findings. Key words: adaptation, diploidy, genetic algorithms, genotype-phenotype mapping, neural networks. _________________________________________________________________ FTP-host: gracco.irmkant.rm.cnr.it FTP-filename: /pub/raffaele/calabretta.diploidy.ps.Z The paper has been placed in the anonymous-ftp archive (see above for ftp-host) and is now available as a compressed postscript file named: calabretta.diploidy.ps.Z Retrieval procedure: unix> ftp gracco.irmkant.rm.cnr.it Name: anonymous Password: {your e-mail address} ftp> cd pub/raffaele ftp> bin ftp> get calabretta.diploidy.ps.Z ftp> quit unix> uncompress calabretta.diploidy.ps.Z e.g. unix> lpr calabretta.diploidy.ps (8 pages of output) The paper is also available on the World Wide Web: http://kant.irmkant.rm.cnr.it/gral.html Comments welcome Raffaele Calabretta e-mail address: raffaele at caio.irmkant.rm.cnr.it From jhowse at squid.lanl.gov Thu Nov 21 14:48:38 1996 From: jhowse at squid.lanl.gov (James Howse) Date: Thu, 21 Nov 96 12:48:38 MST Subject: NIPS*96 Workshop Announcement: Dynamical Recurrent Networks, Day 1 Message-ID: <9611211948.AA19897@squid.lanl.gov> NIPS*96 Workshop Announcement Dynamical Recurrent Networks NIPS*96 Postconference Workshop Day 1 Organized by James Howse and Bill Horne Friday, December 6, 1996 Snowmass, Colorado Workshop Abstract There has been significant interest in recent years in dynamic recurrent neural networks and their application to control, system identification, signal processing, and time series analysis and prediction. Much of this work is simply an extension of techniques which work well for feedforward networks to recurrent networks. However, when dynamics are added to a system there are many complex issues which are not relevant to the study of feedforward nets, such as the existence of attractors and questions of stability, controllability, and observability. In addition, the architectures and learning algorithms that work well for feedforward systems are not necessarily useful or efficient in recurrent systems. The first day of the workshop highlights the use of traditional results from systems theory and nonlinear dynamics to analyze the behavior of recurrent networks. The aim of the workshop is to expose recurrent network designers to the traditional frameworks available in these well established fields. A clearer understanding of the known results and open problems in these fields, as they relate to recurrent networks, will hopefully enable people working with recurrent networks to design more robust systems which can be more efficiently trained. This session will overview known results from systems theory and nonlinear dynamics which are relevant to recurrent networks, discuss their significance in the context of recurrent networks, and highlight open problems. The second day of the workshop addresses the issues of designing and selecting architectures and algorithms for dynamic recurrent networks.
Unlike previous workshops, which have typically focussed on reporting the results of applying specific network architectures to specific problems, this session is intended to assist both users and developers of recurrent networks to select appropriate architectures and algorithms for specific tasks. In addition, this session will provide a backward flow of information -- a forum where researchers can listen to the needs of application developers. The wide variety, rapid development and diverse applications of recurrent networks are sure to make for exciting and controversial discussions. ::::::::::::: Format for Day 1 The format for this session is a series of 30 minute talks with 5 minutes for specific questions, followed by time for open discussion after all of the talks. The talks will give a tutorial overview of traditional results from systems theory or nonlinear dynamics, discuss their relationship to some problem in recurrent neural networks, and then outline unresolved problems related to these results. The discussions will center around possible ways to resolve the open problems, as well as clarifying the understanding of established results. The goal of this session is to introduce more of the NIPS community to ideas from control theory and nonlinear dynamics, and to illustrate the utility of these ideas in analyzing and synthesizing recurrent networks. ::::::::::::: Web Sites for the Workshop Additional information concerning Day 1 can be found at http://flute.lanl.gov/NIS-7_home_pages/jhowse/talk_abstracts.html. Information about Day 2 can be obtained at http://running.dgcd.doc.ca/NIPS96/. ::::::::::::: Schedule for Friday, December 6th Morning Session (7:30-10:30am) Structural Neural Dynamics and Computation Xin Wang Dynamical Recognizers: What Languages Can Recurrent Neural Networks Recognize in Real Time? Cris Moore Decoding Discrete Structures from Fixed Points of Analog Hopfield Networks Arun Jagota Recurrent Networks and Supervised Learning Jennie Si Afternoon Session (4:00-7:00pm) System Theory of Recurrent Networks Eduardo D. Sontag Learning Controllers for Complex Behavioral Systems Shankar Sastry and Lara Crawford Neural Network Verification of Hybrid Dynamical System Stability Michael Lemmon ::::::::::::: Talk Abstracts Title: Structural Neural Dynamics and Computation Author: Xin Wang Xerox Corporation Abstract: Dynamics and computation of neural networks can be regarded as two types of meaning of the mathematical equations that are used to describe the dynamical and computational behaviors of the networks. They are parallel to the operational semantics and denotational semantics of computer programs written in programming languages. Lessons learned in the study of formal semantics, and the impacts of structured programming and object-oriented programming methodologies, tell us that a structural approach has to be taken in order to deal with the complexity in analysis and synthesis caused by large-sized neural networks. This talk will start by presenting some small-sized networks that possess very rich dynamical and bifurcational behaviors, ranging from convergent to chaotic and from saddle to period-doubling bifurcations, and then examine some conditions under which these types of behaviors are preserved by standard constructions such as Cartesian product and cascade. ---------- Title: Dynamical Recognizers: What Languages Can Recurrent Neural Networks Recognize in Real Time?
Author: Cris Moore Computation, Dynamics, and Inference Santa Fe Institute Abstract: There has been considerable interest recently in using recurrent neural networks as dynamical models of language, complementary to the standard symbolic and grammatical approaches. Numerous researchers have shown that RNNs can recognize regular, context-free, and even context-sensitive languages in real time. We place these results in a mathematical framework by treating RNNs with varying activation functions as iterated maps with varying functional forms. We relate the classes of languages recognizable in real time by these different types of RNNs directly to "classical" language classes from computational complexity theory. We prove, for instance, that there are languages recognizable in real time with piecewise-linear or quadratic activations that linear functions cannot recognize, and that there are languages recognizable with exponential or sinusoidal activations that are not recognizable by polynomial activations of any degree. Our methods are essentially identical to the Vapnik-Chervonenkis dimension. We also relate these results to Blum, Shub and Smale's definition of analog computation, as well as Siegelmann and Sontag's. ---------- Title: Decoding Discrete Structures from Fixed Points of Analog Hopfield Networks Author: Arun Jagota Department of Computer Science University of California, Santa Cruz Abstract: In this talk we examine the relationship between the fixed points of certain specialized families of binary Hopfield networks and certain stable regions of their associated analog Hopfield network families. More specifically, consider some specialized family F of binary Hopfield networks whose fixed points have some well-characterized structure. We consider an analog version of the family F obtained by replacing the hard-threshold neurons by sigmoidal ones and replacing the discrete dynamics of the binary model by a continuous one. We ask the question: can discrete structures identical or similar to those that are fixed points in the binary family F be recovered from certain stable regions of the associated analog family? We obtain revealing answers for certain families. Our results lead to a better understanding of the recoverability of discrete structures from stable regions of analog networks. They have applications to solving discrete problems via analog networks. We also discuss many open mathematical problems that our studies reveal. Several of the results were obtained in joint work with Fernanda Botelho and Max Garzon. ---------- Title: Recurrent Networks and Supervised Learning Author: Jennie Si Department of Electrical Engineering Arizona State University Abstract: After several years of adventure, researchers in the field of artificial neural networks have reached a common consensus about what neural networks can do and what their limitations are. In particular, there have been some fundamental results on the existence of artificial neural networks for function approximation and nonlinear dynamic system modeling, and on neural networks for associative memory applications, etc. Some theoretical advances were made in neural networks for control applications, in an adaptive setting. In this talk, the emphasis is given to some recent progress aiming at a quantitative evaluation of neural network performance for some fundamental tasks, e.g., static and dynamic approximation, and computation issues in training neural networks characterized by both memory and computation complexities.
All the above discussions will be based on neural network models representing nonlinear static and dynamic input-output systems as well as state space nonlinear dynamic systems. Further applications of the fundamental neural network theory to simulation-based approximation techniques for nonlinear dynamic programming will also be discussed. This technique may represent an important and practically applicable dynamic programming solution to complex problems that involve the dual curse of large dimensionality and the lack of an accurate mathematical model. ---------- Title: System Theory of Recurrent Networks Author: Eduardo D. Sontag Department of Mathematics Rutgers University Abstract: We consider general recurrent networks. These are described by the differential equations x' = S(Ax + Bu), y = Cx, in continuous time, or the analogous discrete-time version. Here S(.) is a diagonal mapping of the form S(a,b,c,...) = (s(a),s(b),s(c),...), where s(.) is a scalar real map called the "activation" of the network. The vector x represents the state of the system, u is the time-dependent input signal, and y represents the measurements or outputs of the system. Recurrent networks whose activation s(.) is the identity function s(x)=x are precisely the linear systems studied in control theory. It is perhaps an amazing fact that a nontrivial and interesting system theory can be developed for recurrent nets whose activation is the one typically used in neural net practice, s(x)=tanh(x). (One reason that makes this fact surprising is that recurrent nets with this activation are, in a suitable sense, universal approximators for arbitrary nonlinear systems.) This talk will survey recent results by the speaker and several coauthors (Albertini, Dasgupta, Koiran, Koplon, Siegelmann, Sussmann) regarding issues of parameter identifiability, controllability, observability, system approximation, computability, parameter reconstruction, and sample complexity for learning and generalization. We provide simple algebraic tests for many properties, expressed in terms of the "weight" or parameter matrices (A,B,C) that characterize the system. ---------- Title: Learning Controllers for Complex Behavioral Systems Authors: Shankar Sastry and Lara Crawford Electronics Research Laboratory University of California, Berkeley Abstract: Biological control systems routinely guide complex dynamical systems, such as the human body, through complicated tasks, such as running or diving. Conventional control techniques, however, stumble with these problems, which have complex dynamics, many degrees of freedom, and an only partially specified desired task (e.g., "move forward fast," or "execute a one-and-one-half-somersault dive"). To address behaviorally specified problems like these, we are using a biologically inspired, hierarchical control structure, in which network-based controllers learn the controls required at each level of the hierarchy, and no system model is required. The encoding and decoding of the information passed between hierarchical levels, including both controller commands and behavioral feedback, is an important design issue affecting both the size of the controller network needed and the ease with which it can learn; we have used biological encoding schemes for inspiration wherever possible.
For example, the lowest-level controller outputs an encoded torque profile; the encoding is based on the way biological pattern generators for single-joint movements restrict the allowed control torque profiles to a particular parametrized control family. Such an encoding removes all time dependence from the controller's consideration, simplifying the learning task considerably to one of function approximation. The implementation of the controller networks themselves could take several forms, but we have chosen to use radial basis functions, which have some advantages over conventional networks. Through a learning architecture with good encodings for both the controls and the desired behaviors, many of the difficulties in controlling complex behavioral systems can be overcome. In this talk, we apply the control structure described above, with 800-element networks and a form of supervised learning, to the problem of controlling a human diver. The system learns open-loop controls to steer a 16-DOF human model through various dives, including a one-and-one-half somersault pike and a one-and-one-half somersault with a full twist. ---------- Title: Neural Network Verification of Hybrid Dynamical System Stability Author: Michael Lemmon Department of Electrical Engineering University of Notre Dame Abstract: Hybrid dynamical systems (HDS) can occur when a smooth dynamical system is supervised by a discrete-event dynamical system. Such systems are frequently found in computer-controlled systems. A key issue in the development of hybrid system controllers concerns verifying that the system possesses certain generic properties such as safety, stability, and optimality. It has been possible to study the verifiability of restricted classes of hybrid systems. Examples of such systems include switched systems consisting of first-order integrators [Alur et al.], hybrid systems whose "switching" surfaces satisfy certain invariance properties [Lemmon et al.], and planar hybrid systems [Guckenheimer]. The extension of these verification methods to more general systems [Deshpande et al.], however, appears to be computationally intractable. This is due in large part to the complex behaviours that such systems can demonstrate. Simulation experiments with a simple system consisting of switched integrators (relative degree greater than 2) suggest that the $\omega$-limit sets of these systems can be single fixed points, periodic points, or Cantor sets. Neural networks may provide one method for assisting in the analysis of hybrid systems. A neural network can be used to approximate the Poincare map of a switched hybrid system. Such methods can be extremely useful in verifying whether a given HDS exhibits asymptotically stable periodic behaviours. The purpose of this talk is twofold. First, a summary of the principal results and open research areas in hybrid systems will be given. Second, the talk will discuss recent results on the use of neural networks in the verification of hybrid system stability. From thimm at idiap.ch Fri Nov 22 07:54:44 1996 From: thimm at idiap.ch (Georg Thimm) Date: Fri, 22 Nov 1996 13:54:44 +0100 Subject: CFP: Session at KES'97: Knowledge Extraction from and with Neural Networks Message-ID: <199611221254.NAA11805@avoi.idiap.ch> Call for Papers Knowledge Extraction from and with Neural Networks A session organized by G.
Thimm at the First International Conference on Conventional and Knowledge-Based Intelligent Electronic Systems, KES '97 21st - 23rd May 1997, Adelaide, Australia Electronics Association of South Australia (please see below for the KES call for papers) Successfully trained neural networks contain a certain knowledge: applied to formerly unseen data, they often give a correct answer. However, this knowledge is usually difficult to access, although an intelligible qualitative or quantitative representation is of interest in research and development: - The performance on untrained data can be evaluated (in complement to statistical methods). - The extracted knowledge can be used in the development of more efficient algorithms. - The knowledge is of scientific interest. Suggested topics for papers are: - Knowledge extraction techniques from neural networks. - Neural network architectures designed for knowledge extraction. - Applications in which extracted knowledge plays a role. - Ways to represent knowledge extracted from neural networks. - Performance estimation of neural networks using extracted knowledge. - Methods that use knowledge extracted from a neural network (but not the network). Authors are invited (but not required) to point out how the extracted knowledge is used and why neural networks as an intermediate representation of knowledge are advantageous. SUBMISSION OF PAPERS * Papers must be written in English (5 to 10 pages maximum). * Paper presentations are about 20 minutes each, including questions and discussion. * Include the corresponding author with full name, address, telephone and fax numbers, and e-mail address. * Include the presenter's address and his/her 4-line resume for introduction purposes only. * Fax or e-mail copies are not acceptable. * Please submit one original and three copies of the camera-ready paper (A4 size), two-column format in Times or a similar font style, 10 point, with one-inch margins on all four sides, for review to: Georg Thimm IDIAP Rue du Simplon 4 C.P. 592 CH-1920 Martigny Switzerland Email: thimm at idiap.ch Tel: ++41 27 721 77 39 Fax: ++41 27 721 77 12 DEADLINES Receipt of papers for this session - January 15, 1997 Notification of acceptance - February 15, 1997 FURTHER INFORMATION Please look at http://www.idiap.ch/~thimm/KES_ses.html or contact G. Thimm for information on this special session, or http://www.kes97.conf.au for the main conference. ================================================================== FIRST INTERNATIONAL CONFERENCE ON CONVENTIONAL AND KNOWLEDGE-BASED INTELLIGENT ELECTRONIC SYSTEMS, KES '97 21st - 23rd May 1997, Adelaide, Australia Electronics Association of South Australia CALL FOR PARTICIPATION The aim of this conference is to provide an international forum for the presentation of recent results in the general areas of Electronic Systems Design and Industrial Applications and Information Technology. Honorary Chair I. Sethi, WSA, USA Conference Chair L.C. Jain, UniSA, Australia General Chair C. Pay, EASA, Australia Conference Advisor R.P. Johnson, DSTO, Australia Conference Director N.M. Martin, DSTO, Australia Publicity Chair R.K. Jain, UA, Australia Publications Chair G.N. Allen, KES, Australia Austria Liaison Chairs F. Leisch, TUW K. Hornik, TUW Canada Liaison Chair C.W. de Silva, UBC New Zealand Liaison Chair N. Kasabov, UO Korea Liaison Chair J.H. Kim, KAIST Japan Liaison Chairs K. Hirota, TIT T. Tanaka, FIT Y. Sato, HU England Liaison Chair M.J. Taylor, UL France Liaison Chair E. Sanchez, Neurinfo USA Liaison Chairs N. Nayak, IBM Watson C.L.
Karr, UA Poland Liaison Chair J. Kacprzyk, PAS Romania Liaison Chair M. Negoita, GEF Russia Liaison Chair V.I. Neprintsev, VSU Singapore Liaison Chair D. Mital, NTU The Netherlands Liaison Chair W. van Luenen, URL India Liaison Chairs B.S. Sonde, IISc A.N. Bannore, RLT Germany Liaison Chair U. Seiffert, UM Hungary Liaison Chair L.T. Koczy, TUB Italy Liaison Chair G. Guida, UB The conference will consist of plenary sessions, contributory sessions, poster papers, workshops and an exhibition, mainly on the theory and applications of conventional and knowledge-based intelligent systems using: . ARTIFICIAL NEURAL NETS . FUZZY SYSTEMS . EVOLUTIONARY COMPUTING . CHAOS THEORY THE TOPICS OF INTEREST The topics of interest include, but are not limited to: Biomedical engineering; Consumer electronics; Electronic communication systems; Electronic control systems; Electronic production systems; Electronic security; Education and training; Industrial electronics; Knowledge-based intelligent engineering systems using expert systems, neural networks, fuzzy logic, evolutionary programming and chaos theory; Marketing; Mechatronics; Multimedia; Microelectronics; Optical electronics; Sensor technology; Signal processing; Virtual reality. SUBMISSION OF PAPERS * Papers must be written in English (5 to 10 pages maximum). * Paper presentations are about 20 minutes each, including questions and discussion. * Include the corresponding author with full name, address, telephone and fax numbers, and e-mail address. * Include the presenter's address and his/her 4-line resume for introduction purposes only. * Fax or e-mail copies are not acceptable. * Please submit one original and three copies of the camera-ready paper (A4 size), two-column format in Times or a similar font style, 10 point, with one-inch margins on all four sides, for review to: Dr. L.C. Jain, Knowledge-based Intelligent Engineering Systems, School of Electronic Engineering, University of South Australia, Adelaide, The Levels, S.A., 5095, Australia. Tel: 61 8 302 3315 Fax: 61 8 302 3384 E-Mail etLCJ at Levels.UniSA.Edu.Au INVITED LECTURES The conference committee is also soliciting proposals for invited sessions focussing on new or emerging electronic technologies. Researchers, application engineers and managers are invited to submit proposals to Dr L.C. Jain by 31st October 1996. KEY DATES Conference and Exhibition - 22nd and 23rd May 1997 Workshops - 21st May 1997 Conference Dinner and Industry Awards for Excellence - 22nd May 1997 DEADLINES Receipt of papers - 31st December 1996 Receipt of workshop proposals - 31st October 1996 Notification of acceptance - 30th January 1997 REGISTRATION FEE ( ) Conference Early Registration (until 28.2.97) AU$ 300 ( ) Conference Registration (after 28.2.97) AU$ 350 ( ) Conference Early Registration for full-time student (until 28.2.97) AU$ 200 ( ) Conference Registration for full-time student (after 28.2.
97) AU$ 250 ( ) Workshop Registration (AU$ 150 for one workshop) AU$ 150 ( ) Conference Dinner AU$ 65 ______________________________________________________________________ TOTAL AU$ _________ KES '97 REGISTRATION FORM Name: _______________________________________________ Title: ________________________________ Position: ________________ Organisation: _________________________________________________ Address: ________________________________________________________________________ ________________________________________________________________________ Tel: Fax: E-mail: PAYMENT DETAILS Please debit the following account in the amount of $_____________ ( ) Mastercard ( ) Bankcard CARD NUMBER Expiry Date: Name of card holder __________________________________ Signature ______________________ OR . Please send a cheque payable to: EASA Conference Account Conference Secretariat Knowledge-Based Intelligent Engineering Systems University of South Australia Adelaide, The Levels, S.A. 5095 Australia OR . Transfer the amount directly to the Bank Account Name: EASA Conference Account Account Number: 735 - 038 50 - 0833 Westpac Bank 56 O'Connell Street North Adelaide, S.A. 5006 Australia All participants are required to fill in the registration form and register for this Conference. From Tony.Plate at Comp.VUW.AC.NZ Thu Nov 21 19:38:36 1996 From: Tony.Plate at Comp.VUW.AC.NZ (Tony Plate) Date: Fri, 22 Nov 1996 13:38:36 +1300 Subject: Faculty position Message-ID: <199611220038.NAA08761@rialto.comp.vuw.ac.nz> Department of Computer Science Victoria University of Wellington LECTURESHIP in COMPUTER SCIENCE Position No: 634 November 1996 ---------------------------------------------------------------------------- The University invites applications from suitably qualified persons for a lectureship in Computer Science. Applicants should hold a PhD in computer science and show evidence of strong research potential and excellence in teaching. The Department is seeking to appoint within its established research areas of software engineering (including databases), concurrent and distributed systems, and artificial intelligence. The position is permanent, subject to a probationary period. Teaching programmes include the PhD, both research and professional Masters Degrees, and a BSc. The Department has 13 academic staff supported by 6 programming staff, 20-30 graduate students, and about 90 undergraduates per year. Further information about the Department is available at http://www.comp.vuw.ac.nz, or from the chairperson Peter.Andreae at vuw.ac.nz. Victoria University is situated in Wellington, the capital city of New Zealand. The city offers an outstanding combination of recreational and cultural activities. The University has an enrolment of about 10,000 and teaches comprehensive programmes in the sciences, arts, commerce, education, law and architecture. The salary scale for Lecturers is currently NZ$41,820-NZ$49,470 per annum, where there is a bar; then NZ$41,000-NZ$52,530 per annum. Enquiries and applications should be sent to the address below by the closing date of 30 January 1997. Applications should include the following: 1. name 2. address and telephone/fax numbers; email address if applicable 3. academic qualifications 4. present position 5. details of appointments held, with special reference to teaching appointments 6. research experience 7. field in which specially qualified 8. publications, preferably under appropriate headings, e.g., books, articles, monographs
9. names and addresses (fax numbers and/or email addresses if possible) of three persons from whom references can be requested (in addition to naming referees, you can include recent testimonials if you wish) 10. date on which able to commence duties ---------------------------------------------------------------------------- In honouring the Treaty of Waitangi, the University welcomes applications from Tangata Whenua. It also welcomes applications from women, Pacific Island peoples, ethnic minorities, and people with disabilities. ---------------------------------------------------------------------------- Appointments Administrator Human Resources Directorate Victoria University of Wellington PO Box 600, Wellington New Zealand tel: +64 4 495-5272 fax: +64 4 495 5238 Peter.Gargiulo at vuw.ac.nz ------------- From jabri at valaga.salk.edu Sun Nov 24 23:05:56 1996 From: jabri at valaga.salk.edu (Marwan Jabri) Date: Sun, 24 Nov 1996 20:05:56 -0800 Subject: Postdoctoral Research Fellowships Message-ID: <199611250405.UAA01190@lopina.salk.edu> Postdoctoral Research Fellowships (closing date Dec 13, 1996) Neuromorphic Systems Research SEDAL, Department of Electrical Engineering University of Sydney The University of Sydney through its U2000 program has 15 postdoctoral fellowships available for open competition. These fellowships are for recent PhD graduates (PhD awarded within the last five years or about to be awarded) who wish to carry out full-time research at the University of Sydney. The fellowships can be taken at the University's Neuromorphic Systems Research Group, within the Systems Engineering & Design Automation Laboratory (SEDAL) at the Department of Electrical Engineering. The range of current research activities/projects includes: o Modelling of visual, auditory, olfactory and somatosensory pathways o Modelling of the superior colliculus o Optical character recognition o Machine learning o Auditory localization o VLSI implementations of neural systems o Learning on silicon o Biologically inspired signal processing The fellowships are available for a minimum of three years, with a possible further one-year extension. They are available from February 1997 and must be taken up by June 1997. Fellowships carry a salary of $A38,092 to $A40,889 and include the cost of a return airfare to Sydney (fares for dependants, and removal expenses, will not be provided) and an initial setting-up grant for the research project of $A25,000. Applications close in Sydney on 13 December 1996 and decisions about awards will be made in January 1997. If you are interested, please first contact Professor Marwan Jabri (Phone: +61-2-9351 2240 Fax: +61-2-9351 7209 marwan at sedal.usyd.edu.au http://www.sedal.usyd.edu.au/~marwan), from whom an application form can be obtained (postscript or html).
Applicants should complete the application form and provide, in addition, the following information: 1. an up-to-date curriculum vitae 2. a list of publications (distinguishing clearly between full publications and abstracts of papers presented to learned societies) 3. an outline of the proposed research project in no more than two pages 4. the names and addresses of two referees who will be forwarding testimonials 5. overseas applicants must also provide details (names, ages of children) of any family members who will accompany the applicant to Australia, if successful (this information is necessary for the University to assist with visa arrangements) then post the completed application form and additional information/documents to: The Director Research and Scholarships Office The University of Sydney 2006 Australia From robert at fit.qut.edu.au Mon Nov 25 00:02:09 1996 From: robert at fit.qut.edu.au (Robert Andrews) Date: Mon, 25 Nov 1996 15:02:09 +1000 Subject: NIPS*96 Rule Extraction W'Shop Message-ID: <199611250452.OAA14649@sky.fit.qut.edu.au> Tenth Annual Conference on NEURAL INFORMATION PROCESSING SYSTEMS RULE EXTRACTION FROM TRAINED ARTIFICIAL NEURAL NETWORKS Snowmass, CO. Friday Dec 6th 1996 WORKSHOP PROGRAMME 7:00 - 7:30 Rule Refinement and Local Function Networks Robert Andrews (Queensland University of Technology) 7:30 - 8:00 A Systematic Method for Decompositional Rule Extraction From Neural Networks R. Krishnan (Centre for AI and Robotics) 8:00 - 8:30 Rule Extraction as Learning Mark Craven (Carnegie Mellon University) 8:30 - 9:00 MITER: Mutual Information and Template for Extracting Rules Tayeb Nedjari (Institut Galilee, Univ Paris Nord) 9:00 - 9:30 Recurrent Neural Networks and Rule Extraction Lee Giles (NEC Research Institute, Princeton) 9:30 - 10:00 Beyond Finite State Machines: Steps Towards Representing and Extracting Context Free Languages From Recurrent Neural Networks Janet Wiles (University of Queensland) 4:00 - 4:30 Explanation Based Generalisation and Connectionist Systems Joachim Diederich (Queensland University of Technology) 4:30 - 5:00 Law Discovery Using Neural Networks Kazumi Saito (NTT Communication Laboratories) 5:00 - 5:30 A Comparison Between Two Rule Extraction Methods for Continuous Input Data Ishwar Sethi (Wayne State University) 5:30 - 7:00 Panel Discussion Panel Members: R Andrews, J Diederich, L Giles, M Craven, I Sethi, J Wiles From feopper at wicc.weizmann.ac.il Mon Nov 25 01:04:15 1996 From: feopper at wicc.weizmann.ac.il (Opper Manfred) Date: Mon, 25 Nov 1996 08:04:15 +0200 (WET) Subject: preprint Message-ID: <199611250604.IAA69428@wishful.weizmann.ac.il> From mam at kassandra.informatik.uni-dortmund.de Mon Nov 25 10:19:11 1996 From: mam at kassandra.informatik.uni-dortmund.de (Martin Mandischer) Date: Mon, 25 Nov 1996 16:19:11 +0100 Subject: 2nd CFP: Euromicro-Workshop on Computational Intelligence, 97 Message-ID: <199611251519.QAA01037@kassandra.informatik.uni-dortmund.de> SECOND CALL FOR PAPERS ====================== Euromicro-Workshop on Computational Intelligence Budapest, Hungary September 3-4, 1997 CONFERENCE SCOPE ================ Artificial Neural Networks (NN), Fuzzy-Logic Systems (FL), and Evolutionary Algorithms (EA) have been investigated for three decades.
Their broad application, however, had to wait until powerful computers became available within the last decade. Their potential is by no means exhausted today. On the contrary, more and more they are jointly applied to solve hard real-world problems. The term Computational Intelligence has been coined for such combinations, and the first CI World Congress in 1994 witnessed the boom in this challenging field, with the next one scheduled for 1998. This Workshop aims at bringing together developers and users of CI methods in order to enhance the synergetic potential. Besides pure subsymbolic knowledge processing, combinations of symbolic and subsymbolic approaches will also be addressed. Topics of interest include, but are not restricted to: Evolutionary Algorithms: theoretical aspects of evolutionary computation, modifications and extensions to evolutionary algorithms such as evolution strategies, evolutionary programming, and genetic algorithms, recent trends in applications. Fuzzy-Logic: mathematical foundations of fuzzy logic and control, semantics of fuzzy rule bases, fuzzy clustering, decision making, image processing, recent trends in applications. Neural Networks: advances in supervised and unsupervised learning algorithms, constructive algorithms for networks, prediction and non-linear dynamics of networks, pattern recognition, data analysis, associative memory. Combined CI-Methods: evolutionary algorithms for neural networks, fuzzy neural networks, evolutionary optimized fuzzy systems, and any combination with classical as well as novel methods. The Workshop will have just one stream of oral and poster sessions. This will give ample time for open discussions. GENERAL INFORMATION =================== The Workshop will be organized in parallel with the annual Euromicro Conference, the 23rd one to be held in Budapest, the well-known capital of Hungary. Budapest is famous for many reasons and there are many attractions to be visited. Local arrangements and tourist information will be provided in the final program of the Workshop. Joint registration to the Workshop and the 23rd Euromicro Conference will be available. Organizing Chairperson ---------------------- Prof. Ferenc Vajda, Hungarian Academy of Sciences, Budapest, Hungary Program Chairpersons -------------------- Prof. Hans-Paul Schwefel Prof. Bernd Reusch Department of Computer Science University of Dortmund 44221 Dortmund Germany Deputy Program Chairpersons --------------------------- Martin Mandischer, University of Dortmund, Germany Karl-Heinz Temme, University of Dortmund, Germany Programme Committee ------------------- Monica Alderighi (Italy) Thomas Baeck (Germany) Gusz Eiben (The Netherlands) Kenneth De Jong (USA) David Fogel (USA) Ralf Garionis (Germany) Antonio Gonzalez (Spain) Tadeusz Grabowiecki (Poland) Karl Goser (Germany) Lech Jozwiak (The Netherlands) Kurt P. Judmann (Austria) Harro Kiendl (Germany) Hiroaki Kitano (Japan) Erich Peter Klement (Austria) Matti Kurki (Finland) L. Koczy (Hungary) Rudolf Kruse (Germany) Reinhard Maenner (Germany) Bernard Manderick (Belgium) Martin Mandischer (Germany) Stefano Messina (Italy) Zbigniew Michalewicz (USA) Antonio Nunez (Spain) Adam Postula (Australia) Bernd Reusch (Germany) Guenter Rudolph (Germany) Ulrich Rueckert (Germany) Werner von Seelen (Germany) Marc Schoenauer (France) Hans-Paul Schwefel (Germany) Karl-Heinz Temme (Germany) J. L.
Verdegay (Spain) Klaus Waldschmidt (Germany) Lotfi Zadeh (USA) Andreas Zell (Germany) SUBMISSION OF PAPERS ==================== Prospective authors are encouraged to send a PostScript version of their full paper (not exceeding 4000 words in length and including a 150-200 word abstract) through WWW (http://LS11-www.informatik.uni-dortmund.de/EUROMICRO). Alternatively, submissions may be sent as hardcopies by postal mail. In that case, five copies should be sent to one of the program chairpersons. The following statement should be included in the submission: "All necessary clearances have been obtained for the publication of this paper. If accepted, the author(s) will prepare the final camera-ready manuscript in time for inclusion in the proceedings and will personally present the paper at the Workshop." The closing date for submissions is February 1st, 1997. Authors will be notified of acceptance by April 1st, 1997. Camera-ready versions will be required by June 1st, 1997. The proceedings will be published by IEEE Press. Note: We aim at high-quality papers rather than a full program. MORE INFORMATION ================ Information on the Workshop and Euromicro Conference is available through WWW: Workshop: http://LS11-www.informatik.uni-dortmund.de/EUROMICRO Conference: http://www.elet.polimi.it/pub/data/Nello.Scarabottolo/www_docs/em97 Budapest: http://www.fsz.bme.hu/hungary/budapest/budapest.html IMPORTANT DATES =============== Submission of papers: February 1st, 1997 Notification of acceptance: April 1st, 1997 Camera-ready papers due: June 1st, 1997 --------------------------------------------------------- Martin Mandischer Informatik Centrum Dortmund (ICD), Center for Applied Systems Analysis (CASA), Joseph-von-Fraunhofer-Str. 20, 44227 Dortmund / University of Dortmund, Department of Computer Science, 44221 Dortmund, Germany Room 2.73 Phone: 0231-9700-369 Fax: 0231-9700-959 E-Mail: mandischer at LS11.informatik.uni-dortmund.de --------------- WWW: http://LS11-www.informatik.uni-dortmund.de/people/mam/ From moody at chianti.cse.ogi.edu Mon Nov 25 11:56:12 1996 From: moody at chianti.cse.ogi.edu (John Moody) Date: Mon, 25 Nov 96 08:56:12 -0800 Subject: Computational Finance MS Programs at OGI Message-ID: <9611251656.AA12007@chianti.cse.ogi.edu> ======================================================================= COMPUTATIONAL FINANCE at Oregon Graduate Institute of Science & Technology (OGI) Master of Science Concentrations in Computer Science & Engineering (CSE) Electrical Engineering (EE) Now Reviewing MS Applications for Fall 1997. New: Certificate Program Designed for Part-Time Students. For more information, contact OGI Admissions at (503)690-1027 or admissions at admin.ogi.edu, or visit our Web site at: http://www.cse.ogi.edu/CompFin/ Students with an interest in Neural Networks, Machine Learning, or Data Mining are particularly encouraged to apply. ======================================================================= Computational Finance Overview: Advances in computing technology now enable the widespread use of sophisticated, computationally intensive analysis techniques applied to finance and financial markets. The real-time analysis of tick-by-tick financial market data, and the real-time management of portfolios of thousands of securities, are now sweeping the financial industry. This has opened up new job opportunities for scientists, engineers, and computer science professionals in the field of Computational Finance.
Scientists and engineers with training in neural networks, machine learning, nonparametric statistics, and time series analysis are particularly well positioned for doing state-of-the-art quantitative analysis in the financial industry. A number of major financial institutions now use neural networks and related techniques to manage billions of dollars in assets. Curriculum: The strong demand within the financial industry for technically sophisticated graduates is addressed at OGI by the Master of Science and Certificate Programs in Computational Finance. Unlike a standard two-year MBA, OGI's intensive 12-month programs are directed at training scientists, engineers, and technically oriented financial professionals in the area of quantitative finance. The master's programs lead to a Master of Science in Computer Science and Engineering (CSE track) or in Electrical Engineering (EE track). The MS programs can be completed within 12 months on a full-time basis. In addition, OGI has introduced a Certificate program designed to provide professionals in engineering and finance a means of upgrading their skills or acquiring new skills in quantitative finance on a part-time basis. The Computational Finance MS concentrations feature a unique combination of courses that provides a solid foundation in finance at a non-trivial, quantitative level, plus the essential core knowledge and skill sets of computer science or the information technology areas of electrical engineering. These skills are important for advanced analysis of markets and for the development of state-of-the-art investment analysis, portfolio management, trading, derivatives pricing, and risk management systems. The MS in CSE is ideal preparation for students interested in securing positions in information systems in the financial industry, while the MS in EE provides rigorous training for students interested in pursuing careers as quantitative analysts at leading-edge financial firms. In addition to the core courses in Computational Finance, CS, and EE, students can take elective courses in neural networks, nonparametric statistics, pattern recognition, and adaptive signal processing. The curriculum is strongly project-oriented, using state-of-the-art computing facilities and live/historical data from the world's major financial markets provided by Dow Jones Telerate. Students are trained in the use of high-level numerical and analytical software packages for analyzing financial data. OGI has established itself as a leading institution in research and education in Computational Finance. Moreover, OGI has strong research programs in a number of areas that are highly relevant for work in quantitative analysis and information systems in the financial industry. These include neural networks, nonparametric statistics, signal processing, time series analysis, human-computer interaction, database systems, transaction processing, object-oriented programming, and software engineering. ----------------------------------------------------------------------- Admissions ----------------------------------------------------------------------- Applications for entrance into the Computational Finance MS programs for Fall Quarter 1997 are currently being considered.
The deadlines for receipt of applications are: January 15 (Early Decision Deadline, decisions by February 15) March 15 (Final Deadline, decisions by April 15) A candidate must hold a bachelor's degree in computer science, engineering, mathematics, statistics, one of the biological or physical sciences, finance, econometrics, or one of the quantitative social sciences. Candidates who hold advanced degrees in these fields or who have experience in the financial industry are also encouraged to apply. Applications for the Certificate Program are considered on an ongoing basis for entrance in any quarter. ---------------------------------------------------------------------- Contact Information ---------------------------------------------------------------------- For general information and admissions materials: Visit our web site at: http://www.cse.ogi.edu/CompFin/ or contact: Office of Admissions Oregon Graduate Institute P.O. Box 91000 Portland, OR 97291-1000 E-mail: admissions at admin.ogi.edu Phone: (503)690-1027 For special inquiries: E-mail: compfin at cse.ogi.edu ====================================================================== From gorr at willamette.edu Tue Nov 26 17:29:01 1996 From: gorr at willamette.edu (Jenny Orr) Date: Tue, 26 Nov 1996 14:29:01 -0800 (PST) Subject: NIPS 96 Workshop Announcement Message-ID: <199611262229.OAA01916@mirror.willamette.edu> NIPS 96 Workshop Announcement: ============================== Tricks of the Trade Workshop Schedule Friday, December 6, 1996 Snowmass, CO 7:30am-10:30am, 4pm-7pm ---------------------------------------------------------------------------- ORGANIZERS: Jenny Orr Willamette University gorr at willamette.edu Klaus Muller GMD First, Germany klaus at first.gmd.de Rich Caruana Carnegie Mellon caruana at cs.cmu.edu ---------------------------------------------------------------------------- OBJECTIVES: Using neural networks to solve difficult problems often requires as much art as science. Researchers and practitioners acquire, through experience and word-of-mouth, techniques and heuristics that help them succeed. Often these ``tricks'' are theoretically well motivated. Sometimes they're the result of trial and error. In this workshop we ask you to share the ``tricks'' you have found helpful. Our focus will be mainly on regression and classification. For abstracts on talks and other information, see our web page at http://www.willamette.edu/~gorr/nipsws.htm ---------------------------------------------------------------------------- Morning Session 7:30am: Jenny Orr, Welcome 7:35am: Yann LeCun, To be announced. 8:20am: Nicol Schraudolph, Bettering Backprop 8:35am: Martin Schlang, Stable on-line adaptation or initial learning 8:50am: 20-minute discussion and break 9:10am: Larry Yaeger, Reducing A Priori Biases Improves Recognition Accuracy (at the Expense of Classification Accuracy) 9:50am: Steve Lawrence, Neural Network Classification and Unequal Prior Class Probabilities 10:05am: Shumeet Baluja, Sampling Negative Instances by Collecting False Positives 10:15am: Discussion and morning wrap-up ---------------------------------------------------------------------------- Afternoon Session 4:00pm: Hans George Zimmermann, Training of a neural net in time series analysis 4:40pm: Tony Plate, Convergence of hyperparameters in MacKay's Bayesian Backpropagation 4:55pm: Jan Larsen, Design and Regularization of Neural Networks: The Optimal Use of a Validation Set 5:10pm: Renee S.
Renner, Optimization techniques for improving performance and training of a constructive neural network 5:25pm: Patrick van der Smagt, Optimisation in feed-forward neural networks: On conjugate gradient, network size, and local minima 5:40pm: 15-minute discussion and break 5:55pm: David Horn, Optimal Ensemble Averaging of Neural Networks 6:10pm: Timothy X Brown, Jump Connectivity 6:20pm: Chan Lai-Wan, Tricks that make recurrent networks work 6:30pm: Rich Caruana, 101 Fun (and Useful) Things to Do With Extra Outputs 6:45pm: Discussion and workshop wrap-up From atick at monaco.rockefeller.edu Wed Nov 27 09:33:42 1996 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Wed, 27 Nov 1996 09:33:42 -0500 Subject: Network: CNS, November Issue, TOC. Message-ID: <9611270933.ZM20082@monaco.rockefeller.edu> The November issue of Network is now online. Those with institutional subscriptions should be able to access the full journal from their desktop computer. FYI, here is the table of contents. Also, as of the next issue, we will use incremental publishing, so a paper will be available online as soon as it is ready! Be sure to check out the topical review in this issue on ** human colour perception and its adaptation ** by M Webster -- it is one of the most comprehensive reviews in this area. NETWORK: COMPUTATION IN NEURAL SYSTEMS Volume 7 (1996) Issue 4, Pages: 587--758 Editorial: Looking Ahead TOPICAL REVIEW 587 Human colour perception and its adaptation M A Webster PAPERS 635 Neural model of visual stereomatching: slant, transparency and clouds J A Marshall, G J Kalarickal and E B Graves 671 A coupled attractor model of the rodent head direction system A D Redish, A N Elga and D S Touretzky 687 A single spike suffices: the simplest form of stochastic resonance in model neurons M Stemmler 717 Estimate of mutual information carried by neuronal responses from small data samples P Gurzi, G Biella and A Spalvieri 727 Topology selection for self-organizing maps A Utsugi 741 A search for the optimal thresholding sequence in an associative memory H Hirase and M Recce 757 AUTHOR INDEX (with titles), Volume 7 -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422 From linster at berg.HARVARD.EDU Wed Nov 27 13:22:46 1996 From: linster at berg.HARVARD.EDU (Christiane Linster) Date: Wed, 27 Nov 1996 13:22:46 -0500 (EST) Subject: NIPS WS Program Message-ID: WORKSHOP PROGRAM NIPS'96 Postconference Workshop NEURAL MODULATION AND NEURAL INFORMATION PROCESSING Snowmass (Aspen), Colorado USA Friday Dec 6th, 1996 Akaysha Tang Christiane Linster OBJECTIVES Neural modulation is ubiquitous in the nervous system and can provide the neural system with additional computational power that has yet to be characterized. From a computational point of view, the effects of neuromodulation on neural information processing can be far more sophisticated than the simple increased/decreased gain control assumed by many modelers. We would like to bring together scientists from diverse fields of study, including psychopharmacology, behavioral genetics, neurophysiology, neural networks, and computational neuroscience.
We hope, through sessions of highly critical, interactive and interdisciplinary discussions, * to identify the strengths and weaknesses of existing research methodology and practices within each of the fields; * to work out a series of strategies to increase the interactions between experimental and theoretical research; and * to further our understanding of the role of neuromodulation in neural information processing. PROGRAM Morning session 1. Vladimir Brezina, Mt. Sinai School of Medicine "Functional consequences of divergence and convergence in physiological signaling pathways" 2. Sacha Nelson, Brandeis University "Neuromodulation of short-term plasticity at visual cortical synapses" 3. Michael Hasselmo and Christiane Linster, Harvard University "Acetylcholine, Noradrenaline and Memory" 4. Sue Becker, McMaster University "On the Computational Utility of Contextually Modulated Plasticity: A Model, some Empirical Results and Speculations on Cortical Function" Afternoon session 5. Eytan Ruppin, Tel Aviv University "Synaptic Runaway in Associative Networks and The Pathogenesis of Schizophrenic Psychosis" 6. Dennis Feeney, University of New Mexico "Late Noradrenergic Pharmacotherapy Promotes Functional Recovery After Cortical Injury" 7. Thomas Brozoski, Grinnell College "Forebrain Dopamine Fluctuations During Pavlovian Conditioning" 8. Terrence Sejnowski, Salk Institute "Dopamine and Temporal-Difference Learning in the Basal Ganglia" From moore at santafe.edu Wed Nov 27 18:09:03 1996 From: moore at santafe.edu (Cris Moore) Date: Wed, 27 Nov 1996 16:09:03 -0700 (MST) Subject: preprint available Message-ID: A preprint is available, entitled: Predicting Non-linear Cellular Automata Quickly by Decomposing them into Linear Ones C. Moore and T. Pnin http://www.santafe.edu/~moore ftp://ftp.santafe.edu/pub/moore/semi.ps Abstract: We show that a wide variety of non-linear cellular automata (CAs) can be decomposed into a {\em quasidirect product} of linear ones. These CAs can be predicted by parallel circuits of depth $\ord(\log^2 t)$ using gates with binary inputs, or $\ord(\log t)$ depth if ``sum mod $p$'' gates with an unbounded number of inputs are allowed. Thus these CAs can be predicted by (idealized) parallel computers much faster than by explicit simulation, even though they are non-linear. This class includes any CA whose rule, when written as an algebra, is a solvable group. We also show that CAs based on nilpotent groups can be predicted in depth $\ord(\log t)$ or $\ord(1)$ by circuits with binary or ``sum mod $p$'' gates respectively. We use these techniques to give an efficient algorithm for a CA rule which, like elementary CA rule 18, has diffusing defects that annihilate in pairs. This can be used to predict the motion of defects in rule 18 in $\ord(\log^2 t)$ parallel time. - Cris moore at santafe.edu From brossmann at firewall.sni-usa.com Wed Nov 27 16:05:22 1996 From: brossmann at firewall.sni-usa.com (Frank Brossmann) Date: Wed, 27 Nov 1996 16:05:22 -0500 Subject: Job interview at NIPS'96 for Advanced Technologies Group, Siemens Nixdorf Information Systems, Inc. Message-ID: <199611272105.QAA07127@passer.sni-usa.com> SIEMENS NIXDORF INFORMATION SYSTEMS, INC.
ADVANCED TECHNOLOGIES GROUP NEURAL NETWORK BASED MODELING OPPORTUNITY IN RESEARCH / CONSULTING / MARKETING During NIPS'96, we will be interviewing for a challenging position at the newly founded startup enterprise, based in Boston, MA. The successful applicant for this job should have a Master's, Ph.D. or MBA in computer science, mathematics, statistics, physics, economics, finance, or a related field. Experience in using statistical artificial intelligence such as neural networks and other approximation and modeling methods, as well as interest in finance, economics, and their underlying structural processes, are crucial. The applicant should be willing to work in close collaboration with other group members, should be full of creativity, enthusiasm and flexibility, and should also enjoy traveling to conferences and customers for consulting purposes. In the first phase, we will develop some case studies in finance, marketing, and Internet-related applications using the advanced simulator platform SENN (Software Environment for Neural Networks). This engaging work will leave space for the applicant's ideas and creativity. It will focus on solutions for real-world problems. The possibility of collaboration with Prof. Andreas Weigend, New York (Stern School of Business, NYU), and Dr. Georg Zimmermann, Munich (Siemens AG Corporate Research and Development) exists. Subsequent opportunities include advising top US companies in their data modeling efforts, as well as introducing state-of-the-art research and its software implementation to the relevant communities. In the longer run, we also expect a significant contribution to the development and implementation of a sales and marketing strategy for SENN. We expect the applicant to start working on January 1, 1997. If you are interested, please contact us at NIPS'96: Georg Zimmermann, Ralph Neuneier, Michiaki Taniguchi, or Martin Schlang. If you are not attending NIPS'96, please email your resume by December 5 to: - brossmann at sni-usa.com (plain text or MS Word). Frank Brossmann, Director Advanced Technologies Group, Siemens Nixdorf Information Systems, Inc. 200 Wheeler Road Burlington, MA 01803 Email: brossmann at sni-usa.com From back at zoo.riken.go.jp Wed Nov 27 23:03:29 1996 From: back at zoo.riken.go.jp (Andrew Back) Date: Thu, 28 Nov 1996 13:03:29 +0900 (JST) Subject: NIPS*96 Workshop - Blind Signal Processing Message-ID: Dear colleagues, Please find attached the schedule for the NIPS*96 workshop on Blind Signal Processing. Further information, including abstracts and some papers, is available on the workshop WWW homepage: http://www.bip.riken.go.jp/absl/back/nips96ws/nips96ws.html Andrzej Cichocki Andrew Back ---------------------------------------------------------------------------- NIPS*96 Workshop Blind Signal Processing and Their Applications (Neural Information Processing Approaches) Saturday Dec 7, 1996 Snowmass (Aspen), Colorado Workshop Organizers: Andrzej Cichocki Andrew D. Back Brain Information Processing Group Frontier Research Program RIKEN, The Institute of Physical and Chemical Research Hirosawa 2-1, Wako-shi, Saitama, 351-01, Japan Phone: +81-48-462-1111 ext: 6733 Fax: +81-48-462-4633 Email: cia at hare.riken.go.jp back at zoo.riken.go.jp Blind Signal Processing is an emerging area of research in neural networks and image/signal processing with many potential applications. It originated in France in the late 80's and since then there has continued to be a strong and growing interest in the field.
Blind signal processing problems can be classified into three areas: (1) blind signal separation of sources and/or independent component analysis (ICA), (2) blind channel identification, and (3) blind deconvolution and blind equalization. These areas will be addressed in this workshop. See the objectives below for further details. ---------------------------------------------------------------------------- Objectives The main objectives of this workshop are to: Give presentations by experts in the field on the state of the art in this exciting area of research. Compare the performance of recently developed adaptive unsupervised learning algorithms for neural networks. Discuss issues surrounding prospective applications and the suitability of current neural network models. Hence we seek to provide a forum for better understanding the current limitations of neural network models. Examine issues surrounding local, online adaptive learning algorithms and their robustness and biological plausibility or justification. Discuss issues concerning effective computer simulation programs. Discuss open problems and perspectives for future research in this area. Especially, we intend to discuss the following items: 1. Criteria for blind separation and blind deconvolution problems (both for time and frequency domain approaches) 2. Natural (or relative) gradient approach to blind signal processing. 3. Neural networks for blind separation of time-delayed and convolved signals. 4. On-line adaptive learning algorithms for blind signal processing with variable learning rate (learning of learning rate). 5. Open problems, e.g., dynamic on-line determination of the number of sources (more sources than sensors), influence of noise, robustness of algorithms, stability, convergence, identifiability, non-causal, non-stationary dynamic systems. 6. Applications in different areas of science and engineering, e.g., non-invasive medical diagnosis (EEG, ECG), telecommunication, voice recognition problems, image processing and enhancement. ---------------------------------------------------------------------------- Workshop Schedule 7:30-7:50 A Review of Blind Signal Processing: Results and Open Issues Andrzej Cichocki and Andrew Back Brain Information Processing Group, Frontier Research Program RIKEN, Japan 7:50-8:10 Natural Gradient in Blind Separation and Deconvolution - Information Geometrical Approach Shun-ichi Amari Brain Information Processing Group, Frontier Research Program RIKEN, Japan 8:10-8:30 Entropic Contrasts for Blind Source Separation Jean-Francois Cardoso Ecole Nationale Superieure des Telecommunications, Paris, France 8:30-8:40 Coffee Break/Discussion Time 8:40-9:00 Several Theorems on Information Theoretic Independent Component Analysis Lei Xu, J.
Ruan and Shun-ichi Amari The Chinese University of Hong Kong, Hong Kong Brain Information Processing Group, FRP, RIKEN, Japan 9:00-9:20 From Neural PCA to Neural ICA Erkki Oja, Juha Karhunen and Aapo Hyvarinen Helsinki University of Technology, Finland 9:20-9:40 Local Adaptive Algorithms and their Convergence Analysis for Decorrelation and Blind Equalization/Deconvolution Scott Douglas and Andrzej Cichocki Department of EE, University of Utah, USA FRP RIKEN, Japan 9:40-10:00 Negentropy and Kurtosis as Projection Pursuit Indices Provide Generalized ICA Algorithms Mark Girolami and Colin Fyfe The University of Paisley, Scotland 10:00-10:15 Bussgang Methods for Separation of Multipath Mixtures Russell Lambert Dept of Electrical Engineering University of Southern California, USA 10:15-10:30 Discussion Time 4:00-4:20 Blind Signal Separation by Output Decorrelation Dominic C.B. Chan, Simon J. Godsill and Peter J.W. Rayner University of Cambridge, United Kingdom 4:20-4:40 Temporal Decorrelation Using Teacher Forcing Anti-Hebbian Learning and its Application in Adaptive Blind Source Separation Jose C. Principe, Chuan Wang, and Hsiao-Chun Wu University of Florida, USA 4:40-5:00 A Direct Adaptive Blind Equalizer for Multi-Channel Transmission Seungjin Choi and Ruey-wen Liu University of Notre Dame, USA 5:00-5:10 Coffee Break/Discussion Time 5:10-5:30 IIR Filters for Blind Deconvolution Using Information Maximization Kari Torkkola Motorola Phoenix Corporate Research, USA 5:30-5:40 Information Maximization and Independent Component Analysis: Is there a difference? D. Obradovic and G. Deco Siemens AG, Corporate Research and Development, Germany 5:40-5:50 Convergence Properties of Cichocki's Extension of the Herault-Jutten Source Separation Neural Network Yannick Deville Laboratoires d'Electronique Philips S.A.S. (LEP) France 5:50-6:10 Independent Component Analysis of EEG and ERP Data Tzyy-Ping Jung, Scott Makeig, Anthony J. Bell and Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute, CNL, USA 6:10-6:20 Blind separation of delayed and convolved sources - the problem Tony Bell and Te-Won Lee Computational Neurobiology Laboratory The Salk Institute, CNL, USA 6:20-6:30 Information Back-propagation for Blind Separation of Sources from Non-linear Mixture Howard H. Yang, Shun-ichi Amari and Andrzej Cichocki Brain Information Processing Group, FRP, RIKEN, Japan 6:30-7:00 Discussion Time -- From janetw at cs.uq.edu.au Thu Nov 28 03:44:08 1996 From: janetw at cs.uq.edu.au (Janet Wiles) Date: Thu, 28 Nov 1996 18:44:08 +1000 (EST) Subject: 6-month Postdoc Available - AUSTRALIA Message-ID: 6-MONTH RESEARCH FELLOWSHIP (POSTDOC LEVEL) Cognitive Science Group, Department of Computer Science University of Queensland, Brisbane, Australia, 4072. TOPIC : Recurrent neural networks and language processing This short-term position is to study computational properties of neural networks in language processing. The ideal candidate will have a strong background in neural networks or dynamical systems theory and preferably have some experience with linguistics or formal language theory. Good programming skills and familiarity with Unix will be expected. The position is available for six months during 1997. Salary will be at the level of a University of Queensland postdoctoral position (around A$38,000). Relocation expenses to and from Brisbane cannot be provided on this grant.
To apply for this position, please send your CV and the names of three referees to Dr J Wiles at the address below, or email janetw at cs.uq.edu.au ------------------------------------------------------------- Dr Janet Wiles _-_|\ Director of the Cognitive Science Program / * Depts of Computer Science and Psychology \_.-._/ The University of Queensland v Brisbane QLD 4072 AUSTRALIA http://psy.uq.oz.au/CogPsych/home.html ------------------------------------------------------------- From maass at igi.tu-graz.ac.at Thu Nov 28 07:53:50 1996 From: maass at igi.tu-graz.ac.at (Wolfgang Maass) Date: Thu, 28 Nov 96 13:53:50 +0100 Subject: new paper on the effect of analog noise in neural computation Message-ID: <199611281253.AA03322@figids03.tu-graz.ac.at> The following paper is now available for copying from http://www.math.jyu.fi/~orponen/papers/noisyac.ps The paper has 19 pages. "On the Effect of Analog Noise in Discrete-Time Analog Computations" by Wolfgang Maass, Inst. for Theor. Comp. Sci., Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria (maass at igi.tu-graz.ac.at), and Pekka Orponen, Department of Mathematics, University of Jyvaskyla, P.O. Box 35, Jyvaskyla, Finland (orponen at math.jyu.fi). Abstract: We introduce a model for analog noise in analog computations with discrete time that is flexible enough to cover the most important concrete cases, such as analog noise in sigmoidal neural nets and networks of spiking neurons. The noise model can also be applied to cases where there are dependencies among the noise sources, and to hybrid analog/digital systems. In contrast to previous models for noise in analog computations (which demand that the output of the computation has to be 100% reliable), we assume that the output of a noisy analog computation has to be correct only with a certain probability (which may be chosen to be very high). We believe that this convention is more adequate for the analysis of "real world" analog computations. In addition, this convention is consistent with the common models for noisy digital computations in computational complexity theory. We show that under very general conditions the presence of analog noise reduces the power of analog computational models to that of a finite automaton, and we exhibit bounds for the number of states of such a finite automaton. We also prove a new type of upper bound for the VC-dimension of computational models with analog noise. In the case of a feedforward sigmoidal neural net this bound does not depend on the total number of units in the net. An extended abstract of this paper will appear in the Proceedings of NIPS '96.
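To make the reliability convention in the abstract above concrete, here is a minimal Python sketch. It is not taken from the paper: the two-unit net, its weights, the time horizon, and the noise level are all invented for illustration. It iterates a discrete-time sigmoidal net under clipped Gaussian analog noise and estimates the probability that the noisy computation reproduces the noiseless output bit:

  import numpy as np

  rng = np.random.default_rng(0)

  # Hypothetical two-unit discrete-time sigmoidal net; W, b, T and sigma
  # are made up for illustration, not taken from the paper.
  W = np.array([[1.2, -0.7], [0.5, 0.9]])
  b = np.array([0.1, -0.2])

  def output_bit(x0, T, sigma):
      # Iterate x <- tanh(W x + b) + noise for T steps, then read out
      # the sign of unit 0 as the computation's one-bit output.
      x = np.array(x0, dtype=float)
      for _ in range(T):
          x = np.tanh(W @ x + b)
          if sigma > 0.0:
              # additive analog noise, clipped so the state stays in [-1, 1]^2
              x = np.clip(x + rng.normal(0.0, sigma, size=x.shape), -1.0, 1.0)
      return int(x[0] >= 0.0)

  reference = output_bit([0.3, -0.4], T=20, sigma=0.0)   # noiseless answer
  trials = [output_bit([0.3, -0.4], T=20, sigma=0.05) for _ in range(2000)]
  reliability = sum(t == reference for t in trials) / len(trials)
  print("noiseless bit:", reference, "empirical reliability:", reliability)

Under the convention described in the abstract, such a computation counts as acceptable as long as the estimated probability exceeds a fixed threshold chosen in advance, rather than having to equal 1.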
Topics in these areas are: Architectures and Learning Algorithms: Computational Learning Theory Learning Algorithms Function Approximation Approaches to Model Selection Density Estimation Model validation and verification Comparisons with classical techniques. Applications: Vision and Robotics, Speech and Language Processing, Monitoring Complex systems such as Engines Medical diagnostics Financial Systems Modelling Advances in Implementation: Parallel Hardware, Software Systems Contributions: Papers reporting original research related to the above areas are invited. A selection will be made based on originality of the contribution and clarity of presentation. In addition to oral presentations, a selection of papers submitted will be presented as posters. Poster presentations will be seen as offering better interaction with participants. Please indicate in the reply slip if you would prefer to present your submission as a poster. It is expected that papers presented in this form will be linked to particular sessions and the Chairmen of the Session will give a brief introduction to the posters. Prospective authors are required to make their submissions in the form of complete draft papers of up to six pages by the deadline (see below). A clear summary and full details for corresponding with the authors (including email) should be provided. Authors of accepted papers will be required to provide a camera-ready version of their full paper of up to six pages. A LaTeX style file and/or instructions on the layout will be made available nearer the time. Tutorial on "An Introduction to Neural Computing Applications and Techniques" On the morning of Monday, 7 July 1997, there will be a half-day tutorial session prior to the formal opening of the Conference. Tutorials will cover basic concepts in neural computing and their applications. Deadlines: Intending authors should note the following dates: Closing date for the submission of full papers Friday, 20 December 1996 Notification of acceptance February 1997 Deadline for any revised papers Friday, 14 March 1997 Venue: The Conference will be held at Churchill College, Cambridge. Cambridge is a city in the English countryside that is world-renowned for its University. There are now some thirty colleges, the earliest, Peterhouse, having been founded by the Bishop of Ely in 1284; and their rich architectural heritage is there to be enjoyed. Today Cambridge combines its academic heritage with the atmosphere of a modern business and tourist centre. There are varied aspects of the city to be enjoyed, from the historic colleges to the bustling market square: browsing along the busy shopping streets, punting on the River Cam, visiting the University museums, or taking in the delights of the surrounding landscape. Committee: Dr M Niranjan, University of Cambridge (Chairman), Dr J Austin, University of York Dr S Collins, Defence Research Agency Dr P H Cowley, Rolls Royce Applied Science Laboratories Professor C J Harris, University of Southampton Dr I T Nabney, Aston University Dr S Olafsson, BT Laboratories Corresponding Members (To be confirmed) Professor F Fogelman-Soulié SLIGOS, France Dr C L Giles NEC Research Institute, USA Dr R M Goodman Caltech, USA Dr D McMichael CSSIP, Australia Dr M Plumbley NEuroNet (Kings College, UK) Professor U Rückert University of Paderborn, Germany Dr C J Satchwell Aragorn Systems Ltd, Monaco Organisers The Computing and Control Division of the IEE.
The following organisations have been invited to co-sponsor the event:

    The British Computer Society
    British Machine Vision Association
    Department of Trade and Industry - Electronics and Engineering Division
    Engineering and Physical Sciences Research Council - Informatics Division
    European Neural Networks Society
    EURASIP
    The Institute of Mathematics and its Applications
    The Institute of Physics
    Neural Computing Applications Forum

PAPER FORMAT

Authors wishing to contribute to the Conference should send an original
and four copies of each paper by 20 December 1996 to the ANN97
Secretariat. Papers should be submitted on A4 sheets in a two-column
format. Each column is 6.69 cm in width, with a spacing of 1.27 cm
between columns. Text should be typed in 10 point Times-Roman font with
single line spacing. The first page should incorporate (first line) the
title, (double line space) the author(s), (double line space) and the
affiliations. Only six pages are allowed per paper (inclusive of
figures, tables, etc.). Please contact the ANN97 Secretariat should you
require a sample of the required layout.

If presented in good quality, suitable for camera-ready copying,
successful contributions will be published from the original copy
submitted in the first instance. Otherwise, authors will be requested
to re-submit. All contributions will be published as part of the IEE
Conference Proceedings series and will be available to all delegates at
the event.

EXHIBITION

It is proposed to organise an exhibition in conjunction with the
Conference. For further details please contact the ANN 97 Secretariat.

_______________________________________________________________________

Please complete in BLOCK capitals and return to: ANN97 Secretariat, IEE
Conference Services, Savoy Place, London WC2R 0BL, UK.
Fax: +44 (0)171 240 8830
Email: nashley at iee.org.uk OR joconnell at iee.org.uk

I am interested in the ANN 97 conference and require ... copies of the
provisional programme and registration form.

I wish to offer a contribution provisionally entitled:
.......................................................
.........................................................

which falls within topic area number: ..................

    * 1. Vision                * 2. Speech
    * 3. Control and robotics  * 4. Biomedical
    * 5. Financial and business
    * 6. Signal processing     * 7. Radar/sonar
    * 8. Data fusion           * 9. Analogue
    * 10. Digital              * 11. Optical
    * 12. Learning algorithms  * 13. Network architectures
    * 14. Functional approximations
    * 15. Statistical methods  * 16. None of the above

I wish to offer a contribution in the form of
    ( ) an oral presentation
    ( ) a poster presentation

I require further details on
    ( ) Scholarship
    ( ) Exhibition
    ( ) Tutorial

Personal details:
    Name:
    Organisation:
    Address:
    Postcode/Zipcode:
    Country:
    Telephone:
    Fax:
    Email:

========================================================================

From mcasey at volen.brandeis.edu  Fri Nov 29 19:18:18 1996
From: mcasey at volen.brandeis.edu (Mike Casey)
Date: Fri, 29 Nov 1996 19:18:18 -0500 (EST)
Subject: new paper on the effect of analog noise in neural computation
In-Reply-To: <199611281253.AA03322@figids03.tu-graz.ac.at>
Message-ID:

Dear Connectionists,

It seems that the paper "On the Effect of Analog Noise in Discrete-Time
Analog Computations" by Wolfgang Maass and Pekka Orponen has
misrepresented the work in my recent Neural Computation article "The
Dynamics of Discrete-Time Computation, with Application to Recurrent
Neural Networks and Finite State Machine Extraction" (Volume 8, number
6, pp. 1135-1178, 1996).
Maass and Orponen repeatedly take credit for the following result:

> We show that under very general conditions the presence of analog
> noise reduces the power of analog computational models to that of a
> finite automaton....

This was shown in Corollary 3.1 of my paper. Maass and Orponen fail to
mention this in their paper.

My goal in the NC paper was to investigate the representational powers
of RNNs, and to do so I included analog noise to avoid unrealistic
assumptions and conclusions about computation in physical systems
(including physically instantiated RNNs). The model of analog noise
that I used was originated by the mathematician Rufus Bowen and
includes the model in Maass and Orponen's paper as a special case. So
the discussion in their paper about the generality of their model of
analog noise, as opposed to those previously used, is based on a
misinterpretation of Bowen's pseudo-orbit formalism, which makes no
assumptions about the distribution of the noise process beyond its
boundedness (which Maass and Orponen also assume in their paper).

If you would like to verify these claims, my paper is available in
Neural Computation or on the web at

    http://eliza.ccs.brandeis.edu/people/mcasey/papers.html

Thanks for your time and attention.

Best regards,
Mike

********************************************************************
Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA 02254
email: mcasey at volen.brandeis.edu
http://eliza.cc.brandeis.edu/people/mcasey
(617) 736-3144 (voice)
(617) 736-3142 (fax)

From mcasey at volen.brandeis.edu  Sat Nov 30 20:00:10 1996
From: mcasey at volen.brandeis.edu (Mike Casey)
Date: Sat, 30 Nov 1996 20:00:10 -0500 (EST)
Subject: analog noise
In-Reply-To: <199611300118.AA05387@figids03.tu-graz.ac.at>
Message-ID:

Regarding the result that an RNN with noise has only finite state
machine power: I completely agree that their result is a nice extension
of the corollary that I proved (which I recently found to be a
conjecture/assumption of Turing in his 1936 paper), but they failed to
mention that I had already proved something very similar. It seemed
that a discussion of the type posted here belonged in the paper.

> In addition we have also relaxed the definition of analog noise in
> our model to include also Gaussian noise etc, but this is perhaps
> a less essential point.

This is still incorrect. Their "clipped" Gaussian noise is a special
case of bounded noise with an arbitrary distribution (Bowen's
pseudo-orbit formalism), so there is no sense in which they "relaxed"
the definition of analog noise. If the state space is bounded, and the
noise is clipped to keep the state of the system in the state space,
then there is no loss of generality in using bounded noise.
Furthermore, none of their results depend on the noise being unbounded
(so even if it were unbounded, it wouldn't lead to anything
interesting). Finally, in section 4 of their paper, where they
concretely discuss RNNs performing computations, they assume that the
noise is bounded and that the computation is done with perfect
reliability (precisely the assumptions that I used, and which they have
spent so much time discrediting in other parts of the paper).

I sincerely apologize for taking up more bandwidth with this
discussion. I hope that it is of some interest to the community. Any
further discussion on my part will take place off-line.
Best regards,
Mike

********************************************************************
Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA 02254
email: mcasey at volen.brandeis.edu
http://eliza.cc.brandeis.edu/people/mcasey
(617) 736-3144 (voice)
(617) 736-3142 (fax)
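As a footnote to the exchange above, here is a minimal Python sketch
(the one-dimensional dynamics, noise level and bound beta are
illustrative assumptions, not taken from either paper) of the point at
issue: Gaussian noise clipped to [-beta, beta], applied to a state kept
inside a bounded state space, produces trajectories satisfying
|x(t+1) - f(x(t))| <= beta, i.e. exactly the bounded-perturbation
pseudo-orbits of Bowen's formalism, which constrains only the bound and
not the distribution of the perturbation:

    import math
    import random

    BETA = 0.05   # bound on the per-step perturbation (illustrative)

    def f(x):
        """Noise-free one-step dynamics: a sigmoidal self-loop on [0, 1]."""
        return 1.0 / (1.0 + math.exp(-(8.0 * x - 4.0)))

    def clipped_gauss(sigma, beta):
        """Gaussian noise clipped to [-beta, beta]: a special case of
        bounded noise with an otherwise arbitrary distribution."""
        return max(-beta, min(beta, random.gauss(0.0, sigma)))

    def noisy_orbit(x0, steps, sigma):
        xs = [x0]
        for _ in range(steps):
            # clip the state back into [0, 1]; since f maps [0, 1] well
            # inside (0, 1), this clipping never pushes the deviation
            # from f(x) beyond BETA
            xs.append(min(1.0, max(0.0, f(xs[-1]) + clipped_gauss(sigma, BETA))))
        return xs

    random.seed(0)
    orbit = noisy_orbit(0.2, 20, sigma=0.1)
    worst = max(abs(x1 - f(x0)) for x0, x1 in zip(orbit, orbit[1:]))
    print(f"largest one-step deviation from f: {worst:.4f} (bound {BETA})")

Every run reports a largest one-step deviation at most BETA, whatever
the shape of the underlying noise distribution.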