From Connectionists-Request at cs.cmu.edu Sun Sep 1 00:05:18 1996
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Sun, 01 Sep 96 00:05:18 -0400
Subject: Bi-monthly Reminder
Message-ID: <10311.841550718@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994.

This is an automatically posted bi-monthly reminder about how the
CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is a moderated forum for enlightened technical
discussions and professional announcements. It is not a random
free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is
restricted to persons actively involved in neural net research. The
following posting guidelines are designed to reduce the number of
irrelevant messages sent to the list.

Before you post, please remember that this list is distributed to
thousands of busy people who don't want their time wasted on trivia.
Also, many subscribers pay cash for each kbyte; they shouldn't be
forced to pay for junk mail.

-- Dave Touretzky & Lisa Saksida

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical
  issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and
  tech reports.

- Conferences and workshops may be announced on this list AT MOST
  twice: once to send out a call for papers, and once to remind
  non-authors about the registration deadline. A flood of repetitive
  announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly
  sensitive subject. Please try to (a) demonstrate that you have
  already pursued the quick, obvious routes to finding the information
  you desire, and (b) give people something back in return for
  bothering them.
The easiest way to do both these things is to FIRST do the library work
to find the basic references, then POST these as part of your query.
Here's an example:

WRONG WAY: "Can someone please mail me all references to cascade
correlation?"

RIGHT WAY: "I'm looking for references to work on cascade correlation.
I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract,
corresponded with him directly and retrieved the code in the nn-bench
archive. Is anyone aware of additional work with this algorithm? I'll
summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to
Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other
  administrative matters should be sent to:
  "Connectionists-Request at cs.cmu.edu" (note the exact spelling:
  many "connectionists", one "request"). If you mention our mailing
  list to someone who may apply to be added to it, please make sure
  they use the above and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe
  to CONNECTIONISTS should be sent to postmaster at appropriate-site.
  If the site address is unknown, send your request to
  Connectionists-Request at cs.cmu.edu and we'll do our best to help.
  A phone call to the appropriate institution may sometimes be simpler
  and faster.

- Note that in many mail programs a reply to a message is automatically
  "CC"-ed to all the addresses on the "To" and "CC" lines of the
  original message. If the mailer you use has this property, please
  make sure your personal response (request for a Tech Report etc.) is
  NOT broadcast over the net.
-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting
27-Feb-88 are now available for public perusal. A separate file exists
for each month. The files' names are:

  arch.yymm

where yymm stands for the obvious thing. Thus the earliest available
data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress
program. To browse through these files (as well as through other files,
see below) you must FTP them to your local machine. The file "current"
in the same directory contains the archives for the current month.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with password your username.
3. 'cd' directly to the following directory:

     /afs/cs/project/connect/connect-archives

The archive directory is the ONLY one you can access. You can't even
find out whether any other directories exist. If you are using the 'cd'
command you must cd DIRECTLY into this directory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".
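The arch.yymm naming convention above can be sketched in a couple of
lines of shell; the yy/mm values here are just the February 1988
example from the text, not a file you are guaranteed to find:

```shell
# Sketch: build a Connectionists archive filename (arch.yymm) for a
# given month, following the naming convention described above.
yy=88   # two-digit year
mm=02   # two-digit month
archive="arch.${yy}${mm}"
echo "${archive}"
```

Substituting the current year and month gives the name of the newest
monthly file (the in-progress month itself lives in "current").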
-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following url:

  http://www.cs.cmu.edu/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52)
pub/neuroprose directory

This directory contains technical reports as a public service to the
connectionist and neural network scientific community, which has an
organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints in this
directory and announce availability, and other interested researchers
can rapidly retrieve and print the postscripts. This saves copying,
postage and handling, by having the interested reader supply the paper.
We strongly discourage the merger into the repository of existing
bodies of work, or the use of this medium as a vanity press for papers
which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to
pollack at cis.ohio-state.edu. Within a couple of days, I will move
and protect it, and suggest a different name if necessary. The current
naming convention is author.title.filetype.Z, where title is just
enough to discriminate among the files of the same author. The filetype
is usually "ps" for postscript, our desired universal printing format,
but may be tex, which requires more local software than a spooler. The
Z indicates that the file has been compressed by the standard unix
"compress" utility, which results in the .Z affix. To place or retrieve
.Z files, make sure to issue the FTP command "BINARY" before
transferring files. After retrieval, call the standard unix
"uncompress" utility, which removes the .Z affix. An example of placing
a file is in the appendix.
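The author.title.filetype.Z convention can be illustrated in shell; the
author and title below are hypothetical placeholders (in the spirit of
the appendix's myname.title.ps), not real files in the archive:

```shell
# Sketch of the Neuroprose naming convention (author.title.filetype.Z).
# The author/title values are hypothetical placeholders.
author=myname
title=shorttitle
filetype=ps
paper="${author}.${title}.${filetype}"
compressed="${paper}.Z"       # the name "compress ${paper}" would produce
restored="${compressed%.Z}"   # "uncompress" strips the .Z affix again
echo "${compressed}"
echo "${restored}"
```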
Make sure your paper is single-spaced, so as to save paper, and include
an INDEX entry, consisting of 1) the filename, 2) the email contact for
problems, 3) the number of pages and 4) a one sentence description. See
the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make
copies of their paper. Before announcing, have a friend at another
institution retrieve and print the file, so as to avoid easily found
local postscript library errors. And let the community know how many
pages to expect on their printer. Finally, information about where the
paper will/might appear is appropriate inside the paper as well as in
the announcement.

In the subject line of your mail message, rather than "paper available
via FTP," please indicate the subject or title, e.g. "paper available:
Solving Towers of Hanoi with ART-4".

Please add two lines to your mail header, or the top of your message,
so as to facilitate the development of mailer scripts and macros which
can automatically retrieve files from both NEUROPROSE and other
lab-specific repositories:

  FTP-host: archive.cis.ohio-state.edu
  FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it
automatically forwarded to other groups, like NEURON-DIGEST (which gets
posted to comp.ai.neural-networks), and whether you want to provide (B)
free or (C) prepaid hard copies for those unable to use FTP. To prevent
forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of
your file. If you do offer hard copies, be prepared for a high cost.
One author reported that when they allowed combination AB, the rattling
around of their "free paper offer" on the worldwide data net generated
over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the
directory, and can perform the necessary retrieval operations, given
the file name.
Functions for GNU Emacs RMAIL, and other mailing systems, will also be
posted as debugged and available. At any time, for any reason, the
author may request that their paper be updated or removed.

For further questions contact:

  Jordan Pollack
  Associate Professor
  Computer Science Department
  Center for Complex Systems
  Brandeis University
  Waltham, MA 02254
  Phone: (617) 736-2713/* to fax
  email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is
the INDEX entry:

myname.title.ps.Z mylogin at my.email.address 12 pages.
A random paper which everyone will want to read

Let me know when it is in place so I can announce it to Connectionists
at cmu.
^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE
FILE, HE DOES THE FOLLOWING:

unix> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/myname.title.ps.Z

The file myname.title.ps.Z is now available for copying from the
Neuroprose repository:

  Random Paper (12 pages)
  Somebody Somewhere
  Cornell University

ABSTRACT: In this unpublishable paper, I generate another alternative
to the back-propagation algorithm which performs 50% better on learning
the exclusive-or problem.
~r.signature
^D

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine
   "ftp.cs.cmu.edu" (128.2.254.155).

2. Log in as user "anonymous" with password your username.

3. Change remote directory to "/afs/cs/project/connect/bench". Any
   subdirectories of this one should also be accessible. Parent
   directories should not be. Another valid directory is
   "/afs/cs/project/connect/code", where we store various supported and
   unsupported neural network simulators and related software.

4. At this point FTP should be able to get a listing of files in this
   directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".

From denni at bordeaux.cse.ogi.edu Sun Sep 1 01:08:34 1996
From: denni at bordeaux.cse.ogi.edu (Thorsteinn Rognvaldsson)
Date: Sat, 31 Aug 96 22:08:34 -0700
Subject: Smoothing Regularizers for PBF NN
Message-ID: <9609010508.AA20633@bordeaux.cse.ogi.edu>

New tech report available:

SMOOTHING REGULARIZERS FOR PROJECTIVE BASIS FUNCTION NETWORKS

By: JOHN E. MOODY & THORSTEINN S. ROGNVALDSSON
Dept. of Computer Science and Engineering
Oregon Graduate Institute of Science and Technology
P.O. Box 91000
Portland, Oregon 97291-1000, U.S.A.
Emails: moody at cse.ogi.edu
        denni at cse.ogi.edu
(Direct correspondence to Prof. Moody)

---------
Abstract:

Smoothing regularizers for radial basis functions have been studied
extensively, but no general smoothing regularizers for PROJECTIVE BASIS
FUNCTIONS (PBFs), such as the widely-used sigmoidal PBFs, have
heretofore been proposed. We derive new classes of algebraically simple
m:th-order smoothing regularizers for networks of projective basis
functions. Our simple algebraic forms enable the direct enforcement of
smoothness without the need for, e.g., costly Monte Carlo integrations
of the smoothness functional.
We show that our regularizers are highly correlated with the values of
standard smoothness functionals, and thus suitable for enforcing
smoothness constraints on PBF networks. The regularizers are tested on
illustrative sample problems and compared to quadratic weight decay.
The new regularizers are shown to yield better generalization errors
than weight decay when the implicit assumptions in the latter are
wrong. Unlike weight decay, the new regularizers distinguish between
the roles of the input and output weights and capture the interactions
between them.

--------------------------------------------------
Instructions for retrieving your own personal copy:

WWW: http://www.cse.ogi.edu/~denni/publications.html

FTP: % ftp neural.cse.ogi.edu (username=anonymous, password=your email)
     > cd pub/neural/papers/
     > get moodyRogn96.smooth_long.ps.Z
     > quit
     % uncompress moodyRogn96.smooth_long.ps.Z
     % lpr moodyRogn96.smooth_long.ps (assumes you have a UNIX system)

From back at zoo.riken.go.jp Sun Sep 1 18:58:10 1996
From: back at zoo.riken.go.jp (Andrew Back)
Date: Mon, 2 Sep 1996 07:58:10 +0900 (JST)
Subject: NIPS'96 Workshop - Blind Signal Processing
Message-ID:

CALL FOR PAPERS

NIPS'96 Postconference Workshop
BLIND SIGNAL PROCESSING AND THEIR APPLICATIONS
(Neural Information Processing Approaches)

Snowmass (Aspen), Colorado USA
Sat Dec 7th, 1996

A. Cichocki and A. Back
Brain Information Processing Group
Frontier Research Program
RIKEN, Institute of Physical and Chemical Research,
Hirosawa 2-1, Saitama 351-01, WAKO-Shi, JAPAN
Email: cia at zoo.riken.go.jp, back at zoo.riken.go.jp
Fax: (+81) 48 462 4633.
URL: http://zoo.riken.go.jp/bip.html

Blind Signal Processing is an emerging area of research in neural
networks and image/signal processing with many potential applications.
It originated in France in the late 80's, and since then there has been
a strong and growing interest in the field.
Blind signal processing problems can be classified into three areas:
(1) blind signal separation of sources and/or independent component
analysis (ICA), (2) blind channel identification, and (3) blind
deconvolution and blind equalization.

OBJECTIVES

The main objectives of this workshop are to:

- Give presentations by experts in the field on the state of the art in
  this exciting area of research.
- Compare the performance of recently developed adaptive unsupervised
  learning algorithms for neural networks.
- Discuss issues surrounding prospective applications and the
  suitability of current neural network models. Hence we seek to
  provide a forum for better understanding the current limitations of
  neural network models.
- Examine issues surrounding local, online adaptive learning algorithms
  and their robustness and biological plausibility or justification.
- Discuss issues concerning effective computer simulation programs.
- Discuss open problems and perspectives for future research in this
  area.

In particular, we intend to discuss the following items:

1. Criteria for blind separation and blind deconvolution problems (both
   for time and frequency domain approaches)
2. Natural (or relative) gradient approach to blind signal processing.
3. Neural networks for blind separation of time delayed and convolved
   signals.
4. On-line adaptive learning algorithms for blind signal processing
   with variable learning rate (learning of learning rate).
5. Open problems, e.g. dynamic on-line determination of the number of
   sources (more sources than sensors), influence of noise, robustness
   of algorithms, stability, convergence, identifiability, non-causal,
   non-stationary dynamic systems.
6. Applications in different areas of science and engineering, e.g.,
   non-invasive medical diagnosis (EEG, ECG), telecommunication, voice
   recognition problems, image processing and enhancement.

WORKSHOP FORMAT

The workshop will be 1 day in length, combining invited expert speakers
with significant group discussion time.
We will open up the workshop in a moderated way. The intent here is to
permit a free-flowing but productive discourse on the topics relevant
to this area. Participants will be encouraged to consider the
implications of the current findings in their own work, and to raise
questions accordingly. We invite and encourage potential participants
to "come prepared" for open discussions.

SUBMISSION OF WORKSHOP EXTENDED ABSTRACTS

If you would like to contribute, please send an abstract or extended
summary as soon as possible to:

  Andrew Back
  Laboratory for Artificial Brain Systems,
  Frontier Research Program
  RIKEN, Institute of Physical and Chemical Research,
  Hirosawa 2-1, Saitama 351-01, WAKO-Shi, JAPAN
  Email: back at zoo.riken.go.jp
  Phone: (+81) 48 467 9629
  Fax: (+81) 48 462 4633.

Manuscripts may be sent in by email (in postscript format), air mail or
by fax.

Important Dates:

  Submission of abstract deadline: 16 September, 1996
  Notification of acceptance:      1 October, 1996
  Final paper to be sent by:       30 October, 1996

A set of workshop notes will be produced. For accepted papers to be
included in the notes, papers accepted for presentation will need to be
supplied to us by the due date of 30 Oct, 1996. For the format of
papers, the usual NIPS style file should be used, with up to 16 pages
allowed.
Please contact the workshop organizers for further information, or
consult the NIPS WWW home page:

  http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From maggini at sun1.ing.unisi.it Mon Sep 2 03:23:09 1996
From: maggini at sun1.ing.unisi.it (Marco Maggini)
Date: Mon, 2 Sep 1996 09:23:09 +0200
Subject: NIPS'96 Postconference Workshop
Message-ID: <199609020723.JAA08804@ultra1>

=================================================================
CALL FOR PAPERS

NIPS'96 Postconference Workshop
-----------------------------------------------------------------
ANNs and Continuous Optimization: Local Minima, Sub-optimal
Solutions, and Computational Complexity
-----------------------------------------------------------------

Snowmass (Aspen), Colorado USA
Fri Dec 6th, 1996

M. Gori                           M. Protasi
Universita' di Siena              Universita' Tor Vergata (Roma)
Universita' di Firenze            protasi at utovrm.it
marco at mcculloch.ing.unifi.it

http://www-dsi.ing.unifi.it/neural

Most ANNs used for either learning or problem solving (e.g. Backprop
nets and analog Hopfield nets) rely on continuous optimization. The
elegance and generality of this approach, however, also seems to be the
main source of the troubles that typically arise when approaching
complex problems. Most of the time this creates suspicion about the
actual chance of discovering an optimal solution under reasonable
computational constraints. The computational complexity of the problem
at hand seems to manifest itself in the local minima and numerical
problems of the chosen optimization algorithm. While most practitioners
tend to accept this suspicion without reluctance and take pride in
their eventual experimental achievements, most theoreticians remain
quite skeptical.
Obviously, the success of ANNs for either learning or problem solving
often depends on the problem at hand; one can therefore expect
excellent behavior on one class of problems while harboring serious
doubts about the solution of others. To the best of our knowledge,
however, this intuitive idea so far has no satisfactory theoretical
explanation. Basically, there is neither a theory that naturally
supports the intuitive notion of suspicion, nor a theory that relates
this notion to computational complexity.

The study of the complexity of algorithms has been carried out
essentially on discrete structures, and an impressive theory has been
developed in the last two decades. Optimization theory, on the other
hand, has a twofold face: discrete optimization and continuous
optimization. There are some important approaches to computational
complexity theory that were proposed for the continuous case, for
instance information-based complexity (Traub) and real Turing machines
(Blum-Shub-Smale). These approaches can be fruitfully applied to
problems arising in continuous optimization but, generally speaking,
the formal study of the efficiency of algorithms and problems has
received much more attention in the discrete setting, where the theory
can be easily used and error and precision problems are not present.

===========================================
DISCUSSION POINTS FOR WORKSHOP PARTICIPANTS
===========================================

Taking this framework into account, a fascinating area that we believe
deserves careful study concerns the relationship between the emergence
of sub-optimal solutions in continuous optimization and the
computational complexity of the problem at hand. More specifically,
there are a number of very intriguing open questions:

- Is there a relationship between the complexity of algorithms in the
  continuous and discrete settings for solving the same problem?
- Can we deduce bounds on the complexity of discrete algorithms from
  the study of the properties of continuous ones, and vice versa?

- Some loading problems are intuitively easy to solve, while others are
  considered hard. Are there links between the presence of local minima
  and the computational complexity of the problem at hand?

- What is the impact of approximate solutions on the complexity? (e.g.
  learning is inherently an approximate process whose complexity is
  often studied in the framework of theories like PAC)

==========
OBJECTIVES
==========

The aim of the workshop is not to reach definite conclusions, but to
start a discussion and to put on the table some of the most important
themes that we hope can be extensively explored in the future. We also
expect that the study of the interplay between continuous and discrete
versions of a problem can be very fruitful for both approaches. Since
this interplay has rarely been explored until now, the workshop is
likely to stimulate different points of view; people working on
discrete optimization are in fact likely not to be expert on the
continuous side, and vice versa.

=========================================
SUBMISSION OF WORKSHOP EXTENDED ABSTRACTS
=========================================

If you would like to contribute, please send an abstract or extended
summary to:

  Marco Gori
  Facolta' di Ingegneria
  Universita' di Siena
  Via Roma, 56
  53100 Siena (Italy)
  Fax: +39 577 26.36.02

Electronic submission: send manuscripts in postscript format to
marco at mcculloch.ing.unifi.it

Important Dates:

  Submission of abstract deadline: 30 September, 1996
  Notification of acceptance:      21 October, 1996
  Final paper to be sent by:       4 November, 1996

For the format of papers, the usual NIPS style file should be used,
with up to 16 pages allowed. Workshop notes will be produced.
NIPS style files are available at

  http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/formatting/nips.sty
  http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/formatting/nips.tex
  http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/formatting/nips.ps

Please contact the workshop organizers for further information, or
consult the NIPS WWW home page:

  http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From pfbaldi at cco.caltech.edu Mon Sep 2 19:58:40 1996
From: pfbaldi at cco.caltech.edu (Pierre Baldi)
Date: Mon, 2 Sep 1996 16:58:40 -0700 (PDT)
Subject: TR available: Bayesian Methods and Compartmental Modeling
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/baldi.comp.tar.Z

The file baldi.comp.tar.Z is now available for copying from the
Neuroprose repository:

ON THE USE OF BAYESIAN METHODS FOR EVALUATING COMPARTMENTAL NEURAL
MODELS

(40 pages = 35 pages + 5 figures)
(one figure is in color but should print OK in black and white)

P. Baldi, M. C. Vanier, and J. M. Bower
Department of Computation and Neural Systems
Caltech

ABSTRACT: In this TR, we provide a tutorial on Bayesian methods for
neurobiologists, as well as an application of the methods to
compartmental modeling. We first derive prior and likelihood functions
for compartmental neural models and for spike trains. We then apply the
full Bayesian inference machinery to parameter estimation and model
comparison in the case of simple classes of compartmental models, with
three and four conductances. We also perform class comparison by
approximating integrals over the entire parameter space. Advantages and
drawbacks are discussed.

Sorry - no hard copies available.
From surmeli at ipe.et.uni-magdeburg.de Tue Sep 3 02:56:19 1996
From: surmeli at ipe.et.uni-magdeburg.de (Dimitrij Surmeli)
Date: Tue, 03 Sep 1996 08:56:19 +0200
Subject: Job: GRA in neural nets for control; Magdeburg, Germany
Message-ID: <322BD693.3D70@ipe.et.uni-magdeburg.de>

Job announcement:

The Institute of Measurement and Electronics of the
Otto-von-Guericke-University, Magdeburg, Germany has an opening for a
Research Assistant in the 'Innovationskolleg ADAMES' as of 1 October
1996.

The project is investigating distributed neural network applications
for ADAptive MEchanical Systems (ADAMES), i.e. signal identification,
data compression and control systems. This is one area in a
multi-disciplinary project involving actively deformable mechanical
systems.

Desired qualifications include a finished BSc/MSc and experience in
neural network applications, control theory, image processing, and
programming (all in Unix and Win). Helpful: Matlab experience, hardware
design, CNAPS neurocomputer experience, analog-digital-analog
conversion.

Working language: German. Compensation depending on qualification on
the BAT II-O scale. The position is suitable for research leading to a
PhD.

Informal inquiries re: ADAMES to surmeli at ipe.et.uni-magdeburg.de

Formal application including CV, cover letter, etc to:

  Prof. B. Michaelis
  Otto-von-Guericke-Universitaet Magdeburg
  Fakultaet fuer Elektrotechnik
  Institut fuer Prozessmesstechnik und Elektronik
  Am Kroekentor 2
  39106 Magdeburg
  Germany
  tel. +49 391 671 4645
  fax  +49 391 561 6368
  email michaelis at ipe.et.uni-magdeburg.de
  http://pmt05.et.uni-magdeburg.de/TI/TI.html

--
Dimitrij Surmeli
surmeli at ipe.et.uni-magdeburg.de
Anybody got a good name for a neurocomputer CNAPS 512?

From payman at u.washington.edu Tue Sep 3 21:43:19 1996
From: payman at u.washington.edu (Payman Arabshahi)
Date: Tue, 3 Sep 96 18:43:19 -0700
Subject: CFP: 1997 Computational Intelligence in Financial Eng.
CIFEr
Message-ID: <9609040143.AA28874@saul3.u.washington.edu>

IEEE/IAFE 1997

[CIFEr logo]

Visit us on the web at http://www.ieee.org/nnc/cifer97

------------------------------------
Call for Papers
------------------------------------

Conference on Computational Intelligence for Financial Engineering
(CIFEr)

Crowne Plaza Manhattan, New York City
March 23-25, 1997

Sponsors: The IEEE Neural Networks Council, The International
Association of Financial Engineers

The IEEE/IAFE CIFEr Conference is the third annual collaboration
between the professional engineering and financial communities, and is
one of the leading forums for new technologies and applications in the
intersection of computational intelligence and financial engineering.
Intelligent computational systems have become indispensable in
virtually all financial applications, from portfolio selection to
proprietary trading to risk management.

------------------------------------
Conference Topics
------------------------------------

Topics in which papers, panel sessions, and tutorial proposals are
invited include, but are not limited to, the following:

Financial Engineering Applications:

* Risk Management
* Pricing of Structured Securities
* Asset Allocation
* Trading Systems
* Forecasting
* Hedging Strategies
* Risk Arbitrage
* Exotic Options

Computer & Engineering Applications & Models:

* Neural Networks
* Probabilistic Modeling/Inference
* Fuzzy Systems and Rough Sets
* Genetic and Dynamic Optimization
* Intelligent Trading Agents
* Trading Room Simulation
* Time Series Analysis
* Non-linear Dynamics

------------------------------------------------------------------------------
Instructions for Authors, Special Sessions, Tutorials, & Exhibits
------------------------------------------------------------------------------

All summaries and proposals for tutorials, panels and special sessions
must be received by the conference Secretariat at Meeting Management by
November 15, 1996. Our intention is to publish a book with the best
selection of accepted papers.

Authors (For Conference Oral Sessions)

One copy of the Extended Summary (not exceeding four pages of 8.5 inch
by 11 inch size) must be received by Meeting Management by November 15,
1996. Centered at the top of the first page should be the paper's
complete title, author name(s), affiliation(s), and mailing
address(es). Fonts no smaller than 10 pt should be used. Papers must
report original work that has not been published previously, and is not
under consideration for publication elsewhere. In the letter
accompanying the submission, the following information should be
included:

* Topic(s)
* Full title of paper
* Corresponding Author's name
* Mailing address
* Telephone and fax
* E-mail (if available)
* Presenter (If different from corresponding author, please provide
  name, mailing address, etc.)

Authors will be notified of acceptance of the Extended Summary by
January 10, 1997. Complete papers (not exceeding seven pages of 8.5
inch by 11 inch size) will be due by February 14, 1997, and will be
published in the conference proceedings.

----------------------------------------------------------------------------
Special Sessions

A limited number of special sessions will address subjects within the
topical scope of the conference. Each special session will consist of
from four to six papers on a specific topic.
Proposals for special sessions will be submitted by the session
organizer and should include:

* Topic(s)
* Title of Special Session
* Name, address, phone, fax, and email of the Session Organizer
* List of paper titles with authors' names and addresses
* One page of summaries of all papers

Notification of acceptance of special session proposals will be on
January 10, 1997. If a proposal for a special session is accepted, the
authors will be required to submit a camera ready copy of their paper
for the conference proceedings by February 14, 1997.

----------------------------------------------------------------------------
Panel Proposals

Proposals for panels addressing topics within the technical scope of
the conference will be considered. Panel organizers should describe, in
two pages or less, the objective of the panel and the topic(s) to be
addressed. Panel sessions should be interactive with panel members and
the audience and should not be a sequence of paper presentations by the
panel members. The participants in the panel should be identified. No
papers will be published from panel activities. Notification of
acceptance of panel session proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Tutorial Proposals

Proposals for tutorials addressing subjects within the topical scope of
the conference will be considered. Proposals for tutorials should
describe, in two pages or less, the objective of the tutorial and the
topic(s) to be addressed. A detailed syllabus of the course contents
should also be included. Most tutorials will be four hours, although
proposals for longer tutorials will also be considered. Notification of
acceptance of tutorial proposals will be on January 10, 1997.
----------------------------------------------------------------------------

Exhibit Information

Businesses with activities related to financial engineering, including
software & hardware vendors, publishers and academic institutions, are
invited to participate in CIFEr's exhibits. Further information about the
exhibits can be obtained from the CIFEr secretariat, Barbara Klemm.

----------------------------------------------------------------------------

Contact Information

More information on registration and the program will be provided as soon
as it becomes available. For further details, please contact:

Barbara Klemm
CIFEr'97 Secretariat
Meeting Management
IEEE/IAFE Computational Intelligence for Financial Engineering
2603 Main Street, Suite # 690
Irvine, California 92714

Tel: (714) 752-8205 or (800) 321-6338
Fax: (714) 752-7444
Email: Meetingmgt at aol.com
Web: http://www.ieee.org/nnc/cifer97

Sponsors

Sponsorship for CIFEr'97 is being provided by the IAFE (International
Association of Financial Engineers) and the IEEE Neural Networks Council.
The IEEE (Institute of Electrical and Electronics Engineers) is the world's
largest engineering and computer science professional non-profit association
and sponsors hundreds of technical conferences and publications annually.
The IAFE is a professional non-profit financial association with members
worldwide specializing in new financial product design, derivative
structures, risk management strategies, arbitrage techniques, and the
application of computational techniques to finance.

----------------------------------------------------------------------------

Payman Arabshahi
CIFEr'97 Organizational Chair
Dept. of Electrical Eng., Box 352500
University of Washington
Seattle, WA 98195

Tel: (206) 644-8026
Fax: (206) 543-3842
Email: payman at ee.washington.edu

----------------------------------------------------------------------------

From abonews at playfair.Stanford.EDU Wed Sep 4 18:52:54 1996
From: abonews at playfair.Stanford.EDU (Art Owen News)
Date: Wed, 4 Sep 1996 15:52:54 -0700
Subject: TR Available: Computer Experiments (noise free prediction)
Message-ID: <199609042252.PAA05650@tukey.Stanford.EDU>

Address: http://playfair.stanford.edu/reports/owen
File: main.ps for uncompressed PostScript
      main.ps.Z for compressed PostScript
Authors: J. Koehler and A. Owen

The above article is a survey paper on methods for computer experiments.
These are noise-free, usually continuous-valued prediction problems,
sometimes motivated by optimization problems in computer-aided design.
Without noise, how does one estimate the uncertainty in a given answer? A
Bayesian approach places a process prior on the underlying function. An
emerging frequentist approach samples the input space at random and
propagates the sampling error.

-Art Owen
art at playfair.stanford.edu

Replies should be sent to art at playfair, not abonews, where they might
get lost among mailing-list mail. My co-author is Jim Koehler:
EMAIL: jkoehler at carbon.cudenver.edu
http://www-math.cudenver.edu/~jkoehler

From rosen at dragon.cs.utsa.edu Wed Sep 4 20:27:23 1996
From: rosen at dragon.cs.utsa.edu (Bruce)
Date: Wed, 4 Sep 1996 19:27:23 -0500
Subject: Paper announcements
Message-ID: <199609050027.TAA02780@tachy.cs.utsa.edu>

*** Paper Announcements ***

The following paper is now available from my research page:
http://www.cs.utsa.edu/faculty/rosen/rosen.html
Comments are welcomed.

-----------------------------------------------------------------------

Ensemble Learning using Decorrelated Neural Networks

Bruce E.
Rosen

ftp://ringer.cs.utsa.edu/pub/rosen/decorrelate.ps.Z

We describe a decorrelation network training method for improving the
quality of regression learning in ``ensemble'' neural networks that are
composed of linear combinations of individual neural networks. In this
method, individual networks are trained by backpropagation not only to
reproduce a desired output, but also to have their errors linearly
decorrelated with those of the other networks. Outputs from the individual
networks are then linearly combined to produce the output of the ensemble
network. We demonstrate the performance of decorrelated network training on
learning the ``3 Parity'' logic function, a noisy sine function, and a
one-dimensional nonlinear function, and compare the results with those of
ensemble networks composed of independently trained individual networks
(without decorrelation training). Empirical results show that when
individual networks are forced to be decorrelated with one another, the
resulting ensemble neural networks have lower mean squared errors than
ensembles of independently trained networks. This method is particularly
applicable when there is insufficient data to train each individual network
on disjoint subsets of training patterns.

To appear in: Connection Science, Special issue on Combining Estimators.

From aonishi at bpe.es.osaka-u.ac.jp Thu Sep 5 04:52:20 1996
From: aonishi at bpe.es.osaka-u.ac.jp (Toru Aonishi)
Date: Thu, 5 Sep 1996 17:52:20 +0900
Subject: Paper Announcements
Message-ID: <199609050852.RAA13861@fsunc.bpe.es.osaka-u.ac.jp>

*** Paper Announcements ***

The following two papers on analysis of the dynamic link architecture are
now available from my FTP site.
ftp://ftp.bpe.es.osaka-u.ac.jp/pub/FukushimaLab/Papers/aonishi

Comments/suggestions welcome,

-Toru Aonishi (aonishi at bpe.es.osaka-u.ac.jp)

===========================================================================

A Phase Locking Theory of Matching between Rotated Images by a Dynamic Link
Architecture (Submitted to Neural Computation)

Toru AONISHI, Koji KURATA and Takeshi MITO

Pattern recognition invariant to deformation or translation can be
performed with the dynamic link architecture proposed by von der Malsburg.
The dynamic link architecture has been applied effectively to several
engineering problems, but has not yet been analyzed mathematically. We
propose two models of the dynamic link architecture, both of which are
mathematically tractable. The first model can perform matching between
rotated images. The second model can do the same, and can additionally
detect common parts in a template image and a data image. To analyze these
models mathematically, we reduce each model's equation to a phase equation,
revealing the mathematical principle behind the rotation-invariant matching
process. We also carry out computer simulations to verify the mathematical
theories involved.

FTP-host: ftp.bpe.es.osaka-u.ac.jp
FTP-pathname: /pub/FukushimaLab/Papers/aonishi/rotation_dy.ps.gz
URL: ftp://ftp.bpe.es.osaka-u.ac.jp/pub/FukushimaLab/Papers/aonishi/rotation_dy.ps.gz

30 pages; 238Kb compressed.

===========================================================================

Deformation Theory of Dynamic Link Architecture
(Submitted to Neural Computation)

Toru AONISHI, Koji KURATA

The dynamic link is a self-organizing topographic mapping formed between a
template image and a data image. The mapping tends to be continuous,
linking pairs of points that share similar local features, which can, as a
result, deform the mapping to some degree.
Analyzing this deformation mathematically, we reduce the model equation to
a phase equation, which clarifies the principles of the deformation process
and the relation between high-dimensional models and low-dimensional ones.
It also elucidates the characteristics of the model in the context of
standard regularization theory.

FTP-host: ftp.bpe.es.osaka-u.ac.jp
FTP-pathname: /pub/FukushimaLab/Papers/aonishi/deform_dy.ps.gz
URL: ftp://ftp.bpe.es.osaka-u.ac.jp/pub/FukushimaLab/Papers/aonishi/deform_dy.ps.gz

15 pages; 112Kb compressed.

===========================================================================

From ruppin at math.tau.ac.il Fri Sep 6 08:31:24 1996
From: ruppin at math.tau.ac.il (Eytan Ruppin)
Date: Fri, 6 Sep 1996 15:31:24 +0300 (GMT+0300)
Subject: CFP:-Modeling-Brain-Disorders
Message-ID: <199609061231.PAA14712@gemini.math.tau.ac.il>

CALL FOR SUBMISSIONS

Special Issue of the Journal "Artificial Intelligence in Medicine"
(Published by Elsevier)

Theme: COMPUTATIONAL MODELING OF BRAIN DISORDERS

Guest-Editors: Eytan Ruppin & James A. Reggia
(Tel-Aviv University) (University of Maryland)

BACKGROUND

As computational methods for brain modeling have advanced during the last
several years, there has been increasing interest in adopting them to study
brain disorders in neurology, neuropsychology, and psychiatry. Models of
Alzheimer's disease, epilepsy, aphasia, dyslexia, Parkinson's disease,
stroke and schizophrenia have recently been studied to obtain a better
understanding of the underlying pathophysiological processes. While
computer models have the disadvantage of simplifying the underlying
neurobiology and pathophysiology, they also have remarkable advantages:
they permit precise and systematic control of the model variables, and an
arbitrarily large number of ``subjects''.
They are also open to detailed inspection, in isolation, of the influence
of various metabolic and neural variables on disease progression, in the
hope of gaining insight into why observed behaviors occur. Ultimately, one
seeks a model sufficiently powerful that it can be used to suggest new
pharmacological interventions and rehabilitative actions.

OBJECTIVE OF SPECIAL ISSUE

The objective of this special issue on modeling brain disorders is to
report on recent studies in this field. The main goal is to increase the
awareness of the AI medical community of this research, currently performed
primarily by members of the neural networks and `connectionist' community.
By bringing together a series of such brain-disorder modeling papers, we
strive to produce a contemporary overview of the kinds of problems and
solutions that this growing research field has generated, and to point to
promising future research directions. More specifically, papers are
expected to cover one or more of the following topics:

-- Specific neural models of brain disorders, expressing the link between
   their pathogenesis and clinical manifestations.

-- Computational models of pathological alterations in basic neural,
   synaptic and metabolic processes that may relate significantly to the
   generation of brain disorders.

-- Applications of neural networks that shed light on the pathogenic
   processes underlying brain disorders, or explore their temporal
   evolution and clinical course.

-- Methodological issues involved in constructing computational models of
   brain disorders: obtaining sufficient data, visualizing high-dimensional
   complex behavior, and testing and validating these models.

-- Bridging the apparent gap between functional imaging investigations and
   current neural modeling studies, arising from their distinct
   spatio-temporal resolutions.

SCHEDULE

All submitted manuscripts will be subject to a rigorous review process.
The special issue will include 5 papers of 15-20 pages each, plus an
editorial. Manuscripts should be prepared in accordance with the journal's
submission guidelines, which are available on request and may also be
retrieved from http://www.math.tau.ac.il/~ruppin.

November 15, 1996   Submission of tentative title and abstract to declare
                    intention to submit a paper. This should be done
                    electronically, to ruppin at math.tau.ac.il.

March 15, 1997      Receipt of full papers. Three copies of a manuscript
                    should be sent to:

                    Eytan Ruppin
                    Department of Computer Science
                    School of Mathematics
                    Tel-Aviv University
                    Tel-Aviv, Israel, 69978.

August 1, 1997      Notification of acceptance

October 1, 1997     Receipt of final versions of manuscripts

June 1998           Publication of AIM special issue

From Friedrich.Leisch at ci.tuwien.ac.at Mon Sep 9 08:35:35 1996
From: Friedrich.Leisch at ci.tuwien.ac.at (Friedrich Leisch)
Date: Mon, 9 Sep 1996 14:35:35 +0200
Subject: UPDATE: CI BibTeX Database Collection
Message-ID: <199609091235.OAA00048@meriadoc.ci.tuwien.ac.at>

The BibTeX database collection at the Vienna Center for Computational
Intelligence has been updated:

New: Advances in Neural Information Processing Systems 8 (NIPS'95)
     IEEE Transactions on NN 7/4
     Neural Computation 8/1-8/6
     Neural Networks 9/5

URL: http://www.ci.tuwien.ac.at/docs/ci/bibtex_collection.html

The Vienna Center for Computational Intelligence maintains BibTeX databases
for a growing number of CI-related journals and conference proceedings
(with emphasis on neural networks). All BibTeX files use a unified key
format, so citing from our BibTeX files is easy.

WE ARE ALWAYS LOOKING FOR NEW ENTRIES TO THIS ARCHIVE AND APPRECIATE ALL
CONTRIBUTIONS. TAKE A LOOK AT THE ABOVE WEBSITE FOR DETAILS.
Best,
Fritz Leisch

From arthur at mail4.ai.univie.ac.at Mon Sep 9 13:04:10 1996
From: arthur at mail4.ai.univie.ac.at (Arthur Flexer)
Date: Mon, 9 Sep 1996 19:04:10 +0200 (MET DST)
Subject: TR: Limitations of SOM
Message-ID: <199609091704.TAA02653@milano.ai.univie.ac.at>

Dear colleagues,

the following report is available via my personal WWW-page:

http://www.ai.univie.ac.at/~arthur/
as
ftp://ftp.ai.univie.ac.at/papers/oefai-tr-96-23.ps.Z

Sorry, there are no hardcopies available; comments are welcome!

Sincerely, Arthur Flexer

-----------------------------------------------------------------------------
Arthur Flexer                                      arthur at ai.univie.ac.at
Austrian Research Inst. for Artificial Intelligence     +43-1-5336112(Tel)
Schottengasse 3, A-1010 Vienna, Austria                 +43-1-5320652(Fax)
-----------------------------------------------------------------------------

Flexer A.: Limitations of self-organizing maps for vector quantization and
multidimensional scaling, to appear in: Advances in Neural Information
Processing Systems 9, edited by M.C. Mozer, M.I. Jordan, and T. Petsche,
available in 1997.

Abstract:

The limitations of using self-organizing maps (SOM) for either
clustering/vector quantization (VQ) or multidimensional scaling (MDS) are
discussed by reviewing recent empirical findings and the relevant theory.
The SOM's remaining ability to do both VQ {\em and} MDS at the same time is
challenged by a new combined technique of adaptive {\em K}-means clustering
plus Sammon mapping of the cluster centroids. In a comprehensive empirical
study using a series of multivariate normal clustering problems, the SOM is
shown to perform significantly worse in terms of quantization error, in
recovering the structure of the clusters, and in preserving the topology.
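[Editor's note: for readers unfamiliar with the combined technique in the
abstract above (K-means clustering followed by Sammon mapping of the
cluster centroids), here is a rough illustrative sketch. This is not code
from the report; the function names and all parameter choices (learning
rate, iteration counts) are invented for illustration.]

```python
import numpy as np

def kmeans(X, k, iters=100, seed=0):
    """Plain Lloyd's algorithm: return k cluster centroids of X."""
    rng = np.random.default_rng(seed)
    C = X[rng.choice(len(X), size=k, replace=False)].astype(float)
    for _ in range(iters):
        # assign each point to its nearest centroid
        labels = np.argmin(((X[:, None, :] - C[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):       # leave empty clusters where they are
                C[j] = X[labels == j].mean(axis=0)
    return C

def sammon(D, dim=2, iters=500, lr=0.5, seed=0):
    """Gradient descent on the Sammon stress
    E = (1/c) * sum_{i<j} (D_ij - d_ij)^2 / D_ij,  c = sum_{i<j} D_ij,
    where d_ij are distances in the low-dimensional embedding Y."""
    rng = np.random.default_rng(seed)
    n = D.shape[0]
    Y = 0.01 * rng.standard_normal((n, dim))
    c = D[np.triu_indices(n, 1)].sum()
    eye = np.eye(n)
    for _ in range(iters):
        diff = Y[:, None, :] - Y[None, :, :]      # pairwise difference vectors
        d = np.sqrt((diff ** 2).sum(-1)) + eye    # +eye avoids /0 on diagonal
        W = (d - D) / ((D + eye) * d)             # per-pair gradient weights
        np.fill_diagonal(W, 0.0)
        grad = (2.0 / c) * (W[:, :, None] * diff).sum(axis=1)
        Y -= lr * grad
    return Y

def kmeans_plus_sammon(X, k, dim=2):
    """Quantize the data with K-means, then embed only the centroids."""
    C = kmeans(X, k)
    D = np.sqrt(((C[:, None, :] - C[None, :, :]) ** 2).sum(-1))
    return C, sammon(D, dim=dim)
```

The division of labour is the point of the comparison: K-means handles the
VQ part and Sammon mapping handles the MDS part, each optimizing its own
explicit objective, whereas the SOM does both implicitly at once.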
From payman at isdl.ee.washington.edu Mon Sep 9 20:51:15 1996
From: payman at isdl.ee.washington.edu (Payman Arabshahi)
Date: Mon, 9 Sep 1996 17:51:15 -0700 (PDT)
Subject: CFP: IEEE Transactions on Neural Networks, Special Issue
Message-ID: <199609100051.RAA03483@isdl.ee.washington.edu>

--------------------------------------------------------------------------
                            Call for Papers
       Special Issue of the IEEE Transactions on Neural Networks:
               Everyday Applications of Neural Networks
--------------------------------------------------------------------------

The objective of this special issue is to present cases of ongoing everyday
use of neural networks in industry, commerce, medicine, engineering, the
military and other disciplines. Even though artificial neural networks have
been around since the 1940's, the last decade has seen a tremendous upsurge
in research and development. This activity has been at two levels: (i)
advances in neural techniques and network architectures, and (ii)
exploration of applications of this technology in various fields. Neural
network technology has reached a degree of maturity, as evidenced by an
ever-increasing number of applications. It is useful, at this stage, to
take stock of applications in order to provide the neural practitioner with
(i) knowledge of fields in which neural technology has had an impact, and
(ii) guidance concerning fruitful areas of research and development in
neurotechnology that can have a significant impact.

This special issue of the TNN calls for submission of papers concerning
neural technology adopted for ongoing or everyday use. Hybrid neural
technologies, such as neuro-fuzzy systems, are also appropriate.
Submissions are to specifically address the infusion and adaptation of
neural technology in various areas. Exploratory applications papers,
normally welcome for submission to the TNN, are specifically discouraged
for this special issue. Papers on adopted and established applications,
rather, are appropriate.
Submissions to the special issue will be judged on the veracity of everyday
use, comparative performance over previously used techniques, and lessons
learned from the development and applications. Descriptions of remaining
open problems, or of desired though unachieved performance, are encouraged.

Six copies of the manuscript should be mailed to one of the special issue
editors by November 15, 1996. The special issue is tentatively scheduled
for publication in July 1997. Submissions may be either brief papers or
regular papers; please refer to the instructions to authors for the TNN.

Tharam Dillon
Professor of Computer Science
Head, Department of Computer Science and Computer Engineering
La Trobe University
Bundoora, Melbourne, Victoria 3083
Australia
Tel: +61 3 479 2598
Fax: +61 3 479 3060
tharam at latcs1.cs.latrobe.edu.au

Payman Arabshahi
University of Washington
Department of Electrical Engineering
Benton Way at Stevens Way
Box 352500
Seattle, WA 98195
United States of America
payman at ee.washington.edu
206 236 2694
FAX: 206 543 3842

Robert J. Marks II
University of Washington
Department of Electrical Engineering
c/o 1131 199th Street SW
Lynnwood, WA 98036-7138
United States of America
r.marks at ieee.org
206 543 6990
FAX: 206 776 9297

From icie96 at mara.fi.uba.ar Mon Sep 9 17:20:21 1996
From: icie96 at mara.fi.uba.ar (1995 Congress)
Date: Mon, 9 Sep 1996 18:20:21 -0300 (GMT-0300)
Subject: New Dates for III ICIE (was ICIE 96)
Message-ID: 

*** NEW DATES ***** NEW DATES ***** NEW DATES ***** NEW DATES

CALL FOR PAPERS

III INTERNATIONAL CONGRESS ON INFORMATION ENGINEERING
III ICIE

===> New Date: April 16th & 17th, 1997

Computer Science Department. School of Engineering
University of Buenos Aires.
ARGENTINA

========================================================================

The III International Congress on Information Engineering will be held in
the Computer Science Department of the School of Engineering of the
University of Buenos Aires on April 16th & 17th, 1997, Buenos Aires City,
Argentina.

PAPERS from all countries are sought that: 1) present results of
researchers' work in the area, 2) present applications to the solution of
problems in industry, business, government and related areas.

AREAS OF APPLICATION include but are not limited to: manufacturing,
automation, control systems, planning, design, production, distribution,
marketing, human resources, finance, stock exchange, international
business, environmental control, communication media, legal aspects,
decision support.

TECHNOLOGY TRANSFER topics include but are not limited to: strategies for
introducing and institutionalising Information Engineering technology,
training of human resources in Information Engineering, justification of
Information Engineering projects, cooperation projects, impact of
Information Engineering on the social environment of the company,
standards.

INFORMATION TECHNIQUES include but are not limited to: knowledge-based
systems, neural networks, fuzzy systems, artificial intelligence,
databases, computational algebra, computer languages, object-oriented
technology, multimedia, computer vision, robotics, human-computer
interfaces, tutoring systems, networking, software engineering, operational
research.

Persons wishing to submit a paper should send an abstract (500 words) by
e-mail to icie96 at mara.fi.uba.ar and four (4) copies written in Spanish
or English to:

Program Committee. Computer Science Department. School of Engineering.
University of Buenos Aires. Paseo Colon 850. 4to PISO. (1063) Capital
Federal. ARGENTINA

The paper shall identify the area and technique to which it belongs.
Papers will be evaluated with respect to their originality, correctness,
clarity and relevance. Use an Arial or Times New Roman font, size 12,
single spaced, with a maximum of 10 pages in A4 format. Margins: 2.5 cm
(top, bottom, left, right). Selected papers will be published in the
proceedings of the Congress.

IMPORTANT DATES: Papers must be received by November 15th, 1996. A fax
version of the paper is allowed for evaluation. Authors will be notified of
acceptance or rejection by e-mail or fax by December 15th.

FAX: 54 1 331-0129 (54 for Argentina, 1 for Buenos Aires)

PROGRAM COMMITTEE

Prof. Lic. Gregorio Perichinsky    University of Buenos Aires
Prof. Ing. Armando De Giusti       University of La Plata
Prof. M.Sc. Raul Gallard           University of San Luis
Prof. Dr. Edmundo Gramajo          Technical University of Madrid /
                                   Buenos Aires Institute of Technology
Prof. Ph.D. Reza Korramshagol      American University
Prof. Ph.D. Anita LaSalle          American University
Prof. Ing. E. Cabellos Pardos      University of Salamanca
Prof. M.Ing. Raimundo D'Aquila     Bs. As. Institute of Technology
Prof. Lic. Bibiana Rossi           Technological University
Prof. Ing. Diana Pallioto          University of Santiago del Estero
Prof. Ing. Cristina Fenema         University of Catamarca
Prof. Lic. Stella M. Valiente      University of Mar del Plata
Prof. C.C. Maria Feldgen           University of Buenos Aires
Prof. Ing. Osvaldo Clua            University of Buenos Aires
Prof. M.Ing. R. Garcia Martinez    University of Buenos Aires
Prof. Lic.
Javier Blanque          University of Lujan

From sandro at parker.physio.nwu.edu Wed Sep 11 18:49:21 1996
From: sandro at parker.physio.nwu.edu (Sandro Mussa-Ivaldi)
Date: Wed, 11 Sep 96 17:49:21 CDT
Subject: Postdoctoral Fellowship - Neuromorphic Control
Message-ID: <9609112249.AA04812@parker.physio.nwu.edu>

******** NEUROMORPHIC CONTROL SYSTEMS ********

A postdoctoral position is available at the Department of Physiology of
Northwestern University Medical School to work on an interdisciplinary
project aimed at developing a control system for an artificial limb. This
system will be based on the rat's hindlimb geometry, and the controller
will emulate some of the known physiology of the muscles and of the spinal
cord. The research will involve both theoretical and experimental
components and will be carried out in collaboration with Sandro Mussa-Ivaldi
and Simon Alford at the Department of Physiology and with Ed Colgate at the
Department of Mechanical Engineering.

The position is available immediately and will last a minimum of one year,
with the possibility of extension for another year. Applicants should have
a PhD in a relevant discipline and solid experience in control system
engineering and computational sciences. This work will also involve some
cellular neurophysiology. While a strong background in experimental
neurophysiology is not a prerequisite, the candidate should be willing to
become acquainted with these techniques and to carry out experimental work.
Northwestern University offers competitive salary and benefits.
Applicants should send a CV, a statement of their interests and
professional goals (no longer than 1 page), and the names, addresses and
telephone numbers of at least two references to Vera Reeves, either via
email (v-reeves at nwu.edu) or via surface mail at the following address:

Vera Reeves
Business Administrator
Department of Physiology (M211)
Northwestern University Medical School
303 East Chicago Ave
Chicago, IL 60611-3008

Northwestern University is an equal opportunity, affirmative action
educator and employer.

---------------------------
sandro at nwu.edu

From bogner at eleceng.adelaide.edu.au Wed Sep 11 03:17:49 1996
From: bogner at eleceng.adelaide.edu.au (Robert E. Bogner)
Date: Wed, 11 Sep 1996 16:47:49 +0930
Subject: Post-doc or Research Fellow position
Message-ID: <9609110717.AA28415@augean.eleceng.adelaide.edu.au>

POSTDOCTORAL OR RESEARCH FELLOW
Signal Processing and Pattern Recognition

A postdoctoral or research fellow is sought to join, as soon as possible,
the Centre for Sensor Signal and Information Processing (CSSIP) and the
University of Adelaide EE Eng Department. The CSSIP is one of several
cooperative research centres awarded by the Australian Government to
establish excellence in research and development. The University of
Adelaide, represented by the EE Eng Dept, is a partner in this cooperative
research centre, together with the Defence Science and Technology
Organization (DSTO), four other Universities, and several companies. The
cooperative research centre consists of about 100 effective full-time
researchers, and is well equipped with many UNIX workstations and a
massively parallel machine (DEC MPP).

The aim of the position is to develop and investigate principles in the
areas of sensor signal and image processing, classification and separation
of signals, pattern recognition and data fusion. The position is for one
year, with a strong possibility of renewal.
DUTIES: In consultation with task leaders and specialist researchers, to
investigate alternative algorithm design approaches, to design experiments
on applications of signal processing and artificial neural networks, to
prepare data and carry out the experiments, to prepare software for testing
algorithms, and to prepare or assist with the preparation of technical
reports.

QUALIFICATIONS: The successful candidate must have a Ph.D. or equivalent
achievement, a proven research record, and excellent written and spoken
communication skills in English.

CONDITIONS: Pay and conditions will be in accordance with University of
Adelaide policies, and will depend on qualifications and experience.
Appointments may be made in the scales A$ 36285 to A$ 41000 for a postdoc,
and A$ 42538 to A$ 48688 for a research fellow.

ENQUIRIES:
Prof. R. E. Bogner, Electrical & Electronic Engineering Dept., The
University of Adelaide, Adelaide, South Australia 5005.
Phone: (61)-08-303-5589, Fax: (61)-08-303-4360
Email: bogner at eleceng.adelaide.edu.au
or
Dr. A. Bouzerdoum, Phone: (61)-08-303-5464, Fax: (61)-08-303-4360
Email: bouzerda at eleceng.adelaide.edu.au
(OR: at ISSPA96 - leave a message at the message station.)

CENTRE FOR SENSOR SIGNAL AND INFORMATION PROCESSING (CSSIP): The University
of Adelaide, represented by the Department of Electrical and Electronic
Engineering, is a partner in this Cooperative Research Centre, together
with the Defence Science and Technology Organization, four other
Universities, and several companies. These research centres are part of a
scheme of the Australian Government and receive considerable financial
support from the government, matching contributions from the partners. Thus
they can provide research facilities, contacts, and support for scholars
while being effectively extensions of the partner Universities.
The objective of the CSSIP is to bring together a sufficient mass of
high-quality workers, to establish and maintain a centre of excellence, and
to ensure that the benefits flow continuously to industry and education.
Its programs include:

* strategic research into underpinning technologies with application to
  specific projects
* education and training responsive to needs
* services to industry in research, development and teaching

From dwang at cis.ohio-state.edu Wed Sep 11 12:38:58 1996
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Wed, 11 Sep 1996 12:38:58 -0400
Subject: NIPS'96 Workshop on Auditory Scene Analysis
Message-ID: <199609111638.MAA09066@sarajevo.cis.ohio-state.edu>

CALL FOR SPEAKERS

NIPS'96 Postconference Workshop
CONNECTIONIST MODELLING OF AUDITORY SCENE ANALYSIS
Snowmass (Aspen), Colorado USA
Friday Dec 6th, 1996

Guy J. Brown                           DeLiang Wang
Department of Computer Science         Department of Computer & Information
University of Sheffield                Sci. and Center for Cognitive Sci.
Regent Court, 211 Portobello St.       The Ohio State University
Sheffield S1 4DP, U.K.                 Columbus, OH 43210-1277, USA
Fax: +44 (0)114 2780972                Fax: (614)2922911
Email: guy at dcs.shef.ac.uk           Email: dwang at cis.ohio-state.edu
http://www.dcs.shef.ac.uk/~guy         http://www.cis.ohio-state.edu/~dwang

OBJECTIVES

Auditory scene analysis describes the ability of listeners to separate the
acoustic events arriving from different environmental sources into separate
perceptual representations (streams). It is related to, but more general
than, the well-known "cocktail party effect", which refers to the ability
of listeners to segregate one voice from a mixture of many other voices.
Computational models of auditory scene analysis are likely to play an
important role in building speech recognition systems that work in
realistic acoustic environments. However, many aspects of this important
modelling problem remain largely unsolved.
There has been significant growth in neural modelling of auditory scene
analysis since Albert Bregman published his book "Auditory Scene Analysis"
in 1990. This workshop seeks to bring together a diverse group of
researchers to critically examine the progress made so far in this
challenging research area, and to discuss unsolved problems. In particular,
we intend to address the following issues:

* Whether attention is involved in primitive (bottom-up) auditory scene
  analysis
* How primitive auditory scene analysis is coupled with schema-based
  (knowledge-based) auditory scene analysis
* The utility of the oscillatory approach

In addition to reviewing these issues, we would like to chart, if possible,
a neural network framework for segmenting simultaneously presented auditory
patterns.

WORKSHOP FORMAT

This one-day workshop will be organised into two three-hour sessions, one
in the early morning and one in the late afternoon. The intervening time is
reserved for skiing or free-wheeling interactions between participants.
Each session consists of two hours of oral presentations and one hour of
panel discussion.

SUBMISSION OF ABSTRACTS

A group of invited experts, including Albert Bregman, will speak at the
workshop. We are seeking a few more speakers to contribute. If you have
done work on this or related topics and would like to contribute, please
send an abstract as soon as possible to:

GUY J. BROWN
Department of Computer Science
University of Sheffield
Regent Court, 211 Portobello Street
Sheffield S1 4DP, UK
Phone: +44 (0)114 2825568; Fax: +44 (0)114 2780972
Email: guy at dcs.shef.ac.uk

Abstracts should be sent by email or by fax.

Important Dates:
Deadline for submission of abstracts: 27 September, 1996
Notification of acceptance: 7 October, 1996

A set of workshop notes will be produced.
Please contact the workshop organizers for further information, or consult
the NIPS WWW home page:
http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From edelman at ai.mit.edu Tue Sep 10 20:45:09 1996
From: edelman at ai.mit.edu (Shimon Edelman)
Date: Tue, 10 Sep 96 20:45:09 EDT
Subject: [arthur@mail4.ai.univie.ac.at: TR: Limitations of SOM]
Message-ID: <9609110045.AA03818@peduncle.ai.mit.edu>

Seeing that comments are welcome... there seems to be a rather glaring gap
in the references in this TR. Fukunaga proposed a similar combination of
clustering and topology-preservation criteria in 1972, and there was a
recent paper by Webb following up on that work. It would also have been
nice to see Baxter's idea of Canonical Vector Quantization discussed in
this context. By the way, what is called MDS in this TR is actually not MDS
(although it is related); MDS is the placement of points in a metric space
in a manner that preserves their distances - or the ranks thereof - without
knowledge of the original coordinates of the points.

@Article{KoontzFukunaga72,
  author  = "W. L. G. Koontz and K. Fukunaga",
  title   = "A nonlinear feature extraction algorithm using distance
             information",
  journal = "IEEE Trans. Comput.",
  year    = 1972,
  volume  = 21,
  pages   = "56-63",
  annote  = "Combines class separation and distance preservation criteria
             for dimensionality reduction."
}

@Article{Webb95,
  author  = "A. R. Webb",
  title   = "Multidimensional-Scaling by Iterative Majorization Using
             Radial Basis Functions",
  journal = "Pattern Recognition",
  year    = 1995,
  volume  = 28,
  pages   = "753-759",
  annote  = "MDS, RBFs, nonlinear PCA. This paper considers the use of
             radial basis functions for modelling the nonlinear
             transformation of a data set obtained by a multidimensional
             scaling analysis. This approach has two advantages over
             conventional nonmetric multidimensional scaling: it reduces
             the number of parameters to estimate, and it provides a
             transformation that may be used on an unseen test set.
A scheme based on iterative majorization is proposed for obtaining the parameters of the network." } @TechReport{Baxter95b, author = "J. Baxter", title = "The Canonical Metric for Vector Quantization", institution = "University of London", year = 1995, type = "NeuroCOLT", number = "NC-TR-95-047", annote = "Abstract. To measure the quality of a set of vector quantization points a means of measuring the distance between two points is required. Common metrics such as the {Hamming} and {Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is argued that there often exists a natural {environment} of functions to the quantization process (for example, the word classifiers in speech recognition and the character classifiers in character recognition) and that such an environment induces a {canonical metric} on the space being quantized. It is proved that optimizing the {reconstruction error} with respect to the canonical metric gives rise to optimal approximations of the functions in the environment, so that the canonical metric can be viewed as embodying all the essential information relevant to learning the functions in the environment. Techniques for {learning} the canonical metric are discussed, in particular the relationship between learning the canonical metric and {internal representation learning}" } -Shimon Dr. 
Shimon Edelman, Center for Biol & Comp Learning, MIT (on leave from Weizmann Inst of Science, Rehovot, Israel) Web home: http://eris.wisdom.weizmann.ac.il/~edelman fax: (+1) 617 253-2964 tel: 253-0549 edelman at ai.mit.edu > From: Arthur Flexer > Subject: TR: Limitations of SOM > To: connectionists at cs.cmu.edu > Date: Mon, 9 Sep 1996 19:04:10 +0200 (MET DST) > > Dear colleagues, > > the following report is available via my personal WWW-page: > > http://www.ai.univie.ac.at/~arthur/ > as > ftp://ftp.ai.univie.ac.at/papers/oefai-tr-96-23.ps.Z > > Sorry, there are no hardcopies available, comments are welcome! > > Sincerely, Arthur Flexer > > ----------------------------------------------------------------------------- > Arthur Flexer arthur at ai.univie.ac.at > Austrian Research Inst. for Artificial Intelligence +43-1-5336112(Tel) > Schottengasse 3, A-1010 Vienna, Austria +43-1-5320652(Fax) > ----------------------------------------------------------------------------- > > > Flexer A.: Limitations of self-organizing maps for vector quantization and > multidimensional scaling, to appear in: Advances in Neural Information > Processing Systems 9, edited by M.C. Mozer, M.I. Jordan, and T. Petsche, > available in 1997. > > Abstract: > > The limitations of using self-organizing maps (SOM) for either > clustering/vector quantization (VQ) or multidimensional scaling > (MDS) are discussed by reviewing recent empirical findings and > the relevant theory. The SOM's remaining ability to do both VQ {\em > and} MDS at the same time is challenged by a new combined > technique of adaptive {\em K}-means clustering plus Sammon mapping > of the cluster centroids. SOMs are shown to perform significantly > worse in terms of quantization error, in recovering the structure of > the clusters and in preserving the topology in a comprehensive > empirical study using a series of multivariate normal clustering > problems. 
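The combined technique named in the abstract above, adaptive K-means clustering followed by Sammon mapping of the cluster centroids, can be sketched roughly as follows. This is an illustrative sketch only, not the authors' implementation: plain batch K-means stands in for the adaptive variant, and the Sammon stress is minimized by simple gradient descent rather than Sammon's second-order iteration.

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain batch K-means (a stand-in for the adaptive variant in the paper)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Assign each point to its nearest centroid.
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # Move each centroid to the mean of its assigned points.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return centroids, labels

def sammon(P, out_dim=2, iters=500, lr=0.01, seed=0):
    """Sammon mapping by gradient descent on the Sammon stress
    E = sum_{i<j} (D_ij - d_ij)^2 / D_ij (up to a constant factor)."""
    rng = np.random.default_rng(seed)
    n = len(P)
    eps = 1e-9
    D = np.linalg.norm(P[:, None] - P[None], axis=2)  # input-space distances
    Y = rng.normal(size=(n, out_dim))                 # random 2-D start
    for _ in range(iters):
        d = np.linalg.norm(Y[:, None] - Y[None], axis=2) + np.eye(n) + eps
        ratio = (D - d) / (d * (D + eps))
        np.fill_diagonal(ratio, 0.0)
        # dE/dY_i = -2 * sum_j ratio_ij * (Y_i - Y_j)
        grad = -2 * (ratio[:, :, None] * (Y[:, None] - Y[None])).sum(axis=1)
        Y -= lr * grad
    return Y

# Cluster the data, then map only the centroids down to 2-D.
X = np.random.default_rng(1).normal(size=(300, 5))
centroids, labels = kmeans(X, k=10)
Y = sammon(centroids)
print(Y.shape)  # (10, 2)
```

Mapping only the k centroids rather than all the data is what makes the Sammon step cheap: its cost is quadratic in the number of mapped points.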
> >  From linster at berg.harvard.edu Fri Sep 13 18:46:53 1996 From: linster at berg.harvard.edu (Christiane Linster) Date: Fri, 13 Sep 1996 18:46:53 -0400 (EDT) Subject: Nips Workshop Message-ID: CALL FOR PARTICIPATION NIPS'96 Postconference Workshop NEURAL MODULATION AND NEURAL INFORMATION PROCESSING Snowmass (Aspen), Colorado USA Friday Dec 6th, 1996 Akaysha Tang Christiane Linster The Salk Institute Dept. of Psychology Computational Neurobiology Lab Harvard University 10010 North Torrey Pines Road 33, Kirkland Street La Jolla, CA 92037 Cambridge, MA 02138 Tel: (619) 453 4100 x1618 Tel: (617) 496 2555 Fax: (619) 587 0417 Fax: (617) 495 3827 tang at salk.edu linster at berg.harvard.edu OBJECTIVES Neural modulation is ubiquitous in the nervous system and can provide the neural system with additional computational power that has yet to be characterized. From a computational point of view, the effects of neuromodulation on neural information processing can be far more sophisticated than the simple increased/decreased gain control assumed by many modelers. We would like to bring together scientists from diverse fields of study, including psychopharmacology, behavioral genetics, neurophysiology, neural networks, and computational neuroscience. We hope, through sessions of highly critical, interactive and interdisciplinary discussions, * to identify the strengths and weaknesses of existing research methodology and practices within each field; * to work out a series of strategies to increase the interactions between experimental and theoretical research; and * to further our understanding of the role of neuromodulation in neural information processing. WORKSHOP FORMAT This one-day workshop will be organised into two three-hour sessions, one in early morning and one in late afternoon. The intervening time is reserved for skiing or free-wheeling interactions between participants. Each session consists of two hours of oral presentations and one hour of panel discussion. 
A group of invited researchers in the field will speak, including Terry Sejnowski, John Lisman, and Michael Hasselmo. If you have done work on this or related topics and would like to attend and/or contribute, please send an email describing your research interests to: Christiane Linster linster at berg.harvard.edu Abstracts should be sent in by email or by fax. Please contact the workshop organizers for further information, or consult the NIPS WWW home page: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/  From geoff at salk.edu Fri Sep 13 13:46:02 1996 From: geoff at salk.edu (geoff@salk.edu) Date: Fri, 13 Sep 1996 10:46:02 -0700 (PDT) Subject: Paper available Message-ID: <199609131746.KAA11605@gauss.salk.edu> The following paper is available via ftp://ftp.cnl.salk.edu/pub/geoff/goodhill_sejnowski_96.ps.Z or http://www.cnl.salk.edu/~geoff QUANTIFYING NEIGHBOURHOOD PRESERVATION IN TOPOGRAPHIC MAPPINGS Geoffrey J. Goodhill & Terrence J. Sejnowski The Salk Institute From: Proceedings of the 3rd Joint Symposium on Neural Computation, 1996 ABSTRACT Mappings that preserve neighbourhood relationships are important in many contexts, from neurobiology to multivariate data analysis. It is important to be clear about precisely what is meant by preserving neighbourhoods. At least three issues have to be addressed: how neighbourhoods are defined, how a perfectly neighbourhood preserving mapping is defined, and how an objective function for measuring discrepancies from perfect neighbourhood preservation is defined. We review several standard methods, and using a simple example mapping problem show that the different assumptions of each lead to non-trivially different answers. We also introduce a particular measure for topographic distortion, which has the form of a quadratic assignment problem. Many previous methods are closely related to this measure, which thus serves to unify disparate approaches. 
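The quadratic-assignment form mentioned in the abstract above can be illustrated generically: sum, over all pairs of points, the product of an input-space similarity term and an output-space similarity term. The particular choice below (raw Euclidean distances for both terms) is an assumption for illustration, not the paper's definition; see the paper for the actual measure.

```python
import numpy as np

def c_measure(X, Y):
    """Generic quadratic-assignment-style cost over a mapping X -> Y:
    sum over point pairs (i, j) of F(i, j) * G(i, j), where F and G are
    here simply the pairwise distances in the input and output spaces.
    (Illustrative form only; the paper defines its own F and G.)"""
    dX = np.linalg.norm(X[:, None] - X[None], axis=2)
    dY = np.linalg.norm(Y[:, None] - Y[None], axis=2)
    i, j = np.triu_indices(len(X), k=1)   # each unordered pair once
    return float((dX[i, j] * dY[i, j]).sum())

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
print(c_measure(X, X) > 0)  # True
```

The point of this form is that comparing two candidate mappings of the same data reduces to comparing two such pairwise sums, which is exactly the structure of a quadratic assignment problem.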
22 pages, uncompressed postscript = 1.1MB NOTE: I advertised a tech report with the same title on this list last year: the new paper contains more recent work.  From cas-cns at cns.bu.edu Mon Sep 16 09:05:02 1996 From: cas-cns at cns.bu.edu (CAS/CNS) Date: Mon, 16 Sep 1996 09:05:02 -0400 Subject: Grad Training - BU Cognitive & Neural Systems Message-ID: <199609161304.JAA09941@cns.bu.edu> ************************************************************************* GRADUATE TRAINING IN THE DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS (CNS) AT BOSTON UNIVERSITY ************************************************************************* The Boston University Department of Cognitive and Neural Systems offers comprehensive graduate training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall, 1997, admission and financial aid are now being accepted for both the MA and PhD degree programs. To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax: DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS Boston University 677 Beacon Street Boston, MA 02215 617/353-9481 (phone) 617/353-7755 (fax) or send via email your full name and mailing address to: rll at cns.bu.edu (Ms. Robin L. Locke) Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization. 
GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis. Stephen Grossberg, Chairman Gail A. Carpenter, Director of Graduate Studies Description of the CNS Department: The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of outstanding technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees. The CNS Department embodies a number of unique features. It has developed a curriculum that consists of interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Additional advanced courses, including research seminars, are also offered. 
Each course is typically taught once a week in the afternoon or evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as biology, computer science, engineering, mathematics, and psychology, in addition to courses in the CNS curriculum. The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems. Students interested in neural network hardware work with researchers in CNS, at the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River Campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department. In addition to its basic research and training program, the department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. The department is housed in its own new four story building which includes ample space for faculty and student offices and laboratories, as well as an auditorium, classroom and seminar rooms, library, and faculty-student lounge. 
1996-97 CAS MEMBERS and CNS FACULTY: Jelle Atema Professor of Biology Director, Boston University Marine Program (BUMP) PhD, University of Michigan Sensory physiology and behavior Aijaz Baloch Research Associate of Cognitive and Neural Systems PhD, Electrical Engineering, Boston University Neural modeling of the role of visual attention in recognition, learning, and motor control; computational vision, adaptive control systems, reinforcement learning Helen Barbas Associate Professor, Department of Health Sciences PhD, Physiology/Neurophysiology, McGill University Organization of the prefrontal cortex, evolution of the neocortex Jacob Beck Research Professor of Cognitive and Neural Systems PhD, Psychology, Cornell University Visual perception, psychophysics, computational models Daniel H. Bullock Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, Stanford University Real-time neural systems, sensory-motor learning and control, evolution of intelligence, cognitive development Gail A. Carpenter Professor of Cognitive and Neural Systems and Mathematics Director of Graduate Studies Department of Cognitive and Neural Systems PhD, Mathematics, University of Wisconsin, Madison Pattern recognition, categorization, machine learning, differential equations Laird Cermak Professor of Neuropsychology, School of Medicine Professor of Occupational Therapy, Sargent College Director, Memory Disorders Research Center Boston Veterans Affairs Medical Center PhD, Ohio State University Memory disorders Michael A. Cohen Associate Professor of Cognitive and Neural Systems and Computer Science Director, CAS/CNS Computation Labs PhD, Psychology, Harvard University Speech and language processing, measurement theory, neural modeling, dynamical systems H. Steven Colburn Professor of Biomedical Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Audition, binaural interaction, signal processing models of hearing William D. 
Eldred III Associate Professor of Biology PhD, University of Colorado, Health Science Center Visual neural biology Paolo Gaudiano Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Computational and neural models of robotics, vision, adaptive sensory-motor control, and behavioral neurobiology Jean Berko Gleason Professor of Psychology PhD, Harvard University Psycholinguistics Douglas Greve Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Active vision Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems Chairman, Department of Cognitive and Neural Systems PhD, Mathematics, Rockefeller University Theoretical biology, theoretical psychology, dynamical systems, applied mathematics Frank Guenther Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Biological sensory-motor control, spatial representation, speech production Thomas G. 
Kincaid Professor of Electrical, Computer and Systems Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Signal and image processing, neural networks, non-destructive testing Nancy Kopell Professor of Mathematics PhD, Mathematics, University of California at Berkeley Dynamical systems, mathematical physiology, pattern formation in biological/physical systems Ennio Mingolla Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, University of Connecticut Visual perception, mathematical modeling of visual processes Alan Peters Chairman and Professor of Anatomy and Neurobiology, School of Medicine PhD, Zoology, Bristol University, United Kingdom Organization of neurons in the cerebral cortex, effects of aging on the primate brain, fine structure of the nervous system Andrzej Przybyszewski Senior Research Associate of Cognitive and Neural Systems PhD, Warsaw Medical Academy Retinal physiology, mathematical and computer modeling of dynamical properties of neurons in the visual system Adam Reeves Adjunct Professor of Cognitive and Neural Systems Professor of Psychology, Northeastern University PhD, Psychology, City University of New York Psychophysics, cognitive psychology, vision Mark Rubin Research Assistant Professor of Cognitive and Neural Systems Research Physicist, Naval Air Warfare Center, China Lake, CA (on leave) PhD, Physics, University of Chicago Neural networks for vision, pattern recognition, and motor control Robert Savoy Adjunct Associate Professor of Cognitive and Neural Systems Scientist, Rowland Institute for Science PhD, Experimental Psychology, Harvard University Computational neuroscience; visual psychophysics of color, form, and motion perception Eric Schwartz Professor of Cognitive and Neural Systems; Electrical, Computer and Systems Engineering; and Anatomy and Neurobiology PhD, High Energy Physics, Columbia University Computational neuroscience, machine vision, neuroanatomy, neural modeling 
Robert Sekuler Adjunct Professor of Cognitive and Neural Systems Research Professor of Biomedical Engineering, College of Engineering, BioMolecular Engineering Research Center Jesse and Louis Salvage Professor of Psychology, Brandeis University Sc.M., PhD, Brown University Barbara Shinn-Cunningham Assistant Professor of Cognitive and Neural Systems PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology Psychoacoustics, audition, auditory localization, binaural hearing, sensorimotor adaptation, mathematical models of human performance Takeo Watanabe Assistant Professor of Psychology PhD, Behavioral Sciences, University of Tokyo Perception of objects and motion and effects of attention on perception using psychophysics and brain imaging (f-MRI) Allen Waxman Adjunct Associate Professor of Cognitive and Neural Systems Senior Staff Scientist, MIT Lincoln Laboratory PhD, Astrophysics, University of Chicago Visual system modeling, mobile robotic systems, parallel computing, optoelectronic hybrid architectures James Williamson Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Image processing and object recognition. Particular interests are: dynamic binding, self-organization, shape representation, and classification Jeremy Wolfe Adjunct Associate Professor of Cognitive and Neural Systems Associate Professor of Ophthalmology, Harvard Medical School Psychophysicist, Brigham & Women's Hospital, Surgery Dept. 
Director of Psychophysical Studies, Center for Clinical Cataract Research PhD, Massachusetts Institute of Technology Visual search ************************************************************************* DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS GRADUATE TRAINING ANNOUNCEMENT Boston University 677 Beacon Street Boston, MA 02215 Phone: 617/353-9481 Fax: 617/353-7755 Email: rll at cns.bu.edu *************************************************************************  From Jon.Baxter at keating.anu.edu.au Mon Sep 16 22:03:05 1996 From: Jon.Baxter at keating.anu.edu.au (Jonathan Baxter) Date: Tue, 17 Sep 1996 12:03:05 +1000 (EST) Subject: [arthur@mail4.ai.univie.ac.at: TR: Limitations of SOM] In-Reply-To: <9609110045.AA03818@peduncle.ai.mit.edu> from "Shimon Edelman" at Sep 10, 96 08:45:09 pm Message-ID: <199609170203.MAA01701@keating.anu.edu.au> Commenting on this paper: > > Flexer A.: Limitations of self-organizing maps for vector quantization and > > multidimensional scaling, to appear in: Advances in Neural Information > > Processing Systems 9, edited by M.C. Mozer, M.I. Jordan, and T. Petsche, > > available in 1997. Shimon Edelman said: > > Seeing that comments are welcome... there seems to be a rather glaring > gap in the references in this TR. Fukunaga proposed a similar > combination of clustering and topology-preservation criteria in 1972, > and there was a recent paper by Webb following up on that work. > > It would have been nice to see Baxter's idea of Canonical Vector > Quantization discussed in this context. If anyone is interested, there is a recent version (July 1996) of the Canonical Quantization paper on my home page: http://keating.anu.edu.au/~jon/papers/canonical.ps.gz Title: The Canonical Distortion Measure for Vector Quantization and Approximation. Author: Jonathan Baxter Abstract: To measure the quality of a set of vector quantization points a means of measuring the distance between a random point and its quantization is required. 
Common metrics such as the {\em Hamming} and {\em Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is shown how an {\em environment} of functions on an input space $X$ induces a {\em canonical distortion measure} (CDM) on $X$. The designation ``canonical'' is justified because it is shown that optimizing the reconstruction error of $X$ with respect to the CDM gives rise to optimal piecewise constant approximations of the functions in the environment. The CDM is calculated in closed form for several different function classes. An algorithm for training neural networks to implement the CDM is presented along with some encouraging experimental results. ------------- Jonathan Baxter Department of Systems Engineering Research School of Information Science and Technology Australian National University Canberra, A.C.T 0200 Australia Tel: +61 6 249 5182 Fax: +61 6 279 8088 E-mail: jon at syseng.anu.edu.au  From gorr at willamette.edu Tue Sep 17 14:32:58 1996 From: gorr at willamette.edu (Jenny Orr) Date: Tue, 17 Sep 1996 11:32:58 -0700 (PDT) Subject: NIPS*96 Postconference Workshop Message-ID: <199609171832.LAA01053@mirror.willamette.edu> CALL FOR SPEAKERS NIPS*96 Postconference Workshop TRICKS OF THE TRADE: How to Make Algorithms REALLY Work Snowmass (Aspen), Colorado USA Saturday Dec 7th, 1996 ORGANIZERS: Jenny Orr Willamette University gorr at willamette.edu Klaus Muller GMD First, Germany klaus at first.gmd.de Rich Caruana Carnegie Mellon caruana at cs.cmu.edu OBJECTIVES: Using neural networks to solve difficult problems often requires as much art as science. Researchers and practitioners acquire, through experience and word-of-mouth, techniques and heuristics that help them succeed. Often these ``tricks'' are theoretically well motivated. Sometimes they're the result of trial and error. In this workshop we ask you to share the ``tricks'' you have found helpful. 
Our focus will be mainly regression and classification. WHAT IS A TRICK? A technique, rule-of-thumb, or heuristic that: - is easy to describe and understand - can make a real difference in practice - is not (yet) part of well-documented technique - has broad application - may or may not (yet) have a theoretical explanation Examples of well known tricks include: early stopping, using symmetric sigmoids, on-line calculation of the largest eigenvalue of the Hessian without computing the Hessian to determine optimal learning speed, ... POTENTIAL TOPICS: - architecture design: picking layers, nodes, connectivity, modularity, activation functions, ... - model parameters & learning rates, momentum, annealing schedules, speeding learning: on-line, batch, conjugate gradient, approximating the Hessian, ... - training/test sets: sizes, dealing with too much/little data, noisy and/or missing data, active sampling, skewed samples, ... - generalization: which smoothers/regularizers to use and when to use them, network capacity, learning rate, net initialization, output representation, SSE vs. cross-entropy, ... - training problems: symmetry breaking, bootstrapping large nets, no negative instances, ... WORKSHOP FORMAT: Our goal is to create an enjoyable, quick-moving one-day workshop with lots of ideas and discussion. Each three-hour session will have 5-10 short presentations (10 mins max) and 1-2 longer presentations (30 mins max). The long presentations will allow speakers to present collections of ``tricks'' focussed on particular topics such as how to speed up backprop, when and what regularization to use, ... The short presentations will give the rest of us an opportunity to present isolated ``tricks'' with a minimum of presentation overhead. To help keep things "light", we ask that short presentations use 5 or fewer slides. SUBMISSIONS: We already have a number of speakers lined up (see below), but we are looking for more contributions. 
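Early stopping, the first of the well-known tricks listed above, reduces to a small monitoring loop: halt training when held-out validation loss stops improving, and keep the model from the best epoch. A minimal sketch (the `step`/`val_loss` callables are a hypothetical interface, not any particular library's API):

```python
def train_with_early_stopping(step, val_loss, max_epochs=100, patience=5):
    """Run up to max_epochs of training; stop once validation loss has not
    improved for `patience` consecutive epochs. Returns the best epoch and
    its validation loss."""
    best, best_epoch, since_best = float("inf"), 0, 0
    for epoch in range(max_epochs):
        step()                 # one epoch of training
        loss = val_loss()      # evaluate on held-out data
        if loss < best:
            best, best_epoch, since_best = loss, epoch, 0
            # (a real implementation would checkpoint the weights here)
        else:
            since_best += 1
            if since_best >= patience:
                break
    return best_epoch, best

# Toy demo: a "validation loss" that falls and then rises, as in overfitting.
losses = iter([1.0, 0.6, 0.4, 0.5, 0.55, 0.6, 0.7, 0.8, 0.9, 1.0])
epoch, best = train_with_early_stopping(lambda: None, lambda: next(losses))
print(epoch, best)  # 2 0.4
```

The `patience` parameter trades off sensitivity to noisy validation curves against wasted epochs; a patience of one stops at the first uptick, which is usually too aggressive.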
If you'd like to give a presentation, please email a short (1 page or less) description of the trick(s) to gorr at willamette.edu. If you wish to discuss a single trick, the total presentation time will be 10 minutes, or less. If you wish to discuss a group of related tricks, please say so and briefly describe all the tricks; the total presentation should be 20-30 minutes, or less. We will review the submissions and include as many as there is time for in the schedule. If possible, please discuss what the trick is used for, when it is and is not applicable, sample problems you have used it on, how well it seems to work in practice, and any explanation you have, theoretical or otherwise, for why it seems to work. Keep in mind that this is a workshop, and "tricks" do not have to be fully fleshed out methods with rigorous theoretical or empirical evidence. If you've found a technique that's sometimes useful, the odds are others will find it interesting and useful, too. In order to list your presentation in the workshop brochure, we need to have the title and abstract by September 27. IMPORTANT DATES: Deadline for listing title in workshop brochure: 27 September, 1996 Final Deadline for submission of abstracts: 7 October, 1996 Notification of acceptance: 15 October, 1996 FOLLOW-UP TO WORKSHOP: To help ensure that presenters get credit for divulging their tricks, we'll ask presenters to prepare concise, one page write-ups of their tricks. These will be compiled into a report and/or published on the web and made available to anyone interested. Tricks that as yet have no known theoretical explanation will be grouped together to form a set of open problems. We are also considering the possibility of publishing a collection of tricks as a short book or special journal issue. 
CONFIRMED SPEAKERS: Yann LeCun, Larry Yaeger Hans Georg Zimmermann Patrice Simard Eric Wan Rich Caruana Nici Schraudolph Shumeet Baluja David Cohn General info about the Postconference workshops can be found on the NIPS homepage: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/ If you have any questions about this workshop, don't hesitate to contact one of the organizers. We look forward to seeing you in Snowmass! -Jenny, Klaus, and Rich. TOP-TEN REASONS WHY YOU SHOULD PRESENT A TRICK: 10: someone should get credit for the tricks we all end up using, it might as well be you (who's responsible for early stopping?) 9: so you can bond with others who discovered the same trick 8: because you couldn't get it on Letterman's Stupid Human Tricks 7: no one will believe you're an expert if you don't use tricks 6: your trick sucks, but you'll feel better using the other tricks you learn if you present something 5: so you show everyone how very clever you are 4: because you'll feel more comfortable using an unsupported trick if you can get others to use it, too 3: so those of us who see flaws in your tricks can flame you alive 2: you'll feel less guilty skiing if you present something at a workshop 1: because you really don't want to write a whole paper about it  From greiner at scr.siemens.com Tue Sep 17 14:54:17 1996 From: greiner at scr.siemens.com (Russell Greiner) Date: Tue, 17 Sep 1996 14:54:17 -0400 (EDT) Subject: Research/Development Position Message-ID: <199609171854.OAA28363@eagle.scr.siemens.com> The Adaptive Information and Signal Processing Department at Siemens Corporate Research, Inc. 
(SCR) in Princeton, New Jersey has immediate openings for research and development personnel with experience in one or more of these areas: machine learning expert systems (and other areas of AI) adaptive signal processing fuzzy logic user agents neural networks intelligent control To qualify for one of these positions, you should have a PhD with proven experience in one of the above-named areas and a strong interest in applied research and development aimed at delivering working prototypes and products to Siemens companies. You must have very good software development skills including C or C++. Windows API or OLE is a plus. Siemens is a world-wide company with sales of more than $60 billion and world-wide employment of almost 400,000. In the US, Siemens has sales of $6 billion and almost 40,000 employees. Siemens Corporate Research, Inc. employs approximately 140 technical staff with an emphasis on imaging, multimedia, software engineering and adaptive systems. SCR's mission is to provide technical expertise to develop solutions for Siemens operating companies and groups. Siemens is an Equal Opportunity Employer. If interested, do **NOT** reply to this message. Send your resume to: Russell Greiner Siemens Corporate Research Inc. 755 College Road East Princeton, NJ 08540  From oreilly at flies.mit.edu Tue Sep 17 17:20:47 1996 From: oreilly at flies.mit.edu (Randall O'Reilly) Date: Tue, 17 Sep 1996 16:20:47 -0500 Subject: PhD Thesis Available Message-ID: <199609172120.QAA22328@flies.mit.edu> My PhD thesis is available for anonymous ftp downloading: ftp://hydra.psy.cmu.edu/pub/user/oreilly/oreilly_thesis.tar.gz It is 1,085,460 bytes and un-tars into roughly 6 MB of PostScript files, each under 1 MB. -------------------------- The LEABRA Model of Neural Interactions and Learning in the Neocortex Randall C. 
O'Reilly Center for the Neural Basis of Cognition Department of Psychology Carnegie Mellon University There is evidence that the specialized neural processing systems in the neocortex, which are responsible for much of human cognition, arise from the action of a relatively general-purpose learning mechanism. I propose that such a neocortical learning mechanism can be best understood as the combination of error-driven and self-organizing (Hebbian associative) learning. This model of neocortical learning, called LEABRA (local, error-driven and associative, biologically realistic algorithm), is computationally powerful, has important implications for psychological models, and is biologically feasible. The thesis begins with an evaluation of the strengths and limitations of current neural network learning algorithms as models of a neocortical learning mechanism according to psychological, biological, and computational criteria. I argue that error-driven (e.g., backpropagation) learning is a reasonable computational and psychological model, but it is biologically implausible. I show that backpropagation can be implemented in a biologically plausible fashion by using interactive (bi-directional, recurrent) activation flow, which is known to exist in the neocortex, and has been important for accounting for psychological data. However, the interactivity required for biological and psychological plausibility significantly impairs the ability to respond systematically to novel stimuli, making it still a bad psychological model (e.g., for nonword reading). I propose that the neocortex solves this problem by using inhibitory activity regulation and Hebbian associative learning, the computational properties of which have been explored in the context of self-organizing learning models. 
I show that by introducing these properties into an interactive (biologically plausible) error-driven network, one obtains a model of neocortical learning that: 1) provides a clear computational role for a number of biological features of the neocortex; 2) behaves systematically on novel stimuli, and exhibits transfer to novel tasks; 3) learns rapidly in networks with many hidden layers; 4) provides flexible access to learned knowledge; 5) shows promise in accounting for psychological phenomena such as the U-shaped curve in over-regularization of the past-tense inflection; 6) has a number of other nice properties. --------------------------------------------- Note that I am now doing a postdoc at MIT: Center for Biological and Computational Learning Department of Brain and Cognitive Sciences E25-210, MIT Cambridge, MA 02139 oreilly at ai.mit.edu  From jose at kreizler.rutgers.edu Wed Sep 18 08:39:14 1996 From: jose at kreizler.rutgers.edu (Stephen J. Hanson) Date: Wed, 18 Sep 1996 08:39:14 -0400 (EDT) Subject: COGNITIVE SCIENCE RESEARCH/LAB MANAGER Message-ID: Cognitive Science Research / System Administration We are looking for an individual to do research in Cognitive Science and to help administer the computing resources of the Psychology Department at Rutgers University (Newark Campus). Resources include a network of Sun workstations, PCs and Macs, printers, a pc-voice mail system and various peripheral devices. The individual will be responsible for installing and debugging software, and various routine system administration activities. At least half of their time will be spent in research involving Cognitive Science, especially related to Connectionist networks (or Neural Networks) and Computational Neuroscience. Familiarity with C programming, UNIX system internals (BSD, System V, Solaris, Linux) and Windows (95, NT) and local area networks running TCP/IP is required. Image processing or graphics programming experience is a plus.
Candidates should possess a BS/MS in Computer Science, Cognitive Science, AI, or another relevant field, or equivalent experience. Salary will be dependent upon qualifications and experience. Rutgers University is an equal opportunity affirmative action employer. Please send resumes and references to Pauline Mitchell Department of Psychology 101 Warren Street Rutgers University Newark, New Jersey, 07102 Direct email inquiries or resumes to: jose at kreizler.rutgers.edu Stephen J. Hanson Professor & Chair Department of Psychology Smith Hall Rutgers University Newark, NJ 07102 voice: 1-201-648-5095 fax: 1-201-648-1171 email: jose at kreizler.rutgers.edu  From bruno at redwood.ucdavis.edu Wed Sep 18 14:05:25 1996 From: bruno at redwood.ucdavis.edu (Bruno A. Olshausen) Date: Wed, 18 Sep 1996 11:05:25 -0700 Subject: TR: sparse coding and ICA Message-ID: <199609181805.LAA25112@redwood.ucdavis.edu> The following TR is available via ftp://publications.ai.mit.edu/ai-publications/1500-1999/AIM-1580.ps.Z Learning Linear, Sparse, Factorial Codes Bruno A. Olshausen In previous work (Nature, 381:607-609), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the "independent components analysis" algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed. Related papers are available via http://redwood.ucdavis.edu/bruno/papers.html Bruno A. Olshausen Phone: (916) 757-8749 Center for Neuroscience Fax: (916) 757-8827 UC Davis Email: baolshausen at ucdavis.edu 1544 Newton Ct.
WWW: http://redwood.ucdavis.edu Davis, CA 95616  From tlindroo at bennet.tutech.fi Thu Sep 19 03:42:34 1996 From: tlindroo at bennet.tutech.fi (Tommi Lindroos) Date: Thu, 19 Sep 1996 10:42:34 +0300 Subject: CFP: EANN 97 Message-ID: <9609190742.AA32604@bennet.tutech.fi> Sorry for this unsolicited mail. Since you are involved in neural networks, we think this conference might be of interest to you. International Conference on Engineering Applications of Neural Networks (EANN '97) Stockholm, Sweden 16-18 June 1997 First Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (about 400 words) should be sent to eann97 at kth.se by 21 December 1996 by e-mail in plain ASCII format. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 15 January. Submissions will be reviewed, and the number of full papers will be very limited. For information on earlier EANN conferences see the www pages at http://www.abo.fi/~abulsari/EANN95.html and http://www.abo.fi/~abulsari/EANN96.html A few special tracks have been confirmed so far: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, Ersin-Tulunay at metu.edu.tr), Hybrid Systems (D. Tsaptsinos, D.Tsaptsinos at kingston.ac.uk), Mechanical Engineering (A.
Scherer, Andreas.Scherer at fernuni-hagen.de), Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at), Process Engineering (R. Baratti, baratti at ndchem3.unica.it) Authors are encouraged to send their abstracts to the organisers of the special tracks, instead of eann97 at kth.se, if the paper is relevant to one of the topics mentioned above. Advisory board J. Hopfield (USA) A. Lansner (Sweden) G. Sj\"odin (Sweden) Organising committee A. Bulsari (Finland) H. Liljenstr\"om (Sweden) D. Tsaptsinos (UK) International program committee (to be confirmed, extended) G. Baier (Germany) R. Baratti (Italy) S. Cho (Korea) T. Clarkson (UK) G. Dorffner (Austria) W. Duch (Poland) A. Gorni (Brazil) J. Heikkonen (Italy) F. Norlund (Sweden) A. Ruano (Portugal) C. Schizas (Cyprus) J. Thibault (Canada) E. Tulunay (Turkey) J. Demott (USA) Electronic mail is not absolutely reliable, so if you have not heard from the conference secretariat after sending your abstract, please contact the secretariat again. You should receive an abstract number within a couple of days of submission.  From seung at physics.lucent.com Thu Sep 19 11:03:46 1996 From: seung at physics.lucent.com (Sebastian Seung) Date: Thu, 19 Sep 1996 11:03:46 -0400 Subject: preprints available on Web Message-ID: <199609191503.LAA09840@heungbu.div111.lucent.com> The following preprints can be found at http://portal.research.bell-labs.com/home/seung at physics/papers/ How the brain keeps the eyes still H. S. Seung To appear in PNAS The brain can hold the eyes still because it stores a memory of eye position. The brain's memory of horizontal eye position appears to be represented by persistent neural activity in a network known as the neural integrator, which is localized in the brainstem and cerebellum. Existing experimental data are reinterpreted as evidence for an attractor hypothesis: that the persistent patterns of activity observed in this network form an attractive line of fixed points in its state space.
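As a minimal numerical illustration of the line-attractor idea (my own sketch, not taken from the paper): in a linear recurrent network whose weight matrix has eigenvalue 1 along one direction and eigenvalues below 1 elsewhere, the component of activity along that direction persists indefinitely, while the rest decays. The particular two-unit network and weight matrix below are arbitrary choices.

```python
import numpy as np

# Illustrative line attractor: a linear recurrent network
# x(t+1) = W x(t) with one eigenvalue equal to 1 (the memory
# direction u) and the other equal to 0.5 (a decaying direction).
u = np.array([1.0, 1.0]) / np.sqrt(2)            # line of fixed points
P = np.outer(u, u)                               # projector onto u
W = P + 0.5 * (np.eye(2) - P)                    # eigenvalues: 1 and 0.5

x = np.array([2.0, 0.0])                         # arbitrary initial state
for _ in range(50):
    x = W @ x

# The projection of the initial state onto u is preserved; the
# orthogonal component has decayed, so x now sits on the line at [1, 1]:
# the network "remembers" a continuous quantity (e.g. eye position).
```

Any state on the line u is a fixed point, so the network can hold a continuum of values; this is the sense in which tuned positive feedback can store an analog memory.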
Line attractor dynamics can be produced in linear or nonlinear neural networks by learning mechanisms that precisely tune positive feedback. Unsupervised learning by convex and conic encoding D. D. Lee and H. S. Seung To appear in NIPS Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve lateral connections, which mediate cooperative and competitive interactions and allow for the development of sparse distributed representations. Statistical mechanics of Vapnik-Chervonenkis entropy P. Riegler and H. S. Seung A statistical mechanics of learning is formulated in terms of a Gibbs distribution on the realizable labelings of a set of inputs. The entropy of this distribution is a generalization of the Vapnik-Chervonenkis (VC) entropy, reducing to it exactly in the limit of infinite temperature. Perceptron learning of randomly labeled patterns is analyzed within this formalism.  From riegler at ifi.unizh.ch Thu Sep 19 11:47:26 1996 From: riegler at ifi.unizh.ch (Alex Riegler) Date: Thu, 19 Sep 1996 17:47:26 +0200 Subject: CFP New Trends in Cog Sci Message-ID: Please forward to colleagues etc. Apologies if you have received this already. /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ International Workshop N E W T R E N D S I N C O G N I T I V E S C I E N C E NTCS '97 /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ "Does Representation need Reality?" 
Perspectives from Cognitive Science, Neuroscience, Epistemology, and Artificial Life Vienna, Austria, May 13 - 16, 1997 with plenary talks by: Georg Dorffner, Ernst von Glasersfeld, Stevan Harnad, Wolf Singer, and Sverre Sjoelander organized by the Austrian Society of Cognitive Science (ASoCS) =========================================================================== Latest information can be retrieved from the conference WWW-page =========================================================================== P u r p o s e ___________________________________________________________________________ The goal of this single-track conference is to investigate and discuss new approaches and movements in cognitive science in a workshop-like atmosphere. Among the topics which seem to have emerged in recent years are: embodiment of knowledge, system theoretic and computational neuroscience approaches to cognition, dynamics in recurrent neural architectures, evolutionary and artificial life approaches to cognition, and (epistemological) implications for perception and representation, constructivist concepts and the problem of knowledge representation, autopoiesis, implications for epistemology and philosophy (of science). Evidence for a failure of the traditional understanding of neural representation converges from several fields. Neuroscientific results in the last decade have shown that single-cell representations with hierarchical processing towards representing units do not seem to be the way the cortex represents environmental entities. Instead, distributed cell ensemble coding has become a popular concept for representation, both in computational and in empirical neuroscience. However, new problems arise from the new concepts. The problem of binding the distributed parts into a uniform percept can be "solved" by introducing synchronization of the member neurons.
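The binding-by-synchronization idea just mentioned can be illustrated with the textbook two-oscillator Kuramoto model (an illustrative sketch only; the frequencies, coupling constant, and integration parameters are arbitrary choices, not tied to any specific model in the literature): two phase oscillators with slightly different natural frequencies phase-lock once the coupling exceeds their frequency mismatch.

```python
import numpy as np

# Two Kuramoto phase oscillators with slightly different natural
# frequencies. With coupling K large enough relative to the frequency
# mismatch they phase-lock (synchronize); with K = 0 their phase
# difference drifts freely.
def phase_diff_after(K, steps=20000, dt=0.001):
    w1, w2 = 10.0, 10.5          # natural frequencies (rad/s)
    th1, th2 = 0.0, 1.0          # initial phases
    for _ in range(steps):       # forward-Euler integration
        d1 = w1 + K * np.sin(th2 - th1)
        d2 = w2 + K * np.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    d = (th2 - th1) % (2 * np.pi)
    return min(d, 2 * np.pi - d)  # phase difference wrapped to [0, pi]

locked = phase_diff_after(K=2.0)    # strong coupling: small, fixed lag
drifting = phase_diff_after(K=0.0)  # no coupling: difference drifts
```

With K = 2.0 the oscillators settle into a constant phase lag near arcsin(0.25/2) ≈ 0.125 rad, i.e. they fire "together"; with K = 0 the phase relation is arbitrary. Synchrony thus provides a tag that could, in principle, mark which neurons belong to the same ensemble.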
A deeper (epistemological) problem, however, is created by recurrent architectures within ensembles generating an internal dynamics in the network. The cortical response to an environmental stimulus is no longer dominated by stimulus properties themselves, but to a considerable degree by the internal state of the network. Thus, a clear and stable reference between a representational state (e.g. in a neuron, a Hebbian ensemble, an activation state, etc.) and the environmental state becomes questionable. Previously learned experiences and expectancies might have an impact on the neural activity which is as strong as the stimulus itself. Since these internally stored experiences are constantly changing, the notion of (fixed) representations is challenged. At this point, system theory and constructivism, both investigating the interaction between environment and organism at an abstract level, come into play and turn out to provide helpful epistemological concepts. The goal of this conference is to discuss these phenomena and their implications for the understanding of representation, semantics, language, cognitive science, and artificial life. Unlike many conferences in this field, the focus is on interdisciplinary cooperation and on conceptual and epistemological questions, rather than on technical details. We are trying to achieve this by giving more room to discussion and interaction between the participants (e.g., invited comments on papers, distribution of papers to the participants before the conference, etc.). In keeping with the interdisciplinary character of cognitive science, we welcome papers/talks from the fields of artificial life, empirical, cognitive, and computational neuroscience, philosophy (of science), epistemology, anthropology, computer science, psychology, and linguistics. T o p i c s ___________________________________________________________________________ The conference is centered around but not restricted to the following topics: 1.
Representation - epistemological concepts and findings from (computational) neuroscience, cognitive science (recurrent neural architectures, top-down processing, etc.), and philosophy; 2. Alternatives to representation - applying constructivism to cognitive systems; 3. Modeling language, communication, and semantics as a dynamical, evolutionary and/or adaptive process; 4. Representation and cognition in artificial life; 5. What is the role of simulation in understanding cognition? I n v i t e d S p e a k e r s ___________________________________________________________________________ Besides submitted papers, the conference will also feature plenary talks by invited speakers who are leaders in their fields. The following is a list of invited speakers in alphabetical order: o Georg Dorffner, Univ. of Vienna (A) o Ernst von Glasersfeld, Univ. of Amherst, MA (USA) o Stevan Harnad, Univ. of Southampton (GB) o Rolf Pfeifer, Univ. of Zurich (CH) o Wolf Singer, Max Planck Institut fuer Hirnforschung, Frankfurt (D) o Sverre Sjoelander, Linkoeping University (S) P a p e r S u b m i s s i o n s ___________________________________________________________________________ We invite submissions of scientific papers to any of the 5 topics listed above. The papers will be reviewed by the Scientific Committee and accepted according to their scientific content, originality, quality of presentation, and relatedness to the conference topic. Please keep to the following guidelines: Hardcopy submission only, 6-9 pages A4 or USLetter single sided in Times Roman 10-12pt (or equivalent). Please send 4 copies to the organizing committee, see address below. In a first step we are planning to publish the proceedings as a Technical Report of the Austrian Society for Cognitive Science. In a second step, after the papers have been rewritten and a second round of review has taken place, a major publisher will be approached to publish the best papers in an edited volume.
For the final versions of the accepted papers, electronic submissions are preferred in one of the following formats: Word, FrameMaker, or ASCII. Detailed formatting information will be given upon notification of acceptance. Submission due January 7, 1997 Notification of acceptance February 28 R e g i s t r a t i o n ___________________________________________________________________________ To register please fill out the registration form at the bottom of this CFP and send it by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M.Peschl), or by o Mail to Markus Peschl, Dept. for Philosophy of Science (address below) Registration Fee (includes admission to talks, presentations, and proceedings): before April 1st, 1997: Member * 1000 ATS (about 90 US$) Non-Member 1500 ATS (about 135 US$) Student Member ** 400 ATS (about 36 US$) Student Non-Member 1000 ATS (about 90 US$) after April 1st, 1997: Member * 1300 ATS (about 118 US$) Non-Member 1800 ATS (about 163 US$) Student Member ** 500 ATS (about 45 US$) Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID C o n f e r e n c e S i t e a n d A c c o m m o d a t i o n ___________________________________________________________________________ The conference takes place in a small, beautiful baroque castle in the suburbs of Vienna; the address is: Schloss Neuwaldegg Waldegghofg. 5 A-1170 Wien Austria Tel: +43-1-485-3605 It is surrounded by a beautiful forest and a good (international and Viennese gastronomic) infrastructure. By tram it takes only 20 minutes to reach the center of Vienna. (Limited) Accommodation is provided by the castle (about 41 US$ per night (single), 30 US$ per night, per person (double) including breakfast). Please use the telephone number above to make reservations.
You can find more information about Vienna and accommodation at the Vienna Tourist Board or at the Intropa Travel agent Tel: +43-1-5151-242. Further information will be available soon. D e s t i n a t i o n V i e n n a ? ___________________________________________________________________________ Vienna, Austria, can be reached internationally by plane or train. The Vienna Schwechat airport is located about 16 km from the city center. From the airport, the city air-terminal can be reached by bus (ATS 60.- per person) or taxi (about ATS 400). Rail passengers arrive at one of the main stations, which are located almost in the city center. From the air-terminal and the railway stations the congress site and hotels can be reached easily by underground (U-Bahn), tramway, or bus. A detailed description will be given to the participants. In May the climate is mild in Vienna. It is the time when spring is at its height and everything is blooming. The weather is warm with occasional (rare) showers. The temperature is about 18 to 24 degrees Celsius. More information about Vienna and Austria on the web: Welcome to Vienna Scene Vienna City Wiener Festwochen - Vienna Festival Public Transport in Vienna (subway) Welcome to Austria General information about Austria Austria Annotated S c i e n t i f i c C o m m i t t e e ___________________________________________________________________________ R. Born Univ. of Linz (A) G. Dorffner Univ. of Vienna (A) E. v. Glasersfeld Univ. of Amherst, MA (USA) S. Harnad Univ. of Southampton (GB) M. Peschl Univ. of Vienna (A) A. Riegler Univ. of Zurich (CH) H. Risku Univ. of Skovde (S) S. Sjoelander Linkoeping University (S) A. v. Stein Neuroscience Institute, La Jolla (USA) O r g a n i z i n g C o m m i t t e e ___________________________________________________________________________ M. Peschl Univ. of Vienna (A) A. Riegler Univ.
of Zurich (CH) T i m e t a b l e ___________________________________________________________________________ Submission due January 7, 1997 Notification of acceptance February 28 Early registration due April 1 Final papers due April 14 Conference date May 13-16, 1997 S p o n s o r i n g O r g a n i z a t i o n s ___________________________________________________________________________ o Christian Doppler Laboratory for Expert Systems (Vienna University of Technology) o Oesterreichische Forschungsgemeinschaft o Austrian Federal Ministry of Science, Transport and the Arts o City of Vienna A d d i t i o n a l I n f o r m a t i o n ___________________________________________________________________________ For further information on the conference contact: Markus Peschl Dept. for Philosophy of Science University of Vienna Sensengasse 8/10 A-1090 Wien Austria Tel: +43-1-402-7601/41 Fax: +43-1-408-8838 Email: franz-markus.peschl at univie.ac.at General information about the Austrian Society for Cognitive Science can be found on the Society webpage or by contacting Alexander Riegler AILab, Dept. of Computer Science University of Zurich Winterthurerstr. 190 CH-8057 Zurich Switzerland Email: riegler at ifi.unizh.ch R e g i s t r a t i o n f o r m ___________________________________________________________________________ I will participate in the Workshop "New Trends in Cognitive Science (NTCS'97)" Full Name ........................................................................ Full Postal Address: ........................................................................ ........................................................................ ........................................................................ Telephone Number (Voice): Fax: ..................................... .................................. Email address: ........................................................................ 
[ ] I intend to submit a paper Payment in ATS (= Austrian Schillings; 1 US$ is currently about 11 ATS). This fee includes admission to talks, presentations, and proceedings: Before April 1st, 1997: [ ] Member * 1000 ATS (about 90 US$) [ ] Non-Member 1500 ATS (about 135 US$) [ ] Student Member ** 400 ATS (about 36 US$) [ ] Student Non-Member 1000 ATS (about 90 US$) After April 1st, 1997: [ ] Member * 1300 ATS (about 118 US$) [ ] Non-Member 1800 ATS (about 163 US$) [ ] Student Member ** 500 ATS (about 45 US$) [ ] Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID Total: .................... ATS [ ] Visa [ ] Master-/Eurocard Name of Cardholder ........................................ Credit Card Number ........................................ Expiration Date ................. Date: ................ Signature: ........................................ Please send this form by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M.Peschl), or by o Mail to Markus Peschl, Dept.for Philosophy of Science, Univ. of Vienna, Sensengasse 8/10, A-1090 Wien, Austria ___________________________________________________________________________ AI Lab * Department of Computer Science * University of Zurich Winterthurerstr. 190 * CH-8057 Zurich, Switzerland * riegler at ifi.unizh.ch  From lba at inesc.pt Thu Sep 19 12:09:47 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Thu, 19 Sep 1996 17:09:47 +0100 Subject: Workshop on spatiotemporal models Message-ID: <3241704B.61133CF4@inesc.pt> Below is the registration information for the Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems. Please distribute and/or post it as widely as you please. For further information, as well as for obtaining a postscript version of the registration form (e.g. 
if you want to post it), please consult: Workshop's web page: http://aleph.inesc.pt/smbas/ Mirror: http://www.cnel.ufl.edu/workshop.html (the mirror sometimes takes a few days to get updated) Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems SMBAS November 6-8 1996 Sintra, Portugal REGISTRATION INFORMATION The SMBAS workshop will take place in Sintra, Portugal, beginning on November 6 afternoon and extending through November 8 afternoon. Three invited speakers are confirmed (Drs. Walter Freeman, Kumpati Narendra and Scott Kelso), and a fourth invited speaker may still be announced. Contributed papers come from both the biological and artificial systems areas, in approximately equal numbers. The list of accepted papers is given at the end of this message. Registrations are open until October 25. Participants who won't present papers are also welcome. The total number of participants is limited to 50. Among non-contributors, registrations will be accepted on a first-come, first-served basis. The registration prices are: Normal registration 50000 Portuguese escudos Student registration 30000 Portuguese escudos As an indication only, the current exchange rate is about 150 Portuguese escudos per US dollar. The registration price includes participation in the workshop, proceedings book, lunches on Thursday and Friday and coffee breaks. The price also includes 17% VAT. The registration form is included below. Only registrations accompanied by payment are accepted. Students should also include a letter from their supervisor or university representative, written on the university letterhead, confirming their student status.
FORMS OF PAYMENT CHECK - The check should be in Portuguese escudos, made payable to "INESC - SMBAS Workshop". Enclose the check with the registration form. BANK TRANSFER - Make the transfer in Portuguese escudos, to Banco Nacional Ultramarino Arco do Cego Account no. 001399550210009683498 Account holder: INESC Reference: SMBAS Workshop Enclose a copy of the bank transfer document with the registration form. Credit card payments cannot be accepted. FUNDING A grant of 100000 Portuguese escudos from Fundacao Oriente (Portugal) is available to be assigned to a participant from East Asia. Preference will be given to students. Other funding sources, especially the large grant from the Office of Naval Research (USA), are already reflected in the registration prices, which would otherwise be about 80000 escudos higher. LODGING The workshop will be held at the Hotel Tivoli Sintra. This hotel will have special room rates for workshop participants: Single room: 9700 Portuguese escudos per night Double room: 10900 Portuguese escudos per night (for two persons) The Tivoli hotel accepts individual registrations for double rooms, at half the price. In this case the participants will be paired by the hotel. The participants should reserve rooms directly with the hotel. Please mention that you are reserving for this workshop. Hotel Tivoli Sintra Praca da Republica 2710 Sintra PORTUGAL Tel: +351-1-9233505 Fax: +351-1-9231572 A number of rooms have been allocated in advance at the Tivoli. Should the hotel become full, the Hotel Central is very close and has only slightly higher room rates: Hotel Central Largo Rainha D.
Amelia, 35 2710 Sintra PORTUGAL Tel.: +351-1-9230963 (this hotel has no fax) ==================== Registration form (cut here) ======================= Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems November 6-8 1996, Sintra, Portugal Name____________________________________________________________________ _______________________________________________________ Sex____________ Address_________________________________________________________________ City___________________________________ State__________________________ Postal/zip code_________________ Country_______________________________ Phone___________________________ Fax____________________________ E-mail_________________________ Type of registration (check as appropriate): __ Normal (50000 PTE) __ Student (30000 PTE) - enclose a letter from the supervisor or university representative, on university letterhead, confirming the student status Enclosed payment: Amount_________________ in Portuguese escudos (only the Portuguese currency is accepted) Form of payment: __ Check (enclose check with this form) __ Bank transfer (enclose a copy of the bank document) Checks should be made payable to "INESC - SMBAS Workshop" Bank transfers should be made to Banco Nacional Ultramarino Arco do Cego Account no. 001399550210009683498 Account holder: INESC Reference: SMBAS Workshop REGISTRATION DEADLINE: October 25, 1996 Please mail this form, with the payment document to INESC SMBAS Workshop c/o Mrs. Ilda Ribeiro R. Alves Redol, 9 1000 Lisboa Portugal Tel: +351-1-3100313 Fax: +351-1-3145843 Registrations paid by bank transfer can also be sent by fax. In this case fax both the registration form and the bank document, and send the fax to Mrs. Ilda Ribeiro, fax no. +351-1-3145843. =================== (end of registration form) ======================= List of accepted papers METHODS OF TOPOGRAPHICAL TIME-FREQUENCY ANALYSIS OF EEG IN COARSE AND FINE TIME SCALE K. Blinowska, P. Durka, M.
Kaminski - Warsaw University, POLAND W. Szelenberger - Warsaw Medical Academy, POLAND SPATIAL EFFECTS AND COMPETITIVE COEXISTENCE T. Caraco - State University of New York, USA W. Maniatty and B. Szymanski - Rensselaer Polytechnic Institute, USA RST: A SPATIOTEMPORAL NEURAL NETWORK J.-C. Chappelier and A. Grumbach - ENST, FRANCE TRANSIENT SYNAPTIC REDUNDANCY DURING SYNAPTOGENESIS DESCRIBED AS AN ISOSTATIC RANDOM STACKING OF NONPENETRATING HARD SPHERES F. Eddi, G. Waysand - Denis Diderot and P. & M. Curie Universities, FRANCE J. Mariani - P.& M. Curie University, FRANCE MODELLING THE PRENATAL DEVELOPMENT OF THE LATERAL GENICULATE NUCLEUS S. Eglen - University of Sussex, UK A SELF-ORGANIZING TEMPORAL PATTERN RECOGNIZER WITH APPLICATION TO ROBOT LANDMARK RECOGNITION N. Euliano and J. Principe - University of Florida, USA. P. Kulzer - University of Aveiro, PORTUGAL ON THE APPLICATION OF COMPETITIVE NEURAL NETWORKS TO TIME-VARYING CLUSTERING PROBLEMS A. Gonzalez, M. Graña, A. D'Anjou - Universidad del Pais Vasco, ESPAÑA M. Cottrell - Universit\'{e} Paris I, FRANCE LOCALISATION AND GOAL-DIRECTED BEHAVIOUR FOR A MOBILE ROBOT USING PLACE CELLS K. Harris and M. Recce - University College London, UK BRIDGING LONG TIME LAGS BY WEIGHT GUESSING AND LONG SHORT TERM MEMORY S. Hochreiter - Technische Universitaet Muenchen, GERMANY J. Schmidhuber - IDSIA, SWITZERLAND COHERENT PHENOMENA IN THE DYNAMICS OF INTEGRATE AND FIRE NEURAL FIELDS D. Horn and I. Opher - Tel Aviv University, ISRAEL SPATIOTEMPORAL TRANSITION TO EPILEPTIC SEIZURES: A NONLINEAR DYNAMICAL ANALYSIS OF SCALP AND INTRACRANIAL EEG RECORDINGS L. Iasemidis, S. Roper, J. Sackellares - University of Florida and V.A. Medical Center, USA J. Principe, J. Czaplewski, R. Gilmore - University of Florida, USA A CONTINUUM MODEL OF THE MAMMALIAN ALPHA RHYTHM D. Liley - Swinburne University of Technology, AUSTRALIA ANALOG COMPUTATIONS WITH TEMPORAL CODING IN NETWORKS OF SPIKING NEURONS W.
Maass - Technische Universitaet Graz, AUSTRIA SELECTIVE LINEAR PREDICTION FOR RHYTHMIC ACTIVITY MODELLING N. Martins - Technical University of Lisbon and Polytechnic Institute of Set\'{u}bal, PORTUGAL A. Rosa - Technical University of Lisbon, PORTUGAL CEREBELLAR LEARNING OF DYNAMIC PATTERNS IN THE CONTROL OF MOVEMENT P. Morasso, V. Sanguineti and F. Frisone - University of Genova, ITALY A CHAOTIC OSCILLATOR CELL IN SUBTHRESHOLD CMOS FOR SPATIO-TEMPORAL SIMULATION J. Neeley and J. Harris - University of Florida, USA A TEMPORAL MODEL FOR STORING SPATIOTEMPORAL PATTERN MEMORY IN SHUNTING COOPERATIVE COMPETITIVE NETWORK C. Nehme - Brazilian Navy Research Institute, BRAZIL L. Carvalho and S. Mendes - Federal University of Rio de Janeiro, BRAZIL SPATIO-TEMPORAL SPIKE PATTERN DISCRIMINATION BY NETWORKS OF SILICON NEURONS WITH ARTIFICIAL DENDRITIC TREES D. Northmore and J. Elias - University of Delaware, USA SYNCHRONIZATION IN NEURAL OSCILLATORS THROUGH LOCAL DELAYED INHIBITION G. Renversez - Centre de Physique Th\'{e}orique Centre National de la Recherche Scientifique, FRANCE LEARNING REACTIVE BEHAVIOR FOR AUTONOMOUS ROBOT USING CLASSIFIER SYSTEMS A. Sanchis, J. Molina and P. Isasi - Universidad Carlos III de Madrid, SPAIN MODELING OF CELLULAR AND NETWORK NEURAL MECHANISMS FOR RESPIRATORY PATTERN GENERATION J. Schwaber, I. Rybak - DuPont Central Research, USA J. Paton - University of Bristol, UK FASTER TRAINING OF RECURRENT NETWORKS G. Silva, J. Amaral, T. Langlois and L. Almeida - Instituto de Engenharia de Sistemas e Computadores, PORTUGAL HUMAN PERCEPTION OF SUBTHRESHOLD, NOISE-ENHANCED VISUAL IMAGES E. Simonotto and M. Riani - Universit\'{a} di Genova, ITALY J. Twitti and F. Moss - University of Missouri at St. Louis, USA NEUROCONTROL OF INVERSE DYNAMICS IN FUNCTIONAL ELECTRICAL STIMULATION L. Spaanenburg, J. Nijhuis and A. Ypma - Rijksuniversiteit Groningen, THE NETHERLANDS J. 
Krijnen - Academic Hospital Groningen, THE NETHERLANDS PROGRAMMED CELL DEATH DURING EARLY DEVELOPMENT OF THE NERVOUS SYSTEM, MODELLED BY PRUNING IN A NEURAL NETWORK J. Vos, J. van Heijst and S. Greuters - University of Groningen, THE NETHERLANDS  From Randy_Ringen at hmc.edu Thu Sep 19 14:10:31 1996 From: Randy_Ringen at hmc.edu (Randy Ringen) Date: Thu, 19 Sep 1996 10:10:31 -0800 Subject: Sejnowski Awarded Prestigious Wright Prize Message-ID: HARVEY MUDD COLLEGE NEWS RELEASE Office of College Relations, Claremont, California 91711-5990 FOR IMMEDIATE RELEASE CONTACT: Randy Ringen or Leslie Baer SEPTEMBER 19, 1996 (909) 624-4146 Ref #: 95/96-47 HARVEY MUDD COLLEGE HONORS BRAIN RESEARCHER TERRENCE J. SEJNOWSKI AS 12TH RECIPIENT OF THE WRIGHT PRIZE October 25 lecture is free and open to the public CLAREMONT, Calif.-Harvey Mudd College is pleased to announce the winner of the 1996 Wright Prize for interdisciplinary study in science and engineering: Terrence J. Sejnowski, a computational neurobiologist, who is an investigator with the Howard Hughes Medical Institute, a professor at the at the Salk Institute of Biological Studies in La Jolla, Calif., and a professor of biology and physics at UC San Diego. Sejnowski will be awarded $15,000 on Friday, 7 p.m., October 25, in Galileo Hall on the Harvey Mudd College campus, when he will give his distinguished lecture, "The Century of the Brain." Admission is free and the event is open to the public. Sejnowski, the 12th Wright awardee, the joins the likes of physicist Freeman Dyson, physician-scientist Jonas Salk, and 1962 Nobel Prize-winning biologist Francis Crick, who were also honored. He is the first awardee selected under a new criteria, which seeks to honor up-and-coming, early-to-mid career researchers in multidisciplinary research in engineering or the sciences, rather than those already widely recognized for their accomplishments. 
Sejnowski seeks to understand "the computational resources of brains, from the biophysical to the systems levels," he said. His research focuses on how images seen through the eyes are represented in the brain, how memory is organized, and how vision is used to guide actions. As tools in his research, Sejnowski employs theoretical models of how the brain networks this information. He also studies the biophysics of the living brain.

As part of the award festivities, Sejnowski will spend two days at Harvey Mudd College, offering the October 25 public lecture as well as student seminars and informal exchanges with faculty on October 24. "The basic educational philosophy holds interdisciplinary study to be essential in furthering our understanding of science and engineering," said HMC President Henry E. Riggs. "The events surrounding the awarding of the prize give students and faculty alike an opportunity to exchange ideas with a young researcher whose work spans several fields," he said. "Beyond that," he added, "the lecture will present to the public the latest information on the exciting discoveries made concerning how the brain works."

Sejnowski graduated summa cum laude from Case Western Reserve University with a B.S. in physics. He received his M.S. and Ph.D. in physics from Princeton. After teaching at The Johns Hopkins University, he became a senior member and professor at the Salk Institute and a professor at UC San Diego. He has received numerous awards, including a Presidential Young Investigator Award from the National Science Foundation; was a Fairchild Distinguished Scholar at the California Institute of Technology; and has delivered numerous honorary lectures at the University of Wisconsin; Cambridge University; the Royal Institution, London; and the International Congress of Physiological Sciences. He has published more than 100 scientific articles and coauthored one book.
He is the editor-in-chief of "Neural Computation," published by the MIT Press, and serves on the editorial boards of 17 scientific journals.

Harvey Mudd College, one of The Claremont Colleges, is an undergraduate coeducational institution of engineering, science, and mathematics that also places strong emphasis on the humanities and social sciences. The college's aim is to graduate engineers and scientists sensitive to the impact of their work on society. Harvey Mudd College ranks among the nation's leading schools in the percentage of graduates who earn Ph.D. degrees. The college has been consistently ranked among the top engineering undergraduate specialty schools in the nation by U.S. News & World Report. In a recent study published in Change magazine, in which 212 colleges and universities were ranked, Harvey Mudd College is among an elite group of just 11 institutions classified as "High-High," rated outstanding for both their teaching and their research.

-end-

From nkasabov at commerce.otago.ac.nz Wed Sep 18 20:40:53 1996
From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov)
Date: Wed, 18 Sep 1996 12:40:53 -1200
Subject: A new text and research book from MIT Press
Message-ID: <497E9002FD9@jupiter.otago.ac.nz>

The MIT Press

Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering
by Nikola Kasabov

Neural networks and fuzzy systems are different approaches to introducing human-like reasoning into intelligent information systems. This text is the first to combine the study of these two subjects, their basics and their use, with symbolic AI methods and traditional methods of data analysis to build comprehensive artificial intelligence systems. In a clear and accessible style, Kasabov describes rule-based, fuzzy-logic, and connectionist techniques, and the combinations of them that lead to new techniques such as rule extraction, fuzzy neural networks, hybrid systems, connectionist AI systems, and chaotic neurons.
All these techniques are applied to a set of simple prototype problems, which makes comparisons possible. A particularly strong feature of the text is that it is replete with applications in engineering, business, and finance. AI problems that cover most of the application-oriented research in the field (pattern recognition, speech recognition, image processing, classification, planning, optimization, prediction, control, decision making, game simulation, chaos analysis) are discussed and illustrated with concrete examples.

Intended as a text for advanced undergraduate and postgraduate students and as a reference for researchers in the field of knowledge engineering, "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering" has chapters structured for various levels of teaching and includes original work by the author along with the classical material. Data sets for the examples in the book, as well as an integrated software environment that can be used to solve the problems and do the exercises at the end of each chapter, have been made available on the World Wide Web.

Nikola Kasabov is Associate Professor in the Department of Computer and Information Science, University of Otago, New Zealand.

A Bradford Book
Computational Intelligence series
September 1996, ISBN 0-262-11212-4, 544 pp., 282 illus.
$60.00 (cloth)

---------------- Please cut here ----------------

Please send me a brochure with ordering information and course adoption terms for "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering" by Nikola Kasabov

Name __________________________________
City _______________________________
University __________________________
State/Country ___________________
Zip/Code __________________

Return to: Texts Manager, The MIT Press, 55 Hayward Street, Cambridge, MA, 02142, USA (e-mail: hardwick at mit.edu), or order the book directly through the MIT Press WWW home page: http://www-mitpress.mit.edu:80/mitp/recent-books/comp/kasfh.html.

---------------------------------------------------------------------------

From pe_keller at ccmail.pnl.gov Fri Sep 20 22:28:00 1996
From: pe_keller at ccmail.pnl.gov (pe_keller@ccmail.pnl.gov)
Date: Fri, 20 Sep 1996 19:28 -0700 (PDT)
Subject: Career Opportunity for Senior Research Scientist
Message-ID: <01I9PIR7VVIQ8Y4WTQ@pnl.gov>

Senior Research Scientist
Cognitive Controls

Battelle, a leading provider of technology solutions, has an immediate need for a Senior Scientist to join its cognitive controls initiative. The position will provide technical leadership for a multi-year corporate project applying adaptive/cognitive control theory to applications in emerging technology areas. The position requires an M.S./Ph.D. in Computer and Information Science, Electrical Engineering, or a related field with a specialization in adaptive or cognitive control theory (artificial neural networks, fuzzy logic, genetic algorithms) and statistical methods, and at least 5 years' experience in the application of such control methods to engineering problems. Strong leadership capabilities and oral, written, and interpersonal communication skills are essential to this highly interactive position.
Applicant selected will be subject to a security investigation and must meet eligibility requirements for access to classified information. Battelle offers competitive salaries, comprehensive benefits, and opportunities for professional development. Qualified candidates are invited to send their resumes to: Employment Department J-35, Battelle, 505 King Avenue, Columbus, OH 43201-2693 or fax them to 614-424-4643. An Equal Opportunity/Affirmative Action Employer M/F/D/V --------------------------------------------------------------------------- This opening is in our Columbus, Ohio, USA facility. For more information about Battelle, try http://www.battelle.org. For more information about our neural network activity at our Pacific Northwest National Laboratory (in Richland, Washington, USA), try http://www.emsl.pnl.gov:2080/docs/cie/neural/ ---------------------------------------------------------------------------  From James_Morgan at brown.edu Mon Sep 23 15:01:45 1996 From: James_Morgan at brown.edu (Jim Morgan) Date: Mon, 23 Sep 1996 15:01:45 -0400 Subject: Position Announcement: Language & Cognitive Processing, Brown University Message-ID: <199609231902.PAA02867@golden.brown.edu> LANGUAGE AND COGNITIVE PROCESSING, Brown University: The Department of Cognitive and Linguistic Sciences invites applications for a three year renewable tenure-track position at the Assistant Professor level beginning July 1, 1997. Areas of interest include but are not limited to phonology or phonological processing, syntax or sentence processing, and lexical access or lexical semantics, using experimental, formal, developmental, neurological, or computational methods. Expertise in two or more areas and/or application of multiple paradigms is preferred. Applicants should have a strong research program and a broad teaching ability in cognitive science and/or linguistics at both the undergraduate and graduate levels. 
Interest in contributing curricular innovations in keeping with Brown's university-college tradition is desirable. Applicants should have completed all Ph.D. requirements no later than July 1, 1997. Women and minorities are especially encouraged to apply. Send a curriculum vitae, three letters of reference, reprints and preprints of publications, and a one-page statement of research interests to Dr. Sheila E. Blumstein, Chair, Search Committee, Department of Cognitive and Linguistic Sciences, Brown University, Box 1978, Providence, RI 02912 by January 1, 1997. Brown University is an Equal Opportunity/Affirmative Action Employer.

From terry at salk.edu Tue Sep 24 14:17:50 1996
From: terry at salk.edu (Terry Sejnowski)
Date: Tue, 24 Sep 1996 11:17:50 -0700 (PDT)
Subject: Neural Computation 8:7
Message-ID: <199609241817.LAA06146@helmholtz.salk.edu>

Neural Computation - Contents Volume 8, Number 7 - October 1, 1996

The Lack of a Priori Distinctions Between Learning Algorithms
David H. Wolpert

The Existence of a Priori Distinctions Between Learning Algorithms
David H. Wolpert

Note

No Free Lunch for Cross Validation
Huaiyu Zhu and Richard Rohwer

Letter

A Self-Organizing Model of "Color Blob" Formation
Harry G. Barrow, Alistair J. Bray and Julian M. L. Budd

Functional Consequences of an Integration of Motion and Stereopsis in Area MT of Monkey Extrastriate Visual Cortex
Markus Lappe

Learning Perceptually Salient Visual Parameters Using Spatiotemporal Smoothness Constraints
James V. Stone

Using Visual Latencies to Improve Image Segmentation
Ralf Opara and Florentin Worgotter

Learning and Generalization in Cascade Network Architectures
Enno Littmann and Helge Ritter

Hybrid Modeling, HMM/NN Architectures, and Protein Applications
Pierre Baldi and Yves Chauvin

-----

ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html

SUBSCRIPTIONS - 1996 - Volume 8 - 8 issues
______ $50 Student and Retired
______ $78 Individual
______ $220 Institution
Add $28 for postage and handling outside USA (+7% GST for Canada).
Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779

-----

From pfbaldi at cco.caltech.edu Wed Sep 25 08:40:58 1996
From: pfbaldi at cco.caltech.edu (Pierre Baldi)
Date: Wed, 25 Sep 1996 05:40:58 -0700 (PDT)
Subject: TR available: Bayesian Methods and Compartmental Modeling
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/baldi.comp.tar.Z

The file baldi.comp.tar.Z is now available for copying from the Neuroprose repository:

ON THE USE OF BAYESIAN METHODS FOR EVALUATING COMPARTMENTAL NEURAL MODELS
(40 pages = 35 pages + 5 figures)
(one figure is in color but should print OK in black and white)

P. Baldi, M. C. Vanier, and J. M. Bower
Department of Computation and Neural Systems
Caltech

ABSTRACT: In this TR, we provide a tutorial on Bayesian methods for neurobiologists, as well as an application of the methods to compartmental modeling. We first derive prior and likelihood functions for compartmental neural models and for spike trains. We then apply the full Bayesian inference machinery to parameter estimation and model comparison in the case of simple classes of compartmental models with three and four conductances.
We also perform class comparison by approximating integrals over the entire parameter space. Advantages and drawbacks are discussed. Postscript and other problems reported to us have been corrected.

From sverker.sikstrom at psy.umu.se Wed Sep 25 11:44:16 1996
From: sverker.sikstrom at psy.umu.se (Sverker Sikstrom)
Date: Wed, 25 Sep 1996 16:44:16 +0100
Subject: PhD THESIS AVAILABLE: A Connectionist Model for Episodic Tests
Message-ID:

THESIS AVAILABLE

My (Sverker Sikström) thesis "TECO: A Connectionist Model for Dependency in Successive Episodic Tests" is now available for anonymous download in PostScript format at the following URL: http://www.psy.umu.se/personal/thesis.ps.gz The file is approximately 1 MB and unfolds to 3 MB. The thesis includes 180 pages. You may also download it from my homepage: http://www.psy.umu.se/personal/Sverker.html

The ABSTRACT is as follows:

Sikström, P. S. TECO: A Connectionist Model for Dependency in Successive Episodic Tests. Doctoral dissertation, Department of Psychology, Umeå University, S-90187 Umeå, Sweden, 1996; ISBN 91-7191-155-3

Data from a large number of experimental conditions in published studies have shown that recognition and cued recall exhibit a moderate dependency described by the Tulving-Wiseman function. Exceptions from this lawfulness, in the form of higher dependency, are found when the recall test lacks effective cues (i.e., free recall exceptions) or when the recognition test is cued (i.e., cued recognition exceptions). In Study I, the TECO (Target, Event, Cue & Object) theory for dependency in successive tests is proposed, which accounts for the dependence between recognition and cued recall through the fact that both tests are cued with the instruction to retrieve from the learning episode (i.e., the event). Independence is accounted for by differences in cueing; the recall test is cued by a contextual cue, whereas the recognition test is cued by the target.
A quantitative degree of dependence, measured by ?, is predicted to be one-third by counting the number of shared cues divided by the total number of cues. Free recall exceptions are predicted to reveal a dependency of one-half because the recall test lacks effective cues. Cued recognition exceptions are predicted to reveal a dependency of two-thirds because both tests are cued with the cue word. A function is derived to predict the conditional probabilities, and the results show a reasonable fit with the predictions. In Study II, the predictions of TECO for successive tests of cued recall and cued recognition; free recall and cued recognition; recognition, free recall and cued recall; and recognition and cued recognition are tested, and a database of successive episodic tests is presented. In Study III, the lawfulness of recognition failure is discussed. Hintzman claimed that the conditional probability of recognition given recall is constrained by the P(Rn)/P(Rc) boundary and that the phenomenon of recognition failure is thus a mathematical artefact. It is argued that this boundary is due to a psychological process and that it carries important information regarding the underlying system. Furthermore, half of the deviation from the predictive function of recognition given cued recall is shown to arise from lack of statistical power. In Study IV, TECO is simulated in a neural network of the Hopfield type. A theoretical analysis is proposed and nine sets of simulations are conducted. The results show that the theory can be simulated with reasonable agreement to empirical data.

Keywords: episodic memory, recognition failure, successive tests, lawfulness, connectionism.
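The cue-counting arithmetic behind these predictions can be sketched in a few lines. This is a hedged illustration, not code from the thesis; the particular cue counts assigned to each test pairing are my reading of the abstract's three cases:

```python
# Sketch of TECO's prediction (assumption: dependency = shared cues /
# total cues, with cue counts inferred from the abstract's three cases).
from fractions import Fraction

def predicted_dependency(shared_cues: int, total_cues: int) -> Fraction:
    """TECO prediction: number of shared cues over total number of cues."""
    return Fraction(shared_cues, total_cues)

# Standard recognition followed by cued recall: one shared (event) cue
# out of three cues in total -> one-third.
assert predicted_dependency(1, 3) == Fraction(1, 3)

# Free-recall exceptions: the recall test lacks an effective cue, leaving
# one shared cue out of two -> one-half.
assert predicted_dependency(1, 2) == Fraction(1, 2)

# Cued-recognition exceptions: the cue word is shared as well as the
# event cue, two shared out of three -> two-thirds.
assert predicted_dependency(2, 3) == Fraction(2, 3)
```

The fractions match the one-third, one-half, and two-thirds figures quoted in the abstract; only the decomposition into explicit cue counts is assumed here.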
Feel free to contact me at the following address:

----------------------------------------------------
Sverker Sikström, Ph.D.
Department of Psychology, Umeå University, S-90187 Umeå, Sweden
Email: sverker.sikstrom at psy.umu.se
Tel: ++46-90-166759, Fax: ++46-90-166695
Homepage: http://www.psy.umu.se/personal/Sverker.html
----------------------------------------------------

From ken at phy.ucsf.edu Wed Sep 25 17:52:47 1996
From: ken at phy.ucsf.edu (Ken Miller)
Date: Wed, 25 Sep 1996 14:52:47 -0700
Subject: Hertz Fellowships for graduate studies
Message-ID: <9609252152.AA09350@coltrane.ucsf.edu>

[ Moderator's note: Carnegie Mellon, Stanford, and MIT are also on the list of approved schools for Hertz fellowships. -- DST]

I wanted to bring to the attention of prospective and current Ph.D. students something I just ran across: the Fannie and John Hertz Foundation fellowship for studies in the applied physical sciences. They define the applied physical sciences to explicitly include computational neuroscience, artificial intelligence, and robotics. The deadline is quite soon -- Oct. 18. It's a generous fellowship -- 5 years, with a stipend of $20K per year. Info is at http://www.hertzfndn.org Please don't ask me for more info about this fellowship -- go to the foundation and/or its web page. The fellowships are tenable only at a short list of eligible schools, but applicants can also include in their application the desire that other schools be added to that list. In particular, three of the existing schools with Sloan Centers for Theoretical Neurobiology -- Caltech, UCSD, and NYU -- are on the list, but the other two -- UCSF and Brandeis -- are not. As a member of the faculty at UCSF, I'd certainly like to encourage applicants in computational neuroscience to include UCSF and/or Brandeis on your list of desired schools and to check us out in months to come.
More info on UCSF can be found at:
Neuroscience Program: http://www.neuroscience.ucsf.edu/neuroscience/
Sloan Center: http://www.sloan.ucsf.edu/sloan/
(links to Brandeis and the other Sloan centers can be found on our Sloan page).

Ken

Kenneth D. Miller
Dept. of Physiology, UCSF
513 Parnassus, San Francisco, CA 94143-0444
telephone: (415) 476-8217, fax: (415) 476-4929
internet: ken at phy.ucsf.edu
www: http://www.keck.ucsf.edu/~ken

From jordan at psyche.mit.edu Wed Sep 25 03:15:03 1996
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Wed, 25 Sep 96 3:15:03 EDT
Subject: NIPS Conference Program
Message-ID: <9609250715.AA23535@psyche.mit.edu>

The NIPS*96 conference program is now available. It can be retrieved via anonymous ftp from: ftp://psyche.mit.edu/pub/NIPS96/nips96-program It will also be available soon from the NIPS*96 homepage. NIPS*96 begins on December 2 with a tutorial program and banquet. The NIPS*96 invited speakers are as follows:

MON DEC 2
---------
Computer graphics for film: Automatic versus manual techniques (Banquet talk)
E. Enderton, Industrial Light and Magic

TUE DEC 3
---------
The CONDENSATION algorithm - conditional density propagation and applications to visual tracking (Invited)
A. Blake, University of Oxford

Compositionality, MDL priors, and object recognition (Invited)
S. Geman, E. Bienenstock, Brown University

WED DEC 4
---------
Plasticity of dynamics as opposed to absolute strength of synapse (Invited)
H. Markram, Weizmann Institute

Transition between rate and temporal coding in neocortex as determined by synaptic depression (Invited)
M. Tsodyks, Weizmann Institute

THU DEC 5
---------
Wavelets, wavelet packets, and beyond: Applications of new adaptive signal representations (Invited)
D. Donoho, Stanford University and University of California, Berkeley

Michael Jordan
NIPS*96 Program Chair

From jordan at psyche.mit.edu Wed Sep 25 16:56:32 1996
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Wed, 25 Sep 96 16:56:32 EDT
Subject: NIPS program committee notes
Message-ID: <9609252056.AA05917@psyche.mit.edu>

Dear connectionists colleagues,

I enclose below some notes on this year's NIPS reviewing and decision process. These notes will hopefully be of interest not only to contributors to NIPS*96, but to anyone else who has an ongoing interest in the conference. Note also that there is a "feedback session with the NIPS board" scheduled for Wednesday, December 4th at the conference venue; this would be a good opportunity for public discussion of NIPS reviewing and decision policies. In my experience NIPS has worked hard to earn its role as a flagship conference serving a diverse technical community, particularly through its revolving program committees, and further public discussion of NIPS decision-making procedures can only help to improve the conference. The notes include lists of all of this year's area chairs and reviewers.

Mike Jordan
NIPS*96 program chair

-----------------------------------------------------------

The area chairs for NIPS*96 were as follows:

Algorithms and Architectures: Chris Bishop, Aston University; Steve Omohundro, NEC Research Institute; Rob Tibshirani, University of Toronto
Theory: Michael Kearns, AT&T Research; Sara Solla, AT&T Research
Vision: David Mumford, Harvard University
Control: Andrew Moore, Carnegie Mellon University
Applications: Anders Krogh, The Sanger Centre
Speech and Signals: Eric Wan, Oregon Graduate Institute
Neuroscience: Bill Bialek, NEC Research Institute
Artificial Intelligence/Cognitive Science: Stuart Russell, University of California, Berkeley
Implementations: Fernando Pineda, Johns Hopkins University

The area chairs were responsible for recruiting reviewers.
All told, 160 reviewers were recruited, from 17 countries. 104 reviewers were from institutions in the US, and 56 reviewers were from institutions outside the US. The breakdown of the submissions by area was as follows:

                       1995   1996
    ------------------------------
    Alg & Arch          133    173
    Theory               89     79
    Neuroscience         43     61
    Control & Nav        40     43
    Applications         36     42
    Vision               46     40
    Speech & Sig Proc    20     25
    Implementations      25     24
    AI & Cog Sci         30     22
    ------------------------------
    Total               462    509

Area chairs assigned papers to reviewers. For cases in which an area chair was an author of a paper, the program chair selected the reviewers. For cases in which the program chair was an author of a submission, the appropriate area chair selected the reviewers. Code letters were used for all such reviewers, and neither the area chairs nor the program chair knew (or know) who reviewed their papers. Each paper was reviewed by three reviewers. In most cases all three reviewers were from the same area, but some papers that were particularly interdisciplinary in flavor were reviewed by reviewers from different areas. After the reviews were received and processed, the program committee met at MIT in August to make decisions. A few comments on the way the meeting was run:

(1) It was agreed that the overriding goal of the program committee's decision process should be to select the best papers, i.e., those exhibiting the most significant thinking and the most thorough development of ideas. All other issues were considered secondary.

(2) To achieve (1), the program committee agreed that one of its principal roles was to help eliminate bias in the reviewing process. This took several forms:

(a) Close attention was paid to cases in which the reviewers disagreed among themselves. In such cases the area chair often read the paper him/herself to help come to a decision.
(b) The area chairs studied histograms of scores to help identify cases where reviewers seemed to be using different scales.

(c) The committee tried to identify reviewers who were not as strong or as devoted as others and tried to weight their reviews accordingly.

(3) It was agreed that authors who were members of the program committee would be held to higher standards than other authors. That is, if a paper by a program committee author was near a borderline (acceptance, spotlight, oral), it would be demoted. This was considered to be another form of bias minimization, given that the committee was aware that some reviewers might favor program committee members. Also, program committee members who were authors of a paper left the room when their paper was being discussed; they played no role in the decision-making process for their own papers.

(4) Other criteria that were utilized in the decision-making process included: junior status of authors (younger authors were favored), new-to-NIPS criteria (outsiders were favored), and novelty (new ideas were favored). These criteria also figured in decisions for oral presentations and spotlights, along with additional criteria that favored authors who had not had an oral presentation in recent years and favored presentations of general interest to the NIPS audience. All such criteria, however, were considered secondary, in that they were used to distinguish papers that were gauged to be of roughly equal quality by the reviewers. As stated above, the primary criterion was to select the best papers, and to give oral presentations to papers receiving the most laudatory reviews.

(5) Generally speaking, it turned out that the program committee decisions followed the reviewers' scores. A rough guess would be that 1 paper in 10 was moved up or down from where the reviewers' scores placed the paper.

(6) The entire program committee participated in the discussions of individual papers for all of the areas.
(7) The decision making was seldom easy. It was the overall sense of the program committee that the submissions were exceptionally strong this year. There were many papers near the borderline that were of NIPS quality but could not be accepted because of size constraints (the conference is limited in size by a number of factors, including the scheduling and the size of the proceedings volume). We hope that authors of these papers will strengthen them a notch and resubmit next year.

The process was as fair and as intellectually rigorous as the program committee could make it. It can of course stand improvement, however, and I would hope that people with ideas in this regard will attend the feedback session in Denver. One improvement that I personally think is worth considering, having now seen the reviewing process in such detail, is to allow reviewers to consult among themselves. In this model, reviewers exchange their reviews and discuss them before sending final reviews to the program chair. I review for other conferences where this is done, and I think that it has the substantial advantage of reducing cases where a reviewer just didn't understand something and thus gave a paper an unreasonably low score. Such is my opinion in any case. Perhaps this idea and other such ideas could be discussed in Denver.
Mike Jordan ------------------------------------------------------------------- Reviewers for NIPS*96: --------------------- Larry Abbott David Lowe Naoki Abe David Madigan Subutai Ahmad Marina Meila Ethem Alpaydin Bartlett Mel Chuck Anderson David Miller James Anderson Kenneth Miller Chris Atkeson Martin Moller Pierre Baldi Read Montague Naama Barkai Tony Movshon Etienne Barnard Klaus Mueller Andy Barto Alan Murray Francoise Beaufays Ian Nabney Sue Becker Jean-Pierre Nadal Yoshua Bengio Ken Nakayama Michael Biehl Ralph Neuneier Leon Bottou Mahesan Niranjan Herve Bourlard Peter Norvig Timothy Brown Klaus Obermayer Nader Bshouty Erkki Oja Joachim Buhmann Genevieve Orr Carmen Canavier Art Owen Claire Cardie Barak Pearlmutter Ted Carnevale Jing Peng Nestor Caticha Fernando Pereira Gert Cauwenberghs Pietro Perona David Cohn Carsten Peterson Greg Cooper Jay Pittman Corinna Cortes Tony Plate Gary Cottrell John Platt Marie Cottrell Jordan Pollack Bob Crites Alexandre Pouget Christian Darken Jose Principe Peter Dayan Adam Prugel-Bennett Virginia de Sa Anand Rangarajan Alain Destexhe Carl Rasmussen Thomas Dietterich Steve Renals Dawei Dong Barry Richmond Charles Elkan Peter Riegler Ralph Etienne-Cummings Brian Ripley Gary Flake David Rohwer Paolo Frasconi David Saad Bill Freeman Philip Sabes Yoav Freund Lawrence Saul Jerry Friedman Stefan Schaal Patrick Gallinari Jeff Schneider Stuart Geman Terrence Sejnowski Zoubin Ghahramani Robert Shapley Federico Girosi Patrice Simard Mirta Gordon Tai Sing Russ Greiner Yoram Singer Vijaykumar Gullapalli Satinder Singh Isabelle Guyon Padhraic Smyth Lars Hansen Bill Softky John Harris David Somers Michael Hasselmo Devika Subramanian Simon Haykin Richard Sutton David Heckerman Josh Tenenbaum John Hertz Michael Thielscher Andreas Herz Sebastian Thrun Tom Heskes Mike Titterington Geoffrey Hinton Geoffrey Towell Sean Holden Todd Troyer Don Hush Ah Chung Tsoi Nathan Intrator Michael Turmon Tommi Jaakkola Joachim Utans Marwan Jabri Benjamin 
VanRoy Jeff Jackson Kelvin Wagner Robbie Jacobs Raymond Watrous Chuanyi Ji Yair Weiss Ido Kanter Christopher Williams Bert Kappen Ronald Williams Dan Kersten Robert Williamson Ronny Kohavi David Willshaw Alan Lapedes Ole Winther John Lazzaro David Wolpert Todd Leen Lei Xu Zhaoping Li Alan Yuille Christiane Linster Tony Zador Richard Lippmann Steven Zucker Michael Littman

From levy at xws.com Thu Sep 26 15:36:09 1996
From: levy at xws.com (Kenneth L. Levy)
Date: Thu, 26 Sep 1996 12:36:09 -0700
Subject: Dissertation Available: The Transformation of Acoustic Information by Cochlear Nucleus Octopus Cells: A Modeling Study
Message-ID: <1.5.4.32.19960926193609.0069a4ec@mail.xws.com>

Hello,

My (Ken Levy) dissertation is available as a compressed PostScript file for DOS/Win and Unix (downloading directions below). The dissertation is entitled "The Transformation of Acoustic Information by Cochlear Nucleus Octopus Cells: A Modeling Study." The cochlear nucleus is the first nucleus of the mammalian auditory brainstem, and the octopus cell type is one of its principal cell types. Octopus cells respond only at the onset of a toneburst. The dissertation presents an analysis of a compartmental model using GENESIS that led to a description of the mechanism underlying the onset response of the octopus cell. It also includes the development, analysis, and verification of a novel, biologically plausible model, called the intrinsic membrane model (IMM), that produces accurate spike times 100 to 1000 times more efficiently than a compartmental model. Finally, the document covers the use of the IMM to demonstrate the enhancement of the encoding of the fundamental frequency of the vowel [i] in background noise by single-cell and ensemble models of octopus cells. Comments and suggestions are welcome and encouraged! Thank you for your time and consideration.
--Ken Downloading Information: ------------------------ Homepage URL => http://www.eas.asu.edu/~neurolab or Homepage URL => http://www.xws.com/levy/levypub.html or Anonymous FTP site => ftp.eas.asu.edu (login:anonymous passwd:your email) FTP directory => pub/neurolab Dissertation file => Diss.ps.Z or DissPS.exe IMM => IMMdemo.tar.Z or IMMdemo.exe =============================================================== Kenneth L. Levy, Ph.D. levy at xws.com Acoustic Information Processing Lab http://www.xws.com/levy 31 Skamania Coves Drive Voice: (509) 427-5374 Stevenson, WA 98648 FAX: (509) 427-7131 ===============================================================  From rao at cs.rochester.edu Thu Sep 26 15:07:05 1996 From: rao at cs.rochester.edu (Rajesh Rao) Date: Thu, 26 Sep 1996 15:07:05 -0400 Subject: Tech Report: Visual Cortex as a Hierarchical Predictor Message-ID: <199609261907.PAA19545@skunk.cs.rochester.edu> The following technical report on a hierarchical predictor model of the visual cortex and the complex cell phenomenon of "endstopping" is available for retrieval via ftp. Comments and suggestions welcome (This message has been cross-posted - my apologies to those who received it more than once). -- Rajesh Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ =========================================================================== The Visual Cortex as a Hierarchical Predictor Rajesh P.N. Rao and Dana H. Ballard Technical Report 96.4 National Resource Laboratory for the Study of Brain and Behavior Department of Computer Science, University of Rochester September, 1996 Abstract A characteristic feature of the mammalian visual cortex is the reciprocity of connections between cortical areas [1]. 
While corticocortical feedforward connections have been well studied, the computational function of the corresponding feedback projections has remained relatively unclear. We have modelled the visual cortex as a hierarchical predictor wherein feedback projections carry predictions for lower areas and feedforward projections carry the difference between the predictions and the actual internal state. The activities of model neurons and their synaptic strength are continually adapted using a hierarchical Kalman filter [2] that minimizes errors in prediction. The model generalizes several previously proposed encoding schemes [3,4,5,6,7,8] and allows functional interpretations of a number of well-known psychophysical and neurophysiological phenomena [9]. Here, we present simulation results suggesting that the classical phenomenon of endstopping [10,11] in cortical neurons may be viewed as an emergent property of the cortex implementing a hierarchical Kalman filter-like prediction mechanism for efficient encoding and recognition. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/endstop.ps.Z WWW URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/endstop.ps.Z 20 pages; 302K compressed. The following related papers are also available via ftp: ------------------------------------------------------------------------- Dynamic Model of Visual Recognition Predicts Neural Response Properties In The Visual Cortex Rajesh P.N. Rao and Dana H. Ballard (Neural Computation - in press) Abstract The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. 
In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/dynmem.ps.Z WWW URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/dynmem.ps.Z 43 pages; 569K compressed. -------------------------------------------------------------------------- A Class of Stochastic Models for Invariant Recognition, Motion, and Stereo Rajesh P.N. Rao and Dana H. Ballard Technical Report 96.1 Abstract We describe a general framework for modeling transformations in the image plane using a stochastic generative model. Algorithms that resemble the well-known Kalman filter are derived from the MDL principle for estimating both the generative weights and the current transformation state. 
The generative model is assumed to be implemented in cortical feedback pathways while the feedforward pathways implement an approximate inverse model to facilitate the estimation of current state. Using the above framework, we derive models for invariant recognition, motion estimation, and stereopsis, and present preliminary simulation results demonstrating recognition of objects in the presence of translations, rotations and scale changes. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/invar.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/invar.ps.Z 7 pages; 430K compressed. ========================================================================== Anonymous ftp instructions: >ftp ftp.cs.rochester.edu Connected to anon.cs.rochester.edu. 220 anon.cs.rochester.edu FTP server (Version wu-2.4(3)) ready. Name: [type 'anonymous' here] 331 Guest login ok, send your complete e-mail address as password. Password: [type your e-mail address here] ftp> cd /pub/u/rao/papers/ ftp> get endstop.ps ftp> get dynmem.ps ftp> get invar.ps ftp> bye  From es2029 at eng.warwick.ac.uk Fri Sep 27 09:00:32 1996 From: es2029 at eng.warwick.ac.uk (es2029@eng.warwick.ac.uk) Date: Fri, 27 Sep 96 9:00:32 BST Subject: Thesis available: Constrained weight nets Message-ID: <25868.9609270800@eng.warwick.ac.uk> The following PhD Thesis is available on the web: ---------------------------------------------------- Feedforward Neural Networks with Constrained Weights ---------------------------------------------------- Altaf H. Khan (Email address effective 7 Oct 96 a.h.khan at ieee.org) Department of Engineering, University of Warwick, Coventry, CV4 7AL, England August 1996 218 pages - gzipped postscript version available as http://www.eng.warwick.ac.uk/~es2029/thesis.ps.gz This thesis will also be made available on Neuroprose in the near future. 
---------------------------------------------------- Thesis Summary The conventional multilayer feedforward network with continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3, 3] only, whereas the second restricts its synapses to the set {-1,0,1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3 bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off `near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables. 
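The shift-and-add arithmetic described in the summary can be made concrete with a small sketch (illustrative Python, not code from the thesis; the function names are invented):

```python
def int_weight_mul(x, w):
    """Multiplier-free product w*x for an integer weight w in [-3, 3]:
    |w| needs at most one shift and one add, plus one sign change."""
    assert -3 <= w <= 3
    a = abs(w)
    if a == 0:
        p = 0
    elif a == 1:
        p = x
    elif a == 2:
        p = x << 1            # one shift
    else:                     # a == 3
        p = (x << 1) + x      # one shift plus one add
    return -p if w < 0 else p

def ternary_synapse(x, w):
    """For a synapse restricted to {-1, 0, 1} the product reduces to a
    single sign change (or zero) -- no shifts or adds at all."""
    return x if w == 1 else (-x if w == -1 else 0)
```

For instance, int_weight_mul(5, -3) evaluates (5 << 1) + 5 and then one sign change, giving -15; no hardware multiplier is ever needed.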
Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that, provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with those of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the `onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to those of their conventional counterparts. -- Altaf.  From marney at ai.mit.edu Wed Sep 25 21:32:31 1996 From: marney at ai.mit.edu (Marney Smyth) Date: Wed, 25 Sep 1996 21:32:31 -0400 (EDT) Subject: Modern Regression and Classification Course in Boston Message-ID: <9609260132.AA05378@carpentras.ai.mit.edu> ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ +++ +++ +++ Modern Regression and Classification +++ +++ Widely Applicable Statistical Methods for +++ +++ Modeling and Prediction +++ +++ +++ +++ Cambridge, MA, December 9 - 10, 1996 +++ +++ +++ +++ Trevor Hastie, Stanford University +++ +++ Rob Tibshirani, University of Toronto +++ +++ +++ ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ This two-day course will give a detailed overview of statistical models for regression and classification. 
This field, known as machine learning in computer science and artificial intelligence and as pattern recognition in engineering, is a hot area with powerful applications in science, industry and finance. The course covers a wide range of models, from linear regression through various classes of more flexible models, to fully nonparametric regression models, both for the regression problem and for classification. Although a firm theoretical motivation will be presented, the emphasis will be on practical applications and implementations. The course will include many examples and case studies, and participants should leave the course well-armed to tackle real problems with realistic tools. The instructors are at the forefront of research in this area. After a brief overview of linear regression tools, methods for one-dimensional and multi-dimensional smoothing are presented, as well as techniques that assume a specific structure for the regression function. These include splines, wavelets, additive models, MARS (multivariate adaptive regression splines), projection pursuit regression, neural networks and regression trees. The same hierarchy of techniques is available for classification problems. Classical tools such as linear discriminant analysis and logistic regression can be enriched to account for nonlinearities and interactions. Generalized additive models and flexible discriminant analysis, neural networks and radial basis functions, classification trees and kernel estimates are all such generalizations. Other specialized techniques for classification, including nearest-neighbor rules and learning vector quantization, will also be covered. Apart from describing these techniques and their applications to a wide range of problems, the course will also cover model selection techniques, such as cross-validation and the bootstrap, and diagnostic techniques for model assessment. 
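Of the model selection techniques mentioned, k-fold cross-validation is easy to sketch in a few lines. This is a hedged illustration only: the helper names and the trivial "predict the training mean" model are invented here, and any real course example would plug in an actual regression fit.

```python
import random

def k_fold_cv(xs, ys, fit, predict, k=5, seed=0):
    """Estimate prediction error by k-fold cross-validation: hold out
    each fold in turn, fit on the remaining data, and average the
    squared error over the held-out points."""
    idx = list(range(len(xs)))
    random.Random(seed).shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    sq_errs = []
    for fold in folds:
        held = set(fold)
        train = [i for i in idx if i not in held]
        model = fit([xs[i] for i in train], [ys[i] for i in train])
        sq_errs.extend((predict(model, xs[i]) - ys[i]) ** 2 for i in fold)
    return sum(sq_errs) / len(sq_errs)

# toy "model": predict the training-set mean, ignoring x entirely
mean_fit = lambda xs, ys: sum(ys) / len(ys)
mean_predict = lambda model, x: model
```

With a constant target the toy mean model attains zero cross-validated error; with a varying target it does not, and comparing such held-out errors across candidate models is exactly how cross-validation guides model selection.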
Software for these techniques will be illustrated, and a comprehensive set of course notes will be provided to each attendee. Additional information is available at the Website: http://playfair.stanford.edu/~trevor/mrc.html COURSE OUTLINE DAY ONE: Overview of regression methods: Linear regression models and least squares. Ridge regression and the lasso. Flexible linear models and basis function methods. Linear and nonlinear smoothers; kernels, splines, and wavelets. Bias/variance tradeoff: cross-validation and bootstrap. Smoothing parameters and effective number of parameters. Surface smoothers. ++++++++ Structured Nonparametric Regression: Problems with high dimensional smoothing. Structured high-dimensional regression: additive models. Projection pursuit regression. CART, MARS. Radial basis functions. Neural networks. Applications to time series forecasting. DAY TWO: Classification: Statistical decision theory and classification rules. Linear procedures: Discriminant analysis. Logistic regression. Quadratic discriminant analysis, parametric models. Nearest neighbor classification, K-means and LVQ. Adaptive nearest neighbor methods. ++++++++ Nonparametric classification: Classification trees: CART. Flexible/penalized discriminant analysis. Multiple logistic regression models and neural networks. Kernel methods. THE INSTRUCTORS Professor Trevor Hastie of the Statistics and Biostatistics Departments at Stanford University was formerly a member of the Statistics and Data Analysis Research group at AT&T Bell Laboratories. He co-authored with Tibshirani the monograph Generalized Additive Models (1990) published by Chapman and Hall, and has many research articles in the area of nonparametric regression and classification. He also co-edited the Wadsworth book Statistical Models in S (1991) with John Chambers. 
Professor Robert Tibshirani of the Statistics and Biostatistics departments at the University of Toronto is the most recent recipient of the COPSS award - an award given jointly by all the leading statistical societies to the most outstanding statistician under the age of 40. He also has many research articles on nonparametric regression and classification. With Bradley Efron he co-authored the best-selling text An Introduction to the Bootstrap in 1993, and has been an active researcher on bootstrap technology for the past 11 years. Quotes from previous participants: "... the best presentation by professional statisticians I have ever had the pleasure of attending" "... superior to most courses in all respects." Both Prof. Hastie and Prof. Tibshirani are actively involved in research in modern regression and classification and are well-known not only in the statistics community but in the machine-learning and neural network fields as well. They have given many short courses together on classification and regression procedures to a wide variety of academic, government and industrial audiences. These include the American Statistical Association and Interface meetings, the NATO ASI Neural Networks and Statistics workshop, AI and Statistics, and the Canadian Statistical Society meetings. BOSTON COURSE: December 9-10, 1996 at the HYATT REGENCY HOTEL, CAMBRIDGE, MASSACHUSETTS. PRICE: $750 per attendee before November 11, 1996. Full time registered students receive a 40% discount (i.e. $450). Cancellation fee is $100 after October 29, 1996. Registration fee after November 11, 1996 is $950 (Students $530). Attendance is limited to the first 60 applicants, so sign up soon! These courses fill up quickly. HOTEL ACCOMMODATION The Hyatt Regency Hotel offers special accommodation rates for course participants ($139 per night). Contact the hotel directly - The Hyatt Regency Hotel, 575 Memorial Drive, Cambridge, MA 02139. 
Phone : 617 4912-1234 Alternative hotel accommodation information at MRC WebSite: http://playfair.stanford.edu/~trevor/mrc.html COURSE REGISTRATION TO REGISTER: Detach and fill in the Registration Form below: Modern Regression and Classification Widely applicable methods for modeling and prediction December 9 - December 10, 1996 Cambridge, Massachusetts USA Please complete this form (type or print) Name ___________________________________________________ Last First Middle Firm or Institution ______________________________________ Mailing Address (for receipt) _________________________ __________________________________________________________ __________________________________________________________ __________________________________________________________ Country Phone FAX __________________________________________________________ email address __________________________________________________________ Credit card # (if payment by credit card) Expiration Date (Lunch Menu - tick as appropriate): ___ Vegetarian ___ Non-Vegetarian Fee payment must be made by MONEY ORDER, PERSONAL CHECK, VISA or MASTERCARD. All amounts must be in US dollars. Make fee payable to Prof. Trevor Hastie. Mail it, together with this completed Registration Form, to: Marney Smyth, MIT Press E39-311 55 Hayward Street, Cambridge, MA 02142 USA ALL CREDIT CARD REGISTRATIONS MUST INCLUDE BOTH CARD NUMBER AND EXPIRATION DATE. DEADLINE: Registration before December 2, 1996. DO NOT SEND CASH. Registration fee includes Course Materials, coffee breaks, and lunch both days. 
If you have further questions, email to marney at ai.mit.edu  From marks at u.washington.edu Mon Sep 30 01:59:09 1996 From: marks at u.washington.edu (Robert Marks) Date: Sun, 29 Sep 96 22:59:09 -0700 Subject: IEEE TNN CFP: Special Issue on Everyday Applications Message-ID: <9609300559.AA08401@carson.u.washington.edu> Special Issue of the IEEE Transactions on Neural Networks: Everyday Applications of Neural Networks The objective of this special issue is the presentation of cases of ongoing or everyday use of neural networks in industry, commerce, medicine, engineering, the military and other disciplines. Even though artificial neural networks have been around since the 1940s, the last decade has seen a tremendous upsurge in research and development. This activity has been at two levels: (i) advances in neural techniques and network architectures, and (ii) exploration of applications of this technology in various fields. Neural network technology has reached a degree of maturity as evidenced by an ever increasing number of applications. It is useful, at this stage, to take stock of applications to provide the neural practitioner with (i) knowledge of fields wherein neural technology has had an impact, and (ii) guidance concerning fruitful areas of research and development in neurotechnology that could have a significant impact. This special issue of the TNN calls for submission of papers concerning neural technology adopted for ongoing or everyday use. Hybrid neural technology, such as neuro-fuzzy systems, is also appropriate. Submissions are to specifically address the infusion and adaptation of neural technology in various areas. Exploratory applications papers, normally welcome for submission to the TNN, are specifically discouraged for this special issue. Adopted and established applications papers, rather, are appropriate. 
Submissions to the special issue will be judged on the veracity of everyday use, comparative performance over previously used techniques, and lessons learned from the development and application. Descriptions of remaining open problems, or of desired though unachieved performance, are encouraged. Six copies of the manuscript should be mailed to one of the special issue editors by November 15, 1996. The special issue is tentatively scheduled for publication in July 1997. Submissions may be either brief papers or regular papers; please refer to the TNN instructions to authors. Tharam Dillon Professor of Computer Science Head, Department of Computer Science and Computer Engineering La Trobe University Bundoora, Melbourne, Victoria 3083 Australia Tel: +61 3 479 2598 Fax: +61 3 479 3060 tharam at latcs1.cs.latrobe.edu.au Payman Arabshahi University of Washington Department of Electrical Engineering Benton Way at Stevens Way Box 352500 Seattle, WA 98195 United States of America payman at ee.washington.edu 206 236 2694 FAX: 206 543 3842 Robert J. Marks II University of Washington Department of Electrical Engineering c/o 1131 199th Street SW Lynnwood, WA 98036-7138 United States of America r.marks at ieee.org 206 543 6990 FAX: 206 776 9297 --- ------------------------------------------------------------------------------- The CONNECTIONISTS Archive: --------------------------- All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are: arch.yymm where yymm stands for the obvious thing. Thus the earliest available data are in the file: arch.8802 Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month. 
------------------------------------------------------------------------------- How to FTP Files from the CONNECTIONISTS Archive ------------------------------------------------ 1. Open an FTP connection to host B.GP.CS.CMU.EDU 2. Login as user anonymous with password your username. 3. 'cd' directly to the following directory: /afs/cs/project/connect/connect-archives The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory. Problems? - contact us at "Connectionists-Request at cs.cmu.edu". ------------------------------------------------------------------------------- Using Mosaic and the World Wide Web ----------------------------------- You can also access these files using the following url: http://www.cs.cmu.edu/afs/cs/project/connect/connect-archives ---------------------------------------------------------------------- The NEUROPROSE Archive ---------------------- Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu) Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality. PLACING A FILE To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. 
Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix. Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one sentence description. See the INDEX file for examples. ANNOUNCING YOUR PAPER It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. 
"paper available "Solving Towers of Hanoi with ART-4" Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/filename.ps.Z When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST, (which gets posted to comp.ai.neural-networks) and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems will also be posted as debugged and available. At any time, for any reason, the author may request their paper be updated or removed. For further questions contact: Jordan Pollack Associate Professor Computer Science Department Center for Complex Systems Brandeis University Phone: (617) 736-2713/* to fax Waltham, MA 02254 email: pollack at cs.brandeis.edu APPENDIX: Here is an example of naming and placing a file: unix> compress myname.title.ps unix> ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose/Inbox 250 CWD command successful. 
ftp> put myname.title.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for myname.title.ps.Z 226 Transfer complete. 100000 bytes sent in 1.414 seconds ftp> quit 221 Goodbye. unix> mail pollack at cis.ohio-state.edu Subject: file in Inbox. Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry: myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read Let me know when it is in place so I can announce it to Connectionists at cmu. ^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu".  
From denni at bordeaux.cse.ogi.edu Sun Sep 1 01:08:34 1996
From: denni at bordeaux.cse.ogi.edu (Thorsteinn Rognvaldsson)
Date: Sat, 31 Aug 96 22:08:34 -0700
Subject: Smoothing Regularizers for PBF NN
Message-ID: <9609010508.AA20633@bordeaux.cse.ogi.edu>

New tech report available:

SMOOTHING REGULARIZERS FOR PROJECTIVE BASIS FUNCTION NETWORKS

By: JOHN E. MOODY & THORSTEINN S. ROGNVALDSSON
Dept. of Computer Science and Engineering
Oregon Graduate Institute of Science and Technology
P.O. Box 91000
Portland, Oregon 97291-1000, U.S.A.
Emails: moody at cse.ogi.edu, denni at cse.ogi.edu
(Direct correspondence to Prof. Moody)

---------
Abstract: Smoothing regularizers for radial basis functions have been studied extensively, but no general smoothing regularizers for PROJECTIVE BASIS FUNCTIONS (PBFs), such as the widely-used sigmoidal PBFs, have heretofore been proposed. We derive new classes of algebraically-simple m:th-order smoothing regularizers for networks of projective basis functions. Our simple algebraic forms enable the direct enforcement of smoothness without the need for, e.g., costly Monte Carlo integrations of the smoothness functional. We show that our regularizers are highly correlated with the values of standard smoothness functionals, and thus suitable for enforcing smoothness constraints onto PBF networks. The regularizers are tested on illustrative sample problems and compared to quadratic weight decay. The new regularizers are shown to yield better generalization errors than weight decay when the implicit assumptions in the latter are wrong. Unlike weight decay, the new regularizers distinguish between the roles of the input and output weights and capture the interactions between them.
--------------------------------------------------
Instructions for retrieving your own personal copy:

WWW: http://www.cse.ogi.edu/~denni/publications.html

FTP (assumes you have a UNIX system):
% ftp neural.cse.ogi.edu   (username=anonymous, password=your email)
> cd pub/neural/papers/
> get moodyRogn96.smooth_long.ps.Z
> quit
% uncompress moodyRogn96.smooth_long.ps.Z
% lpr moodyRogn96.smooth_long.ps

From back at zoo.riken.go.jp Sun Sep 1 18:58:10 1996
From: back at zoo.riken.go.jp (Andrew Back)
Date: Mon, 2 Sep 1996 07:58:10 +0900 (JST)
Subject: NIPS'96 Workshop - Blind Signal Processing
Message-ID:

CALL FOR PAPERS
NIPS'96 Postconference Workshop
BLIND SIGNAL PROCESSING AND THEIR APPLICATIONS
(Neural Information Processing Approaches)

Snowmass (Aspen), Colorado USA
Sat Dec 7th, 1996

A. Cichocki and A. Back
Brain Information Processing Group
Frontier Research Program
RIKEN, Institute of Physical and Chemical Research,
Hirosawa 2-1, Saitama 351-01, WAKO-Shi, JAPAN
Email: cia at zoo.riken.go.jp, back at zoo.riken.go.jp
Fax: (+81) 48 462 4633
URL: http://zoo.riken.go.jp/bip.html

Blind signal processing is an emerging area of research in neural networks and image/signal processing with many potential applications. It originated in France in the late 80's, and since then there has been a strong and growing interest in the field. Blind signal processing problems can be classified into three areas: (1) blind separation of sources and/or independent component analysis (ICA), (2) blind channel identification, and (3) blind deconvolution and blind equalization.

OBJECTIVES

The main objectives of this workshop are to:

- Present talks by experts in the field on the state of the art in this exciting area of research.
- Compare the performance of recently developed adaptive unsupervised learning algorithms for neural networks.
- Discuss issues surrounding prospective applications and the suitability of current neural network models.
Hence we seek to provide a forum for better understanding the current limitations of neural network models.

- Examine issues surrounding local, online adaptive learning algorithms, their robustness, and their biological plausibility or justification.
- Discuss issues concerning effective computer simulation programs.
- Discuss open problems and perspectives for future research in this area.

In particular, we intend to discuss the following items:

1. Criteria for blind separation and blind deconvolution problems (for both time and frequency domain approaches).
2. Natural (or relative) gradient approaches to blind signal processing.
3. Neural networks for blind separation of time-delayed and convolved signals.
4. On-line adaptive learning algorithms for blind signal processing with variable learning rate (learning of the learning rate).
5. Open problems, e.g. dynamic on-line determination of the number of sources (more sources than sensors), influence of noise, robustness of algorithms, stability, convergence, identifiability, non-causal and non-stationary dynamic systems.
6. Applications in different areas of science and engineering, e.g., non-invasive medical diagnosis (EEG, ECG), telecommunications, voice recognition, image processing and enhancement.

WORKSHOP FORMAT

The workshop will be one day long, combining invited expert speakers with significant group discussion time. The discussion will be moderated, with the intent of permitting a free-flowing but productive discourse on the topics relevant to this area. Participants will be encouraged to consider the implications of the current findings for their own work, and to raise questions accordingly. We invite and encourage potential participants to "come prepared" for open discussions.
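[Editorial note] Item 2, the natural (relative) gradient approach, can be made concrete. The following is our own toy sketch (not from the announcement) of the natural-gradient ICA rule W <- W + lr*(I - f(y)y^T)W with f = tanh, applied to two artificially mixed super-Gaussian sources; all parameter values are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Two super-Gaussian (Laplacian) sources, linearly mixed: the standard
# blind source separation setup.
n = 5000
S = rng.laplace(size=(2, n))              # unknown sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])    # unknown mixing matrix
X = A @ S                                 # observed mixtures

# Natural-gradient ICA rule: W <- W + lr * (I - tanh(y) y^T) W.
# tanh is an appropriate nonlinearity for super-Gaussian sources.
W = np.eye(2)
lr = 0.01
for _ in range(200):                      # epochs
    for i in range(0, n, 100):            # mini-batches of 100 samples
        y = W @ X[:, i:i + 100]
        g = np.eye(2) - (np.tanh(y) @ y.T) / 100.0
        W += lr * g @ W

Y = W @ X   # recovered sources, up to permutation and scaling
```

The mixtures X are strongly correlated by construction; after adaptation the outputs Y should be far less correlated, illustrating why such local, online rules are attractive for the applications listed above.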
SUBMISSION OF WORKSHOP EXTENDED ABSTRACTS

If you would like to contribute, please send an abstract or extended summary as soon as possible to:

Andrew Back
Laboratory for Artificial Brain Systems,
Frontier Research Program
RIKEN, Institute of Physical and Chemical Research,
Hirosawa 2-1, Saitama 351-01, WAKO-Shi, JAPAN
Email: back at zoo.riken.go.jp
Phone: (+81) 48 467 9629
Fax: (+81) 48 462 4633

Manuscripts may be sent by email (in PostScript format), air mail, or fax.

Important Dates:

Submission of abstract deadline: 16 September, 1996
Notification of acceptance: 1 October, 1996
Final paper to be sent by: 30 October, 1996

A set of workshop notes will be produced. For accepted papers to be included in the notes, they must reach us by the due date of 30 October, 1996. For the format of papers, the usual NIPS style file should be used, with up to 16 pages allowed. Please contact the workshop organizers for further information, or consult the NIPS WWW home page: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From maggini at sun1.ing.unisi.it Mon Sep 2 03:23:09 1996
From: maggini at sun1.ing.unisi.it (Marco Maggini)
Date: Mon, 2 Sep 1996 09:23:09 +0200
Subject: NIPS'96 Postconference Workshop
Message-ID: <199609020723.JAA08804@ultra1>

=================================================================
CALL FOR PAPERS
NIPS'96 Postconference Workshop
-----------------------------------------------------------------
ANNs and Continuous Optimization: Local Minima, Sub-optimal Solutions, and Computational Complexity
-----------------------------------------------------------------
Snowmass (Aspen), Colorado USA
Fri Dec 6th, 1996

M. Gori
Universita' di Siena / Universita' di Firenze
marco at mcculloch.ing.unifi.it

M. Protasi
Universita' Tor Vergata (Roma)
protasi at utovrm.it

http://www-dsi.ing.unifi.it/neural

Most ANNs used for either learning or problem solving (e.g. backprop nets and analog Hopfield nets) rely on continuous optimization. The elegance and generality of this approach, however, also seem to be the main source of the troubles that typically arise when approaching complex problems. Most of the time this creates a certain suspicion about the actual chance of discovering an optimal solution under reasonable computational constraints: the computational complexity of the problem at hand seems to manifest itself as local minima and numerical problems for the chosen optimization algorithm. While most practitioners readily accept this suspicion and take pride in their eventual experimental achievements, most theoreticians are instead quite skeptical. Obviously, the success of ANNs for either learning or problem solving often depends on the problem at hand; one can therefore expect excellent behavior on one class of problems while harboring serious doubts about the solution of others. To the best of our knowledge, however, this intuitive idea has so far had no satisfactory theoretical explanation. Basically, there is neither a theory to naturally support the intuitive concept of suspicion, nor a theory to relate this concept to computational complexity.

The study of the complexity of algorithms has been carried out essentially on discrete structures, and an impressive theory has been developed over the last two decades. Optimization theory, on the other hand, has a twofold face: discrete optimization and continuous optimization. There are some important approaches in computational complexity theory that were proposed for the continuous case, for instance information-based complexity (Traub) and real Turing machines (Blum-Shub-Smale).
These approaches can be fruitfully applied to problems arising in continuous optimization but, generally speaking, the formal study of the efficiency of algorithms and problems has received much more attention in the discrete setting, where the theory can be applied easily and problems of error and precision do not arise.

===========================================
DISCUSSION POINTS FOR WORKSHOP PARTICIPANTS
===========================================

Within this framework, a fascinating area that we believe deserves careful study concerns the relationship between the emergence of sub-optimal solutions in continuous optimization and the computational complexity of the problem at hand. More specifically, there are a number of intriguing open questions:

- Is there a relationship between the complexity of algorithms in the continuous and discrete settings for solving the same problem?
- Can we deduce bounds on the complexity of discrete algorithms from the study of the properties of continuous ones, and vice versa?
- Some loading problems are intuitively easy to solve, while others are considered hard. Are there links between the presence of local minima and the computational complexity of the problem at hand?
- What is the impact of approximate solutions on complexity? (E.g., learning is inherently an approximate process whose complexity is often studied in the framework of theories such as PAC.)

==========
OBJECTIVES
==========

The aim of the workshop is not to reach definite conclusions, but to start a discussion and to put on the table some of the most important themes that we hope will be explored extensively in the future. We also expect that the study of the interplay between continuous and discrete versions of a problem can be very fruitful for both approaches.

Since this interplay has rarely been explored until now, the workshop is likely to stimulate different points of view; people working on discrete optimization are in fact likely not to be expert on the continuous side, and vice versa.

=========================================
SUBMISSION OF WORKSHOP EXTENDED ABSTRACTS
=========================================

If you would like to contribute, please send an abstract or extended summary to:

Marco Gori
Facolta' di Ingegneria
Universita' di Siena
Via Roma, 56
53100 Siena (Italy)
Fax: +39 577 26.36.02

Electronic submission: send manuscripts in PostScript format to marco at mcculloch.ing.unifi.it

Important Dates:

Submission of abstract deadline: 30 September, 1996
Notification of acceptance: 21 October, 1996
Final paper to be sent by: 4 November, 1996

For the format of papers, the usual NIPS style file should be used, with up to 16 pages allowed. Workshop notes will be produced. NIPS style files are available at

http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/formatting/nips.sty
http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/formatting/nips.tex
http://www.cs.cmu.edu/afs/cs/project/cnbc/nips/formatting/nips.ps

Please contact the workshop organizers for further information, or consult the NIPS WWW home page: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From pfbaldi at cco.caltech.edu Mon Sep 2 19:58:40 1996
From: pfbaldi at cco.caltech.edu (Pierre Baldi)
Date: Mon, 2 Sep 1996 16:58:40 -0700 (PDT)
Subject: TR available: Bayesian Methods and Compartmental Modeling
Message-ID:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/baldi.comp.tar.Z

The file baldi.comp.tar.Z is now available for copying from the Neuroprose repository:

ON THE USE OF BAYESIAN METHODS FOR EVALUATING COMPARTMENTAL NEURAL MODELS
(40 pages = 35 pages + 5 figures; one figure is in color but should print OK in black and white)

P. Baldi, M. C. Vanier, and J. M. Bower
Department of Computation and Neural Systems
Caltech

ABSTRACT: In this TR, we provide a tutorial on Bayesian methods for neurobiologists, as well as an application of the methods to compartmental modeling. We first derive prior and likelihood functions for compartmental neural models and for spike trains. We then apply the full Bayesian inference machinery to parameter estimation and model comparison for simple classes of compartmental models with three and four conductances. We also perform class comparison by approximating integrals over the entire parameter space. Advantages and drawbacks are discussed. Sorry, no hard copies available.

From surmeli at ipe.et.uni-magdeburg.de Tue Sep 3 02:56:19 1996
From: surmeli at ipe.et.uni-magdeburg.de (Dimitrij Surmeli)
Date: Tue, 03 Sep 1996 08:56:19 +0200
Subject: Job: GRA in neural nets for control; Magdeburg, Germany
Message-ID: <322BD693.3D70@ipe.et.uni-magdeburg.de>

Job announcement:

The Institute of Measurement and Electronics of the Otto-von-Guericke-University, Magdeburg, Germany, has an opening for a Research Assistant in the 'Innovationskolleg ADAMES' as of 1 October 1996. The project investigates distributed neural network applications for ADAptive MEchanical Systems (ADAMES), i.e. signal identification, data compression, and control systems. This is one area in a multi-disciplinary project involving actively deformable mechanical systems.

Desired qualifications include a completed BSc/MSc and experience in neural network applications, control theory, image processing, and programming (all under Unix and Windows). Helpful: Matlab experience, hardware design, CNAPS neurocomputer experience, analog-digital-analog conversion.

The working language is German. Compensation depends on qualification, on the BAT II-O scale. The position is suitable for research leading to a PhD.

Informal inquiries re: ADAMES to surmeli at ipe.et.uni-magdeburg.de

Formal application including CV, cover letter, etc. to:

Prof. B.
Michaelis
Otto-von-Guericke-Universitaet Magdeburg
Fakultaet fuer Elektrotechnik
Institut fuer Prozessmesstechnik und Elektronik
Am Kroekentor 2
39106 Magdeburg
Germany
tel. +49 391 671 4645
fax +49 391 561 6368
email michaelis at ipe.et.uni-magdeburg.de
http://pmt05.et.uni-magdeburg.de/TI/TI.html

--
Dimitrij Surmeli   surmeli at ipe.et.uni-magdeburg.de
Anybody got a good name for a CNAPS 512 neurocomputer?

From payman at u.washington.edu Tue Sep 3 21:43:19 1996
From: payman at u.washington.edu (Payman Arabshahi)
Date: Tue, 3 Sep 96 18:43:19 -0700
Subject: CFP: 1997 Computational Intelligence in Financial Eng. CIFEr
Message-ID: <9609040143.AA28874@saul3.u.washington.edu>

IEEE/IAFE 1997 CIFEr
[ASCII-art "CIFEr" banner omitted]

Visit us on the web at http://www.ieee.org/nnc/cifer97

Call for Papers

Conference on Computational Intelligence for Financial Engineering (CIFEr)
Crowne Plaza Manhattan, New York City
March 23-25, 1997

Sponsors: The IEEE Neural Networks Council, The International Association of Financial Engineers

The IEEE/IAFE CIFEr Conference is the third annual collaboration between the professional engineering and financial communities, and is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. Intelligent computational systems have become indispensable in virtually all financial applications, from portfolio selection to proprietary trading to risk management.

Conference Topics

Topics in which papers, panel sessions, and tutorial proposals are invited include, but are not limited to, the following:

Financial Engineering Applications:
* Risk Management
* Pricing of Structured Securities
* Asset Allocation
* Trading Systems
* Forecasting
* Hedging Strategies
* Risk Arbitrage
* Exotic Options

Computer & Engineering Applications & Models:
* Neural Networks
* Probabilistic Modeling/Inference
* Fuzzy Systems and Rough Sets
* Genetic and Dynamic Optimization
* Intelligent Trading Agents
* Trading Room Simulation
* Time Series Analysis
* Non-linear Dynamics

------------------------------------------------------------------------------
Instructions for Authors, Special Sessions, Tutorials, & Exhibits
------------------------------------------------------------------------------

All summaries and proposals for tutorials, panels and special sessions must be received by the conference Secretariat at Meeting Management by November 15, 1996. Our intention is to publish a book with the best selection of accepted papers.

Authors (For Conference Oral Sessions)

One copy of the Extended Summary (not exceeding four pages of 8.5 inch by 11 inch size) must be received by Meeting Management by November 15, 1996. The paper's complete title, author name(s), affiliation(s), and mailing address(es) should be centered at the top of the first page. Fonts no smaller than 10 pt should be used. Papers must report original work that has not been published previously and is not under consideration for publication elsewhere. The letter accompanying the submission should include the following information:

* Topic(s)
* Full title of paper
* Corresponding author's name
* Mailing address
* Telephone and fax
* E-mail (if available)
* Presenter (if different from corresponding author, please provide name, mailing address, etc.)

Authors will be notified of acceptance of the Extended Summary by January 10, 1997. Complete papers (not exceeding seven pages of 8.5 inch by 11 inch size) will be due by February 14, 1997, and will be published in the conference proceedings.
----------------------------------------------------------------------------
Special Sessions

A limited number of special sessions will address subjects within the topical scope of the conference. Each special session will consist of four to six papers on a specific topic. Proposals for special sessions will be submitted by the session organizer and should include:

* Topic(s)
* Title of Special Session
* Name, address, phone, fax, and email of the Session Organizer
* List of paper titles with authors' names and addresses
* One page of summaries of all papers

Notification of acceptance of special session proposals will be on January 10, 1997. If a proposal for a special session is accepted, the authors will be required to submit a camera-ready copy of their paper for the conference proceedings by February 14, 1997.

----------------------------------------------------------------------------
Panel Proposals

Proposals for panels addressing topics within the technical scope of the conference will be considered. Panel organizers should describe, in two pages or less, the objective of the panel and the topic(s) to be addressed. Panel sessions should be interactive with panel members and the audience, and should not be a sequence of paper presentations by the panel members. The participants in the panel should be identified. No papers will be published from panel activities. Notification of acceptance of panel session proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Tutorial Proposals

Proposals for tutorials addressing subjects within the topical scope of the conference will be considered. Proposals for tutorials should describe, in two pages or less, the objective of the tutorial and the topic(s) to be addressed. A detailed syllabus of the course contents should also be included. Most tutorials will be four hours, although proposals for longer tutorials will also be considered.
Notification of acceptance of tutorial proposals will be on January 10, 1997.

----------------------------------------------------------------------------
Exhibit Information

Businesses with activities related to financial engineering, including software & hardware vendors, publishers, and academic institutions, are invited to participate in CIFEr's exhibits. Further information about the exhibits can be obtained from the CIFEr secretariat, Barbara Klemm.

----------------------------------------------------------------------------
Contact Information

More information on registration and the program will be provided as soon as it becomes available. For further details, please contact:

Barbara Klemm
CIFEr'97 Secretariat
Meeting Management
IEEE/IAFE Computational Intelligence for Financial Engineering
2603 Main Street, Suite #690
Irvine, California 92714
Tel: (714) 752-8205 or (800) 321-6338
Fax: (714) 752-7444
Email: Meetingmgt at aol.com
Web: http://www.ieee.org/nnc/cifer97

Sponsors

Sponsorship for CIFEr'97 is being provided by the IAFE (International Association of Financial Engineers) and the IEEE Neural Networks Council. The IEEE (Institute of Electrical and Electronics Engineers) is the world's largest engineering and computer science professional non-profit association and sponsors hundreds of technical conferences and publications annually. The IAFE is a professional non-profit financial association with members worldwide specializing in new financial product design, derivative structures, risk management strategies, arbitrage techniques, and the application of computational techniques to finance.

----------------------------------------------------------------------------
Payman Arabshahi
CIFEr'97 Organizational Chair
Dept. Electrical Eng./Box 352500
University of Washington
Seattle, WA 98195
Tel: (206) 644-8026
Fax: (206) 543-3842
Email: payman at ee.washington.edu
----------------------------------------------------------------------------

From abonews at playfair.Stanford.EDU Wed Sep 4 18:52:54 1996
From: abonews at playfair.Stanford.EDU (Art Owen News)
Date: Wed, 4 Sep 1996 15:52:54 -0700
Subject: TR Available: Computer Experiments (noise free prediction)
Message-ID: <199609042252.PAA05650@tukey.Stanford.EDU>

Address: http://playfair.stanford.edu/reports/owen
File: main.ps for uncompressed PostScript
      main.ps.Z for compressed PostScript
Authors: J. Koehler and A. Owen

The above article is a survey of methods for computer experiments. These are noise-free, usually continuous-valued prediction problems, sometimes motivated by optimization problems in computer-aided design. Without noise, how does one estimate the uncertainty in a given answer? A Bayesian approach places a process prior on the underlying function. An emerging frequentist approach samples the input space at random and propagates the sampling error.

-Art Owen
art at playfair.stanford.edu

Replies should be sent to art at playfair, not abonews, where they might get lost among mailing list mail. My co-author is Jim Koehler:
EMAIL: jkoehler at carbon.cudenver.edu
http://www-math.cudenver.edu/~jkoehler

From rosen at dragon.cs.utsa.edu Wed Sep 4 20:27:23 1996
From: rosen at dragon.cs.utsa.edu (Bruce)
Date: Wed, 4 Sep 1996 19:27:23 -0500
Subject: Paper announcements
Message-ID: <199609050027.TAA02780@tachy.cs.utsa.edu>

*** Paper Announcements ***

The following paper is now available from my research page: http://www.cs.utsa.edu/faculty/rosen/rosen.html
Comments are welcome.

-----------------------------------------------------------------------
Ensemble Learning using Decorrelated Neural Networks

Bruce E. Rosen
ftp://ringer.cs.utsa.edu/pub/rosen/decorrelate.ps.Z

We describe a decorrelation network training method for improving the quality of regression learning in ``ensemble'' neural networks that are composed of linear combinations of individual neural networks. In this method, individual networks are trained by backpropagation not only to reproduce a desired output, but also to have their errors linearly decorrelated with those of the other networks. Outputs from the individual networks are then linearly combined to produce the output of the ensemble network. We demonstrate the performance of decorrelated network training on learning the ``3 Parity'' logic function, a noisy sine function, and a one-dimensional nonlinear function, and compare the results with ensemble networks composed of independently trained individual networks (without decorrelation training). Empirical results show that when individual networks are forced to be decorrelated with one another, the resulting ensemble neural networks have lower mean squared errors than ensemble networks of independently trained individual networks. This method is particularly applicable when there is insufficient data to train each individual network on disjoint subsets of training patterns.

To appear in: Connection Science, Special issue on Combining Estimators.

From aonishi at bpe.es.osaka-u.ac.jp Thu Sep 5 04:52:20 1996
From: aonishi at bpe.es.osaka-u.ac.jp (Toru Aonishi)
Date: Thu, 5 Sep 1996 17:52:20 +0900
Subject: Paper Announcements
Message-ID: <199609050852.RAA13861@fsunc.bpe.es.osaka-u.ac.jp>

*** Paper Announcements ***

The following two papers on analysis of the dynamic link architecture are now available from my FTP site.
ftp://ftp.bpe.es.osaka-u.ac.jp/pub/FukushimaLab/Papers/aonishi

Comments/suggestions welcome,
-Toru Aonishi (aonishi at bpe.es.osaka-u.ac.jp)

===========================================================================
A Phase Locking Theory of Matching between Rotated Images by a Dynamic Link Architecture
(Submitted to Neural Computation)

Toru AONISHI, Koji KURATA and Takeshi MITO

Pattern recognition invariant to deformation or translation can be performed with the dynamic link architecture proposed by von der Malsburg. The dynamic link architecture has been applied efficiently to some engineering examples, but has not yet been analyzed mathematically. We propose two models of the dynamic link architecture, both mathematically tractable. The first model can perform matching between rotated images. The second model can do the same, and can additionally detect parts common to a template image and a data image. To analyze these models mathematically, we reduce each model's equation to a phase equation, showing the mathematical principle behind the rotation-invariant matching process. We also carry out computer simulations to verify the mathematical theories involved.

FTP-host: ftp.bpe.es.osaka-u.ac.jp
FTP-pathname: /pub/FukushimaLab/Papers/aonishi/rotation_dy.ps.gz
URL: ftp://ftp.bpe.es.osaka-u.ac.jp/pub/FukushimaLab/Papers/aonishi/rotation_dy.ps.gz
30 pages; 238Kb compressed.

===========================================================================
Deformation Theory of Dynamic Link Architecture
(Submitted to Neural Computation)

Toru AONISHI, Koji KURATA

The dynamic link is a self-organizing topographic mapping formed between a template image and a data image. The mapping tends to be continuous, linking points that share similar local features, which can as a result lead to its deformation to some degree. Analyzing this deformation mathematically, we reduce the model equation to a phase equation, which clarifies the principles of this deformation process and the relation between high-dimensional models and low-dimensional ones. It also elucidates the characteristics of the model in the context of standard regularization theory.

FTP-host: ftp.bpe.es.osaka-u.ac.jp
FTP-pathname: /pub/FukushimaLab/Papers/aonishi/deform_dy.ps.gz
URL: ftp://ftp.bpe.es.osaka-u.ac.jp/pub/FukushimaLab/Papers/aonishi/deform_dy.ps.gz
15 pages; 112Kb compressed.
===========================================================================

From ruppin at math.tau.ac.il Fri Sep 6 08:31:24 1996
From: ruppin at math.tau.ac.il (Eytan Ruppin)
Date: Fri, 6 Sep 1996 15:31:24 +0300 (GMT+0300)
Subject: CFP:-Modeling-Brain-Disorders
Message-ID: <199609061231.PAA14712@gemini.math.tau.ac.il>

CALL FOR SUBMISSIONS

Special Issue of the Journal "Artificial Intelligence in Medicine" (Published by Elsevier)

Theme: COMPUTATIONAL MODELING OF BRAIN DISORDERS

Guest-Editors: Eytan Ruppin (Tel-Aviv University) & James A. Reggia (University of Maryland)

BACKGROUND

As computational methods for brain modeling have advanced during the last several years, there has been increasing interest in adopting them to study brain disorders in neurology, neuropsychology, and psychiatry. Models of Alzheimer's disease, epilepsy, aphasia, dyslexia, Parkinson's disease, stroke, and schizophrenia have recently been studied to obtain a better understanding of the underlying pathophysiological processes. While computer models have the disadvantage of simplifying the underlying neurobiology and pathophysiology, they also have remarkable advantages: they permit precise and systematic control of the model variables, and an arbitrarily large number of ``subjects''.
They are open to detailed inspection, in isolation, of the influence of various metabolic and neural variables on disease progression, in the hope of gaining insight into why the observed behaviors occur. Ultimately, one seeks a model sufficiently powerful that it can be used to suggest new pharmacological interventions and rehabilitative actions.

OBJECTIVE OF SPECIAL ISSUE

The objective of this special issue on modeling brain disorders is to report on recent studies in this field. The main goal is to increase the awareness of the AI medical community of this research, which is currently performed primarily by members of the neural networks and `connectionist' community. By bringing together a series of such brain disorder modeling papers, we strive to produce a contemporary overview of the kinds of problems and solutions that this growing research field has generated, and to point to promising future research directions. More specifically, papers are expected to cover one or more of the following topics:

-- Specific neural models of brain disorders, expressing the link between their pathogenesis and clinical manifestations.

-- Computational models of pathological alterations in basic neural, synaptic, and metabolic processes that may relate significantly to the generation of brain disorders.

-- Applications of neural networks that shed light on the pathogenic processes that underlie brain disorders, or that explore their temporal evolution and clinical course.

-- Methodological issues involved in constructing computational models of brain disorders: obtaining sufficient data, visualizing high-dimensional complex behavior, and testing and validating the models.

-- Bridging the apparent gap between functional imaging investigations and current neural modeling studies, which arises from their distinct spatio-temporal resolutions.

SCHEDULE

All submitted manuscripts will be subject to a rigorous review process. The special issue will include 5 papers of 15-20 pages each, plus an editorial. Manuscripts should be prepared in accordance with the journal "submission guidelines", which are available on request and may also be retrieved from http://www.math.tau.ac.il/~ruppin.

November 15, 1996: Submission of tentative title and abstract, to declare the intention to submit a paper. This should be done electronically, to ruppin at math.tau.ac.il.

March 15, 1997: Receipt of full papers. Three copies of a manuscript should be sent to:

Eytan Ruppin
Department of Computer Science
School of Mathematics
Tel-Aviv University
Tel-Aviv, Israel, 69978

August 1, 1997: Notification of acceptance
October 1, 1997: Receipt of final versions of manuscripts
June 1998: Publication of AIM special issue

From Friedrich.Leisch at ci.tuwien.ac.at Mon Sep 9 08:35:35 1996
From: Friedrich.Leisch at ci.tuwien.ac.at (Friedrich Leisch)
Date: Mon, 9 Sep 1996 14:35:35 +0200
Subject: UPDATE: CI BibTeX Database Collection
Message-ID: <199609091235.OAA00048@meriadoc.ci.tuwien.ac.at>

The BibTeX database collection at the Vienna Center for Computational Intelligence has been updated:

New: Advances in Neural Information Processing Systems 8 (NIPS'95)
     IEEE Transactions on NN 7/4
     Neural Computation 8/1-8/6
     Neural Networks 9/5

URL: http://www.ci.tuwien.ac.at/docs/ci/bibtex_collection.html

The Vienna Center for Computational Intelligence maintains BibTeX databases for a growing number of CI-related journals and conference proceedings (with emphasis on neural networks). All BibTeX files use a unified key format, so citing from our BibTeX files is easy.

WE ARE ALWAYS LOOKING FOR NEW ENTRIES TO THIS ARCHIVE AND APPRECIATE ALL CONTRIBUTIONS. TAKE A LOOK AT THE ABOVE WEBSITE FOR DETAILS.
Best,
Fritz Leisch

From arthur at mail4.ai.univie.ac.at Mon Sep 9 13:04:10 1996
From: arthur at mail4.ai.univie.ac.at (Arthur Flexer)
Date: Mon, 9 Sep 1996 19:04:10 +0200 (MET DST)
Subject: TR: Limitations of SOM
Message-ID: <199609091704.TAA02653@milano.ai.univie.ac.at>

Dear colleagues,

the following report is available via my personal WWW-page:

http://www.ai.univie.ac.at/~arthur/
as
ftp://ftp.ai.univie.ac.at/papers/oefai-tr-96-23.ps.Z

Sorry, there are no hardcopies available; comments are welcome!

Sincerely, Arthur Flexer

-----------------------------------------------------------------------------
Arthur Flexer                                        arthur at ai.univie.ac.at
Austrian Research Inst. for Artificial Intelligence        +43-1-5336112(Tel)
Schottengasse 3, A-1010 Vienna, Austria                    +43-1-5320652(Fax)
-----------------------------------------------------------------------------

Flexer A.: Limitations of self-organizing maps for vector quantization and multidimensional scaling, to appear in: Advances in Neural Information Processing Systems 9, edited by M.C. Mozer, M.I. Jordan, and T. Petsche, available in 1997.

Abstract: The limitations of using self-organizing maps (SOM) for either clustering/vector quantization (VQ) or multidimensional scaling (MDS) are discussed by reviewing recent empirical findings and the relevant theory. The SOM's remaining ability to do both VQ {\em and} MDS at the same time is challenged by a new combined technique of adaptive {\em K}-means clustering plus Sammon mapping of the cluster centroids. In a comprehensive empirical study using a series of multivariate normal clustering problems, SOMs are shown to perform significantly worse in terms of quantization error, in recovering the structure of the clusters, and in preserving the topology.
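[For readers outside the SOM literature, the combined technique named in the abstract, adaptive K-means clustering followed by Sammon mapping of the cluster centroids, can be sketched roughly as follows. This is a minimal pure-Python illustration under simplifying assumptions, not the author's implementation: plain Lloyd's K-means stands in for the adaptive variant, and Sammon's stress is minimized by naive gradient descent.]

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two equal-length sequences."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's K-means; returns k centroids (a stand-in for the
    adaptive K-means variant used in the report)."""
    rng = random.Random(seed)
    cents = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for p in points:
            groups[min(range(k), key=lambda i: dist(p, cents[i]))].append(p)
        for i, g in enumerate(groups):
            if g:
                cents[i] = [sum(col) / len(g) for col in zip(*g)]
    return cents

def sammon_stress(D, Y):
    """Sammon's stress: normalized squared mismatch between input-space
    distances D and the distances among the mapped points Y."""
    n = len(Y)
    c = sum(D[i][j] for i in range(n) for j in range(i + 1, n))
    e = sum((D[i][j] - dist(Y[i], Y[j])) ** 2 / D[i][j]
            for i in range(n) for j in range(i + 1, n))
    return e / c

def sammon(X, out_dim=2, iters=300, lr=0.1, seed=0):
    """Map points X into out_dim dimensions by gradient descent on
    Sammon's stress; returns the layout and the input-space distances."""
    rng = random.Random(seed)
    n = len(X)
    D = [[max(dist(X[i], X[j]), 1e-12) for j in range(n)] for i in range(n)]
    c = sum(D[i][j] for i in range(n) for j in range(i + 1, n))
    Y = [[rng.uniform(-1.0, 1.0) for _ in range(out_dim)] for _ in range(n)]
    for _ in range(iters):
        for p in range(n):
            grad = [0.0] * out_dim
            for j in range(n):
                if j == p:
                    continue
                d = max(dist(Y[p], Y[j]), 1e-12)
                coef = (D[p][j] - d) / (D[p][j] * d)
                for t in range(out_dim):
                    grad[t] += -2.0 / c * coef * (Y[p][t] - Y[j][t])
            for t in range(out_dim):
                Y[p][t] -= lr * grad[t]
    return Y, D

# Combined pipeline as in the abstract: cluster the data, then map only
# the centroids, e.g.:
#   centroids = kmeans(data, k=10)
#   layout, D = sammon(centroids, out_dim=2)
```

[Running the Sammon mapping on the handful of centroids, rather than on the full data set, is what keeps the combined method cheap: the stress computation is quadratic in the number of mapped points.]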
From payman at isdl.ee.washington.edu Mon Sep 9 20:51:15 1996
From: payman at isdl.ee.washington.edu (Payman Arabshahi)
Date: Mon, 9 Sep 1996 17:51:15 -0700 (PDT)
Subject: CFP: IEEE Transactions on Neural Networks, Special Issue
Message-ID: <199609100051.RAA03483@isdl.ee.washington.edu>

--------------------------------------------------------------------------
                            Call for Papers
       Special Issue of the IEEE Transactions on Neural Networks:
                 Everyday Applications of Neural Networks
--------------------------------------------------------------------------

The objective of this special issue is the presentation of cases of ongoing everyday use of neural networks in industry, commerce, medicine, engineering, the military and other disciplines. Even though artificial neural networks have been around since the 1940s, the last decade has seen a tremendous upsurge in research and development. This activity has been at two levels: (i) advances in neural techniques and network architectures, and (ii) exploration of applications of this technology in various fields. Neural network technology has reached a degree of maturity, as evidenced by an ever-increasing number of applications. It is useful, at this stage, to take stock of applications to provide the neural practitioner with (i) knowledge of fields wherein neural technology has had an impact, and (ii) guidance concerning fruitful areas of research and development in neurotechnology that can have a significant impact.

This special issue of the TNN calls for submission of papers concerning neural technology adopted for ongoing or everyday use. Hybrid neural technologies, such as neuro-fuzzy systems, are also appropriate. Submissions should specifically address the infusion and adaptation of neural technology in various areas. Exploratory applications papers, normally welcome for submission to the TNN, are specifically discouraged for this special issue; papers on adopted and established applications, rather, are appropriate.
Submissions to the special issue will be judged based on the veracity of everyday use, comparative performance over previously used techniques, and lessons learned from the development and applications. Descriptions of remaining open problems, or of desired though not yet achieved performance, are encouraged. Six copies of the manuscript should be mailed to one of the special issue editors by November 15, 1996. The special issue is tentatively scheduled for publication in July 1997. Submissions may be either brief papers or regular papers; please refer to the TNN instructions to authors.

Tharam Dillon
Professor of Computer Science
Head, Department of Computer Science and Computer Engineering
La Trobe University
Bundoora, Melbourne, Victoria 3083
Australia
Tel: +61 3 479 2598
Fax: +61 3 479 3060
tharam at latcs1.cs.latrobe.edu.au

Payman Arabshahi
University of Washington
Department of Electrical Engineering
Benton Way at Stevens Way
Box 352500
Seattle, WA 98195
United States of America
payman at ee.washington.edu
206 236 2694
FAX: 206 543 3842

Robert J. Marks II
University of Washington
Department of Electrical Engineering
c/o 1131 199th Street SW
Lynnwood, WA 98036-7138
United States of America
r.marks at ieee.org
206 543 6990
FAX: 206 776 9297

From icie96 at mara.fi.uba.ar Mon Sep 9 17:20:21 1996
From: icie96 at mara.fi.uba.ar (1995 Congress)
Date: Mon, 9 Sep 1996 18:20:21 -0300 (GMT-0300)
Subject: New Dates for III ICIE (was ICIE 96)
Message-ID:

*** NEW DATES ***** NEW DATES ***** NEW DATES ***** NEW DATES

CALL FOR PAPERS
III INTERNATIONAL CONGRESS ON INFORMATION ENGINEERING
III ICIE
===> New Date: April 16th & 17th, 1997
Computer Science Department, School of Engineering
University of Buenos Aires.
ARGENTINA
========================================================================

The III International Congress on Information Engineering will be held in the Computer Science Department of the School of Engineering of the University of Buenos Aires on April 16th & 17th, 1997, in Buenos Aires City, Argentina.

PAPERS from all countries are sought that: 1) present results of researchers' work in the area, or 2) present applications to the solution of problems in industry, business, government and related areas.

AREAS OF APPLICATION include but are not limited to: manufacturing, automation, control systems, planning, design, production, distribution, marketing, human resources, finance, stock exchange, international business, environmental control, communication media, legal aspects, decision support.

TECHNOLOGY TRANSFER topics include but are not limited to: strategies for introducing and institutionalising Information Engineering technology, training of human resources in Information Engineering, justification of Information Engineering projects, cooperation projects, impact of Information Engineering on the social environment of the company, standards.

INFORMATION TECHNIQUES include but are not limited to: knowledge-based systems, neural networks, fuzzy systems, artificial intelligence, data bases, computational algebra, computer languages, object-oriented technology, multimedia, computer vision, robotics, human-computer interfaces, tutoring systems, networking, software engineering, operational research.

Persons wishing to submit a paper must send an abstract (500 words) by e-mail to icie96 at mara.fi.uba.ar, and four (4) copies written in Spanish or English to: Program Committee, Computer Science Department, School of Engineering, University of Buenos Aires, Paseo Colon 850, 4to PISO, (1063) Capital Federal, ARGENTINA. The paper shall identify the area and technique to which it belongs.
Papers will be evaluated with respect to their originality, correctness, clarity and relevance. Use an Arial or Times New Roman font, size 12, single-spaced, with a maximum of 10 pages in A4 format. Margins: 2.5 cm (top, bottom, left, right). Selected papers will be published in the proceedings of the Congress.

IMPORTANT DATES: Papers must be received by November 15th, 1996. A fax version of the paper is acceptable for evaluation. Authors will be notified of acceptance or rejection by e-mail or fax by December 15th. FAX: 54 1 331-0129 (54 for Argentina, 1 for Buenos Aires)

PROGRAM COMMITTEE
Prof. Lic. Gregorio Perichinsky, University of Buenos Aires
Prof. Ing. Armando De Giusti, University of La Plata
Prof. M.Sc. Raul Gallard, University of San Luis
Prof. Dr. Edmundo Gramajo, Technical University of Madrid / Buenos Aires Institute of Technology
Prof. Ph.D. Reza Korramshagol, American University
Prof. Ph.D. Anita LaSalle, American University
Prof. Ing. E. Cabellos Pardos, University of Salamanca
Prof. M.Ing. Raimundo D'Aquila, Bs. As. Institute of Technology
Prof. Lic. Bibiana Rossi, Technological University
Prof. Ing. Diana Pallioto, University of Santiago del Estero
Prof. Ing. Cristina Fenema, University of Catamarca
Prof. Lic. Stella M. Valiente, University of Mar del Plata
Prof. C.C. Maria Feldgen, University of Buenos Aires
Prof. Ing. Osvaldo Clua, University of Buenos Aires
Prof. M.Ing. R. Garcia Martinez, University of Buenos Aires
Prof. Lic. Javier Blanque, University of Lujan

From sandro at parker.physio.nwu.edu Wed Sep 11 18:49:21 1996
From: sandro at parker.physio.nwu.edu (Sandro Mussa-Ivaldi)
Date: Wed, 11 Sep 96 17:49:21 CDT
Subject: Postdoctoral Fellowship - Neuromorphic Control
Message-ID: <9609112249.AA04812@parker.physio.nwu.edu>

********NEUROMORPHIC CONTROL SYSTEMS********

A postdoctoral position is available at the Department of Physiology of Northwestern University Medical School to work on an interdisciplinary project aimed at developing a control system for an artificial limb. This system will be based on the rat's hindlimb geometry, and the controller will emulate some of the known physiology of the muscles and of the spinal cord. The research will involve both theoretical and experimental components and will be carried out in collaboration with Sandro Mussa-Ivaldi and Simon Alford at the Department of Physiology and with Ed Colgate at the Department of Mechanical Engineering. The position is available immediately and will last a minimum of one year, with the possibility of extension for another year. Applicants should have a PhD in a relevant discipline and solid experience in control system engineering and computational sciences. This work will also involve some amount of cellular neurophysiology. While a strong background in experimental neurophysiology is not a prerequisite, the candidate should be willing to become acquainted with these techniques and to carry out experimental work. Northwestern University offers competitive salary and benefits.
Applicants should send a CV, a statement of their interests and professional goals (no longer than 1 page), and the names, addresses and telephone numbers of at least two references to Vera Reeves, either via email (v-reeves at nwu.edu) or via surface mail at the following address:

Vera Reeves
Business Administrator
Department of Physiology (M211)
Northwestern University Medical School
303 East Chicago Ave
Chicago, IL 60611-3008

Northwestern University is an equal opportunity, affirmative action educator and employer.

---------------------------
sandro at nwu.edu

From bogner at eleceng.adelaide.edu.au Wed Sep 11 03:17:49 1996
From: bogner at eleceng.adelaide.edu.au (Robert E. Bogner)
Date: Wed, 11 Sep 1996 16:47:49 +0930
Subject: Post-doc or Research Fellow position
Message-ID: <9609110717.AA28415@augean.eleceng.adelaide.edu.au>

POSTDOCTORAL OR RESEARCH FELLOW
Signal Processing and Pattern Recognition

A postdoctoral or research fellow is sought to join, as soon as possible, the Centre for Sensor Signal and Information Processing (CSSIP) and the University of Adelaide EE Eng Department. The CSSIP is one of several cooperative research centres awarded by the Australian Government to establish excellence in research and development. The University of Adelaide, represented by the EE Eng Dept, is a partner in this cooperative research centre, together with the Defence Science and Technology Organization (DSTO), four other Universities, and several companies. The cooperative research centre consists of about 100 effective full-time researchers, and is well equipped with many UNIX workstations and a massively parallel machine (DEC MPP). The aim of the position is to develop and investigate principles in the areas of sensor signal and image processing, classification and separation of signals, pattern recognition and data fusion. The position is for one year with a strong possibility of renewal.
DUTIES: In consultation with task leaders and specialist researchers, to investigate alternative algorithm design approaches, to design experiments on applications of signal processing and artificial neural networks, to prepare data and carry out the experiments, to prepare software for testing algorithms, and to prepare or assist with the preparation of technical reports.

QUALIFICATIONS: The successful candidate must have a Ph.D. or equivalent achievement, a proven research record, and a demonstrated ability to communicate well in written and spoken English.

CONDITIONS: Pay and conditions will be in accordance with University of Adelaide policies, and will depend on qualifications and experience. Appointments may be made in scales A$ 36285 to A$ 41000 for a postdoc, and A$ 42538 to A$ 48688 for a research fellow.

ENQUIRIES: Prof. R. E. Bogner, Electrical & Electronic Engineering Dept., The University of Adelaide, Adelaide, South Australia 5005. Phone: (61)-08-303-5589, Fax: (61)-08-303 4360, Email: bogner at eleceng.adelaide.edu.au; or Dr. A. Bouzerdoum, Phone (61)-08-303-5464, Fax (61)-08-303 4360, Email: bouzerda at eleceng.adelaide.edu.au (OR: at ISSPA96 - leave a message at the message station.)

CENTRE FOR SENSOR SIGNAL AND INFORMATION PROCESSING (CSSIP): The University of Adelaide, represented by the Department of Electrical and Electronic Engineering, is a partner in this Cooperative Research Centre, together with the Defence Science and Technology Organization, four other Universities, and several companies. These research centres are part of a scheme of the Australian Government, and they receive considerable financial support from the government, matching contributions from the partners. Thus they can provide research facilities, contacts, and support for scholars while being effectively extensions of the partner Universities.
The objective of the CSSIP is to bring together a sufficient mass of high-quality workers, to establish and maintain a centre of excellence, and to ensure that the benefits flow continuously to industry and education. Its programs include: strategic research into underpinning technologies with application to specific projects; education and training responsive to needs; and services to industry in research, development and teaching.

From dwang at cis.ohio-state.edu Wed Sep 11 12:38:58 1996
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Wed, 11 Sep 1996 12:38:58 -0400
Subject: NIPS'96 Workshop on Auditory Scene Analysis
Message-ID: <199609111638.MAA09066@sarajevo.cis.ohio-state.edu>

CALL FOR SPEAKERS

NIPS'96 Postconference Workshop
CONNECTIONIST MODELLING OF AUDITORY SCENE ANALYSIS
Snowmass (Aspen), Colorado USA
Friday Dec 6th, 1996

Guy J. Brown
Department of Computer Science
University of Sheffield
Regent Court, 211 Portobello St.
Sheffield S1 4DP, U.K.
Fax: +44 (0)114 2780972
Email: guy at dcs.shef.ac.uk
http://www.dcs.shef.ac.uk/~guy

DeLiang Wang
Department of Computer & Information Sci. and Center for Cognitive Sci.
The Ohio State University
Columbus, OH 43210-1277, USA
Fax: (614)2922911
Email: dwang at cis.ohio-state.edu
http://www.cis.ohio-state.edu/~dwang

OBJECTIVES

Auditory scene analysis describes the ability of listeners to separate the acoustic events arriving from different environmental sources into separate perceptual representations (streams). It is related to, but more general than, the well-known "cocktail party effect", which refers to the ability of listeners to segregate one voice from a mixture of many other voices. Computational models of auditory scene analysis are likely to play an important role in building speech recognition systems that work in realistic acoustic environments. However, many aspects of this important modelling problem are as yet largely unsolved.
There has been significant growth in neural modelling of auditory scene analysis since Albert Bregman published his book "Auditory Scene Analysis" in 1990. This workshop seeks to bring together a diverse group of researchers to critically examine the progress made so far in this challenging research area, and to discuss unsolved problems. In particular, we intend to address the following issues:

* Whether attention is involved in primitive (bottom-up) auditory scene analysis
* How primitive auditory scene analysis is coupled with schema-based (knowledge-based) auditory scene analysis
* The utility of the oscillatory approach

In addition to reviewing these issues, we would like to chart, if possible, a neural network framework for segmenting simultaneously presented auditory patterns.

WORKSHOP FORMAT

This one-day workshop will be organised into two three-hour sessions, one in early morning and one in late afternoon. The intervening time is reserved for skiing or free-wheeling interactions between participants. Each session consists of two hours of oral presentations and one hour of panel discussion.

SUBMISSION OF ABSTRACTS

A group of invited experts, including Albert Bregman, will speak at the workshop. We are seeking a few more speakers to contribute. If you have done work on this or related topics and would like to contribute, please send an abstract as soon as possible to:

GUY J. BROWN
Department of Computer Science
University of Sheffield
Regent Court, 211 Portobello Street
Sheffield S1 4DP, UK
Phone: +44 (0)114 2825568; Fax: +44 (0)114 2780972
Email: guy at dcs.shef.ac.uk

Abstracts should be sent by email or by fax.

Important Dates:
Deadline for submission of abstracts: 27 September, 1996
Notification of acceptance: 7 October, 1996

A set of workshop notes will be produced.
Please contact the workshop organizers for further information, or consult the NIPS WWW home page: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From edelman at ai.mit.edu Tue Sep 10 20:45:09 1996
From: edelman at ai.mit.edu (Shimon Edelman)
Date: Tue, 10 Sep 96 20:45:09 EDT
Subject: [arthur@mail4.ai.univie.ac.at: TR: Limitations of SOM]
Message-ID: <9609110045.AA03818@peduncle.ai.mit.edu>

Seeing that comments are welcome... there seems to be a rather glaring gap in the references in this TR. Fukunaga proposed a similar combination of clustering and topology-preservation criteria in 1972, and there was a recent paper by Webb following up on that work. It would have been nice to see Baxter's idea of Canonical Vector Quantization discussed in this context. By the way, what is called MDS in this TR is actually not (although it is related; MDS is the process of placement of points in a metric space in a manner that preserves their distances - or the ranks thereof - without knowledge of the original coordinates of the points).

@Article{KoontzFukunaga72,
  author  = "W. L. G. Koontz and K. Fukunaga",
  title   = "A nonlinear feature extraction algorithm using distance information",
  journal = "IEEE Trans. Comput.",
  year    = 1972,
  volume  = 21,
  pages   = "56-63",
  annote  = "combines class separation and distance preservation criteria for dimensionality reduction"
}

@Article{Webb95,
  author  = "A. R. Webb",
  title   = "Multidimensional-Scaling by Iterative Majorization Using Radial Basis Functions",
  journal = "Pattern Recognition",
  year    = 1995,
  volume  = 28,
  pages   = "753-759",
  annote  = "MDS, RBFs, nonlinear PCA. This paper considers the use of radial basis functions for modelling the non-linear transformation of a data set obtained by a multidimensional scaling analysis. This approach has two advantages over conventional nonmetric multidimensional scaling. It reduces the number of parameters to estimate and it provides a transformation that may be used on an unseen test set. A scheme based on iterative majorization is proposed for obtaining the parameters of the network."
}

@TechReport{Baxter95b,
  author      = "J. Baxter",
  title       = "The Canonical Metric for Vector Quantization",
  institution = "University of London",
  year        = 1995,
  type        = "NeuroCOLT",
  number      = "NC-TR-95-047",
  annote      = "Abstract. To measure the quality of a set of vector quantization points a means of measuring the distance between two points is required. Common metrics such as the {Hamming} and {Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is argued that there often exists a natural {environment} of functions to the quantization process (for example, the word classifiers in speech recognition and the character classifiers in character recognition) and that such an environment induces a {canonical metric} on the space being quantized. It is proved that optimizing the {reconstruction error} with respect to the canonical metric gives rise to optimal approximations of the functions in the environment, so that the canonical metric can be viewed as embodying all the essential information relevant to learning the functions in the environment. Techniques for {learning} the canonical metric are discussed, in particular the relationship between learning the canonical metric and {internal representation learning}."
}

-Shimon

Dr. Shimon Edelman, Center for Biol & Comp Learning, MIT
(on leave from Weizmann Inst of Science, Rehovot, Israel)
Web home: http://eris.wisdom.weizmann.ac.il/~edelman
fax: (+1) 617 253-2964   tel: 253-0549
edelman at ai.mit.edu

> From: Arthur Flexer
> Subject: TR: Limitations of SOM
> To: connectionists at cs.cmu.edu
> Date: Mon, 9 Sep 1996 19:04:10 +0200 (MET DST)
> [quoted announcement and abstract omitted; see the original posting above]
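[Edelman's parenthetical definition of MDS, placing points in a space so that given pairwise distances, or their ranks, are reproduced with no access to the original coordinates, can be made concrete with a toy case. The sketch below is purely illustrative and comes from neither the TR nor the reply; it assumes the points happen to be collinear, so the placement can be done directly rather than by optimizing a stress function.]

```python
def embed_line(D):
    """Given the n x n pairwise-distance matrix of collinear points,
    recover 1-D coordinates up to translation and reflection: point 0 is
    pinned at the origin, point 1 on the positive side, and each further
    point's sign is chosen so its distance to point 1 is also reproduced.
    Only the distance matrix is used; no original coordinates."""
    n = len(D)
    x = [0.0] * n
    if n > 1:
        x[1] = float(D[0][1])
    for i in range(2, n):
        plus, minus = float(D[0][i]), -float(D[0][i])  # magnitude from D[0][i]
        # D[1][i] disambiguates which side of the origin point i lies on.
        if abs(abs(plus - x[1]) - D[1][i]) <= abs(abs(minus - x[1]) - D[1][i]):
            x[i] = plus
        else:
            x[i] = minus
    return x

# Distances among four hidden collinear points (originally at 2, 5, 11, -1):
D = [[0, 3, 9, 3],
     [3, 0, 6, 6],
     [9, 6, 0, 12],
     [3, 6, 12, 0]]
coords = embed_line(D)  # [0.0, 3.0, 9.0, -3.0]: the original layout shifted by -2
```

[General MDS replaces this hand construction with numerical minimization of a stress function (metric MDS) or rank-based fitting (nonmetric MDS), but the input is the same: distances only.]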
From linster at berg.harvard.edu Fri Sep 13 18:46:53 1996
From: linster at berg.harvard.edu (Christiane Linster)
Date: Fri, 13 Sep 1996 18:46:53 -0400 (EDT)
Subject: Nips Workshop
Message-ID:

CALL FOR PARTICIPATION

NIPS'96 Postconference Workshop
NEURAL MODULATION AND NEURAL INFORMATION PROCESSING
Snowmass (Aspen), Colorado USA
Friday Dec 6th, 1996

Akaysha Tang
Computational Neurobiology Lab
The Salk Institute
10010 North Torrey Pines Road
La Jolla, CA 92037
Tel: (619) 453 4100 x1618
Fax: (619) 587 0417
tang at salk.edu

Christiane Linster
Dept. of Psychology
Harvard University
33, Kirkland Street
Cambridge, MA 02138
Tel: (617) 496 2555
Fax: (617) 495 3827
linster at berg.harvard.edu

OBJECTIVES

Neural modulation is ubiquitous in the nervous system and can provide the neural system with additional computational power that has yet to be characterized. From a computational point of view, the effects of neuromodulation on neural information processing can be far more sophisticated than the simple increased/decreased gain control assumed by many modelers. We would like to bring together scientists from diverse fields of study, including psychopharmacology, behavioral genetics, neurophysiology, neural networks, and computational neuroscience. We hope, through sessions of highly critical, interactive and interdisciplinary discussions:

* to identify the strengths and weaknesses of existing research methodology and practices within each field;
* to work out a series of strategies to increase the interactions between experimental and theoretical research; and
* to further our understanding of the role of neuromodulation in neural information processing.

WORKSHOP FORMAT

This one-day workshop will be organised into two three-hour sessions, one in early morning and one in late afternoon. The intervening time is reserved for skiing or free-wheeling interactions between participants. Each session consists of two hours of oral presentations and one hour of panel discussion.
A group of invited researchers in the field will speak, including Terry Sejnowski, John Lisman and Michael Hasselmo. If you have done work on this or related topics and would like to attend and/or contribute, please send an email describing your research interests to:

Christiane Linster
linster at berg.harvard.edu

Abstracts should be sent by email or by fax. Please contact the workshop organizers for further information, or consult the NIPS WWW home page: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/

From geoff at salk.edu Fri Sep 13 13:46:02 1996
From: geoff at salk.edu (geoff@salk.edu)
Date: Fri, 13 Sep 1996 10:46:02 -0700 (PDT)
Subject: Paper available
Message-ID: <199609131746.KAA11605@gauss.salk.edu>

The following paper is available via
ftp://ftp.cnl.salk.edu/pub/geoff/goodhill_sejnowski_96.ps.Z
or
http://www.cnl.salk.edu/~geoff

QUANTIFYING NEIGHBOURHOOD PRESERVATION IN TOPOGRAPHIC MAPPINGS

Geoffrey J. Goodhill & Terrence J. Sejnowski
The Salk Institute

From: Proceedings of the 3rd Joint Symposium on Neural Computation, 1996

ABSTRACT

Mappings that preserve neighbourhood relationships are important in many contexts, from neurobiology to multivariate data analysis. It is important to be clear about precisely what is meant by preserving neighbourhoods. At least three issues have to be addressed: how neighbourhoods are defined, how a perfectly neighbourhood-preserving mapping is defined, and how an objective function for measuring discrepancies from perfect neighbourhood preservation is defined. We review several standard methods, and using a simple example mapping problem show that the different assumptions of each lead to non-trivially different answers. We also introduce a particular measure of topographic distortion, which has the form of a quadratic assignment problem. Many previous methods are closely related to this measure, which thus serves to unify disparate approaches.
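[For readers unfamiliar with the quadratic assignment form mentioned at the end of the abstract: measures of this family score a candidate mapping by summing, over all pairs of input items, an input-space proximity weighted by the output-space separation of the items' images; minimizing over all assignments is then a quadratic assignment problem. The sketch below illustrates only this generic form; the proximity matrix F, distance matrix D and cost used here are hypothetical and are not the specific measure introduced in the paper.]

```python
from itertools import permutations

def qap_cost(F, D, perm):
    """Quadratic-assignment-style distortion of mapping input item i to
    output position perm[i]: sum over ordered pairs of the input-space
    proximity F[i][j] times the output-space distance D[perm[i]][perm[j]].
    Low cost means input neighbours land on nearby output positions."""
    n = len(F)
    return sum(F[i][j] * D[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def best_mapping(F, D):
    """Brute-force minimizer; only feasible for tiny n, since the general
    quadratic assignment problem is NP-hard."""
    n = len(F)
    return min(permutations(range(n)), key=lambda p: qap_cost(F, D, p))

# Four input items on a chain (proximity 1 between adjacent items) mapped
# onto four positions on a line (distance = position difference).
F = [[1 if abs(i - j) == 1 else 0 for j in range(4)] for i in range(4)]
D = [[abs(i - j) for j in range(4)] for i in range(4)]
# The order-preserving map is cheaper than a scrambled one:
# qap_cost(F, D, (0, 1, 2, 3)) == 6, while qap_cost(F, D, (0, 2, 1, 3)) == 10
```

[Writing distortion this way makes the unifying point of the abstract plausible: many published neighbourhood-preservation measures differ only in how F and D are defined, not in the form of the sum.]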
22 pages; uncompressed postscript = 1.1MB.

NOTE: I advertised a tech report with the same title on this list last year; the new paper contains more recent work.

From cas-cns at cns.bu.edu Mon Sep 16 09:05:02 1996
From: cas-cns at cns.bu.edu (CAS/CNS)
Date: Mon, 16 Sep 1996 09:05:02 -0400
Subject: Grad Training - BU Cognitive & Neural Systems
Message-ID: <199609161304.JAA09941@cns.bu.edu>

*************************************************************************
GRADUATE TRAINING IN THE DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS (CNS) AT BOSTON UNIVERSITY
*************************************************************************

The Boston University Department of Cognitive and Neural Systems offers comprehensive graduate training in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of technological problems. Applications for Fall 1997 admission and financial aid are now being accepted for both the MA and PhD degree programs.

To obtain a brochure describing the CNS Program and a set of application materials, write, telephone, or fax:

DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS
Boston University
677 Beacon Street
Boston, MA 02215
617/353-9481 (phone)
617/353-7755 (fax)

or send via email your full name and mailing address to: rll at cns.bu.edu (Ms. Robin L. Locke)

Applications for admission and financial aid should be received by the Graduate School Admissions Office no later than January 15. Late applications will be considered until May 1; after that date applications will be considered only as special cases. Applicants are required to submit undergraduate (and, if applicable, graduate) transcripts, three letters of recommendation, and Graduate Record Examination (GRE) scores. The Advanced Test should be in the candidate's area of departmental specialization.
GRE scores may be waived for MA candidates and, in exceptional cases, for PhD candidates, but absence of these scores may decrease an applicant's chances for admission and financial aid. Non-degree students may also enroll in CNS courses on a part-time basis.

Stephen Grossberg, Chairman
Gail A. Carpenter, Director of Graduate Studies

Description of the CNS Department:

The Department of Cognitive and Neural Systems (CNS) provides advanced training and research experience for graduate students interested in the neural and computational principles, mechanisms, and architectures that underlie human and animal behavior, and the application of neural network architectures to the solution of outstanding technological problems. Students are trained in a broad range of areas concerning cognitive and neural systems, including vision and image processing; speech and language understanding; adaptive pattern recognition; cognitive information processing; self-organization; associative learning and long-term memory; cooperative and competitive network dynamics and short-term memory; reinforcement, motivation, and attention; adaptive sensory-motor control and robotics; and biological rhythms; as well as the mathematical and computational methods needed to support advanced modeling research and applications. The CNS Department awards MA, PhD, and BA/MA degrees.

The CNS Department embodies a number of unique features. It has developed a curriculum that consists of interdisciplinary graduate courses, each of which integrates the psychological, neurobiological, mathematical, and computational information needed to theoretically investigate fundamental issues concerning mind and brain processes and the applications of neural networks to technology. Additional advanced courses, including research seminars, are also offered.
Each course is typically taught once a week in the afternoon or evening to make the program available to qualified students, including working professionals, throughout the Boston area. Students develop a coherent area of expertise by designing a program that includes courses in areas such as biology, computer science, engineering, mathematics, and psychology, in addition to courses in the CNS curriculum.

The CNS Department prepares students for thesis research with scientists in one of several Boston University research centers or groups, and with Boston-area scientists collaborating with these centers. The unit most closely linked to the department is the Center for Adaptive Systems. Students interested in neural network hardware work with researchers in CNS, at the College of Engineering, and at MIT Lincoln Laboratory. Other research resources include distinguished research groups in neurophysiology, neuroanatomy, and neuropharmacology at the Medical School and the Charles River Campus; in sensory robotics, biomedical engineering, computer and systems engineering, and neuromuscular research within the Engineering School; in dynamical systems within the Mathematics Department; in theoretical computer science within the Computer Science Department; and in biophysics and computational physics within the Physics Department.

In addition to its basic research and training program, the department conducts a seminar series, as well as conferences and symposia, which bring together distinguished scientists from both experimental and theoretical disciplines. The department is housed in its own new four-story building, which includes ample space for faculty and student offices and laboratories, as well as an auditorium, classroom and seminar rooms, library, and faculty-student lounge.
1996-97 CAS MEMBERS and CNS FACULTY: Jelle Atema Professor of Biology Director, Boston University Marine Program (BUMP) PhD, University of Michigan Sensory physiology and behavior Aijaz Baloch Research Associate of Cognitive and Neural Systems PhD, Electrical Engineering, Boston University Neural modeling of the role of visual attention in recognition, learning, and motor control; computational vision; adaptive control systems; reinforcement learning Helen Barbas Associate Professor, Department of Health Sciences PhD, Physiology/Neurophysiology, McGill University Organization of the prefrontal cortex, evolution of the neocortex Jacob Beck Research Professor of Cognitive and Neural Systems PhD, Psychology, Cornell University Visual perception, psychophysics, computational models Daniel H. Bullock Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, Stanford University Real-time neural systems, sensory-motor learning and control, evolution of intelligence, cognitive development Gail A. Carpenter Professor of Cognitive and Neural Systems and Mathematics Director of Graduate Studies, Department of Cognitive and Neural Systems PhD, Mathematics, University of Wisconsin, Madison Pattern recognition, categorization, machine learning, differential equations Laird Cermak Professor of Neuropsychology, School of Medicine Professor of Occupational Therapy, Sargent College Director, Memory Disorders Research Center, Boston Veterans Affairs Medical Center PhD, Ohio State University Memory disorders Michael A. Cohen Associate Professor of Cognitive and Neural Systems and Computer Science Director, CAS/CNS Computation Labs PhD, Psychology, Harvard University Speech and language processing, measurement theory, neural modeling, dynamical systems H. Steven Colburn Professor of Biomedical Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Audition, binaural interaction, signal processing models of hearing William D. 
Eldred III Associate Professor of Biology PhD, University of Colorado, Health Science Center Visual neural biology Paolo Gaudiano Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Computational and neural models of robotics, vision, adaptive sensory-motor control, and behavioral neurobiology Jean Berko Gleason Professor of Psychology PhD, Harvard University Psycholinguistics Douglas Greve Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Active vision Stephen Grossberg Wang Professor of Cognitive and Neural Systems Professor of Mathematics, Psychology, and Biomedical Engineering Director, Center for Adaptive Systems Chairman, Department of Cognitive and Neural Systems PhD, Mathematics, Rockefeller University Theoretical biology, theoretical psychology, dynamical systems, applied mathematics Frank Guenther Assistant Professor of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Biological sensory-motor control, spatial representation, speech production Thomas G. 
Kincaid Professor of Electrical, Computer and Systems Engineering PhD, Electrical Engineering, Massachusetts Institute of Technology Signal and image processing, neural networks, non-destructive testing Nancy Kopell Professor of Mathematics PhD, Mathematics, University of California at Berkeley Dynamical systems, mathematical physiology, pattern formation in biological/physical systems Ennio Mingolla Associate Professor of Cognitive and Neural Systems and Psychology PhD, Psychology, University of Connecticut Visual perception, mathematical modeling of visual processes Alan Peters Chairman and Professor of Anatomy and Neurobiology, School of Medicine PhD, Zoology, Bristol University, United Kingdom Organization of neurons in the cerebral cortex, effects of aging on the primate brain, fine structure of the nervous system Andrzej Przybyszewski Senior Research Associate of Cognitive and Neural Systems PhD, Warsaw Medical Academy Retinal physiology, mathematical and computer modeling of dynamical properties of neurons in the visual system Adam Reeves Adjunct Professor of Cognitive and Neural Systems Professor of Psychology, Northeastern University PhD, Psychology, City University of New York Psychophysics, cognitive psychology, vision Mark Rubin Research Assistant Professor of Cognitive and Neural Systems Research Physicist, Naval Air Warfare Center, China Lake, CA (on leave) PhD, Physics, University of Chicago Neural networks for vision, pattern recognition, and motor control Robert Savoy Adjunct Associate Professor of Cognitive and Neural Systems Scientist, Rowland Institute for Science PhD, Experimental Psychology, Harvard University Computational neuroscience; visual psychophysics of color, form, and motion perception Eric Schwartz Professor of Cognitive and Neural Systems; Electrical, Computer and Systems Engineering; and Anatomy and Neurobiology PhD, High Energy Physics, Columbia University Computational neuroscience, machine vision, neuroanatomy, neural modeling 
Robert Sekuler Adjunct Professor of Cognitive and Neural Systems Research Professor of Biomedical Engineering, College of Engineering, BioMolecular Engineering Research Center Jesse and Louis Salvage Professor of Psychology, Brandeis University Sc.M., PhD, Brown University Barbara Shinn-Cunningham Assistant Professor of Cognitive and Neural Systems PhD, Electrical Engineering and Computer Science, Massachusetts Institute of Technology Psychoacoustics, audition, auditory localization, binaural hearing, sensorimotor adaptation, mathematical models of human performance Takeo Watanabe Assistant Professor of Psychology PhD, Behavioral Sciences, University of Tokyo Perception of objects and motion and effects of attention on perception using psychophysics and brain imaging (f-MRI) Allen Waxman Adjunct Associate Professor of Cognitive and Neural Systems Senior Staff Scientist, MIT Lincoln Laboratory PhD, Astrophysics, University of Chicago Visual system modeling, mobile robotic systems, parallel computing, optoelectronic hybrid architectures James Williamson Research Associate of Cognitive and Neural Systems PhD, Cognitive and Neural Systems, Boston University Image processing and object recognition. Particular interests are: dynamic binding, self-organization, shape representation, and classification Jeremy Wolfe Adjunct Associate Professor of Cognitive and Neural Systems Associate Professor of Ophthalmology, Harvard Medical School Psychophysicist, Brigham & Women's Hospital, Surgery Dept. 
Director of Psychophysical Studies, Center for Clinical Cataract Research PhD, Massachusetts Institute of Technology Visual search ************************************************************************* DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS GRADUATE TRAINING ANNOUNCEMENT Boston University 677 Beacon Street Boston, MA 02215 Phone: 617/353-9481 Fax: 617/353-7755 Email: rll at cns.bu.edu *************************************************************************  From Jon.Baxter at keating.anu.edu.au Mon Sep 16 22:03:05 1996 From: Jon.Baxter at keating.anu.edu.au (Jonathan Baxter) Date: Tue, 17 Sep 1996 12:03:05 +1000 (EST) Subject: [arthur@mail4.ai.univie.ac.at: TR: Limitations of SOM] In-Reply-To: <9609110045.AA03818@peduncle.ai.mit.edu> from "Shimon Edelman" at Sep 10, 96 08:45:09 pm Message-ID: <199609170203.MAA01701@keating.anu.edu.au> Commenting on this paper: > > Flexer A.: Limitations of self-organizing maps for vector quantization and > > multidimensional scaling, to appear in: Advances in Neural Information > > Processing Systems 9, edited by M.C. Mozer, M.I. Jordan, and T. Petsche, > > available in 1997. Shimon Edelman said: > > Seeing that comments are welcome... there seems to be a rather glaring > gap in the references in this TR. Fukunaga proposed a similar > combination of clustering and topology-preservation criteria in 1972, > and there was a recent paper by Webb following up on that work. > > It would have been nice to see Baxter's idea of Canonical Vector > Quantization discussed in this context. If anyone is interested, there is a recent version (July 1996) of the Canonical Quantization paper on my home page: http://keating.anu.edu.au/~jon/papers/canonical.ps.gz Title: The Canonical Distortion Measure for Vector Quantization and Approximation. Author: Jonathan Baxter Abstract: To measure the quality of a set of vector quantization points a means of measuring the distance between a random point and its quantization is required. 
Common metrics such as the {\em Hamming} and {\em Euclidean} metrics, while mathematically simple, are inappropriate for comparing speech signals or images. In this paper it is shown how an {\em environment} of functions on an input space $X$ induces a {\em canonical distortion measure} (CDM) on X. The depiction ``canonical'' is justified because it is shown that optimizing the reconstruction error of X with respect to the CDM gives rise to optimal piecewise constant approximations of the functions in the environment. The CDM is calculated in closed form for several different function classes. An algorithm for training neural networks to implement the CDM is presented along with some encouraging experimental results. ------------- Jonathan Baxter Department of Systems Engineering Research School of Information Science and Technology Australian National University Canberra, A.C.T 0200 Australia Tel: +61 6 249 5182 Fax: +61 6 279 8088 E-mail: jon at syseng.anu.edu.au  From gorr at willamette.edu Tue Sep 17 14:32:58 1996 From: gorr at willamette.edu (Jenny Orr) Date: Tue, 17 Sep 1996 11:32:58 -0700 (PDT) Subject: NIPS*96 Postconference Workshop Message-ID: <199609171832.LAA01053@mirror.willamette.edu> CALL FOR SPEAKERS NIPS*96 Postconference Workshop TRICKS OF THE TRADE: How to Make Algorithms REALLY Work Snowmass (Aspen), Colorado USA Saturday Dec 7th, 1996 ORGANIZERS: Jenny Orr Willamette University gorr at willamette.edu Klaus Muller GMD First, Germany klaus at first.gmd.de Rich Caruana Carnegie Mellon caruana at cs.cmu.edu OBJECTIVES: Using neural networks to solve difficult problems often requires as much art as science. Researchers and practitioners acquire, through experience and word-of-mouth, techniques and heuristics that help them succeed. Often these ``tricks'' are theoretically well motivated. Sometimes they're the result of trial and error. In this workshop we ask you to share the ``tricks'' you have found helpful. 
Our focus will be mainly regression and classification.

WHAT IS A TRICK? A technique, rule-of-thumb, or heuristic that:
- is easy to describe and understand
- can make a real difference in practice
- is not (yet) part of well-documented technique
- has broad application
- may or may not (yet) have a theoretical explanation

Examples of well-known tricks include: early stopping, using symmetric sigmoids, on-line calculation of the largest eigenvalue of the Hessian (without computing the Hessian itself) to determine the optimal learning rate, ...

POTENTIAL TOPICS:
- architecture design: picking layers, nodes, connectivity, modularity, activation functions, ...
- model parameters & learning rates, momentum, annealing schedules, speeding learning: on-line, batch, conjugate gradient, approximating the Hessian, ...
- training/test sets: sizes, dealing with too much/little data, noisy and/or missing data, active sampling, skewed samples, ...
- generalization: which smoothers/regularizers to use and when to use them, network capacity, learning rate, net initialization, output representation, SSE vs. cross-entropy, ...
- training problems: symmetry breaking, bootstrapping large nets, no negative instances, ...

WORKSHOP FORMAT: Our goal is to create an enjoyable, quick-moving one-day workshop with lots of ideas and discussion. Each three-hour session will have 5-10 short presentations (10 mins max) and 1-2 longer presentations (30 mins max). The long presentations will allow speakers to present collections of ``tricks'' focused on particular topics such as how to speed up backprop, when and what regularization to use, ... The short presentations will give the rest of us an opportunity to present isolated ``tricks'' with a minimum of presentation overhead. To help keep things "light", we ask that short presentations use 5 or fewer slides.

SUBMISSIONS: We already have a number of speakers lined up (see below), but we are looking for more contributions. 
If you'd like to give a presentation, please email a short (1 page or less) description of the trick(s) to gorr at willamette.edu. If you wish to discuss a single trick, the total presentation time will be 10 minutes, or less. If you wish to discuss a group of related tricks, please say so and briefly describe all the tricks, and the total presentation should be 20-30 minutes, or less. We will review the submissions and include as many as there is time for in the schedule. If possible, please discuss what the trick is used for, when it is and is not applicable, sample problems you have used it on, how well it seems to work in practice, and any explanation you have, theoretical or otherwise, for why it seems to work. Keep in mind that this is a workshop, and "tricks" do not have to be fully fleshed out methods with rigorous theoretical or empirical evidence. If you've found a technique that's sometimes useful, the odds are others will find it interesting and useful, too. In order to list your presentation in the workshop brochure, we need to have the title and abstract by September 27. IMPORTANT DATES: Deadline for listing title in workshop brochure: 27 September, 1996 Final Deadline for submission of abstracts: 7 October, 1996 Notification of acceptance: 15 October, 1996 FOLLOW-UP TO WORKSHOP: To help insure that presenters get credit for divulging their tricks, we'll ask presenters to prepare concise, one page write-ups of their tricks. These will be compiled into a report and/or published on the web and made available to anyone interested. Tricks that as yet have no known theoretical explanation will be grouped together to form a set of open problems. We are also considering the possibility of publishing a collection of tricks as a short book or special journal issue. 
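To give a flavor of what a one-page trick write-up might look like, here is a minimal sketch of the best-known trick on the examples list above, early stopping: train while monitoring error on a held-out validation set, and keep the parameters from the point where validation error was lowest. The one-weight linear model, the full-batch gradient loop, and the toy data below are hypothetical stand-ins for whatever network, optimizer, and data you actually use.

```python
import random

def early_stopping_fit(train, val, lr=0.01, patience=20, max_epochs=5000):
    """Fit y = w*x by gradient descent on `train`, but keep the weight
    that minimizes error on the held-out `val` set (early stopping)."""
    def mse(w, data):
        return sum((y - w * x) ** 2 for x, y in data) / len(data)

    w = 0.0
    best_w, best_val, since_best = w, mse(w, val), 0
    for _ in range(max_epochs):
        # one full-batch gradient step on the training set
        grad = sum(-2 * x * (y - w * x) for x, y in train) / len(train)
        w -= lr * grad
        v = mse(w, val)
        if v < best_val:          # validation improved: remember these weights
            best_w, best_val, since_best = w, v, 0
        else:                     # no improvement: run down the patience budget
            since_best += 1
            if since_best >= patience:
                break
    return best_w, best_val

# toy data: y = 3x plus noise, alternately split into train and validation halves
random.seed(0)
data = [(x / 10, 3 * (x / 10) + random.gauss(0, 0.1)) for x in range(40)]
train, val = data[::2], data[1::2]
w, v = early_stopping_fit(train, val)
```

With the patience budget, training halts shortly after validation error stops improving, and the returned weight is the one that generalized best rather than the one that fit the training noise longest.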
CONFIRMED SPEAKERS: Yann LeCun, Larry Yaeger, Hans Georg Zimmermann, Patrice Simard, Eric Wan, Rich Caruana, Nici Schraudolph, Shumeet Baluja, David Cohn

General info about the Postconference workshops can be found on the NIPS homepage: http://www.cs.cmu.edu/afs/cs.cmu.edu/Web/Groups/NIPS/ If you have any questions about this workshop, don't hesitate to contact one of the organizers. We look forward to seeing you in Snowmass! -Jenny, Klaus, and Rich.

TOP-TEN REASONS WHY YOU SHOULD PRESENT A TRICK:
10: someone should get credit for the tricks we all end up using, it might as well be you (who's responsible for early stopping?)
9: so you can bond with others who discovered the same trick
8: because you couldn't get it on Letterman's Stupid Human Tricks
7: no one will believe you're an expert if you don't use tricks
6: your trick sucks, but you'll feel better using the other tricks you learn if you present something
5: so you can show everyone how very clever you are
4: because you'll feel more comfortable using an unsupported trick if you can get others to use it, too
3: so those of us who see flaws in your tricks can flame you alive
2: you'll feel less guilty skiing if you present something at a workshop
1: because you really don't want to write a whole paper about it

From greiner at scr.siemens.com Tue Sep 17 14:54:17 1996 From: greiner at scr.siemens.com (Russell Greiner) Date: Tue, 17 Sep 1996 14:54:17 -0400 (EDT) Subject: Research/Development Position Message-ID: <199609171854.OAA28363@eagle.scr.siemens.com> The Adaptive Information and Signal Processing Department at Siemens Corporate Research, Inc. 
(SCR) in Princeton, New Jersey has immediate openings for research and development personnel with experience in one or more of these areas: machine learning, expert systems (and other areas of AI), adaptive signal processing, fuzzy logic, user agents, neural networks, intelligent control. To qualify for one of these positions, you should have a PhD with proven experience in one of the above named areas and a strong interest in applied research and development aimed at delivering working prototypes and products to Siemens companies. You must have very good software development skills including C or C++. Windows API or OLE is a plus. Siemens is a world-wide company with sales of more than $60 billion and world-wide employment of almost 400,000. In the US, Siemens has sales of $6 billion and almost 40,000 employees. Siemens Corporate Research, Inc. employs approximately 140 technical staff with an emphasis on imaging, multimedia, software engineering and adaptive systems. SCR's mission is to provide technical expertise to develop solutions for Siemens operating companies and groups. Siemens is an Equal Opportunity Employer. If interested, do **NOT** reply to this message. Send your resume to: Russell Greiner, Siemens Corporate Research Inc., 755 College Road East, Princeton, NJ 08540  From oreilly at flies.mit.edu Tue Sep 17 17:20:47 1996 From: oreilly at flies.mit.edu (Randall O'Reilly) Date: Tue, 17 Sep 1996 16:20:47 -0500 Subject: PhD Thesis Available Message-ID: <199609172120.QAA22328@flies.mit.edu> My PhD thesis is available for anonymous ftp downloading: ftp://hydra.psy.cmu.edu/pub/user/oreilly/oreilly_thesis.tar.gz It is 1,085,460 bytes and un-tars into roughly 6 MB of PostScript files, each under 1 MB. -------------------------- The LEABRA Model of Neural Interactions and Learning in the Neocortex Randall C. 
O'Reilly Center for the Neural Basis of Cognition Department of Psychology Carnegie Mellon University There is evidence that the specialized neural processing systems in the neocortex, which are responsible for much of human cognition, arise from the action of a relatively general-purpose learning mechanism. I propose that such a neocortical learning mechanism can be best understood as the combination of error-driven and self-organizing (Hebbian associative) learning. This model of neocortical learning, called LEABRA (local, error-driven and associative, biologically realistic algorithm), is computationally powerful, has important implications for psychological models, and is biologically feasible. The thesis begins with an evaluation of the strengths and limitations of current neural network learning algorithms as models of a neocortical learning mechanism according to psychological, biological, and computational criteria. I argue that error-driven (e.g., backpropagation) learning is a reasonable computational and psychological model, but it is biologically implausible. I show that backpropagation can be implemented in a biologically plausible fashion by using interactive (bi-directional, recurrent) activation flow, which is known to exist in the neocortex, and has been important for accounting for psychological data. However, the interactivity required for biological and psychological plausibility significantly impairs the ability to respond systematically to novel stimuli, making it still a bad psychological model (e.g., for nonword reading). I propose that the neocortex solves this problem by using inhibitory activity regulation and Hebbian associative learning, the computational properties of which have been explored in the context of self-organizing learning models. 
I show that by introducing these properties into an interactive (biologically plausible) error-driven network, one obtains a model of neocortical learning that: 1) provides a clear computational role for a number of biological features of the neocortex; 2) behaves systematically on novel stimuli, and exhibits transfer to novel tasks; 3) learns rapidly in networks with many hidden layers; 4) provides flexible access to learned knowledge; 5) shows promise in accounting for psychological phenomena such as the U-shaped curve in over-regularization of the past-tense inflection; 6) has a number of other nice properties. --------------------------------------------- Note that I am now doing a postdoc at MIT: Center for Biological and Computational Learning Department of Brain and Cognitive Sciences E25-210, MIT Cambridge, MA 02139 oreilly at ai.mit.edu  From jose at kreizler.rutgers.edu Wed Sep 18 08:39:14 1996 From: jose at kreizler.rutgers.edu (Stephen J. Hanson) Date: Wed, 18 Sep 1996 08:39:14 -0400 (EDT) Subject: COGNITIVE SCIENCE RESEARCH/LAB MANAGER Message-ID: Cognitive Science Research / System Administration We are looking for an individual to do research in Cognitive Science and to help administer the computing resources of the Psychology Department at Rutgers University (Newark Campus). Resources include a network of Sun workstations, PCs and Macs, printers, a PC voice-mail system, and various peripheral devices. The individual will be responsible for installing and debugging software, and various routine system administration activities. At least half their time will be spent in research involving Cognitive Science, especially related to Connectionist networks (or Neural Networks) and Computational Neuroscience. Familiarity with C programming, UNIX system internals (BSD, System V, Solaris, Linux) and Windows (95, NT) and local area networks running TCP/IP is required. Image processing or graphics programming experience are pluses. 
Candidates should possess either a BS/MS in Computer Science, Cognitive Science, AI or other relevant fields or equivalent experience. Salary will be dependent upon qualifications and experience. Rutgers University is an equal opportunity affirmative action employer. Please send resumes and references to Pauline Mitchell Department of Psychology 101 Warren Street Rutgers University Newark, New Jersey, 07102 Direct email inquiries or resumes to: jose at kreizler.rutgers.edu Stephen J. Hanson Professor & Chair Department of Psychology Smith Hall Rutgers University Newark, NJ 07102 voice: 1-201-648-5095 fax: 1-201-648-1171 email: jose at kreizler.rutgers.edu  From bruno at redwood.ucdavis.edu Wed Sep 18 14:05:25 1996 From: bruno at redwood.ucdavis.edu (Bruno A. Olshausen) Date: Wed, 18 Sep 1996 11:05:25 -0700 Subject: TR: sparse coding and ICA Message-ID: <199609181805.LAA25112@redwood.ucdavis.edu> The following TR is available via ftp://publications.ai.mit.edu/ai-publications/1500-1999/AIM-1580.ps.Z Learning Linear, Sparse, Factorial Codes Bruno A. Olshausen In previous work (Nature, 381:607-609), an algorithm was described for learning linear sparse codes which, when trained on natural images, produces a set of basis functions that are spatially localized, oriented, and bandpass (i.e., wavelet-like). This note shows how the algorithm may be interpreted within a maximum-likelihood framework. Several useful insights emerge from this connection: it makes explicit the relation to statistical independence (i.e., factorial coding), it shows a formal relationship to the "independent components analysis" algorithm of Bell and Sejnowski (1995), and it suggests how to adapt parameters that were previously fixed. Related papers are available via http://redwood.ucdavis.edu/bruno/papers.html Bruno A. Olshausen Phone: (916) 757-8749 Center for Neuroscience Fax: (916) 757-8827 UC Davis Email: baolshausen at ucdavis.edu 1544 Newton Ct. 
WWW: http://redwood.ucdavis.edu Davis, CA 95616  From tlindroo at bennet.tutech.fi Thu Sep 19 03:42:34 1996 From: tlindroo at bennet.tutech.fi (Tommi Lindroos) Date: Thu, 19 Sep 1996 10:42:34 +0300 Subject: CFP: EANN 97 Message-ID: <9609190742.AA32604@bennet.tutech.fi> Sorry for this unsolicited mail. Since you are involved in neural networks, we think this conference might be of interest to you. International Conference on Engineering Applications of Neural Networks (EANN '97) Stockholm, Sweden 16-18 June 1997 First Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (about 400 words) should be sent to eann97 at kth.se by 21 December 1996 by e-mail in plain ASCII format. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 15 January. Submissions will be reviewed and the number of full papers will be very limited. For information on earlier EANN conferences see the www pages at http://www.abo.fi/~abulsari/EANN95.html and http://www.abo.fi/~abulsari/EANN96.html A few special tracks have been confirmed so far: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, Ersin-Tulunay at metu.edu.tr), Hybrid Systems (D. Tsaptsinos, D.Tsaptsinos at kingston.ac.uk), Mechanical Engineering (A. 
Scherer, Andreas.Scherer at fernuni-hagen.de), Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at), Process Engineering (R. Baratti, baratti at ndchem3.unica.it) Authors are encouraged to send their abstracts to the organisers of the special tracks, instead of eann97 at kth.se, if their paper is relevant to one of the topics mentioned above. Advisory board J. Hopfield (USA) A. Lansner (Sweden) G. Sj\"odin (Sweden) Organising committee A. Bulsari (Finland) H. Liljenstr\"om (Sweden) D. Tsaptsinos (UK) International program committee (to be confirmed, extended) G. Baier (Germany) R. Baratti (Italy) S. Cho (Korea) T. Clarkson (UK) G. Dorffner (Austria) W. Duch (Poland) A. Gorni (Brazil) J. Heikkonen (Italy) F. Norlund (Sweden) A. Ruano (Portugal) C. Schizas (Cyprus) J. Thibault (Canada) E. Tulunay (Turkey) J. Demott (USA) Electronic mail is not absolutely reliable, so if you have not heard from the conference secretariat after sending your abstract, please contact the secretariat again. You should receive an abstract number within a couple of days of submission.  From seung at physics.lucent.com Thu Sep 19 11:03:46 1996 From: seung at physics.lucent.com (Sebastian Seung) Date: Thu, 19 Sep 1996 11:03:46 -0400 Subject: preprints available on Web Message-ID: <199609191503.LAA09840@heungbu.div111.lucent.com> The following preprints can be found in http://portal.research.bell-labs.com/home/seung at physics/papers/ How the brain keeps the eyes still H. S. Seung To appear in PNAS The brain can hold the eyes still because it stores a memory of eye position. The brain's memory of horizontal eye position appears to be represented by persistent neural activity in a network known as the neural integrator, which is localized in the brainstem and cerebellum. Existing experimental data are reinterpreted as evidence for an attractor hypothesis, that the persistent patterns of activity observed in this network form an attractive line of fixed points in its state space. 
Line attractor dynamics can be produced in linear or nonlinear neural networks by learning mechanisms that precisely tune positive feedback. Unsupervised learning by convex and conic encoding D. D. Lee and H. S. Seung To appear in NIPS Unsupervised learning algorithms based on convex and conic encoders are proposed. The encoders find the closest convex or conic combination of basis vectors to the input. The learning algorithms produce basis vectors that minimize the reconstruction error of the encoders. The convex algorithm develops locally linear models of the input, while the conic algorithm discovers features. Both algorithms are used to model handwritten digits and compared with vector quantization and principal component analysis. The neural network implementations involve lateral connections, which mediate cooperative and competitive interactions and allow for the development of sparse distributed representations. Statistical mechanics of Vapnik-Chervonenkis entropy P. Riegler and H. S. Seung A statistical mechanics of learning is formulated in terms of a Gibbs distribution on the realizable labelings of a set of inputs. The entropy of this distribution is a generalization of the Vapnik-Chervonenkis (VC) entropy, reducing to it exactly in the limit of infinite temperature. Perceptron learning of randomly labeled patterns is analyzed within this formalism.  From riegler at ifi.unizh.ch Thu Sep 19 11:47:26 1996 From: riegler at ifi.unizh.ch (Alex Riegler) Date: Thu, 19 Sep 1996 17:47:26 +0200 Subject: CFP New Trends in Cog Sci Message-ID: Please forward to colleagues etc. Apologies if you have received this already. /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ International Workshop N E W T R E N D S I N C O G N I T I V E S C I E N C E NTCS '97 /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/ "Does Representation need Reality?" 
Perspectives from Cognitive Science, Neuroscience, Epistemology, and Artificial Life Vienna, Austria, May 13 - 16, 1997 with plenary talks by: Georg Dorffner, Ernst von Glasersfeld, Stevan Harnad, Wolf Singer, and Sverre Sjoelander organized by the Austrian Society of Cognitive Science (ASoCS) =========================================================================== Latest information can be retrieved from the conference WWW-page =========================================================================== P u r p o s e ___________________________________________________________________________ The goal of this single-track conference is to investigate and discuss new approaches and movements in cognitive science in a workshop-like atmosphere. Among the topics which seem to have emerged in recent years are: embodiment of knowledge, system theoretic and computational neuroscience approaches to cognition, dynamics in recurrent neural architectures, evolutionary and artificial life approaches to cognition, and (epistemological) implications for perception and representation, constructivist concepts and the problem of knowledge representation, autopoiesis, implications for epistemology and philosophy (of science). Evidence for a failure of the traditional understanding of neural representation converges from several fields. Neuroscientific results in the last decade have shown that single-cell representations, with hierarchical processing converging on individual representing units, do not seem to be the way the cortex represents environmental entities. Instead, distributed cell ensemble coding has become a popular concept for representation, both in computational and in empirical neuroscience. However, new problems arise from the new concepts. The problem of binding the distributed parts into a uniform percept can be "solved" by introducing synchronization of the member neurons. 
A deeper (epistemological) problem, however, is created by recurrent architectures within ensembles generating an internal dynamics in the network. The cortical response to an environmental stimulus is no longer dominated by stimulus properties themselves, but to a considerable degree by the internal state of the network. Thus, a clear and stable reference between a representational state (e.g. in a neuron, a Hebbian ensemble, an activation state, etc.) and the environmental state becomes questionable. Already learned experiences and expectancies might have an impact on the neural activity which is as strong as the stimulus itself. Since these internally stored experiences are constantly changing, the notion of (fixed) representations is challenged. At this point, system theory and constructivism, both investigating the interaction between environment and organism at an abstract level, come into the scene and turn out to provide helpful epistemological concepts. The goal of this conference is to discuss these phenomena and their implications for the understanding of representation, semantics, language, cognitive science, and artificial life. Contrary to many conferences in this field, the focus is on interdisciplinary cooperation and on conceptual and epistemological questions, rather than on technical details. We are trying to achieve this by giving more room to discussion and interaction between the participants (e.g., invited comments on papers, distribution of papers to the participants before the conference, etc.). According to the interdisciplinary character of cognitive science, we welcome papers/talks from the fields of artificial life, empirical, cognitive, and computational neuroscience, philosophy (of science), epistemology, anthropology, computer science, psychology, and linguistics. T o p i c s ___________________________________________________________________________ The conference is centered around but not restricted to the following topics: 1. 
Representation - epistemological concepts and findings from (computational) neuroscience, cognitive science (recurrent neural architectures, top-down processing, etc.), and philosophy; 2. Alternatives to representation - applying constructivism to cognitive systems; 3. Modeling language, communication, and semantics as a dynamical, evolutionary and/or adaptive process; 4. Representation and cognition in artificial life; 5. What is the role of simulation in understanding cognition? I n v i t e d S p e a k e r s ___________________________________________________________________________ Besides submitted papers, the conference will also feature plenary talks by invited speakers who are leaders in their fields. The following is a list of invited speakers in alphabetical order: o Georg Dorffner, Univ. of Vienna (A) o Ernst von Glasersfeld, Univ. of Amherst, MA (USA) o Stevan Harnad, Univ. of Southampton (GB) o Rolf Pfeifer, Univ. of Zurich (CH) o Wolf Singer, Max Planck Institut fuer Hirnforschung, Frankfurt (D) o Sverre Sjoelander, Linkoeping University (S) P a p e r S u b m i s s i o n s ___________________________________________________________________________ We invite submissions of scientific papers on any of the 5 topics listed above. The papers will be reviewed by the Scientific Committee and accepted according to their scientific content, originality, quality of presentation, and relevance to the conference topics. Please keep to the following guidelines: hardcopy submission only, 6-9 pages A4 or US Letter, single-sided, in Times Roman 10-12pt (or equivalent). Please send 4 copies to the organizing committee (see address below). In a first step, we are planning to publish the proceedings as a Technical Report of the Austrian Society for Cognitive Science. In a second step, after the papers have been revised and have undergone a second round of review, a major publisher will be approached to publish the best papers in an edited volume.
For the final versions of the accepted papers, electronic submissions are preferred in one of the following formats: Word, FrameMaker, or ASCII. Detailed formatting information will be given upon notification of acceptance. Submission due January 7, 1997 Notification of acceptance February 28 R e g i s t r a t i o n ___________________________________________________________________________ To register, please fill out the registration form at the bottom of this CFP and send it by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M. Peschl), or by o Mail to Markus Peschl, Dept. for Philosophy of Science (address below) Registration Fee (includes admission to talks, presentations, and proceedings): before April 1st, 1997: Member * 1000 ATS (about 90 US$) Non-Member 1500 ATS (about 135 US$) Student Member ** 400 ATS (about 36 US$) Student Non-Member 1000 ATS (about 90 US$) after April 1st, 1997: Member * 1300 ATS (about 118 US$) Non-Member 1800 ATS (about 163 US$) Student Member ** 500 ATS (about 45 US$) Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID C o n f e r e n c e S i t e a n d A c c o m m o d a t i o n ___________________________________________________________________________ The conference takes place in a small, beautiful baroque castle in the suburbs of Vienna; the address is: Schloss Neuwaldegg Waldegghofg. 5 A-1170 Wien Austria Tel: +43-1-485-3605 It is surrounded by a beautiful forest and a good (international and Viennese gastronomic) infrastructure. By tram it takes only 20 minutes to reach the center of Vienna. (Limited) accommodation is provided by the castle (about 41 US$ per night (single), 30 US$ per night per person (double), including breakfast). Please contact the telephone number above.
You can find more information about Vienna and accommodation at the Vienna Tourist Board or at the Intropa Travel agent, Tel: +43-1-5151-242. Further information will be available soon. D e s t i n a t i o n V i e n n a ? ___________________________________________________________________________ Vienna, Austria, can be reached internationally by plane or train. The Vienna Schwechat airport is located about 16 km from the city center. From the airport, the city air terminal can be reached by bus (ATS 60.- per person) or taxi (about ATS 400). Rail passengers arrive at one of the main stations, which are located almost in the city center. From the air terminal and the railway stations, the congress site and hotels can be reached easily by underground (U-Bahn), tramway, or bus. A detailed description will be given to the participants. In May the climate in Vienna is mild. It is the time when spring is at its peak and everything is blooming. The weather is warm with occasional (rare) showers. The temperature is about 18 to 24 degrees Celsius. More information about Vienna and Austria on the web: Welcome to Vienna Scene Vienna City Wiener Festwochen - Vienna Festival Public Transport in Vienna (subway) Welcome to Austria General information about Austria Austria Annotated S c i e n t i f i c C o m m i t t e e ___________________________________________________________________________ R. Born Univ. of Linz (A) G. Dorffner Univ. of Vienna (A) E. v. Glasersfeld Univ. of Amherst, MA (USA) S. Harnad Univ. of Southampton (GB) M. Peschl Univ. of Vienna (A) A. Riegler Univ. of Zurich (CH) H. Risku Univ. of Skovde (S) S. Sjoelander Linkoeping University (S) A. v. Stein Neuroscience Institute, La Jolla (USA) O r g a n i z i n g C o m m i t t e e ___________________________________________________________________________ M. Peschl Univ. of Vienna (A) A. Riegler Univ.
of Zurich (CH) T i m e t a b l e ___________________________________________________________________________ Submission due January 7, 1997 Notification of acceptance February 28 Early registration due April 1 Final papers due April 14 Conference date May 13-16, 1997 S p o n s o r i n g O r g a n i z a t i o n s ___________________________________________________________________________ o Christian Doppler Laboratory for Expert Systems (Vienna University of Technology) o Oesterreichische Forschungsgemeinschaft o Austrian Federal Ministry of Science, Transport and the Arts o City of Vienna A d d i t i o n a l I n f o r m a t i o n ___________________________________________________________________________ For further information on the conference, contact: Markus Peschl Dept. for Philosophy of Science University of Vienna Sensengasse 8/10 A-1090 Wien Austria Tel: +43-1-402-7601/41 Fax: +43-1-408-8838 Email: franz-markus.peschl at univie.ac.at General information about the Austrian Society for Cognitive Science can be found on the Society webpage or by contacting Alexander Riegler AILab, Dept. of Computer Science University of Zurich Winterthurerstr. 190 CH-8057 Zurich Switzerland Email: riegler at ifi.unizh.ch R e g i s t r a t i o n f o r m ___________________________________________________________________________ I will participate in the Workshop "New Trends in Cognitive Science (NTCS'97)" Full Name ........................................................................ Full Postal Address: ........................................................................ ........................................................................ ........................................................................ Telephone Number (Voice): Fax: ..................................... .................................. Email address: ........................................................................
[ ] I intend to submit a paper Payment in ATS (= Austrian Schillings; 1 US$ is currently about 11 ATS). This fee includes admission to talks, presentations, and proceedings: Before April 1st, 1997: [ ] Member * 1000 ATS (about 90 US$) [ ] Non-Member 1500 ATS (about 135 US$) [ ] Student Member ** 400 ATS (about 36 US$) [ ] Student Non-Member 1000 ATS (about 90 US$) After April 1st, 1997: [ ] Member * 1300 ATS (about 118 US$) [ ] Non-Member 1800 ATS (about 163 US$) [ ] Student Member ** 500 ATS (about 45 US$) [ ] Student Non-Member 1300 ATS (about 118 US$) *) Members of the Austrian Society of Cognitive Science **) Requires proof of valid student ID Total: .................... ATS [ ] Visa [ ] Master-/Eurocard Name of Cardholder ........................................ Credit Card Number ........................................ Expiration Date ................. Date: ................ Signature: ........................................ Please send this form by... o Email to franz-markus.peschl at univie.ac.at, or by o Fax to +43-1-408-8838 (attn. M. Peschl), or by o Mail to Markus Peschl, Dept. for Philosophy of Science, Univ. of Vienna, Sensengasse 8/10, A-1090 Wien, Austria ___________________________________________________________________________ AI Lab * Department of Computer Science * University of Zurich Winterthurerstr. 190 * CH-8057 Zurich, Switzerland * riegler at ifi.unizh.ch  From lba at inesc.pt Thu Sep 19 12:09:47 1996 From: lba at inesc.pt (Luis B. Almeida) Date: Thu, 19 Sep 1996 17:09:47 +0100 Subject: Workshop on spatiotemporal models Message-ID: <3241704B.61133CF4@inesc.pt> Below is the registration information for the Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems. Please distribute and/or post it as widely as you like. For further information, as well as for obtaining a postscript version of the registration form (e.g.
if you want to post it), please consult: Workshop's web page: http://aleph.inesc.pt/smbas/ Mirror: http://www.cnel.ufl.edu/workshop.html (the mirror sometimes takes a few days to get updated) Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ------------------------------------------------------------------- Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems SMBAS November 6-8 1996 Sintra, Portugal REGISTRATION INFORMATION The SMBAS workshop will take place in Sintra, Portugal, beginning on the afternoon of November 6 and extending through the afternoon of November 8. Three invited speakers are confirmed (Drs. Walter Freeman, Kumpati Narendra and Scott Kelso), and a fourth invited speaker may still be announced. Contributed papers come from both the biological and artificial systems areas, in approximately equal numbers. The list of accepted papers is given at the end of this message. Registrations are open until October 25. Participants who will not present papers are also welcome. The total number of participants is limited to 50. Among non-contributors, registrations will be accepted on a first-come, first-served basis. The registration prices are: Normal registration 50000 Portuguese escudos Student registration 30000 Portuguese escudos As an indication only, the current exchange rate is about 150 Portuguese escudos per US dollar. The registration price includes participation in the workshop, the proceedings book, lunches on Thursday and Friday, and coffee breaks. The price also includes 17% VAT. The registration form is included below. Only registrations accompanied by payment are accepted. Students should also include a letter from their supervisor or university representative, written on the university letterhead, confirming their student status.
FORMS OF PAYMENT CHECK - The check should be in Portuguese escudos, made payable to "INESC - SMBAS Workshop". Enclose the check with the registration form. BANK TRANSFER - Make the transfer in Portuguese escudos, to Banco Nacional Ultramarino Arco do Cego Account no. 001399550210009683498 Account holder: INESC Reference: SMBAS Workshop Enclose a copy of the bank transfer document with the registration form. Credit card payments cannot be accepted. FUNDING A grant of 100000 Portuguese escudos from Fundacao Oriente (Portugal) is available to be assigned to a participant from East Asia. Preference will be given to students. Other funding sources, especially the large grant from the Office of Naval Research (USA), are already reflected in the registration prices, which would otherwise be about 80000 escudos higher. LODGING The workshop will be held at the Hotel Tivoli Sintra. This hotel will have special room rates for workshop participants: Single room: 9700 Portuguese escudos per night Double room: 10900 Portuguese escudos per night (for two persons) The Tivoli hotel accepts individual registrations for double rooms, at half the price. In this case the participants will be paired by the hotel. The participants should reserve rooms directly with the hotel. Please mention that you are reserving for this workshop. Hotel Tivoli Sintra Praca da Republica 2710 Sintra PORTUGAL Tel: +351-1-9233505 Fax: +351-1-9231572 A number of rooms have been allocated in advance at the Tivoli. Should the hotel become full, the Hotel Central is very close and has only slightly higher room rates: Hotel Central Largo Rainha D.
Amelia, 35 2710 Sintra PORTUGAL Tel.: +351-1-9230963 (this hotel has no fax) ==================== Registration form (cut here) ======================= Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems November 6-8 1996, Sintra, Portugal Name____________________________________________________________________ _______________________________________________________ Sex____________ Address_________________________________________________________________ City___________________________________ State__________________________ Postal/zip code_________________ Country_______________________________ Phone___________________________ Fax____________________________ E-mail_________________________ Type of registration (check as appropriate): __ Normal (50000 PTE) __ Student (30000 PTE) - enclose a letter from the supervisor or university representative, on university letterhead, confirming the student status Enclosed payment: Amount_________________ in Portuguese escudos (only the Portuguese currency is accepted) Form of payment: __ Check (enclose check with this form) __ Bank transfer (enclose a copy of the bank document) Checks should be made payable to "INESC - SMBAS Workshop" Bank transfers should be made to Banco Nacional Ultramarino Arco do Cego Account no. 001399550210009683498 Account holder: INESC Reference: SMBAS Workshop REGISTRATION DEADLINE: October 25, 1996 Please mail this form, with the payment document, to INESC SMBAS Workshop c/o Mrs. Ilda Ribeiro R. Alves Redol, 9 1000 Lisboa Portugal Tel: +351-1-3100313 Fax: +351-1-3145843 Registrations paid by bank transfer can also be sent by fax. In this case fax both the registration form and the bank document, and send the fax to Mrs. Ilda Ribeiro, fax no. +351-1-3145843. =================== (end of registration form) ======================= List of accepted papers METHODS OF TOPOGRAPHICAL TIME-FREQUENCY ANALYSIS OF EEG IN COARSE AND FINE TIME SCALE K. Blinowska, P. Durka, M.
Kaminski - Warsaw University, POLAND W. Szelenberger - Warsaw Medical Academy, POLAND SPATIAL EFFECTS AND COMPETITIVE COEXISTENCE T. Caraco - State University of New York, USA W. Maniatty and B. Szymanski - Rensselaer Polytechnic Institute, USA RST: A SPATIOTEMPORAL NEURAL NETWORK J.-C. Chappelier and A. Grumbach - ENST, FRANCE TRANSIENT SYNAPTIC REDUNDANCY DURING SYNAPTOGENESIS DESCRIBED AS AN ISOSTATIC RANDOM STACKING OF NONPENETRATING HARD SPHERES F. Eddi, G. Waysand - Denis Diderot and P. & M. Curie Universities, FRANCE J. Mariani - P. & M. Curie University, FRANCE MODELLING THE PRENATAL DEVELOPMENT OF THE LATERAL GENICULATE NUCLEUS S. Eglen - University of Sussex, UK A SELF-ORGANIZING TEMPORAL PATTERN RECOGNIZER WITH APPLICATION TO ROBOT LANDMARK RECOGNITION N. Euliano and J. Principe - University of Florida, USA P. Kulzer - University of Aveiro, PORTUGAL ON THE APPLICATION OF COMPETITIVE NEURAL NETWORKS TO TIME-VARYING CLUSTERING PROBLEMS A. Gonzalez, M. Graña, A. D'Anjou - Universidad del Pais Vasco, ESPAÑA M. Cottrell - Université Paris I, FRANCE LOCALISATION AND GOAL-DIRECTED BEHAVIOUR FOR A MOBILE ROBOT USING PLACE CELLS K. Harris and M. Recce - University College London, UK BRIDGING LONG TIME LAGS BY WEIGHT GUESSING AND LONG SHORT TERM MEMORY S. Hochreiter - Technische Universitaet Muenchen, GERMANY J. Schmidhuber - IDSIA, SWITZERLAND COHERENT PHENOMENA IN THE DYNAMICS OF INTEGRATE AND FIRE NEURAL FIELDS D. Horn and I. Opher - Tel Aviv University, ISRAEL SPATIOTEMPORAL TRANSITION TO EPILEPTIC SEIZURES: A NONLINEAR DYNAMICAL ANALYSIS OF SCALP AND INTRACRANIAL EEG RECORDINGS L. Iasemidis, S. Roper, J. Sackellares - University of Florida and V.A. Medical Center, USA J. Principe, J. Czaplewski, R. Gilmore - University of Florida, USA A CONTINUUM MODEL OF THE MAMMALIAN ALPHA RHYTHM D. Liley - Swinburne University of Technology, AUSTRALIA ANALOG COMPUTATIONS WITH TEMPORAL CODING IN NETWORKS OF SPIKING NEURONS W.
Maass - Technische Universitaet Graz, AUSTRIA SELECTIVE LINEAR PREDICTION FOR RHYTHMIC ACTIVITY MODELLING N. Martins - Technical University of Lisbon and Polytechnic Institute of Setúbal, PORTUGAL A. Rosa - Technical University of Lisbon, PORTUGAL CEREBELLAR LEARNING OF DYNAMIC PATTERNS IN THE CONTROL OF MOVEMENT P. Morasso, V. Sanguineti and F. Frisone - University of Genova, ITALY A CHAOTIC OSCILLATOR CELL IN SUBTHRESHOLD CMOS FOR SPATIO-TEMPORAL SIMULATION J. Neeley and J. Harris - University of Florida, USA A TEMPORAL MODEL FOR STORING SPATIOTEMPORAL PATTERN MEMORY IN SHUNTING COOPERATIVE COMPETITIVE NETWORK C. Nehme - Brazilian Navy Research Institute, BRAZIL L. Carvalho and S. Mendes - Federal University of Rio de Janeiro, BRAZIL SPATIO-TEMPORAL SPIKE PATTERN DISCRIMINATION BY NETWORKS OF SILICON NEURONS WITH ARTIFICIAL DENDRITIC TREES D. Northmore and J. Elias - University of Delaware, USA SYNCHRONIZATION IN NEURAL OSCILLATORS THROUGH LOCAL DELAYED INHIBITION G. Renversez - Centre de Physique Théorique, Centre National de la Recherche Scientifique, FRANCE LEARNING REACTIVE BEHAVIOR FOR AUTONOMOUS ROBOT USING CLASSIFIER SYSTEMS A. Sanchis, J. Molina and P. Isasi - Universidad Carlos III de Madrid, SPAIN MODELING OF CELLULAR AND NETWORK NEURAL MECHANISMS FOR RESPIRATORY PATTERN GENERATION J. Schwaber, I. Rybak - DuPont Central Research, USA J. Paton - University of Bristol, UK FASTER TRAINING OF RECURRENT NETWORKS G. Silva, J. Amaral, T. Langlois and L. Almeida - Instituto de Engenharia de Sistemas e Computadores, PORTUGAL HUMAN PERCEPTION OF SUBTHRESHOLD, NOISE-ENHANCED VISUAL IMAGES E. Simonotto and M. Riani - Università di Genova, ITALY J. Twitti and F. Moss - University of Missouri at St. Louis, USA NEUROCONTROL OF INVERSE DYNAMICS IN FUNCTIONAL ELECTRICAL STIMULATION L. Spaanenburg, J. Nijhuis and A. Ypma - Rijksuniversiteit Groningen, THE NETHERLANDS J.
Krijnen - Academic Hospital Groningen, THE NETHERLANDS PROGRAMMED CELL DEATH DURING EARLY DEVELOPMENT OF THE NERVOUS SYSTEM, MODELLED BY PRUNING IN A NEURAL NETWORK J. Vos, J. van Heijst and S. Greuters - University of Groningen, THE NETHERLANDS  From Randy_Ringen at hmc.edu Thu Sep 19 14:10:31 1996 From: Randy_Ringen at hmc.edu (Randy Ringen) Date: Thu, 19 Sep 1996 10:10:31 -0800 Subject: Sejnowski Awarded Prestigious Wright Prize Message-ID: HARVEY MUDD COLLEGE NEWS RELEASE Office of College Relations, Claremont, California 91711-5990 FOR IMMEDIATE RELEASE CONTACT: Randy Ringen or Leslie Baer SEPTEMBER 19, 1996 (909) 624-4146 Ref #: 95/96-47 HARVEY MUDD COLLEGE HONORS BRAIN RESEARCHER TERRENCE J. SEJNOWSKI AS 12TH RECIPIENT OF THE WRIGHT PRIZE October 25 lecture is free and open to the public CLAREMONT, Calif.-Harvey Mudd College is pleased to announce the winner of the 1996 Wright Prize for interdisciplinary study in science and engineering: Terrence J. Sejnowski, a computational neurobiologist, who is an investigator with the Howard Hughes Medical Institute, a professor at the Salk Institute for Biological Studies in La Jolla, Calif., and a professor of biology and physics at UC San Diego. Sejnowski will be awarded $15,000 at 7 p.m. on Friday, October 25, in Galileo Hall on the Harvey Mudd College campus, when he will give his distinguished lecture, "The Century of the Brain." Admission is free and the event is open to the public. Sejnowski, the 12th Wright awardee, joins the likes of physicist Freeman Dyson, physician-scientist Jonas Salk, and 1962 Nobel Prize-winning biologist Francis Crick, who were also honored. He is the first awardee selected under a new criterion, which seeks to honor up-and-coming, early-to-mid career researchers in multidisciplinary research in engineering or the sciences, rather than those already widely recognized for their accomplishments.
Sejnowski seeks to understand "the computational resources of brains, from the biophysical to the systems levels," he said. His research focuses on how images seen through the eyes are represented in the brain, how memory is organized, and how vision is used to guide actions. As tools in his research, Sejnowski employs theoretical models of how the brain networks this information. He also studies the biophysics of the living brain. As part of the award festivities, Sejnowski will spend two days at Harvey Mudd College, offering the October 25 public lecture, as well as student seminars and informal exchanges with faculty on October 24. "The basic educational philosophy holds interdisciplinary study to be essential in furthering our understanding of science and engineering," said HMC President Henry E. Riggs. "The events surrounding the awarding of the prize give students and faculty alike an opportunity to exchange ideas with a young researcher whose work spans several fields," he said. "Beyond that," he added, "the lecture will present to the public the latest information on the exciting discoveries made concerning how the brain works." Sejnowski graduated summa cum laude from Case Western Reserve University with a B.S. in physics. He received his M.S. and Ph.D. in physics from Princeton. After teaching at The Johns Hopkins University, he became a senior member and professor at the Salk Institute and a professor at UC San Diego. He has received numerous awards, including a Presidential Young Investigator Award from the National Science Foundation; he was a Fairchild Distinguished Scholar at the California Institute of Technology and has delivered numerous honorary lectures at the University of Wisconsin; Cambridge University; the Royal Institution, London; and the International Congress of Physiological Sciences. He has published more than 100 scientific articles and coauthored one book.
He is the editor-in-chief of "Neural Computation," published by the MIT Press, and serves on the editorial boards of 17 scientific journals. Harvey Mudd College, one of The Claremont Colleges, is an undergraduate coeducational institution of engineering, science, and mathematics that also places strong emphasis on humanities and the social sciences. The college's aim is to graduate engineers and scientists sensitive to the impact of their work on society. Harvey Mudd College ranks among the nation's leading schools in percentage of graduates who earn Ph.D. degrees. The college has been consistently ranked among the top engineering undergraduate specialty schools in the nation by U.S. News & World Report. In a recent study published in Change magazine, in which 212 colleges and universities were ranked, Harvey Mudd College is among an elite group of just 11 institutions that are classified as "High-High," rated outstanding for both their teaching and their research. -end-  From nkasabov at commerce.otago.ac.nz Wed Sep 18 20:40:53 1996 From: nkasabov at commerce.otago.ac.nz (Nikola Kasabov) Date: Wed, 18 Sep 1996 12:40:53 -1200 Subject: A new text and research book from MIT Press Message-ID: <497E9002FD9@jupiter.otago.ac.nz> The MIT Press Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering by Nikola Kasabov Neural networks and fuzzy systems are different approaches to introducing human-like reasoning to intelligent information systems. This text is the first to combine the study of these two subjects, their basics and their use, along with the symbolic AI methods and the traditional methods of data analysis, to build comprehensive artificial intelligence systems. In a clear and accessible style, Kasabov describes rule-based, fuzzy logic, and connectionist techniques, and their combinations, which lead to new techniques such as methods for rule extraction, fuzzy neural networks, hybrid systems, connectionist AI systems, chaotic neurons, etc.
All these techniques are applied to a set of simple prototype problems, which makes comparisons possible. A particularly strong feature of the text is that it is replete with applications in engineering, business, and finance. AI problems that cover most of the application-oriented research in the field (pattern recognition, speech recognition, image processing, classification, planning, optimization, prediction, control, decision making, game simulation, chaos analysis) are discussed and illustrated with concrete examples. Intended both as a text for advanced undergraduate and postgraduate students and as a reference for researchers in the field of knowledge engineering, "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering" has chapters structured for various levels of teaching and includes original work by the author along with the classical material. Data sets for the examples in the book, as well as an integrated software environment that can be used to solve the problems and to do the exercises at the end of each chapter, have been made available on the World Wide Web. Nikola Kasabov is Associate Professor in the Department of Computer and Information Science, University of Otago, New Zealand. A Bradford Book Computational Intelligence series September 1996, ISBN 0-262-11212-4, 544 pp. -- 282 illus.
$60.00 (cloth) ---------------- Please cut here -------- ------------------------- Please send me a brochure with ordering information and course adoption terms for "Foundations of Neural Networks, Fuzzy Systems, and Knowledge Engineering" by Nikola Kasabov Name __________________________________ City _______________________________ University __________________________ State/Country ___________________ Zip/Code__________________ Return to: Texts Manager, The MIT Press, 55 Hayward Street, Cambridge, MA, 02142, USA (e-mail: hardwick at mit.edu), or order the book directly through the MIT Press WWW home page: http://www-mitpress.mit.edu:80/mitp/recent-books/comp/kasfh.html. ---------------------------------------------------------------------------  From pe_keller at ccmail.pnl.gov Fri Sep 20 22:28:00 1996 From: pe_keller at ccmail.pnl.gov (pe_keller@ccmail.pnl.gov) Date: Fri, 20 Sep 1996 19:28 -0700 (PDT) Subject: Career Opportunity for Senior Research Scientist Message-ID: <01I9PIR7VVIQ8Y4WTQ@pnl.gov> Senior Research Scientist Cognitive Controls Battelle, a leading provider of technology solutions, has an immediate need for a Senior Scientist to join its cognitive controls initiative. The position will provide technical leadership for a multi-year corporate project applying adaptive/cognitive control theory to applications in emerging technology areas. The position requires an M.S./Ph.D. in Computer and Information Science, Electrical Engineering, or a related field with a specialization in adaptive or cognitive control theory (artificial neural networks, fuzzy logic, genetic algorithms) and statistical methods, and at least 5 years' experience in the application of such control methods to engineering problems. Strong leadership capabilities and oral, written, and interpersonal communications skills are essential to this highly interactive position.
The applicant selected will be subject to a security investigation and must meet eligibility requirements for access to classified information. Battelle offers competitive salaries, comprehensive benefits, and opportunities for professional development. Qualified candidates are invited to send their resumes to: Employment Department J-35, Battelle, 505 King Avenue, Columbus, OH 43201-2693 or fax them to 614-424-4643. An Equal Opportunity/Affirmative Action Employer M/F/D/V --------------------------------------------------------------------------- This opening is in our Columbus, Ohio, USA facility. For more information about Battelle, try http://www.battelle.org. For more information about our neural network activity at our Pacific Northwest National Laboratory (in Richland, Washington, USA), try http://www.emsl.pnl.gov:2080/docs/cie/neural/ ---------------------------------------------------------------------------  From James_Morgan at brown.edu Mon Sep 23 15:01:45 1996 From: James_Morgan at brown.edu (Jim Morgan) Date: Mon, 23 Sep 1996 15:01:45 -0400 Subject: Position Announcement: Language & Cognitive Processing, Brown University Message-ID: <199609231902.PAA02867@golden.brown.edu> LANGUAGE AND COGNITIVE PROCESSING, Brown University: The Department of Cognitive and Linguistic Sciences invites applications for a three-year renewable tenure-track position at the Assistant Professor level beginning July 1, 1997. Areas of interest include but are not limited to phonology or phonological processing, syntax or sentence processing, and lexical access or lexical semantics, using experimental, formal, developmental, neurological, or computational methods. Expertise in two or more areas and/or application of multiple paradigms is preferred. Applicants should have a strong research program and broad teaching ability in cognitive science and/or linguistics at both the undergraduate and graduate levels.
Interest in contributing curricular innovations in keeping with Brown's university-college tradition is desirable. Applicants should have completed all Ph.D. requirements no later than July 1, 1997. Women and minorities are especially encouraged to apply. Send a curriculum vitae, three letters of reference, reprints and preprints of publications, and a one-page statement of research interests to Dr. Sheila E. Blumstein, Chair, Search Committee, Department of Cognitive and Linguistic Sciences, Brown University, Box 1978, Providence, RI 02912 by January 1, 1997. Brown University is an Equal Opportunity/Affirmative Action Employer.  From terry at salk.edu Tue Sep 24 14:17:50 1996 From: terry at salk.edu (Terry Sejnowski) Date: Tue, 24 Sep 1996 11:17:50 -0700 (PDT) Subject: Neural Computation 8:7 Message-ID: <199609241817.LAA06146@helmholtz.salk.edu> Neural Computation - Contents Volume 8, Number 7 - October 1, 1996 The Lack of a Priori Distinctions Between Learning Algorithms David H. Wolpert The Existence of a Priori Distinctions Between Learning Algorithms David H. Wolpert Note No Free Lunch for Cross Validation Huaiyu Zhu and Richard Rohwer Letter A Self-Organizing Model of "Color Blob" Formation Harry G. Barrow, Alistair J. Bray and Julian M. L. Budd Functional Consequences of an Integration of Motion and Stereopsis in Area MT of Monkey Extrastriate Visual Cortex Markus Lappe Learning Perceptually Salient Visual Parameters Using Spatiotemporal Smoothness Constraints James V.
Stone Using Visual Latencies to Improve Image Segmentation Ralf Opara and Florentin Worgotter Learning and Generalization in Cascade Network Architectures Enno Littmann and Helge Ritter Hybrid Modeling, HMM/NN Architectures, and Protein Applications Pierre Baldi and Yves Chauvin ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $220 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 -----  From pfbaldi at cco.caltech.edu Wed Sep 25 08:40:58 1996 From: pfbaldi at cco.caltech.edu (Pierre Baldi) Date: Wed, 25 Sep 1996 05:40:58 -0700 (PDT) Subject: TR available: Bayesian Methods and Compartmental Modeling Message-ID: FTP-host: archive.cis.ohio-state.edu FTP-filename: /pub/neuroprose/baldi.comp.tar.Z The file baldi.comp.tar.Z is now available for copying from the Neuroprose repository: ON THE USE OF BAYESIAN METHODS FOR EVALUATING COMPARTMENTAL NEURAL MODELS (40 pages = 35 pages + 5 figures) (one figure is in color but should print OK in black and white) P. Baldi, M. C. Vanier, and J. M. Bower Department of Computation and Neural Systems Caltech ABSTRACT: In this TR, we provide a tutorial on Bayesian methods for neurobiologists, as well as an application of the methods to compartmental modeling. We first derive prior and likelihood functions for compartmental neural models and for spike trains. We then apply the full Bayesian inference machinery to parameter estimation and model comparison for simple classes of compartmental models with three and four conductances.
We also perform class comparison by approximating integrals over the entire parameter space. Advantages and drawbacks are discussed. Postscript and other problems reported to us have been corrected.  From sverker.sikstrom at psy.umu.se Wed Sep 25 11:44:16 1996 From: sverker.sikstrom at psy.umu.se (Sverker Sikstrom) Date: Wed, 25 Sep 1996 16:44:16 +0100 Subject: PhD THESIS AVAILABLE: A Connectionist Model for Episodic Tests Message-ID: THESIS AVAILABLE My (Sverker Sikström) thesis "TECO: A Connectionist Model for Dependency in Successive Episodic Tests" is now available for anonymous download in postscript format at the following URL: http://www.psy.umu.se/personal/thesis.ps.gz The file is approximately 1MB and uncompresses to 3MB. The thesis includes 180 pages. You may also download it from my homepage: http://www.psy.umu.se/personal/Sverker.html The ABSTRACT is as follows: Sikström, P. S. TECO: A Connectionist Model for Dependency in Successive Episodic Tests. Doctoral dissertation, Department of Psychology, Umeå University, S-90187 Umeå, Sweden, 1996; ISBN 91-7191-155-3 Data from a large number of experimental conditions in published studies have shown that recognition and cued recall exhibit a moderate dependency described by the Tulving-Wiseman function. Exceptions from this lawfulness, in the form of higher dependency, are found when the recall test lacks effective cues (i.e., free recall exceptions) or when the recognition test is cued (i.e., cued recognition exceptions). In Study I, the TECO (Target, Event, Cue & Object) theory for dependency in successive tests is proposed to account for the dependence between recognition and cued recall through the fact that both tests are cued with the instruction to retrieve from the learning episode (i.e., the event). Independence is accounted for by differences in cueing; the recall test is cued by a contextual cue, whereas the recognition test is cued by the target.
A quantitative degree of dependence, measured by ?, is predicted to be one-third by counting the number of shared cues divided by the total number of cues. Free recall exceptions are predicted to reveal a dependency of one-half because the recall test lacks effective cues. Cued recognition exceptions are predicted to reveal a dependency of two-thirds because both tests are cued with the cue word. A function is derived to predict the conditional probabilities, and the results show a reasonable fit with the predictions. In Study II, the predictions of TECO for successive tests of cued recall and cued recognition, free recall and cued recognition, recognition, free recall and cued recall, and recognition and cued recognition are tested. A database for successive episodic tests is presented. In Study III, the lawfulness of recognition failure is discussed. Hintzman claimed that the conditional probability of recognition given recall is constrained by the P(Rn)/P(Rc) boundary and that the phenomenon of recognition failure is, thus, a mathematical artefact. It is argued that this boundary is due to a psychological process and that this boundary carries important information regarding the underlying system. Furthermore, half of the deviation from the predictive function of recognition given cued recall is shown to arise from the lack of statistical power. In Study IV, TECO is simulated in a neural network of a Hopfield type. A theoretical analysis is proposed and nine sets of simulations are conducted. The results show that the theory can be simulated with reasonable agreement to empirical data. Keywords: episodic memory, recognition failure, successive tests, lawfulness, connectionism.
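The cue-counting prediction in the abstract (shared cues divided by the total number of cues) can be read as a set-overlap computation. A minimal sketch under that reading (the cue-set names below are my labels for illustration, not code or notation from the thesis):

```python
def predicted_dependency(cues_a, cues_b):
    """Number of shared cues divided by the total number of distinct cues."""
    return len(cues_a & cues_b) / len(cues_a | cues_b)

# Standard case: recognition is cued by the target, cued recall by a
# contextual cue; both share the episodic event cue, giving one-third.
recognition = {"event", "target"}
cued_recall = {"event", "context"}
print(predicted_dependency(recognition, cued_recall))        # one-third

# Free-recall exception: the recall test lacks effective cues, one-half.
free_recall = {"event"}
print(predicted_dependency(recognition, free_recall))        # one-half

# Cued-recognition exception: both tests share the cue word, two-thirds.
cued_recognition = {"event", "target", "cue_word"}
cued_recall_cued = {"event", "cue_word"}
print(predicted_dependency(cued_recognition, cued_recall_cued))  # two-thirds
```

This reproduces the one-third, one-half, and two-thirds figures quoted above from simple cue counting.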
Feel free to contact me at the following address: Email: sverker.sikstrom at psy.umu.se ---------------------------------------------------- Sverker Sikström, PhD, Department of Psychology, S-90187 Umeå University, Sweden Tel: ++46-90-166759, Fax: ++46-90-166695 Homepage: http://www.psy.umu.se/personal/Sverker.html ----------------------------------------------------  From ken at phy.ucsf.edu Wed Sep 25 17:52:47 1996 From: ken at phy.ucsf.edu (Ken Miller) Date: Wed, 25 Sep 1996 14:52:47 -0700 Subject: Hertz Fellowships for graduate studies Message-ID: <9609252152.AA09350@coltrane.ucsf.edu> [ Moderator's note: Carnegie Mellon, Stanford, and MIT are also on the list of approved schools for Hertz fellowships. -- DST] I wanted to bring to the attention of prospective and current Ph.D. students something I just ran across, the Fannie and John Hertz Foundation fellowship for studies in Applied Physical Sciences. They define applied physical sciences to explicitly include computational neuroscience, artificial intelligence, and robotics. The deadline is quite soon -- Oct. 18. It's a generous fellowship -- 5 yrs, stipend of $20K per year. Info is at http://www.hertzfndn.org Please don't ask me for more info about this fellowship -- go to the foundation and/or its web page. The fellowships are only tenable at a short list of eligible schools, but applicants can also include in their application the desire that other schools be added to that list. In particular, three of the existing schools with Sloan Centers for Theoretical Neurobiology -- Caltech, UCSD, and NYU -- are on the list, but the other two -- UCSF and Brandeis -- are not. As a member of the faculty at UCSF, I'd certainly like to encourage applicants in computational neuroscience to include UCSF and/or Brandeis on their list of desired schools and to check us out in months to come.
More info on UCSF can be found at: Neuroscience Program: http://www.neuroscience.ucsf.edu/neuroscience/ Sloan Center: http://www.sloan.ucsf.edu/sloan/ (links to Brandeis and the other Sloan centers can be found on our Sloan page). Ken Kenneth D. Miller telephone: (415) 476-8217 Dept. of Physiology fax: (415) 476-4929 UCSF internet: ken at phy.ucsf.edu 513 Parnassus www: http://www.keck.ucsf.edu/~ken San Francisco, CA 94143-0444  From jordan at psyche.mit.edu Wed Sep 25 03:15:03 1996 From: jordan at psyche.mit.edu (Michael Jordan) Date: Wed, 25 Sep 96 3:15:03 EDT Subject: NIPS Conference Program Message-ID: <9609250715.AA23535@psyche.mit.edu> The NIPS*96 conference program is now available. It can be retrieved via anonymous ftp from: ftp://psyche.mit.edu/pub/NIPS96/nips96-program It will also be available soon from the NIPS*96 homepage. NIPS*96 begins on December 2 with a tutorial program and banquet. The NIPS*96 invited speakers are as follows: MON DEC 2 --------- Computer graphics for film: Automatic versus manual techniques (Banquet talk) E. Enderton Industrial Light and Magic TUE DEC 3 --------- The CONDENSATION algorithm---conditional density propagation and applications to visual tracking (Invited) A. Blake University of Oxford Compositionality, MDL priors, and object recognition (Invited) S. Geman, E. Bienenstock Brown University WED DEC 4 --------- Plasticity of dynamics as opposed to absolute strength of synapse (Invited) H. Markram Weizmann Institute Transition between rate and temporal coding in neocortex as determined by synaptic depression (Invited) M. Tsodyks Weizmann Institute THU DEC 5 --------- Wavelets, wavelet packets, and beyond: Applications of new adaptive signal representations (Invited) D. 
Donoho Stanford University and University of California Berkeley Michael Jordan NIPS*96 Program Chair  From jordan at psyche.mit.edu Wed Sep 25 16:56:32 1996 From: jordan at psyche.mit.edu (Michael Jordan) Date: Wed, 25 Sep 96 16:56:32 EDT Subject: NIPS program committee notes Message-ID: <9609252056.AA05917@psyche.mit.edu> Dear connectionists colleagues, I enclose below some notes on this year's NIPS reviewing and decision process. These notes will hopefully be of interest not only to contributors to NIPS*96, but to anyone else who has an ongoing interest in the conference. Note also that there is a "feedback session with the NIPS board" scheduled for Wednesday, December 4th at the conference venue; this would be a good opportunity for public discussion of NIPS reviewing and decision policies. In my experience NIPS has worked hard to earn its role as a flagship conference serving a diverse technical community, particularly through its revolving program committees, and further public discussion of NIPS decision-making procedures can only help to improve the conference. The notes include lists of all of this year's area chairs and reviewers. Mike Jordan NIPS*96 program chair -----------------------------------------------------------

The area chairs for NIPS*96 were as follows:

Algorithms and Architectures
    Chris Bishop, Aston University
    Steve Omohundro, NEC Research Institute
    Rob Tibshirani, University of Toronto
Theory
    Michael Kearns, AT&T Research
    Sara Solla, AT&T Research
Vision
    David Mumford, Harvard University
Control
    Andrew Moore, Carnegie Mellon University
Applications
    Anders Krogh, The Sanger Centre
Speech and Signals
    Eric Wan, Oregon Graduate Institute
Neuroscience
    Bill Bialek, NEC Research Institute
Artificial Intelligence/Cognitive Science
    Stuart Russell, University of California, Berkeley
Implementations
    Fernando Pineda, Johns Hopkins University

The area chairs were responsible for recruiting reviewers.
All told, 160 reviewers were recruited, from 17 countries. 104 reviewers were from institutions in the US, and 56 reviewers were from institutions outside the US. The breakdown of the submissions by areas was as follows:

                      1995   1996
  -------------------------------
  Alg & Arch           133    173
  Theory                89     79
  Neuroscience          43     61
  Control & Nav         40     43
  Applications          36     42
  Vision                46     40
  Speech & Sig Proc     20     25
  Implementations       25     24
  AI & Cog Sci          30     22
  -------------------------------
  Total                462    509

Area chairs assigned papers to reviewers. For cases in which an area chair was an author of a paper, the program chair made the selection of reviewers. For cases in which the program chair was an author of a submission, the appropriate area chair made the selection of reviewers. Code letters were used for all such reviewers, and neither the area chairs nor the program chair knew (or know) who reviewed their papers. Each paper was reviewed by three reviewers. In most cases all three reviewers were from the same area, but some papers that were particularly interdisciplinary in flavor were reviewed by reviewers from different areas. After the reviews were received and processed, the program committee met at MIT in August to make decisions. A few comments on the way the meeting was run: (1) It was agreed that the overriding goal of the program committee's decision process should be to select the best papers, i.e., those exhibiting the most significant thinking and the most thorough development of ideas. All other issues were considered secondary. (2) To achieve (1), the program committee agreed that one of its principal roles was to help eliminate bias in the reviewing process. This took several forms: (a) Close attention was paid to cases in which the reviewers disagreed among themselves. In such cases the area chair often read the paper him/herself to help come to a decision.
(b) The area chairs studied histograms of scores to help identify cases where reviewers seemed to be using different scales. (c) The committee tried to identify reviewers who were not as strong or as devoted as others and tried to weight their reviews accordingly. (3) It was agreed that authors who were members of the program committee would be held to higher standards than other authors. That is, if a paper by a program committee author was near a borderline (acceptance, spotlight, oral), it would be demoted. This was considered to be another form of bias minimization, given that the committee was aware that some reviewers might favor program committee members. Also, program committee members who were authors of a paper left the room when their paper was being discussed; they played no role in the decision-making process for their own papers. (4) Other criteria that were utilized in the decision-making process included: junior status of authors (younger authors were favored), new-to-NIPS criteria (outsiders were favored), novelty (new ideas were favored). These criteria also figured in decisions for oral presentations and spotlights, along with additional criteria that favored authors who had not had an oral presentation in recent years and favored presentations of general interest to the NIPS audience. All such criteria, however, were considered secondary, in that they were used to distinguish papers that were gauged to be of roughly equal quality by the reviewers. As stated above, the primary criterion was to select the best papers, and to give oral presentations to papers receiving the most laudatory reviews. (5) Generally speaking, it turned out that the program committee decisions followed the reviewers' scores. A rough guess would be that 1 paper in 10 was moved up or down from where the reviewers' scores placed the paper. (6) The entire program committee participated in the discussions of individual papers for all of the areas. 
(7) The decision-making was seldom easy. It was the overall sense of the program committee that the submissions were exceptionally strong this year. There were many papers near the borderline that were of NIPS quality, but could not be accepted because of size constraints (the conference is limited in size by a number of factors, including the scheduling and the size of the proceedings volume). We hope that authors of these papers will strengthen them a notch and resubmit next year. The process was as fair and as intellectually rigorous as the program committee could make it. It can of course stand improvement, however, and I would hope that people with ideas in this regard will attend the feedback session in Denver. One improvement that I personally think is worth considering, having now seen the reviewing process in such detail, is to allow reviewers to consult among themselves. In this model, reviewers exchange their reviews and discuss them before sending final reviews to the program chair. I review for other conferences where this is done, and I think that it has the substantial advantage of helping to reduce cases where a reviewer just didn't understand something and thus gave a paper an unreasonably low score. Such is my opinion in any case. Perhaps this idea and other such ideas could be discussed in Denver.
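The scale-mismatch problem the notes mention, area chairs studying histograms of scores to spot reviewers using different scales, is the kind of check one could also do mechanically. A hypothetical sketch (my illustration, not the committee's actual procedure) that z-scores each reviewer's marks so harsh and lenient reviewers become comparable:

```python
from statistics import mean, stdev

def normalize_reviewer(scores):
    """Map one reviewer's raw scores (paper -> score) to z-scores so that
    reviewers who use different parts of the scale can be compared."""
    values = list(scores.values())
    m, s = mean(values), stdev(values)
    if s == 0:  # reviewer gave every paper the same score
        return {paper: 0.0 for paper in scores}
    return {paper: (v - m) / s for paper, v in scores.items()}

# A harsh and a lenient reviewer who rank two papers identically
# end up with identical normalized scores:
harsh = {"paper_a": 3, "paper_b": 5}
lenient = {"paper_a": 7, "paper_b": 9}
print(normalize_reviewer(harsh))
print(normalize_reviewer(lenient))
```

This is only one possible formalization; the notes themselves describe visual inspection of histograms, not any particular normalization.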
Mike Jordan ------------------------------------------------------------------- Reviewers for NIPS*96: --------------------- Larry Abbott David Lowe Naoki Abe David Madigan Subutai Ahmad Marina Meila Ethem Alpaydin Bartlett Mel Chuck Anderson David Miller James Anderson Kenneth Miller Chris Atkeson Martin Moller Pierre Baldi Read Montague Naama Barkai Tony Movshon Etienne Barnard Klaus Mueller Andy Barto Alan Murray Francoise Beaufays Ian Nabney Sue Becker Jean-Pierre Nadal Yoshua Bengio Ken Nakayama Michael Biehl Ralph Neuneier Leon Bottou Mahesan Niranjan Herve Bourlard Peter Norvig Timothy Brown Klaus Obermayer Nader Bshouty Erkki Oja Joachim Buhmann Genevieve Orr Carmen Canavier Art Owen Claire Cardie Barak Pearlmutter Ted Carnevale Jing Peng Nestor Caticha Fernando Pereira Gert Cauwenberghs Pietro Perona David Cohn Carsten Peterson Greg Cooper Jay Pittman Corinna Cortes Tony Plate Gary Cottrell John Platt Marie Cottrell Jordan Pollack Bob Crites Alexandre Pouget Christian Darken Jose Principe Peter Dayan Adam Prugel-Bennett Virginia de Sa Anand Rangarajan Alain Destexhe Carl Rasmussen Thomas Dietterich Steve Renals Dawei Dong Barry Richmond Charles Elkan Peter Riegler Ralph Etienne-Cummings Brian Ripley Gary Flake David Rohwer Paolo Frasconi David Saad Bill Freeman Philip Sabes Yoav Freund Lawrence Saul Jerry Friedman Stefan Schaal Patrick Gallinari Jeff Schneider Stuart Geman Terrence Sejnowski Zoubin Ghahramani Robert Shapley Federico Girosi Patrice Simard Mirta Gordon Tai Sing Russ Greiner Yoram Singer Vijaykumar Gullapalli Satinder Singh Isabelle Guyon Padhraic Smyth Lars Hansen Bill Softky John Harris David Somers Michael Hasselmo Devika Subramanian Simon Haykin Richard Sutton David Heckerman Josh Tenenbaum John Hertz Michael Thielscher Andreas Herz Sebastian Thrun Tom Heskes Mike Titterington Geoffrey Hinton Geoffrey Towell Sean Holden Todd Troyer Don Hush Ah Chung Tsoi Nathan Intrator Michael Turmon Tommi Jaakkola Joachim Utans Marwan Jabri Benjamin 
VanRoy Jeff Jackson Kelvin Wagner Robbie Jacobs Raymond Watrous Chuanyi Ji Yair Weiss Ido Kanter Christopher Williams Bert Kappen Ronald Williams Dan Kersten Robert Williamson Ronny Kohavi David Willshaw Alan Lapedes Ole Winther John Lazzaro David Wolpert Todd Leen Lei Xu Zhaoping Li Alan Yuille Christiane Linster Tony Zador Richard Lippmann Steven Zucker Michael Littman  From levy at xws.com Thu Sep 26 15:36:09 1996 From: levy at xws.com (Kenneth L. Levy) Date: Thu, 26 Sep 1996 12:36:09 -0700 Subject: Dissertation Available: The Transformation of Acoustic Information by Cochlear Nucleus Octopus Cells: A Modeling Study Message-ID: <1.5.4.32.19960926193609.0069a4ec@mail.xws.com> Hello, My (Ken Levy) dissertation is available as a compressed postscript file for DOS/Win and Unix (downloading directions below). The dissertation is entitled "The Transformation of Acoustic Information by Cochlear Nucleus Octopus Cells: A Modeling Study." The cochlear nucleus is the first nucleus of the mammalian auditory brainstem, and the octopus cell type is one of the principal cell types. Octopus cells respond only at the onset of a toneburst. The dissertation presents an analysis of a compartmental model using GENESIS that led to a description of the underlying mechanism of the onset response of the octopus cell. It also includes the development, analysis and verification of a novel, biologically plausible model, labeled the intrinsic membrane model (IMM), that produces accurate spike times 100-to-1000 times more efficiently than a compartmental model. Finally, the document covers the utilization of the IMM to demonstrate the enhancement of the encoding of the fundamental frequency of the vowel [i] in background noise by single-cell and ensemble models of octopus cells. Comments and suggestions are welcome and encouraged! Thank you for your time and consideration.
--Ken Downloading Information: ------------------------ Homepage URL => http://www.eas.asu.edu/~neurolab or Homepage URL => http://www.xws.com/levy/levypub.html or Anonymous FTP site => ftp.eas.asu.edu (login:anonymous passwd:your email) FTP directory => pub/neurolab Dissertation file => Diss.ps.Z or DissPS.exe IMM => IMMdemo.tar.Z or IMMdemo.exe =============================================================== Kenneth L. Levy, Ph.D. levy at xws.com Acoustic Information Processing Lab http://www.xws.com/levy 31 Skamania Coves Drive Voice: (509) 427-5374 Stevenson, WA 98648 FAX: (509) 427-7131 ===============================================================  From rao at cs.rochester.edu Thu Sep 26 15:07:05 1996 From: rao at cs.rochester.edu (Rajesh Rao) Date: Thu, 26 Sep 1996 15:07:05 -0400 Subject: Tech Report: Visual Cortex as a Hierarchical Predictor Message-ID: <199609261907.PAA19545@skunk.cs.rochester.edu> The following technical report on a hierarchical predictor model of the visual cortex and the complex cell phenomenon of "endstopping" is available for retrieval via ftp. Comments and suggestions welcome (This message has been cross-posted - my apologies to those who received it more than once). -- Rajesh Rao Internet: rao at cs.rochester.edu Dept. of Computer Science VOX: (716) 275-2527 University of Rochester FAX: (716) 461-2018 Rochester NY 14627-0226 WWW: http://www.cs.rochester.edu/u/rao/ =========================================================================== The Visual Cortex as a Hierarchical Predictor Rajesh P.N. Rao and Dana H. Ballard Technical Report 96.4 National Resource Laboratory for the Study of Brain and Behavior Department of Computer Science, University of Rochester September, 1996 Abstract A characteristic feature of the mammalian visual cortex is the reciprocity of connections between cortical areas [1]. 
While corticocortical feedforward connections have been well studied, the computational function of the corresponding feedback projections has remained relatively unclear. We have modelled the visual cortex as a hierarchical predictor wherein feedback projections carry predictions for lower areas and feedforward projections carry the difference between the predictions and the actual internal state. The activities of model neurons and their synaptic strength are continually adapted using a hierarchical Kalman filter [2] that minimizes errors in prediction. The model generalizes several previously proposed encoding schemes [3,4,5,6,7,8] and allows functional interpretations of a number of well-known psychophysical and neurophysiological phenomena [9]. Here, we present simulation results suggesting that the classical phenomenon of endstopping [10,11] in cortical neurons may be viewed as an emergent property of the cortex implementing a hierarchical Kalman filter-like prediction mechanism for efficient encoding and recognition. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/endstop.ps.Z WWW URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/endstop.ps.Z 20 pages; 302K compressed. The following related papers are also available via ftp: ------------------------------------------------------------------------- Dynamic Model of Visual Recognition Predicts Neural Response Properties In The Visual Cortex Rajesh P.N. Rao and Dana H. Ballard (Neural Computation - in press) Abstract The responses of visual cortical neurons during fixation tasks can be significantly modulated by stimuli from beyond the classical receptive field. Modulatory effects in neural responses have also been recently reported in a task where a monkey freely views a natural scene. 
In this paper, we describe a hierarchical network model of visual recognition that explains these experimental observations by using a form of the extended Kalman filter as given by the Minimum Description Length (MDL) principle. The model dynamically combines input-driven bottom-up signals with expectation-driven top-down signals to predict current recognition state. Synaptic weights in the model are adapted in a Hebbian manner according to a learning rule also derived from the MDL principle. The resulting prediction/learning scheme can be viewed as implementing a form of the Expectation-Maximization (EM) algorithm. The architecture of the model posits an active computational role for the reciprocal connections between adjoining visual cortical areas in determining neural response properties. In particular, the model demonstrates the possible role of feedback from higher cortical areas in mediating neurophysiological effects due to stimuli from beyond the classical receptive field. Simulations of the model are provided that help explain the experimental observations regarding neural responses in both free viewing and fixating conditions. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/dynmem.ps.Z WWW URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/dynmem.ps.Z 43 pages; 569K compressed. -------------------------------------------------------------------------- A Class of Stochastic Models for Invariant Recognition, Motion, and Stereo Rajesh P.N. Rao and Dana H. Ballard Technical Report 96.1 Abstract We describe a general framework for modeling transformations in the image plane using a stochastic generative model. Algorithms that resemble the well-known Kalman filter are derived from the MDL principle for estimating both the generative weights and the current transformation state. 
The generative model is assumed to be implemented in cortical feedback pathways while the feedforward pathways implement an approximate inverse model to facilitate the estimation of current state. Using the above framework, we derive models for invariant recognition, motion estimation, and stereopsis, and present preliminary simulation results demonstrating recognition of objects in the presence of translations, rotations and scale changes. Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/invar.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/invar.ps.Z 7 pages; 430K compressed. ========================================================================== Anonymous ftp instructions: >ftp ftp.cs.rochester.edu Connected to anon.cs.rochester.edu. 220 anon.cs.rochester.edu FTP server (Version wu-2.4(3)) ready. Name: [type 'anonymous' here] 331 Guest login ok, send your complete e-mail address as password. Password: [type your e-mail address here] ftp> cd /pub/u/rao/papers/ ftp> get endstop.ps ftp> get dynmem.ps ftp> get invar.ps ftp> bye  From es2029 at eng.warwick.ac.uk Fri Sep 27 09:00:32 1996 From: es2029 at eng.warwick.ac.uk (es2029@eng.warwick.ac.uk) Date: Fri, 27 Sep 96 9:00:32 BST Subject: Thesis available: Constrained weight nets Message-ID: <25868.9609270800@eng.warwick.ac.uk> The following PhD Thesis is available on the web: ---------------------------------------------------- Feedforward Neural Networks with Constrained Weights ---------------------------------------------------- Altaf H. Khan (Email address effective 7 Oct 96 a.h.khan at ieee.org) Department of Engineering, University of Warwick, Coventry, CV4 7AL, England August 1996 218 pages - gzipped postscript version available as http://www.eng.warwick.ac.uk/~es2029/thesis.ps.gz This thesis will also be made available on Neuroprose in the near future.
---------------------------------------------------- Thesis Summary The conventional multilayer feedforward network having continuous weights is expensive to implement in digital hardware. Two new types of networks are proposed which lend themselves to cost-effective implementations in hardware and have a fast forward-pass capability. These two differ from the conventional model in having extra constraints on their weights: the first allows its weights to take integer values in the range [-3, 3] only, whereas the second restricts its synapses to the set {-1, 0, 1} while allowing unrestricted offsets. The benefits of the first configuration are in having weights which are only 3 bits deep and a multiplication operation requiring a maximum of one shift, one add, and one sign-change instruction. The advantages of the second are in having 1-bit synapses and a multiplication operation which consists of a single sign-change instruction. The procedure proposed for training these networks starts like the conventional error backpropagation procedure, but becomes more and more discretised in its behaviour as the network gets closer to an error minimum. Mainly based on steepest descent, it also has a perturbation mechanism to avoid getting trapped in local minima, and a novel mechanism for rounding off `near integers'. It incorporates weight elimination implicitly, which simplifies the choice of the start-up network configuration for training. It is shown that the integer-weight network, although lacking the universal approximation capability, can implement learning tasks, especially classification tasks, to acceptable accuracies. A new theoretical result is presented which shows that the multiplier-free network is a universal approximator over the space of continuous functions of one variable. In light of experimental results it is conjectured that the same is true for functions of many variables.
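The cost claim in the summary, that multiplying by an integer weight in [-3, 3] needs at most one shift, one add, and one sign change, can be sketched directly. A minimal illustration of that instruction counting (my sketch, not code from the thesis):

```python
def multiply_by_small_weight(x, w):
    """Multiply x by an integer weight in [-3, 3] using at most one
    shift, one add, and one sign change -- no general multiplier."""
    if w not in range(-3, 4):
        raise ValueError("weight must be an integer in [-3, 3]")
    magnitude = abs(w)
    if magnitude == 0:
        y = 0
    elif magnitude == 1:
        y = x                 # no shift or add needed
    elif magnitude == 2:
        y = x << 1            # one shift
    else:                     # magnitude == 3
        y = (x << 1) + x      # one shift, one add
    return -y if w < 0 else y  # one sign change

# Agrees with ordinary multiplication over a range of inputs:
print(all(multiply_by_small_weight(x, w) == x * w
          for x in range(-10, 11) for w in range(-3, 4)))
```

The {-1, 0, 1} synapse case is the degenerate version: the shift and add branches are never taken, leaving only the sign change.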
Decision and error surfaces are used to explore the discrete-weight approximation of continuous-weight networks using discretisation schemes other than integer weights. The results suggest that, provided a suitable discretisation interval is chosen, a discrete-weight network can be found which performs as well as a continuous-weight network, but that it may require more hidden neurons than its conventional counterpart. Experiments are performed to compare the generalisation performances of the new networks with that of the conventional one using three very different benchmarks: the MONK's benchmark, a set of artificial tasks designed to compare the capabilities of learning algorithms; the `onset of diabetes mellitus' prediction data set, a realistic set with very noisy attributes; and finally the handwritten numeral recognition database, a realistic but very structured data set. The results indicate that the new networks, despite having strong constraints on their weights, have generalisation performances similar to those of their conventional counterparts. -- Altaf.  From marney at ai.mit.edu Wed Sep 25 21:32:31 1996 From: marney at ai.mit.edu (Marney Smyth) Date: Wed, 25 Sep 1996 21:32:31 -0400 (EDT) Subject: Modern Regression and Classification Course in Boston Message-ID: <9609260132.AA05378@carpentras.ai.mit.edu>

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
+++                                                        +++
+++          Modern Regression and Classification          +++
+++      Widely Applicable Statistical Methods for         +++
+++              Modeling and Prediction                   +++
+++                                                        +++
+++         Cambridge, MA, December 9 - 10, 1996           +++
+++                                                        +++
+++         Trevor Hastie, Stanford University             +++
+++         Rob Tibshirani, University of Toronto          +++
+++                                                        +++
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

This two-day course will give a detailed overview of statistical models for regression and classification.
Known as machine learning in computer science and artificial intelligence, and as pattern recognition in engineering, this is a hot field with powerful applications in science, industry and finance. The course covers a wide range of models, from linear regression through various classes of more flexible models, to fully nonparametric regression models, both for the regression problem and for classification. Although a firm theoretical motivation will be presented, the emphasis will be on practical applications and implementations. The course will include many examples and case studies, and participants should leave the course well-armed to tackle real problems with realistic tools. The instructors are at the forefront of research in this area. After a brief overview of linear regression tools, methods for one-dimensional and multi-dimensional smoothing are presented, as well as techniques that assume a specific structure for the regression function. These include splines, wavelets, additive models, MARS (multivariate adaptive regression splines), projection pursuit regression, neural networks and regression trees. The same hierarchy of techniques is available for classification problems. Classical tools such as linear discriminant analysis and logistic regression can be enriched to account for nonlinearities and interactions. Generalized additive models and flexible discriminant analysis, neural networks and radial basis functions, classification trees and kernel estimates are all such generalizations. Other specialized techniques for classification, including nearest-neighbor rules and learning vector quantization, will also be covered. Apart from describing these techniques and their applications to a wide range of problems, the course will also cover model selection techniques, such as cross-validation and the bootstrap, and diagnostic techniques for model assessment.
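Cross-validation, mentioned above as a model-selection tool, is simple enough to sketch in a few lines. A plain-Python illustration of the k-fold splitting it rests on (my sketch, not the course's software):

```python
def k_fold_splits(n, k):
    """Yield k (train, validation) index splits covering indices 0..n-1,
    with fold sizes differing by at most one."""
    indices = list(range(n))
    fold_size, remainder = divmod(n, k)
    folds, start = [], 0
    for i in range(k):
        stop = start + fold_size + (1 if i < remainder else 0)
        folds.append(indices[start:stop])
        start = stop
    for validation in folds:
        # Train on everything outside the held-out fold.
        train = [j for other in folds if other is not validation
                 for j in other]
        yield train, validation

# Every point serves as validation data exactly once across the k splits:
held_out = [j for _, validation in k_fold_splits(10, 3) for j in validation]
print(sorted(held_out) == list(range(10)))
```

Model selection then keeps the model with the lowest average validation error over the k splits; the bootstrap plays an analogous role by resampling with replacement instead of partitioning.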
Software for these techniques will be illustrated, and a comprehensive set of course notes will be provided to each attendee. Additional information is available at the Website: http://playfair.stanford.edu/~trevor/mrc.html

COURSE OUTLINE

DAY ONE:

Overview of regression methods: Linear regression models and least squares. Ridge regression and the lasso. Flexible linear models and basis function methods. Linear and nonlinear smoothers; kernels, splines, and wavelets. Bias/variance tradeoff: cross-validation and bootstrap. Smoothing parameters and effective number of parameters. Surface smoothers.

++++++++

Structured nonparametric regression: Problems with high-dimensional smoothing. Structured high-dimensional regression: additive models, projection pursuit regression, CART, MARS, radial basis functions, neural networks. Applications to time series forecasting.

DAY TWO:

Classification: Statistical decision theory and classification rules. Linear procedures: discriminant analysis, logistic regression. Quadratic discriminant analysis, parametric models. Nearest-neighbor classification, K-means and LVQ. Adaptive nearest-neighbor methods.

++++++++

Nonparametric classification: Classification trees: CART. Flexible/penalized discriminant analysis. Multiple logistic regression models and neural networks. Kernel methods.

THE INSTRUCTORS

Professor Trevor Hastie of the Statistics and Biostatistics Departments at Stanford University was formerly a member of the Statistics and Data Analysis Research group at AT&T Bell Laboratories. He co-authored with Tibshirani the monograph Generalized Additive Models (1990), published by Chapman and Hall, and has many research articles in the area of nonparametric regression and classification. He also co-edited the Wadsworth book Statistical Models in S (1991) with John Chambers.
Professor Robert Tibshirani of the Statistics and Biostatistics departments at the University of Toronto is the most recent recipient of the COPSS award, an award given jointly by all the leading statistical societies to the most outstanding statistician under the age of 40. He also has many research articles on nonparametric regression and classification. With Bradley Efron he co-authored the best-selling text An Introduction to the Bootstrap in 1993, and has been an active researcher on bootstrap technology for the past 11 years.

Quotes from previous participants:

"... the best presentation by professional statisticians I have ever had the pleasure of attending"

".. superior to most courses in all respects."

Both Prof. Hastie and Prof. Tibshirani are actively involved in research in modern regression and classification and are well known not only in the statistics community but in the machine-learning and neural network fields as well. They have given many short courses together on classification and regression procedures to a wide variety of academic, government and industrial audiences. These include the American Statistical Association and Interface meetings, the NATO ASI Neural Networks and Statistics workshop, AI and Statistics, and the Canadian Statistical Society meetings.

BOSTON COURSE: December 9-10, 1996 at the HYATT REGENCY HOTEL, CAMBRIDGE, MASSACHUSETTS.

PRICE: $750 per attendee before November 11, 1996. Full-time registered students receive a 40% discount (i.e. $450). Cancellation fee is $100 after October 29, 1996. Registration fee after November 11, 1996 is $950 (students $530). Attendance is limited to the first 60 applicants, so sign up soon! These courses fill up quickly.

HOTEL ACCOMMODATION

The Hyatt Regency Hotel offers special accommodation rates for course participants ($139 per night). Contact the hotel directly: The Hyatt Regency Hotel, 575 Memorial Drive, Cambridge, MA 02139.
Phone: 617 4912-1234

Alternative hotel accommodation information at the MRC Website: http://playfair.stanford.edu/~trevor/mrc.html

COURSE REGISTRATION

TO REGISTER: Detach and fill in the Registration Form below:

Modern Regression and Classification
Widely applicable methods for modeling and prediction
December 9 - December 10, 1996
Cambridge, Massachusetts USA

Please complete this form (type or print)

Name ___________________________________________________
         Last              First             Middle

Firm or Institution ______________________________________

Mailing Address (for receipt) ____________________________
__________________________________________________________
__________________________________________________________
__________________________________________________________
   Country           Phone               FAX

__________________________________________________________
   email address

__________________________________________________________
   Credit card # (if payment by credit card)    Expiration Date

(Lunch Menu - tick as appropriate):   ___ Vegetarian   ___ Non-Vegetarian

Fee payment must be made by MONEY ORDER, PERSONAL CHECK, VISA or MASTERCARD. All amounts must be in US dollars. Make the fee payable to Prof. Trevor Hastie. Mail it, together with this completed Registration Form, to:

Marney Smyth
MIT Press E39-311
55 Hayward Street
Cambridge, MA 02142 USA

ALL CREDIT CARD REGISTRATIONS MUST INCLUDE BOTH CARD NUMBER AND EXPIRATION DATE.

DEADLINE: Registration before December 2, 1996. DO NOT SEND CASH.

Registration fee includes Course Materials, coffee breaks, and lunch both days.
If you have further questions, email to marney at ai.mit.edu

From marks at u.washington.edu Mon Sep 30 01:59:09 1996
From: marks at u.washington.edu (Robert Marks)
Date: Sun, 29 Sep 96 22:59:09 -0700
Subject: IEEE TNN CFP: Special Issue on Everyday Applications
Message-ID: <9609300559.AA08401@carson.u.washington.edu>

Special Issue of the IEEE Transactions on Neural Networks:
Everyday Applications of Neural Networks

The objective of this special issue is the presentation of cases of ongoing or everyday use of neural networks in industry, commerce, medicine, engineering, the military and other disciplines. Even though artificial neural networks have been around since the 1940s, the last decade has seen a tremendous upsurge in research and development. This activity has been at two levels: (i) advances in neural techniques and network architectures, and (ii) exploration of applications of this technology in various fields.

Neural network technology has reached a degree of maturity, as evidenced by an ever increasing number of applications. It is useful, at this stage, to take stock of applications to provide the neural practitioner with (i) knowledge of fields wherein neural technology has had an impact, and (ii) guidance concerning fruitful areas of research and development in neurotechnology that can have a significant impact.

This special issue of the TNN calls for submission of papers concerning neural technology adopted for ongoing or everyday use. Hybrid neural technology, such as neuro-fuzzy systems, is also appropriate. Submissions are to specifically address the infusion and adaptation of neural technology in various areas. Exploratory applications papers, normally welcome for submission to the TNN, are specifically discouraged for this special issue; papers on adopted and established applications, rather, are appropriate.
Submissions to the special issue will be judged on the veracity of everyday use, comparative performance over previously used techniques, and lessons learned from the development and application. Descriptions of remaining open problems, or of desired though unachieved performance, are encouraged.

Six copies of the manuscript should be mailed to one of the special issue editors by November 15, 1996. The special issue is tentatively scheduled for publication in July 1997. Submissions may be either brief papers or regular papers. Please refer to the TNN instructions to authors.

Tharam Dillon
Professor of Computer Science
Head, Department of Computer Science and Computer Engineering
La Trobe University
Bundoora, Melbourne, Victoria 3083
Australia
Tel: +61 3 479 2598
Fax: +61 3 479 3060
tharam at latcs1.cs.latrobe.edu.au

Payman Arabshahi
University of Washington
Department of Electrical Engineering
Benton Way at Stevens Way
Box 352500
Seattle, WA 98195
United States of America
payman at ee.washington.edu
206 236 2694
FAX: 206 543 3842

Robert J. Marks II
University of Washington
Department of Electrical Engineering
c/o 1131 199th Street SW
Lynnwood, WA 98036-7138
United States of America
r.marks at ieee.org
206 543 6990
FAX: 206 776 9297

---