From Connectionists-Request at cs.cmu.edu Mon Jan 1 00:05:13 1996
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Mon, 01 Jan 96 00:05:13 EST
Subject: Bi-monthly Reminder
Message-ID: <28104.820472713@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994.

This is an automatically posted bi-monthly reminder about how the
CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is a moderated forum for enlightened technical discussions
and professional announcements. It is not a random free-for-all like
comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons
actively involved in neural net research. The following posting guidelines
are designed to reduce the number of irrelevant messages sent to the list.

Before you post, please remember that this list is distributed to thousands
of busy people who don't want their time wasted on trivia. Also, many
subscribers pay cash for each kbyte; they shouldn't be forced to pay for
junk mail.

-- Dave Touretzky & Lisa Saksida

---------------------------------------------------------------------

What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical
  issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and
  tech reports.

- Conferences and workshops may be announced on this list AT MOST twice:
  once to send out a call for papers, and once to remind non-authors about
  the registration deadline. A flood of repetitive announcements about the
  same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly
  sensitive subject. Please try to (a) demonstrate that you have already
  pursued the quick, obvious routes to finding the information you desire,
  and (b) give people something back in return for bothering them.
  The easiest way to do both these things is to FIRST do the library work
  to find the basic references, then POST these as part of your query.
  Here's an example:

    WRONG WAY: "Can someone please mail me all references to cascade
    correlation?"

    RIGHT WAY: "I'm looking for references to work on cascade correlation.
    I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract,
    corresponded with him directly and retrieved the code in the nn-bench
    archive. Is anyone aware of additional work with this algorithm? I'll
    summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to
Connectionists at CS.CMU.EDU

-------------------------------------------------------------------

What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other
  administrative matters should be sent to:
  "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many
  "connectionists", one "request"). If you mention our mailing list to
  someone who may apply to be added to it, please make sure they use the
  above and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe
  to CONNECTIONISTS should be sent to postmaster at appropriate-site. If
  the site address is unknown, send your request to
  Connectionists-Request at cs.cmu.edu and we'll do our best to help. A
  phone call to the appropriate institution may sometimes be simpler and
  faster.

- Note that in many mail programs a reply to a message is automatically
  "CC"-ed to all the addresses on the "To" and "CC" lines of the original
  message. If the mailer you use has this property, please make sure your
  personal response (request for a Tech Report etc.) is NOT broadcast over
  the net.
-------------------------------------------------------------------------------

The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting
27-Feb-88 are now available for public perusal. A separate file exists for
each month. The files' names are:

  arch.yymm

where yymm stands for the two-digit year and month. Thus the earliest
available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress
program. To browse through these files (as well as through other files,
see below) you must FTP them to your local machine. The file "current" in
the same directory contains the archives for the current month.

-------------------------------------------------------------------------------

How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with password your username.
3. 'cd' directly to the following directory:

     /afs/cs/project/connect/connect-archives

   The archive directory is the ONLY one you can access. You can't even
   find out whether any other directories exist. If you are using the 'cd'
   command you must cd DIRECTLY into this directory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".
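[Editor's note: the arch.yymm scheme above can be sketched as a small
helper; the function name and the example dates are illustrative only, not
part of the original posting.]

```python
def archive_filename(year, month):
    """Build a CONNECTIONISTS archive file name in the arch.yymm
    convention, e.g. arch.8802 for February 1988 (the earliest
    available month)."""
    return "arch.%02d%02d" % (year % 100, month)

# Earliest archive (February 1988) and an archive from this digest's month:
print(archive_filename(1988, 2))   # arch.8802
print(archive_filename(1996, 1))   # arch.9601
```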
-------------------------------------------------------------------------------

Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following url:

  http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------

The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52)
pub/neuroprose directory

This directory contains technical reports as a public service to the
connectionist and neural network scientific community, which has an
organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints in this
directory and announce availability, and other interested researchers can
rapidly retrieve and print the postscripts. This saves copying, postage
and handling, by having the interested reader supply the paper. We
strongly discourage the merger into the repository of existing bodies of
work or the use of this medium as a vanity press for papers which are not
of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to
pollack at cis.ohio-state.edu. Within a couple of days, I will move and
protect it, and suggest a different name if necessary. The current naming
convention is author.title.filetype.Z, where title is just enough to
discriminate among the files of the same author. The filetype is usually
"ps" for postscript, our desired universal printing format, but may be
tex, which requires more local software than a spooler. The Z indicates
that the file has been compressed by the standard unix "compress" utility,
which results in the .Z affix. To place or retrieve .Z files, make sure to
issue the FTP command "BINARY" before transferring files. After retrieval,
call the standard unix "uncompress" utility, which removes the .Z affix.
An example of placing a file is in the appendix.
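[Editor's note: the author.title.filetype.Z convention can be sketched as
follows; the function name and the sample author/title are hypothetical,
chosen only to illustrate the convention described above.]

```python
def neuroprose_name(author, title, filetype="ps", compressed=True):
    """Build a NEUROPROSE file name following the
    author.title.filetype[.Z] convention. The title should be just
    enough to discriminate among one author's files."""
    name = "%s.%s.%s" % (author.lower(), title.lower(), filetype)
    if compressed:
        name += ".Z"   # .Z marks a file run through unix "compress"
    return name

print(neuroprose_name("myname", "title"))
# myname.title.ps.Z
```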
Make sure your paper is single-spaced, so as to save paper, and include an
INDEX entry, consisting of 1) the filename, 2) the email contact for
problems, 3) the number of pages, and 4) a one-sentence description. See
the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make
copies of their paper. Before announcing, have a friend at another
institution retrieve and print the file, so as to avoid easily found local
postscript library errors. And let the community know how many pages to
expect on their printer. Finally, information about where the paper
will/might appear is appropriate inside the paper as well as in the
announcement.

In the subject line of your mail message, rather than "paper available via
FTP," please indicate the subject or title, e.g.

  paper available: "Solving Towers of Hanoi with ART-4"

Please add two lines to your mail header, or the top of your message, so
as to facilitate the development of mailer scripts and macros which can
automatically retrieve files from both NEUROPROSE and other lab-specific
repositories:

  FTP-host: archive.cis.ohio-state.edu
  FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it
automatically forwarded to other groups, like NEURON-DIGEST (which gets
posted to comp.ai.neural-networks), and whether you want to provide (B)
free or (C) prepaid hard copies for those unable to use FTP. To prevent
forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of
your file. If you do offer hard copies, be prepared for a high cost. One
author reported that when they allowed combination AB, the rattling around
of their "free paper offer" on the worldwide data net generated over 2000
hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory,
and can perform the necessary retrieval operations, given the file name.
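[Editor's note: a minimal sketch of the kind of mailer script the two
header lines above are meant to enable. This is not the actual Getps
script; the function name and parsing strategy are assumptions.]

```python
def parse_ftp_headers(message):
    """Extract the FTP-host and FTP-filename lines from a paper
    announcement, as an automatic retrieval script might do.
    Returns (host, filename), or None if either line is missing."""
    host = filename = None
    for line in message.splitlines():
        if line.startswith("FTP-host:"):
            host = line.split(":", 1)[1].strip()
        elif line.startswith("FTP-filename:"):
            filename = line.split(":", 1)[1].strip()
    if host and filename:
        return host, filename
    return None

announcement = """FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z"""
print(parse_ftp_headers(announcement))
# ('archive.cis.ohio-state.edu', '/pub/neuroprose/filename.ps.Z')
```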
Functions for GNU Emacs RMAIL, and other mailing systems, will also be
posted as debugged and available. At any time, for any reason, the author
may request that their paper be updated or removed.

For further questions contact:

  Jordan Pollack
  Associate Professor
  Computer Science Department
  Center for Complex Systems
  Brandeis University
  Waltham, MA 02254
  Phone: (617) 736-2713/* to fax
  email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

  unix> compress myname.title.ps
  unix> ftp archive.cis.ohio-state.edu
  Connected to archive.cis.ohio-state.edu.
  220 archive.cis.ohio-state.edu FTP server ready.
  Name: anonymous
  331 Guest login ok, send ident as password.
  Password: neuron
  230 Guest login ok, access restrictions apply.
  ftp> binary
  200 Type set to I.
  ftp> cd pub/neuroprose/Inbox
  250 CWD command successful.
  ftp> put myname.title.ps.Z
  200 PORT command successful.
  150 Opening BINARY mode data connection for myname.title.ps.Z
  226 Transfer complete.
  100000 bytes sent in 1.414 seconds
  ftp> quit
  221 Goodbye.

  unix> mail pollack at cis.ohio-state.edu
  Subject: file in Inbox.

  Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is
  the INDEX entry:

  myname.title.ps.Z
  mylogin at my.email.address
  12 pages.
  A random paper which everyone will want to read

  Let me know when it is in place so I can announce it to Connectionists
  at cmu.
  ^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE,
HE DOES THE FOLLOWING:

  unix> mail connectionists
  Subject: TR announcement: Born Again Perceptrons

  FTP-host: archive.cis.ohio-state.edu
  FTP-file: pub/neuroprose/myname.title.ps.Z

  The file myname.title.ps.Z is now available for copying from the
  Neuroprose repository:

  Random Paper (12 pages)
  Somebody Somewhere
  Cornell University

  ABSTRACT: In this unpublishable paper, I generate another alternative
  to the back-propagation algorithm which performs 50% better on learning
  the exclusive-or problem.
  ~r.signature
  ^D

------------------------------------------------------------------------

How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine
   "ftp.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any
   subdirectories of this one should also be accessible. Parent
   directories should not be. Another valid directory is
   "/afs/cs/project/connect/code", where we store various supported and
   unsupported neural network simulators and related software.
4. At this point FTP should be able to get a listing of files in this
   directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".
From mpolycar at ece.uc.edu Tue Jan 2 10:19:35 1996
From: mpolycar at ece.uc.edu (Marios Polycarpou)
Date: Tue, 2 Jan 1996 10:19:35 -0500 (EST)
Subject: ISIC'96: Final Call for Papers
Message-ID: <199601021519.KAA19208@zoe.ece.uc.edu>

                        FINAL CALL FOR PAPERS

     11th IEEE International Symposium on Intelligent Control (ISIC'96)

Sponsored by the IEEE Control Systems Society and held in conjunction with
The 1996 IEEE International Conference on Control Applications (CCA) and
The IEEE Symposium on Computer-Aided Control System Design (CACSD)

September 15-18, 1996
The Ritz-Carlton Hotel, Dearborn, Michigan, USA

ISIC General Chair:   Kevin M. Passino, The Ohio State University
ISIC Program Chair:   Jay A. Farrell, University of California, Riverside
ISIC Publicity Chair: Marios Polycarpou, University of Cincinnati

Intelligent control, the discipline where control algorithms are developed
by emulating certain characteristics of intelligent biological systems, is
being fueled by recent advancements in computing technology and is
emerging as a technology that may open avenues for significant
technological advances. For instance, fuzzy controllers, which provide for
a simplistic emulation of human deduction, have been heuristically
constructed to perform difficult nonlinear control tasks. Knowledge-based
controllers developed using expert systems or planning systems have been
used for hierarchical and supervisory control. Learning controllers, which
provide for a simplistic emulation of human induction, have been used for
the adaptive control of uncertain nonlinear systems. Neural networks have
been used to emulate human memorization and learning characteristics to
achieve high performance adaptive control for nonlinear systems.
Genetic algorithms that use the principles of biological evolution and
"survival of the fittest" have been used for computer-aided design of
control systems and to automate the tuning of controllers by evolving, in
real time, populations of highly fit controllers.

Topics in the field of intelligent control are gradually evolving, and
expanding on and merging with those of conventional control. For instance,
recent work has focused on comparative cost-benefit analyses of
conventional and intelligent control techniques using simulation and
implementations. In addition, there has been recent activity focused on
modeling and nonlinear analysis of intelligent control systems,
particularly work focusing on stability analysis. Moreover, there has been
a recent focus on the development of intelligent and conventional control
systems that can achieve enhanced autonomous operation. Such intelligent
autonomous controllers try to integrate conventional and intelligent
control approaches to achieve levels of performance, reliability, and
autonomous operation previously only seen in systems operated by humans.

Papers are being solicited for presentation at ISIC and for publication in
the Symposium Proceedings on topics such as:

- Architectures for intelligent control
- Hierarchical intelligent control
- Distributed intelligent systems
- Modeling intelligent systems
- Mathematical analysis of intelligent systems
- Knowledge-based systems
- Fuzzy systems / fuzzy control
- Neural networks / neural control
- Machine learning
- Genetic algorithms
- Applications / Implementations:
  - Automotive / vehicular systems
  - Robotics / Manufacturing
  - Process control
  - Aircraft / spacecraft

This year the ISIC is being held in conjunction with the 1996 IEEE
International Conference on Control Applications and the IEEE Symposium on
Computer-Aided Control System Design. Effectively this is one large
conference at the beautiful Ritz-Carlton hotel.
The programs will be held in parallel so that sessions from each
conference can be attended by all. There will be one registration fee and
each registrant will receive a complete set of proceedings. For more
information, and information on how to submit a paper to the conference,
see the back of this sheet.

++++++++++ Submissions: ++++++++++

Papers: Five copies of the paper (including an abstract) should be sent by
Jan. 22, 1996 to:

  Jay A. Farrell, ISIC'96
  College of Engineering                ph:  (909) 787-2159
  University of California, Riverside   fax: (909) 787-3188
  Riverside, CA 92521                   Jay_Farrell at qmail.ucr.edu

Clearly indicate who will serve as the corresponding author and include a
telephone number, fax number, email address, and full mailing address.
Authors will be notified of acceptance by May 1996. Accepted papers, in
final camera-ready form (maximum of 6 pages in the proceedings), will be
due in June 1996.

Invited Sessions: Proposals for invited sessions are being solicited and
are due Jan. 22, 1996. The session organizers should contact the Program
Chair by Jan. 1, 1996 to discuss their ideas and obtain information on the
required invited session proposal format.

Workshops and Tutorials: Proposals for pre-symposium workshops should be
submitted by Jan. 22, 1996 to:

  Kevin M. Passino, ISIC'96
  Dept. Electrical Engineering          ph:  (614) 292-5716
  The Ohio State University             fax: (614) 292-7596
  2015 Neil Ave.                        passino at osu.edu
  Columbus, OH 43210-1272

Please contact K.M. Passino by Jan. 1, 1996 to discuss the content and
required format for the workshop or tutorial proposal.
++++++++++ Symposium Program Committee: ++++++++++

  James Albus, National Institute of Standards and Technology
  Karl Astrom, Lund Institute of Technology
  Matt Barth, University of California, Riverside
  Michael Branicky, Massachusetts Institute of Technology
  Edwin Chong, Purdue University
  Sebastian Engell, University of Dortmund
  Toshio Fukuda, Nagoya University
  Zhiqiang Gao, Cleveland State University
  Dimitry Gorinevsky, Measurex Devron Inc.
  Ken Hunt, Daimler-Benz AG
  Tag Gon Kim, KAIST
  Mieczyslaw Kokar, Northeastern University
  Ken Loparo, Case Western Reserve University
  Kwang Lee, The Pennsylvania State University
  Michael Lemmon, University of Notre Dame
  Frank Lewis, University of Texas at Arlington
  Ping Liang, University of California, Riverside
  Derong Liu, General Motors R&D Center
  Kumpati Narendra, Yale University
  Anil Nerode, Cornell University
  Marios Polycarpou, University of Cincinnati
  S. Joe Qin, Fisher-Rosemount Systems, Inc.
  Tariq Samad, Honeywell Technology Center
  George Saridis, Rensselaer Polytechnic Institute
  Jennie Si, Arizona State University
  Mark Spong, University of Illinois at Urbana-Champaign
  Jeffrey Spooner, Sandia National Laboratories
  Harry Stephanou, Rensselaer Polytechnic Institute
  Kimon Valavanis, University of Southwestern Louisiana
  Li-Xin Wang, Hong Kong University of Science and Tech.
  Gary Yen, USAF Phillips Laboratory

**************************************************************************
* Prof. Marios M. Polycarpou                | TEL: (513) 556-4763
* University of Cincinnati                  | FAX: (513) 556-7326
* Dept. Electrical & Computer Engineering   |
* Cincinnati, Ohio 45221-0030               | Email: polycarpou at uc.edu
**************************************************************************

From hicks at cs.titech.ac.jp Wed Jan 3 19:57:57 1996
From: hicks at cs.titech.ac.jp (hicks@cs.titech.ac.jp)
Date: Thu, 4 Jan 1996 09:57:57 +0900
Subject: Query: Infinite priors
Message-ID: <199601040057.JAA27152@euclid.cs.titech.ac.jp>

I have the following question concerning the existence of certain priors.

Can we say that a uniform prior over an infinite domain exists? For
example, the uniform prior over all natural numbers. I wonder, since it
cannot be expressed in the form p(n) = f(n)/\sum_n f(n), where f(n) is a
well-defined function over the natural numbers. In general, if f(n) is not
a summable series, then can the probability function p(n), whose elements
have the ratios of the elements f(n), i.e., p(n)/p(m) = f(n)/f(m), be said
to exist?

I ask because a true Bayesian approach to some problems may require the
prior to be defined. If there is no prior, then we can't say we are taking
a Bayesian approach. If an infinite uniform prior does not exist, then we
cannot take the approach that "no prior knowledge" = "infinite uniform
prior". I.e., it would imply that any Bayesian approach involving the
prior MUST begin with some assumptions about the prior (i.e., it must be
formed from a summable/integrable function).

References or opinions would be welcome.

Craig Hicks.  hicks at cs.titech.ac.jp

From steven.young at psy.ox.ac.uk Thu Jan 4 10:46:59 1996
From: steven.young at psy.ox.ac.uk (Steven Young)
Date: Thu, 4 Jan 1996 15:46:59 +0000 (GMT)
Subject: LAST CALL for participation for the Oxford Summer School on Connectionist Modelling
Message-ID: <199601041547.PAA13854@cogsci1.psych.ox.ac.uk>

This is the LAST CALL for participation in the 1996 Oxford Summer School
on Connectionist Modelling. Please pass on this information to people you
know who would be interested.
-------- OXFORD SUMMER SCHOOL ON CONNECTIONIST MODELLING Department of Experimental Psychology University of Oxford 21st July - 2nd August 1996 Applications are invited for participation in a 2-week residential Summer School on techniques in connectionist modelling. The course is aimed primarily at researchers who wish to exploit neural network models in their teaching and/or research and it will provide a general introduction to connectionist modelling through lectures and exercises on Power PCs. The course is interdisciplinary in content though many of the illustrative examples are taken from cognitive and developmental psychology, and cognitive neuroscience. The instructors with primary responsibility for teaching the course are Kim Plunkett and Edmund Rolls. No prior knowledge of computational modelling will be required though simple word processing skills will be assumed. Participants will be encouraged to start work on their own modelling projects during the Summer School. The cost of participation in the Summer School is £750, which includes accommodation (bed and breakfast at St. John's College) and registration. Participants will be expected to cover their own travel and meal costs. A small number of graduate student scholarships providing partial funding may be available. Applicants should indicate whether they wish to be considered for a graduate student scholarship but are advised to seek their own funding as well, since in previous years the number of graduate student applications has far exceeded the number of scholarships available.
There is a Summer School World Wide Web page describing the contents of the 1995 Summer School available on: http://cogsci1.psych.ox.ac.uk/summer-school/ Further information about contents of the course can be obtained from Steven.Young at psy.ox.ac.uk If you are interested in participating in the Summer School, please contact: Mrs Sue King Department of Experimental Psychology University of Oxford South Parks Road Oxford OX1 3UD Tel: (01865) 271353 Email: susan.king at psy.oxford.ac.uk Please send a brief description of your background with an explanation of why you would like to attend the Summer School (one page maximum) no later than 31st January 1996. Regards, Steven Young. From finnoff at predict.com Thu Jan 4 13:35:37 1996 From: finnoff at predict.com (William Finnoff) Date: Thu, 4 Jan 96 11:35:37 MST Subject: Query: Infinite priors Message-ID: <9601041835.AA08352@predict.com> A non-text attachment was scrubbed... Name: not available Type: text Size: 2176 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/30e4e10e/attachment.ksh From om at research.nj.nec.com Thu Jan 4 13:56:31 1996 From: om at research.nj.nec.com (Stephen M. Omohundro) Date: Thu, 4 Jan 96 13:56:31 -0500 Subject: Query: Infinite priors In-Reply-To: <199601040057.JAA27152@euclid.cs.titech.ac.jp> (hicks@cs.titech.ac.jp) Message-ID: <9601041856.AA01276@iris64> > Date: Thu, 4 Jan 1996 09:57:57 +0900 > From: hicks at cs.titech.ac.jp > > > I have the following question concerning the existence of certain priors. > > Can we say that a uniform prior over an infinite domain exists? For example, > the uniform prior over all natural numbers. I wonder since it cannot be > expressed in the form p(n) = (f(n)/\sum_n f(n)), where f(n) is a well defined > function over the natural numbers. 
In general, if f(n) is not a summable > series, then can the probability function p(n), whose elements have the ratios > of the elements f(n), i.e., p(n)/p(m) = f(n)/f(m), be said to exist? > > I ask because a true Bayesian approach to some problems may require the prior > to be defined. If there is no prior, then we can't say we are taking a > Bayesian approach. If an infinite uniform prior does not exist, then we > cannot take the approach that "no prior knowledge" = "infinite uniform prior". > I.e., it would imply that any Bayesian approach involving the prior MUST begin > with some assumptions about the prior (i.e., it must be formed from a > summable/integrable function). > > References or opinions would be welcome. Craig Hicks. hicks at cs.titech.ac.jp > These are generally called "improper priors". You can still do much of the Bayesian paradigm using them because in many situations the likelihood function is such that the posterior (which is proportional to the prior times the likelihood) is normalizable even if the prior isn't. Formally you can treat them using a limiting sequence of proper priors. Most books on Bayesian analysis have some discussion of this topic. --Steve -- Stephen M. Omohundro http://www.neci.nj.nec.com/homepages/om NEC Research Institute, Inc. om at research.nj.nec.com 4 Independence Way Phone: 609-951-2719 Princeton, New Jersey 08540 Fax: 609-951-2488 From radford at cs.toronto.edu Thu Jan 4 14:15:27 1996 From: radford at cs.toronto.edu (Radford Neal) Date: Thu, 4 Jan 1996 14:15:27 -0500 Subject: Query on "infinite" priors Message-ID: <96Jan4.141537edt.965@neuron.ai.toronto.edu> Craig Hicks. hicks at cs.titech.ac.jp writes: > Can we say that a uniform prior over an infinite domain exists? For > example, the uniform prior over all natural numbers... > > I ask because a true Bayesian approach to some problems may require > the prior to be defined. If there is no prior, then we can't say we > are taking a Bayesian approach. 
If an infinite uniform prior does not > exist, then we cannot take the approach that "no prior knowledge" = > "infinite uniform prior". I.e., it would imply that any Bayesian > approach involving the prior MUST begin with some assumptions about > the prior (i.e., it must be formed from a summable/integrable function). This is a long-standing issue in Bayesian inference. These "infinite" priors are usually called "improper" priors, while those that can be normalized are called "proper" priors. Some Bayesians like to use improper priors, as long as the posterior turns out to be proper (which is often, but not always, the case). Other Bayesians eschew improper priors, because strange things can sometimes occur when you use them. One strangeness is that a Bayesian procedure based on an improper prior can be "inadmissible" - ie, be uniformly worse than some other procedure with respect to expected performance, for any state of the world. A famous example (Stein's paradox) is estimation of the mean of a vector of three or more independent components having Gaussian distributions, with the aim of minimizing the expected squared error. The Bayesian estimate with an improper uniform prior is just the sample mean, which turns out to be inadmissible. In contrast Bayesian procedures based on proper priors that are nowhere zero are always admissible. There should be lots of stuff on this in Bayesian textbooks, such as Bernardo and Smith's recent book on "Bayesian Theory" (though I don't have a copy handy to verify just what they say). ---------------------------------------------------------------------------- Radford M. Neal radford at cs.toronto.edu Dept. of Statistics and Dept.
of Computer Science radford at utstat.toronto.edu University of Toronto http://www.cs.toronto.edu/~radford ---------------------------------------------------------------------------- From plunkett at crl.ucsd.edu Thu Jan 4 11:41:46 1996 From: plunkett at crl.ucsd.edu (Kim Plunkett) Date: Thu, 4 Jan 96 08:41:46 PST Subject: No subject Message-ID: <9601041641.AA17397@crl.ucsd.edu> University Lectureship University of Oxford Department of Experimental Psychology Applications are invited from human experimental/cognitive psychologists (including connectionist modellers) with a proven record of research and training. The post is tenable from 1 October 1996 or as soon as possible thereafter. The stipend will be according to age on the scale £15,154-£28,215 per annum. The successful candidate may be offered an Official Fellowship at New College, for which additional remuneration would be available. Further particulars, containing details of the duties and the full range of emoluments and allowances attaching to both the University and College appointments, may be obtained from Professor S.D. Iversen, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, U.K. (telephone +44 (0) 1865 271356; fax +44 (0) 1865 271354), to whom applications (eight copies, two only from overseas candidates), containing a curriculum vitae, a summary of research, a list of principal publications and the names of three referees, should be sent by 31 January 1996. Candidates will be notified if they are required for interview. The University exists to promote excellence in education and research, and is an equal opportunities employer.
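The Stein's-paradox point in Radford Neal's reply above is easy to check numerically. The sketch below (an editorial illustration, not part of the original thread; all settings such as d=10 and the zero true mean are arbitrary choices) compares the sample mean, which is the Bayes estimate under the improper uniform prior, against the classic James-Stein shrinkage estimator for the mean of a multivariate Gaussian with identity covariance.

```python
import numpy as np

# Sample mean (= Bayes estimate under the improper uniform prior) vs. the
# James-Stein estimator, for the mean of a d-dimensional unit-variance
# Gaussian.  Shrinkage helps for every true mean when d >= 3; the effect
# is largest near the shrinkage target (here, the origin).
rng = np.random.default_rng(0)
d, trials = 10, 20000
theta = np.zeros(d)                       # illustrative true mean
x = rng.normal(theta, 1.0, (trials, d))   # one observation per trial

sample_mean = x                           # estimate from a single observation
shrink = 1.0 - (d - 2) / np.sum(x**2, axis=1, keepdims=True)
james_stein = shrink * x                  # shrink each estimate toward zero

risk_mean = np.mean(np.sum((sample_mean - theta) ** 2, axis=1))
risk_js = np.mean(np.sum((james_stein - theta) ** 2, axis=1))
# Expected squared error of the sample mean is d; James-Stein is strictly
# smaller, demonstrating the inadmissibility Neal describes.
print(risk_mean, risk_js)
```

With these settings the Monte Carlo risk of the sample mean comes out near d = 10, while the James-Stein risk is far lower, in line with the theoretical dominance result.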
From N.Sharkey at dcs.shef.ac.uk Fri Jan 5 07:08:22 1996 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Fri, 5 Jan 96 12:08:22 GMT Subject: 2nd and FINAL call for papers Message-ID: <9601051208.AA28421@entropy.dcs.shef.ac.uk> CALL FOR PAPERS ** LEARNING IN ROBOTS AND ANIMALS ** An AISB-96 two-day workshop University of Sussex, Brighton, UK: April, 1st & 2nd, 1996 Co-Sponsored by IEE Professional Group C4 (Artificial Intelligence) WORKSHOP ORGANISERS: Noel Sharkey (chair), University of Sheffield, UK. Gillian Hayes, University of Edinburgh, UK. Jan Heemskerk, University of Sheffield, UK. Tony Prescott, University of Sheffield, UK. PROGRAMME COMMITTEE: Dave Cliff, UK. Marco Dorigo, Italy. Frans Groen, Netherlands. John Hallam, UK. John Mayhew, UK. Martin Nillson, Sweden Claude Touzet, France Barbara Webb, UK. Uwe Zimmer, Germany. Maja Mataric, USA. For Registration Information: alisonw at cogs.susx.ac.uk In the last five years there has been an explosion of research on Neural Networks and Robotics from both a self-learning and an evolutionary perspective. Within this movement there is also a growing interest in natural adaptive systems as a source of ideas for the design of robots, while robots are beginning to be seen as an effective means of evaluating theories of animal learning and behaviour. A fascinating interchange of ideas has begun between a number of hitherto disparate areas of research and a shared science of adaptive autonomous agents is emerging. This two-day workshop proposes to bring together an international group to both present papers of their most recent research, and to discuss the direction of this emerging field. WORKSHOP FORMAT: The workshop will consist of half-hour presentations with at least 15 minutes being allowed for discussion at the end of each presentation. Short videos of mobile robot systems may be included in presentations. Proposals for robot demonstrations are also welcome. 
Please contact the workshop organisers if you are considering bringing a robot as some local assistance can be arranged. The workshop format may change once the number of accepted papers is known, in particular, there may be some poster presentations. WORKSHOP CONTRIBUTIONS: Contributions are sought from researchers in any field with an interest in the issues outlined above. Areas of particular interest include the following: * Reinforcement, supervised, and imitation learning methods for autonomous robots * Evolutionary methods for robotics * The development of modular architectures and reusable representations * Computational models of animal learning with relevance to robots, robot control systems modelled on animal behaviour * Reviews or position papers on learning in autonomous agents Papers will ideally emphasise real world problems, robot implementations, or show clear relevance to the understanding of learning in both natural and artificial systems. Papers should not exceed 5000 words in length. Please submit four hard copies to the Workshop Chair (address below) by 30th January, 1996. All papers will be refereed by the Workshop Committee and other specialists. Authors of accepted papers will be notified by 24th February. Final versions of accepted papers must be submitted by 10th March, 1996. A collated set of workshop papers will be distributed to workshop attendees. We are currently negotiating to publish the workshop proceedings as a book.
SUBMISSIONS TO: Noel Sharkey Department of Computer Science Regent Court University of Sheffield S1 4DP, Sheffield, UK email: n.sharkey at dcs.sheffield.ac.uk For further information about AISB96: ftp ftp.cogs.susx.ac.uk, login as anonymous, password: your email address, cd pub/aisb/aisb96 From ali at almaden.ibm.com Fri Jan 5 20:43:02 1996 From: ali at almaden.ibm.com (ali@almaden.ibm.com) Date: Fri, 5 Jan 1996 17:43:02 -0800 Subject: Dissertation announcement Message-ID: <9601060143.AA21382@brasil.almaden.ibm.com> The following dissertation is available via anonymous FTP and through http://www.ics.uci.edu/~ali (either as a whole or by chapters). Title: "Learning Probabilistic Relational Concept Descriptions" By Kamal Ali Key words: Learning probabilistic concepts, multiple models, multiple classifiers, combining classifiers, evidence combination, relational learning, First-order learning, Noise-tolerant learning, Learning of small disjuncts, Inductive Logic Programming. A B S T R A C T This dissertation presents results in the area of multiple models (multiple classifiers), learning probabilistic relational (first order) rules from noisy, "real-world" data and reducing the small disjuncts problem - the problem whereby learned rules that cover few training examples have high error rates on test data. Several results are presented in the arena of multiple models. The multiple models approach is relevant to the problem of making accurate classifications in ``real-world'' domains since it facilitates evidence combination which is needed to accurately learn on such domains. It is also useful when learning from small training data samples in which many models appear to be equally "good" w.r.t. the given evaluation metric. Such models often have quite varying error rates on test data so in such situations, the single model method has problems. Increasing search only partly addresses this problem whereas the multiple models approach has the potential to be much more useful.
The most important result of the multiple models research is that the *amount* of error reduction afforded by the multiple models approach is linearly correlated with the degree to which the individual models make errors in an uncorrelated manner. This work is the first to model the degree of error reduction due to the use of multiple models. It is also shown that it is possible to learn models that make less correlated errors in domains in which there are many ties in the search evaluation metric during learning. The third major result of the research on multiple models is the realization that models should be learned that make errors in a negatively-correlated manner rather than those that make errors in an uncorrelated (statistically independent) manner. The thesis also presents results on learning probabilistic first-order rules from relational data. It is shown that learning a class description for each class in the data - the one-per-class approach - and attaching probabilistic estimates to the learned rules allows accurate classifications to be made on real-world data sets. The thesis presents the system HYDRA which implements this approach. It is shown that the resulting classifications are often more accurate than those made by three existing methods for learning from noisy, relational data. Furthermore, the learned rules are relational and so are more expressive than the attribute-value rules learned by most induction systems. Finally, results are presented on the small-disjuncts problem in which rules that apply to rare subclasses have high error rates. The thesis presents the first approach that is simultaneously successful at reducing the error rates of small disjuncts while also reducing the overall error rate by a statistically significant margin. The previous approach, which aimed to reduce small disjunct error rates, only did so at the expense of increasing the error rates of large disjuncts.
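The link between error decorrelation and committee error reduction described above can be illustrated with a toy simulation (an editorial sketch, not taken from the dissertation; the model count, error rate, and test-set size are all arbitrary illustrative choices): majority voting over classifiers whose errors are statistically independent versus classifiers that all err on the same examples.

```python
import numpy as np

# Majority voting over several classifiers, each with a 30% error rate.
# When the classifiers' errors are independent, the committee error drops
# sharply; when every classifier errs on the same examples, voting buys
# nothing and the committee error stays at the single-model rate.
rng = np.random.default_rng(1)
n_models, n_examples, p_err = 11, 100_000, 0.3

# Independent case: each model errs on its own random 30% of the test set.
indep_errs = rng.random((n_models, n_examples)) < p_err
# Fully correlated case: all models err on the same 30% of the test set.
corr_errs = np.tile(rng.random(n_examples) < p_err, (n_models, 1))

def majority_vote_error(errs):
    # The committee is wrong on an example when a majority of models is wrong.
    return float(np.mean(errs.sum(axis=0) > n_models // 2))

e_indep = majority_vote_error(indep_errs)
e_corr = majority_vote_error(corr_errs)
print(e_indep, e_corr)  # committee error: independent vs. correlated errors
```

With independent errors the 11-member committee's error falls well below the 30% individual rate (the binomial tail probability of 6 or more wrong), while with fully correlated errors it remains at roughly 30%, matching the abstract's claim that error reduction tracks error decorrelation.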
It is shown that the one-per-class approach reduces error rates for such rare rules while not sacrificing the error rates of the other rules. The dissertation is approximately 180 pages long (single spaced) (~590K). ftp ftp.ics.uci.edu logname: anonymous password: your email address cd /pub/ali binary get thesis.ps.Z quit ============================================================================ I am now with the IBM Data Mining group at Almaden (San Jose) - we are looking for good people for data analysis (data mining) and consulting so please feel free to call me at (408) 365 8736. My address is: Kamal Ali, Room D3-250 IBM Almaden Research Center 650 Harry Rd San Jose, CA 95120 ============================================================================== Kamal Mahmood Ali, Ph.D. Phone: 408 927 1354 Consultant and data mining analyst, Fax: 408 927 3025 Data Mining Solutions, Office: ARC D3-250 IBM http://www.almaden.ibm.com/stss/ ============================================================================== From rao at cs.rochester.edu Sat Jan 6 18:25:56 1996 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 6 Jan 1996 18:25:56 -0500 Subject: Paper Available: Eye Movements in Visual Search Message-ID: <199601062325.SAA14449@vulture.cs.rochester.edu> Modeling Saccadic Targeting in Visual Search Rajesh P.N. Rao, Gregory J. Zelinsky, Mary M. Hayhoe and Dana H. Ballard Department of Computer Science University of Rochester Rochester, NY 14627, USA To appear in [Advances in Neural Information Processing Systems 8 (NIPS*95), D. Touretzky, M. Mozer and M. Hasselmo (Eds.), MIT Press, 1996] Abstract Visual cognition depends critically on the ability to make rapid eye movements known as saccades that orient the fovea over targets of interest in a visual scene. Saccades are known to be ballistic: the pattern of muscle activation for foveating a prespecified target location is computed prior to the movement and visual feedback is precluded. 
Despite these distinctive properties, there has been no general model of the saccadic targeting strategy employed by the human visual system during visual search in natural scenes. This paper proposes a model for saccadic targeting that uses iconic scene representations derived from oriented spatial filters at multiple scales. Visual search proceeds in a coarse-to-fine fashion with the largest scale filter responses being compared first. The model was empirically tested by comparing its performance with actual eye movement data from human subjects in a natural visual search task; preliminary results indicate substantial agreement between eye movements predicted by the model and those recorded from human subjects. ======================================================================== Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/nips95.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/nips95.ps.Z WWW URL: http://www.cs.rochester.edu:80/u/rao/ 7 pages; 570K compressed e-mail: rao at cs.rochester.edu ========================================================================= From katagiri at hip.atr.co.jp Mon Jan 8 03:03:10 1996 From: katagiri at hip.atr.co.jp (Shigeru Katagiri) Date: Mon, 08 Jan 1996 17:03:10 +0900 Subject: NNSP96: Notification Message-ID: <9601080803.AA09264@hector> !!! DEADLINE IS COMING SOON !!! 
******************************************************************* ******************************************************************* 1996 IEEE Workshop on Neural Networks for Signal Processing ******************************************************************* ******************************************************************* September 4-6, 1996 Keihanna, Kyoto, Japan Sponsored by the IEEE Signal Processing Society (In cooperation with the IEEE Neural Networks Council) (In cooperation with the Tokyo Chapter) ************************* * * * CALL FOR PAPERS * * * ************************* Thanks to the sponsorship of IEEE Signal Processing Society and to the cooperation of IEEE Signal Processing Society Tokyo Chapter, the sixth of a series of IEEE workshops on Neural Networks for Signal Processing will be held at Keihanna Plaza, Seika, Kyoto, Japan. Papers are solicited for, but not limited to, the following areas: Paradigms: artificial neural networks, Markov models, fuzzy logic, inference net, evolutionary computation, nonlinear signal processing, and wavelet Application areas: speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition Theories: generalization, design algorithms, optimization, parameter estimation, and network architectures Implementations: parallel and distributed implementation, hardware design, and other general implementation technologies Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Ms. Masae SHIOJI, (Tel.) 
+81 (774) 95 1052, (Fax.) +81 (774) 95 1008, (e-mail) shioji at hip.atr.co.jp, or access URL http://www.hip.atr.co.jp through the world wide web. Please send paper submissions to: Dr. Shigeru KATAGIRI IEEE NNSP'96 ATR Interpreting Telecommunications Research Laboratories 2-2 Hikaridai Seika-cho, Soraku-gun Kyoto 619-02 Japan SCHEDULE Submission of extended summary : January 26 1996 Notification of acceptance : March 29 Submission of photo-ready accepted paper : April 26 Advanced registration, before : June 28 ******************************************************************* General Chairs Shiro USUI (Toyohashi University of Technology (usui at tut.ac.jp)) Yoh'ichi TOHKURA (ATR HIP Res. Labs. (tohkura at hip.atr.co.jp)) Vice-Chair Nobuyuki OTSU (Electrotechnical Laboratory (otsu at etl.go.jp)) Finance Chair Sei MIYAKE (NHK (miyake at strl.nhk.or.jp)) Proceeding Chair Elizabeth J. Wilson (Raytheon Co. (bwilson at ed.ray.com)) Publicity Chair Erik McDermott (ATR HIP Res. Labs. (mcd at hip.atr.co.jp)) Program Chair Shigeru KATAGIRI (ATR IT Res. Labs. (katagiri at hip.atr.co.jp)) Program Committee L. ATLAS A. BACK P.-C. CHUNG H.-C. FU K. FUKUSHIMA L. GILES F. GIROSI A. GORIN N. HATAOKA Y.-H. HU J.-N. HWANG K. IIZUKA B.-H. JUANG M. KAWATO S. KITAMURA M. KOMURA G. KUHN S.-Y. KUNG K. KYUMA R. LIPPMANN J. MAKHOUL E. MANOLAKOS Y. MATSUYAMA S. MARUNO S. NAKAGAWA M. NIRANJAN E. OJA R. OKA K. PALIWAL T. POGGIO J. PRINCIPE H. SAWAI N. SONEHARA J. SORENSEN W.-C. SIU Y. TAKEBAYASHI V. TRESP T. WATANABE A. WEIGEND C. WELLEKENS E. YODOGAWA ******************************************************************* ============================================================================ Shigeru KATAGIRI, Dr. Eng. 
Supervisor ATR Interpreting Telecommunications Research Laboratories ATR Human Information Processing Research Laboratories phone: +81 (774) 95 1052 fax: +81 (774) 95 1008 email: katagiri at hip.atr.co.jp address: ATR-ITL 2-2 Hikaridai Seika-cho, Soraku-gun Kyoto 619-02 Japan Associate Editor IEEE Transactions on Signal Processing Program Chair 1996 IEEE Signal Processing Society Workshop on Neural Networks for Signal Processing (NNSP96) ============================================================================ From terry at salk.edu Mon Jan 8 04:26:46 1996 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 8 Jan 96 01:26:46 PST Subject: Neuromorphic Engineering Workshop Message-ID: <9601080926.AA01642@salk.edu> WORKSHOP ON NEUROMORPHIC ENGINEERING JUNE 24 - JULY 14, 1996 TELLURIDE, COLORADO Deadline for application is April 5, 1996. Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one three-week workshop that will be held in Telluride, Colorado in 1996. The first two Telluride Workshops on Neuromorphic Engineering, held in the summers of 1994 and 1995 and sponsored by NSF with co-funding from the "Center for Neuromorphic Systems Engineering" at Caltech, were resounding successes. A summary of these workshops, together with a list of participants, is available from: http://www.klab.caltech.edu/~timmer/telluride.html or http://www.salk.edu/~bryan/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems.
The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on-experience for all participants. Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. FORMAT: The three week workshop is co-organized by Dana Ballard (Rochester, US), Rodney Douglas (Zurich, Switzerland) and Misha Mahowald (Zurich, Switzerland). It is composed of lectures, practical tutorials on aVLSI design, hands-on projects, and interest groups. Apart from the lectures, the activities run concurrently. However, participants are free to attend any of these activities at their own convenience. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. 
Projects and interest groups meet in the late afternoons, and after dinner. The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation, layout, and testing over the course of the three weeks. The first week covers basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with aVLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing aVLSI retinas to video output monitors. Retina chips will be provided. The third week will feature a session on floating gates, including lectures on the physics of tunneling and injection, and experimentation with test chips. Projects that are carried out during the workshop will be centered in four groups: 1) active perception, 2) elements of autonomous robots, 3) robot manipulation, and 4) multichip neuron networks. The "active perception" project group will emphasize vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe which in turn provides drive signals to the three motors of the head. Projects will involve programming the DataCube to implement a variety of vision/oculomotor algorithms. The "elements of autonomous robots" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPG's and theories of nonlinear oscillators for locomotion. 
It will also explore the use of simple aVLSI sensors for autonomous robots. The "robot manipulation" group will use robot arms and working digital vision boards to investigate issues of sensory motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics. The "multichip neuron networks" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. PARTIAL LIST OF INVITED LECTURERS: Dana Ballard, Rochester. Randy Beer, Case-Western Reserve. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. Chris Dioro, Caltech. Rodney Douglas, Zurich. John Elias, Delaware University. Mary Hayhoe, Rochester. Geoffrey Hinton, Toronto. Christof Koch, Caltech. Shih-Chii Liu, Caltech and Rockwell. Misha Mahowald, Zurich. Stefan Schaal, Georgia Tech. Mark Tilden, Los Alamos. Terry Sejnowski, Salk Institute and UC San Diego. Paul Viola, MIT LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains (several mountains are in the 14,000+ range). The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. 
However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, one or two Macs and a few PCs running Windows and Linux. We have funds to reimburse some participants for up to $500.- of domestic travel and for all housing expenses. Please specify on the application whether such financial help is needed. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop. HOW TO APPLY: The deadline for receipt of applications is April 5, 1996. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1996.
From philh at cogs.susx.ac.uk Mon Jan 8 12:46:21 1996 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Mon, 8 Jan 1996 17:46:21 +0000 (GMT) Subject: Tutorial on Alife and Adaptive Behaviour Message-ID: AISB96 Workshop and Tutorial Series 31 March-2 April 1996 University of Sussex Falmer, Brighton, UK One day Tutorial: Artificial Life and Adaptive Behaviour --------------------------------------- Date of Tutorial: 31st March 1996 Presenters: Dave Cliff and Phil Husbands School of Cognitive and Computing Sciences University of Sussex Falmer, Brighton BN1 9QH Email: davec or philh @cogs.susx.ac.uk -------------------------------------------------------------------------- Description ----------- This tutorial will provide an introduction to the burgeoning fields of Artificial Life and Adaptive Behaviour. Artificial Life is concerned with the use of computational methods both to model and to synthesize phenomena normally associated with living systems. The related, but more focused, discipline of Adaptive Behaviour brings together ideas from a range of disciplines, such as ethology, cognitive science and robotics, to further our understanding of the behaviours and underlying mechanisms that allow animals, and, potentially, robots to survive in uncertain environments. Topics to be covered include: the historical roots of Artificial Life and Adaptive Behaviour; Strong Alife and Weak Alife; principles of behaviour-based robotics; artificial evolution and its application to autonomous robotics; modelling and synthesizing neural and other learning mechanisms for autonomous agents; collective behaviour; artificial worlds; software agents; understanding the origins of life; applications; the philosophical implications of these approaches. The material will be presented in lecture format with liberal use of video, computer and robot demonstrations.
Although only key work will be discussed, extensive bibliographies and suggestions for further reading will be provided along with lecture notes and other supporting literature. -------------------------------------------------------------------------- Prerequisites: None -------------------------------------------------------------------------- Tutorial Numbers: Maximum of 50 (constrained by room size) -------------------------------------------------------------------------- Audience: Anyone who thinks the tutorial description sounds interesting and is willing to part with the cash. They won't be sorry. -------------------------------------------------------------------------- Tutorial Fees: Tutorial fees include course materials, refreshments and lunch. All prices are in pounds Sterling. (AISB student fees are in parentheses.) Early Registration Deadline: 1 March 1996

                      AISB MEMBERS     NON-AISB MEMBERS
1 Day Tutorial        80.00 (55.00)    100.00
Late Registration    100.00 (75.00)    120.00

For full details of registration please contact: AISB96 Local Organisation COGS University of Sussex Falmer, Brighton, BN1 9QH Tel: +44 1273 678448 Fax: +44 1273 671320 Email: aisb at cogs.susx.ac.uk ========================================================================== From KOKINOV at BGEARN.BITNET Mon Jan 8 17:16:11 1996 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Mon, 08 Jan 96 17:16:11 BG Subject: Graduate study in CogSci Message-ID: The Department of Cognitive Science at the New Bulgarian University offers the following degrees: Post-Graduate Diploma, M.Sc., Ph.D. FEATURES Teaching in English both in the regular courses at NBU and in the intensive courses at the Annual International Summer Schools. Strong interdisciplinary program covering Psychology, Artificial Intelligence, Neurosciences, Linguistics, Philosophy, Mathematics, Methods.
Theoretical and experimental research in integration of the symbolic and connectionist approaches, emergent hybrid cognitive architectures, models of memory and reasoning, analogy, vision, imagery, agnosia, language and speech processing, aphasia. Advisors: at least two advisors with different backgrounds, possibly one external international advisor. International dissertation committee. INTERNATIONAL ADVISORY BOARD Elizabeth Bates (UCSD, USA), Amedeo Cappelli (CNR, Italy), Cristiano Castelfranchi (CNR, Italy), Daniel Dennett (Tufts University, USA), Charles De Weert (University of Nijmegen, Holland), Christian Freksa (Hamburg University, Germany), Dedre Gentner (Northwestern University, USA), Christopher Habel (Hamburg University, Germany), Douglas Hofstadter (Indiana University, USA), Joachim Hohnsbein (University of Dortmund, Germany), Keith Holyoak (UCLA, USA), Mark Keane (Trinity College, Ireland), Alan Lesgold (University of Pittsburgh, USA), Willem Levelt (Max-Planck Institute of Psycholinguistics, Holland), Ennio De Renzi (University of Modena, Italy), David Rumelhart (Stanford University, USA), Richard Shiffrin (Indiana University, USA), Paul Smolensky (University of Colorado, USA), Chris Thornton (University of Sussex, England), Carlo Umilta' (University of Padova, Italy) ADMISSION REQUIREMENTS B.Sc. degree in psychology, computer science, linguistics, philosophy, neurosciences, or related fields. Good command of English. Full scholarships available to students from Eastern and Central Europe. Address: Cognitive Science Department, New Bulgarian University, 21 Montevideo Str.
Sofia 1635, Bulgaria, tel.: (+3592) 55-80-65 fax: (+3592) 54-08-02 e-mail: kokinov at bgearn.acad.bg From marshall at cs.unc.edu Mon Jan 8 14:55:33 1996 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Mon, 8 Jan 1996 15:55:33 -0400 Subject: PhD studies in neural networks & vision at UNC-Chapel Hill Message-ID: <199601081955.PAA14274@marshall.cs.unc.edu> ---------------------------------------------------------------------------- PH.D. STUDIES IN NEURAL NETWORKS AND VISION at the University of North Carolina at Chapel Hill ---------------------------------------------------------------------------- Program: M.S./Ph.D. in Computer Science and other departments Faculty with NN-related research interests: approx. 40, in the Departments of Computer Science, Psychology, Neurobiology, Biomedical Engineering, Speech and Hearing Sciences, Linguistics, Physiology, Pharmacology, and Cell Biology. Personal description of program: I would like to encourage students who are interested in neural networks to apply to our department. There are also about 40 faculty members in various departments here at UNC-Chapel Hill who are doing research in NN-related areas. We are developing a diverse NN community in the area universities, research organizations, and corporations. My own areas of interest include: self-organizing neural networks, visual perception, and sensorimotor integration. My students are currently working on several NN projects involving visual depth, motion, orientation, transparency, and segmentation. I see neural networks as a truly interdisciplinary field, which includes the study of neuroscience, perceptual and behavioral psychology, computer science, and mathematics. I encourage my students to pursue broad knowledge in all areas related to neural networks. The Department of Computer Science has excellent facilities for research in computational aspects of neural networks, especially as applied to problems in vision.
Facilities in other departments are also used for NN-related research. For application materials, send a message to admit at cs.unc.edu. If you would like more information, write to me or Janet Jones (jones at cs.unc.edu) at this department. NOTE: The application deadline for Fall 1996 is JANUARY 31, 1996.

Courses:
  Behavior and its Biological Bases
  Behavioral Pharmacology
  Cognitive Development
  Computer Vision
  Conditioning and Learning
  Development of Language
  Developmental Neurobiology
  Developmental Theory
  Digital Signal Processing
  Experimental Neurophysiology
  Human Cognitive Abilities
  Human Learning
  Intro to Neural Networks
  Learning Theory and Practice
  Memory
  Neural Information Processing
  Neural Networks and Control
  Neural Networks and Vision
  Neuroanatomy
  Neurochemistry of Action
  Optimal Control Theory
  Physiological Psychology
  Picture Processing and Pattern Recognition
  Robotics
  Sensory Processes
  Statistical Pattern Recognition
  Synaptic Pharmacology
  Visual Perception
  Visual Solid Shape
  VLSI Design (Analog VLSI)

There are numerous researchers locally in NNs and allied fields at: UNC-Chapel Hill, UNC-Charlotte, NC State University, NC A&T State University, Duke University, Microelectronics Center of NC, NC Supercomputer Center, Research Triangle Institute, Army Research Office, IBM, Bell Northern Research, SAS Institute.

Computing resources (primarily UNIX machines) include:
  Department of Computer Science
    - Hewlett-Packard J210 computers
    - MasPar MP-1 (similar to Connection Machine)
    - Numerous workstations & minicomputers (DEC, Sun, Mac)
    - Microelectronics Systems Lab
    - Graphics and Image Lab
  NC Supercomputer Center
    - Cray Y-MP
    - IBM 3090
    - Visualization lab
  UNC Academic Computing Service
    - Convex C-220
    - IBM 4381
    - Several VAX computers
  Other UNC Departments
    - Vision research labs (Psychology, Radiology)
    - Neuroscience research labs (Neurobiology, Physiology)

Other resources:
o An effort to initiate a graduate program on "Computational and Neurobiological Models of Cognition" is
underway at UNC-Chapel Hill.
o Several research groups on NNs or vision meet regularly in the area.
o The graduate neurobiology programs at UNC-Chapel Hill and at nearby Duke University have several faculty members with research interests in vision and in "systems" neuroscience.
o The Whitaker Foundation has recently provided funds to enhance an interdisciplinary research program on "Engineering in Systems Neuroscience" at UNC-Chapel Hill. The program involves researchers from biomedical engineering, physiology, psychology, computer science, statistics, and other departments.
o Vision is a major research area in the Computer Science department at UNC-CH, with several faculty members in human visual perception, computer vision, image processing, and computer graphics.
o The Triangle Area Neural Network Society holds a colloquium series and sponsors other NN-related activities in the local area.

Jonathan A. Marshall                  marshall at cs.unc.edu
Dept. of Computer Science             http://www.cs.unc.edu/~marshall
CB 3175, Sitterson Hall               Office +1-919-962-1887
Univ. of North Carolina               Fax +1-919-962-1799
Chapel Hill, NC 27599-3175, USA

From terry at salk.edu Mon Jan 8 15:52:05 1996 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 8 Jan 96 12:52:05 PST Subject: Neural Computation 8:1 Message-ID: <9601082052.AA06235@salk.edu>

NEURAL COMPUTATION Vol 8, Issue 1, January 1996

Article:
  Lower Bounds for the Computational Power of Networks of Spiking Neurons
    Wolfgang Maass

Note:
  A Short Proof of the Posterior Probability Property of Classifier Neural Networks
    Raul Rojas

Letters:
  Coding of Time-Varying Signals in Spike Trains of Integrate-and-Fire Neurons with Random Threshold
    Fabrizio Gabbiani and Christof Koch
  A Simple Spike Train Decoder Inspired by the Sampling Theorem
    William B. Levy and David A. August
  A Model of Spatial Map Formation in the Hippocampus of the Rat
    Kenneth I. Blum and L. F.
Abbott
  A Neural Model of Olfactory Sensory Memory in the Honeybee's Antennal Lobe
    Christiane Linster and Claudine Masson
  A Spherical Basis Function Neural Network for Modeling Auditory Space
    Rick L. Jenison and Kate Fissell
  On the Convergence Properties of the EM Algorithm for Gaussian Mixtures
    Lei Xu and Michael I. Jordan
  A Comparison of Some Error Estimates for Neural Network Models
    Robert Tibshirani
  Neural Networks for Optimal Approximation of Smooth and Analytic Functions
    H. N. Mhaskar
  Equivalence of Boltzmann Chains and Hidden Markov Models
    David J. C. MacKay
  Diagrammatic Derivation of Gradient Algorithms for Neural Networks
    Eric A. Wan and Francoise Beaufays
  Does Extra Knowledge Necessarily Improve Generalization?
    David Barber and David Saad

-----

ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html

SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES

______ $50 Student and Retired
______ $78 Individual
______ $220 Institution

Add $28 for postage and handling outside USA (+7% GST for Canada). Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779

-----

From mel at quake.usc.edu Mon Jan 8 00:42:49 1996 From: mel at quake.usc.edu (Bartlett Mel) Date: Mon, 8 Jan 1996 13:42:49 +0800 Subject: Preprint Available Message-ID: <9601082142.AA05338@quake.usc.edu>

Announcing a new preprint, available at: ftp://quake.usc.edu/pub/mel/papers/mel.seemore.TR96.ps.gz (22 pages, 1.1M compressed, 34M uncompressed) Sorry, no hardcopies. Problems downloading/printing? Please notify the author at mel at quake.usc.edu.

--------------------------------------

SEEMORE: Combining Color, Shape, and Texture Histogramming in a Neurally-Inspired Approach to Visual Object Recognition

Bartlett W.
Mel Department of Biomedical Engineering University of Southern California, MC 1451 Los Angeles, California 90089 ABSTRACT Severe architectural and timing constraints within the primate visual system support the hypothesis that the early phase of object recognition in the brain is based on a feedforward feature-extraction hierarchy. A neurally-inspired feature-space model, called SEEMORE, was developed to explore the representational tradeoffs that arise when a feedforward neural architecture is faced with a difficult 3-D object recognition problem. SEEMORE is based on 102 feature channels that emphasize localized, quasi-viewpoint-invariant, nonlinear, receptive-field-style filters, and that are as a group sensitive to multiple visual cues (contour, texture, and color). SEEMORE's visual world consists of 100 objects of many different types, including rigid (shovel), non-rigid (telephone cord), and statistical (maple leaf cluster) objects, and photographs of complex scenes. Objects were individually presented in color video images under stable lighting conditions. Based on 12-36 training views, SEEMORE was required to recognize test views of objects that could vary in position, orientation in the image plane and in depth, and scale (factor of 2); for non-rigid objects, recognition was also tested under gross shape deformations. Correct classification performance on a test set of 600 novel object views was 97% (chance was 1%), and was comparable for the subset of 15 non-rigid objects. Performance was also measured under a variety of image degradation conditions, including partial occlusion, limited clutter, color shift, and additive noise. Generalization behavior and classification errors illustrate the emergence of several striking natural shape categories that are not explicitly encoded in the dimensions of the feature space. From koza at CS.Stanford.EDU Mon Jan 8 16:58:26 1996 From: koza at CS.Stanford.EDU (John R.
Koza) Date: Mon, 8 Jan 1996 13:58:26 -0800 (PST) Subject: GP-96 Jan 15 Weather Extension  Message-ID: <199601082158.NAA16525@Sunburn.Stanford.EDU> In view of the weather today (and likely continuing weather problems for the next few days on the East Coast of the US), we are extending the deadline for submitting papers to 5 PM Monday January 15, 1996 for ARRIVAL at the AAAI offices in California. Please send your submissions ONLY to the following address: GP-96 Conference c/o American Association for Artificial Intelligence 445 Burgess Drive Menlo Park, CA 94025 USA Please be sure to mark the package "GP-96 Conference." If anyone sent a package to me at Stanford, please notify me separately so I can look for it. The Post Office refuses to deliver mail to the new CSD Building and there are many unopened mail bags at this moment. Best wishes to the snowy East Coast. John Koza From marwan at sedal.usyd.edu.AU Tue Jan 9 18:21:32 1996 From: marwan at sedal.usyd.edu.AU (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: +61-2 692 2240) Date: Wed, 10 Jan 1996 10:21:32 +1100 Subject: new book Message-ID: <199601092321.KAA00723@cortex.su.OZ.AU> NEW BOOK ADAPTIVE ANALOGUE VLSI NEURAL SYSTEMS M.A. Jabri, R.J. Coggins, and B.G. Flower This is the first practical book on neural networks learning chips and systems. It covers the entire process of implementing neural networks in VLSI chips, beginning with the crucial issues of learning algorithms in an analog framework and limited precision effects, and giving actual case studies of working systems. The approach is systems and applications oriented throughout, demonstrating the attractiveness of such an approach for applications such as adaptive pattern recognition and optical character recognition. Prof. Jabri and his co-authors from AT&T Bell Laboratories, Bellcore and the University of Sydney provide a comprehensive introduction to VLSI neural networks suitable for research and development staff and advanced students. 
Key benefits to the reader:
o covers system aspects
o examines on-chip learning
o deals with the effect of the limited precision of VLSI techniques
o covers the issue of low-power implementation of chips with learning synapses

Book ordering info: December 1995: 234x156: 272pp, 135 line illus, 7 halftone illus: Paperback: 0-412-61630-0: £29.95 CHAPMAN & HALL 2-6 Boundary Row, London, SE1 8HN, U.K. Telephone: +44-171-865 0066 Fax: +44-171-522 9623

Contents
1 Overview
2 Architectures and Learning Algorithms
  2.1 Introduction
  2.2 Framework
  2.3 Learning
  2.4 Perceptrons
  2.5 The Multi-Layer Perceptron
  2.6 The Backpropagation Algorithm
  2.7 Comments
3 MOS Devices and Circuits
  3.1 Introduction
  3.2 Basic Properties of MOS Devices
  3.3 Conduction in MOSFETs
  3.4 Complementary MOSFETs
  3.5 Noise in MOSFETs
  3.6 Circuit Models of MOSFETs
  3.7 Simple CMOS Amplifiers
  3.8 Multistage Op Amps
  3.9 Choice of Amplifiers
  3.10 Data Converters
4 Analog VLSI Building Blocks
  4.1 Functional Designs to Architectures
  4.2 Neurons and Synapses
  4.3 Layout Strategies
  4.4 Simulation Strategies
5 Kakadu - A Low Power Analog VLSI MLP
  5.1 Advantages of Analog Implementation
  5.2 Architecture
  5.3 Implementation
  5.4 Chip Testing
  5.5 Discussion
6 Analog VLSI Supervised Learning
  6.1 Introduction
  6.2 Learning in an Analog Framework
  6.3 Notation
  6.4 Weight Update Strategies
  6.5 Learning Algorithms
  6.6 Credit Assignment Efficiency
  6.7 Parallelisation Heuristics
  6.8 Experimental Methods
  6.9 ICEG Experimental Results
  6.10 Parity 4 Experimental Results
  6.11 Discussion
  6.12 Conclusion
7 A Micropower Neural Network
  7.1 Introduction
  7.2 Architecture
  7.3 Training System
  7.4 Classification Performance and Power Consumption
  7.5 Discussion
  7.6 Conclusion
8 On-Chip Perturbation Based Learning
  8.1 Introduction
  8.2 On-Chip Learning Multi-Layer Perceptron
  8.3 On-Chip Learning Recurrent Neural Network
  8.4 Conclusion
9 Analog Memory Techniques
  9.1 Introduction
  9.2 Self-Refreshing Storage Cells
  9.3 Multiplying DACs
  9.4 A/D-D/A Static
Storage Cell
  9.5 Basic Principle of the Storage Cell
  9.6 Circuit Limitations
  9.7 Layout Considerations
  9.8 Simulation Results
  9.9 Discussion
10 Switched Capacitor Techniques
  10.1 A Charge Based Network
  10.2 Variable Gain, Linear, Switched Capacitor Neurons
11 NET32K High Speed Image Understanding System
  11.1 Introduction
  11.2 The NET32K Chip
  11.3 The NET32K Board System
  11.4 Applications
  11.5 Summary and Conclusions
12 Boltzmann Machine Learning System
  12.1 Introduction
  12.2 The Boltzmann Machine
  12.3 Deterministic Learning by Error Propagation
  12.4 Mean-field Version of Boltzmann Machine
  12.5 Electronic Implementation of a Boltzmann Machine
  12.6 Building a System Using the Learning Chips
  12.7 Other Applications
References
Index

From haussler at cse.ucsc.edu Tue Jan 9 20:49:22 1996 From: haussler at cse.ucsc.edu (David Haussler) Date: Tue, 9 Jan 1996 17:49:22 -0800 Subject: new paper available Message-ID: <199601100149.RAA08733@arapaho.cse.ucsc.edu>

A new paper by D. Haussler and M. Opper entitled "Mutual Information, Metric Entropy, and Risk in Estimation of Probability Distributions" is available on the web at http://www.cse.ucsc.edu/~sherrod/ml/research.html An abstract is given below (for those who read LaTeX). -David

___________________

Abstract: $\{P_{Y|\theta}: \theta \in \Theta\}$ is a set of probability distributions (with a common dominating measure) on a complete separable metric space $Y$. A state $\theta^* \in \Theta$ is chosen by Nature. A statistician gets $n$ independent observations $Y_1, \ldots, Y_n$ distributed according to $P_{Y|\theta^*}$ and produces an estimated distribution $\hat{P}$ for $P_{Y|\theta^*}$. The statistician suffers a loss based on a measure of the distance between the estimated distribution and the true distribution. We examine the Bayes and minimax risk of this game for various loss functions, including the relative entropy, the squared Hellinger distance, and the $L_1$ distance.
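[Editorial note: the three loss functions named above are standard; for reference, with densities $p$ and $\hat{p}$ taken with respect to the common dominating measure, they are usually written as follows. The notation here is ours, not quoted from the paper.]

```latex
D(P \,\|\, \hat{P}) = \int p \,\log\frac{p}{\hat{p}} ,
\qquad
H^2(P, \hat{P}) = \int \bigl( \sqrt{p} - \sqrt{\hat{p}} \,\bigr)^2 ,
\qquad
\| P - \hat{P} \|_1 = \int \bigl| \, p - \hat{p} \, \bigr|
```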
We also look at the cumulative relative entropy risk over the distributions estimated during the first $n$ observations. Here the Bayes risk is the mutual information between the random parameter $\Theta^*$ and the observations $Y_1, \ldots, Y_n$. New bounds on this mutual information are given in terms of the Laplace transform of the Hellinger distance between $P_{Y|\theta}$ and $P_{Y|\theta^*}$. From these, bounds on the minimax risk are given in terms of the metric entropy of $\Theta$ with respect to the Hellinger distance. The assumptions required for these bounds are very general and do not depend on the choice of the dominating measure. They apply to both finite and infinite dimensional $\Theta$. They apply in some cases where $Y$ is infinite dimensional, in some cases where $Y$ is not compact, in some cases where the distributions are not smooth, and in some parametric cases where asymptotic normality of the posterior distribution fails. From geoff at salk.edu Wed Jan 10 14:30:33 1996 From: geoff at salk.edu (Geoff Goodhill) Date: Wed, 10 Jan 96 11:30:33 PST Subject: NIPS preprint available Message-ID: <9601101930.AA27087@salk.edu> The following NIPS preprint is available via ftp://salk.edu/pub/geoff/goodhill_nips96.ps.Z or http://cnl.salk.edu/~geoff OPTIMIZING CORTICAL MAPPINGS Geoffrey J. Goodhill(1), Steven Finch(2) & Terrence J. Sejnowski(3) (1) The Salk Institute for Biological Studies 10010 North Torrey Pines Road, La Jolla, CA 92037, USA (2) Human Communication Research Centre University of Edinburgh, 2 Buccleuch Place Edinburgh EH8 9LW, GREAT BRITAIN (3) The Howard Hughes Medical Institute The Salk Institute for Biological Studies 10010 North Torrey Pines Road, La Jolla, CA 92037, USA & Department of Biology University of California San Diego, La Jolla, CA 92037, USA, ABSTRACT ``Topographic'' mappings occur frequently in the brain. 
A popular approach to understanding the structure of such mappings is to map points representing input features in a space of a few dimensions to points in a two-dimensional space using some self-organizing algorithm. We argue that a more general approach may be useful, where similarities between features are not constrained to be geometric distances, and the objective function for topographic matching is chosen explicitly rather than being specified implicitly by the self-organizing algorithm. We investigate analytically an example of this more general approach applied to the structure of interdigitated mappings, such as the pattern of ocular dominance columns in primary visual cortex. From hilario at cui.unige.ch Mon Jan 8 06:39:21 1996 From: hilario at cui.unige.ch (Melanie Hilario) Date: Mon, 8 Jan 1996 12:39:21 +0100 Subject: Please send via connectionists mailing list Message-ID: <1710*/S=hilario/OU=cui/O=unige/PRMD=switch/ADMD=400net/C=ch/@MHS> ------------------------------------------------------------------------------- Neural Networks and Structured Knowledge (NNSK) Call for Contributions ECAI '96 Workshop to be held on August 12/13, 1996 during the 12th European Conference on Artificial Intelligence from August 12-16, 1996 in Budapest, Hungary Contributions are invited for the workshop "Neural Networks and Structured Knowledge" to be held in conjunction with ECAI'96 in Budapest, Hungary. ------------------------------------------------------------------------------- Description of the Workshop Neural networks are mostly used for tasks dealing with information presented in vector or matrix form, without a rich internal structure reflecting relations between different entities. In some application areas, e.g. speech processing or forecasting, types of networks have been investigated for their ability to represent sequences of input data.
Whereas approaches to using neural networks for the representation and processing of structured knowledge have been around for quite some time, especially in the area of connectionism, they frequently suffer from problems with expressiveness, knowledge acquisition, adaptivity and learning, or human interpretation. In recent years much progress has been made in the theoretical understanding and the construction of neural systems capable of representing and processing structured knowledge in an adequate way, while maintaining essential capabilities of neural networks such as learning, tolerance of noise, treatment of inconsistencies, and parallel operation. The goal of this workshop is twofold: On the one hand, existing mechanisms are critically examined with respect to their suitability for the acquisition, representation, processing and interpretation of structured knowledge. On the other hand, new approaches, especially concerning the design of systems based on such mechanisms, are presented, with particular emphasis on their application to realistic problems.
The following topics lie within the intended scope of the workshop:

Concepts and Methods:
* extraction, injection and refinement of structured knowledge from, into and by neural networks
* inductive discovery/formation of structured knowledge
* combining symbolic machine learning techniques with neural learning paradigms to improve performance
* classification, recognition, prediction, matching and manipulation of structured information
* neural methods that use or discover structural similarities
* neural models to infer hierarchical categories
* structuring of network architectures: methods for introducing coarse-grained structure into networks, unsupervised learning of internal modularity

Application Areas:
* medical and technical diagnosis: discovery and manipulation of structured dependencies, constraints, explanations
* molecular biology and chemistry: prediction of molecular structure unfolding, classification of chemical structures, DNA analysis
* automated reasoning: robust matching, manipulation of logical terms, proof plans, search space reduction
* software engineering: quality testing, modularisation of software
* geometrical and spatial reasoning: robotics, structured representation of objects in space, figure animation, layout of objects
* other applications that use, generate or manipulate structures with neural methods: structures in music composition, legal reasoning, architectures, technical configuration, ...

The list of topics and potential application areas above indicates an important tendency towards neural networks which are capable of dealing with structured information. This can be done on an internal level, where one network is used to represent and process knowledge for a task, or on a higher level as in modular neural networks, where the structure may be represented by the relations between the modules.
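[Editorial illustration, not part of the call: one simple way a single network can process tree-structured input "on an internal level" is a recursive encoder that maps trees of any shape to fixed-size vectors by applying one shared weight matrix at every internal node. All names, dimensions, and the untrained random weights below are illustrative assumptions.]

```python
# Minimal sketch of a recursive (tree-structured) neural encoder.
# Every node of a binary tree is mapped to a DIM-sized vector; internal
# nodes combine their two child encodings with a single shared weight
# matrix, so trees of arbitrary shape yield vectors of the same size.
import numpy as np

rng = np.random.default_rng(0)
DIM = 8                                          # size of every node encoding
W = rng.standard_normal((DIM, 2 * DIM)) * 0.1    # shared composition weights
b = np.zeros(DIM)
LEAF = {"a": rng.standard_normal(DIM),           # illustrative leaf embeddings
        "b": rng.standard_normal(DIM)}

def encode(tree):
    """Encode a leaf label or a (left, right) pair as a fixed-size vector."""
    if isinstance(tree, str):                    # leaf: look up its embedding
        return LEAF[tree]
    left, right = tree
    children = np.concatenate([encode(left), encode(right)])
    return np.tanh(W @ children + b)             # same weights at every node

# Trees of different shapes map to vectors of one fixed size:
v1 = encode(("a", "b"))
v2 = encode((("a", "b"), "a"))
print(v1.shape, v2.shape)                        # (8,) (8,)
```

In a real system the composition weights would of course be trained (e.g. by backpropagation through the tree structure) rather than fixed at random; the sketch only shows how structure enters the network.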
The central theme of this workshop will be the treatment of structured information using neural networks, independent of the particular network type or processing paradigm. Thus the workshop theme is orthogonal to the issue of connectionist/symbolic integration, and is not intended as a continuation of the more philosophically oriented discussion of symbolic vs. subsymbolic representation and processing.

Workshop Format

Our hope is to attract 20-30 people for the workshop; the maximum will be 40. The setup of the workshop is specifically designed to encourage an informal and interactive atmosphere (not a mini-conference with a number of formal talks and 2 minutes of questions after a talk). The workshop will be based on the following points:
* Talks will have break-points where audience participation is requested
* For each talk, at least two organizers or participants will be acting as commentators
* There will be discussion sessions specifically devoted to a particular topic of interest, with mandatory contributions from the participants. If time permits, one session could be plenary, and another in small groups. The plenary session would discuss a broad topic of general interest, e.g. benefits and problems of different approaches to using neural networks for the representation and processing of structured knowledge. The group sessions would concentrate on specific application areas.
* Preprints of the contributions will be made available to the participants electronically in advance
* Statements of interest as well as the willingness to act as commentators for other participants' talks are requested from the participants
* Self-introduction of participants at the beginning of the workshop

Organizing Committee
Franz Kurfess (chair)   New Jersey Institute of Technology, Newark, USA
Daniel Memmi            LIFIA-IMAG Grenoble, France
Andreas Kuechler        Universitaet Ulm, Germany
Arnaud Giacometti       Universite de Tours, France

Contact
Prof.
Franz Kurfess
Computer and Information Sciences Dept.
New Jersey Institute of Technology
Newark, NJ 07102, U.S.A.
Voice : +1/201-596-5767
Fax : +1/201-596-5767
E-mail: franz at cis.njit.edu

Program Committee*
Venkat Ajjanagadde - University of Minnesota, Minneapolis
Ethem Alpaydin - Bogazici University
C. Lee Giles - NEC Research Institute
Melanie Hilario - University of Geneva (co-chair)
Steffen Hoelldobler - TU Dresden
Mirek Kubat - University of Ottawa
Guenther Palm - Universitaet Ulm
Hava Siegelman - Technion (Israel Institute of Technology)
Alessandro Sperduti - University of Pisa (co-chair)
* Tentative list--names of other PC members will be added as confirmations come in.

Submission of Papers

Contributions should be received no later than March 15. Papers should be no longer than 8 pages; the preferred format is one column, on A4 or US letter (8 1/2" x 11") paper, with 3/4" margins all round. The first page of a contribution must contain the following information: title, author(s) name and affiliation, mailing address, phone and fax number, e-mail address, an abstract of ca. 300 words, and three to five keywords. All submissions will be acknowledged by electronic mail; correspondence will be sent to the first author. All submitted papers will be reviewed by at least two members of the program committee. In addition to the technical quality of a submission, we will also take into consideration the potential for discussion in order to stimulate the interactive character of the workshop. If possible, accepted papers will be made electronically available to participants in advance. Workshop proceedings will be distributed to participants by ECAI organizers. We are also currently in negotiations with publishers about an edited volume of workshop contributions, or a special issue in a journal. If you intend to submit a paper please do not hesitate to contact the organizing committee as soon as possible so that the workshop can be formed and planned further.
Electronic submissions are strongly encouraged (see the procedure described below). If you cannot submit your paper electronically (due to technical problems or the lack of technical facilities), please send 3 hardcopies to:

Andreas Kuechler
Department of Neural Information Processing
University of Ulm
Oberer Eselsberg
89069 Ulm
Germany

Participation and Registration

Participation without a full contribution is possible. In this case we request a statement of interest (to be sent to the Workshop Chair, franz at cis.njit.edu) and the willingness to act as commentator for an accepted contribution, which will be made available in advance. Preference will be given to attendees with a paper. To cover costs, a fee of ECU 50 for each participant of each workshop, in addition to the normal ECAI-96 conference registration fee, will be charged by the main conference organizers. Please note that attendees of workshops MUST register for the main ECAI conference.

Schedule
Submission deadline                        March 15, 1996
Notification of acceptance/rejection       April 15, 1996
Final version of papers due                May 15, 1996
Deadline for participation without paper   June 15, 1996
Date of the workshop                       August 12/13, 1996

------------------------------------------------------------------------------- Electronic submission procedure This is a two-step procedure: 1. Please send an email with the subject 'nnsk-submission' to nnsk-submission at neuro.informatik.uni-ulm.de in ASCII format with the title, author name(s) and affiliation, mailing address, phone and fax number, e-mail address, an abstract of ca. 300 words, and three to five keywords. Correspondence (unless otherwise indicated) will be sent to the first author. Specify how you will send your paper (ftp or e-mail). Papers should be submitted in PostScript format (please avoid exotic or out-of-date systems for the generation of the PostScript file). UNIX file format is preferred; large files should be compressed (using 'compress' or 'gzip').
If you choose the ftp option, please use the file-name .ps and add this name to your email.

2. There will be two alternatives:

* ftp option: Connect via anonymous ftp to neuro.informatik.uni-ulm.de and 'put' your file in the incoming/nnsk-submission directory (please note that this directory is set to write-only). Here is an example of how to upload a file:

unix> gzip Andreas.Kuechler.ps
unix> ftp neuro.informatik.uni-ulm.de
Connected to neuro.informatik.uni-ulm.de.
220 neuro FTP server (SunOS 4.1) ready.
Name (neuro.informatik.uni-ulm.de:andi): ftp
331 Guest login ok, send ident as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd incoming/nnsk-submission
250 CWD command successful.
ftp> bin
200 Type set to I.
ftp> put Andreas.Kuechler.ps.gz
200 PORT command successful.
150 Binary data connection for Andreas.Kuechler.ps.gz (134.60.73.27,2493).
226 Binary Transfer complete.
local: Andreas.Kuechler.ps.gz remote: Andreas.Kuechler.ps.gz
54800 bytes sent in 0.12 seconds (4.5e+02 Kbytes/s)
ftp> bye
221 Goodbye.
unix>

* e-mail option: Send an e-mail with the subject 'paper: ' to nnsk-submission at neuro.informatik.uni-ulm.de and include your PostScript file .ps (please avoid exotic email formats and mailers). Be sure to 'uuencode' compressed files before sending them. Here is an example of how to send a (compressed and uuencoded) file (via UNIX):

unix> gzip Andreas.Kuechler.ps
unix> uuencode Andreas.Kuechler.ps.gz Andreas.Kuechler.ps.gz | mail -s 'paper: mytitle/Andreas.Kuechler' nnsk-submission at neuro.informatik.uni-ulm.de
unix>

-------------------------------------------------------------------------------

The latest information can be retrieved from the NNSK WWW page http://www.informatik.uni-ulm.de/fakultaet/abteilungen/ni/ECAI-96/NNSK.html.
From ma_s435 at crystal.king.ac.uk Thu Jan 11 15:39:14 1996 From: ma_s435 at crystal.king.ac.uk (Dimitris Tsaptsinos) Date: Thu, 11 Jan 1996 15:39:14 GMT0BST Subject: Final Call for Papers (EANN96) Message-ID: <701962929@crystal.kingston.ac.uk>

Dear colleague, sorry for the unsolicited mail, but we thought this conference might be relevant to you; if so, please look into it, or ask us for more information.

Regards,
Dr Dimitris Tsaptsinos

+------------------------------+
Dimitris Tsaptsinos
Kingston University
Maths Dept., Faculty of Science
Penrhyn Road
Kingston upon Thames
Surrey KT1 2EE
Tel: 0181-5472000 x.2516
Email: ma_s435 at kingston.ac.uk
+----------always AEK----------+

-------------- Enclosure number 1 ----------------

International Conference on Engineering Applications of Neural Networks (EANN '96)
London, UK
17--19 June 1996

Final Call for Papers

The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering.

Abstracts of one page (200 to 400 words) should be sent to eann96 at lpac.ac.uk by 21 January 1996 by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. Short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 21 January 1996. Notification of acceptance will be sent around 15 February. Submissions will be reviewed, and the number of full papers will be very limited.
For more information on EANN '96, please see http://www.lpac.ac.uk/EANN96 and for reports on EANN '95, contents of the proceedings, etc. please see http://www.abo.fi/~abulsari/EANN95.html

Five special tracks are being organised in EANN '96:
Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it),
Control Systems (E. Tulunay, ersin-tulunay at metu.edu.tr),
Mechanical Engineering (A. Scherer, andreas.scherer at fernuni-hagen.de),
Robotics (N. Sharkey, N.Sharkey at dcs.shef.ac.uk), and
Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at)

Organising committee: A. Bulsari (Finland), D. Tsaptsinos (UK), T. Clarkson (UK)

International program committee: G. Dorffner (Austria), S. Gong (UK), J. Heikkonen (Italy), B. Jervis (UK), E. Oja (Finland), H. Liljenstr\"om (Sweden), G. Papadourakis (Greece), D. T. Pham (UK), P. Refenes (UK), N. Sharkey (UK), N. Steele (UK), D. Williams (UK), W. Duch (Poland), R. Baratti (Italy), G. Baier (Germany), E. Tulunay (Turkey), S. Kartalopoulos (USA), C. Schizas (Cyprus), J. Galvan (Spain), M. Ishikawa (Japan), D. Pearson (France)

Registration information for the International Conference on Engineering Applications of Neural Networks (EANN '96)

The conference fee will be sterling pounds (GBP) 300 until 28 February, and sterling pounds (GBP) 360 after that. At least one author of each accepted paper should register by 21 March to ensure that the paper will be included in the proceedings. The conference fee can be paid by a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance at the conference and the proceedings. The registration form can be picked up from the WWW (or sent to you by e-mail) and can be returned by e-mail, post, or fax once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid.
For more information, please ask eann96 at lpac.ac.uk

From rsun at cs.ua.edu Thu Jan 11 18:21:41 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Thu, 11 Jan 1996 17:21:41 -0600 Subject: AAAI-96 workshop on cognitive modeling Message-ID: <9601112321.AA15316@athos.cs.ua.edu>

AAAI-96 Workshop
Computational Cognitive Modeling: Source of the Power
to be held during AAAI-96, Portland, Oregon, August 4-5, 1996.

CALL FOR PAPERS and PARTICIPATION

Computational models for various cognitive tasks, such as language acquisition, skill acquisition, and conceptual development, have been extensively studied by cognitive scientists, AI researchers, and psychologists. We aim to bring researchers from different backgrounds together, and to examine how and why computational models (connectionist, symbolic, or others) are successful, in terms of the source of their power. The possible sources of power include:

-- Representation of the task;
-- General properties of the learning algorithm;
-- Data sampling/selection;
-- Parameters of the learning algorithms.

The workshop will focus on, but not be limited to, the following topics, all of which should be discussed in relation to the source of power:

-- Proper criteria for judging the success or failure of a model.
-- Methods for recognizing the source of power.
-- Analyses of the success or failure of existing models.
-- Presentation of new cognitive models.

Potential presenters should submit a paper (maximum 12 pages, 12-point font). We strongly encourage email submissions of text/PostScript files; you may also send 4 paper copies to one workshop co-chair:

Charles Ling (co-chair)
Department of Computer Science
University of Hong Kong
Hong Kong
ling at csd.uwo.ca

Ron Sun (co-chair)
Department of Computer Science
University of Alabama
Tuscaloosa, AL 35487
rsun at cs.ua.edu

Researchers interested in attending the Workshop only should send a short description of their interests to one co-chair by the deadline.
The Workshop will consist of invited talks, presentations, and a poster session. All accepted papers will be included in the Workshop Working Notes.

Deadline for submission: March 18, 1996
Notification of acceptance: April 15, 1996
Submission of final versions: May 13, 1996

Program Committee:
Charles Ling
Ron Sun
Pat Langley, Stanford University, langley at flamingo.Stanford.EDU
Mike Pazzani, UC Irvine, pazzani at super-pan.ICS.UCI.EDU
Tom Shultz, McGill University, shultz at psych.mcgill.ca
Paul Thagard, Univ. of Waterloo, pthagard at watarts.uwaterloo.ca
Kurt VanLehn, Univ. of Pittsburgh, vanlehn+ at pitt.edu

Confirmed invited speakers: Jeff Elman, Mike Pazzani, Aaron Sloman

(For AAAI-96 registration, contact AAAI, 445 Burgess Drive, Menlo Park, CA 94025, or info at aaai.org)

From cnna96 at cnm.us.es Thu Jan 11 15:09:13 1996 From: cnna96 at cnm.us.es (4th Workshop on CNN's and Applications) Date: Thu, 11 Jan 96 21:09:13 +0100 Subject: CNNA96 Final Call for papers Message-ID: <9601112009.AA11337@cnm1.cnm.us.es>

********************************************************************************
This information is available on the web at http://www.cica.es/~cnm/cnna96
********************************************************************************

CNNA-96 FINAL CALL FOR PAPERS
FOURTH IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND APPLICATIONS
June 24-26, 1996, Seville, SPAIN
Escuela Superior de Ingenieros de Sevilla
Centro Nacional de Microelectronica

********************************************************************************

Organizing Committee:
Prof. J.L. Huertas (Chair)
Prof. A. Rodriguez-Vazquez
Prof. R. Dominguez-Castro
Secretary: Dr. S. Espejo
Tec. Program: Prof. A. Rodriguez-Vazquez
Proceedings: Prof. R. Dominguez-Castro

Scientific Committee:
Prof. N.N. Aizenberg (Ukraine)
Prof. L.O. Chua (U.S.A.)
Prof. V. Cimagalli (Italy)
Prof. T.G. Clarkson (U.K.)
Prof. A.S. Dimitriev (Russia)
Prof. M. Hasler (Switzerland)
Prof. J. Herault (France)
Prof. J.L. Huertas (Spain)
Prof. S. Jankowski (Poland)
Prof. J. Nossek (Germany)
Prof. J. Pineda de Gyvez (U.S.A.)
Prof. V. Porra (Finland)
Prof. A. Rodriguez-Vazquez (Spain)
Prof. T. Roska (Hungary)
Prof. B. Sheu (U.S.A.)
Prof. M. Tanaka (Japan)
Prof. V. Tavsanoglu (U.K.)
Prof. J. Vandewalle (Belgium)

Sponsors: IEEE Circuits and Systems Society, IEEE Spanish Section, ECS (European Circuits Society)

********************************************************************************

GENERAL SCOPE & VENUE

The CNNA series of workshops aims to provide a biennial international forum to present and discuss recent advances in the theory, application, and implementation of Cellular Neural Networks. Following the successful conferences in Budapest (1990), Munich (1992), and Rome (1994), the fourth workshop will be hosted by the National Microelectronics Center and the School of Engineering of Seville, in Seville, Spain, on June 24-26, 1996.

Seville, the capital of Andalusia and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated over its more than 2500 years of history with modern infrastructure in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport with several international flights, as well as many connections via Madrid and Barcelona.

The workshop will address theoretical and practical issues in Cellular Neural Network theory, applications, and implementations. The technical program will consist of plenary lectures by experts in selected areas, along with papers and posters submitted by the participants. The official language will be English.

********************************************************************************

PAPER SUBMISSION

Papers on all aspects of Cellular Neural Networks are welcome.
Topics of interest include, but are not limited to:

Basic Theory
Applications
Learning
Software Implementations and Simulators
CNN Computers
CNN Chips
CNN System Development and Testing

Prospective authors are invited to submit summaries of their papers to the Conference Secretariat. Submissions should include a cover page with the contact author's name, affiliation, postal address, phone number, fax number, and e-mail address. PostScript electronic submission may be accepted upon request. The deadline for submission of summaries is February 28, 1996. Acceptance will be notified by mid-April 1996. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in IEEE-sponsored proceedings. Final papers will be limited to a maximum of 6 pages. Format details will be provided in the authors' kit to be sent with the notification of acceptance.

********************************************************************************

CORRESPONDENCE

Correspondence should be addressed to:

CNNA-96 Secretariat
Centro Nacional de Microelectronica
Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN
Phone +34-5-4239923. FAX +34-5-4231832
e-mail: cnna96 at cnm.us.es

********************************************************************************

CONFERENCE SITE

Hotel Al-Andalus. Four stars; 3 years old; luxury; 300 rooms; huge halls and many conference rooms. Located in the metropolitan area of Seville, by the technical campus of the University of Seville, and linked to the historical center by city buses (10' trip). The agreed conference price of a double room is 10.750 pta. per room and night. This price includes full buffet breakfast for the two guests in the room. Other hotels and rates are also available. For further details see the enclosed Hotel Reservation Form. The official travel agency of the conference is Viajes Universal S.A.
********************************************************************************

REGISTRATION

Please see the attached Registration Form. Registration includes participation in all sessions, proceedings, coffee breaks and lunches. Full registration includes the welcome cocktail and conference banquet.

********************************************************************************

CNNA-96 WWW PAGE

http://www.cica.es/~cnm/cnna96

********************************************************************************

AUTHOR'S SCHEDULE

Submission of summaries: February 28, 1996
Notification of acceptance: April 15, 1996
Reception of camera-ready papers: May 15, 1996

********************************************************************************

REGISTRATION FORM

Last Name:________________________________________________________
First Name:_______________________________________________________
Institution:______________________________________________________
Mailing address:__________________________________________________
  Street:_________________________________________________________
  City:___________________________________________________________
  State/Country:__________________________________________________
  Zip code:_______________________________________________________
Phone:____________________________________________________________
Fax: _____________________________________________________________
e-mail:___________________________________________________________

__ I intend to submit (have submitted) a paper entitled: _____________________
_______________________________________________________________________

Please check where applicable:

-------------------------------------------------------------------------------
| REGISTRATION FEES                       | BEFORE MAY 15   | AFTER MAY 15    |
|-----------------------------------------|-----------------|-----------------|
| Full Registration                       | 46.000 pta. __  | 51.000 pta. __  |
|-----------------------------------------|-----------------|-----------------|
| Full Registration (IEEE/ECS Members) (*)| 39.000 pta. __  | 44.000 pta. __  |
|-----------------------------------------|-----------------|-----------------|
| Full-time Students (**)                 | 17.000 pta. __  | 23.000 pta. __  |
-------------------------------------------------------------------------------

(*) IEEE __ / ECS __ member number _____________________________
(**) Please enclose a letter of certification from the Department chairperson.

Spouse/Guest:
  Welcome Cocktail 2,750 pta. __
  Conference Banquet 6,000 pta. __
  Last Name:_______________________________________________________
  First Name:______________________________________________________

Registration fees may be paid (please check one):

By check __ or bank transfer __ to:
  BANCO ESPANOL DE CREDITO
  Avda. Reina Mercedes, 27. E-41012 Sevilla
  Acct. #: 0030-8443-90-0865291273

By credit card: VISA __ or Master-Card __
  Card-holder's name: ________________________________
  Card number: ________________________________
  Expiration date: ________________________________
  Signature: ________________________________
  Date (d/m/y): ________________________________

Total amount due: __________
Please check if you need a receipt of payment: __

********************************************************************************

CNNA-96 HOTEL RESERVATION FORM

--------------------------------------------------------------------------------
HOTEL NAME    | ADDRESS & PHONE      | PRICE (*)     | COMMENTS
              |                      | Double/Single |
--------------|----------------------|---------------|---------------------------
AL-ANDALUS    | Av. La Palmera s/n   |               |
(****)        | 41012 Sevilla        | 10.750/ 8.400 | Conference site
              | Ph. +34-5-4230600    |               |
--------------|----------------------|---------------|---------------------------
NH CIUDAD DE  | Av. Manuel Siurot 25 |               | Within walking
SEVILLA       | 41013 Sevilla        | 12.500/11.300 | distance of conference
(****)        | Ph. +34-5-4230505    |               | site (10' walk)
--------------|----------------------|---------------|---------------------------
FERNANDO III  | C/ San Jose 21       |               | Located in the city center
(***)         | 41004 Sevilla        |  8.100/ 6.900 | Connected by metropolitan
              | Ph. +34-5-4217307    |               | buses to conf. site (30')
--------------|----------------------|---------------|---------------------------
DUCAL         | Pza. Encarnacion 19  |               | Located in the city center
(**)          | 41003 Sevilla        |  6.900/ 4.800 | Connected by metropolitan
              | Ph. +34-5-4215107    |               | buses to conf. site (30')
--------------------------------------------------------------------------------

(*) Prices include full buffet breakfast and local taxes. They are in Spanish
pesetas, per room and night.

Please mail, fax or phone to:

VIAJES UNIVERSAL S.A.
Luis de Morales, 1 - 41005 Sevilla, SPAIN
Phone #: +34-5-4581653
Fax #: +34-5-4575689

Last Name:__________________________________________________________________
First Name:_________________________________________________________________
Address:____________________________City:___________________________________
Postal Code:________________________Phone #:____________Fax #:______________
Hotel Name:_____________________________________________________
Total Number of Rooms:___________Doubles_________Singles________
Arrival date________________Departure date______________________
Total Number of nights:_______Total amount due:_________________

Please check here if you wish the travel agency to arrange room-sharing with
another participant _____

Payment: at least seven days before arrival, by bank transfer to:
VIAJES UNIVERSAL s.a., Account # 0030-4223-10-0011107 271,
Banco Espanol de Credito, c/ Luis Montoto, 85 - 41005 Sevilla.
********************************************************************************

From perso at DI.Unipi.IT Thu Jan 11 12:53:15 1996 From: perso at DI.Unipi.IT (Alessandro Sperduti) Date: Thu, 11 Jan 1996 18:53:15 +0100 (MET) Subject: new TR available Message-ID: <199601111753.SAA01494@neuron.di.unipi.it>

Technical report available. Comments are welcome!

******************************************************
FTP-host: ftp.di.unipi.it
FTP-filename: pub/Papers/perso/SPERDUTI/tr-16-95.ps.gz
******************************************************

@TECHREPORT{tr-16/95,
  AUTHOR = {A. Sperduti and A. Starita},
  TITLE = {Supervised Neural Networks for the Classification of Structures},
  INSTITUTION = {Dipartimento di Informatica, Universit\`{a} di Pisa},
  YEAR = {1995},
  NUMBER = {TR-16/95}
}

Abstract: Up to now, neural networks have been used for the classification of unstructured patterns and sequences. When dealing with complex structures, however, standard neural networks, as well as statistical methods, are usually believed to be inadequate because of their feature-based approach. In fact, feature-based approaches usually fail to give satisfactory solutions because of the sensitivity of the approach to the a priori selected features and its inability to represent any specific information on the relationships among the components of the structures. In contrast, we show that neural networks can represent and classify structured patterns. The key idea underpinning our approach is the use of the so-called "complex recursive neuron". A complex recursive neuron can be understood as a generalization to structures of a recurrent neuron. By using complex recursive neurons, essentially all the supervised networks developed for the classification of sequences, such as Back-Propagation Through Time networks, Real-Time Recurrent networks, Simple Recurrent Networks, Recurrent Cascade Correlation networks, and Neural Trees, can be generalized to structures.
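The idea of a recursive neuron can be illustrated with a small sketch: the state at a tree node is computed from the states of its children, just as a recurrent neuron's state is computed from the previous time step. This is an illustrative reconstruction only (the function name, dimensions, and tanh squashing are assumptions), not code from the technical report:

```python
import numpy as np

def recursive_neuron(tree, W, U, b):
    """Compute a vector representation of a binary tree bottom-up.

    A sketch of a recursive neuron: a leaf carries an input vector;
    an internal node's representation is a squashed affine map of its
    children's representations -- a tree-structured analogue of a
    recurrent neuron unrolled over a sequence.
    tree: either a leaf vector (np.ndarray) or a (left, right) tuple.
    """
    if isinstance(tree, np.ndarray):            # leaf: encode the label
        return np.tanh(W @ tree + b)
    left, right = tree                          # internal node: combine children
    h_l = recursive_neuron(left, W, U, b)
    h_r = recursive_neuron(right, W, U, b)
    return np.tanh(U @ np.concatenate([h_l, h_r]) + b)

# toy usage: 3-dim leaf labels, 4-dim hidden state (shapes are arbitrary)
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 3))                     # leaf encoder
U = rng.normal(size=(4, 8))                     # combines two child states
b = np.zeros(4)
t = (rng.normal(size=3), (rng.normal(size=3), rng.normal(size=3)))
h = recursive_neuron(t, W, U, b)                # fixed-size code for the tree
```

The resulting fixed-size vector h can then be fed to any standard classifier, which is what makes the generalization from sequences to structures possible.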
The results obtained by some of the above networks (with complex recursive neurons) on the classification of logic terms are presented.

* No hardcopy available.
* FTP procedure:

unix> ftp ftp.di.unipi.it
Name: anonymous
Password:
ftp> cd pub/Papers/perso/SPERDUTI
ftp> binary
ftp> get tr-16-95.ps.gz
ftp> bye
unix> gunzip tr-16-95.ps.gz
unix> lpr tr-16-95.ps        (or however you print postscript)

_________________________________________________________________
Alessandro Sperduti
Dipartimento di Informatica, Corso Italia 40, 56125 Pisa, ITALY
Phone: +39-50-887264   Fax: +39-50-887226
E-mail: perso at di.unipi.it
_________________________________________________________________

From lawrence at s4.elec.uq.edu.au Fri Jan 12 00:30:03 1996 From: lawrence at s4.elec.uq.edu.au (Steve Lawrence) Date: Fri, 12 Jan 1996 15:30:03 +1000 (EST) Subject: The Gamma MLP for Speech Phoneme Recognition Message-ID: <199601120530.PAA29266@s4.elec.uq.edu.au>

The following NIPS 95 paper presents a network with multiple independent Gamma filters, which is able to find multiple time resolutions optimized for prediction or classification of a given signal. We show large improvements over traditional FIR or TDNN(*) networks. The paper is available from

http://www.elec.uq.edu.au/~lawrence - Australia
http://www.neci.nj.nec.com/homepages/lawrence - USA

We welcome your comments.

The Gamma MLP for Speech Phoneme Recognition
Steve Lawrence, Ah Chung Tsoi, Andrew Back
Electrical and Computer Engineering
University of Queensland, St. Lucia 4072, Australia

ABSTRACT
We define a Gamma multi-layer perceptron (MLP) as an MLP with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe) and associated gain terms throughout all layers. We derive gradient descent update equations and apply the model to the recognition of speech phonemes.
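For readers unfamiliar with gamma filters: de Vries and Principe's gamma memory is a cascade of leaky integrators whose single parameter mu trades memory depth against resolution. The sketch below implements only that recursion (the function name and interface are invented); it is not the authors' Gamma MLP code:

```python
import numpy as np

def gamma_filter(x, w, mu):
    """Run a gamma memory over a 1-D signal.

    A cascade of K leaky integrators replaces the tapped delay line
    of an FIR filter:
        s_k(t) = (1 - mu) * s_k(t-1) + mu * s_{k-1}(t-1),   s_0(t) = x(t)
    The output is a weighted sum of the taps; mu trades depth of
    memory against temporal resolution.
    """
    K = len(w) - 1                       # w weights taps s_0 .. s_K
    s = np.zeros(K + 1)
    y = np.empty_like(x, dtype=float)
    for t, x_t in enumerate(x):
        s_prev = s.copy()                # taps from the previous time step
        s[0] = x_t
        for k in range(1, K + 1):
            s[k] = (1 - mu) * s_prev[k] + mu * s_prev[k - 1]
        y[t] = w @ s
    return y

# with mu = 1 the gamma memory degenerates to a plain tapped delay line,
# so an impulse weighted on tap 2 comes out delayed by two steps
y = gamma_filter(np.array([1.0, 0.0, 0.0, 0.0]),
                 np.array([0.0, 0.0, 1.0]), mu=1.0)
# y = [0, 0, 1, 0]
```

For mu < 1 each tap integrates over a longer window, which is how the network can learn multiple time resolutions by adapting mu per filter.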
We find that both the inclusion of gamma filters in all layers and the inclusion of synaptic gains improve the performance of the Gamma MLP. We compare the Gamma MLP with TDNN, Back-Tsoi FIR MLP, and Back-Tsoi IIR MLP architectures, and a local approximation scheme. We find that the Gamma MLP results in a substantial reduction in error rates.

(*) We use the term TDNN to describe an MLP with a window of delayed inputs, not the shared-weight architecture of Lang et al.

--- Steve Lawrence +61 41 113 6686 http://www.neci.nj.nec.com/homepages/lawrence

From georgju at Physik.Uni-Wuerzburg.DE Fri Jan 12 03:21:50 1996 From: georgju at Physik.Uni-Wuerzburg.DE (Georg Jung) Date: Fri, 12 Jan 1996 09:21:50 +0100 (MEZ) Subject: Paper available "Selection of examples for a linear classifier" Message-ID: <199601120821.JAA12596@wptx10.physik.uni-wuerzburg.de>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/jung.selection_examples.ps.Z

The file jung.selection_examples.ps.Z is now available for ftp from the Neuroprose repository. The same file (name: WUE-ITP-95-022.ps.gz) is available for ftp from the preprint server of the University of Wurzburg (ftp.physik.uni-wuerzburg.de), filename: /pub/preprint/WUE-ITP-95-022.ps.gz

Selection of Examples for a Linear Classifier (20 pages)

Georg Jung and Manfred Opper
Physikalisches Institut, Julius-Maximilians-Universit\"at,
Am Hubland, D-97074 W\"urzburg, Federal Republic of Germany,
The Baskin Center for Computer Engineering \& Information Sciences,
University of California, Santa Cruz CA 95064, USA

ABSTRACT: We investigate the problem of selecting an informative subsample out of a neural network's training data. Using the replica method of statistical mechanics, we calculate the performance of a heuristic selection algorithm for a linear neural network which avoids overfitting.

Sorry, no hardcopies available. Comments are greatly appreciated.
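One common heuristic for selecting an informative subsample for a linear classifier is to keep the examples the current weight vector is least certain about, i.e. those with the smallest absolute margin. The sketch below illustrates that generic idea (all names are invented); it is not necessarily the specific algorithm analyzed in the preprint:

```python
import numpy as np

def select_informative(X, y, w, m):
    """Pick the m training examples a linear classifier w is least
    certain about (smallest |margin|).  A simple illustrative
    selection heuristic, not the paper's analyzed algorithm."""
    margins = np.abs(X @ w)           # proxy for distance to the hyperplane
    idx = np.argsort(margins)[:m]     # m examples closest to the boundary
    return X[idx], y[idx]

# toy usage: with w along the first axis, the two rows whose first
# coordinate is smallest in magnitude are the "informative" ones
X = np.array([[3.0, 0.0], [0.1, 1.0], [-0.5, 2.0], [2.0, 1.0]])
y = np.array([1, 1, -1, 1])
w = np.array([1.0, 0.0])
X_sel, y_sel = select_informative(X, y, w, 2)   # rows 1 and 2 of X
```

Retraining on such a subsample concentrates capacity on the hard-to-classify region near the decision boundary, which is the intuition behind analyzing when this kind of heuristic avoids overfitting.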
--
____________________________________________
Georg Jung
Wissenschaftlicher Mitarbeiter am Lehrstuhl "Computational Physics"
Julius-Maximilians-Universitaet
Fakultaet fuer Physik und Astronomie
Am Hubland, D-97074 Wuerzburg

Arbeitsplatz: Raum E-222
Telefon: 0931-888-4908
E-Mail: georgju at physik.uni-wuerzburg.de
____________________________________________

From perso at DI.Unipi.IT Fri Jan 12 05:35:57 1996 From: perso at DI.Unipi.IT (Alessandro Sperduti) Date: Fri, 12 Jan 1996 11:35:57 +0100 (MET) Subject: new TR available (revised version) Message-ID: <199601121035.LAA02387@neuron.di.unipi.it>

In a previous e-mail I announced the following TR:

******************************************************
FTP-host: ftp.di.unipi.it
FTP-filename: pub/Papers/perso/SPERDUTI/tr-16-95.ps.gz
******************************************************

@TECHREPORT{tr-16/95,
  AUTHOR = {A. Sperduti and A. Starita},
  TITLE = {Supervised Neural Networks for the Classification of Structures},
  INSTITUTION = {Dipartimento di Informatica, Universit\`{a} di Pisa},
  YEAR = {1995},
  NUMBER = {TR-16/95}
}

Unfortunately, due to a copy error, the postscript file contained a draft version of the TR. I have fixed the problem, so the postscript file now contains the correct version of the TR. Sorry about that!

Regards,
_________________________________________________________________
Alessandro Sperduti
Dipartimento di Informatica, Corso Italia 40, 56125 Pisa, ITALY
Phone: +39-50-887264   Fax: +39-50-887226
E-mail: perso at di.unipi.it
_________________________________________________________________

From bengioy at IRO.UMontreal.CA Fri Jan 12 13:15:16 1996 From: bengioy at IRO.UMontreal.CA (Yoshua Bengio) Date: Fri, 12 Jan 1996 13:15:16 -0500 Subject: New book: NEURAL NETWORKS FOR SPEECH AND SEQUENCE RECOGNITION Message-ID: <199601121815.NAA13072@rouge.IRO.UMontreal.CA>

NEW BOOK!
NEURAL NETWORKS FOR SPEECH AND SEQUENCE RECOGNITION
Yoshua BENGIO

Learning algorithms for sequential data are crucial in many applications, in fields such as speech recognition, time-series prediction, control, and signal monitoring. This book applies the techniques of artificial neural networks, in particular recurrent networks, time-delay networks, convolutional networks, and hidden Markov models, using real-world examples. Highlights include basic elements for the practical application of back-propagation and back-propagation through time, integrating domain knowledge and learning from examples, and hybrids of neural networks with hidden Markov models.

International Thomson Computer Press, ISBN 1-85032-170-1

This book is available at bookstores near you, or from the publisher:
In the US: US$52.95, 800-842-3636, fax 606-525-7778, or 800-865-5840, fax 606-647-5013
In Canada: CA$73.95, 416-752-9100 ext 444, fax 416-752-9646
On the Internet:
http://www.thomson.com/itcp.html
http://www.thomson.com/orderinfo.html
americas-info at list.thomson.com (in the Americas)
row-info at list.thomson.com (rest of the World)

Contents

1 Introduction
  1.1 Connectionist Models
  1.2 Learning Theory
2 The Back-Propagation Algorithm
  2.1 Introduction to Back-Propagation
  2.2 Formal Description
  2.3 Heuristics to Improve Convergence and Generalization
  2.4 Extensions
3 Integrating Domain Knowledge and Learning from Examples
  3.1 Automatic Speech Recognition
  3.2 Importance of Pre-processing Input Data
  3.3 Input Coding
  3.4 Input Invariances
  3.5 Importance of Architecture Constraints on the Network
  3.6 Modularization
  3.7 Output Coding
4 Sequence Analysis
  4.1 Introduction
  4.2 Time Delay Neural Networks
  4.3 Recurrent Networks
  4.4 BPS
  4.5 Supervision of a Recurrent Network Does Not Need to Be Everywhere
  4.6 Problems with Training of Recurrent Networks
  4.7 Dynamic Programming Post-Processors
  4.8 Hidden Markov Models
5 Integrating ANNs with Other Systems
  5.1 Advantages and Disadvantages of Current Algorithms for ANNs
  5.2 Modularization and Joint Optimization
6 Radial Basis Functions and Local Representation
  6.1 Radial Basis Functions Networks
  6.2 Neurobiological Plausibility
  6.3 Relation to Vector Quantization, Clustering, and Semi-Continuous HMMs
  6.4 Methodology
  6.5 Experiments on Phoneme Recognition with RBFs
7 Density Estimation with a Neural Network
  7.1 Relation Between Input PDF and Output PDF
  7.2 Density Estimation
  7.3 Conclusion
8 Post-Processors Based on Dynamic Programming
  8.1 ANN/DP Hybrids
  8.2 ANN/HMM Hybrids
  8.3 ANN/HMM Hybrid: Phoneme Recognition Experiments
  8.4 ANN/HMM Hybrid: Online Handwriting Recognition Experiments

References
Index

--
Yoshua Bengio
Professeur Adjoint, Dept. Informatique et Recherche Operationnelle
Pavillon Andre-Aisenstadt #3339, Universite de Montreal, Dept. IRO,
CP 6128, Succ. Centre-Ville, 2920 Chemin de la tour,
Montreal, Quebec, Canada, H3C 3J7
E-mail: bengioy at iro.umontreal.ca   Fax: (514) 343-5834
web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy or http://www.iro.umontreal.ca/labs/neuro/
Tel: (514) 343-6804. Residence: (514) 738-6206

From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Sun Jan 14 03:14:52 1996 From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU) Date: Sun, 14 Jan 96 03:14:52 EST Subject: postdoc position: computational neuroscience and rodent navigation Message-ID: <16098.821607292@DST.BOLTZ.CS.CMU.EDU>

The Center for the Neural Basis of Cognition, a joint center of Carnegie Mellon University and the University of Pittsburgh, is accepting applications for postdoctoral positions in computational and cognitive neuroscience. One position for which candidates are actively being sought involves computational modeling and neurophysiological investigation of the rodent navigation system.
Applicants should be either:

* A neuroscientist with experience in single-unit recording from behaving animals, and some computer experience, who would like to do postdoctoral work involving computer modeling of the rodent hippocampal and head direction systems; or

* A computational neuroscientist already proficient in modeling biological neural networks, with a strong interest in helping to set up a neurophysiological recording facility as part of their postdoctoral training.

Full details on the CNBC postdoctoral program are available on our web site at http://www.cs.cmu.edu/Web/Groups/CNBC -- follow the link to the NPC (Neural Processes in Cognition) program. Applications are due by February 1, but late applications may still be considered if the position is not filled. Persons interested in this position should contact me directly at dst at cs.cmu.edu.

-- Dave Touretzky
http://www.cs.cmu.edu/~dst   dst at cs.cmu.edu
Computer Science Department & Center for the Neural Basis of Cognition
Carnegie Mellon University, Pittsburgh, PA 15213-3891

From ptodd at mpipf-muenchen.mpg.de Mon Jan 15 07:48:24 1996 From: ptodd at mpipf-muenchen.mpg.de (ptodd@mpipf-muenchen.mpg.de) Date: Mon, 15 Jan 96 13:48:24 +0100 Subject: predoc/postdoc positions in Munich: modeling cognitive algorithms Message-ID: <9601151248.AA00777@hellbender.mpipf-muenchen.mpg.de>

(The following ad will appear in the APS Observer, and connectionists with interests in domain-specific forms of cognition are encouraged to apply. Feel free to write to me with questions about the group or how you or a continuing/graduating student might fit in. --Peter Todd)

The Center for Adaptive Behavior and Cognition at the Max Planck Institute for Psychological Research in Munich, Germany is seeking applicants for 1 Predoctoral Fellowship (tax-free stipend DM 21,600) and 1 Postdoctoral Fellowship (tax-free stipend range DM 36,000-40,000) for one-year positions beginning in September 1996.
Candidates should be interested in modeling satisficing decision-making algorithms in real-world environmental domains, and should have expertise in one of the following areas: computer simulation, biological categorization, evolutionary biology or psychology, experimental economics, judgment and decision making, risk perception. For a list of current researchers and interests, please send email to Dr. Peter Todd at ptodd at mpipf-muenchen.mpg.de . The working language of the center is English. Send applications (curriculum vitae, letters of recommendation, and reprints) by March 15, 1996 to Professor Gerd Gigerenzer, Center for Adaptive Behavior and Cognition, Max Planck Institute for Psychological Research, Leopoldstrasse 24, 80802 Munich, Germany.

From datamine at aig.jpl.nasa.gov Mon Jan 15 16:42:27 1996
From: datamine at aig.jpl.nasa.gov (Data Mining Journal)
Date: Mon, 15 Jan 96 13:42:27 PST
Subject: New Journal -- Data Mining and Knowledge Discovery
Message-ID: <9601152142.AA09697@mathman.jpl.nasa.gov>

Please post the following announcement to your group. Thanks,

Usama
________________________________________________________________
Usama Fayyad                         | Fayyad at aig.jpl.nasa.gov
Machine Learning Systems Group       | Jet Propulsion Lab M/S 525-3660
                                     | (818) 306-6197 office
California Institute of Technology   | (818) 306-6912 FAX
4800 Oak Grove Drive                 | Pasadena, CA 91109
                                     | http://www-aig.jpl.nasa.gov/
_____________________________________|__________________________

****************************************************************
New Journal Announcement:

        Data Mining and Knowledge Discovery
        an international journal
        http://www.research.microsoft.com/research/datamine/

        Published by Kluwer Academic Publishers

        C a l l   f o r   P a p e r s
****************************************************************

Advances in data gathering, storage, and distribution technologies have far outpaced computational advances in techniques for analyzing and understanding data.
This created an urgent need for a new generation of tools and techniques for automated Data Mining and Knowledge Discovery in Databases (KDD). KDD is a broad area that integrates methods from several fields including statistics, databases, AI, machine learning, pattern recognition, machine discovery, uncertainty modeling, data visualization, high performance computing, management information systems (MIS), and knowledge-based systems. KDD refers to a multi-step process that can be highly interactive and iterative. It includes data selection/sampling, preprocessing and transformation for subsequent steps. Data mining algorithms are then used to discover patterns, clusters and models from data. These patterns and hypotheses are then rendered in operational forms that are easy for people to visualize and understand. Data mining is a step in the overall KDD process. However, most published work has focused solely on (semi-)automated data mining methods. By including data mining explicitly in the name of the journal, we hope to emphasize its role, and build bridges to communities working solely on data mining. Our goal is to make Data Mining and Knowledge Discovery a flagship journal publication in the KDD area, providing a unified forum for the KDD research community, whose publications are currently scattered among many different journals. The journal will publish state-of-the-art papers in both the research and practice of KDD, surveys of important techniques from related fields, and application papers of general interest. In addition, there will be a pragmatic section including short application reports (1-3 pages), book and system reviews, and relevant product announcements. 
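As a purely illustrative aside (not part of the journal announcement), the multi-step KDD process described above -- selection/sampling, preprocessing/transformation, data mining, and rendering results in a human-readable form -- might be caricatured in a few lines of Python; every function name and step here is invented for illustration only:

```python
import random
import statistics

def kdd_pipeline(records, sample_size=100, seed=0):
    """Toy sketch of the KDD steps: sample, preprocess, mine, render."""
    random.seed(seed)
    # 1. Selection/sampling
    sample = random.sample(records, min(sample_size, len(records)))
    # 2. Preprocessing and transformation: drop missing values, standardize
    clean = [r for r in sample if r is not None]
    mu = statistics.mean(clean)
    sd = statistics.pstdev(clean) or 1.0
    z = [(x - mu) / sd for x in clean]
    # 3. Data mining: a trivial "pattern" -- flag values beyond 2 sigma
    outliers = [x for x in z if abs(x) > 2]
    # 4. Rendering: summarize the discovered pattern for a human reader
    return f"{len(outliers)} of {len(z)} sampled records are >2 sigma outliers"
```

A real KDD system would of course iterate interactively over these steps, as the announcement notes, rather than run them once in sequence.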
Please visit the journal's WWW homepage at: http://www.research.microsoft.com/research/datamine/ to obtain further information, including: - A list of topics of interest, - full call for papers, - instructions for submission, - contact information, subscription information, and - ordering a free sample issue. Editors-in-Chief: Usama M. Fayyad ================ Jet Propulsion Laboratory, California Institute of Technology, USA Heikki Mannila University of Helsinki, Finland Gregory Piatetsky-Shapiro GTE Laboratories, USA Editorial Board: =============== Rakesh Agrawal (IBM Almaden Research Center, USA) Tej Anand (AT&T Global Information Solutions, USA) Ron Brachman (AT&T Bell Laboratories, USA) Wray Buntine (Thinkbank Inc, USA) Peter Cheeseman (NASA AMES Research Center, USA) Greg Cooper (University of Pittsburgh, USA) Bruce Croft (University of Mass. Amherst, USA) Dan Druker (Arbor Software, USA) Saso Dzeroski (Jozef Stefan Institute, Slovenia) Oren Etzioni (University of Washington, USA) Jerome Friedman (Stanford University, USA) Brian Gaines (University of Calgary, Canada) Clark Glymour (Carnegie-Mellon University, USA) Jim Gray (Microsoft Research, USA) Georges Grinstein (University of Lowell, USA) Jiawei Han (Simon Fraser University, Canada) David Hand (Open University, UK) Trevor Hastie (Stanford University, USA) David Heckerman (Microsoft Research, USA) Se June Hong (IBM T.J. Watson Research Center, USA) Thomasz Imielinski (Rutgers University, USA) Larry Jackel (AT&T Bell Labs, USA) Larry Kerschberg (George Mason University, USA) Willi Kloesgen (GMD, Germany) Yves Kodratoff (Lab. 
de Recherche Informatique, France)
Pat Langley (ISLE/Stanford University, USA)
Tsau Lin (San Jose State University, USA)
David Madigan (University of Washington, USA)
Ami Motro (George Mason University, USA)
Shojiro Nishio (Osaka University, Japan)
Judea Pearl (University of California, Los Angeles, USA)
Ed Pednault (AT&T Bell Laboratories, USA)
Daryl Pregibon (AT&T Bell Laboratories, USA)
J. Ross Quinlan (University of Sydney, Australia)
Jude Shavlik (University of Wisconsin - Madison, USA)
Arno Siebes (CWI, Netherlands)
Evangelos Simoudis (IBM Almaden Research Center, USA)
Andrzej Skowron (University of Warsaw, Poland)
Padhraic Smyth (Jet Propulsion Laboratory, USA)
Salvatore Stolfo (Columbia University, USA)
Alex Tuzhilin (NYU Stern School, USA)
Ramasamy Uthurusamy (General Motors Research Laboratories, USA)
Vladimir Vapnik (AT&T Bell Labs, USA)
Ronald Yager (Iona College, USA)
Xindong Wu (Monash University, Australia)
Wojciech Ziarko (University of Regina, Canada)
Jan Zytkow (Wichita State University, USA)

======================================================================
If you would like to receive information from Kluwer on this journal, and to receive a free sample issue by mail, please fill out the form attached below and e-mail it to datamine at aig.jpl.nasa.gov
Please use the following in the SUBJECT field: REQUEST for SAMPLE J-DMKD

------cut-here------cut-here------cut-here------cut-here------cut-here----
.. Please do NOT remove keywords following '___', simply fill in provided
.. fields and return as is. This form will be processed automatically.
.. If you do not wish to complete a field, please LEAVE BLANK.
.. Subject should be: REQUEST for SAMPLE J-DMKD
.. mail completed form, including keywords in CAPS, to
.. datamine at aig.jpl.nasa.gov
..
___ REQUEST FOR FREE SAMPLE ISSUE OF DATA MINING AND KNOWLEDGE DISCOVERY ___
___ NAME:
___ EMAIL:
___ AFFILIATION:
___ POSTAL_ADDRESS_LINE1:
___ POSTAL_ADDRESS_LINE2:
___ POSTAL_ADDRESS_LINE3:
___ POSTAL_ADDRESS_LINE4:
___ CITY:
___ STATE:
___ ZIP:
___ COUNTRY:
___ TELEPHONE:
___ FAX:
___ END_FORM: do not edit this line, anything below it is discarded.

From geoff at salk.edu Tue Jan 16 13:15:43 1996
From: geoff at salk.edu (Geoff Goodhill)
Date: Tue, 16 Jan 96 10:15:43 PST
Subject: Preprint - revised information
Message-ID: <9601161815.AA15105@salk.edu>

A few days ago I advertised a preprint entitled "Optimizing cortical mappings" by Goodhill, Finch and Sejnowski. Unfortunately since then the ftp and http details have changed. The new ones are

ftp://ftp.cnl.salk.edu/pub/geoff/goodhill_nips96.ps.Z
and
http://www.cnl.salk.edu/~geoff

Apologies,

Geoff Goodhill

From drl at eng.cam.ac.uk Tue Jan 16 10:00:00 1996
From: drl at eng.cam.ac.uk (drl@eng.cam.ac.uk)
Date: Tue, 16 Jan 96 15:00:00 GMT
Subject: Tech Report announcement
Message-ID: <9601161500.17843@dante.eng.cam.ac.uk>

The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department.

Limits on the discrimination possible with discrete valued data, with application to medical risk prediction

D. R. Lovell, C. R. Dance, M. Niranjan, R. W. Prager and K. J. Dalton

Technical Report CUED/F-INFENG/TR243
Cambridge University Engineering Department
Trumpington Street
Cambridge CB2 1PZ
England

Abstract

We describe an upper bound on the {\em accuracy} (in the ROC sense) attainable in two-alternative forced choice risk prediction, for a specific set of data represented by discrete features. By accuracy, we mean the probability that a risk prediction system will correctly rank a randomly chosen high risk case and a randomly chosen low risk case.
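(Aside: the "accuracy" just defined -- the probability of correctly ranking a random high-risk/low-risk pair -- is exactly the area under the ROC curve, i.e. the normalized Mann-Whitney U statistic. As a minimal illustration of the quantity the report bounds -- a sketch of ours, not code from the report -- it can be computed from two lists of risk scores by exhaustive pairwise comparison:)

```python
from itertools import product

def forced_choice_accuracy(high_risk_scores, low_risk_scores):
    # Probability that a randomly chosen high-risk case gets a higher
    # score than a randomly chosen low-risk case; ties count as 1/2.
    # Equals the area under the ROC curve of the scoring system.
    wins = 0.0
    for h, l in product(high_risk_scores, low_risk_scores):
        if h > l:
            wins += 1.0
        elif h == l:
            wins += 0.5
    return wins / (len(high_risk_scores) * len(low_risk_scores))
```

A perfect ranker scores 1.0, a random one 0.5; with discrete features, distinct cases can receive identical scores, and the resulting ties are what cap the attainable accuracy below 1.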
We also present methods for estimating the maximum accuracy we can expect to attain using a given set of discrete features to represent data sampled from a given population. These techniques allow an experimenter to calculate the maximum performance that could be achieved, without having to resort to applying specific risk prediction methods. Furthermore, these techniques can be used to rank discrete features in order of their effect on maximum attainable accuracy.

************************ How to obtain a copy ************************

Via FTP:

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get lovell_tr243.ps.Z
ftp> quit
unix> uncompress lovell_tr243.ps.Z
unix> lpr lovell_tr243.ps (or however you print PostScript)

No hardcopies available.

From bengioy at IRO.UMontreal.CA Tue Jan 16 17:48:26 1996
From: bengioy at IRO.UMontreal.CA (Yoshua Bengio)
Date: Tue, 16 Jan 1996 17:48:26 -0500
Subject: Montreal workshop and spring school on NNs and learning algorithms
Message-ID: <199601162248.RAA01887@oust.iro.umontreal.ca>

Montreal Workshop and Spring School on Neural Nets and Learning Algorithms
April 15-30 1996
Centre de Recherche Mathematique, Universite de Montreal
MORE INFO AT: http://www.iro.umontreal.ca/labs/neuro/spring96/english.html

This workshop and concentrated course on artificial neural networks and learning algorithms is organized by the Centre de Recherches Mathematiques of the University of Montreal (Montreal, Quebec, Canada). The first week of the workshop will concentrate on learning theory, statistics, and generalization. The second week (and the beginning of the third) will concentrate on learning algorithms, architectures, applications and implementations. The organizers of the workshop are Bernard Goulard (Montreal), Yoshua Bengio (Montreal), Bertrand Giraud (CEA Saclay, France) and Renato De Mori (McGill). The invited speakers are G. Hinton (Toronto), V. Vapnik (AT&T), M. Jordan (MIT), H.
Bourlard (Mons), T. Hastie (Stanford), R. Tibshirani (Toronto), F. Girosi (MIT), M. Mozer (Boulder), J.P. Nadal (ENS, Paris), Y. Le Cun (AT&T), M. Marchand (U of Ottawa), J. Shawe-Taylor (London), L. Bottou (Paris), F. Pineda (Baltimore), J. Moody (Oregon), S. Bengio (INRS Montreal), J. Cloutier (Montreal), S. Haykin (McMaster), M. Gori (Florence), J. Pollack (Brandeis), S. Becker (McMaster), Y. Bengio (Montreal), S. Nowlan (Motorola), P. Simard (AT&T), G. Dreyfus (ESPCI Paris), P. Dayan (MIT), N. Intrator (Tel Aviv), B. Giraud (France), B. Pearlmutter (Siemens), H.P. Graf (AT&T).

TENTATIVE SCHEDULE
(see details at http://www.iro.umontreal.ca/labs/neuro/spring96/english.html)

Week 1: Introduction, learning theory and statistics
April 15: Y. Bengio, J.P. Nadal, G. Dreyfus, B. Giraud
April 16: Y. Bengio, F. Girosi, L. Bottou, J.P. Nadal, G. Dreyfus, B. Giraud
April 17: V. Vapnik, L. Bottou, F. Girosi, M. Marchand, J. Shawe-Taylor, V. Vapnik
April 18: J. Shawe-Taylor, V. Vapnik, R. Tibshirani, T. Hastie, M. Jordan
April 19: M. Marchand, S. Bengio, R. Tibshirani, T. Hastie, M. Jordan

Weeks 2 and 3: Algorithms, architectures and applications
April 22: S. Haykin, H. Bourlard, M. Gori, M. Mozer, F. Pineda
April 23: S. Haykin, F. Pineda, H. Bourlard, M. Mozer, J. Pollack, P. Dayan
April 24: M. Gori, J. Pollack, P. Dayan, B. Pearlmutter, S. Becker, P. Simard
April 25: S. Becker, G. Hinton, N. Intrator, B. Pearlmutter, S. Nowlan, Y. Le Cun
April 26: S. Bengio, Y. Le Cun, S. Nowlan, N. Intrator, P. Simard
April 29: J. Moody, Y. Bengio, J. Cloutier, H.P. Graf
April 30: J. Moody, J. Cloutier, H.P. Graf

REGISTRATION INFORMATION:
$100 (Canadian) or 75 $US, if received before April 1st
$150 (Canadian) or 115 $US, if received on or after April 1st
$25 (Canadian) or 19 $US, for students and post-doctoral fellows

The number of participants will be limited, on a first-come first-served basis. Please register early!
Have a look at http://www.iro.umontreal.ca/labs/neuro/spring96/english.html for more details, or directly load the registration form by ftp (postscript: ftp://ftp.iro.umontreal.ca/pub/neuro/registration.ps or ascii: ftp://ftp.iro.umontreal.ca/pub/neuro/registration.asc). Reduced hotel rates can be obtained by returning your registration form with your choice of hotel before March 15th. For more information, contact Louis Pelletier, pelletl at crm.umontreal.ca, 514-343-2197, fax 514-343-2254 Centre de Recherche Mathematique, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec, H3C-3J7, Canada. -- Yoshua Bengio Professeur Adjoint, Dept. Informatique et Recherche Operationnelle Pavillon Andre-Aisenstadt #3339 , Universite de Montreal, Dept. IRO, CP 6128, Succ. Centre-Ville, 2920 Chemin de la tour, Montreal, Quebec, Canada, H3C 3J7 E-mail: bengioy at iro.umontreal.ca Fax: (514) 343-5834 web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy or http://www.iro.umontreal.ca/labs/neuro/ Tel: (514) 343-6804. Residence: (514) 738-6206 From baluja at GS93.SP.CS.CMU.EDU Tue Jan 16 15:04:11 1996 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Tue, 16 Jan 96 15:04:11 EST Subject: Paper: Medical Risk Evaluation - Rankprop and Multitask Learning Message-ID: Paper Available: -------------------------- Using the Future to "Sort Out" the Present: Rankprop and Multitask Learning for Medical Risk Evaluation Rich Caruana, Shumeet Baluja, and Tom Mitchell Abstract: -------------------------- A patient visits the doctor; the doctor reviews the patient's history, asks questions, makes basic measurements (blood pressure, ...), and prescribes tests or treatment. The prescribed course of action is based on an assessment of patient risk---patients at higher risk are given more and faster attention. It is also sequential---it is too expensive to immediately order all tests which might later be of value. 
This paper presents two methods that together improve the accuracy of backprop nets on a pneumonia risk assessment problem by 10-50\%. {\em Rankprop} improves on backpropagation with sum of squares error in ranking patients by risk. {\em Multitask learning} takes advantage of {\em future} lab tests available in the training set, but not available in practice when predictions must be made. Both methods are broadly applicable. Retrieval Information -------------------------- This paper will appear in NIPS 8. Available via the web from: http://www.cs.cmu.edu/~baluja/techreps.html From heckerma at microsoft.com Tue Jan 16 22:34:47 1996 From: heckerma at microsoft.com (David Heckerman) Date: Tue, 16 Jan 1996 19:34:47 -0800 Subject: Summary: NIPS workshop on learning in graphical models Message-ID: Summary: NIPS 95 Workshop on Learning in Bayesian Networks and Other Graphical Models We discussed the relationships between Bayesian networks, decomposable models, Markov random fields, Boltzmann machines, Hidden Markov models, stochastic grammars, and feedforward neural networks, exposing complementary strengths and weaknesses in the various formalisms. For example, Bayesian networks are particularly strong in their focus on explicit representations of probabilistic independencies (the arrows in a belief network have a strong semantics in this regard), their full use of Bayesian methods, and their focus on density estimation. Neural networks are particularly strong in their ties to approximation theory, and in their focus on predictive modeling in non-linear classification and regression contexts. 
Topics discussed included issues in optimization, including the use of gradient-based methods and EM algorithms; issues in approximation, including the use of mean field algorithms and stochastic sampling; issues in representation, including exploration of the roles of ``hidden'' or ``latent'' variables in learning; search methods for model selection and model averaging; and engineering issues. A more detailed summary, as well as pointers to slides and related papers, can be found at http://www.research.microsoft.com/research/nips95bn/

From gs at next2.ss.uci.edu Wed Jan 17 02:22:22 1996
From: gs at next2.ss.uci.edu (George Sperling)
Date: Tue, 16 Jan 96 23:22:22 -0800
Subject: Conference Announcement
Message-ID: <9601170722.AA07036@next2.ss.uci.edu>

TWENTY-FIRST ANNUAL INTERDISCIPLINARY CONFERENCE
Teton Village, Jackson Hole, Wyoming
January 28 - February 2, 1996
Organizer: George Sperling, University of California, Irvine

The TWENTY-FIRST ANNUAL INTERDISCIPLINARY CONFERENCE will meet in Teton Village, Jackson Hole, Wyoming, January 28 - February 2, 1996. The conference covers a wide range of subjects in what has come to be called cognitive science, ranging from visual and auditory physiology and psychophysics to human information processing, cognition, learning and memory, to computational approaches to these problems, including neural networks and artificial intelligence. The aim is to provide overview talks that are comprehensible and interesting to a wide scientific audience -- such as one might fantasize would occur at a National or Royal Academy of Science if such organizations were indeed devoted to scientific interchange. Attendance is limited by the size of the conference facility to about 50 persons. The Conference begins with a reception on Sunday evening, January 28, at 6:00p. Regular sessions meet from Monday through Friday, 4:00p to 8:00p; the rest of the day is free. On Friday at 8:00p there is a banquet for participants. A preliminary program is appended.
The conference hotel, the Inn at Jackson Hole, is directly at the base of the ski slopes, a short walk from the tram and other ski lifts. The Conference has arranged special room rates for registered participants. To reserve lodging, telephone The Inn 1-800-842-7666 and inform the desk that you are with the Interdisciplinary Conference (AIC). Other hotels, restaurants, ski rental facilities, shops, and cross country ski trails, are all within walking distance. There are flights directly to Jackson Hole AP (taxi or bus to the hotel). Alternatively, Jackson is a five-hour drive from Salt Lake City. Additional information about the conference, previous programs, etc, are available at the WWW site below. To attend the conference, fill out the online registration form or request hardcopy from the organizer, and send the registration fee ($100) to the address below. To be sure of receiving future mailings, return a copy of the registration form with your current address. Annual Interdisciplinary Conference c/o Prof. George Sperling Cognitive Science Dept., SST-6 University of California Irvine, CA 92717 E-mail: sperling at uci.edu http://www.socsci.uci.edu/cogsci/HIPLab/AIC (for info about AIC-21) http://www.jacksonhole.com/ski (info about Jackson, WY) http://www.socsci.uci.edu/cogsci (for info about UCI Cognitive Sciences) --------------------------------------------------------------------------- P.S. UCI Update from the organizer: In spite of the fiscal difficulties faced by the State of California, UCI continues to move forward (two Nobel Prizes in 1995) and the Department of Cognitive Science is flourishing. In fall, 1995, the Department of Cognitive Science will be recruiting for three faculty positions with considerable flexibility in areas. There is an opening for a graduate student and a postdoc in my lab, and there are excellent opportunities for graduate students in the department --see the enclosed announcement and the WWW site above. 
===========================================================================
TWENTY-FIRST ANNUAL INTERDISCIPLINARY CONFERENCE
Teton Village, Jackson Hole, Wyoming
January 28 - February 2, 1996
Organizer: George Sperling, University of California, Irvine

Preliminary Schedule (16Jan96)

Sunday, January 28: 6:00 - 7:30 p.m. ** Reception **
Registration, Appetizers, Snacks, Refreshments.

Monday, January 29, 4:00 - 8:00 p.m. Auditory Biology and Psychophysics; Visual Physiology
Karen Glendenning, Psychology, Florida State U. Hearing: A Comparative Perspective.
Bruce Masterton, Psychology, Florida State U. Role of the Central Auditory System in Hearing.
Sam Williamson, Physics, New York University. The Decay of Sensory Memory.
Randy Blake, Psychol, Vanderbilt U. Tachistoscopic Review of Mark Berkley's Research.
Adina Roskies, Dept. Neurol, Washington U Med. Topographic Targeting of Retinal Axons in Development.

Tuesday, January 30, 4:00 - 8:00 p.m. Motion Perception: Physiology, Psychophysics
Larry O'Keefe, Center for Neural Science, NYU. Motion Processing in Primate Visual Cortex.
Scott Richman, Cognitive Sci., UCI. A Specialized Receptor for Moving Flicker?
Erik Blaser, Cog. Sci, UC Irvine. When is Motion Motion?
Sophie Wuerger, Communic & Neurosci, Keele U. Colour in Moving and Stationary Orientation Discrimination.
George Sperling, Cognitive Science, UC Irvine. Model of Gain-Control in Motion Processing.

Wednesday, January 31, 4:00 - 8:00 p.m. Visual Learning, Learning; Information Processing
Lorraine Allan, Psychology, McMaster U. New Slants on the McCollough Effect.
Shepard Siegel, Psychology, McMaster U. What Contingent Color Aftereffects Tell Us About Drug Addiction.
Hal Pashler, Psychology, U Cal., San Diego. Dual-Task Bottlenecks: Structural or Strategic?
Geoffrey Loftus, Brain and Cog Sci, MIT. Information Acquisition and Phenomenology.
Bill Prinzmetal, Dept of Psychology, UC Berkeley. The Phenomenology of Attention.
Zhong-Lin Lu, Cogn. Sci., U Cal. Irvine. Salience Model of Spatial Attention.

Thursday, February 1, 4:00 - 8:00 p.m. Memory
Tim McNamara, Psychology, Vanderbilt. Viewpoint Dependence in Human Spatial Memory.
Roger Ratcliff & Gail McKoon, Psychology, Northwestern U. Models of RT and Word Identification.
Richard Shiffrin, Psychology, Indiana U. A Model for Implicit and Explicit Memory.
Barbara Dosher, Cogn. Sci., U Cal. Irvine. Forgetting in Implicit and Explicit Memory Tasks.
David Caulton, Natl Inst Health. Memory Retrieval Dynamics: Behavioral and Electrophysiological Approaches.

Friday, February 2, 4:00 - 8:00 p.m. Computational Issues
Sandy Pentland, Media Lab., MIT. The Perception of Driving Intentions.
Leonid Kontsevich, Smith-Kettlewell Eye Research Institute. The Role of Partial Similarity in 3D Vision.
Misha Pavel, EE., Oregon Graduate Institute. The Role of Features in the Perception of Symmetry.
Maria Kozhevnikov, Physics, Technion, Israel. A Mathematical Model of Conceptual Development.
Shulamith Eckstein, Physics, Technion, Israel. A Dynamic Model of Cognitive Growth in a Population.

* * * 8:00 Fireside Banquet at The Inn * * *

From iconip96 at cs.cuhk.hk Wed Jan 17 08:32:02 1996
From: iconip96 at cs.cuhk.hk (iconip96)
Date: Wed, 17 Jan 1996 21:32:02 +0800 (HKT)
Subject: *** ICONIP'96 FINAL CALL FOR PAPERS ***
Message-ID: <199601171332.VAA00980@cs.cuhk.hk>

Please do not re-distribute this CFP to other lists. We apologize should you receive multiple copies of this CFP from different sources.
======================================================================
FINAL CALL FOR PAPERS

1996 INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING
The Annual Conference of the Asian Pacific Neural Network Assembly
ICONIP'96, September 24 - 27, 1996
Hong Kong Convention and Exhibition Center, Wan Chai, Hong Kong

In cooperation with
IEEE / NNC -- IEEE Neural Networks Council
INNS - International Neural Network Society
ENNS - European Neural Network Society
JNNS - Japanese Neural Network Society
CNNC - China Neural Networks Council
======================================================================

The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing. The conference also serves to stimulate local and regional interest in neural information processing and its potential applications to industries indigenous to this region. The conference consists of two tracks. One is the SCIENTIFIC TRACK, for the latest results on Theories, Technologies, Methods, Architectures and Algorithms in neural information processing. The other is the APPLICATION TRACK, for various neural network applications in any engineering/technical field and any business/service sector. There will be a one-day tutorial on neural networks for capital markets, which reflects Hong Kong's local interest in financial services. In addition, there will be several invited lectures in the main conference. Hong Kong is one of the most dynamic cities in the world, with world-class facilities, easy accessibility, exciting entertainment, and high levels of service and professionalism. Come to Hong Kong! Visit this Eastern Pearl in this historical period before Hong Kong's imminent return to China in 1997.

Tutorials On Financial Engineering
==================================
1.
Professor John Moody, Oregon Graduate Institute, USA
"Time Series Modeling: Classical and Nonlinear Approaches"

2. Professor Halbert White, University of California, San Diego, USA
"Option Pricing In Modern Finance Theory And The Relevance Of Artificial Neural Networks"

3. The third tutorial speaker will also be an internationally well-known expert in neural networks for the capital markets.

Keynote Talks
=============
1. Professor Shun-ichi Amari, Tokyo University
"Information Geometry of Neural Networks"

2. Professor Yaser Abu-Mostafa, California Institute of Technology, USA
"The Bin Model for Learning and Generalization"

3. Professor Leo Breiman, University of California, Berkeley, USA
"Democratizing Predictors"

4. Professor Christoph von der Malsburg, Ruhr-Universitat Bochum, Germany
"Scene Analysis Based on Dynamic Links" (tentatively)

5. Professor Erkki Oja, Helsinki University of Technology, Finland
"Blind Signal Separation by Neural Networks"

*** PLUS AROUND 20 INVITED PAPERS GIVEN BY WELL-KNOWN RESEARCHERS IN THE FIELD.
*** CONFERENCE TOPICS ================= SCIENTIFIC TRACK: ----------------- * Theory * Algorithms & Architectures * Supervised Learning * Unsupervised Learning * Hardware Implementations * Hybrid Systems * Neurobiological Systems * Associative Memory * Visual & Speech Processing * Intelligent Control & Robotics * Cognitive Science & AI * Recurrent Net & Dynamics * Image Processing * Pattern Recognition * Computer Vision * Time Series Prediction * Optimization * Fuzzy Logic * Evolutionary Computing * Other Related Areas APPLICATION TRACK: ------------------ * Foreign Exchange * Equities & Commodities * Risk management * Options & Futures * Forecasting & Strategic Planning * Government and Services * Garments and Fashions * Telecommunications * Control & Modeling * Manufacturing * Chemical engineering * Transportation * Environmental engineering * Remote sensing * Power systems * Defense * Multimedia systems * Document Processing * Medical imaging * Biomedical application * Geophysical sciences * Other Applications CONFERENCE'S SCHEDULE ===================== Submission of paper February 1, 1996 Notification of acceptance May 1, 1996 Early registration deadline July 1, 1996 Tutorial on Financial Engineering Sept, 24, 1996 Conference Sept, 25-27, 1996 SUBMISSION INFORMATION ====================== Authors are invited to submit one camera-ready original and five copies of the manuscript written in English on A4-format (or letter) white paper with 25 mm (1 inch) margins on all four sides, in one column format, no more than six pages (four pages preferred) including figures and references, single- spaced, in Times-Roman or similar font of 10 points or larger, and printed on one side of the page only. Electronic or fax submission is not acceptable. Additional pages will be charged at USD $50 per page. 
Centered at the top of the first page should be the complete title, author(s), affiliation, mailing, and email addresses, followed by an abstract (no more than 150 words) and the text. Each submission should be accompanied by a cover letter indicating the contacting author, affiliation, mailing and email addresses, telephone and fax number, and preference of track, technical session(s), and format of presentation, either oral or poster. All submitted papers will be refereed by experts in the field based on quality, clarity, originality, and significance. Authors may also retrieve the ICONIP style, "iconip.tex" and "iconip.sty" files for the conference by anonymous FTP at ftp.cs.cuhk.hk in the directory /pub/iconip96. The address for information inquiries and paper submissions: ICONIP'96 Secretariat Department of Computer Science and Engineering The Chinese University of Hong Kong Shatin, N.T., Hong Kong Fax (852) 2603-5024 E-mail: iconip96 at cs.cuhk.hk http://www.cs.cuhk.hk/iconip96 ====================================================================== General Co-Chairs ================= Omar Wing, CUHK Shun-ichi Amari, Tokyo U. Advisory Committee ================== International ------------- Yaser Abu-Mostafa, Caltech Michael Arbib, U. Southern Cal. Leo Breiman, UC Berkeley Jack Cowan, U. Chicago Rolf Eckmiller, U. Bonn Jerome Friedman, Stanford U. Stephen Grossberg, Boston U. Robert Hecht-Nielsen, HNC Geoffrey Hinton, U. Toronto Anil Jain, Michigan State U. Teuvo Kohonen, Helsinki U. of Tech. Sun-Yuan Kung, Princeton U. Robert Marks, II, U. Washington Thomas Poggio, MIT Harold Szu, US Naval SWC John Taylor, King's College London David Touretzky, CMU C. v. d. Malsburg, Ruhr-U. Bochum David Willshaw, Edinburgh U. Lofti Zadeh, UC Berkeley Asia-Pacific Region ------------------- Marcelo H. 
Ang Jr, NUS, Singapore Sung-Yang Bang, POSTECH, Pohang Hsin-Chia Fu, NCTU., Hsinchu Toshio Fukuda, Nagoya U., Nagoya Kunihiko Fukushima, Osaka U., Osaka Zhenya He, Southeastern U., Nanjing Marwan Jabri, U. Sydney, Sydney Nikola Kasabov, U. Otago, Dunedin Yousou Wu, Tsinghua U., Beijing Organizing Committee ==================== L.W. Chan (Co-Chair), CUHK K.S. Leung (Co-Chair), CUHK D.Y. Yeung (Finance), HKUST C.K. Ng (Publication), CityUHK A. Wu (Publication), CityUHK B.T. Low (Publicity), CUHK M.W. Mak (Local Arr.), HKPU C.S. Tong (Local Arr.), HKBU T. Lee (Registration), CUHK K.P. Chan (Tutorial), HKU H.T. Tsui (Industry Liaison), CUHK I. King (Secretary), CUHK Program Committee ================= Co-Chairs --------- Lei Xu, CUHK Michael Jordan, MIT Erkki Oja, Helsinki U. of Tech. Mitsuo Kawato, ATR Members ------- Yoshua Bengio, U. Montreal Jim Bezdek, U. West Florida Chris Bishop, Aston U. Leon Bottou, Neuristique Gail Carpenter, Boston U. Laiwan Chan, CUHK Huishen Chi, Peking U. Peter Dayan, MIT Kenji Doya, ATR Scott Fahlman, CMU Francoise Fogelman, SLIGOS Lee Giles, NEC Research Inst. Michael Hasselmo, Harvard U. Kurt Hornik, Technical U. Wien Yu Hen Hu, U. Wisconsin - Madison Jeng-Neng Hwang, U. Washington Nathan Intrator, Tel-Aviv U. Larry Jackel, AT&T Bell Lab Adam Kowalczyk, Telecom Australia Soo-Young Lee, KAIST Todd Leen, Oregon Grad. Inst. Cheng-Yuan Liou, National Taiwan U. David MacKay, Cavendish Lab Eric Mjolsness, UC San Diego John Moody, Oregon Grad. Inst. Nelson Morgan, ICSI Steven Nowlan, Synaptics Michael Perrone, IBM Watson Lab Ting-Chuen Pong, HKUST Paul Refenes, London Business School David Sanchez, U. Miami Hava Siegelmann, Technion Ah Chung Tsoi, U. Queensland Benjamin Wah, U. Illinois Andreas Weigend, Colorado U. Ronald Williams, Northeastern U. John Wyatt, MIT Alan Yuille, Harvard U. Richard Zemel, CMU Jacek Zurada, U. 
Louisville From scott at cpl_mmag.nhrc.navy.mil Wed Jan 17 13:32:05 1996 From: scott at cpl_mmag.nhrc.navy.mil (Scott Makeig) Date: Wed, 17 Jan 1996 10:32:05 -0800 (PST) Subject: 2 papers applying neural networks to EEG data Message-ID: <199601171832.KAA13143@cpl_mmag.nhrc.navy.mil> Announcing the availability of preprints of two articles to be published in the NIPS conference proceedings: %%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%% INDEPENDENT COMPONENT ANALYSIS OF ELECTROENCEPHALOGRAPHIC DATA Scott Makeig Anthony J. Bell Naval Health Research Center Computational Neurobiology Lab P.O. Box 85122 The Salk Institute, P.O. Box 85800 San Diego CA 92186-5122 San Diego, CA 92186-5800 scott at cpl_mmag.nhrc.navy.mil tony at salk.edu Tzyy-Ping Jung Terrence J. Sejnowski Naval Health Research Center and Howard Hughes Medical Institute and Computational Neurobiology Lab Computational Neurobiology Lab jung at salk.edu terry at salk.edu ABSTRACT Because of the distance between the skull and brain and their different resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve significant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm of Bell and Sejnowski(1994) is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of source identification from that of source localization. First results of applying the ICA algorithm to EEG and event-related potential (ERP) data collected during a sustained auditory detection task show: (1) ICA training is insensitive to different random seeds. (2) ICA analysis may be used to segregate obvious artifactual EEG components (line and muscle noise, eye movements) from other sources. 
(3) ICA analysis is capable of isolating overlapping alpha and theta wave bursts into separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA analysis via changes in the amount of residual correlation between ICA-filtered output channels.

Sites: http://128.49.52.9/~scott/bib.html
       ftp://ftp.cnl.salk.edu/pub/jung/nips95b.ps.Z

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

USING NEURAL NETWORKS TO MONITOR ALERTNESS FROM CHANGES IN EEG CORRELATION AND COHERENCE

Scott Makeig                        Tzyy-Ping Jung
Naval Health Research Center        Naval Health Research Center and
P.O. Box 85122                      Computational Neurobiology Lab
San Diego, CA 92186-5122            The Salk Institute
scott at cpl_mmag.nhrc.navy.mil     jung at salk.edu

Terrence J. Sejnowski
Howard Hughes Medical Institute & Computational Neurobiology Lab
The Salk Institute
terry at salk.edu

ABSTRACT

We report here that changes in the normalized electroencephalographic (EEG) cross-spectrum can be used in conjunction with feedforward neural networks to monitor changes in operator alertness continuously and in near-real time. Previously, we showed that EEG spectral amplitudes covary with changes in alertness as indexed by changes in behavioral error rate on an auditory detection task (Makeig & Inlow, 1993). Here, we report for the first time that increases in the frequency of detection errors in this task are also accompanied by patterns of increased and decreased spectral coherence in several frequency bands and EEG channel pairs. Relationships between EEG coherence and performance vary between subjects, but within subjects their topographic and spectral profiles appear stable from session to session. Changes in alertness also covary with changes in correlations among EEG waveforms recorded at different scalp sites, and neural networks can also estimate alertness from correlation changes in spontaneous and unobtrusively recorded EEG signals.
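The Bell and Sejnowski infomax algorithm referenced in the first abstract above can be sketched as a natural-gradient update of an unmixing matrix. The sketch below is illustrative, not the authors' implementation: the learning rate, iteration count, and tanh nonlinearity (suited to super-Gaussian sources such as EEG artifacts) are assumptions for the toy demonstration.

```python
import numpy as np

def infomax_ica(X, lr=0.05, n_iter=500):
    """Minimal natural-gradient infomax ICA sketch.
    X: (channels, samples) array of linearly mixed signals."""
    X = X - X.mean(axis=1, keepdims=True)  # center each channel
    n, T = X.shape
    W = np.eye(n)                          # unmixing matrix estimate
    for _ in range(n_iter):
        U = W @ X                          # current source estimates
        Y = np.tanh(U)                     # nonlinearity for super-Gaussian sources
        # natural-gradient update: W += lr * (I - E[tanh(u) u^T]) W
        W += lr * (np.eye(n) - (Y @ U.T) / T) @ W
    return W

# Toy demonstration: unmix two super-Gaussian (Laplacian) sources.
rng = np.random.default_rng(0)
S = rng.laplace(size=(2, 5000))            # hypothetical "sources"
A = np.array([[1.0, 0.6], [0.6, 1.0]])     # mixing matrix
X = A @ S
W = infomax_ica(X)
U = W @ (X - X.mean(axis=1, keepdims=True))  # recovered components
```

Note that, as the abstract emphasizes, an update of this form addresses source identification only; localizing the recovered components in the brain is a separate problem.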
Sites: http://128.49.52.9/~scott/bib.html
       ftp://ftp.cnl.salk.edu/pub/jung/nips95a.ps.Z

From marni at salk.edu Wed Jan 17 16:12:10 1996
From: marni at salk.edu (Marian Stewart Bartlett)
Date: Wed, 17 Jan 1996 13:12:10 -0800
Subject: Preprint available
Message-ID: <199601172112.NAA01871@chardin.salk.edu>

The following preprints are available via anonymous ftp or http://www.cnl.salk.edu/~marni

------------------------------------------------------------------------

CLASSIFYING FACIAL ACTION

Marian Stewart Bartlett, Paul A. Viola, Terrence J. Sejnowski, Beatrice A. Golomb, Jan Larsen, Joseph C. Hager, and Paul Ekman

To appear in "Advances in Neural Information Processing Systems 8", D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), MIT Press, Cambridge, MA, 1996.

ABSTRACT

The Facial Action Coding System (FACS), devised by Ekman and Friesen, provides an objective means for measuring the facial muscle contractions involved in a facial expression. In this paper, we approach automated facial expression analysis by detecting and classifying facial actions. We generated a database of over 1100 image sequences of 24 subjects performing over 150 distinct facial actions or action combinations. We compare three different approaches to classifying the facial actions in these images: holistic spatial analysis based on principal components of graylevel images; explicit measurement of local image features such as wrinkles; and template matching with motion flow fields. On a dataset containing six individual actions and 20 subjects, these methods achieved 89%, 57%, and 85% accuracy, respectively, in generalizing to novel subjects. When combined, performance improved to 92%.

nips95.ps.Z 7 pages; 352K compressed

--------------------------------------------------------------------------

UNSUPERVISED LEARNING OF INVARIANT REPRESENTATIONS OF FACES THROUGH TEMPORAL ASSOCIATION

Marian Stewart Bartlett and Terrence J.
Sejnowski To appear in "The Neurobiology of Computation: Proceedings of the Annual Computational Neuroscience Meeting." J.M. Bower, ed. Kluwer Academic Publishers, Boston. ABSTRACT The appearance of an object or a face changes continuously as the observer moves through the environment or as a face changes expression or pose. Recognizing an object or a face despite these image changes is a challenging problem for computer vision systems, yet we perform the task quickly and easily. This simulation investigates the ability of an unsupervised learning mechanism to acquire representations that are tolerant to such changes in the image. The learning mechanism finds these representations by capturing temporal relationships between 2-D patterns. Previous models of temporal association learning have used idealized input representations. The input to this model consists of graylevel images of faces. A two-layer network learned face representations that incorporated changes of pose up to 30 degrees. A second network learned representations that were independent of facial expression. 
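The temporal-association mechanism described in this abstract, capturing temporal relationships between successive patterns, is closely related to trace-rule Hebbian learning in the style of Foldiak (1991). The rough sketch below is not the authors' two-layer model; the layer size, learning rate, trace constant, and random input sequence are all illustrative assumptions.

```python
import numpy as np

def trace_rule(patterns, n_out=4, lr=0.05, eta=0.8, epochs=20, seed=0):
    """Hebbian learning with a temporal activity trace (a sketch).
    patterns: (T, dim) sequence; temporally adjacent rows are assumed
    to be different views of the same object or face."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(n_out, patterns.shape[1]))
    trace = np.zeros(n_out)
    for _ in range(epochs):
        for x in patterns:
            y = W @ x
            trace = eta * trace + (1 - eta) * y   # running average of unit activity
            W += lr * np.outer(trace, x)          # Hebbian update driven by the trace
            W /= np.linalg.norm(W, axis=1, keepdims=True)  # keep weights bounded
    return W

# Hypothetical usage on a stand-in for a sequence of image codes.
rng = np.random.default_rng(2)
seq = rng.normal(size=(60, 8))
W = trace_rule(seq)
```

Because the trace carries activity across time steps, units that respond to one view of a face are nudged toward responding to the temporally adjacent views as well, which is the intuition behind pose- and expression-tolerant representations.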
cns95.ta.ps.Z 6 pages; 428K compressed

=========================================================================

FTP-host: ftp.cnl.salk.edu
FTP-pathnames: /pub/marni/nips95.ps.Z and /pub/marni/cns95.ta.ps.Z
URL: ftp://ftp.cnl.salk.edu/pub/marni
WWW URL: http://www.cnl.salk.edu/~marni

If you have difficulties, email marni at salk.edu

From fu at cis.ufl.edu Thu Jan 18 14:49:32 1996
From: fu at cis.ufl.edu (fu@cis.ufl.edu)
Date: Thu, 18 Jan 1996 14:49:32 -0500
Subject: Special issue on Knowledge-Based Neural Networks
Message-ID: <199601181949.OAA18615@whale.cis.ufl.edu>

Special Issue: Knowledge-Based Neural Networks
{Knowledge-Based Systems, 8(6), December 1995}
Guest Editor: LiMin Fu, University of Florida (Gainesville, USA)

Introduction to knowledge-based neural networks
  L Fu
Dynamically adding symbolically meaningful nodes to knowledge-based neural networks
  D W Opitz and J W Shavlik
Recurrent neural networks and prior knowledge for sequence processing: A constrained nondeterministic approach
  P Frasconi, M Gori, and G Soda
Initialization of neural networks by means of decision trees
  I Ivanova and M Kubat
Extension of the temporal synchrony approach to dynamic variable binding in a connectionist inference system
  N S Park, D Robertson, and K Stenning
Hybrid modeling in pattern recognition and control
  Jim Bezdek
Survey and critique of techniques for extracting rules from trained artificial neural networks
  R Andrews, J Diederich, and A B Tickle

========================================================

Orders: Elsevier Science BV, Order Fulfilment Department, P.O. Box 211, 1000 AE, Amsterdam, The Netherlands.
Tel: +31 (20) 485-3642 Fax: +31 (20) 485-3598 From dnoelle at cs.ucsd.edu Thu Jan 18 15:40:16 1996 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Thu, 18 Jan 96 12:40:16 -0800 Subject: Cog Sci 96: Final Call For Papers Message-ID: <9601182040.AA25271@beowulf> Eighteenth Annual Conference of the COGNITIVE SCIENCE SOCIETY July 12-15, 1996 University of California, San Diego La Jolla, California SECOND (AND FINAL) CALL FOR PAPERS DUE DATE: Thursday, February 1, 1996 CONTACT: cogsci96 at cs.ucsd.edu EXECUTIVE SUMMARY OF CHANGES FROM ORIGINAL CFP After discussion with the advisory board, we decided to go with a three-tiered approach after all. There will be six page papers in the proceedings for both talks and posters. However, even if your paper/poster is not accepted, you will have a chance to submit a one page abstract for publication and poster presentation. Or, you may submit a one-page abstract initially (actually two pages in the submission format) for guaranteed acceptance. This is meant to accommodate the very different cultures of the component disciplines of the Society, while making a minimal change from previous years' formats. Also, this CFP provides a partial list of the program committee, the plenary speakers, a rough schedule for the paper reviewing process, and some keywords to aid in the process of reviewing your paper. INTRODUCTION The Annual Cognitive Science Conference began with the La Jolla Conference on Cognitive Science in August of 1979. The organizing committee of the Eighteenth Annual Conference would like to welcome members home to La Jolla. We plan to recapture the pioneering spirit of the original conference, extending our welcome to fields on the expanding frontier of Cognitive Science, including Artificial Life, Cognitive and Computational Neuroscience, Evolutionary Psychology, as well as the core areas of Anthropology, Computer Science, Linguistics, Neuroscience, Philosophy, and Psychology. 
As a change this year, we follow the example of the Psychonomics and Neuroscience conferences and invite Members of the Society to submit short abstracts for guaranteed poster presentation at the conference. The conference will feature plenary addresses by invited speakers, invited symposia by leaders in their fields, technical paper sessions, a poster session, a banquet, and a Blues Party. San Diego is the home of the world-famous San Diego Zoo and Wild Animal Park, Sea World, the historic all-wooden Hotel Del Coronado, beautiful beaches, mountain areas, and deserts; it is a short drive from Mexico and features a high Cappuccino Index. Bring the whole family and stay a while!

PLENARY SESSIONS

1. "Controversies in Cognitive Science: The Case of Language"
   Stephen Crain (UMD College Park) & Mark Seidenberg (USC)
   Moderated by Paul Smolensky (Johns Hopkins University)

2. "Tenth Anniversary of the PDP Books"
   Geoff Hinton (Toronto), Jay McClelland (CMU), Dave Rumelhart (Stanford)

3. "Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action"
   Adele Diamond (MIT)

4. "Reconstructing Consciousness"
   Paul Churchland (UCSD)

PROGRAM COMMITTEE (a partial list):

Garrison W. Cottrell (UCSD) -- Program Chair
Farrell Ackerman (UCSD) -- Linguistics
Tom Albright (Salk Institute) -- Neuroscience
Patricia Churchland (UCSD) -- Philosophy
Roy D'Andrade (UCSD) -- Anthropology
Charles Elkan (UCSD) -- Computer Science
Catherine Harris (Boston U.) -- Psychology
Doug Medin (Northwestern) -- Psychology
Risto Miikkulainen (U. of Texas, Austin) -- Computer Science
Kim Plunkett (Oxford) -- Psychology
Martin Sereno (UCSD) -- Neuroscience
Tim van Gelder (Indiana U. & U. of Melbourne) -- Philosophy

GUIDELINES FOR PAPER SUBMISSIONS

Novel research papers are invited on any topic related to cognition.
Members of the Society may submit a one page abstract (two pages in double-spaced submission format) for poster presentation, which will be automatically accepted for publication in the proceedings. Submitted full-length papers will be evaluated through peer review with respect to several criteria, including originality, quality, and significance of research, relevance to a broad audience of cognitive science researchers, and clarity of presentation. Papers will be accepted for either oral or poster presentation, and will receive 6 pages in the proceedings in the final, camera-ready format. Papers that are rejected at this stage may be re-submitted (if the author is a Society member) as a one page abstract in the camera-ready format, due at the same date as camera-ready papers. Poster abstracts from non-members will be accepted, but the presenter should join the Society prior to presenting the poster. Papers accepted for oral presentation will be presented at the conference as scheduled talks. Papers accepted for poster presentation and one page abstracts will be presented at a poster session at the conference. All papers may present results from completed research as well as report on current research with an emphasis on novel approaches, methods, ideas, and perspectives. Posters may report on recent work to be published elsewhere that has not been previously presented at the conference. Authors should submit five (5) copies of the paper in hard copy form by Thursday, February 1, 1996, to: Dr. Garrison W. Cottrell Computer Science and Engineering 0114 FED EX ONLY: 3250 Applied Physics and Math University of California San Diego La Jolla, Ca. 92093-0114 phone for FED EX: 619-534-5948 (my secretary, Marie Kreider) If confirmation of receipt is desired, please use certified mail or enclose a self-addressed stamped envelope or postcard. 
DAVID MARR MEMORIAL PRIZES FOR EXCELLENT STUDENT PAPERS Papers with a student first author are eligible to compete for a David Marr Memorial Prize for excellence in research and presentation. The David Marr Prizes are accompanied by a $300.00 honorarium, and are funded by an anonymous donor. LENGTH Papers must be a maximum of eleven (11) pages long (excluding only the cover page but including figures and references), with 1 inch margins on all sides (i.e., the text should be 6.5 inches by 9 inches, including footnotes but excluding page numbers), double-spaced, and in 12-point type. Each page should be numbered (excluding the cover page). Template and style files conforming to these specifications for several text formatting programs, including LaTeX, Framemaker, Word, and Word Perfect are available by anonymous FTP from "cs.ucsd.edu" in the "pub/cogsci96/formats" directory. There is a self-explanatory subdirectory hierarchy under that directory for papers and posters. Formatting information is also available via the World Wide Web at the conference web page located at "http://www.cse.ucsd.edu/events/cogsci96/". Submitted abstracts should be two pages in submitted format, with the same margins as full papers. Style files for these are available at the same location as above. Final versions of papers and poster abstracts will be required only after authors are notified of acceptance; accepted papers may be published in a CD-ROM version of the proceedings. Abstracts will be available before the meeting from a WWW server. Final versions must follow the HTML style guidelines which will be made available to the authors of accepted papers and abstracts. This year we will again attempt to publish the proceedings in two modalities, paper and a CD-ROM version. Depending on a decision of the Governing Board, we may be switching completely from paper to CD-ROM publication in order to control escalating costs and permit use of search software. 
[Comments on this change should be directed to "alan at lrdc4.lrdc.pitt.edu" (Alan Lesgold, Secretary/Treasurer).] COVER PAGE Each copy of the submitted paper must include a cover page, separate from the body of the paper, which includes: 1. Title of paper. 2. Full names, postal addresses, phone numbers, and e-mail addresses of all authors. 3. An abstract of no more than 200 words. 4. Three to five keywords in decreasing order of relevance. The keywords will be used in the index for the proceedings. You may use the keywords from the attached list, or you may make up your own. Please try to give a primary discipline (or pair of disciplines) to which the paper is addressed (e.g., Psychology, Philosophy, etc.) 5. Preference for presentation format: Talk or poster, talk only, poster only. Poster only submissions should follow paper format, but be no more than 2 pages in this format (final poster abstracts will follow the same 2 column format as papers). Accepted papers will be presented as talks. Submitted posters by Society Members will be accepted for poster presentation, but may, at the discretion of the Program Committee, be invited for oral presentation. Non-members may join the Society at the time of submission. 6. A note stating if the paper is eligible to compete for a Marr Prize. DEADLINE Papers must be received by Thursday, February 1, 1996. Papers received after this date will be recycled. REVIEW SCHEDULE February 1: Papers due March 21: Decisions/Reviews Returned To Authors April 14: Final Papers & Abstracts Due CALL FOR SYMPOSIA (The call for symposia has been deleted here, as the deadline has passed.) CONFERENCE CHAIRS Edwin Hutchins and Walter Savitch PROGRAM CHAIR Garrison W. Cottrell Please direct email to "cogsci96 at cs.ucsd.edu". KEYWORDS Please identify an appropriate major discipline for your work (try to name no more than two!) and up to three subareas from the following list. 
Anthropology Behavioral Ecology Cognition & Education Cognitive Anthropology Distributed Cognition Situated Cognition Social & Group Cognition Computer Science Artificial Intelligence Artificial Life Case-Based Learning Case-Based Reasoning Category & Concept Learning Category & Concept Representation Computer Aided Instruction Computer Human Interaction Computer Vision Connectionism Discovery-Based Learning Distributed Systems Explanation Generation Hybrid Representations Inference & Decision Making Intelligent Agents Machine Learning Memory Model-Based Reasoning Natural Language Generation Natural Language Learning Natural Language Processing Planning & Action Problem Solving Reasoning Heuristics Reasoning Under Time Constraints Robotics Rule-Based Reasoning Situated Cognition Speech Generation Speech Processing Text Comprehension & Translation Linguistics Cognitive Linguistics Discourse & Text Comprehension Generative Linguistics Language Acquisition & Development Language Generation Language Understanding Lexical Semantics Phonology & Word Recognition Pragmatics & Communication Psycholinguistics Sentence Processing Syntax Neuroscience Attention Brain Imaging Cognitive Neuroscience Computational Neuroscience Consciousness Memory Motor Control Language Acquisition & Development Language Generation Language Understanding Neuropsychology Neural Plasticity Perception & Recognition Planning & Action Spatial Processing Philosophy Philosophy Of Anthropology Philosophy Of Biology Philosophy Of Language Philosophy Of Mind Philosophy Of Neuroscience Philosophy Of Psychology Philosophy Of Science Psychology Analogical Reasoning Associative Learning Attention Behavioral Ecology Case-Based Learning Case-Based Reasoning Category & Concept Learning Category & Concept Representation Cognition & Education Consciousness Discourse & Text Comprehension Discovery-Based Learning Distributed Cognition Evolutionary Psychology Explanation Generation Imagery Inference & Decision Making 
Language Acquisition & Development Language Generation Language Understanding Lexical Semantics Memory Model-Based Reasoning Neuropsychology Perception & Recognition Phonology & Word Recognition Planning & Action Pragmatics & Communication Problem Solving Psycholinguistics Reasoning Heuristics Reasoning Under Time Constraints Rule-Based Reasoning Sentence Processing Situated Cognition Spatial Processing Syntactic Processing

From krista at torus.hut.fi Fri Jan 19 07:33:42 1996
From: krista at torus.hut.fi (Krista Lagus)
Date: Fri, 19 Jan 1996 14:33:42 +0200 (EET)
Subject: A novel SOM-based approach to free-text mining
Message-ID:

A novel SOM-based approach to free-text mining
19.1.1996 -- WEBSOM demo for newsgroup exploration

You are welcome to test the document exploration tool WEBSOM, which provides an ordered map of the information space: similar documents lie near each other on the map. This ordering helps in finding related documents once any interesting document is found. A demo for browsing the 4900 articles that have appeared in the Usenet newsgroup comp.ai.neural-nets since 19.6.1995 is currently available at http://websom.hut.fi/websom/

The WEBSOM home pages also contain an article describing the WEBSOM method and documentation of the demo. The demonstration requires a graphical WWW browser (such as Mosaic or Netscape), but the documentation can also be read with other browsers.
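For readers curious how such an ordered document map is trained, a self-organizing map in the spirit of WEBSOM can be sketched as below. This is a generic Kohonen SOM, not the WEBSOM system itself: the grid size, learning-rate schedule, and neighborhood width are illustrative assumptions, and the random vectors stand in for the document encodings WEBSOM actually uses.

```python
import numpy as np

def train_som(data, grid=(10, 10), n_iter=1000, lr0=0.5, sigma0=3.0, seed=0):
    """Train a 2-D self-organizing map (a sketch).
    data: (n_samples, dim) array; returns (grid_h, grid_w, dim) weights."""
    rng = np.random.default_rng(seed)
    h, w = grid
    dim = data.shape[1]
    weights = rng.random((h, w, dim))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(n_iter):
        x = data[rng.integers(len(data))]
        # find the best-matching unit (BMU) for this sample
        d = np.linalg.norm(weights - x, axis=-1)
        bmu = np.array(np.unravel_index(np.argmin(d), (h, w)))
        # decay learning rate and neighborhood width over time
        frac = t / n_iter
        lr = lr0 * (1 - frac)
        sigma = sigma0 * (1 - frac) + 0.5
        # Gaussian neighborhood on the grid, centered at the BMU
        g = np.exp(-np.sum((coords - bmu) ** 2, axis=-1) / (2 * sigma ** 2))
        weights += lr * g[..., None] * (x - weights)
    return weights

# Hypothetical usage: random vectors as stand-ins for document features.
rng = np.random.default_rng(1)
docs = rng.random((200, 3))
weights = train_som(docs, grid=(6, 6), n_iter=300)
```

After training, mapping each document to its best-matching unit places similar documents on nearby grid cells, which is what makes the map browsable in the demo described above.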
The WEBSOM team: Timo Honkela Samuel Kaski Krista Lagus Teuvo Kohonen Helsinki University of Technology Neural Networks Research Centre Rakentajanaukio 2C FIN-02150 Espoo Finland email: websom at websom.hut.fi From davec at cogs.susx.ac.uk Fri Jan 19 06:41:06 1996 From: davec at cogs.susx.ac.uk (Dave Cliff) Date: Fri, 19 Jan 1996 11:41:06 +0000 (GMT) Subject: MSc in Evolutionary and Adaptive Systems Message-ID: Please distribute: The University of Sussex School of Cognitive and Computing Sciences Graduate Research Centre (COGS GRC) Master of Science (MSc) Degree in EVOLUTIONARY AND ADAPTIVE SYSTEMS Applications are invited for entry in October 1996 to the Master of Science (MSc) degree in Evolutionary and Adaptive Systems. The degree can be taken in one year full-time, or part-time over two years. Students initially follow taught courses, as preparation for an individual research project leading to a Masters Thesis. This email gives a brief summary of the degree. For further details, see: World-wide web: http://www.cogs.susx.ac.uk/lab/adapt/easy_msc.html Anonymous ftp: ftp to cogs.susx.ac.uk cd to pub/users/davec get (in binary mode) easy_msc.ps.Z (69K) Or contact the address at the end of this email to request hard-copy. The MSc is sponsored in part by: BNR Europe Ltd, Hewlett-Packard, Millennium Interactive. BACKGROUND The past decade has seen the formation of new research fields, crossing traditional boundaries between biology, computer science, and cognitive science. Known variously as Artificial Life, Simulation of Adaptive Behavior, and Evolutionary Computation, the common theme is a focus on adaptation in natural and artificial systems. This research has the potential both to further our understanding of living and adaptive mechanisms in nature, and to construct artificial systems which show the same flexibility, robustness, and capacity for adaptation as is seen in animals. 
The international research community is sufficiently large to support five series of biennial conferences on various aspects of the field (ICGA, ALife, ECAL, SAB, PPSN), and there are currently three international journals (all produced by MIT Press) for archival publication of significant research findings. The Evolutionary and Adaptive Systems (EASy) Research Group at the University of Sussex School of Cognitive and Computing Sciences (COGS) is now widely recognised as one of the world's foremost groups of researchers in this area, with approximately 35 people actively engaged in research. Students on the EASy MSc will be involved in this lively interdisciplinary environment. At the end of the course, students will have been trained to a standard where they are capable of pursuing doctoral research in any area of Evolutionary and Adaptive Systems; and of applying those techniques in industry. INTERNATIONAL STEERING GROUP M. A. Arbib (Uni. of Southern California, USA); M. Bedau (Reed College, USA); R. D. Beer (Case Western Reserve Uni, USA); R. A. Brooks (MIT, USA); H. Cruse (Universitat Bielefeld, Germany); K. De Jong (George Mason Uni., USA); D. Dennett (Tufts, USA); D. Floreano (LCT, Italy); J. Hallam (Uni. of Edinburgh, UK); I. Horswill (North Western Uni., USA); L. P. Kaelbling (Brown Uni., USA); C. G. Langton (Santa Fe Inst., USA); M. J. Mataric (Brandeis Uni., USA); J.-A. Meyer (Ecole Normale Superieure, France); G. F. Miller (MPIPF, Germany); R. Pfeiffer (Uni. of Zurich, Switz.); T. S. Ray (ATR, Japan); C. Reynolds (Silicon Graphics Inc, USA); H. L. Roitblat (Uni. of Hawaii, USA); T. Smithers (Euskal Herriko Unibertsitatae, Spain); L. Steels (VUB, Belgium); P. Todd (MPIPF, Germany); B. H. Webb (Uni. of Nottingham, UK); S. W. Wilson (Rowland Inst., USA). 
FULL-TIME SYLLABUS Autumn Term (Oct--Dec) ---------------------- Four compulsory courses: Artificial Life Introduction to Computer Science Formal Computational Skills Adaptive Behavior in Animals and Robots Spring Term (Jan-Mar) --------------------- Two compulsory courses: Adaptive Systems Neural Networks Two options chosen from the following list (further options may become available; some options may not be available in some years): Simulation of Adaptive Behavior History and Philosophy of Adaptive Systems Development in Human and Artificial Life Computer Vision Philosophy of Cognitive Science Computational Neuroscience Summer (Apr-Aug) ---------------- Research project, which should include a substantial practical (programming) element, leading to submission of a 12000-word masters thesis. It is intended that there will be industrial involvement in some projects. SUSSEX FACULTY INVOLVED IN THE MSc Prof. H. G. Barrow; Prof. M. A. Boden; Dr. H. Buxton; R. Chrisley; Prof. A. J. Clark; Dr. D. Cliff; Dr. T. S. Collett; Dr. P. Husbands; Dr. D. Osorio; Dr. J. C. Rutkowska; Dr. D. S. Young. APPLICATION PROCEDURE Application forms are available from: Postgraduate Admissions Office Sussex House University of Sussex Brighton BN1 9RH England, U.K. Tel: +44 (0)1273 678412 Email: PG.Admissions at admin.susx.ac.uk Early application is encouraged: there are a limited number of places on the MSc. If you have any further queries about this degree, please contact: Dr D Cliff School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH England, U.K. 
Tel: +44 (0)1273 678754 Fax: +44 (0)1273 671320 E-mail: davec at cogs.susx.ac.uk From glinert at cs.rpi.edu Thu Jan 18 22:10:13 1996 From: glinert at cs.rpi.edu (glinert@cs.rpi.edu) Date: Thu, 18 Jan 96 22:10:13 EST Subject: ASSETS'96 AP + Reg Forms Message-ID: <9601190310.AA20547@colossus.cs.rpi.edu> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ ADVANCE PROGRAM AND REGISTRATION FORMS ASSETS'96 The Second International ACM/SIGCAPH Conference on Assistive Technologies April 11 - 12, 1996 Waterfront Centre Hotel Vancouver BC, Canada Sponsored by the ACM's Special Interest Group on Computers and the Physically Handicapped, ASSETS'96 is the second of a new series of conferences whose goal is to provide a forum where researchers and developers from academia and industry can meet to exchange ideas and report on new developments relating to computer-based systems to help people with impairments and disabilities of all kinds. This announcement includes 4 parts: o Message from the Program Chair o ASSETS'96 Advance Program o ASSETS'96 Registration Form o Hotel Information If you have any questions or would like further information, please consult the conference web pages at http://www.cs.rpi.edu/assets or contact the ASSETS'96 General Chair: Ephraim P. Glinert Dept. of Computer Science R. P. I. Troy, NY 12180 Phone: (518) 276 2657 E-mail: glinert at cs.rpi.edu /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ MESSAGE FROM THE PROGRAM CHAIR ============================== As Assets '96 Program Chair, I am pleased to extend a warm invitation to you to attend ASSETS'96, the 1996 ACM/SIGCAPH International Conference on Assistive Technologies! This is the second in an annual series of meetings whose goal is to provide a forum where researchers and developers from academia and industry can meet to exchange ideas and report on leading edge developments relating to computer based systems to help people with disabilities. 
This year, conference attendees will hear 21 exciting presentations on state-of-the-art approaches to vision impairments, motor impairments, hearing impairments, augmentative communication, special education needs, Internet access issues, and much more. All submissions have undergone a rigorous review process to assure that the program is of the high technical quality associated with the best ACM conferences. No more papers have been accepted than can comfortably be presented in a single track (no parallel sessions), with ample time included in the schedule for interaction among presenters and attendees. Come join us in beautiful Vancouver for a great time and a rewarding professional experience!

David L. Jaffe
VA Palo Alto Health Care System

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

ASSETS'96 ADVANCE PROGRAM
=========================

NOTE: For each paper, only the affiliation of the first author is given.

WED 4/10:
 6:00 pm - 9:00 pm   Registration + Reception

THU 4/11:
 8:00 am - 5:00 pm   Registration
 8:00 am - 9:00 am   Continental Breakfast
 8:45 am - 9:00 am   Welcome to ASSETS'96!
 9:00 am -10:00 am   KEYNOTE ADDRESS: David Rose, Center for Applied Special Technology (CAST)
10:00 am -10:30 am   Break
10:30 am -12:00 noon Papers I: The User Interface I
  "Touching and hearing GUIs: Design issues for the PC access system"
    C. Ramstein, O. Martial, A. Dufresne, M. Carignan, P. Chasse and P. Mabilleau
    Center for Information Technologies Innovation (Canada)
  "Enhancing scanning input with nonspeech sounds"
    S.A. Brewster, V. Raty and A. Kortkangas
    University of Glasgow (UK)
  "A study of input device manipulation difficulties"
    S. Trewin
    University of Edinburgh (UK)
12:00 - 1:00 pm      Lunch
 1:00 pm - 2:00 pm   SIGCAPH Business Meeting
 2:00 pm - 3:00 pm   Papers II: The World Wide Web
  "V-Lynx: Bringing the World Wide Web to sight-impaired users"
    M. Krell and D.
Cubranic University of Southern Mississippi (USA) "Computer generated 3-dimensional models of manual alphabet shapes for the World Wide Web" S. Geitz, T. Hanson and S. Maher Gallaudet University (USA) 3:00 pm - 3:30 pm Break 3:30 pm - 5:30 pm Papers III: Vision Impairments I "EMACSPEAK: Direct Speech Access" T.V. Raman Adobe Systems "The Pantobraille: Design and pre-evaluation of a single cell braille display based on a force feedback device" C. Ramstein Center for Information Technologies Innovation (Canada) "Interactive tactile display system: A support system for the visually impaired to recognize 3D objects" Y. Kawai and F. Tomita Electrotechnical Laboratory (Japan) "Audiograf: A diagram reader for the blind" A.R. Kennel Institut fur Informationssysteme (Switzerland) 6:00 pm - 9:00 pm Buffet Dinner 8:00 pm - 9:00 pm ASSETS'97 Organizational Meeting FRI 4/12: 8:00 am -12:00 noon Registration 8:00 am - 9:00 am Continental Breakfast 9:00 am -10:00 am Papers IV: Empirical Studies "EVA, an early vocalization analyzer: An empirical validity study of computer categorization" H.J. Fell, L.J. Ferrier, Z. Mooraj, E. Benson and D. Schneider Northeastern University (USA) "An approach to the evaluation of assistive technology" R.D. Stevens and A.D.N. Edwards University of York (UK) 10:00 am -10:30 am Break 10:30 am -12:00 noon Papers V: The User Interface II "Designing interface toolkit with dynamic selectable modality" S. Kawai, H. Aida and T. Saito University of Tokyo (Japan) "Multimodal input for computer access and augmentative communication" A. Smith, J. Dunaway, P. Demasco and D. Peischl A.I. duPont Institute / University of Delaware (USA) "The Keybowl: An ergonomically designed document processing device" P.J. McAlindon, K.M. Stanney and N.C. Silver University of Central Florida (USA) 12:00 - 1:00 pm Lunch 1:00 pm - 2:00 pm Panel Discussion "Designing the World Wide Web for people with disabilities" M.G. Paciello, Digital Equipment Corporation (USA) G.C. 
Vanderheiden, TRACE R&D Center (USA) L.F. Laux, US West Communications, Inc. (USA) P.R. McNally, University of Hertfordshire (UK) 2:00 pm - 3:00 pm Papers VI: Multimedia "A gesture recognition architecture for sign language" A. Braffort LIMSI/CNRS (France) "`Composibility': Widening participation in music making for people with disabilities via music software and controller solutions" T. Anderson and C. Smith University of York (UK) 3:00 pm - 3:30 pm Break 3:30 pm - 5:30 pm Papers VII: Vision Impairments II "A generic direct manipulation 3D auditory environment for hierarchical navigation in nonvisual interaction" A. Savidis, C. Stephanidis, A. Korte, K. Crispien and K. Fellbaum Foundation for Research and Technology - Hellas (Greece) "Improving the usability of speech-based interfaces for blind users" I.J. Pitt and A.D.N. Edwards University of York (UK) "TDraw: A computer-based tactile drawing tool for blind people" M. Kurze Free University of Berlin (Germany) "Development of dialogue systems for a mobility aid for blind people: Initial design and usability testing" T. Strothotte, S. Fritz, R. Michel, A. Raab, H. Petrie, V. Johnson, L. Reichert and A. Schalt Universitat Magdeburg (Germany) 5:30 pm Closing Remarks /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ ASSETS'96 REGISTRATION FORM =========================== This form is 2 pages long. Please print it out, complete both pages and mail it WITH FULL PAYMENT to: Ephraim P. Glinert, ASSETS'96 Dept. of Computer Science R. P. I. Troy, NY 12180 We're sorry, but e-mail registration forms and/or forms not accompanied by full payment (check or credit card information) CANNOT be accepted. 
CONFERENCE REGISTRATION FEES
                              EARLY      LATE / ON-SITE
--------------------------------------------------------
ACM member:                   $ 395      $ 475
Nonmember:                    $ 580      $ 660
Full time student:            $ 220      $ 270
--------------------------------------------------------

1: CONFERENCE REGISTRATION (from the table above):    $ ___________

2: SECOND BUFFET DINNER TICKET (Thursday, April 11):  $ 50   ___YES ___NO

3: SECOND COPY OF THE CONFERENCE PROCEEDINGS:         $ 30   ___YES ___NO

   TOTAL AMOUNT DUE:                                  $ ___________

NOTES:

o Registration fee includes: ADMISSION to all sessions, ONE COPY of the
  conference PROCEEDINGS, and the RECEPTION, 5 MEALS AND 4 BREAKS as
  shown in the Advance Program!!!

o To qualify for the EARLY RATE, your registration must be postmarked on
  or before WEDNESDAY, MARCH 27, 1996. If you are an ACM MEMBER, please
  supply your ID# __________________ . STUDENTS, please attach a clear
  photocopy of your valid student ID.

o CANCELLATIONS will be accepted up to FRIDAY, MARCH 15, 1996 subject to
  a 20% handling fee.

ASSETS'96 REGISTRATION FORM (continued)
=======================================

PERSONAL INFORMATION:

Name __________________________________________________________________________
Affiliation ___________________________________________________________________
Address _______________________________________________________________________
City _______________________________ State/Province __________________________
Country __________________________________ ZIP/Postal Code ___________________
E-mail ________________________________________________________________________
Phone ___________________________________ FAX ________________________________

***I have a disability for which I require special accommodation ___YES ___NO
   If YES, please attach a separate sheet with details. Thank you!

PAYMENT INFORMATION:

___CHECK in U.S. funds enclosed, made payable to "ACM ASSETS'96"

___Please charge $ ___________ to my CREDIT CARD:
   Card type: ___AMEX ___VISA ___MasterCard
   Card # _______________________________________ Expiration Date ___________
   Name On Card ______________________________________________________________
   Billing Address ___________________________________________________________
   Cardholder Signature ________________________________________ (ASSETS'96)

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

HOTEL INFORMATION
=================

All conference events will take place at the Waterfront Centre Hotel, a
member of the Canadian Pacific group. The hotel is located in downtown
Vancouver, next to the convention center and cruise ship terminal.

    Waterfront Centre Hotel
    900 Canada Place Way
    Vancouver, British Columbia V6C 3L5 CANADA
    Phone: (604) 691 1991 or (800) 441 1414
    FAX: (604) 691 1999

A block of rooms for attendees of ASSETS'96 has been set aside at
specially discounted rates:

    Single             $140 Canadian per night, plus applicable taxes
    Double/Twin        $160 Canadian per night, plus applicable taxes
    Waterfront Suite   $360 Canadian per night, plus applicable taxes

To reserve space at these prices, please call the hotel directly on or
before MARCH 15, 1996 and refer to "ACM ASSETS'96".

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

If you have any questions or would like further information, please
consult the conference web pages at http://www.cs.rpi.edu/assets or
contact the ASSETS'96 General Chair:

    Ephraim P. Glinert
    Dept. of Computer Science
    R. P. I.
    Troy, NY 12180
    Phone: (518) 276 2657
    E-mail: glinert at cs.rpi.edu

/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\

From trevor at mallet.Stanford.EDU Fri Jan 19 20:15:56 1996
From: trevor at mallet.Stanford.EDU (Trevor Hastie)
Date: Fri, 19 Jan 1996 17:15:56 -0800 (PST)
Subject: Regression and Classification course
Message-ID: <199601200115.RAA13247@mallet.Stanford.EDU>

        ************ SHORT COURSE ANNOUNCEMENT **********

             MODERN REGRESSION AND CLASSIFICATION

                      May 9-10, 1996
             Stanford Park Hotel, Menlo Park

        *************************************************

A two-day course on widely applicable statistical methods for modelling
and prediction, featuring

    Professor Trevor Hastie          Professor Robert Tibshirani
    Stanford University              University of Toronto

This two-day course covers modern tools for statistical prediction and
classification. We start from square one, with a review of linear
techniques for regression and classification, and then take attendees
through a tour of:

o Flexible regression techniques
o Classification and regression trees
o Neural networks
o Projection pursuit regression
o Nearest neighbor methods
o Learning vector quantization
o Wavelets
o Bootstrap and cross-validation

We will also illustrate software tools for implementing the methods. Our
objective is to provide attendees with the background and knowledge
necessary to apply these modern tools to their own real-world problems.
The course is geared for:

o Statisticians
o Financial analysts
o Industrial managers
o Medical and quantitative researchers
o Scientists
o Others interested in prediction and classification

Attendees should have an undergraduate degree in a quantitative field,
or knowledge and experience working in such a field.

For more details on the course, how to register, price, etc.:

o point your web browser to: http://playfair.stanford.edu/~trevor/mrc.html
  OR send a request by
o FAX to Prof. T.
Hastie at (415) 326-0854, OR
o email to trevor at playfair.stanford.edu

From payman at ebs330.eb.uah.edu Sat Jan 20 15:24:11 1996
From: payman at ebs330.eb.uah.edu (Payman Arabshahi)
Date: Sat, 20 Jan 96 14:24:11 CST
Subject: CIFEr'96 Oral & Poster Presentations
Message-ID: <9601202024.AA20275@ebs330>

                         IEEE/IAFE 1996

                          C I F E r

        IEEE/IAFE Conference on Computational Intelligence
                    for Financial Engineering

                       March 24-26, 1996
              Crowne Plaza Manhattan - New York City

       http://www.ieee.org/nnc/conferences/cfp/cifer96.html
                   [next update: February 1]

ORAL PRESENTATIONS
------------------

Financial Computing Environments
--------------------------------
"New Computational Architectures for Pricing Derivatives"
    R. Freedman, R. DiGiorgio
"CAFE: A Complex Adaptive Financial Environment"
    R. Even, B. Mishra
"Financial Trading Center at the University of Texas"
    P. Jaillet

Market Behavior Models
----------------------
"Neural Networks Prediction of Multivariate Financial Time Series:
 The Swiss Bond Case"
    T. Ankenbrand, M. Tomassini
"Bridging the Gap Between Nonlinearity Tests and the Efficient Market
 Hypothesis by Genetic Programming"
    S. Chen, C. Yeh
"Models of Market Behavior: Bringing Realistic Games to Market"
    S. Leven

Chaos and Time Series for Financial Systems
-------------------------------------------
"Impetus for Future Growth in the Globalization of Stock Investments:
 An Evidence from Joint Time Series and Chaos Analyses"
    M. Hoque
"Finding Time Series Among the Chaos: Stochastics, Deseasonalization,
 and Texture-Detection using Neural Nets"
    P. Werbos
"Financial Time Series Analysis and Forecasting Using Computer Simulation
 and Methods of Nonlinear Adaptive Control of Chaotic Systems"
    A. Fradhov, S. Fradhov, A. Markov, D. Oliva

Neural Nets for Financial Applications
--------------------------------------
"Experiments in Predicting the German Stock Index DAX with Density
 Estimating Neural Networks"
    D. Ormoneit, R. Neuneier
"Stock Market Prediction Using Different Neural Network Classification
 Architectures"
    C. Dagli, K. Schierholt
"Modelling Stock Return Sensitivities to Economic Factors with the
 Kalman Filter and Neural Networks"
    Y. Bentz, L. Boone, J. Connor

Fuzzy Logic for Financial Applications
--------------------------------------
"Computer Supported Determination of Bank Credit Conditions"
    S. Schwarze
"Fuzzy Logic and Genetic Algorithms for Financial Risk Management"
    T. Rubinson, R. Yager
"Foreign Exchange Rate Prediction by Fuzzy Inferencing on
 Deterministic Chaos"
    S. Ghoshray

Financial Data Mining
---------------------
"Stock Selection Combining Rule Generation and Risk/Reward Portfolio
 Optimization"
    C. Apte, S. Hong, A. King
"Data Driven Risk Management System"
    R. Grossman
"Intelligent Hybrid System for Data Mining"
    M. Hambaba

Simulation Techniques for Derivatives Pricing
---------------------------------------------
"Path Integral Monte Carlo Method and Maximum Entropy: A Complete
 Solution for the Derivative Valuation Problem"
    M. Makivic
"Problems with Monte Carlo Simulation in the Pricing of Contingent Claims"
    J. Molle, F. Zapatero
"Faster Simulation of the Prices of Derivative Securities"
    S. Paskov

Financial Time Series Prediction I
----------------------------------
"Automated Mathematical Modelling for Financial Time Series Prediction
 Using Fuzzy Logic, Dynamical Systems and Fractal Theory"
    O. Castillo, P. Melin
"Max-Min Optimal Investing"
    E. Ordentlich, T. Cover
"Building Long/Short Portfolios Using Rule Induction"
    G. John, P. Miller

Financial Time Series Prediction II
-----------------------------------
"Adaptive Rival Penalized Competitive Learning and Combined Linear
 Predictor with Application to Financial Investment"
    Y. Cheung, Z. Lai, L. Xu
"A Rule-based Neural Stock Trading Decision Support System"
    S. Chou, C. Chen, C. Yang, F. Lai
"The Gene Expression Messy Genetic Algorithm for Financial Applications"
    H. Kargupta, K. Buescher

Term Structure Modeling
-----------------------
"Analysing Shocks on the Interest Rates Structure with Kohonen Map"
    M. Cottrell, E. De Bodt, P. Gregoire, E. Henrion
"Interest Rate Futures: Estimation of Volatility Parameters in an
 Arbitrage-Free Framework"
    R. Bhar, C. Chiarella
"Prediction of Individual Bond Prices Via the TDM Model"
    T. Kariya, H. Tsuda

Financial Market Volatility
---------------------------
"Robust Estimation Analytics for Financial Risk Management"
    H. Green, R. Martin, M. Pearson
"Implied Volatility Functions: Empirical Tests"
    B. Dumas, J. Fleming, R. Whaley
"Evaluation of Common Models Used in the Estimation of Historical
 Volatility"
    J. Dalle Molle

Business Decision Tools
-----------------------
"Fuzzy Queries for Top-Management Succession Planning"
    T. Sutter, M. Schroder, R. Kruse, J. Gebhardt
"Density Based Clustering and Radial Basis Function Modeling to Generate
 Credit Card Fraud Scores"
    V. Hanagandi, A. Dhar, K. Buescher
"Nonlinear Analysis of Retail Performance"
    D. Vaccari

POSTER PRESENTATIONS
--------------------
"Fuzzy Set Methods for Uncertainty Representation in Risky Financial
 Decisions"
    R. Yager
"Trading Mechanisms and Return Volatility: Empirical Investigation on
 Shang Hai Stock Exchange Based on a Neural Network Model"
    Z. Lai, Y. Chuang, L. Xu
"Application of Fuzzy Regression Models to Predict Exchange Rates for
 Composite Currencies"
    S. Ghoshray
"Risk Management in an Uncertain Environment by Fuzzy Statistical Methods"
    S. Ghoshray
"Heuristic Techniques in Tax Structuring for Multinationals"
    D. Fatouros, G. Salkin, N. Christofides
"MLP and Fuzzy Approaches to Prediction of the SEC's Investigative Targets"
    E. Feroz, T. Kwon
"A Corporate Solvency Map Through Self-Organizing Neural Networks"
    Y. Alici
"The Applicability of Information Criteria for Neural Network
 Architecture Selection"
    C. Haefke, C. Helmenstein
"Stock Prediction Using Different Neural Network Classification
 Architectures"
    C. Dagli, K. Schierholt

--
Payman Arabshahi
Electronic Publicity Chair, CIFEr'96
Dept. of Electrical & Computer Eng.     Tel: (205) 895-6380
University of Alabama in Huntsville     Fax: (205) 895-6803
Huntsville, AL 35899                    payman at ebs330.eb.uah.edu
http://www.eb.uah.edu/ece

From alpaydin at boun.edu.tr Sun Jan 21 07:19:28 1996
From: alpaydin at boun.edu.tr (Ethem Alpaydin)
Date: Sun, 21 Jan 1996 15:19:28 +0300 (MEST)
Subject: CFP: TAINN'96, Conf on AI & NN (Istanbul/Turkey)
Message-ID:

Pre-S. We're sorry if you receive multiple copies of this message.

* Please forward * Please post * Please forward * Please post *

                        Call for Papers

                      TAINN'96, Istanbul
    5th Turkish Symposium on Artificial Intelligence and
                      Neural Networks

   To be held at Istanbul Technical University, Macka Campus
                     June 27 - 28, 1996

Jointly organized by Istanbul Technical University and Bogazici
University.

SPONSORS
Istanbul Technical University, Bogazici University, and Turkish
Scientific and Technical Research Council (Tubitak)

IN COOPERATION WITH
IEEE Computer Society Turkey Chapter, ACM SIGART Bilkent Chapter

SCOPE

Theory: Search, Knowledge Representation, Computational Learning Theory,
Complexity Theory, Dynamical Systems, Combinatorial Optimization,
Function Approximation, Estimation, Machine Learning, Machine Discovery,
Social and Philosophical Issues.

Algorithms and Architectures: Learning Algorithms, Multilayer
Perceptrons, Recurrent Networks, Decision Trees, Genetic and
Evolutionary Algorithms, Fuzzy Logic, Heuristic Search Methods,
Symbolic Reasoning.
Applications: Expert Systems, Natural Language Processing, Computer
Vision, Image Processing, Speech Recognition, Coding and Synthesis,
Handwriting Recognition, Time-Series Prediction, Medical Processing,
Financial Analysis, Music Processing, Control, Navigation, Path
Planning, Automated Theorem Proving, Symbolic Algebraic Computation.

Cognitive and Neuro Sciences: Human Learning, Memory and Language,
Perception, Psychophysics, Computational Models.

Implementation: Simulation Tools, Parallel Processing, Analog and
Digital VLSI, Neurocomputing Systems.

ORGANIZING COMMITTEE
E. Alpaydin (Bogazici), U. Cilingiroglu (ITU), F. Gurgen (Bogazici),
C. Guzelis (ITU)

TECHNICAL COMMITTEE
A.H. Abdel Wahab (Egypt), L. Akarun (Bogazici), L. Akin (Bogazici),
V. Akman (Bilkent), F. Alpaslan (METU), K. Altinel (Bogazici),
V. Atalay (METU), C. Bozsahin (METU), S. Canu (Compiegne, France),
E. Celebi (ITU), I. Cicekli (Bilkent), K. Ciliz (Bogazici),
D. Cohn (Harlequin, USA), D. Davenport (Bilkent), C. Dichev (BAS,
Bulgaria), A. Erkmen (METU), G. Ernst (Case Western Reserve, USA),
A. Fatholahzadeh (Supelec, France), Z. Ghahramani (Toronto, Canada),
H. Ghaziri (Beirut, Lebanon), C. Goknar (ITU), M. Guler (METU),
A. Guvenir (Bilkent), U. Halici (METU), M. Jabri (Sydney, Australia),
M. Jordan (MIT, USA), S. Kocabas (Tubitak MAM-ITU), S. Kuru (Bogazici),
K. Oflazer (Bilkent), R. Parikh (CUNY, USA), F. Masulli (Genova, Italy),
M. de la Maza (MIT, USA), R. Murray-Smith (DaimlerBenz, Germany),
Y. Ozturk (Ege), E. Oztemel (Tubitak MAM-SAU), F. Pekergin (EHEI,
France), B. Sankur (Bogazici), A.F. Savaci (ITU), C. Say (Bogazici),
L. Shastri (ICSI, USA), M. Sungur (METU), E. Tulunay (METU),
G. Ucoluk (METU), N. Yalabik (METU), W. Zadrozny (IBM, USA).
PAPER SUBMISSION

Submit three hard copies of full papers in English or Turkish, limited
to 10 pages in 12pt type, or poster papers limited to 4 pages, along
with 5 keywords, by March 1, 1996 to:

    YZYSA'96/TAINN'96
    Department of Computer Engineering
    Bogazici University
    Bebek TR-80815 Istanbul Turkey

Accepted papers will be printed in the proceedings.

The symposium will also host special sessions on certain subtopics.
Proposals by qualified individuals interested in chairing one of these
sessions are solicited. The goal is to provide a forum for researchers
to focus more closely on a certain subtopic and discuss important
issues. Session proposers have the following responsibilities:

o Arranging presentations by experts on the topic
o Moderating or leading the session
o Writing an overview of the topic and the session for the proceedings

Mail proposals by March 1, 1996 to:

    YZYSA'96/TAINN'96
    Faculty of Electrical and Electronics Engineering
    Istanbul Technical University
    Maslak TR-80626 Istanbul Turkey

MORE INFORMATION
Email: tainn96 at boun.edu.tr
URL: http://www.cmpe.boun.edu.tr/~tainn96

* Participation from Eastern European, Balkan, and Middle-East countries
  is especially solicited.

From harnad at cogsci.soton.ac.uk Sun Jan 21 17:03:00 1996
From: harnad at cogsci.soton.ac.uk (Stevan Harnad)
Date: Sun, 21 Jan 96 22:03:00 GMT
Subject: Learning/Representation: BBS Call for Commentators
Message-ID: <3879.9601212203@cogsci.ecs.soton.ac.uk>

Below is the abstract of a forthcoming target article on:

    COMPUTATION, REPRESENTATION AND LEARNING
    by Andy Clark and Chris Thornton

This article has been accepted for publication in Behavioral and Brain
Sciences (BBS), an international, interdisciplinary journal providing
Open Peer Commentary on important and controversial current research in
the biobehavioral and cognitive sciences. Commentators must be current
BBS Associates or nominated by a current BBS Associate.
To be considered as a commentator for this article, to suggest other
appropriate commentators, or for information about how to become a BBS
Associate, please send email to:

    bbs at soton.ac.uk

or write to:

    Behavioral and Brain Sciences
    Department of Psychology
    University of Southampton
    Highfield, Southampton SO17 1BJ UNITED KINGDOM

    http://www.princeton.edu/~harnad/bbs.html
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals
    ftp://ftp.princeton.edu/pub/harnad/BBS

To help us put together a balanced list of commentators, please give
some indication of the aspects of the topic on which you would bring
your areas of expertise to bear if you were selected as a commentator.
An electronic draft of the full text is available for inspection by
anonymous ftp (or gopher or world-wide-web) according to the
instructions that follow after the abstract.
____________________________________________________________________

    TRADING SPACES: COMPUTATION, REPRESENTATION AND
    THE LIMITS OF UNINFORMED LEARNING

    Andy Clark
    Philosophy/Neuroscience/Psychology Program,
    Washington University in St Louis,
    Campus Box 1073, St Louis, MO-63130, USA
    andy at twinearth.wustl.edu

    Chris Thornton
    Cognitive and Computing Sciences,
    University of Sussex, Brighton, BN1 9QH, UK
    Chris.Thornton at cogs.sussex.ac.uk

KEYWORDS: Learning, connectionism, statistics, representation, search

ABSTRACT: Some regularities enjoy only an attenuated existence in a body
of training data. These are regularities whose statistical visibility
depends on some systematic re-coding of the data. The space of possible
re-codings is, however, infinitely large - it is the space of applicable
Turing machines. As a result, mappings which pivot on such attenuated
regularities cannot, in general, be found by brute-force search. The
class of problems which present such mappings we call the class of
`type-2 problems'.
Type-1 problems, by contrast, present tractable problems of search
insofar as the relevant regularities can be found by sampling the input
data as originally coded. Type-2 problems, we suggest, are neither rare
nor pathological cases. They are rife in biologically realistic settings
and in domains ranging from simple animat behaviors to language
acquisition. Not only are such problems rife - they are standardly
solved! This presents a puzzle: how, given the statistical
intractability of these type-2 cases, does nature turn the trick? One
answer, which we do not pursue, is to suppose that evolution gifts us
with exactly the right set of re-coding biases so as to reduce specific
type-2 problems to (tractable) type-1 mappings. Such a heavy-duty
nativism is no doubt sometimes plausible. But we believe there are
other, more general mechanisms also at work. Such mechanisms provide
general (not task-specific) strategies for managing problems of type-2
complexity. Several such mechanisms are investigated. At the heart of
each is a fundamental ploy, viz. the maximal exploitation of states of
representation already achieved by prior (type-1) learning so as to
reduce the amount of subsequent computational search. Such exploitation
both characterises and helps make unitary sense of a diverse range of
mechanisms. These include simple incremental learning (Elman 1993),
modular connectionism (Jacobs, Jordan and Barto 1991), and the
developmental hypothesis of `representational redescription'
(Karmiloff-Smith 1979, 1992). In addition, the most distinctive features
of human cognition---language and culture---may themselves be viewed as
adaptations enabling this representation/computation trade-off to be
pursued on an even grander scale.
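The statistical-visibility point in the abstract can be made concrete with a toy sketch (ours, not the authors'): parity serves as a stand-in for a type-2 mapping, since no raw input variable carries any marginal signal about the target, while a single systematic re-coding makes the regularity fully visible to the same simple statistic.

```python
from itertools import product

# Toy stand-in for a "type-2" problem: 4-bit parity.  Every individual
# raw input bit is statistically independent of the target, so sampling
# the data "as originally coded" reveals nothing.
patterns = list(product([0, 1], repeat=4))
target = [sum(p) % 2 for p in patterns]

def cov(xs, ys):
    """Plain covariance over a finite population."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Raw coding: each input bit has zero covariance with the target.
raw_signal = [cov([p[i] for p in patterns], target) for i in range(4)]
print(raw_signal)            # [0.0, 0.0, 0.0, 0.0]

# One systematic re-coding (sum of the bits, mod 2) turns the hidden
# regularity into a maximally visible, trivially learnable one.
recoded = [sum(p) % 2 for p in patterns]
print(cov(recoded, target))  # 0.25 (equals the target's own variance)
```

The catch, as the abstract stresses, is that the space of candidate re-codings is unbounded, so finding the right one by brute-force search is hopeless in general.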
--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for
this article, an electronic draft is retrievable by anonymous ftp from
ftp.princeton.edu according to the instructions below (the filename is
bbs.clark). Please do not prepare a commentary on this draft. Just let
us know, after having inspected it, what relevant expertise you feel
you would bring to bear on what aspect of the article.

-------------------------------------------------------------

These files are also on the World Wide Web and the easiest way to
retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc.
Here are some of the URLs you can use to get to the BBS Archive:

    http://www.princeton.edu/~harnad/bbs.html
    http://cogsci.soton.ac.uk/~harnad/bbs.html
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals
    ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.clark
    ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.clark

To retrieve a file by ftp from an Internet site, type either:
    ftp ftp.princeton.edu
or
    ftp 128.112.128.1

When you are asked for your login, type:
    anonymous

Enter password as queried (your password is your actual userid:
yourlogin at yourhost.whatever.whatever - be sure to include the "@")

    cd /pub/harnad/BBS

To show the available files, type:
    ls

Next, retrieve the file you want with (for example):
    get bbs.clark

When you have the file(s) you want, type:
    quit

----------

Where the above procedure is not available there are two fileservers:
    ftpmail at decwrl.dec.com
and
    bitftp at pucc.bitnet

that will do the transfer for you. To one or the other of them, send
the following one-line message:

    help

for instructions (which will be similar to the above, but will be in
the form of a series of lines in an email message that ftpmail or
bitftp will then execute for you).
-------------------------------------------------------------

From harnad at cogsci.soton.ac.uk Sun Jan 21 17:06:14 1996
From: harnad at cogsci.soton.ac.uk (Stevan Harnad)
Date: Sun, 21 Jan 96 22:06:14 GMT
Subject: Directed Movement: BBS Call for Commentators
Message-ID: <3919.9601212206@cogsci.ecs.soton.ac.uk>

Below is the abstract of a forthcoming target article on:

    SPEED/ACCURACY TRADEOFFS IN TARGET-DIRECTED MOVEMENTS
    by Rejean Plamondon & Adel M. Alimi

This article has been accepted for publication in Behavioral and Brain
Sciences (BBS), an international, interdisciplinary journal providing
Open Peer Commentary on important and controversial current research in
the biobehavioral and cognitive sciences. Commentators must be current
BBS Associates or nominated by a current BBS Associate. To be considered
as a commentator for this article, to suggest other appropriate
commentators, or for information about how to become a BBS Associate,
please send email to:

    bbs at soton.ac.uk

or write to:

    Behavioral and Brain Sciences
    Department of Psychology
    University of Southampton
    Highfield, Southampton SO17 1BJ UNITED KINGDOM

    http://www.princeton.edu/~harnad/bbs.html
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals
    ftp://ftp.princeton.edu/pub/harnad/BBS

To help us put together a balanced list of commentators, please give
some indication of the aspects of the topic on which you would bring
your areas of expertise to bear if you were selected as a commentator.
An electronic draft of the full text is available for inspection by
anonymous ftp (or gopher or world-wide-web) according to the
instructions that follow after the abstract.
____________________________________________________________________

    SPEED/ACCURACY TRADEOFFS IN TARGET-DIRECTED MOVEMENTS

    Rejean Plamondon & Adel M. Alimi
    Ecole Polytechnique de Montreal
    Laboratoire Scribens
    Departement de genie electrique et de genie informatique
    C.P. 6079, Succ. "Centre-Ville"
    Montreal PQ H3C 3A7
    ha03 at music.mus.polymtl.ca

KEYWORDS: Speed/accuracy tradeoffs, Fitts' law, central limit theorem,
velocity profile, delta-lognormal law, quadratic law, power law.

ABSTRACT: This paper presents a critical survey of the scientific
literature dealing with the speed/accuracy tradeoffs of rapid aimed
movements. It highlights the numerous mathematical and theoretical
interpretations that have been proposed over recent decades from the
different studies that have been conducted on this topic. Although the
variety of points of view reflects the richness of the field and the
high degree of interest that such basic phenomena hold for the
understanding of human movements, the survey questions the validity of
many models with respect to their capacity to explain, consistently,
all the basic observations reported in the field. In this perspective,
this paper summarizes the kinematic theory of rapid human movements,
proposed recently by the first author, and analyzes its predictions in
the context of speed/accuracy tradeoffs. Numerous data available from
the scientific literature are reanalyzed and reinterpreted in the
context of this new theory. It is shown that the various aspects of the
speed/accuracy tradeoffs can be taken into account by considering the
asymptotic behavior of a large number of coupled linear systems, from
which a delta-lognormal law can be derived to describe the velocity
profile of an end-effector driven by a neuromuscular synergy. This law
not only describes velocity profiles almost perfectly, but it also
predicts the kinematic properties of simple rapid movements and
provides a consistent framework for the analysis of different types of
rapid movements using a quadratic (or power) law that emerges from the
model.
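The delta-lognormal law mentioned in the abstract expresses the end-effector velocity profile as the difference of two lognormal pulses, an agonist and an antagonist component: v(t) = D1*L1(t; t0, mu1, sigma1) - D2*L2(t; t0, mu2, sigma2). A minimal numerical sketch of that form follows; the parameter values are purely illustrative, not taken from the article.

```python
import math

def lognormal_pulse(t, t0, mu, sigma):
    """Lognormal time profile with onset t0 (zero for t <= t0)."""
    if t <= t0:
        return 0.0
    x = t - t0
    return (math.exp(-(math.log(x) - mu) ** 2 / (2 * sigma ** 2))
            / (x * sigma * math.sqrt(2 * math.pi)))

def delta_lognormal(t, D1, D2, t0, mu1, sigma1, mu2, sigma2):
    """v(t) = D1*L1(t) - D2*L2(t): agonist minus antagonist component."""
    return (D1 * lognormal_pulse(t, t0, mu1, sigma1)
            - D2 * lognormal_pulse(t, t0, mu2, sigma2))

# Illustrative (hypothetical) parameters for a single rapid stroke.
params = dict(D1=1.0, D2=0.25, t0=0.0,
              mu1=-1.0, sigma1=0.3, mu2=-0.6, sigma2=0.3)

ts = [i * 0.005 for i in range(1, 400)]        # 5 ms steps, up to ~2 s
vs = [delta_lognormal(t, **params) for t in ts]

# The profile shows the asymmetric bell shape characteristic of rapid
# aimed movements: near zero at onset, a single dominant peak, then a
# long slow decay back toward zero.
peak = max(vs)
t_peak = ts[vs.index(peak)]
```

Varying the command and timing parameters trades movement time against endpoint accuracy, which is where the speed/accuracy analysis in the target article enters.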
--------------------------------------------------------------

To help you decide whether you would be an appropriate commentator for
this article, an electronic draft is retrievable by anonymous ftp from
ftp.princeton.edu according to the instructions below (the filename is
bbs.glenberg). Please do not prepare a commentary on this draft. Just
let us know, after having inspected it, what relevant expertise you
feel you would bring to bear on what aspect of the article.

-------------------------------------------------------------

These files are also on the World Wide Web and the easiest way to
retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc.
Here are some of the URLs you can use to get to the BBS Archive:

    http://www.princeton.edu/~harnad/bbs.html
    http://cogsci.soton.ac.uk/~harnad/bbs.html
    gopher://gopher.princeton.edu:70/11/.libraries/.pujournals
    ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.glenberg
    ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.glenberg

To retrieve a file by ftp from an Internet site, type either:
    ftp ftp.princeton.edu
or
    ftp 128.112.128.1

When you are asked for your login, type:
    anonymous

Enter password as queried (your password is your actual userid:
yourlogin at yourhost.whatever.whatever - be sure to include the "@")

    cd /pub/harnad/BBS

To show the available files, type:
    ls

Next, retrieve the file you want with (for example):
    get bbs.glenberg

When you have the file(s) you want, type:
    quit

----------

Where the above procedure is not available there are two fileservers:
    ftpmail at decwrl.dec.com
and
    bitftp at pucc.bitnet

that will do the transfer for you. To one or the other of them, send
the following one-line message:

    help

for instructions (which will be similar to the above, but will be in
the form of a series of lines in an email message that ftpmail or
bitftp will then execute for you).
-------------------------------------------------------------

From Jari.Kangas at hut.fi Mon Jan 22 01:54:51 1996
From: Jari.Kangas at hut.fi (Jari Kangas)
Date: Mon, 22 Jan 1996 08:54:51 +0200
Subject: Location for SOM_PAK and LVQ_PAK has changed
Message-ID: <310334BB.167E@hut.fi>

Dear Neural Network Researchers,

Our ftp site cochlea.hut.fi, which hosts the SOM_PAK and LVQ_PAK program
packages, has been down for a while because of hardware errors. We have
now moved the public domain program packages to another location under
our research centre web page:

    http://nucleus.hut.fi/nnrc.html

From cns-cas at cns.bu.edu Mon Jan 22 10:47:23 1996
From: cns-cas at cns.bu.edu (CNS/CAS)
Date: Mon, 22 Jan 1996 10:47:23 -0500
Subject: B.U. Neural Systems Seminars
Message-ID: <199601221543.KAA05031@cns.bu.edu>

CENTER FOR ADAPTIVE SYSTEMS AND
DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS
BOSTON UNIVERSITY

January 26
  SELF-SIMILARITY IN NEURAL SIGNALS
  Professor Malvin Teich, Department of Electrical, Computer, and
  Systems Engineering, Boston University

February 2
  THE FUNCTIONAL ARCHITECTURE OF HUMAN VISUAL MOTION PERCEPTION
  Dr. Zhong-Lin Lu, Department of Cognitive Sciences and Institute for
  Mathematical Behavioral Sciences, University of California at Irvine

February 9
  DIVERSITY IN THE STRUCTURE AND FUNCTION OF HIPPOCAMPAL SYNAPSES
  Professor Kristen Harris, Division of Neuroscience, Children's
  Hospital and Program in Neuroscience, Harvard Medical School

February 16
  GROUP BEHAVIOR AND LEARNING IN AUTONOMOUS AGENTS
  Dr. Maja Mataric, Department of Computer Science, Brandeis University

March 15
  TOPOGRAPHY OF COGNITION: CELLULAR AND CIRCUIT BASIS OF WORKING MEMORY
  Dr.
Patricia Goldman-Rakic, Neurobiology Section, Yale University
  School of Medicine

March 22
  EMOTION, MEMORY, AND THE BRAIN
  Professor Joseph LeDoux, Center for Neural Science, New York University

April 5
  AUDITORY PROCESSING OF COMPLEX SOUNDS
  Professor Laurel Carney, Department of Biomedical Engineering,
  Boston University

April 19, 1:00-5:00 P.M.
  OPENING CELEBRATION FOR 677 BEACON STREET
  Invited lectures and refreshments to celebrate the new CNS building.
  Details to follow. Call 353-7857 for information.

All talks except April 19 are on Fridays at 2:00 PM in Room B02 (please
note the new lecture time!). Refreshments after the lecture in Room B01,
677 Beacon Street, Boston.

From mav at psy.uq.oz.au Mon Jan 22 23:04:45 1996
From: mav at psy.uq.oz.au (Simon Dennis)
Date: Tue, 23 Jan 1996 14:04:45 +1000 (EST)
Subject: Journal Launch: NOETICA, A Cognitive Science Forum
Message-ID:

Welcome to NOETICA: A COGNITIVE SCIENCE FORUM

We are pleased to announce the international launch of Noetica: A
Cognitive Science Forum - a World Wide Web journal devoted to the
interdisciplinary field of cognitive science. The journal is open for
submissions and can be accessed using browsers such as Netscape, Mosaic
and lynx at:

    http://psy.uq.edu.au/CogPsych/Noetica/

or alternatively you may access the mirror site at:

    http://www.cs.indiana.edu/Noetica/toc.html

If you would like to subscribe to the cogpsy mailing list (which
includes receiving a regular list of the new contents of Noetica), use
the subscription form under "To Subscribe" on the home page or email us
at noetica at psy.uq.edu.au. We would welcome any feedback you might
have on the journal and look forward to providing a timely, lively,
high-quality forum for the discussion of cognitive science issues.

Yours sincerely,
Simon Dennis
Cyril Latimer
Kate Stevens
Janet Wiles

TABLE OF CONTENTS

JOURNAL Volume 1 - 1995

Issue 1.
The Impact of the Environment on the Word Frequency and Null List Strength Effects in Recognition Memory by Simon Dennis

OPEN FORUM Volume 1 - 1995 The first three issues of volume one are papers presented at the Symposium on Connectionist Models and Psychology, which took place in January 1994 at the Department of Psychology, The University of Queensland, Australia.

Issue 1: The rationale for psychologists using (connectionist) models Introduction: Peter Slezak. Target paper: Cyril Latimer. Computer Modelling of Cognitive Processes Invited Commentary: Max Coltheart. Connectionist Modelling and Cognitive Psychology Sally Andrews. What Connectionist Models Can (and Cannot) Tell Us George Oliphant. Connectionism, Psychology and Science Commentary: Paul Bakker. Good models of humble origins Richard Heath. Mathematical models, connectionism and cognitive processes Ellen Watson. Definitions and Interpretations: Comments on the symposium on connectionist models and psychology

Issue 2. The correspondence between human and neural network performance Introduction: Cyril Latimer Review: Kate Stevens. The In(put)s and Out(put)s of Comparing Human and Network Performance: Some Ideas on Representations, Activations and Weights Review: Graeme Halford and William Wilson. How Far Do Neural Network Models Account for Human Reasoning? Commentary: Steven Phillips. Understanding as generalisation not just representation. Review: Simon Dennis. The Correspondence Between Psychological and Network Variables In Connectionist Models of Human Memory Commentaries: Andrew Heathcote. Connectionism: Implementation constraints for psychological models Phillip Sutcliffe. Contribution to discussion

Issue 3. Computational processes over distributed memories Introduction: Steven Schwartz Review: Janet Wiles. The Connectionist Modeler's Toolkit: A review of some basic processes over distributed memories Invited Commentary: Mike Johnson. On the search for metaphors Zoltan Schreter.
Distributed and Localist Representation in the Brain and in Connectionist Models

Issue 4. The Sydney Morning Herald Word Database by Simon Dennis

Issue 5. Introducing a new connectionist model: The spreading waves of activation network by Scott A. Gazzard
------------------------------------------------------------------------
Dr Simon Dennis
Address: Department of Psychology, The University of Queensland, Brisbane, QLD, 4072, Australia
Email: mav at psy.uq.edu.au
WWW: http://psy.uq.edu.au/~mav

From tgc at kcl.ac.uk Tue Jan 23 05:15:55 1996 From: tgc at kcl.ac.uk (Trevor Clarkson) Date: Tue, 23 Jan 1996 10:15:55 +0000 Subject: NEuroFuzzy Workshop in Prague, 16-18 April 1996 Message-ID:

A limited number of studentships of 450 ECU are still available from the NEuroNet programme for EU students only to attend the NEuroFuzzy workshop. The grant is a contribution to travel, accommodation and registration so that students will be able to participate in the technical sessions as well as the tutorials. The first set of studentships has been approved and letters have been sent to successful applicants. The remaining studentships will be awarded on a first-come, first-served basis to students who are registered full-time for a university degree. Applicants should send a short (half-page) biography which clearly states age, European nationality and place of study. This must be accompanied by a letter of support from their head of department confirming these details.
For details concerning these studentships only, contact the NEuroNet office: Ms Terhi Garner, NEuroNet, Department of Electronic and Electrical Engineering, King's College London, Strand, London WC2R 2LS Email: terhi.garner at kcl.ac.uk Fax: +44 171 873 2559
__________________________________________________________________________
Professor Trevor Clarkson
Director, NEuroNet (European Network of Excellence in Neural Networks)
Department of Electronic and Electrical Engineering
King's College London
Strand, London WC2R 2LS, UK
Tel: +44 171 873 2367/2388 Fax: +44 171 873 2559
WWW: http://www.neuronet.ph.kcl.ac.uk/ Email: tgc at kcl.ac.uk
__________________________________________________________________________

From cia at kamo.riken.go.jp Tue Jan 23 06:54:21 1996 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Tue, 23 Jan 96 20:54:21 +0900 Subject: New publications on blind signal processing Message-ID: <9601231154.AA14093@kamo.riken.go.jp>

Dear Colleagues: Below please find a list of papers devoted to blind separation of sources presented at NOLTA-95 and NIPS. Some of these papers are available on the web site: http://www.bip.riken.go.jp/absl/absl.html I am now preparing an extensive list of publications, reports and programs about blind signal processing (blind deconvolution, equalization, separation of sources, the cocktail-party problem, blind identification and blind medium structure identification). Any information about new publications on these subjects is welcome. Comments on our paper are also welcome. Andrew Cichocki
-------------------------------------------
Dr. A.
Cichocki, Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN E-mail: cia at kamo.riken.go.jp, URL: http://www.bip.riken.go.jp/absl/absl.html
---------------------------------------------------
List of papers of the Special Invited Session BLIND SEPARATION OF SOURCES - Information Processing in the Brain, NOLTA-95, Las Vegas, USA, December 10-14, 1995. (Chair and organizer: A. Cichocki) Proceedings 1995 International Symposium on Nonlinear Theory and Applications Vol. 1:
1. Shun-ichi AMARI, Andrzej CICHOCKI and Howard Hua YANG, "RECURRENT NEURAL NETWORKS FOR BLIND SEPARATION OF SOURCES", pp. 37-42.
2. Anthony J. BELL and Terrence J. SEJNOWSKI, "FAST BLIND SEPARATION BASED ON INFORMATION THEORY", pp. 43-47.
3. Adel BELOUCHRANI and Jean-Francois CARDOSO, "MAXIMUM LIKELIHOOD SOURCE SEPARATION BY THE EXPECTATION-MAXIMIZATION TECHNIQUE: DETERMINISTIC AND STOCHASTIC IMPLEMENTATION", pp. 49-53.
4. Jean-Francois CARDOSO, "THE INVARIANT APPROACH TO SOURCE SEPARATION", pp. 55-60.
5. Andrzej CICHOCKI, Wlodzimierz KASPRZAK and Shun-ichi AMARI, "MULTI-LAYER NEURAL NETWORKS WITH LOCAL ADAPTIVE LEARNING RULES FOR BLIND SEPARATION OF SOURCE SIGNALS", pp. 61-65.
6. Yannick DEVILLE and Laurence ANDRY, "APPLICATION OF BLIND SOURCE SEPARATION TECHNIQUES TO MULTI-TAG CONTACTLESS IDENTIFICATION SYSTEMS", pp. 73-78.
7. Jie HUANG, Noboru OHNISHI and Naboru SUGIE, "SOUND SEPARATION BASED ON PERCEPTUAL GROUPING OF SOUND SEGMENTS", pp. 67-72.
8. Christian JUTTEN and Jean-Francois CARDOSO, "SEPARATION OF SOURCES: REALLY BLIND?", pp. 79-84.
9. Kiyotoshi MATSUOKA and Mitsuru KAWAMOTO, "BLIND SIGNAL SEPARATION BASED ON A MUTUAL INFORMATION CRITERION", pp. 85-91.
10. Lieven De LATHAUWER, Pierre COMON, Bart De MOOR and Joos VANDEWALLE, "HIGHER-ORDER POWER METHOD - APPLICATION IN INDEPENDENT COMPONENT ANALYSIS", pp. 91-96.
11.
Jie ZHU, Xi-Ren CAO, and Ruey-Wen LIU, "BLIND SOURCE SEPARATION BASED ON OUTPUT INDEPENDENCE - THEORY AND IMPLEMENTATION", pp. 97-102.
----------------------------------------------------------------------------
Selected list of recent publications and reports about ICA
[1] S. Amari, A. Cichocki and H. H. Yang, "A new learning algorithm for blind signal separation", NIPS-95, Denver, Dec. 1995, vol. 8, MIT Press, 1996 (in print).
[2] S. Amari, A. Cichocki and H. H. Yang, "Recurrent neural networks for blind separation of sources", Nolta-95, Las Vegas, Dec. 10-15, 1995, vol. 1, pp. 37-42.
[3] A. Cichocki and L. Moszczynski, "A new learning algorithm for blind separation of sources", Electronics Letters, vol. 28, no. 21, 1992, pp. 1986-1987.
[4] A. Cichocki, R. Unbehauen and E. Rummert, "Robust learning algorithm for blind separation of signals", Electronics Letters, vol. 30, no. 17, 18th August 1994, pp. 1386-1387.
[5] A. Cichocki, R. Unbehauen, L. Moszczynski and E. Rummert, "A new on-line adaptive algorithm for blind separation of source signals", 1994 Int. Symposium on Artificial Neural Networks ISANN-94, Tainan, Taiwan, Dec. 1994, pp. 406-411.
[6] A. Cichocki, R. Bogner, L. Moszczynski, "Improved adaptive algorithms for blind separation of sources", Proc. of Conference on Electronic Circuits and Systems, KKTOiUE, Zakopane, Poland, Oct. 25-27, 1995, pp. 647-652.
[7] A. Cichocki, R. Unbehauen, "Robust neural networks with on-line learning for blind identification and blind separation of sources", submitted for publication to IEEE Transactions on Circuits and Systems (submitted June 1994).
[8] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley, 1994 (new revised and improved edition), pp. 461-471.
[9] A. Cichocki, W. Kasprzak, S. Amari, "Multi-layer neural networks with a local adaptive learning rule for blind separation of source signals", Nolta-95, Las Vegas, Dec. 10-15, 1995, vol. 1, pp. 61-66.
[10] A. Cichocki, S. Amari, M.
Adachi and W. Kasprzak, "Self-adaptive neural networks for blind separation of sources", ISCAS-96, May 1996, Atlanta, USA.
---------------------------------------------------------------

From S.Goonatilake at cs.ucl.ac.uk Tue Jan 23 12:30:48 1996 From: S.Goonatilake at cs.ucl.ac.uk (Suran Goonatilake) Date: Tue, 23 Jan 96 17:30:48 +0000 Subject: New Book - Intelligent Systems for Finance and Business Message-ID:

NEW BOOK ANNOUNCEMENT

INTELLIGENT SYSTEMS FOR FINANCE AND BUSINESS Suran Goonatilake and Philip Treleaven (Eds.) University College London

Intelligent systems are now beginning to be successfully applied in a variety of financial and business modelling tasks. These methods, which include genetic algorithms, neural networks, fuzzy systems and intelligent hybrid systems, are now being applied in credit evaluation, direct marketing, fraud detection, securities trading and portfolio management, and in many cases are outperforming traditional approaches. This book brings together leading professionals from the US, Europe and Asia who have developed intelligent systems to tackle some of the most challenging problems in finance and business. It covers applications of a large number of intelligent techniques: genetic algorithms, neural networks, fuzzy logic, expert systems, rule induction, genetic programming, case-based reasoning and intelligent hybrid systems. Case studies are drawn from a wide variety of business sectors. Applications that are detailed include: credit evaluation, direct marketing, insider dealing detection, insurance fraud detection, insurance claims processing, financial trading, portfolio management, and economic modelling.

CONTENTS
========
Foreword: Cathy Basch, Visa International
Chapter 1: Intelligent Systems for Finance and Business: An Overview. Suran Goonatilake, University College London, UK.
PART ONE: CREDIT SERVICES
Chapter 2: Intelligent Systems at American Express. Robert Didner, American Express
Chapter 3: Credit Evaluation using a Genetic Algorithm. R. Walker, E.W. Haasdijk and M.C. Gerrets, CAP-Volmac
Chapter 4: Neural Networks for Credit Scoring. David Leigh
PART TWO: DIRECT MARKETING
Chapter 5: Neural Networks for Data Driven Marketing. Peter Furness, AMS Management Systems
Chapter 6: Intelligent Systems for Market Segmentation and Local Market Planning. Richard Webber, CCN Marketing
PART THREE: FRAUD DETECTION AND INSURANCE
Chapter 7: A Fuzzy System for Detecting Anomalous Behaviors in Healthcare Provider Claims. Earl Cox, Metus Systems
Chapter 8: Insider Dealing Detection at the Toronto Stock Exchange. Steve Mott, Cognitive Systems
Chapter 9: EFD: Heuristic Statistics for Insurance Fraud Detection. J.A. Major and D.R. Riedinger, Travelers Insurance Co
Chapter 10: Expert Systems at Lloyd's of London. Colin Talbot, Lloyd's of London
PART FOUR: SECURITIES TRADING AND PORTFOLIO MANAGEMENT
Chapter 11: Neural Networks in Investment Management. A. N. Refenes, A. D. Zapranis, J.T. Connor and D.W. Bunn, London Business School
Chapter 12: Fuzzy Logic for Financial Trading. Shunichi Tano, Hitachi Labs
Chapter 13: Syntactic Pattern-Based Inductive Learning for Chart Analysis. Jae K. Lee, Hyun Soo Kim, KAIST.
PART FIVE: ECONOMIC MODELLING
Chapter 14: Genetic Programming for Economic Modelling. John Koza, Stanford University
Chapter 15: Modelling Artificial Stock Markets using Genetic Algorithms. Paul Tayler, Brunel University
Chapter 16: Intelligent, Self-Organising Models in Economics and Finance. Peter Allen, Cranfield Institute of Technology
PART SIX: IMPLEMENTING INTELLIGENT SYSTEMS
Chapter 17: Software for Intelligent Systems. Philip Treleaven, University College London
------------------------------------------------------------------
ISBN: 0471 94404 1 Publication Date: December 1995 Price: $55 (40 pounds sterling)
Publishers: (US) John Wiley & Sons Inc., 605 Third Avenue, New York, NY 10158-0012. Tel: 1-800-225-5945 (UK) John Wiley & Sons Ltd, Baffins Lane, Chichester, West Sussex, PO19 1UD, UK. Tel: 0800 243 407
-------------------------------------------------------------------
A World Wide Web page at: http://www.cs.ucl.ac.uk/staff/S.Goonatilake/busbook.html

From john at dcs.rhbnc.ac.uk Tue Jan 23 10:35:55 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Tue, 23 Jan 96 15:35:55 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199601231535.PAA06083@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real-valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for most of the titles.

*** Please note that the location of the files has been changed, so any copies you have of the previous instructions should be discarded. The new location and instructions are given at the end of the list. ***
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-001: ---------------------------------------- On digital nondeterminism by Felipe Cucker, Universitat Pompeu Fabra, Spain Martin Matamala, Universidad de Chile, Chile No abstract available. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-002: ---------------------------------------- Complexity and Real Computation: A Manifesto by Lenore Blum, International Computer Science Institute, Berkeley, USA Felipe Cucker, Universitat Pompeu Fabra, Spain Mike Shub, IBM T.J. Watson Research Center, New York, USA Steve Smale, University of California, USA Abstract: Finding a natural meeting ground between the highly developed complexity theory of computer science -- with its historical roots in logic and the discrete mathematics of the integers -- and the traditional domain of real computation, the more eclectic less foundational field of numerical analysis -- with its rich history and longstanding traditions in the continuous mathematics of analysis -- presents a compelling challenge. Here we illustrate the issues and pose our perspective toward resolution. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-003: ---------------------------------------- Models for Parallel Computation with Real Numbers by F. Cucker, Universitat Pompeu Fabra, Spain J.L. Montana, Universidad de Cantabria, Spain L.M. Pardo, Universidad de Cantabria, Spain Abstract: This paper deals with two models for parallel computations over the reals. On the one hand, a generalization of the real Turing machine obtained by assembling a polynomial number of such machines that work together in polylogarithmic time (more or less like a PRAM in the Boolean setting) and, on the other hand, a model consisting of families of algebraic circuits generated in some uniform way. 
We show that the classes defined by these two models are related by a chain of inclusions and that some of these inclusions are strict.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-004:
----------------------------------------
Nash Trees and Nash Complexity by Felipe Cucker, Universitat Pompeu Fabra, Spain Thomas Lickteig, Universit\"at Bonn, Germany Abstract: Numerical analysis computational problems such as Cholesky decomposition of a positive definite matrix, or unitary transformation of a complex matrix into upper triangular form (for instance by the Householder algorithm), require algorithms that also use ``non-arithmetical'' operations such as square roots. The aim of this paper is twofold: 1. Generalizing the notions of arithmetical semi-algebraic decision trees and computation trees (that is, with outputs) we suggest a definition of Nash trees and Nash straight line programs (SLPs), necessary to formalize and analyse numerical analysis algorithms and their complexity as mentioned above. These trees and SLPs have a Nash operational signature $N^R$ over a real closed field $R$. Based on the sheaf of abstract Nash functions over the real spectrum of a ring as introduced by M.-F. Roy, we propose a category $\mathrm{nash}_R$ of partial (homogeneous) $N^R$-algebras in which these Nash operations make sense in a natural way. 2. Using this framework, in particular the execution of $N^R$-SLPs in appropriate $N^R$-algebras, we extend the degree-gradient lower bound to Nash decision complexity of the membership problem of co-one-dimensional semi-algebraic subsets of open semi-algebraic subsets.
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-005: ---------------------------------------- On the computational power and super-Turing capabilities of dynamical systems by Olivier Bournez, Department LIP, ENS-Lyon, France Michel Cosnard, Department LIP, ENS-Lyon, France Abstract: We explore the simulation and computational capabilities of dynamical systems. We first introduce and compare several notions of simulation between discrete systems. We give a general framework that allows dynamical systems to be considered as computational machines. We introduce a new discrete model of computation: the analog automaton model. We determine the computational power of this model and prove that it does have super-Turing capabilities. We then prove that many very simple dynamical systems from the literature are actually able to simulate analog automata. From this result we deduce that many dynamical systems have intrinsically super-Turing capabilities. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-006: ---------------------------------------- Finite Sample Size Results for Robust Model Selection; Application to Neural Networks by Joel Ratsaby, Technion, Israel Ronny Meir, Technion, Israel Abstract: The problem of model selection in the face of finite sample size is considered within the framework of statistical decision theory. Focusing on the special case of regression, we introduce a model selection criterion which is shown to be robust in the sense that, with high confidence, even for a finite sample size it selects the best model. Our derivation is based on uniform convergence methods, augmented by results from the theory of function approximation, which permit us to make definite probabilistic statements about the finite sample behavior. These results stand in contrast to classical approaches, which can only guarantee the asymptotic optimality of the choice. 
The criterion is demonstrated for the problem of model selection in feedforward neural networks.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-007:
----------------------------------------
On the structure of $\npoly{C}$ by Gregorio Malajovich, Klaus Meer, RWTH Aachen, Germany Abstract: This paper deals with complexity classes $\poly{C}$ and $\npoly{C}$, as they were introduced over the complex numbers by Blum, Shub and Smale. Under the assumption $\poly{C} \ne \npoly{C}$ the existence of non-complete problems in $\npoly{C}$, not belonging to $\poly{C}$, is established.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-008:
----------------------------------------
Dynamic Recurrent Neural Networks: a Dynamical Analysis by Jean-Philippe DRAYE, Davor PAVISIC, Facult\'{e} Polytechnique de Mons, Belgium, Guy CHERON, Ga\"{e}tan LIBERT, University of Brussels, Belgium Abstract: In this paper, we explore the dynamical features of a neural network model which presents two types of adaptive parameters: the classical weights between the units and the time constants associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. In order to achieve this, we study the effect of the statistical distribution of the weights and of the time constants on the network dynamics and we make a statistical analysis of the neural transformation. We examine the network power spectra (to draw some conclusions about the frequential behavior of the network) and we compute the stability regions to explore the stability of the model. We show that the network is sensitive to variations of the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks).
Nevertheless, our results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring powerful new features to the field of neural computing.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-009:
----------------------------------------
Scale-sensitive Dimensions, Uniform Convergence, and Learnability by Noga Alon, Tel Aviv University (ISRAEL), Shai Ben-David, Technion (ISRAEL), Nicol\`o Cesa-Bianchi, DSI, Universit\`a di Milano, David Haussler, UC Santa Cruz (USA) Abstract: Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Gin\'e, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or ``agnostic'') framework. Furthermore, we show a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class.
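[Editor's illustration, not part of the report above.] The Vapnik-Chervonenkis dimension that NC-TR-96-009 generalizes has a simple brute-force reading: a class shatters a point set when it realizes every possible 0/1 labelling of the points. A minimal sketch of that definition for finite classes:

```python
from itertools import product

def shatters(functions, points):
    """Return True if `functions` realizes every 0/1 labelling of `points`.

    `functions` is an iterable of callables mapping a point to 0 or 1;
    the set `points` is shattered when all 2^|points| dichotomies occur.
    """
    realized = {tuple(f(x) for x in points) for f in functions}
    return all(lab in realized for lab in product((0, 1), repeat=len(points)))

# Threshold functions x >= t on the real line shatter any single point,
# but no pair x1 < x2: the labelling (1, 0) can never be realized,
# so the VC dimension of this class is 1.
thresholds = [lambda x, t=t: int(x >= t) for t in range(-5, 6)]
print(shatters(thresholds, [0.5]))        # True
print(shatters(thresholds, [0.5, 2.5]))   # False
```

The scale-sensitive dimensions discussed in the abstract refine this idea for real-valued classes by requiring the dichotomies to be realized with a margin.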
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-010: ---------------------------------------- On-line Prediction and Conversion Strategies by Nicol\`o Cesa-Bianchi, DSI, Universit\`a di Milano, Yoav Freund, AT\&T Bell Laboratories, David P.\ Helmbold, University of California, Santa Cruz, Manfred K.\ Warmuth, University of California, Santa Cruz Abstract: We study the problem of deterministically predicting boolean values by combining the boolean predictions of several experts. Previous on-line algorithms for this problem predict with the weighted majority of the experts' predictions. These algorithms give each expert an exponential weight $\beta^m$ where $\beta$ is a constant in $[0,1)$ and $m$ is the number of mistakes made by the expert in the past. We show that it is better to use sums of binomials as weights. In particular, we present a deterministic algorithm using binomial weights that has a better worst case mistake bound than the best deterministic algorithm using exponential weights. The binomial weights naturally arise from a version space argument. We also show how both exponential and binomial weighting schemes can be used to make prediction algorithms robust against noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-011: ---------------------------------------- Worst-case Quadratic Loss Bounds for Prediction Using Linear Functions and Gradient Descent by Nicol\`o Cesa-Bianchi, DSI, Universit\`a di Milano, Philip M. Long, Duke University, Manfred K. Warmuth, UC Santa Cruz Abstract: In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We show worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of {\it a priori} information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. 
Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-012:
----------------------------------------
Using Bayesian Methods for Avoiding Overfitting and for Ranking Networks in Multilayer Perceptrons Learning by Michel de Bollivier, EC Joint Research Centre, Italy, Domenico Perrotta, EC Joint Research Centre and Ecole Normale Sup\'{e}rieure de Lyon, France Abstract: This work is an experimental attempt to determine whether the Bayesian paradigm could improve Multi-Layer Perceptron (MLP) learning methods. In particular, we experiment here with the paradigm developed by D. MacKay (1992). The paper points out the main or critical points of MacKay's work and introduces very practical aspects of Bayesian MLPs, with future applications in mind. Then, Bayesian MLPs are used on three public classification databases and compared to other methods.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-013:
----------------------------------------
Lower Bounds for the Computational Power of Networks of Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We investigate the computational power of a formal model for networks of spiking neurons.
It is shown that simple operations on phase-differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real-valued inputs. We also show that relatively weak basic assumptions about the response- and threshold-functions of the spiking neurons are sufficient in order to employ them for such computations.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-014:
----------------------------------------
Analog Computations on Networks of Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We characterize the class of functions with real-valued input and output which can be computed by networks of spiking neurons with piecewise linear response- and threshold-functions and unlimited timing precision. We show that this class coincides with the class of functions computable by recurrent analog neural nets with piecewise linear activation functions, and with the class of functions computable on a certain type of random access machine (N-RAM) which we introduce in this article. This result is proven via constructive real-time simulations. Hence it provides in particular a convenient method for constructing networks of spiking neurons that compute a given real-valued function $f$: it now suffices to write a program for computing $f$ on an N-RAM; that program can be ``automatically'' transformed into an equivalent network of spiking neurons (by our simulation result).
Finally, one learns from the results of this paper that certain very simple piecewise linear response- and threshold-functions for spiking neurons are {\it universal}, in the sense that neurons with these particular response- and threshold-functions can simulate networks of spiking neurons with {\it arbitrary} piecewise linear response- and threshold-functions. The results of this paper also show that certain very simple piecewise linear activation functions are in a corresponding sense universal for recurrent analog neural nets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-015: ---------------------------------------- Vapnik-Chervonenkis Dimension of Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We will survey in this article the most important known bounds for the VC-dimension of neural nets that consist of linear threshold gates (section 2) and for the case of neural nets with real-valued activation functions (section 3). In section 4 we discuss a generalization of the VC-dimension for neural nets with non-boolean network-output. With regard to a discussion of the VC-dimension of models for networks of {\it spiking neurons} we refer to Maass (1994). ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-016: ---------------------------------------- On the Computational Power of Noisy Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: This article provides some first results about the computational power of neural networks that are based on a neuron model which is acceptable to many neurobiologists as being reasonably realistic for a biological neuron. Biological neurons communicate via spike-trains, i.e. via sequences of stereotyped pulses (``spikes'') that encode information in their time-differences (``temporal coding''). 
In addition, it is well known that biological neurons are quite ``noisy'', i.e. the precise times when they ``fire'' (and thereby issue a spike) depend not only on the incoming spike-trains, but also on various types of ``noise''. It has remained unknown whether one can in principle carry out reliable digital computations with noisy spiking neurons. This article presents rigorous constructions for simulating in real-time arbitrary given boolean circuits and finite automata with arbitrarily high reliability by networks of noisy spiking neurons. In addition we show that with the help of ``shunting inhibition'' such networks can simulate in real-time any McCulloch-Pitts neuron (or ``threshold gate''), and therefore any multilayer perceptron (or ``threshold circuit'') in a reliable manner. These constructions provide a possible explanation for the fact that biological neural systems can carry out quite complex computations within 100 msec. It turns out that the assumptions that these constructions require about the shape of the EPSPs and the behaviour of the noise are surprisingly weak.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-017:
----------------------------------------
Die Komplexit\"at des Rechnens und Lernens mit neuronalen Netzen -- Ein Kurzf\"uhrer (The Complexity of Computing and Learning with Neural Networks -- A Short Guide) by Michael Schmitt, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: This is a very short guide to the basic concepts of the theory of computing and learning with neural networks, with emphasis on computational complexity. Fundamental results on circuit complexity of neural networks and PAC-learning are mentioned but no proofs are given. A list of references to the most important and most recent books in the field is included. The report was written in German on the occasion of a course given at the Autumn School in Connectionism and Neural Networks ``HeKoNN 95'' in M\"unster.
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-018: ---------------------------------------- Tracking the best disjunction by Peter Auer, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Manfred Warmuth, University of California at Santa Cruz, USA Abstract: Littlestone developed a simple deterministic on-line learning algorithm for learning $k$-literal disjunctions. This algorithm (called Winnow) keeps one weight for each of the $n$ variables and does multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm for the case when the disjunction may change over time. In this case a possible target {\em disjunction schedule} $T$ is a sequence of disjunctions (one per trial) and the {\em shift size} is the total number of literals that are added/removed from the disjunctions as one progresses through the sequence. We develop an algorithm that predicts nearly as well as the best disjunction schedule for an arbitrary sequence of examples. This algorithm, which allows us to track the predictions of the best disjunction, is hardly more complex than the original version. However, the amortized analysis needed for obtaining worst-case mistake bounds requires new techniques. In some cases our lower bounds show that the upper bounds of our algorithm have the right constant in front of the leading term in the mistake bound and almost the right constant in front of the second leading term. By combining the tracking capability with existing applications of Winnow we are able to enhance these applications to the shifting case as well.
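The multiplicative-update rule mentioned in this abstract is easy to make concrete. Below is a minimal sketch of the basic deterministic Winnow2 rule (promote or demote the weights of the active variables by a factor alpha on a mistake), assuming the standard formulation; the randomized and tracking variants developed in the report are not reproduced, and the function names are ours, not the report's.

```python
def winnow_predict(w, x, theta):
    # Fire iff the summed weights of the active (x_i = 1) variables reach theta.
    return 1 if sum(wi for wi, xi in zip(w, x) if xi) >= theta else 0

def winnow_update(w, x, y, y_hat, alpha=2.0):
    # Multiplicative update on a mistake: promote active weights on a missed
    # positive, demote them on a false positive; correct trials leave w alone.
    if y_hat == y:
        return w
    factor = alpha if y == 1 else 1.0 / alpha
    return [wi * factor if xi else wi for wi, xi in zip(w, x)]
```

For a $k$-literal disjunction over $n$ Boolean variables one typically starts with all weights at 1 and theta = n; the number of mistakes then grows only logarithmically with $n$, which is the property the applications above exploit.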
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-019: ---------------------------------------- Learning Nested Differences in the Presence of Malicious Noise by Peter Auer, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We investigate the learnability of nested differences of intersection-closed classes in the presence of malicious noise. Examples of intersection-closed classes include axis-parallel rectangles, monomials, linear sub-spaces, and so forth. We present an on-line algorithm whose mistake bound is optimal in the sense that there are concept classes for which each learning algorithm (using nested differences as hypotheses) can be forced to make at least that many mistakes. We also present an algorithm for learning in the PAC model with malicious noise. Surprisingly enough, the noise rate tolerable by these algorithms does not depend on the complexity of the target class but depends only on the complexity of the underlying intersection-closed class. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-020: ---------------------------------------- Characterizing the Learnability of Kolmogorov Easy Circuit Expressions by Jos\'e L. Balc\'azar, Universitat Polit\'ecnica de Catalunya, Spain Harry Buhrman, Centrum voor Wiskunde en Informatica, the Netherlands Abstract: We show that Kolmogorov easy circuit expressions can be learned with membership queries in polynomial time if and only if every NE-predicate is E-solvable. Moreover, we show that the previously known algorithm, which uses an oracle in NP, is optimal in some relativized world.
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-021: ---------------------------------------- T2 - Computing optimal 2-level decision trees by Peter Auer, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria *** Note: This is a C program available in tarred (compressed) format. Description: This is a short description of the T2 program discussed in P. Auer, R.C. Holte, and W. Maass. Theory and applications of agnostic PAC-learning with small decision trees. In Proc. 7th Int. Machine Learning Conf., Tahoe City (USA), 1995. Please see the paper for a description of the algorithm and a discussion of the results. (There is a typo in the paper in Table 2: The Sky2 value for HE is 89.0% instead of 91.0%.) T2 calculates optimal decision trees up to depth 2. T2 accepts exactly the same input as C4.5, consisting of a name-file, a data-file, and an optional test-file. The output of T2 is a decision tree similar to the decision trees of C4.5, but there are some differences. T2 uses two kinds of decision nodes: (1) discrete splits on a discrete attribute, where the node has as many branches as there are possible attribute values, and (2) interval splits on continuous attributes. A node which performs an interval split divides the real line into intervals and has as many branches as there are intervals. The number of intervals is restricted to be (a) at most MAXINTERVALS if all the branches of the decision node lead to leaves, and (b) at most 2 otherwise. MAXINTERVALS can be set by the user. The attribute value ``unknown'' is treated as a special attribute value. Each decision node (discrete or continuous) has an additional branch which takes care of unknown attribute values. T2 builds the decision tree satisfying the above constraints and minimizing the number of misclassifications of cases in the data-file.
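As an illustration of the node semantics just described, here is a hypothetical sketch of how an interval-split node with an extra ``unknown'' branch might route a case. It is not the actual T2 source; the function name and the branch numbering are our own choices.

```python
import bisect

def route(cutpoints, value):
    # Hypothetical interval-split node: `cutpoints` are the sorted thresholds
    # dividing the real line into len(cutpoints) + 1 intervals (branches
    # 0 .. len(cutpoints)); one extra branch is reserved for the special
    # attribute value ``unknown'', represented here as None.
    if value is None:
        return len(cutpoints) + 1
    return bisect.bisect_right(cutpoints, value)
```

With cutpoints [2.5, 7.0], values below 2.5 take branch 0, values in [2.5, 7.0) take branch 1, values from 7.0 upward take branch 2, and an unknown value takes branch 3.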
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-022: ---------------------------------------- Efficient Learning with Virtual Threshold Gates by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Manfred Warmuth, University of California, Santa Cruz, USA Abstract: We reduce learning simple geometric concept classes to learning disjunctions over exponentially many variables. We then apply an on-line algorithm called Winnow whose number of prediction mistakes grows only logarithmically with the number of variables. The hypotheses of Winnow are linear threshold functions with one weight per variable. We find ways to keep the exponentially many weights of Winnow implicitly so that the time for the algorithm to compute a prediction and update its ``virtual'' weights is polynomial. Our method can be used to learn $d$-dimensional axis-parallel boxes when $d$ is variable, and unions of $d$-dimensional axis-parallel boxes when $d$ is constant. The worst-case number of mistakes of our algorithms for the above classes is optimal to within a constant factor, and our algorithms inherit the noise robustness of Winnow. We think that other on-line algorithms with multiplicative weight updates whose loss bounds grow logarithmically with the dimension are amenable to our methods. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-023: ---------------------------------------- On learnability and predicate logic (Extended Abstract) by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Gy. Tur\'{a}n, University of Illinois at Chicago, USA No abstract available. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-024: ---------------------------------------- Lower Bounds on Identification Criteria for Perceptron-like Learning Rules by Michael Schmitt, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: The topic of this paper is the computational complexity of identifying neural weights using Perceptron-like learning rules. By Perceptron-like rules we understand instructions to modify weight vectors by adding or subtracting constant values after the occurrence of an error. By computational complexity we mean worst-case bounds on the number of correction steps. The training examples are taken from Boolean functions computable by McCulloch-Pitts neurons. Exact identification by the Perceptron rule is known to take exponential time in the worst case. Therefore, we define identification criteria that do not require that the learning process exactly identifies the function being learned: PAC identification, order identification, and sign identification. Our results show that Perceptron-like learning rules cannot satisfy any of these criteria when the number of correction steps is to be bounded by a polynomial. This indicates that even by considerably lowering one's demands on the learning process one cannot prevent Perceptron rules from being computationally infeasible. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-025: ---------------------------------------- On Methods to Keep Learning Away from Intractability (Extended abstract) by Michael Schmitt, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We investigate the complexity of learning from restricted sets of training examples. With the intention of making learning easier we introduce two types of restrictions that describe the permitted training examples. The strength of the restrictions can be tuned by choosing specific parameters.
We ask how strictly their values must be limited to turn NP-complete learning problems into polynomial-time solvable ones. Results are presented for Perceptrons with binary and arbitrary weights. We show that there exist bounds for the parameters that sharply separate efficiently solvable from intractable learning problems. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-026: ---------------------------------------- Accuracy of techniques for the logical analysis of data by Martin Anthony, London School of Economics, UK Abstract: We analyse the generalisation accuracy of standard techniques for the `logical analysis of data', within a probabilistic framework. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-027: ---------------------------------------- Interpolation and Learning in Artificial Neural Networks by Martin Anthony, London School of Economics, UK No abstract available. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-028: ---------------------------------------- Threshold Functions, Decision Lists, and the Representation of Boolean Functions by Martin Anthony, London School of Economics, UK Abstract: We describe a geometrically-motivated technique for data classification. Given a finite set of points in Euclidean space, each classified according to some target classification, we use a hyperplane to separate off a set of points all having the same classification; these points are then deleted from the database and the procedure is iterated until no points remain. We explain how such an iterative `chopping procedure' leads to a type of decision list classification of the data points and to a classification of the data by means of a linear threshold artificial neural network with one hidden layer.
In the case where the data points are all the $2^n$ vertices of the Boolean hypercube, the technique produces a neural network representation of Boolean functions differing from the obvious one based on a function's disjunctive normal form. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-029: ---------------------------------------- Learning of Depth Two Neural Nets with Constant Fan-in at the Hidden Nodes by Peter Auer, University of California, Santa Cruz, USA, Stephen Kwek, University of Illinois, USA, Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Manfred K. Warmuth, University of California, Santa Cruz, USA Abstract: We present algorithms for learning depth two neural networks where the hidden nodes are threshold gates with constant fan-in. The transfer function of the output node might be more general: in addition to the threshold function we have results for the logistic and the linear transfer function at the output node. We give batch and on-line learning algorithms for these classes of neural networks and prove bounds on the performance of our algorithms. The batch algorithms work for real-valued inputs whereas the on-line algorithms require that the inputs are discretized. The hypotheses of our algorithms are essentially also neural networks of depth two. However, their number of hidden nodes might be much larger than the number of hidden nodes of the neural network that has to be learned. Our algorithms can handle a large number of hidden nodes since they rely on multiplicative weight updates at the output node, and the performance of these algorithms scales only logarithmically with the number of hidden nodes used.
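The class of networks considered in this last abstract is small enough to evaluate in a few lines. The sketch below (our own names and representation, not the authors' code) evaluates a depth-two net whose hidden nodes are threshold gates of constant fan-in feeding a threshold output node; the learning algorithms of the report are not reproduced.

```python
def threshold_gate(weights, bias, inputs):
    # McCulloch-Pitts unit: output 1 iff the weighted sum reaches the bias.
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= bias else 0

def depth_two_net(hidden, out_weights, out_bias, x):
    # `hidden` is a list of (input_indices, weights, bias) triples; the short
    # index tuple models the constant fan-in of each hidden gate.
    h = [threshold_gate(w, b, [x[i] for i in idx]) for idx, w, b in hidden]
    return threshold_gate(out_weights, out_bias, h)
```

For example, two fan-in-2 hidden gates computing OR and AND of the inputs, combined with output weights (1, -1) and bias 1, realize XOR.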
-------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-96-001 can be accessed and printed as follows: % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-96-001.ps.Z ftp> bye % zcat nc-tr-96-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-96-002-title.ps.Z nc-tr-96-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-96-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage (note that this is undergoing some corrections and may be temporarily inaccessible): http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor From cia at kamo.riken.go.jp Tue Jan 23 21:18:35 1996 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Wed, 24 Jan 96 11:18:35 +0900 Subject: Blind Signal Processing - Call for Paper Message-ID: <9601240218.AA14489@kamo.riken.go.jp> Call for papers for a special Invited Session at ICONIP-96, Hong Kong: BLIND SIGNAL PROCESSING - ADAPTIVE AND NEURAL NETWORK APPROACHES I would like to announce that I am organizing a Special Invited Session at ICONIP-96 (September 24-27, 1996, Hong Kong) devoted to blind signal processing using neural and adaptive approaches.
Papers devoted to all aspects of blind signal processing: blind deconvolution, equalization, separation of sources, blind identification, blind medium structure identification, the cocktail-party problem, applications to EEG and ECG, voice enhancement and recognition, etc. are welcome. Authors are invited to submit by e-mail (to me) as soon as possible, but not later than February 15, an extended summary (2-3 pages) or a full paper. The final camera-ready paper should be submitted not later than March 1, 1996. Andrew Cichocki --------------------------- Dr. A. Cichocki, Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN E-mail: cia at kamo.riken.go.jp, FAX (+81) 48 462 4633. URL: http://www.bip.riken.go.jp/absl/absl.html From kyana at bme.ei.hosei.ac.jp Wed Jan 24 04:14:06 1996 From: kyana at bme.ei.hosei.ac.jp (Kazuo Yana) Date: Wed, 24 Jan 1996 18:14:06 +0900 Subject: invitation to BSI96 Message-ID: <199601240914.SAA18449@yana01.bme.ei.hosei.ac.jp> THE 2ND IFMBE-IMIA INTERNATIONAL WORKSHOP ON BIOSIGNAL INTERPRETATION (BSI96) September 23 - 28, 1996 Kanagawa, JAPAN CALL FOR PAPERS SCOPE OF THE WORKSHOP The International Federation for Medical and Biological Engineering (IFMBE) and the International Medical Informatics Association (IMIA), in collaboration with the Japan Society of Medical Electronics and Biological Engineering, will organize the Second Workshop on Biosignal Interpretation (BSI96). This workshop aims to explore the relatively new field of biosignal interpretation: model-based biosignal analysis, interpretation and integration, extending existing signal processing technology for the effective utilization of biosignals in a practical environment and for a deeper understanding of biological functions. This is the second workshop in this area.
The first workshop, the IMIA-IFMBE Working Conference on Biosignal Interpretation, was held at Skorping, Denmark in August, 1993. SCIENTIFIC PROGRAM Prospective authors are invited to propose original contributions which meet the general scope mentioned above in any of the following subject categories. (1) Mathematical modeling of experimental and clinical biosignals (nonlinear phenomena, chaos, fractals, neural network modeling, cardiovascular and respiratory fluctuations analysis, ECG/EEG/EMG signal modeling, potential mapping, inverse problem, miscellaneous) (2) Biosignal processing and pattern analysis (nonstationary/nonlinear analysis, time frequency analysis, statistical time series analysis, signal detection, signal reconstruction, neural network, wavelet analysis, recording and display instrumentation, miscellaneous) (3) On-line interactive signal acquisition and processing (intelligent monitoring, ambulatory system, miscellaneous) (4) Decision-support methods (parameter estimation, decision making, rule based/expert systems, automatic diagnoses, data reasoning, man-machine interface, miscellaneous). For in-depth discussion, enough time will be assigned for all oral and poster presentations. Besides paper presentations, real system/software demonstrations are encouraged. PUBLICATION All papers will be published in the workshop proceedings. 30-40 papers will be selected to be published as regular papers in the IMIA official journal: Methods of Information in Medicine. IMPORTANT DEADLINES 1. Submission of abstract (500 words or less): February 29, 1996. 2. Notification of Acceptance: April 15, 1996. 3. Submission of full length paper: July 15, 1996. ABSTRACT FORMAT (due by February 29, 1996) Title (Centered) Author(s) and Affiliation(s) (Centered) Abstract (500 words or less) should be sent to the conference secretariat (Professor Kazuo Yana, Department of Electronic Informatics, Hosei University, Koganei City Tokyo 184 JAPAN) by February 29, 1996.
The abstract should be single-spaced and clearly typed on A4 or letter size paper, with appropriate margins (approx. 2 cm or 1 inch). After you receive notification of acceptance of your paper (by April 15), prepare a camera-ready conference paper (max 4 printed pages). The format will be sent to all the participants with the notification of acceptance by April 15. The paper is due by July 15. Selected papers will be published as regular papers, as is or with minor revision, in Methods of Information in Medicine. CONFERENCE SITE Shonan Village Center, Hayama-machi, Kanagawa 240-01, Japan Phone: +81-468-55-1800 FAX: +81-468-55-1816. The Shonan Village Center, which offers the workshop facilities and accommodations, is situated on a Shonan hill in the central part of the Miura Peninsula, commanding a view of Mt. Fuji and overlooking Sagami Bay. Sagami Bay is a famous place for marine sports. There are other sports facilities nearby, such as golf courses and tennis courts. The area is known as the holiday resort closest to Tokyo. Proximity to the ancient capital city of Kamakura, the exotic harbor city of Yokohama and other tourist spots may add an attractive feature to the site for participants planning after-conference tours. PARTICIPATION FEES (Tentative) Registration (includes the workshop proceedings, conference materials, banquet ticket): Before July 31... 25,000 YEN (15,000 YEN for students) After July 31... 30,000 YEN (20,000 YEN for students) Accommodation (5 nights including meals and services): 75,000 YEN/Person (65,000 YEN/Person for students) The registration form will be sent by April 15 with notification of acceptance and a tentative program. A limited number of accommodations for observers and accompanying persons are available.
ORGANIZATION General Chair Kajiya, Fumihiko (Kawasaki Medical School: kajiya at me.kawasaki-m.ac.jp) Executive Committee Co-Chairs Sato, Shunsuke (Osaka University: sato at bpe.es.osaka-u.ac.jp) Takahashi, Takashi (Kyoto University: tak at kuhp.kyoto-u.ac.jp) Scientific Program Co-Chairs van Bemmel, Jan H. (Rotterdam, NL); Saranummi, Niilo (Tampere, FIN) International Scientific Program Committee: Cerutti, Sergio (Milano, I); Dawant, Benoit (Nashville, USA) Jansen, Ben H. (Houston, USA); Kaplan, Danny (Montreal, CA) Kitney, Richard I. (London, UK); Rosenfalck, Annelise (Aalborg, DK) Rubel, Paul (Lyon, F); Sato, Shunsuke (Osaka, J) Saul, Philip (Cambridge, USA); Zywietz, Christoph (Hanover, FRG) Executive Committee Bin, He (University of Illinois at Chicago) Hayano, Jun-ichiro (Nagoya City University) Ichimaru, Yuhei (Dokkyo University) Kiryu, Tohru (Niigata University) Musha, Toshimitsu (Keio University/Brain Functions Laboratory, Inc.) Okuyama, Fumio (Tokyo Medical and Dental University) Yamamoto, Mitsuaki (Tohoku University) Yamamoto, Yoshiharu (Tokyo University) Yana, Kazuo (Hosei University) For further information, please contact: Professor Kazuo Yana Secretariat, the 2nd IFMBE-IMIA Workshop on Biosignal Interpretation Dept. Electronic Informatics, Hosei University, Koganei Tokyo 184 JAPAN Phone/FAX: +81-(0)423-87-6188 E-mail: kyana at bme.ei.hosei.ac.jp Internet Home Page: http://www.bme.ei.hosei.ac.jp/BSI96/ From georg at ai.univie.ac.at Wed Jan 24 05:09:06 1996 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Wed, 24 Jan 1996 11:09:06 +0100 (MET) Subject: C.f.Abstracts: NN in Biomedical Systems Message-ID: <199601241009.LAA23480@jedlesee.ai.univie.ac.at> Following the general call for papers for EANN '96 (Int.
Conference on Engineering Applications of Neural Networks), we are still soliciting abstracts for the ========================================= Special Track on Neural Networks in Biomedical Systems ========================================= Any application of neural networks in the medical domain will be welcome. Examples are: - biosignal processing (e.g. EEG, ECG, intensive care, etc.) - biomedical image processing (e.g. in radiology, dermatology, etc.) - diagnostic support in medicine - topographical mapping of diseases or syndromes - epidemiological studies - control of biomedical devices (e.g. heart/lung machines, respirators, etc.) - optimization of therapy - monitoring (e.g. in intensive care) - and many more Special emphasis will be put on careful validation of results to make clear the value of neural networks in the application (e.g. through cross-validation with multiple training sets and comparison to alternatives, such as linear methods). One-page abstracts can be submitted until =============== Feb. 15, 1996 =============== Please state clearly what data was used (number of input features, number of training and test samples) and your results (e.g. by reporting mean performance and standard deviation from a cross-validation). Final papers will be due around March 21, 1996. It is planned to publish the best-quality papers in a special issue of a journal. Abstracts should be emailed to: ====================== georg at ai.univie.ac.at ====================== Below is a description of the EANN conference. Georg Dorffner Dept.
of Medical Cybernetics and Artificial Intelligence University of Vienna Freyung 6/2 A-1010 Vienna, Austria phone: +43-1-53532810 fax: +43-1-5320652 email: georg at ai.univie.ac.at http://www.ai.univie.ac.at/oefai/nn/georg.html ---------- International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, environmental engineering, and biomedical engineering. Abstracts of one page (200 to 400 words) should be sent by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Submissions will be reviewed and the number of full papers will be very limited. For more information on EANN'96, please see http://www.lpac.ac.uk/EANN96 and for reports on EANN '95, contents of the proceedings, etc. please see http://www.abo.fi/~abulsari/EANN95.html Five special tracks are being organised in EANN '96: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, ersin-tulunay at metu.edu.tr), Mechanical Engineering (A. Scherer, andreas.scherer at fernuni-hagen.de), Robotics (N. Sharkey, N.Sharkey at dcs.shef.ac.uk), and Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at) Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B.
Jervis (UK) E. Oja (Finland) H. Liljenstr\"om (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M. Ishikawa (Japan) D. Pearson (France) Registration information for the International Conference on Engineering Applications of Neural Networks (EANN '96) The conference fee will be sterling pounds (GBP) 300 until 28 February, and sterling pounds (GBP) 360 after that. At least one author of each accepted paper should register by 21 March to ensure that the paper will be included in the proceedings. The conference fee can be paid by a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance to the conference and the proceedings. Registration form can be picked up from the www (or can be sent to you by e-mail) and can be returned by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid. For more information, please ask eann96 at lpac.ac.uk From David_Redish at GS151.SP.CS.CMU.EDU Wed Jan 24 12:00:37 1996 From: David_Redish at GS151.SP.CS.CMU.EDU (David Redish) Date: Wed, 24 Jan 1996 12:00:37 -0500 Subject: new web site for NIPS*95 papers Message-ID: <13596.822502837@GS151.SP.CS.CMU.EDU> Many of the papers presented at NIPS*95 have been made available online by their authors. The NIPS Foundation now maintains a web site where abstracts and URLs for these papers are collected: http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS95/Papers.html New papers are being added regularly. The complete list of papers presented at NIPS*95 is available on the NIPS home page. The printed NIPS*95 proceedings will be available from MIT Press in May. 
------------------------------------------------------------ David Redish Computer Science Department CMU graduate student Neural Processes in Cognition Training Program Center for the Neural Basis of Cognition http://www.cs.cmu.edu/Web/People/dredish/home.html ------------------------------------------------------------ maintainer, CNBC website: http://www.cs.cmu.edu/Web/Groups/CNBC maintainer, NIPS*96 website: http://www.cs.cmu.edu/Web/Groups/NIPS ------------------------------------------------------------ From giles at research.nj.nec.com Thu Jan 25 17:02:39 1996 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 25 Jan 96 17:02:39 EST Subject: TR available: PRODUCT UNIT LEARNING Message-ID: <9601252202.AA06686@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: (A short version of this TR was published in NIPS 7) _____________________________________________________________________ PRODUCT UNIT LEARNING Technical Report UMIACS-TR-95-80 and CS-TR-3503, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742 Laurens R. Leerink{a}, C. Lee Giles{b,c}, Bill G. Horne{b}, Marwan A. Jabri{a} {a}SEDAL, Dept. of Electrical Engineering, The U. of Sydney, Sydney, NSW 2006, Australia {b}NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA {c}UMIACS, U. of Maryland, College Park, MD 20742, USA ABSTRACT Product units provide a method of automatically learning the higher-order input combinations required for the efficient synthesis of Boolean logic functions by neural networks. Product units also have a higher information capacity than sigmoidal networks. However, this activation function has not received much attention in the literature. A possible reason for this is that one encounters some problems when using standard backpropagation to train networks containing these units.
This report examines these problems, and evaluates the performance of three training algorithms on networks of this type. Empirical results indicate that the error surface of networks containing product units has more local minima than that of corresponding networks with summation units. For this reason, a combination of local and global training algorithms was found to provide the most reliable convergence. We then investigate how `hints' can be added to the training algorithm. By extracting a common frequency from the input weights, and training this frequency separately, we show that convergence can be accelerated. A constructive algorithm is then introduced which adds product units to a network as required by the problem. Simulations show that for the same problems this method creates a network with significantly fewer neurons than those constructed by the tiling and upstart algorithms. In order to compare their performance with other transfer functions, product units were implemented as candidate units in the Cascade Correlation (CC) {Fahlman90} system. Using these candidate units resulted in smaller networks which trained faster than when any of the standard (three sigmoidal types and one Gaussian) transfer functions were used. This superiority was confirmed when a pool of candidate units with four different nonlinear activation functions, which had to compete for addition to the network, was used. Extensive simulations showed that for the problem of implementing random Boolean logic functions, product units are always chosen over any of the other transfer functions.
-------------------------------------------------------------------------- -------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3503.product.units.neural.nets.ps.Z ---------------------------------------------------------------------------- -- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From luis.almeida at inesc.pt Fri Jan 26 07:51:42 1996 From: luis.almeida at inesc.pt (Luis B. Almeida) Date: Fri, 26 Jan 1996 13:51:42 +0100 Subject: Workshop: Spatiotemporal Models Message-ID: <3108CE5E.3F784554@inesc.pt> *** PLEASE POST *** PLEASE FORWARD TO OTHER APPROPRIATE LISTS *** Preliminary announcement Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems Sintra, Portugal, 6-8 November 1996 A workshop is being organized, on the topic of Spatiotemporal Models in Biological and Artificial Systems, to foster the discussion of the latest developments in these fields, and the cross-fertilization of ideas between people from the areas of biological and artificial information processing systems. This is a preliminary announcement of the workshop, to allow potential participants enough time to prepare their works for submission. The size of the workshop is planned to be relatively small (around 50 people), to enhance the communication among participants. Submissions will be subjected to an international peer review procedure. All accepted submissions will be scheduled for poster presentation. The authors of the best-rated submissions will make oral presentations, in addition to their poster presentations. Presentation of an accepted contribution is mandatory for participation in the workshop. There will also be a number of presentations by renowned invited speakers. 
Submissions will consist of the full papers in their final form. Paper revision after the review is not expected to be possible. The camera-ready paper format is not available yet, but a rough indication is eight A4 pages, typed single-spaced in a 12 point font, with 3.5 cm margins all around. The accepted contributions will be published by a major scientific publisher. The proceedings volume is planned to be distributed to the participants at the beginning of the workshop. The workshop will take place on 6-8 November 1996 in Sintra, Portugal. The tentative schedule is as follows: Deadline for paper submission 30 April 1996 Results of paper review 31 July 1996 Workshop 6-8 November 1996 Although no confirmation is available yet, we expect to have partial funding for the workshop from research-funding institutions. If so, this will allow us to partially subsidize the participants' expenses. The workshop is planned to last two and a half days, from a Wednesday afternoon (6 Nov.) through the following Friday afternoon (8 Nov.). The participants who so desire will have the opportunity to stay the following weekend for sightseeing. Sintra is a beautiful little town, located about 20 km west of Lisbon. It used to be a vacation place of the Portuguese aristocracy, and has in its vicinity a number of beautiful palaces, a Moorish castle, a monastery carved into the rock and other interesting spots. It is on the edge of a small mountain which creates a microclimate with luxuriant vegetation. Sintra has recently been designated a World Heritage site. Further announcements of the workshop will be made, but people who wish to stay informed can send e-mail to Luis B. Almeida (see below), to be included in the workshop mailing list. Workshop organizers: Chair Fernando Lopes da Silva Amsterdam University, The Netherlands Technical program Jose C. Principe University of Florida, Gainesville, FL, USA principe at synapse.ee.ufl.edu Local arrangements Luis B. 
Almeida Instituto Superior Tecnico / INESC, Lisbon, Portugal luis.almeida at inesc.pt -- Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ----------------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From piuri at elet.polimi.it Sun Jan 28 03:34:32 1996 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Sun, 28 Jan 1996 09:34:32 +0100 Subject: NICROSP'96 - deadline extension Message-ID: <9601280834.AA20390@ipmel2.elet.polimi.it> ====================================================================== NICROSP'96 * * * DEADLINE EXTENSION * * * Due to a delay in posting on some web servers, submission deadlines have been extended as follows: one-page abstract for review assignment: by February 19th, 1996 extended summary or full paper for review: by March 3rd, 1996 For details see the following call for papers. 1996 International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing Venice, Italy - 21-23 August 1996 ====================================================================== Sponsored by the IEEE Computer Society and the IEEE CS Technical Committee on Pattern Analysis and Machine Intelligence. In cooperation with: ACM SIGART, IEEE Circuits and Systems Society, IEEE Control Systems Society, IEEE Instrumentation and Measurement Society, IEEE Neural Network Council, IEEE North-Italy Section, IEEE Region 8, IEEE Robotics and Automation Society (pending), IEEE Signal Processing Society (pending), IEEE Systems, Man, and Cybernetics Society, IMACS, INNS (pending), ISCA, AEI, AICA, ANIPLA, FAST. 
CALL FOR PAPERS This workshop aims to create a unique synergetic discussion forum and a strong link between theoretical researchers and practitioners in the application fields of identification, control, robotics, and signal/image processing using neural techniques. The three-day, single-session schedule will provide the ideal environment for in-depth analysis and discussion of the theoretical aspects of the applications and the use of neural networks in practice. Invited talks in each area will provide a starting point for the discussion and give the state of the art in the corresponding field. Panels will provide an interactive discussion. Researchers and practitioners are invited to submit papers concerning theoretical foundations of neural computation, experimental results or practical applications related to the specific workshop areas. Interested authors should submit a half-page abstract to the program chair by e-mail or fax by February 19, 1996, for review planning. Then, an extended summary or the full paper (limited to 20 double-spaced pages including figures and tables) must be sent to the program chair by March 3, 1996 (PostScript email submission is strongly encouraged). Submissions should contain: the corresponding author, affiliation, complete address, fax, email, and the preferred workshop track (identification, control, robotics, signal processing, image processing). Submission implies the willingness of at least one of the authors to register, attend the workshop and present the paper. Paper selection is based on the full paper: the corresponding author will be notified by March 30, 1996. The camera-ready version, limited to 10 one-column IEEE-book-standard pages, is due by May 1, 1996. Proceedings will be published by the IEEE Computer Society Press. Extended versions of selected papers will be considered for publication in special issues of international journals. General Chair Prof. 
Edgar Sanchez-Sinencio Department of Electrical Engineering Texas A&M University College Station, TX 77843-3128 USA phone (409) 845-7498 fax (409) 845-7161 email sanchez at eesun1.tamu.edu Program Chair Prof. Vincenzo Piuri Department of Electronics and Information Politecnico di Milano piazza L. da Vinci 32, I-20133 Milano, Italy phone +39-2-2399-3606 fax +39-2-2399-3411 email piuri at elet.polimi.it Publication Chair Dr. Jose' Pineda de Gyvez Department of Electrical Engineering Texas A&M University Publicity, Registr. & Local Arrangement Chair Dr. Cesare Alippi Department of Electronics and Information Politecnico di Milano Workshop Secretariat Ms. Laura Caldirola Department of Electronics and Information Politecnico di Milano phone +39-2-2399-3623 fax +39-2-2399-3411 email caldirol at elet.polimi.it Program Committee (preliminary list) Shun-Ichi Amari, University of Tokyo, Japan Panos Antsaklis, Univ. Notre Dame, USA Magdy Bayoumi, University of Southwestern Louisiana, USA James C. Bezdek, University of West Florida, USA Pierre Borne, Ecole Polytechnique de Lille, France Luiz Caloba, Universidad Federal de Rio de Janeiro, Brazil Jill Card, Digital Equipment Corporation, USA Chris De Silva, University of Western Australia, Australia Laurene Fausett, Florida Institute of Technology, USA C. Lee Giles, NEC, USA Karl Goser, University of Dortmund, Germany Simon Jones, University of Loughborough, UK Michael Jordan, Massachusetts Institute of Technology, USA Robert J. Marks II, University of Washington, USA Jean D. Nicoud, EPFL, Switzerland Eros Pasero, Politecnico di Torino, Italy Emil M. Petriu, University of Ottawa, Canada Alberto Prieto, Universidad de Granada, Spain Gianguido Rizzotto, SGS-Thomson, Italy Edgar Sanchez-Sinencio, Texas A&M University, USA Bernd Schuermann, Siemens, Germany Earl E. 
Swartzlander, University of Texas at Austin, USA Philip Treleaven, University College London, UK Kenzo Watanabe, Shizuoka University, Japan Michel Weinfeld, Ecole Polytechnique de Paris, France ====================================================================== From N.Sharkey at dcs.shef.ac.uk Sun Jan 28 06:18:58 1996 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Sun, 28 Jan 96 11:18:58 GMT Subject: EANN-96 - ROBOTICS Message-ID: <9601281118.AA28201@entropy.dcs.shef.ac.uk> Sorry if you get this twice, but I messed up the mailing last week. *** ROBOTICS TRACK of EANN-96 *** London, UK: 17-19 June, 1996. For those of you a bit late in submitting your abstracts (200-400 words) for the Robotics track of EANN-96, you can send them directly to me electronically at n.sharkey at dcs.shef.ac.uk (or fax). But please let me know of your intention to do so. For more information on EANN '96: http://www.lpac.ac.uk/EANN96 For reports on EANN '95, contents of the proceedings, etc.: http://www.abo.fi/~abulsari/EANN95.html Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 15 February. 
noel Noel Sharkey Professor of Computer Science Department of Computer Science Regent Court University of Sheffield S1 4DP, Sheffield, UK N.Sharkey at dcs.shef.ac.uk FAX: (0114) 2780972 From hd at research.att.com Fri Jan 26 14:44:53 1996 From: hd at research.att.com (Harris Drucker) Date: Fri, 26 Jan 96 14:44:53 EST Subject: please post Message-ID: <9601261939.AA16638@big.info.att.com> Please announce to the connectionists bulletin board: Two papers on using boosting techniques to improve the performance of classification trees are available via anonymous ftp. The first paper describes a preliminary set of experiments showing that an ensemble of trees constructed using Freund and Schapire's boosting algorithm is much better than single trees: Boosting Decision Trees Harris Drucker and Corinna Cortes to be published in NIPS 8, 1996. A new boosting algorithm of Freund and Schapire is used to improve the performance of decision trees which are constructed using the information ratio criterion of Quinlan's C4.5 algorithm. This boosting algorithm iteratively constructs a series of decision trees, each decision tree being trained and pruned on examples that have been filtered by previously trained trees. Examples that have been incorrectly classified by the previous trees in the ensemble are resampled with higher probability, giving a new probability distribution for the next tree in the ensemble to train on. Results from optical character recognition (OCR) and from knowledge discovery and data mining problems show that, in comparison to single trees, trees trained independently, or trees trained on subsets of the feature space, the boosting ensemble is much better. 
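The resampling step described in the abstract can be sketched as a single AdaBoost-style reweighting round. This is a hedged illustration, not Drucker and Cortes's code: the function name is hypothetical, and the particular update (shrinking correctly classified examples by beta = err/(1-err), then renormalizing) follows Freund and Schapire's AdaBoost, which the abstract builds on.

```python
import numpy as np

def boost_weights(y_true, y_pred, dist):
    """One AdaBoost-style reweighting round (hypothetical sketch).
    Misclassified examples keep their weight while correctly classified
    ones are shrunk, so the next tree focuses on the hard examples.
    Assumes the weighted error is in (0, 0.5)."""
    err = np.sum(dist[y_true != y_pred])          # weighted error of this tree
    beta = err / (1.0 - err)                      # beta < 1 when err < 0.5
    new = np.where(y_true == y_pred, dist * beta, dist)
    return new / new.sum()                        # renormalize to a distribution

y_true = np.array([0, 1, 1, 0])
y_pred = np.array([0, 1, 0, 0])                   # one mistake, at index 2
dist = np.full(4, 0.25)                           # initially uniform
print(boost_weights(y_true, y_pred, dist))        # index 2 now carries half the mass
```

The next tree in the ensemble is then trained on examples drawn from (or weighted by) this new distribution.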
The second paper extends this work, giving more details, and applies this technique to the design of a fast preclassifier for OCR: Fast Decision Tree Ensembles for Optical Character Recognition by Harris Drucker accepted by the Fifth Annual Symposium on Document Analysis and Information Retrieval (1996) in Las Vegas To get both papers: unix> ftp ftp.monmouth.edu (or ftp 192.100.64.2) Connected to monmouth.edu. 220 monnet FTP server (Version wu-2.4(1) Mon Oct 9 18:48:45 EDT 1995) ready. Name (ftp.monmouth.edu:hd): anonymous (no space after the :) 331 Guest login ok, send your complete e-mail address as password. Password: (your email address) 230-Welcome to the Monmouth University FTP server 230- ftp> binary Type set to I. ftp> cd pub/drucker 250 CWD command successful ftp> get nips-paper.ps.Z ftp> get las-vegas-paper.ps.Z ftp> quit unix> uncompress nips-paper.ps.Z unix> uncompress las-vegas-paper.ps.Z unix> lpr (or your postscript print command) (either paper) Any problems, contact me at hd at harris.monmouth.edu Harris Drucker From ersintul at rorqual.cc.metu.edu.tr Tue Jan 30 07:14:53 1996 From: ersintul at rorqual.cc.metu.edu.tr (ersin tulunay) Date: Tue, 30 Jan 1996 15:14:53 +0300 (MEST) Subject: EANN'96: Special Track on Control Systems Message-ID: Dear Neural Net and Control System Researcher, A special track on Control Systems will be organized during the International Conference on Engineering Applications of Neural Networks (EANN'96), which is to be held in London, UK, between 17-19 June 1996. The final call for papers for EANN'96 can be found at the end of this message. EANN'95 was held in Helsinki, Finland, between 21-23 August 1995. Reports on EANN'95 and the contents of the proceedings etc. may be seen at http://www.abo.fi/~abulsari/EANN'95.html Based on a number of good quality papers presented at EANN'95, it was possible to edit a special issue of the Journal of Systems Engineering. 
For the forthcoming meeting we plan to publish a similar special issue in a suitable journal. For this, of course, your contribution is essential. I am sure you agree that one of the most interesting areas for the application of neural nets is control systems. However, the number of papers published so far has not been as large as control systems deserve. In particular, as far as real-world applications are concerned, only a few works have been published. Therefore, your contribution to EANN'96 will be very valuable. We would also like to arrange a discussion session, in a leisurely atmosphere, on the use of neural nets in control applications. The ideas which come up will help in organizing future activities. The conference, and especially this meeting, will help us make efficient and sincere personal contacts with our colleagues and will provide a good opportunity for planting seeds of mutual collaborative research for international projects which might be funded by international bodies such as the European Union. We are looking forward to receiving your abstracts by 15 February 1996 at the latest. With kind regards, Dr. Ersin Tulunay Electrical and Electronic Engineering Department Middle East Technical University 06531 Ankara, Turkey Tel: +90 312 2102335 (Office) +90 312 2101199 (Home) Fax: +90 312 2101261 (Office) E-mail : etulunay at ed.eee.metu.edu.tr ---------- International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 The conference is a forum for presenting the latest results on neural network applications in technical fields. 
The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, environmental engineering, and biomedical engineering. Abstracts of one page (200 to 400 words) should be sent by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Submissions will be reviewed and the number of full papers will be very limited. For more information on EANN'96, please see http://www.lpac.ac.uk/EANN96 Five special tracks are being organised in EANN '96: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, ersin-tulunay at metu.edu.tr), Mechanical Engineering (A. Scherer, andreas.scherer at fernuni-hagen.de), Robotics (N. Sharkey, N.Sharkey at dcs.shef.ac.uk), and Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at) Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B. Jervis (UK) E. Oja (Finland) H. Liljenström (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M. Ishikawa (Japan) D. Pearson (France) Registration information for the International Conference on Engineering Applications of Neural Networks (EANN '96) The conference fee will be sterling pounds (GBP) 300 until 28 February, and sterling pounds (GBP) 360 after that. 
At least one author of each accepted paper should register by 21 March to ensure that the paper will be included in the proceedings. The conference fee can be paid by a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance at the conference and the proceedings. The registration form can be picked up from the WWW (or can be sent to you by e-mail) and can be returned by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid. For more information, please ask eann96 at lpac.ac.uk From chentouf at kepler.inpg.fr Tue Jan 30 11:29:26 1996 From: chentouf at kepler.inpg.fr (rachida) Date: Tue, 30 Jan 1996 17:29:26 +0100 Subject: 2 papers available Message-ID: <199601301629.RAA09862@kepler.inpg.fr> First paper: Combining Sigmoids and Radial Basis Functions in Evolutive Neural Architectures. available at: ftp://tirf.inpg.fr/pub/HTML/chentouf/esann96_chentouf.ps.gz ABSTRACT An incremental algorithm for supervised learning of noisy data using two-layer neural networks with linear output units and a mixture of sigmoids and radial basis functions in the hidden layer (2-[S,RBF]NN) is proposed. Each time the network has to be extended, we compare different estimations of the residual error: the one provided by a sigmoidal unit responding to the overall input space, and those provided by a number of RBFs responding to localized regions. The unit which provides the best estimation is selected and installed in the existing network. The procedure is repeated until the error reduces to the noise in the data. Experimental results show that the incremental algorithm using 2-[S,RBF]NN is considerably faster than the one using only sigmoidal hidden units. It also leads to a less complex final network and avoids being trapped in spurious minima. 
This paper has been accepted for publication in the European Symposium on Artificial Neural Networks, Bruges, Belgium, April 1996. =========================================================== The second paper is an extended abstract (the final version is in preparation): DWINA: Depth and Width Incremental Neural Algorithm. available at: ftp://ftp.tirf.inpg.fr/pub/HTML/chentouf/icnn96_chentouf.ps.gz ABSTRACT This paper presents DWINA: an algorithm for depth and width design of neural architectures in the case of supervised learning with noisy data. Each new unit is trained to learn the error of the existing network and is connected to it such that it does not affect its previous performance. Criteria for choosing between increasing width or increasing depth are proposed. The connection procedure for each case is also described. The stopping criterion is very simple and consists of comparing the residual error signal to the noise signal. Preliminary experiments demonstrate the efficacy of the algorithm, especially in avoiding spurious minima and in designing a network of well-suited size. The complexity of the algorithm (number of operations) is on average the same as that needed in a convergent run of the BP algorithm on a static architecture having the optimal number of parameters. Moreover, no significant difference was found between networks having the same number of parameters but different structures. Finally, the algorithm exhibits an interesting behaviour: the MSE on the training set tends to decrease continuously during the process, converging steadily to the solution of the mapping problem. This paper has been accepted for publication in the IEEE International Conference on Neural Networks, Washington, June 1996. 
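Both papers above share the same core step: train candidate units to estimate the residual error of the existing network, and install whichever candidate estimates it best. A minimal sketch of that selection step, with hypothetical names and a fixed two-unit candidate pool (the real algorithms also adapt each candidate's internal parameters, e.g. RBF centers and sigmoid input weights):

```python
import numpy as np

def fit_candidate(phi, x, residual):
    """Least-squares fit of one candidate unit's output weight to the current
    residual; returns the remaining squared error. Hypothetical sketch of the
    selection criterion only."""
    h = phi(x)
    a = h @ residual / (h @ h)          # optimal linear output weight
    return np.sum((residual - a * h) ** 2)

x = np.linspace(-2.0, 2.0, 50)
residual = np.exp(-x ** 2)              # a localized residual error

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))   # global candidate
rbf = lambda t: np.exp(-t ** 2)                # local candidate (well placed here)

errors = {"sigmoid": fit_candidate(sigmoid, x, residual),
          "rbf": fit_candidate(rbf, x, residual)}
best = min(errors, key=errors.get)      # install the best estimator
print(best)                             # prints: rbf
```

With a localized residual, the RBF candidate wins; a residual spanning the whole input range would favor the sigmoid, which is exactly the trade-off the 2-[S,RBF]NN selection exploits.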
__ ______ __ ________ _______ __ __ __ ________ ______ / / /_ __/ / / / ____ / / _____/ / // /\ / // ____ // ____/ / / / / / / / /___/ / / /___ ____ / // /\ \ / // /___/ // / ____ / /_____ / / / / / ___/ / _____/ /___/ / // / \ \/ // /_____// /_/ __/ /_______/ /_/ /_/ /_/\__\ /_/ /_//_/ \_\//_/ /_____/ -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= || Mrs Rachida CHENTOUF || || LTIRF-INPG || || 46, AV Felix Viallet || || 38031 Grenoble - FRANCE || || Tel : (+33) 76.57.43.64 || || Fax : (+33) 76.57.47.90 || || || || WWW: ftp://tirf.inpg.fr/pub/HTML/chentouf/rachida.html || -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From marks at u.washington.edu Tue Jan 30 20:17:35 1996 From: marks at u.washington.edu (Robert Marks) Date: Tue, 30 Jan 96 17:17:35 -0800 Subject: TNN Abstracts Posting Message-ID: <9601310117.AA06219@carson.u.washington.edu> Web Abstracts Robert J. Marks II, Editor-in-Chief IEEE Transactions on Neural Networks According to the 1994 Journal Citation Reports, the IEEE Transactions on Neural Networks, based on frequency of citation, has a half-life of 3.1 years. Life is short in our technology. Engineer Sherman Minton expressed it nicely. "Half of everything you know will be wrong in 10 years; you just don't know which half." To make neural network research more temporally accessible, the TNN is establishing a WWW page posting of abstracts of papers submitted to the IEEE Transactions on Neural Networks. Dr. Jianchang Mao will coordinate the posting effort. 
Authors submitting papers to the TNN may, at their own discretion, submit ASCII information on their papers via e-mail to Jianchang Mao, Abstracts Editor IEEE Transactions on Neural Networks IBM Almaden Research Center Image and Multimedia Systems, DPEE/B3 650 Harry Road San Jose, CA 95120 IEEETNN at almaden.ibm.com Submission of the information may be done only after a paper has been submitted to the IEEE Transactions on Neural Networks and a TNN paper number has been assigned. The following information should be included in the message sent to the Abstracts Editor. - The TNN number assigned to the paper. - The paper title. - Authors and their affiliation. Please include e-mail addresses. - The abstract of the paper. - (Optional) Information on how to view or obtain a full copy of the paper. Electronic access on the WWW or ftp is preferred. If you currently have a paper in any stage of review in the TNN, you may also submit abstract information for posting. The TNN Abstracts page will be appended to the home page of the IEEE Neural Networks Council (http://www.ieee.org.nnc) under the able coordination of Professor Payman Arabshahi. The NNC home page also includes a remarkably complete listing of conferences in computational intelligence, IEEE copyright forms and information, the NNC newsletter, information about neural network research centers and NNC-sponsored books. The most recent tables of contents of the IEEE Transactions on Fuzzy Systems and the IEEE Transactions on Neural Networks are also posted. Happy surfing. And may your technological half-life be long. From chentouf at kepler.inpg.fr Wed Jan 31 06:44:33 1996 From: chentouf at kepler.inpg.fr (rachida) Date: Wed, 31 Jan 1996 12:44:33 +0100 Subject: new paper available "Combining Sigmoids and RBFs" Message-ID: <199601311144.MAA00665@kepler.inpg.fr> The following paper: Combining Sigmoids and Radial Basis Functions in Evolutive Neural Architectures. 
is available at: ftp://tirf.inpg.fr/pub/HTML/chentouf/esann96_chentouf.ps.gz The abstract is the same as in the posting above. ========= This paper has been accepted for publication in the European Symposium on Artificial Neural Networks, Bruges, Belgium, April 1996. 
From jan at uran.informatik.uni-bonn.de Wed Jan 31 10:11:36 1996 From: jan at uran.informatik.uni-bonn.de (Jan Puzicha) Date: Wed, 31 Jan 1996 16:11:36 +0100 Subject: Publications and Abstracts available online Message-ID: <199601311511.QAA13190@thalia.informatik.uni-bonn.de> The following publications are now available as abstracts and compressed PostScript online via the WWW home page of the Computer Vision and Pattern Recognition Group of the University of Bonn, Germany: http://www-dbv.cs.uni-bonn.de/ This page also contains information about people, scientific projects (segmentation, stereo, compression, data clustering, vector quantization, multidimensional scaling, autonomous robotics, associative memories) and new results in textured image segmentation of the group, as well as links to related sites, conferences and journals. Data Clustering J. Buhmann, Data clustering and learning, in Handbook of Brain Theory and Neural Networks, M. Arbib, ed., Bradford Books/MIT Press, 1995. J. Buhmann, Vector Quantization with Complexity Costs, IEEE Transactions on Information Theory, 39, pp.1133-1145, 1993. J. Buhmann and T. 
Hofmann, A Maximum Entropy Approach to Pairwise Data Clustering, in Proceedings of the International Conference on Pattern Recognition, Hebrew University, Jerusalem, vol.II, IEEE Computer Society Press, pp.207-212, 1994. J. Buhmann and T. Hofmann, Pairwise Data Clustering by Deterministic Annealing, Tech. Rep. IAI-TR-95-7, Institut für Informatik III, Universität Bonn. T. Hofmann and J. Buhmann, Multidimensional scaling and data clustering, in Advances in Neural Information Processing Systems 7, Morgan Kaufmann Publishers, 1995. T. Hofmann and J. Buhmann, Hierarchical pairwise data clustering by mean-field annealing. ICANN 1995. Robotics J. Buhmann, W. Burgard, A.B. Cremers, D. Fox, T. Hofmann, F. Schneider, J. Strikos and S. Thrun. The Mobile Robot Rhino. AI Magazine, 16:1, 1995. Face Recognition J. Buhmann, M. Lades and F. Eeckmann. Illumination-Invariant Face Recognition with a Contrast Sensitive Silicon Retina. In: Advances in Neural Information Processing Systems (NIPS) 6, Morgan Kaufmann Publishers, pp 769-776, 1994. H. Aurisch, J. Strikos and J. Buhmann. A Real-Time Face Recognition System with a Retina Camera. Internal report, summarizes the results of our face recognition research accomplished in summer 1993. Associative Memories J. Buhmann, Oscillatory Associative Memories, in Handbook of Brain Theory & Neural Networks, M. Arbib (ed.), Bradford Books, MIT Press, 1995. Greetings, Jan Puzicha -------------------------------------------------------------------- Jan Puzicha | email: jan at uran.cs.uni-bonn.de Institute f. Informatics III | jan at cs.uni-bonn.de University of Bonn | WWW : http://www.cs.uni-bonn.de/~jan | Roemerstrasse 164 | Tel. 
: +49 228 550-383 D-53117 Bonn | Fax : +49 228 550-382 From moody at chianti.cse.ogi.edu Wed Jan 31 20:47:20 1996 From: moody at chianti.cse.ogi.edu (John Moody) Date: Wed, 31 Jan 96 17:47:20 -0800 Subject: Graduate Study at the Oregon Graduate Institute Message-ID: <9602010147.AA20447@chianti.cse.ogi.edu> OGI (Oregon Graduate Institute of Science and Technology) has openings for a few outstanding students in its Computer Science and Electrical Engineering Masters and Ph.D programs in the areas of Neural Networks, Learning, Signal Processing, Time Series, Control, Speech, Language, Vision, and Computational Finance. OGI has 14 faculty, senior research staff, and postdocs in these areas. Short descriptions of our research interests are appended below. The primary purposes of this message are: 1) To invite inquiries and applications from prospective students interested in studying for a Masters or PhD Degree in the above areas. 2) To notify prospective PhD students who are U.S. Citizens or U.S. Nationals of various fellowship opportunities at OGI. Fellowships provide full or partial financial support while studying for the PhD. OGI is a young, but rapidly growing, private research institute located in the Silicon Forest area west of downtown Portland, Oregon. OGI offers Masters and PhD programs in Computer Science and Engineering, Electrical Engineering, Applied Physics, Materials Science and Engineering, Environmental Science and Engineering, Chemistry, Biochemistry, Molecular Biology, and Management. The Portland area has a high concentration of high tech companies that includes major firms like Intel, Hewlett Packard, Tektronix, Sequent Computer, Mentor Graphics, Wacker Siltronics, and numerous smaller companies like Planar Systems, FLIR Systems, Flight Dynamics, and Adaptive Solutions (an OGI spin-off that manufactures high performance parallel computers for neural network and signal processing applications). 
The admissions deadline for the OGI PhD programs is March 1. Masters program applications are accepted year-round. Inquiries about these programs and admissions for either Computer Science or Electrical Engineering should be addressed to: Office of Admissions and Records Oregon Graduate Institute PO Box 91000 Portland, OR 97291 Phone: (503)690-1028, or (800)685-2423 (toll-free in the US and Canada) World Wide Web: http://www.ogi.edu/webtest/admissions.html Internet: admissions at admin.ogi.edu Due to the late time in the PhD applications season, though, informal applications should be sent directly to the CSE Department. For these informal applications, please include a letter specifying your research interests, photocopies of your GRE scores, TOEFL scores, and college transcripts, and indicate your interest in either the PhD or Masters programs. Please send these materials to: Betty Shannon, Academic Coordinator Department of Computer Science and Engineering Oregon Graduate Institute PO Box 91000 Portland, OR 97291-1000 Phone: (503)690-1255 Internet: bettys at cse.ogi.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++ Oregon Graduate Institute of Science & Technology Department of Computer Science and Engineering & Department of Electrical Engineering and Applied Physics Research Interests of Faculty, Research Staff, and Postdocs in Neural Networks, Signal Processing, Control, Speech, Language, Vision, Time Series, and Computational Finance (Note: Additional information is available on the Web at http://www.ogi.edu/ ) Etienne Barnard (Associate Professor, EEAP): Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics. Ron Cole (Professor, CSE): Ron Cole is director of the Center for Spoken Language Understanding at OGI. 
Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world. Mark Fanty (Research Assistant Professor, CSE): Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers. Dan Hammerstrom (Associate Professor, CSE): Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next generation VLSI neurocomputer architectures. Hynek Hermansky (Associate Professor, EEAP): Hynek Hermansky is interested in speech processing by humans and machines with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing. Todd K. Leen (Associate Professor, CSE): Todd Leen's research spans theory of neural network models, architecture and algorithm design, and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling. 
John Moody (Associate Professor, CSE): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics, and computational finance.

David Novick (Associate Professor, CSE): David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems.

Misha Pavel (Associate Professor, EEAP): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces.

Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the application of nonlinear modeling and analysis techniques to time series prediction problems and financial market analysis.

Thorsteinn S. Rognvaldsson (Post-Doctoral Research Associate, CSE): Thorsteinn Rognvaldsson studies both applications and theory of neural networks and other non-linear methods for function fitting and classification. He is currently working on methods for choosing regularization parameters, and on comparing the performance of neural networks with that of other techniques for time series prediction and financial markets.
Pieter Vermeulen (Senior Research Associate, CSE): Pieter Vermeulen is interested in the theory, design, and implementation of pattern-recognition systems, neural networks, and telephone-based speech systems. He currently works on the realization of speaker-independent, small-vocabulary interfaces to the public telephone network. Current projects include voice dialing, a system to collect the year 2000 census information, and the rapid prototyping of such systems.

Eric A. Wan (Assistant Professor, EEAP): Eric Wan's research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, adaptive control, active noise cancellation, and telecommunications.

Lizhong Wu (Senior Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding, and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance.
From erol at ee.duke.edu Wed Jan 31 11:40:17 1996
From: erol at ee.duke.edu (Erol Gelenbe)
Date: Wed, 31 Jan 1996 11:40:17 -0500 (EST)
Subject: No subject
In-Reply-To: <199601281900.PAA18319@marshall.cs.unc.edu>
Message-ID:

BIOLOGICALLY INSPIRED AUTONOMOUS SYSTEMS
Computation, Cognition and Control
Duke University -- March 4 and 5, 1996

Departments of Electrical and Computer Engineering, Psychology: Experimental, Biomedical Engineering, Neurobiology, and NSF-ERC

Preliminary Program

March 4
 8:00- 8:45  Registration
 8:45- 9:00  Erol Gelenbe and Nestor Schmajuk -- Welcome
 9:00- 9:30  Jean-Arcady Meyer (ENS, Paris)

From moody at chianti.cse.ogi.edu Wed Jan 31 22:27:19 1996
From: moody at chianti.cse.ogi.edu (John Moody)
Date: Wed, 31 Jan 96 19:27:19 -0800
Subject: CFP: NEURAL NETWORKS in the CAPITAL MARKETS 1996
Message-ID: <9602010327.AA20766@chianti.cse.ogi.edu>

-- Preliminary Announcement and Call for Papers --

NNCM-96
FOURTH INTERNATIONAL CONFERENCE
NEURAL NETWORKS in the CAPITAL MARKETS
Wednesday-Friday, November 20-22, 1996
The Ritz-Carlton Hotel, Pasadena, California, U.S.A.
Sponsored by Caltech and London Business School

Neural networks have been applied to a number of live systems in the capital markets and, in many cases, have demonstrated better performance than competing approaches. Because of the increasing interest in the NNCM conferences held in the U.K. and the U.S., the fourth annual NNCM is planned for November 20-22, 1996, in Pasadena, California. This is a research meeting where original and significant contributions to the field are presented. In addition, introductory tutorials will be included to familiarize audiences of different backgrounds with the financial and the mathematical aspects of the field.
Areas of Interest: Price forecasting for stocks, bonds, commodities, and foreign exchange; asset allocation and risk management; volatility analysis and pricing of derivatives; cointegration, correlation, and multivariate data analysis; credit assessment and economic forecasting; statistical methods, learning techniques, and hybrid systems.

Organizing Committee:
Dr. Y. Abu-Mostafa, Caltech (Chairman)
Dr. A. Atiya, Cairo University
Dr. N. Biggs, London School of Economics
Dr. D. Bunn, London Business School
Dr. M. Jabri, Sydney University
Dr. B. LeBaron, University of Wisconsin
Dr. A. Lo, MIT Sloan School
Dr. I. Matsuba, Chiba University
Dr. J. Moody, Oregon Graduate Institute
Dr. C. Pedreira, Catholic Univ. PUC-Rio
Dr. A. Refenes, London Business School
Dr. M. Steiner, Universitaet Munster
Dr. A. Timermann, UC San Diego
Dr. A. Weigend, University of Colorado
Dr. H. White, UC San Diego
Dr. L. Xu, Chinese University of Hong Kong

Submission of Papers: Original contributions representing new and significant research, development, and applications in the above areas of interest are invited. Authors should send 5 copies of a 1000-word summary clearly stating their results to Dr. Y. Abu-Mostafa, Caltech 136-93, Pasadena, CA 91125, U.S.A. All submissions must be received before May 1, 1996. There will be a rigorous refereeing process to select the high-quality papers to be presented at the conference.

Location: The conference will be held at the Ritz-Carlton Huntington Hotel in Pasadena, within two miles of the Caltech campus. The hotel is a 35-minute drive from Los Angeles International Airport (LAX), with nonstop flights from most major cities in North America, Europe, the Far East, Australia, and South America.

Mailing List: If you wish to be added to the mailing list of NNCM-96, please send your postal address, e-mail address, and fax number to Dr. Y. Abu-Mostafa, Caltech 136-93, Pasadena, CA 91125, U.S.A.
e-mail: yaser at caltech.edu, fax: (818) 795-0326
Home Page: http://www.cs.caltech.edu/~learn/nncm.html

From lawrence at s4.elec.uq.edu.au Wed Jan 31 23:45:34 1996
From: lawrence at s4.elec.uq.edu.au (Steve Lawrence)
Date: Thu, 1 Feb 1996 14:45:34 +1000 (EST)
Subject: Paper available: Function Approximation with Neural Networks and Local Methods
Message-ID: <199602010445.OAA00201@s4.elec.uq.edu.au>

The following paper presents an overview of global MLP approximation and local approximation. It is known that MLPs can respond poorly to isolated data points, and we demonstrate that considering histograms of k-NN density estimates of the data can help in determining the best method a priori.

http://www.elec.uq.edu.au/~lawrence - Australia
http://www.neci.nj.nec.com/homepages/lawrence - USA

We welcome your comments.

Function Approximation with Neural Networks and Local Methods: Bias, Variance and Smoothness

Steve Lawrence, Ah Chung Tsoi, Andrew Back
Electrical and Computer Engineering
University of Queensland, St. Lucia 4072, Australia

ABSTRACT

We review the use of global and local methods for estimating a function mapping $\mathbb{R}^m \rightarrow \mathbb{R}^n$ from samples of the function containing noise. The relationship between the methods is examined and an empirical comparison is performed using the multi-layer perceptron (MLP) global neural network model, the single nearest-neighbour model, a linear local approximation (LA) model, and the following commonly used datasets: the Mackey-Glass chaotic time series, the Sunspot time series, British English Vowel data, TIMIT speech phonemes, building energy prediction data, and the sonar dataset. We find that the simple local approximation models often outperform the MLP. No criteria such as classification/prediction, size of the training set, dimensionality of the training set, etc. can be used to distinguish whether the MLP or the local approximation method will be superior.
However, we find that if we consider histograms of the $k$-NN density estimates for the training datasets, then we can choose the best performing method {\em a priori} by selecting local approximation when the spread of the density histogram is large and choosing the MLP otherwise. This result supports the hypothesis that the global MLP model is less appropriate when the characteristics of the function to be approximated vary throughout the input space. We discuss the results, the smoothness assumption often made in function approximation, and the bias/variance dilemma.

From Connectionists-Request at cs.cmu.edu Mon Jan 1 00:05:13 1996
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Mon, 01 Jan 96 00:05:13 EST
Subject: Bi-monthly Reminder
Message-ID: <28104.820472713@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994. This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

-- Dave Touretzky & Lisa Saksida

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical issues relating to neural computation.
- We encourage people to post the abstracts of their latest papers and tech reports.

- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:

  WRONG WAY: "Can someone please mail me all references to cascade correlation?"

  RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new textbooks related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu".
- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster.

- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

  arch.yymm

where yymm stand for the obvious thing. Thus the earliest available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with password your username.
3. 'cd' directly to the following directory:
     /afs/cs/project/connect/connect-archives

The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".
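As a concrete sketch of the three steps above, a retrieval session might look like the following (the file name arch.8802 is taken from the archive-naming example above; server responses are abbreviated, and the .Z suffix and uncompress step apply only when the file is stored compressed):

```
unix> ftp B.GP.CS.CMU.EDU
Name: anonymous
Password: myusername
ftp> binary
ftp> cd /afs/cs/project/connect/connect-archives
ftp> get arch.8802.Z
ftp> quit
unix> uncompress arch.8802.Z
```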
-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following url:

  http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory.

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu). Researchers may place electronic versions of their preprints in this directory and announce their availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage, and handling by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work, or the use of this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z, where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.
Make sure your paper is single-spaced, so as to save paper, and include an INDEX entry consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages, and 4) a one-sentence description. See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement. In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available: Solving Towers of Hanoi with ART-4".

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-networks), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name.
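To illustrate what a mailer script such as the Getps mentioned above might do with the two header lines, here is a minimal sketch. Only the header-line names FTP-host and FTP-filename come from the guidelines; the function name, the use of Python, and the sample message are illustrative assumptions:

```python
# Minimal sketch (illustrative, not the actual Getps script): pull the
# "FTP-host:" and "FTP-filename:" lines out of an announcement message
# so a retrieval tool knows which server and path to fetch.

def parse_ftp_headers(message: str):
    """Return (host, filename) from an announcement, or None if either is absent."""
    host = filename = None
    for line in message.splitlines():
        if line.startswith("FTP-host:"):
            host = line.split(":", 1)[1].strip()
        elif line.startswith("FTP-filename:"):
            filename = line.split(":", 1)[1].strip()
    if host and filename:
        return host, filename
    return None

# Hypothetical announcement using the two header lines from the guidelines.
announcement = """Subject: TR announcement
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

The file is now available for copying from the Neuroprose repository."""

print(parse_ftp_headers(announcement))
# -> ('archive.cis.ohio-state.edu', '/pub/neuroprose/filename.ps.Z')
```

Keeping the two lines in this rigid "Header: value" form is what makes such automation possible, which is why the guidelines ask for them verbatim.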
Functions for GNU Emacs RMAIL, and other mailing systems, will also be posted as debugged and available. At any time, for any reason, the author may request that their paper be updated or removed.

For further questions contact:

  Jordan Pollack
  Associate Professor
  Computer Science Department
  Center for Complex Systems
  Brandeis University
  Waltham, MA 02254
  Phone: (617) 736-2713/* to fax
  email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password: neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry:

myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read

Let me know when it is in place so I can announce it to Connectionists at cmu.
^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

unix> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/myname.title.ps.Z

The file myname.title.ps.Z is now available for copying from the Neuroprose repository:

Random Paper (12 pages)
Somebody Somewhere
Cornell University

ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem.
~r.signature
^D

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".
From mpolycar at ece.uc.edu Tue Jan 2 10:19:35 1996 From: mpolycar at ece.uc.edu (Marios Polycarpou) Date: Tue, 2 Jan 1996 10:19:35 -0500 (EST) Subject: ISIC'96: Final Call for Papers Message-ID: <199601021519.KAA19208@zoe.ece.uc.edu> FINAL CALL FOR PAPERS 11th IEEE International Symposium on Intelligent Control (ISIC'96) Sponsored by the IEEE Control Systems Society and held in conjunction with The 1996 IEEE International Conference on Control Applications (CCA) and The IEEE Symposium on Computer-Aided Control System Design (CACSD) September 15-18, 1996 The Ritz-Carlton Hotel, Dearborn, Michigan, USA ISIC General Chair: Kevin M. Passino, The Ohio State University ISIC Program Chair: Jay A. Farrell, University of California, Riverside ISIC Publicity Chair: Marios Polycarpou, University of Cincinnati Intelligent control, the discipline where control algorithms are developed by emulating certain characteristics of intelligent biological systems, is being fueled by recent advancements in computing technology and is emerging as a technology that may open avenues for significant technological advances. For instance, fuzzy controllers which provide for a simplistic emulation of human deduction have been heuristically constructed to perform difficult nonlinear control tasks. Knowledge-based controllers developed using expert systems or planning systems have been used for hierarchical and supervisory control. Learning controllers, which provide for a simplistic emulation of human induction, have been used for the adaptive control of uncertain nonlinear systems. Neural networks have been used to emulate human memorization and learning characteristics to achieve high performance adaptive control for nonlinear systems. 
Genetic algorithms that use the principles of biological evolution and "survival of the fittest" have been used for computer-aided-design of control systems and to automate the tuning of controllers by evolving in real-time populations of highly fit controllers. Topics in the field of intelligent control are gradually evolving, and expanding on and merging with those of conventional control. For instance, recent work has focused on comparative cost-benefit analyses of conventional and intelligent control techniques using simulation and implementations. In addition, there has been recent activity focused on modeling and nonlinear analysis of intelligent control systems, particularly work focusing on stability analysis. Moreover, there has been a recent focus on the development of intelligent and conventional control systems that can achieve enhanced autonomous operation. Such intelligent autonomous controllers try to integrate conventional and intelligent control approaches to achieve levels of performance, reliability, and autonomous operation previously only seen in systems operated by humans. Papers are being solicited for presentation at ISIC and for publication in the Symposium Proceedings on topics such as: - Architectures for intelligent control - Hierarchical intelligent control - Distributed intelligent systems - Modeling intelligent systems - Mathematical analysis of intelligent systems - Knowledge-based systems - Fuzzy systems / fuzzy control - Neural networks / neural control - Machine learning - Genetic algorithms - Applications / Implementations: - Automotive / vehicular systems - Robotics / Manufacturing - Process control - Aircraft / spacecraft This year the ISIC is being held in conjunction with the 1996 IEEE International Conference on Control Applications and the IEEE Symposium on Computer-Aided Control System Design. Effectively this is one large conference at the beautiful Ritz-Carlton hotel. 
The programs will be held in parallel so that sessions from each conference can be attended by all. There will be one registration fee and each registrant will receive a complete set of proceedings. For more information, and information on how to submit a paper to the conference, see the back of this sheet. ++++++++++ Submissions: ++++++++++ Papers: Five copies of the paper (including an abstract) should be sent by Jan. 22, 1996 to:
Jay A. Farrell, ISIC'96
College of Engineering
University of California, Riverside
Riverside, CA 92521
ph: (909) 787-2159
fax: (909) 787-3188
Jay_Farrell at qmail.ucr.edu
Clearly indicate who will serve as the corresponding author and include a telephone number, fax number, email address, and full mailing address. Authors will be notified of acceptance by May 1996. Accepted papers, in final camera-ready form (maximum of 6 pages in the proceedings), will be due in June 1996. Invited Sessions: Proposals for invited sessions are being solicited and are due Jan. 22, 1996. The session organizers should contact the Program Chair by Jan. 1, 1996 to discuss their ideas and obtain information on the required invited session proposal format. Workshops and Tutorials: Proposals for pre-symposium workshops should be submitted by Jan. 22, 1996 to:
Kevin M. Passino, ISIC'96
Dept. Electrical Engineering
The Ohio State University
2015 Neil Ave.
Columbus, OH 43210-1272
ph: (614) 292-5716
fax: (614) 292-7596
passino at osu.edu
Please contact K.M. Passino by Jan. 1, 1996 to discuss the content and required format for the workshop or tutorial proposal. 
++++++++++++++++++++++++ Symposium Program Committee: ++++++++++++++++++++++++ James Albus, National Institute of Standards and Technology Karl Astrom, Lund Institute of Technology Matt Barth, University of California, Riverside Michael Branicky, Massachusetts Institute of Technology Edwin Chong, Purdue University Sebastian Engell, University of Dortmund Toshio Fukuda, Nagoya University Zhiqiang Gao, Cleveland State University Dimitry Gorinevsky, Measurex Devron Inc. Ken Hunt, Daimler-Benz AG Tag Gon Kim, KAIST Mieczyslaw Kokar, Northeastern University Ken Loparo, Case Western Reserve University Kwang Lee, The Pennsylvania State University Michael Lemmon, University of Notre Dame Frank Lewis, University of Texas at Arlington Ping Liang, University of California, Riverside Derong Liu, General Motors R&D Center Kumpati Narendra, Yale University Anil Nerode, Cornell University Marios Polycarpou, University of Cincinnati S. Joe Qin, Fisher-Rosemount Systems, Inc. Tariq Samad, Honeywell Technology Center George Saridis, Rensselaer Polytechnic Institute Jennie Si, Arizona State University Mark Spong, University of Illinois at Urbana-Champaign Jeffrey Spooner, Sandia National Laboratories Harry Stephanou, Rensselaer Polytechnic Institute Kimon Valavanis, University of Southwestern Louisiana Li-Xin Wang, Hong Kong University of Science and Tech. Gary Yen, USAF Phillips Laboratory ************************************************************************** * Prof. Marios M. Polycarpou | TEL: (513) 556-4763 * * University of Cincinnati | FAX: (513) 556-7326 * * Dept. 
Electrical & Computer Engineering | * * Cincinnati, Ohio 45221-0030 | Email: polycarpou at uc.edu * ************************************************************************** From hicks at cs.titech.ac.jp Wed Jan 3 19:57:57 1996 From: hicks at cs.titech.ac.jp (hicks@cs.titech.ac.jp) Date: Thu, 4 Jan 1996 09:57:57 +0900 Subject: Query: Infinite priors Message-ID: <199601040057.JAA27152@euclid.cs.titech.ac.jp> I have the following question concerning the existence of certain priors. Can we say that a uniform prior over an infinite domain exists? For example, the uniform prior over all natural numbers. I wonder since it cannot be expressed in the form p(n) = (f(n)/\sum_n f(n)), where f(n) is a well defined function over the natural numbers. In general, if f(n) is not a summable series, then can the probability function p(n), whose elements have the ratios of the elements f(n), i.e., p(n)/p(m) = f(n)/f(m), be said to exist? I ask because a true Bayesian approach to some problems may require the prior to be defined. If there is no prior, then we can't say we are taking a Bayesian approach. If an infinite uniform prior does not exist, then we cannot take the approach that "no prior knowledge" = "infinite uniform prior". I.e., it would imply that any Bayesian approach involving the prior MUST begin with some assumptions about the prior (i.e., it must be formed from a summable/integrable function). References or opinions would be welcome. Craig Hicks. hicks at cs.titech.ac.jp From steven.young at psy.ox.ac.uk Thu Jan 4 10:46:59 1996 From: steven.young at psy.ox.ac.uk (Steven Young) Date: Thu, 4 Jan 1996 15:46:59 +0000 (GMT) Subject: LAST CALL for participation for the Oxford Summer School on Connectionist Modelling Message-ID: <199601041547.PAA13854@cogsci1.psych.ox.ac.uk> This is the LAST CALL for participation in the 1996 Oxford Summer School on Connectionist Modelling. Please pass on this information to people you know who would be interested. 
-------- OXFORD SUMMER SCHOOL ON CONNECTIONIST MODELLING Department of Experimental Psychology University of Oxford 21st July - 2nd August 1996 Applications are invited for participation in a 2-week residential Summer School on techniques in connectionist modelling. The course is aimed primarily at researchers who wish to exploit neural network models in their teaching and/or research and it will provide a general introduction to connectionist modelling through lectures and exercises on Power PCs. The course is interdisciplinary in content though many of the illustrative examples are taken from cognitive and developmental psychology, and cognitive neuroscience. The instructors with primary responsibility for teaching the course are Kim Plunkett and Edmund Rolls. No prior knowledge of computational modelling will be required though simple word processing skills will be assumed. Participants will be encouraged to start work on their own modelling projects during the Summer School. The cost of participation in the Summer School is £750 to include accommodation (bed and breakfast at St. John's College) and registration. Participants will be expected to cover their own travel and meal costs. A small number of graduate student scholarships providing partial funding may be available. Applicants should indicate whether they wish to be considered for a graduate student scholarship but are advised to seek their own funding as well, since in previous years the number of graduate student applications has far exceeded the number of scholarships available. 
There is a Summer School World Wide Web page describing the contents of the 1995 Summer School available on: http://cogsci1.psych.ox.ac.uk/summer-school/ Further information about contents of the course can be obtained from Steven.Young at psy.ox.ac.uk If you are interested in participating in the Summer School, please contact: Mrs Sue King Department of Experimental Psychology University of Oxford South Parks Road Oxford OX1 3UD Tel: (01865) 271353 Email: susan.king at psy.oxford.ac.uk Please send a brief description of your background with an explanation of why you would like to attend the Summer School (one page maximum) no later than 31st January 1996. Regards, Steven Young. From finnoff at predict.com Thu Jan 4 13:35:37 1996 From: finnoff at predict.com (William Finnoff) Date: Thu, 4 Jan 96 11:35:37 MST Subject: Query: Infinite priors Message-ID: <9601041835.AA08352@predict.com> A non-text attachment was scrubbed... Name: not available Type: text Size: 2176 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/30e4e10e/attachment-0001.ksh From om at research.nj.nec.com Thu Jan 4 13:56:31 1996 From: om at research.nj.nec.com (Stephen M. Omohundro) Date: Thu, 4 Jan 96 13:56:31 -0500 Subject: Query: Infinite priors In-Reply-To: <199601040057.JAA27152@euclid.cs.titech.ac.jp> (hicks@cs.titech.ac.jp) Message-ID: <9601041856.AA01276@iris64> > Date: Thu, 4 Jan 1996 09:57:57 +0900 > From: hicks at cs.titech.ac.jp > > > I have the following question concerning the existence of certain priors. > > Can we say that a uniform prior over an infinite domain exists? For example, > the uniform prior over all natural numbers. I wonder since it cannot be > expressed in the form p(n) = (f(n)/\sum_n f(n)), where f(n) is a well defined > function over the natural numbers. 
In general, if f(n) is not a summable > series, then can the probability function p(n), whose elements have the ratios > of the elements f(n), i.e., p(n)/p(m) = f(n)/f(m), be said to exist? > > I ask because a true Bayesian approach to some problems may require the prior > to be defined. If there is no prior, then we can't say we are taking a > Bayesian approach. If an infinite uniform prior does not exist, then we > cannot take the approach that "no prior knowledge" = "infinite uniform prior". > I.e., it would imply that any Bayesian approach involving the prior MUST begin > with some assumptions about the prior (i.e., it must be formed from a > summable/integrable function). > > References or opinions would be welcome. Craig Hicks. hicks at cs.titech.ac.jp > These are generally called "improper priors". You can still do much of the Bayesian paradigm using them because in many situations the likelihood function is such that the posterior (which is proportional to the prior times the likelihood) is normalizable even if the prior isn't. Formally you can treat them using a limiting sequence of proper priors. Most books on Bayesian analysis have some discussion of this topic. --Steve -- Stephen M. Omohundro http://www.neci.nj.nec.com/homepages/om NEC Research Institute, Inc. om at research.nj.nec.com 4 Independence Way Phone: 609-951-2719 Princeton, New Jersey 08540 Fax: 609-951-2488 From radford at cs.toronto.edu Thu Jan 4 14:15:27 1996 From: radford at cs.toronto.edu (Radford Neal) Date: Thu, 4 Jan 1996 14:15:27 -0500 Subject: Query on "infinite" priors Message-ID: <96Jan4.141537edt.965@neuron.ai.toronto.edu> Craig Hicks. hicks at cs.titech.ac.jp writes: > Can we say that a uniform prior over an infinite domain exists? For > example, the uniform prior over all natural numbers... > > I ask because a true Bayesian approach to some problems may require > the prior to be defined. If there is no prior, then we can't say we > are taking a Bayesian approach. 
If an infinite uniform prior does not > exist, then we cannot take the approach that "no prior knowledge" = > "infinite uniform prior". I.e., it would imply that any Bayesian > approach involving the prior MUST begin with some assumptions about > the prior (i.e., it must be formed from a summable/integrable function). This is a long-standing issue in Bayesian inference. These "infinite" priors are usually called "improper" priors, while those that can be normalized are called "proper" priors. Some Bayesians like to use improper priors, as long as the posterior turns out to be proper (which is often, but not always, the case). Other Bayesians eschew improper priors, because strange things can sometimes occur when you use them. One strangeness is that a Bayesian procedure based on an improper prior can be "inadmissible" - ie, be uniformly worse than some other procedure with respect to expected performance, for any state of the world. A famous example (Stein's paradox) is estimation of the mean of a vector of three or more independent components having Gaussian distributions, with the aim of minimizing the expected squared error. The Bayesian estimate with an improper uniform prior is just the sample mean, which turns out to be inadmissible. In contrast Bayesian procedures based on proper priors that are nowhere zero are always admissible. There should be lots of stuff on this in Bayesian textbooks, such as Smith and Bernardo's recent book on "Bayesian Theory" (though I don't have a copy handy to verify just what they say). ---------------------------------------------------------------------------- Radford M. Neal radford at cs.toronto.edu Dept. of Statistics and Dept. 
of Computer Science radford at utstat.toronto.edu University of Toronto http://www.cs.toronto.edu/~radford ---------------------------------------------------------------------------- From plunkett at crl.ucsd.edu Thu Jan 4 11:41:46 1996 From: plunkett at crl.ucsd.edu (Kim Plunkett) Date: Thu, 4 Jan 96 08:41:46 PST Subject: No subject Message-ID: <9601041641.AA17397@crl.ucsd.edu> University Lectureship University of Oxford Department of Experimental Psychology Applications are invited from human experimental/cognitive psychologists (including connectionist modellers) with a proven record of research and training. The post is tenable from 1 October 1996 or as soon as possible thereafter. The stipend will be according to age on the scale £15,154-£28,215 per annum. The successful candidate may be offered an Official Fellowship at New College, for which additional remuneration would be available. Further particulars, containing details of the duties and the full range of emoluments and allowances attaching to both the University and College appointments, may be obtained from Professor S.D. Iversen, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, U.K. (telephone +44 (0) 1865 271356; fax +44 (0) 1865 271354), to whom applications (eight copies, two only from overseas candidates), containing a curriculum vitae, a summary of research, a list of principal publications and the names of three referees, should be sent by 31 January 1996. Candidates will be notified if they are required for interview. The University exists to promote excellence in education and research, and is an equal opportunities employer. 
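The "limiting sequence of proper priors" remark in the improper-prior thread above can be made concrete with the standard conjugate-normal example (a sketch of my own choosing, not taken from the original messages; in particular the N(0, tau^2) prior is an illustrative assumption). For data x_1, ..., x_n drawn from N(theta, sigma^2) with sigma^2 known, and the proper prior theta ~ N(0, tau^2):

```latex
% Posterior under the proper prior N(0, \tau^2):
\theta \mid x_{1:n} \;\sim\; N\!\left(
  \frac{n\bar{x}/\sigma^2}{\,n/\sigma^2 + 1/\tau^2\,},\;
  \frac{1}{\,n/\sigma^2 + 1/\tau^2\,}\right)
% Letting \tau^2 \to \infty, so the prior flattens toward the improper
% uniform prior, the shrinkage toward the prior mean 0 vanishes and
\theta \mid x_{1:n} \;\longrightarrow\; N\!\left(\bar{x},\, \sigma^2/n\right)
```

The limit is exactly the posterior obtained formally by treating the improper uniform "prior" as a constant and normalizing the likelihood, which is why improper priors often still yield proper posteriors. Nothing in the limit rescues cases where the posterior itself fails to normalize, the caveat both replies emphasize; Stein's paradox, mentioned above, concerns the vector version of this same flat-prior estimate in three or more dimensions.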
From N.Sharkey at dcs.shef.ac.uk Fri Jan 5 07:08:22 1996 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Fri, 5 Jan 96 12:08:22 GMT Subject: 2nd and FINAL call for papers Message-ID: <9601051208.AA28421@entropy.dcs.shef.ac.uk> CALL FOR PAPERS ** LEARNING IN ROBOTS AND ANIMALS ** An AISB-96 two-day workshop University of Sussex, Brighton, UK: April, 1st & 2nd, 1996 Co-Sponsored by IEE Professional Group C4 (Artificial Intelligence) WORKSHOP ORGANISERS: Noel Sharkey (chair), University of Sheffield, UK. Gillian Hayes, University of Edinburgh, UK. Jan Heemskerk, University of Sheffield, UK. Tony Prescott, University of Sheffield, UK. PROGRAMME COMMITTEE: Dave Cliff, UK. Marco Dorigo, Italy. Frans Groen, Netherlands. John Hallam, UK. John Mayhew, UK. Martin Nillson, Sweden Claude Touzet, France Barbara Webb, UK. Uwe Zimmer, Germany. Maja Mataric, USA. For Registration Information: alisonw at cogs.susx.ac.uk In the last five years there has been an explosion of research on Neural Networks and Robotics from both a self-learning and an evolutionary perspective. Within this movement there is also a growing interest in natural adaptive systems as a source of ideas for the design of robots, while robots are beginning to be seen as an effective means of evaluating theories of animal learning and behaviour. A fascinating interchange of ideas has begun between a number of hitherto disparate areas of research and a shared science of adaptive autonomous agents is emerging. This two-day workshop proposes to bring together an international group to both present papers of their most recent research, and to discuss the direction of this emerging field. WORKSHOP FORMAT: The workshop will consist of half-hour presentations with at least 15 minutes being allowed for discussion at the end of each presentation. Short videos of mobile robot systems may be included in presentations. Proposals for robot demonstrations are also welcome. 
Please contact the workshop organisers if you are considering bringing a robot, as some local assistance can be arranged. The workshop format may change once the number of accepted papers is known; in particular, there may be some poster presentations. WORKSHOP CONTRIBUTIONS: Contributions are sought from researchers in any field with an interest in the issues outlined above. Areas of particular interest include the following:
* Reinforcement, supervised, and imitation learning methods for autonomous robots
* Evolutionary methods for robotics
* The development of modular architectures and reusable representations
* Computational models of animal learning with relevance to robots, and robot control systems modelled on animal behaviour
* Reviews or position papers on learning in autonomous agents
Papers will ideally emphasise real-world problems, robot implementations, or show clear relevance to the understanding of learning in both natural and artificial systems. Papers should not exceed 5000 words in length. Please submit four hard copies to the Workshop Chair (address below) by 30th January, 1996. All papers will be refereed by the Workshop Committee and other specialists. Authors of accepted papers will be notified by 24th February. Final versions of accepted papers must be submitted by 10th March, 1996. A collated set of workshop papers will be distributed to workshop attenders. We are currently negotiating to publish the workshop proceedings as a book. 
SUBMISSIONS TO:
Noel Sharkey
Department of Computer Science
Regent Court
University of Sheffield
S1 4DP, Sheffield, UK
email: n.sharkey at dcs.sheffield.ac.uk
For further information about AISB96: ftp ftp.cogs.susx.ac.uk, log in as anonymous, and cd pub/aisb/aisb96 From ali at almaden.ibm.com Fri Jan 5 20:43:02 1996 From: ali at almaden.ibm.com (ali@almaden.ibm.com) Date: Fri, 5 Jan 1996 17:43:02 -0800 Subject: Dissertation announcement Message-ID: <9601060143.AA21382@brasil.almaden.ibm.com> The following dissertation is available via anonymous FTP and through http://www.ics.uci.edu/~ali (either as a whole or by chapters). Title: "Learning Probabilistic Relational Concept Descriptions" By Kamal Ali Key words: learning probabilistic concepts, multiple models, multiple classifiers, combining classifiers, evidence combination, relational learning, first-order learning, noise-tolerant learning, learning of small disjuncts, Inductive Logic Programming. A B S T R A C T This dissertation presents results in the area of multiple models (multiple classifiers), learning probabilistic relational (first-order) rules from noisy, "real-world" data, and reducing the small disjuncts problem - the problem whereby learned rules that cover few training examples have high error rates on test data. Several results are presented in the arena of multiple models. The multiple models approach is relevant to the problem of making accurate classifications in "real-world" domains since it facilitates evidence combination, which is needed to learn accurately on such domains. It is also useful when learning from small training data samples in which many models appear to be equally "good" w.r.t. the given evaluation metric. Such models often have quite varying error rates on test data, so in such situations the single-model method has problems. Increasing search only partly addresses this problem, whereas the multiple models approach has the potential to be much more useful. 
The most important result of the multiple models research is that the *amount* of error reduction afforded by the multiple models approach is linearly correlated with the degree to which the individual models make errors in an uncorrelated manner. This work is the first to model the degree of error reduction due to the use of multiple models. It is also shown that it is possible to learn models that make less correlated errors in domains in which there are many ties in the search evaluation metric during learning. The third major result of the research on multiple models is the realization that models should be learned that make errors in a negatively-correlated manner rather than those that make errors in an uncorrelated (statistically independent) manner. The thesis also presents results on learning probabilistic first-order rules from relational data. It is shown that learning a class description for each class in the data - the one-per-class approach - and attaching probabilistic estimates to the learned rules allows accurate classifications to be made on real-world data sets. The thesis presents the system HYDRA which implements this approach. It is shown that the resulting classifications are often more accurate than those made by three existing methods for learning from noisy, relational data. Furthermore, the learned rules are relational and so are more expressive than the attribute-value rules learned by most induction systems. Finally, results are presented on the small-disjuncts problem, in which rules that apply to rare subclasses have high error rates. The thesis presents the first approach that is simultaneously successful at reducing the error rates of small disjuncts while also reducing the overall error rate by a statistically significant margin. The previous approach, which aimed to reduce small-disjunct error rates, did so only at the expense of increasing the error rates of large disjuncts. 
It is shown that the one-per-class approach reduces error rates for such rare rules while not sacrificing the error rates of the other rules. The dissertation is approximately 180 pages long (single spaced) (~590K). ftp ftp.ics.uci.edu logname: anonymous password: your email address cd /pub/ali binary get thesis.ps.Z quit ============================================================================ I am now with the IBM Data Mining group at Almaden (San Jose) - we are looking for good people for data analysis (data mining) and consulting so please feel free to call me at (408) 365 8736. My address is: Kamal Ali, Room D3-250 IBM Almaden Research Center 650 Harry Rd San Jose, CA 95120 ============================================================================== Kamal Mahmood Ali, Ph.D. Phone: 408 927 1354 Consultant and data mining analyst, Fax: 408 927 3025 Data Mining Solutions, Office: ARC D3-250 IBM http://www.almaden.ibm.com/stss/ ============================================================================== From rao at cs.rochester.edu Sat Jan 6 18:25:56 1996 From: rao at cs.rochester.edu (rao@cs.rochester.edu) Date: Sat, 6 Jan 1996 18:25:56 -0500 Subject: Paper Available: Eye Movements in Visual Search Message-ID: <199601062325.SAA14449@vulture.cs.rochester.edu> Modeling Saccadic Targeting in Visual Search Rajesh P.N. Rao, Gregory J. Zelinsky, Mary M. Hayhoe and Dana H. Ballard Department of Computer Science University of Rochester Rochester, NY 14627, USA To appear in [Advances in Neural Information Processing Systems 8 (NIPS*95), D. Touretzky, M. Mozer and M. Hasselmo (Eds.), MIT Press, 1996] Abstract Visual cognition depends critically on the ability to make rapid eye movements known as saccades that orient the fovea over targets of interest in a visual scene. Saccades are known to be ballistic: the pattern of muscle activation for foveating a prespecified target location is computed prior to the movement and visual feedback is precluded. 
Despite these distinctive properties, there has been no general model of the saccadic targeting strategy employed by the human visual system during visual search in natural scenes. This paper proposes a model for saccadic targeting that uses iconic scene representations derived from oriented spatial filters at multiple scales. Visual search proceeds in a coarse-to-fine fashion with the largest scale filter responses being compared first. The model was empirically tested by comparing its performance with actual eye movement data from human subjects in a natural visual search task; preliminary results indicate substantial agreement between eye movements predicted by the model and those recorded from human subjects. ======================================================================== Retrieval information: FTP-host: ftp.cs.rochester.edu FTP-pathname: /pub/u/rao/papers/nips95.ps.Z URL: ftp://ftp.cs.rochester.edu/pub/u/rao/papers/nips95.ps.Z WWW URL: http://www.cs.rochester.edu:80/u/rao/ 7 pages; 570K compressed e-mail: rao at cs.rochester.edu ========================================================================= From katagiri at hip.atr.co.jp Mon Jan 8 03:03:10 1996 From: katagiri at hip.atr.co.jp (Shigeru Katagiri) Date: Mon, 08 Jan 1996 17:03:10 +0900 Subject: NNSP96: Notification Message-ID: <9601080803.AA09264@hector> !!! DEADLINE IS COMING SOON !!! 
******************************************************************* ******************************************************************* 1996 IEEE Workshop on Neural Networks for Signal Processing ******************************************************************* ******************************************************************* September 4-6, 1996 Keihanna, Kyoto, Japan Sponsored by the IEEE Signal Processing Society (In cooperation with the IEEE Neural Networks Council) (In cooperation with the Tokyo Chapter) ************************* * * * CALL FOR PAPERS * * * ************************* Thanks to the sponsorship of IEEE Signal Processing Society and to the cooperation of IEEE Signal Processing Society Tokyo Chapter, the sixth of a series of IEEE workshops on Neural Networks for Signal Processing will be held at Keihanna Plaza, Seika, Kyoto, Japan. Papers are solicited for, but not limited to, the following areas: Paradigms: artificial neural networks, Markov models, fuzzy logic, inference net, evolutionary computation, nonlinear signal processing, and wavelet Application areas: speech processing, image processing, OCR, robotics, adaptive filtering, communications, sensors, system identification, issues related to RWC, and other general signal processing and pattern recognition Theories: generalization, design algorithms, optimization, parameter estimation, and network architectures Implementations: parallel and distributed implementation, hardware design, and other general implementation technologies Prospective authors are invited to submit 5 copies of extended summaries of no more than 6 pages. The top of the first page of the summary should include a title, authors' names, affiliations, address, telephone and fax numbers and email address, if any. Camera-ready full papers of accepted proposals will be published in a hard-bound volume by IEEE and distributed at the workshop. For further information, please contact Ms. Masae SHIOJI, (Tel.) 
+81 (774) 95 1052, (Fax.) +81 (774) 95 1008, (e-mail) shioji at hip.atr.co.jp, or access URL http://www.hip.atr.co.jp through the world wide web. Please send paper submissions to: Dr. Shigeru KATAGIRI IEEE NNSP'96 ATR Interpreting Telecommunications Research Laboratories 2-2 Hikaridai Seika-cho, Soraku-gun Kyoto 619-02 Japan SCHEDULE Submission of extended summary : January 26 1996 Notification of acceptance : March 29 Submission of photo-ready accepted paper : April 26 Advanced registration, before : June 28 ******************************************************************* General Chairs Shiro USUI (Toyohashi University of Technology (usui at tut.ac.jp)) Yoh'ichi TOHKURA (ATR HIP Res. Labs. (tohkura at hip.atr.co.jp)) Vice-Chair Nobuyuki OTSU (Electrotechnical Laboratory (otsu at etl.go.jp)) Finance Chair Sei MIYAKE (NHK (miyake at strl.nhk.or.jp)) Proceeding Chair Elizabeth J. Wilson (Raytheon Co. (bwilson at ed.ray.com)) Publicity Chair Erik McDermott (ATR HIP Res. Labs. (mcd at hip.atr.co.jp)) Program Chair Shigeru KATAGIRI (ATR IT Res. Labs. (katagiri at hip.atr.co.jp)) Program Committee L. ATLAS A. BACK P.-C. CHUNG H.-C. FU K. FUKUSHIMA L. GILES F. GIROSI A. GORIN N. HATAOKA Y.-H. HU J.-N. HWANG K. IIZUKA B.-H. JUANG M. KAWATO S. KITAMURA M. KOMURA G. KUHN S.-Y. KUNG K. KYUMA R. LIPPMANN J. MAKHOUL E. MANOLAKOS Y. MATSUYAMA S. MARUNO S. NAKAGAWA M. NIRANJAN E. OJA R. OKA K. PALIWAL T. POGGIO J. PRINCIPE H. SAWAI N. SONEHARA J. SORENSEN W.-C. SIU Y. TAKEBAYASHI V. TRESP T. WATANABE A. WEIGEND C. WELLEKENS E. YODOGAWA ******************************************************************* ============================================================================ Shigeru KATAGIRI, Dr. Eng. 
Supervisor ATR Interpreting Telecommunications Research Laboratories ATR Human Information Processing Research Laboratories phone: +81 (774) 95 1052 fax: +81 (774) 95 1008 email: katagiri at hip.atr.co.jp address: ATR-ITL 2-2 Hikaridai Seika-cho, Soraku-gun Kyoto 619-02 Japan Associate Editor IEEE Transactions on Signal Processing Program Chair 1996 IEEE Signal Processing Society Workshop on Neural Networks for Signal Processing (NNSP96) ============================================================================ From terry at salk.edu Mon Jan 8 04:26:46 1996 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 8 Jan 96 01:26:46 PST Subject: Neuromorphic Engineering Workshop Message-ID: <9601080926.AA01642@salk.edu> WORKSHOP ON NEUROMORPHIC ENGINEERING JUNE 24 - JULY 14, 1996 TELLURIDE, COLORADO Deadline for application is April 5, 1996. Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for a three-week workshop that will be held in Telluride, Colorado in 1996. The first two Telluride Workshops on Neuromorphic Engineering, held in the summers of 1994 and 1995 and sponsored by NSF with co-funding from the "Center for Neuromorphic Systems Engineering" at Caltech, were resounding successes. A summary of these workshops, together with a list of participants, is available from: http://www.klab.caltech.edu/~timmer/telluride.html or http://www.salk.edu/~bryan/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems.
The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both the neurobiological and the engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications, from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead. The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. FORMAT: The three-week workshop is co-organized by Dana Ballard (Rochester, US), Rodney Douglas (Zurich, Switzerland) and Misha Mahowald (Zurich, Switzerland). It is composed of lectures, practical tutorials on aVLSI design, hands-on projects, and interest groups. Apart from the lectures, the activities run concurrently. However, participants are free to attend any of these activities at their own convenience. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon.
Projects and interest groups meet in the late afternoons and after dinner. The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation, layout, and testing over the course of the three weeks. The first week covers the basics of transistors, simple circuit design, and simulation. This material is intended for participants who have no experience with aVLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing aVLSI retinas to video output monitors. Retina chips will be provided. The third week will feature a session on floating gates, including lectures on the physics of tunneling and injection, and experimentation with test chips. Projects that are carried out during the workshop will be centered in four groups: 1) active perception, 2) elements of autonomous robots, 3) robot manipulation, and 4) multichip neuron networks. The "active perception" project group will emphasize vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three-degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe, which in turn provides drive signals to the three motors of the head. Projects will involve programming the DataCube to implement a variety of vision/oculomotor algorithms. The "elements of autonomous robots" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs (central pattern generators) and theories of nonlinear oscillators for locomotion.
It will also explore the use of simple aVLSI sensors for autonomous robots. The "robot manipulation" group will use robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics. The "multichip neuron networks" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. PARTIAL LIST OF INVITED LECTURERS: Dana Ballard, Rochester. Randy Beer, Case-Western Reserve. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. Chris Diorio, Caltech. Rodney Douglas, Zurich. John Elias, Delaware University. Mary Hayhoe, Rochester. Geoffrey Hinton, Toronto. Christof Koch, Caltech. Shih-Chii Liu, Caltech and Rockwell. Misha Mahowald, Zurich. Stefan Schaal, Georgia Tech. Mark Tilden, Los Alamos. Terry Sejnowski, Salk Institute and UC San Diego. Paul Viola, MIT. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains (several peaks are in the 14,000+ foot range). The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems-level neurophysiology, or modeling the brain at the systems level.
However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, one or two Macs, and a few PCs running Windows and Linux. We have funds to reimburse some participants for up to $500 of domestic travel and for all housing expenses. Please specify on the application whether such financial help is needed. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop. HOW TO APPLY: The deadline for receipt of applications is April 5, 1996. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One-page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1996.
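The "elements of autonomous robots" description above mentions CPGs and theories of nonlinear oscillators for locomotion. As a purely illustrative sketch of that idea (not code from the workshop; the model and all constants are arbitrary choices), two coupled phase oscillators can lock in antiphase, the alternating rhythm of a two-legged gait: each "leg" has an intrinsic frequency, and a coupling term pushes the pair half a cycle apart.

```python
import math

# Two phase oscillators with antiphase coupling -- a toy stand-in for a
# locomotion CPG. OMEGA, K, and DT are arbitrary illustrative values.
OMEGA = 2 * math.pi   # intrinsic frequency of each "leg" (rad/s)
K = 1.0               # coupling strength
DT = 0.01             # Euler integration step (s)

theta1, theta2 = 0.0, 0.5   # start the two legs nearly in phase
for _ in range(4000):        # integrate 40 s of simulated time
    # Each oscillator is pulled toward being pi radians away from the other.
    d1 = OMEGA + K * math.sin(theta2 - theta1 - math.pi)
    d2 = OMEGA + K * math.sin(theta1 - theta2 - math.pi)
    theta1 += DT * d1
    theta2 += DT * d2

phi = theta2 - theta1        # phase difference between the two legs
print(round(phi, 2))         # converges to pi: the legs alternate
```

The phase difference obeys dphi/dt = 2K sin(phi), so phi = 0 (in-phase) is unstable and phi = pi (antiphase) is stable: any small initial asymmetry grows into a stable alternating gait.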
From philh at cogs.susx.ac.uk Mon Jan 8 12:46:21 1996 From: philh at cogs.susx.ac.uk (Phil Husbands) Date: Mon, 8 Jan 1996 17:46:21 +0000 (GMT) Subject: Tutorial on Alife and Adaptive Behaviour Message-ID: AISB96 Workshop and Tutorial Series 31 March-2 April 1996 University of Sussex Falmer, Brighton, UK One day Tutorial: Artificial Life and Adaptive Behaviour --------------------------------------- Date of Tutorial: 31st March 1996 Presenter(s) - Dave Cliff and Phil Husbands School of Cognitive and Computing Sciences University of Sussex Falmer, Brighton BN1 9QH Email: davec or philh @cogs.susx.ac.uk -------------------------------------------------------------------------- Description ----------- This tutorial will provide an introduction to the burgeoning fields of Artificial Life and Adaptive Behaviour. Artificial Life is concerned with the use of computational methods to both model and synthesize phenomena normally associated with living systems. The related, but more focused, discipline of Adaptive Behaviour brings together ideas from a range of disciplines, such as ethology, cognitive science and robotics, to further our understanding of the behaviours and underlying mechanisms that allow animals and, potentially, robots to survive in uncertain environments. Topics to be covered include: the historical roots of Artificial Life and Adaptive Behaviour; Strong Alife and Weak Alife; principles of behaviour-based robotics; artificial evolution and its application to autonomous robotics; modelling and synthesizing neural and other learning mechanisms for autonomous agents; collective behaviour; artificial worlds; software agents; understanding the origins of life; applications; the philosophical implications of these approaches. The material will be presented in lecture format with liberal use of video, computer and robot demonstrations.
Although only key work will be discussed, extensive bibliographies and suggestions for further reading will be provided along with lecture notes and other supporting literature. -------------------------------------------------------------------------- Prerequisites: None -------------------------------------------------------------------------- Tutorial Numbers: Maximum of 50 (constrained by room size) -------------------------------------------------------------------------- Audience: Anyone who thinks the tutorial description sounds interesting and is willing to part with the cash. They won't be sorry. -------------------------------------------------------------------------- Tutorial Fees: Tutorial fees include course materials, refreshments and lunch. All prices are in pounds Sterling. (AISB student fees are in parentheses.) Early Registration Deadline: 1 March 1996

                       AISB MEMBERS       NON-AISB MEMBERS
  1 Day Tutorial       80.00 (55.00)      100.00
  Late Registration    100.00 (75.00)     120.00

For Full Details of Registration please contact: AISB96 Local Organisation COGS University of Sussex Falmer, Brighton, BN1 9QH Tel: +44 1273 678448 Fax: +44 1273 671320 Email: aisb at cogs.susx.ac.uk ========================================================================== From KOKINOV at BGEARN.BITNET Mon Jan 8 17:16:11 1996 From: KOKINOV at BGEARN.BITNET (Boicho Kokinov) Date: Mon, 08 Jan 96 17:16:11 BG Subject: Graduate study in CogSci Message-ID: The Department of Cognitive Science at the New Bulgarian University offers the following degrees: Post-Graduate Diploma, M.Sc., Ph.D. FEATURES Teaching in English both in the regular courses at NBU and in the intensive courses at the Annual International Summer Schools. Strong interdisciplinary program covering Psychology, Artificial Intelligence, Neurosciences, Linguistics, Philosophy, Mathematics, Methods.
Theoretical and experimental research in integration of the symbolic and connectionist approaches, emergent hybrid cognitive architectures, models of memory and reasoning, analogy, vision, imagery, agnosia, language and speech processing, aphasia. Advisors: at least two advisors with different backgrounds, possibly one external international advisor. International dissertation committee. INTERNATIONAL ADVISORY BOARD Elizabeth Bates (UCSD, USA), Amedeo Cappelli (CNR, Italy), Cristiano Castelfranchi (CNR, Italy), Daniel Dennett (Tufts University, USA), Charles De Weert (University of Nijmegen, Holland), Christian Freksa (Hamburg University, Germany), Dedre Gentner (Northwestern University, USA), Christopher Habel (Hamburg University, Germany), Douglas Hofstadter (Indiana University, USA), Joachim Hohnsbein (University of Dortmund, Germany), Keith Holyoak (UCLA, USA), Mark Keane (Trinity College, Ireland), Alan Lesgold (University of Pittsburgh, USA), Willem Levelt (Max-Planck Institute of Psycholinguistics, Holland), Ennio De Renzi (University of Modena, Italy), David Rumelhart (Stanford University, USA), Richard Shiffrin (Indiana University, USA), Paul Smolensky (University of Colorado, USA), Chris Thornton (University of Sussex, England), Carlo Umilta' (University of Padova, Italy) ADMISSION REQUIREMENTS B.Sc. degree in psychology, computer science, linguistics, philosophy, neurosciences, or related fields. Good command of English. Full scholarships available to students from Eastern and Central Europe. Address: Cognitive Science Department, New Bulgarian University, 21 Montevideo Str.
Sofia 1635, Bulgaria, tel.: (+3592) 55-80-65 fax: (+3592) 54-08-02 e-mail: kokinov at bgearn.acad.bg From marshall at cs.unc.edu Mon Jan 8 14:55:33 1996 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Mon, 8 Jan 1996 15:55:33 -0400 Subject: PhD studies in neural networks & vision at UNC-Chapel Hill Message-ID: <199601081955.PAA14274@marshall.cs.unc.edu> ---------------------------------------------------------------------------- PH.D. STUDIES IN NEURAL NETWORKS AND VISION at the University of North Carolina at Chapel Hill ---------------------------------------------------------------------------- Program: M.S./Ph.D. in Computer Science and other departments Faculty with NN-related research interests: Approx. 40 in the Departments of Computer Science, Psychology, Neurobiology, Biomedical Engineering, Speech and Hearing Sciences, Linguistics, Physiology, Pharmacology, and Cell Biology. Personal description of program: I would like to encourage students who are interested in neural networks to apply to our department. There are also about 40 faculty members in various departments here at UNC-Chapel Hill who are doing research in NN-related areas. We are developing a diverse NN community in the area universities, research organizations, and corporations. My own areas of interest include: self-organizing neural networks, visual perception, and sensorimotor integration. My students are currently working on several NN projects involving visual depth, motion, orientation, transparency, and segmentation. I see neural networks as a truly interdisciplinary field, which includes the study of neuroscience, perceptual and behavioral psychology, computer science, and mathematics. I encourage my students to pursue broad knowledge in all areas related to neural networks. The Department of Computer Science has excellent facilities for research in computational aspects of neural networks, especially as applied to problems in vision.
Facilities in other departments are also used for NN-related research. For application materials, send a message to admit at cs.unc.edu. If you would like more information, write to me or Janet Jones (jones at cs.unc.edu) at this department. NOTE: The application deadline for Fall 1996 is JANUARY 31, 1996. Courses: Behavior and its Biological Bases; Behavioral Pharmacology; Cognitive Development; Computer Vision; Conditioning and Learning; Development of Language; Developmental Neurobiology; Developmental Theory; Digital Signal Processing; Experimental Neurophysiology; Human Cognitive Abilities; Human Learning; Intro to Neural Networks; Learning Theory and Practice; Memory; Neural Information Processing; Neuroanatomy; Neural Networks and Vision; Neural Networks and Control; Neurochemistry of Action; Optimal Control Theory; Physiological Psychology; Picture Processing and Pattern Recognition; Robotics; Sensory Processes; Statistical Pattern Recognition; Synaptic Pharmacology; Visual Perception; Visual Solid Shape; VLSI Design (Analog VLSI). There are numerous researchers locally in NNs and allied fields at UNC-Chapel Hill UNC-Charlotte NC State University NC A&T State University Duke University Microelectronics Center of NC NC Supercomputer Center Research Triangle Institute Army Research Office IBM Bell Northern Research SAS Institute Computing resources (primarily UNIX machines) include: Department of Computer Science - Hewlett-Packard J210 computers - MasPar MP-1 (similar to Connection Machine) - Numerous workstations & minicomputers (DEC, Sun, Mac) - Microelectronics Systems Lab - Graphics and Image Lab NC Supercomputer Center - Cray Y-MP - IBM 3090 - Visualization lab UNC Academic Computing Service - Convex C-220 - IBM 4381 - Several VAX computers Other UNC Departments - Vision research labs (Psychology, Radiology) - Neuroscience research labs (Neurobiology, Physiology) Other resources: o An effort to initiate a graduate program on "Computational and Neurobiological Models of Cognition" is
underway at UNC-Chapel Hill. o Several research groups on NNs or vision meet regularly in the area. o The graduate neurobiology programs at UNC-Chapel Hill and at nearby Duke University have several faculty members with research interests in vision and in "systems" neuroscience. o The Whitaker Foundation has recently provided funds to enhance an interdisciplinary research program on "Engineering in Systems Neuroscience" at UNC-Chapel Hill. The program involves researchers from biomedical engineering, physiology, psychology, computer science, statistics, and other departments. o Vision is a major research area in the Computer Science department at UNC-CH, with several faculty members in human visual perception, computer vision, image processing, and computer graphics. o The Triangle Area Neural Network Society holds a colloquium series and sponsors other NN-related activities in the local area. Jonathan A. Marshall, Dept. of Computer Science, CB 3175, Sitterson Hall, Univ. of North Carolina, Chapel Hill, NC 27599-3175, USA marshall at cs.unc.edu http://www.cs.unc.edu/~marshall Office +1-919-962-1887 Fax +1-919-962-1799 From terry at salk.edu Mon Jan 8 15:52:05 1996 From: terry at salk.edu (Terry Sejnowski) Date: Mon, 8 Jan 96 12:52:05 PST Subject: Neural Computation 8:1 Message-ID: <9601082052.AA06235@salk.edu> NEURAL COMPUTATION Vol 8, Issue 1, January 1996 Article: Lower Bounds for the Computational Power of Networks of Spiking Neurons Wolfgang Maass Note: A Short Proof of the Posterior Probability Property of Classifier Neural Networks Raul Rojas Letters: Coding of Time-Varying Signals in Spike Trains of Integrate-and-Fire Neurons with Random Threshold Fabrizio Gabbiani and Christof Koch A Simple Spike Train Decoder Inspired by the Sampling Theorem William B Levy and David A. August A Model of Spatial Map Formation in the Hippocampus of the Rat Kenneth I. Blum and L. F.
Abbott A Neural Model of Olfactory Sensory Memory in the Honeybee's Antennal Lobe Christiane Linster and Claudine Masson A Spherical Basis Function Neural Network for Modeling Auditory Space Rick L. Jenison and Kate Fissell On the Convergence Properties of the EM Algorithm for Gaussian Mixtures Lei Xu and Michael I. Jordan A Comparison of Some Error Estimates for Neural Network Models Robert Tibshirani Neural Networks for Optimal Approximation of Smooth and Analytic Functions H. N. Mhaskar Equivalence of Boltzmann Chains and Hidden Markov Models David J. C. MacKay Diagrammatic Derivation of Gradient Algorithms for Neural Networks Eric A Wan and Francoise Beaufays Does Extra Knowledge Necessarily Improve Generalization? David Barber and David Saad ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $220 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 ----- From mel at quake.usc.edu Mon Jan 8 00:42:49 1996 From: mel at quake.usc.edu (Bartlett Mel) Date: Mon, 8 Jan 1996 13:42:49 +0800 Subject: Preprint Available Message-ID: <9601082142.AA05338@quake.usc.edu> Announcing a new preprint, available at: url=ftp://quake.usc.edu/pub/mel/papers/mel.seemore.TR96.ps.gz (22 pages, 1.1M compressed, 34M uncompressed) Sorry, no hardcopies. Problems downloading/printing? Please notify author at mel at quake.usc.edu. -------------------------------------- SEEMORE: Combining Color, Shape, and Texture Histogramming in a Neurally-Inspired Approach to Visual Object Recognition Bartlett W.
Mel Department of Biomedical Engineering University of Southern California, MC 1451 Los Angeles, California 90089 ABSTRACT Severe architectural and timing constraints within the primate visual system support the hypothesis that the early phase of object recognition in the brain is based on a feedforward feature-extraction hierarchy. A neurally-inspired feature-space model was developed, called SEEMORE, to explore the representational tradeoffs that arise when a feedforward neural architecture is faced with a difficult 3-D object recognition problem. SEEMORE is based on 102 feature channels that emphasize localized, quasi-viewpoint-invariant nonlinear receptive-field-style filters, and which are as a group sensitive to multiple visual cues (contour, texture, and color). SEEMORE's visual world consists of 100 objects of many different types, including rigid (shovel), non-rigid (telephone cord), and statistical (maple leaf cluster) objects, and photographs of complex scenes. Objects were individually presented in color video images under stable lighting conditions. Based on 12-36 training views, SEEMORE was required to recognize test views of objects that could vary in position, orientation in the image plane and in depth, and scale (factor of 2); for non-rigid objects, recognition was also tested under gross shape deformations. Correct classification performance on a test set consisting of 600 novel object views was 97% (chance was 1%), and was comparable for the subset of 15 non-rigid objects. Performance was also measured under a variety of image degradation conditions, including partial occlusion, limited clutter, color-shift, and additive noise. Generalization behavior and classification errors illustrate the emergence of several striking natural shape categories that are not explicitly encoded in the dimensions of the feature space. From koza at CS.Stanford.EDU Mon Jan 8 16:58:26 1996 From: koza at CS.Stanford.EDU (John R.
Koza) Date: Mon, 8 Jan 1996 13:58:26 -0800 (PST) Subject: GP-96 Jan 15 Weather Extension  Message-ID: <199601082158.NAA16525@Sunburn.Stanford.EDU> In view of the weather today (and likely continuing weather problems for the next few days on the East Coast of the US), we are extending the deadline for submitting papers to 5 PM Monday January 15, 1996 for ARRIVAL at the AAAI offices in California. Please send your submissions ONLY to the following address: GP-96 Conference c/o American Association for Artificial Intelligence 445 Burgess Drive Menlo Park, CA 94025 USA Please be sure to mark the package "GP-96 Conference." If anyone sent a package to me at Stanford, please notify me separately so I can look for it. The Post Office refuses to deliver mail to the new CSD Building and there are many unopened mail bags at this moment. Best wishes to the snowy East Coast. John Koza From marwan at sedal.usyd.edu.AU Tue Jan 9 18:21:32 1996 From: marwan at sedal.usyd.edu.AU (Marwan A. Jabri, Sydney Univ. Elec. Eng., Tel: +61-2 692 2240) Date: Wed, 10 Jan 1996 10:21:32 +1100 Subject: new book Message-ID: <199601092321.KAA00723@cortex.su.OZ.AU> NEW BOOK ADAPTIVE ANALOGUE VLSI NEURAL SYSTEMS M.A. Jabri, R.J. Coggins, and B.G. Flower This is the first practical book on neural-network learning chips and systems. It covers the entire process of implementing neural networks in VLSI chips, beginning with the crucial issues of learning algorithms in an analog framework and limited-precision effects, and giving actual case studies of working systems. The approach is systems- and applications-oriented throughout, demonstrating the attractiveness of such an approach for applications such as adaptive pattern recognition and optical character recognition. Prof. Jabri and his co-authors from AT&T Bell Laboratories, Bellcore and the University of Sydney provide a comprehensive introduction to VLSI neural networks suitable for research and development staff and advanced students.
Key benefits to reader: o covers system aspects o examines on-chip learning o deals with the effect of the limited precision of VLSI techniques o covers the issue of low-power implementation of chips with learning synapses Book ordering info: December 1995: 234X156: 272pp, 135 line illus, 7 halftone illus: Paperback: 0-412-61630-0: L29.95 CHAPMAN & HALL 2-6 Boundary Row, London, SE1 8HN, U.K. Telephone: +44-171-865 0066 Fax: +44-171-522 9623 Contents 1 Overview 2 Architectures and Learning Algorithms 2.1 Introduction 2.2 Framework 2.3 Learning 2.4 Perceptrons 2.5 The Multi-Layer Perceptron 2.6 The Backpropagation Algorithm 2.7 Comments 3 MOS Devices and Circuits 3.1 Introduction 3.2 Basic Properties of MOS Devices 3.3 Conduction in MOSFETs 3.4 Complementary MOSFETs 3.5 Noise in MOSFETs 3.6 Circuit Models of MOSFETs 3.7 Simple CMOS Amplifiers 3.8 Multistage Op Amps 3.9 Choice of Amplifiers 3.10 Data Converters 4 Analog VLSI Building Blocks 4.1 Functional Designs to Architectures 4.2 Neurons and Synapses 4.3 Layout Strategies 4.4 Simulation Strategies 5 Kakadu - A Low Power Analog VLSI MLP 5.1 Advantages of Analog Implementation 5.2 Architecture 5.3 Implementation 5.4 Chip Testing 5.5 Discussion 6 Analog VLSI Supervised Learning 6.1 Introduction 6.2 Learning in an Analog Framework 6.3 Notation 6.4 Weight Update Strategies 6.5 Learning Algorithms 6.6 Credit Assignment Efficiency 6.7 Parallelisation Heuristics 6.8 Experimental Methods 6.9 ICEG Experimental Results 6.10 Parity 4 Experimental Results 6.11 Discussion 6.12 Conclusion 7 A Micropower Neural Network 7.1 Introduction 7.2 Architecture 7.3 Training System 7.4 Classification Performance and Power Consumption 7.5 Discussion 7.6 Conclusion 8 On-Chip Perturbation-Based Learning 8.1 Introduction 8.2 On-Chip Learning Multi-Layer Perceptron 8.3 On-Chip Learning Recurrent Neural Network 8.4 Conclusion 9 Analog Memory Techniques 9.1 Introduction 9.2 Self-Refreshing Storage Cells 9.3 Multiplying DACs 9.4 A/D-D/A Static
Storage Cell 9.5 Basic Principle of the Storage Cell 9.6 Circuit Limitations 9.7 Layout Considerations 9.8 Simulation Results 9.9 Discussion 10 Switched Capacitor Techniques 10.1 A Charge-Based Network 10.2 Variable Gain, Linear, Switched Capacitor Neurons 11 NET32K High Speed Image Understanding System 11.1 Introduction 11.2 The NET32K Chip 11.3 The NET32K Board System 11.4 Applications 11.5 Summary and Conclusions 12 Boltzmann Machine Learning System 12.1 Introduction 12.2 The Boltzmann Machine 12.3 Deterministic Learning by Error Propagation 12.4 Mean-field Version of Boltzmann Machine 12.5 Electronic Implementation of a Boltzmann Machine 12.6 Building a System using the Learning Chips 12.7 Other Applications References Index From haussler at cse.ucsc.edu Tue Jan 9 20:49:22 1996 From: haussler at cse.ucsc.edu (David Haussler) Date: Tue, 9 Jan 1996 17:49:22 -0800 Subject: new paper available Message-ID: <199601100149.RAA08733@arapaho.cse.ucsc.edu> A new paper by D. Haussler and M. Opper entitled Mutual Information, Metric Entropy, and Risk in Estimation of Probability Distributions is available on the web at http://www.cse.ucsc.edu/~sherrod/ml/research.html An abstract is given below (for those who read LaTeX) -David ___________________ Abstract: $\{P_{Y|\theta}: \theta \in \Theta\}$ is a set of probability distributions (with a common dominating measure) on a complete separable metric space $Y$. A state $\theta^* \in \Theta$ is chosen by Nature. A statistician gets $n$ independent observations $Y_1, \ldots, Y_n$ distributed according to $P_{Y|\theta^*}$ and produces an estimated distribution $\hat{P}$ for $P_{Y|\theta^*}$. The statistician suffers a loss based on a measure of the distance between the estimated distribution and the true distribution. We examine the Bayes and minimax risk of this game for various loss functions, including the relative entropy, the squared Hellinger distance, and the $L_1$ distance.
We also look at the cumulative relative entropy risk over the distributions estimated during the first $n$ observations. Here the Bayes risk is the mutual information between the random parameter $\Theta^*$ and the observations $Y_1, \ldots, Y_n$. New bounds on this mutual information are given in terms of the Laplace transform of the Hellinger distance between $P_{Y|\theta}$ and $P_{Y|\theta^*}$. From these, bounds on the minimax risk are given in terms of the metric entropy of $\Theta$ with respect to the Hellinger distance. The assumptions required for these bounds are very general and do not depend on the choice of the dominating measure. They apply to both finite and infinite dimensional $\Theta$. They apply in some cases where $Y$ is infinite dimensional, in some cases where $Y$ is not compact, in some cases where the distributions are not smooth, and in some parametric cases where asymptotic normality of the posterior distribution fails. From geoff at salk.edu Wed Jan 10 14:30:33 1996 From: geoff at salk.edu (Geoff Goodhill) Date: Wed, 10 Jan 96 11:30:33 PST Subject: NIPS preprint available Message-ID: <9601101930.AA27087@salk.edu> The following NIPS preprint is available via ftp://salk.edu/pub/geoff/goodhill_nips96.ps.Z or http://cnl.salk.edu/~geoff OPTIMIZING CORTICAL MAPPINGS Geoffrey J. Goodhill(1), Steven Finch(2) & Terrence J. Sejnowski(3) (1) The Salk Institute for Biological Studies 10010 North Torrey Pines Road, La Jolla, CA 92037, USA (2) Human Communication Research Centre University of Edinburgh, 2 Buccleuch Place Edinburgh EH8 9LW, GREAT BRITAIN (3) The Howard Hughes Medical Institute The Salk Institute for Biological Studies 10010 North Torrey Pines Road, La Jolla, CA 92037, USA & Department of Biology University of California San Diego, La Jolla, CA 92037, USA, ABSTRACT ``Topographic'' mappings occur frequently in the brain. 
A popular approach to understanding the structure of such mappings is to map points representing input features in a space of a few dimensions to points in a two-dimensional space using some self-organizing algorithm. We argue that a more general approach may be useful, where similarities between features are not constrained to be geometric distances, and the objective function for topographic matching is chosen explicitly rather than being specified implicitly by the self-organizing algorithm. We investigate analytically an example of this more general approach applied to the structure of interdigitated mappings, such as the pattern of ocular dominance columns in primary visual cortex. From hilario at cui.unige.ch Mon Jan 8 06:39:21 1996 From: hilario at cui.unige.ch (Melanie Hilario) Date: Mon, 8 Jan 1996 12:39:21 +0100 Subject: Please send via connectionists mailing list Message-ID: <1710*/S=hilario/OU=cui/O=unige/PRMD=switch/ADMD=400net/C=ch/@MHS> ------------------------------------------------------------------------------- Neural Networks and Structured Knowledge (NNSK) Call for Contributions ECAI '96 Workshop to be held on August 12/13, 1996 during the 12th European Conference on Artificial Intelligence from August 12-16, 1996 in Budapest, Hungary Contributions are invited for the workshop "Neural Networks and Structured Knowledge" to be held in conjunction with ECAI'96 in Budapest, Hungary. ------------------------------------------------------------------------------- Description of the Workshop Neural networks are mostly used for tasks dealing with information presented in vector or matrix form, without a rich internal structure reflecting relations between different entities. In some application areas, e.g. speech processing or forecasting, types of networks have been investigated for their ability to represent sequences of input data.
Although approaches that use neural networks for the representation and processing of structured knowledge have been around for quite some time, especially in the area of connectionism, they frequently suffer from problems with expressiveness, knowledge acquisition, adaptivity and learning, or human interpretation. In recent years much progress has been made in the theoretical understanding and the construction of neural systems capable of representing and processing structured knowledge in an adequate way, while maintaining essential capabilities of neural networks such as learning, tolerance of noise, treatment of inconsistencies, and parallel operation. The goal of this workshop is twofold: on the one hand, existing mechanisms are critically examined with respect to their suitability for the acquisition, representation, processing and interpretation of structured knowledge. On the other hand, new approaches, especially concerning the design of systems based on such mechanisms, are presented, with particular emphasis on their application to realistic problems.
The following topics lie within the intended scope of the workshop: Concepts and Methods: * extraction, injection and refinement of structured knowledge from, into and by neural networks * inductive discovery/formation of structured knowledge * combining symbolic machine learning techniques with neural learning paradigms to improve performance * classification, recognition, prediction, matching and manipulation of structured information * neural methods that use or discover structural similarities * neural models to infer hierarchical categories * structuring of network architectures: methods for introducing coarse-grained structure into networks, unsupervised learning of internal modularity Application Areas: * medical and technical diagnosis: discovery and manipulation of structured dependencies, constraints, explanations * molecular biology and chemistry: prediction of molecular structure unfolding, classification of chemical structures, DNA analysis * automated reasoning: robust matching, manipulation of logical terms, proof plans, search space reduction * software engineering: quality testing, modularisation of software * geometrical and spatial reasoning: robotics, structured representation of objects in space, figure animation, layout of objects * other applications that use, generate or manipulate structures with neural methods: structures in music composition, legal reasoning, architectures, technical configuration, ... The list of topics and potential application areas above indicates an important tendency towards neural networks which are capable of dealing with structured information. This can be done on an internal level, where one network is used to represent and process knowledge for a task, or on a higher level as in modular neural networks, where the structure may be represented by the relations between the modules.
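To make the notion of a network that processes a structure directly (rather than a flat feature vector) concrete, here is a minimal sketch of a recursive unit that folds a labelled tree bottom-up into a single activation. This is a generic illustration of the internal-level idea only, not the mechanism of any particular system mentioned in this call; all names and weight values are our own:

```python
import math

def recursive_code(node, W, V, b):
    """Fold a tree bottom-up with a single tanh unit: each node's code
    combines its own feature vector with the (recursively computed)
    codes of its children.  node = (features, children)."""
    feats, children = node
    # weighted sum of the node's own label/features
    s = b + sum(w * x for w, x in zip(W, feats))
    # add a weighted contribution from each child's code
    for child in children:
        s += V * recursive_code(child, W, V, b)
    return math.tanh(s)

# Example: a small labelled tree  f(a, g(b)), one feature per node
tree = ([1.0], [([0.5], []), ([0.2], [([0.7], [])])])
code = recursive_code(tree, W=[0.3], V=0.4, b=0.0)  # a single value in (-1, 1)
```

A real system would use vectors of such units with learned weights, but the bottom-up recursion over the structure is the essential point.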
The central theme of this workshop will be the treatment of structured information using neural networks, independent of the particular network type or processing paradigm. Thus the workshop theme is orthogonal to the issue of connectionist/symbolic integration, and is not intended as a continuation of the more philosophically oriented discussion of symbolic vs. subsymbolic representation and processing. Workshop Format Our hope is to attract 20-30 people for the workshop; the maximum will be 40. The setup of the workshop is specifically designed to encourage an informal and interactive atmosphere (not a mini-conference with a number of formal talks and 2 minutes of questions after a talk). The workshop will be based on the following points: * Talks will have break-points where audience participation is requested * For each talk, at least two organizers or participants will be acting as commentators * There will be discussion sessions specifically devoted to a particular topic of interest, with mandatory contributions from the participants If time permits, one session could be plenary, and another in small groups. The plenary session would discuss a broad topic of general interest, e.g. benefits and problems of different approaches to using neural networks for the representation and processing of structured knowledge. The group sessions would concentrate on specific application areas. * Preprints of the contributions will be made available to the participants electronically in advance * Statements of interest as well as the willingness to act as commentators for other participants' talks are requested from the participants * Self-Introduction of participants at the beginning of the workshop Organizing Committee Franz Kurfess (chair) New Jersey Institute of Technology, Newark, USA Daniel Memmi LIFIA-IMAG Grenoble, France Andreas Kuechler Universitaet Ulm, Germany Arnaud Giacometti Universite de Tours, France Contact Prof.
Franz Kurfess Computer and Information Sciences Dept. New Jersey Institute of Technology Newark, NJ 07102, U.S.A. Voice : +1/201-596-5767 Fax : +1/201-596-5767 E-mail: franz at cis.njit.edu Program Committee* Venkat Ajjanagadde - University of Minnesota, Minneapolis Ethem Alpaydin - Bogazici University C. Lee Giles - NEC Research Institute Melanie Hilario - University of Geneva (co-chair) Steffen Hoelldobler - TU Dresden Mirek Kubat - University of Ottawa Guenther Palm - Universitaet Ulm Hava Siegelman - Technion (Israel Institute of Technology) Alessandro Sperduti - University of Pisa (co-chair) * Tentative list--names of other PC members will be added as confirmations come in. Submission of Papers Contributions should be received no later than March 15. Papers should be no longer than 8 pages; preferred format is one column and of A4 (8 1/2" x 11") size with 3/4" margins all round. The first page of a contribution must contain the following information: title, author(s) name and affiliation, mailing address, phone and fax number, e-mail address, an abstract of ca. 300 words, and three to five keywords. All submissions will be acknowledged by electronic mail; correspondence will be sent to the first author. All submitted papers will be reviewed by at least two members of the program committee. In addition to the technical quality of a submission, we will also take into consideration the potential for discussion in order to stimulate the interactive character of the workshop. If possible, accepted papers will be made electronically available to participants in advance. Workshop proceedings will be distributed to participants by ECAI organizers. We are also currently in negotiations with publishers about an edited volume of workshop contributions, or a special issue in a journal. If you intend to submit a paper, please do not hesitate to contact the organizing committee as soon as possible so that the workshop can be planned further.
Electronic submissions are strongly encouraged (see the procedure described below). If you cannot submit your paper electronically (due to technical problems or the lack of technical facilities), please send 3 hardcopies to: Andreas Kuechler Department of Neural Information Processing University of Ulm Oberer Eselsberg 89069 Ulm Germany Participation and Registration Participation without a full contribution is possible. In this case we request a statement of interest (to be sent to the Workshop Chair franz at cis.njit.edu) and the willingness to act as commentator for an accepted contribution, which will be made available in advance. Preference will be given to attendees with a paper. To cover costs, a fee of ECU 50 for each participant of each workshop in addition to the normal ECAI-96 conference registration fee will be charged by the main conference organizers. Please note that attendees of workshops MUST register for the main ECAI conference. Schedule Submission deadline March 15, 1996 Notification of acceptance/rejection April 15, 1996 Final version of papers due May 15, 1996 Deadline for participation without paper June 15, 1996 Date of the workshop August 12/13, 1996 ------------------------------------------------------------------------------- Electronic submission procedure This is a two-step procedure: 1. Please send an email with the subject 'nnsk-submission' to nnsk-submission at neuro.informatik.uni-ulm.de in ASCII-format with the title, author name(s) and affiliation, mailing address, phone and fax number, e-mail address, an abstract of ca. 300 words, and three to five keywords. Correspondence (unless otherwise indicated) will be sent to the first author. Specify how you will send your paper (ftp or e-mail). Papers should be submitted in Postscript-format (please avoid exotic or out-of-date systems for the generation of the postscript file). UNIX-file format is preferred, large files should be compressed (using 'compress' or 'gzip'). 
If you choose the ftp option, please use the file-name .ps and add this name to your email. 2. There will be two alternatives: * ftp Option: Connect via anonymous ftp to neuro.informatik.uni-ulm.de and 'put' your file in the incoming/nnsk-submission directory (please note that this directory is set to write-only). Here is an example of how to upload a file: unix> gzip Andreas.Kuechler.ps unix> ftp neuro.informatik.uni-ulm.de Connected to neuro.informatik.uni-ulm.de. 220 neuro FTP server (SunOS 4.1) ready. Name (neuro.informatik.uni-ulm.de:andi): ftp 331 Guest login ok, send ident as password. Password: 230 Guest login ok, access restrictions apply. ftp> cd incoming/nnsk-submission 250 CWD command successful. ftp> bin 200 Type set to I. ftp> put Andreas.Kuechler.ps.gz 200 PORT command successful. 150 Binary data connection for Andreas.Kuechler.ps.gz (134.60.73.27,2493). 226 Binary Transfer complete. local: Andreas.Kuechler.ps.gz remote: Andreas.Kuechler.ps.gz 54800 bytes sent in 0.12 seconds (4.5e+02 Kbytes/s) ftp> bye 221 Goodbye. unix> * e-mail Option: Send an e-mail with the subject 'paper: ' to nnsk-submission at neuro.informatik.uni-ulm.de and include your postscript-file .ps (please avoid exotic email-formats and mailers). Be sure to 'uuencode' compressed files before sending them. Here is an example of how to send a (compressed and uuencoded) file (via UNIX): unix> gzip Andreas.Kuechler.ps unix> uuencode Andreas.Kuechler.ps.gz Andreas.Kuechler.ps.gz | mail -s 'paper: mytitle/Andreas.Kuechler' nnsk-submission at neuro.informatik.uni-ulm.de unix> ------------------------------------------------------------------------------- Latest information can be retrieved from the NNSK WWW-page http://www.informatik.uni-ulm.de/fakultaet/abteilungen/ni/ECAI-96/NNSK.html. 
From ma_s435 at crystal.king.ac.uk Thu Jan 11 15:39:14 1996 From: ma_s435 at crystal.king.ac.uk (Dimitris Tsaptsinos) Date: Thu, 11 Jan 1996 15:39:14 GMT0BST Subject: Final Call for Papers (EANN96) Message-ID: <701962929@crystal.kingston.ac.uk> Dear colleague, sorry for the unsolicited mail, but we thought this conference might be relevant to you; if so, please look into it, or ask us for more information. Regards Dr Dimitris Tsaptsinos +------------------------------+ Dimitris Tsaptsinos Kingston University Maths Dept. Faculty of Science Penhryn Road Kingston upon Thames Surrey KT1 2EE Tel:0181-5472000 x.2516 Email:ma_s435 at kingston.ac.uk +----------always AEK----------+ -------------- Enclosure number 1 ---------------- International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 Final Call for Papers The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, and environmental engineering. Abstracts of one page (200 to 400 words) should be sent to eann96 at lpac.ac.uk by 21 January 1996 by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Tutorial proposals are also welcome until 21 January 1996. Notification of acceptance will be sent around 15 February. Submissions will be reviewed and the number of full papers will be very limited.
For more information on EANN '96, please see http://www.lpac.ac.uk/EANN96 and for reports on EANN '95, contents of the proceedings, etc. please see http://www.abo.fi/~abulsari/EANN95.html Five special tracks are being organised in EANN '96: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, ersin-tulunay at metu.edu.tr), Mechanical Engineering (A. Scherer, andreas.scherer at fernuni-hagen.de), Robotics (N. Sharkey, N.Sharkey at dcs.shef.ac.uk), and Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at) Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B. Jervis (UK) E. Oja (Finland) H. Liljenstr\"om (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M. Ishikawa (Japan) D. Pearson (France) Registration information for the International Conference on Engineering Applications of Neural Networks (EANN '96) The conference fee will be sterling pounds (GBP) 300 until 28 February, and sterling pounds (GBP) 360 after that. At least one author of each accepted paper should register by 21 March to ensure that the paper will be included in the proceedings. The conference fee can be paid by a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance at the conference and the proceedings. The registration form can be picked up from the WWW (or can be sent to you by e-mail) and can be returned by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid.
For more information, please ask eann96 at lpac.ac.uk From rsun at cs.ua.edu Thu Jan 11 18:21:41 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Thu, 11 Jan 1996 17:21:41 -0600 Subject: AAAI-96 workshop on cognitive modeling Message-ID: <9601112321.AA15316@athos.cs.ua.edu> AAAI-96 Workshop Computational Cognitive Modeling: Source of the Power to be held during AAAI-96, Portland, Oregon, August 4-5, 1996. CALL FOR PAPERS and PARTICIPATION Computational models for various cognitive tasks, such as language acquisition, skill acquisition, and conceptual development, have been extensively studied by cognitive scientists, AI researchers, and psychologists. We attempt to bring researchers from different backgrounds together, and to examine how and why computational models (connectionist, symbolic, or others) are successful in terms of the source of power. The possible sources of power include: -- Representation of the task; -- General properties of the learning algorithm; -- Data sampling/selection; -- Parameters of the learning algorithms. The workshop will focus on, but not be limited to, the following topics, all of which should be discussed in relation to the source of power: -- Proper criteria for judging success or failure of a model. -- Methods for recognizing the source of power. -- Analyses of the success or failure of existing models. -- Presentation of new cognitive models. Potential presenters should submit a paper (maximum 12 pages, 12 point font). We strongly encourage email submissions of text/postscript files; or you may send 4 paper copies to one workshop co-chair: Charles Ling (co-chair) Ron Sun (co-chair) Department of Computer Science Department of Computer Science University of Hong Kong University of Alabama Hong Kong Tuscaloosa, AL 35487 ling at csd.uwo.ca rsun at cs.ua.edu Researchers interested in attending the Workshop only should send a short description of interests to one co-chair by the deadline.
The Workshop will consist of invited talks, presentations, and a poster session. All accepted papers will be included in the Workshop Working Notes. Deadline for submission: March 18, 1996. Notification of acceptance: April 15, 1996. Submission of final versions: May 13, 1996. Program Committee: Charles Ling Ron Sun Pat Langley, Stanford University, langley at flamingo.Stanford.EDU Mike Pazzani, UC Irvine, pazzani at super-pan.ICS.UCI.EDU Tom Shultz, McGill University, shultz at psych.mcgill.ca Paul Thagard, Univ. of Waterloo, pthagard at watarts.uwaterloo.ca Kurt VanLehn, Univ. of Pittsburgh, vanlehn+ at pitt.edu Confirmed invited Speakers: Jeff Elman, Mike Pazzani, Aaron Sloman (For AAAI-96 registration, contact AAAI, 445 Burgess Drive, Menlo Park, CA 94025 or info at aaai.org) From cnna96 at cnm.us.es Thu Jan 11 15:09:13 1996 From: cnna96 at cnm.us.es (4th Workshop on CNN's and Applications) Date: Thu, 11 Jan 96 21:09:13 +0100 Subject: CNNA96 Final Call for papers Message-ID: <9601112009.AA11337@cnm1.cnm.us.es> ******************************************************************************** This information is available on the web at http://www.cica.es/~cnm/cnna96 ******************************************************************************** CNNA-96 FINAL CALL FOR PAPERS FOURTH IEEE INTERNATIONAL WORKSHOP ON CELLULAR NEURAL NETWORKS AND APPLICATIONS June 24-26, 1996 Seville, SPAIN Escuela Superior de Ingenieros de Sevilla Centro Nacional de Microelectronica ******************************************************************************** Organizing Committee: Prof. J.L. Huertas (Chair) Prof. A. Rodriguez-Vazquez Prof. R. Dominguez-Castro Secretary: Dr. S. Espejo Tec. Program: Prof. A. Rodriguez-Vazquez Proceedings: Prof. R. Dominguez-Castro Scientific Committee: Prof. N.N. Aizenberg (Ukraine) Prof. L.O. Chua (U.S.A.) Prof. V. Cimagalli (Italy) Prof. T.G. Clarkson (U.K.) Prof. A.S. Dimitriev (Russia) Prof. M. Hasler (Switzerland) Prof. J. Herault (France) Prof.
J.L. Huertas (Spain) Prof. S. Jankowski (Poland) Prof. J. Nossek (Germany) Prof. J. Pineda de Gyvez (U.S.A.) Prof. V. Porra (Finland) Prof. A. Rodriguez-Vazquez (Spain) Prof. T. Roska (Hungary) Prof. B. Sheu (U.S.A.) Prof. M. Tanaka (Japan) Prof. V. Tavsanoglu (U.K.) Prof. J. Vandewalle (Belgium) Sponsors: IEEE Circuits and Systems Society IEEE Spanish Section ECS (European Circuits Society) ******************************************************************************** GENERAL SCOPE & VENUE The CNNA series of workshops aims to provide a biennial international forum to present and discuss recent advances in the theory, application, and implementation of Cellular Neural Networks. Following the successful conferences in Budapest (1990), Munich (1992), and Rome (1994), the fourth workshop will be hosted by the National Microelectronics Center and the School of Engineering of Seville, in Seville, Spain, on June 24-26, 1996. Seville, the capital of Andalusia, and site of the 1992 Universal Exposition, combines a rich cultural heritage accumulated during its more than 2500 years of history with modern infrastructures in a stable and sunny climate. It boasts a large, prestigious university, several high-technology research centers of the Spanish Council of Research, and many cultural attractions. It is linked to Madrid by high-speed train and has an international airport with several international flights, as well as many connections via Madrid and Barcelona. The workshop will address theoretical and practical issues in Cellular Neural Networks theory, applications, and implementations. The technical program will consist of plenary lectures by experts in selected areas, along with papers and posters submitted by the participants. The official language will be English. ******************************************************************************** PAPER SUBMISSION Papers on all aspects of Cellular Neural Networks are welcome.
Topics of interest include, but are not limited to: Basic Theory Applications Learning Software Implementations and Simulators CNN Computers CNN Chips CNN System Development and Testing Prospective authors are invited to submit summaries of their papers to the Conference Secretariat. Submissions should include a cover page with contact-author's name, affiliation, postal address, phone number, fax number, and E-mail address. Postscript electronic submission may be accepted upon request. The deadline for submission of summaries is February 28, 1996. Notification of acceptance will be sent by mid-April 1996. Authors of accepted papers will be asked to deliver camera-ready versions of their full papers for publication in an IEEE-sponsored proceedings volume. Final papers will be limited to a maximum of 6 pages. Format details will be provided in the author's kit to be sent with the notification of acceptance. ******************************************************************************** CORRESPONDENCE Correspondence should be addressed to: CNNA-96 Secretariat Centro Nacional de Microelectronica Edif. CICA, Avda. Reina Mercedes s/n, E-41012 Sevilla - SPAIN Phone +34-5-4239923. FAX +34-5-4231832 e-mail: cnna96 at cnm.us.es ******************************************************************************** CONFERENCE SITE Hotel Al-Andalus. Four stars; 3 years old; luxury; 300 rooms; huge halls and many conference rooms. Located in the metropolitan area of Seville, by the technical campus of the University of Seville, and linked to the historical center through city buses (10' trip). The agreed conference price of a double room is 10.750 pta. per room and night. This price includes full buffet breakfast for the two guests in the room. Other hotels and rates are also available. For further details see the enclosed Hotel Reservation Form. The official travel agency of the conference is Viajes Universal S.A.
******************************************************************************** REGISTRATION Please see attached Registration Form. Registration includes participation in all sessions, proceedings, coffee breaks and lunches. Full registration includes the welcome cocktail and conference banquet. ******************************************************************************** CNNA-96 WWW PAGE http://www.cica.es/~cnm/cnna96 ******************************************************************************** AUTHOR'S SCHEDULE Submission of summaries: February 28, 1996 Notification of acceptance: April 15, 1996 Reception of camera-ready papers: May 15, 1996 ******************************************************************************** REGISTRATION FORM Last Name:________________________________________________________ First Name:_______________________________________________________ Institution:______________________________________________________ Mailing address:__________________________________________________ Street:_________________________________________________________ City:___________________________________________________________ State/Country:__________________________________________________ zip code:_______________________________________________________ Phone:____________________________________________________________ Fax: _____________________________________________________________ e-mail:___________________________________________________________ __ I intend to submit (have submitted) a paper entitled: _____________________ _______________________________________________________________________ Please Check where applicable: ------------------------------------------------------------------------------- | REGISTRATION FEES | BEFORE MAY 15 | AFTER MAY 15 | |-----------------------------------------|-----------------|-----------------| | Full Registration | 46.000 pta. __ | 51.000 pta. 
__ | |-----------------------------------------|-----------------|-----------------| | Full Registration (IEEE/ECS Members) (*)| 39.000 pta. __ | 44.000 pta. __ | |-----------------------------------------|-----------------|-----------------| | Full-time Students (**) | 17.000 pta. __ | 23.000 pta. __ | ------------------------------------------------------------------------------- (*) IEEE __ / ECS __ member number _____________________________ (**) Please enclose letter of certification from Department chairperson. Spouse/Guest: Welcome Cocktail 2,750 pta. __ Conference Banquet 6,000 pta. __ Last Name:_______________________________________________________ First Name:______________________________________________________ Registration fees may be paid (please check one): By check __ or bank-transfer __ to: BANCO ESPANOL DE CREDITO Avda. Reina Mercedes, 27. E-41012 Sevilla Acct. #: 0030-8443-90-0865291273 By credit card: VISA __ or Master-Card __ Card-holder's name: ________________________________ Card number: ________________________________ Expiration date: ________________________________ Signature: ________________________________ Date (d/m/y): ________________________________ Total amount due: __________ Please check if you need a receipt of payment: __ ******************************************************************************** CNNA-96 HOTEL RESERVATION FORM -------------------------------------------------------------------------------- HOTEL NAME | ADDRESS & PHONE | PRICE (*) | COMMENTS | | Double/Single | -------------|---------------------|---------------|---------------------------- AL-ANDALUS | Av. La Palmera s/n | | (****) | 41012 Sevilla | 10.750/ 8.400 | Conference site | Ph. +34-5-4230600 | | -------------|---------------------|---------------|---------------------------- NH CIUDAD DE | Av. Manuel Siurot 25| | Within walking SEVILLA | 41013 Sevilla | 12.500/11.300 | distance of conference (****) | Ph. 
+34-5-4230505 | | site (10' walk) -------------|---------------------|---------------|---------------------------- FERNANDO III | C/ San Jose 21 | | Located in the city center (***) | 41004 Sevilla | 8.100/6.900 | Connected by metropolitan | Ph. +34-5-4217307 | | buses to conf. site (30') -------------|---------------------|---------------|---------------------------- DUCAL | Pza. Encarnacion 19 | | Located in the city center (**) | 41003 Sevilla | 6.900/4.800 | Connected by metropolitan | Ph. +34-5-4215107 | | buses to conf. site (30') -------------------------------------------------------------------------------- (*) Prices include full buffet breakfast and local taxes. They are in spanish pesetas, per room and night. Please mail, fax or phone to: VIAJES UNIVERSAL S.A. Luis de Morales, 1- 41005 Sevilla, SPAIN Phone #: +34-5-4581653 Fax #: +34-5-4575689 Last Name:__________________________________________________________________ First Name:_________________________________________________________________ Address:____________________________City:___________________________________ Postal Code:________________________Phone #:____________Fax #:______________ Hotel Name:_____________________________________________________ Total Number of Rooms:___________Doubles_________Singles________ Arrival date________________Departure date______________________ Total Number of nights:_______Total amount due:_________________ Please check here if you wish the travel agency to arrange room-sharing with another participant _____ Payment: At least seven days before the arrival. Bank transfer to: VIAJES UNIVERSAL s.a. Account # 0030-4223-10-0011107 271. Banco Espanol de Credito, c/ Luis Montoto, 85 - 41005 Sevilla. 
******************************************************************************** From perso at DI.Unipi.IT Thu Jan 11 12:53:15 1996 From: perso at DI.Unipi.IT (Alessandro Sperduti) Date: Thu, 11 Jan 1996 18:53:15 +0100 (MET) Subject: new TR available Message-ID: <199601111753.SAA01494@neuron.di.unipi.it> Technical Report available: Comments are welcome! ****************************************************** * FTP-host: ftp.di.unipi.it FTP-filename: pub/Papers/perso/SPERDUTI/tr-16-95.ps.gz ****************************************************** @TECHREPORT{tr-16/95, AUTHOR = {A. Sperduti and A. Starita}, TITLE = {Supervised Neural Networks for the Classification of Structures}, INSTITUTION = {Dipartimento di Informatica, Universit\`{a} di Pisa}, YEAR = {1995}, NUMBER = {TR-16/95} } Abstract: Up to now, neural networks have been used for classification of unstructured patterns and sequences. Dealing with complex structures, however, standard neural networks, as well as statistical methods, are usually believed to be inadequate because of their feature-based approach. In fact, feature-based approaches usually fail to give satisfactory solutions because of the sensitivity of the approach to the a priori selected features and its inability to represent any specific information on the relationships among the components of the structures. In contrast, we show that neural networks can represent and classify structured patterns. The key idea underpinning our approach is the use of the so-called "complex recursive neuron". A complex recursive neuron can be understood as a generalization to structures of a recurrent neuron. By using complex recursive neurons, basically all the supervised networks developed for the classification of sequences, such as Back-Propagation Through Time networks, Real-Time Recurrent networks, Simple Recurrent Networks, Recurrent Cascade Correlation networks, and Neural Trees can be generalized to structures.
The results obtained by some of the above networks (with complex recursive neurons) on classification of logic terms are presented. * No hardcopy available. * FTP procedure: unix> ftp ftp.di.unipi.it Name: anonymous Password: ftp> cd pub/Papers/perso/SPERDUTI ftp> binary ftp> get tr-16-95.ps.gz ftp> bye unix> gunzip tr-16-95.ps.gz unix> lpr tr-16-95.ps (or however you print postscript) _________________________________________________________________ Alessandro Sperduti Dipartimento di Informatica, Corso Italia 40, Phone: +39-50-887264 56125 Pisa, Fax: +39-50-887226 ITALY E-mail: perso at di.unipi.it _________________________________________________________________ From lawrence at s4.elec.uq.edu.au Fri Jan 12 00:30:03 1996 From: lawrence at s4.elec.uq.edu.au (Steve Lawrence) Date: Fri, 12 Jan 1996 15:30:03 +1000 (EST) Subject: The Gamma MLP for Speech Phoneme Recognition Message-ID: <199601120530.PAA29266@s4.elec.uq.edu.au> The following NIPS 95 paper presents a network with multiple independent Gamma filters which is able to find multiple time resolutions that are optimized for prediction or classification of a given signal. We show large improvements over traditional FIR or TDNN(*) networks. The paper is available from http://www.elec.uq.edu.au/~lawrence - Australia http://www.neci.nj.nec.com/homepages/lawrence - USA We welcome your comments. The Gamma MLP for Speech Phoneme Recognition Steve Lawrence, Ah Chung Tsoi, Andrew Back Electrical and Computer Engineering University of Queensland, St. Lucia 4072, Australia ABSTRACT We define a Gamma multi-layer perceptron (MLP) as an MLP with the usual synaptic weights replaced by gamma filters (as proposed by de Vries and Principe) and associated gain terms throughout all layers. We derive gradient descent update equations and apply the model to the recognition of speech phonemes.
We find that both the inclusion of gamma filters in all layers and
the inclusion of synaptic gains improve the performance of the Gamma
MLP. We compare the Gamma MLP with TDNN, Back-Tsoi FIR MLP, and
Back-Tsoi IIR MLP architectures, and a local approximation scheme.
We find that the Gamma MLP results in a substantial reduction in
error rates.

(*) We use the term TDNN to describe an MLP with a window of delayed
inputs, not the shared weight architecture of Lang et al.

---
Steve Lawrence  +61 41 113 6686  http://www.neci.nj.nec.com/homepages/lawrence

From georgju at Physik.Uni-Wuerzburg.DE Fri Jan 12 03:21:50 1996
From: georgju at Physik.Uni-Wuerzburg.DE (Georg Jung)
Date: Fri, 12 Jan 1996 09:21:50 +0100 (MEZ)
Subject: Paper available "Selection of examples for a linear classifier"
Message-ID: <199601120821.JAA12596@wptx10.physik.uni-wuerzburg.de>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/jung.selection_examples.ps.Z

The file jung.selection_examples.ps.Z is now available for ftp from
the Neuroprose repository. The same file (name: WUE-ITP-95-022.ps.gz)
is available for ftp from the preprint server of the University of
Wurzburg (ftp.physik.uni-wuerzburg.de), filename:
/pub/preprint/WUE-ITP-95-022.ps.gz

Selection of examples for a linear classifier (20 pages)

Georg Jung and Manfred Opper

Physikalisches Institut, Julius-Maximilians-Universit\"at,
Am Hubland, D-97074 W\"urzburg, Federal Republic of Germany

The Baskin Center for Computer Engineering \& Information Sciences,
University of California, Santa Cruz CA 95064, USA

ABSTRACT:

We investigate the problem of selecting an informative subsample out
of a neural network's training data. Using the replica method of
statistical mechanics, we calculate the performance of a heuristic
selection algorithm for a linear neural network which avoids
overfitting.

Sorry, no hardcopies available.

Comments are greatly appreciated.
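The report analyzes a specific selection heuristic with the replica method; as a generic illustration of the underlying idea only — this is not the algorithm from the report, and the data, weights, and function names below are invented — one common margin-based rule keeps the training examples about which the current linear classifier is least confident:

```python
# Illustrative sketch only: a generic margin-based heuristic for
# choosing an informative subsample for a linear classifier. Small
# (or negative) margin = the example is uncertain or misclassified,
# hence presumably informative to keep.
def select_informative(examples, labels, weights, k):
    """Return indices of the k examples with the smallest margin."""
    def margin(i):
        x, y = examples[i], labels[i]
        s = sum(w * xi for w, xi in zip(weights, x))
        return y * s
    return sorted(range(len(examples)), key=margin)[:k]

idx = select_informative(
    examples=[(1.0, 0.0), (0.0, 1.0), (0.9, 0.9)],
    labels=[+1, -1, +1],
    weights=(1.0, -1.0),
    k=2,
)
# idx == [2, 0]: the zero-margin example first, then the first of
# the tied confident ones (Python's sort is stable).
```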
--
 ____________________________________________
|                                            |
| Georg Jung                                 |
|                                            |
| Wissenschaftlicher Mitarbeiter             |
| am Lehrstuhl "Computational Physics" der   |
|                                            |
| Julius-Maximilians-Universitaet            |
| Fakultaet fuer Physik und Astronomie       |
| Am Hubland, D-97074 Wuerzburg              |
|____________________________________________|
 \                                            \
  \  Arbeitsplatz:                             \
   \   Raum: E-222                              \
    \  Telefon: 0931-888-4908                    \
     \ E-Mail: georgju at physik.uni-wuerzburg.de  \
      \                                           \
       |___________________________________________|

From perso at DI.Unipi.IT Fri Jan 12 05:35:57 1996
From: perso at DI.Unipi.IT (Alessandro Sperduti)
Date: Fri, 12 Jan 1996 11:35:57 +0100 (MET)
Subject: new TR available (revised version)
Message-ID: <199601121035.LAA02387@neuron.di.unipi.it>

In a previous e-mail I announced the following TR:

******************************************************
* FTP-host: ftp.di.unipi.it
* FTP-filename: pub/Papers/perso/SPERDUTI/tr-16-95.ps.gz
******************************************************

@TECHREPORT{tr-16/95,
   AUTHOR = {A. Sperduti and A. Starita},
   TITLE = {Supervised Neural Networks for the Classification of Structures},
   INSTITUTION = {Dipartimento di Informatica, Universit\`{a} di Pisa},
   YEAR = {1995},
   NUMBER = {TR-16/95}
}

Unfortunately, due to a copy error, the postscript file contained a
draft version of the TR. I have now fixed the problem, and the
postscript file contains the correct version of the TR. Sorry about
that!

Regards

_________________________________________________________________
Alessandro Sperduti
Dipartimento di Informatica,          Phone: +39-50-887264
Corso Italia 40,                      Fax:   +39-50-887226
56125 Pisa, ITALY                     E-mail: perso at di.unipi.it
_________________________________________________________________

From bengioy at IRO.UMontreal.CA Fri Jan 12 13:15:16 1996
From: bengioy at IRO.UMontreal.CA (Yoshua Bengio)
Date: Fri, 12 Jan 1996 13:15:16 -0500
Subject: New book: NEURAL NETWORKS FOR SPEECH AND SEQUENCE RECOGNITION
Message-ID: <199601121815.NAA13072@rouge.IRO.UMontreal.CA>

NEW BOOK!
NEURAL NETWORKS FOR SPEECH AND SEQUENCE RECOGNITION

Yoshua BENGIO

Learning algorithms for sequential data are crucial in many
applications, in fields such as speech recognition, time-series
prediction, control and signal monitoring. This book applies the
techniques of artificial neural networks, in particular recurrent
networks, time-delay networks, convolutional networks, and hidden
Markov models, using real-world examples. Highlights include basic
elements for the practical application of back-propagation and
back-propagation through time, integrating domain knowledge and
learning from examples, and hybrids of neural networks with hidden
Markov models.

International Thomson Computer Press
ISBN 1-85032-170-1

This book is available at bookstores near you, or from the publisher:

In the US: US$52.95
  800-842-3636, fax 606-525-7778, or 800-865-5840, fax 606-647-5013
In Canada: CA$73.95
  416-752-9100 ext 444, fax 416-752-9646
On the Internet:
  http://www.thomson.com/itcp.html
  http://www.thomson.com/orderinfo.html
  americas-info at list.thomson.com (in the Americas)
  row-info at list.thomson.com (rest of the World)

Contents

1 Introduction
  1.1 Connectionist Models
  1.2 Learning Theory
2 The Back-Propagation Algorithm
  2.1 Introduction to Back-Propagation
  2.2 Formal Description
  2.3 Heuristics to Improve Convergence and Generalization
  2.4 Extensions
3 Integrating Domain Knowledge and Learning from Examples
  3.1 Automatic Speech Recognition
  3.2 Importance of Pre-processing Input Data
  3.3 Input Coding
  3.4 Input Invariances
  3.5 Importance of Architecture Constraints on the Network
  3.6 Modularization
  3.7 Output Coding
4 Sequence Analysis
  4.1 Introduction
  4.2 Time Delay Neural Networks
  4.3 Recurrent Networks
  4.4 BPS
  4.5 Supervision of a Recurrent Network Does Not Need to Be Everywhere
  4.6 Problems with Training of Recurrent Networks
  4.7 Dynamic Programming Post-Processors
  4.8 Hidden Markov Models
5 Integrating ANNs with Other Systems
  5.1 Advantages and Disadvantages of Current Algorithms for ANNs
  5.2 Modularization and Joint Optimization
6 Radial Basis Functions and Local Representation
  6.1 Radial Basis Functions Networks
  6.2 Neurobiological Plausibility
  6.3 Relation to Vector Quantization, Clustering, and Semi-Continuous HMMs
  6.4 Methodology
  6.5 Experiments on Phoneme Recognition with RBFs
7 Density Estimation with a Neural Network
  7.1 Relation Between Input PDF and Output PDF
  7.2 Density Estimation
  7.3 Conclusion
8 Post-Processors Based on Dynamic Programming
  8.1 ANN/DP Hybrids
  8.2 ANN/HMM Hybrids
  8.3 ANN/HMM Hybrid: Phoneme Recognition Experiments
  8.4 ANN/HMM Hybrid: Online Handwriting Recognition Experiments

References
Index

--
Yoshua Bengio
Professeur Adjoint, Dept. Informatique et Recherche Operationnelle
Pavillon Andre-Aisenstadt #3339, Universite de Montreal,
Dept. IRO, CP 6128, Succ. Centre-Ville, 2920 Chemin de la tour,
Montreal, Quebec, Canada, H3C 3J7
E-mail: bengioy at iro.umontreal.ca   Fax: (514) 343-5834
web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy
     or http://www.iro.umontreal.ca/labs/neuro/
Tel: (514) 343-6804. Residence: (514) 738-6206

From Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU Sun Jan 14 03:14:52 1996
From: Dave_Touretzky at DST.BOLTZ.CS.CMU.EDU (Dave_Touretzky@DST.BOLTZ.CS.CMU.EDU)
Date: Sun, 14 Jan 96 03:14:52 EST
Subject: postdoc position: computational neuroscience and rodent navigation
Message-ID: <16098.821607292@DST.BOLTZ.CS.CMU.EDU>

The Center for the Neural Basis of Cognition, a joint center of
Carnegie Mellon University and the University of Pittsburgh, is
accepting applications for postdoctoral positions in computational
and cognitive neuroscience. One position for which candidates are
actively being sought involves computational modeling and
neurophysiological investigation of the rodent navigation system.
Applicants should be either:

* A neuroscientist with experience in single-unit recording from
  behaving animals, and some computer experience, who would like to
  do postdoctoral work involving computer modeling of the rodent
  hippocampal and head direction systems.

* A computational neuroscientist already proficient in modeling
  biological neural networks, with a strong interest in helping to
  set up a neurophysiological recording facility as part of their
  postdoctoral training.

Full details on the CNBC postdoctoral program are available on our
web site at http://www.cs.cmu.edu/Web/Groups/CNBC -- follow the link
to the NPC (Neural Processes in Cognition) program. Applications are
due by February 1, but late applications may still be considered if
the position is not filled. Persons interested in this position
should contact me directly at dst at cs.cmu.edu.

-- Dave Touretzky
   http://www.cs.cmu.edu/~dst   dst at cs.cmu.edu
   Computer Science Department & Center for the Neural Basis of Cognition
   Carnegie Mellon University, Pittsburgh, PA 15213-3891

From ptodd at mpipf-muenchen.mpg.de Mon Jan 15 07:48:24 1996
From: ptodd at mpipf-muenchen.mpg.de (ptodd@mpipf-muenchen.mpg.de)
Date: Mon, 15 Jan 96 13:48:24 +0100
Subject: predoc/postdoc positions in Munich: modeling cognitive algorithms
Message-ID: <9601151248.AA00777@hellbender.mpipf-muenchen.mpg.de>

(The following ad will appear in the APS Observer, and connectionists
with interests in domain-specific forms of cognition are encouraged
to apply. Feel free to write to me with questions about the group or
how you or a continuing/graduating student might fit in. --Peter Todd)

The Center for Adaptive Behavior and Cognition at the Max Planck
Institute for Psychological Research in Munich, Germany is seeking
applicants for

1 Predoctoral Fellowship (tax-free stipend DM 21,600) and
1 Postdoctoral Fellowship (tax-free stipend range DM 36,000-40,000)

for one-year positions beginning in September 1996.
Candidates should be interested in modeling satisficing
decision-making algorithms in real-world environmental domains, and
should have expertise in one of the following areas: computer
simulation, biological categorization, evolutionary biology or
psychology, experimental economics, judgment and decision making, or
risk perception.

For a list of current researchers and interests, please send email
to Dr. Peter Todd at ptodd at mpipf-muenchen.mpg.de . The working
language of the center is English.

Send applications (curriculum vitae, letters of recommendation, and
reprints) by March 15, 1996 to Professor Gerd Gigerenzer, Center for
Adaptive Behavior and Cognition, Max Planck Institute for
Psychological Research, Leopoldstrasse 24, 80802 Munich, Germany.

From datamine at aig.jpl.nasa.gov Mon Jan 15 16:42:27 1996
From: datamine at aig.jpl.nasa.gov (Data Mining Journal)
Date: Mon, 15 Jan 96 13:42:27 PST
Subject: New Journal -- Data Mining and Knowledge Discovery
Message-ID: <9601152142.AA09697@mathman.jpl.nasa.gov>

Please post the following announcement to your group. Thanks, Usama

________________________________________________________________
Usama Fayyad                          | Fayyad at aig.jpl.nasa.gov
Machine Learning Systems Group        |
Jet Propulsion Lab M/S 525-3660       | (818) 306-6197 office
California Institute of Technology    | (818) 306-6912 FAX
4800 Oak Grove Drive                  |
Pasadena, CA 91109                    | http://www-aig.jpl.nasa.gov/
______________________________________|__________________________

****************************************************************
New Journal Announcement:

    Data Mining and Knowledge Discovery
    an international journal
    http://www.research.microsoft.com/research/datamine/

    Published by Kluwer Academic Publishers

    C a l l   f o r   P a p e r s
****************************************************************

Advances in data gathering, storage, and distribution technologies
have far outpaced computational advances in techniques for analyzing
and understanding data.
This has created an urgent need for a new generation of tools and
techniques for automated Data Mining and Knowledge Discovery in
Databases (KDD). KDD is a broad area that integrates methods from
several fields including statistics, databases, AI, machine
learning, pattern recognition, machine discovery, uncertainty
modeling, data visualization, high performance computing, management
information systems (MIS), and knowledge-based systems.

KDD refers to a multi-step process that can be highly interactive
and iterative. It includes data selection/sampling, and
preprocessing and transformation for subsequent steps. Data mining
algorithms are then used to discover patterns, clusters and models
from data. These patterns and hypotheses are then rendered in
operational forms that are easy for people to visualize and
understand.

Data mining is a step in the overall KDD process. However, most
published work has focused solely on (semi-)automated data mining
methods. By including data mining explicitly in the name of the
journal, we hope to emphasize its role and build bridges to
communities working solely on data mining.

Our goal is to make Data Mining and Knowledge Discovery a flagship
journal publication in the KDD area, providing a unified forum for
the KDD research community, whose publications are currently
scattered among many different journals. The journal will publish
state-of-the-art papers in both the research and practice of KDD,
surveys of important techniques from related fields, and application
papers of general interest. In addition, there will be a pragmatic
section including short application reports (1-3 pages), book and
system reviews, and relevant product announcements.
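To make the multi-step KDD process described above concrete, here is a deliberately tiny, hypothetical sketch — the data, the cleaning rule, and the one-cluster "model" are invented for illustration; real KDD systems use far richer algorithms at every step:

```python
# Hypothetical sketch of the KDD steps: selection/sampling,
# preprocessing and transformation, a data mining step, and
# rendering the result in an operational, human-readable form.
import random

def kdd_pipeline(records, sample_size, seed=0):
    # 1. Data selection / sampling
    random.seed(seed)
    sample = random.sample(records, min(sample_size, len(records)))
    # 2. Preprocessing and transformation: drop missing values, rescale
    clean = [r for r in sample if r is not None]
    lo, hi = min(clean), max(clean)
    scaled = [(r - lo) / (hi - lo) for r in clean] if hi > lo else clean
    # 3. Data mining: discover a (trivial) pattern -- one cluster center
    center = sum(scaled) / len(scaled)
    # 4. Render the discovered pattern for a human reader
    return f"cluster center (scaled): {center:.2f} over {len(scaled)} records"

report = kdd_pipeline([3, None, 7, 1, 9, 5], sample_size=6)
# report == "cluster center (scaled): 0.50 over 5 records"
```

The interactive and iterative character of KDD comes from looping over these steps with different selections, transformations, and mining algorithms.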
Please visit the journal's WWW homepage at: http://www.research.microsoft.com/research/datamine/ to obtain further information, including: - A list of topics of interest, - full call for papers, - instructions for submission, - contact information, subscription information, and - ordering a free sample issue. Editors-in-Chief: Usama M. Fayyad ================ Jet Propulsion Laboratory, California Institute of Technology, USA Heikki Mannila University of Helsinki, Finland Gregory Piatetsky-Shapiro GTE Laboratories, USA Editorial Board: =============== Rakesh Agrawal (IBM Almaden Research Center, USA) Tej Anand (AT&T Global Information Solutions, USA) Ron Brachman (AT&T Bell Laboratories, USA) Wray Buntine (Thinkbank Inc, USA) Peter Cheeseman (NASA AMES Research Center, USA) Greg Cooper (University of Pittsburgh, USA) Bruce Croft (University of Mass. Amherst, USA) Dan Druker (Arbor Software, USA) Saso Dzeroski (Jozef Stefan Institute, Slovenia) Oren Etzioni (University of Washington, USA) Jerome Friedman (Stanford University, USA) Brian Gaines (University of Calgary, Canada) Clark Glymour (Carnegie-Mellon University, USA) Jim Gray (Microsoft Research, USA) Georges Grinstein (University of Lowell, USA) Jiawei Han (Simon Fraser University, Canada) David Hand (Open University, UK) Trevor Hastie (Stanford University, USA) David Heckerman (Microsoft Research, USA) Se June Hong (IBM T.J. Watson Research Center, USA) Thomasz Imielinski (Rutgers University, USA) Larry Jackel (AT&T Bell Labs, USA) Larry Kerschberg (George Mason University, USA) Willi Kloesgen (GMD, Germany) Yves Kodratoff (Lab. 
de Recherche Informatique, France) Pat Langley (ISLE/Stanford University, USA) Tsau Lin (San Jose State University, USA) David Madigan (University of Washington, USA) Ami Motro (George Mason University, USA) Shojiro Nishio (Osaka University, Japan) Judea Pearl (University of California, Los Angeles, USA) Ed Pednault (AT&T Bell Laboratories, USA) Daryl Pregibon (AT&T Bell Laboratories, USA) J. Ross Quinlan (University of Sydney, Australia) Jude Shavlik (University of Wisconsin - Madison, USA) Arno Siebes (CWI, Netherlands) Evangelos Simoudis (IBM Almaden Research Center, USA) Andrzej Skowron (University of Warsaw, Poland) Padhraic Smyth (Jet Propulsion Laboratory, USA) Salvatore Stolfo (Columbia University, USA) Alex Tuzhilin (NYU Stern School, USA) Ramasamy Uthurusamy (General Motors Research Laboratories, USA) Vladimir Vapnik (AT&T Bell Labs, USA) Ronald Yager (Iona College, USA) Xindong Wu (Monash University, Australia) Wojciech Ziarko (University of Regina, Canada) Jan Zytkow (Wichita State University, USA) ====================================================================== If you would like to receive information from Kluwer on this journal, and to receive a free sample issue by mail, please fill out the form attached below, and e-mail it to datamine at aig.jpl.nasa.gov Please use the following in SUBJECT field: REQUEST for SAMPLE J-DMKD ------cut-here------cut-here------cut-here------cut-here------cut-here---- .. Please do NOT remove keywords following '___', simply fill in provided .. fields and return as is. This form will be processed automatically. .. If you do not wish to complete a field, please LEAVE BLANK. .. Subject should be: REQUEST for SAMPLE J-DMK .. mail completed form, including keywords in CAPS to .. datamine at aig.jpl.nasa.gov .. 
___ REQUEST FOR FREE SAMPLE ISSUE OF DATA MINING AND KNOWLEDGE DISCOVERY ___ ___ NAME: ___ EMAIL: ___ AFFILIATION: ___ POSTAL_ADDRESS_LINE1: ___ POSTAL_ADDRESS_LINE2: ___ POSTAL_ADDRESS_LINE3: ___ POSTAL_ADDRESS_LINE4: ___ CITY: ___ STATE: ___ ZIP: ___ COUNTRY: ___ TELEPHONE: ___ FAX: ___ END_FORM: do not edit this line, anything below it is discarded. From geoff at salk.edu Tue Jan 16 13:15:43 1996 From: geoff at salk.edu (Geoff Goodhill) Date: Tue, 16 Jan 96 10:15:43 PST Subject: Preprint - revised information Message-ID: <9601161815.AA15105@salk.edu> A few days ago I advertised a preprint entitled "Optimizing cortical mappings" by Goodhill, Finch and Sejnowski. Unfortunately since then the ftp and http details have changed. The new ones are ftp://ftp.cnl.salk.edu/pub/geoff/goodhill_nips96.ps.Z and http://www.cnl.salk.edu/~geoff Apologies, Geoff Goodhill From drl at eng.cam.ac.uk Tue Jan 16 10:00:00 1996 From: drl at eng.cam.ac.uk (drl@eng.cam.ac.uk) Date: Tue, 16 Jan 96 15:00:00 GMT Subject: Tech Report announcement Message-ID: <9601161500.17843@dante.eng.cam.ac.uk> The following technical report is available by anonymous ftp from the archive of the Speech, Vision and Robotics Group at the Cambridge University Engineering Department. Limits on the discrimination possible with discrete valued data, with application to medical risk prediction D. R. Lovell, C. R. Dance, M. Niranjan, R. W. Prager and K. J. Dalton Technical Report CUED/F-INFENG/TR243 Cambridge University Engineering Department Trumpington Street Cambridge CB2 1PZ England Abstract We describe an upper bound on the {\em accuracy} (in the ROC sense) attainable in two-alternative forced choice risk prediction, for a specific set of data represented by discrete features. By accuracy, we mean the probability that a risk prediction system will correctly rank a randomly chosen high risk case and a randomly chosen low risk case. 
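The accuracy just defined — the probability that a randomly chosen high-risk case is ranked above a randomly chosen low-risk case — is the area under the ROC curve, and for a finite sample it can be estimated by counting correctly ordered pairs. A small illustrative sketch (this is not the bounding technique of the report; the scores below are invented):

```python
# Illustrative two-alternative forced-choice accuracy: the fraction
# of (high-risk, low-risk) pairs that a scoring function orders
# correctly, counting ties as half. This equals the area under the
# ROC curve for the given scores.
def pairwise_accuracy(high_scores, low_scores):
    wins = 0.0
    for h in high_scores:
        for l in low_scores:
            if h > l:
                wins += 1.0
            elif h == l:
                wins += 0.5
    return wins / (len(high_scores) * len(low_scores))

auc = pairwise_accuracy([0.9, 0.7, 0.4], [0.3, 0.4, 0.1])
```

The report's contribution is an upper bound on this quantity for discrete features, computable without running any particular risk prediction method.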
We also present methods for estimating the maximum accuracy we can
expect to attain using a given set of discrete features to represent
data sampled from a given population. These techniques allow an
experimenter to calculate the maximum performance that could be
achieved, without having to resort to applying specific risk
prediction methods. Furthermore, these techniques can be used to
rank discrete features in order of their effect on maximum
attainable accuracy.

************************ How to obtain a copy ************************

Via FTP:

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (type your email address)
ftp> cd reports
ftp> binary
ftp> get lovell_tr243.ps.Z
ftp> quit
unix> uncompress lovell_tr243.ps.Z
unix> lpr lovell_tr243.ps
(or however you print PostScript)

No hardcopies available.

From bengioy at IRO.UMontreal.CA Tue Jan 16 17:48:26 1996
From: bengioy at IRO.UMontreal.CA (Yoshua Bengio)
Date: Tue, 16 Jan 1996 17:48:26 -0500
Subject: Montreal workshop and spring school on NNs and learning algorithms
Message-ID: <199601162248.RAA01887@oust.iro.umontreal.ca>

Montreal Workshop and Spring School on Neural Nets and Learning Algorithms

April 15-30 1996
Centre de Recherche Mathematique, Universite de Montreal

MORE INFO AT: http://www.iro.umontreal.ca/labs/neuro/spring96/english.html

This workshop and concentrated course on artificial neural networks
and learning algorithms is organized by the Centre de Recherches
Mathematiques of the University of Montreal (Montreal, Quebec,
Canada). The first week of the workshop will concentrate on learning
theory, statistics, and generalization. The second week (and the
beginning of the third) will concentrate on learning algorithms,
architectures, applications and implementations.

The organizers of the workshop are Bernard Goulard (Montreal),
Yoshua Bengio (Montreal), Bertrand Giraud (CEA Saclay, France) and
Renato De Mori (McGill). The invited speakers are G. Hinton
(Toronto), V. Vapnik (AT&T), M. Jordan (MIT), H.
Bourlard (Mons), T. Hastie (Stanford), R. Tibshirani (Toronto),
F. Girosi (MIT), M. Mozer (Boulder), J.P. Nadal (ENS, Paris),
Y. Le Cun (AT&T), M. Marchand (U of Ottawa), J. Shawe-Taylor (London),
L. Bottou (Paris), F. Pineda (Baltimore), J. Moody (Oregon),
S. Bengio (INRS Montreal), J. Cloutier (Montreal), S. Haykin (McMaster),
M. Gori (Florence), J. Pollack (Brandeis), S. Becker (McMaster),
Y. Bengio (Montreal), S. Nowlan (Motorola), P. Simard (AT&T),
G. Dreyfus (ESPCI Paris), P. Dayan (MIT), N. Intrator (Tel Aviv),
B. Giraud (France), B. Pearlmutter (Siemens), H.P. Graf (AT&T).

TENTATIVE SCHEDULE
(see details at http://www.iro.umontreal.ca/labs/neuro/spring96/english.html)

Week 1: Introduction, learning theory and statistics
  April 15: Y. Bengio, J.P. Nadal, G. Dreyfus, B. Giraud
  April 16: Y. Bengio, F. Girosi, L. Bottou, J.P. Nadal, G. Dreyfus, B. Giraud
  April 17: V. Vapnik, L. Bottou, F. Girosi, M. Marchand, J. Shawe-Taylor, V. Vapnik
  April 18: J. Shawe-Taylor, V. Vapnik, R. Tibshirani, T. Hastie, M. Jordan
  April 19: M. Marchand, S. Bengio, R. Tibshirani, T. Hastie, M. Jordan

Weeks 2 and 3: Algorithms, architectures and applications
  April 22: S. Haykin, H. Bourlard, M. Gori, M. Mozer, F. Pineda
  April 23: S. Haykin, F. Pineda, H. Bourlard, M. Mozer, J. Pollack, P. Dayan
  April 24: M. Gori, J. Pollack, P. Dayan, B. Pearlmutter, S. Becker, P. Simard
  April 25: S. Becker, G. Hinton, N. Intrator, B. Pearlmutter, S. Nowlan, Y. Le Cun
  April 26: S. Bengio, Y. Le Cun, S. Nowlan, N. Intrator, P. Simard
  April 29: J. Moody, Y. Bengio, J. Cloutier, H.P. Graf
  April 30: J. Moody, J. Cloutier, H.P. Graf

REGISTRATION INFORMATION:
  $100 (Canadian) or 75 $US, if received before April 1st
  $150 (Canadian) or 115 $US, if received on or after April 1st
  $25 (Canadian) or 19 $US, for students and post-doctoral fellows

The number of participants will be limited, on a first-come
first-served basis. Please register early!
Have a look at http://www.iro.umontreal.ca/labs/neuro/spring96/english.html for more details, or directly load the registration form by ftp (postscript: ftp://ftp.iro.umontreal.ca/pub/neuro/registration.ps or ascii: ftp://ftp.iro.umontreal.ca/pub/neuro/registration.asc). Reduced hotel rates can be obtained by returning your registration form with your choice of hotel before March 15th. For more information, contact Louis Pelletier, pelletl at crm.umontreal.ca, 514-343-2197, fax 514-343-2254 Centre de Recherche Mathematique, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec, H3C-3J7, Canada. -- Yoshua Bengio Professeur Adjoint, Dept. Informatique et Recherche Operationnelle Pavillon Andre-Aisenstadt #3339 , Universite de Montreal, Dept. IRO, CP 6128, Succ. Centre-Ville, 2920 Chemin de la tour, Montreal, Quebec, Canada, H3C 3J7 E-mail: bengioy at iro.umontreal.ca Fax: (514) 343-5834 web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy or http://www.iro.umontreal.ca/labs/neuro/ Tel: (514) 343-6804. Residence: (514) 738-6206 From baluja at GS93.SP.CS.CMU.EDU Tue Jan 16 15:04:11 1996 From: baluja at GS93.SP.CS.CMU.EDU (Shumeet Baluja) Date: Tue, 16 Jan 96 15:04:11 EST Subject: Paper: Medical Risk Evaluation - Rankprop and Multitask Learning Message-ID: Paper Available: -------------------------- Using the Future to "Sort Out" the Present: Rankprop and Multitask Learning for Medical Risk Evaluation Rich Caruana, Shumeet Baluja, and Tom Mitchell Abstract: -------------------------- A patient visits the doctor; the doctor reviews the patient's history, asks questions, makes basic measurements (blood pressure, ...), and prescribes tests or treatment. The prescribed course of action is based on an assessment of patient risk---patients at higher risk are given more and faster attention. It is also sequential---it is too expensive to immediately order all tests which might later be of value. 
This paper presents two methods that together improve the accuracy of backprop nets on a pneumonia risk assessment problem by 10-50\%. {\em Rankprop} improves on backpropagation with sum of squares error in ranking patients by risk. {\em Multitask learning} takes advantage of {\em future} lab tests available in the training set, but not available in practice when predictions must be made. Both methods are broadly applicable. Retrieval Information -------------------------- This paper will appear in NIPS 8. Available via the web from: http://www.cs.cmu.edu/~baluja/techreps.html From heckerma at microsoft.com Tue Jan 16 22:34:47 1996 From: heckerma at microsoft.com (David Heckerman) Date: Tue, 16 Jan 1996 19:34:47 -0800 Subject: Summary: NIPS workshop on learning in graphical models Message-ID: Summary: NIPS 95 Workshop on Learning in Bayesian Networks and Other Graphical Models We discussed the relationships between Bayesian networks, decomposable models, Markov random fields, Boltzmann machines, Hidden Markov models, stochastic grammars, and feedforward neural networks, exposing complementary strengths and weaknesses in the various formalisms. For example, Bayesian networks are particularly strong in their focus on explicit representations of probabilistic independencies (the arrows in a belief network have a strong semantics in this regard), their full use of Bayesian methods, and their focus on density estimation. Neural networks are particularly strong in their ties to approximation theory, and in their focus on predictive modeling in non-linear classification and regression contexts. 
Topics discussed included issues in optimization, including the use
of gradient-based methods and EM algorithms; issues in
approximation, including the use of mean field algorithms and
stochastic sampling; issues in representation, including exploration
of the roles of ``hidden'' or ``latent'' variables in learning;
search methods for model selection and model averaging; and
engineering issues.

A more detailed summary, as well as pointers to slides and related
papers, can be found at
http://www.research.microsoft.com/research/nips95bn/

From gs at next2.ss.uci.edu Wed Jan 17 02:22:22 1996
From: gs at next2.ss.uci.edu (George Sperling)
Date: Tue, 16 Jan 96 23:22:22 -0800
Subject: Conference Announcement
Message-ID: <9601170722.AA07036@next2.ss.uci.edu>

TWENTY-FIRST ANNUAL INTERDISCIPLINARY CONFERENCE
Teton Village, Jackson Hole, Wyoming
January 28 - February 2, 1996
Organizer: George Sperling, University of California, Irvine

The TWENTY-FIRST ANNUAL INTERDISCIPLINARY CONFERENCE will meet in
Teton Village, Jackson Hole, Wyoming, January 28 - February 2, 1996.
The conference covers a wide range of subjects in what has come to
be called cognitive science, ranging from visual and auditory
physiology and psychophysics to human information processing,
cognition, learning and memory, to computational approaches to these
problems, including neural networks and artificial intelligence. The
aim is to provide overview talks that are comprehensible and
interesting to a wide scientific audience -- such as one might
fantasize would occur at a National or Royal Academy of Science if
such organizations were indeed devoted to scientific interchange.
Attendance is limited by the size of the conference facility to
about 50 persons.

The Conference begins with a reception on Sunday evening, January
28, at 6:00p. Regular sessions meet Monday through Friday from 4:00p
to 8:00p; the rest of the day is free. On Friday at 8:00p there is a
banquet for participants. A preliminary program is appended.
The conference hotel, the Inn at Jackson Hole, is directly at the base of the ski slopes, a short walk from the tram and other ski lifts. The Conference has arranged special room rates for registered participants. To reserve lodging, telephone The Inn 1-800-842-7666 and inform the desk that you are with the Interdisciplinary Conference (AIC). Other hotels, restaurants, ski rental facilities, shops, and cross country ski trails, are all within walking distance. There are flights directly to Jackson Hole AP (taxi or bus to the hotel). Alternatively, Jackson is a five-hour drive from Salt Lake City. Additional information about the conference, previous programs, etc, are available at the WWW site below. To attend the conference, fill out the online registration form or request hardcopy from the organizer, and send the registration fee ($100) to the address below. To be sure of receiving future mailings, return a copy of the registration form with your current address. Annual Interdisciplinary Conference c/o Prof. George Sperling Cognitive Science Dept., SST-6 University of California Irvine, CA 92717 E-mail: sperling at uci.edu http://www.socsci.uci.edu/cogsci/HIPLab/AIC (for info about AIC-21) http://www.jacksonhole.com/ski (info about Jackson, WY) http://www.socsci.uci.edu/cogsci (for info about UCI Cognitive Sciences) --------------------------------------------------------------------------- P.S. UCI Update from the organizer: In spite of the fiscal difficulties faced by the State of California, UCI continues to move forward (two Nobel Prizes in 1995) and the Department of Cognitive Science is flourishing. In fall, 1995, the Department of Cognitive Science will be recruiting for three faculty positions with considerable flexibility in areas. There is an opening for a graduate student and a postdoc in my lab, and there are excellent opportunities for graduate students in the department --see the enclosed announcement and the WWW site above. 
===========================================================================

TWENTY-FIRST ANNUAL INTERDISCIPLINARY CONFERENCE
Teton Village, Jackson Hole, Wyoming
January 28 - February 2, 1996
Organizer: George Sperling, University of California, Irvine

Preliminary Schedule (16Jan96)

Sunday, January 28: 6:00 - 7:30 p.m.
** Reception ** Registration, Appetizers, Snacks, Refreshments.

Monday, January 29, 4:00 - 8:00 p.m.
Auditory Biology and Psychophysics; Visual Physiology
  Karen Glendenning, Psychology, Florida State U. Hearing: A Comparative Perspective.
  Bruce Masterton, Psychology, Florida State U. Role of the Central Auditory System in Hearing.
  Sam Williamson, Physics, New York University. The Decay of Sensory Memory.
  Randy Blake, Psychol, Vanderbilt U. Tachistoscopic Review of Mark Berkley's Research.
  Adina Roskies, Dept. Neurol, Washington U Med. Topographic Targeting of Retinal Axons in Development.

Tuesday, January 30, 4:00 - 8:00 p.m.
Motion Perception: Physiology, Psychophysics
  Larry O'Keefe, Center for Neural Science, NYU. Motion Processing in Primate Visual Cortex.
  Scott Richman, Cognitive Sci., UCI. A Specialized Receptor for Moving Flicker?
  Erik Blaser, Cog. Sci, UC Irvine. When is Motion Motion?
  Sophie Wuerger, Communic & Neurosci, Keele U. Colour in Moving and Stationary Orientation Discrimination.
  George Sperling, Cognitive Science, UC Irvine. Model of Gain-Control in Motion Processing.

Wednesday, January 31, 4:00 - 8:00 p.m.
Visual Learning, Learning; Information Processing
  Lorraine Allan, Psychology, McMaster U. New Slants on the McCollough Effect.
  Shepard Siegel, Psychology, McMaster U. What Contingent Color Aftereffects Tell Us About Drug Addiction.
  Hal Pashler, Psychology, U Cal., San Diego. Dual-Task Bottlenecks: Structural or Strategic?
  Geoffrey Loftus, Brain and Cog Sci, MIT. Information Acquisition and Phenomenology.
  Bill Prinzmetal, Dept of Psychology, UC Berkeley. The Phenomenology of Attention.
  Zhong-Lin Lu, Cogn. Sci., U Cal. Irvine.
Salience Model of Spatial Attention.

Thursday, February 1, 4:00 - 8:00 p.m.
Memory
  Tim McNamara, Psychology, Vanderbilt. Viewpoint Dependence in Human Spatial Memory.
  Roger Ratcliff & Gail McKoon, Psychology, Northwestern U. Models of RT and Word Identification.
  Richard Shiffrin, Psychology, Indiana U. A Model for Implicit and Explicit Memory.
  Barbara Dosher, Cogn. Sci., U. Cal. Irvine. Forgetting in Implicit and Explicit Memory Tasks.
  David Caulton, Natl. Inst. Health. Memory Retrieval Dynamics: Behavioral and Electrophysiological Approaches.

Friday, February 2, 4:00 - 8:00 p.m.
Computational Issues
  Sandy Pentland, Media Lab., MIT. The Perception of Driving Intentions.
  Leonid Kontsevich, Smith-Kettlewell Eye Research Institute. The Role of Partial Similarity in 3D Vision.
  Misha Pavel, EE, Oregon Graduate Institute. The Role of Features in the Perception of Symmetry.
  Maria Kozhevnikov, Physics, Technion, Israel. A Mathematical Model of Conceptual Development.
  Shulamith Eckstein, Physics, Technion, Israel. A Dynamic Model of Cognitive Growth in a Population.

* * * 8:00 Fireside Banquet at The Inn * * *

From iconip96 at cs.cuhk.hk Wed Jan 17 08:32:02 1996
From: iconip96 at cs.cuhk.hk (iconip96)
Date: Wed, 17 Jan 1996 21:32:02 +0800 (HKT)
Subject: *** ICONIP'96 FINAL CALL FOR PAPERS ***
Message-ID: <199601171332.VAA00980@cs.cuhk.hk>

Please do not re-distribute this CFP to other lists. We apologize should you receive multiple copies of this CFP from different sources.
======================================================================
                      FINAL CALL FOR PAPERS

    1996 INTERNATIONAL CONFERENCE ON NEURAL INFORMATION PROCESSING
   The Annual Conference of the Asian Pacific Neural Network Assembly

                 ICONIP'96, September 24 - 27, 1996
   Hong Kong Convention and Exhibition Center, Wan Chai, Hong Kong

In cooperation with
  IEEE/NNC - IEEE Neural Networks Council
  INNS - International Neural Network Society
  ENNS - European Neural Network Society
  JNNS - Japanese Neural Network Society
  CNNC - China Neural Networks Council
======================================================================

The goal of ICONIP'96 is to provide a forum for researchers and engineers from academia and industry to meet and to exchange ideas on the latest developments in neural information processing. The conference also serves to stimulate local and regional interest in neural information processing and its potential applications to industries indigenous to this region.

The conference consists of two tracks: a SCIENTIFIC TRACK for the latest results on theories, technologies, methods, architectures, and algorithms in neural information processing, and an APPLICATION TRACK for neural network applications in any engineering/technical field and any business/service sector. There will be a one-day tutorial on neural networks for capital markets, reflecting Hong Kong's local interest in financial services. In addition, there will be several invited lectures in the main conference.

Hong Kong is one of the most dynamic cities in the world, with world-class facilities, easy accessibility, exciting entertainment, and high levels of service and professionalism. Come to Hong Kong! Visit this Eastern Pearl in this historical period before Hong Kong's imminent return to China in 1997.

Tutorials On Financial Engineering
==================================

1.
Professor John Moody, Oregon Graduate Institute, USA
   "Time Series Modeling: Classical and Nonlinear Approaches"
2. Professor Halbert White, University of California, San Diego, USA
   "Option Pricing in Modern Finance Theory and the Relevance of Artificial Neural Networks"
3. The third tutorial speaker will also be an internationally well-known expert in neural networks for the capital markets.

Keynote Talks
=============

1. Professor Shun-ichi Amari, University of Tokyo
   "Information Geometry of Neural Networks"
2. Professor Yaser Abu-Mostafa, California Institute of Technology, USA
   "The Bin Model for Learning and Generalization"
3. Professor Leo Breiman, University of California, Berkeley, USA
   "Democratizing Predictors"
4. Professor Christoph von der Malsburg, Ruhr-Universitat Bochum, Germany
   "Scene Analysis Based on Dynamic Links" (tentatively)
5. Professor Erkki Oja, Helsinki University of Technology, Finland
   "Blind Signal Separation by Neural Networks"

*** PLUS AROUND 20 INVITED PAPERS GIVEN BY WELL-KNOWN RESEARCHERS IN THE FIELD.
***

CONFERENCE TOPICS
=================

SCIENTIFIC TRACK:
-----------------
* Theory
* Algorithms & Architectures
* Supervised Learning
* Unsupervised Learning
* Hardware Implementations
* Hybrid Systems
* Neurobiological Systems
* Associative Memory
* Visual & Speech Processing
* Intelligent Control & Robotics
* Cognitive Science & AI
* Recurrent Nets & Dynamics
* Image Processing
* Pattern Recognition
* Computer Vision
* Time Series Prediction
* Optimization
* Fuzzy Logic
* Evolutionary Computing
* Other Related Areas

APPLICATION TRACK:
------------------
* Foreign Exchange
* Equities & Commodities
* Risk Management
* Options & Futures
* Forecasting & Strategic Planning
* Government and Services
* Garments and Fashions
* Telecommunications
* Control & Modeling
* Manufacturing
* Chemical Engineering
* Transportation
* Environmental Engineering
* Remote Sensing
* Power Systems
* Defense
* Multimedia Systems
* Document Processing
* Medical Imaging
* Biomedical Applications
* Geophysical Sciences
* Other Applications

CONFERENCE SCHEDULE
===================

Submission of paper                 February 1, 1996
Notification of acceptance          May 1, 1996
Early registration deadline         July 1, 1996
Tutorial on Financial Engineering   Sept. 24, 1996
Conference                          Sept. 25-27, 1996

SUBMISSION INFORMATION
======================

Authors are invited to submit one camera-ready original and five copies of the manuscript, written in English on A4-format (or letter) white paper with 25 mm (1 inch) margins on all four sides, in one-column format, no more than six pages (four pages preferred) including figures and references, single-spaced, in Times-Roman or a similar font of 10 points or larger, and printed on one side of the page only. Electronic or fax submission is not acceptable. Additional pages will be charged at USD 50 per page.
Centered at the top of the first page should be the complete title, author(s), affiliation, mailing, and email addresses, followed by an abstract (no more than 150 words) and the text. Each submission should be accompanied by a cover letter indicating the contact author, affiliation, mailing and email addresses, telephone and fax numbers, and preference of track, technical session(s), and format of presentation (either oral or poster). All submitted papers will be refereed by experts in the field based on quality, clarity, originality, and significance. Authors may also retrieve the ICONIP style files, "iconip.tex" and "iconip.sty", by anonymous FTP from ftp.cs.cuhk.hk in the directory /pub/iconip96.

The address for information inquiries and paper submissions:

  ICONIP'96 Secretariat
  Department of Computer Science and Engineering
  The Chinese University of Hong Kong
  Shatin, N.T., Hong Kong
  Fax: (852) 2603-5024
  E-mail: iconip96 at cs.cuhk.hk
  http://www.cs.cuhk.hk/iconip96

======================================================================

General Co-Chairs
=================
Omar Wing, CUHK
Shun-ichi Amari, Tokyo U.

Advisory Committee
==================

International
-------------
Yaser Abu-Mostafa, Caltech
Michael Arbib, U. Southern Cal.
Leo Breiman, UC Berkeley
Jack Cowan, U. Chicago
Rolf Eckmiller, U. Bonn
Jerome Friedman, Stanford U.
Stephen Grossberg, Boston U.
Robert Hecht-Nielsen, HNC
Geoffrey Hinton, U. Toronto
Anil Jain, Michigan State U.
Teuvo Kohonen, Helsinki U. of Tech.
Sun-Yuan Kung, Princeton U.
Robert Marks II, U. Washington
Thomas Poggio, MIT
Harold Szu, US Naval SWC
John Taylor, King's College London
David Touretzky, CMU
C. v. d. Malsburg, Ruhr-U. Bochum
David Willshaw, Edinburgh U.
Lotfi Zadeh, UC Berkeley

Asia-Pacific Region
-------------------
Marcelo H.
Ang Jr., NUS, Singapore
Sung-Yang Bang, POSTECH, Pohang
Hsin-Chia Fu, NCTU, Hsinchu
Toshio Fukuda, Nagoya U., Nagoya
Kunihiko Fukushima, Osaka U., Osaka
Zhenya He, Southeastern U., Nanjing
Marwan Jabri, U. Sydney, Sydney
Nikola Kasabov, U. Otago, Dunedin
Yousou Wu, Tsinghua U., Beijing

Organizing Committee
====================
L.W. Chan (Co-Chair), CUHK
K.S. Leung (Co-Chair), CUHK
D.Y. Yeung (Finance), HKUST
C.K. Ng (Publication), CityUHK
A. Wu (Publication), CityUHK
B.T. Low (Publicity), CUHK
M.W. Mak (Local Arr.), HKPU
C.S. Tong (Local Arr.), HKBU
T. Lee (Registration), CUHK
K.P. Chan (Tutorial), HKU
H.T. Tsui (Industry Liaison), CUHK
I. King (Secretary), CUHK

Program Committee
=================

Co-Chairs
---------
Lei Xu, CUHK
Michael Jordan, MIT
Erkki Oja, Helsinki U. of Tech.
Mitsuo Kawato, ATR

Members
-------
Yoshua Bengio, U. Montreal
Jim Bezdek, U. West Florida
Chris Bishop, Aston U.
Leon Bottou, Neuristique
Gail Carpenter, Boston U.
Laiwan Chan, CUHK
Huishen Chi, Peking U.
Peter Dayan, MIT
Kenji Doya, ATR
Scott Fahlman, CMU
Francoise Fogelman, SLIGOS
Lee Giles, NEC Research Inst.
Michael Hasselmo, Harvard U.
Kurt Hornik, Technical U. Wien
Yu Hen Hu, U. Wisconsin - Madison
Jeng-Neng Hwang, U. Washington
Nathan Intrator, Tel-Aviv U.
Larry Jackel, AT&T Bell Lab
Adam Kowalczyk, Telecom Australia
Soo-Young Lee, KAIST
Todd Leen, Oregon Grad. Inst.
Cheng-Yuan Liou, National Taiwan U.
David MacKay, Cavendish Lab
Eric Mjolsness, UC San Diego
John Moody, Oregon Grad. Inst.
Nelson Morgan, ICSI
Steven Nowlan, Synaptics
Michael Perrone, IBM Watson Lab
Ting-Chuen Pong, HKUST
Paul Refenes, London Business School
David Sanchez, U. Miami
Hava Siegelmann, Technion
Ah Chung Tsoi, U. Queensland
Benjamin Wah, U. Illinois
Andreas Weigend, Colorado U.
Ronald Williams, Northeastern U.
John Wyatt, MIT
Alan Yuille, Harvard U.
Richard Zemel, CMU
Jacek Zurada, U.
Louisville

From scott at cpl_mmag.nhrc.navy.mil Wed Jan 17 13:32:05 1996
From: scott at cpl_mmag.nhrc.navy.mil (Scott Makeig)
Date: Wed, 17 Jan 1996 10:32:05 -0800 (PST)
Subject: 2 papers applying neural networks to EEG data
Message-ID: <199601171832.KAA13143@cpl_mmag.nhrc.navy.mil>

Announcing the availability of preprints of two articles to be published in the NIPS conference proceedings:

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

INDEPENDENT COMPONENT ANALYSIS OF ELECTROENCEPHALOGRAPHIC DATA

Scott Makeig                          Anthony J. Bell
Naval Health Research Center          Computational Neurobiology Lab
P.O. Box 85122                        The Salk Institute, P.O. Box 85800
San Diego, CA 92186-5122              San Diego, CA 92186-5800
scott at cpl_mmag.nhrc.navy.mil          tony at salk.edu

Tzyy-Ping Jung                        Terrence J. Sejnowski
Naval Health Research Center and      Howard Hughes Medical Institute and
Computational Neurobiology Lab        Computational Neurobiology Lab
jung at salk.edu                         terry at salk.edu

ABSTRACT

Because of the distance between the skull and brain and their different resistivities, electroencephalographic (EEG) data collected from any point on the human scalp includes activity generated within a large brain area. This spatial smearing of EEG data by volume conduction does not involve significant time delays, however, suggesting that the Independent Component Analysis (ICA) algorithm of Bell and Sejnowski (1994) is suitable for performing blind source separation on EEG data. The ICA algorithm separates the problem of source identification from that of source localization. First results of applying the ICA algorithm to EEG and event-related potential (ERP) data collected during a sustained auditory detection task show: (1) ICA training is insensitive to different random seeds. (2) ICA analysis may be used to segregate obvious artifactual EEG components (line and muscle noise, eye movements) from other sources.
(3) ICA analysis is capable of isolating overlapping alpha and theta wave bursts to separate ICA channels. (4) Nonstationarities in EEG and behavioral state can be tracked using ICA analysis via changes in the amount of residual correlation between ICA-filtered output channels.

Sites: http://128.49.52.9/~scott/bib.html
       ftp://ftp.cnl.salk.edu/pub/jung/nips95b.ps.Z

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

USING NEURAL NETWORKS TO MONITOR ALERTNESS FROM CHANGES IN EEG CORRELATION AND COHERENCE

Scott Makeig                          Tzyy-Ping Jung
Naval Health Research Center          Naval Health Research Center and
P.O. Box 85122                        Computational Neurobiology Lab
San Diego, CA 92186-5122              The Salk Institute
scott at cpl_mmag.nhrc.navy.mil          jung at salk.edu

Terrence J. Sejnowski
Howard Hughes Medical Institute &
Computational Neurobiology Lab
The Salk Institute
terry at salk.edu

ABSTRACT

We report here that changes in the normalized electroencephalographic (EEG) cross-spectrum can be used in conjunction with feedforward neural networks to monitor changes in alertness of operators continuously and in near-real time. Previously, we have shown that EEG spectral amplitudes covary with changes in alertness as indexed by changes in behavioral error rate on an auditory detection task (Makeig & Inlow, 1993). Here, we report for the first time that increases in the frequency of detection errors in this task are also accompanied by patterns of increased and decreased spectral coherence in several frequency bands and EEG channel pairs. Relationships between EEG coherence and performance vary between subjects, but within subjects, their topographic and spectral profiles appear stable from session to session. Changes in alertness also covary with changes in correlations among EEG waveforms recorded at different scalp sites, and neural networks can also estimate alertness from correlation changes in spontaneous and unobtrusively-recorded EEG signals.
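As a rough illustration of the blind source separation idea in the first abstract, the sketch below applies the Bell and Sejnowski infomax rule (in its natural-gradient form, with a logistic nonlinearity) to a synthetic two-channel mixture. The Laplacian sources, mixing matrix, learning rate, and batch sizes are all illustrative assumptions, not taken from the paper or its EEG data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, T = 2, 5000
S = rng.laplace(size=(n, T))               # independent super-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])     # unknown mixing ("volume conduction")
X = A @ S                                  # observed channel data

W = np.eye(n)                              # unmixing matrix to be learned
lr = 0.01
for _ in range(200):                       # passes over the data
    for t0 in range(0, T, 500):            # mini-batches
        Y = W @ X[:, t0:t0 + 500]
        G = 1.0 / (1.0 + np.exp(-Y))       # logistic nonlinearity
        # natural-gradient infomax update: dW = (I + (1 - 2 g(y)) y^T) W
        W += lr * (np.eye(n) + (1.0 - 2.0 * G) @ Y.T / Y.shape[1]) @ W

U = W @ X                                  # recovered sources
# each recovered channel should correlate strongly with exactly one source
C = np.abs(np.corrcoef(np.vstack([S, U]))[:n, n:])
print(np.round(C, 2))
```

In the EEG setting each row of X would be one scalp channel; the same update applies unchanged, only with more channels and real recordings.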
Sites: http://128.49.52.9/~scott/bib.html
       ftp://ftp.cnl.salk.edu/pub/jung/nips95a.ps.Z

From marni at salk.edu Wed Jan 17 16:12:10 1996
From: marni at salk.edu (Marian Stewart Bartlett)
Date: Wed, 17 Jan 1996 13:12:10 -0800
Subject: Preprint available
Message-ID: <199601172112.NAA01871@chardin.salk.edu>

The following preprints are available via anonymous ftp or http://www.cnl.salk.edu/~marni

------------------------------------------------------------------------

CLASSIFYING FACIAL ACTION

Marian Stewart Bartlett, Paul A. Viola, Terrence J. Sejnowski, Beatrice A. Golomb, Jan Larsen, Joseph C. Hager, and Paul Ekman

To appear in "Advances in Neural Information Processing Systems 8", D. Touretzky, M. Mozer, and M. Hasselmo (Eds.), MIT Press, Cambridge, MA, 1996.

ABSTRACT

The Facial Action Coding System (FACS), devised by Ekman and Friesen, provides an objective means for measuring the facial muscle contractions involved in a facial expression. In this paper, we approach automated facial expression analysis by detecting and classifying facial actions. We generated a database of over 1100 image sequences of 24 subjects performing over 150 distinct facial actions or action combinations. We compare three different approaches to classifying the facial actions in these images: holistic spatial analysis based on principal components of graylevel images; explicit measurement of local image features such as wrinkles; and template matching with motion flow fields. On a dataset containing six individual actions and 20 subjects, these methods generalized to novel subjects at 89%, 57%, and 85% correct, respectively. When combined, performance improved to 92%.

nips95.ps.Z 7 pages; 352K compressed

--------------------------------------------------------------------------

UNSUPERVISED LEARNING OF INVARIANT REPRESENTATIONS OF FACES THROUGH TEMPORAL ASSOCIATION

Marian Stewart Bartlett and Terrence J.
Sejnowski

To appear in "The Neurobiology of Computation: Proceedings of the Annual Computational Neuroscience Meeting," J.M. Bower, ed. Kluwer Academic Publishers, Boston.

ABSTRACT

The appearance of an object or a face changes continuously as the observer moves through the environment or as a face changes expression or pose. Recognizing an object or a face despite these image changes is a challenging problem for computer vision systems, yet we perform the task quickly and easily. This simulation investigates the ability of an unsupervised learning mechanism to acquire representations that are tolerant to such changes in the image. The learning mechanism finds these representations by capturing temporal relationships between 2-D patterns. Previous models of temporal association learning have used idealized input representations. The input to this model consists of graylevel images of faces. A two-layer network learned face representations that incorporated changes of pose up to 30 degrees. A second network learned representations that were independent of facial expression.
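The temporal-association mechanism in the abstract above is often sketched with a Hebbian "trace" rule: a unit's learning is driven by a decaying average of its recent activity, so the unit that responds to one view of a face keeps learning through the immediately following views. The toy binary "views", network size, and parameters below are illustrative assumptions, not the stimuli or architecture of the paper.

```python
import numpy as np

# Two "faces", each seen as three overlapping binary views presented in
# temporal sequence (illustrative stand-ins for graylevel face images).
views = {
    "A": np.array([[1, 1, 1, 0, 0, 0, 0, 0],
                   [0, 1, 1, 1, 0, 0, 0, 0],
                   [0, 0, 1, 1, 1, 0, 0, 0]], dtype=float),
    "B": np.array([[0, 0, 0, 0, 0, 1, 1, 1],
                   [0, 0, 0, 0, 1, 1, 1, 0],
                   [0, 0, 0, 1, 1, 1, 0, 0]], dtype=float),
}

n_units, dim = 2, 8
W = np.full((n_units, dim), 0.05)   # small initial weights...
W[0, 1] += 0.01                     # ...with a tiny symmetry-breaking nudge
W[1, 6] += 0.01
eta, delta = 0.1, 0.6               # learning rate, trace persistence

for _ in range(200):
    for face in ("A", "B"):         # all views of one face occur in a row
        trace = np.zeros(n_units)
        for x in views[face]:
            y = np.zeros(n_units)
            y[np.argmax(W @ x + trace)] = 1.0     # winner biased by recent activity
            trace = (1 - delta) * y + delta * trace
            W += eta * trace[:, None] * (x - W)   # trace-modulated Hebbian update

# Each unit should now respond to *all* views of one face: an invariant response.
winners = {f: sorted({int(np.argmax(W @ x)) for x in v}) for f, v in views.items()}
print(winners)
```

Without the trace term the winner is chosen view by view and the units tend to carve up individual views rather than faces; the trace is what ties temporally adjacent views to the same unit.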
cns95.ta.ps.Z 6 pages; 428K compressed

=========================================================================

FTP-host: ftp.cnl.salk.edu
FTP-pathnames: /pub/marni/nips95.ps.Z and /pub/marni/cns95.ta.ps.Z
URL: ftp://ftp.cnl.salk.edu/pub/marni
WWW URL: http://www.cnl.salk.edu/~marni

If you have difficulties, email marni at salk.edu

From fu at cis.ufl.edu Thu Jan 18 14:49:32 1996
From: fu at cis.ufl.edu (fu@cis.ufl.edu)
Date: Thu, 18 Jan 1996 14:49:32 -0500
Subject: Special issue on Knowledge-Based Neural Networks
Message-ID: <199601181949.OAA18615@whale.cis.ufl.edu>

Special Issue: Knowledge-Based Neural Networks
{Knowledge-Based Systems, 8(6), December 1995}
Guest Editor: LiMin Fu, University of Florida (Gainesville, USA)

Introduction to knowledge-based neural networks
  L Fu

Dynamically adding symbolically meaningful nodes to knowledge-based neural networks
  D W Opitz and J W Shavlik

Recurrent neural networks and prior knowledge for sequence processing: A constrained nondeterministic approach
  P Frasconi, M Gori, and G Soda

Initialization of neural networks by means of decision trees
  I Ivanova and M Kubat

Extension of the temporal synchrony approach to dynamic variable binding in a connectionist inference system
  N S Park, D Robertson, and K Stenning

Hybrid modeling in pattern recognition and control
  Jim Bezdek

Survey and critique of techniques for extracting rules from trained artificial neural networks
  R Andrews, J Diederich, and A B Tickle

========================================================

Orders: Elsevier Science BV, Order Fulfilment Department, P.O. Box 211, 1000 AE, Amsterdam, The Netherlands.
Tel: +31 (20) 485-3642
Fax: +31 (20) 485-3598

From dnoelle at cs.ucsd.edu Thu Jan 18 15:40:16 1996
From: dnoelle at cs.ucsd.edu (David Noelle)
Date: Thu, 18 Jan 96 12:40:16 -0800
Subject: Cog Sci 96: Final Call For Papers
Message-ID: <9601182040.AA25271@beowulf>

             Eighteenth Annual Conference of the
                 COGNITIVE SCIENCE SOCIETY
                     July 12-15, 1996
            University of California, San Diego
                  La Jolla, California

            SECOND (AND FINAL) CALL FOR PAPERS

DUE DATE: Thursday, February 1, 1996
CONTACT: cogsci96 at cs.ucsd.edu

EXECUTIVE SUMMARY OF CHANGES FROM ORIGINAL CFP

After discussion with the advisory board, we decided to go with a three-tiered approach after all. There will be six-page papers in the proceedings for both talks and posters. However, even if your paper/poster is not accepted, you will have a chance to submit a one-page abstract for publication and poster presentation. Or, you may submit a one-page abstract initially (actually two pages in the submission format) for guaranteed acceptance. This is meant to accommodate the very different cultures of the component disciplines of the Society, while making a minimal change from previous years' formats. Also, this CFP provides a partial list of the program committee, the plenary speakers, a rough schedule for the paper reviewing process, and some keywords to aid in the process of reviewing your paper.

INTRODUCTION

The Annual Cognitive Science Conference began with the La Jolla Conference on Cognitive Science in August of 1979. The organizing committee of the Eighteenth Annual Conference would like to welcome members home to La Jolla. We plan to recapture the pioneering spirit of the original conference, extending our welcome to fields on the expanding frontier of Cognitive Science, including Artificial Life, Cognitive and Computational Neuroscience, and Evolutionary Psychology, as well as the core areas of Anthropology, Computer Science, Linguistics, Neuroscience, Philosophy, and Psychology.
As a change this year, we follow the example of the Psychonomics and Neuroscience conferences and invite Members of the Society to submit short abstracts for guaranteed poster presentation at the conference. The conference will feature plenary addresses by invited speakers, invited symposia by leaders in their fields, technical paper sessions, a poster session, a banquet, and a Blues Party. San Diego is the home of the world-famous San Diego Zoo and Wild Animal Park, Sea World, the historic all-wooden Hotel Del Coronado, beautiful beaches, mountain areas, and deserts; it is a short drive from Mexico and features a high Cappuccino Index. Bring the whole family and stay a while!

PLENARY SESSIONS

1. "Controversies in Cognitive Science: The Case of Language"
   Stephen Crain (UMD College Park) & Mark Seidenberg (USC)
   Moderated by Paul Smolensky (Johns Hopkins University)
2. "Tenth Anniversary of the PDP Books"
   Geoff Hinton (Toronto), Jay McClelland (CMU), Dave Rumelhart (Stanford)
3. "Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action"
   Adele Diamond (MIT)
4. "Reconstructing Consciousness"
   Paul Churchland (UCSD)

PROGRAM COMMITTEE (a partial list):

Garrison W. Cottrell (UCSD) -- Program Chair
Farrell Ackerman (UCSD) -- Linguistics
Tom Albright (Salk Institute) -- Neuroscience
Patricia Churchland (UCSD) -- Philosophy
Roy D'Andrade (UCSD) -- Anthropology
Charles Elkan (UCSD) -- Computer Science
Catherine Harris (Boston U.) -- Psychology
Doug Medin (Northwestern) -- Psychology
Risto Miikkulainen (U. of Texas, Austin) -- Computer Science
Kim Plunkett (Oxford) -- Psychology
Martin Sereno (UCSD) -- Neuroscience
Tim van Gelder (Indiana U. & U. of Melbourne) -- Philosophy

GUIDELINES FOR PAPER SUBMISSIONS

Novel research papers are invited on any topic related to cognition.
Members of the Society may submit a one-page abstract (two pages in double-spaced submission format) for poster presentation, which will be automatically accepted for publication in the proceedings. Submitted full-length papers will be evaluated through peer review with respect to several criteria, including originality, quality, and significance of research, relevance to a broad audience of cognitive science researchers, and clarity of presentation. Papers will be accepted for either oral or poster presentation, and will receive 6 pages in the proceedings in the final, camera-ready format. Papers that are rejected at this stage may be re-submitted (if the author is a Society member) as a one-page abstract in the camera-ready format, due at the same date as camera-ready papers. Poster abstracts from non-members will be accepted, but the presenter should join the Society prior to presenting the poster.

Papers accepted for oral presentation will be presented at the conference as scheduled talks. Papers accepted for poster presentation and one-page abstracts will be presented at a poster session at the conference. All papers may present results from completed research as well as report on current research, with an emphasis on novel approaches, methods, ideas, and perspectives. Posters may report on recent work to be published elsewhere that has not been previously presented at the conference.

Authors should submit five (5) copies of the paper in hard copy form by Thursday, February 1, 1996, to:

  Dr. Garrison W. Cottrell
  Computer Science and Engineering 0114
  FED EX ONLY: 3250 Applied Physics and Math
  University of California San Diego
  La Jolla, CA 92093-0114
  phone for FED EX: 619-534-5948 (my secretary, Marie Kreider)

If confirmation of receipt is desired, please use certified mail or enclose a self-addressed stamped envelope or postcard.
DAVID MARR MEMORIAL PRIZES FOR EXCELLENT STUDENT PAPERS

Papers with a student first author are eligible to compete for a David Marr Memorial Prize for excellence in research and presentation. The David Marr Prizes are accompanied by a $300.00 honorarium, and are funded by an anonymous donor.

LENGTH

Papers must be a maximum of eleven (11) pages long (excluding only the cover page but including figures and references), with 1-inch margins on all sides (i.e., the text should be 6.5 inches by 9 inches, including footnotes but excluding page numbers), double-spaced, and in 12-point type. Each page should be numbered (excluding the cover page). Template and style files conforming to these specifications for several text formatting programs, including LaTeX, FrameMaker, Word, and WordPerfect, are available by anonymous FTP from "cs.ucsd.edu" in the "pub/cogsci96/formats" directory. There is a self-explanatory subdirectory hierarchy under that directory for papers and posters. Formatting information is also available via the World Wide Web at the conference web page, located at "http://www.cse.ucsd.edu/events/cogsci96/".

Submitted abstracts should be two pages in submitted format, with the same margins as full papers. Style files for these are available at the same location as above. Final versions of papers and poster abstracts will be required only after authors are notified of acceptance; accepted papers may be published in a CD-ROM version of the proceedings. Abstracts will be available before the meeting from a WWW server. Final versions must follow the HTML style guidelines which will be made available to the authors of accepted papers and abstracts.

This year we will again attempt to publish the proceedings in two modalities: paper and a CD-ROM version. Depending on a decision of the Governing Board, we may be switching completely from paper to CD-ROM publication in order to control escalating costs and permit use of search software.
[Comments on this change should be directed to "alan at lrdc4.lrdc.pitt.edu" (Alan Lesgold, Secretary/Treasurer).]

COVER PAGE

Each copy of the submitted paper must include a cover page, separate from the body of the paper, which includes:

1. Title of paper.
2. Full names, postal addresses, phone numbers, and e-mail addresses of all authors.
3. An abstract of no more than 200 words.
4. Three to five keywords in decreasing order of relevance. The keywords will be used in the index for the proceedings. You may use the keywords from the attached list, or you may make up your own. Please try to give a primary discipline (or pair of disciplines) to which the paper is addressed (e.g., Psychology, Philosophy, etc.).
5. Preference for presentation format: talk or poster, talk only, poster only. Poster-only submissions should follow paper format, but be no more than 2 pages in this format (final poster abstracts will follow the same 2-column format as papers). Accepted papers will be presented as talks. Submitted posters by Society Members will be accepted for poster presentation, but may, at the discretion of the Program Committee, be invited for oral presentation. Non-members may join the Society at the time of submission.
6. A note stating if the paper is eligible to compete for a Marr Prize.

DEADLINE

Papers must be received by Thursday, February 1, 1996. Papers received after this date will be recycled.

REVIEW SCHEDULE

February 1: Papers due
March 21: Decisions/reviews returned to authors
April 14: Final papers & abstracts due

CALL FOR SYMPOSIA

(The call for symposia has been deleted here, as the deadline has passed.)

CONFERENCE CHAIRS
Edwin Hutchins and Walter Savitch

PROGRAM CHAIR
Garrison W. Cottrell

Please direct email to "cogsci96 at cs.ucsd.edu".

KEYWORDS

Please identify an appropriate major discipline for your work (try to name no more than two!) and up to three subareas from the following list.
Anthropology:
  Behavioral Ecology; Cognition & Education; Cognitive Anthropology; Distributed Cognition; Situated Cognition; Social & Group Cognition

Computer Science:
  Artificial Intelligence; Artificial Life; Case-Based Learning; Case-Based Reasoning; Category & Concept Learning; Category & Concept Representation; Computer Aided Instruction; Computer Human Interaction; Computer Vision; Connectionism; Discovery-Based Learning; Distributed Systems; Explanation Generation; Hybrid Representations; Inference & Decision Making; Intelligent Agents; Machine Learning; Memory; Model-Based Reasoning; Natural Language Generation; Natural Language Learning; Natural Language Processing; Planning & Action; Problem Solving; Reasoning Heuristics; Reasoning Under Time Constraints; Robotics; Rule-Based Reasoning; Situated Cognition; Speech Generation; Speech Processing; Text Comprehension & Translation

Linguistics:
  Cognitive Linguistics; Discourse & Text Comprehension; Generative Linguistics; Language Acquisition & Development; Language Generation; Language Understanding; Lexical Semantics; Phonology & Word Recognition; Pragmatics & Communication; Psycholinguistics; Sentence Processing; Syntax

Neuroscience:
  Attention; Brain Imaging; Cognitive Neuroscience; Computational Neuroscience; Consciousness; Memory; Motor Control; Language Acquisition & Development; Language Generation; Language Understanding; Neuropsychology; Neural Plasticity; Perception & Recognition; Planning & Action; Spatial Processing

Philosophy:
  Philosophy Of Anthropology; Philosophy Of Biology; Philosophy Of Language; Philosophy Of Mind; Philosophy Of Neuroscience; Philosophy Of Psychology; Philosophy Of Science

Psychology:
  Analogical Reasoning; Associative Learning; Attention; Behavioral Ecology; Case-Based Learning; Case-Based Reasoning; Category & Concept Learning; Category & Concept Representation; Cognition & Education; Consciousness; Discourse & Text Comprehension; Discovery-Based Learning; Distributed Cognition; Evolutionary Psychology; Explanation Generation; Imagery; Inference & Decision Making
Language Acquisition & Development; Language Generation; Language Understanding; Lexical Semantics; Memory; Model-Based Reasoning; Neuropsychology; Perception & Recognition; Phonology & Word Recognition; Planning & Action; Pragmatics & Communication; Problem Solving; Psycholinguistics; Reasoning Heuristics; Reasoning Under Time Constraints; Rule-Based Reasoning; Sentence Processing; Situated Cognition; Spatial Processing; Syntactic Processing

From krista at torus.hut.fi Fri Jan 19 07:33:42 1996
From: krista at torus.hut.fi (Krista Lagus)
Date: Fri, 19 Jan 1996 14:33:42 +0200 (EET)
Subject: A novel SOM-based approach to free-text mining
Message-ID:

A novel SOM-based approach to free-text mining
19.1.1996 -- WEBSOM demo for newsgroup exploration

You are welcome to test the document exploration tool WEBSOM. An ordered map of the information space is provided: similar documents lie near each other on the map. This ordering helps in finding related documents once any interesting document is found.

Currently, a demo for browsing the 4900 articles that have appeared in the Usenet newsgroup comp.ai.neural-nets since 19.6.1995 is available at the WWW address

  http://websom.hut.fi/websom/

The WEBSOM home pages also contain an article describing the WEBSOM method and documentation of the demo. The demonstration requires a graphical WWW browser (such as Mosaic or Netscape), but the documentation can also be read with other browsers.
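The map underlying WEBSOM is a self-organizing map trained on vector representations of documents, so that similar vectors end up at nearby map nodes. Below is a minimal sketch of that core idea on toy bag-of-words vectors; the vocabulary, map size, and training schedule are illustrative assumptions, not WEBSOM's actual document encoding.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "documents" as bag-of-words count vectors over a 6-word vocabulary.
# Two topics: words 0-2 vs. words 3-5 (an illustrative stand-in for real text).
topic_a = rng.poisson(3.0, size=(40, 3)) + 1.0
topic_b = rng.poisson(3.0, size=(40, 3)) + 1.0
docs = np.vstack([
    np.hstack([topic_a, np.zeros((40, 3))]),
    np.hstack([np.zeros((40, 3)), topic_b]),
])
docs /= np.linalg.norm(docs, axis=1, keepdims=True)   # normalize document lengths

# A 4x4 self-organizing map: one model vector per grid node.
grid = np.array([(i, j) for i in range(4) for j in range(4)], dtype=float)
codebook = rng.uniform(0.0, 0.2, size=(16, 6))

n_iters = 2000
for t in range(n_iters):
    x = docs[rng.integers(len(docs))]
    bmu = np.argmin(((codebook - x) ** 2).sum(axis=1))  # best-matching unit
    # learning rate and neighborhood radius both shrink over time
    lr = 0.5 * (1.0 - t / n_iters) + 0.01
    sigma = 2.0 * (1.0 - t / n_iters) + 0.3
    d2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
    h = np.exp(-d2 / (2.0 * sigma ** 2))                # Gaussian neighborhood
    codebook += lr * h[:, None] * (x - codebook)

def bmu_of(x):
    return grid[np.argmin(((codebook - x) ** 2).sum(axis=1))]

# Documents on the same topic should land in nearby map regions,
# different topics in well-separated regions.
a_pos = np.mean([bmu_of(d) for d in docs[:40]], axis=0)
b_pos = np.mean([bmu_of(d) for d in docs[40:]], axis=0)
print(a_pos, b_pos)
```

WEBSOM's browsable map is this same structure at much larger scale: each grid node holds a model vector, and clicking a map region retrieves the newsgroup articles whose vectors match that node.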
The WEBSOM team:
  Timo Honkela
  Samuel Kaski
  Krista Lagus
  Teuvo Kohonen

Helsinki University of Technology
Neural Networks Research Centre
Rakentajanaukio 2C
FIN-02150 Espoo, Finland
email: websom at websom.hut.fi

From davec at cogs.susx.ac.uk Fri Jan 19 06:41:06 1996
From: davec at cogs.susx.ac.uk (Dave Cliff)
Date: Fri, 19 Jan 1996 11:41:06 +0000 (GMT)
Subject: MSc in Evolutionary and Adaptive Systems
Message-ID:

Please distribute:

The University of Sussex
School of Cognitive and Computing Sciences
Graduate Research Centre (COGS GRC)

Master of Science (MSc) Degree in EVOLUTIONARY AND ADAPTIVE SYSTEMS

Applications are invited for entry in October 1996 to the Master of Science (MSc) degree in Evolutionary and Adaptive Systems. The degree can be taken in one year full-time, or part-time over two years. Students initially follow taught courses, as preparation for an individual research project leading to a Masters thesis. This email gives a brief summary of the degree. For further details, see:

  World-Wide Web: http://www.cogs.susx.ac.uk/lab/adapt/easy_msc.html
  Anonymous ftp: ftp to cogs.susx.ac.uk, cd to pub/users/davec, get (in binary mode) easy_msc.ps.Z (69K)

Or contact the address at the end of this email to request hard copy.

The MSc is sponsored in part by: BNR Europe Ltd, Hewlett-Packard, Millennium Interactive.

BACKGROUND

The past decade has seen the formation of new research fields crossing traditional boundaries between biology, computer science, and cognitive science. Known variously as Artificial Life, Simulation of Adaptive Behavior, and Evolutionary Computation, these fields share a common focus on adaptation in natural and artificial systems. This research has the potential both to further our understanding of living and adaptive mechanisms in nature, and to construct artificial systems which show the same flexibility, robustness, and capacity for adaptation as is seen in animals.
The international research community is sufficiently large to support five series of biennial conferences on various aspects of the field (ICGA, ALife, ECAL, SAB, PPSN), and there are currently three international journals (all produced by MIT Press) for archival publication of significant research findings. The Evolutionary and Adaptive Systems (EASy) Research Group at the University of Sussex School of Cognitive and Computing Sciences (COGS) is now widely recognised as one of the world's foremost groups of researchers in this area, with approximately 35 people actively engaged in research. Students on the EASy MSc will be involved in this lively interdisciplinary environment. At the end of the course, students will have been trained to a standard where they are capable of pursuing doctoral research in any area of Evolutionary and Adaptive Systems, and of applying those techniques in industry. INTERNATIONAL STEERING GROUP M. A. Arbib (Uni. of Southern California, USA); M. Bedau (Reed College, USA); R. D. Beer (Case Western Reserve Uni., USA); R. A. Brooks (MIT, USA); H. Cruse (Universitat Bielefeld, Germany); K. De Jong (George Mason Uni., USA); D. Dennett (Tufts, USA); D. Floreano (LCT, Italy); J. Hallam (Uni. of Edinburgh, UK); I. Horswill (Northwestern Uni., USA); L. P. Kaelbling (Brown Uni., USA); C. G. Langton (Santa Fe Inst., USA); M. J. Mataric (Brandeis Uni., USA); J.-A. Meyer (Ecole Normale Superieure, France); G. F. Miller (MPIPF, Germany); R. Pfeiffer (Uni. of Zurich, Switz.); T. S. Ray (ATR, Japan); C. Reynolds (Silicon Graphics Inc, USA); H. L. Roitblat (Uni. of Hawaii, USA); T. Smithers (Euskal Herriko Unibertsitatea, Spain); L. Steels (VUB, Belgium); P. Todd (MPIPF, Germany); B. H. Webb (Uni. of Nottingham, UK); S. W. Wilson (Rowland Inst., USA).
FULL-TIME SYLLABUS Autumn Term (Oct--Dec) ---------------------- Four compulsory courses: Artificial Life Introduction to Computer Science Formal Computational Skills Adaptive Behavior in Animals and Robots Spring Term (Jan-Mar) --------------------- Two compulsory courses: Adaptive Systems Neural Networks Two options chosen from the following list (further options may become available; some options may not be available in some years): Simulation of Adaptive Behavior History and Philosophy of Adaptive Systems Development in Human and Artificial Life Computer Vision Philosophy of Cognitive Science Computational Neuroscience Summer (Apr-Aug) ---------------- Research project, which should include a substantial practical (programming) element, leading to submission of a 12000-word masters thesis. It is intended that there will be industrial involvement in some projects. SUSSEX FACULTY INVOLVED IN THE MSc Prof. H. G. Barrow; Prof. M. A. Boden; Dr. H. Buxton; R. Chrisley; Prof. A. J. Clark; Dr. D. Cliff; Dr. T. S. Collett; Dr. P. Husbands; Dr. D. Osorio; Dr. J. C. Rutkowska; Dr. D. S. Young. APPLICATION PROCEDURE Application forms are available from: Postgraduate Admissions Office Sussex House University of Sussex Brighton BN1 9RH England, U.K. Tel: +44 (0)1273 678412 Email: PG.Admissions at admin.susx.ac.uk Early application is encouraged: there are a limited number of places on the MSc. If you have any further queries about this degree, please contact: Dr D Cliff School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH England, U.K. 
Tel: +44 (0)1273 678754 Fax: +44 (0)1273 671320 E-mail: davec at cogs.susx.ac.uk From glinert at cs.rpi.edu Thu Jan 18 22:10:13 1996 From: glinert at cs.rpi.edu (glinert@cs.rpi.edu) Date: Thu, 18 Jan 96 22:10:13 EST Subject: ASSETS'96 AP + Reg Forms Message-ID: <9601190310.AA20547@colossus.cs.rpi.edu> /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ ADVANCE PROGRAM AND REGISTRATION FORMS ASSETS'96 The Second International ACM/SIGCAPH Conference on Assistive Technologies April 11 - 12, 1996 Waterfront Centre Hotel Vancouver BC, Canada Sponsored by the ACM's Special Interest Group on Computers and the Physically Handicapped, ASSETS'96 is the second of a new series of conferences whose goal is to provide a forum where researchers and developers from academia and industry can meet to exchange ideas and report on new developments relating to computer-based systems to help people with impairments and disabilities of all kinds. This announcement includes 4 parts: o Message from the Program Chair o ASSETS'96 Advance Program o ASSETS'96 Registration Form o Hotel Information If you have any questions or would like further information, please consult the conference web pages at http://www.cs.rpi.edu/assets or contact the ASSETS'96 General Chair: Ephraim P. Glinert Dept. of Computer Science R. P. I. Troy, NY 12180 Phone: (518) 276 2657 E-mail: glinert at cs.rpi.edu /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ MESSAGE FROM THE PROGRAM CHAIR ============================== As Assets '96 Program Chair, I am pleased to extend a warm invitation to you to attend ASSETS'96, the 1996 ACM/SIGCAPH International Conference on Assistive Technologies! This is the second in an annual series of meetings whose goal is to provide a forum where researchers and developers from academia and industry can meet to exchange ideas and report on leading edge developments relating to computer based systems to help people with disabilities. 
This year, conference attendees will hear 21 exciting presentations on state-of-the-art approaches to vision impairments, motor impairments, hearing impairments, augmentative communication, special education needs, Internet access issues, and much more. All submissions have undergone a rigorous review process to assure that the program is of the high technical quality associated with the best ACM conferences. No more papers have been accepted than can comfortably be presented in a single track (no parallel sessions), and ample time is included in the schedule for interaction among presenters and attendees. Come join us in beautiful Vancouver for a great time and a rewarding professional experience! David L. Jaffe VA Palo Alto Health Care System /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ ASSETS'96 ADVANCE PROGRAM ========================= NOTE: For each paper, only the affiliation of the first author is given. WED 4/10: 6:00 pm - 9:00 pm Registration + Reception THU 4/11: 8:00 am - 5:00 pm Registration 8:00 am - 9:00 am Continental Breakfast 8:45 am - 9:00 am Welcome to ASSETS'96! 9:00 am -10:00 am KEYNOTE ADDRESS: David Rose, Center for Applied Special Technology (CAST) 10:00 am -10:30 am Break 10:30 am -12:00 noon Papers I: The User Interface I "Touching and hearing GUIs: Design issues for the PC access system" C. Ramstein, O. Martial, A. Dufresne, M. Carignan, P. Chasse and P. Mabilleau Center for Information Technologies Innovation (Canada) "Enhancing scanning input with nonspeech sounds" S.A. Brewster, V. Raty and A. Kortkangas University of Glasgow (UK) "A study of input device manipulation difficulties" S. Trewin University of Edinburgh (UK) 12:00 - 1:00 pm Lunch 1:00 pm - 2:00 pm SIGCAPH Business Meeting 2:00 pm - 3:00 pm Papers II: The World Wide Web "V-Lynx: Bringing the World Wide Web to sight-impaired users" M. Krell and D.
Cubranic University of Southern Mississippi (USA) "Computer generated 3-dimensional models of manual alphabet shapes for the World Wide Web" S. Geitz, T. Hanson and S. Maher Gallaudet University (USA) 3:00 pm - 3:30 pm Break 3:30 pm - 5:30 pm Papers III: Vision Impairments I "EMACSPEAK: Direct Speech Access" T.V. Raman Adobe Systems "The Pantobraille: Design and pre-evaluation of a single cell braille display based on a force feedback device" C. Ramstein Center for Information Technologies Innovation (Canada) "Interactive tactile display system: A support system for the visually impaired to recognize 3D objects" Y. Kawai and F. Tomita Electrotechnical Laboratory (Japan) "Audiograf: A diagram reader for the blind" A.R. Kennel Institut fur Informationssysteme (Switzerland) 6:00 pm - 9:00 pm Buffet Dinner 8:00 pm - 9:00 pm ASSETS'97 Organizational Meeting FRI 4/12: 8:00 am -12:00 noon Registration 8:00 am - 9:00 am Continental Breakfast 9:00 am -10:00 am Papers IV: Empirical Studies "EVA, an early vocalization analyzer: An empirical validity study of computer categorization" H.J. Fell, L.J. Ferrier, Z. Mooraj, E. Benson and D. Schneider Northeastern University (USA) "An approach to the evaluation of assistive technology" R.D. Stevens and A.D.N. Edwards University of York (UK) 10:00 am -10:30 am Break 10:30 am -12:00 noon Papers V: The User Interface II "Designing interface toolkit with dynamic selectable modality" S. Kawai, H. Aida and T. Saito University of Tokyo (Japan) "Multimodal input for computer access and augmentative communication" A. Smith, J. Dunaway, P. Demasco and D. Peischl A.I. duPont Institute / University of Delaware (USA) "The Keybowl: An ergonomically designed document processing device" P.J. McAlindon, K.M. Stanney and N.C. Silver University of Central Florida (USA) 12:00 - 1:00 pm Lunch 1:00 pm - 2:00 pm Panel Discussion "Designing the World Wide Web for people with disabilities" M.G. Paciello, Digital Equipment Corporation (USA) G.C. 
Vanderheiden, TRACE R&D Center (USA) L.F. Laux, US West Communications, Inc. (USA) P.R. McNally, University of Hertfordshire (UK) 2:00 pm - 3:00 pm Papers VI: Multimedia "A gesture recognition architecture for sign language" A. Braffort LIMSI/CNRS (France) "`Composibility': Widening participation in music making for people with disabilities via music software and controller solutions" T. Anderson and C. Smith University of York (UK) 3:00 pm - 3:30 pm Break 3:30 pm - 5:30 pm Papers VII: Vision Impairments II "A generic direct manipulation 3D auditory environment for hierarchical navigation in nonvisual interaction" A. Savidis, C. Stephanidis, A. Korte, K. Crispien and K. Fellbaum Foundation for Research and Technology - Hellas (Greece) "Improving the usability of speech-based interfaces for blind users" I.J. Pitt and A.D.N. Edwards University of York (UK) "TDraw: A computer-based tactile drawing tool for blind people" M. Kurze Free University of Berlin (Germany) "Development of dialogue systems for a mobility aid for blind people: Initial design and usability testing" T. Strothotte, S. Fritz, R. Michel, A. Raab, H. Petrie, V. Johnson, L. Reichert and A. Schalt Universitat Magdeburg (Germany) 5:30 pm Closing Remarks /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ ASSETS'96 REGISTRATION FORM =========================== This form is 2 pages long. Please print it out, complete both pages and mail it WITH FULL PAYMENT to: Ephraim P. Glinert, ASSETS'96 Dept. of Computer Science R. P. I. Troy, NY 12180 We're sorry, but e-mail registration forms and/or forms not accompanied by full payment (check or credit card information) CANNOT be accepted. 
CONFERENCE REGISTRATION FEES EARLY LATE / ON-SITE -------------------------------------------------------- ACM member: $ 395 $ 475 Nonmember: $ 580 $ 660 Full-time student: $ 220 $ 270 -------------------------------------------------------- 1: CONFERENCE REGISTRATION (from the table above): $ ___________ 2: SECOND BUFFET DINNER TICKET (Thursday, April 11): $ 50 ___YES ___NO 3: SECOND COPY OF THE CONFERENCE PROCEEDINGS: $ 30 ___YES ___NO TOTAL AMOUNT DUE: $ ___________ NOTES: o Registration fee includes: ADMISSION to all sessions ONE COPY of the conference PROCEEDINGS RECEPTION, 5 MEALS AND 4 BREAKS as shown in the Advance Program!!! o To qualify for the EARLY RATE, your registration must be postmarked on or before WEDNESDAY, MARCH 27, 1996. If you are an ACM MEMBER, please supply your ID# __________________ . STUDENTS, please attach a clear photocopy of your valid student ID. o CANCELLATIONS will be accepted up to FRIDAY, MARCH 15, 1996 subject to a 20% handling fee. ASSETS'96 REGISTRATION FORM (continued) ======================================= PERSONAL INFORMATION: Name __________________________________________________________________________ Affiliation ___________________________________________________________________ Address _______________________________________________________________________ City _______________________________ State/Province __________________________ Country __________________________________ ZIP/Postal Code ___________________ E-mail ________________________________________________________________________ Phone ___________________________________ FAX ________________________________ ***I have a disability for which I require special accommodation ___YES ___NO If YES, please attach a separate sheet with details. Thank you! PAYMENT INFORMATION: ___CHECK in U.S.
funds enclosed, made payable to "ACM ASSETS'96" ___Please charge $ ___________ to my CREDIT CARD: Card type: ___AMEX ___VISA ___MasterCard Card # _______________________________________ Expiration Date ___________ Name On Card ______________________________________________________________ Billing Address ___________________________________________________________ Cardholder Signature ________________________________________ (ASSETS'96) /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ HOTEL INFORMATION ================= All conference events will take place at the Waterfront Centre Hotel, a member of the Canadian Pacific group. The hotel is located in downtown Vancouver, next to the convention center and cruise ship terminal. Waterfront Centre Hotel 900 Canada Place Way Vancouver, British Columbia V6C 3L5 CANADA Phone: (604) 691 1991 or (800) 441 1414 FAX: (604) 691 1999 A block of rooms for attendees of ASSETS'96 has been set aside at specially discounted rates: Single $140 Canadian per night, plus applicable taxes Double/Twin $160 Canadian per night, plus applicable taxes Waterfront Suite $360 Canadian per night, plus applicable taxes To reserve space at these prices, please call the hotel directly on or before MARCH 15, 1996 and refer to "ACM ASSETS'96". /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ If you have any questions or would like further information, please consult the conference web pages at http://www.cs.rpi.edu/assets or contact the ASSETS'96 General Chair: Ephraim P. Glinert Dept. of Computer Science R. P. I. 
Troy, NY 12180 Phone: (518) 276 2657 E-mail: glinert at cs.rpi.edu /\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\/\ From trevor at mallet.Stanford.EDU Fri Jan 19 20:15:56 1996 From: trevor at mallet.Stanford.EDU (Trevor Hastie) Date: Fri, 19 Jan 1996 17:15:56 -0800 (PST) Subject: Regression and Classification course Message-ID: <199601200115.RAA13247@mallet.Stanford.EDU> ************ SHORT COURSE ANNOUNCEMENT ********** MODERN REGRESSION AND CLASSIFICATION May 9-10, 1996 Stanford Park Hotel, Menlo Park ************************************************* A two-day course on widely applicable statistical methods for modelling and prediction, featuring Professor Trevor Hastie and Professor Robert Tibshirani Stanford University University of Toronto This two day course covers modern tools for statistical prediction and classification. We start from square one, with a review of linear techniques for regression and classification, and then take attendees through a tour of: o Flexible regression techniques o Classification and regression trees o Neural networks o Projection pursuit regression o Nearest Neighbor methods o Learning vector quantization o Wavelets o Bootstrap and cross-validation We will also illustrate software tools for implementing the methods. Our objective is to provide attendees with the background and knowledge necessary to apply these modern tools to solve their own real-world problems. The course is geared for: o Statisticians o Financial analysts o Industrial managers o Medical and Quantitative researchers o Scientists o others interested in prediction and classification Attendees should have an undergraduate degree in a quantitative field, or have knowledge and experience working in such a field. For more details on the course, how to register, price etc: o point your web browser to: http://playfair.stanford.edu/~trevor/mrc.html OR send a request by o FAX to Prof. T. 
Hastie at (415) 326-0854, OR o email to trevor at playfair.stanford.edu From payman at ebs330.eb.uah.edu Sat Jan 20 15:24:11 1996 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Sat, 20 Jan 96 14:24:11 CST Subject: CIFEr'96 Oral & Poster Presentations Message-ID: <9601202024.AA20275@ebs330> IEEE/IAFE 1996 $$$$$$$$$$$ $$$$$$ $$$$$$$$$$$ $$$$$$$$$$ $$$$$$$$$$$ $$$$$$ $$$$$$$$$$$ $$$$$$$$$$ $$$$ $$ $$$$ $$$$ $$$ $$$ $$$$ $$$$ $$$$$$$ $$$$$$ $$$$$$$$$$ $$$$ $$$$ $$$$$$$ $$$$$$ $$$$$$$$$$ $$$$ $$ $$$$ $$$$ $$$ $$$ $$$ $$$$$$$$$$$ $$$$$$ $$$$ $$$$$$$$$$ $$$ $$$$$$$$$$$ $$$$$$ $$$$ $$$$$$$$$$ $$$ IEEE/IAFE Conference on Computational Intelligence for Financial Engineering March 24-26, 1996 Crowne Plaza Manhattan - New York City http://www.ieee.org/nnc/conferences/cfp/cifer96.html [next update: February 1] ORAL PRESENTATIONS ------------------ Financial Computing Environments -------------------------------- "New Computational Architectures for Pricing Derivatives" R. Freedman, R. DiGiorgio "CAFE: A Complex Adaptive Financial Environment" R. Even, B. Mishra "Financial Trading Center at the University of Texas" P. Jaillet Market Behavior Models ---------------------- "Neural Networks Prediction of Multivariate Financial Time Series: The Swiss Bond Case" T. Ankenbrand, M. Tomassini "Bridging the Gap Between Nonlinearity Tests and the Efficient Market Hypothesis by Genetic Programming" S. Chen, C. Yeh "Models of Market Behavior: Bringing Realistic Games to Market" S. Leven Chaos and Time Series for Financial Systems ------------------------------------------- "Impetus for Future Growth in the Globalization of Stock Investments: An Evidence from Joint Time Series and Chaos Analyses" M. Hoque "Finding Time Series Among the Chaos: Stochastics, Deseasonalization, and Texture-Detection using Neural Nets" P. Werbos "Financial Time Series Analysis and Forecasting Using Computer Simulation and Methods of Nonlinear Adaptive Control of Chaotic Systems" A. Fradhov, S. 
Fradhov, A. Markov, D. Oliva Neural Nets for Financial Applications -------------------------------------- "Experiments in Predicting the German Stock Index DAX with Density Estimating Neural Networks" D. Ormoneit, R. Neuneier "Stock Market Prediction Using Different Neural Network Classification Architectures" C. Dagli, K. Schierholt "Modelling Stock Return Sensitivities to Economic Factors with the Kalman Filter and Neural Networks" Y. Bentz, L. Boone, J. Connor Fuzzy Logic for Financial Applications -------------------------------------- "Computer Supported Determination of Bank Credit Conditions" S. Schwarze "Fuzzy Logic and Genetic Algorithms for Financial Risk Management" T. Rubinson, R. Yager "Foreign Exchange Rate Prediction by Fuzzy Inferencing on Deterministic Chaos" S. Ghoshray Financial Data Mining --------------------- "Stock Selection Combining Rule Generation and Risk/Reward Portfolio Optimization" C. Apte, S. Hong, A. King "Data Driven Risk Management System" R. Grossman "Intelligent Hybrid System for Data Mining" M. Hambaba Simulation Techniques for Derivatives Pricing --------------------------------------------- "Path Integral Monte Carlo Method and Maximum Entropy: A Complete Solution for the Derivative Valuation Problem" M. Makivic "Problems with Monte Carlo Simulation in the Pricing of Contingent Claims" J. Molle, F. Zapatero "Faster Simulation of the Prices of Derivative Securities" S. Paskov Financial Time Series Prediction I ---------------------------------- "Automated Mathematical Modelling for Financial Time Series Prediction Using Fuzzy Logic, Dynamical Systems and Fractal Theory" O. Castillo, P. Melin "Max-Min Optimal Investing" E. Ordentlich, T. Cover "Building Long/Short Portfolios Using Rule Induction" G. John, P. Miller Financial Time Series Prediction II ----------------------------------- "Adaptive Rival Penalized Competitive Learning and Combined Linear Predictor with Application to Financial Investment" Y. Cheung, Z. Lai, L.
Xu "A Rule-based Neural Stock Trading Decision Support System" S. Chou, C. Chen, C. Yang, F. Lai "The Gene Expression Messy Genetic Algorithm for Financial Applications" H. Kargupta, K. Buescher Term Structure Modeling ----------------------- "Analysing Shocks on the Interest Rates Structure with Kohonen Map" M. Cottrell, E. De Bodt, P. Gregoire, E. Henrion "Interest Rate Futures: Estimation of Volatility Parameters in an Arbitrage-Free Framework" R. Bhar, C. Chiarella "Prediction of Individual Bond Prices Via the TDM Model" T. Kariya, H. Tsuda Financial Market Volatility --------------------------- "Robust Estimation Analytics for Financial Risk Management" H. Green, R. Martin, M. Pearson "Implied Volatility Functions: Empirical Tests" B. Dumas, J. Fleming, R. Whaley "Evaluation of Common Models Used in the Estimation of Historical Volatility" J. Dalle Molle Business Decision Tools ----------------------- "Fuzzy Queries for Top-Management Succession Planning" T. Sutter, M. Schroder, R. Kruse, J. Gebhardt "Density Based Clustering and Radial Basis Function Modeling to Generate Credit Card Fraud Scores" V. Hanagandi, A. Dhar, K. Buescher "Nonlinear Analysis of Retail Performance" D. Vaccari POSTER PRESENTATIONS -------------------- "Fuzzy Set Methods for Uncertainty Representation in Risky Financial Decisions" R. Yager "Trading Mechanisms and Return Volatility: Empirical Investigation on Shang Hai Stock Exchange Based on a Neural Network Model" Z. Lai, Y. Chuang, L. Xu "Application of Fuzzy Regression Models to Predict Exchange Rates for Composite Currencies" S. Ghoshray "Risk Management in an Uncertain Environment by Fuzzy Statistical Methods" S. Ghoshray "Heuristic Techniques in Tax Structuring for Multinationals" D. Fatouros, G. Salkin, N. Christofides "MLP and Fuzzy Approaches to Prediction of the SEC's Investigative Targets" E. Feroz, T. Kwon "A Corporate Solvency Map Through Self-Organizing Neural Networks" Y. 
Alici "The Applicability of Information Criteria for Neural Network Architecture Selection" C. Haefke, C. Helmenstein "Stock Prediction Using Different Neural Network Classification Architectures" C. Dagli, K. Schierholt -- Payman Arabshahi Electronic Publicity Chair, CIFEr'96 Tel : (205) 895-6380 Dept. of Electrical & Computer Eng. Fax : (205) 895-6803 University of Alabama in Huntsville payman at ebs330.eb.uah.edu Huntsville, AL 35899 http://www.eb.uah.edu/ece From alpaydin at boun.edu.tr Sun Jan 21 07:19:28 1996 From: alpaydin at boun.edu.tr (Ethem Alpaydin) Date: Sun, 21 Jan 1996 15:19:28 +0300 (MEST) Subject: CFP: TAINN'96, Conf on AI & NN (Istanbul/Turkey) Message-ID: Pre-S. We're sorry if you receive multiple copies of this message. * Please post * Please forward * Please post * Please forward * Call for Papers TAINN'96, Istanbul 5th Turkish Symposium on Artificial Intelligence and Neural Networks To be held at Istanbul Technical University, Macka Campus June 27 - 28, 1996 Jointly organized by Istanbul Technical University and Bogazici University. SPONSORS Istanbul Technical University, Bogazici University, and Turkish Scientific and Technical Research Council (Tubitak) IN COOPERATION WITH IEEE Computer Society Turkey Chapter, ACM SIGART Bilkent Chapter SCOPE Theory: Search, Knowledge Representation, Computational Learning Theory, Complexity Theory, Dynamical Systems, Combinatorial Optimization, Function Approximation, Estimation, Machine Learning, Machine Discovery, Social and Philosophical Issues. Algorithms and Architectures: Learning Algorithms, Multilayer Perceptrons, Recurrent Networks, Decision Trees, Genetic and Evolutionary Algorithms, Fuzzy Logic, Heuristic Search Methods, Symbolic Reasoning.
Applications: Expert Systems, Natural Language Processing, Computer Vision, Image Processing, Speech Recognition Coding and Synthesis, Handwriting Recognition, Time-Series Prediction, Medical Processing, Financial Analysis, Music Processing, Control, Navigation, Path Planning, Automated Theorem Proving, Symbolic Algebraic Computation. Cognitive and Neuro Sciences: Human Learning, Memory and Language, Perception, Psychophysics, Computational Models. Implementation: Simulation Tools, Parallel Processing, Analog and Digital VLSI, Neurocomputing Systems. ORGANIZING COMMITTEE E. Alpaydin (Bogazici), U. Cilingiroglu (ITU), F. Gurgen (Bogazici), C. Guzelis (ITU) TECHNICAL COMMITTEE A.H. Abdel Wahab (Egypt), L. Akarun (Bogazici), L. Akin (Bogazici), V. Akman (Bilkent), F. Alpaslan (METU), K. Altinel (Bogazici), V. Atalay (METU), C. Bozsahin (METU), S. Canu (Compiegne, France), E. Celebi (ITU), I. Cicekli (Bilkent), K. Ciliz (Bogazici), D. Cohn (Harlequin, USA), D. Davenport (Bilkent), C. Dichev (BAS, Bulgaria), A. Erkmen (METU), G. Ernst (Case Western Reserve, USA), A. Fatholahzadeh (Supelec, France), Z. Ghahramani (Toronto, Canada), H. Ghaziri (Beirut, Lebanon), C. Goknar (ITU), M. Guler (METU), A. Guvenir (Bilkent), U. Halici (METU), M. Jabri (Sydney, Australia), M. Jordan (MIT, USA), S. Kocabas (Tubitak MAM-ITU), S. Kuru (Bogazici), K. Oflazer (Bilkent), R. Parikh (CUNY, USA), F. Masulli (Genova, Italy), M. de la Maza (MIT, USA), R. Murray-Smith (DaimlerBenz, Germany), Y. Ozturk (Ege), E. Oztemel (Tubitak MAM-SAU), F. Pekergin (EHEI, France), B. Sankur (Bogazici), A.F. Savaci (ITU), C. Say (Bogazici), L. Shastri (ICSI, USA), M. Sungur (METU), E. Tulunay (METU), G. Ucoluk (METU), N. Yalabik (METU), W. Zadrozny (IBM, USA). 
PAPER SUBMISSION Submit three hard-copies of full papers in English or Turkish limited to 10 pages in 12pt size, or poster papers limited to 4 pages, along with 5 keywords, by March 1, 1996 to YZYSA'96/TAINN'96, Department of Computer Engineering, Bogazici University, Bebek TR-80815 Istanbul Turkey Accepted papers will be printed in the proceedings. The symposium will also host special sessions on certain subtopics. Proposals from qualified individuals interested in chairing one of these sessions are solicited. The goal is to provide a forum for researchers to better focus on a certain subtopic and discuss important issues. Individuals proposing a session have the following responsibilities: o Arranging presentations by experts on the topic o Moderating or leading the session o Writing an overview of the topic and the session for the proceedings. Mail proposals by March 1, 1996 to YZYSA'96/TAINN'96, Faculty of Electrical and Electronics Engineering, Istanbul Technical University, Maslak TR-80626 Istanbul Turkey MORE INFORMATION Email: tainn96 at boun.edu.tr URL: http://www.cmpe.boun.edu.tr/~tainn96 * Participation from Eastern European, Balkan, and Middle-East countries is especially solicited. From harnad at cogsci.soton.ac.uk Sun Jan 21 17:03:00 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sun, 21 Jan 96 22:03:00 GMT Subject: Learning/Representation: BBS Call for Commentators Message-ID: <3879.9601212203@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming target article on: COMPUTATION, REPRESENTATION AND LEARNING by Andy Clark and Chris Thornton This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate.
To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ TRADING SPACES: COMPUTATION, REPRESENTATION AND THE LIMITS OF UNINFORMED LEARNING Andy Clark Philosophy/Neuroscience/Psychology Program, Washington University in St Louis, Campus Box 1073, St Louis, MO-63130, USA andy at twinearth.wustl.edu Chris Thornton, Cognitive and Computing Sciences, University of Sussex, Brighton, BN1 9QH, UK Chris.Thornton at cogs.sussex.ac.uk KEYWORDS: Learning, connectionism, statistics, representation, search ABSTRACT: Some regularities enjoy only an attenuated existence in a body of training data. These are regularities whose statistical visibility depends on some systematic re-coding of the data. The space of possible re-codings is, however, infinitely large - it is the space of applicable Turing machines. As a result, mappings which pivot on such attenuated regularities cannot, in general, be found by brute force search. The class of problems which present such mappings we call the class of `type-2 problems'. 
Type-1 problems, by contrast, present tractable problems of search insofar as the relevant regularities can be found by sampling the input data as originally coded. Type-2 problems, we suggest, present neither rare nor pathological cases. They are rife in biologically realistic settings and in domains ranging from simple animat behaviors to language acquisition. Not only are such problems rife - they are standardly solved! This presents a puzzle. How, given the statistical intractability of these type-2 cases, does nature turn the trick? One answer, which we do not pursue, is to suppose that evolution gifts us with exactly the right set of re-coding biases so as to reduce specific type-2 problems to (tractable) type-1 mappings. Such a heavy-duty nativism is no doubt sometimes plausible. But we believe there are other, more general mechanisms also at work. Such mechanisms provide general (not task-specific) strategies for managing problems of type-2 complexity. Several such mechanisms are investigated. At the heart of each is a fundamental ploy, viz. the maximal exploitation of states of representation already achieved by prior (type-1) learning so as to reduce the amount of subsequent computational search. Such exploitation both characterises and helps make unitary sense of a diverse range of mechanisms. These include simple incremental learning (Elman 1993), modular connectionism (Jacobs, Jordan and Barto 1991), and the developmental hypothesis of `representational redescription' (Karmiloff-Smith 1979, 1992). In addition, the most distinctive features of human cognition---language and culture---may themselves be viewed as adaptations enabling this representation/computation trade-off to be pursued on an even grander scale.
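The idea of a regularity that is statistically invisible under the original coding can be made concrete with parity, a standard textbook example of this kind of problem (the example below is the editor's illustration, not taken from the target article). Under the raw bit coding, no individual input bit carries any statistical signal about the output; after re-coding each input as its bit-count, the mapping becomes trivially visible.

```python
# Illustration of a "type-2"-style regularity using n-bit parity.
# (Editor's example, not from Clark & Thornton's article.)
from itertools import product

n = 4
data = [(bits, sum(bits) % 2) for bits in product([0, 1], repeat=n)]

# Raw coding: P(y=1 | x_i=1) = P(y=1) = 0.5 for every bit i,
# so sampling the data as originally coded reveals nothing.
for i in range(n):
    ones = [y for x, y in data if x[i] == 1]
    assert sum(ones) / len(ones) == 0.5  # bit i alone is uninformative

# Re-coded input (bit-count) determines the output exactly.
recoded = {}
for x, y in data:
    recoded.setdefault(sum(x), set()).add(y)
assert all(len(ys) == 1 for ys in recoded.values())
print("each raw bit is uninformative; the bit-count determines parity")
```

The re-coding step (summing the bits) is exactly the kind of systematic transformation the abstract says cannot, in general, be found by brute-force search over all possible re-codings.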
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.clark). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.clark ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.clark To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.clark When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you. To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). 
------------------------------------------------------------- From harnad at cogsci.soton.ac.uk Sun Jan 21 17:06:14 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Sun, 21 Jan 96 22:06:14 GMT Subject: Directed Movement: BBS Call for Commentators Message-ID: <3919.9601212206@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming target article on: SPEED/ACCURACY TRADEOFFS IN TARGET DIRECTED MOVEMENTS By Rejean Plamondon & Adel M. Alimi This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ SPEED/ACCURACY TRADEOFFS IN TARGET DIRECTED MOVEMENTS Rejean Plamondon & Adel M. Alimi Ecole Polytechnique de Montreal Laboratoire Scribens Departement de genie Electrique et de genie informatique C.P. 6079, Succ. 
"Centre-Ville" Montreal PQ H3C 3A7 ha03 at music.mus.polymtl.ca KEYWORDS: Speed/accuracy tradeoffs, Fitts' law, central limit theorem, velocity profile, delta-lognormal law, quadratic law, power law. ABSTRACT: This paper presents a critical survey of the scientific literature dealing with the speed/accuracy tradeoffs of rapid-aimed movements. It highlights the numerous mathematical and theoretical interpretations that have been proposed over recent decades from the different studies that have been conducted on this topic. Although the variety of points of view reflects the richness of the field as well as the high degree of interest that such basic phenomena represent in the understanding of human movements, it questions the validity of many models with respect to their capacity to explain all the basic observations consistently reported in the field. In this perspective, this paper summarizes the kinematic theory of rapid human movements, proposed recently by the first author, and analyzes its predictions in the context of speed/accuracy tradeoffs. Numerous data available from the scientific literature are reanalyzed and reinterpreted in the context of this new theory. It is shown that the various aspects of the speed/accuracy tradeoffs can be taken into account by considering the asymptotic behavior of a large number of coupled linear systems, from which a delta-lognormal law can be derived, to describe the velocity profile of an end-effector driven by a neuromuscular synergy. This law not only describes velocity profiles almost perfectly, but it also predicts the kinematic properties of simple rapid movements and provides a consistent framework for the analysis of different types of rapid movements using a quadratic (or power) law that emerges from the model. 
-------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu (the filename is bbs.glenberg); the retrieval instructions are the same as those given for the previous article above. Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- The files are also on the World Wide Web: http://www.princeton.edu/~harnad/bbs.html http://cogsci.soton.ac.uk/~harnad/bbs.html gopher://gopher.princeton.edu:70/11/.libraries/.pujournals ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.glenberg ftp://cogsci.soton.ac.uk/pub/harnad/BBS/bbs.glenberg 
------------------------------------------------------------- From Jari.Kangas at hut.fi Mon Jan 22 01:54:51 1996 From: Jari.Kangas at hut.fi (Jari Kangas) Date: Mon, 22 Jan 1996 08:54:51 +0200 Subject: Location for SOM_PAK and LVQ_PAK has changed Message-ID: <310334BB.167E@hut.fi> Dear Neural Network Researchers, Our ftp site cochlea.hut.fi, which contained the SOM_PAK and LVQ_PAK program packages, has been down for a while because of hardware errors. We have now moved the public domain program packages to another location under our research centre's www page: http://nucleus.hut.fi/nnrc.html From cns-cas at cns.bu.edu Mon Jan 22 10:47:23 1996 From: cns-cas at cns.bu.edu (CNS/CAS) Date: Mon, 22 Jan 1996 10:47:23 -0500 Subject: B.U. Neural Systems Seminars Message-ID: <199601221543.KAA05031@cns.bu.edu> CENTER FOR ADAPTIVE SYSTEMS AND DEPARTMENT OF COGNITIVE AND NEURAL SYSTEMS BOSTON UNIVERSITY January 26 SELF-SIMILARITY IN NEURAL SIGNALS Professor Malvin Teich, Department of Electrical, Computer, and Systems Engineering, Boston University February 2 THE FUNCTIONAL ARCHITECTURE OF HUMAN VISUAL MOTION PERCEPTION Dr. Zhong-Lin Lu, Department of Cognitive Sciences and Institute for Mathematical Behavioral Sciences, University of California at Irvine February 9 DIVERSITY IN THE STRUCTURE AND FUNCTION OF HIPPOCAMPAL SYNAPSES Professor Kristen Harris, Division of Neuroscience, Children's Hospital and Program in Neuroscience, Harvard Medical School February 16 GROUP BEHAVIOR AND LEARNING IN AUTONOMOUS AGENTS Dr. Maja Mataric, Department of Computer Science, Brandeis University March 15 TOPOGRAPHY OF COGNITION: CELLULAR AND CIRCUIT BASIS OF WORKING MEMORY Dr. 
Patricia Goldman-Rakic, Neurobiology Section, Yale University School of Medicine March 22 EMOTION, MEMORY, AND THE BRAIN Professor Joseph LeDoux, Center for Neural Science, New York University April 5 AUDITORY PROCESSING OF COMPLEX SOUNDS Professor Laurel Carney, Department of Biomedical Engineering, Boston University April 19, 1:00--5:00 P.M. OPENING CELEBRATION FOR 677 BEACON STREET Invited lectures and refreshments to celebrate the new CNS building. Details to follow. Call 353-7857 for information. All talks except April 19 on Fridays at 2:00 PM in Room B02 (Please note the new lecture time!) Refreshments after the lecture in Room B01 677 Beacon Street, Boston From mav at psy.uq.oz.au Mon Jan 22 23:04:45 1996 From: mav at psy.uq.oz.au (Simon Dennis) Date: Tue, 23 Jan 1996 14:04:45 +1000 (EST) Subject: Journal Launch: NOETICA, A Cognitive Science Forum Message-ID: Welcome to NOETICA: A COGNITIVE SCIENCE FORUM We are pleased to announce the International launch of Noetica: A Cognitive Science Forum - a world wide web journal devoted to the interdisciplinary field of cognitive science. The journal is open for submissions and can be accessed using browsers such as Netscape, Mosaic and lynx at: http://psy.uq.edu.au/CogPsych/Noetica/ or alternatively you may access the mirror site at: http://www.cs.indiana.edu/Noetica/toc.html If you would like to subscribe to the cogpsy mailing list (which includes receiving a regular list of the new contents of Noetica) use the subscription form under "To Subscribe" on the home page or email us at noetica at psy.uq.edu.au. We would welcome any feedback you might have on the journal and look forward to providing a timely, lively, high quality forum for the discussion of cognitive science issues. Yours sincerely, Simon Dennis Cyril Latimer Kate Stevens Janet Wiles TABLE OF CONTENTS JOURNAL Volume 1 - 1995 Issue 1. 
The Impact of the Environment on the Word Frequency and Null List Strength Effects in Recognition Memory by Simon Dennis OPEN FORUM Volume 1 - 1995 The first three issues of volume one are papers which were presented at the Symposium on Connectionist Models and Psychology which took place in January, 1994 at the Department of Psychology, The University of Queensland, Australia. Issue 1: The rationale for psychologists using (connectionist) models Introduction: Peter Slezak. Target paper: Cyril Latimer. Computer Modelling of Cognitive Processes Invited Commentary: Max Coltheart. Connectionist Modelling and Cognitive Psychology Sally Andrews. What Connectionist Models Can (and Cannot) Tell Us George Oliphant. Connectionism, Psychology and Science Commentary: Paul Bakker. Good models of humble origins Richard Heath. Mathematical models, connectionism and cognitive processes Ellen Watson. Definitions and Interpretations: Comments on the symposium on connectionist models and psychology Issue 2. The correspondence between human and neural network performance Introduction: Cyril Latimer Review: Kate Stevens. The In(put)s and Out(put)s of Comparing Human and Network Performance: Some Ideas on Representations, Activations and Weights Review: Graeme Halford and William Wilson. How Far Do Neural Network Models Account for Human Reasoning? Commentary: Steven Phillips. Understanding as generalisation not just representation. Review: Simon Dennis. The Correspondence Between Psychological and Network Variables In Connectionist Models of Human Memory Commentaries: Andrew Heathcote. Connectionism: Implementation constraints for psychological models Phillip Sutcliffe. Contribution to discussion Issue 3. Computational processes over distributed memories Introduction: Steven Schwartz Review: Janet Wiles. The Connectionist Modeler's Toolkit: A review of some basic processes over distributed memories Invited Commentary: Mike Johnson. On the search for metaphors Zoltan Schreter. 
Distributed and Localist Representation in the Brain and in Connectionist Models Issue 4. The Sydney Morning Herald Word Database by Simon Dennis Issue 5. Introducing a new connectionist model: The spreading waves of activation network by Scott A. Gazzard ------------------------------------------------------------------------ Dr Simon Dennis Address: Department of Psychology Email: mav at psy.uq.edu.au The University of Queensland WWW: http://psy.uq.edu.au/~mav Brisbane, QLD, 4072, Australia From tgc at kcl.ac.uk Tue Jan 23 05:15:55 1996 From: tgc at kcl.ac.uk (Trevor Clarkson) Date: Tue, 23 Jan 1996 10:15:55 +0000 Subject: NEuroFuzzy Workshop in Prague, 16-18 April 1996 Message-ID: A limited number of studentships of 450 ECU are still available from the NEuroNet programme for EU students only to attend the NEuroFuzzy workshop. The grant is a contribution to travel, accommodation and registration so that students will be able to participate in the technical sessions as well as the tutorials. The first set of studentships have been approved and letters have been sent to successful applicants. The remaining studentships will be awarded on a first-come first-served basis to students who are registered full-time for a university degree. Applicants should send a short (half-page) biography which clearly states age, European nationality and place of study. This must be accompanied by a letter of support from their head of department confirming these details. 
For details concerning these studentships only, contact the NEuroNet office: Ms Terhi Garner, NEuroNet, Department of Electronic and Electrical Engineering, King's College London, Strand, London WC2R 2LS Email: terhi.garner at kcl.ac.uk Fax: +44 171 873 2559 __________________________________________________________________________ Professor Trevor Clarkson Director, NEuroNet (European Network of Excellence in Neural Networks) Department of Electronic and Electrical Engineering King's College London Strand, London WC2R 2LS, UK Tel: +44 171 873 2367/2388 Fax: +44 171 873 2559 WWW: http://www.neuronet.ph.kcl.ac.uk/ Email: tgc at kcl.ac.uk __________________________________________________________________________ From cia at kamo.riken.go.jp Tue Jan 23 06:54:21 1996 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Tue, 23 Jan 96 20:54:21 +0900 Subject: New publications on blind signal processing Message-ID: <9601231154.AA14093@kamo.riken.go.jp> Dear Colleagues: Below please find a list of papers devoted to blind separation of sources presented at NOLTA-95 and NIPS. Some of these papers are available on the web site: http://www.bip.riken.go.jp/absl/absl.html I am now preparing an extensive list of publications, reports and programs on blind signal processing (blind deconvolution, equalization, separation of sources, the cocktail-party problem, blind identification and blind medium structure identification). Any information about new publications on these subjects is welcome, as are comments on our paper. Andrew Cichocki ------------------------------------------- Dr. A. 
Cichocki, Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN E-mail: cia at kamo.riken.go.jp, URL: http://www.bip.riken.go.jp/absl/absl.html --------------------------------------------------- List of papers of the Special Invited Session BLIND SEPARATION OF SOURCES - Information Processing in the Brain, NOLTA-95, Las Vegas, USA, December 10-14, 1995. (Chair and organizer: A. Cichocki) Proceedings 1995 International Symposium on Nonlinear Theory and Applications Vol.1: 1. Shun-ichi AMARI, Andrzej CICHOCKI and Howard Hua YANG, "RECURRENT NEURAL NETWORKS FOR BLIND SEPARATION OF SOURCES", pp.37-42. 2. Anthony J. BELL and Terrence J. SEJNOWSKI, "FAST BLIND SEPARATION BASED ON INFORMATION THEORY", pp. 43-47. 3. Adel BELOUCHRANI and Jean-Francois CARDOSO, "MAXIMUM LIKELIHOOD SOURCE SEPARATION BY THE EXPECTATION-MAXIMIZATION TECHNIQUE: DETERMINISTIC AND STOCHASTIC IMPLEMENTATION", pp.49-53. 4. Jean-Francois CARDOSO, "THE INVARIANT APPROACH TO SOURCE SEPARATION", pp. 55-60. 5. Andrzej CICHOCKI, Wlodzimierz KASPRZAK and Shun-ichi AMARI, "MULTI-LAYER NEURAL NETWORKS WITH LOCAL ADAPTIVE LEARNING RULES FOR BLIND SEPARATION OF SOURCE SIGNALS", pp.61-65. 6. Yannick DEVILLE and Laurence ANDRY, "APPLICATION OF BLIND SOURCE SEPARATION TECHNIQUES TO MULTI-TAG CONTACTLESS IDENTIFICATION SYSTEMS", pp. 73-78. 7. Jie HUANG, Noboru OHNISHI and Noboru SUGIE, "SOUND SEPARATION BASED ON PERCEPTUAL GROUPING OF SOUND SEGMENTS", pp.67-72. 8. Christian JUTTEN and Jean-Francois CARDOSO, "SEPARATION OF SOURCES: REALLY BLIND?", pp. 79-84. 9. Kiyotoshi MATSUOKA and Mitsuru KAWAMOTO, "BLIND SIGNAL SEPARATION BASED ON A MUTUAL INFORMATION CRITERION", pp. 85-91. 10. Lieven De LATHAUWER, Pierre COMON, Bart De MOOR and Joos VANDEWALLE, "HIGHER-ORDER POWER METHOD - APPLICATION IN INDEPENDENT COMPONENT ANALYSIS", pp. 91-96. 11. 
Jie ZHU, Xi-Ren CAO, and Ruey-Wen LIU, "BLIND SOURCE SEPARATION BASED ON OUTPUT INDEPENDENCE - THEORY AND IMPLEMENTATION", pp. 97-102. ---------------------------------------------------------------------------- Selected list of recent publications and reports about ICA [1] S. Amari, A. Cichocki and H. H. Yang, "A new learning algorithm for blind signal separation", NIPS-95, Denver, Dec. 1995, vol.8, MIT Press, 1996 (in print). [2] S. Amari, A. Cichocki and H. H. Yang, "Recurrent neural networks for blind separation of sources", Nolta-95, Las Vegas, Dec. 10-15, 1995, vol.1, pp. 37-42. [3] A. Cichocki and L. Moszczynski, "A new learning algorithm for blind separation of sources", Electronics Letters, vol.28, No.21, 1992, pp.1986-1987. [4] A. Cichocki, R. Unbehauen and E. Rummert, "Robust learning algorithm for blind separation of signals", Electronics Letters, vol.30, No.17, 18th August 1994, pp.1386-1387. [5] A. Cichocki, R. Unbehauen, L. Moszczynski and E. Rummert, "A new on-line adaptive algorithm for blind separation of source signals", 1994 Int. Symposium on Artificial Neural Networks ISANN-94, Tainan, Taiwan, Dec. 1994, pp.406-411. [6] A. Cichocki, R. Bogner, L. Moszczynski, "Improved adaptive algorithms for blind separation of sources", Proc. of Conference on Electronic Circuits and Systems, KKTOiUE, Zakopane, Poland, Oct. 25-27, 1995, pp. 647-652. [7] A. Cichocki, R. Unbehauen, "Robust neural networks with on-line learning for blind identification and blind separation of sources", submitted to IEEE Transactions on Circuits and Systems (submitted June 1994). [8] A. Cichocki and R. Unbehauen, Neural Networks for Optimization and Signal Processing, John Wiley 1994 (new revised and improved edition), pp. 461-471. [9] A. Cichocki, W. Kasprzak, S. Amari, "Multi-layer neural networks with a local adaptive learning rule for blind separation of source signals", Nolta-95, Las Vegas, Dec. 10-15, 1995, vol.1, pp. 61-66. [10] A. Cichocki, S. Amari, M. 
Adachi and W. Kasprzak, "Self-adaptive neural networks for blind separation of sources", ISCAS-96, May 1996, Atlanta, USA. --------------------------------------------------------------- From S.Goonatilake at cs.ucl.ac.uk Tue Jan 23 12:30:48 1996 From: S.Goonatilake at cs.ucl.ac.uk (Suran Goonatilake) Date: Tue, 23 Jan 96 17:30:48 +0000 Subject: New Book - Intelligent Systems for Finance and Business Message-ID: NEW BOOK ANNOUNCEMENT INTELLIGENT SYSTEMS FOR FINANCE AND BUSINESS Suran Goonatilake and Philip Treleaven (Eds.) University College London Intelligent systems are now beginning to be successfully applied in a variety of financial and business modelling tasks. These methods, which include genetic algorithms, neural networks, fuzzy systems and intelligent hybrid systems, are now being applied in credit evaluation, direct marketing, fraud detection, securities trading and portfolio management, and in many cases are outperforming traditional approaches. This book brings together leading professionals from the US, Europe and Asia who have developed intelligent systems to tackle some of the most challenging problems in finance and business. It covers applications of a large number of intelligent techniques: genetic algorithms, neural networks, fuzzy logic, expert systems, rule induction, genetic programming, case-based reasoning and intelligent hybrid systems. Case studies are drawn from a wide variety of business sectors. Applications that are detailed include: credit evaluation, direct marketing, insider dealing detection, insurance fraud detection, insurance claims processing, financial trading, portfolio management, and economic modelling. CONTENTS ======== Foreword: Cathy Basch, Visa International Chapter 1: Intelligent Systems for Finance and Business: An Overview Suran Goonatilake, University College London, UK. 
PART ONE: CREDIT SERVICES Chapter 2: Intelligent Systems at American Express Robert Didner, American Express Chapter 3: Credit Evaluation using a Genetic Algorithm R. Walker, E.W. Haasdijk and M.C. Gerrets, CAP-Volmac Chapter 4: Neural Networks for Credit Scoring David Leigh PART TWO: DIRECT MARKETING Chapter 5: Neural Networks for Data Driven Marketing Peter Furness, AMS Management Systems Chapter 6: Intelligent Systems for Market Segmentation and Local Market Planning Richard Webber, CCN Marketing PART THREE: FRAUD DETECTION AND INSURANCE Chapter 7: A Fuzzy System for Detecting Anomalous Behaviors in Healthcare Provider Claims Earl Cox, Metus Systems Chapter 8: Insider Dealing Detection at the Toronto Stock Exchange Steve Mott, Cognitive Systems Chapter 9: EFD: Heuristic Statistics for Insurance Fraud Detection J.A. Major and D.R. Riedinger, Travelers Insurance Co Chapter 10: Expert Systems at Lloyd's of London Colin Talbot, Lloyd's of London PART FOUR: SECURITIES TRADING AND PORTFOLIO MANAGEMENT Chapter 11: Neural Networks in Investment Management A. N. Refenes, A. D. Zapranis, J.T. Connor and D.W. Bunn, London Business School Chapter 12: Fuzzy Logic for Financial Trading Shunichi Tano, Hitachi Labs Chapter 13: Syntactic Pattern-Based Inductive Learning for Chart Analysis Jae K. Lee, Hyun Soo Kim, KAIST. 
PART FIVE: ECONOMIC MODELLING Chapter 14: Genetic Programming for Economic Modelling John Koza, Stanford University Chapter 15: Modelling Artificial Stock Markets using Genetic Algorithms Paul Tayler, Brunel University Chapter 16: Intelligent, Self Organising Models in Economics and Finance Peter Allen, Cranfield Institute of Technology PART SIX: IMPLEMENTING INTELLIGENT SYSTEMS Chapter 17: Software for Intelligent Systems Philip Treleaven, University College London ------------------------------------------------------------------ ISBN: 0471 94404 1 Publication Date: December 1995 Price: US$55 (GBP 40) Publishers: (US) John Wiley & Sons Inc., 605 Third Avenue, New York, NY 10158-0012 Tel: 1-800-225-5945 (UK) John Wiley & Sons Ltd, Baffins Lane, Chichester, West Sussex, PO19 1UD, UK. Tel: 0800 243 407 ------------------------------------------------------------------- A World Wide Web page is at: http://www.cs.ucl.ac.uk/staff/S.Goonatilake/busbook.html From john at dcs.rhbnc.ac.uk Tue Jan 23 10:35:55 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Tue, 23 Jan 96 15:35:55 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199601231535.PAA06083@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for most of the titles. *** Please note that the location of the files has been changed so that *** any copies you have of the previous instructions should be discarded. *** The new location and instructions are given at the end of the list. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-001: ---------------------------------------- On digital nondeterminism by Felipe Cucker, Universitat Pompeu Fabra, Spain Martin Matamala, Universidad de Chile, Chile No abstract available. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-002: ---------------------------------------- Complexity and Real Computation: A Manifesto by Lenore Blum, International Computer Science Institute, Berkeley, USA Felipe Cucker, Universitat Pompeu Fabra, Spain Mike Shub, IBM T.J. Watson Research Center, New York, USA Steve Smale, University of California, USA Abstract: Finding a natural meeting ground between the highly developed complexity theory of computer science -- with its historical roots in logic and the discrete mathematics of the integers -- and the traditional domain of real computation, the more eclectic less foundational field of numerical analysis -- with its rich history and longstanding traditions in the continuous mathematics of analysis -- presents a compelling challenge. Here we illustrate the issues and pose our perspective toward resolution. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-003: ---------------------------------------- Models for Parallel Computation with Real Numbers by F. Cucker, Universitat Pompeu Fabra, Spain J.L. Montana, Universidad de Cantabria, Spain L.M. Pardo, Universidad de Cantabria, Spain Abstract: This paper deals with two models for parallel computations over the reals. On the one hand, a generalization of the real Turing machine obtained by assembling a polynomial number of such machines that work together in polylogarithmic time (more or less like a PRAM in the Boolean setting) and, on the other hand, a model consisting of families of algebraic circuits generated in some uniform way. 
We show that the classes defined by these two models are related by a chain of inclusions and that some of these inclusions are strict. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-004: ---------------------------------------- Nash Trees and Nash Complexity by Felipe Cucker, Universitat Pompeu Fabra, Spain Thomas Lickteig, Universit\"at Bonn, Germany Abstract: Computational problems from numerical analysis, such as the Cholesky decomposition of a positive definite matrix or the unitary transformation of a complex matrix into upper triangular form (for instance by the Householder algorithm), require algorithms that also use ``non-arithmetical'' operations such as square roots. The aim of this paper is twofold: 1. Generalizing the notions of arithmetical semi-algebraic decision trees and computation trees (that is, with outputs), we suggest a definition of Nash trees and Nash straight line programs (SLPs), necessary to formalize and analyse numerical analysis algorithms and their complexity as mentioned above. These trees and SLPs have a Nash operational signature $N^R$ over a real closed field $R$. Based on the sheaf of abstract Nash functions over the real spectrum of a ring as introduced by M.-F. Roy, we propose a category $\mathrm{nash}_R$ of partial (homogeneous) $N^R$-algebras in which these Nash operations make sense in a natural way. 2. Using this framework, in particular the execution of $N^R$-SLPs in appropriate $N^R$-algebras, we extend the degree-gradient lower bound to the Nash decision complexity of the membership problem of co-one-dimensional semi-algebraic subsets of open semi-algebraic subsets. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-005: ---------------------------------------- On the computational power and super-Turing capabilities of dynamical systems by Olivier Bournez, Department LIP, ENS-Lyon, France Michel Cosnard, Department LIP, ENS-Lyon, France Abstract: We explore the simulation and computational capabilities of dynamical systems. We first introduce and compare several notions of simulation between discrete systems. We give a general framework that allows dynamical systems to be considered as computational machines. We introduce a new discrete model of computation: the analog automaton model. We determine the computational power of this model and prove that it does have super-Turing capabilities. We then prove that many very simple dynamical systems from the literature are actually able to simulate analog automata. From this result we deduce that many dynamical systems have intrinsically super-Turing capabilities. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-006: ---------------------------------------- Finite Sample Size Results for Robust Model Selection; Application to Neural Networks by Joel Ratsaby, Technion, Israel Ronny Meir, Technion, Israel Abstract: The problem of model selection in the face of finite sample size is considered within the framework of statistical decision theory. Focusing on the special case of regression, we introduce a model selection criterion which is shown to be robust in the sense that, with high confidence, even for a finite sample size it selects the best model. Our derivation is based on uniform convergence methods, augmented by results from the theory of function approximation, which permit us to make definite probabilistic statements about the finite sample behavior. These results stand in contrast to classical approaches, which can only guarantee the asymptotic optimality of the choice. 
The criterion is demonstrated for the problem of model selection in feedforward neural networks. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-007: ---------------------------------------- On the structure of $\npoly{C}$ by Gregorio Malajovich, Klaus Meer, RWTH Aachen, Germany Abstract: This paper deals with complexity classes $\poly{C}$ and $\npoly{C}$, as they were introduced over the complex numbers by Blum, Shub and Smale. Under the assumption $\poly{C} \ne \npoly{C}$, the existence of non-complete problems in $\npoly{C}$, not belonging to $\poly{C}$, is established. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-008: ---------------------------------------- Dynamic Recurrent Neural Networks: a Dynamical Analysis by Jean-Philippe DRAYE, Davor PAVISIC, Facult\'{e} Polytechnique de Mons, Belgium, Guy CHERON, Ga\"{e}tan LIBERT, University of Brussels, Belgium Abstract: In this paper, we explore the dynamical features of a neural network model which presents two types of adaptive parameters: the classical weights between the units and the time constants associated with each artificial neuron. The purpose of this study is to provide a strong theoretical basis for modeling and simulating dynamic recurrent neural networks. In order to achieve this, we study the effect of the statistical distribution of the weights and of the time constants on the network dynamics, and we make a statistical analysis of the neural transformation. We examine the network power spectra (to draw some conclusions about the frequential behavior of the network) and we compute the stability regions to explore the stability of the model. We show that the network is sensitive to the variations of the mean values of the weights and the time constants (because of the temporal aspects of the learned tasks). 
Nevertheless, our results highlight the improvements in the network dynamics due to the introduction of adaptive time constants and indicate that dynamic recurrent neural networks can bring new powerful features to the field of neural computing. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-009: ---------------------------------------- Scale-sensitive Dimensions, Uniform Convergence, and Learnability by Noga Alon, Tel Aviv University (ISRAEL), Shai Ben-David, Technion (ISRAEL), Nicol\`o Cesa-Bianchi, DSI, Universit\`a di Milano, David Haussler, UC Santa Cruz (USA) Abstract: Learnability in Valiant's PAC learning model has been shown to be strongly related to the existence of uniform laws of large numbers. These laws define a distribution-free convergence property of means to expectations uniformly over classes of random variables. Classes of real-valued functions enjoying such a property are also known as uniform Glivenko-Cantelli classes. In this paper we prove, through a generalization of Sauer's lemma that may be interesting in its own right, a new characterization of uniform Glivenko-Cantelli classes. Our characterization yields Dudley, Gin\'e, and Zinn's previous characterization as a corollary. Furthermore, it is the first based on a simple combinatorial quantity generalizing the Vapnik-Chervonenkis dimension. We apply this result to obtain the weakest combinatorial condition known to imply PAC learnability in the statistical regression (or ``agnostic'') framework. Furthermore, we show a characterization of learnability in the probabilistic concept model, solving an open problem posed by Kearns and Schapire. These results show that the accuracy parameter plays a crucial role in determining the effective complexity of the learner's hypothesis class. 
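The uniform law of large numbers at the heart of the abstract above can be seen empirically in miniature (the class, values and names below are my own illustration, not from the report): for a finite class of threshold indicators f_t(x) = 1[x <= t] on [0, 1] with x ~ Uniform(0, 1), the largest gap between empirical means and expectations, taken over the whole class at once, shrinks as the sample grows.

```python
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

# A small finite function class: threshold indicators f_t(x) = 1[x <= t].
thresholds = [i / 10 for i in range(1, 10)]

def max_deviation(n):
    """Largest gap, over the whole class, between the empirical mean of
    f_t on an i.i.d. Uniform(0,1) sample of size n and its expectation
    E[f_t] = t."""
    xs = [random.random() for _ in range(n)]
    dev = 0.0
    for t in thresholds:
        emp = sum(1 for x in xs if x <= t) / n
        dev = max(dev, abs(emp - t))
    return dev

small, large = max_deviation(50), max_deviation(20000)
```

With 20000 samples the worst-case deviation over the class is tiny; the report's contribution is a combinatorial (scale-sensitive) characterization of when this uniform convergence holds for infinite classes of real-valued functions.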
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-010: ---------------------------------------- On-line Prediction and Conversion Strategies by Nicol\`o Cesa-Bianchi, DSI, Universit\`a di Milano, Yoav Freund, AT\&T Bell Laboratories, David P.\ Helmbold, University of California, Santa Cruz, Manfred K.\ Warmuth, University of California, Santa Cruz Abstract: We study the problem of deterministically predicting boolean values by combining the boolean predictions of several experts. Previous on-line algorithms for this problem predict with the weighted majority of the experts' predictions. These algorithms give each expert an exponential weight $\beta^m$ where $\beta$ is a constant in $[0,1)$ and $m$ is the number of mistakes made by the expert in the past. We show that it is better to use sums of binomials as weights. In particular, we present a deterministic algorithm using binomial weights that has a better worst case mistake bound than the best deterministic algorithm using exponential weights. The binomial weights naturally arise from a version space argument. We also show how both exponential and binomial weighting schemes can be used to make prediction algorithms robust against noise. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-011: ---------------------------------------- Worst-case Quadratic Loss Bounds for Prediction Using Linear Functions and Gradient Descent by Nicol\`o Cesa-Bianchi, DSI, Universit\`a di Milano, Philip M. Long, Duke University, Manfred K. Warmuth, UC Santa Cruz Abstract: In this paper we study the performance of gradient descent when applied to the problem of on-line linear prediction in arbitrary inner product spaces. We show worst-case bounds on the sum of the squared prediction errors under various assumptions concerning the amount of {\it a priori} information about the sequence to predict. The algorithms we use are variants and extensions of on-line gradient descent. 
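The plain on-line gradient descent baseline that these variants extend can be sketched in a few lines. The following is an illustrative reconstruction, not code from the report: the fixed learning rate `eta` and the function name are our assumptions. The learner predicts with its current linear function, suffers the squared error, and takes a gradient step.

```python
def online_gradient_descent(examples, dim, eta=0.1):
    """On-line linear prediction under square loss: on each trial predict
    w.x, observe y, suffer (w.x - y)^2, then step against the gradient.
    A generic sketch; the report's algorithms are variants of this scheme."""
    w = [0.0] * dim
    total_loss = 0.0
    for x, y in examples:
        y_hat = sum(wi * xi for wi, xi in zip(w, x))
        total_loss += (y_hat - y) ** 2
        # gradient of (y_hat - y)^2 with respect to w is 2 * (y_hat - y) * x
        w = [wi - eta * 2.0 * (y_hat - y) * xi for wi, xi in zip(w, x)]
    return w, total_loss
```

The worst-case bounds in the report control `total_loss` relative to the loss of the best fixed linear predictor, rather than assuming the data is actually linear.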
Whereas our algorithms always predict using linear functions as hypotheses, none of our results requires the data to be linearly related. In fact, the bounds proved on the total prediction loss are typically expressed as a function of the total loss of the best fixed linear predictor with bounded norm. All the upper bounds are tight to within constants. Matching lower bounds are provided in some cases. Finally, we apply our results to the problem of on-line prediction for classes of smooth functions. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-012: ---------------------------------------- Using Bayesian Methods for Avoiding Overfitting and for Ranking Networks in Multilayer Perceptrons Learning by Michel de Bollivier, EC Joint Research Centre, Italy, Domenico Perrotta, EC Joint Research Centre and Ecole Normale Sup\'{e}rieure de Lyon, France Abstract: This work is an experimental attempt to determine whether the Bayesian paradigm could improve Multi-Layer Perceptron (MLP) learning methods. In particular, we experiment here with the paradigm developed by D. MacKay (1992). The paper points out the main and critical points of MacKay's work and introduces practical aspects of Bayesian MLPs, with future applications in mind. Then, Bayesian MLPs are used on three public classification databases and compared to other methods. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-013: ---------------------------------------- Lower Bounds for the Computational Power of Networks of Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We investigate the computational power of a formal model for networks of spiking neurons. 
It is shown that simple operations on phase-differences between spike-trains provide a very powerful computational tool that can in principle be used to carry out highly complex computations on a small network of spiking neurons. We construct networks of spiking neurons that simulate arbitrary threshold circuits, Turing machines, and a certain type of random access machines with real valued inputs. We also show that relatively weak basic assumptions about the response- and threshold-functions of the spiking neurons are sufficient in order to employ them for such computations. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-014: ---------------------------------------- Analog Computations on Networks of Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We characterize the class of functions with real-valued input and output which can be computed by networks of spiking neurons with piecewise linear response- and threshold-functions and unlimited timing precision. We show that this class coincides with the class of functions computable by recurrent analog neural nets with piecewise linear activation functions, and with the class of functions computable on a certain type of random access machine (N-RAM) which we introduce in this article. This result is proven via constructive real-time simulations. Hence it provides in particular a convenient method for constructing networks of spiking neurons that compute a given real-valued function $f$: it now suffices to write a program for computing $f$ on an N-RAM; that program can be ``automatically'' transformed into an equivalent network of spiking neurons (by our simulation result). 
Finally, one learns from the results of this paper that certain very simple piecewise linear response- and threshold-functions for spiking neurons are {\it universal}, in the sense that neurons with these particular response- and threshold-functions can simulate networks of spiking neurons with {\it arbitrary} piecewise linear response- and threshold-functions. The results of this paper also show that certain very simple piecewise linear activation functions are in a corresponding sense universal for recurrent analog neural nets. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-015: ---------------------------------------- Vapnik-Chervonenkis Dimension of Neural Nets by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We will survey in this article the most important known bounds for the VC-dimension of neural nets that consist of linear threshold gates (section 2) and for the case of neural nets with real-valued activation functions (section 3). In section 4 we discuss a generalization of the VC-dimension for neural nets with non-boolean network-output. With regard to a discussion of the VC-dimension of models for networks of {\it spiking neurons} we refer to Maass (1994). ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-016: ---------------------------------------- On the Computational Power of Noisy Spiking Neurons by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: This article provides some first results about the computational power of neural networks that are based on a neuron model which is acceptable to many neurobiologists as being reasonably realistic for a biological neuron. Biological neurons communicate via spike-trains, i.e. via sequences of stereotyped pulses (``spikes'') that encode information in their time-differences (``temporal coding''). 
In addition it is well known that biological neurons are quite ``noisy'', i.e. the precise times when they ``fire'' (and thereby issue a spike) depend not only on the incoming spike-trains, but also on various types of ``noise''. It has remained unknown whether one can in principle carry out reliable digital computations with noisy spiking neurons. This article presents rigorous constructions for simulating in real-time arbitrary given boolean circuits and finite automata with arbitrarily high reliability by networks of noisy spiking neurons. In addition we show that with the help of ``shunting inhibition'' such networks can simulate in real-time any McCulloch-Pitts neuron (or ``threshold gate''), and therefore any multilayer perceptron (or ``threshold circuit'') in a reliable manner. These constructions provide a possible explanation for the fact that biological neural systems can carry out quite complex computations within 100 msec. It turns out that the assumptions that these constructions require about the shape of the EPSPs and the behaviour of the noise are surprisingly weak. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-017: ---------------------------------------- Die Komplexit\"at des Rechnens und Lernens mit neuronalen Netzen -- Ein Kurzf\"uhrer by Michael Schmitt, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: This is a very short guide to the basic concepts of the theory of computing and learning with neural networks with emphasis on computational complexity. Fundamental results on circuit complexity of neural networks and PAC-learning are mentioned but no proofs are given. A list of references to the most important and most recent books in the field is included. The report was written in German on the occasion of a course given at the Autumn School in Connectionism and Neural Networks ``HeKoNN 95'' in M\"unster. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-018: ---------------------------------------- Tracking the best disjunction by Peter Auer, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Manfred Warmuth, University of California at Santa Cruz, USA Abstract: Littlestone developed a simple deterministic on-line learning algorithm for learning $k$-literal disjunctions. This algorithm (called Winnow) keeps one weight for each of the $n$ variables and does multiplicative updates to its weights. We develop a randomized version of Winnow and prove bounds for an adaptation of the algorithm for the case when the disjunction may change over time. In this case a possible target {\em disjunction schedule} $\Tau$ is a sequence of disjunctions (one per trial) and the {\em shift size} is the total number of literals that are added/removed from the disjunctions as one progresses through the sequence. We develop an algorithm that predicts nearly as well as the best disjunction schedule for an arbitrary sequence of examples. This algorithm, which allows us to track the predictions of the best disjunction, is hardly more complex than the original version. However, the amortized analysis needed for obtaining worst-case mistake bounds requires new techniques. In some cases our lower bounds show that the upper bounds of our algorithm have the right constant in front of the leading term in the mistake bound and almost the right constant in front of the second leading term. By combining the tracking capability with existing applications of Winnow we are able to enhance these applications to the shifting case as well. 
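For readers unfamiliar with Winnow, the multiplicative update at the heart of the abstract above can be sketched as follows. This is a plain, non-shifting reconstruction under assumed parameter names (promotion factor `alpha`, boolean inputs); the report's randomized and tracking variants build on it but are not shown here.

```python
def winnow_predict(w, x, threshold):
    """Predict 1 iff the weighted sum of the boolean inputs reaches the threshold."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) >= threshold else 0

def winnow_update(w, x, y, y_hat, alpha=2.0):
    """Multiplicative update: on a false negative, promote the weights of the
    active variables by alpha; on a false positive, demote them by 1/alpha.
    Weights of inactive variables are left unchanged."""
    if y_hat == y:
        return w
    factor = alpha if y == 1 else 1.0 / alpha
    return [wi * factor if xi == 1 else wi for wi, xi in zip(w, x)]
```

With all weights initialized to 1, threshold $n$, and `alpha` = 2, Winnow makes $O(k \log n)$ mistakes on a $k$-literal disjunction, which is why the multiplicative scheme scales so well in the number of variables.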
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-019: ---------------------------------------- Learning Nested Differences in the Presence of Malicious Noise by Peter Auer, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We investigate the learnability of nested differences of intersection-closed classes in the presence of malicious noise. Examples of intersection-closed classes include axis-parallel rectangles, monomials, linear sub-spaces, and so forth. We present an on-line algorithm whose mistake bound is optimal in the sense that there are concept classes for which each learning algorithm (using nested differences as hypotheses) can be forced to make at least that many mistakes. We also present an algorithm for learning in the PAC model with malicious noise. Surprisingly enough, the noise rate tolerable by these algorithms does not depend on the complexity of the target class but depends only on the complexity of the underlying intersection-closed class. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-020: ---------------------------------------- Characterizing the Learnability of Kolmogorov Easy Circuit Expressions by Jos\'e L. Balc\'azar, Universitat Polit\'ecnica de Catalunya, Spain Harry Buhrman, Centrum voor Wiskunde en Informatica, the Netherlands Abstract: We show that Kolmogorov easy circuit expressions can be learned with membership queries in polynomial time if and only if every NE-predicate is E-solvable. Moreover we show that the previously known algorithm, that uses an oracle in NP, is optimal in some relativized world. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-021: ---------------------------------------- T2 - Computing optimal 2-level decision tree by Peter Auer, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria *** Note: This is a C program available in tarred (compressed) format. Description: This is a short description of the T2 program discussed in P. Auer, R.C. Holte, and W. Maass. Theory and applications of agnostic PAC-learning with small decision trees. In Proc. 7th Int. Machine Learning Conf., Tahoe City (USA), 1995. Please see the paper for a description of the algorithm and a discussion of the results. (There is a typo in the paper in Table 2: The Sky2 value for HE is 89.0% instead of 91.0%.) T2 calculates optimal decision trees up to depth 2. T2 accepts exactly the same input as C4.5, consisting of a name-file, a data-file, and an optional test-file. The output of T2 is a decision tree similar to the decision trees of C4.5, but there are some differences. T2 uses two kinds of decision nodes: (1) discrete splits on a discrete attribute, where the node has as many branches as there are possible attribute values, and (2) interval splits of continuous attributes. A node which performs an interval split divides the real line into intervals and has as many branches as there are intervals. The number of intervals is restricted to be (a) at most MAXINTERVALS if all the branches of the decision node lead to leaves, and (b) at most 2 otherwise. MAXINTERVALS can be set by the user. The attribute value ``unknown'' is treated as a special attribute value. Each decision node (discrete or continuous) has an additional branch which takes care of unknown attribute values. T2 builds the decision tree satisfying the above constraints and minimizing the number of misclassifications of cases in the data-file. 
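As a toy illustration of the interval splits described above: with the interval count limited to 2, finding the best split of a single continuous attribute reduces to scanning the midpoints between consecutive distinct values and counting misclassifications when each interval predicts its majority class. This sketch is ours, not code from the T2 distribution, and it ignores unknown values and the depth-2 search.

```python
from collections import Counter

def best_binary_interval_split(values, labels):
    """Return (errors, cut) minimizing misclassifications when the real line
    is cut into two intervals at `cut` and each interval predicts its
    majority label. cut is None if no split beats predicting the overall
    majority. A toy version of T2's interval splits with at most 2 intervals."""
    pairs = sorted(zip(values, labels))

    def majority_errors(ls):
        # misclassifications when the interval predicts its most common label
        return len(ls) - Counter(ls).most_common(1)[0][1] if ls else 0

    best = (majority_errors([l for _, l in pairs]), None)  # baseline: no split
    for i in range(1, len(pairs)):
        if pairs[i][0] == pairs[i - 1][0]:
            continue  # cannot cut between equal attribute values
        cut = (pairs[i][0] + pairs[i - 1][0]) / 2.0
        errors = (majority_errors([l for v, l in pairs if v < cut]) +
                  majority_errors([l for v, l in pairs if v >= cut]))
        if errors < best[0]:
            best = (errors, cut)
    return best
```

T2 extends this scan to up to MAXINTERVALS intervals at nodes whose branches all lead to leaves, and it optimizes the whole depth-2 tree jointly rather than one attribute at a time.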
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-022: ---------------------------------------- Efficient Learning with Virtual Threshold Gates by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Manfred Warmuth, University of California, Santa Cruz, USA Abstract: We reduce learning simple geometric concept classes to learning disjunctions over exponentially many variables. We then apply an on-line algorithm called Winnow whose number of prediction mistakes grows only logarithmically with the number of variables. The hypotheses of Winnow are linear threshold functions with one weight per variable. We find ways to keep the exponentially many weights of Winnow implicitly so that the time for the algorithm to compute a prediction and update its ``virtual'' weights is polynomial. Our method can be used to learn $d$-dimensional axis-parallel boxes when $d$ is variable, and unions of $d$-dimensional axis-parallel boxes when $d$ is constant. The worst-case number of mistakes of our algorithms for the above classes is optimal to within a constant factor, and our algorithms inherit the noise robustness of Winnow. We think that other on-line algorithms with multiplicative weight updates whose loss bounds grow logarithmically with the dimension are amenable to our methods. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-023: ---------------------------------------- On learnability and predicate logic (Extended Abstract) by Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Gy. Tur\'{a}n, University of Illinois at Chicago, USA No abstract available. 
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-024: ---------------------------------------- Lower Bounds on Identification Criteria for Perceptron-like Learning Rules by Michael Schmitt, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: The topic of this paper is the computational complexity of identifying neural weights using Perceptron-like learning rules. By Perceptron-like rules we understand instructions to modify weight vectors by adding or subtracting constant values after occurrence of an error. By computational complexity we mean worst-case bounds on the number of correction steps. The training examples are taken from Boolean functions computable by McCulloch-Pitts neurons. Exact identification by the Perceptron rule is known to take exponential time in the worst case. Therefore, we define identification criteria that do not require that the learning process exactly identifies the function being learned: PAC identification, order identification, and sign identification. Our results show that Perceptron-like learning rules cannot satisfy any of these criteria when the number of correction steps is to be bounded by a polynomial. This indicates that even by considerably lowering one's demands on the learning process one cannot prevent Perceptron rules from being computationally infeasible. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-025: ---------------------------------------- On Methods to Keep Learning Away from Intractability (Extended abstract) by Michael Schmitt, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Abstract: We investigate the complexity of learning from restricted sets of training examples. With the intention of making learning easier we introduce two types of restrictions that describe the permitted training examples. The strength of the restrictions can be tuned by choosing specific parameters. 
We ask how strictly their values must be limited to turn NP-complete learning problems into polynomial-time solvable ones. Results are presented for Perceptrons with binary and arbitrary weights. We show that there exist bounds for the parameters that sharply separate efficiently solvable from intractable learning problems. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-026: ---------------------------------------- Accuracy of techniques for the logical analysis of data by Martin Anthony, London School of Economics, UK Abstract: We analyse the generalisation accuracy of standard techniques for the `logical analysis of data', within a probabilistic framework. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-027: ---------------------------------------- Interpolation and Learning in Artificial Neural Networks by Martin Anthony, London School of Economics, UK No abstract available. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-028: ---------------------------------------- Threshold Functions, Decision Lists, and the Representation of Boolean Functions by Martin Anthony, London School of Economics, UK Abstract: We describe a geometrically-motivated technique for data classification. Given a finite set of points in Euclidean space, each classified according to some target classification, we use a hyperplane to separate off a set of points all having the same classification; these points are then deleted from the database and the procedure is iterated until no points remain. We explain how such an iterative `chopping procedure' leads to a type of decision list classification of the data points and to a classification of the data by means of a linear threshold artificial neural network with one hidden layer. 
In the case where the data points are all the $2^n$ vertices of the Boolean hypercube, the technique produces a neural network representation of Boolean functions differing from the obvious one based on a function's disjunctive normal formula. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-029: ---------------------------------------- Learning of Depth Two Neural Nets with Constant Fan-in at the Hidden Nodes by Peter Auer, University of California, Santa Cruz, USA, Stephen Kwek, University of Illinois, USA, Wolfgang Maass, Institute for Theoretical Computer Science, Technische Universitaet Graz, Austria Manfred K. Warmuth, University of California, Santa Cruz, USA Abstract: We present algorithms for learning depth two neural networks where the hidden nodes are threshold gates with constant fan-in. The transfer function of the output node might be more general: in addition to the threshold function we have results for the logistic and the linear transfer function at the output node. We give batch and on-line learning algorithms for these classes of neural networks and prove bounds on the performance of our algorithms. The batch algorithms work for real valued inputs whereas the on-line algorithms require that the inputs are discretized. The hypotheses of our algorithms are essentially also neural networks of depth two. However, their number of hidden nodes might be much larger than the number of hidden nodes of the neural network that has to be learned. Our algorithms can handle a large number of hidden nodes since they rely on multiplicative weight updates at the output node, and the performance of these algorithms scales only logarithmically with the number of hidden nodes used. 
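A minimal one-dimensional illustration of the `chopping procedure' from NC-TR-96-028 above: project the points onto a fixed direction and repeatedly chop off the extreme run of same-labelled points, each chop contributing one halfspace test to the resulting decision list. This simplification is ours (a single fixed direction, pairwise distinct projections assumed); the report chooses a separating hyperplane afresh at each iteration.

```python
def chop_decision_list(points, labels, direction):
    """Build a decision list by repeatedly 'chopping off' the maximal run of
    same-labelled points that are extreme along `direction`. Assumes the
    projections of the points onto `direction` are pairwise distinct."""
    proj = sorted(((sum(d * x for d, x in zip(direction, p)), l)
                   for p, l in zip(points, labels)), reverse=True)
    rules = []  # (threshold, label): "if projection >= threshold, output label"
    i = 0
    while i < len(proj):
        label = proj[i][1]
        j = i
        while j < len(proj) and proj[j][1] == label:
            j += 1  # extend the run of points sharing this label
        rules.append((proj[j - 1][0], label))  # chop off the whole run
        i = j
    return rules

def classify(rules, point, direction):
    """Evaluate the decision list: the first rule whose threshold is met fires."""
    v = sum(d * x for d, x in zip(direction, point))
    for threshold, label in rules:
        if v >= threshold:
            return label
    return rules[-1][1]  # below every threshold: fall through to the last label
```

Each rule is a linear threshold test, so the list translates directly into the one-hidden-layer threshold network described in the abstract.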
-------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-96-001 can be accessed and printed as follows:

% ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-96-001.ps.Z
ftp> bye
% zcat nc-tr-96-001.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-96-002-title.ps.Z and nc-tr-96-002-body.ps.Z. The first contains the title page while the second contains the body of the report. The single command

ftp> mget nc-tr-96-002*

will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage (note that this is undergoing some corrections and may be temporarily inaccessible): http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor From cia at kamo.riken.go.jp Tue Jan 23 21:18:35 1996 From: cia at kamo.riken.go.jp (cia@kamo.riken.go.jp) Date: Wed, 24 Jan 96 11:18:35 +0900 Subject: Blind Signal Processing - Call for Paper Message-ID: <9601240218.AA14489@kamo.riken.go.jp> Call for papers for a special Invited Session at ICONIP-96, Hong Kong: BLIND SIGNAL PROCESSING - ADAPTIVE AND NEURAL NETWORK APPROACHES I would like to announce that I am organizing a Special Invited Session at ICONIP-96 (September 24-27, 1996, Hong Kong) devoted to blind signal processing using neural and adaptive approaches. 
Papers devoted to all aspects of blind signal processing: blind deconvolution, equalization, separation of sources, blind identification, blind medium structure identification, the cocktail-party problem, applications to EEG and ECG, voice enhancement and recognition, etc. are welcome. Authors are invited to submit an extended summary (2-3 pages) or a full paper by e-mail (to me) as soon as possible, but not later than February 15. The final camera-ready paper should be submitted not later than March 1, 1996. Andrew Cichocki --------------------------- Dr. A. Cichocki, Laboratory for Artificial Brain Systems, Frontier Research Program RIKEN, Institute of Physical and Chemical Research, Hirosawa 2-1, Wako-shi, Saitama 351-01, JAPAN E-mail: cia at kamo.riken.go.jp, FAX (+81) 48 462 4633. URL: http://www.bip.riken.go.jp/absl/absl.html From kyana at bme.ei.hosei.ac.jp Wed Jan 24 04:14:06 1996 From: kyana at bme.ei.hosei.ac.jp (Kazuo Yana) Date: Wed, 24 Jan 1996 18:14:06 +0900 Subject: invitation to BSI96 Message-ID: <199601240914.SAA18449@yana01.bme.ei.hosei.ac.jp> THE 2ND IFMBE-IMIA INTERNATIONAL WORKSHOP ON BIOSIGNAL INTERPRETATION (BSI96) September 23 - 28, 1996 Kanagawa, JAPAN CALL FOR PAPERS SCOPE OF THE WORKSHOP The International Federation for Medical and Biological Engineering (IFMBE) and the International Medical Informatics Association (IMIA), in collaboration with the Japan Society of Medical Electronics and Biological Engineering, will organize the Second Workshop on Biosignal Interpretation (BSI96). This workshop aims to explore the relatively new field of biosignal interpretation: model-based biosignal analysis, interpretation and integration, extending existing signal processing technology for the effective utilization of biosignals in a practical environment and for a deeper understanding of biological functions. This is the second workshop in this area. 
The first workshop, the IMIA-IFMBE Working Conference on Biosignal Interpretation, was held at Skorping, Denmark in August, 1993. SCIENTIFIC PROGRAM Prospective authors are invited to propose original contributions which meet the general scope mentioned above in any of the following subject categories. (1) Mathematical modeling of experimental and clinical biosignals (nonlinear phenomena, chaos, fractals, neural network modeling, cardiovascular and respiratory fluctuations analysis, ECG/ EEG/ EMG signal modeling, potential mapping, inverse problem, miscellaneous) (2) Biosignal processing and pattern analysis (nonstationary/nonlinear analysis, time frequency analysis, statistical time series analysis, signal detection, signal reconstruction, neural network, wavelet analysis, recording and display instrumentation, miscellaneous) (3) On-line interactive signal acquisition and processing (intelligent monitoring, ambulatory system, miscellaneous) (4) Decision-support methods (parameter estimation, decision making, rule based/expert systems, automatic diagnoses, data reasoning, man-machine interface, miscellaneous). For in-depth discussion, ample time will be assigned for all oral and poster presentations. Besides paper presentations, real system/software demonstrations are encouraged. PUBLICATION All papers will be published in the workshop proceedings. 30-40 papers will be selected to be published as regular papers in the IMIA official journal: Methods of Information in Medicine. IMPORTANT DEADLINES 1. Submission of abstract (500 words or less): February 29, 1996. 2. Notification of Acceptance: April 15, 1996. 3. Submission of full length paper: July 15, 1996. ABSTRACT FORMAT (due by February 29, 1996) Title (Centered) Author(s) and Affiliation(s) (Centered) Abstract (500 words or less) should be sent to the conference secretariat (Professor Kazuo Yana, Department of Electronic Informatics, Hosei University, Koganei City Tokyo 184 JAPAN) by February 29, 1996. 
The abstract should be single spaced and clearly typed on A4 or letter size paper and have appropriate margins (approx. 2 cm or 1 inch). After you receive notification of acceptance of your paper (by April 15), prepare a camera-ready conference paper (max 4 printed pages). The format will be sent to all the participants with the notification of the acceptance by April 15. The paper is due by July 15. Selected papers will be published as regular papers, as is or with minor revision, in Methods of Information in Medicine. CONFERENCE SITE Shonan Village Center, Hayama-machi, Kanagawa 240-01, Japan Phone: +81-468-55-1800 FAX: +81-468-55-1816. The Shonan Village Center, which offers the workshop facilities and accommodations, is situated on a Shonan hill in the central part of the Miura Peninsula, commanding a view of Mt. Fuji and overlooking Sagami Bay. Sagami Bay is a famous place for marine sports. There are other sports facilities nearby such as golf courses and tennis courts. The area is known as the holiday resort closest to Tokyo. Proximity to the ancient capital city of Kamakura, the exotic harbor city of Yokohama and other tourist spots may add an attractive feature to the site for participants planning after-conference tours. PARTICIPATION FEES (Tentative) Registration (includes the workshop proceedings, conference materials, banquet ticket): Before July 31... 25,000 YEN (15,000 YEN for students) After July 31... 30,000 YEN (20,000 YEN for students) Accommodation (5 nights including meals and services): 75,000 YEN/Person (65,000 YEN/Person for students) The registration form will be sent by April 15 with notification of acceptance and a tentative program. A limited number of accommodations for observers and accompanying persons are available. 
ORGANIZATION General Chair Kajiya, Fumihiko ( Kawasaki Medical School: kajiya at me.kawasaki-m.ac.jp) Executive Committee Co-Chairs Sato, Shunsuke (Osaka University: sato at bpe.es.osaka-u.ac.jp) Takahashi, Takashi( Kyoto University: tak at kuhp.kyoto-u.ac.jp) Scientific Program Co-Chairs van Bemmel, Jan H. (Rotterdam, NL); Saranummi, Niilo (Tampere, FIN) International Scientific Program Committee: Cerutti, Sergio (Milano, I); Dawant, Benoit(Nashville, USA) Jansen, Ben H. (Houston, USA); Kaplan, Danny (Montreal, CA) Kitney, Richard I. (London, UK); Rosenfalck, Annelise (Aalborg, DK) Rubel, Paul (Lyon, F); Sato, Shunsuke (Osaka, J) Saul, Philip (Cambridge, USA); Zywietz, Christoph (Hanover, FRG) Executive Committee Bin, He (University of Illinois at Chicago) Hayano, Jun-ichiro (Nagoya City University) Ichimaru, Yuhei (Dokkyo University) Kiryu, Tohru (Niigata University) Musha, Toshimitsu(Keio University/Brain Functions Laboratory, Inc.) Okuyama, Fumio (Tokyo Medical and Dental University) Yamamoto, Mitsuaki (Tohoku University) Yamamoto, Yoshiharu (Tokyo University) Yana, Kazuo (Hosei University) For further information, please contact: Professor Kazuo Yana Secretariat, the 2nd IFMBE-IMIA Workshop on Biosignal Interpretation Dept. Electronic Informatics, Hosei University, Koganei Tokyo 184 JAPAN Phone/FAX: +81-(0)423-87-6188 E-mail: kyana at bme.ei.hosei.ac.jp Internet Home Page: http://www.bme.ei.hosei.ac.jp/BSI96/ From georg at ai.univie.ac.at Wed Jan 24 05:09:06 1996 From: georg at ai.univie.ac.at (Georg Dorffner) Date: Wed, 24 Jan 1996 11:09:06 +0100 (MET) Subject: C.f.Abstracts: NN in Biomedical Systems Message-ID: <199601241009.LAA23480@jedlesee.ai.univie.ac.at> Following the general call for papers for EANN '96 (Int. 
Conference on Engineering Applications of Neural Networks), we are still soliciting abstracts for the ========================================= Special Track on Neural Networks in Biomedical Systems ========================================= Any application of neural networks in the medical domain will be welcome. Examples are: - biosignal processing (e.g. EEG, ECG, intensive care, etc.) - biomedical image processing (e.g. in radiology, dermatology, etc.) - diagnostic support in medicine - topographical mapping of diseases or syndromes - epidemiological studies - control of biomedical devices (e.g. heart/lung machines, respirators, etc.) - optimization of therapy - monitoring (e.g. in intensive care) - and many more Special emphasis will be put on careful validation of results to make clear the value of neural networks in the application (e.g. through cross-validation with multiple training sets and comparison to alternatives, such as linear methods). One-page abstracts can be submitted until =============== Feb. 15, 1996 =============== Please state clearly what data was used (number of input features, number of training and test samples) and your results (e.g. by reporting mean performance and standard deviation from a cross-validation). Final papers will be due around March 21, 1996. It is planned to publish the best-quality papers in a special issue of a journal. Abstracts should be emailed to: ====================== georg at ai.univie.ac.at ====================== Below is a description of the EANN conference. Georg Dorffner Dept. 
of Medical Cybernetics and Artificial Intelligence University of Vienna Freyung 6/2 A-1010 Vienna, Austria phone: +43-1-53532810 fax: +43-1-5320652 email: georg at ai.univie.ac.at http://www.ai.univie.ac.at/oefai/nn/georg.html ---------- International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 The conference is a forum for presenting the latest results on neural network applications in technical fields. The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, environmental engineering, and biomedical engineering. Abstracts of one page (200 to 400 words) should be sent by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Submissions will be reviewed and the number of full papers will be very limited. For more information on EANN'96, please see http://www.lpac.ac.uk/EANN96 and for reports on EANN '95, contents of the proceedings, etc. please see http://www.abo.fi/~abulsari/EANN95.html Five special tracks are being organised in EANN '96: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, ersin-tulunay at metu.edu.tr), Mechanical Engineering (A. Scherer, andreas.scherer at fernuni-hagen.de), Robotics (N. Sharkey, N.Sharkey at dcs.shef.ac.uk), and Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at) Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B.
Jervis (UK) E. Oja (Finland) H. Liljenstr\"om (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M. Ishikawa (Japan) D. Pearson (France) Registration information for the International Conference on Engineering Applications of Neural Networks (EANN '96) The conference fee will be sterling pounds (GBP) 300 until 28 February, and sterling pounds (GBP) 360 after that. At least one author of each accepted paper should register by 21 March to ensure that the paper will be included in the proceedings. The conference fee can be paid by a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance at the conference and the proceedings. The registration form can be picked up from the web (or can be sent to you by e-mail) and can be returned by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid. For more information, please contact eann96 at lpac.ac.uk From David_Redish at GS151.SP.CS.CMU.EDU Wed Jan 24 12:00:37 1996 From: David_Redish at GS151.SP.CS.CMU.EDU (David Redish) Date: Wed, 24 Jan 1996 12:00:37 -0500 Subject: new web site for NIPS*95 papers Message-ID: <13596.822502837@GS151.SP.CS.CMU.EDU> Many of the papers presented at NIPS*95 have been made available online by their authors. The NIPS Foundation now maintains a web site where abstracts and URLs for these papers are collected: http://www.cs.cmu.edu/Web/Groups/NIPS/NIPS95/Papers.html New papers are being added regularly. The complete list of papers presented at NIPS*95 is available on the NIPS home page. The printed NIPS*95 proceedings will be available from MIT Press in May.
------------------------------------------------------------ David Redish Computer Science Department CMU graduate student Neural Processes in Cognition Training Program Center for the Neural Basis of Cognition http://www.cs.cmu.edu/Web/People/dredish/home.html ------------------------------------------------------------ maintainer, CNBC website: http://www.cs.cmu.edu/Web/Groups/CNBC maintainer, NIPS*96 website: http://www.cs.cmu.edu/Web/Groups/NIPS ------------------------------------------------------------ From giles at research.nj.nec.com Thu Jan 25 17:02:39 1996 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 25 Jan 96 17:02:39 EST Subject: TR available: PRODUCT UNIT LEARNING Message-ID: <9601252202.AA06686@alta> The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives: (A short version of this TR was published in NIPS 7) _____________________________________________________________________ PRODUCT UNIT LEARNING Technical Report UMIACS-TR-95-80 and CS-TR-3503, Institute for Advanced Computer Studies, University of Maryland, College Park, MD 20742 Laurens R. Leerink{a}, C. Lee Giles{b,c}, Bill G. Horne{b}, Marwan A. Jabri{a} {a}SEDAL, Dept. of Electrical Engineering, The U. of Sydney, Sydney, NSW 2006, Australia {b}NEC Research Institute, 4 Independence Way, Princeton, NJ 08540, USA {c}UMIACS, U. of Maryland, College Park, MD 20742, USA ABSTRACT Product units provide a method of automatically learning the higher-order input combinations required for the efficient synthesis of Boolean logic functions by neural networks. Product units also have a higher information capacity than sigmoidal networks. However, this activation function has not received much attention in the literature. A possible reason for this is that one encounters some problems when using standard backpropagation to train networks containing these units.
This report examines these problems and evaluates the performance of three training algorithms on networks of this type. Empirical results indicate that the error surface of networks containing product units has more local minima than that of corresponding networks with summation units. For this reason, a combination of local and global training algorithms was found to provide the most reliable convergence. We then investigate how `hints' can be added to the training algorithm. By extracting a common frequency from the input weights and training this frequency separately, we show that convergence can be accelerated. A constructive algorithm is then introduced which adds product units to a network as required by the problem. Simulations show that for the same problems this method creates a network with significantly fewer neurons than those constructed by the tiling and upstart algorithms. In order to compare their performance with other transfer functions, product units were implemented as candidate units in the Cascade Correlation (CC) {Fahlman90} system. Using these candidate units resulted in smaller networks which trained faster than when any of the standard (three sigmoidal types and one Gaussian) transfer functions were used. This superiority was confirmed when a pool of candidate units with four different nonlinear activation functions, competing for addition to the network, was used. Extensive simulations showed that for the problem of implementing random Boolean logic functions, product units are always chosen over any of the other transfer functions.
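As a rough illustration of the unit type discussed in the abstract above (a sketch under stated assumptions, not the authors' implementation; the function names `product_unit` and `summation_unit` are ours), a product unit computes a weighted product of its inputs rather than a weighted sum, which for strictly positive inputs is conveniently evaluated in log space:

```python
import numpy as np

def product_unit(x, w):
    """Product unit: y = prod_i x_i ** w_i, evaluated in log space.
    Assumes strictly positive inputs; real implementations must handle
    signs and zeros separately (this is only an illustrative sketch)."""
    return np.exp(np.sum(w * np.log(x)))

def summation_unit(x, w):
    """Ordinary sigmoidal summation unit, shown for comparison."""
    return 1.0 / (1.0 + np.exp(-np.dot(w, x)))

x = np.array([2.0, 3.0])
w = np.array([1.0, 2.0])
print(product_unit(x, w))   # 2**1 * 3**2 = 18.0
```

Because the exponents are learned weights, a single product unit can represent higher-order input interactions (e.g. x1*x2) that a summation unit needs extra hidden units to approximate.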
-------------------------------------------------------------------------- -------------------------------------------------------------------------- http://www.neci.nj.nec.com/homepages/giles.html http://www.cs.umd.edu/TRs/TR-no-abs.html or ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3503.product.units.neural.nets.ps.Z ---------------------------------------------------------------------------- -- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html == From luis.almeida at inesc.pt Fri Jan 26 07:51:42 1996 From: luis.almeida at inesc.pt (Luis B. Almeida) Date: Fri, 26 Jan 1996 13:51:42 +0100 Subject: Workshop: Spatiotemporal Models Message-ID: <3108CE5E.3F784554@inesc.pt> *** PLEASE POST *** PLEASE FORWARD TO OTHER APPROPRIATE LISTS *** Preliminary announcement Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems Sintra, Portugal, 6-8 November 1996 A workshop is being organized, on the topic of Spatiotemporal Models in Biological and Artificial Systems, to foster the discussion of the latest developments in these fields, and the cross-fertilization of ideas between people from the areas of biological and artificial information processing systems. This is a preliminary announcement of the workshop, to allow potential participants enough time to prepare their work for submission. The size of the workshop is planned to be relatively small (around 50 people), to enhance communication among participants. Submissions will be subject to an international peer review procedure. All accepted submissions will be scheduled for poster presentation. The authors of the best-rated submissions will make oral presentations, in addition to their poster presentations. Presentation of an accepted contribution is mandatory for participation in the workshop. There will also be a number of presentations by renowned invited speakers.
Submissions will consist of the full papers in their final form. Paper revision after the review is not expected to be possible. The camera-ready paper format is not available yet, but a rough indication is eight A4 pages, typed single-spaced in a 12 point font, with 3.5 cm margins all around. The accepted contributions will be published by a major scientific publisher. The proceedings volume is planned to be distributed to the participants at the beginning of the workshop. The workshop will take place on 6-8 November 1996 in Sintra, Portugal. The tentative schedule is as follows: Deadline for paper submission 30 April 1996 Results of paper review 31 July 1996 Workshop 6-8 November 1996 Although no confirmation is available yet, we expect to have partial funding for the workshop, from research-funding institutions. If so, this will allow us to partially subsidize the participants' expenses. The workshop is planned to have a duration of two and a half days, from a Wednesday afternoon (6 Nov.) through the following Friday afternoon (8 Nov.). The participants who so desire will have the opportunity to stay the following weekend, for sightseeing. Sintra is a beautiful little town, located about 20 km west of Lisbon. It used to be a vacation place of the Portuguese aristocracy, and has in its vicinity a number of beautiful palaces, a Moorish castle, a monastery carved in the rock and other interesting spots. It is on the edge of a small mountain which creates a microclimate with luxuriant vegetation. Sintra has recently been designated a World Heritage site. Further announcements of the workshop will be made, but people who wish to stay informed can send e-mail to Luis B. Almeida (see below), to be included in the workshop mailing list. Workshop organizers: Chair Fernando Lopes da Silva Amsterdam University, The Netherlands Technical program Jose C. Principe University of Florida, Gainesville, FL, USA principe at synapse.ee.ufl.edu Local arrangements Luis B.
Almeida Instituto Superior Tecnico / INESC, Lisbon, Portugal luis.almeida at inesc.pt -- Luis B. Almeida INESC Phone: +351-1-3544607, +351-1-3100246 R. Alves Redol, 9 Fax: +351-1-3145843 P-1000 Lisboa Portugal e-mail: lba at inesc.pt or luis.almeida at inesc.pt ----------------------------------------------------------------------------- *** Indonesia is killing innocent people in East Timor *** From piuri at elet.polimi.it Sun Jan 28 03:34:32 1996 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Sun, 28 Jan 1996 09:34:32 +0100 Subject: NICROSP'96 - deadline extension Message-ID: <9601280834.AA20390@ipmel2.elet.polimi.it> ====================================================================== NICROSP'96 * * * DEADLINE EXTENSION * * * Due to a delay in posting on some web servers, submission deadlines have been extended as follows: one-page abstract for review assignment: by February 19th, 1996 extended summary or full paper for review: by March 3rd, 1996 For details see the following call for papers. 1996 International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing Venice, Italy - 21-23 August 1996 ====================================================================== Sponsored by the IEEE Computer Society and the IEEE CS Technical Committee on Pattern Analysis and Machine Intelligence. In cooperation with: ACM SIGART, IEEE Circuits and Systems Society, IEEE Control Systems Society, IEEE Instrumentation and Measurement Society, IEEE Neural Network Council, IEEE North-Italy Section, IEEE Region 8, IEEE Robotics and Automation Society (pending), IEEE Signal Processing Society (pending), IEEE System, Man, and Cybernetics Society, IMACS, INNS (pending), ISCA, AEI, AICA, ANIPLA, FAST. 
CALL FOR PAPERS This workshop aims to create a unique synergistic discussion forum and a strong link between theoretical researchers and practitioners in the application fields of identification, control, robotics, and signal/image processing using neural techniques. The three-day single-session schedule will provide the ideal environment for in-depth analysis and discussions concerning the theoretical aspects of the applications and the use of neural networks in practice. Invited talks in each area will provide a starting point for the discussion and give the state of the art in the corresponding field. Panels will provide an interactive discussion. Researchers and practitioners are invited to submit papers concerning theoretical foundations of neural computation, experimental results or practical applications related to the specific workshop areas. Interested authors should submit a half-page abstract to the program chair by e-mail or fax by February 19, 1996, for review planning. Then, an extended summary or the full paper (limited to 20 double-spaced pages including figures and tables) must be sent to the program chair by March 3, 1996 (PostScript email submission is strongly encouraged). Submissions should contain: the corresponding author, affiliation, complete address, fax, email, and the preferred workshop track (identification, control, robotics, signal processing, image processing). Submission implies the willingness of at least one of the authors to register, attend the workshop and present the paper. Paper selection is based on the full paper: the corresponding author will be notified by March 30, 1996. The camera-ready version, limited to 10 one-column IEEE-book-standard pages, is due by May 1, 1996. Proceedings will be published by the IEEE Computer Society Press. Extended versions of selected papers will be considered for publication in special issues of international journals. General Chair Prof.
Edgar Sanchez-Sinencio Department of Electrical Engineering Texas A&M University College Station, TX 77843-3128 USA phone (409) 845-7498 fax (409) 845-7161 email sanchez at eesun1.tamu.edu Program Chair Prof. Vincenzo Piuri Department of Electronics and Information Politecnico di Milano piazza L. da Vinci 32, I-20133 Milano, Italy phone +39-2-2399-3606 fax +39-2-2399-3411 email piuri at elet.polimi.it Publication Chair Dr. Jose' Pineda de Gyvez Department of Electrical Engineering Texas A&M University Publicity, Registration & Local Arrangement Chair Dr. Cesare Alippi Department of Electronics and Information Politecnico di Milano Workshop Secretariat Ms. Laura Caldirola Department of Electronics and Information Politecnico di Milano phone +39-2-2399-3623 fax +39-2-2399-3411 email caldirol at elet.polimi.it Program Committee (preliminary list) Shun-Ichi Amari, University of Tokyo, Japan Panos Antsaklis, Univ. Notre Dame, USA Magdy Bayoumi, University of Southwestern Louisiana, USA James C. Bezdek, University of West Florida, USA Pierre Borne, Ecole Polytechnique de Lille, France Luiz Caloba, Universidade Federal do Rio de Janeiro, Brazil Jill Card, Digital Equipment Corporation, USA Chris De Silva, University of Western Australia, Australia Laurene Fausett, Florida Institute of Technology, USA C. Lee Giles, NEC, USA Karl Goser, University of Dortmund, Germany Simon Jones, University of Loughborough, UK Michael Jordan, Massachusetts Institute of Technology, USA Robert J. Marks II, University of Washington, USA Jean D. Nicoud, EPFL, Switzerland Eros Pasero, Politecnico di Torino, Italy Emil M. Petriu, University of Ottawa, Canada Alberto Prieto, Universidad de Granada, Spain Gianguido Rizzotto, SGS-Thomson, Italy Edgar Sanchez-Sinencio, Texas A&M University, USA Bernd Schuermann, Siemens, Germany Earl E.
Swartzlander, University of Texas at Austin, USA Philip Treleaven, University College London, UK Kenzo Watanabe, Shizuoka University, Japan Michel Weinfeld, Ecole Polytechnique de Paris, France ====================================================================== From N.Sharkey at dcs.shef.ac.uk Sun Jan 28 06:18:58 1996 From: N.Sharkey at dcs.shef.ac.uk (N.Sharkey@dcs.shef.ac.uk) Date: Sun, 28 Jan 96 11:18:58 GMT Subject: EANN-96 - ROBOTICS Message-ID: <9601281118.AA28201@entropy.dcs.shef.ac.uk> Sorry if you get this twice, but I messed up the mailing last week. *** ROBOTICS TRACK of EANN-96 *** London, UK: 17-19 June, 1996. For those of you a bit late in submitting your abstracts (200-400 words) for the Robotics track of EANN-96, you can send them directly to me electronically at n.sharkey at dcs.shef.ac.uk (or fax). But please let me know of your intention to do so. For more information on EANN '96: http://www.lpac.ac.uk/EANN96 For reports on EANN '95, contents of the proceedings, etc.: http://www.abo.fi/~abulsari/EANN95.html Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Notification of acceptance will be sent around 15 February.
noel Noel Sharkey Professor of Computer Science Department of Computer Science Regent Court University of Sheffield S1 4DP, Sheffield, UK N.Sharkey at dcs.shef.ac.uk FAX: (0114) 2780972 From hd at research.att.com Fri Jan 26 14:44:53 1996 From: hd at research.att.com (Harris Drucker) Date: Fri, 26 Jan 96 14:44:53 EST Subject: please post Message-ID: <9601261939.AA16638@big.info.att.com> Please announce to connectionists bulletin board: Two papers on using boosting techniques to improve the performance of classification trees are available via anonymous ftp: The first paper describes a preliminary set of experiments showing that an ensemble of trees constructed using Freund and Schapire's boosting algorithm is much better than single trees: Boosting Decision Trees Harris Drucker and Corinna Cortes to be published in NIPS 8, 1996. A new boosting algorithm of Freund and Schapire is used to improve the performance of decision trees which are constructed using the information ratio criterion of Quinlan's C4.5 algorithm. This boosting algorithm iteratively constructs a series of decision trees, each decision tree being trained and pruned on examples that have been filtered by previously trained trees. Examples that have been incorrectly classified by the previous trees in the ensemble are resampled with higher probability to give a new probability distribution for the next tree in the ensemble to train on. Results from optical character recognition (OCR), and knowledge discovery and data mining problems show that in comparison to single trees, or to trees trained independently, or to trees trained on subsets of the feature space, the boosting ensemble is much better. 
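The resampling step described in the abstract above can be sketched as follows (an illustrative AdaBoost.M1-style update under our own assumptions, not necessarily the exact variant used in the paper; `reweight` is a hypothetical helper name):

```python
import numpy as np

def reweight(weights, correct, eps):
    """One AdaBoost.M1-style reweighting step: examples the current tree
    classified correctly are down-weighted by beta = eps/(1-eps), so
    misclassified examples carry more probability mass when the next
    tree's training set is resampled.  Assumes 0 < eps < 0.5."""
    beta = eps / (1.0 - eps)
    w = weights * np.where(correct, beta, 1.0)
    return w / w.sum()   # renormalize to a sampling distribution

# toy example: four equally weighted examples, the last one misclassified
w = np.full(4, 0.25)
correct = np.array([True, True, True, False])
eps = 0.25                       # weighted error of this tree
w = reweight(w, correct, eps)
print(w)                         # -> [1/6, 1/6, 1/6, 1/2]
```

The misclassified example's probability jumps from 1/4 to 1/2, so the next tree in the ensemble concentrates on the cases the previous trees got wrong.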
The second paper extends this work, giving more details, and applies this technique to the design of a fast preclassifier for OCR: Fast Decision Tree Ensembles for Optical Character Recognition by Harris Drucker accepted by the Fifth Annual Symposium on Document Analysis and Information Retrieval (1996) in Las Vegas To get both papers: unix> ftp ftp.monmouth.edu (or ftp 192.100.64.2) Connected to monmouth.edu. 220 monnet FTP server (Version wu-2.4(1) Mon Oct 9 18:48:45 EDT 1995) ready. Name (ftp.monmouth.edu:hd): anonymous (no space after the :) 331 Guest login ok, send your complete e-mail address as password. Password: (your email address) 230-Welcome to the Monmouth University FTP server 230- ftp> binary Type set to I. ftp> cd pub/drucker 250 CWD command successful ftp> get nips-paper.ps.Z ftp> get las-vegas-paper.ps.Z ftp> quit unix> uncompress nips-paper.ps.Z unix> uncompress las-vegas-paper.ps.Z unix> lpr (or your postscript print command) (either paper) Any problems, contact me at hd at harris.monmouth.edu Harris Drucker From ersintul at rorqual.cc.metu.edu.tr Tue Jan 30 07:14:53 1996 From: ersintul at rorqual.cc.metu.edu.tr (ersin tulunay) Date: Tue, 30 Jan 1996 15:14:53 +0300 (MEST) Subject: EANN'96: Special Track on Control Systems Message-ID: Dear Neural Net and Control System Researcher, A special track on Control Systems will be organized during the International Conference on Engineering Applications of Neural Networks (EANN'96) which is to be held in London, UK between 17-19 June 1996. The Final Call for papers for the EANN'96 can be found at the end of this message. The EANN'95 was held in Helsinki, Finland between 21-23 August 1995. Reports on the EANN'95 and the contents of the proceedings etc. may be seen on http://www.abo.fi/~abulsari/EANN95.html Based on a number of good-quality papers presented at EANN'95, it was possible to edit a special issue of the Journal of Systems Engineering.
For the forthcoming meeting we plan to publish the same kind of special issue in a suitable journal. For this, of course, your contribution is essential. I am sure you agree that one of the most interesting areas of application of neural nets is control systems. However, the number of papers published so far has not been as large as control systems deserve. In particular, as far as real-world applications are concerned, only a few works have been published. Therefore, your contribution to EANN'96 will be very valuable. We also plan to arrange a discussion session in a leisurely atmosphere on the use of neural nets in control applications. The ideas which come up will help in organizing future activities. The conference, and especially this meeting, will help us make efficient and sincere personal contacts with our colleagues and will provide a good opportunity for planting seeds for collaborative research on international projects which might be funded by international bodies such as the European Union. We are looking forward to receiving your abstracts by 15 February 1996 at the latest. With kind regards, Dr. Ersin Tulunay Electrical and Electronic Engineering Department Middle East Technical University 06531 Ankara, Turkey Tel: +90 312 2102335 (Office) +90 312 2101199 (Home) Fax: +90 312 2101261 (Office) E-mail : etulunay at ed.eee.metu.edu.tr ---------- International Conference on Engineering Applications of Neural Networks (EANN '96) London, UK 17--19 June 1996 The conference is a forum for presenting the latest results on neural network applications in technical fields.
The applications may be in any engineering or technical field, including but not limited to systems engineering, mechanical engineering, robotics, process engineering, metallurgy, pulp and paper technology, aeronautical engineering, computer science, machine vision, chemistry, chemical engineering, physics, electrical engineering, electronics, civil engineering, geophysical sciences, biotechnology, environmental engineering, and biomedical engineering. Abstracts of one page (200 to 400 words) should be sent by e-mail in PostScript format or ASCII. Please mention two to four keywords, and whether you prefer it to be a short paper or a full paper. The short papers will be 4 pages in length, and full papers may be up to 8 pages. Submissions will be reviewed and the number of full papers will be very limited. For more information on EANN'96, please see http://www.lpac.ac.uk/EANN96 Five special tracks are being organised in EANN '96: Computer Vision (J. Heikkonen, Jukka.Heikkonen at jrc.it), Control Systems (E. Tulunay, ersin-tulunay at metu.edu.tr), Mechanical Engineering (A. Scherer, andreas.scherer at fernuni-hagen.de), Robotics (N. Sharkey, N.Sharkey at dcs.shef.ac.uk), and Biomedical Systems (G. Dorffner, georg at ai.univie.ac.at) Organising committee A. Bulsari (Finland) D. Tsaptsinos (UK) T. Clarkson (UK) International program committee G. Dorffner (Austria) S. Gong (UK) J. Heikkonen (Italy) B. Jervis (UK) E. Oja (Finland) H. Liljenstr\"om (Sweden) G. Papadourakis (Greece) D. T. Pham (UK) P. Refenes (UK) N. Sharkey (UK) N. Steele (UK) D. Williams (UK) W. Duch (Poland) R. Baratti (Italy) G. Baier (Germany) E. Tulunay (Turkey) S. Kartalopoulos (USA) C. Schizas (Cyprus) J. Galvan (Spain) M. Ishikawa (Japan) D. Pearson (France) Registration information for the International Conference on Engineering Applications of Neural Networks (EANN '96) The conference fee will be sterling pounds (GBP) 300 until 28 February, and sterling pounds (GBP) 360 after that.
At least one author of each accepted paper should register by 21 March to ensure that the paper will be included in the proceedings. The conference fee can be paid by a bank draft (no personal cheques) payable to EANN '96, to be sent to EANN '96, c/o Dr. D. Tsaptsinos, Kingston University, Mathematics, Kingston upon Thames, Surrey KT1 2EE, UK. The fee includes attendance at the conference and the proceedings. The registration form can be picked up from the web (or can be sent to you by e-mail) and can be returned by e-mail (or post or fax) once the conference fee has been sent. A registration form sent before the payment of the conference fee is not valid. For more information, please contact eann96 at lpac.ac.uk From chentouf at kepler.inpg.fr Tue Jan 30 11:29:26 1996 From: chentouf at kepler.inpg.fr (rachida) Date: Tue, 30 Jan 1996 17:29:26 +0100 Subject: 2 papers available Message-ID: <199601301629.RAA09862@kepler.inpg.fr> First paper: Combining Sigmoids and Radial Basis Functions in Evolutive Neural Architectures. available at: ftp://tirf.inpg.fr/pub/HTML/chentouf/esann96_chentouf.ps.gz ABSTRACT An incremental algorithm for supervised learning of noisy data using two-layer neural networks with linear output units and a mixture of sigmoids and radial basis functions in the hidden layer (2-[S,RBF]NN) is proposed. Each time the network has to be extended, we compare different estimates of the residual error: the one provided by a sigmoidal unit responding to the overall input space, and those provided by a number of RBFs responding to localized regions. The unit which provides the best estimate is selected and installed in the existing network. The procedure is repeated until the error reduces to the noise in the data. Experimental results show that the incremental algorithm using 2-[S,RBF]NN is considerably faster than the one using only sigmoidal hidden units. It also leads to a less complex final network and avoids being trapped in spurious minima.
This paper has been accepted for publication in the European Symposium on Artificial Neural Networks, Bruges, Belgium, April 1996. =========================================================== The second paper is an extended abstract (the final version is in preparation): DWINA: Depth and Width Incremental Neural Algorithm. available at: ftp://ftp.tirf.inpg.fr/pub/HTML/chentouf/icnn96_chentouf.ps.gz ABSTRACT This paper presents DWINA: an algorithm for depth and width design of neural architectures in the case of supervised learning with noisy data. Each new unit is trained to learn the error of the existing network and is connected to it in such a way that it does not affect the network's previous performance. Criteria for choosing between increasing the width or increasing the depth are proposed. The connection procedure for each case is also described. The stopping criterion is very simple and consists of comparing the residual error signal to the noise signal. Preliminary experiments demonstrate the efficacy of the algorithm, especially in avoiding spurious minima and in designing a network of well-suited size. The complexity of the algorithm (number of operations) is on average the same as that needed in a convergent run of the BP algorithm on a static architecture having the optimal number of parameters. Moreover, no significant difference was found between networks having the same number of parameters but different structures. Finally, the algorithm exhibits interesting behaviour: the MSE on the training set tends to decrease continuously, converging steadily toward the solution of the mapping problem. This paper has been accepted for publication in the IEEE International Conference on Neural Networks, Washington, June 1996.
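The common idea in both abstracts above -- train several candidate units on the current residual error and install whichever explains it best -- can be sketched as follows (an illustrative simplification with hypothetical names and a fixed RBF width, not the authors' training procedure):

```python
import numpy as np

def fit_residual(candidate, X, residual):
    """Remaining sum-of-squared error after fitting the candidate unit's
    activations to the residual with a least-squares output weight."""
    a = candidate(X)
    alpha = np.dot(a, residual) / np.dot(a, a)
    return np.sum((residual - alpha * a) ** 2)

def choose_unit(X, residual, centers, width=1.0):
    """Select the candidate -- one sigmoid responding to the overall
    input space vs. RBFs responding to localized regions -- that best
    estimates the current residual error."""
    candidates = {"sigmoid": lambda X: np.tanh(X.sum(axis=1))}
    for i, c in enumerate(centers):
        candidates["rbf%d" % i] = (
            lambda X, c=c: np.exp(-np.sum((X - c) ** 2, axis=1) / width)
        )
    return min(candidates, key=lambda k: fit_residual(candidates[k], X, residual))

# toy demo: a residual localized around the origin is best matched by an RBF
X = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0], [0.5, 0.5]])
residual = np.exp(-np.sum(X ** 2, axis=1))   # exactly the rbf0 activation
centers = [np.array([0.0, 0.0])]
print(choose_unit(X, residual, centers))     # -> rbf0
```

In an actual incremental run the winning unit would be installed, the residual recomputed, and the selection repeated until the residual falls to the noise level.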
__ ______ __ ________ _______ __ __ __ ________ ______ / / /_ __/ / / / ____ / / _____/ / // /\ / // ____ // ____/ / / / / / / / /___/ / / /___ ____ / // /\ \ / // /___/ // / ____ / /_____ / / / / / ___/ / _____/ /___/ / // / \ \/ // /_____// /_/ __/ /_______/ /_/ /_/ /_/\__\ /_/ /_//_/ \_\//_/ /_____/ -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= || Mrs Rachida CHENTOUF || || LTIRF-INPG || || 46, AV Felix Viallet || || 38031 Grenoble - FRANCE || || Tel : (+33) 76.57.43.64 || || Fax : (+33) 76.57.47.90 || || || || WWW: ftp://tirf.inpg.fr/pub/HTML/chentouf/rachida.html || -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-= From marks at u.washington.edu Tue Jan 30 20:17:35 1996 From: marks at u.washington.edu (Robert Marks) Date: Tue, 30 Jan 96 17:17:35 -0800 Subject: TNN Abstracts Posting Message-ID: <9601310117.AA06219@carson.u.washington.edu> Web Abstracts Robert J. Marks II, Editor-in-Chief IEEE Transactions on Neural Networks According to the 1994 Journal Citation Reports, the IEEE Transactions on Neural Networks, based on frequency of citation, has a half-life of 3.1 years. Life is short in our technology. Engineer Sherman Minton expressed it nicely. "Half of everything you know will be wrong in 10 years; you just don't know which half." To make neural network research more temporally accessible, the TNN is establishing a WWW page posting of abstracts of papers submitted to the IEEE Transactions on Neural Networks. Dr. Jianchang Mao will coordinate the posting effort. 
Authors submitting papers to the TNN may, at their own discretion, submit ASCII information on their papers via e-mail to Jianchang Mao, Abstracts Editor IEEE Transactions on Neural Networks IBM Almaden Research Center Image and Multimedia Systems, DPEE/B3 650 Harry Road San Jose, CA 95120 IEEETNN at almaden.ibm.com Submission of the information may be done only after a paper has been submitted to the IEEE Transactions on Neural Networks and a TNN paper number has been assigned. The following information should be included in the message sent to the Abstracts Editor. - The TNN number assigned to the paper. - The paper title - Authors and their affiliation. Please include e-mail addresses - The abstract of the paper - (Optional) Information on how to view or obtain a full copy of the paper. Electronic access on the WWW or ftp is preferred. If you currently have a paper in any stage of review in the TNN, you may also submit abstract information for posting. The TNN Abstracts page will be appended to the home page of the IEEE Neural Networks Council (http://www.ieee.org/nnc) under the able coordination of Professor Payman Arabshahi. The NNC home page also includes a remarkably complete listing of conferences in computational intelligence, IEEE copyright forms and information, the NNC newsletter, information about neural network research centers and NNC-sponsored books. The most recent tables of contents of the IEEE Transactions on Fuzzy Systems and the IEEE Transactions on Neural Networks are also posted. Happy surfing. And may your technological half-life be long. From chentouf at kepler.inpg.fr Wed Jan 31 06:44:33 1996 From: chentouf at kepler.inpg.fr (rachida) Date: Wed, 31 Jan 1996 12:44:33 +0100 Subject: new paper available "Combining Sigmoids and RBFs" Message-ID: <199601311144.MAA00665@kepler.inpg.fr> The following paper: Combining Sigmoids and Radial Basis Functions in Evolutive Neural Architectures.
is available at: ftp://tirf.inpg.fr/pub/HTML/chentouf/esann96_chentouf.ps.gz

ABSTRACT

An incremental algorithm for supervised learning of noisy data using two-layer neural networks with linear output units and a mixture of sigmoids and radial basis functions in the hidden layer (2-[S,RBF]NN) is proposed. Each time the network has to be extended, we compare different estimations of the residual error: the one provided by a sigmoidal unit responding to the overall input space, and those provided by a number of RBFs responding to localized regions. The unit which provides the best estimation is selected and installed in the existing network. The procedure is repeated until the error reduces to the noise in the data. Experimental results show that the incremental algorithm using 2-[S,RBF]NN is considerably faster than the one using only sigmoidal hidden units. It also leads to a less complex final network and avoids being trapped in spurious minima.

=========
This paper has been accepted for publication at the European Symposium on Artificial Neural Networks, Bruges, Belgium, April 1996.
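The candidate-selection step described in the abstract above can be sketched roughly as follows. This is a minimal illustration, not the authors' implementation: the unit parameterizations, the least-squares fit of a single output weight, and all names such as `select_candidate` are assumptions made here for concreteness.

```python
import numpy as np

def gaussian_rbf(x, center, width):
    """Radial basis function responding to a localized region of input space."""
    return np.exp(-np.sum((x - center) ** 2, axis=-1) / (2.0 * width ** 2))

def sigmoid(x, w, b):
    """Sigmoidal unit responding to the overall input space."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fit_output_weight(unit_outputs, residual):
    """Least-squares output weight for one candidate unit on the residual error."""
    denom = unit_outputs @ unit_outputs
    return (unit_outputs @ residual) / denom if denom > 0 else 0.0

def select_candidate(X, residual, rng):
    """Compare a sigmoid candidate against several localized RBF candidates and
    return the one whose fitted output best reduces the current residual error."""
    candidates = []
    # One sigmoidal candidate with random initial parameters.
    w, b = rng.normal(size=X.shape[1]), rng.normal()
    candidates.append(("sigmoid", sigmoid(X, w, b)))
    # Several RBF candidates centered on randomly chosen training points.
    for i in rng.choice(len(X), size=3, replace=False):
        candidates.append(("rbf", gaussian_rbf(X, X[i], width=0.5)))
    best = None
    for kind, out in candidates:
        a = fit_output_weight(out, residual)
        err = np.sum((residual - a * out) ** 2)  # error remaining after install
        if best is None or err < best[0]:
            best = (err, kind, out, a)
    return best  # (remaining_error, unit_type, unit_outputs, output_weight)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 2))
residual = np.sin(X[:, 0])  # stand-in for the current network's residual
err, kind, out, a = select_candidate(X, residual, rng)
```

In the full incremental scheme, the winning unit would be installed in the hidden layer and the loop repeated until the residual error falls to the noise level of the data.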
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
 Mrs Rachida CHENTOUF
 LTIRF-INPG
 46, AV Felix Viallet
 38031 Grenoble - FRANCE
 Tel: (+33) 76.57.43.64
 Fax: (+33) 76.57.47.90
 WWW: ftp://tirf.inpg.fr/pub/HTML/chentouf/rachida.html
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=

From jan at uran.informatik.uni-bonn.de Wed Jan 31 10:11:36 1996
From: jan at uran.informatik.uni-bonn.de (Jan Puzicha)
Date: Wed, 31 Jan 1996 16:11:36 +0100
Subject: Publications and Abstracts available online
Message-ID: <199601311511.QAA13190@thalia.informatik.uni-bonn.de>

The following publications are now available as abstracts and compressed PostScript online via the WWW home page of the Computer Vision and Pattern Recognition Group of the University of Bonn, Germany:

http://www-dbv.cs.uni-bonn.de/

This page also contains information about people, the group's scientific projects (segmentation, stereo, compression, data clustering, vector quantization, multidimensional scaling, autonomous robotics, associative memories) and new results in textured image segmentation, as well as links to related sites, conferences, and journals.

Data Clustering

J. Buhmann, Data Clustering and Learning, in Handbook of Brain Theory and Neural Networks, M. Arbib, ed., Bradford Books/MIT Press, 1995.

J. Buhmann, Vector Quantization with Complexity Costs, IEEE Transactions on Information Theory, 39, pp. 1133-1145, 1993.

J. Buhmann and T.
Hofmann, A Maximum Entropy Approach to Pairwise Data Clustering, in Proceedings of the International Conference on Pattern Recognition, Hebrew University, Jerusalem, vol. II, IEEE Computer Society Press, pp. 207-212, 1994.

J. Buhmann and T. Hofmann, Pairwise Data Clustering by Deterministic Annealing, Tech. Rep. IAI-TR-95-7, Institut für Informatik III, Universität Bonn.

T. Hofmann and J. Buhmann, Multidimensional Scaling and Data Clustering, in Advances in Neural Information Processing Systems 7, Morgan Kaufmann Publishers, 1995.

T. Hofmann and J. Buhmann, Hierarchical Pairwise Data Clustering by Mean-Field Annealing, ICANN 1995.

Robotics

J. Buhmann, W. Burgard, A.B. Cremers, D. Fox, T. Hofmann, F. Schneider, J. Strikos and S. Thrun, The Mobile Robot Rhino, AI Magazine, 16:1, 1995.

Face Recognition

J. Buhmann, M. Lades and F. Eeckmann, Illumination-Invariant Face Recognition with a Contrast Sensitive Silicon Retina, in Advances in Neural Information Processing Systems (NIPS) 6, Morgan Kaufmann Publishers, pp. 769-776, 1994.

H. Aurisch, J. Strikos and J. Buhmann, A Real-Time Face Recognition System with a Retina Camera. Internal report; summarizes the results of our face recognition research accomplished in summer 1993.

Associative Memories

J. Buhmann, Oscillatory Associative Memories, in Handbook of Brain Theory and Neural Networks, M. Arbib (ed.), Bradford Books/MIT Press, 1995.

Greetings,
Jan Puzicha

--------------------------------------------------------------------
Jan Puzicha                  | email: jan at uran.cs.uni-bonn.de
Institute f. Informatics III |        jan at cs.uni-bonn.de
University of Bonn           | WWW  : http://www.cs.uni-bonn.de/~jan
Roemerstrasse 164            | Tel.
: +49 228 550-383
D-53117 Bonn                 | Fax : +49 228 550-382

From moody at chianti.cse.ogi.edu Wed Jan 31 20:47:20 1996
From: moody at chianti.cse.ogi.edu (John Moody)
Date: Wed, 31 Jan 96 17:47:20 -0800
Subject: Graduate Study at the Oregon Graduate Institute
Message-ID: <9602010147.AA20447@chianti.cse.ogi.edu>

OGI (Oregon Graduate Institute of Science and Technology) has openings for a few outstanding students in its Computer Science and Electrical Engineering Masters and Ph.D. programs in the areas of Neural Networks, Learning, Signal Processing, Time Series, Control, Speech, Language, Vision, and Computational Finance. OGI has 14 faculty, senior research staff, and postdocs in these areas. Short descriptions of our research interests are appended below.

The primary purposes of this message are:

1) To invite inquiries and applications from prospective students interested in studying for a Masters or PhD degree in the above areas.

2) To notify prospective PhD students who are U.S. Citizens or U.S. Nationals of various fellowship opportunities at OGI. Fellowships provide full or partial financial support while studying for the PhD.

OGI is a young but rapidly growing private research institute located in the Silicon Forest area west of downtown Portland, Oregon. OGI offers Masters and PhD programs in Computer Science and Engineering, Electrical Engineering, Applied Physics, Materials Science and Engineering, Environmental Science and Engineering, Chemistry, Biochemistry, Molecular Biology, and Management.

The Portland area has a high concentration of high-tech companies, including major firms like Intel, Hewlett Packard, Tektronix, Sequent Computer, Mentor Graphics, and Wacker Siltronics, and numerous smaller companies like Planar Systems, FLIR Systems, Flight Dynamics, and Adaptive Solutions (an OGI spin-off that manufactures high-performance parallel computers for neural network and signal processing applications).
The admissions deadline for the OGI PhD programs is March 1. Masters program applications are accepted year-round. Inquiries about these programs and admissions for either Computer Science or Electrical Engineering should be addressed to:

Office of Admissions and Records
Oregon Graduate Institute
PO Box 91000
Portland, OR 97291
Phone: (503) 690-1028, or (800) 685-2423 (toll-free in the US and Canada)
World Wide Web: http://www.ogi.edu/webtest/admissions.html
Internet: admissions at admin.ogi.edu

Because the PhD application season is already well under way, however, informal applications should be sent directly to the CSE Department. For these informal applications, please include a letter specifying your research interests; photocopies of your GRE scores, TOEFL scores, and college transcripts; and an indication of your interest in either the PhD or Masters program. Please send these materials to:

Betty Shannon, Academic Coordinator
Department of Computer Science and Engineering
Oregon Graduate Institute
PO Box 91000
Portland, OR 97291-1000
Phone: (503) 690-1255
Internet: bettys at cse.ogi.edu

+++++++++++++++++++++++++++++++++++++++++++++++++++++++

Oregon Graduate Institute of Science & Technology
Department of Computer Science and Engineering &
Department of Electrical Engineering and Applied Physics

Research Interests of Faculty, Research Staff, and Postdocs in Neural Networks, Signal Processing, Control, Speech, Language, Vision, Time Series, and Computational Finance

(Note: Additional information is available on the Web at http://www.ogi.edu/ )

Etienne Barnard (Associate Professor, EEAP): Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics.

Ron Cole (Professor, CSE): Ron Cole is director of the Center for Spoken Language Understanding at OGI.
Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody, and linguistics with neural networks to produce systems that work in the real world.

Mark Fanty (Research Assistant Professor, CSE): Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers.

Dan Hammerstrom (Associate Professor, CSE): Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next-generation VLSI neurocomputer architectures.

Hynek Hermansky (Associate Professor, EEAP): Hynek Hermansky is interested in speech processing by humans and machines, with engineering applications in speech and speaker recognition, speech coding, enhancement, and synthesis. His main research interest is in practical engineering models of human information processing.

Todd K. Leen (Associate Professor, CSE): Todd Leen's research spans theory of neural network models, architecture and algorithm design, and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling.
John Moody (Associate Professor, CSE): John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, economics, and computational finance.

David Novick (Associate Professor, CSE): David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems.

Misha Pavel (Associate Professor, EEAP): Misha Pavel does mathematical and neural modeling of adaptive behaviors, including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces.

Hong Pi (Senior Research Associate, CSE): Hong Pi's research interests include neural network models, time series analysis, and dynamical systems theory. He currently works on the applications of nonlinear modeling and analysis techniques to time series prediction problems and financial market analysis.

Thorsteinn S. Rognvaldsson (Post-Doctoral Research Associate, CSE): Thorsteinn Rognvaldsson studies both applications and theory of neural networks and other non-linear methods for function fitting and classification. He is currently working on methods for choosing regularization parameters, and on comparing the performance of neural networks with that of other techniques for time series prediction and financial markets.
Pieter Vermeulen (Senior Research Associate, CSE): Pieter Vermeulen is interested in the theory, design and implementation of pattern-recognition systems, neural networks, and telephone-based speech systems. He currently works on the realization of speaker-independent, small-vocabulary interfaces to the public telephone network. Current projects include voice dialing, a system to collect the year 2000 census information, and the rapid prototyping of such systems.

Eric A. Wan (Assistant Professor, EEAP): Eric Wan's research interests include learning algorithms and architectures for neural networks and adaptive signal processing. He is particularly interested in neural applications to time series prediction, adaptive control, active noise cancellation, and telecommunications.

Lizhong Wu (Senior Research Associate, CSE): Lizhong Wu's research interests include neural network theory and modeling, time series analysis and prediction, pattern classification and recognition, signal processing, vector quantization, source coding, and data compression. He is now working on the application of neural networks and nonparametric statistical paradigms to finance.
From erol at ee.duke.edu Wed Jan 31 11:40:17 1996
From: erol at ee.duke.edu (Erol Gelenbe)
Date: Wed, 31 Jan 1996 11:40:17 -0500 (EST)
Subject: No subject
In-Reply-To: <199601281900.PAA18319@marshall.cs.unc.edu>
Message-ID:

BIOLOGICALLY INSPIRED AUTONOMOUS SYSTEMS
Computation, Cognition and Control
Duke University -- March 4 and 5, 1996

Departments of Electrical and Computer Engineering, Psychology: Experimental, Biomedical Engineering, Neurobiology, and NSF-ERC

Preliminary Program

March 4

 8:00- 8:45  Registration
 8:45- 9:00  Erol Gelenbe and Nestor Schmajuk -- Welcome
 9:00- 9:30  Jean-Arcady Meyer (ENS, Paris)

From moody at chianti.cse.ogi.edu Wed Jan 31 22:27:19 1996
From: moody at chianti.cse.ogi.edu (John Moody)
Date: Wed, 31 Jan 96 19:27:19 -0800
Subject: CFP: NEURAL NETWORKS in the CAPITAL MARKETS 1996
Message-ID: <9602010327.AA20766@chianti.cse.ogi.edu>

-- Preliminary Announcement and Call for Papers --

NNCM-96
FOURTH INTERNATIONAL CONFERENCE
NEURAL NETWORKS in the CAPITAL MARKETS

Wednesday-Friday, November 20-22, 1996
The Ritz-Carlton Hotel, Pasadena, California, U.S.A.
Sponsored by Caltech and London Business School

Neural networks have been applied to a number of live systems in the capital markets, and in many cases have demonstrated better performance than competing approaches. Because of the increasing interest in the NNCM conferences held in the U.K. and the U.S., the fourth annual NNCM is planned for November 20-22, 1996, in Pasadena, California. This is a research meeting where original and significant contributions to the field are presented. In addition, introductory tutorials will be included to familiarize audiences of different backgrounds with the financial and the mathematical aspects of the field.
Areas of Interest: Price forecasting for stocks, bonds, commodities, and foreign exchange; asset allocation and risk management; volatility analysis and pricing of derivatives; cointegration, correlation, and multivariate data analysis; credit assessment and economic forecasting; statistical methods, learning techniques, and hybrid systems.

Organizing Committee:
Dr. Y. Abu-Mostafa, Caltech (Chairman)
Dr. A. Atiya, Cairo University
Dr. N. Biggs, London School of Economics
Dr. D. Bunn, London Business School
Dr. M. Jabri, Sydney University
Dr. B. LeBaron, University of Wisconsin
Dr. A. Lo, MIT Sloan School
Dr. I. Matsuba, Chiba University
Dr. J. Moody, Oregon Graduate Institute
Dr. C. Pedreira, Catholic Univ. PUC-Rio
Dr. A. Refenes, London Business School
Dr. M. Steiner, Universitaet Munster
Dr. A. Timermann, UC San Diego
Dr. A. Weigend, University of Colorado
Dr. H. White, UC San Diego
Dr. L. Xu, Chinese University of Hong Kong

Submission of Papers: Original contributions representing new and significant research, development, and applications in the above areas of interest are invited. Authors should send 5 copies of a 1000-word summary clearly stating their results to Dr. Y. Abu-Mostafa, Caltech 136-93, Pasadena, CA 91125, U.S.A. All submissions must be received before May 1, 1996. There will be a rigorous refereeing process to select the high-quality papers to be presented at the conference.

Location: The conference will be held at the Ritz-Carlton Huntington Hotel in Pasadena, within two miles of the Caltech campus. The hotel is a 35-minute drive from Los Angeles International Airport (LAX), which has nonstop flights from most major cities in North America, Europe, the Far East, Australia, and South America.

Mailing List: If you wish to be added to the mailing list of NNCM-96, please send your postal address, e-mail address, and fax number to Dr. Y. Abu-Mostafa, Caltech 136-93, Pasadena, CA 91125, U.S.A.
e-mail: yaser at caltech.edu, fax: (818) 795-0326
Home Page: http://www.cs.caltech.edu/~learn/nncm.html

From lawrence at s4.elec.uq.edu.au Wed Jan 31 23:45:34 1996
From: lawrence at s4.elec.uq.edu.au (Steve Lawrence)
Date: Thu, 1 Feb 1996 14:45:34 +1000 (EST)
Subject: Paper available: Function Approximation with Neural Networks and Local Methods
Message-ID: <199602010445.OAA00201@s4.elec.uq.edu.au>

The following paper presents an overview of global MLP approximation and local approximation. It is known that MLPs can respond poorly to isolated data points, and we demonstrate that considering histograms of k-NN density estimates of the data can help determine the best method in advance.

http://www.elec.uq.edu.au/~lawrence - Australia
http://www.neci.nj.nec.com/homepages/lawrence - USA

We welcome your comments.

Function Approximation with Neural Networks and Local Methods: Bias, Variance and Smoothness

Steve Lawrence, Ah Chung Tsoi, Andrew Back
Electrical and Computer Engineering
University of Queensland, St. Lucia 4072, Australia

ABSTRACT

We review the use of global and local methods for estimating a function mapping $\mathcal{R}^m \rightarrow \mathcal{R}^n$ from samples of the function containing noise. The relationship between the methods is examined and an empirical comparison is performed using the multi-layer perceptron (MLP) global neural network model, the single nearest-neighbour model, a linear local approximation (LA) model, and the following commonly used datasets: the Mackey-Glass chaotic time series, the Sunspot time series, British English Vowel data, TIMIT speech phonemes, building energy prediction data, and the sonar dataset. We find that the simple local approximation models often outperform the MLP. No criteria such as classification/prediction, size of the training set, dimensionality of the training set, etc. can be used to distinguish whether the MLP or the local approximation method will be superior.
However, we find that if we consider histograms of the $k$-NN density estimates for the training datasets, then we can choose the best performing method {\em a priori} by selecting local approximation when the spread of the density histogram is large, and choosing the MLP otherwise. This result correlates with the hypothesis that the global MLP model is less appropriate when the characteristics of the function to be approximated vary throughout the input space. We discuss the results, the smoothness assumption often made in function approximation, and the bias/variance dilemma.
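The a-priori selection rule in the abstract can be illustrated with a small sketch. The abstract does not give the authors' exact density estimator, spread measure, or threshold; the classical k-NN density estimate and the coefficient-of-variation spread used below are assumptions made here for illustration.

```python
import numpy as np

def knn_density_estimates(X, k=5):
    """Classical k-NN density estimate at each training point:
    density is approximately k / (n * r_k^d), where r_k is the
    distance from the point to its k-th nearest neighbour."""
    n, d = X.shape
    # Brute-force pairwise distances (fine for small n).
    dists = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    dists.sort(axis=1)          # column 0 is the zero self-distance
    r_k = dists[:, k]           # distance to the k-th neighbour
    return k / (n * np.maximum(r_k, 1e-12) ** d)

def density_spread(densities):
    """Summarize the spread of the density histogram; here we use the
    coefficient of variation of the estimates as one plausible measure."""
    return densities.std() / densities.mean()

rng = np.random.default_rng(1)
uniform = rng.uniform(size=(200, 2))                    # evenly sampled inputs
clumped = np.vstack([rng.normal(0.0, 0.05, (180, 2)),   # dense clump ...
                     rng.uniform(-3.0, 3.0, (20, 2))])  # ... plus isolated points

s_uniform = density_spread(knn_density_estimates(uniform))
s_clumped = density_spread(knn_density_estimates(clumped))
# Large spread -> prefer local approximation; small spread -> prefer the MLP.
```

On the unevenly sampled dataset the density estimates range from very large (inside the clump) to very small (at the isolated points), so its spread is much larger than that of the uniformly sampled dataset, and the rule above would steer such data toward local approximation.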