From stefan.kremer at crc.doc.ca Tue Oct 1 11:32:48 1996
From: stefan.kremer at crc.doc.ca (Stefan C. Kremer)
Date: Tue, 01 Oct 1996 11:32:48 -0400
Subject: Ph.D. dissertation available: A Theory of Grammatical Induction in the Connectionist Paradigm
Message-ID: <2.2.32.19961001153248.00696364@digame.dgcd.doc.ca>

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/thesis/kremer.thesis.tar.Z

**DO NOT FORWARD TO OTHER GROUPS**

Greetings Connectionists Readers:

My Ph.D. dissertation, entitled "A Theory of Grammatical Induction in the Connectionist Paradigm", is now available for anonymous FTP from the Neuroprose archive. Details are provided below.

-Stefan

========================================================

A Theory of Grammatical Induction in the Connectionist Paradigm

Abstract

This dissertation shows that the tractability and efficiency of training particular connectionist networks to implement certain classes of grammars can be formally determined by applying principles and ideas that have been explored in the symbolic grammatical induction paradigm. Furthermore, this formal analysis also allows networks to be tailored to efficiently solve specific grammatical induction problems. Had the formal work reported in this dissertation been done earlier, it is possible that connectionist researchers would have been able to take a formal, rather than empirical, approach to understanding the computational power of their nets for grammatical induction. As well, our formal approach could have been applied to understand and develop techniques that functionally increase the power of connectionist grammar induction systems. Instead, these techniques are currently being discovered empirically. This dissertation, by considering classical work done over the past three decades, gives a formal grounding to these empirically discovered methods. In doing so, it also suggests a rationale for making the design decisions which define every connectionist grammar induction system. This allows new networks to be better suited to the problems to which they will be applied. Finally, the dissertation provides insights into applying other refinement techniques that connectionist researchers have yet to consider.

Distribution

This document is distributed in the form of a tape archive file named kremer.thesis.tar.Z. The archive contains 7 individual PostScript files named "kremer.thesis1.ps" (14 pages), "kremer.thesis2.ps" (33 pages), "kremer.thesis3.ps" (12 pages), "kremer.thesis4.ps" (28 pages), "kremer.thesis5.ps" (8 pages), "kremer.thesis6.ps" (33 pages), and "kremer.thesis7.ps" (10 pages). The first file (thesis1) contains the title page, copyright notice, abstract, table of contents, list of tables, list of figures, list of abbreviations, list of symbols, and introductory chapter of the dissertation. It may help you to decide which sections of the manuscript you wish to download or print. The last file (thesis7) contains both the concluding chapter and the bibliography for the entire document, while all other files each contain the chapter corresponding to their number (i.e., thesis2 contains Chapter 2). At the present time the file "kremer.thesis.tar.Z" is available via anonymous FTP from the Neuroprose Archive at URL "ftp://archive.cis.ohio-state.edu/pub/neuroprose/thesis/kremer.thesis.tar.Z"; however, the author reserves the right to remove the file at any time without prior notice. Sorry, the author cannot supply hardcopy versions of this document.
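For readers who would rather script the retrieval than type the FTP commands interactively, here is a minimal Python sketch of the same procedure as the transcript below. The host, directory, and filename are taken from the announcement above; whether the 1996-era server still answers is, of course, not guaranteed.

from ftplib import FTP

# Fetch kremer.thesis.tar.Z by anonymous FTP, mirroring the transcript below.
# Host, directory, and filename come from the announcement above.
HOST = "archive.cis.ohio-state.edu"
DIRECTORY = "pub/neuroprose/Thesis"
FILENAME = "kremer.thesis.tar.Z"

ftp = FTP(HOST)
ftp.login()                      # anonymous login
ftp.cwd(DIRECTORY)
with open(FILENAME, "wb") as f:  # binary-mode transfer, as in "ftp> binary"
    ftp.retrbinary("RETR " + FILENAME, f.write)
ftp.quit()

After unpacking (uncompress and tar xvf, exactly as in the transcript), the seven PostScript files can be printed individually.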
Transcript Showing Access Procedure

Here is a transcript showing the procedure to retrieve, "de-archive", uncompress, and print the dissertation. This works on my UNIX system. Success with other systems may vary:

<*** BEGIN TRANSCRIPT ***>

>ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version wu-2.4(1) Wed Jul 5 14:19:42 EDT 1995) ready.
Name (archive.cis.ohio-state.edu:kremer): anonymous
331 Guest login ok, send your complete e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> cd pub/neuroprose/Thesis
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get kremer.thesis.tar.Z
200 PORT command successful.
150 Opening BINARY mode data connection for kremer.thesis.tar.Z (2265663 bytes).
226 Transfer complete.
local: kremer.thesis.tar.Z remote: kremer.thesis.tar.Z
2265663 bytes received in 1.2e+02 seconds (18 Kbytes/s)
ftp> bye
221 Goodbye.
>uncompress kremer.thesis.tar.Z
>tar xvf kremer.thesis.tar
x kremer.thesis1.ps, 229167 bytes, 448 tape blocks
x kremer.thesis2.ps, 2324747 bytes, 4541 tape blocks
x kremer.thesis3.ps, 315922 bytes, 618 tape blocks
x kremer.thesis4.ps, 2746246 bytes, 5364 tape blocks
x kremer.thesis5.ps, 269211 bytes, 526 tape blocks
x kremer.thesis6.ps, 2422155 bytes, 4731 tape blocks
x kremer.thesis7.ps, 114407 bytes, 224 tape blocks
>lpr -s kremer.thesis?.ps

<*** END TRANSCRIPT ***>

Comments and Corrections

If you have any comments or corrections for the author, please e-mail them to: stefan.kremer at crc.doc.ca.

--
Dr. Stefan C. Kremer, Neural Network Research Scientist,
Communications Research Centre, 3701 Carling Ave., P.O. Box 11490, Station H, Ottawa, Ontario K2H 8S2
WWW: http://running.dgcd.doc.ca/~kremer/index.html
Tel: (613) 990-8175  Fax: (613) 990-8369
E-mail: Stefan.Kremer at crc.doc.ca

From freeman at systems.caltech.edu Wed Oct 2 03:20:31 1996
From: freeman at systems.caltech.edu (Robert Freeman)
Date: Wed, 2 Oct 96 00:20:31 PDT
Subject: Conference: Neural Networks in the Capital Markets 11/20/96
Message-ID: <9610020720.AA18979@gladstone.systems.caltech.edu>

*******************************************************************************

--- Registration Package and Preliminary Program ---

NNCM-96
FOURTH INTERNATIONAL CONFERENCE
NEURAL NETWORKS IN THE CAPITAL MARKETS

Wednesday-Friday, November 20-22, 1996
The Ritz-Carlton Hotel, Pasadena, California, U.S.A.
Sponsored by Caltech and London Business School
http://cs.caltech.edu/~learn/nncm

Neural networks have been applied to a number of live systems in the capital markets, and in many cases have demonstrated better performance than competing approaches. Because of the increasing interest in the NNCM conferences held in the U.K. and the U.S., the fourth annual NNCM will be held on November 20-22, 1996, in Pasadena, California. This is a research meeting where original and significant contributions to the field are presented. A day of tutorials (Wednesday, November 20) is included to familiarize audiences of different backgrounds with some of the key financial and mathematical aspects of the field.

Invited Speakers: The conference will feature invited talks by three internationally recognized researchers:

Dr. Rob Engle, UC San Diego
Dr. Andrew Lo, MIT Sloan School
Dr. Paul Refenes, London Business School

Contributed Papers: NNCM-96 will have 4 oral sessions and 2 poster sessions with more than 40 contributed papers presented by academicians and practitioners from six continents, both from the neural networks side and the capital markets side. Each paper has been refereed by 3 experts in the field. The areas of the accepted papers include price forecasting for stocks, bonds, commodities, and foreign exchange; asset allocation and risk management; volatility analysis and pricing of derivatives; cointegration, correlation, and multivariate data analysis; credit assessment and economic forecasting; and statistical methods, learning techniques, and hybrid systems.

Tutorials: Before the main program, there will be a day of tutorials on Wednesday, November 20, 1996. Three two-hour tutorials will be presented as follows:

Statistical Models of Financial Volatility
Dr. Rob Engle, University of California, San Diego

Universal Portfolios and Information Theory
Dr. Tom Cover, Stanford University

Data-Snooping and Other Selection Biases in Financial Econometrics
Dr. Andrew Lo, MIT Sloan School

We are very pleased to have tutors of such caliber help bring new audiences from different backgrounds up to speed in this cross-disciplinary area.

Schedule Outline:

Wednesday, November 20:
 9:00- 5:30  Tutorials 1, 2, 3

Thursday, November 21:
 8:30-11:30  Oral Session I
11:30- 2:00  Luncheon & Poster Session I
 2:00- 5:00  Oral Session II

Friday, November 22:
 8:30-11:30  Oral Session III
11:30- 2:00  Luncheon & Poster Session II
 2:00- 5:00  Oral Session IV

Organizing Committee:
Dr. Y. Abu-Mostafa, Caltech (Chairman)
Dr. A. Atiya, Cairo University
Dr. N. Biggs, London School of Economics
Dr. D. Bunn, London Business School
Dr. M. Jabri, Sydney University
Dr. B. LeBaron, University of Wisconsin
Dr. A. Lo, MIT Sloan School
Dr. I. Matsuba, Chiba University
Dr. J. Moody, Oregon Graduate Institute
Dr. C. Pedreira, Catholic Univ. PUC-Rio
Dr. A. Refenes, London Business School
Dr. M. Steiner, Universitaet Augsburg
Dr. A. Timmermann, UC San Diego
Dr. A. Weigend, University of Colorado
Dr. H. White, UC San Diego
Dr. L. Xu, Chinese University of Hong Kong

Location: The conference will be held at the Ritz-Carlton Huntington Hotel in Pasadena, within two miles of the Caltech campus. One of the most beautiful hotels in the U.S., the Ritz is a 35-minute drive from Los Angeles International Airport (LAX), with nonstop flights from most major cities in North America, Europe, the Far East, Australia, and South America. Home of Caltech, Pasadena has recently become a major dining/hangout center for Southern California with the growth of its `Old Town', built along the styles of the 1950's. Among the cultural attractions of Pasadena are the Norton Simon Museum, the Huntington Library/Gallery/Gardens, and a number of theaters including the Ambassador Theater.

Hotel Reservation: Please contact the Ritz-Carlton Huntington Hotel in Pasadena directly. The phone number is (818) 568-3900 and the fax number is (818) 568-1842. Ask for the NNCM-96 rate. We have negotiated an (incredible) rate of $79+taxes ($110 with $31 credited by NNCM-96 upon registration) per room (single or double occupancy) per night. Please make the hotel reservation IMMEDIATELY as the rate is based on availability.

Registration: Registration is done by mail on a first-come, first-served basis. To ensure your place at the conference, please send the following registration form and payment as soon as possible to
Ms. Lucinda Acosta, Caltech 136-93, Pasadena, CA 91125, U.S.A. Please make check payable to Caltech.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

NNCM-96 Registration Form

Title: ------
Name: ------------------------------------------------
Mailing address: ------------------------------------------------
                 ------------------------------------------------
                 ------------------------------------------------
                 ------------------------------------------------
e-mail: ---------------------------------
fax: ---------------------------------

********Please circle the applicable fees and write the total below********

Main Conference (November 21-22):
Registration fee                       $550
Discounted fee for academicians        $275 (letter on university letterhead required)
Discounted fee for full-time students  $150 (letter from registrar or faculty advisor required)

Tutorials (November 20): You must be registered for the main conference in order to register for the tutorials.
Tutorials fee                          $150
Full-time students                     $100 (letter from registrar or faculty advisor required)

TOTAL: $_________

Please include payment (check or money order in US currency). Please make check payable to Caltech. Mail your completed registration form and payment to Ms. Lucinda Acosta, Caltech 136-93, Pasadena, CA 91125, U.S.A.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Transportation: There is shuttle service (around $25 per person) and bus service (around $15 per person) from Los Angeles International Airport (LAX) to the Ritz-Carlton Hotel in Pasadena. A taxi ride will cost approximately $55. There is also shuttle service from Burbank Airport (BUR), a domestic airport closer to Pasadena; a taxi ride from there will cost approximately $35.

Secretariat: For further information, please contact the NNCM-96 secretariat: Ms. Lucinda Acosta, Caltech 136-93, Pasadena, CA 91125, U.S.A. e-mail: lucinda at sunoptics.caltech.edu, phone (818) 395-4843, fax (818) 795-0326

*******************************************************************************

From wyler at iam.unibe.ch Wed Oct 2 06:20:18 1996
From: wyler at iam.unibe.ch (Kuno Wyler)
Date: Wed, 2 Oct 1996 12:20:18 +0200
Subject: POSTDOCTORAL RESEARCH FELLOWSHIP
Message-ID: <9610021020.AA11589@garfield.unibe.ch>

POSTDOCTORAL RESEARCH FELLOWSHIP
--------------------------------
Neural Computing Research Group
Institute of Informatics and Applied Mathematics
University of Bern, Switzerland

The Neural Computing Research Group at the University of Bern is looking for a highly motivated individual for a two-year postdoctoral research position in the area of development of a neuromorphic perception system based on multi-sensor fusion. The aim of the project is to develop a neurobiologically plausible perception system for novelty detection in a real-world environment (e.g., quality control in industrial production lines or supervision of security zones) based on information from different sensor channels and fast learning algorithms. Potential candidates should have strong mathematical and signal processing skills, with a background in neurobiology and neural networks. Working knowledge of programming (Matlab, LabView or C/C++) or VLSI technology is highly desirable but not required. The position will begin January 1, 1997, with possible renewal for an additional two years. The initial salary is SFr. 60'000/year (approx. $50'000).
To apply for this position, send your curriculum vitae, a publication list with one or two sample publications, and two letters of reference before November 1, 1996, either by e-mail or surface mail to wyler at iam.unibe.ch or

Dr. Kuno Wyler
Neural Computing Research Group
Institute of Informatics and Applied Mathematics
University of Bern
Neubrueckstrasse 10
CH-3012 Bern
Switzerland

From ted at SPENCER.CTAN.YALE.EDU Wed Oct 2 14:02:23 1996
From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU)
Date: Wed, 2 Oct 1996 18:02:23 GMT
Subject: Relating neuronal form to function
Message-ID: <199610021802.SAA00877@PLANCK.CTAN.YALE.EDU>

A digital preprint, issued somewhat belatedly -- at http://www.nnc.yale.edu/papers/NIPS94/nipsfin.html, the html version of our final draft of this paper:

Carnevale, N.T., Tsai, K.Y., Claiborne, B.J., and Brown, T.H. The electrotonic transformation: a tool for relating neuronal form to function. In: Advances in Neural Information Processing Systems, vol. 7, edited by Tesauro, G., Touretzky, D.S., and Leen, T.K. Cambridge, MA: MIT Press, 1995, pp. 69-76.

Roughly 64K total, including figures.

ABSTRACT

The spatial distribution and time course of electrical signals in neurons have important theoretical and practical consequences. Because it is difficult to infer how neuronal form affects electrical signaling, we have developed a quantitative yet intuitive approach to the analysis of electrotonus. This approach transforms the architecture of the cell from anatomical to electrotonic space, using the logarithm of voltage attenuation as the distance metric. We describe the theory behind this approach and illustrate its use.

--Ted
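A note on why the logarithm of attenuation works as a distance metric: attenuations multiply along a path through the cell, so their logarithms add, just as physical distances do. A toy numerical sketch in Python (the voltages here are invented for illustration; the actual theory and tools are in the paper):

import numpy as np

# Toy electrotonic distances: L = ln(attenuation).
V_source = 1.0                          # hypothetical voltage at the source
V_sites = np.array([0.8, 0.5, 0.2])     # hypothetical voltages at three sites
attenuation = V_source / V_sites        # A >= 1 along each path
L = np.log(attenuation)                 # electrotonic distance to each site
print(L)  # distances add along a path, since log turns products into sums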
From kak at ee.lsu.edu Wed Oct 2 15:21:25 1996
From: kak at ee.lsu.edu (Subhash Kak)
Date: Wed, 2 Oct 96 14:21:25 CDT
Subject: No subject
Message-ID: <9610021921.AA05823@ee.lsu.edu>

The following papers may be retrieved by anonymous ftp:

1. Speed of Computation and Simulation by S.C. Kak
ftp://gate.ee.lsu.edu/pub/kak/spee.ps.Z

Abstract: This paper reviews several issues related to information, speed of computation, and simulation of a physical process. It is argued that mental processes proceed at a rate close to the optimal based on thermodynamic considerations. Problems related to the simulation of a quantum mechanical system on a computer are reviewed. Parallels are drawn between biological and adaptive quantum systems.

Just published in *Foundations of Physics*, vol. 26, 1375-1386, 1996

2. Can we define levels of artificial intelligence? by S.C. Kak
ftp://gate.ee.lsu.edu/pub/kak/ai.ps.Z

Abstract: This paper argues for a graded approach to the study of machine intelligence. In contrast to the Turing Test approach, such an approach has the potential of defining incremental progress in machine intelligence research.

Just published in *Journal of Intelligent Systems*, vol. 6, 133-144, 1996

From jhf at playfair.Stanford.EDU Thu Oct 3 14:21:04 1996
From: jhf at playfair.Stanford.EDU (Jerome H. Friedman)
Date: Thu, 3 Oct 1996 11:21:04 -0700
Subject: TR available: Polychotomous classification.
Message-ID: <199610031821.LAA26129@playfair.Stanford.EDU>

*** Technical Report available ***

ANOTHER APPROACH TO POLYCHOTOMOUS CLASSIFICATION

Jerome H. Friedman
Stanford University
(jhf at stat.stanford.edu)

ABSTRACT

An alternative solution to the K-class (K > 2, polychotomous) classification problem is proposed. It is a simple extension of K = 2 (dichotomous) classification in that a separate two-class decision boundary is independently constructed between every pair of the K classes. Each of these boundaries is then used to assign an unknown observation to one of its two respective classes. The individual class that receives the most such assignments over these K(K-1)/2 decisions is taken as the predicted class for the observation. Motivation for this approach is provided, along with discussion of those situations where it might be expected to do better than more traditional methods. Examples are presented illustrating that substantial gains in accuracy can sometimes be achieved.

Available by ftp from: "ftp://stat.stanford.edu/pub/friedman/poly.ps.Z"

Note: this postscript does not view properly in some versions of ghostview. It does seem to print properly on nearly all postscript printers.
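To make the voting rule concrete, here is a minimal Python sketch of the pairwise scheme described in the abstract. The base two-class learner below is a trivial nearest-centroid rule chosen only to keep the sketch self-contained; it is a stand-in, not Friedman's choice of pairwise classifier, and the data are hypothetical:

import numpy as np
from itertools import combinations

def centroid_classifier(Xa, Xb):
    # Trivial two-class rule: assign to the nearer class centroid.
    # Stands in for any pairwise decision boundary.
    ca, cb = Xa.mean(axis=0), Xb.mean(axis=0)
    return lambda x: 0 if np.linalg.norm(x - ca) <= np.linalg.norm(x - cb) else 1

def predict_pairwise(X_by_class, x):
    # One boundary per pair of the K classes; each assigns x to one of its
    # two classes; the class with the most votes over the K(K-1)/2
    # decisions is the prediction.
    K = len(X_by_class)
    votes = np.zeros(K, dtype=int)
    for a, b in combinations(range(K), 2):
        clf = centroid_classifier(X_by_class[a], X_by_class[b])
        votes[(a, b)[clf(x)]] += 1
    return int(votes.argmax())

# Hypothetical 3-class toy data (K = 3, hence 3 pairwise decisions).
rng = np.random.default_rng(0)
X_by_class = [rng.normal(loc=m, scale=0.3, size=(20, 2))
              for m in ([0, 0], [2, 0], [0, 2])]
print(predict_pairwise(X_by_class, np.array([1.9, 0.1])))  # expect class 1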
From gaudiano at cns.bu.edu Thu Oct 3 18:03:56 1996
From: gaudiano at cns.bu.edu (Paolo Gaudiano)
Date: Thu, 3 Oct 1996 18:03:56 -0400 (EDT)
Subject: IMPORTANT: WCNN'97 has merged with ICNN'97
Message-ID: <199610032203.SAA24959@ruggles.bu.edu>

IMPORTANT ANNOUNCEMENT for INNS Members and others who planned to submit papers to the World Congress on Neural Networks (WCNN97).

From the Board of Governors of the International Neural Network Society

The INNS Board of Governors urges all INNS members and other potential authors to speed up preparation of their technical papers to meet a November 15, 1996, deadline (instead of the previously announced date of January 15, 1997). There will be only one major US neural network meeting in 1997: Houston, June 9-12, 1997.

The INNS Board of Governors took a positive and definite step toward reinstituting the tradition of joint neural network meetings with the IEEE. Specifically, it was decided to replace the planned 1997 INNS meeting in Boston by offering strong technical involvement in the Houston meeting being planned by the IEEE, June 9-12, 1997. The IEEE has accepted the offer. INNS will be listed as a Technical Co-Sponsor of the meeting, and IEEE has invited the INNS Program Chair, Dan Levine, to serve as a Program Co-Chair for their meeting. The chairs are working together to develop sessions and/or tracks to accommodate certain additional technical areas traditionally of interest to INNS members.

A copy of the IEEE Call for Papers for the 1997 ICNN in Houston is attached. Please note that the paper submission deadline listed is November 1, 1996. Due to the shortness of time, IEEE is willing to allow up to a two-week grace period for INNS members. Thus, November 15th is to be seen as an "absolute" deadline. Additional information can be found on the IEEE ICNN web site at: http://www.mindspring.com/~pci-inc/ICNN97

We look forward to seeing all of you in Houston.

----------------------------------------------------------------------

WELCOME TO THE BRAND NEW ICNN'97 CALL FOR PAPERS.....

IEEE-NNC and INNS, in the spirit of earlier IJCNN's, Co-sponsor ICNN97 and Future Conferences ...

///////////////////////////////////////////
/                                         /
/            I C N N ' 9 7                /
/                                         /
///////////////////////////////////////////

New ICNN '97 Call for Papers

INTERNATIONAL CONFERENCE ON NEURAL NETWORKS (ICNN'97)
Westin Galleria Hotel, Houston, Texas, USA
Tutorials June 8, Conference June 9-12, 1997

CO-SPONSORED BY THE IEEE NEURAL NETWORKS COUNCIL AND THE INTERNATIONAL NEURAL NETWORKS SOCIETY
NNC....... IEEE....... INNS

This conference is a major international forum for researchers, practitioners and policy makers interested in natural and artificial neural networks. Submissions of papers related, but not limited, to the topics listed below are invited:

Applications
Architectures
Associative Memory
Cellular Neural Networks
Computational Intelligence
Cognitive Science
Data Analysis
Fuzzy Neural Systems
Genetic and Annealing Algorithms
Hardware Implementation (Electronic and Optical)
Hybrid Systems
Image and Signal Processing
Intelligent Control
Learning and Memory
Machine Vision
Model Identification
Motion Vision
Motion Analysis
Neurobiology
Neurocognition
Neurosensors and Wavelets
Neurodynamics and Chaos
Optimization
Pattern Recognition
Prediction
Robotics
Sensation and Perception
Sensorimotor Systems
Speech, Hearing, and Language
System Identification
Supervised/Unsupervised Learning
Time Series Analysis

PAPER SUBMISSION: Papers must be received by the Technical Program Co-Chairs by November 1, 1996. PAPER DEADLINE NOV. 15 ### INNS MEMBERS ONLY ### Papers received after that date will be returned unopened. International authors should submit their work via Air Mail or Express Courier so as to ensure timely delivery. All submissions will be acknowledged by electronic or postal mail. Mail all papers to Prof. James M. Keller, Computer Engineering and Computer Science Department; 217 Engineering Building West; University of Missouri; Columbia, MO 65211 USA. Phone: (573) 882-7339.

CONTACT: General Chair, Prof. Nicolaos B. Karayiannis, at Karayiannis at UH.EDU; Program Co-Chairs: Prof. Daniel S. Levine, at b344dsl at utarlg.uta.edu; Prof. Keller, at keller at ece.missouri.edu; Prof. Raghu Krishnapuram, at raghu at ece.missouri.edu; or other members of the Program Committee with questions.

Six copies (one original and five copies) of the paper must be submitted. Papers must be camera-ready on 8 1/2 by 11 white paper, one-column format in Times or similar font style, 10 points or larger, with one inch margins on all four sides. Do not fold or staple the original camera-ready copy. Four pages are encouraged; however, the paper must not exceed six pages, including figures, tables, and references, and should be written in English. Submissions that do not adhere to the guidelines above will be returned unreviewed. Centered at the top of the first page should be the complete title, author name(s), and postal and electronic mailing addresses. In the accompanying letter, the following information must be included:

Full Title of the Paper
Technical Area (First and Second Choices)
Corresponding Author (Name, Postal and E-Mail Addresses, Telephone & FAX Numbers)
Preferred Mode of Presentation (Oral or Poster)

PAPER REVIEW: Papers will be reviewed by senior researchers in the field and authors will be informed of the decisions by January 2, 1997. Authors of accepted papers will be allowed to revise their papers, and the final versions must be received by February 1, 1997.

BEST STUDENT PAPER AWARDS: To qualify, a student or group of students must contribute over 70% of the paper and be the PRIMARY AUTHOR(S). The submission should clearly indicate that the paper is to be considered for the best student paper award, the amount of contribution by the student, current level of study, and e-mail address.

SPECIAL SESSIONS: Proposals for plenary and panel sessions must be submitted to the Plenary/Special Sessions Chair, Jacek Zurada, by October 15, 1996.
TUTORIALS: Proposals for tutorials must be submitted to the Tutorials Chair, John Yen, by October 15, 1996.

EXHIBITOR INFORMATION: A large group of vendors and participants from industry, academia and government are expected. Potential exhibitors may request information from the Exhibits Chair, Joydeep Ghosh.

////////////////////
CONTACTS:
////////////////////

TECHNICAL: For technical information on the conference, please contact members of the Organizing Committee.

REGISTRATION: Conference Secretariat: Meeting Management, 2603 Main Street, #690, Irvine, CA 92714. Phone: (714) 752-8205, Fax: (714) 752-7444, Email: Meeting Mgt at aol.com

WEB-SITE: New Web Site: http://www.mindspring.com/~pci-inc/ICNN97 (Mary Lou Padgett). Original Web Sites: (Bogdan M. Wilamowski, e-mail wilam at uwyo.edu.)

Comments/questions on new ICNN'97:
General Chair, Nicolaos B. Karayiannis, Email: Karayiannis at UH.EDU
INNS Board of Governors Member: Daniel S. Levine, Email: b344dsl at utarlg.uta.edu

///////////////////////////////////////////////////////
NEW ICNN'97 Members of the Organizing Committee
///////////////////////////////////////////////////////

General Chair
Prof. Nicolaos B. Karayiannis
Dept. of Electrical & Computer Engineering
University of Houston
Houston TX 77204-4793, USA
Phone: (713) 743-4436  Fax: (713) 743-4444
Email: Karayiannis at UH.EDU

Technical Program and Proceedings Co-Chairs
Prof. James M. Keller
Computer Engineering and Computer Science Department
217 Engineering Building West
University of Missouri
Columbia, MO 65211 USA
Phone: (573) 882-7339  Fax: (573) 882-0397
Email: keller at ece.missouri.edu

Prof. Raghu Krishnapuram
University of Missouri
Computer Engineering and Computer Science Department
Columbia MO 65211 USA
Phone: (573) 882-7766  Fax: (573) 882-0397
Email: raghu at ece.missouri.edu

INNS CONTACT:
Prof. Daniel S. Levine
Univ. of Texas at Arlington
Department of Psychology
Arlington, TX 76019-0408
Phone: 817-272-3598  Fax: 817-272-2364
Email: b344dsl at utarlg.uta.edu

Tutorials Chair
Prof. John Yen
Texas A&M University
Dept. of Computer Science
301 Harvey R. Bright Bldg.
College Station TX 77843-3112 USA
Phone: (409) 845-5466  Fax: (409) 847-8578
Email: yen at cs.tamu.edu

Publicity Chair
Mary Lou Padgett
Auburn University or Padgett Computer Innovations, Inc. (PCI-INC)
1165 Owens Road
Auburn AL 36830 USA
Phone: (334) 821-2472  Fax: (334) 821-3488
Email: m.padgett at ieee.org

Exhibits Chair
Prof. Joydeep Ghosh
University of Texas
Dept. of Electrical & Computer Engineering
Engineering Sciences Building (ENS) 516
Austin TX 78712-1084 USA
Phone: (512) 471-8980  Fax: (512) 471-5907
Email: ghosh at ece.utexas.edu

Plenary/Special Sessions Chair
Prof. Jacek M. Zurada
University of Louisville
Dept. of Electrical Engineering
Louisville KY 40292 USA
Phone: (502) 852-6314  Fax: (502) 852-6807
Email: jmzura02 at starbase.spd.louisville.edu

International Liaison Chair
Prof. Sankar K. Pal
Machine Intelligence Unit
Indian Statistical Institute
203 B. T. Road
Calcutta 700 035 INDIA
Phone: (0091)-33-556-8085
Fax: (0091)-33-556-6925
Fax: (0091)-33-556-6680
Email: sankar at isical.ernet.in

Finance Chair
Prof. Ben H. Jansen
University of Houston
Dept. of Electrical & Computer Engineering
Houston TX 77204-4793 USA
Phone: (713) 743-4431  Fax: (713) 743-4444
Email: bjansen at uh.edu

Local Arrangements Chair
Prof. Heidar A. Malki
University of Houston
Electrical-Electronics Department
Houston TX 77204-4083 USA
Phone: (713) 743-4075  Fax: (713) 743-4032
Email: malki at uh.edu

//////////////////////////////////////////////////////
ICNN'97 TUTORIAL SUBMISSIONS
//////////////////////////////////////////////////////

ICNN 97 Tutorial Proposal Submission Guideline

Tutorial proposals for the 1997 IEEE International Conference on Neural Networks (ICNN 97) are solicited. The proposal should be prepared using the format described below.

Proposal Format

The proposal should contain the following information:

- Title and expected duration of the tutorial
- Objective and expected benefit to the tutorial participants
- Target audience and their required background
- A one-paragraph justification of the timing of the tutorial. A topic that is in its infancy or is too mature is not likely to be suitable. Therefore, the timing of the proposed tutorial should be justified in terms of (1) the amount of interest in its subject area and (2) the current body of knowledge developed in the area.
- An outline of the material to be covered by the tutorial
- Qualifications and contact information (including e-mail address and FAX number) of the instructor

Submission Information

The proposal should be submitted to the Tutorial Chair of ICNN 97 at the following address by October 15, 1996, using postal mail or e-mail.

Tutorial Chair
Prof. John Yen
Center for Fuzzy Logic, Robotics, and Intelligent Systems
Department of Computer Science
301 Harvey R. Bright Bldg.
Texas A&M University
College Station, TX 77843-3112 U.S.A.
TEL: (409) 845-5466  FAX: (409) 847-8578
E-mail: yen at cs.tamu.edu

--=====================
Mary Lou Padgett
1165 Owens Road
Auburn, AL 36830
P: (334) 821-2472  F: (334) 821-3488
m.padgett at ieee.org
Auburn University, EE Dept.
Padgett Computer Innovations, Inc. (PCI) Simulation, VI, Seminars
IEEE Standards Board -- Virtual Intelligence (VI): NN, FZ, EC, VR
--=====================

From cas-cns at cns.bu.edu Fri Oct 4 14:24:57 1996
From: cas-cns at cns.bu.edu (CAS/CNS)
Date: Fri, 04 Oct 1996 14:24:57 -0400
Subject: International Conference on VISION, RECOGNITION, ACTION
Message-ID: <199610041824.OAA15927@cns.bu.edu>

***** CALL FOR PAPERS *****

International Conference on
VISION, RECOGNITION, ACTION: NEURAL MODELS OF MIND AND MACHINE

May 28-31, 1997

Sponsored by the Center for Adaptive Systems and the Department of Cognitive and Neural Systems, Boston University, with financial support from the Defense Advanced Research Projects Agency and the Office of Naval Research

This conference will include a day of tutorials (May 28) followed by 3 days of 21 invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems see, understand, and act upon a changing world. Meeting updates can be found at http://cns-web.bu.edu/cns-meeting/. Hotel and restaurant information can also be found there.

CONFIRMED INVITED SPEAKERS AND PROGRAM OUTLINE

WEDNESDAY, MAY 28, 1997
TUTORIALS

STEPHEN GROSSBERG
"Vision, Brain, and Technology" (3 hours in two 1-1/2 hour lectures)

This tutorial will provide a self-contained introduction to recent models of how the brain sees. It will also illustrate how these models have been used to help solve difficult image processing problems in technology. The biological part will discuss neural models of visual form, color, depth, figure-ground separation, motion, and attention, and how these several processes cooperate to generate complex percepts.
The tutorial will build a theoretical bridge between data about visual perception and data about the architecture and dynamics of the visual brain. Technological applications to image restoration, texture labeling, figure-ground separation, and related problems will be described.

GAIL CARPENTER
"Self-Organizing Neural Networks for Learning, Recognition, and Prediction: ART Architectures and Applications" (2 hours)

In 1976, Stephen Grossberg introduced adaptive resonance as a theory of human cognitive information processing. Over the past decade, the theory has led to an evolving series of real-time neural networks (ART models) that self-organize recognition categories in response to arbitrary sequences of input patterns. The intrinsic stability of an ART system allows rapid learning of new information while essential components of previously learned patterns are preserved. This tutorial will describe basic ART design principles, analytic tools, and benchmark simulations. Both unsupervised networks, such as ART 1, ART 2, ART 3, and fuzzy ART, and supervised learning architectures, such as ARTMAP, fuzzy ARTMAP, and ART-EMAP, will be discussed. Successful applications of the ART and ARTMAP networks, including the Boeing parts retrieval CAD system, automatic mapping from remote sensing satellite measurements, and medical database prediction, will be outlined. Computational elements of the recently developed dART and dARTMAP networks, which feature distributed code representations, will also be introduced.

ERIC SCHWARTZ
"Algorithms and Hardware for the Application of Space-Variant Active Vision to High Performance Machine Vision" (2 hours)

The term space-variance refers to the fact that all higher vertebrate visual systems are based on spatial architectures which have non-constant resolution across the visual field. It has been shown that such architectures can lead to up to four orders of magnitude of compression in the space-complexity of vision tasks. However, there are fundamental algorithmic and hardware problems involved in the exploitation of these observations in computer vision, many of which have benefited from considerable progress during the past several years. In this tutorial, a brief outline of the anatomical basis for the notion of space-variance will be provided. Several examples of space-variant active vision systems will then be discussed, focusing on the hardware specifications for sensors, optics, actuators, and DSP-based parallel processors. Finally, a review of the algorithmic aspects of these systems will be presented, including issues related to early vision (i.e., edge enhancement via nonlinear diffusion methods) and to pattern matching, based on the recent development of an exponential chirp algorithm which can perform high-speed quasi-shift-invariant processing on logarithmic image architectures. Functioning examples of space-variant active vision systems based on these developments will be demonstrated, including a miniature visually guided autonomous vehicle, a machine vision system for reading the license plates of high-speed vehicles for traffic control, and a blind-prosthetic device based on a "wearable" active vision system.

********************
TUTORIAL BIOSKETCHES:

GAIL CARPENTER is professor in the departments of Cognitive and Neural Systems (CNS) and Mathematics at Boston University.
She is the CNS Director of Graduate Studies; 1989 Vice-President and 1994-96 Secretary of the International Neural Network Society (INNS); organization chair of the 1988 INNS annual meeting; and a member of the editorial boards of Brain Research, IEEE Transactions on Neural Networks, Neural Computation, and Neural Networks. She has served on the INNS Board of Governors since its founding in 1987, and is a member of the Council of the American Mathematical Society. She is a leading architect of the Adaptive Resonance Theory (ART) family of architectures for fast learning, pattern recognition, and prediction of nonstationary databases, including both unsupervised (ART 1, ART 2, ART 2-A, ART 3, fuzzy ART, distributed ART) and supervised (ARTMAP, fuzzy ARTMAP, ART-EMAP, distributed ARTMAP) ART networks. These systems have been used for a wide range of applications, such as medical diagnosis, remote sensing, automatic target recognition, mobile robots, and database management. Her earlier research includes the development, computational analysis, and applications of neural models of nerve impulse generation (Hodgkin-Huxley equations), vision, cardiac rhythms, and circadian rhythms. Professor Carpenter received her graduate training in mathematics at the University of Wisconsin and was a faculty member at MIT and Northeastern University before moving to Boston University.

STEPHEN GROSSBERG is Wang Professor of Cognitive and Neural Systems and Professor of Mathematics, Psychology, and Biomedical Engineering at Boston University. He is the founder and Director of the Center for Adaptive Systems, as well as the founder and Chairman of the Department of Cognitive and Neural Systems. He founded and was first President of the International Neural Network Society, and also founded and is co-editor-in-chief of the Society's journal, Neural Networks. Grossberg was General Chairman of the first IEEE International Conference on Neural Networks. He is on the editorial boards of Brain Research, Journal of Cognitive Neuroscience, Behavioral and Brain Sciences, Neural Computation, IEEE Transactions on Neural Networks, and Adaptive Behavior. He organized two multi-institutional Congressional Centers of Excellence for research on biological neural networks and their technological applications. He received the IEEE Neural Network Pioneer Award, the INNS Leadership Award, and the Thinking Technology Award of the Boston Computer Society, and is a Fellow of the American Psychological Association and the Society of Experimental Psychologists. Grossberg and his colleagues have pioneered and developed a number of the fundamental principles, mechanisms, and architectures that form the foundation for contemporary neural network research. This work focuses upon the design principles and mechanisms which enable the behavior of individuals to adapt successfully in real time to unexpected environmental changes. Core models pioneered by this approach include competitive learning and self-organizing feature maps, adaptive resonance theory, masking fields, gated dipole opponent processes, associative outstars and instars, associative avalanches, nonlinear cooperative-competitive feedback networks, boundary contour and feature contour systems, and vector associative maps. Grossberg received his graduate training at Stanford University and Rockefeller University, and was a Professor at MIT before assuming his present position at Boston University.
ERIC SCHWARTZ received the PhD degree in High Energy Physics from Columbia University in 1973, followed by post-doctoral studies in neurophysiology with E. Roy John at New York Medical College. He has served as Associate Professor of Psychiatry at New York University Medical Center and Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences. In 1985 he organized the first Symposium on Computational Neuroscience, and in 1989 he founded Vision Applications, Inc., which designs and builds prototype machine vision systems based on space-variant active vision. Currently, he is Professor of Cognitive and Neural Systems, Electrical Engineering and Computer Systems, and Anatomy and Neurobiology at Boston University. His research experience includes experimental particle physics, physiology (single cell recording), anatomy (2DG, PETT, MRI), computer graphics and image processing, VLSI design, actuator design, and neural modeling.

*************************

THURSDAY, MAY 29, 1997
INVITED LECTURES

Robert Shapley, New York University: Brain Mechanisms for Visual Perception of Occlusion
George Sperling, University of California, Irvine: An Integrated Theory for Attentional Processes in Vision, Recognition, and Memory
Patrick Cavanagh, Harvard University: Direct Recognition
Stephen Grossberg, Boston University: Perceptual Grouping and Attention during Cortical Form and Motion Processing
Robert Desimone, National Institute of Mental Health: Neuronal Mechanisms of Visual Attention
Ennio Mingolla, Boston University: Visual Search
Patricia Goldman-Rakic, Yale University Medical School: The Machinery of Mind: Models from Neurobiology
Larry Squire, San Diego VA Medical Center: Brain Systems for Recognition Memory

There will also be a contributed poster session on this day.

FRIDAY, MAY 30, 1997
INVITED LECTURES

Eric Schwartz, Boston University: Multi-Scale Vortex Structure of the Brain: Anatomy as Architecture in Biological and Machine Vision
Lance Optican, National Eye Institute: Neural Control of Rapid Eye Movements
John Kalaska, University of Montreal: Reaching to Visual Targets: Cerebral Cortical Neuronal Mechanisms
Rodney Brooks, Massachusetts Institute of Technology: Models of Vision-Based Human Interaction

There will also be a contributed talk session and a reception, followed by the KEYNOTE LECTURE:

Stuart Anstis, University of California, San Diego: Moving in Unexpected Directions

SATURDAY, MAY 31, 1997
INVITED LECTURES

Azriel Rosenfeld, University of Maryland: Some Viewpoints on Vision
Terrance Boult, Lehigh University: Polarization Vision
Allen Waxman, MIT Lincoln Laboratory: Opponent Color Models of Visible/IR Fusion for Color Night Vision
Gail Carpenter, Boston University: Distributed Learning, Recognition, and Prediction in ART and ARTMAP Networks
Tomaso Poggio, Massachusetts Institute of Technology: Representing Images for Visual Learning
Michael Jordan, Massachusetts Institute of Technology: Graphical Models, Neural Networks, and Variational Approximations
Andreas Andreou, Johns Hopkins University: Mixed Analog/Digital Neuromorphic VLSI for Sensory Systems
Takeo Kanade, Carnegie Mellon University: Computational VLSI Sensors: Integrating Sensing and Processing

There will also be a contributed poster session on this day.

CALL FOR ABSTRACTS: Contributed abstracts by active modelers of vision, recognition, or action in cognitive science, computational neuroscience, artificial neural networks, artificial intelligence, and neuromorphic engineering are welcome.
They must be received, in English, by January 31, 1997. Notification of acceptance will be given by February 28, 1997. A meeting registration fee must accompany each Abstract; see Registration Information below for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings. Each Abstract should fit on one 8 1/2 x 11" white page with 1" margins on all sides, single-column format, single-spaced, in Times Roman or a similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), and mailing and email address(es) should begin each Abstract. An accompanying cover letter should include: full title of Abstract; corresponding author and presenting author name, address, telephone, fax, and email address. Preference for oral or poster presentation should be noted. (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, and VCR facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. The original and 3 copies of each Abstract should be sent to: CNS Meeting, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. The program committee will determine whether papers will be accepted for an oral or poster presentation, or rejected.

REGISTRATION INFORMATION: Since seating at the meeting is limited, early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to: CNS Meeting, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If paying by credit card, mail to the above address, or fax to (617) 353-7755.

STUDENT FELLOWSHIPS: A limited number of fellowships for PhD candidates and postdoctoral fellows are available to at least partially defray meeting travel and living costs. The deadline for applying for fellowship support is January 31, 1997. Applicants will be notified by February 28, 1997. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Students who also submit an Abstract need to include the registration fee with their Abstract. Reimbursement checks will be distributed after the meeting; their size will be determined by student need and the availability of funds.

--------------------------------------------------

REGISTRATION FORM (Please Type or Print)

Vision, Recognition, Action: Neural Models of Mind and Machine
Boston University, Boston, Massachusetts
Tutorials: May 28, 1997
Meeting: May 29-31, 1997

Mr/Ms/Dr/Prof:
Name:
Affiliation:
Address:
City, State, Postal Code:
Phone and Fax:
Email:

The conference registration fee includes the meeting program, reception, six coffee breaks, and the meeting proceedings.
Two coffee breaks and a book of tutorial viewgraph copies will be covered by the tutorial registration fee.

CHECK ONE:
[ ] $55 Conference plus Tutorial (Regular)
[ ] $40 Conference plus Tutorial (Student)
[ ] $35 Conference Only (Regular)
[ ] $25 Conference Only (Student)
[ ] $30 Tutorial Only (Regular)
[ ] $25 Tutorial Only (Student)

Method of Payment:
[ ] Enclosed is a check made payable to "Boston University". Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges.
[ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only).

Type of card:
Name as it appears on the card:
Account number:
Expiration date:
Signature and date:

--------------------------------------------------

From jung at pop.uky.edu Fri Oct 4 12:03:24 1996
From: jung at pop.uky.edu (Dr. Ranu Jung)
Date: Fri, 4 Oct 1996 16:03:24 +0000
Subject: Graduate Research Assistantship
Message-ID: <199610042111.RAA24134@service1.cc.uky.edu>

DYNAMICAL ANALYSIS OF LOCOMOTOR CONTROL (Graduate Research Assistantship)

This position is part of a project funded by The Whitaker Foundation that is directed at examining the dynamical interaction between the brain and the spinal cord in the control of locomotion. The project involves experimental and computational studies with sub-projects for: 1) characterization of the intrinsic variability in the fictive locomotor rhythm obtained in in vitro brain-spinal cord preparations of the lamprey, 2) investigation of the role of the feedforward-feedback loop between the brain and the spinal cord in short- and long-term control of locomotion (changes in stability states, responses to perturbations), and 3) mathematical model development (biophysically motivated neural networks for the central pattern generator for swimming) and analysis of the models using techniques from dynamical systems theory. The analysis of experimental data will include development of novel signal processing methods and use of techniques from nonlinear systems analysis. The project will be conducted at the Experimental and Computational Neuroscience Laboratory at the Center for Biomedical Engineering. In addition to ties within the Center, the laboratory collaborates with members of the Department of Electrical Engineering and the Department of Physiology. The position is available for up to three years of graduate work.

Applications, including a CV and the names of two references, may be sent to Dr. Ranu Jung by email (jung at pop.uky.edu), by FAX (606-257-1856), or by postal mail to Ranu Jung, Ph.D., 21 Wenner Gren Research Laboratory, University of Kentucky, Lexington, KY 40506-0070. Additional information can be obtained by contacting Dr. Jung by email or telephone (606-257-5931). Information about other neuroscience-related research being conducted at the Center can be obtained on the web at the URL http://www.uky.edu/RGS/CBME/CBMENeuralControl.html. Details about the University of Kentucky and the Center for Biomedical Engineering can be obtained on the web at http://www.uky.edu; http://www.uky.edu/RGS/CBME. The University of Kentucky is located in the rolling hills of the Bluegrass Country and has a diverse campus. The Center for Biomedical Engineering is a multidisciplinary center in the Graduate School. We have strong ties to the Medical Center and the School of Engineering.

Ranu Jung, Ph.D.                     email: jung at pop.uky.edu
Center for Biomedical Engineering    phone: 606-257-5931
Wenner-Gren Research Lab.
fax: 606-257-1856
University of Kentucky               http://www.uky.edu/RGS/CBME/jung.html
Lexington, KY 40506-0070

From john at dcs.rhbnc.ac.uk Fri Oct 4 04:40:39 1996
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Fri, 04 Oct 96 09:40:39 +0100
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199610040840.JAA32168@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real-valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-049:
----------------------------------------
Extended Grzegorczyk Hierarchy in the BSS Model of Computability
by Jean-Sylvestre Gakwaya, Universit\'e de Mons-Hainaut, Belgium

Abstract: In this paper, we give an extension of the Grzegorczyk Hierarchy to the BSS theory of computability, which is a generalization of the classical theory. We adapt some classical results related to the Grzegorczyk hierarchy to the new setting.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-050:
----------------------------------------
Learning from Examples and Side Information
by Joel Ratsaby, Technion, Israel, and Vitaly Maiorov, Technion, Israel

Abstract: We set up a theoretical framework for learning from examples and side information which enables us to compute the tradeoff between the sample complexity and information complexity for learning a target function in a Sobolev functional class $\cal F$. We use the notion of the {\em $n^{th}$ minimal radius of information} of Traub et al. \cite{traub} and combine it with VC-theory to define a new quantity $I_{n,d}({\cal F})$ which measures the minimal approximation error of a target $g\in {\cal F}$ by the family of function classes with pseudo-dimension $d$ under a given side information which consists of any $n$ measurements on the target function $g$ constrained to being linear operators. By obtaining almost tight upper and lower bounds on $I_{n,d}({\cal F})$ we find an information operator $\hat{N}_n$ which yields a worst-case error no larger than a logarithmic factor in $n$ and $d$ above the lower bound on $I_{n,d}({\cal F})$. Hence, to within a logarithmic factor, it is the most efficient way of providing side information about a target $g$ under the constraint that the information operator must be linear and that the approximating class has pseudo-dimension $d$.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-051:
----------------------------------------
Complexity and Dimension
by Felipe Cucker, City University of Hong Kong; Pascal Koiran, Ecole Normale Superieure, Lyon, France; and Martin Matamala, Universidad de Chile, Chile

Abstract: In this note we define a notion of sparseness for subsets of $\Ri$ and we prove that there are no sparse $\NPadd$-hard sets. Here we deal with additive machines which branch on equality tests of the form $x=y$, and $\NPadd$ denotes the corresponding class of sets decidable in nondeterministic polynomial time. Note that this result implies the separation $\Padd\not=\NPadd$ already known.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-052:
----------------------------------------
Semi-Algebraic Complexity -- Additive Complexity of Diagonalization of Quadratic Forms
by Thomas Lickteig, Universit\"at Bonn, Germany, and Klaus Meer, RWTH Aachen, Germany

Abstract (for references see the full paper): We study matrix calculations such as diagonalization of quadratic forms under the aspect of additive complexity, and relate these complexities to the complexity of matrix multiplication. While in \cite{BKL} for multiplicative complexity the customary ``thick path existence'' argument was sufficient, here for additive complexity we need the more delicate finesse of the real spectrum (cf. \cite{BCR}, \cite{Be}, \cite{KS}) to obtain a complexity relativization. After its outstanding success in semi-algebraic geometry, the power of the real spectrum method in complexity theory becomes more and more apparent. Our discussions substantiate once more the significance and future r\^ole of this concept in the mathematical evolution of the field of real algebraic algorithmic complexity. A further technical tool concerning additive complexity is the structural transport metamorphosis from \cite{Li1}, which constitutes another use of exponentiation and logarithm as it appears in the work on additive complexity by \cite{Gr} and \cite{Ri} through the use of \cite{Kh}. We confine ourselves here to diagonalization of quadratic forms. In the forthcoming paper \cite{LM} further such relativizations of additive complexity will be given for a series of matrix computational tasks.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-053:
----------------------------------------
Structural Risk Minimization over Data-Dependent Hierarchies
by John Shawe-Taylor, Royal Holloway, University of London, UK; Peter Bartlett, Australian National University, Australia; Robert Williamson, Australian National University, Australia; and Martin Anthony, London School of Economics, UK

Abstract: The paper introduces some generalizations of Vapnik's method of structural risk minimisation (SRM). As well as making explicit some of the details of SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then considers the more general case when the hierarchy of classes is chosen in response to the data. A result is presented on the generalization performance of classifiers with a ``large margin''. This theoretically explains the impressive generalization performance of the maximal margin hyperplane algorithm of Vapnik and co-workers (which is the basis for their support vector machines). The paper concludes with a more general result in terms of ``luckiness'' functions, which provides a quite general way of exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets. Four examples are given of such functions, including the VC dimension measured on the sample.
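To make the ``large margin'' quantity in the abstract above concrete: the bounds in question are driven by how far training points lie from the separating hyperplane, normalized by the weight norm. A minimal illustrative Python sketch (the data and the classifier here are invented placeholders, not the maximal-margin solution the report analyzes):

import numpy as np

# Hypothetical labelled sample: rows of X with labels y in {-1, +1}.
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, 0.5]])
y = np.array([1, 1, -1, -1])

# A linear classifier h(x) = sign(w.x + b); w and b are placeholders.
w = np.array([1.0, 0.5])
b = 0.0

# Geometric margin of each example: positive iff correctly classified;
# its magnitude is the distance to the decision boundary.
margins = y * (X @ w + b) / np.linalg.norm(w)
print(margins)
print("minimum margin on the sample:", margins.min())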
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-054:
----------------------------------------
Confidence Estimates of Classification Accuracy on New Examples
by John Shawe-Taylor, Royal Holloway, University of London, UK

Abstract: Following recent results (NeuroCOLT Technical Report NC-TR-96-053) showing the importance of the fat shattering dimension in explaining the beneficial effect of a large margin on generalization performance, the current paper investigates how the margin on a test example can be used to give greater certainty of correct classification in the distribution independent model. The results show that even if the classifier does not classify all of the training examples correctly, the fact that a new example has a larger margin than that on the misclassified examples can be used to give very good estimates of the generalization performance in terms of the fat shattering dimension measured at a scale proportional to the excess margin. The estimate relies on a sufficiently large number of the correctly classified training examples having a margin roughly equal to that used to estimate generalization, indicating that the corresponding output values need to be `well sampled'.

--------------------------------------------------------------------

***************** ACCESS INSTRUCTIONS ******************

The Report NC-TR-96-001 can be accessed and printed as follows:

% ftp ftp.dcs.rhbnc.ac.uk  (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-96-001.ps.Z
ftp> bye
% zcat nc-tr-96-001.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example,

nc-tr-96-002-title.ps.Z
nc-tr-96-002-body.ps.Z

The first contains the title page while the second contains the body of the report. The single command

ftp> mget nc-tr-96-002*

will prompt you for the files you require.

A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:

http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

or directly from the archive:

ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports

Best wishes
John Shawe-Taylor

From b0616 at nibh.go.jp Mon Oct 7 06:43:41 1996
From: b0616 at nibh.go.jp (Akio Utsugi)
Date: Mon, 07 Oct 96 19:43:41 +0900
Subject: Papers and Java demo available
Message-ID: <9610071043.AA01367@ipsychob.nibh.go.jp>

The following preprints can be found at http://www.aist.go.jp/NIBH/~b0616/research.html

Hyperparameter Selection for Self-Organizing Maps
A. Utsugi
To appear in Neural Computation, vol. 9, no. 2.

Abstract: The self-organizing map (SOM) algorithm for finite data is derived as an approximate MAP estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior, which is equivalent to a generalized deformable model (GDM). For this model, objective criteria for selecting hyperparameters are obtained on the basis of empirical Bayesian estimation and cross-validation, which are representative model selection methods. The properties of these criteria are compared by simulation experiments.
These experiments show that the cross-validation methods favor more complex structures than are supported by the expected log likelihood, a measure of compatibility between a model and the data distribution. On the other hand, the empirical Bayesian methods have the opposite bias. Topology Selection for Self-Organizing Maps A. Utsugi To appear in Network: Computation in Neural Systems, vol. 7, no. 4. Abstract: A topology-selection method for self-organizing maps (SOMs) based on empirical Bayesian inference is presented. This method is a natural extension of the hyperparameter-selection method presented earlier, in which the SOM algorithm is regarded as an estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior on the centroid parameters, and optimal hyperparameters are obtained by maximizing their evidence. In the present paper, comparisons between models with different topologies are made possible by further specifying the prior of the centroid parameters with an additional hyperparameter. In addition, a fast hyperparameter-search algorithm using the derivatives of evidence is presented. The validity of the methods presented is confirmed by simulation experiments. In addition, I have made a demonstration program for the above theory using a Java applet, which is accessible via a WWW browser. --- Akio Utsugi National Institute of Bioscience and Human-Technology  From kasif at osprey.cs.jhu.edu Thu Oct 3 17:53:26 1996 From: kasif at osprey.cs.jhu.edu (Dr. Simon Kasif) Date: Thu, 3 Oct 1996 17:53:26 -0400 (EDT) Subject: AAAI Fall Symposium: LEARNING COMPLEX BEHAVIORS IN ADAPTIVE INTELLIGENT SYSTEMS Message-ID: <199610032153.RAA23022@osprey.cs.jhu.edu> AAAI 1996 FALL SYMPOSIUM LEARNING COMPLEX BEHAVIORS IN ADAPTIVE INTELLIGENT SYSTEMS November 9--11, 1996 Additional registration information and a copy of the registration forms can be found at http://www.aaai.org/Symposia/Fall/1996/ Program Committee: Simon Kasif (co-chair), University of Illinois-Chicago/Johns Hopkins Univ. Stuart Russell (co-chair), University of California, Berkeley Robert C. Berwick, Massachusetts Institute of Technology Tom Dean, Brown University Russell Greiner, Siemens Corporate Research Michael Jordan, Massachusetts Institute of Technology Leslie Kaelbling, Brown University Daphne Koller, Stanford University Andy Moore, Carnegie Mellon University Dan Roth, Weizmann Institute of Science and Harvard University. ABSTRACT The symposium will consist of invited talks, submitted papers, and panel discussions focusing on practical algorithms and theoretical frameworks that support learning to perform complex behaviors and cognitive tasks. These include tasks such as reasoning and planning with uncertainty, perception, natural language processing and large-scale industrial applications. The underlying theme is the automated construction and improvement of complete intelligent agents, which is closer in spirit to the goals of AI than learning simple classifiers. We expect to have an interdisciplinary meeting with participation of researchers from AI, Neural Networks, Machine Learning, Uncertainty in AI and Knowledge Representation. Some of the key issues we plan to address are: - Development of new theoretical frameworks for analysis of broader learning tasks such as learning to reason, learning to act, and reinforcement learning. - Scalability of learning systems such as reinforcement learning. - Learning complex language tasks. - Research on agents that learn to behave ``rationally'' in complex environments.
- Learning and reasoning with complex representations. - Generating new benchmarks and devising a methodological framework for studying empirical scalability of algorithms that learn complex behaviors. - Empirical and theoretical analysis of the scalability of different representations and learning methods. TENTATIVE PROGRAM **********SCHEDULE SUBJECT TO CHANGE********** Saturday, November 9, 1996 Morning 9:00--10:30am Opening Remarks Simon Kasif (UIC and Johns Hopkins University) and Stuart Russell (Berkeley) An Engineering Architecture for Intelligent Systems (45 min) Jim Albus (NIST) 10:00--10:40am, Session I: Reinforcement Learning Temporal Abstraction in Model-Based Reinforcement Learning (20 min) R. Sutton (U. Mass) Hierarchical Reinforcement Learning (20 min) F. Kirchner (GMD) 11:00am--12:30pm, Session II: Reinforcement Learning (cont.) Why Did TD-Gammon Work? (20 min) J. Pollack and A. Blair (Brandeis U.) Learning Task Relevant State Spaces with a Utile Distinction Test (20 min) A. McCallum (U. Rochester) Policy Based Clustering for Markov Decision Problems (20 min) R. Parr (Berkeley) Optimality Criteria in Reinforcement Learning (20 min) S. Mahadevan (USF) Discussion (10 min) AFTERNOON 2:00--3:30pm Session III: Learning and Knowledge Representation Learning to Reason (20 min) D. Roth (Harvard U. and Weizmann Institute) Learning the Parameters of First Order Probabilistic Rules (20 min) D. Koller and A. Pfeffer (Stanford) Learning Knowledge and Structure (20 min) J. Pearl Learning Independence Structure (20 min) E. Ristad (Princeton) Discussion (10 min) 4:00--5:30pm Session IV: Learning Complex Behaviors A Survey of Positive Results on Automata Learning (20 min) K. Lang (NEC) Learning to Plan (20 min) E. Baum (NEC) World Modelling: Learning Knowledge Representation (50 min) S. Russell, J. Albus, A. Moore, E. Baum, M. Jordan Sunday, November 10, 1996 Morning 9:00--10:30am, Session V: Learning and Knowledge Representation A Neuroidal Architecture for Knowledge Representations (45 min) Les Valiant (Harvard) Learning to be Competent (20 min) R. Khardon (Harvard) Concept Learning for Geometric Reasoning (20 min) E. Sacks (Purdue) Discussion (10 min) 11:00am--12:30pm, Session VI: Learning Principles in Natural Language Some Advances in Transformation-Based Part-of-Speech Tagging (20 min) E. Brill (JHU) Explaining Language Change: Complex Consequences of Simple Learning Algorithms (20 min) P. Niyogi (MIT and Lucent) and Robert Berwick (MIT) Learning the Lexical Semantics of Spatial Motion Verbs from Camera Input (20 min) J. Siskind (Technion) Computational Learning Theory for Natural Language: (20 min) From wsenn at iam.unibe.ch Tue Oct 8 04:02:16 1996 From: wsenn at iam.unibe.ch (Walter Senn) Date: Tue, 8 Oct 1996 10:02:16 +0200 Subject: paper available: Size principle and Information theory Message-ID: <9610080802.AA12239@barney.unibe.ch> The following paper (to appear in Biol. Cybern.) is now available via anonymous ftp: ----------------------------------------------------------------------------- Size Principle and Information Theory ===================================== by W. Senn, K. Wyler, H.P. Clamann, J. Kleinle, H.-R. Luescher, L. Mueller Abstract: The several hundred motor units of a single skeletal muscle may be recruited according to different strategies. From all possible recruitment strategies nature selected the simplest one: in most actions of a vertebrate skeletal muscle the recruitment of its motor units is by increasing size.
This so-called Size Principle permits a high precision in muscle force generation since small muscle forces are produced exclusively by small motor units. Larger motor units are only activated if the total muscle force has already reached certain critical levels. We show that this recruitment by size is not only optimal in precision but also optimal in an information theoretical sense. We consider the motoneuron pool as an encoder generating a parallel binary code from a common input, e.g. from the CNS, to that pool. The parallel motoneuron code is sent further down through the motoneuron axons to the muscle. We show that the optimization of this parallel motoneuron code with respect to its information content is equivalent to the recruitment of motor units by size. Moreover, a maximal information content of the motoneuron code is equivalent to a minimal expected error in muscle force generation. ------------------------------------------------------------------------------- Retrieval procedure: unix> ftp iamftp.unibe.ch Name: anonymous Password: {your e-mail address} ftp> cd pub/braintool/publications ftp> get InfoTheory.ps.gz ftp> quit unix> gunzip InfoTheory.ps.gz e.g. unix> lpr InfoTheory.ps Or just have a look at the www-site http://iamwww.unibe.ch:80/~brainwww/publications/  From kblackw1 at osf1.gmu.edu Tue Oct 8 08:33:05 1996 From: kblackw1 at osf1.gmu.edu (KIM L. BLACKWELL) Date: Tue, 8 Oct 1996 08:33:05 -0400 (EDT) Subject: post-doc and grad student announcement Message-ID: TWO FELLOWSHIPS AVAILABLE AT GEORGE MASON UNIVERSITY Postdoctoral Research Fellowship Predoctoral Research Fellowship Applications are invited for one postdoctoral fellowship and one predoctoral fellowship in the area of development of self-organizing pattern recognition algorithms based on biological information processing in visual and IT cortex. The aims of the project are (1) to develop neurobiologically plausible algorithms of visual pattern recognition which are computationally efficient and robust, and (2) to compare the performance of the resulting algorithms with human performance in order to develop hypotheses about information processing in the brain. Evaluation of algorithms is performed using real world problems (e.g., face recognition and optical character recognition), and by comparison to human observer pattern recognition performance. Both positions are for one year, available immediately, with possible renewal for an additional three years. We are seeking a postdoctoral candidate with a background in both neurobiology and computer science (UNIX and C or C++). Working knowledge of information theory or mathematical statistics is highly desirable but not required. The successful applicant will be responsible for performing publishable research and for supervision of at least one graduate student. The initial stipend is $30,000/year plus fringe benefits. We are seeking a predoctoral candidate with a background in computer science / engineering, who is interested in learning neurobiology. The initial stipend is $12,000/year plus tuition. Our decade-old group currently consists of Drs. T.P. Vogl and K.T. Blackwell, and two graduate students, all of whom are actively involved in ongoing collaboration among neuroscientists (electrophysiologists) at NINDS/NIH, and engineers / computer scientists at GMU and the Environmental Research Institute of Michigan (ERIM), a not-for-profit research company formerly a component of the University of Michigan.
The goal of our group is to develop effective and efficient pattern recognition algorithms by reverse engineering relevant brain functions. Research activities encompass computational neurobiology, artificial neural networks, and visual psychophysics. Further information about our research and publications may be found on Dr. Thomas Vogl's homepage: http://mbti.gmu.edu/FACULTY.html To apply for this position, send your curriculum vitae and letters of reference (in ASCII or MIME attached PostScript formats only) to Prof. Avrama Blackwell, email: kblackw1 at osf1.gmu.edu. snail-mail to: George Mason University Dept. of Computational Sciences and Informatics 4400 University Drive Fairfax, VA 22030-4444  From atick at monaco.rockefeller.edu Tue Oct 8 11:41:51 1996 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Tue, 8 Oct 1996 11:41:51 -0400 Subject: Table of Contents for latest issue of Network:CNS Message-ID: <9610081141.ZM12352@monaco.rockefeller.edu> Network: Computation In Neural Systems Volume 7, No 3 Table of Contents TOPICAL REVIEW 439 Auditory cortical representation of complex acoustic spectra as inferred from the ripple analysis method S A Shamma PAPERS 477 Local feature analysis: a general statistical theory for object representation P S Penev and J J Atick 501 Principal component neurons in a realistic visual environment H Shouval and Y Liu 517 Retrieval properties of attractor neural networks that obey Dale's law using a self-consistent signal-to-noise analysis A N Burkitt 533 Divergence measures based on entropy families: a tool for guiding the growth of neural networks H M A Andree, A W Lodder and A Taal 555 A phenomenological approach to salient maps and illusory contours Zhiyong Yang and Songde Ma 573 Bit error probability of an associative memory with many-to-many correspondence T Tanaka Articles in the next issue of Network: Computation in Neural Systems will include: A search for the optimal thresholding sequence in an associative memory H Hirase and M Recce (University College London) Neural model of visual stereomatching: slant, transparency and clouds J A Marshall, G J Kalarickal and E B Graves (University of North Carolina) A coupled attractor model of the rodent head direction system A D Redish, A N Elga and D S Touretzky (Carnegie Mellon University) A single spike suffices: the simplest form of stochastic resonance in model neurons M Stemmler (California Institute of Technology) Topology selection for self-organizing maps A Utsugi (National Institute of Bioscience and Human-Technology, Japan) For those of you who have institutional subscriptions, check out the online version of the journal at http://www.iop.org/Journals/ne. -- Joseph J. Atick Rockefeller University 1230 York Avenue New York, NY 10021 Tel: 212 327 7421 Fax: 212 327 7422  From b0616 at nibh.go.jp Wed Oct 9 00:33:36 1996 From: b0616 at nibh.go.jp (Akio Utsugi) Date: Wed, 9 Oct 96 00:33:36 JST Subject: Papers and Java demo available Message-ID: <9610081533.AA02768@ipsychob.nibh.go.jp> Yesterday, I announced the availability of preprints for two papers: `Hyperparameter Selection for Self-Organizing Maps' and `Topology Selection for Self-Organizing Maps'. However, some people notified me that the postscript files could not be viewed with a postscript viewer. I then found an error in the conversion of their sources to postscript.
I have now fixed the error and put the new files at the same location: http://www.aist.go.jp/NIBH/~b0616/research.html In addition, I have put the same compressed postscript files on an anonymous ftp site: ftp://ripsport.aist.go.jp/nibh/b0616/ I am very sorry. --- Akio Utsugi National Institute of Bioscience and Human-Technology  From ecm at skew2.kellogg.nwu.edu Tue Oct 8 15:16:16 1996 From: ecm at skew2.kellogg.nwu.edu (ecm@skew2.kellogg.nwu.edu) Date: Tue, 8 Oct 1996 14:16:16 -0500 (CDT) Subject: nonlinear principal components analysis Message-ID: <199610081916.OAA08444@skew2.kellogg.nwu.edu> Technical Report Available Some Theoretical Results on Nonlinear Principal Components Analysis Edward C. Malthouse Northwestern University ecm at nwu.edu Postscript file available via ftp from mkt2715.kellogg.nwu.edu in pub/ecm/nlpca.ps A B S T R A C T Nonlinear principal components analysis (NLPCA) neural networks are feedforward autoassociative networks with five layers. The third layer has fewer nodes than the input or output layers. NLPCA has been shown to give better solutions to several feature extraction problems than existing methods, but very little is known about the theoretical properties of this method or its estimates. This paper studies NLPCA. It proposes a geometric interpretation by showing that NLPCA fits a lower-dimensional curve or surface through the training data. The first three layers project observations onto the curve or surface, giving scores. The last three layers define the curve or surface. The first three layers are a continuous function, which I show has several implications: NLPCA ``projections'' are suboptimal, producing larger approximation error; NLPCA is unable to model curves and surfaces that intersect themselves; and NLPCA cannot parameterize curves with parameterizations having discontinuous jumps. I establish results on the identification of score values and discuss their implications for interpreting score values. I discuss the relationship between NLPCA and principal curves and surfaces, another nonlinear feature extraction method. Keywords: nonlinear principal components analysis, feature extraction, data compression, principal curves, principal surfaces.  From rich at cs.umass.edu Wed Oct 9 01:56:07 1996 From: rich at cs.umass.edu (Rich Sutton) Date: Wed, 9 Oct 1996 00:56:07 -0500 Subject: A proposed standard interface for RL software Message-ID: To Reinforcement Learning Researchers: We are sending this note to announce a proposed standard interface for reinforcement learning research software. The objectives of this first step towards standardization are fairly modest. The standard covers only the top-level interface between the RL agent and its environment. The idea is that you might program an agent and I might program an environment, and as long as we both follow the standard interface it should be trivial to connect them to each other. This should make it easier for people to swap and interconnect novel agents and environments. In the long run, we hope that this will lead to a library of RL environments and agents that can be used as testbeds for RL research of all kinds. We have completed standard interfaces for C++ and Common Lisp. Documentation, interface code, and several examples are complete and available via the web, starting at http://www-anw.cs.umass.edu/People/sutton/RLinterface/RLinterface.html. Feedback and further contributions encouraged.
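As a rough illustration of the kind of agent-environment contract such a standard defines, a minimal pairing might look like the Python sketch below. The class and method names here are invented for illustration only; the documentation at the URL above defines the actual proposed interface.

# Illustrative sketch, not the proposed standard: any agent and
# environment that honor this small contract can be connected freely.
class Environment:
    """A toy two-state environment."""
    def start(self):
        self.state = 0
        return self.state                      # initial sensation

    def step(self, action):
        reward = 1.0 if action == self.state else 0.0
        self.state = 1 - self.state
        return self.state, reward              # next sensation and reward

class Agent:
    """A trivial agent that echoes the sensation back as its action."""
    def start(self, sensation):
        return sensation                       # first action

    def step(self, sensation, reward):
        return sensation                       # subsequent actions

def run(agent, env, steps):
    """Connects any agent to any environment via the common interface."""
    sensation = env.start()
    action = agent.start(sensation)
    total = 0.0
    for _ in range(steps):
        sensation, reward = env.step(action)
        total += reward
        action = agent.step(sensation, reward)
    return total

print(run(Agent(), Environment(), 10))         # prints 10.0

The point of such a contract is that the run loop never needs to know which agent or which environment it is driving, which is exactly what makes swapping testbeds trivial.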
Rich Sutton rich at cs.umass.edu Juan Carlos Santamaria carlos at cc.gatech.edu  From marco at idsia.ch Wed Oct 9 11:26:48 1996 From: marco at idsia.ch (Marco Wiering) Date: Wed, 9 Oct 96 16:26:48 +0100 Subject: new papers Message-ID: <9610091526.AA04183@fava.idsia.ch> HQ-LEARNING: DISCOVERING MARKOVIAN SUBGOALS FOR NON-MARKOVIAN REINFORCEMENT LEARNING Marco Wiering Juergen Schmidhuber Technical Report IDSIA-95-96, 13 pages 108K To solve partially observable Markov decision problems, we introduce HQ-learning, a hierarchical extension of Q-learning. HQ-learning is based on an ordered sequence of subagents, each learning to identify and solve a Markovian subtask of the total task. Each agent learns (1) an appropriate subgoal (though there is no intermediate, external reinforcement for good subgoals), and (2) a Markovian policy, given a particular subgoal. Our experiments demonstrate: (a) The system can easily solve tasks standard Q-learning cannot solve at all. (b) It can solve partially observable mazes with more states than those used in most previous POMDP work. (c) It can quickly solve complex tasks that require manipulation of the environment to free a blocked path to the goal. ------------------------------------------- Also available: THE NEURAL HEAT EXCHANGER ("invited talk" ICONIP'96) An alternative learning method for multi-layer neural nets inspired by the physical heat exchanger. Unlike backprop, it is truly local. It has been presented in occasional talks since 1990, and is closely related to Hinton et al.'s recent Helmholtz Machine (1995). FTP-host: ftp.idsia.ch FTP-files: /pub/marco/hq96.ps.gz /pub/juergen/hq96.ps.gz /pub/juergen/heat.ps.gz WWW: http://www.idsia.ch/~marco/publications.html http://www.idsia.ch/~juergen/onlinepub.html Comments welcome! Marco Wiering & Juergen Schmidhuber IDSIA  From penev at venezia.rockefeller.edu Thu Oct 10 10:12:50 1996 From: penev at venezia.rockefeller.edu (Penio Penev) Date: Thu, 10 Oct 1996 10:12:50 -0400 (EDT) Subject: paper available: Local Feature Analysis Message-ID: The following paper just appeared: Network: Computation in Neural Systems 7(3), 477-500, 1996 Local Feature Analysis: A General Statistical Theory for Object Representation. Penio S. Penev and Joseph J. Atick Low-dimensional representations of sensory signals are key to solving many of the computational problems encountered in high-level vision. Principal Component Analysis has been used in the past to derive practically useful compact representations for different classes of objects. One major objection to the applicability of PCA is that it invariably leads to global, nontopographic representations that are not amenable to further processing and are not biologically plausible. In this paper we present a new mathematical construction---Local Feature Analysis (LFA)---for deriving local topographic representations for any class of objects. The LFA representations are sparse-distributed and, hence, are effectively low-dimensional and retain all the advantages of the compact representations of the PCA. But unlike the global eigenmodes, they give a description of objects in terms of statistically derived local features and their positions. We illustrate the theory by using it to extract local features for three ensembles---2D images of faces without background, 3D surfaces of human heads, and finally 2D faces on a background. The resulting local representations have powerful applications in head segmentation and face recognition.
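The construction starts, as the abstract says, from the PCA eigenmodes of the ensemble. The numpy sketch below shows one plausible reading of the whitening step that turns global eigenmodes into local, topographic feature kernels; the variable names and random stand-in data are mine, and the sparsification that makes the representation effectively low-dimensional is omitted, so consult the paper for the actual construction.

import numpy as np

# Stand-in ensemble: 200 "images" of 1024 pixels each (random here).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 1024))
X = X - X.mean(axis=0)                 # center the ensemble

# PCA eigenmodes of the ensemble covariance (global, nontopographic).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
lam = s**2 / X.shape[0]                # covariance eigenvalues
psi = Vt                               # eigenmodes, one per row

r = 50                                 # number of modes retained
# Whitened kernel built from the retained eigenmodes: row y of K acts
# as a feature detector indexed by (and localized around) pixel y,
# unlike the global eigenmodes themselves.
K = psi[:r].T @ np.diag(1.0 / np.sqrt(lam[:r])) @ psi[:r]

outputs = X @ K                        # one local-feature map per image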
For those having on-line access to the electronic version of the journal, the paper can be retrieved from http://www.iop.org/EJ/welcome There is also a version with LaTeX fonts and USA spelling at our web and ftp site venezia.rockefeller.edu. The file is best printed on a 600 dpi printer because it contains grayscale images. FTP-host: venezia.rockefeller.edu FTP-filename: group/papers/full/LFA/PenevPS.LFA.ps -- 600 dpi FTP-filename: group/papers/full/LFA/PenevPS.LFA.300.ps -- 300 dpi ftp://venezia.rockefeller.edu/group/papers/full/LFA/PenevPS.LFA.ps ftp://venezia.rockefeller.edu/group/papers/full/LFA/PenevPS.LFA.300.ps -- Penio Penev 1-212-327-7423  From efiesler at idiap.ch Thu Oct 10 10:45:17 1996 From: efiesler at idiap.ch (E. Fiesler) Date: Thu, 10 Oct 1996 16:45:17 +0200 (MET DST) Subject: The Handbook of Neural Computation. Message-ID: <199610101445.QAA00353@catogne.idiap.ch> ----------------------------- PLEASE POST --------------------------------- Announcing the H A N D B O O K O F N E U R A L C O M P U T A T I O N ___________________________________________________________ The first of three volumes in the Computational Intelligence Library http://www.oup-usa.org/acadref/compint.html http://www.oup-usa.org/acadref/honc.html ___________________________________________ The Handbook of Neural Computation is now available for purchase from Oxford University Press and Institute of Physics Publishing. This major new resource for the neural computing community offers a wealth of information on neural network fundamentals, models, hardware and software implementations, and applications. The handbook includes many detailed case studies describing successful applications of artificial neural networks in application areas such as perception and cognition, engineering, physical sciences, biology and biochemistry, medicine, economics, finance and business, computer science, and the arts and humanities. One of the unique features of this handbook is that it has been designed to remain up to date: as neural network models, implementations, and applications continue to develop, the handbook will keep pace by publishing new articles and revisions to existing articles. The print edition of the handbook consists of 1,100 A4-size pages published in loose-leaf format, which will be updated by means of supplements published every six months. The electronic edition, to be launched in January 1997 but now available for advance purchase, includes the complete content of the handbook on CD-ROM, plus integrated access to the latest version of the handbook's content on the World Wide Web. Hence the handbook combines inherent updatability with the latest modes of distribution. The Handbook of Neural Computation is itself part of a larger project called the Computational Intelligence Library, which includes companion handbooks in evolutionary and fuzzy computation. Print Edition: October 1996. 9x12 inches (230x305mm). Four-post binder expands to accommodate supplements. 1,096 pages, 400 illustrations, ISBN 0-7503-0312-3. Electronic Edition: January 1997. CD-ROM plus World Wide Web Access. ISBN 0-7503-0411-1.
Further information, including details of a special introductory price offer valid until the end of 1996, may be obtained at: http://www.oup-usa.org/acadref/honc.html and http://www.oup-usa.org/acadref/compint.html or by sending e-mail or regular mail to: Peter Titus Oxford University Press 198 Madison Avenue New York, NY 10016-4314 Fax: (1) 212-726-6442 E-mail: pkt at oup-usa.org TABLE OF CONTENTS Preface Russell Beale and Emile Fiesler Foreword James A Anderson How to Use This Handbook PART A INTRODUCTION A1 Neural Computation: The Background A1.1 The historical background J G Taylor A1.2 The biological and psychological background Michael A Arbib A2 Why Neural Networks? Paul J Werbos A2.1 Summary A2.2 What is a neural network? A2.3 A traditional roadmap of artificial neural network capabilities PART B FUNDAMENTAL CONCEPTS OF NEURAL COMPUTATION B1 The Artificial Neuron Michael A Arbib B1.1 Neurons and neural networks: the most abstract view B1.2 The McCulloch-Pitts neuron B1.3 Hopfield networks B1.4 The leaky integrator neuron B1.5 Pattern recognition B1.6 A note on nonlinearity and continuity B1.7 Variations on a theme B2 Neural Network Topologies Emile Fiesler B2.1 Introduction B2.2 Topology B2.3 Symmetry and asymmetry B2.4 High order topologies B2.5 Fully connected topologies B2.6 Partially connected topologies B2.7 Special topologies B2.8 A formal framework B2.9 Modular topologies Massimo de Francesco B2.10 Theoretical considerations for choosing a network topology Maxwell B Stinchcombe B3 Neural Network Training James L Noyes B3.1 Introduction B3.2 Characteristics of neural network models B3.3 Learning rules B3.4 Acceleration of training B3.5 Training and generalization B4 Data Input and Output Representations Thomas O Jackson B4.1 Introduction B4.2 Data complexity and separability B4.3 The necessity of preserving feature information B4.4 Data preprocessing techniques B4.5 A 'case study' review B4.6 Data representation properties B4.7 Coding schemes B4.8 Discrete codings B4.9 Continuous codings B4.10 Complex representation issues B4.11 Conclusions B5 Network Analysis Techniques B5.1 Introduction Russell Beale B5.2 Iterative inversion of neural networks and its applications Alexander Linden B5.3 Designing analyzable networks Stephen P Luttrell B6 Neural Networks: A Pattern Recognition Perspective Christopher M Bishop B6.1 Introduction B6.2 Classification and regression B6.3 Error functions B6.4 Generalization B6.5 Discussion PART C NEURAL NETWORK MODELS C1 Supervised Models C1.1 Single-layer networks George M Georgiou C1.2 Multilayer perceptrons Luis B Almeida C1.3 Associative memory networks Mohamad H Hassoun and Paul B Watta C1.4 Stochastic neural networks Harold Szu and Masud Cader C1.5 Weightless and other memory-based networks Igor Aleksander and Helen B Morton C1.6 Supervised composite networks Christian Jutten C1.7 Supervised ontogenic networks Emile Fiesler and Krzysztof J Cios C1.8 Adaptive logic networks William W Armstrong and Monroe M Thomas C2 Unsupervised Models C2.1 Feedforward models Michel Verleysen C2.2 Feedback models Gail A Carpenter (C2.2.1), Stephen Grossberg (C2.2.1, C2.2.3), and Peggy Israel Doerschuk (C2.2.2) C2.3 Unsupervised composite networks Cris Koutsougeras C2.4 Unsupervised ontogenetic networks Bernd Fritzke C3 Reinforcement Learning S Sathiya Keerthi and B Ravindran C3.1 Introduction C3.2 Immediate reinforcement learning C3.3 Delayed reinforcement learning C3.4 Methods of estimating V and Q C3.5 Delayed reinforcement learning methods C3.6 Use of 
neural and other function approximators in reinforcement learning C3.7 Modular and hierarchical architectures PART D HYBRID APPROACHES D1 Neuro-Fuzzy Systems Krzysztof J Cios and Witold Pedrycz D1.1 Introduction D1.2 Fuzzy sets and knowledge representation issues D1.3 Neuro-fuzzy algorithms D1.4 Ontogenic neuro-fuzzy F-CID3 algorithm D1.5 Fuzzy neural networks D1.6 Referential logic-based neurons D1.7 Classes of fuzzy neural networks D1.8 Induced Boolean and core neural networks D2 Neural-Evolutionary Systems V William Porto D2.1 Overview of evolutionary computation as a mechanism for solving neural system design problems D2.2 Evolutionary computation approaches to solving problems in neural computation D2.3 New areas for evolutionary computation research in neural systems PART E NEURAL NETWORK IMPLEMENTATIONS E1 Neural Network Hardware Implementations E1.1 Introduction Timothy S Axelrod E1.2 Neural network adaptations to hardware implementations Perry D Moerland and Emile Fiesler E1.3 Analog VLSI implementation of neural networks Eric A Vittoz E1.4 Digital integrated circuit implementations Valeriu Beiu E1.5 Optical implementations I Saxena and Paul G Horan PART F APPLICATIONS OF NEURAL COMPUTATION F1 Neural Network Applications F1.1 Introduction Gary Lawrence Murphy F1.2 Pattern classification Thierry Denoeux F1.3 Combinatorial optimization Soheil Shams F1.4 Associative memory James Austin F1.5 Data compression Andrea Basso F1.6 Image processing John Fulcher F1.7 Speech processing Kari Torkkola F1.8 Signal processing Shawn P Day F1.9 Control Paul J Werbos PART G NEURAL NETWORKS IN PRACTICE: CASE STUDIES G1 Perception and Cognition G1.1 Unsupervised segmentation of textured images Nigel M Allinson and Hu Jun Yin G1.2 Character recognition John Fulcher G1.3 Handwritten character recognition using neural networks Thomas M Breuel G1.4 Improved speech recognition using learning vector quantization Kari Torkkola G1.5 Neural networks for alphabet recognition Mark Fanty, Etienne Barnard and Ron Cole G1.6 A neural network for image understanding Heggere S Ranganath, Govindaraj Kuntimad and John L Johnson G1.7 The application of neural networks to image segmentation and way-point identification James Austin G2 Engineering G2.1 Control of a vehicle active suspension model using adaptive logic networks William W Armstrong and Monroe M Thomas G2.2 ATM network control by neural network Atsushi Hiramatsu G2.3 Neural networks to configure maps for a satellite communication network Nirwan Ansari G2.4 Neural network controller for a high-speed packet switch M Mehmet Ali and Huu Tri Nguyen G2.5 Neural networks for optimal robot trajectory planning Dan Simon G2.6 Radial basis function network in design and manufacturing of ceramics Krzysztof J Cios, George Y Baaklini, Laszlo Berke and Alex Vary G2.7 Adaptive control of a negative ion source Stanley K Brown, William C Mead, P Stuart Bowling and Roger D Jones G2.8 Dynamic process modeling and fault prediction using artificial neural networks Barry Lennox and Gary A Montague G2.9 Neural modeling of a polymerization reactor Gordon Lightbody and George W Irwin G2.10 Adaptive noise canceling with nonlinear filters Wolfgang Knecht G2.11 A concise application demonstrator for pulsed neural VLSI Alan F Murray and Geoffrey B Jackson G2.12 Ontogenic CID3 algorithm for recognition of defects in glass ribbon Krzysztof J Cios G3 Physical Sciences G3.1 Neural networks for control of telescope adaptive optics T K Barrett and D G Sandler G3.2 Neural multigrid for
disordered systems: lattice gauge theory as an example Martin B\"aker, Gerhard Mack and Marcus Speh G3.3 Characterization of chaotic signals using fast learning neural networks Shawn D Pethel and Charles M Bowden G4 Biology and Biochemistry G4.1 A neural network for prediction of protein secondary structure Burkhard Rost G4.2 Neural networks for identification of protein coding regions in genomic DNA sequences E E Snyder and Gary D Stormo G4.3 A neural network classifier for chromosome analysis Jim Graham G4.4 A neural network for recognizing distantly related protein sequences Dmitrij Frishman and Patrick Argos G5 Medicine G5.1 Adaptive logic networks in rehabilitation of persons with incomplete spinal cord injury Aleksandar Kostov, William W Armstrong, Monroe M Thomas and Richard B Stein G5.2 Neural networks for diagnosis of myocardial disease Hiroshi Fujita G5.3 Neural networks for intracardiac electrogram recognition Marwan A Jabri G5.4 A neural network to predict lifespan and new metastases in patients with renal cell cancer Craig Niederberger, Susan Pursell and Richard M Golden G5.5 Hopfield neural networks for the optimum segmentation of medical images Riccardo Poli and Guido Valli G5.6 A neural network for the evaluation of hemodynamic variables Tom Pike and Robert A Mustard G6 Economics, Finance and Business G6.1 Application of self-organizing maps to the analysis of economic situations F Blayo G6.2 Forecasting customer response with neural networks David Bounds and Duncan Ross G6.3 Neural networks for financial applications Magali E Azema-Barac and A N Refenes G6.4 Valuations of residential properties using a neural network Gary Grudnitski G7 Computer Science G7.1 Neural networks and human-computer interaction Alan J Dix and Janet E Finlay G8 Arts and Humanities G8.1 Distinguishing literary styles using neural networks Robert A J Matthews and Thomas V N Merriam G8.2 Neural networks for archaeological provenancing John Fulcher PART H THE NEURAL NETWORK RESEARCH COMMUNITY H1 Future Research in Neural Computation H1.1 Mathematical theories of neural networks Shun-ichi Amari H1.2 Neural networks: natural, artificial, hybrid H John Caulfield H1.3 The future of neural networks J G Taylor H1.4 Directions for future research in neural networks James A Anderson List of Contributors Index __________________________________________________________________________ Emile Fiesler, Editor-in-Chief of the Handbook of Neural Computation Research Director IDIAP E-mail: HoNC at IDIAP.CH C.P. 592 CH-1920 Martigny WWW-URL: http://www.idiap.ch/nn.html Switzerland ftp ftp.idiap.ch:/pub/papers/neural/README __________________________________________________________________________  From terry at salk.edu Thu Oct 10 12:07:32 1996 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 10 Oct 1996 09:07:32 -0700 (PDT) Subject: NEURAL COMPUTATION 8:8 Message-ID: <199610101607.JAA06289@helmholtz.salk.edu> Neural Computation - Contents Volume 8, Number 8 - November 15, 1996 Article Synchronized Action of Synaptically Coupled Chaotic Model Neurons Henry D. I. Abarbanel, R. Huerta, M. I. Rabinovich, N. F. Rulkov, P. F. Rowat and A. I. Selverston Note On the Capacity of Threshold Adalines with Limited-Precision Weights Maryhelen Stevenson and Shaheedul Huq Letter Binocular Receptive Field Models, Disparity Tuning, and Characteristic Disparity Yu-Dong Zhu and Ning Qian Response Characteristics of a Low-Dimensional Model Neuron Bo Cartling What Matters in Neuronal Locking? Wulfram Gerstner, J.
Leo van Hemmen and Jack D. Cowan Hebbian Learning of Context in Recurrent Neural Networks Nicolas Brunel Neural Correlation via Random Connections Joshua Chover Singular Perturbation Analysis of Competitive Neural Networks with Different Time-Scales Anke Meyer-Base, Frank Ohl and Henning Scheich How Dependencies between Successive Examples Affect On-Line Learning Wim Wiegerinck and Tom Heskes Autonomous Design of Artificial Neural Networks by Neurex Francois Michaud and Ruben Gonzalez Rubio ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $250 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-8 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).) mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 -----  From harnad at cogsci.soton.ac.uk Thu Oct 10 11:47:43 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Thu, 10 Oct 96 16:47:43 +0100 Subject: Cortical Computation: BBS Call for Commentators Message-ID: <9468.9610101547@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming BBS target article on: IN SEARCH OF COMMON FOUNDATIONS FOR CORTICAL COMPUTATION by W.A. Phillips and W. Singer This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html http://www.cogsci.soton.ac.uk/bbs ftp://ftp.princeton.edu/pub/harnad/BBS ftp://ftp.cogsci.soton.ac.uk/pub/bbs gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ IN SEARCH OF COMMON FOUNDATIONS FOR CORTICAL COMPUTATION W.A. Phillips and W. Singer Center for Cognitive and Computational Neuroscience, Departments of Psychology and Computing Science, University of Stirling, FK9 4LA, Scotland, UK. wap1 at forth.stir.ac.uk Max Planck Institute for Brain Research, Deutschordenstrasse 46, Postfach 71 06 62, D-60496 Frankfurt/Main, Germany.
singer at mpih-frankfurt.mpg.d400.de KEYWORDS: Cell assemblies; cerebral cortex; coordination; context; dynamic binding; functional specialization; learning; neural coding; neural computation; neuropsychology; reading; object recognition; perception; self-organization; synaptic plasticity; synchronization. ABSTRACT: This research concerns forms of coding, processing and learning that are common to many different cortical regions and cognitive functions. Local cortical processors may coordinate their activity by maximizing the transmission of information that is coherently related to the context in which it occurs, thereby forming synchronized population codes. In this coordination, contextual field (CF) connections link processors within and between cortical regions. The effects of CF connections are distinct from those mediating receptive field (RF) input. CFs can guide both learning and processing without becoming confused with RF information. Simulations explore the capabilities of networks built from local processors with both RF and CF connections. Physiological evidence for CFs, synchronization, and plasticity in RF and CF connections is described. Coordination via CFs is related to perceptual grouping, the effects of context on contrast sensitivity, amblyopia, implicit influences of color in achromatopsia, object and word perception, and the discovery of distal environmental variables and their interactions through self-organization. In cortical computation there may occur a flexible evaluation of relations between input signals by locally specialized but adaptive processors whose activity is dynamically associated and coordinated within and between regions through specialized contextual connections. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.phillips). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.phillips.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.phillips ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.phillips gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.phillips When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you.
To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From elman at crl.ucsd.edu Mon Oct 7 22:54:07 1996 From: elman at crl.ucsd.edu (Jeff Elman) Date: Mon, 7 Oct 96 19:54:07 PDT Subject: new book announcement: Rethinking Innateness Message-ID: <9610080254.AA25940@crl.ucsd.edu> RETHINKING INNATENESS A Connectionist Perspective on Development by Jeffrey L. Elman, Elizabeth A. Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, Kim Plunkett "Rethinking Innateness is a milestone as important as the appearance ten years ago of the PDP books. More integrated in its structure, more biological in its approach, this book provides a new theoretical framework for cognition that is based on dynamics, growth, and learning. Study this book if you are interested in how minds emerge from developing brains." Terrence J. Sejnowski Professor, Salk Institute for Biological Studies Rethinking Innateness asks the question, "What does it really mean to say that a behavior is innate?" The authors describe a new framework in which interactions, occurring at all levels, give rise to emergent forms and behaviors. These outcomes often may be highly constrained and universal, yet are not themselves directly contained in the genes in any domain-specific way. One of the key contributions of Rethinking Innateness is a taxonomy of ways in which a behavior can be innate. These include constraints at the level of representation, architecture, and timing; typically, behaviors arise through the interaction of constraints at several of these levels. The ideas are explored through dynamic models inspired by a new kind of "developmental connectionism," a marriage of connectionist models and developmental neurobiology, forming a new theoretical framework for the study of behavioral development. While relying heavily on the conceptual and computational tools provided by connectionism, Rethinking Innateness also identifies ways in which these tools need to be enriched by closer attention to biology. Neural Networks and Connectionist Modeling series A Bradford Book November 1996 ISBN 0-262-05052-8 475 pp. $45.00 (cloth) MIT Press WWW page, with ordering information: http://www-mitpress.mit.edu:80/mitp/recent-books/cog/elmrh.html  From dimitrib at MIT.EDU Sat Oct 12 01:46:39 1996 From: dimitrib at MIT.EDU (Dimitri Bertsekas) Date: Sat, 12 Oct 96 00:46:39 EST Subject: New book on Neuro-Dynamic Programming/Reinforcement Learning Message-ID: <9610120445.AA27142@MIT.MIT.EDU> Dear colleagues, our Neuro-Dynamic Programming book has just been published, and we are attaching a description. Dimitri Bertsekas (dimitrib at mit.edu) John Tsitsiklis (jnt at mit.edu) ******************************************************************** NEURO-DYNAMIC PROGRAMMING by Dimitri P. Bertsekas and John N. Tsitsiklis Massachusetts Institute of Technology (512 pages, hardcover, ISBN:1-886529-10-8, $79.00) published by Athena Scientific, Belmont, MA http://world.std.com/~athenasc/ Neuro-Dynamic Programming (NDP for short) is a recent class of reinforcement learning methods that can be used to solve very large and complex dynamic optimization problems. NDP combines simulation, learning, neural networks or other approximation architectures, and the central ideas in dynamic programming. 
It provides a rigorous framework for addressing challenging and often intractable problems from a broad variety of fields. This book gives the first systematic presentation of the science and the art behind this far-reaching methodology. Among its special features, the book: ----------------------------------------------------------------------- ** Describes and unifies a large number of reinforcement learning methods, including several that are new ** Rigorously explains the mathematical principles behind NDP ** Describes new approaches to formulation and approximate solution of problems in stochastic optimal control, sequential decision making, and discrete optimization ** Illustrates through examples and case studies the practical application of NDP to complex problems from resource allocation, data communications, game playing, and combinatorial optimization ** Presents extensive background and new research material on dynamic programming and neural network training ----------------------------------------------------------------------- CONTENTS 1. Introduction 1.1. Cost-to-go Approximations in Dynamic Programming 1.2. Approximation Architectures 1.3. Simulation and Training 1.4. Neuro-Dynamic Programming 2. Dynamic Programming 2.1. Introduction 2.2. Stochastic Shortest Path Problems 2.3. Discounted Problems 2.4. Problem Formulation and Examples 3. Neural Network Architectures and Training 3.1. Architectures for Approximation 3.2. Neural Network Training 4. Stochastic Iterative Algorithms 4.1. The Basic Model 4.2. Convergence Based on a Smooth Potential Function 4.3. Convergence under Contraction or Monotonicity Assumptions 4.4. The ODE Approach 5. Simulation Methods for a Lookup Table Representation 5.1. Some Aspects of Monte Carlo Simulation 5.2. Policy Evaluation by Monte Carlo Simulation 5.3. Temporal Difference Methods 5.4. Optimistic Policy Iteration 5.5. Simulation-Based Value Iteration 5.6. Q-Learning 6. Approximate DP with Cost-to-Go Function Approximation 6.1. Generic Issues -- From Parameters to Policies 6.2. Approximate Policy Iteration 6.3. Approximate Policy Evaluation Using TD(lambda) 6.4. Optimistic Policy Iteration 6.5. Approximate Value Iteration 6.6. Q-Learning and Advantage Updating 6.7. Value Iteration with State Aggregation 6.8. Euclidean Contractions and Optimal Stopping 6.9. Value Iteration with Representative States 6.10. Bellman Error Methods 6.11. Continuous States and the Slope of the Cost-to-Go 6.12. Approximate Linear Programming 6.13. Overview 7. Extensions 7.1. Average Cost per Stage Problems 7.2. Dynamic Games 7.3. Parallel Computation Issues 8. Case Studies 8.1. Parking 8.2. Football 8.3. Tetris 8.4. Combinatorial Optimization -- Maintenance and Repair 8.5. Dynamic Channel Allocation 8.6. Backgammon Appendix A: Mathematical Review Appendix B: On Probability Theory and Markov Chains ******************************************************************** PREFACE: http://world.std.com/~athenasc/ ******************************************************************** PUBLISHER'S INFORMATION: Athena Scientific, P.O. Box 391, Belmont, MA, 02178-9998, U.S.A.
Email: athenasc at world.std.com, Tel: (617) 489-3097, FAX: (617) 489-2017 WWW Site for Info and Ordering: http://world.std.com/~athenasc/ ********************************************************************  From biehl at physik.uni-wuerzburg.de Mon Oct 14 07:02:29 1996 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Mon, 14 Oct 1996 13:02:29 +0200 (MESZ) Subject: paper available: Noise Robustness in Multilayer Neural Networks Message-ID: <199610141102.NAA18288@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1996/WUE-ITP-96-022.ps.gz The following manuscript is now available via anonymous ftp: (See below for the retrieval procedure) ------------------------------------------------------------------ "Noise Robustness in Multilayer Neural Networks" M. Copelli, R. Eichhorn, O. Kinouchi, M. Biehl, R. Simonetti, P. Riegler, and N. Caticha Ref. WUE-ITP-96-022 Abstract The training of multilayered neural networks in the presence of different types of noise is studied. We consider the learning of realizable rules in nonoverlapping architectures. Achieving optimal generalization depends on knowledge of the noise level; however, its misestimation may lead to partial or complete loss of the generalization ability. We demonstrate this effect in the framework of online learning and present the results in terms of noise robustness phase diagrams. While for additive (weight) noise the robustness properties depend on the architecture and size of the networks, this is not so for multiplicative (output) noise. In this case we find a universal behaviour independent of the machine size for both the tree parity and committee machines. --------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1996 ftp> get WUE-ITP-96-022.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-96-022.ps.gz e.g. unix> lp WUE-ITP-96-022.ps [8 pages] (*) can be replaced by "get WUE-ITP-96-022.ps". The file will then be uncompressed before transmission (slow!). _____________________________________________________________________ -- Michael Biehl Institut fuer Theoretische Physik Julius-Maximilians-Universitaet Wuerzburg Am Hubland D-97074 Wuerzburg email: biehl at physik.uni-wuerzburg.de homepage: http://www.physik.uni-wuerzburg.de/~biehl Tel.: (+49) (0)931 888 5865 " " " 5131 Fax : (+49) (0)931 888 5141  From dwang at cis.ohio-state.edu Mon Oct 14 09:43:47 1996 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Mon, 14 Oct 1996 09:43:47 -0400 Subject: Special Issue on Radial Basis Functions Networks Message-ID: <199610141343.JAA28477@sarajevo.cis.ohio-state.edu> Call for Papers Special Issue on Radial Basis Functions Networks for the journal NEUROCOMPUTING (http://www.elsevier.nl/locate/neucom) -------------------------------------------------------------------- Radial Basis Function (RBF) networks are among the most popular neural network architectures, owing to their fast learning and broad range of applicability. This special issue is dedicated to recent advances in all aspects of this type of architecture, including but not restricted to: * theoretical contributions, * learning methods: supervised, unsupervised, on-line, etc., * architectural enhancements, * applications in signal processing, vision, control, etc., * comparative assessments to other neural network architectures.
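As background for readers new to the architecture: the "fast learning" reputation usually refers to the fact that, once the basis centers and widths are fixed, only the output weights remain and they solve a single linear least-squares problem. A minimal sketch (my own illustration, not part of the call; centers here are simply a subset of the training data):

# Minimal Gaussian RBF network fit by linear least squares.
import numpy as np

def rbf_design(X, centers, width):
    # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 * width^2))
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = np.sin(X[:, 0])                          # target function

centers = X[::10]                            # 10 centers taken from the data
Phi = rbf_design(X, centers, width=1.0)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # one linear solve: "fast learning"

pred = rbf_design(X, centers, 1.0) @ w
print(np.abs(pred - y).max())                # fit error on the training points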
Submit 6 copies of your manuscript (including keywords, biosketch of all authors, email address of corresponding author) to: V. David Sanchez A. Neurocomputing - Editor in Chief - Nova Southeastern University School of Computer and Information Sciences 3100 SW 9th Avenue Fort Lauderdale, FL 33315 U.S.A. Fax +1 (954) 723-2744 Email dsanchez at scis.nova.edu Deadline: 15 December 1996 --------------------------------------------------------------------  From ted at SPENCER.CTAN.YALE.EDU Mon Oct 14 14:05:17 1996 From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU) Date: Mon, 14 Oct 1996 18:05:17 GMT Subject: The Electrotonic Workbench Message-ID: <199610141805.SAA13101@PLANCK.CTAN.YALE.EDU> For those who are concerned with how electrical signals spread in biological neurons, it may be of interest to learn that Michael Hines's simulation program NEURON can now compute the electrotonic transform and display the results as neuromorphic or Log A vs. x renderings at the click of a button. An abstract that presents the basic concepts and illustrates this new feature is at http://www.neuron.yale.edu/papers/ebench/ebench.html Further information about this powerful suite of analytical tools will be presented at the Society for Neuroscience Meeting in Washington, DC on Wednesday, Nov. 20, 1996 at 1 PM (Carnevale, N.T., Tsai, K.Y., and Hines, M.L. The Electrotonic Workbench. Society for Neuroscience Abstracts 22:1741, 1996, abstract 687.1) This and other references of interest to those who are concerned with biologically realistic neural computation are located at http://www.neuron.yale.edu/papers/nrnrefs.html --Ted  From anderson at CS.ColoState.EDU Mon Oct 14 14:36:41 1996 From: anderson at CS.ColoState.EDU (Chuck Anderson) Date: Mon, 14 Oct 1996 12:36:41 -0600 (MDT) Subject: research assistantship for Spring, 97 Message-ID: <199610141836.MAA03873@clapton.cs.colostate.edu> The Department of Computer Science at Colorado State University, Fort Collins, CO, is looking for a Masters or Ph.D. student to fill a vacant research assistantship position funded by this NSF project: National Science Foundation, MIP-9628770, 8/96--7/99, PIs: T. Chen, Electrical Engineering, and A. von Mayrhauser and C. Anderson, Computer Science, "Behavioral Level Design Verifications Using Software Testing Techniques and Neural Networks" We plan to use network inversion and methods from optimal experiment design to guide the search for test inputs for complex software systems and hardware models coded in VHDL. We would like a student with a strong background in neural networks, experience with compiler design, and an interest in software and hardware testing. You can learn more about our department at http://www.cs.colostate.edu and about the PIs' research projects at http://www.lance.colostate.edu/depts/ee/Profiles/chen.html http://www.cs.colostate.edu/casi/avm/ http://www.cs.colostate.edu/~anderson To qualify for this position, you must be accepted into the Computer Science graduate program at CSU. You may obtain application material by sending a request via e-mail to gradinfo at cs.colostate.edu. You may also send your resume or questions via e-mail to anderson at cs.colostate.edu.
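Network inversion, mentioned in the project description above, is commonly done iteratively (cf. the chapter on iterative inversion by Alexander Linden in the handbook announced earlier in this digest): hold the trained weights fixed and run gradient descent on the input until the network output reaches a desired target. The toy network and all names below are invented for illustration; this is a generic sketch, not the project's actual method.

# Iterative network inversion: freeze the weights, adjust the input.
# Toy 1-2-1 tanh network with made-up fixed weights.
import numpy as np

W1 = np.array([[1.5, -2.0]]); b1 = np.array([0.1, 0.3])
W2 = np.array([[1.0], [1.0]]); b2 = np.array([-0.2])

def forward(x):
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2, h

target = np.array([0.7])
x = np.array([0.0])                     # initial guess for the input

for _ in range(500):
    y, h = forward(x)
    err = y - target                    # d(loss)/dy for half squared error
    dh = (err @ W2.T) * (1.0 - h ** 2)  # back through the tanh layer
    dx = dh @ W1.T                      # gradient with respect to the input
    x = x - 0.1 * dx                    # move the input, not the weights

print(x, forward(x)[0])                 # an input driving the output near 0.7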
Chuck Anderson anderson at cs.colostate.edu Department of Computer Science http://www.cs.colostate.edu/~anderson Colorado State University office: 970-491-7491 Fort Collins, CO 80523-1873 FAX: 970-491-2466  From zador at salk.edu Tue Oct 15 02:18:19 1996 From: zador at salk.edu (Tony Zador) Date: Mon, 14 Oct 1996 23:18:19 -0700 (PDT) Subject: The Electrotonic Workbench Message-ID: <199610150618.XAA29858@helmholtz.salk.edu> NEURON is an excellent tool for computing the morphoelectrotonic transform (MET). However, for those who would prefer to use Mathematica, a toolkit is available at http://mrb.niddk.nih.gov/hagai/publ/Zador95/Abstract.html Several digitized cortical pyramidal neurons are also available at this site. The full published description of the MET can be found in: Zador AM; Agmon-Snir H; Segev I. The morphoelectrotonic transform: a graphical approach to dendritic function. Journal of Neuroscience, 1995 Mar, 15(3 Pt 1):1669-82. The Mathematica MET Toolkit was used to generate all the figures in this paper. ______________________________ Tony Zador Salk Institute MNL/S http://www.sloan.salk.edu/~zador ______________________________  From phkywong at uxmail.ust.hk Tue Oct 15 07:12:36 1996 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Tue, 15 Oct 1996 19:12:36 +0800 Subject: Paper available Message-ID: <96Oct15.191239+0800_hkt.102345-24300+615@uxmail.ust.hk> The following paper, to be orally presented at NIPS'96, is now available via anonymous FTP. (7 pages) ============================================================================ FTP-host: physics.ust.hk FTP-files: pub/kymwong/rough.ps.gz Microscopic Equations in Rough Energy Landscape for Neural Networks K. Y. Michael Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phkywong at usthk.ust.hk ABSTRACT We consider the microscopic equations for learning problems in neural networks. The aligning fields of an example are obtained from the cavity fields, which are the fields that would be present if that example were absent from the learning process. In a rough energy landscape, we assume that the density of the metastable states obeys an exponential distribution, yielding macroscopic properties agreeing with the first step replica symmetry breaking solution. Iterating the microscopic equations provides a learning algorithm, which results in higher stability than conventional algorithms. ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get rough.ps.gz ftp> quit unix> gunzip rough.ps.gz unix> lpr rough.ps (or ghostview rough.ps)  From tgd at chert.CS.ORST.EDU Thu Oct 17 01:16:19 1996 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Wed, 16 Oct 96 22:16:19 PDT Subject: Paper available: Statistical Tests for Comparing Supervised Classification Learning Algorithms Message-ID: <9610170516.AA03346@edison.CS.ORST.EDU> The following paper is available. **Hardcopies are not available** Statistical Tests for Comparing Supervised Classification Learning Algorithms Thomas G. Dietterich Department of Computer Science Oregon State University Corvallis, OR 97331 Abstract: This paper reviews five statistical tests for determining whether one learning algorithm out-performs another on a particular learning task.
These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (Type I error). Two widely-used statistical tests are shown to have high probability of Type I error in certain situations and should never be used. These tests are (a) a test for the difference of two proportions and (b) a paired-differences $t$ test based on taking several random train/test splits. A third test, a paired-differences $t$ test based on 10-fold cross-validation, exhibits somewhat elevated probability of Type I error. A fourth test, McNemar's test, is shown to have low Type I error. The fifth test is a new test, 5x2cv, based on 5 iterations of 2-fold cross-validation. Experiments show that this test also has good Type I error. The paper also measures the power (ability to detect algorithm differences when they do exist) of these tests. The 5x2cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable Type I error. For algorithms that can be executed ten times, the 5x2cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set. -- Thomas G. Dietterich Voice: 541-737-5559 Department of Computer Science FAX: 541-737-3014 Dearborn Hall, 303 URL: http://www.cs.orst.edu/~tgd Oregon State University Corvallis, OR 97331-3102  From l.s.smith at cs.stir.ac.uk Thu Oct 17 08:18:36 1996 From: l.s.smith at cs.stir.ac.uk (Dr L S Smith (Staff)) Date: Thu, 17 Oct 1996 13:18:36 +0100 (BST) Subject: CFP: 1st European Workshop on Neuromorphic Systems Message-ID: <19961017T121836Z.NAA01062@katrine.cs.stir.ac.uk> EWNS: 1st European Workshop on Neuromorphic Systems 29-31 August 1997, University of Stirling, Stirling, Scotland First Call for Papers Organisers: Centre for Cognitive and Computational Neuroscience University of Stirling, Scotland and Department of Electrical Engineering University of Edinburgh, Scotland Neuromorphic systems are implementations in silicon of sensory and neural systems whose architecture and design are based on neurobiology. This growing area offers exciting possibilities such as sensory systems which can compete with human senses and pattern recognition systems that can run in real-time. The area is at the intersection of many disciplines: neurophysiology, computer science and electrical engineering. Papers are requested in the following areas: Design issues in sensorineural neuromorphic systems: auditory, visual, olfactory, proprioception, sensorimotor systems. Designs for silicon implementations of neural systems. Papers not exceeding 8 A4 pages are requested: these should be sent to Dr. Leslie Smith Department of Computing Science University of Stirling Stirling FK9 4LA Scotland email: lss at cs.stir.ac.uk FAX (44) 1786 464551 We also propose to hold a number of discussion sessions on some of the topics above. Short position papers (less than 4 pages) are requested.
Key Dates Submission Deadline: Mon 7th April 1997 Notification of Acceptance: June 2nd 1997 Further information is on the WWW at http://www.cs.stir.ac.uk/~lss/Neuromorphic/Info1.html  From pci-inc at aub.mindspring.com Sun Oct 20 12:13:36 1996 From: pci-inc at aub.mindspring.com (Mary Lou Padgett) Date: Sun, 20 Oct 1996 12:13:36 -0400 Subject: AMARI: WCNN Announcement Message-ID: <2.2.16.19961020161336.65e77652@pop.aub.mindspring.com> AMARI: WCNN Announcement PRESIDENTIAL ANNOUNCEMENT OF ACTION TAKEN BY INNS BOARD OF GOVERNORS AT WCNN'96 IN SAN DIEGO, SEPTEMBER 17, 1996: In the last few years, many members of INNS have expressed dissatisfaction with the existence of two major, competing neural network meetings in North America. A number of months ago, IEEE and INNS began informal discussions about the possibility of reinstituting cooperation. This week, we had further contacts with IEEE, and the INNS Board Members have made a strong decision to proceed to reinstitute the tradition of joint meetings, perhaps as early as 1997. There are certain details which need to be worked out, and decisions which need to be approved. In the spirit of cooperation, the INNS Board has elected to replace the planned 1997 INNS meeting in Boston by supporting the meeting in Houston next June, led this time by IEEE, and by offering strong INNS technical involvement. WE REMIND YOU THE PAPER SUBMISSION DEADLINE FOR THE HOUSTON MEETING IS SET AT NOVEMBER 1 (NOV. 15 FOR INNS MEMBERS ONLY). We trust that we will later be able to follow a pattern of alternating the lead roles, as we did with the IJCNNs in the past. We urge all of you to plan to join with us in Houston, to help support this effort to reunite the neural network community. SHUN-ICHI AMARI Note: See the new web page for the JOINT MEETING for 1997. http://www.mindspring.com/~pci-inc/ICNN97 Send papers to Dan Levine, Co-Program Chair. Mary Lou Padgett 1165 Owens Road Auburn, AL 36830 P: (334) 821-2472 F: (334) 821-3488 m.padgett at ieee.org Auburn University, EE Dept. Padgett Computer Innovations, Inc. (PCI) Simulation, VI, Seminars IEEE Standards Board -- Virtual Intelligence (VI): NN, FZ, EC, VR  From gcv at di.ufpe.br Mon Oct 21 08:51:41 1996 From: gcv at di.ufpe.br (gcv@di.ufpe.br) Date: Mon, 21 Oct 1996 10:51:41 -0200 Subject: CFP: JBCS on Neural Networks Message-ID: <199610211251.KAA21156@caruaru> Journal of The Brazilian Computer Society (JBCS) CALL FOR PAPERS Special Issue on NEURAL NETWORKS (Tentative Publication Date, July, 1997) Guest Editors: Edson de Barros Carvalho Filho, DI-UFPE and Germano Vasconcelos, DI-UFPE The Journal of the Brazilian Computer Society (JBCS) is an international quarterly publication of the Sociedade Brasileira de Computação (SBC) which serves as a forum for disseminating innovative research in all aspects of Computer Science. Neural networks have been widely used in a large variety of problems in Computer Science and in other scientific disciplines, making this subject one of the most attractive current fields of investigation. In its 11th edition, marking the third Brazilian Symposium on Neural Networks, sponsored by SBC, the JBCS is planning a Special Issue on Neural Networks and welcomes worldwide submissions describing original ideas and new results on this topic. Papers may be practical or theoretical in nature.
Suggested topics include but are not limited to: * Theoretical Models * Algorithms and Architectures * Biological Perspectives * Cognitive Science * Hybrid Systems * Neural Networks and Fuzzy Systems * Neural Networks and Genetic Algorithms * Pattern Recognition * Control and Robotics * Optimization * Hardware Implementation * Environments & Tools * Prediction * Vision and Image Processing * Speech and Language Processing * Other Applications The purpose of this special edition is to allow fast publication of relevant and original research within six months after paper submission. INSTRUCTIONS TO AUTHORS Contributions will be considered for publication in JBCS if they have not been previously published and are not under consideration for publication elsewhere. Acceptance of papers for publication is subject to a peer review procedure and is conditional on revisions being made in response to referees' comments. Format details for the final submission procedure will be provided for accepted papers. Authors must submit the final version in electronic format, and should provide hard-copy versions for refereeing. Submitted papers are to be written in English and typed double-spaced on one side of white A4 sized paper. Each paper should contain no more than 20 pages, including all text, figures and references. The final manuscript should be approximately 8000 words in length. Submissions will be judged on significance, originality, quality and clarity. Reviewing will be blind to the identities of the authors, so the authors should take care not to identify themselves in the paper: * The submitted manuscript should contain only the paper title and a short abstract. Authors' names, affiliations, and the complete mailing address (both postal and email) of the person to whom correspondence should be sent, should be included in an accompanying letter. * No acknowledgment should be included in the version for refereeing (it can be included in the final version of the paper). * There should be no reference to unpublished work by the authors (thesis, working papers). These references can be included in the final version of the paper. * When referring to one's own work, use the third person. For example, say "previously, [Peter1993] has shown that ...", instead of "the author [Peter1993] has shown that ..." All contributions will be acknowledged and refereed. SUBMISSION PROCEDURE Please submit 4 copies of the paper to the Special Issue Editor: Germano Crispim Vasconcelos Departamento de Informatica Universidade Federal de Pernambuco Caixa Postal 7851 50732-970, Recife - PE Brazil email: gcv at di.ufpe.br fax: +55 81 2718438 IMPORTANT DATES Submission Deadline January 20, 1997 (PAPERS MUST BE RECEIVED BY THIS DATE - FIRM DEADLINE) Notification of Acceptance March 20, 1997 Final Electronic Version April 20, 1997 Tentative Publication Date July, 1997 For additional information on the Journal, and on how to prepare the manuscript to minimize final version delays, contact the editors or consult the webpage http://www.dcc.unicamp.br/~jbcs/cameraready.html  From wray at ultimode.com Sun Oct 20 09:58:06 1996 From: wray at ultimode.com (Wray Buntine) Date: Sun, 20 Oct 1996 13:58:06 GMT Subject: tutorial slides available on graphical models, and on priors Message-ID: <199610201358.NAA17902@ultimode.com> The following slides were prepared for the NATO Workshop on Learning in Graphical Models, just held in Erice, Italy, Sept. 1996.
Actually, these are *revised* from the Erice workshop, so those in attendance might like to update too. They are available over WWW but not yet available via FTP. You'll find them at my web site: http://WWW.Ultimode.com/~wray/refs.html#tutes Also, please note my new location and email address, given at the end. The graphical models and exponential family talk contains an introduction to lots of learning algorithms using graphical models. Included is an analysis with proofs of the much-hyped mean field algorithm in its general case for the exponential family (as you might have guessed, mean field is simple once you strip away the physics), and lots more. This talk also presents Gibbs, EM, k-means, and deterministic annealing as I believe they should be taught: as variants of one another. Computation with the Exponential Family and Graphical Models ============================================================ This tutorial plays two roles: to illustrate how graphical models can be used to present models and algorithms for data analysis, and to present computational methods based on the Exponential Family, a central concept for computational data analysis. The Exponential Family is the most important family of probability distributions. It includes the Gaussian, the binomial, the Poisson, and others. It has unique computational properties: all fast algorithms for data analysis, to my knowledge, have some version of the exponential family at their core. Every student of data analysis, regardless of their discipline (computer science, neural nets, pattern recognition, etc.) should therefore understand the Exponential Family and the key algorithms based on it. This tutorial presents the Exponential Family and algorithms using graphical models: Bayesian networks and Markov networks (directed and undirected graphs). These graphical models represent independence and therefore neatly display many of the essential details of the algorithms and models based around the exponential family. Algorithms discussed are the Expectation-Maximization (EM) algorithm, Gibbs sampling, k-means, deterministic annealing, Scoring, Iterative Reweighted Least Squares (IRLS), Mean Field, and Iterative Proportional Fitting (IPF). Connections between these different algorithms are given, and the general formulations presented, in most cases, are readily adapted to arbitrary Exponential Family distributions. The priors tutorial was a *major* revision from my previous version. Those with the older version should update! Prior Probabilities =================== Prior probabilities are at the center of most of the old controversies surrounding Bayesian statistics. While the Bayesian/Classical distinctions in statistics are becoming blurred, priors remain a problem, largely because of a lack of good tutorial material and the unfortunate residue of previous misunderstandings. Methods for developing and assessing priors are now routinely used by experienced practitioners. This tutorial will review some of the issues, presenting a view that incorporates decision theory and multi-agent reasoning. First, some perspectives are given: applications, theory, parameters and models, and the role of the decision being made. Then, basic principles are presented: Jaynes' Principle of Invariance is a generalization of Laplace's Principle of Indifference that allows a specification of ignorance to be converted into a prior.
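(A quick added illustration, not from the tutorial itself: for a scale parameter $\sigma > 0$, requiring that the expression of ignorance look the same under a change of units $\sigma \to c\sigma$ forces

\[ p(\sigma)\,d\sigma \;=\; p(c\sigma)\,d(c\sigma) \quad \text{for all } c > 0 \;\;\Longrightarrow\;\; p(\sigma) \propto 1/\sigma , \]

the familiar scale-invariant prior.)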
A prior for non-linear regression is developed, and the important role of a "measure", over-fitting, and priors on multinomials are presented. Issues such as subjectivity versus objectivity, Occam's razor, various paradoxes, maximum entropy methods, and the so-called non-informative & reference priors are also presented. A bibliography is included. Wray Buntine ============ Consultant to industry and NASA, and Visiting Scientist at EECS, UC Berkeley working on probabilistic methods in computer-aided design of ICs with Dr. Andy Mayer and Prof. Richard Newton. Ultimode Systems, LLC Phone: (415) 324 3447 555 Bryant Str. #186 Email: wray at ultimode.com Palo Alto, 94301 http://WWW.Ultimode.com/~wray/  From krista at nucleus.hut.fi Mon Oct 21 03:39:53 1996 From: krista at nucleus.hut.fi (Krista Lagus) Date: Mon, 21 Oct 1996 10:39:53 +0300 (EET DST) Subject: (1st CFP) WSOM - Workshop on Self-Organizing Maps Message-ID: CALL FOR PAPERS W O R K S H O P O N S E L F - O R G A N I Z I N G M A P S Helsinki University of Technology, Finland June 4-6, 1997 The Self-Organizing Map (SOM) with its variations is the most popular artificial neural network algorithm in the unsupervised learning category. Over 2000 applications have been reported in the open literature, and more and more industrial projects are using the SOM as a tool for solving hard real-world problems. The WORKSHOP ON SELF-ORGANIZING MAPS (WSOM) is the first international meeting to be solely dedicated to the theory and applications of the SOM. People from universities, research institutes, industry, and commerce are invited to join the Workshop and share their views and expertise on the use of the SOM. WORKSHOP The workshop will consist of a tutorial short course on SOM given by prof. Teuvo Kohonen, an opening plenary talk given by prof. Helge Ritter, presentations on various aspects of the SOM given by internationally known experts, and technical contributions. Also poster presentations will be arranged. SCOPE The scope of the Workshop is the Self-Organizing Map with its variants, including but not limited to - theory and analysis; - engineering applications like pattern recognition, process control, and telecommunications; - data analysis and financial applications; - information retrieval and natural language processing applications; - implementations. SUBMISSION Prospective authors are invited to submit papers on any aspect of SOM, including the areas listed above. The paper submission deadline will be March 1, 1997. Detailed information about the submission procedure, as well as registration, accommodation, etc. will soon be available on the Web page http://nucleus.hut.fi/wsom/ CO-OPERATING SOCIETIES This is a satellite workshop of the 10th Scandinavian Conference on Image Analysis (SCIA) to be held on June 9 to 11 in Lappeenranta, Finland, arranged by the Pattern Recognition Society of Finland. Other co-operating societies are the European Neural Network Society (ENNS), IEEE Finland Section, and the Finnish Artificial Intelligence Society. Teuvo Kohonen, WSOM Chairman Erkki Oja, Program Chairman Olli Simula, Organization Chairman  From dwang at cis.ohio-state.edu Mon Oct 21 11:25:32 1996 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Mon, 21 Oct 1996 11:25:32 -0400 Subject: Tech report on Double Vowel Segregation Message-ID: <199610211525.LAA15163@sarajevo.cis.ohio-state.edu> A new technical report is available by FTP: MODELLING THE PERCEPTUAL SEGREGATION OF DOUBLE VOWELS WITH A NETWORK OF NEURAL OSCILLATORS Guy J. 
Brown (1) and DeLiang Wang (2) (1) Department of Computer Science, University of Sheffield, 211 Portobello Street, Sheffield S8 0ET, UK Email: guy at dcs.shef.ac.uk (2) Laboratory for AI Research, Department of Computer Science and Information Science and Center for Cognitive Science, The Ohio State University, Columbus, OH 43210-1277, USA Email: dwang at cis.ohio-state.edu ABSTRACT The ability of listeners to identify two simultaneously presented vowels can be improved by introducing a difference in fundamental frequency (F0) between the vowels. We propose an explanation for this phenomenon in the form of a computational model of concurrent sound segregation, which is motivated by neurophysiological evidence of oscillatory firing activity in the auditory cortex and thalamus. More specifically, the model represents the perceptual grouping of auditory frequency channels as synchronised (phase-locked with zero phase lag) oscillations in a neural network. Computer simulations on a vowel set used in psychophysical studies confirm that the model qualitatively matches the performance of human listeners; vowel identification performance increases with increasing difference in F0. Additionally, the model is able to replicate other findings relating to the perception of harmonic complexes in which one component is mistuned. OBTAINING THE REPORT BY FTP The report is available by anonymous FTP from the site ftp.dcs.shef.ac.uk (enter the word "anonymous" when you are asked for a login name). Then enter: cd /share/spandh/pubs/brown followed by get bw-report96.ps.Z The file is 2.9 MB of compressed postscript. If you have trouble downloading or viewing the file, or if you would like a paper copy to be sent to you, please email Guy Brown (guy at dcs.shef.ac.uk)  From cabestan at eel.upc.es Mon Oct 21 05:27:26 1996 From: cabestan at eel.upc.es (Joan Cabestany) Date: Mon, 21 Oct 1996 10:27:26 +0100 Subject: IWANN'97 final announcement and Call for Papers Message-ID: <199610210826.KAA12078@petrus.upc.es> Dear colleague, please find herewith the Call for Papers and final Announcement of IWANN'97 (International Work-Conference on Artificial and Natural Neural Networks) to be held in Lanzarote - Canary Islands (Spain) next June 4-6, 1997. If you need more details, feel free to contact me: cabestan at eel.upc.es I am sorry if you receive this message from several distribution lists. Yours, Joan Cabestany **************************************************************************** *** IWANN'97 - Final Call for Papers INTERNATIONAL WORK-CONFERENCE ON ARTIFICIAL AND NATURAL NEURAL NETWORKS Biological and Artificial Architectures, Technologies and Applications Contact URL http://petrus.upc.es/iwann97.html Lanzarote - Canary Islands, Spain June 4-6, 1997 ORGANIZED BY Universidad Nacional de Educacion a Distancia (UNED), Madrid Universidad de Las Palmas de Gran Canaria Universidad Politecnica de Catalunya Universidad de Malaga Universidad de Granada IN COOPERATION WITH Asociación Española de Redes Neuronales (AERN) IFIP Working Group in Neural Computer Systems, WG10.6 Spanish RIG IEEE Neural Networks Council UK&RI Communication Chapter of IEEE IWANN'97, the fourth International Workshop on Artificial Neural Networks, now changed to the International Work-Conference on Artificial and Natural Neural Networks, will take place in Lanzarote, Canary Islands (Spain) from 4 to 6 June, 1997.
This biennial meeting, with a focus on biologically inspired, more realistic models of natural neurons and neural nets and on new hybrid computing paradigms, was first held in Granada (1991), then Sitges (1993) and Torremolinos, Malaga (1995), with a growing number of participants from more than 20 countries and with high-quality papers published by Springer-Verlag (LNCS 540, 686 and 930). SCOPE Neural computation is considered here in the dual perspective of analysis (as science) and synthesis (as engineering). As a science of analysis, neural computation seeks to help neurology, brain theory, and cognitive psychology in the understanding of the functioning of nervous systems by means of computational models of neurons, neural nets and subcellular processes, with the possibility of using electronics and computers as a "laboratory" in which cognitive processes can be simulated and hypotheses tested without having to act directly upon living beings. As an engineering synthesis, neural computation seeks to complement the symbolic perspective of Artificial Intelligence (AI), using the biologically inspired models of distributed, self-programming and self-organizing networks, to solve those non-algorithmic problems of function approximation and pattern classification having to do with changing and only partially known environments. Fault tolerance and dynamic reconfiguration are other basic advantages of neural nets. In the sea of meetings, congresses and workshops on ANNs, IWANN'97 focuses on the three subjects that most concern us: (1) the search for new biologically inspired models of local computation architectures and learning, along with the organizational principles behind the complexity of intelligent behavior; (2) the search for methodological contributions to the analysis and design of knowledge-based ANNs, instead of "blind nets", and to the reduction of the knowledge level to the sub-symbolic implementation level; (3) cooperation with symbolic AI, integrating connectionist and symbolic processing in hybrid and multi-strategy approaches for perception, decision and control tasks, as well as for case-based reasoning, concept formation and learning. To contribute to the posing and partial solving of these global topics, IWANN'97 offers a brainstorming interdisciplinary forum in advanced neural computation for scientists and engineers from biology, neuroanatomy, computational neurophysiology, molecular biology, biophysics, linguistics, psychology, mathematics and physics, computer science, artificial intelligence, parallel computing, analog and digital electronics, advanced computer architectures, reverse engineering, cognitive sciences and all the concerned applied domains (sensory systems and signal processing, monitoring, diagnosis, classification and decision making, intelligent control and supervision, perceptual robotics and communication systems). Contributions on the following and related topics are welcome. TOPICS 1. Biological Foundations of Neural Computation: Principles of brain organization. Neuroanatomy and Neurophysiology of synapses, dendro-dendritic contacts, neurons and neural nets in peripheral and central areas. Plasticity, learning and memory in natural neural nets. Models of development and evolution. The computational perspective in Neuroscience. 2. Formal Tools and Computational Models of Neurons and Neural Nets Architectures: Analytic and logic models. Object-oriented formulations.
Hybrid knowledge representation and inference tools (rules and frames with analytic slots). Probabilistic, Bayesian and fuzzy models. Energy related models. 3. Plasticity Phenomena (Maturing, Learning and Memory): Biological mechanisms of learning and memory. Computational formulations using correlational, reinforcement and minimization strategies. Conditioned reflex and associative mechanisms. Inductive-deductive and abductive symbolic-subsymbolic formulations. Generalization. 4. Complex Systems Dynamics: Self-organization, cooperative processes, autopoiesis, emergent computation, synergetics, evolutive optimization and genetic algorithms. Self-reproducing nets. Self-organizing feature maps. Simulated evolution. Social organization phenomena. 5. Cognitive Science and AI: Hybrid knowledge-based systems. Neural networks for knowledge modeling, acquisition and refinement. Natural language understanding. Concept formation. Spatial and temporal planning and scheduling. Intentionality. 6. Neural Nets Simulation, Emulation and Implementation: Environments and languages. Parallelization, modularity and autonomy. New hardware implementation strategies (FPGA's, VLSI, neurodevices). Evolutive architectures. Real systems validation and evaluation. 7. Methodology for Data Analysis, Task Selection and Nets Design. 8. Neural Networks for Perception: Biologically inspired preprocessing. Low level processing, source separation, sensor fusion, segmentation, feature extraction, adaptive filtering, noise reduction, texture, stereo correspondence, motion analysis, speech recognition, artificial vision, and hybrid architectures for multisensorial perception. 9. Neural Networks for Communications Systems: Modems and codecs, network management, digital communications. 10. Neural Networks for Control and Robotics: Systems identification, motion planning and control, adaptive, predictive and model-based control systems, navigation, real time applications, visuo-motor coordination. LOCATION BEATRIZ Costa Teguise Hotel Costa Teguise Lanzarote - Canary Islands, June 4-6, 1997 Lanzarote, the most northerly and easterly island of the Canarian archipelago, is also the most unusual one, and it exerts a strange fascination on visitors: the rapid succession of fire, sea and colors contrasts with craters, green valleys and unforgettable warm, golden beaches. LANGUAGE English will be the official language of IWANN'97. Simultaneous translation will not be provided. INVITED SPEAKERS Prof. Marvin Minsky - Neuronal and Symbolic Perspectives of AI MIT (USA) Prof. Reinhard Eckhorn - Models of Visual Processing Philips University (D) Prof. Valentino Braitenberg - Sensory-Motor Integration Institute for Biological Cybernetics (D) Dr. Javier De Felipe - Microcircuits in the Brain Instituto Cajal. CSIC (E) Dr. Paolo Ienne - Digital Architectures in Neurocomputers EPFL (CH) CALL FOR PAPERS The Programme Committee seeks original papers on the above-mentioned topics. Authors should pay special attention to explaining the theoretical and technical choices involved, pointing out possible limitations, and describing the current state of their work. All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length (LNCS Springer-Verlag Series). INSTRUCTIONS TO AUTHORS Five copies (one original and four copies) of the paper must be submitted.
The paper must not exceed 10 pages, including figures, tables and references. It should be written in English on A4 paper, in a Times font, 10 point in size, without page numbers (please indicate the order by numbering the reverse side of the sheets with a pencil). The printing area should be 12.2 x 19.3 cm. The text should be justified to occupy the full line width, using one-line spacing. Headings (12 point, bold) should be capitalized and aligned to the left. Title (14 point, bold) should be centered. Abstract and affiliation (9 point) must also be included. If possible, please make use of the LaTeX/plain TeX style file available on the WWW page: http://petrus.upc.es/iwann97.html, where you can get more detailed instructions for authors. In addition, one sheet must be attached including: Title and authors' names, list of five keywords, the Topic the paper fits best, preferred presentation (oral or poster) and the corresponding author (name, postal and e-mail addresses, phone and fax numbers). CONTRIBUTIONS MUST BE SENT TO: Prof. Jose Mira Dpto. Inteligencia Artificial, UNED Senda del Rey, s/n E - 28040 MADRID, Spain E-mail: iwann97 at dia.uned.es Phone: + 34 1 3987155 Fax: + 34 1 3986697 IMPORTANT DATES Final Date for Submission: January 15, 1997 Notification of Acceptance: March 1997 Work-Conference: June 4-6, 1997 INSCRIPTION, TRAVEL AND HOTEL INFORMATION ULTRAMAR EXPRESS Diputacio, 238, 3 E-08007 BARCELONA, Spain Phone: +34 3 4827140 Fax: +34 3 4827158 E-mail: gcasanova at uex.es IBERIA and AVIACO will be the official carriers for IWANN'97, offering special rates and conditions. International code for special rate: BT71B21MPE0038. (See the WWW page for special package rates.) POSSIBILITY OF GRANTS The Organization Committee of IWANN'97 will provide a limited number of full or partial grants. Please contact the WWW address for further information. STEERING COMMITTEE Joan Cabestany, Universidad Politecnica de Catalunya (E) Jose Mira Mira, UNED (E) Alberto Prieto, Universidad de Granada (E) Francisco Sandoval, Universidad de Malaga (E) ORGANIZATION COMMITTEE Joan Cabestany and Francisco Sandoval (E), Co-chairmen Michael Arbib, University of Southern California (USA) Senen Barro, Universidad de Santiago (E) Gabriel de Blasio, Univ. de Las Palmas de Gran Canaria (E) Trevor Clarkson, King's College London (UK) Ana Delgado, UNED (E) Dante Del Corso, Politecnico de Torino (I) Belen Esteban-Sanchez, ITC (E) Tamas D. Gedeon, University of New South Wales (AUS) Karl Goser, Universität Dortmund (G) Jeanny Herault, Institut National Polytechnique de Grenoble (F) Jaap Hoekstra, Delft University of Technology (NL) Shunsuke Sato, Osaka University (Jp) Igor Shevelev, Russian Academy of Science (R) Juan Sigüenza, IIC (E) Cloe Taddei-Ferretti, Istituto di Cibernetica, CNR (I) Marley Vellasco, Pontificia Universidade Catolica do Rio de Janeiro (Br) Michel Verleysen, Universite Catholique de Louvain-la-Neuve (B) PROGRAMME COMMITTEE Jose Mira and Alberto Prieto, Co-chairmen (E) Igor Aleksander, Imperial Coll. of Science Technology and Medicine (UK) Jose Ramon Alvarez, UNED (E) Shun-Ichi Amari, University of Tokyo (Jp) Xavier Arreguit, CSEM (CH) François Blayo, Univ. Paris 1 (F) Leon Chua, University of California (USA) Marie Cottrell, Univ. Paris 1 (F) Akira Date, Tokyo University of Agriculture and Technology (Jp) Antonio Diaz-Estrella, Universidad de Malaga (E) M. Duranton, Phillips (F) Reinhard Eckhorn, Philips University (D) Kunihiko Fukushima, Osaka University (Jp) Patrik Garda, Univ.
Pierre et Marie Curie (F) Anne Guerin-Dugue, INPG (F) Martin Hasler, EPFL (CH) Mohamad H. Hassoun, Wayne State University (USA) Gonzalo Joya, Universidad de Malaga (E) Simon Jones, IERI Loughborough Univ. of Tech. (UK) Christian Jutten, INPG (F) H. Klar, Technische Universität Berlin (G) K. Nicholas Leibovic, Univ. Buffalo (USA) J. Lettvin, MIT (USA) Francisco Javier Lopez Aligue, Universidad de Extremadura (E) Jordi Madrenas, UPC (E) Pierre Marchal, CSEM (CH) Juan Manuel Moreno, UPC (E) Josef A. Nossek, Technische Universität München (G) Julio Ortega, Universidad de Granada (E) Francisco Jose Pelayo, Universidad de Granada (E) Franz Pichler, Johannes Kepler Universität Linz (A) Vicenzo Piuri, Politecnico di Milano (I) Leonardo Reyneri, Politecnico di Torino (I) Tamas Roska, Hungarian Academy of Sciences (H) E. Sanchez-Sinencio, Texas A&M Univ. (USA) J. Simoes Da Fonseca, Faculty of Medicine of Lisbon (P) John G. Taylor, King's College London (UK) Carme Torras, Instituto de Cibernetica del CSIC-UPC (E) Philip Treleaven, University College London (UK) Elena Valderrama, Centro Nacional de Microelectronica (E)  From dsilver at csd.uwo.ca Tue Oct 22 15:59:40 1996 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Tue, 22 Oct 1996 15:59:40 -0400 (EDT) Subject: Preprint on Inductive Transfer in ANNs available Message-ID: <9610221959.AA05229@church.ai.csd.uwo.ca.csd.uwo.ca> A preprint of the article: "Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness" can be found at: http://www.csd.uwo.ca/~dsilver/CSetaMTL.ps.Z The article has been accepted for publication in the Connection Science special issue on "Transfer in Inductive Systems" due out this fall. ABSTRACT With a distinction made between two forms of task knowledge transfer, {\em representational} and {\em functional}, $\eta$MTL, a modified version of the MTL method of functional (parallel) transfer, is introduced. The $\eta$MTL method employs a separate learning rate, $\eta_k$, for each task output node $k$. $\eta_k$ varies as a function of a measure of relatedness, $R_k$, between the $k$th task and the primary task of interest. Results of experiments demonstrate the ability of $\eta$MTL to dynamically select the most related source task(s) for the functional transfer of prior domain knowledge. The $\eta$MTL method of learning is nearly equivalent to standard MTL when all parallel tasks are sufficiently related to the primary task, and is similar to single task learning when none of the parallel tasks are related to the primary task. If you have any difficulties with transmission or wish to receive the article by another means, please contact me as below. - Danny -- ========================================================================= = Daniel L. Silver University of Western Ontario, London, Canada = = N6A 3K7 - Dept. of Comp. Sci. = = dsilver at csd.uwo.ca H: (902)582-7558 O: (902)494-1813 = = WWW home page ....
http://www.csd.uwo.ca/~dsilver = =========================================================================  From halici at rorqual.cc.metu.edu.tr Wed Oct 23 02:00:40 1996 From: halici at rorqual.cc.metu.edu.tr (ugur halici) Date: Wed, 23 Oct 1996 10:00:40 +0400 (MEDT) Subject: Special Session on Pattern Recog., Image Processing & Computer Vision Message-ID: ******************************************************** Call for Summaries and Participation SPECIAL SESSION on --------------------------------------------------------- PATTERN RECOGNITION, IMAGE PROCESSING and COMPUTER VISION --------------------------------------------------------- 2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE Sheraton Imperial Hotel & Convention Center, Research Triangle Park, North Carolina/March 2-5, 1997 ********************************************************* A special session on Pattern Recognition, Image Processing and Computer Vision is to be organized within the 2nd International Conference on Computational Intelligence and Neuroscience. Papers are sought on neural network applications or biologically inspired approaches related to pattern recognition, image processing and computer vision. Prospective authors are requested to contact the session organizer Ugur Halici, Dept. of Electrical Engineering Middle East Technical University, Ankara, 06531, Turkey fax: (+90) 312 210 12 61 email: halici at rorqual.cc.metu.edu.tr by email or fax as soon as possible in order to show their interest and receive information on the paper format. Papers will be accepted based on summaries, which must be received before November 30, 1996, for this session. ICCIN is part of the Third Joint Conference on Information Sciences (JCIS), Sheraton Imperial Hotel & Convention Center, Research Triangle Park, North Carolina/March 2-5, 1997 ICCIN Conference Chairs: -------------------------- Subhash C. Kak, Louisiana State University Jeffrey P. Sutton, Harvard University JCIS Honorary Chairs: ------------------------- Lotfi A. Zadeh & Azriel Rosenfeld Plenary Speakers: ---------------- James S. Albus / Jim Anderson / Roger Brockett / Earl Dowell / David E. Goldberg / Stephen Grossberg / Y. C. Ho / John H. Holland / Zdzislaw Pawlak / Lotfi A. Zadeh ICCIN Web site: http://www.csci.csusb.edu/iccin  From Dimitris.Dracopoulos at ens-lyon.fr Thu Oct 24 08:21:19 1996 From: Dimitris.Dracopoulos at ens-lyon.fr (Dimitris Dracopoulos) Date: Thu, 24 Oct 1996 14:21:19 +0200 (MET DST) Subject: CFP: Neural and Evolutionary Algorithms for Intelligent Control Message-ID: <199610241221.OAA03160@banyuls.ens-lyon.fr> NEURAL AND EVOLUTIONARY ALGORITHMS FOR INTELLIGENT CONTROL ---------------------------------------------------------- C A L L F O R P A P E R S Special Session in: "15th IMACS World Congress 1997 on Scientific Computation, Modelling and Applied Mathematics", August 24-29 1997, Berlin, Germany Special Session Organizer-Chair: Dimitri C. Dracopoulos ------------------------------- (Ecole Normale Superieure de Lyon, LIP) Scope: ----- The session will focus on the latest developments in state-of-the-art neurocontrol and evolutionary techniques. Today, many advanced intelligent control applications utilize methods like the above, and papers describing these are most welcome. Theoretical discussions of how these techniques can be proved to be stable are also highly welcome.
Topics: ------ -Neurocontrollers * optimization over time * adaptive critic designs * brain-like neurocontrollers -Evolutionary techniques as pure controllers * genetic algorithms * evolutionary programming * genetic programming -Hybrid methods (neural nets + evolutionary algorithms) -Theoretical and Stability issues for neuro-evolutionary control -Advanced Control Applications Paper Contributions: -------------------- Each paper will be published in the Proceedings of the IMACS'97 World Congress. The accepted papers will be orally presented (25 minutes each, including 5 min for discussion). Important dates: ---------------- December 5, 1996, Deadline for receiving papers. January 10, 1997, Notification of acceptance. February 1997, Author typing instructions, for camera-ready copies. Submission guidelines: --------------------- One hardcopy (6-page limit, 10pt font) should be sent to the Session Chair: Professor Dimitri C. Dracopoulos Laboratoire de l' Informatique du Parallelisme (LIP) Ecole Normale Superieure de Lyon 46 Allee d'Italie 69364 Lyon - Cedex 07, France. In the case of multiple authors, the paper should indicate which author is to receive correspondence. The corresponding author is requested to include in the cover letter: complete postal address, e-mail address, phone number, fax number, a list of keywords (no more than 5). More information (preliminary) on the "15th IMACS World Congress 1997" can be found at: "http://www.first.gmd.de/imacs97/". Please note that special discounted registration fees (proceedings but no social program) will be available. -- Professor Dimitris C. Dracopoulos Laboratoire de l' Informatique du Parallelisme (LIP) Telephone: +33 (0) 472728504 Ecole Normale Superieure de Lyon Fax: +33 (0) 472728080 46 Allee d'Italie E-mail: Dimitris.Dracopoulos at ens-lyon.fr 69364 Lyon - Cedex 07 France  From giles at research.nj.nec.com Thu Oct 24 15:06:47 1996 From: giles at research.nj.nec.com (Lee Giles) Date: Thu, 24 Oct 96 15:06:47 EDT Subject: Technical Report Available Message-ID: <9610241906.AA06199@alta> The following technical report presents the experimental results of three on-line learning solutions in predicting multiprocessor memory access patterns. __________________________________________________________________________ PERFORMANCE OF ON-LINE LEARNING METHODS IN PREDICTING MULTIPROCESSOR MEMORY ACCESS PATTERNS Majd F. Sakr (1,2), Steven P. Levitan (2), Donald M. Chiarulli (3), Bill G. Horne (1), C. Lee Giles (1,4) (1) NEC Research Institute, 4 Independence Way, Princeton NJ 08540 (2) University of Pittsburgh, Electrical Engineering, Pittsburgh PA 15261 (3) University of Pittsburgh, Computer Science, Pittsburgh PA 15260 (4) UMIACS, University of Maryland, College Park, MD 20742 Abstract: Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the time the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate and reduce control configuration time, the major component of the control latency.
Three different on-line prediction techniques were tested to learn and predict repetitive memory access patterns for three typical parallel processing applications: the 2-D relaxation algorithm, matrix multiply, and the Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide needed memory access paths before they were requested. Three prediction techniques were used and tested: (1) a Markov predictor, (2) a linear predictor, and (3) a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications; however, the TDNN produced the best overall results. Keywords: On-line Prediction; Learning; Multiprocessors; Memory; Markov Predictor; Linear Predictor; Time Delay Neural Network ____________________________________________________________________________ The paper is available from: http://www.neci.nj.nec.com/homepages/giles.html http://www.neci.nj.nec.com/homepages/sakr.html http://www.cs.umd.edu/TRs/TR-no-abs.html Comments are very welcome. -- C. Lee Giles / Computer Sciences / NEC Research Institute / 4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482 www.neci.nj.nec.com/homepages/giles.html ==  From dyyeung at cs.ust.hk Fri Oct 25 03:06:20 1996 From: dyyeung at cs.ust.hk (Dit-Yan Yeung) Date: Fri, 25 Oct 1996 15:06:20 +0800 (HKT) Subject: Theoretical Aspects of Neural Computation (TANC-97) Message-ID: <199610250706.PAA07352@cssu35.cs.ust.hk> Preliminary Call for Papers TANC-97 Hong Kong International Workshop on Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective May 26-28, 1997 Hong Kong University of Science and Technology Over the past decade or so, neural computation has emerged as a research area with active involvement by researchers from a number of different disciplines, including computer science, engineering, mathematics, neurobiology, physics, and statistics. Interdisciplinary collaboration and exchange of ideas have often led us to address research issues in this area from different perspectives. Consequently, some interesting new paradigms and results have become available to the field and have contributed significantly to the strengthening of its theoretical foundations. This workshop, to be held at the Hong Kong University of Science and Technology, located at scenic Clear Water Bay, is intended to bring together researchers from different disciplines to review the current status of neural computation research. In particular, theoretical studies of the following themes will be given special emphasis: Neuroscience; Computational and Mathematical; Statistical Physics. While the focus of this workshop is on theoretical aspects, the impact of recent theoretical advances on applications and the novel application of theoretical results to real-world problems will also be covered. Moreover, as an important objective of the workshop, future research directions and topics of a strongly interdisciplinary nature will be explored.
The workshop will feature several keynote presentations and invited papers by leading researchers in the field: Keynote Speakers ---------------- Shun-ichi Amari (RIKEN, Japan) Haim Sompolinsky (Hebrew University, Israel) Invited Speakers ---------------- Peter Dayan (MIT, USA) Aike Guo (Chinese Academy of Sciences, China) Ido Kanter (Bar Ilan, Israel) Manfred Opper (Wuerzburg, Germany) Sara Solla (AT&T Research, USA) Lei Xu (Chinese University of Hong Kong) (and more to be confirmed) In addition to keynote and invited papers, there will also be a number of submitted papers. All oral and poster presentations will be scheduled in a single track with no parallel sessions to facilitate interdisciplinary interaction. Additional discussion sessions will be arranged. Moreover, since campus accommodation will be available to all workshop participants, there will be plenty of time for informal discussions. Paper Submission ---------------- All submitted papers will be refereed on the basis of quality, significance, and clarity by a review panel which includes our invited speakers. Each submitted paper written in English may be up to six A4 pages, including figures and references, in single-spaced one-column format using a font size of 10 points or larger. Five copies of the submitted paper should be sent to: TANC-97 Secretariat Department of Physics Hong Kong University of Science and Technology Clear Water Bay, Kowloon Hong Kong Fax: +852-2358-1652 E-mail: tanc97 at usthk.ust.hk WWW: http://www.cs.ust.hk/conf/tanc97 In addition to the paper, there should also be a cover letter with the following information provided: (a) corresponding author, fax number, postal and e-mail addresses; (b) up to eight keywords; (c) preference of presentation format (oral or poster). Important Dates --------------- Submission of paper (received): January 15, 1997 Notification of acceptance: February 28, 1997 Organizing Committee -------------------- Kwok-Ping Chan (University of Hong Kong) Lai-Wan Chan (Chinese University of Hong Kong) Irwin King (Chinese University of Hong Kong) Zhaoping Li (Hong Kong University of Science and Technology) Franklin Shin (Hong Kong Polytechnic University) Michael K.Y. Wong (Hong Kong University of Science and Technology) - Chairman Dit-Yan Yeung (Hong Kong University of Science and Technology) Other Attractions ----------------- Hong Kong offers a wide variety of sightseeing activities. It is one of the most international and interesting cities in the world. This will be a great opportunity to visit Hong Kong, as the workshop will be held shortly before Hong Kong becomes a Special Administrative Region of China on July 1, 1997.  From A.Sharkey at dcs.shef.ac.uk Fri Oct 25 11:15:25 1996 From: A.Sharkey at dcs.shef.ac.uk (Amanda Sharkey) Date: Fri, 25 Oct 96 11:15:25 BST Subject: Research Associate Post: Neural Net Fault Diagnosis Message-ID: <9610251015.AA01370@gw.dcs.shef.ac.uk> Research Associate: On-line Neural Net Fault Diagnosis for Diesel Engines. A post-doctoral research fellow is required for a period of up to 3 years to join the Neural Computing Group in the Department of Computer Science, University of Sheffield, UK. This EPSRC-funded post is available from November 1st 1996, or as soon as possible thereafter. Salary in the range 14,317-16,628 pounds (UK). This project will involve the collection of fault diagnosis data from a diesel engine, using measures of in-cylinder pressure, engine vibration and noise emission.
These data will be used to train a neural net system for fault diagnosis, and will form the basis for the development of a general set of principles for increasing the reliability of a neural net system. The postholder will be required to induce a variety of faults in a real diesel engine (assisted by a technician also employed for the project), to collect data corresponding to those faults using a variety of sensors, and to train neural nets to perform fault diagnosis. Knowledge and practical experience of real diesel engines are essential, as are computational skills. Familiarity with neural computing techniques would be preferred. Further details are available at: http://www.dcs.shef.ac.uk/research/groups/nn/engine.html Direct email inquiries and/or CVs to Dr Amanda Sharkey: amanda at dcs.shef.ac.uk  From nikola at prosun.first.gmd.de Fri Oct 25 04:44:40 1996 From: nikola at prosun.first.gmd.de (Nikola Serbedzija) Date: Fri, 25 Oct 96 09:44:40 +0100 Subject: CFP: IMACS - ANN Simulation session Message-ID: <9610250844.AA16045@prosun.first.gmd.de> ------------------------------------------------------------------------- 15th IMACS WORLD CONGRESS on Scientific Computation, Modelling and Applied Mathematics Berlin, Germany, 24-29 August 1997 ------------------------------------------------------------------------- CALL FOR PAPERS for the Organized Session on Simulation of Artificial Neural Networks Session Organizers: Gerd Kock and Nikola Serbedzija ------------------------------------------------------------------------- The aim of this session is to reflect the current techniques and trends in the simulation of artificial neural networks (ANNs). Both software and hardware approaches are solicited. Topics of interest include, but are not limited to: * General Aspects of Neural Simulations o design issues for simulation tools o inherent parallelism of ANNs o general-purpose neural simulations o special-purpose neural simulations o fault tolerance aspects * Parallel Implementation of ANNs o data parallel implementations o control parallel implementations * Hardware Emulation of ANNs o silicon technology o optical technology o molecular technology * General Simulation Tools o graphic/menu based tools o module libraries o specific programming languages * Applications o applications using/demanding parallel implementations or hardware emulations o applications using/demanding analysis tools or graphical representations provided by simulation tools * Hybrid Systems o the topics above are also of interest with respect to related hybrid systems (neuro-fuzzy, genetic algorithms) Authors interested in this session are invited to submit 3 copies of an extended summary (about 4 pages) of the paper to one of the session organizers by December 1st, 1996. Submissions may also be made by email. The notification of acceptance/rejection will be mailed by February 28th, 1997. The authors of accepted papers will also receive detailed instructions for final manuscript preparation. The submission must contain the following information: The name(s) of the author(s), title(s), affiliation(s), complete address(es) (including email, phone, fax). In addition, the author responsible for communication has to be indicated. Important dates --------------- December 1st, 1996 Extended summary due February 28th, 1997 Notification of acceptance/rejection April 30th, 1997 Camera-ready paper due Addresses for sending contributions ------------------------------
Dr. Gerd Kock, GMD FIRST, Rudower Chaussee 5, D-12489 Berlin, Germany, e-mail: gerd at first.gmd.de, tel: +49 30 / 6392 1863 Dr. Nikola Serbedzija, GMD FIRST, Rudower Chaussee 5, D-12489 Berlin, Germany, e-mail: nikola at first.gmd.de, tel: +49 30 / 6392 1873 =================================================================== IMACS The International Association for Mathematics and Computers in Simulation is an organization of professionals and scientists concerned with computers, computation and applied mathematics, in particular, as they apply to the simulation of systems. This includes numerical analysis, mathematical modelling, approximation theory, computer hardware and software, programming languages and compilers. IMACS also concerns itself with the general philosophy of scientific computation and applied mathematics, and with their impact on society and on disciplinary and interdisciplinary research. IMACS is one of the international scientific associations (with IFAC, IFORS, IFIP and IMEKO) represented in FIACC, the five international organizations in the area of computers, automation, instrumentation and the relevant branches of applied mathematics. Of the five, IMACS (which changed its name from AICA in 1976) is the oldest and was founded in 1956. For more information about the 15th IMACS WORLD CONGRESS see the WWW page http://www.first.gmd.de/imacs97/ ================================================================  From dwang at cis.ohio-state.edu Fri Oct 25 14:19:56 1996 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Fri, 25 Oct 1996 14:19:56 -0400 (EDT) Subject: Neurocomputing, Vol.13 (2-4) Message-ID: <199610251819.OAA01392@sarajevo.cis.ohio-state.edu> NEUROCOMPUTING [NEUCOM] Volume 13, Issue 2-4 (30 SEPTEMBER 1996) Adaptable neuro production systems N.K. Kasabov On the stability of Lagrange programming neural networks for satisfiability problems of propositional calculus M. Nagamatu, T. Yanaru Generalized Hopfield networks for associative memories with multi-valued stable states J.M. Zurada, I. Cloete, E. van der Poel A neurocomputing framework: From methodologies to application S.-B. Cho A systematic method for rational definition of plant diagnostic symptoms by self-organizing neural networks H. Furukawa, T. Ueda, M. Kitamura Neural network indirect adaptive control with fast learning algorithm G.J. Jeon, I. Lee Efficient learning of NN-MLP based on individual evolutionary algorithm Q. Zhao, T. Higuchi Application of neural network algorithm to CAD of magnetic systems Y. Yamazaki, M. Ochiai, A. Holz, T. Hara Controlling public address systems based on fuzzy inference and neural network K. Kurisu, K. Fukuyama Robust world-modelling and navigation in a real world U.R. Zimmer Practical applications of neural networks in texture analysis E. Biebelmann, M. Köppen, B. Nickolay Chaotic recurrent neural networks and their application to speech recognition J.K. Ryeu, H.S. Chung On the accuracy of mapping by neural networks trained by backpropagation with forgetting R. Kozma, M. Sakuma, Y. Yokoyama, M. Kitamura Optimal learning in artificial neural networks: A review of theoretical results M. Bianchini, M. Gori Classification by balanced binary representation Y. Baram Fuzzy astronomical seeing nowcasts with a dynamical and recurrent connectionist network A. Aussem, F. Murtagh, M. Sarazin Improved binary classification performance using an information theoretic criterion P. Burrascano, D.
Pirollo  From lenherr at mildura.cs.umass.edu Sun Oct 27 01:01:15 1996 From: lenherr at mildura.cs.umass.edu (Fred Lenherr) Date: Sun, 27 Oct 1996 01:01:15 -0500 Subject: Neuroscience Web Search Message-ID: <199610270601.BAA06192@mildura.cs.umass.edu> Hello, I have created a new Web Search Engine devoted entirely to neuroscience. Unlike the large search sites, everything here is relevant by pre-selection. The URL is: http://www.acsiom.org/nsr/neuro.html This is a full-text database and contains more than 55,000 web pages. If you are interested, please take a look at it, and consider placing a link to it on one of your own web pages. Thanks very much, Fred K. Lenherr  From crites at hope.cs.umass.edu Mon Oct 28 01:06:18 1996 From: crites at hope.cs.umass.edu (Bob Crites) Date: Mon, 28 Oct 1996 01:06:18 -0500 (EST) Subject: PhD Thesis Available Message-ID: <199610280606.BAA08370@hope.cs.umass.edu> My Ph.D. thesis is now available for download: LARGE-SCALE DYNAMIC OPTIMIZATION USING TEAMS OF REINFORCEMENT LEARNING AGENTS Robert Harry Crites ftp://ftp.cs.umass.edu/pub/anw/pub/crites/root.ps.Z (202517 bytes) or from my homepage at: http://www-anw.cs.umass.edu/People/crites/crites.html Abstract: Recent algorithmic and theoretical advances in reinforcement learning (RL) are attracting widespread interest. RL algorithms have appeared that approximate dynamic programming (DP) on an incremental basis. Unlike traditional DP algorithms, these algorithms do not require knowledge of the state transition probabilities or reward structure of a system. This allows them to be trained using real or simulated experiences, focusing their computations on the areas of state space that are actually visited during control, making them computationally tractable on very large problems. RL algorithms can be used as components of multi-agent algorithms. If each member of a team of agents employs one of these algorithms, a new collective learning algorithm emerges for the team as a whole. In this dissertation we demonstrate that such collective RL algorithms can be powerful heuristic methods for addressing large-scale control problems. Elevator group control serves as our primary testbed. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable and they are non-stationary due to changing passenger arrival rates. As a way of streamlining the search through policy space, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large-scale stochastic dynamic optimization problem of practical utility.
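As a rough illustration of the collective scheme the abstract describes (a simplified sketch, not the thesis code: the tabular Q-learning, discrete states, and reward are all stand-in assumptions), each agent runs an ordinary Q-learning update but is trained on the same team-level reinforcement signal:

import random
from collections import defaultdict

class TeamQAgent:
    """One member of a team; every member receives the same global reward."""
    def __init__(self, actions, alpha=0.1, gamma=0.99, epsilon=0.1):
        self.q = defaultdict(float)          # Q-values keyed by (state, action)
        self.actions = list(actions)
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if random.random() < self.epsilon:   # occasional exploration
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def update(self, s, a, global_reward, s_next):
        best_next = max(self.q[(s_next, b)] for b in self.actions)
        target = global_reward + self.gamma * best_next
        self.q[(s, a)] += self.alpha * (target - self.q[(s, a)])

With one such agent per elevator car, every agent would call update with a single shared reward (for example, the negative of total passenger waiting time), which is exactly why the signal looks noisy from any one agent's point of view.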
From schapire at research.att.com Mon Oct 28 12:07:59 1996 From: schapire at research.att.com (Robert Schapire) Date: Mon, 28 Oct 1996 12:07:59 -0500 (EST) Subject: Call for papers: COLT '97 Message-ID: <199610281707.MAA06414@arran.research.att.com>

===========================================================================
-- Call for Papers --
COLT '97
Tenth Annual Conference on Computational Learning Theory
Vanderbilt University, Nashville, Tennessee
July 6--9, 1997
===========================================================================

The Tenth Annual Conference on Computational Learning Theory (COLT'97) will be held at Vanderbilt University in Nashville, Tennessee from Sunday, July 6 through Wednesday, July 9, 1997. COLT'97 is sponsored by Vanderbilt University, with additional support from AT&T Labs, and in cooperation with ACM SIGACT and SIGART. The conference will be co-located with the Fourteenth International Conference on Machine Learning (ICML'97), which will be held Tuesday, July 8 through Saturday, July 12. We anticipate a lively program including oral presentations, posters, a number of invited speakers and a half day of tutorials (jointly organized with ICML).

We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning. Some of the issues and topics that have been addressed in the past include:

* design and analysis of learning algorithms;
* sample and computational complexity of learning specific model classes;
* frameworks modeling the interaction between the learner, teacher and the environment (such as learning with queries, learning control policies and inductive inference);
* learning using complex models (such as neural networks and decision trees);
* learning with minimal prior assumptions (such as mistake-bound models, universal prediction, and agnostic learning).

We strongly encourage submissions from all disciplines engaged in research on these and related questions. Examples of such fields include computer science, statistics, information theory, pattern recognition, statistical physics, inductive logic programming, information retrieval and reinforcement learning. We also encourage the submission of papers describing experimental results that are supported by theoretical analysis.

ABSTRACT SUBMISSION: Authors are encouraged to submit their abstracts electronically. Instructions for how to submit papers electronically can be obtained after December 1 by sending email to colt97 at research.att.com with subject "help", or from our web page. Alternatively, authors may submit fourteen copies (preferably two-sided) of an extended abstract to:

Robert Schapire -- COLT'97
AT&T Labs
600 Mountain Avenue, Room 2A-424
Murray Hill, NJ 07974 USA
Telephone (for overnight mail): (908) 582-4533

Abstracts (whether hard-copy or electronic) must be RECEIVED by 11:59pm EST on FRIDAY, JANUARY 17, 1997. This deadline is FIRM. (We also will accept abstracts sent via air mail and postmarked by January 6, or sent via overnight carrier by January 16.) Authors will be notified of acceptance or rejection on or before March 24, 1997. Final camera-ready papers will be due by April 18. Papers that have appeared in journals or other conferences, or that are being submitted to other conferences (including ICML), are NOT appropriate for submission to COLT.

ABSTRACT FORMAT: The extended abstract should consist of a cover page with title, authors' names, postal and email addresses, and a 200-word summary.
The body of the abstract should be no longer than 10 pages, with at most 35 lines per page, at most 6.5 inches of text per line, and in 12-point font. If the abstract exceeds 10 pages, only the first 10 pages may be examined. The extended abstract should include a clear definition of the theoretical model used and a clear description of the results, as well as a discussion of their significance, including comparison to other work. Proofs or proof sketches should be included.

PROGRAM FORMAT: All accepted papers will be presented orally, although some or all papers may also be included in a poster session. At the discretion of the program committee, the program may consist of both long and short talks, corresponding to longer and shorter papers in the proceedings. By default, all papers will be considered for both categories. Authors who DO NOT want their papers considered for the short category should indicate that fact in a cover letter.

PROGRAM CHAIRS: Yoav Freund and Robert Schapire (AT&T Labs).

PROGRAM COMMITTEE: Andrew Barron (Yale University), John Case (University of Delaware), Sally Goldman (Washington University), David Helmbold (University of California, Santa Cruz), Rob Holte (University of Ottawa), Eyal Kushilevitz (Technion), Gábor Lugosi (Pompeu Fabra University, Barcelona), Arun Sharma (University of New South Wales), John Shawe-Taylor (University of London), Satinder Singh (University of Colorado, Boulder), Haim Sompolinsky (Hebrew University), Volodya Vovk (Royal Holloway, University of London).

CONFERENCE AND LOCAL ARRANGEMENTS CHAIR: Vijay Raghavan (Vanderbilt University).

STUDENT TRAVEL: We anticipate some funds will be available to partially support travel by student authors. Details will be distributed as they become available.

TUTORIALS: The program will include a half day of tutorials, jointly organized by COLT and ICML, and intended as introductions to topics in the theory and practice of machine learning. For further information, or to submit a proposal for a tutorial, contact Sally Goldman, the tutorials chair, at sg at cs.wustl.edu, or visit our web page.

FOR MORE INFORMATION: Visit the ICML/COLT'97 web page at http://cswww.vuse.vanderbilt.edu/~mlccolt/, or send email to colt97 at research.att.com. This call for papers is available in html and other formats from http://www.research.att.com/~yoav/colt97/cfp.html

From john at dcs.rhbnc.ac.uk Mon Oct 28 15:40:39 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 28 Oct 96 20:40:39 +0000 Subject: Special Issue on VC Dimension Message-ID: <199610282040.UAA12033@platon.cs.rhbnc.ac.uk>

DISCRETE APPLIED MATHEMATICS

Announcing a Special Issue on the Vapnik-Chervonenkis Dimension

Manuscripts are solicited for a special issue of DISCRETE APPLIED MATHEMATICS on the topic of the Vapnik-Chervonenkis Dimension. The special issue arose out of an ICMS Workshop on the VC Dimension, though submission is not restricted to those who attended the workshop.
For more information on the Vapnik-Chervonenkis dimension and the aims of the workshop, and hence also of the Special Issue, please consult the "scientific aims" available through the homepage:

http://www.dcs.ed.ac.uk/~mrj/VCWorkshop/

The following is a (nonexhaustive) list of possible topics of interest for the SPECIAL ISSUE:

- Combinatorics of the VC Dimension
- Applications of the VC Dimension in Statistics
- Applications of the VC Dimension in Learning Theory
- Applications of the VC Dimension in Computational Geometry
- Applications of the VC Dimension in Complexity Theory

Four (4) copies of complete manuscripts should be sent to the Coordinating Editor indicated below by December 31, 1996. Email submission of postscript files (preferably compressed and uuencoded) is acceptable. Manuscripts must be prepared according to the normal submission requirements of Discrete Applied Mathematics, as described inside the back cover of each issue of the journal. All manuscripts will be subject to the regular refereeing process of the journal. Papers should include an abstract.

The Guest Editors of the Special Issue are:

Coordinating Editor:
J.S. Shawe-Taylor
Department of Computer Science
Royal Holloway, University of London
Egham, Surrey TW20 0EX, UK
Email: jst at dcs.rhbnc.ac.uk

A. Macintyre
Mathematical Institute
Oxford University
Email: ajm at maths.ox.ac.uk

M. Jerrum
Department of Computer Science
University of Edinburgh
Email: mrj at dcs.edinburgh.ac.uk

------- End of Forwarded Message

From carl at cs.toronto.edu Tue Oct 29 15:50:10 1996 From: carl at cs.toronto.edu (Carl Edward Rasmussen) Date: Tue, 29 Oct 1996 15:50:10 -0500 Subject: PhD thesis available Message-ID: <96Oct29.155011edt.987@neuron.ai.toronto.edu>

My PhD thesis is now available on the net. It is entitled

EVALUATION OF GAUSSIAN PROCESSES AND OTHER METHODS FOR NON-LINEAR REGRESSION

The thesis is 138 pages long, occupies 460Kb in compressed postscript, and is formatted for double-sided printing. You can obtain a copy via the web at http://www.cs.toronto.edu/~carl/pub.html or via anonymous ftp to ftp.cs.toronto.edu, where the file "thesis.ps.gz" is placed in the directory "pub/carl".

ABSTRACT: This thesis develops two Bayesian learning methods relying on Gaussian processes, and a rigorous statistical approach for evaluating such methods. In these experimental designs, the sources of uncertainty in the estimated generalisation performances due to variation in both training and test sets are accounted for. The framework allows for estimation of generalisation performance as well as statistical tests of significance for pairwise comparisons. Two experimental designs are recommended and supported by the DELVE software environment.

Two new non-parametric Bayesian learning methods relying on Gaussian process priors over functions are developed. These priors are controlled by hyperparameters which set the characteristic length scale for each input dimension. In the simplest method, these parameters are fit from the data using optimization. In the second, fully Bayesian method, a Markov chain Monte Carlo technique is used to integrate over the hyperparameters. One advantage of these Gaussian process methods is that the priors and hyperparameters of the trained models are easy to interpret. The Gaussian process methods are benchmarked against several other methods, on regression tasks using both real data and data generated from realistic simulations.
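As a rough illustration of the length-scale hyperparameters just described (this is not the thesis's code; the squared-exponential kernel form and all names below are assumptions), a Gaussian process regressor with one characteristic length scale per input dimension can be sketched as follows:

    import numpy as np

    def ard_kernel(X1, X2, scales, signal_var=1.0):
        # Squared-exponential covariance with a separate length scale per
        # input dimension; X1 is (n, d), X2 is (m, d), scales has length d.
        diff = (X1[:, None, :] - X2[None, :, :]) / scales
        return signal_var * np.exp(-0.5 * np.sum(diff ** 2, axis=-1))

    def gp_predict_mean(X, y, X_new, scales, noise_var=0.1):
        # Posterior mean at the test inputs under the Gaussian process prior
        # defined by ard_kernel, with i.i.d. Gaussian observation noise.
        K = ard_kernel(X, X, scales) + noise_var * np.eye(len(X))
        K_star = ard_kernel(X_new, X, scales)
        return K_star @ np.linalg.solve(K, y)

A large length scale in one dimension makes the prior nearly constant along that input, which is part of what makes these hyperparameters easy to interpret.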
The experiments show that small datasets are unsuitable for benchmarking purposes because the uncertainties in performance measurements are large. A second set of experiments provides strong evidence that the bagging procedure is advantageous for the Multivariate Adaptive Regression Splines (MARS) method. The simulated datasets have controlled characteristics which make them useful for understanding the relationship between properties of the dataset and the performance of different methods. The dependency of the performance on available computation time is also investigated. It is shown that a Bayesian approach to learning in multi-layer perceptron neural networks achieves better performance than the commonly used early stopping procedure, even for reasonably short amounts of computation time. The Gaussian process methods are shown to consistently outperform the more conventional methods.

--
Carl Edward Rasmussen               Email: carl at cs.toronto.edu
Dept of Computer Science            Phone: +1 (416) 978 7391
University of Toronto               Home:  +1 (416) 531 5685
Toronto, ONTARIO, Canada, M5S 1A4   FAX:   +1 (416) 978 1455
web: http://www.cs.toronto.edu/~carl

From marshall at cs.unc.edu Tue Oct 29 11:03:55 1996 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Tue, 29 Oct 1996 12:03:55 -0400 Subject: Paper available: Neural Model of Visual Stereomatching Message-ID: <199610291603.MAA17472@marshall.cs.unc.edu>

Paper available at http://www.cs.unc.edu/Research/brainlab/index.html

"NEURAL MODEL OF VISUAL STEREOMATCHING: SLANT, TRANSPARENCY, AND CLOUDS"

JONATHAN A. MARSHALL, GEORGE J. KALARICKAL, ELIZABETH B. GRAVES
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
marshall at cs.unc.edu, +1-919-962-1887, fax +1-919-962-1799

Stereomatching of oblique and transparent surfaces is described using a model of cortical binocular "tuned" neurons selective for disparities of individual visual features, and neurons selective for the position, depth, and 3-D orientation of local surface patches. The model is based on a simple set of learning rules. In the model, monocular neurons project excitatory connection pathways to binocular neurons at appropriate disparities. Binocular neurons project excitatory connection pathways to appropriately tuned "surface patch" neurons. The surface patch neurons project reciprocal excitatory connection pathways to the binocular neurons. Anisotropic intralayer inhibitory connection pathways project between neurons with overlapping receptive fields.

The model's responses to simulated stereo image pairs depicting a variety of oblique surfaces and transparently overlaid surfaces are presented. For all the surfaces, the model (1) assigns disparity matches and surface patch representations based on global surface coherence and uniqueness, (2) permits coactivation of neurons representing multiple disparities within the same image location, (3) represents oblique slanted and tilted surfaces directly, rather than approximating them with a series of frontoparallel steps, (4) assigns disparities to a cloud of points at random depths, like human observers, and unlike Prazdny's (1985) method, and (5) causes globally consistent matches to override greedy local matches. The model represents transparency, unlike the Marr and Poggio (1976) model, and it assigns unique disparities, unlike Prazdny's (1985) model.

In press, to appear in Network: Computation in Neural Systems, 11/96.
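To fix ideas about what a bank of disparity-"tuned" binocular units computes, here is a deliberately naive sketch for a single image row. It illustrates only a generic feedforward matching stage, not the Marshall et al. model, whose cooperative surface-patch and inhibitory dynamics are precisely what replace the greedy readout below; the matching score and all names are invented for illustration:

    import numpy as np

    def binocular_responses(left_row, right_row, max_disp):
        # resp[x, d] is the response of a unit tuned to disparity d at
        # position x: strong when left_row[x] matches right_row[x - d].
        n = len(left_row)
        resp = np.full((n, max_disp + 1), -np.inf)
        for d in range(max_disp + 1):
            resp[d:, d] = -np.abs(left_row[d:] - right_row[:n - d])
        return resp

    def greedy_disparities(resp):
        # A greedy local readout: pick the best-matching disparity at each
        # position independently (no surface coherence, no uniqueness).
        return np.argmax(resp, axis=1)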
From marshall at cs.unc.edu Tue Oct 29 12:31:39 1996 From: marshall at cs.unc.edu (Jonathan Marshall) Date: Tue, 29 Oct 1996 13:31:39 -0400 Subject: Short-term postdoc opening: Neural modeling of visual perception Message-ID: <199610291731.NAA17815@marshall.cs.unc.edu>

----------------------------------------------------------------------------

Short-Term Position Opening:

POSTDOCTORAL RESEARCH ASSOCIATE IN NEURAL MODELING OF VISUAL PERCEPTION
at the University of North Carolina at Chapel Hill

A short-term postdoctoral position is available in neural modeling of visual perception, in Dr. Jonathan Marshall's research group at UNC-Chapel Hill. The group's research focuses on intermediate-level visual motion perception, surface appearance perception, object perception, binding and grouping, and depth perception. The opening is the second postdoctoral position in a research group that includes one faculty member, one postdoc, and four PhD students.

The postdoc will develop and implement computational simulations of neural models of visual mechanisms involved in perception of surface brightness and transparency, depth, motion, and other aspects of visual and neural processing. The postdoc will also work on developing relative-motion algorithms for image processing tasks in object detection, image stabilization, scene segmentation, and invariant object representation. The postdoc may also have opportunities to develop and run visual psychophysics experiments on motion perception and stereopsis. The project includes work on adaptation and self-organization processes that may guide the development and maintenance of such neural mechanisms in human and animal brains. The postdoc will collaborate with other members of the research group.

The position requires very good programming skills and (ideally) some image processing and/or neural simulation experience. Experience with visual psychophysics or visual neurophysiology would be a plus. The position is available immediately and is funded for 6 or more months. Because the position is short-term, it might be ideal for a new PhD who needs interim funding before starting another position in the Summer or Fall. Salary is competitive.

Please send CV, a letter describing research interests and background, relevant publications, and references to Prof. Jonathan A. Marshall, Department of Computer Science, CB 3175, Sitterson Hall, University of North Carolina, Chapel Hill, NC 27599-3175, USA. Phone 919-962-1887, fax 919-962-1799, marshall at cs.unc.edu, http://www.cs.unc.edu/~marshall.

Posted 25 October 1996

----------------------------------------------------------------------------

From sylee at eekaist.kaist.ac.kr Wed Oct 30 07:32:58 1996 From: sylee at eekaist.kaist.ac.kr (prof. Soo-Young Lee) Date: Wed, 30 Oct 1996 21:32:58 +0900 Subject: Neural Networks Session at SCI'97, Venezuela Message-ID: <199610301232.VAA14099@eekaist.kaist.ac.kr>

CALL FOR PAPERS

SCI'97 Neural Networks Session at the WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS
Caracas, Venezuela, July 7-11, 1997

I was asked to organize session(s) on neural networks at a multi-disciplinary conference, SCI'97. As you can see from the announcement of the conference below, it is a truly interdisciplinary conference covering intelligent computing, information theory, cybernetics, social systems, psychology, biology, and applications.
We believe this interdisciplinary conference will provide a very good chance to meet researchers from different-but-related disciplines and promote interesting discussions. Therefore, I would like to invite you to present your recent research results at this conference, and meet interesting people.

The Neural Networks sessions will cover the following topics:

* Neural network models (biological and artificial)
* Hybrid systems (neuro, fuzzy, GA, EP, etc.)
* Applications (speech, time-series, controls, etc.)
* Artificial life

If you are interested in this multidisciplinary conference, please send an e-mail to sylee at eekaist.kaist.ac.kr.

SUBMISSIONS AND DEADLINES

January 17, 1997   Submission of extended abstracts or a condensed first draft (500-1500 words)
March 10, 1997     Acceptance notifications
May 12, 1997       Submission of camera-ready papers, hard copies and electronic versions

Best regards, Soo-Young Lee

**********************************************************************

WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS
Caracas, Venezuela
July 7-11, 1997

MAJOR THEMES

Conceptual Infrastructure of Systemics, Cybernetics and Informatics
Information Systems (ISAS '97)
Control Systems
Managerial/Corporative Systems
Human Resources Systems
Natural Resources Systems
Social Systems
Educational Systems
Financial Systems
SCI in Psychology, Cognition and Spirituality
SCI in Biology and Medicine
SCI in Art
Globalization, Development and Emerging Economies

ACADEMIC AND SCIENTIFIC SPONSORS

World Organization of Systemics and Cybernetics (WOSC) (France)
IFSR: International Federation for Systems Research (Austria/USA)
International Systems Institute (USA)
CUST, Engineer Science Institute of the Blaise Pascal University (France)
The International Institute for Advanced Studies in Systems Research and Cybernetics (Canada)
Society for Applied Systems Research (Canada)
Cybernetics and Human Knowing: A Journal of Second Order Cybernetics and Cybersemiotics (Denmark)
International Institute of Informatics and Systemics (USA)
IEEE (Venezuela Chapter)
Simon Bolivar University (Venezuela)
Universidad Central de Venezuela

INCLUSION OF SCI'97 PROCEEDINGS IN A CD-ROM EXTENDED ENCYCLOPEDIA

An electronic version of the SCI 97 proceedings will also be available on CD-ROM, with search and hypertext features. Other media, such as sound, animation and video, are also being considered. These proceedings will also be included in the CD-ROM Extended Encyclopedia of Systemics and Cybernetics (TM), whose development is presently in progress.

TYPES OF SUBMISSIONS ACCEPTED

Research, Review or Position Papers
Panel Presentation, Workshop and/or Round Table Proposals
New Topics Proposal (which should include a minimum of 15 papers)
Focus Symposia (which should include a minimum of 15 papers)

JOURNAL PUBLICATIONS FOR BEST PAPERS

Best papers will be published by "Cybernetics and Human Knowing: A Journal of Second Order Cybernetics and Cybersemiotics". Members of the Program Committee who are referees of the Journal will decide on the issue. Other journals are being considered for other areas of SCI'97/ISAS'97.
WEB SITE

http://www.callaos.com/SCI 97

PURPOSE

The purpose of the Conference is to bring together, from universities and corporations, academics and professionals, researchers and consultants, scientists and engineers, theoreticians and practitioners, from all over the world, to discuss the themes of the conference and to participate with original ideas or innovations, knowledge or experience, theories or methodologies, in the areas of Systemics, Cybernetics and Informatics (SCI).

Systemics, Cybernetics and Informatics (SCI) are being increasingly related to each other and to almost every scientific discipline and human activity. Their common transdisciplinarity characterizes and communicates them, generating strong relations among them and with other disciplines. They interpenetrate each other, integrating a whole that is permeating human thinking and practice. This phenomenon induced the Organization Committee to structure SCI'97 as a multiconference where participants may focus on an area, or on a discipline, while keeping open the possibility of attending conferences from other areas or disciplines. This systemic approach stimulates cross-fertilization among different disciplines, inspiring scholars, generating analogies and provoking innovations, which, after all, is one of the very basic principles of the systems movement and a fundamental aim in cybernetics.

BACKGROUND

The success achieved in ISAS'95 (Information Systems Analysis and Synthesis), held in Baden-Baden (Germany), symbolized by the award granted by the International Institute for Advanced Studies in Systems Research and Cybernetics (Canada) as the best and largest symposium at the 5th International Conference on Systems Research, Informatics and Cybernetics, encouraged its sponsors and session chairs to organize ISAS'96 at Orlando and to prepare a more general Conference on Systemics, Cybernetics and Informatics (SCI '97) at Caracas (Venezuela).

The widely acknowledged success of ISAS'96 (held on July 22-26 at Orlando), expressed through spontaneous verbal feedback and a comprehensive written evaluation from 143 authors of high-quality papers from 32 countries, galvanized the Program and Organizing Committees to make a definitive commitment to organize SCI'97 and ISAS'97 at Caracas on July 7-11, 1997. Many Program and Organizing Committee members from past international and world conferences are joining us for SCI'97 and ISAS'97, including most of those who organized the World Conference on Systems sponsored by UNESCO and the United Nations' World Federation of Engineering Organizations (WFEO). We are still looking for more organizational support from experienced scholars, consultants, practitioners, professionals and researchers, as well as from international or national organizations, public or private, academic or professional.
From pauer at igi.tu-graz.ac.at Wed Oct 30 20:22:06 1996 From: pauer at igi.tu-graz.ac.at (Peter Auer) Date: Thu, 31 Oct 1996 02:22:06 +0100 Subject: Special Issue on Computational Learning Theory Message-ID: <199610310122.AA06439@figiss01.tu-graz.ac.at>

Call for Papers

Special Issue of Algorithmica on Computational Learning Theory
Expected publication date: Fall 1997
Submission deadline: January 31, 1997
Guest editors: Peter Auer and Wolfgang Maass

Besides constructing mathematical theories for Machine Learning, Computational Learning Theory analyses learning algorithms, tries to identify the necessary and sufficient amount of information required for learning, and investigates the computational complexity of learning problems. For this special issue of Algorithmica we are looking for high quality papers addressing topics such as:

* design and analysis of learning algorithms,
* complexity of learning,
* mathematical models of learning,
* supervised and unsupervised learning on neural nets,
* theory of (statistical) pattern recognition,
* ... (suitable topics are not limited to this short list).

Submitted papers will go through the usual refereeing process of Algorithmica. Authors should either send four copies of their paper to

Peter Auer
Institute for Theoretical Computer Science
University of Technology, Graz
Klosterwiesgasse 32/2
A-8010 Graz
Austria

or send their paper electronically as a PostScript or LaTeX file to silt at igi.tu-graz.ac.at.

Submissions should be formatted according to the following instructions. Manuscripts should be typed on only one side of the page with wide margins. The title page of the article should include all the authors' affiliations; the mailing address, phone and fax numbers, and email address of the corresponding author; 5-10 key words; and a detailed abstract emphasizing the main contribution of the paper. Footnotes other than those referring to the title or author affiliation should be avoided. If they are essential, they should be numbered consecutively and listed on a separate page, following the text.

If you are planning to submit a paper to the special issue, we would like you to send us a short note (silt at igi.tu-graz.ac.at), so that we are able to plan ahead more easily. This and any updated information can also be found at http://www.cis.tu-graz.ac.at/igi/pauer/silt.html. Requests can be sent to silt at igi.tu-graz.ac.at.

Peter Auer and Wolfgang Maass
From wyler at iam.unibe.ch Wed Oct 2 06:20:18 1996 From: wyler at iam.unibe.ch (Kuno Wyler) Date: Wed, 2 Oct 1996 12:20:18 +0200 Subject: POSTDOCTORAL RESEARCH FELLOWSHIP Message-ID: <9610021020.AA11589@garfield.unibe.ch>

POSTDOCTORAL RESEARCH FELLOWSHIP
--------------------------------
Neural Computing Research Group
Institute of Informatics and Applied Mathematics
University of Bern, Switzerland

The Neural Computing Research Group at the University of Bern is looking for a highly motivated individual for a two-year postdoctoral research position in the area of development of a neuromorphic perception system based on multi-sensor fusion. The aim of the project is to develop a neurobiologically plausible perception system for novelty detection in a real-world environment (e.g. quality control in industrial production lines or supervision of security zones), based on information from different sensor channels and fast learning algorithms.

Potential candidates should have strong mathematical and signal processing skills, with a background in neurobiology and neural networks. Working knowledge of programming (Matlab, LabView or C/C++) or VLSI technology is highly desirable but not required.

The position will begin January 1, 1997, with possible renewal for an additional two years. The initial salary is SFr. 60'000/year (approx. $50'000). To apply for this position, send your curriculum vitae, publication list with one or two sample publications, and two letters of reference before November 1, 1996, either by e-mail to wyler at iam.unibe.ch or by surface mail to

Dr. Kuno Wyler
Neural Computing Research Group
Institute of Informatics and Applied Mathematics
University of Bern
Neubrueckstrasse 10
CH-3012 Bern
Switzerland

From ted at SPENCER.CTAN.YALE.EDU Wed Oct 2 14:02:23 1996 From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU) Date: Wed, 2 Oct 1996 18:02:23 GMT Subject: Relating neuronal form to function Message-ID: <199610021802.SAA00877@PLANCK.CTAN.YALE.EDU>

A digital preprint, issued somewhat belatedly: at http://www.nnc.yale.edu/papers/NIPS94/nipsfin.html you will find the html version of our final draft of this paper:

Carnevale, N.T., Tsai, K.Y., Claiborne, B.J., and Brown, T.H. The electrotonic transformation: a tool for relating neuronal form to function. In: Advances in Neural Information Processing Systems, vol. 7, edited by Tesauro, G., Touretzky, D.S., and Leen, T.K. Cambridge, MA: MIT Press, 1995, p. 69-76.
Roughly 64K total, including figures.

ABSTRACT
The spatial distribution and time course of electrical signals in neurons have important theoretical and practical consequences. Because it is difficult to infer how neuronal form affects electrical signaling, we have developed a quantitative yet intuitive approach to the analysis of electrotonus. This approach transforms the architecture of the cell from anatomical to electrotonic space, using the logarithm of voltage attenuation as the distance metric. We describe the theory behind this approach and illustrate its use.

--Ted

From kak at ee.lsu.edu Wed Oct 2 15:21:25 1996 From: kak at ee.lsu.edu (Subhash Kak) Date: Wed, 2 Oct 96 14:21:25 CDT Subject: No subject Message-ID: <9610021921.AA05823@ee.lsu.edu>

The following papers may be retrieved by anonymous ftp:

1. Speed of Computation and Simulation by S.C. Kak
ftp://gate.ee.lsu.edu/pub/kak/spee.ps.Z

Abstract: This paper reviews several issues related to information, speed of computation, and simulation of a physical process. It is argued that mental processes proceed at a rate close to the optimal based on thermodynamic considerations. Problems related to the simulation of a quantum mechanical system on a computer are reviewed. Parallels are drawn between biological and adaptive quantum systems.

Just published in *Foundations of Physics*, vol 26, 1375-1386, 1996

2. Can we define levels of artificial intelligence? by S.C. Kak
ftp://gate.ee.lsu.edu/pub/kak/ai.ps.Z

Abstract: This paper argues for a graded approach to the study of machine intelligence. In contrast to the Turing Test approach, such an approach has the potential of defining incremental progress in machine intelligence research.

Just published in *Journal of Intelligent Systems*, vol 6, 133-144, 1996

From jhf at playfair.Stanford.EDU Thu Oct 3 14:21:04 1996 From: jhf at playfair.Stanford.EDU (Jerome H. Friedman) Date: Thu, 3 Oct 1996 11:21:04 -0700 Subject: TR available: Polychotomous classification. Message-ID: <199610031821.LAA26129@playfair.Stanford.EDU>

*** Technical Report available ***

ANOTHER APPROACH TO POLYCHOTOMOUS CLASSIFICATION

Jerome H. Friedman
Stanford University
(jhf at stat.stanford.edu)

ABSTRACT

An alternative solution to the K-class (K > 2, polychotomous) classification problem is proposed. It is a simple extension of K = 2 (dichotomous) classification, in that a separate two-class decision boundary is independently constructed between every pair of the K classes. Each of these boundaries is then used to assign an unknown observation to one of its two respective classes. The individual class that receives the most such assignments over these K(K-1)/2 decisions is taken as the predicted class for the observation. Motivation for this approach is provided, along with discussion of those situations where it might be expected to do better than more traditional methods. Examples are presented illustrating that substantial gains in accuracy can sometimes be achieved.

Available by ftp from: "ftp://stat.stanford.edu/pub/friedman/poly.ps.Z"

Note: this postscript does not view properly in some versions of ghostview. It does seem to print properly on nearly all postscript printers.
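The pairwise scheme in Friedman's abstract is easy to state in code. Here is a minimal sketch of K-class prediction by majority vote over the K(K-1)/2 two-class boundaries; the train_binary interface is a hypothetical stand-in for any dichotomous learner, not Friedman's implementation:

    from itertools import combinations
    from collections import Counter

    def fit_pairwise(X, y, classes, train_binary):
        # Independently train one two-class decision rule for every pair of
        # the K classes, using only the examples from those two classes.
        # train_binary(Xs, ys) must return a function mapping x to a label.
        models = {}
        for a, b in combinations(classes, 2):
            pairs = [(x, label) for x, label in zip(X, y) if label in (a, b)]
            Xs, ys = zip(*pairs)
            models[(a, b)] = train_binary(list(Xs), list(ys))
        return models

    def predict_pairwise(x, models):
        # Each of the K(K-1)/2 boundaries casts one vote; the class receiving
        # the most votes is taken as the prediction.
        votes = Counter(clf(x) for clf in models.values())
        return votes.most_common(1)[0][0]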
From gaudiano at cns.bu.edu Thu Oct 3 18:03:56 1996 From: gaudiano at cns.bu.edu (Paolo Gaudiano) Date: Thu, 3 Oct 1996 18:03:56 -0400 (EDT) Subject: IMPORTANT: WCNN'97 has merged with ICNN'97 Message-ID: <199610032203.SAA24959@ruggles.bu.edu>

IMPORTANT ANNOUNCEMENT for INNS Members and others who planned to submit papers to the World Congress on Neural Networks (WCNN97).

From the Board of Governors of the International Neural Network Society

The INNS Board of Governors urges all INNS members and other potential authors to speed up preparation of their technical papers to meet a November 15, 1996, deadline (instead of the previously announced date of January 15, 1997). There will be only one major US neural network meeting in 1997: Houston, June 9-12, 1997.

The INNS Board of Governors took a positive and definite step toward reinstituting the tradition of joint neural network meetings with the IEEE. Specifically, it was decided to replace the planned 1997 INNS meeting in Boston by offering strong technical involvement in the Houston meeting being planned by the IEEE, June 9-12, 1997. The IEEE has accepted the offer. INNS will be listed as a Technical Co-Sponsor of the meeting, and IEEE has invited the INNS Program Chair, Dan Levine, to serve as a Program Co-Chair for their meeting. The chairs are working together to develop sessions and/or tracks to accommodate certain additional technical areas traditionally of interest to INNS members.

A copy of the IEEE Call for Papers for the 1997 ICNN in Houston is attached. Please note that the paper submission deadline listed is November 1, 1996. Due to the shortness of time, IEEE is willing to allow a grace period of up to two weeks for INNS members. Thus, November 15th is to be seen as an "absolute" deadline. Additional information can be found on the IEEE ICNN web site at:

http://www.mindspring.com/~pci-inc/ICNN97

We look forward to seeing all of you in Houston.

----------------------------------------------------------------------

WELCOME TO THE BRAND NEW ICNN'97 CALL FOR PAPERS.....

IEEE-NNC and INNS, in the spirit of earlier IJCNN's, co-sponsor ICNN97 and future conferences ...

///////////////////////////////////////////
/                                         /
/             I C N N ' 9 7               /
/                                         /
///////////////////////////////////////////

New ICNN '97 Call for Papers

INTERNATIONAL CONFERENCE ON NEURAL NETWORKS (ICNN'97)
Westin Galleria Hotel, Houston, Texas, USA
Tutorials June 8, Conference June 9-12, 1997

CO-SPONSORED BY THE IEEE NEURAL NETWORKS COUNCIL AND THE INTERNATIONAL NEURAL NETWORKS SOCIETY

This conference is a major international forum for researchers, practitioners and policy makers interested in natural and artificial neural networks. Submissions of papers related, but not limited, to the topics listed below are invited.
Applications
Architectures
Associative Memory
Cellular Neural Networks
Computational Intelligence
Cognitive Science
Data Analysis
Fuzzy Neural Systems
Genetic and Annealing Algorithms
Hardware Implementation (Electronic and Optical)
Hybrid Systems
Image and Signal Processing
Intelligent Control
Learning and Memory
Machine Vision
Model Identification
Motion Vision
Motion Analysis
Neurobiology
Neurocognition
Neurosensors and Wavelets
Neurodynamics and Chaos
Optimization
Pattern Recognition
Prediction
Robotics
Sensation and Perception
Sensorimotor Systems
Speech, Hearing, and Language
System Identification
Supervised/Unsupervised Learning
Time Series Analysis

PAPER SUBMISSION: Papers must be received by the Technical Program Co-Chairs by November 1, 1996. (### PAPER DEADLINE NOV. 15 FOR INNS MEMBERS ONLY ###) Papers received after that date will be returned unopened. International authors should submit their work via Air Mail or Express Courier so as to ensure timely delivery. All submissions will be acknowledged by electronic or postal mail. Mail all papers to Prof. James M. Keller, Computer Engineering and Computer Science Department; 217 Engineering Building West; University of Missouri; Columbia, MO 65211 USA. Phone: (573) 882-7339.

CONTACT: General Chair, Prof. Nicolaos B. Karayiannis, at Karayiannis at UH.EDU; Program Co-Chairs: Prof. Daniel S. Levine, at b344dsl at utarlg.uta.edu; Prof. Keller, at keller at ece.missouri.edu; Prof. Raghu Krishnapuram, at raghu at ece.missouri.edu; or other members of the Program Committee with questions.

Six copies (one original and five copies) of the paper must be submitted. Papers must be camera-ready on 8 1/2 by 11 white paper, one-column format in Times or similar font style, 10 points or larger, with one-inch margins on all four sides. Do not fold or staple the original camera-ready copy. Four pages are encouraged; however, the paper must not exceed six pages, including figures, tables, and references, and should be written in English. Submissions that do not adhere to the guidelines above will be returned unreviewed. Centered at the top of the first page should be the complete title, author name(s), and postal and electronic mailing addresses.

In the accompanying letter, the following information must be included:

Full Title of the Paper
Technical Area (First and Second Choices)
Corresponding Author (Name, Postal and E-Mail Addresses, Telephone & FAX Numbers)
Preferred Mode of Presentation (Oral or Poster)

PAPER REVIEW: Papers will be reviewed by senior researchers in the field, and authors will be informed of the decisions by January 2, 1997. Authors of accepted papers will be allowed to revise their papers, and the final versions must be received by February 1, 1997.

BEST STUDENT PAPER AWARDS: To qualify, a student or group of students must contribute over 70% of the paper and be the PRIMARY AUTHOR(S). The submission should clearly indicate that the paper is to be considered for the best student paper award, the amount of contribution by the student, current level of study, and e-mail address.

SPECIAL SESSIONS: Proposals for plenary and panel sessions must be submitted to the Plenary/Special Sessions Chair, Jacek Zurada, by October 15, 1996.

TUTORIALS: Proposals for tutorials must be submitted to the Tutorials Chair, John Yen, by October 15, 1996.

EXHIBITOR INFORMATION: A large group of vendors and participants from industry, academia and government is expected. Potential exhibitors may request information from the Exhibits Chair, Joydeep Ghosh.
////////////////////
CONTACTS:
////////////////////

TECHNICAL: For technical information on the conference, please contact members of the Organizing Committee.

REGISTRATION: Conference Secretariat: Meeting Management, 2603 Main Street, #690, Irvine, CA 92714. Phone: (714) 752-8205. Fax: (714) 752-7444. Email: Meeting Mgt at aol.com

WEB SITE: New web site: http://www.mindspring.com/~pci-inc/ICNN97 (Mary Lou Padgett). Original web sites: Bogdan M. Wilamowski, e-mail wilam at uwyo.edu.

Comments/questions on the new ICNN'97:
General Chair, Nicolaos B. Karayiannis, Email: Karayiannis at UH.EDU
INNS Board of Governors Member: Daniel S. Levine, Email: b344dsl at utarlg.uta.edu

///////////////////////////////////////////////////////
NEW ICNN'97 Members of the Organizing Committee
///////////////////////////////////////////////////////

General Chair
Prof. Nicolaos B. Karayiannis
Dept. of Electrical & Computer Engineering
University of Houston
Houston, TX 77204-4793, USA
Phone: (713) 743-4436  Fax: (713) 743-4444
Email: Karayiannis at UH.EDU

Technical Program and Proceedings Co-Chairs
Prof. James M. Keller
Computer Engineering and Computer Science Department
217 Engineering Building West
University of Missouri
Columbia, MO 65211 USA
Phone: (573) 882-7339  Fax: (573) 882-0397
Email: keller at ece.missouri.edu

Prof. Raghu Krishnapuram
University of Missouri
Computer Engineering and Computer Science Department
Columbia, MO 65211 USA
Phone: (573) 882-7766  Fax: (573) 882-0397
Email: raghu at ece.missouri.edu

INNS Contact
Prof. Daniel S. Levine
Univ. of Texas at Arlington
Department of Psychology
Arlington, TX 76019-0408
Phone: 817-272-3598  Fax: 817-272-2364
Email: b344dsl at utarlg.uta.edu

Tutorials Chair
Prof. John Yen
Texas A&M University
Dept. of Computer Science
301 Harvey R. Bright Bldg.
College Station, TX 77843-3112 USA
Phone: (409) 845-5466  Fax: (409) 847-8578
Email: yen at cs.tamu.edu

Publicity Chair
Mary Lou Padgett
Auburn University or Padgett Computer Innovations, Inc. (PCI-INC)
1165 Owens Road
Auburn, AL 36830 USA
Phone: (334) 821-2472  Fax: (334) 821-3488
Email: m.padgett at ieee.org

Exhibits Chair
Prof. Joydeep Ghosh
University of Texas
Dept. of Electrical & Computer Engineering
Engineering Sciences Building (ENS) 516
Austin, TX 78712-1084 USA
Phone: (512) 471-8980  Fax: (512) 471-5907
Email: ghosh at ece.utexas.edu

Plenary/Special Sessions Chair
Prof. Jacek M. Zurada
University of Louisville
Dept. of Electrical Engineering
Louisville, KY 40292 USA
Phone: (502) 852-6314  Fax: (502) 852-6807
Email: jmzura02 at starbase.spd.louisville.edu

International Liaison Chair
Prof. Sankar K. Pal
Machine Intelligence Unit
Indian Statistical Institute
203 B. T. Road
Calcutta 700 035, INDIA
Phone: (0091)-33-556-8085  Fax: (0091)-33-556-6925, (0091)-33-556-6680
Email: sankar at isical.ernet.in

Finance Chair
Prof. Ben H. Jansen
University of Houston
Dept. of Electrical & Computer Engineering
Houston, TX 77204-4793 USA
Phone: (713) 743-4431  Fax: (713) 743-4444
Email: bjansen at uh.edu

Local Arrangements Chair
Prof. Heidar A. Malki
University of Houston
Electrical-Electronics Department
Houston, TX 77204-4083 USA
Phone: (713) 743-4075  Fax: (713) 743-4032
Email: malki at uh.edu

//////////////////////////////////////////////////////
ICNN'97 TUTORIAL SUBMISSIONS
//////////////////////////////////////////////////////

ICNN 97 Tutorial Proposal Submission Guideline

Tutorial proposals for the 1997 IEEE International Conference on Neural Networks (ICNN 97) are solicited. The proposal should be prepared using the format described below.
Proposal Format

The proposal should contain the following information:

- Title and expected duration of the tutorial
- Objective and expected benefit to the tutorial participants
- Target audience and their required background
- A one-paragraph justification of the timing of the tutorial. A topic that is in its infancy or is too mature is not likely to be suitable. Therefore, the timing of the proposed tutorial should be justified in terms of (1) the amount of interest in its subject area and (2) the current body of knowledge developed in the area.
- An outline of the material to be covered by the tutorial
- Qualifications and contact information (including e-mail address and FAX number) of the instructor

Submission Information

The proposal should be submitted to the Tutorial Chair of ICNN 97 at the following address by October 15, 1996, using postal mail or e-mail.

Tutorial Chair
Prof. John Yen
Center for Fuzzy Logic, Robotics, and Intelligent Systems
Department of Computer Science
301 Harvey R. Bright Bldg.
Texas A&M University
College Station, TX 77843-3112 U.S.A.
TEL: (409) 845-5466  FAX: (409) 847-8578
E-mail: yen at cs.tamu.edu

--
Mary Lou Padgett
1165 Owens Road, Auburn, AL 36830
P: (334) 821-2472  F: (334) 821-3488
m.padgett at ieee.org
Auburn University, EE Dept.
Padgett Computer Innovations, Inc. (PCI): Simulation, VI, Seminars
IEEE Standards Board -- Virtual Intelligence (VI): NN, FZ, EC, VR

From cas-cns at cns.bu.edu Fri Oct 4 14:24:57 1996 From: cas-cns at cns.bu.edu (CAs/CNS) Date: Fri, 04 Oct 1996 14:24:57 -0400 Subject: International Conference on VISION, RECOGNITION, ACTION Message-ID: <199610041824.OAA15927@cns.bu.edu>

***** CALL FOR PAPERS *****

International Conference on
VISION, RECOGNITION, ACTION: NEURAL MODELS OF MIND AND MACHINE
May 28-31, 1997

Sponsored by the Center for Adaptive Systems and the Department of Cognitive and Neural Systems, Boston University, with financial support from the Defense Advanced Research Projects Agency and the Office of Naval Research.

This conference will include a day of tutorials (May 28) followed by 3 days of 21 invited lectures and contributed lectures and posters by experts on the biology and technology of how the brain and other intelligent systems see, understand, and act upon a changing world. Meeting updates can be found at http://cns-web.bu.edu/cns-meeting/. Hotel and restaurant information can also be found there.

CONFIRMED INVITED SPEAKERS AND PROGRAM OUTLINE

WEDNESDAY, MAY 28, 1997: TUTORIALS

STEPHEN GROSSBERG, "Vision, Brain, and Technology" (3 hours, in two 1-1/2 hour lectures)

This tutorial will provide a self-contained introduction to recent models of how the brain sees. It will also illustrate how these models have been used to help solve difficult image processing problems in technology. The biological part will discuss neural models of visual form, color, depth, figure-ground separation, motion, and attention, and how these several processes cooperate to generate complex percepts. The tutorial will build a theoretical bridge between data about visual perception and data about the architecture and dynamics of the visual brain. Technological applications to image restoration, texture labeling, figure-ground separation, and related problems will be described.
GAIL CARPENTER, "Self-Organizing Neural Networks for Learning, Recognition, and Prediction: ART Architectures and Applications" (2 hours)

In 1976, Stephen Grossberg introduced adaptive resonance as a theory of human cognitive information processing. Over the past decade, the theory has led to an evolving series of real-time neural networks (ART models) that self-organize recognition categories in response to arbitrary sequences of input patterns. The intrinsic stability of an ART system allows rapid learning of new information while essential components of previously learned patterns are preserved. This tutorial will describe basic ART design principles, analytic tools, and benchmark simulations. Both unsupervised networks such as ART 1, ART 2, ART 3, and fuzzy ART, and supervised learning architectures such as ARTMAP, fuzzy ARTMAP, and ART-EMAP, will be discussed. Successful applications of the ART and ARTMAP networks, including the Boeing parts retrieval CAD system, automatic mapping from remote sensing satellite measurements, and medical database prediction, will be outlined. Computational elements of the recently developed dART and dARTMAP networks, which feature distributed code representations, will also be introduced.

ERIC SCHWARTZ, "Algorithms and Hardware for the Application of Space-Variant Active Vision to High Performance Machine Vision" (2 hours)

The term space-variance refers to the fact that all higher vertebrate visual systems are based on spatial architectures which have non-constant resolution across the visual field. It has been shown that such architectures can lead to up to four orders of magnitude of compression in the space-complexity of vision tasks. However, there are fundamental algorithmic and hardware problems involved in the exploitation of these observations in computer vision, many of which have benefited from considerable progress during the past several years. In this tutorial, a brief outline of the anatomical basis for the notion of space-variance will be provided. Several examples of space-variant active vision systems will then be discussed, focusing on the hardware specifications for sensors, optics, actuators and DSP-based parallel processors. Finally, a review of the algorithmic aspects of these systems will be presented, including issues related to early vision (i.e., edge enhancement via nonlinear diffusion methods) and to pattern matching, based on the recent development of an exponential chirp algorithm which can perform high-speed quasi-shift-invariant processing on logarithmic image architectures. Functioning examples of space-variant active vision systems based on these developments will be demonstrated, including a miniature visually guided autonomous vehicle, a machine vision system for reading the license plates of high-speed vehicles for traffic control, and a blind-prosthetic device based on a "wearable" active vision system.

********************

TUTORIAL BIOSKETCHES:

GAIL CARPENTER is professor in the departments of Cognitive and Neural Systems (CNS) and Mathematics at Boston University. She is the CNS Director of Graduate Studies; 1989 Vice-President and 1994-96 Secretary of the International Neural Network Society (INNS); organization chair of the 1988 INNS annual meeting; and a member of the editorial boards of Brain Research, IEEE Transactions on Neural Networks, Neural Computation, and Neural Networks. She has served on the INNS Board of Governors since its founding in 1987, and is a member of the Council of the American Mathematical Society.
She is a leading architect of the Adaptive Resonance Theory (ART) family of architectures for fast learning, pattern recognition, and prediction of nonstationary databases, including both unsupervised (ART 1, ART 2, ART 2-A, ART 3, fuzzy ART, distributed ART) and supervised (ARTMAP, fuzzy ARTMAP, ART-EMAP, distributed ARTMAP) ART networks. These systems have been used for a wide range of applications, such as medical diagnosis, remote sensing, automatic target recognition, mobile robots, and database management. Her earlier research includes the development, computational analysis, and applications of neural models of nerve impulse generation (Hodgkin-Huxley equations), vision, cardiac rhythms, and circadian rhythms. Professor Carpenter received her graduate training in mathematics at the University of Wisconsin and was a faculty member at MIT and Northeastern University before moving to Boston University.

STEPHEN GROSSBERG is Wang Professor of Cognitive and Neural Systems and Professor of Mathematics, Psychology, and Biomedical Engineering at Boston University. He is the founder and Director of the Center for Adaptive Systems, as well as the founder and Chairman of the Department of Cognitive and Neural Systems. He founded and was the first President of the International Neural Network Society, and also founded and is co-editor-in-chief of the Society's journal, Neural Networks. Grossberg was General Chairman of the first IEEE International Conference on Neural Networks. He is on the editorial boards of Brain Research, Journal of Cognitive Neuroscience, Behavioral and Brain Sciences, Neural Computation, IEEE Transactions on Neural Networks, and Adaptive Behavior. He organized two multi-institutional Congressional Centers of Excellence for research on biological neural networks and their technological applications. He received the IEEE Neural Network Pioneer Award, the INNS Leadership Award, and the Thinking Technology Award of the Boston Computer Society, and is a Fellow of the American Psychological Association and the Society of Experimental Psychologists. Grossberg and his colleagues have pioneered and developed a number of the fundamental principles, mechanisms, and architectures that form the foundation for contemporary neural network research. This work focuses upon the design principles and mechanisms which enable the behavior of individuals to adapt successfully in real time to unexpected environmental changes. Core models pioneered by this approach include competitive learning and self-organizing feature maps, adaptive resonance theory, masking fields, gated dipole opponent processes, associative outstars and instars, associative avalanches, nonlinear cooperative-competitive feedback networks, boundary contour and feature contour systems, and vector associative maps. Grossberg received his graduate training at Stanford University and Rockefeller University, and was a Professor at MIT before assuming his present position at Boston University.

ERIC SCHWARTZ received the PhD degree in High Energy Physics from Columbia University in 1973, followed by post-doctoral studies in neurophysiology with E. Roy John at New York Medical College. He has served as Associate Professor of Psychiatry at New York University Medical Center and Associate Professor of Computer Science at the Courant Institute of Mathematical Sciences. In 1985, he organized the first Symposium on Computational Neuroscience, and in 1989 founded Vision Applications, Inc.,
which designs and builds prototype machine vision systems based on space-variant active vision. Currently, he is Professor of Cognitive and Neural Systems, Electrical Engineering and Computer Systems, and Anatomy and Neurobiology at Boston University. His research experience includes experimental particle physics, physiology (single cell recording), anatomy (2DG, PETT, MRI), computer graphics and image processing, VLSI design, actuator design, and neural modeling.

*************************

THURSDAY, MAY 29, 1997

INVITED LECTURES
Robert Shapley, New York University: Brain Mechanisms for Visual Perception of Occlusion
George Sperling, University of California, Irvine: An Integrated Theory for Attentional Processes in Vision, Recognition, and Memory
Patrick Cavanagh, Harvard University: Direct Recognition
Stephen Grossberg, Boston University: Perceptual Grouping and Attention during Cortical Form and Motion Processing
Robert Desimone, National Institute of Mental Health: Neuronal Mechanisms of Visual Attention
Ennio Mingolla, Boston University: Visual Search
Patricia Goldman-Rakic, Yale University Medical School: The Machinery of Mind: Models from Neurobiology
Larry Squire, San Diego VA Medical Center: Brain Systems for Recognition Memory

There will also be a contributed poster session on this day.

FRIDAY, MAY 30, 1997

INVITED LECTURES
Eric Schwartz, Boston University: Multi-Scale Vortex Structure of the Brain: Anatomy as Architecture in Biological and Machine Vision
Lance Optican, National Eye Institute: Neural Control of Rapid Eye Movements
John Kalaska, University of Montreal: Reaching to Visual Targets: Cerebral Cortical Neuronal Mechanisms
Rodney Brooks, Massachusetts Institute of Technology: Models of Vision-Based Human Interaction

There will also be a contributed talk session and a reception, followed by the KEYNOTE LECTURE
Stuart Anstis, University of California, San Diego: Moving in Unexpected Directions

SATURDAY, MAY 31, 1997

INVITED LECTURES
Azriel Rosenfeld, University of Maryland: Some Viewpoints on Vision
Terrance Boult, Lehigh University: Polarization Vision
Allen Waxman, MIT Lincoln Laboratory: Opponent Color Models of Visible/IR Fusion for Color Night Vision
Gail Carpenter, Boston University: Distributed Learning, Recognition, and Prediction in ART and ARTMAP Networks
Tomaso Poggio, Massachusetts Institute of Technology: Representing Images for Visual Learning
Michael Jordan, Massachusetts Institute of Technology: Graphical Models, Neural Networks, and Variational Approximations
Andreas Andreou, Johns Hopkins University: Mixed Analog/Digital Neuromorphic VLSI for Sensory Systems
Takeo Kanade, Carnegie Mellon University: Computational VLSI Sensors: Integrating Sensing and Processing

There will also be a contributed poster session on this day.

CALL FOR ABSTRACTS: Contributed abstracts by active modelers of vision, recognition, or action in cognitive science, computational neuroscience, artificial neural networks, artificial intelligence, and neuromorphic engineering are welcome. They must be received, in English, by January 31, 1997. Notification of acceptance will be given by February 28, 1997. A meeting registration fee must accompany each Abstract. See Registration Information below for details. The fee will be returned if the Abstract is not accepted for presentation and publication in the meeting proceedings.
Each Abstract should fit on one 8 1/2 x 11" white page with 1" margins on all sides, single-column format, single-spaced, Times Roman or similar font of 10 points or larger, printed on one side of the page only. Fax submissions will not be accepted. Abstract title, author name(s), affiliation(s), and mailing and email address(es) should begin each Abstract. An accompanying cover letter should include the full title of the Abstract; the corresponding and presenting authors' names, addresses, telephone and fax numbers, and email addresses; and a preference for oral or poster presentation. (Talks will be 15 minutes long. Posters will be up for a full day. Overhead, slide, and VCR facilities will be available for talks.) Abstracts which do not meet these requirements or which are submitted with insufficient funds will be returned. The original and 3 copies of each Abstract should be sent to: CNS Meeting, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. The program committee will determine whether each paper is accepted for oral or poster presentation, or rejected.

REGISTRATION INFORMATION: Since seating at the meeting is limited, early registration is recommended. To register, please fill out the registration form below. Student registrations must be accompanied by a letter of verification from a department chairperson or faculty/research advisor. If accompanied by an Abstract or if paying by check, mail to: CNS Meeting, c/o Cynthia Bradford, Boston University, Department of Cognitive and Neural Systems, 677 Beacon Street, Boston, MA 02215. If paying by credit card, mail to the above address, or fax to (617) 353-7755.

STUDENT FELLOWSHIPS: A limited number of fellowships for PhD candidates and postdoctoral fellows are available to at least partially defray meeting travel and living costs. The deadline for applying for fellowship support is January 31, 1997. Applicants will be notified by February 28, 1997. Each application should include the applicant's CV, including name; mailing address; email address; current student status; faculty or PhD research advisor's name, address, and email address; relevant courses and other educational data; and a list of research articles. A letter from the listed faculty or PhD advisor on official institutional stationery should accompany the application and summarize how the candidate may benefit from the meeting. Students who also submit an Abstract need to include the registration fee with their Abstract. Reimbursement checks will be distributed after the meeting; the amount of each check will be determined by student need and the availability of funds.

--------------------------------------------------

REGISTRATION FORM (Please Type or Print)

Vision, Recognition, Action: Neural Models of Mind and Machine
Boston University, Boston, Massachusetts
Tutorials: May 28, 1997
Meeting: May 29-31, 1997

Mr/Ms/Dr/Prof:
Name:
Affiliation:
Address:
City, State, Postal Code:
Phone and Fax:
Email:

The conference registration fee includes the meeting program, reception, six coffee breaks, and the meeting proceedings. The tutorial registration fee covers two coffee breaks and a book of tutorial viewgraph copies.

CHECK ONE:
[ ] $55 Conference plus Tutorial (Regular)
[ ] $40 Conference plus Tutorial (Student)
[ ] $35 Conference Only (Regular)
[ ] $25 Conference Only (Student)
[ ] $30 Tutorial Only (Regular)
[ ] $25 Tutorial Only (Student)

Method of Payment:
[ ] Enclosed is a check made payable to "Boston University".
Checks must be made payable in US dollars and issued by a US correspondent bank. Each registrant is responsible for any and all bank charges.
[ ] I wish to pay my fees by credit card (MasterCard, Visa, or Discover Card only).
Type of card:
Name as it appears on the card:
Account number:
Expiration date:
Signature and date:

--------------------------------------------------

 From jung at pop.uky.edu Fri Oct 4 12:03:24 1996 From: jung at pop.uky.edu (Dr. Ranu Jung) Date: Fri, 4 Oct 1996 16:03:24 +0000 Subject: Graduate Res. Assistantship Message-ID: <199610042111.RAA24134@service1.cc.uky.edu>

DYNAMICAL ANALYSIS OF LOCOMOTOR CONTROL (Graduate Research Assistantship)

This position is part of a project funded by The Whitaker Foundation that is directed at examining the dynamical interaction between the brain and the spinal cord in the control of locomotion. The project involves experimental and computational studies, with sub-projects for: 1) characterization of the intrinsic variability in the fictive locomotor rhythm obtained in in vitro brain-spinal cord preparations of the lamprey, 2) investigation of the role of the feedforward-feedback loop between the brain and the spinal cord in short- and long-term control of locomotion (changes in stability states, responses to perturbations), and 3) mathematical model development (biophysically motivated neural networks for the central pattern generator for swimming) and analysis of the models using techniques from dynamical systems theory. The analysis of experimental data will include development of novel signal processing methods and use of techniques from nonlinear systems analysis. The project will be conducted at the Experimental and Computational Neuroscience Laboratory at the Center for Biomedical Engineering. In addition to ties within the Center, the laboratory collaborates with members of the Department of Electrical Engineering and the Department of Physiology. The position is available for up to three years of graduate work.

Applications, including a CV and the names of two references, may be sent to Dr. Ranu Jung by e-mail (jung at pop.uky.edu), by fax (606-257-1856), or by postal mail to Ranu Jung, Ph.D., 21 Wenner Gren Research Laboratory, University of Kentucky, Lexington, KY 40506-0070. Additional information can be obtained by contacting Dr. Jung by email or telephone (606-257-5931). Information about other neuroscience-related research being conducted at the Center can be obtained from the web at the URL http://www.uky.edu/RGS/CBME/CBMENeuralControl.html Details about the University of Kentucky and the Center for Biomedical Engineering can be obtained on the web at http://www.uky.edu; http://www.uky.edu/RGS/CBME. The University of Kentucky is located in the rolling hills of the Bluegrass Country and has a diverse campus. The Center for Biomedical Engineering is a multidisciplinary center in the Graduate School. We have strong ties to the Medical Center and the School of Engineering.

Ranu Jung, Ph.D. email: jung at pop.uky.edu
Center for Biomedical Engineering phone: 606-257-5931
Wenner-Gren Research Lab.
fax: 606-257-1856
University of Kentucky, Lexington, KY 40506-0070
http://www.uky.edu/RGS/CBME/jung.html

 From john at dcs.rhbnc.ac.uk Fri Oct 4 04:40:39 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Fri, 04 Oct 96 09:40:39 +0100 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199610040840.JAA32168@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real-valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-049:
----------------------------------------
Extended Grzegorczyk Hierarchy in the BSS Model of Computability
by Jean-Sylvestre Gakwaya, Universit\'e de Mons-Hainaut, Belgium

Abstract: In this paper, we give an extension of the Grzegorczyk Hierarchy to the BSS theory of computability, which is a generalization of the classical theory. We adapt some classical results related to the Grzegorczyk hierarchy to the new setting.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-050:
----------------------------------------
Learning from Examples and Side Information
by Joel Ratsaby, Technion, Israel, and Vitaly Maiorov, Technion, Israel

Abstract: We set up a theoretical framework for learning from examples and side information which enables us to compute the tradeoff between the sample complexity and information complexity for learning a target function in a Sobolev functional class $\cal F$. We use the notion of the {\em $n^{th}$ minimal radius of information} of Traub et al. \cite{traub} and combine it with VC-theory to define a new quantity $I_{n,d}({\cal F})$ which measures the minimal approximation error of a target $g\in {\cal F}$ by the family of function classes with pseudo-dimension $d$, under a given side information which consists of any $n$ measurements on the target function $g$ constrained to being linear operators. By obtaining almost tight upper and lower bounds on $I_{n,d}({\cal F})$, we find an information operator $\hat{N}_n$ which yields a worst-case error exceeding the lower bound on $I_{n,d}({\cal F})$ by no more than a factor logarithmic in $n$ and $d$. Hence, to within a logarithmic factor, it is the most efficient way of providing side information about a target $g$ under the constraint that the information operator must be linear and that the approximating class has pseudo-dimension $d$.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-051:
----------------------------------------
Complexity and Dimension
by Felipe Cucker, City University of Hong Kong; Pascal Koiran, Ecole Normale Superieure, Lyon, France; and Martin Matamala, Universidad de Chile, Chile

Abstract: In this note we define a notion of sparseness for subsets of $\Ri$ and we prove that there are no sparse $\NPadd$-hard sets. Here we deal with additive machines which branch on equality tests of the form $x=y$, and $\NPadd$ denotes the corresponding class of sets decidable in nondeterministic polynomial time. Note that this result implies the already known separation $\Padd\not=\NPadd$.
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-052:
----------------------------------------
Semi-Algebraic Complexity -- Additive Complexity of Diagonalization of Quadratic Forms
by Thomas Lickteig, Universit\"at Bonn, Germany, and Klaus Meer, RWTH Aachen, Germany

Abstract (for references see full paper): We study matrix computations such as diagonalization of quadratic forms from the standpoint of additive complexity, and relate these complexities to the complexity of matrix multiplication. While in \cite{BKL} for multiplicative complexity the customary ``thick path existence'' argument was sufficient, here for additive complexity we need the more delicate finesse of the real spectrum (cf. \cite{BCR}, \cite{Be}, \cite{KS}) to obtain a complexity relativization. After its outstanding success in semi-algebraic geometry, the power of the real spectrum method in complexity theory is becoming more and more apparent. Our discussions substantiate once more the significance and future r\^ole of this concept in the mathematical evolution of the field of real algebraic algorithmic complexity. A further technical tool concerning additive complexity is the structural transport metamorphosis from \cite{Li1}, which constitutes another use of exponentiation and logarithm as it appears in the work on additive complexity by \cite{Gr} and \cite{Ri} through the use of \cite{Kh}. We confine ourselves here to diagonalization of quadratic forms. In the forthcoming paper \cite{LM}, further such relativizations of additive complexity will be given for a series of matrix computational tasks.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-053:
----------------------------------------
Structural Risk Minimization over Data-Dependent Hierarchies
by John Shawe-Taylor, Royal Holloway, University of London, UK; Peter Bartlett, Australian National University, Australia; Robert Williamson, Australian National University, Australia; and Martin Anthony, London School of Economics, UK

Abstract: The paper introduces some generalizations of Vapnik's method of structural risk minimisation (SRM). As well as making explicit some of the details of SRM, it provides a result that allows one to trade off errors on the training sample against improved generalization performance. It then considers the more general case in which the hierarchy of classes is chosen in response to the data. A result is presented on the generalization performance of classifiers with a ``large margin''. This theoretically explains the impressive generalization performance of the maximal margin hyperplane algorithm of Vapnik and co-workers (which is the basis for their support vector machines). The paper concludes with a more general result in terms of ``luckiness'' functions, which provides a quite general way of exploiting serendipitous simplicity in observed data to obtain better prediction accuracy from small training sets. Four examples are given of such functions, including the VC dimension measured on the sample.
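To make the ``margin'' quantity in this and the following report concrete: for a linear classifier it is the smallest distance from any training point to the separating hyperplane. The sketch below is illustrative only -- it is not code from the reports, and all names in it are invented:

# Illustrative sketch of the "margin" of a linear classifier, the
# quantity whose size the large-margin results above relate to
# generalization performance.  Not code from the reports.
import numpy as np

def margin(w, b, X, y):
    # Smallest signed distance from any training point to the
    # hyperplane w.x + b = 0; labels y are +1 or -1.
    return np.min(y * (X @ w + b)) / np.linalg.norm(w)

X = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -1.0], [-3.0, -2.0]])
y = np.array([1, 1, -1, -1])
w, b = np.array([1.0, 1.0]), 0.0
print(margin(w, b, X, y))   # a larger margin yields a tighter bound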
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-054:
----------------------------------------
Confidence Estimates of Classification Accuracy on New Examples
by John Shawe-Taylor, Royal Holloway, University of London, UK

Abstract: Following recent results (NeuroCOLT Technical Report NC-TR-96-053) showing the importance of the fat shattering dimension in explaining the beneficial effect of a large margin on generalization performance, the current paper investigates how the margin on a test example can be used to give greater certainty of correct classification in the distribution-independent model. The results show that even if the classifier does not classify all of the training examples correctly, the fact that a new example has a larger margin than that on the misclassified examples can be used to give very good estimates of the generalization performance, in terms of the fat shattering dimension measured at a scale proportional to the excess margin. The estimate relies on a sufficiently large number of the correctly classified training examples having a margin roughly equal to that used to estimate generalization, indicating that the corresponding output values need to be `well sampled'.

--------------------------------------------------------------------
***************** ACCESS INSTRUCTIONS ******************

The Report NC-TR-96-001 can be accessed and printed as follows:

% ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-96-001.ps.Z
ftp> bye
% zcat nc-tr-96-001.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example,
nc-tr-96-002-title.ps.Z
nc-tr-96-002-body.ps.Z
The first contains the title page while the second contains the body of the report. The single command
ftp> mget nc-tr-96-002*
will prompt you for the files you require.

A full list of the currently available Technical Reports in the series is held in a file `abstracts' in the same directory. The files may also be accessed via WWW starting from the NeuroCOLT homepage:
http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html
or directly from the archive:
ftp://ftp.dcs.rhbnc.ac.uk/pub/neurocolt/tech_reports

Best wishes
John Shawe-Taylor

 From b0616 at nibh.go.jp Mon Oct 7 06:43:41 1996 From: b0616 at nibh.go.jp (Akio Utsugi) Date: Mon, 07 Oct 96 19:43:41 +0900 Subject: Papers and Java demo available Message-ID: <9610071043.AA01367@ipsychob.nibh.go.jp>

The following preprints can be found at http://www.aist.go.jp/NIBH/~b0616/research.html

Hyperparameter Selection for Self-Organizing Maps
A. Utsugi
To appear in Neural Computation, vol. 9, no. 2.

Abstract: The self-organizing map (SOM) algorithm for finite data is derived as an approximate MAP estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior, which is equivalent to a generalized deformable model (GDM). For this model, objective criteria for selecting hyperparameters are obtained on the basis of empirical Bayesian estimation and cross-validation, which are representative model selection methods. The properties of these criteria are compared by simulation experiments.
These experiments show that the cross-validation methods favor more complex structures than are supported by the expected log likelihood, a measure of the compatibility between a model and the data distribution. The empirical Bayesian methods, on the other hand, have the opposite bias.

Topology Selection for Self-Organizing Maps
A. Utsugi
To appear in Network: Computation in Neural Systems, vol. 7, no. 4.

Abstract: A topology-selection method for self-organizing maps (SOMs) based on empirical Bayesian inference is presented. This method is a natural extension of the hyperparameter-selection method presented earlier, in which the SOM algorithm is regarded as an estimation algorithm for a Gaussian mixture model with a Gaussian smoothing prior on the centroid parameters, and optimal hyperparameters are obtained by maximizing their evidence. In the present paper, comparisons between models with different topologies are made possible by further specifying the prior of the centroid parameters with an additional hyperparameter. In addition, a fast hyperparameter-search algorithm using the derivatives of the evidence is presented. The validity of the methods presented is confirmed by simulation experiments.

In addition, I have made a demonstration program for the above theory using a Java applet, which is accessible via a WWW browser.
---
Akio Utsugi
National Institute of Bioscience and Human-Technology

 From kasif at osprey.cs.jhu.edu Thu Oct 3 17:53:26 1996 From: kasif at osprey.cs.jhu.edu (Dr. Simon Kasif) Date: Thu, 3 Oct 1996 17:53:26 -0400 (EDT) Subject: AAAI Fall Symposium: LEARNING COMPLEX BEHAVIORS IN ADAPTIVE INTELLIGENT SYSTEMS Message-ID: <199610032153.RAA23022@osprey.cs.jhu.edu>

AAAI 1996 FALL SYMPOSIUM
LEARNING COMPLEX BEHAVIORS IN ADAPTIVE INTELLIGENT SYSTEMS
November 9--11, 1996

Additional registration information and a copy of the registration forms can be found at http://www.aaai.org/Symposia/Fall/1996/

Program Committee:
Simon Kasif (co-chair), University of Illinois-Chicago / Johns Hopkins Univ.
Stuart Russell (co-chair), University of California, Berkeley
Robert C. Berwick, Massachusetts Institute of Technology
Tom Dean, Brown University
Russell Greiner, Siemens Corporate Research
Michael Jordan, Massachusetts Institute of Technology
Leslie Kaelbling, Brown University
Daphne Koller, Stanford University
Andy Moore, Carnegie Mellon University
Dan Roth, Weizmann Institute of Science and Harvard University

ABSTRACT
The symposium will consist of invited talks, submitted papers, and panel discussions focusing on practical algorithms and theoretical frameworks that support learning to perform complex behaviors and cognitive tasks. These include tasks such as reasoning and planning with uncertainty, perception, natural language processing, and large-scale industrial applications. The underlying theme is the automated construction and improvement of complete intelligent agents, which is closer in spirit to the goals of AI than learning simple classifiers. We expect to have an interdisciplinary meeting with participation of researchers from AI, Neural Networks, Machine Learning, Uncertainty in AI, and Knowledge Representation. Some of the key issues we plan to address are:
- Development of new theoretical frameworks for analysis of broader learning tasks such as learning to reason, learning to act, and reinforcement learning.
- Scalability of learning systems such as reinforcement learning.
- Learning complex language tasks.
- Research on agents that learn to behave ``rationally'' in complex environments.
- Learning and reasoning with complex representations.
- Generating new benchmarks and devising a methodological framework for studying the empirical scalability of algorithms that learn complex behaviors.
- Empirical and theoretical analysis of the scalability of different representations and learning methods.

TENTATIVE PROGRAM **********SCHEDULE SUBJECT TO CHANGE**********

Saturday, November 9, 1996

Morning 9:00--10:30am
Opening Remarks: Simon Kasif (UIC and Johns Hopkins University) and Stuart Russell (Berkeley)
An Engineering Architecture for Intelligent Systems (45 min) Jim Albus (NIST)

10:00--10:40am, Reinforcement Learning
Temporal Abstraction in Model-Based Reinforcement Learning (20 min) R. Sutton (U.Mass)
Hierarchical Reinforcement Learning (20 min) F. Kirchner (GMD)

11:00am--12:30pm, Session II: Reinforcement Learning (cont)
Why Did TD-Gammon Work? (20 min) J. Pollack and A. Blair (Brandeis U.)
Learning Task Relevant State Spaces with a Utile Distinction Test (20 min) A. McCallum (U. Rochester)
Policy Based Clustering for Markov Decision Problems (20 min) R. Parr (Berkeley)
Optimality Criteria in Reinforcement Learning (20 min) S. Mahadevan (USF)
Discussion (10 min)

AFTERNOON

2:00--3:30pm Session III: Learning and Knowledge Representation
Learning to Reason (20 min) D. Roth (Harvard U. and Weizmann Institute)
Learning the Parameters of First Order Probabilistic Rules (20 min) D. Koller and A. Pfeffer (Stanford)
Learning Knowledge and Structure (20 min) J. Pearl
Learning Independence Structure (20 min) E. Ristad (Princeton)
Discussion (10 min)

4:00--5:30pm Session IV: Learning Complex Behaviors
A Survey of Positive Results on Automata Learning (20 min) K. Lang (NEC)
Learning to Plan (20 min) E. Baum (NEC)
World Modelling: Learning Knowledge Representation (50 min) S. Russell, J. Albus, A. Moore, E. Baum, M. Jordan

Sunday, November 10, 1996

Morning 9:00--10:30am, Session V: Learning and Knowledge Representation
A Neuroidal Architecture for Knowledge Representations (45 min) Les Valiant (Harvard)
Learning to be Competent (20 min) R. Khardon (Harvard)
Concept Learning for Geometric Reasoning (20 min) E. Sacks (Purdue)
Discussion (10 min)

11:00am--12:30pm, Session VI: Learning Principles in Natural Language
Some Advances in Transformation-Based Part-of-Speech Tagging (20 min) E. Brill (JHU)
Explaining Language Change: Complex Consequences of Simple Learning Algorithms (20 min) P. Niyogi (MIT and Lucent) and Robert Berwick (MIT)
Learning the Lexical Semantics of Spatial Motion Verbs from Camera Input (20 min) J. Siskind (Technion)
Computational Learning Theory for Natural Language: (20 min)

 From wsenn at iam.unibe.ch Tue Oct 8 04:02:16 1996 From: wsenn at iam.unibe.ch (Walter Senn) Date: Tue, 8 Oct 1996 10:02:16 +0200 Subject: paper available: Size principle and Information theory Message-ID: <9610080802.AA12239@barney.unibe.ch>

The following paper (to appear in Biol. Cybern.) is now available via aftp:

-----------------------------------------------------------------------------
Size Principle and Information Theory
=====================================
by W. Senn, K. Wyler, H.P. Clamann, J. Kleinle, H.-R. Luescher, L. Mueller

Abstract: The several hundred motor units of a single skeletal muscle may be recruited according to different strategies. Of all possible recruitment strategies, nature selected the simplest one: in most actions of a vertebrate skeletal muscle, the recruitment of its motor units is by increasing size.
This so-called Size Principle permits high precision in muscle force generation, since small muscle forces are produced exclusively by small motor units. Larger motor units are only activated once the total muscle force has already reached certain critical levels. We show that this recruitment by size is not only optimal in precision but also optimal in an information-theoretical sense. We consider the motoneuron pool as an encoder generating a parallel binary code from a common input, e.g. from the CNS, to that pool. The parallel motoneuron code is sent further down through the motoneuron axons to the muscle. We show that optimizing this parallel motoneuron code with respect to its information content is equivalent to recruiting motor units by size. Moreover, a maximal information content of the motoneuron code is equivalent to a minimal expected error in muscle force generation.

-------------------------------------------------------------------------------

Retrieval procedure:

unix> ftp iamftp.unibe.ch
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/braintool/publications
ftp> get InfoTheory.ps.gz
ftp> quit
unix> gunzip InfoTheory.ps.gz
unix> lpr InfoTheory.ps

Or just have a look at the www site http://iamwww.unibe.ch:80/~brainwww/publications/

 From kblackw1 at osf1.gmu.edu Tue Oct 8 08:33:05 1996 From: kblackw1 at osf1.gmu.edu (KIM L. BLACKWELL) Date: Tue, 8 Oct 1996 08:33:05 -0400 (EDT) Subject: post-doc and grad student announcement Message-ID: 

TWO FELLOWSHIPS AVAILABLE AT GEORGE MASON UNIVERSITY
Postdoctoral Research Fellowship
Predoctoral Research Fellowship

Applications are invited for one postdoctoral fellowship and one predoctoral fellowship in the area of development of self-organizing pattern recognition algorithms based on biological information processing in visual and IT cortex. The aims of the project are (1) to develop neurobiologically plausible algorithms of visual pattern recognition which are computationally efficient and robust, and (2) to compare the performance of the resulting algorithms with human performance in order to develop hypotheses about information processing in the brain. Evaluation of algorithms is performed using real-world problems (e.g., face recognition and optical character recognition), and by comparison to human observer pattern recognition performance. Both positions are for one year, available immediately, with possible renewal for an additional three years.

We are seeking a postdoctoral candidate with a background in both neurobiology and computer science (UNIX and C or C++). Working knowledge of information theory or mathematical statistics is highly desirable but not required. The successful applicant will be responsible for performing publishable research and for supervision of at least one graduate student. The initial stipend is $30,000/year plus fringe benefits.

We are seeking a predoctoral candidate with a background in computer science / engineering, who is interested in learning neurobiology. The initial stipend is $12,000/year plus tuition.

Our decade-old group currently consists of Drs. T.P. Vogl and K.T. Blackwell, and two graduate students, all of whom are actively involved in ongoing collaboration among neuroscientists (electrophysiologists) at NINDS/NIH, and engineers / computer scientists at GMU and the Environmental Research Institute of Michigan (ERIM), a not-for-profit research company formerly a component of the University of Michigan.
The goal of our group is to develop effective and efficient pattern recognition algorithms by reverse engineering relevant brain functions. Research activities encompass computational neurobiology, artificial neural networks, and visual psychophysics. Further information about our research and publications may be found on Dr. Thomas Vogl's homepage: http://mbti.gmu.edu/FACULTY.html

To apply for this position, send your curriculum vitae and letters of reference (in ASCII or MIME-attached PostScript formats only) to Prof. Avrama Blackwell, email: kblackw1 at osf1.gmu.edu. Snail-mail to: George Mason University, Dept. of Computational Sciences and Informatics, 4400 University Drive, Fairfax, VA 22030-4444

 From atick at monaco.rockefeller.edu Tue Oct 8 11:41:51 1996 From: atick at monaco.rockefeller.edu (Joseph Atick) Date: Tue, 8 Oct 1996 11:41:51 -0400 Subject: Table of Contents for latest issue of Network:CNS Message-ID: <9610081141.ZM12352@monaco.rockefeller.edu>

Network: Computation in Neural Systems
Volume 7, No 3
Table of Contents

TOPICAL REVIEW
439 Auditory cortical representation of complex acoustic spectra as inferred from the ripple analysis method S A Shamma

PAPERS
477 Local feature analysis: a general statistical theory for object representation P S Penev and J J Atick
501 Principal component neurons in a realistic visual environment H Shouval and Y Liu
517 Retrieval properties of attractor neural networks that obey Dale's law using a self-consistent signal-to-noise analysis A N Burkitt
533 Divergence measures based on entropy families: a tool for guiding the growth of neural networks H M A Andree, A W Lodder and A Taal
555 A phenomenological approach to salient maps and illusory contours Zhiyong Yang and Songde Ma
573 Bit error probability of an associative memory with many-to-many correspondence T Tanaka

Articles in the next issue of Network: Computation in Neural Systems will include:
A search for the optimal thresholding sequence in an associative memory H Hirase and M Recce (University College London)
Neural model of visual stereomatching: slant, transparency and clouds J A Marshall, G J Kalarickal and E B Graves (University of North Carolina)
A coupled attractor model of the rodent head direction system A D Redish, A N Elga and D S Touretzky (Carnegie Mellon University)
A single spike suffices: the simplest form of stochastic resonance in model neurons M Stemmler (California Institute of Technology)
Topology selection for self-organizing maps A Utsugi (National Institute of Bioscience and Human-Technology, Japan)

For those of you who have institutional subscriptions, check out the online version of the journal at http://www.iop.org/Journals/ne.
--
Joseph J. Atick, Rockefeller University, 1230 York Avenue, New York, NY 10021
Tel: 212 327 7421 Fax: 212 327 7422

 From b0616 at nibh.go.jp Wed Oct 9 00:33:36 1996 From: b0616 at nibh.go.jp (Akio Utsugi) Date: Wed, 9 Oct 96 00:33:36 JST Subject: Papers and Java demo available Message-ID: <9610081533.AA02768@ipsychob.nibh.go.jp>

Yesterday, I announced the availability of preprints for two papers: `Hyperparameter Selection for Self-Organizing Maps' and `Topology Selection for Self-Organizing Maps'. However, some people notified me that the postscript files could not be viewed by a postscript viewer. I then found an error in the conversion of their sources to postscript.
Now I have fixed the error and put the new files at the same location: http://www.aist.go.jp/NIBH/~b0616/research.html

In addition, I have put the same compressed postscript files on an anonymous ftp site: ftp://ripsport.aist.go.jp/nibh/b0616/

I am very sorry.
---
Akio Utsugi
National Institute of Bioscience and Human-Technology

 From ecm at skew2.kellogg.nwu.edu Tue Oct 8 15:16:16 1996 From: ecm at skew2.kellogg.nwu.edu (ecm@skew2.kellogg.nwu.edu) Date: Tue, 8 Oct 1996 14:16:16 -0500 (CDT) Subject: nonlinear principal components analysis Message-ID: <199610081916.OAA08444@skew2.kellogg.nwu.edu>

Technical Report Available

Some Theoretical Results on Nonlinear Principal Components Analysis
Edward C. Malthouse
Northwestern University
ecm at nwu.edu

Postscript file available via ftp from mkt2715.kellogg.nwu.edu in pub/ecm/nlpca.ps

A B S T R A C T

Nonlinear principal components analysis (NLPCA) neural networks are feedforward autoassociative networks with five layers. The third layer has fewer nodes than the input or output layers. NLPCA has been shown to give better solutions to several feature extraction problems than existing methods, but very little is known about the theoretical properties of this method or its estimates. This paper studies NLPCA. It proposes a geometric interpretation by showing that NLPCA fits a lower-dimensional curve or surface through the training data. The first three layers project observations onto the curve or surface, giving scores. The last three layers define the curve or surface. The first three layers are a continuous function, which I show has several implications: NLPCA ``projections'' are suboptimal, producing larger approximation error; NLPCA is unable to model curves and surfaces that intersect themselves; and NLPCA cannot parameterize curves with parameterizations having discontinuous jumps. I establish results on the identification of score values and discuss their implications for interpreting score values. I also discuss the relationship between NLPCA and principal curves and surfaces, another nonlinear feature extraction method.

Keywords: nonlinear principal components analysis, feature extraction, data compression, principal curves, principal surfaces.

 From rich at cs.umass.edu Wed Oct 9 01:56:07 1996 From: rich at cs.umass.edu (Rich Sutton) Date: Wed, 9 Oct 1996 00:56:07 -0500 Subject: A proposed standard interface for RL software Message-ID: 

To Reinforcement Learning Researchers:

We are sending this note to announce a proposed standard interface for reinforcement learning research software. The objectives of this first step towards standardization are fairly modest. The standard covers only the top-level interface between the RL agent and its environment. The idea is that you might program an agent and I might program an environment, and as long as we both follow the standard interface it should be trivial to connect them to each other. This should make it easier for people to swap and interconnect novel agents and environments. In the long run, we hope that this will lead to a library of RL environments and agents that can be used as testbeds for RL research of all kinds.

We have completed standard interfaces for C++ and Common Lisp. Documentation, interface code, and several examples are complete and available via the web, starting at http://www-anw.cs.umass.edu/People/sutton/RLinterface/RLinterface.html. Feedback and further contributions are encouraged.
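For illustration only, here is a rough sketch, in Python, of the kind of top-level agent-environment contract such a standard defines. The names below are invented for this sketch and are not the actual RLinterface API; see the URL above for the real C++ and Common Lisp definitions.

# Sketch of an agent/environment contract of the kind the proposed
# standard defines.  All names are invented for illustration.

class Environment:
    def start(self):
        """Begin an episode; return the first observation."""
        raise NotImplementedError
    def step(self, action):
        """Apply an action; return (reward, next_observation, done)."""
        raise NotImplementedError

class Agent:
    def start(self, observation):
        """Begin an episode; return the first action."""
        raise NotImplementedError
    def step(self, reward, observation):
        """Receive feedback; return the next action."""
        raise NotImplementedError

def run_episode(agent, env, max_steps=1000):
    # Any agent can be paired with any environment that honors the
    # same interface -- the point of standardizing the top level.
    obs = env.start()
    action = agent.start(obs)
    total_reward = 0.0
    for _ in range(max_steps):
        reward, obs, done = env.step(action)
        total_reward += reward
        if done:
            break
        action = agent.step(reward, obs)
    return total_reward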
Rich Sutton, rich at cs.umass.edu
Juan Carlos Santamaria, carlos at cc.gatech.edu

 From marco at idsia.ch Wed Oct 9 11:26:48 1996 From: marco at idsia.ch (Marco Wiering) Date: Wed, 9 Oct 96 16:26:48 +0100 Subject: new papers Message-ID: <9610091526.AA04183@fava.idsia.ch>

HQ-LEARNING: DISCOVERING MARKOVIAN SUBGOALS FOR NON-MARKOVIAN REINFORCEMENT LEARNING
Marco Wiering and Juergen Schmidhuber
Technical Report IDSIA-95-96, 13 pages, 108K

To solve partially observable Markov decision problems, we introduce HQ-learning, a hierarchical extension of Q-learning. HQ-learning is based on an ordered sequence of subagents, each learning to identify and solve a Markovian subtask of the total task. Each agent learns (1) an appropriate subgoal (though there is no intermediate, external reinforcement for good subgoals), and (2) a Markovian policy, given a particular subgoal. Our experiments demonstrate: (a) The system can easily solve tasks standard Q-learning cannot solve at all. (b) It can solve partially observable mazes with more states than those used in most previous POMDP work. (c) It can quickly solve complex tasks that require manipulation of the environment to free a blocked path to the goal.

-------------------------------------------

Also available: THE NEURAL HEAT EXCHANGER ("invited talk", ICONIP'96). An alternative learning method for multi-layer neural nets, inspired by the physical heat exchanger. Unlike backprop, it is truly local. It has been presented in occasional talks since 1990, and is closely related to Hinton et al.'s recent Helmholtz Machine (1995).

FTP-host: ftp.idsia.ch
FTP-files: /pub/marco/hq96.ps.gz
/pub/juergen/hq96.ps.gz
/pub/juergen/heat.ps.gz
WWW: http://www.idsia.ch/~marco/publications.html
http://www.idsia.ch/~juergen/onlinepub.html

Comments welcome!

Marco Wiering & Juergen Schmidhuber, IDSIA

 From penev at venezia.rockefeller.edu Thu Oct 10 10:12:50 1996 From: penev at venezia.rockefeller.edu (Penio Penev) Date: Thu, 10 Oct 1996 10:12:50 -0400 (EDT) Subject: paper available: Local Feature Analysis Message-ID: 

The following paper just appeared:

Network: Computation in Neural Systems 7(3), 477-500, 1996
Local Feature Analysis: A General Statistical Theory for Object Representation
Penio S. Penev and Joseph J. Atick

Low-dimensional representations of sensory signals are key to solving many of the computational problems encountered in high-level vision. Principal Component Analysis (PCA) has been used in the past to derive practically useful compact representations for different classes of objects. One major objection to the applicability of PCA is that it invariably leads to global, nontopographic representations that are not amenable to further processing and are not biologically plausible. In this paper we present a new mathematical construction---Local Feature Analysis (LFA)---for deriving local topographic representations for any class of objects. The LFA representations are sparse-distributed and, hence, are effectively low-dimensional and retain all the advantages of the compact representations of the PCA. But unlike the global eigenmodes, they give a description of objects in terms of statistically derived local features and their positions. We illustrate the theory by using it to extract local features for three ensembles---2D images of faces without background, 3D surfaces of human heads, and finally 2D faces on a background. The resulting local representations have powerful applications in head segmentation and face recognition.
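As background for the PCA comparison in the abstract above, here is a minimal sketch of the global eigenmode (PCA) representation that LFA is contrasted with. It is illustrative only -- not the authors' code -- and the data below is a random stand-in for a real image ensemble:

# Toy sketch of the global PCA ("eigenmode") representation that
# LFA is contrasted with.  Illustrative only; not the authors' code.
import numpy as np

def pca_modes(images, k):
    """images: (n_samples, n_pixels) matrix.  Returns the k leading
    eigenmodes and the k-dimensional coefficients of each image."""
    X = images - images.mean(axis=0)            # center the ensemble
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    modes = Vt[:k]                              # global, nontopographic modes
    coeffs = X @ modes.T                        # compact k-dim representation
    return modes, coeffs

rng = np.random.default_rng(0)
faces = rng.normal(size=(100, 64 * 64))         # stand-in for a face ensemble
modes, coeffs = pca_modes(faces, k=10)
# Each eigenmode spans the whole image -- the "global" property that
# LFA's sparse, topographic local features are designed to avoid.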
For those having on-line access to the electronic version of the journal, the paper can be retrieved from http://www.iop.org/EJ/welcome

There is also a version with LaTex fonts and USA spelling at our web and ftp site venezia.rockefeller.edu. The file is best viewed on a 600 dpi printer because it contains grayscale images.

FTP-host: venezia.rockefeller.edu
FTP-filename: group/papers/full/LFA/PenevPS.LFA.ps -- 600 dpi
FTP-filename: group/papers/full/LFA/PenevPS.LFA.300.ps -- 300 dpi
ftp://venezia.rockefeller.edu/group/papers/full/LFA/PenevPS.LFA.ps
ftp://venezia.rockefeller.edu/group/papers/full/LFA/PenevPS.LFA.300.ps
--
Penio Penev 1-212-327-7423

 From efiesler at idiap.ch Thu Oct 10 10:45:17 1996 From: efiesler at idiap.ch (E. Fiesler) Date: Thu, 10 Oct 1996 16:45:17 +0200 (MET DST) Subject: The Handbook of Neural Computation. Message-ID: <199610101445.QAA00353@catogne.idiap.ch>

----------------------------- PLEASE POST ---------------------------------

Announcing the
H A N D B O O K   O F   N E U R A L   C O M P U T A T I O N
___________________________________________________________
The first of three volumes in the Computational Intelligence Library
http://www.oup-usa.org/acadref/compint.html
http://www.oup-usa.org/acadref/honc.html
___________________________________________

The Handbook of Neural Computation is now available for purchase from Oxford University Press and Institute of Physics Publishing. This major new resource for the neural computing community offers a wealth of information on neural network fundamentals, models, hardware and software implementations, and applications. The handbook includes many detailed case studies describing successful applications of artificial neural networks in application areas such as perception and cognition, engineering, physical sciences, biology and biochemistry, medicine, economics, finance and business, computer science, and the arts and humanities.

One of the unique features of this handbook is that it has been designed to remain up to date: as neural network models, implementations, and applications continue to develop, the handbook will keep pace by publishing new articles and revisions to existing articles. The print edition of the handbook consists of 1,100 A4-size pages published in loose-leaf format, which will be updated by means of supplements published every six months. The electronic edition, to be launched in January 1997 but now available for advance purchase, includes the complete content of the handbook on CD-ROM, plus integrated access to the latest version of the handbook's content on the World Wide Web. Hence the handbook combines inherent updatability with the latest modes of distribution. The Handbook of Neural Computation is itself part of a larger project called the Computational Intelligence Library, which includes companion handbooks in evolutionary and fuzzy computation.

Print Edition: October 1996. 9x12 inches (230x305mm). Four-post binder expands to accommodate supplements. 1,096 pages, 400 illustrations, ISBN 0-7503-0312-3.
Electronic Edition: January 1997. CD-ROM plus World Wide Web Access. ISBN 0-7503-0411-1.
Further information, including details of a special introductory price offer valid until the end of 1996, may be obtained at:
http://www.oup-usa.org/acadref/honc.html and http://www.oup-usa.org/acadref/compint.html
or by sending e-mail or regular mail to:
Peter Titus, Oxford University Press, 198 Madison Avenue, New York, NY 10016-4314
Fax: (1) 212-726-6442 E-mail: pkt at oup-usa.org

TABLE OF CONTENTS

Preface Russell Beale and Emile Fiesler
Foreword James A Anderson
How to Use This Handbook

PART A INTRODUCTION
A1 Neural Computation: The Background
A1.1 The historical background J G Taylor
A1.2 The biological and psychological background Michael A Arbib
A2 Why Neural Networks? Paul J Werbos
A2.1 Summary
A2.2 What is a neural network?
A2.3 A traditional roadmap of artificial neural network capabilities

PART B FUNDAMENTAL CONCEPTS OF NEURAL COMPUTATION
B1 The Artificial Neuron Michael A Arbib
B1.1 Neurons and neural networks: the most abstract view
B1.2 The McCulloch-Pitts neuron
B1.3 Hopfield networks
B1.4 The leaky integrator neuron
B1.5 Pattern recognition
B1.6 A note on nonlinearity and continuity
B1.7 Variations on a theme
B2 Neural Network Topologies Emile Fiesler
B2.1 Introduction
B2.2 Topology
B2.3 Symmetry and asymmetry
B2.4 High order topologies
B2.5 Fully connected topologies
B2.6 Partially connected topologies
B2.7 Special topologies
B2.8 A formal framework
B2.9 Modular topologies Massimo de Francesco
B2.10 Theoretical considerations for choosing a network topology Maxwell B Stinchcombe
B3 Neural Network Training James L Noyes
B3.1 Introduction
B3.2 Characteristics of neural network models
B3.3 Learning rules
B3.4 Acceleration of training
B3.5 Training and generalization
B4 Data Input and Output Representations Thomas O Jackson
B4.1 Introduction
B4.2 Data complexity and separability
B4.3 The necessity of preserving feature information
B4.4 Data preprocessing techniques
B4.5 A 'case study' review
B4.6 Data representation properties
B4.7 Coding schemes
B4.8 Discrete codings
B4.9 Continuous codings
B4.10 Complex representation issues
B4.11 Conclusions
B5 Network Analysis Techniques
B5.1 Introduction Russell Beale
B5.2 Iterative inversion of neural networks and its applications Alexander Linden
B5.3 Designing analyzable networks Stephen P Luttrell
B6 Neural Networks: A Pattern Recognition Perspective Christopher M Bishop
B6.1 Introduction
B6.2 Classification and regression
B6.3 Error functions
B6.4 Generalization
B6.5 Discussion

PART C NEURAL NETWORK MODELS
C1 Supervised Models
C1.1 Single-layer networks George M Georgiou
C1.2 Multilayer perceptrons Luis B Almeida
C1.3 Associative memory networks Mohamad H Hassoun and Paul B Watta
C1.4 Stochastic neural networks Harold Szu and Masud Cader
C1.5 Weightless and other memory-based networks Igor Aleksander and Helen B Morton
C1.6 Supervised composite networks Christian Jutten
C1.7 Supervised ontogenic networks Emile Fiesler and Krzysztof J Cios
C1.8 Adaptive logic networks William W Armstrong and Monroe M Thomas
C2 Unsupervised Models
C2.1 Feedforward models Michel Verleysen
C2.2 Feedback models Gail A Carpenter (C2.2.1), Stephen Grossberg (C2.2.1, C2.2.3), and Peggy Israel Doerschuk (C2.2.2)
C2.3 Unsupervised composite networks Cris Koutsougeras
C2.4 Unsupervised ontogenetic networks Bernd Fritzke
C3 Reinforcement Learning S Sathiya Keerthi and B Ravindran
C3.1 Introduction
C3.2 Immediate reinforcement learning
C3.3 Delayed reinforcement learning
C3.4 Methods of estimating V and Q
C3.5 Delayed reinforcement learning methods
C3.6 Use of neural and other function approximators in reinforcement learning
C3.7 Modular and hierarchical architectures

PART D HYBRID APPROACHES
D1 Neuro-Fuzzy Systems Krzysztof J Cios and Witold Pedrycz
D1.1 Introduction
D1.2 Fuzzy sets and knowledge representation issues
D1.3 Neuro-fuzzy algorithms
D1.4 Ontogenic neuro-fuzzy F-CID3 algorithm
D1.5 Fuzzy neural networks
D1.6 Referential logic-based neurons
D1.7 Classes of fuzzy neural networks
D1.8 Induced Boolean and core neural networks
D2 Neural-Evolutionary Systems V William Porto
D2.1 Overview of evolutionary computation as a mechanism for solving neural system design problems
D2.2 Evolutionary computation approaches to solving problems in neural computation
D2.3 New areas for evolutionary computation research in neural systems

PART E NEURAL NETWORK IMPLEMENTATIONS
E1 Neural Network Hardware Implementations
E1.1 Introduction Timothy S Axelrod
E1.2 Neural network adaptations to hardware implementations Perry D Moerland and Emile Fiesler
E1.3 Analog VLSI implementation of neural networks Eric A Vittoz
E1.4 Digital integrated circuit implementations Valeriu Beiu
E1.5 Optical implementations I Saxena and Paul G Horan

PART F APPLICATIONS OF NEURAL COMPUTATION
F1 Neural Network Applications
F1.1 Introduction Gary Lawrence Murphy
F1.2 Pattern classification Thierry Denoeux
F1.3 Combinatorial optimization Soheil Shams
F1.4 Associative memory James Austin
F1.5 Data compression Andrea Basso
F1.6 Image processing John Fulcher
F1.7 Speech processing Kari Torkkola
F1.8 Signal processing Shawn P Day
F1.9 Control Paul J Werbos

PART G NEURAL NETWORKS IN PRACTICE: CASE STUDIES
G1 Perception and Cognition
G1.1 Unsupervised segmentation of textured images Nigel M Allinson and Hu Jun Yin
G1.2 Character recognition John Fulcher
G1.3 Handwritten character recognition using neural networks Thomas M Breuel
G1.4 Improved speech recognition using learning vector quantization Kari Torkkola
G1.5 Neural networks for alphabet recognition Mark Fanty, Etienne Barnard and Ron Cole
G1.6 A neural network for image understanding Heggere S Ranganath, Govindaraj Kuntimad and John L Johnson
G1.7 The application of neural networks to image segmentation and way-point identification James Austin
G2 Engineering
G2.1 Control of a vehicle active suspension model using adaptive logic networks William W Armstrong and Monroe M Thomas
G2.2 ATM network control by neural network Atsushi Hiramatsu
G2.3 Neural networks to configure maps for a satellite communication network Nirwan Ansari
G2.4 Neural network controller for a high-speed packet switch M Mehmet Ali and Huu Tri Nguyen
G2.5 Neural networks for optimal robot trajectory planning Dan Simon
G2.6 Radial basis function network in design and manufacturing of ceramics Krzysztof J Cios, George Y Baaklini, Laszlo Berke and Alex Vary
G2.7 Adaptive control of a negative ion source Stanley K Brown, William C Mead, P Stuart Bowling and Roger D Jones
G2.8 Dynamic process modeling and fault prediction using artificial neural networks Barry Lennox and Gary A Montague
G2.9 Neural modeling of a polymerization reactor Gordon Lightbody and George W Irwin
G2.10 Adaptive noise canceling with nonlinear filters Wolfgang Knecht
G2.11 A concise application demonstrator for pulsed neural VLSI Alan F Murray and Geoffrey B Jackson
G2.12 Ontogenic CID3 algorithm for recognition of defects in glass ribbon Krzysztof J Cios
G3 Physical Sciences
G3.1 Neural networks for control of telescope adaptive optics T K Barrett and D G Sandler
G3.2 Neural multigrid for disordered systems: lattice gauge theory as an example Martin Baeker, Gerhard Mack and Marcus Speh
G3.3 Characterization of chaotic signals using fast learning neural networks Shawn D Pethel and Charles M Bowden
G4 Biology and Biochemistry
G4.1 A neural network for prediction of protein secondary structure Burkhard Rost
G4.2 Neural networks for identification of protein coding regions in genomic DNA sequences E E Snyder and Gary D Stormo
G4.3 A neural network classifier for chromosome analysis Jim Graham
G4.4 A neural network for recognizing distantly related protein sequences Dmitrij Frishman and Patrick Argos
G5 Medicine
G5.1 Adaptive logic networks in rehabilitation of persons with incomplete spinal cord injury Aleksandar Kostov, William W Armstrong, Monroe M Thomas and Richard B Stein
G5.2 Neural networks for diagnosis of myocardial disease Hiroshi Fujita
G5.3 Neural networks for intracardiac electrogram recognition Marwan A Jabri
G5.4 A neural network to predict lifespan and new metastases in patients with renal cell cancer Craig Niederberger, Susan Pursell and Richard M Golden
G5.5 Hopfield neural networks for the optimum segmentation of medical images Riccardo Poli and Guido Valli
G5.6 A neural network for the evaluation of hemodynamic variables Tom Pike and Robert A Mustard
G6 Economics, Finance and Business
G6.1 Application of self-organizing maps to the analysis of economic situations F Blayo
G6.2 Forecasting customer response with neural networks David Bounds and Duncan Ross
G6.3 Neural networks for financial applications Magali E Azema-Barac and A N Refenes
G6.4 Valuations of residential properties using a neural network Gary Grudnitski
G7 Computer Science
G7.1 Neural networks and human-computer interaction Alan J Dix and Janet E Finlay
G8 Arts and Humanities
G8.1 Distinguishing literary styles using neural networks Robert A J Matthews and Thomas V N Merriam
G8.2 Neural networks for archaeological provenancing John Fulcher

PART H THE NEURAL NETWORK RESEARCH COMMUNITY
H1 Future Research in Neural Computation
H1.1 Mathematical theories of neural networks Shun-ichi Amari
H1.2 Neural networks: natural, artificial, hybrid H John Caulfield
H1.3 The future of neural networks J G Taylor
H1.4 Directions for future research in neural networks James A Anderson

List of Contributors
Index

__________________________________________________________________________
Emile Fiesler, Editor-in-Chief of the Handbook of Neural Computation
Research Director, IDIAP
E-mail: HoNC at IDIAP.CH
C.P. 592, CH-1920 Martigny, Switzerland
WWW-URL: http://www.idiap.ch/nn.html
ftp ftp.idiap.ch:/pub/papers/neural/README
__________________________________________________________________________

 From terry at salk.edu Thu Oct 10 12:07:32 1996 From: terry at salk.edu (Terry Sejnowski) Date: Thu, 10 Oct 1996 09:07:32 -0700 (PDT) Subject: NEURAL COMPUTATION 8:8 Message-ID: <199610101607.JAA06289@helmholtz.salk.edu>

Neural Computation - Contents - Volume 8, Number 8 - November 15, 1996

Article
Synchronized Action of Synaptically Coupled Chaotic Model Neurons Henry D. I. Abarbanel, R. Huerta, M. I. Rabinovich, N. F. Rulkov, P. F. Rowat and A. I. Selverston

Note
On the Capacity of Threshold Adalines with Limited-Precision Weights Maryhelen Stevenson and Shaheedul Huq

Letter
Binocular Receptive Field Models, Disparity Tuning, and Characteristic Disparity Yu-Dong Zhu and Ning Qian
Response Characteristics of a Low-Dimensional Model Neuron Bo Cartling
What Matters in Neuronal Locking? Wulfram Gerstner, J.
Leo van Hemmen and Jack D. Cowan Hebbian Learning of Context in Recurrent Neural Networks Nicolas Brunel Neural Correlation via Random Connections Joshua Chover Singular Perturbation Analysis of Competitive Neural Networks with Different Time-Scales Anke Meyer-Base, Frank Ohl and Henning Scheich How Dependencies between Successive Examples Affect On-Line Learning Wim Wiegerinck and Tom Heskes Autonomous Design of Artificial Neural Networks by Neurex Francois Michaud and Ruben Gonzalez Rubio ----- ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html SUBSCRIPTIONS - 1997 - VOLUME 9 - 8 ISSUES ______ $50 Student and Retired ______ $78 Individual ______ $250 Institution Add $28 for postage and handling outside USA (+7% GST for Canada). Back issues from Volumes 1-8 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada). mitpress-orders at mit.edu MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 ----- From harnad at cogsci.soton.ac.uk Thu Oct 10 11:47:43 1996 From: harnad at cogsci.soton.ac.uk (Stevan Harnad) Date: Thu, 10 Oct 96 16:47:43 +0100 Subject: Cortical Computation: BBS Call for Commentators Message-ID: <9468.9610101547@cogsci.ecs.soton.ac.uk> Below is the abstract of a forthcoming BBS target article on: IN SEARCH OF COMMON FOUNDATIONS FOR CORTICAL COMPUTATION by W.A. Phillips and W. Singer This article has been accepted for publication in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal providing Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be BBS Associates or nominated by a BBS Associate. To be considered as a commentator for this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send EMAIL to: bbs at soton.ac.uk or write to: Behavioral and Brain Sciences Department of Psychology University of Southampton Highfield, Southampton SO17 1BJ UNITED KINGDOM http://www.princeton.edu/~harnad/bbs.html http://www.cogsci.soton.ac.uk/bbs ftp://ftp.princeton.edu/pub/harnad/BBS ftp://ftp.cogsci.soton.ac.uk/pub/bbs gopher://gopher.princeton.edu:70/11/.libraries/.pujournals If you are not a BBS Associate, please send your CV and the name of a BBS Associate (there are currently over 10,000 worldwide) who is familiar with your work. All past BBS authors, referees and commentators are eligible to become BBS Associates. To help us put together a balanced list of commentators, please give some indication of the aspects of the topic on which you would bring your areas of expertise to bear if you were selected as a commentator. An electronic draft of the full text is available for inspection by anonymous ftp (or gopher or world-wide-web) according to the instructions that follow after the abstract. ____________________________________________________________________ IN SEARCH OF COMMON FOUNDATIONS FOR CORTICAL COMPUTATION W.A. Phillips and W. Singer Center for Cognitive and Computational Neuroscience, Departments of Psychology and Computing Science, University of Stirling, FK9 4LA, Scotland, UK. wap1 at forth.stir.ac.uk Max Planck Institute for Brain Research, Deutschordenstrasse 46, Postfach 71 06 62, D-60496 Frankfurt/Main, Germany.
singer at mpih-frankfurt.mpg.d400.de KEYWORDS: Cell assemblies; cerebral cortex; coordination; context; dynamic binding; functional specialization; learning; neural coding; neural computation; neuropsychology; reading; object recognition; perception; self-organization; synaptic plasticity; synchronization. ABSTRACT: This research concerns forms of coding, processing and learning that are common to many different cortical regions and cognitive functions. Local cortical processors may coordinate their activity by maximizing the transmission of information that is coherently related to the context in which it occurs, thereby forming synchronized population codes. In this coordination, contextual field (CF) connections link processors within and between cortical regions. The effects of CF connections are distinct from those mediating receptive field (RF) input. CFs can guide both learning and processing without becoming confused with RF information. Simulations explore the capabilities of networks built from local processors with both RF and CF connections. Physiological evidence for CFs, synchronization, and plasticity in RF and CF connections is described. Coordination via CFs is related to perceptual grouping, the effects of context on contrast sensitivity, amblyopia, implicit influences of color in achromatopsia, object and word perception, and the discovery of distal environmental variables and their interactions through self-organization. In cortical computation there may occur a flexible evaluation of relations between input signals by locally specialized but adaptive processors whose activity is dynamically associated and coordinated within and between regions through specialized contextual connections. -------------------------------------------------------------- To help you decide whether you would be an appropriate commentator for this article, an electronic draft is retrievable by anonymous ftp from ftp.princeton.edu according to the instructions below (the filename is bbs.phillips). Please do not prepare a commentary on this draft. Just let us know, after having inspected it, what relevant expertise you feel you would bring to bear on what aspect of the article. ------------------------------------------------------------- These files are also on the World Wide Web and the easiest way to retrieve them is with Netscape, Mosaic, gopher, archie, veronica, etc. Here are some of the URLs you can use to get to the BBS Archive: http://www.princeton.edu/~harnad/bbs.html http://www.cogsci.soton.ac.uk/bbs/Archive/bbs.phillips.html ftp://ftp.princeton.edu/pub/harnad/BBS/bbs.phillips ftp://ftp.cogsci.soton.ac.uk/pub/bbs/Archive/bbs.phillips gopher://gopher.princeton.edu:70/11/.libraries/.pujournals To retrieve a file by ftp from an Internet site, type either: ftp ftp.princeton.edu or ftp 128.112.128.1 When you are asked for your login, type: anonymous Enter password as queried (your password is your actual userid: yourlogin at yourhost.whatever.whatever - be sure to include the "@") cd /pub/harnad/BBS To show the available files, type: ls Next, retrieve the file you want with (for example): get bbs.phillips When you have the file(s) you want, type: quit ---------- Where the above procedure is not available there are two fileservers: ftpmail at decwrl.dec.com and bitftp at pucc.bitnet that will do the transfer for you.
To one or the other of them, send the following one line message: help for instructions (which will be similar to the above, but will be in the form of a series of lines in an email message that ftpmail or bitftp will then execute for you). -------------------------------------------------------------  From elman at crl.ucsd.edu Mon Oct 7 22:54:07 1996 From: elman at crl.ucsd.edu (Jeff Elman) Date: Mon, 7 Oct 96 19:54:07 PDT Subject: new book announcement: Rethinking Innateness Message-ID: <9610080254.AA25940@crl.ucsd.edu> RETHINKING INNATENESS A Connectionist Perspective on Development by Jeffrey L. Elman, Elizabeth A. Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, Kim Plunkett "Rethinking Innateness is a milestone as important as the appearance ten years ago of the PDP books. More integrated in its structure, more biological in its approach, this book provides a new theoretical framework for cognition that is based on dynamics, growth, and learning. Study this book if you are interested in how minds emerge from developing brains." Terrence J. Sejnowski Professor, Salk Institute for Biological Studies Rethinking Innateness asks the question, "What does it really mean to say that a behavior is innate?" The authors describe a new framework in which interactions, occurring at all levels, give rise to emergent forms and behaviors. These outcomes often may be highly constrained and universal, yet are not themselves directly contained in the genes in any domain-specific way. One of the key contributions of Rethinking Innateness is a taxonomy of ways in which a behavior can be innate. These include constraints at the level of representation, architecture, and timing; typically, behaviors arise through the interaction of constraints at several of these levels. The ideas are explored through dynamic models inspired by a new kind of "developmental connectionism," a marriage of connectionist models and developmental neurobiology, forming a new theoretical framework for the study of behavioral development. While relying heavily on the conceptual and computational tools provided by connectionism, Rethinking Innateness also identifies ways in which these tools need to be enriched by closer attention to biology. Neural Networks and Connectionist Modeling series A Bradford Book November 1996 ISBN 0-262-05052-8 475 pp. $45.00 (cloth) MIT Press WWW page, with ordering information: http://www-mitpress.mit.edu:80/mitp/recent-books/cog/elmrh.html  From dimitrib at MIT.EDU Sat Oct 12 01:46:39 1996 From: dimitrib at MIT.EDU (Dimitri Bertsekas) Date: Sat, 12 Oct 96 00:46:39 EST Subject: New book on Neuro-Dynamic Programming/Reinforcement Learning Message-ID: <9610120445.AA27142@MIT.MIT.EDU> Dear colleagues, our Neuro-Dynamic Programming book has just been published, and we are attaching a description. Dimitri Bertsekas (dimitrib at mit.edu) John Tsitsiklis (jnt at mit.edu) ******************************************************************** NEURO-DYNAMIC PROGRAMMING by Dimitri P. Bertsekas and John N. Tsitsiklis Massachusetts Institute of Technology (512 pages, hardcover, ISBN:1-886529-10-8, $79.00) published by Athena Scientific, Belmont, MA http://world.std.com/~athenasc/ Neuro-Dynamic Programming (NDP for short) is a recent class of reinforcement learning methods that can be used to solve very large and complex dynamic optimization problems. NDP combines simulation, learning, neural networks or other approximation architectures, and the central ideas in dynamic programming. 
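[Editorial illustration] For readers new to the area, a minimal sketch of one method the book treats (tabular Q-learning, Section 5.6 in the contents below) may be useful. The sketch is illustrative only: the toy random-walk task, reward scheme, and parameter values are assumptions invented for this example and are not taken from the book.

import numpy as np

# Illustrative sketch of tabular Q-learning; the 5-state random-walk task and
# all parameter values below are assumptions for this example.
rng = np.random.default_rng(0)
n_states, n_actions = 5, 2            # states 0..4; actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1

def step(s, a):
    """Move left/right on the chain; reaching state 4 pays +1 and ends the episode."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    done = s2 == n_states - 1
    return s2, (1.0 if done else 0.0), done

for episode in range(300):
    s, done = 0, False
    while not done:
        if rng.random() < epsilon:                 # explore
            a = int(rng.integers(n_actions))
        else:                                      # greedy, with random tie-breaking
            a = int(rng.choice(np.flatnonzero(Q[s] == Q[s].max())))
        s2, r, done = step(s, a)
        # Q-learning update: bootstrap from the greedy value of the next state
        target = r + (0.0 if done else gamma * Q[s2].max())
        Q[s, a] += alpha * (target - Q[s, a])
        s = s2

print(np.round(Q, 2))   # right-moving actions should come to dominate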
NDP provides a rigorous framework for addressing challenging and often intractable problems from a broad variety of fields. This book provides the first systematic presentation of the science and the art behind this far-reaching methodology. Among its special features, the book: ----------------------------------------------------------------------- ** Describes and unifies a large number of reinforcement learning methods, including several that are new ** Rigorously explains the mathematical principles behind NDP ** Describes new approaches to formulation and approximate solution of problems in stochastic optimal control, sequential decision making, and discrete optimization ** Illustrates through examples and case studies the practical application of NDP to complex problems from resource allocation, data communications, game playing, and combinatorial optimization ** Presents extensive background and new research material on dynamic programming and neural network training ----------------------------------------------------------------------- CONTENTS 1. Introduction 1.1. Cost-to-go Approximations in Dynamic Programming 1.2. Approximation Architectures 1.3. Simulation and Training 1.4. Neuro-Dynamic Programming 2. Dynamic Programming 2.1. Introduction 2.2. Stochastic Shortest Path Problems 2.3. Discounted Problems 2.4. Problem Formulation and Examples 3. Neural Network Architectures and Training 3.1. Architectures for Approximation 3.2. Neural Network Training 4. Stochastic Iterative Algorithms 4.1. The Basic Model 4.2. Convergence Based on a Smooth Potential Function 4.3. Convergence under Contraction or Monotonicity Assumptions 4.4. The ODE Approach 5. Simulation Methods for a Lookup Table Representation 5.1. Some Aspects of Monte Carlo Simulation 5.2. Policy Evaluation by Monte Carlo Simulation 5.3. Temporal Difference Methods 5.4. Optimistic Policy Iteration 5.5. Simulation-Based Value Iteration 5.6. Q-Learning 6. Approximate DP with Cost-to-Go Function Approximation 6.1. Generic Issues -- From Parameters to Policies 6.2. Approximate Policy Iteration 6.3. Approximate Policy Evaluation Using TD(lambda) 6.4. Optimistic Policy Iteration 6.5. Approximate Value Iteration 6.6. Q-Learning and Advantage Updating 6.7. Value Iteration with State Aggregation 6.8. Euclidean Contractions and Optimal Stopping 6.9. Value Iteration with Representative States 6.10. Bellman Error Methods 6.11. Continuous States and the Slope of the Cost-to-Go 6.12. Approximate Linear Programming 6.13. Overview 7. Extensions 7.1. Average Cost per Stage Problems 7.2. Dynamic Games 7.3. Parallel Computation Issues 8. Case Studies 8.1. Parking 8.2. Football 8.3. Tetris 8.4. Combinatorial Optimization -- Maintenance and Repair 8.5. Dynamic Channel Allocation 8.6. Backgammon Appendix A: Mathematical Review Appendix B: On Probability Theory and Markov Chains ******************************************************************** PREFACE: http://world.std.com/~athenasc/ ******************************************************************** PUBLISHER'S INFORMATION: Athena Scientific, P.O.Box 391, Belmont, MA, 02178-9998, U.S.A.
Email: athenasc at world.std.com, Tel: (617) 489-3097, FAX: (617) 489-2017 WWW Site for Info and Ordering: http://world.std.com/~athenasc/ ********************************************************************  From biehl at physik.uni-wuerzburg.de Mon Oct 14 07:02:29 1996 From: biehl at physik.uni-wuerzburg.de (Michael Biehl) Date: Mon, 14 Oct 1996 13:02:29 +0200 (MESZ) Subject: paper available: Noise Robustness in Multilayer Neural Networks Message-ID: <199610141102.NAA18288@wptx08.physik.uni-wuerzburg.de> FTP-host: ftp.physik.uni-wuerzburg.de FTP-filename: /pub/preprint/1996/WUE-ITP-96-022.ps.gz The following manuscript is now available via anonymous ftp: (See below for the retrieval procedure) ------------------------------------------------------------------ "Noise Robustness in Multilayer Neural Networks" M. Copelli, R. Eichhorn, O. Kinouchi, M. Biehl, R. Simonetti, P. Riegler, and N. Caticha Ref. WUE-ITP-96-022 Abstract The training of multilayered neural networks in the presence of different types of noise is studied. We consider the learning of realizable rules in nonoverlapping architectures. Achieving optimal generalization depends on knowledge of the noise level; however, its misestimation may lead to partial or complete loss of the generalization ability. We demonstrate this effect in the framework of online learning and present the results in terms of noise robustness phase diagrams. While for additive (weight) noise the robustness properties depend on the architecture and size of the networks, this is not so for multiplicative (output) noise. In this case we find a universal behaviour independent of the machine size for both the tree parity and committee machines. --------------------------------------------------------------------- Retrieval procedure: unix> ftp ftp.physik.uni-wuerzburg.de Name: anonymous Password: {your e-mail address} ftp> cd pub/preprint/1996 ftp> get WUE-ITP-96-022.ps.gz (*) ftp> quit unix> gunzip WUE-ITP-96-022.ps.gz e.g. unix> lp WUE-ITP-96-022.ps [8 pages] (*) can be replaced by "get WUE-ITP-96-022.ps". The file will then be uncompressed before transmission (slow!). _____________________________________________________________________ -- Michael Biehl Institut fuer Theoretische Physik Julius-Maximilians-Universitaet Wuerzburg Am Hubland D-97074 Wuerzburg email: biehl at physik.uni-wuerzburg.de homepage: http://www.physik.uni-wuerzburg.de/~biehl Tel.: (+49) (0)931 888 5865 " " " 5131 Fax : (+49) (0)931 888 5141  From dwang at cis.ohio-state.edu Mon Oct 14 09:43:47 1996 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Mon, 14 Oct 1996 09:43:47 -0400 Subject: Special Issue on Radial Basis Functions Networks Message-ID: <199610141343.JAA28477@sarajevo.cis.ohio-state.edu> Call for Papers Special Issue on Radial Basis Functions Networks for the journal NEUROCOMPUTING (http://www.elsevier.nl/locate/neucom) -------------------------------------------------------------------- Radial Basis Function (RBF) networks are among the most popular neural network architectures, owing to their fast learning and broad range of applicability. This special issue is dedicated to recent advances in all aspects of this type of architecture, including but not restricted to: * theoretical contributions, * learning methods: supervised, unsupervised, on-line, etc., * architectural enhancements, * applications in signal processing, vision, control, etc., * comparative assessments to other neural network architectures.
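[Editorial illustration] As a point of reference for the "fast learning" mentioned above: with the basis centers fixed, an RBF network's output weights can be obtained by a single linear least-squares solve. The sketch below illustrates this; the Gaussian basis, the random choice of centers, the global width heuristic, and the toy data are all assumptions of the illustration, not part of the call.

import numpy as np

# Minimal RBF regression sketch: fixed Gaussian centers + linear output layer.
# Centers taken as a random subset of the data and the crude width heuristic
# are assumptions of this illustration.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(200, 1))               # toy 1-D inputs
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)    # noisy target

centers = X[rng.choice(len(X), size=20, replace=False)]       # (20, 1)
width = np.mean(np.abs(centers[:, None] - centers[None, :]))  # crude global width

def design(X):
    # Phi[i, j] = exp(-||x_i - c_j||^2 / (2 width^2)); bias column appended
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    return np.hstack([Phi, np.ones((len(X), 1))])

# "Fast learning": output weights from one regularized least-squares solve
Phi = design(X)
w = np.linalg.solve(Phi.T @ Phi + 1e-6 * np.eye(Phi.shape[1]), Phi.T @ y)

X_test = np.linspace(-3, 3, 9)[:, None]
print(np.round(design(X_test) @ w, 2))    # approximates sin on the test grid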
Submit 6 copies of your manuscript (including keywords, biosketch of all authors, email address of corresponding author) to: V. David Sanchez A. Neurocomputing - Editor in Chief - Nova Southeastern University School of Computer and Information Sciences 3100 SW 9th Avenue Fort Lauderdale, FL 33315 U.S.A. Fax +1 (954) 723-2744 Email dsanchez at scis.nova.edu Deadline: 15 December 1996 --------------------------------------------------------------------  From ted at SPENCER.CTAN.YALE.EDU Mon Oct 14 14:05:17 1996 From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU) Date: Mon, 14 Oct 1996 18:05:17 GMT Subject: The Electrotonic Workbench Message-ID: <199610141805.SAA13101@PLANCK.CTAN.YALE.EDU> For those who are concerned with how electrical signals spread in biological neurons, it may be of interest to learn that Michael Hines's simulation program NEURON can now compute the electrotonic transform and display the results as neuromorphic or Log A vs. x renderings at the click of a button. An abstract that presents the basic concepts and illustrates this new feature is at http://www.neuron.yale.edu/papers/ebench/ebench.html Further information about this powerful suite of analytical tools will be presented at the Society for Neuroscience Meeting in Washington, DC on Wednesday, Nov. 20, 1996 at 1 PM (Carnevale, N.T., Tsai, K.Y., and Hines, M.L. The Electrotonic Workbench. Society for Neuroscience Abstracts 22:1741, 1996, abstract 687.1) This and other references of interest to those who are concerned with biologically realistic neural computation are located at http://www.neuron.yale.edu/papers/nrnrefs.html --Ted  From anderson at CS.ColoState.EDU Mon Oct 14 14:36:41 1996 From: anderson at CS.ColoState.EDU (Chuck Anderson) Date: Mon, 14 Oct 1996 12:36:41 -0600 (MDT) Subject: research assistantship for Spring, 97 Message-ID: <199610141836.MAA03873@clapton.cs.colostate.edu> The Department of Computer Science at Colorado State University, Fort Collins, CO, is looking for a Masters or Ph.D. student to fill a vacant research assistantship position funded by this NSF project: National Science Foundation, MIP-9628770, 8/96--7/99, PIs: T. Chen, Electrical Engineering, and A. von Mayrhauser and C. Anderson, Computer Science, "Behavioral Level Design Verifications Using Software Testing Techniques and Neural Networks" We plan to use network inversion and methods from optimal experiment design to guide the search for test inputs for complex software systems and hardware models coded in VHDL. We would like a student with a strong background in neural networks, experience with compiler design, and an interest in software and hardware testing. You can learn more about our department at http://www.cs.colostate.edu and about the PIs' research projects at http://www.lance.colostate.edu/depts/ee/Profiles/chen.html http://www.cs.colostate.edu/casi/avm/ http://www.cs.colostate.edu/~anderson To qualify for this position, you must be accepted into the Computer Science graduate program at CSU. You may obtain application material by sending a request via e-mail to gradinfo at cs.colostate.edu. You may also send your resume or questions via e-mail to anderson at cs.colostate.edu.
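[Editorial illustration] The posting does not describe network inversion further. As a rough, hypothetical sketch of the general idea (hold a trained network's weights fixed and run gradient descent on the input until the output matches a target), consider the following; the tiny randomly initialized network merely stands in for a trained model, and is not the project's actual system.

import numpy as np

# Hypothetical sketch of network inversion: keep the weights fixed and descend
# the error gradient with respect to the *input*. The random weights below are
# a stand-in for a trained network; everything here is assumed for illustration.
rng = np.random.default_rng(2)
W1, b1 = rng.normal(size=(4, 2)), rng.normal(size=4)
W2, b2 = rng.normal(size=(1, 4)), rng.normal(size=1)

def forward(x):
    h = np.tanh(W1 @ x + b1)
    return 1 / (1 + np.exp(-(W2 @ h + b2))), h     # sigmoid output, hidden acts

target = np.array([0.9])
x = rng.normal(size=2)                             # random starting input
for _ in range(2000):
    y, h = forward(x)
    # Backpropagate the squared error, but into x instead of the weights
    dy = (y - target) * y * (1 - y)                # sigmoid derivative
    dh = (W2.T @ dy) * (1 - h ** 2)                # tanh derivative
    x -= 0.1 * (W1.T @ dh)                         # gradient step on the input

# Output should move toward 0.9 (matching exactly only if 0.9 is reachable
# within this particular network's output range).
print(forward(x)[0])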
Chuck Anderson anderson at cs.colostate.edu Department of Computer Science http://www.cs.colostate.edu/~anderson Colorado State University office: 970-491-7491 Fort Collins, CO 80523-1873 FAX: 970-491-2466  From zador at salk.edu Tue Oct 15 02:18:19 1996 From: zador at salk.edu (Tony Zador) Date: Mon, 14 Oct 1996 23:18:19 -0700 (PDT) Subject: The Electrotonic Workbench Message-ID: <199610150618.XAA29858@helmholtz.salk.edu> Neuron is an excellent tool for computing the morphoelectrotonic transform (MET). However, for those who would prefer to use Mathematica, a toolkit is available at http://mrb.niddk.nih.gov/hagai/publ/Zador95/Abstract.html Several digitized cortical pyramidal neurons are also available at this site. The full published description of the MET can be found in: Zador AM; Agmon-Snir H; Segev I. The morphoelectrotonic transform: a graphical approach to dendritic function. Journal of Neuroscience, 1995 Mar, 15(3 Pt 1):1669-82. The Mathematica MET Toolkit was used to generate all the figures in this paper. ______________________________ Tony Zador Salk Institute MNL/S http://www.sloan.salk.edu/~zador ______________________________  From phkywong at uxmail.ust.hk Tue Oct 15 07:12:36 1996 From: phkywong at uxmail.ust.hk (Dr. Michael Wong) Date: Tue, 15 Oct 1996 19:12:36 +0800 Subject: Paper available Message-ID: <96Oct15.191239+0800_hkt.102345-24300+615@uxmail.ust.hk> The following paper, to be orally presented at NIPS'96, is now available via anonymous FTP. (7 pages) ============================================================================ FTP-host: physics.ust.hk FTP-files: pub/kymwong/rough.ps.gz Microscopic Equations in Rough Energy Landscape for Neural Networks K. Y. Michael Wong Department of Physics, The Hong Kong University of Science and Technology, Clear Water Bay, Kowloon, Hong Kong. E-mail address: phkywong at usthk.ust.hk ABSTRACT We consider the microscopic equations for learning problems in neural networks. The aligning fields of an example are obtained from the cavity fields, which are the fields that would arise if that example were absent from the learning process. In a rough energy landscape, we assume that the density of the metastable states obeys an exponential distribution, yielding macroscopic properties agreeing with the first step replica symmetry breaking solution. Iterating the microscopic equations provides a learning algorithm, which results in a higher stability than conventional algorithms. ============================================================================ FTP instructions: unix> ftp physics.ust.hk Name: anonymous Password: your full email address ftp> cd pub/kymwong ftp> get rough.ps.gz ftp> quit unix> gunzip rough.ps.gz unix> lpr rough.ps (or ghostview rough.ps)  From tgd at chert.CS.ORST.EDU Thu Oct 17 01:16:19 1996 From: tgd at chert.CS.ORST.EDU (Tom Dietterich) Date: Wed, 16 Oct 96 22:16:19 PDT Subject: Paper available: Statistical Tests for Comparing Supervised Classification Learning Algorithms Message-ID: <9610170516.AA03346@edison.CS.ORST.EDU> The following paper is available. **Hardcopies are not available** Statistical Tests for Comparing Supervised Classification Learning Algorithms Thomas G. Dietterich Department of Computer Science Oregon State University Corvallis, OR 97331 Abstract: This paper reviews five statistical tests for determining whether one learning algorithm out-performs another on a particular learning task.
These tests are compared experimentally to determine their probability of incorrectly detecting a difference when no difference exists (Type I error). Two widely-used statistical tests are shown to have high probability of Type I error in certain situations and should never be used. These tests are (a) a test for the difference of two proportions and (b) a paired-differences $t$ test based on taking several random train/test splits. A third test, a paired-differences $t$ test based on 10-fold cross-validation, exhibits somewhat elevated probability of Type I error. A fourth test, McNemar's test, is shown to have low Type I error. The fifth test is a new test, 5x2cv, based on 5 iterations of 2-fold cross-validation. Experiments show that this test also has good Type I error. The paper also measures the power (ability to detect algorithm differences when they do exist) of these tests. The 5x2cv test is shown to be slightly more powerful than McNemar's test. The choice of the best test is determined by the computational cost of running the learning algorithm. For algorithms that can be executed only once, McNemar's test is the only test with acceptable Type I error. For algorithms that can be executed ten times, the 5x2cv test is recommended, because it is slightly more powerful and because it directly measures variation due to the choice of training set. -- Thomas G. Dietterich Voice: 541-737-5559 Department of Computer Science FAX: 541-737-3014 Dearborn Hall, 303 URL: http://www.cs.orst.edu/~tgd Oregon State University Corvallis, OR 97331-3102  From l.s.smith at cs.stir.ac.uk Thu Oct 17 08:18:36 1996 From: l.s.smith at cs.stir.ac.uk (Dr L S Smith (Staff)) Date: Thu, 17 Oct 1996 13:18:36 +0100 (BST) Subject: CFP: 1st European Workshop on Neuromorphic Systems Message-ID: <19961017T121836Z.NAA01062@katrine.cs.stir.ac.uk> EWNS: 1st European Workshop on Neuromorphic Systems 29-31 August 1997, University of Stirling, Stirling, Scotland First Call for Papers Organisers: Centre for Cognitive and Computational Neuroscience University of Stirling, Scotland and Department of Electrical Engineering University of Edinburgh, Scotland First Call for Papers Neuromorphic systems are implementations in silicon of sensory and neural systems whose architecture and design are based on neurobiology. This growing area proffers exciting possibilities such as sensory systems which can compete with human senses and pattern recognition systems that can run in real-time. The area is at the intersection of many disciplines: neurophysiology, computer science and electrical engineering. Papers are requested in the following areas: Design issues in sensorineural neuromorphic systems: auditory, visual, olfactory, proprioception, sensorimotor systems. Designs for silicon implementations of neural systems. Papers not exceeding 8 A4 pages are requested: these should be sent to Dr. Leslie Smith Department of Computing Science University of Stirling Stirling FK9 4LA Scotland email: lss at cs.stir.ac.uk FAX (44) 1786 464551 We also propose to hold a number of discussion sessions on some of the questions above. Short position papers (less than 4 pages) are requested. 
Key Dates Submission Deadline: Mon 7th April 1997 Notification of Acceptance: June 2nd 1997 Further information is on the WWW at http://www.cs.stir.ac.uk/~lss/Neuromorphic/Info1.html  From pci-inc at aub.mindspring.com Sun Oct 20 12:13:36 1996 From: pci-inc at aub.mindspring.com (Mary Lou Padgett) Date: Sun, 20 Oct 1996 12:13:36 -0400 Subject: AMARI: WCNN Announcement Message-ID: <2.2.16.19961020161336.65e77652@pop.aub.mindspring.com> AMARI: WCNN Announcement PRESIDENTIAL ANNOUNCEMENT OF ACTION TAKEN BY INNS BOARD OF GOVERNORS AT WCNN'96 IN SAN DIEGO, SEPTEMBER 17, 1996: In the last few years, many members of INNS have expressed dissatisfaction with the existence of two major, competing neural network meetings in North America. A number of months ago, IEEE and INNS began informal discussions about the possibility of reinstituting cooperation. This week, we had further contacts with IEEE, and the INNS Board Members have made a strong decision to proceed to reinstitute the tradition of joint meetings, perhaps as early as 1997. There are certain details which need to be worked out, and decisions which need to be approved. In the spirit of cooperation, the INNS Board has elected to replace the planned 1997 INNS meeting in Boston by supporting the meeting in Houston next June, led this time by IEEE, and by offering a strong INNS technical involvement. WE REMIND YOU THE PAPER SUBMISSION DEADLINE FOR THE HOUSTON MEETING IS SET AT NOVEMBER 1 (NOV. 15 FOR INNS MEMBERS ONLY). We trust that we will later be able to follow a pattern of alternating the lead roles, as we did with IJCNNs in the past. We urge all of you to plan to join with us in Houston, to help support this effort to reunite the neural network community. SHUN-ICHI AMARI Note: See the new web page for the JOINT MEETING for 1997. http://www.mindspring.com/~pci-inc/ICNN97 Send papers to Dan Levine, Co-Program Chair. Mary Lou Padgett 1165 Owens Road Auburn, AL 36830 P: (334) 821-2472 F: (334) 821-3488 m.padgett at ieee.org Auburn University, EE Dept. Padgett Computer Innovations, Inc. (PCI) Simulation, VI, Seminars IEEE Standards Board -- Virtual Intelligence (VI): NN, FZ, EC, VR  From gcv at di.ufpe.br Mon Oct 21 08:51:41 1996 From: gcv at di.ufpe.br (gcv@di.ufpe.br) Date: Mon, 21 Oct 1996 10:51:41 -0200 Subject: CFP: JBCS on Neural Networks Message-ID: <199610211251.KAA21156@caruaru> Journal of The Brazilian Computer Society (JBCS) CALL FOR PAPERS Special Issue on NEURAL NETWORKS (Tentative Publication Date, July, 1997) Guest Editors: Edson de Barros Carvalho Filho, DI-UFPE and Germano Vasconcelos, DI-UFPE The Journal of the Brazilian Computer Society (JBCS) is an international quarterly publication of the Sociedade Brasileira de Computação (SBC) which serves as a forum for disseminating innovative research in all aspects of Computer Science. The approach of Neural Networks has been widely used in a large variety of problems in Computer Science and in other scientific disciplines, making this subject one of the most attractive current fields of investigation. In its 11th edition, marking the third Brazilian Symposium on Neural Networks, sponsored by SBC, the JBCS is planning a Special Issue on Neural Networks and welcomes worldwide submissions describing original ideas and new results on this topic. Papers may be practical or theoretical in nature.
Suggested topics include but are not limited to: * Theoretical Models * Algorithms and Architectures * Biological Perspectives * Cognitive Science * Hybrid Systems * Neural Networks and Fuzzy Systems * Neural Networks and Genetic Algorithms * Pattern Recognition * Control and Robotics * Optimization * Hardware Implementation * Environments & Tools * Prediction * Vision and Image Processing * Speech and Language Processing * Other Applications The purpose of this special edition is to allow fast publication of relevant and original research within six months after paper submission. INSTRUCTIONS TO AUTHORS Contributions will be considered for publication in JBCS if they have not been previously published and are not under consideration for publication elsewhere. Acceptance of papers for publication is subject to a peer review procedure and is conditional on revisions being made in response to referees' comments. Format details for final submission procedure will be provided for accepted papers. Authors must submit the final version in electronic format, and should provide hard-copy versions for refereeing. Submitted papers are to be written in English and typed double-spaced on one side of white A4 sized paper. Each paper should contain no more than 20 pages, including all text, figures and references. The final manuscript should be approximately 8000 words in length. Submissions will be judged on significance, originality, quality and clarity. Reviewing will be blind to the identities of the authors, so the authors should take care not to identify themselves in the paper: * The submitted manuscript should contain only the paper title and a short abstract. Authors' names, affiliations, and the complete mailing address (both postal and email) of the person to whom correspondence should be sent, should be included in an accompanying letter. * No acknowledgment should be included in the version for refereeing (it can be included in the final version of the paper). * There should be no reference to unpublished work by the authors (thesis, working papers). These references can be included in the final version of the paper. * When referring to one's own work, use the third person. For example, say "previously, [Peter1993] has shown that ...", instead of "the author [Peter1993] has shown that ..." All contributions will be acknowledged and refereed. SUBMISSION PROCEDURE Please submit 4 copies of the paper to the Special Issue Editor Germano Crispim Vasconcelos Departamento de Informatica Universidade Federal de Pernambuco Caixa Postal 7851 50732-970, Recife - PE Brazil email: gcv at di.ufpe.br fax: +55 81 2718438 IMPORTANT DATES Submission Deadline January 20, 1997 (PAPERS MUST BE RECEIVED BY THIS DATE - FIRM DEADLINE) Notification of Acceptance March 20, 1997 Final Electronic Version April 20, 1997 Tentative Publication Date July, 1997 For additional information on the Journal, and on how to prepare the manuscript to minimize final version delays, contact the editors or consult webpage http://www.dcc.unicamp.br/~jbcs/cameraready.html  From wray at ultimode.com Sun Oct 20 09:58:06 1996 From: wray at ultimode.com (Wray Buntine) Date: Sun, 20 Oct 1996 13:58:06 GMT Subject: tutorial slides available on graphical models, and on priors Message-ID: <199610201358.NAA17902@ultimode.com> The following slides were prepared for the NATO Workshop on Learning in Graphical Models, just held in Erice, Italy, Sept. 1996.
Actually, these are *revised* from the Erice workshop so those in attendance might like to update too. They are available over WWW but not yet available via FTP. You'll find them at my web site: http://WWW.Ultimode.com/~wray/refs.html#tutes Also, please note my new location and email address, given at the end. The graphical models and exponential family talk contains an introduction to lots of learning algorithms using graphical models. Included is an analysis with proofs of the much-hyped mean field algorithm in its general case for the exponential family (as you might have guessed, mean field is simple once you strip away the physics), and lots more. This talk also contains how I believe Gibbs, EM, k-means, and deterministic annealing should be taught (as variants of one another). Computation with the Exponential Family and Graphical Models ============================================================ This tutorial plays two roles: to illustrate how graphical models can be used to present models and algorithms for data analysis, and to present computational methods based on the Exponential Family, a central concept for computational data analysis. The Exponential Family is the most important family of probability distributions. It includes the Gaussian, the binomial, the Poisson, and others. It has unique computational properties: all fast algorithms for data analysis, to my knowledge, have some version of the exponential family at their core. Every student of data analysis, regardless of their discipline (computer science, neural nets, pattern recognition, etc.) should therefore understand the Exponential Family and the key algorithms which are based on them. This tutorial presents the Exponential Family and algorithms using graphical models: Bayesian networks and Markov networks (directed and undirected graphs). These graphical models represent independence and therefore neatly display many of the essential details of the algorithms and models based around the exponential family. Algorithms discussed are the Expectation-Maximization (EM) algorithm, Gibbs sampling, k-means, deterministic annealing, Scoring, Iterative Reweighted Least Squares (IRLS), Mean Field, and Iterative Proportional Fitting (IPF). Connections between these different algorithms are given, and the general formulations presented, in most cases, are readily adapted to arbitrary Exponential Family distributions. The priors tutorial was a *major* revision from my previous version. Those with the older version should update! Prior Probabilities =================== Prior probabilities are the center of most of the old controversies surrounding Bayesian statistics. While the Bayesian/Classical distinctions in statistics are becoming blurred, priors remain a problem, largely because of a lack of good tutorial material and the unfortunate residue of previous misunderstandings. Methods for developing and assessing priors are now routinely used by experienced practitioners. This tutorial will review some of the issues, presenting a view that incorporates decision theory and multi-agent reasoning. First, some perspectives are given: applications, theory, parameters and models, and the role of the decision being made. Then, basic principles are presented: Jaynes' Principle of Invariance is a generalization of Laplace's Principle of Indifference that allows a specification of ignorance to be converted into a prior. 
A prior for non-linear regression is developed, and the important role of a "measure", over-fitting, and priors on multinomials are presented. Issues such as subjectivity versus objectivity, Occam's razor, various paradoxes, maximum entropy methods, and the so-called non-informative & reference priors are also presented. A bibliography is included. Wray Buntine ============ Consultant to industry and NASA, and Visiting Scientist at EECS, UC Berkeley working on probabilistic methods in computer-aided design of ICs with Dr. Andy Mayer and Prof. Richard Newton. Ultimode Systems, LLC Phone: (415) 324 3447 555 Bryant Str. #186 Email: wray at ultimode.com Palo Alto, 94301 http://WWW.Ultimode.com/~wray/  From krista at nucleus.hut.fi Mon Oct 21 03:39:53 1996 From: krista at nucleus.hut.fi (Krista Lagus) Date: Mon, 21 Oct 1996 10:39:53 +0300 (EET DST) Subject: (1st CFP) WSOM - Workshop on Self-Organizing Maps Message-ID: CALL FOR PAPERS W O R K S H O P O N S E L F - O R G A N I Z I N G M A P S Helsinki University of Technology, Finland June 4-6, 1997 The Self-Organizing Map (SOM) with its variations is the most popular artificial neural network algorithm in the unsupervised learning category. Over 2000 applications have been reported in the open literature, and more and more industrial projects are using the SOM as a tool for solving hard real-world problems. The WORKSHOP ON SELF-ORGANIZING MAPS (WSOM) is the first international meeting to be solely dedicated to the theory and applications of the SOM. People from universities, research institutes, industry, and commerce are invited to join the Workshop and share their views and expertise on the use of the SOM. WORKSHOP The workshop will consist of a tutorial short course on SOM given by prof. Teuvo Kohonen, an opening plenary talk given by prof. Helge Ritter, presentations on various aspects of the SOM given by internationally known experts, and technical contributions. Also poster presentations will be arranged. SCOPE The scope of the Workshop is the Self-Organizing Map with its variants, including but not limited to - theory and analysis; - engineering applications like pattern recognition, process control, and telecommunications; - data analysis and financial applications; - information retrieval and natural language processing applications; - implementations. SUBMISSION Prospective authors are invited to submit papers on any aspect of SOM, including the areas listed above. The paper submission deadline will be March 1, 1997. Detailed information about the submission procedure, as well as registration, accommodation, etc. will soon be available on the Web page http://nucleus.hut.fi/wsom/ CO-OPERATING SOCIETIES This is a satellite workshop of the 10th Scandinavian Conference on Image Analysis (SCIA) to be held on June 9 to 11 in Lappeenranta, Finland, arranged by the Pattern Recognition Society of Finland. Other co-operating societies are the European Neural Network Society (ENNS), IEEE Finland Section, and the Finnish Artificial Intelligence Society. Teuvo Kohonen, WSOM Chairman Erkki Oja, Program Chairman Olli Simula, Organization Chairman  From dwang at cis.ohio-state.edu Mon Oct 21 11:25:32 1996 From: dwang at cis.ohio-state.edu (DeLiang Wang) Date: Mon, 21 Oct 1996 11:25:32 -0400 Subject: Tech report on Double Vowel Segregation Message-ID: <199610211525.LAA15163@sarajevo.cis.ohio-state.edu> A new technical report is available by FTP: MODELLING THE PERCEPTUAL SEGREGATION OF DOUBLE VOWELS WITH A NETWORK OF NEURAL OSCILLATORS Guy J. 
Brown (1) and DeLiang Wang (2) (1) Department of Computer Science, University of Sheffield, 211 Portobello Street, Sheffield S8 0ET, UK Email: guy at dcs.shef.ac.uk (2) Laboratory for AI Research, Department of Computer Science and Information Science and Center for Cognitive Science, The Ohio State University, Columbus, OH 43210-1277, USA Email: dwang at cis.ohio-state.edu ABSTRACT The ability of listeners to identify two simultaneously presented vowels can be improved by introducing a difference in fundamental frequency (F0) between the vowels. We propose an explanation for this phenomenon in the form of a computational model of concurrent sound segregation, which is motivated by neurophysiological evidence of oscillatory firing activity in the auditory cortex and thalamus. More specifically, the model represents the perceptual grouping of auditory frequency channels as synchronised (phase-locked with zero phase lag) oscillations in a neural network. Computer simulations on a vowel set used in psychophysical studies confirm that the model qualitatively matches the performance of human listeners; vowel identification performance increases with increasing difference in F0. Additionally, the model is able to replicate other findings relating to the perception of harmonic complexes in which one component is mistuned. OBTAINING THE REPORT BY FTP The report is available by anonymous FTP from the site ftp.dcs.shef.ac.uk (enter the word "anonymous" when you are asked for a login name). Then enter: cd /share/spandh/pubs/brown followed by get bw-report96.ps.Z The file is 2.9 MB of compressed postscript. If you have trouble downloading or viewing the file, or if you would like a paper copy to be sent to you, please email Guy Brown (guy at dcs.shef.ac.uk)  From cabestan at eel.upc.es Mon Oct 21 05:27:26 1996 From: cabestan at eel.upc.es (Joan Cabestany) Date: Mon, 21 Oct 1996 10:27:26 +0100 Subject: IWANN'97 final announcement and Call for Papers Message-ID: <199610210826.KAA12078@petrus.upc.es> Dear colleague, please find herewith the Call for Papers and final Announcement of IWANN'97 (International Work-Conference on Artificial and Natural Neural Networks) to be held in Lanzarote - Canary Islands (Spain) next June 4-6, 1997. If you need more details, feel free to contact me: cabestan at eel.upc.es I am sorry if you receive this message from several distribution lists. Yours, Joan Cabestany **************************************************************************** *** IWANN'97 - Final Call for Papers INTERNATIONAL WORK-CONFERENCE ON ARTIFICIAL AND NATURAL NEURAL NETWORKS Biological and Artificial Architectures, Technologies and Applications Contact URL http://petrus.upc.es/iwann97.html Lanzarote - Canary Islands, Spain June 4-6, 1997 ORGANIZED BY Universidad Nacional de Educacion a Distancia (UNED), Madrid Universidad de Las Palmas de Gran Canaria Universidad Politecnica de Catalunya Universidad de Malaga Universidad de Granada IN COOPERATION WITH Asociacion Española de Redes Neuronales (AERN) IFIP Working Group in Neural Computer Systems, WG10.6 Spanish RIG IEEE Neural Networks Council UK&RI Communication Chapter of IEEE IWANN'97. The fourth International Workshop on Artificial Neural Networks, now changed to International Work-Conference on Artificial and Natural Neural Networks, will take place in Lanzarote, Canary Islands (Spain) from 4 to 6 of June, 1997. This biennial meeting, with its focus on biologically inspired and more realistic models of natural neurons and neural nets and new hybrid computing paradigms, was first held in Granada (1991), Sitges (1993) and Torremolinos, Malaga (1995) with a growing number of participants from more than 20 countries and with high quality papers published by Springer-Verlag (LNCS 540, 686 and 930).
This biennial meeting with focus on biologically inspired and more realistic models of natural neurons and neural nets and new hybrid computing paradigms, was first held in Granada (1991), Sitges (1993) and Torremolinos, Malaga (1995) with a growing number of participants from more than 20 countries and with high quality papers published by Springer-Verlag (LNCS 540, 686 and 930). SCOPE Neural computation is considered here in the dual perspective of analysis (as science) and synthesis (as engineering). As a science of analysis, neural computation seeks to help neurology, brain theory, and cognitive psychology in the understanding of the functioning of the Nervous Systems by means of computational models of neurons, neural nets and subcellular processes, with the possibility of using electronics and computers as a "laboratory" in which cognitive processes can be simulated and hypothesis proven without having to act directly upon living beings. As a synthesis engineering, neural computation seeks to complement the symbolic perspective of Artificial Intelligence (AI), using the biologically inspired models of distributed, self-programming and self-organizing networks, to solve those non-algorithmic problems of function approximation and pattern classification having to do with changing and only partially known environments. Fault tolerance and dynamic reconfiguration are other basic advantages of neural nets. In the sea of meetings, congresses and workshops on ANN's, IWANN'97 focus on the three subjects that most concern us: (1) The seeking of biologically inspired new models of local computation architectures and learning along with the organizational principles behind of the complexity of intelligent behavior. (2) The searching for some methodological contributions in the analysis and design of knowledge-based ANN's, instead of "blind nets", and in the reduction of the knowledge level to the sub-symbolic implementation level. (3) The cooperation with symbolic AI, with the integration of connectionist and symbolic processing in hybrid and multi-strategy approaches for perception, decision and control tasks, as well as for case-based reasoning, concepts formation and learning. To contribute to the posing and partially solving of these global topics, IWANN'97 offer a brain-storming interdisciplinary forum in advanced Neural Computation for scientists and engineers from biology neuroanatomy, computational neurophysiology, molecular biology, biophysics, linguistics, psychology, mathematics and physics, computer science, artificial intelligence, parallel computing, analog and digital electronics, advanced computer architectures, reverse engineering, cognitive sciences and all the concerned applied domains (sensory systems and signal processing, monitoring, diagnosis, classification and decision making, intelligent control and supervision, perceptual robotics and communication systems). Contributions on the following and related topics are welcome. TOPICS 1. Biological Foundations of Neural Computation: Principles of brain organization. Neuroanatomy and Neurophysiology of synapses, dendro-dendritic contacts, neurons and neural nets in peripheral and central areas. Plasticity, learning and memory in natural neural nets. Models of development and evolution. The computational perspective in Neuroscience. 2. Formal Tools and Computational Models of Neurons and Neural Nets Architectures: Analytic and logic models. Object oriented formulations. 
Hybrid knowledge representation and inference tools (rules and frames with analytic slots). Probabilistic, Bayesian and fuzzy models. Energy related models. 3. Plasticity Phenomena (Maturing, Learning and Memory): Biological mechanisms of learning and memory. Computational formulations using correlational, reinforcement and minimization strategies. Conditioned reflex and associative mechanisms. Inductive-deductive and abductive symbolic-subsymbolic formulations. Generalization. 4. Complex Systems Dynamics: Self-organization, cooperative processes, autopoiesis, emergent computation, synergetics, evolutive optimization and genetic algorithms. Self-reproducing nets. Self-organizing feature maps. Simulated evolution. Social organization phenomena. 5. Cognitive Science and AI: Hybrid knowledge based systems. Neural networks for knowledge modeling, acquisition and refinement. Natural language understanding. Concepts formation. Spatial and temporal planning and scheduling. Intentionality. 6. Neural Nets Simulation, Emulation and Implementation: Environments and languages. Parallelization, modularity and autonomy. New hardware implementation strategies (FPGA's, VLSI, neurodevices). Evolutive architectures. Real systems validation and evaluation. 7. Methodology for Data Analysis, Task Selection and Nets Design. 8. Neural Networks for Perception: Biologically inspired preprocessing. Low level processing, source separation, sensor fusion, segmentation, feature extraction, adaptive filtering, noise reduction, texture, stereo correspondence, motion analysis, speech recognition, artificial vision, and hybrid architectures for multisensorial perception. 9. Neural Networks for Communications Systems: Modems and codecs, network management, digital communications. 10. Neural Networks for Control and Robotics: Systems identification, motion planning and control, adaptive, predictive and model-based control systems, navigation, real time applications, visuo-motor coordination. LOCATION BEATRIZ Costa Teguise Hotel Costa Teguise Lanzarote - Canary Islands, June 4-6, 1997 Lanzarote, the most northerly and easterly island of the Canarian archipelago, is at the same time the most unusual one, and it exerts a strange fascination on those who visit it: the fast succession of fire, sea and colors contrasts with craters, green valleys and unforgettable warm, golden beaches. LANGUAGE English will be the official language of IWANN'97. Simultaneous translation will not be provided. INVITED SPEAKERS Prof. Marvin Minsky - Neuronal and Symbolic Perspectives of AI MIT (USA) Prof. Reinhard Eckhorn - Models of Visual Processing Philips University (D) Prof. Valentino Braitenberg - Sensory-Motor Integration Institute for Biological Cybernetics (D) Dr. Javier De Felipe - Microcircuits in the Brain Instituto Cajal. CSIC (E) Dr. Paolo Ienne - Digital Architectures in Neurocomputers EPFL (CH) CALL FOR PAPERS The Programme Committee seeks original papers on the above mentioned topics. Authors should pay special attention to explaining the theoretical and technical choices involved, point out possible limitations and describe the current state of their work. All received papers will be reviewed by the Programme Committee. Accepted papers may be presented orally or as poster panels; however, all accepted contributions will be published in full length (LNCS Springer-Verlag Series). INSTRUCTIONS TO AUTHORS Five copies (one original and four copies) of the paper must be submitted.
The paper must not exceed 10 pages, including figures, tables and references. It should be written in English on A4 paper, in a Times font, 10 point in size, without page numbers (please, indicate the order by numbering the reverse side of the sheets with a pencil). The printing area should be 12.2 x 19.3 cm. The text should be justified to occupy the full line width, using one-line spacing. Headings (12 point, bold) should be capitalized and aligned to the left. Title (14 point, bold) should be centered. Abstract and affiliation (9 point) must be also included. If possible, please make use of the latex/plaintex style file available in the WWW page: http://petrus.upc.es/iwann97.html, where you can get more detailed instructions to the authors. In addition, one sheet must be attached including: Title and authors' names, list of five keywords, the Topic the paper fits best, preferred presentation (oral or poster) and the corresponding author (name, postal and e-mail addresses, phone and fax numbers). CONTRIBUTIONS MUST BE SENT TO: Prof. Jose Mira Dpto. Inteligencia Artificial, UNED Senda del Rey, s/n E - 28040 MADRID, Spain E-mail: iwann97 at dia.uned.es Phone: + 34 1 3987155 Fax: + 34 1 3986697 IMPORTANT DATES Final Date for Submission: January 15, 1997 Notification of Acceptance: March 1997 Work-Conference: June 4-6, 1997 INSCRIPTION, TRAVEL AND HOTEL INFORMATION ULTRAMAR EXPRESS Diputacio, 238, 3 E-08007 BARCELONA, Spain Phone: +34 3 4827140 Fax: +34 3 4827158 E-mail: gcasanova at uex.es IBERIA and AVIACO will be the official carriers for IWANN'97, offering special rates and conditions. International code for special rate: BT71B21MPE0038. (See WWW page for special forfait rates) POSSIBILITY OF GRANTS The Organization Committee of IWANN'97 will provide a limited number of full or partial grants. Please contact the WWW address for further information. STEERING COMMITTEE Joan Cabestany, Universidad Politecnica de Catalunya (E) Jose Mira Mira, UNED (E) Alberto Prieto, Universidad de Granada (E) Francisco Sandoval, Universidad de Malaga (E) ORGANIZATION COMMITTEE Joan Cabestany and Francisco Sandoval (E), Co-chairmen Michael Arbib, University of Southern California (USA) Senen Barro, Universidad de Santiago (E) Gabriel de Blasio, Univ. de Las Palmas de Gran Canaria (E) Trevor Clarkson, King's College London (UK) Ana Delgado, UNED (E) Dante Del Corso, Politecnico de Torino (I) Belen Esteban-Sanchez, ITC (E) Tamas D. Gedeon, University of New South Wales (AUS) Karl Goser, Universität Dortmund (G) Jeanny Herault, Institute National Polytechnique de Grenoble (F) Jaap Hoekstra, Delft University of Technology (NL) Shunsuke Sato, Osaka University (Jp) Igor Shevelev, Russian Academy of Science (R) Juan Sigüenza, IIC (E) Cloe Taddei-Ferretti, Istituto di Cibernetica, CNR (I) Marley Vellasco, Pontificia Universidade Catolica do Rio de Janeiro (Br) Michel Verleysen, Universite Catholique de Louvain-la-Neuve (B) PROGRAMME COMMITTEE Jose Mira and Alberto Prieto, Co-chairmen (E) Igor Aleksander, Imperial Coll. of Science Technology and Medicine (UK) Jose Ramon Alvarez, UNED (E) Shun-Ichi Amari, University of Tokyo (Jp) Xavier Arreguit, CSEM (CH) François Blayo, Univ. Paris 1 (F) Leon Chua, University of California (USA) Marie Cottrell, Univ. Paris 1 (F) Akira Date, Tokyo University of Agriculture and Technology (Jp) Antonio Diaz-Estrella, Universidad de Malaga (E) M. Duranton, Phillips (F) Reinhard Eckhorn, Philips University (D) Kunihiko Fukushima, Osaka University (Jp) Patrik Garda, Univ.
Pierre et Marie Curie (F) Anne Guerin-Dugue, INPG (F) Martin Hasler, EPFL (CH) Mohamad H. Hassoun, Wayne State University (USA) Gonzalo Joya, Universidad de Malaga (E) Simon Jones, IERI Loughborough Univ. of Tech. (UK) Christian Jutten, INPG (F) H. Klar, Technische Universität Berlin (G) K. Nicholas Leibovic, Univ. Buffalo (USA) J. Lettvin, MIT (USA) Francisco Javier Lopez Aligue, Universidad de Extremadura (E) Jordi Madrenas, UPC (E) Pierre Marchal, CSEM (CH) Juan Manuel Moreno, UPC (E) Josef A. Nossek, Technische Universität München (G) Julio Ortega, Universidad de Granada (E) Francisco Jose Pelayo, Universidad de Granada (E) Franz Pichler, Johannes Kepler Universität Linz (A) Vicenzo Piuri, Politecnico di Milano (I) Leonardo Reyneri, Politecnico di Torino (I) Tamas Roska, Hungarian Academy of Sciences (H) E. Sanchez-Sinencio, Texas A&M Univ. (USA) J. Simoes Da Fonseca, Faculty of Medicine of Lisbon (P) John G. Taylor, King's College London (UK) Carme Torras, Instituto de Cibernetica del CSIC-UPC (E) Philip Treleaven, University College London (UK) Elena Valderrama, Centro Nacional de Microelectronica (E)  From dsilver at csd.uwo.ca Tue Oct 22 15:59:40 1996 From: dsilver at csd.uwo.ca (Danny L. Silver) Date: Tue, 22 Oct 1996 15:59:40 -0400 (EDT) Subject: Preprint on Inductive Transfer in ANNs available Message-ID: <9610221959.AA05229@church.ai.csd.uwo.ca.csd.uwo.ca> A preprint of the article: "Parallel Transfer of Task Knowledge Using Dynamic Learning Rates Based on a Measure of Relatedness" can be found at: http://www.csd.uwo.ca/~dsilver/CSetaMTL.ps.Z The article has been accepted for publication in the Connection Science special issue on "Transfer in Inductive Systems" due out this fall. ABSTRACT With a distinction made between two forms of task knowledge transfer, {\em representational} and {\em functional}, $\eta$MTL, a modified version of the MTL method of functional (parallel) transfer, is introduced. The $\eta$MTL method employs a separate learning rate, $\eta_k$, for each task output node $k$. $\eta_k$ varies as a function of a measure of relatedness, $R_k$, between the $k$th task and the primary task of interest. Results of experiments demonstrate the ability of $\eta$MTL to dynamically select the most related source task(s) for the functional transfer of prior domain knowledge. The $\eta$MTL method of learning is nearly equivalent to standard MTL when all parallel tasks are sufficiently related to the primary task, and is similar to single task learning when none of the parallel tasks are related to the primary task. If you have any difficulties with transmission or wish to receive the article by another means, please contact me as below. Danny -- ========================================================================= = Daniel L. Silver University of Western Ontario, London, Canada = = N6A 3K7 - Dept. of Comp. Sci. = = dsilver at csd.uwo.ca H: (902)582-7558 O: (902)494-1813 = = WWW home page ....
From halici at rorqual.cc.metu.edu.tr Wed Oct 23 02:00:40 1996
From: halici at rorqual.cc.metu.edu.tr (ugur halici)
Date: Wed, 23 Oct 1996 10:00:40 +0400 (MEDT)
Subject: Special Session on Pattern Recog., Image Processing & Computer Vision
Message-ID:

********************************************************
Call for Summaries and Participation

SPECIAL SESSION on
---------------------------------------------------------
PATTERN RECOGNITION, IMAGE PROCESSING and COMPUTER VISION
---------------------------------------------------------

2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
Sheraton Imperial Hotel & Convention Center, Research Triangle Park, North Carolina / March 2-5, 1997
*********************************************************

A special session on Pattern Recognition, Image Processing and Computer Vision is to be organized within the 2nd International Conference on Computational Intelligence and Neuroscience. Papers are sought on neural network applications or biologically inspired approaches related to pattern recognition, image processing and computer vision.

Prospective authors are requested to contact the session organizer,

Ugur Halici
Dept. of Electrical Engineering
Middle East Technical University, Ankara, 06531, Turkey
fax: (+90) 312 210 12 61
email: halici at rorqual.cc.metu.edu.tr

by email or fax as soon as possible, in order to indicate their interest and receive information on the paper format. Papers for this session will be accepted on the basis of summaries, which must be received before November 30, 1996.

ICCIN is part of the Third Joint Conference on Information Sciences (JCIS), Sheraton Imperial Hotel & Convention Center, Research Triangle Park, North Carolina / March 2-5, 1997

ICCIN Conference Chairs:
--------------------------
Subhash C. Kak, Louisiana State University
Jeffrey P. Sutton, Harvard University

JCIS Honorary Chairs:
-------------------------
Lotfi A. Zadeh & Azriel Rosenfeld

Plenary Speakers:
----------------
James S. Albus / Jim Anderson / Roger Brockett / Earl Dowell / David E. Goldberg / Stephen Grossberg / Y. C. Ho / John H. Holland / Zdzislaw Pawlak / Lotfi A. Zadeh

ICCIN Web site: http://www.csci.csusb.edu/iccin

From Dimitris.Dracopoulos at ens-lyon.fr Thu Oct 24 08:21:19 1996
From: Dimitris.Dracopoulos at ens-lyon.fr (Dimitris Dracopoulos)
Date: Thu, 24 Oct 1996 14:21:19 +0200 (MET DST)
Subject: CFP: Neural and Evolutionary Algorithms for Intelligent Control
Message-ID: <199610241221.OAA03160@banyuls.ens-lyon.fr>

NEURAL AND EVOLUTIONARY ALGORITHMS FOR INTELLIGENT CONTROL
----------------------------------------------------------
C A L L   F O R   P A P E R S

Special Session in: "15th IMACS World Congress 1997 on Scientific Computation, Modelling and Applied Mathematics", August 24-29 1997, Berlin, Germany

Special Session Organizer-Chair: Dimitri C. Dracopoulos
-------------------------------
(Ecole Normale Superieure de Lyon, LIP)

Scope:
-----
The session will focus on the latest developments in state-of-the-art neurocontrol and evolutionary techniques. Today, many advanced intelligent control applications use such methods, and papers describing these applications are most welcome. Theoretical discussions of how these techniques can be proved to be stable are also highly welcome.
Topics:
------
- Neurocontrollers
  * optimization over time
  * adaptive critic designs
  * brain-like neurocontrollers
- Evolutionary techniques as pure controllers
  * genetic algorithms
  * evolutionary programming
  * genetic programming
- Hybrid methods (neural nets + evolutionary algorithms)
- Theoretical and stability issues for neuro-evolutionary control
- Advanced control applications

Paper Contributions:
--------------------
Each paper will be published in the Proceedings of the IMACS'97 World Congress. The accepted papers will be presented orally (25 minutes each, including 5 minutes for discussion).

Important dates:
----------------
December 5, 1996: Deadline for receiving papers.
January 10, 1997: Notification of acceptance.
February 1997: Author typing instructions, for camera-ready copies.

Submission guidelines:
---------------------
One hardcopy, 6-page limit, 10pt font, should be sent to the Session Chair:

Professor Dimitri C. Dracopoulos
Laboratoire de l'Informatique du Parallelisme (LIP)
Ecole Normale Superieure de Lyon
46 Allee d'Italie
69364 Lyon - Cedex 07, France.

In the case of multiple authors, the paper should indicate which author is to receive correspondence. The corresponding author is requested to include in the cover letter: complete postal address, e-mail address, phone number, fax number, and a list of keywords (no more than 5).

More information (preliminary) on the "15th IMACS World Congress 1997" can be found at "http://www.first.gmd.de/imacs97/". Please note that special discounted registration fees (proceedings but no social program) will be available.

--
Professor Dimitris C. Dracopoulos
Laboratoire de l'Informatique du Parallelisme (LIP)
Ecole Normale Superieure de Lyon
46 Allee d'Italie
69364 Lyon - Cedex 07, France
Telephone: +33 (0) 472728504
Fax: +33 (0) 472728080
E-mail: Dimitris.Dracopoulos at ens-lyon.fr

From giles at research.nj.nec.com Thu Oct 24 15:06:47 1996
From: giles at research.nj.nec.com (Lee Giles)
Date: Thu, 24 Oct 96 15:06:47 EDT
Subject: Technical Report Available
Message-ID: <9610241906.AA06199@alta>

The following technical report presents the experimental results of three on-line learning solutions for predicting multiprocessor memory access patterns.
__________________________________________________________________________

PERFORMANCE OF ON-LINE LEARNING METHODS IN PREDICTING MULTIPROCESSOR MEMORY ACCESS PATTERNS

Majd F. Sakr (1,2), Steven P. Levitan (2), Donald M. Chiarulli (3), Bill G. Horne (1), C. Lee Giles (1,4)

(1) NEC Research Institute, 4 Independence Way, Princeton NJ 08540
(2) University of Pittsburgh, Electrical Engineering, Pittsburgh PA 15261
(3) University of Pittsburgh, Computer Science, Pittsburgh PA 15260
(4) UMIACS, University of Maryland, College Park, MD 20742

Abstract: Shared memory multiprocessors require reconfigurable interconnection networks (INs) for scalability. These INs are reconfigured by an IN control unit. However, these INs are often plagued by undesirable reconfiguration time that is primarily due to control latency, the time the control unit takes to decide on a desired new IN configuration. To reduce control latency, a trainable prediction unit (PU) was devised and added to the IN controller. The PU's job is to anticipate and reduce control configuration time, the major component of the control latency. Three on-line prediction techniques were tested for learning and predicting repetitive memory access patterns in three typical parallel processing applications: the 2-D relaxation algorithm, matrix multiply, and the Fast Fourier Transform. The predictions were then used by a routing control algorithm to reduce control latency by configuring the IN to provide the needed memory access paths before they were requested. The three prediction techniques tested were (1) a Markov predictor, (2) a linear predictor, and (3) a time delay neural network (TDNN) predictor. As expected, different predictors performed best on different applications; however, the TDNN produced the best overall results.

Keywords: On-line Prediction; Learning; Multiprocessors; Memory; Markov Predictor; Linear Predictor; Time Delay Neural Network
____________________________________________________________________________

The paper is available from:
http://www.neci.nj.nec.com/homepages/giles.html
http://www.neci.nj.nec.com/homepages/sakr.html
http://www.cs.umd.edu/TRs/TR-no-abs.html

Comments are very welcome.

--
C. Lee Giles / Computer Sciences / NEC Research Institute /
4 Independence Way / Princeton, NJ 08540, USA / 609-951-2642 / Fax 2482
www.neci.nj.nec.com/homepages/giles.html
==
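[For intuition about the simplest of the three techniques above, a first-order Markov predictor can be kept as a table of transition counts between access symbols, predicting the most frequent successor of the current symbol. This Python sketch is illustrative only; the trace symbols are hypothetical and the report's predictor details may differ.]

    from collections import Counter, defaultdict

    class MarkovPredictor:
        """First-order Markov predictor over a stream of access symbols."""

        def __init__(self):
            self.counts = defaultdict(Counter)  # counts[s][t]: times t followed s
            self.prev = None

        def predict(self):
            # Most frequent successor of the current symbol, if any history.
            if self.prev is None or not self.counts[self.prev]:
                return None
            return self.counts[self.prev].most_common(1)[0][0]

        def observe(self, symbol):
            if self.prev is not None:
                self.counts[self.prev][symbol] += 1
            self.prev = symbol

    # A hypothetical repetitive memory-access trace, as a loop nest might produce.
    trace = ["M0", "M1", "M2"] * 5
    p, hits = MarkovPredictor(), 0
    for sym in trace:
        hits += (p.predict() == sym)
        p.observe(sym)
    print(f"{hits}/{len(trace)} accesses predicted correctly")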
From dyyeung at cs.ust.hk Fri Oct 25 03:06:20 1996
From: dyyeung at cs.ust.hk (Dit-Yan Yeung)
Date: Fri, 25 Oct 1996 15:06:20 +0800 (HKT)
Subject: Theoretical Aspects of Neural Computation (TANC-97)
Message-ID: <199610250706.PAA07352@cssu35.cs.ust.hk>

Preliminary Call for Papers

TANC-97
Hong Kong International Workshop on
Theoretical Aspects of Neural Computation: A Multidisciplinary Perspective

May 26-28, 1997
Hong Kong University of Science and Technology

Over the past decade or so, neural computation has emerged as a research area with active involvement by researchers from a number of different disciplines, including computer science, engineering, mathematics, neurobiology, physics, and statistics. Interdisciplinary collaboration and exchange of ideas have often led us to address research issues in this area from different perspectives. Consequently, some interesting new paradigms and results have become available to the field and have contributed significantly to the strengthening of its theoretical foundations.

This workshop, to be held at the Hong Kong University of Science and Technology, located at scenic Clear Water Bay, is intended to bring together researchers from different disciplines to review the current status of neural computation research. In particular, theoretical studies of the following themes will be given special emphasis:

Neuroscience
Computational and Mathematical
Statistical Physics

While the focus of this workshop is on theoretical aspects, the impact of recent theoretical advances on applications and the novel application of theoretical results to real-world problems will also be covered. Moreover, as an important objective of the workshop, future research directions and topics of a strongly interdisciplinary nature will be explored.
The workshop will feature several keynote presentations and invited papers by leading researchers in the field:

Keynote Speakers
----------------
Shun-ichi Amari (RIKEN, Japan)
Haim Sompolinsky (Hebrew University, Israel)

Invited Speakers
----------------
Peter Dayan (MIT, USA)
Aike Guo (Chinese Academy of Sciences, China)
Ido Kanter (Bar Ilan, Israel)
Manfred Opper (Wuerzburg, Germany)
Sara Solla (AT&T Research, USA)
Lei Xu (Chinese University of Hong Kong)
(and more to be confirmed)

In addition to keynote and invited papers, there will also be a number of submitted papers. All oral and poster presentations will be scheduled in a single track with no parallel sessions, to facilitate interdisciplinary interaction. Additional discussion sessions will be arranged. Moreover, since campus accommodation will be available to all workshop participants, there will be plenty of time for informal discussions.

Paper Submission
----------------
All submitted papers will be refereed on the basis of quality, significance, and clarity by a review panel which includes our invited speakers. Each submitted paper, which must be written in English, may be up to six A4 pages, including figures and references, in single-spaced one-column format using a font size of 10 points or larger. Five copies of the submitted paper should be sent to:

TANC-97 Secretariat
Department of Physics
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon
Hong Kong
Fax: +852-2358-1652
E-mail: tanc97 at usthk.ust.hk
WWW: http://www.cs.ust.hk/conf/tanc97

In addition to the paper, there should also be a cover letter providing the following information: (a) the corresponding author's name, fax number, and postal and e-mail addresses; (b) up to eight keywords; (c) the preferred presentation format (oral or poster).

Important Dates
---------------
Submission of paper (received): January 15, 1997
Notification of acceptance: February 28, 1997

Organizing Committee
--------------------
Kwok-Ping Chan (University of Hong Kong)
Lai-Wan Chan (Chinese University of Hong Kong)
Irwin King (Chinese University of Hong Kong)
Zhaoping Li (Hong Kong University of Science and Technology)
Franklin Shin (Hong Kong Polytechnic University)
Michael K.Y. Wong (Hong Kong University of Science and Technology) - Chairman
Dit-Yan Yeung (Hong Kong University of Science and Technology)

Other Attractions
-----------------
Hong Kong offers a wide variety of sightseeing activities. It is one of the most international and interesting cities in the world. This will be a great opportunity to visit Hong Kong, as the workshop will be held shortly before Hong Kong becomes a Special Administrative Region of China on July 1, 1997.

From A.Sharkey at dcs.shef.ac.uk Fri Oct 25 11:15:25 1996
From: A.Sharkey at dcs.shef.ac.uk (Amanda Sharkey)
Date: Fri, 25 Oct 96 11:15:25 BST
Subject: Research Associate Post: Neural Net Fault Diagnosis
Message-ID: <9610251015.AA01370@gw.dcs.shef.ac.uk>

Research Associate: On-line Neural Net Fault Diagnosis for Diesel Engines.

A postdoctoral research fellow is required for a period of up to 3 years to join the Neural Computing Group in the Department of Computer Science, University of Sheffield, UK. This EPSRC-funded post is available from November 1st 1996, or as soon as possible thereafter. Salary in the range 14,317-16,628 pounds (UK). This project will involve the collection of fault diagnosis data from a diesel engine, using measures of in-cylinder pressure, engine vibration and noise emission.
These data will be used to train a neural net system for fault diagnosis, and will form the basis for the development of a general set of principles for increasing the reliability of a neural net system. The postholder will be required to induce a variety of faults in a real diesel engine (assisted by a technician also employed for the project), to collect data corresponding to those faults using a variety of sensors, and to train neural nets to perform fault diagnosis. Knowledge and practical experience of real diesel engines is essential, as are computational skills. Familiarity with neural computing techniques would be preferred.

Further details are available at: http://www.dcs.shef.ac.uk/research/groups/nn/engine.html

Direct email inquiries and/or CVs to Dr Amanda Sharkey: amanda at dcs.shef.ac.uk

From nikola at prosun.first.gmd.de Fri Oct 25 04:44:40 1996
From: nikola at prosun.first.gmd.de (Nikola Serbedzija)
Date: Fri, 25 Oct 96 09:44:40 +0100
Subject: CFP: IMACS - ANN Simulation session
Message-ID: <9610250844.AA16045@prosun.first.gmd.de>

-------------------------------------------------------------------------
15th IMACS WORLD CONGRESS
on Scientific Computation, Modelling and Applied Mathematics
Berlin, Germany, 24-29 August 1997
-------------------------------------------------------------------------
CALL FOR PAPERS
for the Organized Session on Simulation of Artificial Neural Networks
Session Organizers: Gerd Kock and Nikola Serbedzija
-------------------------------------------------------------------------

The aim of this session is to reflect the current techniques and trends in the simulation of artificial neural networks (ANNs). Both software and hardware approaches are solicited. Topics of interest include, but are not limited to:

* General Aspects of Neural Simulations
  o design issues for simulation tools
  o inherent parallelism of ANNs
  o general-purpose neural simulations
  o special-purpose neural simulations
  o fault tolerance aspects
* Parallel Implementation of ANNs
  o data parallel implementations
  o control parallel implementations
* Hardware Emulation of ANNs
  o silicon technology
  o optical technology
  o molecular technology
* General Simulation Tools
  o graphic/menu based tools
  o module libraries
  o specific programming languages
* Applications
  o applications using/demanding parallel implementations or hardware emulations
  o applications using/demanding analysis tools or graphical representations provided by simulation tools
* Hybrid Systems
  o the topics above are also of interest with respect to related hybrid systems (neuro-fuzzy, genetic algorithms)

Authors interested in this session are invited to submit 3 copies of an extended summary (about 4 pages) of the paper to one of the session organizers by December 1st, 1996. Submissions may also be made by email. Notification of acceptance/rejection will be mailed by February 28th, 1997. The authors of accepted papers will also receive detailed instructions for the preparation of the final manuscripts.

The submission must contain the following information: the name(s) of the author(s), title(s), affiliation(s), and complete address(es) (including email, phone, fax). In addition, the author responsible for communication must be indicated.

Important dates
---------------
December 1st, 1996: Extended summary due
February 28th, 1997: Notification of acceptance/rejection
April 30th, 1997: Camera-ready paper due

Addresses to send contributions
------------------------------
Dr. Gerd Kock
GMD FIRST
Rudower Chaussee 5
D-12489 Berlin
Germany
e-mail: gerd at first.gmd.de
tel: +49 30 / 6392 1863

Dr. Nikola Serbedzija
GMD FIRST
Rudower Chaussee 5
D-12489 Berlin
Germany
e-mail: nikola at first.gmd.de
tel: +49 30 / 6392 1873

===================================================================
IMACS

The International Association for Mathematics and Computers in Simulation is an organization of professionals and scientists concerned with computers, computation and applied mathematics, in particular as they apply to the simulation of systems. This includes numerical analysis, mathematical modelling, approximation theory, computer hardware and software, programming languages and compilers. IMACS also concerns itself with the general philosophy of scientific computation and applied mathematics, and with their impact on society and on disciplinary and interdisciplinary research. IMACS is one of the international scientific associations (with IFAC, IFORS, IFIP and IMEKO) represented in FIACC, the five international organizations in the area of computers, automation, instrumentation and the relevant branches of applied mathematics. Of the five, IMACS (which changed its name from AICA in 1976) is the oldest, and was founded in 1956.

For more information about the 15th IMACS WORLD CONGRESS, see the WWW page http://www.first.gmd.de/imacs97/
================================================================

From dwang at cis.ohio-state.edu Fri Oct 25 14:19:56 1996
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Fri, 25 Oct 1996 14:19:56 -0400 (EDT)
Subject: Neurocomputing, Vol. 13 (2-4)
Message-ID: <199610251819.OAA01392@sarajevo.cis.ohio-state.edu>

NEUROCOMPUTING [NEUCOM] Volume 13, Issue 2-4 (30 SEPTEMBER 1996)

Adaptable neuro production systems
   N.K. Kasabov
On the stability of Lagrange programming neural networks for satisfiability problems of propositional calculus
   M. Nagamatu, T. Yanaru
Generalized Hopfield networks for associative memories with multi-valued stable states
   J.M. Zurada, I. Cloete, E. van der Poel
A neurocomputing framework: From methodologies to application
   S.-B. Cho
A systematic method for rational definition of plant diagnostic symptoms by self-organizing neural networks
   H. Furukawa, T. Ueda, M. Kitamura
Neural network indirect adaptive control with fast learning algorithm
   G.J. Jeon, I. Lee
Efficient learning of NN-MLP based on individual evolutionary algorithm
   Q. Zhao, T. Higuchi
Application of neural network algorithm to CAD of magnetic systems
   Y. Yamazaki, M. Ochiai, A. Holz, T. Hara
Controlling public address systems based on fuzzy inference and neural network
   K. Kurisu, K. Fukuyama
Robust world-modelling and navigation in a real world
   U.R. Zimmer
Practical applications of neural networks in texture analysis
   E. Biebelmann, M. Köppen, B. Nickolay
Chaotic recurrent neural networks and their application to speech recognition
   J.K. Ryeu, H.S. Chung
On the accuracy of mapping by neural networks trained by backpropagation with forgetting
   R. Kozma, M. Sakuma, Y. Yokoyama, M. Kitamura
Optimal learning in artificial neural networks: A review of theoretical results
   M. Bianchini, M. Gori
Classification by balanced binary representation
   Y. Baram
Fuzzy astronomical seeing nowcasts with a dynamical and recurrent connectionist network
   A. Aussem, F. Murtagh, M. Sarazin
Improved binary classification performance using an information theoretic criterion
   P. Burrascano, D. Pirollo
From lenherr at mildura.cs.umass.edu Sun Oct 27 01:01:15 1996
From: lenherr at mildura.cs.umass.edu (Fred Lenherr)
Date: Sun, 27 Oct 1996 01:01:15 -0500
Subject: Neuroscience Web Search
Message-ID: <199610270601.BAA06192@mildura.cs.umass.edu>

Hello,

I have created a new Web Search Engine devoted entirely to neuroscience. Unlike the large search sites, everything here is relevant by pre-selection. The URL is:

http://www.acsiom.org/nsr/neuro.html

This is a full-text database and contains more than 55,000 web pages. If you are interested, please take a look at it, and consider placing a link to it on one of your own web pages.

Thanks very much,
Fred K. Lenherr

From crites at hope.cs.umass.edu Mon Oct 28 01:06:18 1996
From: crites at hope.cs.umass.edu (Bob Crites)
Date: Mon, 28 Oct 1996 01:06:18 -0500 (EST)
Subject: PhD Thesis Available
Message-ID: <199610280606.BAA08370@hope.cs.umass.edu>

My PhD thesis is now available for download:

LARGE-SCALE DYNAMIC OPTIMIZATION USING TEAMS OF REINFORCEMENT LEARNING AGENTS

Robert Harry Crites

ftp://ftp.cs.umass.edu/pub/anw/pub/crites/root.ps.Z (202517 bytes)

or from my homepage at: http://www-anw.cs.umass.edu/People/crites/crites.html

Abstract:

Recent algorithmic and theoretical advances in reinforcement learning (RL) are attracting widespread interest. RL algorithms have appeared that approximate dynamic programming (DP) on an incremental basis. Unlike traditional DP algorithms, these algorithms do not require knowledge of the state transition probabilities or reward structure of a system. This allows them to be trained using real or simulated experiences, focusing their computations on the areas of state space that are actually visited during control, and making them computationally tractable on very large problems. RL algorithms can be used as components of multi-agent algorithms. If each member of a team of agents employs one of these algorithms, a new collective learning algorithm emerges for the team as a whole. In this dissertation we demonstrate that such collective RL algorithms can be powerful heuristic methods for addressing large-scale control problems.

Elevator group control serves as our primary testbed. The elevator domain poses a combination of challenges not seen in most RL research to date. Elevator systems operate in continuous state spaces and in continuous time as discrete event dynamic systems. Their states are not fully observable, and they are non-stationary due to changing passenger arrival rates. As a way of streamlining the search through policy space, we use a team of RL agents, each of which is responsible for controlling one elevator car. The team receives a global reinforcement signal which appears noisy to each agent due to the effects of the actions of the other agents, the random nature of the arrivals and the incomplete observation of the state. In spite of these complications, we show results that in simulation surpass the best of the heuristic elevator control algorithms of which we are aware. These results demonstrate the power of RL on a very large scale stochastic dynamic optimization problem of practical utility.
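[The collective-learning idea in the abstract above, where each agent runs its own RL algorithm but all are trained on one shared global reward, can be conveyed with a toy Python sketch. This is not the thesis's elevator simulator or its algorithms: the environment below is hypothetical, and the agents are simple tabular Q-learners.]

    import random

    N_AGENTS, N_STATES, N_ACTIONS = 3, 4, 2
    ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

    # One independent Q-table per agent.
    Q = [[[0.0] * N_ACTIONS for _ in range(N_STATES)] for _ in range(N_AGENTS)]

    def act(i, s):
        """Epsilon-greedy action for agent i in its local state s."""
        if random.random() < EPS:
            return random.randrange(N_ACTIONS)
        return max(range(N_ACTIONS), key=lambda a: Q[i][s][a])

    state = [0] * N_AGENTS
    for step in range(20000):
        actions = [act(i, state[i]) for i in range(N_AGENTS)]
        # Hypothetical environment: the global reward is earned only when
        # the agents coordinate (here, by choosing the same action).
        reward = 1.0 if len(set(actions)) == 1 else 0.0
        nxt = [random.randrange(N_STATES) for _ in range(N_AGENTS)]
        for i in range(N_AGENTS):
            s, a = state[i], actions[i]
            # Every agent updates from the same global reward, which looks
            # noisy from its own local point of view.
            Q[i][s][a] += ALPHA * (reward + GAMMA * max(Q[i][nxt[i]]) - Q[i][s][a])
        state = nxt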
From schapire at research.att.com Mon Oct 28 12:07:59 1996
From: schapire at research.att.com (Robert Schapire)
Date: Mon, 28 Oct 1996 12:07:59 -0500 (EST)
Subject: Call for papers: COLT '97
Message-ID: <199610281707.MAA06414@arran.research.att.com>

===========================================================================
-- Call for Papers --
COLT '97
Tenth Annual Conference on Computational Learning Theory
Vanderbilt University, Nashville, Tennessee
July 6--9, 1997
===========================================================================

The Tenth Annual Conference on Computational Learning Theory (COLT'97) will be held at Vanderbilt University in Nashville, Tennessee from Sunday, July 6 through Wednesday, July 9, 1997. COLT'97 is sponsored by Vanderbilt University, with additional support from AT&T Labs, and in cooperation with ACM SIGACT and SIGART. The conference will be co-located with the Fourteenth International Conference on Machine Learning (ICML'97), which will be held Tuesday, July 8 through Saturday, July 12. We anticipate a lively program including oral presentations, posters, a number of invited speakers and a half day of tutorials (jointly organized with ICML).

We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning. Some of the issues and topics that have been addressed in the past include:

* design and analysis of learning algorithms;
* sample and computational complexity of learning specific model classes;
* frameworks modeling the interaction between the learner, teacher and the environment (such as learning with queries, learning control policies and inductive inference);
* learning using complex models (such as neural networks and decision trees);
* learning with minimal prior assumptions (such as mistake-bound models, universal prediction, and agnostic learning).

We strongly encourage submissions from all disciplines engaged in research on these and related questions. Examples of such fields include computer science, statistics, information theory, pattern recognition, statistical physics, inductive logic programming, information retrieval and reinforcement learning. We also encourage the submission of papers describing experimental results that are supported by theoretical analysis.

ABSTRACT SUBMISSION: Authors are encouraged to submit their abstracts electronically. Instructions for submitting papers electronically can be obtained after December 1 by sending email to colt97 at research.att.com with subject "help", or from our web page. Alternatively, authors may submit fourteen copies (preferably two-sided) of an extended abstract to:

Robert Schapire -- COLT'97
AT&T Labs
600 Mountain Avenue, Room 2A-424
Murray Hill, NJ 07974 USA
Telephone (for overnight mail): (908) 582-4533

Abstracts (whether hard-copy or electronic) must be RECEIVED by 11:59pm EST on FRIDAY, JANUARY 17, 1997. This deadline is FIRM. (We will also accept abstracts sent via air mail and postmarked by January 6, or sent via overnight carrier by January 16.) Authors will be notified of acceptance or rejection on or before March 24, 1997. Final camera-ready papers will be due by April 18. Papers that have appeared in journals or other conferences, or that are being submitted to other conferences (including ICML), are NOT appropriate for submission to COLT.

ABSTRACT FORMAT: The extended abstract should consist of a cover page with title, authors' names, postal and email addresses, and a 200-word summary.
The body of the abstract should be no longer than 10 pages, with at most 35 lines per page, at most 6.5 inches of text per line, and in 12-point font. If the abstract exceeds 10 pages, only the first 10 pages may be examined. The extended abstract should include a clear definition of the theoretical model used and a clear description of the results, as well as a discussion of their significance, including comparison to other work. Proofs or proof sketches should be included.

PROGRAM FORMAT: All accepted papers will be presented orally, although some or all papers may also be included in a poster session. At the discretion of the program committee, the program may consist of both long and short talks, corresponding to longer and shorter papers in the proceedings. By default, all papers will be considered for both categories. Authors who DO NOT want their papers considered for the short category should indicate that fact in a cover letter.

PROGRAM CHAIRS: Yoav Freund and Robert Schapire (AT&T Labs).

PROGRAM COMMITTEE: Andrew Barron (Yale University), John Case (University of Delaware), Sally Goldman (Washington University), David Helmbold (University of California, Santa Cruz), Rob Holte (University of Ottawa), Eyal Kushilevitz (Technion), Gábor Lugosi (Pompeu Fabra University, Barcelona), Arun Sharma (University of New South Wales), John Shawe-Taylor (University of London), Satinder Singh (University of Colorado, Boulder), Haim Sompolinsky (Hebrew University), Volodya Vovk (Royal Holloway, University of London).

CONFERENCE AND LOCAL ARRANGEMENTS CHAIR: Vijay Raghavan (Vanderbilt University).

STUDENT TRAVEL: We anticipate some funds will be available to partially support travel by student authors. Details will be distributed as they become available.

TUTORIALS: The program will include a half day of tutorials, jointly organized by COLT and ICML, and intended as introductions to topics in the theory and practice of machine learning. For further information, or to submit a proposal for a tutorial, contact Sally Goldman, the tutorials chair, at sg at cs.wustl.edu, or visit our web page.

FOR MORE INFORMATION: Visit the ICML/COLT'97 web page at http://cswww.vuse.vanderbilt.edu/~mlccolt/, or send email to colt97 at research.att.com. This call for papers is available in html and other formats from http://www.research.att.com/~yoav/colt97/cfp.html

From john at dcs.rhbnc.ac.uk Mon Oct 28 15:40:39 1996
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Mon, 28 Oct 96 20:40:39 +0000
Subject: Special Issue on VC Dimension
Message-ID: <199610282040.UAA12033@platon.cs.rhbnc.ac.uk>

DISCRETE APPLIED MATHEMATICS
announcing a Special Issue on Vapnik-Chervonenkis Dimension

Manuscripts are solicited for a special issue of DISCRETE APPLIED MATHEMATICS on the topic of the Vapnik-Chervonenkis Dimension. The special issue arose out of an ICMS Workshop on the VC Dimension, though submission is not restricted to those who attended the workshop.
For more information on the Vapnik-Chervonenkis dimension and the aims of the workshop (and hence also of the Special Issue), please consult the `scientific aims' available through the homepage: http://www.dcs.ed.ac.uk/~mrj/VCWorkshop/

The following is a (nonexhaustive) list of possible topics of interest for the SPECIAL ISSUE:

- Combinatorics of the VC Dimension
- Applications of the VC Dimension in Statistics
- Applications of the VC Dimension in Learning Theory
- Applications of the VC Dimension in Computational Geometry
- Applications of the VC Dimension in Complexity Theory

Four (4) copies of complete manuscripts should be sent to the Coordinating Editor indicated below by December 31, 1996. Email submission of postscript files (preferably compressed and uuencoded) is acceptable. Manuscripts must be prepared according to the normal submission requirements of Discrete Applied Mathematics, as described inside the back cover of each issue of the journal. All manuscripts will be subject to the regular refereeing process of the journal. Papers should include an abstract.

The Guest Editors of the Special Issue are:

Coordinating Editor: J.S. Shawe-Taylor
Department of Computer Science
Royal Holloway, University of London
Egham, Surrey TW20 0EX, UK
Email: jst at dcs.rhbnc.ac.uk

A. Macintyre
Mathematical Institute
Oxford University
Email: ajm at maths.ox.ac.uk

M. Jerrum
Department of Computer Science
University of Edinburgh
Email: mrj at dcs.edinburgh.ac.uk

------- End of Forwarded Message

From carl at cs.toronto.edu Tue Oct 29 15:50:10 1996
From: carl at cs.toronto.edu (Carl Edward Rasmussen)
Date: Tue, 29 Oct 1996 15:50:10 -0500
Subject: PhD thesis available
Message-ID: <96Oct29.155011edt.987@neuron.ai.toronto.edu>

My PhD thesis is now available on the net. It is entitled

EVALUATION OF GAUSSIAN PROCESSES AND OTHER METHODS FOR NON-LINEAR REGRESSION

The thesis is 138 pages long, occupies 460Kb in compressed postscript, and is formatted for double-sided printing. You can obtain a copy via the web at http://www.cs.toronto.edu/~carl/pub.html or via anonymous ftp to ftp.cs.toronto.edu, where the file "thesis.ps.gz" is placed in the directory "pub/carl".

ABSTRACT:

This thesis develops two Bayesian learning methods relying on Gaussian processes, and a rigorous statistical approach for evaluating such methods. In these experimental designs, the sources of uncertainty in the estimated generalisation performances due to variation in both training and test sets are accounted for. The framework allows for estimation of generalisation performance as well as statistical tests of significance for pairwise comparisons. Two experimental designs are recommended and supported by the DELVE software environment.

Two new non-parametric Bayesian learning methods relying on Gaussian process priors over functions are developed. These priors are controlled by hyperparameters which set the characteristic length scale for each input dimension. In the simplest method, these parameters are fit from the data using optimization. In the second, fully Bayesian method, a Markov chain Monte Carlo technique is used to integrate over the hyperparameters. One advantage of these Gaussian process methods is that the priors and hyperparameters of the trained models are easy to interpret.

The Gaussian process methods are benchmarked against several other methods, on regression tasks using both real data and data generated from realistic simulations.
The experiments show that small datasets are unsuitable for benchmarking purposes because the uncertainties in performance measurements are large. A second set of experiments provides strong evidence that the bagging procedure is advantageous for the Multivariate Adaptive Regression Splines (MARS) method.

The simulated datasets have controlled characteristics which make them useful for understanding the relationship between properties of the dataset and the performance of different methods. The dependency of the performance on available computation time is also investigated. It is shown that a Bayesian approach to learning in multi-layer perceptron neural networks achieves better performance than the commonly used early stopping procedure, even for reasonably short amounts of computation time. The Gaussian process methods are shown to consistently outperform the more conventional methods.

--
Carl Edward Rasmussen
Dept of Computer Science, University of Toronto
Toronto, ONTARIO, Canada, M5S 1A4
Email: carl at cs.toronto.edu
Phone: +1 (416) 978 7391
Home : +1 (416) 531 5685
FAX  : +1 (416) 978 1455
web  : http://www.cs.toronto.edu/~carl
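[The flavour of Gaussian process regression with one length scale per input dimension, as described in the abstract above, can be conveyed in a few lines of Python. The hyperparameter values below are fixed and arbitrary, whereas the thesis fits them by optimization or integrates them out by MCMC; the data are synthetic.]

    import numpy as np

    def se_kernel(A, B, scales, signal_var=1.0):
        """Squared-exponential covariance with per-dimension length scales."""
        d = (A[:, None, :] - B[None, :, :]) / scales
        return signal_var * np.exp(-0.5 * np.sum(d ** 2, axis=-1))

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, size=(40, 2))
    y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=40)  # second input is irrelevant

    scales = np.array([1.0, 10.0])  # a long length scale marks an irrelevant input
    noise_var = 0.05 ** 2

    K = se_kernel(X, X, scales) + noise_var * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

    Xs = np.array([[1.5, 0.0]])          # a test point
    ks = se_kernel(Xs, X, scales)
    mean = ks @ alpha                    # predictive mean
    v = np.linalg.solve(L, ks.T)
    var = se_kernel(Xs, Xs, scales) - v.T @ v + noise_var
    print(mean, var)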
From marshall at cs.unc.edu Tue Oct 29 11:03:55 1996
From: marshall at cs.unc.edu (Jonathan Marshall)
Date: Tue, 29 Oct 1996 12:03:55 -0400
Subject: Paper available: Neural Model of Visual Stereomatching
Message-ID: <199610291603.MAA17472@marshall.cs.unc.edu>

Paper available in http://www.cs.unc.edu/Research/brainlab/index.html

"NEURAL MODEL OF VISUAL STEREOMATCHING: SLANT, TRANSPARENCY, AND CLOUDS"

JONATHAN A. MARSHALL, GEORGE J. KALARICKAL, ELIZABETH B. GRAVES
Department of Computer Science, CB 3175, Sitterson Hall
University of North Carolina, Chapel Hill, NC 27599-3175, U.S.A.
marshall at cs.unc.edu, +1-919-962-1887, fax +1-919-962-1799

Stereomatching of oblique and transparent surfaces is described using a model of cortical binocular "tuned" neurons selective for disparities of individual visual features, and neurons selective for the position, depth, and 3-D orientation of local surface patches. The model is based on a simple set of learning rules. In the model, monocular neurons project excitatory connection pathways to binocular neurons at appropriate disparities. Binocular neurons project excitatory connection pathways to appropriately tuned "surface patch" neurons. The surface patch neurons project reciprocal excitatory connection pathways to the binocular neurons. Anisotropic intralayer inhibitory connection pathways project between neurons with overlapping receptive fields.

The model's responses to simulated stereo image pairs depicting a variety of oblique surfaces and transparently overlaid surfaces are presented. For all the surfaces, the model (1) assigns disparity matches and surface patch representations based on global surface coherence and uniqueness, (2) permits coactivation of neurons representing multiple disparities within the same image location, (3) represents oblique slanted and tilted surfaces directly, rather than approximating them with a series of frontoparallel steps, (4) assigns disparities to a cloud of points at random depths, like human observers, and unlike Prazdny's (1985) method, and (5) causes globally consistent matches to override greedy local matches. The model represents transparency, unlike the Marr and Poggio (1976) model, and it assigns unique disparities, unlike Prazdny's (1985) model.

In press, to appear in Network: Computation in Neural Systems, 11/96.

From marshall at cs.unc.edu Tue Oct 29 12:31:39 1996
From: marshall at cs.unc.edu (Jonathan Marshall)
Date: Tue, 29 Oct 1996 13:31:39 -0400
Subject: Short-term postdoc opening: Neural modeling of visual perception
Message-ID: <199610291731.NAA17815@marshall.cs.unc.edu>

----------------------------------------------------------------------------
Short-Term Position Opening:
POSTDOCTORAL RESEARCH ASSOCIATE IN NEURAL MODELING OF VISUAL PERCEPTION
at the University of North Carolina at Chapel Hill

A short-term postdoctoral position is available in neural modeling of visual perception, in Dr. Jonathan Marshall's research group at UNC-Chapel Hill. The group's research focuses on intermediate-level visual motion perception, surface appearance perception, object perception, binding and grouping, and depth perception. The opening is the 2nd postdoctoral position in a research group that includes one faculty member, one postdoc, and four PhD students.

The postdoc will develop and implement computational simulations of neural models of visual mechanisms involved in perception of surface brightness and transparency, depth, motion, and other aspects of visual and neural processing. The postdoc will also work on developing relative-motion algorithms for image processing tasks in object detection, image stabilization, scene segmentation, and invariant object representation. The postdoc may also have opportunities to develop and run visual psychophysics experiments on motion perception and stereopsis. The project includes work on adaptation and self-organization processes that may guide the development and maintenance of such neural mechanisms in human and animal brains. The postdoc will collaborate with other members of the research group.

The position requires very good programming skills and (ideally) some image processing and/or neural simulation experience. Experience with visual psychophysics or visual neurophysiology would be a plus. The position is available immediately and is funded for 6 or more months. Because the position is short-term, it might be ideal for a new PhD who needs interim funding before starting another position in the Summer or Fall. Salary is competitive.

Please send CV, a letter describing research interests and background, relevant publications, and references to Prof. Jonathan A. Marshall, Department of Computer Science, CB 3175, Sitterson Hall, University of North Carolina, Chapel Hill, NC 27599-3175, USA. Phone 919-962-1887, fax 919-962-1799, marshall at cs.unc.edu, http://www.cs.unc.edu/~marshall.

Posted 25 October 1996
----------------------------------------------------------------------------

From sylee at eekaist.kaist.ac.kr Wed Oct 30 07:32:58 1996
From: sylee at eekaist.kaist.ac.kr (prof. Soo-Young Lee)
Date: Wed, 30 Oct 1996 21:32:58 +0900
Subject: Neural Networks Session at SCI'97, Venezuela
Message-ID: <199610301232.VAA14099@eekaist.kaist.ac.kr>

CALL FOR PAPERS

SCI'97 Neural Networks Session at
WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS
Caracas, Venezuela
July 7-11, 1997

I was asked to organize session(s) on neural networks at a multi-disciplinary conference, SCI'97. As you may see from the following announcement of the conference, it is a truly interdisciplinary conference covering intelligent computing, information theory, cybernetics, social systems, psychology, biology, and applications.
We believe this interdisciplinary conference will provide a very good chance to meet researchers from different-but-related disciplines and promote interesting discussions. Therefore, I would like to invite you to present your recent research results at this conference, and meet interesting people.

The Neural Networks sessions will cover the following topics:

* Neural network models (biological and artificial)
* Hybrid systems (neuro, fuzzy, GA, EP, etc.)
* Applications (speech, time-series, controls, etc.)
* Artificial life

If you are interested in this multidisciplinary conference, please send me an e-mail at sylee at eekaist.kaist.ac.kr.

SUBMISSIONS AND DEADLINES

January 17, 1997: Submission of extended abstracts or a condensed first draft (500-1500 words)
March 10, 1997: Acceptance notifications
May 12, 1997: Submission of camera-ready papers, hard copies and electronic versions

Best regards,
Soo-Young Lee

**********************************************************************
WORLD MULTICONFERENCE ON SYSTEMICS, CYBERNETICS AND INFORMATICS
Caracas, Venezuela
July 7-11, 1997

MAJOR THEMES

Conceptual Infrastructure of Systemics, Cybernetics and Informatics
Information Systems (ISAS '97)
Control Systems
Managerial/Corporative Systems
Human Resources Systems
Natural Resources Systems
Social Systems
Educational Systems
Financial Systems
SCI in Psychology, Cognition and Spirituality
SCI in Biology and Medicine
SCI in Art
Globalization, Development and Emerging Economies

ACADEMIC AND SCIENTIFIC SPONSORS

World Organization of Systemics and Cybernetics (WOSC) (France)
IFSR: International Federation for Systems Research (Austria/USA)
International Systems Institute (USA)
CUST, Engineer Science Institute of the Blaise Pascal University (France)
The International Institute for Advanced Studies in Systems Research and Cybernetics (Canada)
Society for Applied Systems Research (Canada)
Cybernetics and Human Knowing: A Journal of Second Order Cybernetics and Cybersemiotics (Denmark)
International Institute of Informatics and Systemics (USA)
IEEE (Venezuela Chapter)
Simon Bolivar University (Venezuela)
Universidad Central de Venezuela

INCLUSION OF SCI'97 PROCEEDINGS IN A CD-ROM EXTENDED ENCYCLOPEDIA

An electronic version of the SCI 97 proceedings will also be available on CD-ROM, with search and hypertext features. Other media, such as sound, animation and video, are also being considered. These proceedings will also be included in the CD-ROM Extended Encyclopedia of Systemics and Cybernetics (TM), whose development is presently in progress.

TYPES OF SUBMISSIONS ACCEPTED

Research, Review or Position Papers
Panel Presentation, Workshop and/or Round Table Proposals
New Topics Proposals (which should include a minimum of 15 papers)
Focus Symposia (which should include a minimum of 15 papers)

JOURNAL PUBLICATIONS FOR BEST PAPERS

Best papers will be published by "Cybernetics and Human Knowing: A Journal of Second Order Cybernetics and Cybersemiotics". Members of the Program Committee who are referees of the Journal will make the decision on this issue. Other journals are being considered for other areas of SCI'97/ISAS'97.
WEB SITE

http://www.callaos.com/SCI 97

PURPOSE

The purpose of the Conference is to bring together, from universities and corporations, academics and professionals, researchers and consultants, scientists and engineers, theoreticians and practitioners, from all over the world, to discuss the themes of the conference and to participate with original ideas or innovations, knowledge or experience, theories or methodologies, in the areas of Systemics, Cybernetics and Informatics (SCI).

Systemics, Cybernetics and Informatics (SCI) are increasingly being related to each other and to almost every scientific discipline and human activity. Their common transdisciplinarity characterizes and communicates them, generating strong relations among them and with other disciplines. They interpenetrate each other, integrating a whole that is permeating human thinking and practice. This phenomenon induced the Organization Committee to structure SCI'97 as a multiconference where participants may focus on an area, or on a discipline, while keeping open the possibility of attending conferences from other areas or disciplines. This systemic approach stimulates cross-fertilization among different disciplines, inspiring scholars, generating analogies and provoking innovations; which, after all, is one of the very basic principles of the systems movement and a fundamental aim of cybernetics.

BACKGROUND

The success achieved at ISAS'95 (Information Systems Analysis and Synthesis), held in Baden-Baden (Germany), symbolized by the award granted by the International Institute for Advanced Studies in Systems Research and Cybernetics (Canada) for the best and largest symposium at the 5th International Conference on Systems Research, Informatics and Cybernetics, encouraged its sponsors and session chairs to organize ISAS'96 at Orlando and to prepare a more general Conference on Systemics, Cybernetics and Informatics (SCI'97) at Caracas (Venezuela).

The widely acknowledged success of ISAS'96 (held on July 22-26 at Orlando), conveyed by spontaneous verbal feedback and a written comprehensive evaluation from 143 authors of high quality papers from 32 countries, galvanized the Program and Organizing Committees to make a definitive commitment to organize SCI'97 and ISAS'97 at Caracas, on July 7-11, 1997.

Many Program and Organizing Committee members from past international and world conferences are joining us for SCI'97 and ISAS'97, including most of those who organized the World Conference on Systems sponsored by UNESCO and the United Nations' World Federation of Engineering Organizations (WFEO). We are still looking for more organizational support from experienced scholars, consultants, practitioners, professionals and researchers, as well as from international or national organizations, public or private, academic or professional.
From pauer at igi.tu-graz.ac.at Wed Oct 30 20:22:06 1996
From: pauer at igi.tu-graz.ac.at (Peter Auer)
Date: Thu, 31 Oct 1996 02:22:06 +0100
Subject: Special Issue on Computational Learning Theory
Message-ID: <199610310122.AA06439@figiss01.tu-graz.ac.at>

Call for Papers

Special Issue of Algorithmica on Computational Learning Theory

Expected publication date: Fall 1997
Submission deadline: January 31, 1997
Guest editors: Peter Auer and Wolfgang Maass

Besides constructing mathematical theories of Machine Learning, Computational Learning Theory analyses learning algorithms, tries to identify the necessary and sufficient amount of information required for learning, and investigates the computational complexity of learning problems. For this special issue of Algorithmica we are looking for high quality papers addressing topics such as:

* design and analysis of learning algorithms,
* complexity of learning,
* mathematical models of learning,
* supervised and unsupervised learning on neural nets,
* theory of (statistical) pattern recognition,
* ... (suitable topics are not limited to this short list).

Submitted papers will go through the usual refereeing process of Algorithmica. Authors should either send four copies of their paper to

Peter Auer
Institute for Theoretical Computer Science
University of Technology, Graz
Klosterwiesgasse 32/2
A-8010 Graz
Austria

or send their paper electronically as a postscript or latex file to silt at igi.tu-graz.ac.at.

Submissions should be formatted according to the following instructions. Manuscripts should be typed on only one side of the page, with wide margins. The title page of the article should include all the authors' affiliations, the mailing address, phone and fax numbers, and the email address of the corresponding author, 5-10 key words, and a detailed abstract emphasizing the main contribution of the paper. Footnotes other than those referring to the title or author affiliation should be avoided. If they are essential, they should be numbered consecutively and listed on a separate page, following the text.

If you are planning to submit a paper to the special issue, we would like you to send us a short note (silt at igi.tu-graz.ac.at), so that we are able to plan ahead more easily. This and any updated information can also be found at http://www.cis.tu-graz.ac.at/igi/pauer/silt.html. Requests can be sent to silt at igi.tu-graz.ac.at.

Peter Auer and Wolfgang Maass