From marshall at cs.unc.edu Wed Dec 2 16:42:52 1992 From: marshall at cs.unc.edu (Jonathan A. Marshall) Date: Wed, 2 Dec 92 16:42:52 -0500 Subject: Jobs in Chapel Hill & Durham, NC Message-ID: <9212022142.AA06514@marshall.cs.unc.edu> The following two jobs are both open to strong vision researchers. An opportunity also exists for the new vision faculty member(s) to participate in (and receive support from) a collaborative research effort on models of human vision, under the MIP (Medical Image Presentation) program project grant at UNC-Chapel Hill; contact Prof. Stephen Pizer, smp at cs.unc.edu, for further information. Researchers with interests in computational and neurobiological models of cognition and of vision would find several collaborative opportunities here in the Research Triangle area of North Carolina. ----------------------------------------------------------------------------- 1. The Psychology Department of the University of North Carolina at Chapel Hill seeks to hire a cognitive psychologist in a tenure track assistant professor position for the fall of 1993. Responsibilities include graduate and undergraduate teaching, research, and research supervision. Applicants in any area of cognitive psychology will be considered. Have 3 letters of recommendation sent and submit a curriculum vitae, up to 3 (p)reprints, and a statement of teaching and programmatic research interests to: Thomas S. Wallsten, Cognitive Search Committee, Department of Psychology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-3270. Applications must be received by December 15, 1992. UNC-CH is an Affirmative Action/ Equal Opportunity Employer. Questions can be directed to Tom Wallsten at tom_wallsten at unc.edu. ---------------------------------------------------------------------------- 2. The Department of Experimental Psychology at Duke University has a tenure-track assistant professor position beginning in the Fall, 1993 in the general area of Behavioral Neuroscience with theoretical interests in neural plasticity, learning, motivation, or sensory perception development. Candidates with strong research and teaching interests should send a vitae, representative reprints, and three or more letters of recommendation to: Faculty Search Committee, Department of Experimental Psychology, Duke University, Durham, NC, 27706. Duke is an equal opportunity/affirmative action employer. ---------------------------------------------------------------------------- From rosetree at titan.ucc.umass.edu Tue Dec 1 16:11:20 1992 From: rosetree at titan.ucc.umass.edu (DAVID A ROSENBAUM) Date: Tue, 1 Dec 92 16:11:20 -0500 Subject: JOB OPENING AT UMASS(AMHERST) Message-ID: <9212012111.AA22183@titan.ucc.umass.edu> From jlm at crab.psy.cmu.edu Wed Dec 2 10:18:02 1992 From: jlm at crab.psy.cmu.edu (James L. McClelland) Date: Wed, 2 Dec 92 10:18:02 EST Subject: Post-Doctoral Openings Message-ID: <9212021518.AA12273@crab.psy.cmu.edu.noname> I have an opening in my laboratory for at least one post-doctoral fellow and possibly two. These would be two-year post-doctoral fellowships, with the possibility of extension if the applicant can raise additional funding for additional years. The default start date is September 1 1993, but other dates may be possible. I'm looking for individuals with strengths in the mathematical analysis of neural networks who wish to apply these strengths to the development of a computational framework for modeling human cognition. 
Prior work demonstrating these strengths and interests will be given considerable weight. Two specific areas of interest in my laboratory are: 1. Dynamics of information processing. The goal here has been to develop mathematical analyses of stochastic, symmetric, diffusion networks and apply these in an effort to understand the time-course of human information processing as this is exhibited in information processing tasks studied extensively in the human cognitive psychology literature. We are also interested in further studies of learning in stochastic networks, building on recent work in my laboratory (with Javier Movellan, a departing postdoc) and elsewhere. 2. Learning and memory. One goal here is to understand from a computational point of view why humans have two memory systems. Neuropsychological evidence suggests that one may lose the ability to acquire new memories for specific facts and experiences, while at the same time showing completely normal acquisition of various cognitive, perceptuo-motor, and language processing skills. The questions in this area are: Why should there be two different kinds of learning in the human brain? What are the essential properties of each? And how do they work together? Basically, I am looking for individuals who are interested in working on some aspect of either of these broad problems. My style in working with post-docs is to find a specific problem of mutual interest and develop a collaboration around that. Please do not reply by email. If you are interested, please send me a letter along with a CV, your publications or preprints, and the names, addresses, and phone numbers of two individuals who can comment on your work. Send your materials by December 20 to: James L. McClelland Department of Psychology Carnegie Mellon University Pittsburgh, PA 15213 Upon receipt of these materials I will reciprocate with recent papers from my laboratory as a way of beginning a discussion of whether we can find a fit between our interests. From fellous%hyla.usc.edu at usc.edu Sat Dec 5 14:42:41 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Sat, 5 Dec 92 11:42:41 PST Subject: CNE Workshop/USC Call For Papers Message-ID: <9212051942.AA04904@hyla.usc.edu> Thank you for posting this announcement on the list: ---------------------------------------------------------------------------- CALL FOR PAPERS SCHEMAS AND NEURAL NETWORKS: INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION A Workshop sponsored by the Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520 April 13th and 14th, 1993 Program Committee: Michael Arbib (Organizer), John Barnden, George Bekey, Francisco Cervantes-Perez, Damian Lyons, Paul Rosenbloom, Ron Sun, Akinori Yonezawa. To design complex technological systems and to analyze complex biological and cognitive systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence, perceptual robotics, cognitive modeling, and brain theory which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks.
The proposed workshop will provide a 2-hour introductory tutorial and problem statement by Michael Arbib, and sessions in which an invited paper will be followed by several contributed papers, selected from those submitted in response to this call for papers. Preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of schemas, and where some but not necessarily all of the schemas are implemented in neural networks. A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models: Schema Theory as a description language for neural networks. Modular neural networks. Linking DAI to Neural Networks to Hybrid Architecture. Formal Theories of Schemas. Hybrid approaches to integrating planning & reaction. Hybrid approaches to learning. Hybrid approaches to commonsense reasoning by integrating neural networks and rule-based reasoning (using schema for the integration). Programming Languages for Schemas and Neural Networks. Concurrent Object-Oriented Programming for Distributed AI and Neural Networks. Schema Theory Applied in Cognitive Psychology, Linguistics, Robotics, AI and Neuroscience. Prospective contributors should send a hard copy of a five-page extended abstract, including figures with informative captions and full references (either by regular mail or fax) by February 15, 1993 to: Michael Arbib Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520, USA Tel: (213) 740-9220, Fax: (213) 746-2863, email: arbib at pollux.usc.edu. Please include your full address, including fax and email, on the paper. Notification of acceptance or rejection will be sent by email no later than March 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but revised versions of accepted abstracts received prior to April 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. [A useful way to structure such an abstract is in short numbered sections, where each section presents (in a small type face!) the material corresponding to one transparency/slide in a verbal presentation. This will make it easy for an audience to take notes if they have a copy of the abstract at your presentation.] Hotel Information: Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail of the above rates. The registration fee of $150 includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of April 13th.
Those wishing to register should send a check payable to Center for Neural Engineering, USC for $150 together with the following information to: Paulina Tagle Center for Neural Engineering University of Southern California University Park Los Angeles CA 90089-2520 USA --------------------------------------------------------------------- SCHEMAS AND NEURAL NETWORKS Center for Neural Engineering, USC April 13 - 14, 1993 NAME: ____________________________________________ ADDRESS: ____________________________________________ ____________________________________________ PHONE NO.: _______________ FAX:___________________ EMAIL: ____________________________________________ I intend to submit a paper: YES [ ] NO [ ] --------------------------------------------------------------------- From paul at dendrite.cs.colorado.edu Sun Dec 6 13:04:42 1992 From: paul at dendrite.cs.colorado.edu (Paul Smolensky) Date: Sun, 6 Dec 1992 11:04:42 -0700 Subject: Cognitive Science Conference (note DEADLINE) Message-ID: <199212061804.AA20627@axon.cs.colorado.edu> Fifteenth Annual Meeting of the COGNITIVE SCIENCE SOCIETY A MULTIDISCIPLINARY CONFERENCE ON COGNITION June 18 - 21, 1993 University of Colorado at Boulder Call for Papers This year's conference aims at broad coverage of the many and diverse methodologies and topics that comprise Cognitive Science. In addition to computer modeling, the meeting will feature research in computational, theoretical, and psycho-linguistics; cognitive neuroscience; conceptual change and education; artificial intelligence; philosophical foundations; human-computer interaction and a number of other approaches to the study of cognition. A plenary session honoring the memory of Allen Newell is scheduled. Plenary addresses will be given by: Alan Baddeley Andy DiSessa Paul Smolensky Sandra Thomson Bonnie Webber The conference will also highlight invited research papers: Conceptual Change: (Organizers: Nancy Songer & Walter Kintsch) Gaea Leinhardt Ashwin Ram Jeremy Roschelle Language Learning: (Organizers: Paul Smolensky & Walter Kintsch) Michael Brent Robert Frank Brian MacWhinney Situated Action: (Organizer: James Martin) Leslie Kaelbling Pattie Maes Bonnie Nardi Alonso Vera Visual Perception & Cognitive Neuroscience: (Organizer: Michael Mozer) Marlene Behrmann Robert Jacobs Hal Pashler David Plaut PAPER SUBMISSIONS With the goal of assembling a high-quality program representative of the diversity of methods and topics in cognitive science, we invite papers presenting interdisciplinary research addressing any cognitive domain and using any of the diverse methodologies of the field. Papers are specifically solicited which address the topics of the invited research sessions listed above. Authors should submit five (5) copies of the paper in hard copy form to: Cognitive Science 1993 Submissions Dr. Martha Polson Institute of Cognitive Science Campus Box 344 University of Colorado Boulder, CO 80309-0344 DAVID MARR MEMORIAL PRIZES FOR EXCELLENT STUDENT PAPERS Papers with a student first author will be eligible to compete for a David Marr Memorial Prize for excellence in research and presentation. The David Marr Prizes are accompanied by a $300.00 honorarium, and are funded by an anonymous donor. LENGTH Papers must be a maximum of six (6) pages long (excluding only the cover page), must have at least 1 inch margins on all sides, and must use no smaller than 10 point type. Camera-ready versions will be required only after authors are notified of acceptance.
COVER PAGE Each copy of the paper must include a cover page, separate from the body of the paper, which includes, in order: 1. Title of paper. 2. Full names, postal addresses, phone numbers and e-mail addresses (if possible) of all authors. 3. An abstract of no more than 200 words. 4. The area(s) in which the paper should be reviewed. When possible, please list, in decreasing order of relevance, 1-3 of the following keywords: action/motor control, acquisition/learning, cognitive architecture, cognitive neuroscience, connectionism, conceptual change/education, decision making, foundations, human-computer interaction, language (indicate subarea), memory, reasoning and problem solving, perception, situated action/cognition, skill/expertise. 5. Preference for presentation format: Talk or poster, talk only, poster only. Poster sessions will be highlighted in this year's conference. The proceedings will not distinguish between papers presented orally and those presented as posters. 6. A note stating if the paper is eligible to compete for a Marr Prize. DEADLINE ***** PAPERS ARE DUE JANUARY 19, 1993. ****** Late papers will be accepted until January 31, but authors of late papers will have less time to make revisions after acceptance. SYMPOSIA Proposals for symposia are also invited. Proposals should indicate: (1) A brief description of the topic; (2) How the symposium would address a broad cognitive science audience; (3) Names of symposium organizer(s); (4) List of potential speakers, their topics, and some estimate of their likelihood of participation; (5) Proposed symposium format (designed to last 90 minutes). Symposium proposals should be sent as soon as possible, but no later than January 19, 1993. FOR MORE INFORMATION CONTACT Dr. Martha Polson Institute of Cognitive Science Campus Box 344 University of Colorado Boulder, CO 80309-0344 E-mail: Cogsci at clipr.colorado.edu Telephone: (303) 492-7638 FAX: (303) 492-2967 From ala at sans.kth.se Sun Dec 6 16:45:19 1992 From: ala at sans.kth.se (Anders Lansner) Date: Sun, 6 Dec 1992 22:45:19 +0100 Subject: Mechatronical Computer Systems that Perceive and Act Message-ID: <199212062145.AA27311@thalamus.sans.kth.se> ************************************************************************* * Invitation to * * International Workshop on Mechatronical Computer Systems * * for Perception and Action, June 1-3, 1993. * * * * Halmstad University, Sweden * * * * First Call for Contributions * ************************************************************************* Mechatronical Computer Systems that Perceive and Act ++++++++++++++++++++++++++++++++++++++++++++++++++++ A new generation ================ Mechatronical computer systems, which we will see in advanced products and production equipment of tomorrow, are designed to do much more than calculate. The interaction with the environment and the integration of computational modules in every part of the equipment, engaging in every aspect of its functioning, put new, and conceptually different, demands on the computer system. A development towards a complete integration between the mechanical system, advanced sensors and actuators, and a multitude of processing modules can be foreseen. At the systems level, powerful algorithms for perceptual integration, goal-direction and action planning in real time will be critical components.
The resulting 'action-oriented systems' may interact with their environments by means of sophisticated sensors and actuators, often with a high degree of parallelism, and may be able to learn and adapt to different circumstances and environments. Perceiving the objects and events of the external world and acting upon the situation in accordance with an appropriate behaviour, whether programmed, trained, or learned, are key functions of these next-generation computer systems. The aim of this first International Workshop on Mechatronical Computer Systems for Perception and Action is to gather researchers and industrial development engineers, who work with different aspects of this exciting new generation of computing systems and computer-based applications, for a fruitful exchange of ideas and results and for often interdisciplinary discussions. Workshop Form ============= One of the days of the workshop will be devoted to 'true workshop activities'. The objective is to identify and propose research directions and key problem areas in mechatronical computing systems for perception and action. In the morning session, invited speakers, as well as other workshop delegates, will give their perspectives on the theme of the workshop. The work will proceed in smaller working groups during the afternoon, after which the conclusions will be presented in a plenary session. The scientific programme will also include presentations of research results in oral or poster form, or as demonstrations. Subject Areas ============= The programme committee welcomes all kinds of contributions - papers to be presented orally or as posters, demonstrations, etc. - in the areas listed below, as well as other areas of relevance to the theme of the workshop. From the workshop point of view, it is not essential that contributions contain only new, unpublished results. Rather, the new, interdisciplinary collection of delegates that can be expected at the workshop may motivate presentations of earlier published results. Specifically, we invite delegates to state their view of the workshop theme, including identification of key research issues and research directions. The planning of the workshop day will be based on these submitted statements, some of which will be presented in the plenary session, some of which in the smaller working groups. At this early stage we also welcome proposals for session themes and invited talks. Relevant subject areas are e.g.: -------------------------------- Real-Time Systems Architecture and Real-Time Software. Sensor Systems and Sensory/Motor Coordination. Biologically Inspired Systems. Applications of Unsupervised and Reinforcement Learning. Real-Time Decision Making and Action Planning. Parallel Processor Architectures for Embedded Systems. Development Tools and Support Systems for Mechatronical Computer Systems and Applications. Dependable Computer Systems. Robotics and Machine Vision. Neural Networks in Real-Time Applications. Advanced Mechatronical Computing Demands in Industry. IMPORTANT DATES ================ Dec. 15, 1992: Proposals for Invited speakers, Panel discussions, Special sessions, etc. Febr. 1, 1993: Submissions of extended abstracts (4 pages max.) or full papers. Submissions of statements regarding perspectives on the conference theme, that the delegate would like to present at the workshop (4 pages max.) March 1, 1993: Notification of acceptance. Preliminary final programme. May 1, 1993: Final papers and statements.
ORGANISERS ========== The workshop is arranged by CCA, the Centre for Computer Architecture at Halmstad University, Sweden, in cooperation with the DAMEK Mechatronics Research Group and the SANS (Studies of Artificial Neural Systems) Research Group, both at the Royal Institute of Technology (KTH), Stockholm, Sweden, and the Department of Computer Engineering, Chalmers University of Technology, Gothenburg, Sweden. The Organising Committee includes: Lars Bengtsson, CCA, Organising Chair Anders Lansner, SANS Kenneth Nilsson, CCA Bertil Svensson, Chalmers University of Technology and CCA, Programme and Conference Chair Per-Arne Wiberg, CCA Jan Wikander, DAMEK The workshop is supported by SNNS, the Swedish Neural Network Society. It is financially supported by Halmstad University, the County Administration of Halland, and Swedish industries. Social Activities Social activities and a Programme for Accompanying persons will be arranged. MCPA Workshop, Centre for Computer Architecture, Halmstad University, Box 823, S-30118 Halmstad, Sweden Tel. +46 35 153134 (Lars Bengtsson), Fax. +46 35 157387, e-mail: mcpa at cca.hh.se FURTHER INFORMATION =================== For further information and registration form, fill out the form below and send to: MCPA Workshop/Lars Bengtsson Centre for Computer Architecture Halmstad University Box 823 S-30118 HALMSTAD, Sweden Alternatively, by simply mailing the text 'send info' to mcpa at cca.hh.se you will be included in the e-mail mailing list and supplied with up-to-date information. //////////////////////////////////////////////////////////////// //////////////////////////////////////////////////////////////// Please include me in the mailing list of the MCPA Workshop Name: Address: I intend to submit a paper/give a demonstration within the area of: I have suggestions for the workshop programme. Please contact me on phone/fax/e-mail: /////////////////////////////////////////////////////////////////////// /////////////////////////////////////////////////////////////////////// From ingber at alumni.cco.caltech.edu Mon Dec 7 17:24:48 1992 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Mon, 7 Dec 1992 14:24:48 -0800 Subject: VFSR v6.30 now in Statlib Message-ID: <9212072224.AA21221@alumni.cco.caltech.edu> Very Fast Simulated Reannealing (VFSR) vfsr v6.30 is now in Statlib (login as statlib to lib.stat.cmu.edu, file vfsr is in directory general). If you already have vfsr v6.25 from Netlib (login as netlib to research.att.com, file vfsr.Z is in directory opt), this can be updated using a patch I'd be glad to send on request. v6.30 fixes a bug encountered for negative cost functions, and adds some printout to make your bug reports and comments easier to decipher. Lester || Prof. Lester Ingber ingber at alumni.caltech.edu || || P.O. Box 857 || || McLean, VA 22101 703-848-1859 = [10ATT]0-700-L-INGBER || From moody at cse.ogi.edu Tue Dec 8 18:22:00 1992 From: moody at cse.ogi.edu (John Moody) Date: Tue, 8 Dec 92 15:22 PST Subject: PhD and Masters Programs at the Oregon Graduate Institute Message-ID: Fellow Connectionists: The Oregon Graduate Institute of Science and Technology (OGI) has openings for a few outstanding students in its Computer Science Masters and Ph.D. programs in the areas of Neural Networks, Learning, Speech, Language, Vision, and Control. Faculty in these areas include Etienne Barnard, Ron Cole, Mark Fanty, Dan Hammerstrom, Todd Leen, Uzi Levin, John Moody, David Novick, Misha Pavel (visiting), and Barak Pearlmutter.
Short descriptions of faculty research interests are appended below. OGI is a young, but rapidly growing, private research institute located in the Portland area. OGI offers Masters and PhD programs in Computer Science and Engineering, Applied Physics, Electrical Engineering, Biology, Chemistry, Materials Science and Engineering, and Environmental Science and Engineering. Inquiries about the Masters and PhD programs and admissions should be addressed to: Office of Admissions and Records Oregon Graduate Institute of Science and Technology 19600 NW von Neumann Drive Beaverton, OR 97006-1999 or to the Computer Science and Engineering Department at csedept at cse.ogi.edu or (503)690-1150. The final deadline for receipt of all application materials is March 1, 1993. Applications are reviewed as they are received, and applying early is strongly advised. +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ Oregon Graduate Institute of Science & Technology (OGI) Department of Computer Science and Engineering Research Interests of Faculty in Neural Networks, Learning, Speech, Language, Vision, and Control Etienne Barnard: Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics. Ron Cole: Ron Cole is director of the Center for Spoken Language Understanding at OGI. Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world. Mark Fanty: Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers. Dan Hammerstrom: Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next generation VLSI neurocomputer architectures. Todd K. Leen: Todd Leen's research spans theory of neural network models, architecture and algorithm design, and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling. Uzi Levin: Uzi Levin's research interests include neural networks, learning systems, decision dynamics in distributed and hierarchical environments, dynamical systems, Markov decision processes, and the application of neural networks to the analysis of financial markets. John Moody: John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, and finance.
David Novick: David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems. Misha Pavel (visiting from NYU and NASA Ames): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces. Barak Pearlmutter: Barak Pearlmutter is interested in adaptive systems in their many manifestations. He currently works on neural network learning, unsupervised learning, generalization, accelerating the learning process, relations to biology, reinforcement learning and control, and applications to practical problems. From jbower at cns.caltech.edu Tue Dec 8 16:23:40 1992 From: jbower at cns.caltech.edu (Jim Bower) Date: Tue, 8 Dec 92 13:23:40 PST Subject: CNS*93 Message-ID: <9212082123.AA08110@smaug.cns.caltech.edu> CALL FOR PAPERS Second Annual Computation and Neural Systems Meeting CNS*93 July 31 - August 8, 1993 Washington D.C. This is the second annual meeting of an inter-disciplinary conference intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience. Last year's meeting in San Francisco brought 300 experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians together to consider the functioning of biological nervous systems. 85 peer-reviewed papers were presented at the meeting on a range of subjects related to understanding how biological neural systems compute. As last year, the meeting is intended to equally emphasize experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. The first day of the meeting will be devoted to tutorial presentations and workshops focused on particular technical issues confronting computational neurobiology. The main body of the meeting will include plenary, contributed and poster sessions. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume. Following the regular session, there will be two days of focused workshops at a rural site outside of the D.C. area. With this announcement we solicit the submission of presented papers to the meeting. All papers will be refereed. Submission Procedures: Original research contributions are solicited. Authors must submit a 1000-word (or less) summary and a separate single-page 50-100 word abstract clearly stating their results. Accepted abstracts will be published in the conference program. Summaries are for program committee use only. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list: Presentation categories: A. Theory and Analysis B. Modeling and Simulation C. Experimental D. Tools and Techniques Themes: A. Development B. Cell Biology C. Excitable Membranes and Synaptic Mechanisms D. Neurotransmitters, Modulators, Receptors E. Sensory Systems 1. Somatosensory 2. Visual 3.
Auditory 4. Olfactory 5. Other F. Motor Systems and Sensory Motor Integration G. Behavior H. Cognitive I. Disease Include addresses of all authors on the front of the summary and the abstract, including email for each author. Indicate on the front of the summary to which author correspondence should be addressed. Program committee decisions will be sent to the correspondence author only. Submissions that lack category information, separate abstract sheets, or author addresses, or that arrive late, will not be considered. Submissions can be made by either surface mail or email. Authors submitting via surface mail should send 6 copies of the abstract and summary to: Chris Ploegaert CNS*93 Submissions Division of Biology 216-76 Caltech Pasadena, CA 91125 Email submissions should be sent to: cp at smaug.cns.caltech.edu (ascii, postscript, or latex files accepted). In each case, submissions must be postmarked (emailed) by January 26th, 1993. Registration information: All submitting authors will be sent registration material automatically. Others interested in obtaining registration material once it becomes available should contact Chris Ploegaert at the above address or via email at: cp at smaug.cns.caltech.edu +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CNS*93 Organizing Committee: Meeting Coordination: Dennis Glanzman, National Institute of Mental Health Frank Eeckman, Lawrence Livermore Labs Program Co-Chairs: James M. Bower, Caltech Eve Marder, Brandeis University John Rinzel, National Institutes of Health Finances: John Miller, University of California, Berkeley Gwen Jacobs, University of California, Berkeley Workshop Chair: Bartlett Mel, Caltech European Liaison: Herbert Axelrad, Faculte de Medecine Pitie-Salpetriere Paris ====================================================================== Potential participants interested in the content of last year's meeting can ftp last year's agenda using the following procedure (you enter the text shown in ""): yourhost% "ftp 131.215.137.69" 220 mordor FTP server (SunOS 4.1) ready. Name (131.215.137.69:): "ftp" 331 Guest login ok, send ident as password. Password: "yourname at yourhost.yoursite.yourdomain" 230 Guest login ok, access restrictions apply. ftp> "cd cns93" 250 CWD command successful. ftp> "get cns92.agenda" 200 PORT command successful. 150 ASCII data connection for cns92.agenda (131.215.137.69,1363) 226 ASCII Transfer complete. local: cns92.agenda remote: cns92.agenda 17598 bytes received in 0.33 seconds (53 Kbytes/s) ftp> "quit" 221 Goodbye. yourhost% (use any editor to look at the file) ==================================================================== **DEADLINE FOR SUMMARIES & ABSTRACTS IS January 26, 1993** please post From ingber at alumni.cco.caltech.edu Wed Dec 9 05:25:31 1992 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Wed, 9 Dec 1992 02:25:31 -0800 Subject: Very Fast Simulated Reannealing (VFSR) via Ftp or Email Message-ID: <9212091025.AA11704@alumni.cco.caltech.edu> Very Fast Simulated Reannealing (VFSR) via Ftp or Email My previous announcement did not specify the use of ftp, and many people unfamiliar with the use of NETLIB and STATLIB were understandably confused. This announcement is to remedy that problem.
STATLIB: vfsr v6.30
Interactive:
ftp lib.stat.cmu.edu [login as statlib, your_login_name as password]
cd general
get vfsr
Email:
mail statlib at lib.stat.cmu.edu
send vfsr from general
NETLIB: vfsr v6.25
Interactive:
ftp research.att.com [login as netlib, your_login_name as password]
cd opt
binary
get vfsr.Z
Email:
mail netlib at research.att.com
send vfsr from opt
PATCH: vfsr-diff-6.25-6.30.Z.uu
If you already have vfsr v6.25 from NETLIB, this can be updated using a patch I'd be glad to send on request.
strip out text between CUT HERE lines, save to savefile
uudecode savefile
uncompress vfsr-diff-6.25-6.30.Z
mv vfsr-diff-6.25-6.30 VFSR ; cd VFSR
patch -p1 < vfsr-diff-6.25-6.30
v6.30 fixes a bug encountered for negative cost functions, and adds some printout to make your bug reports and comments easier to decipher. Lester || Prof. Lester Ingber ingber at alumni.caltech.edu || || P.O. Box 857 || || McLean, VA 22101 703-848-1859 = [10ATT]0-700-L-INGBER || From Patricia.M.Reed at Dartmouth.EDU Wed Dec 9 09:03:40 1992 From: Patricia.M.Reed at Dartmouth.EDU (Patricia.M.Reed@Dartmouth.EDU) Date: 9 Dec 92 09:03:40 EST Subject: Position Opening Message-ID: <2817564@donner.Dartmouth.EDU> The following ad describes a position opening at Dartmouth College in the Neurosciences or Cognitive Neurosciences. Please submit applications/nominations to the address given. **************************** David T. McLaughlin Distinguished Professorship in Cognitive Science or Cognitive Neuroscience Dartmouth College seeks a distinguished individual in cognitive science or cognitive neuroscience to be the first holder of the David T. McLaughlin Distinguished Professorship. It is expected that the appointment will be made at the tenured, full professor level in the Department of Psychology and that the successful candidate will participate in the undergraduate and doctoral programs of the Department. However, the interests and achievements of the appointee should transcend the normal academic boundaries and should encompass scholarship that integrates disciplines within Arts and Science and/or the professional schools of medicine and engineering. Candidates should possess an outstanding record of scholarship and a proven ability to work in an interdisciplinary environment, to attract external funding for their research, and to communicate their work to a diverse audience. In addition to participating in the activities of the cognitive science group in the Psychology Department, the appointee would be expected to foster interactions with other research groups, such as those in computer science and engineering, signal processing, neurosurgery and molecular neuroscience. Nominations and applications should be sent to the following address: P. Bruce Pipes Associate Provost for Academic Affairs Dartmouth College 6004 Parkhurst Hall, Room 204 Hanover, NH 03755-3529 Formal consideration of candidates will begin February 1, 1993. Dartmouth College is an Affirmative Action/Equal Opportunity employer. Applications from and nominations of women and minority candidates are strongly encouraged. From roscheis at CS.Stanford.EDU Wed Dec 9 18:56:48 1992 From: roscheis at CS.Stanford.EDU (Martin Roscheisen) Date: Wed, 9 Dec 92 15:56:48 -0800 Subject: No subject Message-ID: <9212092356.AA22323@csd-d-5.Stanford.EDU> The following is relevant to connectionists interested in natural language. - martin -------------- Feel free to forward to colleagues. Do not redistribute to public mailing lists.
----------------------------------------------------------- Mailing List on Statistics, Natural Language, and Computing We will be maintaining a special-purpose mailing list to provide a platform for - discussing technical issues, - distributing abstracts of new papers, - locating and sharing information, and - announcements (workshops, jobs) related to corpus-based studies of natural language, statistical natural language processing, methods that enable systems to deal with and scale up to actual language use, psycholinguistic evidence of representation of distributional properties of language, as well as applications in such areas as information retrieval, human-computer interaction, and translation. Special care will be taken to keep uninformed or redundant messages to a minimum; the list is filtered and restricted to people actively involved in relevant research. To be added to or dropped from the distribution list send a message to empiricists-request at csli.stanford.edu. Contributions should go to empiricists at csli.stanford.edu. Martin Roscheisen roscheis at cs.stanford.edu David Yarowsky yarowsky at unagi.cis.upenn.edu David Magerman magerman at watson.ibm.com Ido Dagan dagan at research.att.com From SCHNEIDER at vms.cis.pitt.edu Thu Dec 10 16:53:00 1992 From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu) Date: Thu, 10 Dec 1992 16:53 EST Subject: Pre- and Post-doc positions in Neural Processes in Cognition in Pittsburgh Message-ID: <01GS5P0MZ0E891YBN8@vms.cis.pitt.edu> Program announcement for Interdisciplinary Graduate and Postdoctoral Training in Neural Processes in Cognition at the University of Pittsburgh and Carnegie Mellon University Pre- and Post-Doctoral positions The Pittsburgh Neural Processes in Cognition program, now in its third year, is providing interdisciplinary training in brain sciences. The National Science Foundation has established an innovative program for students investigating the neurobiology of cognition. The program's focus is the interpretation of cognitive functions in terms of neuroanatomical and neurophysiological data and computer simulations. Such functions include perceiving, attending, learning, planning, and remembering in humans and in animals. A carefully designed program of study prepares each student to perform original research investigating cortical function at multiple levels of analysis. State of the art facilities include: computerized microscopy, human and animal electrophysiological instrumentation, behavioral assessment laboratories, fMRI and PET brain scanners, the Pittsburgh Supercomputing Center, and a regional medical center providing access to human clinical populations. This is a joint program between the University of Pittsburgh, its School of Medicine, and Carnegie Mellon University. Each student receives full financial support, travel allowances and workstation support. Applications are encouraged from students with interest in biology, psychology, engineering, physics, mathematics, or computer science. Last year's class included mathematicians, psychologists, and neuroscience researchers. Pittsburgh is one of America's most exciting and affordable cities, offering outstanding symphony, theater, professional sports, and outdoor recreation in the surrounding Allegheny mountains. More than ten thousand graduate students attend its universities.
Core Faculty, interests, and affiliations Carnegie Mellon University -Psychology- James McClelland, Jonathan Cohen, Martha Farah, Mark Johnson Computer Science - David Touretzky University of Pittsburgh Behavioral Neuroscience - Michael Ariel Biology - Teresa Chay Information Science - Paul Munro Mathematics - Bard Ermentrout Neurobiology Anatomy and Cell Sciences - Al Humphrey Neurological Surgery - Don Krieger, Robert Sclabassi Neurology - Steven Small Psychiatry - David Lewis, Lisa Morrow, Stuart Steinhauer Psychology - Walter Schneider, Velma Dobson Physiology - Dan Simons Radiology - Mark Mintun Applications: To apply to the program contact the program office or one of the affiliated departments. Students are admitted jointly to a home department and the Neural Processes in Cognition Program. Postdoctoral applicants must have United States residency status and are expected to have a sponsor among the training faculty. Applications are requested by February 1. For information contact: Professor Walter Schneider Program Director Neural Processes in Cognition University of Pittsburgh 3939 O'Hara St Pittsburgh, PA 15260 Or: call 412-624-7064 or Email to NEUROCOG at VMS.CIS.PITT.BITNET. In Email requests for application materials, please provide your address and an indication of which department(s) you might be interested in. From paulina at pollux.usc.edu Fri Dec 11 18:14:19 1992 From: paulina at pollux.usc.edu (Paulina Baligod) Date: Fri, 11 Dec 92 15:14:19 PST Subject: Call for Papers Message-ID: <9212112314.AA03707@pollux.usc.edu> SCHEMAS AND NEURAL NETWORKS: INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION A Workshop sponsored by the Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520 April 13th and 14th, 1993 Program Committee: Michael Arbib (Organizer), John Barnden, George Bekey, Francisco Cervantes-Perez, Damian Lyons, Paul Rosenbloom, Ron Sun, Akinori Yonezawa To design complex technological systems and to analyze complex biological and cognitive systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence, perceptual robotics, cognitive modeling, and brain theory which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks. The proposed workshop will provide a 2-hour introductory tutorial and problem statement by Michael Arbib, and sessions in which an invited paper will be followed by several contributed papers, selected from those submitted in response to this call for papers. Preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of schemas, and where some but not necessarily all of the schemas are implemented in neural networks.
A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models: Schema Theory as a description language for neural networks Modular neural networks Linking DAI to Neural Networks to Hybrid Architecture Formal Theories of Schemas Hybrid approaches to integrating planning & reaction Hybrid approaches to learning Hybrid approaches to commonsense reasoning by integrating neural networks and rule- based reasoning (using schema for the integration) Programming Languages for Schemas and Neural Networks Concurrent Object-Oriented Programming for Distributed AI and Neural Networks Schema Theory Applied in Cognitive Psychology, Linguistics, Robotics, AI and Neuroscience Prospective contributors should send a hard copy of a five-page extended abstract, including figures with informative captions and full references (either by regular mail or fax) by February 15, 1993 to Michael Arbib, Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863, arbib at pollux.usc.edu]. Please include your full address, including fax and email, on the paper. Notification of acceptance or rejection will be sent by email no later than March 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but revised versions of accepted abstracts received prior to April 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. [A useful way to structure such an abstract is in short numbered sections, where each section presents (in a small type face!) the material corresponding to one transparency/slide in a verbal presentation. This will make it easy for an audience to take notes if they have a copy of the abstract at your presentation.] Hotel Information: Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748- 4141, Reservation: (800) 872-1104, Fax: (213) 748- 0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail of the above rates. The registration fee of $150 includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of April 13th. Those wishing to register should send a check payable to Center for Neural Engineering, USC for $150 together with the following information to Paulina Tagle, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA. 
--------------------------------------------------------------------------- SCHEMAS AND NEURAL NETWORKS Center for Neural Engineering, USC April 13 - 14, 1993 NAME: ___________________________________________ ADDRESS: _________________________________________ PHONE NO.: _______________ FAX:___________________ EMAIL: ___________________________________________ I intend to submit a paper: YES [ ] NO [ ] From PIURI at IPMEL1.POLIMI.IT Sun Dec 13 05:57:02 1992 From: PIURI at IPMEL1.POLIMI.IT (PIURI@IPMEL1.POLIMI.IT) Date: 13 Dec 1992 10:58:02 +0001 Subject: call for papers Message-ID: <01GS9JGGS8C29BVDD1@icil64.cilea.it> ================================================================ 1993 INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC ARRAY PROCESSORS ASAP'93 25-27 October 1993, Venice, Italy ================================================================ Sponsored by the EUROMICRO Association. In cooperation with IEEE Computer Society (pending), IFIP WG 10.5, AEI, AICA (pending). ................:::::: CALL FOR PAPERS ::::::................. ASAP'93 is an international conference encompassing the theory, design, implementation, and evaluation of application-specific computing. This conference is a successor to the First International Workshop on Systolic Arrays held in Oxford, England, in July 1986. The title has been modified to reflect the expanded interest in highly-parallel algorithmically-specialized processors as well as the application-driven nature of contemporary systems. Since application-specific array-processor research represents a cross-disciplinary field of interest to a broad audience, this conference will present a balanced program covering technical subjects encompassing conceptual design, programming techniques, electronic and optical implementations, and analysis and evaluation of final systems. It is expected that participants will include people from research institutions in government, industry, and academia around the world. The conference will feature an opening keynote address, technical presentations, a panel discussion, and poster displays. One of the poster sessions will be reserved for very recent results, ongoing projects and exploratory work. The official language is English. The themes emphasized at this conference include architectures, algorithms, applications, hardware, software, and design methodology for application-specific parallel computing systems. Papers are expected to address both theoretical and practical aspects of such systems. Of particular interest are contributions that achieve large performance gains with application-specific parallel processors, introduce novel architectural concepts, propose formal and practical frameworks for the specification, design and evaluation of these systems, discuss technology dependencies and the integration of hardware and software components, and describe fabricated systems and their testing.
The topics of interest include, but are not limited to, the following: Application-Specific Architectures - systolic, SIMD, MIMD, dataflow systems - homogeneous, heterogeneous, reconfigurable systems - intelligent memory systems and interconnection network - embedded systems and interfaces Algorithms for Application-Specific Computing - matrix operations - transformations (e.g., FFT, Hough) - sorting algorithms - graph algorithms Applications that Require Specialized Computing Systems - signal processing - image processing and vision - communications - robotics and manufacturing - neural networks - scientific computing - artificial intelligence and data bases Software for Application-Specific Computing - languages - operating systems - optimizing compilers Hardware for Application-Specific Computing - VLSI/WSI systems - optical systems - custom and commercial processors - implementation and testing issues - fabricated systems - synchronous vs asynchronous systems Design Methodology for Application-Specific Systems - mapping algorithms onto architectures - partitioning large problems - design tools - fault tolerance (hardware and software) - benchmarks and performance modeling INFORMATION FOR AUTHORS Authors are invited to send a one-page abstract, the title of the paper and the author's address by electronic or postal mail to the Program Chair by MARCH 27, 1993. Authors must submit five copies of their double-spaced typed manuscript (maximum 5000 words) with an abstract to the Program Chair by APRIL 16, 1993. In the submission letter, the authors should indicate which conference areas are most relevant to the paper, and the author responsible for correspondence and the camera-ready version. Papers submitted should be unpublished and not currently under review by other conferences. Notification of acceptance will be posted by JUNE 10, 1993. The camera-ready version must arrive by AUGUST 1, 1993. Proceedings will be published by IEEE Computer Society Press. GENERAL CHAIR Prof. Luigi DADDA Dept. of Electronics and Information, Politecnico di Milano p.za L. da Vinci 32, I-20133 Milano, Italy phone no. (39-2) 2399-3405, fax no. (39-2) 2399-3411 e-mail dadda at ipmel2.elet.polimi.it PROGRAM CHAIR Prof. Benjamin W. WAH Coordinated Science Laboratory, University of Illinois 1308 West Main Street, Urbana, IL 61801, USA phone no. (217) 333-3516, fax no. (217) 244-7175 e-mail wah at manip.crhc.uiuc.edu FINANCIAL & REGISTRATION CHAIR Prof. Vincenzo PIURI Dept. of Electronics and Information, Politecnico di Milano p.za L. da Vinci 32, I-20133 Milano, Italy phone no. (39-2) 2399-3606, fax no. (39-2) 2399-3411 e-mail piuri at ipmel1.polimi.it From andycl at syma.sussex.ac.uk Tue Dec 15 11:43:25 1992 From: andycl at syma.sussex.ac.uk (Andy Clark) Date: Tue, 15 Dec 92 16:43:25 GMT Subject: No subject Message-ID: <20823.9212151643@syma.sussex.ac.uk> bcc: andycl at cogs re: Doctoral Program in Philosophy-Psychology-Neuroscience First Announcement of a New Doctoral Programme in PHILOSOPHY-NEUROSCIENCE-PSYCHOLOGY at Washington University in St. Louis The Philosophy-Neuroscience-Psychology (PNP) program offers a unique opportunity to combine advanced philosophical studies with in-depth work in Neuroscience or Psychology. In addition to meeting the usual requirements for a Doctorate in Philosophy, students will spend one year working in Neuroscience or Psychology. 
The Neuroscience option will draw on the resources of the Washington University School of Medicine, which is an internationally acknowledged center of excellence in neuroscientific research. The initiative will also employ several new PNP-related Philosophy faculty and post-doctoral fellows. Students admitted to the PNP program will embark upon a five-year course of study designed to fulfill all the requirements for the Ph.D. in philosophy, including an academic year studying neuroscience at Washington University's School of Medicine or psychology in the Department of Psychology. Finally, each PNP student will write a dissertation jointly directed by a philosopher and a faculty member from either the medical school or the psychology department. THE FACULTY Roger F. Gibson, Ph.D., Missouri, Professor and Chair: Philosophy of Language, Epistemology, Quine. Robert B. Barrett, Ph.D., Johns Hopkins, Professor: Pragmatism, Renaissance Science, Philosophy of Social Science, Analytic Philosophy. Andy Clark, Ph.D., Stirling, Visiting Professor (1993-6) and Acting Director of PNP: Philosophy of Cognitive Science, Philosophy of Mind, Philosophy of Language, Connectionism. J. Claude Evans, Ph.D., SUNY-Stony Brook, Associate Professor: Modern Philosophy, Contemporary Continental Philosophy, Phenomenology, Analytic Philosophy, Social and Political Theory. Marilyn A. Friedman, Ph.D., Western Ontario, Associate Professor: Ethics, Social Philosophy, Feminist Theory. William H. Gass, Ph.D., Cornell, Distinguished University Professor of the Humanities: Philosophy of Literature, Photography, Architecture. Lucian W. Krukowski, Ph.D., Washington University, Professor: 20th Century Aesthetics, Philosophy of Art, 18th and 19th Century Philosophy, Kant, Hegel, Schopenhauer. Josefa Toribio Mateas, Ph.D., Complutense University, Assistant Professor: Philosophy of Language, Philosophy of Mind. Larry May, Ph.D., New School for Social Research, Professor: Social and Political Philosophy, Philosophy of Law, Moral and Legal Responsibility. Stanley L. Paulson, Ph.D., Wisconsin, J.D., Harvard, Professor: Philosophy of Law. Mark Rollins, Ph.D., Columbia, Assistant Professor: Philosophy of Mind, Epistemology, Philosophy of Science, Neuroscience. Jerome P. Schiller, Ph.D., Harvard, Professor: Ancient Philosophy, Plato, Aristotle. Joyce Trebilcot, Ph.D., California at Santa Barbara, Associate Professor: Feminist Philosophy. Joseph S. Ullian, Ph.D., Harvard, Professor: Logic, Philosophy of Mathematics, Philosophy of Language. Richard A. Watson, Ph.D., Iowa, Professor: Modern Philosophy, Descartes, Historical Sciences. Carl P. Wellman, Ph.D., Harvard, Hortense and Tobias Lewin Professor in the Humanities: Ethics, Philosophy of Law, Legal and Moral Rights. EMERITI Richard H. Popkin, Ph.D., Columbia: History of Ideas, Jewish Intellectual History. Alfred J. Stenner, Ph.D., Michigan State: Philosophy of Science, Epistemology, Philosophy of Language. FINANCIAL SUPPORT Students admitted to the Philosophy-Neuroscience-Psychology (PNP) program are eligible for five years of full financial support at competitive rates, given satisfactory academic progress. APPLICATIONS Application for admission to the Graduate School should be made to: Chair, Graduate Admissions Department of Philosophy Washington University Campus Box 1073 One Brookings Drive St.
Washington University encourages and gives full consideration to all applicants for admission and financial aid without regard to race, color, national origin, handicap, sex, or religious creed. Services for students with hearing, visual, orthopedic, learning, or other disabilities are coordinated through the office of the Assistant Dean for Special Services. From ck at rex.cs.tulane.edu Mon Dec 14 13:32:34 1992 From: ck at rex.cs.tulane.edu (Cris Koutsougeras) Date: Mon, 14 Dec 92 12:32:34 CST Subject: CFP: Intl. Conf. on Tools for AI In-Reply-To: <9212112314.AA03707@pollux.usc.edu>; from "Paulina Baligod" at Dec 11, 92 3:14 pm Message-ID: <9212141832.AA05430@isis> ======================================================================= CALL FOR PAPERS 5th IEEE International Conference on Tools with Artificial Intelligence November 8-11, 1993 Boston, Massachusetts This conference encompasses the technical aspects of specifying, designing, implementing and evaluating computer tools which use artificial intelligence techniques as well as tools for artificial intelligence applications. The topics of interest include the following aspects: o Machine learning, Theory and Algorithms o AI and Software Engineering o Intelligent Multimedia Systems o AI Knowledge Base Architecture o AI Algorithms o AI Language Tools o Reasoning Under Uncertainty, Fuzzy Logic o Logic and Intelligent Databases o Expert Systems and Environments o Artificial Neural Networks o Parallel Processing and Hardware Support o AI and Object-Oriented Systems o AI Applications INFORMATION FOR AUTHORS Authors are requested to submit five copies (in English) of their double-spaced typed manuscript (maximum of 25 pages) with an abstract to the program chair by April 15, 1993. The conference language is English, and final papers are restricted to seven IEEE model pages. A submission letter that indicates which of the conference areas is most relevant to your paper and gives the postal address, electronic mail address, telephone number, and fax number (if available) of the contact author must accompany the paper. Authors will be notified of acceptance by July 15, 1993 and will be given instructions for final preparation of their papers at that time. Outstanding papers will be eligible for publication in the International Journal on Artificial Intelligence Tools. Submit papers and panel proposals by April 15, 1993 to: Jeffrey J.P. Tsai, Dept. of EECS (M/C 154), P.O. Box 4348, University of Illinois, Chicago, IL 60680; tsai at bert.eecs.uic.edu; (312) 996-9324 (office), (312) 996-3422 (secretary), (312) 413-0024 (fax). An internet computer account is maintained to provide periodically updated information regarding the conference. Send a message to "tai at rex.cs.tulane.edu" to obtain the latest information including: registration forms, advance program, tutorials, hotel info, etc. For more information please contact: Conference Chair: John Mylopoulos, Dept. of Computer Science, University of Toronto, 6 King's College Road, Toronto, Ontario, Canada M5S 1A4, Tel: (416) 978-5180, jm at cs.toronto.ca. Steering Committee Chair: Nikolaos G. Bourbakis, Dept. of Electrical Engineering, SUNY at Binghamton, Binghamton, NY 13902, Tel: (607) 777-2165.
Program Vice-Chairs: Machine Learning: Bernard Silver (GTE Lab); AI and Software Engineering: Matthias Jarke (Technical University of Aachen); Logic and Intelligent Databases: Clement Yu (University of Illinois at Chicago); AI Knowledge Base Architectures: Robert Reynolds (Wayne State University); Intelligent Multimedia Systems: Forouzan Golshani (Arizona State University); Artificial Neural Networks: Ruediger W. Brause (J. W. Goethe University); Parallel Processing and Hardware Support: Ted Lewis (Oregon State University); AI Applications: Kiyoh Nakamura (Fujitsu Limited); Expert Systems and Environments: Philip Sheu (Rutgers University); Natural Language Processing: Fernando Gomez (University of Central Florida); AI Algorithms: Jun Gu (University of Calgary); AI and Object-Oriented Systems: Mamdouh H. Ibrahim (EDS Corporation); Reasoning under Uncertainty, Fuzzy Logic: John Yen (Texas A&M University). Registration and Publication Chair: C. Koutsougeras (Tulane University). Publicity Chairs: Mark Perlin (Carnegie Mellon University), A. Delis (University of Maryland), E. Kounalis (University of Nice), Mikio Aoyama (Fujitsu Limited), J.Y. Juang (National Taiwan University). Local Arrangement Chairs: John Vittall (GTE Lab), M. Mortazavi (SUNY Binghamton). Steering Committee: Nikolaos G. Bourbakis (SUNY-Binghamton), C.V. Ramamoorthy (University of California-Berkeley), Harry E. Stephanou (Rensselaer Polytechnic Institute), Wei-Tek Tsai (University of Minnesota), Benjamin W. Wah (University of Illinois-Urbana). From lss at compsci.stirling.ac.uk Wed Dec 16 11:13:02 1992 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 16 Dec 92 16:13:02 GMT (Wed) Subject: Two new TRs Message-ID: <9212161613.AA08589@uk.ac.stir.cs.tugrik> Two new technical reports are available from the CCCN at the University of Stirling, Scotland. Unfortunately, they are only available by post. To get them, email lss at cs.stir.ac.uk with your postal address. TR CCCN-13: ISSN 0968-0640 COMPUTATIONAL THEORIES OF READING ALOUD: MULTI-LEVEL NEURAL NET APPROACHES W A Phillips and I M Hay Centre for Cognitive and Computational Neuroscience Departments of Psychology and Computing Science Stirling University Stirling FK9 4LA UK December 1992 Abstract Cognitive and neuropsychological studies suggest that there are at least two distinct direct routes from print to sound in addition to the route via semantics. It does not follow that connectionist approaches are thereby weakened. The use of multiple levels of analysis is a general design feature of neural systems, and may apply within phonic and graphic domains. The connectionist net for reading aloud simulated by Seidenberg and McClelland (1989) has elements of this design feature even though their emphasis was upon the capabilities of such systems at any single level of analysis. Our simulations show that modifying their system to make more use of its multi-level potential enhances its performance and explains some of its weaknesses. This suggests possible multi-level connectionist systems. At the lexical level, information about the letter string is processed as a whole; at the sub-lexical level, smaller parts, such as heads and bodies, are processed separately. We argue that these two levels do not operate in basically different ways, and that connectionist and dual-route approaches are mutually supportive.
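[As a purely illustrative aside on the abstract above: the head/body distinction at the sub-lexical level can be made concrete with a toy Python fragment that splits a letter string into a head (initial consonant cluster) and a body (first vowel onward). The function name and the vowel-based split rule are assumptions for illustration only; they are not the representation used in TR CCCN-13.]

# Toy gloss on the sub-lexical head/body decomposition mentioned above.
# The vowel-based split rule is an assumption for illustration only.
VOWELS = set("aeiou")

def split_head_body(word):
    """Split a letter string into head (initial consonants) and body."""
    for i, letter in enumerate(word):
        if letter in VOWELS:
            return word[:i], word[i:]
    return word, ""  # no vowel: treat the whole string as the head

for w in ("string", "beads", "heads"):
    print(w, "->", split_head_body(w))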
___________________________________________________________________________ TR CCCN-14: LEXICALITY AND PRONUNCIATION IN A SIMULATED NEURAL NET W A Phillips, I M Hay and L S Smith Centre for Cognitive and Computational Neuroscience Departments of Psychology and Computing Science University of Stirling Stirling FK9 4LA UK December 1992 Abstract Self-supervised compressive neural nets can perform non-linear multi-level latent structure analysis. They therefore have promise for cognitive theory. We study their use in the Seidenberg and McClelland (1989) model of reading. Analysis shows that self-supervised compression in their model can make only a limited contribution to lexical decision, and simulation shows that it interferes with the associative mapping into phonology. Self-supervised compression is therefore put to no good use in their model. This does not weaken the arguments for self-supervised compression, however, and we suggest possible beneficial uses that merit further study. --Leslie Smith, Department of Computing Science/CCCN University of Stirling, Stirling FK9 4LA Scotland. --lss at cs.stir.ac.uk From bnns93 at computer-science.birmingham.ac.uk Wed Dec 16 14:09:22 1992 From: bnns93 at computer-science.birmingham.ac.uk (British Neural Network Society) Date: Wed, 16 Dec 92 19:09:22 GMT Subject: No subject Message-ID: <20628.9212161909@fat-controller.cs.bham.ac.uk> British Neural Network Society Symposium on Recent Advances in Neural Networks CALL FOR PARTICIPATION ====================== January 29th 1993 Lucas Institute, University of Birmingham, Edgbaston, Birmingham, U.K. Start 9:30 Cost: 55 pounds (30 pounds full-time student) A one-day symposium that looks at recent advances in neural networks, with submissions received under the following headings: - Theory & Algorithms Time series, learning theory, fast algorithms. - Applications Finance, image processing, medical, control. - Implementations Software, hardware, optoelectronics. - Biological Networks Perception, motor control, representation. The proceedings will be available after the symposium: participants will have the opportunity to purchase them at a reduced rate. Please note that places are limited to 80, and so an early reply is advised. Payment should be made to BNNS'93. Credit cards are not accepted. Please fill in the form below and return it to: BNNS'93 Registration School of Computer Science University of Birmingham Edgbaston Birmingham B15 2TT UK. ------------------------------------------------------------------------------- Please register me for the BNNS'93 Symposium "Recent Advances in Neural Networks", January 29th 1993. Name:.......................................................................... Address:....................................................................... ....................................................................... ....................................................................... Phone: ............... Fax: ................ email: .......................... Amount: ............... (55 pounds, 30 pounds student, payable to BNNS'93) Cheque number: ......................... 
From rosen at ringer.cs.utsa.edu Tue Dec 15 16:10:38 1992 From: rosen at ringer.cs.utsa.edu (Bruce Rosen) Date: Tue, 15 Dec 92 15:10:38 CST Subject: neuroprose paper: Function Optimization based on Advanced Simulated Annealing Message-ID: <9212152110.AA04402@ringer.cs.utsa.edu.sunset> A postscript version of my short paper "Function Optimization based on Advanced Simulated Annealing" has been placed in the neuroprose archive. The abstract is given below, followed by retrieval instructions. Bruce Rosen email: rosen at ringer.cs.utsa.edu ------------------------------------------------------- Function Optimization based on Advanced Simulated Annealing Bruce Rosen Division of Mathematics, Computer Science and Statistics The University of Texas at San Antonio, San Antonio, Texas, 78249 Abstract Solutions to numerical problems often involve finding (or fitting) a set of parameters to optimize a function. A novel extension of the Simulated Annealing method, Very Fast Simulated Reannealing (VFSR) [1, 2], has been proposed for optimizing difficult functions. VFSR has an exponentially decreasing temperature reduction schedule which is faster than both Boltzmann Annealing and Fast (Cauchy) Annealing. VFSR is shown to be superior to these two methods on optimizing a difficult multimodal function. 1. L. Ingber, "Very Fast Simulated re-annealing," Mathl. Comput. Modeling, vol. 12, no. 8, pp. 967-973, 1989. 2. L. Ingber and B. E. Rosen, "Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison," Mathematical and Computer Modelling, vol. 16, no. 11, pp. 87-100, 1992. ----------------------------------------------------------------------------- The paper is rosen.advsim.ps.Z in the neuroprose archives. The INDEX sentence is "Preprint: Comparative performances of Advanced Simulated Annealing Methods on optimizing a nowhere-differentiable function" To retrieve this file from the neuroprose archives: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:becker): anonymous Password: (use your email address) ftp> binary ftp> cd pub/neuroprose ftp> get rosen.advsim.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for rosen.advsim.ps.Z .. ftp> quit 221 Goodbye. unix> uncompress rosen.advsim.ps.Z unix> lpr rosen.advsim.ps From marwan at ee.su.oz.au Wed Dec 16 18:19:55 1992 From: marwan at ee.su.oz.au (Marwan Jabri) Date: Thu, 17 Dec 1992 10:19:55 +1100 Subject: Job Opportunity Message-ID: <9212162319.AA17797@brutus.ee.su.OZ.AU> The University of Sydney Department of Electrical Engineering Systems Engineering and Design Automation Laboratory Girling Watson Research Fellowship Reference No. 51/12 Applications are invited for a Girling Watson Research Fellowship at Sydney University Electrical Engineering. The applicant should have strong research and development experience, preferably with a background in one or more of the following areas: machine intelligence and connectionist architectures, microelectronics, pattern recognition and classification. The Fellow will work with the Systems Engineering and Design Automation Laboratory (SEDAL), one of the largest laboratories at Sydney University Electrical Engineering. The Fellow will join a group of 18 people (8 staff and 10 postgraduate students). SEDAL currently has projects on pattern recognition for implantable devices, VLSI implementation of connectionist architectures, time series prediction, knowledge integration and continuous learning, and VLSI computer aided design.
The Research Fellow position is aimed at: o contributing to the research program o helping with the supervision of postgraduate students o supporting some management aspects of SEDAL o providing occasional teaching support Applicants should have either a PhD or equivalent industry research and development experience. The appointment is available for a period of three years, subject to satisfactory progress. Salary is in the Research Fellow range: A$39,463 to A$48,688. Applications quoting the reference number 51/12 can be sent to: The Staff Office The University of Sydney NSW 2006 AUSTRALIA For further information contact Dr. M. Jabri, Tel: (+61-2) 692-2240, Fax: (+61-2) 660-1228, Email: marwan at sedal.su.oz.au From paul at dendrite.cs.colorado.edu Thu Dec 17 12:41:50 1992 From: paul at dendrite.cs.colorado.edu (Paul Smolensky) Date: Thu, 17 Dec 1992 10:41:50 -0700 Subject: Re-revised deadline: Cognitive Science Conference Message-ID: <199212171741.AA05296@axon.cs.colorado.edu> We've stretched the deadline as far as we can, including another weekend (doubling the time some of us can spend on writing the paper! ... that's off the record, of course) ... here's the Call for Papers again, with the new deadline, Feb 2: Fifteenth Annual Meeting of the COGNITIVE SCIENCE SOCIETY A MULTIDISCIPLINARY CONFERENCE ON COGNITION June 18 - 21, 1993 University of Colorado at Boulder Call for Participation with Revised Deadlines This year's conference aims at broad coverage of the many and diverse methodologies and topics that comprise Cognitive Science. In addition to computer modeling, the meeting will feature research in computational, theoretical, and psycho-linguistics; cognitive neuroscience; conceptual change and education; artificial intelligence; philosophical foundations; human-computer interaction; and a number of other approaches to the study of cognition. A plenary session honoring the memory of Allen Newell is scheduled. Plenary addresses will be given by: Alan Baddeley Andy DiSessa Paul Smolensky Sandra Thompson Bonnie Webber The conference will also highlight invited research papers: Conceptual Change: (Organizers: Nancy Songer & Walter Kintsch) Frank Keil Gaea Leinhardt Ashwin Ram Jeremy Rochelle Language Learning: (Organizers: Paul Smolensky & Walter Kintsch) Michael Brent Robert Frank Brian MacWhinney Situated Action: (Organizer: James Martin) Leslie Kaelbling Pattie Maes Bonnie Nardi Alonso Vera Visual Perception & Cognitive Neuroscience: (Organizer: Michael Mozer) Marlene Behrmann Robert Jacobs Hal Pashler David Plaut PAPER SUBMISSIONS With the goal of assembling a high-quality program representative of the diversity of methods and topics in cognitive science, we invite papers presenting interdisciplinary research addressing any cognitive domain and using any of the diverse methodologies of the field. Papers are specifically solicited which address the topics of the invited research sessions listed above. Authors should submit five (5) copies of the paper in hard copy form to: Cognitive Science 1993 Submissions Dr. Martha Polson Institute of Cognitive Science Campus Box 344 University of Colorado Boulder, CO 80309-0344 DAVID MARR MEMORIAL PRIZES FOR EXCELLENT STUDENT PAPERS Papers with a student first author will be eligible to compete for a David Marr Memorial Prize for excellence in research and presentation. The David Marr Prizes are accompanied by a $300.00 honorarium, and are funded by an anonymous donor.
LENGTH Papers must be a maximum of six (6) pages long (excluding only the cover page), must have at least 1 inch margins on all sides, and must use no smaller than 10 point type. Camera-ready versions will be required only after authors are notified of acceptance. COVER PAGE Each copy of the paper must include a cover page, separate from the body of the paper, which includes, in order: 1. Title of paper. 2. Full names, postal addresses, phone numbers and e-mail addresses (if possible) of all authors. 3. An abstract of no more than 200 words. 4. The area(s) in which the paper should be reviewed. When possible, please list, in decreasing order of relevance, 1-3 of the following keywords: action/motor control, acquisition/learning, cognitive architecture, cognitive neuroscience, connectionism, conceptual change/education, decision making, foundations, human-computer interaction, language (indicate subarea), memory, reasoning and problem solving, perception, situated action/cognition, skill/expertise. 5. Preference for presentation format: Talk or poster, talk only, poster only. Poster sessions will be highlighted in this year's conference. The proceedings will not distinguish between papers presented orally and those presented as posters. 6. A note stating if the paper is eligible to compete for a Marr Prize. For jointly authored papers, include a note from the student author's advisor explaining the student's contribution to the research. DEADLINE ***** PAPERS ARE DUE FEBRUARY 2, 1993. ****** SYMPOSIA Proposals for symposia are also invited. Proposals should indicate: (1) A brief description of the topic; (2) How the symposium would address a broad cognitive science audience; (3) Names of symposium organizer(s); (4) List of potential speakers, their topics, and some estimate of their likelihood of participation; (5) Proposed symposium format (designed to last 90 minutes). Symposium proposals should be sent as soon as possible, but no later than February 2, 1993. FOR MORE INFORMATION CONTACT Dr. Martha Polson Institute of Cognitive Science Campus Box 344 University of Colorado Boulder, CO 80309-0344 E-mail: Cogsci at clipr.colorado.edu Telephone: (303) 492-7638 FAX: (303) 492-2967 From sam at sarnoff.com Wed Dec 16 15:57:30 1992 From: sam at sarnoff.com (Scott A. Markel x2683) Date: Wed, 16 Dec 92 15:57:30 EST Subject: NIPS workshop summary Message-ID: <9212162057.AA03573@sarnoff.sarnoff.com> NIPS 92 Workshop Summary ======================== Computational Issues in Neural Network Training =============================================== Main focus: Optimization algorithms used in training neural networks ---------- Organizers: Scott Markel and Roger Crane ---------- This was a one-day workshop exploring the use of optimization algorithms, such as back-propagation, conjugate gradient, and sequential quadratic programming, in neural network training. Approximately 20-25 people participated in the workshop. About two thirds of the participants used some flavor of back propagation as their algorithm of choice, with the other third using conjugate gradient, sequential quadratic programming, or something else. I would guess that participants were split about 60-40 between industry and the academic community.
The workshop consisted of lots of discussion and the following presentations: Introduction ------------ Scott Markel (David Sarnoff Research Center - smarkel at sarnoff.com) I opened by saying that Roger and I are mathematicians and started looking at neural network training problems when neural net researchers were experiencing difficulties with back-propagation. We think there are some wonderfully advanced and robust implementations of classical algorithms developed by the mathematical optimization community that are not being exploited by the neural network community. This is due largely to a lack of interaction between the two communities. This workshop was set up to address that issue. In July we organized a similar workshop for applied mathematicians at SIAM '92 in Los Angeles. Optimization Overview --------------------- Roger Crane (David Sarnoff Research Center - rcrane at sarnoff.com) Roger gave a very brief, but broad, historical overview of optimization algorithm research and development in the mathematical community. He showed a time line starting with gradient descent in the 1950's and progressing to sequential quadratic programming (SQP) in the 1970's and 1980's. SQP is the current state-of-the-art optimization algorithm for constrained optimization. It's a second order method that solves a sequence of quadratic approximation problems. SQP is quite frugal with function evaluations and handles both linear and nonlinear constraints. Roger stressed the robustness of algorithms found in commercial packages (e.g. the NAG library) and noted that reinventing the wheel is usually not a good idea, since many subtleties will be missed. A good reference for this material is Practical Optimization, Gill, P. E., Murray, W., and Wright, M. H., Academic Press: London and New York, 1981. Roger's overview generated a lot of discussion. Most of it centered around the fact that second order methods involve using the Hessian, or an approximation to it, and that this is impractical for large problems (> 500-1000 parameters). Participants also commented that the mathematical optimization community has not yet fully realized this and that stochastic optimization techniques are needed for these large problems. All classical methods are inherently deterministic and work only for "batch" training. SQP on a Test Problem --------------------- Scott Markel (David Sarnoff Research Center - smarkel at sarnoff.com) I followed Roger's presentation with a short set of slides showing actual convergence of a neural network training problem where SQP was the training algorithm. Most of the workshop participants had not seen this kind of convergence before. Yann Le Cun noted that with such sharp convergence generalization would probably be pretty bad. I noted that sharp convergence was necessary if one was trying to do something like count local minima, where generalization is not an issue. In Defense of Gradient Descent ------------------------------ Barak Pearlmutter (Oregon Graduate Institute - bap at merlot.cse.ogi.edu) By this point back propagation and its many flavors had been well defended by the audience. Barak's presentation captured the main points in a clarifying manner. He gave examples of real-application neural networks with thousands, millions, and billions of connections. This underscored the need for stochastic optimization techniques. Barak also made some general remarks about the characteristics of error surfaces.
Some earlier work by Barak on gradient descent and second order momentum can be found in the NIPS-4 proceedings (p. 887). A strong plea was made by Barak, and echoed by the other participants, for fair comparisons between training methods. Fair comparisons are rare, but much needed. Very Fast Simulated Reannealing ------------------------------- Bruce Rosen (University of Texas at San Antonio - rosen at ringer.cs.utsa.edu) This presentation focused on a new optimization technique called Very Fast Simulated Reannealing (VFSR), which is faster than Boltzmann Annealing (BA) and Fast (Cauchy) Annealing (FA). Unlike back propagation, which Bruce considers mostly a method for pattern association/classification/generalization, simulated annealing methods are perhaps best used for functional optimization. He presented some results on this work, showing a comparison of Very Fast Simulated Reannealing to genetic algorithms (GA) for function optimization and some recent work on function optimization with BA, FA, and VFSR. Bruce's (and Lester Ingber's) code is available from netlib - Interactive: ftp research.att.com [login as netlib, your_login_name as password] cd opt binary get vfsr.Z Email: mail netlib at research.att.com send vfsr from opt Contact Bruce (rosen at ringer.cs.utsa.edu) or Lester (ingber at alumni.cco.caltech.edu) for further information. General Comments ---------------- Yann Le Cun (AT&T Bell Labs - yann at neural.att.com) I asked Yann to summarize some of the comments he and others had been making during the morning session. Even though we didn't give him much time to prepare, he nicely outlined the main points. These included - large problems require stochastic methods - the mathematical community hasn't yet addressed the needs of the neural network community - neural network researchers are using second order information in a variety of ways, but are definitely exploring uncharted territory - symmetric sigmoids are necessary; [0,1] sigmoids cause scaling problems (Roger commented that classical methods would accommodate this) Cascade Correlation and Greedy Learning --------------------------------------- Scott Fahlman (Carnegie Mellon University - scott.fahlman at cs.cmu.edu) Scott's presentation started with a description of QuickProp. This algorithm was developed in an attempt to address the slowness of back propagation. QuickProp uses second order information in the manner of a modified Newton method. This was yet another example of neural network researchers seeing no other alternative but to do their own algorithm development. Scott then described Cascade Correlation. CasCor and CasCor2 are greedy learning algorithms. They build the network, putting each new node in its own layer, in response to the remaining error. The newest node is trained to deal with the largest remaining error component. Papers on QuickProp, CasCor, and Recurrent CasCor can be found in the neuroprose archive (see fahlman.quickprop-tr.ps.Z, fahlman.cascor-tr.ps.Z, and fahlman.rcc.ps.Z). Comments on Training Issues --------------------------- Gary Kuhn (Siemens Corporate Research - gmk at learning.siemens.com) Gary presented 1. a procedure for training with stochastic conjugate gradient (G. Kuhn and N. Herzberg, Some Variations on Training of Recurrent Networks, in R. Mammone & Y. Zeevi, eds, Neural Networks: Theory and Applications, New York, Academic Press, 1991, p 233-244) and 2.
a sensitivity analysis that led to a change in the architecture of a speech recognizer and to further, joint optimization of the classifier and its input features (G. Kuhn, Joint Optimization of Classifier and Feature Space in Speech Recognition, IJCNN '92, IV:709-714). He related Scott Fahlman's interest in sensitivity to Yann Le Cun's emphasis on trainability by showing how a sensitivity analysis led to improved trainability. Active Exemplar Selection ------------------------- Mark Plutowski (University of California - San Diego - pluto at cs.ucsd.edu) Mark gave a quick recap of his NIPS poster on choosing a concise subset for training. Fitting these exemplars results in the entire set being fit as well as desired. This method has only been used on noise-free problems, but looks promising. Scott Fahlman expressed the opinion that exploiting the training data was the remaining frontier in neural network research. Final Summary ------------- Incremental, stochastic methods are required for training large networks. Robust, readily available implementations of classical algorithms can be used for training modest-sized networks and are especially effective research tools for investigating mathematical issues, e.g. estimating the number of local minima. From alpaydin%TRBOUN.BITNET at BITNET.CC.CMU.EDU Fri Dec 18 14:03:29 1992 From: alpaydin%TRBOUN.BITNET at BITNET.CC.CMU.EDU (alpaydin%TRBOUN.BITNET@BITNET.CC.CMU.EDU) Date: 18 Dec 1992 14:03:29 -0500 (EST) Subject: CFP : 2nd Turkish Conf on AI and ANN Message-ID: <00965465.A8462DC0.14395@trboun> CALL FOR PAPERS 2nd Turkish Symposium on Artificial Intelligence and Artificial Neural Networks Bogazici University Istanbul, Turkey June 24-25, 1993 Supported by: Bogazici University, Istanbul; Bilkent University, Ankara; IEEE Computer Society Turkiye Section; Middle East Technical University, Ankara; TUBITAK, The Scientific and Technical Research Council of Turkey. Scope: Commonsense Reasoning, Knowledge Representation, Learning, Natural Language Processing, Control and Planning, Expert Systems, Theorem Proving, Intelligent Databases, Signal Processing, Speech Processing, Vision and Image Processing, Pattern Recognition, Robotics, Programming Languages, Simulation Environments, Theoretical Foundations, Hardware Implementations, Industrial Applications, Social, Legal, and Ethical Aspects. Paper submissions: Deadline for full papers (limited to 6 single-spaced (12 point) A4 pages): March 1, 1993. Author notification: April 1, 1993. Camera ready copies: May 1, 1993. Send submissions (in English or Turkish) to Dr. L. Akin, Department of Computer Engineering, Bogazici University, TR-80815 Istanbul, Turkey. Tel (voice): +90 1 263 15 00 x 1323, (fax): +90 1 265 84 88, E-mail: yz at trboun.bitnet Symposium Chair: Selahattin Kuru, Bogazici Univ. Program Committee: Levent Akin, Bogazici Univ.; Varol Akman, Bilkent Univ.; Ethem Alpaydin (chair), Bogazici Univ.; Isil Bozma, Bogazici Univ.; M. Kemal Ciliz, Bogazici Univ.; Fikret Gurgen, Bogazici Univ.; H. Altay Guvenir, Bilkent Univ.; Ugur Halici, METU; Yorgo Istefanopulos, Bogazici Univ.; Sakir Kocabas, TUBITAK Gebze Res. Center; Selahattin Kuru, Bogazici Univ.; Kemal Oflazer, Bilkent Univ.; A. C. Cem Say, Bogazici Univ.; Nese Yalabik, METU Local Organizing Committee: Levent Akin (chair); Ethem Alpaydin; Hakan Aygun; Sema Oktug; A. C. Cem Say; Mehmet Yagci
From irina at laforia.ibp.fr Thu Dec 17 13:24:02 1992 From: irina at laforia.ibp.fr (irina Tchoumatchenko 46.42.32.00 poste 433) Date: Thu, 17 Dec 92 19:24:02 +0100 Subject: call for papers "AI and Genome" Message-ID: <9212171824.AA13503@laforia.ibp.fr> Please post: ***************** CALL FOR PAPERS ************************ WORKSHOP "ARTIFICIAL INTELLIGENCE and the GENOME" at the International Joint Conference on Artificial Intelligence IJCAI-93 August 29 - September 3, 1993 Chambery, FRANCE There is a great deal of intellectual excitement in molecular biology (MB) right now. There has been an explosion of new knowledge due to the advent of the Human Genome Program. Traditional methods of computational molecular biology can hardly cope with the important complexity issues without adopting a heuristic approach. Heuristic approaches also make it possible to state molecular biology knowledge explicitly when solving a problem, and to present the resulting solution in biologically meaningful terms. The computational size of many important biological problems overwhelms even the fastest hardware by many orders of magnitude. The approximate and heuristic methods of Artificial Intelligence have already made significant progress on these difficult problems. Perhaps one reason is that a great deal of biological knowledge is symbolic and complex in its organization. Another reason is the good match between biology and machine learning. The increasing amount of biological data and a significant lack of theoretical understanding suggest the use of generalization techniques to discover "similarities" in data and to develop pieces of theory. On the other hand, molecular biology is a challenging real-world domain for artificial intelligence research, being neither trivial nor equivalent to solving the general problem of intelligence. This workshop is dedicated to supporting the young AI/MB field of research.
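[A minimal, purely illustrative sketch of the kind of "similarity" discovery mentioned above: comparing two DNA strings by their shared k-mer counts. The profile representation, the choice of k=3, and the cosine measure are assumptions for illustration only, standing in for the far richer AI-based similarity measures the workshop solicits.]

# Minimal sketch: compare two DNA sequences by overlapping k-mer counts.
# The profile representation, k=3, and the cosine measure are assumptions
# for illustration; the workshop itself solicits far richer methods.
from collections import Counter
import math

def kmer_profile(seq, k=3):
    """Count all overlapping k-mers in a sequence."""
    return Counter(seq[i:i + k] for i in range(len(seq) - k + 1))

def cosine(p, q):
    """Cosine similarity between two k-mer count profiles."""
    dot = sum(p[m] * q[m] for m in p.keys() & q.keys())
    norm = math.sqrt(sum(v * v for v in p.values()))
    norm *= math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

a, b = "ATGGCGTACGTTAGC", "ATGGCGTTCGTTAGC"
print(cosine(kmer_profile(a), kmer_profile(b)))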
TOPICS OF INTEREST INCLUDE (BUT ARE NOT RESTRICTED TO): ------------------------------------------------------- *** Knowledge-based approaches to molecular biology problem solving; Molecular biology knowledge-representation issues, knowledge-based heuristics to guide molecular biology data processing, explanation of MB data processing results in terms of relevant MB knowledge; *** Data/Knowledge bases for molecular biology; Acquisition of molecular biology knowledge, building public genomic knowledge bases, a concept of "different viewpoints" in the MB data processing context; *** Generalization techniques applied to molecular biology problem solving; Machine learning techniques as well as neural network techniques, supervised learning versus non-supervised learning, scaling properties of different generalization techniques applied to MB problems; *** Biological sequence analysis; AI-based methods for sequence alignment, motif finding, etc., knowledge-guided alignment, comparison of AI-based methods for sequence analysis with the methods of computational biology; *** Prediction of DNA protein coding regions and regulatory sites using AI methods; Machine learning techniques, neural networks, grammar-based approaches, etc.; *** Predicting protein folding using AI methods; Predicting secondary, super-secondary, and tertiary protein structure, construction of protein folding prediction theories from examples; *** Predicting gene/protein functions using AI methods; Complexity of the function prediction problem, understanding the structure/function relationship in biologically meaningful examples, structure/function patterns, attempts toward a description of functional space; *** Similarity and homology; Similarity measures for gene/protein class construction, knowledge-based similarity measures, similarity versus homology, inferring evolutionary trees; *** Other promising approaches to classifying and predicting properties of MB sequences; Information-theoretic approaches, standard non-parametric statistical analysis, Hidden Markov models and statistical physics methods; INVITED TALKS: -------------- L. Hunter, NLM, AI problems in finding genetic sequence motifs J. Shavlik, U. of Wisconsin, Learning important relations in protein structures B. Buchanan, U. of Pittsburgh, to be determined R. Lathrop, MIT, to be determined Y. Kodratoff, U. Paris-Sud, to be determined J.-G. Ganascia, U. Paris-VI, Application of machine learning techniques to the biological investigation viewed as a constructive process SCHEDULE ---------- Papers received: March 1, 1993 Acceptance notification: April 1, 1993 Final papers: June 1, 1993 WORKSHOP FORMAT: ------------------ The format of the workshop will be paper sessions with discussion at the end of each session, and a concluding panel. Prospective participants should submit papers of five to ten pages in length. Four paper copies are required. Those who would like to attend without a presentation should send a one to two-page description of their relevant research interests. Attendance at the workshop will be limited to 30 or 40 people. Each workshop attendee MUST HAVE REGISTERED FOR THE MAIN CONFERENCE. An additional (low) fee of 300 FF (about $60) will be required for workshop attendance. One student who attends the workshop normally (i.e., has registered for the main conference) and takes charge of note-taking during the entire workshop may be exempted from the additional 300 FF fee. Volunteers are invited. ORGANIZING COMMITTEE --------------------
Buchanan, B. (Univ. of Pittsburgh - USA) Ganascia, J.-G., chairperson (Univ. of Paris-VI - France) Hunter, L. (National Library of Medicine - USA) Lathrop, R. (MIT - USA) Kodratoff, Y. (Univ. of Paris-Sud - France) Shavlik, J. W. (Univ. of Wisconsin - USA) PLEASE SEND SUBMISSIONS TO: --------------------------- Ganascia, J.-G. LAFORIA-CNRS University Paris-VI 4 Place Jussieu 75252 PARIS Cedex 05 France Phone: (33-1)-44-27-47-23 Fax: (33-1)-44-27-70-00 E-mail: ganascia at laforia.ibp.fr From wray at ptolemy.arc.nasa.gov Thu Dec 17 20:40:49 1992 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Thu, 17 Dec 92 17:40:49 PST Subject: Computational Issues in Neural Network Training In-Reply-To: "Scott A. Markel x2683"'s message of Wed, 16 Dec 92 15:57:30 EST <9212162057.AA03573@sarnoff.sarnoff.com> Message-ID: <9212180140.AA04099@ptolemy.arc.nasa.gov> First, thanks to Scott Markel for producing this summary. It's rapid dissemination of important information like this to non-participants that lets the field progress as a whole! > SQP on a Test Problem > --------------------- > Scott Markel (David Sarnoff Research Center - smarkel at sarnoff.com) > > I followed Roger's presentation with a short set of slides showing actual > convergence of a neural network training problem where SQP was the training > algorithm. Most of the workshop participants had not seen this kind of > convergence before. Yann Le Cun noted that with such sharp convergence > generalization would probably be pretty bad. I'd say not necessarily. If you use a good regularization method then sharp convergence shouldn't harm generalization at all. Of course, this begs the question: what is a good regularizing/complexity/prior/MDL term? (Choose your own term depending on which regularizing fashion you follow.) Wray Buntine NASA Ames Research Center phone: (415) 604 3389 Mail Stop 269-2 fax: (415) 604 3594 Moffett Field, CA, 94035 email: wray at kronos.arc.nasa.gov From henrik at robots.ox.ac.uk Mon Dec 21 14:03:08 1992 From: henrik at robots.ox.ac.uk (henrik@robots.ox.ac.uk) Date: Mon, 21 Dec 92 19:03:08 GMT Subject: new paper: A massively parallel neurocomputer Message-ID: <9212211903.AA18874@ulysses.robots.ox.ac.uk> Here is another one ... I have just placed this preprint (the paper is submitted to MicroNeuro 93) in the neuroprose archive, file klagges.massively-parallel.ps.Z. Cheers, Henrik (henrik at robots.ox.ac.uk) Abstract(( We have developed a SIMD massively parallel digital neural network simulator --- called GeNet for Generic Network --- which can evaluate large networks with a variety of learning algorithms at high speed. A medium-size installation with 256 physical nodes and 1 Gbyte of memory can sustain e.g. 1.7 giga 16bit-connection crossings/sec at network sizes of 2 layers with 64K neurons each, a fan-in of 1K and a random wired topology. The neural network core operations are supported by optimized and balanced computation and communication hardware that sustains heavily pipelined processing. In addition to an array of processing units with one global scalar (16 bit) bus, the system is equipped with a ring-shifter (32 bit) and a parallel (256\times16 bit) vector bus that feeds a tree-shaped global vector accumulator. This eases backward communication and the calculation of scalar products of distributed vectors. The VLIW-architecture is highly scalable. A prototype has been cost-effectively implemented without custom VLSI chips.
)) FTP instructions: $ ftp archive.cis.ohio-state.edu ftp> user ftp ftp> password ftp> binary ftp> cd pub/neuroprose ftp> get Getps ftp> bye $ chmod +x Getps $ Getps klagges.massively-parallel.ps.Z $ uncompress kl*.ps.Z $ lpr -Plp kl*.ps (or whatever cmd you use for your postscript printer) ========= From henrik at robots.ox.ac.uk Mon Dec 21 14:02:42 1992 From: henrik at robots.ox.ac.uk (henrik@robots.ox.ac.uk) Date: Mon, 21 Dec 92 19:02:42 GMT Subject: new paper: Random wired cascade-correlation Message-ID: <9212211902.AA18870@ulysses.robots.ox.ac.uk> I have just placed this preprint (the paper is submitted to MicroNeuro 93) in the neuroprose archive, file klagges.rndwired-cascor.ps.Z. There is also an accompanying picture of a sample network topology created by LFCC, called klagges.rndwired-topology.GIF (it is a gif file). Cheers, Henrik (henrik at robots.ox.ac.uk) Abstract(( The success of new learning algorithms like Cascade Correlation (CC) lies partly in topology construction strategies which are difficult to map onto SIMD-parallel neurocomputers. A CC variation that limits the connection fan-in and random-wires the neurons was invented to ease the SIMD implementation. Surprisingly, the method produced superior and very compact networks with improved generalization. In particular, solutions of the 2-spirals problem improved from 133 +- 27 total weights for standard CC down to 60 +- 10 with 75% less connection crossings. Performance increased with candidate pool size and was correlated with a reduction of artefacts in the receptive field visualizations. We argue that, for general neural network learning, construction algorithms are as important as weight adaptation rules. This requires sparse matrix support from neurocomputer hardware. )) FTP instructions: $ ftp archive.cis.ohio-state.edu ftp> user ftp ftp> password ftp> binary ftp> cd pub/neuroprose ftp> get Getps ftp> bye $ chmod +x Getps $ Getps klagges.rndwired-cascor.ps.Z $ Getps klagges.rndwired-topology.GIF $ uncompress kl*.ps.Z $ lpr -Plp kl*.ps (or whatever cmd you use for your postscript printer) $ xview kl*.GIF (or whatever cmd you use for viewing gifs) ========= From wahba at stat.wisc.edu Mon Dec 21 21:12:22 1992 From: wahba at stat.wisc.edu (Grace Wahba) Date: Mon, 21 Dec 92 20:12:22 -0600 Subject: choose your own randomized regularizer Message-ID: <9212220212.AA26683@hera.stat.wisc.edu> ............................. Re: Regularizing fashions...choose your own: as per Wray Buntine's remarks about Scott Markel's discussion of SQP Also Re: `Large problems require stochastic methods' -Yann Le Cun Variants of the fast randomized version of Generalized Cross Validation might be implementable with very large optimization problems...SQP? See: D. Girard, "A Fast `Monte-Carlo Cross-Validation' Procedure for Large Least Squares Problems with Noisy Data," Numer. Math., vol. 56, pp. 1-23, 1989. D. Girard, "Asymptotic optimality of the fast randomized versions of GCV and C_L in ridge regression and regularization," Ann. Statist., vol. 19, pp. 1950-1963, 1991. From rubio at hal.ugr.es Mon Dec 21 14:32:09 1992 From: rubio at hal.ugr.es (rubio@hal.ugr.es) Date: Mon, 21 Dec 1992 19:32:09 UTC Subject: NATO ASI Call for Papers Message-ID: <9212211932.AA06441@hal.ugr.es> First Announcement: NATO Advanced Study Institute NEW ADVANCES and TRENDS in SPEECH RECOGNITION and CODING 28 June-10 July 1993.
Bubion (Granada), SPAIN. Institute Director: Dr. Antonio Rubio-Ayuso, Dept. de Electronica, Facultad de Ciencias, Universidad de Granada, E-18071 GRANADA, SPAIN. tel. 34-58-243193 FAX. 34-58-243230 e-mail ASI at hal.ugr.es Organizing Committee: Dr. Jean-Paul Haton, CRIN / INRIA, France. Dr. Pietro Laface, Politecnico di Torino, Italy. Dr. Renato De Mori, McGill University, Canada. OBJECTIVES, AGENDA and PARTICIPANTS A series of highly successful ASIs on Speech Science (the most recent in Bonas, France; Bad Windsheim, Germany; and Cetraro, Italy) created a fruitful and stimulating environment for learning about scientific methods, exchanging results, and discussing new ideas. The goal of this ASI is to bring together the most important experts on Speech Recognition and Coding to discuss and disseminate their most recent findings, in order to spread them among the European and American Centers of Excellence, as well as among a good selection of qualified students. A two-week programme is planned with invited tutorial lectures, and contributed papers by selected students (maximum 65). The proceedings of the ASI will be published by Springer-Verlag. TOPICS The Institute will focus on the new methodologies and techniques that have recently been developed in the speech communication area. Main topics of interest will be: -Low Delay and Wideband Speech Coding. -Very Low Bit Rate and Half-Rate Speech Coding. -Speech Coding over Noisy Channels. -Continuous Speech and Isolated Word Recognition. -Neural Networks for Speech Recognition and Coding. -Language Modeling. -Speech Analysis, Synthesis and Databases. Any other related topic will also be considered. INVITED LECTURERS A. Gersho (UCSB, USA): "Speech coding." B. H. Juang (AT&T, USA): "Statistical and discriminative methods for speech recognition - from design objectives to implementation." J. Bridle (RSRU, UK): "Neural networks." G. Chollet (Paris Telecom): "Evaluation of ASR systems, algorithms and databases." E. Vidal (UPV, Spain): "Syntactic learning techniques in language modeling and acoustic-phonetic decoding." J. P. Adoul (U. Sherbrooke, Canada): "Lattice and trellis coded quantizations for efficient coding of speech." R. De Mori (McGill Univ, Canada): "Language models based on stochastic grammars and their use in automatic speech recognition." R. Pieraccini (AT&T, USA): "Speech understanding and dialog, a stochastic approach." F. Jelinek (IBM, USA): "New approaches to language modeling for speech recognition." L. Rabiner (AT&T, USA): "Applications of Voice Processing Technology in Telecommunications." N. Farvardin (UMD, USA): "Speech coding over noisy channels." J. P. Haton (CRIN/INRIA, France): "Methods for the automatic recognition of speech in adverse conditions." R. Schwartz (BBN, USA): "Search algorithms for real-time recognition with high accuracy." H. Niemann (Erlangen-Nurnberg Univ., Germany): "Statistical modeling of segmental and suprasegmental information." I. Trancoso (INESC, Portugal): "An overview of recent advances on CELP." C. H. Lee (AT&T, USA): "Adaptive learning for acoustic and language modeling." P. Laface (Poli. Torino, Italy) and H. Ney (Philips, Germany): "Search Strategies for Very Large Vocabulary, Continuous Speech Recognition." A. Waibel (CMU, USA): "JANUS, A speech translation system." ATTENDANCE, COSTS and FUNDING Participation from as many NATO countries as possible is desired.
Additionally, prospective participants from Greece, Portugal and Turkey are especially encouraged to apply. A small number of students from non-NATO countries may be accepted. The estimated cost of hotel accommodation and meals for the two-week duration of the ASI is US$1,000. A limited number of scholarships are available for academic participants from NATO countries. In the case of industrial or commercial participants a US$500 fee will be charged. Participants are responsible for their own health or accident insurance. A deposit of US$200 is required for living expenses. This deposit is non-refundable in the case of late cancellation (after 10 June, 1993). The NATO Institute will be held in the hospitable village of Bubion (Granada), set in Las Alpujarras, a peaceful mountain region with incomparable landscapes. HOW TO REGISTER Each application should include: 1) Full address (including e-mail and FAX). 2) An abstract of the proposed contribution (1-3 pages). 3) Curriculum vitae of the prospective participant. 4) An indication of whether attendance at the ASI is conditional on obtaining a NATO grant. For junior applicants, support letters from senior members of the professional speech community would strengthen the application. Applications must be sent to the Institute Director's address mentioned above. SCHEDULE Submission of proposals (1-3 pages): To be received by 1 April 1993. Notification of acceptance: To be mailed out on 1 May 1993. Submission of the paper: To be received by 10 June 1993. From john at cs.rhbnc.ac.uk Tue Dec 22 05:14:55 1992 From: john at cs.rhbnc.ac.uk (john@cs.rhbnc.ac.uk) Date: Tue, 22 Dec 92 10:14:55 +0000 Subject: EuroColt call for papers Message-ID: <2085.9212221014@csqx.dcs.rhbnc.ac.uk> THE INSTITUTE OF MATHEMATICS AND ITS APPLICATIONS EURO-COLT '93 CONFERENCE ON COMPUTATIONAL LEARNING THEORY December, 1993 Royal Holloway, University of London ANNOUNCEMENT AND CALL FOR PAPERS The inaugural IMA European conference on Computational Learning Theory will be held 20-22 December at Royal Holloway, University of London. We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning, including artificial and biological neural networks, robotics, pattern recognition, inductive inference, information theory and cryptology, decision theory and Bayesian/MDL estimation. As part of our program, we are pleased to announce three invited talks by Les Valiant (Harvard), Lenny Pitt (Illinois) and Wolfgang Maass (Graz). Invitation to Submit a Paper: Authors should submit six copies (preferably two-sided copies) of an extended abstract to be received by 15th May, 1993, to: Miss Pamela Irving, Conference Officer, The Institute of Mathematics and its Applications, 16 Nelson Street, Southend-on-Sea, Essex SS1 1EF. The abstract should consist of a cover page with title, authors' names, (postal and e-mail) addresses, and a 200 word summary, and a body of no more than 10 pages. We also solicit proposals for workshop sessions organised by qualified individuals to facilitate in-depth discussion of particular current topics. The workshops would be scheduled for the final day of the conference and would typically last for 3 to 4 hours, including presentation(s) by the organiser(s) of the workshop, with time for additional discussions and contributions (informal short talks). Notification: Authors will be notified of acceptance or rejection by a letter mailed on or before 31st July.
Final camera-ready papers will be due on 22nd September. Members of the Organising Committee: John Shawe-Taylor (Chair: Royal Holloway, University of London, email to eurocolt at cs.rhbnc.ac.uk), Martin Anthony (LSE, University of London), Norman Biggs (LSE, University of London), Mark Jerrum (Edinburgh), Hans-Ulrich Simon (University of Dortmund), Paul Vitanyi (CWI Amsterdam). -------------------------------------------------------------------- To: The Conference Officer, The Institute of Mathematics and its Applications, 16 Nelson Street, Southend-on-Sea, Essex SS1 1EF. Telephone: (0702) 354020. Fax: (0702) 354111 EURO-COLT '93 20th-22nd December, 1993 Royal Holloway, University of London NAME ................................ GRADE (If IMA Member) .......... ADDRESS FOR CORRESPONDENCE ........................................... ....................................................................... ....................................................................... TELEPHONE NO ........................ FAX NO ......................... I intend to submit an abstract no later than 15th May, 1993 .......... Please send me an application form when available ........ (Please tick where necessary) From ucganlb at ucl.ac.uk Wed Dec 23 05:08:01 1992 From: ucganlb at ucl.ac.uk (Dr Neil Burgess) Date: Wed, 23 Dec 92 10:08:01 +0000 Subject: 2 papers: Hippocampus & navigation, generalisation of constructive alg. Message-ID: <9212231008.AA17345@link-1.ts.bcc.ac.uk> I have just put two pre-prints in neuroprose (see below for abstracts and ftp instructions). Cheers, Neil (n.burgess at ucl.ac.uk) _________________________________________________________________________ USING HIPPOCAMPAL `PLACE CELLS' FOR NAVIGATION, EXPLOITING PHASE CODING Neil Burgess, John O'Keefe and Michael Recce Department of Anatomy, University College London London WC1E 6BT, England. ABSTRACT A model of the hippocampus as a central element in rat navigation is presented. Simulations show both the behaviour of single cells and the resultant navigation of the rat. These are compared with single unit recordings and behavioural data. The firing of CA1 place cells is simulated as the (artificial) rat moves in an environment. This is the input for a neuronal network whose output, at each theta $(\theta)$ cycle, is the next direction of travel for the rat. Cells are characterised by the number of spikes fired and the time of firing with respect to hippocampal $\theta$ rhythm. `Learning' occurs in `on-off' synapses that are switched on by simultaneous pre- and post-synaptic activity. The simulated rat navigates successfully to goals encountered one or more times during exploration in open fields. One minute of random exploration of a $1m^2$ environment allows navigation to a newly-presented goal from novel starting positions. A limited number of obstacles can be successfully avoided. _________________________________________________________________________ This paper will be published in NIPS 5. To get the postscript file do: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:userid): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> binary ftp> get burgess.hipnav.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for burgess.hipnav.ps.Z . ftp> quit 221 Goodbye. unix> uncompress burgess.hipnav.ps.Z unix> lpr burgess.hipnav.ps (or whatever you do to print) The uncompressed file is 1.7 Mbytes and may take some time to print.
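[The `on-off' learning rule in the abstract above is simple enough to sketch in a few lines of code. The fragment below is only a schematic reading of that one sentence - binary synapses switched on, permanently, by simultaneous pre- and post-synaptic activity within a theta cycle. Population sizes, the random activity model, and all names are assumptions, not the model of Burgess, O'Keefe and Recce.]

# Schematic sketch of the `on-off' synapse rule described in the abstract
# above: a binary weight switches on, and stays on, whenever its pre- and
# post-synaptic cells are active in the same theta cycle. Population
# sizes, the random activity, and the 0.5 threshold are assumptions only.
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 8, 4                            # assumed population sizes
weights = np.zeros((n_post, n_pre), dtype=bool) # all synapses start off

for cycle in range(5):                          # a few simulated theta cycles
    pre = rng.random(n_pre) > 0.5               # binary pre-synaptic activity
    post = rng.random(n_post) > 0.5             # binary post-synaptic activity
    weights |= post[:, None] & pre[None, :]     # switch on co-active pairs

print(weights.astype(int))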
_________________________________________________________________________ THE GENERALIZATION OF A CONSTRUCTIVE ALGORITHM IN PATTERN CLASSIFICATION PROBLEMS Neil Burgess, Silvano Di Zenzo, Paolo Ferragina and Mario Notturno Granieri Department of Anatomy, University College London, London WC1E 6BT, ENGLAND, and IBM Rome Scientific Center, Viale Oceano Pacifico 171, 00144 Rome, Italy ABSTRACT The use of a constructive algorithm for pattern classification is examined. The algorithm, a `Perceptron Cascade', has been shown to converge to zero errors whilst learning any consistent classification of {\it real-valued} pattern vectors (Burgess, 1992). Limiting network size and producing bounded decision regions are noted to be important for the generalization ability of a network. A scheme is suggested by which a result on generalization (Vapnik, 1992) may enable calculation of the optimal network size. A fast algorithm for principal component analysis (Sirat, 1991) is used to construct `hyper-boxes' around each class of patterns to ensure bounded decision regions. Performance is compared with the Gaussian Maximum Likelihood procedure in three artificial problems simulating real pattern classification applications. N. Burgess, submitted to International Journal of Neural Systems (1992). J. A. Sirat, International Journal of Neural Systems, 2, 147-155 (1991). V. Vapnik, NIPS 4, 838-838, Morgan Kaufmann (1992). _________________________________________________________________________ This paper will be published in: International Journal of Neural Systems 3 (Supp. 1992); Proceedings of the Neural Networks: from Biology to High Energy Physics Workshop. The postscript file is in burgess.gencon.ps.Z; follow the above instructions to retrieve it (again, page 5 may take some time to print as it contains a 1 Mbyte bitmap). From barryf at sedal.su.oz.au Mon Dec 28 20:13:51 1992 From: barryf at sedal.su.oz.au (Barry Flower) Date: Tue, 29 Dec 1992 12:13:51 +1100 Subject: Pre-Print Available in Neuroprose Archive Message-ID: <9212290113.AA13843@sedal.sedal.su.OZ.AU> Connectionists. The following preprint is available in the neuroprose archive, and will appear in the NIPS*92 Proceedings. "Summed Weight Neuron Perturbation: An O(N) Improvement over Weight Perturbation." Barry Flower and Marwan Jabri SEDAL Department of Electrical Engineering University of Sydney NSW 2006 Australia ABSTRACT ~~~~~~~~ The algorithm presented performs gradient descent on the weight space of an Artificial Neural Network (ANN), using a finite difference to approximate the gradient. The method is novel in that it achieves a computational complexity similar to that of Node Perturbation, O(N**3), but does not require access to the activity of hidden or internal neurons. This is possible due to a stochastic relation between perturbations at the weights and the neurons of an ANN. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training of VLSI implementations of ANNs. A sample session for retrieving the preprint follows: sedal::.mboxd-86} ftp cheops.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. 220 archive FTP server (Version 6.14 Thu Apr 23 14:41:38 EDT 1992) ready. Name (cheops.cis.ohio-state.edu:barryf): anonymous 331 Guest login ok, send e-mail address as password. Password: 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I.
ftp> cd pub/neuroprose 250-Please read the file README 250- it was last modified on Mon Feb 17 15:51:43 1992 - 316 days ago 250-Please read the file README~ 250- it was last modified on Wed Feb 6 16:41:29 1991 - 692 days ago 250 CWD command successful. ftp> get flower.swnp.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for flower.swnp.ps.Z (43113 bytes). 226 Transfer complete. local: flower.swnp.ps.Z remote: flower.swnp.ps.Z 43113 bytes received in 16 seconds (2.7 Kbytes/s) ftp> quit 221 Goodbye. Uncompress and finally print the postscript file. sedal::.mboxd-87} uncompress flower.swnp.ps.Z sedal::.mboxd-88} lpr flower.swnp.ps Cheers, ------------------------------------------------------------------- Barry Flower Email: barryf at sedal.oz.au SEDAL, Electrical Engineering, Tel: (+61-2) 692-3297 Sydney University, NSW 2006, Australia Fax: (+61-2) 660-1228 From leow%pav.mcc.com at mcc.com Wed Dec 30 16:23:17 1992 From: leow%pav.mcc.com at mcc.com (W. Leow) Date: Wed, 30 Dec 92 15:23:17 CST Subject: Abstracts for 3 papers in neuroprose Message-ID: <9212302123.AA08372@graviton.pav.mcc.com> The following 3 papers have been placed in the neuroprose archive (sorry, no hard copies available): ------------------------------------------------------------------- Representing Visual Schemas in Neural Networks for Object Recognition Wee Kheng Leow and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin, Austin, TX 78712 leow,risto at cs.utexas.edu Technical Report AI92-190 December 1992 This research focuses on the task of recognizing objects in simple scenes using neural networks. It addresses two general problems in neural network systems: (1) processing large amounts of input with limited resources, and (2) the representation and use of structured knowledge. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting successively gathered information. The proposed system, VISOR, consists of two main modules. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module (implemented with neural networks) encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. Working cooperatively with the Low-Level Visual Module, it builds a globally consistent interpretation of successively gathered visual information. ------------------------------------------------------------------- Self-Organization with Lateral Connections Joseph Sirosh and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin, Austin, TX 78712 sirosh,risto at cs.utexas.edu Technical Report AI92-191 December 1992 A self-organizing neural network model for the development of afferent and lateral input connections in cortical feature maps is presented. The weight adaptation process is purely activity-dependent, unsupervised, and local. The afferent input weights self-organize into a topological map of the input space. At the same time, the lateral interaction weights develop a smooth ``Mexican hat'' shaped distribution. 
Weak lateral connections die off, leaving a pattern of connections that represents the significant long-term correlations of activity on the feature map. The model demonstrates how self-organization can bootstrap itself based on input information only, without global supervision or predetermined lateral interaction. The model can potentially account for experimental observations such as critical periods for self-organization in cortical maps and the development of horizontal connections in the primary visual cortex. ------------------------------------------------------------------- Incremental grid growing: Encoding high-dimensional structure into a two-dimensional feature map Justine Blackmore and Risto Miikkulainen Department of Computer Sciences The University of Texas at Austin, Austin, TX 78712 justine,risto at cs.utexas.edu Technical Report AI92-192 December 1992 Knowledge of clusters and their relations is important in understanding high-dimensional input data with unknown distribution. Ordinary feature maps with fully connected, fixed grid topology cannot properly reflect the structure of clusters in the input space---there are no cluster boundaries on the map. Incremental feature map algorithms, where nodes and connections are added to or deleted from the map according to the input distribution, can overcome this problem. However, so far such algorithms have been limited to maps that can be drawn in 2-D only in the case of 2-dimensional input space. In the approach proposed in this paper, nodes are added incrementally to a regular, 2-dimensional grid, which is drawable at all times, irrespective of the dimensionality of the input space. The process results in a map that explicitly represents the cluster structure of the high-dimensional input. ------------------------------------------------------------------- The standard instructions apply: Use getps, or: Unix> ftp archive.cis.ohio-state.edu Name: anonymous Password: ftp> binary ftp> cd pub/neuroprose ftp> get leow.visual-schemas.ps.Z ftp> get sirosh.lateral.ps.Z ftp> get blackmore.incremental.ps.Z ftp> quit Unix> zcat leow.visual-schemas.ps.Z |lpr Unix> zcat blackmore.incremental.ps.Z |lpr Unix> uncompress sirosh.lateral.ps.Z Unix> lpr -s sirosh.lateral.ps sirosh.lateral.ps is over 5MB uncompressed (it is only 16 pages, but has huge figures). If your laserwriter does not have that much memory, most likely you will need to lpr -s, or use psselect or psrev or other such utility to print in smaller chunks. Enjoy!
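[As a flavor of the self-organization theme in the Sirosh & Miikkulainen abstract above, here is a minimal sketch (an assumption-level illustration, not the authors' model or code): a "Mexican hat" lateral interaction built as a difference of Gaussians, modulating a simple activity-gated weight update on a 1-D map.

    import numpy as np

    rng = np.random.default_rng(1)
    N, D = 50, 2            # map units on a 1-D map; input dimensionality

    # "Mexican hat" lateral interaction: short-range excitation minus
    # broader, weaker inhibition (difference of Gaussians over map distance).
    d = np.abs(np.arange(N)[:, None] - np.arange(N)[None, :])
    lateral = np.exp(-(d / 2.0) ** 2) - 0.15 * np.exp(-(d / 6.0) ** 2)

    W = rng.uniform(size=(N, D))                # afferent weights
    for step in range(2000):
        x = rng.uniform(size=D)                 # random input pattern
        aff = np.exp(-4.0 * np.sum((W - x) ** 2, axis=1))   # afferent response
        act = np.maximum(aff + 0.3 * (lateral @ aff), 0.0)  # one lateral pass
        act /= act.max() + 1e-12                            # normalize activity
        W += 0.05 * act[:, None] * (x - W)      # activity-gated Hebbian pull

    # After training, nearby map units have similar afferent weights,
    # i.e., the map is topologically ordered over the input space.
    print(W[:5])

In the paper the lateral weights themselves adapt and the weak ones are pruned; here the kernel is fixed purely to illustrate the excitation/inhibition shape.]

From JKREIDER at VAXF.COLORADO.EDU Wed Dec 30 22:53:22 1992 From: JKREIDER at VAXF.COLORADO.EDU (Dr. Jan F. Kreider, Director, Energy Center, U. of Colorado, Boulder, CO 80309-0428, USA; Phone 303-492-7603) Date: 30 Dec 1992 20:53:22 -0700 (MST) Subject: Building energy predictor Competition - "The Great Energy Shootout" Message-ID: <01GSXV3IG6MA00CHU0@VAXF.COLORADO.EDU> The following is the text of the rules for a data analysis competition having to do with hourly building and weather data. We invite those interested to request the data as described below. Andreas Weigend and Mike Mozer are hereby thanked for their advice and help with the rules and conduct of this competition. "THE GREAT ENERGY PREDICTOR SHOOTOUT" - THE FIRST BUILDING DATA ANALYSIS AND PREDICTION COMPETITION Concept and Summary ASHRAE Meeting Denver, Colorado June, 1993 Co-chaired by Jan F. Kreider and Jeff S.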
Haberl. Active Period: December 1, 1992 - April 30, 1993 INTRODUCTION A wide range of new techniques is now being applied to the analysis problems involved in predicting the future behavior of HVAC systems and deducing properties of these systems. Similar problems arise in most observational disciplines, including physics, biology, and economics. New tools, such as genetic algorithms, simulated annealing, connectionist models for forecasting, tree-based classifiers, and the extraction of parameters of nonlinear systems by time-delay embedding, promise to provide results that are unobtainable with more traditional techniques. Unfortunately, the realization and evaluation of this promise has been hampered by the difficulty of making rigorous comparisons between competing techniques, particularly ones that come from different disciplines. The prediction of energy usage by HVAC systems is important for purposes of HVAC diagnostics, system control, optimization and energy management. In order to facilitate such comparisons and to foster contact among the relevant disciplines, ASHRAE's TC 4.7 and TC 1.5 have organized a building data analysis and prediction competition in the form of an ASHRAE seminar to be held in Denver in June, 1993. Forecasting or prediction using empirical models will be the goal of the competition. (Neither system characterization, system identification nor simulation code validation [e.g., DOE-2 or BLAST] is the subject of this seminar; they will be addressed in a future session.) Two carefully chosen sets of energy and environmental data from real buildings will be made available to the contestants. Each contestant will be required to prepare quantitative analyses of these data and submit them to the seminar co-chairs prior to the ASHRAE seminar. Those with the best results will be asked to make a presentation to the seminar. At the close of the competition the performance of the techniques submitted will be compared and published. If there is sufficient interest, a server accessible by modem may be set up to operate as an on-line archive of interesting data sets, programs, and comparisons among algorithms in the future. There will be no monetary prizes. An ASHRAE symposium has been scheduled for the Winter 1994 ASHRAE meeting in New Orleans to explore the results of the competition in formal papers. The competition does not require advance registration; to enter, simply request the data (there is no charge for the data diskette) along with support information and submit your analysis on time. The detailed description of the competition and instructions for acquiring the data and entering the competition are given below. ACCESSING THE DATA The data are available on disks (5.25-in size) in ASCII, IBM-PC format. To receive the data, send a self-addressed 9 x 12 in. envelope, with a $2.90 priority mail stamp affixed, to: Building Energy Predictor Shootout Joint Center for Energy Management Campus Box 428 University of Colorado Boulder, CO 80309-0428 Instructions on submitting a return disk with the analysis of the data will be included in a README file on the data disk. The disk will also include an entry form that each entrant will need to complete and submit along with the results. FOR MORE INFORMATION Further questions about the competition should be directed to either of the organizers:
Professor Jan F. Kreider, Director
Joint Center for Energy Management
Campus Box 428
University of Colorado
Boulder, CO 80309-0428
Phone: 303-492-3915  Fax: 303-492-7317
E-mail: JKREIDER at VAXF.COLORADO.EDU

Professor Jeff S. Haberl
Department of Mechanical Engineering
Texas A&M University
College Station, TX 77843-3123
Phone: 409-845-1560  Fax: 409-862-2762
E-mail: JSH4037 at TAMSIGMA (Bitnet)

Detailed instructions and data set descriptions are included in the attached document. INSTRUCTIONS - "The Great Energy Predictor Shootout" Contents: 1.1 Philosophy 1.2 General Information 1.3 Acquiring and Submitting Data 1.4 Data Sets 1.5 Submittals 1.6 Deadline and Extensions 1.1 Philosophy This competition has been organized to help clarify the conflicting claims among many researchers who use and analyze building energy data and to foster contact among these persons and their institutions. The intent is not necessarily only to declare winners but rather to set up a format in which rigorous evaluations of techniques can be made. Because there are natural measures of performance, a rank-ordering will be given. In all cases, however, the goal is to collect and analyze quantitative results in order to understand similarities and differences among the approaches. 1.2 General Information Overview This section contains the instructions on how to participate in the competition. Data: Two distinct data sets are provided for prediction. Contestants will be given these two sets of independent variables with the corresponding values of dependent variables, e.g., energy usage. The accuracy of predictions of the dependent variables from values of independent variables from this data set is one of the criteria for judging this competition. However, a more rigorous test is also planned. Some of the dependent variable values will be withheld from each of the two data sets (this is explained in detail in the next section). "Withheld" means that you will be provided with a set of independent variables for which the corresponding values of dependent variables have been withheld by the organizers. (For example, you might be given a testing set consisting of weather and occupancy data [the independent variables] along with chilled water use [the dependent variable] for a four-month period. For the fifth month you would only be given the weather and occupancy data but would be asked to predict the chilled water use based on the capability of your method developed with the four months of training data.) The data set from which the dependent variable values have been withheld is hereinafter called the "testing set," whereas the data that include both independent and dependent variable values are called the "training set." Although this nomenclature is common in some numerical approaches and not in others, it will provide an understandable nomenclature for this competition. The independent variable values in the testing set will be used by each participant to make their best predictions of the corresponding dependent variables. The organizers will compare these predictions by each contestant with the true (data) values of the dependent variables that are known only to the organizers. This second aspect of the competition is expected to be of considerable interest to the seminar audience. Entries: The competition will start on December 1, 1992 and will end on April 30, 1993. Entries received after that date cannot be considered. The format for the entries, described in the following sections and in the entry form supplied on the data diskette, must be followed exactly, or the entry will regretfully have to be rejected. Results: Following the close of the competition, the results will be analyzed and published. This will be in the form of the ASHRAE seminar (Denver, June, 1993) as described above. The seminar co-chairs will not participate in the competition but will be the sole analysts of the results.
The overall results will be presented at the seminar by the co-chairs, followed by a presentation by each participant on their methodology. The results to be produced by the competitors are in the form of predictions of the dependent variables for the two testing sets of independent variables. These predictions will be submitted to the organizers, who will evaluate them using the same methods for all submissions. Competitors will also conduct a self-analysis of the accuracy of their prediction approach when applied to the training set. The following criteria will be used by the organizers for assessing the respective accuracies of the entries when analyzing the testing set.

Coefficient of Variation, CV:
$$\mathrm{CV} = \frac{\sqrt{\sum_{i=1}^{n} \left( y_{\mathrm{pred},i} - y_{\mathrm{data},i} \right)^{2} / n}}{\bar{y}_{\mathrm{data}}} \times 100$$

Mean Bias Error, MBE:
$$\mathrm{MBE} = \frac{\sum_{i=1}^{n} \left( y_{\mathrm{pred},i} - y_{\mathrm{data},i} \right) / n}{\bar{y}_{\mathrm{data}}} \times 100$$

where $y_{\mathrm{data},i}$ is a data value of the dependent variable corresponding to a particular set of values of the independent variables; $y_{\mathrm{pred},i}$ is a predicted dependent variable value for the same set of independent variables (these values are predictions by the entrants); $\bar{y}_{\mathrm{data}}$ is the mean value of the dependent variable in the testing data set; and $n$ is the number of sets of data in the testing set. (A short numerical sketch of these two statistics appears below.) Other statistics such as the correlation coefficient and maximum error may also be reported in a brief written summary assembled by the seminar co-chairs. Time permitting, graphical comparisons will also be prepared by the organizers. In their seminar presentations, entrants may present any material of scientific value that they wish on the performance of their methods on the training set. Prizes: There are no prizes in the competition (to prevent unnecessary disagreements). Secrecy: Because this is an open scientific study, entries that provide results without describing the methods used are not acceptable. On the other hand, we recognize that a great deal of labor might have been applied to develop commercially useful applications, and full details of those need not be revealed. Sufficient information has to be supplied so that the results can in principle be independently verified. It is not necessary to submit practical implementation details or the computer code. However, we encourage sharing the software at the end of the competition. At a minimum, each participant should supply a flow chart of their methodology and the data plots described below. Future Plans: If interest warrants, it is planned that a computer server will operate after the close of the competition as a central repository of interesting data, analysis programs, and the results of other comparative studies. 1.3 Acquiring and Submitting Data This section describes how to retrieve the data sets for the competition and how to submit competition entries. The steps are: (1) read this section, (2) acquire the data, (3) analyze the data, and (4) send in your results along with an entry form. The data are available on disks (5.25-in size) in ASCII, IBM-PC format. To receive the data and other information, send a self-addressed 9 x 12 in. envelope with a $2.90 priority mail stamp affixed to: Building Energy Predictor Shootout Joint Center for Energy Management Campus Box 428 University of Colorado Boulder, CO 80309-0428 Instructions on submitting a return disk with the analysis of the data will be included in a README file on the data disk.
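[Since entrants must compute these same statistics for their training-set plots, a small illustrative computation may help (a sketch under editorial naming assumptions; the organizers' evaluation code is not published here):

    import numpy as np

    def cv_and_mbe(y_pred, y_data):
        """Coefficient of Variation and Mean Bias Error, in percent,
        per the competition definitions above."""
        y_pred, y_data = np.asarray(y_pred, float), np.asarray(y_data, float)
        n, mean = len(y_data), y_data.mean()
        cv  = 100.0 * np.sqrt(np.sum((y_pred - y_data) ** 2) / n) / mean
        mbe = 100.0 * (np.sum(y_pred - y_data) / n) / mean
        return cv, mbe

    # Example: a forecast that runs uniformly 5% high has MBE = +5%,
    # and its CV (a root-mean-square measure) is at least 5%.
    y_data = np.array([100.0, 200.0, 300.0])
    print(cv_and_mbe(1.05 * y_data, y_data))   # -> (about 5.4, 5.0)

Note that MBE is signed (bias can cancel across hours), while CV penalizes all scatter; this is why both are reported.]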
The mailing will also include an entry form that each entrant will need to complete and submit along with the results. Completed entries (diskette with results plus completed entry form) should be mailed to: Energy Shootout Entry Disks at the above address. Part of the entry form will include your name and address and will describe the machine type and density with which your submittal disks were prepared. The disks (either 3.5-in or 5.25-in size, of any density) must be in ASCII format readable by an MS-DOS machine. Hard copy or nonconforming entries cannot be accepted. 1.4 Data Sets There are two data sets provided; they are DOS-readable ASCII text files. The data sets have been chosen to address two different sorts of building-related data analysis problems. In this section we describe the general features of the data sets. A.dat (approximately 3,000 points) This is a time record of hourly chilled water, hot water and whole building electricity usage for a four-month period in an institutional building. Weather data and a time stamp are also included. The hourly values of usage of these three energy forms are to be predicted for the two following months. The testing set consists of the two months following the four-month period. B.dat (approximately 2,400 points) These data consist of solar radiation measurements made by four fixed devices, to be used to predict the time-varying hourly beam radiation during a six-month period. This four-pyranometer device is used in an adaptive controller to predict building cooling loads. A random sample of data from the full data set has been reserved as the training set of 1500 points. The value of beam radiation is to be predicted from data from the four fixed sensors for the testing set of 900 additional points. 1.5 Submittals The prediction tasks differ between the data sets (the sets were chosen to emphasize different prediction problems). The withheld testing data used for evaluating the predictions after the close of the competition will not be available to any of the entrants. A.dat For data set A submit predictions (i.e., forecasts) for chilled water, hot water and whole building electricity use for the two months following the four-month training set. The testing set will include values of the same independent variables (weather, date and time) as the training set. Submit your predictions of the three energy end uses in serial order by appending three columns containing your predictions to the right of the testing set columns provided on the disk data file. You will therefore submit to the organizers the testing set plus three columns containing your predictions. A sample of how you are to submit your data will be supplied with the data diskette. The organizers will compare your predictions to the known values of the three energy uses and report CV and MBE. (A sketch of this column-appending layout follows below.)
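[To make the column-appending layout concrete, here is a sketch; the file name, delimiter, and column layout are editorial assumptions based on the description above, not the competition's specification -- the sample on the data diskette governs.

    import csv

    def write_submission(testing_rows, predictions, out_path="a_entry.dat"):
        """Append three prediction columns (chilled water, hot water,
        whole-building electricity) to the right of each testing-set row.

        testing_rows -- list of rows (lists of strings) from the testing set
        predictions  -- one (chw, hw, elec) triple per testing-set row
        """
        with open(out_path, "w", newline="") as f:
            writer = csv.writer(f, delimiter=" ")
            for row, (chw, hw, elec) in zip(testing_rows, predictions):
                writer.writerow(row + [f"{chw:.2f}", f"{hw:.2f}", f"{elec:.2f}"])

    # Usage with dummy rows (time stamp and weather) and dummy forecasts:
    rows = [["01/01/93", "01:00", "2.5"], ["01/01/93", "02:00", "2.1"]]
    preds = [(310.2, 95.7, 1204.0), (298.4, 99.1, 1187.5)]
    write_submission(rows, preds)
]

You are also to prepare and submit with your diskette several graphs for the four-month training set data as shown in Figs. 1 and 2. Figure 1 is a time series plot of actual data and a prediction along with the difference between the two (Fig. 1 is such a plot for one month; you can either prepare one such plot for each of the four months or just one plot for all four months; you will need to prepare at least one such plot for each of the three energy end uses). Figure 2 is a plot of hourly energy use vs. dry bulb temperature.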
Data for all four months should be presented on one such graph for each of the three energy end uses (a total of three plots will be prepared, one each for chilled water, hot water and whole building electricity). On each graph show the values of CV and MBE as defined above. Summary: You will submit to the seminar organizers one file on diskette with your predictions of the three energy end uses for the testing set. You will also submit several graphs, just described, representing the accuracy of your prediction tool when used on the training set only. B.dat For data set B submit predictions for hourly beam radiation given four values of hourly fixed-sensor insolation for the testing set that has been randomly selected from the full data set. The testing set will include the values of the same four independent variables (hourly insolation on four fixed surfaces) as the training set. Submit your predictions of beam insolation in order by appending one column containing their values to the right of the four testing set columns provided on the disk data file. You will therefore submit to the organizers the testing set plus one column containing your predictions. A sample of how you are to submit your data will be supplied with the data diskette. The organizers will compare your predictions to the known values of the beam radiation in the testing set and report CV and MBE. You are also to prepare and submit with your diskette one graph (often called a "scatterplot" or "crossplot") for the training set data as shown by the example in Fig. 3. Figure 3 is a plot of actual data (abscissa) and prediction (ordinate). You should prepare one such plot that includes all data in the training set. On this graph show the values of CV and MBE as defined above. Summary: You will submit to the seminar organizers one file on diskette with your predictions for the testing set. You will also submit a graph, just described, that represents the accuracy of your prediction tool when used on the training set only. Questions about these instructions should be addressed to either of the organizers listed above. 1.6 Deadline and Extensions The competition ends at midnight on April 30, 1993; to be fair, we cannot accept entries after this time. We will allow two weeks after this deadline (until May 15th) for only the following two exceptions: * Because of computer difficulty you were unable to submit the data in time. Send the data before May 15th, along with an explanation of the difficulty. The organizers must be notified of your need for this extension by April 30, 1993. * You just found out about the competition or just received the data. Submit your entry before May 15th, along with an explanation of why this extension is needed.

Figure 1. Example time series plot showing data, prediction and the difference between the two. For the competition submittal also affix the values of CV and MBE to each graph.

Figure 2. Example plot showing energy consumption (here steam) plotted vs. dry bulb temperature. For the competition submittal also affix the values of CV and MBE to each graph.

Figure 3. Example scatterplot showing data and prediction crossplotted. For the competition submittal also affix the values of CV and MBE to each graph.

M E M O R A N D U M
TO: Building Analyst Colleague
FROM: Jan F.
Kreider
SUBJECT: Building Energy Predictor Shootout
DATE: November 16, 1992

In order to facilitate comparisons among the many empirical techniques used to predict building demand and energy use, and to foster contact among the relevant disciplines, ASHRAE's TC 4.7 and TC 1.5 have organized a building data analysis and prediction competition in the form of an ASHRAE seminar to be held in Denver in June, 1993. Forecasting or prediction using empirical models will be the goal of the competition. Jeff Haberl and I invite you to participate. The attached summary explains how you can enter this friendly, no-cost competition. You have received this mailing because of the known interest of you and your colleagues in this area of building science research. The enclosure should be self-explanatory, but do not hesitate to call me at the number above if you have questions. Good luck! Enclosure From rsun at athos.cs.ua.edu Wed Dec 30 15:50:35 1992 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Wed, 30 Dec 1992 14:50:35 -0600 Subject: No subject Message-ID: <9212302050.AA10600@athos.cs.ua.edu> ++++++++++++++++++++++++++++++++++++++++++++++++++++++ SCHEMAS AND NEURAL NETWORKS: INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION A Workshop sponsored by the Center for Neural Engineering University of Southern California Los Angeles, CA 90089-2520 April 13th and 14th, 1993 Program Committee: Michael Arbib (Organizer), John Barnden, George Bekey, Francisco Cervantes-Perez, Damian Lyons, Paul Rosenbloom, Ron Sun, Akinori Yonezawa A previous announcement (reproduced below) announced a registration fee of $150 and advertised the availability of hotel accommodation at $70/night. To encourage the participation of qualified students we have made 3 changes: 1) We have appointed Jean-Marc Fellous as Student Chair for the meeting to coordinate the active involvement of such students. 2) We offer a Student Registration Fee of only $40 to students whose application is accompanied by a letter from their supervisor attesting to their student status. 3) Mr. Fellous has identified a number of lower-cost housing options, and will respond to queries to fellous at pollux.usc.edu The original announcement - with updated registration form - follows: ******** To design complex technological systems and to analyze complex biological and cognitive systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence, perceptual robotics, cognitive modeling, and brain theory which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks.
The proposed workshop will provide a 2-hour introductory tutorial and problem statement by Michael Arbib, and sessions in which an invited paper will be followed by several contributed papers, selected from those submitted in response to this call for papers. Preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of schemas, and where some but not necessarily all of the schemas are implemented in neural networks. A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models: Schema Theory as a description language for neural networks Modular neural networks Linking DAI to Neural Networks to Hybrid Architecture Formal Theories of Schemas Hybrid approaches to integrating planning & reaction Hybrid approaches to learning Hybrid approaches to commonsense reasoning by integrating neural networks and rule-based reasoning (using schemas for the integration) Programming Languages for Schemas and Neural Networks Concurrent Object-Oriented Programming for Distributed AI and Neural Networks Schema Theory Applied in Cognitive Psychology, Linguistics, Robotics, AI and Neuroscience Prospective contributors should send a hard copy of a five-page extended abstract, including figures with informative captions and full references (either by regular mail or fax) by February 15, 1993 to Michael Arbib, Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863, arbib at pollux.usc.edu]. Please include your full address, including fax and email, on the paper. Notification of acceptance or rejection will be sent by email no later than March 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but revised versions of accepted abstracts received prior to April 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. [A useful way to structure such an abstract is in short numbered sections, where each section presents (in a small type face!) the material corresponding to one transparency/slide in a verbal presentation. This will make it easy for an audience to take notes if they have a copy of the abstract at your presentation.] Hotel Information: Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail themselves of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, fellous at pollux.usc.edu. The registration fee of $150 ($40 for qualified students who include a "certificate of student status" from their advisor) includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of April 13th.
Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $150 ($40 for students) together with the following information to Paulina Tagle, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA. ------------------------------------------------------------------- SCHEMAS AND NEURAL NETWORKS Center for Neural Engineering, USC April 13 - 14, 1993 NAME: ___________________________________________ ADDRESS: _________________________________________ PHONE NO.: _______________ FAX:___________________ EMAIL: ___________________________________________ I intend to submit a paper: YES [ ] NO [ ] From tishby at fugue.cs.huji.ac.il Thu Dec 31 13:05:49 1992 From: tishby at fugue.cs.huji.ac.il (Tali Tishby) Date: Thu, 31 Dec 92 20:05:49 +0200 Subject: Learning Workshop in Jerusalem Message-ID: <9212311805.AA17382@fugue.cs.huji.ac.il> Please distribute this notice and not the previous one! THE HEBREW UNIVERSITY OF JERUSALEM THE CENTER FOR NEURAL COMPUTATION LEARNING DAYS IN JERUSALEM Workshop on Fundamental Issues in Biological and Machine Learning May 30 - June 4, 1993 Hebrew University, Jerusalem, Israel The Center for Neural Computation at the Hebrew University is a new multidisciplinary research center for collaborative investigations of the principles underlying computation and information processing in the brain and in neuron-like artificial computing systems. The Center's activities span theoretical investigations of neural networks in physics, biology and computer science; experimental investigations in neurophysiology, psychophysics and cognitive psychology; and applied research on software and hardware implementations. The first international symposium sponsored by the Center will be held in the spring of 1993, at the Hebrew University of Jerusalem. It will focus on theoretical, experimental and practical aspects of learning in natural and artificial systems. Topics for the meeting include: * Theoretical Issues in Supervised and Unsupervised Learning * Neurophysiological Mechanisms Underlying Learning * Cognitive Psychology and Learning Psychophysics * Applications of Machine and Neural Network Learning Invited speakers include: Moshe Abeles (Hebrew Univ.) Roni Agranat (Hebrew Univ.) Ehud Ahissar (Weizmann Inst.) Asher Cohen (Hebrew Univ.) Yadin Dudai (Weizmann Inst.) David Haussler (UCSC) Yuval Davidor (Weizmann Inst.) Nathan Intrator (Tel Aviv Univ.) Michael Jordan (MIT) Yann LeCun (AT&T) Joseph LeDoux (NYU) Bruce McNaughton (U. Colorado) Yishai Mansour (Tel Aviv Univ.) Helge Ritter (Bielefeld) David Rumelhart (Stanford Univ.) Dov Sagi (Weizmann Inst.) Menachem Segal (Weizmann Inst.) Christoph von der Malsburg (Bochum) Alex Waibel (CMU) Norman Weinberger (U.C. Irvine) Participation in the Workshop is limited to 100. A small number of contributed papers will be accepted. Interested researchers and students are asked to submit registration forms by March 1, 1993, to Sari Steinberg Bchiri Center for Neural Computation Racah Institute of Physics Hebrew University 91904 Jerusalem Israel Tel: (972) 2 584563 Fax: (972) 2 584437 E-mail: learn at galaxy.huji.ac.il Organizing Committee: Shaul Hochstein, Haim Sompolinsky, Naftali Tishby. REGISTRATION FORM Please fill in the information needed for registration.
To ensure participation, please send a copy of this form by e-mail or fax as soon as possible to: Sari Steinberg Bchiri Center for Neural Computation/Racah Institute of Physics Hebrew University 91904 Jerusalem Israel Tel: (972) 2 584563; Fax: (972) 2 584437; E-mail: learn at galaxy.huji.ac.il Name _________________________________________________ Last First Title Affiliation __________________________________________ Position/Department __________________________________ Business Address _____________________________________ ______________________________________________________ ______________________________________________________ Country Telephone Home address _________________________________________ ______________________________________________________ ______________________________________________________ Country Telephone Preferred mailing address: ___ Home ___ Business Registration fees (before March 1): ____ Regular $100 ____ Student $ 50 Registration fees (after March 1): ____ Regular $150 ____ Student $ 75 Please send payment by check or international money order in US dollars made payable to: Learning Workshop with a copy of this form by March 1, 1993 to avoid late fee. Signature ___________________________________ Date _________________ ACCOMMODATION If you are interested in assistance in reserving hotel accommodation for the duration of the Workshop, please indicate your preferences below (as far in advance as possible). I wish to reserve a single/double room from __________ to __________ for a total of _______ nights. CONTRIBUTED PAPERS A very limited number of contributed papers will be accepted. Participants interested in submitting papers should fill out the following and enclose a 250-word abstract. Poster/Talk (circle one) Title: __________________________________________________________________ __________________________________________________________________ %--LaTeX--% \documentstyle[11pt,fullpage]{article} \begin{document} \newcommand{\beq}[1]{\begin{equation}\label{#1}} \newcommand{\eeq}{\end{equation}} \newcommand{\beqa}[1]{\begin{equation}\label{#1}\begin{eqalign}} \newcommand{\eeqa}{\end{eqalign}\end{equation}} \newcommand{\bsubeq}[1]{\begin{subequations}\label{#1}\begin{eqalignno}} \newcommand{\esubeq}{\end{eqalignno}\end{subequations}} % \begin{titlepage} \title{{\large The Hebrew University of Jerusalem\\ The Center for Neural Computation}\\ \vspace{0.6 in} {\huge\bf Learning Days in Jerusalem}\\ \vspace{0.3 in} {\Large Workshop on Fundamental Issues in Biological and Machine Learning}} \author{{\Large May 30 - June 4, 1993} \\ \\ {\Large Hebrew University, Jerusalem, Israel}} \date{} \maketitle The Center for Neural Computation at the Hebrew University is a new multidisciplinary research center for collaborative investigations of the principles underlying computation and information processing in the brain and in neuron-like artificial computing systems. The Center's activities span theoretical investigations of neural networks in physics, biology and computer science; experimental investigations in neurophysiology, psychophysics and cognitive psychology; and applied research on software and hardware implementations. \vspace{0.2in} The first international symposium sponsored by the Center will be held in the spring of 1993, at the Hebrew University of Jerusalem. It will focus on theoretical, experimental and practical aspects of learning in natural and artificial systems.
\vspace{.2in} Topics for the meeting include: \begin{itemize} \item{\bf Theoretical Issues in Supervised and Unsupervised Learning} \item{\bf Neurophysiological Mechanisms Underlying Learning} \item{\bf Cognitive Psychology and Learning Psychophysics} \item{\bf Applications of Machine and Neural Network Learning} \end{itemize} \vspace{0.2in} \newpage {\bf Invited speakers include:} \begin{tabbing} Moshe Abeles (Hebrew Univ.)xyzpdqrsvpaeiou\=Roni Agranat (Hebrew Univ.)\kill Moshe Abeles (Hebrew Univ.) \> Joseph LeDoux (NYU)\\ Roni Agranat (Hebrew Univ.) \> Bruce McNaughton (U. Colorado)\\ Ehud Ahissar (Weizmann Inst.) \> Christoph von der Malsburg (Bochum)\\ Asher Cohen (Hebrew Univ.) \> Yishai Mansour (Tel Aviv Univ.)\\ Yadin Dudai (Weizmann Inst.) \> Helge Ritter (Bielefeld)\\ David Haussler (UCSC) \> David Rumelhart (Stanford Univ.)\\ Yuval Davidor (Weizmann Inst.) \> Dov Sagi (Weizmann Inst.)\\ Nathan Intrator (Tel Aviv Univ.) \> Menachem Segal (Weizmann Inst.)\\ Michael Jordan (MIT) \> Alex Waibel (CMU)\\ Yann LeCun (AT\&T) \> Norman Weinberger (U.C. Irvine) \end{tabbing} \vspace{.3in} \noindent Participation in the Workshop is limited to 100. \vspace{.1in} \noindent A small number of contributed papers will be accepted. \vspace{.1in} \noindent Interested researchers and students are asked to submit registration forms by March 1, 1993, to \vspace{0.2in} \noindent Sari Steinberg Bchiri\\ Center for Neural Computation\\ Racah Institute of Physics\\ Hebrew University\\ 91904 Jerusalem\\ Israel\\ \vspace{0.1in} \noindent Tel: (972) 2 584563\\ Fax: (972) 2 584437\\ E-mail: {\tt learn at galaxy.huji.ac.il}\\ {\bf Organizing Committee: } Shaul Hochstein, Haim Sompolinsky, Naftali Tishby. \newpage \def\fillend{\hrulefill\vrule width 0pt\\} \centerline{\bf REGISTRATION FORM} \medskip Please fill in the information needed for registration. To ensure participation, please send a copy of this form by e-mail or fax as soon as possible to: \begin{tabbing} Sari Steinberg Bchiri lalalalalalalalalalala \= lalalalalalala \kill \noindent Sari Steinberg Bchiri \> E-MAIL: learn at galaxy.huji.ac.il\\ Center for Neural Computation \> TELEPHONE: 972-2-584563\\ Racah Institute of Physics \> FAX: 972-2-584437\\ Hebrew University of Jerusalem\\ 91904 Jerusalem, ISRAEL\\ \centerline {Registration will be confirmed by e-mail.}\\ \end{tabbing} \centerline{\bf Conference Registration} \medskip Name: \fillend Affiliation: \fillend Address: \fillend City: \hrulefill State: \hrulefill Zip: \hrulefill Country: \fillend Telephone: (\hspace{0.3in}) \hrulefill {\bf E-mail address:} \fillend \centerline{\bf Registration Fee} \noindent $\Box$ Regular registration (before March 1): \$100 \\ $\Box$ Student registration (before March 1): \$50 \\ $\Box$ Late registration (after March 1): \$150 \\ $\Box$ Student late registration (after March 1): \$75 \\ \newpage Please send payment by check or international money order in US dollars made payable to {\bf Learning Workshop} with a copy of this form by March 1, 1993 to avoid late fee. \centerline{\bf Accommodations} If you are interested in assistance in reserving hotel accommodation for the duration of the Workshop, please indicate your preferences below: I wish to reserve a $\Box$ single $\Box$ double room from \makebox[1.0in]{\hrulefill} to \makebox[1.0in]{\hrulefill} for a total of \makebox[.5in]{\hrulefill} nights. \centerline{\bf Contributed Papers} A very limited number of contributed papers will be accepted.
Participants interested in submitting papers should fill out the following and enclose a 250-word abstract.\\ $\Box$ Poster $\Box$ Talk\\ Title: \makebox[6.0in]{\hrulefill}\\ \makebox[6.5in]{\hrulefill}\\ \makebox[6.5in]{\hrulefill} \end{document} From harnad at Princeton.EDU Thu Dec 31 20:11:30 1992 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 31 Dec 92 20:11:30 EST Subject: PSYC Call for Book Reviewers: Categorization & Learning Message-ID: <9301010111.AA24165@clarity.Princeton.EDU> From harnad at clarity.princeton.edu Thu Dec 31 19:04:26 1992 From: harnad at clarity.princeton.edu (Stevan Harnad) Date: Thu Dec 31 19:04:26 EST 1992 Subject: psycoloquy.92.3.68.categorization.1.murre (160 lines) Message-ID: CALL FOR BOOK REVIEWERS Below is the Precis of LEARNING AND CATEGORIZATION IN MODULAR NEURAL NETWORKS by JMJ Murre. This book has been selected for multiple review in PSYCOLOQUY. If you wish to submit a formal book review (see Instructions following Precis) please write to psyc at pucc.bitnet indicating what expertise you would bring to bear on reviewing the book if you were selected to review it (if you have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences before, it would be helpful if you could also append a copy of your CV to your message). If you are selected as one of the reviewers, you will be sent a copy of the book directly by the publisher (please let us know if you have a copy already). Reviews may also be submitted without invitation, but all reviews will be refereed. The author will reply to all accepted reviews. ----------------------------------------------------------------------- psycoloquy.92.3.68.categorization.1.murre Thursday, 31 December 1992 ISSN 1055-0143 (6 paragraphs, 1 reference, 83 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1992 Jacob MJ Murre Precis of: LEARNING AND CATEGORIZATION IN MODULAR NEURAL NETWORKS JMJ Murre 1992, 244 pages Hemel Hempstead: Harvester Wheatsheaf (In Canada and the USA: Hillsdale, NJ: Lawrence Erlbaum) Jacob M.J. Murre MRC Applied Psychology Unit Cambridge, United Kingdom jaap.murre at mrc-applied-psychology.cambridge.ac.uk 1.0 MODULARITY AND MODULATION IN NEURAL NETWORKS 1.1 This book introduces a new neural network model, CALM, for categorization and learning in neural networks. CALM is based on ideas from neurobiology, psychology, and engineering. It defines a neural network paradigm that is both modular and modulatory. CALM stands for Categorizing And Learning Module and it may be viewed as a building block for neural networks. The internal structure of the CALM module is inspired by the neocortical minicolumn. Several of these modules are connected to form an initial neural network architecture. Throughout the book it is argued that modularity is important in overcoming many of the problems and limitations of current neural networks. Another pivotal concept in the CALM module is self-induced arousal, which may modulate the local learning rate and noise level. 1.2 The concept of arousal has roots in both biology and psychology. In CALM, this concept underlies two different modes of learning: elaboration learning and activation learning. Mandler and coworkers have conjectured that these two distinct modes of learning may cause the dissociation of memory observed in explicit and implicit memory tasks.
A series of simulations of such experiments demonstrates that arousal-modulated learning and categorization in modular neural networks can account for experimental results with both normal and amnesic patients. In the latter case, pathological but psychologically accurate behavior is produced by "lesioning" the arousal system of the model. The behavior obtained in this way is similar to that in patients with hippocampal lesions, suggesting that the hippocampus may form part of an arousal system in the brain. 1.3 Another application of CALM to psychological modelling shows how a modular CALM network can learn the word superiority effect for letter recognition. As an illustrative practical application, a small model is described that learns to recognize handwritten digits. 2.0 MODULAR NEURAL ARCHITECTURES AND NEUROCOMPUTERS 2.1 The book contains a concise introduction to genetic algorithms, a new computing method based on the metaphor of biological evolution that can be used to design network architectures with superior performance. In particular, it is shown how a genetic algorithm results in a better architecture for the digit-recognition model. 2.2 In five appendices, the role of modularity in parallel hardware and software implementations is discussed in some depth. Several hardware implementations are considered, including a formal analysis of their efficiency on transputer networks and an overview of a dedicated 400-processor neurocomputer built by the developers of CALM in cooperation with Delft Technical University. One of the appendices is dedicated to a discussion of the requirements of simulators for modular neural networks. 3.0 CATASTROPHIC INTERFERENCE AND OTHER ISSUES 3.1 The book ends with an evaluation of the psychological and biological plausibility of CALM models and a discussion of generalization, representational capacity of modular neural networks, and catastrophic interference. A series of simulations and a detailed analysis of Ratcliff's simulations of catastrophic interference show that in almost all cases interference can be attributed to overlap of hidden-layer representations across subsequent blocks of stimuli. It is argued that introducing modularity, or some other form of semidistributed representations, may reduce interference to a more psychologically plausible level. REFERENCE Murre, J.M.J. (1992) Learning and Categorization in Modular Neural Networks. Harvester Wheatsheaf/Erlbaum ---------------------------------------------------------------- PSYCOLOQUY INSTRUCTIONS PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 20,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral, cognitive, neural, social, etc.) All contributions are refereed by members of PSYCOLOQUY's Editorial Board. Target articles should normally not exceed 500 lines in length (commentaries and responses should not exceed 200 lines). All target articles must have (1) a short abstract (<100 words), (2) an indexable title, (3) 6-8 indexable keywords, and the (4) author's full name and institutional address. The submission should be accompanied by (5) a rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field?
what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). Commentaries must have indexable titles and the commentator's full name and institutional address (abstract is optional). All paragraphs should be numbered in articles, commentaries and responses (see format of already published articles in PSYCOLOQUY). It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article. PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected. Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY. Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu From harnad at Princeton.EDU Thu Dec 31 21:43:42 1992 From: harnad at Princeton.EDU (Stevan Harnad) Date: Thu, 31 Dec 92 21:43:42 EST Subject: PSYC Call for Book Reviewers: Language Comprehension Message-ID: <9301010243.AA24666@clarity.Princeton.EDU> CALL FOR BOOK REVIEWERS Below is the Precis of LANGUAGE COMPREHENSION AS STRUCTURE BUILDING by MA Gernsbacher. This book has been selected for multiple review in PSYCOLOQUY. If you wish to submit a formal book review (see Instructions following Precis) please write to psyc at pucc.bitnet indicating what expertise you would bring to bear on reviewing the book if you were selected to review it (if you have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences before, it would be helpful if you could also append a copy of your CV to your message). If you are selected as one of the reviewers, you will be sent a copy of the book directly by the publisher (please let us know if you have a copy already). Reviews may also be submitted without invitation, but all reviews will be refereed. The author will reply to all accepted reviews.
------------------------------------------------------------------------- psycoloquy.92.3.69.language-comprehension.1.gernsbacher Thurs 31 Dec 1992 ISSN 1055-0143 (29 paragraphs, 2 references, 275 lines) PSYCOLOQUY is sponsored by the American Psychological Association (APA) Copyright 1992 Morton Ann Gernsbacher Precis of: LANGUAGE COMPREHENSION AS STRUCTURE BUILDING MA Gernsbacher (1990) Hillsdale NJ: Lawrence Erlbaum Morton Ann Gernsbacher Department of Psychology University of Wisconsin-Madison 1202 W. Johnson Street Madison, WI 53706-1611 (608) 262-6989 [fax (608) 262-4029] mortong at macc.wisc.edu 0. KEYWORDS: comprehension, cognitive processes, sentence comprehension, psycholinguistics 1. Language can be viewed as a specialized skill involving language-specific processes and language-specific mechanisms. Another view is that language (both comprehension and production) draws on many general cognitive processes and mechanisms. According to this view, some of the same processes and mechanisms involved in producing and comprehending language are involved in nonlinguistic tasks. 2. This commonality might arise because, as Lieberman (1984) and others have suggested, language comprehension evolved from nonlinguistic cognitive skills. Or the commonality might arise simply because the mind is best understood by reference to a common architecture (e.g., a connectionist architecture). 3. I have adopted the view that many of the processes and mechanisms involved in language comprehension are general ones. This book describes a few of those cognitive processes and mechanisms, using a simple framework -- the Structure Building Framework -- as a guide. 4. According to the Structure Building Framework, the goal of comprehension is to build a coherent mental representation or "structure" of the information being comprehended. Several component processes are involved. First, comprehenders lay foundations for their mental structures. Next, they develop their mental structures by mapping on information when that incoming information coheres with the previous information. If the incoming information is less coherent, however, comprehenders engage in another cognitive process: They shift to initiate a new substructure. So, most representations comprise several branching substructures. 5. The building blocks of these mental structures are memory nodes. Memory nodes are activated by incoming stimuli. Initial activation forms the foundation of mental structures. Once the foundation is laid, subsequent information is often mapped onto a developing structure because the more coherent the incoming information is with the previous information, the more likely it is to activate similar memory nodes. In contrast, the less coherent the incoming information is, the less likely it is to activate similar memory nodes. In this case, the incoming information might activate a different set of nodes, and the activation of this other set of nodes forms the foundation for a new substructure. 6. Once memory nodes are activated, they transmit processing signals, either to enhance (boost or increase) or to suppress (dampen or decrease) other nodes' activation. In other words, two mechanisms control the memory nodes' level of activation: Enhancement and Suppression. Memory nodes are enhanced when the information they represent is necessary for further structure building. They are suppressed when the information they represent is no longer as necessary. 7. 
This book describes the three subprocesses involved in structure building, namely: the Process of Laying a Foundation for mental structures; the Process of Mapping coherent information onto developing structures; and the Process of Shifting to initiate new substructures. The book also describes the two mechanisms that control these structure building processes, namely: the Mechanism of Enhancement, which increases activation, and the Mechanism of Suppression, which dampens activation. 8. In discussing these processes and mechanisms, I begin by describing the empirical evidence to support them. I then describe comprehension phenomena that result from them. At each point, I stress that I assume that these processes and mechanisms are general; that is, the same ones should underlie nonlinguistic phenomena. This suggests that some of the bases of individual differences in comprehension skill might not be language specific. I describe how I have investigated this hypothesis empirically. 9. The process of laying a foundation is described in Chapter 2. Because comprehenders first lay a foundation, they spend more time reading the first word of a clause or sentence, the first sentence of a paragraph or story episode, and the first word of a spoken clause or spoken sentence; they also spend more time viewing the first picture of a picture story or picture story episode. 10. Comprehenders use these first segments (initial words, sentences, and pictures) to lay foundations for their mental representations of larger units (sentences, paragraphs, and story episodes). Because laying a foundation consumes cognitive effort, comprehenders slow down in understanding initial segments. Indeed, none of these comprehension time effects emerges when the information does not lend itself to building cohesive mental representations, for example, when the sentences, paragraphs, or stories are self-embedded or scrambled. 11. The process of laying a foundation explains why comprehenders are more likely to recall a sentence when cued by its first content word (or a picture of that first content word); why they are more likely to recall a story episode when cued by its first sentence; and why they are more likely to consider the first sentence of a paragraph the main idea of that paragraph, even when the actual theme occurs later. 12. Initial words, sentences, and pictures are optimal cues because they form the foundations of their clause-level, sentence-level, and episode-level structures; only through initial words, sentences, and pictures can later words, sentences, and pictures be mapped onto the developing representation. 13. Laying a foundation explains why comprehenders access the participant mentioned first in a clause faster than they access a participant mentioned later. This Advantage of First Mention occurs regardless of the first-mentioned participant's syntactic position or semantic role. First-mentioned participants are more accessible because they form the foundation of their clause-level substructures. 14. Laying a foundation also explains why the first clause of a multi-clause sentence is most accessible shortly after comprehenders hear or read that multi-clause sentence (even though while they are hearing or reading the sentence, the most recent clause is most accessible). According to the Structure Building Framework, comprehenders represent each clause of a multi-clause sentence in its own substructure.
Although they have greatest access to the information that is represented in the substructure that they are currently developing, at some point, the first clause becomes most accessible because the substructure representing the first clause forms the foundation for the whole sentence-level structure. 15. The processes of mapping and shifting are described in Chapter 3. The process of mapping explains why sentences that refer to previously mentioned concepts (and are, therefore, referentially coherent) are read faster than less referentially coherent sentences; why sentences that maintain a previously established time frame (and are, therefore, temporally coherent) are read faster than sentences that are less temporally coherent; why sentences that maintain a previously established location or point of view (and are, therefore, locationally coherent) are read faster than sentences that are less locationally coherent; and why sentences that are logical consequences of previously mentioned actions (and are, therefore, causally coherent) are read faster than sentences that are less causally coherent. 16. The process of shifting from actively building one substructure to initiating another explains why words and sentences that change the topic, point of view, location, or temporal setting take substantially longer to comprehend. The process of shifting also explains why information presented before a change in topic, point of view, location, or temporal setting is harder to retrieve than information presented afterward. Such changes trigger comprehenders to shift and initiate a new substructure; information presented before comprehenders shift is not represented in the same substructure as information presented afterward. 17. Shifting also explains a well known language comprehension phenomenon: Comprehenders quickly forget the exact form of recently comprehended information. This phenomenon is not unique to language; it also occurs while comprehenders are viewing picture stories; and it is also exacerbated after comprehenders cross episode boundaries, even the episode boundaries of picture stories. 18. Finally, shifting explains why comprehenders' memories for stories are organized by the episodes in which the stories were originally heard or read. Comprehenders shift in response to cues that signal a new episode; each episode is hence represented in a separate substructure. 19. The mechanisms of suppression and enhancement are described in Chapter 4. The suppression mechanism explains why only the contextually appropriate meaning of an ambiguous word, such as bug, is available to consciousness although multiple meanings -- even contextually inappropriate ones -- are often immediately activated. The inappropriate meanings do not simply decay; neither do they decrease in activation because their activation is consumed by the appropriate meanings. Rather, the suppression mechanism dampens the activation of inappropriate meanings. It also dampens the activation of less relevant associations of unambiguous words. 20. Suppression and enhancement explain how anaphors (such as pronouns, repeated noun phrases, and so forth) improve their antecedents' accessibility. Anaphors both enhance their antecedents' activation and suppress the activation of other concepts, with the net effect that after anaphoric reference, antecedents are more activated than other concepts. They are accordingly more accessible. 21. Suppression and enhancement are triggered by information that specifies the anaphor's identity. 
22. Suppression and enhancement explain why speakers and writers use more explicit anaphors at longer referential distances, at the beginnings of episodes, and for less topical concepts. The mechanisms of suppression and enhancement also explain why comprehenders have more difficulty accessing referents at longer referential distances, at the beginnings of episodes, and for less topical concepts.

23. Suppression and enhancement explain how concepts marked with cataphoric devices, like spoken stress and the indefinite article, "this," gain a privileged status in comprehenders' mental representations. Cataphoric devices enhance the activation of the concepts they mark. They also improve their concepts' representational status through suppression: Concepts marked with cataphoric devices are better at suppressing the activation of other concepts, and they are better at resisting being suppressed themselves.

24. Finally, the mechanisms of suppression and enhancement explain why comprehenders typically forget surface information faster than they forget thematic information; why comprehenders forget more surface information after they hear or read thematically organized passages than after they hear or read seemingly unrelated sentences; and why comprehenders better remember the surface forms of abstract sentences and the thematic content of concrete sentences.

25. Individual differences in structure building are described in Chapter 5. The Structure Building Framework explains why skill in comprehending linguistic media (written and spoken stories) is closely related to skill in comprehending nonlinguistic media (picture stories). Comprehensible information, regardless of its medium, is structured, and comprehenders differ in how skillfully they use the cognitive processes and mechanisms that capture this structure.

26. The process of shifting explains why less-skilled comprehenders are poorer at remembering recently comprehended information: They shift too often. The mechanism of suppression explains why less-skilled comprehenders are less able to reject the contextually inappropriate meanings of ambiguous words; why they are less able to reject the incorrect forms of homophones; why they are less able to reject the typical-but-absent members of nonverbal scenes; why they are less able to ignore words written on pictures; and why they are less able to ignore pictures surrounding words: Less-skilled comprehenders have inefficient suppression mechanisms.

27. The distinction between the mechanisms of suppression and enhancement explains why less-skilled comprehenders are not less able to appreciate the contextually appropriate meanings of ambiguous words and why they are not less able to appreciate typical members of nonverbal scenes. It is less-skilled comprehenders' suppression mechanisms, not their enhancement mechanisms, that are faulty.

28. Although the Structure Building Framework accounts parsimoniously for many comprehension phenomena, several questions remain unanswered. In the final chapter, I briefly identify just a few of those questions: Are the cognitive processes and mechanisms identified by the Structure Building Framework automatic, or are they under comprehenders' conscious control? In what medium are mental structures and substructures represented?
How is the Structure Building Framework similar to other approaches to describing comprehension? And what is lost by describing language comprehension at a general level?

29. I conclude that by describing language comprehension using the Structure Building Framework as a guide, I am not forced to accept nativism, to isolate the psychology of language from the remainder of psychology, to honor theory over data, to depend on linguistic theory, or to ignore functionalism. Instead, by describing language comprehension as structure building, I hope to map the study of language comprehension onto the firm foundation of cognitive psychology.

REFERENCES

Gernsbacher, M.A. (1990). Language Comprehension as Structure Building. Hillsdale, NJ: Lawrence Erlbaum.

Lieberman, P. (1984). The Biology and Evolution of Language. Cambridge, MA: Harvard University Press.

-----------------------------------------------------------------------

PSYCOLOQUY INSTRUCTIONS

PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 20,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral, cognitive, neural, social, etc.). All contributions are refereed by members of PSYCOLOQUY's Editorial Board.

Target articles should normally not exceed 500 lines in length (commentaries and responses should not exceed 200 lines). All target articles must have (1) a short abstract (<100 words), (2) an indexable title, (3) 6-8 indexable keywords, and (4) the author's full name and institutional address. The submission should be accompanied by (5) a rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). Commentaries must have indexable titles and the commentator's full name and institutional address (an abstract is optional). All paragraphs should be numbered in articles, commentaries, and responses (see the format of articles already published in PSYCOLOQUY).

It is strongly recommended that all figures be designed so as to be screen-readable ASCII. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as PostScript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article.

PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected.
Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright; after it has appeared in PSYCOLOQUY, authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY.

Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu
From paul at dendrite.cs.colorado.edu Sun Dec 6 13:04:42 1992 From: paul at dendrite.cs.colorado.edu (Paul Smolensky) Date: Sun, 6 Dec 1992 11:04:42 -0700 Subject: Cognitive Science Conference (note DEADLINE) Message-ID: <199212061804.AA20627@axon.cs.colorado.edu> Fifteenth Annual Meeting of the COGNITIVE SCIENCE SOCIETY A MULTIDISCIPLINARY CONFERENCE ON COGNITION June 18 - 21, 1993 University of Colorado at Boulder Call for Papers This year's conference aims at broad coverage of the many and diverse methodologies and topics that comprise Cognitive Science. In addition to computer modeling, the meeting will feature research in computational, theoretical, and psycho-linguistics; cognitive neuroscience; conceptual change and education; artificial intelligence; philosophical foundations; human-computer interaction and a number of other approaches to the study of cognition. A plenary session honoring the memory of Allen Newell is scheduled.
Plenary addresses will be given by: Alan Baddeley, Andy DiSessa, Paul Smolensky, Sandra Thomson, and Bonnie Webber.

The conference will also highlight invited research papers:
Conceptual Change (Organizers: Nancy Songer & Walter Kintsch): Gaea Leinhardt, Ashwin Ram, Jeremy Rochelle
Language Learning (Organizers: Paul Smolensky & Walter Kintsch): Michael Brent, Robert Frank, Brian MacWhinney
Situated Action (Organizer: James Martin): Leslie Kaebling, Pattie Maes, Bonnie Nardi, Alonso Vera
Visual Perception & Cognitive Neuroscience (Organizer: Michael Mozer): Marlene Behrmann, Robert Jacobs, Hal Pashler, David Plaut

PAPER SUBMISSIONS With the goal of assembling a high-quality program representative of the diversity of methods and topics in cognitive science, we invite papers presenting interdisciplinary research addressing any cognitive domain and using any of the diverse methodologies of the field. Papers are specifically solicited which address the topics of the invited research sessions listed above. Authors should submit five (5) copies of the paper in hard copy form to: Cognitive Science 1993 Submissions, Dr. Martha Polson, Institute of Cognitive Science, Campus Box 344, University of Colorado, Boulder, CO 80309-0344.

DAVID MARR MEMORIAL PRIZES FOR EXCELLENT STUDENT PAPERS Papers with a student first author will be eligible to compete for a David Marr Memorial Prize for excellence in research and presentation. The David Marr Prizes are accompanied by a $300.00 honorarium, and are funded by an anonymous donor.

LENGTH Papers must be a maximum of six (6) pages long (excluding only the cover page), must have at least 1 inch margins on all sides, and must use no smaller than 10 point type. Camera-ready versions will be required only after authors are notified of acceptance.

COVER PAGE Each copy of the paper must include a cover page, separate from the body of the paper, which includes, in order:
1. Title of paper.
2. Full names, postal addresses, phone numbers and e-mail addresses (if possible) of all authors.
3. An abstract of no more than 200 words.
4. The area(s) in which the paper should be reviewed. When possible, please list, in decreasing order of relevance, 1-3 of the following keywords: action/motor control, acquisition/learning, cognitive architecture, cognitive neuroscience, connectionism, conceptual change/education, decision making, foundations, human-computer interaction, language (indicate subarea), memory, reasoning and problem solving, perception, situated action/cognition, skill/expertise.
5. Preference for presentation format: Talk or poster, talk only, poster only. Poster sessions will be highlighted in this year's conference. The proceedings will not distinguish between papers presented orally and those presented as posters.
6. A note stating if the paper is eligible to compete for a Marr Prize.

DEADLINE ***** PAPERS ARE DUE JANUARY 19, 1993. ****** Late papers will be accepted until January 31, but authors of late papers will have less time to make revisions after acceptance.

SYMPOSIA Proposals for symposia are also invited. Proposals should indicate: (1) a brief description of the topic; (2) how the symposium would address a broad cognitive science audience; (3) names of symposium organizer(s); (4) a list of potential speakers, their topics, and some estimate of their likelihood of participation; (5) proposed symposium format (designed to last 90 minutes). Symposium proposals should be sent as soon as possible, but no later than January 19, 1993.

FOR MORE INFORMATION CONTACT
Dr. Martha Polson, Institute of Cognitive Science, Campus Box 344, University of Colorado, Boulder, CO 80309-0344. E-mail: Cogsci at clipr.colorado.edu. Telephone: (303) 492-7638. FAX: (303) 492-2967.

From ala at sans.kth.se Sun Dec 6 16:45:19 1992 From: ala at sans.kth.se (Anders Lansner) Date: Sun, 6 Dec 1992 22:45:19 +0100 Subject: Mechatronical Computer Systems that Perceive and Act Message-ID: <199212062145.AA27311@thalamus.sans.kth.se>

*************************************************************************
* Invitation to *
* International Workshop on Mechatronical Computer Systems *
* for Perception and Action, June 1-3, 1993. *
* Halmstad University, Sweden *
* First Call for Contributions *
*************************************************************************

Mechatronical Computer Systems that Perceive and Act
++++++++++++++++++++++++++++++++++++++++++++++++++++

A new generation
================
Mechatronical computer systems, which we will see in advanced products and production equipment of tomorrow, are designed to do much more than calculate. The interaction with the environment and the integration of computational modules in every part of the equipment, engaging in every aspect of its functioning, put new, and conceptually different, demands on the computer system. A development towards a complete integration between the mechanical system, advanced sensors and actuators, and a multitude of processing modules can be foreseen. At the systems level, powerful algorithms for perceptual integration, goal-direction and action planning in real time will be critical components. The resulting 'action-oriented systems' may interact with their environments by means of sophisticated sensors and actuators, often with a high degree of parallelism, and may be able to learn and adapt to different circumstances and environments. Perceiving the objects and events of the external world and acting upon the situation in accordance with an appropriate behaviour, whether programmed, trained, or learned, are key functions of these next-generation computer systems. The aim of this first International Workshop on Mechatronical Computer Systems for Perception and Action is to gather researchers and industrial development engineers, who work with different aspects of this exciting new generation of computing systems and computer-based applications, for a fruitful exchange of ideas and results and for often interdisciplinary discussions.

Workshop Form
=============
One of the days of the workshop will be devoted to 'true workshop activities'. The objective is to identify and propose research directions and key problem areas in mechatronical computing systems for perception and action. In the morning session, invited speakers, as well as other workshop delegates, will give their perspectives on the theme of the workshop. The work will proceed in smaller working groups during the afternoon, after which the conclusions will be presented in a plenary session. The scientific programme will also include presentations of research results in oral or poster form, or as demonstrations.

Subject Areas
=============
The programme committee welcomes all kinds of contributions -- papers to be presented orally or as posters, demonstrations, etc. -- in the areas listed below, as well as other areas of relevance to the theme of the workshop. From the workshop point of view, it is not essential that contributions contain only new, unpublished results.
Rather, the new, interdisciplinary collection of delegates that can be expected at the workshop may motivate presentations of earlier published results. Specifically, we invite delegates to state their view of the workshop theme, including identification of key research issues and research directions. The planning of the workshop day will be based on these submitted statements, some of which will be presented in the plenary session and some in the smaller working groups. At this early stage we also welcome proposals for session themes and invited talks.

Relevant subject areas are e.g.:
--------------------------------
Real-Time Systems Architecture and Real-Time Software.
Sensor Systems and Sensory/Motor Coordination.
Biologically Inspired Systems.
Applications of Unsupervised and Reinforcement Learning.
Real-Time Decision Making and Action Planning.
Parallel Processor Architectures for Embedded Systems.
Development Tools and Support Systems for Mechatronical Computer Systems and Applications.
Dependable Computer Systems.
Robotics and Machine Vision.
Neural Networks in Real-Time Applications.
Advanced Mechatronical Computing Demands in Industry.

IMPORTANT DATES
================
Dec. 15, 1992: Proposals for Invited speakers, Panel discussions, Special sessions, etc.
Febr. 1, 1993: Submissions of extended abstracts (4 pages max.) or full papers. Submissions of statements regarding perspectives on the conference theme that the delegate would like to present at the workshop (4 pages max.)
March 1, 1993: Notification of acceptance. Preliminary final programme.
May 1, 1993: Final papers and statements.

ORGANISERS
==========
The workshop is arranged by CCA, the Centre for Computer Architecture at Halmstad University, Sweden, in cooperation with the DAMEK Mechatronics Research Group and the SANS (Studies of Artificial Neural Systems) Research Group, both at the Royal Institute of Technology (KTH), Stockholm, Sweden, and the Department of Computer Engineering, Chalmers University of Technology, Gothenburg, Sweden. The Organising Committee includes: Lars Bengtsson, CCA (Organising Chair); Anders Lansner, SANS; Kenneth Nilsson, CCA; Bertil Svensson, Chalmers University of Technology and CCA (Programme and Conference Chair); Per-Arne Wiberg, CCA; Jan Wikander, DAMEK. The workshop is supported by SNNS, the Swedish Neural Network Society. It is financially supported by Halmstad University, the County Administration of Halland, and Swedish industries.

Social Activities: Social activities and a Programme for Accompanying persons will be arranged.

MCPA Workshop, Centre for Computer Architecture, Halmstad University, Box 823, S-30118 Halmstad, Sweden. Tel. +46 35 153134 (Lars Bengtsson), Fax. +46 35 157387, e-mail: mcpa at cca.hh.se

FURTHER INFORMATION
===================
For further information and registration form, fill out the form below and send to: MCPA Workshop/Lars Bengtsson, Centre for Computer Architecture, Halmstad University, Box 823, S-30118 HALMSTAD, Sweden. Alternatively, by simply mailing the text 'send info' to mcpa at cca.hh.se you will be included in the e-mail mailing list and supplied with up-to-date information.

////////////////////////////////////////////////////////////////
////////////////////////////////////////////////////////////////
Please include me in the mailing list of the MCPA Workshop
Name:
Address:
I intend to submit a paper/give a demonstration within the area of:
I have suggestions for the workshop programme.
Please contact me on phone/fax/e-mail:
///////////////////////////////////////////////////////////////////////
///////////////////////////////////////////////////////////////////////

From ingber at alumni.cco.caltech.edu Mon Dec 7 17:24:48 1992 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Mon, 7 Dec 1992 14:24:48 -0800 Subject: VFSR v6.30 now in Statlib Message-ID: <9212072224.AA21221@alumni.cco.caltech.edu> Very Fast Simulated Reannealing (VFSR) vfsr v6.30 is now in Statlib (login as statlib to lib.stat.cmu.edu, file vfsr is in directory general). If you already have vfsr v6.25 from Netlib (login as netlib to research.att.com, file vfsr.Z is in directory opt), this can be updated using a patch I'd be glad to send on request. v6.30 fixes a bug encountered for negative cost functions, and adds some printout to make your bug reports and comments easier to decipher. Lester || Prof. Lester Ingber ingber at alumni.caltech.edu || || P.O. Box 857 || || McLean, VA 22101 703-848-1859 = [10ATT]0-700-L-INGBER ||
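For readers meeting VFSR for the first time, a rough picture may help: it is a simulated-annealing variant whose temperature falls exponentially in k**(1/D) for a D-dimensional parameter space, which is what makes it "very fast". The one-parameter toy below follows the generating function and schedule given in Ingber's published description; it is an illustrative sketch under those assumptions, not the C implementation distributed via Statlib/Netlib.

    import math, random

    # Toy one-parameter VFSR-style annealer, for illustration only. The
    # distributed package adds re-annealing, per-parameter schedules, etc.
    def vfsr_minimize(cost, lo, hi, t0=1.0, c=1.0, iters=2000, d=1):
        x = random.uniform(lo, hi)
        fx = cost(x)
        best_x, best_f = x, fx
        for k in range(1, iters + 1):
            # Schedule T_k = T0 * exp(-c * k**(1/D)); floored here only to
            # avoid numerical underflow in this toy version.
            t = max(t0 * math.exp(-c * k ** (1.0 / d)), 1e-12)
            u = random.random()
            # Generating function: wide moves at high T, narrow at low T.
            step = math.copysign(
                t * ((1.0 + 1.0 / t) ** abs(2.0 * u - 1.0) - 1.0), u - 0.5)
            cand = min(max(x + step * (hi - lo), lo), hi)
            fc = cost(cand)
            # Boltzmann acceptance of uphill moves.
            if fc < fx or random.random() < math.exp(min(0.0, (fx - fc) / t)):
                x, fx = cand, fc
                if fc < best_f:
                    best_x, best_f = cand, fc
        return best_x, best_f

    print(vfsr_minimize(lambda x: x * x + 10.0 * math.sin(3.0 * x), -4.0, 4.0))

The exponentially shrinking temperature is the design point: it concentrates sampling near promising regions far sooner than the logarithmic schedules of classical simulated annealing.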
From moody at cse.ogi.edu Tue Dec 8 18:22:00 1992 From: moody at cse.ogi.edu (John Moody) Date: Tue, 8 Dec 92 15:22 PST Subject: PhD and Masters Programs at the Oregon Graduate Institute Message-ID: Fellow Connectionists: The Oregon Graduate Institute of Science and Technology (OGI) has openings for a few outstanding students in its Computer Science Masters and Ph.D. programs in the areas of Neural Networks, Learning, Speech, Language, Vision, and Control. Faculty in these areas include Etienne Barnard, Ron Cole, Mark Fanty, Dan Hammerstrom, Todd Leen, Uzi Levin, John Moody, David Novick, Misha Pavel (visiting), and Barak Pearlmutter. Short descriptions of faculty research interests are appended below. OGI is a young, but rapidly growing, private research institute located in the Portland area. OGI offers Masters and PhD programs in Computer Science and Engineering, Applied Physics, Electrical Engineering, Biology, Chemistry, Materials Science and Engineering, and Environmental Science and Engineering. Inquiries about the Masters and PhD programs and admissions should be addressed to: Office of Admissions and Records, Oregon Graduate Institute of Science and Technology, 19600 NW von Neumann Drive, Beaverton, OR 97006-1999, or to the Computer Science and Engineering Department at csedept at cse.ogi.edu or (503) 690-1150. The final deadline for receipt of all application materials is March 1, 1993. Applications are reviewed as they are received, and applying early is strongly advised.

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Oregon Graduate Institute of Science & Technology (OGI)
Department of Computer Science and Engineering
Research Interests of Faculty in Neural Networks, Learning, Speech, Language, Vision, and Control

Etienne Barnard: Etienne Barnard is interested in the theory, design and implementation of pattern-recognition systems, classifiers, and neural networks. He is also interested in adaptive control systems -- specifically, the design of near-optimal controllers for real-world problems such as robotics.

Ron Cole: Ron Cole is director of the Center for Spoken Language Understanding at OGI. Research in the Center currently focuses on speaker-independent recognition of continuous speech over the telephone and automatic language identification for English and ten other languages. The approach combines knowledge of hearing, speech perception, acoustic phonetics, prosody and linguistics with neural networks to produce systems that work in the real world.

Mark Fanty: Mark Fanty's research interests include continuous speech recognition for the telephone; natural language and dialog for spoken language systems; neural networks for speech recognition; and voice control of computers.

Dan Hammerstrom: Based on research performed at the Institute, Dan Hammerstrom and several of his students have spun out a company, Adaptive Solutions Inc., which is creating massively parallel computer hardware for the acceleration of neural network and pattern recognition applications. There are close ties between OGI and Adaptive Solutions. Dan is still on the faculty of the Oregon Graduate Institute and continues to study next generation VLSI neurocomputer architectures.

Todd K. Leen: Todd Leen's research spans theory of neural network models, architecture and algorithm design and applications to speech recognition. His theoretical work is currently focused on the foundations of stochastic learning, while his work on algorithm design is focused on fast algorithms for non-linear data modeling.

Uzi Levin: Uzi Levin's research interests include neural networks, learning systems, decision dynamics in distributed and hierarchical environments, dynamical systems, Markov decision processes, and the application of neural networks to the analysis of financial markets.

John Moody: John Moody does research on the design and analysis of learning algorithms, statistical learning theory (including generalization and model selection), optimization methods (both deterministic and stochastic), and applications to signal processing, time series, and finance.

David Novick: David Novick conducts research in interactive systems, including computational models of conversation, technologically mediated communication, and human-computer interaction. A central theme of this research is the role of meta-acts in the control of interaction. Current projects include dialogue models for telephone-based information systems.

Misha Pavel (visiting from NYU and NASA Ames): Misha Pavel does mathematical and neural modeling of adaptive behaviors including visual processing, pattern recognition, visually guided motor control, categorization, and decision making. He is also interested in the application of these models to sensor fusion, visually guided vehicular control, and human-computer interfaces.

Barak Pearlmutter: Barak Pearlmutter is interested in adaptive systems in their many manifestations. He currently works on neural network learning, unsupervised learning, generalization, accelerating the learning process, relations to biology, reinforcement learning and control, and applications to practical problems.

From jbower at cns.caltech.edu Tue Dec 8 16:23:40 1992 From: jbower at cns.caltech.edu (Jim Bower) Date: Tue, 8 Dec 92 13:23:40 PST Subject: CNS*93 Message-ID: <9212082123.AA08110@smaug.cns.caltech.edu> CALL FOR PAPERS Second Annual Computation and Neural Systems Meeting CNS*93 July 31 - August 8, 1993 Washington D.C. This is the second annual meeting of an interdisciplinary conference intended to address the broad range of research approaches and issues involved in the general field of computational neuroscience.
Last year's meeting in San Francisco brought together 300 experimental and theoretical neurobiologists along with engineers, computer scientists, cognitive scientists, physicists, and mathematicians to consider the functioning of biological nervous systems. Eighty-five peer-reviewed papers were presented at the meeting on a range of subjects related to understanding how biological neural systems compute. As last year, the meeting is intended to emphasize equally experimental, model-based, and more abstract theoretical approaches to understanding neurobiological computation. The first day of the meeting will be devoted to tutorial presentations and workshops focused on particular technical issues confronting computational neurobiology. The main body of the meeting will include plenary, contributed and poster sessions. There will be no parallel sessions and the full text of presented papers will be published in a proceedings volume. Following the regular session, there will be two days of focused workshops at a rural site outside of the D.C. area. With this announcement we solicit the submission of papers to the meeting. All papers will be refereed.

Submission Procedures: Original research contributions are solicited. Authors must submit a 1000-word (or less) summary and a separate single-page 50-100 word abstract clearly stating their results. Accepted abstracts will be published in the conference program. Summaries are for program committee use only. At the bottom of each abstract page and on the first summary page, indicate preference for oral or poster presentation and specify at least one appropriate category and theme from the following list:

Presentation categories:
A. Theory and Analysis
B. Modeling and Simulation
C. Experimental
D. Tools and Techniques

Themes:
A. Development
B. Cell Biology
C. Excitable Membranes and Synaptic Mechanisms
D. Neurotransmitters, Modulators, Receptors
E. Sensory Systems (1. Somatosensory, 2. Visual, 3. Auditory, 4. Olfactory, 5. Other)
F. Motor Systems and Sensory Motor Integration
G. Behavior
H. Cognitive
I. Disease

Include addresses of all authors on the front of the summary and the abstract, including email for each author. Indicate on the front of the summary to which author correspondence should be addressed. Program committee decisions will be sent to the correspondence author only. Submissions that lack category information, separate abstract sheets, or author addresses, or that arrive late, will not be considered. Submissions can be made by either surface mail or email. Authors submitting via surface mail should send 6 copies of the abstract and summary to: Chris Ploegaert, CNS*93 Submissions, Division of Biology 216-76, Caltech, Pasadena, CA 91125. Email submissions should be sent to: cp at smaug.cns.caltech.edu (ASCII, PostScript, or LaTeX files accepted). In each case, submissions must be postmarked (emailed) by January 26th, 1993.

Registration information: All submitting authors will be sent registration material automatically. Others interested in obtaining registration material once it becomes available should contact Chris Ploegaert at the above address or via email at: cp at smaug.cns.caltech.edu

+++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

CNS*93 Organizing Committee: Meeting Coordination: Dennis Glanzman, National Institute of Mental Health; Frank Eeckman, Lawrence Livermore Labs. Program Co-Chairs: James M. Bower, Caltech;
Eve Marder, Brandeis University; John Rinzel, National Institutes of Health. Finances: John Miller, University of California, Berkeley; Gwen Jacobs, University of California, Berkeley. Workshop Chair: Bartlett Mel, Caltech. European Liaison: Herbert Axelrad, Faculte de Medecine Pitie-Salpetriere, Paris.

======================================================================

Potential participants interested in the content of last year's meeting can ftp last year's agenda using the following procedure (you type the text shown in quotes):

yourhost% "ftp 131.215.137.69"
220 mordor FTP server (SunOS 4.1) ready.
Name (131.215.137.69:): "ftp"
331 Guest login ok, send ident as password.
Password: "yourname at yourhost.yoursite.yourdomain"
230 Guest login ok, access restrictions apply.
ftp> "cd cns93"
250 CWD command successful.
ftp> "get cns92.agenda"
200 PORT command successful.
150 ASCII data connection for cns92.agenda (131.215.137.69,1363)
226 ASCII Transfer complete.
local: cns92.agenda remote: cns92.agenda
17598 bytes received in 0.33 seconds (53 Kbytes/s)
ftp> "quit"
221 Goodbye.
yourhost% (use any editor to look at the file)

====================================================================

**DEADLINE FOR SUMMARIES & ABSTRACTS IS January 26, 1993** please post

From ingber at alumni.cco.caltech.edu Wed Dec 9 05:25:31 1992 From: ingber at alumni.cco.caltech.edu (Lester Ingber) Date: Wed, 9 Dec 1992 02:25:31 -0800 Subject: Very Fast Simulated Reannealing (VFSR) via Ftp or Email Message-ID: <9212091025.AA11704@alumni.cco.caltech.edu> Very Fast Simulated Reannealing (VFSR) via Ftp or Email My previous announcement did not specify the use of ftp, and many people unfamiliar with the use of NETLIB and STATLIB were understandably confused. This announcement is to remedy that problem.

STATLIB: vfsr v6.30
Interactive:
  ftp lib.stat.cmu.edu [login as statlib, your_login_name as password]
  cd general
  get vfsr
Email:
  mail statlib at lib.stat.cmu.edu
  send vfsr from general

NETLIB: vfsr v6.25
Interactive:
  ftp research.att.com [login as netlib, your_login_name as password]
  cd opt
  binary
  get vfsr.Z
Email:
  mail netlib at research.att.com
  send vfsr from opt

PATCH: vfsr-diff-6.25-6.30.Z.uu
If you already have vfsr v6.25 from NETLIB, this can be updated using a patch I'd be glad to send on request.
  strip out text between CUT HERE lines, save to savefile
  uudecode savefile
  uncompress vfsr-diff-6.25-6.30.Z
  mv vfsr-diff-6.25-6.30 VFSR ; cd VFSR
  patch -p1 < vfsr-diff-6.25-6.30

v6.30 fixes a bug encountered for negative cost functions, and adds some printout to make your bug reports and comments easier to decipher. Lester || Prof. Lester Ingber ingber at alumni.caltech.edu || || P.O. Box 857 || || McLean, VA 22101 703-848-1859 = [10ATT]0-700-L-INGBER ||

From Patricia.M.Reed at Dartmouth.EDU Wed Dec 9 09:03:40 1992 From: Patricia.M.Reed at Dartmouth.EDU (Patricia.M.Reed@Dartmouth.EDU) Date: 9 Dec 92 09:03:40 EST Subject: Position Opening Message-ID: <2817564@donner.Dartmouth.EDU> The following ad describes a position opening at Dartmouth College in the Neurosciences or Cognitive Neurosciences. Please submit applications/nominations to the address given. **************************** David T. McLaughlin Distinguished Professorship in Cognitive Science or Cognitive Neuroscience Dartmouth College seeks a distinguished individual in cognitive science or cognitive neuroscience to be the first holder of the David T. McLaughlin Distinguished Professorship.
It is expected that the appointment will be made at the tenured, full professor level in the Department of Psychology and that the successful candidate will participate in the undergraduate and doctoral programs of the Department. However, the interests and achievements of the appointee should transcend the normal academic boundaries and should encompass scholarship that integrates disciplines within Arts and Science and/or the professional schools of medicine and engineering. Candidates should possess an outstanding record of scholarship and a proven ability to work in an interdisciplinary environment, to attract external funding for their research, and to communicate their work to a diverse audience. In addition to participating in the activities of the cognitive science group in the Psychology Department, the appointee would be expected to foster interactions with other research groups, such as those in computer science and engineering, signal processing, neurosurgery, and molecular neuroscience. Nominations and applications should be sent to the following address: P. Bruce Pipes, Associate Provost for Academic Affairs, Dartmouth College, 6004 Parkhurst Hall, Room 204, Hanover, NH 03755-3529. Formal consideration of candidates will begin February 1, 1993. Dartmouth College is an Affirmative Action/Equal Opportunity employer. Applications from and nominations of women and minority candidates are strongly encouraged.

From roscheis at CS.Stanford.EDU Wed Dec 9 18:56:48 1992 From: roscheis at CS.Stanford.EDU (Martin Roscheisen) Date: Wed, 9 Dec 92 15:56:48 -0800 Subject: No subject Message-ID: <9212092356.AA22323@csd-d-5.Stanford.EDU> The following is relevant to connectionists interested in natural language. - martin -------------- Feel free to forward to colleagues. Do not redistribute to public mailing lists. ----------------------------------------------------------- Mailing List on Statistics, Natural Language, and Computing We will be maintaining a special-purpose mailing list to provide a platform for - discussing technical issues, - distributing abstracts of new papers, - locating and sharing information, and - making announcements (workshops, jobs) related to corpus-based studies of natural language, statistical natural language processing, methods that enable systems to deal with and scale up to actual language use, psycholinguistic evidence of representation of distributional properties of language, as well as applications in such areas as information retrieval, human-computer interaction, and translation. Special care will be taken to keep uninformed or redundant messages to a minimum; the list is filtered and restricted to people actively involved in relevant research. To be added to or dropped from the distribution list, send a message to empiricists-request at csli.stanford.edu. Contributions should go to empiricists at csli.stanford.edu.
Martin Roscheisen roscheis at cs.stanford.edu; David Yarowsky yarowsky at unagi.cis.upenn.edu; David Magerman magerman at watson.ibm.com; Ido Dagan dagan at research.att.com

From SCHNEIDER at vms.cis.pitt.edu Thu Dec 10 16:53:00 1992 From: SCHNEIDER at vms.cis.pitt.edu (SCHNEIDER@vms.cis.pitt.edu) Date: Thu, 10 Dec 1992 16:53 EST Subject: Pre- and Post-doc positions in Neural Processes in Cognition in Pittsburgh Message-ID: <01GS5P0MZ0E891YBN8@vms.cis.pitt.edu> Program announcement for Interdisciplinary Graduate and Postdoctoral Training in Neural Processes in Cognition at the University of Pittsburgh and Carnegie Mellon University: Pre- and Post-Doctoral positions.

The Pittsburgh Neural Processes in Cognition program, now in its third year, is providing interdisciplinary training in brain sciences. The National Science Foundation has established an innovative program for students investigating the neurobiology of cognition. The program's focus is the interpretation of cognitive functions in terms of neuroanatomical and neurophysiological data and computer simulations. Such functions include perceiving, attending, learning, planning, and remembering in humans and in animals. A carefully designed program of study prepares each student to perform original research investigating cortical function at multiple levels of analysis. State-of-the-art facilities include: computerized microscopy, human and animal electrophysiological instrumentation, behavioral assessment laboratories, fMRI and PET brain scanners, the Pittsburgh Supercomputing Center, and a regional medical center providing access to human clinical populations. This is a joint program between the University of Pittsburgh, its School of Medicine, and Carnegie Mellon University. Each student receives full financial support, travel allowances and workstation support. Applications are encouraged from students with interest in biology, psychology, engineering, physics, mathematics, or computer science. Last year's class included mathematicians, psychologists, and neuroscience researchers. Pittsburgh is one of America's most exciting and affordable cities, offering outstanding symphony, theater, professional sports, and outdoor recreation in the surrounding Allegheny mountains. More than ten thousand graduate students attend its universities.

Core Faculty, interests, and affiliations:
Carnegie Mellon University:
Psychology - James McClelland, Johnathan Cohen, Martha Farah, Mark Johnson
Computer Science - David Touretzky
University of Pittsburgh:
Behavioral Neuroscience - Michael Ariel
Biology - Teresa Chay
Information Science - Paul Munro
Mathematics - Bard Ermentrout
Neurobiology Anatomy and Cell Sciences - Al Humphrey
Neurological Surgery - Don Krieger, Robert Sclabassi
Neurology - Steven Small
Psychiatry - David Lewis, Lisa Morrow, Stuart Steinhauer
Psychology - Walter Schneider, Velma Dobson
Physiology - Dan Simons
Radiology - Mark Mintun

Applications: To apply to the program contact the program office or one of the affiliated departments. Students are admitted jointly to a home department and the Neural Processes in Cognition Program. Postdoctoral applicants must have United States resident status and are expected to have a sponsor among the training faculty. Applications are requested by February 1. For information contact: Professor Walter Schneider, Program Director, Neural Processes in Cognition, University of Pittsburgh, 3939 O'Hara St, Pittsburgh, PA 15260. Or: call 412-624-7064 or Email to NEUROCOG at VMS.CIS.PITT.BITNET.
In Email requests for application materials, please provide your address and an indication of which department(s) you might be interested in.

From paulina at pollux.usc.edu Fri Dec 11 18:14:19 1992 From: paulina at pollux.usc.edu (Paulina Baligod) Date: Fri, 11 Dec 92 15:14:19 PST Subject: Call for Papers Message-ID: <9212112314.AA03707@pollux.usc.edu>

SCHEMAS AND NEURAL NETWORKS: INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION

A Workshop sponsored by the Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, April 13th and 14th, 1993.

Program Committee: Michael Arbib (Organizer), John Barnden, George Bekey, Francisco Cervantes-Perez, Damian Lyons, Paul Rosenbloom, Ron Sun, Akinori Yonezawa.

To design complex technological systems and to analyze complex biological and cognitive systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence, perceptual robotics, cognitive modeling, and brain theory which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks.
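To make the two-level picture concrete for readers new to schema theory, here is a deliberately simplified sketch in which two coarse-grain schemas cooperate by activation level and one of them is implemented by a fine-grain neural unit. The class names, weights, and winner-take-all rule are all invented for illustration; they are not a notation endorsed by the workshop.

    import math

    class Schema:
        """Coarse-grain computing agent with an activation level."""
        def __init__(self, name):
            self.name = name
            self.activity = 0.0

    class RuleSchema(Schema):
        """Symbolic schema: activity set by an explicit condition."""
        def __init__(self, name, condition):
            super().__init__(name)
            self.condition = condition
        def update(self, features):
            self.activity = 1.0 if self.condition(features) else 0.0

    class NeuralSchema(Schema):
        """Subsymbolic schema: activity computed by a logistic unit."""
        def __init__(self, name, weights, bias):
            super().__init__(name)
            self.weights, self.bias = weights, bias
        def update(self, features):
            net = sum(w * x for w, x in zip(self.weights, features)) + self.bias
            self.activity = 1.0 / (1.0 + math.exp(-net))

    # Two schemas for a toy robot: a hand-written reflex and a trained unit.
    schemas = [RuleSchema("stop-at-obstacle", lambda f: f[0] > 0.9),
               NeuralSchema("approach-target", [2.0, -1.0], -0.5)]
    features = [0.2, 0.7]          # e.g., obstacle proximity, target bearing
    for s in schemas:
        s.update(features)
    active = max(schemas, key=lambda s: s.activity)
    print(active.name, round(active.activity, 2))   # approach-target 0.31

The point of the exercise is the one the call makes: the overall specification is written at the schema level, while any individual schema may or may not be realized as a neural network.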
The proposed workshop will provide a 2-hour introductory tutorial and problem statement by Michael Arbib, and sessions in which an invited paper will be followed by several contributed papers, selected from those submitted in response to this call for papers. Preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of schemas, and where some but not necessarily all of the schemas are implemented in neural networks. A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models:

Schema Theory as a description language for neural networks
Modular neural networks
Linking DAI to Neural Networks to Hybrid Architecture
Formal Theories of Schemas
Hybrid approaches to integrating planning & reaction
Hybrid approaches to learning
Hybrid approaches to commonsense reasoning by integrating neural networks and rule-based reasoning (using schema for the integration)
Programming Languages for Schemas and Neural Networks
Concurrent Object-Oriented Programming for Distributed AI and Neural Networks
Schema Theory Applied in Cognitive Psychology, Linguistics, Robotics, AI and Neuroscience

Prospective contributors should send a hard copy of a five-page extended abstract, including figures with informative captions and full references (either by regular mail or fax) by February 15, 1993 to Michael Arbib, Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863, arbib at pollux.usc.edu]. Please include your full address, including fax and email, on the paper. Notification of acceptance or rejection will be sent by email no later than March 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but revised versions of accepted abstracts received prior to April 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. [A useful way to structure such an abstract is in short numbered sections, where each section presents (in a small type face!) the material corresponding to one transparency/slide in a verbal presentation. This will make it easy for an audience to take notes if they have a copy of the abstract at your presentation.]

Hotel Information: Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail of the above rates. The registration fee of $150 includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of April 13th. Those wishing to register should send a check payable to Center for Neural Engineering, USC for $150 together with the following information to Paulina Tagle, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA.

---------------------------------------------------------------------------
SCHEMAS AND NEURAL NETWORKS
Center for Neural Engineering, USC
April 13 - 14, 1993
NAME: ___________________________________________
ADDRESS: _________________________________________
PHONE NO.: _______________ FAX: ___________________
EMAIL: ___________________________________________
I intend to submit a paper: YES [ ] NO [ ]

From PIURI at IPMEL1.POLIMI.IT Sun Dec 13 05:57:02 1992 From: PIURI at IPMEL1.POLIMI.IT (PIURI@IPMEL1.POLIMI.IT) Date: 13 Dec 1992 10:58:02 +0001 Subject: call for papers Message-ID: <01GS9JGGS8C29BVDD1@icil64.cilea.it>

================================================================
1993 INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC ARRAY PROCESSORS
ASAP'93
25-27 October 1993, Venice, Italy
================================================================

Sponsored by the EUROMICRO Association. In cooperation with IEEE Computer Society (pending), IFIP WG 10.5, AEI, AICA (pending).

................:::::: CALL FOR PAPERS ::::::.................

ASAP'93 is an international conference encompassing the theory, design, implementation, and evaluation of application-specific computing. This conference is a successor to the First International Workshop on Systolic Arrays held in Oxford, England, in July 1986. The title has been modified to reflect the expanded interest in highly-parallel algorithmically-specialized processors as well as the application-driven nature of contemporary systems. Since application-specific array-processor research represents a cross-disciplinary field of interest to a broad audience, this conference will present a balanced program covering technical subjects encompassing conceptual design, programming techniques, electronic and optical implementations, and analysis and evaluation of final systems. It is expected that participants will include people from research institutions in government, industry, and academia around the world.
The conference will feature an opening keynote address, technical presentations, a panel discussion, and poster displays. One of the poster sessions will be reserved for very recent results, ongoing projects and exploratory work. The official language is English. The themes emphasized at this conference include architectures, algorithms, applications, hardware, software, and design methodology for application-specific parallel computing systems. Papers are expected to address both theoretical and practical aspects of such systems. Of particular interest are contributions that achieve large performance gains with application-specific parallel processors, introduce novel architectural concepts, propose formal and practical frameworks for the specification, design and evaluation of these systems, discuss technology dependencies and the integration of hardware and software components, and describe fabricated systems and their testing. The topics of interest include, but are not limited to, the following:

Application-Specific Architectures
- systolic, SIMD, MIMD, dataflow systems
- homogeneous, heterogeneous, reconfigurable systems
- intelligent memory systems and interconnection network
- embedded systems and interfaces

Algorithms for Application-Specific Computing
- matrix operations
- transformations (e.g., FFT, Hough)
- sorting algorithms
- graph algorithms

Applications that Require Specialized Computing Systems
- signal processing
- image processing and vision
- communications
- robotics and manufacturing
- neural networks
- scientific computing
- artificial intelligence and data bases

Software for Application-Specific Computing
- languages
- operating systems
- optimizing compilers

Hardware for Application-Specific Computing
- VLSI/WSI systems
- optical systems
- custom and commercial processors
- implementation and testing issues
- fabricated systems
- synchronous vs asynchronous systems

Design Methodology for Application-Specific Systems
- mapping algorithms onto architectures
- partitioning large problems
- design tools
- fault tolerance (hardware and software)
- benchmarks and performance modeling

INFORMATION FOR AUTHORS
Authors are invited to send a one-page abstract, the title of the paper and the author's address by electronic or postal mail to the Program Chair by MARCH 27, 1993. Authors must submit five copies of their double-spaced typed manuscript (maximum 5000 words) with an abstract to the Program Chair by APRIL 16, 1993. In the submission letter, the authors should indicate which conference areas are most relevant to the paper, and the author responsible for correspondence and the camera-ready version. Papers submitted should be unpublished and not currently under review by other conferences. Notification of acceptance will be posted by JUNE 10, 1993. The camera-ready version must arrive by AUGUST 1, 1993. Proceedings will be published by IEEE Computer Society Press.

GENERAL CHAIR
Prof. Luigi DADDA, Dept. of Electronics and Information, Politecnico di Milano, p.za L. da Vinci 32, I-20133 Milano, Italy, phone no. (39-2) 2399-3405, fax no. (39-2) 2399-3411, e-mail dadda at ipmel2.elet.polimi.it

PROGRAM CHAIR
Prof. Benjamin W. WAH, Coordinated Science Laboratory, University of Illinois, 1308 West Main Street, Urbana, IL 61801, USA, phone no. (217) 333-3516, fax no. (217) 244-7175, e-mail wah at manip.crhc.uiuc.edu

FINANCIAL & REGISTRATION CHAIR
Prof. Vincenzo PIURI, Dept. of Electronics and Information, Politecnico di Milano, p.za L. da Vinci 32, I-20133 Milano, Italy, phone no.
(39-2) 2399-3606, fax no. (39-2) 2399-3411 e-mail piuri at ipmel1.polimi.it From andycl at syma.sussex.ac.uk Tue Dec 15 11:43:25 1992 From: andycl at syma.sussex.ac.uk (Andy Clark) Date: Tue, 15 Dec 92 16:43:25 GMT Subject: No subject Message-ID: <20823.9212151643@syma.sussex.ac.uk> bcc: andycl at cogs re: Doctoral Program in Philosophy-Psychology-Neuroscience First Announcement of a New Doctoral Programme in PHILOSOPHY-NEUROSCIENCE-PSYCHOLOGY at Washington University in St. Louis The Philosophy-Neuroscience-Psychology (PNP) program offers a unique opportunity to combine advanced philosophical studies with in-depth work in Neuroscience or Psychology. In addition to meeting the usual requirements for a Doctorate in Philosophy, students will spend one year working in Neuroscience or Psychology. The Neuroscience option will draw on the resources of the Washington University School of Medicine, which is an internationally acknowledged center of excellence in neuroscientific research. The initiative will also employ several new PNP-related Philosophy faculty and post-doctoral fellows. Students admitted to the PNP program will embark upon a five-year course of study designed to fulfill all the requirements for the Ph.D. in philosophy, including an academic year studying neuroscience at Washington University's School of Medicine or psychology in the Department of Psychology. Finally, each PNP student will write a dissertation jointly directed by a philosopher and a faculty member from either the medical school or the psychology department. THE FACULTY Roger F. Gibson, Ph.D., Missouri, Professor and Chair: Philosophy of Language, Epistemology, Quine. Robert B. Barrett, Ph.D., Johns Hopkins, Professor: Pragmatism, Renaissance Science, Philosophy of Social Science, Analytic Philosophy. Andy Clark, Ph.D., Stirling, Visiting Professor (1993-6) and Acting Director of PNP: Philosophy of Cognitive Science, Philosophy of Mind, Philosophy of Language, Connectionism. J. Claude Evans, Ph.D., SUNY-Stony Brook, Associate Professor: Modern Philosophy, Contemporary Continental Philosophy, Phenomenology, Analytic Philosophy, Social and Political Theory. Marilyn A. Friedman, Ph.D., Western Ontario, Associate Professor: Ethics, Social Philosophy, Feminist Theory. William H. Gass, Ph.D., Cornell, Distinguished University Professor of the Humanities: Philosophy of Literature, Photography, Architecture. Lucian W. Krukowski, Ph.D., Washington University, Professor: 20th Century Aesthetics, Philosophy of Art, 18th and 19th Century Philosophy, Kant, Hegel, Schopenhauer. Josefa Toribio Mateas, Ph.D., Complutense University, Assistant Professor: Philosophy of Language, Philosophy of Mind. Larry May, Ph.D., New School for Social Research, Professor: Social and Political Philosophy, Philosophy of Law, Moral and Legal Responsibility. Stanley L. Paulson, Ph.D., Wisconsin, J.D., Harvard, Professor: Philosophy of Law. Mark Rollins, Ph.D., Columbia, Assistant Professor: Philosophy of Mind, Epistemology, Philosophy of Science, Neuroscience. Jerome P. Schiller, Ph.D., Harvard, Professor: Ancient Philosophy, Plato, Aristotle. Joyce Trebilcot, Ph.D., California at Santa Barbara, Associate Professor: Feminist Philosophy. Joseph S. Ullian, Ph.D., Harvard, Professor: Logic, Philosophy of Mathematics, Philosophy of Language. Richard A. Watson, Ph.D., Iowa, Professor: Modern Philosophy, Descartes, Historical Sciences. Carl P.
Wellman, Ph.D., Harvard, Hortense and Tobias Lewin Professor in the Humanities: Ethics, Philosophy of Law, Legal and Moral Rights. EMERITI Richard H. Popkin, Ph.D., Columbia: History of Ideas, Jewish Intellectual History. Alfred J. Stenner, Ph.D., Michigan State: Philosophy of Science, Epistemology, Philosophy of Language. FINANCIAL SUPPORT Students admitted to the Philosophy-Neuroscience-Psychology (PNP) program are eligible for five years of full financial support at competitive rates, subject to satisfactory academic progress. APPLICATIONS Application for admission to the Graduate School should be made to: Chair, Graduate Admissions Department of Philosophy Washington University Campus Box 1073 One Brookings Drive St. Louis, MO 63130-4899 Washington University encourages and gives full consideration to all applicants for admission and financial aid without regard to race, color, national origin, handicap, sex, or religious creed. Services for students with hearing, visual, orthopedic, learning, or other disabilities are coordinated through the office of the Assistant Dean for Special Services. From ck at rex.cs.tulane.edu Mon Dec 14 13:32:34 1992 From: ck at rex.cs.tulane.edu (Cris Koutsougeras) Date: Mon, 14 Dec 92 12:32:34 CST Subject: CFP: Intl. Conf. on Tools for AI In-Reply-To: <9212112314.AA03707@pollux.usc.edu>; from "Paulina Baligod" at Dec 11, 92 3:14 pm Message-ID: <9212141832.AA05430@isis> ======================================================================= CALL FOR PAPERS 5th IEEE International Conference on Tools with Artificial Intelligence November 8-11, 1993 Boston, Massachusetts This conference encompasses the technical aspects of specifying, designing, implementing and evaluating computer tools which use artificial intelligence techniques as well as tools for artificial intelligence applications. The topics of interest include the following aspects: o Machine learning, Theory and Algorithms o AI and Software Engineering o Intelligent Multimedia Systems o AI Knowledge Base Architecture o AI Algorithms o AI Language Tools o Reasoning Under Uncertainty, Fuzzy Logic o Logic and Intelligent Databases o Expert Systems and Environments o Artificial Neural Networks o Parallel Processing and Hardware Support o AI and Object-Oriented Systems o AI Applications INFORMATION FOR AUTHORS Authors are requested to submit five copies (in English) of their double-spaced typed manuscript (maximum of 25 pages) with an abstract to the program chair by April 15, 1993. The conference language is English and the final papers are restricted to seven IEEE model pages. A submission letter that indicates which of the conference areas is most relevant to your paper and the postal address, electronic mail address, telephone number, and fax number (if available) of the contact author must accompany the paper. Authors will be notified of acceptance by July 15, 1993 and will be given instructions for final preparation of their papers at that time. Outstanding papers will be eligible for publication in the International Journal on Artificial Intelligence Tools. Submit papers and panel proposals by April 15, 1993 to: Jeffrey J.P. Tsai Dept. of EECS (M/C 154) (312)996-9324 (office) P.O. Box 4348 (312)996-3422 (secretary) University of Illinois (312)413-0024 (fax) Chicago, IL 60680 tsai at bert.eecs.uic.edu An internet computer account is maintained to provide periodically updated information regarding the conference.
Send a message to "tai at rex.cs.tulane.edu" to obtain the latest information including: registration forms, advance program, tutorials, hotel info, etc. For more information please contact: Conference Chair Steering Committee Chair John Mylopoulos Nikolaos G. Bourbakis Dept. of Computer Science Dept. of Electrical Engineering University of Toronto SUNY at Binghamton 6 King's College Road Binghamton, NY 13902 Toronto, Ontario Tel: (607)777-2165 Canada M5S 1A4 Tel: (416)978-5180 jm at cs.toronto.ca Program Vice-Chairs Machine Learning Bernard Silver GTE Lab AI and Software Engineering Matthias Jarke Technical University of Aachen Logic and Intelligent Database Clement Yu University of Illinois at Chicago AI Knowledge Base Architectures Robert Reynolds Wayne State University Intelligent Multimedia Systems Forouzan Golshani Arizona State University Artificial Neural Networks Ruediger W. Brause J.W.Goethe University Parallel Processing and Hardware Support Ted Lewis Oregon State University AI Applications Kiyoh Nakamura Fujitsu Limited Expert Systems and Environments Philip Sheu Rutgers University Natural Language Processing Fernando Gomez University of Central Florida AI Algorithms Jun Gu University of Calgary AI and Object-Oriented Systems Mamdouh H. Ibrahim EDS Corporation Reasoning under Uncertainty, Fuzzy Logic John Yen Texas A&M University Registration and Publication Chair C.Koutsougeras Tulane University Publicity Chairs Mark Perlin Carnegie Mellon University A. Delis University of Maryland E. Kounalis University of Nice Mikio Aoyama Fujitsu Limited J.Y. Juang National Taiwan University Local Arrangement Chairs John Vittall, GTE Lab M.Mortazavi, SUNY Binghamton Steering Committee Nikolaos G. Bourbakis SUNY-Binghamton C.V. Ramamoorthy University of California-Berkeley Harry E. Stephanou Rensselaer Polytechnic Institute Wei-Tek Tsai University of Minnesota Benjamin W. Wah University of Illinois-Urbana From lss at compsci.stirling.ac.uk Wed Dec 16 11:13:02 1992 From: lss at compsci.stirling.ac.uk (Dr L S Smith (Staff)) Date: 16 Dec 92 16:13:02 GMT (Wed) Subject: Two new TRs Message-ID: <9212161613.AA08589@uk.ac.stir.cs.tugrik> Two new technical reports are available from the CCCN at the University of Stirling, Scotland. Unfortunately, they are only available by post. To get them, email lss at cs.stir.ac.uk with your postal address. TR CCCN-13: ISSN 0968-0640 COMPUTATIONAL THEORIES OF READING ALOUD: MULTI-LEVEL NEURAL NET APPROACHES W A Phillips and I M Hay Centre for Cognitive and Computational Neuroscience Departments of Psychology and Computing Science Stirling University Stirling FK9 4LA UK December 1992 Abstract Cognitive and neuropsychological studies suggest that there are at least two distinct direct routes from print to sound in addition to the route via semantics. It does not follow that connectionist approaches are thereby weakened. The use of multiple levels of analysis is a general design feature of neural systems, and may apply within phonic and graphic domains. The connectionist net for reading aloud simulated by Seidenberg and McClelland (1989) has elements of this design feature even though their emphasis was upon the capabilities of such systems at any single level of analysis. Our simulations show that modifying their system to make more use of its multi-level potential enhances its performance and explains some of its weaknesses. This suggests possible multi-level connectionist systems.
At the lexical level information about the whole letter string is processed as a whole; at the sub-lexical level smaller parts, such as heads and bodies, are processed separately. We argue that these two levels do not operate in basically different ways, and that connectionist and dual-route approaches are mutually supportive. ___________________________________________________________________________ TR CCCN-14: LEXICALITY AND PRONUNCIATION IN A SIMULATED NEURAL NET W A Phillips, I M Hay and L S Smith Centre for Cognitive and Computational Neuroscience Departments of Psychology and Computing Science University of Stirling Stirling FK9 4LA UK December 1992 Abstract Self-supervised compressive neural nets can perform non-linear multi-level latent structure analysis. They therefore have promise for cognitive theory. We study their use in the Seidenberg and McClelland (1989) model of reading. Analysis shows that self-supervised compression in their model can make only a limited contribution to lexical decision, and simulation shows that it interferes with the associative mapping into phonology. Self-supervised compression is therefore put to no good use in their model. This does not weaken the arguments for self-supervised compression, however, and we suggest possible beneficial uses that merit further study. --Leslie Smith, Department of Computing Science/CCCN University of Stirling, Stirling FK9 4LA Scotland. --lss at cs.stir.ac.uk From bnns93 at computer-science.birmingham.ac.uk Wed Dec 16 14:09:22 1992 From: bnns93 at computer-science.birmingham.ac.uk (British Neural Network Society) Date: Wed, 16 Dec 92 19:09:22 GMT Subject: No subject Message-ID: <20628.9212161909@fat-controller.cs.bham.ac.uk> British Neural Network Society Symposium on Recent Advances in Neural Networks CALL FOR PARTICIPATION ====================== January 29th 1993 Lucas Institute, University of Birmingham, Edgbaston, Birmingham, U.K. Start 9:30 Cost: 55 pounds (30 pounds full-time student) A one-day symposium that looks at recent advances in neural networks, with submissions received under the following headings: - Theory & Algorithms Time series, learning theory, fast algorithms. - Applications Finance, image processing, medical, control. - Implementations Software, hardware, optoelectronics. - Biological Networks Perception, motor control, representation. The proceedings will be available after the symposium: participants will have the opportunity to purchase them at a reduced rate. Please note that places are limited to 80, and so an early reply is advised. Payment should be made to BNNS'93. Credit cards are not accepted. Please fill in the form below and return it to: BNNS'93 Registration School of Computer Science University of Birmingham Edgbaston Birmingham B15 2TT UK. ------------------------------------------------------------------------------- Please register me for the BNNS'93 Symposium "Recent Advances in Neural Networks", January 29th 1993. Name:.......................................................................... Address:....................................................................... ....................................................................... ....................................................................... Phone: ............... Fax: ................ email: .......................... Amount: ............... (55 pounds, 30 pounds student, payable to BNNS'93) Cheque number: ......................... 
From rosen at ringer.cs.utsa.edu Tue Dec 15 16:10:38 1992 From: rosen at ringer.cs.utsa.edu (Bruce Rosen) Date: Tue, 15 Dec 92 15:10:38 CST Subject: neuroprose paper: Function Optimization based on Advanced Simulated Annealing Message-ID: <9212152110.AA04402@ringer.cs.utsa.edu.sunset> A postscript version of my short paper "Function Optimization based on Advanced Simulated Annealing" has been placed in the neuroprose archive. The abstract is given below, followed by retrieval instructions. Bruce Rosen email: rosen at ringer.cs.utsa.edu ------------------------------------------------------- Function Optimization based on Advanced Simulated Annealing Bruce Rosen Division of Mathematics, Computer Science and Statistics The University of Texas at San Antonio, San Antonio, Texas, 78249 Abstract Solutions to numerical problems often involve finding (or fitting) a set of parameters to optimize a function. A novel extension of the Simulated Annealing method, Very Fast Simulated Reannealing (VFSR) [1, 2], has been proposed for optimizing difficult functions. VFSR has an exponentially decreasing temperature reduction schedule which is faster than both Boltzmann Annealing and Fast (Cauchy) Annealing. VFSR is shown to be superior to these two methods on optimizing a difficult multimodal function. 1. L. Ingber, "Very Fast Simulated re-annealing," Mathl. Comput. Modeling, vol. 12, no. 8, pp. 967-973, 1989. 2. L. Ingber and B. E. Rosen, "Genetic Algorithms and Very Fast Simulated Reannealing: A Comparison," Mathematical and Computer Modelling, vol. 16, no. 11, pp. 87-100, 1992. ----------------------------------------------------------------------------- The paper is rosen.advsim.ps.Z in the neuroprose archives. The INDEX sentence is "Preprint: Comparative performances of Advanced Simulated Annealing Methods on optimizing a nowhere-differentiable function" To retrieve this file from the neuroprose archives: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:becker): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> get rosen.advsim.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for rosen.advsim.ps.Z .. ftp> quit 221 Goodbye. unix> uncompress rosen.advsim.ps.Z unix> lpr rosen.advsim.ps From marwan at ee.su.oz.au Wed Dec 16 18:19:55 1992 From: marwan at ee.su.oz.au (Marwan Jabri) Date: Thu, 17 Dec 1992 10:19:55 +1100 Subject: Job Opportunity Message-ID: <9212162319.AA17797@brutus.ee.su.OZ.AU> The University of Sydney Department of Electrical Engineering Systems Engineering and Design Automation Laboratory Girling Watson Research Fellowship Reference No. 51/12 Applications are invited for a Girling Watson Research Fellowship at Sydney University Electrical Engineering. The applicant should have strong research and development experience, preferably with a background in one or more of the following areas: machine intelligence and connectionist architectures, microelectronics, pattern recognition and classification. The Fellow will work with the Systems Engineering and Design Automation Laboratory (SEDAL), one of the largest laboratories at Sydney University Electrical Engineering. The Fellow will join a group of 18 people (8 staff and 10 postgraduate students). SEDAL currently has projects on pattern recognition for implantable devices, VLSI implementation of connectionist architectures, time series prediction, knowledge integration and continuous learning, and VLSI computer aided design.
The Research Fellow position is aimed at: o contributing to the research program o helping with the supervision of postgraduate students o supporting some management aspects of SEDAL o providing occasional teaching support Applicants should have either a PhD or equivalent industry research and development experience. The appointment is available for a period of three years, subject to satisfactory progress. Salary is in the Research Fellow range: A$39,463 to A$48,688. Applications quoting the reference number 51/12 can be sent to: The Staff Office The University of Sydney NSW 2006 AUSTRALIA For further information contact Dr. M. Jabri, Tel: (+61-2) 692-2240, Fax: (+61-2) 660-1228, Email: marwan at sedal.su.oz.au From paul at dendrite.cs.colorado.edu Thu Dec 17 12:41:50 1992 From: paul at dendrite.cs.colorado.edu (Paul Smolensky) Date: Thu, 17 Dec 1992 10:41:50 -0700 Subject: Re-revised deadline: Cognitive Science Conference Message-ID: <199212171741.AA05296@axon.cs.colorado.edu> We've stretched the deadline as far as we can, including another weekend (doubling the time some of us can spend on writing the paper! ... that's off the record, of course) ... here's the Call for Papers again, with the new deadline, Feb 2: Fifteenth Annual Meeting of the COGNITIVE SCIENCE SOCIETY A MULTIDISCIPLINARY CONFERENCE ON COGNITION June 18 - 21, 1993 University of Colorado at Boulder Call for Participation with Revised Deadlines This year's conference aims at broad coverage of the many and diverse methodologies and topics that comprise Cognitive Science. In addition to computer modeling, the meeting will feature research in computational, theoretical, and psycho-linguistics; cognitive neuroscience; conceptual change and education; artificial intelligence; philosophical foundations; human-computer interaction and a number of other approaches to the study of cognition. A plenary session honoring the memory of Allen Newell is scheduled. Plenary addresses will be given by: Alan Baddeley Andy DiSessa Paul Smolensky Sandra Thompson Bonnie Webber The conference will also highlight invited research papers: Conceptual Change: (Organizers: Nancy Songer & Walter Kintsch) Frank Keil Gaea Leinhardt Ashwin Ram Jeremy Roschelle Language Learning: (Organizers: Paul Smolensky & Walter Kintsch) Michael Brent Robert Frank Brian MacWhinney Situated Action: (Organizer: James Martin) Leslie Kaelbling Pattie Maes Bonnie Nardi Alonso Vera Visual Perception & Cognitive Neuroscience: (Organizer: Michael Mozer) Marlene Behrmann Robert Jacobs Hal Pashler David Plaut PAPER SUBMISSIONS With the goal of assembling a high-quality program representative of the diversity of methods and topics in cognitive science, we invite papers presenting interdisciplinary research addressing any cognitive domain and using any of the diverse methodologies of the field. Papers are specifically solicited which address the topics of the invited research sessions listed above. Authors should submit five (5) copies of the paper in hard copy form to: Cognitive Science 1993 Submissions Dr. Martha Polson Institute of Cognitive Science Campus Box 344 University of Colorado Boulder, CO 80309-0344 DAVID MARR MEMORIAL PRIZES FOR EXCELLENT STUDENT PAPERS Papers with a student first author will be eligible to compete for a David Marr Memorial Prize for excellence in research and presentation. The David Marr Prizes are accompanied by a $300.00 honorarium, and are funded by an anonymous donor.
LENGTH Papers must be a maximum of six (6) pages long (excluding only the cover page), must have at least 1 inch margins on all sides, and must use no smaller than 10 point type. Camera-ready versions will be required only after authors are notified of acceptance. COVER PAGE Each copy of the paper must include a cover page, separate from the body of the paper, which includes, in order: 1. Title of paper. 2. Full names, postal addresses, phone numbers and e-mail addresses (if possible) of all authors. 3. An abstract of no more than 200 words. 4. The area(s) in which the paper should be reviewed. When possible, please list, in decreasing order of relevance, 1-3 of the following keywords: action/motor control, acquisition/learning, cognitive architecture, cognitive neuroscience, connectionism, conceptual change/education, decision making, foundations, human-computer interaction, language (indicate subarea), memory, reasoning and problem solving, perception, situated action/cognition, skill/expertise. 5. Preference for presentation format: Talk or poster, talk only, poster only. Poster sessions will be highlighted in this year's conference. The proceedings will not distinguish between papers presented orally and those presented as posters. 6. A note stating if the paper is eligible to compete for a Marr Prize. For jointly authored papers, include a note from the student author's advisor explaining the student's contribution to the research. DEADLINE ***** PAPERS ARE DUE FEBRUARY 2, 1993. ****** SYMPOSIA Proposals for symposia are also invited. Proposals should indicate: (1) A brief description of the topic; (2) How the symposium would address a broad cognitive science audience; (3) Names of symposium organizer(s); (4) List of potential speakers, their topics, and some estimate of their likelihood of participation; (5) Proposed symposium format (designed to last 90 minutes). Symposium proposals should be sent as soon as possible, but no later than February 2, 1993. FOR MORE INFORMATION CONTACT Dr. Martha Polson Institute of Cognitive Science Campus Box 344 University of Colorado Boulder, CO 80309-0344 E-mail: Cogsci at clipr.colorado.edu Telephone: (303) 492-7638 FAX: (303) 492-2967 From sam at sarnoff.com Wed Dec 16 15:57:30 1992 From: sam at sarnoff.com (Scott A. Markel x2683) Date: Wed, 16 Dec 92 15:57:30 EST Subject: NIPS workshop summary Message-ID: <9212162057.AA03573@sarnoff.sarnoff.com> NIPS 92 Workshop Summary ======================== Computational Issues in Neural Network Training =============================================== Main focus: Optimization algorithms used in training neural networks ---------- Organizers: Scott Markel and Roger Crane ---------- This was a one day workshop exploring the use of optimization algorithms, such as back-propagation, conjugate gradient, and sequential quadratic programming, in neural network training. Approximately 20-25 people participated in the workshop. About two thirds of the participants used some flavor of back propagation as their algorithm of choice, with the other third using conjugate gradient, sequential quadratic programming, or something else. I would guess that participants were split about 60-40 between industry and the academic community.
The workshop consisted of lots of discussion and the following presentations: Introduction ------------ Scott Markel (David Sarnoff Research Center - smarkel at sarnoff.com) I opened by saying that Roger and I are mathematicians and started looking at neural network training problems when neural net researchers were experiencing difficulties with back-propagation. We think there are some wonderfully advanced and robust implementations of classical algorithms developed by the mathematical optimization community that are not being exploited by the neural network community. This is due largely to a lack of interaction between the two communities. This workshop was set up to address that issue. In July we organized a similar workshop for applied mathematicians at SIAM '92 in Los Angeles. Optimization Overview --------------------- Roger Crane (David Sarnoff Research Center - rcrane at sarnoff.com) Roger gave a very brief, but broad, historical overview of optimization algorithm research and development in the mathematical community. He showed a time line starting with gradient descent in the 1950's and progressing to sequential quadratic programming (SQP) in the 1970's and 1980's. SQP is the current state of the art optimization algorithm for constrained optimization. It's a second order method that solves a sequence of quadratic approximation problems. SQP is quite frugal with function evaluations and handles both linear and nonlinear constraints. Roger stressed the robustness of algorithms found in commercial packages (e.g. NAG library) and that reinventing the wheel was usually not a good thing to do since many subtleties would be missed. A good reference for this material is Practical Optimization Gill, P. E., Murray, W., and Wright, M. H. Academic Press: London and New York 1981 Roger's overview generated a lot of discussion. Most of it centered around the fact that second order methods involve using the Hessian, or an approximation to it, and that this is impractical for large problems (> 500-1000 parameters). Participants also commented that the mathematical optimization community has not yet fully realized this and that stochastic optimization techniques are needed for these large problems. All classical methods are inherently deterministic and work only for "batch" training. SQP on a Test Problem --------------------- Scott Markel (David Sarnoff Research Center - smarkel at sarnoff.com) I followed Roger's presentation with a short set of slides showing actual convergence of a neural network training problem where SQP was the training algorithm. Most of the workshop participants had not seen this kind of convergence before. Yann Le Cun noted that with such sharp convergence generalization would probably be pretty bad. I noted that sharp convergence was necessary if one was trying to do something like count local minima, where generalization is not an issue. In Defense of Gradient Descent ------------------------------ Barak Pearlmutter (Oregon Graduate Institute - bap at merlot.cse.ogi.edu) By this point back propagation and its many flavors had been well defended from the audience. Barak's presentation captured the main points in a clarifying manner. He gave examples of real application neural networks with thousands, millions, and billions of connections. This underscored the need for stochastic optimization techniques. Barak also made some general remarks about the characteristics of error surfaces.
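To make the batch-versus-stochastic contrast above concrete, here is a minimal sketch in Python; the toy least-squares problem, the learning rates, and all names are illustrative assumptions, not anything presented at the workshop:

# Minimal sketch (illustrative, not from the workshop): full-batch versus
# stochastic gradient steps when fitting one weight w to toy data
# y = 3x + noise by minimizing mean squared error.
import random

random.seed(0)
data = [(x, 3.0 * x + random.gauss(0.0, 0.1))
        for x in [random.uniform(-1.0, 1.0) for _ in range(200)]]

def batch_step(w, lr=0.5):
    # Deterministic "batch" step: touches all N examples each iteration,
    # as the classical deterministic methods discussed above require.
    grad = sum(2.0 * (w * x - y) * x for x, y in data) / len(data)
    return w - lr * grad

def stochastic_step(w, lr=0.1):
    # Stochastic step: one randomly drawn example gives a noisy gradient
    # estimate whose cost is independent of the training set size.
    x, y = random.choice(data)
    return w - lr * 2.0 * (w * x - y) * x

w_batch, w_sgd = 0.0, 0.0
for _ in range(100):
    w_batch = batch_step(w_batch)
for _ in range(2000):
    w_sgd = stochastic_step(w_sgd)
print("batch estimate:     ", round(w_batch, 3))
print("stochastic estimate:", round(w_sgd, 3))

Both iterations hover around the same solution; the point echoed throughout the morning session is that the stochastic step's cost does not grow with the training set, whereas a second order method would additionally maintain curvature information of size roughly W x W for W weights, which is exactly the > 500-1000 parameter barrier mentioned above.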
Some earlier work by Barak on gradient descent and second order momentum can be found in the NIPS-4 proceedings (p. 887). A strong plea was made by Barak, and echoed by the other participants, for fair comparisons between training methods. Fair comparisons are rare, but much needed. Very Fast Simulated Reannealing ------------------------------- Bruce Rosen (University of Texas at San Antonio - rosen at ringer.cs.utsa.edu) This presentation focused on a new optimization technique called Very Fast Simulated Reannealing (VFSR), which is faster than Boltzmann Annealing (BA) and Fast (Cauchy) Annealing (FA). Unlike back propagation, which Bruce considers mostly a method for pattern association/classification/generalization, simulated annealing methods are perhaps best used for functional optimization. He presented some results on this work, showing a comparison of Very Fast Simulated Reannealing to GA for function optimization and some recent work on function optimization with BA, FA, and VFSR. Bruce's (and Lester Ingber's) code is available from netlib - Interactive: ftp research.att.com [login as netlib, your_login_name as password] cd opt binary get vfsr.Z Email: mail netlib at research.att.com send vfsr from opt Contact Bruce (rosen at ringer.cs.utsa.edu) or Lester (ingber at alumni.cco.caltech.edu) for further information. General Comments ---------------- Yann Le Cun (AT&T Bell Labs - yann at neural.att.com) I asked Yann to summarize some of the comments he and others had been making during the morning session. Even though we didn't give him much time to prepare, he nicely outlined the main points. These included - large problems require stochastic methods - the mathematical community hasn't yet addressed the needs of the neural network community - neural network researchers are using second order information in a variety of ways, but are definitely exploring uncharted territory - symmetric sigmoids are necessary; [0,1] sigmoids cause scaling problems (Roger commented that classical methods would accommodate this) Cascade Correlation and Greedy Learning --------------------------------------- Scott Fahlman (Carnegie Mellon University - scott.fahlman at cs.cmu.edu) Scott's presentation started with a description of QuickProp. This algorithm was developed in an attempt to address the slowness of back propagation. QuickProp uses second order information ala modified Newton method. This was yet another example of neural network researchers seeing no other alternative but to do their own algorithm development. Scott then described Cascade Correlation. CasCor and CasCor2 are greedy learning algorithms. They build the network, putting each new node in its own layer, in response to the remaining error. The newest node is trained to deal with the largest remaining error component. Papers on QuickProp, CasCor, and Recurrent CasCor can be found in the neuroprose archive (see fahlman.quickprop-tr.ps.Z, fahlman.cascor-tr.ps.Z, and fahlman.rcc.ps.Z). Comments on Training Issues --------------------------- Gary Kuhn (Siemens Corporate Research - gmk at learning.siemens.com) Gary presented 1. a procedure for training with stochastic conjugate gradient. (G. Kuhn and N. Herzberg, Some Variations on Training of Recurrent Networks, in R. Mammone & Y. Zeevi, eds, Neural Networks: Theory and Applications, New York, Academic Press, 1991, p 233-244.) 2. 
a sensitivity analysis that led to a change in the architecture of a speech recognizer and to further, joint optimization of the classifier and its input features. (G. Kuhn, Joint Optimization of Classifier and Feature Space in Speech Recognition, IJCNN '92, IV:709-714.) He related Scott Fahlman's interest in sensitivity to Yann Le Cun's emphasis on trainability, by showing how a sensitivity analysis led to improved trainability. Active Exemplar Selection ------------------------- Mark Plutowski (University of California - San Diego - pluto at cs.ucsd.edu) Mark gave a quick recap of his NIPS poster on choosing a concise subset for training. Fitting these exemplars results in the entire set being fit as well as desired. This method has only been used on noise-free problems, but looks promising. Scott Fahlman expressed the opinion that exploiting the training data was the remaining frontier in neural network research. Final Summary ------------- Incremental, stochastic methods are required for training large networks. Robust, readily available implementations of classical algorithms can be used for training modest sized networks and are especially effective research tools for investigating mathematical issues, e.g. estimating the number of local minima. From alpaydin%TRBOUN.BITNET at BITNET.CC.CMU.EDU Fri Dec 18 14:03:29 1992 From: alpaydin%TRBOUN.BITNET at BITNET.CC.CMU.EDU (alpaydin%TRBOUN.BITNET@BITNET.CC.CMU.EDU) Date: 18 Dec 1992 14:03:29 -0500 (EST) Subject: CFP : 2nd Turkish Conf on AI and ANN Message-ID: <00965465.A8462DC0.14395@trboun> CALL FOR PAPERS 2nd Turkish Symposium on Artificial Intelligence and Artificial Neural Networks Bogazici University Istanbul, Turkey June 24-25, 1993 Supported by: Bogazici University, Istanbul; Bilkent University, Ankara; IEEE Computer Society Turkiye Section; Middle East Technical University, Ankara; TUBITAK, The Scientific and Technical Research Council of Turkey. Scope Commonsense Reasoning, Knowledge Representation, Learning, Natural Language Processing, Control and Planning, Expert Systems, Theorem Proving, Intelligent Databases, Signal Processing, Speech Processing, Vision and Image Processing, Pattern Recognition, Robotics, Programming Languages, Simulation Environments, Theoretical Foundations, Hardware Implementations, Industrial Applications, Social, Legal, and Ethical Aspects. Paper submissions Deadline for full papers limited to 6 single-spaced (12 point) A4 pages: March 1, 1993. Author notification: April 1, 1993. Camera ready copies: May 1, 1993. Send submissions (in English or Turkish) to Dr. L. Akin, Department of Computer Engineering, Bogazici University, TR-80815 Istanbul, Turkey. Tel (voice): +90 1 263 15 00 x 1323 (fax): +90 1 265 84 88 E-mail: yz at trboun.bitnet Symposium Chair: Selahattin Kuru, Bogazici Univ. Program Committee: Levent Akin, Bogazici Univ.; Varol Akman, Bilkent Univ.; Ethem Alpaydin (chair), Bogazici Univ.; Isil Bozma, Bogazici Univ.; M. Kemal Ciliz, Bogazici Univ.; Fikret Gurgen, Bogazici Univ.; H. Altay Guvenir, Bilkent Univ.; Ugur Halici, METU; Yorgo Istefanopulos, Bogazici Univ.; Sakir Kocabas, TUBITAK Gebze Res. Center; Selahattin Kuru, Bogazici Univ.; Kemal Oflazer, Bilkent Univ.; A. C. Cem Say, Bogazici Univ.; Nese Yalabik, METU Local Organizing Committee: Levent Akin (chair); Ethem Alpaydin; Hakan Aygun; Sema Oktug; A. C.
Cem Say; Mehmet Yagci From irina at laforia.ibp.fr Thu Dec 17 13:24:02 1992 From: irina at laforia.ibp.fr (irina Tchoumatchenko 46.42.32.00 poste 433) Date: Thu, 17 Dec 92 19:24:02 +0100 Subject: call for papers "AI and Genome" Message-ID: <9212171824.AA13503@laforia.ibp.fr> Please post: ***************** CALL FOR PAPERS ************************ WORKSHOP "ARTIFICIAL INTELLIGENCE and the GENOME" at the International Joint Conference on Artificial Intelligence IJCAI-93 August 29 - September 3, 1993 Chambery, FRANCE There is a great deal of intellectual excitement in molecular biology (MB) right now. There has been an explosion of new knowledge due to the advent of the Human Genome Program. Traditional methods of computational molecular biology can hardly cope with important complexity issues without adopting a heuristic approach. Heuristics make it possible to state molecular biology knowledge explicitly when solving a problem, and to present the obtained solution in biologically-meaningful terms. The computational size of many important biological problems overwhelms even the fastest hardware by many orders of magnitude. The approximate and heuristic methods of Artificial Intelligence have already made significant progress on these difficult problems. Perhaps one reason is that a great deal of biological knowledge is symbolic and complex in its organization. Another reason is the good match between biology and machine learning. The increasing amount of biological data and a significant lack of theoretical understanding suggest the use of generalization techniques to discover "similarities" in data and to develop some pieces of theory. On the other hand, molecular biology is a challenging real-world domain for artificial intelligence research, being neither trivial nor equivalent to solving the general problem of intelligence. This workshop is dedicated to supporting the young AI/MB field of research.
TOPICS OF INTEREST INCLUDE (BUT ARE NOT RESTRICTED TO): ------------------------------------------------------- *** Knowledge-based approaches to molecular biology problem solving; Molecular biology knowledge-representation issues, knowledge-based heuristics to guide molecular biology data processing, explanation of MB data processing results in terms of relevant MB knowledge; *** Data/Knowledge bases for molecular biology; Acquisition of molecular biology knowledge, building public genomic knowledge bases, a concept of "different view points" in the MB data processing context; *** Generalization techniques applied to molecular biology problem solving; Machine learning techniques as well as neural network techniques, supervised learning versus unsupervised learning, scaling properties of different generalization techniques applied to MB problems; *** Biological sequence analysis; AI-based methods for sequence alignment, motif finding, etc., knowledge-guided alignment, comparison of AI-based methods for sequence analysis with the methods of computational biology; *** Prediction of DNA protein coding regions and regulatory sites using AI-methods; Machine learning techniques, neural networks, grammar-based approaches, etc.; *** Predicting protein folding using AI-methods; Predicting secondary, super-secondary, tertiary protein structure, construction of protein folding prediction theories from examples; *** Predicting gene/protein functions using AI-methods; Complexity of the function prediction problem, understanding the structure/function relationship in biologically-meaningful examples, structure/functions patterns, attempts toward description of functional space; *** Similarity and homology; Similarity measures for gene/protein class construction, knowledge-based similarity measures, similarity versus homology, inferring evolutionary trees; *** Other promising approaches to classify and predict properties of MB sequences; Information-theoretic approach, standard non-parametric statistical analysis, Hidden Markov models and statistical physics methods; INVITED TALKS: -------------- L. Hunter, NLM, AI problems in finding genetic sequence motifs J. Shavlik, U. of Wisconsin, Learning important relations in protein structures B. Buchanan, U. of Pittsburgh, to be determined R. Lathrop, MIT, to be determined Y. Kodratoff, U. Paris-Sud, to be determined J.-G. Ganascia, U. Paris-VI, Application of machine learning techniques to the biological investigation viewed as a constructive process SCHEDULE ---------- Papers received: March 1, 1993 Acceptance notification: April 1, 1993 Final papers: June 1, 1993 WORKSHOP FORMAT: ------------------ The format of the workshop will be paper sessions with discussion at the end of each session, and a concluding panel. Prospective participants should submit papers of five to ten pages in length. Four paper copies are required. Those who would like to attend without a presentation should send a one to two-page description of their relevant research interests. Attendance at the workshop will be limited to 30 or 40 people. Each workshop attendee MUST HAVE REGISTERED FOR THE MAIN CONFERENCE. An additional (low) 300 FF fee for the workshop attendance (about $60) will be required. One student attending the workshop normally (i.e., registered for the main conference) and taking charge of notes during the entire workshop could be exempted from the additional 300 FF fee. Volunteers are invited. ORGANIZING COMMITTEE -------------------- Buchanan, B. (Univ.
of Pittsburgh - USA) Ganascia, J.-G., chairperson (Univ. of Paris-VI - France) Hunter, L. (National Library of Medicine - USA) Lathrop, R. (MIT - USA) Kodratoff, Y. (Univ. of Paris-Sud - France) Shavlik, J. W. (Univ. of Wisconsin - USA) PLEASE SEND SUBMISSIONS TO: --------------------------- Ganascia, J.-G. LAFORIA-CNRS University Paris-VI 4 Place Jussieu 75252 PARIS Cedex 05 France Phone: (33-1)-44-27-47-23 Fax: (33-1)-44-27-70-00 E-mail: ganascia at laforia.ibp.fr From wray at ptolemy.arc.nasa.gov Thu Dec 17 20:40:49 1992 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Thu, 17 Dec 92 17:40:49 PST Subject: Computational Issues in Neural Network Training In-Reply-To: "Scott A. Markel x2683"'s message of Wed, 16 Dec 92 15:57:30 EST <9212162057.AA03573@sarnoff.sarnoff.com> Message-ID: <9212180140.AA04099@ptolemy.arc.nasa.gov> First, thanks to Scott Markel for producing this summary. It's rapid dissemination of important information like this to non-participants that lets the field progress as a whole!!! > SQP on a Test Problem > --------------------- > Scott Markel (David Sarnoff Research Center - smarkel at sarnoff.com) > > I followed Roger's presentation with a short set of slides showing actual > convergence of a neural network training problem where SQP was the training > algorithm. Most of the workshop participants had not seen this kind of > convergence before. Yann Le Cun noted that with such sharp convergence > generalization would probably be pretty bad. I'd say not necessarily. If you use a good regularization method then sharp convergence shouldn't harm generalization at all. Of course, this begs the question: what is a good regularizing/complexity/prior/MDL term? (choose your own term depending on which regularizing fashion you follow.) Wray Buntine NASA Ames Research Center phone: (415) 604 3389 Mail Stop 269-2 fax: (415) 604 3594 Moffett Field, CA, 94035 email: wray at kronos.arc.nasa.gov From henrik at robots.ox.ac.uk Mon Dec 21 14:03:08 1992 From: henrik at robots.ox.ac.uk (henrik@robots.ox.ac.uk) Date: Mon, 21 Dec 92 19:03:08 GMT Subject: new paper: A massively parallel neurocomputer Message-ID: <9212211903.AA18874@ulysses.robots.ox.ac.uk> Here is another one ... I have just placed this preprint (the paper is submitted to MicroNeuro 93) in the neuroprose archive, file klagges.massively-parallel.ps.Z. Cheers, Henrik (henrik at robots.ox.ac.uk) Abstract(( We have developed a SIMD massively parallel digital neural network simulator --- called GeNet for Generic Network --- which can evaluate large networks with a variety of learning algorithms at high speed. A medium-size installation with 256 physical nodes and 1 Gbyte of memory can sustain e.g. 1.7 giga 16bit-connection crossings/sec at network sizes of 2 layers with 64K neurons each, a fan-in of 1K and a random-wired topology. The neural network core operations are supported by optimized and balanced computation and communication hardware that sustains heavily pipelined processing. In addition to an array of processing units with one global scalar (16 bit) bus, the system is equipped with a ring-shifter (32 bit) and a parallel (256\times16 bit) vector bus that feeds a tree-shaped global vector accumulator. This eases backward communication and the calculation of scalar products of distributed vectors. The VLIW-architecture is highly scalable. A prototype has been cost-effectively implemented without custom VLSI chips.
)) FTP instructions: $ ftp archive.cis.ohio-state.edu ftp> user ftp ftp> password ftp> binary ftp> cd pub/neuroprose ftp> get Getps ftp> bye $ chmod +x Getps $ Getps klagges.massively-parallel.ps.Z $ uncompress kl*.ps.Z $ lpr -Plp kl*.ps (or whatever cmd you use for your postscript printer) ========= From henrik at robots.ox.ac.uk Mon Dec 21 14:02:42 1992 From: henrik at robots.ox.ac.uk (henrik@robots.ox.ac.uk) Date: Mon, 21 Dec 92 19:02:42 GMT Subject: new paper: Random wired cascade-correlation Message-ID: <9212211902.AA18870@ulysses.robots.ox.ac.uk> I have just placed this preprint (the paper is submitted to MicroNeuro 93) in the neuroprose archive, file klagges.rndwired-cascor.ps.Z. There is also an accompanying picture of a sample network topology created by LFCC, called klagges.rndwired-topology.GIF (it is a gif file). Cheers, Henrik (henrik at robots.ox.ac.uk) Abstract(( The success of new learning algorithms like Cascade Correlation (CC) lies partly in topology construction strategies which are difficult to map onto SIMD-parallel neurocomputers. A CC variation that limits the connection fan-in and random-wires the neurons was invented to ease the SIMD implementation. Surprisingly, the method produced superior and very compact networks with improved generalization. In particular, solutions of the 2-spirals problem improved from 133 +- 27 total weights for standard CC down to 60 +- 10 with 75% fewer connection crossings. Performance increased with candidate pool size and was correlated with a reduction of artefacts in the receptive field visualizations. We argue that, for general neural network learning, construction algorithms are as important as weight adaption rules. This requires sparse matrix support from neurocomputer hardware. )) FTP instructions: $ ftp archive.cis.ohio-state.edu ftp> user ftp ftp> password ftp> binary ftp> cd pub/neuroprose ftp> get Getps ftp> bye $ chmod +x Getps $ Getps klagges.rndwired-cascor.ps.Z $ Getps klagges.rndwired-topology.GIF $ uncompress kl*.ps.Z $ lpr -Plp kl*.ps (or whatever cmd you use for your postscript printer) $ xview kl*.GIF (or whatever cmd you use for viewing gifs) ========= From wahba at stat.wisc.edu Mon Dec 21 21:12:22 1992 From: wahba at stat.wisc.edu (Grace Wahba) Date: Mon, 21 Dec 92 20:12:22 -0600 Subject: choose your own randomized regularizer Message-ID: <9212220212.AA26683@hera.stat.wisc.edu> ............................. Re: Regularizing fashions...choose your own: as per Wray Buntine's remarks about Scott Markel's discussion of SQP Also Re: `Large problems require stochastic methods' -Yann Le Cun Variants of the fast randomized version of Generalized Cross Validation might be implementable with very large optimization problems...SQP? see AUTHOR = {D. Girard}, TITLE = {A Fast `{M}onte-{C}arlo Cross-Validation' Procedure for Large Least Squares Problems with Noisy Data}, JOURNAL = {Numer. Math.}, YEAR = {1989}, VOLUME = {56}, PAGES = {1-23} AUTHOR = {D. Girard}, TITLE = {Asymptotic optimality of the fast randomized versions of {GCV} and ${C}_{L}$ in ridge regression and regularization }, JOURNAL = {Ann. Statist.}, YEAR = {1991}, VOLUME = {19}, PAGES = {1950-1963} From rubio at hal.ugr.es Mon Dec 21 14:32:09 1992 From: rubio at hal.ugr.es (rubio@hal.ugr.es) Date: Mon, 21 Dec 1992 19:32:09 UTC Subject: NATO ASI Call for Papers Message-ID: <9212211932.AA06441@hal.ugr.es> First Announcement: NATO Advanced Study Institute NEW ADVANCES and TRENDS in SPEECH RECOGNITION and CODING 28 June-10 July 1993.
Bubion (Granada), SPAIN. Institute Director: Dr. Antonio Rubio-Ayuso, Dept. de Electronica. Facultad de Ciencias. Universidad de Granada. E-18071 GRANADA, SPAIN. tel. 34-58-243193 FAX. 34-58-243230 e-mail ASI at hal.ugr.es Organizing Committee: Dr. Jean-Paul Haton, CRIN / INRIA, France. Dr. Pietro Laface, Politecnico di Torino, Italy. Dr. Renato De Mori, McGill University, Canada. OBJECTIVES, AGENDA and PARTICIPANTS A series of highly successful ASIs on Speech Science (the last ones in Bonas, France; Bad Windsheim, Germany; Cetraro, Italy) created a fruitful and stimulating environment for learning about scientific methods, exchanging results, and discussing new ideas. The goal of this ASI is to bring together the most important experts on Speech Recognition and Coding to discuss and disseminate their most recent findings, in order to spread them among the European and American Centers of Excellence, as well as among a good selection of qualified students. A two-week programme is planned with invited tutorial lectures, and contributed papers by selected students (maximum 65). The proceedings of the ASI will be published by Springer-Verlag. TOPICS The Institute will focus on the new methodologies and techniques that have been recently developed in the speech communication area. Main topics of interest will be: -Low Delay and Wideband Speech Coding. -Very Low bit Rate and Half-Rate Speech Coding. -Speech coding over noisy channels. -Continuous Speech and Isolated word Recognition. -Neural Networks for Speech Recognition and Coding. -Language Modeling. -Speech Analysis, Synthesis and data bases. Any other related topic will also be considered. INVITED LECTURERS A. Gersho (UCSB, USA): "Speech coding." B. H. Juang (AT&T, USA): "Statistical and discriminative methods for speech recognition - from design objectives to implementation." J. Bridle (RSRU, UK): "Neural networks." G. Chollet (Paris Telecom): "Evaluation of ASR systems, algorithms and databases." E. Vidal (UPV, Spain): "Syntactic learning techniques in language modeling and acoustic-phonetic decoding." J. P. Adoul (U. Sherbrooke, Canada): "Lattice and trellis coded quantizations for efficient coding of speech." R. De Mori (McGill Univ, Canada): "Language models based on stochastic grammars and their use in automatic speech recognition." R. Pieraccini (AT&T, USA): "Speech understanding and dialog, a stochastic approach." F. Jelinek (IBM, USA): "New approaches to language modeling for speech recognition." L. Rabiner (AT&T, USA): "Applications of Voice Processing Technology in Telecommunications." N. Farvardin (UMD, USA): "Speech coding over noisy channels." J. P. Haton (CRIN/INRIA, France): "Methods for the automatic recognition of speech in adverse conditions." R. Schwartz (BBN, USA): "Search algorithms of real-time recognition with high accuracy." H. Niemann (Erlangen-Nurnberg Univ., Germany): "Statistical Modeling of segmental and suprasegmental information." I. Trancoso (INESC, Portugal): "An overview of recent advances on CELP." C. H. Lee (AT&T, USA): "Adaptive learning for acoustic and language modeling." P. Laface (Poli. Torino, Italy) H. Ney (Philips, Germany): "Search Strategies for Very Large Vocabulary, Continuous Speech Recognition." A. Waibel (CMU, USA): "JANUS, A speech translation system." ATTENDANCE, COSTS and FUNDING Participation from as many NATO countries as possible is desired.
Additionally, prospective participants from Greece, Portugal and Turkey are especially encouraged to apply. A small number of students from non-NATO countries may be accepted. The estimated cost of hotel accommodation and meals for the two-week duration of the ASI is US$1,000. A limited number of scholarships are available for academic participants from NATO countries. In the case of industrial or commercial participants a US$500 fee will be charged. Participants are responsible for their own health or accident insurance. A deposit of US$200 is required for living expenses. This deposit is non-refundable in the case of late cancellation (after 10 June, 1993). The NATO Institute will be held in the hospitable village of Bubion (Granada), set in Las Alpujarras, a peaceful mountain region with incomparable landscapes. HOW TO REGISTER Each application should include: 1) Full address (including e-mail and FAX). 2) An abstract of the proposed contribution (1-3 pages). 3) Curriculum vitae of the prospective participant. 4) Indication of whether attendance at the ASI is conditional on obtaining a NATO grant. For junior applicants, support letters from senior members of the professional speech community would strengthen the application. This application must be sent to the Institute Director address mentioned above. SCHEDULE Submission of proposals (1-3 pages): To be received by 1 April 1993. Notification of acceptance: To be mailed out on 1 May 1993. Submission of the paper: To be received by 10 June 1993. From john at cs.rhbnc.ac.uk Tue Dec 22 05:14:55 1992 From: john at cs.rhbnc.ac.uk (john@cs.rhbnc.ac.uk) Date: Tue, 22 Dec 92 10:14:55 +0000 Subject: EuroColt call for papers Message-ID: <2085.9212221014@csqx.dcs.rhbnc.ac.uk> THE INSTITUTE OF MATHEMATICS AND ITS APPLICATIONS EURO-COLT '93 CONFERENCE ON COMPUTATIONAL LEARNING THEORY December, 1993 Royal Holloway, University of London ANNOUNCEMENT AND CALL FOR PAPERS The inaugural IMA European conference on Computational Learning Theory will be held 20--22 December at Royal Holloway, University of London. We invite papers in all areas that relate directly to the analysis of learning algorithms and the theory of machine learning, including artificial and biological neural networks, robotics, pattern recognition, inductive inference, information theory and cryptology, decision theory and Bayesian/MDL estimation. As part of our program, we are pleased to announce three invited talks by Les Valiant (Harvard), Lenny Pitt (Illinois) and Wolfgang Maass (Graz). Invitation to Submit a Paper: Authors should submit six copies (preferably two-sided copies) of an extended abstract to be received by 15th May, 1993, to: Miss Pamela Irving, Conference Officer, The Institute of Mathematics and its Applications, 16 Nelson Street, Southend-on-Sea, Essex SS1 1EF. The abstract should consist of a cover page with title, authors' names, (postal and e-mail) addresses, and a 200 word summary and a body of no more than 10 pages. We also solicit proposals for workshop sessions organised by qualified individuals to facilitate in-depth discussion of particular current topics. The workshops would be scheduled for the final day of the conference and would typically last for 3 to 4 hours, including presentation(s) by the organiser(s) of the workshop, with time for additional discussions and contributions (informal short talks). Notification: Authors will be notified of acceptance or rejection by a letter mailed on or before 31st July.
Final camera-ready papers will be due on 22nd September. Members of the Organising Committee: John Shawe-Taylor (Chair: Royal Holloway, University of London, email to eurocolt at cs.rhbnc.ac.uk), Martin Anthony (LSE, University of London), Norman Biggs (LSE, University of London), Mark Jerrum (Edinburgh), Hans-Ulrich Simon (University of Dortmund), Paul Vitanyi (CWI Amsterdam). -------------------------------------------------------------------- To: The Conference Officer, The Institute of Mathematics and its Applications, 16 Nelson Street, Southend-on-Sea, Essex SS1 1EF. Telephone: (0702) 354020. Fax: (0702) 354111 EURO-COLT '93 20th--22nd December, 1993 Royal Holloway, University of London NAME ................................ GRADE (If IMA Member) .......... ADDRESS FOR CORRESPONDENCE ........................................... ..................................................................... TELEPHONE NO ........................ FAX NO ......................... I intend to submit an abstract no later than 15th May, 1993 .......... Please send me an application form when available ........ (Please tick where necessary) From ucganlb at ucl.ac.uk Wed Dec 23 05:08:01 1992 From: ucganlb at ucl.ac.uk (Dr Neil Burgess) Date: Wed, 23 Dec 92 10:08:01 +0000 Subject: 2 papers: Hippocampus & navigation, generalisation of constructive alg. Message-ID: <9212231008.AA17345@link-1.ts.bcc.ac.uk> I have just put two pre-prints in neuroprose (see below for abstracts and ftp instructions). Cheers Neil (n.burgess at ucl.ac.uk) _________________________________________________________________________ USING HIPPOCAMPAL `PLACE CELLS' FOR NAVIGATION, EXPLOITING PHASE CODING Neil Burgess, John O'Keefe and Michael Recce Department of Anatomy, University College London, London WC1E 6BT, England. ABSTRACT A model of the hippocampus as a central element in rat navigation is presented. Simulations show both the behaviour of single cells and the resultant navigation of the rat. These are compared with single unit recordings and behavioural data. The firing of CA1 place cells is simulated as the (artificial) rat moves in an environment. This is the input for a neuronal network whose output, at each theta $(\theta)$ cycle, is the next direction of travel for the rat. Cells are characterised by the number of spikes fired and the time of firing with respect to hippocampal $\theta$ rhythm. `Learning' occurs in `on-off' synapses that are switched on by simultaneous pre- and post-synaptic activity. The simulated rat navigates successfully to goals encountered one or more times during exploration in open fields. One minute of random exploration of a $1m^2$ environment allows navigation to a newly-presented goal from novel starting positions. A limited number of obstacles can be successfully avoided. _________________________________________________________________________ This paper will be published in NIPS 5. To get the postscript file do: unix> ftp cheops.cis.ohio-state.edu Name (cheops.cis.ohio-state.edu:userid): anonymous Password: (use your email address) ftp> cd pub/neuroprose ftp> binary ftp> get burgess.hipnav.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for burgess.hipnav.ps.Z . ftp> quit 221 Goodbye. unix> uncompress burgess.hipnav.ps.Z unix> lpr burgess.hipnav.ps (or whatever you do to print) The uncompressed file is 1.7 Mbytes and may take some time to print.
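As a rough illustration of the `on-off' synapses described in the abstract above, here is a minimal sketch in Python of a one-shot binary Hebbian switch; the layer sizes, firing probabilities, and all names are assumptions for illustration, not the authors' model:

# Minimal sketch (an assumption for illustration, not the authors' model)
# of an `on-off' synapse: a binary weight that is switched on the first
# time its pre- and post-synaptic cells are simultaneously active.
import random

random.seed(1)
N_PRE, N_POST = 8, 4
weights = [[0] * N_PRE for _ in range(N_POST)]  # all synapses start `off'

def update(pre, post):
    # pre, post: 0/1 activity vectors for one theta cycle.
    for j in range(N_POST):
        for i in range(N_PRE):
            if pre[i] and post[j]:
                weights[j][i] = 1  # switched on; never switched off again

for _ in range(5):  # a few cycles of random activity
    pre = [int(random.random() < 0.3) for _ in range(N_PRE)]
    post = [int(random.random() < 0.3) for _ in range(N_POST)]
    update(pre, post)

for row in weights:
    print(row)

The sketch only shows the one-shot switching itself; in the model described above, navigation comes from reading these synapses out against the theta phase code of the place-cell input.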
_________________________________________________________________________

THE GENERALIZATION OF A CONSTRUCTIVE ALGORITHM IN PATTERN CLASSIFICATION PROBLEMS

Neil Burgess, Silvano Di Zenzo, Paolo Ferragina and Mario Notturno Granieri

Department of Anatomy              IBM Rome Scientific Center
University College London          Viale Oceano Pacifico 171
London WC1E 6BT, ENGLAND.          00144 Rome, Italy

ABSTRACT

The use of a constructive algorithm for pattern classification is examined. The algorithm, a `Perceptron Cascade', has been shown to converge to zero errors whilst learning any consistent classification of {\it real-valued} pattern vectors (Burgess, 1992). Limiting network size and producing bounded decision regions are noted to be important for the generalization ability of a network. A scheme is suggested by which a result on generalization (Vapnik, 1992) may enable calculation of the optimal network size. A fast algorithm for principal component analysis (Sirat, 1991) is used to construct `hyper-boxes' around each class of patterns to ensure bounded decision regions. Performance is compared with the Gaussian Maximum Likelihood procedure in three artificial problems simulating real pattern classification applications.

N. Burgess, submitted to International Journal of Neural Systems (1992).
J. A. Sirat, International Journal of Neural Systems, 2, 147-155 (1991).
V. Vapnik, NIPS 4, 838-838, Morgan Kaufmann (1992).
_________________________________________________________________________

This paper will be published in: International Journal of Neural Systems 3 (Supp. 1992); Proceedings of the Neural Networks: from Biology to High Energy Physics Workshop. The postscript file is in burgess.gencon.ps.Z; follow the above instructions to retrieve it (again, page 5 may take some time to print as it contains a 1 Mbyte bitmap).

From barryf at sedal.su.oz.au Mon Dec 28 20:13:51 1992
From: barryf at sedal.su.oz.au (Barry Flower)
Date: Tue, 29 Dec 1992 12:13:51 +1100
Subject: Pre-Print Available in Neuroprose Archive
Message-ID: <9212290113.AA13843@sedal.sedal.su.OZ.AU>

Connectionists. The following preprint is available in the neuroprose archive, and will appear in the NIPS*92 Proceedings.

"Summed Weight Neuron Perturbation: An O(N) Improvement over Weight Perturbation."

Barry Flower and Marwan Jabri
SEDAL
Department of Electrical Engineering
University of Sydney
NSW 2006 Australia

ABSTRACT
~~~~~~~~
The algorithm presented performs gradient descent on the weight space of an Artificial Neural Network (ANN), using a finite difference to approximate the gradient. The method is novel in that it achieves a computational complexity similar to that of Node Perturbation, O(N**3), but does not require access to the activity of hidden or internal neurons. This is possible due to a stochastic relation between perturbations at the weights and the neurons of an ANN. The algorithm is also similar to Weight Perturbation in that it is optimal in terms of hardware requirements when used for the training of VLSI implementations of ANNs.

A sample session for retrieving the preprint follows:

sedal::.mboxd-86} ftp cheops.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive FTP server (Version 6.14 Thu Apr 23 14:41:38 EDT 1992) ready.
Name (cheops.cis.ohio-state.edu:barryf): anonymous
331 Guest login ok, send e-mail address as password.
Password:
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose
250-Please read the file README
250-  it was last modified on Mon Feb 17 15:51:43 1992 - 316 days ago
250-Please read the file README~
250-  it was last modified on Wed Feb 6 16:41:29 1991 - 692 days ago
250 CWD command successful.
ftp> get flower.swnp.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for flower.swnp.ps.Z (43113 bytes).
226 Transfer complete.
local: flower.swnp.ps.Z remote: flower.swnp.ps.Z
43113 bytes received in 16 seconds (2.7 Kbytes/s)
ftp> quit
221 Goodbye.

Uncompress and finally print the postscript file.

sedal::.mboxd-87} uncompress flower.swnp.ps.Z
sedal::.mboxd-88} lpr flower.swnp.ps

Cheers,
-------------------------------------------------------------------
Barry Flower                              Email: barryf at sedal.oz.au
SEDAL, Electrical Engineering,            Tel: (+61-2) 692-3297
Sydney University, NSW 2006, Australia    Fax: (+61-2) 660-1228

From leow%pav.mcc.com at mcc.com Wed Dec 30 16:23:17 1992
From: leow%pav.mcc.com at mcc.com (W. Leow)
Date: Wed, 30 Dec 92 15:23:17 CST
Subject: Abstracts for 3 papers in neuroprose
Message-ID: <9212302123.AA08372@graviton.pav.mcc.com>

The following 3 papers have been placed in the neuroprose archive (sorry, no hard copies available):

-------------------------------------------------------------------
Representing Visual Schemas in Neural Networks for Object Recognition

Wee Kheng Leow and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin, Austin, TX 78712
leow,risto at cs.utexas.edu

Technical Report AI92-190
December 1992

This research focuses on the task of recognizing objects in simple scenes using neural networks. It addresses two general problems in neural network systems: (1) processing large amounts of input with limited resources, and (2) the representation and use of structured knowledge. The first problem arises because no practical neural network can process all the visual input simultaneously and efficiently. The solution is to process a small amount of the input in parallel, and successively focus on other parts of the input. This strategy requires that the system maintains structured knowledge for describing and interpreting successively gathered information. The proposed system, VISOR, consists of two main modules. The Low-Level Visual Module (simulated using procedural programs) extracts featural and positional information from the visual input. The Schema Module (implemented with neural networks) encodes structured knowledge about possible objects, and provides top-down information for the Low-Level Visual Module to focus attention at different parts of the scene. Working cooperatively with the Low-Level Visual Module, it builds a globally consistent interpretation of successively gathered visual information.

-------------------------------------------------------------------
Self-Organization with Lateral Connections

Joseph Sirosh and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin, Austin, TX 78712
sirosh,risto at cs.utexas.edu

Technical Report AI92-191
December 1992

A self-organizing neural network model for the development of afferent and lateral input connections in cortical feature maps is presented. The weight adaptation process is purely activity-dependent, unsupervised, and local. The afferent input weights self-organize into a topological map of the input space. At the same time, the lateral interaction weights develop a smooth ``Mexican hat'' shaped distribution.
Weak lateral connections die off, leaving a pattern of connections that represents the significant long-term correlations of activity on the feature map. The model demonstrates how self-organization can bootstrap itself based on input information only, without global supervision or predetermined lateral interaction. The model can potentially account for experimental observations such as critical periods for self-organization in cortical maps and development of horizontal connections in the primary visual cortex.

-------------------------------------------------------------------
Incremental grid growing: Encoding high-dimensional structure into a two-dimensional feature map

Justine Blackmore and Risto Miikkulainen
Department of Computer Sciences
The University of Texas at Austin, Austin, TX 78712
justine,risto at cs.utexas.edu

Technical Report AI92-192
December 1992

Knowledge of clusters and their relations is important in understanding high-dimensional input data with unknown distribution. Ordinary feature maps with fully connected, fixed grid topology cannot properly reflect the structure of clusters in the input space---there are no cluster boundaries on the map. Incremental feature map algorithms, where nodes and connections are added to or deleted from the map according to the input distribution, can overcome this problem. However, so far such algorithms have been limited to maps that can be drawn in 2-D only in the case of 2-dimensional input space. In the approach proposed in this paper, nodes are added incrementally to a regular, 2-dimensional grid, which is drawable at all times, irrespective of the dimensionality of the input space. The process results in a map that explicitly represents the cluster structure of the high-dimensional input.

-------------------------------------------------------------------
The standard instructions apply: Use getps, or:

Unix> ftp archive.cis.ohio-state.edu
Name: anonymous
Password:
ftp> binary
ftp> cd pub/neuroprose
ftp> get leow.visual-schemas.ps.Z
ftp> get sirosh.lateral.ps.Z
ftp> get blackmore.incremental.ps.Z
ftp> quit
Unix> zcat leow.visual-schemas.ps.Z |lpr
Unix> zcat blackmore.incremental.ps.Z |lpr
Unix> uncompress sirosh.lateral.ps.Z
Unix> lpr -s sirosh.lateral.ps

sirosh.lateral.ps is over 5MB uncompressed (it is only 16 pages, but has huge figures). If your laserwriter does not have that much memory, most likely you will need to lpr -s, or use psselect or psrev or other such utility to print in smaller chunks.

Enjoy!

From JKREIDER at VAXF.COLORADO.EDU Wed Dec 30 22:53:22 1992
From: JKREIDER at VAXF.COLORADO.EDU (Dr. Jan F. Kreider, Director, Energy Center, U. of Colorado, Boulder, CO 80309-0428, USA; Phone 303-492-7603)
Date: 30 Dec 1992 20:53:22 -0700 (MST)
Subject: Building energy predictor Competition - "The Great Energy Shootout"
Message-ID: <01GSXV3IG6MA00CHU0@VAXF.COLORADO.EDU>

The following is the text of the rules for a data analysis competition having to do with hourly building and weather data. We invite those interested to request the data as described below. Andreas Weigend and Mike Mozer are hereby thanked for their advice and help with the rules and conduct of this competition.

"THE GREAT ENERGY PREDICTOR SHOOTOUT" - THE FIRST BUILDING DATA ANALYSIS AND PREDICTION COMPETITION

Concept and Summary
ASHRAE Meeting
Denver, Colorado
June, 1993

Co-chaired by
Jan F. Kreider and Jeff S. Haberl

Active Period: December 1, 1992 - April 30, 1993

INTRODUCTION

A wide range of new techniques is now being applied to the analysis problems involved with predicting the future behavior of HVAC systems and deducing properties of these systems. Similar problems arise in most observational disciplines, including physics, biology, and economics. New tools, such as genetic algorithms, simulated annealing, the use of connectionist models for forecasting and tree-based classifiers, or the extraction of parameters of nonlinear systems with time-delay embedding, promise to provide results that are unobtainable with more traditional techniques. Unfortunately, the realization and evaluation of this promise has been hampered by the difficulty of making rigorous comparisons between competing techniques, particularly ones that come from different disciplines. The prediction of energy usage by HVAC systems is important for purposes of HVAC diagnostics, system control, optimization and energy management. In order to facilitate such comparisons and to foster contact among the relevant disciplines, ASHRAE's TC 4.7 and TC 1.5 have organized a building data analysis and prediction competition in the form of an ASHRAE seminar to be held in Denver in June, 1993. Forecasting or prediction using empirical models will be the goal of the competition. (Neither system characterization, system identification nor simulation code validation [e.g., DOE-2 or BLAST] is the subject of this seminar; they will be addressed in a future session.)

Two carefully chosen sets of energy and environmental data from real buildings will be made available to the contestants. Each contestant will be required to prepare quantitative analyses of these data and submit them to the seminar co-chairs prior to the ASHRAE seminar. Those with the best results will be asked to make a presentation to the seminar. At the close of the competition the performance of the techniques submitted will be compared and published. If there is sufficient interest, a server accessible by modem may be set up to operate as an on-line archive of interesting data sets, programs, and comparisons among algorithms in the future. There will be no monetary prizes. An ASHRAE symposium has been scheduled for the Winter 1994 ASHRAE meeting in New Orleans to explore the results of the competition in formal papers. The competition does not require advance registration; to enter, simply request the data (there is no charge for the data diskette) along with support information and submit your analysis on time. The detailed description of the competition and instructions for acquiring the data and entering the competition are given below.

ACCESSING THE DATA

The data are available on disks (5.25-in size) in ASCII, IBM-PC format. To receive the data, send a self-addressed 9 x 12 in. envelope, with a $2.90 priority mail stamp affixed, to:

Building Energy Predictor Shootout
Joint Center for Energy Management
Campus Box 428
University of Colorado
Boulder, CO 80309-0428

Instructions on submitting a return disk with the analysis of the data will be included in a README file on the data disk. The disk will also include an entry form that each entrant will need to complete and submit along with the results.

FOR MORE INFORMATION

Further questions about the competition should be directed to either of the organizers:
Professor Jan F. Kreider, Director
Joint Center for Energy Management
Campus Box 428
University of Colorado
Boulder, CO 80309-0428
Phone: 303-492-3915
Fax: 303-492-7317
E-mail: JKREIDER at VAXF.COLORADO.EDU

Professor Jeff S. Haberl
Department of Mechanical Engineering
Texas A&M University
College Station, TX 77843-3123
Phone: 409-845-1560
Fax: 409-862-2762
E-mail: JSH4037 at TAMSIGMA (Bitnet)

Detailed instructions and data set descriptions are included in the attached document.

INSTRUCTIONS - "The Great Energy Predictor Shootout"

Contents
I. Philosophy
II. General Information
III. Acquiring and Submitting Data
IV. Data Sets
V. Submittals
VI. Other Matters

1.1 Philosophy

This competition has been organized to help clarify the conflicting claims among many researchers who use and analyze building energy data and to foster contact among these persons and their institutions. The intent is not necessarily only to declare winners but rather to set up a format in which rigorous evaluations of techniques can be made. Because there are natural measures of performance, a rank-ordering will be given. In all cases, however, the goal is to collect and analyze quantitative results in order to understand similarities and differences among the approaches.

1.2 General Information

Overview: This section contains the instructions on how to participate in the competition.

Data: Two distinct data sets are provided for prediction. Contestants will be given these two sets of independent variables with the corresponding values of dependent variables, e.g., energy usage. The accuracy of predictions of the dependent variables from values of independent variables from this data set is one of the criteria for judging this competition. However, a more rigorous test is also planned. Some of the dependent variable values will be withheld from each of the two data sets (this is explained in detail in the next section). "Withheld" means that you will be provided with a set of independent variables for which the corresponding values of dependent variables have been withheld by the organizers. [NOTE: For example, you might be given a training set consisting of weather and occupancy data (the independent variables) along with chilled water use (the dependent variable) for a four-month period. For the fifth month you would only be given the weather and occupancy data but would be asked to predict the chilled water use based on the capability of your method developed with the four months of training data.] The data set from which the dependent variable values have been withheld is hereinafter called the "testing set," whereas the data that include both independent and dependent variable values are called the "training set." Although this nomenclature is common in some numerical approaches and not in others, it will provide an understandable nomenclature for this competition. The independent variable values in the testing set will be used by each participant to make their best predictions of the corresponding dependent variables. The organizers will compare these predictions by each contestant with the true (data) values of the dependent variables that are known only to the organizers. This second aspect of the competition is expected to be of considerable interest to the seminar audience.

Entries: The competition will start on December 1, 1992 and will end on April 30, 1993. Entries received after that date cannot be considered. The format for the entries, described in the following sections and in the entry form supplied on the data diskette, must be followed exactly, or the entry will regretfully have to be rejected.

Results: Following the close of the competition, the results will be analyzed and published. This will be in the form of the ASHRAE seminar (Denver, June, 1993) as described above. The seminar co-chairs will not participate in the competition but will be the sole analysts of the results.
The overall results will be presented at the seminar by the co-chairs, followed by a presentation by each participant on their methodology. The results to be produced by the competitors are in the form of predictions of the dependent variables for the two testing sets of independent variables. These predictions will be submitted to the organizers, who will evaluate them using the same methods for all submissions. Competitors will also conduct a self-analysis of the accuracy of their prediction approach when applied to the training set. The following criteria will be used by the organizers for assessing the respective accuracies of the entries when analyzing the testing set:

Coefficient of Variation, CV:

  $CV = \sqrt{ \sum_{i=1}^{n} (y_{pred,i} - y_{data,i})^2 / n } \, / \, \bar{y}_{data}$

Mean Bias Error, MBE:

  $MBE = \left( \sum_{i=1}^{n} (y_{pred,i} - y_{data,i}) / n \right) / \, \bar{y}_{data}$

where $y_{data,i}$ is a data value of the dependent variable corresponding to a particular set of values of the independent variables; $y_{pred,i}$ is a predicted dependent variable value for the same set of independent variables above (these values are predictions by the entrants); $\bar{y}_{data}$ is the mean value of the dependent variable in the testing data set; and $n$ is the number of sets of data in the testing set.

Other statistics such as the correlation coefficient and maximum error may also be reported in a brief written summary assembled by the seminar co-chairs. Time permitting, graphical comparisons will also be prepared by the organizers. During their seminar presentations, entrants may use any presentation of scientific value that they wish on the performance of their methods on the training set.

Prizes: There are no prizes in the competition (to prevent unnecessary disagreements).

Secrecy: Because this is an open scientific study, entries that provide results without describing the methods used are not acceptable. On the other hand, we recognize that a great deal of labor might have been applied to develop commercially useful applications, and full details of those need not be revealed. Sufficient information has to be supplied so that the results can in principle be independently verified. It is not necessary to submit practical implementation details or the computer code. However, we encourage sharing the software at the end of the competition. At a minimum, each participant should supply a flow chart of their methodology and the data plots described below.

Future Plans: If interest warrants, it is planned that a computer server will operate after the close of the competition as a central repository of interesting data, analysis programs, and the results of other comparative studies.
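For concreteness, the two accuracy criteria defined above can be computed in a few lines. The following Python sketch mirrors those definitions; the function and array names are assumptions of this illustration, and it reports plain ratios (whether the organizers express the statistics as percentages is not specified here).

import numpy as np

def cv(y_pred, y_data):
    # Coefficient of Variation: root-mean-square prediction error,
    # normalized by the mean of the measured (testing) data.
    rmse = np.sqrt(np.mean((y_pred - y_data) ** 2))
    return rmse / np.mean(y_data)

def mbe(y_pred, y_data):
    # Mean Bias Error: mean signed error, normalized the same way;
    # positive values indicate systematic over-prediction.
    return np.mean(y_pred - y_data) / np.mean(y_data)

# Toy usage with made-up numbers:
y_data = np.array([100.0, 120.0, 90.0, 110.0])
y_pred = np.array([105.0, 115.0, 95.0, 108.0])
print(cv(y_pred, y_data), mbe(y_pred, y_data))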
1.3 Acquiring and Submitting Data

This section describes how to retrieve the data sets for the competition and how to submit competition entries. The steps are: (1) read this section, (2) acquire the data, (3) analyze the data, and (4) send in your results along with an entry form. The data are available on disks (5.25-in size) in ASCII, IBM-PC format. To receive the data and other information, send a self-addressed 9 x 12 in. envelope with a $2.90 priority mail stamp affixed to:

Building Energy Predictor Shootout
Joint Center for Energy Management
Campus Box 428
University of Colorado
Boulder, CO 80309-0428

Instructions on submitting a return disk with the analysis of the data will be included in a README file on the data disk. The mailing will also include an entry form that each entrant will need to complete and submit along with the results. Completed entries (diskette with results plus completed entry form) should be mailed to: Energy Shootout Entry Disks, at the above address. Part of the entry form will include your name and address and describe the machine type and density that your submittal disks were prepared with. The disks (either 3.5-in or 5.25-in size of any density) must be in ASCII format readable by an MS-DOS machine. Hard copy or nonconforming entries cannot be accepted.

1.4 Data Sets

There are two data sets provided; they are DOS-readable ASCII text files. The data sets have been chosen to address two different sorts of building-related data analysis problems. In this section we describe the general features of the data sets.

A.dat (approximately 3,000 points): This is a time record of hourly chilled water, hot water and whole building electricity usage for a four-month period in an institutional building. Weather data and a time stamp are also included. The hourly values of usage of these three energy forms are to be predicted for the two following months. The testing set consists of the two months following the four-month period.

B.dat (approximately 2,400 points): These data consist of solar radiation measurements made by four fixed devices to be used to predict the time-varying hourly beam radiation during a six-month period. This four-pyranometer device is used in an adaptive controller to predict building cooling loads. A random sample of data from the full data set has been reserved as the training set of 1500 points. The value of beam radiation is to be predicted from data from four fixed sensors for the testing set of 900 additional points.

1.5 Submittals

The prediction tasks differ between the data sets (the sets were chosen to emphasize different prediction problems). The withheld testing data used for evaluating the predictions after the close of the competition will not be available to any of the entrants.

A.dat: For data set A submit predictions (i.e., forecasts) for chilled water, hot water and whole building electricity use for the two months following the four-month training set. The testing set will include values of the same independent variables (weather, date and time) as the training set. Submit your predictions of the three energy end uses in serial order by appending three columns containing your predictions to the right of the testing set columns provided on the disk data file. You will therefore submit to the organizers the testing set plus three columns containing your predictions. A sample of how you are to submit your data will be supplied with the data diskette. The organizers will compare your predictions to the known values of the three energy uses and report CV and MBE. You are also to prepare and submit with your diskette several graphs for the four-month training set data as shown in Figs. 1 and 2. Figure 1 is a time series plot of actual data and a prediction along with the difference between the two (Fig. 1 is such a plot for one month; you can either prepare one such plot for each of the four months or just one plot for all four months; you will need to prepare at least one such plot for each of the three energy end uses). Figure 2 is a plot of hourly energy use vs. dry bulb temperature.
Data for all four months should be presented on one such graph for each of the three energy end uses (total of three plots will be prepared, one each for chilled water, hot water and whole building electricity). On each graph show the values of CV and MBE as defined above.

Summary: You will submit to the seminar organizers one file on diskette with your predictions of the three energy end uses for the testing set. You will also submit several graphs, just described, representing the accuracy of your prediction tool when used on the training set only.

B.dat: For data set B submit predictions for hourly beam radiation given four values of hourly fixed-sensor insolation for the testing set that has been randomly selected from the full data set. The testing set will include the values of the same four independent variables (hourly insolation on four fixed surfaces) as the training set. Submit your predictions of beam insolation in order by appending one column containing their values to the right of the four testing set columns provided on the disk data file. You will therefore submit to the organizers the testing set plus one column containing your predictions. A sample of how you are to submit your data will be supplied with the data diskette. The organizers will compare your predictions to the known values of the beam radiation in the testing set and report CV and MBE. You are also to prepare and submit with your diskette one graph (often called a "scatterplot" or "crossplot") for the training set data as shown by the example in Fig. 3. Figure 3 is a plot of actual data (abscissa) and prediction (ordinate). You should prepare one such plot that includes all data in the training set. On this graph show the value of CV and MBE as defined above.

Summary: You will submit to the seminar organizers one file on diskette with your predictions for the testing set. You will also submit a graph, just described, that represents the accuracy of your prediction tool when used on the training set only.

Questions about these instructions should be addressed to either of the organizers listed above.

1.6 Deadline and extensions

The competition ends at midnight on April 30, 1993; to be fair, we cannot accept entries after this time. We will allow two weeks after this deadline (until May 15th) for only the following two exceptions:

* Because of computer difficulty you were unable to submit the data in time. Send the data before May 15th, along with an explanation of the difficulty. The organizers must be notified of your need to have this extension by April 30, 1993.

* You just found out about the competition or just received the data. Submit your entry before May 15th, along with an explanation why this extension is needed.

Figure 1. Example time series plot showing data, prediction and difference between the two. For the competition submittal also affix the values of CV and MBE to each graph.

Figure 2. Example plot showing energy consumption (here steam) plotted vs dry bulb temperature. For the competition submittal also affix the values of CV and MBE to each graph.

Figure 3. Example scatterplot showing data and prediction crossplotted. For the competition submittal also affix the values of CV and MBE to each graph.

M E M O R A N D U M

TO: Building Analyst Colleague
FROM: Jan F. Kreider
SUBJECT: Building Energy Predictor Shootout
DATE: November 16, 1992

In order to facilitate comparisons among the many empirical techniques used to predict building demand and energy use and to foster contact among the relevant disciplines, ASHRAE's TC 4.7 and TC 1.5 have organized a building data analysis and prediction competition in the form of an ASHRAE seminar to be held in Denver in June, 1993. Forecasting or prediction using empirical models will be the goal of the competition. Jeff Haberl and I invite you to participate. The attached summary explains how you can enter this friendly, no-cost competition. You have received this mailing because of the known interest of yourself and your colleagues in this area of building science research. The enclosure should be self-explanatory, but do not hesitate to call me at the number above if you have questions. Good luck!

Enclosure

From rsun at athos.cs.ua.edu Wed Dec 30 15:50:35 1992
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Wed, 30 Dec 1992 14:50:35 -0600
Subject: No subject
Message-ID: <9212302050.AA10600@athos.cs.ua.edu>

++++++++++++++++++++++++++++++++++++++++++++++++++++++

SCHEMAS AND NEURAL NETWORKS: INTEGRATING SYMBOLIC AND SUBSYMBOLIC APPROACHES TO COOPERATIVE COMPUTATION

A Workshop sponsored by the Center for Neural Engineering
University of Southern California
Los Angeles, CA 90089-2520
April 13th and 14th, 1993

Program Committee: Michael Arbib (Organizer), John Barnden, George Bekey, Francisco Cervantes-Perez, Damian Lyons, Paul Rosenbloom, Ron Sun, Akinori Yonezawa

A previous announcement (reproduced below) announced a registration fee of $150 and advertised the availability of hotel accommodation at $70/night. To encourage the participation of qualified students we have made 3 changes:

1) We have appointed Jean-Marc Fellous as Student Chair for the meeting to coordinate the active involvement of such students.
2) We offer a Student Registration Fee of only $40 to students whose application is accompanied by a letter from their supervisor attesting to their student status.
3) Mr. Fellous has identified a number of lower-cost housing options, and will respond to queries to fellous at pollux.usc.edu

The original announcement - with updated registration form - follows:

********

To design complex technological systems and to analyze complex biological and cognitive systems, we need a multilevel methodology which combines a coarse-grain analysis of cooperative or distributed computation (we shall refer to the computing agents at this level as "schemas") with a fine-grain model of flexible, adaptive computation (for which neural networks provide a powerful general paradigm). Schemas provide a language for distributed artificial intelligence, perceptual robotics, cognitive modeling, and brain theory which is "in the style of the brain", but at a relatively high level of abstraction relative to neural networks.
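To make the two-level picture in the preceding paragraph concrete, here is a small, purely hypothetical Python illustration (none of these classes or names come from the workshop organizers): coarse-grain schemas act as cooperating computing agents, while each schema's internal computation is delegated to a fine-grain neural network.

import numpy as np

class TinyNet:
    # Fine-grain level: a fixed random two-layer network (illustrative only;
    # a real system would train these weights).
    def __init__(self, n_in, n_out, seed):
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(size=(16, n_in))
        self.W2 = rng.normal(size=(n_out, 16))

    def __call__(self, x):
        return np.tanh(self.W2 @ np.tanh(self.W1 @ x))

class Schema:
    # Coarse-grain level: a computing agent with an activity level that
    # expresses how strongly it claims the current input.
    def __init__(self, name, net):
        self.name, self.net, self.activity = name, net, 0.0

    def update(self, x):
        self.activity = float(np.max(self.net(x)))

# Cooperative computation: several schemas assess the same input, and the
# interpretation is read off from their relative activity levels.
schemas = [Schema("grasp", TinyNet(4, 3, 0)), Schema("reach", TinyNet(4, 3, 1))]
x = np.ones(4)
for s in schemas:
    s.update(x)
print(max(schemas, key=lambda s: s.activity).name)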
The proposed workshop will provide a 2-hour introductory tutorial and problem statement by Michael Arbib, and sessions in which an invited paper will be followed by several contributed papers, selected from those submitted in response to this call for papers. Preference will be given to papers which present practical examples of, theory of, and/or methodology for the design and analysis of complex systems in which the overall specification or analysis is conducted in terms of schemas, and where some but not necessarily all of the schemas are implemented in neural networks. A list of sample topics for contributions is as follows, where a hybrid approach means one in which the abstract schema level is integrated with neural or other lower level models:

Schema Theory as a description language for neural networks
Modular neural networks
Linking DAI to Neural Networks to Hybrid Architecture
Formal Theories of Schemas
Hybrid approaches to integrating planning & reaction
Hybrid approaches to learning
Hybrid approaches to commonsense reasoning by integrating neural networks and rule-based reasoning (using schema for the integration)
Programming Languages for Schemas and Neural Networks
Concurrent Object-Oriented Programming for Distributed AI and Neural Networks
Schema Theory Applied in Cognitive Psychology, Linguistics, Robotics, AI and Neuroscience

Prospective contributors should send a hard copy of a five-page extended abstract, including figures with informative captions and full references (either by regular mail or fax) by February 15, 1993 to Michael Arbib, Center for Neural Engineering, University of Southern California, Los Angeles, CA 90089-2520, USA [Tel: (213) 740-9220, Fax: (213) 746-2863, arbib at pollux.usc.edu]. Please include your full address, including fax and email, on the paper. Notification of acceptance or rejection will be sent by email no later than March 1, 1993. There are currently no plans to issue a formal proceedings of full papers, but revised versions of accepted abstracts received prior to April 1, 1993 will be collected with the full text of the Tutorial in a CNE Technical Report which will be made available to registrants at the start of the meeting. [A useful way to structure such an abstract is in short numbered sections, where each section presents (in a small type face!) the material corresponding to one transparency/slide in a verbal presentation. This will make it easy for an audience to take notes if they have a copy of the abstract at your presentation.]

Hotel Information: Attendees may register at the hotel of their choice, but the closest hotel to USC is the University Hilton, 3540 South Figueroa Street, Los Angeles, CA 90007, Phone: (213) 748-4141, Reservation: (800) 872-1104, Fax: (213) 748-0043. A single room costs $70/night while a double room costs $75/night. Workshop participants must specify that they are "Schemas and Neural Networks Workshop" attendees to avail of the above rates. Information on student accommodation may be obtained from the Student Chair, Jean-Marc Fellous, fellous at pollux.usc.edu. The registration fee of $150 ($40 for qualified students who include a "certificate of student status" from their advisor) includes a copy of the abstracts, coffee breaks, and a dinner to be held on the evening of April 13th.
Those wishing to register should send a check payable to "Center for Neural Engineering, USC" for $150 ($40 for students) together with the following information to Paulina Tagle, Center for Neural Engineering, University of Southern California, University Park, Los Angeles, CA 90089-2520, USA.

-------------------------------------------------------------------
SCHEMAS AND NEURAL NETWORKS
Center for Neural Engineering, USC
April 13 - 14, 1993

NAME: ___________________________________________
ADDRESS: _________________________________________
PHONE NO.: _______________ FAX: ___________________
EMAIL: ___________________________________________
I intend to submit a paper: YES [ ] NO [ ]

From tishby at fugue.cs.huji.ac.il Thu Dec 31 13:05:49 1992
From: tishby at fugue.cs.huji.ac.il (Tali Tishby)
Date: Thu, 31 Dec 92 20:05:49 +0200
Subject: Learning Workshop in Jerusalem
Message-ID: <9212311805.AA17382@fugue.cs.huji.ac.il>

Please distribute this notice and not the previous one!

THE HEBREW UNIVERSITY OF JERUSALEM
THE CENTER FOR NEURAL COMPUTATION

LEARNING DAYS IN JERUSALEM
Workshop on Fundamental Issues in Biological and Machine Learning
May 30 - June 4, 1993
Hebrew University, Jerusalem, Israel

The Center for Neural Computation at the Hebrew University is a new multi-disciplinary research center for collaborative investigations of the principles underlying computation and information processing in the brain and in neuron-like artificial computing systems. The Center's activities span theoretical investigations of neural networks in physics, biology and computer science; experimental investigations in neurophysiology, psychophysics and cognitive psychology; and applied research on software and hardware implementations.

The first international symposium sponsored by the Center will be held in the spring of 1993, at the Hebrew University of Jerusalem. It will focus on theoretical, experimental and practical aspects of learning in natural and artificial systems. Topics for the meeting include:

* Theoretical Issues in Supervised and Unsupervised Learning
* Neurophysiological Mechanisms Underlying Learning
* Cognitive Psychology and Learning Psychophysics
* Applications of Machine and Neural Network Learning

Invited speakers include:

Moshe Abeles (Hebrew Univ.)         Joseph LeDoux (NYU)
Roni Agranat (Hebrew Univ.)         Bruce MacNaughton (U. Colorado)
Ehud Ahissar (Weizmann Inst.)       Christoph Von der Malsburg (Bochum)
Asher Cohen (Hebrew Univ.)          Yishai Mansour (Tel Aviv Univ.)
Yadin Dudai (Weizmann Inst.)        Helge Ritter (Bielefeld)
David Haussler (UCSC)               David Rumelhart (Stanford Univ.)
Yuval Davidor (Weizmann Inst.)      Dov Sagi (Weizmann Inst.)
Nathan Intrator (Tel Aviv Univ.)    Menachem Segal (Weizmann Inst.)
Michael Jordan (MIT)                Alex Waibel (CMU)
Yann LeCun (AT&T)                   Norman Weinberger (U.C. Irvine)

Participation in the Workshop is limited to 100. A small number of contributed papers will be accepted. Interested researchers and students are asked to submit registration forms by March 1, 1993, to

Sari Steinberg Bchiri
Center for Neural Computation
Racah Institute of Physics
Hebrew University
91904 Jerusalem
Israel

Tel: (972) 2 584563
Fax: (972) 2 584437
E-mail: learn at galaxy.huji.ac.il

Organizing Committee: Shaul Hochstein, Haim Sompolinsky, Naftali Tishby.

REGISTRATION FORM

Please fill in the information needed for registration.
To ensure participation, please send a copy of this form by e-mail or fax as soon as possible to: Sari Steinberg Bchiri, Center for Neural Computation/Racah Institute of Physics, Hebrew University, 91904 Jerusalem, Israel. Tel: (972) 2 584563; Fax: (972) 2 584437; E-mail: learn at galaxy.huji.ac.il

Name _________________________________________________
     Last            First            Title
Affiliation __________________________________________
Position/Department __________________________________
Business Address _____________________________________
______________________________________________________
______________________________________________________
     Country         Telephone
Home address _________________________________________
______________________________________________________
______________________________________________________
     Country         Telephone
Preferred mailing address: ___ Home ___ Business

Registration fees (before March 1): ____ Regular $100  ____ Student $50
Registration fees (after March 1):  ____ Regular $150  ____ Student $75

Please send payment by check or international money order in US dollars made payable to: Learning Workshop, with a copy of this form by March 1, 1993 to avoid late fee.

Signature ___________________________________ Date _________________

ACCOMMODATION

If you are interested in assistance in reserving hotel accommodation for the duration of the Workshop, please indicate your preferences below (as far in advance as possible).

I wish to reserve a single/double room from __________ to __________ for a total of _______ nights.

CONTRIBUTED PAPERS

A very limited number of contributed papers will be accepted. Participants interested in submitting papers should fill out the following and enclose a 250-word abstract.

Poster/Talk (circle one)

Title: __________________________________________________________________
__________________________________________________________________

%--LaTex--%
\documentstyle[11pt,fullpage]{article}
\begin{document}
\newcommand{\beq}[1]{\begin{equation}\label{#1}}
\newcommand{\eeq}{\end{equation}}
\newcommand{\beqa}[1]{\begin{equation}\label{#1}\begin{eqalign}}
\newcommand{\eeqa}{\end{eqalign}\end{equation}}
\newcommand{\bsubeq}[1]{\begin{subequations}\label{#1}\begin{eqalignno}}
\newcommand{\esubeq}{\end{eqalignno}\end{subequations}}
%
\begin{titlepage}
\title{{\large The Hebrew University of Jerusalem\\
The Center for Neural Computation}\\
\vspace{0.6 in}
{\huge\bf Learning Days in Jerusalem}\\
\vspace{0.3 in}
{\Large Workshop on Fundamental Issues in Biological and Machine Learning}}
\author{{\Large May 30 - June 4, 1993} \\ \\
{\Large Hebrew University, Jerusalem, Israel}}
\date{}
\maketitle

The Center for Neural Computation at the Hebrew University is a new multi-disciplinary research center for collaborative investigations of the principles underlying computation and information processing in the brain and in neuron-like artificial computing systems. The Center's activities span theoretical investigations of neural networks in physics, biology and computer science; experimental investigations in neurophysiology, psychophysics and cognitive psychology; and applied research on software and hardware implementations.

\vspace{0.2in}

The first international symposium sponsored by the Center will be held in the spring of 1993, at the Hebrew University of Jerusalem. It will focus on theoretical, experimental and practical aspects of learning in natural and artificial systems.
\vspace{.2in}
Topics for the meeting include:
\begin{itemize}
\item{\bf Theoretical Issues in Supervised and Unsupervised Learning}
\item{\bf Neurophysiological Mechanisms Underlying Learning}
\item{\bf Cognitive Psychology and Learning Psychophysics}
\item{\bf Applications of Machine and Neural Network Learning}
\end{itemize}
\vspace{0.2in}
\newpage
{\bf Invited speakers include:}
\begin{tabbing}
Moshe Abeles (Hebrew Univ.)xyzpdqrsvpaeiou\=Roni Agranat (Hebrew Univ.)\kill
Moshe Abeles (Hebrew Univ.) \> Joseph LeDoux (NYU)\\
Roni Agranat (Hebrew Univ.) \> Bruce MacNaughton (U. Colorado)\\
Ehud Ahissar (Weizmann Inst.) \> Christoph Von der Malsburg (Bochum)\\
Asher Cohen (Hebrew Univ.) \> Yishai Mansour (Tel Aviv Univ.)\\
Yadin Dudai (Weizmann Inst.) \> Helge Ritter (Bielefeld)\\
David Haussler (UCSC) \> David Rumelhart (Stanford Univ.)\\
Yuval Davidor (Weizmann Inst.) \> Dov Sagi (Weizmann Inst.)\\
Nathan Intrator (Tel Aviv Univ.) \> Menachem Segal (Weizmann Inst.)\\
Michael Jordan (MIT) \> Alex Waibel (CMU)\\
Yann LeCun (AT\&T) \> Norman Weinberger (U.C. Irvine)
\end{tabbing}
\vspace{.3in}
\noindent Participation in the Workshop is limited to 100.
\vspace{.1in}
\noindent A small number of contributed papers will be accepted.
\vspace{.1in}
\noindent Interested researchers and students are asked to submit registration forms by March 1, 1993, to
\vspace{0.2in}
\noindent Sari Steinberg Bchiri\\
Center for Neural Computation\\
Racah Institute of Physics\\
Hebrew University\\
91904 Jerusalem\\
Israel\\
\vspace{0.1in}
\noindent Tel: (972) 2 584563\\
Fax: (972) 2 584437\\
E-mail: {\tt learn at galaxy.huji.ac.il}\\
{\bf Organizing Committee: } Shaul Hochstein, Haim Sompolinsky, Naftali Tishby.
\newpage
\def\fillend{\hrulefill\vrule width 0pt\\}
\centerline{\bf REGISTRATION FORM}
\medskip
Please fill in the information needed for registration. To ensure participation, please send a copy of this form by e-mail or fax as soon as possible to:
\begin{tabbing}
Sari Steinberg Bchiri lalalalalalalalalalala \= lalalalalalala \kill
\noindent Sari Steinberg Bchiri \> E-MAIL: learn at galaxy.huji.ac.il\\
Center for Neural Computation \> TELEPHONE: 972-2-584563\\
Racah Institute of Physics \> FAX: 972-2-584437\\
Hebrew University of Jerusalem\\
91904 Jerusalem, ISRAEL\\
\centerline {Registration will be confirmed by e-mail.}\\
\end{tabbing}
\centerline{\bf Conference Registration}
\medskip
Name: \fillend
Affiliation: \fillend
Address: \fillend
City: \hrulefill State: \hrulefill Zip: \hrulefill
Country: \fillend
Telephone: (\hspace{0.3in}) \hrulefill
{\bf E-mail address:} \fillend
\centerline{\bf Registration Fee}
\noindent
$\Box$ Regular registration (before March 1): \$100 \\
$\Box$ Student registration (before March 1): \$50 \\
$\Box$ Late registration (after March 1): \$150 \\
$\Box$ Student late registration (after March 1): \$75 \\
\newpage
Please send payment by check or international money order in US dollars made payable to {\bf Learning Workshop} with a copy of this form by March 1, 1993 to avoid late fee.
\centerline{\bf Accommodations}
If you are interested in assistance in reserving hotel accommodation for the duration of the Workshop, please indicate your preferences below:
I wish to reserve a $\Box$ single $\Box$ double room from \makebox[1.0in]{\hrulefill} to \makebox[1.0in]{\hrulefill} for a total of \makebox[.5in]{\hrulefill} nights.
\centerline{\bf Contributed Papers}
A very limited number of contributed papers will be accepted.
Participants interested in submitting papers should fill out the following and enclose a 250-word abstract.\\
$\Box$ Poster $\Box$ Talk\\
Title: \makebox[6.0in]{\hrulefill}\\
\makebox[6.5in]{\hrulefill}\\
\makebox[6.5in]{\hrulefill}
\end{document}

From harnad at Princeton.EDU Thu Dec 31 20:11:30 1992
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Thu, 31 Dec 92 20:11:30 EST
Subject: PSYC Call for Book Reviewers: Categorization & Learning
Message-ID: <9301010111.AA24165@clarity.Princeton.EDU>

From harnad at clarity.princeton.edu Thu Dec 31 19:04:26 1992
From: harnad at clarity.princeton.edu (Stevan Harnad)
Date: Thu Dec 31 19:04:26 EST 1992
Subject: psycoloquy.92.3.68.categorization.1.murre (160 lines)
Message-ID:

CALL FOR BOOK REVIEWERS

Below is the Precis of LEARNING AND CATEGORIZATION IN MODULAR NEURAL NETWORKS by JMJ Murre. This book has been selected for multiple review in PSYCOLOQUY. If you wish to submit a formal book review (see Instructions following Precis) please write to psyc at pucc.bitnet indicating what expertise you would bring to bear on reviewing the book if you were selected to review it (if you have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences before, it would be helpful if you could also append a copy of your CV to your message). If you are selected as one of the reviewers, you will be sent a copy of the book directly by the publisher (please let us know if you have a copy already). Reviews may also be submitted without invitation, but all reviews will be refereed. The author will reply to all accepted reviews.

-----------------------------------------------------------------------
psycoloquy.92.3.68.categorization.1.murre   Thursday, 31 December 1992
ISSN 1055-0143 (6 paragraphs, 1 reference, 83 lines)
PSYCOLOQUY is sponsored by the American Psychological Association (APA)
Copyright 1992 Jacob MJ Murre

Precis of: LEARNING AND CATEGORIZATION IN MODULAR NEURAL NETWORKS
JMJ Murre, 1992, 244 pages
Hemel Hempstead: Harvester Wheatsheaf
(In Canada and the USA: Hillsdale, NJ: Lawrence Erlbaum)

Jacob M.J. Murre
MRC Applied Psychology Unit
Cambridge, United Kingdom
jaap.murre at mrc-applied-psychology.cambridge.ac.uk

1.0 MODULARITY AND MODULATION IN NEURAL NETWORKS

1.1 This book introduces a new neural network model, CALM, for categorization and learning in neural networks. CALM is based on ideas from neurobiology, psychology, and engineering. It defines a neural network paradigm that is both modular and modulatory. CALM stands for Categorizing And Learning Module and it may be viewed as a building block for neural networks. The internal structure of the CALM module is inspired by the neocortical minicolumn. Several of these modules are connected to form an initial neural network architecture. Throughout the book it is argued that modularity is important in overcoming many of the problems and limitations of current neural networks. Another pivotal concept in the CALM module is self-induced arousal, which may modulate the local learning rate and noise level.

1.2 The concept of arousal has roots in both biology and psychology. In CALM, this concept underlies two different modes of learning: elaboration learning and activation learning. Mandler and coworkers have conjectured that these two distinct modes of learning may cause the dissociation of memory observed in explicit and implicit memory tasks.
A series of simulations of such experiments demonstrates that arousal-modulated learning and categorization in modular neural networks can account for experimental results with both normal and amnesic patients. In the latter case, pathological but psychologically accurate behavior is produced by "lesioning" the arousal system of the model. The behavior obtained in this way is similar to that in patients with hippocampal lesions, suggesting that the hippocampus may form part of an arousal system in the brain.

1.3 Another application of CALM to psychological modelling shows how a modular CALM network can learn the word superiority effect for letter recognition. As an illustrative practical application, a small model is described that learns to recognize handwritten digits.

2.0 MODULAR NEURAL ARCHITECTURES AND NEUROCOMPUTERS

2.1 The book contains a concise introduction to genetic algorithms, a new computing method based on the metaphor of biological evolution that can be used to design network architectures with superior performance. In particular, it is shown how a genetic algorithm results in a better architecture for the digit-recognition model.

2.2 In five appendices, the role of modularity in parallel hardware and software implementations is discussed in some depth. Several hardware implementations are considered, including a formal analysis of their efficiency on transputer networks and an overview of a dedicated 400-processor neurocomputer built by the developers of CALM in cooperation with Delft Technical University. One of the appendices is dedicated to a discussion of the requirements of simulators for modular neural networks.

3.0 CATASTROPHIC INTERFERENCE AND OTHER ISSUES

3.1 The book ends with an evaluation of the psychological and biological plausibility of CALM models and a discussion of generalization, representational capacity of modular neural networks, and catastrophic interference. A series of simulations and a detailed analysis of Ratcliff's simulations of catastrophic interference show that in almost all cases interference can be attributed to overlap of hidden-layer representations across subsequent blocks of stimuli. It is argued that introducing modularity, or some other form of semidistributed representations, may reduce interference to a more psychologically plausible level.

REFERENCE
Murre, J.M.J. (1992) Learning and Categorization in Modular Neural Networks. Harvester Wheatsheaf/Erlbaum

----------------------------------------------------------------
PSYCOLOQUY INSTRUCTIONS

PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 20,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral, cognitive, neural, social, etc.). All contributions are refereed by members of PSYCOLOQUY's Editorial Board. Target articles should normally not exceed 500 lines in length (commentaries and responses should not exceed 200 lines). All target articles must have (1) a short abstract (<100 words), (2) an indexable title, (3) 6-8 indexable keywords, and (4) the author's full name and institutional address. The submission should be accompanied by (5) a rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses).
Commentaries must have indexable titles and the commentator's full name and institutional address (abstract is optional). All paragraphs should be numbered in articles, commentaries and responses (see format of articles already published in PSYCOLOQUY). It is strongly recommended that all figures be designed so as to be screen-readable ascii. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article. PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected. Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY. Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu

From harnad at Princeton.EDU Thu Dec 31 21:43:42 1992
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Thu, 31 Dec 92 21:43:42 EST
Subject: PSYC Call for Book Reviewers: Language Comprehension
Message-ID: <9301010243.AA24666@clarity.Princeton.EDU>

CALL FOR BOOK REVIEWERS

Below is the Precis of LANGUAGE COMPREHENSION AS STRUCTURE BUILDING by MA Gernsbacher. This book has been selected for multiple review in PSYCOLOQUY. If you wish to submit a formal book review (see Instructions following Precis) please write to psyc at pucc.bitnet indicating what expertise you would bring to bear on reviewing the book if you were selected to review it (if you have never reviewed for PSYCOLOQUY or Behavioral & Brain Sciences before, it would be helpful if you could also append a copy of your CV to your message). If you are selected as one of the reviewers, you will be sent a copy of the book directly by the publisher (please let us know if you have a copy already). Reviews may also be submitted without invitation, but all reviews will be refereed. The author will reply to all accepted reviews.
-------------------------------------------------------------------------
psycoloquy.92.3.69.language-comprehension.1.gernsbacher   Thurs 31 Dec 1992
ISSN 1055-0143 (29 paragraphs, 2 references, 275 lines)
PSYCOLOQUY is sponsored by the American Psychological Association (APA)
Copyright 1992 Morton Ann Gernsbacher

Precis of: LANGUAGE COMPREHENSION AS STRUCTURE BUILDING
MA Gernsbacher (1990)
Hillsdale NJ: Lawrence Erlbaum

Morton Ann Gernsbacher
Department of Psychology
University of Wisconsin-Madison
1202 W. Johnson Street
Madison, WI 53706-1611
(608) 262-6989 [fax (608) 262-4029]
mortong at macc.wisc.edu

0. KEYWORDS: comprehension, cognitive processes, sentence comprehension, psycholinguistics

1. Language can be viewed as a specialized skill involving language-specific processes and language-specific mechanisms. Another view is that language (both comprehension and production) draws on many general cognitive processes and mechanisms. According to this view, some of the same processes and mechanisms involved in producing and comprehending language are involved in nonlinguistic tasks.

2. This commonality might arise because, as Lieberman (1984) and others have suggested, language comprehension evolved from nonlinguistic cognitive skills. Or the commonality might arise simply because the mind is best understood by reference to a common architecture (e.g., a connectionist architecture).

3. I have adopted the view that many of the processes and mechanisms involved in language comprehension are general ones. This book describes a few of those cognitive processes and mechanisms, using a simple framework -- the Structure Building Framework -- as a guide.

4. According to the Structure Building Framework, the goal of comprehension is to build a coherent mental representation or "structure" of the information being comprehended. Several component processes are involved. First, comprehenders lay foundations for their mental structures. Next, they develop their mental structures by mapping on information when that incoming information coheres with the previous information. If the incoming information is less coherent, however, comprehenders engage in another cognitive process: They shift to initiate a new substructure. So, most representations comprise several branching substructures.

5. The building blocks of these mental structures are memory nodes. Memory nodes are activated by incoming stimuli. Initial activation forms the foundation of mental structures. Once the foundation is laid, subsequent information is often mapped onto a developing structure because the more coherent the incoming information is with the previous information, the more likely it is to activate similar memory nodes. In contrast, the less coherent the incoming information is, the less likely it is to activate similar memory nodes. In this case, the incoming information might activate a different set of nodes, and the activation of this other set of nodes forms the foundation for a new substructure.

6. Once memory nodes are activated, they transmit processing signals, either to enhance (boost or increase) or to suppress (dampen or decrease) other nodes' activation. In other words, two mechanisms control the memory nodes' level of activation: Enhancement and Suppression. Memory nodes are enhanced when the information they represent is necessary for further structure building. They are suppressed when the information they represent is no longer as necessary.
7. This book describes the three subprocesses involved in structure building, namely: the Process of Laying a Foundation for mental structures; the Process of Mapping coherent information onto developing structures; and the Process of Shifting to initiate new substructures. The book also describes the two mechanisms that control these structure building processes, namely: the Mechanism of Enhancement, which increases activation, and the Mechanism of Suppression, which dampens activation.

8. In discussing these processes and mechanisms, I begin by describing the empirical evidence to support them. I then describe comprehension phenomena that result from them. At each point, I stress that I assume that these processes and mechanisms are general; that is, the same ones should underlie nonlinguistic phenomena. This suggests that some of the bases of individual differences in comprehension skill might not be language specific. I describe how I have investigated this hypothesis empirically.

9. The process of laying a foundation is described in Chapter 2. Because comprehenders first lay a foundation, they spend more time reading the first word of a clause or sentence, the first sentence of a paragraph or story episode, and the first word of a spoken clause or spoken sentence; they also spend more time viewing the first picture of a picture story or picture story episode.

10. Comprehenders use these first segments (initial words, sentences, and pictures) to lay foundations for their mental representations of larger units (sentences, paragraphs, and story episodes). Because laying a foundation consumes cognitive effort, comprehenders slow down in understanding initial segments. Indeed, none of these comprehension time effects emerges when the information does not lend itself to building cohesive mental representations, for example, when the sentences, paragraphs, or stories are self-embedded or scrambled.

11. The process of laying a foundation explains why comprehenders are more likely to recall a sentence when cued by its first content word (or a picture of that first content word); why they are more likely to recall a story episode when cued by its first sentence; and why they are more likely to consider the first sentence of a paragraph the main idea of that paragraph, even when the actual theme occurs later.

12. Initial words, sentences, and pictures are optimal cues because they form the foundations of their clause-level, sentence-level, and episode-level structures; only through initial words, sentences, and pictures can later words, sentences, and pictures be mapped onto the developing representation.

13. Laying a foundation explains why comprehenders access the participant mentioned first in a clause faster than they access a participant mentioned later. This Advantage of First Mention occurs regardless of the first-mentioned participant's syntactic position or semantic role. First-mentioned participants are more accessible because they form the foundation of their clause-level substructures.

14. Laying a foundation also explains why the first clause of a multi-clause sentence is most accessible shortly after comprehenders hear or read that multi-clause sentence (even though while they are hearing or reading the sentence, the most recent clause is most accessible). According to the Structure Building Framework, comprehenders represent each clause of a multi-clause sentence in its own substructure.
Although they have greatest access to the information that is represented in the substructure that they are currently developing, at some point the first clause becomes most accessible, because the substructure representing the first clause forms the foundation for the whole sentence-level structure.

15. The processes of mapping and shifting are described in Chapter 3. The process of mapping explains why sentences that refer to previously mentioned concepts (and are, therefore, referentially coherent) are read faster than less referentially coherent sentences; why sentences that maintain a previously established time frame (and are, therefore, temporally coherent) are read faster than sentences that are less temporally coherent; why sentences that maintain a previously established location or point of view (and are, therefore, locationally coherent) are read faster than sentences that are less locationally coherent; and why sentences that are logical consequences of previously mentioned actions (and are, therefore, causally coherent) are read faster than sentences that are less causally coherent.

16. The process of shifting from actively building one substructure to initiating another explains why words and sentences that change the topic, point of view, location, or temporal setting take substantially longer to comprehend. The process of shifting also explains why information presented before a change in topic, point of view, location, or temporal setting is harder to retrieve than information presented afterward. Such changes trigger comprehenders to shift and initiate a new substructure; information presented before comprehenders shift is not represented in the same substructure as information presented afterward.

17. Shifting also explains a well-known language comprehension phenomenon: Comprehenders quickly forget the exact form of recently comprehended information. This phenomenon is not unique to language; it also occurs while comprehenders are viewing picture stories, and it is exacerbated after comprehenders cross episode boundaries, even the episode boundaries of picture stories.

18. Finally, shifting explains why comprehenders' memories for stories are organized by the episodes in which the stories were originally heard or read. Comprehenders shift in response to cues that signal a new episode; each episode is hence represented in a separate substructure.

19. The mechanisms of suppression and enhancement are described in Chapter 4. The suppression mechanism explains why only the contextually appropriate meaning of an ambiguous word, such as "bug," is available to consciousness, although multiple meanings -- even contextually inappropriate ones -- are often immediately activated. The inappropriate meanings do not simply decay; neither do they decrease in activation because their activation is consumed by the appropriate meanings. Rather, the suppression mechanism dampens the activation of inappropriate meanings. It also dampens the activation of less relevant associations of unambiguous words.

20. Suppression and enhancement explain how anaphors (such as pronouns, repeated noun phrases, and so forth) improve their antecedents' accessibility. Anaphors both enhance their antecedents' activation and suppress the activation of other concepts, with the net effect that after anaphoric reference, antecedents are more activated than other concepts. They are accordingly more accessible.
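The distinction drawn in paragraph 19 can also be made concrete with a second toy sketch (again invented for illustration, not drawn from the book): both meanings of an ambiguous word start out equally activated, and a context-driven suppression step dampens only the inappropriate one, whereas uniform decay would, by definition, leave the two meanings equally available relative to each other.

    # Toy contrast between passive decay and active suppression for the
    # two meanings of "bug"; all constants are invented for illustration.

    activation = {"insect": 1.0, "listening_device": 1.0}  # both activated at once

    def decay(act, rate=0.9):
        """Passive decay: every meaning loses activation at the same rate."""
        return {m: a * rate for m, a in act.items()}

    def suppress(act, appropriate, strength=0.5):
        """Suppression: context dampens only the inappropriate meanings."""
        return {m: a if m == appropriate else a * strength
                for m, a in act.items()}

    # In an espionage context, the appropriate meaning is the device.
    for _ in range(3):
        activation = suppress(activation, appropriate="listening_device")
    print(activation)  # insect meaning dampened to 0.125; device stays at 1.0

    # Under decay alone, both meanings would fall together, leaving the
    # inappropriate meaning as available as the appropriate one.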
21. Suppression and enhancement are triggered by information that specifies the anaphor's identity. More explicit anaphors trigger more suppression and enhancement. Information from other sources (such as semantic, syntactic, and pragmatic context) also triggers suppression, but it does so less quickly and less powerfully.

22. Suppression and enhancement explain why speakers and writers use more explicit anaphors at longer referential distances, at the beginnings of episodes, and for less topical concepts. The mechanisms of suppression and enhancement also explain why comprehenders have more difficulty accessing referents at longer referential distances, at the beginnings of episodes, and for less topical concepts.

23. Suppression and enhancement explain how concepts marked with cataphoric devices, like spoken stress and the indefinite article "this," gain a privileged status in comprehenders' mental representations. Cataphoric devices enhance the activation of the concepts they mark. They also improve their concepts' representational status through suppression: Concepts marked with cataphoric devices are better at suppressing the activation of other concepts, and they are better at resisting being suppressed themselves.

24. Finally, the mechanisms of suppression and enhancement explain why comprehenders typically forget surface information faster than they forget thematic information; why comprehenders forget more surface information after they hear or read thematically organized passages than after they hear or read seemingly unrelated sentences; and why comprehenders better remember the surface forms of abstract sentences and the thematic content of concrete sentences.

25. Individual differences in structure building are described in Chapter 5. The Structure Building Framework explains why skill in comprehending linguistic media (written and spoken stories) is closely related to skill in comprehending nonlinguistic media (picture stories). Comprehensible information, regardless of its medium, is structured, and comprehenders differ in how skillfully they use the cognitive processes and mechanisms that capture this structure.

26. The process of shifting explains why less-skilled comprehenders are poorer at remembering recently comprehended information: They shift too often. The mechanism of suppression explains why less-skilled comprehenders are less able to reject the contextually inappropriate meanings of ambiguous words; why they are less able to reject the incorrect forms of homophones; why they are less able to reject the typical-but-absent members of nonverbal scenes; why they are less able to ignore words written on pictures; and why they are less able to ignore pictures surrounding words: Less-skilled comprehenders have inefficient suppression mechanisms.

27. The distinction between the mechanisms of suppression and enhancement explains why less-skilled comprehenders are not less able to appreciate the contextually appropriate meanings of ambiguous words and why they are not less able to appreciate typical members of nonverbal scenes. It is less-skilled comprehenders' suppression mechanisms, not their enhancement mechanisms, that are faulty.

28. Although the Structure Building Framework accounts parsimoniously for many comprehension phenomena, several questions remain unanswered. In the final chapter, I briefly identify just a few of those questions: Are the cognitive processes and mechanisms identified by the Structure Building Framework automatic, or are they under comprehenders' conscious control? In what medium are mental structures and substructures represented?
How is the Structure Building Framework similar to other approaches to describing comprehension? And what is lost by describing language comprehension at a general level?

29. I conclude that by describing language comprehension using the Structure Building Framework as a guide, I am not forced to accept nativism, to isolate the psychology of language from the remainder of psychology, to honor theory over data, to depend on linguistic theory, or to ignore functionalism. Instead, by describing language comprehension as structure building, I hope to map the study of language comprehension onto the firm foundation of cognitive psychology.

REFERENCES

Gernsbacher, M.A. (1990) Language Comprehension as Structure Building. Hillsdale NJ: Lawrence Erlbaum

Lieberman, P. (1984) The Biology and Evolution of Language. Cambridge MA: Harvard University Press

-----------------------------------------------------------------------

PSYCOLOQUY INSTRUCTIONS

PSYCOLOQUY is a refereed electronic journal (ISSN 1055-0143) sponsored on an experimental basis by the American Psychological Association and currently estimated to reach a readership of 20,000. PSYCOLOQUY publishes brief reports of new ideas and findings on which the author wishes to solicit rapid peer feedback, international and interdisciplinary ("Scholarly Skywriting"), in all areas of psychology and its related fields (biobehavioral, cognitive, neural, social, etc.). All contributions are refereed by members of PSYCOLOQUY's Editorial Board.

Target articles should normally not exceed 500 lines in length (commentaries and responses should not exceed 200 lines). All target articles must have (1) a short abstract (<100 words), (2) an indexable title, (3) 6-8 indexable keywords, and (4) the author's full name and institutional address. The submission should be accompanied by (5) a rationale for soliciting commentary (e.g., why would commentary be useful and of interest to the field? what kind of commentary do you expect to elicit?) and (6) a list of potential commentators (with their email addresses). Commentaries must have indexable titles and the commentator's full name and institutional address (abstract is optional). All paragraphs should be numbered in articles, commentaries and responses (see the format of articles already published in PSYCOLOQUY).

It is strongly recommended that all figures be designed so as to be screen-readable ASCII. If this is not possible, the provisional solution is the less desirable hybrid one of submitting them as postscript files (or in some other universally available format) to be printed out locally by readers to supplement the screen-readable text of the article.

PSYCOLOQUY also publishes multiple reviews of books in any of the above fields; these should normally be the same length as commentaries, but longer reviews will be considered as well. Book authors should submit a 500-line self-contained Precis of their book, in the format of a target article; if accepted, this will be published in PSYCOLOQUY together with a formal Call for Reviews (of the book, not the Precis). The author's publisher must agree in advance to furnish review copies to the reviewers selected.
Authors of accepted manuscripts assign to PSYCOLOQUY the right to publish and distribute their text electronically and to archive and make it permanently retrievable electronically, but they retain the copyright, and after it has appeared in PSYCOLOQUY authors may republish their text in any way they wish -- electronic or print -- as long as they clearly acknowledge PSYCOLOQUY as its original locus of publication. However, except in very special cases, agreed upon in advance, contributions that have already been published or are being considered for publication elsewhere are not eligible to be considered for publication in PSYCOLOQUY.

Please submit all material to psyc at pucc.bitnet or psyc at pucc.princeton.edu