From Connectionists-Request at cs.cmu.edu Fri Jan 1 00:05:13 1993
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Fri, 01 Jan 93 00:05:13 EST
Subject: Bi-monthly Reminder
Message-ID: <6805.725864713@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources.

CONNECTIONISTS is not an edited forum like the Neuron Digest, or a free-for-all newsgroup like comp.ai.neural-nets. It's somewhere in between, relying on the self-restraint of its subscribers.

Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the number of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to over a thousand busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail.

Happy hacking.

-- Dave Touretzky & David Redish

---------------------------------------------------------------------
What to post to CONNECTIONISTS
------------------------------

- The list is primarily intended to support the discussion of technical issues relating to neural computation.

- We encourage people to post the abstracts of their latest papers and tech reports.

- Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here.

- Requests for ADDITIONAL references. This has been a particularly sensitive subject lately. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example:

    WRONG WAY: "Can someone please mail me all references to cascade correlation?"

    RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, and found the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list."

- Announcements of job openings related to neural computation.

- Short reviews of new text books related to neural computation.

To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU

-------------------------------------------------------------------
What NOT to post to CONNECTIONISTS:
-----------------------------------

- Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu".

- Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster.
- Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net.

- Do NOT tell a friend about Connectionists at cs.cmu.edu. Tell him or her only about Connectionists-Request at cs.cmu.edu. This will save your friend from public embarrassment if she/he tries to subscribe.

-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

    arch.yymm

where yymm stands for the obvious thing. Thus the earliest available data are in the file:

    arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU (Internet address 128.2.242.8).
2. Log in as user anonymous, giving your username as the password.
3. 'cd' directly to one of the following directories:
     /usr/connect/connectionists/archives
     /usr/connect/connectionists/bibliographies
4. The archives and bibliographies directories are the ONLY ones you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into one of these two directories. Access will be denied to any others, including their parent directory.
5. The archives subdirectory contains back issues of the mailing list. Some bibliographies are in the bibliographies subdirectory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------
How to FTP Files from the Neuroprose Archive
--------------------------------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints or articles in this directory and announce their availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. (Along this line, single spaced versions, if possible, will help!)

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. If you do offer hard copies, be prepared for an onslaught.
One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests! Experience dictates that the preferred paradigm is to announce an FTP-only version with a prominent "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your announcement to the connectionist mailing list.

Current naming convention is author.title.filetype[.Z] where title is enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. Very large files (e.g. over 200k) must be squashed (with either a sigmoid function :) or the standard unix "compress" utility, which results in the .Z suffix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z suffix. An example of placing a file is attached as an appendix, and a shell script called Getps in the directory can perform the necessary retrieval operations.

For further questions contact:

    Jordan Pollack                 Assistant Professor
    CIS Dept/OSU                   Laboratory for AI Research
    2036 Neil Ave                  Email: pollack at cis.ohio-state.edu
    Columbus, OH 43210             Phone: (614) 292-4890

Here is an example of naming and placing a file:

gvax> cp i-was-right.txt.ps rosenblatt.reborn.ps
gvax> compress rosenblatt.reborn.ps
gvax> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put rosenblatt.reborn.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rosenblatt.reborn.ps.Z
226 Transfer complete.
100000 bytes sent in 3.14159 seconds
ftp> quit
221 Goodbye.
gvax> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.
Jordan, I just placed the file rosenblatt.reborn.ps.Z in the Inbox. The INDEX sentence is "Boastful statements by the deceased leader of the neurocomputing field." Please let me know when it is ready to announce to Connectionists at cmu. BTW, I enjoyed reading your review of the new edition of Perceptrons!
Frank

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "pt.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous", giving your username as the password.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "nn-bench-request at cs.cmu.edu".

From rybak at cerebrum.impaqt.drexel.edu Mon Jan 4 12:33:14 1993
From: rybak at cerebrum.impaqt.drexel.edu (Ilya Rybak)
Date: Mon, 4 Jan 93 12:33:14 EST
Subject: papers available
Message-ID: <9301041733.AA14284@impaqt.drexel.edu>

Dear Connectionists,

The two papers below have been accepted for the IS&T/SPIE Conference on Human Vision, Visual Processing and Digital Display IV, San Jose, 1993.
Hard copies of the papers are available on request by e-mail to ilya at cheme.seas.upenn.edu

Ilya Rybak

____________________________________________________________________

PHYSIOLOGICAL MODEL OF ORIENTATION SENSITIVITY IN THE VISUAL CORTEX AND ITS USE FOR IMAGE PROCESSING

Ilya A. Rybak*, Lubov N. Podladchikova** and Natalia A. Shevtsova**

*  University of Pennsylvania, Philadelphia, PA, USA
   ilya at cheme.seas.upenn.edu
** A.B. Kogan Research Institute for Neurocybernetics
   Rostov State University, Rostov-on-Don, Russia

The objectives of the research were: (i) to investigate the dynamics of neuron responses and orientation selectivity in the primary visual cortex; (ii) to find a possible source of bifurcation of visual information into "what" and "where" processing pathways; (iii) to apply the obtained results to visual image processing.

To achieve these objectives, a model of the iso-orientation domain (orientation column) of the visual cortex has been developed. The model is based on neurophysiological data and on the idea that orientation selectivity results from a spatial anisotropy of reciprocal lateral inhibition in the domain. Temporal dynamics of neural responses to oriented stimuli were studied with the model. It was shown that the later phase of the neuron response had a much sharper orientation tuning than the initial one. The results of modeling were confirmed by neurophysiological experiments on the visual cortex. The findings suggest that the initial phase of the neural response encodes the location of the visual stimulus, whereas the later phase encodes its orientation. This temporal division of information about object features and their locations at the neuronal level of the primary visual cortex may be considered a source of the bifurcation of visual processing into "what" and "where" pathways, and may be used for parallel-sequential attentional image processing. A model of a neural network system for image processing, based on the iso-orientation domain models and the above idea, is proposed. An example of test image processing is presented.

_____________________________________________________________________
--------------------------------------------------------------------

BEHAVIORAL MODEL OF VISUAL PERCEPTION AND RECOGNITION

Ilya A. Rybak*, Alexander V. Golovan** and Valentina I. Gusakova**

*  University of Pennsylvania, Philadelphia, PA, USA
   ilya at cheme.seas.upenn.edu
** A.B. Kogan Research Institute for Neurocybernetics
   Rostov State University, Rostov-on-Don, Russia

In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixations at the most informative points of the image. Perception and recognition are therefore not only results of neural computations, but also behavioral processes. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and of the sensory signals expected to arrive in response to each shift of attention.

In the modern view of the problem, invariant object recognition is provided by the following: (i) separate processing of "what" (object features) and "where" (spatial features) information at high levels of the visual system; (ii) mechanisms of visual attention using "where" information; (iii) representation of "what" information in an object-based frame of reference (OFR).
However, most recent models of vision based on the OFR have demonstrated invariant recognition only of simple objects like letters or binary objects without background, i.e. objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a "feature-based frame of reference" (FFR), connected with the basic feature (edge) at the fixation point. This has provided our model with the ability to represent complex objects in gray-level images invariantly, but it demands realization of the behavioral aspects of vision described above.

The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) at each fixation, and a high-level subsystem consisting of "what" (Sensory Memory) and "where" (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists in successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.

-----------------------------------------------------------------------

From bouzerda at eleceng.adelaide.edu.au Mon Jan 4 21:23:13 1993
From: bouzerda at eleceng.adelaide.edu.au (bouzerda@eleceng.adelaide.edu.au)
Date: Tue, 5 Jan 1993 13:23:13 +1100
Subject: Job Offer
Message-ID: <9301050223.17194@munnari.oz.au>

POSTDOCTORAL OR RESEARCH FELLOW in Signal Processing and Neural Networks
**************************************

A postdoctoral or research fellow is sought to join, as soon as possible, the Centre for Sensor Signal and Information Processing (CSSIP) and the University of Adelaide EE Eng Department. The CSSIP is one of several cooperative research centres awarded by the Australian Government to establish excellence in research and development. The University of Adelaide, represented by the EE Eng Dept, is a partner in this cooperative research centre, together with the Defence Science and Technology Organization (DSTO), four other Universities, and several companies. The cooperative research centre consists of more than 50 effective full-time researchers, and is well equipped with many UNIX Workstations and a massively parallel machine (DEC MPP).

The aim is to develop and investigate principles of artificial neural networks for sensor signal and image processing, classification and separation of signals, and data fusion. The position is for one year with a strong possibility of renewal.

DUTIES: In consultation with task leaders and specialist researchers, to investigate alternative algorithm design approaches, to design experiments on applications of signal processing and artificial neural networks, to prepare data and carry out the experiments, to prepare software for testing algorithms, and to prepare or assist with the preparation of technical reports.

QUALIFICATIONS: The successful candidate must have a Ph.D., a proven research record, and a demonstrated ability in written and spoken English.

PAY and CONDITIONS: will be in accordance with University of Adelaide policies, and will depend on qualifications and experience.
Appointments may be made in scales A$ 36766 to A$ 42852 for a postdoc, and A$ 42333 to A$ 5999 for a research fellow.

ENQUIRIES:

Prof. R. E. Bogner, Electrical & Electronic Engineering Dept., The University of Adelaide, G.P.O. Box 498, Adelaide, South Australia 5001.
Phone: (61)-08-228-5589, Fax: (61)-08-232-5720
Email: bogner at eleceng.adelaide.edu.au

Dr. A. Bouzerdoum, Phone: (61)-08-228-5464, Fax: (61)-08-232-5720
Email: bouzerda at eleceng.adelaide.edu.au

From john at cs.uow.edu.au Tue Jan 5 14:11:55 1993
From: john at cs.uow.edu.au (John Fulcher)
Date: Tue, 5 Jan 93 14:11:55 EST
Subject: no subject (file transmission)
Message-ID: <199301050311.AA18078@wraith.cs.uow.edu.au>

CALL FOR PAPERS - ANN STANDARDS

COMPUTER STANDARDS & INTERFACES

For some time now, there has been a need to consolidate and formalise the efforts of researchers in the Artificial Neural Network field. The publishers of this North-Holland journal have deemed it appropriate to devote a forthcoming special issue of Computer Standards & Interfaces to ANN standards, under the guest editorship of John Fulcher, University of Wollongong, Australia.

We already have the cooperation of the IEEE/NCC Standards Committee, but are also interested in submissions regarding less formal, de facto "standards". These could range from established, "standard" techniques in various application areas (vision, robotics, speech, VLSI etc.) to ANN techniques generally (such as the backpropagation algorithm & its [numerous] variants, say). Accordingly, survey or review articles would be particularly welcome.

If you are interested in submitting a paper for consideration, you will need to send three copies (in either hard copy or electronic form) by March 31st, 1993 to:

John Fulcher
Department of Computer Science
University of Wollongong
Northfields Avenue, Wollongong NSW 2522, Australia
fax: +61 42 213262
email: john at cs.uow.edu.au

From terry at helmholtz.sdsc.edu Tue Jan 5 15:50:41 1993
From: terry at helmholtz.sdsc.edu (Terry Sejnowski)
Date: Tue, 5 Jan 93 12:50:41 PST
Subject: Neural Computation 5:1
Message-ID: <9301052050.AA25119@helmholtz.sdsc.edu>

NEURAL COMPUTATION - Volume 5 - Number 1 - January 1993

Article:

Conversion of Temporal Correlations Between Stimuli to Spatial Correlations Between Attractors
    M. Griniasty, M. V. Tsodyks and Daniel J. Amit

Note:

On the Realization of a Kolmogorov Network
    Ji-Nan Lin and Rolf Unbehauen

Letters:

Statistical Mechanics for a Network of Spiking Neurons
    Leonid Kruglyak and William Bialek

Acetylcholine and Learning in a Cortical Associative Memory
    Michael E. Hasselmo

Convergent Algorithm for Sensory Receptive Field Development
    Joseph J. Atick and A. Norman Redlich

Three-Dimensional Object Recognition Using an Unsupervised BCM Network: The Usefulness of Distinguishing Features
    Nathan Intrator and Joshua I. Gold

Complexity Optimized Data Clustering by Competitive Neural Networks
    Joachim Buhmann and Hans Kuhnel

Clustering Data by Melting
    Yiu-fai Wong

Coarse Coding Resource-Allocating Network
    Gustavo Deco and Jurgen Ebmeyer

Training Periodic Sequences Using Fourier Series Error Criterion
    James A. Kottas
Generalization and Approximation Capabilities of Multilayer Networks
    Yoshikane Takahashi

Statistical Theory of Learning Curves under Entropic Loss Criterion
    Shun-ichi Amari and Noboru Murata

Learning in the Recurrent Random Neural Network
    Erol Gelenbe

-----

SUBSCRIPTIONS - VOLUME 5 - BIMONTHLY (6 issues)

______ $40  Student
______ $65  Individual
______ $156 Institution

Add $22 for postage and handling outside USA (+7% GST for Canada).

(Back issues from Volumes 1-4 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889  FAX: (617) 258-6779  e-mail: hiscox at mitvma.mit.edu

-----

From nowlan at helmholtz.sdsc.edu Tue Jan 5 14:03:42 1993
From: nowlan at helmholtz.sdsc.edu (Steven J. Nowlan)
Date: Tue, 05 Jan 93 12:03:42 MST
Subject: Preprint announcement
Message-ID: <9301052003.AA28913@bose>

****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. **

The following paper has been placed in the Neuroprose archives at Ohio State. The file is nowlan.vismotion.ps.Z. Ftp instructions follow the abstract. Only an electronic version of this paper is available. This is a preprint of the paper to appear in the NIPS 5 proceedings due out later this year.

-----------------------------------------------------

Filter Selection Model for Generating Visual Motion Signals

Steven J. Nowlan and Terrence J. Sejnowski
Computational Neurobiology Laboratory
The Salk Institute
P.O. Box 5800
San Diego, CA 92186-5800

ABSTRACT: Neurons in area MT of primate visual cortex encode the velocity of moving objects. We present a model of how MT cells aggregate responses from V1 to form such a velocity representation. Two different sets of units, with local receptive fields, receive inputs from motion energy filters. One set of units forms estimates of local motion, while the second set computes the utility of these estimates. Outputs from this second set of units ``gate'' the outputs from the first set through a gain control mechanism. This active process of selecting only a subset of local motion responses to integrate into more global responses distinguishes our model from previous models of velocity estimation. The model yields accurate velocity estimates in synthetic images containing multiple moving targets of varying size, luminance, and spatial frequency profile, and deals well with a number of transparency phenomena.

-----------------------------------------------------

FTP INSTRUCTIONS

Either use "Getps nowlan.vismotion.ps.Z", or do the following:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get nowlan.vismotion.ps.Z
ftp> quit
unix> uncompress nowlan.vismotion.ps.Z
unix> lpr -s nowlan.vismotion.ps (or however you print postscript)

Steven J. Nowlan
Computational Neurobiology Laboratory
The Salk Institute
P.O. Box 85800
San Diego, CA 92186-5800
Work Phone: 619-453-4100 X124
e-mail: nowlan at helmholtz.sdsc.edu

From darken at learning.siemens.com Wed Jan 6 10:20:13 1993
From: darken at learning.siemens.com (Christian Darken)
Date: Wed, 6 Jan 93 10:20:13 EST
Subject: Great Energy Shootout -- definition
Message-ID: <9301061520.AA02844@learning.siemens.com>

Pertinent to the "Great Energy Shootout" competition description posted to this list a few days ago, I asked Jan Kreider for a definition of "HVAC system", and got back the following reply.

cd

> From @mirsa.inria.fr:kreider at manitou Wed Jan 6 04:25:54 1993
> Date: Wed, 6 Jan 93 10:27:55 +0100
> From: Jan KREIDER
> To: darken at learning.siemens.com
> Subject: RE: Building energy predictor Competition - "
> Content-Length: 251
>
> Chris -
>
> An HVAC system is the heating, ventilating and air conditioning (HVAC)
> system in a building, including chillers, boilers, fans and pumps. Could
> you forward this definition to your colleagues in the connectionist
> world?
>
> Merci - Jan F. Kreider

From mico at ludens.elte.hu Wed Jan 6 04:47:34 1993
From: mico at ludens.elte.hu (mico@ludens.elte.hu)
Date: Wed, 06 Jan 1993 10:47:34 +0100
Subject: book announcement
Message-ID: <00966338.6F92ABC0.19948@ludens.elte.hu>

*********************** BOOK ANNOUNCEMENT **************************

Klaus Haefner (ed.): Evolution of information processing systems. An interdisciplinary approach for a new understanding of nature and society.
Berlin; New York: Springer-Verlag, c1992.
Phys. description: x, 357 p.; 46 figures; 25 cm. Includes bibliographical references (pp. 347-357).

*********************************************************************

Features contributions by:

Vili Csanyi, Sidney Fox, Hermann Haken, George Kampis, Jenny Kien, Wolfgang Klement, Ervin Laszlo, Greg Nicolis, John Nicolis, Theodor Oeser, Mika Pantzar, Michael Requardt, Anton Semenov

This book is based on an invited conference held in Bremen, Germany, October 8-10, 1990. At the conference, issues related to Haefner's Basic Paper were discussed. The Basic Paper is reproduced in the first part of the book. This paper claims that the same information-theoretic principles can govern computers, life, and minds, and further, that they are valid for atomic systems and societal systems too. As a counterweight, the other papers of the volume offer, by and large, a critical attitude or at least significant refinements to this thesis. These papers, many of which were written by leading experts, offer new views on information based on ideas of computer science, synergetics, self-organization, neural networks, and self-modification.

M. Vargyas
egcs301 at hueco.uni-wien.ac.at

From kak at max.ee.lsu.edu Thu Jan 7 14:47:10 1993
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Thu, 7 Jan 93 13:47:10 CST
Subject: Quantum neural computer
Message-ID: <9301071947.AA07593@max.ee.lsu.edu>

Hitherto all computers have been designed based on classical laws. We consider the question of building a quantum neural computer and speculate on its computing power. We argue that such a computer could have the potential to solve artificial intelligence problems.

History tells us that paradigms of science and technology draw on each other. Thus Newton's conception of the universe was based on the mechanical engines of the day; thermodynamics followed the heat engines of the 19th century; and computers followed the development of the telegraph and telephone. From another point of view, modern computers are based on classical physics.
Since classical physics has been superseded by quantum mechanics in the microworld, one might ask whether a new paradigm of computing based on quantum mechanics can be constructed.

Intelligence, and by implication consciousness, has been taken by many computer scientists to emerge from the complexity of the interconnections between the neurons. But if it is taken to be a unity, as urged by Schrodinger and other physicists, then it should be described by a quantum mechanical wave function. No representation in terms of networking of classical objects, such as threshold neurons, can model a wave function. This is another reason that one seeks a new computing paradigm. A brain-mind identity hypothesis, with a mechanistic or electronic representation of the brain processes, does not explain how self-awareness could arise. At the level of ordinary perception there exists a duality and complementarity between an autonomous (and reflexive) brain and a mind with intentionality. The notion of self seems to hinge on an indivisibility akin to that found in quantum mechanics. This was argued most forcefully by Schrodinger, one of the creators of quantum mechanics.

A quantum neural computer will start out with a wavefunction that is a sum of several different problem functions. After the evolution of the wavefunction, the measurement operator will force the wavefunction to reduce to the correct eigenfunction, with the corresponding measurement representing the computation.

A discussion of these issues is contained in my TECHNICAL REPORT ECE/LSU 92-13, December 15, 1992, entitled

    CAN WE BUILD A QUANTUM NEURAL COMPUTER?

If you would like to have an electronic copy (minus the math), do let me know. Hard copies are also available.

-Subhash Kak
Professor of Electrical & Computer Engineering
Louisiana State University
Baton Rouge, LA 70803-5901, USA
Tel: (504) 388-5552; Fax: (504) 388-5200

From N.E.Sharkey at dcs.ex.ac.uk Thu Jan 7 06:29:58 1993
From: N.E.Sharkey at dcs.ex.ac.uk (Noel Sharkey)
Date: Thu, 7 Jan 93 11:29:58 GMT
Subject: apologies
Message-ID: <7267.9301071129@propus.dcs.exeter.ac.uk>

My apologies for the delay in responding to the many of you who sent in abstracts for the AISB WORKSHOP ON CONNECTIONISM, COGNITION, AND THE NEW AI by the 12th December deadline. On all of the other advertisements that appeared, the deadline was January 12th. This was a clerical error on my part - sorry. So I will let all of the submittees know the outcome of the reviews as soon as possible after the closing date of January 12th. I have enclosed the full advert below for those interested.

WORKSHOP ON CONNECTIONISM, COGNITION AND A NEW AI

A workshop will be held on March 30th at the 9th Biennial Conference on Artificial Intelligence (AISB-93) at the University of Birmingham, England, organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB).

A number of recent developments in Connectionist Research have strong implications for the future of AI and the study of Cognition. Among the most important are developments in Learning, Representation, and Productivity (or Generalisation). The aim of the workshop is to focus on how these developments may change the way we look at AI and the study of Cognition. Our goal is to have a lively and invigorating debate on the state-of-the-art.

SUGGESTED TOPICS INCLUDE (BUT ARE NOT RESTRICTED TO):

* Connectionist representation
* Generalisation and Transfer of Knowledge
* Learning Machines and models of human development
* Symbolic Learning versus Connectionist learning
* Advantages of Connectionist/Symbolic hybrids
* Modelling Cognitive Neuropsychology
* Connectionist modelling of Creativity and music (or other arts)

DEADLINE FOR SUBMISSION: 12th January, 1993

ORGANISER

Noel Sharkey, Centre for Connection Science, Computer Science, Exeter

COMMITTEE

Andy Clark (Sussex)
Glyn Humphries (Birmingham)
Kim Plunkett (Oxford)
Chris Thornton (Sussex)

WORKSHOP ENTRANCE: Attendance at the workshop will be limited to 50 or 60 places, so please LET US KNOW AS SOON AS POSSIBLE IF YOU ARE PLANNING TO ATTEND, and to which of the following categories you belong.

DISCUSSION PAPERS: Acceptance of discussion papers will be decided on the basis of extended abstracts (try to keep them under 500 words please) clearly specifying a 15 to 20 minute discussion topic for oral presentation. There will also be a small number of invited contributors.

ORDINARY PARTICIPANTS: A limited number of places will be available for participants who wish to sit in on the discussion but do not wish to present a paper. But please get in early with a short note saying what your purpose is in attending.

Please send submissions to:

Noel E. Sharkey
Centre for Connection Science
Dept. Computer Science
University of Exeter
Exeter EX4 4PT
Devon, U.K.

or email noel at uk.ac.exeter.dcs

From lba at sara.inesc.pt Thu Jan 7 13:02:40 1993
From: lba at sara.inesc.pt (Luis B. Almeida)
Date: Thu, 7 Jan 93 17:02:40 -0100
Subject: workshop announcement
Message-ID: <9301071802.AA20411@sara.inesc.pt>

IRTICS'93

CALL FOR PAPERS

Workshop on Integration Technology for Real-Time Intelligent Control Systems

October 5-7, 1993
Madrid, SPAIN

Sponsored by:
  The Commission of the European Communities (CEC)
  The HINT project

Organized by:
  Instituto de Ingenieria del Conocimiento (IIC)

AIM:
----

Nowadays, the need to use several techniques together in order to improve real-time intelligent control systems has become a constant in most industrial environments. Expert systems, neural networks, modelling, etc. can each solve problems but, in many cases, a solution that combines different techniques leads to synergistic effects. So, cooperation among different approaches has become a crucial area of interest in this environment, the objective being to obtain common frameworks in which the best of each technique can be used as effectively as possible.

This conviction moved us to organize the present workshop on "Integration for Real-Time Intelligent Control Systems", IRTICS'93, which we are sure will be a good opportunity to examine many of the possibilities that exist, or will exist, in this direction.

This workshop aims to encourage the communication and exchange of ideas among researchers, practitioners and end-users aware of the possibilities of integrating different AI technologies in real-time environments. Contributions addressing both theoretical problems and practical experiences will be of great interest for this forum.

The workshop is organized by the Instituto de Ingenieria del Conocimiento, IIC, as an external activity of the HINT project: Heterogeneous Integration Architecture for Intelligent Control Systems (ESPRIT 6447). All correspondence about the workshop should be addressed to the Workshop Secretariat at the IIC.
TOPICS:
-------

Researchers and practitioners interested in the possibilities of integration among different techniques applied to real-time environments, as well as in specific work that could be synergistically enhanced by means of integration, are invited to participate in the workshop by submitting an extended abstract as specified. Suggested topics include Integration Techniques for:

* Expert Systems
* Neural Networks
* Fuzzy Logic
* Model Based Reasoning
* AI Architectures
* Intelligent User Support Systems

SUBMISSION REQUIREMENTS:
------------------------

Authors should submit 3 copies of an extended abstract (max 2000 words, approximately 5 single-spaced pages) to the secretariat of the workshop before the deadline indicated in the timetable. They should include separately a page containing their name and full address, e-mail, fax or telephone, and the section(s) their work is related to. All contributions should be submitted in English. Abstracts will be reviewed according to their relevance to the basic aim of the workshop, their clarity and their originality. Accepted papers will be included in a book to be published with the results and conclusions of the workshop.

PROGRAMME:
----------

The programme of the workshop will include three different activities:

* Invited Contributions
* Communications
* Discussions

We plan to include invited contributions on some state-of-the-art themes that are of interest to all the participants. The rest of the time, parallel sessions will be held on each topic, including communications and ample time for discussion. The last of these sessions will deal with 'integration' as the main topic of the workshop. Participants are invited to summarize the results of their work in this final session.

TIMETABLE:
----------

* Submissions must be received: by February 28, 1993.
* Notification of acceptance or rejection: by April 30, 1993.
* Camera-ready versions: before June 30, 1993.

REGISTRATION FEES:
------------------

The registration fees include midday lunch during the workshop, a Welcome Party to be held on Monday 4th evening, and the proceedings of the workshop.

Registration: 400 ECU

ORGANIZATION COMMITTEE:
-----------------------

Enrica Chiozza (IIC, Spain)
Pilar Rodriguez-Marin (IIC, Spain)

PROGRAM COMMITTEE:
------------------

Fontaine L. (Dassault Electronique, France)
Rodriguez-Marin P. (IIC, Spain)
Almeida L. B. (INESC, Portugal)
Sundin U. (INFOLOGICS, Sweden)
De Pablo E. (Repsol Petroleo, S.A., Spain)
Jimenez A. (UPM, Spain)

SECRETARIAT:
------------

Enrica Chiozza
Instituto de Ingenieria del Conocimiento
UAM Canto Blanco, Modulo C-XVI, P. 4
28049 Madrid, SPAIN
Fax: (34 1) 397 3972
Phone: (34 1) 397 8520
E-mail: CHIOZZA @ EMDCCI11.BITNET or CHIOZZA @ iic.uam.es

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

From rsun at athos.cs.ua.edu Thu Jan 7 18:29:02 1993
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Thu, 7 Jan 1993 17:29:02 -0600
Subject: integrating symbolic processing with neural networks
Message-ID: 

I have collected a bibliography of papers on integrating symbolic processing with neural networks, and am looking for additional references to make it as comprehensive as possible. It is available in Neuroprose under the name Sun.bib.Z (in an unedited form); I'll also be happy to send it to you directly if you e-mail me. The bibliography will be included in a book on this topic that I'm co-editing.
So, I would like authors to send me a (possibly annotated) list of their publications on this topic (this is a chance to make your work better known). Also, anyone who has already compiled such a bibliography, please let me know; I would like to incorporate it. Due credit will be given, of course.

Here is my address. E-mail response (rsun at cs.ua.edu) is strongly preferred.

================================================================
Ron Sun, Ph.D
Assistant Professor
Department of Computer Science    phone: (205) 348-6363
The University of Alabama        fax:   (205) 348-8573
Tuscaloosa, AL 35487             rsun at athos.cs.ua.edu
================================================================

Thanks for your cooperation.

From mume at sedal.su.oz.au Fri Jan 8 00:08:11 1993
From: mume at sedal.su.oz.au (Multi-Module Environment)
Date: Fri, 8 Jan 1993 16:08:11 +1100
Subject: FREE MUME version 0.5 for MSDOS platform
Message-ID: <9301080508.AA21134@sedal.sedal.su.OZ.AU>

The Multi-Module Neural Computing Environment (MUME) version 0.5 for the MSDOS platform is now available FREE of charge via anonymous ftp on

    brutus.ee.su.oz.au:/pub/MUME-0.5-DOS.zip

The full listing of the file is:

    -rw-r-----  1 mume  mume  1391377 Jan  8 15:45 MUME-0.5-DOS.zip

Unzipping it should create a directory called MUME-DOS; it is about 4.6 MB. The README file follows. Have fun.

MUME-Request at sedal.su.OZ.AU

--------------------------------------------------------------------------------

Multi-Module Neural Computing Environment (MUME)
Version 0.5 (FREE) for MSDOS 5.0

MUME is a simulation environment for multi-module neural computing. It provides an object-oriented facility for the simulation and training of multiple nets with various architectures and learning algorithms.

MUME includes a library of network architectures including feedforward, simple recurrent, and continuously running recurrent neural networks. Each architecture is supported by a variety of learning algorithms.

MUME can be used for large-scale neural network simulations as it provides support for learning in multi-net environments. It also provides pre- and post-processing facilities.

The object-oriented structure makes it simple to add new network classes and new learning algorithms. New classes/algorithms can simply be added to the library or compiled into a program at run-time. The interface between classes is performed using Network Service Functions, which can be easily created for a new class/algorithm.

The architectures and learning algorithms currently available are:

    Class          Learning algorithms
    ------------   -------------------
    MLP            backprop, weight perturbation, node perturbation, summed weight perturbation
    SRN            backprop through time, weight update driven node splitting, History bound nets
    CRRN           Williams and Zipser
    Programmable limited precision nets
                   Weight perturbation, Combined Search Algorithm, Simulated Annealing

Other general-purpose classes include (viewed as nets):

    o DC source
    o Time delays
    o Random source
    o FIFOs and LIFOs
    o Winner-take-all
    o X out of Y classifiers

The modules are provided in a library. Several "front-ends" or clients are also available. MUME can be used to include non-neural computing modules (decision trees, ...) in applications.

The software is the product of a number of staff and postgraduate students at the Machine Intelligence Group at Sydney University Electrical Engineering. It is currently being used in research, research and development, and teaching, in ECG and ICEG classification, and in speech and image recognition.
As such, we are interested in institutions that can exploit the tool (especially in educational courses) and build upon it.

The software is written in 'C' and is available on the following platforms:

    - Sun (SunOS)
    - DEC (Ultrix)
    - Fujitsu VP2200 (UXP/M)
    - IBM RS6000 (AIX)
    - Hewlett Packard (HP-UX)
    - IBM PC compatibles (MSDOS 5.0) -- does not run under MS-Windows' DOS sessions

The MSDOS version of MUME is available as public domain software, and can be ftp-ed from brutus.ee.su.oz.au:/pub/MUME-0.5-DOS.zip.

MUME for the other platforms is available to research institutions on media/doc/postage cost arrangements. Information on how to acquire it may be obtained by writing (or email) to:

    Marwan Jabri
    SEDAL
    Sydney University Electrical Engineering
    NSW 2006 Australia
    Tel: (+61-2) 692-2240
    Fax: 660-1228
    Email: marwan at sedal.su.oz.au

A MUME mailing list is available: to subscribe, send an email to MUME-Requests at sedal.su.OZ.AU with your subscription email address on the 'Subject:' line. To send mail to everybody on the mailing list, send it to MUME at sedal.su.OZ.AU.

All bug reports should be sent to MUME-Bugs at sedal.su.OZ.AU and should include the following details:

    1. Date (eg. 12 Feb 1993).
    2. Name (eg. John Citizen).
    3. Company/Institution (eg. Sydney University Electrical Engineering).
    4. Contact Address (eg. what-is-mume at sedal.su.OZ.AU).
    5. Version of MUME (eg. MUME 0.5).
    6. Machine Name/Type (eg. Sun Sparc 2).
    7. Version of the Operating System (eg. SunOS 4.1.1).
    8. Brief Description of the problem(s).
    9. Error Messages (if any).
    10. Related Files (Filename, Version and Relationship to problems).

From lautrup at connect.nbi.dk Fri Jan 8 09:31:02 1993
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 8 Jan 93 14:31:02 GMT
Subject: No subject
Message-ID: 

POST-DOC POSITION IN NEURAL SIGNAL PROCESSING THEORY

The Danish Computational Neural Network Center (CONNECT) announces a one-year post-doc position in the theory of neural signal processing. CONNECT is a joint effort with participants from the University of Copenhagen, Risoe National Laboratory, and the Technical University of Denmark.

The position is available March 1, 1993, at the Electronics Institute of the Technical University of Denmark. The work of the neural signal processing group concerns generalization theory, algorithms for architecture optimization, and applications in time series analysis, seismic signal processing, image processing and pattern recognition.

The candidate must have a strong background in statistics or statistical physics and have several years of experience in neural signal processing. A candidate with proven abilities in generalization theory of signal processing neural networks or in seismic signal processing will be favoured.

Further information about the position can be obtained from:

    Lars Kai Hansen                    Phone: (+45) 45 93 12 22, ext 3889
    Electronics Institute B349         Fax:   (+45) 42 87 07 17
    Technical University of Denmark    email: lars at eiffel.ei.dth.dk
    DK-2800 Lyngby
Applications containing a CV, list of publications, and three letters of recommendation should be mailed to:

    Benny Lautrup, CONNECT
    Niels Bohr Institute
    Blegdamsvej 17
    DK-2100 Copenhagen

Deadline: February 15, 1993.

From sjr at eng.cam.ac.uk Fri Jan 8 05:05:01 1993
From: sjr at eng.cam.ac.uk (sjr@eng.cam.ac.uk)
Date: Fri, 8 Jan 93 10:05:01 GMT
Subject: TR available
Message-ID: <16768.9301081005@truth.eng.cam.ac.uk>

The following technical report is available via FTP, from the International Computer Science Institute (ftp.icsi.berkeley.edu) and also the Cambridge University Engineering Department FTP archive (svr-ftp.eng.cam.ac.uk).

CONNECTIONIST PROBABILITY ESTIMATION IN HMM SPEECH RECOGNITION

Steve Renals and Nelson Morgan
International Computer Science Institute Technical Report TR-92-081

This report is concerned with integrating connectionist networks into a hidden Markov model (HMM) speech recognition system. This is achieved through a statistical understanding of connectionist networks as probability estimators, first elucidated by Herve Bourlard. We review the basis of HMM speech recognition, and point out the possible benefits of incorporating connectionist networks. We discuss some issues involved in the construction of a connectionist HMM recognition system, and describe the performance of such a system, including evaluations on the DARPA database, in collaboration with Mike Cohen and Horacio Franco of SRI International. In conclusion, we show that a connectionist component improves a state-of-the-art HMM system.

----------------------------------------------------------------------

UK:  The report is available from svr-ftp.eng.cam.ac.uk (129.169.8.1), in directory 'reports', file 'renals_icsi92-081.ps.Z'
USA: Available from ftp.icsi.berkeley.edu (128.32.201.7), in directory 'pub/techreports', file 'tr-92-081.ps.Z'

Sample FTP session:

unix% ftp ftp.icsi.berkeley.edu
ftp> binary
ftp> cd pub/techreports
ftp> get tr-92-081.ps.Z
ftp> quit
unix% zcat tr-92-081.ps.Z | lpr

--------------------------------------------------------------------
Steve Renals
Cambridge University Engineering Department
sjr at eng.cam.ac.uk
--------------------------------------------------------------------

From dyer at CS.UCLA.EDU Fri Jan 8 15:52:29 1993
From: dyer at CS.UCLA.EDU (Dr Michael G Dyer)
Date: Fri, 8 Jan 93 12:52:29 PST
Subject: regarding quantum neural computer announcement
Message-ID: <930108.205229z.26424.dyer@lanai.cs.ucla.edu>

Dr. S. Kak:

Regarding your quantum neural computer announcement: I am not a physicist and have not yet received your tech report (so I am really sticking my neck out), but it seems to me that there are two assumptions you make that are potentially controversial:

1. That intelligence needs something at the quantum level (this is something R. Penrose also argues for). To my knowledge, there is no evidence as yet for this. Chess playing machines are now at grandmaster level -- w/o quantum effects. Connectionist/neural models exhibit nice learning and robustness properties -- w/o quantum effects. Bayesian-based AI expert systems perform sophisticated reasoning-under-uncertainty tasks; there are natural language systems that answer questions about themselves, thus exhibiting some limited form of self-awareness, etc. -- all w/o needing to postulate quantum effects. At this point no persuasive argument has yet been made for needing quantum-level effects to solve any particular task involving reasoning, language, memory or perceptual processes and the like.
If there are, then I would certainly like to know them (Penrose sure never came up with any!)

2. That quantum-level phenomena could never be adequately simulated by a Turing machine (i.e. that reality is not computable). After reading a number of (non-specialist) books on quantum physics, I am not yet convinced of this. E.g., the collapse of the wave-form appears to be required as a result of the wave-particle duality, such as observed in the 2-slit experiment. But let's consider the 2-slit experiment. Individual electrons or photons hit a screen, one by one, and register like particles. Over time, however, their overall pattern is like that of waves (e.g. interference, diffraction).

But there's an approach that could produce similar results from completely deterministic equations -- i.e. chaos theory. For example, there are chaos games in which the dots generated on a screen jump around as if at random, but over time, a pattern emerges -- for instance, a fern (e.g. p. 238 of Gleick's Chaos book; see the sketch below). If something that complicated can be produced by a sequence of apparently random dots (particles), then why couldn't something as simple as a wave interference pattern also be produced in this way? This could turn out to be the case if the emission of a particle alters the field in such a way that the paths of subsequently emitted particles produce the desired, wave-like result. In this (albeit hand-waving) case, then, there would exist deterministic equations generating wave-like behavior, and the whole thing could ultimately be simulated by a Turing machine.

Like I said before, I am not a physicist, so perhaps you (or someone else on connectionists) could correct the (possibly gross) misunderstandings contained within my naive suggestion. In any case, I look forward to obtaining and reading your tech report.
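P.S. For concreteness, here is a minimal sketch (in Python) of the kind of chaos game I have in mind. It is just the standard Barnsley fern, with the usual published affine-map coefficients -- an illustration of random-looking dots accumulating into a structured pattern, not a model of the 2-slit experiment:

import random

# Barnsley's fern: four affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f),
# one of which is chosen at random (with fixed probability p) at every step.
MAPS = [
    #   a      b      c     d     e    f     p
    ( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00, 0.01),
    ( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60, 0.85),
    ( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60, 0.07),
    (-0.15,  0.28,  0.26, 0.24, 0.0, 0.44, 0.07),
]

def chaos_game(n_points=50000):
    """Generate fern points one dot at a time, like particles hitting a screen."""
    x, y = 0.0, 0.0
    points = []
    for _ in range(n_points):
        r, cum = random.random(), 0.0
        for a, b, c, d, e, f, p in MAPS:
            cum += p
            if r <= cum:  # pick this map with probability p
                x, y = a * x + b * y + e, c * x + d * y + f
                break
        points.append((x, y))
    return points

# Any single dot looks random; only the accumulated set of dots reveals the fern.
pts = chaos_game()

Each iteration is completely deterministic given the random choice of map, yet no individual dot tells you anything about the fern; the structure exists only in the ensemble -- which is the analogy I am drawing to the particle-by-particle build-up of an interference pattern.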
-- Michael Dyer

From marwan at sedal.su.oz.au Fri Jan 8 00:29:29 1993
From: marwan at sedal.su.oz.au (Marwan Jabri)
Date: Fri, 8 Jan 1993 16:29:29 +1100
Subject: ACNN'93 Conference Programme
Message-ID: <9301080529.AA21731@sedal.sedal.su.OZ.AU>

THE FOURTH AUSTRALIAN CONFERENCE ON NEURAL NETWORKS (ACNN'93)

CONFERENCE PROGRAMME
1st - 3rd FEBRUARY 1993
UNIVERSITY OF MELBOURNE, AUSTRALIA

PROGRAMME

Monday, 1st February 1993

8.30 - 9.15 am    Registration
9.15 - 9.30 am    Official Opening
9.30 - 10.30 am   Keynote Address
                  G Hinton, Department of Computer Science, University of Toronto, Canada
10.30 - 11.00 am  Morning Coffee

11.00 - 12.30 pm  Session 1

An Associative Memory Model for the CA3 Region of the Hippocampus
  M R Bennett, Neurobiology Research Centre; W G Gibson & J Robinson, School of Mathematics & Statistics, University of Sydney, Australia

Variable Threshold ART3 Neural Network
  P Lozo, Dept of Electrical & Electronic Engineering, University of Adelaide, Australia

Constrained Quadratic Optimization using Neural Networks
  A Bouzerdoum & T R Pattison, Dept of Electrical & Electronic Eng, University of Adelaide, Australia

12.30 - 3.00 pm   Lunch (including Student Poster Session)
3.00 - 3.30 pm    Afternoon Tea

3.30 - 5.00 pm    Session 2

The Sample Size Necessary for Learning in Multi-layer Networks
  P L Bartlett, Department of Electrical & Computer Engineering, University of Queensland, Australia

Implementing a Model for Line Perception
  B P McGregor & M L Cook, Centre for Visual Sciences, RSBS, Australian National University, Australia

Improving the Performance of the Neocognitron
  D R Lovell, D Simon & A C Tsoi, Department of Electrical & Computer Engineering, University of Queensland, Australia

5.00 - 7.30 pm    Poster Session 1

Tuesday, 2nd February 1993

9.00 - 10.30 am   Session 3

Improved Phoneme Recognition Using Multi-Module Recurrent Neural Networks
  L R Leerink & M Jabri, Dept of Electrical Engineering, University of Sydney, Australia

External Stimuli in Biased Attractor Neural Networks
  A N Burkitt, Computer Sciences Laboratory, RSPS, Australian National University, Australia

Activity Dependent Plasticity of Synapses in the Central Nervous System
  F H Guldner, Department of Anatomy, Khon Kaen University, Thailand

10.30 - 11.00 am  Morning Coffee

11.00 - 12.00 noon  A Method for Learning from Hints (Invited)
                    Y S Abu-Mostafa, California Institute of Technology, U S A

12.00 - 1.30 pm   Lunch

1.30 - 3.00 pm    Session 4

A VLSI Arrhythmia Classifier
  P H W Leong & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

The Associative Conditioning Element
  B L Rogers, Information Technology Institute, Swinburne University of Technology, Australia

Establishing Analogical Mappings by Synchronizing Oscillators
  B D Burns, J E Hummel & K J Holyoak, Department of Psychology, University of California, Los Angeles, U S A

3.00 - 3.30 pm    Afternoon Tea

3.30 - 5.00 pm    Session 5

Experimental Low Cost Neural Networks for Spoken Language Understanding
  A Kowalczyk & M Dale, Telecom Research Laboratories, Australia

A Neural Network Implementation of Sokolov's Model of Habituation of the Orienting Reflex
  B A Daniels, Department of Psychology, University of Tasmania, Australia

Moving Image Compression and Regeneration by Associative Retina Chip
  Y Nakamura, M Ikegami & M Tanaka, Faculty of Science & Technology, Sophia University, Japan

5.00 - 7.00 pm    Poster Session 2
7.00 - 9.00 pm    BBQ and drinks

Wednesday, 3rd February 1993

9.00 - 10.00 am   A Spectral Domain Associative Memory with Improved Recall (Invited)
                  B Hunt, M S Nadar, P Keller, E Van Colln
& A Goyal, Department of Electrical & Computer Engineering, University of Arizona, USA

10.00 - 10.30 am  Morning Coffee

10.30 - 11.30 am  Session 6

Modelling Context Effects in Human Character Recognition Using Interconnected Single-Layer Perceptrons
  C Latimer, C Stevens & M Charles, Department of Psychology, University of Sydney, Australia

Learning Nonlinearly Parametrized Decision Regions
  K L Halliwell, R C Williamson & I M Y Mareels, Interdisciplinary Engineering Program, Australian National University, Australia

11.30 - 1.00 pm   Lunch (including Ideas-in-Progress Posters)

1.00 - 2.00 pm    Session 7

A Comparison of Three On Chip Neuron Designs for a Low Power VLSI MLP
  R J Coggins, M A Jabri & S Pickard, Department of Electrical Engineering, University of Sydney, Australia

Developments to the CMAC Neural Net
  C S Berger, Department of Electrical & Computer Systems Engineering, Monash University, Australia

2.00 - 2.15 pm    Closing

Poster Session 1
Monday, 1st February 1993, 5.00 - 7.30 pm

The Effect of Representation on Error Surface
  S Phillips, Department of Computer Science, University of Queensland, Australia

Statistical Analysis of a Parallel Dynamics Autoassociative Memory Network
  A M N Fu & W G Gibson, School of Mathematics & Statistics, University of Sydney, Australia

Error Bounds for Approximation by Feedforward Neural Networks
  M Ma & A C Tsoi, Department of Electrical Engineering, University of Queensland, Australia

A Comparative Study between SGNT and SONN
  W Wen, V Pang & A Jennings, AISS/TSSS, Telecom Research Laboratories, Australia

A Critical Look at Adaptive Logic Networks
  S Parameswaran & M F Schulz, Department of Electrical & Computer Engineering, University of Queensland, Australia

Single and Dual Transistor Synapses for Analogue VLSI Artificial Neural Networks
  B Flower & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

Well-Balanced Learning by Observing Individual Squared Errors
  K Kohara & T Kawaoka, NTT Network Information Systems Laboratories, Japan

Optimization of Multi-Layer Neural Networks using Gauss-Newton Minimization
  A Bainbridge-Smith, M A Stoksik & R G Lane, Department of Electrical & Electronic Engineering, University of Tasmania, Australia

Exception Learning by Backpropagation: A New Error Function
  P Bakker & J Wiles, Department of Computer Science, University of Queensland, Australia; R Lister, Department of Electrical & Computer Engineering, University of Queensland, Australia

Genetic Optimization and Representation of Neural Networks
  M Mandischer, Department of Computer Science VI, University of Dortmund, Germany

Growing Context Units in Simple Recurrent Networks Using the Statistical Attribute of Weight Updates
  L R Leerink & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

A Nonlinear Model for Human Associative Memory Based on Error Accumulation
  R A Heath, Department of Psychology, University of Newcastle, Australia

Towards Connectionist Realization of Fuzzy Production Systems
  N K Kasabov, Department of Information Science, University of Otago, New Zealand

A Stable Neural Controller for Nonminimum Phase Systems
  S K Mazumdar & C C Lim, Department of Electrical & Electronic Engineering, University of Adelaide, Australia

An Adaptive Neural Net Based Spectral Classifier
  J T Hefferan & S Reisenfeld, School of Electrical Engineering, University of Technology Sydney, Australia

Neuro-Morphology of Biological Vision: Fractional Discriminant Functions in the Emulation of Visual Receptive Fields for Remote Sensed Images
  S K Hungenahally & A Postula, School of
Microelectronic Engineering, Griffith University, Australia; L C Jain, School of Electronic Engineering, University of South Australia, Australia

Automated Acquisition of Rules for Diagnosis
  S Sestito & S Goss, Air Operations Division, DSTO Aeronautical Research Laboratory, Australia; G Merrington & R Eustace, Flight Mechanics & Propulsion Division, DSTO Aeronautical Research Laboratory, Australia

Neural Networks to Compute Global Pattern Rotation and Dilation
  J S Chahl & M V Srinivasan, Centre for Visual Sciences, RSBS, Australian National University, Australia

A Neural Architecture with Multiple Scales of Organisation
  D Alexander, Behavioral Sciences, Macquarie University, Australia

Error and Variance Bounds in Multilayer Neural Networks
  D R Lovell & P L Bartlett, Department of Electrical & Computer Engineering, University of Queensland, Australia

What Size Higher Order Network gives Valid Generalization?
  S Young & T Downs, Department of Electrical Engineering, University of Queensland, Australia

Poster Session 2
Tuesday, 2nd February 1993, 5.00 - 7.00 pm

RPROP: A Fast and Robust Backpropagation Learning Strategy
  M Riedmiller & H Braun, Institut fur Logik, Komplexitat und Deduktionssysteme, University of Karlsruhe, Germany

A VLSI Switched Capacitor Realisation of An Artificial Synapse and Neuron Suitable for Nano-Power Multi-Layer Perceptrons
  S Pickard & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

PANNE: A Parallel Artificial Neural Network Engine
  S Guda, B Flower & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

The Self-Growing Feed-Forward Counterpropagation Network
  S J Bye, Telecom Research Laboratories, Australia; A Adams & P Vamplew, Artificial Neural Network Research Group, University of Tasmania, Australia

Pruning Feed-forward Neural Networks
  A N Burkitt, Computer Sciences Laboratory, RSPS, Australian National University, Australia; P Ueberholz, Physics Department, University of Wuppertal, Germany

A Comparison of Architectural Alternatives for Recurrent Networks
  W H Wilson, School of Computer Science & Engineering, University of New South Wales, Australia

An Entropy Based Feature Evaluation and Selection Technique
  Z Chi & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

Designing and Training a Multi-Net System with varying Algorithms and Architectures
  M Arnold & M A Jabri, Department of Electrical Engineering, University of Sydney, Australia

The Mind's Eye: Extraction of Structure from Images of Objects with Natural Variability
  T J Stucke & G Coghill, Department of Electrical & Electronic Engineering; G C Creak, Department of Computer Science, University of Auckland, New Zealand

Comparison of a Back-Propagation Model of Music Recognition and Human Performance
  C Stevens & C Latimer, Department of Psychology, University of Sydney, Australia

Analysis of a Neural Network with Application to Human Memory Modelling
  M Chappell & M S Humphreys, Department of Psychology, University of Queensland, Australia

An Art Model of Human Recognition Memory
  A Heathcote, Psychology Department, University of Newcastle, Australia

Comparison of Different Neighbourhood Size in Simulated Annealing
  X Yao, Department of Computer Science, University College, University of New South Wales, ADFA, Australia

Classification of Incomplete Data using Neural Networks
  M L Southcott & R E Bogner, Department of Electrical & Electronic Engineering, University of Adelaide, Australia

Feature Extraction Using Neural Networks
  S V R Madiraju, Z Q Liu & T M Caelli, Department of
Computer Science University of Melbourne, Australia Application of Neural Networks to Quantitative Structure-Activity Relationships of Benzodiazepine/GABAA Receptor Binding Compounds D J Maddalena & G Johnston Department of Pharmacology University of Sydney, Australia Word-boundary Detection using Recurrent Neural Networks L R Leerink & M A Jabri Department of Electrical Engineering University of Sydney, Australia Classification by Single Hidden Layer ANN G Chakrabnorty, N Shiratori & S Noguchi Division of Engineering Tohoku University, Japan Unification in Prolog by Connectionist Models Volker Weber Computer Science Department University of Hamburg, Germany Ideas-in-Progress Posters A Feedforward Neural Network with Complex Weights E Skafidas & M Palaniswami Department of Electrical & Electronic Engineering University of Melbourne, Australia A Method of Training Multi-Layer Networks with Heaviside Characteristics using Internal Representations R J Gaynier & T Downs Department of Electrical & Computer Engineering University of Queensland, Australia Comparing Computed Neural Nets and Living Brains C J A Game Department of Surgery University of Sydney & SEDAL, Australia Neural Networks as Direct Adaptive Controllers M Bahrami School of Electrical engineering University of New South Wales, Australia Performance Criteria for Stop Consonant Identification Using Artificial Neural Nets R Katsch, P Dermody & D Woo Speech Communication Research Group National Acoustic Laboratories, Australia Classifying High Dimensional Spectral Data by Neural Networks R A Dunne Murdoch University, Australia N A Campbell & H T Kiiveri Division of Mathematics & Statistics C S I R O, Australia Registration The conference is being held at the Prince Philip Theatre in the Architecture and Planning Building in Masson Rd (marked on the attached map of Melbourne University). Registration is available in the foyer of the Architecture and Planning Building from 4-6pm on Sunday, 31 January, and from 8.30 each morning of the Conference. Registration fees: Academic A$200.00 Student A$ 75.00 Other A$300.00 Accommodation For accommodation booking, please contact Conference Associates, Tel/Fax: +61 (3) 887 8003. Accommodation has been block booked at: Ormond College Student $ 32.00 (Bed&Breakfast) University of Melbourne Other $ 42.00 The Town House Standard $ 98.00 701 Swanston St Executive $110.00 Carlton (rates include continental breakfast) Lygon Lodge Standard $ 83.00 220 Lygon St Deluxe $ 95.00 Carlton Standard 3 bed room $95.00 Train (Sydney-Melbourne) Economy Return $ 98.00 1st Class Return $158.00 Depart Sydney 8.05 pm Melbourne 8.00 pm Arrive Melbourne 9.10 am Sydney 9.00 am Local Air Travel Information Ansett are the official Carrier for ACNN'93 and are providing a number of services. However, only the normal discount air fares are available for delegates. If booking with Ansett please quote the Master File number: MC 04351. The reservation phone number is 131300. Discount tickets have to be booked at least 14 days in advance. However only a limited number of seats will be available at these prices so book asap and avoid disappointment. Transport from Airport Tullamarine Airport, located 20 km north-west of the city centre on the Tullamarine Freeway, is open 24 hours a day and handles both international and domestic flights at terminals in the same building. The Skybus Airport Coach service runs regularly to the city with a transfer time of 30-40 minutes. 
The airline and greyhound terminals in the city are adjacent to Swanston St. The contact number for Skybus is (03) 335 3066. Cab fare is about $20-25.

Transport to the University of Melbourne

The conference is situated in the Architecture and Planning Building in Masson Rd. Tram stop No. 10 in Swanston St (between Faraday and Elgin Sts) is directly opposite. Trams No. 1 and 15 are appropriate, and can be caught from Museum and Flinders St Stations.

Parking

Parking is not available on campus during regular hours. There are all-day meters at 20 cents/hour to the north of the University in College Crescent and in Lygon St adjacent to the Cemetery, and free all-day parking in Princes Park Drive. Undercover parking is available at $6/day at Tower in Drummond St, and there is also all-day parking at $5 at the Exhibition Buildings in Rathdown St. Note that Monday, 1 February is a public holiday in Victoria; limited all-day parking may be available on campus for $5.

On-Campus Facilities

There is a Commonwealth Bank and a Post Office in the building in which the conference is held. The Union is nearby. There are bookshops, chemists and food stops in the Union. Note that the Union is closed on Monday, 1 February.

Lunch

Lygon St is a short walk east of the conference venue. There are restaurants, food bars, take-away and eat-in options to meet all budget and dietary tastes. The street is open all week, including the Australia Day holiday on 1 February.

Messages to Delegates

The Conference Registration Desk can be contacted on (03) 344-7962, Sunday 4-6pm and from 8.30am Monday to Wednesday. Messages can also be left with the CITRI receptionist on (03) 282-2400 if the registration desk is unattended.

Weather

Melbourne is generally warm with sunny days in February. The average maximum temperature is 26 degrees, with an average overnight temperature of 15 degrees. There are often hot periods in excess of 33 degrees, followed by a change with thunderstorms. The maximum temperature recorded is 43 degrees, the lowest 4 degrees. There is a 25% chance of rain.
Sponsors

Ansett Airlines
Australian Telecommunications & Electronics Research Board
Carlton United Breweries
CITRI, University of Melbourne
Defence Science & Technology Organisation
SEDAL, University of Sydney
Telecom Research Laboratories

ACNN'93 Organising Committee

Conference Chairman
Dr Marwan Jabri, University of Sydney

Technical Programme Chairs
Dr Andrew Jennings, Telecom Research Laboratories
Dr Stephen Pickard, University of Sydney

Technical Committee
Prof Yianni Attikiouzel, University of Western Australia
Prof Max Bennett, University of Sydney
Prof Bob Bogner, University of Adelaide
Dr Joel Bornstein, University of Melbourne
Ms Angela Bowles, BHP Research Melbourne
Prof Terry Caelli, University of Melbourne
Prof Max Coltheart, Macquarie University
Dr Phil Diamond, University of Queensland
Mr Barry Flower, University of Sydney
Dr Bill Gibson, University of Sydney
A/Prof Richard Heath, University of Newcastle
Dr Andrew Jennings, Telecom Research Laboratories
Dr Adam Kowalczyk, Telecom Research Laboratories
Prof Bill Levick, Australian National University
Dr D Nandagopal, Defence Science & Technology Organisation
Dr M Palaniswami, Defence Science & Technology Organisation
Dr Stephen Pickard, University of Sydney
Dr Nick Redding, Defence Science & Technology Organisation
Dr M Srinivasan, Australian National University
Prof Ah Chung Tsoi, University of Queensland
Dr Janet Wiles, University of Queensland
Dr Bob Williamson, Australian National University

Local Committee
Dr Joel Bornstein, University of Melbourne
Ms Angela Bowles, BHP Research Melbourne
Prof Terry Caelli, University of Melbourne
Dr Victor Ciesielski, Royal Melbourne Institute of Technology
Dr Simon Goss, Defence Science & Technology Organisation
Dr Andrew Jennings, Telecom Research Laboratories
Dr Adam Kowalczyk, Telecom Research Laboratories

Institutions Liaison & Publicity
Dr Simon Goss, Defence Science & Technology Organisation

Sponsorship
Dr Andrew Jennings, Telecom Research Laboratories

Publications
Mr Philip Leong, University of Sydney

From rohwerrj at cs.aston.ac.uk Sat Jan 9 09:51:10 1993
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Sat, 9 Jan 93 14:51:10 GMT
Subject: Quantum neural computer
Message-ID: <16307.9301091451@cs.aston.ac.uk>

> Hitherto all computers have been designed based on classical laws.
> We consider the question of building a quantum neural computer and
> speculate on its computing power. We argue that such a computer
> could have the potential to solve artificial intelligence problems.

It is difficult to argue that quantum computation plays an important role in everyone's favorite intelligent computer, the human brain. The characteristically 'quantum' properties of quantum computers, such as the ability to run a superposition of programs simultaneously on a single machine, arise only if the computer is a totally isolated system; i.e., it exchanges not a single quantum of energy with its environment. The brain fails this test pathetically.

As Professor Kak's TR probably makes clear [my apologies for posting this before obtaining it], the engineering requirements for building any quantum computer, neurally-inspired or not, are quite severe. Furthermore, the required programming style is bizarre: to prevent energy dissipation, programs must be written so that all intermediate results are eventually erased.

> Intelligence, and by implication consciousness, has been taken by
> many computer scientists to emerge from the complexity of the
> interconnections between the neurons.
> But if it is taken to be a
> unity, as urged by Schrodinger and other physicists, then it should
> be described by a quantum mechanical wave function. No representation
> in terms of networking of classical

I am sympathetic to the view that quantum superposition has something important to do with mind. Otherwise it would be (even more) difficult to explain the quantum measurement process. But for the reasons given, I don't think quantum computers are the missing link.

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs

From kak at max.ee.lsu.edu Fri Jan 8 16:58:47 1993
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Fri, 8 Jan 93 15:58:47 CST
Subject: regarding quantum neural computer announcement
Message-ID: <9301082158.AA18390@max.ee.lsu.edu>

Dr Dyer: I am aware that objections of the kind that you have made will be offered by many computer scientists. In response I can only say that no chaos-based quantum theory has yet emerged. Indeed, the study of the EPR proposal suggests that such a theory should not exist, because there seems to be action-at-a-distance behavior at the level of single particles. Anyway, my purpose is to start a debate, and the overwhelming response I have received in a single day suggests that such a debate may soon be joined. Thanks, -Subhash Kak

From yves at netid.com Fri Jan 8 18:19:17 1993
From: yves at netid.com (Yves Chauvin)
Date: Fri, 8 Jan 93 15:19:17 PST
Subject: Preprint available
Message-ID: <9301082319.AA15957@netid.com>

**DO NOT FORWARD TO OTHER GROUPS**

The following paper, "Hidden Markov Models in Molecular Biology: New Algorithms and Applications", has been placed in the neuroprose archive. It is to be published in the Proceedings of the 1992 NIPS conference. Further information and retrieval instructions are given below.

Yves Chauvin
yves at netid.com
___________________________________________________________________________

Hidden Markov Models in Molecular Biology: New Algorithms and Applications

Pierre Baldi
Jet Propulsion Laboratory and Division of Biology, California Institute of Technology, Pasadena, CA 91109

Yves Chauvin
Net-ID, Inc.

Tim Hunkapiller
Division of Biology, California Institute of Technology

Marcella A. McClure
Department of Ecology and Evolutionary Biology, University of California, Irvine

We introduce a new convergent learning algorithm for HMMs that, unlike the classical Baum-Welch algorithm, is smooth and can be applied on-line or in batch mode, with or without the usual Viterbi most likely path approximation. HMMs are then trained to represent and align several protein families including immunoglobulins and kinases. In all cases, the trained models seem to capture the statistical properties characteristic of the families.
___________________________________________________________________________

A complete technical report and related preprints are available upon written request to the first author.
___________________________________________________________________________

Retrieval instructions: The paper is baldi.compbiohmm.ps.Z in the neuroprose archive. To retrieve this file from the neuroprose archives:

unix> ftp cheops.cis.ohio-state.edu
Name (cheops.cis.ohio-state.edu:becker): anonymous
Password: (use your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get baldi.compbiohmm.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for baldi.compbiohmm.ps.Z
ftp> quit
unix> uncompress baldi.compbiohmm.ps.Z
unix> lpr baldi.compbiohmm.ps

From cds at vaxserv.sarnoff.com Mon Jan 11 12:00:45 1993
From: cds at vaxserv.sarnoff.com (Clay Spence x3039)
Date: Mon, 11 Jan 93 12:00:45 EST
Subject: RE> quantum neural computer announcement
Message-ID: <9301111700.AA06064@peanut.sarnoff.com>

A comment on Dr. Dyer's comments on Dr. Kak's announcement:

> 2. that quantum-level phenomena could never be adequately simulated
> by a Turing machine (i.e. that reality is not computable).
>
> After reading a number of (non-specialist) books on quantum physics, I am
> not yet convinced of this. ...
>
> But there's an approach that could produce similar results from completely
> deterministic equations -- i.e. chaos theory. ...
>
> In this (albeit hand-waving) case, then, there would exist
> deterministic equations generating wave-like behavior and the whole
> thing could be ultimately simulated by a Turing machine.

Chaos and quantum mechanics are not equivalent; in quantum mechanics a system has observable properties that one can measure, but in most interpretations it doesn't make sense to say that the properties had those values before the measurement was made; e.g., a particle apparently doesn't have a position until the position is measured. This has experimental consequences which have been verified. (The reference that comes to my mind [Mermin, 1985] is slightly old, but very readable.) This kind of effect cannot be produced by a chaotic, deterministic system of particles. However, one can simulate a quantum system on an ordinary computer by solving Schroedinger's equation numerically and randomly choosing measurement results with probability given by the squared magnitude of the wave function. So the conclusion is correct, "the whole thing could be ultimately simulated by a Turing machine", to the extent that one can simulate the quantum system accurately and to the extent that an ordinary computer is like a Turing machine (I'm not a computer scientist). I have no idea whether quantum effects could add anything to a machine's ability to compute or "reason."

Mermin, N.D., 1985. Physics Today, Vol. 38, No. 4, p. 38.

Clay Spence

From mitsu at netcom.com Mon Jan 11 16:55:43 1993
From: mitsu at netcom.com (Mitsu Hadeishi)
Date: Mon, 11 Jan 93 13:55:43 -0800
Subject: Quantum neural computer
Message-ID: <9301112155.AA27873@netcom3.netcom.com>

> It is difficult to argue that quantum computation plays an important
> role in everyone's favorite intelligent computer, the human brain.
> The characteristically 'quantum' properties of quantum computers,
> such as the ability to run a superposition of programs simultaneously
> on a single machine, arise only if the computer is a totally isolated
> system; i.e., it exchanges not a single quantum of energy with
> its environment. The brain fails this test pathetically.

This is not correct, as I understand it: a quantum measurement does not necessarily collapse the entire wave function of a system, and even if it did, the mere fact of the exchange of energy with another system does not in fact entail a quantum measurement. If this statement were correct, then measuring *anything* coming out of *any* system would allow you to determine the precise state of every single particle in the emitting system.
Consider also the fact that a measurement can be ambiguous between different wave functions: i.e., you may detect a photon, but you don't necessarily know where the photon came from.

Mitsu Hadeishi
General Partner, Open Mind
mitsu at well.sf.ca.us
mitsu at netcom.com

From gary at cs.UCSD.EDU Mon Jan 11 22:41:33 1993
From: gary at cs.UCSD.EDU (Gary Cottrell)
Date: Mon, 11 Jan 93 19:41:33 -0800
Subject: another Dognitive Science seminar
Message-ID: <9301120341.AA14499@odin.ucsd.edu>

SEMINAR

Oscillations in Dog Cortex: A new approach

Garrison W. Cottrell
Department of Dog Science
Southern California Condominium College

Recent work (Blackie & Wolf, 1990) has shown that when a canine is attending to a stimulus, the neurons representing that stimulus fire in synchrony. It has been suggested that this is the mechanism by which the stimulus features are bound together in the dog's brain. It has also been suggested that whole object recognition occurs in the Inferior Temporal (IT) Cortex of the dog[1]. The question then arises: How are the oscillations in one part of the brain used by the object recognition system in another part? If IT is indeed an inferior temporal processor, how could it possibly make use of such temporal information?

Part of the problem in studying such phenomena is that the brain processes things so fast, it is difficult to measure recognition events among the blooming, buzzing confusion in the cortex. Hence we suggest that the ideal subjects for studying such processes are older dogs, who appear to have far fewer neurons[2], and those that remain run at a much more leisurely pace. A second reason for studying elderly dogs is that they sleep a great deal, and this is an ideal time to study the baseline activity of the recognition system. If the Boltzdogg machine model (Hilton & Slugowski, 1986) is correct, the oscillations observed during sleep reflect the structure of the system in isolation from the environment.

There are many difficulties in assessing brain activity. Single cell recordings are at too low a level to assess symbolic activity. Evoked potential studies are good at temporal resolution but poor at source identification. PET studies are useful for localization but have poor temporal resolution[3]. We have discovered a non-invasive technique for studying oscillations in dog brains that also gives us the sources in an unambiguous way. We have found that, contrary to popular belief, leg locomotion during sleep does not mean that dogs are chasing rabbits in their dreams. Rather, neuromodulators released during sleep rewire the output of the recognition system to the leg musculature. Thus, the leg twitches are a direct reflection of cortical oscillations in the four complex object recognition regions[4] in the dog brain. An immediate observation is that the food & master (left & right rear legs) regions oscillate 180 degrees out of phase with the sex & cat face regions (left & right front legs). Thus one can observe right away that representation is a process, since just like a computer process, this one runs, and eventually halts. The major difference is that this process runs when asleep (cf. Unix(tm) sleep(1)).

This is a great new instrument for assessing the behavior of the brain, since we avoid problems with animal rights people by using a non-invasive technique. Since the older dog is asleep so much, he presents a terrific wealth of data on brain activity.
On a more philosophical note, it suggests that meaning representations are oscillations all the way down - suggesting that West Coast researchers who are into "getting the vibes" are not that far off in their approach. The dogleg technology is certainly giving Dognitive Science a leg up on what's happening in the dog's brain. A live demonstration of leg twitching during sleep will be presented at the talk.

------------

[1] It is unclear why this part of cortex is deemed to be inferior, since it plays such an important role. Some have suggested that the name means that it is bad at temporal processing, and makes up for this lack by being good at spatial processing.

[2] Some believe that older dogs' grandmother cells have gone to rest homes. And, although researchers can't agree whether these neurons are simply decaying or are being actively suppressed, it must be true that they can't have all checked out, since Jellybean at 15;10 still recognizes his food. However, he also recognizes grass, dirt, and rotting logs as food, which suggests a degraded distributed representation.

[3] Aside from the fact that they do not record activity, CAT scans are obviously an inappropriate tool for studying dognition.

[4] It is generally accepted that the four recognition systems are divided modularly into the FOOD, OPPOSITE SEX, MASTER and CAT FACES regions. However, there is some argument whether the latter region is recognizing cat faces, upside down monkey faces, or paint brushes (Parrot, 1989).

From mume at sedal.su.oz.au Tue Jan 12 00:59:24 1993
From: mume at sedal.su.oz.au (Multi-Module Environment)
Date: Tue, 12 Jan 1993 16:59:24 +1100
Subject: change of addresses for MUME. Now: mume-request@sedal.su.OZ.AU, mume-bugs@sedal.su.OZ.AU and MUME@sedal.su.OZ.AU
Message-ID: <9301120559.AA04531@sedal.sedal.su.OZ.AU>

Dear Connectionists,

Apologies to those who tried to send mail to MUME-Request and MUME-Bugs. Our sendmail couldn't handle these names. As a consequence, these have been changed to mume-request and mume-bugs, respectively. So to:

1. Register yourself in the mailing list, mail to mume-request at sedal.su.OZ.AU with your email address on the 'Subject:' line.
2. Send mail to everybody in the mailing list, send it to: MUME at sedal.su.OZ.AU. This address is unchanged.
3. Report bugs, mail: mume-bugs at sedal.su.OZ.AU

MUME

From Paul_Gleichauf at B.GP.CS.CMU.EDU Mon Jan 11 12:25:01 1993
From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU)
Date: Mon, 11 Jan 93 12:25:01 EST
Subject: regarding quantum neural computer announcement
In-Reply-To: Your message of "Fri, 08 Jan 93 12:52:29 PST." <930108.205229z.26424.dyer@lanai.cs.ucla.edu>
Message-ID: <120.726773101@B.GP.CS.CMU.EDU>

Fellow Connectionists,

Michael Dyer has raised some interesting issues reminiscent of past discussions that were inspired by Roger Penrose's book "The Emperor's New Mind". Whether they have anything to do with Dr. Kak's paper, or his very sketchy initial announcement, will require some careful reading. I am sure that we want to be a bit cautious about getting into a debate about quantum computation and its ostensible relationship to intelligence or consciousness.

In particular, the notion that there exist deterministic equations that govern the evolution of a pure particle description is subject to very strict limitations. One of the problems with chaos-based deterministic equations that might be hypothesized as governing quantum theory is the assumption that there are "hidden variables" that evolve such systems.
The coefficients of the governing equations, which are so important in chaotic systems, have a very strictly prescribed role in quantum theory. These include the prohibition of local hidden variables as coefficients for such equations. This is a very tough hurdle for chaotic models of quantum mechanics. There are papers by J. S. Bell that first broached the issue of the viability of hidden variables in terms of verifiable experimental predictions that are in contradiction with those of quantum mechanics. They, and other technical references to this subject, are collected in Quantum Theory and Measurement, ed. by J. A. Wheeler and W. Zurek, Princeton University Press, 1983.

In a useful sense the equations of quantum theory are quite deterministic; it is just that they determine the development of wavefunctions, which are used to compute the probability of measurements. The measured results are probabilistic; the fundamental theoretical building blocks of the theory are not.

An interesting sidebar is that fairly recent efforts to prove the computational power of so-called quantum computers have not expanded the definition of computability beyond Turing machines. There are some papers which are referenced by Penrose in his book, and there have been some serious efforts to build real devices that test the theory and have produced consistent results. So if some of us are looking to the quantum to provide more computational power than Turing's machine, we may either have a much more fundamental problem to examine, the foundation of quantum mechanics, or we may be looking to false gods. Quantum mechanics really is a theory of probability amplitudes, and its predictions are consistent with experiments. I know of no evidence yet that the predictions of quantum mechanics are not Turing computable. I regard my own hopes that there might be as romantic notions not substantiated by science.

Maybe that is why so many of us have apparently responded to Dr. Kak's announcement as if it claims not only the potential of quantum computation to solve artificial intelligence problems, but the necessity.

Paul

From kak at max.ee.LSU.EDU Mon Jan 11 17:24:12 1993
From: kak at max.ee.LSU.EDU (Dr. S. Kak)
Date: Mon, 11 Jan 93 16:24:12 CST
Subject: regarding quantum neural computer announcement
Message-ID: <9301112224.AA10868@max.ee.lsu.edu>

Let me add a couple of comments to the valuable remarks of Paul Gleichauf.

First, outside the house that the connectionists have built, interesting winds have started to blow. Information processing has become a central concern of basic physics. This is evidenced by the projected SYMPOSIUM ON THE FOUNDATIONS OF MODERN PHYSICS -- Quantum Measurement, Irreversibility and the Physics of Information, to be held in Cologne, June 1-5, 1993. [For information contact Peter Mittelstaedt; email pb at thp.uni-koeln.de]

Second, since the famous experiment by Aspect et al to test Bell's inequality in 1982, it is generally agreed that EPR correlations appear to be of the action-at-a-distance type. Some assert that "measurements or observations, in the sense required by quantum theory, can only be made by conscious observers". Might the concept of "conscious observer" as used by the qm-theorist have something to do with the conscious observer at the back of cognitive centers? This becomes a plausible question. One is encouraged to think that enlarging the connectionist paradigm in different ways so as to capture aspects of the qm framework might be useful. How this might be done needs to be figured out.
In any event, getting some fresh air in could do no harm.

"Should necessity or chance be considered as the cause?"
-Shveta-ashvatara Upanishad

-Subhash Kak

From ttj10 at eng.cam.ac.uk Tue Jan 12 07:01:53 1993
From: ttj10 at eng.cam.ac.uk (ttj10@eng.cam.ac.uk)
Date: Tue, 12 Jan 93 12:01:53 GMT
Subject: Technical report: real pole balancing
Message-ID: <25070.9301121201@fear.eng.cam.ac.uk>

The following technical report is available via the Cambridge University ftp archive svr-ftp.eng.cam.ac.uk. Instructions for retrieval from the archive follow the summary.
------------------------------------------------------------------------------

Pole Balancing on a Real Rig using a Reinforcement Learning Controller

Timothy Jervis and Frank Fallside
Cambridge University Engineering Department
Cambridge CB2 1PZ, England

Abstract

In 1983, Barto, Sutton and Anderson~\cite{Barto83} published details of an adaptive controller which learnt to balance a simulated inverted pendulum. This {\em reinforcement learning} controller balanced the pendulum as a by-product of avoiding a cost signal delivered to the controller when the pendulum fell over. This paper describes their controller learning to balance a real inverted pendulum. As far as the authors are aware, this is the first example of a reinforcement learning controller being applied to a real inverted pendulum learning in real time. The results show that the controller was able to improve its performance as it learnt, and that the task is computationally tractable. However, the implementation was not straightforward. Although some of the controller's parameters were tuned automatically by learning, some were not and had to be carefully set for successful control. This limits the usefulness of this kind of learning controller to small problems which are likely to be better controlled by other means. Before a learning controller can tackle more difficult problems, a more powerful learning scheme has to be found.
------------------------------------------------------------------------------

FTP INSTRUCTIONS

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (your_userid at your_site)
ftp> cd reports
ftp> binary
ftp> get jervis_tr115.ps.Z
ftp> quit
unix> uncompress jervis_tr115.ps.Z

If "ftp svr-ftp.eng.cam.ac.uk" does not work, you might try "ftp 129.169.24.20".

From berg at cs.albany.edu Tue Jan 12 13:36:03 1993
From: berg at cs.albany.edu (George Berg)
Date: Tue, 12 Jan 93 13:36:03 EST
Subject: Computational Biology Postdoc
Message-ID: <9301121836.AA11587@daedalus.albany.edu>

Postdoctoral Position in Computational Biology

A one-year postdoctoral position supported by an NSF grant is available to study protein secondary and tertiary structure prediction using artificial intelligence and other computational techniques. The position is available starting in March 1993, or later. The successful applicant will have a strong background in the biochemistry of protein structure. Ability to program is a must. Experience with artificial neural networks is a definite plus. Preferred candidates will have experience with C, UNIX, and molecular modeling. For further information, contact either George Berg (Department of Computer Science) or Jacquelyn Fetrow (Department of Biological Sciences) by electronic mail at postdoc-info at cs.albany.edu.
To apply, please send curriculum vitae and three letters of recommendation to:

Jacquelyn Fetrow
Department of Biological Sciences
University at Albany
1400 Washington Avenue
Albany, NY 12222

From alexis at CS.UCLA.EDU Tue Jan 12 14:22:27 1993
From: alexis at CS.UCLA.EDU (Alexis Wieland)
Date: Tue, 12 Jan 93 11:22:27 -0800
Subject: RE> quantum neural computer announcement
In-Reply-To: Clay Spence x3039's message of Mon, 11 Jan 93 12:00:45 EST <9301111700.AA06064@peanut.sarnoff.com>
Message-ID: <9301121922.AA20375@maui.cs.ucla.edu>

> Chaos and quantum mechanics are not equivalent; ...
> ...
> ... This kind of effect cannot be produced by a chaotic,
> deterministic system of particles. However, one can simulate a quantum
> system on an ordinary computer by solving Schroedinger's equation
> numerically and randomly choosing measurement results with probability
> given by the squared magnitude of the wave function.
> ...

To pick a nit: Since a correctly operating (pseudo) random number generator on a (conventional) computer *is* "produced by a chaotic, deterministic system", your assertion that you can use a computer to solve Schroedinger's equations and then select (pseudo) randomly based on the resulting probability distribution is equivalent to saying that you *can* produce this effect using a (chaotic) deterministic system.

I would claim that Mike Dyer's assertion, at least at the intentionally "hand-waving" degree to which it was presented, remains valid: (a) it is far from clear that quantum effects are required to create "machine intelligence", but even if they are, (b) it is far from clear that functionally equivalent computational effects cannot be generated by a Turing machine.

- alexis.

From wray at ptolemy.arc.nasa.gov Mon Jan 11 00:59:14 1993
From: wray at ptolemy.arc.nasa.gov (Wray Buntine)
Date: Sun, 10 Jan 93 21:59:14 PST
Subject: IND Version 2.1 tree software available
Message-ID: <9301110559.AA14415@ptolemy.arc.nasa.gov>

IND Version 2.1 - creation and manipulation of decision trees from data
----------------------------------------------------------------------

A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which (hopefully) has good prediction of classes on new data. Standard algorithms are CART (by Breiman, Friedman, Olshen and Stone) and ID3 and its successor C4.5 (by Quinlan). More recent techniques are Buntine's smoothing and option trees, Wallace and Patrick's MML method, and Oliver and Wallace's MML decision graphs which extend the tree representation to graphs. IND reimplements and integrates these methods. The newer methods produce more accurate class probability estimates that are important in applications like diagnosis.

IND is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. One of the attributes is delegated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research.
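[Ed. note: A minimal sketch of the recursive partitioning loop described above, added for readers unfamiliar with tree growing. It is purely illustrative: it assumes numeric attributes, and none of these names are IND's actual routines (IND itself is written in C and driven by csh scripts, as the announcement goes on to explain).]

    import math
    from collections import Counter

    def entropy(labels):
        # Shannon entropy of the class distribution at a node.
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def grow(rows, labels, depth=0, max_depth=5, min_size=2):
        # Stop when the node is pure, small, or deep enough; the leaf holds
        # class probability estimates rather than a bare class label.
        if len(set(labels)) == 1 or len(rows) < min_size or depth == max_depth:
            n = len(labels)
            return {"leaf": {k: v / n for k, v in Counter(labels).items()}}
        best = None
        for attr in range(len(rows[0])):                      # each attribute...
            for value in sorted(set(r[attr] for r in rows)):  # ...each threshold
                left = [i for i, r in enumerate(rows) if r[attr] <= value]
                right = [i for i, r in enumerate(rows) if r[attr] > value]
                if not left or not right:
                    continue
                # Weighted entropy of the two children; smaller is better.
                score = sum(len(p) / len(rows) * entropy([labels[i] for i in p])
                            for p in (left, right))
                if best is None or score < best[0]:
                    best = (score, attr, value, left, right)
        if best is None:                                      # no useful split found
            n = len(labels)
            return {"leaf": {k: v / n for k, v in Counter(labels).items()}}
        _, attr, value, left, right = best
        return {"attr": attr, "split": value,
                "le": grow([rows[i] for i in left], [labels[i] for i in left],
                           depth + 1, max_depth, min_size),
                "gt": grow([rows[i] for i in right], [labels[i] for i in right],
                           depth + 1, max_depth, min_size)}

The smoothing, option-tree and MML methods mentioned above then prune, average over, or re-score trees produced by a loop of this general kind.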
Advanced features allow more extensive search, interactive control and display of tree growing, and Bayesian and MML algorithms for tree pruning and smoothing. These often produce more accurate class probability estimates at the leaves. IND also comes with a comprehensive experimental control suite.

IND consists of four basic kinds of routines: data manipulation routines, tree generation routines, tree testing routines, and tree display routines. The data manipulation routines are used to partition a single large data set into smaller training and test sets. The generation routines are used to build classifiers. The test routines are used to evaluate classifiers and to classify data using a classifier. And the display routines are used to display classifiers in various formats.

IND is written in K&R C, with controlling scripts in the "csh" shell of UNIX, and extensive UNIX man entries. It is designed to be used on any UNIX system, although it has only been thoroughly tested on SUN platforms. IND comes with a manual giving a guide to tree methods, with pointers to the literature, and several companion documents.

Availability
------------

IND Version 2.0 will shortly be available through NASA's COSMIC facility. IND Version 2.1 is available strictly as unsupported beta-test software. If you're interested in obtaining a beta-test copy, with no obligation on your part to provide feedback, contact:

Wray Buntine
NASA Ames Research Center
Mail Stop 269-2
Moffett Field, CA, 94035
email: wray at kronos.arc.nasa.gov

From alexis at CS.UCLA.EDU Tue Jan 12 17:40:05 1993
From: alexis at CS.UCLA.EDU (Alexis Wieland)
Date: Tue, 12 Jan 93 14:40:05 -0800
Subject: quantum neural computer announcement
In-Reply-To: Clay Spence x3039's message of Tue, 12 Jan 93 17:08:07 EST <9301122208.AA08223@peanut.sarnoff.com>
Message-ID: <9301122240.AA25486@maui.cs.ucla.edu>

> I stand corrected. Measurements of quantum systems are truly random,
> ...

Just an amusing sidebar: it's at least a popular legend that the US military has used little Geiger-counter-like devices to help "compute" random numbers when they wanted to be absolutely certain that the results couldn't have been anticipated ....

Since the direction that water spins down a drain that's located on the equator should also be truly random (presumably the initial perturbation from the unstable equilibrium would come from Brownian motion in the fluid), it would seem more artistic, if less pragmatic, for an "intelligent" computer to periodically flush a line of toilets so situated. (Yeah, I'm a computer scientist, but I don't do hardware :-)

Okay, okay, I'll be quiet again.

- alexis.

From stolcke at ICSI.Berkeley.EDU Tue Jan 12 19:34:11 1993
From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke)
Date: Tue, 12 Jan 93 16:34:11 PST
Subject: YAPOHMM
Message-ID: <9301130034.AA10947@icsib30.ICSI.Berkeley.EDU>

(Yet another paper on Hidden Markov Models)

The following paper, to appear in NIPS-5, is now available by FTP from ftp.icsi.berkeley.edu (128.32.201.7) in the file /pub/ai/stolcke-nips5.ps.Z. Since this is only the latest in a series of similar announcements on connectionists, I will spare you the ftp instructions. Let me know if you don't have ftp access and want an e-mailed copy.

-----

Hidden Markov Model Induction by Bayesian Model Merging

Andreas Stolcke and Stephen Omohundro

This paper describes a technique for learning both the number of states and the topology of Hidden Markov Models from examples.
The induction process starts with the most specific model consistent with the training data and generalizes by successively merging states. Both the choice of states to merge and the stopping criterion are guided by the Bayesian posterior probability. We compare our algorithm with the Baum-Welch method of estimating fixed-size models, and find that it can induce minimal HMMs from data in cases where fixed estimation does not converge or requires redundant parameters to converge.

--Andreas

From peter at ai.iit.nrc.ca Wed Jan 13 09:38:30 1993
From: peter at ai.iit.nrc.ca (Peter Turney)
Date: Wed, 13 Jan 93 09:38:30 EST
Subject: regarding quantum neural computer announcement
Message-ID: <9301131438.AA03106@ai.iit.nrc.ca>

> Second, since the famous experiment by Aspect et al to test
> Bell's inequality in 1982 it is generally agreed that EPR
> correlations appear to be of action-at-a-distance type. Some
> assert that "measurements or observations, in the sense
> required by quantum theory, can only be made by conscious
> observers".

I am not a physicist, but it is my understanding that there is another way of looking at measurements. Instead of saying "measurements ... can only be made by conscious observers", you can talk about reversible and irreversible events. The key thing about a measurement is not whether it is made by a conscious observer, but whether it is an irreversible event. Is this not a viable alternative to dragging consciousness into quantum mechanics?

- Peter Turney

From giles at research.nj.nec.com Wed Jan 13 11:46:09 1993
From: giles at research.nj.nec.com (Lee Giles)
Date: Wed, 13 Jan 93 11:46:09 EST
Subject: NIPS-5 Deadline
Message-ID: <9301131646.AA13566@fuzzy>

REMINDER!! The deadline for Proceedings papers for NIPS-5 is January 13th. All papers postmarked on that day will be accepted!

C. Lee Giles
Publications Chair

C. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540 USA
Internet: giles at research.nj.nec.com
UUCP: princeton!nec!giles
PHONE: (609) 951-2642
FAX: (609) 951-2482

From aboulang at BBN.COM Wed Jan 13 13:29:36 1993
From: aboulang at BBN.COM (aboulang@BBN.COM)
Date: Wed, 13 Jan 93 13:29:36 EST
Subject: RE> quantum neural computer announcement
In-Reply-To: Alexis Wieland's message of Tue, 12 Jan 93 11:22:27 -0800 <9301121922.AA20375@maui.cs.ucla.edu>
Message-ID:

    To pick a nit: Since a correctly operating (pseudo) random number
    generator on a (conventional) computer *is* "produced by a chaotic,
    deterministic system", your assertion that you can use a computer to
    solve Schroedinger's equations and then select (pseudo) randomly based
    on the resulting probability distribution is equivalent to saying that
    you *can* produce this effect using a (chaotic) deterministic system.

Watch for them small brittle eggs ;-). The crux of the matter is that computers (silicon or whatever) need to have access to the reals to generate non-pseudo random numbers. The non-pseudo bit is important here. There is a symbolic dynamics view of random number generators that brings home the point that all these random number generators do is chew on the bits that were originally input. You can't simulate truly random choice with these. If you had access to a source of infinite algorithmic-complexity numbers (most of the reals), you would not run out of bits.

(Actually, there was some interest by a fellow by the name of Tom Erber to look at some NIST Penning-trap data for recurrences in the "telegraphic" fluorescence of the trapped ion. He did not see any.)
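[Ed. note: The symbolic-dynamics point above can be made concrete with a few lines of Python; this illustration is an editorial addition, not part of the posting. The Bernoulli shift x -> 2x mod 1 is deterministic and chaotic, yet each iterate merely exposes the next bit of the seed's binary expansion, so a finite-precision seed is literally "chewed up" at a rate of one output per stored bit.]

    # Bernoulli shift: x -> 2x mod 1.  Chaotic, but each step simply shifts
    # the seed's binary expansion left by one place.  With IEEE doubles
    # (53 significand bits) the orbit collapses to exactly 0.0 once the
    # seed's bits are exhausted -- the generator has "run out of bits".
    def shift_map_bits(seed, n):
        x, bits = seed, []
        for _ in range(n):
            x = (2.0 * x) % 1.0
            bits.append(1 if x >= 0.5 else 0)
        return bits

    print(shift_map_bits(0.123456789, 64))
    # The first ~50 bits look irregular; the tail is all zeros.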
There is a model of computation using real numbers that some high-powered mathematicians have developed:

Blum, L., M. Shub, and S. Smale, "On a Theory of Computation and Complexity over the Real Numbers: NP Completeness, Recursive Functions and Universal Machines", Bull. A.M.S. 21 (1989): 1-49.

It offers a model of computing using real numbers more powerful than a Turing Machine. See also the following:

"Neural Networks with Real Weights: Analog Computational Complexity" by Siegelmann and Sontag. This is available on neuroprose as siegelmann.analog.ps.Z.

The problem with all of this is the plausibility of access to the reals, even with analog realizations. But this gets us off into another topic. I do, however, see the Blum, Shub, and Smale work as very foundational and important.

Regards,
Albert Boulanger
aboulanger at bbn.com

From Paul_Gleichauf at B.GP.CS.CMU.EDU Wed Jan 13 14:30:35 1993
From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU)
Date: Wed, 13 Jan 93 14:30:35 EST
Subject: quantum neural computer announcement
In-Reply-To: Your message of "Tue, 12 Jan 93 14:40:05 PST." <9301122240.AA25486@maui.cs.ucla.edu>
Message-ID: <18344.726953435@B.GP.CS.CMU.EDU>

Again I am going to presume that a couple of "quantum-mechanical corrections" to previous posts is warranted in this forum. This is not an effort on my part to limit discussion, but rather a precaution to try to make sure that contributions remain scientific and germane.

I first want to re-pick Alexis' nit. There IS a distinction between chaotic simulation of PARTICLES and the numerical simulation of Schroedinger's equation, a linear partial differential equation for the WAVEFUNCTION. The quantum system is not being simulated by a chaotic system; the selection of a measurement result is what is being randomly selected, a sampling of a probability distribution. When one chooses to measure some of the wave properties of a quantum phenomenon, the notion of particles as the basis of a chaotic simulation breaks down.

Dr. Kak has added in his follow-on post that EPR experiments are generally agreed to be of the action-at-a-distance type. This is a loaded phrase that should not be used lightly. In EPR experiments one measures the properties of a correlated system, for example a pair of photons produced by a positron-electron annihilation, and asserts that the measurement of the polarization of one uniquely identifies the polarization of the second without the need for any further measurement. The paradoxical character becomes apparent when the potential measurements are not in the forward lightcone (causally connectable by a light signal). The problem with regarding this as action-at-a-distance is that no information (in the information theory sense) can be transmitted using this technique. Therefore calling this action-at-a-distance, where the speed of light is quite artfully used to convey information, can lead to gross misunderstandings.

Paul

From rsun at athos.cs.ua.edu Wed Jan 13 14:18:50 1993
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Wed, 13 Jan 1993 13:18:50 -0600
Subject: No subject
Message-ID: <9301131918.AA14930@athos.cs.ua.edu>

In my previous posting regarding references on symbolic processing and connectionist models, I mentioned a file FTPable from Neuroprose. The correct file name is sun.hlcbib.asc.Z (not sun.bib.Z). My apology.
--Ron

From rohwerrj at cs.aston.ac.uk Wed Jan 13 14:21:27 1993
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Wed, 13 Jan 93 19:21:27 GMT
Subject: Quantum neural computer
Message-ID: <18100.9301131921@cs.aston.ac.uk>

> > The characteristically 'quantum' properties of quantum computers,
> > such as the ability to run a superposition of programs simultaneously
> > on a single machine, arise only if the computer is a totally isolated
> > system; i.e., it exchanges not a single quantum of energy with
> > its environment. The brain fails this test pathetically.
>
> This is not correct, as I understand it: a quantum

This *is* correct, which is why Deutsch's work on quantum computers (1) draws on Bennett's dissipationless "billiard ball computer" (2, 3). The trouble is that whether or not a perturbation collapses the wavefunction, which is largely a philosophical undecidable (4), it does destroy quantum phase information unless all quantum phase information is also known for the perturbing system.

1. David Deutsch, "Quantum Theory, the Church-Turing principle, and the universal quantum computer", Proc. Royal Society (London) A400, 97-117, (1985).
2. Charles H. Bennett, IBM J. Res. Dev. 17, 525.
3. Charles H. Bennett and Rolf Landauer, "The Fundamental Physical Limits of Computation", Scientific American 253, no. 1, 38-53, (July 1985).
4. Hugh Everett, III, "'Relative State' Formulation of Quantum Mechanics", Reviews of Modern Physics 29, 454-462, (1957).

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs

From pluto at cs.UCSD.EDU Wed Jan 13 15:31:59 1993
From: pluto at cs.UCSD.EDU (Mark Plutowksi)
Date: Wed, 13 Jan 93 12:31:59 -0800
Subject: Neuroprose submission
Message-ID: <9301132031.AA04608@tournesol>

****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. **

The following paper has been placed in the Neuroprose archives at Ohio State. The file is pluto.nips92.ps.Z. Ftp instructions follow the abstract. Only an electronic version of this paper is available. This is the paper to appear in the NIPS 5 proceedings due out later this year. If you have high interest in the extended version (in preparation), please email: pluto at cs.ucsd.edu

"Learning Mackey-Glass From 25 Examples, Plus or Minus 2"

Mark Plutowski*, Halbert White**, Garrison Cottrell*
* UCSD: Computer Science & Engineering, and the Institute for Neural Computation.
** UCSD: Department of Economics, and the Institute for Neural Computation.

ABSTRACT

We apply active exemplar selection to predicting a chaotic time series. Given a fixed set of examples, the method chooses a concise subset for training. Fitting these exemplars results in the entire set being fit as well as desired. The algorithm incorporates a method for regulating network complexity, automatically adding exemplars and hidden units as needed. Fitting examples generated from the Mackey-Glass equation with fractal dimension 2.1 to an rmse of 0.01 required about 25 exemplars and 3 to 6 hidden units. The method requires an order of magnitude fewer floating point operations than training on the entire set of examples, is significantly cheaper than two contending exemplar selection techniques, and suggests a simpler active selection technique that performs comparably.
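[Ed. note: A toy rendition of the active-exemplar-selection loop sketched in the abstract, added editorially; it is not the authors' actual algorithm or selection criterion. Here fit stands in for any routine that trains a regressor on the chosen exemplars and returns a prediction function, and the growth of hidden units mentioned in the abstract is only hinted at in a comment.]

    # Greedy active exemplar selection (illustrative only): train on the
    # chosen subset, then add the worst-fit example from the full set,
    # until the entire set is fit as well as desired.
    def select_exemplars(xs, ys, fit, tol=0.01):
        subset = [0]                        # start from a single exemplar
        while True:
            predict = fit([(xs[i], ys[i]) for i in subset])
            errors = [abs(predict(x) - y) for x, y in zip(xs, ys)]
            rmse = (sum(e * e for e in errors) / len(errors)) ** 0.5
            if rmse <= tol:                 # whole set fit to tolerance
                return subset
            worst = max(range(len(xs)), key=errors.__getitem__)
            if worst in subset:             # adding exemplars no longer helps;
                return subset               # the paper grows hidden units here
            subset.append(worst)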
-----------------------------------------------------

FTP INSTRUCTIONS

Either use "Getps pluto.nips92.ps.Z", or do the following:

unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose
ftp> binary
ftp> get pluto.nips92.ps.Z
ftp> quit
unix> uncompress pluto.nips92.ps.Z
unix> lpr -s pluto.nips92.ps (or however you print postscript)

Mark E. Plutowski
Computer Science and Engineering
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0114

From rohwerrj at cs.aston.ac.uk Wed Jan 13 15:17:40 1993
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Wed, 13 Jan 93 20:17:40 GMT
Subject: 2 TRs
Message-ID: <18140.9301132017@cs.aston.ac.uk>

**DO NOT FORWARD TO OTHER GROUPS**

No, it's not another posting on quantum computers, but it's almost as good: an announcement of two somewhat spacey TRs touching lightly on the mind-brain problem. The following 2 papers have been deposited in Jordan Pollack's immensely useful Neuroprose archive at Ohio State. Retrieval instructions at end of message. Hardcopy requests might be answered for cases of dire necessity.
---------------------------------------------------------------------------

rohwer.reprep.ps.Z

A REPRESENTATION OF REPRESENTATION APPLIED TO A DISCUSSION OF VARIABLE BINDING

Richard Rohwer

States or state sequences in neural network models are made to represent concepts from applications. This paper motivates, introduces and discusses a formalism for denoting such representations; a representation for representations. The formalism is illustrated by using it to discuss the representation of variable binding and inference abstractly, and then to present four specific representations. One of these is an apparently novel hybrid of phasic and tensor-product representations which retains the desirable properties of each.
---------------------------------------------------------------------------

rohwer.howmany.ps.Z

HOW MANY THOUGHTS CAN YOU THINK?

Richard Rohwer

In ordinary computer programmes, the relationship between data in a machine and the concepts it represents is defined arbitrarily by the programmer. It is argued here that the Strong AI hypothesis suggests that no such arbitrariness is possible in the relationship between brain states and mental experiences, and that this may place surprising limitations on the possible variety of mental experiences. Possible psychology experiments are sketched which aim to falsify the Strong AI hypothesis by indicating that these limits can be exceeded. It is concluded that although such experiments might be valuable, they are unlikely to succeed in this aim.
---------------------------------------------------------------------------

Retrieval instructions (the usual):

ipc9> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
ftp> cd pub/neuroprose
250 CWD command successful.
ftp> binary
200 Type set to I.
ftp> get rohwer.reprep.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rohwer.reprep.ps.Z (64235 bytes).
226 Transfer complete.
local: rohwer.reprep.ps.Z remote: rohwer.reprep.ps.Z
64235 bytes received in 22 seconds (2.8 Kbytes/s)
ftp> get rohwer.howmany.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for rohwer.howmany.ps.Z (46680 bytes).
226 Transfer complete.
local: rohwer.howmany.ps.Z remote: rohwer.howmany.ps.Z
46680 bytes received in 32 seconds (1.4 Kbytes/s)
ftp> quit
221 Goodbye.
ipc9> uncompress rohwer.reprep.ps.Z
ipc9> uncompress rohwer.howmany.ps.Z
ipc9>

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs

**DO NOT FORWARD TO OTHER GROUPS**

From hinton at basser.cs.su.oz.au Thu Jan 14 17:19:57 1993
From: hinton at basser.cs.su.oz.au (Geoff Hinton)
Date: Fri, 15 Jan 1993 09:19:57 +1100
Subject: faculty opening at University of Toronto
Message-ID:

***** PLEASE DO NOT FORWARD TO OTHER BBOARDS *****

The Department of Computer Science at the University of Toronto has an opening for a tenure-track assistant professor. There will be a lot of competition for this job from all areas of computer science. No particular preference will be given to researchers in the Neural Network area. It would be helpful to be Canadian or a Canadian landed immigrant. The neural network group in computer science has 11 active researchers and excellent computing facilities. If you are an excellent neural network researcher and you are interested in this job, please apply as soon as possible. Send your application to:

The Chairman
Department of Computer Science
University of Toronto
10 King's College Rd
Toronto M5S 1A4
Canada

To save time, please also send an electronic copy including your curriculum vitae to me at hinton at cs.su.oz.au

Geoff Hinton

From cds at sarnoff.com Tue Jan 12 17:08:07 1993
From: cds at sarnoff.com (Clay Spence x3039)
Date: Tue, 12 Jan 93 17:08:07 EST
Subject: quantum neural computer announcement
Message-ID: <9301122208.AA08223@peanut.sarnoff.com>

> Since a correctly operating (pseudo) random number generator on a
> (conventional) computer *is* "produced by a chaotic, deterministic
> system", your assertion that you can use a computer to solve
> Schroedinger's equations and then select (pseudo) randomly based on
> the resulting probability distribution is equivalent to saying that
> you *can* produce this effect using a (chaotic) deterministic system.

To pick the nit a little more: I stand corrected. Measurements of quantum systems are truly random, so unlike a Turing machine, a quantum computer could produce truly random numbers and simulated measurement results which are known to be free of peculiar correlations. True randomness might be handy, but it seems to me that assertion (b) ("it is far from clear that functionally equivalent computational effects can not be generated by a Turing machine") is only slightly weakened. In case it's not clear, I generally agree with Mike Dyer and you. The idea of a quantum computer has some appeal to me, but I don't know of any reasons to think that it would offer radically new computing capabilities.

Clay Spence

From kenm at prodigal.psych.rochester.edu Thu Jan 14 21:09:53 1993
From: kenm at prodigal.psych.rochester.edu (Ken McRae)
Date: Thu, 14 Jan 93 21:09:53 EST
Subject: correlated properties and computing word meaning
Message-ID: <9301150209.AA22710@prodigal.psych.rochester.edu>

The following paper is now available in the connectionist archive, archive.cis.ohio-state.edu (128.146.8.52), in pub/neuroprose under the name mcrae.corredprops.ps.Z

The Role of Correlated Properties in Accessing Conceptual Memory

Ken McRae
Virginia de Sa
University of Rochester, Rochester, NY

Mark S. Seidenberg
University of Southern California, Los Angeles, CA

keywords: correlated properties, conceptual memory, word meaning, connectionist models, semantic priming

ABSTRACT

A fundamental question in research on conceptual structure concerns how information is represented in memory and used in tasks such as recognizing words. The present research focused on the role of correlations among semantic properties in conceptual memory. Norms were collected for 190 entities from 10 categories. Property intercorrelations were shown to influence people's performance in both a property verification task and a short interval semantic priming experiment. Furthermore, correlated properties were more important for biological kinds than for artifacts. A connectionist model of the computation of word meaning was implemented in which property intercorrelations developed in the course of learning. The model was used to simulate the results of the two experiments. We then tested a novel prediction derived from the model: that the intercorrelational density of a concept's properties should influence the speed with which a concept is computed. This prediction was confirmed in a final experiment. We concluded that encoded knowledge of property co-occurrences plays a prominent role in the representation and computation of word meaning.

From ken at cns.caltech.edu Fri Jan 15 07:57:50 1993
From: ken at cns.caltech.edu (Ken Miller)
Date: Fri, 15 Jan 93 04:57:50 PST
Subject: Quantum and Classical Foolishness
Message-ID: <9301151257.AA05256@zenon.cns.caltech.edu>

In response to:

-> Some assert that "measurements or observations, in the sense
-> required by quantum theory, can only be made by conscious
-> observers".
-> Might the concept of "conscious observer" as used by the
-> qm-theorist have something to do with the conscious observer at the
-> back of cognitive centers?

There is an ancient classical riddle: "When a tree falls in the forest, and no one is there to hear it, does it make a sound?" The idealist philosophers argued that unless some conscious being is around to register the event, you cannot say it has happened. The most solipsistic would say, until *I* register the event, it has not happened.

This is classical foolishness. It is logically and philosophically consistent, but rather useless and pointless. Ultimately one arrives at the notion that history has not happened until you choose to read about it in the morning paper. As Feynman points out in his lectures in discussing these issues, of course the falling tree makes a sound. A sound is a physical event, a compression wave in the air, and it leaves physical traces --- leaves that are blown off of a tree, thorns that vibrate and scratch a leaf. A sound is as physical as the fallen tree itself. So unless you hold to the solipsistic notion that the tree does not fall until you wander by and see it on the ground, then there is no problem about the sound either.

Quantum mechanics adds many new puzzles to science, but this is not one of them. Quantum foolishness is the same solipsistic foolishness as classical foolishness; there is no new quantum effect here. Without going into a course on the subject: in quantum mechanics, we cannot describe a continuous evolution in time in terms of classical variables.
Rather, there is a quantum state that is a certain kind of mixture in terms of classical variables, and then at some point there is a measurement, which just means "something happens", B happens rather than C, and the quantum state has accordingly "collapsed". The key point where the foolishness arises is in defining when a measurement has occurred --- when "something has happened." The solopsistic want to say, "well, you really don't know which outcome happened until a conscious observer sees it, so quantum mechanics requires consciousness". And some very good physicists have unfortunately subscribed to this (but not Feynmann --- see the same portion of his lectures where he talks about the tree falling) just as some very good Greek philosophers talked themselves into solopsism. This statement about quantum mechanics is no different from saying "you really don't know whether the tree falls until a conscious observer sees it, so classical mechanics requires consciousness". The point is, a quantum measurement occurs when some *classical physical event* has occurred --- some dial on your meter goes up or down, Schrodinger's cat lives or dies --- and so knowing the outcome is no different in status from knowing about the sound wave of a falling tree. How do you know when this event has occurred? This is a classical problem, the same problem the ancient solopsists screwed around with. And any sensible physicist would say, it happens when it happens, because it's a classical physical event that leaves traces and tracks of its existence whether you look at those traces or not. The mystery and wierdness of quantum mechanics involves understanding how classical physical events emerge out of the quantum world, how the quantum world "collapses" to the classical. But this has nothing to do with consciousness. Consciousness only enters in when trying to figure out when you know that this *classical* physical event has occurred. And that's classical foolishness. So, what does all this have to do with connectionists? Nothing. So I propose we drop the subject of quantum computers until someone has a specific architecture to propose. Ken From ellen at sol.siemens.com Fri Jan 15 09:14:40 1993 From: ellen at sol.siemens.com (Ellen Voorhees) Date: Fri, 15 Jan 93 09:14:40 EST Subject: Job announcement Message-ID: <9301151414.AA02998@sol.siemens.com> The learning department of Siemens Corporate Research in Princeton, New Jersey is looking to hire a researcher interested in statistical and knowledge-based methods for natural language processing, text retrieval, and text categorization. The position requires either a PhD (preferred) or a masters degree with some experience in an appropriate field. The main responsibility of the successful candidate will be to conduct research in automatic information retrieval and (statistical) natural language processing. Tasks include setting up and running experiments, programming, etc. People interested in the position should send a PLAIN ASCII resume to ellen at learning.siemens.com or a hardcopy of the resume to: Human Services Department EV Siemens Corporate Research, Inc. 755 College Road East Princeton, NJ 08540 Siemens is an equal opportunity employer. Ellen Voorhees Member of Technical Staff Siemens Corporate Research, Inc. From kak at max.ee.LSU.EDU Fri Jan 15 12:12:37 1993 From: kak at max.ee.LSU.EDU (Dr. S. 
Date: Fri, 15 Jan 93 11:12:37 CST
Subject: Symposium on Aliens, Apes, and AI
Message-ID: <9301151712.AA18830@max.ee.lsu.edu>

A symposium on Aliens, Apes, and AI: Who is a person in the postmodern world? will be held in Huntsville, AL on Feb 13, 1993. The symposium is being organized by Profs Lyn Miles and Stephen Harper of U. of Tennessee, Chattanooga. For further information contact FAX 615-755-4279; BITNET: SHARPER at UTCVM ; LMILES at UTCVM

-------------------------------------------------------------------
My paper at the symposium is described below:
----------------------------------------------------------------
Symposium on Aliens, Apes, and Artificial Intelligence, The University of Alabama in Huntsville, February 13, 1993.
---------------------------------------------------------------
Technical Report 92-12 ECE-LSU December 1, 1992

Reflections In Clouded Mirrors: Selfhood In Animals And Machines

by Subhash Kak
Copyright
Department of Electrical & Computer Engineering
Louisiana State University
Baton Rouge, LA 70803-5901

Abstract

This essay is a tapestry woven out of three threads: Vedic theory of consciousness, quantum mechanics, and neural networks. The ancient Vedic tradition of philosophy of consciousness that goes back to at least 2000 BCE posits that analytical approaches to defining awareness or personhood end up in paradox. In this tradition one views awareness in terms of the reflection that the hardware of the brain provides to an underlying illuminating or awareness principle called the self. This tradition allows one to separate questions of the tools of awareness, such as eyes and ears and the mind, from the person who obtains this awareness. This tradition will be reviewed and issues related to its application to an understanding of personhood in animals and machines will be taken up. Parallels between the insights of the Vedic tradition and quantum mechanics will be sketched. The observer plays a fundamental role in the measurement problem of quantum mechanics and several scientists have claimed that physics will remain incomplete unless consciousness is incorporated into it. We will also consider the perspective of AI that intelligence emanates from the complexity of the neural hardware of the brain. This will take us to the question of what it is that separates humans from apes and other animals, and from machines. We will address the question of whether machines will ever be endowed with self-awareness.

--------------------------------------------------------------

From dyer at CS.UCLA.EDU Fri Jan 15 12:41:57 1993
From: dyer at CS.UCLA.EDU (Dr Michael G Dyer)
Date: Fri, 15 Jan 93 09:41:57 PST
Subject: true randomness
Message-ID: <930115.174157z.01975.dyer@lanai.cs.ucla.edu>

Adding a Geiger counter to a Turing Machine may seem to make a machine more powerful, but it is no more powerful than, say, adding a plane to a TM (i.e. have the TM control the plane's flight). After all, a plain TM can't fly.

There are many chaotic patterns that appear random (until one discovers the underlying non-linear, deterministic equations). Although at this point it appears that the universe is fundamentally probabilistic, it seems possible to me that there could exist a deterministic universe in which there could exist measurers (i.e. scientists) who would be confused into believing (for a time) that their universe must be probabilistic (based on the granularity and methods of their current measurement technology, or theoretical constructs, etc.)
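One line of arithmetic is enough to manufacture such apparently random behavior: the logistic map x -> 4x(1-x) is completely deterministic, yet its thresholded iterates look like fair coin flips to an observer who lacks the equation. A minimal sketch (the seed and the threshold are just illustrative choices):

    def logistic_bits(x0, n):
        """Emit n bits by thresholding iterates of the logistic map."""
        x, bits = x0, []
        for _ in range(n):
            x = 4.0 * x * (1.0 - x)           # deterministic chaotic update
            bits.append(1 if x > 0.5 else 0)  # threshold at 1/2
        return bits

    print("".join(str(b) for b in logistic_bits(0.3141592653, 60)))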
In such a universe, these scientists would be (falsely) believing that adding a "random" physical process to their TM would produce something that no other TM could produce (i.e. via finite algorithm specification).

From dyer at CS.UCLA.EDU Fri Jan 15 13:08:24 1993
From: dyer at CS.UCLA.EDU (Dr Michael G Dyer)
Date: Fri, 15 Jan 93 10:08:24 PST
Subject: real numbers and TMs
Message-ID: <930115.180824z.03349.dyer@lanai.cs.ucla.edu>

Albert Boulanger, you said:

    There is a model of computation using real numbers that some
    high-powered mathematicians have developed: "Blum L., M. Shub, and
    S. Smale, "On a Theory of Computation and Complexity Over the Real
    Numbers: NP Completeness, Recursive Functions and Universal
    Machines", Bull A.M.S. 21(1989): 1-49. It offers a model of
    computing using real numbers more powerful than a Turing Machine.
    =======

But is the physics of our universe only modelable in terms of real numbers? e.g. is there actually an infinite amount of ever smaller space between any two neighboring pieces of close-together space? The quantum approach seems to say "no". Also, while there may be an infinite number of digits in a real number, for us to find out that the universe requires reals would require us to spend an infinite amount of time reading off the results of one of our measurements. (Is this right?)

Consider cellular automaton models, where the cell is the smallest quantum of space itself. In such models, there actually is no "motion". Motion is just an illusion -- the result of similar-looking pattern configurations being reconstructed near the original pattern, in the next state of the universe, based on the laws of how cell states interact. I have not come upon any proof that our universe could not be *some sort of* cellular system (perhaps with some bizarre topography and bizarre, non-local "neighborhood" function). In such a case, (a) it would be Turing computable and (b) real numbers would be merely a useful fiction, used by the measurers locked within that universe, but a fiction nonetheless, and they would never be able to harness this "extra-TM" power (other than in the sense that a TM attached to, say, a vehicle can do more, e.g. it can move through space, which a TM with just a tape could not (unless we are talking about THAT Turing machine simulating ITS vehicle on its own tape :-))

-- Michael Dyer

From cds at sarnoff.com Fri Jan 15 14:39:21 1993
From: cds at sarnoff.com (Clay Spence x3039)
Date: Fri, 15 Jan 93 14:39:21 EST
Subject: true randomness
Message-ID: <9301151939.AA12361@peanut>

Alexis and Mike,

Mike said:
> Adding a Geiger counter to a Turing Machine may seem to make a machine
> more powerful, but ...

I did not mean to imply that adding a true-random generator would make a Turing Machine much more powerful, although for certain applications it would be helpful. I read recently of a physicist who wanted to simulate the three-dimensional Ising model, and used a newer pseudo-random number generator which was supposed to be better than some others. To test his program he tried it on the two-dimensional Ising model, for which the exact statistics are known. The simulation gave the wrong answers. After searching for bugs, he switched to an older pseudo-random number generator and the simulations produced answers consistent with the exact statistics. Of course, it is always possible that he missed a bug in the code which implemented the newer generator.
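That kind of test is straightforward to set up. Below is a minimal Metropolis sketch for the two-dimensional Ising model with the generator under scrutiny passed in as rng -- a toy version for illustration, not the simulation from the story above. Systematic disagreement with the known two-dimensional statistics, well beyond the run-to-run scatter, would then point at the generator rather than the physics.

    import math, random

    def sweep(spins, L, beta, rng):
        # One Metropolis sweep over an L x L lattice, periodic boundaries.
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            s = spins[i][j]
            nb = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j] +
                  spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
            dE = 2.0 * s * nb              # energy cost of flipping (J = 1)
            if dE <= 0.0 or rng.random() < math.exp(-beta * dE):
                spins[i][j] = -s

    L, beta = 16, 0.5                      # beta above the critical ~0.4407
    rng = random.Random(12345)             # substitute the generator under test
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(2000):
        sweep(spins, L, beta, rng)
    m = abs(sum(sum(row) for row in spins)) / float(L * L)
    print("mean |magnetization| per site:", m)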
> ...it seems possible to me that there could exist a deterministic
> universe in which there could exist measurers (i.e. scientists) who
> would be confused into believing (for a time) that their universe
> must be probabilistic ...

As Paul Gleichauf pointed out, the non-local effects in quantum mechanics forbid local hidden variables, so a deterministic interpretation of quantum mechanics must be somewhat odd. David Bohm (I think that's the right Bohm) invented one which few people like, but it's fairly simple and as far as I know makes predictions which are identical to those of quantum mechanics. The people who prefer more or less conventional interpretations which involve randomness do so because this seems simpler, and so Occam's razor favors it as the preferable hypothesis. You are always free to ignore Occam, he frequently picks the wrong hypothesis, but it isn't clear that anyone is confused.

And Alexis said:
> Since the direction that water spins down a drain that's located on the
> equator should also be truly random (presumably the initial perturbation
> from the unstable equilibrium would come from Brownian motion in the
> fluid), it would seem more artistic, if less pragmatic, for an
> "intelligent" computer to periodically flush a line of toilets so
> situated. (Yeah, I'm a computer scientist, but I don't do hardware :-)

It is true that one can get true randomness from a chaotic deterministic system with infinite state, unlike a pseudo-random number generator. Rolling dice should work ok if done carefully. (I gather Turing machines don't have infinite state information?) As for toilets flushing, which way the water spins depends more on the structure of the toilet. Even in New Jersey at about 40 degrees north, in non-rigorous experiments I can get the water in my bathtub to go either way. I could do it even farther north in Goettingen, Germany.

About Alexis' summary of Mike's point:
> b) it is far from clear that functionally equivalent computational
>    effects can not be generated by a Turing machine

I don't think this is relevant. Neural nets can be simulated on a Turing machine, and most people (or at least some people) don't think it's a waste of time to study them. The problem is assertion a), or some modification of it. I haven't yet heard an argument for quantum computers that I found convincing.

Clay

From barto at cs.umass.edu Fri Jan 15 17:20:30 1993
From: barto at cs.umass.edu (Andy Barto)
Date: Fri, 15 January 1993 17:20:30 -0500
Subject: Real Pole-Balancing
Message-ID:

Jervis and Fallside recently posted an abstract on real pole-balancing that prompted me to write this. They have implemented the learning algorithm that we wrote about in 1983 (Barto, Sutton, and Anderson, IEEE Trans. Systems Man and Cybern. 13, pp. 834-846) on a real pole balancing system. I was lucky enough to see their system work and was quite impressed. They indicate that they had some trouble getting it to work and conclude the abstract with the following statement, which I would like to discuss: "This limits the usefulness of this kind of learning controller to small problems which are likely to be better controlled by other means. Before a learning controller can tackle more difficult problems, a more powerful learning scheme has to be found."

Much progress has been made on this class of learning algorithms since 1983, and we now have a much better understanding of them and their potential. I certainly agree that the pole-balancing problem is not a very good candidate for these algorithms.
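To make this class of algorithms concrete for readers meeting it fresh: the core is a stored value estimate that is nudged toward a one-step bootstrapped return after every transition. A minimal TD(0)-style sketch over a discretized state space follows -- the 162 boxes echo the 1983 paper's state partition, but the random transitions below are placeholders rather than a pole-balancer, and the 1983 system also paired such a critic with a separately adapted action element.

    import random

    N_STATES, GAMMA, ALPHA = 162, 0.95, 0.1   # boxes, discount, learning rate
    V = [0.0] * N_STATES                      # value estimate per box

    def td0_update(s, r, s_next, done):
        """Move V(s) toward the one-step return r + gamma * V(s')."""
        target = r if done else r + GAMMA * V[s_next]
        V[s] += ALPHA * (target - V[s])

    s = random.randrange(N_STATES)
    for t in range(200):
        s_next = random.randrange(N_STATES)   # placeholder for real dynamics
        failed = (t == 199)                   # e.g. the pole fell over
        td0_update(s, -1.0 if failed else 0.0, s_next, failed)
        s = s_next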
We artificially limited the information available to the controller, in effect, turning it into a problem that is harder than pole-balancing really is (as we indicated in our 1983 paper). We now understand that learning algorithm, and related ones developed by us and many others (e.g., Watkins, Werbos), as methods for approximating solutions to optimal control problems by means of approximating dynamic programming (DP). A short paper by Rich Sutton, Ron Williams, and me appeared in Control Systems Magazine (vol. 12, April 1992) that describes this perspective. Although much work has been done to address the problems that Jervis and Fallside encountered in specifying a suitable state representation, my real point is that these algorithms seem very well suited for some classes of problems. There are more scales by which to measure problems than "small" and "large". Specifically, we think that these methods (not necessarily the old pole-balancing system, but more recent versions of the same approach) make good computational sense for stochastic optimal control problems with large state sets.

You are probably familiar with Tesauro's TD-gammon system, which uses a system similar to the pole-balancer to learn how to play remarkably good backgammon. This is a kind of stochastic optimal control problem (admittedly not one of great engineering utility), and the conventional DP solution method is infeasible due to the very large state set. Through many games of self-play, the TD-gammon system was able to focus computation onto relevant regions of the state set, and the multi-layer network stored the information gained in a compact form. We think this can be a big advantage in certain classes of stochastic optimal control problems, and we are currently working to provide more evidence for this, as well as to develop more theory. Pole-balancing, in the form in which it is relatively easy to solve, is a regulation problem, not a stochastic optimal control problem of the kind that approximate DP methods might be good at.

In summary, I agree wholeheartedly with Jervis and Fallside that pole-balancing can be better achieved by other means. But I think the conclusion that the kind of approximate DP methods (of which our 1983 system was a primitive example) are only suited to small problems is not warranted. In fact, we think they are well suited to some optimal control problems that are so big and nonlinear that conventional control design techniques are not computationally feasible.

A. Barto

From aboulang at BBN.COM Sat Jan 16 17:32:50 1993
From: aboulang at BBN.COM (aboulang@BBN.COM)
Date: Sat, 16 Jan 93 17:32:50 EST
Subject: real numbers and TMs
In-Reply-To: Dr Michael G Dyer's message of Fri, 15 Jan 93 10:08:24 PST <930115.180824z.03349.dyer@lanai.cs.ucla.edu>
Message-ID:

    Date: Fri, 15 Jan 93 10:08:24 PST
    From: Dr Michael G Dyer

    Albert Boulanger, you said:

    There is a model of computation using real numbers that some
    high-powered mathematicians have developed: "Blum L., M. Shub, and
    S. Smale, "On a Theory of Computation and Complexity Over the Real
    Numbers: NP Completeness, Recursive Functions and Universal
    Machines", Bull A.M.S. 21(1989): 1-49. It offers a model of
    computing using real numbers more powerful than a Turing Machine.
    =======

    But is the physics of our universe only modelable in terms of real
    numbers? e.g. is there actually an infinite amount of ever smaller
    space between any two neighboring pieces of close-together space?
    The quantum approach seems to say "no".

Good.
This allows me to perhaps elevate this discussion somewhat. I have no ready answers but allow me to outline the lines of my thinking for the past few years.

As I mentioned in my first posting, expecting nature to have access to the reals is a question all in itself. If nature is quantized somehow, how can it gain access to a source of infinite algorithmic complexity? Note that it does not need to represent a real number explicitly -- just some implicit mechanism may be good enough. I propose that *open* computing with a heat bath may be just one way.

(I want to mention, because I believe that the questions are actually closely related, that no one has really solved the thermodynamic arrow of time question -- i.e. all dynamics, even QCD, have an outstanding problem -- they do not explain why we go through time in one direction. Where does the irreversibility of macroscopic systems come from? There was a foundational paper by Michael Mackey (of Mackey-Glass eqn fame) that is a careful delineation of possible mechanisms that can answer the arrow-of-time question: "The Dynamic Origin of Increasing Entropy", Michael C. Mackey, Reviews of Modern Physics, Vol 61, No. 4, October 1989, 981-1015. He proposes two viable mechanisms: trivial coarse graining, "taking the trace of a larger dynamics", and coupling the system with a heat bath. The last option is what interests me. {One reason for liking the latter is that, at the QM level, *local* hidden variable mechanisms are ruled out.} Let me state another outstanding puzzle: physical chaos may be a big "con game" by nature, since the underlying QM system is described by a linear (infinite dimensional) system. However, the work on QM chaos shows us that nature does a good job at imitating chaos. Here is a QM chaos ref: "Quantum Chaos", Roderick Jensen, Nature, Vol 355, 23 Jan 1992, 311-317.)

I posit, based on my investigation of asynchronous computation, that the answer is the notion of computing with an external heat bath. The heat bath I believe is *the* source for high algorithmic complexity numbers in physical computing systems, and why there may be in fact a legitimate physical chaos (one needs access to the reals for true chaos) even if QM can NOT hack it. Think of the heat bath as an external resource of good quality bits. One may ask about the heat bath itself. Where does it come from? In my mind it is in large part due to the asynchronous nature of concurrent events in the real world. (Another contributor is the "nondeterminism" of QM. This nondeterminism thing of QM is a whole other story which I don't want to get into right now.) This of course is a debatable position.

    Consider cellular automaton models, where the cell is the smallest
    quantum of space itself. ....  I have not come upon any proof that
    our universe could not be *some sort of* cellular system (perhaps
    with some bizarre topography and bizarre, non-local "neighborhood"
    function).

I love these models too, but let's think about a Fredkin-type of model with *asynchronous* dynamics. In such a system there would be a local sense of time at each cell, and any global sense of time is an emergent property of the macroscopic system. I believe that it may be possible to represent infinite algorithmic complexity numbers via the timing relationships of the cells. This is an implicit type of representation I was alluding to above.
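One way to make the "local sense of time" concrete is an event-driven toy: each cell schedules its own next update and, when it fires, reads whatever its neighbors' states happen to be at that instant. Everything in this sketch (the XOR rule, the exponential waiting times, the ring topology) is an arbitrary illustrative choice:

    import heapq, random

    N = 32
    state = [random.randint(0, 1) for _ in range(N)]
    clock = [0.0] * N                      # each cell's local time
    events = [(random.expovariate(1.0), i) for i in range(N)]
    heapq.heapify(events)                  # pending updates, earliest first

    for _ in range(10 * N):
        t, i = heapq.heappop(events)       # the next cell to fire, anywhere
        left, right = state[(i - 1) % N], state[(i + 1) % N]
        state[i] = left ^ right            # read neighbors' *current* states
        clock[i] = t                       # advance only this cell's clock
        heapq.heappush(events, (t + random.expovariate(1.0), i))

    print("spread of local clocks:", max(clock) - min(clock))

The per-cell clocks drift apart; the only global "now" is the bookkeeping of the event queue, which is the emergent-time flavor of the asynchronous picture.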
As I have said, this is an outline of my thoughts on the subject which I hope illustrates to all that dismissing reals from nature, and hence dismissing a computational model like Blum, Shub, and Smale's, is NOT a trivial subject.

****************
BTW, asynchronous dynamics in artificial neural networks is little studied. There has been work by Jacob Barhen at JPL, and John Tsitsiklis at MIT on it, but this has been little more than applying the "chaotic relaxation" results of fixed-point-type problems from the parallel numerical analysis literature. Go for it!
****************

MIMD for the MIND,
Albert Boulanger
aboulang at bbn.com

From rsun at athos.cs.ua.edu Fri Jan 15 14:17:44 1993
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Fri, 15 Jan 1993 13:17:44 -0600
Subject: No subject
Message-ID: <9301151917.AA16349@athos.cs.ua.edu>

Paper available:
--------------------------------------------
title: STRUCTURING KNOWLEDGE IN VAGUE DOMAINS

Ron Sun
Department of Computer Science
College of Engineering
The University of Alabama
Tuscaloosa, AL 35487
rsun at cs.ua.edu
--------------------------------------------
to appear in: IEEE Transactions on Knowledge and Data Engineering
---------------------------------------------

In this paper, we propose a model for structuring knowledge in vague and continuous domains where similarity plays a role in coming up with plausible inferences. The model consists of two levels, one of which is an inference network with nodes representing concepts and links representing rules connecting concepts, and the other is a microfeature-based replica of the first level. Based on the interaction between the concept nodes and microfeature nodes in the model, inferences are facilitated and knowledge not explicitly encoded in a system can be deduced via mixed similarity matching and rule application. The model is able to take account of many important desiderata of plausible reasoning, and produces sensible conclusions accordingly. Examples will be presented to illustrate the utility of the model in structuring knowledge to enable useful inferences to be carried out in several domains.
----------------------------------------------------------------

* It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack)
* No hardcopy available.
* FTP procedure:
    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password:
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get sun.vague.ps.Z
    ftp> quit
    unix> uncompress sun.vague.ps.Z
    unix> lpr sun.vague.ps (or however you print postscript)

From gert at cco.caltech.edu Fri Jan 15 17:03:06 1993
From: gert at cco.caltech.edu (Gert Cauwenberghs)
Date: Fri, 15 Jan 93 14:03:06 PST
Subject: paper announcement
Message-ID: <9301152203.AA02448@punisher.caltech.edu>

A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

To appear in the NIPS 5 proceedings (Morgan Kaufmann, 1993).

Gert Cauwenberghs
California Institute of Technology
Mail-Code 128-95
Pasadena, CA 91125
E-mail: gert at cco.caltech.edu

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No {\em explicit} information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error.
A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks. Now available in the neuroprose archive: archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory under the file name cauwenberghs.nips92.ps.Z (compressed PostScript). From pluto at cs.UCSD.EDU Sun Jan 17 18:43:03 1993 From: pluto at cs.UCSD.EDU (Mark Plutowksi) Date: Sun, 17 Jan 93 15:43:03 -0800 Subject: Cross-Val: Summary of Lit Survey and Request for References Message-ID: <9301172343.AA15924@beowulf> Hello, This is a follow-on to recent postings on using cross-validation to assess neural network models. It is a request for further references, after an exhausting literature survey of my own which failed to find the results I seek. A summary of my findings follows the request, followed by an informative response from Grace Wahba, and finally, a list of the references I looked at. Thanks for any leads or tips, ================= == Mark Plutowski pluto at cs.ucsd.edu Computer Science and Engineering 0114 University of California, San Diego La Jolla, California, USA. THE REQUEST: ------------ Do you know of convergence/consistency results for justifying cross-validatory model assessment for nonlinear compositions of basis functions, such as the usual sigmoided feedforward network? SUMMARY OF MY LIT SURVEY: ------------------------- While the use of cross-validation to assess nonlinear neural network models CAN be justified to a certain degree, (e.g., [Stone 76,77]) the really nice theoretical results exist for other estimators, e.g., kernel density, histograms, linear models, and splines (see references below.) These results are not directly applicable to neural nets. They all exploit properties of the particular estimators which are not shared by neural networks, in general. In short, the proofs for linear models exploit linear reductions, and the other (nonlinear) estimators for which optimality results have been published have the property that deleting a single example has negligible effect on the estimate outside a bounded region surrounding the example (e.g., kernel density estimators and splines.) In comparison, a single example can affect every weight of a neural network - deleting it can have global effect on the estimate. GRACE WAHBA SAYS: ------------------ Thanks to Grace Wahba for her informative response to my request to her for information after I was unable to get hold of a copy of her relevant book: ============================================================ From wahba at stat.wisc.edu Wed Jan 13 23:32:29 1993 From: wahba at stat.wisc.edu (Grace Wahba) Date: Wed, 13 Jan 93 22:32:29 -0600 Subject: choose your own randomized regularizer Message-ID: <9301140432.AA22884@hera.stat.wisc.edu> Very interesting request.. !! I'm convinced (as you seem to be) that some interesting results are to be obtained using CV or GCV in the context of neural nets. In my book are brief discussions of how GCV can be used in certain nonlinear inverse problems (Sect 8.3), and when one is doing penalized likelihood with non-Gaussian data (Sect 9.2). (No theory is given, however). Finbarr O'Sullivan (finbarr at stat.washington.edu) has further results on problems like those in Sect 8.3. 
However, I have not seen any theoretical results in the context of sigmoidal feedforward networks (but that sure would be interesting!!). However, if you make a local quadratic approximation to an optimization problem to get a local linear approximation to the influence operator (which plays the role of A(\lambda)), then you have to decide where you are going to take your derivatives. In my book on page 113 (equation (9.2.19)) I make a suggestion as to where to take the derivatives, but I later got convinced that that was not the best way to do it. Chong Gu, `Cross-Validating Non-Gaussian Data', J. Computational and Graphical Statistics 1, 169-179, June, 1992, has a discussion of what he (and I) believe is a better way, in that context. That context doesn't look at all like neural nets; I only mention this in case you get into some proofs in the neural network context -- in that event I think you may have to worry about where you differentiate, and Gu's arguments may be valid more generally.

As far as missing any theoretical result due to not having my book, the only theoretical cross-validation result discussed in any detail is that in Craven and Wahba (1979), which has been superseded by the work of Li, Utreras and Andrews. As far as circulating your request to the net, do go right ahead -- I will be very interested in any answers you get!!

\bibitem[Wahba 1990] Wahba, Grace. 1990. "Spline Models for Observational Data" v. 59 in the CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, March 1990. Softcover, 169 pages, bibliography, author index. ISBN 0-89871-244-0

ORDER INFO FOR WAHBA 1990:
==========================
List Price $24.75, SIAM or CBMS* Member Price $19.80 (Domestic 4th class postage free, UPS or Air extra)

May be ordered from SIAM by mail, electronic mail, or phone:
SIAM
P. O. Box 7260
Philadelphia, PA 19101-7260 USA
service at siam.org
Toll-Free 1-800-447-7426 (8:30-4:45 Eastern Standard Time, US only)
Regular phone: (215) 382-9800
FAX: (215) 386-7999

May be ordered on American Express, Visa or Mastercard, or paid by check or money order in US dollars, or may be billed (extra charge). CBMS member organizations include AMATC, AMS, ASA, ASL, ASSM, IMS, MAA, NAM, NCSM, ORSA, SOA and TIMS.

============================================================

REFERENCES:
===========

\bibitem[Li 86] Li, Ker-Chau. 1986. ``Asymptotic optimality of $C_{L}$ and generalized cross-validation in ridge regression with application to spline smoothing.'' {\em The Annals of Statistics}. {\bf 14}, 3, 1101-1112.

\bibitem[Li 87] Li, Ker-Chau. 1987. ``Asymptotic optimality for $C_{p}$, $C_{L}$, cross-validation, and generalized cross-validation: discrete index set.'' {\em The Annals of Statistics}. {\bf 15}, 3, 958-975.

\bibitem[Utreras 87] Utreras, Florencio I. 1987. ``On generalized cross-validation for multivariate smoothing spline functions.'' {\em SIAM J. Sci. Stat. Comput.} {\bf 8}, 4, July 1987.

\bibitem[Andrews 91] Andrews, Donald W.K. 1991. ``Asymptotic optimality of generalized $C_{L}$, cross-validation, and generalized cross-validation in regression with heteroskedastic errors.'' {\em Journal of Econometrics}. {\bf 47} (1991) 359-377. North-Holland.

\bibitem[Bowman 80] Bowman, Adrian W. 1980. ``A note on consistency of the kernel method for the analysis of categorical data.'' {\em Biometrika} (1980), {\bf 67}, 3, pp. 682-4.

\bibitem[Hall 83] Hall, Peter. 1983.
``Large sample optimality of least squares cross-validation in density estimation.'' {\em The Annals of Statistics}. {\bf 11}, 4, 1156-1174.

\bibitem[Stone 84] Stone, Charles J. 1984. ``An asymptotically optimal window selection rule for kernel density estimates.'' {\em The Annals of Statistics}. {\bf 12}, 4, 1285-1297.

\bibitem[Stone 59] Stone, M. 1959. ``Application of a measure of information to the design and comparison of regression experiments.'' {\em Annals Math. Stat.} {\bf 30} 55-69.

\bibitem[Marron 87] Marron, M. 1987. ``A comparison of cross-validation techniques in density estimation.'' {\em The Annals of Statistics}. {\bf 15}, 1, 152-162.

\bibitem[Bowman etal 84] Bowman, Adrian W., Peter Hall, D.M. Titterington. 1984. ``Cross-validation in nonparametric estimation of probabilities and probability densities.'' {\em Biometrika} (1984), {\bf 71}, 2, pp. 341-51.

\bibitem[Bowman 84] Bowman, Adrian W. 1984. ``An alternative method of cross-validation for the smoothing of density estimates.'' {\em Biometrika} (1984), {\bf 71}, 2, pp. 353-60.

\bibitem[Stone 77] Stone, M. 1977. ``An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion.'' {\em J. Roy. Stat. Soc. Ser B}, {\bf 39}, 1, 44-47.

\bibitem[Stone 76] Stone, M. 1976. "Asymptotics for and against cross-validation" ??

From edelman at wisdom.weizmann.ac.il Sun Jan 17 02:08:23 1993
From: edelman at wisdom.weizmann.ac.il (Edelman Shimon)
Date: Sun, 17 Jan 93 09:08:23 +0200
Subject: Popper on quantum theory
In-Reply-To: "Dr. S. Kak"'s message of Fri, 15 Jan 93 11:12:37 CST <9301151712.AA18830@max.ee.lsu.edu>
Message-ID: <9301170708.AA04526@wisdom.weizmann.ac.il>

From john at cs.uow.edu.au Mon Jan 18 16:27:14 1993
From: john at cs.uow.edu.au (John Fulcher)
Date: Mon, 18 Jan 93 16:27:14 EST
Subject: CALL FOR PAPERS - ANN STANDARDS
Message-ID: <199301180527.AA16294@wraith.cs.uow.edu.au>

CALL FOR PAPERS - ANN STANDARDS
COMPUTER STANDARDS & INTERFACES

For some time now, there has been a need to consolidate and formalise the efforts of researchers in the Artificial Neural Network field. The publishers of this North-Holland journal have deemed it appropriate to devote a forthcoming special issue of Computer Standards & Interfaces to ANN standards, under the guest editorship of John Fulcher, University of Wollongong, Australia.

We already have the cooperation of the IEEE/NCC Standards Committee, but are also interested in submissions regarding less formal, de facto "standards". This could range from established, "standard" techniques in various application areas (vision, robotics, speech, VLSI, etc.) to ANN techniques generally (such as the backpropagation algorithm & its [numerous] variants, say). Accordingly, survey or review articles would be particularly welcome.

If you are interested in submitting a paper for consideration, you will need to send three copies (in either hard copy or electronic form) by March 31st, 1993 to:

John Fulcher, Department of Computer Science, University of Wollongong, Northfields Avenue, Wollongong NSW 2522, Australia.
fax: +61 42 213262
email: john at cs.uow.edu.au.oz

From mitsu at netcom.com Mon Jan 18 18:52:15 1993
From: mitsu at netcom.com (Mitsu Hadeishi)
Date: Mon, 18 Jan 93 15:52:15 -0800
Subject: Popper on quantum theory
Message-ID: <9301182352.AA15863@netcom3.netcom.com>

>Karl R. Popper
>Quantum Theory and the Schism in Physics
>from the Postscript to "The Logic of Scientific Discovery"
>Edited by W. W.
>Bartley
>Unwin Hyman: London, 1982

In this volume, Karl Popper essentially states that he believes the Bell inequality will fail to be demonstrated experimentally. E.g., he thinks the Aspect experiment would have failed (of course, now the Aspect experiment has been called into question, but my guess is that if it were repeated more rigorously the correlations would still appear). All of his analysis pretty much rests on this assumption, which is most likely false.

Mitsu Hadeishi
General Partner, Open Mind
mitsu at netcom.com
mitsu at well.sf.ca.us

From greene at iitmax.acc.iit.edu Mon Jan 18 22:13:51 1993
From: greene at iitmax.acc.iit.edu (Greene)
Date: Mon, 18 Jan 93 21:13:51 -0600
Subject: another Dognitive Science seminar
Message-ID: <9301190313.AA19518@iitmax.acc.iit.edu>

Long-range synchronizations have long been noted in the nervous system of the dog. A Russian reprint I stashed somewhere in my files in the early '70s (and which doggedly rebuffs my efforts to unearth it) concerned synchronizations between events in the dog's visual system and events in the dog's lower bowel. It is simply a case of What the Dog's Tectum tells the Dog's Rectum.

From sg at corwin.CCS.Northeastern.EDU Tue Jan 19 11:59:10 1993
From: sg at corwin.CCS.Northeastern.EDU (steve gallant)
Date: Tue, 19 Jan 1993 11:59:10 -0500
Subject: New Book: Neural Network Learning ...
Message-ID: <199301191659.AA08462@corwin.ccs.northeastern.edu>

NEURAL NETWORK LEARNING And Expert Systems
by Steve Gallant

The book is intended as a text, reference, and a collection of some of my work.

CONTENTS

PART I: Basics 1 Introduction and Important Definitions 1.1 Why Connectionist Models? 1.2 The Structure of Connectionist Models 1.3 Two Fundamental Models: Multi-Layer Perceptrons and Backpropagation Networks 1.4 Gradient Descent 1.5 Historic and Bibliographic Notes 1.6 Exercises 1.7 Programming Project 2 Representation Issues 2.1 Representing Boolean Functions 2.2 Distributed Representations 2.3 Feature Spaces and ISA Relations 2.4 Representing Real-Valued Functions 2.5 Example: Taxtime!
2.6 Exercises 2.7 Programming Projects PART II: Learning in Single Layer Models 3 Perceptron Learning and the Pocket Algorithm 3.1 Introduction 3.2 Perceptron Learning for Separable Sets of Training Examples 3.3 The Pocket Algorithm for Non-separable Sets of Training Examples 3.4 Khachiyan's Linear Programming Algorithm 3.5 Exercises 3.6 Programming Projects 4 Winner-Take-All Groups or Linear Machines 4.1 Introduction 4.2 Generalizes Single-Cell Models 4.3 Perceptron Learning for Winner-Take-All Groups 4.4 The Pocket Algorithm for Winner-Take-All Groups 4.5 Kessler's Construction, Perceptron Cycling, and the Pocket Algorithm Proof 4.6 Independent Training 4.7 Exercises 4.8 Programming Projects 5 Autoassociators and One-Shot Learning 5.1 Introduction 5.2 Linear Autoassociators and the Outer Product Training Rule 5.3 Anderson's BSB Model 5.4 Hopfield's Model 5.5 The Traveling Salesman Problem 5.6 The Cohen-Grossberg Theorem 5.7 Kanerva's Model 5.8 Autoassociative Filtering for Feed-Forward Networks 5.9 Concluding Remarks 5.10 Exercises 5.11 Programming Projects 6 Mean Squared Error (MSE) Algorithms 6.1 Motivation 6.2 MSE Approximations 6.3 The Widrow-Hoff Rule or LMS Algorithm 6.4 ADALINE 6.5 Adaptive noise cancellation 6.6 Decision-directed learning 6.7 Exercises 6.8 Programming Projects 7 Unsupervised Learning 7.1 Introduction 7.2 k-Means Clustering 7.3 Topology Preserving Maps 7.4 ART1 7.5 ART2 7.6 Using Clustering Algorithms for Supervised Learning 7.7 Exercises 7.8 Programming Projects PART III: Learning in Multi-Layer Models 8 The Distributed Method and Radial Basis Functions 8.1 Rosenblatt's Approach 8.2 The Distributed Method 8.3 Examples 8.4 How Many Cells? 8.5 Radial Basis Functions 8.6 A Variant: The Anchor Algorithm 8.7 Scaling, Multiple Outputs and Parallelism 8.8 Exercises 8.9 Programming Projects 9 Computational Learning Theory and the BRD Algorithm 9.1 Introduction to Computational Learning Theory 9.2 A Learning Algorithm for Probabilistic Bounded Distributed Concepts 9.3 The BRD Theorem 9.4 Noisy Data and Fallback Estimates 9.5 Bounds for Single-Layer Algorithms 9.6 Fitting Data by Limiting the Number of Iterations 9.7 Discussion 9.8 Exercises 9.9 Programming Project 10 Constructive Algorithms 10.1 The Tower and Pyramid Algorithms 10.2 The Cascade-Correlation Algorithm 10.3 The Tiling Algorithm 10.4 The Upstart Algorithm 10.5 Pruning 10.6 Easy Learning Problems 10.7 Exercises 10.8 Programming Projects 11 Backpropagation 11.1 Introduction 11.2 The Backpropagation Algorithm 11.3 Derivation 11.4 Practical Considerations 11.5 NP-Completeness 11.6 Comments 11.7 Exercises 11.8 Programming Projects 12 Backpropagation: Variations and Applications 12.1 NETtalk 12.2 Backpropagation Through Time 12.3 Handwritten character recognition 12.4 Robot manipulator with excess degrees of freedom 12.5 Exercises 12.6 Programming Projects 13 Simulated Annealing and Boltzmann Machines 13.1 Simulated Annealing 13.2 Boltzmann Machines 13.3 Remarks 13.4 Exercises 13.5 Programming Project PART IV: Neural Network Expert Systems 14 Expert Systems and Neural Networks 14.1 Expert Systems 14.2 Neural Network Decision Systems 14.3 MACIE, and an Example Problem 14.4 Applicability of Neural Network Expert Systems 14.5 Exercises 14.6 Programming Projects 15 Details of the MACIE System 15.1 Inferencing and Forward Chaining 15.2 Confidence Estimation 15.3 Information Acquisition and Backward Chaining 15.4 Concluding Comment 15.5 Exercises 15.6 Programming Projects 16 Noise, Redundancy, Fault Detection, and 
Bayesian Decision Theory 16.1 Introduction 16.2 The High Tech Lemonade Corporation's Problem 16.3 The Deep Model and the Noise Model 16.4 Generating the Expert System 16.5 Probabilistic Analysis 16.6 Noisy Single-pattern Boolean Fault Detection Problems 16.7 Convergence Theorem 16.8 Comments 16.9 Exercises 16.10 Programming Projects 17 Extracting Rules From Networks 17.1 Why Rules? 17.2 What kind of Rules? 17.3 Inference Justifications 17.4 Rule Sets 17.5 Conventional + Neural Network Expert Systems 17.6 Concluding Remarks 17.7 Exercises 17.8 Programming Projects 18 Appendix: Representation Comparisons 18.1 DNF Expressions and Polynomial Representability 18.2 Decision Trees 18.3 Pi-Lambda Diagrams 18.4 Symmetric Functions and Depth Complexity 18.5 Concluding Remarks 18.6 Exercises References

364 pages, 156 figures. Available from MIT Press by calling (800) 356-0343 or (617) 625-8569. A great stocking-stuffer, especially for friends with wide, flat ankles.

SG

From mozer at dendrite.cs.colorado.edu Tue Jan 19 14:35:00 1993
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Tue, 19 Jan 1993 12:35:00 -0700
Subject: building energy predictor shootout -- data available by anon ftp
Message-ID: <199301191935.AA04069@neuron.cs.colorado.edu>

Data for the building energy predictor shootout announced recently over connectionists is available by anonymous ftp from ftp.cs.colorado.edu. A sample script to access the data follows below. The files in the energy-shootout directory, all ASCII format, include:

rules.asc      The shootout rules and details of the competition
atrain.dat     The training portion of data set A.
atest.dat      The testing portion of data set A.
btrain.dat     The training portion of data set B.
btest.dat      The testing portion of data set B.
dataform.at    Details of the format of these four data files along with units of all data.

-------------------------------------------------------------------------------

% ftp ftp.cs.colorado.edu
Connected to bruno.cs.colorado.edu.
220 bruno FTP server (SunOS 4.1) ready.
Name (ftp.cs.colorado.edu:mozer): anonymous
331 Guest login ok, send ident as password.
Password:
230-Guest login ok, access restrictions apply.
ftp> cd pub/cs/energy-shootout
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 ASCII data connection for /bin/ls (128.138.204.25,2207) (0 bytes).
atest.dat
atrain.dat
btest.dat
btrain.dat
dataform.at
read.me
rules.asc
226 ASCII Transfer complete.
79 bytes received in 11 seconds (0.0073 Kbytes/s)
ftp> get atest.dat
200 PORT command successful.
150 ASCII data connection for atest.dat (128.138.204.25,2208) (93657 bytes).
226 ASCII Transfer complete.
local: atest.dat remote: atest.dat
94940 bytes received in 1.1 seconds (82 Kbytes/s)
ftp> bye
221 Goodbye.
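For a non-interactive version of the same transfer, the session above can be scripted; a minimal sketch using Python's standard ftplib, with the host, directory, and file names taken from the transcript (everything else is illustrative):

    from ftplib import FTP

    ftp = FTP("ftp.cs.colorado.edu")
    ftp.login()                        # anonymous login
    ftp.cwd("pub/cs/energy-shootout")
    for name in ["rules.asc", "atrain.dat", "atest.dat",
                 "btrain.dat", "btest.dat", "dataform.at"]:
        with open(name, "w") as f:
            # The files are ASCII, so fetch them line by line.
            ftp.retrlines("RETR " + name,
                          lambda line, f=f: f.write(line + "\n"))
    ftp.quit()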
From takagi at diva.berkeley.edu Tue Jan 19 23:49:54 1993
From: takagi at diva.berkeley.edu (Hideyuki Takagi)
Date: Tue, 19 Jan 93 20:49:54 -0800
Subject: ICNN'93 and FUZZ-IEEE'93
Message-ID: <9301200449.AA16723@diva.Berkeley.EDU>

=====================================================================

From berenji at ptolemy.arc.nasa.gov Tue Jan 19 18:52:58 1993
From: berenji at ptolemy.arc.nasa.gov (Hamid Berenji)
Date: Tue, 19 Jan 93 15:52:58 PST
Subject: IEEE Conferences
Message-ID:

** CALL FOR PARTICIPATION **

1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
SECOND IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS

March 28 - April 1, 1993
San Francisco Hilton
San Francisco, California

The IEEE Neural Networks Council cordially invites you to attend the Second International Conference on Fuzzy Systems (FUZZ-IEEE'93) and the 1993 IEEE International Conference on Neural Networks (ICNN'93), to be held concurrently at the San Francisco Hilton Hotel, San Francisco, California from March 28 to April 1, 1993. These IEEE-sponsored events have grown to become the largest conferences in their fields. In 1993, their importance will be enhanced by their combined meeting in an environment that assures that conference participants will have full access to all functions and events of either of these multidisciplinary meetings. In addition to an exciting program of plenary lectures, tutorial presentations, and technical sessions and panels, we anticipate an extraordinary trade show and exhibits program affording a unique opportunity to become acquainted with the latest developments in products based on neural-networks and fuzzy-systems techniques.

PLENARY SPEAKERS

Lotfi A. Zadeh, University of California, Berkeley
Didier Dubois, Universite Paul Sabatier, Toulouse
Hamid R. Berenji, NASA Ames Research Center
Michio Sugeno, Tokyo Institute of Technology
E.H. Mamdani, Queen Mary College, London
Henri Prade, Universite Paul Sabatier, Toulouse
Bernard Widrow, Stanford University
Kumpati Narendra, Yale University
Teuvo Kohonen, Helsinki University of Technology, Finland
Richard Sutton, GTE Laboratories
Carver Mead, California Institute of Technology
Piero Bonissone, General Electric Corporate R&D

TUTORIALS

SUNDAY MARCH 28, 1993, 9:00AM - 12:30PM
1. Introduction to Fuzzy Set Theory, Uncertainty and Information Theory -- George Klir, State University of New York
2. Fuzzy Logic in Databases and Information Retrieval -- Maria Zemankova, National Science Foundation
3. Fuzzy Logic and Neural Networks for Pattern Recognition -- James Bezdek, University of West Florida
4. Evolutionary Programming -- David Fogel, Orincon Corporation
5. Introduction to Biological and Artificial Neural Networks -- Steven Rogers, Air Force Institute of Technology
6. The Biological Brain: Biological Neural Networks -- Terrence J. Sejnowski, The Salk Institute

SUNDAY, MARCH 28, 1993, 2:00PM - 5:30PM
7. Hardware Approaches to Fuzzy Logic Applications -- H. Watanabe, University of North Carolina
8. Fuzzy Logic and Neural Networks for Control Systems -- Hamid R. Berenji, NASA Ames Research Center
9. Fuzzy Logic and Neural Networks for Computer Vision -- James Keller, University of Missouri
10. Genetic Algorithms and Neural Networks -- Darrell Whitley, Colorado State University
11. Suggestions from Cognitive Science for Neural Network Applications -- James A. Anderson, Brown University
12. Expert Systems and Neural Networks -- George Lendaris, Portland State University

****************************************************************************

1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS

Sponsored by the IEEE Neural Networks Council with the cooperation of the European Neural Networks Society and the Japan Neural Networks Society.

IEEE Neural Networks Council Constituent Societies:
IEEE Circuits and Systems Society
IEEE Communications Society
IEEE Computer Society
IEEE Control Systems Society
IEEE Engineering in Medicine & Biology Society
IEEE Industrial Electronics Society
IEEE Industry Applications Society
IEEE Information Theory Society
IEEE Lasers and Electro-Optics Society
IEEE Oceanic Engineering Society
IEEE Power Engineering Society
IEEE Robotics and Automation Society
IEEE Signal Processing Society
IEEE Social Implications of Technology Society
IEEE Systems, Man, and Cybernetics Society

ORGANIZATION
General Chair: Enrique H. Ruspini
Program Cochairs: Hamid R. Berenji, Elie Sanchez, Shiro Usui

ADVISORY BOARD: S.I. Amari, L. Cooper, F. Fukushima, C. Lau, L. Stark, J. Anderson, R.C. Eberhart, R. Hecht-Nielsen, C. Mead, A. Stubberud, G. Bekey, R. Eckmiller, J. Holland, N. Packard, H. Takagi, J.C. Bezdek, J. Feldman, C. Jorgensen, D. Rumelhart, P. Treleaven, Y. Burnod, M. Feldman, T. Kohonen, B. Skyrms, B. Widrow

ORGANIZING COMMITTEE
PUBLICITY: H.R. Berenji
TUTORIALS: J.C. Bezdek
PRESS/PUBLIC RELATIONS: C. Welch
EXHIBITS: W. Xu
FINANCE: R. Tong
VIDEO PROCEEDINGS: A. Bergman
VOLUNTEERS: A. Worth

****************************************************************************

SECOND IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS

Sponsored by the IEEE Neural Networks Council in Cooperation with
IEEE Circuits and Systems Society
IEEE Communications Society
IEEE Control Systems Society
IEEE Systems, Man, and Cybernetics Society
International Fuzzy Systems Association (IFSA)
North American Fuzzy Information Processing Society (NAFIPS)
Japan Society for Fuzzy Theory and Systems (SOFT)
European Laboratory for Intelligent Techniques Engineering (ELITE)

ORGANIZATION
General Chairman: Enrique H. Ruspini, SRI International
Program Chairman: Piero P. Bonissone, General Electric Corporate Research and Development

ADVISORY BOARD: J. Bezdek, H. Prade, M. Sugeno, T. Yamakawa, D. Dubois, E. Sanchez, T. Terano, L.A. Zadeh, G. Klir, Ph. Smets, E. Trillas, H.J. Zimmerman

ORGANIZING COMMITTEE
EXHIBITS: W. Xu, A. Ralescu, M. Togai, L. Valverde, T. Yamakawa
FINANCE: R. Tong (Chair), R. Nutter
PRESS/PUBLIC RELATIONS: C. Welch
PUBLICITY: H. Berenji (Chair), B. D'Ambrosio, R. Lopez de Mantaras, T. Takagi
TUTORIALS: J. Bezdek (Chair), H.R. Berenji, H. Watanabe
VIDEO PROCEEDINGS: A. Bergman
VOLUNTEERS: A. Worth

**************************************************************************

CONFERENCE REGISTRATION FEES: Full Conference registration permits attendance at all events and functions of both conferences with the exception of optional tour programs. The registration fee also includes one set of Proceedings (to be chosen by the registrant) for either FUZZ-IEEE '93 or ICNN '93. Additional ICNN '93 or FUZZ-IEEE '93 Proceedings or CD-ROM versions of the Proceedings are also available for purchase.
                    Registered            Registered
                    before 1/31/93        after 1/31/93
IEEE Members        $325 US Dollars       $395 US Dollars
Non-Members         $425 US Dollars       $495 US Dollars
Students*           $80 US Dollars        $100 US Dollars

TUTORIAL REGISTRATION FEES:

                    Members     Non-members     Students*
One Tutorial        $295        $345            $150
Two Tutorials       $395        $450            $200

* A letter from the Department Head to verify full-time student status at the time of registration is required. At the conference, all students must present a current student ID with picture.

FOREIGN PAYMENTS MUST BE MADE BY DRAFT ON A U.S. BANK IN U.S. DOLLARS

REFUND POLICY: If your registration must be canceled, your fee will be refunded less $50 U.S. dollars administrative costs. You must notify us in writing by March 1, 1993. No refunds can be given after this date.

LOCATION AND ACCOMMODATIONS

The Conferences will be held at the San Francisco Hilton located downtown just one block from famous Union Square in the heart of San Francisco, and just twenty minutes from San Francisco International Airport. The Hilton offers participants of the Conferences a very special room rate of $117 (Single) and $127 (Double).

San Francisco Hilton
One Hilton Square
333 O'Farrell Street
San Francisco, CA 94102-2189
Reservations (415) 771-1400

To guarantee your reservation, you must make your reservation with payment directly to the hotel to cover the first night's stay by check or credit card.

DEADLINE FOR HOTEL RESERVATIONS: March 1, 1993

SIGHTSEEING TOURS

Various sightseeing tours in and around San Francisco and a Dinner Cruise will be offered. Details regarding tours as well as reservation forms will be sent upon registration for the Symposium.

AIRLINE INFORMATION

American Airlines has waived many of the restrictions to allow the FUZZ-IEEE '93/ICNN '93 attendees to obtain SuperSaver fares for which they would normally not qualify. Bristol Travel has been named the official travel agency for the FUZZ-IEEE '93/ICNN '93 Conferences and can assist you with all your travel needs. To make your reservations call Bristol Travel at (800) 762-2746. Bristol Travel also provides 24-hour, around-the-clock service. During off hours you can call (800) 237-7980 and refer to VIT (Very Important Traveler) Number SY2CO.

************************************************************************

CONFERENCE INFORMATION AND REGISTRATION: PLEASE CONTACT:

FUZZ-IEEE '93/ICNN '93 Conference Office:
P.O. Box 16502
Irvine, CA 92713-6502 USA

For Express Mail only:
Conference Office
2603 Main Street, Suite 690
Irvine, CA 92714 USA

Tel (619) 453-6222 or (800) 321-6338
FAX (714) 752-7444
E-Mail: 70750.345 at compuserve.com

From read at helmholtz.sdsc.edu Wed Jan 20 00:16:37 1993
From: read at helmholtz.sdsc.edu (Read Montague)
Date: Tue, 19 Jan 93 21:16:37 PST
Subject: No subject
Message-ID: <9301200516.AA28841@helmholtz.sdsc.edu>

POSTDOCTORAL POSITION
DIVISION OF NEUROSCIENCE
BAYLOR COLLEGE OF MEDICINE

A postdoctoral position is available beginning after July, 1993. The position is for one to three years. I am seeking individuals interested in the function of the vertebrate brain. In particular, individuals interested in the problem of how three dimensional neuroanatomy self-organizes into functioning neuronal networks, the range of mechanisms required to explain this self-organizing capability, and the behaviors of the developed networks. I am interested in theoreticians who have a commitment to dealing with the facts of biological life and/or experimentalists interested in theory and experiment.
A more explicit description of the interests of the lab is given below. Interested parties should send a C.V. and a brief statement of research interests to the address listed below. Present address: P. Read Montague Computational Neurobiology Lab The Salk Institute 10010 North Torrey Pines Rd La Jolla, CA 92037 e-mail: read at helmholtz.sdsc.edu fax: (619) 587-0417 RESEARCH INTERESTS OF THE LAB The primary focus of this laboratory is how three dimensional neuroanatomy self-organizes into functioning neuronal networks, the range of mechanisms required to explain this self-organizing capability, and the behaviors of the developed networks. The approach focuses on dendritic and axonal development as this development relates to the systems-level functions of the developed network. A particular emphasis is placed on computational and theoretical approaches, but experimental techniques are also employed. The goal is not to make the theories simply biologically plausible, but to ground them initially with reliable biological facts so that the synthesized network behavior has a chance both to explain and extend experiments. We are particularly interested in correlational mechanisms of neural development and learning. A separate but related interest of the lab is the role of reinforcement signals in the activity-dependent self-organization of the cortex. Recent work has focused on recasting activity-dependent development in a manner which gives reinforcement signals a natural role during the development of cortical maps and sensory-motor transformations. To place proposed mechanisms of synaptic plasticity and transmission into a more realistic context, we are exploring both activity-dependent and activity-independent mechanisms through which three dimensional dendritic structure develops. We are interested in the contribution such development makes to computational theories of cortical map formation and function. Our experimental efforts are focused upon the function of synapses in the mammalian cerebral cortex with particular interest in how a synapse's local environment modulates its function. Recent experimental efforts have focused on the role of N-methyl-D-aspartate (NMDA) receptors and nitric oxide production in synaptic transmission in the mammalian cerebral cortex. These experiments have utilized in vitro brain slice physiology, electrochemistry, immunocytochemistry, and standard biochemical methods. ------------------------------------------------------ The Division of Neuroscience at Baylor offers many possibilities for collaboration with a number of excellent laboratories exploring questions ranging from the modulation of ionic channel function to visual processing in the mammalian cortex. Listed below are some of the faculty members and their areas of interest. John Maunsell : Processing of visual information by cerebral cortex with a particular interest in neural representations contributing to higher functions such as memory or visual guidance of behaviors. Nikos Logothetis: Physiological mechanisms mediating visual perception and object recognition. Dan Ts'o : Neuronal mechanisms of information processing and visual perception through a combination of conventional electrophysiological and anatomical techniques and more novel methods such as optical imaging and cross-correlation analysis. 
Sarah Pallas: Functional development of the central visual system, focusing on the relative roles of sensory input and intrinsic connectivity in establishing the response properties of target neurons.

Dan Johnston: Cellular and molecular mechanisms of long-term synaptic plasticity.

Peter Saggau: Mechanisms that control the behavior of populations of nerve cells, and in vitro optical recording methods.

James W. Patrick: Molecular mechanisms responsible for the function and modification of synapses in the central nervous system.

John A. Dani: Synaptic communication and the structure and function of ion channels.

David Sweatt: Biochemical mechanisms of long-term changes in neuronal function, with particular emphasis on long-term potentiation.

Paul Pfaffinger: Mechanisms involved in regulating neuronal excitability and synaptic strength.

Mark Perin: Molecular events in neurotransmitter release from presynaptic terminals.

From aisb93-prog at computer-science.birmingham.ac.uk Tue Jan 19 20:54:40 1993
From: aisb93-prog at computer-science.birmingham.ac.uk (aisb93-prog@computer-science.birmingham.ac.uk)
Date: Wed, 20 Jan 93 01:54:40 GMT
Subject: AISB'93 Conference in AI and Cognitive Science
Message-ID: <12901.9301200154@fat-controller.cs.bham.ac.uk>

________________________________________________________________________
________________________________________________________________________

CONFERENCE PROGRAMME and REGISTRATION INFORMATION

A I S B' 9 3

'P R O S P E C T S   F O R   A R T I F I C I A L   I N T E L L I G E N C E'

Cognitive Science Research Centre
The University of Birmingham
March 29th -- April 2nd 1993

________________________________________________________________________
________________________________________________________________________

CONTENTS
1. Message from the Programme Chair
2. Technical Programme
3. Workshops and Tutorials
4. Registration Form

ORGANISATION
Programme Chair: Aaron Sloman (University of Birmingham)
Programme Committee: David Hogg (University of Leeds), Glyn Humphreys (University of Birmingham), Allan Ramsay (University College Dublin), Derek Partridge (University of Exeter)
Local Organiser: Donald Peterson (University of Birmingham)
Administration: Petra Hickey (University of Birmingham)

GENERAL ENQUIRIES
AISB'93, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K.
Email: aisb93-prog at cs.bham.ac.uk
Phone: +44-(0)21-414-3711
Fax: +44-(0)21-414-4281

WORKSHOP and TUTORIAL ENQUIRIES
Hyacinth S. Nwana, Computer Science Dept., Keele University, Newcastle, Staffs ST5 5BG, ENGLAND.
JANET: nwanahs at uk.ac.keele.cs
Other: nwanahs at cs.keele.ac.uk
Phone: +44 (0)782 583413
Fax: +44 (0)782 713082

________________________________________________________________________

MESSAGE FROM THE PROGRAMME CHAIR

________________________________________________________________________

The biennial conferences of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour are traditionally "single-track" scientific meetings aiming to bring together all areas of research in AI and computational cognitive science, and AISB'93 is no exception. With the end of the century close at hand, it seemed appropriate to choose a forward-looking theme, so the five invited speakers, all distinguished researchers in their own sub-fields, have been asked to identify trends and project into the future, instead of simply surveying past achievements.
Some but not all of the submitted papers also analyse prospects; the others report on work already done. The referees and the selection committee used as a major criterion for selection the requirement that papers should be of interest to a general AI audience. All of the papers have in common a commitment to a "design-based" approach to the study of intelligence, though some of them focus mainly on requirements, some mainly on designs and some on actual implementations. There is, of course, wide variation: not only in the sub-domains of AI addressed (such as vision, learning, language, emotions), but also in the techniques used (such as symbolic reasoning, neural net models, genetic algorithms), and between those who attempt to design intelligent agents using a top-down analysis of human-like intelligence and those who work bottom-up from primitive insect-like mechanisms. There is also international variety, with papers from several European countries and further afield. This variety of topics and approaches promises to make the conference particularly lively, with plenty of scope for controversy. We have therefore decided to allow a little more time than usual for each item in the programme, so that questions and discussions can add to the interest. There will also be poster presentations, where some work that could not be included in the formal proceedings can be presented, and it is expected that there will be book displays by major AI publishers and possibly some displays and demonstrations by vendors of AI software and systems. The conference will be preceded by a programme of seven tutorials and workshops, for which separate registration is available. Integral Solutions Limited have agreed to present a prize of AI software, including Poplog, and a place on one of their training courses, for the paper voted "best presented" by the audience. For those involved in AI and Cognitive Science, the conference is a primary opportunity to meet, discuss and learn about current work. For those new to these fields, the conference is a chance to become acquainted with them in pleasant surroundings and to meet the people involved. For full-time students, large reductions in registration fees are offered. The location of the conference is one of the attractive halls of residence in a pleasant lakeside setting at one end of the campus of the University of Birmingham. This is not very far from the city centre, so a visit to one of the local attractions of the centre, such as the renowned Symphony Hall, will require a journey of only a few minutes by taxi or train. Single room accommodation has been booked, and the auditorium is in the same building as the bedrooms and dining room, so the conference will provide excellent opportunities for informal mixing and discussions. The number of rooms available is limited, so early booking is recommended. We look forward to seeing you and hope you enjoy the conference. Aaron Sloman.
________________________________________________________________________
TECHNICAL PROGRAMME (The order is provisional. Invited talks are asterisked)
________________________________________________________________________

MONDAY MARCH 29TH
Workshops and Tutorials (see below)

TUESDAY MARCH 30TH (Morning)
Workshops and Tutorials (see below)

TUESDAY MARCH 30TH (Afternoon)
* Kurt VanLehn (Pittsburgh) --- Prospects for modelling human learning (e.g. college physics)
Husbands, Harvey, Cliff --- An evolutionary approach to AI
Edmund Furse --- Escaping from the box
Thomas Vogel --- Learning biped robot obstacle crossing
Antunes, Moniz, Azevedo --- RB+: the dynamic estimation of the opponent's strength

WEDNESDAY 31ST MARCH
* Ian Sommerville (Lancaster) --- Prospects for AI in systems design
Oh, Azzelarabe, Sommerville, French --- Incorporating a cooperative design model in a computer-aided design improvement system
Stuart Watt --- Fractal behaviour analysis
Valente, Breuker, Bredeweg --- Integrating modelling approaches in the CommonKADS library
Cawsey, Galliers, Reece, Jones --- Revising beliefs and intentions: a unified framework for agent interaction
* Allan Ramsay (Dublin) --- Prospects for natural language processing by machine
Lin, Fawcett, Davies --- GENEDIS: the discourse generator in COMMUNAL
Miwa, Simon --- Production system modelling to represent individual differences: tradeoff between simplicity and accuracy in simulation of behaviour
Freksa, Zimmerman --- Enhancing spatial reasoning by the concept of motion

POSTER SESSION

THURSDAY 1ST APRIL
* Glyn Humphreys (Birmingham) --- Prospects for connectionism - science and engineering
Rodrigues, Lee --- Nouvelle AI and perceptual control theory
Vogel, Popowich, Cercone --- Logic-based inheritance reasoning
Beatriz Lopez --- Reactive planning through the integration of a case-based system and a rule-based system
James Stone --- Computer vision: what is it good for?

SESSION ON EMOTIONS AND MOTIVATION
Bruce Katz --- Musical resolution and musical pleasure
Moffatt, Phaf, Frijda --- Analysis of a model of emotions
Beaudoin, Sloman --- A computational exploration of the attention control theory of motivator processing and emotion
Reichgelt, Shadbolt et al. --- EXPLAIN: on implementing more effective tutoring systems

POSTER SESSION

CONFERENCE DINNER

FRIDAY 2ND APRIL (Morning)
* David Hogg (Leeds) --- Prospects for computer vision
Elio, Watanabe --- Simulating the interactive effects of domain knowledge and category structure within a constructive induction system
Dalbosco, Armando --- MRG: an integrated multifunctional reasoning system
Bibby, Reichgelt --- Modelling multiple uses of the same representation in SOAR1
Sam Steel --- A connection between decision theory and program logic

INFORMAL WORKSHOP ON MOTIVATION, EMOTIONS AND ATTENTION (see below)
________________________________________________________________________
Workshop 1: Connectionism, Cognition and a New AI
Organiser: Dr Noel Sharkey (Exeter)
Committee: Andy Clark (Sussex), Glyn Humphreys (Birmingham), Kim Plunkett (Oxford), Chris Thornton (Sussex)
Time: Monday 29th pm & Tuesday 30th March (all day)
Note: This workshop overlaps with the events in the main Technical Programme on the afternoon of Tuesday 30th.
________________________________________________________________________

A number of recent developments in Connectionist Research have strong implications for the future of AI and the study of Cognition. Among the most important are developments in Learning, Representation, and Productivity (or Generalisation). The aim of the workshop is to focus on how these developments may change the way we look at AI and the study of Cognition.
SUGGESTED TOPICS FOR DISCUSSION ABSTRACTS INCLUDE: Connectionist representation, Generalisation and Transfer of Knowledge, Learning Machines and models of human development, Symbolic Learning versus Connectionist learning, Advantages of Connectionist/Symbolic hybrids, Modelling Cognitive Neuropsychology, Connectionist modelling of Creativity and music (or other arts).

WORKSHOP ENTRANCE: Attendance at the workshop will be limited to 50 or 60 places, so please let us know as soon as possible if you are planning to attend, and to which of the following categories you belong.

DISCUSSION PAPERS: Acceptance of discussion papers will be decided on the basis of extended abstracts (please try to keep them under 500 words) clearly specifying a 15 to 20 minute discussion topic for oral presentation.

ORDINARY PARTICIPANTS: A limited number of places will be available for participants who wish to sit in on the discussion but do not wish to present a paper. Please get in early with a short note saying what your purpose in attending is.

PLEASE SEND SUBMISSIONS TO: Dr. Noel Sharkey, Centre for Connection Science, Dept. Computer Science, University of Exeter, Exeter EX4 4PT, Devon, U.K. Email: noel at uk.ac.exeter.dcs

REGISTRATION: see Registration Form below.
________________________________________________________________________
Workshop 2: Qualitative and Causal Reasoning
Organiser: Dr Tony Cohn (Leeds, U.K.)
Committee: Mark Lee (Aberystwyth), Chris Price (Aberystwyth), Chris Preist (Hewlett Packard Labs, Bristol)
Time: Monday 29th March + Tuesday 30th March (morning)
________________________________________________________________________

This workshop is intended to follow on from the series of DKBS (Deep Knowledge Based Systems) workshops which were originally initiated under the Alvey programme. QCR93 will be the 8th in the series. The format of the 1.5 day workshop will consist mainly of presentations, with ample time for discussion. It is hoped that there will also be an invited talk. Participation will be by invitation only, and numbers will be limited in order to keep an informal atmosphere. If you wish to present a paper at the workshop, please send 4 copies (max 5000 words) to the address below by 20 Feb. Electronic submission is also possible (either PostScript or plain ASCII). Alternatively, send a letter or email explaining your reasons for being interested in attending. Papers may address any aspect of Qualitative and Causal Reasoning and Representation. Thus the scope of the workshop includes the following topics:

* Task-level reasoning (e.g., design, diagnosis, training, etc.)
* Ontologies (e.g., space, time, fluids, etc.)
* Explanation, causality and teleology
* Mathematical formalization of QR
* Management of multiple models (formalization, architecture, studies)
* Model building tools
* Integration with other techniques (e.g., dynamics, uncertainty, etc.)
* Methodologies for selecting/classifying QR methods
* Practical applications of QR, or Model Based Reasoning etc.

These topics are not meant to be prescriptive, and papers on other related or relevant topics are welcome. Suggestions for special sessions for the workshop are also welcome (e.g. panel session topics). There may be some partial bursaries available to students who wish to attend. If you wish to apply for such a bursary, please send a letter giving a case for support (include details of any funding available from elsewhere); a CV should be attached. Electronic submission is preferred.

REGISTRATION: see Registration Form below.
CORRESPONDENCE AND SUBMISSIONS: Tony Cohn, Division of AI, School of Computer Studies, University of Leeds, LEEDS, LS2 9JT, ENGLAND. UUCP: ...!ukc!leeds!agc JANET: agc at uk.ac.leeds.scs INTERNET: agc at scs.leeds.ac.uk BITNET: agc%uk.ac.leeds.scs at UKACRL PHONE: +44 (0)532 335482 FAX: +44 (0)532 335468
________________________________________________________________________
Workshop 3: AISB POST-GRADUATE STUDENT WORKSHOP
Organiser: Dr Hyacinth Nwana (University of Keele, UK)
Time: Monday 29th (all day) + Tuesday 30th March (morning)
________________________________________________________________________

Many postgraduate students become academically isolated as a result of working in specialised domains within fairly small departments. This workshop is aimed at providing a forum for graduate students in AI to present and discuss their ideas with other students in related areas. In addition, there will be invited presentations from a number of prominent researchers in AI. A small number of group discussions is planned, on topics including studying for and completing a thesis, life after a doctorate, paper refereeing, and how to make use of your supervisor. All attendees are expected to present an introduction to their research in a poster session on the morning of the first day. In addition, a couple of attendees will be given the opportunity to present short papers. Confirmed tutors so far include:

Dr John Self (Lancaster) - 'Why do supervisors supervise?'
Dr Steve Easterbrook (Sussex) - 'How to write a thesis'
Dr Elizabeth Churchill (Nottingham) - Title to be confirmed.
Dr Peter Hancox (Birmingham) - Title to be confirmed.

Applicants are asked to submit a two-page abstract of their current work. In addition, full papers of between 3000 and 5000 words may be submitted; these will be considered for publication in a supplement to the AISB quarterly journal. Deadline for 2-page abstracts: 10th February 1993. Please send an abstract or a full paper of work to: Dr. Hyacinth S. Nwana, Computer Science Dept. Keele University, Newcastle, Staffs ST5 5BG, ENGLAND. JANET: nwanahs at uk.ac.keele.cs other: nwanahs at cs.keele.ac.uk tel: +44 (0)782 583413 fax: +44 (0)782 713082

REGISTRATION: see Registration Form below.
________________________________________________________________________
Workshop 4: Motivation, Emotions and Attention
Organiser: Tim Read (University of Birmingham)
Time: Friday 2nd April, 2.30 - 5pm
________________________________________________________________________

An informal workshop will be held after lunch on Friday 2nd April, enabling further discussion of issues raised in the Thursday afternoon session on motivation and emotions, and possibly additional presentations. There will be no charge, though numbers will be limited by available space. For more information, contact Tim Read (address below).

The study of emotion encounters many difficulties, among them the looseness of emotional terminology in everyday speech. A theory of emotion should supersede this terminology, and should connect with such issues as motivation, control of attention, resource limitations, architectural parallelism and underlying biological mechanisms. Computation provides useful analogies in generating an information processing account of emotion, and computer modelling is a rigorous and constructive aid in developing theories of affect. It makes sense for researchers within this field to collaborate, and the aim of the workshop is to facilitate cross-fertilisation of ideas, sharing of experience, and healthy discussion.
If you wish to make a presentation, please contact: Tim Read, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham B15 2TT, England. EMAIL: T.M.Read at cs.bham.ac.uk Phone: +44-(0)21-414-4766 Fax: +44-(0)21-414-4281

REGISTRATION: see Registration Form below (no charge for this workshop).
________________________________________________________________________
Tutorial 1: Collaborative Human-Computer Systems: Towards an Integrated Theory of Coordination
Dr Stefan Kirn (University of Muenster, Germany)
Time: Monday 29th March (morning)
________________________________________________________________________

Intelligent support for human experts' intellectual work is one of the most important competitive edges of computer technology today. Important advances have been made in the fields of computer networking, AI (e.g., KADS, CBR, Distributed AI), integrated design frameworks (the European JESSI project), nonstandard databases (e.g., databases for teamwork support), computer supported cooperative work, and organizational theory. The time is ripe for developing integrated human-computer collaborative systems that significantly enhance the problem solving capabilities of human experts. Perhaps one of the most interesting challenges here is the development of an integrated theory of human-computer coordination. Such a theory will help to link humans and computers together so that they can work collaboratively on complex "nonstandard" problems. The aim of the tutorial is to draw together the loose ends of the disciplines mentioned above, and thereby to argue for the development of an integrated theory of human-computer coordination. Only undergraduate-level knowledge in at least one of the following fields is assumed: AI, database/information systems, organisational theory and CSCW.

Dr Stefan Kirn is senior researcher and project leader at the Institute of Business and Information Systems of the Westfaelische Wilhelms-University of Muenster. He has more than 30 major publications in international journals and conferences, primarily in the areas of DAI, Cooperative Information Systems, CSCW and Computer-Aided Software Engineering.

REGISTRATION: see Registration Form below.
________________________________________________________________________
Tutorial 2: The Motivation, Meaning and Use of Constraints
Dr Mark Wallace (European Computer-Industry Research Centre, Muenchen, Germany)
Time: Monday 29th March (afternoon)
________________________________________________________________________

This tutorial explains how constraints contribute to clear, clean, efficient programs. We study constraints as specification tools, as formal tools, and as implementation tools. Finally, we examine the use of constraints in search and optimisation problems. As the tutorial unfolds, we will explain the three different notions of constraints: constraints as built-in relations, with built-in solvers; constraints as active agents, communicating with a store; and propagation constraints. We will also explain how these notions are related, and moreover how the different types of constraints can all be combined in a single program. For programming examples, the logic programming framework will be used. The tutorial is aimed at postgraduates, researchers and teachers of AI who would like to know what constraints are, and what they are for. Anyone interested in declarative programming and seeking a solution to the problem of efficiency will also benefit from the tutorial.
An understanding of formal logic will be assumed, and some familiarity with logic programming will be necessary to appreciate the programming examples.

Dr Mark Wallace leads the Constraints Reasoning Team at ECRC (the European Computer-Industry Research Centre), Munich. He introduced "Negation by Constraints" at SLP'87. He has recently presented papers at IJCAI'92, FGCS'92 and JFPL'92. Recent tutorial presentations include a short course on Deductive and Object-Oriented Knowledge Bases at the Technical University of Munich, and "Constraint Logic Programming - An Informal Introduction", written with the CORE team at ECRC for the Logic Programming Summer School, '92.

REGISTRATION: see Registration Form below.
________________________________________________________________________
Tutorial 3: A Little Turing and Goedel for Specialists in AI
Prof. Alexis Manaster Ramer (Wayne State University, USA)
Time: Monday 29th March (morning + afternoon)
________________________________________________________________________

Currently debated issues in the foundations of AI go directly back to the technical work of people like Turing and Goedel on the power and limits of formal systems and computing devices. Yet neither the relevant results nor the intellectual climate in which they arose are widely discussed in the AI community (for example, how many know that Goedel himself believed that the human mind was not subject to the limits set by his theorems on formal systems?). The purpose of this tutorial is to develop a clear picture of the fundamental results and their implications, as seen both at the time they were obtained and at the present time. We will primarily refer to the work of Goedel, Turing, Chomsky, Hintikka, Langendoen and Postal, Searle, and Penrose. Some background knowledge is assumed: some programming, some AI and some discrete mathematics.

Dr Alexis Manaster Ramer is professor of Computer Science at Wayne State University. He has over 100 publications and presentations in linguistics, computational linguistics, and foundations of CS and AI. A few years ago he taught a short course on the theory of computation for the Natural Language Processing group at the IBM T.J. Watson Research Center (Hawthorne, NY, USA), and this past summer he taught a one-week advanced course on the mathematics of language at the European Summer School in Logic, Language, and Information (Colchester, UK).

REGISTRATION: see Registration Form below.
________________________________________________________________________
OTHER MEETINGS
________________________________________________________________________

LAGB CONFERENCE: Shortly before AISB'93, the Linguistics Association of Great Britain (LAGB) will hold its Spring Meeting at the University of Birmingham from 22-24th March, 1993. For more information, contact Dr. William Edmondson: postal address as below; phone +44-(0)21-414-4773; email EDMONDSONWH at vax1.bham.ac.uk

JCI CONFERENCE: The Joint Council Initiative in Cognitive Science and Human Computer Interaction will hold its Annual Meeting on Monday 29th March 1993 in the same buildings as AISB'93 (in parallel with the AISB'93 workshops and tutorials). The theme will be "Understanding and Supporting Acquisition of Cognitive Skills". For more information, contact Elizabeth Pollitzer, Department of Computing, Imperial College, 180 Queens Gate, London SW7 2BZ, U.K.; phone +44-(0)71-581-8024; email eep at doc.ic.ac.uk.
________________________________________________________________________
REGISTRATION NOTES: Main Programme, Workshops and Tutorials
________________________________________________________________________

o Please print off the form, tick the items you require, enter sub-totals and totals, and send it by post, together with payment, to: AISB'93 Registrations, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, U.K.
o Payment should be made by cheque or money order payable to `The University of Birmingham', drawn in pounds sterling on a UK clearing bank, and should accompany the form below.
o Registrations postmarked after 10th March count as late registrations.
o It is not possible to register by email.
o Confirmation of booking, a receipt, and travel details will be sent on receipt of this application form.
o The Conference Dinner (20 pounds) is on the evening of Thursday 1st.
o Delegates wishing to join AISB (thus avoiding the non-AISB member supplement) should contact: AISB Administration, Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, U.K.; phone: +44-(0)273 678379; fax: +44-(0)273 678188; email: aisb at cogs.susx.ac.uk

Donald Peterson, January 1993.
______________________________________________________________________
R E G I S T R A T I O N   F O R M  ----  A I S B' 9 3
______________________________________________________________________

Figures in parentheses are for full-time students (send a photocopy of your ID).

ACCOMMODATION and FOOD
                   28th    29th    30th    31st     1st   sub-totals
lunch                      5.50    5.50    5.50    5.50     ______
dinner                     7.50    7.50    7.50   20.00     ______
bed & breakfast   23.00   23.00   23.00   23.00   23.00     ______
                                                   total    ______
vegetarians please tick _____

TECHNICAL PROGRAMME, WORKSHOPS and TUTORIALS
technical programme       175 (40)   _____
non-AISB members add       30        _____
late registration add      35        _____
Nwana workshop             50        _____
Sharkey workshop           60 (30)   _____
Cohn workshop              60 (30)   _____
Read workshop               0        _____
Manaster Ramer tutorial   110 (55)   _____
Wallace tutorial           75 (30)   _____
Kirn tutorial              75 (30)   _____
                          total      _____ Pounds

PERSONAL DETAILS
Name    ___________________________________________   Full-time student? Y/N
Address ___________________________________________
        ___________________________________________
        ___________________________________________
        ___________________________________________
Phone   _________________________  Fax ___________
Email   ___________________________________________

I wish to register for the events indicated, and enclose a cheque in pounds sterling, drawn on a U.K. clearing bank and payable to the `University of Birmingham' for .....

Signed _________________________  Date ___________

From thrun at informatik.uni-bonn.de Tue Jan 19 16:46:35 1993 From: thrun at informatik.uni-bonn.de (Sebastian Thrun) Date: Tue, 19 Jan 93 22:46:35 +0100 Subject: papers in neuroprose archive Message-ID: <9301192146.AA04824@uran> Dear Connectionists, this mail is to announce two new papers in Jordan Pollack's neuroprose archive: 1) Explanation-Based Neural Network Learning for Robot Control, by Tom Mitchell and Sebastian Thrun, to appear in: NIPS-5 2) Exploration and Model Building in Mobile Robot Domains, by Sebastian Thrun, to appear in: Proceedings of the ICNN-93 Enclosed you find both abstracts and the (standard) instructions for retrieval. Comments are welcome. Have fun, Sebastian Thrun ---------------------------------------------------------------------- Explanation-Based Neural Network Learning for Robot Control
Tom M. Mitchell (CMU, mitchell at cs.cmu.edu) Sebastian B. Thrun (Bonn University, thrun at uran.informatik.uni-bonn.de)

How can artificial neural nets generalize better from fewer examples? In order to generalize successfully, neural network learning methods typically require large training data sets. We introduce a neural network learning method that generalizes rationally from many fewer data points, relying instead on prior knowledge encoded in previously learned neural networks. For example, in the robot control learning tasks reported here, previously learned networks that model the effects of robot actions are used to guide subsequent learning of robot control functions. For each observed training example of the target function (e.g. the robot control policy), the learner *explains* the observed example in terms of its prior knowledge, then *analyzes* this explanation to infer additional information about the shape, or slope, of the target function. This shape knowledge is used to bias generalization in the learned target function. Results are presented applying this approach to a simulated robot task based on reinforcement learning. (file name: mitchell.ebnn-nips5.ps.Z)

Exploration and Model Building in Mobile Robot Domains

Sebastian B. Thrun (Bonn University, thrun at uran.informatik.uni-bonn.de)

I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for the transfer of knowledge across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low-confidence regions. An efficient dynamic programming method is employed in the background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real time. It has been operating successfully in an office building environment for periods of up to hours. (file name: thrun.robots-icnn93.ps.Z)
----------------------------------------------------------------------
Postscript versions of both papers may be retrieved from Jordan Pollack's neuroprose archive. If you have a Postscript printer, please follow the instructions below. If not, feel free to contact me (thrun at uran.informatik.uni-bonn.de) for a hardcopy.

unix> ftp archive.cis.ohio-state.edu ftp login name> anonymous ftp password> xxx at yyy.zzz ftp> cd pub/neuroprose ftp> bin ftp> get mitchell.ebnn-nips5.ps.Z ftp> get thrun.robots-icnn93.ps.Z ftp> bye unix> uncompress mitchell.ebnn-nips5.ps.Z unix> uncompress thrun.robots-icnn93.ps.Z unix> lpr mitchell.ebnn-nips5.ps unix> lpr thrun.robots-icnn93.ps

Note that the second file is rather long. Some printers have limits on the size of documents they will print; in this case, it may be necessary to circumvent the limit by using "lpr" with the "-s" option on the machine to which the printer is physically connected.
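The *analyze* step in the first abstract is the part that is easiest to miss, so here is a minimal sketch of it in code. Nothing below is taken from the paper: the function names, the finite-difference approximation, and the weighting constant mu are all illustrative assumptions (EBNN itself extracts slopes analytically, by back-propagating through the previously learned networks):

```python
import numpy as np

def extract_slopes(prior_model, x, eps=1e-4):
    # EBNN "analyze" step (sketch): estimate the slope of the target
    # function at training point x by differentiating the explanation.
    # The derivative of the previously learned prior model is
    # approximated here with central finite differences.
    x = np.asarray(x, dtype=float)
    slopes = np.empty_like(x)
    for i in range(x.size):
        d = np.zeros_like(x)
        d[i] = eps
        slopes[i] = (prior_model(x + d) - prior_model(x - d)) / (2.0 * eps)
    return slopes

def ebnn_loss(pred_value, pred_slope, y, slope, mu=0.1):
    # TangentProp-style objective: fit the observed value *and* the
    # explained slope, so each example constrains the local shape of
    # the learned function rather than a single point of it.
    return (pred_value - y) ** 2 + mu * np.sum((pred_slope - slope) ** 2)

# Toy usage: a known smooth function stands in for a previously
# learned action-model network.
prior_model = lambda x: np.sin(x[0]) * x[1]
print(extract_slopes(prior_model, np.array([0.5, 2.0])))
```

Training then amounts to gradient descent on ebnn_loss summed over examples, with pred_slope computed from the learner's own network.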
From ttj10 at eng.cam.ac.uk Wed Jan 20 07:13:24 1993 From: ttj10 at eng.cam.ac.uk (ttj10@eng.cam.ac.uk) Date: Wed, 20 Jan 93 12:13:24 GMT Subject: Real Pole-Balancing Message-ID: <27008.9301201213@fear.eng.cam.ac.uk>

Recently I posted an abstract for a technical report on real pole-balancing [1]. Andy Barto has since pointed out that the abstract gives the impression of condemning approximate dynamic programming methods as tools for learning control. This was not our intention. The offending line is "This limits the usefulness of this kind of learning controller to small problems which are likely to be better controlled by other means. Before a learning controller can tackle more difficult problems, a more powerful learning scheme has to be found."

Firstly, by "this kind of learning controller" was meant the kind of learning controller which required a carefully designed state space decoder. Setting the parameters of the controller was not straightforward, and required some trial and error, helped by prior knowledge of the plant. By "more difficult problems" was meant problems with even more parameters. It seems reasonable to suggest that a better learning scheme would be needed in such instances. But that is not to say that an improved scheme that made use of approximate dynamic programming techniques would not be up to the job. Andy Barto points out that better learning schemes have already been produced.

The early ACE/ASE learning algorithm [2] was chosen for our implementation for speed of execution in a real-time environment. It might also be considered interesting as a base-line comparison, since the ACE/ASE controller is relatively well-known. Barto, Sutton and Anderson used Michie and Chambers' [3] state representation, since this was the work on which they were improving. They mentioned that this was a critical part of the algorithm, and that it should be adaptive.

A copy of the report is available by ftp from svr-ftp.eng.cam.ac.uk, as reports/jervis_tr115.ps.Z.

references:

[1] @techreport{Jervis92,
      author      = "T. T. Jervis and F. Fallside",
      title       = "Pole Balancing on a Real Rig using a Reinforcement Learning Controller",
      year        = "1992",
      month       = "December",
      number      = "CUED/F-INFENG/TR 115",
      institution = "Cambridge University Engineering Department",
      address     = "Trumpington Street, Cambridge, England"}

[2] @article{Barto83,
      author  = "A. G. Barto and R. S. Sutton and C. W. Anderson",
      title   = "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems",
      year    = "1983",
      month   = "September/October",
      journal = "IEEE Transactions on Systems, Man and Cybernetics",
      volume  = "SMC-13",
      pages   = "834-846"}

[3] @incollection{Michie68,
      author    = "D. Michie and R. A. Chambers",
      title     = "Boxes: An Experiment in Adaptive Control",
      booktitle = "Machine Intelligence",
      publisher = "Oliver and Boyd",
      year      = "1968",
      volume    = "2",
      pages     = "137-152",
      editor    = "E. Dale and D. Michie"}

From stolcke at ICSI.Berkeley.EDU Wed Jan 20 14:22:01 1993 From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke) Date: Wed, 20 Jan 93 11:22:01 PST Subject: new cluster version available Message-ID: <9301201922.AA06528@icsib30.ICSI.Berkeley.EDU>

I'm releasing a new version of the time-honored cluster program (that also does PCA). I recently made a small change to the algorithm that speeds clustering up by a factor of n (the number of data points). The algorithm now runs in time O(n^2) (formerly O(n^3)) and uses memory O(n) (formerly O(n^2)).
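The announcement does not say which change achieves these bounds. One standard way to get O(n^2) time and O(n) memory in bottom-up clustering is the nearest-neighbor-chain idea: store no distance matrix, recompute inter-cluster distances on the fly from centroids, and always merge reciprocal nearest neighbors. Below is a minimal sketch under those assumptions; the function name and the choice of Ward linkage are mine, and this is not necessarily what cluster actually does:

```python
import numpy as np

def nn_chain_ward(points):
    # Bottom-up clustering in O(n^2) time and O(n) extra memory:
    # no distance matrix is stored; Ward distances are recomputed
    # from cluster centroids and sizes whenever they are needed.
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    size = {i: 1 for i in range(n)}
    centroid = {i: pts[i].copy() for i in range(n)}
    merges = []                      # (cluster_a, cluster_b, merge_cost)
    next_id, chain = n, []

    def ward(a, b):                  # Ward linkage from centroids alone
        diff = centroid[a] - centroid[b]
        return size[a] * size[b] / (size[a] + size[b]) * (diff @ diff)

    while len(size) > 1:
        if not chain:
            chain.append(next(iter(size)))
        while True:
            a = chain[-1]            # extend chain by a's nearest neighbor
            b = min((c for c in size if c != a),
                    key=lambda c: (ward(a, c), c))     # ties broken by id
            if len(chain) > 1 and b == chain[-2]:
                break                # reciprocal nearest neighbors found
            chain.append(b)
        a, b = chain.pop(), chain.pop()
        merges.append((a, b, ward(a, b)))
        # Ward linkage is "reducible", so the rest of the chain stays
        # valid after the merge and total work is O(n^2).
        centroid[next_id] = (size[a] * centroid[a] + size[b] * centroid[b]) \
                            / (size[a] + size[b])
        size[next_id] = size[a] + size[b]
        del size[a], size[b], centroid[a], centroid[b]
        next_id += 1
    return merges

# Small demo; the timing quoted below refers to a 1000-by-10 data set.
print(len(nn_chain_ward(np.random.rand(200, 10))))    # 199 merges
```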
On a Sparcstation 2, this speedup means you can cluster a 1000-by-10 data set in 39 secs as opposed to 230 secs. Systems short on memory should see even more dramatic improvements due to reduced paging. As before, the source code is available by ftp: % mkdir cluster; cd cluster % ftp ftp.icsi.berkeley.edu ftp> cd pub/ai ftp> binary ftp> get cluster-2.5.tar.Z ftp> quit % zcat cluster-2.5.tar.Z | tar xf - % make # after looking over the Makefile --Andreas

From n at predict.com Wed Jan 20 17:25:28 1993 From: n at predict.com (n (Norman Packard)) Date: Wed, 20 Jan 93 15:25:28 MST Subject: Job Offer: Research on Financial Analysis in Santa Fe NM Message-ID: <9301202225.AA00816@mule>

Job Opening for Research Scientist Prediction Company Financial Forecasting

Prediction Company is a small Santa Fe, NM based startup firm utilizing the latest nonlinear forecasting technologies for prediction and computerized trading of derivative financial instruments. The senior technical founders of the firm are Doyne Farmer and Norman Packard, who have worked for over ten years in the fields of chaos theory and nonlinear dynamics. The technical staff includes other senior researchers in the field. The company has the backing of a major technically based trading firm and their partner, a major European bank.

There is currently an opening at the company for a research scientist to assist in modeling and related data analysis research. The successful applicant will be a talented scientist with experience in one or more of the following areas: (i) learning algorithms, such as neural networks, decision trees, and genetic algorithms, (ii) time series forecasting, (iii) statistics. Experience in applying learning algorithms to real data and a strong computer programming background, preferably in C++, are essential. A sound background in statistics and experience with financial applications are desirable. Applicants should send resumes to Prediction Company, 234 Griffin Street, Santa Fe, NM 87501 or to Laura Barela at laura%predict.com at santafe.edu.

From tap at cs.toronto.edu Thu Jan 21 19:30:04 1993 From: tap at cs.toronto.edu (Tony Plate) Date: Thu, 21 Jan 1993 19:30:04 -0500 Subject: nips*92 preprint available Message-ID: <93Jan21.193005edt.594@neuron.ai.toronto.edu>

Preprint available (to appear in C. L. Giles, S. J. Hanson, and J. D. Cowan, editors, Advances in Neural Information Processing Systems 5 (NIPS*92), Morgan Kaufmann, San Mateo, CA)

Holographic Recurrent Networks

Tony A. Plate Department of Computer Science University of Toronto Toronto, M5S 1A4 Canada tap at ai.utoronto.ca

ABSTRACT Holographic Recurrent Networks (HRNs) are recurrent networks which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous space. The performance of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks.

- Obtain by ftp from archive.cis.ohio-state.edu in pub/neuroprose. - No hardcopy available.
- FTP procedure: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get plate.nips5.ps.Z ftp> quit unix> uncompress plate.nips5.ps.Z unix> lpr plate.nips5.ps (or however you print postscript)

From sml at essex.ac.uk Thu Jan 21 12:12:25 1993 From: sml at essex.ac.uk (Lucas S M) Date: Thu, 21 Jan 93 17:12:25 GMT Subject: No subject Message-ID: <17256.9301211712@esesparc.essex.ac.uk>

1st ANNOUNCEMENT AND CALL FOR PAPERS

GRAMMATICAL INFERENCE: THEORY, APPLICATIONS AND ALTERNATIVES

22-23 April, 1993, at the UNIVERSITY OF ESSEX, WIVENHOE PARK, COLCHESTER CO4 3SQ, UK. Sponsored by the Institute of Electrical Engineers and the Institute of Mathematics.

Relevant Research Areas: * Computational Linguistics * Machine Learning * Pattern Recognition * Neural Networks * Artificial Intelligence

MOTIVATION: Grammatical Inference is an immensely important research area that has suffered from the lack of a focussed research community. A two-day colloquium will be held at the University of Essex on the 22-23rd April 1993. The purpose of this colloquium is to bring together researchers who are working on grammatical inference and closely related problems such as sequence learning and prediction. Papers are sought for the technical sessions listed below.

BACKGROUND: A grammar is a finite declarative description of a possibly infinite set of data (known as the language), reversible in the sense that it may be used to detect language membership (or degree of membership) of a pattern, or it may be used generatively to produce samples of the language. The language may be formal and simple, such as the set of all symmetric strings over a given alphabet; formal and more complex, such as the set of legal PASCAL programs; less formal, such as sentences or phrases in natural language; noisy, such as vector-quantised speech or handwriting; or even spatial rather than temporal, such as 2-d images. For the noisy cases, stochastic grammars are often used that define the probability that the data was generated by the given grammar. So, given a set of data that the grammar is supposed to generate, and perhaps also a set that it should not generate, the problem is to learn a grammar that not only satisfies these conditions but, more importantly, generalises to unseen data in some desirable way (this may be strictly specified in test cases where the grammar used to create the training samples is known).

To date, the grammatical inference research community has evolved largely divided into the following areas:

a) Theories about the types of language that can and cannot be learned. These theories are generally concerned with the types of language that may and may not be learned in polynomial time. They are arguably irrelevant in practical terms, since in practical applications we are usually happy to settle for a good grammar rather than some `ideal' grammar.

b) Explicit inference; this deals directly with modifying a set of production rules until a satisfactory grammar is obtained.

c) Implicit inference, e.g. estimating the parameters of a hidden Markov model -- in this case production rule probabilities in the equivalent stochastic regular grammar are represented by pairs of numbers in the HMM.

d) Estimating models whose grammatical equivalence is uncertain (e.g.
recurrent neural networks), but which often aim to solve exactly the same problem.

In many cases, researchers in these distinct subfields seem unaware of the work in the other subfields; this is surely detrimental to the progress of grammatical inference research.

TECHNICAL SESSIONS: Oral and poster papers are requested in the following areas:

Theory: What kinds of language are theoretically learnable, and the practical import of such theories; learning 2-d and higher-dimensional grammars, attribute grammars, etc.

Algorithms: Any new GI algorithms, or new insights on old ones; grammatical inference assistants that aim to aid humans in writing grammars; performance of genetic algorithms and simulated annealing for grammatical inference, etc.

Applications: Any interesting applications in natural language processing, speech recognition, cursive script recognition, pattern recognition, sequence prediction, financial markets, etc.

Alternatives: The power of alternative approaches to sequence learning, such as stochastic models and artificial neural networks, where the inferred grammar may have a distributed rather than an explicit representation.

Competition: A number of datasets will be made available for authors to report the performance of their algorithms on, in terms of learning speed and generalisation power. There is also the possibility of a live competition in the demonstration session.

Demonstration: There will be a session where authors may demonstrate their algorithms. For this purpose we have a large number of Unix workstations running X-Windows, with compilers for C, C++, Pascal, Fortran, Common Lisp and Prolog. If your algorithms are written in a more exotic language, we may still be able to sort something out. PCs can be made available if necessary.

DISCUSSIONS: There will be open forum discussions on planning the next Grammatical Inference Conference, and on setting up a Grammatical Inference Journal (possibly an electronic one).

PUBLICATIONS: Loose-bound collections of accepted conference papers will be distributed to delegates upon arrival. It is planned to publish a selection of these papers in a book following the conference.

REMOTE PARTICIPATION: Authors from distant lands unwilling to travel to Essex for the conference are encouraged to submit a self-explanatory poster-paper that will be displayed at the conference.

SUBMISSION DETAILS: Prospective authors should submit a 2-page abstract to Simon Lucas at the address below by the end of February, 1993. Email and faxed abstracts are acceptable. Notification of the intention to submit an abstract would also be appreciated.

REGISTRATION DETAILS: Prospective delegates are requested to mail/email/fax me at the address below for further details.
------------------------------------------------
Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: 0206 872935 Fax: 0206 872900 Email: sml at uk.ac.essex
-------------------------------------------------

From berg at cs.albany.edu Fri Jan 22 17:55:01 1993 From: berg at cs.albany.edu (George Berg) Date: Fri, 22 Jan 1993 17:55:01 -0500 (EST) Subject: Computational Biology Faculty Position Message-ID: <9301222255.AA05613@karp.albany.edu> A non-text attachment was scrubbed...
Name: not available Type: text Size: 1759 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/28d22bbc/attachment.ksh

From schuetze at csli.stanford.edu Sun Jan 24 14:16:02 1993 From: schuetze at csli.stanford.edu (Hinrich Schuetze) Date: Sun, 24 Jan 93 11:16:02 -0800 Subject: paper on distributed semantic representations Message-ID: <9301241916.AA28750@Csli.Stanford.EDU>

The following paper is now available in the connectionist archive, archive.cis.ohio-state.edu (128.146.8.52), in pub/neuroprose under the name: schuetze.wordspace.ps.Z

WORD SPACE

Hinrich Schuetze CSLI, Stanford University

ABSTRACT This paper describes an efficient, corpus-based method for inducing distributed semantic representations for a large number of words (50,000) from lexical cooccurrence statistics. Each word is represented by a 97-dimensional vector that is computed by means of a singular-value decomposition of a 5000-by-5000 matrix recording cooccurrence in a large text corpus (The New York Times). The representations are successfully applied to word sense disambiguation using a nearest neighbor method.

to appear in: S. J. Hanson, J. D. Cowan, and C. L. Giles (Eds.), Advances in Neural Information Processing Systems 5. San Mateo, CA: Morgan Kaufmann.

author's address: Hinrich Schuetze CSLI, Ventura Hall Stanford, CA 94305-4115 schuetze at csli.stanford.edu

From mjolsness-eric at CS.YALE.EDU Sun Jan 24 20:35:23 1993 From: mjolsness-eric at CS.YALE.EDU (Eric Mjolsness) Date: Sun, 24 Jan 93 20:35:23 EST Subject: Junior faculty opening at Yale Computer Science Message-ID: <199301250135.AA15033@EXPONENTIAL.SYSTEMSZ.CS.YALE.EDU>

***** PLEASE DO NOT FORWARD TO OTHER BBOARDS *****

I wish to draw the attention of especially well-qualified neural networkers to my department's recruiting advertisement for junior faculty in the December and January Communications of the ACM, which I quote below, and which neither explicitly targets nor excludes neural networkers. Please do not reply to me concerning this opening. The ad:

"We expect to have one or more junior faculty positions available for the 1993-94 academic year. We are particularly interested in applicants in the areas of artificial intelligence, theoretical computer science, numerical analysis, and programming languages and systems. Applications should be submitted before April 30, 1993.

"Duties will include teaching graduate and undergraduate courses. Applicants are expected to engage in a vigorous research program.

"Candidates should hold a Ph.D. in computer science or related discipline.

"Qualified women and minority candidates are encouraged to apply. Yale is an affirmative action/equal opportunity employer.

"Send vitae and names of three references to: Faculty Recruiting Committee, Department of Computer Science, Yale University, P.O. Box 2158, Yale Station, New Haven, CT 06520."

-Eric Mjolsness

From robtag at udsab.dia.unisa.it Mon Jan 25 04:36:13 1993 From: robtag at udsab.dia.unisa.it (Tagliaferri Roberto) Date: Mon, 25 Jan 1993 10:36:13 +0100 Subject: 1993 courses and workshops programme Message-ID: <199301250936.AA14324@udsab.dia.unisa.it>

**************** IIASS 1993 Workshops and Courses **************
**************** Preliminary Announcement **************

February 9 - 12 A short course on "Hybrid Systems: Neural Nets, Fuzzy Sets and A.I. systems" Lecturers: Dr. Silvano Colombano, NASA Research Center, CA Prof. Piero Morasso, Univ.
Genova, Italia
-----------------------------------------------------------------
March 23 - 27 A short course on "Languages for Parallel Programming" Lecturers: Prof. Merigot, Univ. Paris Sud, France Prof. A.P. Reeves (to be confirmed)
-----------------------------------------------------------------
April second half A short course on "Learning in Neural Nets" Lecturers: Dr. M. Biehl, Physikalisches Inst., Wuerzburg, Germany Dr. Sara Solla, AT&T Bell Laboratories
-----------------------------------------------------------------
May 12-14 The 6-th Italian Workshop on Neural Nets WIRN VIETRI-93

Organizing - Scientific Committee: B. Apolloni (Univ. Milano), A. Bertoni (Univ. Milano), E. R. Caianiello (Univ. Salerno), D. D. Caviglia (Univ. Genova), P. Campadelli (CNR Milano), M. Ceccarelli (Univ. Salerno - IRSIP CNR), P. Ciaccia (Univ. Bologna), M. Frixione (I.I.A.S.S.), G. M. Guazzo (I.I.A.S.S.), M. Gori (Univ. Firenze), F. Lauria (Univ. Napoli), M. Marinaro (Univ. Salerno), A. Negro (Univ. Salerno), G. Orlandi (Univ. Roma), E. Pasero (Politecnico Torino), A. Petrosino (Univ. Salerno - IRSIP CNR), M. Protasi (Univ. Roma II), S. Rampone (Univ. Salerno - IRSIP CNR), R. Serra (Gruppo Ferruzzi Ravenna), F. Sorbello (Univ. Palermo), R. Stefanelli (Politecnico Milano), L. Stringa (IRST Trento), R. Tagliaferri (Univ. Salerno), R. Vaccaro (CNR Napoli)

Topics: Mathematical Models, Architectures and Algorithms, Hardware and Software Design, Hybrid Systems, Pattern Recognition and Signal Processing, Industrial and Commercial Applications, Fuzzy Techniques for Neural Networks

Sponsors: International Institute for Advanced Scientific Studies (IIASS); Dept. of Fisica Teorica, University of Salerno; Dept. of Informatica e Applicazioni, University of Salerno; Dept. of Scienze dell'Informazione, University of Milano; Istituto per la Ricerca dei Sistemi Informatici Paralleli (IRSIP - CNR); Societa' Italiana Reti Neuroniche (SIREN)

Invited Speakers: Prof. Stan Gielen, Catholic Univ. of Nijmegen, NL; Prof. Tommaso Poggio, MIT; Prof. Lotfi Zadeh, Berkeley (to be confirmed)
----------------------------------------------------------------
May 24 - 28 A short course on "Neural Nets for Pattern Recognition" Lecturers: Dr. Federico Girosi, MIT Dr. V. N. Vapnik, AT&T Bell Laboratories (to be confirmed)
----------------------------------------------------------------
September 13 - 24 Advanced School on Computational Learning and Cryptography Sponsored by EATCS Italian Chapter

Lecturers: Prof. Shimon Even, Technion, Haifa, Israel; Dr. Moti Yung, IBM T.J. Watson Research Center; Dr. Michael Kearns, AT&T Bell Laboratories; Prof. Wolfgang Maass, Technische Univ. Graz, Austria

Directors: Prof. Alfredo De Santis, Univ. Salerno, Italia; Prof. Giancarlo Mauri, Univ. Milano, Italia

The short courses and WIRN 93 are also sponsored by Progetto Finalizzato CNR "Sistemi Informatici e Calcolo Parallelo" and by Contratto quinquennale CNR-IIASS. For any information on the short courses and WIRN 93, please contact the IIASS secretariat: I.I.A.S.S., Via G. Pellegrino, 19, I-84019 Vietri Sul Mare (SA), ITALY. Tel.
+39 89 761167 Fax +39 89 761189 or Dr. Roberto Tagliaferri E-Mail robtag at udsab.dia.unisa.it ***************************************************************** From ahmad at bsun11.zfe.siemens.de Mon Jan 25 03:36:39 1993 From: ahmad at bsun11.zfe.siemens.de (Subutai Ahmad) Date: Mon, 25 Jan 93 09:36:39 +0100 Subject: Missing feature preprint available Message-ID: <9301250836.AA19374@bsun11.zfe.siemens.de> The following paper is available for anonymous ftp on Neuroprose, archive.cis.ohio-state.edu (128.146.8.52), in directory pub/neuroprose, as file "ahmad.missing.ps.Z": Some Solutions to the Missing Feature Problem in Vision Subutai Ahmad and Volker Tresp Siemens Central Research and Development Abstract In visual processing the ability to deal with missing and noisy information is crucial. Occlusions and unreliable feature detectors often lead to situations where little or no direct information about features is available. However the available information is usually sufficient to highly constrain the outputs. We discuss Bayesian techniques for extracting class probabilities given partial data. The optimal solution involves integrating over the missing dimensions weighted by the local probability densities. We show how to obtain closed-form approximations to the Bayesian solution using Gaussian basis function networks. The framework extends naturally to the case of noisy features. Simulations on a complex task (3D hand gesture recognition) validate the theory. When both integration and weighting by input densities are used, performance decreases gracefully with the number of missing or noisy features. Performance is substantially degraded if either step is omitted. To appear in: S. J. Hanson, J. D. Cowan, and C. L. Giles (Eds.), Advances in Neural Information Processing Systems 5. San Mateo CA: Morgan Kaufmann. ---- Subutai Ahmad Siemens AG, ZFE ST SN61, Phone: +49 89 636-3532 Otto-Hahn-Ring 6, FAX: +49 89 636-2393 W-8000 Munich 83, Germany E-mail: ahmad at zfe.siemens.de From cohn at psyche.mit.edu Mon Jan 25 15:47:19 1993 From: cohn at psyche.mit.edu (David Cohn) Date: Mon, 25 Jan 93 15:47:19 EST Subject: Robot Learning Workshop summary in neuroprose Message-ID: <9301252047.AA21630@psyche.mit.edu> ROBOT LEARNING Summary of the post-NIPS workshop Vail, Colorado, Dec 5th, 1992 David A. Cohn (MIT) Tom Mitchell (CMU) Sebastian Thrun (CMU) cohn at psyche.mit.edu mitchell at cs.cmu.edu thrun at cs.cmu.edu We have just completed a short summary of the post-NIPS workshop on "Robot Learning" and have placed the summary in the neuroprose archives (with the assistance, of course, of Jordan Pollack). The goal of this workshop was to provide a forum for researchers active in the area of robot learning and related fields. Due to the limited time available, we attempted to focus discussion around two major issues: 1) How can current learning robot techniques scale to more complex domains, characterized by massive sensor input, complex causal interactions, and long time scales? 2) Where are the new ideas in robot learning coming from? ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps cohn.robot-learning-summary.ps", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get cohn.robot-learning-summary.ps ftp> quit unix> lpr -s cohn.robot-learning-summary.ps (or however you print postscript) -David Cohn e-mail: cohn at psyche.mit.edu Dept. 
of Brain & Cognitive Science phone: (617) 253-8409 MIT, E10-243 Cambridge, MA 02139 From paul at dendrite.cs.colorado.edu Mon Jan 25 00:45:49 1993 From: paul at dendrite.cs.colorado.edu (Paul Smolensky) Date: Sun, 24 Jan 1993 22:45:49 -0700 Subject: Colorado Boycott & Cognitive Science Conference Message-ID: <199301250545.AA22442@axon.cs.colorado.edu> [Moderator's note: The following message was accepted only because it is an official announcement by the Cognitive Science Society about the future of their conference. The CONNECTIONISTS list is not the proper venue for political discussions on topics like the Colorado boycott, so followups to this message will not be accepted. People concerned about the future of NIPS or other conferences are advised to contact the organizations that sponsor them. Please do not send mail to me or to Connectionists. -- Dave Touretzky] The Governing Board of the Cognitive Science Society is aware of the recent approval by the voters of Colorado of an amendment to their Constitution that bans antidiscriminatory legal activity to protect the rights of gays and lesbians (the amendment has been barred from implementation pending judicial review). This action has prompted calls for a boycott of the state. Since the Annual Meeting of the Society is scheduled for June 18-21 in Boulder, we feel obliged to make it clear that our action in no way implies an endorsement of the amendment nor disapproval of means being taken to oppose it. We are a small society with no professional convention staff. A last-minute change would produce great expense for members, who often pay their own way. Further, the logistics of conferences make it impossible to make a change now without causing considerable pain to hundreds of people who were not a party to the vote in Colorado. Here are additional reasons for our decision to continue with the meeting. * Boulder is one of the cities whose gay rights ordinance is threatened, and the City of Boulder was the primary party in the injunction suit (and thus in preventing immediate implementation). * The University of Colorado where the conference will be held has a policy of non-discrimination (including sexual preference) towards employees. (It could be strengthened, many believe) * Unlike most annual meetings this one is run entirely by the local committee, so that moving the meeting really means starting over from scratch. * A number of gay-rights organizations in Colorado are now OPPOSING the Boycott. Personally, the members of the Governing Board oppose the Colorado amendment. There is a strong precedent to avoid involving the Society in political issues, and our action in making this statement is primarily to avoid any suggestion that by not changing the meeting, we are opposing the boycott or endorsing the amendment. Obviously, setting any further meetings in Colorado after this June would itself be a political action in the current context, and thus is unlikely. Further, members are free to offer resolutions at the general membership meeting held during the conference (within the limits of our by-laws and corporate charter). Individual members, of course, may wish to take stronger steps. For example, they may wish to donate to one or more groups specifically opposing the amendment, including the groups listed at the end of this posting. Approved by electronic mail poll of the Governing Board. THE COGNITIVE SCIENCE SOCIETY, INC. 
Alan Lesgold, Secretary/Treasurer ============================================================== Groups Fighting the Amendment ** CLIP: Colorado Legal Initiatives Project. PO Box 44447, Denver, CO 80201. 303-830-2100. ** Equality Colorado. PO Box 300476, Denver, CO 80203. 303-839-5540. ** GLAAD: Gay and Lesbian Alliance for Anti-Defamation. PO Box 480662, Denver, CO 80248. 303-331-2773. ** Boycott Colorado, Inc. PO Box 300158, Denver, CO 80203. 303-777-0560. ** Deadheads Against Discrimination. 2888 Bluff, Suite 496, Boulder, CO 80301. ** Boulder NOW (National Organization for Women). Sally Barrett-Page, PO Box 7972, Boulder, CO 80306. 303-449-8117. ** BOND: Boulder Organization for Non-Discrimination. 1085 14th St., #1023,Boulder, CO 80302. 303-444-3455. From thgoh at iss.nus.sg Tue Jan 26 08:50:13 1993 From: thgoh at iss.nus.sg (Goh Tiong Hwee) Date: Tue, 26 Jan 93 15:10:13+080 Subject: No subject Message-ID: <9301260710.AA05268@iss.nus.sg> **DO NOT FORWARD TO OTHER GROUPS** The following paper has been deposited in Jordan Pollack's immensely useful Neuroprose archive at Ohio State. Retrieval instructions at end of message. Limited hardcopy requests will be answered for next couple of months. --------------------------------------------------------------------------- thgoh.sense.ps.Z SEMANTIC EXTRACTION USING NEURAL NETWORK MODELLING AND SENSITIVITY ANALYSIS --------------------------------------------------------------------------- Retrieval instructions (the usual): ipc9>ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. ftp> cd pub/neuroprose 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get thgoh.sense.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for thgoh.sense.ps.Z (64235 bytes). 226 Transfer complete. TH Goh Institute of Systems Science National University of Singapore Kent Ridge Singapore 0511. Fax: (65)7782571 Email: thgoh at iss.nus.sg **DO NOT FORWARD TO OTHER GROUPS** From P06819JB%WUVMC.bitnet at WUVMD.Wustl.Edu Tue Jan 26 16:49:00 1993 From: P06819JB%WUVMC.bitnet at WUVMD.Wustl.Edu (John T. Bruer) Date: Tue, 26 Jan 93 15:49:00 CST Subject: Job Announcement -- Program Officer, J.S. McDonnell Foundation Message-ID: PROGRAM OFFICER The James S. McDonnell Foundation, a major private foundation having special interests in education and the behavioral and biological sciences, is seeking an energetic, resourceful professional to fill the position of Program Officer. The successful candidate will coordinate the foundation's existing research programs in education and cognitive neuroscience; assist in identifying and formulating new program areas; evaluate grant requests and monitor the progress of ongoing grants; interact regularly with grantees and the foundation's program advisory boards; plan and coordinate national meetings; assist in preparing and presenting material to the Board of Directors; and represent the foundation locally and nationally where appropriate. Nominees and applicants should demonstrate superior oral and written communication skills, and have a proven record of strong administrative abilities. Candidates for the position must hold a graduate degree in the biological, behavioral, or social sciences and have had at least five years teaching, research, or administrative experience. Prior grantmaking experience is not required but will be considered favorably. Salary commensurate with experience plus fringe benefits. The deadline for receipt of application is April 16. 
Qualified candidates should send a letter explaining their interest in the position, a resume, salary requirements, and three letters of reference to: John T. Bruer, Ph.D. President James S. McDonnell Foundation 1034 S. Brentwood Blvd., Suite 1610 St. Louis, MO 63117 The James S. McDonnell Foundation is an Equal Opportunity/Affirmative Action Employer. From kak at max.ee.lsu.edu Wed Jan 27 10:07:47 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Wed, 27 Jan 93 09:07:47 CST Subject: The Self, the brain, and phantom limbs Message-ID: <9301271507.AA01384@max.ee.lsu.edu> Several correspondents have sought further information regarding the concept of self that has been discussed in my report "Reflections in clouded mirrors: selfhood in animals and machines", Technical Report 92-12, ECE-LSU, December 1, 1992. The following paper: Ronald Melzack, "Phantom limbs, the self and the brain", Canadian Psychology, 1989, 30, 1, describes the "reality" of phantom limbs. What is fascinating is that children born with birth defects have a phantom which may not have such defects. Clearly the notion of self pervades all life, otherwise snakes would lunch on their tails! -Subhash Kak [Replies in English and Sanskrit] From andy at cma.MGH.Harvard.Edu Wed Jan 27 11:23:36 1993 From: andy at cma.MGH.Harvard.Edu (Andrew J. Worth) Date: Wed, 27 Jan 93 11:23:36 EST Subject: Call for Volunteers Message-ID: <9301271623.AA29433@cma.mgh.harvard.edu> --------------------------------------------------------------------------- ** CALL FOR VOLUNTEERS ** 1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS SECOND IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS March 28 - April 1, 1993 San Francisco Hilton San Francisco, California, USA --------------------------------------------------------------------------- Volunteer positions are available for both FUZZ-IEEE'93 and ICNN'93. If you or anyone you know would like to exchange admittance to the conference for working as a volunteer, please respond directly to me at the e-mail address below. In the past, volunteers have given approximately 20 hours of labor (spread out over the entire conference) to receive: - admittance to the conference - a full set of proceedings - attendance at a limited number of tutorials (while working) Volunteer positions include helping with: - Stuffing Conference Proceedings - Poster Sessions - Technical Sessions - Evening Plenary Sessions - Social Events - OPTIONAL duty: Tutorials If you are interested in volunteering, please respond directly to me with the following information: - Electronic Mail Address - Last Name, First Name - Address - Country - Phone and FAX number Positions will be filled on a first-commit, first-served basis. There will be no funding available for volunteers' travel and lodging expenses. PLEASE RESPOND TO: andy at cma.mgh.harvard.edu Thank you, Andy. =---------------------------------------------------------------------= Andrew J.
Worth Center for Morphometric Analysis Volunteer Coordinator Neuroscience Center ICNN'93 / FUZZ-IEEE'93 Massachusetts General Hospital-East (617) 726-5711, FAX:726-5677 Building 149, 13th St., MA 02129 USA andy at cma.mgh.harvard.edu =---------------------------------------------------------------------= From bradtke at envy.cs.umass.edu Thu Jan 28 11:20:59 1993 From: bradtke at envy.cs.umass.edu (bradtke@envy.cs.umass.edu) Date: Thu, 28 Jan 93 11:20:59 -0500 Subject: paper placed in neuroprose Message-ID: <9301281620.AA05650@pride.cs.umass.edu> The paper "Learning to Act using Real-Time Dynamic Programming" has been placed in the Neuroprose Archives. It is a revised version of the COINS TR 91-57 "Real-time learning and control using asynchronous dynamic programming" and has been submitted to the AI Journal special issue on Computational Theories of Interaction and Agency. The new version has replaced the old version in the archives. The presentation has been cleaned up throughout, several errors have been corrected, and the experiments greatly expanded. Note that this new version uses a somewhat different experimental problem definition than the old version. ----------------------------------------------------- Learning to Act using Real-Time Dynamic Programming Andrew G. Barto, Steven J. Bradtke, Satinder P. Singh Department of Computer Science University of Massachusetts, Amherst MA 01003 Learning methods based on dynamic programming (DP) are receiving increasing attention in artificial intelligence. Researchers have argued that DP provides the appropriate basis for compiling planning results into reactive strategies for real-time control, as well as for learning such strategies when the system being controlled is incompletely known. We introduce an algorithm based on DP, which we call Real-Time DP (RTDP), by which an embedded system can improve its performance with experience. RTDP generalizes Korf's Learning-Real-Time-A* algorithm to problems involving uncertainty. We invoke results from the theory of asynchronous DP to prove that RTDP achieves optimal behavior in several different classes of problems. We also use the theory of asynchronous DP to illuminate aspects of other DP-based reinforcement learning methods such as Watkins' Q-Learning algorithm. A secondary aim of this article is to provide a bridge between AI research on real-time planning and learning and relevant concepts and algorithms from control theory. ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps barto.realtime-dp.ps.Z", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get barto.realtime-dp.ps.Z ftp> quit unix> uncompress barto.realtime-dp.ps unix> lpr -s barto.realtime-dp.ps (or however you print postscript) Steve Bradtke From bradtke at envy.cs.umass.edu Thu Jan 28 11:00:57 1993 From: bradtke at envy.cs.umass.edu (bradtke@envy.cs.umass.edu) Date: Thu, 28 Jan 93 11:00:57 -0500 Subject: preprint of NIPS paper Message-ID: <9301281600.AA05634@pride.cs.umass.edu> The following paper has been placed in the Neuroprose Archives. FTP instructions are given below. Reinforcement Learning Applied to Linear Quadratic Regulation Steven J. Bradtke Computer Science Department University of Massachusetts Amherst, MA 01003 bradtke at cs.umass.edu Recent research on reinforcement learning has focused on algorithms based on the principles of Dynamic Programming (DP). 
One of the most promising areas of application for these algorithms is the control of dynamical systems, and some impressive results have been achieved. However, there are significant gaps between practice and theory. In particular, there are no convergence proofs for problems with continuous state and action spaces, or for systems involving non-linear function approximators (such as multilayer perceptrons). This paper presents research applying DP-based reinforcement learning theory to Linear Quadratic Regulation (LQR), an important class of control problems involving continuous state and action spaces and requiring a simple type of non-linear function approximator. We describe an algorithm based on Q-learning that is proven to converge to the optimal controller for a large class of LQR problems. We also describe a slightly different algorithm that is only locally convergent to the optimal Q-function, demonstrating one of the possible pitfalls of using a non-linear function approximator with DP-based learning. ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps bradtke.nips5.ps.Z", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get bradtke.nips5.ps.Z ftp> quit unix> uncompress bradtke.nips5.ps unix> lpr -s bradtke.nips5.ps (or however you print postscript) Steve Bradtke From POCHEC%unb.ca at UNBMVS1.csd.unb.ca Thu Jan 28 15:26:12 1993 From: POCHEC%unb.ca at UNBMVS1.csd.unb.ca (POCHEC%unb.ca@UNBMVS1.csd.unb.ca) Date: Thu, 28 Jan 93 16:26:12 AST Subject: Call for papers: AI Symposium Message-ID: ================================================================== ================================================================== Call for Participation The 5th UNB AI Symposium ********************************* * * * Theme: * * ARE WE MOVING AHEAD? * * * ********************************* August 11-14, 1993 Sheraton Inn Fredericton Advisory Committee ================== N. Ahuja, Univ. of Illinois, Urbana W. Bibel, ITH, Darmstadt D. Bobrow, Xerox PARC M. Fischler, SRI P. Gärdenfors, Lund Univ. S. Grossberg, Boston Univ. J. Haton, CRIN T. Kanade, CMU R. Michalski, George Mason Univ. T. Poggio, MIT Z. Pylyshyn, Univ. of Western Ontario O. Selfridge, GTE Labs Y. Shirai, Osaka Univ. Program Committee ================= The international program committee will consist of approximately 40 members from all main fields of AI and from Cognitive Science. We invite researchers from the various areas of Artificial Intelligence, Cognitive Science and Pattern Recognition, including Vision, Learning, Knowledge Representation and Foundations, to submit articles which assess or review the progress made so far in their respective areas, as well as the relevance of that progress to the whole enterprise of AI. Other papers which do not address the theme are also invited. Feature ======= Four 70-minute invited talks and five panel discussions are devoted to the chosen topic: "Are we moving ahead: Lessons from Computer Vision." The speakers include (in alphabetical order) * Lev Goldfarb * Stephen Grossberg * Robert Haralick * Tomaso Poggio Such a concentrated analysis of the area will be undertaken for the first time. We feel that the "Lessons from Computer Vision" are of relevance to the entire AI community. Information for Authors ======================= Now: Fill out the form below and email it.
--- March 30, 1993: -------------- Four copies of an extended abstract (maximum of 4 pages including references) should be sent to the conference chair. May 15, 1993: ------------- Notification of acceptance will be mailed. July 1, 1993: ------------- Camera-ready copy of the paper is due. Conference Chair: Lev Goldfarb Email: goldfarb at unb.ca Mailing address: Faculty of Computer Science University of New Brunswick P. O. Box 4400 Fredericton, New Brunswick Canada E3B 5A3 Phone: (506) 453-4566 FAX: (506) 453-3566 Symposium location: The symposium will be held in the Sheraton Inn, Fredericton, which overlooks the beautiful Saint John River. IMMEDIATE REPLY FORM ==================== (please email to goldfarb at unb.ca) I would like to submit a paper. Title: _____________________________________ _____________________________________ _____________________________________ I would like to organize a session. Title: _____________________________________ _____________________________________ _____________________________________ Name: _____________________________________ _____________________________________ Department _____________________________________ University/Company _____________________________________ _____________________________________ _____________________________________ Address _____________________________________ _____________________________________ _____________________________________ Prov/State _____________________________________ Country _____________________________________ Telephone _____________________________________ Email _____________________________________ Fax _____________________________________ From wahba at stat.wisc.edu Sun Jan 31 20:20:30 1993 From: wahba at stat.wisc.edu (Grace Wahba) Date: Sun, 31 Jan 93 19:20:30 -0600 Subject: submission Message-ID: <9302010120.AA25404@hera.stat.wisc.edu> I would like to submit the following to connectionists - thanks much! **************** This is to announce two papers in the neuroprose archive: 1) Soft Classification, a.k.a. Penalized Log Likelihood and Smoothing Spline Analysis of Variance, by Grace Wahba, Chong Gu, Yuedong Wang and Rick Chappell, to appear in the proceedings of the Santa Fe Workshop on Supervised Machine Learning, August 1992, D. Wolpert and A. Lapedes, eds.; also partly presented at CLNL*92. 2) Smoothing Spline ANOVA with Component-Wise Bayesian `Confidence Intervals', by Chong Gu and Grace Wahba, to appear, J. Computational and Graphical Statistics. wahba at stat.wisc.edu, chong at pop.stat.purdue.edu wang at stat.wisc.edu, chappell at stat.wisc.edu Below are the abstracts, followed by instructions for retrieving the papers. Grace Wahba ---------------------------------------------------------------------- Soft Classification, a.k.a. Penalized Log Likelihood and Smoothing Spline Analysis of Variance G. Wahba, C. Gu, Y. Wang and R. Chappell We discuss a class of methods for the problem of `soft' classification in supervised learning. In `hard' classification, it is assumed that any two examples with the same attribute vector will always be in the same class (or have the same outcome), whereas in `soft' classification two examples with the same attribute vector do not necessarily have the same outcome, but the *probability* of a particular outcome does depend on the attribute vector. In this paper we will describe a family of methods which are well suited for the estimation of this probability.
The method we describe will produce, for any value in a (reasonable) region of the attribute space, an estimate of the probability that the next example with that value of its attribute vector will be in class 1. Underlying these methods is an assumption that this probability varies in a smooth way (to be defined) as the predictor variables vary. The method combines results from Penalized log likelihood estimation, Smoothing splines, and Analysis of variance, to get the PSA class of methods. In the process of describing PSA we discuss some issues concerning the computation of degrees of freedom for signal, which has wider ramifications for the minimization of generalization error in machine learning. As an illustration we apply the method to the Pima-Indian Diabetes data set in the UCI Repository, and compare the results to Smith et al. (1988), who used the ADAP learning algorithm on this same data set to forecast the onset of diabetes mellitus. If the probabilities we obtain are thresholded to make a hard classification to compare with the hard classification of Smith et al., the results are very similar; however, the intermediate probabilities that we obtain provide useful and interpretable information on how the risk of diabetes varies with some of the risk factors. ........................... Smoothing Spline ANOVA with Component-Wise Bayesian `Confidence Intervals' C. Gu and G. Wahba We study a multivariate smoothing spline estimate of a function of several variables, based on an ANOVA decomposition as sums of main effect functions (of one variable), two-factor interaction functions (of two variables), etc. We derive the Bayesian `confidence intervals' of Wahba (1983) for the components of this decomposition and demonstrate that, even with multiple smoothing parameters, they can be efficiently computed using the publicly available code RKPACK, which was originally designed just to compute the estimates. We carry out a small Monte Carlo study to see how closely the actual properties of these component-wise confidence intervals match their nominal confidence levels. Lastly, we analyze some lake acidity data as a function of calcium concentration, latitude, and longitude, using both polynomial and thin plate spline main effects in the same model. ----------------------------------------------------------------------------- To retrieve these files from the neuroprose archive: unix> ftp archive.cis.ohio-state.edu Name (archive.cis.ohio-state.edu:wahba): anonymous Password: (use your email address) ftp> binary ftp> cd pub/neuroprose ftp> get wahba.soft-class.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for wahba.soft-class.ps.Z . ftp> get wahba.ssanova.ps.Z . 221 Goodbye. unix> uncompress wahba.soft-class.ps.Z unix> lpr wahba.soft-class.ps unix> uncompress wahba.ssanova.ps.Z unix> lpr wahba.ssanova.ps Thanks to Jordan Pollack for maintaining the archive. From smagt at fwi.uva.nl Sun Jan 31 07:42:57 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Sun, 31 Jan 1993 13:42:57 +0100 Subject: correction: 5th edition of neural network intro book Message-ID: <199301311242.AA19696@carol.fwi.uva.nl> Excuse! It appears that galba's ftp manager (the chap with the PassWord) changed the ftp site; previously, when you logged in on galba, a "cd pub" was done. This has changed for now, such that the instructions for getting the fifth edition of the neural network introductory text An Introduction to Neural Networks Ben Kr\"ose and Patrick van der Smagt Dept.
of Computer Systems University of Amsterdam are now: ----------------------------------------------------------------------------- To retrieve the document by anonymous ftp: Unix> ftp galba.mbfys.kun.nl (or ftp 131.174.82.73) Name (galba.mbfys.kun.nl): anonymous 331 Guest login ok, send ident as password. Password ftp> bin ftp> cd pub/neuro-intro ftp> get neuro-intro.400.ps.Z 150 Opening BINARY mode data connection for neuro-intro.400.ps.Z (xxxxxx bytes). ftp> bye Unix> uncompress neuro-intro.400.ps.Z Unix> lpr -s neuro-intro.400.ps ;; optionally ----------------------------------------------------------------------------- There is a possibility that the previous state (where a "cd neuro-intro" instead of "cd pub/neuro-intro" must be done) will be restored in the future. Be forewarned. The file neuro-intro.400.ps.Z is the manuscript for 400dpi printers. If you have a 300dpi printer, get neuro-intro.300.ps.Z instead. The 1991 version is still available as neuro-intro.1991.ps.Z. 1991 is not the #dots per inch! We don't have such good printers here. Do preview the manuscript before you print it, since otherwise 131 pages of virginal paper are wasted. Some systems cannot handle the large postscript file (around 2M). On Unix systems it helps to give lpr the "-s" flag, so that the postscript file is not spooled but linked (see man lpr). On others, you may have no choice but to extract (chunks of) pages manually and print them separately. Unix filters like pstops, psselect, and psxlate (the source code of the latter is available from various ftp sites) can be used to select pages to be printed. Alternatively, print from your previewer. Better still, don't print at all! Enjoy! Patrick PS The lengths of some chapters reflect the focus of the research in our group. E.g., chapter 6 is ridiculously short (which was brought to my attention) and needs improvement. Next time.
From rybak at cerebrum.impaqt.drexel.edu Mon Jan 4 12:33:14 1993 From: rybak at cerebrum.impaqt.drexel.edu (Ilya Rybak) Date: Mon, 4 Jan 93 12:33:14 EST Subject: papers available Message-ID: <9301041733.AA14284@impaqt.drexel.edu> Dear Connectionists, The two papers below have been accepted for the IS&T/SPIE Conference on Human Vision, Vision Processing and Digital Display IV, San Jose, 1993. Hard copies of the papers are available on request by e-mail to ilya at cheme.seas.upenn.edu Ilya Rybak ____________________________________________________________________ PHYSIOLOGICAL MODEL OF ORIENTATION SENSITIVITY IN THE VISUAL CORTEX AND ITS USE FOR IMAGE PROCESSING Ilya A. Rybak*, Lubov N. Podladchikova** and Natalia A. Shevtsova** * University of Pennsylvania, Philadelphia, PA, USA ilya at cheme.seas.upenn.edu ** A.B. Kogan Research Institute for Neurocybernetics Rostov State University, Rostov-on-Don, Russia The objectives of the research were: (i) to investigate the dynamics of neuron responses and orientation selectivity in the primary visual cortex; (ii) to find a possible source of bifurcation of visual information into "what" and "where" processing pathways; (iii) to apply the obtained results to visual image processing. To achieve these objectives, a model of the iso-orientation domain (orientation column) of the visual cortex has been developed. The model is based on neurophysiological data and on the idea that orientation selectivity results from a spatial anisotropy of reciprocal lateral inhibition in the domain. Temporal dynamics of neural responses to oriented stimuli were studied with the model. It was shown that the later phase of the neuron response had a much sharper orientation tuning than the initial one. The results of modeling were confirmed by neurophysiological experiments on the visual cortex. The findings suggest that the initial phase of the neural response encodes the location of the visual stimulus, whereas the later phase encodes its orientation.
This temporal division of information about object features and their locations at the neuronal level of the primary visual cortex may be considered a source of the bifurcation of visual processing into "what" and "where" pathways, and may be used for parallel-sequential attentional image processing. A model of a neural network system for image processing, based on the iso-orientation domain models and the above idea, is proposed. An example of test image processing is presented. _____________________________________________________________________ -------------------------------------------------------------------- BEHAVIORAL MODEL OF VISUAL PERCEPTION AND RECOGNITION Ilya A. Rybak*, Alexander V. Golovan** and Valentina I. Gusakova** * University of Pennsylvania, Philadelphia, PA, USA ilya at cheme.seas.upenn.edu ** A.B. Kogan Research Institute for Neurocybernetics Rostov State University, Rostov-on-Don, Russia In the processes of visual perception and recognition, human eyes actively select essential information by way of successive fixation at the most informative points of the image. Perception and recognition are thus not only results of neural computations, but are also behavioral processes. A behavioral program defining a scanpath of the image is formed at the stage of learning (object memorizing) and consists of sequential motor actions, which are shifts of attention from one point of fixation to another, and sensory signals expected to arrive in response to each shift of attention. In the modern view of the problem, invariant object recognition is provided by the following: (i) separated processing of "what" (object features) and "where" (spatial features) information at high levels of the visual system; (ii) mechanisms of visual attention using "where" information; (iii) representation of "what" information in an object-based frame of reference (OFR). However, most recent models of vision based on an OFR have demonstrated invariant recognition only of simple objects such as letters or binary objects without background, i.e., objects to which a frame of reference is easily attached. In contrast, we use not an OFR but a "feature-based frame of reference" (FFR), attached to the basic feature (an edge) at the fixation point. This gives our model the ability to represent complex objects in gray-level images invariantly, but it demands realization of the behavioral aspects of vision described above. The developed model contains a neural network subsystem of low-level vision which extracts a set of primary features (edges) in each fixation, and a high-level subsystem consisting of "what" (Sensory Memory) and "where" (Motor Memory) modules. The resolution of primary feature extraction decreases with distance from the point of fixation. The FFR provides both the invariant representation of object features in Sensory Memory and the shifts of attention in Motor Memory. Object recognition consists of successive recall (from Motor Memory) and execution of shifts of attention, and successive verification of the expected sets of features (stored in Sensory Memory). The model demonstrates recognition of complex objects (such as faces) in gray-level images, invariant with respect to shift, rotation, and scale.
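[Illustration: a minimal sketch of the scanpath scheme described above -- not the authors' code; the patch feature extractor, the mismatch tolerance, and the assumption that fixations lie away from the image border are all illustrative choices:]

import numpy as np

def extract(image, pos, r=2):
    # toy "what" features: the flattened (2r+1)x(2r+1) gray-level patch
    # around the fixation point (fixations assumed >= r pixels from borders)
    y, x = pos
    return image[y - r:y + r + 1, x - r:x + r + 1].astype(float).ravel()

def learn_scanpath(image, fixations):
    # behavioral program: one (attention shift, expected features) pair per
    # step; shifts play the role of Motor Memory, features of Sensory Memory
    program = []
    for (y0, x0), (y1, x1) in zip(fixations[:-1], fixations[1:]):
        program.append(((y1 - y0, x1 - x0), extract(image, (y1, x1))))
    return program

def recognize(image, start, program, tol=0.2):
    # replay the stored shifts of attention and verify each expectation
    pos = start
    for shift, expected in program:
        pos = (pos[0] + shift[0], pos[1] + shift[1])
        mismatch = np.linalg.norm(extract(image, pos) - expected)
        if mismatch > tol * (np.linalg.norm(expected) + 1e-9):
            return False
    return True

In this toy version, shift invariance falls out of storing relative shifts; the rotation and scale invariance claimed in the abstract would additionally require the feature-based frame of reference at each fixation.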
----------------------------------------------------------------------- From bouzerda at eleceng.adelaide.edu.au Mon Jan 4 21:23:13 1993 From: bouzerda at eleceng.adelaide.edu.au (bouzerda@eleceng.adelaide.edu.au) Date: Tue, 5 Jan 1993 13:23:13 +1100 Subject: Job Offer Message-ID: <9301050223.17194@munnari.oz.au> POSTDOCTORAL OR RESEARCH FELLOW in Signal Processing and Neural Networks ************************************** A postdoctoral or research fellow is sought to join the Centre for Sensor Signal and Information Processing (CSSIP) and the University of Adelaide EE Eng Department as soon as possible. The CSSIP is one of several cooperative research centres awarded by the Australian Government to establish excellence in research and development. The University of Adelaide, represented by the EE Eng Dept, is a partner in this cooperative research centre, together with the Defence Science and Technology Organization (DSTO), four other Universities, and several companies. The cooperative research centre consists of more than 50 effective full-time researchers, and is well equipped with many UNIX workstations and a massively parallel machine (DEC MPP). The aim is to develop and investigate principles of artificial neural networks for sensor signal and image processing, classification and separation of signals, and data fusion. The position is for one year with a strong possibility of renewal. DUTIES: In consultation with task leaders and specialist researchers, to investigate alternative algorithm design approaches, to design experiments on applications of signal processing and artificial neural networks, to prepare data and carry out the experiments, to prepare software for testing algorithms, and to prepare or assist with the preparation of technical reports. QUALIFICATIONS: The successful candidate must have a Ph.D., a proven research record, and a demonstrated ability in written and spoken English. PAY and CONDITIONS: will be in accordance with University of Adelaide policies, and will depend on the qualifications and experience. Appointments may be made in scales A$ 36766 to A$ 42852 for a postdoc, and A$ 42333 to A$ 5999 for a research fellow. ENQUIRIES: Prof. R. E. Bogner, Electrical & Electronic Engineering Dept., The University of Adelaide, G.P.O. Box 498, Adelaide, South Australia 5001. Phone: (61)-08-228-5589, Fax: (61)-08-232-5720 Email: bogner at eleceng.adelaide.edu.au Dr. A. Bouzerdoum, Phone (61)-08-228-5464, Fax (61)-08-232-5720 Email: bouzerda at eleceng.adelaide.edu.au From john at cs.uow.edu.au Tue Jan 5 14:11:55 1993 From: john at cs.uow.edu.au (John Fulcher) Date: Tue, 5 Jan 93 14:11:55 EST Subject: no subject (file transmission) Message-ID: <199301050311.AA18078@wraith.cs.uow.edu.au> CALL FOR PAPERS - ANN STANDARDS COMPUTER STANDARDS & INTERFACES For some time now, there has been a need to consolidate and formalise the efforts of researchers in the Artificial Neural Network field. The publishers of this North-Holland journal have deemed it appropriate to devote a forthcoming special issue of Computer Standards & Interfaces to ANN standards, under the guest editorship of John Fulcher, University of Wollongong, Australia. We already have the cooperation of the IEEE/NCC Standards Committee, but are also interested in submissions regarding less formal, de facto "standards".
This could range from established, "standard" techniques in various application areas (vision, robotics, speech, VLSI etc.), or ANN techniques generally (such as the backpropagation algorithm & its [numerous] variants, say). Accordingly, survey or review articles would be particularly welcome. If you are interested in submitting a paper for consideration, you will need to send three copies (in either hard copy or electronic form) by March 31st, 1993 to: John Fulcher, Department of Computer Science, University of Wollongong, Northfields Avenue, Wollongong NSW 2522, Australia. fax: +61 42 213262 email: john at cs.uow.edu.au.oz From terry at helmholtz.sdsc.edu Tue Jan 5 15:50:41 1993 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Tue, 5 Jan 93 12:50:41 PST Subject: Neural Computation 5:1 Message-ID: <9301052050.AA25119@helmholtz.sdsc.edu> NEURAL COMPUTATION - Volume 5 - Number 1 - January 1993 Article Conversion of Temporal Correlations Between Stimuli to Spatial Correlations Between Attractors M. Griniasty, M. V. Tsodyks and Daniel J. Amit Note On the Realization of a Kolmogorov Network Ji-Nan Lin and Rolf Unbehauen Letters Statistical Mechanics for a Network of Spiking Neurons Leonid Kruglyak and William Bialek Acetylcholine and Learning in a Cortical Associative Memory Michael E. Hasselmo Convergent Algorithm for Sensory Receptive Field Development Joseph J. Atick and A. Norman Redlich Three-Dimensional Object Recognition Using an Unsupervised BCM Network: The Usefulness of Distinguishing Features Nathan Intrator and Joshua I. Gold Complexity Optimized Data Clustering by Competitive Neural Networks Joachim Buhmann and Hans Kuhnel Clustering Data by Melting Yiu-fai Wong Coarse Coding Resource-Allocating Network Gustavo Deco and Jurgen Ebmeyer Training Periodic Sequences Using Fourier Series Error Criterion James A. Kottas Generalization and Approximation Capabilities of Multilayer Networks Yoshikane Takahashi Statistical Theory of Learning Curves under Entropic Loss Criterion Shun-ichi Amari and Noboru Murata Learning in the Recurrent Random Neural Network Erol Gelenbe ----- SUBSCRIPTIONS - VOLUME 5 - BIMONTHLY (6 issues) ______ $40 Student ______ $65 Individual ______ $156 Institution Add $22 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-4 are regularly available for $28 each to institutions and $14 each for individuals Add $5 for postage per issue outside USA (+7% GST for Canada) MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779 e-mail: hiscox at mitvma.mit.edu ----- From nowlan at helmholtz.sdsc.edu Tue Jan 5 14:03:42 1993 From: nowlan at helmholtz.sdsc.edu (Steven J. Nowlan) Date: Tue, 05 Jan 93 12:03:42 MST Subject: Preprint announcement Message-ID: <9301052003.AA28913@bose> ****** PAPER AVAILABLE VIA NEUROPROSE *************************************** ****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. ** The following paper has been placed in the Neuroprose archives at Ohio State. The file is nowlan.vismotion.ps.Z. Ftp instructions follow the abstract. Only an electronic version of this paper is available. This is a preprint of the paper to appear in the NIPS 5 proceedings due out later this year. ----------------------------------------------------- Filter Selection Model for Generating Visual Motion Signals Steven J. Nowlan and Terrence J. Sejnowski Computational Neurobiology Laboratory The Salk Institute P.O. 
Box 5800 San Diego, CA 92186-5800 ABSTRACT: Neurons in area MT of primate visual cortex encode the velocity of moving objects. We present a model of how MT cells aggregate responses from V1 to form such a velocity representation. Two different sets of units, with local receptive fields, receive inputs from motion energy filters. One set of units forms estimates of local motion, while the second set computes the utility of these estimates. Outputs from this second set of units ``gate'' the outputs from the first set through a gain control mechanism. This active process for selecting only a subset of local motion responses to integrate into more global responses distinguishes our model from previous models of velocity estimation. The model yields accurate velocity estimates in synthetic images containing multiple moving targets of varying size, luminance, and spatial frequency profile, and deals well with a number of transparency phenomena. ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps nowlan.vismotion.ps.Z", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get nowlan.vismotion.ps.Z ftp> quit unix> uncompress nowlan.vismotion.ps.Z unix> lpr -s nowlan.vismotion.ps (or however you print postscript) Steven J. Nowlan Computational Neurobiology Laboratory The Salk Institute P.O. Box 85800 San Diego, CA 92186-5800 Work Phone: 619-453-4100 X124 e-mail: nowlan at helmholtz.sdsc.edu From darken at learning.siemens.com Wed Jan 6 10:20:13 1993 From: darken at learning.siemens.com (Christian Darken) Date: Wed, 6 Jan 93 10:20:13 EST Subject: Great Energy Shootout -- definition Message-ID: <9301061520.AA02844@learning.siemens.com> Pertinent to the "Great Energy Shootout" competition description posted to this list a few days ago, I asked Jan Kreider for a definition of "HVAC system", and got back the following reply. cd > From @mirsa.inria.fr:kreider at manitou Wed Jan 6 04:25:54 1993 > Date: Wed, 6 Jan 93 10:27:55 +0100 > From: Jan KREIDER > To: darken at learning.siemens.com > Subject: RE: Building energy predictor Competition - " > Content-Length: 251 > > Chris - > > An HVAC system is the heating, ventilating and air conditioning (HVAC) > system in a building including chillers, boilers, fans and pumps. Could > you forward this definition to your colleagues in the connectionist > world? > > Merci - Jan F. Kreider > From mico at ludens.elte.hu Wed Jan 6 04:47:34 1993 From: mico at ludens.elte.hu (mico@ludens.elte.hu) Date: Wed, 06 Jan 1993 10:47:34 +0100 Subject: book announcement Message-ID: <00966338.6F92ABC0.19948@ludens.elte.hu> *********************** BOOK ANNOUNCEMENT ************************** Klaus Haefner (ed.): Evolution of information processing systems. An interdisciplinary approach for a new understanding of nature and society Berlin ; New York : Springer-Verlag, c1992. Phys. Description: x, 357 p. : 46 Figures. ; 25 cm. Includes bibliographical references (pp. 347-357). ********************************************************************* Features contributions by: Vili Csanyi Sidney Fox Hermann Haken George Kampis Jenny Kien Wolfgang Klement Ervin Laszlo Greg Nicolis John Nicolis Theodor Oeser Mika Pantzar Michael Requardt Anton Semenov This book is based on an invited conference, held in Bremen, Germany, October 8-10, 1990. At the conference, issues related to Haefner's Basic Paper were discussed.
The Basic Paper is reproduced in the first part of the book. This paper claims that the same information-theoretic principles can govern computers, life, and minds, and further, that they are valid for atomic systems and societal systems too. As a counterweight, the other papers of the volume offer, by and large, a critical attitude or at least significant refinements to this thesis. These papers, many of which were written by leading experts, offer new views on information based on ideas of computer science, synergetics, self-organization, neural networks, and self-modification. M. Vargyas egcs301 at hueco.uni-wien.ac.at From kak at max.ee.lsu.edu Thu Jan 7 14:47:10 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Thu, 7 Jan 93 13:47:10 CST Subject: Quantum neural computer Message-ID: <9301071947.AA07593@max.ee.lsu.edu> Hitherto all computers have been designed based on classical laws. We consider the question of building a quantum neural computer and speculate on its computing power. We argue that such a computer could have the potential to solve artificial intelligence problems. History tells us that paradigms of science and technology draw on each other. Thus Newton's conception of the universe was based on the mechanical engines of the day; thermodynamics followed the heat engines of the 19th century; and computers followed the development of the telegraph and telephone. From another point of view, modern computers are based on classical physics. Since classical physics has been superseded by quantum mechanics in the microworld, one might ask whether a new paradigm of computing based on quantum mechanics can be constructed. Intelligence, and by implication consciousness, has been taken by many computer scientists to emerge from the complexity of the interconnections between the neurons. But if it is taken to be a unity, as urged by Schrodinger and other physicists, then it should be described by a quantum mechanical wave function. No representation in terms of networking of classical objects, such as threshold neurons, can model a wave function. This is another reason that one seeks a new computing paradigm. A brain-mind identity hypothesis, with a mechanistic or electronic representation of the brain processes, does not explain how self-awareness could arise. At the level of ordinary perception there exists a duality and complementarity between an autonomous (and reflexive) brain and a mind with intentionality. The notion of self seems to hinge on an indivisibility akin to that found in quantum mechanics. This was argued most forcefully by Schrodinger, one of the creators of quantum mechanics. A quantum neural computer will start out with a wavefunction that is a sum of several different problem functions. After the evolution of the wavefunction, the measurement operator will force the wavefunction to reduce to the correct eigenfunction, with the corresponding measurement representing the computation. A discussion of these issues is contained in my TECHNICAL REPORT ECE/LSU 92-13, December 15, 1992, entitled CAN WE BUILD A QUANTUM NEURAL COMPUTER? If you would like to have an electronic copy (minus the math), do let me know. Hard-copies are also available.
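[Illustration: a toy numerical rendering of the superpose-evolve-measure cycle sketched above -- not from the report; the four-dimensional state space and the random unitary are arbitrary stand-ins for the unspecified problem functions and evolution operator:]

import numpy as np

rng = np.random.default_rng(0)
n = 4                                          # four superposed "problem functions"
psi = np.ones(n, dtype=complex) / np.sqrt(n)   # equal-amplitude superposition
# a random unitary (QR of a random complex matrix) stands in for the evolution
Q, _ = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
psi = Q @ psi                                  # evolve the wavefunction
probs = np.abs(psi) ** 2                       # Born rule: outcome probabilities
outcome = rng.choice(n, p=probs / probs.sum()) # measurement reduces the state
print(probs.round(3), outcome)                 # the sampled eigenfunction index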
-Subhash Kak Professor of Electrical & Computer Engineering Louisiana State University Baton Rouge, LA 70803-5901, USA Tel: (504) 388-5552; Fax: (504) 388-5200 From N.E.Sharkey at dcs.ex.ac.uk Thu Jan 7 06:29:58 1993 From: N.E.Sharkey at dcs.ex.ac.uk (Noel Sharkey) Date: Thu, 7 Jan 93 11:29:58 GMT Subject: apologies Message-ID: <7267.9301071129@propus.dcs.exeter.ac.uk> My apologies for the delay in responding to the many of you who sent in abstracts for the AISB WORKSHOP ON CONNECTIONISM, COGNITION, AND THE NEW AI by the 12th December deadline. On all of the other advertisements that appeared, the deadline was January 12th. This was a clerical error on my part - sorry. So I will let all of the submitters know the outcome of the reviews as soon as possible after the closing date of January 12th. I have enclosed the full advert below for those interested. WORKSHOP ON CONNECTIONISM, COGNITION AND A NEW AI A workshop will be held on March 30th at the 9th Biennial Conference on Artificial Intelligence (AISB-93) at the University of Birmingham, England, organised by the Society for the Study of Artificial Intelligence and Simulation of Behaviour (SSAISB). A number of recent developments in Connectionist Research have strong implications for the future of AI and the study of Cognition. Among the most important are developments in Learning, Representation, and Productivity (or Generalisation). The aim of the workshop would be to focus on how these developments may change the way we look at AI and the study of Cognition. Our goal is to have a lively and invigorating debate on the state of the art. SUGGESTED TOPICS INCLUDE (BUT ARE NOT RESTRICTED TO): * Connectionist representation * Generalisation and Transfer of Knowledge * Learning machines and models of human development * Symbolic Learning versus Connectionist learning * Advantages of Connectionist/Symbolic hybrids * Modelling Cognitive Neuropsychology * Connectionist modelling of Creativity and music (or other arts) DEADLINE FOR SUBMISSION: 12th January, 1993 ORGANISER Noel Sharkey Centre for Connection Science, Computer Science, Exeter. COMMITTEE Andy Clark (Sussex). Glyn Humphries (Birmingham) Kim Plunkett (Oxford) Chris Thornton (Sussex) WORKSHOP ENTRANCE: Attendance at the workshop will be limited to 50 or 60 places, so please LET US KNOW AS SOON AS POSSIBLE IF YOU ARE PLANNING TO ATTEND, and to which of the following categories you belong. DISCUSSION PAPERS Acceptance of discussion papers will be decided on the basis of extended abstracts (try to keep them under 500 words please) clearly specifying a 15- to 20-minute discussion topic for oral presentation. There will also be a small number of invited contributors. ORDINARY PARTICIPANTS A limited number of places will be available for participants who wish to sit in on the discussion but do not wish to present a paper. But please get in early with a short note stating your purpose in attending. Please send submissions to Noel E. Sharkey, Centre for Connection Science Dept. Computer Science University of Exeter Exeter EX4 4PT Devon U.K. or email noel at uk.ac.exeter.dcs From lba at sara.inesc.pt Thu Jan 7 13:02:40 1993 From: lba at sara.inesc.pt (Luis B.
Almeida) Date: Thu, 7 Jan 93 17:02:40 -0100 Subject: workshop announcement Message-ID: <9301071802.AA20411@sara.inesc.pt> IRTICS'93 CALL FOR PAPERS Workshop on Integration Technology for Real-Time Intelligent Control Systems October 5-7, 1993 Madrid, SPAIN Sponsored by: The Commission of the European Communities (CEC) The HINT project Organized by: Instituto de Ingenieria del Conocimiento (IIC) AIM: ---- Nowadays, the need to combine several techniques in order to improve real-time intelligent control systems has become a constant in most industrial environments. Expert systems, neural networks, modelling, etc. can each solve problems, but in many cases a solution that combines different techniques leads to synergistic effects. Cooperation among different approaches has therefore become a crucial area of interest in this environment, the objective being to obtain common frameworks in which the best of each technique can be used as effectively as possible. This conviction led us to the present workshop on "Integration for Real-Time Intelligent Control Systems", IRTICS'93, which we are sure will be a good opportunity to examine many of the possibilities that exist, or will exist, in this direction. This workshop aims to encourage the communication and exchange of ideas among researchers, practitioners and end-users aware of the possibilities of integrating different AI technologies in real-time environments. Contributions addressing both theoretical problems and practical experiences will be of great interest to this forum. The workshop is organized by the Instituto de Ingenieria del Conocimiento, IIC, as an external activity of the HINT project: Heterogeneous Integration Architecture for Intelligent Control Systems (ESPRIT 6447). All correspondence about the workshop should be addressed to the Workshop Secretariat at the IIC. TOPICS: ------- Researchers and practitioners interested in the possibilities of integration among different techniques applied to real-time environments, as well as in specific work that could be synergistically reinforced by means of integration, are invited to participate in the workshop by submitting an extended abstract as specified. Suggested topics include Integration Techniques for: * Expert Systems * Neural Networks * Fuzzy Logic * Model Based Reasoning * AI Architectures * Intelligent User Support Systems SUBMISSION REQUIREMENTS: ------------------------ Authors should submit 3 copies of an extended abstract (max 2000 words, approximately 5 single spaced pages) to the secretariat of the workshop before the deadline indicated in the timetable. They should include, separately, a page containing their name and full address, e-mail, fax or telephone, and the section(s) their work is related to. All contributions should be submitted in English. Abstracts will be reviewed according to their relevance to the basic aim of the workshop, their clarity, and their originality. Accepted papers will be included in a book to be published with the results and conclusions of the workshop. PROGRAMME: ---------- The programme of the workshop will include three different activities: * Invited Contributions * Communications * Discussions We plan to include invited contributions on some state-of-the-art themes that are of interest to all participants. The rest of the time, parallel sessions will be held on each topic, including communications and ample time for discussion.
The last of these sessions will deal with 'integration' as the main topic of the workshop. Participants are invited to summarize the results of their work in this final session. TIMETABLE: ---------- * Submissions must be received: by February 28, 1993. * Notification of acceptance or rejection: by April 30, 1993. * Camera-Ready versions: before June 30, 1993. REGISTRATION FEES: ------------------ The registration fees include midday lunch during the workshop, a Welcome Party to be held on Monday 4th evening and the proceedings of the workshop. Registration: 400 ECU ORGANIZATION COMMITTEE: ----------------------- Enrica Chiozza (IIC, Spain) Pilar Rodriguez-Marin (IIC, Spain) PROGRAM COMMITTEE: ------------------ Fontaine L. (Dassault Electronique, France) Rodriguez-Marin P. (IIC, Spain) Almeida L. B. (INESC, Portugal) Sundin U. (INFOLOGICS, Sweden) De Pablo E. (Repsol Petroleo, S.A., Spain) Jimenez A. (UPM, Spain) SECRETARIAT: ------------ Enrica Chiozza Instituto de Ingenieria del Conocimiento UAM Canto Blanco Modulo C-XVI, P. 4 28049 Madrid SPAIN Fax: (34 1) 397 3972 Phone: (34 1) 397 8520 E-mail: CHIOZZA @ EMDCCI11.BITNET CHIOZZA @ iic.uam.es ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From rsun at athos.cs.ua.edu Thu Jan 7 18:29:02 1993 From: rsun at athos.cs.ua.edu (Ron Sun) Date: Thu, 7 Jan 1993 17:29:02 -0600 Subject: integrating symbolic processing with neural networks Message-ID: I have collected a bibliography of papers on integrating symbolic processing with neural networks, and am looking for additional references. It's available in Neuroprose under the name Sun.bib.Z (in an unedited form); I'll also be happy to send it to you directly if you e-mail me. The bibliography will be included in a book on this topic that I'm co-editing. I'm looking for additional references to make the bibliography as comprehensive as possible. So, I would like authors to send me (a possibly annotated) list of their publications on this topic (this is a chance to make your work better known.) Also, anyone who has already compiled such a bib, please let me know; I would like to incorporate it. Due credit will be given, of course. Here is my address. E-mail response (rsun at cs.ua.edu) is strongly preferred. ================================================================ Ron Sun, Ph.D Assistant Professor Department of Computer Science phone: (205) 348-6363 The University of Alabama fax: (205) 348-8573 Tuscaloosa, AL 35487 rsun at athos.cs.ua.edu ================================================================ Thanks for your cooperation. From mume at sedal.su.oz.au Fri Jan 8 00:08:11 1993 From: mume at sedal.su.oz.au (Multi-Module Environment) Date: Fri, 8 Jan 1993 16:08:11 +1100 Subject: FREE MUME version 0.5 for MSDOS platform Message-ID: <9301080508.AA21134@sedal.sedal.su.OZ.AU> The Multi-Module Neural Computing Environment (MUME) version 0.5 for the MSDOS platform is now available FREE of charge via anonymous ftp on brutus.ee.su.oz.au:/pub/MUME-0.5-DOS.zip The full listing of the file is: -rw-r----- 1 mume mume 1391377 Jan 8 15:45 MUME-0.5-DOS.zip Unzipping it should create a directory called MUME-DOS and it is about 4.6 MB. Following is the README file. Have fun. MUME-Request at sedal.su.OZ.AU -------------------------------------------------------------------------------- Multi-Module Neural Computing Environment (MUME) Version 0.5 (FREE) for MSDOS 5.0 MUME is a simulation environment for multi-modules neural computing. 
It provides an object oriented facility for the simulation and training
of multiple nets with various architectures and learning algorithms.
MUME includes a library of network architectures including feedforward,
simple recurrent, and continuously running recurrent neural networks.
Each architecture is supported by a variety of learning algorithms.

MUME can be used for large scale neural network simulations as it
provides support for learning in multi-net environments. It also
provides pre- and post-processing facilities.

The object oriented structure makes it simple to add new network
classes and new learning algorithms. New classes/algorithms can simply
be added to the library or compiled into a program at run-time. The
interface between classes is performed using Network Service Functions,
which can be easily created for a new class/algorithm.

The architectures and learning algorithms currently available are:

Class                     Learning algorithms
------------              -------------------
MLP                       backprop, weight perturbation,
                          node perturbation, summed weight perturbation
SRN                       backprop through time, weight update driven
                          node splitting, history bound nets
CRRN                      Williams and Zipser
Programmable limited      weight perturbation, Combined Search
precision nets            Algorithm, Simulated Annealing

Other general purpose classes include (viewed as nets):

o DC source
o Time delays
o Random source
o FIFOs and LIFOs
o Winner-take-all
o X out of Y classifiers

The modules are provided in a library. Several "front-ends" or clients
are also available. MUME can be used to include non-neural computing
modules (decision trees, ...) in applications.

The software is the product of a number of staff and postgraduate
students at the Machine Intelligence Group at Sydney University
Electrical Engineering. It is currently being used in research,
research and development, and teaching, in ECG and ICEG
classification, and in speech and image recognition. As such, we are
interested in institutions that can exploit the tool (especially in
educational courses) and build on it.

The software is written in 'C' and is available on the following
platforms:

- Sun (SunOS)
- DEC (Ultrix)
- Fujitsu VP2200 (UXP/M)
- IBM RS6000 (AIX)
- Hewlett Packard (HP-UX)
- IBM PC compatibles (MSDOS 5.0) -- does not run under MS-Windows'
  DOS sessions

The MSDOS version of MUME is available as public domain software and
can be FTPed from brutus.ee.su.oz.au:/pub/MUME-0.5-DOS.zip.

MUME for the other platforms is available to research institutions on
media/doc/postage cost arrangements. Information on how to acquire it
may be obtained by writing (or email) to:

Marwan Jabri
SEDAL
Sydney University Electrical Engineering
NSW 2006 Australia
Tel: (+61-2) 692-2240
Fax: 660-1228
Email: marwan at sedal.su.oz.au

A MUME mailing list is available by sending an email to
MUME-Requests at sedal.su.OZ.AU. Please put your subscription email
address on the 'Subject:' line.

To send mail to everybody in the mailing list, send it to:
MUME at sedal.su.OZ.AU

All bug reports should be sent to MUME-Bugs at sedal.su.OZ.AU and
should include the following details:

1. Date (eg. 12 Feb 1993).
2. Name (eg. John Citizen).
3. Company/Institution (eg. Sydney University Electrical Engineering).
4. Contact Address (eg. what-is-mume at sedal.su.OZ.AU).
5. Version of MUME (eg. MUME 0.5).
6. Machine Name/Type (eg. Sun Sparc 2).
7. Version of the Operating System (eg. SunOS 4.1.1).
8. Brief Description of the problem(s).
9. Error Messages (if any).
10. Related Files (Filename, Version and Relationship to problems).

From lautrup at connect.nbi.dk  Fri Jan  8 09:31:02 1993
From: lautrup at connect.nbi.dk (Benny Lautrup)
Date: Fri, 8 Jan 93 14:31:02 GMT
Subject: No subject
Message-ID:

        POST-DOC POSITION IN NEURAL SIGNAL PROCESSING THEORY

The Danish Computational Neural Network Center (CONNECT) announces a
one-year post-doc position in the theory of neural signal processing.
CONNECT is a joint effort with participants from the University of
Copenhagen, Risoe National Laboratory, and the Technical University of
Denmark.

The position is available March 1, 1993, at the Electronics Institute
of the Technical University of Denmark. The work of the neural signal
processing group concerns generalization theory, algorithms for
architecture optimization, and applications in time series analysis,
seismic signal processing, image processing and pattern recognition.

The candidate must have a strong background in statistics or
statistical physics and several years of experience in neural signal
processing. A candidate with proven abilities in generalization theory
of signal processing neural networks or in seismic signal processing
will be favoured.

Further information about the position can be obtained from:

Lars Kai Hansen,                  Phone: (+45) 45 93 12 22, ext 3889.
Electronics Institute B349,       Fax: (+45) 42 87 07 17.
Technical University of Denmark,  email: lars at eiffel.ei.dth.dk
DK-2800 Lyngby.

Applications containing CV, list of publications, and three letters of
recommendation should be mailed to:

Benny Lautrup, CONNECT
Niels Bohr Institute
Blegdamsvej 17
DK-2100 Copenhagen

Deadline February 15, 1993

From sjr at eng.cam.ac.uk  Fri Jan  8 05:05:01 1993
From: sjr at eng.cam.ac.uk (sjr@eng.cam.ac.uk)
Date: Fri, 8 Jan 93 10:05:01 GMT
Subject: TR available
Message-ID: <16768.9301081005@truth.eng.cam.ac.uk>

The following technical report is available via FTP from the
International Computer Science Institute (ftp.icsi.berkeley.edu) and
also the Cambridge University Engineering Department FTP archive
(svr-ftp.eng.cam.ac.uk).

     CONNECTIONIST PROBABILITY ESTIMATION IN HMM SPEECH RECOGNITION

                    Steve Renals and Nelson Morgan
              International Computer Science Institute
                    Technical Report TR-92-081

This report is concerned with integrating connectionist networks into a
hidden Markov model (HMM) speech recognition system. This is achieved
through a statistical understanding of connectionist networks as
probability estimators, first elucidated by Herve Bourlard. We review
the basis of HMM speech recognition, and point out the possible
benefits of incorporating connectionist networks. We discuss some
issues necessary to the construction of a connectionist HMM recognition
system, and describe the performance of such a system, including
evaluations on the DARPA database, in collaboration with Mike Cohen and
Horacio Franco of SRI International. In conclusion, we show that a
connectionist component improves a state-of-the-art HMM system.
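The probability-estimator view the abstract mentions comes down to one
algebraic step: a network trained to output state posteriors P(state|x)
can feed an HMM once each posterior is divided by the state prior,
since P(x|state) is proportional to P(state|x)/P(state). A minimal
sketch of that conversion (Python, with made-up numbers; this is not
code from the report):

    import numpy as np

    # Hypothetical values for one acoustic frame: posteriors are the
    # outputs of a trained connectionist network, priors are state
    # frequencies estimated from the training alignment.
    posteriors = np.array([0.70, 0.20, 0.10])   # P(state | frame)
    priors     = np.array([0.50, 0.30, 0.20])   # P(state)

    # Scaled likelihoods, proportional to P(frame | state); the missing
    # factor P(frame) is the same for every state, so it cannot change
    # which path a Viterbi decoder prefers.
    scaled_likelihoods = posteriors / priors
    print(scaled_likelihoods)    # -> [1.4, 0.667, 0.5] (to printing precision)

In decoding, such scaled likelihoods stand in for the usual emission
densities.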
----------------------------------------------------------------------

UK:  The report is available from svr-ftp.eng.cam.ac.uk (129.169.8.1),
     in directory 'reports', file 'renals_icsi92-081.ps.Z'

USA: Available from ftp.icsi.berkeley.edu (128.32.201.7), in directory
     'pub/techreports', file 'tr-92-081.ps.Z'

Sample FTP session:

unix% ftp ftp.icsi.berkeley.edu
ftp> binary
ftp> cd pub/techreports
ftp> get tr-92-081.ps.Z
ftp> quit
unix% zcat tr-92-081.ps.Z | lpr

--------------------------------------------------------------------
Steve Renals            Cambridge University Engineering Department
sjr at eng.cam.ac.uk
--------------------------------------------------------------------

From dyer at CS.UCLA.EDU  Fri Jan  8 15:52:29 1993
From: dyer at CS.UCLA.EDU (Dr Michael G Dyer)
Date: Fri, 8 Jan 93 12:52:29 PST
Subject: regarding quantum neural computer announcement
Message-ID: <930108.205229z.26424.dyer@lanai.cs.ucla.edu>

Dr. S. Kak:

Regarding your quantum neural computer announcement: I am not a
physicist and have not yet received your tech report (so I am really
sticking my neck out), but it seems to me that there are two
assumptions you make that are potentially controversial:

1. that intelligence needs something at the quantum level (this is
something R. Penrose also argues for).

To my knowledge, there is no evidence as yet for this. Chess playing
machines are now at grand master level -- w/o quantum effects.
Connectionist/neural models exhibit nice learning and robustness
properties -- w/o quantum effects. Bayesian-based AI expert systems
perform sophisticated reasoning-under-uncertainty tasks; there are
natural language systems that answer questions about themselves, thus
exhibiting some limited form of self-awareness, etc. -- all w/o needing
to postulate quantum effects. At this point no persuasive argument has
yet been made for needing quantum-level effects to solve any particular
task involving reasoning, language, memory or perceptual processes and
the like. If there are such arguments, then I would certainly like to
know them (Penrose sure never came up with any!)

2. that quantum-level phenomena could never be adequately simulated by
a Turing machine (i.e. that reality is not computable).

After reading a number of (non-specialist) books on quantum physics, I
am not yet convinced of this. E.g., the collapse of the wave function
appears to be required as a result of the wave-particle duality, such
as observed in the 2-slit experiment. So let's consider the 2-slit
experiment. Individual electrons or photons hit a screen, one by one,
and register like particles. Over time, however, their overall pattern
is like that of waves (e.g. interference, diffraction).

But there's an approach that could produce similar results from
completely deterministic equations -- i.e. chaos theory. For example,
there are chaos games in which the dots generated on a screen jump
around as if at random but, over time, a pattern emerges, for instance
a fern (e.g. p. 238 of Gleick's Chaos book). If something that
complicated can be produced by a sequence of apparently random dots
(particles), then why couldn't something as simple as a wave
interference pattern also be produced in this way? This could turn out
to be the case if the emission of a particle alters the field in such a
way that the path of subsequent particles emitted will produce the
desired, wave-like result.
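The chaos game described here is easy to reproduce. A minimal sketch
(Python; the four affine maps are the standard published ones for
Barnsley's fern, not anything from Kak's report): each dot is placed by
a deterministic map chosen at random, each dot in isolation looks
random, yet the accumulated set of dots is a fern.

    import random

    # Barnsley's fern as a "chaos game": each step applies one of four
    # affine maps (x, y) -> (a*x + b*y + e, c*x + d*y + f), drawn with
    # weight w. Individual dots jump around as if at random; the set of
    # dots converges on the fern, the attractor of this system.
    MAPS = [
        #  a      b      c     d     e    f        w
        (( 0.00,  0.00,  0.00, 0.16, 0.0, 0.00), 0.01),  # stem
        (( 0.85,  0.04, -0.04, 0.85, 0.0, 1.60), 0.85),  # main frond
        (( 0.20, -0.26,  0.23, 0.22, 0.0, 1.60), 0.07),  # left leaflets
        ((-0.15,  0.28,  0.26, 0.24, 0.0, 0.44), 0.07),  # right leaflets
    ]
    coeffs  = [m for m, _ in MAPS]
    weights = [w for _, w in MAPS]

    x, y = 0.0, 0.0
    points = []
    for _ in range(50000):
        a, b, c, d, e, f = random.choices(coeffs, weights)[0]
        x, y = a * x + b * y + e, c * x + d * y + f
        points.append((x, y))
    # plotting the points (e.g. with matplotlib) shows the fern emerge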
In this (albeit hand-waving) case, then, there would exist
deterministic equations generating wave-like behavior and the whole
thing could be ultimately simulated by a Turing machine.

Like I said before, I am not a physicist, so perhaps you (or someone
else on connectionists) could correct the (possibly gross)
misunderstandings contained within my naive suggestion. In any case, I
look forward to obtaining and reading your tech report.

-- Michael Dyer

From marwan at sedal.su.oz.au  Fri Jan  8 00:29:29 1993
From: marwan at sedal.su.oz.au (Marwan Jabri)
Date: Fri, 8 Jan 1993 16:29:29 +1100
Subject: ACNN'93 Conference Programme
Message-ID: <9301080529.AA21731@sedal.sedal.su.OZ.AU>

     THE FOURTH AUSTRALIAN CONFERENCE ON NEURAL NETWORKS (ACNN'93)

                       CONFERENCE PROGRAMME
                      1st - 3rd FEBRUARY 1993
                      UNIVERSITY OF MELBOURNE
                             AUSTRALIA

PROGRAMME

Monday, 1st February 1993

8.30 - 9.15 am    Registration
9.15 - 9.30 am    Official Opening
9.30 - 10.30 am   Keynote Address
                  G Hinton, Department of Computer Science,
                  University of Toronto, Canada
10.30 - 11.00 am  Morning Coffee
11.00 - 12.30 pm  Session 1
  An Associative Memory Model for the CA3 Region of the Hippocampus
    M R Bennett, Neurobiology Research Centre; W G Gibson & J Robinson,
    School of Mathematics & Statistics, University of Sydney, Australia
  Variable Threshold ART3 Neural Network
    P Lozo, Dept of Electrical & Electronic Engineering, University of
    Adelaide, Australia
  Constrained Quadratic Optimization using Neural Networks
    A Bouzerdoum & T R Pattison, Dept of Electrical & Electronic Eng,
    University of Adelaide, Australia
12.30 - 3.00 pm   Lunch (including Student Poster Session)
3.00 - 3.30 pm    Afternoon Tea
3.30 - 5.00 pm    Session 2
  The Sample Size Necessary for Learning in Multi-layer Networks
    P L Bartlett, Department of Electrical & Computer Engineering,
    University of Queensland, Australia
  Implementing a Model for Line Perception
    B P McGregor & M L Cook, Centre for Visual Sciences, RSBS,
    Australian National University, Australia
  Improving the Performance of the Neocognitron
    D R Lovell, D Simon & A C Tsoi, Department of Electrical & Computer
    Engineering, University of Queensland, Australia
5.00 - 7.30 pm    Poster Session 1

Tuesday, 2nd February 1993

9.00 - 10.30 am   Session 3
  Improved Phoneme Recognition Using Multi-Module Recurrent Neural
  Networks
    L R Leerink & M Jabri, Dept of Electrical Engineering, University
    of Sydney, Australia
  External Stimuli in Biased Attractor Neural Networks
    A N Burkitt, Computer Sciences Laboratory, RSPS, Australian
    National University, Australia
  Activity Dependent Plasticity of Synapses in the Central Nervous
  System
    F H Guldner, Department of Anatomy, Khon Kaen University, Thailand
10.30 - 11.00 am  Morning Coffee
11.00 - 12.00 noon
  A Method for Learning from Hints (Invited)
    Y S Abu-Mostafa, California Institute of Technology, USA
12.00 - 1.30 pm   Lunch
1.30 - 3.00 pm    Session 4
  A VLSI Arrhythmia Classifier
    P H W Leong & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  The Associative Conditioning Element
    B L Rogers, Information Technology Institute, Swinburne University
    of Technology, Australia
  Establishing Analogical Mappings by Synchronizing Oscillators
    B D Burns, J E Hummel & K J Holyoak, Department of Psychology,
    University of California, Los Angeles, USA
3.00 - 3.30 pm    Afternoon Tea
3.30 - 5.00 pm    Session 5
  Experimental Low Cost Neural Networks for Spoken Language
  Understanding
    A Kowalczyk & M Dale, Telecom Research Laboratories, Australia
  A Neural Network Implementation of Sokolov's Model of Habituation of
  the Orienting Reflex
    B A Daniels, Department of Psychology, University of Tasmania,
    Australia
  Moving Image Compression and Regeneration by Associative Retina Chip
    Y Nakamura, M Ikegami & M Tanaka, Faculty of Science & Technology,
    Sophia University, Japan
5.00 - 7.00 pm    Poster Session 2
7.00 - 9.00 pm    BBQ and drinks

Wednesday, 3rd February 1993

9.00 - 10.00 am
  A Spectral Domain Associative Memory with Improved Recall (Invited)
    B Hunt, M S Nadar, P Keller, E Van Colln & A Goyal, Department of
    Electrical & Computer Engineering, University of Arizona, USA
10.00 - 10.30 am  Morning Coffee
10.30 - 11.30 am  Session 6
  Modelling Context Effects in Human Character Recognition Using
  Interconnected Single-Layer Perceptrons
    C Latimer, C Stevens & M Charles, Department of Psychology,
    University of Sydney, Australia
  Learning Nonlinearly Parametrized Decision Regions
    K L Halliwell, R C Williamson & I M Y Mareels, Interdisciplinary
    Engineering Program, Australian National University, Australia
11.30 - 1.00 pm   Lunch (including Ideas-in-Progress Posters)
1.00 - 2.00 pm    Session 7
  A Comparison of Three On Chip Neuron Designs for a Low Power VLSI MLP
    R J Coggins, M A Jabri & S Pickard, Department of Electrical
    Engineering, University of Sydney, Australia
  Developments to the CMAC Neural Net
    C S Berger, Department of Electrical & Computer Systems
    Engineering, Monash University, Australia
2.00 - 2.15 pm    Closing

Poster Session 1: Monday, 1st February 1993, 5.00 - 7.30 pm

  The Effect of Representation on Error Surface
    S Phillips, Department of Computer Science, University of
    Queensland, Australia
  Statistical Analysis of a Parallel Dynamics Autoassociative Memory
  Network
    A M N Fu & W G Gibson, School of Mathematics & Statistics,
    University of Sydney, Australia
  Error Bounds for Approximation by Feedforward Neural Networks
    M Ma & A C Tsoi, Department of Electrical Engineering, University
    of Queensland, Australia
  A Comparative Study between SGNT and SONN
    W Wen, V Pang & A Jennings, AISS/TSSS, Telecom Research
    Laboratories, Australia
  A Critical Look at Adaptive Logic Networks
    S Parameswaran & M F Schulz, Department of Electrical & Computer
    Engineering, University of Queensland, Australia
  Single and Dual Transistor Synapses for Analogue VLSI Artificial
  Neural Networks
    B Flower & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  Well-Balanced Learning by Observing Individual Squared Errors
    K Kohara & T Kawaoka, NTT Network Information Systems Laboratories,
    Japan
  Optimization of Multi-Layer Neural Networks using Gauss-Newton
  Minimization
    A Bainbridge-Smith, M A Stoksik & R G Lane, Department of
    Electrical & Electronic Engineering, University of Tasmania,
    Australia
  Exception Learning by Backpropagation: A New Error Function
    P Bakker & J Wiles, Department of Computer Science, University of
    Queensland, Australia; R Lister, Department of Electrical &
    Computer Engineering, University of Queensland, Australia
  Genetic Optimization and Representation of Neural Networks
    M Mandischer, Department of Computer Science VI, University of
    Dortmund, Germany
  Growing Context Units in Simple Recurrent Networks Using the
  Statistical Attribute of Weight Updates
    L R Leerink & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  A Nonlinear Model for Human Associative Memory Based on Error
  Accumulation
    R A Heath, Department of Psychology, University of Newcastle,
    Australia
  Towards Connectionist Realization of Fuzzy Production Systems
    N K Kasabov, Department of Information Science, University of
    Otago, New Zealand
  A Stable Neural Controller for Nonminimum Phase Systems
    S K Mazumdar & C C Lim, Department of Electrical & Electronic
    Engineering, University of Adelaide, Australia
  An Adaptive Neural Net Based Spectral Classifier
    J T Hefferan & S Reisenfeld, School of Electrical Engineering,
    University of Technology Sydney, Australia
  Neuro-Morphology of Biological Vision: Fractional Discriminant
  Functions in the Emulation of Visual Receptive Fields for Remote
  Sensed Images
    S K Hungenahally & A Postula, School of Microelectronic
    Engineering, Griffith University, Australia; L C Jain, School of
    Electronic Engineering, University of South Australia, Australia
  Automated Acquisition of Rules for Diagnosis
    S Sestito & S Goss, Air Operations Division, DSTO Aeronautical
    Research Laboratory, Australia; G Merrington & R Eustace, Flight
    Mechanics & Propulsion Division, DSTO Aeronautical Research
    Laboratory, Australia
  Neural Networks to Compute Global Pattern Rotation and Dilation
    J S Chahl & M V Srinivasan, Centre for Visual Sciences, RSBS,
    Australian National University, Australia
  A Neural Architecture with Multiple Scales of Organisation
    D Alexander, Behavioural Sciences, Macquarie University, Australia
  Error and Variance Bounds in Multilayer Neural Networks
    D R Lovell & P L Bartlett, Department of Electrical & Computer
    Engineering, University of Queensland, Australia
  What Size Higher Order Network gives Valid Generalization?
    S Young & T Downs, Department of Electrical Engineering, University
    of Queensland, Australia

Poster Session 2: Tuesday, 2nd February 1993, 5.00 - 7.00 pm

  RPROP: A Fast and Robust Backpropagation Learning Strategy
    M Riedmiller & H Braun, Institut für Logik, Komplexität und
    Deduktionssysteme, University of Karlsruhe, Germany
  A VLSI Switched Capacitor Realisation of An Artificial Synapse and
  Neuron Suitable for Nano-Power Multi-Layer Perceptrons
    S Pickard & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  PANNE: A Parallel Artificial Neural Network Engine
    S Guda, B Flower & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  The Self-Growing Feed-Forward Counterpropagation Network
    S J Bye, Telecom Research Laboratories, Australia; A Adams &
    P Vamplew, Artificial Neural Network Research Group, University of
    Tasmania, Australia
  Pruning Feed-forward Neural Networks
    A N Burkitt, Computer Sciences Laboratory, RSPS, Australian
    National University, Australia; P Ueberholz, Physics Department,
    University of Wuppertal, Germany
  A Comparison of Architectural Alternatives for Recurrent Networks
    W H Wilson, School of Computer Science & Engineering, University of
    New South Wales, Australia
  An Entropy Based Feature Evaluation and Selection Technique
    Z Chi & M A Jabri, Department of Electrical Engineering, University
    of Sydney, Australia
  Designing and Training a Multi-Net System with varying Algorithms and
  Architectures
    M Arnold & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  The Mind's Eye: Extraction of Structure from Images of Objects with
  Natural Variability
    T J Stucke & G Coghill, Department of Electrical & Electronic
    Engineering; G C Creak, Department of Computer Science, University
    of Auckland, New Zealand
  Comparison of a Back-Propagation Model of Music Recognition and Human
  Performance
    C Stevens & C Latimer, Department of Psychology, University of
    Sydney, Australia
  Analysis of a Neural Network with Application to Human Memory
  Modelling
    M Chappell & M S Humphreys, Department of Psychology, University of
    Queensland, Australia
  An ART Model of Human Recognition Memory
    A Heathcote, Psychology Department, University of Newcastle,
    Australia
  Comparison of Different Neighbourhood Size in Simulated Annealing
    X Yao, Department of Computer Science, University College,
    University of New South Wales, ADFA, Australia
  Classification of Incomplete Data using Neural Networks
    M L Southcott & R E Bogner, Department of Electrical & Electronic
    Engineering, University of Adelaide, Australia
  Feature Extraction Using Neural Networks
    S V R Madiraju, Z Q Liu & T M Caelli, Department of Computer
    Science, University of Melbourne, Australia
  Application of Neural Networks to Quantitative Structure-Activity
  Relationships of Benzodiazepine/GABAA Receptor Binding Compounds
    D J Maddalena & G Johnston, Department of Pharmacology, University
    of Sydney, Australia
  Word-boundary Detection using Recurrent Neural Networks
    L R Leerink & M A Jabri, Department of Electrical Engineering,
    University of Sydney, Australia
  Classification by Single Hidden Layer ANN
    G Chakrabnorty, N Shiratori & S Noguchi, Division of Engineering,
    Tohoku University, Japan
  Unification in Prolog by Connectionist Models
    Volker Weber, Computer Science Department, University of Hamburg,
    Germany

Ideas-in-Progress Posters

  A Feedforward Neural Network with Complex Weights
    E Skafidas & M Palaniswami, Department of Electrical & Electronic
    Engineering, University of Melbourne, Australia
  A Method of Training Multi-Layer Networks with Heaviside
  Characteristics using Internal Representations
    R J Gaynier & T Downs, Department of Electrical & Computer
    Engineering, University of Queensland, Australia
  Comparing Computed Neural Nets and Living Brains
    C J A Game, Department of Surgery, University of Sydney & SEDAL,
    Australia
  Neural Networks as Direct Adaptive Controllers
    M Bahrami, School of Electrical Engineering, University of New
    South Wales, Australia
  Performance Criteria for Stop Consonant Identification Using
  Artificial Neural Nets
    R Katsch, P Dermody & D Woo, Speech Communication Research Group,
    National Acoustic Laboratories, Australia
  Classifying High Dimensional Spectral Data by Neural Networks
    R A Dunne, Murdoch University, Australia; N A Campbell &
    H T Kiiveri, Division of Mathematics & Statistics, CSIRO, Australia

Registration

The conference is being held at the Prince Philip Theatre in the
Architecture and Planning Building in Masson Rd (marked on the attached
map of Melbourne University). Registration is available in the foyer of
the Architecture and Planning Building from 4-6pm on Sunday, 31
January, and from 8.30 each morning of the Conference.

Registration fees:
  Academic  A$200.00
  Student   A$ 75.00
  Other     A$300.00

Accommodation

For accommodation booking, please contact Conference Associates,
Tel/Fax: +61 (3) 887 8003. Accommodation has been block booked at:

  Ormond College,             Student   $ 32.00 (Bed & Breakfast)
  University of Melbourne     Other     $ 42.00

  The Town House,             Standard  $ 98.00
  701 Swanston St, Carlton    Executive $110.00
  (rates include continental breakfast)

  Lygon Lodge,                Standard  $ 83.00
  220 Lygon St, Carlton       Deluxe    $ 95.00
                              Standard 3-bed room $ 95.00

Train (Sydney-Melbourne)
  Economy Return    $ 98.00
  1st Class Return  $158.00
  Depart Sydney 8.05 pm / Melbourne 8.00 pm
  Arrive Melbourne 9.10 am / Sydney 9.00 am

Local Air Travel Information

Ansett are the official carrier for ACNN'93 and are providing a number
of services. However, only the normal discount air fares are available
for delegates. If booking with Ansett please quote the Master File
number: MC 04351. The reservation phone number is 131300. Discount
tickets have to be booked at least 14 days in advance.
However, only a limited number of seats will be available at these
prices, so book asap and avoid disappointment.

Transport from Airport

Tullamarine Airport, located 20 km north-west of the city centre on the
Tullamarine Freeway, is open 24 hours a day and handles both
international and domestic flights at terminals in the same building.
The Skybus Airport Coach service runs regularly to the city with a
transfer time of 30-40 minutes. The airline and Greyhound terminals in
the city are adjacent to Swanston St. The contact number for Skybus is
(03) 335 3066. Cab fare is about $20-25.

Transport to the University of Melbourne

The conference is situated in the Architecture and Planning Building in
Masson Rd. Tram stop No. 10 in Swanston St (between Faraday and Elgin
Sts) is directly opposite. Trams No. 1 and 15 are appropriate, and can
be caught from Museum and Flinders St Stations.

Parking

Parking is not available on campus during regular hours. There are all
day meters at 20 cents/hour to the north of the University in College
Crescent and in Lygon St adjacent to the Cemetery, and free all day
parking in Princes Park Drive. Undercover parking is available at
$6/day at Tower in Drummond St, and there is also all day parking at $5
at the Exhibition Buildings in Rathdown St. Note that Monday, 1
February is a public holiday in Victoria. Limited all day parking may
be available on campus for $5.

On-Campus Facilities

There is a Commonwealth Bank and a Post Office in the building in which
the conference is held. The Union is nearby. There are bookshops,
chemists and food stops in the Union. Note that the Union is closed on
Monday, 1 February.

Lunch

Lygon St is a short walk east of the conference venue. There are
restaurants, food bars, take-away and eat-in places to meet all budget
and dietary tastes. These are open all week, including the Australia
Day holiday on 1 February.

Messages to Delegates

The Conference Registration Desk can be contacted on (03) 344-7962,
Sunday 4-6pm and from 8.30am Monday to Wednesday. Messages can also be
left with the CITRI receptionist on (03) 282-2400 if the registration
desk is unattended.

Weather

Melbourne is generally warm with sunny days in February. The average
temperature is 26 degrees, with an average overnight temperature of 15
degrees. There are often hot periods in excess of 33 degrees, followed
by a change with thunderstorms. The maximum temperature recorded is 43
degrees, the lowest 4 degrees. There is a 25% chance of rain.
Sponsors

  Ansett Airlines
  Australian Telecommunications & Electronics Research Board
  Carlton United Breweries
  CITRI, University of Melbourne
  Defence Science & Technology Organisation
  SEDAL, University of Sydney
  Telecom Research Laboratories

ACNN'93 Organising Committee

Conference Chairman
  Dr Marwan Jabri, University of Sydney

Technical Programme Chairs
  Dr Andrew Jennings, Telecom Research Laboratories
  Dr Stephen Pickard, University of Sydney

Technical Committee
  Prof Yianni Attikiouzel, University of Western Australia
  Prof Max Bennett, University of Sydney
  Prof Bob Bogner, University of Adelaide
  Dr Joel Bornstein, University of Melbourne
  Ms Angela Bowles, BHP Research Melbourne
  Prof Terry Caelli, University of Melbourne
  Prof Max Coltheart, Macquarie University
  Dr Phil Diamond, University of Queensland
  Mr Barry Flower, University of Sydney
  Dr Bill Gibson, University of Sydney
  A/Prof Richard Heath, University of Newcastle
  Dr Andrew Jennings, Telecom Research Laboratories
  Dr Adam Kowalczyk, Telecom Research Laboratories
  Prof Bill Levick, Australian National University
  Dr D Nandagopal, Defence Science & Technology Organisation
  Dr M Palaniswami, Defence Science & Technology Organisation
  Dr Stephen Pickard, University of Sydney
  Dr Nick Redding, Defence Science & Technology Organisation
  Dr M Srinivasan, Australian National University
  Prof Ah Chung Tsoi, University of Queensland
  Dr Janet Wiles, University of Queensland
  Dr Bob Williamson, Australian National University

Local Committee
  Dr Joel Bornstein, University of Melbourne
  Ms Angela Bowles, BHP Research Melbourne
  Prof Terry Caelli, University of Melbourne
  Dr Victor Ciesielski, Royal Melbourne Institute of Technology
  Dr Simon Goss, Defence Science & Technology Organisation
  Dr Andrew Jennings, Telecom Research Laboratories
  Dr Adam Kowalczyk, Telecom Research Laboratories

Institutions Liaison & Publicity
  Dr Simon Goss, Defence Science & Technology Organisation

Sponsorship
  Dr Andrew Jennings, Telecom Research Laboratories

Publications
  Mr Philip Leong, University of Sydney

From rohwerrj at cs.aston.ac.uk  Sat Jan  9 09:51:10 1993
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Sat, 9 Jan 93 14:51:10 GMT
Subject: Quantum neural computer
Message-ID: <16307.9301091451@cs.aston.ac.uk>

> Hitherto all computers have been designed based on classical laws.
> We consider the question of building a quantum neural computer and
> speculate on its computing power. We argue that such a computer
> could have the potential to solve artificial intelligence problems.

It is difficult to argue that quantum computation plays an important
role in everyone's favorite intelligent computer, the human brain. The
characteristically 'quantum' properties of quantum computers, such as
the ability to run a superposition of programs simultaneously on a
single machine, arise only if the computer is a totally isolated
system; i.e., it exchanges not a single quantum of energy with its
environment. The brain fails this test pathetically.

As Professor Kak's TR probably makes clear [my apologies for posting
this before obtaining it], the engineering requirements for building
any quantum computer, neurally-inspired or not, are quite severe.
Furthermore, the required programming style is bizarre: to prevent
energy dissipation, programs must be written so that all intermediate
results are eventually erased.

> Intelligence, and by implication consciousness, has been taken by
> many computer scientists to emerge from the complexity of the
> interconnections between the neurons.
> But if it is taken to be a unity, as urged by Schrodinger and other
> physicists, then it should be described by a quantum mechanical wave
> function. No representation in terms of networking of classical [...]

I am sympathetic to the view that quantum superposition has something
important to do with mind. Otherwise it would be (even more) difficult
to explain the quantum measurement process. But for the reasons given,
I don't think quantum computers are the missing link.

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND
Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs

From kak at max.ee.lsu.edu  Fri Jan  8 16:58:47 1993
From: kak at max.ee.lsu.edu (Dr. S. Kak)
Date: Fri, 8 Jan 93 15:58:47 CST
Subject: regarding quantum neural computer announcement
Message-ID: <9301082158.AA18390@max.ee.lsu.edu>

Dr Dyer: I am aware that objections of the kind that you have made will
be offered by many computer scientists. In response I can only say that
no chaos-based quantum theory has yet emerged. Indeed, the study of the
EPR proposal suggests that such a theory should not exist, because
there seems to be action-at-a-distance behavior at the level of single
particles. Anyway, my purpose is to start a debate, and the
overwhelming response I have received in a single day suggests that
such a debate may soon be joined.

Thanks,
-Subhash Kak

From yves at netid.com  Fri Jan  8 18:19:17 1993
From: yves at netid.com (Yves Chauvin)
Date: Fri, 8 Jan 93 15:19:17 PST
Subject: Preprint available
Message-ID: <9301082319.AA15957@netid.com>

**DO NOT FORWARD TO OTHER GROUPS**

The following paper, "Hidden Markov Models in Molecular Biology: New
Algorithms and Applications", has been placed in the neuroprose
archive. It is to be published in the Proceedings of the 1992 NIPS
conference. Further information and retrieval instructions are given
below.

Yves Chauvin
yves at netid.com

___________________________________________________________________________

            Hidden Markov Models in Molecular Biology:
                 New Algorithms and Applications

Pierre Baldi
Jet Propulsion Laboratory and Division of Biology,
California Institute of Technology, Pasadena, CA 91109

Yves Chauvin
Net-ID, Inc.

Tim Hunkapiller
Division of Biology, California Institute of Technology

Marcella A. McClure
Department of Ecology and Evolutionary Biology,
University of California, Irvine

We introduce a new convergent learning algorithm for HMMs that, unlike
the classical Baum-Welch algorithm, is smooth and can be applied
on-line or in batch mode, with or without the usual Viterbi most likely
path approximation. HMMs are then trained to represent and align
several protein families including immunoglobulins and kinases. In all
cases, the trained models seem to capture the statistical properties
characteristic of the families.

___________________________________________________________________________

A complete technical report and related preprints are available upon
written request to the first author.

___________________________________________________________________________

Retrieval instructions: The paper is baldi.compbiohmm.ps.Z in the
neuroprose archive. To retrieve this file from the neuroprose archives:

unix> ftp cheops.cis.ohio-state.edu
Name (cheops.cis.ohio-state.edu:becker): anonymous
Password: (use your email address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get baldi.compbiohmm.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for baldi.compbiohmm.ps.Z.
ftp> quit
unix> uncompress baldi.compbiohmm.ps.Z
unix> lpr baldi.compbiohmm.ps

From cds at vaxserv.sarnoff.com  Mon Jan 11 12:00:45 1993
From: cds at vaxserv.sarnoff.com (Clay Spence x3039)
Date: Mon, 11 Jan 93 12:00:45 EST
Subject: RE> quantum neural computer announcement
Message-ID: <9301111700.AA06064@peanut.sarnoff.com>

A comment on Dr. Dyer's comments on Dr. Kak's announcement:

> 2. that quantum-level phenomena could never be adequately simulated
> by a Turing machine (i.e. that reality is not computable).
>
> After reading a number of (non-specialist) books on quantum physics,
> I am not yet convinced of this. ...
>
> But there's an approach that could produce similar results from
> completely deterministic equations -- i.e. chaos theory. ...
>
> In this (albeit hand-waving) case, then, there would exist
> deterministic equations generating wave-like behavior and the whole
> thing could be ultimately simulated by a Turing machine.

Chaos and quantum mechanics are not equivalent; in quantum mechanics a
system has observable properties that one can measure, but in most
interpretations it doesn't make sense to say that the properties had
those values before the measurement was made; e.g., a particle
apparently doesn't have a position until the position is measured. This
has experimental consequences which have been verified. (The reference
that comes to my mind [Mermin, 1985] is slightly old, but very
readable.) This kind of effect cannot be produced by a chaotic,
deterministic system of particles.

However, one can simulate a quantum system on an ordinary computer by
solving Schroedinger's equation numerically and randomly choosing
measurement results with probability given by the squared magnitude of
the wave function. So the conclusion is correct, "the whole thing could
be ultimately simulated by a Turing machine", to the extent that one
can simulate the quantum system accurately and to the extent that an
ordinary computer is like a Turing machine (I'm not a computer
scientist). I have no idea whether quantum effects could add anything
to a machine's ability to compute or "reason."

Mermin, N.D., 1985. Physics Today, Vol. 38, No. 4, p. 38.

Clay Spence

From mitsu at netcom.com  Mon Jan 11 16:55:43 1993
From: mitsu at netcom.com (Mitsu Hadeishi)
Date: Mon, 11 Jan 93 13:55:43 -0800
Subject: Quantum neural computer
Message-ID: <9301112155.AA27873@netcom3.netcom.com>

> It is difficult to argue that quantum computation plays an important
> role in everyone's favorite intelligent computer, the human brain.
> The characteristically 'quantum' properties of quantum computers,
> such as the ability to run a superposition of programs simultaneously
> on a single machine, arise only if the computer is a totally isolated
> system; i.e., it exchanges not a single quantum of energy with
> its environment. The brain fails this test pathetically.

This is not correct, as I understand it: a quantum measurement does not
necessarily collapse the entire wave function of a system, and even if
it did, the mere fact of the exchange of energy with another system
does not in fact entail a quantum measurement. If this statement were
correct, then measuring *anything* coming out of *any* system would
allow you to determine the precise state of every single particle in
the emitting system.
Consider also the fact that a measurement can be ambiguous between
different wave functions: i.e., you may detect a photon, but you don't
necessarily know where the photon came from.

Mitsu Hadeishi
General Partner, Open Mind
mitsu at well.sf.ca.us
mitsu at netcom.com

From gary at cs.UCSD.EDU  Mon Jan 11 22:41:33 1993
From: gary at cs.UCSD.EDU (Gary Cottrell)
Date: Mon, 11 Jan 93 19:41:33 -0800
Subject: another Dognitive Science seminar
Message-ID: <9301120341.AA14499@odin.ucsd.edu>

                              SEMINAR

             Oscillations in Dog Cortex: A new approach

                        Garrison W. Cottrell
                      Department of Dog Science
               Southern California Condominium College

Recent work (Blackie & Wolf, 1990) has shown that when a canine is
attending a stimulus, the neurons representing that stimulus fire in
synchrony. It has been suggested that this is the mechanism by which
the stimulus features are bound together in the dog's brain. It has
also been suggested that whole object recognition occurs in the
Inferior Temporal (IT) Cortex of the dog[1]. The question then arises:
How are the oscillations in one part of the brain used by the object
recognition system in another part? If IT is indeed an inferior
temporal processor, how could it possibly make use of such temporal
information?

Part of the problem in studying such phenomena is that the brain
processes things so fast, it is difficult to measure recognition events
among the blooming, buzzing confusion in the cortex. Hence we suggest
that the ideal subjects for studying such processes are older dogs, who
appear to have far fewer neurons[2], and those that remain run at a
much more leisurely pace. A second reason for studying elderly dogs is
that they sleep a great deal, and this is an ideal time to study the
baseline activity of the recognition system. If the Boltzdogg machine
model (Hilton & Slugowski, 1986) is correct, the oscillations observed
during sleep reflect the structure of the system in isolation from the
environment.

There are many difficulties in assessing brain activity. Single cell
recordings are at too low a level to assess symbolic activity. Evoked
potential studies are good at temporal resolution but poor at source
identification. PET studies are useful for localization but have poor
temporal resolution[3]. We have discovered a non-invasive technique for
studying oscillations in dog brains that also gives us the sources in
an unambiguous way.

We have found that, contrary to popular belief, leg locomotion during
sleep does not mean that dogs are chasing rabbits in their dreams.
Rather, neuromodulators released during sleep rewire the output of the
recognition system to the leg musculature. Thus, the leg twitches are a
direct reflection of cortical oscillations in the four complex object
recognition regions[4] in the dog brain. An immediate observation is
that the food & master (left & right rear legs) regions oscillate 180
degrees out of phase with the sex & cat face regions (left & right
front legs). Thus one can observe right away that representation is a
process, since just like a computer process, this one runs, and
eventually halts. The major difference is that this process runs when
asleep (cf. Unix(tm) sleep(1)).

This is a great new instrument for assessing the behavior of the brain,
since we avoid problems with animal rights people by using a
non-invasive technique. Since the older dog is asleep so much, he
presents a terrific wealth of data on brain activity.
On a more philosophical note, it suggests that meaning representations
are oscillations all the way down, suggesting that West Coast
researchers who are into "getting the vibes" are not that far off in
their approach. The dogleg technology is certainly giving Dognitive
Science a leg up on what's happening in the dog's brain.

A live demonstration of leg twitching during sleep will be presented at
the talk.

------------
[1] It is unclear why this part of cortex is deemed to be inferior,
since it plays such an important role. Some have suggested that the
name means that it is bad at temporal processing, and makes up for this
lack by being good at spatial processing.

[2] Some believe that older dogs' grandmother cells have gone to rest
homes. And, although researchers can't agree whether these neurons are
simply decaying or are being actively suppressed, it must be true that
they can't have all checked out, since Jellybean at 15;10 still
recognizes his food. However, he also recognizes grass, dirt, and
rotting logs as food, which suggests a degraded distributed
representation.

[3] Aside from the fact that they do not record activity, CAT scans are
obviously an inappropriate tool for studying dognition.

[4] It is generally accepted that the four recognition systems are
divided modularly into the FOOD, OPPOSITE SEX, MASTER and CAT FACES
regions. However, there is some argument whether the latter region is
recognizing cat faces, upside down monkey faces, or paint brushes
(Parrot, 1989).

From mume at sedal.su.oz.au  Tue Jan 12 00:59:24 1993
From: mume at sedal.su.oz.au (Multi-Module Environment)
Date: Tue, 12 Jan 1993 16:59:24 +1100
Subject: change of addresses for MUME. Now: mume-request@sedal.su.OZ.AU,
         mume-bugs@sedal.su.OZ.AU and MUME@sedal.su.OZ.AU
Message-ID: <9301120559.AA04531@sedal.sedal.su.OZ.AU>

Dear Connectionists,

Apologies to those who tried to send mail to MUME-Request and
MUME-Bugs. Our sendmail couldn't handle these names. As a consequence,
they have been changed to mume-request and mume-bugs, respectively. So
to:

1. Register yourself in the mailing list, mail to
   mume-request at sedal.su.OZ.AU
   with your email address on the 'Subject:' line.

2. Send mail to everybody in the mailing list, send it to:
   MUME at sedal.su.OZ.AU
   This address is unchanged.

3. Report bugs, mail:
   mume-bugs at sedal.su.OZ.AU

MUME

From Paul_Gleichauf at B.GP.CS.CMU.EDU  Mon Jan 11 12:25:01 1993
From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU)
Date: Mon, 11 Jan 93 12:25:01 EST
Subject: regarding quantum neural computer announcement
In-Reply-To: Your message of "Fri, 08 Jan 93 12:52:29 PST."
             <930108.205229z.26424.dyer@lanai.cs.ucla.edu>
Message-ID: <120.726773101@B.GP.CS.CMU.EDU>

Fellow Connectionists,

Michael Dyer has raised some interesting issues reminiscent of past
discussions that were inspired by Roger Penrose's book "The Emperor's
New Mind". Whether they have anything to do with Dr. Kak's paper, or
his very sketchy initial announcement, will require some careful
reading. I am sure that we want to be a bit cautious about getting into
a debate about quantum computation and its ostensible relationship to
intelligence or consciousness.

In particular, the notion that there exist deterministic equations that
govern the evolution of a pure particle description is subject to very
strict limitations. One of the problems with chaos-based deterministic
equations that might be hypothesized as governing quantum theory is the
assumption that there are "hidden variables" that evolve such systems.
The coefficients of the governing equations, which are so important in
chaotic systems, have a very strictly circumscribed role in quantum
theory. These limitations include the prohibition of local hidden
variables as coefficients for such equations. This is a very tough
hurdle for any chaotic model of quantum mechanics. There are papers by
J. S. Bell that first broached the issue of the viability of hidden
variables in terms of verifiable experimental predictions that are in
contradiction with those of quantum mechanics. They, and other
technical references to this subject, are collected in Quantum Theory
and Measurement, ed. by J. A. Wheeler and W. Zurek, Princeton
University Press, 1983.

In a useful sense the equations of quantum theory are quite
deterministic; it is just that they determine the development of
wavefunctions, which are used to compute the probability of
measurements. The measured results are probabilistic; the fundamental
theoretical building blocks of the theory are not.

An interesting sidebar is that fairly recent efforts to prove the
computational power of so-called quantum computers have not expanded
the definition of computability beyond Turing machines. There are some
papers which are referenced by Penrose in his book, and there have been
some serious efforts to build real devices that test the theory and
have produced consistent results. So if some of us are looking to the
quantum to provide more computational power than Turing's machine, we
may either have a much more fundamental problem to examine, the
foundation of quantum mechanics, or we may be looking to false gods.

Quantum mechanics really is a theory of probability amplitudes, and its
predictions are consistent with experiments. I know of no evidence yet
that the predictions of quantum mechanics are not Turing computable. I
regard my own hopes that they might not be as romantic notions not
substantiated by science. Maybe that is why so many of us have
apparently responded to Dr. Kak's announcement as if it claims not only
the potential of quantum computation to solve artificial intelligence
problems, but the necessity.

Paul

From kak at max.ee.LSU.EDU  Mon Jan 11 17:24:12 1993
From: kak at max.ee.LSU.EDU (Dr. S. Kak)
Date: Mon, 11 Jan 93 16:24:12 CST
Subject: regarding quantum neural computer announcement
Message-ID: <9301112224.AA10868@max.ee.lsu.edu>

Let me add a couple of comments to the valuable remarks of Paul
Gleichauf.

First, outside the house that the connectionists have built,
interesting winds have started to blow. Information processing has
become a central concern of basic physics. This is evidenced by the
projected SYMPOSIUM ON THE FOUNDATIONS OF MODERN PHYSICS -- Quantum
Measurement, Irreversibility and the Physics of Information, to be held
in Cologne, June 1-5, 1993. [For information contact Peter
Mittelstaedt; email pb at thp.uni-koeln.de]

Second, since the famous experiment by Aspect et al to test Bell's
inequality in 1982, it is generally agreed that EPR correlations appear
to be of the action-at-a-distance type. Some assert that "measurements
or observations, in the sense required by quantum theory, can only be
made by conscious observers". Might the concept of "conscious observer"
as used by the qm-theorist have something to do with the conscious
observer at the back of cognitive centers? This becomes a plausible
question. One is encouraged to think that enlarging the connectionist
paradigm in different ways, so as to capture aspects of the qm
framework, might be useful. How this might be done needs to be figured
out.
In any event getting some fresh air in could do no harm.

"Should necessity of chance be considered as the cause?"
                                    -Shveta-ashvatara Upanishad

-Subhash Kak

From ttj10 at eng.cam.ac.uk  Tue Jan 12 07:01:53 1993
From: ttj10 at eng.cam.ac.uk (ttj10@eng.cam.ac.uk)
Date: Tue, 12 Jan 93 12:01:53 GMT
Subject: Technical report: real pole balancing
Message-ID: <25070.9301121201@fear.eng.cam.ac.uk>

The following technical report is available via the Cambridge
University ftp archive svr-ftp.eng.cam.ac.uk. Instructions for
retrieval from the archive follow the summary.

------------------------------------------------------------------------------

          Pole Balancing on a Real Rig using a
           Reinforcement Learning Controller

          Timothy Jervis and Frank Fallside
      Cambridge University Engineering Department
             Cambridge CB2 1PZ, England

                       Abstract

In 1983, Barto, Sutton and Anderson~\cite{Barto83} published details of
an adaptive controller which learnt to balance a simulated inverted
pendulum. This {\em reinforcement learning} controller balanced the
pendulum as a by-product of avoiding a cost signal delivered to the
controller when the pendulum fell over. This paper describes their
controller learning to balance a real inverted pendulum. As far as the
authors are aware, this is the first example of a reinforcement
learning controller being applied to a real inverted pendulum learning
in real time. The results show that the controller was able to improve
its performance as it learnt, and that the task is computationally
tractable. However, the implementation was not straightforward.
Although some of the controller's parameters were tuned automatically
by learning, some were not and had to be carefully set for successful
control. This limits the usefulness of this kind of learning controller
to small problems which are likely to be better controlled by other
means. Before a learning controller can tackle more difficult problems,
a more powerful learning scheme has to be found.

------------------------------------------------------------------------------

FTP INSTRUCTIONS

unix> ftp svr-ftp.eng.cam.ac.uk
Name: anonymous
Password: (your_userid at your_site)
ftp> cd reports
ftp> binary
ftp> get jervis_tr115.ps.Z
ftp> quit
unix> uncompress jervis_tr115.ps.Z

If "ftp svr-ftp.eng.cam.ac.uk" does not work, you might try
"ftp 129.169.24.20".

From berg at cs.albany.edu  Tue Jan 12 13:36:03 1993
From: berg at cs.albany.edu (George Berg)
Date: Tue, 12 Jan 93 13:36:03 EST
Subject: Computational Biology Postdoc
Message-ID: <9301121836.AA11587@daedalus.albany.edu>

           Postdoctoral Position in Computational Biology

A one-year postdoctoral position supported by an NSF grant is available
to study protein secondary and tertiary structure prediction using
artificial intelligence and other computational techniques. Position is
available starting in March, 1993, or later.

The successful applicant will have a strong background in the
biochemistry of protein structure. Ability to program is a must.
Experience with artificial neural networks is a definite plus.
Preferred candidates will have experience with C, UNIX, and molecular
modeling.

For further information, contact either George Berg (Department of
Computer Science) or Jacquelyn Fetrow (Department of Biological
Sciences) by electronic mail at postdoc-info at cs.albany.edu.

To apply, please send curriculum vitae and three letters of
recommendation to:
To apply, please send curriculum vitae and three letters of recommendation to: Jacquelyn Fetrow Department of Biological Sciences University at Albany 1400 Washington Avenue Albany, NY 12222 From alexis at CS.UCLA.EDU Tue Jan 12 14:22:27 1993 From: alexis at CS.UCLA.EDU (Alexis Wieland) Date: Tue, 12 Jan 93 11:22:27 -0800 Subject: RE> quantum neural computer announcement In-Reply-To: Clay Spence x3039's message of Mon, 11 Jan 93 12:00:45 EST <9301111700.AA06064@peanut.sarnoff.com> Message-ID: <9301121922.AA20375@maui.cs.ucla.edu> > Chaos and quantum mechanics are not equivalent; ... > ... > ... This kind of effect cannot be produced by a chaotic, > deterministic system of particles. However, one can simulate a quantum > system on an ordinary computer by solving Schroedinger's equation > numerically and randomly choosing measurement results with probability > given by the squared magnitude of the wave function. > ... To pick a nit: Since a correctly operating (pseudo) random number generator on a (conventional) computer *is* "produced by a chaotic, deterministic system", your assertion that you can use a computer to solve Schroedinger's equations and then select (pseudo) randomly based on the resulting probability distribution is equivalent to saying that you *can* produce this effect using a (chaotic) deterministic system. I would claim that Mike Dyer's assertion, at least at the intentionally "hand-waving" degree that it was presented, remains valid: a) it is far from clear that quantum effects are required to create "machine intelligence", but even if they are b) it is far from clear that functionally equivalent computational effects can not be generated by a Turing machine - alexis. From wray at ptolemy.arc.nasa.gov Mon Jan 11 00:59:14 1993 From: wray at ptolemy.arc.nasa.gov (Wray Buntine) Date: Sun, 10 Jan 93 21:59:14 PST Subject: IND Version 2.1 tree software available Message-ID: <9301110559.AA14415@ptolemy.arc.nasa.gov> IND Version 2.1 - creation and manipulation of decision trees from data ---------------------------------------------------------------------- A common approach to supervised classification and prediction in artificial intelligence and statistical pattern recognition is the use of decision trees. A tree is "grown" from data using a recursive partitioning algorithm to create a tree which (hopefully) has good prediction of classes on new data. Standard algorithms are CART (by Breiman, Friedman, Olshen and Stone) and Id3 and its successor C4.5 (by Quinlan). More recent techniques are Buntine's smoothing and option trees, Wallace and Patrick's MML method, and Oliver and Wallace's MML decision graphs which extend the tree representation to graphs. IND reimplements and integrates these methods. The newer methods produce more accurate class probability estimates that are important in applications like diagnosis. IND is applicable to most data sets consisting of independent instances, each described by a fixed length vector of attribute values. An attribute value may be a number, one of a set of attribute specific symbols, or omitted. One of the attributes is delegated the "target" and IND grows trees to predict the target. Prediction can then be done on new data or the decision tree printed out for inspection. IND provides a range of features and styles with convenience for the casual user as well as fine-tuning for the advanced user or those interested in research. 
Advanced features allow more extensive search, interactive control and
display of tree growing, and Bayesian and MML algorithms for tree
pruning and smoothing. These often produce more accurate class
probability estimates at the leaves. IND also comes with a
comprehensive experimental control suite.

IND consists of four basic kinds of routines: data manipulation
routines, tree generation routines, tree testing routines, and tree
display routines. The data manipulation routines are used to partition
a single large data set into smaller training and test sets. The
generation routines are used to build classifiers. The test routines
are used to evaluate classifiers and to classify data using a
classifier. And the display routines are used to display classifiers in
various formats.

IND is written in K&R C, with controlling scripts in the "csh" shell of
UNIX, and extensive UNIX man entries. It is designed to be used on any
UNIX system, although it has only been thoroughly tested on SUN
platforms. IND comes with a manual giving a guide to tree methods and
pointers to the literature, and several companion documents.

Availability
------------

IND Version 2.0 will shortly be available through NASA's COSMIC
facility. IND Version 2.1 is available strictly as unsupported
beta-test software. If you're interested in obtaining a beta-test copy,
with no obligation on your part to provide feedback, contact

        Wray Buntine
        NASA Ames Research Center
        Mail Stop 269-2
        Moffett Field, CA, 94035
        email: wray at kronos.arc.nasa.gov

From alexis at CS.UCLA.EDU  Tue Jan 12 17:40:05 1993
From: alexis at CS.UCLA.EDU (Alexis Wieland)
Date: Tue, 12 Jan 93 14:40:05 -0800
Subject: quantum neural computer announcement
In-Reply-To: Clay Spence x3039's message of Tue, 12 Jan 93 17:08:07 EST
             <9301122208.AA08223@peanut.sarnoff.com>
Message-ID: <9301122240.AA25486@maui.cs.ucla.edu>

> I stand corrected. Measurements of quantum systems are truly random,
> ...

Just an amusing side bar: it's at least a popular legend that the US
military has used little Geiger-counter-like devices to help "compute"
random numbers when they wanted to be absolutely certain that the
results couldn't have been anticipated ....

Since the direction that water spins down a drain that's located on the
equator should also be truly random (presumably the initial
perturbation from the unstable equilibrium would come from Brownian
motion in the fluid), it would seem more artistic, if less pragmatic,
for an "intelligent" computer to periodically flush a line of toilets
so situated. (Yeah, I'm a computer scientist, but I don't do hardware
:-)

Okay, okay, I'll be quiet again.

- alexis.

From stolcke at ICSI.Berkeley.EDU  Tue Jan 12 19:34:11 1993
From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke)
Date: Tue, 12 Jan 93 16:34:11 PST
Subject: YAPOHMM
Message-ID: <9301130034.AA10947@icsib30.ICSI.Berkeley.EDU>

(Yet another paper on Hidden Markov Models)

The following paper, to appear in NIPS-5, is now available by FTP from
ftp.icsi.berkeley.edu (128.32.201.7) in the file
/pub/ai/stolcke-nips5.ps.Z. Since this is only the latest in a series
of similar announcements on connectionists, I will spare you the ftp
instructions. Let me know if you don't have ftp access and want an
e-mailed copy.

-----

      Hidden Markov Model Induction by Bayesian Model Merging

              Andreas Stolcke and Stephen Omohundro

This paper describes a technique for learning both the number of states
and the topology of Hidden Markov Models from examples.
The induction process starts with the most specific model consistent with the training data and generalizes by successively merging states. Both the choice of states to merge and the stopping criterion are guided by the Bayesian posterior probability. We compare our algorithm with the Baum-Welch method of estimating fixed-size models, and find that it can induce minimal HMMs from data in cases where fixed estimation does not converge or requires redundant parameters to converge.

--Andreas

From peter at ai.iit.nrc.ca Wed Jan 13 09:38:30 1993
From: peter at ai.iit.nrc.ca (Peter Turney)
Date: Wed, 13 Jan 93 09:38:30 EST
Subject: regarding quantum neural computer announcement
Message-ID: <9301131438.AA03106@ai.iit.nrc.ca>

> Second, since the famous experiment by Aspect et al to test
> Bell's inequality in 1982 it is generally agreed that EPR
> correlations appear to be of action-at-a-distance type. Some
> assert that "measurements or observations, in the sense
> required by quantum theory, can only be made by conscious
> observers".

I am not a physicist, but it is my understanding that there is another way of looking at measurements. Instead of saying "measurements ... can only be made by conscious observers", you can talk about reversible and irreversible events. The key thing about a measurement is not whether it is made by a conscious observer, but whether it is an irreversible event. Is this not a viable alternative to dragging consciousness into quantum mechanics?

- Peter Turney

From giles at research.nj.nec.com Wed Jan 13 11:46:09 1993
From: giles at research.nj.nec.com (Lee Giles)
Date: Wed, 13 Jan 93 11:46:09 EST
Subject: NIPS-5 Deadline
Message-ID: <9301131646.AA13566@fuzzy>

REMINDER!! The deadline for Proceedings papers for NIPS-5 is January 13th. All papers postmarked on that day will be accepted!

C. Lee Giles
Publications Chair

C. Lee Giles
NEC Research Institute
4 Independence Way
Princeton, NJ 08540 USA
Internet: giles at research.nj.nec.com
UUCP: princeton!nec!giles
PHONE: (609) 951-2642
FAX: (609) 951-2482

From aboulang at BBN.COM Wed Jan 13 13:29:36 1993
From: aboulang at BBN.COM (aboulang@BBN.COM)
Date: Wed, 13 Jan 93 13:29:36 EST
Subject: RE> quantum neural computer announcement
In-Reply-To: Alexis Wieland's message of Tue, 12 Jan 93 11:22:27 -0800 <9301121922.AA20375@maui.cs.ucla.edu>
Message-ID:

    To pick a nit: Since a correctly operating (pseudo) random number
    generator on a (conventional) computer *is* "produced by a chaotic,
    deterministic system", your assertion that you can use a computer
    to solve Schroedinger's equations and then select (pseudo) randomly
    based on the resulting probability distribution is equivalent to
    saying that you *can* produce this effect using a (chaotic)
    deterministic system.

Watch for them small brittle eggs ;-). The crux of the matter is that computers (silicon or whatever) need to have access to the reals to generate non-pseudo random numbers. The non-pseudo bit is important here. There is a symbolic-dynamics view of random number generators that brings home the point that all these generators do is chew on the bits that were originally input. You can't simulate truly random choice with these. If you had access to a source of infinite algorithmic-complexity numbers (most of the reals), you would not run out of bits. (Actually, there was some interest by a fellow by the name of Tom Erber in looking at some NIST Penning-trap data for recurrences in the "telegraphic" fluorescence of the trapped ion. He did not see any.)
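To see how little a pseudo-random generator actually adds, it is worth writing one down; the whole of a linear congruential generator fits in a few lines (a sketch in Python; the particular constants are just common textbook ones and are incidental):

    def lcg(seed, a=1103515245, c=12345, m=2**31):
        """Linear congruential generator: a fixed map iterated from the seed."""
        x = seed
        while True:
            x = (a * x + c) % m
            yield x / m              # "random" numbers in [0, 1)

    g1, g2 = lcg(42), lcg(42)
    assert [next(g1) for _ in range(5)] == [next(g2) for _ in range(5)]
    # Same seed, same stream: every output is a deterministic function of
    # the seed's bits, and with only m possible states the sequence must
    # eventually cycle. No finite-state program of this kind can emit a
    # sequence of unbounded algorithmic complexity.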
There is a model of computation using real numbers that some high-powered mathematicians have developed:

    Blum, L., M. Shub, and S. Smale, "On a Theory of Computation and
    Complexity Over the Real Numbers: NP Completeness, Recursive
    Functions and Universal Machines", Bull. A.M.S. 21 (1989): 1-49.

It offers a model of computing using real numbers more powerful than a Turing Machine. See also the following:

    "Neural Networks with Real Weights: Analog Computational Complexity"
    by Siegelmann and Sontag. This is available on neuroprose
    (siegelmann.analog.ps.Z).

The problem with all of this is the plausibility of access to the reals even with analog realizations. But this gets us off into another topic. I do, however, see the Blum, Shub, and Smale work as very foundational and important.

Regards,
Albert Boulanger
aboulanger at bbn.com

From Paul_Gleichauf at B.GP.CS.CMU.EDU Wed Jan 13 14:30:35 1993
From: Paul_Gleichauf at B.GP.CS.CMU.EDU (Paul_Gleichauf@B.GP.CS.CMU.EDU)
Date: Wed, 13 Jan 93 14:30:35 EST
Subject: quantum neural computer announcement
In-Reply-To: Your message of "Tue, 12 Jan 93 14:40:05 PST." <9301122240.AA25486@maui.cs.ucla.edu>
Message-ID: <18344.726953435@B.GP.CS.CMU.EDU>

Again I am going to presume that a couple of "quantum-mechanical corrections" to previous posts are warranted in this forum. This is not an effort on my part to limit discussion, but rather a precaution to try to make sure that contributions remain scientific and germane.

I first want to re-pick Alexis' nit. There IS a distinction between chaotic simulation of PARTICLES and the numerical simulation of Schroedinger's equation, a linear partial differential equation for the WAVEFUNCTION. The quantum system is not being simulated by a chaotic system; it is the selection of a measurement result that is being randomly chosen, a sampling of a probability distribution. When one chooses to measure some of the wave properties of a quantum phenomenon, the notion of particles as the basis of a chaotic simulation breaks down.

Dr. Kak has added in his follow-on post that EPR experiments are generally agreed to be of the action-at-a-distance type. This is a loaded phrase that should not be used lightly. In EPR experiments one measures the properties of a correlated system, for example a pair of photons produced by a positron-electron annihilation, and asserts that the measurement of the polarization of one uniquely identifies the polarization of the second without the need for any further measurement. The paradoxical character becomes apparent when the potential measurements are not in the forward lightcone (causally connectable by a light signal). The problem with regarding this as action-at-a-distance is that no information (in the information-theory sense) can be transmitted using this technique. Therefore calling it action-at-a-distance, a phrase which quite artfully suggests that information is conveyed faster than light, can lead to gross misunderstandings.

Paul

From rsun at athos.cs.ua.edu Wed Jan 13 14:18:50 1993
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Wed, 13 Jan 1993 13:18:50 -0600
Subject: No subject
Message-ID: <9301131918.AA14930@athos.cs.ua.edu>

In my previous posting regarding references on symbolic processing and connectionist models, I mentioned a file FTPable from Neuroprose. The correct file name is sun.hlcbib.asc.Z (not sun.bib.Z). My apologies.
--Ron

From rohwerrj at cs.aston.ac.uk Wed Jan 13 14:21:27 1993
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Wed, 13 Jan 93 19:21:27 GMT
Subject: Quantum neural computer
Message-ID: <18100.9301131921@cs.aston.ac.uk>

> >The characteristically 'quantum' properties of quantum computers,
> >such as the ability to run a superposition of programs simultaneously
> >on a single machine, arise only if the computer is a totally isolated
> >system; ie., it exchanges not a single quantum of energy with
> >its environment. The brain fails this test pathetically.
>
> This is not correct, as I understand it: a quantum

This *is* correct, which is why Deutsch's work on quantum computers (1) draws on Bennett's dissipationless "billiard ball computer" (2, 3). The trouble is that whether or not a perturbation collapses the wavefunction, which is largely a philosophical undecidable (4), it does destroy quantum phase information unless all quantum phase information is also known for the perturbing system.

1. David Deutsch, "Quantum Theory, the Church-Turing principle, and the universal quantum computer", Proc. Royal Society (London) A400, 97-117, (1985).

2. Charles H. Bennett, IBM J. Res. Dev. 17, 525.

3. Charles H. Bennett and Rolf Landauer, "The Fundamental Physical Limits of Computation", Scientific American 253, no. 1, 38-53, (July 1985).

4. Hugh Everett, III, "'Relative State' Formulation of Quantum Mechanics", Reviews of Modern Physics 29, 454-462, (1957).

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND

Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs

From pluto at cs.UCSD.EDU Wed Jan 13 15:31:59 1993
From: pluto at cs.UCSD.EDU (Mark Plutowksi)
Date: Wed, 13 Jan 93 12:31:59 -0800
Subject: Neuroprose submission
Message-ID: <9301132031.AA04608@tournesol>

****** PAPER AVAILABLE VIA NEUROPROSE ***************************************
****** PLEASE DO NOT FORWARD TO OTHER MAILING LISTS OR BOARDS. THANK YOU. **

The following paper has been placed in the Neuroprose archives at Ohio State. The file is pluto.nips92.ps.Z. Ftp instructions follow the abstract. Only an electronic version of this paper is available. This is the paper to appear in the NIPS 5 proceedings due out later this year. If you have high interest in the extended version (in preparation), please email: pluto at cs.ucsd.edu

"Learning Mackey-Glass From 25 Examples, Plus or Minus 2"

Mark Plutowski*, Halbert White**, Garrison Cottrell*

* UCSD: Computer Science & Engineering, and the Institute for Neural Computation.
** UCSD: Department of Economics, and the Institute for Neural Computation.

ABSTRACT

We apply active exemplar selection to predicting a chaotic time series. Given a fixed set of examples, the method chooses a concise subset for training. Fitting these exemplars results in the entire set being fit as well as desired. The algorithm incorporates a method for regulating network complexity, automatically adding exemplars and hidden units as needed. Fitting examples generated from the Mackey-Glass equation with fractal dimension 2.1 to an rmse of 0.01 required about 25 exemplars and 3 to 6 hidden units. The method requires an order of magnitude fewer floating point operations than training on the entire set of examples, is significantly cheaper than two contending exemplar selection techniques, and suggests a simpler active selection technique that performs comparably.
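To give a rough feel for what "active exemplar selection" involves, here is a generic greedy sketch (Python, with caller-supplied train/predict routines; this is NOT the algorithm of the paper, which among other things also grows hidden units as needed):

    import numpy as np

    def greedy_exemplars(train, predict, X, Y, rmse_goal=0.01, max_size=50):
        """Fit the current subset, then add the currently worst-fit example,
        until the WHOLE example set is fit to the desired rmse."""
        chosen = [0]                         # seed with an arbitrary example
        while True:
            model = train(X[chosen], Y[chosen])
            sq_err = (predict(model, X) - Y) ** 2
            if np.sqrt(sq_err.mean()) <= rmse_goal or len(chosen) >= max_size:
                return model, chosen
            worst = int(sq_err.argmax())     # most poorly fit example so far
            if worst in chosen:              # re-adding it cannot help; stop
                return model, chosen
            chosen.append(worst)

The payoff claimed in the abstract comes from each training pass running over the small chosen subset rather than the full example set.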
-----------------------------------------------------
FTP INSTRUCTIONS

Either use "Getps pluto.nips92.ps.Z", or do the following:

    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password: neuron
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get pluto.nips92.ps.Z
    ftp> quit
    unix> uncompress pluto.nips92.ps.Z
    unix> lpr -s pluto.nips92.ps (or however you print postscript)

Mark E. Plutowski
Computer Science and Engineering
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0114

From rohwerrj at cs.aston.ac.uk Wed Jan 13 15:17:40 1993
From: rohwerrj at cs.aston.ac.uk (rohwerrj)
Date: Wed, 13 Jan 93 20:17:40 GMT
Subject: 2 TRs
Message-ID: <18140.9301132017@cs.aston.ac.uk>

**DO NOT FORWARD TO OTHER GROUPS**

No, it's not another posting on quantum computers, but it's almost as good: an announcement of two somewhat spacey TRs touching lightly on the mind-brain problem. The following 2 papers have been deposited in Jordan Pollack's immensely useful Neuroprose archive at Ohio State. Retrieval instructions at end of message. Hardcopy requests might be answered for cases of dire necessity.

---------------------------------------------------------------------------
rohwer.reprep.ps.Z

A REPRESENTATION OF REPRESENTATION APPLIED TO A DISCUSSION OF VARIABLE BINDING

Richard Rohwer

States or state sequences in neural network models are made to represent concepts from applications. This paper motivates, introduces and discusses a formalism for denoting such representations; a representation for representations. The formalism is illustrated by using it to discuss the representation of variable binding and inference abstractly, and then to present four specific representations. One of these is an apparently novel hybrid of phasic and tensor-product representations which retains the desirable properties of each.

---------------------------------------------------------------------------
rohwer.howmany.ps.Z

HOW MANY THOUGHTS CAN YOU THINK?

Richard Rohwer

In ordinary computer programmes, the relationship between data in a machine and the concepts it represents is defined arbitrarily by the programmer. It is argued here that the Strong AI hypothesis suggests that no such arbitrariness is possible in the relationship between brain states and mental experiences, and that this may place surprising limitations on the possible variety of mental experiences. Possible psychology experiments are sketched which aim to falsify the Strong AI hypothesis by indicating that these limits can be exceeded. It is concluded that although such experiments might be valuable, they are unlikely to succeed in this aim.

---------------------------------------------------------------------------

Retrieval instructions (the usual):

    ipc9> ftp archive.cis.ohio-state.edu
    Connected to archive.cis.ohio-state.edu.
    ftp> cd pub/neuroprose
    250 CWD command successful.
    ftp> binary
    200 Type set to I.
    ftp> get rohwer.reprep.ps.Z
    200 PORT command successful.
    150 Opening BINARY mode data connection for rohwer.reprep.ps.Z (64235 bytes).
    226 Transfer complete.
    local: rohwer.reprep.ps.Z remote: rohwer.reprep.ps.Z
    64235 bytes received in 22 seconds (2.8 Kbytes/s)
    ftp> get rohwer.howmany.ps.Z
    200 PORT command successful.
    150 Opening BINARY mode data connection for rohwer.howmany.ps.Z (46680 bytes).
    226 Transfer complete.
    local: rohwer.howmany.ps.Z remote: rohwer.howmany.ps.Z
    46680 bytes received in 32 seconds (1.4 Kbytes/s)
    ftp> quit
    221 Goodbye.
    ipc9> uncompress rohwer.reprep.ps.Z
    ipc9> uncompress rohwer.howmany.ps.Z
    ipc9>

Richard Rohwer
Dept. of Computer Science and Applied Mathematics
Aston University
Aston Triangle
Birmingham B4 7ET
ENGLAND

Tel: (44 or 0) (21) 359-3611 x4688
FAX: (44 or 0) (21) 333-6215
rohwerrj at uk.ac.aston.cs

**DO NOT FORWARD TO OTHER GROUPS**

From hinton at basser.cs.su.oz.au Thu Jan 14 17:19:57 1993
From: hinton at basser.cs.su.oz.au (Geoff Hinton)
Date: Fri, 15 Jan 1993 09:19:57 +1100
Subject: faculty opening at University of Toronto
Message-ID:

***** PLEASE DO NOT FORWARD TO OTHER BBOARDS *****

The Department of Computer Science at the University of Toronto has an opening for a tenure-track assistant professor. There will be a lot of competition for this job from all areas of computer science. No particular preference will be given to researchers in the Neural Network area. It would be helpful to be a Canadian or a Canadian landed immigrant.

The neural network group in computer science has 11 active researchers and excellent computing facilities. If you are an excellent neural network researcher and you are interested in this job, please apply as soon as possible. Send your application to:

The Chairman
Department of Computer Science
University of Toronto
10 Kings College Rd
Toronto M5S 1A4
Canada

To save time, please also send an electronic copy, including your curriculum vitae, to me at hinton at cs.su.oz.au

Geoff Hinton

From cds at sarnoff.com Tue Jan 12 17:08:07 1993
From: cds at sarnoff.com (Clay Spence x3039)
Date: Tue, 12 Jan 93 17:08:07 EST
Subject: quantum neural computer announcement
Message-ID: <9301122208.AA08223@peanut.sarnoff.com>

> Since a correctly operating (pseudo) random number generator on a
> (conventional) computer *is* "produced by a chaotic, deterministic
> system", your assertion that you can use a computer to solve
> Schroedinger's equations and then select (pseudo) randomly based on
> the resulting probability distribution is equivalent to saying that
> you *can* produce this effect using a (chaotic) deterministic system.

To pick the nit a little more: I stand corrected. Measurements of quantum systems are truly random, so unlike a Turing machine, a quantum computer could produce truly random numbers and simulated measurement results which are known to be free of peculiar correlations. True randomness might be handy, but it seems to me that assertion b ("it is far from clear that functionally equivalent computational effects cannot be generated by a Turing machine") is only slightly weakened.

In case it's not clear, I generally agree with Mike Dyer and you. The idea of a quantum computer has some appeal to me, but I don't know of any reasons to think that it would offer radically new computing capabilities.

Clay Spence

From kenm at prodigal.psych.rochester.edu Thu Jan 14 21:09:53 1993
From: kenm at prodigal.psych.rochester.edu (Ken McRae)
Date: Thu, 14 Jan 93 21:09:53 EST
Subject: correlated properties and computing word meaning
Message-ID: <9301150209.AA22710@prodigal.psych.rochester.edu>

The following paper is now available in the connectionist archive, archive.cis.ohio-state.edu (128.146.8.52), in pub/neuroprose under the name mcrae.corredprops.ps.Z

The Role of Correlated Properties in Accessing Conceptual Memory

Ken McRae
Virginia de Sa
University of Rochester, Rochester, NY

Mark S. Seidenberg
University of Southern California, Los Angeles, CA

keywords: correlated properties, conceptual memory, word meaning, connectionist models, semantic priming

ABSTRACT

A fundamental question in research on conceptual structure concerns how information is represented in memory and used in tasks such as recognizing words. The present research focused on the role of correlations among semantic properties in conceptual memory. Norms were collected for 190 entities from 10 categories. Property intercorrelations were shown to influence people's performance in both a property verification task and a short-interval semantic priming experiment. Furthermore, correlated properties were more important for biological kinds than for artifacts. A connectionist model of the computation of word meaning was implemented in which property intercorrelations developed in the course of learning. The model was used to simulate the results of the two experiments. We then tested a novel prediction derived from the model: that the intercorrelational density of a concept's properties should influence the speed with which a concept is computed. This prediction was confirmed in a final experiment. We concluded that encoded knowledge of property co-occurrences plays a prominent role in the representation and computation of word meaning.

From ken at cns.caltech.edu Fri Jan 15 07:57:50 1993
From: ken at cns.caltech.edu (Ken Miller)
Date: Fri, 15 Jan 93 04:57:50 PST
Subject: Quantum and Classical Foolishness
Message-ID: <9301151257.AA05256@zenon.cns.caltech.edu>

In response to:

-> Some assert that "measurements or observations, in the sense
-> required by quantum theory, can only be made by conscious
-> observers".

-> Might the concept of "conscious observer" as used by the
-> qm-theorist have something to do with the conscious observer at the
-> back of cognitive centers?

There is an ancient classical riddle: "When a tree falls in the forest, and no one is there to hear it, does it make a sound?" The idealist philosophers argued that unless some conscious being is around to register the event, you cannot say it has happened. The most solipsistic would say: until *I* register the event, it has not happened. This is classical foolishness. It is logically and philosophically consistent, but rather useless and pointless. Ultimately one arrives at the notion that history has not happened until you choose to read about it in the morning paper.

As Feynman points out in his lectures in discussing these issues, of course the falling tree makes a sound. A sound is a physical event, a compression wave in the air, and it leaves physical traces --- leaves that are blown off of a tree, thorns that vibrate and scratch a leaf. A sound is as physical as the fallen tree itself. So unless you hold to the solipsistic notion that the tree does not fall until you wander by and see it on the ground, then there is no problem about the sound either.

Quantum mechanics adds many new puzzles to science, but this is not one of them. Quantum foolishness is the same solipsistic foolishness as classical foolishness; there is no new quantum effect here. Without going into a course on the subject: in quantum mechanics, we cannot describe a continuous evolution in time in terms of classical variables.
Rather, there is a quantum state that is a certain kind of mixture in terms of classical variables, and then at some point there is a measurement, which just means "something happens", B happens rather than C, and the quantum state has accordingly "collapsed". The key point where the foolishness arises is in defining when a measurement has occurred --- when "something has happened." The solipsists want to say, "well, you really don't know which outcome happened until a conscious observer sees it, so quantum mechanics requires consciousness". And some very good physicists have unfortunately subscribed to this (but not Feynman --- see the same portion of his lectures where he talks about the tree falling), just as some very good Greek philosophers talked themselves into solipsism. This statement about quantum mechanics is no different from saying "you really don't know whether the tree falls until a conscious observer sees it, so classical mechanics requires consciousness".

The point is, a quantum measurement occurs when some *classical physical event* has occurred --- some dial on your meter goes up or down, Schroedinger's cat lives or dies --- and so knowing the outcome is no different in status from knowing about the sound wave of a falling tree. How do you know when this event has occurred? This is a classical problem, the same problem the ancient solipsists screwed around with. And any sensible physicist would say: it happens when it happens, because it's a classical physical event that leaves traces and tracks of its existence whether you look at those traces or not.

The mystery and weirdness of quantum mechanics involves understanding how classical physical events emerge out of the quantum world, how the quantum world "collapses" to the classical. But this has nothing to do with consciousness. Consciousness only enters in when trying to figure out when you know that this *classical* physical event has occurred. And that's classical foolishness.

So, what does all this have to do with connectionists? Nothing. So I propose we drop the subject of quantum computers until someone has a specific architecture to propose.

Ken

From ellen at sol.siemens.com Fri Jan 15 09:14:40 1993
From: ellen at sol.siemens.com (Ellen Voorhees)
Date: Fri, 15 Jan 93 09:14:40 EST
Subject: Job announcement
Message-ID: <9301151414.AA02998@sol.siemens.com>

The learning department of Siemens Corporate Research in Princeton, New Jersey is looking to hire a researcher interested in statistical and knowledge-based methods for natural language processing, text retrieval, and text categorization. The position requires either a PhD (preferred) or a master's degree with some experience in an appropriate field. The main responsibility of the successful candidate will be to conduct research in automatic information retrieval and (statistical) natural language processing. Tasks include setting up and running experiments, programming, etc. People interested in the position should send a PLAIN ASCII resume to ellen at learning.siemens.com or a hardcopy of the resume to:

Human Services Department EV
Siemens Corporate Research, Inc.
755 College Road East
Princeton, NJ 08540

Siemens is an equal opportunity employer.

Ellen Voorhees
Member of Technical Staff
Siemens Corporate Research, Inc.

From kak at max.ee.LSU.EDU Fri Jan 15 12:12:37 1993
From: kak at max.ee.LSU.EDU (Dr. S. Kak)
Date: Fri, 15 Jan 93 11:12:37 CST
Subject: Symposium on Aliens, Apes, and AI
Message-ID: <9301151712.AA18830@max.ee.lsu.edu>

A symposium on "Aliens, Apes, and AI: Who is a person in the postmodern world?" will be held in Huntsville, AL on Feb 13, 1993. The symposium is being organized by profs Lyn Miles and Stephen Harper of U. of Tennessee, Chattanooga. For further information contact FAX 615-755-4279; BITNET: SHARPER at UTCVM; LMILES at UTCVM

-------------------------------------------------------------------

My paper at the symposium is described below:

----------------------------------------------------------------

Symposium on Aliens, Apes, and Artificial Intelligence, The University of Alabama in Huntsville, February 13, 1993.

---------------------------------------------------------------

Technical Report 92-12 ECE-LSU December 1, 1992

Reflections In Clouded Mirrors: Selfhood In Animals And Machines

by Subhash Kak

Copyright
Department of Electrical & Computer Engineering
Louisiana State University
Baton Rouge, LA 70803-5901

Abstract

This essay is a tapestry woven out of three threads: Vedic theory of consciousness, quantum mechanics, and neural networks. The ancient Vedic tradition of philosophy of consciousness, which goes back to at least 2000 BCE, posits that analytical approaches to defining awareness or personhood end up in paradox. In this tradition one views awareness in terms of the reflection that the hardware of the brain provides to an underlying illuminating or awareness principle called the self. This tradition allows one to separate questions of the tools of awareness, such as eyes and ears and the mind, from the person who obtains this awareness. This tradition will be reviewed, and issues related to its application to an understanding of personhood in animals and machines will be taken up. Parallels between the insights of the Vedic tradition and quantum mechanics will be sketched. The observer plays a fundamental role in the measurement problem of quantum mechanics, and several scientists have claimed that physics will remain incomplete unless consciousness is incorporated into it. We will also consider the perspective of AI that intelligence emanates from the complexity of the neural hardware of the brain. This will take us to the question of what it is that separates humans from apes and other animals and from machines. We will address the question of whether machines will ever be endowed with self-awareness.

--------------------------------------------------------------

From dyer at CS.UCLA.EDU Fri Jan 15 12:41:57 1993
From: dyer at CS.UCLA.EDU (Dr Michael G Dyer)
Date: Fri, 15 Jan 93 09:41:57 PST
Subject: true randomness
Message-ID: <930115.174157z.01975.dyer@lanai.cs.ucla.edu>

Adding a Geiger counter to a Turing Machine may seem to make a machine more powerful, but it is no more powerful than, say, adding a plane to a TM (i.e. having the TM control the plane's flight). After all, a plain TM can't fly.

There are many chaotic patterns that appear random (until one discovers the underlying non-linear, deterministic equations). Although at this point it appears that the universe is fundamentally probabilistic, it seems possible to me that there could exist a deterministic universe in which there could exist measurers (i.e. scientists) who would be confused into believing (for a time) that their universe must be probabilistic (based on the granularity and methods of their current measurement technology, or theoretical constructs, etc.)
In such a case, they would be (falsely) believing that adding a "random" physical process to their TM would produce something that no other TM could produce (i.e. via finite algorithm specification).

From dyer at CS.UCLA.EDU Fri Jan 15 13:08:24 1993
From: dyer at CS.UCLA.EDU (Dr Michael G Dyer)
Date: Fri, 15 Jan 93 10:08:24 PST
Subject: real numbers and TMs
Message-ID: <930115.180824z.03349.dyer@lanai.cs.ucla.edu>

Albert Boulanger, you said:

    There is a model of computation using real numbers that some
    high-powered mathematicians have developed: "Blum, L., M. Shub, and
    S. Smale, "On a Theory of Computation and Complexity Over the Real
    Numbers: NP Completeness, Recursive Functions and Universal
    Machines", Bull. A.M.S. 21 (1989): 1-49. It offers a model of
    computing using real numbers more powerful than a Turing Machine.

=======

But is the physics of our universe only modelable in terms of real numbers? e.g. is there actually an infinite amount of ever smaller space between any two neighboring pieces of close-together space? The quantum approach seems to say "no". Also, while there may be an infinite number of digits in a real number, for us to find out that the universe requires reals would require us to spend an infinite amount of time reading off the results of one of our measurements. (Is this right?)

Consider cellular automaton models, where the cell is the smallest quantum of space itself. In such models, there actually is no "motion". Motion is just an illusion -- the result of similar-looking pattern configurations being reconstructed near the original pattern, in the next state of the universe, based on the laws of how cell states interact. I have not come upon any proof that our universe could not be *some sort of* cellular system (perhaps with some bizarre topography and bizarre, non-local "neighborhood" function). In such a case, (a) it would be Turing computable and (b) real numbers would be merely a useful fiction, used by the measurers locked within that universe, but a fiction nonetheless, and they would never be able to harness this "extra-TM" power (other than in the sense that a TM attached to, say, a vehicle can do more, e.g. it can move through space, which a TM with just a tape could not (unless we are talking about THAT Turing machine simulating ITS vehicle on its own tape :-))

-- Michael Dyer

From cds at sarnoff.com Fri Jan 15 14:39:21 1993
From: cds at sarnoff.com (Clay Spence x3039)
Date: Fri, 15 Jan 93 14:39:21 EST
Subject: true randomness
Message-ID: <9301151939.AA12361@peanut>

Alexis and Mike,

Mike said:

> Adding a Geiger counter to a Turing Machine may seem to make a machine
> more powerful, but ...

I did not mean to imply that adding a true-random generator would make a Turing Machine much more powerful, although for certain applications it would be helpful. I read recently of a physicist who wanted to simulate the three-dimensional Ising model, and used a newer pseudo-random number generator which was supposed to be better than some others. To test his program he tried it on the two-dimensional Ising model, for which the exact statistics are known. The simulation gave the wrong answers. After searching for bugs, he switched to an older pseudo-random number generator and the simulations produced answers consistent with the exact statistics. Of course, it is always possible that he missed a bug in the code which implemented the newer generator.

> ...it seems possible to me that there could exist a deterministic
> universe in which there could exist measurers (i.e. scientists) who
> would be confused into believing (for a time) that their universe
> must be probabilistic ...
As Paul Gleichauf pointed out, the non-local effects in quantum mechanics forbid local hidden variables, so a deterministic interpretation of quantum mechanics must be somewhat odd. David Bohm (I think that's the right Bohm) invented one which few people like, but it's fairly simple and as far as I know makes predictions which are identical to those of quantum mechanics. The people who prefer more or less conventional interpretations which involve randomness do so because this seems simpler, and so Occam's razor favors it as the preferable hypothesis. You are always free to ignore Occam (he frequently picks the wrong hypothesis), but it isn't clear that anyone is confused.

And Alexis said:

> Since the direction that water spins down a drain that's located on the
> equator should also be truly random (presumably the initial perturbation
> from the unstable equilibrium would come from Brownian motion in the
> fluid), it would seem more artistic, if less pragmatic, for an "intel-
> ligent" computer to periodically flush a line of toilets so situated.
> (Yeah, I'm a computer scientist, but I don't do hardware :-)

It is true that one can get true randomness from a chaotic deterministic system with infinite state, unlike from a pseudo-random number generator. Rolling dice should work OK if done carefully. (I gather Turing machines don't have infinite state information?) As for toilets flushing, which way the water spins depends more on the structure of the toilet. Even in New Jersey at about 40 degrees north, in non-rigorous experiments I can get the water in my bathtub to go either way. I could do it even farther north in Goettingen, Germany.

About Alexis' summary of Mike's point:

> b) it is far from clear that functionally equivalent computational
> effects cannot be generated by a Turing machine

I don't think this is relevant. Neural nets can be simulated on a Turing machine, and most people (or at least some people) don't think it's a waste of time to study them. The problem is assertion a), or some modification of it. I haven't yet heard an argument for quantum computers that I found convincing.

Clay

From barto at cs.umass.edu Fri Jan 15 17:20:30 1993
From: barto at cs.umass.edu (Andy Barto)
Date: Fri, 15 January 1993 17:20:30 -0500
Subject: Real Pole-Balancing
Message-ID:

Jervis and Fallside recently posted an abstract on real pole-balancing that prompted me to write this. They have implemented the learning algorithm that we wrote about in 1983 (Barto, Sutton, and Anderson, IEEE Trans. Systems Man and Cybern. 13, pp. 834-846) on a real pole-balancing system. I was lucky enough to see their system work and was quite impressed. They indicate that they had some trouble getting it to work, and they conclude the abstract with the following statement, which I would like to discuss:

"This limits the usefulness of this kind of learning controller to small problems which are likely to be better controlled by other means. Before a learning controller can tackle more difficult problems, a more powerful learning scheme has to be found."

Much progress has been made on this class of learning algorithms since 1983, and we now have a much better understanding of them and their potential. I certainly agree that the pole-balancing problem is not a very good candidate for these algorithms.
We artificially limited the information available to the controller, in effect turning it into a problem that is harder than pole-balancing really is (as we indicated in our 1983 paper). We now understand that learning algorithm, and related ones developed by us and many others (e.g., Watkins, Werbos), as methods for approximating solutions to optimal control problems by means of approximating dynamic programming (DP). A short paper by Rich Sutton, Ron Williams, and me appeared in Control Systems Magazine (vol. 12, April 1992) that describes this perspective.

Although much work has been done to address the problems that Jervis and Fallside encountered in specifying a suitable state representation, my real point is that these algorithms seem very well suited for some classes of problems. There are more scales by which to measure problems than "small" and "large". Specifically, we think that these methods (not necessarily the old pole-balancing system, but more recent versions of the same approach) make good computational sense for stochastic optimal control problems with large state sets. You are probably familiar with Tesauro's TD-gammon system, which uses a system similar to the pole-balancer to learn how to play remarkably good backgammon. This is a kind of stochastic optimal control problem (admittedly not one of great engineering utility), and the conventional DP solution method is infeasible due to the very large state set. Through many games of self-play, the TD-gammon system was able to focus computation onto relevant regions of the state set, and the multi-layer network stored the information gained in a compact form. We think this can be a big advantage in certain classes of stochastic optimal control problems, and we are currently working to provide more evidence for this, as well as to develop more theory. Pole-balancing, in the form in which it is relatively easy to solve, is a regulation problem, not a stochastic optimal control problem of the kind that approximate DP methods might be good at.

In summary, I agree wholeheartedly with Jervis and Fallside that pole-balancing can be better achieved by other means. But I think the conclusion that the kind of approximate DP methods (of which our 1983 system was a primitive example) are only suited to small problems is not warranted. In fact, we think they are well suited to some optimal control problems that are so big and nonlinear that conventional control design techniques are not computationally feasible.

A. Barto

From aboulang at BBN.COM Sat Jan 16 17:32:50 1993
From: aboulang at BBN.COM (aboulang@BBN.COM)
Date: Sat, 16 Jan 93 17:32:50 EST
Subject: real numbers and TMs
In-Reply-To: Dr Michael G Dyer's message of Fri, 15 Jan 93 10:08:24 PST <930115.180824z.03349.dyer@lanai.cs.ucla.edu>
Message-ID:

    Date: Fri, 15 Jan 93 10:08:24 PST
    From: Dr Michael G Dyer

    Albert Boulanger, you said:

        There is a model of computation using real numbers that some
        high-powered mathematicians have developed: "Blum, L., M. Shub,
        and S. Smale, "On a Theory of Computation and Complexity Over
        the Real Numbers: NP Completeness, Recursive Functions and
        Universal Machines", Bull. A.M.S. 21 (1989): 1-49. It offers a
        model of computing using real numbers more powerful than a
        Turing Machine.

    =======

    But is the physics of our universe only modelable in terms of real
    numbers? e.g. is there actually an infinite amount of ever smaller
    space between any two neighboring pieces of close-together space?
    The quantum approach seems to say "no".

Good.
This allows me to perhaps elevate this discussion somewhat. I have no ready answers, but allow me to outline the lines of my thinking for the past few years.

As I mentioned in my first posting, expecting nature to have access to the reals is a question all in itself. If nature is quantized somehow, how can it gain access to a source of infinite algorithmic complexity? Note that it does not need to represent a real number explicitly -- just some implicit mechanism may be good enough. I propose that *open* computing with a heat bath may be just one way.

(I want to mention, because I believe that the questions are actually closely related, that no one has really solved the thermodynamic arrow-of-time question -- ie all dynamics, even QCD, have an outstanding problem -- they do not explain why we go through time in one direction. Where does the irreversibility of macroscopic systems come from? There was a foundational paper by Michael Mackey (of Mackey-Glass eqn fame) that is a careful delineation of possible mechanisms that can answer the arrow-of-time question: "The Dynamic Origin of Increasing Entropy", Michael C. Mackey, Reviews of Modern Physics, Vol 61, No. 4, October 1989, 981-1015. He proposes two viable mechanisms: trivial coarse graining, "taking the trace of a larger dynamics", and coupling the system with a heat bath. The last option is what interests me. {One reason for liking the latter is that, at the QM level, *local* hidden variable mechanisms are ruled out.} Let me state another outstanding puzzle: physical chaos may be a big "con game" by nature, since the underlying QM system is described by a linear (infinite-dimensional) system. However, the work on QM chaos shows us that nature does a good job at imitating chaos. Here is a QM chaos ref: "Quantum Chaos", Roderick Jensen, Nature, Vol 355, 23 Jan 1992, 311-317.)

I posit, based on my investigation of asynchronous computation, that the answer is the notion of computing with an external heat bath. The heat bath, I believe, is *the* source for high algorithmic-complexity numbers in physical computing systems, and why there may in fact be a legitimate physical chaos (one needs access to the reals for true chaos) even if QM can NOT hack it. Think of the heat bath as an external resource of good-quality bits. One may ask about the heat bath itself. Where does it come from? In my mind it is in large part due to the asynchronous nature of concurrent events in the real world. (Another contributor is the "nondeterminism" of QM. This nondeterminism thing of QM is a whole other story which I don't want to get into right now.) This of course is a debatable position.

    Consider cellular automaton models, where the cell is the smallest
    quantum of space itself. .... I have not come upon any proof that
    our universe could not be *some sort of* cellular system (perhaps
    with some bizarre topography and bizarre, non-local "neighborhood"
    function).

I love these models too, but let's think about a Fredkin-type of model with *asynchronous* dynamics. In such a system there would be a local sense of time at each cell, and any global sense of time is an emergent property of the macroscopic system. I believe that it may be possible to represent infinite algorithmic-complexity numbers via the timing relationships of the cells. This is an implicit type of representation I was alluding to above.
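A toy rendering of that idea, for concreteness (a Python sketch; the choice of rule 110 as the local rule is arbitrary): each update event fires one randomly chosen cell on a ring and advances only that cell's local clock, so any global time is at best an emergent, statistical notion.

    import random

    def fire(state, clocks, rule):
        """One asynchronous event: a single random cell updates from its
        ring neighbors; only its own local clock advances."""
        i = random.randrange(len(state))
        l, r = state[i - 1], state[(i + 1) % len(state)]
        state[i] = rule(l, state[i], r)
        clocks[i] += 1

    rule110 = lambda l, c, r: (110 >> (l * 4 + c * 2 + r)) & 1
    state = [random.randint(0, 1) for _ in range(16)]
    clocks = [0] * 16
    for _ in range(200):
        fire(state, clocks, rule110)
    # 'clocks' now records how unevenly local time advanced across cells.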
As I have said, this is an outline of my thoughts on the subject, which I hope illustrates to all that dismissing reals from nature, and hence dismissing a computational model like Blum, Shub, and Smale's, is NOT a trivial subject.

****************
BTW, asynchronous dynamics in artificial neural networks is little studied. There has been work by Jacob Barhen at JPL and John Tsitsiklis at MIT on it, but this has been little more than applying the "chaotic relaxation" results for fixed-point-type problems from the parallel numerical analysis literature. Go for it!
****************

MIMD for the MIND,
Albert Boulanger
aboulanger at bbn.com

From rsun at athos.cs.ua.edu Fri Jan 15 14:17:44 1993
From: rsun at athos.cs.ua.edu (Ron Sun)
Date: Fri, 15 Jan 1993 13:17:44 -0600
Subject: No subject
Message-ID: <9301151917.AA16349@athos.cs.ua.edu>

Paper available:

--------------------------------------------

title: STRUCTURING KNOWLEDGE IN VAGUE DOMAINS

Ron Sun
Department of Computer Science
College of Engineering
The University of Alabama
Tuscaloosa, AL 35487
rsun at cs.ua.edu

--------------------------------------------

to appear in: IEEE Transactions on Knowledge and Data Engineering

---------------------------------------------

In this paper, we propose a model for structuring knowledge in vague and continuous domains where similarity plays a role in coming up with plausible inferences. The model consists of two levels, one of which is an inference network with nodes representing concepts and links representing rules connecting concepts, and the other is a microfeature-based replica of the first level. Based on the interaction between the concept nodes and microfeature nodes in the model, inferences are facilitated and knowledge not explicitly encoded in a system can be deduced via mixed similarity matching and rule application. The model is able to take account of many important desiderata of plausible reasoning, and produces sensible conclusions accordingly. Examples will be presented to illustrate the utility of the model in structuring knowledge to enable useful inferences to be carried out in several domains.

----------------------------------------------------------------

* It is FTPable from archive.cis.ohio-state.edu in: pub/neuroprose (Courtesy of Jordan Pollack)
* No hardcopy available.
* FTP procedure:

    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password:
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get sun.vague.ps.Z
    ftp> quit
    unix> uncompress sun.vague.ps.Z
    unix> lpr sun.vague.ps (or however you print postscript)

From gert at cco.caltech.edu Fri Jan 15 17:03:06 1993
From: gert at cco.caltech.edu (Gert Cauwenberghs)
Date: Fri, 15 Jan 93 14:03:06 PST
Subject: paper announcement
Message-ID: <9301152203.AA02448@punisher.caltech.edu>

A Fast Stochastic Error-Descent Algorithm for Supervised Learning and Optimization

To appear in the NIPS 5 proceedings (Morgan Kaufmann, 1993).

Gert Cauwenberghs
California Institute of Technology
Mail-Code 128-95
Pasadena, CA 91125
E-mail: gert at cco.caltech.edu

Abstract

A parallel stochastic algorithm is investigated for error-descent learning and optimization in deterministic networks of arbitrary topology. No {\em explicit} information about internal network structure is needed. The method is based on the model-free distributed learning mechanism of Dembo and Kailath. A modified parameter update rule is proposed by which each individual parameter vector perturbation contributes a decrease in error.
A substantially faster learning speed is hence allowed. Furthermore, the modified algorithm supports learning time-varying features in dynamical networks. We analyze the convergence and scaling properties of the algorithm, and present simulation results for dynamic trajectory learning in recurrent networks.

Now available in the neuroprose archive: archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory, under the file name cauwenberghs.nips92.ps.Z (compressed PostScript).

From pluto at cs.UCSD.EDU Sun Jan 17 18:43:03 1993
From: pluto at cs.UCSD.EDU (Mark Plutowksi)
Date: Sun, 17 Jan 93 15:43:03 -0800
Subject: Cross-Val: Summary of Lit Survey and Request for References
Message-ID: <9301172343.AA15924@beowulf>

Hello,

This is a follow-on to recent postings on using cross-validation to assess neural network models. It is a request for further references, after an exhausting literature survey of my own which failed to find the results I seek. A summary of my findings follows the request, followed by an informative response from Grace Wahba, and finally, a list of the references I looked at.

Thanks for any leads or tips,

=================
== Mark Plutowski
pluto at cs.ucsd.edu
Computer Science and Engineering 0114
University of California, San Diego
La Jolla, California, USA.

THE REQUEST:
------------

Do you know of convergence/consistency results for justifying cross-validatory model assessment for nonlinear compositions of basis functions, such as the usual sigmoided feedforward network?

SUMMARY OF MY LIT SURVEY:
-------------------------

While the use of cross-validation to assess nonlinear neural network models CAN be justified to a certain degree (e.g., [Stone 76,77]), the really nice theoretical results exist for other estimators, e.g., kernel density, histograms, linear models, and splines (see references below). These results are not directly applicable to neural nets. They all exploit properties of the particular estimators which are not shared by neural networks in general. In short, the proofs for linear models exploit linear reductions, and the other (nonlinear) estimators for which optimality results have been published have the property that deleting a single example has negligible effect on the estimate outside a bounded region surrounding the example (e.g., kernel density estimators and splines). In comparison, a single example can affect every weight of a neural network - deleting it can have a global effect on the estimate.

GRACE WAHBA SAYS:
-----------------

Thanks to Grace Wahba for her informative response to my request to her for information after I was unable to get hold of a copy of her relevant book:

============================================================

From wahba at stat.wisc.edu Wed Jan 13 23:32:29 1993
From: wahba at stat.wisc.edu (Grace Wahba)
Date: Wed, 13 Jan 93 22:32:29 -0600
Subject: choose your own randomized regularizer
Message-ID: <9301140432.AA22884@hera.stat.wisc.edu>

Very interesting request!! I'm convinced (as you seem to be) that some interesting results are to be obtained using CV or GCV in the context of neural nets. In my book are brief discussions of how GCV can be used in certain nonlinear inverse problems (Sect 8.3), and when one is doing penalized likelihood with non-Gaussian data (Sect 9.2). (No theory is given, however.) Finbarr O'Sullivan (finbarr at stat.washington.edu) has further results on problems like those in Sect 8.3.
However, I have not seen any theoretical results in the context of sigmoidal feedforward networks (but that sure would be interesting!!). However, if you make a local quadratic approximation to an optimization problem to get a local linear approximation to the influence operator (which plays the role of A(\lambda)), then you have to decide where you are going to take your derivatives. In my book on page 113 (equation (9.2.19)) I make a suggestion as to where to take the derivatives, but I later got convinced that that was not the best way to do it. Chong Gu, `Cross-Validating Non-Gaussian Data', J. Computational and Graphical Statistics 1, 169-179, June 1992, has a discussion of what he (and I) believe is a better way, in that context. That context doesn't look at all like neural nets; I only mention this in case you get into some proofs in the neural network context - in that event I think you may have to worry about where you differentiate, and Gu's arguments may be valid more generally.

As far as missing any theoretical result due to not having my book: the only theoretical cross-validation result discussed in any detail is that in Craven and Wahba (1979), which has been superseded by the work of Li, Utreras and Andrews. As for circulating your request to the net, do go right ahead - I will be very interested in any answers you get!!

\bibitem[Wahba 1990] Wahba, Grace. 1990. "Spline Models for Observational Data", v. 59 in the CBMS-NSF Regional Conference Series in Applied Mathematics, SIAM, Philadelphia, PA, March 1990. Softcover, 169 pages, bibliography, author index. ISBN 0-89871-244-0.

ORDER INFO FOR WAHBA 1990:
==========================

List Price $24.75, SIAM or CBMS* Member Price $19.80 (Domestic 4th class postage free, UPS or Air extra). May be ordered from SIAM by mail, electronic mail, or phone:

SIAM
P. O. Box 7260
Philadelphia, PA 19101-7260 USA
service at siam.org
Toll-Free 1-800-447-7426 (8:30-4:45 Eastern Standard Time, the US only)
Regular phone: (215) 382-9800
FAX: (215) 386-7999

May be ordered on American Express, Visa or Mastercard, or paid by check or money order in US dollars, or may be billed (extra charge). CBMS member organizations include AMATC, AMS, ASA, ASL, ASSM, IMS, MAA, NAM, NCSM, ORSA, SOA and TIMS.

============================================================

REFERENCES:
===========

\bibitem[Li 86] Li, Ker-Chau. 1986. ``Asymptotic optimality of $C_{L}$ and generalized cross-validation in ridge regression with application to spline smoothing.'' {\em The Annals of Statistics}. {\bf 14}, 3, 1101-1112.

\bibitem[Li 87] Li, Ker-Chau. 1987. ``Asymptotic optimality for $C_{p}$, $C_{L}$, cross-validation, and generalized cross-validation: discrete index set.'' {\em The Annals of Statistics}. {\bf 15}, 3, 958-975.

\bibitem[Utreras 87] Utreras, Florencio I. 1987. ``On generalized cross-validation for multivariate smoothing spline functions.'' {\em SIAM J. Sci. Stat. Comput.} {\bf 8}, 4, July 1987.

\bibitem[Andrews 91] Andrews, Donald W. K. 1991. ``Asymptotic optimality of generalized $C_{L}$, cross-validation, and generalized cross-validation in regression with heteroskedastic errors.'' {\em Journal of Econometrics}. {\bf 47} (1991) 359-377. North-Holland.

\bibitem[Bowman 80] Bowman, Adrian W. 1980. ``A note on consistency of the kernel method for the analysis of categorical data.'' {\em Biometrika} (1980), {\bf 67}, 3, pp. 682-4.

\bibitem[Hall 83] Hall, Peter. 1983.
``Large sample optimality of least squares cross-validation in density estimation.'' {\em The Annals of Statistics}. {\bf 11}, 4, 1156-1174.

\bibitem[Stone 84] Stone, Charles J. 1984. ``An asymptotically optimal window selection rule for kernel density estimates.'' {\em The Annals of Statistics}. {\bf 12}, 4, 1285-1297.

\bibitem[Stone 59] Stone, M. 1959. ``Application of a measure of information to the design and comparison of regression experiments.'' {\em Annals Math. Stat.} {\bf 30}, 55-69.

\bibitem[Marron 87] Marron, M. 1987. ``A comparison of cross-validation techniques in density estimation.'' {\em The Annals of Statistics}. {\bf 15}, 1, 152-162.

\bibitem[Bowman et al. 84] Bowman, Adrian W., Peter Hall, D. M. Titterington. 1984. ``Cross-validation in nonparametric estimation of probabilities and probability densities.'' {\em Biometrika} (1984), {\bf 71}, 2, pp. 341-51.

\bibitem[Bowman 84] Bowman, Adrian W. 1984. ``An alternative method of cross-validation for the smoothing of density estimates.'' {\em Biometrika} (1984), {\bf 71}, 2, pp. 353-60.

\bibitem[Stone 77] Stone, M. 1977. ``An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion.'' {\em J. Roy. Stat. Soc. Ser B}, {\bf 39}, 1, 44-47.

\bibitem[Stone 76] Stone, M. 1976. "Asymptotics for and against cross-validation" ??

From edelman at wisdom.weizmann.ac.il Sun Jan 17 02:08:23 1993
From: edelman at wisdom.weizmann.ac.il (Edelman Shimon)
Date: Sun, 17 Jan 93 09:08:23 +0200
Subject: Popper on quantum theory
In-Reply-To: "Dr. S. Kak"'s message of Fri, 15 Jan 93 11:12:37 CST <9301151712.AA18830@max.ee.lsu.edu>
Message-ID: <9301170708.AA04526@wisdom.weizmann.ac.il>

From john at cs.uow.edu.au Mon Jan 18 16:27:14 1993
From: john at cs.uow.edu.au (John Fulcher)
Date: Mon, 18 Jan 93 16:27:14 EST
Subject: CALL FOR PAPERS - ANN STANDARDS
Message-ID: <199301180527.AA16294@wraith.cs.uow.edu.au>

CALL FOR PAPERS - ANN STANDARDS
COMPUTER STANDARDS & INTERFACES

For some time now, there has been a need to consolidate and formalise the efforts of researchers in the Artificial Neural Network field. The publishers of this North-Holland journal have deemed it appropriate to devote a forthcoming special issue of Computer Standards & Interfaces to ANN standards, under the guest editorship of John Fulcher, University of Wollongong, Australia.

We already have the cooperation of the IEEE/NCC Standards Committee, but are also interested in submissions regarding less formal, de facto "standards". This could range from established, "standard" techniques in various application areas (vision, robotics, speech, VLSI etc.) to ANN techniques generally (such as the backpropagation algorithm & its [numerous] variants, say). Accordingly, survey or review articles would be particularly welcome.

If you are interested in submitting a paper for consideration, you will need to send three copies (in either hard copy or electronic form) by March 31st, 1993 to:

John Fulcher
Department of Computer Science
University of Wollongong
Northfields Avenue
Wollongong NSW 2522, Australia

fax: +61 42 213262
email: john at cs.uow.edu.au.oz

From mitsu at netcom.com Mon Jan 18 18:52:15 1993
From: mitsu at netcom.com (Mitsu Hadeishi)
Date: Mon, 18 Jan 93 15:52:15 -0800
Subject: Popper on quantum theory
Message-ID: <9301182352.AA15863@netcom3.netcom.com>

>Karl R. Popper
>Quantum Theory and the Schism in Physics
>from the Postscript to "The Logic of Scientific Discovery"
>Edited by W. W. Bartley
>Unwin Hyman: London, 1982

In this volume, Karl Popper essentially states that he believes the Bell inequality will fail to be demonstrated experimentally. E.g., he thinks the Aspect experiment would have failed (of course, now the Aspect experiment has been called into question, but my guess is that if it were repeated more rigorously the correlations would still appear). All of his analysis pretty much rests on this assumption, which is most likely false.

Mitsu Hadeishi
General Partner, Open Mind
mitsu at netcom.com
mitsu at well.sf.ca.us

From greene at iitmax.acc.iit.edu Mon Jan 18 22:13:51 1993
From: greene at iitmax.acc.iit.edu (Greene)
Date: Mon, 18 Jan 93 21:13:51 -0600
Subject: another Dognitive Science seminar
Message-ID: <9301190313.AA19518@iitmax.acc.iit.edu>

Long-range synchronizations have long been noted in the nervous system of the dog. A Russian reprint I stashed somewhere in my files in the early '70s (and which doggedly rebuffs my efforts to unearth it) concerned synchronizations between events in the dog's visual system and events in the dog's lower bowel. It is simply a case of What the Dog's Tectum tells the Dog's Rectum.

From sg at corwin.CCS.Northeastern.EDU Tue Jan 19 11:59:10 1993
From: sg at corwin.CCS.Northeastern.EDU (steve gallant)
Date: Tue, 19 Jan 1993 11:59:10 -0500
Subject: New Book: Neural Network Learning ...
Message-ID: <199301191659.AA08462@corwin.ccs.northeastern.edu>

NEURAL NETWORK LEARNING
And Expert Systems

by Steve Gallant

The book is intended as a text, reference, and a collection of some of my work.

CONTENTS

PART I: Basics

1 Introduction and Important Definitions
  1.1 Why Connectionist Models?  1.2 The Structure of Connectionist Models  1.3 Two Fundamental Models: Multi-Layer Perceptrons and Backpropagation Networks  1.4 Gradient Descent  1.5 Historic and Bibliographic Notes  1.6 Exercises  1.7 Programming Project

2 Representation Issues
  2.1 Representing Boolean Functions  2.2 Distributed Representations  2.3 Feature Spaces and ISA Relations  2.4 Representing Real-Valued Functions  2.5 Example: Taxtime!
  2.6 Exercises
  2.7 Programming Projects

PART II: Learning in Single Layer Models

3 Perceptron Learning and the Pocket Algorithm
  3.1 Introduction
  3.2 Perceptron Learning for Separable Sets of Training Examples
  3.3 The Pocket Algorithm for Non-separable Sets of Training Examples
  3.4 Khachiyan's Linear Programming Algorithm
  3.5 Exercises
  3.6 Programming Projects
4 Winner-Take-All Groups or Linear Machines
  4.1 Introduction
  4.2 Generalizes Single-Cell Models
  4.3 Perceptron Learning for Winner-Take-All Groups
  4.4 The Pocket Algorithm for Winner-Take-All Groups
  4.5 Kessler's Construction, Perceptron Cycling, and the Pocket Algorithm Proof
  4.6 Independent Training
  4.7 Exercises
  4.8 Programming Projects
5 Autoassociators and One-Shot Learning
  5.1 Introduction
  5.2 Linear Autoassociators and the Outer Product Training Rule
  5.3 Anderson's BSB Model
  5.4 Hopfield's Model
  5.5 The Traveling Salesman Problem
  5.6 The Cohen-Grossberg Theorem
  5.7 Kanerva's Model
  5.8 Autoassociative Filtering for Feed-Forward Networks
  5.9 Concluding Remarks
  5.10 Exercises
  5.11 Programming Projects
6 Mean Squared Error (MSE) Algorithms
  6.1 Motivation
  6.2 MSE Approximations
  6.3 The Widrow-Hoff Rule or LMS Algorithm
  6.4 ADALINE
  6.5 Adaptive noise cancellation
  6.6 Decision-directed learning
  6.7 Exercises
  6.8 Programming Projects
7 Unsupervised Learning
  7.1 Introduction
  7.2 k-Means Clustering
  7.3 Topology Preserving Maps
  7.4 ART1
  7.5 ART2
  7.6 Using Clustering Algorithms for Supervised Learning
  7.7 Exercises
  7.8 Programming Projects

PART III: Learning in Multi-Layer Models

8 The Distributed Method and Radial Basis Functions
  8.1 Rosenblatt's Approach
  8.2 The Distributed Method
  8.3 Examples
  8.4 How Many Cells?
  8.5 Radial Basis Functions
  8.6 A Variant: The Anchor Algorithm
  8.7 Scaling, Multiple Outputs and Parallelism
  8.8 Exercises
  8.9 Programming Projects
9 Computational Learning Theory and the BRD Algorithm
  9.1 Introduction to Computational Learning Theory
  9.2 A Learning Algorithm for Probabilistic Bounded Distributed Concepts
  9.3 The BRD Theorem
  9.4 Noisy Data and Fallback Estimates
  9.5 Bounds for Single-Layer Algorithms
  9.6 Fitting Data by Limiting the Number of Iterations
  9.7 Discussion
  9.8 Exercises
  9.9 Programming Project
10 Constructive Algorithms
  10.1 The Tower and Pyramid Algorithms
  10.2 The Cascade-Correlation Algorithm
  10.3 The Tiling Algorithm
  10.4 The Upstart Algorithm
  10.5 Pruning
  10.6 Easy Learning Problems
  10.7 Exercises
  10.8 Programming Projects
11 Backpropagation
  11.1 Introduction
  11.2 The Backpropagation Algorithm
  11.3 Derivation
  11.4 Practical Considerations
  11.5 NP-Completeness
  11.6 Comments
  11.7 Exercises
  11.8 Programming Projects
12 Backpropagation: Variations and Applications
  12.1 NETtalk
  12.2 Backpropagation Through Time
  12.3 Handwritten character recognition
  12.4 Robot manipulator with excess degrees of freedom
  12.5 Exercises
  12.6 Programming Projects
13 Simulated Annealing and Boltzmann Machines
  13.1 Simulated Annealing
  13.2 Boltzmann Machines
  13.3 Remarks
  13.4 Exercises
  13.5 Programming Project

PART IV: Neural Network Expert Systems

14 Expert Systems and Neural Networks
  14.1 Expert Systems
  14.2 Neural Network Decision Systems
  14.3 MACIE, and an Example Problem
  14.4 Applicability of Neural Network Expert Systems
  14.5 Exercises
  14.6 Programming Projects
15 Details of the MACIE System
  15.1 Inferencing and Forward Chaining
  15.2 Confidence Estimation
  15.3 Information Acquisition and Backward Chaining
  15.4 Concluding Comment
  15.5 Exercises
  15.6 Programming Projects
16 Noise, Redundancy, Fault Detection, and Bayesian Decision Theory
  16.1 Introduction
  16.2 The High Tech Lemonade Corporation's Problem
  16.3 The Deep Model and the Noise Model
  16.4 Generating the Expert System
  16.5 Probabilistic Analysis
  16.6 Noisy Single-pattern Boolean Fault Detection Problems
  16.7 Convergence Theorem
  16.8 Comments
  16.9 Exercises
  16.10 Programming Projects
17 Extracting Rules From Networks
  17.1 Why Rules?
  17.2 What kind of Rules?
  17.3 Inference Justifications
  17.4 Rule Sets
  17.5 Conventional + Neural Network Expert Systems
  17.6 Concluding Remarks
  17.7 Exercises
  17.8 Programming Projects
18 Appendix: Representation Comparisons
  18.1 DNF Expressions and Polynomial Representability
  18.2 Decision Trees
  18.3 Pi-Lambda Diagrams
  18.4 Symmetric Functions and Depth Complexity
  18.5 Concluding Remarks
  18.6 Exercises
References

364 pages, 156 figures. Available from MIT Press by calling (800) 356-0343 or (617) 625-8569. A great stocking-stuffer, especially for friends with wide, flat ankles.

SG
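The pocket algorithm of Chapter 3 has a compact core: run ordinary perceptron learning, but keep in a "pocket" the weight vector that has survived the longest run of consecutive correct classifications, since on non-separable data the perceptron weights themselves never settle down. A minimal sketch of that basic version in Python/NumPy (function and variable names are illustrative, not code from the book):

    import numpy as np

    def pocket_train(X, y, epochs=100, seed=0):
        # X: (n, d) inputs; y: (n,) labels in {-1, +1}.
        # Returns the "pocketed" weight vector (bias absorbed as last weight).
        rng = np.random.default_rng(seed)
        n, d = X.shape
        Xb = np.hstack([X, np.ones((n, 1))])    # append constant bias input
        w = np.zeros(d + 1)                     # current perceptron weights
        pocket_w, run, best_run = w.copy(), 0, 0
        for _ in range(epochs * n):
            i = rng.integers(n)                 # pick a random training example
            if y[i] * (Xb[i] @ w) > 0:          # correctly classified
                run += 1
                if run > best_run:              # longest streak so far: pocket it
                    best_run, pocket_w = run, w.copy()
            else:                               # mistake: usual perceptron update
                w = w + y[i] * Xb[i]
                run = 0
        return pocket_w

On separable data this reduces to ordinary perceptron convergence; on non-separable data the pocketed vector's training accuracy tends, with enough iterations, toward the best achievable by a single cell. The book also discusses a "ratchet" refinement that only swaps in a new pocket vector after checking its score on the whole training set.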
From mozer at dendrite.cs.colorado.edu Tue Jan 19 14:35:00 1993
From: mozer at dendrite.cs.colorado.edu (Michael C. Mozer)
Date: Tue, 19 Jan 1993 12:35:00 -0700
Subject: building energy predictor shootout -- data available by anon ftp
Message-ID: <199301191935.AA04069@neuron.cs.colorado.edu>

Data for the building energy predictor shootout announced recently over connectionists is available by anonymous ftp from ftp.cs.colorado.edu. A sample script to access the data follows below. The files in the energy-shootout directory, all ASCII format, include:

rules.asc      The shootout rules and details of the competition
atrain.dat     The training portion of data set A.
atest.dat      The testing portion of data set A.
btrain.dat     The training portion of data set B.
btest.dat      The testing portion of data set B.
dataform.at    Details of the format of these four data files along with units of all data.

-------------------------------------------------------------------------------
% ftp ftp.cs.colorado.edu
Connected to bruno.cs.colorado.edu.
220 bruno FTP server (SunOS 4.1) ready.
Name (ftp.cs.colorado.edu:mozer): anonymous
331 Guest login ok, send ident as password.
Password:
230-Guest login ok, access restrictions apply.
ftp> cd pub/cs/energy-shootout
250 CWD command successful.
ftp> ls
200 PORT command successful.
150 ASCII data connection for /bin/ls (128.138.204.25,2207) (0 bytes).
atest.dat
atrain.dat
btest.dat
btrain.dat
dataform.at
read.me
rules.asc
226 ASCII Transfer complete.
79 bytes received in 11 seconds (0.0073 Kbytes/s)
ftp> get atest.dat
200 PORT command successful.
150 ASCII data connection for atest.dat (128.138.204.25,2208) (93657 bytes).
226 ASCII Transfer complete.
local: atest.dat remote: atest.dat
94940 bytes received in 1.1 seconds (82 Kbytes/s)
ftp> bye
221 Goodbye.
From takagi at diva.berkeley.edu Tue Jan 19 23:49:54 1993
From: takagi at diva.berkeley.edu (Hideyuki Takagi)
Date: Tue, 19 Jan 93 20:49:54 -0800
Subject: ICNN'93 and FUZZ-IEEE'93
Message-ID: <9301200449.AA16723@diva.Berkeley.EDU>

=====================================================================
From berenji at ptolemy.arc.nasa.gov Tue Jan 19 18:52:58 1993
From: berenji at ptolemy.arc.nasa.gov (Hamid Berenji)
Date: Tue, 19 Jan 93 15:52:58 PST
Subject: IEEE Conferences
Message-ID:

** CALL FOR PARTICIPATION **

1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS
SECOND IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS

March 28 - April 1, 1993
San Francisco Hilton
San Francisco, California

The IEEE Neural Networks Council cordially invites you to attend the Second International Conference on Fuzzy Systems (FUZZ-IEEE'93) and the 1993 IEEE International Conference on Neural Networks (ICNN'93), to be held concurrently at the San Francisco Hilton Hotel, San Francisco, California from March 28 to April 1, 1993. These IEEE-sponsored events have grown to become the largest conferences in their fields. In 1993, their importance will be enhanced by their combined meeting in an environment that assures that conference participants will have full access to all functions and events of either of these multidisciplinary meetings. In addition to an exciting program of plenary lectures, tutorial presentations, and technical sessions and panels, we anticipate an extraordinary trade show and exhibits program affording a unique opportunity to become acquainted with the latest developments in products based on neural-networks and fuzzy-systems techniques.

PLENARY SPEAKERS
Lotfi A. Zadeh, University of California, Berkeley
Didier Dubois, Universite Paul Sabatier, Toulouse
Hamid R. Berenji, NASA Ames Research Center
Michio Sugeno, Tokyo Institute of Technology
E.H. Mamdani, Queen Mary College, London
Henri Prade, Universite Paul Sabatier, Toulouse
Bernard Widrow, Stanford University
Kumpati Narendra, Yale University
Teuvo Kohonen, Helsinki University of Technology, Finland
Richard Sutton, GTE Laboratories
Carver Mead, California Institute of Technology
Piero Bonissone, General Electric Corporate R&D

TUTORIALS

SUNDAY MARCH 28, 1993, 9:00AM - 12:30PM
1. Introduction to Fuzzy Set Theory, Uncertainty and Information Theory
   George Klir, State University of New York
2. Fuzzy Logic in Databases and Information Retrieval
   Maria Zemankova, National Science Foundation
3. Fuzzy Logic and Neural Networks for Pattern Recognition
   James Bezdek, University of West Florida
4. Evolutionary Programming
   David Fogel, Orincon Corporation
5. Introduction to Biological and Artificial Neural Networks
   Steven Rogers, Air Force Institute of Technology
6. The Biological Brain: Biological Neural Networks
   Terrence J. Sejnowski, The Salk Institute

SUNDAY, MARCH 28, 1993, 2:00PM - 5:30PM
7. Hardware Approaches to Fuzzy Logic Applications
   H. Watanabe, University of North Carolina
8. Fuzzy Logic and Neural Networks for Control Systems
   Hamid R. Berenji, NASA Ames Research Center
9. Fuzzy Logic and Neural Networks for Computer Vision
   James Keller, University of Missouri
10. Genetic Algorithms and Neural Networks
    Darrell Whitley, Colorado State University
11. Suggestions from Cognitive Science for Neural Network Applications
    James A. Anderson, Brown University
12. Expert Systems and Neural Networks
    George Lendaris, Portland State University
****************************************************************************

1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS

Sponsored by the IEEE Neural Networks Council with the cooperation of the European Neural Networks Society and the Japan Neural Networks Society.

IEEE Neural Networks Council Constituent Societies:
IEEE Circuits and Systems Society
IEEE Communications Society
IEEE Computer Society
IEEE Control Systems Society
IEEE Engineering in Medicine & Biology Society
IEEE Industrial Electronics Society
IEEE Industry Applications Society
IEEE Information Theory Society
IEEE Lasers and Electro-Optics Society
IEEE Oceanic Engineering Society
IEEE Power Engineering Society
IEEE Robotics and Automation Society
IEEE Signal Processing Society
IEEE Social Implications of Technology Society
IEEE Systems, Man, and Cybernetics Society

ORGANIZATION
General Chair: Enrique H. Ruspini
Program Cochairs: Hamid R. Berenji, Elie Sanchez, Shiro Usui

ADVISORY BOARD: S.I. Amari, L. Cooper, K. Fukushima, C. Lau, L. Stark, J. Anderson, R.C. Eberhart, R. Hecht-Nielsen, C. Mead, A. Stubberud, G. Bekey, R. Eckmiller, J. Holland, N. Packard, H. Takagi, J.C. Bezdek, J. Feldman, C. Jorgensen, D. Rumelhart, P. Treleaven, Y. Burnod, M. Feldman, T. Kohonen, B. Skyrms, B. Widrow

ORGANIZING COMMITTEE
PUBLICITY: H.R. Berenji
TUTORIALS: J.C. Bezdek
PRESS/PUBLIC RELATIONS: C. Welch
EXHIBITS: W. Xu
FINANCE: R. Tong
VIDEO PROCEEDINGS: A. Bergman
VOLUNTEERS: A. Worth

****************************************************************************

SECOND IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS

Sponsored by the IEEE Neural Networks Council in Cooperation with:
IEEE Circuits and Systems Society
IEEE Communications Society
IEEE Control Systems Society
IEEE Systems, Man, and Cybernetics Society
International Fuzzy Systems Association (IFSA)
North American Fuzzy Information Processing Society (NAFIPS)
Japan Society for Fuzzy Theory and Systems (SOFT)
European Laboratory for Intelligent Techniques Engineering (ELITE)

ORGANIZATION
General Chairman: Enrique H. Ruspini, SRI International
Program Chairman: Piero P. Bonissone, General Electric Corporate Research and Development

ADVISORY BOARD: J. Bezdek, H. Prade, M. Sugeno, T. Yamakawa, D. Dubois, E. Sanchez, T. Terano, L.A. Zadeh, G. Klir, Ph. Smets, E. Trillas, H.J. Zimmerman

ORGANIZING COMMITTEE
EXHIBITS: W. Xu, A. Ralescu, M. Togai, L. Valverde, T. Yamakawa
FINANCE: R. Tong (Chair), R. Nutter
PRESS/PUBLIC RELATIONS: C. Welch
PUBLICITY: H. Berenji (Chair), B. D'Ambrosio, R. Lopez de Mantaras, T. Takagi
TUTORIALS: J. Bezdek (Chair), H.R. Berenji, H. Watanabe
VIDEO PROCEEDINGS: A. Bergman
VOLUNTEERS: A. Worth

**************************************************************************

CONFERENCE REGISTRATION FEES: Full Conference registration permits attendance at all events and functions of both conferences with the exception of optional tour programs. The registration fee also includes one set of Proceedings (to be chosen by the registrant) for either FUZZ-IEEE '93 or ICNN '93. Additional ICNN '93 or FUZZ-IEEE '93 Proceedings or CD-ROM versions of the Proceedings are also available for purchase.
                  Registered before 1/31/93    Registered after 1/31/93
IEEE Members      $325 US Dollars              $395 US Dollars
Non-Members       $425 US Dollars              $495 US Dollars
Students*         $80 US Dollars               $100 US Dollars

TUTORIAL REGISTRATION FEES:
                  Members    Non-members    Students*
One Tutorial      $295       $345           $150
Two Tutorials     $395       $450           $200

* A letter from the Department Head to verify full-time student status at the time of registration is required. At the conference, all students must present a current student ID with picture.

FOREIGN PAYMENTS MUST BE MADE BY DRAFT ON A U.S. BANK IN U.S. DOLLARS

REFUND POLICY: If your registration must be canceled, your fee will be refunded less $50 U.S. dollars administrative costs. You must notify us in writing by March 1, 1993. No refunds can be given after this date.

LOCATION AND ACCOMMODATIONS

The Conferences will be held at the San Francisco Hilton, located downtown just one block from famous Union Square in the heart of San Francisco, and just twenty minutes from San Francisco International Airport. The Hilton offers participants of the Conferences a very special room rate of $117 (Single) and $127 (Double).

San Francisco Hilton
One Hilton Square
333 O'Farrell Street
San Francisco, CA 94102-2189
Reservations (415) 771-1400

To guarantee your reservation, you must make your reservation with payment directly to the hotel, covering the first night's stay by check or credit card.

DEADLINE FOR HOTEL RESERVATIONS: March 1, 1993

SIGHTSEEING TOURS

Various sightseeing tours in and around San Francisco and a Dinner Cruise will be offered. Details regarding tours, as well as reservation forms, will be sent upon registration for the Conferences.

AIRLINE INFORMATION

American Airlines has waived many of the restrictions to allow FUZZ-IEEE '93/ICNN '93 attendees to obtain SuperSaver fares for which they would normally not qualify. Bristol Travel has been named the official travel agency for the FUZZ-IEEE '93/ICNN '93 Conferences and can assist you with all your travel needs. To make your reservations, call Bristol Travel at (800) 762-2746. Bristol Travel also provides 24-hour, around-the-clock service. During off hours you can call (800) 237-7980 and refer to VIT (Very Important Traveler) Number SY2CO.

************************************************************************

CONFERENCE INFORMATION AND REGISTRATION: PLEASE CONTACT:

FUZZ-IEEE '93/ICNN '93 Conference Office:
P.O. Box 16502
Irvine, CA 92713-6502 USA

For Express Mail only:
Conference Office
2603 Main Street, Suite 690
Irvine, CA 92714 USA

Tel (619) 453-6222 or (800) 321-6338
FAX (714) 752-7444
E-Mail: 70750.345 at compuserve.com

From read at helmholtz.sdsc.edu Wed Jan 20 00:16:37 1993
From: read at helmholtz.sdsc.edu (Read Montague)
Date: Tue, 19 Jan 93 21:16:37 PST
Subject: No subject
Message-ID: <9301200516.AA28841@helmholtz.sdsc.edu>

POSTDOCTORAL POSITION
DIVISION OF NEUROSCIENCE
BAYLOR COLLEGE OF MEDICINE

A postdoctoral position is available beginning after July, 1993. The position is for one to three years. I am seeking individuals interested in the function of the vertebrate brain: in particular, in the problem of how three-dimensional neuroanatomy self-organizes into functioning neuronal networks, the range of mechanisms required to explain this self-organizing capability, and the behaviors of the developed networks. I am interested in theoreticians who have a commitment to dealing with the facts of biological life and/or experimentalists interested in theory and experiment.
A more explicit description of the interests of the lab is given below. Interested parties should send a C.V. and a brief statement of research interests to the address listed below.

Present address:
P. Read Montague
Computational Neurobiology Lab
The Salk Institute
10010 North Torrey Pines Rd
La Jolla, CA 92037
e-mail: read at helmholtz.sdsc.edu
fax: (619) 587-0417

RESEARCH INTERESTS OF THE LAB

The primary focus of this laboratory is how three-dimensional neuroanatomy self-organizes into functioning neuronal networks, the range of mechanisms required to explain this self-organizing capability, and the behaviors of the developed networks. The approach focuses on dendritic and axonal development as this development relates to the systems-level functions of the developed network. A particular emphasis is placed on computational and theoretical approaches, but experimental techniques are also employed. The goal is not to make the theories simply biologically plausible, but to ground them initially in reliable biological facts so that the synthesized network behavior has a chance both to explain and to extend experiments. We are particularly interested in correlational mechanisms of neural development and learning.

A separate but related interest of the lab is the role of reinforcement signals in the activity-dependent self-organization of the cortex. Recent work has focused on recasting activity-dependent development in a manner which gives reinforcement signals a natural role during the development of cortical maps and sensory-motor transformations. To place proposed mechanisms of synaptic plasticity and transmission into a more realistic context, we are exploring both activity-dependent and activity-independent mechanisms through which three-dimensional dendritic structure develops. We are interested in the contribution such development makes to computational theories of cortical map formation and function.

Our experimental efforts are focused upon the function of synapses in the mammalian cerebral cortex, with particular interest in how a synapse's local environment modulates its function. Recent experimental efforts have focused on the role of N-methyl-D-aspartate (NMDA) receptors and nitric oxide production in synaptic transmission in the mammalian cerebral cortex. These experiments have utilized in vitro brain slice physiology, electrochemistry, immunocytochemistry, and standard biochemical methods.

------------------------------------------------------

The Division of Neuroscience at Baylor offers many possibilities for collaboration with a number of excellent laboratories exploring questions ranging from the modulation of ionic channel function to visual processing in the mammalian cortex. Listed below are some of the faculty members and their areas of interest.

John Maunsell: Processing of visual information by cerebral cortex, with a particular interest in neural representations contributing to higher functions such as memory or visual guidance of behaviors.
Nikos Logothetis: Physiological mechanisms mediating visual perception and object recognition.
Dan Ts'o: Neuronal mechanisms of information processing and visual perception, through a combination of conventional electrophysiological and anatomical techniques and more novel methods such as optical imaging and cross-correlation analysis.
Sarah Pallas: Functional development of the central visual system, focusing on the relative roles of sensory input and intrinsic connectivity in establishing the response properties of target neurons.
Dan Johnston: Cellular and molecular mechanisms of long-term synaptic plasticity.
Peter Saggau: Mechanisms that control the behavior of populations of nerve cells, and in vitro optical recording methods.
James W. Patrick: Molecular mechanisms responsible for the function and modification of synapses in the central nervous system.
John A. Dani: Synaptic communication and the structure and function of ion channels.
David Sweatt: Biochemical mechanisms of long-term changes in neuronal function, with particular emphasis on long-term potentiation.
Paul Pfaffinger: Mechanisms involved in regulating neuronal excitability and synaptic strength.
Mark Perin: Molecular events in neurotransmitter release from presynaptic terminals.

From aisb93-prog at computer-science.birmingham.ac.uk Tue Jan 19 20:54:40 1993
From: aisb93-prog at computer-science.birmingham.ac.uk (aisb93-prog@computer-science.birmingham.ac.uk)
Date: Wed, 20 Jan 93 01:54:40 GMT
Subject: AISB'93 Conference in AI and Cognitive Science
Message-ID: <12901.9301200154@fat-controller.cs.bham.ac.uk>

________________________________________________________________________
________________________________________________________________________

CONFERENCE PROGRAMME and REGISTRATION INFORMATION

A I S B' 9 3

'P R O S P E C T S   F O R   A R T I F I C I A L   I N T E L L I G E N C E'

Cognitive Science Research Centre
The University of Birmingham
March 29th -- April 2nd 1993

________________________________________________________________________
________________________________________________________________________

CONTENTS
1. Message from the Programme Chair
2. Technical Programme
3. Workshops and Tutorials
4. Registration Form

ORGANISATION
Programme Chair: Aaron Sloman (University of Birmingham)
Programme Committee: David Hogg (University of Leeds), Glyn Humphreys (University of Birmingham), Allan Ramsay (University College Dublin), Derek Partridge (University of Exeter)
Local Organiser: Donald Peterson (University of Birmingham)
Administration: Petra Hickey (University of Birmingham)

GENERAL ENQUIRIES
AISB'93, School of Computer Science, The University of Birmingham, Edgbaston, Birmingham, B15 2TT, U.K.
Email: aisb93-prog at cs.bham.ac.uk
Phone: +44-(0)21-414-3711
Fax: +44-(0)21-414-4281

WORKSHOP and TUTORIAL ENQUIRIES
Hyacinth S. Nwana, Computer Science Dept., Keele University, Newcastle, Staffs ST5 5BG, ENGLAND.
JANET: nwanahs at uk.ac.keele.cs
Other: nwanahs at cs.keele.ac.uk
Phone: +44 (0)782 583413
Fax: +44 (0)782 713082

________________________________________________________________________
MESSAGE FROM THE PROGRAMME CHAIR
________________________________________________________________________

The biennial conferences of the Society for the Study of Artificial Intelligence and the Simulation of Behaviour are traditionally "single-track" scientific meetings aiming to bring together all areas of research in AI and computational cognitive science, and AISB'93 is no exception. With the end of the century close at hand, it seemed appropriate to choose a forward-looking theme, so the five invited speakers, all distinguished researchers in their own sub-fields, have been asked to identify trends and project into the future, instead of simply surveying past achievements.
Some but not all of the submitted papers also analyse prospects; the others report on work already done. The referees and the selection committee used, as a major criterion for selection, the requirement that papers should be of interest to a general AI audience. All of the papers have in common a commitment to a "design-based" approach to the study of intelligence, though some of them focus mainly on requirements, some mainly on designs and some on actual implementations. Of course there is wide variation, not only regarding the sub-domains of AI (such as vision, learning, language, emotions) but also between the techniques used (such as symbolic reasoning, neural net models, genetic algorithms), and also between those who attempt to design intelligent agents using a top-down analysis of human-like intelligence and those who work bottom-up from primitive insect-like mechanisms. There is also international variety, with papers from several European countries and further afield.

This variety of topics and approaches promises to make the conference particularly lively, with plenty of scope for controversy. We have therefore decided to allow a little more time than usual for each item in the programme, so that questions and discussions can add to the interest. There will also be poster presentations, where some work that could not be included in the formal proceedings can be presented, and it is expected that there will be book displays by major AI publishers and possibly some displays and demonstrations by vendors of AI software and systems.

The conference will be preceded by a programme of seven tutorials and workshops, for which separate registration is available. Integral Solutions Limited have agreed to present a prize of AI software, including Poplog, and a place on one of their training courses, for the paper voted "best presented" by the audience.

For those involved in AI and Cognitive Science, the conference is a primary opportunity to meet, discuss and learn about current work. For those new to these fields, the conference is a chance to become acquainted with them in pleasant surroundings and to meet the people involved. For full-time students, large reductions in registration fees are offered.

The location of the conference is one of the attractive halls of residence in a pleasant lakeside setting at one end of the campus of the University of Birmingham. This is not very far from the city centre, so a visit to one of the local attractions of the centre, such as the renowned Symphony Hall, will require a journey of only a few minutes by taxi or train. Single-room accommodation has been booked, and the auditorium is in the same building as the bedrooms and dining room, so the conference will provide excellent opportunities for informal mixing and discussions. The number of rooms available is limited, so early booking is recommended.

We look forward to seeing you and hope you enjoy the conference.

Aaron Sloman.

________________________________________________________________________
TECHNICAL PROGRAMME
(The order is provisional. Invited talks are asterisked.)
________________________________________________________________________

MONDAY MARCH 29TH
Workshops and Tutorials (see below)

TUESDAY MARCH 30TH (Morning)
Workshops and Tutorials (see below)

TUESDAY MARCH 30TH (Afternoon)
* Kurt Van Lehn (Pittsburgh) --- Prospects for modelling human learning (e.g. college physics)
Husbands, Harvey, Cliff --- An evolutionary approach to AI
Edmund Furse --- Escaping from the box
Thomas Vogel --- Learning biped robot obstacle crossing
Antunes, Moniz, Azevedo --- RB+: the dynamic estimation of the opponent's strength

WEDNESDAY 31ST MARCH
* Ian Sommerville (Lancaster) --- Prospects for AI in systems design
Oh, Azzelarabe, Sommerville, French --- Incorporating a cooperative design model in a computer aided design improvement system
Stuart Watt --- Fractal behaviour analysis
Valente, Breuker, Bredeweg --- Integrating modeling approaches in the CommonKADS library
Cawsey, Galliers, Reece, Jones --- Revising beliefs and intentions: a unified framework for agent interaction
* Allan Ramsay (Dublin) --- Prospects for natural language processing by machine
Lin, Fawcett, Davies --- Genedis: the discourse generator in communal
Miwa, Simon --- Production system modelling to represent individual differences: tradeoff between simplicity and accuracy in simulation of behaviour
Freksa, Zimmerman --- Enhancing spatial reasoning by the concept of motion

POSTER SESSION

THURSDAY 1ST APRIL
* Glyn Humphreys (Birmingham) --- Prospects for connectionism - science and engineering
Rodrigues, Lee --- Nouvelle AI and perceptual control theory
Vogel, Popowich, Cercone --- Logic-based inheritance reasoning
Beatriz Lopez --- Reactive planning through the integration of a case-based system and a rule-based system
James Stone --- Computer vision: what is it good for?

SESSION ON EMOTIONS AND MOTIVATION
Bruce Katz --- Musical resolution and musical pleasure
Moffatt, Phaf, Frijda --- Analysis of a model of emotions
Beaudoin, Sloman --- A computational exploration of the attention control theory of motivator processing and emotion
Reichgelt, Shadbolt et al. --- EXPLAIN: on implementing more effective tutoring systems

POSTER SESSION

CONFERENCE DINNER

FRIDAY 2ND APRIL (Morning)
* David Hogg (Leeds) --- Prospects for computer vision
Elio, Watanabe --- Simulating the interactive effects of domain knowledge and category structure within a constructive induction system
Dalbosco, Armando --- MRG: an integrated multifunctional reasoning system
Bibby, Reichgelt --- Modelling multiple uses of the same representation in SOAR1
Sam Steel --- A connection between decision theory and program logic

INFORMAL WORKSHOP ON MOTIVATION, EMOTIONS AND ATTENTION (see below)

________________________________________________________________________
Workshop 1: Connectionism, Cognition and a New AI
Organiser: Dr Noel Sharkey (Exeter)
Committee: Andy Clark (Sussex), Glyn Humphreys (Birmingham), Kim Plunkett (Oxford), Chris Thornton (Sussex)
Time: Monday 29th pm & Tuesday 30th March (all day)
Note: This workshop overlaps with the events in the main Technical Programme on the afternoon of Tuesday 30th.
________________________________________________________________________

A number of recent developments in Connectionist Research have strong implications for the future of AI and the study of Cognition. Among the most important are developments in Learning, Representation, and Productivity (or Generalisation). The aim of the workshop would be to focus on how these developments may change the way we look at AI and the study of Cognition.
SUGGESTED TOPICS FOR DISCUSSION ABSTRACTS INCLUDE:
Connectionist representation
Generalisation and Transfer of Knowledge
Learning Machines and models of human development
Symbolic Learning versus Connectionist learning
Advantages of Connectionist/Symbolic hybrids
Modelling Cognitive Neuropsychology
Connectionist modelling of Creativity and music (or other arts)

WORKSHOP ENTRANCE
Attendance at the workshop will be limited to 50 or 60 places, so please let us know as soon as possible if you are planning to attend, and to which of the following categories you belong.

DISCUSSION PAPERS
Acceptance of discussion papers will be decided on the basis of extended abstracts (try to keep them under 500 words please) clearly specifying a 15 to 20 minute discussion topic for oral presentation.

ORDINARY PARTICIPANTS
A limited number of places will be available for participants who wish to sit in on the discussion but do not wish to present a paper. But please get in early with a short note saying what your purpose in attending is.

PLEASE SEND SUBMISSIONS TO:
Dr. Noel Sharkey
Centre for Connection Science
Dept. Computer Science
University of Exeter
Exeter EX4 4PT
Devon U.K.
Email: noel at uk.ac.exeter.dcs

REGISTRATION: see Registration Form below.

________________________________________________________________________
Workshop 2: Qualitative and Causal Reasoning
Organiser: Dr Tony Cohn (Leeds, U.K.)
Committee: Mark Lee (Aberystwyth), Chris Price (Aberystwyth), Chris Preist (Hewlett Packard Labs, Bristol)
Time: Monday 29th March + Tuesday 30th March (morning)
________________________________________________________________________

This workshop is intended to follow on from the series of DKBS (Deep Knowledge Based Systems) workshops which were originally initiated under the Alvey programme; QCR93 will be the 8th in the series. The format of the 1.5-day workshop will consist mainly of presentations, with ample time for discussion. It is hoped to have an invited talk in addition. Participation will be by invitation only, and numbers will be limited in order to keep an informal atmosphere.

If you wish to present a paper at the workshop, please send 4 copies (max 5000 words) to the address below by 20 Feb. An electronic submission is also possible (either postscript or plain ascii). Alternatively, send a letter or email explaining your reasons for being interested in attending.

Papers may address any aspect of Qualitative and Causal Reasoning and Representation. Thus the scope of the workshop includes the following topics:
* Task-level reasoning (e.g., design, diagnosis, training, etc.)
* Ontologies (e.g., space, time, fluids, etc.)
* Explanation, causality and teleology
* Mathematical formalization of QR
* Management of multiple models (formalization, architecture, studies)
* Model building tools
* Integration with other techniques (e.g., dynamics, uncertainty, etc.)
* Methodologies for selecting/classifying QR methods
* Practical applications of QR, or Model Based Reasoning etc.

These topics are not meant to be prescriptive, and papers on other related or relevant topics are welcome. Suggestions for special sessions for the workshop are also welcome (e.g. panel session topics).

There may be some partial bursaries available to students who wish to attend. If you wish to apply for such a bursary, then please send a letter giving a case for support (include details of any funding available from elsewhere). A CV should be attached. Electronic submission is preferred.

REGISTRATION: see Registration Form below.
CORRESPONDENCE AND SUBMISSIONS:
Tony Cohn, Division of AI, School of Computer Studies, University of Leeds, LEEDS, LS2 9JT, ENGLAND.
UUCP: ...!ukc!leeds!agc
JANET: agc at uk.ac.leeds.scs
INTERNET: agc at scs.leeds.ac.uk
BITNET: agc%uk.ac.leeds.scs at UKACRL
PHONE: +44 (0)532 335482
FAX: +44 (0)532 335468

________________________________________________________________________
Workshop 3: AISB POST-GRADUATE STUDENT WORKSHOP
Organiser: Dr Hyacinth Nwana, University of Keele, UK.
Time: Monday 29th (all day) + Tuesday 30th March (morning)
________________________________________________________________________

Many postgraduate students become academically isolated as a result of working in specialised domains within fairly small departments. This workshop is aimed at providing a forum for graduate students in AI to present and discuss their ideas with other students in related areas. In addition, there will be invited presentations from a number of prominent researchers in AI. A small number of group discussions is planned, covering study for and completion of theses, life after a doctorate, paper refereeing, and how to make use of your supervisor. All attendees are expected to present an introduction to their research in a poster session on the morning of the first day. In addition, a couple of attendees will be given the opportunity to present short papers.

Confirmed tutors so far include:
Dr John Self (Lancaster) - 'Why do supervisors supervise?'
Dr Steve Easterbrook (Sussex) - 'How to write a thesis'
Dr Elizabeth Churchill (Nottingham) - Title to be confirmed.
Dr Peter Hancox (Birmingham) - Title to be confirmed.

Applicants are asked to submit a two-page abstract of their current work. In addition, full papers of between 3000 and 5000 words may be submitted. These will be considered for publication in a supplement to the AISB quarterly journal.

Deadline for 2-page abstracts: 10th February 1993

Please send an abstract or a full paper of work to:
Dr. Hyacinth S. Nwana, Computer Science Dept., Keele University, Newcastle, Staffs ST5 5BG, ENGLAND.
JANET: nwanahs at uk.ac.keele.cs
Other: nwanahs at cs.keele.ac.uk
Tel: +44 (0)782 583413
Fax: +44 (0)782 713082

REGISTRATION: see Registration Form below.

________________________________________________________________________
Workshop 4: Motivation, Emotions and Attention
Organiser: Tim Read, University of Birmingham
Time: Friday 2nd April 2.30 - 5pm
________________________________________________________________________

An informal workshop will be held after lunch on Friday 2nd April, enabling further discussion of issues raised in the Thursday afternoon session on motivation and emotions, and possibly additional presentations. There will be no charge, though numbers will be limited by available space. For more information, contact the organiser, Tim Read (address below).

The study of emotion encounters many difficulties, among them the looseness of emotional terminology in everyday speech. A theory of emotion should supersede this terminology, and should connect with such issues as motivation, control of attention, resource limitations, architectural parallelism and underlying biological mechanisms. Computation provides useful analogies in generating an information-processing account of emotion, and computer modelling is a rigorous and constructive aid in developing theories of affect. It makes sense for researchers within this field to collaborate, and the aim of the workshop is to facilitate cross-fertilisation of ideas, sharing of experience, and healthy discussion.
If you wish to make a presentation, please contact:
Tim Read
School of Computer Science, The University of Birmingham, Edgbaston, Birmingham B15 2TT, England
EMAIL: T.M.Read at cs.bham.ac.uk
Phone: +44-(0)21-414-4766
Fax: +44-(0)21-414-4281

REGISTRATION: see Registration Form below (no charge for this workshop)

________________________________________________________________________
Tutorial 1: Collaborative Human-Computer Systems: Towards an Integrated Theory of Coordination
Dr Stefan Kirn, University of Muenster, Germany
Time: Monday 29th March (morning)
________________________________________________________________________

Intelligent support of human experts' intellectual work is one of the key competitive edges of computer technology today. Important advances have been made in the fields of computer networking, AI (e.g., KADS, CBR, Distributed AI), integrated design frameworks (the European JESSI project), nonstandard databases (e.g., databases for teamwork support), computer supported cooperative work, and organizational theory. The time is ripe for developing integrated human-computer collaborative systems to significantly enhance the problem-solving capabilities of human experts.

Perhaps one of the most interesting challenges here is the development of an integrated theory of human-computer coordination. Such a theory will help to link humans and computers together in order to let them collaboratively work on complex "nonstandard" problems. It is the aim of the tutorial to put the loose ends of the above-mentioned disciplines together, thus arguing towards the development of an integrated theory of human-computer coordination.

Only undergraduate-level knowledge in at least one of the following fields is assumed: AI, database/information systems, organisational theory and CSCW.

Dr Stefan Kirn is senior researcher and project leader at the Institute of Business and Information Systems of the Westfaelische Wilhelms-University of Muenster. He has more than 30 major publications in international journals and conferences, primarily in the areas of DAI, Cooperative Information Systems, CSCW and Computer-Aided Software Engineering.

REGISTRATION: see Registration Form below.

________________________________________________________________________
Tutorial 2: The Motivation, Meaning and Use of Constraints
Dr Mark Wallace, European Computer-Industry Research Centre, Munchen, Germany
Time: Monday 29th March (afternoon)
________________________________________________________________________

This tutorial explains how constraints contribute to clear, clean, efficient programs. We study constraints as specification tools, as formal tools, and as implementation tools. Finally, we examine the use of constraints in search and optimisation problems.

As the tutorial unfolds, we will explain the three different notions of constraints: constraints as built-in relations, with built-in solvers; constraints as active agents, communicating with a store; and propagation constraints. We will also explain how these notions are related, and moreover how the different types of constraints can all be combined in a single program. For programming examples, the logic programming framework will be used.

The tutorial is aimed at postgraduates, researchers and teachers of AI who would like to know what constraints are, and what they are for. Anyone interested in declarative programming, seeking a solution to the problem of efficiency, will also benefit from the tutorial. An understanding of formal logic will be assumed, and some familiarity with logic programming will be necessary to appreciate the programming examples.

Dr Mark Wallace leads the Constraints Reasoning Team at ECRC (the European Computer-Industry Research Centre), Munich. He introduced "Negation by Constraints" at SLP'87. He has recently presented papers at IJCAI'92, FGCS'92 and JFPL'92. Recent tutorial presentations include a short course on Deductive and Object-Oriented Knowledge Bases at the Technical University of Munich, and "Constraint Logic Programming - An Informal Introduction", written with the CORE team at ECRC for the Logic Programming Summer School, '92.

REGISTRATION: see Registration Form below.
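The third notion above, propagation constraints, can be made concrete outside the logic programming setting the tutorial will use. In the sketch below (Python, with invented names and an invented toy problem), each constraint repeatedly prunes domain values that have no support until a fixed point is reached, in the style of the AC-3 arc-consistency loop:

    def revise(dx, dy, rel):
        # Drop values of the first domain that have no support in the second.
        pruned = {a for a in dx if not any(rel(a, b) for b in dy)}
        return dx - pruned, bool(pruned)

    def propagate(domains, arcs):
        # arcs: (u, v, rel) requires rel(value-of-u, value-of-v) to hold.
        queue = list(arcs)
        while queue:
            u, v, rel = queue.pop()
            domains[u], changed = revise(domains[u], domains[v], rel)
            if changed:   # domains[u] shrank: re-check arcs supported by u
                queue.extend(arc for arc in arcs if arc[1] == u)
        return domains

    doms = {'X': set(range(10)), 'Y': set(range(10))}
    arcs = [('X', 'Y', lambda x, y: x < y),
            ('Y', 'X', lambda y, x: x < y),
            ('X', 'Y', lambda x, y: x + y == 12),
            ('Y', 'X', lambda y, x: x + y == 12)]
    print(propagate(doms, arcs))

For X < Y and X + Y = 12 over the domains 0..9, propagation shrinks X to {3,...,8} and Y to {4,...,9}: pruning is local to each constraint, so a search phase is still needed to enumerate the actual solutions (3,9), (4,8) and (5,7).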
________________________________________________________________________
Tutorial 3: A Little Turing and Goedel for Specialists in AI
Prof. Alexis Manaster Ramer, Wayne State University, USA
Time: Monday 29th March (morning + afternoon)
________________________________________________________________________

Currently debated issues in the foundations of AI go directly back to technical work of people like Turing and Goedel on the power and limits of formal systems and computing devices. Yet neither the relevant results nor the intellectual climate in which they arose are widely discussed in the AI community (for example, how many know that Goedel himself believed that the human mind was not subject to the limits set by his theorems on formal systems?). The purpose of this tutorial is to develop a clear picture of the fundamental results and their implications, as seen both at the time they were obtained and at the present time. We will primarily refer to the work of Goedel, Turing, Chomsky, Hintikka, Langendoen and Postal, Searle, and Penrose.

Some background knowledge is assumed: some programming, some AI and some discrete mathematics.

Dr Alexis Manaster Ramer is professor of Computer Science at Wayne State University. He has over 100 publications and presentations in linguistics, computational linguistics, and foundations of CS and AI. A few years ago he taught a short course on the theory of computation for the Natural Language Processing group at the IBM T.J. Watson Research Center (Hawthorne, NY, USA), and this past summer he taught a one-week advanced course on mathematics of language at the European Summer School in Logic, Language, and Information (Colchester, UK).

REGISTRATION: see Registration Form below.

________________________________________________________________________
OTHER MEETINGS
________________________________________________________________________

LAGB CONFERENCE. Shortly before AISB'93, the Linguistics Association of Great Britain (LAGB) will hold its Spring Meeting at the University of Birmingham, from 22-24th March, 1993. For more information, contact Dr. William Edmondson: postal address as below; phone +44-(0)21-414-4773; email EDMONDSONWH at vax1.bham.ac.uk

JCI CONFERENCE. The Joint Council Initiative in Cognitive Science and Human Computer Interaction will hold its Annual Meeting on Monday 29th March 1993 in the same buildings as AISB'93 (in parallel with the AISB'93 workshops and tutorials). The theme will be "Understanding and Supporting Acquisition of Cognitive Skills". For more information, contact Elizabeth Pollitzer, Department of Computing, Imperial College, 180 Queens Gate, London SW7 2BZ, U.K.; phone +44-(0)71-581-8024; email eep at doc.ic.ac.uk.
________________________________________________________________________
REGISTRATION NOTES
Main Programme, Workshops and Tutorials
________________________________________________________________________

o Please print off the form, tick through the items you require, enter sub-totals and totals, and send by post, together with payment, to: AISB'93 Registrations, School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, U.K.
o Payment should be made by cheque or money order payable to `The University of Birmingham', drawn in pounds sterling on a UK clearing bank, and should accompany the form below.
o Registrations postmarked after 10th March count as late registrations.
o It is not possible to register by email.
o Confirmation of booking, a receipt, and travel details will be sent on receipt of this application form.
o The Conference Dinner (20 pounds) is on the evening of Thursday 1st.
o Delegates wishing to join AISB (thus avoiding the non-AISB member supplement) should contact: AISB Administration, Cognitive and Computing Sciences, University of Sussex, Brighton BN1 9QH, U.K.; phone: +44-(0)273 678379; fax: +44-(0)273 678188; email: aisb at cogs.susx.ac.uk

Donald Peterson, January 1993.

______________________________________________________________________
R E G I S T R A T I O N   F O R M  ----  A I S B' 9 3
______________________________________________________________________

Figures in parentheses are for full-time students (send photocopy of ID).

ACCOMMODATION and FOOD
                     28th     29th     30th     31st      1st    sub-totals
lunch                         5.50     5.50     5.50     5.50      ______
dinner                        7.50     7.50     7.50    20.00      ______
bed & breakfast     23.00    23.00    23.00    23.00    23.00      ______
                                                         total     ______
vegetarians please tick _____

TECHNICAL PROGRAMME, WORKSHOPS and TUTORIALS
technical programme        175 (40)   _____
non-AISB members add        30        _____
late registration add       35        _____
Nwana workshop              50        _____
Sharkey workshop            60 (30)   _____
Cohn workshop               60 (30)   _____
Read workshop                0        _____
Manaster Ramer tutorial    110 (55)   _____
Wallace tutorial            75 (30)   _____
Kirn tutorial               75 (30)   _____
                           total      _____ Pounds

PERSONAL DETAILS
Name    ___________________________________________   Full-time student? Y/N
Address ___________________________________________
        ___________________________________________
        ___________________________________________
        ___________________________________________
Phone   _________________________  Fax ___________
Email   ___________________________________________

I wish to register for the events indicated, and enclose a cheque in pounds sterling, drawn on a U.K. clearing bank and payable to the `University of Birmingham' for .....

Signed _________________________  Date ___________

From thrun at informatik.uni-bonn.de Tue Jan 19 16:46:35 1993
From: thrun at informatik.uni-bonn.de (Sebastian Thrun)
Date: Tue, 19 Jan 93 22:46:35 +0100
Subject: papers in neuroprose archive
Message-ID: <9301192146.AA04824@uran>

Dear Connectionists,

this mail is to announce two new papers in Jordan Pollack's neuroprose archive:

1) Explanation-Based Neural Network Learning for Robot Control, by Tom Mitchell and Sebastian Thrun, to appear in: NIPS-5
2) Exploration and Model Building in Mobile Robot Domains, by Sebastian Thrun, to appear in: Proceedings of the ICNN-93

Enclosed you find both abstracts and the (standard) instructions for retrieval. Comments are welcome.

Have fun,
Sebastian Thrun

----------------------------------------------------------------------

Explanation-Based Neural Network Learning for Robot Control
Tom M. Mitchell (CMU, mitchell at cs.cmu.edu)
Sebastian B. Thrun (Bonn University, thrun at uran.informatik.uni-bonn.de)

How can artificial neural nets generalize better from fewer examples? In order to generalize successfully, neural network learning methods typically require large training data sets. We introduce a neural network learning method that generalizes rationally from many fewer data points, relying instead on prior knowledge encoded in previously learned neural networks. For example, in robot control learning tasks reported here, previously learned networks that model the effects of robot actions are used to guide subsequent learning of robot control functions. For each observed training example of the target function (e.g. the robot control policy), the learner *explains* the observed example in terms of its prior knowledge, then *analyzes* this explanation to infer additional information about the shape, or slope, of the target function. This shape knowledge is used to bias generalization in the learned target function. Results are presented applying this approach to a simulated robot task based on reinforcement learning. (file name: mitchell.ebnn-nips5.ps.Z)

Exploration and Model Building in Mobile Robot Domains

Sebastian B. Thrun (Bonn University, thrun at uran.informatik.uni-bonn.de)

I present first results on COLUMBUS, an autonomous mobile robot. COLUMBUS operates in initially unknown, structured environments. Its task is to explore and model the environment efficiently while avoiding collisions with obstacles. COLUMBUS uses an instance-based learning technique for modeling its environment. Real-world experiences are generalized via two artificial neural networks that encode the characteristics of the robot's sensors, as well as the characteristics of typical environments the robot is assumed to face. Once trained, these networks allow for the transfer of knowledge across different environments the robot will face over its lifetime. COLUMBUS' models represent both the expected reward and the confidence in these expectations. Exploration is achieved by navigating to low-confidence regions. An efficient dynamic programming method is employed in background to find minimal-cost paths that, executed by the robot, maximize exploration. COLUMBUS operates in real time. It has been operating successfully in an office building environment for hours at a time. (file name: thrun.robots-icnn93.ps.Z)

----------------------------------------------------------------------

Postscript versions of both papers may be retrieved from Jordan Pollack's neuroprose archive. If you have a Postscript printer, please follow the instructions below. If not, feel free to contact me (thrun at uran.informatik.uni-bonn.de) for a hardcopy.

unix> ftp archive.cis.ohio-state.edu
ftp login name> anonymous
ftp password> xxx at yyy.zzz
ftp> cd pub/neuroprose
ftp> bin
ftp> get mitchell.ebnn-nips5.ps.Z
ftp> get thrun.robots-icnn93.ps.Z
ftp> bye
unix> uncompress mitchell.ebnn-nips5.ps.Z
unix> uncompress thrun.robots-icnn93.ps.Z
unix> lpr mitchell.ebnn-nips5.ps
unix> lpr thrun.robots-icnn93.ps

Note that the second file is rather long, and some printers have limitations on the size of document they will print. In this case it may be necessary to circumvent the limitation by using "lpr" with the "-s" option on the machine the printer is physically connected to.
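The abstract's central device, fitting the observed values and the slopes inferred from prior knowledge, can be illustrated apart from the robot setting. Below is a toy value-plus-slope fit in Python/NumPy; in EBNN the slope targets come from differentiating previously learned networks, whereas here a stand-in prior model supplies them, and all names are hypothetical rather than taken from the paper:

    import numpy as np

    def phi(x):   # cubic polynomial features
        return np.stack([np.ones_like(x), x, x**2, x**3], axis=-1)

    def dphi(x):  # derivatives of the features with respect to x
        return np.stack([np.zeros_like(x), np.ones_like(x), 2 * x, 3 * x**2], axis=-1)

    def fit_values_and_slopes(x, y, s, mu=1.0, lr=1e-3, steps=20000):
        # Gradient descent on sum_i (f(x_i)-y_i)^2 + mu*(f'(x_i)-s_i)^2
        # for the linear-in-features model f(x) = w . phi(x).  The slope
        # term is what biases generalization toward the prior's local shape.
        w = np.zeros(4)
        P, dP = phi(x), dphi(x)
        for _ in range(steps):
            grad = 2 * P.T @ (P @ w - y) + 2 * mu * dP.T @ (dP @ w - s)
            w -= lr * grad
        return w

    # Three observed points of an unknown target; slope targets supplied
    # by a stand-in prior model g(x) = sin(x) (purely for illustration).
    x = np.array([0.0, 1.0, 2.0])
    y = np.sin(x)     # observed values of the target function
    s = np.cos(x)     # slopes obtained by "explaining" each example
    w = fit_values_and_slopes(x, y, s)

With only three value constraints a cubic is badly underdetermined; the three extra slope constraints pin down much of the remaining freedom, which is the sense in which shape knowledge can substitute for additional training examples.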
From ttj10 at eng.cam.ac.uk Wed Jan 20 07:13:24 1993
From: ttj10 at eng.cam.ac.uk (ttj10@eng.cam.ac.uk)
Date: Wed, 20 Jan 93 12:13:24 GMT
Subject: Real Pole-Balancing
Message-ID: <27008.9301201213@fear.eng.cam.ac.uk>

Recently I posted an abstract for a technical report on real pole-balancing [1]. Andy Barto has since pointed out that the abstract gives the impression of condemning approximate dynamic programming methods as tools for learning control. This was not our intention. The offending line is:

"This limits the usefulness of this kind of learning controller to small problems which are likely to be better controlled by other means. Before a learning controller can tackle more difficult problems, a more powerful learning scheme has to be found."

Firstly, by "this kind of learning controller" was meant the kind of learning controller which requires a carefully designed state space decoder. Setting the parameters of the controller was not straightforward, and required some trial and error, helped by prior knowledge of the plant. By "more difficult problems" was meant problems with even more parameters. It seems reasonable to suggest that a better learning scheme would be needed in such instances. But that is not to say that an improved scheme that made use of approximate dynamic programming techniques would not be up to the job. Andy Barto points out that better learning schemes have already been produced.

The early ACE/ASE learning algorithm [2] was chosen for our implementation for speed of execution in a real-time environment. It might also be considered interesting as a baseline comparison, since the ACE/ASE controller is relatively well known. Barto, Sutton and Anderson used Michie and Chambers' [3] state representation, since this was the work on which they were improving. They mentioned that this was a critical part of the algorithm, which should be adaptive.

A copy of the report is available by ftp from svr-ftp.eng.cam.ac.uk, as reports/jervis_tr115.ps.Z.

references:

[1] @techreport{Jervis92,
      author = "T.T. Jervis and F. Fallside",
      title = "Pole Balancing on a Real Rig using a Reinforcement Learning Controller",
      year = "1992",
      month = "December",
      number = "CUED/F-INFENG/TR 115",
      institution = "Cambridge University Engineering Department",
      address = "Trumpington Street, Cambridge, England"}

[2] @article{Barto83,
      author = "A.G. Barto and R.S. Sutton and C.W. Anderson",
      title = "Neuronlike Adaptive Elements That Can Solve Difficult Learning Control Problems",
      year = "1983",
      month = "September/October",
      journal = "IEEE Transactions on Systems, Man and Cybernetics",
      volume = "SMC-13",
      pages = "834-846"}

[3] @incollection{Michie68,
      author = "D. Michie and R.A. Chambers",
      title = "Boxes: An Experiment in Adaptive Control",
      booktitle = "Machine Intelligence",
      publisher = "Oliver and Boyd",
      year = "1968",
      volume = "2",
      pages = "137-152",
      editor = "E. Dale and D. Michie"}

From stolcke at ICSI.Berkeley.EDU Wed Jan 20 14:22:01 1993
From: stolcke at ICSI.Berkeley.EDU (Andreas Stolcke)
Date: Wed, 20 Jan 93 11:22:01 PST
Subject: new cluster version available
Message-ID: <9301201922.AA06528@icsib30.ICSI.Berkeley.EDU>

I'm releasing a new version of the time-honored cluster program (that also does PCA). I recently made a small change to the algorithm that speeds clustering up by a factor of n (the number of data points). The algorithm now runs in time O(n^2) (formerly O(n^3)) and uses memory O(n) (formerly O(n^2)).
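Stolcke's note does not say how the factor-of-n speedup is obtained, but one standard route to O(n^2) time with O(n) working memory in agglomerative clustering is the nearest-neighbor chain algorithm, which never stores an n-by-n distance matrix. A sketch with Ward linkage in Python/NumPy, for illustration only (this may well differ from what the cluster program actually does):

    import numpy as np

    def ward_nnchain(X):
        # Follow nearest-neighbor chains; merge whenever two clusters are
        # mutual nearest neighbors.  Only centroids and sizes are stored.
        n = len(X)
        size = np.ones(n)                # size[i] == 0 marks a dead slot
        cen = np.array(X, dtype=float)   # centroid of the cluster in slot i
        merges, chain = [], []
        while len(merges) < n - 1:
            if not chain:                # start a new chain at any live slot
                chain = [int(np.flatnonzero(size > 0)[0])]
            a = chain[-1]
            diff = cen - cen[a]
            # Ward cost: |A||B|/(|A|+|B|) * ||centroid_A - centroid_B||^2
            cost = (size * size[a] / (size + size[a])) * np.einsum('ij,ij->i', diff, diff)
            cost[size == 0] = np.inf
            cost[a] = np.inf
            b = int(np.argmin(cost))
            # On ties prefer the chain's previous element, so chains terminate.
            if len(chain) > 1 and cost[chain[-2]] <= cost[b]:
                b = chain[-2]
            if len(chain) > 1 and b == chain[-2]:
                # Mutual nearest neighbors: merge slot b into slot a.
                merges.append((a, b, float(cost[b])))
                cen[a] = (size[a] * cen[a] + size[b] * cen[b]) / (size[a] + size[b])
                size[a] += size[b]
                size[b] = 0
                chain = chain[:-2]       # the rest of the chain stays valid
            else:
                chain.append(b)
        return merges                    # (kept slot, absorbed slot, Ward cost)

Because Ward linkage is reducible, a merge never invalidates the remainder of the chain, so the total number of chain steps is O(n) and each step costs a single O(n) scan over the live clusters.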
On a SPARCstation 2, this means you can cluster a 1000-by-10 data set in 39 secs as opposed to 230 secs. Systems short on memory should see even more dramatic improvements due to reduced paging.

As before, the source code is available by ftp:

% mkdir cluster; cd cluster
% ftp ftp.icsi.berkeley.edu
ftp> cd pub/ai
ftp> binary
ftp> get cluster-2.5.tar.Z
ftp> quit
% zcat cluster-2.5.tar.Z | tar xf -
% make    # after looking over the Makefile

--Andreas

From n at predict.com Wed Jan 20 17:25:28 1993
From: n at predict.com (n (Norman Packard))
Date: Wed, 20 Jan 93 15:25:28 MST
Subject: Job Offer: Research on Financial Analysis in Santa Fe NM
Message-ID: <9301202225.AA00816@mule>

Job Opening for Research Scientist
Prediction Company
Financial Forecasting

Prediction Company is a small Santa Fe, NM based startup firm utilizing the latest nonlinear forecasting technologies for prediction and computerized trading of derivative financial instruments. The senior technical founders of the firm are Doyne Farmer and Norman Packard, who have worked for over ten years in the fields of chaos theory and nonlinear dynamics. The technical staff includes other senior researchers in the field. The company has the backing of a major technically based trading firm and their partner, a major European bank.

There is currently an opening at the company for a research scientist to assist in modeling and related data analysis research. The successful applicant will be a talented scientist with experience in one or more of the following areas: (i) learning algorithms, such as neural networks, decision trees, and genetic algorithms, (ii) time series forecasting, (iii) statistics. Experience in applying learning algorithms to real data and a strong computer programming background, preferably in C++, are essential. A sound background in statistics and experience with financial applications are desirable.

Applicants should send resumes to Prediction Company, 234 Griffin Street, Santa Fe, NM 87501 or to Laura Barela at laura%predict.com at santafe.edu.

From tap at cs.toronto.edu Thu Jan 21 19:30:04 1993
From: tap at cs.toronto.edu (Tony Plate)
Date: Thu, 21 Jan 1993 19:30:04 -0500
Subject: nips*92 preprint available
Message-ID: <93Jan21.193005edt.594@neuron.ai.toronto.edu>

Preprint available (to appear in C. L. Giles, S. J. Hanson, and J. D. Cowan, editors, Advances in Neural Information Processing Systems 5 (NIPS*92), Morgan Kaufmann, San Mateo, CA)

Holographic Recurrent Networks

Tony A. Plate
Department of Computer Science
University of Toronto
Toronto, M5S 1A4 Canada
tap at ai.utoronto.ca

ABSTRACT

Holographic Recurrent Networks (HRNs) are recurrent networks which incorporate associative memory techniques for storing sequential structure. HRNs can be easily and quickly trained using gradient descent techniques to generate sequences of discrete outputs and trajectories through continuous space. The performance of HRNs is found to be superior to that of ordinary recurrent networks on these sequence generation tasks.

- Obtain by ftp from archive.cis.ohio-state.edu in pub/neuroprose.
- No hardcopy available.
- FTP procedure:
    unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52)
    Name: anonymous
    Password: neuron
    ftp> cd pub/neuroprose
    ftp> binary
    ftp> get plate.nips5.ps.Z
    ftp> quit
    unix> uncompress plate.nips5.ps.Z
    unix> lpr plate.nips5.ps (or however you print postscript)
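The "holographic" in HRNs refers to convolution-based associative memory: an item is bound to a key by circular convolution, and a noisy copy is retrieved by circular correlation. A minimal sketch of just this storage and retrieval step in Python/NumPy (illustrative only; HRNs embed such operations inside a recurrent network, which is not shown here):

    import numpy as np

    def cconv(a, b):
        # Circular convolution (via FFT): the binding operation.
        return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

    def ccorr(a, b):
        # Circular correlation: approximate inverse of circular convolution.
        return np.real(np.fft.ifft(np.conj(np.fft.fft(a)) * np.fft.fft(b)))

    rng = np.random.default_rng(0)
    n = 1024
    key, item = rng.normal(0.0, 1.0 / np.sqrt(n), (2, n))
    trace = cconv(key, item)    # store: bind item to key in a single vector
    noisy = ccorr(key, trace)   # retrieve: a degraded copy of the item
    sim = noisy @ item / (np.linalg.norm(noisy) * np.linalg.norm(item))
    print(sim)                  # well above chance for random vectors

The retrieved vector is only an approximation, which is why such memories are normally paired with a clean-up stage that maps the noisy result to the nearest known item vector.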
From sml at essex.ac.uk Thu Jan 21 12:12:25 1993 From: sml at essex.ac.uk (Lucas S M) Date: Thu, 21 Jan 93 17:12:25 GMT Subject: No subject Message-ID: <17256.9301211712@esesparc.essex.ac.uk> 1st ANNOUNCEMENT AND CALL FOR PAPERS -------------------------------------- GRAMMATICAL INFERENCE: THEORY, APPLICATIONS AND ALTERNATIVES -------------------------------------------------------------- 22-23 April, 1993 At the UNIVERSITY OF ESSEX, WIVENHOE PARK, COLCHESTER CO4 3SQ, UK Sponsored by the Institution of Electrical Engineers and the Institute of Mathematics. Relevant Research Areas: * Computational Linguistics * Machine Learning * Pattern Recognition * Neural Networks * Artificial Intelligence MOTIVATION ------------ Grammatical Inference is an immensely important research area that has suffered from the lack of a focussed research community. A two-day colloquium will be held at the University of Essex on the 22-23rd April 1993. The purpose of this colloquium is to bring together researchers who are working on grammatical inference and closely related problems such as sequence learning and prediction. Papers are sought for the technical sessions listed below. BACKGROUND ------------ A grammar is a finite declarative description of a possibly infinite set of data (known as the language) that is reversible in the sense that it may be used to detect language membership (or degree of membership) of a pattern, or it may be used generatively to produce samples of the language. The language may be formal and simple, such as the set of all symmetric strings over a given alphabet; formal and more complex, such as the set of legal PASCAL programs; less formal, such as sentences or phrases in natural language; noisy, such as vector-quantised speech or handwriting; or even spatial rather than temporal, such as 2-d images. For the noisy cases, stochastic grammars are often used that define the probability that the data was generated by the given grammar. So, given a set of data that the grammar is supposed to generate, and perhaps also a set that it should not generate, the problem is to learn a grammar that not only satisfies these conditions but, more importantly, generalises to unseen data in some desirable way (this may be strictly specified in test-cases where the grammar used to create the training samples is known). To date, the grammatical inference research community has evolved largely divided into the following areas: a) Theories about the types of language that can and cannot be learned. These theories are generally concerned with the types of language that may and may not be learned in polynomial time. They are arguably irrelevant in practical terms, since in applications we are usually happy to settle for a good grammar rather than some `ideal' grammar. b) Explicit inference; this deals directly with modifying a set of production rules until a satisfactory grammar is obtained. c) Implicit inference, e.g. estimating the parameters of a hidden Markov model -- in this case production rule probabilities in the equivalent stochastic regular grammar are represented by pairs of numbers in the HMM. d) Estimating models where the grammatical equivalence is uncertain (e.g.
recurrent neural networks), which nonetheless often aim to solve exactly the same problem. In many cases, researchers in these distinct subfields seem unaware of the work in the other subfields; this is surely detrimental to the progress of grammatical inference research. TECHNICAL SESSIONS -------------------- Oral and poster papers are requested in the following areas: Theory: What kinds of language are theoretically learnable; the practical import of such theories. Learning 2-d and higher-dimensional grammars, attribute grammars, etc. Algorithms: Any new GI algorithms, or new insights on old ones. Grammatical inference assistants that aim to aid humans in writing grammars. Performance of genetic algorithms and simulated annealing for grammatical inference, etc. Applications: Any interesting applications in natural language processing, speech recognition, cursive script recognition, pattern recognition, sequence prediction, financial markets, etc. Alternatives: The power of alternative approaches to sequence learning, such as stochastic models and artificial neural networks, where the inferred grammar may have a distributed rather than an explicit representation. Competition: A number of datasets will be made available for authors to report the performance of their algorithms on, in terms of learning speed and generalisation power. There is also the possibility of a live competition in the demonstration session. Demonstration: There will be a session where authors may demonstrate their algorithms. For this purpose we have a large number of Unix workstations running X-Windows, with compilers for C, C++, Pascal, Fortran, Common Lisp and Prolog. If your algorithms are written in a more exotic language, we may still be able to sort something out. PCs can be made available if necessary. DISCUSSIONS ------------- There will be open forum discussions on planning the next Grammatical Inference Conference and on setting up a Grammatical Inference Journal (possibly an electronic one). PUBLICATIONS -------------- Loose-bound collections of accepted conference papers will be distributed to delegates upon arrival. It is planned to publish a selection of these papers in a book following the conference. REMOTE PARTICIPATION ---------------------- Authors from distant lands unwilling to travel to Essex for the conference are encouraged to submit a self-explanatory poster-paper that will be displayed at the conference. SUBMISSION DETAILS -------------------- Prospective authors should submit a 2-page abstract to Simon Lucas at the address below by the end of February, 1993. Email and faxed abstracts are acceptable. Notification of the intention to submit an abstract would also be appreciated. REGISTRATION DETAILS ---------------------- Prospective delegates are requested to mail/email/fax me at the address below for further details. ------------------------------------------------ Dr. Simon Lucas Department of Electronic Systems Engineering University of Essex Colchester CO4 3SQ United Kingdom Tel: 0206 872935 Fax: 0206 872900 Email: sml at uk.ac.essex ------------------------------------------------- From berg at cs.albany.edu Fri Jan 22 17:55:01 1993 From: berg at cs.albany.edu (George Berg) Date: Fri, 22 Jan 1993 17:55:01 -0500 (EST) Subject: Computational Biology Faculty Position Message-ID: <9301222255.AA05613@karp.albany.edu> A non-text attachment was scrubbed...
Name: not available Type: text Size: 1759 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/28d22bbc/attachment-0001.ksh From schuetze at csli.stanford.edu Sun Jan 24 14:16:02 1993 From: schuetze at csli.stanford.edu (Hinrich Schuetze) Date: Sun, 24 Jan 93 11:16:02 -0800 Subject: paper on distributed semantic representations Message-ID: <9301241916.AA28750@Csli.Stanford.EDU> The following paper is now available in the connectionist archive, archive.cis.ohio-state.edu (128.146.8.52), in pub/neuroprose under the name: schuetze.wordspace.ps.Z WORD SPACE Hinrich Schuetze CSLI, Stanford University ABSTRACT This paper describes an efficient, corpus-based method for inducing distributed semantic representations for a large number of words (50,000) from lexical cooccurrence statistics. Each word is represented by a 97-dimensional vector that is computed by means of a singular-value decomposition of a 5000-by-5000 matrix recording cooccurrence in a large text corpus (The New York Times). The representations are successfully applied to word sense disambiguation using a nearest neighbor method. to appear in: S.~J. Hanson, J.~D. Cowan, and C.~L. Giles (Eds.), {\em Advances in Neural Information Processing Systems 5}. San Mateo CA: Morgan Kaufmann. author's address: Hinrich Schuetze CSLI, Ventura Hall Stanford, CA 94305-4115 schuetze at csli.stanford.edu From mjolsness-eric at CS.YALE.EDU Sun Jan 24 20:35:23 1993 From: mjolsness-eric at CS.YALE.EDU (Eric Mjolsness) Date: Sun, 24 Jan 93 20:35:23 EST Subject: Junior faculty opening at Yale Computer Science Message-ID: <199301250135.AA15033@EXPONENTIAL.SYSTEMSZ.CS.YALE.EDU> ***** PLEASE DO NOT FORWARD TO OTHER BBOARDS ***** I wish to draw the attention of especially well-qualified neural networkers to my department's recruiting advertisement for junior faculty in the December and January Communications of the ACM, which I quote below, and which neither explicitly targets nor excludes neural networkers. Please do not reply to me concerning this opening. The ad: "We expect to have one or more junior faculty positions available for the 1993-94 academic year. We are particularly interested in applicants in the areas of artificial intelligence, theoretical computer science, numerical analysis, and programming languages and systems. Applications should be submitted before April 30, 1993. "Duties will include teaching graduate and undergraduate courses. Applicants are expected to engage in a vigorous research program. "Candidates should hold a Ph.D. in computer science or related discipline. "Qualified women and minority candidates are encouraged to apply. Yale is an affirmative action/equal opportunity employer. "Send vitae and names of three references to: Faculty Recruiting Committee, Department of Computer Science, Yale University, P.O. Box 2158, Yale Station, New Haven, CT 06520." -Eric Mjolsness ------- ------- From robtag at udsab.dia.unisa.it Mon Jan 25 04:36:13 1993 From: robtag at udsab.dia.unisa.it (Tagliaferri Roberto) Date: Mon, 25 Jan 1993 10:36:13 +0100 Subject: 1993 courses and workshops programme Message-ID: <199301250936.AA14324@udsab.dia.unisa.it> **************** IIASS 1993 Workshops and Courses ************** **************** Preliminary Announcement ************** February 9 - 12 A short course on "Hybrid Systems: Neural Nets, Fuzzy Sets and A.I. systems" Lecturers: Dr. Silvano Colombano, NASA Research Center, CA Prof. Piero Morasso, Univ.
Genova, Italia ----------------------------------------------------------------- March 23 - 27 A short course on "Languages for Parallel Programming" Lecturers: Prof. Merigot, Univ. Paris Sud, France Prof. A.P. Reeves (to be confirmed) ----------------------------------------------------------------- April second half A short course on "Learning in Neural Nets" Lecturers: Dr. M. Biehl, Physikalisches Inst., Wuerzburg, Germany Dr. Sara Solla, AT&T Bell Laboratories ----------------------------------------------------------------- May 12-14 The 6th Italian Workshop on Neural Nets WIRN VIETRI-93 Organizing - Scientific Committee -------------------------------------------------- B. Apolloni (Univ. Milano) A. Bertoni ( Univ. Milano) E. R. Caianiello ( Univ. Salerno) D. D. Caviglia ( Univ. Genova) P. Campadelli ( CNR Milano) M. Ceccarelli ( Univ. Salerno - IRSIP CNR) P. Ciaccia ( Univ. Bologna) M. Frixione ( I.I.A.S.S.) G. M. Guazzo ( I.I.A.S.S.) M. Gori ( Univ. Firenze) F. Lauria ( Univ. Napoli) M. Marinaro ( Univ. Salerno) A. Negro ( Univ. Salerno) G. Orlandi ( Univ. Roma) E. Pasero ( Politecnico Torino ) A. Petrosino ( Univ. Salerno - IRSIP CNR) M. Protasi ( Univ. Roma II) S. Rampone ( Univ. Salerno - IRSIP CNR) R. Serra ( Gruppo Ferruzzi Ravenna) F. Sorbello ( Univ. Palermo) R. Stefanelli ( Politecnico Milano) L. Stringa ( IRST Trento) R. Tagliaferri ( Univ. Salerno) R. Vaccaro ( CNR Napoli) Topics ---------------------------------------------------- Mathematical Models Architectures and Algorithms Hardware and Software Design Hybrid Systems Pattern Recognition and Signal Processing Industrial and Commercial Applications Fuzzy Techniques for Neural Networks Sponsors ------------------------------------------------------------------------------ International Institute for Advanced Scientific Studies (IIASS) Dept. of Fisica Teorica, University of Salerno Dept. of Informatica e Applicazioni, University of Salerno Dept. of Scienze dell'Informazione, University of Milano Istituto per la Ricerca dei Sistemi Informatici Paralleli (IRSIP - CNR) Societa' Italiana Reti Neuroniche (SIREN) Invited Speakers ---------------------------------------------------------------- Prof. Stan Gielen, Catholic Univ. of Nijmegen, NL Prof. Tommaso Poggio, MIT Prof. Lotfi Zadeh, Berkeley (to be confirmed) ---------------------------------------------------------------- May 24 - 28 A short course on "Neural Nets for Pattern Recognition" Lecturers: Dr. Federico Girosi, MIT Dr. V.N. Vapnik, AT&T Bell Laboratories (to be confirmed) ---------------------------------------------------------------- September 13 - 24 Advanced School on Computational Learning and Cryptography Sponsored by EATCS Italian Chapter Lecturers --------------------------------------------------------------- Prof. Shimon Even, Technion, Haifa, Israel Dr. Moti Yung, IBM T.J. Watson Research Center Dr. Michael Kearns, AT&T Bell Laboratories Prof. Wolfgang Maass, Technische Univ. Graz, Austria Directors -------------------------------------------------------------- Prof. Alfredo De Santis, Univ. Salerno, Italia Prof. Giancarlo Mauri, Univ. Milano, Italia -------------------------------------------------------------- The short courses and WIRN 93 are also sponsored by Progetto Finalizzato CNR "Sistemi Informatici e Calcolo Parallelo" and by Contratto quinquennale CNR-IIASS For any information about the short courses and WIRN 93, please contact the IIASS secretariat I.I.A.S.S Via G.Pellegrino, 19 I-84019 Vietri Sul Mare (SA) ITALY Tel.
+39 89 761167 Fax +39 89 761189 or Dr. Roberto Tagliaferri E-Mail robtag at udsab.dia.unisa.it ***************************************************************** From ahmad at bsun11.zfe.siemens.de Mon Jan 25 03:36:39 1993 From: ahmad at bsun11.zfe.siemens.de (Subutai Ahmad) Date: Mon, 25 Jan 93 09:36:39 +0100 Subject: Missing feature preprint available Message-ID: <9301250836.AA19374@bsun11.zfe.siemens.de> The following paper is available for anonymous ftp on Neuroprose, archive.cis.ohio-state.edu (128.146.8.52), in directory pub/neuroprose, as file "ahmad.missing.ps.Z": Some Solutions to the Missing Feature Problem in Vision Subutai Ahmad and Volker Tresp Siemens Central Research and Development Abstract In visual processing the ability to deal with missing and noisy information is crucial. Occlusions and unreliable feature detectors often lead to situations where little or no direct information about features is available. However the available information is usually sufficient to highly constrain the outputs. We discuss Bayesian techniques for extracting class probabilities given partial data. The optimal solution involves integrating over the missing dimensions weighted by the local probability densities. We show how to obtain closed-form approximations to the Bayesian solution using Gaussian basis function networks. The framework extends naturally to the case of noisy features. Simulations on a complex task (3D hand gesture recognition) validate the theory. When both integration and weighting by input densities are used, performance decreases gracefully with the number of missing or noisy features. Performance is substantially degraded if either step is omitted. To appear in: S. J. Hanson, J. D. Cowan, and C. L. Giles (Eds.), Advances in Neural Information Processing Systems 5. San Mateo CA: Morgan Kaufmann. ---- Subutai Ahmad Siemens AG, ZFE ST SN61, Phone: +49 89 636-3532 Otto-Hahn-Ring 6, FAX: +49 89 636-2393 W-8000 Munich 83, Germany E-mail: ahmad at zfe.siemens.de From cohn at psyche.mit.edu Mon Jan 25 15:47:19 1993 From: cohn at psyche.mit.edu (David Cohn) Date: Mon, 25 Jan 93 15:47:19 EST Subject: Robot Learning Workshop summary in neuroprose Message-ID: <9301252047.AA21630@psyche.mit.edu> ROBOT LEARNING Summary of the post-NIPS workshop Vail, Colorado, Dec 5th, 1992 David A. Cohn (MIT) Tom Mitchell (CMU) Sebastian Thrun (CMU) cohn at psyche.mit.edu mitchell at cs.cmu.edu thrun at cs.cmu.edu We have just completed a short summary of the post-NIPS workshop on "Robot Learning" and have placed the summary in the neuroprose archives (with the assistance, of course, of Jordan Pollack). The goal of this workshop was to provide a forum for researchers active in the area of robot learning and related fields. Due to the limited time available, we attempted to focus discussion around two major issues: 1) How can current learning robot techniques scale to more complex domains, characterized by massive sensor input, complex causal interactions, and long time scales? 2) Where are the new ideas in robot learning coming from? ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps cohn.robot-learning-summary.ps", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get cohn.robot-learning-summary.ps ftp> quit unix> lpr -s cohn.robot-learning-summary.ps (or however you print postscript) -David Cohn e-mail: cohn at psyche.mit.edu Dept. 
of Brain & Cognitive Science phone: (617) 253-8409 MIT, E10-243 Cambridge, MA 02139 From paul at dendrite.cs.colorado.edu Mon Jan 25 00:45:49 1993 From: paul at dendrite.cs.colorado.edu (Paul Smolensky) Date: Sun, 24 Jan 1993 22:45:49 -0700 Subject: Colorado Boycott & Cognitive Science Conference Message-ID: <199301250545.AA22442@axon.cs.colorado.edu> [Moderator's note: The following message was accepted only because it is an official announcement by the Cognitive Science Society about the future of their conference. The CONNECTIONISTS list is not the proper venue for political discussions on topics like the Colorado boycott, so followups to this message will not be accepted. People concerned about the future of NIPS or other conferences are advised to contact the organizations that sponsor them. Please do not send mail to me or to Connectionists. -- Dave Touretzky] The Governing Board of the Cognitive Science Society is aware of the recent approval by the voters of Colorado of an amendment to their Constitution that bans antidiscriminatory legal activity to protect the rights of gays and lesbians (the amendment has been barred from implementation pending judicial review). This action has prompted calls for a boycott of the state. Since the Annual Meeting of the Society is scheduled for June 18-21 in Boulder, we feel obliged to make it clear that our action in no way implies an endorsement of the amendment nor disapproval of means being taken to oppose it. We are a small society with no professional convention staff. A last-minute change would produce great expense for members, who often pay their own way. Further, the logistics of conferences make it impossible to make a change now without causing considerable pain to hundreds of people who were not a party to the vote in Colorado. Here are additional reasons for our decision to continue with the meeting. * Boulder is one of the cities whose gay rights ordinance is threatened, and the City of Boulder was the primary party in the injunction suit (and thus in preventing immediate implementation). * The University of Colorado where the conference will be held has a policy of non-discrimination (including sexual preference) towards employees. (It could be strengthened, many believe) * Unlike most annual meetings this one is run entirely by the local committee, so that moving the meeting really means starting over from scratch. * A number of gay-rights organizations in Colorado are now OPPOSING the Boycott. Personally, the members of the Governing Board oppose the Colorado amendment. There is a strong precedent to avoid involving the Society in political issues, and our action in making this statement is primarily to avoid any suggestion that by not changing the meeting, we are opposing the boycott or endorsing the amendment. Obviously, setting any further meetings in Colorado after this June would itself be a political action in the current context, and thus is unlikely. Further, members are free to offer resolutions at the general membership meeting held during the conference (within the limits of our by-laws and corporate charter). Individual members, of course, may wish to take stronger steps. For example, they may wish to donate to one or more groups specifically opposing the amendment, including the groups listed at the end of this posting. Approved by electronic mail poll of the Governing Board. THE COGNITIVE SCIENCE SOCIETY, INC. 
Alan Lesgold, Secretary/Treasurer ============================================================== Groups Fighting the Amendment ** CLIP: Colorado Legal Initiatives Project. PO Box 44447, Denver, CO 80201. 303-830-2100. ** Equality Colorado. PO Box 300476, Denver, CO 80203. 303-839-5540. ** GLAAD: Gay and Lesbian Alliance for Anti-Defamation. PO Box 480662, Denver, CO 80248. 303-331-2773. ** Boycott Colorado, Inc. PO Box 300158, Denver, CO 80203. 303-777-0560. ** Deadheads Against Discrimination. 2888 Bluff, Suite 496, Boulder, CO 80301. ** Boulder NOW (National Organization for Women). Sally Barrett-Page, PO Box 7972, Boulder, CO 80306. 303-449-8117. ** BOND: Boulder Organization for Non-Discrimination. 1085 14th St., #1023, Boulder, CO 80302. 303-444-3455. From thgoh at iss.nus.sg Tue Jan 26 08:50:13 1993 From: thgoh at iss.nus.sg (Goh Tiong Hwee) Date: Tue, 26 Jan 93 15:10:13+080 Subject: No subject Message-ID: <9301260710.AA05268@iss.nus.sg> **DO NOT FORWARD TO OTHER GROUPS** The following paper has been deposited in Jordan Pollack's immensely useful Neuroprose archive at Ohio State. Retrieval instructions at end of message. Limited hardcopy requests will be answered for the next couple of months. --------------------------------------------------------------------------- thgoh.sense.ps.Z SEMANTIC EXTRACTION USING NEURAL NETWORK MODELLING AND SENSITIVITY ANALYSIS --------------------------------------------------------------------------- Retrieval instructions (the usual): ipc9>ftp archive.cis.ohio-state.edu Connected to archive.cis.ohio-state.edu. ftp> cd pub/neuroprose 250 CWD command successful. ftp> binary 200 Type set to I. ftp> get thgoh.sense.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for thgoh.sense.ps.Z (64235 bytes). 226 Transfer complete. TH Goh Institute of Systems Science National University of Singapore Kent Ridge Singapore 0511. Fax: (65)7782571 Email: thgoh at iss.nus.sg **DO NOT FORWARD TO OTHER GROUPS** From P06819JB%WUVMC.bitnet at WUVMD.Wustl.Edu Tue Jan 26 16:49:00 1993 From: P06819JB%WUVMC.bitnet at WUVMD.Wustl.Edu (John T. Bruer) Date: Tue, 26 Jan 93 15:49:00 CST Subject: Job Announcement -- Program Officer, J.S. McDonnell Foundation Message-ID: PROGRAM OFFICER The James S. McDonnell Foundation, a major private foundation having special interests in education and the behavioral and biological sciences, is seeking an energetic, resourceful professional to fill the position of Program Officer. The successful candidate will coordinate the foundation's existing research programs in education and cognitive neuroscience; assist in identifying and formulating new program areas; evaluate grant requests and monitor the progress of ongoing grants; interact regularly with grantees and the foundation's program advisory boards; plan and coordinate national meetings; assist in preparing and presenting material to the Board of Directors; and represent the foundation locally and nationally where appropriate. Nominees and applicants should demonstrate superior oral and written communication skills, and have a proven record of strong administrative abilities. Candidates for the position must hold a graduate degree in the biological, behavioral, or social sciences and have had at least five years of teaching, research, or administrative experience. Prior grantmaking experience is not required but will be considered favorably. Salary commensurate with experience plus fringe benefits. The deadline for receipt of application is April 16.
Qualified candidates should send a letter explaining their interest in the position, resume, salary requirements, and three letters of reference to: John T. Bruer, Ph.D. President James S. McDonnell Foundation 1034 S. Brentwood Blvd., Suite 1610 St. Louis, MO 63117 The James S. McDonnell Foundation is an Equal Opportunity/Affirmative Action Employer. From kak at max.ee.lsu.edu Wed Jan 27 10:07:47 1993 From: kak at max.ee.lsu.edu (Dr. S. Kak) Date: Wed, 27 Jan 93 09:07:47 CST Subject: The Self, the brain, and phantom limbs Message-ID: <9301271507.AA01384@max.ee.lsu.edu> Several correspondents have sought further information regarding the concept of self that has been discussed in my report "Reflections in clouded mirrors: selfhood in animals and machines", Technical Report 92-12, ECE-LSU, December 1, 1992. The following paper describes the "reality" of phantom limbs: Ronald Melzack, "Phantom limbs, the self and the brain", Canadian Psychology, 1989, 30, 1. What is fascinating is that children born with birth defects have a phantom which may not have such defects. Clearly the notion of self pervades all life, otherwise snakes would lunch on their tails! -Subhash Kak [Replies in English and Sanskrit] From andy at cma.MGH.Harvard.Edu Wed Jan 27 11:23:36 1993 From: andy at cma.MGH.Harvard.Edu (Andrew J. Worth) Date: Wed, 27 Jan 93 11:23:36 EST Subject: Call for Volunteers Message-ID: <9301271623.AA29433@cma.mgh.harvard.edu> --------------------------------------------------------------------------- ** CALL FOR VOLUNTEERS ** 1993 IEEE INTERNATIONAL CONFERENCE ON NEURAL NETWORKS SECOND IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS March 28 - April 1, 1993 San Francisco Hilton San Francisco, California, USA --------------------------------------------------------------------------- Volunteer positions are available for both FUZZ-IEEE'93 and ICNN'93. If you or anyone you know would like to exchange admittance to the conference for working as a volunteer, please respond directly to me at the e-mail address below. In the past, volunteers have given approximately 20 hours of labor (spread out over the entire conference) to receive: - admittance to the conference - a full set of proceedings - attendance at a limited number of tutorials (while working) Volunteer positions include helping at: - Stuffing Conference Proceedings - Poster Sessions - Technical Sessions - Evening Plenary Sessions - Social Events - OPTIONAL duty: Tutorials If you are interested in volunteering, please respond directly to me with the following information: - Electronic Mail Address - Last Name, First Name - Address - Country - Phone and FAX number Positions will be filled on a first-commit, first-served basis. There will be no funding available for volunteers' travel and lodging expenses. PLEASE RESPOND TO: andy at cma.mgh.harvard.edu Thank you, Andy. =---------------------------------------------------------------------= Andrew J.
Worth Center for Morphometric Analysis Volunteer Coordinator Neuroscience Center ICNN'93 / FUZZ-IEEE'93 Massachusetts General Hospital-East (617) 726-5711, FAX:726-5677 Building 149, 13th St., MA 02129 USA andy at cma.mgh.harvard.edu =---------------------------------------------------------------------= From bradtke at envy.cs.umass.edu Thu Jan 28 11:20:59 1993 From: bradtke at envy.cs.umass.edu (bradtke@envy.cs.umass.edu) Date: Thu, 28 Jan 93 11:20:59 -0500 Subject: paper placed in neuroprose Message-ID: <9301281620.AA05650@pride.cs.umass.edu> The paper "Learning to Act using Real-Time Dynamic Programming" has been placed in the Neuroprose Archives. It is a revised version of the COINS TR 91-57 "Real-time learning and control using asynchronous dynamic programming" and has been submitted to the AI Journal special issue on Computational Theories of Interaction and Agency. The new version has replaced the old version in the archives. The presentation has been cleaned up throughout, several errors have been corrected, and the experiments greatly expanded. Note that this new version uses a somewhat different experimental problem definition than the old version. ----------------------------------------------------- Learning to Act using Real-Time Dynamic Programming Andrew G. Barto, Steven J. Bradtke, Satinder P. Singh Department of Computer Science University of Massachusetts, Amherst MA 01003 Learning methods based on dynamic programming (DP) are receiving increasing attention in artificial intelligence. Researchers have argued that DP provides the appropriate basis for compiling planning results into reactive strategies for real-time control, as well as for learning such strategies when the system being controlled is incompletely known. We introduce an algorithm based on DP, which we call Real-Time DP (RTDP), by which an embedded system can improve its performance with experience. RTDP generalizes Korf's Learning-Real-Time-A* algorithm to problems involving uncertainty. We invoke results from the theory of asynchronous DP to prove that RTDP achieves optimal behavior in several different classes of problems. We also use the theory of asynchronous DP to illuminate aspects of other DP-based reinforcement learning methods such as Watkins' Q-Learning algorithm. A secondary aim of this article is to provide a bridge between AI research on real-time planning and learning and relevant concepts and algorithms from control theory. ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps barto.realtime-dp.ps.Z", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get barto.realtime-dp.ps.Z ftp> quit unix> uncompress barto.realtime-dp.ps unix> lpr -s barto.realtime-dp.ps (or however you print postscript) Steve Bradtke From bradtke at envy.cs.umass.edu Thu Jan 28 11:00:57 1993 From: bradtke at envy.cs.umass.edu (bradtke@envy.cs.umass.edu) Date: Thu, 28 Jan 93 11:00:57 -0500 Subject: preprint of NIPS paper Message-ID: <9301281600.AA05634@pride.cs.umass.edu> The following paper has been placed in the Neuroprose Archives. FTP instructions are given below. Reinforcement Learning Applied to Linear Quadratic Regulation Steven J. Bradtke Computer Science Department University of Massachusetts Amherst, MA 01003 bradtke at cs.umass.edu Recent research on reinforcement learning has focused on algorithms based on the principles of Dynamic Programming (DP). 
One of the most promising areas of application for these algorithms is the control of dynamical systems, and some impressive results have been achieved. However, there are significant gaps between practice and theory. In particular, there are no convergence proofs for problems with continuous state and action spaces, or for systems involving non-linear function approximators (such as multilayer perceptrons). This paper presents research applying DP-based reinforcement learning theory to Linear Quadratic Regulation (LQR), an important class of control problems involving continuous state and action spaces and requiring a simple type of non-linear function approximator. We describe an algorithm based on Q-learning that is proven to converge to the optimal controller for a large class of LQR problems. We also describe a slightly different algorithm that is only locally convergent to the optimal Q-function, demonstrating one of the possible pitfalls of using a non-linear function approximator with DP-based learning. ----------------------------------------------------- FTP INSTRUCTIONS Either use "Getps bradtke.nips5.ps.Z", or do the following: unix> ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> binary ftp> get bradtke.nips5.ps.Z ftp> quit unix> uncompress bradtke.nips5.ps unix> lpr -s bradtke.nips5.ps (or however you print postscript) Steve Bradtke From POCHEC%unb.ca at UNBMVS1.csd.unb.ca Thu Jan 28 15:26:12 1993 From: POCHEC%unb.ca at UNBMVS1.csd.unb.ca (POCHEC%unb.ca@UNBMVS1.csd.unb.ca) Date: Thu, 28 Jan 93 16:26:12 AST Subject: Call for papers: AI Symposium Message-ID: ================================================================== ================================================================== Call for Participation The 5th UNB AI Symposium ********************************* * * * Theme: * * ARE WE MOVING AHEAD? * * * ********************************* August 11-14, 1993 Sheraton Inn Fredericton Advisory Committee ================== N. Ahuja, Univ. of Illinois, Urbana W. Bibel, TH Darmstadt D. Bobrow, Xerox PARC M. Fischler, SRI P. Gärdenfors, Lund Univ. S. Grossberg, Boston Univ. J. Haton, CRIN T. Kanade, CMU R. Michalski, George Mason Univ. T. Poggio, MIT Z. Pylyshyn, Univ. of Western Ontario O. Selfridge, GTE Labs Y. Shirai, Osaka Univ. Program Committee ================= The international program committee will consist of approximately 40 members from all main fields of AI and from Cognitive Science. We invite researchers from the various areas of Artificial Intelligence, Cognitive Science and Pattern Recognition, including Vision, Learning, Knowledge Representation and Foundations, to submit articles which assess or review the progress made so far in their respective areas, as well as the relevance of that progress to the whole enterprise of AI. Other papers which do not address the theme are also invited. Feature ======= Four 70-minute invited talks and five panel discussions are devoted to the chosen topic: "Are we moving ahead: Lessons from Computer Vision." The speakers include (in alphabetical order) * Lev Goldfarb * Stephen Grossberg * Robert Haralick * Tomaso Poggio Such a concentrated analysis of the area will be undertaken for the first time. We feel that the "Lessons from Computer Vision" are of relevance to the entire AI community. Information for Authors ======================= Now: Fill out the form below and email it.
--- March 30, 1993: -------------- Four copies of an extended abstract (maximum of 4 pages including references) should be sent to the conference chair. May 15, 1993: ------------- Notification of acceptance will be mailed. July 1, 1993: ------------- Camera-ready copy of paper is due. Conference Chair: Lev Goldfarb Email: goldfarb at unb.ca Mailing address: Faculty of Computer Science University of New Brunswick P. O. Box 4400 Fredericton, New Brunswick Canada E3B 5A3 Phone: (506) 453-4566 FAX: (506) 453-3566 Symposium location The symposium will be held in the Sheraton Inn, Fredericton, which overlooks the beautiful Saint John River. IMMEDIATE REPLY FORM ==================== (please email to goldfarb at unb.ca) I would like to submit a paper. Title: _____________________________________ _____________________________________ _____________________________________ I would like to organize a session. Title: _____________________________________ _____________________________________ _____________________________________ Name: _____________________________________ _____________________________________ Department _____________________________________ University/Company _____________________________________ _____________________________________ _____________________________________ Address _____________________________________ _____________________________________ _____________________________________ Prov/State _____________________________________ Country _____________________________________ Telephone _____________________________________ Email _____________________________________ Fax _____________________________________ From wahba at stat.wisc.edu Sun Jan 31 20:20:30 1993 From: wahba at stat.wisc.edu (Grace Wahba) Date: Sun, 31 Jan 93 19:20:30 -0600 Subject: submission Message-ID: <9302010120.AA25404@hera.stat.wisc.edu> I would like to submit the following to connectionists - thanks much! **************** This is to announce two papers in the neuroprose archive: 1) Soft Classification, a.k.a. Penalized Log Likelihood and Smoothing Spline Analysis of Variance by Grace Wahba, Chong Gu, Yuedong Wang and Rick Chappell to appear in the proceedings of the Santa Fe Workshop on Supervised Machine Learning, August 1992, D. Wolpert and A. Lapedes, eds.; also partly presented at CLNL*92. 2) Smoothing Spline ANOVA with Component-Wise Bayesian `Confidence Intervals' by Chong Gu and Grace Wahba, to appear, J. Computational and Graphical Statistics wahba at stat.wisc.edu, chong at pop.stat.purdue.edu, wang at stat.wisc.edu, chappell at stat.wisc.edu Below are the abstracts followed by instructions for retrieving the papers. Grace Wahba ---------------------------------------------------------------------- Soft Classification, a.k.a. Penalized Log Likelihood and Smoothing Spline Analysis of Variance G. Wahba, C. Gu, Y. Wang and R. Chappell We discuss a class of methods for the problem of `soft' classification in supervised learning. In `hard' classification, it is assumed that any two examples with the same attribute vector will always be in the same class (or have the same outcome), whereas in `soft' classification two examples with the same attribute vector do not necessarily have the same outcome, but the *probability* of a particular outcome does depend on the attribute vector. In this paper we will describe a family of methods which are well suited for the estimation of this probability.
The method we describe will produce, for any value in a (reasonable) region of the attribute space, an estimate of the probability that the next example with that value of its attribute vector will be in class 1. Underlying these methods is an assumption that this probability varies in a smooth way (to be defined) as the predictor variables vary. The method combines results from Penalized log likelihood estimation, Smoothing splines, and Analysis of variance, to get the PSA class of methods. In the process of describing PSA we discuss some issues concerning the computation of degrees of freedom for signal, which has wider ramifications for the minimization of generalization error in machine learning. As an illustration we apply the method to the Pima-Indian Diabetes data set in the UCI Repository, and compare the results to Smith et al. (1988), who used the ADAP learning algorithm on this same data set to forecast the onset of diabetes mellitus. If the probabilities we obtain are thresholded to make a hard classification to compare with the hard classification of Smith et al., the results are very similar; however, the intermediate probabilities that we obtain provide useful and interpretable information on how the risk of diabetes varies with some of the risk factors. ........................... Smoothing Spline ANOVA with Component-Wise Bayesian `Confidence Intervals' C. Gu and G. Wahba We study a multivariate smoothing spline estimate of a function of several variables, based on an ANOVA decomposition as sums of main effect functions (of one variable), two-factor interaction functions (of two variables), etc. We derive the Bayesian `confidence intervals' of Wahba (1983) for the components of this decomposition and demonstrate that, even with multiple smoothing parameters, they can be efficiently computed using the publicly available code RKPACK, which was originally designed just to compute the estimates. We carry out a small Monte Carlo study to see how closely the actual properties of these component-wise confidence intervals match their nominal confidence levels. Lastly, we analyze some lake acidity data as a function of calcium concentration, latitude, and longitude, using both polynomial and thin plate spline main effects in the same model. ----------------------------------------------------------------------------- To retrieve these files from the neuroprose archive: unix> ftp archive.cis.ohio-state.edu Name (archive.cis.ohio-state.edu:wahba): anonymous Password: (use your email address) ftp> binary ftp> cd pub/neuroprose ftp> get wahba.soft-class.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for wahba.soft-class.ps.Z . ftp> get wahba.ssanova.ps.Z . 221 Goodbye. unix> uncompress wahba.soft-class.ps.Z unix> lpr wahba.soft-class.ps unix> uncompress wahba.ssanova.ps.Z unix> lpr wahba.ssanova.ps Thanks to Jordan Pollack for maintaining the archive. From smagt at fwi.uva.nl Sun Jan 31 07:42:57 1993 From: smagt at fwi.uva.nl (Patrick van der Smagt) Date: Sun, 31 Jan 1993 13:42:57 +0100 Subject: correction: 5th edition of neural network intro book Message-ID: <199301311242.AA19696@carol.fwi.uva.nl> Excuse! It appears that galba's ftp manager (the chap with the PassWord) changed the ftp site; previously, when you logged in on galba, a "cd pub" was done. This is changed for now, such that the instructions for getting The fifth edition of the neural network introductory text An Introduction to Neural Networks Ben Kr\"ose and Patrick van der Smagt Dept.
of Computer Systems University of Amsterdam are now: ----------------------------------------------------------------------------- To retrieve the document by anonymous ftp: Unix> ftp galba.mbfys.kun.nl (or ftp 131.174.82.73) Name (galba.mbfys.kun.nl ) anonymous 331 Guest login ok, send ident as password. Password ftp> bin ftp> cd pub/neuro-intro ftp> get neuro-intro.400.ps.Z 150 Opening ASCII mode data connection for neuro-intro.400.ps.Z (xxxxxx bytes). ftp> bye Unix> uncompress neuro-intro.400.ps.Z Unix> lpr -s neuro-intro.400.ps ;; optionally ----------------------------------------------------------------------------- There is a possibility that the previous state (where a "cd neuro-intro" instead of "cd pub/neuro-intro" must be done) will be restored in future. Be forewarned. The file neuro-intro.400.ps.Z is the manuscript for 400dpi printers. If you have a 300dpi printer, get neuro-intro.300.ps.Z instead. The 1991 version is still available as neuro-intro.1991.ps.Z. 1991 is not the number of dots per inch! We don't have such good printers here. Do preview the manuscript before you print it, since otherwise 131 pages of virginal paper are wasted. Some systems cannot handle the large postscript file (around 2M). On Unix systems it helps to give lpr the "-s" flag, such that the postscript file is not spooled but linked (see man lpr). On others, you may have no choice but to extract (chunks of) pages manually and print them separately. Unix filters like pstops, psselect, and psxlate (the source code of the latter is available from various ftp sites) can be used to select pages to be printed. Alternatively, print from your previewer. Better still, don't print at all! Enjoy! Patrick PS The lengths of some chapters reflect the focus of the research in our group. E.g., chapter 6 is ridiculously short (which was brought to my attention) and needs improvement. Next time.