From kzhang at cogsci.ucsd.edu Fri Mar 1 01:49:34 1996
From: kzhang at cogsci.ucsd.edu (Kechen Zhang)
Date: Thu, 29 Feb 1996 22:49:34 -0800
Subject: exact shift-invariance from position-dependent weights
Message-ID: <9603010649.AA21976@cogsci.UCSD.EDU>

People often do not realize that it is actually possible to get shift-invariant responses from position-dependent weight patterns. The mechanism may seem counter-intuitive at first sight, but the shift-invariance can be rigorously true.

The story begins with the puzzling behaviors of the neurons in the visual area MST of macaque monkeys. For example, some neurons responded very well to a disk rotating clockwise on a screen, no matter where the center of the disk was located. The same neurons would be inhibited if the disk rotated counterclockwise, once again no matter where the disk was located on the screen. Of course, some other cells would prefer counterclockwise rotations to clockwise ones, also in a shift-invariant manner. (The same is true for many dilation/contraction neurons, and probably also for spiral neurons.) Recall that MST is just the next processing stage after area MT, where neurons typically respond to translational movements in a comparatively small region (receptive field).

One might guess that some nonlinear, higher-order process underlies the phenomenon. But the brain has probably found a much simpler and more elegant solution. The plausible solution first emerged in a computer simulation experiment by Marty and Margaret Sereno. I helped to formalize their findings (hence this message). Poggio and colleagues independently arrived at a similar conclusion via a different path.

In short, rigorously shift-invariant responses can be obtained from a simple linear feedforward network whose weight pattern is not shift-invariant at all. The shift-invariance follows from what Poggio et al. called the Green theorems and we called the Gauss and Stokes theorems---all special cases of the general Stokes theorem, which can transform an integral along a closed curve into an integral over an area, and vice versa. Because the learned weight pattern (considered as a vector field) has a constant curl, the final response depends only on the area of the rotating disk. I think this is a nice example of a counter-intuitive neural mechanism for exact shift-invariance.

References:

[1] Sereno, M. I. and Sereno, M. E. (1991) Learning to see rotation and dilation with a Hebb rule. In: Advances in Neural Information Processing Systems 3, R. P. Lippmann, J. Moody and D. S. Touretzky, eds., pp. 320-326. Morgan Kaufmann, San Mateo, CA.

[2] Zhang, K., Sereno, M. I. and Sereno, M. E. (1993) Emergence of position-independent detectors of sense of rotation and dilation with Hebbian learning: an analysis. Neural Computation 5: 597-612.

[3] Poggio, T., Verri, A. and Torre, V. (1991) Green theorems and qualitative properties of optical flow. MIT A.I. Memo No. 1289.

-Kechen

________________________________________
Kechen Zhang
Department of Cognitive Science
University of California, San Diego
La Jolla, CA 92093-0515
kzhang at cogsci.ucsd.edu
________________________________________
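[Editorial note: in symbols, the argument above can be sketched as follows. The notation is introduced here for illustration and is not from the message. Suppose the unit sums MT-like inputs linearly, so that its response to an optical-flow field v over a disk D of radius R, centered at x0 and rotating at angular velocity omega, is

\[
r \;=\; \iint_D \mathbf{w}(\mathbf{x})\cdot\mathbf{v}(\mathbf{x})\,dA,
\qquad
\mathbf{v}(\mathbf{x}) \;=\; \omega\,\hat{\mathbf{z}}\times(\mathbf{x}-\mathbf{x}_0).
\]

A weight field with constant curl $c$ can be written as $\mathbf{w}(\mathbf{x}) = \tfrac{c}{2}\,\hat{\mathbf{z}}\times\mathbf{x} + \nabla\phi$. The gradient part contributes nothing: since the rotational flow is divergence-free, $\iint_D \nabla\phi\cdot\mathbf{v}\,dA$ reduces to a boundary term that vanishes because $\mathbf{v}$ is tangential on $\partial D$. Using $(\hat{\mathbf{z}}\times\mathbf{a})\cdot(\hat{\mathbf{z}}\times\mathbf{b}) = \mathbf{a}\cdot\mathbf{b}$, and noting that the cross term vanishes by symmetry about the disk's center,

\[
r \;=\; \frac{c\,\omega}{2}\iint_D \mathbf{x}\cdot(\mathbf{x}-\mathbf{x}_0)\,dA
\;=\; \frac{c\,\omega}{2}\iint_D |\mathbf{x}-\mathbf{x}_0|^2\,dA
\;=\; \frac{\pi\,c\,\omega\,R^4}{4},
\]

a function of the disk's size and sense of rotation only, independent of its position $\mathbf{x}_0$.]

From Connectionists-Request at cs.cmu.edu Fri Mar 1 00:05:47 1996
From: Connectionists-Request at cs.cmu.edu (Connectionists-Request@cs.cmu.edu)
Date: Fri, 01 Mar 96 00:05:47 EST
Subject: Bi-monthly Reminder
Message-ID: <24493.825656747@B.GP.CS.CMU.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

This note was last updated September 9, 1994.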
This is an automatically posted bi-monthly reminder about how the CONNECTIONISTS list works and how to access various online resources. CONNECTIONISTS is a moderated forum for enlightened technical discussions and professional announcements. It is not a random free-for-all like comp.ai.neural-nets. Membership in CONNECTIONISTS is restricted to persons actively involved in neural net research. The following posting guidelines are designed to reduce the amount of irrelevant messages sent to the list. Before you post, please remember that this list is distributed to thousands of busy people who don't want their time wasted on trivia. Also, many subscribers pay cash for each kbyte; they shouldn't be forced to pay for junk mail. -- Dave Touretzky & Lisa Saksida --------------------------------------------------------------------- What to post to CONNECTIONISTS ------------------------------ - The list is primarily intended to support the discussion of technical issues relating to neural computation. - We encourage people to post the abstracts of their latest papers and tech reports. - Conferences and workshops may be announced on this list AT MOST twice: once to send out a call for papers, and once to remind non-authors about the registration deadline. A flood of repetitive announcements about the same conference is not welcome here. - Requests for ADDITIONAL references. This has been a particularly sensitive subject. Please try to (a) demonstrate that you have already pursued the quick, obvious routes to finding the information you desire, and (b) give people something back in return for bothering them. The easiest way to do both these things is to FIRST do the library work to find the basic references, then POST these as part of your query. Here's an example: WRONG WAY: "Can someone please mail me all references to cascade correlation?" RIGHT WAY: "I'm looking for references to work on cascade correlation. I've already read Fahlman's paper in NIPS 2, his NIPS 3 abstract, corresponded with him directly and retrieved the code in the nn-bench archive. Is anyone aware of additional work with this algorithm? I'll summarize and post results to the list." - Announcements of job openings related to neural computation. - Short reviews of new textbooks related to neural computation. To send mail to everyone on the list, address it to Connectionists at CS.CMU.EDU ------------------------------------------------------------------- What NOT to post to CONNECTIONISTS: ----------------------------------- - Requests for addition to the list, change of address and other administrative matters should be sent to: "Connectionists-Request at cs.cmu.edu" (note the exact spelling: many "connectionists", one "request"). If you mention our mailing list to someone who may apply to be added to it, please make sure they use the above and NOT "Connectionists at cs.cmu.edu". - Requests for e-mail addresses of people who are believed to subscribe to CONNECTIONISTS should be sent to postmaster at appropriate-site. If the site address is unknown, send your request to Connectionists-Request at cs.cmu.edu and we'll do our best to help. A phone call to the appropriate institution may sometimes be simpler and faster. - Note that in many mail programs a reply to a message is automatically "CC"-ed to all the addresses on the "To" and "CC" lines of the original message. If the mailer you use has this property, please make sure your personal response (request for a Tech Report etc.) is NOT broadcast over the net. 
-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------

All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

  arch.yymm

where yymm stands for the obvious thing. Thus the earliest available data are in the file:

  arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month.

-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------

1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with password your username.
3. 'cd' directly to the following directory:
   /afs/cs/project/connect/connect-archives

The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory.

Problems? - contact us at "Connectionists-Request at cs.cmu.edu".

-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------

You can also access these files using the following url:

  http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives

----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------

Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52), pub/neuroprose directory.

This directory contains technical reports as a public service to the connectionist and neural network scientific community, which has an organized mailing list (for info: connectionists-request at cs.cmu.edu).

Researchers may place electronic versions of their preprints in this directory and announce their availability, and other interested researchers can rapidly retrieve and print the PostScript files. This saves copying, postage, and handling by having the interested reader supply the paper. We strongly discourage merging existing bodies of work into the repository, or using this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE

To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. The current naming convention is author.title.filetype.Z, where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z suffix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z suffix. An example of placing a file is in the appendix.

Make sure your paper is single-spaced, so as to save paper, and include an INDEX entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages, and 4) a one-sentence description.
See the INDEX file for examples.

ANNOUNCING YOUR PAPER

It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local PostScript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "Paper available: Solving Towers of Hanoi with ART-4".

Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-nets), and whether you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL, and other mailing systems, will also be posted as debugged and available. At any time, for any reason, the author may request that their paper be updated or removed.

For further questions contact:

Jordan Pollack
Associate Professor
Computer Science Department
Center for Complex Systems
Brandeis University, Waltham, MA 02254
Phone: (617) 736-2713/* to fax
email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.

Jordan, I just placed the file myname.title.ps.Z in the Inbox. Here is the INDEX entry:

myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read

Let me know when it is in place so I can announce it to Connectionists at cmu.
^D

AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING:

unix> mail connectionists
Subject: TR announcement: Born Again Perceptrons

FTP-host: archive.cis.ohio-state.edu
FTP-file: pub/neuroprose/myname.title.ps.Z

The file myname.title.ps.Z is now available for copying from the Neuroprose repository:

Random Paper (12 pages)
Somebody Somewhere
Cornell University

ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem.

~r.signature
^D

------------------------------------------------------------------------
How to FTP Files from the NN-Bench Collection
---------------------------------------------

1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155).
2. Log in as user "anonymous" with password your username.
3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software.
4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want.

Problems? - contact us at "neural-bench at cs.cmu.edu".

From joachim at fit.qut.edu.au Fri Mar 1 02:14:05 1996
From: joachim at fit.qut.edu.au (Joachim Diederich)
Date: Fri, 1 Mar 1996 17:14:05 +1000 (EST)
Subject: Postdoctoral Fellowships
Message-ID: <199603010714.RAA18553@aldebaran.fit.qut.edu.au>

POSTDOCTORAL RESEARCH FELLOWSHIPS
NEUROCOMPUTING RESEARCH CENTRE
QUEENSLAND UNIVERSITY OF TECHNOLOGY
BRISBANE, AUSTRALIA

QUT-NRC invites applications from qualified academics for a limited number of QUT Postdoctoral Fellowships. These fellowships are available to researchers with less than five years full-time professional experience since being awarded their PhD. The duration of the fellowship is between nine months and two years. Applications from researchers with a background in Computational Learning Theory or Hybrid Artificial Intelligence/Neurocomputing Systems are especially welcome.

The salary is A$37,345 to A$40,087 pa, depending on qualifications and experience. Before submitting an application, intending applicants must contact the Neurocomputing Research Centre. Only applications strongly supported by a QUT research centre will be considered by the university. Applications should reach the Human Resources Director, QUT, Locked Bag 2, Red Hill QLD 4059, by Friday 29 March 1996.

Please direct enquiries to:

Prof Joachim Diederich
Neurocomputing Research Centre
Queensland University of Technology
Box 2434, Brisbane Q 4001, AUSTRALIA
Phone: +61 7 3864-2143
Fax: +61 7 3864-1801
e-mail: joachim at fit.qut.edu.au

From rolf at cs.rug.nl Fri Mar 1 06:53:04 1996
From: rolf at cs.rug.nl (rolf@cs.rug.nl)
Date: Fri, 1 Mar 1996 12:53:04 +0100
Subject: Learning shift invariance
Message-ID: 

Dear connectionists,

first of all, thanks to Laurenz Wiskott and Jerry Feldman for arranging the arguments and thus giving the discussion a proper foundation. My view on the matter is the following.

The (to me) most interesting part is the generalizing ability which Laurenz has named 4b. I would define the challenge for a neural net to learn shift invariance as follows. There are N patterns and P positions. Beginning from tabula rasa, the network is presented ONE pattern in ALL possible positions to learn shift invariance. For practical reasons, more than one pattern may be required, but I would insist that shift invariance has to be learned from a small subset of the possible patterns. After having learned shift invariance that way, the network should be able to learn new patterns at a SINGLE position and then recognize them in an invariant way in ANY position. Again, I would allow a small number of positions. I grant that the network is NOW a structured one. That is what I would call a satisfactory solution to the problem of learning shift invariance.

The network in Geoffrey Hinton's paper does a good job, but it fails to meet this requirement. His parameters are N=16, P=12. Every pattern is trained at 10 (random) positions. So the number of training examples is 0.83*P*N, and the number of test examples to which the network generalizes is 0.17*P*N. This gets a little awkward for larger values of N and P. The task as outlined above would allow only s*(P+N-1) training examples, where s is the `small number'. Something like 3 should be appropriate, 1 desirable. Then the network should generalize and recognize all P*N examples correctly. Note that there is no objection to the choice of parameters in the paper, but to the scaling behavior for larger parameters. The network must have seen the patterns in almost all possible positions to do the generalization. As far as I have followed the discussion, the goal of an O(P+N) dependence of the training set size has not been reached yet (see the numerical sketch after this message).

I see 3 possibilities for settling the issue:

1) Construct a network that solves the problem as outlined above.
2) Prove that it can not be done.
3) Prove (experimentally) that visual perception can not solve the problem.

I am very interested in any progress in one of these directions, and I am looking forward to the further course of this discussion.

Rolf

+----------------------------------------------------------------------------+
| Rolf P. W"urtz | mailto: rolf at cs.rug.nl | URL: http://www.cs.rug.nl/~rolf/ |
| Department of Computing Science, University of Groningen, The Netherlands |
+----------------------------------------------------------------------------+
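[Editorial note: a rough numerical illustration of the scaling gap described above. This is a Python sketch; the function names and the larger (N, P) values are hypothetical, with the 10-of-12 training fraction taken from the message.]

def trained_fraction_examples(N, P, frac=10.0/12.0):
    # Training-set size when each of N patterns is seen at ~83% of the
    # P positions, as in the Hinton experiment discussed above: O(N*P).
    return frac * N * P

def small_subset_examples(N, P, s=3):
    # Training-set size under Rolf's criterion: s*(P+N-1) examples, O(N+P).
    return s * (P + N - 1)

for N, P in [(16, 12), (100, 100), (1000, 1000)]:
    print(N, P, trained_fraction_examples(N, P), small_subset_examples(N, P))

# For (16, 12) this gives 160 vs. 81 examples; for (1000, 1000) it is
# roughly 833,000 vs. 5,997 -- the O(N*P) requirement quickly dominates.

From dwang at cis.ohio-state.edu Fri Mar 1 17:56:13 1996
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Fri, 1 Mar 1996 17:56:13 -0500 (EST)
Subject: shift invariance
Message-ID: <199603012256.RAA25712@shirt.cis.ohio-state.edu>

Jerry Feldman writes

>2) Understanding how the visual system achieves shift invariance.
>
> This thread has been non-argumentative. The problem of invariances and
>constancies in the visual system remains central in visual science. I can't
>think of any useful message-sized summary, but this is an area where
>connectionist models should play a crucial role in expressing and testing
>theories. But, as several people have pointed out, we can't expect much from
>tabula rasa learning.

I'd like to know the evidence that the visual system achieves shift (translation) invariance (I'd appreciate references if any). It seems that the eye "focuses" on the object of interest first. In other words, the eye seems to shift with the object, not that the visual system is recognizing the object wherever it occurs on the retina. There seem to be problems with a system that DOES recognize an object no matter where it occurs, when the system faces more than one object, as we do all the time.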
> The unlearnability of shift invariance is not a problem in practice because
>people use preprocessing, weight sharing or other techniques to get shift
>invariance where it is known to be needed. However, it does pose a problem for
>the brain and for theories that are overly dependent on learning.

Why does it pose a problem to the brain? Perhaps the brain is doing what's regarded as "preprocessing" (a black hole containing many "troubling" things). I do agree that there are limits to tabula rasa learning. The reason that we can learn the things we do is, perhaps, critically linked to the prewiring of our brain. We know that we have a lot of difficulty in training a chimpanzee's brain to learn our language, let alone 3-layer perceptrons with backprop.

DeLiang Wang

From biehl at Physik.Uni-Wuerzburg.DE Fri Mar 1 06:36:58 1996
From: biehl at Physik.Uni-Wuerzburg.DE (Michael Biehl)
Date: Fri, 1 Mar 1996 12:36:58 +0100 (MEZ)
Subject: paper: dynamics of learning in two-layered networks
Message-ID: <199603011136.MAA03721@wptx08.physik.uni-wuerzburg.de>

FTP-host: ftp.physik.uni-wuerzburg.de
FTP-filename: /pub/preprint/WUE-ITP-96-003.ps.gz

The following paper is now available via anonymous ftp (see below for the retrieval procedure):

------------------------------------------------------------------
"Transient dynamics of on-line learning in two-layered neural networks"

Michael Biehl, Peter Riegler, and Christian W"ohler
Ref. WUE-ITP-96-003

Abstract

The dynamics of on-line learning in neural networks with continuous units is dominated by plateaus in the time dependence of the generalization error. Using tools from statistical mechanics, we show for a soft committee machine the existence of several fixed points of the dynamics of learning that give rise to complicated behavior, such as cascade-like runs through different plateaus with a decreasing value of the corresponding generalization error. We find learning-rate dependent phenomena, such as the splitting and disappearing of fixed points of the equations of motion. The dependence of plateau lengths on the initial conditions is described analytically, and simulations confirm the results.

---------------------------------------------------------------------
Retrieval procedure:

unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/preprint
ftp> get WUE-ITP-96-003.ps.gz (*)
ftp> quit
unix> gunzip WUE-ITP-96-003.ps.gz
e.g. unix> lp WUE-ITP-96-003.ps [15 pages]

(*) can be replaced by "get WUE-ITP-96-003.ps". The file will then be uncompressed before transmission (slow!).

_____________________________________________________________________
Michael Biehl
Institut fuer Theoretische Physik
Julius-Maximilians-Universitaet Wuerzburg
Am Hubland
D-97074 Wuerzburg

email: biehl at physik.uni-wuerzburg.de
Tel.: (+49) (0)931 888 5865 " " " 5131
Fax : (+49) (0)931 888 5141

From ingber at ingber.com Sun Mar 3 03:43:36 1996
From: ingber at ingber.com (Lester Ingber)
Date: Sun, 3 Mar 1996 00:43:36 -0800
Subject: Papers: Canonical Momenta of Financial Markets and Neocortical EEG
Message-ID: <199603030843.AAA06118@shellx.best.com>

Papers: Canonical Momenta of Financial Markets and Neocortical EEG

The following two preprints are available.

markets96_momenta.ps.Z [45K]
%A L.
Ingber %T Canonical momenta indicators of financial markets and neocortical EEG %B International Conference on Neural Information Processing (ICONIP'96) %I Springer %C New York %D 1996 %O This is an invited paper to the 1996 International Conference on Neural Information Processing (ICONIP'96), Hong Kong, 24-27 September 1996. URL http://www.ingber.com/markets96_momenta.ps.Z A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. smni96_momenta.ps.Z [45K] %A L. Ingber %T Canonical momenta indicators of neocortical EEG %B Physics Computing 96 (PC96) %I PC96 %C Krakow, Poland %D 1996 %O This is an invited paper to Physics Computing 96 (PC96), Krakow, Poland, 17-21 September 1996. URL http://www.ingber.com/smni96_momenta.ps.Z A model of statistical mechanics of neocortical interactions (SMNI) has been fit to EEG data using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and can be used as technical indicators in a recursive ASA optimization process to optimize clinician rules. This methodology has been applied to financial markets. This archive also contains the most recent version 12.10 of Adaptive Simulated Annealing (ASA) %A L. Ingber %T Adaptive Simulated Annealing (ASA) %R [http://www.ingber.com/ASA-shar, ASA-shar.Z, ASA.tar.Z, ASA.tar.gz, ASA.zip] %I Lester Ingber Research %C McLean, VA %D 1993 ASA is one of the most powerful optimization algorithms for nonlinear and stochastic systems, and is being used recursively in the above two projects. Please note that this archive recently has been moved to its present location from http://www.alumni.caltech.edu/~ingber/ and ftp.alumni.caltech.edu:/pub/ingber. Pointers to the new location will be found in the old location. ======================================================================== Instructions for Retrieval of Code and Reprints Interactively Via WWW The archive can be accessed via WWW path http://www.ingber.com/ Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.ingber.com [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Files have the same WWW and FTP paths under directory ingber.com; i.e., http://www.ingber.com/dir/file and ftp://ftp.ingber.com/dir/file reference the same file. Electronic Mail If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Additional Information Sorry, I cannot assume the task of mailing out hardcopies of code or papers. 
Limited help assisting people with their queries on my codes and papers is available only by electronic mail correspondence.

Lester

========================================================================
/* RESEARCH ingber at ingber.com *
* INGBER ftp://ftp.ingber.com *
* LESTER http://www.ingber.com/ *
* Dr. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */

From hermann at impa.br Sun Mar 3 10:12:50 1996
From: hermann at impa.br (Hermann Von Hasseln)
Date: Sun, 3 Mar 1996 12:12:50 -0300
Subject: TR announcement (IPF for conditionals)
Message-ID: <199603031512.MAA04517@Gauss.impa.br>

In connection with the recent announcement of Padhraic Smyth et al. ("Probabilistic Independence Networks For Hidden Markov Probability Models") I'd like to announce the following technical report, which might be of interest to you:

AN IPF PROCEDURE FOR MIXED GRAPHICAL MODELS

Hermann von Hasseln
IMPA
Instituto de Matem\'atica Pura e Aplicada
Rio de Janeiro, Brazil

Abstract

We introduce a variant of the well-known iterative proportional fitting (IPF) procedure. Whereas the traditional IPF procedure uses a given set of marginal probabilities as constraints that have to be satisfied in each iteration, we show that the same can be done with a given set of compatible conditional probabilities. In the case of compatible conditionals, convergence is guaranteed by a theorem by Csisz\'ar. We also define a ``mixed'' version of IPF procedures, where the set of constraints is given by a mixed set of marginal and conditional probabilities.

Keywords: Iterative proportional fitting, maximum likelihood estimation, graphical models, maximum entropy, minimum discrimination information, conditionally specified distributions.

To obtain a copy of this report, please send your email request to hermann at impa.br

Comments are welcome.

Hermann von Hasseln
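[Editorial note: for orientation, here is a minimal sketch of the classical marginal-constrained IPF iteration that the report above generalizes. This is illustrative Python, not code from the report; the report's contribution -- conditional and mixed constraints -- is not implemented here, and the starting table is assumed strictly positive.]

import numpy as np

def ipf(table, row_marg, col_marg, iters=100, tol=1e-10):
    # Classical IPF: alternately rescale the rows and columns of a joint
    # table until its marginals match the given targets.
    p = table.astype(float).copy()
    for _ in range(iters):
        p *= (row_marg / p.sum(axis=1))[:, None]   # enforce row marginals
        p *= (col_marg / p.sum(axis=0))[None, :]   # enforce column marginals
        if (np.abs(p.sum(axis=1) - row_marg).max() < tol and
                np.abs(p.sum(axis=0) - col_marg).max() < tol):
            break
    return p

# Example: fit a uniform 2x2 table to marginals (0.7, 0.3) and (0.6, 0.4).
p = ipf(np.ones((2, 2)) / 4.0, np.array([0.7, 0.3]), np.array([0.6, 0.4]))
print(p)   # rows sum to (0.7, 0.3), columns to (0.6, 0.4)

From goldfarb at unb.ca Sun Mar 3 15:58:17 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Sun, 3 Mar 1996 16:58:17 -0400 (AST)
Subject: shift invariance
In-Reply-To: <9602281000.ZM15421@ICSI.Berkeley.edu>
Message-ID: 

On Wed, 28 Feb 1996, Jerry Feldman wrote:

> 2) Understanding how the visual system achieves shift invariance.
>
> This thread has been non-argumentative. The problem of invariances and
> constancies in the visual system remains central in visual science.

I realize that I'm talking to the connectionist "family", but I still want to remind you that under symbolic encoding the above problem essentially "disappears".

Lev Goldfarb
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From terry at salk.edu Sun Mar 3 18:09:31 1996
From: terry at salk.edu (Terry Sejnowski)
Date: Sun, 3 Mar 96 15:09:31 PST
Subject: Journal Impact Factors
Message-ID: <9603032309.AA15208@salk.edu>

According to the latest Journal Citation Reports ratings, neural network journals took 3 out of the top 4 spots for impact factor (citations per article) for the area COMPUTER SCIENCE/ARTIFICIAL INTELLIGENCE:

1. Neural Computation 3.139
2. IEEE Trans. Pattern Analy. 2.006
3. IEEE Trans. Neural Net. 1.941
4. Neural Networks 1.939
5. Artificial Intelligence 1.915
6. Chemometr. Intell. Lab. 1.752
7. Machine Learning 1.721
8. Network 1.196
9. Int. J. Comput. Vision 1.153
10. Cogn. Brain Res. 0.880
11. AI Magazine 0.736
12. Pattern Recognition 0.691
13. Artif. Intell. Medicine 0.672
14. IEEE Expert 0.629
15. Image Vision Comput. 0.602
16. Intern. J. Intell. Systems 0.512
17. IEEE Trans. Knowl. Data En. 0.461
18. Artif. Intell. Review 0.457
19. Intern. J. Softw. Eng. Know. 0.420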
20. Pattern Recognition Lett. 0.381

Terry

-----

From SAMY at gmr.com Sun Mar 3 21:52:41 1996
From: SAMY at gmr.com (R. Uthurusamy)
Date: Sun, 03 Mar 1996 21:52:41 -0500 (EST)
Subject: New Book: Advances in Knowledge Discovery and Data Mining
Message-ID: <01I1X119VLS68ZEZKM@gmr.com>

New Book Announcement:

Advances in Knowledge Discovery and Data Mining
-----------------------------------------------
Edited by Usama M. Fayyad, Gregory Piatetsky-Shapiro, Padhraic Smyth, and Ramasamy Uthurusamy

Published by the AAAI Press / The MIT Press
ISBN 0-262-56097-6
March 1996, 625 pp. Price: $50.00

This book can be ordered online from The MIT Press: http://mitpress.mit.edu/

More info at:
http://www-mitpress.mit.edu/mitp/recent-books/comp/fayap.html
http://www.aaai.org/Publications/Press/Catalog/fayyad.html
(This AAAI website also has abstracts of chapters)

----------------------------------------------------------------------------

"Advances in Knowledge Discovery and Data Mining" brings together the latest research -- in statistics, databases, machine learning, and artificial intelligence -- that is part of the exciting and rapidly growing field of Knowledge Discovery and Data Mining. Topics covered include fundamental issues, classification and clustering, trend and deviation analysis, dependency modeling, integrated discovery systems, next generation database systems, and application case studies. The contributors include leading researchers and practitioners from academia, government laboratories, and private industry.

The last decade has seen an explosive growth in the generation and collection of data. Advances in data collection, widespread use of bar codes for most commercial products, and the computerization of many business and government transactions have flooded us with data and generated an urgent need for new techniques and tools that can intelligently and automatically assist in transforming this data into useful knowledge. This book is a timely and comprehensive overview of the new generation of techniques and tools for knowledge discovery in data.

----------------------------------------------------------------------------

Contents
--------

Foreword: On the Barriers and Future of Knowledge Discovery / vii
Gio Wiederhold

Preface / xiii

Chapter 1: From Data Mining to Knowledge Discovery: An Overview / 1
Usama M. Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth

Part I: Foundations

Chapter 2: The Process of Knowledge Discovery in Databases: A Human-Centered Approach
Ronald J. Brachman and Tej Anand / 37

Chapter 3: Graphical Models for Discovering Knowledge
Wray Buntine / 59

Chapter 4: A Statistical Perspective on Knowledge Discovery in Databases
John Elder IV and Daryl Pregibon / 83

Part II: Classification and Clustering

Chapter 5: Inductive Logic Programming and Knowledge Discovery in Databases
Saso Dzeroski / 117

Chapter 6: Bayesian Classification (AutoClass): Theory and Results
Peter Cheeseman and John Stutz / 153

Chapter 7: Discovering Informative Patterns and Data Cleaning
Isabelle Guyon, Nada Matic, and Vladimir Vapnik / 181

Chapter 8: Transforming Rules and Trees into Comprehensible Knowledge Structures
Brian R. Gaines / 205

Part III: Trend and Deviation Analysis

Chapter 9: Finding Patterns in Time Series: A Dynamic Programming Approach
Donald J. Berndt and James Clifford / 229
Chapter 10: Explora: A Multipattern and Multistrategy Discovery Assistant
Willi Kloesgen / 249

Part IV: Dependency Derivation

Chapter 11: Bayesian Networks for Knowledge Discovery
David Heckerman / 273

Chapter 12: Fast Discovery of Association Rules
Rakesh Agrawal, Heikki Mannila, Ramakrishnan Srikant, Hannu Toivonen, and A. Inkeri Verkamo / 307

Chapter 13: From Contingency Tables to Various Forms of Knowledge in Databases
Robert Zembowicz and Jan M. Zytkow / 329

Part V: Integrated Discovery Systems

Chapter 14: Integrating Inductive and Deductive Reasoning for Data Mining
Evangelos Simoudis, Brian Livezey, and Randy Kerber / 353

Chapter 15: Metaqueries for Data Mining
Wei-Min Shen, KayLiang Ong, Bharat Mitbander, and Carlo Zaniolo / 375

Chapter 16: Exploration of the Power of Attribute-Oriented Induction in Data Mining
Jiawei Han and Yongjian Fu / 399

Part VI: Next Generation Database Systems

Chapter 17: Using Inductive Learning To Generate Rules for Semantic Query Optimization
Chun-Nan Hsu and Craig A. Knoblock / 425

Chapter 18: Data Surveyor: Searching the Nuggets in Parallel
Marcel Holsheimer, Martin L. Kersten, and Arno P.J.M. Siebes / 447

Part VII: KDD Applications

Chapter 19: Automating the Analysis and Cataloging of Sky Surveys
Usama M. Fayyad, S. George Djorgovski, and Nicholas Weir / 471

Chapter 20: Selecting and Reporting What is Interesting: The KEFIR Application to Healthcare Data
Christopher J. Matheus, Gregory Piatetsky-Shapiro, and Dwight McNeill / 495

Chapter 21: Modeling Subjective Uncertainty in Image Annotation
Padhraic Smyth, Usama M. Fayyad, Michael C. Burl, and Pietro Perona / 517

Chapter 22: Predicting Equity Returns from Securities Data with Minimal Rule Generation
Chidanand Apte and Se June Hong / 541

Chapter 23: From Data Mining to Knowledge Discovery: Current Challenges and Future Directions
Ramasamy Uthurusamy / 561

Part VIII: Appendices

Knowledge Discovery in Databases Terminology
Willi Kloesgen and Jan M. Zytkow / 573

Data Mining and Knowledge Discovery Internet Resources
Gregory Piatetsky-Shapiro / 593

About The Editors / 597
Index / 601

----------------------------------------------------------------------------

For Additional Information contact:

American Association for Artificial Intelligence (AAAI)
445 Burgess Drive, Menlo Park, California 94025-3496 USA
Telephone: 415-328-3123 / Fax: 415-321-4457 / Email: info at aaai.org

----------------------------------------------------------------------------

From ZECCHINA at to.infn.it Mon Mar 4 07:19:33 1996
From: ZECCHINA at to.infn.it (Riccardo Zecchina - tel.11-5647358, fax. 11-5647399)
Date: Mon, 4 Mar 1996 13:19:33 +0100 (MET)
Subject: paper: Learning and Generalization in Large Committee-Machines
Message-ID: <960304131933.60400dc4@to.infn.it>

FTP-host: archive.cis.ohio-state.edu

The following paper is now available for copying from

FTP-filename: /pub/neuroprose/zecchina.committee.ps.Z

Title: LEARNING AND GENERALIZATION THEORIES OF LARGE COMMITTEE-MACHINES

Authors: Remi Monasson and Riccardo Zecchina
(to appear in Int. J. Mod. Phys. B)

Abstract: The study of the distribution of volumes associated with the internal representations of learning examples allows us to derive the critical learning capacity ($\alpha_c=\frac{16}{\pi} \sqrt{\ln K}$) of large committee machines, to verify the stability of the solution in the limit of a large number $K$ of hidden units, and to find a Bayesian generalization cross-over at $\alpha=K$.
Retrieving instructions:

unix> ftp archive.cis.ohio-state.edu
login: anonymous
passwd: (your email address)
ftp> cd /pub/neuroprose
ftp> binary
ftp> get zecchina.committee.ps.Z
ftp> quit
unix> uncompress zecchina.committee.ps.Z

E-mail: zecchina at to.infn.it

From orsier at cui.unige.ch Mon Mar 4 09:08:00 1996
From: orsier at cui.unige.ch (Orsier Bruno)
Date: Mon, 4 Mar 1996 15:08:00 +0100
Subject: TR+software available - finding global minima
Message-ID: <943*/S=orsier/OU=cui/O=unige/PRMD=switch/ADMD=400net/C=ch/@MHS>

"Another hybrid algorithm for finding a global minimum of MLP error functions"

Technical Report UNIGE-AI-95-6
Bruno ORSIER, CUI, University of Geneva --- orsier at cui.unige.ch

ABSTRACT: This report presents \pstar, a new global optimization method for training multilayered perceptrons. Instead of local minima, global minima of the error function are found. This new method is hybrid in the sense that it combines three very different optimization techniques: Random Line Search, Scaled Conjugate Gradient and a 1-dimensional minimization algorithm named P$^*$. The best features of each component are retained by the hybrid method: the simplicity of Random Line Search, the efficiency of Scaled Conjugate Gradient, and the efficiency and convergence toward a global minimum of P$^*$. \pstar\ is empirically shown to perform better or much better than three other global random optimization methods and a global deterministic optimization method.

Retrieval: http://cuiwww.unige.ch/AI-group/staff/orsier.html

\pstar\ and its test problems are available for users of the Stuttgart Neural Network Simulator. See also http://cuiwww.unige.ch/AI-group/staff/orsier.html for details.

Best regards,
Bruno Orsier

E-mail: orsier at cui.unige.ch
University of Geneva
WWW: http://cuiwww.unige.ch/AI-group/staff/orsier.html

From nmg at skivs.ski.org Mon Mar 4 14:03:09 1996
From: nmg at skivs.ski.org (Norberto Grzywacz)
Date: Mon, 4 Mar 1996 11:03:09 -0800 (PST)
Subject: shift invariance
In-Reply-To: <199603012256.RAA25712@shirt.cis.ohio-state.edu>
Message-ID: 

On Fri, 1 Mar 1996, DeLiang Wang wrote:

> I'd like to know the evidence that the visual system achieves shift
> (translation) invariance (I'd appreciate references if any). It seems
> that the eye "focuses" on the object of interest first. In other
> words, the eye seems to shift with the object, not that the visual system is
> recognizing the object wherever it occurs on the retina.

A form of shift invariance appears to exist in cortical neurons of the anterior part of the superior temporal sulcus and of the inferior temporal cortex. Neurons in these areas have large receptive fields, which can show considerable selectivity for what the stimulus is irrespective of exactly where it is in the visual field. I would call this property "selectivity shift invariance," to contrast with "absolute shift invariance," which the cortex does not appear to have. The amplitude of cell responses varies (falls) with eccentricity, even though the cells maintain their selectivity. Moreover, the amplitude of the responses is modulated by the presence of other objects in the receptive fields. Three relevant references are:

Tovee, M.J., Rolls, E.T., and Azzopardi, P. (1994) Translation invariance in the responses to faces of single neurons in the temporal visual cortical areas of the alert macaque. J. Neurophysiol. 72:1049-1060.

Rolls, E.T. and Tovee, M.J.
(1995) The responses of single neurons in the temporal visual cortical areas of the macaque when more than one stimulus is present in the receptive field. Exp. Brain Res. 103:409-420. Ito, M., Tamura, H., Fujita, I., and Tanaka, K. (1995) Size and position invariance of neuronal responses in monkey inferotemporal cortex. J. Neurophysiol. 73:218-226. Norberto Grzywacz  From isri at gpg.com Mon Mar 4 20:11:35 1996 From: isri at gpg.com (isri@gpg.com) Date: Mon, 4 Mar 96 20:11:35 -0500 (EST) Subject: Neural Networks Symposium NNS'96 - part of ICICS'96 Message-ID: ------------------------------------------------------------------------------- First Call for Contributions NEURAL NETWORKS SYMPOSIUM - NNS'96 as part of 1996 International Conference on Intelligent and Cognitive Systems ICICS'96 Comprising three symposia on Neural Networks, Fuzzy Systems, and Cognitive Science Sept. 23-26, 1996 Intelligent Systems Research Institute Tehran, Iran http://www.gpg.com/isri ------------------------------------------------------------------------------- Scope: Papers are solicited for, but not limited to, the following areas: * Theoretical aspects of neural networks, including Spin glasses, Coding Theory ... * Learning algorithms * New architectures and topologies * Simulation environments for neural networks * Analysis and organization of knowledge in neural networks * Neuro-fuzzy algorithms * Applications of neural networks. Program Committee: Neural Networks Symposium M.H. Abassian, M.R. Hashemi Golpayegani, C. Lucas, A.R. Mirzai (Co-chair), B. Moshiri, S. Rouhani (Co-chair), N. Sadati, V. Tahani, M.H. Zand. ------------------------------------------------------------------------------- Contribution Procedure Scientific papers should report novel results and achievements in the field of neural computing. Tutorial and review papers will be acceptable only in exceptional circumstances. Product oriented papers will be presented in a special session. Prospective contributors are invited to submit an extended summary (500-1000 words) of their paper emphasizing the novel results of their theoretical or applied research, and including the title, author(s), affiliation(s), address, telephone, fax, E-mail(s), to Intelligent Systems Research Institute P.O. Box 19395-5746 Tehran, Iran E-mail: int_sys at rose.ipm.ac.ir Please also cite: ``Submitted for possible presentation at ... '' together with the name of the conference/symposium to which the paper is contributed For special sessions/exhibitions/product introductions, etc., the same procedure applies. Please cite: ``Proposal for ... in ...'' ------------------------------------------------------------------------------- Timetable Deadline for receiving summaries/proposals: 20 April 1996 Notification of acceptance: 20 June 1996 Receipt of the full text: 20 August 1996 (The Program Committee will review the manuscripts and return to the authors for appropriate action if any shortcoming is noticed. The responsibility will always belong to the author(s)). ------------------------------------------------------------------------------- Contact Information Intelligent Systems Research Institute P.O. 
Box 19395-5746
Tehran, Iran
E-mail: int_sys at rose.ipm.ac.ir
WWW: http://www.gpg.com/isri
-------------------------------------------------------------------------------

From postma at cs.rulimburg.nl Tue Mar 5 11:33:00 1996
From: postma at cs.rulimburg.nl (Eric Postma)
Date: Tue, 5 Mar 96 17:33:00 +0100
Subject: shift invariance
Message-ID: <9603051633.AA15499@bommel.cs.rulimburg.nl>

DeLiang Wang wrote

>I'd like to know the evidence that the visual system achieves shift
>(translation) invariance (I'd appreciate references if any).

Biederman and Cooper (1991) found that an object presented at one location of the retina facilitated recognition of that object at other locations. The visual system does not achieve perfect translation invariance, as shown by Nazir and O'Regan (1990).

Biederman, I. & Cooper, E.E. (1991). Evidence for complete translational and reflectional invariance in visual object priming. Perception, 20, 585-593.

Nazir, T.A. & O'Regan, J.K. (1990). Some results on translation invariance in the human visual system. Spatial Vision, 5, 81-100.

DeLiang Wang wrote

>It seems that the eye "focuses" on the object of interest first. In other
>words, the eye seems to shift with the object, not that the visual system is
>recognizing the object wherever it occurs on the retina.

...and...

>There seem to be problems with a system that DOES recognize an object no
>matter where it occurs, when the system faces more than one object, as we
>do all the time.

In addition to the selection of objects through direction of gaze (overt attention), there exists an attentional process which operates independently of eye movements. This process is known as covert attention and may be likened (to a certain extent) to a searchlight. When fixing your gaze on a single letter of this text, you may still be able to select and identify the adjacent letters and words. Inspired by Anderson and Van Essen's (1987) shifter circuits, we developed a scalable model of covert attention capable of translation-invariant pattern processing (Postma, van den Herik, and Hudson, 1994, 1996 submitted). Our model is similar to the model proposed by Olshausen, Anderson, and Van Essen (1993, 1995) and is based on the idea that attentional selection provides a solution to the problem of translation invariance and the problem of selecting (parts of) objects. The attentional searchlight selects parts of a scene and maps their contents into a pattern-recognition stage without affecting the spatial ordering of the selected pattern.

Anderson, C.H. & Van Essen, D.C. (1987). Shifter circuits: A computational strategy for dynamic aspects of visual processing. {\em Proceedings of the National Academy of Sciences U.S.A.}, {\bf 84}, 6297-6301.

Olshausen, B.A., Anderson, C.H., & Van Essen, D.C. (1993). A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. {\em The Journal of Neuroscience}, {\bf 13}, 4700-4719.

Olshausen, B.A., Anderson, C.H., & Van Essen, D.C. (1995). A multiscale routing circuit for forming size- and position-invariant object representations. {\em The Journal of Computational Neuroscience}, {\bf 2}, 45-62.

Postma, E.O., Van den Herik, H.J., & Hudson, P.T.W. (1994). Attentional scanning. In A. Cohn (Ed.), {\em ECAI 94, 11th European Conference on Artificial Intelligence} (pp. 173-177). New York: John Wiley and Sons.

Postma, E.O., Van den Herik, H.J., & Hudson, P.T.W. (1996). SCAN: a scalable model of attentional selection. Submitted to Neural Networks.
Eric Postma

Eric Postma
Computer Science Department
Faculty of General Sciences
University of Limburg
P.O. Box 616
6200 MD Maastricht
The Netherlands

From maja at garnet.cs.brandeis.edu Tue Mar 5 12:22:34 1996
From: maja at garnet.cs.brandeis.edu (Maja Mataric)
Date: Tue, 5 Mar 1996 12:22:34 -0500
Subject: MS & PhD program in AI, robotics, evolutionary comp, etc.
Message-ID: <199603051722.MAA24525@garnet.cs.brandeis.edu>

In May 1994 Brandeis University announced the opening of the new Volen National Center for Complex Systems, with the goal of promoting interdisciplinary research and collaboration between faculty from Computer Science, Linguistics, Neuroscience, Psychology, Biology and Physics. The Center, whose main mission is to study cognitive science, brain theory, and advanced computation, has already earned accolades from scientists worldwide, and continues to expand.

Brandeis is located in Waltham, a suburb 10 miles west of Boston, with easy rail access to both Cambridge and downtown. Founded in 1948, it is recognized as one of the finest private liberal arts and science universities in the United States. Brandeis combines the breadth and range of academic programs usually found at much larger universities with the friendliness of a smaller and more focused research community.

The Computer Science Department is located in the Volen Center and is the home of four Artificial Intelligence faculty actively involved in the Center's activities and collaborations: Rick Alterman, Maja Mataric, Jordan Pollack, and James Pustejovsky. In addition to SGI and HP workstations, the Dept owns a 4096-processor Maspar MP2 and a 16-processor SGI Challenge supercomputer, and has new electronics and metalworking facilities to support innovative research.

Rick Alterman's research interests are in the general areas of artificial intelligence and cognitive science and include such topics as: planning and activity, discourse and text processing, memory and case-based reasoning, and human-computer interaction. A recent focus has been on theories of pragmatics and usage as they apply to the problems of man-machine interaction. One project resulted in the construction of a detailed cognitive model of an individual learning how to use a device; significant features of this model included techniques for skill acquisition and learning, a method for organizing procedural knowledge in memory, and "reading techniques" for actively seeking out and interpreting instructions that are relevant to a given "break down" situation. A second project develops a method of system adaptation where the system automatically evolves to the specifics of its task environment, after it is deployed, based on the history of usage of the system for a given task. A third project develops techniques that support the evolution and maintenance of a collective memory for a community of distributed heterogeneous agents who plan and work cooperatively. Professor Alterman is especially looking for students (and postdocs) with backgrounds in planning and activity, memory and case-based reasoning, text and information retrieval, and human-computer interaction.

Maja Mataric's research focuses on understanding systems that integrate perception, representation, learning, and action. Her work is applied to problems of synthesis and analysis of complex behavior in situated agents and multi-agent systems.
Mataric's Interaction Lab (http://www.cs.brandeis.edu/~agents) covers three main project areas: 1) multi-robot projects (dynamic task division, specialization, learning behaviors and behavior selection, learning social rules, distributed spatial representations, synthesis and analysis of multi-robot controllers; using 24 mobile robots and a dynamical robot simulator); 2) multi-agent projects (cooperation vs. competition, dominance hierarchies, modeling markets, economies, and ecologies with non-rational agents, synthesizing and analyzing complex group behavior; using various multi-agent simulations); and 3) multi-modal representation projects (modeling learning by imitation involving perception, representation, and motor control, sensory-motor mappings, learning new motor behaviors, adapting internal motor programs, attention, and analysis of moving images; using a fully dynamic human torso simulation). Prof. Mataric encourages students with interests and/or backgrounds in AI, robotics, autonomous agents, machine learning, cognitive science, and cognitive neuroscience to apply. For more information see http://www.cs.brandeis.edu/~maja.

Jordan Pollack's research interests lie at the boundary between neural and symbolic computation: How could simple neural mechanisms, organized naturally into multi-cellular structures by evolution, provide the capacity necessary for cognition, language, and general intelligence? This view has led to successful work on how variable tree-structures could be represented in neural activity patterns, how dynamical systems could act as language generators and recognizers, and how fractal limit behavior of recurrent networks could represent mental imagery. One major current focus is on co-evolutionary learning, in which the learning task is dynamically constructed as a carrot dangling in front of the machine-learning horse. In the Dynamical and Evolutionary Machine Organization (DEMO), we are working on co-evolution in strategic game-playing agents, cognitive tasks, and teams of agents who cooperate and communicate on complex tasks. As substrate we use recurrent neural networks and genetic programs, and use the 4096-processor Maspar machine. Professor Pollack is especially looking for students (and postdocs) with backgrounds in NN's & GA's, IFS's, robot building, and evolutionary agents. For more information see http://www.cs.brandeis.edu/~pollack or http://www.demo.cs.brandeis.edu

James Pustejovsky conducts research in the areas of computational linguistics, lexical semantics, and information retrieval and extraction. The main focus of his current research is on the computational and cognitive modeling of natural language meaning; more specifically, on how words and their meanings combine to form meaningful texts. This research has focused on developing a theory of lexical semantics based on a methodology making use of formal and computational semantics. There are several projects applying the results of this theory to Natural Language Processing, which, in effect, empirically test this view of semantics. These include: an NSF grant with Apple to automatically construct index libraries and help systems for applications, and a DEC grant to automatically convert a trouble-shooting text-corpus into a case library. He recently completed a joint project with aphasiologist Dr. Susan Kohn on word-finding difficulties and sentence generation in aphasics.
For more information see http://www.cs.brandeis.edu/~jamesp/ The four AI faculty work together and with other members of the Volen Center, creating new interdisciplinary research opportunities in areas including cognitive science (http://fechner.ccs.brandeis.edu/cogsci.html) computational neuroscience, and complex systems at Brandeis University. To get more information about the Volen Center for Complex Systems, about the Computer Science Department, and about other faculty, see: http://www.cs.brandeis.edu/dept The URL for the graduate admission information is http://www.cs.brandeis.edu/dept/grad-info/application.html Graduate applications will begin to be reviewed on March 18th.  From icsc at freenet.edmonton.ab.ca Tue Mar 5 12:33:44 1996 From: icsc at freenet.edmonton.ab.ca (icsc@freenet.edmonton.ab.ca) Date: Tue, 5 Mar 1996 10:33:44 -0700 (MST) Subject: Announcement and Call for Papers ISFL'97 Message-ID: Announcement and Call for Papers Second International ICSC Symposium on FUZZY LOGIC AND APPLICATIONS ISFL'97 To be held at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland February 12 - 14, 1997 I. SPONSORS Swiss Federal Institute of Technology (ETH), Zurich, Switzerland and ICSC, International Computer Science Conventions, Canada/Switzerland II. PURPOSE OF THE CONFERENCE This conference is the successor of the highly successful meeting held in Zurich in 1995 (ISFL'95) and is intended to provide a forum for the discussion of new developments in fuzzy logic and its applications. An invitation to participate is extended both to those who took part in ISFL'95 and to others working in this field. Applications of fuzzy logic have played a significant role in industry, notably in the field of process and plant control, especially in applications where accurate modelling is difficult. The organisers hope that contributions will come not only from this field, but also from newer applications areas, perhaps in business, financial planning management, damage assessment, security, and so on. III. TOPICS Contributions are sought in areas based on the list below, which is indicative only. Contributions from new application areas will be particularly welcome. - Basic concepts such as various kinds of Fuzzy Sets, Fuzzy Relations, Possibility Theory - Neuro-Fuzzy Systems and Learning - Fuzzy Decision Analysis - Image Analysis with Fuzzy Techniques - Mathematical Aspects such as non-classical logics, Category Theory, Algebra, Topology, Chaos Theory - Modeling, Identification, Control - Robotics - Fuzzy Reasoning, Methodology and Applications, for example in Artificial Intelligence, Expert Systems, Image Processing and Pattern Recognition, Cluster Analysis, Game Theory, Mathematical Programming, Neural Networks, Genetic Algorithms and Evolutionary Computing - Implementation, for example in Engineering, Process Control, Production, Medicine - Design - Damage Assessment - Security - Business, Finance, Management IV. INTERNATIONAL SCIENTIFIC COMMITTEE (ISC) - Honorary Chairman: M. Mansour, Swiss Federal Institute of Technology, Zurich - Chairman: N. Steele, Coventry University, U.K. - Vice-Chairman: E. Badreddin, Swiss Federal Institute of Technology, Zurich - Members: E. Alpaydin, Turkey P.G. Anderson, USA Z. Bien, Korea H.H. Bothe, Germany G. Dray, France R. Felix, Germany J. Godjevac, Switzerland H. Hellendoorn, Germany M. Heiss, Austria K. Iwata, Japan M. Jamshidi, USA E.P. Klement, Austria B. Kosko, USA R. Kruse, Germany F. Masulli, Italy S. Nahavandi, New Zealand C.C. 
Nguyen, USA V. Novak, Czech Republic R. Palm, Germany D.W. Pearson, France I. Perfilieva, Russia B. Reusch, Germany G.D. Smith, U.K. V. ORGANISING COMMITTEE ISFL'97 is a joint operation between the Swiss Federal Institute of Technology (ETH), Zurich and International Computer Science Conventions (ICSC), Canada/Switzerland. VI. PUBLICATION OF PAPERS All accepted papers will appear in the conference proceedings, published by ICSC Academic Press. In addition, some selected papers may also be considered for journal publication. VII. SUBMISSION OF MANUSCRIPTS Prospective authors are requested to send two copies of their abstracts of 500 words for review by the International Scientific Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. If authors believe that more details are necessary to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the International Scientific Committee. The abstract should also include: - Title of proposed paper - Authors' names, affiliations, addresses - Name of author to contact for correspondence - E-mail address and fax number of contact author - Name of topic which best describes the paper (max. 5 keywords) Contributions are welcome from those working in industry and having experience in the topics of this conference as well as from academics. The conference language is English. Abstracts may be submitted either by electronic mail (ASCII text), fax or mail (2 copies) to either one of the following addresses: ICSC Canada P.O. Box 279 Millet, Alberta T0C 1Z0 Canada Fax: +1-403-387-4329 Email: icsc at freenet.edmonton.ab.ca or ICSC Switzerland P.O. Box 657 CH-8055 Zurich Switzerland VIII. OTHER CONTRIBUTIONS Anyone wishing to organise a workshop, tutorial or discussion is requested to contact the chairman of the conference, Prof. Nigel Steele (e-mail: nsteele at coventry.ac.uk / phone: +44-1203-838568 / fax: +44-1203-838585) before August 31, 1996. IX. DEADLINES AND REGISTRATION It is the intention of the organisers to have the conference proceedings available for the delegates. Consequently, the deadlines below are to be strictly respected: - Submission of Abstracts: May 31, 1996 - Notification of Acceptance: August 31, 1996 - Delivery of full papers: October 31, 1996 X. ACCOMMODATION Block reservations will be made at nearby hotels and accommodation at reasonable rates (not included in the registration fee) will be available upon registration (full details will follow with the letters of acceptance). XI. SOCIAL AND TOURIST ACTIVITIES A social programme, including a reception, will be organized on the evening of February 13, 1997. This activity will also be available for accompanying persons. Winter is an attractive season in Switzerland and many famous alpine resorts are within easy reach by rail, bus or car for a one- or two-day excursion. The city of Zurich itself is the proud home of many art galleries, museums, and theatres. Furthermore, the world-famous shopping street 'Bahnhofstrasse' and the old part of the town with its many bistros, bars and restaurants are always worth a visit. XII. INFORMATION For further information please contact either of the following: - ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada E-mail: icsc at freenet.edmonton.ab.ca Fax: +1-403-387-4329 Phone: +1-403-387-3546 - ICSC Switzerland, P.O.
Box 657, CH-8055 Zurich, Switzerland Fax: +41-1-761-9627 - Prof. Nigel Steele, Chairman ISFL'97, Coventry University, U.K. E-mail: nsteele at coventry.ac.uk Fax: +44-1203-838585 Phone: +44-1203-838568  From edelman at wisdom.weizmann.ac.il Wed Mar 6 08:23:01 1996 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Wed, 6 Mar 1996 13:23:01 GMT Subject: shift invariance In-Reply-To: <9603051633.AA15499@bommel.cs.rulimburg.nl> (message from Eric Postma on Tue, 5 Mar 96 17:33:00 +0100) Message-ID: <199603061323.NAA08380@lachesis.wisdom.weizmann.ac.il> > Date: Tue, 5 Mar 96 17:33:00 +0100 > From: Eric Postma > > DeLiang Wang wrote > >I'd like to know the evidence that the visual system achieves shift > >(translation) invariance (I'd appreciate references if any). > > Biederman and Cooper (1991) found that an object presented at one location > of the retina facilitated recognition of that object at other locations. The > visual system does not achieve perfect translation invariance as shown by > Nazir and O'Regan (1991). > > Biederman, I. & Cooper, E.E. (1991). > Evidence for complete translational and reflectional invariance in visual > object priming. > Perception, 20, 585-593. > > Nazir, T.A. & O'Regan, J.K. (1990). > Some results on translation invariance in the human visual system. > Spatial Vision, 5, 81-100. Putting Nazir & O'Regan on the same list with Biederman like that may be misleading to someone who will not bother to read the paper. Nazir & O'Regan actually found evidence AGAINST translation invariance in human vision. They may have phrased the title conservatively to appease conservative reviewers... So, do not take the existence of translation invariance in biological vision for granted; heed well the cautionary note in Norberto's posting: > Date: Mon, 4 Mar 1996 11:03:09 -0800 (PST) > From: Norberto Grzywacz > A form of shift invariance appears to exist in cortical neurons of the > anterior part of the superior temporal sulcus and of the inferior temporal > cortex. Neurons in these areas have large receptive fields, which can show > considerable selectivity for what the stimulus is irrespective of exactly > where it is in the visual field. I would call this property "selectivity > shift invariance," to contrast with "absolute shift invariance," which > the cortex does not appear to have. -Shimon Dr. Shimon Edelman, Applied Math. & Computer Science Weizmann Institute of Science, Rehovot 76100, Israel The Web: http://eris.wisdom.weizmann.ac.il/~edelman fax: (+972) 8 344122 tel: 8 342856 sec: 8 343545  From STECK at ie.twsu.edu Wed Mar 6 11:21:39 1996 From: STECK at ie.twsu.edu (JIM STECK) Date: Wed, 6 Mar 1996 11:21:39 CDT (GMT-6) Subject: 2 papers: Quantum Dot Neural Network / Optical Neural Network Message-ID: <44DD77C5015@ie.twsu.edu> An uncompressed postscript version of the following paper is available at: http://www.me.twsu.edu/me/faculty/steck/Pubs/ (approx 1400K) A Quantum Dot Neural Network E.C. Behrman, J. Niemel, J. E. Steck, S. R. Skinner Wichita State University, Wichita, KS 67260 Abstract We present a mathematical implementation of a quantum mechanical artificial neural network, in the quasi-continuum regime, using the nonlinearity inherent in the real-time propagation of a quantum system coupled to its environment. Our model is that of a quantum dot molecule coupled to the substrate lattice through optical phonons, and subject to a time-varying external field.
Using discretized Feynman path integrals, we find that the real-time evolution of the system can be put into a form which resembles the equations for the virtual neuron activation levels of an artificial neural network. The timeline discretization points serve as virtual neurons. We then train the network using a simple gradient descent algorithm, and find it is possible in some regions of the phase space to perform any desired classical logic gate. Because the network is quantum mechanical, we can also train purely quantum gates such as a phase shift. '''''''''''''''''''''''''''''''''''''''''''''''''''''''''' An uncompressed postscript version of the following paper is available at: http://www.me.twsu.edu/me/faculty/steck/Pubs/ (approx 153K) Experimental Demonstration of On-Line Training for an Optical Neural Network Using Self-Lensing Media Alvaro A. Cruz-Cabrera, James E. Steck, Elizabeth C. Behrman, Steven R. Skinner Abstract The optical bench realization of a feed forward optical neural network, developed by the authors, is presented. The network uses a thermal nonlinear material that modulates the phase front of a forward propagating HeNe beam by dynamically altering the index of refraction profile of the material. The index of refraction cross-section of the nonlinear material was modified by applying a separate argon laser, which was modulated by a liquid crystal display used as a spatial light modulator. On-line training of the network was accomplished by using a reinforcement learning paradigm to achieve several standard and non-standard logic gates. James E. Steck Assistant Professor (316)-689-3402  From juergen at idsia.ch Wed Mar 6 13:05:08 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 6 Mar 96 19:05:08 +0100 Subject: feature detectors Message-ID: <9603061805.AA06532@fava.idsia.ch> SEMILINEAR PREDICTABILITY MINIMIZATION PRODUCES WELL-KNOWN FEATURE DETECTORS (9 pages, 260 K compressed, 1.14 M uncompressed) Neural Computation, 1996 (accepted) Juergen Schmidhuber, Martin Eldracher, Bernhard Foltin Predictability minimization (PM) exhibits various intuitive and theoretical advantages over many other methods for unsupervised redundancy reduction. So far, however, there have been only toy applications of PM. In this paper, we apply semilinear PM to static real-world images and find: without a teacher and without any significant pre-processing, the system automatically learns to generate distributed representations based on well-known feature detectors, such as orientation-sensitive edge detectors and off-center-on-surround-like structures, thus extracting simple features related to those considered useful for image pre-processing and compression. (Revised and extended TR FKI-201-94) To obtain a copy, cut and paste this: netscape ftp://ftp.idsia.ch/pub/juergen/detectors.ps.gz Juergen Schmidhuber, IDSIA Martin Eldracher, IDSIA / TUM Bernhard Foltin, TUM  From minton at ISI.EDU Wed Mar 6 21:16:20 1996 From: minton at ISI.EDU (minton@ISI.EDU) Date: Wed, 6 Mar 96 18:16:20 PST Subject: JAIR article, Mean Field Theory for ... Message-ID: <9603070216.AA00570@sungod.isi.edu> Readers of this group may be interested in the following article, which was just published by JAIR: Saul, L.K., Jaakkola, T. and Jordan, M.I. (1996) "Mean Field Theory for Sigmoid Belief Networks", Volume 4, pages 61-76. Available in Postscript (302K) and compressed Postscript (123K).
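[Editorial aside: for readers who want to see what such a mean field bound does, here is a toy numerical check in Python/numpy. This is my own illustrative sketch, not code or notation from the paper; the tiny 2-hidden/2-visible network, the random weights, and all variable names are assumptions made up for the example. It computes the exact log-likelihood of the evidence in a small sigmoid belief network by brute-force enumeration and compares it with the factorized mean field lower bound E_Q[log P(H,V)] + H(Q), which by Jensen's inequality can never exceed the exact value. In this toy the expected log-joint is itself computed by enumeration; the point of the paper is to evaluate such bounds tractably when enumeration is impossible.]

# Toy check of a mean field lower bound for a sigmoid belief network.
# Illustrative sketch only (assumed architecture and names, not the paper's code).
import itertools
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

J = rng.normal(size=(2, 2))    # hidden-to-visible weights (assumed)
b_h = rng.normal(size=2)       # hidden biases (assumed)
b_v = rng.normal(size=2)       # visible biases (assumed)
v = np.array([1.0, 0.0])       # the observed evidence

def log_joint(h):
    # log P(h, v): Bernoulli priors on the hidden layer, sigmoid conditionals below.
    p_h = sigmoid(b_h)
    p_v = sigmoid(J.T @ h + b_v)
    lp = np.sum(h * np.log(p_h) + (1 - h) * np.log(1 - p_h))
    return lp + np.sum(v * np.log(p_v) + (1 - v) * np.log(1 - p_v))

configs = [np.array(c, float) for c in itertools.product([0, 1], repeat=2)]

# Exact log-likelihood log P(v): sum over all hidden configurations.
exact = np.log(sum(np.exp(log_joint(h)) for h in configs))

def mean_field_bound(mu):
    # E_Q[log P(h,v)] + H(Q) for factorized Q(h) = prod_i mu_i^h_i (1-mu_i)^(1-h_i).
    total = 0.0
    for h in configs:
        q = np.prod(mu * h + (1 - mu) * (1 - h))
        if q > 0:
            total += q * (log_joint(h) - np.log(q))
    return total

# Crude grid search over the variational parameters; the paper instead
# tightens the bound by solving mean field equations analytically.
grid = np.linspace(0.01, 0.99, 25)
best = max(mean_field_bound(np.array([a, b])) for a in grid for b in grid)
print("exact log P(v) = %.4f, best mean field bound = %.4f" % (exact, best))
assert best <= exact + 1e-9   # Jensen's inequality guarantees this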
For quick access via your WWW browser, use this URL: http://www.cs.washington.edu/research/jair/abstracts/saul96a.html More detailed instructions are below. Abstract: We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition---the classification of handwritten digits. The article is available via: -- comp.ai.jair.papers (also see comp.ai.jair.announce) -- World Wide Web: The URL for our World Wide Web server is http://www.cs.washington.edu/research/jair/home.html For direct access to this article and related files try: http://www.cs.washington.edu/research/jair/abstracts/saul96a.html -- Anonymous FTP from either of the two sites below. Carnegie-Mellon University (USA): ftp://p.gp.cs.cmu.edu/usr/jair/pub/volume4/saul96a.ps The University of Genoa (Italy): ftp://ftp.mrg.dist.unige.it/pub/jair/pub/volume4/saul96a.ps The compressed PostScript file is named saul96a.ps.Z (123K) -- automated email. Send mail to jair at cs.cmu.edu or jair at ftp.mrg.dist.unige.it with the subject AUTORESPOND and our automailer will respond. To get the Postscript file, use the message body GET volume4/saul96a.ps (Note: Your mailer might find this file too large to handle.) Only one file can be requested in each message. -- JAIR Gopher server: At p.gp.cs.cmu.edu, port 70. For more information about JAIR, visit our WWW or FTP sites, or send electronic mail to jair at cs.cmu.edu with the subject AUTORESPOND and the message body HELP, or contact jair-ed at ptolemy.arc.nasa.gov.  From king at cs.cuhk.hk Wed Mar 6 22:58:11 1996 From: king at cs.cuhk.hk (Irwin King) Date: Thu, 7 Mar 1996 11:58:11 +0800 (HKT) Subject: CFP - Special Sessions on Genetic Algorithm and Neural Networks in Multimedia Message-ID: <199603070358.LAA14549@cs.cuhk.hk> ********************************************************************** C A L L F O R P A P E R S Special Sessions On 1. GENETIC ALGORITHMS & PROGRAMMING and 2. NEURAL NETWORKS IN MULTIMEDIA APPLICATIONS September 24-27, 1996 International Conference on Neural Information Processing (ICONIP'96) Hong Kong Convention and Exhibition Center, Wan Chai, Hong Kong http://www.cs.cuhk.hk/iconip96 ********************************************************************** The main objectives of these special sessions are: * To provide a forum for presenting and discussing theoretical and application issues on GA and GP * To provide a forum for presenting and discussing the application of neural networks in multimedia systems * To promote collaboration between researchers internationally 1. Genetic Algorithms & Programming =================================== We invite papers dealing with the theory and application of GA and GP. Please submit a short abstract (1 page) via email to ksleung at cs.cuhk.edu.hk as soon as possible. If accepted, the full paper will be required by April 12, 1996. 2. Neural Networks in Multimedia Applications ============================================= We invite papers dealing with the application of neural networks in already implemented multimedia systems. In particular, we are interested in proven neural network techniques used for virtual reality applications and multimedia databases.
Please submit a short abstract (1 page) via email to king at cs.cuhk.edu.hk as soon as possible. If accepted, the full paper will be required by April 12, 1996. Please send inquiries to:

Genetic Algorithms & Programming:
K.S. Leung
Dept. of Comp. Sci. & Eng.
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
ksleung at cs.cuhk.edu.hk
Fax: (852) 2603-5024

Neural Networks in Multimedia:
Irwin King
Dept. of Comp. Sci. & Eng.
The Chinese University of Hong Kong
Shatin, N.T., Hong Kong
king at cs.cuhk.edu.hk
Fax: (852) 2603-5024

 From goldfarb at unb.ca Fri Mar 8 14:03:58 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Fri, 8 Mar 1996 15:03:58 -0400 (AST) Subject: shift invariance In-Reply-To: <199603061323.NAA08380@lachesis.wisdom.weizmann.ac.il> Message-ID: On Wed, 6 Mar 1996, Edelman Shimon wrote: > > Nazir, T.A. & O'Regan, J.K. (1990). > > Some results on translation invariance in the human visual system. > > Spatial Vision, 5, 81-100. > > Nazir > & O'Regan actually found evidence AGAINST translation invariance in > human vision. They may have phrased the title conservatively to > appease conservative reviewers... So, do not take the existence of > translation invariance in biological vision for granted; heed well the > cautionary note in Norberto's posting: > > > Date: Mon, 4 Mar 1996 11:03:09 -0800 (PST) > > From: Norberto Grzywacz > > > A form of shift invariance appears to exist in cortical neurons of the > anterior part of the superior temporal sulcus and of the inferior temporal > cortex. Neurons in these areas have large receptive fields, which can show > considerable selectivity for what the stimulus is irrespective of exactly > where it is in the visual field. I would call this property "selectivity > shift invariance," to contrast with "absolute shift invariance," which > the cortex does not appear to have. Why would one want to invent such a strange name "selectivity shift invariance"? If we 1) DO NOT FORGET that the biological systems have at their disposal quite adequate means to extract symbolic (structural) representation right from the very beginning and 2) FORGET about our inadequate numeric models, then the question would not have arisen in the first place. Symbolic representations EMBODY shift invariance. We are completing a paper "Inductive Theory of Vision" that addresses these issues. Lev Goldfarb http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm  From dnoelle at cs.ucsd.edu Fri Mar 8 18:37:25 1996 From: dnoelle at cs.ucsd.edu (David Noelle) Date: Fri, 8 Mar 96 15:37:25 -0800 Subject: CogSci96 Call For Participation Message-ID: <9603082337.AA14792@beowulf> Eighteenth Annual Conference of the COGNITIVE SCIENCE SOCIETY July 12-15, 1996 University of California, San Diego La Jolla, California CALL FOR PARTICIPATION The Annual Cognitive Science Conference began with the La Jolla Conference on Cognitive Science in August of 1979. The organizing committee of the Eighteenth Annual Conference would like to welcome members home to La Jolla. We plan to recapture the pioneering spirit of the original conference, extending our welcome to fields on the expanding frontier of Cognitive Science, including Artificial Life, Cognitive and Computational Neuroscience, Evolutionary Psychology, as well as the core areas of Anthropology, Computer Science, Linguistics, Neuroscience, Philosophy, and Psychology.
The conference will feature plenary addresses by invited speakers, invited symposia by leaders in their fields, technical paper sessions, a poster session, a banquet, and a Blues Party. San Diego is the home of the world-famous San Diego Zoo and Wild Animal Park, Sea World, the historic all-wooden Hotel Del Coronado, and beautiful beaches, mountain areas, and deserts; it is a short drive from Mexico, and features a high Cappuccino Index. Bring the whole family and stay a while! PLENARY SESSIONS "Controversies in Cognitive Science: The Case of Language" Stephen Crain (UMD College Park) & Mark Seidenberg (USC) Moderated by Paul Smolensky (Johns Hopkins University) "Tenth Anniversary of the PDP Books" Geoff Hinton (Toronto), Jay McClelland (CMU), & Dave Rumelhart (Stanford) "Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action" Adele Diamond (MIT) "Reconstructing Consciousness" Paul Churchland (UCSD) TRAVEL & ACCOMMODATIONS United Airlines is the official airline of the 1996 Cognitive Science Conference. Attendees flying with United can receive a 5% discount off of any published United or United Express round-trip fare (to San Diego) in effect when the ticket is purchased, subject to all applicable restrictions. Attendees flying with United can receive a 10% discount off of applicable BUA fares in effect when the ticket is purchased 7 days in advance. To get your discount, be sure to give your travel agent the following information: * "Meeting ID# 557NS for the Cognitive Science Society Meeting" * United's Meeting Desk phone number is (800) 521-4041. Alternatively, you may order your tickets directly from United's Meeting Desk, using the same reference information as above. Purchasers of United tickets to the conference will be eligible for a drawing (to be held at the conference) in which two round-trip tickets will be given away -- so don't throw away your boarding pass! If you are flying to San Diego, you will be arriving at Lindbergh Field. If you don't rent a car, transportation from the airport to the UCSD area will cost (not including tip) anywhere from $15.00 (for a seat on a shuttle/van) to $35.00 (for a taxi). We have arranged for special rates at two of the hotels nearest to the UCSD campus. In addition, on-campus dormitory apartments can be rented at less expense. All rooms are subject to availability and hotel rates are only guaranteed up to the dates specified, so reserve early. None of the rates quoted below (unless explicitly stated) include tax, which is currently 10.5 percent. The La Jolla Marriott is located approximately 2 miles from campus. Single and double rooms are available at $92.00 per night, when reserved before June 21st. Included in the rate is a morning and evening shuttle service to and from campus (running for one-hour periods, on July 13th, 14th, and 15th only). The hotel has parking spaces, available at $7 per day or $10 per day with valet service. On campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. There is also city bus service (fare is about $1.50 per ride) to and from campus, which passes within 1 block of the hotel. Reservations can be made by calling the hotel at (619) 587-1414 or (800) 228-9290. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates. Arrival after 6:00 P.M. requires a first night's deposit, or a guarantee with a major credit card. The La Jolla Radisson is located approximately 1/2 mile from campus.
Single and double rooms are available at $75.00 per night, when reserved before June 12th. Included in the rate is a morning and evening shuttle service to and from campus, although walking is also very feasible. Parking is available and complimentary. On campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. The first night's room charge (+ tax) is due by June 12th. Reservations can be made by calling Radisson Reservations at (800) 333-3333. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates. There are a limited number of on-campus apartments available for reservation as a 4 night package. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). On campus parking is complimentary with this package. These apartments may be reserved using the conference registration form. REGISTRATION INFORMATION There are three ways to register for the 1996 Cognitive Science Conference: * ONLINE REGISTRATION -- You may fill out and electronically submit the online registration form, which may be found on the conference web page at "http://www.cse.ucsd.edu/events/cogsci96/". This is the preferred method of registration. (You must pay registration fees with a Visa or MasterCard in order to use this option.) * EMAIL REGISTRATION -- You may fill out the plain text (ASCII) registration form, which appears below, and send it via electronic mail to "cogsci96reg at cs.ucsd.edu". (You must pay registration fees with a Visa or MasterCard in order to use this option.) * POSTAL REGISTRATION -- You may download a copy of the PostScript registration form from the conference home page (or extract the plain text version, below), print it on a PostScript printer, fill it out with a pen, and send it via postal mail to: CogSci'96 Conference Registration Cognitive Science Department - 0515 University of California, San Diego 9500 Gilman Drive La Jolla, CA 92093-0515 (Under this option, you may enclose payment of registration fees in U. S. dollars in the form of a check or money order, or you may pay these fees with a Visa or MasterCard. Please make checks payable to: The Regents of the University of California.) For more information, visit the conference web page at "http://www.cse.ucsd.edu/events/cogsci96". Please direct questions and comments to "cogsci96 at cs.ucsd.edu". Edwin Hutchins and Walter Savitch, Conference Chairs John D. Batali, Local Arrangements Chair Garrison W.
Cottrell, Program Chair ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== Cognitive Science 1996 Registration Form ---------------------------------------- Your Full Name : _____________________________________________________ Your Postal Address : ________________________________________________ (including zip/postal ________________________________________________ code and country) ________________________________________________ ________________________________________________ Your Telephone Number (Voice) : ______________________________________ Your Telephone Number (Fax) : ______________________________________ Your Internet Electronic Mail Address (e.g., dnoelle at cs.ucsd.edu) : ______________________________________________________________________ REGISTRATION FEES : Please select the appropriate registration option from the menu below by placing an "X" in the corresponding blank on the left. Note that the Cognitive Science Society is offering a special deal to individuals who opt to join the Society simultaneously with conference registration. The "New Member" package includes conference fees and first year's membership dues for only $10 more than the nonmember conference cost. Registration fees received after May 1st are $20 higher ($10 higher for students) than fees received before May 1st. Be sure to register early to take advantage of the lower fee rates. _____ Registration, Member -- $120 ($140 after May 1st) _____ Registration, Nonmember -- $145 ($165 after May 1st) _____ Registration, New Member -- $155 ($175 after May 1st) _____ Registration, Student Member -- $85 ($95 after May 1st) _____ Registration, Student Nonmember -- $100 ($110 after May 1st) CONFERENCE BANQUET : Tickets to the conference banquet are *not* included in the registration fees, above. Banquet tickets are $35 per person. (You may bring guests.) Number Of Banquet Tickets Desired ($35 each): _____ _____ Omnivorous _____ Vegetarian CONFERENCE SHIRTS : Conference T-Shirts are *not* included in the registration fees, above. These are $10 each. Number Of T-Shirts Desired ($10 each): _____ UCSD ON-CAMPUS APARTMENTS : There are a limited number of on-campus apartments available for reservation as a 4 night package. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). On campus parking is complimentary with this package. Off-campus accommodations in local hotels are also available, but you will need to make reservations by contacting the hotel of interest directly. If you will be staying off-campus, please skip this portion of the registration form. On-campus housing reservations must be received by May 1st, 1996. Please include the cost of on-campus housing in the total conference cost listed at the bottom of this form. Select the housing plan desired by placing an "X" in the appropriate blank on the left: _____ UCSD Housing and Meal Plan (Single Room) -- $227 per person _____ UCSD Housing and Meal Plan (Double Room) -- $191 per person Arrival Date And Time : ____________________________________________ Departure Date And Time : ____________________________________________ If you reserved a double room above, please indicate your roommate preference below: _____ Please assign a roommate to me. I am _____ female _____ male. 
_____ I will be sharing this room with a guest who is not registered for the conference. I will include $382 ($191 times 2) in the total conference cost listed at the bottom of this form. _____ I will be sharing this room with another conference attendee. I will include $191 in the total conference cost listed at the bottom of this form. My roommate will submit her housing fee along with her registration form. My roommate's full name is: ______________________________________________________________ If you would like to share your room with your children, the UCSD apartments allow up to two children in a room. Number And Ages Of Children : ________________________________________ Comments To The Registration Staff : ______________________________________________________________________ ______________________________________________________________________ ______________________________________________________________________ Please sum your conference registration fees, the cost of banquet tickets and t-shirts, and on-campus housing costs, and place the total below. To register by electronic mail, payment must be by Visa or MasterCard only. TOTAL : _$____________ Bill to: _____ Visa _____ MasterCard Number : ___________________________________________ Expiration Date: ___________________________________ When complete, send this form via email to "cogsci96reg at cs.ucsd.edu". ====================================================================== PLAIN TEXT REGISTRATION FORM ======================================================================  From ib at rana.usc.edu Fri Mar 8 21:25:00 1996 From: ib at rana.usc.edu (Irving Biederman) Date: Fri, 8 Mar 1996 18:25:00 -0800 Subject: Shift Invariance Message-ID: <199603090225.SAA14592@mizar.usc.edu> The communication by Shimon Edelman is, in my opinion, a bit misleading. In response to a posting by Eric Postma that listed papers by Biederman & Cooper (1991) and Nazir & O'Regan (1990) as evidence for shift invariance, Edelman writes: "Putting Nazir & O'Regan on the same list with Biederman like that may be misleading to someone who will not bother to read the paper. Nazir & O'Regan actually found evidence AGAINST translation invariance in human vision." One may distinguish a strong form of shift invariance, in which there is no cost in performance from changing the position of a stimulus, from a weak form, in which there is facilitation but not as much as when the stimulus is presented at its originally experienced position. Eric E. Cooper and I (Perception '91) found virtually complete (i.e., strong) shift invariance, as measured by the priming of briefly presented (100 msec) object pictures. [100 msec is too brief to make a fixation onto the stimulus.] Picture-naming RTs and error rates were unaffected by a shift. We did this by presenting the pictures either 2 deg to the left or 2 deg to the right of fixation. The order of left-right positions was random-appearing. In two experiments, when the pictures were shown a second time, there was virtually no difference in performance if a given picture was shifted or not. A third experiment produced the same result with 2 deg shifts above and below the fixation point. That there was perceptual and not just concept or name priming was evidenced by a reduction in priming of pictures with the same name and basic-level concept but a different shape (i.e., two different kinds of chairs). So we obtained a strong form of shift invariance.
The finding of strong left-right shift invariance on RTs was replicated by Cooper, Biederman, & Hummel (1992) using contour-deleted pictures that, for half the subjects, were also mirror-reversed when they were shifted. A slight, but reliable, increase in error rates was noted only for pictures that were shifted but not reversed. Shifted pictures that were also reversed (so the fish, for example, is always facing toward the fixation) showed no increase in error rates. We proposed that when a picture is shifted across the vertical midline, different features (e.g., parts) will be present at different eccentricities and, therefore, receive different resolution. Mirror reversing the stimulus preserves the original relation between resolution and features. If the features are difficult to discriminate, then the modest variation in resolution could produce an apparent shift cost. A subject in the Nazir & O'Regan ('90, Spatial Vision) experiment was extensively trained to discriminate a symmetrical nonsense pattern from two highly similar non-target patterns at 2.4 deg to the left of fixation. (Other subjects would be trained with that pattern to the right of fixation.) The subject could then be tested at the learned position (which was always peripheral), at central fixation, or on the opposite side. As Nazir & O'Regan noted, there was an enormous amount of facilitation in all conditions in all experiments. So there was at least weak invariance. Was there strong invariance as well? When they controlled potentially confounding and contaminating factors, there was strong shift invariance, notwithstanding Edelman's claim to the contrary. Under controlled conditions, when a stimulus was not presented unless the eye was on the fixation point, there was no effect of a shift from learned to opposite positions. There was a cost, however, of shifting from a learned (peripheral) to a central position. But this comparison confounds resolution (from peripheral to central) with shift. Although it may be surprising that one would do worse with central as compared to more peripheral positions, it is not implausible that a different set of features was employed at different resolutions. In the three later experiments, where eye position was not controlled (it was difficult to train subjects to maintain fixation), there were much larger costs, but there could well have been a bias to look at the learned location. In these three later experiments, as Nazir & O'Regan noted, there was considerable subject and stimulus variability, perhaps reflecting various task strategies. Certainly, a bias to monitor the trained location is not out of the question. So we have four name-priming experiments documenting strong shift invariance when resolution is controlled, three left-right and one up-down. The Nazir & O'Regan research shows strong invariance under controlled conditions. So five well-controlled experiments document strong shift invariance. Weak invariance is obtained under less controlled conditions. Finally, let me note that, of course, it is not the case that every representation is shift-invariant. Under the identical conditions that yielded invariance in object priming, subjects showed well-above-chance explicit memory of where the picture was presented. Cooper and I hypothesized that position information may be specified by the dorsal system.
Those who presume to test invariance of shift (or size or orientation or reflection) should bear in mind the possibility that a particular task, especially if it is extremely difficult so that subjects are induced to undertake various strategies such as search, may tap both shift-invariant and shift-specific representations. For example, if I've been extensively trained to search for a small distinguishing feature on the left side of the display, I could readily show a shift cost if the feature is no longer there. References: Cooper, E. E., Biederman, I., & Hummel, J. E. (1992). Metric invariance in object recognition: A review and further evidence. Canadian Journal of Psychology, 46, 191-214. > Biederman, I. & Cooper, E. E. (1991). > Evidence for complete translational and reflectional invariance in visual > object priming. > Perception, 20, 585-593. > > Nazir, T.A. & O'Regan, J.K. (1990). > Some results on translation invariance in the human visual system. > Spatial Vision, 5, 81-100. ************************** Irving Biederman, Ph. D. William M. Keck Professor of Cognitive Neuroscience Department of Psychology University of Southern California Hedco Neurosciences Building, MC 2520 Los Angeles, CA 90089-2520 ib at rana.usc.edu (213) 740-6094 (Office); (213) 740-5687 (Fax); (310) 823-8980 (Home); (213) 740-6102 (Lab) Visit our web site at: http://rana.usc.edu:8376/~ib/iul.html  From edelman at wisdom.weizmann.ac.il Sun Mar 10 12:01:12 1996 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Sun, 10 Mar 1996 17:01:12 GMT Subject: Shift Invariance In-Reply-To: <199603090225.SAA14592@mizar.usc.edu> (message from Irving Biederman on Fri, 8 Mar 1996 18:25:00 -0800) Message-ID: <199603101701.RAA08020@lachesis.wisdom.weizmann.ac.il> > Date: Fri, 8 Mar 1996 18:25:00 -0800 > From: Irving Biederman > > The communication by Shimon Edelman is, in my opinion, a bit > misleading. In response to a posting by Eric Postma that listed papers by > Biederman & Cooper (1991) and Nazir & O'Regan (1990) as evidence for shift > invariance, Edelman writes: > > "Putting Nazir & O'Regan on the same list with Biederman like that may > be misleading to someone who will not bother to read the paper. Nazir > & O'Regan actually found evidence AGAINST translation invariance in > human vision." > > One may distinguish a strong form of shift invariance, in which > there is no cost in performance from changing the position of a stimulus, > from a weak form, in which there is facilitation but not as much as when the > stimulus is presented at its originally experienced position. > ... [ rest of Biederman's message omitted ] > ... Many thanks to Irv Biederman for posting the details of his findings, along with a comparison with the results of Nazir & O'Regan. His effort should reduce the chance of the readers of this list jumping to premature conclusions. Note that the purpose of my previous posting was to advocate caution, certainly not to argue that all claims of invariance are wrong. Fortunately, my job in this matter is easy: just one example of a manifest lack of invariance suffices to invalidate the strong version of the invariance-based theory of vision, which seems to be espoused by Goldfarb: > If we 1) DO NOT FORGET that the biological systems have at their disposal > quite adequate means to extract symbolic (structural) representation right > from the very beginning and 2) FORGET about our inadequate numeric models, > then the question would not have arisen in the first place.
Symbolic > representations EMBODY shift invariance. So, here it goes... Whereas invariance does hold in many recognition tasks (in particular, in Biederman's experiments, as well as in the experiments reported in [1]), it does not in others (as, e.g., in [2], where interaction between size invariance and orientation is reported). A recent comprehensive survey of (the far from invariant) human performance in recognizing rotated objects can be found in [3]. Furthermore, not only recognition, but also perceptual learning, seems to be non-invariant in some cases; see [4,5]. FORGETTING about experimental findings will not make them go away, just as pointing out that symbolic representations EMBODY invariance will not make biological vision embrace a symbolic approach if it has not done so until now. -Shimon Dr. Shimon Edelman, Applied Math. & Computer Science Weizmann Institute of Science, Rehovot 76100, Israel The Web: http://eris.wisdom.weizmann.ac.il/~edelman fax: (+972) 8 344122 tel: 8 342856 sec: 8 343545 ----------------------------------------------------------------------------- References: [1] @article{BricoloBulthoff92, author="E. Bricolo and H. H. {B\"ulthoff}", title="Translation-invariant features for object recognition", journal="Perception", volume="21 (supp.2)", year = 1992, pages = "59" } [2] @article{BricoloBulthoff93a, author="E. Bricolo and H. H. {B\"ulthoff}", title="Further evidence for viewer-centered representations", journal="Perception", volume="22 (supp)", year = 1993, pages = "105" } [3] @InCollection{JolicoeurHumphrey94, author = "P. Jolicoeur and G. K. Humphrey", title = "Perception of rotated two-dimensional and three-dimensional objects and visual shapes", booktitle = "Perceptual constancies", publisher = "Cambridge University Press", year = 1994, editor = "V. Walsh and J. Kulikowski", chapter = 10, address = "Cambridge, UK", note = "in press" } [4] @article{KarniSagi91, author="A. Karni and D. Sagi", title="Where practice makes perfect in texture discrimination", journal=pnas, volume="88", pages="4966-4970", year="1991" } [5] @article{PoggioFahleEdelman92, author="T. Poggio and M. Fahle and S. Edelman", title="Fast perceptual learning in visual hyperacuity", journal="Science", year="1992", volume="256", pages="1018-1021", }  From goldfarb at unb.ca Sun Mar 10 22:57:12 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Sun, 10 Mar 1996 23:57:12 -0400 (AST) Subject: Shift Invariance In-Reply-To: <199603101701.RAA08020@lachesis.wisdom.weizmann.ac.il> Message-ID: On Sun, 10 Mar 1996, Edelman Shimon wrote: > Many thanks to Irv Biederman for posting the details of his findings, > along with a comparison with the results of Nazir & O'Regan. His > effort should reduce the chance of the readers of this list jumping to > premature conclusions. > > Note that the purpose of my previous posting was to advocate caution, > certainly not to argue that all claims of invariance are wrong. > Fortunately, my job in this matter is easy: just one example of a > manifest lack of invariance suffices to invalidate the strong version > of invariance-based theory of vision, which seems to be espoused by > Goldfarb: > > > If we 1) DO NOT FORGET that the biological systems have at their disposal > > quite adequate means to extract symbolic (structural) representation right > > from the very beginning and 2) FORGET about our inadequate numeric models, > > then the question would not have arisen in the first place. Symbolic > > representations EMBODY shift invariance. > > So, here it goes... 
Whereas invariance does hold in many recognition > tasks (in particular, in Biederman's experiments, as well as in the > experiments reported in [1]), it does not in others (as, e.g., in [2], > where interaction between size invariance and orientation is > reported). A recent comprehensive survey of (the far from invariant) > human performance in recognizing rotated objects can be found in > [3]. Furthermore, not only recognition, but also perceptual learning, > seems to be non-invariant in some cases; see [4,5]. > > FORGETTING about experimental findings will not make them go away, > just as pointing out that symbolic representations EMBODY invariance > will not make biological vision embrace a symbolic approach if it has > not done so until now. It appears that there is considerable confusion as to what "shift invariance" is: shift invariance should not include size, orientation, or context invariance, since an encoding of these may involve additional structural information. (By the way, I do not read Biederman's message as Edelman does.) -- Lev  From robtag at dia.unisa.it Mon Mar 11 07:11:41 1996 From: robtag at dia.unisa.it (Tagliaferri Roberto) Date: Mon, 11 Mar 1996 13:11:41 +0100 Subject: International School on Neural Nets "E.R. Caianiello" Message-ID: <9603111211.AA24178@udsab.dia.unisa.it> Galileo Galilei Foundation World Federation of Scientists Ettore Majorana Centre for Scientific Culture Galileo Galilei Celebrations Four Hundred Years Since the Birth of Modern Science International School on Neural Nets "E.R. Caianiello" 1st Course: Learning in Graphical Models A NATO Advanced Study Institute Erice-Sicily: 27 September - 7 October 1996 Sponsored by the: - European Union - International Institute for Advanced Scientific Studies (IIASS) - Italian Institute for Philosophical Studies - Italian Ministry of Education - Italian Ministry of University and Scientific Research - Italian National Research Institute (CNR) - Sicilian Regional Government - University of Salerno Programme and Lecturers - Introduction to Graphical Models J. Whittaker, University of Lancaster, UK - Introduction to Bayesian Methods D. MacKay, University of Cambridge, UK - Introduction to Neural Networks M. Jordan, MIT, Cambridge, MA, USA - Learning of Directed Graphs D. Heckerman, Microsoft Research, Redmond, WA, USA - The Helmholtz Machine G. Hinton, University of Toronto, Canada - Model Selection G. Cooper, University of Pittsburgh, PA, USA - Latent Variables Methods R. Neal, University of Toronto, Canada - Stochastic Grammars S. Omohundro, NEC Research, Princeton, NJ, USA - Statistical Mechanics and Clustering Models J. Buhmann, University of Bonn, Germany - Bayesian Learning of Graphical Models R. Cowell, University College, London, UK - Priors for Graphical Models D. Geiger, UCLA, Los Angeles, CA, USA - Independence and Decorrelation E. Oja, Helsinki University of Technology, Finland - Bayesian Learning and Gibbs Sampling D. Spiegelhalter, MRC, Cambridge, UK Purpose of the course Neural Networks and Bayesian belief networks are learning and inference methods that have been developed in two largely distinct research communities. The purpose of this Course is to bring together researchers from these two communities and study both kinds of networks as instances of a general unified graphical formalism. The Course will focus on probabilistic methods for learning in graphical models, with attention paid to algorithm analysis and design, theory and applications.
General Information Persons wishing to attend the Course should apply in writing to: - Prof. Maria Marinaro IIASS "E.R. Caianiello" Via G. Pellegrino, 19 84019 Vietri sul mare (SA), Italy Tel: + 39 89 761167 Fax: + 39 89 761189 They should specify: i) date and place of birth together with present nationality; ii) degree and other academic qualifications; iii) present position and place of work. Young persons with little experience should include a letter of recommendation from the head of their research group or from a senior scientist active in the field. The total fee, which includes full board and lodging (arranged by the School), is $1000 USD. Thanks to the generosity of the sponsoring Institutions, partial support can be granted to some deserving students who need financial help. Requests to this effect must be specified and justified in the application letter. Closing date for application: July 15, 1996 No special application form is required. Admission to the Course will be decided in consultation with the Advisory Committee of the School consisting of Professors D. Heckerman, M.I. Jordan, M. Marinaro and A. Zichichi. It is regretted that it will not be possible to allow any person not selected by the Committee of the School to follow the Course. Participants must arrive in Erice on September 27, no later than 5 p.m. More information about this Course and the other activities of the Ettore Majorana Centre can be found on the WWW at the following address: http://www.ccsem.infn.it D. Heckerman - M.I. Jordan Directors of the Course M.I. Jordan - M. Marinaro Directors of the School A. Zichichi Director of the Centre  From obrad at sava.zfe.siemens.de Mon Mar 11 10:41:29 1996 From: obrad at sava.zfe.siemens.de (Dragan Obradovic) Date: Mon, 11 Mar 1996 16:41:29 +0100 Subject: NEW BOOK ANNOUNCEMENT Message-ID: <199603111541.QAA20158@sava.zfe.siemens.de> -------------------------------------------------------------------- NEW BOOK -- NEW BOOK -- NEW BOOK -- NEW BOOK -- NEW BOOK -- NEW BOOK -------------------------------------------------------------------- "An Information-Theoretic Approach to Neural Computing" -------------------------------------------------------- Gustavo Deco and Dragan Obradovic (Springer Verlag) Full details at: http://www.springer.de/springer-news/inf/inf_9602.new.html ISBN 0-387-94666-7 Summary: --------- Neural networks provide a powerful new technology to model and control nonlinear and complex systems. In this book, the authors present a detailed formulation of neural networks from the information-theoretic viewpoint. They show how this perspective provides new insights into the design theory of neural networks. In particular, they show how these methods may be applied to the topics of supervised and unsupervised learning including feature extraction, linear and non-linear independent component analysis, and Boltzmann machines. Readers are assumed to have a basic understanding of neural networks, but all the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from several different scientific disciplines, notably cognitive scientists, engineers, physicists, statisticians, and computer scientists, will find this to be a very valuable introduction to this topic.
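[Editorial aside: to make the information-theoretic preliminaries listed in Chapter 2 of the contents below concrete, here is a minimal Python sketch (my own illustration, not code from the book). It computes entropy, Kullback-Leibler divergence, and mutual information for a small, made-up discrete joint distribution, and checks the identity I(X;Y) = H(X) + H(Y) - H(X,Y).]

# Minimal illustration of entropy, KL divergence, and mutual information
# for discrete distributions (natural logarithms, so values are in nats).
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def kl(p, q):
    m = p > 0
    return np.sum(p[m] * np.log(p[m] / q[m]))

# A joint distribution P(X,Y) over two binary variables (made up for the demo).
pxy = np.array([[0.4, 0.1],
                [0.1, 0.4]])
px = pxy.sum(axis=1)    # marginal of X
py = pxy.sum(axis=0)    # marginal of Y

# Mutual information as the KL divergence between the joint distribution
# and the product of the marginals: I(X;Y) = KL(P(X,Y) || P(X)P(Y)).
mi = kl(pxy.ravel(), np.outer(px, py).ravel())
# Equivalent identity: I(X;Y) = H(X) + H(Y) - H(X,Y).
mi_id = entropy(px) + entropy(py) - entropy(pxy.ravel())

print("H(X)=%.4f H(Y)=%.4f H(X,Y)=%.4f" % (entropy(px), entropy(py), entropy(pxy.ravel())))
print("I(X;Y)=%.4f (KL form) vs %.4f (entropy identity)" % (mi, mi_id))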
Contents:
---------
Acknowledgments vi
Foreword vii
CHAPTER 1 Introduction 1
CHAPTER 2 Preliminaries of Information Theory and Neural Networks 7
Elements of Information Theory 8
Entropy and Information 8
Joint Entropy and Conditional Entropy 9
Kullback-Leibler Entropy 9
Mutual Information 10
Differential Entropy, Relative Entropy and Mutual Information 11
Chain Rules 13
Fundamental Information Theory Inequalities 15
Coding Theory 21
Elements of the Theory of Neural Networks 23
Neural Network Modeling 23
Neural Architectures 24
Learning Paradigms 27
Feedforward Networks: Backpropagation 28
Stochastic Recurrent Networks: Boltzmann Machine 31
Unsupervised Competitive Learning 35
Biological Learning Rules 36
PART I: Unsupervised Learning
CHAPTER 3 Linear Feature Extraction: Infomax Principle 41
Principal Component Analysis: Statistical Approach 42
PCA and Diagonalization of the Covariance Matrix 42
PCA and Optimal Reconstruction 45
Neural Network Algorithms and PCA 51
Information Theoretic Approach: Infomax 57
Minimization of Information Loss Principle and Infomax Principle 58
Upper Bound of Information Loss 59
Information Capacity as a Lyapunov Function of the General Stochastic Approximation 61
CHAPTER 4 Independent Component Analysis: General Formulation and Linear Case 65
ICA-Definition 67
General Criteria for ICA 68
Cumulant Expansion Based Criterion for ICA 69
Mutual Information as Criterion for ICA 73
Linear ICA 79
Gaussian Input Distribution and Linear ICA 81
Networks With Anti-Symmetric Lateral Connections 84
Networks With Symmetric Lateral Connections 86
Examples of Learning with Symmetric and Anti-Symmetric Networks 89
Learning in Gaussian ICA with Rotation Matrices: PCA 91
Relationship Between PCA and ICA in Gaussian Input Case 93
Linear Gaussian ICA and the Output Dimension Reduction 94
Linear ICA in Arbitrary Input Distribution 95
Some Properties of Cumulants at the Output of a Linear Transformation 95
The Edgeworth Expansion Criteria and Theorem 4.6.2 99
Algorithms for Output Factorization in the Non-Gaussian Case 100
Experimental Results of Linear ICA Algorithms in the Non-Gaussian Case 102
CHAPTER 5 Nonlinear Feature Extraction: Boolean Stochastic Networks 109
Infomax Principle for Boltzmann Machines 110
Learning Model 110
Examples of Infomax Principle in Boltzmann Machine 113
Redundancy Minimization and Infomax for the Boltzmann Machine 119
Learning Model 119
Numerical Complexity of the Learning Rule 124
Factorial Learning Experiments 124
Receptive Fields Formation from a Retina 129
Appendix 132
CHAPTER 6 Nonlinear Feature Extraction: Deterministic Neural Networks 135
Redundancy Reduction by Triangular Volume Conserving Architectures 136
Networks with Linear, Sigmoidal and Higher Order Activation Functions 140
Simulations and Results 142
Unsupervised Modeling of Chaotic Time Series 146
Dynamical System Modeling 147
Redundancy Reduction by General Symplectic Architectures 156
General Entropy Preserving Nonlinear Maps 156
Optimizing a Parameterized Symplectic Map 157
Density Estimation and Novelty Detection 159
Example: Theory of Early Vision 163
Theoretical Background 164
Retina Model 165
PART II: Supervised Learning
CHAPTER 7 Supervised Learning and Statistical Estimation 169
Statistical Parameter Estimation - Basic Definitions 171
Cramer-Rao Inequality for Unbiased Estimators 172
Maximum Likelihood Estimators 175
Maximum Likelihood and the Information Measure 176
Maximum A Posteriori Estimation 178
Extensions of MLE to Include Model Selection 179
Akaike's Information Theoretic Criterion (AIC) 179
Minimal Description Length and Stochastic Complexity 183
Generalization and Learning on the Same Data Set 185
CHAPTER 8 Statistical Physics Theory of Supervised Learning and Generalization 187
Statistical Mechanics Theory of Supervised Learning 188
Maximum Entropy Principle 189
Probability Inference with an Ensemble of Networks 192
Information Gain and Complexity Analysis 195
Learning with Higher Order Neural Networks 198
Partition Function Evaluation 198
Information Gain in Polynomial Networks 202
Numerical Experiments 203
Learning with General Feedforward Neural Networks 205
Partition Function Approximation 205
Numerical Experiments 207
Statistical Theory of Unsupervised and Supervised Factorial Learning 208
Statistical Theory of Unsupervised Factorial Learning 208
Duality Between Unsupervised and Maximum Likelihood Based Supervised Learning 213
CHAPTER 9 Composite Networks 219
Cooperation and Specialization in Composite Networks 220
Composite Models as Gaussian Mixtures 222
CHAPTER 10 Information Theory Based Regularizing Methods 225
Theoretical Framework 226
Network Complexity Regulation 226
Network Architecture and Learning Paradigm 227
Applications of the Mutual Information Based Penalty Term 231
Regularization in Stochastic Potts Neural Network 237
Neural Network Architecture 237
Simulations 239
References 243
Index 259
Ordering information:
---------------------
ISBN 0-387-94666-7
US $49.95, DM 76
------------------------------------------------------------
Dr. Gustavo Deco and Dr. Dragan Obradovic
Siemens AG, ZFE T SN 4
Corporate Research and Development
Otto-Hahn-Ring 6
D-81739 Munich, Germany
Phone: +49/89/636-49499
Fax: +49/89/636-49767
E-Mail: Dragan.Obradovic at zfe.siemens.de, Gustavo.Deco at zfe.siemens.de
 From rjb at psy.ox.ac.uk Mon Mar 11 08:00:44 1996 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Mon, 11 Mar 1996 13:00:44 GMT Subject: Position available in computational neuroscience Message-ID: <199603111300.NAA05904@axp02.mrc-bbc.ox.ac.uk> The following jobs may be of interest to readers of the connectionists mailing list. UNIVERSITY OF OXFORD DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY Posts in Computational Neuroscience and Visual Neurophysiology The following posts are available as part of a long-term research programme combining neurophysiological and computational approaches to brain mechanisms of vision and memory (see Rolls, 1995, Behav. Brain Res. 66: 177-185; or Rolls, 1994, Behav. Processes 33: 113-138): (1) Computational neuroscientist to make formal models and/or analyse by simulation the functions of visual cortical areas in invariant recognition. (2) Neurophysiologist (preferably postdoctoral) to analyse the activity of single neurons in the temporal cortical visual areas. The salaries are on the RS1A (postdoctoral) scale 14,317-21,519 pounds, with support provided by a Programme Grant. Applications, including the names of two referees, or enquiries, should be sent to Dr. Edmund T. Rolls, University of Oxford, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England (email Edmund.Rolls at psy.ox.ac.uk). The University exists to promote excellence in education and research. The University is an Equal Opportunity Employer.  From goldfarb at unb.ca Mon Mar 11 16:40:21 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Mon, 11 Mar 1996 17:40:21 -0400 (AST) Subject: Shift Invariance In-Reply-To: <199603090225.SAA14592@mizar.usc.edu> Message-ID: I would like to make one more comment.
Shift invariance should be properly thought of as invariance of a "final" object representation wrt TRANSLATIONS of the object (to use the term from linear algebra). This is not to be confused with the fact that the POSITION of the object is also encoded separately, when necessary. The latter has to do with the need to represent the entire "scene". -- Lev  From tommi at psyche.mit.edu Mon Mar 11 14:28:07 1996 From: tommi at psyche.mit.edu (Tommi Jaakkola) Date: Mon, 11 Mar 96 14:28:07 EST Subject: Paper available: Upper and lower bounds on likelihoods Message-ID: <9603111928.AA13475@psyche.mit.edu> The following paper is available on the web at http://web.mit.edu/~tommi/home.html ftp://psyche.mit.edu/pub/tommi/jaak-ul-bounds.ps.Z Computing upper and lower bounds on likelihoods in intractable networks T. S. Jaakkola and M. I. Jordan We present techniques for computing upper and lower bounds on the likelihoods of partial instantiations of variables in sigmoid and noisy-OR networks. The bounds determine confidence intervals for the desired likelihoods and become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the obtained bounds by numerical experiments. -Tommi --------- The paper can also be retrieved via anonymous ftp: ftp-host: psyche.mit.edu ftp-file: pub/tommi/jaak-ul-bounds.ps.Z  From murase at synapse.fuis.fukui-u.ac.jp Mon Mar 11 20:31:02 1996 From: murase at synapse.fuis.fukui-u.ac.jp (Kazuyuki Murase) Date: Tue, 12 Mar 1996 10:31:02 +0900 Subject: Associate Professor Position in Japan Message-ID: <199603120131.KAA07787@synapse.fuis.fukui-u.ac.jp> ASSOCIATE PROFESSOR IN BIOLOGICAL INFORMATION PROCESSING IN JAPAN The department of Information Science at Fukui University invites applications for an associate professor position in its Biological Information Processing Division starting October 1996. The position requires a Ph.D. with postdoctoral research experience. Teaching and supervision of undergraduate and graduate research projects are essential. Ability in the Japanese language is not required initially, but should be developed within a few years. Candidates with specific expertise in at least one of the following areas will be given higher priority: Electrophysiology of single cells or cellular networks, Sensory mechanisms of the spinal cord, Optical imaging of neuronal activities, Modeling of excitable cells or cellular networks, Artificial neural networks, Simulation and synthesis of biological behavior. Applicants should send a curriculum vitae including a publication list and brief description of future research plans by mail to Dr. Kazuyuki Murase, Department of Information Science, Fukui University, 3-9-1 Bunkyo, Fukui 910, Japan, or by E-mail to murase at synapse.fuis.fukui-u.ac.jp. Review of applications will begin immediately and continue until the position is filled. Fukui University is one of the Japanese National Universities.  From maja at garnet.cs.brandeis.edu Mon Mar 11 22:35:36 1996 From: maja at garnet.cs.brandeis.edu (Maja Mataric) Date: Mon, 11 Mar 1996 22:35:36 -0500 Subject: AAAI Fall Symposium on Embodied Cognition and Action Message-ID: <199603120335.WAA05729@garnet.cs.brandeis.edu> !! PLEASE POST !! PLEASE POST !! PLEASE POST !! PLEASE POST !! PLEASE POST !! Call For Participation AAAI 1996 Fall Symposium on Embodied Cognition and Action ---------------------------------------------------------- to be held at MIT Nov 9-11, 1996 Submission Deadline: April 15, 1996.
The role of physical embodiment in cognition has long been the subject of debate. It is largely accepted in AI that embodiment has strong implications for the control strategies for generating purposive and intelligent behavior in the world. Some theories have proposed that embodiment not only constrains but may also facilitate certain types of higher-level cognition. Evidence from neuroscience allows for postulating shared mechanisms for low-level control of embodied action (e.g., motor plans for limb movement) and higher-level cognition (e.g., abstract plans). Work in animal behavior has also addressed the potential links between the two systems, and linguistic theories have long recognized the role of physical and spatial metaphors in language. The symposium will study the role of embodiment in both scaling up control and grounding cognition. We will explore ways of extending the existing, typically low-level, sub-cognitive systems such as autonomous robots and agents, as well as grounding more abstract, typically disembodied, cognitive systems. We will draw from AI, ethology, neuroscience, and other sources in order to focus on the implications of embodiment in cognition and action, and explore work that has been done in the areas of applying physical metaphors to more abstract higher-level cognition. Topics and questions of interest include: * What spatial metaphors can be used for abstract/higher-level cognition? * What non-spatial metaphors can be applied in higher-level cognition? * What alternatives to symbolic representations (e.g., analogical, procedural, etc.) can be successfully employed in embodied cognition? * How can evidence from neuroscience and ethology benefit work in synthetic embodied cognition and embodied AI? Can we gain more than just inspiration from biological data in this area? Are there specific constraints and/or mechanisms we can usefully model? * (How) Do methods for modeling embodied insect and animal behavior scale up to higher-level cognition? * How do metaphors from embodiment apply to everyday activity? * What computational and representational structures are necessary and/or sufficient for enabling embodied cognition? * What are some successfully implemented embodied cognition systems? The symposium will focus on group discussions and panels with a few inspiring presentations and overviews of relevant work. Organizing committee: --------------------- Dana Ballard, University of Rochester, dana at cs.rochester.edu; Rod Brooks, MIT, brooks at ai.mit.edu; Daniel Dennett, Tufts University, ddennett at pearl.tufts.edu; Simon Giszter, Medical College of Pennsylvania, simon at SwampThing.medcolpa.edu; Maja Mataric (chair), Brandeis University, maja at cs.brandeis.edu; Erich Prem, Austrian AI Institute, erich at ai.univie.ac.at; Terence Sanger, MIT, tds at ai.mit.edu; Stefan Schaal, Georgia Tech, sschaal at cc.gatech.edu Submission Information: ----------------------- We invite the participation of researchers who have been working on embodied cognition and action in the fields of AI, neuroscience, ethology, and robotics. Prospective participants should submit a brief paper (5 pages or less) or an extended abstract describing their research or interests. Papers should be submitted electronically, in postscript or plain text format, via ftp to ftp.cs.brandeis.edu/pub/faculty/maja/aaai96-fs/. Participants will have an opportunity to contribute to the final working notes.
Detailed ftp instructions: -------------------------- compress your-paper (both Unix compress and gzip commands are ok) ftp ftp.cs.brandeis.edu (129.64.2.5, but check in case it has changed) give anonymous as your login name give your e-mail address as password set transmission to binary (just type the command BINARY) cd to /aaai96-fs put your-paper Relevant Dates: --------------- Apr 15, 1996: Submissions due May 17, 1996: Notification of acceptance given Aug 23, 1996: Material for inclusion into the working notes due Nov 9-11, 96: AAAI Fall Symposium The WWW home page for this symposium can be found at: http://www.cs.brandeis.edu/~maja/aaai96-fs/  From omlinc at cs.rpi.edu Tue Mar 12 09:54:08 1996 From: omlinc at cs.rpi.edu (omlinc@cs.rpi.edu) Date: Tue, 12 Mar 96 09:54:08 EST Subject: Shift Invariance Message-ID: <9603121454.AA17032@colossus.cs.rpi.edu> In his message <9602281000.ZM15421 at ICSI.Berkeley.edu>, Jerry Feldman wrote: >3) Shift invariance in time and recurrent networks. > > I threw in some (even more cryptic) comments on this anticipating that some >readers would morph the original task into this form. The 0*1010* problem is >an easy one for FSA induction and many simple techniques might work for this. >But consider a task that is only slightly more general, and much more natural. >Suppose the task is to learn any FSL from the class b*pb* where b and p are >fixed for each case and might overlap. Any learning technique that just >tried to predict (the probability of) successors will fail because there >are three distinct regimes and the learning algorithm needs to learn this. >I don't have a way to characterize all recurrent net learning algorithms to >show that they can't do this and it will be interesting to see if one can. >There are a variety on non-connectionist FSA induction methods that can >effectively learn such languages, but they all depend on some overall measure >of simplicity of the machine and its fit to the data - and are thus non-local. > This isn't really correct. First, any DFA can be represented in recurrent neural networks with sigmoidal discriminant functions, i.e. a network can be constructed such that the languages recognized by a DFA and its network implementation are identical (this implies stability of the internal DFA state representation for strings of arbitrary length) [1,2]. As far as learning DFAs with recurrent networks is concerned: In my experience, the success or failure of a network in learning a particular grammar depends on the size of the DFA, its complexity (simple self loops as opposed to orbits of arbitrary length), the training data, and the order in which the training data is presented. For instance, we found that incremental learning, in which the network is first trained on the shortest strings of data [8], is often crucial to successful convergence since it is a means to overcome the problem of learning long-term dependencies with gradient descent [4] (for methods for overcoming that problem see [5,6,7]). The `simplest' language of the form b*pb* might be 1*01*. A network with second-order weights and a single recurrent state neuron can learn that language within 100 epochs when trained on the first 100 strings in alphabetical order. Furthermore, the ideal DFA can also be extracted from the trained network [3]. See for example [9,10,11,12] for other extraction approaches.
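To make the second-order dynamics concrete, here is a minimal sketch in Python/NumPy of such a network trained to accept 1*01*. This is my illustration, not code from [2,3]: the two state neurons (rather than the single one reported above), the crude finite-difference training loop, and all hyperparameters are assumptions chosen for simplicity.

import numpy as np
from itertools import product

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def in_language(s):
    return s.count('0') == 1                # membership test for 1*01*

def accept_score(W, b, s0, string):
    # Second-order dynamics: s_j(t+1) = sigmoid( sum_{i,k} W[j,i,k] s_i(t) x_k(t) + b_j )
    s = s0
    for ch in string:
        x = np.zeros(W.shape[2])
        x[int(ch)] = 1.0                    # one-hot encoding of the input symbol
        s = sigmoid(np.einsum('jik,i,k->j', W, s, x) + b)
    return s[0]                             # state neuron 0 doubles as the accept unit

def first_strings(n):
    # non-empty strings over {0,1} in length-lexicographic ("alphabetical") order
    out, length = [], 1
    while len(out) < n:
        out += [''.join(p) for p in product('01', repeat=length)]
        length += 1
    return out[:n]

rng = np.random.default_rng(0)
W = rng.normal(0.0, 0.5, (2, 2, 2))         # 2 state neurons, 2 input symbols
b = np.zeros(2)
s0 = np.array([1.0, 0.0])                   # fixed start state
data = [(s, float(in_language(s))) for s in first_strings(100)]

def total_loss(W, b):
    return sum((accept_score(W, b, s0, s) - y) ** 2 for s, y in data)

# Crude finite-difference gradient descent; slow, but fine for 10 parameters.
eps, lr = 1e-4, 0.2
for epoch in range(100):
    base = total_loss(W, b)
    gW = np.zeros_like(W)
    for idx in np.ndindex(*W.shape):
        Wp = W.copy(); Wp[idx] += eps
        gW[idx] = (total_loss(Wp, b) - base) / eps
    gb = np.zeros_like(b)
    for j in range(b.size):
        bp = b.copy(); bp[j] += eps
        gb[j] = (total_loss(W, bp) - base) / eps
    W -= lr * gW
    b -= lr * gb

correct = sum((accept_score(W, b, s0, s) > 0.5) == bool(y) for s, y in data)
print(f"{correct}/{len(data)} training strings classified correctly")

Whether this toy version reaches a perfect solution within 100 epochs depends on the initialization and step size; the point is only that the state dynamics above are all that is needed to represent, and learn, languages of this kind.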
For the language 1*011*, which is of the form b*pb* (notice the overlapping of b and p), a second-order network with 3 recurrent state neurons easily converged within 200 epochs, and the ideal DFA could be extracted as well. So, here are at least two examples which contradict the claim that "Any learning technique that just tried to predict (the probability of) successors will fail because there are three distinct regimes and the learning algorithm needs to learn this. I don't have a way to characterize all recurrent net learning algorithms to show that they can't do this and it will be interesting to see if one can." Christian ------------------------------------------------------------------- Christian W. Omlin, Ph.D. Phone (609) 951-2691 NEC Research Institute Fax: (609) 951-2438 4 Independence Way E-mail: omlinc at research.nj.nec.com Princeton, NJ 08540 omlinc at cs.rpi.edu URL: http://www.neci.nj.nec.com/homepages/omlin/omlin.html ------------------------------------------------------------------- =================================== Bibliography ======================================= [1] P. Frasconi, M. Gori, M. Maggini, G. Soda, "Representation of Finite State Automata in Recurrent Radial Basis Function Networks", Machine Learning, to be published, 1996. [2] C.W. Omlin, C.L. Giles, "Stable Encoding of Large Finite-State Automata in Recurrent Neural Networks with Sigmoid Discriminants", Neural Computation, to be published, 1996. [3] C.W. Omlin, C.L. Giles, "Extraction of Rules from Discrete-Time Recurrent Neural Networks", Neural Networks, Vol. 9, No. 1, p. 41-52, 1996. [4] Y. Bengio, P. Simard, P. Frasconi, "Learning Long-Term Dependencies with Gradient Descent is Difficult", IEEE Transactions on Neural Networks (Special Issue on Recurrent Neural Networks), Vol. 5, p. 157-166, 1994. [5] T. Lin, B.G. Horne, P. Tino, C.L. Giles, "Learning Long-Term Dependencies with NARX Recurrent Neural Networks", IEEE Transactions on Neural Networks, accepted for publication. [6] S. El Hihi, Y. Bengio, "Hierarchical Recurrent Neural Networks for Long-Term Dependencies", Neural Information Processing Systems 8, MIT Press, 1996. [7] S. Hochreiter, J. Schmidhuber, "Long Short Term Memory", Technical Report, Institut fuer Informatik, Technische Universitaet Muenchen, FKI-207-95, 1995. [8] J.L. Elman, "Incremental Learning, or the Importance of Starting Small", Technical Report, Center for Research in Language, University of California at San Diego, CRL Tech Report 9101, 1991. [9] S. Das, M.C. Mozer, "A Unified Gradient-descent/Clustering Architecture for Finite State Machine Induction", Advances in Neural Information Processing Systems 6, J.D. Cowan, G. Tesauro, J. Alspector (Eds.), p. 19-26, 1994. [10] M.P. Casey, "Computation in Discrete-Time Dynamical Systems", Ph.D. Thesis, Department of Mathematics, University of California, San Diego, 1995. [11] P. Tino, J. Sajda, "Learning and Extracting Initial Mealy Machines With a Modular Neural Network Model", Neural Computation, Vol. 7, No. 4, p. 822-844, 1995. [12] R.L. Watrous, G.M. Kuhn, "Induction of Finite-State Languages Using Second-Order Recurrent Networks", Neural Computation, Vol. 4, No. 5, p. 406, 1992.
From J.Heemskerk at dcs.shef.ac.uk Tue Mar 12 05:30:16 1996 From: J.Heemskerk at dcs.shef.ac.uk (Jan Heemskerk) Date: Tue, 12 Mar 96 10:30:16 GMT Subject: CALL FOR PARTICIPATION Message-ID: <9603121030.AA04014@dcs.shef.ac.uk> CALL FOR PARTICIPATION ** LEARNING IN ROBOTS AND ANIMALS ** An AISB-96 two-day workshop University of Sussex, Brighton, UK: April 1st & 2nd, 1996 Co-Sponsored by IEE Professional Group C4 (Artificial Intelligence) WORKSHOP ORGANISERS: Noel Sharkey (chair), University of Sheffield, UK. Gillian Hayes, University of Edinburgh, UK. Jan Heemskerk, University of Sheffield, UK. Tony Prescott, University of Sheffield, UK. PROGRAMME COMMITTEE: Dave Cliff, UK. Marco Dorigo, Italy. Frans Groen, Netherlands. John Hallam, UK. John Mayhew, UK. Martin Nillson, Sweden. Claude Touzet, France. Barbara Webb, UK. Uwe Zimmer, Germany. Maja Mataric, USA. In the last five years there has been an explosion of research on Neural Networks and Robotics from both a self-learning and an evolutionary perspective. Within this movement there is also a growing interest in natural adaptive systems as a source of ideas for the design of robots, while robots are beginning to be seen as an effective means of evaluating theories of animal learning and behaviour. A fascinating interchange of ideas has begun between a number of hitherto disparate areas of research, and a shared science of adaptive autonomous agents is emerging. This two-day workshop proposes to bring together an international group both to present papers on their most recent research and to discuss the direction of this emerging field. PROVISIONAL LIST OF PAPERS: Robot Shaping - Principles, Methods & Architectures Simon Perkins and Gillian Hayes Towards Autonomous Control using Connectionist 'Infinite State Automata' Tom Ziemke Entropy-based Tradeoff between Exploration and Exploitation Ping Zhang and Stephane Canu Evolving a Hierarchical Control System for Co-operating Autonomous Robots Robert Ghanea-Hercock & David P Barnes Evolutionary Learning of task achieving behaviours Myra S Wilson, Clive King and John E Hunt The design of learning for an artifact Joanna Bryson Robot See, Robot Do: An Overview of Robot Imitation Paul Bakker and Yasuo Kuniyoshi Does Dynamics Solve the Symbol Grounding Problem of Robots? An Experiment in Navigation Learning Jun Tani Abstracting Fuzzy Behavioural Rules From Geometric Models in Mobile Robotics A G Pipe, T C Fogarty and A Winfield Brave Mobots Use Representation Chris Thornton Explore/Exploit Strategies in Autonomous Learning Stewart W Wilson Environment memory for a mobile robot using place cells Ken Harris, David Lee and Michael Recce Representations on a mobile robot Noel Sharkey and Jan Heemskerk Layered control architectures in natural and artificial systems Tony J Prescott REGISTRATION INFORMATION: http://www.cogs.susx.ac.uk:80/users/christ/aisb/aisb96/index.html ftp ftp.cogs.susx.ac.uk  From heckerma at MICROSOFT.com Tue Mar 12 18:05:00 1996 From: heckerma at MICROSOFT.com (David Heckerman) Date: Tue, 12 Mar 1996 15:05:00 -0800 Subject: paper available: Efficient Approximations for the Marginal Likelihood... Message-ID: The following paper is available on the web at http://www.research.microsoft.com/research/dtg/heckerma/heckerma.html Efficient Approximations for the Marginal Likelihood of Incomplete Data Given a Bayesian Network D. Chickering and D. Heckerman MSR-TR-96-08 A Bayesian score often used in model selection is the marginal likelihood of data (or "evidence") given a model.
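(As textbook background for the comparison that follows, and not a formula taken from the report itself: for a model $M$ with $d$ free parameters, maximum-likelihood estimate $\hat{\theta}$, and $N$ cases, the BIC/MDL score approximates the log marginal likelihood as $\log p(D|M) \approx \log p(D|\hat{\theta},M) - (d/2)\log N$, while the Laplace approximation replaces the $(d/2)\log N$ penalty with a term involving the determinant of the Hessian of the log posterior at its mode, which is more accurate but more expensive to compute.)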
We examine asymptotic approximations for the marginal likelihood of incomplete data given a Bayesian network. We consider the well-known Laplace and BIC/MDL approximations, as well as approximations proposed by Draper (1993) and Cheeseman and Stutz (1995). In experiments using synthetic data generated from discrete naive-Bayes models having a hidden root node, we find the Cheeseman-Stutz measure to be the best in that it is as accurate as the Laplace approximation and as efficient as the BIC/MDL approximation. The paper also can be retrieved via anonymous ftp: ftp-host: ftp.research.microsoft.com ftp-file: pub/tech-reports/winter95-96/tr-96-08.ps -David  From jordan at psyche.mit.edu Tue Mar 12 19:36:49 1996 From: jordan at psyche.mit.edu (Michael Jordan) Date: Tue, 12 Mar 96 19:36:49 EST Subject: International School on Neural Nets "E.R. Caianiello" Message-ID: The enclosed is a correction and amplification to the earlier message regarding next fall's ``Learning in Graphical Models'' Advanced Study Institute in Erice, Sicily. Mike Jordan ------------------------------------------------------------------------ Galileo Galilei Foundation World Federation of Scientists Ettore Majorana Centre for Scientific Culture Galileo Galilei Celebrations Four Hundred Years Since the Birth of Modern Science International School on Neural Nets ``E.R. Caianiello'' 1st Course: Learning in Graphical Models A NATO Advanced Study Institute Erice-Sicily: 27 September - 7 October 1996 Sponsored by the: - European Union - International Institute for Advanced Scientific Studies (IIASS) - Italian Institute for Philosophical Studies - Italian Ministry of Education - Italian Ministry of University and Scientific Research - Italian National Research Institute (CNR) - Sicilian Regional Government - University of Salerno Lecturers will include: J. Whittaker, University of Lancaster, UK D. Madigan, University of Washington, USA D. Geiger, Technion, Israel U. Kjaerulff, Aalborg University, Denmark R. Cowell, University College, London, UK M. Studeny, Academy of Sciences, Czech Republic M. Jordan, MIT, USA S. Omohundro, NEC Research, USA D. Heckerman, Microsoft Research, USA G. Cooper, University of Pittsburgh, USA W. Buntine, Thinkbank, USA L. Saul, MIT, USA J. Buhmann, University of Bonn, Germany N. Tishby, Hebrew University, Israel D. MacKay, University of Cambridge, UK D. Spiegelhalter, MRC, Cambridge, UK J. Pearl, UCLA, USA Topics will include: Introduction to graphical models (directed and undirected graphs) Inference (probabilistic propagation, junction trees, conditioning) Properties of conditional independence (Markov properties, separation) Chain graphs Mixture models, hidden Markov models Neural networks Data structures for efficient estimation (bump trees, ball trees) Bayesian methods Structure learning (metrics, search, approximations) Priors Statistical mechanical methods (decimation, mean field) Markov chain Monte Carlo (importance sampling, Gibbs sampling, hybrid MC) Bayesian graphical models (BUGS software) Learning and phase transitions Clustering and multidimensional scaling Model selection and averaging Surface learning and family discovery Online learning Causality Purpose of the course Neural networks and Bayesian belief networks are learning and inference methods that have been developed in two largely distinct research communities.
The purpose of this Course is to bring together researchers from these two communities and study both kinds of networks as instances of a general unified graphical formalism. The Course will focus on probabilistic methods for learning in graphical models, with attention paid to algorithm analysis and design, theory and applications. General Information Persons wishing to attend the Course should apply in writing to: - Prof. Maria Marinaro IIASS "E.R. Caianiello" Via G. Pellegrino, 19 84019 Vietri sul mare (SA), Italy Tel: + 39 89 761167 Fax: + 39 89 761189 They should specify: i) date and place of birth together with present nationality; ii) degree and other academic qualifications; iii) present position and place of work. Young persons with little experience should include a letter of recommendation from the head of their research group or from a senior scientist active in the field. The total fee, which includes full board and lodging (arranged by the School), is $1000 USD. Thanks to the generosity of the sponsoring Institutions, partial support can be granted to some deserving students who need financial help. Requests to this effect must be specified and justified in the application letter. Closing date for application: July 15, 1996 No special application form is required. Admission to the Course will be decided in consultation with the Advisory Committee of the School consisting of Professors D. Heckerman, M.I. Jordan, M. Marinaro and A. Zichichi. It is regretted that it will not be possible to allow any person not selected by the Committee of the School to follow the Course. Participants must arrive in Erice on September 27, no later than 5 p.m. More information about this Course and the other activities of the Ettore Majorana Centre can be found on the WWW at the following address: http://www.ccsem.infn.it D. Heckerman - M.I. Jordan - J. Whittaker Directors of the Course M.I. Jordan - M. Marinaro Directors of the School A. Zichichi Director of the Centre  From wermter at nats5.informatik.uni-hamburg.de Wed Mar 13 10:42:21 1996 From: wermter at nats5.informatik.uni-hamburg.de (Stefan Wermter) Date: Wed, 13 Mar 1996 16:42:21 +0100 Subject: book on language learning: connectionist statistical symbolic approaches Message-ID: <199603131542.QAA25501@nats13.informatik.uni-hamburg.de> [I am posting this to several relevant mailing lists -- apologies to those who, by subscribing to multiple lists, receive multiple copies of this announcement.] BOOK ANNOUNCEMENT ----------------- Title: Connectionist, statistical, and symbolic approaches to learning for natural language processing Editors: Stefan Wermter Ellen Riloff Gabriele Scheler Date: March 1996 (first week in Europe) [order information and WWW reference for the book (access to first chapter) at the end of this message] Brief description ----------------- The purpose of this book is to present a collection of papers that represents a broad spectrum of current research in learning methods for natural language processing, and to advance the state of the art in language learning and artificial intelligence. The book should bridge a gap between several areas that are usually discussed separately, including connectionist, statistical, and symbolic methods. Table of contents ----------------- Introduction: Learning approaches for natural language processing S. Wermter, E. Riloff, G. Scheler Part 1: Connectionist Networks and Hybrid Approaches ---------------------------------------------------- Separating learning and representation N.E.
Sharkey, A.J.C. Sharkey Natural language grammatical inference: a comparison of recurrent neural networks and machine learning methods S. Lawrence, S. Fong, C. L. Giles Extracting rules for grammar recognition from Cascade-2 networks R. Hayward, A. Tickle, J. Diederich Generating English plural determiners from semantic representations: a neural network learning approach G. Scheler Knowledge acquisition in concept and document spaces by using self-organizing neural networks W. Winiwarter, E. Schweighofer, D. Merkl Using hybrid connectionist learning for speech/language analysis V. Weber, S. Wermter SKOPE: A connectionist/symbolic architecture of spoken Korean processing G. Lee, J.-H. Lee Integrating different learning approaches into a multilingual spoken language translation system P. Geutner, B. Suhm, F.-D. Buo, T. Kemp, L. Mayfield, A. E. McNair, I. Rogina, T. Schultz, T. Sloboda, W. Ward, M. Woszczyna, A. Waibel Learning language using genetic algorithms T. C. Smith, I. H. Witten Part 2: Statistical Approaches --------------------------------------------------- A statistical syntactic disambiguation program and what it learns M. Ersan, E. Charniak Training stochastic grammars on semantical categories W.R. Hogenhout, Y. Matsumoto Learning restricted probabilistic link grammars E. W. Fong, D. Wu Learning PP attachment from corpus statistics A. Franz A minimum description length approach to grammar inference P. Gruenwald Automatic classification of dialog acts with semantic classification trees and polygrams M. Mast, H. Niemann, E. Noeth, E. G. Schukat-Talamazzini Sample selection in natural language learning S. P. Engelson, I. Dagan Part 3: Symbolic Approaches --------------------------------------------------- Learning information extraction patterns from examples S. B. Huffman Implications of an automatic lexical acquisition system P. M. Hastings Using learned extraction patterns for text classification E. Riloff Issues in inductive learning of domain-specific text extraction rules S. Soderland, D. Fisher, J. Aseltine, W. Lehnert Applying machine learning to anaphora resolution C. Aone, S. W. Bennett Embedded machine learning systems for natural language processing: a general framework C. Cardie Acquiring and updating hierarchical knowledge for machine translation based on a clustering technique T. Yamazaki, M. J. Pazzani, C. Merz Applying an existing machine learning algorithm to text categorization I. Moulinier, J.-G. Ganascia Comparative results on using inductive logic programming for corpus-based parser construction J. M. Zelle, R. J. Mooney Learning the past tense of English verbs using inductive logic programming R. J. Mooney, M. E. Califf A dynamic approach to paradigm-driven analogy S. Federici, V. Pirrelli, F. Yvon Can punctuation help learning? M. Osborne Using parsed corpora for circumventing parsing A. K. Joshi, B. Srinivas A symbolic and surgical acquisition of terms through variation C. Jacquemin A revision learner to acquire verb selection rules from human-made rules and examples S. Kaneda, H. Almuallim, Y. Akiba, M. Ishii, T. Kawaoka Learning from texts - a terminological metareasoning perspective U. Hahn, M. Klenner, K. Schnattinger ************************************************************** Bibliographic Data and Ordering Information: Editors: Stefan Wermter, Univ. of Hamburg, Germany Ellen Riloff, Univ. of Utah, Salt Lake City, USA Gabriele Scheler, Munich Univ. of Tech. 
Germany Title: Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing Publisher: Springer-Verlag ISBN: 3-540-60925-3 Pages: 468 + 9 Available: Europe: March 6, 1996 North America: around March 25, 1996 Subseries: Lecture Notes in Artificial Intelligence LNAI 1040 Cover: Softcover under Color Jacket Cover List Price: DM 86.00, approx. USD 68.00 With this information, any academic bookseller worldwide with a reasonable computer science program should be able to provide copies of the book. Otherwise, one can also order through any Springer office directly, particularly through Berlin and Secaucus, as mentioned in the following special offer to Springer Authors. If you aren't a Springer Author you aren't entitled to make use of the special discount, but the ordering addresses are the same. ********************************************************** SPECIAL OFFER: SPRINGER-AUTHOR DISCOUNT All Authors or Editors of Springer Books, in particular Authors contributing to any LNCS or LNAI Proceedings, are entitled to buy any book published by Springer-Verlag for personal use at the "Springer-Author" discount of 33 1/3 % off the list price. Such preferential orders can only be processed through Springer directly (and not through book stores); reference to a Springer publication has to be given with such orders to any Springer office, particularly to the ones in Berlin and New York: Springer-Verlag Order Processing Department Postfach 31 13 40 D-10643 Berlin Germany FAX: +49 30 8207 301 Springer-Verlag New York, Inc. P.O. Box 2485 Secaucus, NJ 07096-2485 USA FAX: +1 201 348 4033 Phone: 1-800-SPRINGER (1 800 777 4647), toll-free in USA Preferential orders can also be placed by sending an email to orders at springer.de Shipping charges are DEM 5.00 per book for orders sent to Berlin, and USD 2.50 (plus USD 1.00 for each additional book) for orders sent to the Secaucus office. Payment of the book(s) plus shipping charges can be made by giving a credit card number together with the expiration date (American Express, Eurocard/Mastercard, Diners, and Visa are accepted) or by enclosing a check (mail orders only). ****************************************************************************** *Dr Stefan Wermter University of Hamburg * * Dept. of Computer Science * * Vogt-Koelln-Strasse 30 * *email: wermter at informatik.uni-hamburg.de D-22527 Hamburg * *phone: +49 40 54715-531 Germany * *fax: +49 40 54715-515 * *http://www.informatik.uni-hamburg.de/Arbeitsbereiche/NATS/staff/wermter.html* ******************************************************************************  From ted at SPENCER.CTAN.YALE.EDU Wed Mar 13 14:45:08 1996 From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU) Date: Wed, 13 Mar 1996 19:45:08 GMT Subject: Postdoctoral positions available Message-ID: <199603131945.TAA19483@PLANCK.CTAN.YALE.EDU> The Neuroengineering and Neuroscience Center at Yale University is seeking to build a pool of qualified scientists and engineers to participate in research on applications of pattern recognition in engineering and medicine. Applicants must have a Ph.D. and demonstrated expertise in one or more of the following fields: pattern recognition, signal processing, machine learning, adaptive control, artificial neural networks, image analysis. Successful candidates will participate in highly creative and interdisciplinary projects of major scientific and social importance. Please send curriculum vitae and list of professional references to Prof. K.S.
Narendra, Director NNC 5 Science Park North New Haven, CT 06511 Yale University is an Affirmative Action/Equal Opportunity employer. Women and Minorities encouraged to apply.  From laura at mpipf-muenchen.mpg.de Thu Mar 14 11:41:29 1996 From: laura at mpipf-muenchen.mpg.de (Laura Martignon) Date: Thu, 14 Mar 1996 17:41:29 +0100 Subject: paper available: "Bayesian Learning of loglinear models for neuron connectivity" Message-ID: Kathryn Laskey and I have just finished the paper: "Bayesian Learning of loglinear models for neuron connectivity" Kathryn Laskey Department of Systems Engineering George Mason University Fairfax, VA 22030 klaskey at gmu.edu Laura Martignon Max Planck Institute for Psychological Research 80802 München, Germany laura at mpipf-muenchen.mpg.de Abstract This paper presents a Bayesian approach to learning the connectivity structure of a group of neurons from data on configuration frequencies. A major objective of the research is to provide statistical tools for detecting changes in firing patterns with changing stimuli. Our framework is not restricted to the well-understood case of pair interactions, but generalizes the Boltzmann machine model to allow for higher order interactions. The paper applies a Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over connectivity structures and uses Laplace's method to approximate posterior probabilities of structures. Performance of the methods was tested on synthetic data. The models were also applied to data obtained by Vaadia on multi-unit recordings of several neurons in the visual cortex of a rhesus monkey in two different attentional states. Results confirmed the experimenters' conjecture that different attentional states were associated with different interaction structures. Keywords: Nonhierarchical loglinear models, Markov Chain Monte Carlo Model composition, Laplace's Method, Neural Networks To obtain a copy of this paper, please send your email request to Laura Martignon e-mail: laura at mpipf-muenchen.mpg.de  From itl-rec at thuban.crd.ge.com Thu Mar 14 15:12:55 1996 From: itl-rec at thuban.crd.ge.com (itlrecruiting) Date: Thu, 14 Mar 96 15:12:55 EST Subject: Job: Data Mining / Neural Nets / Artificial Intelligence Message-ID: <9603142012.AA20879@thuban.crd.ge.com> The Information Technology Laboratory (80 people strong and still growing) at the Corporate Research & Development Center of General Electric in Schenectady, New York has the following position to offer: R&D Staff opportunity in Data Mining/Analysis/Warehousing BACKGROUND REQUIRED: PhD in Computer Science, Statistics, Artificial Intelligence or related field with a broad knowledge base. Strong interpersonal skills, good initiative and analytical skills, adaptable to change, high self confidence. Excellent computer skills required: e.g. either hands-on experience in implementing data storage and access solutions for multi-million record databases or hands-on experience in sampling and Data Mining / knowledge discovery analysis algorithms on multi-million record databases. DESIRED ALSO: Experience with C++ object-oriented programming. WE OFFER A CHALLENGING PERSPECTIVE: Develop and apply modern statistical methods, machine learning techniques and neural nets to a variety of strategically important and technically significant problems throughout GE, involving finance, product development, manufacturing and process improvement, and product servicing and reliability.
Lead work with analysts, engineers, and managers in the diverse GE businesses, e.g. Aircraft Engines, Capital Services, Medical Systems, NBC, Plastics, and Appliances, and with scientists at the Research & Development Center. --------------------------------------------------------------------------- FOR YOUR INTEREST: GE is one of the world's largest and most successful companies, having leadership positions in business segments including aircraft engines, plastics, manufacturing, capital services, and others. GE has its Corporate Research and Development Center located in Schenectady, New York (more at http://www.ge.com/). It supports the advanced technology requirements of all GE businesses. The 1000-plus staff of scientists and engineers is composed of representatives of most major disciplines (more at http://www.crd.ge.com/). APPLICATION: If you meet the requirements and you are interested, please send your resume via email in plain ASCII format to itl-rec at thuban.crd.ge.com (Steve Mirer). Please include where you found this ad and put "DATA MINING" in the subject line. BTW, we are recruiting world-wide to get the best possible match. ---------------------------------------------------------------- GE is an equal opportunity employer.  From arbib at pollux.usc.edu Thu Mar 14 18:25:04 1996 From: arbib at pollux.usc.edu (Michael A. Arbib) Date: Thu, 14 Mar 1996 15:25:04 -0800 (PST) Subject: Workshop on Sensorimotor Coordination Message-ID: <199603142325.PAA15747@pollux.usc.edu> FINAL CALL FOR PAPERS Workshop on SENSORIMOTOR COORDINATION: AMPHIBIANS, MODELS, AND COMPARATIVE STUDIES Poco Diablo Resort, Sedona, Arizona, November 22-24, 1996 Co-Directors: Kiisa Nishikawa (Northern Arizona University, Flagstaff) and Michael Arbib (University of Southern California, Los Angeles). Local Arrangements Chair: Kiisa Nishikawa. E-mail enquiries may be addressed to Kiisa.Nishikawa at nau.edu or arbib at pollux.usc.edu. Further information may be found on our home page at http://www.nau.edu:80/~biology/vismot.html. Program Committee: Kiisa Nishikawa (Chair), Michael Arbib, Emilio Bizzi, Chris Comer, Peter Ewert, Simon Giszter, Mel Goodale, Ananda Weerasuriya, Walt Wilczynski, and Phil Zeigler. SCIENTIFIC PROGRAM The aim of this workshop is to study the neural mechanisms of sensorimotor coordination in amphibians and other model systems for their intrinsic interest, as a target for developments in computational neuroscience, and also as a basis for comparative and evolutionary studies. The list of subsidiary themes given below is meant to be representative of this comparative dimension, but is not intended to be exhaustive. The emphasis (but not the exclusive emphasis) will be on papers that encourage the dialog between modeling and experimentation. A decision as to whether or not to publish a proceedings is still pending. Central Theme: Sensorimotor Coordination in Amphibians and Other Model Systems Subsidiary Themes: Visuomotor Coordination: Comparative and Evolutionary Perspectives Reaching and Grasping in Frog, Pigeon, and Primate Cognitive Maps Auditory Communication (with emphasis on spatial behavior and sensory integration) Motor Pattern Generators This workshop is the sequel to four earlier workshops on the general theme of "Visuomotor Coordination in Frog and Toad: Models and Experiments". The first two were organized by Rolando Lara and Michael Arbib at the University of Massachusetts, Amherst (1981) and Mexico City (1982).
The next two were organized by Peter Ewert and Arbib in Kassel and Los Angeles, respectively, with the Proceedings published as follows: Ewert, J.-P. and M. A. Arbib (Eds.) 1989. Visuomotor Coordination: Amphibians, Comparisons, Models and Robots. New York: Plenum Press. Arbib, M.A. and J.-P. Ewert (Eds.) 1991. Visual Structures and Integrated Functions, Research Notes in Neural Computing 3. Heidelberg, New York: Springer Verlag. INSTRUCTIONS FOR CONTRIBUTORS Persons who wish to present oral papers are asked to send three copies of an extended abstract, approximately 4 pages long, including figures and references. Persons who wish to present posters are asked to send a one page abstract. Abstracts may be sent by regular mail, e-mail or FAX. Authors should be aware that e-mailed abstracts should contain no figures. Abstracts should be sent no later than 1 May, 1996 to: Kiisa Nishikawa, Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640, E-mail: Kiisa.Nishikawa at nau.edu; FAX: (520)523-7500. Notification of the Program Committee's decision will be sent out no later than 15 June, 1996. REGISTRATION INFORMATION Meeting Location and General Information: The Workshop will be held at the Poco Diablo Resort in Sedona, Arizona (a beautiful small town set in dramatic red hills) immediately following the Society for Neuroscience meeting in 1996. The 1996 Neuroscience meeting ends on Thursday, November 21, so workshop participants can fly from Washington, DC to Phoenix, AZ that evening, meet Friday, Saturday, and Sunday, with a Workshop Banquet on Sunday evening, and fly home on Monday, November 25th. Paper sessions will be held all day on Friday, on Saturday afternoon, and all day on Sunday. Poster sessions will be held on Saturday afternoon and evening. A group field trip is planned for Saturday morning. Graduate Student and Postdoctoral Participation: In order to encourage the participation of graduate students and postdoctorals, we have arranged for affordable housing, and in addition we are able to offer a reduced registration fee (see below) thanks to the generous contribution of the Office of the Associate Provost for Research and Graduate Studies at Northern Arizona University. Travel from Phoenix to Sedona: Sedona, AZ is located approximately 100 miles north of Phoenix, site of the nearest major airport (Sky Harbor). Workshop attendees may wish to arrange their own transportation (e.g., car rental from Phoenix airport) from Phoenix to Sedona, or they may use the Workshop Shuttle (estimated round trip cost $20 US) to Sedona on 21 November, with a return to Phoenix on 25 November. If you plan to use the Workshop Shuttle, we will need to know your expected arrival time in Phoenix by 1 October 1996, to ensure that space is available for you at a convenient time. Lodging: The following costs are for each night. Since many participants may want to extend their stay to further enjoy Arizona's scenic beauty, we have negotiated special rates for additional nights after the end of the workshop on November 24th. Attendees should make their own booking with the Poco Diablo Resort, by phone (800) 352-5710 or FAX (520) 282-9712. Thurs.-Fri. (and additional week nights before the workshop) per night: students $85 US + tax, faculty $105 + tax Sat.-Sun. (and additional week nights after the workshop) per night: students $69 + tax, faculty $89 + tax. The student room rates are for double occupancy.
Thus, students willing to share a room may stay for half the stated rate. When you make your room reservations with the Poco Diablo Resort, please be sure to indicate the number of guests in your party. Graduate students and postdocs should be sure to indicate whether they want single or double occupancy. REGISTRATION FEES: Students and postdoctorals $100; faculty, guests and others $200. The registration fee includes lunch Fri. - Sun., wine and cheese reception during the Saturday evening poster session, and a Farewell Dinner on Sunday evening. Registration fees should be paid by check in US funds, made payable to "Sensorimotor Coordination Workshop", and should be sent to Kiisa Nishikawa at the address listed below, together with the completed registration form that follows at the end of this announcement. Completed registration forms and fees must be received by 1 July, 1996. Late registration fees will be $150 for students and postdoctorals and $250 for faculty. REGISTRATION FORM NAME: ADDRESS: PHONE: FAX: EMAIL: STATUS: [ ] Faculty ($200); [ ] Postdoctoral ($100); [ ] Student ($100); [ ] Other ($200). (Postdocs and students: Please attach certification of your status signed by your supervisor.) TYPE OF PRESENTATION (paper vs. poster): ABSTRACT SENT: (yes/no) AREAS OF INTEREST RELEVANT TO WORKSHOP: WILL YOU REQUIRE ANY SPECIAL AUDIOVISUAL EQUIPMENT FOR YOUR PRESENTATION? HAVE YOU MADE A RESERVATION WITH THE HOTEL? EXPECTED TIME OF ARRIVAL IN PHOENIX (ON NOVEMBER 21): EXPECTED TIME OF DEPARTURE FROM PHOENIX (ON NOVEMBER 25): DO YOU WISH TO USE THE WORKSHOP SHUTTLE TO TRAVEL FROM PHOENIX TO SEDONA? (If so, please be sure that we know your expected arrival time by 1 October!) DO YOU WISH TO PARTICIPATE IN A GROUP HIKE IN THE SEDONA AREA ON SATURDAY MORNING? Please make sure that your check (in US funds and payable to the "Sensorimotor Coordination Workshop") is included with this form. If you plan to bring a guest with you to the Workshop, please add their name(s) to this form and enclose their registration fee along with your own. Mail to: Kiisa Nishikawa, Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640. E-mail: Kiisa.Nishikawa at nau.edu. FAX: (520)523-7500. Phone: (520)523-9497.  From rjb at psy.ox.ac.uk Fri Mar 15 09:49:03 1996 From: rjb at psy.ox.ac.uk (Roland Baddeley) Date: Fri, 15 Mar 1996 14:49:03 GMT Subject: Paper available on exploratory projection pursuit. Message-ID: <199603151449.OAA06560@axp02.mrc-bbc.ox.ac.uk> The following paper is available on the web at http://www.mrc-bbc.ox.ac.uk/~rjb/ It has been accepted for publication in Network. TITLE: Searching for filters with ``interesting'' output distributions: an uninteresting direction to explore? Abstract It has been proposed that the receptive fields of neurons in V1 are optimised to generate ``sparse'', kurtotic, or ``interesting'' output probability distributions \cite{Barlow92,Barlow94,Field94,Intrator91,Intrator92d}. We investigate the empirical evidence for this further and argue that filters can produce ``interesting'' output distributions simply because natural images have variable local intensity variance. If the proposed filters have zero D.C., then the probability distribution of filter outputs (and hence the output kurtosis) is well predicted simply from these effects of variable local variance. This suggests that finding filters with high output kurtosis does not necessarily signal interesting image structure.
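As a quick numerical check on this point, consider the following sketch (my illustration, not code from the paper; the lognormal model of patch-to-patch intensity variance and the pixel-difference filter are assumptions chosen for simplicity):

import numpy as np

rng = np.random.default_rng(1)

def excess_kurtosis(x):
    x = x - x.mean()
    return (x ** 4).mean() / (x ** 2).mean() ** 2 - 3.0   # 0 for a Gaussian

# Synthetic patches: pure Gaussian noise, but with a local intensity variance
# that differs from patch to patch (the standard deviation is itself random).
n_patches, patch_len = 2000, 64
sds = rng.lognormal(mean=0.0, sigma=1.0, size=n_patches)
patches = rng.normal(0.0, 1.0, (n_patches, patch_len)) * sds[:, None]

# A zero-D.C. filter: the difference of two neighbouring pixels.
outputs = np.diff(patches, axis=1).ravel()
print(f"pooled excess kurtosis of filter outputs: {excess_kurtosis(outputs):.1f}")

# Within any single patch the filter outputs are Gaussian (excess kurtosis
# near 0); the large pooled kurtosis comes entirely from mixing patches of
# different variance, not from any structure in the signal.
within = np.mean([excess_kurtosis(np.diff(p)) for p in patches])
print(f"mean within-patch excess kurtosis: {within:.1f}")

The pooled output distribution is strongly kurtotic even though the patches contain nothing but noise, which is exactly the confound described above.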
It is then argued that finding filters that maximise output kurtosis generates filters that are incompatible with observed physiology. In particular, the optimal difference--of--Gaussian (DOG) filter should have the smallest possible scale, an on--centre off--surround cell should have a negative D.C., and the ratio of centre width to surround width should approach unity. This is incompatible with the physiology. Further, it is also predicted that oriented filters should always be oriented in the vertical direction, and of all the filters tested, the filter with the highest output kurtosis has the lowest signal to noise (the filter is simply the difference of two neighbouring pixels). Whilst these observations are not incompatible with the brain using a sparse representation, they do argue that little significance should be placed on finding filters with highly kurtotic output distributions. It is therefore argued that other constraints are required in order to understand the development of visual receptive fields. FILE: http://www.mrc-bbc.ox.ac.uk/ftp/users/rjb/rjb_kur.ps.Z -- Roland Baddeley Research Fellow, MRC Centre for Cognitive Neuroscience University of Oxford normal mail: Experimental Psychology email: rjb at psy.ox.ac.uk Oxford University phone: +44-1865-271914 South Parks Road fax: +44-1865-272488 Oxford, OX1 3UD UK  From terry at salk.edu Fri Mar 15 15:31:49 1996 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 15 Mar 96 12:31:49 PST Subject: Telluride Workshop - Deadline April 5 Message-ID: <9603152031.AA05918@salk.edu> WORKSHOP ON NEUROMORPHIC ENGINEERING JUNE 24 - JULY 14, 1996 TELLURIDE, COLORADO Deadline for application is April 5, 1996. Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one three-week workshop that will be held in Telluride, Colorado in 1996. The first two Telluride Workshops on Neuromorphic Engineering, held in the summers of 1994 and 1995 and sponsored by NSF with co-funding from the "Center for Neuromorphic Systems Engineering" at Caltech, were resounding successes. A summary of these workshops, together with a list of participants, is available from: http://www.klab.caltech.edu/~timmer/telluride.html or http://www.salk.edu/~bryan/telluride.html GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead.
The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems. FORMAT: The three-week workshop is co-organized by Dana Ballard (Rochester, US), Rodney Douglas (Zurich, Switzerland) and Misha Mahowald (Zurich, Switzerland). It is composed of lectures, practical tutorials on aVLSI design, hands-on projects, and interest groups. Apart from the lectures, the activities run concurrently. However, participants are free to attend any of these activities at their own convenience. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons, and after dinner. The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation, layout, and testing over the course of the three weeks. The first week covers basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with aVLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing aVLSI retinas to video output monitors. Retina chips will be provided. The third week will feature a session on floating gates, including lectures on the physics of tunneling and injection, and experimentation with test chips. Projects that are carried out during the workshop will be centered on four groups: 1) active perception, 2) elements of autonomous robots, 3) robot manipulation, and 4) multichip neuron networks. The "active perception" project group will emphasize vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe, which in turn provides drive signals to the three motors of the head. Projects will involve programming the DataCube to implement a variety of vision/oculomotor algorithms. The "elements of autonomous robots" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple aVLSI sensors for autonomous robots. The "robot manipulation" group will use robot arms and working digital vision boards to investigate issues of sensory motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
The "multichip neuron networks" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory. Issues in multichip communication will be discussed. PARTIAL LIST OF INVITED LECTURERS: Dana Ballard, Rochester. Randy Beer, Case-Western Reserve. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. Chris Diorio, Caltech. Rodney Douglas, Zurich. John Elias, Delaware University. Stefano Fusi, Italy. Mary Hayhoe, Rochester. Geoffrey Hinton, Toronto. Ian Horswill, NWU. Christof Koch, Caltech. Shih-Chii Liu, Caltech and Rockwell. Misha Mahowald, Zurich. Stefan Schaal, Georgia Tech. Mark Tilden, Los Alamos. Terry Sejnowski, Salk Institute and UC San Diego. Paul Viola, MIT. LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains (several mountains are in the 14,000+ range). The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, one or two MACs and a few PCs running Windows and LINUX. We have funds to reimburse some participants for up to $500.- of domestic travel and for all housing expenses. Please specify on the application whether such financial help is needed. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop. HOW TO APPLY: The deadline for receipt of applications is April 5, 1996. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply. Applications should include: 1. Name, address, telephone, e-mail, FAX, and minority status (optional). 2. Curriculum Vitae. 3. One page summary of background and interests relevant to the workshop. 4. Description of special equipment needed for demonstrations that could be brought to the workshop. 5. Two letters of recommendation. Complete applications should be sent to: Prof. Terrence Sejnowski The Salk Institute 10010 North Torrey Pines Road San Diego, CA 92037 email: terry at salk.edu FAX: (619) 587 0417 Applicants will be notified around May 1, 1996.
From ib at rana.usc.edu Fri Mar 15 19:02:03 1996 From: ib at rana.usc.edu (Irving Biederman) Date: Fri, 15 Mar 1996 16:02:03 -0800 Subject: Shift Invariance Message-ID: <199603160002.QAA01395@mizar.usc.edu> Shimon Edelman (March 8) writes: [Omission of some of posting] >Note that the purpose of my previous posting was to advocate caution, >certainly not to argue that all claims of invariance are wrong. >Fortunately, my job in this matter is easy: just one example of a >manifest lack of invariance suffices to invalidate the strong version >of invariance-based theory of vision, which seems to be espoused by >Goldfarb: >So, here it goes... Whereas invariance does hold in many recognition >tasks (in particular, in Biederman's experiments, as well as in the >experiments reported in [1]), it does not in others (as, e.g., in [2], >where interaction between size invariance and orientation is >reported). A recent comprehensive survey of (the far from invariant) >human performance in recognizing rotated objects can be found in >[3]. Furthermore, not only recognition, but also perceptual learning, >seems to be non-invariant in some cases; see [4,5]. [Omission of rest of posting] It should be so easy. Of course, ALL of vision is not shift invariant (I don't believe that Goldfarb was asserting that it was) as there is clear evidence that people are, for example, quite sensitive to the location of objects when they reach out to grasp them. The issue of shift invariance was specifically raised, not for ALL of vision, but for the domain of (what should be called) object recognition, what I termed "primal access" (Biederman, '87), in which basic-level or (most) subordinate-level classification is made from a large and uncertain population of objects, as when channel surfing. I think that readers who are not familiar with some of the literature cited in Edelman's posting might be misled into thinking that shift invariance in object recognition is a special case. As I noted in my previous posting, the evidence is quite strong that object recognition tasks, at the same time that they show a visual (and not just verbal or conceptual) benefit from a single presentation in an experiment, also show shift invariance. (They also show size, scale, reflection, and rotation in depth invariance, as long as the same parts and relations are readily distinguished.) Edelman points out that there have been reports of view-dependency for depth rotation, not shift, in "recognition" tasks. (Goldfarb specifically exempted rotation.) But even for depth rotation, readers should note that the findings of large rotation costs are found only for extremely difficult discrimination tasks, performed only rarely in normal visual activities, in which viewpoint-invariant information is generally not available, such as distinguishing among a set of highly similar bent paper clips. Why would invariance not be found with extremely difficult tasks? When tasks are difficult, subjects will attempt various strategies (e.g., look to the left [a dorsal function?] for a small, distinguishing feature) that might produce a cost of view-change, but this does not mean that the representation of the feature (or object) itself is not invariant. All in all, the absence of an effect of a view-change puts one in a simpler explanatory position (assuming adequate power) than when an effect of view change (say, a shift) is found.
The latter kind of result means that one has to eliminate other task variables as potential bases of the effect, such as a search for a distinguishing feature, as noted above. A finding of an effect of a change in viewpoint in "object recognition" might or might not mean that the representation of the object is viewpoint dependent. The "view-based" camp will have to demonstrate that the representation of an object (for primal access) really does change when it is shifted, or shown at a different size, or orientation in depth (assuming that the same parts are in view). They haven't done this yet. Whether a TASK (NOT A REPRESENTATION) does or does not manifest shift invariance might well depend on the degree to which it reflects dorsal (motor interaction) vs. ventral (recognition) cortical representations. The manifestation of these invariances nicely dovetails with the phenomenon of "object constancy" noted by the Gestaltists, in which the perception of the real object is largely unaffected by its translation or rotation. It is of interest that patient D. F., studied by Milner and Goodale and presumed to have a damaged ventral pathway, shows no awareness of objects while at the same time being able to reach competently for them. My views on these matters of view invariance (especially of rotation in depth) are more fully presented in: 1. Biederman, I., & Gerhardstein, P. C. (1993). Recognizing depth-rotated objects: Evidence and conditions for 3D viewpoint invariance. Journal of Experimental Psychology: Human Perception and Performance, 19, 1162-1182. 2. Biederman, I., & Gerhardstein, P. C. (1995). Viewpoint-dependent mechanisms in visual object recognition: Reply to Tarr and Bülthoff (1995). Journal of Experimental Psychology: Human Perception and Performance, 21, 1506-1514. 3. Biederman, I., & Bar, M. (1995). One-Shot Viewpoint Invariance with Nonsense Objects. Paper presented at the Annual Meeting of the Psychonomic Society, 1995, Los Angeles, November. Available on our WWW site: http://rana.usc.edu:8376/~ib/iul.html  From terry at salk.edu Fri Mar 15 18:47:24 1996 From: terry at salk.edu (Terry Sejnowski) Date: Fri, 15 Mar 96 15:47:24 PST Subject: Neural Computation 8:3 Titles Message-ID: <9603152347.AA08606@salk.edu> Neural Computation - Volume 8, Number 3 - April 1, 1996 Long Article: A Smoothing Regularizer for Feedforward and Recurrent Neural Networks Lizhong Wu and John Moody Notes: Note on the Maxnet Dynamics John P. F. Sum and Peter K. S. Tam Optimizing Synaptic Conductance Calculation for Network Simulations William W. Lytton Letters: Parameter Extraction from Population Codes: A Critical Assessment Herman P. Snippe Energy Efficient Neural Codes William B. Levy and Robert A. Baxter A Nonlinear Hebbian Network that Learns to Detect Disparity in Random-Dot Stereograms Christopher W. Lee and Bruno A. Olshausen Coupling the Neural and Physical Dynamics in Rhythmic Movements Nicholas G. Hatsopoulos Predictive Minimum Description Length Criterion for Time Series Modeling with Neural Networks Mikko Lehtokangas, Jukka Saarinen, Pentti Huuhtanen and Kimmo Kaski Minimum Description Length, Regularization and Multi-Model Data Richard Rohwer and John C. van der Rest VC Dimension of an Integrate-and-Fire Neuron Model Anthony M. Zador and Barak A. Pearlmutter The VC-Dimension and Pseudodimension of Two-Layer Neural Networks with Discrete Inputs Peter L. Bartlett and Robert C.
Peter L. Bartlett and Robert C. Williamson

A Theoretical and Experimental Account of N-Tuple Classifier Performance
Richard Rohwer and Michal Morciniec

The Effects of Adding Noise During Backpropagation Training on a Generalization Performance
Guozhong An

-----

ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html

SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES
______ $50 Student and Retired
______ $78 Individual
______ $220 Institution

Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each to individuals. Add $5 for postage per issue outside USA, +7% GST for Canada.)

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142. Tel: (617) 253-2889 FAX: (617) 258-6779

-----

From ruppin at math.tau.ac.il Sun Mar 17 06:53:46 1996 From: ruppin at math.tau.ac.il (Eytan Ruppin) Date: Sun, 17 Mar 1996 14:53:46 +0300 (GMT+0300) Subject: Memory Consolidation Workshop Message-ID: <199603171153.OAA17322@gemini.math.tau.ac.il>

WORKSHOP ANNOUNCEMENT - TAU - May 28-30, 1996
----------------------------------------------
MEMORY ORGANIZATION AND CONSOLIDATION: COGNITIVE AND COMPUTATIONAL PERSPECTIVES
-----------------------------------------
Adams Super Center for Brain Studies, Tel-Aviv University

A workshop on Memory Organization and Consolidation, sponsored by Branco Weiss, will be held during May 28-30, 1996 at Tel-Aviv University, Israel. Invited speakers from different disciplines of the Neurosciences will discuss psychological, neurological, physiological and computational perspectives of the subject. An informal atmosphere will be maintained, encouraging questions and discussions.

WORKSHOP PROGRAM
--------------------

Tuesday, May 28
------------------
8:45AM: Opening address

Session 1: Chair: Bruce McNaughton
----------------------------------
9:00AM: Morris Moscovitch and Lynn Nadel - Consolidation: The dynamics of memory systems in humans and animals.
9:50AM: James McClelland - Why there are complementary learning systems in the brain.
10:40AM: Coffee break
11:10AM: Daniel Amit - Thinking about learning in the context of active memory.
12:00PM: Discussion
12:30PM: Lunch break

Session 2: Chair: Jay McClelland
---------------------------------
1:45PM: Bruce McNaughton - The Hippocampus, Space, and Memory Consolidation: Towards a Grand Unified Theory. Part I: A multichart neuronal architecture for both integration of self motion in arbitrary spatial reference frames and memory reprocessing.
2:40PM: Edi Barkai - Cellular mechanisms underlying memory consolidation in the Piriform Cortex.
3:30PM: Coffee break
3:50PM: Ilan Golani - Spatial memory in rat unconstrained exploratory behavior.
4:40PM: Michael Hasselmo - A model of human memory based on the cellular physiology of the hippocampal formation.
5:30PM: Discussion
6:00PM: Get Together and Poster Session.

Wednesday, May 29
-------------------

Session 3: Chair: Daniel Amit
------------------------------
9:00AM: Bruce McNaughton - The Hippocampus, Space, and Memory Consolidation: Towards a Grand Unified Theory. Part II: Coherence of hippocampal and neocortical memory reactivation during off-line processing.
9:50AM: Avi Karni - Cortical plasticity and adult skill learning: Time and practice are of essence.
10:40AM: Coffee break
11:10AM: Richard Thompson - Declarative memory in classical conditioning? Involvement of the hippocampus.
12:00PM: Discussion
12:30PM: Lunch break

Session 4: Chair: David Horn
-----------------------------
1:45PM: James McClelland - Representation and memory in the hippocampal system: acquisition, maintenance and recovery of novel, arbitrary associations.
2:40PM: Mark Gluck - Neurocomputational approaches to integrating animal and human models of memory.
3:30PM: Coffee break
3:50PM: Alessandro Treves - Quantitative constraints on consolidation.
4:40PM: Eytan Ruppin - Synaptic maintenance, consolidation and sleep.
5:30PM: Discussion
6:00PM: Dinner

Thursday, May 30
------------------

Session 5: Chair: Mark Gluck
------------------------------
9:00AM: Richard Thompson - Localization of a memory trace in the mammalian brain: The cerebellum and classical conditioning.
9:50AM: Matty Mintz - Fast acquisition of fear and slow acquisition of motor reflexes in classical conditioning: Interdependent processes?
10:40AM: Coffee break
11:10AM: Lynn Nadel - Memory consolidation: Multiple modules and multiple time-scales.
12:00PM: Discussion
12:30PM: Lunch break

Session 6: Chair: Richard Thompson
-----------------------------------
1:45PM: Morris Moscovitch - Structural and functional components of memory in humans.
2:40PM: Yadin Dudai - Taste, novelty, and a molecular saliency switch in brain.
3:30PM: Coffee break
3:50PM: Amos Korczyn - Clinical aspects of forgetting.
4:40PM: Martin Albert - Memory and Language.
5:30PM: Discussion
6:00PM: Closing Remarks

CALL FOR ABSTRACTS
--------------------
Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single-page submission should include title, author(s), contact information (address and email/fax), and abstract, and will be reviewed by the Program Committee. Abstract submissions should be sent to Eytan Ruppin, Dept. of Computer Science, Tel-Aviv University, Tel-Aviv, Israel, 69978. Email: ruppin at math.tau.ac.il. All submissions should arrive by April 15th, 1996.

REGISTRATION
--------------
To register for the workshop, contact Ms. Bila Lenczner, Adams Super Center for Brain Studies, Tel Aviv University, Tel Aviv 69978, Israel, Tel.: 972-3-6407377, Fax: 972-3-6407932, email: memory at neuron.tau.ac.il

Program Committee:
------------------
David Horn, Michael Myslobodsky and Eytan Ruppin

Further Information and updates:
--------------------------------
See our WWW homepage at http://www.brain.tau.ac.il, or http://neuron.tau.ac.il/Adams/memory.

From xjwang at xjwang.ccs.brandeis.edu Sun Mar 17 18:05:20 1996 From: xjwang at xjwang.ccs.brandeis.edu (Xiao Jing Wang) Date: Sun, 17 Mar 96 18:05:20 EST Subject: No subject Message-ID: <9603172305.AA06494@xjwang.ccs.brandeis.edu>

POSTDOCTORAL POSITION AT THE SLOAN CENTER FOR THEORETICAL NEUROBIOLOGY, BRANDEIS UNIVERSITY
==================================================================

Dear Colleagues, I am looking for a post-doctoral research associate in computational neuroscience, starting August/September, 1996. Current research in my lab is focused on two kinds of topics: coherent cortical oscillations and their functional roles; working-memory processes and their neuromodulation. Projects are expected to be carried out in close interactions and collaborations with experimental neurobiologists. Candidates with a strong theoretical background, analytical and simulation skills, and knowledge in neuroscience are encouraged to apply.
Brandeis University, near Boston, offers excellent opportunities in this interdisciplinary field. Other faculty members at the Sloan Center include Drs. Laurence Abbott, Eve Marder, John Lisman, Sasha Nelson, and Gina Turrigiano. Applicants should promptly send a curriculum vitae and a brief description of fields of interest, and have three letters of recommendation sent to the following address.

Xiao-Jing Wang
Center for Complex Systems
Brandeis University
Waltham, MA 02254
phone: (617) 736-3147
email: xjwang at volen.brandeis.edu

From payman at ebs330.eb.uah.edu Fri Mar 15 12:33:17 1996 From: payman at ebs330.eb.uah.edu (Payman Arabshahi) Date: Fri, 15 Mar 96 11:33:17 CST Subject: CIFEr'96 fast approaching - register now! Message-ID: <9603151733.AA24900@ebs330>

(1st reminder announcement after publicizing the CFP on CONNECTIONISTS.)

The 1996 IEEE/IAFE Computational Intelligence in Financial Engineering Conference will be held March 24-26, 1996 at the Crowne Plaza Manhattan, New York. This is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. You can still register for the conference. Please visit our homepage at http://www.ieee.org/nnc/conferences/cfp or drop me a line for complete information and registration details.

--
Payman Arabshahi                          Tel : (205) 895-6380
Dept. of Electrical & Computer Eng.       Fax : (205) 895-6803
University of Alabama in Huntsville       payman at ebs330.eb.uah.edu
Huntsville, AL 35899                      http://www.eb.uah.edu/ece

From piuri at elet.polimi.it Mon Mar 18 14:32:31 1996 From: piuri at elet.polimi.it (Vincenzo Piuri) Date: Mon, 18 Mar 1996 20:32:31 +0100 Subject: NICROSP'96 - call for participation Message-ID: <9603181932.AA24645@ipmel2.elet.polimi.it>

================================================================================
NICROSP'96
1996 International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing
Ramada Hotel, Venice, Italy - 21-23 August 1996
================================================================================

Sponsored by the IEEE Computer Society and the IEEE CS Technical Committee on Pattern Analysis and Machine Intelligence. In cooperation with: ACM SIGART, IEEE Circuits and Systems Society, IEEE Control Systems Society, IEEE Instrumentation and Measurement Society, IEEE Neural Network Council, IEEE North-Italy Section, IEEE Region 8, IEEE System, Man, and Cybernetics Society, IMACS, ISCA, AEI, AICA, ANIPLA, FAST.

This first edition of the workshop is intended to create a unique synergetic discussion forum and a strong link between theoretical researchers and practitioners in the application fields of identification, control, robotics, and signal/image processing using neural techniques. The three-day single-session schedule will provide the ideal environment for in-depth analysis and discussions concerning the theoretical aspects of the applications and the use of neural networks in practice. Two keynote speakers (Prof. T. Kohonen and Prof. B. Widrow) will provide starting points for the discussion.

ORGANIZERS

General Chair: Prof. Edgar Sanchez-Sinencio, Department of Electrical Engineering, Texas A&M University, USA
Program Chair: Prof. Vincenzo Piuri, Department of Electronics and Information, Politecnico di Milano, Italy
Publication Chair: Dr. Jose' Pineda de Gyvez, Department of Electrical Engineering, Texas A&M University, USA
Publicity, Registration & Local Arrangement Chair:
Dr. Cesare Alippi, Department of Electronics and Information, Politecnico di Milano, Italy
Workshop Secretariat: Ms. Laura Caldirola (email: caldirol at elet.polimi.it), Department of Electronics and Information, Politecnico di Milano, Italy

Program Committee:
Shun-Ichi Amari, University of Tokyo, Japan
Panos Antsaklis, University of Notre Dame, USA
Magdy Bayoumi, University of Southwestern Louisiana, USA
James C. Bezdek, University of West Florida, USA
Pierre Borne, Ecole Polytechnique de Lille, France
Luiz Caloba, Universidade Federal do Rio de Janeiro, Brazil
Jill Card, Digital Equipment Corp., USA
Chris De Silva, University of Western Australia, Australia
Laurene Fausett, Florida Institute of Technology, USA
C. Lee Giles, NEC, USA
Karl Goser, University of Dortmund, Germany
Yee-Wei Huang, Motorola Inc., USA
Simon Jones, University of Loughborough, UK
Michael Jordan, Massachusetts Institute of Technology, USA
Robert J. Marks II, University of Washington, USA
Jean D. Nicoud, EPFL, Switzerland
Eros Pasero, Politecnico di Torino, Italy
Emil M. Petriu, University of Ottawa, Canada
Alberto Prieto, University of Granada, Spain
Gianguido Rizzotto, SGS-Thomson, Italy
Edgar Sanchez-Sinencio, Texas A&M University, USA
Bernd Schuermann, Siemens, Germany
Earl E. Swartzlander, University of Texas at Austin, USA
Philip Treleaven, University College London, UK
Kenzo Watanabe, Shizuoka University, Japan
Michel Weinfeld, Ecole Polytechnique de Paris, France

GENERAL INFORMATION

The conference location is on the mainland of Venice, Italy. The workshop will be held in the Ramada Hotel, in S. Giuliano, near the International Airport of Venice. A number of rooms have been reserved at the Ramada Hotel for the NICROSP attendees at the special rates shown in the Hotel Reservation Form. This American-style hotel is fully equipped to provide a high level of comfort throughout the stay. Buffet breakfast is included in the hotel rates. Lunches are included in the registration fee for registered attendees, as well as coffee breaks, entrance to sessions, and one copy of the proceedings. Additional lunch tickets for companions may be purchased at the registration desk. Hotel reservations must be made directly with the Ramada Hotel at S. Giuliano - Venice by sending the Hotel Reservation Form (fax is preferred). Reservations can also be made by contacting any other Ramada Reservation Center around the world and mentioning the special rates for the NICROSP conference. Reservation deadline is June 21, 1996. After this date, the hotel will not be able to guarantee room availability; should the hotel be completely booked, Ramada will suggest equivalent accommodations in nearby hotels. Disabled persons should contact the hotel for possible special requirements. The Ramada Hotel grants the same workshop rates from August 16 till August 26.

TRANSPORTATION

Venice is served by an International Airport (about 15 minutes by car from the Ramada Hotel). Flights are available daily from most European towns and from some US cities. A special shuttle service for NICROSP attendees may be organized by the Ramada Hotel from/to the airport: since the shuttle fare is fixed and independent of the number of passengers, attendees should contact the Ramada Hotel to coordinate car pools. At the airport it is possible to rent a car to reach the Ramada Hotel (guest parking is available within the hotel). Maps and directions are available at the car rentals.
Taxi cabs are also available; the typical fare from the airport to the Ramada Hotel is approximately 40,000 Italian lire. Venice has good and frequent international rail connections. Use the Mestre station (every train stops there). Taxi cabs are available at the station exit; the typical fare is approximately 20,000 Italian lire. If you decide to drive to the workshop site, ask for a map at the workshop secretariat: leave the highway to Venice at the Mestre-Est exit or at the Mestre-Ovest exit and then follow the map. Guest parking is available within the hotel. The entrance to downtown Venice (piazzale Roma) may be reached by a shuttle service for hotel guests (provided by the Ramada Hotel at scheduled times), by public bus (also available at scheduled times), or by taxi cab. Public boats for downtown Venice and for the laguna islands leave from piazzale Roma. Additional information and time schedules for transportation between the Ramada Hotel and downtown Venice will be provided at the workshop registration desk or at the hotel reception.

TECHNICAL PROGRAM

The technical program will be available after 8 April 1996. Request it from the workshop secretariat.

FURTHER QUESTIONS

For any problem or further information, contact the workshop secretariat by July 26. After this date, contact Prof. Vincenzo Piuri by email only (piuri at elet.polimi.it).

================================================================================
NICROSP'96 HOTEL RESERVATION FORM

Please return this form as soon as possible (fax is preferred) to:
RAMADA HOTEL, via Orlanda 4, I-30173 S. Giuliano, Venezia, Italy
Fax +39-41-5312278

Reservation deadline is June 21, 1996. After this date, the hotel will not be able to guarantee room availability; should the hotel be completely booked, Ramada will suggest equivalent accommodations in nearby hotels.

Last / First Name_______________________________________________________________
Company/University______________________________________________________________
Department______________________________________________________________________
Address_________________________________________________________________________
City__________________________________State/Country_____________________________
Telephone_______________________________________________________________________
Fax_____________________________________________________________________________
Date__________________________________Signature_________________________________

Please reserve the following accommodations:
o No. __ Single room(s) at 162,000 Italian lire
o No. __ Double room(s) at 262,000 Italian lire
o No. __ Twin room(s) at 262,000 Italian lire

Cross the preferred accommodation and insert the number of rooms that you are reserving for each type (otherwise, one is assumed for each cross). Room rates are per night and include buffet breakfast.
Arrival date and approximate time:______________________________________________
Departure date and approximate time:____________________________________________
Number of nights:_______________________________________________________________

For late arrival, please give credit card information:
o Eurocard o MasterCard o Access o VISA
Credit Card Number_________________________________________Valid until__________
Card Holder (Last/First Name)___________________________________________________
Signature__________________________________________Date_________________________

================================================================================
NICROSP'96 WORKSHOP REGISTRATION FORM

Please return this form as soon as possible by fax or mail (email is not accepted) to the workshop's secretariat:
Ms. Laura Caldirola
Politecnico di Milano, Department of Electronics and Information
Piazza L. da Vinci 32, 20133 Milano, Italy
phone +39-2-2399-3623 fax +39-2-2399-3411

Last / First Name_______________________________________________________________
Company/University______________________________________________________________
Department______________________________________________________________________
Address_________________________________________________________________________
City______________________________________State/Country_________________________
Telephone_________________________________Fax___________________________________
E-mail__________________________________________________________________________
Date______________________________________Signature_____________________________

Registration fee (if received):             before 1 June    after 1 June
o Member                                    320 US$          385 US$
  (o IEEE o ACM o AEI o AICA o ANIPLA o IMACS o ISCA   Member No.__________)
o Non Member                                400 US$          480 US$
o Student (enclose copy of student
  identification card)                      200 US$          200 US$

o Banquet: No. ___ tickets at 70 US$ each: Total _________US$

Registration fees include entrance to sessions, one copy of the proceedings, and coffee breaks. Advance payment can be made by credit card:
o MasterCard o VISA o American Express
Credit Card Number __________________________________Valid until_____________
Card Holder (Last/First Name)________________________________________________
Total charged_______________________US$
Signature____________________________________________Date____________________

On-site registration can be paid in Italian lire or in US$ (daily exchange rates - including bank exchange charge - will be provided at the registration desk), by credit card or cash.

================================================================================

From gps0%eureka at gte.com Mon Mar 18 11:00:34 1996 From: gps0%eureka at gte.com (Gregory Piatetsky-Shapiro) Date: Mon, 18 Mar 1996 11:00:34 -0500 Subject: Job at GTE Laboratories: Data Mining and Knowledge Discovery Message-ID: <9603181600.AA22544@eureka.gte.com>

**** An Outstanding Applied Researcher/Developer needed for the **********
**** Knowledge Discovery in Databases project at GTE Laboratories **********

TASK: Participate in the design and development of state-of-the-art systems for data mining and knowledge discovery. While the job will have significant research aspects, the focus will be on the development of prototypes to be used in a production setting. Our current projects include predictive customer modeling and analysis of healthcare information. We are applying multiple learning and discovery methods to very large, high-dimensional real-world databases.
The ideal candidate will have a Ph.D. in Machine Learning or related fields and 2-3 years of experience, or an M.S. with equivalent experience. The candidate should have experience with a variety of machine learning algorithms, be familiar with statistical theory, have practical experience with databases, and be proficient with Web/Internet tools. Excellent coding skills in a C/Unix environment and the ability to quickly pick up new systems and languages are needed. Good communication skills, the ability to work as part of a team, and good system maintenance practices are very desirable. The candidate will join one of the leading R&D teams in the area of data mining and knowledge discovery. GTE Laboratories Incorporated is the central research facility for GTE and supports GTE's telecommunications businesses. We are the largest local exchange telephone company and the second largest mobile service provider in the United States. Our research facility is located in a quiet 50-acre campus-like setting in Waltham, MA, 20 minutes from downtown Boston. Our salaries are competitive, and our outstanding benefits include medical/life/dental insurance, a pension plan, savings and investment plans, and an on-site fitness center. We have a workforce of approximately 500 employees. Proper work authorization required. Please send a resume and a cover letter (preferably by e-mail, in ASCII) to:

Gregory Piatetsky-Shapiro              e-mail: gps at gte.com
GTE Laboratories, MS-44                tel: 617-466-4236
40 Sylvan Road                         fax: 617-466-2960
Waltham MA 02154-1120                  URL: http://info.gte.com/~kdd/gps.html

From jose at scr.siemens.com Mon Mar 18 08:34:54 1996 From: jose at scr.siemens.com (Stephen Hanson) Date: Mon, 18 Mar 1996 08:34:54 -0500 (EST) Subject: Fwd: test Message-ID:

If you know of anyone who might be interested, please pass this on to them, or please post locally---thanks. Steve

Please circulate this job posting!

----------------------------Original message----------------------------

Visiting Research Fellow
The James S. McDonnell Foundation

The James S. McDonnell Foundation, a major private philanthropy supporting research in education and the behavioral and biomedical sciences, is seeking an energetic, resourceful professional to fill the position of Visiting Research Fellow (VRF). The VRF will work closely with the President and the Program Officer on projects related to the Foundation's current national and international programs and assist in identifying new program opportunities related to the Foundation's interests. The VRF will be responsible for developing the Foundation's technological capabilities, specifically in the use of the Internet and other communication systems. The VRF will be encouraged to develop a research project relevant to the Foundation and the VRF's interests that includes using the Foundation to develop a model system of how foundations and non-profit organizations can best incorporate technology to achieve their mission and enhance their productivity and efficiency. The position will be filled by a recent graduate of a doctoral program in psychology, cognitive science or cognitive neuroscience interested in exploring career opportunities in academic administration, private philanthropy, or science policy. The successful candidate must possess superior oral and written communication skills and expertise in and enthusiasm for the application of computer and communication technologies.
The VRF must be able to represent the Foundation at national and international meetings with senior representatives of other funding agencies, senior scientists, and high-level science administrators. The position is for one year, with the possibility of renewal for a second. The anticipated start date is July 1, 1996. Competitive salary and benefits. Additional information about the McDonnell Foundation may be obtained via http://www.jsmf.org. Qualified candidates must submit a letter of interest, a curriculum vitae, and three letters of reference by April 26, 1996 to:

Susan M. Fitzpatrick, PhD
James S. McDonnell Foundation
1034 South Brentwood Blvd., Suite 1610
St. Louis, Missouri 63117
email: c6819sf at wuvmd.wustl.edu

Application materials may be submitted electronically. The James S. McDonnell Foundation is an EO/AAE.

From josh at vlsia.uccs.edu Tue Mar 19 16:39:04 1996 From: josh at vlsia.uccs.edu (Alspector) Date: Tue, 19 Mar 1996 14:39:04 -0700 (MST) Subject: Postdoc in neural systems Message-ID:

RESEARCH ASSOCIATE IN NEURAL SYSTEMS

There is a postdoctoral research associate position available in the electrical and computer engineering department at the University of Colorado at Colorado Springs. It is supported by a grant from ARPA to study a neural-style VLSI vision system in collaboration with a group from Caltech. We are applying neural techniques to recognize patterns in handwritten documents and in remote-sensing images. We are also applying for new funding in the area of real-time underwater sound processing. The project will involve applying an existing VME-based neural network learning system to several demanding problems in signal processing. These include adaptive non-linear equalization of underwater acoustic communication channels and magnetic recording channels. It is also likely to involve integrating the learning electronics with micro-machined sonic transducers directly on silicon. The successful candidate will have skills in some or all of the following areas: 1) analog and digital VLSI design and test, 2) signal, sound and image processing, 3) neural network processing of complex data, and 4) working at the system level in a UNIX/C/C++ environment. Please send a curriculum vitae, names and addresses of at least three referees, and copies of some representative publications to:

Prof. Joshua Alspector
Univ. of Colorado at Col. Springs
Dept. of Elec. & Comp. Eng.
P.O. Box 7150
Colorado Springs, CO 80933-7150

The University of Colorado is an equal opportunity employer.

From georgiou at csci.csusb.edu Tue Mar 19 15:36:10 1996 From: georgiou at csci.csusb.edu (georgiou@csci.csusb.edu) Date: Tue, 19 Mar 1996 12:36:10 -0800 (PST) Subject: ICCIN'97: Call for papers Message-ID: <199603192036.MAA04367@csci.csusb.edu>

First Announcement

2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
(http://www.csci.csusb.edu/iccin)
Sheraton Imperial Hotel & Convention Center
Research Triangle Park, North Carolina
March 2-5, 1997

Conference Co-chairs:
Subhash C. Kak, Louisiana State University
Jeffrey P. Sutton, Harvard University

This conference is part of the Third Joint Conference on Information Sciences.

Plenary Speakers include the following: James S. Albus, Jim Anderson, Roger Brockett, Earl Dowell, David E. Goldberg, Stephen Grossberg, Y. C. Ho, John H. Holland, Zdzislaw Pawlak, Lotfi A. Zadeh

Organizing Committee:
Grigorios Antoniou, Griffith University, Australia
Catalin Buiu, Romania
Ian Cresswell, U.K.
S. Das, University of California, Berkeley
S.C. Dutta Roy, India
Laurie Fausett, FAU
George M. Georgiou, California State University
Paulo Gaudiano, Boston University
Ugur Halici, METU, Turkey
Akira Hirose, University of Tokyo
Arun Jagota, University of North Texas
Jonathan Marshall, University of N. Carolina
Bimal Mathur, Rockwell CA
Kishan Mehrotra, Syracuse
Haluk Ogmen, University of Houston
Ed Page, South Carolina
W.A. Porter, University of Alabama
Ed Rietman, Bell Labs
Christos Schizas, University of Cyprus
Harold Szu, USL
M. Trivedi, UCSD
E. Vityaev, Russia
Paul Wang, Duke University

Areas for which papers are sought include:
o Artificial Life
o Artificially Intelligent NNs
o Associative Memory
o Cognitive Science
o Computational Intelligence
o Efficiency/Robustness Comparisons
o Evolutionary Computation for Neural Networks
o Feature Extraction & Pattern Recognition
o Implementations (Electronic, Optical, Biochips)
o Intelligent Control
o Learning and Memory
o Neural Network Architectures
o Neurocognition
o Neurodynamics
o Optimization
o Parallel Computer Applications
o Theory of Evolutionary Computation

Summary Deadline: November 15, 1996
Proposals for sessions: November 15, 1996
Decision & Notification: January 5, 1997

Send summaries to:
George M. Georgiou
Computer Science Department
California State University
San Bernardino, CA 92407-2397
georgiou at csci.csusb.edu

Papers will be accepted based on summaries. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by November 15, 1996. A deposit check of $150 must be included to guarantee the publication of your 4-page summary in the Proceedings. The $150 can be deducted from the registration fee later. NSF funding has been requested to support women, minorities, recent Ph.D. recipients, and graduate students.

Conference Web site: http://www.csci.csusb.edu/iccin

From mcasey at volen.brandeis.edu Thu Mar 7 07:50:02 1996 From: mcasey at volen.brandeis.edu (Mike Casey) Date: Wed, 20 Mar 1996 00:50:02 +30000 Subject: Paper on Dynamical Systems, RNNs and Computation Available Message-ID:

Dear connectionists, The following paper deals with the connection between computational and dynamical descriptions of systems and the analysis of recurrent neural networks in particular. It has been accepted for publication in Neural Computation, and will appear in Vol. 8, number 6 later this year. Comments are welcome.

----------------------------------------------------------------------

The Dynamics of Discrete-Time Computation, With Application to Recurrent Neural Networks and Finite State Machine Extraction [77 pages]

Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA 02254

To appear in Neural Computation 8:6.

ABSTRACT: Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine (DFA) which can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of the complexity of the activation dynamics.
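For readers who want a concrete handle on the state-space organization described above, the following is a minimal sketch, in Python, of the standard finite state machine extraction idea that the paper analyzes: quantize the RNN's hidden-state space and read the induced transitions off the quantized states. Nothing here is taken from the paper; the function names, the binning scheme, and the toy parity network are assumptions of this sketch.

----------------------------------------------------------------------
import numpy as np

def extract_fsm(rnn_step, input_seq, h0, n_bins=4):
    """Crude finite state machine extraction from an RNN trajectory.

    rnn_step(h, x) returns the next hidden state; input_seq is an
    iterable of input symbols; h0 is the initial hidden state, with
    activations assumed to lie in [0, 1].  Each hidden state is
    quantized by binning every unit into n_bins intervals, and the
    observed transitions between quantized states are collected in a
    transition table.
    """
    def quantize(h):
        return tuple(np.minimum((h * n_bins).astype(int), n_bins - 1))

    table = {}                            # (state, symbol) -> next state
    h = h0
    for x in input_seq:
        s = quantize(h)
        h = rnn_step(h, x)
        table[(s, x)] = quantize(h)
    return table

# Toy usage: a one-unit "network" whose dynamics realize a parity
# automaton; its hidden state stays in {0.0, 1.0} and flips on a 1.
step = lambda h, x: np.abs(h - x)
fsm = extract_fsm(step, [1, 0, 1, 1, 0], np.array([0.0]))
for (s, x), t in sorted(fsm.items()):
    print("state %s --%s--> %s" % (s, x, t))
----------------------------------------------------------------------

Casey's main theorem, as summarized above, can be read as saying when such a quantization can succeed at all: for a robustly performed finite state computation, the partition must mirror the states of the minimal DFA.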
As a corollary of our main theorem, we prove that the only formal languages which RNNs are able to robustly recognize are those recognizable by DFA (i.e. the regular languages). By elucidating the necessary and sufficient dynamical properties which an RNN must possess in order to perform a DFA computation, we provide a framework for discussing the relationship between symbolic (algorithmic, finite state) and subsymbolic (dynamic, continuous phase space) aspects of computation in physical systems. This theory also provides a theoretical framework for understanding finite state machine extraction techniques and can be used to improve training methods for RNNs performing DFA computations. This provides an example of a successful top-down approach to understanding a general class of complex systems that have not been explicitly designed, e.g. systems that have evolved or learned their internal structure.

----------------------------------------------------------------------

This paper is available via the WWW at http://eliza.cc.brandeis.edu/people/mcasey/papers.html or
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps.gz
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps.Z

FTP INSTRUCTIONS

unix% ftp eliza.cc.brandeis.edu (or 129.64.55.200)
Name: anonymous
Password: (use your e-mail address)
ftp> cd /pub/mcasey/
ftp> bin
ftp> get mcasey_nc.ps.Z (or mcasey_nc.ps.gz or mcasey_nc.ps)
ftp> bye
unix% uncompress mcasey_nc.ps.Z (or gzip -d mcasey_nc.ps.gz)

Please send comments to:

Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA 02254
email: mcasey at volen.brandeis.edu
http://eliza.cc.brandeis.edu/people/mcasey

From reggia at cs.UMD.EDU Wed Mar 20 12:59:57 1996 From: reggia at cs.UMD.EDU (James A. Reggia) Date: Wed, 20 Mar 1996 12:59:57 -0500 (EST) Subject: Postdoc Position in Computational Neuroscience Message-ID: <199603201759.MAA17041@avion.cs.UMD.EDU>

POSTDOC POSITION IN COMPUTATIONAL NEUROSCIENCE

It is anticipated that a postdoctoral research position will be available starting this summer or fall involving neural modeling. The focus will be on modeling various aspects of cerebral cortex dynamics and plasticity. Ideally we are looking for someone with interdisciplinary interests and background in computation and neuroscience. Instructions for applying are given below; applications must be received by the April 8 deadline at the latest. They can be sent to Prof. Ja'Ja' as indicated below, or directly to me.

James A. Reggia
Dept. of Computer Science
A. V. Williams Bldg.
University of Maryland
College Park, MD 20742 USA
Email: reggia at cs.umd.edu
Phone: 301-405-2686
Fax: 301-405-6707

Position Announcement: The University of Maryland Institute for Advanced Computer Studies (UMIACS) invites applications for post doctoral positions, beginning summer/fall '96 in the following areas: Real-time Video Indexing, Natural Language Processing, and Neural Modeling. Exceptionally strong candidates from other areas will also be considered. UMIACS, a state-supported research unit, has been the focal point for interdisciplinary and applications-oriented research activities in computing on the College Park campus. The Institute's 40 faculty members conduct research in high performance computing, software engineering, artificial intelligence, systems, combinatorial algorithms, scientific computing, and computer vision.
Qualified applicants should send a 1 page statement of research interest, curriculum vita and the names and addresses of 3 references to: Prof. Joseph Ja'Ja' UMIACS A.V. Williams Building University of Maryland College Park, MD 20742 by April 8. UMIACS strongly encourages applications from minorities and women. EOE/AA  From juergen at idsia.ch Wed Mar 20 12:36:13 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 20 Mar 96 18:36:13 +0100 Subject: IDSIA postdoc position Message-ID: <9603201736.AA26310@fava.idsia.ch> IDSIA POSTDOC POSITION IDSIA, the Swiss machine learning research institute, offers a 2-year postdoc position with possibility for renewal. Goal of the corresponding research project is to analyze, compare, extend, and apply `neural' algorithms for unsupervised learning and redundancy reduction. A main focus will be on `predictability minimization' (see refs below). Application areas include adaptive image pre-processing, classification, data compression. Applicants should be willing to build on our previous work. The ideal candidate combines strong mathematical skills and strong programming skills. Switzerland tends to be nice to scientists. It boasts the highest supercomputing capacity per capita, the most Nobel prizes per capita (with Sweden), the highest gross national product per capita (with Luxembourg), and the best chocolate. IDSIA is located in beautiful Lugano, capital of scenic Ticino (the southern part of Switzerland). Pictures in my home page. Milano, Italy's center of fashion and finance, is one hour away. CSCS, the Ticino supercomputing center, is nearby. Salary is competitive. To obtain an overview of IDSIA's activities, see our home page. If interested, please send CV, list of publications, and cover letter with brief statement of research interests plus email addresses of three references to juergen at idsia.ch Send them as separate, uncompressed ASCII or postscript files. ASCII greatly preferred. Please call subject headers of ASCII files: name.cv, name.pub, name.cover, respectively, where `name' stands for your name. Please call subject headers of postscript files: name.cv.ps, name.pub.ps, name.cover.ps, respectively. Please also send HARDCOPIES of 3 representative papers by PHYSICAL mail (no problem if they arrive after the deadline). DEADLINE: APRIL 10, 1996. Earlier applications preferred. Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch http://www.idsia.ch/~juergen --------- Refs on predictability minimization -------- - J.Schmidhuber, M.Eldracher, and B.Foltin. Semilinear predictability minimization produces well-known feature detectors. Neural Computation, in press, 1996. - J.Schmidhuber and D.Prelinger. Discovering predictable classifications. Neural Computation, 5(4):625-635, 1993. - J.Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.  From milanese at cui.unige.ch Wed Mar 20 12:13:18 1996 From: milanese at cui.unige.ch (Ruggero Milanese) Date: Wed, 20 Mar 1996 18:13:18 +0100 Subject: Research positions on neural networks and vision Message-ID: <2829*/S=milanese/OU=cui/O=unige/PRMD=switch/ADMD=400net/C=ch/@MHS> Two openings at the Department of Computer Science of the University of Geneva (Switzerland) on: COMPUTER and BIOLOGICAL VISION - One post-doc: for a person holding a Ph.D. 
degree in either Computer Science, Electrical Engineering, Mathematics, or Physics, with emphasis on neural networks, dynamic systems, computational neuroscience, and machine vision. Experience with the analysis of multidimensional EEG signals would be a plus.
- One assistant (Ph.D. student): for a person holding a Diploma or Master's degree in Computer Science, or a comparable qualification, with experience in neuronal modeling and computer vision.

Project description.
--------------------
Development of neural networks for recognizing multiple objects over complex, textured backgrounds. The basic model, inspired by neurophysiological and psychophysical research in human vision, employs networks of oscillatory units capable of stimulus-dependent synchronization. Research issues include: the design of a complete neural network architecture employing integrate-and-fire units, the assessment of the role of visual attention in modifying neuronal dynamics, and the compatibility of the overall model with biological data. Both candidates will work in collaboration with a third researcher at the Brain Mapping Laboratory of the Geneva Cantonal Hospital (headed by Prof. T. Landis and C. Michael), and will contribute to the analysis of the data collected through experiments on human subjects using EEG-based tomography of brain activity.

Position profiles.
------------------
Both positions are available *immediately*, for a minimum of 1.5 years (postdoc) and 2 years (assistant). An extension for another 2 years is likely. Salaries are approx. CHF 60,000 for the postdoc and CHF 58,000 for the assistant (both before taxes). Good written/oral knowledge of English is essential. Some knowledge of French would be desirable. Applicants should send a CV, a list of publications, a summary of their M.Sc. thesis (assistant position only), together with one letter of recommendation, to the following address.

Dr. Ruggero Milanese
Dept. of Computer Science, University of Geneva
24 rue General-Dufour, 1211 Geneva 4, Switzerland
Phone: +41 (22) 705-7631, Fax: +41 (22) 705-7780
E-mail: milanese at cui.unige.ch
URL: http://cuiwww.unige.ch/~milanese

Geneva, March 1996.

From murre at psy.uva.nl Tue Mar 19 09:10:57 1996 From: murre at psy.uva.nl (J.M.J. Murre) Date: Tue, 19 Mar 1996 15:10:57 +0100 Subject: Three jobs in connectionist modelling in psychology Message-ID: <199603191410.AA18392@uvapsy.psy.uva.nl>

THREE GRADUATE RESEARCHERS (PH.D. STUDENTS) IN CONNECTIONIST MODELLING IN PSYCHOLOGY

Application deadline: 12 April 1996

GENERAL DESCRIPTION

Our group has job openings for three graduate researchers (Ph.D. students, or 'Onderzoekers-in-Opleiding' in Dutch) in connectionist modelling at the Graduate Research Institute for Experimental Psychology (EPOS) of the Dutch Organization for Scientific Research (NWO). The projects are located at the University of Amsterdam and Leiden University. The Graduate Research Institute EPOS was established by the University of Amsterdam, the Free University Amsterdam and Leiden University to foster and strengthen research and graduate training in the area of experimental psychology. The individual projects are funded by the Netherlands Organization for Scientific Research (NWO) and form part of a larger research initiative 'Dynamic processes in self-organizing networks that interact with the environment'. One four-year postdoc has already been assigned to this project (this position is already filled).
Several other graduate researchers within EPOS are working on similar projects (connectionist modelling and experimental psychology).

CONDITIONS

Each project runs for four years, starting 1 July 1996 at the earliest, and is expected to lead to a Ph.D. at the end of the four-year period. Full scholarships are available for each project in the amount of DFL 2100 (about $1320) per month in the first year, gradually increasing to DFL 3770 (about $2360) per month in the fourth year. (These figures are before taxes; a typical salary in the first year, after taxes, would be DFL 1680, but this depends on the experience of the applicant.) Successful candidates will be required to move to the Netherlands. They will be required to follow and complete a number of graduate courses. In most cases, the graduate researchers are asked to participate in the teaching of undergraduate students. This teaching load will be small and concerns only courses that are within their field of research.

GENERAL REQUIREMENTS

Excellent command of Dutch (or English) is necessary. Applicants from computer science, physics, mathematics, or engineering must bear in mind that a strong, demonstrable background in psychology or related fields is necessary for these projects.

PROJECT 1. 'ARTIFICIAL NEURAL NETWORKS FOR AFFECTIVE PROCESSES' (QUOTE REF. 575-23-006).

Description: Development of neural network models for direct and indirect affective processes. This work also has a strong neuroscience component. It builds on modelling work by the project supervisors (CALM approach to modelling). The simulations will keep pace with experimental research that occurs simultaneously in our group.

Specific requirements: Master's (drs.) degree in experimental psychology. Strong interest in experimental research (especially from the cognitive neuroscience perspective). Experience with emotions research is desirable, as is experience with computer programming. This project is located at the University of Amsterdam. In case of urgent questions, further information can be obtained from Dr. R.H. Phaf (e-mail: pn_phaf at macmail.psy.uva.nl; phone: +31 20 525.6841 or +31 20 525.6840; fax: +31 20 639.1656).

PROJECT 2. 'ARTIFICIAL NEURAL NETWORKS FOR MEMORY PROCESSES' (QUOTE REF. 575-23-007).

Description: Development of neural network models for implicit and explicit memory phenomena and certain memory pathologies. The goal is the implementation of a recently developed theoretical model of anterograde and retrograde amnesia. This implementation builds on existing work by the project supervisors (TraceNet and CALM approach to modelling, specifically TraceLink). Simulation results will be compared with research on memory in amnesic patients.

Specific requirements: Master's (drs.) degree in experimental psychology or cognitive (neuro-)science. Strong interest in theoretical and empirical modelling of cognitive functions. Knowledge of neuroanatomy and neuropsychology is desirable, as is experience with psychological experimentation. Experience with computer programming is required (preferably in C). This project is located at Leiden University. In case of urgent questions, further information can be obtained from Dr. J.M.J. Murre (e-mail: murre at psy.uva.nl; phone: +31 20 525.6722 or +31 20 525.6840; fax: +31 20 639.1656).

PROJECT 3. 'SEGMENTATION AND CLASSIFICATION OF VISUAL PATTERNS' (QUOTE REF. 575-23-008).

Description: Development of neural network models for segmentation and classification of visual patterns.
This work builds on an approach that emphasizes research and simulation in the borderline area of stable and non-stable activation patterns. The simulations will keep pace with experimental research that occurs simultaneously in our group.

Specific requirements: Master's (drs.) degree in cognitive science or experimental psychology, or in theoretical or medical biology or physics with a strongly interdisciplinary orientation and a demonstrable interest in experimental research in psychology. Experience with computer programming is required. This project is located at the University of Amsterdam. In case of urgent questions, further information can be obtained from Dr. C. van Leeuwen (e-mail: pn_leeuwen at macmail.psy.uva.nl; phone: +31 20 525.6118 or +31 20 525.6840; fax: +31 20 525.1656).

APPLICATION PROCESS

Only applications by 'hardmail' are considered. Please include the following material:
- Cover letter
- Curriculum vitae
- One A4 page in which you explain why you are interested in the project
- You may include up to 15 pages of excerpts taken from published material, but this is not a requirement
- Two names of references (including fax and/or e-mail)

Be sure to quote the reference number of the project to which you are applying. Send the application to:

Dr. G. Wolters
Department of Psychology
Leiden University
P.O. Box 9555
2300 RB Leiden
The Netherlands

THE DEADLINE FOR APPLICATION IS 12 APRIL 1996.

From panos at csc.umist.ac.uk Thu Mar 21 05:58:31 1996 From: panos at csc.umist.ac.uk (Panos Liatsis) Date: Thu, 21 Mar 96 10:58:31 GMT Subject: CALL FOR PAPERS: IWISP'96 Message-ID: <333.9603211058@isabel.csc.umist.ac.uk>

3RD INTERNATIONAL WORKSHOP ON IMAGE AND SIGNAL PROCESSING
ADVANCES IN COMPUTATIONAL INTELLIGENCE
NOVEMBER 4-7, 1996, MANCHESTER, UK

ORGANISER: UMIST
CO-SPONSORS: IEEE SIGNAL PROCESSING SOCIETY, IEEE UK&RI, IEE, INST MEAS CONTROL

CALL FOR PAPERS

The 3rd International Workshop on Image and Signal Processing, IWSIP-96, organised by the Control Systems Centre, UMIST, in association with IEEE Region 8 and co-sponsored by the IEE, IMC and the IEEE Signal Processing Society, is an international workshop on theoretical, experimental and applied signal and image processing. This is a specialized workshop, which intends to attract high-quality research papers and bring together researchers working in the areas of signal/image processing from both sides of the Atlantic, as well as from the countries of Central and Eastern Europe. The theme of the current workshop is Advances in Computational Intelligence.

SCOPE:
General Techniques and Algorithms: Adaptive DSP algorithms, Digital Filter Implementations, Image Analysis, Image Enhancement and Restoration, Image Understanding.
Technologies: Neural Networks, Fuzzy Logic, Wavelets, Fractals.
Image Transmission: Encoding/Decoding, Compression, Transmission, ISDN, Internet, ATM, Modems, Radio, SATCOM and NAV.
Applications: Automotive, Medical, Robotics, Control, Video, TV, Telepresence, Virtual Reality, Digital Production.

SUBMISSION PROCEDURES:
Prospective authors are invited to propose papers in any of the technical areas listed above, indicating whether they are intended for oral/poster presentation. To submit a proposal, prepare a 2-3 page summary of the paper including figures and references. Send five copies of the paper summary and a cover sheet stating the (1) paper title, (2) technical area(s), (3) contact author's name, (4) address, (5) telephone and fax number and (6) email address to:

Professor Basil G. Mertzios, IWSIP-96,
Dept. of Electrical and Computer Engineering, Democritus University of Thrace, GR-67100 Xanthi, Greece
e-mail: mertzios at demokritos.cc.duth.gr
FAX: +30-541-26947, Tel: +30-541-79511, 79512 (Secr.), 79559 (Lab)

Each selected paper (four-page limit) will be published in the Proceedings of IWSIP-96 by an international publisher.

SCHEDULE:
Extended summaries/abstracts: 30th April 1996
Notification of acceptance/rejection: 31st May 1996
Final draft: 15th July 1996

CONFERENCE SITE: IWSIP-96 will be held at The Manchester Conference Centre, Manchester.

GENERAL CHAIR: Peter Wellstead, Control Systems Centre, UMIST, UK
PROGRAM CHAIR: Basil Mertzios, Democritus University of Thrace, Department of Electr./Comp. Eng., 67100 Xanthi, Greece
PUBLICITY & LOCAL ARRANGEMENTS: Panos Liatsis, Braham Levy, UMIST, UK
TUTORIALS CHAIR: Marek Domanski, Tech. University of Poznan, Poznan, Poland
PROCEEDINGS CHAIR: Kalman Fazekas, Tech. University of Budapest, Budapest, Hungary
FINANCIAL CHAIR: Martin Zarrop, UMIST, UK

INTERNATIONAL PROGRAM COMMITTEE (TENTATIVE)
I. Antoniou, Solvay Institute, Belgium
Z. Bojkovic, University of Belgrade, Yugoslavia
M. Brady, University of Oxford, UK
V. Cappellini, University of Florence, Italy
G. Caragiannis, NTUA, Greece
M. Christodoulou, Technical University of Crete, Greece
A. Constantinidis, Imperial College, UK
J. Cornelis, Vrije Universiteit Brussel, Belgium
A. Davies, King's College London, UK
I. Erenyi, KFKI Research Institute, Hungary
A. Fettweis, Ruhr Universitaet Bochum, Germany
S. van Huffel, Katholieke Universiteit Leuven, Belgium
G. Istefanopoulos, Bosphorus University, Turkey
V. Ivanov, Dubna Research Institute, Russia
T. Kaczorec, University of Warsaw, Poland
M. Karny, UTIA, Czech Republic
T. Kida, University of Tokyo, Japan
J. Kittler, University of Surrey, UK
S.Y. Kung, Princeton University, USA
M. Kunt, University of Lausanne, Switzerland
F. Lewis, University of Texas at Arlington, USA
T. Nossek, Technical University of Munich, Germany
C. Nikias, University of Southern California, USA
D. van Ormondt, Technical University of Delft, Netherlands
K. Parhi, University of Minnesota, USA
S. Tzafestas, NTUA, Greece
J. Turan, Technical University of Kosice, Slovak Republic
G. Vachtsevanos, Georgia Institute of Technology, USA
A. Venetsanopoulos, University of Toronto, Canada

From perso at DI.Unipi.IT Thu Mar 21 06:12:07 1996 From: perso at DI.Unipi.IT (Alessandro Sperduti) Date: Thu, 21 Mar 1996 12:12:07 +0100 (MET) Subject: NNSK deadline extension Message-ID: <199603211112.MAA15961@neuron.di.unipi.it>

sorry for duplicated messages (if any...)
** E X T E N D E D D E A D L I N E ** Neural Networks and Structured Knowledge (NNSK) Call for Contributions ECAI '96 Workshop to be held on August 12/13, 1996 during the 12th European Conference on Artificial Intelligence from August 12-16, 1996 in Budapest, Hungary **************** N E W S C H E D U L E **************** Submission deadline April 3, 1996 Notification of acceptance/rejection May 2, 1996 Final version of papers due May 24, 1996 Deadline for participation without paper June 15, 1996 Date of the workshop August 12/13, 1996 *********************************************************** FOR MORE INFO: http://www.informatik.uni-ulm.de/fakultaet/abteilungen/ni/ECAI-96/NNSK/NNSK.html ** E X T E N D E D D E A D L I N E ** _________________________________________________________________ Alessandro Sperduti Dipartimento di Informatica, Corso Italia 40, Phone: +39-50-887264 56125 Pisa, Fax: +39-50-887226 ITALY E-mail: perso at di.unipi.it _________________________________________________________________  From jgreen at wjh.harvard.edu Fri Mar 22 10:14:15 1996 From: jgreen at wjh.harvard.edu (Jennifer Anne Greenhall) Date: Fri, 22 Mar 1996 10:14:15 -0500 (EST) Subject: POSTDOCTORAL JOB OPENINGS Message-ID: COGNITIVE NEUROPSYCHOLOGY: Visual word and object recognition. A post-doctoral research position is available at the COGNITIVE NEUROPSYCHOLOGY LABORATORY, DEPARTMENT OF PSYCHOLOGY, HARVARD UNIVERSITY. The project investigates the role of visual and attentional mechanisms in word and object recognition. These issues are addressed principally through the analysis of the reading and object recognition performance in brain-damaged subjects with visual neglect and other perceptual disorders. In addition, computational modeling and functional imaging studies of visual word recognition are planned. Individuals with training in visual perception and/or attention, computational modeling, or cognitive neuropsychology are encouraged to apply. Candidates should send a curriculum vitae, three letters of recommendation, and a statement of research interests to Alfonso Caramazza, Cognitive Neuropsychology Laboratory, Harvard University, William James Hall, 33 Kirkland St., Cambridge, MA 02138. E-mail inquiries may be sent to caram at broca.harvard.edu. Screening will begin immediately and continue until the position is filled. Harvard University is an Equal Opportunity/Affirmative Action Employer. Women and minorities are encouraged to apply. COGNITIVE NEUROPSYCHOLOGY: Lexical production. A post-doctoral research position is available at the COGNITIVE NEUROPSYCHOLOGY LABORATORY, DEPARTMENT OF PSYCHOLOGY, HARVARD UNIVERSITY. The project investigates the content and organization of the lexical system and, more specifically, the nature of lexical access mechanisms. These issues are investigated through the analysis of speech and writing disorders in brain-damaged subjects and the computational modeling of lexical access. We are also planning to carry out several functional imaging studies of word production. Individuals with training in psycholinguistics, computational modeling, or cognitive neuropsychology are encouraged to apply. Candidates should send a curriculum vitae, three letters of recommendation, and a statement of research interests to Alfonso Caramazza, Cognitive Neuropsychology Laboratory, Harvard University, William James Hall, 33 Kirkland St., Cambridge, MA 02138. E-mail inquiries may be sent to caram at broca.harvard.edu. 
Screening will begin immediately and continue until the position is filled. Harvard University is an Equal Opportunity/Affirmative Action Employer. Women and minorities are encouraged to apply. ****************************** * Jennifer Greenhall * * Harvard University * * Department of Psychology * * 33 Kirkland Street * * William James Hall, Rm. 884* * Cambridge, MA 02138 * * (617) 496-6374 * ******************************  From rsun at cs.ua.edu Mon Mar 25 16:19:54 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Mon, 25 Mar 1996 15:19:54 -0600 Subject: No subject Message-ID: <9603252119.AA14184@athos.cs.ua.edu> Hybrid Connectionist-Symbolic Models: a report from the IJCAI'95 workshop on connectionist-symbolic integration Ron Sun Department of Computer Science The University of Alabama Tuscaloosa, AL 35487 --------------------------------------- To appear in: AI Magazine, 1996. 9 pages. ftp or Mosaic access: ftp://cs.ua.edu/pub/tech-reports/sun.ai-magazine.ps sorry, no hardcopy available. ---------------------------------------- {\it The IJCAI Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches} was held for two days during August 19-20 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95). The workshop was co-chaired by Ron Sun and Frederic Alexandre. During the two days of the workshop, various presentations and discussions brought to light many new ideas, controversies, and syntheses. The focus was on learning and architectures that feature hybrid representations and support hybrid learning. Hybrid models involve a variety of different types of processes and representations, in both learning and performance. Therefore, multiple mechanisms interact in complex ways in most models. We need to consider seriously ways of structuring these different components, which thus occupy a clearly more prominent place in this area of research. The hybridization of connectionist and symbolic models also inherits the difficulty with learning from the symbolic side, and mitigates to some large extent the advantage that the purely connectionist models have in their learning abilities. Considering the importance of learning, in both modeling cognition and building intelligent systems, it is crucial for researchers in this area to pay more attention to ways of enhancing hybrid models in this regard and to putting learning back into hybrid models.  From john at dcs.rhbnc.ac.uk Mon Mar 25 16:07:26 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 25 Mar 96 21:07:26 +0000 Subject: MSc in Computational Intelligence Message-ID: <199603252107.VAA15309@platon.cs.rhbnc.ac.uk> MSc in COMPUTATIONAL INTELLIGENCE at the Computer Science Department Royal Holloway, University of London We offer a new twelve-month MSc in Computational Intelligence covering a wide range of subjects: Computational Learning Statistical Learning Theory Intelligent Decision Making Neural Computing Inference Systems Probabilistic Reasoning Constraint Networks Simulated Annealing Neurochips and VLSI Equational Reasoning Computer Vision Concurrent Programming Object-Oriented Programming The first part of the course involves taking a selection of the listed courses including some prescribed core courses. The second part of the course comprises a project focussing on one of the areas covered in the courses and usually involving some implementation. 
Royal Holloway is one of the largest colleges of the University of London and is located on a beautiful wooded campus half an hour from central London by train and close to Heathrow airport.

The Computational Intelligence group at Royal Holloway includes the following staff:
Alex Gammerman, with interests in Bayesian Belief Networks
John Shawe-Taylor, with interests in Computational Learning Theory and Coordinator of the ESPRIT NeuroCOLT Project
Vladimir Vovk, expert in applications of game theory to learning
Vladimir Vapnik (part time), probably no introduction necessary,
and several other permanent members of staff.

For further information email: cims at dcs.rhbnc.ac.uk
or write to:
Course Director, MSc in Computational Intelligence
Computer Science Department
Royal Holloway, University of London
EGHAM, Surrey, TW20 0EX
Tel: +44 (0)1784 333421
Fax: +44 (0)1784 443420

From john at dcs.rhbnc.ac.uk Mon Mar 25 16:40:21 1996
From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor)
Date: Mon, 25 Mar 96 21:40:21 +0000
Subject: Technical Report Series in Neural and Computational Learning
Message-ID: <199603252140.VAA03178@platon.cs.rhbnc.ac.uk>

The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles.

*** Please note that the location of the files was changed at the beginning of the year, so that any copies you have of the previous instructions should be discarded. The new location and instructions are given at the end of the list. ***

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-038:
----------------------------------------
Active Noise Control with Dynamic Recurrent Neural Networks
by Davor Pavisic, Facult\'{e} Polytechnique de Mons, Belgium
   Laurent Blondel, Facult\'{e} Polytechnique de Mons, Belgium
   Jean-Philipe Draye, Facult\'{e} Polytechnique de Mons, Belgium
   Ga\"{e}tan Libert, Facult\'{e} Polytechnique de Mons, Belgium
   Pierre Chapelle, Facult\'{e} Polytechnique de Mons, Belgium

Abstract:
We have developed a neural active noise controller which performs better than existing techniques. We used a dynamic recurrent neural network to model the behaviour of an existing controller that uses a Least Mean Squares algorithm to minimize an error signal. The network has two types of adaptive parameters, the weights between the units and the time constants associated with each neuron. Measured results show a significant improvement of the neural controller when compared with the existing system.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-039:
----------------------------------------
A Survey on Real Structural Complexity Theory
by Klaus Meer, RWTH Aachen, Germany
   Christian Michaux, Universit\'e de Mons-Hainaut

Abstract:
In this tutorial paper we overview research being done in the field of structural complexity and recursion theory over the real numbers and other domains following the approach by Blum, Shub and Smale.
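As a footnote to NC-TR-96-038 above: the plain LMS error-minimization scheme that the report's recurrent network is trained to model can be sketched in a few lines of Python. This is only an illustrative toy, not the authors' controller; the filter length, step size, and signals are invented, and a real active noise controller must also account for the secondary acoustic path.

import numpy as np

def lms_cancel(reference, desired, n_taps=32, mu=0.01):
    # Adapt FIR weights w so that the filtered reference approximates
    # `desired`; the residual `error` is what the controller minimizes.
    w = np.zeros(n_taps)
    error = np.zeros(len(reference))
    for t in range(n_taps, len(reference)):
        x = reference[t - n_taps + 1:t + 1][::-1]  # most recent sample first
        y = w @ x                                   # anti-noise output
        error[t] = desired[t] - y                   # residual at the error sensor
        w += mu * error[t] * x                      # LMS weight update
    return error, w

# Toy usage: the "desired" signal is the reference noise coloured by an
# unknown path; after adaptation the residual power is close to zero.
rng = np.random.default_rng(0)
noise = rng.standard_normal(4000)
desired = np.convolve(noise, [0.7, 0.2, 0.1])[:4000]
residual, _ = lms_cancel(noise, desired)
print(np.mean(residual[-1000:] ** 2))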
----------------------------------------
NeuroCOLT Technical Report NC-TR-96-040:
----------------------------------------
The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials
by Wolfgang Maass, Technische Universitaet Graz, Austria
   Berthold Ruf, Technische Universitaet Graz, Austria

Abstract:
Recently, researchers have started to investigate the computational power of spiking neurons (also called ``integrate and fire neurons''). These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular they have the ability to communicate and manipulate analog variables in spatio-temporal coding, i.e.~encoded in the time points when specific neurons ``fire'' (and thus send a ``spike'' to other neurons). These results have motivated the question of which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are ``accidental'' aspects of their realization in biological ``wetware''. Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatio-temporal coding in a new generation of artificial neural nets, such as for example pulse stream VLSI.

The firing mechanism of spiking neurons is defined in terms of their postsynaptic potentials or ``response functions'', which describe the change in their electric membrane potential as a result of the firing of another neuron. We consider in this article the case where the response functions of spiking neurons are assumed to be of the mathematically most elementary type: they are assumed to be step-functions (i.e. piecewise constant functions). This happens to be the functional form which has so far been adopted most frequently in pulse stream VLSI as the form of potential changes (``pulses'') that mimic the role of postsynaptic potentials in biological neural systems. We prove the rather surprising result that in models without noise the computational power of networks of spiking neurons with arbitrary piecewise constant response functions is strictly weaker than that of networks where the response functions of neurons also contain short segments where they increase or decrease in a linear fashion (which is in fact biologically more realistic). More precisely, we show for example that addition of analog numbers is impossible for a network of spiking neurons with piecewise constant response functions (with any bounded number of computation steps, i.e. spikes), whereas addition of analog numbers is easy if the response functions have linearly increasing segments.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-041:
----------------------------------------
Finding Optimal Multi-Splits for Numerical Attributes in Decision Tree Learning
by Tapio Elomaa, University of Helsinki, Finland
   Juho Rousu, University of Helsinki, Finland

Abstract:
Handling continuous attribute ranges remains a deficiency of top-down induction of decision trees. They require special treatment and do not fit the learning scheme as well as one could hope for. Nevertheless, they are common in practical tasks and, therefore, need to be taken into account. This topic has attracted abundant attention in recent years.
In particular, Fayyad and Irani showed how optimal binary partitions can be found efficiently. Later, they based a greedy heuristic multipartitioning algorithm on these results. Recently, Fulton, Kasif, and Salzberg attempted to develop algorithms for finding the optimal multi-split for a numerical attribute in one phase. We prove that, just as in binary partitioning, only boundary points need to be inspected in order to find the optimal multipartition of a numerical value range. We develop efficient algorithms for finding the optimal splitting into more than two intervals. The resulting partition is guaranteed to be optimal w.r.t.\ the function that is used to evaluate the attributes' utility in class prediction. We contrast our method with alternative approaches in initial empirical experiments. They show that the new method consistently surpasses the greedy heuristic approach of Fayyad and Irani in the goodness of the produced multi-split but, with small data sets, cannot quite attain the efficiency of the greedy approach. Furthermore, our experiments reveal that one of the techniques proposed by Fulton, Kasif, and Salzberg is of little use in practical tasks, since its running time fails to meet practical demands. In addition, it categorically fails in finding the optimal multi-split because of an error in the rationale of the method.

----------------------------------------
NeuroCOLT Technical Report NC-TR-96-042:
----------------------------------------
Shattering all Sets of k points in `General Position' Requires (k-1)/2 Parameters
by Eduardo D. Sontag, Rutgers University, USA

Abstract:
For classes of concepts defined by certain classes of analytic functions depending on n parameters, there are nonempty open sets of samples of length 2n+2 which cannot be shattered. A slightly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed.

--------------------------------------------------------------------
***************** ACCESS INSTRUCTIONS ******************

The Report NC-TR-96-001 can be accessed and printed as follows

% ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1)
Name: anonymous
password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-96-001.ps.Z
ftp> bye
% zcat nc-tr-96-001.ps.Z | lpr -l

Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example,

nc-tr-96-002-title.ps.Z
nc-tr-96-002-body.ps.Z

The first contains the title page while the second contains the body of the report. The single command

ftp> mget nc-tr-96-002*

will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory.
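As an aside on NC-TR-96-041 above, the boundary-point observation is easy to state in code. The sketch below is only an illustration with invented data -- it is not the authors' algorithm, and it omits the evaluation function that actually scores candidate multi-splits: for a numerical attribute, a cut point can matter only between adjacent distinct values whose class labels differ.

def boundary_points(values, labels):
    # Return candidate cut points: midpoints between adjacent distinct
    # attribute values whose sets of class labels differ.
    by_value = {}
    for v, c in zip(values, labels):
        by_value.setdefault(v, set()).add(c)
    distinct = sorted(by_value)
    cuts = []
    for a, b in zip(distinct, distinct[1:]):
        # a cut between a and b can affect purity only if labels change here
        if by_value[a] != by_value[b]:
            cuts.append((a + b) / 2)
    return cuts

# Toy usage: of the five gaps, only two are boundary points, so an
# optimal multi-split search need only consider cuts at 3.5 and 5.5.
vals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
labs = ['x', 'x', 'x', 'y', 'y', 'x']
print(boundary_points(vals, labs))   # -> [3.5, 5.5]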
The files may also be accessed via WWW starting from the NeuroCOLT homepage (note that this is undergoing some corrections and may be temporarily inaccessible):
http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html

Best wishes
John Shawe-Taylor

From dana at cs.rochester.edu Fri Mar 22 18:05:42 1996
From: dana at cs.rochester.edu (dana@cs.rochester.edu)
Date: Fri, 22 Mar 1996 18:05:42 -0500
Subject: Symposium on Neural Control of Spatial Behavior
Message-ID: <199603222305.SAA15149@artery.cs.rochester.edu>

20th CVS Symposium
NEURAL CONTROL OF SPATIAL BEHAVIOR
JUNE 19-22, 1996
-----------------------------------------------------------------------
The Center for Visual Science at the University of Rochester is proud to present the 20th Symposium, "Neural Control of Spatial Behavior." The three-day symposium will consist of five sessions plus an open house and lab tours on Saturday afternoon. The meeting will begin with a Reception/Buffet on Wednesday evening, June 19. Formal sessions start Thursday morning, June 20, and end at noon on Saturday. There will be optional banquets held on Thursday and Friday evenings, and a cookout lunch on Saturday. Informal discussion gatherings will follow the banquets. The Symposium is sponsored in part by NINDS and the NIH Biotechnology Resource Project.
------------------------------------------------------------------------
*PROGRAM*

Wednesday, June 19
4:00-10:00 PM  Registration
6:00-8:00 PM   Reception/Buffet

Thursday, June 20
SESSION I: REACHING AND GRASPING
M Goodale    An overview of brain modeling of reaching and grasping
M Graziano   The representation of visuomotor space in body-part centered coordinates
S Schaal     Modeling visuo-motor coordination in the SARCOS arm
G Luppino    Cortical motor areas involved in grasping
P Pook       Symbolic models for motor planning

SESSION II: TARGET LOCALIZATION
J Assad      Representation of targets in parietal cortex
L Snyder     Coding the intention for an arm or eye movement in posterior parietal cortex of monkey
J-O Eklundh* Modeling oculomotor coordination with the KTH head
G Zelinsky   Predicting scanning patterns during visual search

Friday, June 21
SESSION III: MULTI-SENSORY CALIBRATION
M Brainard   Visual calibration of an auditory space map in the owl
G Pollack    Neuroanatomical and physiological mechanisms of bat echolocation
Y Trotter    Proprioception and cortical processing of visual 3-D space
J Van Opstal Models of visually and auditorily evoked saccades

SESSION IV: SPATIAL ORIENTATION & MOTION
J Leigh      Gaze stability during locomotion
M Behrmann   Neurological observations of reference frames
M Wilson     Spatial orientation in the rat
I Israel     Multi-sensory aspects of path integration

Saturday, June 22
SESSION V: BEHAVIORAL SEQUENCES
J Loomis     The body's navigation systems
J Tanji      The neural representation of sequences of trained movements
W Schultz    Programming of sequences of behaviors
J Barnes     Prediction in ocular motor control

SESSION VI: OPEN HOUSE
Center for Visual Science Open House and Lab Tours

* pending confirmation
----------------------------------------------------------------------
*ACCOMMODATIONS AND MEALS*

The University has moderate cost rooms available for symposium attendees. Residence halls are centrally located on the campus and are a short walk to Hoyt Hall where the symposium sessions will be held. Rooms come with bed linens, towels, blankets, washcloths, soap, and water glasses, and are equipped with standard residence hall furniture including twin beds and desks. Telephones with local access service are in each room.
Rochester also offers a variety of recreational and sports/fitness opportunities. The Athletic Center has a pool, tennis courts and an indoor track for guests. The adjacent Genesee Valley Park has walking trails, canoe rentals and a golf course. A special package of residence hall room and all meals and banquets is being offered to Symposium participants. This package includes all meals from Thursday breakfast through the Saturday barbecue.
------------------------------------------------------------------------
*TRAVEL AWARDS*

A small number of travel awards are available to graduate and postdoctoral students. Applications must be made by May 3, 1996. Visit our web site at http://www.cvs.rochester.edu/ for a travel award application form that can be downloaded to your computer and printed out.
------------------------------------------------------------------------
*FEES*

Preregistration, Regular  $140.00
Preregistration, Student  $ 95.00
On-site, Regular          $200.00
On-site, Student          $150.00
-------------------------------------------------------------------------
*PREREGISTRATION*

To obtain a preregistration form, visit our web site or send your address through email to judy at cvs.rochester.edu. Instructions for payment are on the form. Please send a separate form for each person registering. No preregistrations will be accepted after May 31.
-------------------------------------------------------------------------
*VISIT OUR WEB SITE*

Check our home page for further updates on the 20th CVS Symposium at: http://www.cvs.rochester.edu/. At this site you will find a complete preregistration form that can be downloaded to your computer and printed out.
-------------------------------------------------------------------------
*DEADLINES*

May 3   Travel Award Applications
May 31  Preregistrations
-------------------------------------------------------------------------
*FURTHER INFORMATION*

For further information, or to request a printed brochure and preregistration form, contact:
Judy Olevnik, Symposium Secretary (judy at cvs.rochester.edu)
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Judy Olevnik                  email: judy at cvs.rochester.edu
Center for Visual Science     phone: 716 275 8659
Room 274 Meliora Hall         fax:   716 271 3043
University of Rochester
Rochester NY 14627-0270
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

From martinl at ai.univie.ac.at Tue Mar 26 09:19:27 1996
From: martinl at ai.univie.ac.at (Martin Lorenz)
Date: Tue, 26 Mar 1996 15:19:27 +0100
Subject: Symposium at EMCSR'96
Message-ID: <199603261419.PAA14776@rodaun.ai.univie.ac.at>

=====================================================================
Artificial Neural Networks and Adaptive Systems
A symposium at the
=====================================================================
EMCSR'96, April 9-12, 1996, University of Vienna

organized by the Austrian Society for Cybernetic Studies in cooperation with the Dept. of Medical Cybernetics and Artificial Intelligence, Univ. of Vienna, and the International Federation for Systems Research
----------------------------------------------------------------------
chairs: Guenter Palm, Germany, and Georg Dorffner, Austria

For this symposium, papers on any theoretical or practical aspect of artificial neural networks have been invited. Special focus, however, will be put on the issue of adaptivity both in practical engineering applications and in applications of neural networks to the modeling of human behavior.
By adaptivity we mean the capability of a neural network to adjust itself to changing environments. We make a careful distinction between "learning" to devise weight matrices for a neural network before it is applied (and usually left unchanged), on the one hand, and true adaptivity of a given neural network to constantly changing conditions on the other - i.e. incremental learning in non-stationary environments.

======= PROGRAM =======

TUESDAY, April 9, p.m., Room 47
14.00-14.30 Statistical Evaluation of Neural Network Experiments: Minimum Requirements and Current Practice
            A.Flexer, Austrian Research Institute for Artificial Intelligence, Vienna, Austria
14.30-15.00 Adaptive Analysis and Visualization in High Dimensional Data Spaces
            G.Palm, F.Schwenker, University of Ulm, Germany
15.00-15.30 Adaptive Learning Algorithm for Principal Component Analysis with Partial Data
            A.Cichocki, W.Kasprzak, W.Skarbek, Frontier Lab, RIKEN, Wako, Saitama, Japan
15.30-16.00 Coffee Break
16.00-16.30 Reinforcement Learning for Cybernetic Control
            M.Pendrith, M.Ryan, A.Hoffmann, University of New South Wales, Sydney, Australia
16.30-17.00 A Neural Circuit to Handle Passive Extinction in Conditioned Reinforcement Learning
            A.Glksz, U.Halici, Middle East Technical University, Ankara, Turkey
17.00-17.30 Truncated Temporal Differences with Function Approximation: Successful Examples Using CMAC
            P.Cichosz, Warsaw University of Technology, Poland
17.30-18.00 Adaptive Classification in Autonomous Agents
            C.Scheier, D.Lambrinos, University of Zurich, Switzerland

WEDNESDAY, April 10, a.m., Room 47
11.00-11.30 A Study of the Adaptation of Learning Rule Parameters Using a Meta Neural Network
            C.McCormack, University College Cork, Ireland
11.30-12.00 Lower Bounds on Identification Criteria for Perceptron-like Learning Rules
            M.Schmitt, Technical University of Graz, Austria
12.00-12.30 Learning to Control Dynamic Systems
            M.Riedmiller, University of Karlsruhe, Germany
12.30-13.00 Neuronal Adaptivity and Network Fault-Tolerance
            D.Horn, N.Levy, E.Ruppin, Tel-Aviv University, Israel

WEDNESDAY, April 10, p.m., Room 47
14.00-14.30 Tracking of Non-Stationary Time-Series Using Resource-Allocating RBF Networks
            A.McLachlan, D.Lowe, Aston University, United Kingdom
14.30-15.00 Neural Networks: Do They Really Outperform Linear Models? Exchange Rate Forecasting Using Weekly Data
            T.H.Hann, University of Karlsruhe, Germany
15.00-15.30 Hippocampal Two-Stage Learning and Memory Consolidation
            A.Bibbig, T.Wennekers, University of Ulm, Germany
15.30-16.00 Coffee Break
16.00-16.30 Analog Computations with Mapped Neural Fields
            A.Schierwagen, H.Werner, University of Leipzig, Germany
16.30-17.00 The Role of Reinforcement in a Reading Model
            H.Ruellan, LIMSI/CNRS, Orsay, France
17.00-17.30 Quasi Mental Clusters: A Neural Model of Knowledge Discovery in Narrative Texts
            S.W.K.Chan, J.Franklin, University of New South Wales, Sydney, Australia

THURSDAY, April 11, a.m., Room 47
9.00-9.30   An Application of the Saturated Attractor Analysis to Three Typical Models
            J.Feng, B.Tirozzi, University of Munich, Germany
9.30-10.00  On a New Gauge-Theoretical Framework for Controlling Neural Network Dynamics
            E.Pessa, G.Resconi, Universita Cattolica del Sacro Cuore, Brescia, Italy
10.00-10.30 Investigation of the Attractor Structure in the Continuous Hopfield Model
            S.Amin, BT Laboratories, Ipswich, United Kingdom
------------------------------------------------------------------
the complete program of EMCSR'96 can be found at
http://www.ai.univie.ac.at/emcsr/
=================================
Secretariat
===========
I. Ghobrial-Willmann and G. Helscher
Austrian Society for Cybernetic Studies
A-1010 Vienna 1, Schottengasse 3 (Austria)
Phone: +43-1-53532810   Fax: +43-1-5320652
E-mail: sec at ai.univie.ac.at

--
Martin "Lolly" Lorenz
Austrian Research Institute for AI (OFAI)
Tel.: +43-1-535 32 810
martinl at ai.univie.ac.at
http://www.ai.univie.ac.at/~martinl/
imagine there is a war and nobody joins it...
I have a dream... that people will live together in freedom and peace.
public PGP-key avail. at http://www.nic.surfnet.nl/pgp/

From n at predict.com Tue Mar 26 13:50:18 1996
From: n at predict.com (Norman Packard)
Date: Tue, 26 Mar 96 11:50:18 MST
Subject: Job Opening at Prediction Company
Message-ID: <9603261850.AA03118@predict.com>

[The body of this posting was a non-text attachment that did not survive archiving.]

From emj at cs.ucsd.edu Tue Mar 26 14:02:19 1996
From: emj at cs.ucsd.edu (Eric Mjolsness)
Date: Tue, 26 Mar 96 11:02:19 -0800
Subject: paper on stochastic grammars and resulting architectures
Message-ID: <9603261902.AA26326@triangulum>

The following paper is available by ftp and www.

Symbolic Neural Networks Derived from Stochastic Grammar Domain Models
Eric Mjolsness

Abstract: Starting with a statistical domain model in the form of a stochastic grammar, one can derive neural network architectures with some of the expressive power of a semantic network and also some of the pattern recognition and learning capabilities of more conventional neural networks. For example, in this paper a new version of the "Frameville" architecture, and in particular its objective function and constraints, is derived from a stochastic grammar schema. Possible optimization dynamics for this architecture, and relationships to other recent architectures such as Bayesian networks and variable-binding networks, are also discussed.

URLs for Web access:
ftp://cs.ucsd.edu/pub/emj/papers/ucsd.TR.CS95-437.ps
ftp://cs.ucsd.edu/pub/emj/papers/ucsd.TR.CS95-437.ps.Z
ftp://cs.ucsd.edu/pub/emj/papers/ucsd.TR.CS95-437.ps.gz
(or indirectly from http://www-cse.ucsd.edu/users/emj)

ftp instructions:
unix% ftp cs.ucsd.edu
Name: anonymous
Password: (use your e-mail address)
ftp> cd /pub/emj/papers
ftp> bin
ftp> get ucsd.TR.CS95-437.ps.Z (or ucsd.TR.CS95-437.ps.gz)
ftp> bye
unix% uncompress ucsd.TR.CS95-437.ps.Z (or gunzip ucsd.TR.CS95-437.ps.gz)

From goldfarb at unb.ca Wed Mar 27 11:38:41 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Wed, 27 Mar 1996 12:38:41 -0400 (AST)
Subject: What is a "hybrid" model?
In-Reply-To: <9603252119.AA14184@athos.cs.ua.edu>
Message-ID:

On Mon, 25 Mar 1996, Ron Sun wrote:

> Hybrid Connectionist-Symbolic Models:
> a report from the IJCAI'95 workshop on connectionist-symbolic integration
>
> Hybrid models involve a variety of different types of processes and
> representations, in both learning and performance.
> The hybridization of connectionist and symbolic models also
> inherits the difficulty with learning from the symbolic
> side, and dilutes to a large extent the advantage that purely
> connectionist models have in their learning abilities.
> Considering the importance of learning, in both modeling cognition and
> building intelligent systems, it is crucial for researchers in this area
> to pay more attention to ways of enhancing hybrid models
> in this regard and to putting learning back into hybrid models.

I guess this is as good a time as any to raise the following issue. From the mathematical perspective, I have never seen (in mathematics) HYBRID models. (Mathematicians don't use the term.) Hence a question: How are we to understand this term outside our mathematical experience?

Lev Goldfarb
Tel: 506-453-4566   Fax: 506-453-3566
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From rsun at cs.ua.edu Wed Mar 27 23:39:33 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Wed, 27 Mar 1996 22:39:33 -0600
Subject: What is a "hybrid" model?
Message-ID: <9603280439.AA21077@athos.cs.ua.edu>

That's a tricky question. I don't know if there is any clear-cut answer. (I really hate to answer this, Lev :-) )

One might simply say hybrid models involve both symbolic and subsymbolic processes. But then, what are these? One is NN and the other is LISP code?? A deeper answer is needed. Smolensky attempted to distinguish the two types in his PTC paper in 1988. And there is the (still continuing) discussion in terms of systematicity etc. (Fodor and Pylyshyn 1988, Clark 1991). But I am still not clear about the difference.

In relation to mathematical forms as alluded to in Lev's message, one possible answer is that while symbolic processes can be better modeled by discrete math, subsymbolic processes are better modeled by continuous math. Thus, hybrid models may involve a variety of mathematical forms. But obviously, this is an (over)simplification. (Just consider the approximate equivalence of discrete and continuous math: one can be approximated by the other.)

Another possible answer is that while one involves explicit representation, the other involves implicit representation. But then the question is: what is the difference between the two representations? If I remember correctly, there was a paper recently in Minds and Machines on exactly this topic. But again I was not convinced by the answer provided by the author. Motivated by this dissatisfaction, I was trying to develop my own solution, but it fared no better.

Recently, however, I stumbled upon something that I believe may provide a fruitful way of looking into this and other related issues. What I am looking at is the psychological literature on implicit learning (and to a lesser extent, the literature on implicit memory, unconscious perception, etc.). What these bodies of work may give us is a scientific (experimental) way of getting a handle on the issues. Instead of philosophizing on the differences and so on (no offense intended), we may actually examine the issues experimentally in human subjects and thus make some headway towards understanding the differences in a rigorous and well-grounded way.
As demonstrated by the work of e.g. Reber (1989), Berry and Broadbent (1989), Stanley et al. (1989), and Willingham et al. (1989), humans may actually learn in two different ways (at least): either explicitly or implicitly (symbolically or subsymbolically?). These two types of learning may interact sometimes (Stanley et al. 1989). The distinction and dissociation of these two different types of learning have been demonstrated in a variety of domains, including artificial grammar learning, dynamic control, sequences, covariations, and so on (Seger 1994). Of course, in these experiments, an operational (experiment-based) definition of explicitness and implicitness has to be assumed, and indeed much controversy resulted from definitional differences. However, despite the shortcomings, given the breadth and consistency of results of this line of research, the distinction seems to be well established. I believe this distinction may be beneficial to the understanding of the symbolic vs. subsymbolic and related differences, and ultimately, may lead to a better understanding of what hybrid models are and how we should structure hybrid models.

I will announce a TR that contains a thorough discussion of this shortly.

--Ron
========================================================================
Dr. Ron Sun
Department of Computer Science, 101 K Houser Hall
The University of Alabama, Tuscaloosa, AL 35487
phone: (205) 348-6363   fax: (205) 348-0219
email: rsun at cs.ua.edu
http://cs.ua.edu/faculty/sun/sun.html
ftp://aramis.cs.ua.edu/pub/tech-reports/
========================================================================

From goldfarb at unb.ca Thu Mar 28 01:58:52 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Thu, 28 Mar 1996 02:58:52 -0400 (AST)
Subject: What is a "hybrid" model?
In-Reply-To: <9603280439.AA21077@athos.cs.ua.edu>
Message-ID:

On Wed, 27 Mar 1996, Ron Sun wrote:

> That's a tricky question. I don't know if there is any
> clear-cut answer. (I really hate to answer this, Lev :-) )

Ron,

Please note that the question is not really tricky. The question simply suggests that there is no need to attach the term "hybrid" to the model, because the combination (hybrid model) is both "ugly" and likely to lead almost all researchers involved in the wrong direction: there are really no "hybrid" mathematical structures, but rather "symbiotic structures", e.g. topological groups (although I would also hesitate to suggest this combination as a research direction).

In other words, once we find the right model that captures the necessary "symbiosis" of the discrete and the continuous, we will give it the name that reflects its unique and fundamentally new features, which it MUST exhibit. By the way, I do believe that the inductive learning model proposed by me - the evolving transformation system (see the publications on my homepage) - embodies a fundamentally new symbiosis of the discrete and the continuous.

> In relation to mathematical forms as alluded to in Lev's message,
> one possible answer is that while symbolic processes can be better modeled
> by discrete math, subsymbolic processes are better modeled
> by continuous math.

It is also important to understand why the nature of the above symbiosis should be radically different from that of the classical mathematical structures, which embody, basically, the symbiosis of the NUMERIC mathematical structures.

> Another possible answer is that while one involves explicit representation
> the other involves implicit representation.
> But then the question is:
> what is the difference between the two representations?

> Recently, however, I stumbled upon something that I believe may provide
> a fruitful way of looking into this and other related issues.
> What I am looking at is the psychological literature on implicit learning
> (and to a lesser extent, the literature on implicit memory, unconscious
> perception, etc.). What these bodies of work may give us is a scientific
> (experimental) way of getting a handle on the issues. Instead of
> philosophizing on the differences and so on (no offense intended),
> we may actually examine
> the issues experimentally in human subjects and thus make some headway
> towards understanding the differences in a rigorous and well-grounded way.

> As demonstrated by the work of e.g. Reber (1989), Berry and Broadbent (1989),
> Stanley et al. (1989), and Willingham et al. (1989), humans may actually
> learn in two different ways (at least):
> either explicitly or implicitly (symbolically or subsymbolically?).
> These two types of learning may interact sometimes (Stanley et al. 1989).
> The distinction and dissociation of these two different types of learning
> have been demonstrated in a variety of domains, including artificial
> grammar learning, dynamic control, sequences, covariations, and so on
> (Seger 1994). Of course, in these experiments, an operational
> (experiment-based) definition of explicitness
> and implicitness has to be assumed, and indeed much controversy resulted
> from definitional differences. However, despite the shortcomings,
> given the breadth and consistency of results
> of this line of research, the distinction seems to be well established.
> I believe this distinction may be beneficial
> to the understanding of the symbolic vs. subsymbolic and related differences,
> and ultimately, may lead to a better understanding of what hybrid models
> are and how we should structure hybrid models.

I simply cannot imagine how such (of necessity) relatively "superficial" experimental observations will in the foreseeable future lead us to the insight into the nature of the fundamentally new MATHEMATICAL STRUCTURE (of course, if one at all cares about it). For that matter, many neuroscientists, for example, with equal justification, may also claim to be on the "right trail". And why not?

Remember what Einstein said? (But as long as no principles are found on which to base the deduction, the individual empirical fact is of no use to the theorist; indeed he cannot even do anything with isolated general laws abstracted from experience. He will remain helpless in the face of separate results of empirical research, until principles which he can make the basis of deductive reasoning have revealed themselves to him.) To our area of research this observation is applicable to an even larger extent: we are dealing with information processing.
-- Lev
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From bengioy at IRO.UMontreal.CA Thu Mar 28 09:08:04 1996
From: bengioy at IRO.UMontreal.CA (Yoshua Bengio)
Date: Thu, 28 Mar 1996 09:08:04 -0500
Subject: Spring School/Workshop in Montreal: LAST ANNOUNCEMENT
Message-ID: <199603281408.JAA01139@rouge.IRO.UMontreal.CA>

******** Last reminder, there are only a few seats left: *******

Montreal Workshop and Spring School on
Artificial Neural Networks and Learning Algorithms
April 15-30 1996
Centre de Recherche Mathematique, Universite de Montreal

This workshop and concentrated course on artificial neural networks and learning algorithms is organized by the Centre de Recherches Mathematiques of the University of Montreal (Montreal, Quebec, Canada). The first week of the workshop will concentrate on learning theory, statistics, and generalization. The second week (and the beginning of the third) will concentrate on learning algorithms, architectures, applications and implementations.

The organizers of the workshop are Bernard Goulard (Montreal), Yoshua Bengio (Montreal), Bertrand Giraud (CEA Saclay, France) and Renato De Mori (McGill). The invited speakers are G. Hinton (Toronto), V. Vapnik (AT&T), M. Jordan (MIT), H. Bourlard (Mons), T. Hastie (Stanford), R. Tibshirani (Toronto), F. Girosi (MIT), M. Mozer (Boulder), J.P. Nadal (ENS, Paris), Y. Le Cun (AT&T), M. Marchand (U of Ottawa), J. Shawe-Taylor (London), L. Bottou (Paris), F. Pineda (Baltimore), J. Moody (Oregon), S. Bengio (INRS Montreal), J. Cloutier (Montreal), S. Haykin (McMaster), M. Gori (Florence), J. Pollack (Brandeis), S. Becker (McMaster), Y. Bengio (Montreal), S. Nowlan (Motorola), P. Simard (AT&T), G. Dreyfus (ESPCI Paris), P. Dayan (MIT), N. Intrator (Tel Aviv), B. Giraud (France), H.P. Graf (AT&T).

MORE INFO AT: http://www.iro.umontreal.ca/labs/neuro/spring96/english.html
OR contact Louis Pelletier, pelletl at crm.umontreal.ca, #tel: 514-343-2197

-------------------- SCHEDULE ---------------------------------

The lectures will take place in room 5340 (5th floor) of the Pavillon Andre-Aisenstadt on the campus of the Universite de Montreal.

Week 1: Introduction, learning theory and statistics

April 15:
9:00 - 9:30   Registration (Room 5341) & Coffee (Room 4361)
9:30 - 10:30  Y. Bengio: Introduction to learning theory and learning algorithms
10:30 - 11:30 J.P. Nadal: Constructive learning algorithms: empirical study of learning curves (part I)
14:00 - 15:00 G. Dreyfus: Learning to be a dynamical system (part I)
15:00 - 15:30 B. Giraud: Flexibility, robustness and algebraic convenience of neural nets with neurons having a window-like response function

April 16:
9:00 - 10:00  Y. Bengio: Introduction to artificial neural networks and pattern recognition
10:00 - 11:00 F. Girosi: Neural networks and approximation theory (part I)
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 L. Bottou: Learning theory for local algorithms
14:00 - 15:00 J.P. Nadal: Constructive learning algorithms: empirical study of learning curves (part II)
15:00 - 16:00 G. Dreyfus: Learning to be a dynamical system (part II)

April 17:
9:00 - 10:00  V. Vapnik: Theory of consistency of learning processes
10:00 - 11:00 L. Bottou: Stochastic gradient descent learning and generalization
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 F. Girosi: Neural networks and approximation theory (part II)
14:00 - 15:00 M. Marchand: Statistical methods for learning nonoverlapping neural networks (part I)
15:00 - 16:00 J. Shawe-Taylor: A Framework for Structural Risk Minimisation (part I)
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 M. Jordan: Introduction to Graphical Models

April 18:
9:00 - 10:00  J. Shawe-Taylor: A Framework for Structural Risk Minimisation (part II)
10:00 - 11:00 R. Tibshirani: Regression shrinkage and selection via the lasso
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 V. Vapnik: Non-asymptotic bounds on the rate of convergence of learning processes
14:00 - 15:00 T. Hastie: Flexible Methods for Classification (part I)
15:00 - 16:00 M. Jordan: Algorithms for Learning and Inference in Graphical Models

April 19:
9:00 - 10:00  S. Bengio: Introduction to Hidden Markov Models
10:00 - 11:00 V. Vapnik: The learning algorithms
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 R. Tibshirani: Model search and inference by bootstrap "bumping"
14:00 - 15:00 T. Hastie: Flexible Methods for Classification (part II)
15:00 - 16:00 M. Marchand: Statistical methods for learning nonoverlapping neural networks (part II)
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 B. Giraud: Spectrum recognition via the pseudo-inverse method and optimal background subtraction

Weeks 2 and 3: Algorithms, architectures and applications

April 22:
9:00 - 9:30   Registration (room 5341) & Coffee (room 4361)
9:30 - 10:30  S. Haykin: Neurosignal Processing: A Paradigm Shift in Statistical Signal Processing (part I)
10:30 - 11:30 H. Bourlard: Using Markov Models and Artificial Neural Networks for Speech Recognition (part I)
14:00 - 15:00 M. Gori: Links between suspiciousness and computational complexity
15:00 - 16:00 M. Mozer: Modeling time series with compositional structure
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 F. Pineda: Reinforcement learning and TD-lambda

April 23:
9:00 - 10:00  S. Haykin: Neurosignal Processing: A Paradigm Shift in Statistical Signal Processing (part II)
10:00 - 11:00 F. Pineda: Hardware architecture for acoustic transient classification
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 H. Bourlard: Using Markov Models and Artificial Neural Networks for Speech Recognition (part II)
14:00 - 15:00 J. Pollack: Dynamical properties of networks for cognition
15:00 - 16:00 P. Dayan: Factor Analysis and the Helmholtz Machine
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 M. Mozer: Symbolically-Constrained Subsymbolic Processing

April 24:
9:00 - 10:00  M. Gori: Number-plate recognition with neural networks
10:00 - 11:00 J. Pollack: A co-evolutionary framework for learning
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 P. Dayan: Bias and Variance in TD Learning
14:00 - 15:00 S. Becker: Unsupervised learning and vision (part I)
15:00 - 16:00 P. Simard: Memory-based pattern recognition

April 25:
9:00 - 10:00  S. Becker: Unsupervised learning and vision (part II)
10:00 - 11:00 G. Hinton: Improving generalisation by using noisy weights
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 N. Intrator: General methods for training ensembles of regressors (part I)
14:00 - 15:00 S. Nowlan: Mixtures of experts
15:00 - 16:00 G. Hinton: Helmholtz machines
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 Y. Le Cun: Shape Recognition with Gradient-Based Learning Methods

April 26:
9:00 - 10:00  S. Bengio: Input/Output Hidden Markov Models
10:00 - 11:00 Y. Le Cun: Fast Neural Net Learning and Non-Linear Optimization
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 S. Nowlan: Mixture of experts to understand functional aspects of primate cortical vision
14:00 - 15:00 N. Intrator: General methods for training ensembles of regressors (part II)
15:00 - 16:00 P. Simard: Pattern Recognition Using a Transformation Invariant Metric

April 29:
9:00 - 10:00  J. Moody: Artificial Neural Networks applied to finance (part I)
10:00 - 11:00 Y. Bengio: Modeling multiple time scales and training with a specialized financial criterion
14:00 - 15:00 H.P. Graf: Recent Developments in Neural Net Hardware (part I)

April 30:
9:00 - 10:00  J. Moody: Artificial Neural Networks applied to finance (part II)
10:00 - 10:30 Coffee Break (room 4361)
10:30 - 11:30 H.P. Graf: Recent Developments in Neural Net Hardware (part II)
14:00 - 15:00 J. Cloutier: FPGA-based multiprocessor: Implementation of Hardware-Friendly Algorithms for Neural Networks and Image Processing
15:00 - 15:30 Coffee Break (room 4361)
15:30 - 16:30 J. Moody: Artificial Neural Networks applied to finance (part III)

-------------------- Registration information: ---------------------

$100 (Canadian) or $75 (US) if received before April 1st
$150 (Canadian) or $115 (US) if received on or after April 1st
$25 (Canadian) or $19 (US) for students and post-doctoral fellows.

The number of participants will be limited, on a first-come first-served basis. Please register early! Registration forms and hotel information are available at our WEB SITE:
http://www.iro.umontreal.ca/labs/neuro/spring96/english.html

For more information, contact Louis Pelletier, pelletl at crm.umontreal.ca, 514-343-2197. Centre de Recherche Mathematique, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec, H3C-3J7, Canada.

--
Yoshua Bengio
Professeur Adjoint, Dept. Informatique et Recherche Operationnelle
Pavillon Andre-Aisenstadt #3339, Universite de Montreal,
Dept. IRO, CP 6128, Succ. Centre-Ville, 2920 Chemin de la tour,
Montreal, Quebec, Canada, H3C 3J7
E-mail: bengioy at iro.umontreal.ca   Fax: (514) 343-5834
Tel: (514) 343-6804   Residence: (514) 738-6206
web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy
or http://www.iro.umontreal.ca/labs/neuro/

From ptodd at mpipf-muenchen.mpg.de Thu Mar 28 11:26:17 1996
From: ptodd at mpipf-muenchen.mpg.de (ptodd@mpipf-muenchen.mpg.de)
Date: Thu, 28 Mar 96 17:26:17 +0100
Subject: looking for latest artistic/musical applications
Message-ID: <9603281626.AA06734@hellbender.mpipf-muenchen.mpg.de>

We are putting together a new book on the artistic and musical uses of connectionist systems, including psychological modeling, artistic creation, etc., and we would like everyone's help in making this work as complete as possible. The book will be based on the special issue of Connection Science we edited on this topic (1994), and will include new articles as well. For our revised introduction, we are seeking references and papers on the latest research in this area, so we can provide a more accurate survey of what's out there. Unpublished research projects are also of interest.

We have collected references to all of the work that is currently known to us (including the tables of contents of the Connection Science issue and of the 1991 MIT Press book, Music and Connectionism), and we are making this available via anonymous ftp in the following file:

host:      ftp canetoad.mpipf-muenchen.mpg.de
           (with login name "anonymous" and your email address as password)
directory: cd /pub/science/ptodd
file:      get references.txt (plain text)

If you have any new pointers or suggestions in these areas, please send them to us at the email addresses below.
We will make available the table of contents of the new book when that has been finalized, as well as the list of research we compile beforehand.

Thanks for your distributed help--
Peter Todd (ptodd at mpipf-muenchen.mpg.de)
Niall Griffith (ngr at atlas.ex.ac.uk)

Peter M. Todd
Max Planck Institute for Psychological Research
Center for Adaptive Behavior and Cognition
Leopoldstrasse 24, 80802 Munich, GERMANY
Email: ptodd at mpipf-muenchen.mpg.de
Phone: (049) (89) 38 602 236   Fax: (049) (89) 38 602 252

From lba at inesc.pt Thu Mar 28 11:57:15 1996
From: lba at inesc.pt (Luis B. Almeida)
Date: Thu, 28 Mar 1996 17:57:15 +0100
Subject: Sintra Workshop on Spatiotemporal Models - CFP
Message-ID: <315AC4EB.13728473@inesc.pt>

**** PLEASE POST **** PLEASE DISTRIBUTE TO OTHER LISTS ****

Announcement and Call for Papers
Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems
Sintra, Portugal, 6-8 November 1996

The spatial and temporal aspects of information processing present important challenges, both in biological and in artificial systems. Good examples of these challenges are the processing of visual information and the roles of chaotic behavior and of synchrony in biological systems, or the training of desired dynamical behaviors and the representation of spatial information in artificial systems. The Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems aims to foster the discussion and interchange of ideas among researchers interested in all aspects of spatiotemporal modelling. A non-exhaustive list of topics is:

analysis of neural patterns
neocortical dynamics
neural synchrony
cortico-thalamic interactions
cortical modules
perception units
sensorimotor modeling
neural correlates of behavior
plasticity in neuronal networks
learning processes and rules
spatio-temporal measures
neural network dynamics
oscillations and chaos in neural networks
recurrent neural networks
robot navigation
coupled oscillators
coupled lattices
cellular automata

WORKSHOP FORMAT

The size of the workshop is planned to be relatively small (around 50 people), to enhance the communication among participants. Submissions will be subjected to an international peer review procedure, following the standards for high quality scientific events. All accepted submissions will be scheduled for poster presentation. The authors of the best-rated submissions will make oral presentations, in addition to their poster presentations. Presentation of an accepted contribution is mandatory for participation in the workshop. There will also be a number of presentations by renowned invited speakers. The workshop is planned to have a duration of two and a half days, from a wednesday afternoon through the next friday afternoon. The participants who so desire will have the opportunity to stay the following weekend, for sightseeing.

PAPER SUBMISSION

Submissions will consist of the full papers in their final form. Paper revision after the review is not expected to be possible. The accepted contributions will be published by a major scientific publisher. The proceedings volume is planned to be distributed to the participants at the beginning of the workshop.
The camera-ready paper format is not available yet, but a rough indication is eight A4 pages, typed single-spaced in a 12 point font, with 3.5 cm margins all around. Once a final agreement with the publisher is reached, information on the camera-ready paper format will be sent to all those who have requested it, including those in the workshop mailing list, and will also be incorporated in the workshop's web page (see 'Staying Informed', below). Papers should be submitted to Dr. Jose C. Principe (see below), before the deadline of 30 April. Dr. Principe should also be contacted for clarifying any doubts regarding paper submission.

USEFUL DATES

The workshop will take place on 6-8 November 1996 in Sintra, Portugal. The schedule is as follows:

Deadline for paper submission   30 April 1996
Results of paper review         31 July 1996
Workshop                        6-8 November 1996

FUNDING

We have received confirmation that the Office of Naval Research (U.S.A.) will provide partial funding for the workshop. Among other things (e.g. funding the invited speakers), this will allow us to partially subsidize the participants' expenses, by lowering the registration cost. We have also been given a grant from Fundacao Oriente (Portugal), in the amount of 100000 Portuguese escudos (about 600 US dollars), to be assigned to a participant from the Far East. Preference criteria for assigning this grant will be (1) being a graduate student, and (2) the ranking from the paper review procedure. Funding applications have also been sent to several other institutions. Therefore, some more funding may become available in the future.

REGISTRATION

The authors of the accepted papers will be sent information about the registration procedure. Registration of an author is mandatory for the publication of the corresponding paper in the proceedings. This is done to ensure the financial balance of the workshop. The current estimate of the registration cost is 40000 Portuguese escudos (about 270 US dollars) per participant. Besides the participation in the workshop itself, this estimate includes the proceedings, lunch on thursday and friday and coffee breaks. This estimate already takes into account the funding from the ONR. The cost of lodging at Hotel Tivoli Sintra will be, per night and per person:

Single room   9700 escudos (about 65 US dollars)
Double room   5450 escudos (about 36 US dollars)

THE PLACE

Sintra is a beautiful little town, located about 20 km west of Lisbon. It used to be a vacation place of the Portuguese aristocracy, and has in its vicinity a number of beautiful palaces, a Moorish castle, a monastery carved in the rock and other interesting spots. It is on the edge of a small mountain that creates a microclimate with luxuriant vegetation. Sintra has recently been designated World Patrimonium. The workshop will be held at the Hotel Tivoli Sintra, which is located in the old part of Sintra. The hotel is modern, comfortable and has good facilities for this kind of event.

STAYING INFORMED

To be included in the workshop's mailing list, send e-mail to Luis B. Almeida (see below).
Workshop's web page: http://aleph.inesc.pt/smbas/
Mirror: http://www.cnel.ufl.edu/workshop.html
Come back to these web pages. They will be kept up-to-date, with the latest information.

WORKSHOP ORGANIZERS

Chair: Fernando Lopes da Silva, Amsterdam University, The Netherlands

Technical program: Jose C. Principe
Address: Electrical Engineering Dept.
University of Florida, Gainesville FL 32611, USA
Phone: +1-904-392-2662   Fax: +1-904-392-0044
E-mail: principe at synapse.cnel.ufl.edu

Local arrangements: Luis B. Almeida
Address: INESC, R. Alves Redol, 9, 1000 Lisboa, Portugal
Phone: +351-1-3100246   Fax: +351-1-3145843
E-mail: luis.almeida at inesc.pt

--
Luis B. Almeida
INESC, R. Alves Redol, 9, P-1000 Lisboa, Portugal
Phone: +351-1-3544607, +351-1-3100246   Fax: +351-1-3145843
e-mail: lba at inesc.pt or luis.almeida at inesc.pt

From FRYRL at f1groups.fsd.jhuapl.edu Thu Mar 28 15:23:00 1996
From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.)
Date: Thu, 28 Mar 96 15:23:00 EST
Subject: FW: What is a "hybrid" model?
Message-ID: <315A6A59@fsdsmtpgw.fsd.jhuapl.edu>

>On Wed, 27 Mar 1996, Lev wrote:
>Ron,
>Please note that the question is not really tricky. The question simply
>suggests that there is no need to attach the term "hybrid" to the model,
>because the combination (hybrid model) is both "ugly" and likely to
>lead almost all researchers involved in the wrong direction: there are
>really no "hybrid" mathematical structures, but rather "symbiotic
>structures", e.g. topological groups (although I would also hesitate to
>suggest this combination as a research direction).
>In other words, once we find the right model that captures the necessary
>"symbiosis" of the discrete and the continuous, we will give it the name
>that reflects its unique and fundamentally new features, which it MUST
>exhibit.

I agree whole-heartedly with Ron. The term "hybrid", like so many other concepts that we confuse with reality, is of our own making. As George Spencer-Brown said, "the world is like shifting sands beneath our feet..." It is up to the observer to segment and name the world through the process of learning and distinction.

Historical perspective often sheds light on what are perceived as new problems, but are in fact perhaps forgotten ideas. Josiah Willard Gibbs (see Volume I of the collected works) developed thermodynamics and thermostatics through classical and macroscopic means. He then (see Volume II of the collected works) treated statistical mechanics. Gibbs called his statistical mechanical treatment an "analogy". Myron Tribus (another famous thermodynamicist, now more famous for his popularization of Jaynes' MaxEnt Principle) has told me that Gibbs could not show a one-to-one correspondence between what he knew about classical thermodynamics (discrete and quantized, though quantum principles had not yet been proposed) and statistical mechanics, because all his statistical functions were continuous. Perhaps this bit of historical perspective provides direct insight into discrete-continuous formulations of neural computation. This is my understanding.

Bob Fry
Johns Hopkins University / Applied Physics Laboratory

From shrager at neurocog.lrdc.pitt.edu Fri Mar 29 00:25:28 1996
From: shrager at neurocog.lrdc.pitt.edu (Jeff Shrager)
Date: Fri, 29 Mar 1996 00:25:28 -0500 (EST)
Subject: What is a "hybrid" model?
In-Reply-To: <9603280439.AA21077@athos.cs.ua.edu>
Message-ID:

> Recently, however, I stumbled upon something that I believe may provide
> a fruitful way of looking into this and other related issues.
>...

Well, since you're into this literature, you might as well look at some real computational psychology. Permit me a moment of inhumility in pointing out our work on the development of arithmetic skill in preschoolers:

Siegler, R. S., & Shrager, J. (1984). Strategy choices in addition and subtraction: How do children know what to do? In C.
Sophian (Ed.), Origins of Cognitive Skills. Hillsdale, NJ: Lawrence Erlbaum Associates. pp. 229-294.

We describe an implemented hybrid (in neo-terminology) model of the development of small number addition skill (e.g., what's 4+3?) and validate the model against real children's learning data. The model generally correlates with the observed/predicted performance at greater than .8, and often greater than .9.

It is hybrid in two senses. (I use the term "mediated" rather than "hybrid" because "mediated" describes the way in which the components interact, as follows....) First, the model has both an explicit component and a memory component. The former is discrete (it does simple addition, which is, well, simple), and the latter is a basic (continuous) association model. These components train one another, and the decision about which component to use in a given case is made in accord with the history of training. This is one sense of mediation: the memory component is trained by the discrete component; that is, the discrete component mediates between the memory and reality (or, rather, correctness). The second sense in which the model is hybrid, or mediated, is in a (computational analog of a) social sense: specifically, the model's "environment" -- the particular distribution of problems that it sees -- is based upon real observations of the distribution of problems that children are given by their parents. Here, too, the parent mediates the child's performance until the child's own systems are trained up appropriately.

There. Now you don't have any excuse for not including this in your forthcoming TR! :-)

Cheers,
Jeff

p.s. Anyone who would like a copy of this forthcame-TR may send me an address label-like email and I'll be happy to send one out.

From rsun at cs.ua.edu Fri Mar 29 01:20:43 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Fri, 29 Mar 1996 00:20:43 -0600
Subject: What is a "hybrid" model?
Message-ID: <9603290620.AA14624@athos.cs.ua.edu>

Lev: I have to disagree with several points you made.

lev goldfarb at unb.ca wrote:
>Please note that the question is not really tricky. The question simply
>suggests that there is no need to attach the term "hybrid" to the model,
>because the combination (hybrid model) is both "ugly" and likely to
>lead almost all researchers involved in the wrong direction

I really don't care that much what label one uses, and a label simply cannot ``lead all researchers in the wrong direction". Sorry. :-)

>really no "hybrid" mathematical structures, but rather "symbiotic
>structures", e.g. topological groups

There have been _a lot of_ different mathematical structures proposed that purport to capture all the essential properties of both symbolic and neural models. I would hesitate to make any such claim right now: we simply do not know enough yet about even the basics to make such a sweeping claim. What we can do is work toward such an end.

>I simply cannot imagine how such (of necessity) relatively "superficial"
>experimental observations will in the foreseeable future lead us to the
>insight into the nature of the fundamentally new MATHEMATICAL STRUCTURE
>(of course, if one at all cares about it).

I would not so readily dismiss the literature on implicit learning which I cited in the previous message as ``superficial". I would urge you to look into these papers first. While you are at it, you might also look into some related work in developmental psychology (and models of developmental processes).
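Readers who want something executable may find the following toy sketch, in the spirit of the mediated two-component scheme Jeff describes above, a useful handle on the discussion. It is emphatically not Siegler and Shrager's model: the threshold, learning rate, and problem are invented. An explicit counting component answers slowly but correctly and thereby strengthens a continuous associative memory; once an association is strong enough, fast retrieval takes over.

strength = {}                    # (a, b, answer) -> associative strength
THRESHOLD, RATE = 0.6, 0.2       # invented parameters

def answer(a, b):
    # memory component: retrieve the strongest association, if confident
    candidates = {ans: s for (x, y, ans), s in strength.items()
                  if (x, y) == (a, b)}
    if candidates:
        best = max(candidates, key=candidates.get)
        if candidates[best] >= THRESHOLD:
            return best, 'retrieval'
    # explicit component: count out the sum -- slow but correct --
    # and reinforce the association it produces (the "mediation")
    result = sum(1 for _ in range(a)) + sum(1 for _ in range(b))
    strength[(a, b, result)] = strength.get((a, b, result), 0.0) + RATE
    return result, 'counting'

for _ in range(4):
    print(answer(3, 4))   # counts three times, then shifts to retrieval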
From rsun at cs.ua.edu Fri Mar 29 01:20:43 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Fri, 29 Mar 1996 00:20:43 -0600
Subject: What is a "hybrid" model?
Message-ID: <9603290620.AA14624@athos.cs.ua.edu>

Lev: I have to disagree with several points you made.

lev goldfarb at unb.ca wrote:
>Please note that the question is not really tricky. The question simply
>suggests that there is no need to attach the term "hybrid" to the model,
>because the combination (hybrid model) is both "ugly" and is likely to
>lead almost all researchers involved in the wrong direction

I really don't care that much what label one uses, and a label simply cannot ``lead all researchers in the wrong direction". Sorry. :-)

>really no "hybrid" mathematical structures, but rather "symbiotic
>structures", e.g. topological group

There have been _a lot of_ different mathematical structures proposed that purport to capture all the essential properties of both symbolic and neural models. I would hesitate to make any such claim right now: we simply do not know enough yet about even the basics to make such a sweeping claim. What we can do is work toward such an end.

>I simply cannot imagine how such (of necessity) relatively "superficial"
>experimental observations will in the foreseeable future lead us to the
>insight into the nature of the fundamentally new MATHEMATICAL STRUCTURE
>(of course, if one at all cares about it).

I would not so readily dismiss the literature on implicit learning which I cited in the previous message as ``superficial". I would urge you to look into these papers first. While you are at it, you might also look into some related work in developmental psychology (and models of developmental processes). Personally, I find these pieces of work very PRINCIPLED; they may lead to exactly the kind of mathematical structure that we need to model cognition (which is, of necessity, complex or even ``heterogeneous").

>Remember what Einstein said?
>(But as long as no principles are found on which to base the deduction,
>the individual empirical fact is of no use to the theorist; indeed he
>cannot even do anything with isolated general laws abstracted from
>experience. He will remain helpless in the face of separate results of
>empirical research, until principles which he can make the basis of
>deductive reasoning have revealed themselves to him.)

I cannot agree more. It's a nice quote. But it supports my points as much as it does yours.

Cheers,
--Ron

========================================================================
Dr. Ron Sun http://cs.ua.edu/faculty/sun/sun.html
101 K Houser Hall ftp://aramis.cs.ua.edu/pub/tech-reports/
Department of Computer Science phone: (205) 348-6363
The University of Alabama fax: (205) 348-0219
Tuscaloosa, AL 35487 email: rsun at cs.ua.edu
========================================================================

From rsun at cs.ua.edu Fri Mar 29 01:34:34 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Fri, 29 Mar 1996 00:34:34 -0600
Subject: What is a "hybrid" model?
Message-ID: <9603290634.AA15686@athos.cs.ua.edu>

Jeff Shrager wrote:
>> I guess this is as good a time as any to raise the following issue. From
>> the mathematical perspective, I have never seen (in mathematics) HYBRID
>> models. (Mathematicians don't use the term.) Hence a question: How are we
>> to understand this term outside our mathematical experience?
>
>When you have two (or more) hacks, each of which does part of a job,
>and you can't find one that does the whole job, you wire them together
>and call it a hybrid model. Seems simple enough. (Maybe it's like
>wiring together two (or more) diffeqs.) The brain is full of such
>things.
>
>Cheers,
> Jeff

Exactly. You find such things not just in neuroscience, but also in psychological data and in AI models. Are they necessarily bad for these fields? Not if they lead to the discovery of real principles. BTW, principles can be ``hybrid", contrary to what some may say. (You can call them synergistic, symbiotic, or what have you.)

Cheers,
--Ron

========================================================================
Dr. Ron Sun http://cs.ua.edu/faculty/sun/sun.html
101 K Houser Hall ftp://aramis.cs.ua.edu/pub/tech-reports/
Department of Computer Science phone: (205) 348-6363
The University of Alabama fax: (205) 348-0219
Tuscaloosa, AL 35487 email: rsun at cs.ua.edu
========================================================================

From Jonathan_Stein at hub1.comverse.com Fri Mar 29 10:21:55 1996
From: Jonathan_Stein at hub1.comverse.com (Jonathan_Stein@hub1.comverse.com)
Date: Fri, 29 Mar 96 10:21:55 EST
Subject: Re[?]: What is a "hybrid" model?
Message-ID: <9602298281.AA828133262@hub1.comverse.com>

I have been following this thread and can't resist a few comments regarding the basic differences between "symbolic" and "neural" processes, and the interplay between them.

First, it can easily be demonstrated that the two types of processes both exist and are distinguishable. Consider, for example, the task of determining whether three specific dots among many others form an equilateral triangle. When the three dots are red and all the others black, this task can be performed quickly, while if the three points are marked by a different shape (e.g.
small squares) than the others (e.g. misc. circles, triangles, etc. of roughly the same size), we resort to exhaustive search. The first problem is solved using a connectionist technique, while for the second we resort to good old-fashioned AI.

Next, it has been demonstrated in psychophysical experiments that there are two types of learning. The first type is gradual, with slowly improving performance, while in primates there is also "sudden" learning, where the subject (EUREKA!) discovers a symbolic representation simplifying the task. Thus not only is the basic hardware different for the two processes, but different learning algorithms are used as well.

Finally, regarding the interplay between the two: biology does not cleanly separate the tasks with defined interfaces (as people typically try to do) but employs level mixing. In both speech recognition and reading problems it has been demonstrated that the lower (neural) levels provide initial best hypotheses, which can be rejected by higher (syntactic, semantic or pragmatic) levels. A nice example is to quickly say "How do you wreck a nice beach?" in the context of a conversation about speech recognition. Most people will hear "How do you recognize speech?" Another interesting aspect surfaces when there are several different lower levels feeding higher ones. In the famous BAGADA experiment a subject listens to one phoneme while seeing a film of someone saying a different one, and reports hearing a third!

Thus the idea behind "hybrid" systems, composed of decision-theoretic and symbolic layers, is neither 1) trivial and ugly, 2) a hack wiring together two unrelated layers, nor 3) a matter of semantics and of no interest. Calling them symbiotic rather than hybrid IS a matter of semantics.

Jonathan Stein
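The computational gap behind the dot example above can be made concrete: a distinguishing color lets a parallel feature map hand over the single candidate triple directly, while shape-defined targets leave nothing to filter on, forcing an exhaustive search over all O(n^3) triples. A toy sketch in Python (the (x, y, color) dot encoding is an assumption made purely for illustration):

from itertools import combinations
import math

def is_equilateral(p, q, r, tol=1e-6):
    d = sorted(math.dist(a, b) for a, b in combinations((p, q, r), 2))
    return d[0] > 0 and (d[2] - d[0]) < tol * d[2]

# Pop-out case: the targets carry a unique feature (color), so the single
# candidate triple is available at once, as if from a parallel feature map.
def equilateral_by_color(dots):
    reds = [(x, y) for x, y, c in dots if c == "red"]
    return len(reds) == 3 and is_equilateral(*reds)

# Shape-defined case: no single feature singles out the targets, so we are
# left with serial, exhaustive search over every triple of points.
def equilateral_by_search(points):
    return any(is_equilateral(p, q, r) for p, q, r in combinations(points, 3))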
From complex at blaze.cs.jhu.edu Fri Mar 29 16:59:59 1996
From: complex at blaze.cs.jhu.edu (2nd account for S.Kasif)
Date: Fri, 29 Mar 96 16:59:59 EST
Subject: AAAI SYMPOSIUM ANNOUNCEMENT
Message-ID:

=========================================================
C A L L F O R P A P E R S
=========================================================
LEARNING COMPLEX BEHAVIORS IN ADAPTIVE INTELLIGENT SYSTEMS
AAAI Fall Symposium
November 9-11, 1996
Cambridge, Massachusetts, USA
=======================================
Submissions due April 15, 1996
See the symposium home page at http://www.cs.jhu.edu/complex/symposium/cfp.html

Call for Papers

The machine learning community made an important methodological transition by identifying a collection of benchmarks that can be used for comparative testing of learning (typically classification) algorithms. While the resulting comparative research contributed substantially to progress in the field, a number of recent studies have shown that very simple representations, such as depth-two decision trees, naive Bayes classifiers or perceptrons, perform relatively well on many of the benchmarks, which are typically static, fixed-size databases. At the same time, when knowledge representations are hand-crafted for solving complex tasks, they are typically rather large and are often designed to cope with complex dynamic environments. This symposium will attempt to bridge this gap by focusing the meeting on the study of algorithms that learn to perform complex behaviors and cognitive tasks such as reasoning and planning with uncertainty, perception, natural language processing and large-scale industrial applications. An additional important subgoal is emphasizing scalability of learning algorithms (e.g. reinforcement learning) in these complex domains. Our main motivation is to have an interdisciplinary meeting that focuses on "rational" agents that learn complex behaviors, a goal closer in spirit to the aims of AI than learning simple classifiers. We expect to draw selected researchers from AI, Neural Networks, Machine Learning, Uncertainty, and Computer Science Theory.

Some of the key issues we plan to address are:
* Research on agents that learn to behave "rationally" in complex environments.
* Discovering parameters that can be used to measure the empirical complexity of learning a complex domain.
* Generating new benchmarks and devising a methodological framework for studying empirical scalability of algorithms that learn complex behaviors.
* Broadening the focus of learning to achieve a particular functionality in response to the demands generated by the domain, rather than learning a particular representation (e.g. learning to answer queries of the form "what is the probability of X given Y" may be easier than learning a complete probability distribution on n variables).
* Discussing the hypothesis that current learning algorithms require substantial knowledge engineering and close familiarity with the problem domain in order to learn complex behaviors.
* Scalability of different representations and learning methods.

The symposium will consist of invited talks, submitted papers, and panel discussions on topics such as
* learning complex i/o behaviors;
* learning optimization and planning;
* learning to reason;
* learning to reason with uncertainty; and
* learning to perform complex cognitive tasks.

We will invite short technical papers on these issues as well as position papers relating learning and issues in knowledge representation; comparative papers that illustrate the capabilities of different representations to achieve the same functionality; and papers providing specific benchmarks that demonstrate the scalability of a particular representation or paradigm.

SUBMISSION INFORMATION
Prospective participants are encouraged to submit extended abstracts (5-8 pages) addressing the research issues above. Please refer to an extended version of the call for papers that provides additional submission information and a tentative program (available on the WEB at: http://www.cs.jhu.edu/complex/symposium/cfp.html). Electronic submissions as well as inquiries about the program should be sent to complex at cs.jhu.edu.

IMPORTANT DATES
Submissions must be received by: 15 April 1996
Notification of acceptance on or before: 17 May 1996
Camera-ready copy for working notes due: 23 Aug 1996

ORGANIZING COMMITTEE
S. Kasif (co-chair), Johns Hopkins Univ.; S. Russell (co-chair), Berkeley; B. Berwick, MIT; T. Dean, Brown Univ.; R. Greiner, Siemens Research; M. Jordan, MIT; L. Kaelbling, Brown Univ.; D. Koller, Stanford Univ.; A. Moore, CMU; D. Roth, Weizmann Institute

* * *
Fall Symposia are sponsored by the American Association for Artificial Intelligence (AAAI).
More information about the Fall Symposium on "LEARNING COMPLEX BEHAVIORS" can be found at: http://www.cs.jhu.edu/complex/symposium/cfp.html

From omlinc at cs.rpi.edu Fri Mar 29 17:00:21 1996
From: omlinc at cs.rpi.edu (omlinc@cs.rpi.edu)
Date: Fri, 29 Mar 96 17:00:21 EST
Subject: TR available - fuzzy recurrent neural networks
Message-ID: <9603292200.AA27037@colossus.cs.rpi.edu>

The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives:

---------------------------------------------------------------------------
Fuzzy Finite-state Automata Can Be Deterministically Encoded into Recurrent Neural Networks

Christian W. Omlin(a), Karvel K. Thornber(a), C. Lee Giles(a,b)
(a) NEC Research Institute, Princeton, NJ 08540
(b) UMIACS, U. of Maryland, College Park, MD 20742

U. of Maryland Technical Report CS-TR-3599 and UMIACS-96-12

ABSTRACT
There has been an increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings, and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships, i.e. they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike deterministic finite-state automata (DFAs), FFAs are not in one particular state; rather, each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-time, second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes an FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by measuring string recognition performance of recurrent neural networks which encode large randomly generated FFAs. In particular, we examine how the networks' performance varies as a function of synaptic weight strength.

Keywords: Fuzzy logic, automata, fuzzy automata, recurrent neural networks, encoding, rules.

****************************************************************
I would like to add to my announcement of the TR that recurrent neural networks with sigmoid discriminant functions that represent finite-state automata are an example of hybrid systems. Comments regarding the TR are welcome. Please send them to omlinc at research.nj.nec.com.

Thanks
-Christian
****************************************************************************
http://www.neci.nj.nec.com/homepages/giles.html
http://www.neci.nj.nec.com/homepages/omlin/omlin.html
http://www.cs.umd.edu/TRs/TR-no-abs.html
ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3599.fuzzy.automata.encoding.recurrent.nets.ps.Z
******************************************************************************
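To convey the flavor of such encodings, here is a sketch of the crisp (DFA) special case on which the FFA construction builds: transitions are programmed into a second-order weight tensor with values +H or -H, and the sigmoid state vector then tracks the automaton's one-hot state arbitrarily closely as H grows. This is a simplified illustration in Python, not the exact construction of the report.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy DFA over {0, 1}: parity of the number of ones.
delta = [[0, 1], [1, 0]]            # delta[state][symbol] -> next state
n_states, n_symbols, H = 2, 2, 8.0  # H = programmed weight strength

# Second-order weights W[i, j, k]: +H if reading symbol k in state j leads
# to state i, else -H (a crude version of the +/-H weight programming).
W = -H * np.ones((n_states, n_states, n_symbols))
for j in range(n_states):
    for k in range(n_symbols):
        W[delta[j][k], j, k] = H

def run(string):
    S = np.eye(n_states)[0]         # network state starts at one-hot state 0
    for k in string:
        I = np.eye(n_symbols)[k]    # one-hot input symbol
        S = sigmoid(np.einsum('ijk,j,k->i', W, S, I))
    return int(np.argmax(S))

print(run([1, 0, 1, 1]))  # 1: an odd number of ones, matching the DFA

The state units saturate near 0 or 1 after every step, so the network stays close to a one-hot corner of the hypercube; the fuzzy case augments this picture with graded state memberships.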
From goldfarb at unb.ca Fri Mar 29 23:51:28 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Sat, 30 Mar 1996 00:51:28 -0400 (AST)
Subject: What is a hybrid model?
In-Reply-To: <9603291518.AA12861@athos.cs.ua.edu>
Message-ID:

On Fri, 29 Mar 1996, Ron Sun wrote:

> lev goldfarb at unb.ca wrote:
> >Please note that the question is not really tricky. The question simply
> >suggests that there is no need to attach the term "hybrid" to the model,
> >because the combination (hybrid model) is both "ugly" and is likely to
> >lead almost all researchers involved in the wrong direction
>
> I really don't care that much what label one uses, and a label simply
> cannot ``lead all researchers in the wrong direction". Sorry. :-)

The labels we use betray our ignorance and prejudices and mislead many ignorant followers.

> >really no "hybrid" mathematical structures, but rather "symbiotic
> >structures", e.g. topological group
>
> There have been _a lot of_ different
> mathematical structures proposed that purport
> to capture all the essential properties of both symbolic and neural models.
> I would hesitate to make any such claim right now: we simply do not
> know enough yet about even the basics to make such a sweeping claim.
> What we can do is work toward such an end.

I'm all for "working toward such an end". But let's try to do it in a competent manner. Contrary to the above, there have been NO fundamentally new and relevant MATHEMATICAL STRUCTURES (except the transformation system) proposed so far that embody a "natural" symbiosis of symbolic and numeric mathematical structures.

To properly understand the last statement one has to know first of all what the meaning of "mathematical structure" is. An outstanding group of French mathematicians, who took the pseudonym of Nicolas Bourbaki, contributed significantly to the popularization of the emerging (during the first half of this century) understanding of mathematical structures. Presently a mathematical structure (e.g. totally ordered set, group, vector space, topological space) is typically understood as a set -- the carrier of the structure -- together with a set of operations, or relations, defined on it. The relations/operations are actually specified by means of axioms. (Frankly, it takes MANY hours to become comfortable with the term "mathematical structure" through the study of several typical structures. I don't know any of the newer introductory books, which I'm sure are many, but from the older ones I recommend, for example, Algebra, by Roger Godement, 1968.)

In the case of more complex and more interesting classical "symbiotic" structures, such as the topological group, one defines this new structure by imposing on the "old" structure (the group) a new structure (the topology) in such a way that the new structure is CONSISTENT with the old (in a certain well-defined sense, e.g. algebraic operations must be continuous wrt the introduced topology).

Why is it that we are faced with considerable difficulties when trying to "combine" the symbolic and the numeric mathematical structures into one "natural" structure that is of relevance to us? It turns out that the two classes of structures are not "combinable" in the sense we are used to in classical mathematics. (Please, no hacks: I'd like to talk science now.) Why? While each element in a classical "symbiotic" math structure belongs simultaneously to two earlier structures, in this case this is definitely not true: symbols are not numbers (and symbolic operations are FUNDAMENTALLY different from numeric operations).

It appears that THE ONLY NATURAL WAY to accomplish the "symbiosis" in this case is to associate with each symbolic operation a weight and to introduce into the corresponding set of symbolic objects, or structs, the distance measure that takes into consideration the operation weights. The distance is defined by means of the sequences of weighted operations, i.e. NUMBERS enter the mathematical structure through the DISTANCES DEFINED ON the basic objects, STRUCTS. I'm quite confident that the new structure thus defined is (in a well-defined sense) more general than any of the classical numeric structures, and hence cannot be "isomorphic" to any of them. We have discussed the axiomatics of the new structure on the INDUCTIVE list.

It is also not difficult to see that in applications of the new mathematical structure, the symbolic representations, or structs, begin to play a much more fundamental role as compared with the classical models, e.g. the NN. Of course, this implies, in particular, that in applications of the model we need fundamentally different measurement devices, which in nature are realized chemically, but can at present be simulated directly on top of the classical measurement devices. We are completing a paper "Inductive theory of vision" in which, among other things, we discuss these issues more formally (and with some illustrations).

-- Lev
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm
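The simplest concrete instance of this proposal is a distance on strings built from weighted edit operations: the operations stay purely symbolic, and numbers enter only through the distance they induce. A minimal Python sketch (the weights are invented, and Goldfarb's transformation-system framework is considerably more general than this plain weighted edit distance):

# Weighted edit distance: each symbolic operation carries a numeric weight,
# and numbers enter only through the induced distance on the structs.
W_INS, W_DEL, W_SUB = 1.0, 1.0, 1.5   # illustrative operation weights

def struct_distance(s, t):
    m, n = len(s), len(t)
    D = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        D[i][0] = D[i - 1][0] + W_DEL
    for j in range(1, n + 1):
        D[0][j] = D[0][j - 1] + W_INS
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if s[i - 1] == t[j - 1] else W_SUB
            D[i][j] = min(D[i - 1][j] + W_DEL,      # delete s[i-1]
                          D[i][j - 1] + W_INS,      # insert t[j-1]
                          D[i - 1][j - 1] + cost)   # substitute
    return D[m][n]

print(struct_distance("abcd", "abed"))  # 1.5: one weighted substitution

Changing the operation weights reshapes the geometry of the symbolic space, which is one sense in which the numeric structure rides on top of, rather than merges with, the symbolic one.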
From rybaki at eplrx7.es.dupont.com Sat Mar 30 06:59:47 1996
From: rybaki at eplrx7.es.dupont.com (Ilya Rybak)
Date: Sat, 30 Mar 1996 06:59:47 -0500
Subject: shift invariance
Message-ID: <199603301159.GAA13150@davinci>

Dear Connectionists,

I followed the discussion about shift invariance, which was recently on this list, with great interest. The question is still open, but Lev Goldfarb's point of view looks more plausible (at least to me). Of course, human vision is able to recognize images invariantly to shift, scale and (possibly) rotation. However, this does not mean that the property results directly from the corresponding property of some perceptron-like neural network. Invariant recognition in human vision is related to much more complex processes, and probably cannot be understood within the limited frame of neural computation, without taking into account attention mechanisms and psychological and behavioral aspects of visual perception and recognition.

Anyway, using a kind of behavioral approach, we have tried to build a model of the visual system without any invariant properties in neural networks. The model is called "BMV: Behavioral model of active visual perception and invariant recognition". BMV is able to recognize complex gray-level images (e.g. faces) invariantly to any 2D transformations (shift, rotation and scale). The descriptions of our approach and the BMV model, as well as our DEMO for DOS, are now available on the WWW. The URL is http://www.voicenet.com/~rybak/vnc.html

Have a look; you may find it interesting in the context of the shift-invariance discussion. Any feedback is welcome.

Ilya Rybak
DuPont Central Research
rybaki at eplrx7.es.duPont.com

From rsun at cs.ua.edu Sat Mar 30 10:21:00 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Sat, 30 Mar 1996 09:21:00 -0600
Subject: What is a hybrid model?
Message-ID: <9603301521.AA18989@athos.cs.ua.edu>

This discussion has been interesting. But I think enough is enough. So I will shut up after this message.

lev goldfarb wrote:
>The labels we use betray our ignorance and prejudices and mislead many
>ignorant followers.
I just don't think the scientific community can in any way be characterized as being ``ignorant" (or as ``ignorant followers"). Furthermore, I am sure that people can see beyond labels, and get to the real issues, ideas, and techniques that we have been developing (which is what we should be focusing on instead of labels).

>An outstanding group of French mathematicians, who took the pseudonym of
>Nicolas Bourbaki, contributed significantly to the popularization of the
>........

great.

>It appears that THE ONLY NATURAL WAY to accomplish the "symbiosis" in this
>case is to associate with each symbolic operation a weight and to
>introduce into the corresponding set of symbolic objects, or structs, the
>distance measure that takes into consideration the operation weights. The
>distance is defined by means of the sequences of weighted operations, i.e.
>NUMBERS enter the mathematical structure through the DISTANCES DEFINED ON
>the basic objects, STRUCTS. I'm quite confident that the new structure
>thus defined is (in a well-defined sense) more general than any of the
>classical numeric structures, and hence cannot be "isomorphic" to any of
>them.

I am sure this is nice. But I can't see that this can solve all the problems. All that I am advocating here is some kind of ``pluralism": we need to try different approaches and methods. It is not obvious yet which method is THE best. Maybe different methods are good for solving different problems; this is true not just in engineering, but also in SCIENCE (e.g. Physics). It would be inappropriate to dismiss all the other ideas in favor of one. Finally, there is always a danger of simplistic OVERgeneralization, which could be misleading, even though I don't share the view of people being ``ignorant".

Regards,
--Ron

========================================================================
Dr. Ron Sun http://cs.ua.edu/faculty/sun/sun.html
101 K Houser Hall ftp://aramis.cs.ua.edu/pub/tech-reports/
Department of Computer Science phone: (205) 348-6363
The University of Alabama fax: (205) 348-0219
Tuscaloosa, AL 35487 email: rsun at cs.ua.edu
========================================================================

From shrager at neurocog.lrdc.pitt.edu Sat Mar 30 13:05:24 1996
From: shrager at neurocog.lrdc.pitt.edu (Jeff Shrager)
Date: Sat, 30 Mar 1996 13:05:24 -0500 (EST)
Subject: What is a "hybrid" model?
In-Reply-To:
Message-ID:

I need to clarify something regarding the paper I mentioned in my note to Ron about hybrid models. I made a joke at the end of my note which seems to have gone over everyone's heads. The paper that I referred to:

Siegler, R. S., & Shrager, J. (1984). Strategy choices in addition and subtraction: How do children know what to do? In C. Sophian (Ed.), Origins of Cognitive Skills. Hillsdale, NJ: Lawrence Erlbaum Associates. 229-294.

is NOT a forthcoming TR; it's an *old* *published* paper. Since it's old I don't have it online and can't send it to you in postscript format, nor put it into the archive. However, since it's published in a somewhat obscure place I'm happy to send it to you, but you need to send me your hardcopy address! (I'd really appreciate it, though, if you'd try just once to find the book in your local library first, since I have to do the work of both photocopying the papers and mailing them out.)

Thanks very much.

Cheers,
Jeff

(The joke was that I played on Ron's "forthcoming TR" and called our paper, above, a "forthcame-TR." I'm sorry; I realize now that this was way too subtle for email correspondence.)
-------------------------------------------------------------------------------
The CONNECTIONISTS Archive:
---------------------------
All e-mail messages sent to "Connectionists at cs.cmu.edu" starting 27-Feb-88 are now available for public perusal. A separate file exists for each month. The files' names are:

arch.yymm

where yymm stand for the obvious thing. Thus the earliest available data are in the file:

arch.8802

Files ending with .Z are compressed using the standard unix compress program. To browse through these files (as well as through other files, see below) you must FTP them to your local machine. The file "current" in the same directory contains the archives for the current month.
-------------------------------------------------------------------------------
How to FTP Files from the CONNECTIONISTS Archive
------------------------------------------------
1. Open an FTP connection to host B.GP.CS.CMU.EDU
2. Login as user anonymous with password your username.
3. 'cd' directly to the following directory:
/afs/cs/project/connect/connect-archives
The archive directory is the ONLY one you can access. You can't even find out whether any other directories exist. If you are using the 'cd' command you must cd DIRECTLY into this directory.
Problems? - contact us at "Connectionists-Request at cs.cmu.edu".
-------------------------------------------------------------------------------
Using Mosaic and the World Wide Web
-----------------------------------
You can also access these files using the following url:
http://www.cs.cmu.edu:8001/afs/cs/project/connect/connect-archives
----------------------------------------------------------------------
The NEUROPROSE Archive
----------------------
Anonymous FTP on archive.cis.ohio-state.edu (128.146.8.52) pub/neuroprose directory

This directory contains technical reports as a public service to the connectionist and neural network scientific community which has an organized mailing list (for info: connectionists-request at cs.cmu.edu)

Researchers may place electronic versions of their preprints in this directory, announce availability, and other interested researchers can rapidly retrieve and print the postscripts. This saves copying, postage and handling, by having the interested reader supply the paper. We strongly discourage the merger into the repository of existing bodies of work or the use of this medium as a vanity press for papers which are not of publication quality.

PLACING A FILE
To place a file, put it in the Inbox subdirectory, and send mail to pollack at cis.ohio-state.edu. Within a couple of days, I will move and protect it, and suggest a different name if necessary. Current naming convention is author.title.filetype.Z where title is just enough to discriminate among the files of the same author. The filetype is usually "ps" for postscript, our desired universal printing format, but may be tex, which requires more local software than a spooler. The Z indicates that the file has been compressed by the standard unix "compress" utility, which results in the .Z affix. To place or retrieve .Z files, make sure to issue the FTP command "BINARY" before transferring files. After retrieval, call the standard unix "uncompress" utility, which removes the .Z affix. An example of placing a file is in the appendix.

Make sure your paper is single-spaced, so as to save paper, and include an INDEX Entry, consisting of 1) the filename, 2) the email contact for problems, 3) the number of pages and 4) a one-sentence description.
See the INDEX file for examples.

ANNOUNCING YOUR PAPER
It is the author's responsibility to invite other researchers to make copies of their paper. Before announcing, have a friend at another institution retrieve and print the file, so as to avoid easily found local postscript library errors. And let the community know how many pages to expect on their printer. Finally, information about where the paper will/might appear is appropriate inside the paper as well as in the announcement.

In the subject line of your mail message, rather than "paper available via FTP," please indicate the subject or title, e.g. "paper available: Solving Towers of Hanoi with ART-4". Please add two lines to your mail header, or the top of your message, so as to facilitate the development of mailer scripts and macros which can automatically retrieve files from both NEUROPROSE and other lab-specific repositories:

FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/filename.ps.Z

When you announce a paper, you should consider whether (A) you want it automatically forwarded to other groups, like NEURON-DIGEST (which gets posted to comp.ai.neural-networks), and if you want to provide (B) free or (C) prepaid hard copies for those unable to use FTP. To prevent forwarding, place a "**DO NOT FORWARD TO OTHER GROUPS**" at the top of your file. If you do offer hard copies, be prepared for a high cost. One author reported that when they allowed combination AB, the rattling around of their "free paper offer" on the worldwide data net generated over 2000 hardcopy requests!

A shell script called Getps, written by Tony Plate, is in the directory, and can perform the necessary retrieval operations, given the file name. Functions for GNU Emacs RMAIL and other mailing systems will also be posted as they are debugged and become available. At any time, for any reason, the author may request their paper be updated or removed.

For further questions contact:
Jordan Pollack
Associate Professor
Computer Science Department
Center for Complex Systems
Brandeis University
Waltham, MA 02254
Phone: (617) 736-2713/* to fax
email: pollack at cs.brandeis.edu

APPENDIX: Here is an example of naming and placing a file:

unix> compress myname.title.ps
unix> ftp archive.cis.ohio-state.edu
Connected to archive.cis.ohio-state.edu.
220 archive.cis.ohio-state.edu FTP server ready.
Name: anonymous
331 Guest login ok, send ident as password.
Password:neuron
230 Guest login ok, access restrictions apply.
ftp> binary
200 Type set to I.
ftp> cd pub/neuroprose/Inbox
250 CWD command successful.
ftp> put myname.title.ps.Z
200 PORT command successful.
150 Opening BINARY mode data connection for myname.title.ps.Z
226 Transfer complete.
100000 bytes sent in 1.414 seconds
ftp> quit
221 Goodbye.
unix> mail pollack at cis.ohio-state.edu
Subject: file in Inbox.
Jordan, I just placed the file myname.title.ps.Z in the Inbox.
Here is the INDEX entry:
myname.title.ps.Z mylogin at my.email.address 12 pages. A random paper which everyone will want to read
Let me know when it is in place so I can announce it to Connectionists at cmu.
^D AFTER RECEIVING THE GO-AHEAD, AND HAVING A FRIEND TEST RETRIEVE THE FILE, HE DOES THE FOLLOWING: unix> mail connectionists Subject: TR announcement: Born Again Perceptrons FTP-host: archive.cis.ohio-state.edu FTP-file: pub/neuroprose/myname.title.ps.Z The file myname.title.ps.Z is now available for copying from the Neuroprose repository: Random Paper (12 pages) Somebody Somewhere Cornell University ABSTRACT: In this unpublishable paper, I generate another alternative to the back-propagation algorithm which performs 50% better on learning the exclusive-or problem. ~r.signature ^D ------------------------------------------------------------------------ How to FTP Files from the NN-Bench Collection --------------------------------------------- 1. Create an FTP connection from wherever you are to machine "ftp.cs.cmu.edu" (128.2.254.155). 2. Log in as user "anonymous" with password your username. 3. Change remote directory to "/afs/cs/project/connect/bench". Any subdirectories of this one should also be accessible. Parent directories should not be. Another valid directory is "/afs/cs/project/connect/code", where we store various supported and unsupported neural network simulators and related software. 4. At this point FTP should be able to get a listing of files in this directory and fetch the ones you want. Problems? - contact us at "neural-bench at cs.cmu.edu".  From joachim at fit.qut.edu.au Fri Mar 1 02:14:05 1996 From: joachim at fit.qut.edu.au (Joachim Diederich) Date: Fri, 1 Mar 1996 17:14:05 +1000 (EST) Subject: Postdoctoral Fellowships Message-ID: <199603010714.RAA18553@aldebaran.fit.qut.edu.au> POSTDOCTORAL RESEARCH FELLOWSHIPS NEUROCOMPUTING RESEARCH CENTRE QUEENSLAND UNIVERSITY OF TECHNOLOGY BRISBANE, AUSTRALIA QUT-NRC invites applications from qualified academics for a limited number of QUT Postdoctoral Fellowships. These fellowships are available to researchers with less than five years full-time professional experience since being awarded their PhD. The duration of the fellowship is between nine months and two years. Applications from researchers with a background in Computational Learning Theory or Hybrid Artificial Intelligence/Neurocomputing Systems are especially welcomed. The salary is A$37,345 to A$40,087 pa, depending on qualifications and experience. Before submitting an application, intending applicants must contact the Neurocomputing Research Centre. Only applications strongly supported by a QUT research centre will be considered by the university. Applications should reach the Human Resources Director QUT Locked Bag 2 Red Hill QLD 4059 by Friday 29 March 1996. Please direct enquiries to: Prof Joachim Diederich Neurocomputing Research Centre Queensland University of Technology Box 2434 Brisbane Q 4001 AUSTRALIA Phone: +61 7 3864-2143 Fax: +61 7 3864-1801 e-mail: joachim at fit.qut.edu.au  From rolf at cs.rug.nl Fri Mar 1 06:53:04 1996 From: rolf at cs.rug.nl (rolf@cs.rug.nl) Date: Fri, 1 Mar 1996 12:53:04 +0100 Subject: Learning shift invariance Message-ID: Dear connectionists, first of all, thanks to Laurenz Wiskott and Jerry Feldman for arranging the arguments and thus giving the discussion a proper fundament. My view on the matter is the following. The (to me) most interesting part is the generalizing ability which Laurenz has named 4b. I would define the challenge for a neural net to learn shift invariance as follows. There are N patterns and P positions. 
Beginning from tabula rasa, the network is presented ONE pattern in ALL possible positions to learn shift invariance. For practical reasons, more than one pattern may be required, but I would insist that shift invariance has to be learned from a small subset of the possible patterns. After having learned shift invariance that way, the network should be able to learn new patterns at a SINGLE position and then recognize them in an invariant way in ANY position. Again, I would allow a small number of positions. I grant that the network is NOW a structured one. That is what I would call a satisfactory solution to the problem of learning shift invariance.

The network in Geoffrey Hinton's paper does a good job, but it fails to meet this requirement. His parameters are N=16, P=12. Every pattern is trained at 10 (random) positions. So the number of training examples is 0.83*P*N, and the number of test examples to which the network generalizes is 0.17*P*N. This gets a little awkward for larger values of N and P. The task as outlined above would allow only s*(P+N-1) training examples, where s is the `small number'. Something like 3 should be appropriate, 1 desirable. Then the network should generalize and recognize all P*N examples correctly. Note that there is no objection to the choice of parameters in the paper, but to the scaling behavior for larger parameters. The network must have seen the patterns in almost all possible positions to do the generalization.

As far as I have followed the discussion, the goal of an O(P+N) dependence of the training set size has not been reached yet. I see 3 possibilities for settling the issue:
1) Construct a network that solves the problem as outlined above.
2) Prove that it cannot be done.
3) Prove (experimentally) that visual perception cannot solve the problem.

I am very interested in any progress in one of these directions, and I am looking forward to the further course of this discussion.

Rolf

+----------------------------------------------------------------------------+
| Rolf P. W"urtz | mailto: rolf at cs.rug.nl | URL: http://www.cs.rug.nl/~rolf/ |
| Department of Computing Science, University of Groningen, The Netherlands |
+----------------------------------------------------------------------------+
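Rolf's training-set accounting is easy to state in code. The sketch below (Python; the toy binary patterns and the circular shift as the position transform are assumptions for illustration) builds exactly the two-stage training set he proposes and compares its size with the full P*N test set:

import numpy as np

N, P, s = 16, 12, 3            # patterns, positions, the `small number'
rng = np.random.default_rng(0)
patterns = rng.integers(0, 2, size=(N, P))  # toy binary patterns

def shifted(pattern, pos):
    return np.roll(pattern, pos)            # circular shift as "position"

# Stage 1: s patterns are shown at ALL P positions (learning the invariance).
stage1 = [(shifted(patterns[i], pos), i) for i in range(s) for pos in range(P)]
# Stage 2: each remaining pattern is shown at only s positions.
stage2 = [(shifted(patterns[i], pos), i) for i in range(s, N) for pos in range(s)]

train = stage1 + stage2        # s*P + (N-s)*s examples, i.e. O(s*(P+N))
test = [(shifted(patterns[i], pos), i) for i in range(N) for pos in range(P)]
print(len(train), len(test))   # 75 vs 192 for N=16, P=12, s=3

The open question is precisely whether any network can learn from the 75 examples and generalize correctly to all 192.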
From dwang at cis.ohio-state.edu Fri Mar 1 17:56:13 1996
From: dwang at cis.ohio-state.edu (DeLiang Wang)
Date: Fri, 1 Mar 1996 17:56:13 -0500 (EST)
Subject: shift invariance
Message-ID: <199603012256.RAA25712@shirt.cis.ohio-state.edu>

Jerry Feldman writes:

>2) Understanding how the visual system achieves shift invariance.
>
> This thread has been non-argumentative. The problem of invariances and
>constancies in the visual system remains central in visual science. I can't
>think of any useful message-sized summary, but this is an area where
>connectionist models should play a crucial role in expressing and testing
>theories. But, as several people have pointed out, we can't expect much from
>tabula rasa learning.

I'd like to know the evidence that the visual system achieves shift (translation) invariance (I'd appreciate references if any). It seems that the eye "focuses" on the object of interest first. In other words, the eye seems to shift with the object, not that the visual system is recognizing the object wherever it occurs on the retina. There seem to be problems with a system that DOES recognize an object no matter where it occurs, when the system faces more than one object, as we do all the time.

> The unlearnability of shift invariance is not a problem in practice because
>people use preprocessing, weight sharing or other techniques to get shift
>invariance where it is known to be needed. However, it does pose a problem for
>the brain and for theories that are overly dependent on learning.

Why does it pose a problem to the brain? Perhaps the brain is doing what's regarded as "preprocessing" (a black hole containing many "troubling" things). I do agree that there are limits to tabula rasa learning. The reason that we can learn the things we do is, perhaps, critically linked to the prewiring of our brain. We know that we have a lot of difficulty in training a chimpanzee's brain to learn our language, let alone 3-layer perceptrons with backprop.

DeLiang Wang

From biehl at Physik.Uni-Wuerzburg.DE Fri Mar 1 06:36:58 1996
From: biehl at Physik.Uni-Wuerzburg.DE (Michael Biehl)
Date: Fri, 1 Mar 1996 12:36:58 +0100 (MEZ)
Subject: paper: dynamics of learning in two-layered networks
Message-ID: <199603011136.MAA03721@wptx08.physik.uni-wuerzburg.de>

FTP-host: ftp.physik.uni-wuerzburg.de
FTP-filename: /pub/preprint/WUE-ITP-96-003.ps.gz

The following paper is now available via anonymous ftp: (See below for the retrieval procedure)

------------------------------------------------------------------
"Transient dynamics of on-line learning in two-layered neural networks"
Michael Biehl, Peter Riegler, and Christian W"ohler
Ref. WUE-ITP-96-003

Abstract
The dynamics of on-line learning in neural networks with continuous units is dominated by plateaus in the time dependence of the generalization error. Using tools from statistical mechanics, we show for a soft committee machine the existence of several fixed points of the dynamics of learning that give rise to complicated behavior, such as cascade-like runs through different plateaus with a decreasing value of the corresponding generalization error. We find learning-rate dependent phenomena, such as splitting and disappearing of fixed points of the equations of motion. The dependence of plateau lengths on the initial conditions is described analytically, and simulations confirm the results.
---------------------------------------------------------------------

Retrieval procedure:
unix> ftp ftp.physik.uni-wuerzburg.de
Name: anonymous
Password: {your e-mail address}
ftp> cd pub/preprint
ftp> get WUE-ITP-96-003.ps.gz (*)
ftp> quit
unix> gunzip WUE-ITP-96-003.ps.gz
e.g. unix> lp WUE-ITP-96-003.ps [15 pages]
(*) can be replaced by "get WUE-ITP-96-003.ps". The file will then be uncompressed before transmission (slow!).
_____________________________________________________________________
Michael Biehl
Institut fuer Theoretische Physik
Julius-Maximilians-Universitaet Wuerzburg
Am Hubland
D-97074 Wuerzburg
email: biehl at physik.uni-wuerzburg.de
Tel.: (+49) (0)931 888 5865
" " " 5131
Fax : (+49) (0)931 888 5141

From ingber at ingber.com Sun Mar 3 03:43:36 1996
From: ingber at ingber.com (Lester Ingber)
Date: Sun, 3 Mar 1996 00:43:36 -0800
Subject: Papers: Canonical Momenta of Financial Markets and Neocortical EEG
Message-ID: <199603030843.AAA06118@shellx.best.com>

Papers: Canonical Momenta of Financial Markets and Neocortical EEG

The following two preprints are available.

markets96_momenta.ps.Z [45K]
%A L.
Ingber %T Canonical momenta indicators of financial markets and neocortical EEG %B International Conference on Neural Information Processing (ICONIP'96) %I Springer %C New York %D 1996 %O This is an invited paper to the 1996 International Conference on Neural Information Processing (ICONIP'96), Hong Kong, 24-27 September 1996. URL http://www.ingber.com/markets96_momenta.ps.Z A paradigm of statistical mechanics of financial markets (SMFM) is fit to multivariate financial markets using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and used as technical indicators in a recursive ASA optimization process to tune trading rules. These trading rules are then used on out-of-sample data, to demonstrate that they can profit from the SMFM model, to illustrate that these markets are likely not efficient. This methodology can be extended to other systems, e.g., electroencephalography. smni96_momenta.ps.Z [45K] %A L. Ingber %T Canonical momenta indicators of neocortical EEG %B Physics Computing 96 (PC96) %I PC96 %C Krakow, Poland %D 1996 %O This is an invited paper to Physics Computing 96 (PC96), Krakow, Poland, 17-21 September 1996. URL http://www.ingber.com/smni96_momenta.ps.Z A model of statistical mechanics of neocortical interactions (SMNI) has been fit to EEG data using Adaptive Simulated Annealing (ASA), a global optimization algorithm, to perform maximum likelihood fits of Lagrangians defined by path integrals of multivariate conditional probabilities. Canonical momenta are thereby derived and can be used as technical indicators in a recursive ASA optimization process to optimize clinician rules. This methodology has been applied to financial markets. This archive also contains the most recent version 12.10 of Adaptive Simulated Annealing (ASA) %A L. Ingber %T Adaptive Simulated Annealing (ASA) %R [http://www.ingber.com/ASA-shar, ASA-shar.Z, ASA.tar.Z, ASA.tar.gz, ASA.zip] %I Lester Ingber Research %C McLean, VA %D 1993 ASA is one of the most powerful optimization algorithms for nonlinear and stochastic systems, and is being used recursively in the above two projects. Please note that this archive recently has been moved to its present location from http://www.alumni.caltech.edu/~ingber/ and ftp.alumni.caltech.edu:/pub/ingber. Pointers to the new location will be found in the old location. ======================================================================== Instructions for Retrieval of Code and Reprints Interactively Via WWW The archive can be accessed via WWW path http://www.ingber.com/ Interactively Via Anonymous FTP Code and reprints can be retrieved via anonymous ftp from ftp.ingber.com. Interactively [brackets signify machine prompts]: [your_machine%] ftp ftp.ingber.com [Name (...):] anonymous [Password:] your_e-mail_address [ftp>] binary [ftp>] ls [ftp>] get file_of_interest [ftp>] quit The 00index file contains an index of the other files. Files have the same WWW and FTP paths under directory ingber.com; i.e., http://www.ingber.com/dir/file and ftp://ftp.ingber.com/dir/file reference the same file. Electronic Mail If you do not have ftp access, get information on the FTPmail service by: mail ftpmail at decwrl.dec.com, and send only the word "help" in the body of the message. Additional Information Sorry, I cannot assume the task of mailing out hardcopies of code or papers. 
Limited help assisting people with their queries on my codes and papers is available only by electronic mail correspondence.

Lester

========================================================================
/* RESEARCH ingber at ingber.com *
* INGBER ftp://ftp.ingber.com *
* LESTER http://www.ingber.com/ *
* Dr. Lester Ingber _ P.O. Box 857 _ McLean, VA 22101 _ 1.800.L.INGBER */

From hermann at impa.br Sun Mar 3 10:12:50 1996
From: hermann at impa.br (Hermann Von Hasseln)
Date: Sun, 3 Mar 1996 12:12:50 -0300
Subject: TR announcement (IPF for conditionals)
Message-ID: <199603031512.MAA04517@Gauss.impa.br>

In connection with the recent announcement of Padhraic Smyth et al. ("Probabilistic Independence Networks for Hidden Markov Probability Models") I'd like to announce the following technical report, which might be of interest to you:

AN IPF PROCEDURE FOR MIXED GRAPHICAL MODELS
Hermann von Hasseln
IMPA Instituto de Matem\'atica Pura e Aplicada
Rio de Janeiro, Brazil

Abstract
We introduce a variant of the well-known iterative proportional fitting (IPF) procedure. Whereas the traditional IPF procedure uses a given set of marginal probabilities as constraints that have to be satisfied in each iteration, we show that this can also be done with a given set of compatible conditional probabilities. In the case of compatible conditionals, convergence is guaranteed by a theorem by Csisz\'ar. We also define a ``mixed'' version of IPF procedures, where the set of constraints is given by a mixed set of marginal and conditional probabilities.

Keywords: Iterative proportional fitting, maximum likelihood estimation, graphical models, maximum entropy, minimum discrimination information, conditionally specified distributions.

To obtain a copy of this report, please send your email request to hermann at impa.br

Comments are welcome.

Hermann von Hasseln
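For readers who have not met it, the classical marginal-constrained IPF iteration that the report generalizes to conditional constraints looks like this in the two-way case (a minimal sketch in Python):

import numpy as np

def ipf(table, row_marg, col_marg, iters=100, eps=1e-12):
    # Alternately rescale rows and columns so the table matches the given
    # marginals; the limit is the maximum-entropy / ML fit.
    T = table.astype(float).copy()
    for _ in range(iters):
        T *= (row_marg / (T.sum(axis=1) + eps))[:, None]
        T *= (col_marg / (T.sum(axis=0) + eps))[None, :]
    return T

fit = ipf(np.ones((2, 2)), np.array([0.3, 0.7]), np.array([0.6, 0.4]))
print(fit)  # row sums [0.3, 0.7], column sums [0.6, 0.4]

The report's variant replaces (or mixes) the marginal constraints with compatible conditional ones.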
Pattern Recognition Lett. 0.381 Terry -----  From SAMY at gmr.com Sun Mar 3 21:52:41 1996 From: SAMY at gmr.com (R. Uthurusamy) Date: Sun, 03 Mar 1996 21:52:41 -0500 (EST) Subject: New Book: Advances in Knowledge Discovery and Data Mining Message-ID: <01I1X119VLS68ZEZKM@gmr.com> New Book Announcement: Advances in Knowledge Discovery and Data Mining ----------------------------------------------- Edited by Usama M. Fayyad, Gregory Piatetsky-Shapiro, Padhraic Smyth, and Ramasamy Uthurusamy Published by the AAAI Press / The MIT Press ISBN 0-262-56097-6 March 1996 625 pp. Price: $ 50.00 This book can be ordered online from The MIT Press: http://mitpress.mit.edu/ More info at: http://www-mitpress.mit.edu/mitp/recent-books/comp/fayap.html http://www.aaai.org/Publications/Press/Catalog/fayyad.html (This AAAI website also has abstracts of chapters) ---------------------------------------------------------------------------- "Advances in Knowledge Discovery and Data Mining" brings together the latest research -- in statistics, databases, machine learning, and artificial intelligence -- that are part of the exciting and rapidly growing field of Knowledge Discovery and Data Mining. Topics covered include fundamental issues, classification and clustering, trend and deviation analysis, dependency modeling, integrated discovery systems, next generation database systems, and application case studies. The contributors include leading researchers and practitioners from academia, government laboratories, and private industry. The last decade has seen an explosive growth in the generation and collection of data. Advances in data collection, widespread use of bar codes for most commercial products, and the computerization of many business and government transactions have flooded us with data and generated an urgent need for new techniques and tools that can intelligently and automatically assist in transforming this data into useful knowledge. This book is a timely and comprehensive overview of the new generation of techniques and tools for knowledge discovery in data. ---------------------------------------------------------------------------- Contents -------- Foreword: On the Barriers and Future of Knowledge Discovery / vii Gio Wiederhold Preface / xiii Chapter 1: From Data Mining to Knowledge Discovery: An Overview / 1 Usama M. Fayyad, Gregory Piatetsky-Shapiro, and Padhraic Smyth Part I: Foundations Chapter 2: The Process of Knowledge Discovery in Databases: A Human-Centered Approach Ronald J. Brachman and Tej Anand / 37 Chapter 3: Graphical Models for Discovering Knowledge Wray Buntine / 59 Chapter 4: A Statistical Perspective on Knowledge Discovery in Databases John Elder IV and Daryl Pregibon / 83 Part II Classification and Clustering Chapter 5: Inductive Logic Programming and Knowledge Discovery in Databases Saso Dzeroski / 117 Chapter 6: Bayesian Classification (AutoClass): Theory and Results Peter Cheeseman and John Stutz / 153 Chapter 7: Discovering Informative Patterns and Data Cleaning Isabelle Guyon, Nada Matic, and Vladimir Vapnik / 181 Chapter 8: Transforming Rules and Trees into Comprehensible Knowledge Structures Brian R. Gaines / 205 Part III Trend and Deviation Analysis Chapter 9: Finding Patterns in Time Series: A Dynamic Programming Approach Donald J. 
Chapter 9: Finding Patterns in Time Series: A Dynamic Programming Approach / Donald J. Berndt and James Clifford / 229
Chapter 10: Explora: A Multipattern and Multistrategy Discovery Assistant / Willi Kloesgen / 249

Part IV: Dependency Derivation
Chapter 11: Bayesian Networks for Knowledge Discovery / David Heckerman / 273
Chapter 12: Fast Discovery of Association Rules / Rakesh Agrawal, Heikki Mannila, Ramakrishnan Srikant, Hannu Toivonen, and A. Inkeri Verkamo / 307
Chapter 13: From Contingency Tables to Various Forms of Knowledge in Databases / Robert Zembowicz and Jan M. Zytkow / 329

Part V: Integrated Discovery Systems
Chapter 14: Integrating Inductive and Deductive Reasoning for Data Mining / Evangelos Simoudis, Brian Livezey, and Randy Kerber / 353
Chapter 15: Metaqueries for Data Mining / Wei-Min Shen, KayLiang Ong, Bharat Mitbander, and Carlo Zaniolo / 375
Chapter 16: Exploration of the Power of Attribute-Oriented Induction in Data Mining / Jiawei Han and Yongjian Fu / 399

Part VI: Next Generation Database Systems
Chapter 17: Using Inductive Learning To Generate Rules for Semantic Query Optimization / Chun-Nan Hsu and Craig A. Knoblock / 425
Chapter 18: Data Surveyor: Searching the Nuggets in Parallel / Marcel Holsheimer, Martin L. Kersten, and Arno P.J.M. Siebes / 447

Part VII: KDD Applications
Chapter 19: Automating the Analysis and Cataloging of Sky Surveys / Usama M. Fayyad, S. George Djorgovski, and Nicholas Weir / 471
Chapter 20: Selecting and Reporting What is Interesting: The KEFIR Application to Healthcare Data / Christopher J. Matheus, Gregory Piatetsky-Shapiro, and Dwight McNeill / 495
Chapter 21: Modeling Subjective Uncertainty in Image Annotation / Padhraic Smyth, Usama M. Fayyad, Michael C. Burl, and Pietro Perona / 517
Chapter 22: Predicting Equity Returns from Securities Data with Minimal Rule Generation / Chidanand Apte and Se June Hong / 541
Chapter 23: From Data Mining to Knowledge Discovery: Current Challenges and Future Directions / Ramasamy Uthurusamy / 561

Part VIII: Appendices
Knowledge Discovery in Databases Terminology / Willi Kloesgen and Jan M. Zytkow / 573
Data Mining and Knowledge Discovery Internet Resources / Gregory Piatetsky-Shapiro / 593
About The Editors / 597
Index / 601
----------------------------------------------------------------------------
For Additional Information contact:
American Association for Artificial Intelligence (AAAI)
445 Burgess Drive, Menlo Park, California 94025-3496 USA
Telephone: 415-328-3123 / Fax: 415-321-4457 / Email: info at aaai.org
----------------------------------------------------------------------------

From ZECCHINA at to.infn.it Mon Mar 4 07:19:33 1996
From: ZECCHINA at to.infn.it (Riccardo Zecchina - tel.11-5647358, fax. 11-5647399)
Date: Mon, 4 Mar 1996 13:19:33 +0100 (MET)
Subject: paper: Learning and Generalization in Large Committee-Machines
Message-ID: <960304131933.60400dc4@to.infn.it>

FTP-host: archive.cis.ohio-state.edu

The following paper is now available for copying from FTP-filename: /pub/neuroprose/zecchina.committee.ps.Z

Title: LEARNING AND GENERALIZATION THEORIES OF LARGE COMMITTEE-MACHINES
Authors: Remi Monasson and Riccardo Zecchina
To appear in Int. J. Mod. Phys. B.

Abstract: The study of the distribution of volumes associated with the internal representations of learning examples allows us to derive the critical learning capacity ($\alpha_c=\frac{16}{\pi} \sqrt{\ln K}$) of large committee machines, to verify the stability of the solution in the limit of a large number $K$ of hidden units and to find a Bayesian generalization cross--over at $\alpha=K$.
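For readers outside the statistical-mechanics literature, the architecture under study is simple to state: each of $K$ hidden perceptrons votes with the sign of its weighted input, and the machine outputs the sign of the majority. A minimal sketch in Python (my own illustration, assuming the tree variant in which each hidden unit sees its own non-overlapping patch of the input):

import numpy as np

def committee_machine(x, W):
    # W has one row of weights per hidden unit; the tree architecture
    # assigns each unit its own non-overlapping patch of the input x.
    K, patch = W.shape
    votes = np.sign((W * x.reshape(K, patch)).sum(axis=1))
    return np.sign(votes.sum())  # majority vote of the K hidden units

# Example: K = 9 hidden units, 11 inputs each.
rng = np.random.default_rng(0)
print(committee_machine(rng.standard_normal(99), rng.standard_normal((9, 11))))

The capacity result quoted in the abstract says that the number of random examples such a machine can store grows as $\alpha_c N$, with $N$ the total number of weights and $\alpha_c = \frac{16}{\pi}\sqrt{\ln K}$.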
Retrieving instructions:

unix> ftp archive.cis.ohio-state.edu
login: anonymous
passwd: (your email address)
ftp> cd /pub/neuroprose
ftp> binary
ftp> get zecchina.committee.ps.Z
ftp> quit
unix> uncompress zecchina.committee.ps.Z

E-mail: zecchina at to.infn.it

From orsier at cui.unige.ch Mon Mar 4 09:08:00 1996
From: orsier at cui.unige.ch (Orsier Bruno)
Date: Mon, 4 Mar 1996 15:08:00 +0100
Subject: TR+software available - finding global minima
Message-ID: <943*/S=orsier/OU=cui/O=unige/PRMD=switch/ADMD=400net/C=ch/@MHS>

"Another hybrid algorithm for finding a global minimum of MLP error functions"
Technical Report UNIGE-AI-95-6
Bruno ORSIER, CUI, University of Geneva --- orsier at cui.unige.ch

ABSTRACT: This report presents \pstar, a new global optimization method for training multilayered perceptrons. Instead of local minima, global minima of the error function are found. This new method is hybrid in the sense that it combines three very different optimization techniques: Random Line Search, Scaled Conjugate Gradient and a 1-dimensional minimization algorithm named P$^*$. The best points of each component are retained by the hybrid method: the simplicity of Random Line Search, the efficiency of Scaled Conjugate Gradient, and the efficiency and convergence toward a global minimum of P$^*$. \pstar\ is empirically shown to perform better or much better than three other global random optimization methods and a global deterministic optimization method.

Retrieval: http://cuiwww.unige.ch/AI-group/staff/orsier.html

\pstar\ and its test problems are available for users of the Stuttgart Neural Network Simulator. See http://cuiwww.unige.ch/AI-group/staff/orsier.html for details.

Best regards,
Bruno Orsier
E-mail: orsier at cui.unige.ch
University of Geneva
WWW: http://cuiwww.unige.ch/AI-group/staff/orsier.html

From nmg at skivs.ski.org Mon Mar 4 14:03:09 1996
From: nmg at skivs.ski.org (Norberto Grzywacz)
Date: Mon, 4 Mar 1996 11:03:09 -0800 (PST)
Subject: shift invariance
In-Reply-To: <199603012256.RAA25712@shirt.cis.ohio-state.edu>
Message-ID:

On Fri, 1 Mar 1996, DeLiang Wang wrote:

> I'd like to know the evidence that the visual system achieves shift
> (translation) invariance (I'd appreciate references if any). It seems
> that the eye "focuses" on the object of interest first. In other
> words, the eye seems to shift with the object, not that the visual system is
> recognizing the object wherever it occurs on the retina.

A form of shift invariance appears to exist in cortical neurons of the anterior part of the superior temporal sulcus and of the inferior temporal cortex. Neurons in these areas have large receptive fields, which can show considerable selectivity for what the stimulus is irrespective of exactly where it is in the visual field. I would call this property "selectivity shift invariance," to contrast with "absolute shift invariance," which the cortex does not appear to have. The amplitude of cell responses varies (falls) with eccentricity, even though the cells maintain their selectivity. Moreover, the amplitude of the responses is modulated by the presence of other objects in the receptive fields. Three relevant references are:

Tovee, M.J., Rolls, E.T., and Azzopardi, P. (1994) Translation invariance in the responses to faces of single neurons in the temporal visual cortical areas of the alert macaque. J. Neurophysiol. 72:1049-1060.

Rolls, E.T. and Tovee, M.J.
(1995) The responses of single neurons in the temporal visual cortical areas of the macaque when more than one stimulus is present in the receptive field. Exp. Brain Res. 103:409-420. Ito, M., Tamura, H., Fujita, I., and Tanaka, K. (1995) Size and position invariance of neuronal responses in monkey inferotemporal cortex. J. Neurophysiol. 73:218-226. Norberto Grzywacz  From isri at gpg.com Mon Mar 4 20:11:35 1996 From: isri at gpg.com (isri@gpg.com) Date: Mon, 4 Mar 96 20:11:35 -0500 (EST) Subject: Neural Networks Symposium NNS'96 - part of ICICS'96 Message-ID: ------------------------------------------------------------------------------- First Call for Contributions NEURAL NETWORKS SYMPOSIUM - NNS'96 as part of 1996 International Conference on Intelligent and Cognitive Systems ICICS'96 Comprising three symposia on Neural Networks, Fuzzy Systems, and Cognitive Science Sept. 23-26, 1996 Intelligent Systems Research Institute Tehran, Iran http://www.gpg.com/isri ------------------------------------------------------------------------------- Scope: Papers are solicited for, but not limited to, the following areas: * Theoretical aspects of neural networks, including Spin glasses, Coding Theory ... * Learning algorithms * New architectures and topologies * Simulation environments for neural networks * Analysis and organization of knowledge in neural networks * Neuro-fuzzy algorithms * Applications of neural networks. Program Committee: Neural Networks Symposium M.H. Abassian, M.R. Hashemi Golpayegani, C. Lucas, A.R. Mirzai (Co-chair), B. Moshiri, S. Rouhani (Co-chair), N. Sadati, V. Tahani, M.H. Zand. ------------------------------------------------------------------------------- Contribution Procedure Scientific papers should report novel results and achievements in the field of neural computing. Tutorial and review papers will be acceptable only in exceptional circumstances. Product oriented papers will be presented in a special session. Prospective contributors are invited to submit an extended summary (500-1000 words) of their paper emphasizing the novel results of their theoretical or applied research, and including the title, author(s), affiliation(s), address, telephone, fax, E-mail(s), to Intelligent Systems Research Institute P.O. Box 19395-5746 Tehran, Iran E-mail: int_sys at rose.ipm.ac.ir Please also cite: ``Submitted for possible presentation at ... '' together with the name of the conference/symposium to which the paper is contributed For special sessions/exhibitions/product introductions, etc., the same procedure applies. Please cite: ``Proposal for ... in ...'' ------------------------------------------------------------------------------- Timetable Deadline for receiving summaries/proposals: 20 April 1996 Notification of acceptance: 20 June 1996 Receipt of the full text: 20 August 1996 (The Program Committee will review the manuscripts and return to the authors for appropriate action if any shortcoming is noticed. The responsibility will always belong to the author(s)). ------------------------------------------------------------------------------- Contact Information Intelligent Systems Research Institute P.O. 
Box 19395-5746
Tehran, Iran
E-mail: int_sys at rose.ipm.ac.ir
WWW: http://www.gpg.com/isri
-------------------------------------------------------------------------------

From postma at cs.rulimburg.nl Tue Mar 5 11:33:00 1996
From: postma at cs.rulimburg.nl (Eric Postma)
Date: Tue, 5 Mar 96 17:33:00 +0100
Subject: shift invariance
Message-ID: <9603051633.AA15499@bommel.cs.rulimburg.nl>

DeLiang Wang wrote

>I'd like to know the evidence that the visual system achieves shift
>(translation) invariance (I'd appreciate references if any).

Biederman and Cooper (1991) found that an object presented at one location of the retina facilitated recognition of that object at other locations. The visual system does not achieve perfect translation invariance, as shown by Nazir and O'Regan (1990).

Biederman, I. & Cooper, E.E. (1991). Evidence for complete translational and reflectional invariance in visual object priming. Perception, 20, 585-593.

Nazir, T.A. & O'Regan, J.K. (1990). Some results on translation invariance in the human visual system. Spatial Vision, 5, 81-100.

DeLiang Wang wrote

>It seems that the eye "focuses" on the object of interest first. In other
>words, the eye seems to shift with the object, not that the visual system is
>recognizing the object wherever it occurs on the retina.

...and...

>There seem to be problems with a system that DOES recognize an object no
>matter where it occurs, when the system faces more than an object as we
>confront all the time.

In addition to the selection of objects through direction of gaze (overt attention), there exists an attentional process which operates independently of eye movements. This process is known as covert attention and may be likened (to a certain extent) to a searchlight. When fixing your gaze on a single letter of this text, you may still be able to select and identify the adjacent letters and words. Inspired by Anderson and Van Essen's (1987) shifter circuits, we developed a scalable model of covert attention capable of translation-invariant pattern processing (Postma, van den Herik, and Hudson, 1994, 1996 submitted). Our model is similar to the model proposed by Olshausen, Anderson, and Van Essen (1993, 1995) and is based on the idea that attentional selection provides a solution to the problem of translation invariance and the problem of selecting (parts of) objects. The attentional searchlight selects parts of a scene and maps their contents into a pattern-recognition stage without affecting the spatial ordering of the selected pattern (a toy illustration of this routing step follows the references below).

Anderson, C.H. & Van Essen, D.C. (1987). Shifter circuits: A computational strategy for dynamic aspects of visual processing. {\em Proceedings of the National Academy of Sciences U.S.A.}, {\bf 84}, 6297-6301.

Olshausen, B.A., Anderson, C.H., & Van Essen, D.C. (1993). A neurobiological model of visual attention and invariant pattern recognition based on dynamic routing of information. {\em The Journal of Neuroscience}, {\bf 13}, 4700-4719.

Olshausen, B.A., Anderson, C.H., & Van Essen, D.C. (1995). A multiscale routing circuit for forming size- and position-invariant object representations. {\em The Journal of Computational Neuroscience}, {\bf 2}, 45-62.

Postma, E.O., Van den Herik, H.J., & Hudson, P.T.W. (1994). Attentional scanning. In A. Cohn (Ed.), {\em ECAI 94, 11th European Conference on Artificial Intelligence} (pp. 173-177). New York: John Wiley and Sons.

Postma, E.O., Van den Herik, H.J., & Hudson, P.T.W. (1996). SCAN: A scalable model of attentional selection. Submitted to Neural Networks.
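The routing idea is easy to demonstrate: select a window of the scene centered on the attended location, copy it into a canonical frame with its spatial order intact, and hand it to a fixed, position-dependent recognizer. A minimal sketch in Python (my own toy illustration, not the SCAN model itself; all names are invented):

import numpy as np

def route(scene, center, size):
    # Copy the attended window into a canonical frame, preserving
    # the spatial ordering of the selected pattern.
    r, c = center
    h = size // 2
    return scene[r - h:r + h + 1, c - h:c + h + 1]

def recognize(pattern, template):
    # A fixed matcher: its "weights" never move, yet the system as a
    # whole responds identically wherever the object appears.
    return float((pattern * template).sum())

scene = np.zeros((32, 32))
obj = np.arange(9.0).reshape(3, 3)
for center in [(5, 5), (20, 11)]:
    scene[:] = 0.0
    scene[center[0]-1:center[0]+2, center[1]-1:center[1]+2] = obj
    print(recognize(route(scene, center, 3), obj))  # same response twice

The hard part of the problem is, of course, hidden in the choice of `center`; supplying it is precisely the job of the attentional stage.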
Eric Postma
Computer Science Department
Faculty of General Sciences
University of Limburg
P.O. Box 616
6200 MD Maastricht
The Netherlands

From maja at garnet.cs.brandeis.edu Tue Mar 5 12:22:34 1996
From: maja at garnet.cs.brandeis.edu (Maja Mataric)
Date: Tue, 5 Mar 1996 12:22:34 -0500
Subject: MS & PhD program in AI, robotics, evolutionary comp, etc.
Message-ID: <199603051722.MAA24525@garnet.cs.brandeis.edu>

In May 1994 Brandeis University announced the opening of the new Volen National Center for Complex Systems with the goal of promoting interdisciplinary research and collaboration between faculty from Computer Science, Linguistics, Neuroscience, Psychology, Biology and Physics. The Center, whose main mission is to study cognitive science, brain theory, and advanced computation, has already earned accolades from scientists worldwide, and continues to expand.

Brandeis is located in Waltham, a suburb 10 miles west of Boston, with easy rail access to both Cambridge and downtown. Founded in 1948, it is recognized as one of the finest private liberal arts and science universities in the United States. Brandeis combines the breadth and range of academic programs usually found at much larger universities with the friendliness of a smaller and more focused research community.

The Computer Science Department is located in the Volen Center and is the home of four Artificial Intelligence faculty actively involved in the Center activities and collaborations: Rick Alterman, Maja Mataric, Jordan Pollack, and James Pustejovsky. In addition to SGI and HP workstations, the Dept owns a 4096 processor Maspar MP2 and a 16 processor SGI Challenge supercomputer, and has new electronics and metalworking facilities to support innovative research.

Rick Alterman's research interests are in the general areas of artificial intelligence and cognitive science and include such topics as: planning and activity, discourse and text processing, memory and case based reasoning, and human-computer interaction. A recent focus has been on theories of pragmatics and usage as they apply to the problems of man-machine interaction. One project resulted in the construction of a detailed cognitive model of an individual learning how to use a device; significant features of this model included techniques for skill acquisition and learning, a method for organizing procedural knowledge in memory, and "reading techniques" for actively seeking out and interpreting instructions that are relevant to a given "break down" situation. A second project develops a method of system adaptation where the system automatically evolves to the specifics of its task environment, after it is deployed, based on the history of usage of the system for a given task. A third project develops techniques that support the evolution and maintenance of a collective memory for a community of distributed heterogeneous agents who plan and work cooperatively. Professor Alterman is especially looking for students (and postdocs) with backgrounds in planning and activity, memory and case based reasoning, text and information retrieval, and human-computer interaction.

Maja Mataric's research focuses on understanding systems that integrate perception, representation, learning, and action. Her work is applied to problems of synthesis and analysis of complex behavior in situated agents and multi-agent systems.
Mataric's Interaction Lab (http://www.cs.brandeis.edu/~agents) covers three main project areas: 1) multi-robot projects (dynamic task division, specialization, learning behaviors and behavior selection, learning social rules, distributed spatial representations, synthesis and analysis of multi-robot controllers; using 24 mobile robots and a dynamical robot simulator); 2) multi-agent projects (cooperation vs. competition, dominance hierarchies, modeling markets, economies, and ecologies with non-rational agents, synthesizing and analyzing complex group behavior; using various multi-agent simulations); and 3) multi-modal representation projects (modeling learning by imitation involving perception, representation, and motor control, sensory-motor mappings, learning new motor behaviors, adapting internal motor programs, attention, and analysis of moving images; using a fully dynamic human torso simulation). Prof. Mataric encourages students with interests and/or backgrounds in AI, robotics, autonomous agents, machine learning, cognitive science, and cognitive neuroscience to apply. For more information see http://www.cs.brandeis.edu/~maja.

Jordan Pollack's research interests lie at the boundary between neural and symbolic computation: How could simple neural mechanisms, organized naturally into multi-cellular structures by evolution, provide the capacity necessary for cognition, language, and general intelligence? This view has led to successful work on how variable tree-structures could be represented in neural activity patterns, how dynamical systems could act as language generators and recognizers, and how fractal limit behavior of recurrent networks could represent mental imagery. One major current focus is on co-evolutionary learning, in which the learning task is dynamically constructed as a carrot, dangling in front of the machine learning horse. In the Dynamical and Evolutionary Machine Organization (DEMO), we are working on co-evolution in strategic game playing agents, cognitive tasks, and teams of agents who cooperate and communicate on complex tasks. As substrate we use recurrent neural networks and genetic programs, and use the 4096 processor Maspar machine. Professor Pollack is especially looking for students (and postdocs) with backgrounds in NN's & GA's, IFS's, robot building, and evolutionary agents. For more information see http://www.cs.brandeis.edu/~pollack or http://www.demo.cs.brandeis.edu

James Pustejovsky conducts research in the areas of computational linguistics, lexical semantics, and information retrieval and extraction. The main focus of his current research is on the computational and cognitive modeling of natural language meaning; more specifically, on how words and their meanings combine to form meaningful texts. This research has focused on developing a theory of lexical semantics based on a methodology making use of formal and computational semantics. There are several projects applying the results of this theory to Natural Language Processing, which, in effect, empirically test this view of semantics. These include: an NSF grant with Apple to automatically construct index libraries and help systems for applications; a DEC grant to automatically convert a trouble-shooting text-corpus into a case library. He recently completed a joint project with aphasiologist Dr. Susan Kohn on word-finding difficulties and sentence generation in aphasics.
For more information see http://www.cs.brandeis.edu/~jamesp/ The four AI faculty work together and with other members of the Volen Center, creating new interdisciplinary research opportunities in areas including cognitive science (http://fechner.ccs.brandeis.edu/cogsci.html) computational neuroscience, and complex systems at Brandeis University. To get more information about the Volen Center for Complex Systems, about the Computer Science Department, and about other faculty, see: http://www.cs.brandeis.edu/dept The URL for the graduate admission information is http://www.cs.brandeis.edu/dept/grad-info/application.html Graduate applications will begin to be reviewed on March 18th.  From icsc at freenet.edmonton.ab.ca Tue Mar 5 12:33:44 1996 From: icsc at freenet.edmonton.ab.ca (icsc@freenet.edmonton.ab.ca) Date: Tue, 5 Mar 1996 10:33:44 -0700 (MST) Subject: Announcement and Call for Papers ISFL'97 Message-ID: Announcement and Call for Papers Second International ICSC Symposium on FUZZY LOGIC AND APPLICATIONS ISFL'97 To be held at the Swiss Federal Institute of Technology (ETH), Zurich, Switzerland February 12 - 14, 1997 I. SPONSORS Swiss Federal Institute of Technology (ETH), Zurich, Switzerland and ICSC, International Computer Science Conventions, Canada/Switzerland II. PURPOSE OF THE CONFERENCE This conference is the successor of the highly successful meeting held in Zurich in 1995 (ISFL'95) and is intended to provide a forum for the discussion of new developments in fuzzy logic and its applications. An invitation to participate is extended both to those who took part in ISFL'95 and to others working in this field. Applications of fuzzy logic have played a significant role in industry, notably in the field of process and plant control, especially in applications where accurate modelling is difficult. The organisers hope that contributions will come not only from this field, but also from newer applications areas, perhaps in business, financial planning management, damage assessment, security, and so on. III. TOPICS Contributions are sought in areas based on the list below, which is indicative only. Contributions from new application areas will be particularly welcome. - Basic concepts such as various kinds of Fuzzy Sets, Fuzzy Relations, Possibility Theory - Neuro-Fuzzy Systems and Learning - Fuzzy Decision Analysis - Image Analysis with Fuzzy Techniques - Mathematical Aspects such as non-classical logics, Category Theory, Algebra, Topology, Chaos Theory - Modeling, Identification, Control - Robotics - Fuzzy Reasoning, Methodology and Applications, for example in Artificial Intelligence, Expert Systems, Image Processing and Pattern Recognition, Cluster Analysis, Game Theory, Mathematical Programming, Neural Networks, Genetic Algorithms and Evolutionary Computing - Implementation, for example in Engineering, Process Control, Production, Medicine - Design - Damage Assessment - Security - Business, Finance, Management IV. INTERNATIONAL SCIENTIFIC COMMITTEE (ISC) - Honorary Chairman: M. Mansour, Swiss Federal Institute of Technology, Zurich - Chairman: N. Steele, Coventry University, U.K. - Vice-Chairman: E. Badreddin, Swiss Federal Institute of Technology, Zurich - Members: E. Alpaydin, Turkey P.G. Anderson, USA Z. Bien, Korea H.H. Bothe, Germany G. Dray, France R. Felix, Germany J. Godjevac, Switzerland H. Hellendoorn, Germany M. Heiss, Austria K. Iwata, Japan M. Jamshidi, USA E.P. Klement, Austria B. Kosko, USA R. Kruse, Germany F. Masulli, Italy S. Nahavandi, New Zealand C.C. 
Nguyen, USA V. Novak, Czech Republic R. Palm, Germany D.W. Pearson, France I. Perfilieva, Russia B. Reusch, Germany G.D. Smith, U.K.

V. ORGANISING COMMITTEE

ISFL'97 is a joint operation between the Swiss Federal Institute of Technology (ETH), Zurich and International Computer Science Conventions (ICSC), Canada/Switzerland.

VI. PUBLICATION OF PAPERS

All accepted papers will appear in the conference proceedings, published by ICSC Academic Press. In addition, some selected papers may also be considered for journal publication.

VII. SUBMISSION OF MANUSCRIPTS

Prospective authors are requested to send two copies of their abstracts of 500 words for review by the International Scientific Committee. All abstracts must be written in English, starting with a succinct statement of the problem, the results achieved, their significance and a comparison with previous work. If authors believe that more details are necessary to substantiate the main claims of the paper, they may include a clearly marked appendix that will be read at the discretion of the International Scientific Committee. The abstract should also include:

- Title of proposed paper
- Authors names, affiliations, addresses
- Name of author to contact for correspondence
- E-mail address and fax number of contact author
- Name of topic which best describes the paper (max. 5 keywords)

Contributions are welcome from those working in industry and having experience in the topics of this conference as well as from academics. The conference language is English. Abstracts may be submitted either by electronic mail (ASCII text), fax or mail (2 copies) to either one of the following addresses:

ICSC Canada
P.O. Box 279
Millet, Alberta T0C 1Z0
Canada
Fax: +1-403-387-4329
Email: icsc at freenet.edmonton.ab.ca

or

ICSC Switzerland
P.O. Box 657
CH-8055 Zurich
Switzerland

VIII. OTHER CONTRIBUTIONS

Anyone wishing to organise a workshop, tutorial or discussion is requested to contact the chairman of the conference, Prof. Nigel Steele (e-mail: nsteele at coventry.ac.uk / phone: +44-1203-838568 / fax: +44-1203-838585) before August 31, 1996.

IX. DEADLINES AND REGISTRATION

It is the intention of the organisers to have the conference proceedings available for the delegates. Consequently, the deadlines below are to be strictly respected:

- Submission of Abstracts: May 31, 1996
- Notification of Acceptance: August 31, 1996
- Delivery of full papers: October 31, 1996

X. ACCOMMODATION

Block reservations will be made at nearby hotels and accommodation at reasonable rates (not included in the registration fee) will be available upon registration. Full details will follow with the letters of acceptance.

XI. SOCIAL AND TOURIST ACTIVITIES

A social programme, including a reception, will be organized on the evening of February 13, 1997. This activity will also be available for accompanying persons. Winter is an attractive season in Switzerland and many famous alpine resorts are within easy reach by rail, bus or car for a one or two day excursion. The city of Zurich itself is the proud home of many art galleries, museums or theatres. Furthermore, the world famous shopping street 'Bahnhofstrasse' or the old part of the town with its many bistros, bars and restaurants are always worth a visit.

XII. INFORMATION

For further information please contact either of the following:

- ICSC Canada, P.O. Box 279, Millet, Alberta T0C 1Z0, Canada
E-mail: icsc at freenet.edmonton.ab.ca
Fax: +1-403-387-4329
Phone: +1-403-387-3546

- ICSC Switzerland, P.O.
Box 657, CH-8055 Zurich, Switzerland Fax: +41-1-761-9627 - Prof. Nigel Steele, Chairman ISFL'97, Coventry University, U.K. E-mail: nsteele at coventry.ac.uk Fax: +44-1203-838585 Phone: +44-1203-838568  From edelman at wisdom.weizmann.ac.il Wed Mar 6 08:23:01 1996 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Wed, 6 Mar 1996 13:23:01 GMT Subject: shift invariance In-Reply-To: <9603051633.AA15499@bommel.cs.rulimburg.nl> (message from Eric Postma on Tue, 5 Mar 96 17:33:00 +0100) Message-ID: <199603061323.NAA08380@lachesis.wisdom.weizmann.ac.il> > Date: Tue, 5 Mar 96 17:33:00 +0100 > From: Eric Postma > > DeLiang Wang wrote > >I'd like to know the evidence that the visual system achieves shift > >(translation) invariance (I'd appreciate references if any). > > Biederman and Cooper (1991) found that an object presented at one location > of the retina facilitated recognition of that object at other locations. The > visual system does not achieve perfect ranslation invariance as shown by > Nazir and O'Regan (1991). > > Biederman, I. & Cooper, E.E. (1991). > Evidence for complete translational and reflectional invariance in visual > object priming. > Perception, 20, 585-593. > > Nazir, T.A. & O'Regan, J.K. (1990). > Some results on translation invariance in the human visual system. > Spatial Vision, 5, 81-100. Putting Nazir & O'Regan on the same list with Biederman like that may be misleading to someone who will not bother to read the paper. Nazir & O'Regan actually found evidence AGAINST translation invariance in human vision. They may have phrased the title conservatively to appease conservative reviewers... So, do not take the existence of translation invariance in biological vision for granted; heed well the cautionary note in Norberto's posting: > Date: Mon, 4 Mar 1996 11:03:09 -0800 (PST) > From: Norberto Grzywacz > A form of shift invariance appears to exist in cortical neurons of the > anterior part of the superior temporal sulcus and of the inferior temporal > cortex. Neurons in these areas have large receptive fields, which can show > considerable selectivity for what the stimulus is irrespective of exactly > where it is in the visual field. I would call this property "selectivity > shift invariance," to contrast with "absolute shift invariance," which > the cortex does not appear to have. -Shimon Dr. Shimon Edelman, Applied Math. & Computer Science Weizmann Institute of Science, Rehovot 76100, Israel The Web: http://eris.wisdom.weizmann.ac.il/~edelman fax: (+972) 8 344122 tel: 8 342856 sec: 8 343545  From STECK at ie.twsu.edu Wed Mar 6 11:21:39 1996 From: STECK at ie.twsu.edu (JIM STECK) Date: Wed, 6 Mar 1996 11:21:39 CDT (GMT-6) Subject: 2 papers: Quantum Dot Neural Network / Optical Neural Network Message-ID: <44DD77C5015@ie.twsu.edu> An uncompressed postscript version of the following paper is available at: http://www.me.twsu.edu/me/faculty/steck/Pubs/ (approx 1400K) A Quantum Dot Neural Network E.C. Behrman, J. Niemel, J. E. Steck, S. R. Skinner Wichita State University, Wichita, KS 67260 Abstract We present a mathematical implementation of a quantum mechanical artificial neural network, in the quasi-continuum regime, using the nonlinearity inherent in the real-time propagation of a quantum system coupled to its environment. Our model is that of a quantum dot molecule coupled to the substrate lattice through optical phonons, and subject to a time-varying external field. 
Using discretized Feynman path integrals, we find that the real time evolution of the system can be put into a form which resembles the equations for the virtual neuron activation levels of an artificial neural network. The timeline discretization points serve as virtual neurons. We then train the network using a simple gradient descent algorithm, and find it is possible in some regions of the phase space to perform any desired classical logic gate. Because the network is quantum mechanical we can also train purely quantum gates such as a phase shift.

''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

An uncompressed postscript version of the following paper is available at: http://www.me.twsu.edu/me/faculty/steck/Pubs/ (approx 153K)

Experimental Demonstration of On-Line Training for an Optical Neural Network Using Self-Lensing Media

Alvaro A. Cruz-Cabrera, James E. Steck, Elizabeth C. Behrman, Steven R. Skinner

Abstract

The optical bench realization of a feed forward optical neural network, developed by the authors, is presented. The network uses a thermal nonlinear material that modulates the phase front of a forward propagating HeNe beam by dynamically altering the index of refraction profile of the material. The index of refraction cross-section of the nonlinear material was modified by applying a separate argon laser, which was modulated by a liquid crystal display used as a spatial light modulator. On-line training of the network was accomplished by using a reinforcement learning paradigm to achieve several standard and non-standard logic gates.

James E. Steck
Assistant Professor
(316)-689-3402

From juergen at idsia.ch Wed Mar 6 13:05:08 1996
From: juergen at idsia.ch (Juergen Schmidhuber)
Date: Wed, 6 Mar 96 19:05:08 +0100
Subject: feature detectors
Message-ID: <9603061805.AA06532@fava.idsia.ch>

SEMILINEAR PREDICTABILITY MINIMIZATION PRODUCES WELL-KNOWN FEATURE DETECTORS
(9 pages, 260 K compressed, 1.14 M uncompressed)
Neural Computation, 1996 (accepted)

Juergen Schmidhuber, Martin Eldracher, Bernhard Foltin

Predictability minimization (PM) exhibits various intuitive and theoretical advantages over many other methods for unsupervised redundancy reduction. So far, however, there have been only toy applications of PM. In this paper, we apply semilinear PM to static real world images and find: without a teacher and without any significant pre-processing, the system automatically learns to generate distributed representations based on well-known feature detectors, such as orientation sensitive edge detectors and off-center-on-surround-like structures, thus extracting simple features related to those considered useful for image pre-processing and compression. (Revised and extended TR FKI-201-94)
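The idea behind PM is adversarial: each code unit is paired with a predictor that tries to guess its activation from the other code units, and the code units are trained to defeat their predictors while the predictors are trained to succeed. A toy gradient version in Python (my own reconstruction for linear predictors and sigmoid code units; not necessarily the authors' exact semilinear formulation):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 16))          # toy input patterns
W = 0.1 * rng.standard_normal((4, 16))      # code units: y = sigmoid(W x)
V = np.zeros((4, 4))                        # intra-layer linear predictors
lr = 0.05

for step in range(1000):
    Y = sigmoid(X @ W.T)        # code activations
    E = Y @ V.T - Y             # each unit's prediction error
    # Predictors do gradient descent on the squared error
    # (diagonal kept at zero: a unit may not predict itself).
    gV = E.T @ Y / len(X)
    np.fill_diagonal(gV, 0.0)
    V -= lr * gV
    # Code units do gradient ascent on the same error (predictions
    # held fixed), pushing them toward unpredictable, i.e.
    # statistically independent, features of the input.
    W -= lr * ((E * Y * (1.0 - Y)).T @ X) / len(X)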
To obtain a copy, cut and paste this:

netscape ftp://ftp.idsia.ch/pub/juergen/detectors.ps.gz

Juergen Schmidhuber, IDSIA
Martin Eldracher, IDSIA / TUM
Bernhard Foltin, TUM

From minton at ISI.EDU Wed Mar 6 21:16:20 1996
From: minton at ISI.EDU (minton@ISI.EDU)
Date: Wed, 6 Mar 96 18:16:20 PST
Subject: JAIR article, Mean Field Theory for ...
Message-ID: <9603070216.AA00570@sungod.isi.edu>

Readers of this group may be interested in the following article, which was just published by JAIR:

Saul, L.K., Jaakkola, T. and Jordan, M.I. (1996) "Mean Field Theory for Sigmoid Belief Networks", Volume 4, pages 61-76.

Available in Postscript (302K) and compressed Postscript (123K). For quick access via your WWW browser, use this URL: http://www.cs.washington.edu/research/jair/abstracts/saul96a.html More detailed instructions are below.

Abstract: We develop a mean field theory for sigmoid belief networks based on ideas from statistical mechanics. Our mean field theory provides a tractable approximation to the true probability distribution in these networks; it also yields a lower bound on the likelihood of evidence. We demonstrate the utility of this framework on a benchmark problem in statistical pattern recognition---the classification of handwritten digits.
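The lower bound mentioned in the abstract is, in generic form, the standard variational bound obtained from Jensen's inequality; in my notation (not necessarily the paper's), with visible units $V$ and binary hidden units $H$,

\[
\ln P(V) \;=\; \ln \sum_{H} P(V,H) \;\ge\; \sum_{H} Q(H)\,\ln \frac{P(V,H)}{Q(H)},
\qquad
Q(H) \;=\; \prod_i \mu_i^{h_i} (1-\mu_i)^{1-h_i},
\]

where the factorized (mean field) distribution $Q$ is fit by adjusting the parameters $\mu_i$ to make the bound as tight as possible.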
The article is available via:

-- comp.ai.jair.papers (also see comp.ai.jair.announce)

-- World Wide Web: The URL for our World Wide Web server is http://www.cs.washington.edu/research/jair/home.html For direct access to this article and related files try: http://www.cs.washington.edu/research/jair/abstracts/saul96a.html

-- Anonymous FTP from either of the two sites below.
Carnegie-Mellon University (USA): ftp://p.gp.cs.cmu.edu/usr/jair/pub/volume4/saul96a.ps
The University of Genoa (Italy): ftp://ftp.mrg.dist.unige.it/pub/jair/pub/volume4/saul96a.ps
The compressed PostScript file is named saul96a.ps.Z (123K)

-- automated email. Send mail to jair at cs.cmu.edu or jair at ftp.mrg.dist.unige.it with the subject AUTORESPOND and our automailer will respond. To get the Postscript file, use the message body GET volume4/saul96a.ps (Note: Your mailer might find this file too large to handle.) Only one file can be requested in each message.

-- JAIR Gopher server: At p.gp.cs.cmu.edu, port 70.

For more information about JAIR, visit our WWW or FTP sites, or send electronic mail to jair at cs.cmu.edu with the subject AUTORESPOND and the message body HELP, or contact jair-ed at ptolemy.arc.nasa.gov.

From king at cs.cuhk.hk Wed Mar 6 22:58:11 1996
From: king at cs.cuhk.hk (Irwin King)
Date: Thu, 7 Mar 1996 11:58:11 +0800 (HKT)
Subject: CFP - Special Sessions on Genetic Algorithm and Neural Networks in Multimedia
Message-ID: <199603070358.LAA14549@cs.cuhk.hk>

**********************************************************************
C A L L  F O R  P A P E R S
Special Sessions On
1. GENETIC ALGORITHMS & PROGRAMMING
and
2. NEURAL NETWORKS IN MULTIMEDIA APPLICATIONS
September 24-27, 1996
International Conference on Neural Information Processing (ICONIP'96)
Hong Kong Convention and Exhibition Center, Wan Chai, Hong Kong
http://www.cs.cuhk.hk/iconip96
**********************************************************************

The main objectives of these special sessions are:
* To provide a forum for presenting and discussing theoretical and application issues on GA and GP
* To provide a forum for presenting and discussing the application of neural networks in multimedia systems
* To promote collaboration between researchers internationally

1. Genetic Algorithms & Programming
===================================
We invite papers dealing with the theory and application of GA and GP. Please submit a short abstract (1 page) via email to ksleung at cs.cuhk.edu.hk as soon as possible. If accepted, the full paper will be required by April 12, 1996.

2. Neural Networks in Multimedia Applications
=============================================
We invite papers dealing with the application of neural networks in already implemented multimedia systems. In particular, we are interested in proven neural network techniques used for virtual reality applications and multimedia databases. Please submit a short abstract (1 page) via email to king at cs.cuhk.edu.hk as soon as possible. If accepted, the full paper will be required by April 12, 1996.

Please send inquiries to:

Genetic Algorithms & Programming: K.S. Leung, Dept. of Comp. Sci. & Eng., The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. Email: ksleung at cs.cuhk.edu.hk Fax: (852) 2603-5024

Neural Networks in Multimedia: Irwin King, Dept. of Comp. Sci. & Eng., The Chinese University of Hong Kong, Shatin, N.T., Hong Kong. Email: king at cs.cuhk.edu.hk Fax: (852) 2603-5024

From goldfarb at unb.ca Fri Mar 8 14:03:58 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Fri, 8 Mar 1996 15:03:58 -0400 (AST)
Subject: shift invariance
In-Reply-To: <199603061323.NAA08380@lachesis.wisdom.weizmann.ac.il>
Message-ID:

On Wed, 6 Mar 1996, Edelman Shimon wrote:

> > Nazir, T.A. & O'Regan, J.K. (1990).
> > Some results on translation invariance in the human visual system.
> > Spatial Vision, 5, 81-100.
>
> Nazir & O'Regan actually found evidence AGAINST translation invariance in
> human vision. They may have phrased the title conservatively to
> appease conservative reviewers... So, do not take the existence of
> translation invariance in biological vision for granted; heed well the
> cautionary note in Norberto's posting:
>
> > Date: Mon, 4 Mar 1996 11:03:09 -0800 (PST)
> > From: Norberto Grzywacz
> >
> > A form of shift invariance appears to exist in cortical neurons of the
> > anterior part of the superior temporal sulcus and of the inferior temporal
> > cortex. Neurons in these areas have large receptive fields, which can show
> > considerable selectivity for what the stimulus is irrespective of exactly
> > where it is in the visual field. I would call this property "selectivity
> > shift invariance," to contrast with "absolute shift invariance," which
> > the cortex does not appear to have.

Why would one want to invent such a strange name "selectivity shift invariance"? If we 1) DO NOT FORGET that the biological systems have at their disposal quite adequate means to extract symbolic (structural) representation right from the very beginning and 2) FORGET about our inadequate numeric models, then the question would not have arisen in the first place. Symbolic representations EMBODY shift invariance. We are completing a paper "Inductive Theory of Vision" that addresses these issues.

Lev Goldfarb
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From dnoelle at cs.ucsd.edu Fri Mar 8 18:37:25 1996
From: dnoelle at cs.ucsd.edu (David Noelle)
Date: Fri, 8 Mar 96 15:37:25 -0800
Subject: CogSci96 Call For Participation
Message-ID: <9603082337.AA14792@beowulf>

Eighteenth Annual Conference of the
COGNITIVE SCIENCE SOCIETY
July 12-15, 1996
University of California, San Diego
La Jolla, California

CALL FOR PARTICIPATION

The Annual Cognitive Science Conference began with the La Jolla Conference on Cognitive Science in August of 1979. The organizing committee of the Eighteenth Annual Conference would like to welcome members home to La Jolla. We plan to recapture the pioneering spirit of the original conference, extending our welcome to fields on the expanding frontier of Cognitive Science, including Artificial Life, Cognitive and Computational Neuroscience, Evolutionary Psychology, as well as the core areas of Anthropology, Computer Science, Linguistics, Neuroscience, Philosophy, and Psychology.
The conference will feature plenary addresses by invited speakers, invited symposia by leaders in their fields, technical paper sessions, a poster session, a banquet, and a Blues Party. San Diego is the home of the world-famous San Diego Zoo and Wild Animal Park, Sea World, the historic all-wooden Hotel Del Coronado, beautiful beaches, mountain areas and deserts, is a short drive from Mexico, and features a high Cappuccino Index. Bring the whole family and stay a while!

PLENARY SESSIONS

"Controversies in Cognitive Science: The Case of Language"
Stephen Crain (UMD College Park) & Mark Seidenberg (USC)
Moderated by Paul Smolensky (Johns Hopkins University)

"Tenth Anniversary of the PDP Books"
Geoff Hinton (Toronto), Jay McClelland (CMU), & Dave Rumelhart (Stanford)

"Frontal Lobe Development and Dysfunction in Children: Dissociations between Intention and Action"
Adele Diamond (MIT)

"Reconstructing Consciousness"
Paul Churchland (UCSD)

TRAVEL & ACCOMMODATIONS

United Airlines is the official airline of the 1996 Cognitive Science Conference. Attendees flying with United can receive a 5% discount off of any published United or United Express round trip fare (to San Diego) in effect when the ticket is purchased, subject to all applicable restrictions. Attendees flying with United can receive a 10% discount off of applicable BUA fares in effect when the ticket is purchased 7 days in advance. To get your discount, be sure to give your travel agent the following information:

* "Meeting ID# 557NS for the Cognitive Science Society Meeting"
* United's Meeting Desk phone number is (800) 521-4041.

Alternatively, you may order your tickets direct from United's Meeting Desk, using the same reference information as above. Purchasers of United tickets to the conference will be eligible for a drawing (to be held at the conference) in which two round trip tickets will be given away -- so don't throw away your boarding pass!

If you are flying to San Diego, you will be arriving at Lindbergh Field. If you don't rent a car, transportation from the airport to the UCSD area will cost (not including tip) anywhere from $15.00 (for a seat on a shuttle/van) to $35.00 (for a taxi).

We have arranged for special rates at two of the hotels nearest to the UCSD campus. In addition, on campus dormitory apartments can be rented at less expense. All rooms are subject to availability and hotel rates are only guaranteed up to the dates specified, so reserve early. None of the rates quoted below (unless explicitly stated) includes tax, which is currently 10.5 percent.

The La Jolla Marriott is located approximately 2 miles from campus. Single and double rooms are available at $92.00 per night, when reserved before June 21st. Included in the rate is a morning and evening shuttle service to and from campus (running for one hour periods, on July 13th, 14th, and 15th only). The hotel has parking spaces, available at $7 per day or $10 per day with valet service. On campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. There is also city bus service (fare is about $1.50 per ride) to and from campus which passes within 1 block of the hotel. Reservations can be made by calling the hotel at (619) 587-1414 or (800) 228-9290. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates. Arrival after 6:00 P.M. requires a first night's deposit, or guarantee with a major credit card.

The La Jolla Radisson is located approximately 1/2 mile from campus.
Single and double rooms are available at $75.00 per night, when reserved before June 12th. Included in the rate is a morning and evening shuttle service to and from campus, although walking is also very feasible. Parking is available and complimentary. On campus parking requires the purchase of daily ($6.00) or weekly ($16.00) passes. The first night's room charge (+ tax) is due by June 12th. Reservations can be made by calling Radisson Reservations at (800) 333-3333. Be sure to reference the "Annual Conference of the Cognitive Science Society" to receive these special rates.

There are a limited number of on-campus apartments available for reservation as a 4 night package. Included is a (mandatory) meal plan - cafeteria breakfast (4 days) and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). On campus parking is complimentary with this package. These apartments may be reserved using the conference registration form.

REGISTRATION INFORMATION

There are three ways to register for the 1996 Cognitive Science Conference:

* ONLINE REGISTRATION -- You may fill out and electronically submit the online registration form, which may be found on the conference web page at "http://www.cse.ucsd.edu/events/cogsci96/". This is the preferred method of registration. (You must pay registration fees with a Visa or MasterCard in order to use this option.)

* EMAIL REGISTRATION -- You may fill out the plain text (ASCII) registration form, which appears below, and send it via electronic mail to "cogsci96reg at cs.ucsd.edu". (You must pay registration fees with a Visa or MasterCard in order to use this option.)

* POSTAL REGISTRATION -- You may download a copy of the PostScript registration form from the conference home page (or extract the plain text version, below), print it on a PostScript printer, fill it out with a pen, and send it via postal mail to:

CogSci'96 Conference Registration
Cognitive Science Department - 0515
University of California, San Diego
9500 Gilman Drive
La Jolla, CA 92093-0515

(Under this option, you may enclose payment of registration fees in U. S. dollars in the form of a check or money order, or you may pay these fees with a Visa or MasterCard. Please make checks payable to: The Regents of the University of California.)

For more information, visit the conference web page at "http://www.cse.ucsd.edu/events/cogsci96". Please direct questions and comments to "cogsci96 at cs.ucsd.edu".

Edwin Hutchins and Walter Savitch, Conference Chairs
John D. Batali, Local Arrangements Chair
Garrison W.
Cottrell, Program Chair ====================================================================== PLAIN TEXT REGISTRATION FORM ====================================================================== Cognitive Science 1996 Registration Form ---------------------------------------- Your Full Name : _____________________________________________________ Your Postal Address : ________________________________________________ (including zip/postal ________________________________________________ code and country) ________________________________________________ ________________________________________________ Your Telephone Number (Voice) : ______________________________________ Your Telephone Number (Fax) : ______________________________________ Your Internet Electronic Mail Address (e.g., dnoelle at cs.ucsd.edu) : ______________________________________________________________________ REGISTRATION FEES : Please select the appropriate registration option from the menu below by placing an "X" in the corresponding blank on the left. Note that the Cognitive Science Society is offering a special deal to individuals who opt to join the Society simultaneously with conference registration. The "New Member" package includes conference fees and first year's membership dues for only $10 more than the nonmember conference cost. Registration fees received after May 1st are $20 higher ($10 higher for students) than fees received before May 1st. Be sure to register early to take advantage of the lower fee rates. _____ Registration, Member -- $120 ($140 after May 1st) _____ Registration, Nonmember -- $145 ($165 after May 1st) _____ Registration, New Member -- $155 ($175 after May 1st) _____ Registration, Student Member -- $85 ($95 after May 1st) _____ Registration, Student Nonmember -- $100 ($110 after May 1st) CONFERENCE BANQUET : Tickets to the conference banquet are *not* included in the registration fees, above. Banquet tickets are $35 per person. (You may bring guests.) Number Of Banquet Tickets Desired ($35 each): _____ _____ Omnivorous _____ Vegetarian CONFERENCE SHIRTS : Conference T-Shirts are *not* included in the registration fees, above. These are $10 each. Number Of T-Shirts Desired ($10 each): _____ UCSD ON-CAMPUS APARTMENTS : There are a limited number of on-campus apartments available for reservation as a 4 night package. Included is a (mandatory) meal plan - cafeteria breakfast (4 days), and lunch (3 days). The total cost is $191 per person (double occupancy, including tax) and $227 per person (single occupancy, including tax). On campus parking is complimentary with this package. Off-campus accommodations in local hotels are also available, but you will need to make reservations by contacting the hotel of interest directly. If you will be staying off-campus, please skip this portion of the registration form. On-campus housing reservations must be received by May 1st, 1996. Please include the cost of on-campus housing in the total conference cost listed at the bottom of this form. Select the housing plan desired by placing an "X" in the appropriate blank on the left: _____ UCSD Housing and Meal Plan (Single Room) -- $227 per person _____ UCSD Housing and Meal Plan (Double Room) -- $191 per person Arrival Date And Time : ____________________________________________ Departure Date And Time : ____________________________________________ If you reserved a double room above, please indicate your roommate preference below: _____ Please assign a roommate to me. I am _____ female _____ male. 
_____ I will be sharing this room with a guest who is not registered for the conference. I will include $382 ($191 times 2) in the total conference cost listed at the bottom of this form.

_____ I will be sharing this room with another conference attendee. I will include $191 in the total conference cost listed at the bottom of this form. My roommate will submit her housing fee along with her registration form. My roommate's full name is: ______________________________________________________________

If you would like to share your room with your children, the UCSD apartments allow up to two children in a room.

Number And Ages Of Children : ________________________________________

Comments To The Registration Staff :
______________________________________________________________________
______________________________________________________________________
______________________________________________________________________

Please sum your conference registration fees, the cost of banquet tickets and t-shirts, and on-campus housing costs, and place the total below. To register by electronic mail, payment must be by Visa or MasterCard only.

TOTAL : _$____________

Bill to: _____ Visa _____ MasterCard
Number : ___________________________________________
Expiration Date: ___________________________________

When complete, send this form via email to "cogsci96reg at cs.ucsd.edu".

======================================================================
PLAIN TEXT REGISTRATION FORM
======================================================================

From ib at rana.usc.edu Fri Mar 8 21:25:00 1996
From: ib at rana.usc.edu (Irving Biederman)
Date: Fri, 8 Mar 1996 18:25:00 -0800
Subject: Shift Invariance
Message-ID: <199603090225.SAA14592@mizar.usc.edu>

The communication by Shimon Edelman is, in my opinion, a bit misleading. In response to a posting by Eric Postma that listed papers by Biederman & Cooper (1991) and Nazir & O'Regan (1990) as evidence for shift invariance, Edelman writes:

"Putting Nazir & O'Regan on the same list with Biederman like that may be misleading to someone who will not bother to read the paper. Nazir & O'Regan actually found evidence AGAINST translation invariance in human vision."

One may distinguish a strong form of shift invariance, in which there is no cost in performance from changing the position of a stimulus, from a weak form, in which there is facilitation but not as much as when the stimulus is presented at its originally experienced position. Eric E. Cooper and I (Perception '91) found virtually complete (i.e., strong) shift invariance, as measured by the priming of briefly presented (100 msec) object pictures. [100 msec is too brief to make a fixation onto the stimulus.] Picture-naming RTs and error rates were unaffected by a shift. We did this by presenting the pictures either 2 deg to the left or 2 deg to the right of fixation. The order of left-right positions appeared random. In two experiments, when the pictures were shown a second time, there was virtually no difference in performance if a given picture was shifted or not. A third experiment produced the same result with 2 deg shifts above and below the fixation point. That there was perceptual and not just concept or name priming was evidenced by a reduction in priming of pictures with the same name and basic-level concept but a different shape (i.e., two different kinds of chairs). So we obtained a strong form of shift invariance.
The finding of strong left-right shift invariance on RTs was replicated by Cooper, Biederman, & Hummel (1992) using contour-deleted pictures that, for half the subjects, were also mirror-reversed when they were shifted. A slight, but reliable, increase in error rates was noted only for pictures that were shifted but not reversed. Shifted pictures that were also reversed (so the fish, for example, is always facing toward the fixation) showed no increase in error rates. We proposed that when a picture is shifted across the vertical midline, different features (e.g., parts) will be present at different eccentricities and will therefore receive different resolution. Mirror reversing the stimulus preserves the original relation between resolution and features. If the features are difficult to discriminate, then the modest variation in resolution could produce an apparent shift cost.

A subject in the Nazir & O'Regan ('90, Spatial Vision) experiment was extensively trained to discriminate a symmetrical nonsense pattern from two highly similar non-target patterns at 2.4 deg to the left of fixation. (Other subjects would be trained with that pattern to the right of fixation.) The subject could then be tested at the learned position (which was always peripheral), at central fixation, or on the opposite side. As Nazir & O'Regan noted, there was an enormous amount of facilitation in all conditions in all experiments. So there was at least weak invariance. Was there strong invariance as well? When they controlled potentially confounding and contaminating factors, there was strong shift invariance, Edelman's claim to the contrary notwithstanding. Under controlled conditions, when a stimulus was not presented unless the eye was on the fixation point, there was no effect of a shift from learned to opposite positions. There was a cost, however, of shifting from a learned (peripheral) to central position. But this comparison confounds resolution (from peripheral to central) with shift. Although it may be surprising that one would do worse with central as compared to more peripheral positions, it is not implausible that a different set of features was employed at different resolutions. In the three later experiments, where eye position was not controlled (it was difficult to train subjects to maintain fixation), there were much larger costs, but there could well have been a bias to look at the learned location. In these three experiments, as Nazir & O'Regan noted, there was considerable subject and stimulus variability, perhaps reflecting various task strategies. Certainly, a bias to monitor the trained location is not out of the question.

So we have four name-priming experiments documenting strong shift invariance when resolution is controlled, three left-right and one up-down. The Nazir & O'Regan research shows strong invariance under controlled conditions. So five well-controlled experiments document strong shift invariance. Weak invariance is obtained under less controlled conditions.

Finally, let me note that, of course, it is not the case that every representation is shift-invariant. Under the identical conditions that yielded invariance in object priming, subjects showed well-above-chance explicit memory of where the picture was presented. Cooper and I hypothesized that position information may be specified by the dorsal system.
Those who presume to test invariance of shift (or size or orientation or reflection) should bear in mind the possibility that a particular task, especially one so difficult that subjects are induced to adopt various strategies such as search, may tap both shift-invariant and shift-specific representations. For example, if I've been extensively trained to search for a small distinguishing feature on the left side of the display, I could readily show a shift cost if the feature is no longer there.

References:

Cooper, E. E., Biederman, I., & Hummel, J. E. (1992). Metric invariance in object recognition: A review and further evidence. Canadian Journal of Psychology, 46, 191-214.

Biederman, I. & Cooper, E. E. (1991). Evidence for complete translational and reflectional invariance in visual object priming. Perception, 20, 585-593.

Nazir, T. A. & O'Regan, J. K. (1990). Some results on translation invariance in the human visual system. Spatial Vision, 5, 81-100.

**************************
Irving Biederman, Ph. D.
William M. Keck Professor of Cognitive Neuroscience
Department of Psychology
University of Southern California
Hedco Neurosciences Building, MC 2520
Los Angeles, CA 90089-2520
ib at rana.usc.edu
(213) 740-6094 (Office); (213) 740-5687 (Fax); (310) 823-8980 (Home); (213) 740-6102 (Lab)
Visit our web site at: http://rana.usc.edu:8376/~ib/iul.html

From edelman at wisdom.weizmann.ac.il Sun Mar 10 12:01:12 1996
From: edelman at wisdom.weizmann.ac.il (Edelman Shimon)
Date: Sun, 10 Mar 1996 17:01:12 GMT
Subject: Shift Invariance
In-Reply-To: <199603090225.SAA14592@mizar.usc.edu> (message from Irving Biederman on Fri, 8 Mar 1996 18:25:00 -0800)
Message-ID: <199603101701.RAA08020@lachesis.wisdom.weizmann.ac.il>

> Date: Fri, 8 Mar 1996 18:25:00 -0800
> From: Irving Biederman
>
> The communication by Shimon Edelman is, in my opinion, a bit
> misleading. In response to a posting by Eric Postma that listed papers by
> Biederman & Cooper (1991) and Nazir & O'Regan (1990) as evidence for shift
> invariance, Edelman writes:
>
> "Putting Nazir & O'Regan on the same list with Biederman like that may
> be misleading to someone who will not bother to read the paper. Nazir
> & O'Regan actually found evidence AGAINST translation invariance in
> human vision."
>
> One may distinguish a strong form of shift invariance, in which
> there is no cost in performance from changing the position of a stimulus,
> from a weak form in which there is facilitation but not as much as when the
> stimulus is presented at its originally experienced position.
> ... [ rest of Biederman's message omitted ]

Many thanks to Irv Biederman for posting the details of his findings, along with a comparison with the results of Nazir & O'Regan. His effort should reduce the chance of the readers of this list jumping to premature conclusions.

Note that the purpose of my previous posting was to advocate caution, certainly not to argue that all claims of invariance are wrong. Fortunately, my job in this matter is easy: just one example of a manifest lack of invariance suffices to invalidate the strong version of the invariance-based theory of vision, which seems to be espoused by Goldfarb:

> If we 1) DO NOT FORGET that the biological systems have at their disposal
> quite adequate means to extract symbolic (structural) representation right
> from the very beginning and 2) FORGET about our inadequate numeric models,
> then the question would not have arisen in the first place.
> Symbolic representations EMBODY shift invariance.

So, here it goes... Whereas invariance does hold in many recognition tasks (in particular, in Biederman's experiments, as well as in the experiments reported in [1]), it does not in others (as, e.g., in [2], where an interaction between size invariance and orientation is reported). A recent comprehensive survey of (the far from invariant) human performance in recognizing rotated objects can be found in [3]. Furthermore, not only recognition, but also perceptual learning, seems to be non-invariant in some cases; see [4,5].

FORGETTING about experimental findings will not make them go away, just as pointing out that symbolic representations EMBODY invariance will not make biological vision embrace a symbolic approach if it has not done so until now.

-Shimon

Dr. Shimon Edelman, Applied Math. & Computer Science
Weizmann Institute of Science, Rehovot 76100, Israel
The Web: http://eris.wisdom.weizmann.ac.il/~edelman
fax: (+972) 8 344122 tel: 8 342856 sec: 8 343545

-----------------------------------------------------------------------------

References:

[1] @article{BricoloBulthoff92,
      author="E. Bricolo and H. H. {B\"ulthoff}",
      title="Translation-invariant features for object recognition",
      journal="Perception", volume="21 (supp. 2)", year=1992, pages="59" }

[2] @article{BricoloBulthoff93a,
      author="E. Bricolo and H. H. {B\"ulthoff}",
      title="Further evidence for viewer-centered representations",
      journal="Perception", volume="22 (supp)", year=1993, pages="105" }

[3] @InCollection{JolicoeurHumphrey94,
      author="P. Jolicoeur and G. K. Humphrey",
      title="Perception of rotated two-dimensional and three-dimensional objects and visual shapes",
      booktitle="Perceptual constancies", publisher="Cambridge University Press",
      year=1994, editor="V. Walsh and J. Kulikowski", chapter=10,
      address="Cambridge, UK", note="in press" }

[4] @article{KarniSagi91,
      author="A. Karni and D. Sagi",
      title="Where practice makes perfect in texture discrimination",
      journal=pnas, volume="88", pages="4966-4970", year="1991" }

[5] @article{PoggioFahleEdelman92,
      author="T. Poggio and M. Fahle and S. Edelman",
      title="Fast perceptual learning in visual hyperacuity",
      journal="Science", year="1992", volume="256", pages="1018-1021" }

From goldfarb at unb.ca Sun Mar 10 22:57:12 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Sun, 10 Mar 1996 23:57:12 -0400 (AST)
Subject: Shift Invariance
In-Reply-To: <199603101701.RAA08020@lachesis.wisdom.weizmann.ac.il>
Message-ID:

On Sun, 10 Mar 1996, Edelman Shimon wrote:

> Many thanks to Irv Biederman for posting the details of his findings,
> along with a comparison with the results of Nazir & O'Regan. His
> effort should reduce the chance of the readers of this list jumping to
> premature conclusions.
>
> Note that the purpose of my previous posting was to advocate caution,
> certainly not to argue that all claims of invariance are wrong.
> Fortunately, my job in this matter is easy: just one example of a
> manifest lack of invariance suffices to invalidate the strong version
> of the invariance-based theory of vision, which seems to be espoused by
> Goldfarb:
>
> > If we 1) DO NOT FORGET that the biological systems have at their disposal
> > quite adequate means to extract symbolic (structural) representation right
> > from the very beginning and 2) FORGET about our inadequate numeric models,
> > then the question would not have arisen in the first place. Symbolic
> > representations EMBODY shift invariance.
>
> So, here it goes...
> Whereas invariance does hold in many recognition tasks (in particular,
> in Biederman's experiments, as well as in the experiments reported in [1]),
> it does not in others (as, e.g., in [2], where an interaction between size
> invariance and orientation is reported). A recent comprehensive survey of
> (the far from invariant) human performance in recognizing rotated objects
> can be found in [3]. Furthermore, not only recognition, but also perceptual
> learning, seems to be non-invariant in some cases; see [4,5].
>
> FORGETTING about experimental findings will not make them go away,
> just as pointing out that symbolic representations EMBODY invariance
> will not make biological vision embrace a symbolic approach if it has
> not done so until now.

It appears that there is considerable confusion as to what "shift invariance" is: shift invariance should not include size, orientation, or context invariance, since an encoding of these may involve additional structural information. (By the way, I do not read Biederman's message as Edelman does.)

-- Lev

From robtag at dia.unisa.it Mon Mar 11 07:11:41 1996
From: robtag at dia.unisa.it (Tagliaferri Roberto)
Date: Mon, 11 Mar 1996 13:11:41 +0100
Subject: International School on Neural Nets "E.R. Caianiello"
Message-ID: <9603111211.AA24178@udsab.dia.unisa.it>

Galileo Galilei Foundation
World Federation of Scientists
Ettore Majorana Centre for Scientific Culture

Galileo Galilei Celebrations
Four Hundred Years Since the Birth of Modern Science

International School on Neural Nets "E.R. Caianiello"
1st Course: Learning in Graphical Models
A NATO Advanced Study Institute
Erice-Sicily: 27 September - 7 October 1996

Sponsored by the:
- European Union
- International Institute for Advanced Scientific Studies (IIASS)
- Italian Institute for Philosophical Studies
- Italian Ministry of Education
- Italian Ministry of University and Scientific Research
- Italian National Research Institute (CNR)
- Sicilian Regional Government
- University of Salerno

Programme and Lecturers

- Introduction to Graphical Models
  J. Whittaker, University of Lancaster, UK
- Introduction to Bayesian Methods
  D. MacKay, University of Cambridge, UK
- Introduction to Neural Networks
  M. Jordan, MIT, Cambridge, MA, USA
- Learning of Directed Graphs
  D. Heckerman, Microsoft Research, Redmond, WA, USA
- The Helmholtz Machine
  G. Hinton, University of Toronto, Canada
- Model Selection
  G. Cooper, University of Pittsburgh, PA, USA
- Latent Variables Methods
  R. Neal, University of Toronto, Canada
- Stochastic Grammars
  S. Omohundro, NEC Research, Princeton, NJ, USA
- Statistical Mechanics and Clustering Models
  J. Buhmann, University of Bonn, Germany
- Bayesian Learning of Graphical Models
  R. Cowell, University College, London, UK
- Priors for Graphical Models
  D. Geiger, UCLA, Los Angeles, CA, USA
- Independence and Decorrelation
  E. Oja, Helsinki University of Technology, Finland
- Bayesian Learning and Gibbs Sampling
  D. Spiegelhalter, MRC, Cambridge, UK

Purpose of the course

Neural networks and Bayesian belief networks are learning and inference methods that have been developed in two largely distinct research communities. The purpose of this Course is to bring together researchers from these two communities and study both kinds of networks as instances of a general unified graphical formalism. The Course will focus on probabilistic methods for learning in graphical models, with attention paid to algorithm analysis and design, theory and applications.
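(For readers to whom the graphical formalism is new: a graphical model encodes how a joint distribution factorizes over a graph. The following minimal Python sketch -- illustrative only, with made-up numbers, not course material -- computes a marginal in a three-node chain A -> B -> C by brute-force enumeration of the factored joint.)

  # Chain A -> B -> C over binary variables; the graph asserts
  # the factorization P(a, b, c) = P(a) * P(b | a) * P(c | b).
  pA = {0: 0.6, 1: 0.4}
  pB_A = {0: {0: 0.9, 1: 0.1}, 1: {0: 0.3, 1: 0.7}}    # pB_A[a][b]
  pC_B = {0: {0: 0.8, 1: 0.2}, 1: {0: 0.25, 1: 0.75}}  # pC_B[b][c]

  # Marginal P(C = 1): sum the factored joint over a and b.
  pC1 = sum(pA[a] * pB_A[a][b] * pC_B[b][1]
            for a in (0, 1) for b in (0, 1))
  print(pC1)  # 0.387

(Enumeration is exponential in the number of variables; the propagation algorithms taught in courses like this one exploit the graph structure to perform the same sums efficiently.)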
General Information

Persons wishing to attend the Course should apply in writing to:

- Prof. Maria Marinaro
  IIASS "E.R. Caianiello"
  Via G. Pellegrino, 19
  84019 Vietri sul mare (SA), Italy
  Tel: + 39 89 761167
  Fax: + 39 89 761189

They should specify: i) date and place of birth together with present nationality; ii) degree and other academic qualifications; iii) present position and place of work. Young persons with only little experience should include a letter of recommendation from the head of their research group or from a senior scientist active in the field.

The total fee, which includes full board and lodging (arranged by the School), is $1000 USD. Thanks to the generosity of the sponsoring Institutions, partial support can be granted to some deserving students who need financial help. Requests to this effect must be specified and justified in the application letter.

Closing date for application: July 15, 1996

No special application form is required. Admission to the Course will be decided in consultation with the Advisory Committee of the School, consisting of Professors D. Heckerman, M.I. Jordan, M. Marinaro and A. Zichichi. It is regretted that it will not be possible to allow any person not selected by the Committee of the School to follow the Course. Participants must arrive in Erice on September 27, no later than 5 p.m.

More information about this Course and the other activities of the Ettore Majorana Centre can be found on the WWW at the following address: http://www.ccsem.infn.it

D. Heckerman - M.I. Jordan
Directors of the Course

M.I. Jordan - M. Marinaro
Directors of the School

A. Zichichi
Director of the Centre

From obrad at sava.zfe.siemens.de Mon Mar 11 10:41:29 1996
From: obrad at sava.zfe.siemens.de (Dragan Obradovic)
Date: Mon, 11 Mar 1996 16:41:29 +0100
Subject: NEW BOOK ANNOUNCEMENT
Message-ID: <199603111541.QAA20158@sava.zfe.siemens.de>

--------------------------------------------------------------------
NEW BOOK -- NEW BOOK -- NEW BOOK -- NEW BOOK -- NEW BOOK -- NEW BOOK
--------------------------------------------------------------------

"An Information-Theoretic Approach to Neural Computing"
--------------------------------------------------------
Gustavo Deco and Dragan Obradovic (Springer Verlag)

Full details at: http://www.springer.de/springer-news/inf/inf_9602.new.html
ISBN 0-387-94666-7

Summary:
---------
Neural networks provide a powerful new technology to model and control nonlinear and complex systems. In this book, the authors present a detailed formulation of neural networks from the information-theoretic viewpoint. They show how this perspective provides new insights into the design theory of neural networks. In particular, they show how these methods may be applied to the topics of supervised and unsupervised learning, including feature extraction, linear and non-linear independent component analysis, and Boltzmann machines. Readers are assumed to have a basic understanding of neural networks, but all the relevant concepts from information theory are carefully introduced and explained. Consequently, readers from several different scientific disciplines, notably cognitive scientists, engineers, physicists, statisticians, and computer scientists, will find this to be a very valuable introduction to this topic.
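(A quick taste of the information-theoretic quantities the book starts from -- a minimal Python sketch, not taken from the book, using an arbitrary made-up joint distribution: entropy and the mutual information between two discrete variables.)

  import numpy as np

  def entropy(p):
      # Shannon entropy in bits; terms with p = 0 contribute 0
      p = p[p > 0]
      return -(p * np.log2(p)).sum()

  # joint distribution P(X, Y) of two binary variables
  pxy = np.array([[0.4, 0.1],
                  [0.1, 0.4]])
  px, py = pxy.sum(axis=1), pxy.sum(axis=0)

  # mutual information I(X;Y) = H(X) + H(Y) - H(X,Y)
  mi = entropy(px) + entropy(py) - entropy(pxy.ravel())
  print(mi)  # about 0.278 bits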
Contents:
---------

Acknowledgments vi
Foreword vii

CHAPTER 1 Introduction 1

CHAPTER 2 Preliminaries of Information Theory and Neural Networks 7
Elements of Information Theory 8
Entropy and Information 8
Joint Entropy and Conditional Entropy 9
Kullback-Leibler Entropy 9
Mutual Information 10
Differential Entropy, Relative Entropy and Mutual Information 11
Chain Rules 13
Fundamental Information Theory Inequalities 15
Coding Theory 21
Elements of the Theory of Neural Networks 23
Neural Network Modeling 23
Neural Architectures 24
Learning Paradigms 27
Feedforward Networks: Backpropagation 28
Stochastic Recurrent Networks: Boltzmann Machine 31
Unsupervised Competitive Learning 35
Biological Learning Rules 36

PART I: Unsupervised Learning

CHAPTER 3 Linear Feature Extraction: Infomax Principle 41
Principal Component Analysis: Statistical Approach 42
PCA and Diagonalization of the Covariance Matrix 42
PCA and Optimal Reconstruction 45
Neural Network Algorithms and PCA 51
Information Theoretic Approach: Infomax 57
Minimization of Information Loss Principle and Infomax Principle 58
Upper Bound of Information Loss 59
Information Capacity as a Lyapunov Function of the General Stochastic Approximation 61

CHAPTER 4 Independent Component Analysis: General Formulation and Linear Case 65
ICA-Definition 67
General Criteria for ICA 68
Cumulant Expansion Based Criterion for ICA 69
Mutual Information as Criterion for ICA 73
Linear ICA 79
Gaussian Input Distribution and Linear ICA 81
Networks With Anti-Symmetric Lateral Connections 84
Networks With Symmetric Lateral Connections 86
Examples of Learning with Symmetric and Anti-Symmetric Networks 89
Learning in Gaussian ICA with Rotation Matrices: PCA 91
Relationship Between PCA and ICA in Gaussian Input Case 93
Linear Gaussian ICA and the Output Dimension Reduction 94
Linear ICA in Arbitrary Input Distribution 95
Some Properties of Cumulants at the Output of a Linear Transformation 95
The Edgeworth Expansion Criteria and Theorem 4.6.2 99
Algorithms for Output Factorization in the Non-Gaussian Case 100
Experimental Results of Linear ICA Algorithms in the Non-Gaussian Case 102

CHAPTER 5 Nonlinear Feature Extraction: Boolean Stochastic Networks 109
Infomax Principle for Boltzmann Machines 110
Learning Model 110
Examples of Infomax Principle in Boltzmann Machine 113
Redundancy Minimization and Infomax for the Boltzmann Machine 119
Learning Model 119
Numerical Complexity of the Learning Rule 124
Factorial Learning Experiments 124
Receptive Fields Formation from a Retina 129
Appendix 132

CHAPTER 6 Nonlinear Feature Extraction: Deterministic Neural Networks 135
Redundancy Reduction by Triangular Volume Conserving Architectures 136
Networks with Linear, Sigmoidal and Higher Order Activation Functions 140
Simulations and Results 142
Unsupervised Modeling of Chaotic Time Series 146
Dynamical System Modeling 147
Redundancy Reduction by General Symplectic Architectures 156
General Entropy Preserving Nonlinear Maps 156
Optimizing a Parameterized Symplectic Map 157
Density Estimation and Novelty Detection 159
Example: Theory of Early Vision 163
Theoretical Background 164
Retina Model 165

PART II: Supervised Learning

CHAPTER 7 Supervised Learning and Statistical Estimation 169
Statistical Parameter Estimation - Basic Definitions 171
Cramer-Rao Inequality for Unbiased Estimators 172
Maximum Likelihood Estimators 175
Maximum Likelihood and the Information Measure 176
Maximum A Posteriori Estimation 178
Extensions of MLE to Include Model Selection 179
Akaike's Information Theoretic Criterion (AIC) 179
Minimal Description Length and Stochastic Complexity 183
Generalization and Learning on the Same Data Set 185

CHAPTER 8 Statistical Physics Theory of Supervised Learning and Generalization 187
Statistical Mechanics Theory of Supervised Learning 188
Maximum Entropy Principle 189
Probability Inference with an Ensemble of Networks 192
Information Gain and Complexity Analysis 195
Learning with Higher Order Neural Networks 198
Partition Function Evaluation 198
Information Gain in Polynomial Networks 202
Numerical Experiments 203
Learning with General Feedforward Neural Networks 205
Partition Function Approximation 205
Numerical Experiments 207
Statistical Theory of Unsupervised and Supervised Factorial Learning 208
Statistical Theory of Unsupervised Factorial Learning 208
Duality Between Unsupervised and Maximum Likelihood Based Supervised Learning 213

CHAPTER 9 Composite Networks 219
Cooperation and Specialization in Composite Networks 220
Composite Models as Gaussian Mixtures 222

CHAPTER 10 Information Theory Based Regularizing Methods 225
Theoretical Framework 226
Network Complexity Regulation 226
Network Architecture and Learning Paradigm 227
Applications of the Mutual Information Based Penalty Term 231
Regularization in Stochastic Potts Neural Network 237
Neural Network Architecture 237
Simulations 239

References 243
Index 259

Ordering information:
---------------------
ISBN 0-387-94666-7
US $49.95, DM 76

------------------------------------------------------------
Dr. Gustavo Deco and Dr. Dragan Obradovic
Siemens AG ZFE T SN 4
Corporate Research and Development
Otto-Hahn-Ring 6               Phone: +49/89/636-49499
D-81739 Munich                 Fax: +49/89/636-49767
Germany
E-Mail: Dragan.Obradovic at zfe.siemens.de
        Gustavo.Deco at zfe.siemens.de

From rjb at psy.ox.ac.uk Mon Mar 11 08:00:44 1996
From: rjb at psy.ox.ac.uk (Roland Baddeley)
Date: Mon, 11 Mar 1996 13:00:44 GMT
Subject: Position available in computational neuroscience
Message-ID: <199603111300.NAA05904@axp02.mrc-bbc.ox.ac.uk>

The following jobs may be of interest to readers of the connectionists mailing list.

UNIVERSITY OF OXFORD
DEPARTMENT OF EXPERIMENTAL PSYCHOLOGY

Posts in Computational Neuroscience and Visual Neurophysiology

The following posts are available as part of a long-term research programme combining neurophysiological and computational approaches to brain mechanisms of vision and memory (see Rolls, 1995, Behav. Brain Res. 66: 177-185; or Rolls, 1994, Behav. Processes 33: 113-138):

(1) Computational neuroscientist to make formal models and/or analyse by simulation the functions of visual cortical areas in invariant recognition.

(2) Neurophysiologist (preferably postdoctoral) to analyse the activity of single neurons in the temporal cortical visual areas.

The salaries are on the RS1A (postdoctoral) scale, 14,317-21,519 pounds, with support provided by a Programme Grant. Applications including the names of two referees, or enquiries, to Dr. Edmund T. Rolls, University of Oxford, Department of Experimental Psychology, South Parks Road, Oxford OX1 3UD, England (email Edmund.Rolls at psy.ox.ac.uk). The University exists to promote excellence in education and research. The University is an Equal Opportunity Employer.

From goldfarb at unb.ca Mon Mar 11 16:40:21 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Mon, 11 Mar 1996 17:40:21 -0400 (AST)
Subject: Shift Invariance
In-Reply-To: <199603090225.SAA14592@mizar.usc.edu>
Message-ID:

I would like to make one more comment.
Shift invariance should be properly thought of as invariance of a "final" object representation wrt TRANSLATIONS of the object (to use the term from linear algebra). This is not to be confused with the fact that the POSITION of the object is also encoded separately, when necessary. The latter has to do with the need to represent the entire "scene".

-- Lev

From tommi at psyche.mit.edu Mon Mar 11 14:28:07 1996
From: tommi at psyche.mit.edu (Tommi Jaakkola)
Date: Mon, 11 Mar 96 14:28:07 EST
Subject: Paper available: Upper and lower bounds on likelihoods
Message-ID: <9603111928.AA13475@psyche.mit.edu>

The following paper is available on the web at
http://web.mit.edu/~tommi/home.html
ftp://psyche.mit.edu/pub/tommi/jaak-ul-bounds.ps.Z

Computing upper and lower bounds on likelihoods in intractable networks

T. S. Jaakkola and M. I. Jordan

We present techniques for computing upper and lower bounds on the likelihoods of partial instantiations of variables in sigmoid and noisy-OR networks. The bounds determine confidence intervals for the desired likelihoods and become useful when the size of the network (or clique size) precludes exact computations. We illustrate the tightness of the obtained bounds by numerical experiments.

-Tommi

---------
The paper can also be retrieved via anonymous ftp:
ftp-host: psyche.mit.edu
ftp-file: pub/tommi/jaak-ul-bounds.ps.Z

From murase at synapse.fuis.fukui-u.ac.jp Mon Mar 11 20:31:02 1996
From: murase at synapse.fuis.fukui-u.ac.jp (Kazuyuki Murase)
Date: Tue, 12 Mar 1996 10:31:02 +0900
Subject: Associate Professor Position in Japan
Message-ID: <199603120131.KAA07787@synapse.fuis.fukui-u.ac.jp>

ASSOCIATE PROFESSOR IN BIOLOGICAL INFORMATION PROCESSING IN JAPAN

The Department of Information Science at Fukui University invites applications for an associate professor position in its Biological Information Processing Division, starting October 1996. The position requires a Ph.D. with postdoctoral research experience. Teaching and supervision of undergraduate and graduate research projects are essential. Japanese language ability is not required initially, but should be developed within a few years. Candidates with specific expertise in at least one of the following areas will be given higher priority: Electrophysiology of single cells or cellular networks, Sensory mechanisms of the spinal cord, Optical imaging of neuronal activities, Modeling of excitable cells or cellular networks, Artificial neural networks, Simulation and synthesis of biological behavior.

Applicants should send a curriculum vitae including a publication list and a brief description of future research plans by mail to Dr. Kazuyuki Murase, Department of Information Science, Fukui University, 3-9-1 Bunkyo, Fukui 910, Japan, or by E-mail to murase at synapse.fuis.fukui-u.ac.jp. Review of applications will begin immediately and continue until the position is filled. Fukui University is one of the Japanese National Universities.

From maja at garnet.cs.brandeis.edu Mon Mar 11 22:35:36 1996
From: maja at garnet.cs.brandeis.edu (Maja Mataric)
Date: Mon, 11 Mar 1996 22:35:36 -0500
Subject: AAAI Fall Symposium on Embodied Cognition and Action
Message-ID: <199603120335.WAA05729@garnet.cs.brandeis.edu>

!! PLEASE POST !! PLEASE POST !! PLEASE POST !! PLEASE POST !! PLEASE POST !!

Call For Participation

AAAI 1996 Fall Symposium on Embodied Cognition and Action
----------------------------------------------------------
to be held at MIT Nov 9-11, 1996

Submission Deadline: April 15, 1996.
The role of physical embodiment in cognition has long been the subject of debate. It is largely accepted in AI that embodiment has strong implications for the control strategies for generating purposive and intelligent behavior in the world. Some theories have proposed that embodiment not only constrains but may also facilitate certain types of higher-level cognition. Evidence from neuroscience allows for postulating shared mechanisms for low-level control of embodied action (e.g., motor plans for limb movement) and higher-level cognition (e.g., abstract plans). Work in animal behavior has also addressed the potential links between the two systems, and linguistic theories have long recognized the role of physical and spatial metaphors in language.

The symposium will study the role of embodiment in both scaling up control and grounding cognition. We will explore ways of extending the existing, typically low-level, sub-cognitive systems such as autonomous robots and agents, as well as grounding more abstract, typically disembodied, cognitive systems. We will draw from AI, ethology, neuroscience, and other sources in order to focus on the implications of embodiment in cognition and action, and explore work that has been done in the areas of applying physical metaphors to more abstract higher-level cognition.

Topics and questions of interest include:

* What spatial metaphors can be used for abstract/higher-level cognition?
* What non-spatial metaphors can be applied in higher-level cognition?
* What alternatives to symbolic representations (e.g., analogical, procedural, etc.) can be successfully employed in embodied cognition?
* How can evidence from neuroscience and ethology benefit work in synthetic embodied cognition and embodied AI? Can we gain more than just inspiration from biological data in this area? Are there specific constraints and/or mechanisms we can usefully model?
* (How) Do methods for modeling embodied insect and animal behavior scale up to higher-level cognition?
* How do metaphors from embodiment apply to everyday activity?
* What computational and representational structures are necessary and/or sufficient for enabling embodied cognition?
* What are some successfully implemented embodied cognition systems?

The symposium will focus on group discussions and panels, with a few inspiring presentations and overviews of relevant work.

Organizing committee:
---------------------
Dana Ballard, University of Rochester, dana at cs.rochester.edu;
Rod Brooks, MIT, brooks at ai.mit.edu;
Daniel Dennett, Tufts University, ddennett at pearl.tufts.edu;
Simon Giszter, Medical College of Pennsylvania, simon at SwampThing.medcolpa.edu;
Maja Mataric (chair), Brandeis University, maja at cs.brandeis.edu;
Erich Prem, Austrian AI Institute, erich at ai.univie.ac.at;
Terence Sanger, MIT, tds at ai.mit.edu;
Stefan Schaal, Georgia Tech, sschaal at cc.gatech.edu

Submission Information:
-----------------------
We invite the participation of researchers who have been working on embodied cognition and action in the fields of AI, neuroscience, ethology, and robotics. Prospective participants should submit a brief paper (5 pages or less) or an extended abstract describing their research or interests. Papers should be submitted electronically, in postscript or plain text format, via ftp to ftp.cs.brandeis.edu/pub/faculty/maja/aaai96-fs/. Participants will have an opportunity to contribute to the final working notes.
Detailed ftp instructions:
--------------------------
compress your-paper (both Unix compress and gzip commands are ok)
ftp ftp.cs.brandeis.edu (129.64.2.5, but check in case it has changed)
give anonymous as your login name
give your e-mail address as password
set transmission to binary (just type the command BINARY)
cd to /aaai96-fs
put your-paper

Relevant Dates:
---------------
Apr 15, 1996: Submissions due
May 17, 1996: Notification of acceptance given
Aug 23, 1996: Material for inclusion into the working notes due
Nov 9-11, 96: AAAI Fall Symposium

The WWW home page for this symposium can be found at:
http://www.cs.brandeis.edu/~maja/aaai96-fs/

From omlinc at cs.rpi.edu Tue Mar 12 09:54:08 1996
From: omlinc at cs.rpi.edu (omlinc@cs.rpi.edu)
Date: Tue, 12 Mar 96 09:54:08 EST
Subject: Shift Invariance
Message-ID: <9603121454.AA17032@colossus.cs.rpi.edu>

In his message <9602281000.ZM15421 at ICSI.Berkeley.edu>, Jerry Feldman wrote:

> 3) Shift invariance in time and recurrent networks.
>
> I threw in some (even more cryptic) comments on this anticipating that some
> readers would morph the original task into this form. The 0*1010* problem is
> an easy one for FSA induction and many simple techniques might work for this.
> But consider a task that is only slightly more general, and much more natural.
> Suppose the task is to learn any FSL from the class b*pb* where b and p are
> fixed for each case and might overlap. Any learning technique that just
> tried to predict (the probability of) successors will fail because there
> are three distinct regimes and the learning algorithm needs to learn this.
> I don't have a way to characterize all recurrent net learning algorithms to
> show that they can't do this and it will be interesting to see if one can.
> There are a variety of non-connectionist FSA induction methods that can
> effectively learn such languages, but they all depend on some overall measure
> of simplicity of the machine and its fit to the data - and are thus non-local.

This isn't really correct. First, any DFA can be represented in a recurrent neural network with sigmoidal discriminant functions, i.e. a network can be constructed such that the languages recognized by the DFA and its network implementation are identical (this implies stability of the internal DFA state representation for strings of arbitrary length) [1,2].

As far as learning DFAs with recurrent networks is concerned: In my experience, success or failure of a network to learn a particular grammar depends on the size of the DFA, its complexity (simple self loops as opposed to orbits of arbitrary length), the training data, and the order in which the training data is presented. For instance, we found that incremental learning, where the network is first trained on the shortest strings of data [8], is often crucial to successful convergence, since it is a means to overcome the problem of learning long-term dependencies with gradient descent [4] (for methods for overcoming that problem see [5,6,7]).

The `simplest' language of the form b*pb* might be 1*01*. A network with second-order weights and a single recurrent state neuron can learn that language within 100 epochs when trained on the first 100 strings in alphabetical order. Furthermore, the ideal DFA can also be extracted from the trained network [3]. See for example [9,10,11,12] for other extraction approaches.
For the language 1*011*, which is of the form b*pb* (notice the overlap of b and p), a second-order network with 3 recurrent state neurons easily converged within 200 epochs, and the ideal DFA can be extracted as well.

So, here are at least two examples which contradict the claim that "Any learning technique that just tried to predict (the probability of) successors will fail because there are three distinct regimes and the learning algorithm needs to learn this. I don't have a way to characterize all recurrent net learning algorithms to show that they can't do this and it will be interesting to see if one can."

Christian

-------------------------------------------------------------------
Christian W. Omlin, Ph.D.          Phone (609) 951-2691
NEC Research Institute             Fax: (609) 951-2438
4 Independence Way                 E-mail: omlinc at research.nj.nec.com
Princeton, NJ 08540                        omlinc at cs.rpi.edu
URL: http://www.neci.nj.nec.com/homepages/omlin/omlin.html
-------------------------------------------------------------------

=================================== Bibliography =======================================

[1] P. Frasconi, M. Gori, M. Maggini, G. Soda, "Representation of Finite State Automata in Recurrent Radial Basis Function Networks", Machine Learning, to be published, 1996.

[2] C.W. Omlin, C.L. Giles, "Stable Encoding of Large Finite-State Automata in Recurrent Neural Networks with Sigmoid Discriminants", Neural Computation, to be published, 1996.

[3] C.W. Omlin, C.L. Giles, "Extraction of Rules from Discrete-Time Recurrent Neural Networks", Neural Networks, Vol. 9, No. 1, p. 41-52, 1996.

[4] Y. Bengio, P. Simard, P. Frasconi, "Learning Long-Term Dependencies with Gradient Descent is Difficult", IEEE Transactions on Neural Networks (Special Issue on Recurrent Neural Networks), Vol. 5, p. 157-166, 1994.

[5] T. Lin, B.G. Horne, P. Tino, C.L. Giles, "Learning Long-Term Dependencies with NARX Recurrent Neural Networks", IEEE Transactions on Neural Networks, accepted for publication.

[6] S. El Hihi, Y. Bengio, "Hierarchical Recurrent Neural Networks for Long-Term Dependencies", Neural Information Processing Systems 8, MIT Press, 1996.

[7] S. Hochreiter, J. Schmidhuber, "Long Short Term Memory", Technical Report, Institut fuer Informatik, Technische Universitaet Muenchen, FKI-207-95, 1995.

[8] J.L. Elman, "Incremental Learning, or the Importance of Starting Small", Technical Report, Center for Research in Language, University of California at San Diego, CRL Tech Report 9101, 1991.

[9] S. Das, M.C. Mozer, "A Unified Gradient-descent/Clustering Architecture for Finite State Machine Induction", Advances in Neural Information Processing Systems 6, J.D. Cowan, G. Tesauro, J. Alspector (Eds.), p. 19-26, 1994.

[10] M.P. Casey, "Computation in Discrete-Time Dynamical Systems", Ph.D. Thesis, Department of Mathematics, University of California, San Diego, 1995.

[11] P. Tino, J. Sajda, "Learning and Extracting Initial Mealy Machines With a Modular Neural Network Model", Neural Computation, Vol. 7, No. 4, p. 822-844, 1995.

[12] R.L. Watrous, G.M. Kuhn, "Induction of Finite-State Languages Using Second-Order Recurrent Networks", Neural Computation, Vol. 4, No. 5, p. 406, 1992.
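(To make the set-up above concrete, here is a minimal Python sketch -- ours, not the training code used in the experiments; "alphabetical order" is assumed to mean shortest strings first -- of the labeled data for 1*01*, together with the three-state automaton whose regimes Feldman refers to.)

  import re
  from itertools import product

  def binary_strings(max_len):
      # enumerate binary strings shortest-first: '', '0', '1', '00', '01', ...
      yield ""
      for n in range(1, max_len + 1):
          for bits in product("01", repeat=n):
              yield "".join(bits)

  LANG = re.compile(r"1*01*")  # b*pb* with b = 1, p = 0

  def dfa_accepts(s):
      # three regimes: 0 = leading 1s, 1 = the single 0 has been seen, 2 = dead
      state = 0
      for c in s:
          if state == 0:
              state = 0 if c == "1" else 1
          elif state == 1:
              state = 1 if c == "1" else 2
          else:
              break  # dead state absorbs everything
      return state == 1

  # labeled training set: all 127 strings of length 0..6 with a membership flag
  data = [(s, dfa_accepts(s)) for s in binary_strings(6)]
  assert all(bool(LANG.fullmatch(s)) == label for s, label in data)

(The assert merely checks that the automaton and the regular expression agree on the training set.)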
From J.Heemskerk at dcs.shef.ac.uk Tue Mar 12 05:30:16 1996
From: J.Heemskerk at dcs.shef.ac.uk (Jan Heemskerk)
Date: Tue, 12 Mar 96 10:30:16 GMT
Subject: CALL FOR PARTICIPATION
Message-ID: <9603121030.AA04014@dcs.shef.ac.uk>

CALL FOR PARTICIPATION

** LEARNING IN ROBOTS AND ANIMALS **

An AISB-96 two-day workshop
University of Sussex, Brighton, UK: April 1st & 2nd, 1996
Co-Sponsored by IEE Professional Group C4 (Artificial Intelligence)

WORKSHOP ORGANISERS:
Noel Sharkey (chair), University of Sheffield, UK.
Gillian Hayes, University of Edinburgh, UK.
Jan Heemskerk, University of Sheffield, UK.
Tony Prescott, University of Sheffield, UK.

PROGRAMME COMMITTEE:
Dave Cliff, UK. Marco Dorigo, Italy. Frans Groen, Netherlands. John Hallam, UK. John Mayhew, UK. Martin Nillson, Sweden. Claude Touzet, France. Barbara Webb, UK. Uwe Zimmer, Germany. Maja Mataric, USA.

In the last five years there has been an explosion of research on Neural Networks and Robotics from both a self-learning and an evolutionary perspective. Within this movement there is also a growing interest in natural adaptive systems as a source of ideas for the design of robots, while robots are beginning to be seen as an effective means of evaluating theories of animal learning and behaviour. A fascinating interchange of ideas has begun between a number of hitherto disparate areas of research, and a shared science of adaptive autonomous agents is emerging. This two-day workshop proposes to bring together an international group both to present papers on their most recent research and to discuss the direction of this emerging field.

PROVISIONAL LIST OF PAPERS:

Robot Shaping - Principles, Methods & Architectures
  Simon Perkins and Gillian Hayes

Towards Autonomous Control using Connectionist 'Infinite State Automata'
  Tom Ziemke

Entropy-based Tradeoff between Exploration and Exploitation
  Ping Zhang and Stephane Canu

Evolving a Hierarchical Control System for Co-operating Autonomous Robots
  Robert Ghanea-Hercock & David P Barnes

Evolutionary Learning of task achieving behaviours
  Myra S Wilson, Clive King and John E Hunt

The design of learning for an artifact
  Joanna Bryson

Robot See, Robot Do: An Overview of Robot Imitation
  Paul Bakker and Yasuo Kuniyoshi

Does Dynamics Solve the Symbol Grounding Problem of Robots? An Experiment in Navigation Learning
  Jun Tani

Abstracting Fuzzy Behavioural Rules From Geometric Models in Mobile Robotics
  A G Pipe, T C Fogarty and A Winfield

Brave Mobots Use Representation
  Chris Thornton

Explore/Exploit Strategies in Autonomous Learning
  Stewart W Wilson

Environment memory for a mobile robot using place cells
  Ken Harris, David Lee and Michael Recce

Representations on a mobile robot
  Noel Sharkey and Jan Heemskerk

Layered control architectures in natural and artificial systems
  Tony J Prescott

REGISTRATION INFORMATION:
http://www.cogs.susx.ac.uk:80/users/christ/aisb/aisb96/index.html
ftp ftp.cogs.susx.ac.uk

From heckerma at MICROSOFT.com Tue Mar 12 18:05:00 1996
From: heckerma at MICROSOFT.com (David Heckerman)
Date: Tue, 12 Mar 1996 15:05:00 -0800
Subject: paper available: Efficient Approximations for the Marginal Likelihood...
Message-ID:

The following paper is available on the web at
http://www.research.microsoft.com/research/dtg/heckerma/heckerma.html

Efficient Approximations for the Marginal Likelihood of Incomplete Data Given a Bayesian Network

D. Chickering and D. Heckerman
MSR-TR-96-08

A Bayesian score often used in model selection is the marginal likelihood of data (or "evidence") given a model.
We examine asymptotic approximations for the marginal likelihood of incomplete data given a Bayesian network. We consider the well-known Laplace and BIC/MDL approximations, as well as approximations proposed by Draper (1993) and Cheeseman and Stutz (1995). In experiments using synthetic data generated from discrete naive-Bayes models having a hidden root node, we find the Cheeseman-Stutz measure to be the best in that it is as accurate as the Laplace approximation and as efficient as the BIC/MDL approximation.

The paper can also be retrieved via anonymous ftp:
ftp-host: ftp.research.microsoft.com
ftp-file: pub/tech-reports/winter95-96/tr-96-08.ps

-David

From jordan at psyche.mit.edu Tue Mar 12 19:36:49 1996
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Tue, 12 Mar 96 19:36:49 EST
Subject: International School on Neural Nets "E.R. Caianiello"
Message-ID:

The enclosed is a correction and amplification of the earlier message regarding next fall's ``Learning in Graphical Models'' Advanced Study Institute in Erice, Sicily.

Mike Jordan

------------------------------------------------------------------------

Galileo Galilei Foundation
World Federation of Scientists
Ettore Majorana Centre for Scientific Culture

Galileo Galilei Celebrations
Four Hundred Years Since the Birth of Modern Science

International School on Neural Nets ``E.R. Caianiello''
1st Course: Learning in Graphical Models
A NATO Advanced Study Institute
Erice-Sicily: 27 September - 7 October 1996

Sponsored by the:
- European Union
- International Institute for Advanced Scientific Studies (IIASS)
- Italian Institute for Philosophical Studies
- Italian Ministry of Education
- Italian Ministry of University and Scientific Research
- Italian National Research Institute (CNR)
- Sicilian Regional Government
- University of Salerno

Lecturers will include:

J. Whittaker, University of Lancaster, UK
D. Madigan, University of Washington, USA
D. Geiger, Technion, Israel
U. Kjaerulff, Aalborg University, Denmark
R. Cowell, University College, London, UK
M. Studeny, Academy of Sciences, Czech Republic
M. Jordan, MIT, USA
S. Omohundro, NEC Research, USA
D. Heckerman, Microsoft Research, USA
G. Cooper, University of Pittsburgh, USA
W. Buntine, Thinkbank, USA
L. Saul, MIT, USA
J. Buhmann, University of Bonn, Germany
N. Tishby, Hebrew University, Israel
D. MacKay, University of Cambridge, UK
D. Spiegelhalter, MRC, Cambridge, UK
J. Pearl, UCLA, USA

Topics will include:

Introduction to graphical models (directed and undirected graphs)
Inference (probabilistic propagation, junction trees, conditioning)
Properties of conditional independence (Markov properties, separation)
Chain graphs
Mixture models, hidden Markov models
Neural networks
Data structures for efficient estimation (bump trees, ball trees)
Bayesian methods
Structure learning (metrics, search, approximations)
Priors
Statistical mechanical methods (decimation, mean field)
Markov chain Monte Carlo (importance sampling, Gibbs sampling, hybrid MC)
Bayesian graphical models (BUGS software)
Learning and phase transitions
Clustering and multidimensional scaling
Model selection and averaging
Surface learning and family discovery
Online learning
Causality

Purpose of the course

Neural networks and Bayesian belief networks are learning and inference methods that have been developed in two largely distinct research communities.
The purpose of this Course is to bring together researchers from these two communities and study both kinds of networks as instances of a general unified graphical formalism. The Course will focus on probabilistic methods for learning in graphical models, with attention paid to algorithm analysis and design, theory and applications.

General Information

Persons wishing to attend the Course should apply in writing to:

- Prof. Maria Marinaro
  IIASS "E.R. Caianiello"
  Via G. Pellegrino, 19
  84019 Vietri sul mare (SA), Italy
  Tel: + 39 89 761167
  Fax: + 39 89 761189

They should specify: i) date and place of birth together with present nationality; ii) degree and other academic qualifications; iii) present position and place of work. Young persons with only little experience should include a letter of recommendation from the head of their research group or from a senior scientist active in the field.

The total fee, which includes full board and lodging (arranged by the School), is $1000 USD. Thanks to the generosity of the sponsoring Institutions, partial support can be granted to some deserving students who need financial help. Requests to this effect must be specified and justified in the application letter.

Closing date for application: July 15, 1996

No special application form is required. Admission to the Course will be decided in consultation with the Advisory Committee of the School, consisting of Professors D. Heckerman, M.I. Jordan, M. Marinaro and A. Zichichi. It is regretted that it will not be possible to allow any person not selected by the Committee of the School to follow the Course. Participants must arrive in Erice on September 27, no later than 5 p.m.

More information about this Course and the other activities of the Ettore Majorana Centre can be found on the WWW at the following address: http://www.ccsem.infn.it

D. Heckerman - M.I. Jordan - J. Whittaker
Directors of the Course

M.I. Jordan - M. Marinaro
Directors of the School

A. Zichichi
Director of the Centre

From wermter at nats5.informatik.uni-hamburg.de Wed Mar 13 10:42:21 1996
From: wermter at nats5.informatik.uni-hamburg.de (Stefan Wermter)
Date: Wed, 13 Mar 1996 16:42:21 +0100
Subject: book on language learning: connectionist statistical symbolic approaches
Message-ID: <199603131542.QAA25501@nats13.informatik.uni-hamburg.de>

[I am posting this to several relevant mailing lists -- apologies to those who, by subscribing to multiple lists, receive multiple copies of this announcement.]

BOOK ANNOUNCEMENT
-----------------

Title: Connectionist, statistical, and symbolic approaches to learning for natural language processing

Editors: Stefan Wermter, Ellen Riloff, Gabriele Scheler

Date: March 1996 (first week in Europe)

[order information and WWW reference for the book (access to the first chapter) at the end of this message]

Brief description
-----------------
The purpose of this book is to present a collection of papers that represents a broad spectrum of current research in learning methods for natural language processing, and to advance the state of the art in language learning and artificial intelligence. The book should bridge a gap between several areas that are usually discussed separately, including connectionist, statistical, and symbolic methods.

Table of contents
-----------------

Introduction: Learning approaches for natural language processing
  S. Wermter, E. Riloff, G. Scheler

Part 1: Connectionist Networks and Hybrid Approaches
----------------------------------------------------

Separating learning and representation
  N.E. Sharkey, A.J.C. Sharkey

Natural language grammatical inference: a comparison of recurrent neural networks and machine learning methods
  S. Lawrence, S. Fong, C. L. Giles

Extracting rules for grammar recognition from Cascade-2 networks
  R. Hayward, A. Tickle, J. Diederich

Generating English plural determiners from semantic representations: a neural network learning approach
  G. Scheler

Knowledge acquisition in concept and document spaces by using self-organizing neural networks
  W. Winiwarter, E. Schweighofer, D. Merkl

Using hybrid connectionist learning for speech/language analysis
  V. Weber, S. Wermter

SKOPE: A connectionist/symbolic architecture of spoken Korean processing
  G. Lee, J.-H. Lee

Integrating different learning approaches into a multilingual spoken language translation system
  P. Geutner, B. Suhm, F.-D. Buo, T. Kemp, L. Mayfield, A. E. McNair, I. Rogina, T. Schultz, T. Sloboda, W. Ward, M. Woszczyna, A. Waibel

Learning language using genetic algorithms
  T. C. Smith, I. H. Witten

Part 2: Statistical Approaches
---------------------------------------------------

A statistical syntactic disambiguation program and what it learns
  M. Ersan, E. Charniak

Training stochastic grammars on semantical categories
  W.R. Hogenhout, Y. Matsumoto

Learning restricted probabilistic link grammars
  E. W. Fong, D. Wu

Learning PP attachment from corpus statistics
  A. Franz

A minimum description length approach to grammar inference
  P. Gruenwald

Automatic classification of dialog acts with semantic classification trees and polygrams
  M. Mast, H. Niemann, E. Noeth, E. G. Schukat-Talamazzini

Sample selection in natural language learning
  S. P. Engelson, I. Dagan

Part 3: Symbolic Approaches
---------------------------------------------------

Learning information extraction patterns from examples
  S. B. Huffman

Implications of an automatic lexical acquisition system
  P. M. Hastings

Using learned extraction patterns for text classification
  E. Riloff

Issues in inductive learning of domain-specific text extraction rules
  S. Soderland, D. Fisher, J. Aseltine, W. Lehnert

Applying machine learning to anaphora resolution
  C. Aone, S. W. Bennett

Embedded machine learning systems for natural language processing: a general framework
  C. Cardie

Acquiring and updating hierarchical knowledge for machine translation based on a clustering technique
  T. Yamazaki, M. J. Pazzani, C. Merz

Applying an existing machine learning algorithm to text categorization
  I. Moulinier, J.-G. Ganascia

Comparative results on using inductive logic programming for corpus-based parser construction
  J. M. Zelle, R. J. Mooney

Learning the past tense of English verbs using inductive logic programming
  R. J. Mooney, M. E. Califf

A dynamic approach to paradigm-driven analogy
  S. Federici, V. Pirrelli, F. Yvon

Can punctuation help learning?
  M. Osborne

Using parsed corpora for circumventing parsing
  A. K. Joshi, B. Srinivas

A symbolic and surgical acquisition of terms through variation
  C. Jacquemin

A revision learner to acquire verb selection rules from human-made rules and examples
  S. Kaneda, H. Almuallim, Y. Akiba, M. Ishii, T. Kawaoka

Learning from texts - a terminological metareasoning perspective
  U. Hahn, M. Klenner, K. Schnattinger

**************************************************************

Bibliographic Data and Ordering Information:

Editors: Stefan Wermter, Univ. of Hamburg, Germany
         Ellen Riloff, Univ. of Utah, Salt Lake City, USA
         Gabriele Scheler, Munich Univ. of Tech., Germany
Title: Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing
Publisher: Springer-Verlag
ISBN: 3-540-60925-3
Pages: 468 + 9
Available: Europe: March 6, 1996; North America: around March 25, 1996
Subseries: Lecture Notes in Artificial Intelligence LNAI 1040
Cover: Softcover under Color Jacket Cover
List Price: DM 86.00, approx. USD 68.00

With this information, any academic bookseller worldwide with a reasonable computer science program should be able to provide copies of the book. Otherwise, one can also order through any Springer office directly, particularly through Berlin and Secaucus, as mentioned in the following special offer to Springer Authors. If you aren't a Springer Author you aren't entitled to make use of the special discount, but the ordering addresses are the same.

**********************************************************

SPECIAL OFFER: SPRINGER-AUTHOR DISCOUNT

All Authors or Editors of Springer Books, in particular Authors contributing to any LNCS or LNAI Proceedings, are entitled to buy any book published by Springer-Verlag for personal use at the "Springer-Author" discount of 33 1/3 % off the list price. Such preferential orders can only be processed through Springer directly (and not through book stores); reference to a Springer publication has to be given with such orders to any Springer office, particularly to the ones in Berlin and New York:

Springer-Verlag
Order Processing Department
Postfach 31 13 40
D-10643 Berlin
Germany
FAX: +49 30 8207 301

Springer-Verlag New York, Inc.
P.O. Box 2485
Secaucus, NJ 07096-2485
USA
FAX: +1 201 348 4033
Phone: 1-800-SPRINGER (1 800 777 4647), toll-free in USA

Preferential orders can also be placed by sending an email to orders at springer.de

Shipping charges are DEM 5.00 per book for orders sent to Berlin, and USD 2.50 (plus USD 1.00 for each additional book) for orders sent to the Secaucus office. Payment of the book(s) plus shipping charges can be made by giving a credit card number together with the expiration date (American Express, Eurocard/Mastercard, Diners, and Visa are accepted) or by enclosing a check (mail orders only).

******************************************************************************
*Dr Stefan Wermter                        University of Hamburg              *
*                                         Dept. of Computer Science         *
*                                         Vogt-Koelln-Strasse 30            *
*email: wermter at informatik.uni-hamburg.de D-22527 Hamburg                   *
*phone: +49 40 54715-531                  Germany                           *
*fax:   +49 40 54715-515                                                    *
*http://www.informatik.uni-hamburg.de/Arbeitsbereiche/NATS/staff/wermter.html*
******************************************************************************

From ted at SPENCER.CTAN.YALE.EDU Wed Mar 13 14:45:08 1996
From: ted at SPENCER.CTAN.YALE.EDU (ted@SPENCER.CTAN.YALE.EDU)
Date: Wed, 13 Mar 1996 19:45:08 GMT
Subject: Postdoctoral positions available
Message-ID: <199603131945.TAA19483@PLANCK.CTAN.YALE.EDU>

The Neuroengineering and Neuroscience Center at Yale University is seeking to build a pool of qualified scientists and engineers to participate in research on applications of pattern recognition in engineering and medicine. Applicants must have a Ph.D. and demonstrated expertise in one or more of the following fields: pattern recognition, signal processing, machine learning, adaptive control, artificial neural networks, image analysis. Successful candidates will participate in highly creative and interdisciplinary projects of major scientific and social importance. Please send curriculum vitae and list of professional references to Prof. K.S.
Narendra, Director
NNC
5 Science Park North
New Haven, CT 06511

Yale University is an Affirmative Action/Equal Opportunity employer. Women and Minorities encouraged to apply.

From laura at mpipf-muenchen.mpg.de Thu Mar 14 11:41:29 1996
From: laura at mpipf-muenchen.mpg.de (Laura Martignon)
Date: Thu, 14 Mar 1996 17:41:29 +0100
Subject: paper available: "Bayesian Learning of loglinear models for neuron connectivity"
Message-ID:

Kathryn Laskey and I have just finished the paper:

"Bayesian Learning of loglinear models for neuron connectivity"

Kathryn Laskey
Department of Systems Engineering
George Mason University
Fairfax, VA 22030
klaskey at gmu.edu

Laura Martignon
Max Planck Institute for Psychological Research
80802 München, Germany
laura at mpipf-muenchen.mpg.de

Abstract

This paper presents a Bayesian approach to learning the connectivity structure of a group of neurons from data on configuration frequencies. A major objective of the research is to provide statistical tools for detecting changes in firing patterns with changing stimuli. Our framework is not restricted to the well-understood case of pair interactions, but generalizes the Boltzmann machine model to allow for higher order interactions. The paper applies a Markov Chain Monte Carlo Model Composition (MC3) algorithm to search over connectivity structures and uses Laplace's method to approximate posterior probabilities of structures. Performance of the methods was tested on synthetic data. The models were also applied to data obtained by Vaadia on multi-unit recordings of several neurons in the visual cortex of a rhesus monkey in two different attentional states. Results confirmed the experimenters' conjecture that different attentional states were associated with different interaction structures.

Keywords: Nonhierarchical loglinear models, Markov Chain Monte Carlo Model Composition, Laplace's Method, Neural Networks

To obtain a copy of this paper, please send your email request to Laura Martignon, e-mail: laura at mpipf-muenchen.mpg.de

From itl-rec at thuban.crd.ge.com Thu Mar 14 15:12:55 1996
From: itl-rec at thuban.crd.ge.com (itlrecruiting)
Date: Thu, 14 Mar 96 15:12:55 EST
Subject: Job: Data Mining / Neural Nets / Artificial Intelligence
Message-ID: <9603142012.AA20879@thuban.crd.ge.com>

The Information Technology Laboratory (80 people strong and still growing) at the Corporate Research & Development Center of General Electric in Schenectady, New York has the following position to offer:

R&D Staff opportunity in Data Mining/Analysis/Warehousing

BACKGROUND REQUIRED: PhD in Computer Science, Statistics, Artificial Intelligence or related field with a broad knowledge base. Strong interpersonal skills, good initiative and analytical skills, adaptable to change, high self confidence. Excellent computer skills required: e.g. either hands-on experience in implementing data storage and access solutions for multi-million record databases or hands-on experience in sampling and Data Mining / knowledge discovery analysis algorithms on multi-million record databases.

DESIRED ALSO: Experience with C++ object-oriented programming.

WE OFFER A CHALLENGING PERSPECTIVE: Develop and apply modern statistical methods, machine learning techniques and neural nets to a variety of strategically important and technically significant problems throughout GE, involving finance, product development, manufacturing and process improvement, and product servicing and reliability.
Lead work with analysts, engineers, and managers in the diverse GE businesses, e.g. Aircraft Engines, Capital Services, Medical Systems, NBC, Plastics, and Appliances, and with scientists at the Research & Development Center.

---------------------------------------------------------------------------

FOR YOUR INTEREST: GE is one of the world's largest and most successful companies, having leadership positions in business segments including aircraft engines, plastics, manufacturing, capital services, and others. GE has its Corporate Research and Development Center located in Schenectady, New York (more at http://www.ge.com/). It supports the advanced technology requirements of all GE businesses. The 1000-plus staff of scientists and engineers is composed of representatives of most major disciplines (more at http://www.crd.ge.com/).

APPLICATION: If you meet the requirements and you are interested, please send your resume via electronic email in plain ASCII format to itl-rec at thuban.crd.ge.com (Steve Mirer). Please include where you found this ad and put "DATA MINING" in the subject line. BTW, we are recruiting world-wide to get the best possible match.

----------------------------------------------------------------
GE is an equal opportunity employer.

From arbib at pollux.usc.edu Thu Mar 14 18:25:04 1996
From: arbib at pollux.usc.edu (Michael A. Arbib)
Date: Thu, 14 Mar 1996 15:25:04 -0800 (PST)
Subject: Workshop on Sensorimotor Coordination
Message-ID: <199603142325.PAA15747@pollux.usc.edu>

FINAL CALL FOR PAPERS

Workshop on SENSORIMOTOR COORDINATION: AMPHIBIANS, MODELS, AND COMPARATIVE STUDIES
Poco Diablo Resort, Sedona, Arizona, November 22-24, 1996

Co-Directors: Kiisa Nishikawa (Northern Arizona University, Flagstaff) and Michael Arbib (University of Southern California, Los Angeles).

Local Arrangements Chair: Kiisa Nishikawa. E-mail enquiries may be addressed to Kiisa.Nishikawa at nau.edu or arbib at pollux.usc.edu. Further information may be found on our home page at http://www.nau.edu:80/~biology/vismot.html.

Program Committee: Kiisa Nishikawa (Chair), Michael Arbib, Emilio Bizzi, Chris Comer, Peter Ewert, Simon Giszter, Mel Goodale, Ananda Weerasuriya, Walt Wilczynski, and Phil Zeigler.

SCIENTIFIC PROGRAM

The aim of this workshop is to study the neural mechanisms of sensorimotor coordination in amphibians and other model systems for their intrinsic interest, as a target for developments in computational neuroscience, and also as a basis for comparative and evolutionary studies. The list of subsidiary themes given below is meant to be representative of this comparative dimension, but is not intended to be exhaustive. The emphasis (but not the exclusive emphasis) will be on papers that encourage the dialog between modeling and experimentation. A decision as to whether or not to publish a proceedings is still pending.

Central Theme: Sensorimotor Coordination in Amphibians and Other Model Systems

Subsidiary Themes:
Visuomotor Coordination: Comparative and Evolutionary Perspectives
Reaching and Grasping in Frog, Pigeon, and Primate
Cognitive Maps
Auditory Communication (with emphasis on spatial behavior and sensory integration)
Motor Pattern Generators

This workshop is the sequel to four earlier workshops on the general theme of "Visuomotor Coordination in Frog and Toad: Models and Experiments". The first two were organized by Rolando Lara and Michael Arbib at the University of Massachusetts, Amherst (1981) and Mexico City (1982).
The next two were organized by Peter Ewert and Arbib in Kassel and Los Angeles, respectively, with the Proceedings published as follows:

Ewert, J.-P. and M. A. Arbib (Eds.) 1989. Visuomotor Coordination: Amphibians, Comparisons, Models and Robots. New York: Plenum Press.

Arbib, M. A. and J.-P. Ewert (Eds.) 1991. Visual Structures and Integrated Functions, Research Notes in Neural Computing 3. Heidelberg, New York: Springer Verlag.

INSTRUCTIONS FOR CONTRIBUTORS

Persons who wish to present oral papers are asked to send three copies of an extended abstract, approximately 4 pages long, including figures and references. Persons who wish to present posters are asked to send a one page abstract. Abstracts may be sent by regular mail, e-mail or FAX. Authors should be aware that e-mailed abstracts should contain no figures. Abstracts should be sent no later than 1 May, 1996 to: Kiisa Nishikawa, Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640, E-mail: Kiisa.Nishikawa at nau.edu; FAX: (520)523-7500. Notification of the Program Committee's decision will be sent out no later than 15 June, 1996.

REGISTRATION INFORMATION

Meeting Location and General Information: The Workshop will be held at the Poco Diablo Resort in Sedona, Arizona (a beautiful small town set in dramatic red hills) immediately following the Society for Neuroscience meeting in 1996. The 1996 Neuroscience meeting ends on Thursday, November 21, so workshop participants can fly from Washington, DC to Phoenix, AZ that evening, meet Friday, Saturday, and Sunday, with a Workshop Banquet on Sunday evening, and fly home on Monday, November 25th. Paper sessions will be held all day on Friday, on Saturday afternoon, and all day on Sunday. Poster sessions will be held on Saturday afternoon and evening. A group field trip is planned for Saturday morning.

Graduate Student and Postdoctoral Participation: In order to encourage the participation of graduate students and postdoctorals, we have arranged for affordable housing, and in addition we are able to offer a reduced registration fee (see below) thanks to the generous contribution of the Office of the Associate Provost for Research and Graduate Studies at Northern Arizona University.

Travel from Phoenix to Sedona: Sedona, AZ is located approximately 100 miles north of Phoenix, where the nearest major airport (Sky Harbor) is located. Workshop attendees may wish to arrange their own transportation (e.g., car rental from Phoenix airport) from Phoenix to Sedona, or they may use the Workshop Shuttle (estimated round trip cost $20 US) to Sedona on 21 November, with a return to Phoenix on 25 November. If you plan to use the Workshop Shuttle, we will need to know your expected arrival time in Phoenix by 1 October 1996, to ensure that space is available for you at a convenient time.

Lodging: The following costs are for each night. Since many participants may want to extend their stay to further enjoy Arizona's scenic beauty, we have negotiated special rates for additional nights after the end of the workshop on November 24th. Attendees should make their own booking with the Poco Diablo Resort, by phone (800) 352-5710 or FAX (520) 282-9712.

Thurs.-Fri. (and additional week nights before the workshop) per night: students $85 US + tax, faculty $105 + tax
Sat.-Sun. (and additional week nights after the workshop) per night: students $69 + tax, faculty $89 + tax.

The student room rates are for double occupancy.
Thus, students willing to share a room may stay for half the stated rate. When you make your room reservations with the Poco Diablo Resort, please be sure to indicate the number of guests in your party. Graduate students and postdocs should be sure to indicate whether they want single or double occupancy.

REGISTRATION FEES: Students and postdoctorals $100; faculty, guests and others $200. The registration fee includes lunch Fri. - Sun., a wine and cheese reception during the Saturday evening poster session, and a Farewell Dinner on Sunday evening. Registration fees should be paid by check in US funds, made payable to "Sensorimotor Coordination Workshop", and should be sent to Kiisa Nishikawa at the address listed below, together with the completed registration form that follows at the end of this announcement. Completed registration forms and fees must be received by 1 July, 1996. Late registration fees will be $150 for students and postdoctorals and $250 for faculty.

REGISTRATION FORM

NAME:
ADDRESS:
PHONE:
FAX:
EMAIL:
STATUS: [ ] Faculty ($200); [ ] Postdoctoral ($100); [ ] Student ($100); [ ] Other ($200).
(Postdocs and students: Please attach certification of your status signed by your supervisor.)
TYPE OF PRESENTATION (paper vs. poster):
ABSTRACT SENT: (yes/no)
AREAS OF INTEREST RELEVANT TO WORKSHOP:
WILL YOU REQUIRE ANY SPECIAL AUDIOVISUAL EQUIPMENT FOR YOUR PRESENTATION?
HAVE YOU MADE A RESERVATION WITH THE HOTEL?
EXPECTED TIME OF ARRIVAL IN PHOENIX (ON NOVEMBER 21):
EXPECTED TIME OF DEPARTURE FROM PHOENIX (ON NOVEMBER 25):
DO YOU WISH TO USE THE WORKSHOP SHUTTLE TO TRAVEL FROM PHOENIX TO SEDONA? (If so, please be sure that we know your expected arrival time by 1 October!)
DO YOU WISH TO PARTICIPATE IN A GROUP HIKE IN THE SEDONA AREA ON SATURDAY MORNING?

Please make sure that your check (in US funds and payable to the "Sensorimotor Coordination Workshop") is included with this form. If you plan to bring a guest with you to the Workshop, please add their name(s) to this form and enclose their registration fee along with your own. Mail to: Kiisa Nishikawa, Department of Biological Sciences, Northern Arizona University, Flagstaff, AZ 86011-5640. E-mail: Kiisa.Nishikawa at nau.edu. FAX: (520)523-7500. Phone: (520)523-9497.

From rjb at psy.ox.ac.uk Fri Mar 15 09:49:03 1996
From: rjb at psy.ox.ac.uk (Roland Baddeley)
Date: Fri, 15 Mar 1996 14:49:03 GMT
Subject: Paper available on exploratory projection pursuit.
Message-ID: <199603151449.OAA06560@axp02.mrc-bbc.ox.ac.uk>

The following paper is available on the web at http://www.mrc-bbc.ox.ac.uk/~rjb/ It has been accepted for publication in Network.

TITLE: Searching for filters with ``interesting'' output distributions: an uninteresting direction to explore?

Abstract

It has been proposed that the receptive fields of neurons in V1 are optimised to generate ``sparse'', kurtotic, or ``interesting'' output probability distributions \cite{Barlow92,Barlow94,Field94,Intrator91,Intrator92d}. We investigate the empirical evidence for this further and argue that filters can produce ``interesting'' output distributions simply because natural images have variable local intensity variance. If the proposed filters have zero D.C., then the probability distribution of filter outputs (and hence the output kurtosis) is well predicted simply from these effects of variable local variance. This suggests that finding filters with high output kurtosis does not necessarily signal interesting image structure.
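The claim just made is easy to check numerically: a zero-D.C. filter applied to noise whose local variance varies from patch to patch yields a high-kurtosis output even though no image structure is present. The sketch below is an illustration only, not the authors' code; the two-tap filter and the lognormal spread of patch standard deviations are arbitrary choices.

import numpy as np

rng = np.random.default_rng(0)

def excess_kurtosis(x):
    # fourth moment over squared variance, minus 3 (a Gaussian gives ~0)
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean()**2 - 3.0

# A zero-D.C. filter: the difference of two neighbouring samples.
filt = np.array([1.0, -1.0])

def filter_outputs(signal):
    return np.convolve(signal, filt, mode="valid")

# Fixed local variance: plain Gaussian noise.
fixed = rng.normal(0.0, 1.0, size=100_000)

# Variable local variance: patches whose standard deviation itself varies.
stds = rng.lognormal(0.0, 1.0, size=1_000)
variable = np.concatenate([rng.normal(0.0, s, size=100) for s in stds])

print("fixed variance   :", excess_kurtosis(filter_outputs(fixed)))     # ~ 0
print("variable variance:", excess_kurtosis(filter_outputs(variable)))  # >> 0

The second number comes out strongly positive although the input is pure noise, which is exactly the point: high output kurtosis can reflect a mixture of local variances rather than interesting structure.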
It is then argued that finding filters that maximise output kurtosis generates filters that are incompatible with observed physiology. In particular, the optimal difference-of-Gaussians (DOG) filter should have the smallest possible scale, an on-centre off-surround cell should have a negative D.C., and the ratio of centre width to surround width should approach unity. This is incompatible with the physiology. Further, it is also predicted that oriented filters should always be oriented in the vertical direction, and of all the filters tested, the filter with the highest output kurtosis has the lowest signal-to-noise ratio (the filter is simply the difference of two neighbouring pixels). Whilst these observations are not incompatible with the brain using a sparse representation, they do suggest that little significance should be placed on finding filters with highly kurtotic output distributions. It is therefore argued that other constraints are required in order to understand the development of visual receptive fields.

FILE: http://www.mrc-bbc.ox.ac.uk/ftp/users/rjb/rjb_kur.ps.Z

--
Roland Baddeley
Research Fellow, MRC Centre for Cognitive Neuroscience, University of Oxford
Experimental Psychology, Oxford University, South Parks Road, Oxford, OX1 3UD, UK
email: rjb at psy.ox.ac.uk
phone: +44-1865-271914
fax: +44-1865-272488

From terry at salk.edu Fri Mar 15 15:31:49 1996
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 15 Mar 96 12:31:49 PST
Subject: Telluride Workshop - Deadline April 5
Message-ID: <9603152031.AA05918@salk.edu>

WORKSHOP ON NEUROMORPHIC ENGINEERING
JUNE 24 - JULY 14, 1996
TELLURIDE, COLORADO

Deadline for application is April 5, 1996.

Christof Koch (Caltech) and Terry Sejnowski (Salk Institute/UCSD) invite applications for one three-week workshop that will be held in Telluride, Colorado in 1996. The first two Telluride Workshops on Neuromorphic Engineering, held in the summers of 1994 and 1995 and sponsored by NSF and co-funded by the "Center for Neuromorphic Systems Engineering" at Caltech, were resounding successes. A summary of these workshops, together with a list of participants, is available from: http://www.klab.caltech.edu/~timmer/telluride.html or http://www.salk.edu/~bryan/telluride.html

GOALS: Carver Mead introduced the term "Neuromorphic Engineering" for a new field based on the design and fabrication of artificial neural systems, such as vision systems, head-eye systems, and roving robots, whose architecture and design principles are based on those of biological nervous systems. The goal of this workshop is to bring together young investigators and more established researchers from academia with their counterparts in industry and national laboratories, working on both neurobiological as well as engineering aspects of sensory systems and sensory-motor integration. The focus of the workshop will be on "active" participation, with demonstration systems and hands-on experience for all participants. Neuromorphic engineering has a wide range of applications from nonlinear adaptive control of complex systems to the design of smart sensors. Many of the fundamental principles in this field, such as the use of learning methods and the design of parallel hardware, are inspired by biological systems. However, existing applications are modest, and the challenge of scaling up from small artificial neural networks and designing completely autonomous systems at the levels achieved by biological systems lies ahead.
The assumption underlying this three-week workshop is that the next generation of neuromorphic systems would benefit from closer attention to the principles found through experimental and theoretical studies of brain systems.

FORMAT: The three-week workshop is co-organized by Dana Ballard (Rochester, US), Rodney Douglas (Zurich, Switzerland) and Misha Mahowald (Zurich, Switzerland). It is composed of lectures, practical tutorials on aVLSI design, hands-on projects, and interest groups. Apart from the lectures, the activities run concurrently. However, participants are free to attend any of these activities at their own convenience. There will be two lectures in the morning that cover issues that are important to the community in general. Because of the diverse range of backgrounds among the participants, the majority of these lectures will be tutorials, rather than detailed reports of current research. These lectures will be given by invited speakers. Participants will be free to explore and play with whatever they choose in the afternoon. Projects and interest groups meet in the late afternoons and after dinner.

The aVLSI practical tutorials will cover all aspects of aVLSI design, simulation, layout, and testing over the course of the three weeks. The first week covers the basics of transistors, simple circuit design and simulation. This material is intended for participants who have no experience with aVLSI. The second week will focus on design frames for silicon retinas, from the silicon compilation and layout of on-chip video scanners, to building the peripheral boards necessary for interfacing aVLSI retinas to video output monitors. Retina chips will be provided. The third week will feature a session on floating gates, including lectures on the physics of tunneling and injection, and experimentation with test chips.

Projects that are carried out during the workshop will be centered in four groups: 1) active perception, 2) elements of autonomous robots, 3) robot manipulation, and 4) multichip neuron networks.

The "active perception" project group will emphasize vision and human sensory-motor coordination and will be organized by Dana Ballard and Mary Hayhoe (Rochester). Issues to be covered will include spatial localization and constancy, attention, motor planning, eye movements, and the use of visual motion information for motor control. Demonstrations will include a robot head active vision system consisting of a three degree-of-freedom binocular camera system that is fully programmable. The vision system is based on a DataCube videopipe, which in turn provides drive signals to the three motors of the head. Projects will involve programming the DataCube to implement a variety of vision/oculomotor algorithms.

The "elements of autonomous robots" group will focus on small walking robots. It will look at characteristics and sources of parts for building robots, play with working examples of legged robots, and discuss CPGs and theories of nonlinear oscillators for locomotion. It will also explore the use of simple aVLSI sensors for autonomous robots.

The "robot manipulation" group will use robot arms and working digital vision boards to investigate issues of sensory-motor integration, passive compliance of the limb, and learning of inverse kinematics and inverse dynamics.
The "multichip neuron networks" project group will use existing interchip communication interfaces to program small networks of artificial neurons to exhibit particular behaviors such as amplification, oscillation, and associative memory (a toy software sketch of such oscillatory dynamics follows this announcement). Issues in multichip communication will be discussed.

PARTIAL LIST OF INVITED LECTURERS: Dana Ballard, Rochester. Randy Beer, Case-Western Reserve. Kwabena Boahen, Caltech. Avis Cohen, Maryland. Tobi Delbruck, Arithmos, Palo Alto. Steve DeWeerth, Georgia Tech. Chris Diorio, Caltech. Rodney Douglas, Zurich. John Elias, Delaware University. Stefano Fusi, Italy. Mary Hayhoe, Rochester. Geoffrey Hinton, Toronto. Ian Horswill, NWU. Christof Koch, Caltech. Shih-Chii Liu, Caltech and Rockwell. Misha Mahowald, Zurich. Stefan Schaal, Georgia Tech. Mark Tilden, Los Alamos. Terry Sejnowski, Salk Institute and UC San Diego. Paul Viola, MIT.

LOCATION AND ARRANGEMENTS: The workshop will take place at the "Telluride Summer Research Center," located in the small town of Telluride, 9000 feet high in Southwest Colorado, about 6 hours away from Denver (350 miles) and 5 hours from Aspen. Continental and United Airlines provide many daily flights directly into Telluride. Participants will be housed in shared condominiums, within walking distance of the Center. Bring hiking boots and a backpack, since Telluride is surrounded by beautiful mountains (several mountains are in the 14,000+ range). The workshop is intended to be very informal and hands-on. Participants are not required to have had previous experience in analog VLSI circuit design, computational or machine vision, systems level neurophysiology or modeling the brain at the systems level. However, we strongly encourage active researchers with relevant backgrounds from academia, industry and national laboratories to apply, in particular if they are prepared to talk about their work or to bring demonstrations to Telluride (e.g. robots, chips, software). Internet access will be provided. Technical staff present throughout the workshops will assist with software and hardware issues. We will have a network of SUN workstations running UNIX, one or two Macs and a few PCs running Windows and Linux. We have funds to reimburse some participants for up to $500 of domestic travel and for all housing expenses. Please specify on the application whether such financial help is needed. Unless otherwise arranged with one of the organizers, we expect participants to stay for the duration of this three-week workshop.

HOW TO APPLY: The deadline for receipt of applications is April 5, 1996. Applicants should be at the level of graduate students or above (i.e. post-doctoral fellows, faculty, research and engineering staff and the equivalent positions in industry and national laboratories). We actively encourage qualified women and minority candidates to apply.

Application should include:
1. Name, address, telephone, e-mail, FAX, and minority status (optional).
2. Curriculum Vitae.
3. One page summary of background and interests relevant to the workshop.
4. Description of special equipment needed for demonstrations that could be brought to the workshop.
5. Two letters of recommendation.

Complete applications should be sent to:
Prof. Terrence Sejnowski
The Salk Institute
10010 North Torrey Pines Road
San Diego, CA 92037
email: terry at salk.edu
FAX: (619) 587 0417

Applicants will be notified around May 1, 1996.
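As a minimal software taste of the oscillatory behaviors mentioned in the "multichip neuron networks" description above, here is a sketch, in Python rather than silicon, of two leaky integrate-and-fire units with mutual inhibition settling into alternating, anti-phase firing. It is an illustration only, with arbitrary parameters, and is unrelated to the workshop's actual chips or software.

import numpy as np

dt, T = 0.1, 60.0                    # time step and duration (ms)
tau, v_th, v_reset = 10.0, 1.0, 0.0  # membrane time constant, threshold, reset
drive = 1.5                          # suprathreshold constant input to both units
w_inh = 0.6                          # inhibitory kick delivered by a rival spike

v = np.array([0.0, 0.5])             # asymmetric start breaks the tie
spike_times = ([], [])

for step in range(int(T / dt)):
    v += (dt / tau) * (drive - v)    # leaky integration toward 'drive'
    for i in np.flatnonzero(v >= v_th):
        spike_times[i].append(round(step * dt, 1))
        v[i] = v_reset               # spike and reset...
        v[1 - i] -= w_inh            # ...while inhibiting the other unit

print("unit 0 spikes at:", spike_times[0])
print("unit 1 spikes at:", spike_times[1])

Running it shows each unit firing in the gaps left by the other, a toy cousin of the oscillation and central-pattern-generator behaviors that the project groups pursue in hardware.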
From ib at rana.usc.edu Fri Mar 15 19:02:03 1996
From: ib at rana.usc.edu (Irving Biederman)
Date: Fri, 15 Mar 1996 16:02:03 -0800
Subject: Shift Invariance
Message-ID: <199603160002.QAA01395@mizar.usc.edu>

Shimon Edelman (March 8) writes:

[Omission of some of posting]

>Note that the purpose of my previous posting was to advocate caution,
>certainly not to argue that all claims of invariance are wrong.
>Fortunately, my job in this matter is easy: just one example of a
>manifest lack of invariance suffices to invalidate the strong version
>of invariance-based theory of vision, which seems to be espoused by
>Goldfarb:
>So, here it goes... Whereas invariance does hold in many recognition
>tasks (in particular, in Biederman's experiments, as well as in the
>experiments reported in [1]), it does not in others (as, e.g., in [2],
>where interaction between size invariance and orientation is
>reported). A recent comprehensive survey of (the far from invariant)
>human performance in recognizing rotated objects can be found in
>[3]. Furthermore, not only recognition, but also perceptual learning,
>seems to be non-invariant in some cases; see [4,5].

[Omission of rest of posting]

It should be so easy. Of course, ALL of vision is not shift invariant (I don't believe that Goldfarb was asserting that it was), as there is clear evidence that people are, for example, quite sensitive to the location of objects when they reach out to grasp them. The issue of shift invariance was specifically raised, not for ALL of vision, but for the domain of (what should be called) object recognition, what I termed "primal access" (Biederman, '87), in which basic-level or (most) subordinate-level classification is made from a large and uncertain population of objects, as when channel surfing. I think that readers who are not familiar with some of the literature cited in Edelman's posting might be misled into thinking that shift invariance in object recognition is a special case. As I noted in my previous posting, the evidence is quite strong that object recognition tasks, at the same time that they show a visual (and not just verbal or conceptual) benefit from a single presentation in an experiment, also show shift invariance. (They also show size, scale, reflection, and rotation-in-depth invariance, as long as the same parts and relations are readily distinguished.)

Edelman points out that there have been reports of view-dependency for depth rotation, not shift, in "recognition" tasks. (Goldfarb specifically exempted rotation.) But even for depth rotation, readers should note that the findings of large rotation costs are found only for extremely difficult discrimination tasks, performed only rarely in normal visual activities, in which viewpoint-invariant information is generally not available, such as distinguishing among a set of highly similar bent paper clips. Why would invariance not be found with extremely difficult tasks? When tasks are difficult, subjects will attempt various strategies (e.g., look to the left [a dorsal function?] for a small, distinguishing feature) that might produce a cost of view-change, but this does not mean that the representation of the feature (or object) itself is not invariant. All in all, the absence of an effect of a view-change puts one in a simpler explanatory position (assuming adequate power) than when an effect of view change (say, a shift) is found.
The latter kind of result means that one has to eliminate other task variables as potential bases of the effect, such as a search for a distinguishing feature, as noted above. A finding of an effect of a change in viewpoint in "object recognition" might or might not mean that the representation of the object is viewpoint dependent. The "view-based" camp will have to demonstrate that the representation of an object (for primal access) really does change when it is shifted, or shown at a different size, or orientation in depth (assuming that the same parts are in view). They haven't done this yet. Whether a TASK (NOT A REPRESENTATION) does or does not manifest shift invariance might well depend on the degree to which it reflects dorsal (motor interaction) vs. ventral (recognition) cortical representations. The manifestation of these invariances nicely dovetails with the phenomenon of "object constancy" noted by the Gestaltists, in which the perception of the real object is largely unaffected by its translation or rotation. It is of interest that patient D. F., studied by Milner and Goodale, who presumably has a damaged ventral pathway, shows no awareness of objects while at the same time being able to reach competently for them.

My views on these matters of view invariance (especially of rotation in depth) are more fully presented in:

1. Biederman, I., & Gerhardstein, P. C. (1993). Recognizing depth-rotated objects: Evidence and conditions for 3D viewpoint invariance. Journal of Experimental Psychology: Human Perception and Performance, 19, 1162-1182.

2. Biederman, I., & Gerhardstein, P. C. (1995). Viewpoint-dependent mechanisms in visual object recognition: Reply to Tarr and Bülthoff (1995). Journal of Experimental Psychology: Human Perception and Performance, 21, 1506-1514.

3. Biederman, I., & Bar, M. (1995). One-Shot Viewpoint Invariance with Nonsense Objects. Paper presented at the Annual Meeting of the Psychonomic Society, 1995, Los Angeles, November. Available on our WWW site: http://rana.usc.edu:8376/~ib/iul.html

From terry at salk.edu Fri Mar 15 18:47:24 1996
From: terry at salk.edu (Terry Sejnowski)
Date: Fri, 15 Mar 96 15:47:24 PST
Subject: Neural Computation 8:3 Titles
Message-ID: <9603152347.AA08606@salk.edu>

Neural Computation - Volume 8, Number 3 - April 1, 1996

Long Article:

A Smoothing Regularizer for Feedforward and Recurrent Neural Networks
  Lizhong Wu and John Moody

Notes:

Note on the Maxnet Dynamics
  John P. F. Sum and Peter K. S. Tam

Optimizing Synaptic Conductance Calculation for Network Simulations
  William W. Lytton

Letters:

Parameter Extraction from Population Codes: A Critical Assessment
  Herman P. Snippe

Energy Efficient Neural Codes
  William B. Levy and Robert A. Baxter

A Nonlinear Hebbian Network that Learns to Detect Disparity in Random-Dot Stereograms
  Christopher W. Lee and Bruno A. Olshausen

Coupling the Neural and Physical Dynamics in Rhythmic Movements
  Nicholas G. Hatsopoulos

Predictive Minimum Description Length Criterion for Time Series Modeling with Neural Networks
  Mikko Lehtokangas, Jukka Saarinen, Pentti Huuhtanen and Kimmo Kaski

Minimum Description Length, Regularization and Multi-Model Data
  Richard Rohwer and John C. van der Rest

VC Dimension of an Integrate-and-Fire Neuron Model
  Anthony M. Zador and Barak A. Pearlmutter

The VC-Dimension and Pseudodimension of Two-Layer Neural Networks with Discrete Inputs
  Peter L. Bartlett and Robert C. Williamson

A Theoretical and Experimental Account of N-Tuple Classifier Performance
  Richard Rohwer and Michal Morciniec

The Effects of Adding Noise During Backpropagation Training on a Generalization Performance
  Guozhong An

-----

ABSTRACTS - http://www-mitpress.mit.edu/jrnls-catalog/neural.html

SUBSCRIPTIONS - 1996 - VOLUME 8 - 8 ISSUES
______ $50 Student and Retired
______ $78 Individual
______ $220 Institution

Add $28 for postage and handling outside USA (+7% GST for Canada). (Back issues from Volumes 1-7 are regularly available for $28 each to institutions and $14 each for individuals. Add $5 for postage per issue outside USA (+7% GST for Canada).)

mitpress-orders at mit.edu
MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
Tel: (617) 253-2889 FAX: (617) 258-6779

-----

From ruppin at math.tau.ac.il Sun Mar 17 06:53:46 1996
From: ruppin at math.tau.ac.il (Eytan Ruppin)
Date: Sun, 17 Mar 1996 14:53:46 +0300 (GMT+0300)
Subject: Memory Consolidation Workshop
Message-ID: <199603171153.OAA17322@gemini.math.tau.ac.il>

WORKSHOP ANNOUNCEMENT - TAU - May 28-30, 1996
----------------------------------------------

MEMORY ORGANIZATION AND CONSOLIDATION: COGNITIVE AND COMPUTATIONAL PERSPECTIVES
-----------------------------------

Adams Super Center for Brain Studies, Tel-Aviv University

A workshop on Memory Organization and Consolidation, sponsored by Branco Weiss, will be held during May 28-30, 1996 at Tel-Aviv University, Israel. Invited speakers from different disciplines of the Neurosciences will discuss psychological, neurological, physiological and computational perspectives of the subject. An informal atmosphere will be maintained, encouraging questions and discussions.

WORKSHOP PROGRAM
----------------

Tuesday, May 28
---------------

8:45AM: Opening address

Session 1: Chair: Bruce McNaughton
----------------------------------
9:00AM: Morris Moscovitch and Lynn Nadel - Consolidation: The dynamics of memory systems in humans and animals.
9:50AM: James McClelland - Why there are complementary learning systems in the brain.
10:40AM: Coffee break
11:10AM: Daniel Amit - Thinking about learning in the context of active memory.
12:00PM: Discussion
12:30PM: Lunch break

Session 2: Chair: Jay McClelland
--------------------------------
1:45PM: Bruce McNaughton - The Hippocampus, Space, and Memory Consolidation: Towards a Grand Unified Theory. Part I: A multichart neuronal architecture for both integration of self motion in arbitrary spatial reference frames and memory reprocessing.
2:40PM: Edi Barkai - Cellular mechanisms underlying memory consolidation in the Piriform Cortex.
3:30PM: Coffee break
3:50PM: Ilan Golani - Spatial memory in rat unconstrained exploratory behavior.
4:40PM: Michael Hasselmo - A model of human memory based on the cellular physiology of the hippocampal formation.
5:30PM: Discussion
6:00PM: Get Together and Poster Session.

Wednesday, May 29
-----------------

Session 3: Chair: Daniel Amit
-----------------------------
9:00AM: Bruce McNaughton - The Hippocampus, Space, and Memory Consolidation: Towards a Grand Unified Theory. Part II: Coherence of hippocampal and neocortical memory reactivation during off-line processing.
9:50AM: Avi Karni - Cortical plasticity and adult skill learning: Time and practice are of the essence.
10:40AM: Coffee break
11:10AM: Richard Thompson - Declarative memory in classical conditioning? Involvement of the hippocampus.
12:00PM: Discussion
12:30PM: Lunch break

Session 4: Chair: David Horn
----------------------------
1:45PM: James McClelland - Representation and memory in the hippocampal system: acquisition, maintenance and recovery of novel, arbitrary associations.
2:40PM: Mark Gluck - Neurocomputational approaches to integrating animal and human models of memory.
3:30PM: Coffee break
3:50PM: Alessandro Treves - Quantitative constraints on consolidation.
4:40PM: Eytan Ruppin - Synaptic maintenance, consolidation and sleep.
5:30PM: Discussion
6:00PM: Dinner

Thursday, May 30
----------------

Session 5: Chair: Mark Gluck
----------------------------
9:00AM: Richard Thompson - Localization of a memory trace in the mammalian brain: The cerebellum and classical conditioning.
9:50AM: Matty Mintz - Fast acquisition of fear and slow acquisition of motor reflexes in classical conditioning: Interdependent processes?
10:40AM: Coffee break
11:10AM: Lynn Nadel - Memory consolidation: Multiple modules and multiple time-scales.
12:00PM: Discussion
12:30PM: Lunch break

Session 6: Chair: Richard Thompson
----------------------------------
1:45PM: Morris Moscovitch - Structural and functional components of memory in humans.
2:40PM: Yadin Dudai - Taste, novelty, and a molecular saliency switch in the brain.
3:30PM: Coffee break
3:50PM: Amos Korczyn - Clinical aspects of forgetting.
4:40PM: Martin Albert - Memory and Language.
5:30PM: Discussion
6:00PM: Closing Remarks

CALL FOR ABSTRACTS
------------------

Individuals wishing to present a poster related to any aspect of the workshop's theme should submit an abstract describing the nature of their presentation. The single page submission should include title, author(s), contact information (address and email/fax), and abstract, and will be reviewed by the Program Committee. Abstract submissions should be sent to Eytan Ruppin, Dept. of Computer Science, Tel-Aviv University, Tel-Aviv, Israel, 69978. Email: ruppin at math.tau.ac.il. All submissions should arrive by April 15th, 1996.

REGISTRATION
------------

To register for the workshop, contact Ms. Bila Lenczner, Adams Super Center for Brain Studies, Tel Aviv University, Tel Aviv 69978, Israel, Tel.: 972-3-6407377, Fax: 972-3-6407932, email: memory at neuron.tau.ac.il

Program Committee:
------------------
David Horn, Michael Myslobodsky and Eytan Ruppin

Further Information and updates:
--------------------------------
See our WWW homepage at http://www.brain.tau.ac.il, or http://neuron.tau.ac.il/Adams/memory.

From xjwang at xjwang.ccs.brandeis.edu Sun Mar 17 18:05:20 1996
From: xjwang at xjwang.ccs.brandeis.edu (Xiao Jing Wang)
Date: Sun, 17 Mar 96 18:05:20 EST
Subject: No subject
Message-ID: <9603172305.AA06494@xjwang.ccs.brandeis.edu>

POSTDOCTORAL POSITION AT THE SLOAN CENTER FOR THEORETICAL NEUROBIOLOGY, BRANDEIS UNIVERSITY
==================================================================

Dear Colleagues,

I am looking for a post-doctoral research associate in computational neuroscience, starting August/September, 1996. Current research in my lab is focused on two kinds of topics: coherent cortical oscillations and their functional roles; working-memory processes and their neuromodulation. Projects are expected to be carried out in close interactions and collaborations with experimental neurobiologists. Candidates with a strong theoretical background, analytical and simulation skills, and knowledge in neuroscience are encouraged to apply.
Brandeis University, near Boston, offers excellent opportunities in this interdisciplinary field. Other faculty members at the Sloan Center include Drs. Laurence Abbott, Eve Marder, John Lisman, Sasha Nelson, Gina Turrigiano.

Applicants should promptly send a curriculum vitae and a brief description of fields of interest, and have three letters of recommendation sent to the following address.

Xiao-Jing Wang
Center for Complex Systems
Brandeis University
Waltham, MA 02254
phone: (617) 736-3147
email: xjwang at volen.brandeis.edu

From payman at ebs330.eb.uah.edu Fri Mar 15 12:33:17 1996
From: payman at ebs330.eb.uah.edu (Payman Arabshahi)
Date: Fri, 15 Mar 96 11:33:17 CST
Subject: CIFEr'96 fast approaching - register now!
Message-ID: <9603151733.AA24900@ebs330>

(1st reminder announcement after publicizing the CFP on CONNECTIONISTS).

The 1996 IEEE/IAFE Computational Intelligence in Financial Engineering Conference will be held March 24-26 1996 at the Crowne Plaza Manhattan, New York. This is one of the leading forums for new technologies and applications in the intersection of computational intelligence and financial engineering. You can still register for the conference. Please visit our homepage at http://www.ieee.org/nnc/conferences/cfp or drop me a line for complete information and registration details.

--
Payman Arabshahi
Dept. of Electrical & Computer Eng., University of Alabama in Huntsville, Huntsville, AL 35899
Tel: (205) 895-6380
Fax: (205) 895-6803
payman at ebs330.eb.uah.edu
http://www.eb.uah.edu/ece

From piuri at elet.polimi.it Mon Mar 18 14:32:31 1996
From: piuri at elet.polimi.it (Vincenzo Piuri)
Date: Mon, 18 Mar 1996 20:32:31 +0100
Subject: NICROSP'96 - call for participation
Message-ID: <9603181932.AA24645@ipmel2.elet.polimi.it>

================================================================================
NICROSP'96
1996 International Workshop on Neural Networks for Identification, Control, Robotics, and Signal/Image Processing
Ramada Hotel, Venice, Italy - 21-23 August 1996
================================================================================

Sponsored by the IEEE Computer Society and the IEEE CS Technical Committee on Pattern Analysis and Machine Intelligence. In cooperation with: ACM SIGART, IEEE Circuits and Systems Society, IEEE Control Systems Society, IEEE Instrumentation and Measurement Society, IEEE Neural Network Council, IEEE North-Italy Section, IEEE Region 8, IEEE System, Man, and Cybernetics Society, IMACS, ISCA, AEI, AICA, ANIPLA, FAST.

This first edition of the workshop is intended to create a unique synergetic discussion forum and a strong link between theoretical researchers and practitioners in the application fields of identification, control, robotics, and signal/image processing by using neural techniques. The three-day single-session schedule will provide the ideal environment for in-depth analysis and discussions concerning the theoretical aspects of the applications and the use of neural networks in practice. Two keynote speakers (Prof. T. Kohonen and Prof. B. Widrow) will provide starting points for the discussion.

ORGANIZERS

General Chair: Prof. Edgar Sanchez-Sinencio, Department of Electrical Engineering, Texas A&M University, USA
Program Chair: Prof. Vincenzo Piuri, Department of Electronics and Information, Politecnico di Milano, Italy
Publication Chair: Dr. Jose' Pineda de Gyvez, Department of Electrical Engineering, Texas A&M University, USA
Publicity, Registration & Local Arrangement Chair: Dr. Cesare Alippi, Department of Electronics and Information, Politecnico di Milano, Italy
Workshop Secretariat: Ms. Laura Caldirola (email caldirol at elet.polimi.it), Department of Electronics and Information, Politecnico di Milano, Italy

Program Committee

Shun-Ichi Amari, University of Tokyo, Japan
Panos Antsaklis, University of Notre Dame, USA
Magdy Bayoumi, University of Southwestern Louisiana, USA
James C. Bezdek, University of West Florida, USA
Pierre Borne, Ecole Polytechnique de Lille, France
Luiz Caloba, Universidad Federal de Rio de Janeiro, Brazil
Jill Card, Digital Equipment Corp., USA
Chris De Silva, University of Western Australia, Australia
Laurene Fausett, Florida Institute of Technology, USA
C. Lee Giles, NEC, USA
Karl Goser, University of Dortmund, Germany
Yee-Wei Huang, Motorola Inc., USA
Simon Jones, University of Loughborough, UK
Michael Jordan, Massachusetts Institute of Technology, USA
Robert J. Marks II, University of Washington, USA
Jean D. Nicoud, EPFL, Switzerland
Eros Pasero, Politecnico di Torino, Italy
Emil M. Petriu, University of Ottawa, Canada
Alberto Prieto, University of Granada, Spain
Gianguido Rizzotto, SGS-Thomson, Italy
Edgar Sanchez-Sinencio, A&M University, USA
Bernd Schuermann, Siemens, Germany
Earl E. Swartzlander, University of Texas at Austin, USA
Philip Treleaven, University College London, UK
Kenzo Watanabe, Shizuoka University, Japan
Michel Weinfeld, Ecole Polytechnique de Paris, France

GENERAL INFORMATION

The conference location is on the mainland of Venice, Italy. The workshop will be held in the Ramada Hotel, in S. Giuliano, near the International Airport of Venice. A number of rooms have been reserved at the Ramada Hotel for the NICROSP attendees at the special rates shown in the Hotel Reservation Form. This American-style hotel is fully equipped to provide a high level of comfort during the whole stay. Buffet breakfast is included in the hotel rates. Lunches are included in the registration fee for registered attendees, as well as coffee breaks, entrance to sessions, and one copy of the proceedings. Additional lunch tickets for companions may be purchased at the registration desk.

Hotel reservation must be made directly with the Ramada Hotel at S. Giuliano - Venice by sending the Hotel Reservation Form (fax is preferred). Reservations can also be made by contacting any other Ramada Reservation Center around the world and mentioning the special rates for the NICROSP conference. Reservation deadline is June 21, 1996. After this date, the hotel will not be able to guarantee room availability; should the hotel be completely booked, Ramada will suggest equivalent accommodations in nearby hotels. Disabled persons should contact the hotel for possible special requirements. The Ramada Hotel grants the same workshop rates from August 16 till August 26.

TRANSPORTATION

Venice is served by an International Airport (about 15 minutes by car from the Ramada Hotel). Flights are daily available from most European towns and from some US cities. A special shuttle service for NICROSP attendees may be organized by the Ramada Hotel from/to the airport: since the shuttle fare is fixed and independent of the number of passengers, attendees should contact the Ramada Hotel to coordinate car pools. At the airport it is possible to rent a car to reach the Ramada Hotel (guest parking is available within the hotel). Maps and directions are available at the car rentals.
Taxi cabs are also available; the typical fare from the airport to the Ramada Hotel is approximately 40,000 Italian lire.

Venice has good and frequent international rail connections. Use the Mestre station (every train stops there). Taxi cabs are available at the station exit; the typical fare is approximately 20,000 Italian lire.

If you decide to drive to the workshop site, ask for a map at the workshop secretariat: leave the highway to Venice at the Mestre-Est exit or at the Mestre-Ovest exit and then follow the map. Guest parking is available within the hotel.

The entrance to downtown Venice (piazzale Roma) may be reached by a shuttle service for hotel guests (provided by the Ramada Hotel at scheduled times), by public bus (also available at scheduled times), or by taxi cabs. Public boats for downtown Venice and for the lagoon islands leave from piazzale Roma. Additional information and time scheduling for transportation between the Ramada Hotel and downtown Venice will be provided at the workshop registration desk or at the hotel reception.

TECHNICAL PROGRAM

It will be available after 8 April 1996. Ask for it at the workshop secretariat.

FURTHER QUESTIONS

For any problem or further information, contact the workshop secretariat by July 26. After this date, contact Prof. Vincenzo Piuri by email only (email piuri at elet.polimi.it).

================================================================================
NICROSP'96 HOTEL RESERVATION FORM

Please: Return this form as soon as possible (fax is preferred) to:
RAMADA HOTEL, via Orlanda 4, I-30173 S. Giuliano, Venezia, Italy
Fax +39-41-5312278

Reservation deadline is June 21, 1996. After this date, the hotel will not be able to guarantee room availability; should the hotel be completely booked, Ramada will suggest equivalent accommodations in nearby hotels.

Last / First Name_______________________________________________________________
Company/University______________________________________________________________
Department______________________________________________________________________
Address_________________________________________________________________________
City__________________________________State/Country_____________________________
Telephone_______________________________________________________________________
Fax_____________________________________________________________________________
Date__________________________________Signature_________________________________

Please reserve the following accommodations:
o No. __ Single room(s) at 162,000 Italian lire
o No. __ Double room(s) at 262,000 Italian lire
o No. __ Twin room(s) at 262,000 Italian lire

Cross the preferred accommodation and insert the number of rooms that you are reserving for each type (otherwise, one is assumed for each cross). Room rates are per night and include buffet breakfast.
Arrival date and approximate time:______________________________________________
Departure date and approximate time:____________________________________________
Number of nights:_______________________________________________________________

For late arrival, please give credit card information:
o Eurocard o MasterCard o Access o VISA
Credit Card Number_________________________________________Valid until__________
Card Holder (Last/First Name)___________________________________________________
Signature__________________________________________Date_________________________

================================================================================
NICROSP'96 WORKSHOP REGISTRATION FORM

Please: Return this form as soon as possible by fax or mail (email is not accepted) to the workshop's secretariat:
Ms. Laura Caldirola
Politecnico di Milano, Department of Electronics and Information
Piazza L. da Vinci 32, 20133 Milano, Italy
phone +39-2-2399-3623, fax +39-2-2399-3411

Last / First Name_______________________________________________________________
Company/University______________________________________________________________
Department______________________________________________________________________
Address_________________________________________________________________________
City______________________________________State/Country_________________________
Telephone_________________________________Fax___________________________________
E-mail__________________________________________________________________________
Date______________________________________Signature_____________________________

Registration fee                    if received before 1 June    after 1 June
o Member                            320 US$                      385 US$
  (o IEEE o ACM o AEI o AICA o ANIPLA o IMACS o ISCA  Member No.__________)
o Non Member                        400 US$                      480 US$
o Student                           200 US$                      200 US$
  (enclose copy of student identification card)
o Banquet: No. ___ tickets at 70 US$ each: Total _________US$

Registration fees include entrance to sessions, one copy of the proceedings, and coffee breaks.

Advance payment can be made by credit card:
o MasterCard o VISA o American Express
Credit Card Number __________________________________Valid until_____________
Card Holder (Last/First Name)________________________________________________
Total charged_______________________US$
Signature____________________________________________Date____________________

On-site registration can be paid in Italian lire or in US$ (daily exchange rates - including bank exchange charge - will be provided at the registration desk), by credit card or cash.

================================================================================

From gps0%eureka at gte.com Mon Mar 18 11:00:34 1996
From: gps0%eureka at gte.com (Gregory Piatetsky-Shapiro)
Date: Mon, 18 Mar 1996 11:00:34 -0500
Subject: Job at GTE Laboratories: Data Mining and Knowledge Discovery
Message-ID: <9603181600.AA22544@eureka.gte.com>

**** An Outstanding Applied Researcher/Developer needed for the **********
**** Knowledge Discovery in Databases project at GTE Laboratories **********

TASK: Participate in the design and development of state-of-the-art systems for data mining and knowledge discovery. While the job will have significant research aspects, the focus will be on the development of prototypes to be used in a production setting. Our current projects include predictive customer modeling and analysis of healthcare information. We are applying multiple learning and discovery methods to very large, high-dimensional real-world databases.
The ideal candidate will have a Ph.D. in Machine Learning or related fields and 2-3 years of experience, or an M.S. with equivalent experience. The candidate should have experience with a variety of machine learning algorithms, be familiar with statistical theory, have practical experience with databases, and be proficient with Web/Internet tools. Excellent coding skills in a C/Unix environment and the ability to quickly pick up new systems and languages are needed. Good communication skills, the ability to work as part of a team, and good system maintenance practices are very desirable. The candidate will join one of the leading R&D teams in the area of data mining and knowledge discovery.

GTE Laboratories Incorporated is the central research facility for GTE and supports GTE's telecommunications businesses. We are the largest local exchange telephone company and the second largest mobile service provider in the United States. Our research facility is located on a quiet 50-acre campus-like setting in Waltham, MA, 20 minutes from downtown Boston. Our salaries are competitive, and our outstanding benefits include medical/life/dental insurance, pension plan, saving and investment plans, and an on-site fitness center. We have a workforce of approximately 500 employees. Proper work authorization required.

Please send a resume and a cover letter (preferably by e-mail, in ASCII) to:

Gregory Piatetsky-Shapiro
GTE Laboratories, MS-44
40 Sylvan Road
Waltham MA 02154-1120
e-mail: gps at gte.com
tel: 617-466-4236
fax: 617-466-2960
URL: http://info.gte.com/~kdd/gps.html

From jose at scr.siemens.com Mon Mar 18 08:34:54 1996
From: jose at scr.siemens.com (Stephen Hanson)
Date: Mon, 18 Mar 1996 08:34:54 -0500 (EST)
Subject: Fwd: test
Message-ID:

If you know of anyone who might have interest, please pass this on to them, or please post locally---thanks. Steve

Please circulate this job posting!

----------------------------Original message----------------------------

Visiting Research Fellow
The James S. McDonnell Foundation

The James S. McDonnell Foundation, a major private philanthropy supporting research in education and the behavioral and biomedical sciences, is seeking an energetic, resourceful professional to fill the position of Visiting Research Fellow (VRF). The VRF will work closely with the President and the Program Officer on projects related to the Foundation's current national and international programs and assist in identifying new program opportunities related to the Foundation's interests. The VRF will be responsible for developing the Foundation's technological capabilities, specifically in the use of the Internet and other communication systems. The VRF will be encouraged to develop a research project relevant to the Foundation and the VRF's interests that includes using the Foundation to develop a model system of how foundations and non-profit organizations can best incorporate technology to achieve their mission and enhance their productivity and efficiency.

The position will be filled by a recent graduate of a doctoral program in psychology, cognitive science or cognitive neuroscience interested in exploring career opportunities in academic administration, private philanthropy, or science policy. The successful candidate must possess superior oral and written communication skills and expertise in and enthusiasm for the application of computer and communication technologies.
The VRF must be able to represent the Foundation at national and international meetings with senior representatives of other funding agencies, senior scientists, and high-level science administrators. The position is for one year, with the possibility of renewal for a second. The anticipated start date is July 1, 1996. Competitive salary and benefits. Additional information about the McDonnell Foundation may be obtained via HTTP://www.jsmf.org.

Qualified candidates must submit a letter of interest, a curriculum vitae, and three letters of reference by April 26, 1996 to:

Susan M. Fitzpatrick, PhD
James S. McDonnell Foundation
1034 South Brentwood Blvd., Suite 1610
St. Louis, Missouri 63117
email: c 6819sf at wuvmd.wustl.edu

Application materials may be submitted electronically. The James S. McDonnell Foundation is an EO/AAE.

From josh at vlsia.uccs.edu Tue Mar 19 16:39:04 1996
From: josh at vlsia.uccs.edu (Alspector)
Date: Tue, 19 Mar 1996 14:39:04 -0700 (MST)
Subject: Postdoc in neural systems
Message-ID:

RESEARCH ASSOCIATE IN NEURAL SYSTEMS

There is a postdoctoral research associate position available in the electrical and computer engineering department at the University of Colorado at Colorado Springs. It is supported by a grant from ARPA to study a neural-style VLSI vision system in collaboration with a group from Caltech. We are applying neural techniques to recognize patterns in handwritten documents and in remote-sensing images. We are also applying for new funding in the area of real-time underwater sound processing. The project will involve applying an existing VME-based neural network learning system to several demanding problems in signal processing. These include adaptive non-linear equalization of underwater acoustic communication channels and magnetic recording channels. It is likely also to involve integrating the learning electronics with micro-machined sonic transducers directly on silicon.

The successful candidate will have skills in some or all of the following areas: 1) analog and digital VLSI design and test, 2) signal, sound and image processing, 3) neural network processing of complex data, and 4) working at the system level in a UNIX/C/C++ environment.

Please send a curriculum vita, names and addresses of at least three referees, and copies of some representative publications to:

Prof. Joshua Alspector
Univ. of Colorado at Col. Springs
Dept. of Elec. & Comp. Eng.
P.O. Box 7150
Colorado Springs, CO 80933-7150

The University of Colorado is an equal opportunity employer.

From georgiou at csci.csusb.edu Tue Mar 19 15:36:10 1996
From: georgiou at csci.csusb.edu (georgiou@csci.csusb.edu)
Date: Tue, 19 Mar 1996 12:36:10 -0800 (PST)
Subject: ICCIN'97: Call for papers
Message-ID: <199603192036.MAA04367@csci.csusb.edu>

First Announcement

2nd International Conference on COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE
(http://www.csci.csusb.edu/iccin)
Sheraton Imperial Hotel & Convention Center
Research Triangle Park, North Carolina
March 2-5, 1997

Conference Co-chairs: Subhash C. Kak, Louisiana State University; Jeffrey P. Sutton, Harvard University

This conference is part of the Third Joint Conference on Information Sciences.

Plenary Speakers include the following: James S. Albus, Jim Anderson, Roger Brockett, Earl Dowell, David E. Goldberg, Stephen Grossberg, Y. C. Ho, John H. Holland, Zdzislaw Pawlak, Lotfi A. Zadeh

Organizing Committee:
Grigorios Antoniou, Griffith University, Australia
Catalin Buiu, Romania
Ian Cresswell, U.K.
S. Das, University of California, Berkeley
S.C. Dutta Roy, India
Laurie Fausett, FAU
George M. Georgiou, California State University
Paulo Gaudiano, Boston University
Ugur Halici, METU, Turkey
Akira Hirose, University of Tokyo
Arun Jagota, University of North Texas
Jonathan Marshall, University of N. Carolina
Bimal Mathur, Rockwell CA
Kishan Mehrotra, Syracuse
Haluk Ogmen, University of Houston
Ed Page, South Carolina
W.A. Porter, University of Alabama
Ed Rietman, Bell Labs
Christos Schizas, University of Cyprus
Harold Szu, USL
M. Trivedi, UCSD
E. Vityaev, Russia
Paul Wang, Duke University

Areas for which papers are sought include:
o Artificial Life
o Artificially Intelligent NNs
o Associative Memory
o Cognitive Science
o Computational Intelligence
o Efficiency/Robustness Comparisons
o Evolutionary Computation for Neural Networks
o Feature Extraction & Pattern Recognition
o Implementations (Electronic, Optical, Biochips)
o Intelligent Control
o Learning and Memory
o Neural Network Architectures
o Neurocognition
o Neurodynamics
o Optimization
o Parallel Computer Applications
o Theory of Evolutionary Computation

Summary Deadline: November 15, 1996
Proposals for sessions: November 15, 1996
Decision & Notification: January 5, 1997

Send summaries to:
George M. Georgiou
Computer Science Department
California State University
San Bernardino, CA 92407-2397
georgiou at csci.csusb.edu

Papers will be accepted based on summaries. A summary shall not exceed 4 pages of 10-point font, double-column, single-spaced text (1 page minimum), with figures and tables included. Any summary exceeding 4 pages will be charged $100 per additional page. Three copies of the summary are required by November 15, 1996. A deposit check of $150 must be included to guarantee the publication of your 4-page summary in the Proceedings. The $150 can be deducted from the registration fee later. NSF funding has been requested to support women, minorities, recent Ph.D. recipients, and graduate students.

Conference Web site: http://www.csci.csusb.edu/iccin

From mcasey at volen.brandeis.edu Thu Mar 7 07:50:02 1996
From: mcasey at volen.brandeis.edu (Mike Casey)
Date: Wed, 20 Mar 1996 00:50:02 +30000
Subject: Paper on Dynamical Systems, RNNs and Computation Available
Message-ID:

Dear connectionists,

The following paper deals with the connection between computational and dynamical descriptions of systems, and the analysis of recurrent neural networks in particular. It has been accepted for publication in Neural Computation, and will appear in Vol. 8, number 6 later this year. Comments are welcome.

----------------------------------------------------------------------

The Dynamics of Discrete-Time Computation, With Application to Recurrent Neural Networks and Finite State Machine Extraction [77 pages]

Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA 02254

To appear in Neural Computation 8:6.

ABSTRACT: Recurrent neural networks (RNNs) can learn to perform finite state computations. It is shown that an RNN performing a finite state computation must organize its state space to mimic the states in the minimal deterministic finite state machine (DFA) which can perform that computation, and a precise description of the attractor structure of such systems is given. This knowledge effectively predicts activation space dynamics, which allows one to understand RNN computation dynamics in spite of the complexity of the activation dynamics.
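To make the point concrete: when the hidden states of a network performing a finite state computation cluster into well-separated regions, one can extract a machine by labeling the clusters and recording where each input symbol sends them. The toy sketch below (an illustration of the general idea only, not the paper's construction or any particular extraction algorithm) hand-wires a saturated-tanh RNN that computes the parity of a binary string, then reads off an automaton by clustering hidden vectors on their sign patterns:

import numpy as np
from itertools import product

delta = {(s, x): s ^ x for s in (0, 1) for x in (0, 1)}  # parity DFA
pairs = list(product((0, 1), (0, 1)))  # hidden unit j <-> (previous state, input)
K = 20.0                               # large gain keeps activations near +/-1

def rnn_step(h, x):
    # Unit (s, x_new) switches on iff the successor state of the currently
    # active pair equals s and the present input equals x_new.
    new_h = np.empty(4)
    for j, (s, x_new) in enumerate(pairs):
        support = sum(h[i] for i, p in enumerate(pairs) if delta[p] == s)
        new_h[j] = np.tanh(K * (support + (1.0 if x == x_new else 0.0) - 0.5))
    return new_h

rng = np.random.default_rng(0)
transitions = {}
for _ in range(50):                            # drive the net with random strings
    h = np.array([1.0, -1.0, -1.0, -1.0])      # start in DFA state 0
    for x in rng.integers(0, 2, 20):
        src = tuple(np.sign(h).astype(int))    # "cluster" = sign pattern
        h = rnn_step(h, int(x))
        transitions[(src, int(x))] = tuple(np.sign(h).astype(int))

clusters = sorted({c for c, _ in transitions} | set(transitions.values()))
print(len(clusters), "hidden-state clusters")
for (src, x), dst in sorted(transitions.items()):
    print("cluster", clusters.index(src), "--", x, "-->", clusters.index(dst))

The run finds four near-binary hidden-state clusters and a deterministic transition graph over them; merging clusters with identical future behavior collapses the graph onto the two-state parity DFA, the kind of correspondence with the minimal machine that the theorem makes precise.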
As a corollary of our main theorem, we prove that the only formal languages which RNNs are able to robustly recognize are those recognizable by DFA (i.e. the regular languages). By elucidating the necessary and sufficient dynamical properties which an RNN must possess in order to perform a DFA computation, we provide a framework for discussing the relationship between symbolic (algorithmic, finite state) and subsymbolic (dynamic, continuous phase space) aspects of computation in physical systems. This theory also provides a theoretical framework for understanding finite state machine extraction techniques and can be used to improve training methods for RNNs performing DFA computations. This provides an example of a successful top-down approach to understanding a general class of complex systems that have not been explicitly designed, e.g. systems that have evolved or learned their internal structure.

----------------------------------------------------------------------

This paper is available via the WWW at
http://eliza.cc.brandeis.edu/people/mcasey/papers.html
or
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps.gz
ftp://eliza.cc.brandeis.edu/pub/mcasey/mcasey_nc.ps.Z

FTP INSTRUCTIONS
unix% ftp eliza.cc.brandeis.edu (or 129.64.55.200)
Name: anonymous
Password: (use your e-mail address)
ftp> cd /pub/mcasey/
ftp> bin
ftp> get mcasey_nc.ps.Z (or mcasey_nc.ps.gz or mcasey_nc.ps)
ftp> bye
unix% uncompress mcasey_nc.ps.Z (or gzip -d mcasey_nc.ps.gz)

Please send comments to
Mike Casey
Volen Center for Complex Systems Studies
Brandeis University
Waltham, MA 02254
email: mcasey at volen.brandeis.edu
http://eliza.cc.brandeis.edu/people/mcasey

From reggia at cs.UMD.EDU Wed Mar 20 12:59:57 1996
From: reggia at cs.UMD.EDU (James A. Reggia)
Date: Wed, 20 Mar 1996 12:59:57 -0500 (EST)
Subject: Postdoc Position in Computational Neuroscience
Message-ID: <199603201759.MAA17041@avion.cs.UMD.EDU>

POSTDOC POSITION IN COMPUTATIONAL NEUROSCIENCE

It is anticipated that a postdoctoral research position will be available starting this summer or fall involving neural modeling. The focus will be on modeling various aspects of cerebral cortex dynamics and plasticity. Ideally we are looking for someone with interdisciplinary interests and background in computation and neuroscience. Instructions for applying are given below; applications must be received by the April 8 deadline at the latest. They can be sent to Prof. Ja'Ja' as indicated below, or directly to me.

James A. Reggia
Dept. of Computer Science
A. V. Williams Bldg.
University of Maryland
College Park, MD 20742 USA
Email: reggia at cs.umd.edu
Phone: 301-405-2686
Fax: 301-405-6707

Position Announcement: The University of Maryland Institute for Advanced Computer Studies (UMIACS) invites applications for post doctoral positions, beginning summer/fall '96 in the following areas: Real-time Video Indexing, Natural Language Processing, and Neural Modeling. Exceptionally strong candidates from other areas will also be considered. UMIACS, a state-supported research unit, has been the focal point for interdisciplinary and applications-oriented research activities in computing on the College Park campus. The Institute's 40 faculty members conduct research in high performance computing, software engineering, artificial intelligence, systems, combinatorial algorithms, scientific computing, and computer vision.
Qualified applicants should send a 1 page statement of research interest, curriculum vita and the names and addresses of 3 references to: Prof. Joseph Ja'Ja' UMIACS A.V. Williams Building University of Maryland College Park, MD 20742 by April 8. UMIACS strongly encourages applications from minorities and women. EOE/AA  From juergen at idsia.ch Wed Mar 20 12:36:13 1996 From: juergen at idsia.ch (Juergen Schmidhuber) Date: Wed, 20 Mar 96 18:36:13 +0100 Subject: IDSIA postdoc position Message-ID: <9603201736.AA26310@fava.idsia.ch> IDSIA POSTDOC POSITION IDSIA, the Swiss machine learning research institute, offers a 2-year postdoc position with possibility for renewal. Goal of the corresponding research project is to analyze, compare, extend, and apply `neural' algorithms for unsupervised learning and redundancy reduction. A main focus will be on `predictability minimization' (see refs below). Application areas include adaptive image pre-processing, classification, data compression. Applicants should be willing to build on our previous work. The ideal candidate combines strong mathematical skills and strong programming skills. Switzerland tends to be nice to scientists. It boasts the highest supercomputing capacity per capita, the most Nobel prizes per capita (with Sweden), the highest gross national product per capita (with Luxembourg), and the best chocolate. IDSIA is located in beautiful Lugano, capital of scenic Ticino (the southern part of Switzerland). Pictures in my home page. Milano, Italy's center of fashion and finance, is one hour away. CSCS, the Ticino supercomputing center, is nearby. Salary is competitive. To obtain an overview of IDSIA's activities, see our home page. If interested, please send CV, list of publications, and cover letter with brief statement of research interests plus email addresses of three references to juergen at idsia.ch Send them as separate, uncompressed ASCII or postscript files. ASCII greatly preferred. Please call subject headers of ASCII files: name.cv, name.pub, name.cover, respectively, where `name' stands for your name. Please call subject headers of postscript files: name.cv.ps, name.pub.ps, name.cover.ps, respectively. Please also send HARDCOPIES of 3 representative papers by PHYSICAL mail (no problem if they arrive after the deadline). DEADLINE: APRIL 10, 1996. Earlier applications preferred. Juergen Schmidhuber research director IDSIA, Corso Elvezia 36, 6900-Lugano, Switzerland juergen at idsia.ch http://www.idsia.ch/~juergen --------- Refs on predictability minimization -------- - J.Schmidhuber, M.Eldracher, and B.Foltin. Semilinear predictability minimization produces well-known feature detectors. Neural Computation, in press, 1996. - J.Schmidhuber and D.Prelinger. Discovering predictable classifications. Neural Computation, 5(4):625-635, 1993. - J.Schmidhuber. Learning factorial codes by predictability minimization. Neural Computation, 4(6):863-879, 1992.  From milanese at cui.unige.ch Wed Mar 20 12:13:18 1996 From: milanese at cui.unige.ch (Ruggero Milanese) Date: Wed, 20 Mar 1996 18:13:18 +0100 Subject: Research positions on neural networks and vision Message-ID: <2829*/S=milanese/OU=cui/O=unige/PRMD=switch/ADMD=400net/C=ch/@MHS> Two openings at the Department of Computer Science of the University of Geneva (Switzerland) on: COMPUTER and BIOLOGICAL VISION - One post-doc: for a person holding a Ph.D. 
degree in either Computer Science, Electrical Engineering, Mathematics, or Physics, with emphasis on neural networks, dynamic systems, computational neuroscience, and machine vision. Experience with the analysis of multidimensional EEG signals would be a plus. - One assistant (Ph.D. student): for a person holding a Diploma or Master's degree in Computer Science, or a comparable qualification, with experience in neuronal modeling and computer vision. Project description. -------------------- Development of neural networks for recognizing multiple objects over complex, textured backgrounds. The basic model, inspired by neurophysiological and psychophysical research in human vision, employs networks of oscillatory units capable of stimulus-dependent synchronization. Research issues include: the design of a complete neural network architecture employing integrate-and-fire units, the assessment of the role of visual attention in modifying neuronal dynamics, and the compatibility of the overall model with biological data. Both candidates will work in collaboration with a third researcher at the Brain Mapping Laboratory of the Geneva Cantonal Hospital (headed by Prof. T. Landis and C. Michel), and will contribute to the analysis of the data collected through experiments on human subjects using EEG-based tomography of brain activity. Position profiles. ------------------ Both positions are available *immediately*, for a minimum of 1.5 years (postdoc) and 2 years (assistant). An extension for another 2 years is likely. Salaries are approx. CHF 60,000 for the postdoc and CHF 58,000 for the assistant (both before taxes). Good written/oral knowledge of English is essential. Some knowledge of French would be desirable. Applicants should send a CV, a list of publications, a summary of the M.Sc. thesis (assistant position only), together with one letter of recommendation, to the following address. Dr. Ruggero Milanese Dept. of Computer Science, University of Geneva 24 rue General-Dufour, 1211 Geneva 4, Switzerland Phone: +41 (22) 705-7631, Fax: +41 (22) 705-7780 E-mail: milanese at cui.unige.ch URL: http://cuiwww.unige.ch/~milanese Geneva, March 1996.  From murre at psy.uva.nl Tue Mar 19 09:10:57 1996 From: murre at psy.uva.nl (J.M.J. Murre) Date: Tue, 19 Mar 1996 15:10:57 +0100 Subject: Three jobs in connectionist modelling in psychology Message-ID: <199603191410.AA18392@uvapsy.psy.uva.nl> THREE GRADUATE RESEARCHERS (PH.D. STUDENTS) IN CONNECTIONIST MODELLING IN PSYCHOLOGY Application deadline: 12 April 1996 GENERAL DESCRIPTION Our group has job openings for three graduate researchers (Ph.D. students, or 'Onderzoekers-in-Opleiding' in Dutch) in connectionist modelling at the Graduate Research Institute for Experimental Psychology (EPOS) of the Netherlands Organization for Scientific Research (NWO). The projects are located at the University of Amsterdam and Leiden University. The Graduate Research Institute EPOS was established by the University of Amsterdam, the Free University Amsterdam and Leiden University to foster and strengthen research and graduate training in the area of experimental psychology. The individual projects are funded by the Netherlands Organization for Scientific Research (NWO) and form part of a larger research initiative 'Dynamic processes in self-organizing networks that interact with the environment'. One four-year postdoc has already been assigned to this project (this position is already filled).
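A note on the integrate-and-fire units mentioned in the Geneva project description above: a leaky integrate-and-fire neuron is a one-variable model that integrates its input current and emits a spike whenever a threshold is reached. A minimal sketch in Python; every parameter value here is an illustrative assumption, not the project's model:

    # Hedged sketch: leaky integrate-and-fire neuron, Euler integration.
    import numpy as np

    def lif(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
        v, spikes = 0.0, []
        for t, i_t in enumerate(input_current):
            v += dt * (-v + i_t) / tau  # leak toward rest, driven by input
            if v >= v_thresh:           # threshold crossing: emit a spike
                spikes.append(t)
                v = v_reset             # reset the membrane potential
        return spikes

    print(lif(np.full(200, 1.5)))  # constant suprathreshold drive -> regular firing

With constant drive above threshold the unit fires periodically; the synchronization phenomena the project studies arise when many such units are coupled.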
Several other graduate researchers within EPOS are working on similar projects (connectionist modelling and experimental psychology). CONDITIONS Each project runs for four years, starting 1 July 1996 at the earliest, and is expected to lead to a Ph.D. at the end of the four-year period. Full scholarships are available for each project to the amount of DFL 2100 (about $1320) per month in the first year, gradually increasing to DFL 3770 (about $2360) per month in the fourth year. (These figures are before taxes; a typical salary in the first year, after taxes, would be DFL 1680, but this depends on the experience of the applicant.) Successful candidates will be required to move to the Netherlands. They will be required to follow and complete a number of graduate courses. In most cases, the graduate researchers are asked to participate in the teaching of undergraduate students. This teaching load will be small and concerns only courses that are within their field of research. GENERAL REQUIREMENTS Excellent command of Dutch (or English) is necessary. Applicants from computer science, physics, mathematics, or engineering must bear in mind that a strong, demonstrable background in psychology or related fields is necessary for these projects. PROJECT 1. 'ARTIFICIAL NEURAL NETWORKS FOR AFFECTIVE PROCESSES' (QUOTE REF. 575-23-006). Description: Development of neural network models for direct and indirect affective processes. This work also has a strong neuroscience component. It builds on modelling work by the project supervisors (CALM approach to modelling). The simulations will keep pace with experimental research that occurs simultaneously in our group. Specific requirements: Master's (drs.) degree in experimental psychology. Strong interest in experimental research (especially from the cognitive neuroscience perspective). Experience with emotions research is desirable, as is experience with computer programming. This project is located at the University of Amsterdam. In case of urgent questions, further information can be obtained from Dr. R.H. Phaf (e-mail: pn_phaf at macmail.psy.uva.nl; phone: +31 20 525.6841 or +31 20 525.6840; fax: +31 20 639.1656). PROJECT 2. 'ARTIFICIAL NEURAL NETWORKS FOR MEMORY PROCESSES' (QUOTE REF. 575-23-007). Description: Development of neural network models for implicit and explicit memory phenomena and certain memory pathologies. The goal is the implementation of a recently developed theoretical model of anterograde and retrograde amnesia. This implementation builds on existing work by the project supervisors (TraceNet and CALM approach to modelling, specifically TraceLink). Simulation results will be compared with research on memory in amnesic patients. Specific requirements: Master's (drs.) degree in experimental psychology or cognitive (neuro-)science. Strong interest in theoretical and empirical modelling of cognitive functions. Knowledge of neuroanatomy and neuropsychology is desirable, as is experience with psychological experimentation. Experience with computer programming is required (preferably in C). This project is located at Leiden University. In case of urgent questions, further information can be obtained from Dr. J.M.J. Murre (e-mail: murre at psy.uva.nl; phone: +31 20 525.6722 or +31 20 525.6840; fax: +31 20 639.1656). PROJECT 3. 'SEGMENTATION AND CLASSIFICATION OF VISUAL PATTERNS' (QUOTE REF. 575-23-008). Description: Development of neural network models for segmentation and classification of visual patterns.
This work builds on an approach that emphasizes research and simulation in the borderline area of stable and non-stable activation patterns. The simulations will keep pace with experimental research that occurs simultaneously in our group. Specific requirements: Master's (drs.) degree in cognitive science or experimental psychology, or in theoretical or medical biology or physics with a strongly interdisciplinary orientation and a demonstrable interest in experimental research in psychology. Experience with computer programming is required. This project is located at the University of Amsterdam. In case of urgent questions, further information can be obtained from Dr. C. van Leeuwen (e-mail: pn_leeuwen at macmail.psy.uva.nl; phone: +31 20 525.6118 or +31 20 525.6840; fax: +31 20 525.1656). APPLICATION PROCESS Only applications by 'hardmail' are considered. Please include the following material: a cover letter; a curriculum vitae; one A4 page in which you explain why you are interested in the project (you may include up to 15 pages of excerpts taken from published material, but this is not a requirement); and the names of two references (including fax and/or e-mail). Be sure to quote the reference number of the project to which you are applying. Send the application to Dr. G. Wolters Department of Psychology Leiden University P.O. Box 9555 2300 RB Leiden The Netherlands THE DEADLINE FOR APPLICATION IS 12 APRIL 1996.   From panos at csc.umist.ac.uk Thu Mar 21 05:58:31 1996 From: panos at csc.umist.ac.uk (Panos Liatsis) Date: Thu, 21 Mar 96 10:58:31 GMT Subject: CALL FOR PAPERS: IWISP'96 Message-ID: <333.9603211058@isabel.csc.umist.ac.uk> 3RD INTERNATIONAL WORKSHOP ON IMAGE AND SIGNAL PROCESSING ADVANCES IN COMPUTATIONAL INTELLIGENCE NOVEMBER 4-7, 1996, MANCHESTER, UK ORGANISER: UMIST CO-SPONSORS: IEEE SIGNAL PROCESSING SOCIETY IEEE UK&RI IEE INST MEAS CONTROL CALL FOR PAPERS The 3rd International Workshop on Image and Signal Processing, IWISP-96, organised by the Control Systems Centre, UMIST, in association with IEEE Region 8 and co-sponsored by the IEE, IMC and the IEEE Signal Processing Society, is an international workshop on theoretical, experimental and applied signal and image processing. This is a specialized workshop, which intends to attract high-quality research papers and bring together researchers working in the areas of signal/image processing from both sides of the Atlantic, as well as from the countries of Central and Eastern Europe. The theme of the current workshop is Advances in Computational Intelligence. SCOPE: General Techniques and Algorithms: Adaptive DSP algorithms, Digital Filter Implementations, Image Analysis, Image Enhancement and Restoration, Image Understanding. Technologies: Neural Networks, Fuzzy Logic, Wavelets, Fractals. Image Transmission: Encoding/Decoding, Compression, Transmission, ISDN, Internet, ATM, Modems, Radio, SATCOM and NAV. Applications: Automotive, Medical, Robotics, Control, Video, TV, Telepresence, Virtual Reality, Digital Production. SUBMISSION PROCEDURES: Prospective authors are invited to propose papers in any of the technical areas listed above, indicating whether they are intended for oral/poster presentation. To submit a proposal, prepare a 2-3 page summary of the paper including figures and references. Send five copies of the paper summary and a cover sheet stating the (1) paper title, (2) technical area(s), (3) contact author's name, (4) address, (5) telephone and fax number and (6) email address to: Professor Basil G. Mertzios, IWISP-96, Dept.
of Electrical and Computer Engineering, Democritus University of Thrace, GR-67100 Xanthi, Greece e-mail: mertzios at demokritos.cc.duth.gr FAX: +30-541-26947, Tel: +30-541-79511, 79512 (Secr.) 79559 (Lab) Each selected paper (four-page limit) will be published in the Proceedings of IWISP-96, by an International Publisher. SCHEDULE: Extended summaries/abstracts: 30th April 1996 Notification of acceptance/rejection: 31st May 1996 Final Draft: 15th July 1996 CONFERENCE SITE: IWISP-96 will be held at The Manchester Conference Centre, Manchester. GENERAL CHAIR Peter Wellstead Control Systems Centre UMIST, UK PROGRAM CHAIR Basil Mertzios Democritus University of Thrace Department of Electr./Comp.Eng. 67100 Xanthi, Greece PUBLICITY & LOCAL ARRANGEMENTS Panos Liatsis, Braham Levy UMIST, UK TUTORIALS CHAIR Marek Domanski Tech. University of Poznan Poznan, Poland PROCEEDINGS CHAIR Kalman Fazekas Tech. University of Budapest Budapest, Hungary FINANCIAL CHAIR Martin Zarrop UMIST, UK INTERNATIONAL PROGRAM COMMITTEE (TENTATIVE) I. Antoniou, Solvay Institute, Belgium Z. Bojkovic, University of Belgrade, Yugoslavia M. Brady, University of Oxford, UK V. Cappellini, University of Florence, Italy G. Caragiannis, NTUA, Greece M. Christodoulou, Technical University of Crete, Greece A. Constantinides, Imperial College, UK J. Cornelis, Vrije Universiteit Brussel, Belgium A. Davies, King's College London, UK I. Erenyi, KFKI Research Institute, Hungary A. Fettweis, Ruhr Universitaet Bochum, Germany S. van Huffel, Katholieke Universiteit Leuven, Belgium G. Istefanopoulos, Bosphorus University, Turkey V. Ivanov, Dubna Research Institute, Russia T. Kaczorek, University of Warsaw, Poland M. Karny, UTIA, Czech Republic T. Kida, University of Tokyo, Japan J. Kittler, University of Surrey, UK S.Y. Kung, Princeton University, USA M. Kunt, University of Lausanne, Switzerland F. Lewis, University of Texas at Arlington, USA T. Nossek, Technical University of Munich, Germany C. Nikias, University of Southern California, USA D. van Ormondt, Technical University of Delft, Netherlands K. Parhi, University of Minnesota, USA S. Tzafestas, NTUA, Greece J. Turan, Technical University of Kosice, Slovak Republic G. Vachtsevanos, Georgia Institute of Technology, USA A. Venetsanopoulos, University of Toronto, Canada  From perso at DI.Unipi.IT Thu Mar 21 06:12:07 1996 From: perso at DI.Unipi.IT (Alessandro Sperduti) Date: Thu, 21 Mar 1996 12:12:07 +0100 (MET) Subject: NNSK deadline extension Message-ID: <199603211112.MAA15961@neuron.di.unipi.it> Sorry for duplicated messages (if any...)
** E X T E N D E D D E A D L I N E ** Neural Networks and Structured Knowledge (NNSK) Call for Contributions ECAI '96 Workshop to be held on August 12/13, 1996 during the 12th European Conference on Artificial Intelligence from August 12-16, 1996 in Budapest, Hungary **************** N E W S C H E D U L E **************** Submission deadline April 3, 1996 Notification of acceptance/rejection May 2, 1996 Final version of papers due May 24, 1996 Deadline for participation without paper June 15, 1996 Date of the workshop August 12/13, 1996 *********************************************************** FOR MORE INFO: http://www.informatik.uni-ulm.de/fakultaet/abteilungen/ni/ECAI-96/NNSK/NNSK.html ** E X T E N D E D D E A D L I N E ** _________________________________________________________________ Alessandro Sperduti Dipartimento di Informatica, Corso Italia 40, Phone: +39-50-887264 56125 Pisa, Fax: +39-50-887226 ITALY E-mail: perso at di.unipi.it _________________________________________________________________  From jgreen at wjh.harvard.edu Fri Mar 22 10:14:15 1996 From: jgreen at wjh.harvard.edu (Jennifer Anne Greenhall) Date: Fri, 22 Mar 1996 10:14:15 -0500 (EST) Subject: POSTDOCTORAL JOB OPENINGS Message-ID: COGNITIVE NEUROPSYCHOLOGY: Visual word and object recognition. A post-doctoral research position is available at the COGNITIVE NEUROPSYCHOLOGY LABORATORY, DEPARTMENT OF PSYCHOLOGY, HARVARD UNIVERSITY. The project investigates the role of visual and attentional mechanisms in word and object recognition. These issues are addressed principally through the analysis of the reading and object recognition performance in brain-damaged subjects with visual neglect and other perceptual disorders. In addition, computational modeling and functional imaging studies of visual word recognition are planned. Individuals with training in visual perception and/or attention, computational modeling, or cognitive neuropsychology are encouraged to apply. Candidates should send a curriculum vitae, three letters of recommendation, and a statement of research interests to Alfonso Caramazza, Cognitive Neuropsychology Laboratory, Harvard University, William James Hall, 33 Kirkland St., Cambridge, MA 02138. E-mail inquiries may be sent to caram at broca.harvard.edu. Screening will begin immediately and continue until the position is filled. Harvard University is an Equal Opportunity/Affirmative Action Employer. Women and minorities are encouraged to apply. COGNITIVE NEUROPSYCHOLOGY: Lexical production. A post-doctoral research position is available at the COGNITIVE NEUROPSYCHOLOGY LABORATORY, DEPARTMENT OF PSYCHOLOGY, HARVARD UNIVERSITY. The project investigates the content and organization of the lexical system and, more specifically, the nature of lexical access mechanisms. These issues are investigated through the analysis of speech and writing disorders in brain-damaged subjects and the computational modeling of lexical access. We are also planning to carry out several functional imaging studies of word production. Individuals with training in psycholinguistics, computational modeling, or cognitive neuropsychology are encouraged to apply. Candidates should send a curriculum vitae, three letters of recommendation, and a statement of research interests to Alfonso Caramazza, Cognitive Neuropsychology Laboratory, Harvard University, William James Hall, 33 Kirkland St., Cambridge, MA 02138. E-mail inquiries may be sent to caram at broca.harvard.edu. 
Screening will begin immediately and continue until the position is filled. Harvard University is an Equal Opportunity/Affirmative Action Employer. Women and minorities are encouraged to apply. ****************************** * Jennifer Greenhall * * Harvard University * * Department of Psychology * * 33 Kirkland Street * * William James Hall, Rm. 884* * Cambridge, MA 02138 * * (617) 496-6374 * ******************************  From rsun at cs.ua.edu Mon Mar 25 16:19:54 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Mon, 25 Mar 1996 15:19:54 -0600 Subject: No subject Message-ID: <9603252119.AA14184@athos.cs.ua.edu> Hybrid Connectionist-Symbolic Models: a report from the IJCAI'95 workshop on connectionist-symbolic integration Ron Sun Department of Computer Science The University of Alabama Tuscaloosa, AL 35487 --------------------------------------- To appear in: AI Magazine, 1996. 9 pages. ftp or Mosaic access: ftp://cs.ua.edu/pub/tech-reports/sun.ai-magazine.ps sorry, no hardcopy available. ---------------------------------------- {\it The IJCAI Workshop on Connectionist-Symbolic Integration: From Unified to Hybrid Approaches} was held for two days during August 19-20 in Montreal, Canada, in conjunction with the Fourteenth International Joint Conference on Artificial Intelligence (IJCAI'95). The workshop was co-chaired by Ron Sun and Frederic Alexandre. During the two days of the workshop, various presentations and discussions brought to light many new ideas, controversies, and syntheses. The focus was on learning and architectures that feature hybrid representations and support hybrid learning. Hybrid models involve a variety of different types of processes and representations, in both learning and performance. Therefore, multiple mechanisms interact in complex ways in most models. We need to consider seriously ways of structuring these different components, which thus occupy a clearly more prominent place in this area of research. The hybridization of connectionist and symbolic models also inherits the difficulty with learning from the symbolic side, and mitigates to some large extent the advantage that the purely connectionist models have in their learning abilities. Considering the importance of learning, in both modeling cognition and building intelligent systems, it is crucial for researchers in this area to pay more attention to ways of enhancing hybrid models in this regard and to putting learning back into hybrid models.  From john at dcs.rhbnc.ac.uk Mon Mar 25 16:07:26 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 25 Mar 96 21:07:26 +0000 Subject: MSc in Computational Intelligence Message-ID: <199603252107.VAA15309@platon.cs.rhbnc.ac.uk> MSc in COMPUTATIONAL INTELLIGENCE at the Computer Science Department Royal Holloway, University of London We offer a new twelve-month MSc in Computational Intelligence covering a wide range of subjects: Computational Learning Statistical Learning Theory Intelligent Decision Making Neural Computing Inference Systems Probabilistic Reasoning Constraint Networks Simulated Annealing Neurochips and VLSI Equational Reasoning Computer Vision Concurrent Programming Object-Oriented Programming The first part of the course involves taking a selection of the listed courses including some prescribed core courses. The second part of the course comprises a project focussing on one of the areas covered in the courses and usually involving some implementation. 
Royal Holloway is one of the largest colleges of the University of London and is located on a beautiful wooded campus half an hour from central London by train and close to Heathrow airport. The Computational Intelligence group at Royal Holloway includes the following staff: Alex Gammerman, with interests in Bayesian Belief Networks John Shawe-Taylor, with interests in Computational Learning Theory and Coordinator of the ESPRIT NeuroCOLT Project Vladimir Vovk, expert in applications of game theory to learning Vladimir Vapnik (part time), probably no introduction necessary, and several other permanent members of staff. For further information email: cims at dcs.rhbnc.ac.uk or write to: Course Director, MSc in Computational Intelligence Computer Science Department Royal Holloway, University of London EGHAM, Surrey, TW20 0EX Tel: +44 (0)1784 333421 Fax: +44 (0)1784 443420 ------- End of Forwarded Message  From john at dcs.rhbnc.ac.uk Mon Mar 25 16:40:21 1996 From: john at dcs.rhbnc.ac.uk (John Shawe-Taylor) Date: Mon, 25 Mar 96 21:40:21 +0000 Subject: Technical Report Series in Neural and Computational Learning Message-ID: <199603252140.VAA03178@platon.cs.rhbnc.ac.uk> The European Community ESPRIT Working Group in Neural and Computational Learning Theory (NeuroCOLT) has produced a set of new Technical Reports available from the remote ftp site described below. They cover topics in real-valued complexity theory, computational learning theory, and analysis of the computational power of continuous neural networks. Abstracts are included for the titles. *** Please note that the location of the files was changed at the beginning of the year, so that any copies you have of the previous instructions should be discarded. The new location and instructions are given at the end of the list. *** ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-038: ---------------------------------------- Active Noise Control with Dynamic Recurrent Neural Networks by Davor Pavisic, Facult\'{e} Polytechnique de Mons, Belgium Laurent Blondel, Facult\'{e} Polytechnique de Mons, Belgium Jean-Philippe Draye, Facult\'{e} Polytechnique de Mons, Belgium Ga\"{e}tan Libert, Facult\'{e} Polytechnique de Mons, Belgium Pierre Chapelle, Facult\'{e} Polytechnique de Mons, Belgium Abstract: We have developed a neural active noise controller which performs better than existing techniques. We used a dynamic recurrent neural network to model the behaviour of an existing controller that uses a Least Mean Squares algorithm to minimize an error signal. The network has two types of adaptive parameters, the weights between the units and the time constants associated with each neuron. Measured results show a significant improvement of the neural controller when compared with the existing system. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-039: ---------------------------------------- A Survey on Real Structural Complexity Theory by Klaus Meer, RWTH Aachen, Germany Christian Michaux, Universit\'e de Mons-Hainaut Abstract: In this tutorial paper we overview research being done in the field of structural complexity and recursion theory over the real numbers and other domains following the approach by Blum, Shub and Smale.
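A note on NC-TR-96-038 above: the Least Mean Squares baseline it mentions is a one-line stochastic gradient update on a linear filter. A minimal sketch of a plain single-channel LMS adaptive FIR filter (tap count and step size are illustrative assumptions; practical active noise control typically needs a filtered-x variant that accounts for the secondary acoustic path):

    # Hedged sketch: LMS adaptive FIR filter minimizing the squared
    # error between the filter output and a desired signal d.
    import numpy as np

    def lms_filter(x, d, n_taps=8, mu=0.05):
        w = np.zeros(n_taps)
        errors = np.empty(len(x))
        for t in range(len(x)):
            u = np.zeros(n_taps)
            recent = x[max(0, t - n_taps + 1):t + 1][::-1]  # newest sample first
            u[:len(recent)] = recent
            e = d[t] - w @ u   # error between desired signal and filter output
            w += mu * e * u    # stochastic gradient step on the squared error
            errors[t] = e
        return w, errors

    # Example: identify an unknown 3-tap FIR system from input/output data;
    # the adapted weights approach [0.5, -0.3, 0.1, 0, ...].
    rng = np.random.default_rng(0)
    x = rng.standard_normal(5000)
    d = np.convolve(x, [0.5, -0.3, 0.1], mode="full")[:len(x)]
    w, errors = lms_filter(x, d)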
---------------------------------------- NeuroCOLT Technical Report NC-TR-96-040: ---------------------------------------- The Computational Power of Spiking Neurons Depends on the Shape of the Postsynaptic Potentials by Wolfgang Maass, Technische Universitaet Graz, Austria Berthold Ruf, Technische Universitaet Graz, Austria Abstract: Recently one has started to investigate the computational power of spiking neurons (also called ``integrate and fire neurons''). These are neuron models that are substantially more realistic from the biological point of view than the ones which are traditionally employed in artificial neural nets. It has turned out that the computational power of networks of spiking neurons is quite large. In particular they have the ability to communicate and manipulate analog variables in spatio-temporal coding, i.e.~encoded in the time points when specific neurons ``fire'' (and thus send a ``spike'' to other neurons). These preceding results have motivated the question which details of the firing mechanism of spiking neurons are essential for their computational power, and which details are ``accidental'' aspects of their realization in biological ``wetware''. Obviously this question becomes important if one wants to capture some of the advantages of computing and learning with spatio-temporal coding in a new generation of artificial neural nets, such as for example pulse stream VLSI. The firing mechanism of spiking neurons is defined in terms of their postsynaptic potentials or ``response functions'', which describe the change in their electric membrane potential as a result of the firing of another neuron. We consider in this article the case where the response functions of spiking neurons are assumed to be of the mathematically most elementary type: they are assumed to be step-functions (i.e. piecewise constant functions). This happens to be the functional form which has so far been adapted most frequently in pulse stream VLSI as the form of potential changes (``pulses'') that mimic the role of postsynaptic potentials in biological neural systems. We prove the rather surprising result that in models without noise the computational power of networks of spiking neurons with arbitrary piecewise constant response functions is strictly weaker than that of networks where the response functions of neurons also contain short segments where they increase respectively decrease in a linear fashion (which is in fact biologically more realistic). More precisely we show for example that an addition of analog numbers is impossible for a network of spiking neurons with piecewise constant response functions (with any bounded number of computation steps, i.e. spikes), whereas addition of analog numbers is easy if the response functions have linearly increasing segments. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-041: ---------------------------------------- Finding Optimal Multi-Splits for Numerical Attributes in Decision Tree Learning by Tapio Elomaa, University of Helsinki, Finland Juho Rousu, University of Helsinki, Finland Abstract: Handling continuous attribute ranges remains a deficiency of top-down induction of \dt s. They require special treatment and do not fit the learning scheme as well as one could hope for. Nevertheless, they are common in practical tasks and, therefore, need to be taken into account. This topic has attracted abundant attention in recent years. 
In particular, Fayyad and Irani showed how optimal binary partitions can be found efficiently. Later, they based a greedy heuristic multipartitioning algorithm on these results. Recently, Fulton, Kasif, and Salzberg attempted to develop algorithms for finding the optimal multi-split for a numerical attribute in one phase. We prove that, similarly as in the binary partitioning, only boundary points need to be inspected in order to find the optimal multipartition of a numerical value range. We develop efficient algorithms for finding the optimal splitting into more than two intervals. The resulting partition is guaranteed to be optimal w.r.t.\ the function that is used to evaluate the attributes' utility in class prediction. We contrast our method with alternative approaches in initial empirical experiments. They show that the new method surpasses the greedy heuristic approach of Fayyad and Irani constantly in the goodness of the produced multi-split, but, with small data sets, cannot quite attain the efficiency of the greedy approach. Furthermore, our experiments reveal that one of the techniques proposed by Fulton, Kasif, and Salzberg is of scarce use in practical tasks, since its time consumption falls short of all demands. In addition, it categorically fails in finding the optimal multi-split because of an error in the rationale of the method. ---------------------------------------- NeuroCOLT Technical Report NC-TR-96-042: ---------------------------------------- Shattering all Sets of k points in `General Position' Requires (k-1)/2 Parameters by Eduardo D. Sontag, Rutgers University, USA Abstract: For classes of concepts defined by certain classes of analytic functions depending on n parameters, there are nonempty open sets of samples of length 2n+2 which cannot be shattered. A slightly weaker result is also proved for piecewise-analytic functions. The special case of neural networks is discussed. -------------------------------------------------------------------- ***************** ACCESS INSTRUCTIONS ****************** The Report NC-TR-96-001 can be accessed and printed as follows % ftp ftp.dcs.rhbnc.ac.uk (134.219.96.1) Name: anonymous password: your full email address ftp> cd pub/neurocolt/tech_reports ftp> binary ftp> get nc-tr-96-001.ps.Z ftp> bye % zcat nc-tr-96-001.ps.Z | lpr -l Similarly for the other technical reports. Uncompressed versions of the postscript files have also been left for anyone not having an uncompress facility. In some cases there are two files available, for example, nc-tr-96-002-title.ps.Z nc-tr-96-002-body.ps.Z The first contains the title page while the second contains the body of the report. The single command, ftp> mget nc-tr-96-002* will prompt you for the files you require. A full list of the currently available Technical Reports in the Series is held in a file `abstracts' in the same directory. 
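The retrieval steps above can equally be scripted. A minimal sketch using Python's standard ftplib module, with the same host, directory, and file name as in the transcript (substitute the report you actually want):

    # Hedged sketch: scripted version of the anonymous-ftp instructions above.
    from ftplib import FTP

    ftp = FTP("ftp.dcs.rhbnc.ac.uk")
    ftp.login("anonymous", "your@email.address")  # anonymous login as described
    ftp.cwd("pub/neurocolt/tech_reports")
    with open("nc-tr-96-001.ps.Z", "wb") as f:
        ftp.retrbinary("RETR nc-tr-96-001.ps.Z", f.write)  # binary transfer, as in the `bin' step
    ftp.quit()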
The files may also be accessed via WWW starting from the NeuroCOLT homepage (note that this is undergoing some corrections and may be temporarily inaccessible): http://www.dcs.rhbnc.ac.uk/neural/neurocolt.html Best wishes John Shawe-Taylor  From dana at cs.rochester.edu Fri Mar 22 18:05:42 1996 From: dana at cs.rochester.edu (dana@cs.rochester.edu) Date: Fri, 22 Mar 1996 18:05:42 -0500 Subject: Symposium on Neural Control of Spatial Behavior Message-ID: <199603222305.SAA15149@artery.cs.rochester.edu> 20th CVS Symposium NEURAL CONTROL OF SPATIAL BEHAVIOR JUNE 19-22, 1996 ----------------------------------------------------------------------- The Center for Visual Science at the University of Rochester is proud to present the 20th Symposium, "Neural Control of Spatial Behavior." The three-day symposium will consist of five sessions plus an open house and lab tours on Saturday afternoon. The meeting will begin with a Reception/Buffet on Wednesday evening, June 19. Formal sessions start Thursday morning, June 20, and end at noon on Saturday. There will be optional banquets held on Thursday and Friday evenings, and a cookout lunch on Saturday. Informal discussion gatherings will follow the banquets. The Symposium is sponsored in part by NINDS and NIH Biotechnology Resource Project. ------------------------------------------------------------------------ *PROGRAM* Wednesday, June 19 4:00-10:00 PM Registration 6:00-8:00 PM Reception/Buffet Thursday, June 20 SESSION I: REACHING AND GRASPING M Goodale An overview of brain modeling of reaching and grasping M Graziano The representation of visuomotor space in body-part centered coordinates S Schaal Modeling visuo-motor coordination in the SARCOS arm G Luppino Cortical motor areas involved in grasping P Pook Symbolic models for motor planning SESSION II: TARGET LOCALIZATION J Assad Representation of targets in parietal cortex L Snyder Coding the intention for an arm or eye movement in posterior parietal cortex of monkey J-O Eklundh* Modeling oculomotor coordination with the KTH head G Zelinsky Predicting scanning patterns during visual search Friday, June 21 SESSION III: MULTI-SENSORY CALIBRATION M Brainard Visual calibration of an auditory space map in the owl G Pollack Neuroanatomical and physiological mechanisms of bat echolocation Y Trotter Proprioception and cortical processing of visual 3-D space J Van Opstal Models of visually and auditorily evoked saccades SESSION IV: SPATIAL ORIENTATION & MOTION J Leigh Gaze stability during locomotion M Behrmann Neurological observations of reference frames M Wilson Spatial orientation in the rat I Israel Multi-sensory aspects of path integration Saturday, June 22 SESSION V: BEHAVIORAL SEQUENCES J Loomis The body's navigation systems J Tanji The neural representation of sequences of trained movements W Schultz Programming of sequences of behaviors J Barnes Prediction in ocular motor control SESSION VI: OPEN HOUSE Center for Visual Science Open House and Lab Tours * pending confirmation ---------------------------------------------------------------------- *ACCOMMODATIONS AND MEALS* The University has moderate-cost rooms available for symposium attendees. Residence halls are centrally located on the campus and are a short walk to Hoyt Hall where the symposium sessions will be held. Rooms come with bed linens, towels, blankets, washcloths, soap, and water glasses, and are equipped with standard residence hall furniture including twin beds and desks. Telephones with local access service are in each room.
Rochester also offers a variety of recreational and sports/fitness opportunities. The Athletic Center has a pool, tennis courts and indoor track for guests. The adjacent Genesee Valley Park has walking trails, canoe rentals and golf course. A special package of residence hall room and all meals and banquets is being offered to Symposium participants. This package includes all meals from Thursday breakfast through the Saturday barbecue. ------------------------------------------------------------------------ *TRAVEL AWARDS* A small number of travel awards are available to graduate and postdoctoral students. Applications must be made by May 3, 1996. Visit our web site at http://www.cvs.rochester.edu/ for a travel application award form that can be downloaded to your computer and printed out. ------------------------------------------------------------------------ *FEES* Preregistration, Regular $140.00 Preregistration, Student $ 95.00 On-site, Regular $200.00 On-site, Student $150.00 ------------------------------------------------------------------------- *PREREGISTRATION* To obtain a preregister form, visit our web site or send your address through email to judy at cvs.rochester.edu. Instructions for payment are on the form. Please send a separate form for each person registering. No preregistrations will be accepted after May 31. ------------------------------------------------------------------------- *VISIT OUR WEB SITE* Check our home page for further updates on the 20th CVS Symposium at: http://www.cvs.rochester.edu/. At this site you will find a complete preregistration form that can be downloaded to your computer and printed out. ------------------------------------------------------------------------- *DEADLINE DUE DATES* May 3 Travel Award Applications May 31 Preregistrations ------------------------------------------------------------------------- *FURTHER INFORMATION* For further information, or to request a printed brochure and preregistration form, contact: Judy Olevnik Symposium Secretary judy at cvs.rochester.edu ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Judy Olevnik email: judy at cvs.rochester.edu Center for Visual Science phone: 716 275 8659 Room 274 Meliora Hall fax: 716 271 3043 University of Rochester Rochester NY 14627-0270 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~  From martinl at ai.univie.ac.at Tue Mar 26 09:19:27 1996 From: martinl at ai.univie.ac.at (Martin Lorenz) Date: Tue, 26 Mar 1996 15:19:27 +0100 Subject: Symposium at EMCSR'96 Message-ID: <199603261419.PAA14776@rodaun.ai.univie.ac.at> ===================================================================== Artificial Neural Networks and Adaptive Systems A symposium at the ===================================================================== EMCSR'96 April 9 -12, 1996 University of Vienna organized by the Austrian Society for Cybernetic Studies in cooperation with Dept.of Medical Cybernetics and Artificial Intelligence, Univ.of Vienna and International Federation for Systems Research ---------------------------------------------------------------------- chairs: Guenter Palm, Germany, and Georg Dorffner, Austria For this symposium, papers on any theoretical or practical aspect of artificial neural networks have been invited. Special focus, however, will be put on the issue of adaptivity both in practical engineering applications and in applications of neural networks to the modeling of human behavior. 
By adaptivity we mean the capability of a neural network to adjust itself to changing environments. We make a careful distinction between "learning" to devise weight matrices for a neural network before it is applied (and usually left unchanged) on one hand, and true adaptivity of a given neural network to constantly changing conditions on the other hand - i.e. incremental learning in unstationary environments. ======= PROGRAM ======= TUESDAY, April 9, p.m., Room 47 14.00-14.30 Statistical Evaluation of Neural Network Experiments: Minimum Requirements and Current Practice A.Flexer, Austrian Research Institute for Artificial Intelligence, Vienna, Austria 14.30-15.00 Adaptive Analysis and Visualization in High Dimensional Data Spaces G.Palm, F.Schwenker, University of Ulm, Germany 15.00-15.30 Adaptive Learning Algorithm for Principal Component Analysis with Partial Data A.Cichocki, W.Kasprzak, W.Skarbek, Frontier Lab, RIKEN, Wako, Saitama, Japan 15.30-16.00 Coffee Break 16.00-16.30 Reinforcement Learning for Cybernetic Control M.Pendrith, M.Ryan, A.Hoffmann, University of New South Wales, Sydney, Australia 16.30-17.00 A Neural Circuit to Handle Passive Extinction in Conditioned Reinforcement Learning A.Glksz, U.Halici, Middle East Technical University, Ankara, Turkey 17.00-17.30 Truncated Temporal Differences with Function Approximation: Successful Examples Using CMAC P.Cichosz, Warsaw University of Technology, Poland 17.30-18.00 Adaptive Classification in Autonomous Agents C.Scheier, D.Lambrinos, University of Zurich, Switzerland WEDNESDAY, April 10, a.m., Room 47 11.00-11.30 A Study of the Adaptation of Learning Rule Parameters Using a Meta Neural Network C.McCormack, University College Cork, Ireland 11.30-12.00 Lower Bounds on Identification Criteria for Perceptron-like Learning Rules M.Schmitt, Technical University of Graz, Austria 12.00-12.30 Learning to Control Dynamic Systems M.Riedmiller, University of Karlsruhe, Germany 12.30-13.00 Neuronal Adaptivity and Network Fault-Tolerance D.Horn, N.Levy, E.Ruppin, Tel-Aviv University, Israel WEDNESDAY, April 10, p.m., Room 47 14.00-14.30 Tracking of Non-Stationary Time-Series Using Resource-Allocating RBF Networks A.McLachlan, D.Lowe, Aston University, United Kingdom 14.30-15.00 Neural Networks: Do They Really Outperform Linear Models? Exchange Rate Forecasting Using Weekly Data T.H.Hann, University of Karlsruhe, Germany 15.00-15.30 Hippocampal Two-Stage Learning and Memory Consolidation A.Bibbig, T.Wennekers, University of Ulm, Germany 15.30-16.00 Coffee Break 16.00-16.30 Analog Computations with Mapped Neural Fields A.Schierwagen, H.Werner, University of Leipzig, Germany 16.30-17.00 The Role of Reinforcement in a Reading Model H.Ruellan, LIMSI/CNRS, Orsay, France 17.00-17.30 Quasi Mental Clusters: A Neural Model of Knowledge Discovery in Narrative Texts S.W.K.Chan, J.Franklin, University of New South Wales, Sydney, Australia THURSDAY, April 11, a.m., Room 47 9.00-9.30 An Application of the Saturated Attractor Analysis to Three Typical Models J. Feng, B. 
Tirozzi, University of Munich, Germany 9.30-10.00 On a New Gauge-Theoretical Framework for Controlling Neural Network Dynamics E.Pessa, G.Resconi, Universita Cattolica del Sacro Cuore, Brescia, Italy 10.00-10.30 Investigation of the Attractor Structure in the Continuous Hopfield Model S.Amin, BT Laboratories, Ipswich, United Kingdom ------------------------------------------------------------------ the complete program of EMCSR'96 can be found at http://www.ai.univie.ac.at/emcsr/ ================================= Secretariat =========== I. Ghobrial-Willmann and G. Helscher Austrian Society for Cybernetic Studies A-1010 Vienna 1, Schottengasse 3 (Austria) Phone: +43-1-53532810 Fax: +43-1-5320652 E-mail: sec at ai.univie.ac.at -- Martin "Lolly" Lorenz _/_/ _/_/_/ _/ _/ _/ _/_/ Austrian Research Institute for AI (OFAI) _/ _/ _/ _/_/ Tel.:+43-1-535 32 810 _/ _/_/ _/_/ martinl at ai.univie.ac.at _/ _/_/ http://www.ai.univie.ac.at/~martinl/ _/ _/_/ _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ _/ _/_/_/_/_/_/ _/ imagine there is a war _/ ~~~~~~~~~~~~~~~~~~~~~~~ _/ and nobody joins it... _/ ~I have a dream... ~ _/_/_/_/_/_/_/_/_/_/_/_/_/_/_/ ~that people will live~ ~together in freedom ~ public PGP-key avail. at ~and peace. ~ http://www.nic.surfnet.nl/pgp/ ~~~~~~~~~~~~~~~~~~~~~~~  From n at predict.com Tue Mar 26 13:50:18 1996 From: n at predict.com (Norman Packard) Date: Tue, 26 Mar 96 11:50:18 MST Subject: Job Opening at Prediction Company Message-ID: <9603261850.AA03118@predict.com> A non-text attachment was scrubbed... Name: not available Type: text Size: 2766 bytes Desc: not available Url : https://mailman.srv.cs.cmu.edu/mailman/private/connectionists/attachments/00000000/6e5d3e36/attachment-0001.ksh From emj at cs.ucsd.edu Tue Mar 26 14:02:19 1996 From: emj at cs.ucsd.edu (Eric Mjolsness) Date: Tue, 26 Mar 96 11:02:19 -0800 Subject: paper on stochastic grammars and resulting architectures Message-ID: <9603261902.AA26326@triangulum> The following paper is available by ftp and www. Symbolic Neural Networks Derived from Stochastic Grammar Domain Models Eric Mjolsness Abstract: Starting with a statistical domain model in the form of a stochastic grammar, one can derive neural network architectures with some of the expressive power of a semantic network and also some of the pattern recognition and learning capabilities of more conventional neural networks. For example in this paper a new version of the "Frameville" architecture, and in particular its objective function and constraints, is derived from a stochastic grammar schema. Possible optimization dynamics for this architecture, and relationships to other recent architectures such as Bayesian networks and variable-binding networks, are also discussed. URL's for Web access: ftp://cs.ucsd.edu/pub/emj/papers/ucsd.TR.CS95-437.ps ftp://cs.ucsd.edu/pub/emj/papers/ucsd.TR.CS95-437.ps.Z ftp://cs.ucsd.edu/pub/emj/papers/ucsd.TR.CS95-437.ps.gz (or indirectly from http://www-cse.ucsd.edu/users/emj) ftp instructions: unix% ftp cs.ucsd.edu Name: anonymous Password: (use your e-mail address) ftp> cd /pub/emj/papers ftp> bin ftp> get ucsd.TR.CS95-437.ps.Z (or ucsd.TR.CS95-437.ps.gz) ftp> bye unix% uncompress ucsd.TR.CS95-437.ps.Z (or gunzip ucsd.TR.CS95-437.ps.gz)  From goldfarb at unb.ca Wed Mar 27 11:38:41 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Wed, 27 Mar 1996 12:38:41 -0400 (AST) Subject: What is a "hybrid" model? 
In-Reply-To: <9603252119.AA14184@athos.cs.ua.edu> Message-ID: On Mon, 25 Mar 1996, Ron Sun wrote: > Hybrid Connectionist-Symbolic Models: > a report from the IJCAI'95 workshop on connectionist-symbolic integration > > Hybrid models involve a variety of different types of processes and > representations, in both learning and performance. > The hybridization of connectionist and symbolic models also > inherits the difficulty with learning from the symbolic > side, and mitigates to some large extent the advantage that the purely > connectionist models have in their learning abilities. > Considering the importance of learning, in both modeling cognition and > building intelligent systems, it is crucial for researchers in this area > to pay more attention to ways of enhancing hybrid models > in this regard and to putting learning back into hybrid models. I guess this is as good time as any to raise the following issue. From the mathematical perspective, I have never seen (in mathematics) HYBRID models. (Mathematicians don't use the term.) Hence a question: How are we to understand this term outside our mathematical experience? Lev Goldfarb Tel: 506-453-4566 Fax: 506-453-3566 http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm  From rsun at cs.ua.edu Wed Mar 27 23:39:33 1996 From: rsun at cs.ua.edu (Ron Sun) Date: Wed, 27 Mar 1996 22:39:33 -0600 Subject: What is a "hybrid" model? Message-ID: <9603280439.AA21077@athos.cs.ua.edu> That's a tricky question. I don't know if there is any clear-cut answer. (I really hate to answer this, Lev :-) ) One might simply say hybrid models involve both symbolic and subsymbolic processes. But then, what are these? One is NN and the other is LISP code?? A deeper answer is needed. Smolensky attempted to distinguish the two types in his PTC paper in 1988. And there is the (still continuing) discussion in terms of systematicity etc. (Fodor and Pylyshyn 1988, Clark 1991). But I am still not clear about the difference. In relation to mathematical forms as alluded to in Lev's message, one possible answer is that while symbolic processes can be better modeled by discrete math, subsymbolic processes are better modeled by continuous math. Thus, hybrid models may involve a variety of mathematical forms. But obviously, this is an (over) simplification. (Just consider the approximate equivalence of discrete and continuous math: one can be approximated by the other.) Another possible answer is that while one involves explicit representation the other involves implicit representation. But then the question is: what is difference between the two representations? If I remember correctly, there was a paper recently in Mind and Machine on exactly this topic. But again I was not convinced by the answer provided by the author. Motivated by this dissatisfaction, I was trying to develop my own solution, but it fared no better. Recently, however, I stumbled upon something that I believe may provide a fruitful way of looking into this and other related issues. What I am looking at is psychological literature on implicit learning (and to a lesser extent, literature on implicit memory, unconscious perception, etc.). What these bodies of work may give us is a scientific (experimental) way of getting a handle on the issues. Instead of philosophizing on the differences and so on (no offense intended), we may actually examine the issues experimentally in human subjects and thus make some head ways towards understanding the differences in a rigorous and well-grounded way. 
As demonstrated by the work of e.g. Reber (1989), Berry and Broadbent (1989), Stanley et al. (1989), Willingham et al (1989), humans may actually learn in two different ways (at least): either explicitly or implicitly (symbolically or subsymbolically?). These two types of learning may interact sometimes (Stanley et al 1989). The distinction and dissociation of these two different types of learning have been demonstrated in a variety of domains, including artificial grammar learning, dynamic control, sequences, covariations, and so on (Seger 1994). Of course, in these experiments, an operational (experiment-based) definition of explicitness and implicitness has to be assumed, and indeed much controversy resulted from definitional differences. However, despite the shortcomings, given the breadth and consistency of results of this line of research, the distinction seems to be well established. I believe this distinction may be beneficial to the understanding of the symbolic vs. subsymbolic and related differences, and ultimately, may lead to a better understanding of what hybrid models are and how we should structure hybrid models. I will announce a TR that contains a thorough discussion of this shortly. --Ron ======================================================================== Dr. Ron Sun http://cs.ua.edu/faculty/sun/sun.html 101 K Houser Hall ftp://aramis.cs.ua.edu/pub/tech-reports/ Department of Computer Science phone: (205) 348-6363 The University of Alabama fax: (205) 348-0219 Tuscaloosa, AL 35487 email: rsun at cs.ua.edu ========================================================================  From goldfarb at unb.ca Thu Mar 28 01:58:52 1996 From: goldfarb at unb.ca (Lev Goldfarb) Date: Thu, 28 Mar 1996 02:58:52 -0400 (AST) Subject: What is a "hybrid" model? In-Reply-To: <9603280439.AA21077@athos.cs.ua.edu> Message-ID: On Wed, 27 Mar 1996, Ron Sun wrote: > That's a tricky question. I don't know if there is any > clear-cut answer. (I really hate to answer this, Lev :-) ) Ron, Please note that the question is not really tricky. The question simply suggests that there is no need to attach the term "hybrid" to the model, because the combination (hybrid model) is both "ugly" and is likely to lead almost all researchers involved in the wrong direction: there are really no "hybrid" mathematical structures, but rather "symbiotic structures", e.g. topological group, (although I would also hesitate to suggest this combination as a research direction). In other words, once we find the right model that captures the necessary "symbiosis" of the discrete and the continuous, we will give it the name that reflects its unique and fundamentally new features, which it MUST exhibit. By the way, I do believe that the inductive learning model proposed by me - evolving transformation system (see the publications in my homepage) - embodies a fundamentally new symbiosis of the discrete and the continuous. > In relation to mathematical forms as alluded to in Lev's message, > one possible answer is that while symbolic processes can be better modeled > by discrete math, subsymbolic processes are better modeled > by continuous math. It is also important to understand why the nature of the above symbiosis should be radically different from that of the classical mathematical structures, which embody, basically, the symbiosis of the NUMERIC mathematical structures. > Another possible answer is that while one involves explicit representation > the other involves implicit representation. 
But then the question is: > what is difference between the two representations? > Recently, however, I stumbled upon something that I believe may provide > a fruitful way of looking into this and other related issues. > What I am looking at is psychological literature on implicit learning > (and to a lesser extent, literature on implicit memory, unconscious > perception, etc.). What these bodies of work may give us is a scientific > (experimental) way of getting a handle on the issues. Instead of > philosophizing on the differences and so on (no offense intended), > we may actually examine > the issues experimentally in human subjects and thus make some head ways > towards understanding the differences in a rigorous and well-grounded way. > As demonstrated by the work of e.g. Reber (1989), Berry and Broadbent (1989), > Stanley et al. (1989), Willingham et al (1989), humans may actually > learn in two different ways (at least): > either explicitly or implicitly (symbolically or subsymbolically?). > These two types of learning may interact sometimes (Stanley et al 1989). > The distinction and dissociation of these two different types of learning > have been demonstrated in a variety of domains, including artificial > grammar learning, dynamic control, sequences, covariations, and so on > (Seger 1994). Of course, in these experiments, an operational > (experiment-based) definition of explicitness > and implicitness has to be assumed, and indeed much controversy resulted > from definitional differences. However, despite the shortcomings, > given the breadth and consistency of results > of this line of research, the distinction seems to be well established. > I believe this distinction may be beneficial > to the understanding of the symbolic vs. subsymbolic and related differences, > and ultimately, may lead to a better understanding of what hybrid models > are and how we should structure hybrid models. I simply cannot imagine how such (of necessity) relatively "superficial" experimental observations will in the foreseeable future lead us to the insight into the nature of the fundamentally new MATHEMATICAL STRUCTURE (of course, if one at all cares about it). For that matter, many neuroscientists, for example, with equal justification, may also claim to be on the "right trail". And why not? Remember what Einstein said? (But as long as no principles are found on which to base the deduction, the individual empirical fact is of no use to the theorist; indeed he cannot even do anything with isolated general laws abstracted from experience. He will remain helpless in the face of separate results of empirical research, until principles which he can make the basis of deductive reasoning have revealed themselves to him.) To our area of research this observation applicable even to a larger extent: we are dealing with information processing. 
-- Lev http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm  From bengioy at IRO.UMontreal.CA Thu Mar 28 09:08:04 1996 From: bengioy at IRO.UMontreal.CA (Yoshua Bengio) Date: Thu, 28 Mar 1996 09:08:04 -0500 Subject: Spring School/Workshop in Montreal: LAST ANNOUNCEMENT Message-ID: <199603281408.JAA01139@rouge.IRO.UMontreal.CA> ******** Last reminder, there are only a few seats left: ******* Montreal Workshop and Spring School on Artificial Neural Networks and Learning Algorithms April 15-30 1996 Centre de Recherche Mathematique, Universite de Montreal This workshop and concentrated course on artificial neural networks and learning algorithms is organized by the Centre de Recherches Mathematiques of the University of Montreal (Montreal, Quebec, Canada). The first week of the the workshop will concentrate on learning theory, statistics, and generalization. The second week (and beginning of third) will concentrate on learning algorithms, architectures, applications and implementations. The organizers of the workshop are Bernard Goulard (Montreal), Yoshua Bengio (Montreal), Bertrand Giraud (CEA Saclay, France) and Renato De Mori (McGill). The invited speakers are G. Hinton (Toronto), V. Vapnik (AT&T), M. Jordan (MIT), H. Bourlard (Mons), T. Hastie (Stanford), R. Tibshirani (Toronto), F. Girosi (MIT), M. Mozer (Boulder), J.P. Nadal (ENS, Paris), Y. Le Cun (AT&T), M. Marchand (U of Ottawa), J. Shawe-Taylor (London), L. Bottou (Paris), F. Pineda (Baltimore), J. Moody (Oregon), S. Bengio (INRS Montreal), J. Cloutier (Montreal), S. Haykin (McMaster), M. Gori (Florence), J. Pollack (Brandeis), S. Becker (McMaster), Y. Bengio (Montreal), S. Nowlan (Motorola), P. Simard (AT&T), G. Dreyfus (ESPCI Paris), P. Dayan (MIT), N. Intrator (Tel Aviv), B. Giraud (France), H.P. Graf (AT&T). MORE INFO AT: http://www.iro.umontreal.ca/labs/neuro/spring96/english.html OR contact Louis Pelletier, pelletl at crm.umontreal.ca, #tel: 514-343-2197 -------------------- SCHEDULE --------------------------------- The lectures will take place in room 5340 (5th floor) of the Pavillon Andre-Aisenstadt on the campus of the Universite de Montreal. Week 1 Introduction, learning theory and statistics April 15: 9:00 - 9:30 Registration (Room 5341) & Coffee (Room 4361) 9:30 - 10:30 Y. Bengio: Introduction to learning theory and learning algorithms 10:30 - 11:30 J.P. Nadal: Constructive learning algorithms: empirical study of learning curves (part I) 14:00 - 15:00 G. Dreyfus: Learning to be a dynamical system (part I) 15:00 - 15:30 B. Giraud: Flexibility, robustness and algebraic convenience of neural nets with neurons having a window-like response function April 16: 9:00 - 10:00 Y. Bengio: Introduction to artificial neural networks and pattern recognition 10:00 - 11:00 F. Girosi: Neural networks and approximation theory (part I) 11:00 - 11:30 Coffee Break (room 4361) 11:30 - 12:30 L. Bottou: Learning theory for local algorithms 14:00 - 15:00 J.P. Nadal: Constructive learning algorithms: empirical study of learning curves (part II) 15:00 - 16:00 G. Dreyfus: Learning to be a dynamical system (part II) April 17: 9:00 - 10:00 V. Vapnik: Theory of consistency of learning processes 10:00 - 11:00 L. Bottou: Stochastic gradient descent learning and generalization 11:00 - 11:30 Coffee Break (room 4361) 11:30 - 12:30 F. Girosi: Neural networks and approximation theory (part II) 14:00 - 15:00 M. Marchand: Statistical methods for learning nonoverlapping neural networks (part I) 15:00 - 16:00 J. 
15:00 - 16:00 J. Shawe-Taylor: A Framework for Structural Risk Minimisation (part I)
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 M. Jordan: Introduction to Graphical Models

April 18:
9:00 - 10:00 J. Shawe-Taylor: A Framework for Structural Risk Minimisation (part II)
10:00 - 11:00 R. Tibshirani: Regression shrinkage and selection via the lasso
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 V. Vapnik: Non-asymptotic bounds on the rate of convergence of learning processes
14:00 - 15:00 T. Hastie: Flexible Methods for Classification (part I)
15:00 - 16:00 M. Jordan: Algorithms for Learning and Inference in Graphical Models

April 19:
9:00 - 10:00 S. Bengio: Introduction to Hidden Markov Models
10:00 - 11:00 V. Vapnik: The learning algorithms
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 R. Tibshirani: Model search and inference by bootstrap "bumping"
14:00 - 15:00 T. Hastie: Flexible Methods for Classification (part II)
15:00 - 16:00 M. Marchand: Statistical methods for learning nonoverlapping neural networks (part II)
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 B. Giraud: Spectrum recognition via the pseudo-inverse method and optimal background subtraction

Weeks 2 and 3: Algorithms, architectures and applications

April 22:
9:00 - 9:30 Registration (room 5341) & Coffee (room 4361)
9:30 - 10:30 S. Haykin: Neurosignal Processing: A Paradigm Shift in Statistical Signal Processing (part I)
10:30 - 11:30 H. Bourlard: Using Markov Models and Artificial Neural Networks for Speech Recognition (part I)
14:00 - 15:00 M. Gori: Links between suspiciousness and computational complexity
15:00 - 16:00 M. Mozer: Modeling time series with compositional structure
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 F. Pineda: Reinforcement learning and TD-lambda

April 23:
9:00 - 10:00 S. Haykin: Neurosignal Processing: A Paradigm Shift in Statistical Signal Processing (part II)
10:00 - 11:00 F. Pineda: Hardware architecture for acoustic transient classification
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 H. Bourlard: Using Markov Models and Artificial Neural Networks for Speech Recognition (part II)
14:00 - 15:00 J. Pollack: Dynamical properties of networks for cognition
15:00 - 16:00 P. Dayan: Factor Analysis and the Helmholtz Machine
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 M. Mozer: Symbolically-Constrained Subsymbolic Processing

April 24:
9:00 - 10:00 M. Gori: Number-plate recognition with neural networks
10:00 - 11:00 J. Pollack: A co-evolutionary framework for learning
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 P. Dayan: Bias and Variance in TD Learning
14:00 - 15:00 S. Becker: Unsupervised learning and vision (part I)
15:00 - 16:00 P. Simard: Memory-based pattern recognition

April 25:
9:00 - 10:00 S. Becker: Unsupervised learning and vision (part II)
10:00 - 11:00 G. Hinton: Improving generalisation by using noisy weights
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 N. Intrator: General methods for training ensembles of regressors (part I)
14:00 - 15:00 S. Nowlan: Mixtures of experts
15:00 - 16:00 G. Hinton: Helmholtz machines
16:00 - 16:30 Coffee Break (room 4361)
16:30 - 17:30 Y. Le Cun: Shape Recognition with Gradient-Based Learning Methods

April 26:
9:00 - 10:00 S. Bengio: Input/Output Hidden Markov Models
10:00 - 11:00 Y. Le Cun: Fast Neural Net Learning and Non-Linear Optimization
11:00 - 11:30 Coffee Break (room 4361)
11:30 - 12:30 S. Nowlan: Mixture of experts to understand functional aspects of primate cortical vision
14:00 - 15:00 N. Intrator: General methods for training ensembles of regressors (part II)
15:00 - 16:00 P. Simard: Pattern Recognition Using a Transformation Invariant Metric

April 29:
9:00 - 10:00 J. Moody: Artificial Neural Networks applied to finance (part I)
10:00 - 11:00 Y. Bengio: Modeling multiple time scales and training with a specialized financial criterion
14:00 - 15:00 H.P. Graf: Recent Developments in Neural Net Hardware (part I)

April 30:
9:00 - 10:00 J. Moody: Artificial Neural Networks applied to finance (part II)
10:00 - 10:30 Coffee Break (room 4361)
10:30 - 11:30 H.P. Graf: Recent Developments in Neural Net Hardware (part II)
14:00 - 15:00 J. Cloutier: FPGA-based multiprocessor: Implementation of Hardware-Friendly Algorithms for Neural Networks and Image Processing
15:00 - 15:30 Coffee Break (room 4361)
15:30 - 16:30 J. Moody: Artificial Neural Networks applied to finance (part III)

-------------------- Registration information: ---------------------

$100 (Canadian) or $75 (US) if received before April 1st
$150 (Canadian) or $115 (US) if received on or after April 1st
$25 (Canadian) or $19 (US) for students and post-doctoral fellows

The number of participants will be limited, on a first-come first-served basis. Please register early! Registration forms and hotel information are available at our WEB SITE: http://www.iro.umontreal.ca/labs/neuro/spring96/english.html

For more information, contact Louis Pelletier, pelletl at crm.umontreal.ca, 514-343-2197. Centre de Recherche Mathematique, Universite de Montreal, C.P. 6128, Succ. Centre-Ville, Montreal, Quebec, H3C-3J7, Canada.

-- Yoshua Bengio
Professeur Adjoint, Dept. Informatique et Recherche Operationnelle
Pavillon Andre-Aisenstadt #3339, Universite de Montreal
Dept. IRO, CP 6128, Succ. Centre-Ville, 2920 Chemin de la tour, Montreal, Quebec, Canada, H3C 3J7
E-mail: bengioy at iro.umontreal.ca  Fax: (514) 343-5834
web: http://www.iro.umontreal.ca/htbin/userinfo/user?bengioy or http://www.iro.umontreal.ca/labs/neuro/
Tel: (514) 343-6804. Residence: (514) 738-6206

From ptodd at mpipf-muenchen.mpg.de Thu Mar 28 11:26:17 1996
From: ptodd at mpipf-muenchen.mpg.de (ptodd@mpipf-muenchen.mpg.de)
Date: Thu, 28 Mar 96 17:26:17 +0100
Subject: looking for latest artistic/musical applications
Message-ID: <9603281626.AA06734@hellbender.mpipf-muenchen.mpg.de>

We are putting together a new book on the artistic and musical uses of connectionist systems, including psychological modeling, artistic creation, etc., and we would like everyone's help in making this work as complete as possible. The book will be based on the special issue of Connection Science we edited on this topic (1994), and will include new articles as well. For our revised introduction, we are seeking references and papers on the latest research in this area, so we can provide a more accurate survey of what's out there. Unpublished research projects are also of interest. We have collected references to all of the work that is currently known to us (including the tables of contents of the Connection Science issue and of the 1991 MIT Press book, Music and Connectionism), and we are making this available via anonymous ftp in the following file:

host: ftp canetoad.mpipf-muenchen.mpg.de (with login name "anonymous" and your email address as password)
directory: cd /pub/science/ptodd
file: get references.txt (plain text)

If you have any new pointers or suggestions in this area, please send them to us at the email addresses below.
We will make available the table of contents of the new book when that has been finalized, as well as the list of research we compile beforehand. Thanks for your distributed help--

Peter Todd (ptodd at mpipf-muenchen.mpg.de)
Niall Griffith (ngr at atlas.ex.ac.uk)

Peter M. Todd
Max Planck Institute for Psychological Research
Center for Adaptive Behavior and Cognition
Leopoldstrasse 24
80802 Munich GERMANY
Email: ptodd at mpipf-muenchen.mpg.de
Phone: (049) (89) 38 602 236
Fax: (049) (89) 38 602 252

From lba at inesc.pt Thu Mar 28 11:57:15 1996
From: lba at inesc.pt (Luis B. Almeida)
Date: Thu, 28 Mar 1996 17:57:15 +0100
Subject: Sintra Workshop on Spatiotemporal Models - CFP
Message-ID: <315AC4EB.13728473@inesc.pt>

**** PLEASE POST **** PLEASE DISTRIBUTE TO OTHER LISTS ****

Announcement and Call for Papers
Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems
Sintra, Portugal, 6-8 November 1996

The spatial and temporal aspects of information processing present important challenges, both in biological and in artificial systems. Good examples of these challenges are the processing of visual information and the roles of chaotic behavior and of synchrony in biological systems, or the training of desired dynamical behaviors and the representation of spatial information in artificial systems. The Sintra Workshop on Spatiotemporal Models in Biological and Artificial Systems aims to foster the discussion and interchange of ideas among researchers interested in all aspects of spatiotemporal modelling. A non-exhaustive list of topics is:

analysis of neural patterns
neocortical dynamics
neural synchrony
cortico-thalamic interactions
cortical modules
perception units
sensorimotor modeling
neural correlates of behavior
plasticity in neuronal networks
learning processes and rules
spatiotemporal measures
neural network dynamics
oscillations and chaos in neural networks
recurrent neural networks
robot navigation
coupled oscillators
coupled lattices
cellular automata

WORKSHOP FORMAT

The size of the workshop is planned to be relatively small (around 50 people), to enhance communication among participants. Submissions will be subjected to an international peer review procedure, following the standards for high-quality scientific events. All accepted submissions will be scheduled for poster presentation. The authors of the best-rated submissions will make oral presentations, in addition to their poster presentations. Presentation of an accepted contribution is mandatory for participation in the workshop. There will also be a number of presentations by renowned invited speakers. The workshop is planned to have a duration of two and a half days, from a Wednesday afternoon through the next Friday afternoon. The participants who so desire will have the opportunity to stay the following weekend, for sightseeing.

PAPER SUBMISSION

Submissions will consist of the full papers in their final form. Paper revision after the review is not expected to be possible. The accepted contributions will be published by a major scientific publisher. The proceedings volume is planned to be distributed to the participants at the beginning of the workshop.
The camera-ready paper format is not available yet, but a rough indication is eight A4 pages, typed single-spaced in a 12 point font, with 3.5 cm margins all around. Once a final agreement with the publisher is reached, information on the camera-ready paper format will be sent to all those who have requested it, including those on the workshop mailing list, and will also be incorporated in the workshop's web page (see 'Staying Informed', below). Papers should be submitted to Dr. Jose C. Principe (see below) before the deadline of 30 April. Dr. Principe should also be contacted for clarifying any doubts regarding paper submission.

USEFUL DATES

The workshop will take place on 6-8 November 1996 in Sintra, Portugal. The schedule is as follows:

Deadline for paper submission: 30 April 1996
Results of paper review: 31 July 1996
Workshop: 6-8 November 1996

FUNDING

We have received confirmation that the Office of Naval Research (U.S.A.) will provide partial funding for the workshop. Among other things (e.g. funding the invited speakers), this will allow us to partially subsidize the participants' expenses, by lowering the registration cost. We have also been given a grant from Fundacao Oriente (Portugal), in the amount of 100000 Portuguese escudos (about 600 US dollars), to be assigned to a participant from the Far East. Preference criteria for assigning this grant will be (1) being a graduate student, and (2) the ranking from the paper review procedure. Funding applications have also been sent to several other institutions, so some more funding may become available in the future.

REGISTRATION

The authors of the accepted papers will be sent information about the registration procedure. Registration of an author is mandatory for the publication of the corresponding paper in the proceedings. This is done to ensure the financial balance of the workshop. The current estimate of the registration cost is 40000 Portuguese escudos (about 270 US dollars) per participant. Besides participation in the workshop itself, this estimate includes the proceedings, lunch on Thursday and Friday, and coffee breaks. This estimate already takes into account the funding from the ONR. The cost of lodging at Hotel Tivoli Sintra will be, per night and per person:

Single room 9700 escudos (about 65 US dollars)
Double room 5450 escudos (about 36 US dollars)

THE PLACE

Sintra is a beautiful little town, located about 20 km west of Lisbon. It used to be a vacation place of the Portuguese aristocracy, and has in its vicinity a number of beautiful palaces, a Moorish castle, a monastery carved into the rock and other interesting spots. It is on the edge of a small mountain that creates a microclimate with luxuriant vegetation. Sintra has recently been designated a World Heritage site.

The workshop will be held at the Hotel Tivoli Sintra, which is located in the old part of Sintra. The hotel is modern, comfortable and has good facilities for this kind of event.

STAYING INFORMED

To be included in the workshop's mailing list, send e-mail to Luis B. Almeida (see below).

Workshop's web page: http://aleph.inesc.pt/smbas/
Mirror: http://www.cnel.ufl.edu/workshop.html

Come back to these web pages; they will be kept up to date with the latest information.

WORKSHOP ORGANIZERS

Chair: Fernando Lopes da Silva, Amsterdam University, The Netherlands

Technical program: Jose C. Principe
Address: Electrical Engineering Dept.
University of Florida, Gainesville FL 32611, USA
Phone: +1-904-392-2662  Fax: +1-904-392-0044
E-mail: principe at synapse.cnel.ufl.edu

Local arrangements: Luis B. Almeida
Address: INESC, R. Alves Redol, 9, 1000 Lisboa, Portugal
Phone: +351-1-3100246  Fax: +351-1-3145843
E-mail: luis.almeida at inesc.pt

-- Luis B. Almeida
INESC, R. Alves Redol, 9, P-1000 Lisboa, Portugal
Phone: +351-1-3544607, +351-1-3100246  Fax: +351-1-3145843
e-mail: lba at inesc.pt or luis.almeida at inesc.pt

From FRYRL at f1groups.fsd.jhuapl.edu Thu Mar 28 15:23:00 1996
From: FRYRL at f1groups.fsd.jhuapl.edu (Fry, Robert L.)
Date: Thu, 28 Mar 96 15:23:00 EST
Subject: FW: What is a "hybrid" model?
Message-ID: <315A6A59@fsdsmtpgw.fsd.jhuapl.edu>

>On Wed, 27 Mar 1996, Lev wrote:
>Ron,
>Please note that the question is not really tricky. The question simply
>suggests that there is no need to attach the term "hybrid" to the model,
>because the combination (hybrid model) is both "ugly" and is likely to
>lead almost all researchers involved in the wrong direction: there are
>really no "hybrid" mathematical structures, but rather "symbiotic
>structures", e.g. topological group (although I would also hesitate to
>suggest this combination as a research direction).
>In other words, once we find the right model that captures the necessary
>"symbiosis" of the discrete and the continuous, we will give it the name
>that reflects its unique and fundamentally new features, which it MUST
>exhibit.

I agree whole-heartedly with Ron. The term "hybrid", like so many other concepts that we confuse with reality, is of our own making. As George Spencer-Brown said, "the world is like shifting sands beneath our feet..." It is up to the observer to segment and name the world through the process of learning and distinction.

Historical perspective often sheds light on what are perceived as new problems but are in fact perhaps forgotten ideas. Josiah Willard Gibbs (see Volume I of the collected works) developed thermodynamics and thermostatics through classical and macroscopic means. He then (see Volume II of the collected works) treated statistical mechanics. Gibbs called his statistical mechanical treatment an "analogy". Myron Tribus (another famous thermodynamicist, now more famous for his popularization of Jaynes' MaxEnt Principle) has told me that Gibbs could not show a one-to-one correspondence between what he knew about classical thermodynamics (discrete and quantized, albeit quantum principles had not yet been proposed) and statistical mechanics, because all his statistical functions were continuous. Perhaps this bit of historical perspective provides direct insight into discrete-continuous formulations of neural computation. This is my understanding.

Bob Fry
Johns Hopkins University / Applied Physics Laboratory

From shrager at neurocog.lrdc.pitt.edu Fri Mar 29 00:25:28 1996
From: shrager at neurocog.lrdc.pitt.edu (Jeff Shrager)
Date: Fri, 29 Mar 1996 00:25:28 -0500 (EST)
Subject: What is a "hybrid" model?
In-Reply-To: <9603280439.AA21077@athos.cs.ua.edu>
Message-ID:

> Recently, however, I stumbled upon something that I believe may provide
> a fruitful way of looking into this and other related issues.
>...

Well, since you're into this literature, you might as well look at some real computational psychology. Permit me a moment of inhumility in pointing out our work on the development of arithmetic skill in preschoolers.

Siegler, R. S., & Shrager, J. (1984). Strategy choices in addition and subtraction: How do children know what to do?
In C. Sophian (Ed.), Origins of Cognitive Skills. Hillsdale, NJ: Lawrence Erlbaum Associates. 229-294.

We describe an implemented hybrid (in neo-terminology) model of the development of small-number addition skill (e.g., what's 4+3?) and validate the model against real children's learning data. The model generally correlates with the observed/predicted performance at greater than .8, and often greater than .9. It is hybrid in two senses. (I use the term "mediated" rather than "hybrid" because "mediated" describes the way in which the components interact, as follows....)

First, the model has both an explicit component and a memory component. The former is discrete (it does simple addition, which is, well, simple), and the latter is a basic (continuous) association model. These components train one another, and the decision about which component to use in a given case is made in accord with the history of training. This is one sense of mediation: the memory component is trained by the discrete component; that is, the discrete component mediates between the memory and reality (or, rather, correctness).

The second sense in which the model is hybrid, or mediated, is in a (computational analog of a) social sense: specifically, the model's "environment" -- the particular distribution of problems that it sees -- is based upon real observations of the distribution of problems that children are given by their parents. Here, too, the parent mediates the child's performance until the child's own systems are trained up appropriately.

There. Now you don't have any excuse for not including this in your forthcoming TR! :-)

Cheers,
Jeff

p.s. Anyone who would like a copy of this forthcame-TR may send me an address-label-like email and I'll be happy to send one out.
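To make the two senses of mediation concrete, here is a minimal Python sketch of such a two-component model. It is a toy reconstruction for illustration only, not the Siegler & Shrager implementation: the confidence criterion, the learning rate, and all the names below are assumptions of the sketch, and the real model's probabilistic retrieval of errors is omitted.

import random
from collections import defaultdict

class MediatedAdder:
    """Toy 'mediated' model: a continuous associative memory and a
    discrete explicit procedure that train one another (illustrative
    parameters, not the published model's)."""

    def __init__(self, confidence_criterion=0.6, learning_rate=0.2):
        # strengths[(a, b)][answer] -> associative strength (continuous part)
        self.strengths = defaultdict(lambda: defaultdict(float))
        self.criterion = confidence_criterion
        self.rate = learning_rate

    def retrieve(self, a, b):
        # Memory component: propose the strongest associate, with a
        # confidence given by its share of the total strength.
        dist = self.strengths[(a, b)]
        total = sum(dist.values())
        if total == 0.0:
            return None, 0.0
        answer = max(dist, key=dist.get)
        return answer, dist[answer] / total

    def count(self, a, b):
        # Explicit component: slow but reliable discrete procedure.
        return a + b

    def solve(self, a, b):
        answer, confidence = self.retrieve(a, b)
        if answer is None or confidence < self.criterion:
            # Fall back on counting; its outcome then trains the memory,
            # i.e. the discrete component mediates between memory and
            # correctness.
            answer = self.count(a, b)
        self.strengths[(a, b)][answer] += self.rate
        return answer

model = MediatedAdder()
# A problem "environment" biased toward small addends, standing in for
# the observed distribution of problems that parents pose to children.
for _ in range(500):
    model.solve(random.randint(1, 5), random.randint(1, 5))
print(model.solve(4, 3))  # by now answered from fast retrieval, not counting

Run on such a stream of problems, the sketch shifts problem by problem from slow counting to fast retrieval as the retrieval strengths build up, which is the qualitative developmental pattern the message describes.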
From rsun at cs.ua.edu Fri Mar 29 01:20:43 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Fri, 29 Mar 1996 00:20:43 -0600
Subject: What is a "hybrid" model?
Message-ID: <9603290620.AA14624@athos.cs.ua.edu>

Lev: I have to disagree with several points you made.

lev goldfarb at unb.ca wrote:
>Please note that the question is not really tricky. The question simply
>suggests that there is no need to attach the term "hybrid" to the model,
>because the combination (hybrid model) is both "ugly" and is likely to
>lead almost all researchers involved in the wrong direction

I really don't care that much what label one uses, and a label simply cannot ``lead all researchers in the wrong direction". Sorry. :-)

>really no "hybrid" mathematical structures, but rather "symbiotic
>structures", e.g. topological group

There have been _a lot of_ different mathematical structures proposed that purport to capture all the essential properties of both symbolic and neural models. I would hesitate to make any such claim right now: we simply do not know enough yet, even about the basics, to make such a sweeping claim. What we can do is work toward such an end.

>I simply cannot imagine how such (of necessity) relatively "superficial"
>experimental observations will in the foreseeable future lead us to the
>insight into the nature of the fundamentally new MATHEMATICAL STRUCTURE
>(of course, if one at all cares about it).

I would not so readily dismiss the literature on implicit learning which I cited in the previous message as ``superficial". I would urge you to look into these papers first. While you are at it, you might also look into some related work in developmental psychology (and models of developmental processes). Personally, I find these pieces of work very PRINCIPLED, and they may lead to exactly the kind of mathematical structure that we need to model cognition (which is, of necessity, complex or even ``heterogeneous").

>Remember what Einstein said?
>(But as long as no principles are found on which to base the deduction,
>the individual empirical fact is of no use to the theorist; indeed he
>cannot even do anything with isolated general laws abstracted from
>experience. He will remain helpless in the face of separate results of
>empirical research, until principles which he can make the basis of
>deductive reasoning have revealed themselves to him.)

I cannot agree more. It's a nice quote. But it supports my points as much as it does yours.

Cheers,
--Ron

========================================================================
Dr. Ron Sun                    http://cs.ua.edu/faculty/sun/sun.html
101 K Houser Hall              ftp://aramis.cs.ua.edu/pub/tech-reports/
Department of Computer Science phone: (205) 348-6363
The University of Alabama     fax: (205) 348-0219
Tuscaloosa, AL 35487           email: rsun at cs.ua.edu
========================================================================

From rsun at cs.ua.edu Fri Mar 29 01:34:34 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Fri, 29 Mar 1996 00:34:34 -0600
Subject: What is a "hybrid" model?
Message-ID: <9603290634.AA15686@athos.cs.ua.edu>

Jeff Shrager wrote:
>> I guess this is as good a time as any to raise the following issue. From
>> the mathematical perspective, I have never seen (in mathematics) HYBRID
>> models. (Mathematicians don't use the term.) Hence a question: How are we
>> to understand this term outside our mathematical experience?
>
>When you have two (or more) hacks, each of which does part of a job,
>and you can't find one that does the whole job, you wire them together
>and call it a hybrid model. Seems simple enough. (Maybe it's like
>wiring together two (or more) diffeqs.) The brain is full of such
>things.
>
>Cheers,
> Jeff

Exactly. You find such things not just in neuroscience, but also in psychological data, and in AI models. Are they necessarily bad for these fields? Not if they lead to the discovery of real principles. BTW, principles can be ``hybrid", contrary to what some may say. (You can call them synergistic, symbiotic, or what have you.)

Cheers,
--Ron

========================================================================
Dr. Ron Sun                    http://cs.ua.edu/faculty/sun/sun.html
101 K Houser Hall              ftp://aramis.cs.ua.edu/pub/tech-reports/
Department of Computer Science phone: (205) 348-6363
The University of Alabama     fax: (205) 348-0219
Tuscaloosa, AL 35487           email: rsun at cs.ua.edu
========================================================================

From Jonathan_Stein at hub1.comverse.com Fri Mar 29 10:21:55 1996
From: Jonathan_Stein at hub1.comverse.com (Jonathan_Stein@hub1.comverse.com)
Date: Fri, 29 Mar 96 10:21:55 EST
Subject: Re[?]: What is a "hybrid" model?
Message-ID: <9602298281.AA828133262@hub1.comverse.com>

I have been following this thread and can't resist a few comments regarding the basic differences between "symbolic" and "neural" processes, and the interplay between them.

First, it can be easily demonstrated that the two types of processes both exist and are differentiable. Consider, for example, the task of determining whether three specific dots among many others form an equilateral triangle. When the three dots are red and all the others black, this task can be performed quickly, while if the three points are marked by a different shape (e.g. small squares) than the others (e.g. miscellaneous circles, triangles, etc. of roughly the same size), we resort to exhaustive search. The first problem is solved using a connectionist technique, while for the second we resort to good old-fashioned AI.
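A toy sketch of that contrast, under illustrative assumptions (display items represented as (x, y, color, shape) tuples; "steps" as a crude stand-in for reaction time; this is an invented construction, not a model from the psychophysics literature):

import math, itertools

def is_equilateral(p, q, r, tol=1e-6):
    d = sorted(math.dist(a, b) for a, b in ((p, q), (q, r), (p, r)))
    return d[2] - d[0] < tol

def feature_search(display):
    # "Pop-out": the color feature map is computed in a single parallel
    # pass, independent of how many items the display contains.
    steps = 1
    reds = [(x, y) for x, y, color, shape in display if color == "red"]
    return len(reds) == 3 and is_equilateral(*reds), steps

def serial_search(display):
    # No single feature pops out: attention visits items one at a time,
    # so cost grows with display size, and candidate triples must then
    # be tested exhaustively.
    squares, steps = [], 0
    for x, y, color, shape in display:
        steps += 1
        if shape == "square":
            squares.append((x, y))
    found = any(is_equilateral(p, q, r)
                for p, q, r in itertools.combinations(squares, 3))
    return found, steps

display = [(0.0, 0.0, "red", "circle"), (2.0, 0.0, "red", "circle"),
           (1.0, math.sqrt(3.0), "red", "circle"),
           (0.3, 1.1, "black", "circle"), (1.7, 0.4, "black", "triangle")]
print(feature_search(display))  # (True, 1): one parallel pass suffices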
Next, it has been demonstrated in psychophysical experiments that there are two types of learning. The first type is gradual, with slowly improving performance, while in primates there is also "sudden" learning, where the subject (EUREKA!) discovers a symbolic representation simplifying the task. Thus not only is the basic hardware different for the two processes; different learning algorithms are used as well.

Finally, regarding the interplay between the two: biology does not cleanly separate the task with defined interfaces (as people typically try to do) but employs level mixing. In both speech recognition and reading problems it has been demonstrated that the lower (neural) levels provide initial best hypotheses, which can be rejected by higher (syntactic, semantic or pragmatic) levels. A nice example is to quickly say "How do you wreck a nice beach?" in the context of a conversation about speech recognition. Most people will hear "How do you recognize speech?" Another interesting aspect surfaces when there are several different lower levels feeding higher ones. In the famous BAGADA experiment (the McGurk effect), a subject listens to one phoneme while seeing a film of someone saying a different one, and reports hearing a third!

Thus the idea behind "hybrid" systems, composed of decision-theoretic and symbolic layers, is neither 1) trivial and ugly, 2) a hack wiring together two unrelated layers, nor 3) a matter of semantics and of no interest. Calling them symbiotic rather than hybrid IS a matter of semantics.

Jonathan Stein

From complex at blaze.cs.jhu.edu Fri Mar 29 16:59:59 1996
From: complex at blaze.cs.jhu.edu (2nd account for S.Kasif)
Date: Fri, 29 Mar 96 16:59:59 EST
Subject: AAAI SYMPOSIUM ANNOUNCEMENT
Message-ID:

=========================================================
C A L L   F O R   P A P E R S
=========================================================

LEARNING COMPLEX BEHAVIORS IN ADAPTIVE INTELLIGENT SYSTEMS
AAAI Fall Symposium
November 9-11, 1996
Cambridge, Massachusetts, USA

Submissions due April 15, 1996
See the symposium home page at http://www.cs.jhu.edu/complex/symposium/cfp.html

Call for Papers

The machine learning community made an important methodological transition by identifying a collection of benchmarks that can be used for comparative testing of learning (typically classification) algorithms. While the resulting comparative research contributed substantially to progress in the field, a number of recent studies have shown that very simple representations, such as depth-two decision trees, naive Bayes classifiers or perceptrons, perform relatively well on many of the benchmarks, which are typically static, fixed-size databases. At the same time, when knowledge representations are hand-crafted for solving complex tasks, they are typically rather large and are often designed to cope with complex dynamic environments. This symposium will attempt to bridge this gap by increasing the focus of the meeting towards the study of algorithms that learn to perform complex behaviors and cognitive tasks such as reasoning and planning with uncertainty, perception, natural language processing and large-scale industrial applications. An additional important subgoal is emphasizing scalability of learning algorithms
(e.g. reinforcement learning) in these complex domains. Our main motivation is to have an interdisciplinary meeting that focuses on "rational" agents that learn complex behaviors, which is closer in spirit to the goals of AI than learning simple classifiers. We expect to draw selected researchers from AI, Neural Networks, Machine Learning, Uncertainty, and Computer Science Theory.

Some of the key issues we plan to address are:

* Research on agents that learn to behave "rationally" in complex environments.
* Discovering parameters that can be used to measure the empirical complexity of learning a complex domain.
* Generating new benchmarks and devising a methodological framework for studying empirical scalability of algorithms that learn complex behaviors.
* Broadening the focus of learning to achieve a particular functionality in response to the demands generated by the domain, rather than learning a particular representation (e.g. learning to answer queries of the form "what is the probability of X given Y" may be easier than learning a complete probability distribution on n variables, which in the binary case has on the order of 2^n parameters).
* Discussing the hypothesis that current learning algorithms require substantial knowledge engineering and close familiarity with the problem domain in order to learn complex behaviors.
* Scalability of different representations and learning methods.

The symposium will consist of invited talks, submitted papers, and panel discussions on topics such as:

* learning complex i/o behaviors;
* learning optimization and planning;
* learning to reason;
* learning to reason with uncertainty; and
* learning to perform complex cognitive tasks.

We will invite short technical papers on these issues, as well as position papers relating learning and issues in knowledge representation; comparative papers that illustrate the capabilities of different representations to achieve the same functionality; and papers providing specific benchmarks that demonstrate the scalability of a particular representation or paradigm.

SUBMISSION INFORMATION

Prospective participants are encouraged to submit extended abstracts (5-8 pages) addressing the research issues above. Please refer to the extended version of the call for papers, which provides additional submission information and a tentative program (available on the WEB at http://www.cs.jhu.edu/complex/symposium/cfp.html). Electronic submissions as well as inquiries about the program should be sent to complex at cs.jhu.edu.

IMPORTANT DATES

Submissions must be received by: 15 April 1996
Notification of acceptance on or before: 17 May 1996
Camera-ready copy for working notes due: 23 Aug 1996

ORGANIZING COMMITTEE

S. Kasif (co-chair), Johns Hopkins Univ.; S. Russell (co-chair), Berkeley; R. Berwick, MIT; T. Dean, Brown Univ.; R. Greiner, Siemens Research; M. Jordan, MIT; L. Kaelbling, Brown Univ.; D. Koller, Stanford Univ.; A. Moore, CMU; D. Roth, Weizmann Institute

* * *

Fall Symposia are sponsored by the American Association for Artificial Intelligence (AAAI).
More information about the Fall Symposium on "LEARNING COMPLEX BEHAVIORS" can be found at: http://www.cs.jhu.edu/complex/symposium/cfp.html

From omlinc at cs.rpi.edu Fri Mar 29 17:00:21 1996
From: omlinc at cs.rpi.edu (omlinc@cs.rpi.edu)
Date: Fri, 29 Mar 96 17:00:21 EST
Subject: TR available - fuzzy recurrent neural networks
Message-ID: <9603292200.AA27037@colossus.cs.rpi.edu>

The following Technical Report is available via the University of Maryland Department of Computer Science and the NEC Research Institute archives:

---------------------------------------------------------------------------

Fuzzy Finite-state Automata Can Be Deterministically Encoded into Recurrent Neural Networks

Christian W. Omlin(a), Karvel K. Thornber(a), C. Lee Giles(a,b)
(a) NEC Research Institute, Princeton, NJ 08540
(b) UMIACS, U. of Maryland, College Park, MD 20742

U. of Maryland Technical Report CS-TR-3599 and UMIACS-96-12

ABSTRACT

There has been increased interest in combining fuzzy systems with neural networks because fuzzy neural systems merge the advantages of both paradigms. On the one hand, parameters in fuzzy systems have clear physical meanings, and rule-based and linguistic information can be incorporated into adaptive fuzzy systems in a systematic way. On the other hand, there exist powerful algorithms for training various neural network models. However, most of the proposed combined architectures are only able to process static input-output relationships, i.e. they are not able to process temporal input sequences of arbitrary length. Fuzzy finite-state automata (FFAs) can model dynamical processes whose current state depends on the current input and previous states. Unlike deterministic finite-state automata (DFAs), FFAs are not in one particular state; rather, each state is occupied to some degree defined by a membership function. Based on previous work on encoding DFAs in discrete-time, second-order recurrent neural networks, we propose an algorithm that constructs an augmented recurrent neural network that encodes an FFA and recognizes a given fuzzy regular language with arbitrary accuracy. We then empirically verify the encoding methodology by measuring the string recognition performance of recurrent neural networks which encode large randomly generated FFAs. In particular, we examine how the networks' performance varies as a function of synaptic weight strength.

Keywords: fuzzy logic, automata, fuzzy automata, recurrent neural networks, encoding, rules.

****************************************************************

I would like to add to my announcement of the TR that recurrent neural networks with sigmoid discriminant functions that represent finite-state automata are an example of hybrid systems. Comments regarding the TR are welcome. Please send them to omlinc at research.nj.nec.com.

Thanks,
-Christian

http://www.neci.nj.nec.com/homepages/giles.html
http://www.neci.nj.nec.com/homepages/omlin/omlin.html
http://www.cs.umd.edu/TRs/TR-no-abs.html
ftp://ftp.nj.nec.com/pub/giles/papers/UMD-CS-TR-3599.fuzzy.automata.encoding.recurrent.nets.ps.Z
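For readers unfamiliar with FFAs, here is a minimal sketch of what such an automaton computes, before any neural encoding. Each state carries a membership degree, updated here by max-min composition over weighted transitions; that is one standard choice of fuzzy semantics assumed for this sketch, and the TR itself should be consulted for the exact formulation used there. The two-state automaton below is invented for illustration.

def ffa_step(mu, transitions, symbol, states):
    # Max-min composition: the degree of ending in state t is the best,
    # over source states s, of min(degree of s, weight of the transition).
    return {t: max((min(mu.get(s, 0.0), w)
                    for (s, a, t2), w in transitions.items()
                    if a == symbol and t2 == t), default=0.0)
            for t in states}

def ffa_run(transitions, initial, string):
    # transitions: {(state, symbol, next_state): weight in [0, 1]}
    # initial:     {state: membership degree}
    states = ({s for (s, _, _) in transitions} |
              {t for (_, _, t) in transitions})
    mu = dict(initial)
    for symbol in string:
        mu = ffa_step(mu, transitions, symbol, states)
    return mu

# An invented two-state toy FFA over the alphabet {0, 1}.
trans = {("q0", "0", "q0"): 1.0,
         ("q0", "1", "q1"): 0.7,
         ("q1", "1", "q1"): 0.9,
         ("q1", "0", "q0"): 0.5}
print(ffa_run(trans, {"q0": 1.0}, "011"))
# -> {'q0': 0.0, 'q1': 0.7}: after reading "011" the automaton is "in"
#    state q1 to degree 0.7, not in any one state outright.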
From goldfarb at unb.ca Fri Mar 29 23:51:28 1996
From: goldfarb at unb.ca (Lev Goldfarb)
Date: Sat, 30 Mar 1996 00:51:28 -0400 (AST)
Subject: What is a hybrid model?
In-Reply-To: <9603291518.AA12861@athos.cs.ua.edu>
Message-ID:

On Fri, 29 Mar 1996, Ron Sun wrote:

> lev goldfarb at unb.ca wrote:
> >Please note that the question is not really tricky. The question simply
> >suggests that there is no need to attach the term "hybrid" to the model,
> >because the combination (hybrid model) is both "ugly" and is likely to
> >lead almost all researchers involved in the wrong direction
>
> I really don't care that much what label one uses, and a label simply
> cannot ``lead all researchers in the wrong direction". Sorry. :-)

The labels we use betray our ignorance and prejudices and mislead many ignorant followers.

> >really no "hybrid" mathematical structures, but rather "symbiotic
> >structures", e.g. topological group
>
> There have been _a lot of_ different mathematical structures proposed
> that purport to capture all the essential properties of both symbolic
> and neural models. I would hesitate to make any such claim right now:
> we simply do not know enough yet, even about the basics, to make such
> a sweeping claim. What we can do is work toward such an end.

I'm all for "working toward such an end". But let's try to do it in a competent manner. Contrary to the above, there have been NO fundamentally new and relevant MATHEMATICAL STRUCTURES (except the transformation system) proposed so far that embody a "natural" symbiosis of symbolic and numeric mathematical structures.

To properly understand the last statement, one has to know first of all what the meaning of "mathematical structure" is. An outstanding group of French mathematicians, who took the pseudonym of Nicolas Bourbaki, contributed significantly to the popularization of the emerging (during the first half of this century) understanding of mathematical structures. Presently a mathematical structure (e.g. totally ordered set, group, vector space, topological space) is typically understood as a set - the carrier of the structure - together with a set of operations, or relations, defined on it. The relations/operations are actually specified by means of axioms. (Frankly, it takes MANY hours to become comfortable with the term "mathematical structure" through the study of several typical structures. I don't know any of the newer introductory books, which I'm sure are many, but from the older ones I recommend, for example, Algebra, by Roger Godement, 1968.)

In the case of more complex and more interesting classical "symbiotic" structures, such as the topological group, one defines this new structure by imposing on the "old" structure (the group) a new structure (the topology) in such a way that the new structure is CONSISTENT with the old (in a certain well-defined sense; e.g. the algebraic operations must be continuous with respect to the introduced topology).

Why is it that we are faced with considerable difficulties when trying to "combine" the symbolic and the numeric mathematical structures into one "natural" structure that is of relevance to us? It turns out that the two classes of structures are not "combinable" in the sense which we are used to in classical mathematics. (Please, no hacks: I'd like to talk science now.) Why? While each element in a classical "symbiotic" mathematical structure belongs simultaneously to the two earlier structures, in this case that is definitely not true: symbols are not numbers (and symbolic operations are FUNDAMENTALLY different from numeric operations).

It appears that THE ONLY NATURAL WAY to accomplish the "symbiosis" in this case is to associate with each symbolic operation a weight and to introduce into the corresponding set of symbolic objects, or structs, a distance measure that takes into consideration the operation weights. The distance is defined by means of sequences of weighted operations, i.e. NUMBERS enter the mathematical structure through the DISTANCES DEFINED ON the basic objects, STRUCTS. I'm quite confident that the new structure thus defined is (in a well-defined sense) more general than any of the classical numeric structures, and hence cannot be "isomorphic" to any of them.
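One concrete (and deliberately oversimplified) reading of "NUMBERS enter through the DISTANCES DEFINED ON structs" is a weighted edit distance between symbol strings: each symbolic operation (insertion, deletion, substitution) carries a numeric weight, and the distance is the cost of the cheapest weighted operation sequence. The sketch below only illustrates that idea; the weights are arbitrary, and Goldfarb's transformation-system framework is more general than this special case.

def weighted_edit_distance(x, y, w_ins=1.0, w_del=1.0, w_sub=1.5):
    # Cost of the cheapest sequence of weighted symbolic operations
    # transforming string x into string y (dynamic programming).
    m, n = len(x), len(y)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = d[i - 1][0] + w_del
    for j in range(1, n + 1):
        d[0][j] = d[0][j - 1] + w_ins
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0.0 if x[i - 1] == y[j - 1] else w_sub
            d[i][j] = min(d[i - 1][j] + w_del,       # delete x[i-1]
                          d[i][j - 1] + w_ins,       # insert y[j-1]
                          d[i - 1][j - 1] + sub)     # substitute or match
    return d[m][n]

print(weighted_edit_distance("abab", "abba"))
# -> 2.0: with these weights one deletion plus one insertion is cheaper
#    than two substitutions (3.0); changing the operation weights changes
#    the geometry of the space of structs.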
We have discussed the axiomatics of the new structure on the INDUCTIVE list. It is also not difficult to see that in applications of the new mathematical structure, the symbolic representations, or structs, begin to play a much more fundamental role as compared with the classical models, e.g. the NN. Of course, this implies, in particular, that in applications of the model we need fundamentally different measurement devices, which in nature are realized chemically, but can at present be simulated directly on top of the classical measurement devices. We are completing a paper, "Inductive theory of vision", in which, among other things, we discuss these issues more formally (and with some illustrations).

-- Lev
http://wwwos2.cs.unb.ca/profs/goldfarb/goldfarb.htm

From rybaki at eplrx7.es.dupont.com Sat Mar 30 06:59:47 1996
From: rybaki at eplrx7.es.dupont.com (Ilya Rybak)
Date: Sat, 30 Mar 1996 06:59:47 -0500
Subject: shift invariance
Message-ID: <199603301159.GAA13150@davinci>

Dear Connectionists,

I followed the discussion about shift invariance, which was recently on this list, with great interest. The question is still open, but Lev Goldfarb's point of view looks more plausible (at least to me). Of course, human vision is able to recognize images invariantly to shift, scale and (possibly) rotation. However, this does not mean that the property results directly from the corresponding property of some perceptron-like neural network. Invariant recognition in human vision is related to much more complex processes, and probably it cannot be understood within the limited frame of neural computations, without taking into account attention mechanisms and the psychological and behavioral aspects of visual perception and recognition.

Anyway, using a kind of behavioral approach, we have tried to build a model of a visual system without any invariant properties in its neural networks. The model is called "BMV: Behavioral model of active visual perception and invariant recognition". BMV is able to recognize complex gray-level images (e.g. faces) invariantly to any 2D transformations (shift, rotation and scale). Descriptions of our approach and the BMV model, as well as our DEMO for DOS, are now available on the WWW. The URL is http://www.voicenet.com/~rybak/vnc.html

Have a look; maybe you will find it interesting in the context of the shift-invariance discussion. Any feedback is welcome.

Ilya Rybak
DuPont Central Research
rybaki at eplrx7.es.duPont.com

From rsun at cs.ua.edu Sat Mar 30 10:21:00 1996
From: rsun at cs.ua.edu (Ron Sun)
Date: Sat, 30 Mar 1996 09:21:00 -0600
Subject: What is a hybrid model?
Message-ID: <9603301521.AA18989@athos.cs.ua.edu>

This discussion has been interesting, but I think enough is enough, so I will shut up after this message.

lev goldfarb wrote:
>The labels we use betray our ignorance and prejudices and mislead many
>ignorant followers.
I just don't think the scientific community can in any way be characterized as "ignorant" (or as "ignorant followers"). Furthermore, I am sure that people can see beyond labels and get to the real issues, ideas, and techniques that we have been developing (which is what we should be focusing on instead of labels).

>An outstanding group of French mathematicians, who took the pseudonym of
>Nicolas Bourbaki, contributed significantly to the popularization of the
>........

great.

>It appears that THE ONLY NATURAL WAY to accomplish the "symbiosis" in this
>case is to associate with each symbolic operation a weight and to
>introduce into the corresponding set of symbolic objects, or structs, the
>distance measure that takes into consideration the operation weights. The
>distance is defined by means of sequences of weighted operations, i.e.
>NUMBERS enter the mathematical structure through the DISTANCES DEFINED ON
>the basic objects, STRUCTS. I'm quite confident that the new structure
>thus defined is (in a well defined sense) more general than any of the
>classical numeric structures, and hence cannot be "isomorphic" to any of
>them.

I am sure this is nice. But I can't see that this can solve all the problems. All that I am advocating here is some kind of ``pluralism": we need to try different approaches and methods. It is not obvious yet which method is THE best. Maybe different methods are good for solving different problems; this is true not just in engineering, but also in SCIENCE (e.g. physics). It would be inappropriate to dismiss all the other ideas in favor of one. Finally, there is always a danger of simplistic OVERgeneralization, which could be misleading, even though I don't share the view of people being ``ignorant".

Regards,
--Ron

========================================================================
Dr. Ron Sun                    http://cs.ua.edu/faculty/sun/sun.html
101 K Houser Hall              ftp://aramis.cs.ua.edu/pub/tech-reports/
Department of Computer Science phone: (205) 348-6363
The University of Alabama     fax: (205) 348-0219
Tuscaloosa, AL 35487           email: rsun at cs.ua.edu
========================================================================

From shrager at neurocog.lrdc.pitt.edu Sat Mar 30 13:05:24 1996
From: shrager at neurocog.lrdc.pitt.edu (Jeff Shrager)
Date: Sat, 30 Mar 1996 13:05:24 -0500 (EST)
Subject: What is a "hybrid" model?
In-Reply-To:
Message-ID:

I need to clarify something regarding the paper I mentioned in my note to Ron about hybrid models. I made a joke at the end of my note which seems to have gone over everyone's heads. The paper that I referred to:

Siegler, R. S., & Shrager, J. (1984). Strategy choices in addition and subtraction: How do children know what to do? In C. Sophian (Ed.), Origins of Cognitive Skills. Hillsdale, NJ: Lawrence Erlbaum Associates. 229-294.

is NOT a forthcoming TR; it's an *old* *published* paper. Since it's old, I don't have it online and can't send it to you in PostScript format, nor put it into the archive. However, since it's published in a somewhat obscure place, I'm happy to send it to you, but you need to send me your hardcopy address! (I'd really appreciate it, though, if you'd try just once to find the book in your local library first, since I have to do the work of both photocopying the paper and mailing it out to you.) Thanks very much.

Cheers,
Jeff

(The joke was that I played on Ron's "forthcoming TR" and called our paper, above, a "forthcame-TR". I'm sorry; I realize now that this was way too subtle for email correspondence.)