From smieja at jargon.gmd.de Sun Aug 2 05:43:27 1992 From: smieja at jargon.gmd.de (Frank Smieja) Date: Sun, 2 Aug 92 11:43:27 +0200 Subject: identification vs. authentication problems In-Reply-To: Jocelyn Sietsma Penington's message of Fri, 31 Jul 92 12:47:20 +1000 Message-ID: <9208020943.AA16555@jargon.gmd.de> The issue of network/network system response to an unlearnt class, or random pattern, was discussed in F. J. Smieja & H. Muehlenbein, "Reflective Modular Neural Network Systems" (submitted to Machine Learning, also available in the Ohio State University NEUROPROSE archive as 'smieja.reflect.ps.Z'), as the "dog-paw" test. For singly-operating networks the conclusion was either to tolerate it or to degrade the 'real' learning by explicitly mapping random patterns to another class output. It was shown that for the modular network system introduced in the paper, 'garbage' answers could be learnt out without significant degradation to the 'real' learning. Jocelyn Sietsma writes: -) -) In my opinion (based on somewhat limited experience) feed-forward ANNs usually -) develop a 'default output' or default class, which means most new inputs which -) are different to the training classes will be classified in the same way. I cannot agree with this. The fraction of garbage classified as a particular class depends on the distribution of the classes in the pattern space, and the frequency with which each class is seen during the learning. This determines how the pattern space is split up by the ANN's hyperplanes, and, of course, how each possible pattern in the input space will be classified. -Frank Smieja Gesellschaft fuer Mathematik und Datenverarbeitung (GMD) GMD-FIT.KI.AS, Schloss Birlinghoven, 5205 St Augustin 1, Germany. Tel: +49 2241-142214 email: smieja at gmd.de  From UDAH025 at OAK.CC.KCL.AC.UK Tue Aug 4 10:58:00 1992 From: UDAH025 at OAK.CC.KCL.AC.UK (G.BUGMANN@OAK.CC.KCL.AC.UK) Date: Tue, 4 Aug 92 10:58 BST Subject: Robustness ? Message-ID: Dear Connectionists, A widespread belief is that neural networks are robust against the loss of neurons. This is possibly true for large Hopfield nets but is certainly wrong for multilayer networks trained with backpropagation, a fact mentioned by several authors (a list of references can be found in our paper described below). I wonder where this whole idea of robustness comes from ? It is possibly based on the two following beliefs: 1) Millions of neurons in our brain die each day. 2) Artificial neural networks have the same properties as biological neural networks. As we are apparently not affected by the loss of neurons, we are forced to conclude that artificial neural networks should also not be affected. However, belief 2 is difficult to confirm because we do not really know how the brain operates. As for belief 1, neuronal death is well documented during development but I could not find a publication covering the adult age. Does anyone know of any publications supporting belief 1 ? Thank you in advance for your reply. Guido Bugmann ---------------------------------------------------------- The following paper will appear in the proceedings of ICANN'92, published as "Artificial Neural Networks II" by Elsevier. "Direct approaches to Improving the Robustness of Multilayer Neural Networks" G. Bugmann, P. Sojka, M. Reiss, M. Plumbley and J. G. Taylor This paper describes two methods to improve the robustness of multilayer NNs (robustness is defined by the error induced by the loss of a hidden node).
The first method consists of including a "robustness error" term in the error function used in backpropagation. This was done in the hope that it would force the network to converge to a more robust configuration. A typical test was to train a network with 10 hidden nodes on the XOR function. As only 2 hidden nodes are actually necessary, the most robust configuration is then made of two groups of 5 hidden neurons sharing the same function. Although the modified error function leads to a more robust network, it increases the traditional functional error and does not converge to the most robust configuration. The second method is more direct. It consists of periodically testing the robustness of the net during normal backpropagation and then duplicating the hidden node whose loss would be most damaging for the net. For that duplication we use a pruned hidden node, so that the total number of hidden nodes remains unchanged. This is a very effective technique (a rough code sketch of this step is given a little further below). It converges to the optimal 2 x 5 configuration and does not use much extra computation time. The accuracy of the net is not affected because training goes on between the "pruning-duplication" operations. By using this technique as a complement to the classical pruning techniques, robustness and generalisation can be improved at the same time. A preprint of the paper can be obtained from: ---------------------------------------------------------- Guido Bugmann Centre for Neural Networks King's College London Strand London WC2R 2LS United Kingdom Phone (+44) 71 873 2234 FAX (+44) 71 873 2017 email: G.Bugmann at oak.cc.kcl.ac.uk -----------------------------------------------------------  From bill at nsma.arizona.edu Wed Aug 5 15:04:30 1992 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Wed, 5 Aug 92 12:04:30 MST Subject: Robustness ? Message-ID: <9208051904.AA07164@nsma.arizona.edu> G. Bugmann writes: > It is possibly based on the two following beliefs: > 1) Millions of neurons in our brain die each day. > Does anyone know of any publications supporting belief 1 ? It's a myth. I heard Robert Sapolsky discuss this in a talk once: he said that the myth seems to have been inspired by some ancient studies of long-time alcoholics. Neurons do die every so often, of course, but the rate has never been quantified (as far as I know) because it's so low, certainly much fewer than a thousand per day in ordinary, healthy people. -- Bill  From george at minster.york.ac.uk Wed Aug 5 12:48:40 1992 From: george at minster.york.ac.uk (george@minster.york.ac.uk) Date: Wed, 5 Aug 92 12:48:40 Subject: Robustness ? Message-ID: > A widespread belief is that neural networks are robust > against the loss of neurons. This is possibly true for > large Hopfield nets but is certainly wrong for multilayer > networks trained with backpropagation, a fact mentioned > by several authors (a list of references can be found in > our paper described below). Equally, there are references which say MLP's are fault tolerant to unit loss. For instance, a study by Bedworth and Lowe examined a large (approx. 760-20-15) MLP trained to distinguish the confusable "ee" sounds in English. They found that for a wide range of faults, a degree of fault tolerance did exist. Typical fault modes were adding noise to weights, removing units, and setting the output of a unit to 0.5. These were based on fault modes that might arise from various implementation design choices (see Bolt 1991 and 1991B). Other main references are Tanaka 1989, Lincoln 1989.
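[A rough sketch of the pruning-duplication step described in Guido Bugmann's abstract above. This is not the authors' code: the tiny XOR network, the learning rate, and the every-500-epochs schedule are all illustrative assumptions.]

    # Minimal sketch (assumed details, not the paper's implementation) of the
    # "pruning-duplication" idea: every few epochs, find the hidden unit whose
    # removal hurts most, find the unit whose removal hurts least, and overwrite
    # the latter with a copy of the former.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
    T = np.array([[0.], [1.], [1.], [0.]])          # XOR targets

    H = 10                                           # 10 hidden units, as in the abstract
    W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
    W2 = rng.normal(0, 0.5, (H, 1)); b2 = np.zeros(1)

    def sigmoid(a): return 1.0 / (1.0 + np.exp(-a))

    def forward(X, mask=None):
        h = sigmoid(X @ W1 + b1)
        if mask is not None:                         # simulate the loss of one hidden unit
            h = h * mask
        return h, sigmoid(h @ W2 + b2)

    def loss(mask=None):
        _, y = forward(X, mask)
        return float(np.mean((y - T) ** 2))

    def unit_damage(j):
        mask = np.ones(H); mask[j] = 0.0
        return loss(mask) - loss()

    for epoch in range(20000):
        # plain batch backpropagation
        h, y = forward(X)
        dy = (y - T) * y * (1 - y)
        dh = (dy @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ dy;  b2 -= 0.5 * dy.sum(0)
        W1 -= 0.5 * X.T @ dh;  b1 -= 0.5 * dh.sum(0)

        if epoch % 500 == 0 and epoch > 0:
            damages = np.array([unit_damage(j) for j in range(H)])
            critical, prunable = int(damages.argmax()), int(damages.argmin())
            if critical != prunable:
                # duplicate the critical unit into the least useful slot; splitting
                # its outgoing weight between the two copies roughly preserves the
                # network's function (up to the removed unit's small contribution)
                W1[:, prunable] = W1[:, critical]; b1[prunable] = b1[critical]
                W2[critical] *= 0.5; W2[prunable] = W2[critical]

    print("final error:", loss(),
          "worst single-unit damage:", max(unit_damage(j) for j in range(H)))

[Training continues between duplications, as the abstract notes, so the functional error is not disturbed for long.]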
For a more complete review, see Bolt 1991C. A more recent report shows how fault tolerant MLP's (with respect to weight loss) can be constructed (see Bolt 1992). One of the problems is that adding extra capacity to an MLP, in extra hidden units for example, does not necessarily mean that back-error propagation will use the extra units to form a redundant representation. As has been noted by many researchers, the MLP tends to overgeneralise instead. This leads to very fault-intolerant MLP's. However, if a constraint is placed on the network (as in Bugmann 1992) then this will force it to use extra capacity as redundancy, which will lead to fault tolerance. See Neti 1990. Another result which has bearing on this matter is Abu-Mostafa's claim that ANN's are better at solving random problems (e.g. "ee" sounds as in Bedworth) than structured problems such as XOR [Abu-Mostafa 1986]. One can also take the view that the computational nature of neural networks is such that they map more easily onto a problem whose solution space exhibits adjacency. If a fault occurs which results in a shift in solution space, the function performed by the ANN only changes slightly, due to the nature of the problem. This reasoning is supported by various studies of fault tolerance in ANN's, where for applications such as XOR little fault tolerance is found, whereas for "soft" problems fault tolerance is found to be possible. I qualify this last statement since current training techniques do not tend to produce fault tolerant networks (see Bolt 1992). However, it is my belief that the computational structure provided by ANN's does imply that a degree of inherent fault tolerance is possible. For instance, the simple perceptron unit (for bipolar representations) will tolerate up to D connection losses, where D is the Hamming distance between the two classes it separates. Note however that for binary representations, only 0.5D will be tolerated. This degree of fault tolerance is very good. However, back-error propagation does not take advantage of it due to the weight configurations which it produces. An important requirement is that equal loading must be placed on all units; this will then remove critical components within a neural network (e.g. see Bolt 1992B). > I wonder where this whole > idea of robustness comes from ? > > It is possibly based on the two following beliefs: > 1) Millions of neurons in our brain die each day. > 2) Artificial neural networks have the same properties as > biological neural networks. > As we are apparently not affected by the loss of neurons we > are forced to conclude that artificial neural networks should > also not be affected. > > However, belief 2 is difficult to confirm because we do not > really know how the brain operates. As for belief 1, neuronal > death is well documented during development but I could > not find a publication covering the adult age. > > Does anyone know of any publications supporting belief 1 ? Figures that I have heard of range around thousands rather than millions... however, I would also like to hear of any publications supporting this claim. More interesting are occurrences in which severe brain damage is suffered with little effect. Wood (1983) gives an interesting study of this. ____________________________________________________________ George Bolt, Advanced Computer Architecture Group, Dept. of Computer Science, University of York, Heslington, YORK. YO1 5DD. UK.
Tel: + [44] (904) 432771 george at minster.york.ac.uk Internet george%minster.york.ac.uk at nsfnet-relay.ac.uk ARPA ..!uknet!minster!george UUCP ____________________________________________________________ References: %T Fault Tolerance in Multi-Layer Perceptrons: a preliminary study %A M.D. Bedworth %A D. Lowe %D July 1988 %I RSRE: Pattern Processing and Machine Intelligence Division %K Note: RSRE is now the DRA %T A Study of a High Reliable System against Electric Noises and Element Failures %A H. Tanaka %D 1989 %J Proceedings of the 1989 International Symposium on Noise and Clutter Rejection in Radars and Imaging Sensors %E T. Suzuki %E H. Ogura %E S. Fujimura %P 415-20 %T Synergy of Clustering Multiple Back Propagation Networks %A W P Lincoln %A J Skrzypek %J Proceedings of NIPS-89 %D 1989 %P 650-657 %T Fault Models for Artificial Neural Networks %A G.R. Bolt (Bolt 1991) %D November 1991 %C Singapore %J IJCNN-91 %P 1918-1923 %V 3 %T Assessing the Reliability of Artificial Neural Networks %A G.R. Bolt (Bolt 1991B) %D November 1991 %C Singapore %J IJCNN-91 %P 578-583 %V 1 %T Investigating Fault Tolerance in Artificial Neural Networks %A G.R. Bolt (Bolt 1991C) %D March 1991 %O Dept. of Computer Science %I University of York, Heslington, York UK %R YCS 154 %K Neuroprose ftp: bolt.ft_nn.ps.Z %T Fault Tolerant Multi-Layer Perceptrons %A G.R. Bolt %A J. Austin %A G. Morgan %D August 1992 %O Computer Science Department %I University of York, UK %R YCS 180 %K Neuroprose ftp: bolt.ft_mlp.ps.Z %T Maximally fault-tolerant neural networks: Computational Methods and Generalization %A C. Neti %A M.H. Schneider %A E.D. Young %X Preprint, %D June 15, 1990 %A Y.S. Abu-Mostafa %B Complexity in Information Theory %T Complexity of random problems %D 1986 %I Springer-Verlag %T Implications of simulated lesion experiments for the interpretation of lesions in real nervous systems %A C. Wood %B Neural Models of Language Processes %E M.A. Arbib %E D. Caplan %E J.C. Marshall %I New York: Academic %D 1983 %T Uniform Tuple Storage in ADAM %A G. Bolt (Bolt 1992B) %A J. Austin %A G. Morgan %J Pattern Recognition Letters %D 1992 %P 339-344 %V 13  From rsantiag at nsf.gov Thu Aug 6 17:05:00 1992 From: rsantiag at nsf.gov (rsantiag@nsf.gov) Date: 6 Aug 92 16:05 EST Subject: Robustness ? Message-ID: <9208061648.ad20894@Note2.nsf.gov> "In search of the Engram" The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of brains and seeing if it affected memory. It didn't. Other work was done by another gentleman named Richard F. Thompson in the same area. Both speak of the loss of neurons in a system and offer theories about how integrity was kept. In particular, Karl Lashley spoke of memory as holograms. I think this is what you are looking for as far as references. As for the everyday loss of neurons, well, it seems to vary from person to person, and an actual measure cannot be ascertained (this information was gathered after questioning 3 neurobiologists, who all agreed). It is more important, with regard to the loss of neurons and the question of robustness, to identify the stage in which the loss is occurring. There are three distinct stages in neurobiological development in which we observe this with any significance.
These are: embryonic development, maturation and learning stages, and maturity. In the embryonic stage, the loss of neurons is rampant but eventually leads to the full development of the brain with overconnected neurons. The loss of these neurons is important developmentally. In maturation and learning, the loss of neurons helps to define neuronal systems and plays a role in their adaptation and learning process. Finally, in maturity, the loss of neurons is insignificant. Indeed, Lashley's model of the holographic mind seems very true. The only exception to this is the massive loss of brain matter (neurons). In a situation like this (such as a stroke) there can be massive destruction of neuronal systems. Comparison to ANNs, though, is difficult. In ANNs, if we lose even a few neurons, this could represent the loss of 5 to 25 percent of the neurons, depending on the model. For a human to lose 5 to 25 percent of their brain could be a devastating proposition. The question of robustness is best reserved for larger systems that would suffer the loss of neurons on a level more proportional to current biological NN systems. It is important, though, to identify where the loss of neurons falls in your model (developing, training, or after you have a stable NN) before you attack the problem of robustness. (Most of the previous paragraph is derived from "Neurobiology" by Gordon M. Shepherd and from miscellaneous sources that he cites in his book.) As for the assumption that ANNs and biological NNs have many of the same properties, well, that is an overwhelmingly boastful statement. The only similarity they share is their organizational structure. The only ANN experiments that come close to actual biological neuron modeling are a project done by Gary Lynch in California, who modelled the olfactory cortex and some of the NN systems that go into smell identification. He structured each of his neurons to function exactly as a biological neuron. His results are very fascinating. Both ANNs and biological NNs are parallel processors, but after that they separate radically into two types of systems. Robert A. Santiago National Science Foundation rsantiag at note.nsf.gov  From Paul_King at NeXT.COM Thu Aug 6 14:28:46 1992 From: Paul_King at NeXT.COM (Paul King) Date: Thu, 6 Aug 92 11:28:46 -0700 Subject: Robustness ? Message-ID: <9208061828.AA09308@oz.NeXT.COM> G. Bugmann writes: > It is possibly based on the two following beliefs: > 1) Millions of neurons in our brain die each day. > Does anyone know of any publications supporting belief 1 ? Bill Skaggs writes: > It's a myth. I heard Robert Sapolsky discuss this in a talk > once: he said that the myth seems to have been inspired by > some ancient studies of long-time alcoholics. ... Moshe Abeles in _Corticonics_ (Cambridge Univ. Press 1991) writes on page 208 that: "Comparisons of neuronal densities in the brains of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cells die between the ages of twenty and eighty years [Gerald, Tomlinson, and Gibson, 1980]. Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it.
Let us assume that every year about 0.5 percent of the cortical cells die at random...." and goes on to discuss the implications for network robustness. The reference is to: Gearald H., Tomlinson B. E., and Gibson P.H. (1980). Cell counts in human cerebral cortex in normal adults throughout life using an image analysing computer. J. Neurol. 46:113-36. Paul King  From terry at helmholtz.sdsc.edu Thu Aug 6 23:53:25 1992 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Thu, 6 Aug 92 20:53:25 PDT Subject: Robustness ? Message-ID: <9208070353.AA01759@helmholtz.sdsc.edu> Another paper that addresses fault tolerance in feedforward nets: Neti, C, Schneider, MH and Young, ED, Maximally fault-tolerant neural networks and nonlinear programming. Vol II, p 483 IJCNN San Diego, June 1990. Comparisons between the brain and BP nets may be misleading since a unit should not be equated with a single neuron. If one unit represents the average firing rate in thousands of neurons then random loss of neurons would correspond more closely with randomly perturbing the weights rather than cutting out units. Cutting out a unit is closer to the damage that occurs with lesions of many neurons, which often leads to unusual deficits. The performance of BP-derived feedforward nets is remarkably resistant to adding random noise to the weights, as Charlie Rosenberg and I showed using NETtalk. It took us a while to realize that our random number generator was really working. Terry -----  From edelman at wisdom.weizmann.ac.il Fri Aug 7 05:47:45 1992 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Fri, 7 Aug 92 13:47:45 +0400 Subject: Robustness ? In-Reply-To: rsantiag@nsf.gov's message of 6 Aug 92 16:05 EST <9208061648.ad20894@Note2.nsf.gov> Message-ID: <9208070947.AA28154@wisdom.weizmann.ac.il> Robert A. Santiago wrote: The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of brains and seeing if it affected memory. It didn't. This description of Lashley's results is incorrect. Lashley did find an effect of the lesions he induced in the rat's brain, but the effect seemed to depend more on the extent of the lesion rather than on its location. By the way, some people who work in the rat (e.g., Yadin Dudai, here at Weizmann) now believe that Lashley's results may have to do with his method: the lesions may have been frequently induced disregarding proximity to blood vessels. Damage to these vessels could have secondary effects over wider areas not related directly to the site of the original lesion... So, care should be taken not to jump to conclusions based on 60 year old anecdotes - better data are available now on the effects of lesions, including in humans. Some of these data indicate that certain brain functions are surprisingly well-localized. See, for example, McCarthy and Warrington, Nature 334:428-430 (1988). -Shimon Shimon Edelman Internet: edelman at wisdom.weizmann.ac.il Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science Rehovot 76100, ISRAEL  From jan at pallas.neuroinformatik.ruhr-uni-bochum.de Fri Aug 7 10:51:25 1992 From: jan at pallas.neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen) Date: Fri, 7 Aug 92 16:51:25 +0200 Subject: Robustness ? 
Message-ID: <9208071451.AA10865@pallas.neuroinformatik.ruhr-uni-bochum.de> [Note to moderator: I read connectionists via my colleague Rolf Wuertz (rolf at neuroinformatik.ruhr-uni-bochum.de), so that only one copy of connectionists needs to cross the Atlantic. We work on the same stuff, however.] As I remember it, the studies showing a marked reduction in nerve cell count with age were done around the turn of the century. The method, then as now, is to obtain brains of deceased persons, fix them, prepare cuts, count cells microscopically in those cuts, and then estimate the total number of cells by multiplying the sampled cells/(volume of cut) by the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel, I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramid cells are turned into that form by fixation. The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percent?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that cell loss with age in the CNS is much lower than generally thought. On the other hand, if you compare one "neuron" in your backprop net not with a single neuron in the CNS, but with a group of them (a column in striate cortex, for instance), which would be a better analogy functionally, then it seems to be easier to understand the brain's robustness: the death of a single neuron doesn't kill the whole processing unit; it merely reduces, e.g., its output power or its resolution. If, however, you kill off a whole region, i.e., all member cells of a column, your functionality will suffer and degradation will be much less graceful. -- Jan Vorbrueggen, Institut f. Neuroinformatik, Ruhr-Universitaet Bochum, FRG -- jan at neuroinformatik.ruhr-uni-bochum.de  From bill at nsma.arizona.edu Fri Aug 7 15:01:19 1992 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Fri, 7 Aug 92 12:01:19 MST Subject: Robustness? Message-ID: <9208071901.AA07698@nsma.arizona.edu> I wrote: >Neurons do die every so often, of course, but the rate >has never been quantified (as far as I know) because >it's so low, certainly much fewer than a thousand per >day in ordinary, healthy people. I was wrong. There have been a number of studies of neuron loss in aging. It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons in cortex are lost by age 60. Using the standard estimate of ten billion neurons in the neocortex, this works out to about one hundred thousand neurons lost per day of adult life. Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey, and rodent data", DG Flood & PD Coleman, *Neurobiology of Aging* 9:453-464 (1988). Thanks to the people who wrote to me about this.
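[A quick back-of-the-envelope check of the figure quoted above; the 40-year adult span (ages 20 to 60) is an assumption, since the message itself does not state one.]

    # 20% of roughly ten billion cortical neurons lost over about 40 years of adult life
    neurons_in_neocortex = 10_000_000_000
    fraction_lost_by_60 = 0.20
    adult_days = 40 * 365

    lost_per_day = neurons_in_neocortex * fraction_lost_by_60 / adult_days
    print(f"{lost_per_day:,.0f} neurons per day")   # about 137,000 -- "about one hundred thousand"

[Spreading the same loss over 60 years instead still gives roughly 90,000 per day, so the order of magnitude is not sensitive to that assumption.]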
-- Bill  From fellous%hyla.usc.edu at usc.edu Fri Aug 7 19:49:06 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Fri, 7 Aug 92 16:49:06 PDT Subject: No subject Message-ID: <9208072349.AA23728@hyla.usc.edu> Dear Connectionists, Here are some papers I have found so far on computational models of LTP and/or LTD. I would be grateful if you could point me to any other models in this area, especially if they have actually been implemented, no matter on what simulator or tool. Zador A, Koch C, Brown TH: Biophysical model of a Hebbian synapse. {\em Proc Nat Acad Sci USA} 1990, 87:6718-6722. Proposes a specific, experimentally justified model of the dynamics of LTP in hippocampal synapses. Brown TH, Zador AM, Mainen ZF and Claiborne BJ (1991) Hebbian modifications in hippocampal neurons, in ``Long-Term Potentiation: A Debate of Current Issues'' (eds M Baudry and JL Davis) MIT Press, Cambridge MA, 357-389. Summarizes the material in the previous paper, and explores the consequences of the facts of LTP for the representations formed within the hippocampus, using compartmental modeling techniques. Holmes WR, Levy WB: Insights into associative long-term potentiation from computational models of NMDA receptor-mediated calcium influx and intracellular Calcium concentration changes. {\em J Neurophysiol} 1990, 63:1148-1168. In advance, thank you, Yours, Jean-Marc Fellous Center for Neural Engineering University of Southern California Los Angeles ps: I will post a summary of the (eventual !) replies to the list  From bruno at cns.caltech.edu Fri Aug 7 20:17:12 1992 From: bruno at cns.caltech.edu (Bruno Olshausen) Date: Fri, 7 Aug 92 17:17:12 PDT Subject: No subject Message-ID: <9208080017.AA15777@cns.caltech.edu> The following technical report has been archived for public ftp: ---------------------------------------------------------------------- A NEURAL MODEL OF VISUAL ATTENTION AND INVARIANT PATTERN RECOGNITION Bruno Olshausen, Charles Anderson*, and David Van Essen Computation and Neural Systems Program Division of Biology, 216-76 and *Jet Propulsion Laboratory California Institute of Technology Pasadena, CA 91125 CNS Memo 18 Abstract. We present a biologically plausible model of an attentional mechanism for forming position- and scale-invariant object representations. The model is based on using control neurons to dynamically modify the synaptic strengths of intra-cortical connections so that information from a windowed region of primary visual cortex, V1, is routed to higher cortical areas while preserving information about spatial relationships. This paper describes details of a neural circuit for routing visual information and provides a solution for controlling the circuit as part of an autonomous attentional system for recognizing objects. The model is designed to be consistent with known neurophysiology, neuroanatomy, and psychophysics, and it makes a variety of experimentally testable predictions. ---------------------------------------------------------------------- Obtaining the paper via anonymous ftp: 1. ftp to kant.cns.caltech.edu (131.215.135.31) 2. login as 'anonymous' and type your email address as the password 3. cd to pub/cnsmemo.18 4. set transfer mode to binary (type 'binary' at the prompt) 5. get either 'paper-apple.tar.Z' or 'paper-sparc.tar.Z'. The first will print on the Apple LaserWriter II, the other on the SPARCprinter. (They may work on other PostScript printers too, but I can't guarantee it.) 6. 
quit from ftp, and then uncompress and detar the file on your machine by typing uncompress -c filename.tar.Z | tar xvf - 7. remove the tarfile and print out the three postscript files (paper1.ps, paper2.ps and paper3.ps), beginning with paper3.ps. If you don't have an appropriate PostScript printer, then send a request for a hardcopy to bruno at cns.caltech.edu.  From rob at comec4.mh.ua.edu Sat Aug 8 12:54:17 1992 From: rob at comec4.mh.ua.edu (Robert Elliott Smith) Date: Sat, 08 Aug 92 10:54:17 -0600 Subject: Genetic Algorithms Conf. Call for Papers Message-ID: <9208081554.AA16853@comec4.mh.ua.edu> Call for Papers ICGA-93 The Fifth International Conference on Genetic Algorithms 17-22 July, 1993 University of Illinois at Urbana-Champaign The Fifth International Conference on Genetic Algorithms (ICGA-93), will be held on July 17-22, 1993 at the Univ. of Illinois at Urbana-Champaign. This meeting brings together an international community from academia, government, and industry interested in algorithms suggested by the evolutionary process of natural selection. Topics of particular interest include: genetic algorithms and classifier systems, evolution strategies, and other other forms of evolutionary computation; machine learning and optimization using these methods, their relations to other learning paradigms (e.g., neural networks and simulated annealing), and mathematical descriptions of their behavior. Papers discussing how genetic algorithms and classifier systems are related to biological and cognitive systems are also encouraged. Papers describing significant, unpublished research in this area are solicited. Authors must submit four (4) complete copies of their paper (hardcopy only), received by February 1, 1993, to the Program Chair: Stephanie Forrest Dept. of Computer Science University of New Mexico Albuquerque, N.M. 87131-1386 Papers should be no longer than 10 pages, single-spaced, and printed using 12 pt. type. Please include a separate title page with authors names and addresses, and do not include these names in the paper's body, to allow for anonymous peer review. The title page should also contain a short abstract. Electronic submissions will not be accepted. Evaluation criteria include the significance of results, originality, and the clarity and quality of the presentation. Questions on the conference program and submission should be directed to icga93 at unmvax.cs.unm.edu. Other questions should be directed to rob at comec4.mh.ua.edu. Important Dates: February 1, 1993: Submissions must be received April 7, 1993: Notification to authors mailed May 7, 1993: Revised, final camera-ready paper due July 17-22, 1993: Conference dates ICGA-93 Conference Committee: Conference Co-Chairs: David E. Goldberg, Univ. of Illinois at Urbana-Champaign J. David Schaffer, Philips Labs Publicity: Robert E. Smith, Univ. of Alabama Program Chair: Stephanie Forrest, Univ. of New Mexico Financial Chair: Larry J. Eshelman, Philips Labs Local Arrangements: David E. Goldberg, Univ. of Illinois at Urbana-Champaign  From ptodd at spo.rowland.org Wed Aug 12 17:57:37 1992 From: ptodd at spo.rowland.org (Peter M. 
Todd) Date: Wed, 12 Aug 92 17:57:37 EDT Subject: Call for Presentations: Knowledge Technology in the Arts Message-ID: <9208122157.AA02858@spo.rowland.org> (I hope we will get some connectionist contributions-- Peter) ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CALL FOR PRESENTATIONS on Knowledge Technology in the Arts to be presented in a special session at the 1993 Conference of the Society of Electroacoustic Music in the U.S. (SEAMUS) University of Texas at Austin March 31st to April 3rd, 1993 and at the Fourth Arts and Technology Conference Connecticut College, New London, CT March 4th to 6th, 1993 During the 1993 SEAMUS conference, a special session on knowledge technology in the arts will be held, co-sponsored by SEAMUS and IAKTA, the newly founded International Association for Knowledge Technology in the Arts. The main purpose of this session is to familiarize artists with the applications of AI, Connectionism, and other knowledge technologies in music and related arts, and the new tools that are available for these artistic pursuits. IAKTA is calling for proposals for presentations on applications of symbolic AI and neural networks to topics in composition, performance, and teaching in the computer arts (music, film, dance, video art, performance art), in keeping with the conference's focus on "Music, Media, and Movement". We would most like to encourage the submission of tutorial presentations that will help inspire artists to learn more about and become involved in knowledge technology, both as practitioners and as researchers. Speakers will have approximately 25-45 minutes to give a thorough introduction to their topic. Talks on new and innovative uses of knowledge technology and the arts are also welcomed. Please send abstracts/descriptions up to two pages in length and descriptions of your audiovisual requirements by October 1 to IAKTA president Otto Laske (laske at cs.bu.edu) and secretary Peter Todd (ptodd at spo.rowland.org). (For more information on IAKTA itself, our goals, membership structure, etc., please contact Peter Todd at ptodd at spo.rowland.org .) IAKTA would also like to encourage paper submissions on knowledge technology in the arts to the Fourth Arts and Technology Conference at Connecticut College. This conference will be held March 4th to 6th, 1993, in New London, Connecticut, and is being organized by Noel Zahler and David Smalley. The emphasis of the Arts and Technology Conference is multidisciplinary interaction; it will cover virtual reality, cognition and the arts, experimental theater, the compositional process, and speculative uses of technology in education. Submissions in the form of a detailed two-page abstract including audiovisual requirements should be sent by October 15 to Dr. Noel Zahler, Co-director Center for Arts and Technology Connecticut College, Box 5632 207 Mohegan Avenue New London, CT 06320-4196 email: nbzah at conncoll.bitnet (Authors should be notified of acceptance by November 15, and camera-ready copy will be due by January 15, 1993.) A copy of the contribution should also be sent to IAKTA president Otto Laske (laske at cs.bu.edu) and secretary Peter Todd (ptodd at spo.rowland.org). 
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++  From n at predict.com Wed Aug 12 17:43:35 1992 From: n at predict.com (n (Norman Packard)) Date: Wed, 12 Aug 92 15:43:35 MDT Subject: Job Offer Message-ID: <9208122143.AA01835@arkady> Prediction Company Financial Forecasting August 12, 1992 Prediction Company is a small Santa Fe, NM based startup firm utilizing the latest nonlinear forecasting technologies for prediction and computerized trading of derivative financial instruments. The senior technical founders of the firm are Doyne Farmer and Norman Packard, who have worked for over ten years in the fields of chaos theory and nonlinear dynamics. The technical staff includes other senior researchers in the field. The company has the backing of a major technically based trading firm and their partner, a major European bank. There is currently one opening at the company for a senior computer scientist to provide leadership in that area for a staff of physical scientists and mathematicians with strong programming backgrounds. The job responsibilities include software design and implementation, support of deployed software systems for trading, management of a UNIX workstation network and infusion of computer science technologies into the firm. The successful applicant will be an experienced and talented C and C++ programmer with architectural skills, UNIX knowledge and an advanced degree in computer science or a related discipline. Experience in a production environment, in support of products or mission critical in-house software systems (preferably in the financial industry) is required. Knowledge of and experience with top down design methods, written specifications, formal tes methods and source code control is highly desirable, as is familiarity with data base and wide-area-networking technologies. Applicants should send resumes to Prediction Company, 234 Griffin Street, Santa Fe, NM 87501 or to Laura Barela at laura%predict.com at santafe.edu.  From dtam at morticia.cnns.unt.edu Fri Aug 14 17:22:29 1992 From: dtam at morticia.cnns.unt.edu (dtam@morticia.cnns.unt.edu) Date: Fri, 14 Aug 1992 16:22:29 -0500 Subject: Call for Papers (Biological Neural Networks) Message-ID: <199208142122.AA16626@morticia.cnns.unt.edu> ========================================================= = Call for Papers = ========================================================= Progress in Neural Networks Special Volume: Biological Neural Networks This special issue aims at a review of the current research progress made in the understanding of biological neural systems and its relations to artificial neural networks. Computational and theoretical issues addressing signal processing capabilities and dynamics of biologically based systems will be covered. Development and plasticity of neuroanatomical architecture are emphasized. Authors are invited to submit original manuscripts describing recent progress in biological neural network research addressing computational and theoretical issues in neurobiology and signal processing in the central nervous system. Manuscripts should be self-contained. They may be of tutorial or review in nature. 
Suggested topics include, but are not limited to: * Biophysics of neurons in a network * Biochemistry of synaptic transmission * Development of neuroanatomical circuitries * Receptive field and organization of dendrites * Synaptic plasticity and synaptic development * Signal encoding, decoding and transduction * Subthreshold vs spike code signal processing * Functional circuitry analysis * Neural population interactions and dynamics * Physiological functions of neuronal networks * Biological neuronal network models * Processing capabilities of biologically based systems Submit abstracts, extended summaries, or manuscripts to the volume editor directly. For more information please contact. Volume Editor Dr. David C. Tam Center for Network Neuroscience Department of Biological Sciences P. O. Box 5218 University of North Texas Denton, TX 76203 Tel: (817) 565-3261 Fax: (817) 565-4136 E-mail: dtam at morticia.cnns.unt.edu Publisher: ABLEX PUB CORP 355 Chestnut St., Norwood, NJ 07648  From moody-john at CS.YALE.EDU Mon Aug 17 16:10:51 1992 From: moody-john at CS.YALE.EDU (john moody) Date: Mon, 17 Aug 92 16:10:51 EDT Subject: reprint available Message-ID: <199208172010.AA12522@TOPAZ.SYSTEMSX.CS.YALE.EDU> Fellow Connectionists: The following reprint has been placed on Jordan Pollack's neuroprose archive: LEARNING RATE SCHEDULES FOR FASTER STOCHASTIC GRADIENT SEARCH Christian Darken*, Joseph Chang+, and John Moody* Yale Departments of Computer Science* and Statistics+ ABSTRACT Stochastic gradient descent is a general algorithm that includes LMS, on-line backpropagation, and adaptive k-means clustering as special cases. The standard choices of the learning rate $\eta$ (both adaptive and fixed functions of time) often perform quite poorly. In contrast, our recently proposed class of ``search then converge'' ($STC$) learning rate schedules (Darken and Moody, 1990b, 1991) display the theoretically optimal asymptotic convergence rate and a superior ability to escape from poor local minima. However, the user is responsible for setting a key parameter. We propose here a new methodology for creating the first automatically adapting learning rates that achieve the optimal rate of convergence. To retrieve it via anonymous ftp, do the following: % ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: your email addressneuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get darken.learning_rates.ps.Z 200 PORT command successful. 150 Opening ASCII mode data connection for darken.learning_rates.ps.Z (238939 by tes). 226 Transfer complete. local: darken.learning_rates.ps.Z remote: darken.learning_rates.ps.Z 239730 bytes received in 11 seconds (22 Kbytes/s) ftp> quit 221 Goodbye. % uncompress darken.learning_rates.ps % lpr -P printer_name darken.learning_rates.ps Enjoy, John Moody -------  From tgd at ICSI.Berkeley.EDU Wed Aug 19 13:44:02 1992 From: tgd at ICSI.Berkeley.EDU (Tom Dietterich) Date: Wed, 19 Aug 92 10:44:02 PDT Subject: A neat idea from L. Breiman Message-ID: <9208191744.AA09768@icsib22.ICSI.Berkeley.EDU> I recently read the following paper by Leo Breiman: Breiman, L. (1991) The $\Pi$ method for estimating multivariate functions from noisy data. {\it Technometrics, 33} (2), 125--160. With discussion. 
In this paper, Breiman presents a very neat technique called "back fitting" that is a very general algorithmic idea for improving greedy algorithms. Suppose we are executing a greedy algorithm for some task, and at any given point in the process, we have already made decisions d_1, d_2, ..., d_{k-1} and we are about to make decision d_k. In the standard greedy algorithm, we choose d_k to be the locally best decision and then go on to consider d_{k+1}. However, with backfitting, we first perform the following double loop: repeat until quiescence: for i from 1 to k-1 do "undo" decision d_i (holding all other decisions d_j, j<>i fixed) and re-make d_i to be the best decision (locally). In other words, we first hold {d_2, ..., d_{k-1}} constant and see if we can improve things by re-making decision d_1. Then we hold {d_1, d_3, ..., d_{k-1}} constant and consider re-making decision d_2, and so on. We cycle through the previous k-1 decisions making local improvements until no further improvements can be found. THEN, we make decision d_k (and repeat the process, of course). In general, this backfitting process will cost another factor of n in the algorithm (assuming there are n decisions to be made). In experiments with three different learning algorithms, Breiman has found that this method finds much better solutions than a simple greedy algorithm. Breiman (in various collaborations) is currently applying this idea to improving CART trees and neural networks. This idea would probably also find good application in COBWEB-style algorithms and greedy MDL algorithms. --Tom
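[A compact sketch of the back-fitting loop described above, for a generic greedy procedure. The candidates and score callables, and the toy example at the bottom, are illustrative placeholders, not anything taken from Breiman's paper.]

    # Generic greedy search with "back fitting": before committing to decision k,
    # revisit decisions 1..k-1 one at a time, re-making each one greedily while the
    # others are held fixed, until no single change improves the score.
    from typing import Any, Callable, List, Sequence

    def greedy_with_backfitting(
        n_decisions: int,
        candidates: Callable[[int], Sequence[Any]],   # options available for decision i
        score: Callable[[List[Any]], float],          # higher is better; works on partial solutions
    ) -> List[Any]:
        decisions: List[Any] = []
        for k in range(n_decisions):
            # back-fit the earlier decisions until quiescence
            improved = True
            while improved and decisions:
                improved = False
                for i in range(len(decisions)):
                    best, best_score = decisions[i], score(decisions)
                    for option in candidates(i):
                        trial = decisions.copy()
                        trial[i] = option
                        s = score(trial)
                        if s > best_score:
                            best, best_score, improved = option, s, True
                    decisions[i] = best
            # now make decision k greedily, as in the plain algorithm
            decisions.append(max(candidates(k), key=lambda o: score(decisions + [o])))
        return decisions

    # toy usage: pick one number per slot so that the running sum stays close to 10
    if __name__ == "__main__":
        opts = lambda i: [0, 1, 2, 3, 4, 5]
        target = lambda d: -abs(10 - sum(d))
        print(greedy_with_backfitting(4, opts, target))

[As the message notes, the inner revisiting loop multiplies the cost by roughly the number of decisions made so far.]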
From maass at figids01.tu-graz.ac.at Thu Aug 20 10:20:24 1992 From: maass at figids01.tu-graz.ac.at (maass@figids01.tu-graz.ac.at) Date: Thu, 20 Aug 92 16:20:24 +0200 Subject: No subject Message-ID: <9208201420.AA15840@figids03.tu-graz.ac.at> Re: Vapnik-Chervonenkis Dimension of Neural Nets According to [BEHW] the Vapnik-Chervonenkis dimension (VC-dimension) of a neural net is the key parameter for determining the number of samples that are needed for training the net. For a feedforward neural net C with heaviside gates that has e edges (or equivalently: e adjustable weights) it was shown by Baum and Haussler [BH] in 1988 that the VC-dimension of C is bounded above by O(e log e). It has remained an open problem whether the factor "log e" in this upper bound is in fact needed, or whether a general upper bound O(e) would also be valid. This problem is solved by the following new result: THEOREM: One can construct for every natural number n a feedforward neural net C(n) of depth 4 with heaviside gates that has n input bits, one output bit, and not more than 33n edges, such that the VC-dimension of C(n) (or more precisely: of the class of boolean functions that can be computed by C(n) by choosing suitable integer weights from {-n,...,n}) is at least as large as n log n. COROLLARY: The general upper bound of O(e log e) for the VC-dimension of a neural net with e adjustable weights (due to Baum and Haussler [BH]) is asymptotically optimal. REMARKS: 1. The proof of the Theorem is immediate from a quite sophisticated circuit construction due to Lupanov (English translation due to Gyorgy Turan [T]). Lupanov had shown that every boolean function of n inputs can be computed by a threshold circuit of depth 4 with O(2**(n/2 - log n/2)) gates. 2. This new lower bound for the VC-dimension of neural nets will appear as a side result in a paper "On the Theory of Neural Nets with Piecewise Polynomial Activation Functions", that should be available in October 1993. The main results of this paper are upper bounds for the VC-dimension and for the computational power of neural nets with arbitrary piecewise polynomial activation functions at the gates and arbitrary real weights. Preprints of this paper will be available from the author: Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz Klosterwiesgasse 32/2, A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Tel.: (0316)81-00-63-22 Fax: (0316)81-00-63-5 References: [BH] E.B. Baum, D. Haussler, What size net gives valid generalization?, Neural Computation 1, 1989, 151-160 [BEHW] A. Blumer, A. Ehrenfeucht, D. Haussler, M.K. Warmuth, Learnability and the Vapnik-Chervonenkis dimension, Journal of the ACM, 36, 1989, 929-965 [T] G. Turan, unpublished manuscript (1989)  From terry at helmholtz.sdsc.edu Tue Aug 18 17:29:15 1992 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Tue, 18 Aug 92 14:29:15 PDT Subject: Postdoc in San Diego Message-ID: <9208182129.AA05114@helmholtz.sdsc.edu> Opportunities for post doctoral research at the Naval Health Research Center, San Diego.
Cognitive Performance and Psychophysiology Department __________________________________________________ Our laboratory is developing alertness and attention monitoring systems based on human psychophysiological measures (EEG, ERP, EOG, ECG), through ongoing research at basic and exploratory development levels. We have openings for post doctoral fellows in signal processing / neural network estimation and human cognitive psychophysiology. We are especially interested in the relation of oscillatory brain dynamics to attention and alertness. Our research is not classified. Please address inquiries to: Dr. Scott Makeig Naval Health Research Center email: scott at cpl.nhrc.navy.mil PO Box 85122 fax: (619) 436-9389 San Diego, CA 92186-5122 phone: (619) 436-7155 Scott -----  From SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU Wed Aug 19 00:13:00 1992 From: SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU (SABBATINI%ccvax.unicamp.br@BITNET.CC.CMU.EDU) Date: Wed, 19 Aug 1992 00:13 GMT-0200 Subject: Neural networks research in Medicine - Abstracts Message-ID: <01GNQV5G2OOS8Y4ZIC@ccvax.unicamp.br> ARTIFICIAL NEURAL NETWORKS IN MEDICINE AND BIOLOGY Center for Biomedical Informatics State University of Campinas, Campinas - Brazil Abstracts of published work by the Center Status of Aug 15 1992 ----------------------------------------------------------- A HIGH-LEVEL LANGUAGE AND MICROCOMPUTER PROGRAM FOR THE DESCRIPTION AND SIMULATION OF NEURAL ARCHITECTURES Sabbatini, RME and Arruda-Botelho, AG Center of Biomedical Informatics, Neurosciences Applications Group, State University of Campinas, Campinas, SP, BRAZIL The description, representation and simulation of complex neural network structures by means of computers is an essential step in the investigation of model systems and inventions in the growing field of biological information processing and neurocomputing. The handcrafting of neural net architectures, however, is a long, tedious, difficult and error-prone process, which can be substituted satisfactorily by the neural network analogue of a computer program or formal symbolic language. Several attempts have been made to develop and apply such languages: P3, Hecht-Nielsen's AXON, and Rochester's ISCON are some recent examples. We present here a new tool for the formal description and simulation of artificial neural tissues in microcomputers. It is a network editor and simulator, called NEUROED, as well as a compiler for NEUROL, a high-level symbolic, structured language which allows the definition of the following elements of a neural tissue: a) elementary neural architectonic units: each unit has the same number of cells and the same internal interconnecting pattern and cell functional parameters; b) elementary cell types: each cell can be defined in terms of its basic functional parameters; synaptic interconnections inside an architectonic unit (axonic delay, weights and signal can be defined for each); a cell can fan out to several others, with the same synaptic properties; c) synaptic interconnections among units; d) cell types and architectonic units can be replicated automatically across neural tissue and interconnected; e) cell types and architectonic units can be named and arranged in hierarchical frames (parameter inheritance).
NEUROED's underlying model processing element (PE) is a simplified Hodgkin-Huxley neuron, with an RC model, temporal summation, passive electrotonic potentials at the dendritic level, and a step transfer function with threshold level, a fixed-size, fixed-duration, fixed-form spike, and an absolute refractory period. Input synapses Iij (i=1...NI) for the j-th neuron are weighted with Wij (i=1...NI), where Wij < 0 is defined for an inhibitory synapse, Wij = 0 for an inactive or non-existent synapse, and Wij > 0 for an excitatory synapse. Outputs Okj (k=1...NO) can have axonic propagation delays Dkj (a delay can be equal to zero). Firing of neurons in a network follows a diffusion process, according to propagation delays; random fluctuations in several processes can be simulated. Several learning algorithms can be implemented explicitly with NEUROL; a Hebbian synapse-strength reinforcement rule has specific language support now. NEUROED's basic specifications are: a) written in Turbo BASIC 1.0 for IBM-PC compatible machines, with CGA monochrome graphics display and optional numerical coprocessor; b) capacity of 100 neurons and 10,000 synapses; c) three neural tissue layers: input, processing and output; d) real-time simulation of neural tissue dynamics, with three display modes: oscilloscope mode (displays membrane potentials along time for several cells simultaneously); map mode (displays bidimensional architecture with individual cells, showing when they fire) and Hinton diagram (displays interconnecting matrix with individual synapses, showing when they fire); e) real-time, interactive modification of net parameters; and f) capability for building procedures, functions and model libraries, which reside as external disk files. NEUROED and NEUROL are easy to learn and to use, intuitive for neuroscientists, and lend themselves to modeling neural tissue dynamics for teaching purposes. We are currently developing a basic "library" of NEUROED models to teach basic neurophysiology to medical students. Implementations of NEUROED for parallel hardware are also under way. (Presented at the Fourth Annual Meeting of the Brazilian Federation of Biological Societies, Caxambu, MG, July 1991) ----------------------------------------------------------- A CASCADED NEURAL NETWORK MODEL FOR PROCESSING 2D TOMOGRAPHIC BRAIN IMAGES Dourado SC and Sabbatini RME Center for Biomedical Informatics, State University of Campinas, P.O. Box 6005, 13081 Campinas, São Paulo, Brazil. Artificial neural networks (ANNs) have demonstrated many advantages and capabilities in applications involving the processing of biomedical images and signals. Particularly in the field of medical image processing, ANNs have been used in several ways, such as in image filtering, scatter correction, edge detection, segmentation, pattern and texture classification, image reconstruction and alignment, etc. The adaptive nature of ANNs (i.e., they are capable of learning) and the possibility of implementing their function using truly massively parallel processors and neural integrated circuits in the future are strong arguments in favor of investigating new architectures, algorithms and applications for ANNs in Medicine. In the present work, we are interested in designing a prototype ANN which could be capable of processing serial sections of the brain, obtained from CT or MRI tomographs.
The segmented, outlined images, representing internal brain structures, both normal and abnormal, would then be used as input to three-dimensional stereotaxic radiosurgery planning software. The ANN-based algorithm we have devised was initially implemented as a software simulation on a microcomputer (PC 80386, with VGA color graphics and an 80387 mathematical coprocessor). It is structured as a compound ANN, composed of three cascading sub-networks. The first one receives the original digitized image, and is a one-layer, fully interconnected ANN, with one processing element (PE) per image pixel. The brain image is obtained from a General Electric CT system, with 256 x 256 pixels and 256 gray levels. The first ANN implements an MHF lateral inhibition function, based on a convolution filter of variable dimension (3 x 3 up to 9 x 9 PE's), and it is used to iteratively enhance borders in the image. The PE interconnection (i.e. convolution) function can be defined by the user as a disk file containing a set of synaptic weights, which is read by the program, thus allowing for experimentation with different sets of coefficients and sizes of the convolution window. In this layer, PE's have synaptic weights varying from -1 to 1, and the step function as their transfer function. Usually after 2 to 3 iterations, the borders are completely formed and do not vary any more, but are too thick (i.e., the trace width spans several pixels). In order to thin out the borders, the output of the MHF ANN layer is subsequently fed into a three-layer perceptron, which was trained off-line using the backpropagation algorithm to perform thinning on smaller straight line segments. Finally, the thinned-out image obtained pixel-wise at this ANN's output is fed into a third network, also a three-layer perceptron trained off-line using the backpropagation algorithm to complete small gaps occurring in the image contours. The final image, also 256 x 256 pixels with 2 levels of gray, is passed to the 3D slice reconstruction program, implemented with conventional, sequential algorithms. A fourth ANN perceptron, previously trained by back-propagation to recognize the gray histogram signature of small groups of pixels in the original image (such as bone, liquor, gray and white matter, blood, dense tumor areas, etc.), is used to false-color the entire image according to the classified thematic regions. The cascaded, multilayer ANN thus implemented performs very well in the overall task of obtaining automatically outlined and segmented brain slices, for the purposes of 3D reconstruction and surgical planning. Due to the complexity of the algorithms and to the size of the image, the time spent by the computer we use is inordinately large, preventing a practical application. We are now studying the implementation of this ANN paradigm on RISC-based and vector-processing CPUs, as well as the potential applications of neurochip prototyping kits already available in the market. (Presented at the I Latinoamerican Congress on Health Informatics, Habana, Cuba, February 1992) -------------------------------------------------------- COMPUTER SIMULATION OF A QUANTITATIVE MODEL FOR REFLEX EPILEPSY R.M.E. Sabbatini Center of Biomedical Informatics and School of Medicine of the State University of Campinas, Brazil.
In the present study we propose a continuous, lumped-parameter, non-linear mathematical model for explaining the quantitatively observed characteristics of a class of experimental reflex epilepsy, namely audiogenic seizures in rodents, and simulate this model with a specially written microcomputer program. In a first phase of the study, we have individually stimulated 280 adult Wistar albino rats with a 112 dB white-noise sound source, and recorded the latency, duration and intensity values of the psychomotor components of the audiogenic reaction: after an initial delay, one or more circular running phases usually occur, followed or not by complete tonic-clonic seizures. In the second step, we performed several multivariate statistical analyses of these data, which have revealed many properties of the underlying neural system responsible for the crisis, such as the independence of the running and convulsive phases, and a scale of severity which is correlated with the values of latencies and intensities. Finally, a lumped-parameter model based on a set of differential equations which describes the macro behavior of the interaction of four different populations of excitatory and inhibitory neurons with different time constants and threshold elements has been simulated in a computer. In this model, running waves, which may occur several times before leading (or not) to the final convulsive phase, are explained by the oscillatory behavior of a controlling neural population, caused by mixed feedback: an early, internal positive feedback which results in the growth of excitation, and a late negative feedback elicited by motor components of the running itself, which causes the oscillation back to inhibition. A second, threshold-triggered population controls the convulsive phase and its subsequent refractory phase. The results of the simulation have been found to explain reasonably well the time course and structural characteristics of the several forms of rodent audiogenic epilepsy and correlate well with the existing knowledge about the neural bases of this phenomenon. (Presented at the Second IBRO/IMIA International Symposium on Mathematical Approaches to Brain Functioning Diagnostics, Prague, Czechoslovakia, September 1990). -------------------------------------------------------- OUTCOME PREDICTION FOR CRITICAL PATIENTS UNDER INTENSIVE CARE, USING BACKPROPAGATION NEURAL NETWORKS P. Felipe Jr., R.M.E. Sabbatini, P.M. Carvalho-Junior, R.E. Beseggio, and R.G.G. Terzi Center for Biomedical Informatics, State University of Campinas, Campinas SP 13081-970 Brazil Several scores have been designed to estimate death probability for patients admitted to Intensive Care Units, such as the APACHE and MPM systems, which are based on regression analysis. In the present work, we have studied the potential of an artificial neural network model, the three-layer perceptron with the backpropagation learning rule, to perform this task. Training and testing data were derived from a Brazilian database which was previously used for calculating APACHE scores. The neural networks were trained with physiological, clinical and pathological data (30 variables, such as worst pCO2, coma level, arterial pressure, etc.) based on a sample of more than 300 patients, whose outcome was known. All networks were able to reach convergence with a small global prediction error. Maximum percentages of 75% correct predictions in the test dataset and 99.6% in the training dataset were achieved.
Maximum sensitivity and specificity were 60% and 80%, respectively. We conclude that the neural network approach has worked well for outcome prognosis in a highly "noisy" dataset, with a similar, if slightly lower, performance than APACHE II, but with the advantage of deriving its parameters from a regional dataset instead of from a universal model. The paper will be presented at the MEDINFO'92 workshop on "Applications of Connectionist Systems in Biomedicine", September 8, 1992, in Geneva, Switzerland. ============================================================== Reprints/Preprints are available Renato M.E. Sabbatini, PhD Center for Biomedical Informatics State University of Campinas SABBATINI at CCVAX.UNICAMP.BR SABBATINI at BRUC.BITNET  From kevin at synapse.cs.byu.edu Fri Aug 21 13:54:32 1992 From: kevin at synapse.cs.byu.edu (Kevin Vanhorn) Date: Fri, 21 Aug 92 11:54:32 -0600 Subject: A neat idea from L. Breiman Message-ID: <9208211754.AA07435@synapse.cs.byu.edu> The "back-fitting" algorithm is just an instance of local search [1], a well-known heuristic technique for combinatorial optimization. By the way, it was mentioned that Breiman was applying back-fitting to improving CART trees. Has he published anything on this yet? There was no mention of it in the article Tom Dietterich cited. It's not clear to me how you would usefully apply back-fitting or any other kind of local search to improving a decision tree, as the choice of how to split a node is highly dependent on how its ancestral nodes were split. [1] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Inc., 1982. (See Chap. 19) ----------------------------------------------------------------------------- Kevin S. Van Horn | It is the means that determine the ends. vanhorn at synapse.cs.byu.edu |  From SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU Fri Aug 21 09:21:00 1992 From: SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU (SABBATINI%ccvax.unicamp.br@BITNET.CC.CMU.EDU) Date: Fri, 21 Aug 1992 09:21 GMT-0200 Subject: Survey on neural networks in Medicine and Biology Message-ID: <01GNU6VHHU448Y50NR@ccvax.unicamp.br> The Scientific Program on NEURAL NETWORKS IN MEDICINE in the Seventh World Congress of Medical Informatics (MEDINFO 92) - Palexpo Geneva, 6-10 September 1992 The explosive growth of artificial neural network systems (also called connectionist or neuromorphic systems) in recent years has followed several convincing demonstrations of their usefulness in decision-making tasks, particularly in the classification and recognition of patterns. Not surprisingly, the medical and biological applications of such systems have followed suit and are growing at a relentless pace. Connectionist systems have been used successfully in computer-aided medical diagnosis, intelligent biomedical instrumentation, classification of images of cells and tissues, recognition of abnormal EKG and EEG features, development of prosthetic devices, prediction of protein structure, etc. Thus, this is an important area which will be increasingly considered as an alternative for the implementation of intelligent systems in all areas of Medicine and Biology. This year's scientific program will include several activities and papers on the subject of neural network applications. They are summarized below for the benefit of people interested in this area. POSTER SESSION Z-1.1, Sept. 7, 11:00-13:00 Diagnosis of children's ketonemia by neural network Artificial Intelligence. A. Garliausakas, A.
Garliauskiene (Lithuania) Diagnosing functional disorders of the cervical spine using backpropagation networks. Preliminary results. W. Schoner, M. Berger, G. Holzmueller, A. Neiss, H. Ulmer (Austria) POSTER SESSION Z-1.1, Sept. 7, 14:00-16:00 The use of neural network in decision making of nursing diagnosis Y. Chen, X. Cai, L. Chen, R. Guo (China) SEMI-PLENARY SESSION 2-04 (Room PS-11) - Sept. 8, 9:30-10:30 Neural Networks and Their Applications in Biomedicine Applications of connectionist systems in Biomedicine Renato M.E. Sabbatini (Brazil) A neural network approach to assess myocardial infarction A. Palazos, V. Maojo, F. Martin, N. Ezquerra (Spain) DEMONSTRATION 3-09 (Room S81) - Sept. 9, 16:00-17:30 HYPERNET - A decision support system for arterial hypertension M. Ramos, M.T. Haashi, E. Czogala, O. Marson, H. Aszen, O. Kohlmann Jr., M.S. Ancao, D. Sigulem (Brazil) SESSION 9-01 (Room S-68) - Sept. 10, 10:30-11:00 Neural networks for classification of EEG Signals D.C. Reddy, D.R. Korrai (India) WORKSHOP 3-08 (Room W25) - Sept. 8, 18:00-22:00 Applications of connectionist systems in Biomedicine Chair: Renato M.E. Sabbatini (Campinas State University, Brazil) The workshop has the objective of reviewing and discussing the value, aims, classes and results of artificial neural network systems (ANS) applications in the biomedical area. Specific techniques and results will be demonstrated (in many instances using actual computers) in several important domains, such as (i) ANS simulation environments, languages and hardware specifically designed for biomedical applications, (ii) signal and image processing tasks; (iii) development of computerised decision-making systems in diagnosis, therapy, etc.; (iv) integration with other Artificial Intelligence approaches; (v) practical aspects in the evaluation, development and implementation of ANS systems in Biomedicine; (vi) current research and perspectives of advancement. The Workshop will be conducted by several renowned specialists in the growing field of ANS applications in Biomedicine. In addition, the participants will have the opportunity to try some hands-on demonstrations on interesting software products in this area. All participants will receive an information package containing a list of publications on the subject (papers, review, books, technical reports, government studies, etc.), with abstracts; available hardware and software resources (neural network simulation environments, neurochips and neuroboards, specific medical NN application software, biomedical instruments embedding NNs, etc., either commercial or non-commercial); a list of research groups, laboratories and individuals involved in the area of ANS applications in Biological and Health Sciences; with addresses and research areas. INVITATION I would like to invite all persons active in this area of research and development, to contribute with discussions, short presentations and software demonstrations, to the workshop. Those who are interested in participating, please send name, mail and e-mail/fax address to me, together with a short proposal about his/her potential intervention at the Workshop. Renato M.E. Sabbatini ***************************************************************************** * Renato M.E. Sabbatini, PhD * INTERNET: SABBATINI at CCVAX.UNICAMP.BR * * Director * BITNET : SABBATINI at BRUC.BITNET * * Center for Biomedical Informatics * Tel.: +55 192 39-7130 (office) * * State University of Campinas * 39-4168 (home) * * * Fax.: +55 192 39-4717 (office) * * P.O. 
Box 6005 * Telex: +55 19 1150 * * Campinas, SP 13081 - BRAZIL * * *****************************************************************************  From maass at figids01.tu-graz.ac.at Sat Aug 22 12:41:43 1992 From: maass at figids01.tu-graz.ac.at (maass@figids01.tu-graz.ac.at) Date: Sat, 22 Aug 92 18:41:43 +0200 Subject: Correction of date of availability for preprints Message-ID: <9208221641.AA18659@figids03.tu-graz.ac.at> Two days ago I posted a new result on the Vapnik-Chervonenkis Dimension of Neural Nets. The paper in which this result appears will be available in October 1992 (NOT 1993, as incorrectly stated in the posted note). Wolfgang Maass maass at igi.tu-graz.ac.at  From lyn at dcs.ex.ac.uk Mon Aug 24 12:51:51 1992 From: lyn at dcs.ex.ac.uk (Lyn Shackleton) Date: Mon, 24 Aug 92 12:51:51 BST Subject: First Sewdish National Conference on Connectionism Message-ID: <409.9208241151@castor.dcs.exeter.ac.uk> The Connectionist Research Group University of Skovde The First Swedish National Conference on Connectionism Wednesday 9th and Thursday 10th Sept. 1992, Skovde, Sweden at Billingehus Hotel and Conference Centre INVITED SPEAKERS James M. Bower, California Inst. of Technology, USA "The neuropharmacology of associative memory function: an in vitro, in vivo, and in computo study of object recognition in olfactory cortex." Ronald L. Chrisley, University of Sussex, UK "Connectionist Cognitive Maps and the Development of Objec- tivity." Garrison W. Cottrell, University of California, San Diego, USA "Dynamic Rate Adaptation." Jerome A. Feldman, ICSI, Berkeley, USA "Structure and Change in Connectionist Models" Dan Hammerstrom, Adaptive Solutions, Inc., USA "Neurocomputing Hardware: Present and Future." James A. Hendler, University of Maryland, USA "SCRuFFy: An applications-oriented hybrid connectionist/symbolic shell." Ajit Narayanan, University of Exeter, UK "On Nativist Connectionism." Jordan B. Pollack, Ohio State University, USA "Explaining Cognition with Nonlinear Dynamics." David E. Rumelhart, Stanford University, USA "From Theory to Practice: A Case Study" Noel E. Sharkey, University of Exeter, UK "Semantic and Syntactic Decompositions of Fully Distributed Representations" Tim van Gelder, Indiana University, USA "Connectionism and the Mind-Body Problem: Exposing the Rift between Mind and Cognition." PROGRAMME Secretariat: SNCC-92 Attn: Ulrica Carlbom University of Skovde P.O. Box 408 S-541 28 Skovde, SWEDEN Phone +46 (0)500-77600, Fax +46 (0)500-16325 conference at his.se Conference organisers Lars Niklasson (University of Skovde) lars at his.se Mikael Boden (University of Skovde) mikael at his.se Program committee Anders Lansner (Royal Institute of Technology, Sweden) Noel E. Sharkey (University of Exeter, UK) Ajit Narayanan (University of Exeter, UK) Conference sponsors University of Skovde The County of Skaraborg (Lansstyrelsen, Skaraborgs Lan) Conference patrons Lars-Erik Johansson, Vice-chancellor University of Skovde Stig Emanuelsson, Head of Comp. Sci. Dept., Univ. of Skovde The Swedish Neural Network Society (SNNS) will hold an offi- cial members meeting at the conference. The Sessions Wednesday 9th Session 1: Opening / Invited Papers (Room 1) Chair: Lars Niklasson (SNCC-92 organiser) 08.30 Opening 09.00 Connectionism and the Mind-Body Problem: Exposing the Rift between Mind and Cognition Tim van Gelder, Indiana University, USA 09.50 Explaining Cognition with Nonlinear Dynamics Jordan B. 
Pollack, Ohio State University, USA 10.40 Coffee Break Session 2: Invited Paper (Room 1) Chair: Tim van Gelder (Indiana University, USA) 11.10 - 12.00 Semantic and Syntactic Decompositions of Fully Distributed Representations Noel E. Sharkey, University of Exeter, UK Session 3a: Philosophical presentations (Room 1) Chair: Tim van Gelder (Indiana University, USA) 12.05 Subsymbolic Connectionism: Representational Vehicles and Contents Tere Vaden, University of Tampere, Finland 12.30 First Connectionist Model of Nonmonotonic Reasoning: Handling Exceptions in Inheritance Hierarchies Mikael Boden, University of Skovde and Ajit Narayanan, University of Exeter, UK 12.55 Lunch Break Session 3b: Theoretical presentations (Room 2) Chair: Jordan B. Pollack (Ohio State University, USA) 12.05 Neural Networks for Unsupervised Linear Feature Extraction Reiner Lenz and Mats Osterberg, Linkoping University 12.30 Feed-forward Neural Networks in Limiting Cases of Infinite Nodes Abhay Bulsari and Henrik Saxen, Abo Akademi, Finland 12.55 Lunch Break Session 4: Invited Papers (Room 1) Chair: Jerome A. Feldman (ICSI, Berkeley, USA) 14.00 SCRuFFy: An Applications-oriented Hybrid Connectionist/Symbolic Shell James A. Hendler, University of Maryland, USA 14.50 Neurocomputing Hardware: Present and Future Dan Hammerstrom, Adaptive Solutions, Inc., USA 15.40 Coffee Break Session 5a: Philosophical presentations (Room 1) Chair: Noel E. Sharkey (University of Exeter, UK) 16.10 Connectionism - The Miracle Mind Model Lars Niklasson, University of Skovde and Noel E. Sharkey, University of Exeter, UK 16.35 Some Properties of Neural Representations Christian Balkeniun, Lund University 17.00 Behaviors, Motivations, and Perceptions In Artificial Creatures Per Hammarlund and Anders Lansner, Royal Institute of Technology, Stockholm 17.25 Break Session 5b: Hardware-oriented presentations (Room 2) Chair: Dan Hammerstrom (Adaptive Solutions Inc., USA) 16.10 Pulse Coded Neural Networks for Hardware Implementation Lars Asplund, Olle Gallmo, Ernst Nordstrom, and Mats Gustafsson Uppsala University 16.35 Towards Modular, Massively Parallel Neural Computers Bertil Svensson, Chalmers University of Technology, Goteborg and Centre for Computer Science, Halmstad University, Tomas Nordstrom, Lulea University of Technology, Kenneth Nilsson and Per-Arne Wiberg, Halmstad University 17.00 The Grid - An Experiment in Neurocomputer Architecture Olle Gallmo and Lars Asplund, Uppsala University 17.25 Break Session 6a: Theoretical presentations (Room 1) Chair: Garrison W. Cottrell (University of California, USA) 17.40 A Neural System as a Model for Image Reconstruction Mats Bengtsson, Swedish Defence Research Establishment, Linkoping 18.05 Internal Representation Models in Feedforward Artificial Neural Networks Hans G. C. Traven, Royal Institute of Technology, Stockholm 18.30 - 18.55 A Connectionist Model for Fuzzy Logic, Abhay Bulsari and Henrik Saxen, Abo Akademi, Finland Session 6b: Application-oriented presentations (Room 2) Chair: James A. 
Hendler (University of Maryland, USA) 17.40 A Robust Query-Reply System Based on a Bayesian Neural Network Anders Holst and Anders Lansner, Royal Institute of Technology, Stockholm 18.05 Neural Networks for Admission Control in an ATM Network Ernst Nordstrom, Olle Gallmo, Lars Asplund, Uppsala University 18.25 - 18.55 Adaptive Generalisation in Connectionist Nets Amanda Sharkey and Noel E Sharkey, University of Exeter, UK Swedish Neural Network Society (SNNS) (Room 1) 19.00-19.45 Members meeting Thursday 10th Session 7: Invited Papers (Room 1) Chair: Ronald L. Chrisley (University of Sussex, UK) 09.00 Structure and Change in Connectionist Models Jerome A. Feldman, ICSI, Berkeley, USA 09.50 On Nativist Connectionism Ajit Narayanan, University of Exeter, UK 10.40 Coffee Break Session: 8 Invited Paper (Room 1) Chair: Anders Lansner (Royal Institute of Technology, Stockholm) 11.10 - 12.00 The Neuropharmacology of Associative Memory Function: an in Vitro, in Vivo, and in Computo Study of Object Recognition in Olfactory Cortex James M. Bower, California Institute of Technology, USA Session 9a: Neurobiological presentations (Room 1) Chair: James M. Bower (CalTech) 12.05 A Model of Cortical Associative Memory Based on Hebbian Cell Assemblies Erik Fransen, Anders Lansner and Hans Liljenstrom, Royal Institute of Technology, Stockholm 12.30 Cognition, Neurodynamics and Computer Models Hans Liljenstrom, Royal Institute of Technology, Stockholm 12.55 Lunch Break Session 9b: Application-oriented presentations (Room 2) Chair: David E. Rumelhart (Stanford University) 12.05 Experiments with Artificial Neural Networks for Phoneme and Word Recognition Kjell Elenius, Royal Institute of Technology, Stockholm 12.30 Recognition of Isolated Spoken Swedish Words - An Approach Based on a Self-organizing Feature Map Tomas Rahkkonen, Telia Research AB, Systems Research Spoken Language Processing, Haninge 12.55 Lunch Break Session 10: Invited Papers (Room 1) Chair: Ajit Narayanan (University of Exeter, UK) 14.00 Connectionist Cognitive Maps and the Development of Objectivity Ronald L. Chrisley, University of Sussex, UK 14.50 Dynamic Rate Adaption Garrison W. Cottrell, University of California, San Diego, USA 15.40 Coffee Break Session 11: Invited Paper (Room 1) Chair: Mikael Boden (SNCC-92 organiser) 16.10 From Theory to Practice: A Case Study David E. Rumelhart, Stanford University, USA 17.00 Closing Registration form Fees include admission to all conference sessions and a copy of the Advance Proceedings. Hotel reservation is made at Billingehus Hotel and Confer- ence Centre. The rooms are available from Tuesday (8th) evening to noon Thursday (10th), for the two-day alterna- tive, and from Wednesday evening (9th) to noon Thursday (10th), for the one-day alternative. All activities will be held at the conference centre. A block of rooms has been reserved until 10th Aug. After this date, room reservations will be accepted on a space available basis. To register, complete and return the form below to the secretariat. Registration is valid when payment is received. Payment should be made to postal giro 78 81 40 - 2, payable to SNCC-92, Hogskolan i Skovde. Cancellation of hotel reserva- tion can be made until 15/8 (the conference fee, 600 SEK, is not refundable). 
-------------------------- cut ----------------------------- Name: (Mr/Ms) ____________________________________________ Company: ____________________________________________ Address: ____________________________________________ City/Country: ____________________________________________ Phone: ____________________________________________ If the double room alternative has been chosen, please give the details for the second person. Name: (Mr/Ms) ____________________________________________ Company: ____________________________________________ Country: ____________________________________________ Alternatives Until 9th Aug After 9th Aug (please circle chosen fee) Conference fee only 1000SEK 1000SEK (incl. coffee and lunch) Conference fee + Full board and single room lodging 8/9 - 10/9 2400SEK 3500SEK Conference fee + Full board and double room lodging 8/9 - 10/9 2 x 2000SEK 2 x 2600SEK Conference fee + Full board and single room lodging 9/9 - 10/9 1700SEK 2400SEK Conference fee + Full board and double room lodging 9/9 - 10/9 2 x 1500SEK 2 x 2000SEK Indicate if vegetarian meals are preferred: _____ person(s)  From mpadgett at eng.auburn.edu Mon Aug 24 10:52:54 1992 From: mpadgett at eng.auburn.edu (Mary Lou Padgett) Date: Mon, 24 Aug 92 09:52:54 CDT Subject: IEEE-NNC Standards Message-ID: <9208241452.AA24224@eng.auburn.edu> IEEE-NNC Standards Committee Report It is the purpose of this column to update you on this activity and to invite you to participate in forthcoming meetings. At its June meeting the IEEE Standards Board formally approved the Project Authorization Requests (PAR's) submitted by the Working Group on Glossary and Symbols and by the Working Group on Performance Evaluation, so those two groups now have their "marching orders." The NNC Standards Committee had a series of fruitful meetings in conjunction with the Baltimore IJCNN. Progress made by the various working groups is detailed below. FUTURE EVENTS * IJCNN Beijing, Nov. 1-6, 1992 A panel discussion and/or workshop will be conducted by Mary Lou Padgett early in the meeting. The formation of an international glossary and symbology for artificial neural networks will be discussed. * SimTec/WNN92 Houston, Nov. 4-7, 1992 There will be a Standards Committee Meeting on Friday, Nov. 6, in conjunction with this conference. Paper competition awards will be announced. Dr. Robert Shelton of NASA/JSC is conducting the Performance Measure Methodology Contest and Prof. E. Tzanakou of Rutgers is conducting the Paradigm Comparison Student Paper Contest. * IEEE-ICNN and IEEE-FUZZ 1993 San Francisco, March 28 - April 1, 1993 A come-and-go meeting of everyone interested in standards will be held on Sunday, March 27 and individual working group meetings will take place on Monday and Tuesday evenings, March 28 and 29. * Proposed New Activity It has been proposed to form a working group to draft a glossary for fuzzy systems. An initial meeting to that end will take place in San Francisco on March 27 and 28, in conjuction with the conference. Please contact either of the undersigned if interested in participating. Over 400 people and companies are on the interest list for standards. If you would like to be included, please contact Mary Lou Padgett. WORKING GROUP REPORTS: WORKING GROUP ON GLOSSARY AND SYMBOLS Chair: Mary Lou Padgett, Auburn University The Working Group on Glossary and Symbols submitted the following PAR, which has been approved by IEEE as a formal project for the group. A voting group will be constructed in the near future. 
Project Title: Recommended Definition of Terms for Artificial Neural Networks Scope: Terminology used to describe and discuss artificial neural networks including hardware, software and algorithms related to artificial neural networks. Purpose: The subject of artificial neural networks is treated in a wide variety of textbooks, technical papers, manuals and other publications. At the present time, there is no widely accepted guide or standard for the use of technical terms relating to artificial neural networks. It is the purpose of this project to provide a comprehensive glossary of recommended terms to be used by the authors of future publications in this field. Status Report: The glossary being developed should be usable by everyone interested in neural networks, so a simple basic structure is desirable. The draft glossary proposed by Russell Eberhart meets this requirement, with some modifications. To help insure that the finished product is usable and still specific enough to help in specialized areas, Glossary Special Interest Group Chairs have been appointed. The exact scope of their groups will be discussed in San Francisco. Eventually, representation from all major neural networks thrusts and geographic areas should be included. People from academia, industry and government in all areas should be represented. The first Glossary SIG Chairs are: Patrick A. Shoemaker, NOSC; Dale E. Nelson, WPAFB; and Emile Fiesler, IDIAP. The glossary will be structured in a modular form, with basic elements coming first, followed by more specialized subsets. Your input is respectfully requested! WORKING GROUP ON PERFORMANCE METHODOLOGY Chair: Robert Shelton, NASA/JSC The Working Group on Performance Methodology met at the Baltimore IJCNN to discuss their newly approved project and formulate an agenda. Project Title: Guidelines for the Evaluation of the Speed and Accuracy of Implementations of Feed-Forward Artificial Neural Networks. Scope: Artificial neural network implementations which implement supervised learning through minimization of an error function based on the sum of the squares of residual errors. Purpose: Since 1986, a large number of implementations of the feed-forward back-error propagation neural network algorithms have been described with widely varying claims of speed and accuracy. At present, buyers and users of software and/or hardware for the purpose of executing such algorithms have no common set of bench-marks to facilitate the verification of vendor claims. The working group proposes to fulfill this need by assembling a suite of test cases. Agenda: Forward Propagation Only The following will comprise a forward propagation system to which the standard will apply. Such a system will be a 3-layer (input, hidden, output), fully connected (sequentially i.e. input to hidden to output), feed-forward neural network. Cases of varying sizes will be proposed. In addition, for each size, there will be at least one "problem" of the following two types. A. Discrete output B. Continuous output. A "problem" will consist of a set of I/O pairs which the system will be required to reproduce. Sequential, portable e.g. C language computer code will be distributed which emulates the desired network including nominal weights and customary sigmoidal transfer functions. The user of the standard may make use of the distributed code and weight values as he or she sees fit. 
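As a rough illustration of the kind of system this agenda describes (and not the working group's code, which had yet to be distributed), a minimal forward-propagation sketch in portable C is given below. The layer sizes, the logistic sigmoid, and the hand-set weights, which happen to compute XOR, are illustrative assumptions only.

/* Minimal illustrative sketch (not the working group's distributed code):
   a 3-layer, fully connected, feed-forward network with the customary
   logistic sigmoid.  Layer sizes and weight values are assumptions; the
   weights are hand-set to compute XOR, so no learning step is involved. */
#include <stdio.h>
#include <math.h>
#define N_IN  2
#define N_HID 2
#define N_OUT 1
static double sigmoid(double x) { return 1.0 / (1.0 + exp(-x)); }
/* input-to-hidden and hidden-to-output weights, plus biases */
static const double w_ih[N_HID][N_IN]  = { {  20.0,  20.0 }, { -20.0, -20.0 } };
static const double b_h[N_HID]         = { -10.0, 30.0 };
static const double w_ho[N_OUT][N_HID] = { {  20.0,  20.0 } };
static const double b_o[N_OUT]         = { -30.0 };
static void forward(const double in[N_IN], double out[N_OUT])
{
    double hid[N_HID];
    int i, j;
    for (j = 0; j < N_HID; j++) {               /* input -> hidden */
        double sum = b_h[j];
        for (i = 0; i < N_IN; i++) sum += w_ih[j][i] * in[i];
        hid[j] = sigmoid(sum);
    }
    for (j = 0; j < N_OUT; j++) {               /* hidden -> output */
        double sum = b_o[j];
        for (i = 0; i < N_HID; i++) sum += w_ho[j][i] * hid[i];
        out[j] = sigmoid(sum);
    }
}
int main(void)
{
    double in[4][N_IN] = { {0,0}, {0,1}, {1,0}, {1,1} };
    double out[N_OUT];
    int p;
    for (p = 0; p < 4; p++) {                   /* one I/O pair per pattern */
        forward(in[p], out);
        printf("%g xor %g -> %.3f\n", in[p][0], in[p][1], out[0]);
    }
    return 0;
}

Compiled with any ANSI C compiler (e.g. cc bench.c -lm), it prints an output near 0 or 1 for each of the four input patterns; timing and checking such fixed-weight forward passes on the proposed problem sets is the sort of measurement the forward-propagation part of the proposal is concerned with.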
The determination of weights is deemed to be a "learning" problem and not within the scope of the part of the standard described here. Parity problems were proposed as hard cases for the discrete output test. Such problems are sufficiently well understood that weights could be provided without recourse to the use of learning algorithms. Character identification was suggested as a second easier kind of discrete output problem. The task of providing good test problems for the case of continuous output was agreed to be significantly more complex. It was suggested that mathematical combinations of algebraic and transcendental functions could serve as the basic model, but it was agreed that the determination of the candidate problems for continuous output would require considerable additional effort. Robert Shelton PT4, NASA/JSC Houston, TX 77058 P: (713) 483-8110 shelton at gothamcity.jsc.nasa.gov WORKING GROUP ON SOFTWARE AND HARDWARE INTERFACES Chair: Steven Deiss, Applied Neurodynamics The NNC Working Group on Software and Hardware Interfaces met at the Baltimore IJCNN. The group was evenly divided by interest into an ad hoc Working Subgroup on Software Interface Standards and an ad hoc Working Subgroup on Hardware Interface Standards. The overall working group persists as an umbrella to integrate current efforts and promote new interface standards activities. Future meetings are expected to discuss PAR submission along with the technical issues. The Software Group got off to a fast start in Baltimore and several meetings were held there. Ten ANN vendors and 15 labs and companies expressed interest in the task of formalizing selected data format standards which would be used to store ANN training sets. Many vendors have translation tools for importing data to their own environments, but many research users find it difficult to share data because of use of unique data formats and paradigm code written early on to accept their nonstandard formats. The group reached consensus that a simple standard training data format is needed, several were discussed, and it was felt that the task was manageable. For further information concerning this project contact: Dr. Harold K. Brown Florida Institute of Technology Dept. of Electrical and Computer Engineering Melbourne, FL 32901-6988 Phone: 407-768-8000 x 7556 Fax: 407-984-8461 Email: hkb at ee.fit.edu The Hardware Group discussed related work on hardware standards that was carried out under the IEEE Computer Society Microprocessor Standards Committee and tried to focus on goals for the current group. In 1989 A Study Group was formed under the auspices of the MSC to evaluate Futurebus+ (896) and Scalable Coherent Interface (1596) for applicability to NN applications. The group recommended a hybrid approach while recognizing the longer range potential of a NN specific interface and interconnect standard. The present group chose to focus on 'guidelines' for utilization of existing standards for NN applications. It was the consensus that the NN community may not yet be ready for a real NN hardware interface standard since this is such an active area of reseach, however, work toward the evolution of such a standard would appear to be timely. For further information about this project or about other areas where interface standards might be appropriate contact: Stephen R. 
Deiss Applied Neurodynamics 2049 Village Park Wy, #248 Encinitas, CA 92024-5418 Phone: 619-944-8859 Fax: 619-944-8880 Email: deiss at cerf.net Thank you for your support of the IEEE-NNC Neural Networks Standards Committee. Please continue to interact with all of the working groups to help us grow in positive directions, and provide service to the entire community. SEE YOU IN SAN FRANCISCO, if not before! Sincerely, Professor Walter J. Karplus Mary Lou Padgett Chair Vice Chair IEEE-NNC Standards Committee IEEE-NNC Standards Committee UCLA, CS Dept. Auburn University, EE Dept. 3723 Boelter Hall 1165 Owens Road Los Angeles, CA 90024 Auburn, AL 36830 P: (310) 825-2929 P: (205) 821-2472 or 3488 email: karplus at CS.UCLA.EDU email: mpadgett at eng.auburn.edu   From bryan at ad.cise.nsf.gov Mon Aug 24 10:57:37 1992 From: bryan at ad.cise.nsf.gov (Bryan Thompson) Date: Mon, 24 Aug 92 10:57:37 GMT-0400 Subject: A neat idea from L. Breiman Message-ID: <9208241457.AA01221@ ad.cise.nsf.gov.cise.nsf.gov > This sounds very much like an attempt to approximate the results of dynamic programming. In dynamic programming each state (decision point) or state-action pair is associated with the cumulative expected future utility of that action. Decisions are then made locally by selecting the option that has the highest cumulative expected future utility. Standard dynamic programming works backwards from a terminus, but there are approximations to dynamic programming that operate in forward time (e.g. heuristic dynamic programming or Q-learning) and can be applied within neural networks. In practice these approximations often develop non-optimal decision policies and a balance must be struck between performance (when you always choose the action that is perceived to be optimal) and exploration (when you choose actions that are believed _not_ to be optimal in order to efficiently explore the decision space, avoid local minima in the decision policy, detect a changing environment, etc.). Bryan Thompson National Science Foundation  From fellous%hyla.usc.edu at usc.edu Sun Aug 23 15:43:12 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Sun, 23 Aug 92 12:43:12 PDT Subject: models of LTP and LTD Message-ID: <9208231943.AA02074@hyla.usc.edu> Dear Connectionists, A few weeks ago I posted a request for complementary references on models of LTP and LTD. I would like to thank all the people who kindly replied, as I reproduce their emails at the end of this message. Yours, Jean-Marc >>>>>>>>>>>> From: ""L. Detweiler"" X-Mts: smtp Status: RO -- Please let me know what you find out. I can't give you any LTP references per se, but there is a lot of work going on in vision research with Hebbian rules. I have plenty of these references; let me know if you are interested. ld231782 at longs.LANCE.ColoState.EDU From rsantiag at nsf.gov Tue Aug 11 13:59:00 1992 From: rsantiag at nsf.gov (rsantiag@nsf.gov) Date: 11 Aug 92 12:59 EST Subject: (none) Message-ID: I am interested in finding out what work you are doing with LTP. I am currently working with systems incorporating Backprop, CMAC, Spline, and Critic models. I have been looking into modeling LTP for the last couple of months. I think that the mechanisms of LTP will help me in incorporating memories in a more efficient manner in my system. It might also provide more inspiration for reorganizing my architecture. After seeing your posting on the connectionists list I have revived my interest in this area.
I would like to collaborate with you, if that is possible, in incorporating LTP models into current ANN systems. Robert A. Santiago National Science Foundation From jan at pallas.neuroinformatik.ruhr-uni-bochum.de Wed Aug 12 15:28:55 1992 From: jan at pallas.neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen) Date: Wed, 12 Aug 92 21:28:55 +0200 Subject: LTP/LTD models Message-ID: <9208121928.AA01393@pallas.neuroinformatik.ruhr-uni-bochum.de> Hi Jean-Marc, have a look at last year's issues of Neural Computation. There is a paper by Bill Philips and his colleagues (can't remember all the names) on learning rules in general, and they also compare the rule derived from Artola, Broecher, and Singer's paper in Nature on LTD/LTP in rat striate cortex (vdM should know about that one, also 1st half of 1991, I think) with all the others and show that it is best. Regards, Jan From paolo at psyche.mit.edu Mon Aug 24 14:00:31 1992 From: paolo at psyche.mit.edu (Paolo Frasconi) Date: Mon, 24 Aug 92 14:00:31 EDT Subject: Paper in Neuroprose Archive Message-ID: <9208241800.AA11636@psyche.mit.edu> The following technical report has been placed in the Neuroprose Archives at Ohio State University: Injecting Nondeterministic Finite State Automata into Recurrent Neural Networks Paolo Frasconi, Marco Gori, and Giovanni Soda Technical Report DSI-RT15/92, August 1992 Dipartimento di Sistemi e Informatica University of Florence Abstract: In this paper we propose a method for injecting time-warping nondeterministic finite state automata into recurrent neural networks. The proposed algorithm takes as input a set of automata transition rules and produces a recurrent architecture. The resulting connection weights are specified by means of linear constraints. In this way, the network is guaranteed to carry out the assigned automata rules, provided the weights belong to the constrained domain and the inputs belong to an appropriate range of values, making possible a boolean interpretation. In a subsequent phase, the weights can be adapted in order to obtain the desired behavior on corrupted inputs, using learning from examples. One of the main concerns of the proposed neural model is that it is no longer focussed exclusively on learning, but also on the identification of significant architectural and weight constraints derived systematically from automata rules, representing the partial domain knowledge on a given problem. To obtain a copy via FTP (courtesy of Jordan Pollack): unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52) Name: anonymous Password: (type your E-mail address) ftp> cd pub/neuroprose ftp> binary ftp> get frasconi.nfa.ps.Z ftp> quit unix% zcat frasconi.nfa.ps.Z | lpr (or however you uncompress and print postscript) Sorry, no hard copies available. Paolo Frasconi Dipartimento di Sistemi e Informatica Via di Santa Marta, 3 50139 Firenze, Italy frasconi at ingfi1.cineca.it  From fellous%hyla.usc.edu at usc.edu Mon Aug 24 20:47:12 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Mon, 24 Aug 92 17:47:12 PDT Subject: Models of LTP and LTD. Message-ID: Dear Connectionists, I forgot to add this entry to the review of replies regarding LTP/LTD.
Sorry for any inconvenience, Jean-Marc Fellous >>>>>>> From mrj at moria.cs.su.oz.au Tue Aug 25 03:57:43 1992 From: mrj at moria.cs.su.oz.au (Mark James) Date: Tue, 25 Aug 1992 17:57:43 +1000 Subject: Thesis on NN simlulation hardware availabe for ftp Message-ID: The following Master of Science thesis is available for ftp from neuroprose: Design of Low-cost, Real-time Simulation Systems for Large Neural Networks -------------------------------------------------------------------------- Mark James The University of Sydney January, 1992 ABSTRACT Systems with large amounts of computing power and storage are required to simulate very large neural networks capable of tackling complex control problems and real-time emulation of the human sensory, language and reasoning systems. General-purpose parallel computers do not have communications, processor and memory architectures optimized for neural computation and so can not perform such simulations at reasonable cost. The thesis analyses several software and hardware strategies to make feasible the simulation of large, brain-like neural networks in real-time and presents a particular multicomputer design able to implement these strategies. An important design goal is that the system must not sacrifice computational flexibility for speed as new information about the workings of the brain and new artificial neural network architectures and learning algorithms are continually emerging. The main contributions of the thesis are: - an analysis of the important features of biological neural networks that need to be simulated, - a review of hardware and software approaches to neural networks, and an evaluation of their abilities to simulate brain-like networks, - the development of techniques for efficient simulation of brain- like neural networks, and - the description of a multicomputer that is able to simulate large, brain-like neural networks in real-time and at low cost. ------------------------------------------ To obtain a copy via FTP use the standard procedure: % ftp cheops.cis.ohio-state.edu anonymous Password: anything ftp> cd pub/neuroprose ftp> binary ftp> get james.nnsim.ps.Z ftp> quit % zcat james.nnsim.ps.Z | lpr ------------------------------------------ Mark James | EMAIL : mrj at cs.su.oz.au | Basser Department of Computer Science, F09 | PHONE : +61-2-692-4276 | The University of Sydney NSW 2006 AUSTRALIA | FAX : +61-2-692-3838 |  From henrik at tazdevil.llnl.gov Mon Aug 31 17:01:11 1992 From: henrik at tazdevil.llnl.gov (Henrik Klagges) Date: Mon, 31 Aug 92 14:01:11 PDT Subject: A neat idea from L. Breiman Message-ID: <9208312101.AA25658@tazdevil.llnl.gov> > hold {d_2, ..., d_{k-1}} constant ...by re-making decision d_1 How can you hold d_2 ... etc constant if they might depend on d_1, like in a game tree ? Cheers, Henrik IBM Research Lawrence Livermore National Labs From smieja at jargon.gmd.de Sun Aug 2 05:43:27 1992 From: smieja at jargon.gmd.de (Frank Smieja) Date: Sun, 2 Aug 92 11:43:27 +0200 Subject: identification vs. authentication problems In-Reply-To: Jocelyn Sietsma Penington's message of Fri, 31 Jul 92 12:47:20 +1000 Message-ID: <9208020943.AA16555@jargon.gmd.de> The issue of network/network system response to an unlearnt class, or random pattern, was discussed in F. J. Smieja & H. Muehlenbein Reflective Modular Neural Network Systems (submitted to Machine Learning, also available in the ohio state uni NEUROPROSE archive as 'smieja.reflect.ps.Z') as the "dog-paw" test. 
For singly-operating networks the conclusion was either tolerate it or degrade the 'real' learning by explicitly mapping random patterns to another class output. It was shown that for the modular network system introduced in the paper 'garbage' answers could be learnt-out without significant degradation to the 'real' learning. Jocelyn Sietsma writes: -) -) In my opinion (based on somewhat limited experience) feed-forward ANNs usually -) develop a 'default output' or default class, which means most new inputs which -) are different to the training classes will be classified in the same way. I cannot agree with this. The fraction of garbage classified as a particular class depends on the distribution of the classes in the pattern space, and the frequency with which each class is seen during the learning. This determines how the pattern space is split up by the ANN's hyperplanes, and, of course, how each possible pattern in the input space will be classified. -Frank Smieja Gesellschaft fuer Mathematik und Datenverarbeitung (GMD) GMD-FIT.KI.AS, Schloss Birlinghoven, 5205 St Augustin 1, Germany. Tel: +49 2241-142214 email: smieja at gmd.de  From UDAH025 at OAK.CC.KCL.AC.UK Tue Aug 4 10:58:00 1992 From: UDAH025 at OAK.CC.KCL.AC.UK (G.BUGMANN@OAK.CC.KCL.AC.UK) Date: Tue, 4 Aug 92 10:58 BST Subject: Robustness ? Message-ID: Dear Connectionnists, A widespread belief is that neural networks are robust against the loss of neurons. This is possibly true for large Hopfield nets but is certainly wrong for multilayer networks trained with backpropagation, a fact mentioned by several authors (a list of references can be found in our paper described below). I wonder where this whole idea of robustness comes from ? It is possibly based on the two following beliefs: 1) Millions of neurons in our brain die each day. 2) Artificial neural networks have the same properties as biological neural networks. As we are apparently not affected by the loss of neurons we are forced to conclude that artificial neural networks should also not be affected. However, belief 2 is difficult to confirm because we do not really know how the brain operates. As for belief 1, neuronal death is well documented during development but I could not find a publication covering the adult age. Does anyone know of any publications supporting belief 1 ? Thank you in advance for your reply. Guido Bugmann ---------------------------------------------------------- The following paper will appear in the proceedings of ICANN'92 published as "Artificial Neural Networks II" by Elsevier. "Direct approaches to Improving the Robustness of Multilayer Neural Networks" G. Bugmann, P. Sojka, M. Reiss, M. Plumbley and J. G. Taylor This paper describes two methods to improve the robustness of multilayer NN (Robustness is defined by the error induced by the loss of a hidden node). The first method consists of including a "robustness error" term in the error function used in backpropagation. This was done with the hope that it would force the network to converge to a more robust configuration. A typical test was to train a network with 10 hidden nodes for the XOR function. As only 2 hidden nodes are actually necessary, the most robust configuration is then made of two groups of 5 hidden neurons sharing the same function. Although the modified error function leads to a more robust network it increases the traditional functional error and does not converge to the most robust configuration. The second method is more direct. 
It consists of testing periodically the robustness of the net during normal backpropagation and then duplicating the hidden node whose loss would be most damaging for the net. For that duplication we use a prunned hidden node so that the total number of hidden nodes remains unchanged. This is a very effective technique. It converges to the optimal 2 x 5 configuration and does not use much extra computation time. The accuracy of the net is not affected because training goes on between the "prunning-duplication" operations. By using this technique as a complement to the classical prunning techniques robustness and generalisation can be improved at the same time. A preprint of the paper can be obtained from: ---------------------------------------------------------- Guido Bugmann Centre for Neural Networks Kings College London Strand London WC2R 2LS United Kingdom Phone (+44) 71 873 2234 FAX (+44) 71 873 2017 email: G.Bugmann at oak.cc.kcl.ac.uk -----------------------------------------------------------  From bill at nsma.arizona.edu Wed Aug 5 15:04:30 1992 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Wed, 5 Aug 92 12:04:30 MST Subject: Robustness ? Message-ID: <9208051904.AA07164@nsma.arizona.edu> G. Bugmann writes: > It is possibly based on the two following beliefs: > 1) Millions of neurons in our brain die each day. > Does anyone know of any publications supporting belief 1 ? It's a myth. I heard Robert Sapolsky discuss this in a talk once: he said that the myth seems to have been inspired by some ancient studies of long-time alcoholics. Neurons do die every so often, of course, but the rate has never been quantified (as far as I know) because it's so low, certainly much fewer than a thousand per day in ordinary, healthy people. -- Bill  From george at minster.york.ac.uk Wed Aug 5 12:48:40 1992 From: george at minster.york.ac.uk (george@minster.york.ac.uk) Date: Wed, 5 Aug 92 12:48:40 Subject: Robustness ? Message-ID: > A widespread belief is that neural networks are robust > against the loss of neurons. This is possibly true for > large Hopfield nets but is certainly wrong for multilayer > networks trained with backpropagation, a fact mentioned > by several authors (a list of references can be found in > our paper described below). Equally, there are references which say MLP's are fault tolerant to unit loss. For instance, a study by Bedworth and Lowe examined a large (approx. 760-20-15) MLP trained to distinguish the confusable "ee" sounds in English. They found that for a wide range of faults, a degree of fault tolerance did exist. Typical fault modes were adding noise to weights, removing units, setting output of a unit to 0.5. These were based on fault modes that might arise from various implementation design choices (see Bolt 1991 and 1991B). Other main references are Tanaka 1989, Lincoln 1989. For a more complete review, see Bolt 1991C. A more recent report shows how fault tolerant MLP's (wrt to weight loss) can be constructed (see Bolt 1992). One of the problems is that adding extra capacity to a MLP, in extra hidden units for example, does not necessarily mean that back-error propagation will use the extra units to form a redundant representation. As has been noted by many researchers, the MLP tends to overgeneralise instead. This leads to very in-fault tolerant MLP's. However, if a constraint is placed on the network (as in Bugmann 1992) then this will force it to use extra capacity as redundancy which will lead to fault tolerance. See Neti 1990. 
Another result which has bearing on this matter is Abu-Mostafa's claim that ANN's are better at solving random problems (e.g. the "ee" sounds as in Bedworth) than structured problems such as XOR. [Abu-Mostafa 1986] It can also be viewed that the computational nature of neural networks is such that they map more easily to a problem whose solution space exhibits adjacency. If a fault occurs, which results in a shift in solution space, the function performed by the ANN only changes slightly due to the nature of the problem. This reasoning is supported by various studies of fault tolerance in ANN's where for applications such as XOR, little fault tolerance is found, whereas for "soft" problems, fault tolerance is found to be possible. I qualify this last statement since current training techniques do not tend to produce fault tolerant networks (see Bolt 1992). However, it is my belief that the computational structure provided by ANN's does imply that a degree of inherent fault tolerance is possible. For instance, the simple perceptron unit (for bipolar representations) will suffer up to D connection losses, where D is the Hamming distance between the two classes it separates. Note however that for binary representations, only 0.5D will be tolerated. This degree of fault tolerance is very good. However, back-error propagation does not take advantage of it due to the weight configurations which it produces. An important feature is that equal loading must be placed on all units; this will then remove critical components within a neural network (e.g. see Bolt 1992B). > I wonder where this whole > idea of robustness comes from ? > > It is possibly based on the two following beliefs: > 1) Millions of neurons in our brain die each day. > 2) Artificial neural networks have the same properties as > biological neural networks. > As we are apparently not affected by the loss of neurons we > are forced to conclude that artificial neural networks should > also not be affected. > > However, belief 2 is difficult to confirm because we do not > really know how the brain operates. As for belief 1, neuronal > death is well documented during development but I could > not find a publication covering the adult age. > > Does anyone know of any publications supporting belief 1 ? Figures that I have heard of range around thousands rather than millions... however, I would also like to hear of any publications supporting this claim. More interesting are occurrences where severe brain damage is suffered with little effect. Wood (1983) gives an interesting study of this. ____________________________________________________________ George Bolt, Advanced Computer Architecture Group, Dept. of Computer Science, University of York, Heslington, YORK. YO1 5DD. UK. Tel: + [44] (904) 432771 george at minster.york.ac.uk Internet george%minster.york.ac.uk at nsfnet-relay.ac.uk ARPA ..!uknet!minster!george UUCP ____________________________________________________________ References: %T Fault Tolerance in Multi-Layer Perceptrons: a preliminary study %A M.D. Bedworth %A D. Lowe %D July 1988 %I RSRE: Pattern Processing and Machine Intelligence Division %K Note: RSRE is now the DRA %T A Study of a High Reliable System against Electric Noises and Element Failures %A H. Tanaka %D 1989 %J Proceedings of the 1989 International Symposium on Noise and Clutter Rejection in Radars and Imaging Sensors %E T. Suzuki %E H. Ogura %E S.
Fujimura %P 415-20 %T Synergy of Clustering Multiple Back Propagation Networks %A W P Lincoln %A J Skrzypek %J Proceedings of NIPS-89 %D 1989 %P 650-657 %T Fault Models for Artificial Neural Networks %A G.R. Bolt (Bolt 1991) %D November 1991 %C Singapore %J IJCNN-91 %P 1918-1923 %V 3 %T Assessing the Reliability of Artificial Neural Networks %A G.R. Bolt (Bolt 1991B) %D November 1991 %C Singapore %J IJCNN-91 %P 578-583 %V 1 %T Investigating Fault Tolerance in Artificial Neural Networks %A G.R. Bolt (Bolt 1991C) %D March 1991 %O Dept. of Computer Science %I University of York, Heslington, York UK %R YCS 154 %K Neuroprose ftp: bolt.ft_nn.ps.Z %T Fault Tolerant Multi-Layer Perceptrons %A G.R. Bolt %A J. Austin %A G. Morgan %D August 1992 %O Computer Science Department %I University of York, UK %R YCS 180 %K Neuroprose ftp: bolt.ft_mlp.ps.Z %T Maximally fault-tolerant neural networks: Computational Methods and Generalization %A C. Neti %A M.H. Schneider %A E.D. Young %X Preprint, %D June 15, 1990 %A Y.S. Abu-Mostafa %B Complexity in Information Theory %T Complexity of random problems %D 1986 %I Springer-Verlag %T Implications of simulated lesion experiments for the interpretation of lesions in real nervous systems %A C. Wood %B Neural Models of Language Processes %E M.A. Arbib %E D. Caplan %E J.C. Marshall %I New York: Academic %D 1983 %T Uniform Tuple Storage in ADAM %A G. Bolt (Bolt 1992B) %A J. Austin %A G. Morgan %J Pattern Recognition Letters %D 1992 %P 339-344 %V 13  From rsantiag at nsf.gov Thu Aug 6 17:05:00 1992 From: rsantiag at nsf.gov (rsantiag@nsf.gov) Date: 6 Aug 92 16:05 EST Subject: Robustness ? Message-ID: <9208061648.ad20894@Note2.nsf.gov> "In search of the Engram" The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of brains and seeing if it affected memory. It didn't. Other work was done by another gentleman named Richard F. Thompson in the same area. Both speak of the loss of neurons in a system and offer theories about how integrity was kept. In particular, Karl Lashley spoke of memory as holograms. I think this is what you are looking for as far as references. As for the everyday loss of neurons, it seems to vary from person to person and an actual measure cannot be ascertained (this information was gathered after questioning 3 neurobiologists, who all agreed). It is more important, with regard to the loss of neurons and the question of robustness, to identify the stage in which the loss is occurring. There are three distinct stages in neurobiological creation and development in which we observe this with any significance. These are: embryonic development, maturation and learning stages, and maturity. In embryonic development, the loss of neurons is rampant but eventually leads to the full development of the brain with overconnected neurons. The loss of these neurons is important developmentally. In maturation and learning, the loss of neurons helps to define neuronal systems and plays a role in their adaptation and learning process. Finally, in maturity, the loss of neurons is insignificant. Indeed, Lashley's model of the holographic mind seems very true. The only exception to this is the massive loss of brain matter (neurons). In a situation like this (such as a stroke) there can be massive destruction of neuronal systems.
A comparison to ANNs, though, is difficult. In ANNs, losing even a few neurons could represent the loss of 5 to 25 percent of the neurons, depending on the model. For a human to lose 5 to 25 percent of their brain could be a devastating proposition. The question of robustness is best reserved for larger systems that would suffer the loss of neurons on a level more proportional to current biological NN systems. It is important, though, to identify where the loss of neurons falls in your model (developing, training, or after you have a stable NN) before you attack the problem of robustness. (Most of the previous paragraph is derived from "Neurobiology" by Gordon M. Shepherd and from miscellaneous sources that he cites in his book.) As for the assumption that ANNs and biological NNs have many of the same properties, well, that is an overwhelmingly boastful statement. The only similarity between them is their organizational structure. The only experiment with ANNs that comes close to actual biological neuron modeling is a project done by Gary Lynch in California, who modelled the Olfactory Cortex and some of the NN systems that go into smell identification. He structured each of his neurons to function exactly as a biological neuron. His results are very fascinating. Both ANNs and biological NNs are parallel processors but after that, they separate radically into two types of systems. Robert A. Santiago National Science Foundation rsantiag at note.nsf.gov  From Paul_King at NeXT.COM Thu Aug 6 14:28:46 1992 From: Paul_King at NeXT.COM (Paul King) Date: Thu, 6 Aug 92 11:28:46 -0700 Subject: Robustness ? Message-ID: <9208061828.AA09308@oz.NeXT.COM> G. Bugmann writes: > It is possibly based on the two following beliefs: > 1) Millions of neurons in our brain die each day. > Does anyone know of any publications supporting belief 1 ? Bill Skaggs writes: > It's a myth. I heard Robert Sapolsky discuss this in a talk > once: he said that the myth seems to have been inspired by > some ancient studies of long-time alcoholics. ... Moshe Abeles in _Corticonics_ (Cambridge Univ. Press 1991) writes on page 208 that: "Comparisons of neuronal densities in the brains of people who died at different ages (from causes not associated with brain damage) indicate that about a third of the cortical cells die between the ages of twenty and eighty years [Gerald, Tomlinson, and Gibson, 1980]. Adults can no longer generate new neurons, and therefore those neurons that die are never replaced. The neuronal fallout proceeds at a roughly steady rate throughout adulthood (although it is accelerated when the circulation of blood in the brain is impaired). The rate of neuronal fallout is not homogeneous throughout all the cortical regions, but most of the cortical regions are affected by it. Let us assume that every year about 0.5 percent of the cortical cells die at random...." and goes on to discuss the implications for network robustness. The reference is to: Gerald H., Tomlinson B. E., and Gibson P.H. (1980). Cell counts in human cerebral cortex in normal adults throughout life using an image analysing computer. J. Neurol. 46:113-36. Paul King  From terry at helmholtz.sdsc.edu Thu Aug 6 23:53:25 1992 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Thu, 6 Aug 92 20:53:25 PDT Subject: Robustness ?
Message-ID: <9208070353.AA01759@helmholtz.sdsc.edu> Another paper that addresses fault tolerance in feedforward nets: Neti, C, Schneider, MH and Young, ED, Maximally fault-tolerant neural networks and nonlinear programming. Vol II, p 483 IJCNN San Diego, June 1990. Comparisons between the brain and BP nets may be misleading since a unit should not be equated with a single neuron. If one unit represents the average firing rate in thousands of neurons then random loss of neurons would correspond more closely to randomly perturbing the weights than to cutting out units. Cutting out a unit is closer to the damage that occurs with lesions of many neurons, which often leads to unusual deficits. The performance of BP-derived feedforward nets is remarkably resistant to adding random noise to the weights, as Charlie Rosenberg and I showed using NETtalk. It took us a while to realize that our random number generator was really working. Terry -----  From edelman at wisdom.weizmann.ac.il Fri Aug 7 05:47:45 1992 From: edelman at wisdom.weizmann.ac.il (Edelman Shimon) Date: Fri, 7 Aug 92 13:47:45 +0400 Subject: Robustness ? In-Reply-To: rsantiag@nsf.gov's message of 6 Aug 92 16:05 EST <9208061648.ad20894@Note2.nsf.gov> Message-ID: <9208070947.AA28154@wisdom.weizmann.ac.il> Robert A. Santiago wrote: The problem of robustness from a neurobiological perspective seems to originate from work done by Karl Lashley. He sought to find how memory was partitioned in the brain. He thought that memories were kept on certain neuronal circuit paths (engrams) and experimented under this hypothesis by cutting out parts of brains and seeing if it affected memory. It didn't. This description of Lashley's results is incorrect. Lashley did find an effect of the lesions he induced in the rat's brain, but the effect seemed to depend more on the extent of the lesion than on its location. By the way, some people who work on the rat (e.g., Yadin Dudai, here at Weizmann) now believe that Lashley's results may have to do with his method: the lesions may frequently have been made without regard to their proximity to blood vessels. Damage to these vessels could have secondary effects over wider areas not related directly to the site of the original lesion... So, care should be taken not to jump to conclusions based on 60-year-old anecdotes - better data are available now on the effects of lesions, including in humans. Some of these data indicate that certain brain functions are surprisingly well-localized. See, for example, McCarthy and Warrington, Nature 334:428-430 (1988). -Shimon Shimon Edelman Internet: edelman at wisdom.weizmann.ac.il Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science Rehovot 76100, ISRAEL  From jan at pallas.neuroinformatik.ruhr-uni-bochum.de Fri Aug 7 10:51:25 1992 From: jan at pallas.neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen) Date: Fri, 7 Aug 92 16:51:25 +0200 Subject: Robustness ? Message-ID: <9208071451.AA10865@pallas.neuroinformatik.ruhr-uni-bochum.de> [Note to moderator: I read connectionists via my colleague Rolf Wuertz (rolf at neuroinformatik.ruhr-uni-bochum.de), so that only one copy of connectionists needs to cross the Atlantic. We work on the same stuff, however.] As I remember it, the studies showing a marked reduction in nerve cell count with age were done around the turn of the century.
The method, then as now, is to obtain brains of deceased persons, fix them, prepare sections, count cells microscopically in those sections, and then estimate the total number of cells by multiplying the sampled cells per unit volume of section by the total volume. This method has some obvious systematic pitfalls, however. The study was done again some (5-10?) years ago by a German anatomist (from Kiel, I think), who tried to get these things under better control. It is well known, for instance, that tissue shrinks when it is fixed; the cortex's pyramid cells are turned into that form by fixation. The new study showed that the total water content of the brain does vary dramatically with age; when this is taken into account, it turns out that the number of cells is identical within error bounds (a few percent?) between quite young children and persons up to 60-70 years of age. All this is from memory, and I don't have access to the original source, unfortunately; but I'm pretty certain that the gist is correct. So the conclusion seems to be that cell loss with age in the CNS is much lower than generally thought. On the other hand, if you compare one "neuron" in your backprop net not with a single neuron in the CNS, but with a group of them (a column in striate cortex, for instance), which would be a better analogy functionally, then it seems to be easier to understand the brain's robustness: The death of a single neuron doesn't kill the whole processing unit, it merely reduces, e.g., its output power or its resolution. If, however, you kill off a whole region, i.e., all member cells of a column, your functionality will suffer and degradation will be much less graceful. -- Jan Vorbrueggen, Institut f. Neuroinformatik, Ruhr-Universitaet Bochum, FRG -- jan at neuroinformatik.ruhr-uni-bochum.de  From bill at nsma.arizona.edu Fri Aug 7 15:01:19 1992 From: bill at nsma.arizona.edu (Bill Skaggs) Date: Fri, 7 Aug 92 12:01:19 MST Subject: Robustness? Message-ID: <9208071901.AA07698@nsma.arizona.edu> I wrote: >Neurons do die every so often, of course, but the rate >has never been quantified (as far as I know) because >it's so low, certainly much fewer than a thousand per >day in ordinary, healthy people. I was wrong. There have been a number of studies of neuron loss in aging. It proceeds at different rates in different parts of the brain, with some parts showing hardly any loss at all. Even in different areas of cortex the rates of loss vary widely, but it looks like, overall, about 20% of the neurons in cortex are lost by age 60. Using the standard estimate of ten billion neurons in the neocortex, this works out to about one hundred thousand neurons lost per day of adult life. Reference: "Neuron numbers and sizes in aging brain: Comparisons of human, monkey, and rodent data", DG Flood & PD Coleman, *Neurobiology of Aging* 9:453-464 (1988). Thanks to the people who wrote to me about this. -- Bill  From fellous%hyla.usc.edu at usc.edu Fri Aug 7 19:49:06 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Fri, 7 Aug 92 16:49:06 PDT Subject: No subject Message-ID: <9208072349.AA23728@hyla.usc.edu> Dear Connectionists, Here are some papers I have found so far on computational models of LTP and/or LTD. I would be grateful if you could point me to any other models in this area, especially if they have actually been implemented, no matter on what simulator or tool. Zador A, Koch C, Brown TH: Biophysical model of a Hebbian synapse. {\em Proc Nat Acad Sci USA} 1990, 87:6718-6722.
Proposes a specific, experimentally justified model of the dynamics of LTP in hippocampal synapses. Brown TH, Zador AM, Mainen ZF and Claiborne BJ (1991) Hebbian modifications in hippocampal neurons, in ``Long-Term Potentiation: A Debate of Current Issues'' (eds M Baudry and JL Davis) MIT Press, Cambridge MA, 357-389. Summarizes the material in the previous paper, and explores the consequences of the facts of LTP for the representations formed within the hippocampus, using compartmental modeling techniques. Holmes WR, Levy WB: Insights into associative long-term potentiation from computational models of NMDA receptor-mediated calcium influx and intracellular Calcium concentration changes. {\em J Neurophysiol} 1990, 63:1148-1168. In advance, thank you, Yours, Jean-Marc Fellous Center for Neural Engineering University of Southern California Los Angeles ps: I will post a summary of the (eventual !) replies to the list  From bruno at cns.caltech.edu Fri Aug 7 20:17:12 1992 From: bruno at cns.caltech.edu (Bruno Olshausen) Date: Fri, 7 Aug 92 17:17:12 PDT Subject: No subject Message-ID: <9208080017.AA15777@cns.caltech.edu> The following technical report has been archived for public ftp: ---------------------------------------------------------------------- A NEURAL MODEL OF VISUAL ATTENTION AND INVARIANT PATTERN RECOGNITION Bruno Olshausen, Charles Anderson*, and David Van Essen Computation and Neural Systems Program Division of Biology, 216-76 and *Jet Propulsion Laboratory California Institute of Technology Pasadena, CA 91125 CNS Memo 18 Abstract. We present a biologically plausible model of an attentional mechanism for forming position- and scale-invariant object representations. The model is based on using control neurons to dynamically modify the synaptic strengths of intra-cortical connections so that information from a windowed region of primary visual cortex, V1, is routed to higher cortical areas while preserving information about spatial relationships. This paper describes details of a neural circuit for routing visual information and provides a solution for controlling the circuit as part of an autonomous attentional system for recognizing objects. The model is designed to be consistent with known neurophysiology, neuroanatomy, and psychophysics, and it makes a variety of experimentally testable predictions. ---------------------------------------------------------------------- Obtaining the paper via anonymous ftp: 1. ftp to kant.cns.caltech.edu (131.215.135.31) 2. login as 'anonymous' and type your email address as the password 3. cd to pub/cnsmemo.18 4. set transfer mode to binary (type 'binary' at the prompt) 5. get either 'paper-apple.tar.Z' or 'paper-sparc.tar.Z'. The first will print on the Apple LaserWriter II, the other on the SPARCprinter. (They may work on other PostScript printers too, but I can't guarantee it.) 6. quit from ftp, and then uncompress and detar the file on your machine by typing uncompress -c filename.tar.Z | tar xvf - 7. remove the tarfile and print out the three postscript files (paper1.ps, paper2.ps and paper3.ps), beginning with paper3.ps. If you don't have an appropriate PostScript printer, then send a request for a hardcopy to bruno at cns.caltech.edu.  From rob at comec4.mh.ua.edu Sat Aug 8 12:54:17 1992 From: rob at comec4.mh.ua.edu (Robert Elliott Smith) Date: Sat, 08 Aug 92 10:54:17 -0600 Subject: Genetic Algorithms Conf. 
Call for Papers Message-ID: <9208081554.AA16853@comec4.mh.ua.edu> Call for Papers ICGA-93 The Fifth International Conference on Genetic Algorithms 17-22 July, 1993 University of Illinois at Urbana-Champaign The Fifth International Conference on Genetic Algorithms (ICGA-93) will be held on July 17-22, 1993 at the Univ. of Illinois at Urbana-Champaign. This meeting brings together an international community from academia, government, and industry interested in algorithms suggested by the evolutionary process of natural selection. Topics of particular interest include: genetic algorithms and classifier systems, evolution strategies, and other forms of evolutionary computation; machine learning and optimization using these methods, their relations to other learning paradigms (e.g., neural networks and simulated annealing), and mathematical descriptions of their behavior. Papers discussing how genetic algorithms and classifier systems are related to biological and cognitive systems are also encouraged. Papers describing significant, unpublished research in this area are solicited. Authors must submit four (4) complete copies of their paper (hardcopy only), received by February 1, 1993, to the Program Chair: Stephanie Forrest Dept. of Computer Science University of New Mexico Albuquerque, N.M. 87131-1386 Papers should be no longer than 10 pages, single-spaced, and printed using 12 pt. type. Please include a separate title page with authors' names and addresses, and do not include these names in the paper's body, to allow for anonymous peer review. The title page should also contain a short abstract. Electronic submissions will not be accepted. Evaluation criteria include the significance of results, originality, and the clarity and quality of the presentation. Questions on the conference program and submission should be directed to icga93 at unmvax.cs.unm.edu. Other questions should be directed to rob at comec4.mh.ua.edu. Important Dates: February 1, 1993: Submissions must be received April 7, 1993: Notification to authors mailed May 7, 1993: Revised, final camera-ready paper due July 17-22, 1993: Conference dates ICGA-93 Conference Committee: Conference Co-Chairs: David E. Goldberg, Univ. of Illinois at Urbana-Champaign J. David Schaffer, Philips Labs Publicity: Robert E. Smith, Univ. of Alabama Program Chair: Stephanie Forrest, Univ. of New Mexico Financial Chair: Larry J. Eshelman, Philips Labs Local Arrangements: David E. Goldberg, Univ. of Illinois at Urbana-Champaign  From ptodd at spo.rowland.org Wed Aug 12 17:57:37 1992 From: ptodd at spo.rowland.org (Peter M. Todd) Date: Wed, 12 Aug 92 17:57:37 EDT Subject: Call for Presentations: Knowledge Technology in the Arts Message-ID: <9208122157.AA02858@spo.rowland.org> (I hope we will get some connectionist contributions-- Peter) ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ CALL FOR PRESENTATIONS on Knowledge Technology in the Arts to be presented in a special session at the 1993 Conference of the Society of Electroacoustic Music in the U.S. (SEAMUS) University of Texas at Austin March 31st to April 3rd, 1993 and at the Fourth Arts and Technology Conference Connecticut College, New London, CT March 4th to 6th, 1993 During the 1993 SEAMUS conference, a special session on knowledge technology in the arts will be held, co-sponsored by SEAMUS and IAKTA, the newly founded International Association for Knowledge Technology in the Arts.
The main purpose of this session is to familiarize artists with the applications of AI, Connectionism, and other knowledge technologies in music and related arts, and the new tools that are available for these artistic pursuits. IAKTA is calling for proposals for presentations on applications of symbolic AI and neural networks to topics in composition, performance, and teaching in the computer arts (music, film, dance, video art, performance art), in keeping with the conference's focus on "Music, Media, and Movement". We would most like to encourage the submission of tutorial presentations that will help inspire artists to learn more about and become involved in knowledge technology, both as practitioners and as researchers. Speakers will have approximately 25-45 minutes to give a thorough introduction to their topic. Talks on new and innovative uses of knowledge technology and the arts are also welcomed. Please send abstracts/descriptions up to two pages in length and descriptions of your audiovisual requirements by October 1 to IAKTA president Otto Laske (laske at cs.bu.edu) and secretary Peter Todd (ptodd at spo.rowland.org). (For more information on IAKTA itself, our goals, membership structure, etc., please contact Peter Todd at ptodd at spo.rowland.org .) IAKTA would also like to encourage paper submissions on knowledge technology in the arts to the Fourth Arts and Technology Conference at Connecticut College. This conference will be held March 4th to 6th, 1993, in New London, Connecticut, and is being organized by Noel Zahler and David Smalley. The emphasis of the Arts and Technology Conference is multidisciplinary interaction; it will cover virtual reality, cognition and the arts, experimental theater, the compositional process, and speculative uses of technology in education. Submissions in the form of a detailed two-page abstract including audiovisual requirements should be sent by October 15 to Dr. Noel Zahler, Co-director Center for Arts and Technology Connecticut College, Box 5632 207 Mohegan Avenue New London, CT 06320-4196 email: nbzah at conncoll.bitnet (Authors should be notified of acceptance by November 15, and camera-ready copy will be due by January 15, 1993.) A copy of the contribution should also be sent to IAKTA president Otto Laske (laske at cs.bu.edu) and secretary Peter Todd (ptodd at spo.rowland.org). ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++  From n at predict.com Wed Aug 12 17:43:35 1992 From: n at predict.com (n (Norman Packard)) Date: Wed, 12 Aug 92 15:43:35 MDT Subject: Job Offer Message-ID: <9208122143.AA01835@arkady> Prediction Company Financial Forecasting August 12, 1992 Prediction Company is a small Santa Fe, NM based startup firm utilizing the latest nonlinear forecasting technologies for prediction and computerized trading of derivative financial instruments. The senior technical founders of the firm are Doyne Farmer and Norman Packard, who have worked for over ten years in the fields of chaos theory and nonlinear dynamics. The technical staff includes other senior researchers in the field. The company has the backing of a major technically based trading firm and their partner, a major European bank. There is currently one opening at the company for a senior computer scientist to provide leadership in that area for a staff of physical scientists and mathematicians with strong programming backgrounds. 
The job responsibilities include software design and implementation, support of deployed software systems for trading, management of a UNIX workstation network and infusion of computer science technologies into the firm. The successful applicant will be an experienced and talented C and C++ programmer with architectural skills, UNIX knowledge and an advanced degree in computer science or a related discipline. Experience in a production environment, in support of products or mission-critical in-house software systems (preferably in the financial industry) is required. Knowledge of and experience with top-down design methods, written specifications, formal test methods and source code control is highly desirable, as is familiarity with data base and wide-area-networking technologies. Applicants should send resumes to Prediction Company, 234 Griffin Street, Santa Fe, NM 87501 or to Laura Barela at laura%predict.com at santafe.edu.  From dtam at morticia.cnns.unt.edu Fri Aug 14 17:22:29 1992 From: dtam at morticia.cnns.unt.edu (dtam@morticia.cnns.unt.edu) Date: Fri, 14 Aug 1992 16:22:29 -0500 Subject: Call for Papers (Biological Neural Networks) Message-ID: <199208142122.AA16626@morticia.cnns.unt.edu> ========================================================= = Call for Papers = ========================================================= Progress in Neural Networks Special Volume: Biological Neural Networks This special issue aims at a review of the current research progress made in the understanding of biological neural systems and their relation to artificial neural networks. Computational and theoretical issues addressing signal processing capabilities and dynamics of biologically based systems will be covered. Development and plasticity of neuroanatomical architecture are emphasized. Authors are invited to submit original manuscripts describing recent progress in biological neural network research addressing computational and theoretical issues in neurobiology and signal processing in the central nervous system. Manuscripts should be self-contained. They may be tutorial or review in nature. Suggested topics include, but are not limited to: * Biophysics of neurons in a network * Biochemistry of synaptic transmission * Development of neuroanatomical circuitries * Receptive field and organization of dendrites * Synaptic plasticity and synaptic development * Signal encoding, decoding and transduction * Subthreshold vs spike code signal processing * Functional circuitry analysis * Neural population interactions and dynamics * Physiological functions of neuronal networks * Biological neuronal network models * Processing capabilities of biologically based systems Submit abstracts, extended summaries, or manuscripts to the volume editor directly. For more information please contact: Volume Editor Dr. David C. Tam Center for Network Neuroscience Department of Biological Sciences P. O.
Box 5218 University of North Texas Denton, TX 76203 Tel: (817) 565-3261 Fax: (817) 565-4136 E-mail: dtam at morticia.cnns.unt.edu Publisher: ABLEX PUB CORP 355 Chestnut St., Norwood, NJ 07648  From moody-john at CS.YALE.EDU Mon Aug 17 16:10:51 1992 From: moody-john at CS.YALE.EDU (john moody) Date: Mon, 17 Aug 92 16:10:51 EDT Subject: reprint available Message-ID: <199208172010.AA12522@TOPAZ.SYSTEMSX.CS.YALE.EDU> Fellow Connectionists: The following reprint has been placed on Jordan Pollack's neuroprose archive: LEARNING RATE SCHEDULES FOR FASTER STOCHASTIC GRADIENT SEARCH Christian Darken*, Joseph Chang+, and John Moody* Yale Departments of Computer Science* and Statistics+ ABSTRACT Stochastic gradient descent is a general algorithm that includes LMS, on-line backpropagation, and adaptive k-means clustering as special cases. The standard choices of the learning rate $\eta$ (both adaptive and fixed functions of time) often perform quite poorly. In contrast, our recently proposed class of ``search then converge'' ($STC$) learning rate schedules (Darken and Moody, 1990b, 1991) display the theoretically optimal asymptotic convergence rate and a superior ability to escape from poor local minima. However, the user is responsible for setting a key parameter. We propose here a new methodology for creating the first automatically adapting learning rates that achieve the optimal rate of convergence. To retrieve it via anonymous ftp, do the following: % ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password: your email address 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get darken.learning_rates.ps.Z 200 PORT command successful. 150 Opening ASCII mode data connection for darken.learning_rates.ps.Z (238939 bytes). 226 Transfer complete. local: darken.learning_rates.ps.Z remote: darken.learning_rates.ps.Z 239730 bytes received in 11 seconds (22 Kbytes/s) ftp> quit 221 Goodbye. % uncompress darken.learning_rates.ps % lpr -P printer_name darken.learning_rates.ps Enjoy, John Moody -------  From tgd at ICSI.Berkeley.EDU Wed Aug 19 13:44:02 1992 From: tgd at ICSI.Berkeley.EDU (Tom Dietterich) Date: Wed, 19 Aug 92 10:44:02 PDT Subject: A neat idea from L. Breiman Message-ID: <9208191744.AA09768@icsib22.ICSI.Berkeley.EDU> I recently read the following paper by Leo Breiman: Breiman, L. (1991) The $\Pi$ method for estimating multivariate functions from noisy data. {\it Technometrics, 33} (2), 125--160. With discussion. In this paper, Breiman presents a very neat technique called "back fitting" that is a very general algorithmic idea for improving greedy algorithms. Suppose we are executing a greedy algorithm for some task, and at any given point in the process, we have already made decisions d_1, d_2, ..., d_{k-1} and we are about to make decision d_k. In the standard greedy algorithm, we choose d_k to be the locally best decision and then go on to consider d_{k+1}. However, with backfitting, we first perform the following double loop:

  repeat until quiescence:
    for i from 1 to k-1 do
      "undo" decision d_i (holding all other decisions d_j, j<>i fixed)
      and re-make d_i to be the best decision (locally).

In other words, we first hold {d_2, ..., d_{k-1}} constant and see if we can improve things by re-making decision d_1.
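As a rough illustration of that double loop (this sketch is mine, not Breiman's; the decision list, the candidate generator, and the scoring function are generic placeholders, and higher scores are assumed to be better):

  # Illustrative sketch of backfitting wrapped around a greedy loop.
  # candidates(i, decisions) and score(decisions) are hypothetical,
  # problem-specific hooks supplied by the caller.

  def backfit(decisions, candidates, score):
      """Cycle over earlier decisions, re-making each one locally, until no change."""
      improved = True
      while improved:                          # repeat until quiescence
          improved = False
          for i in range(len(decisions)):      # for i from 1 to k-1
              best = decisions[i]
              best_score = score(decisions)
              for c in candidates(i, decisions):   # "undo" d_i and try alternatives
                  decisions[i] = c
                  s = score(decisions)
                  if s > best_score:
                      best, best_score = c, s
                      improved = True
              decisions[i] = best              # keep the locally best choice for d_i
      return decisions

  def greedy_with_backfitting(n_decisions, candidates, score):
      decisions = []
      for k in range(n_decisions):
          decisions = backfit(decisions, candidates, score)   # revisit d_1..d_{k-1}
          best_c = max(candidates(len(decisions), decisions),  # then make d_k greedily
                       key=lambda c: score(decisions + [c]))
          decisions.append(best_c)
      return decisions

The extra cost is exactly the point made below: the inner cycling multiplies the work of the plain greedy loop by roughly another factor of the number of decisions.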
Then we hold {d_1, d_3, ..., d_{k-1}} constant and consider re-making decision d_2, and so on. We cycle through the previous k-1 decisions making local improvements until no further improvements can be found. THEN, we make decision d_k (and repeat the process, of course). In general, this backfitting process will cost another factor of n in the algorithm (assuming there are n decisions to be made). In experiments in three different learning algorithms, Breiman has found that this method finds much better solutions than a simple greedy algorithm. Breiman (in various collaborations) is currently applying this idea to improving CART trees and neural networks. This idea would probably also find good application in COBWEB-style algorithms and greedy MDL algorithms. --Tom  From maass at figids01.tu-graz.ac.at Thu Aug 20 10:20:24 1992 From: maass at figids01.tu-graz.ac.at (maass@figids01.tu-graz.ac.at) Date: Thu, 20 Aug 92 16:20:24 +0200 Subject: No subject Message-ID: <9208201420.AA15840@figids03.tu-graz.ac.at> Re: Vapnik-Chervonenkis Dimension of Neural Nets According to [BEHW] the Vapnik-Chervonenkis dimension (VC-dimension) of a neural net is the key parameter for determining the number of samples that are needed for training the net. For a feedforward neural net C with heaviside gates that has e edges (or equivalently: e adjustable weights) it was shown by Baum and Haussler [BH] in 1988 that the VC-dimension of C is bounded above by O(e log e). It has remained an open problem whether the factor "log e" in this upper bound is in fact needed, or whether a general upper bound O(e) would also be valid.
This problem is solved by the following new result: THEOREM: One can construct for every natural number n a feedforward neural net C(n) of depth 4 with heaviside gates that has n input bits, one output bit, and not more than 33n edges, such that the VC-dimension of C(n) (or more precisely: of the class of boolean functions that can be computed by C(n) by choosing suitable integer weights from {-n,...,n}) is at least as large as n log n. COROLLARY: The general upper bound of O(e log e) for the VC-dimension of a neural net with e adjustable weights (due to Baum and Haussler [BH]) is asymptotically optimal. REMARKS: 1. The proof of the Theorem is immediate from a quite sophisticated circuit construction due to Lupanov (English translation due to Gyorgy Turan [T]). Lupanov had shown that every boolean function of n inputs can be computed by a threshold circuit of depth 4 with O(2**(n/2 - log n/2)) gates. 2. This new lower bound for the VC-dimension of neural nets will appear as a side-result in a paper "On the Theory of Neural Nets with Piecewise Polynomial Activation Functions", that should be available in October 1993. The main results of this paper are upper bounds for the VC-dimension and for the computational power of neural nets with arbitrary piecewise polynomial activation functions at the gates and arbitrary real weights. Preprints of this paper will be available from the author: Wolfgang Maass Institute for Theoretical Computer Science Technische Universitaet Graz Klosterwiesgasse 32/2, A-8010 Graz, Austria e-mail: maass at igi.tu-graz.ac.at Tel.: (0316)81-00-63-22 Fax: (0316)81-00-63-5 References: [BH] E.B. Baum, D. Haussler, What size net gives valid generalization?, Neural Computation 1, 1989, 151-160 [BEHW] A. Blumer, A. Ehrenfeucht, D. Haussler, M.K. Warmuth, Learnability and the Vapnik-Chervonenkis-dimension, Journal of the ACM, 36, 1989, 929-965 [T] G. Turan, unpublished manuscript (1989)  From terry at helmholtz.sdsc.edu Tue Aug 18 17:29:15 1992 From: terry at helmholtz.sdsc.edu (Terry Sejnowski) Date: Tue, 18 Aug 92 14:29:15 PDT Subject: Postdoc in San Diego Message-ID: <9208182129.AA05114@helmholtz.sdsc.edu> Opportunities for post doctoral research at the Naval Health Research Center, San Diego. Cognitive Performance and Psychophysiology Department __________________________________________________ Our laboratory is developing alertness and attention monitoring systems based on human psychophysiological measures (EEG, ERP, EOG, ECG), through ongoing research at basic and exploratory development levels. We have openings for post doctoral fellows in signal processing / neural network estimation and human cognitive psychophysiology. We are especially interested in the relation of oscillatory brain dynamics to attention and alertness. Our research is not classified. Please address inquiries to: Dr.
Scott Makeig Naval Health Research Center email: scott at cpl.nhrc.navy.mil PO Box 85122 fax: (619) 436-9389 San Diego, CA 92186-5122 phone: (619) 436-7155 Scott -----  From SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU Wed Aug 19 00:13:00 1992 From: SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU (SABBATINI%ccvax.unicamp.br@BITNET.CC.CMU.EDU) Date: Wed, 19 Aug 1992 00:13 GMT-0200 Subject: Neural networks research in Medicine - Abstracts Message-ID: <01GNQV5G2OOS8Y4ZIC@ccvax.unicamp.br> ARTIFICIAL NEURAL NETWORKS IN MEDICINE AND BIOLOGY Center for Biomedical Informatics State University of Campinas, Campinas - Brazil Abstracts of published work by the Center Status of Aug 15 1992 ----------------------------------------------------------- A HIGH-LEVEL LANGUAGE AND MICROCOMPUTER PROGRAM FOR THE DESCRIPTION AND SIMULATION OF NEURAL ARCHITECTURES Sabbatini, RME and Arruda-Botelho, AG Center of Biomedical Informatics, Neurosciences Applications Group, State University of Campinas, Campinas, SP, BRAZIL The description, representation and simulation of complex neural network structures by means of computers is an essential step in the investigation of model systems and inventions in the growing field of biological information processing and neurocomputing. The handcrafting of neural net architectures, however, is a long, tedious, difficult and error-prone process, which can be replaced satisfactorily by the neural network analogue of a computer program or formal symbolic language. Several attempts have been made to develop and apply such languages: P3, Hecht-Nielsen's AXON, and Rochester's ISCON are some recent examples. We present here a new tool for the formal description and simulation of artificial neural tissues in microcomputers. It is a network editor and simulator, called NEUROED, as well as a compiler for NEUROL, a high-level symbolic, structured language which allows the definition of the following elements of a neural tissue: a) elementary neural architectonic units: each unit has the same number of cells and the same internal interconnecting pattern and cell functional parameters; b) elementary cell types: each cell can be defined in terms of its basic functional parameters; synaptic interconnections inside an architectonic unit (axonic delay, weights and signal can be defined for each); a cell can fan out to several others, with the same synaptic properties; c) synaptic interconnections among units; d) cell types and architectonic units can be replicated automatically across neural tissue and interconnected; e) cell types and architectonic units can be named and arranged in hierarchical frames (parameter inheritance). NEUROED's underlying model processing element (PE) is a simplified Hodgkin-Huxley neuron, with RC-model, temporal summation, passive electrotonic potentials at dendritic level, and a step transfer function with threshold level, a fixed-size, fixed-duration, fixed-form spike, and an absolute refractory period. Input synapses Iij (i=1...NI) for the j-th neuron are weighted with Wij (i=1...NI), where Wij < 0 is defined for an inhibitory synapse, Wij = 0 for an inactive or non-existent synapse, and Wij > 0 for an excitatory synapse. Outputs Okj (k=1...NO) can have axonic propagation delays Dkj (a delay can be equal to zero). Firing of neurons in a network follows a diffusion process, according to propagation delays; random fluctuations in several processes can be simulated.
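As a rough illustration of the kind of processing element just described (this sketch is not NEUROED code; the leak, threshold, refractory and delay parameters are placeholders, and the leaky sum stands in for the RC-style temporal summation):

  # Illustrative threshold PE with leaky temporal summation, an absolute
  # refractory period, and an integer axonal output delay (placeholder values).
  from collections import deque

  class ProcessingElement:
      def __init__(self, weights, threshold=1.0, leak=0.9, refractory=2, out_delay=1):
          self.w = list(weights)        # Wij: < 0 inhibitory, = 0 inactive, > 0 excitatory
          self.threshold = threshold    # step transfer function threshold
          self.leak = leak              # RC-style decay of the membrane potential
          self.refractory = refractory  # absolute refractory period (time steps)
          self.potential = 0.0
          self.refr_left = 0
          self.pipeline = deque([0] * out_delay)  # stands in for the axonal delay Dkj

      def step(self, inputs):
          """Advance one time step; 'inputs' are the presynaptic spikes (0/1)."""
          spike = 0
          if self.refr_left > 0:
              self.refr_left -= 1
              self.potential = 0.0
          else:
              self.potential = self.leak * self.potential + sum(
                  w * x for w, x in zip(self.w, inputs))
              if self.potential >= self.threshold:   # fixed-form spike is emitted
                  spike = 1
                  self.potential = 0.0
                  self.refr_left = self.refractory
          self.pipeline.append(spike)
          return self.pipeline.popleft()             # spike emerges after the delay

  # e.g. a two-input PE with one excitatory and one inhibitory synapse:
  pe = ProcessingElement(weights=[0.8, -0.5], threshold=1.0)
  outputs = [pe.step([1, 0]) for _ in range(5)]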
Several learning algorithms can be implemented explicitly with NEUROL; a Hebbian synapse-strength reinforcement rule has specific language support now. NEUROED's basic specifications are: a) written in Turbo BASIC 1.0 for IBM-PC compatible machines, with CGA monochrome graphics display and optional numerical coprocessor; b) capacity of 100 neurons and 10,000 synapses; c) three neural tissue layers: input, processing and output; d) real-time simulation of neural tissue dynamics, with three display modes: oscilloscope mode (displays membrane potentials over time for several cells simultaneously); map mode (displays bidimensional architecture with individual cells, showing when they fire) and Hinton diagram (displays interconnecting matrix with individual synapses, showing when they fire); e) real-time, interactive modification of net parameters; and f) capability for building procedures, functions and model libraries, which reside as external disk files. NEUROED and NEUROL are easy to learn and to use, intuitive for neuroscientists, and lend themselves to modeling neural tissue dynamics for teaching purposes. We are currently developing a basic "library" of NEUROED models to teach basic neurophysiology to medical students. Implementations of NEUROED for parallel hardware are also under way. (Presented at the Fourth Annual Meeting of the Brazilian Federation of Biological Societies, Caxambu, MG, July 1991) ----------------------------------------------------------- A CASCADED NEURAL NETWORK MODEL FOR PROCESSING 2D TOMOGRAPHIC BRAIN IMAGES Dourado SC and Sabbatini RME Center for Biomedical Informatics, State University of Campinas, P.O. Box 6005, 13081 Campinas, São Paulo, Brazil. Artificial neural networks (ANN) have demonstrated many advantages and capabilities in applications involving the processing of biomedical images and signals. Particularly in the field of medical image processing, ANNs have been used in several ways, such as in image filtering, scatter correction, edge detection, segmentation, pattern and texture classification, image reconstruction and alignment, etc. The adaptive nature of ANNs (i.e., they are capable of learning) and the possibility of implementing their functions using truly massive parallel processors and neural integrated circuits in the future are strong arguments in favor of investigating new architectures, algorithms and applications for ANNs in Medicine. In the present work, we are interested in designing a prototype ANN which could be capable of processing serial sections of the brain, obtained from CT or MRI tomographs. The segmented, outlined images, representing internal brain structures, both normal and abnormal, would then be used as input to three-dimensional stereotaxic radiosurgery planning software. The ANN-based algorithm we have devised was initially implemented as a software simulation in a microcomputer (PC 80386, with VGA color graphics and an 80387 mathematical coprocessor). It is structured as a compound ANN, composed of three cascaded sub-networks. The first one receives the original digitized image, and is a one-layer, fully interconnected ANN, with one processing element (PE) per image pixel. The brain image is obtained from a General Electric CT system, with 256 x 256 pixels and 256 gray levels. The first ANN implements an MHF lateral inhibition function, based on a convolution filter of variable dimension (3 x 3 up to 9 x 9 PE's), and it is used to iteratively enhance borders in the image. The PE interconnection (i.e.
convolution) function can be defined by the user as a disk file containing a set of synaptic weights, which is read by the program, thus allowing for experimentation with different sets of coefficients and sizes of the convolution window. In this layer, PE's have synaptic weights varying from -1 to 1, and the step function as their transfer function. Usually after 2 to 3 iterations, the borders are completely formed and do not vary any more, but are too thick (i.e., the trace width spans several pixels). In order to thin out the borders, the output of the MHF ANN layer is subsequently fed into a three-layer perceptron, which was trained off-line using the backpropagation algorithm to perform thinning on smaller straight line segments. Finally, the thinned-out image obtained pixel-wise at this ANN's output is fed into a third network, also a three-layer perceptron trained off-line using the backpropagation algorithm to complete small gaps occurring in the image contours. The final image, also 256 x 256 pixels with 2 levels of gray, is passed to the 3D slice reconstruction program, implemented with conventional, sequential algorithms. A fourth ANN perceptron, previously trained by back-propagation to recognize the gray histogram signature of small groups of pixels in the original image (such as bone, liquor, gray and white matter, blood, dense tumor areas, etc.), is used to false-color the entire image according to the classified thematic regions. The cascaded, multilayer ANN thus implemented performs very well in the overall task of obtaining automatically outlined and segmented brain slices, for the purposes of 3D reconstruction and surgical planning. Due to the complexity of the algorithms and to the size of the image, the time spent by the computer we use is inordinately large, preventing a practical application. We are now studying the implementation of this ANN paradigm in RISC-based and vector-processing CPUs, as well as the potential applications of neurochip prototyping kits already available in the market. (Presented at the I Latinoamerican Congress on Health Informatics, Habana, Cuba, February 1992) -------------------------------------------------------- COMPUTER SIMULATION OF A QUANTITATIVE MODEL FOR REFLEX EPILEPSY R.M.E. Sabbatini Center of Biomedical Informatics and School of Medicine of the State University of Campinas, Brazil. In the present study we propose a continuous, lumped-parameter, non-linear mathematical model for explaining the quantitatively observed characteristics of a class of experimental reflex epilepsy, namely audiogenic seizures in rodents, and simulate this model with a specially devised microcomputer program. In the first phase of the study, we individually stimulated 280 adult Wistar albino rats with a 112 dB white-noise sound source, and recorded the latency, duration and intensity values of the psychomotor components of the audiogenic reaction: after an initial delay, one or more circular running phases usually occur, followed or not by complete tonic-clonic seizures. In the second step, we performed several multivariate statistical analyses of these data, which have revealed many properties of the underlying neural system responsible for the crisis, such as the independence of the running and convulsive phases, and a scale of severity which is correlated with the values of the latencies and intensities.
Finally, a lumped-parameter model based on a set of differential equations, which describes the macro behavior of the interaction of four different populations of excitatory and inhibitory neurons with different time constants and threshold elements, has been simulated in a computer. In this model, running waves, which may occur several times before leading or not to the final convulsive phase, are explained by the oscillatory behavior of a controlling neural population, caused by mixed feedback: an early, internal positive feedback which results in the growth of excitation, and a late negative feedback elicited by motor components of the running itself, which causes the oscillation back to inhibition. A second, threshold-triggered population controls the convulsive phase and its subsequent refractory phase. The results of the simulation have been found to explain reasonably well the time course and structural characteristics of the several forms of rodent audiogenic epilepsy and correlate well with the existing knowledge about the neural bases of this phenomenon. (Presented at the Second IBRO/IMIA International Symposium on Mathematical Approaches to Brain Functioning Diagnostics, Prague, Czechoslovakia, September 1990). -------------------------------------------------------- OUTCOME PREDICTION FOR CRITICAL PATIENTS UNDER INTENSIVE CARE, USING BACKPROPAGATION NEURAL NETWORKS P. Felipe Jr., R.M.E. Sabbatini, P.M. Carvalho-Junior, R.E. Beseggio, and R.G.G. Terzi Center for Biomedical Informatics, State University of Campinas, Campinas SP 13081-970 Brazil Several scores have been designed to estimate death probability for patients admitted to Intensive Care Units, such as the APACHE and MPM systems, which are based on regression analysis. In the present work, we have studied the potential of a model of artificial neural network, the three-layer perceptron with the backpropagation learning rule, to perform this task. Training and testing data were derived from a Brazilian database which was previously used for calculating APACHE scores. The neural networks were trained with physiological, clinical and pathological data (30 variables, such as worst pCO2, coma level, arterial pressure, etc.) based on a sample of more than 300 patients, whose outcome was known. All networks were able to reach convergence with a small global prediction error. Maximum percentages of 75% correct predictions in the test dataset and 99.6% in the training dataset were achieved. Maximum sensitivity and specificity were 60% and 80%, respectively. We conclude that the neural network approach has worked well for outcome prognosis in a highly "noisy" dataset, with a similar, if slightly lower, performance than APACHE II, but with the advantage of deriving its parameters from a regional dataset instead of from a universal model. The paper will be presented at the MEDINFO'92 workshop on "Applications of Connectionist Systems in Biomedicine", September 8, 1992, in Geneva, Switzerland. ============================================================== Reprints/Preprints are available Renato M.E. Sabbatini, PhD Center for Biomedical Informatics State University of Campinas SABBATINI at CCVAX.UNICAMP.BR SABBATINI at BRUC.BITNET  From kevin at synapse.cs.byu.edu Fri Aug 21 13:54:32 1992 From: kevin at synapse.cs.byu.edu (Kevin Vanhorn) Date: Fri, 21 Aug 92 11:54:32 -0600 Subject: A neat idea from L.
Breiman Message-ID: <9208211754.AA07435@synapse.cs.byu.edu> The "back-fitting" algorithm is just an instance of local search [1], a well-known heuristic technique for combinatorial optimization. By the way, it was mentioned that Breiman was applying back-fitting to improving CART trees. Has he published anything on this yet? There was no mention of it in the article Tom Dietterich cited. It's not clear to me how you would usefully apply back-fitting or any other kind of local search to improving a decision tree, as the choice of how to split a node is highly dependent on how its ancestral nodes were split. [1] C. H. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Prentice-Hall, Inc., 1982. (See Chap. 19) ----------------------------------------------------------------------------- Kevin S. Van Horn | It is the means that determine the ends. vanhorn at synapse.cs.byu.edu |  From SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU Fri Aug 21 09:21:00 1992 From: SABBATINI%ccvax.unicamp.br at BITNET.CC.CMU.EDU (SABBATINI%ccvax.unicamp.br@BITNET.CC.CMU.EDU) Date: Fri, 21 Aug 1992 09:21 GMT-0200 Subject: Survey on neural networks in Medicine and Biology Message-ID: <01GNU6VHHU448Y50NR@ccvax.unicamp.br> The Scientific Program on NEURAL NETWORKS IN MEDICINE in the Seventh World Congress of Medical Informatics (MEDINFO 92) - Palexpo Geneva, 6-10 September 1992 The explosive growth of artificial neural network systems (also called connectionist or neuromorphic systems) in recent years has followed several convincing demonstrations of their usefulness in decision-making tasks, particularly in the classification and recognition of patterns. Not surprisingly, the medical and biological applications of such systems have followed suit and are growing at a relentless pace. Connectionist systems have been used successfully in computer-aided medical diagnosis, intelligent biomedical instrumentation, classification of images of cells and tissues, recognition of abnormal EKG and EEG features, development of prosthetic devices, prediction of protein structure, etc. Thus, this is an important area which will be increasingly considered as an alternative for the implementation of intelligent systems in all areas of Medicine and Biology. This year's scientific program will include several activities and papers on the subject of neural network applications. They are summarized below for the benefit of people interested in this area. POSTER SESSION Z-1.1, Sept. 7, 11:00-13:00 Diagnosis of children's ketonemia by neural network Artificial Intelligence. A. Garliausakas, A. Garliauskiene (Lithuania) Diagnosing functional disorders of the cervical spine using backpropagation networks. Preliminary results. W. Schoner, M. Berger, G. Holzmueller, A. Neiss, H. Ulmer (Austria) POSTER SESSION Z-1.1, Sept. 7, 14:00-16:00 The use of neural network in decision making of nursing diagnosis Y. Chen, X. Cai, L. Chen, R. Guo (China) SEMI-PLENARY SESSION 2-04 (Room PS-11) - Sept. 8, 9:30-10:30 Neural Networks and Their Applications in Biomedicine Applications of connectionist systems in Biomedicine Renato M.E. Sabbatini (Brazil) A neural network approach to assess myocardial infarction A. Palazos, V. Maojo, F. Martin, N. Ezquerra (Spain) DEMONSTRATION 3-09 (Room S81) - Sept. 9, 16:00-17:30 HYPERNET - A decision support system for arterial hypertension M. Ramos, M.T. Haashi, E. Czogala, O. Marson, H. Aszen, O. Kohlmann Jr., M.S. Ancao, D. Sigulem (Brazil) SESSION 9-01 (Room S-68) - Sept.
10, 10:30-11:00 Neural networks for classification of EEG Signals D.C. Reddy, D.R. Korrai (India) WORKSHOP 3-08 (Room W25) - Sept. 8, 18:00-22:00 Applications of connectionist systems in Biomedicine Chair: Renato M.E. Sabbatini (Campinas State University, Brazil) The workshop has the objective of reviewing and discussing the value, aims, classes and results of artificial neural network systems (ANS) applications in the biomedical area. Specific techniques and results will be demonstrated (in many instances using actual computers) in several important domains, such as (i) ANS simulation environments, languages and hardware specifically designed for biomedical applications, (ii) signal and image processing tasks; (iii) development of computerised decision-making systems in diagnosis, therapy, etc.; (iv) integration with other Artificial Intelligence approaches; (v) practical aspects in the evaluation, development and implementation of ANS systems in Biomedicine; (vi) current research and perspectives of advancement. The Workshop will be conducted by several renowned specialists in the growing field of ANS applications in Biomedicine. In addition, the participants will have the opportunity to try some hands-on demonstrations on interesting software products in this area. All participants will receive an information package containing a list of publications on the subject (papers, review, books, technical reports, government studies, etc.), with abstracts; available hardware and software resources (neural network simulation environments, neurochips and neuroboards, specific medical NN application software, biomedical instruments embedding NNs, etc., either commercial or non-commercial); a list of research groups, laboratories and individuals involved in the area of ANS applications in Biological and Health Sciences; with addresses and research areas. INVITATION I would like to invite all persons active in this area of research and development, to contribute with discussions, short presentations and software demonstrations, to the workshop. Those who are interested in participating, please send name, mail and e-mail/fax address to me, together with a short proposal about his/her potential intervention at the Workshop. Renato M.E. Sabbatini ***************************************************************************** * Renato M.E. Sabbatini, PhD * INTERNET: SABBATINI at CCVAX.UNICAMP.BR * * Director * BITNET : SABBATINI at BRUC.BITNET * * Center for Biomedical Informatics * Tel.: +55 192 39-7130 (office) * * State University of Campinas * 39-4168 (home) * * * Fax.: +55 192 39-4717 (office) * * P.O. Box 6005 * Telex: +55 19 1150 * * Campinas, SP 13081 - BRAZIL * * *****************************************************************************  From maass at figids01.tu-graz.ac.at Sat Aug 22 12:41:43 1992 From: maass at figids01.tu-graz.ac.at (maass@figids01.tu-graz.ac.at) Date: Sat, 22 Aug 92 18:41:43 +0200 Subject: Correction of date of availability for preprints Message-ID: <9208221641.AA18659@figids03.tu-graz.ac.at> Two days ago I posted a new result on the Vapnik-Chervonenkis Dimension of Neural Nets. The paper in which this result appears will be available in October 1992 (NOT 1993, as incorrectly stated in the posted note). 
Wolfgang Maass maass at igi.tu-graz.ac.at  From lyn at dcs.ex.ac.uk Mon Aug 24 12:51:51 1992 From: lyn at dcs.ex.ac.uk (Lyn Shackleton) Date: Mon, 24 Aug 92 12:51:51 BST Subject: First Swedish National Conference on Connectionism Message-ID: <409.9208241151@castor.dcs.exeter.ac.uk> The Connectionist Research Group University of Skovde The First Swedish National Conference on Connectionism Wednesday 9th and Thursday 10th Sept. 1992, Skovde, Sweden at Billingehus Hotel and Conference Centre INVITED SPEAKERS James M. Bower, California Inst. of Technology, USA "The neuropharmacology of associative memory function: an in vitro, in vivo, and in computo study of object recognition in olfactory cortex." Ronald L. Chrisley, University of Sussex, UK "Connectionist Cognitive Maps and the Development of Objectivity." Garrison W. Cottrell, University of California, San Diego, USA "Dynamic Rate Adaptation." Jerome A. Feldman, ICSI, Berkeley, USA "Structure and Change in Connectionist Models" Dan Hammerstrom, Adaptive Solutions, Inc., USA "Neurocomputing Hardware: Present and Future." James A. Hendler, University of Maryland, USA "SCRuFFy: An applications-oriented hybrid connectionist/symbolic shell." Ajit Narayanan, University of Exeter, UK "On Nativist Connectionism." Jordan B. Pollack, Ohio State University, USA "Explaining Cognition with Nonlinear Dynamics." David E. Rumelhart, Stanford University, USA "From Theory to Practice: A Case Study" Noel E. Sharkey, University of Exeter, UK "Semantic and Syntactic Decompositions of Fully Distributed Representations" Tim van Gelder, Indiana University, USA "Connectionism and the Mind-Body Problem: Exposing the Rift between Mind and Cognition." PROGRAMME Secretariat: SNCC-92 Attn: Ulrica Carlbom University of Skovde P.O. Box 408 S-541 28 Skovde, SWEDEN Phone +46 (0)500-77600, Fax +46 (0)500-16325 conference at his.se Conference organisers Lars Niklasson (University of Skovde) lars at his.se Mikael Boden (University of Skovde) mikael at his.se Program committee Anders Lansner (Royal Institute of Technology, Sweden) Noel E. Sharkey (University of Exeter, UK) Ajit Narayanan (University of Exeter, UK) Conference sponsors University of Skovde The County of Skaraborg (Lansstyrelsen, Skaraborgs Lan) Conference patrons Lars-Erik Johansson, Vice-chancellor University of Skovde Stig Emanuelsson, Head of Comp. Sci. Dept., Univ. of Skovde The Swedish Neural Network Society (SNNS) will hold an official members meeting at the conference. The Sessions Wednesday 9th Session 1: Opening / Invited Papers (Room 1) Chair: Lars Niklasson (SNCC-92 organiser) 08.30 Opening 09.00 Connectionism and the Mind-Body Problem: Exposing the Rift between Mind and Cognition Tim van Gelder, Indiana University, USA 09.50 Explaining Cognition with Nonlinear Dynamics Jordan B. Pollack, Ohio State University, USA 10.40 Coffee Break Session 2: Invited Paper (Room 1) Chair: Tim van Gelder (Indiana University, USA) 11.10 - 12.00 Semantic and Syntactic Decompositions of Fully Distributed Representations Noel E.
Sharkey, University of Exeter, UK Session 3a: Philosophical presentations (Room 1) Chair: Tim van Gelder (Indiana University, USA) 12.05 Subsymbolic Connectionism: Representational Vehicles and Contents Tere Vaden, University of Tampere, Finland 12.30 First Connectionist Model of Nonmonotonic Reasoning: Handling Exceptions in Inheritance Hierarchies Mikael Boden, University of Skovde and Ajit Narayanan, University of Exeter, UK 12.55 Lunch Break Session 3b: Theoretical presentations (Room 2) Chair: Jordan B. Pollack (Ohio State University, USA) 12.05 Neural Networks for Unsupervised Linear Feature Extraction Reiner Lenz and Mats Osterberg, Linkoping University 12.30 Feed-forward Neural Networks in Limiting Cases of Infinite Nodes Abhay Bulsari and Henrik Saxen, Abo Akademi, Finland 12.55 Lunch Break Session 4: Invited Papers (Room 1) Chair: Jerome A. Feldman (ICSI, Berkeley, USA) 14.00 SCRuFFy: An Applications-oriented Hybrid Connectionist/Symbolic Shell James A. Hendler, University of Maryland, USA 14.50 Neurocomputing Hardware: Present and Future Dan Hammerstrom, Adaptive Solutions, Inc., USA 15.40 Coffee Break Session 5a: Philosophical presentations (Room 1) Chair: Noel E. Sharkey (University of Exeter, UK) 16.10 Connectionism - The Miracle Mind Model Lars Niklasson, University of Skovde and Noel E. Sharkey, University of Exeter, UK 16.35 Some Properties of Neural Representations Christian Balkeniun, Lund University 17.00 Behaviors, Motivations, and Perceptions In Artificial Creatures Per Hammarlund and Anders Lansner, Royal Institute of Technology, Stockholm 17.25 Break Session 5b: Hardware-oriented presentations (Room 2) Chair: Dan Hammerstrom (Adaptive Solutions Inc., USA) 16.10 Pulse Coded Neural Networks for Hardware Implementation Lars Asplund, Olle Gallmo, Ernst Nordstrom, and Mats Gustafsson Uppsala University 16.35 Towards Modular, Massively Parallel Neural Computers Bertil Svensson, Chalmers University of Technology, Goteborg and Centre for Computer Science, Halmstad University, Tomas Nordstrom, Lulea University of Technology, Kenneth Nilsson and Per-Arne Wiberg, Halmstad University 17.00 The Grid - An Experiment in Neurocomputer Architecture Olle Gallmo and Lars Asplund, Uppsala University 17.25 Break Session 6a: Theoretical presentations (Room 1) Chair: Garrison W. Cottrell (University of California, USA) 17.40 A Neural System as a Model for Image Reconstruction Mats Bengtsson, Swedish Defence Research Establishment, Linkoping 18.05 Internal Representation Models in Feedforward Artificial Neural Networks Hans G. C. Traven, Royal Institute of Technology, Stockholm 18.30 - 18.55 A Connectionist Model for Fuzzy Logic, Abhay Bulsari and Henrik Saxen, Abo Akademi, Finland Session 6b: Application-oriented presentations (Room 2) Chair: James A. Hendler (University of Maryland, USA) 17.40 A Robust Query-Reply System Based on a Bayesian Neural Network Anders Holst and Anders Lansner, Royal Institute of Technology, Stockholm 18.05 Neural Networks for Admission Control in an ATM Network Ernst Nordstrom, Olle Gallmo, Lars Asplund, Uppsala University 18.25 - 18.55 Adaptive Generalisation in Connectionist Nets Amanda Sharkey and Noel E Sharkey, University of Exeter, UK Swedish Neural Network Society (SNNS) (Room 1) 19.00-19.45 Members meeting Thursday 10th Session 7: Invited Papers (Room 1) Chair: Ronald L. Chrisley (University of Sussex, UK) 09.00 Structure and Change in Connectionist Models Jerome A. 
Feldman, ICSI, Berkeley, USA 09.50 On Nativist Connectionism Ajit Narayanan, University of Exeter, UK 10.40 Coffee Break Session 8: Invited Paper (Room 1) Chair: Anders Lansner (Royal Institute of Technology, Stockholm) 11.10 - 12.00 The Neuropharmacology of Associative Memory Function: an in Vitro, in Vivo, and in Computo Study of Object Recognition in Olfactory Cortex James M. Bower, California Institute of Technology, USA Session 9a: Neurobiological presentations (Room 1) Chair: James M. Bower (CalTech) 12.05 A Model of Cortical Associative Memory Based on Hebbian Cell Assemblies Erik Fransen, Anders Lansner and Hans Liljenstrom, Royal Institute of Technology, Stockholm 12.30 Cognition, Neurodynamics and Computer Models Hans Liljenstrom, Royal Institute of Technology, Stockholm 12.55 Lunch Break Session 9b: Application-oriented presentations (Room 2) Chair: David E. Rumelhart (Stanford University) 12.05 Experiments with Artificial Neural Networks for Phoneme and Word Recognition Kjell Elenius, Royal Institute of Technology, Stockholm 12.30 Recognition of Isolated Spoken Swedish Words - An Approach Based on a Self-organizing Feature Map Tomas Rahkkonen, Telia Research AB, Systems Research Spoken Language Processing, Haninge 12.55 Lunch Break Session 10: Invited Papers (Room 1) Chair: Ajit Narayanan (University of Exeter, UK) 14.00 Connectionist Cognitive Maps and the Development of Objectivity Ronald L. Chrisley, University of Sussex, UK 14.50 Dynamic Rate Adaptation Garrison W. Cottrell, University of California, San Diego, USA 15.40 Coffee Break Session 11: Invited Paper (Room 1) Chair: Mikael Boden (SNCC-92 organiser) 16.10 From Theory to Practice: A Case Study David E. Rumelhart, Stanford University, USA 17.00 Closing

Registration form Fees include admission to all conference sessions and a copy of the Advance Proceedings. Hotel reservation is made at Billingehus Hotel and Conference Centre. The rooms are available from Tuesday (8th) evening to noon Thursday (10th), for the two-day alternative, and from Wednesday evening (9th) to noon Thursday (10th), for the one-day alternative. All activities will be held at the conference centre. A block of rooms has been reserved until 10th Aug. After this date, room reservations will be accepted on a space available basis. To register, complete and return the form below to the secretariat. Registration is valid when payment is received. Payment should be made to postal giro 78 81 40 - 2, payable to SNCC-92, Hogskolan i Skovde. Cancellation of hotel reservation can be made until 15/8 (the conference fee, 600 SEK, is not refundable).

-------------------------- cut -----------------------------
Name: (Mr/Ms) ____________________________________________
Company: ____________________________________________
Address: ____________________________________________
City/Country: ____________________________________________
Phone: ____________________________________________

If the double room alternative has been chosen, please give the details for the second person.
Name: (Mr/Ms) ____________________________________________
Company: ____________________________________________
Country: ____________________________________________
Alternatives (please circle chosen fee)                         Until 9th Aug   After 9th Aug
Conference fee only (incl. coffee and lunch)                     1000 SEK        1000 SEK
Conference fee + full board and single room lodging 8/9 - 10/9   2400 SEK        3500 SEK
Conference fee + full board and double room lodging 8/9 - 10/9   2 x 2000 SEK    2 x 2600 SEK
Conference fee + full board and single room lodging 9/9 - 10/9   1700 SEK        2400 SEK
Conference fee + full board and double room lodging 9/9 - 10/9   2 x 1500 SEK    2 x 2000 SEK

Indicate if vegetarian meals are preferred: _____ person(s)

From mpadgett at eng.auburn.edu Mon Aug 24 10:52:54 1992 From: mpadgett at eng.auburn.edu (Mary Lou Padgett) Date: Mon, 24 Aug 92 09:52:54 CDT Subject: IEEE-NNC Standards Message-ID: <9208241452.AA24224@eng.auburn.edu>

IEEE-NNC Standards Committee Report It is the purpose of this column to update you on this activity and to invite you to participate in forthcoming meetings. At its June meeting the IEEE Standards Board formally approved the Project Authorization Requests (PAR's) submitted by the Working Group on Glossary and Symbols and by the Working Group on Performance Evaluation, so those two groups now have their "marching orders." The NNC Standards Committee had a series of fruitful meetings in conjunction with the Baltimore IJCNN. Progress made by the various working groups is detailed below.

FUTURE EVENTS
* IJCNN Beijing, Nov. 1-6, 1992 A panel discussion and/or workshop will be conducted by Mary Lou Padgett early in the meeting. The formation of an international glossary and symbology for artificial neural networks will be discussed.
* SimTec/WNN92 Houston, Nov. 4-7, 1992 There will be a Standards Committee Meeting on Friday, Nov. 6, in conjunction with this conference. Paper competition awards will be announced. Dr. Robert Shelton of NASA/JSC is conducting the Performance Measure Methodology Contest and Prof. E. Tzanakou of Rutgers is conducting the Paradigm Comparison Student Paper Contest.
* IEEE-ICNN and IEEE-FUZZ 1993 San Francisco, March 28 - April 1, 1993 A come-and-go meeting of everyone interested in standards will be held on Sunday, March 27 and individual working group meetings will take place on Monday and Tuesday evenings, March 28 and 29.
* Proposed New Activity It has been proposed to form a working group to draft a glossary for fuzzy systems. An initial meeting to that end will take place in San Francisco on March 27 and 28, in conjunction with the conference. Please contact either of the undersigned if interested in participating.

Over 400 people and companies are on the interest list for standards. If you would like to be included, please contact Mary Lou Padgett.

WORKING GROUP REPORTS:

WORKING GROUP ON GLOSSARY AND SYMBOLS Chair: Mary Lou Padgett, Auburn University

The Working Group on Glossary and Symbols submitted the following PAR, which has been approved by IEEE as a formal project for the group. A voting group will be constructed in the near future.
Project Title: Recommended Definition of Terms for Artificial Neural Networks
Scope: Terminology used to describe and discuss artificial neural networks including hardware, software and algorithms related to artificial neural networks.
Purpose: The subject of artificial neural networks is treated in a wide variety of textbooks, technical papers, manuals and other publications. At the present time, there is no widely accepted guide or standard for the use of technical terms relating to artificial neural networks. It is the purpose of this project to provide a comprehensive glossary of recommended terms to be used by the authors of future publications in this field.
Status Report: The glossary being developed should be usable by everyone interested in neural networks, so a simple basic structure is desirable. The draft glossary proposed by Russell Eberhart meets this requirement, with some modifications. To help ensure that the finished product is usable and still specific enough to help in specialized areas, Glossary Special Interest Group Chairs have been appointed. The exact scope of their groups will be discussed in San Francisco. Eventually, representation from all major neural network thrusts and geographic areas should be included. People from academia, industry and government in all areas should be represented. The first Glossary SIG Chairs are: Patrick A. Shoemaker, NOSC; Dale E. Nelson, WPAFB; and Emile Fiesler, IDIAP. The glossary will be structured in a modular form, with basic elements coming first, followed by more specialized subsets. Your input is respectfully requested!

WORKING GROUP ON PERFORMANCE METHODOLOGY Chair: Robert Shelton, NASA/JSC

The Working Group on Performance Methodology met at the Baltimore IJCNN to discuss their newly approved project and formulate an agenda.
Project Title: Guidelines for the Evaluation of the Speed and Accuracy of Implementations of Feed-Forward Artificial Neural Networks.
Scope: Artificial neural network implementations which implement supervised learning through minimization of an error function based on the sum of the squares of residual errors.
Purpose: Since 1986, a large number of implementations of the feed-forward back-error propagation neural network algorithms have been described with widely varying claims of speed and accuracy. At present, buyers and users of software and/or hardware for the purpose of executing such algorithms have no common set of benchmarks to facilitate the verification of vendor claims. The working group proposes to fulfill this need by assembling a suite of test cases.
Agenda: Forward Propagation Only The following will comprise a forward propagation system to which the standard will apply. Such a system will be a 3-layer (input, hidden, output), fully connected (sequentially, i.e. input to hidden to output), feed-forward neural network. Cases of varying sizes will be proposed. In addition, for each size, there will be at least one "problem" of the following two types. A. Discrete output B. Continuous output. A "problem" will consist of a set of I/O pairs which the system will be required to reproduce. Sequential, portable (e.g. C language) computer code will be distributed which emulates the desired network, including nominal weights and customary sigmoidal transfer functions. The user of the standard may make use of the distributed code and weight values as he or she sees fit. The determination of weights is deemed to be a "learning" problem and not within the scope of the part of the standard described here. Parity problems were proposed as hard cases for the discrete output test. Such problems are sufficiently well understood that weights could be provided without recourse to the use of learning algorithms. Character identification was suggested as a second, easier kind of discrete output problem. The task of providing good test problems for the case of continuous output was agreed to be significantly more complex. It was suggested that mathematical combinations of algebraic and transcendental functions could serve as the basic model, but it was agreed that the determination of the candidate problems for continuous output would require considerable additional effort.
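As an illustration only -- not the working group's distributed reference code, which is to be sequential, portable C -- the following minimal Python sketch shows the kind of forward-propagation system the agenda above describes: a 3-layer (input, hidden, output), fully connected, feed-forward network with sigmoidal transfer functions and no learning. The layer sizes, weights and the 4-input example pattern are placeholder assumptions.

------------------------------------------
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def forward(x, w_ih, b_h, w_ho, b_o):
    # One forward pass: input -> hidden -> output, sigmoidal units throughout.
    h = sigmoid(w_ih @ x + b_h)
    return sigmoid(w_ho @ h + b_o)

# Placeholder weights; the standard would supply nominal weights (e.g.
# hand-constructed weights for a parity problem) so that no learning is needed.
rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 8, 1
w_ih = rng.normal(size=(n_hid, n_in)); b_h = rng.normal(size=n_hid)
w_ho = rng.normal(size=(n_out, n_hid)); b_o = rng.normal(size=n_out)

x = np.array([1.0, 0.0, 1.0, 1.0])       # one input pattern of a 4-bit problem
print(forward(x, w_ih, b_h, w_ho, b_o))  # network output in (0, 1)
------------------------------------------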
Robert Shelton PT4, NASA/JSC Houston, TX 77058 P: (713) 483-8110 shelton at gothamcity.jsc.nasa.gov

WORKING GROUP ON SOFTWARE AND HARDWARE INTERFACES Chair: Steven Deiss, Applied Neurodynamics

The NNC Working Group on Software and Hardware Interfaces met at the Baltimore IJCNN. The group was evenly divided by interest into an ad hoc Working Subgroup on Software Interface Standards and an ad hoc Working Subgroup on Hardware Interface Standards. The overall working group persists as an umbrella to integrate current efforts and promote new interface standards activities. Future meetings are expected to discuss PAR submission along with the technical issues.

The Software Group got off to a fast start in Baltimore and several meetings were held there. Ten ANN vendors and 15 labs and companies expressed interest in the task of formalizing selected data format standards which would be used to store ANN training sets. Many vendors have translation tools for importing data to their own environments, but many research users find it difficult to share data because of the use of unique data formats and paradigm code written early on to accept their nonstandard formats. The group reached consensus that a simple standard training data format is needed, several were discussed, and it was felt that the task was manageable. For further information concerning this project contact: Dr. Harold K. Brown Florida Institute of Technology Dept. of Electrical and Computer Engineering Melbourne, FL 32901-6988 Phone: 407-768-8000 x 7556 Fax: 407-984-8461 Email: hkb at ee.fit.edu

The Hardware Group discussed related work on hardware standards that was carried out under the IEEE Computer Society Microprocessor Standards Committee and tried to focus on goals for the current group. In 1989, a Study Group was formed under the auspices of the MSC to evaluate Futurebus+ (896) and Scalable Coherent Interface (1596) for applicability to NN applications. The group recommended a hybrid approach while recognizing the longer-range potential of an NN-specific interface and interconnect standard. The present group chose to focus on 'guidelines' for utilization of existing standards for NN applications. It was the consensus that the NN community may not yet be ready for a real NN hardware interface standard, since this is such an active area of research; however, work toward the evolution of such a standard would appear to be timely. For further information about this project or about other areas where interface standards might be appropriate contact: Stephen R. Deiss Applied Neurodynamics 2049 Village Park Wy, #248 Encinitas, CA 92024-5418 Phone: 619-944-8859 Fax: 619-944-8880 Email: deiss at cerf.net

Thank you for your support of the IEEE-NNC Neural Networks Standards Committee. Please continue to interact with all of the working groups to help us grow in positive directions, and provide service to the entire community. SEE YOU IN SAN FRANCISCO, if not before!

Sincerely,
Professor Walter J. Karplus              Mary Lou Padgett
Chair                                    Vice Chair
IEEE-NNC Standards Committee             IEEE-NNC Standards Committee
UCLA, CS Dept.                           Auburn University, EE Dept.
3723 Boelter Hall                        1165 Owens Road
Los Angeles, CA 90024                    Auburn, AL 36830
P: (310) 825-2929                        P: (205) 821-2472 or 3488
email: karplus at CS.UCLA.EDU            email: mpadgett at eng.auburn.edu

From bryan at ad.cise.nsf.gov Mon Aug 24 10:57:37 1992 From: bryan at ad.cise.nsf.gov (Bryan Thompson) Date: Mon, 24 Aug 92 10:57:37 GMT-0400 Subject: A neat idea from L. Breiman Message-ID: <9208241457.AA01221@ad.cise.nsf.gov.cise.nsf.gov>

This sounds very much like an attempt to approximate the results of dynamic programming. In dynamic programming each state (decision point) or state-action pair is associated with the cumulative expected future utility of that action. Decisions are then made locally by selecting the option that has the highest cumulative expected future utility. Standard dynamic programming works backwards from a terminus, but there are approximations to dynamic programming that operate in forward time (e.g. heuristic dynamic programming or Q-learning) and can be applied within neural networks. In practice these approximations often develop non-optimal decision policies, and a balance must be struck between performance (when you always choose the action that is perceived to be optimal) and exploration (when you choose actions that are believed _not_ to be optimal in order to efficiently explore the decision space, avoid local minima in the decision policy, detect a changing environment, etc.).

Bryan Thompson National Science Foundation

From fellous%hyla.usc.edu at usc.edu Sun Aug 23 15:43:12 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Sun, 23 Aug 92 12:43:12 PDT Subject: models of LTP and LTD Message-ID: <9208231943.AA02074@hyla.usc.edu>

Dear Connectionists, A few weeks ago I posted a request for complementary references on models of LTP and LTD. I would like to thank all the people who kindly replied; their emails are reproduced at the end of this message. Yours, Jean-Marc

>>>>>>>>>>>> From: "L. Detweiler" -- Please let me know what you find out. I can't give you any LTP references per se, but there is a lot of work going on in vision research with Hebbian rules. I have plenty of these references; let me know if you are interested. ld231782 at longs.LANCE.ColoState.EDU

From rsantiag at nsf.gov Tue Aug 11 13:59:00 1992 From: rsantiag at nsf.gov (rsantiag@nsf.gov) Date: 11 Aug 92 12:59 EST Subject: (none) Message-ID: I am interested in finding out what work you are doing with LTP. I am currently working with systems incorporating Backprop, CMAC, Spline, and Critic models. I have been looking into modeling LTP for the last couple of months. I think that the mechanisms of LTP will help me in incorporating memories in a more efficient manner in my system. It might also provide more inspiration for reorganizing my architecture. After seeing your posting on the connectionists list I have revived my interest in this area. I would like to collaborate with you, if that is possible, in incorporating LTP models into current ANN systems. Robero A. Santiago National Science Foundation

From jan at pallas.neuroinformatik.ruhr-uni-bochum.de Wed Aug 12 15:28:55 1992 From: jan at pallas.neuroinformatik.ruhr-uni-bochum.de (Jan Vorbrueggen) Date: Wed, 12 Aug 92 21:28:55 +0200 Subject: LTP/LTD models Message-ID: <9208121928.AA01393@pallas.neuroinformatik.ruhr-uni-bochum.de> Hi Jean-Marc, have a look at last year's issues of Neural Computation. There is a paper by Bill Philips and his colleagues (can't remember all the names) on learning rules in general, and they also compare the rule derived from Artola, Broecher, and Singer's paper in Nature on LTD/LTP in rat striate cortex (vdM should know about that one, also 1st half of 1991, I think) with all the others and show that it is best. Regards, Jan
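For a concrete, deliberately simplified picture of the kind of two-threshold rule referred to in that comparison, the toy Python sketch below implements a generic LTP/LTD update in the spirit of the Artola, Broecher and Singer results: depression when postsynaptic activity lies between a lower and an upper threshold, potentiation above the upper threshold, no change below both. The thresholds, learning rates and presynaptic gating used here are illustrative assumptions, not the published rule or the rule evaluated in that paper.

------------------------------------------
# Toy sketch of a two-threshold LTP/LTD weight update.  All constants and the
# exact functional form are assumptions chosen for illustration only.
def ltp_ltd_update(w, pre, post, theta_minus=0.3, theta_plus=0.6,
                   eta_ltp=0.05, eta_ltd=0.02):
    """Return the updated weight of one synapse given pre- and postsynaptic
    activity levels in [0, 1]."""
    if post > theta_plus:          # strong postsynaptic activation -> LTP
        dw = eta_ltp * pre * (post - theta_plus)
    elif post > theta_minus:       # intermediate activation -> LTD
        dw = -eta_ltd * pre * (post - theta_minus)
    else:                          # below both thresholds -> no change
        dw = 0.0
    return w + dw

# The same presynaptic input yields depression or potentiation depending on
# the level of postsynaptic activity.
print(ltp_ltd_update(0.5, pre=1.0, post=0.4))   # slight decrease (LTD)
print(ltp_ltd_update(0.5, pre=1.0, post=0.9))   # increase (LTP)
------------------------------------------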
From paolo at psyche.mit.edu Mon Aug 24 14:00:31 1992 From: paolo at psyche.mit.edu (Paolo Frasconi) Date: Mon, 24 Aug 92 14:00:31 EDT Subject: Paper in Neuroprose Archive Message-ID: <9208241800.AA11636@psyche.mit.edu>

The following technical report has been placed in the Neuroprose Archives at Ohio State University:

Injecting Nondeterministic Finite State Automata into Recurrent Neural Networks Paolo Frasconi, Marco Gori, and Giovanni Soda Technical Report DSI-RT15/92, August 1992 Dipartimento di Sistemi e Informatica University of Florence

Abstract: In this paper we propose a method for injecting time-warping nondeterministic finite state automata into recurrent neural networks. The proposed algorithm takes as input a set of automata transition rules and produces a recurrent architecture. The resulting connection weights are specified by means of linear constraints. In this way, the network is guaranteed to carry out the assigned automata rules, provided the weights belong to the constrained domain and the inputs belong to an appropriate range of values, making possible a boolean interpretation. In a subsequent phase, the weights can be adapted in order to obtain the desired behavior on corrupted inputs, using learning from examples. One of the main concerns of the proposed neural model is that it is no longer focussed exclusively on learning, but also on the identification of significant architectural and weight constraints derived systematically from automata rules, representing the partial domain knowledge on a given problem.

To obtain a copy via FTP (courtesy of Jordan Pollack):
unix% ftp archive.cis.ohio-state.edu (or 128.146.8.52)
Name: anonymous
Password: (type your E-mail address)
ftp> cd pub/neuroprose
ftp> binary
ftp> get frasconi.nfa.ps.Z
ftp> quit
unix% zcat frasconi.nfa.ps.Z | lpr
(or however you uncompress and print postscript)

Sorry, no hard copies available. Paolo Frasconi Dipartimento di Sistemi e Informatica Via di Santa Marta, 3 50139 Firenze, Italy frasconi at ingfi1.cineca.it

From fellous%hyla.usc.edu at usc.edu Mon Aug 24 20:47:12 1992 From: fellous%hyla.usc.edu at usc.edu (Jean-Marc Fellous) Date: Mon, 24 Aug 92 17:47:12 PDT Subject: Models of LTP and LTD. Message-ID: Dear Connectionists, I forgot to add this entry to the review of replies regarding LTP/LTD. Sorry for any inconvenience, Jean-Marc Fellous >>>>>>>

From mrj at moria.cs.su.oz.au Tue Aug 25 03:57:43 1992 From: mrj at moria.cs.su.oz.au (Mark James) Date: Tue, 25 Aug 1992 17:57:43 +1000 Subject: Thesis on NN simulation hardware available for ftp Message-ID:

The following Master of Science thesis is available for ftp from neuroprose:

Design of Low-cost, Real-time Simulation Systems for Large Neural Networks
--------------------------------------------------------------------------
Mark James The University of Sydney January, 1992

ABSTRACT Systems with large amounts of computing power and storage are required to simulate very large neural networks capable of tackling complex control problems and real-time emulation of the human sensory, language and reasoning systems.
General-purpose parallel computers do not have communications, processor and memory architectures optimized for neural computation and so cannot perform such simulations at reasonable cost. The thesis analyses several software and hardware strategies to make feasible the simulation of large, brain-like neural networks in real-time and presents a particular multicomputer design able to implement these strategies. An important design goal is that the system must not sacrifice computational flexibility for speed, as new information about the workings of the brain and new artificial neural network architectures and learning algorithms are continually emerging. The main contributions of the thesis are:
- an analysis of the important features of biological neural networks that need to be simulated,
- a review of hardware and software approaches to neural networks, and an evaluation of their abilities to simulate brain-like networks,
- the development of techniques for efficient simulation of brain-like neural networks, and
- the description of a multicomputer that is able to simulate large, brain-like neural networks in real-time and at low cost.

------------------------------------------
To obtain a copy via FTP use the standard procedure:
% ftp cheops.cis.ohio-state.edu
anonymous
Password: anything
ftp> cd pub/neuroprose
ftp> binary
ftp> get james.nnsim.ps.Z
ftp> quit
% zcat james.nnsim.ps.Z | lpr
------------------------------------------

Mark James                                  | EMAIL : mrj at cs.su.oz.au |
Basser Department of Computer Science, F09  | PHONE : +61-2-692-4276    |
The University of Sydney NSW 2006 AUSTRALIA | FAX   : +61-2-692-3838    |

From henrik at tazdevil.llnl.gov Mon Aug 31 17:01:11 1992 From: henrik at tazdevil.llnl.gov (Henrik Klagges) Date: Mon, 31 Aug 92 14:01:11 PDT Subject: A neat idea from L. Breiman Message-ID: <9208312101.AA25658@tazdevil.llnl.gov>

> hold {d_2, ..., d_{k-1}} constant ...by re-making decision d_1

How can you hold d_2 ... etc constant if they might depend on d_1, like in a game tree?

Cheers, Henrik IBM Research Lawrence Livermore National Labs
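To make the forward-time approximation discussed earlier in this thread (Bryan Thompson's note above) concrete, here is a toy Python sketch of tabular Q-learning with epsilon-greedy exploration on a tiny chain environment. The environment, rewards, parameters and state/action sets are illustrative assumptions only. In schemes like this, later decisions are not literally held fixed: each state-action pair carries an estimate of cumulative expected future utility, and those estimates are re-learned as the policy changes, which is how the dependence of later decisions on earlier ones is carried.

------------------------------------------
# Toy sketch of forward-time approximate dynamic programming: tabular
# Q-learning with epsilon-greedy exploration.  Everything here (the chain
# environment, rewards and parameters) is an assumption for illustration.
import random

N_STATES = 5
ACTIONS = (0, 1)                       # 0 = move left, 1 = move right
GOAL = N_STATES - 1                    # reward is given on reaching the last state
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    nxt = min(state + 1, GOAL) if action == 1 else max(state - 1, 0)
    return nxt, (1.0 if nxt == GOAL else 0.0)

alpha, gamma, epsilon = 0.5, 0.9, 0.2  # learning rate, discount, exploration rate
for episode in range(100):
    s = 0
    while s != GOAL:
        # Performance vs. exploration: usually act greedily, sometimes explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s2, r = step(s, a)
        best_next = max(Q[(s2, act)] for act in ACTIONS)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s2

# After training, the greedy policy should choose "right" in every non-goal state.
print({s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)})
------------------------------------------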