From mcmillan at tigger.Colorado.EDU Thu Aug 1 11:35:21 1991 From: mcmillan at tigger.Colorado.EDU (Clayton McMillan) Date: Thu, 1 Aug 91 09:35:21 MDT Subject: Tech Report Available Message-ID: <9108011535.AA22278@tigger.colorado.edu> A compressed postscript version of the following tech report has been placed in the pub/neuroprose directory for anonymous ftp from cheops.cis.ohio-state.edu (instructions follow). This paper will appear in the Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society. The Connectionist Scientist Game: Rule Extraction and Refinement in a Neural Network Clayton McMillan, Michael C. Mozer, & Paul Smolensky Department of Computer Science and Institute of Cognitive Science University of Colorado Boulder, CO 80309-0430 mcmillan at boulder.colorado.edu Abstract Scientific induction involves an iterative process of hypothesis formulation, testing, and refinement. People in ordinary life appear to undertake a similar process in explaining their world. We believe that it is instructive to study rule induction in connectionist systems from a similar perspective. We propose an approach, called the Connectionist Scientist Game, in which symbolic condition-action rules are extracted from the learned connection strengths in a network, thereby forming explicit hypotheses about a domain. The hypotheses are tested by injecting the rules back into the network and continuing the training process. This extraction-injection process continues until the resulting rule base adequately characterizes the domain. By exploiting constraints inherent in the domain of symbolic string-to-string mappings, we show how a connectionist architecture called RuleNet can induce explicit, symbolic condition-action rules from examples. RuleNet's performance is far superior to that of a variety of alternative architectures we've examined. RuleNet is capable of handling domains having both symbolic and subsymbolic components, and thus shows greater potential than purely symbolic learning algorithms. The formal string manipulation task performed by RuleNet can be viewed as an abstraction of several interesting cognitive models in the connectionist literature, including case role assignment and the mapping of orthography to phonology. Instructions for retrieving and printing the file: unix> ftp cheops.cis.ohio-state.edu Connected to cheops.cis.ohio-state.edu. 220 cheops.cis.ohio-state.edu FTP server ready. Name: anonymous 331 Guest login ok, send ident as password. Password:neuron 230 Guest login ok, access restrictions apply. ftp> binary 200 Type set to I. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get mcmillan.csg.ps.Z 200 PORT command successful. 150 Opening BINARY mode data connection for mcmillan.csg.ps.Z (56880 bytes). 226 Transfer complete. local: mcmillan.csg.ps.Z remote: mcmillan.csg.ps.Z 56880 bytes received in 2.5 seconds (22 Kbytes/s) ftp> quit 221 Goodbye. unix> uncompress mcmillan.csg.ps.Z unix> lpr mcmillan.csg.ps Clayton McMillan From tom at faulty.che.utexas.edu Thu Aug 1 13:05:24 1991 From: tom at faulty.che.utexas.edu (Thomas W. Karjala) Date: Thu, 1 Aug 91 12:05:24 CDT Subject: No subject Message-ID: <9108011705.AA15929@faulty.che.utexas.edu> Dear connectionist researchers, The members of our research group have been applying neural networks to fault detection, data reconciliation, and control in the field of chemical engineering for several years. Recently, we have been using nonlinear programming techniques for training of feedforward networks.
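(Illustration only, not the Texas group's code: a minimal sketch of what "training a feedforward network by nonlinear programming" can look like, i.e. handing the flattened weight vector and its gradient to a general-purpose optimizer rather than taking raw backpropagation steps with a hand-tuned learning rate. The tiny two-layer network, the synthetic data, and the choice of an off-the-shelf L-BFGS routine are assumptions made for this sketch.)

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(50, 2))      # toy input patterns
Y = np.sin(X[:, :1] + X[:, 1:])               # toy targets

n_in, n_hid, n_out = 2, 5, 1
shapes = [(n_in, n_hid), (1, n_hid), (n_hid, n_out), (1, n_out)]
sizes = [int(np.prod(s)) for s in shapes]

def unpack(w):
    # cut the flat parameter vector back into W1, b1, W2, b2
    parts, i = [], 0
    for s, n in zip(shapes, sizes):
        parts.append(w[i:i + n].reshape(s))
        i += n
    return parts

def loss_and_grad(w):
    W1, b1, W2, b2 = unpack(w)
    H = np.tanh(X @ W1 + b1)                  # hidden layer
    out = H @ W2 + b2                         # linear output layer
    err = out - Y
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))
    # gradient by the chain rule (ordinary backprop, used only to supply
    # derivatives -- the optimizer, not a fixed learning rate, picks the step)
    d_out = err / X.shape[0]
    gW2 = H.T @ d_out
    gb2 = d_out.sum(axis=0, keepdims=True)
    d_hid = (d_out @ W2.T) * (1.0 - H ** 2)
    gW1 = X.T @ d_hid
    gb1 = d_hid.sum(axis=0, keepdims=True)
    grad = np.concatenate([g.ravel() for g in (gW1, gb1, gW2, gb2)])
    return loss, grad

w0 = 0.1 * rng.standard_normal(sum(sizes))    # small random starting point
res = minimize(loss_and_grad, w0, jac=True, method="L-BFGS-B")
print("final training error:", res.fun)

(Different random starting points w0 can of course still land in different local minima, which is exactly the initialization question raised below.)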
We have found these techniques to be more successful than backpropagation and have been able to train larger networks more quickly without external tuning of parameters such as learning rate and momentum. Training methods based on nonlinear programming share one drawback with backpropagation. Selection of the wrong starting point in the weight space can often lead to local minima where the network has learned the training data only poorly. In our readings, several of us have come across vague mention of methods of choosing initial starting weights other than picking small random values. We are now searching for more substantial references in this area. We would welcome any suggestions, comments, or pointers to papers on this subject and will be glad to share any information we find. Please contact us directly via email. Thanks! Thomas Karjala Department of Chemical Engineering The University of Texas at Austin Austin, TX 78712 tom at faulty.che.utexas.edu From gbugmann at nsis86.cl.nec.co.jp Fri Aug 2 10:32:40 1991 From: gbugmann at nsis86.cl.nec.co.jp (Guido Bugmann) Date: Fri, 02 Aug 91 10:32:40 V Subject: Robustness Message-ID: <9108020132.AA12736@nsis86.cl.nec.co.jp> Robustness is a vague but often used concept. Is there an accepted method to determine the robustness of a NN ? In an application of a FF Backprop net to the reproduction of a function f(x,y) [1], we had measured the robustness in the following way: After completed training, each weight was set to zero in turn and the root mean square of the relative errors (RMS) (relative differences between the actual outputs of the net and the outputs defined in the training set) was measured (The mean is over all the examples in the training set). In our case, the largest RMS induced by the loss of one connection was 1600 %. We have used this "worst possible damage" as a measure of the (non-) robustness of the network. [1] Bugmann, G., Lister, J.B. and von Stockar, U. (1989) "The standard deviation method: Data Analysis by Classical Means and by Neural Networks", Lab Report LRP-384/89, CRPP, Swiss Federal Institute of Technology, CH-1015 Lausanne. ------------------------------------------------- Guido Bugmann NEC Fund. Res. Lab. 34 Miyukigaoka Tsukuba, Ibaraki 305 Japan ------------------------------------------------- From MFMISMIC%HMARL5.BITNET at vma.cc.cmu.edu Fri Aug 2 17:43:00 1991 From: MFMISMIC%HMARL5.BITNET at vma.cc.cmu.edu (MFMISMIC%HMARL5.BITNET@vma.cc.cmu.edu) Date: Fri, 2 Aug 91 17:43 N Subject: Rules and Neural Networks. Paper announcement. Message-ID: The following paper has been published in the proceedings of the European Simulation Multiconference 91 in Copenhagen: HOMOMORPHIC TRANSFORMATION FROM NEURAL NETWORKS TO RULE BASES Author: Michael Egmont-Petersen, Computer Resources International a/s and Copenhagen Business School Abstract: In this article a method to extract the knowledge induced in a neural network is presented. The method explicates the relation between a network's inputs and its outputs. This relation is stored as logic rules. The feasibility of the method is studied by means of three test examples. The result is that the method can be used, though some drawbacks are detected. One is that the method sometimes generates a lot of rules. For fast retrieval, these rules can well be stored in a B-tree. Contents: 1. Introduction 2. Synthesizing Rule Bases Parsimoniously 3. Description of the Experiments 4. Practical Applicability of the Algorithm 5. Conclusion Hardcopies of the paper are available.
Please send requests to the following address in Holland: Institute of Medical Statistics and Informatics University of Limburg Postbus 616 NL-6200 MD Maastricht The Netherlands att. Michael Egmont-Petersen Michael Egmont-Petersen From SCHOLTES at ALF.LET.UVA.NL Sun Aug 4 14:59:00 1991 From: SCHOLTES at ALF.LET.UVA.NL (SCHOLTES) Date: Sun, 4 Aug 91 14:59 MET Subject: Neural Nets in Information Retrieval References Message-ID: Dear Connectionists, Hereby the compiled list of received reactions on work done in Information Retrieval and Connectionist or Neural Systems. Please keep me informed of any new work in this field. Regards, Jan Scholtes University of Amsterdam Faculty of Arts Department of Computational Linguistics scholtes at alf.let.uva.nl ********************************************************************* References Neural Nets in Information Retrieval [Allen, 1991]: Allen, R.B. (1991). Knowledge Representation and Information Retrieval with Simple Recurrent Networks. Work. Notes of the AAAI SSS on Connectionist Natural Language Processing. March 26th-28th 1991, Palo Alto, CA., pp. 1-6. [Belew et al., 1988]: Belew, R.K. and Holland, M.P. (1988). A Computer System Designed to Support the Near-Library User of Information Retrieval. Microcomputers for Information Management, Vol. 5, No. 3, December 1988, pp. 147-167. [Belew, 1986]: Belew, R.K. (1986). Adaptive Information Retrieval: Machine Learning in Associative Networks. Ph.D. Dissertation, Univ. Michigan, CS Department, Ann Arbor, MI. [Belew, 1987]: Belew, R.K. (1987). A Connectionist Approach to Conceptual Information Retrieval. Proc. of the First Intl. Conf. on AI and Law, pp. 116-126. ACM Press. [Belew, 1989]: Belew, R.K. (1989). Adaptive Information Retrieval: Using a Connectionist Representation to Retrieve and Learn About Documents. Proc. SIGIR-89, June 11th-20th, 1989. Cambridge, MA. [Brachman et al., 1988]: Brachman, R.J. and McGuinness, D.L. (1988). Knowledge Representation, Connectionism, and Conceptual Retrieval. SIGIR-88, [Doszkocs et al., 1990]: Doszkocs, T.E., Reggia, J. and Lin, X. (1990). Connectionist Models and Information Retrieval. Ann. Review of Information Science and Technology, Vol. 25, pp. 209-260. [Eichmann et al., 1991]: Eichmann, D.A. and Srinivas, K. (1991). Neural Network-Based Retrieval from Reuse Repositories. CHI '91 Workshop on Pattern Recognition and Neural Networks in Human-Computer Interaction, April 28th, 1991. New Orleans, LA. [Eichmann et al., 1992]: Eichmann, D.A. and Srinivas, K. (1992). Neural Network-Based Retrieval from Reuse Repositories. In: Neural Networks and Pattern Recognition in Human Computer Interaction (R. Beale and J. Findlay, Eds.), Ellis Horwood Ltd, WS, UK. [Gersho et al., 1990]: Gersho, M. and Reiter, R. (1990). Information Retrieval using a Hybrid Multi-Layer Neural Network. Proceedings of the IJCNN, San Diego, June 17-21, 1990, Vol. 2, pp. 111-117. [Honkela et al., 1991]: Honkela, T. and Vepsalainen, A.M. (1991). Interpreting Imprecise Expressions: Experiments with Kohonen's Self-Organizing Maps and Associative Memory. Proc. of the ICANN '91, June 24th-28th, Helsinki, Vol. 1, pp. 897-902. [Jagota, 1990]: Jagota, A. (1990). Applying a Hopfield-style Network to Degraded Printed Text Restoration. Proc. of the 3rd Conference on Neural Networks and PDP, Indiana-Purdue University. [Jagota, 1990]: Jagota, A. (1990). Applying a Hopfield-Style Network to Degraded Text Recognition. Proc. of the IJCNN 1990, [Jagota, 1991]: Jagota, A. (1991).
Novelty Detection on a Very Large Number of Memories in a Hopfield-style Network. Proc. of the IJCNN, July 8th-12th, 1991. Seattle, WA, [Jagota, et al., 1990]: Jagota, A. and Hung, Y.-S. (1990). A Neural Lexicon in a Hopfield-style Network. Proceedings of the IJCNN, Washington, 1990, Vol. 2, pp. 607-610. [Mitzman et al., 1990]: Mitzman, D. and Giovannini, R. (1990). ActivityNets: A Neural Classifier of Natural Language Descriptions of Economic Activities. Proc. of the Int. Workshop on Neural Nets for Statistical and Economic Data, Dublin, December 10-11, [Mozer, 1984]: Mozer, M.C. (1984). Inductive Information Retrieval Using Parallel Distributed Computation. ICS Technical Report 8406, La Jolla, UCSD. [Rose et al., 1989]: Rose, D.E. and Belew, R.K. (1989). A Case for Symbolic/Sub-Symbolic Hybrids. Proc. of the Cogn. Science Society, pp. 844-851. [Rose et al., 1991]: Rose, D.E. and Belew, R.K. (1991). A Connectionist and Symbolic Hybrid for Improving Legal Research. Int. Journal of Man-Machine Studies, July 1991, [Rose, 1990]: Rose, D.E. (1990). Appropriate Uses of Hybrid Systems. In: Connectionist Models: Proc. of the 1990 Summer School, pp. 277-286. Morgan Kaufmann Publishers. [Rose, 1991]: Rose, D.E. (1991). A Symbolic and Connectionist Approach to Legal Information Retrieval. Ph.D. Dissertation, UCSD, La Jolla, CA. [Scholtes, 1991]: Scholtes, J.C. (1991). Filtering the Pravda with a Self-Organizing Neural Net. Submitted to the Bellcore Workshop on High Performance Information Filtering, November 5th-7th 1991, Chester, NJ. [Scholtes, 1991]: Scholtes, J.C. (1991). Neural Nets and Their Relevance in Information Retrieval. TR, Dept. of Computational Linguistics, August 1991. University of Amsterdam. [Scholtes, 1991]: Scholtes, J.C. (1991). Neural Nets versus Statistics in Information Retrieval. Submitted to NIPS*91, December 2nd-5th 1991, Boulder, Colorado. [Scholtes, 1991]: Scholtes, J.C. (1991). Unsupervised Learning and the Information Retrieval Problem. Submitted to IJCNN November 18th-22nd 1991, Singapore. [Scholtes, 1992]: Scholtes, J.C. (1992). Filtering the Pravda with a Self-Organizing Neural Net. Submitted to the Symposium on Document Analysis and Information Retrieval. March 16th-18th 1992, Las Vegas, NV. [Wermter, 1991]: Wermter, S. (1991). Learning to Classify Natural Language Titles in a Recurrent Connectionist Model. Proc. of the ICANN '91, June 24th-28th, Helsinki, Vol. 2, pp. 1715-1718. ******************************************************************** From aam9n at hagar1.acc.Virginia.EDU Sat Aug 3 22:14:31 1991 From: aam9n at hagar1.acc.Virginia.EDU (Ali Ahmad Minai) Date: Sat, 3 Aug 91 22:14:31 EDT Subject: Robustness Message-ID: <9108040214.AA07744@hagar1.acc.Virginia.EDU> In <9108020132.AA12736 at nsis86.cl.nec.co.jp>, Guido Bugmann writes: > Robustness is a vague but often used concept. Is there an > accepted method to determine the robustness of a NN ? > In an application of a FF Backprop net to the reproduction > of a function f(x,y) [1], we had measured the robustness in > the following way: > After completed training, each weight was set to zero in turn > and the root mean square of the relative errors (RMS) (relative > differences between the actual outputs of the net and the outputs > defined in the training set) was measured (The mean is over all > the examples in the training set). > In our case, the largest RMS induced by the loss of one connection > was 1600 %. We have used this "worst possible damage" as a measure > of the (non-) robustness of the network.
> > [1] Bugmann, G., Lister, J.B. and von Stockar, U. (1989) > "The standard deviation method: Data Analysis by Classical Means > and by Neural Networks", Lab Report LRP-384/89, CRPP, Swiss Federal > Institute of Technology, CH-1015 Lausanne. Indeed, robustness of neural networks could do with considerably greater investigation. I am just finishing up a dissertation on the robustness of feed-forward networks with real-valued inputs and outputs. I have looked at a very simple case --- probably the simplest possible. I define a perturbation process over all the non-output neurons of the network, with the major restriction that only one neuron's output is perturbed during the presentation of any one input vector. The neuron perturbed is selected with a distribution q(i), and the magnitude of the perturbation is an independent random variable with 0-mean distribution p(d). For simplicity of analysis, I take both q and p to be uniform, but that can be relaxed. The robustness of the network over a data set T is defined as the average deviation in the output of the network operating under the perturbation process, relative to some appropriate parameter of distribution p (e.g. the spread of the uniform deviation, or the variance etc.). The deviation can be measured in many ways: for simplicity, I use the sum of absolute deviations over all network outputs. The main thing is to predict the average deviation without making a hundred passes over the data set, and without actually perturbing the network. This is easily done using a power series approximation of the relationship between each neuron output and each network output. The required derivatives can be calculated using dynamic feedback a la Werbos (back-propagation, if you like). As long as the weight vectors for individual neurons in the network are not huge (i.e. if no neurons have activation functions close to being discontinuous), the approximation I make is quite reasonable. Of course, since all activation and composition functions in the network are continuous, and continuously differentiable everywhere, there is always a perturbation process with bounded distribution that satisfies the convergence criteria of the power series. Using the uniform distributions for p and q, and retaining only the linear term in the power series expansions, the analysis, applied to any network, yields a characteristic measure that directly scales the expected output deviation, i.e. given that p is U[0,b], the expected output deviation is b/2r, where r is the characteristic measure of robustness for the network. Once r is determined, the network's response to perturbation distributions with various spreads can be predicted (within limits). Indeed, with hardly any extra effort (and no extra computational expense), even the variance of the output deviation can be predicted in a similar way. The computational complexity of determining r is O(|W|*|T|), where W is the set of weights and T is the data set. Of course, the predictive accuracy of r over data sets other than T depends on how representative T is --- the usual generalization issue. As T grows, however, r's predictive accuracy should converge over all data sets chosen under the same sampling distribution. The empirical results I have are very good. One interesting aspect of this analysis is that it also provides a measure of the sensitivity of the network output to perturbations in each neuron output, which is a natural way of measuring the relevance of individual neurons.
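(A rough numerical sketch of this kind of first-order estimate, not Minai's actual formulation: the layer sizes, random weights, and uniform perturbation bound b below are made-up assumptions. Because this example's output is linear in the hidden activations, the linearized estimate happens to be exact, so the two printed numbers should agree up to sampling noise.)

import numpy as np

rng = np.random.default_rng(1)
W1 = 0.5 * rng.standard_normal((3, 8))        # input -> hidden weights
W2 = 0.5 * rng.standard_normal(8)             # hidden -> output weights
X = rng.uniform(-1.0, 1.0, size=(200, 3))     # toy data set T
b = 0.1                                       # perturbation magnitude ~ U[-b, b]

def output(x, bump=None):
    h = np.tanh(x @ W1)
    if bump is not None:                      # add d to one hidden unit's output
        unit, d = bump
        h = h.copy()
        h[unit] += d
    return float(h @ W2)

# First-order prediction of the mean absolute output deviation:
# |dy/dh_i| * E|d|, averaged over units, with E|d| = b/2 for d ~ U[-b, b].
# For this small net dy/dh_i is just W2[i], so no passes over T are needed.
predicted = np.mean(np.abs(W2)) * (b / 2.0)

# Empirical check: perturb one randomly chosen hidden unit per pattern.
devs = []
for x in X:
    unit = int(rng.integers(8))
    d = rng.uniform(-b, b)
    devs.append(abs(output(x, (unit, d)) - output(x)))

print("predicted mean deviation:", predicted)
print("measured  mean deviation:", float(np.mean(devs)))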
Such a sensitivity measure can be used either for pruning, or (I think, more consistently with connectionist philosophy), to encourage the emergence of distributed representations. I am working on incorporating such distribution imperatives into back-propagation etc., and should have some results in a few months. The work described above is now being written up into papers, and should be submitted over the next month or two. I would be delighted to discuss this and related issues with anyone who is interested. There is much work to be done in extending this formulation to the case where multiple perturbations are permitted simultaneously. Since things are not additive or subadditive, I'm not sure how important higher order effects are --- probably quite important. Still, I have a few ideas, which I'll be working on in the next few months. Ali Minai Electrical Engineering University of Virginia From jm2z+ at andrew.cmu.edu Mon Aug 5 10:46:36 1991 From: jm2z+ at andrew.cmu.edu (Javier Movellan) Date: Mon, 5 Aug 91 10:46:36 -0400 (EDT) Subject: Robustness to what ? In-Reply-To: <9108020132.AA12736@nsis86.cl.nec.co.jp> References: <9108020132.AA12736@nsis86.cl.nec.co.jp> Message-ID: Robustness to what? * Damage ? * Effects of noise in input ? * Effect of noise in teacher ? Traditionally in statistics robustness of an estimator is understood as the resistance of the estimates to the effects of a wide variety of noise distributions. The key point here is VARIETY. So we may have estimators that behave very well under Gaussian noise but deteriorate under other types of noise (non-robust) and estimators that behave OK but sub-optimally under very different types of noise (robust). Robust estimators are advised when the form of the noise is unknown. Maximum likelihood estimators are a good choice when the form of the noise is known. In practice robustness is measured by analyzing how the estimator behaves under three benchmark noise distributions. These distributions represent low tail, normal tail and large tail conditions. Things are a bit more complicated in the neural nets environment for we are trying to estimate functions instead of points, and unlike linear regression we have problems with multiple minima. For statistical theory on robust estimation as applied to linear regression see: Li G (1985) Robust Regression, in Hoaglin, Mosteller and Tukey: Exploring data, tables, trends, and shapes. New York, John Wiley. For application of these ideas to the back-prop environment see: Movellan J (1991) Error Functions to Improve Noise Resistance and Generalization in Backpropagation Networks. In Advances in Neural Networks, Ablex Publishing Corporation. Hanson S has an earlier paper on the same theme. I believe the paper is in one of the NIPS proceedings but I do not have the exact reference with me. -Javier From jm2z+ at andrew.cmu.edu Mon Aug 5 14:06:10 1991 From: jm2z+ at andrew.cmu.edu (Javier Movellan) Date: Mon, 5 Aug 91 14:06:10 -0400 (EDT) Subject: f(a,b) = f(a +b) Message-ID: I am looking for a name to the constraint f(a,b) = f(a+b). Do you have any suggestions ? Thanks Javier From port at iuvax.cs.indiana.edu Tue Aug 6 02:35:21 1991 From: port at iuvax.cs.indiana.edu (Robert Port) Date: Tue, 6 Aug 91 1:35:21 EST Subject: dynamic reprstns and lang Message-ID: This paper will be presented at the Cognitive Science Society meeting later this week. It proposes that dynamic systems suggest ways to expand the range of representational systems. `REPRESENTING ASPECTS OF LANGUAGE' by Robert F.
Port (Departments of Linguistics and Computer Science) and Timothy van Gelder (Department of Philosophy) Cognitive Science Program, Indiana University, Bloomington. We provide a conceptual framework for understanding similarities and differences among various schemes of compositional representation, emphasizing problems that arise in modelling aspects of human language. We propose six abstract dimensions that suggest a space of possible compositional schemes. Temporality and dynamics turn out to play a key role in defining several of these dimensions. From studying how schemes fall into this space, it is apparent that there is no single crucial difference between AI and connectionist approaches to representation. Large regions of the space of compositional schemes remain unexplored, such as the entire class of active, dynamic models that do composition in time. These models offer the possibility of parsing real-time input into useful segments, and thus potentially into linguistic units like words and phrases. A specific dynamic model implemented in a recurrent network is presented. This model was designed to simulate some aspects of human auditory perception but has implications for representation in general. The paper can be obtained from Neuroprose at Ohio State University. Use ftp cheops.cis.ohio-state.edu. Login as anonymous with neuron as password. Cd to pub/neuroprose. Then get port.langrep.ps.Z. After uncompressing, do lpr (in Unix) to a postscript printer. Robert Port, Dept of Linguistics, Memorial Hall, Indiana University, 47405 812-855-9217 From singras at gmuvax2.gmu.edu Tue Aug 6 18:16:03 1991 From: singras at gmuvax2.gmu.edu (Steve Ingrassia) Date: Tue, 6 Aug 91 18:16:03 -0400 Subject: f(a,b) = f(a +b) Message-ID: <9108062216.AA04965@gmuvax2.gmu.edu> I do not understand your question. If the function "f" is a function of two variables, as on the left-hand side of your equal sign, then the right-hand side, "f(a+b)" is undefined. If "f" is a function of a single variable, then "f(a,b)" makes no sense. From lacher at lambda.cs.fsu.edu Wed Aug 7 10:37:54 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Wed, 7 Aug 91 10:37:54 -0400 Subject: f(a,b) = f(a +b) Message-ID: <9108071437.AA00165@lambda.cs.fsu.edu> A function f(x,y) that satisfies f(x,y) = g(x+y) for some g ? I asked Lou Howard and here is his response: If you think of y as time and x as position, something of the form g(x+y) is sometimes called a "leftward propagating wave"; but this is not much shorter than calling a spade a spade. [It is the general solution of the "left-going wave equation", du/dt = du/dx.] Can't think of anything else. -- Lou. --- Chris Lacher From bachmann at radar.nrl.navy.mil Wed Aug 7 09:22:30 1991 From: bachmann at radar.nrl.navy.mil (Charles Bachmann) Date: Wed, 7 Aug 91 09:22:30 -0400 Subject: f(a,b) = f(a +b) Message-ID: <9108071322.AA10316@radar.nrl.navy.mil> In reply to Steve Ingrassia's comment , what about f(x,y) = 1 /(x + y). We can think of it as a function of two variables, but with the replacement of z = x + y, it becomes a function of one variable f(z) = 1 / z. 
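(A compact way of stating what Bachmann's substitution gives in general, assuming f is continuously differentiable on a connected domain; the symbols u, v and F are introduced only for this note.)

% Change of variables: u = x + y, v = x - y, so x = (u+v)/2 and y = (u-v)/2.
% Writing F(u,v) = f((u+v)/2, (u-v)/2), f is a function of x + y alone
% exactly when F does not vary with v:
\[
  \frac{\partial F}{\partial v}
    = \frac{1}{2}\,\frac{\partial f}{\partial x}
    - \frac{1}{2}\,\frac{\partial f}{\partial y}
    = 0
  \quad\Longleftrightarrow\quad
  f(x,y) = g(x+y) \ \text{for some } g .
\]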
From bmb at Think.COM Wed Aug 7 19:31:27 1991 From: bmb at Think.COM (Bruce Boghosian) Date: Wed, 7 Aug 91 19:31:27 EDT Subject: f(a,b) = f(a +b) In-Reply-To: Charles Bachmann's message of Wed, 7 Aug 91 09:22:30 -0400 <9108071322.AA10316@radar.nrl.navy.mil> Message-ID: <9108072331.AA17609@aldebaran.think.com> Date: Wed, 7 Aug 91 09:22:30 -0400 From: Charles Bachmann In reply to Steve Ingrassia's comment , what about f(x,y) = 1 /(x + y). We can think of it as a function of two variables, but with the replacement of z = x + y, it becomes a function of one variable f(z) = 1 / z. At the risk of splitting hairs here, most mathematicians would probably like to see a different symbol used for the second function. So, the question is (presumably) to characterize functions of two variables, f(x,y), that have the property that there exists a function of one variable, g(z), such that f(x,y)=g(x+y). I know of no particular name for such functions. They satisfy the PDE \partial f/\partial x - \partial f/\partial y = 0, but that's probably not a particularly useful characterization either. --Bruce From barto at envy.cs.umass.edu Thu Aug 8 16:15:57 1991 From: barto at envy.cs.umass.edu (Andy Barto) Date: Thu, 8 Aug 91 16:15:57 -0400 Subject: postdoctoral position Message-ID: <9108082015.AA15473@envy.cs.umass.edu> Postdoctoral Position in Computational Neuroscience. I am looking for applicants for a postdoctoral position for the study of computational aspects of the control of movement. The position will involve working as a member of an interdisciplinary group on the modeling and simulation of neural systems that control arm movements. The ideal candidate would have experience in both neurophysiology and connectionist modelling, with specific interests in learning algorithms and the dynamics of nonlinear recurrent networks. Andy Barto Department of Computer Science University of Massachusetts at Amherst Send curriculum vitae and two references to: Gwyn Mitchell, Department of Computer Science, University of Massachusetts, Amherst MA 01003. From barryf at ee.su.OZ.AU Wed Aug 7 21:47:23 1991 From: barryf at ee.su.OZ.AU (Barry Flower) Date: Thu, 8 Aug 1991 11:47:23 +1000 Subject: ACNN'92 Final Call Message-ID: <9108080147.AA11030@brutus.ee.su.OZ.AU> FINAL CALL FOR PAPERS & ADVANCE REGISTRATION (Deadline for Submissions is August 30, 1991) Third Australian Conference On Neural Networks (ACNN'92) 3rd - 5th February 1992 The Australian National University, Canberra, Australia The third Australian conference on neural networks will be held in Canberra on February 3rd-5th 1992, at the Australian National University. This conference is interdisciplinary, with emphasis on cross discipline communication between Neuroscientists, Engineers, Computer Scientists, Mathematicians and Psychologists concerned with understanding the integrative nature of the nervous system and its implementation in hardware/software.
The categories for presentation and submissions include: 1 - Neuroscience: Integrative function of neural networks in vision, audition, motor, somatosensory and autonomic functions; Synaptic function; Cellular information processing; 2 - Theory: Learning; generalisation; complexity; scaling; stability; dynamics; 3 - Implementation: Hardware implementation of neural nets; Analog and digital VLSI implementation; Optical implementations; 4 - Architectures and Learning Algorithms: New architectures and learning algorithms; hierarchy; modularity; learning pattern sequences; Information integration; 5 - Cognitive Science and AI: Computational models of cognition and perception; Reasoning; Concept formation; Language acquisition; Neural net implementation of expert systems; 6 - Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, machine vision; Motor control; Robotics; ACNN'92 will feature an invited keynote speaker. The program will include presentations and poster sessions. Proceedings will be printed and distributed to the attendees. There will be no parallel sessions. Invited Keynote Speaker ~~~~~~~~~~~~~~~~~~~~~~~ Professor Terrence Sejnowski, The Salk Institute and University of California at San Diego. Call for Papers ~~~~~~~~~~~~~~~ Original research contributions are solicited and will be internationally refereed. Authors must submit the following by August 30, 1991: * Five copies of a four page (maximum) manuscript * Five copies of a 100 word (maximum) abstract * Covering letter indicating submission title and correspondence addresses for authors. Each manuscript and abstract should clearly indicate submission category (from the six listed) and author preference for oral or poster presentations. Note that names or addresses of the authors should be omitted from the manuscript and the abstract and should be included only on the covering letter. Authors will be notified by November 1, 1991 whether their submissions are accepted or not, and are expected to prepare a revised manuscript (up to four pages) by December 13, 1991. Submissions should be mailed to: Mrs Agatha Shotam Secretariat ACNN'92 Sydney University Electrical Engineering NSW 2006 Australia Registration material may be obtained by writing to Mrs Agatha Shotam at the address above or by: Tel: (+61-2) 692 4214 Fax: (+61-2) 660 1228 Email: acnn92 at ee.su.oz.au. Deadline for Submissions is August 30, 1991 Venue ~~~~~ Australian National University, Canberra, Australia. Principal Sponsors ~~~~~~~~~~~~~~~~~~ * Australian Telecommunications & Electronics Research Board * Telectronics Pacing Systems ACNN'92 Organising Committee ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Organising Committee: Chairman: Marwan Jabri, (SU) Technical Program Chairman: Bill Levick, (ANU) & Ah Chung Tsoi, (QU) Stream Chairs: Neuroscience: Max Bennett (SU) Theory: Ah Chung Tsoi (QU) Cognitive Science & AI: Max Coltheart (MU) Implementations: Marwan Jabri/Steve Pickard (SU) Applications: Yianni Attikiouzel (WA) Local Arrangements Chair: M Srinivasan (ANU) Institutions Liaison Chair: N Nandagopal (DSTO) Sponsorship Chair: Steve Pickard (SU) Publicity Chair: Barry Flower (SU) Publications Chair: Philip Leong (SU) Secretariat/Treasurer: Agatha Shotam (SU) Registration ~~~~~~~~~~~~ The registration fee to attend ACNN'92 is: Full Time Students $A75 Academics $A200 Other $A300 A discount of 20% applies for advance registration. Registration forms must be posted before December 13th, 1991, to be entitled to the discount.
To be eligible for the Full Time Student rate a letter from the Head of Department as verification of enrolment is required. Accommodation ~~~~~~~~~~~~~ To assist attendees in obtaining accommodation, a list of hotels and colleges close to the conference venue is shown below. Lakeside Hotel, London Circuit, Canberra City ACT 2600, Tel: +61(6) 247 6244, Fax: +61(6) 257 3071. Olims Canberra Hotel, Cnr. Ainslie & Limestone Ave, Braddon ACT 2601, Tel: +61(6) 248 5511, Tel: +61(6) 247 0864. Macquarie Private Hotel, 18 National Circuit, Barton ACT 2600, Tel: +61(6) 273 2325, Fax: +61(6) 273 4241. Capital Hotel, 108 Northbourne Ave, Canberra City ACT 2600, Tel: +61(6) 248 6566, Tel: +61(6) 248 8011. Brassey Hotel, Belmore Gardens, Barton ACT 2600, Tel: +61(6) 273 3766, Fax: +61(6) 273 2791. Canberra Rex Hotel, Northbourne Ave, Canberra City ACT 2600, Tel: +61(6) 248 5311, Tel: +61(6) 248 8357. Accommodation on Australian National University Campus ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ University House, Tel: +61(6) 249 5211, Fax: +61(6) 249 5252. Burton & Garran Hall, Tel: +61(6) 249 3083, Fax: +61(6) 249 3136. Important Conference Dates ~~~~~~~~~~~~~~~~~~~~~~~~~~ 30th August 1991 Deadline for Submissions 1st November 1991 Notification of Acceptance 13th December 1991 Deadline for Camera ready paper 13th December 1991 Deadline for advance registration with discount 3rd-5th February 1992 Conference ------------------------------------------------------------------------------- ACNN'92 Registration Form I wish to attend the conference ______ I wish to be on your mailing list ______ My research interests are: Neuroscience _____ Modelling _____ Implementation _____ Other: ___________________________________ Name: _______________________________________________________________________ Title: ______________________________________________________________________ Organisation: _______________________________________________________________ Occupation: _________________________________________________________________ Address: ____________________________________________________________________ _________________________________________ City: _____________________________ State: ________________ Post Code: ____________ Country: ____________________ Tel: ____________________________________ Fax: ______________________________ E-Mail: _____________________________________________________________________ Find enclosed a cheque for the sum of ($A): ____________ (Deduct 20% of original registration cost if posted before December 13th, 1991) OR Please charge my credit card for the sum of (A$): ____________ (Deduct 20% of original registration cost if posted before December 13th, 1991) Mastercard/Visa Number: ______________________ Expiry Date: ________________ Signature: ____________________________________ Date: _______________________ To register, please fill in this form and return it together with payment: Secretariat ACNN'92 University of Sydney Electrical Engineering Sydney NSW 2006 Australia From egnp46 at castle.edinburgh.ac.uk Mon Aug 12 15:45:08 1991 From: egnp46 at castle.edinburgh.ac.uk (D J Wallace) Date: Mon, 12 Aug 91 15:45:08 WET DST Subject: f(a,b) = f(a +b) In-Reply-To: Javier Movellan's message of Mon, 5 Aug 91 14:06:10 -0400 (EDT) Message-ID: <9108121545.aa08941@castle.ed.ac.uk> The function f(a,b) is translation invariant, in the sense that if you think of a and b as coordinates on a line, its value is unchanged under the transformation a -> a+x, b -> b-x (any x).
I don't think there is a need to introduce any special name for this. David Wallace > I am looking for a name to the constraint > > f(a,b) = f(a+b). > > Do you have any suggestions ? > > > Thanks > > > Javier From jose at tractatus.siemens.com Mon Aug 12 14:31:03 1991 From: jose at tractatus.siemens.com (Steve Hanson) Date: Mon, 12 Aug 1991 14:31:03 -0400 (EDT) Subject: NIPS update Message-ID: NIPS goers: Due to a clerical error, one of our temps put a bunch (maybe ~50) labeled envelopes in the mail. Unfortunately they were EMPTY envelopes. Please disregard these envelopes, they have no bearing on your submission. Envelopes with LETTERS will be appearing soon. We apologize profusely for jumping the gun and causing some of you who are anxiously waiting for information any concern. Please DON'T call us for clarification. And please share this info with your fellow NIPS colleagues. Again Mea Culpa. Steve NIPS*91 Program Chair From tgd at turing.CS.ORST.EDU Mon Aug 12 15:15:36 1991 From: tgd at turing.CS.ORST.EDU (Tom Dietterich) Date: Mon, 12 Aug 91 12:15:36 PDT Subject: Mackey-Glass Data? Message-ID: <9108121915.AA27441@turing.CS.ORST.EDU> I want to experiment with the Mackey-Glass time-series task. Before I numerically integrate the differential equation myself, I was wondering if someone had already constructed a collection of training and test data that they would be willing to make available. If anyone has suggestions for other interesting regression problems (i.e., where the task is to predict a real-valued function), I'd like to hear from you. Thanks, --Tom From pablo at cs.washington.edu Mon Aug 12 18:56:02 1991 From: pablo at cs.washington.edu (David Cohn) Date: Mon, 12 Aug 91 15:56:02 -0700 Subject: "Virtual" Directories of neural network papers, etc. Message-ID: <9108122256.AA15554@june.cs.washington.edu> Prospero is a distributed directory service that allows users to organize information that is scattered across the Internet. It also allows users to look for information that has been organized by others. Over the past few months I've been assembling "virtual" directories of neural network related papers, source releases, and training/test data. The motivation is that one can access current (publicly ftp'able) work in an organized directory format. If your system is running Prospero, you can look in: /papers/subjects/neural-nets Papers and reviews about neural nets /releases/neural-nets Neural net software /databases/machine-learning Machine learning test data I am incorporating all publicly ftp'able neural-network-related files that I become aware of. This includes all papers that are announced as being placed in the neuroprose archive. (Note that this doesn't *replace* neuroprose or any other archive sites; its purpose is to make them easier to access). If your system has relevant publicly-accessible software, papers, or data that you would like to make available to others, please send me email and I will incorporate it into these virtual directories. If your system is not already running Prospero, information on obtaining the release is available from info-prospero at isi.edu. Virtually, -David "Pablo" Cohn e-mail: pablo at cs.washington.edu Dept. of Computer Science, FR-35 phone: (206) 543-7798 University of Washington Seattle, WA 98195 From ben%psych at Forsythe.Stanford.EDU Mon Aug 12 17:41:26 1991 From: ben%psych at Forsythe.Stanford.EDU (Ben Martin) Date: Mon, 12 Aug 91 14:41:26 PDT Subject: Request for info on satellite photo analysis.
Message-ID: <9108122141.AA08847@psych> A colleague here at Stanford asked me if I knew of any applications of networks to the problem of satellite photo analysis. He mentioned hearing about a system that could recognize clouds. Any information you can pass along would be appreciated. Ben Martin (ben at psych.stanford.edu) From ira at linus.mitre.org Tue Aug 13 10:15:01 1991 From: ira at linus.mitre.org (ira@linus.mitre.org) Date: Tue, 13 Aug 91 10:15:01 -0400 Subject: Request for info on satellite photo analysis. In-Reply-To: Ben Martin's message of Mon, 12 Aug 91 14:41:26 PDT <9108122141.AA08847@psych> Message-ID: <9108131415.AA06394@ella.mitre.org> I am aware of some efforts to use neural networks for satellite photo analysis. The cloud work you heard about may have been from our group here at MITRE. I believe Hughes has applied the Boundary Contour System to either high altitude or satellite imagery. Our work applies neural network vision techniques to the cloud and other recognition problems. See "Meteorological Classification of Satellite Imagery Using Neural Network Data Fusion," Smotroff, I. G., Howells, T. P., Lehar, S., International Joint Conference On Neural Networks, June 1990, Vol. II, pp. 23-28. See also "Meteorological Classification of Satellite Imagery and Ground Sensor Data Using Neural Network Data Fusion," Smotroff, Howells, Lehar, in Proceedings of the American Meteorological Society Seventh International Conference on Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology, New Orleans, LA, January 1991, pp. 239-243. Ira Smotroff (ira at mitre.org) From sims at starbase.MITRE.ORG Tue Aug 13 07:18:00 1991 From: sims at starbase.MITRE.ORG (Jim Sims) Date: Tue, 13 Aug 91 07:18:00 EDT Subject: Request for info on satellite photo analysis. In-Reply-To: Ben Martin's message of Mon, 12 Aug 91 14:41:26 PDT <9108122141.AA08847@psych> Message-ID: <9108131118.AA06259@starbase.mitre.org> Some folks at MITRE Bedford were working on cloud ID using neural nets and satellite data. I think you could contact Ira Smotroff there.... try (627) 271-2000 (the operator) jim From slehar at park.bu.edu Tue Aug 13 11:59:09 1991 From: slehar at park.bu.edu (Steve Lehar) Date: Tue, 13 Aug 91 11:59:09 -0400 Subject: Request for info on satellite photo analysis. In-Reply-To: connectionists@c.cs.cmu.edu's message of 13 Aug 91 06:11:37 GM Message-ID: <9108131559.AA11710@park.bu.edu> > A colleague here at Stanford asked me if I knew of any applications > of networks to the problem of satellite photo analysis. He > mentioned hearing about a system that could recognize clouds. Any > information you can pass along would be appreciated. We did some work at Mitre Corp [1] on cloud recognition, which we presented at the INNC conference in Paris. My own part of the work involved mostly the Boundary Contour System / Feature Contour System (BCS/FCS) of Grossberg and Mingolla, which is a neural vision model that performs a variety of image enhancement and recognition functions, and which we used as a front-end for a backpropagation network to classify different cloud types, as identified (for the training set) by meteorological specialists. If you would like more details on the BCS, or my extension to it, the MRBCS, I would be happy to send you an informal description I have prepared as an ASCII file. If you are interested in the backprop end of it, or the overall research effort, write to ira at linus.mitre.org, or howells at linus.mitre.org for further information.
REFERENCES [1] Lehar S., Howells T, & Smotroff I. APPLICATION OF GROSSBERG AND MINGOLLA NEURAL VISION MODEL TO SATELLITE WEATHER IMAGERY. Proceedings of the INNC July 1990 Paris. From REJOHNSON at ORCON.dnet.ge.com Tue Aug 13 14:59:54 1991 From: REJOHNSON at ORCON.dnet.ge.com (REJOHNSON@ORCON.dnet.ge.com) Date: Tue, 13 Aug 91 14:59:54 EDT Subject: UNSUBSCRIBE Message-ID: <9108131859.AA25350@ge-dab.GE.COM> UNSUBSCRIBE From ingber at umiacs.UMD.EDU Wed Aug 14 15:04:18 1991 From: ingber at umiacs.UMD.EDU (Lester Ingber) Date: Wed, 14 Aug 1991 15:04:18 EDT Subject: ingber.eeg.ps.Z in Neuroprose archive Message-ID: <9108141904.AA05900@moonunit.umiacs.UMD.EDU> The paper ingber.eeg.ps.Z has been placed in the Neuroprose archive. This can be accessed by anonymous FTP on cheops.cis.ohio-state.edu (128.146.8.62) in the pub/neuroprose directory. This will laserprint out to 65 pages, so I give the abstract below to help you decide whether it's worth it. (I also enclose a referee's review afterwards to sway you the other way.) The six figures can be mailed on request, and I'm willing to make some hardcopies of the galleys or reprints, when they come, available. However, since this project is funded out of my own pocket, I might have to stop honoring such requests. The published paper will run 44 pages. This message may be forwarded to other lists. Lester Ingber ------------------------------------------------- Prof. Lester Ingber | P.O. Box 857, McLean, VA 22101 | 703-759-2769 | ingber at umiacs.umd.edu ------------------------------------------------- ======================================================================= Physical Review A, vol. 44 (6) (to be published 15 Sep 91) Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography Lester Ingber Science Transfer Corporation, P.O. Box 857, McLean, VA 22101 (Received 10 April 1991) A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. While not useful to yield insights at the single neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systematics. The necessity of including nonlinear and stochastic structures in this development has been stressed. In this paper, a more stringent test is placed on SMNI: The algebraic and numerical algorithms previously developed in this and similar systems are brought to bear to fit large sets of EEG and evoked potential data being collected to investigate genetic predispositions to alcoholism and to extract brain "signatures" of short-term memory. Using the numerical algorithm of Very Fast Simulated Re-Annealing, it is demonstrated that SMNI can indeed fit this data within experimentally observed ranges of its underlying neuronal-synaptic parameters, and use the quantitative modeling results to examine physical neocortical mechanisms to discriminate between high-risk and low-risk populations genetically predisposed to alcoholism. Since this first study is a control to span relatively long time epochs, similar to earlier attempts to establish such correlations, this discrimination is inconclusive because of other neuronal activity which can mask such effects.
However, the SMNI model is shown to be consistent with EEG data during selective attention tasks and with neocortical mechanisms describing short-term memory (STM) previously published using this approach. This paper explicitly identifies similar nonlinear stochastic mechanisms of interaction at the microscopic-neuronal, mesoscopic-columnar and macroscopic-regional scales of neocortical interactions. These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons. PACS Nos.: 87.10.+e, 05.40.+j, 02.50.+s, 02.70.+d ======================================================================= Report of Referee Manuscript No. AD4564 Over the years, I had several occasions to review papers by Lester Ingber. However, there was never time enough to fully digest and comprehend all of the details to convince myself that his efforts of developing a theoretical basis for describing neocortical brain functions are in fact sound and not just speculative. This paper dispels all those reservations and doubts, but unfortunately it is rather lengthy. This paper, and the research behind it, is pioneering, and it needs to be published. The question is whether Physical Review A is the appropriate journal. Since the paper reviews and presents in a rather comprehensive fashion the research by Lester Ingber in the area of modeling neocortical brain functions, I recommend that it be submitted to Reviews of Modern Physics. ======================================================================= ------------------------------------------------- Prof. Lester Ingber | P.O. Box 857, McLean, VA 22101 | 703-759-2769 | ingber at umiacs.umd.edu ------------------------------------------------- From yair at siren.arc.nasa.gov Wed Aug 14 20:42:47 1991 From: yair at siren.arc.nasa.gov (Yair Barniv) Date: Wed, 14 Aug 91 17:42:47 PDT Subject: PlaNet software Message-ID: <9108150042.AA05349@siren.arc.nasa.gov.> Does anyone have an idea of the goodness of the PlaNet software from the University of Colorado at Boulder compared to others such as the PDP software by McClelland and Rumelhart? Your response is appreciated. yair at siren.arc.nasa.gov From hussien at circe.arc.nasa.gov Wed Aug 14 20:56:59 1991 From: hussien at circe.arc.nasa.gov (Bassam Hussien) Date: Wed, 14 Aug 91 17:56:59 PDT Subject: No subject Message-ID: <9108150056.AA08370@circe.arc.nasa.gov.> From hussien at circe.arc.nasa.gov Wed Aug 14 20:38:32 1991 From: hussien at circe.arc.nasa.gov (Bassam Hussien) Date: Wed, 14 Aug 91 17:38:32 PDT Subject: Preprint on Texture Generation with the Random Neural Network In-Reply-To: Erol Gelenbe's message of Fri, 22 Feb 91 18:13:25 +2 <9102230855.AA15700@inria.inria.fr> Message-ID: <9108150038.AA08366@circe.arc.nasa.gov.> m From spotter at sanger Wed Aug 14 20:53:26 1991 From: spotter at sanger (Steve Potter) Date: Wed, 14 Aug 91 17:53:26 PDT Subject: Cultured Neural Nets? Message-ID: <9108150053.AA15897@sanger.bio.uci.edu> At the 1987 IEEE NN conference I heard a talk by a guy named Gross (I believe at U of Texas) who was describing a plan to grow neurons in culture dishes covered with electrodes.
After the neurons developed connections, the numerous electrodes would be used to stimulate and record from the "natural" neural net. A search of recent literature on MedLine did not turn up any such papers. Does anyone have any more news on this? Is anyone else out there doing similar work? Steve Potter U of Cal Irvine Psychobiology dept. (714)856-4723 spotter at sanger.bio.uci.edu From rob at galab2.mh.ua.edu Thu Aug 15 10:27:51 1991 From: rob at galab2.mh.ua.edu (Robert Elliott Smith) Date: Thu, 15 Aug 91 09:27:51 CDT Subject: IEEE SouthEastCon '92 Session on Neural Networks Message-ID: <9108151427.AA18974@galab2.mh.ua.edu> IEEE SouthEastCon '92 April 12-15, 1992 Birmingham, Alabama Session on Neural Networks Announcement and Call for Papers -------------------------------- General Conference Announcement: SouthEastCon is the yearly IEEE Region 3 Technical Conference for the purpose of bringing regional Electrical Engineering students and professionals together and sharing information, particularly through the presentation of technical papers. It is the most influential outlet in Region 3 for promoting awareness of technical contributions made by our profession to the advancement of engineering science and society. Attendance and professional program paper participation from areas outside Region 3 is encouraged and welcome. I am session chair for a planned SouthEastCon '92 session on neural networks. I am writing to encourage the submission of high quality papers that describe innovative work on neural network theory, analysis, design, and application. Instructions for paper submission: ---------------------------------- Acceptance Categories: 1) Full-length Papers (Refereed) Submit four copies of a paper not to exceed twenty (20) double-spaced, typewritten pages (including references and figures) to the Technical Program Chairman: Dr. Perry Wheless University of Alabama Department of Electrical Engineering Box 870286 Tuscaloosa, Alabama 35487-0286 (205) 348-1757 FAX: (205) 348-8573 email: wwheless at ua1vm.ua.edu by October 1, 1991. These papers will be fully refereed. Author notification will be mailed by November 19, 1991, and final camera-ready papers will be due on January 6, 1992. 2) Concise Papers: Submit four copies of a paper summary and a separate abstract to the Technical Program Chairman by September 17, 1991. The abstract must be provided on a separate sheet, and limited to one page. The summary should not exceed 500 words. The summary should be complete and should include (a) statement of problems or questions addressed, (b) objective of your work with regard to the problem, (c) approach employed to achieve objective, (d) progress, work performed, and (e) important results and conclusions. Since the summary will be the basis for selection, care should be taken in its preparation so that it is representative of the work reported. As an aid to the Papers Review Committee, please indicate that the paper is intended for the Neural Networks Session. Concise papers, not exceeding four (4) camera-ready Proceedings pages (including references and figures) will be published subject to acceptance by the Papers Review Committee and the author's fulfillment of additional requirements contained in the authors kit. Notification of acceptance and mailing of the author's kit will be on or before November 5, 1991, and the camera-ready papers will be due on January 6, 1992. Important Dates: ---------------- Concise Paper Abstract and Summary Deadline: Sept.
17, 1991 Full-length Paper Deadline: Oct. 1, 1991 Conference Dates: April 12-15, 1992 I hope to see you at SouthEastCon '92. ------------------------------------------- Robert Elliott Smith Department of Engineering of Mechanics The University of Alabama ------------------------------------------- From kolen-j at cis.ohio-state.edu Thu Aug 15 13:30:07 1991 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Thu, 15 Aug 91 13:30:07 -0400 Subject: Tech Report: Multiassociative Memory Message-ID: <9108151730.AA08632@retina.cis.ohio-state.edu> Another fine tech report available through the neuroprose archive at cis.ohio-state.edu: Multiassociative Memory John F. Kolen and Jordan B. Pollack Laboratory for AI Research Dept. of Computer and Information Sciences The Ohio State University Columbus, OH 43210 Abstract This paper discusses the problem of implementing many to many, or multiassociative, mappings with connectionist models. Traditional symbolic approaches explicitly represent all alternatives via stored links, or implicitly represent the alternatives through enumerative algorithms. Classical pattern association models ignore the issue of generating multiple outputs for a single input pattern. While recent research on recurrent networks looks promising, the field has not clearly focused on multiassociativity as a goal. In this paper, we define multiassociative memory and discuss its utility in cognitive modeling. We extend sequential cascaded networks to fit the task, and perform several initial experiments which demonstrate the feasibility of the concept. %ftp cis.ohio-state.edu Connected to cis.ohio-state.edu 220 news FTP server (SunOS 4.1) ready. Name (cis.ohio-state.edu:kolen-j): anonymous 331 Guest login ok, send ident as password Password: username 230 Guest login ok, access restrictions apply. ftp> cd pub/neuroprose 250 CWD command successful. ftp> get kolen.multi.ps.Z 200 PORT command successful. 150 ASCII data connection for kolen.multi.ps.Z (128.146.61.207,1336) (48345 bytes). 226 ASCII Transfer complete. local: kolen.multi.ps.Z remote: kolen.multi.ps.Z 48580 bytes received in 1.1 seconds (42 Kbytes/s) then uncompress and print the file %uncompress kolen.multi.ps.Z %lpr kolen.multi.ps From yair at siren.arc.nasa.gov Thu Aug 15 13:34:52 1991 From: yair at siren.arc.nasa.gov (Yair Barniv) Date: Thu, 15 Aug 91 10:34:52 PDT Subject: PlaNet Message-ID: <9108151734.AA05507@siren.arc.nasa.gov.> Hi guys, Can anyone out there tell me if the PlaNet Neural-Net software is worth the effort of learning it? How does it compare with the PDP software by McClelland and Rumelhart or other free or non-free software for the SUN? Thanks for any illumination in that regard. Yair Barniv From Prahlad.Gupta at K.GP.CS.CMU.EDU Thu Aug 15 13:35:30 1991 From: Prahlad.Gupta at K.GP.CS.CMU.EDU (Prahlad.Gupta@K.GP.CS.CMU.EDU) Date: Thu, 15 Aug 91 13:35:30 EDT Subject: PlaNet software Message-ID: I've just been looking at some simulators. I'd be very interested in other people's opinions, especially on simulators that provide the ability to configure architectures in non-standard ways, and that interface with pretty graphics displays. I'd be happy to e-mail anyone who's interested a compilation I made of info about NN simulators, culled from this mailing list from the last year or so. However, this mainly details the *availability* of the products, and not user reviews. Here's my (somewhat superficial) assessment: 1. Both the PDP and PlaNet systems allow you to set up *standard* sorts of models pretty fast.
2. Only PlaNet interfaces these with good graphics that let you examine network internals. 3. For models/architectures that depart significantly from "standard", you need to program things yourself. (a) With PDP, you need to work on the simulator code itself. (b) PlaNet provides a set of functions which are *meant* to give the user this capability. However, certain things aren't that easy to do -- eg. I couldn't see an easy way to handle I/O in my own data format (as opposed to PlaNet's imposed file format) without writing my own I/O routines. 4. If you really need to devise strange networks and don't want to write a simulator yourself, you need fairly extensive programming capability *within* an existing simulator. As far as I can tell, the Rochester Connectionist Simulator provides this. However, I haven't yet examined it closely, and I'm not sure how good its graphics capabilities are for someone who'd rather not dive into X programming. -- Prahlad From ubli at ivy.Princeton.EDU Thu Aug 15 15:50:24 1991 From: ubli at ivy.Princeton.EDU (Urbashi Mitra) Date: Thu, 15 Aug 91 15:50:24 EDT Subject: training in correlated noise Message-ID: <9108151950.AA14733@ivy.Princeton.EDU> i'm interested in any work that may have been done on the convergence properties (i.e. will the learning algorithm converge?) of neural networks (perceptron models) that are trained with *correlated noise* present. the individual training (desired) data are independent from time sample to time sample, but the noise is not. the noise is, however, m-dependent, i.e. noise samples more than m time samples apart, *are* independent. i'm looking for work along the lines of john shynk's (ece dept. ucsb) that is similar to adaptive filtering results, but ANYTHING, in any form in this area would be greatly appreciated. thanks, urbashi mitra From koch at CitIago.Bitnet Thu Aug 15 15:57:57 1991 From: koch at CitIago.Bitnet (Christof Koch) Date: Thu, 15 Aug 91 12:57:57 PDT Subject: Cultured Neural Nets? In-Reply-To: Your message <9108150053.AA15897@sanger.bio.uci.edu> dated 14-Aug-1991 Message-ID: <910815125706.2040b65c@Iago.Caltech.Edu> Yes, there exist a number of groups out there trying to grow neurons onto silicon studded with various electrodes and other sensors: Jerry Pine at Caltech and David Tank at AT & T are both doing this sort of work. For a recent reference check the May or June issues of Science magazine. A German researcher, Fromherz from Ulm, had a paper discussing his way of interfacing silicon circuits with single neurons. Christof From B344DSL at UTARLG.UTA.EDU Fri Aug 16 15:02:00 1991 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Fri, 16 Aug 91 14:02 CDT Subject: Book out and available, finally Message-ID: <2CA5B6F7523F001FB5@utarlg.uta.edu> Some of you saw my announcement over a year ago of the textbook, Introduction to Neural and Cognitive Modeling, and by now are wondering if the book is an imaginary construct. I am pleased to say that it isn't: it is now out and available from Lawrence Erlbaum Associates. It is only $19.95 in paperback (it's a lot more in hardback) and can be ordered at 1-800-9-BOOKS-9. Their address is Lawrence Erlbaum Associates, Inc., 365 Broadway, Hillsdale, NJ 07642-1487. It took a long time because we did the typesetting here, using WordPerfect with about 150 CorelDraw figures and lots of equations. You might be interested in Erlbaum's current order form, since they have several other books recently out or in press that are likely to be of interest.
These include Neuroscience and Cognition (title?) edited by Gluck and Rumelhart; Neural Networks for Conditioning and Action, edited by Commons, Grossberg, and Staddon; Brain and Perception by Pribram; Income and Choice in Biological Systems by Rosenstein; and Motivation, Emotion, and Goal Direction, edited by Leven and myself. (The latter should be Motivation, Emotion, and Goal Direction in Neural Networks and will be ready in about a month.) Daniel S. Levine Dept. of Math., UT Arlington Arlington, TX 76019-0408 817-273-3598 b344dsl at utarlg.uta.edu From rob at galab2.mh.ua.edu Fri Aug 16 08:49:39 1991 From: rob at galab2.mh.ua.edu (Robert Elliott Smith) Date: Fri, 16 Aug 91 07:49:39 CDT Subject: newer RCS? Message-ID: <9108161249.AA19813@galab2.mh.ua.edu> Hi, Since there is some current discussion on various simulators, I thought I'd bring this up. I retrieved the Rochester Connectionist Simulator (dated 1989) for use in a class last year. I really like the way it was designed, but I did locate some significant bugs: 1) The order of execution in the program *does not* match the manual! Check page 120 of the manual, the paragraph that starts "The execution cycle...", versus the source code. I've forgotten exactly what file the problem is in, but this is a significant problem if you want to design your own architectures. However, it's easy to fix. I'll check the details if the authors write me back. 2) The backprop simulator (admittedly) abuses the structure of the RCS itself. However, it seems to me that backprop could be built without this abuse. 3) The system works with X11R3, but is a pain to get to run under R4. Despite these complaints, I really like the RCS! It's the only simulator I've seen that has a truly flexible design philosophy, and doesn't require you to learn a new programming language (you simply write subroutines in C, and compile them with the RCS library). Has anyone got a comprehensive update of RCS? Are the original authors out there? If not, can we (those who have hacked parts of RCS) fix it up to realize its potential? If it's the latter, please don't bombard me with requests for my fixes, but if you have others, write, and maybe we can put the whole thing together. Rob. ------------------------------------------- Robert Elliott Smith Department of Engineering of Mechanics The University of Alabama P. O. Box 870278 Tuscaloosa, Alabama 35487 <> @ua1ix.ua.edu:rob at galab2.mh.ua.edu <> (205) 348-1618 <> (205) 348-8573 ------------------------------------------- From lacher at lambda.cs.fsu.edu Sat Aug 17 11:07:49 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Sat, 17 Aug 91 11:07:49 -0400 Subject: PLEASE!!! Message-ID: <9108171507.AA02137@lambda.cs.fsu.edu> Please don't send binary or postscript files out over connectionists. This causes extreme inconvenience in some cases, such as reading mail over a PC-modem-Unix connection (I just spent 8 minutes watching drivel fly by on my PC screen at 2400 baud). From schraudo at cs.UCSD.EDU Fri Aug 16 17:49:38 1991 From: schraudo at cs.UCSD.EDU (Nici Schraudolph) Date: Fri, 16 Aug 91 14:49:38 PDT Subject: neuroprose update of hertz.refs.bib.Z Message-ID: <9108162149.AA25270@beowulf.ucsd.edu> The hertz.refs.bib.Z bibliography in the neuroprose archive has been improved: I've moved the address information of @inproceedings entries around to follow the recommendations for BibTeX 0.99a: conference location in the address field, publisher address in the publisher field. 
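For anyone updating their own .bib files, a hypothetical @inproceedings entry laid out this way (all bibliographic details below are invented purely for illustration) would be:

@inproceedings{doe91example,
  author    = "J. Doe",
  title     = "An Example Entry",
  booktitle = "Proceedings of Some Neural Network Conference",
  address   = "Seattle, WA",
  publisher = "Morgan Kaufmann, San Mateo, CA",
  year      = 1991,
  pages     = "1--8"
}

That is, the address field names the conference site, while the publisher field carries the publisher together with its address.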
This eliminates the ugly hack of putting conference locations in the organization field, which led to strange results in some bibliography styles. Best regards, -- Nicol N. Schraudolph, CSE Dept. | work (619) 534-8187 | nici%cs at ucsd.edu Univ. of California, San Diego | FAX (619) 534-7029 | nici%cs at ucsd.bitnet La Jolla, CA 92093-0114, U.S.A. | home (619) 273-5261 | ...!ucsd!cs!nici From jfj at m53.limsi.fr Sun Aug 18 07:40:26 1991 From: jfj at m53.limsi.fr (Jean-Francois Jadouin) Date: Sun, 18 Aug 91 13:40:26 +0200 Subject: newer RCS? Message-ID: <9108181140.AA12206@m53.limsi.fr> Hi ! I've done some work with RCS, and found I had to rewrite the Backprop algorithms pretty completely. I can try and make these changes available, if people are interested, though they are not very pretty. We also kludged together a version of Kohonen's maps, that seem to work, though slowly. My own impression of the RCS is that it is an excellent tool, but its data structures are a little limiting: implementing Backprop, for instance, required kludging the algorithms in order to store the required data in a 'neuron' data structure that didn't provide the necessary slots. In the same vein, access to the graphic displays is a little difficult if you want to do something that wasn't provided for in the original design. Final note: we were able to obtain an X11R4 version of RCS. Contact the RCS mailing list for details: administrative requests: simulator-request at cs.rochester.edu mailing list: simulator-users at cs.rochester.edu Regards, jfj From COSIC at cgi.com Tue Aug 20 13:01:00 1991 From: COSIC at cgi.com (COSIC@cgi.com) Date: Tue, 20 Aug 91 13:01 EDT Subject: Please remove me from this mailing list. Message-ID: From goodman at crl.ucsd.edu Tue Aug 20 18:27:46 1991 From: goodman at crl.ucsd.edu (Judith Goodman) Date: Tue, 20 Aug 91 15:27:46 PDT Subject: No subject Message-ID: <9108202227.AA22200@crl.ucsd.edu> Please add me to your mailing list. Thanks very much goodman at crl.ucsd.edu From gbugmann at nsis86.cl.nec.co.jp Wed Aug 21 15:12:49 1991 From: gbugmann at nsis86.cl.nec.co.jp (Guido Bugmann) Date: Wed, 21 Aug 91 15:12:49 V Subject: Cultured Neural Nets? In-Reply-To: Your message of Wed, 14 Aug 91 17:53:26 PDT. <9108150053.AA15897@sanger.bio.uci.edu> Message-ID: <9108210612.AA02876@nsis86.cl.nec.co.jp> Neuron cultures on substrates prepared by nanofabrication techniques (grooves and microcontacts) have been realized by the group of the professors C.D.W. Wilkinson and Adam Curtis at the University of Glasgow. I dont have references on their most recent work. Older references are: P. Clark, P. Connolly, A.S.G. Curtis, J.A.T. Dow & C.D.W. Wilkinson Res. Developmental Biology, 99 (1987) 439-448 J.A.T. Dow, P. Clark, P. Connolly, A.S.G. Curtis & C.D.W. Wilkinson J. Cell. Sci. Suppl., 8 (1987) 55-79 ------------------------------------------------- Guido Bugmann NEC Fund. Res. Lab. 34 Miyukigaoka Tsukuba, Ibaraki 305 Japan ------------------------------------------------- From mpp at cns.brown.edu Wed Aug 21 12:57:32 1991 From: mpp at cns.brown.edu (Michael P. Perrone) Date: Wed, 21 Aug 91 12:57:32 EDT Subject: publishing IJCNN supplementary poster papers Message-ID: <9108211657.AA00805@cns.brown.edu> For all those interested in having their IJCNN Supplementary Poster Session papers published: Due to the slow response from supplementary poster session participants, the deadline for submission has been extended to the end of September. 
Feel free to contact Derek Stubbs or Edward Rosenfeld if you have further questions. The following is a copy of the announcement that was distributed at IJCNN-91. Michael Perrone Box 1843 Center for Neural Science Brown University Providence, RI 02912 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ARE YOU INTERESTED IN HAVING YOUR POSTER PAPER PUBLISHED? Publishing is expensive and the organizers of IJCNN-91 Seattle have been unable to publish all the papers submitted in this year's Proceedings. This is a setback for researchers. As a result, we hope to publish the Supplemental Poster Session papers as a separate publication. We are entering this endeavor in the spirit of self-organization typical of neural networks. We will contact potential publishers. However, please note, we cannot assure you that we will succeed in publishing these papers. We will do our best. Here are the steps that you can take towards potential publication of your paper: 1. Provide two (2) camera-ready copies of your poster paper on 8.5 by 11 inch paper, with one inch margins all around. No more than four (4) pages please. Use the title, author and reference system that appear in the IJCNN Proceedings. 2. Provide a keyword list (at the end of the Abstract) so that we can create and include a keyword index. Use at least five keywords for each paper. 3. By August 31st, 1991, send copies, marked clearly: IJCNN-91, to either editor at the addresses below. [NOTE: I talked with Ed Rosenfeld and he said that this deadline had been extended to the end of September. -MP] EDITORS: DEREK STUBBS Sixth Generation Systems PO box 155 Vicksburg, MI 49097-0155 (phone: 1-616-649-3772) EDWARD ROSENFELD Intelligence PO box 20008 New York, NY 10025-1510 (phone: 1-212-222-1123) +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From goodman at crl.ucsd.edu Wed Aug 21 11:06:26 1991 From: goodman at crl.ucsd.edu (Judith Goodman) Date: Wed, 21 Aug 91 08:06:26 PDT Subject: mailing list Message-ID: <9108211506.AA25484@crl.ucsd.edu> Please add me to your mailing list. Thanks very much goodman at crl.ucsd.edu From tedwards at wam.umd.edu Wed Aug 21 15:21:24 1991 From: tedwards at wam.umd.edu (Thomas VLSI Edwards) Date: Wed, 21 Aug 91 15:21:24 -0400 Subject: Synchronization Binding? Freq. Locking? Bursting? Message-ID: <9108211921.AA04522@avw.umd.edu> I have just read "Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception" (Grossberg, Somers, Neural Networks Vol. 4 pp 453-466). It describes some models of phase-locking (supposedly neuromorphic) relaxation oscillators, including a cooperative bipole coupling which appears similar to the Kammen comparator model, and fits into BCS theory. I am curious at this date what readers of connectionists think about the theory that synchronous oscillations reflect the binding of local feature detectors to form coherent groups. I am also curious as to whether or not phase-locking of oscillators is a reasonable model of the phenomena going on, or whether synchronized bursting, yet not frequency-locked oscillation, is a more biologically acceptable answer. Incidentally, my current work involves VLSI circuits which perform a partitionable phase-locking of multiple oscillators using a method similar to the "comparator model." (Circuit fabricated, technical report in preparation). The figures in Grossberg's paper look much like the responses of my oscillators, so I'll take that as an encouraging sign. 
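For readers who have not seen the comparator idea, here is a toy sketch in NumPy -- mean-field phase coupling, not the circuit described above nor the Grossberg-Somers model -- in which each oscillator is nudged toward the population-average phase; with enough coupling the ensemble phase-locks:

import numpy as np

# Toy mean-field ("comparator"-style) phase coupling: each oscillator is
# pulled toward the average phase of the population.  With coupling K large
# relative to the spread of natural frequencies the units phase-lock;
# with K = 0 they drift apart.
rng = np.random.default_rng(0)
n, K, dt, steps = 20, 2.0, 0.01, 5000
omega = rng.normal(10.0, 0.5, n)          # natural frequencies (rad/s)
theta = rng.uniform(0, 2 * np.pi, n)      # initial phases

for _ in range(steps):
    # the circular mean of the phases plays the role of the comparator signal
    mean_phase = np.angle(np.mean(np.exp(1j * theta)))
    theta += dt * (omega + K * np.sin(mean_phase - theta))

# coherence r is near 1 when the population is phase-locked, near 0 otherwise
r = np.abs(np.mean(np.exp(1j * theta)))
print(f"phase coherence r = {r:.3f}")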
-Thomas Edwards tedwards at avw.umd.edu From gbugmann at nsis86.cl.nec.co.jp Thu Aug 22 16:52:17 1991 From: gbugmann at nsis86.cl.nec.co.jp (Guido Bugmann) Date: Thu, 22 Aug 91 16:52:17 V Subject: Cultured Neural Nets? In-Reply-To: Your message of Wed, 14 Aug 91 17:53:26 PDT. <9108150053.AA15897@sanger.bio.uci.edu> Message-ID: <9108220752.AA01505@nsis86.cl.nec.co.jp> Neuron cultures on substrates prepared by nanofabrication techniques (grooves and microcontacts) have been realized by the group of the professors C.D.W. Wilkinson and Adam Curtis at the University of Glasgow. I dont have references on their most recent work. Older references are: P. Clark, P. Connolly, A.S.G. Curtis, J.A.T. Dow & C.D.W. Wilkinson Res. Developmental Biology, 99 (1987) 439-448 J.A.T. Dow, P. Clark, P. Connolly, A.S.G. Curtis & C.D.W. Wilkinson J. Cell. Sci. Suppl., 8 (1987) 55-79 ------------------------------------------------- Guido Bugmann NEC Fund. Res. Lab. 34 Miyukigaoka Tsukuba, Ibaraki 305 Japan ------------------------------------------------- From qin at eng.umd.edu Thu Aug 22 09:33:13 1991 From: qin at eng.umd.edu (Si-Zhao Qin) Date: Thu, 22 Aug 91 09:33:13 -0400 Subject: delete Message-ID: <9108221333.AA13467@cm14.eng.umd.edu> Please delete me from the mailing list. From yair at siren.arc.nasa.gov Thu Aug 22 17:29:02 1991 From: yair at siren.arc.nasa.gov (Yair Barniv) Date: Thu, 22 Aug 91 14:29:02 PDT Subject: PlaNet In-Reply-To: chk@occs.cs.oberlin.edu's message of Thu, 22 Aug 91 16:39:09 -0400 <9108222039.AA25235@occs.cs.cs.oberlin.edu> Message-ID: <9108222129.AA00209@siren.arc.nasa.gov.> I know that Prahlad.Gupta at K.GP.CS.CMU.EDU has a list of NNet software. good luck, Yair From chk at occs.cs.oberlin.edu Thu Aug 22 16:39:09 1991 From: chk at occs.cs.oberlin.edu (chk@occs.cs.oberlin.edu) Date: Thu, 22 Aug 91 16:39:09 -0400 Subject: PlaNet In-Reply-To: Your message of "Thu, 15 Aug 91 10:34:52 PDT." <9108151734.AA05507@siren.arc.nasa.gov.> Message-ID: <9108222039.AA25235@occs.cs.cs.oberlin.edu> Yair Barniv -- I just noticed your note dated August 15 on the connectionist mailing list about a comparison between neural network simulators. I'm wondering if you would be willing to forward to me any useful messages that you may receive in reply? I have the same question as you about which of the many simulators out there are worthwhile learning. Thanks! Chris Koch Oberlin College, Computer Science From mike at ailab.EUR.NL Fri Aug 23 12:48:36 1991 From: mike at ailab.EUR.NL (mike@ailab.EUR.NL) Date: Fri, 23 Aug 91 18:48:36 +0200 Subject: Analysis of Time Sequences Message-ID: <9108231648.AA04606@> Dear Connectionists, I am working in a project where we want to use a neural network for online analysis of sequences of sensor measurements over discrete time steps to detect abnormalities as soon as possible. The obvious thing to me to handle the problem of time would be to, at a given time t, look back a fixed number of n time steps and analyze the points from time t-n to t. Now, I would like to know what alternative approaches for dealing with time sequences there are. Could anybody please give me any references on that topic? Thank you in advance, Michael Tepp mike at ailab.eur.nl From cjiang at ee.WPI.EDU Fri Aug 23 10:32:56 1991 From: cjiang at ee.WPI.EDU (Caixia Jiang) Date: Fri, 23 Aug 91 10:32:56 -0400 Subject: delete Message-ID: <9108231432.AA08179@ee.WPI.EDU> Please delete my name from the mailing list. 
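A minimal NumPy sketch of the fixed look-back window Michael Tepp describes above (the sensor readings are invented; each window is flattened into one input vector that could be fed to a network):

import numpy as np

def sliding_windows(x, n):
    """Row t of the result holds samples t-n+1 .. t of x, flattened,
    so each row can serve as one network input vector."""
    t_total, d = x.shape
    windows = np.empty((t_total - n + 1, n * d))
    for t in range(n - 1, t_total):
        windows[t - n + 1] = x[t - n + 1 : t + 1].ravel()
    return windows

# e.g. 500 time steps of 3 sensor channels, analysed 10 steps at a time
readings = np.random.rand(500, 3)
X = sliding_windows(readings, n=10)
print(X.shape)   # (491, 30)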
From fellous%pipiens.usc.edu at usc.edu Fri Aug 23 21:00:43 1991 From: fellous%pipiens.usc.edu at usc.edu (Jean-Marc Fellous) Date: Fri, 23 Aug 91 18:00:43 PDT Subject: Single Neuron Simulators Message-ID: <9108240100.AA01291@pipiens.usc.edu> Dear Members of the Connectionist mailing list, I would like to go trough the existing (and operational) Single Neuron simulators that allow fine grain analysis (electrical & chemical) of neurons and interconnected neurons (small number of them). I would appreciate any references, email/mail addresses, criticisms on this matter. My ultimate goal is to determine the required level of sophistication a connectionist modeler needs to take into account in order to simulate individual neurons and networks of neurons (depending of course of his/her goal). I know already about: NSL (USC), GENESIS (Caltech), NEMOSYS (LLNL), CAJAL91 (USC) I will summarize the answers to the list. Thank you in advance, Yours, Jean-Marc Fellous Center for Neural Engineering University of Southern California Los Angeles From marwan at ee.su.OZ.AU Fri Aug 23 22:51:09 1991 From: marwan at ee.su.OZ.AU (Marwan Jabri) Date: Sat, 24 Aug 1991 12:51:09 +1000 Subject: Job opportunities Message-ID: <9108240251.AA16536@brutus.ee.su.OZ.AU> Research Fellows / Professional Assistants (2) Department of Electrical Engineering The University of Sydney Australia Microelectronics Implementation of Neural Networks based Devices for the Analysis and Classification of Medical Signals (Re-Advertised) Applications are invited from persons to work on an advanced neural network application project in the medical area. The project is being funded jointly by the Australian Government and a high-technology manufacturer of medical products. Appointees will be joining an existing team of 3 staff. The project is the research and development of VLSI architectures of neural networks for pattern classification applications. The project spans over a 2-year period and follows a preliminary study which was completed in early 1991. Applicants with expertise in one or more of the following areas are particularly invited to apply: - Neural network architectures and hardware implementation - Analog and/or digital VLSI design - Electronic systems design - Hardware development and design - EEPROM storage technologies - Analog circuits modeling and simulation The appointments will be for a period of 2 years. Applicants should have an Electrical/Electronic Engineering degree or equivalent. The appointees may apply for enrollment towards a postgraduate degree (part-time). Salary range according to qualifications: $25k - $40k Method of application: --------------------- Applications including curriculum vitae, list of publications and the names, addresses and Fax numbers of three referees should be sent to: Dr M.A. Jabri, Sydney University Electrical Engineering Building J03 NSW 2006 Australia Tel: (+61-2) 692-2240 Fax: (+61-2) 692-3847 Email: marwan at ee.su.oz.au From lacher at lambda.cs.fsu.edu Sat Aug 24 19:32:11 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Sat, 24 Aug 91 19:32:11 -0400 Subject: Analysis of Time Sequences Message-ID: <9108242332.AA00585@lambda.cs.fsu.edu> If the measurements occur at discrete but non-uniformly spaced times, they can be converted to uniformly spaced pseudomeasurements (for comparability) using interpolation of some sort. We are trying cubic splines in one project where the data sample times depend on human subjects' recording "4 times daily". 
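A minimal sketch of that resampling step, assuming SciPy is available (the measurement times and values below are invented for illustration):

import numpy as np
from scipy.interpolate import CubicSpline

# Irregularly timed measurements, e.g. "4 times daily" but not evenly spaced
t_obs = np.array([0.0, 5.5, 11.0, 19.0, 24.0, 30.5, 35.0, 47.5])    # hours
y_obs = np.array([38.2, 38.3, 38.9, 39.4, 38.8, 38.6, 39.1, 39.7])  # a reading

# Fit a cubic spline through the observations, then sample on a uniform grid
spline = CubicSpline(t_obs, y_obs)
t_uniform = np.arange(0.0, 48.0, 1.0)    # one pseudomeasurement per hour
y_uniform = spline(t_uniform)

# y_uniform can now be fed to any fixed-step analysis (e.g. a sliding window)
print(y_uniform[:5])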
Chris Lacher From mike at ailab.EUR.NL Mon Aug 26 14:57:32 1991 From: mike at ailab.EUR.NL (mike@ailab.EUR.NL) Date: Mon, 26 Aug 91 20:57:32 +0200 Subject: Analysis of Time Sequences In-Reply-To: Chris Lacher's message of Sat, 24 Aug 91 19:32:11 -0400 <9108242332.AA00585@lambda.cs.fsu.edu> Message-ID: <9108261857.AA05986@> Dear Chris Lacher, Thanks a lot for your reply to my question on analyzing time sequences. I appreciate your help on this subject very much. regards Michael Tepp mike at ailab.eur.nl From hazem at gwusun.gwu.edu Mon Aug 26 11:25:35 1991 From: hazem at gwusun.gwu.edu (ali) Date: Mon, 26 Aug 91 11:25:35 EDT Subject: mailing list Message-ID: <9108261525.AA00401@ac080a.gwu.edu> Please add me to your mailing list. Thank you hazem at eesun.gwu.edu From tenorio at ecn.purdue.edu Mon Aug 26 11:56:18 1991 From: tenorio at ecn.purdue.edu (Manoel Fernando Tenorio) Date: Mon, 26 Aug 91 10:56:18 -0500 Subject: Analysis of Time Sequences In-Reply-To: Your message of Fri, 23 Aug 91 18:48:36 +0200. <9108231648.AA04606@> Message-ID: <9108261556.AA21734@dynamo.ecn.purdue.edu> -------- The other methods more commonly used for temporal phenomena are: 1. memorize delayed inputs 2. recursion (state, input, output) 3. sliding time window 4. Hysteresis The first 3 require the knowledge of the size of the time dependence, and are fixed, although Weigend has recently shown that for prediction problems, this is not a big problem since larger than necessary windows work. -- M. F. Tenorio --- Your message of: Friday, 08/23/91 --- From: mike at ailab.EUR.NL Subject: Analysis of Time Sequences Dear Connectionists, I am working on a project where we want to use a neural network for online analysis of sequences of sensor measurements over discrete time steps to detect abnormalities as soon as possible. The obvious thing to me to handle the problem of time would be to, at a given time t, look back a fixed number of n time steps and analyze the points from time t-n to t. Now, I would like to know what alternative approaches for dealing with time sequences there are. Could anybody please give me any references on that topic? Thank you in advance, Michael Tepp mike at ailab.eur.nl --- end of message --- From denis at hebb.psych.mcgill.ca Mon Aug 26 14:08:44 1991 From: denis at hebb.psych.mcgill.ca (Denis Mareschal) Date: Mon, 26 Aug 91 14:08:44 EDT Subject: Genetron references Message-ID: <9108261808.AA20610@hebb.psych.mcgill.ca.psych.mcgill.ca> Hi! Does anybody out there know where I could get some information on the following: 1. Seymour Papert's early 60's work on GENETRONs. I've already got a copy of his chapter in La Filiation des Structures (1963) but haven't been able to track down anything else (or in English for that matter). 2. A network design that would randomly (or pseudo-randomly) select one among k (equally valid) simultaneously presented input stimuli. For example: Given an array of nodes whose values lie in the interval [0,1] select one from among those whose value is above a threshold of say 0.5. Any help at tracking these down would be greatly appreciated. Thanks a lot Cheers, Denis Mareschal From uli at ira.uka.de Tue Aug 27 05:26:39 1991 From: uli at ira.uka.de (Uli Bodenhausen) Date: Tue, 27 Aug 91 10:26:39 +0100 Subject: No subject Message-ID: Subject: Re: Analysis of Time Sequences In-reply-to: Your message of "Mon, 26 Aug 91 10:56:18 EST." 
<9108261556.AA21734 at dynamo.ecn.purdue.edu> -------- Manoel Fernando Tenorio writes: The other methods more commonly used for temporal phenomena are: 1. memorize delayed inputs 2. recursion (state, input, output) 3. sliding time window 4. Hysteresis The first 3 require the knowledge of the size of the time dependence, and are fixed, although Weigend has recently shown that for prediction problems, this is not a big problem since large than necessary windows work. --------------------------- I'd like to point out that it is possible to derive a learning algorithm that adjusts the size of time-windows automatically (Bodenhausen and Waibel, last NIPS proceedings and last ICASSP proceedings). Uli From jfj at m53.limsi.fr Tue Aug 27 08:30:21 1991 From: jfj at m53.limsi.fr (Jean-Francois Jadouin) Date: Tue, 27 Aug 91 14:30:21 +0200 Subject: Time-unfolding Message-ID: <9108271230.AA03904@m53.limsi.fr> Dear connectionists, I've been doing a little work with Time-Unfolding Networks (first mentioned, I think, in PDP), and getting pretty terrible results. My intuition is that I've misunderstood the model. Does anyone out there use this model ? If so, would you be prepared to exchange benchmark results (or even better, software) and compare notes ? A little discouraged, jfj From lissie!botsec7!botsec1!dcl at uunet.UU.NET Tue Aug 27 13:09:49 1991 From: lissie!botsec7!botsec1!dcl at uunet.UU.NET (David Lambert) Date: Tue, 27 Aug 91 13:09:49 EDT Subject: Radial Basis Functions Message-ID: <9108271709.AA00629@botsec1.bot.COM> Hi. Could someone post a brief description of RBF techniques as applied to this field? References would also be most appreciated. Thanks. David Lambert From denis at hebb.psych.mcgill.ca Tue Aug 27 09:01:35 1991 From: denis at hebb.psych.mcgill.ca (Denis Mareschal) Date: Tue, 27 Aug 91 09:01:35 EDT Subject: No subject Message-ID: <9108271301.AA21302@hebb.psych.mcgill.ca.psych.mcgill.ca> Subject: Genetron references Hi! Does anybody out there know where I could get some information on the following: 1. Seymour Papert's early 60's work on GENETRONs. I've already got a copy of his chapter in La Filiation des Structures (1963) but haven't been able to track down anything else (or in english for that matter). 2. A network design that would randomly (or pseudo-randomly) select one among k (equally valid) simultaneously presented input stimuli. For example: Given an array of nodes whoses values lie in the interval [0,1] select one from among those whose value is above a threshold of say 0.5. Any help at tracking these down would be greatly appreciated. Thanks a lot Cheers, Denis Mareschal From dlovell at s1.elec.uq.oz.au Wed Aug 28 11:09:18 1991 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Wed, 28 Aug 91 11:09:18 GMT+1000 Subject: Translation Invariance in Human Visual System? Message-ID: <9108280109.AA11569@c14.elec.uq.oz.au> Dear Connectionists, Recently we (at the University of Queensland) had the pleasure of a visit from Prof. Thomas Huang of the University of Illinois. In one of his presentations on image compression he mentioned that a team of French researchers had recently published information which showed the response of the human visual system not to be invariant under certain translations of input. Prof. Huang did not know the details of this paper at the time but said that he would pass them on to me when he returned to Illinois (in mid-September). 
The problem that I am now faced with is that I would like to get hold of this paper by Friday, August 30 and I don't know where to look (and I don't have much spare time between now and then). Any assistance would be greatly appreciated. Regards David -- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept. Electrical Engineering | University of Queensland | BRISBANE 4072 | Australia | | From wimberly at bosque.psc.edu Wed Aug 28 12:35:44 1991 From: wimberly at bosque.psc.edu (wimberly@bosque.psc.edu) Date: Wed, 28 Aug 91 12:35:44 EDT Subject: Position at Pittsburgh Supercomputing Center Message-ID: <9108281635.AA05743@bosque.psc.edu> TITLE: Scientific Specialist DEPARTMENT: Pittsburgh Supercomputing Center Carnegie Mellon University This position will support the Center in its role as a national resource in scientific computing in the area of biomedical research. Specifically, this work will be undertaken in the area of connectionist artificial intelligence, as applied to problems in medicine and biology, or in the area of modeling biological neuronal processes, or both. The incumbent will undertake efforts in training, high-level user consulting and collaborative research. He or she will provide leadership and coordination for a community of the Center's users interested in neural modeling and connectionist AI. QUALIFICATIONS: An earned doctorate in Computer Science, Neural Science, Psychology or a related discipline is required. A publication record is desirable. Strong interpersonal and communications skills are required and candidates should have "hands-on" experience designing, developing and documenting software systems. Supercomputing experience with vector processors or massively parallel architectures is highly desirable. To apply, send a letter and resume to: Dr. Frank C. Wimberly Scientific Applications Coordinator Pittsburgh Supercomputing Center 4400 Fifth Avenue Pittsburgh, PA 15213 wimberly at psc.edu (internet) wimberly at cpwpsca (bitnet) From jcp at vaxserv.sarnoff.com Wed Aug 28 13:24:16 1991 From: jcp at vaxserv.sarnoff.com (John Pearson W343 x2385) Date: Wed, 28 Aug 91 13:24:16 EDT Subject: Analysis of Time Sequences Message-ID: <9108281724.AA27660@sarnoff.sarnoff.com> In response to: I'd like to point out that it is possible to derive a learning algorithm that adjusts the size of time-windows automatically (Bodenhausen and Waibel, last NIPS proceedings and last ICASSP proceedings). Uli I would like to add that Bert de Vries and Jose Principe have been developing a neural network model with variable (and trainable) time delays for temporal sequence processing. See NIPS-90, page 162. John Pearson David Sarnoff Research Center CN5300 Princeton, NJ 08543 609-734-2385 jcp at as1.sarnoff.com From liaw at ecn.purdue.edu Wed Aug 28 13:34:49 1991 From: liaw at ecn.purdue.edu (Liaw Jin-Nan) Date: Wed, 28 Aug 91 12:34:49 -0500 Subject: Radial Basis Functions Message-ID: <9108281734.AA28659@betta.ecn.purdue.edu> Hi, As far as I know there are several related papers which apply Radial Basis Function to neural networks. Some of these articles are listed below: S. M. Botros and C. G. Atkeson (1990) "Generalization properties of Radial Basis Functions", In R. P. Lippmann, J. E. Moody, D. S. Touretzky, ed.s, Advances in Neural Information Processing Systems 3, pp. 707-713, Morgan Kaufmann, San Mateo, CA. J. Moody and C. Darken (1989) "Fast learning in networks of locally tuned processing units", Neural Computation 1(2), pp. 281-294. T. Poggio and F. 
Girosi (1990) "Networks for approximation and learning", Proceedings of IEEE 78(9), pp. 1481-1497. M. J. D. Powell (1987) "Radial basis functions for multivariable interpolation: A review", In J. C. Mason and M. G. Cox(ed.), Algorithms for Approximation, pp. 143-167, Clarendon Press, Oxford. S. Renals and R. Rohwor (1989) "Phoneme classification experiments using radial basis functions", IJCNN, PP. I-462 - I-467, Washington, D. C. T. D. Sanger (1990) "Basis-function trees for approximation in high-dimensional spaces", Proceedings of 1990 Connectionist Summer School, pp. 145-151, Morgan Kaufmann, San Mateo, CA. T. D. Sanger (1991) "A tree-structure algorithm for reducing computation in networks with separable basis functions", Neural Computation, 3(1), pp. 67-81. Cheers, Jin-Nan Liaw liaw at betta.ecn.purdue.edu From port at iuvax.cs.indiana.edu Wed Aug 28 13:38:09 1991 From: port at iuvax.cs.indiana.edu (Robert Port) Date: Wed, 28 Aug 91 12:38:09 EST Subject: Analysis of time sequences without a window Message-ID: Tenorio mentioned several schemes for collecting information about patterns in time sequences. As he noted, several are essentially the same -- Time Windows and Delay Lines -- since a static (or fixed-window) slice of the signal is stored and advanced through a structure that preserves all info in the inputs -- that is, it preserves a raw record of inputs. (Im not sure what he means by `recursion'.) As long as the bandwidth of input is very small (that is, when there is a small set of possible inputs), this technique is fine. (Though, as noted before, an apriori window size imposes a limit on the maximum length of pattern that can be learned). But for a domain like HEARING, where the entire acoustic spectrum is sampled (at sampling rates that vary with frequency), the idea of storing EVERYTHING that comes in is computationally intractable -- at least, it apparently is for human hearing (and surely hearing in other animals as well). Despite the intuitive appeal of theories about `echoic memory', `precategorical acoustic store', etc, the evidence shows that these `acoustic memories' do NOT contain anything like raw sound spectra for any length of time. Instead, these memories should be called `auditory' since they contain `names' (or categories of some sort) for learned patterns. (See my paper in Connection Science, 1990) One source of evidence for this is simply that when an acoustic temporal pattern is REALLY NOVEL - eg, an artificial pattern completely unlike speech or other sounds from our environment - then listeners do NOT have a veridical representation of it that can be retained for a second or two. See experiments by CS Watson on patterns of 5-10-tones presented within a half second or so. The patterns are random-freq pure-tone sequences (patterns that impressionistically resemble the sound of a Touch-Tone phone when it auto-redials, or maybe even a turkey gobble). It is incredibly difficult to detect changes in, say, the frequency of one of the tones -- at least it's hard as long as the pattern is `unfamiliar'. And to really learn the pattern (to near-asymptotic performance level) requires literally thousands of practice trials! So what could familiar, learned auditory memories be like if they aren't specified within in a raw time window of the acoustic signal? I think the answer is Tenorio's other type: Hysteresis. This refers to the effect where some properties of past inputs affect system response to the current input. 
A concrete example is a cheap dimmer switch for a light. Frequently, a given angle for the rotating knob produces one level of brightness when approached from the left and another brightness approached from the right. This kind of nonlinear behavior can be exploited in a dynamical system (eg, the nervous system or a recurrent connectionist network) to store information about pattern history. By an appropriate learning process the parameters of the dynamic system can be adjusted to generate a distinctive trajectory (through activation space) for familiar patterns. See Anderson and Port, 1990 and Anderson, Port, McAuley, 91 for some demonstrations of this kind of `dynamic memory' in networks trained with recurrent backprop. This kind of representation for familiar sequential patterns exhibits many standard properties of human hearing. For example, this kind of representation (unlike a raw time window representation) is naturally INVARIANT under changes in the RATE of presentation of the pattern (just as words or tunes are recognized as the same despite differences in rate of production). It has been shown that for `Watson patterns', changing the rate of presentation to listeners during testing by a factor of 2 or so relative to the rate used during training has no effect whatever on performance. The same is true of our recurrent networks. The use of dynamic-memory representations by the nervous system for environmental sounds exploits the fact that most of the sounds we hear are very similar to sounds we have heard many times before. Only a minute fragment of the possible spectrally distinct patterns over time occur in our environment, so apparently we classify sounds into a (very large) alphabet of familiar sequences. Watson patterns are not in this set, so we cannot store them for a second or 2. But to return to Michael Tepp's original problem. He apparently has several sampled physiological measures from milk cows (eg, body temperature, chemical content of the milk, etc) and hopes to detect the presence of mastitis in the cow as early as possible. It seems very likely that the onset of the disease will exhibit differences in rate between instances of the illness. So, even though keeping a static time-window is trivial given the rate at which the physiological data is generated, the distribution of information about the `target pattern' (whatever it is) across such a window is very likely NOT to be constant. Thus a hysteresis-based method of pattern recognition using a dynamic memory might be expected to have more success. A few refs: Anderson, Sven, R. Port and Devin McAuley (1991) Dynamic Memory: a model for auditory pattern recognition. Mspt. But I will make it available by ftp from neuroprose at Ohio State. We will post a separate note when it is there. Port, Robert (1990) Representation and recognition of temporal patterns. Connection Science, 151-176. This includes some description of Watson's work and contains more on the argument against time windows. Port, Robert and Sven Anderson (1989) Recognition of melody fragments in continuously performed music. In G. Olson and E. Smith (eds) Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society (L. Erlbaum Assoc, Hillsdale, NJ), pp. 820-827. Port, R and Tim van Gelder (1991) Representing aspects of language. Proc of Cog Sci Soc 13, Erlbaum Assoc. Generalizes the notion of dynamic representations as applied to other kinds of patterns. 
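To make the hysteresis point concrete in a few lines of NumPy (this is only a toy two-unit leaky recurrent net with random fixed weights, not the Anderson-Port dynamic memory model): the same final input value produces different network states depending on the input history that preceded it.

import numpy as np

def run(inputs, W_in, W_rec, tau=0.2):
    """Leaky recurrent units: the state is a running, nonlinear summary
    of the input history rather than a raw buffer of past samples."""
    h = np.zeros(W_rec.shape[0])
    for x in inputs:
        h = (1 - tau) * h + tau * np.tanh(W_in @ x + W_rec @ h)
    return h

rng = np.random.default_rng(1)
W_in = rng.normal(size=(2, 1))
W_rec = rng.normal(size=(2, 2))

rising  = np.linspace(0.0, 1.0, 20).reshape(-1, 1)   # approach 1.0 from below
falling = np.linspace(2.0, 1.0, 20).reshape(-1, 1)   # approach 1.0 from above

# Same final input value (1.0), different histories -> different final states
print(run(rising, W_in, W_rec))
print(run(falling, W_in, W_rec))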
From dtam at next-cns.neusc.bcm.tmc.edu Wed Aug 28 14:21:18 1991 From: dtam at next-cns.neusc.bcm.tmc.edu (David C. Tam) Date: Wed, 28 Aug 91 14:21:18 GMT-0600 Subject: Cultured Neural Nets references Message-ID: <9108282021.AA15899@next-cns.neusc.bcm.tmc.edu> Here are some references of Dr. Guenter W. Gross at the University of North Texas, Denton TX 76203 on cultured neural nets grown on MultiMicroElectrode Plates (MMEPs). He is among the first investigators to have successfully grown neurons that form networks with spontaneous electrical activity, recorded from 64 electrodes simultaneously over long periods of time (i.e., months), with the pattern of firing monitored by these microelectrodes under physiological conditions. His current electrode design is second-generation. The first generation uses metal (opaque) electrode conductors photo-etched on a glass-plate substrate. The second-generation design uses transparent indium-tin oxide electrode conductors so that the complete neural network can be visualized under the microscope without being blocked by the electrode leads. His results on the network activity of neurons include: cooperative synchronous burst (phase-lock) firing of neurons; switching of firing patterns in groups of neurons; micro-laser surgery that selectively "zaps" axon and/or dendrite connections of neurons in the network, etc. These are successful experimental results of monitoring up to 64 channels of electrodes in cultured neurons. Per request of Steve Potter, here is a list of his publications: Gross, G. W. (1979). Simultaneous single unit recording in vitro with a photoetched, laser deinsulated gold, multimicroelectrode surface. IEEE Trans. Biomed. Eng. BME. 26: 273-279. Gross, G. W., & Hightower, M. H. (1986). An approach to the determination of network properties in mammalian neuronal monolayer cultures. Proc. of the 1st IEEE Conf. on Synth. Microstructs. in Biol. Res., Naval Research Lab. Press, Washington D.C., pp. 3-21. Gross, G. W. & Kowalski, J. M. (1990). Experimental and theoretical analysis of random nerve cell network dynamics. In Neural Networks: Concepts, Applications, and Implementations Vol. 3. Prentice-Hall: New Jersey. (in press) Gross, G. W., & Lucas, J. H. (1982). Long-term monitoring of spontaneous single unit activity from neuronal monolayer networks cultured on photoetched multielectrode surfaces. J. Electrophys. Tech. 9: 55-69. Gross, G. W., Wen, W., & Lin, J. (1985) Transparent indium-tin oxide patterns for extracellular multisite recording in neuronal cultures. J. Neurosci. Methods. 15: 243-252. Droge, D. H., Gross, G. W., Hightower, M. H., & Czisny, L. E. (1986) Multielectrode analysis of coordinated, multisite, rhythmic bursting in cultured CNS monolayer networks. J. Neurosci. 6: 1583-1592. Lucas, J. H., Czisny, L. E., & Gross, G. W. (1986). Adhesion of cultured mammalian CNS neurons to flame-modified hydrophobic surfaces. In Vitro Cell & Dev. Biol. 22: 37-43. David Tam dtam at next-cns.neusc.bcm.tmc.edu From snider at mprgate.mpr.ca Wed Aug 28 16:23:11 1991 From: snider at mprgate.mpr.ca (Duane Snider) Date: Wed, 28 Aug 91 13:23:11 PDT Subject: No subject Message-ID: <9108282023.AA00878@kiwi.mpr.ca> Subject: Re: Radial Basis Functions > Could someone post a brief description of RBF techniques > as applied to this field? References would also be > most appreciated. There is a fairly good description of how radial basis functions work in multidimensional spaces in 'Neurocomputing' by Robert Hecht-Nielsen. 
It was published by Addison-Wesley in 1990. John Moody terms his radial basis functions 'Local Receptive Fields'. I have seen some of his work at the IJCNN's before. I also found Nestor Corp's algorithm for Restricted Coulomb Energy fields in 'DARPA Neural Network Study', ISBN 0-916159-17-5. I hope this helps, Duane Snider snider at mprgate.mpr.ca MPR Teltech Ltd Burnaby, BC Canada From gbugmann at nsis86.cl.nec.co.jp Thu Aug 29 07:38:13 1991 From: gbugmann at nsis86.cl.nec.co.jp (Guido Bugmann) Date: Thu, 29 Aug 91 07:38:13 V Subject: Radial Basis Functions In-Reply-To: Your message of Tue, 27 Aug 91 13:09:49 EDT. <9108271709.AA00629@botsec1.bot.COM> Message-ID: <9108282238.AA07351@nsis86.cl.nec.co.jp> Hello, an application of RBF for the modelling of time series is described in: He, X. and Lapedes, A. (1991) "Nonlinear Modeling and Prediction by Successive Approximations Using Radial Basis Functions", Los Alamos Lab. Report LA-UR-91-1375 On this network, there was also an announcement by Martin Roescheisen of the following technical report: Hofmann, R., Roescheisen, M. and Tresp, V. (1991) "Incorporating Prior Knowledge in Parsimonious Networks of Locally-Tuned Units" Regards Guido Bugmann From edelman at BLACK Thu Aug 29 02:57:00 1991 From: edelman at BLACK (Shimon Edelman) Date: Thu, 29 Aug 91 08:57+0200 Subject: Translation Invariance in Human Visual System? In-Reply-To: <9108280109.AA11569@c14.elec.uq.oz.au> Message-ID: <19910829065741.1.EDELMAN@YAD> >From: David Lovell >Subject: Translation Invariance in Human Visual System? Here is, I believe, the reference to that paper: @article{NazirORegan90, author="T. Nazir and J. K. {O'Regan}", title="Some results on translation invariance in the human visual system", journal="Spatial vision", volume="5", pages="81-100", year=1990 } Actually, the paper discusses evidence for a quite *limited* translational invariance (in the recognition of dot patterns). -Shimon Shimon Edelman (edelman at wisdom.weizmann.ac.il) Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science Rehovot 76100 Israel From dlovell at s1.elec.uq.oz.au Thu Aug 29 17:45:58 1991 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Thu, 29 Aug 91 16:45:58 EST Subject: Translation Invariance etc.. Message-ID: <9108290646.AA26523@s2.elec.uq.oz.au> Dear Connectionists, For those of you who have been holding your breath wondering who published the paper that claims the human visual system is not as translationally invariant as one would think, all the way from the keyboard of Shimon Edelman, I am proud to present.... @article{NazirORegan90, author="T. Nazir and J. K. {O'Regan}", title="Some results on translation invariance in the human visual system", journal="Spatial vision", volume="5", pages="81-100", year=1990 } Many thanks to Shimon and also Irv Biederman who mailed me a few pointers to this paper. There is the promise of more information on translational invariance from a few different sources so if new news comes to hand, I'll post that too. Happy Connectioning(?) -- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept. Electrical Engineering | "Oh bother! The pudding is ruined University of Queensland | completely now!" said Marjory, as BRISBANE 4072 | Henry the dachshund leapt up and Australia | into the lemon surprise. 
| From dlovell at s1.elec.uq.oz.au Fri Aug 30 18:49:04 1991 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Fri, 30 Aug 91 17:49:04 EST Subject: Translation Invariance in Human Visual System In-Reply-To: <9108261857.AA05986@>; from "owner-neuroz@munnari.oz.au" at Aug 26, 91 8:57 pm Message-ID: <9108300749.AA10168@s2.elec.uq.oz.au> Dear Connectionists, Here is some more information about a publication that would be of interest to anyone researching the translational (in?)variance of the human visual field: Biederman, I. & Cooper, E. E. (in press). Evidence for complete translational and reflectional priming in visual object recognition. PERCEPTION. A recent paper (also by Biederman and Cooper) provides many of the details of methodology, procedure and theory. It appeared in the July issue of COGNITIVE PSYCHOLOGY. Again, thanks to everyone who has replied to my original request, especially Eric Postma, Irv Biederman, Shimon Edelman and Bill Phillips. If anyone has any more information on the subject, I suggest that they post it to Connectionists because I'm in the middle of writing a paper and it could be a long time before any further info gets re-posted. Yours appreciatively, David. -- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept. Electrical Engineering | "Oh bother! The pudding is ruined University of Queensland | completely now!" said Marjory, as BRISBANE 4072 | Henry the dachshund leapt up and Australia | into the lemon surprise. | From kruschke at ucs.indiana.edu Fri Aug 30 11:44:00 1991 From: kruschke at ucs.indiana.edu (JOHN K. KRUSCHKE) Date: 30 Aug 91 10:44:00 EST Subject: motives for RBF networks Message-ID: Regarding uses of radial basis functions (RBFs) in neural networks: One motive for using RBFs has been the promise of better interpolation between training examples (i.e., better generalization). Some suggest that RBF nodes are also neurally plausible (at least in the type of function they compute, if not the methods used to train them). (See previous postings by other Connectionists for references.) Another motive comes from the molar, psychological level. In some situations, human behavior can be accurately described in terms of memory for specific exemplars, with generalization to novel exemplars based on similarity to memorized exemplars. If an exemplar is encoded as a point in a multi-dimensional psychological space, then an internally memorized exemplar can be represented by an RBF node centered on that point. When combined with back-propagation learning (and learned dimensional attention strengths), such RBF-based networks can do a reasonably good job of capturing human performance in several category learning tasks. Some references: Kruschke, J. K. (1991a). ALCOVE: A connectionist model of human category learning. In: R. P. Lippmann, J. E. Moody & D. S. Touretzky (eds.), Advances in Neural Information Processing Systems 3, pp.649-655. San Mateo, CA: Morgan Kaufmann. (Several other papers in this volume address related issues.) Kruschke, J. K. (1991b). Dimensional attention learning in models of human categorization. In: Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, pp.281-286. Hillsdale, NJ: Erlbaum. Kruschke, J. K. (1991c). Dimensional attention learning in connectionist models of human categorization. Indiana University Cognitive Science Research Report 50. Kruschke, J. K. (in press). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review. Scheduled to appear in January 1992. 
[Indiana University Cognitive Science Research Report 47.] Nosofsky, R. M., Kruschke, J. K. & McKinley, S. (in press). Combining exemplar-based category representations and connectionist learning rules. Journal of Experimental Psychology: Learning, Memory and Cognition. Scheduled to appear in March 1992. From bap at james.psych.yale.edu Fri Aug 30 13:09:05 1991 From: bap at james.psych.yale.edu (Barak Pearlmutter) Date: Fri, 30 Aug 91 13:09:05 -0400 Subject: Analysis of Time Sequences In-Reply-To: John Pearson W343 x2385's message of Wed, 28 Aug 91 13:24:16 EDT <9108281724.AA27660@sarnoff.sarnoff.com> Message-ID: <9108301709.AA17362@james.psych.yale.edu> In this vein, a procedure for performing gradient descent in the length of time delays in the fully recurrect continuous time case using backpropagation through time is presented in "Learning state space trajectories in recurrent neural networks," Barak Pearlmutter, IJCNN'89 v2 pp365-372, and also in the technical report available from the neuroprose archives as "pearlmutter.dynets.ps.Z". From english at sun1.cs.ttu.edu Fri Aug 30 17:22:24 1991 From: english at sun1.cs.ttu.edu (Tom English) Date: Fri, 30 Aug 91 16:22:24 CDT Subject: Processing of auditory sequences Message-ID: <9108302122.AA17772@sun1.cs.ttu.edu> Thanks to Robert Port for a fine discussion of acoustic/auditory pattern processing and hysteresis. I do not take exception to his observations, but would like to extend the discussion a bit. It seems that humans process human utterances differently than they do other sounds. I cannot point to hard data on this point, but I am fairly confident that playing back recorded speech at twice the recording speed has serious effects upon intelligibility. (Surely some of you find Alvin and the Chipmunks difficult to understand.) What accounts for the difference between speech and synthetic "Watson patterns"? Perhaps one of the Connectionists could briefly describe the techniques used to preserve intelligibility in time-compression of speech recordings for the blind. This might be an important clue. I believe the techniques are more sophisticated than, say, clipping 5 msec of speech from each 10 msec and smoothing the transitions between the remaining speech segments. I submit that if evolution has not provided us with special apparatus for processing the calls of members of our own species, it should. That is, "knowing" the characteristics of the physical system that generates human utterances should be of great utility in extracting information from utterances (especially when they are noisy). How might such innate knowledge be represented and utilized in human processing of speech signals? Of course, some would argue that an internal model of the articulators need not be innate. I vaguely recall that Grossberg and associates have placed speech generation and recognition in a single "loop." The model of articulation might be learned by generating motor "commands" and hearing the results. Does anyone know whether people born with impaired control of the articulators suffer some detriment in processing speech signals? To tie these comments together, allow me to hypothesize that artificially slowed and speeded speech, even when it is in the range of natural speaking rates, does not exhibit the coarticulatory phenomena appropriate to the speaking rate. Thus the internal model of articulation does not account well for the altered speech, and intelligibility suffers. 
This hypothesis is half-baked, if only because it ignores the fact that "unnatural sounding" synthetic speech may be highly intelligible. I would, however, like to see relevant evidence and/or discussion. Thomas English english at sun1.cs.ttu.edu Dept. of Computer Science Texas Tech University From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Sat Aug 31 10:24:21 1991 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Sat, 31 Aug 91 10:24:21 -0400 Subject: Processing of auditory sequences In-Reply-To: Your message of Fri, 30 Aug 91 16:22:24 -0600. <9108302122.AA17772@sun1.cs.ttu.edu> Message-ID: Perhaps one of the Connectionists could briefly describe the techniques used to preserve intelligibility in time-compression of speech recordings for the blind. This might be an important clue. I believe the techniques are more sophisticated than, say, clipping 5 msec of speech from each 10 msec and smoothing the transitions between the remaining speech segments. I'm not an expert on this, but I believe that the basic idea is to speed up the speech by 2x or so, while keeping the frequencies where they should be. Apparently the human speech-understanding system is quite flexible about rate, but really doesn't like to deal with formants moving too far from where they are expected to be in frequency space. Perhaps that is because the basic analysis into frequency bands is done in the cochlea, with fixed neuro-mechanical filters, and the rest of the processing is done by the brain using neural machinery that is more flexible and trainable. I believe the crude chopping you describe above is one technique that has been used to accomplish this, and that it works surprisingly well. Even though some critical events get dropped on the floor this way -- the pop in a "P", for example -- listeners quickly learn to compensate for this. One can do a better job if more attention is paid to the smoothing: chopping at zero-crossings, etc. Some pitch-shifter/harmonizer boxes used in music processing do this sort of thing. The best approach would probably be to move everything into the Fourier domain, slide and stretch everything around smoothly, and then convert back, but I doubt that any practical reading machines actually do this. Only in the last couple of years has the necessary signal-processing power been available on a single chip. -- Scott Fahlman From shawn at helmholtz.sdsc.edu Sat Aug 31 15:30:16 1991 From: shawn at helmholtz.sdsc.edu (shawn@helmholtz.sdsc.edu) Date: Sat, 31 Aug 91 12:30:16 -0700 Subject: No subject Message-ID: <9108311930.AA08466@gall> Several months ago I asked about connectionist efforts in modeling chemotaxis and animal orientation. Response was quite good. Here is a compilation of what I received. Thanks to all those who kindly responded. OUTGOING QUERY: " I am a neurobiologist interested in training neural networks to perform chemotaxis, and other feats of simple animal navigation. I'd be very interested to know what has been done by connectionists in this area. The only things I have found so far are: Mozer and Bachrach (1990) Discovering the Structure of a Reactive Environment by Exploration, and Nolfi et al. 
(1990) Learning and Evolution in Neural Networks Many thanks, Shawn Lockery CNL Salk Institute Box 85800 San Diego, CA 92186-5800 (619) 453-4100 x527 shawn at helmholtz.sdsc.edu " _____________________________________________________________________ THE REPLIES _____________________________________________________________________ From: mlittman at breeze.bellcore.com (Michael L. Littman) Hi, Dave Ackley and Michael Littman (me) did some work where we used a combination of natural selection and reinforcement learning to train simulated creatures to survive in a simulated environment. %A D. H. Ackley %A M. L. Littman %T Interactions between learning and evolution %B Artificial Life 2. %I Addison-Wesley %D 1990 %E Langton, Chris %O (in press) %A D. H. Ackley %A M. S. Littman %T Learning from natural selection in an artificial environment %B Proceedings of the International Joint Conference on Neural Networks %C Washington, D.C. %D January 1990 There is also some neat work by Stewart Wilson as well as Rich Sutton and friends. I'm not sure exactly what sort of things you are looking for so I'm having trouble knowing exactly where to point you. If you describe the problem you have in mind I might be able to indicate some other relevant work. -Michael ---------------------------------------------------------------------------- From: David Cliff Re. your request for chemotaxis and navigation: do you know about Randy Beer's work on a simulated cockroach? He did studies of locomotion control using hardwired network models (ie not much training involved) but the simulated bug performed simple navigation tasks. I think it had chemoreceptors in it's antennae, so I think there was some chemotaxis involved. He's written a book: R.D.Beer "Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology" Academic Press, 1990. davec at cogs.susx.ac.uk COGS 5C17 School of Cognitive and Computing Sciences University of Sussex Brighton BN1 9QH England UK ---------------------------------------------------------------------------- From: Ronald L Chrisley You might take a look at my modest efforts in a paper in the Proc. of the 1990 CMSS. Not biologically motivated at all, though. Ron Chrisley ---------------------------------------------------------------------------- From: beer at cthulhu.ces.cwru.edu (Randy Beer) Hello Shawn! I'm not sure that this is what you're looking for, but as I mentioned to you at Neurosciences, we've been using genetic algorithms to evolve dynamical NNs. One of our experiments involved gradient following. A simple circular "animal" with two chemosensors and two motors was placed in an environment with a patch of food emitting an odor whose intensity decreased as the inverse square of the distance from the patch's center. The animal's behavior was controlled by a bilaterally symmetric, six node, fully interconnected dynamical NN (2 sensory neurons, 2 motor neurons, and two interneurons). The time constants (3, due to bilateral symmetry) and the weights (18) were encoded on a bit string genome. The performance function was simply the average distance from the food patch that the animal was found after a fixed amount of time for a variety of initial starting positions. We evolved several different solutions to this problem, including one less than optimal but interesting "wiggler". 
This animal oscillated from side to side until it happened to come near the food patch, then the relatively strong signal from the nearby food damped out the oscillations and it turned toward the food. Most of the other solutions simply compared the signals in each chemosensor and turned toward the stronger side, as you would expect. These more obvious solutions still varied in the overall gain of their response. Low gain solutions performed very well near the food patch, but had a great deal of trouble finding it if they started too far away. High gain solutions rarely had any trouble finding the food patch, but their behavior at the patch was often more erratic and sometimes they would fly back off of it. Randy ---------------------------------------------------------------------------- From: wey at psyche.mit.edu (Wey Fun) You may look into Chris Watkins, Andrew Barto & Richard Sutton's work on TD algorithms. My colleague at Univ of Edinburgh, Peter Dayan, has also done a lot of work on the simulation of rats swimming in milky water and finding a fast, short route to a platform after trials. His email address is: dayan at cns.ed.ac.uk Wey ---------------------------------------------------------------------------- From: Jordan B Pollack I'm pretty sure that Andy Barto at cs.umass.edu and his students worked on "A-RP" reinforcement learning in a little creature navigating through an environment of smell gradients. This is the only reference in my list: %A A. G. Barto %A C. W. Anderson %A R. S. Sutton %T Synthesis of Nonlinear Control Surfaces by a layered Associative Search Network %J Biological Cybernetics %V 43 %P 175-185 %D 1982 %K R12 jordan ---------------------------------------------------------------------------- From: Peter Dayan [This is in response to your direct note to me - you should also have received a reply to your connectionists at cmu posting from me.] In that, I neglected to mention: Barto, AG (1989). From chemotaxis to cooperativity: Abstract exercises in neuronal learning strategies. In R Durbin, C Miall \& G Mitchison, editors, {\it The Computing Neuron.\/} Wokingham, England: Addison-Wesley. and Watkins, CJCH (1989). {\it Learning from Delayed Rewards.\/} PhD Thesis. University of Cambridge, England. which is one of my `source' texts, and contains interesting discussions of TD learning from the viewpoint of dynamic programming. Regards, Peter Randy ----------------------------------------------------------------------------- From: "Vijaykumar Gullapalli (413) 545-1596" Andy Barto wrote a nice paper discussing learning issues that might be of interest. It appeared as a tech report and as a book chapter. The ref is @techreport{Barto-88a, author="Barto, A. G.", title="From Chemotaxis to Cooperativity: {A}bstract Exercises in Neuronal Learning Strategies", institution="University of Massachusetts", address="Amherst, MA", number="88-65", year=1988, note="To appear in {\it The Computing Neurone}, R. Durbin and R. Miall and G. Mitchison (eds.), Addison-Wesley"}. A copy of the tech report can be obtained by writing to Connie Smith at smith at cs.umass.edu. Vijay __________________________________________________________________________________ From: nin at cns.brown.edu (Nathan Intrator) Could you give me more information on the task? Is the input binary, and is the dimensionality of the input large? I have an unsupervised network that is supposed to discover structure in HIGH DIMENSIONAL spaces, which may be of interest to you. 
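For anyone who wants to experiment, here is a toy NumPy version of the simple strategy Beer mentions most of the evolved solutions converge on -- compare the two chemosensors and turn toward the stronger side. It is a hand-wired Braitenberg-style sketch with invented constants, not his evolved dynamical network:

import numpy as np

def odor(p, food=np.array([5.0, 5.0])):
    """Inverse-square-style odor intensity around a food patch."""
    return 1.0 / (1.0 + np.sum((p - food) ** 2))

pos, heading = np.array([0.0, 0.0]), 0.0
speed, sensor_offset, gain, dt = 0.5, 0.3, 40.0, 0.1

for step in range(400):
    # Two chemosensors placed to the left and right of the current heading
    left  = pos + sensor_offset * np.array([np.cos(heading + 0.5), np.sin(heading + 0.5)])
    right = pos + sensor_offset * np.array([np.cos(heading - 0.5), np.sin(heading - 0.5)])
    # Turn toward the side with the stronger signal, then move forward
    heading += dt * gain * (odor(left) - odor(right))
    pos = pos + dt * speed * np.array([np.cos(heading), np.sin(heading)])

print("final distance to food:", np.linalg.norm(pos - np.array([5.0, 5.0])))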
--------------------------------------------------------------------- From: meyer%frulm63.bitnet at Sds.sdsc.edu (Jean-Arcady MEYER) I'm interested in the simulation of adaptive behavior and I have written a Technical Report on the subject, in which I think you could find several interesting references. In particular, various works have been done in the spirit of Nolfi et al. I'm sending this report to you today. Let me add that I organized - together with Stewart Wilson - the conference SAB90 (Simulation of adaptive behavior: from animals to animats), which was held in Paris in September 1990. The corresponding proceedings are about to be published by The MIT Press/Bradford Books. I'm also sending you a booklet of the papers' summaries. Finally, I don't know the paper by Mozer and Bachrach that you mention in your mail. Could you be kind enough to send me its reference? Hope this will be helpful to you. Jean-Arcady Meyer Groupe de BioInformatique URA686. Ecole Normale Superieure 46 rue d'Ulm 75230 PARIS Cedex05 FRANCE --------------------------------------------------------------------- From: barto at envy.cs.umass.edu We have done a number of papers over the years that relate to chemotaxis. Chemotactic behavior of single cells has inspired a lot of our thinking about learning. Probably the most relevant are: Barto, From Chemotaxis to Cooperativity, in The Computing Neuron, edited by Durbin, Miall, Mitchison. Addison Wesley, 1989 Barto and Sutton, Landmark Learning: An Illustration of Associative Search, Biol. Cyb. 42, 1981 Andy Barto Dept. of Computer and Information Science University of Massachusetts Amherst MA 01003 --------------------------------------------------------------------- From: dmpierce at cs.utexas.edu I had a paper myself at a recent Paris conference (September 1990) which might be relevant to you: Pierce, D.M., \& Kuipers, B.J. (1991). Learning hill-climbing functions as a strategy for generating behaviors in a mobile robot. {\em From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior}, J.-A. Meyer \& S.W. Wilson, eds., Cambridge, MA: The MIT Press/Bradford Books, pp.~327-336. This is also available as University of Texas AI Lab. Tech. Report AI90-137. Here is the abstract: We consider the problem of learning, in an unknown environment, behaviors (i.e., sequences of actions) which can be taken to achieve a given goal. This general problem involves a learning agent interacting with a reactive environment: the agent produces actions that affect the environment and in turn receives sensory feedback from the environment. The agent must learn, through experimentation, behaviors that consistently achieve the goal. In this paper, we consider the particular problem of a mobile robot in a spatial two-dimensional world whose goal is to find a target location which contains a ``food'' source. The robot has access to incomplete information about the state of the world via a set of senses and is able to detect when it has achieved the goal. Its task is to learn to use its motor apparatus to reliably move to the food. The catch is that the robot does not know a priori what its sensors mean, nor what effects its motor apparatus has on the world. We propose a method by which the robot may analyze its sensory information in order to derive (when possible) a function defined in terms of the sensory data which is maximized at the food and which is suitable for hill-climbing. 
Given this function, the robot solves its problem by learning a behavior that maximizes the function, thereby resulting in motion to the food. -Dave Pierce ------------------------------------------------------------------------ From: KDBG100%bgunve.bgu.ac.il at BITNET.CC.CMU.EDU I may not have understood the specifics of what you require, but about spatial environments, there is a paper in PDP II 1986 which I suppose you must know about. So -- what is the issue you are pursuing? David Leiser, Jerusalem ------------------------------------------------------------------------- From: Steve Hampson I was just going over old mail and found your request for refs on animal navigation. I was planning on replying, but probably never did. My book "Connectionistic Problem Solving" Birkhauser, Boston, is an attempt at general problem solving, but almost all of the examples are maze-like. Several approaches are discussed and implemented. Sorry for the delay. Steven Hampson ICS Dept. UCI. ------------------------------------------------------------------------ From emsca!conicit!gpass at Sun.COM Sat Aug 31 14:50:10 1991 From: emsca!conicit!gpass at Sun.COM (Gianfranco Passariello (USB)) Date: Sat, 31 Aug 91 14:50:10 AST Subject: info. Message-ID: <9108311850.AA08078@conicit> Caracas, August 31st 1991 I have just received a job opportunities message coming from Sydney, Australia through you. I would like to know how that works. Thank you very much in advance. Gianfranco Passariello Dpto. de Electronica y Circuitos Universidad Simon Bolivar Apdo.89000 Caracas, Venezuela 1080A 
From MFMISMIC%HMARL5.BITNET at vma.cc.cmu.edu Fri Aug 2 17:43:00 1991 From: MFMISMIC%HMARL5.BITNET at vma.cc.cmu.edu (MFMISMIC%HMARL5.BITNET@vma.cc.cmu.edu) Date: Fri, 2 Aug 91 17:43 N Subject: Rules and Neural Networks. Paper announcement. Message-ID: The following paper has been published in the proceedings of the European Simulation Multiconference 91 in Copenhagen: HOMOMORPHIC TRANSFORMATION FROM NEURAL NETWORKS TO RULE BASES Author: Michael Egmont-Petersen, Computer Resources International a/s and Copenhagen Business School Abstract: In this article a method to extract the knowledge induced in a neural network is presented. The method explicates the relation between a network's inputs and its outputs. This relation is stored as logic rules. The feasibility of the method is studied by means of three test examples. The result is that the method can be used, though some drawbacks are detected. One is that the method sometimes generates a lot of rules. For fast retrieval, these rules can well be stored in a B-tree. Contents: 1. Introduction 2. Synthesizing Rule Bases Parsimoniously 3. Description of the Experiments 4. Practical Applicability of the Algorithm 5. Conclusion Hardcopies of the paper are available. Please send requests to the following address in Holland: Institute of Medical Statistics and Informatics University of Limburg Postbus 616 NL-6200 MD Maastricht The Netherlands att. Michael Egmont-Petersen Michael Egmont-Petersen From SCHOLTES at ALF.LET.UVA.NL Sun Aug 4 14:59:00 1991 From: SCHOLTES at ALF.LET.UVA.NL (SCHOLTES) Date: Sun, 4 Aug 91 14:59 MET Subject: Neural Nets in Information Retrieval References Message-ID: Dear Connectionists, Here is the compiled list of the reactions received on work done in Information Retrieval and Connectionist or Neural Systems. Please keep me informed of any new work in this field. Regards, Jan Scholtes University of Amsterdam Faculty of Arts Department of Computational Linguistics scholtes at alf.let.uva.nl ********************************************************************* References Neural Nets in Information Retrieval [Allen, 1991]: Allen, R.B. (1991). Knowledge Representation and Information Retrieval with Simple Recurrent Networks. Work. Notes of the AAAI SSS on Connectionist Natural Language Processing. March 26th-28th 1991, Palo Alto, CA., pp. 1-6. [Belew et al., 1988]: Belew, R.K. and Holland, M.P. (1988). A Computer System Designed to Support the Near-Library User of Information Retrieval. Microcomputers for Information Management, Vol. 5, No. 3, December 1988, pp. 147-167. [Belew, 1986]: Belew, R.K. (1986). Adaptive Information Retrieval: Machine Learning in Associative Networks. Ph.D. Dissertation, Univ. Michigan, CS Department, Ann Arbor, MI. [Belew, 1987]: Belew, R.K. (1987). A Connectionist Approach to Conceptual Information Retrieval. Proc. of the First Intl. Conf. on AI and Law, pp. 116-126. ACM Press. [Belew, 1989]: Belew, R.K. (1989). 
Adaptive Information Retrieval: Using a Connectionist Representation to Retrieve and Learn About Documents. Proc. SIGIR-89, June 11th-20th, 1989. Cambridge, MA. [Brachman et al., 1988]: Brachman, R.J. and McGuinness, D.L. (1988). Knowledge Representation, Connecitonism, and Conceptual Retrieval. SIGIR-88, [Doszkocs et al., 1990]: Doszkocs, T.E., Reggia, J. and Lin, X. (1990). Connectionist Models and Information Retrieval. Ann. Review of Information Science and Technology, Vol. 25, pp. 209-260. [Eichmann et al., 1991]: Eichmann, D.A. and Srinivas, K. (1991). Neural Network-Based Retrieval from Reuse Repositories. CHI '91 Workshop on Pattern Recognition and Neural Networks in Human-Computer Interaction, April 28th, 1991. New Orleans, LA. [Eichmann et al., 1992]: Eichmann, D.A. and Srinivas, K. (1992). Neural Network-Based Retrieval from Reuse Repositories. In: Neural Networks and Pattern Recognition in Human Computer Interaction (R. Beale and J. Findlay, Eds.), Ellis Horwood Ltd, WS, UK. [Gersho et al., 1990]: Gersho, M. and Reiter, R. (1990). Information Retrieval using a Hybrid Multi-Layer Neural Network. Proceedings of the IJCNN, San Diego, June 17-21, 1990, Vol. 2, pp. 111-117. [Honkela et al., 1991]: Honkela, T. and Vepsalainen, A.M. (1991). Interpreting Imprecise Expressions: Experiments with Kohonen's Self-Organizing Maps and Associative Memory. Proc. of the ICANN '91, June 24th-28th, Helsinki, Vol. 1, pp. 897-902. [Jagota, 1990]: Jagota, A. (1990). Applying a Hopfield-style Network to Degraded Printed Text Restoration. Proc. of the 3rd Conference on Neural Networks and PDP, Indiana-Purdue University. [Jagota, 1990]: Jagota, A. (1990). Applying a Hopfield-Style Network to Degraded Text Recognition. Proc. of the IJCNN 1990, [Jagota, 1991]: Jagota, A. (1991). Novelty Detection on a Very Large Number of Memories in a Hopfield-style Network. Proc. of the IJCNN, July 8th-12th, 1991. Seattle, WA, [Jagota, et al., 1990]: Jagota, A. and Hung, Y.-S. (1990). A Neural Lexicon in a Hopfield-style Network. Proceedings of the IJCNN, Washington, 1990, Vol. 2, pp. 607-610. [Mitzman et al., 1990]: Mitzman, D. and Giovannini, R. (1990). ActivityNets: A Neural Classifier of Natural Language Descriptions of Economic Activities. Proc. of the Int. Workshop on Neural Nets for Statistical and Economic Data, Dublin, December 10-11, [Mozer, 1984]: Mozer, M.C. (1984). Inductive Information Retrieval Using Parallel Distributed Computation. ICS Technical Report 8406, La Jolla, UCSD. [Rose et al., 1989]: Rose, D.E. and Belew, R.K. (1989). A Case for Symbolic/Sub-Symbolic Hybrids. Proc. of the Cogn. Science Society, pp. 844-851. [Rose et al., 1991]: Rose, D.E. and Belew, R.K. (1991). A Connectionist and Symbolic Hybrid for Improving Legal Research. Int. Journal of Man-Machine Studies, July 1991, [Rose, 1990]: Rose, D.E. (1990). Appropriate Uses of Hybrid Systems. In: Connectionist Models: Proc. of the 1990 Summer School, pp. 277-286]. Morgan-Kaufman Publishers. [Rose, 1991]: Rose, D.E. (1991). A Symbolic and Connectionist Approach to Legal Information Retrieval. Ph.D. Dissertation, UCSD, La Jolla, CA. [Scholtes, 1991]: Scholtes, J.C. (1991). Filtering the Pravda with a Self-Organizing Neural Net. Submitted to the Bellcore Workshop on High Performance Information Filtering, November 5th-7th 1991, Chester, NJ. [Scholtes, 1991]: Scholtes, J.C. (1991). Neural Nets and Their Relevance in Information Retrieval. TR Dep. of Computation Linguistics, August 1991. University of Amsterdam. [Scholtes, 1991]: Scholtes, J.C. 
(1991). Neural Nets versus Statistics in Information Retrieval. Submitted to NIPS*91, December 2nd-5th 1991, Boulder, Colorado. [Scholtes, 1991]: Scholtes, J.C. (1991). Unsupervised Learning and the Information Retrieval Problem. Submitted to IJCNN November 18th-22nd 1991, Singapore. [Scholtes, 1992]: Scholtes, J.C. (1992). Filtering the Pravda with a Self-Organizing Neural Net. Submitted to the Symposium on Document Analysis and Information Retrieval. March 16th-18th 1992, Las Vegas, NV. [Wermter, 1991]: Wermter, S. (1991). Learning to Classify Neural Language Titles in a Recurrent Connectionist Model. Proc. of the ICANN '91, June 24th-28th, Helsinki, Vol. 2, pp. 1715-1718. ******************************************************************** From aam9n at hagar1.acc.Virginia.EDU Sat Aug 3 22:14:31 1991 From: aam9n at hagar1.acc.Virginia.EDU (Ali Ahmad Minai) Date: Sat, 3 Aug 91 22:14:31 EDT Subject: Robustness Message-ID: <9108040214.AA07744@hagar1.acc.Virginia.EDU> In <9108020132.AA12736 at nsis86.cl.nec.co.jp>, Guido Bugmann writes: > Robustness is a vague but often used concept. Is there an > accepted method to determine the robustness of a NN ? > In an application of a FF Backprop net to the reproduction > of a function f(x,y) [1], we had measured the robustness in > the following way: > After completed training, each weight was set to zero in turn > and the root mean square of the relative errors (RMS) (relative > differences between the actual outputs of the net and the outputs > defined in the training set) was measured (The mean is over all > the examples in the training set). > In our case, the largest RMS induced by the loss of one connection > was 1600 %. We have used this "worst possible damage" as a measure > of the (non-) robustness of the network. > > [1] Bugmann, G., Lister, J.B. and von Stockar, U. (1989) > "The standard deviation method: Data Analysis by Classical Means > and by Neural Networks", Lab Report LRP-384/89, CRPP, Swiss Federal > Institute of Technology, CH-1015 Lausanne. Indeed, robustness of neural networks could do with considerably greater investigation. I am just finishing up a dissertation on the robustness of feed-forward networks with real-valued inputs and outputs. I have looked at a very simple case --- probably the simplest possible. I define a perturbation process over all the non-output neurons of the network, with the major restriction that only one neuron's output is perturbed during the presentation of any one input vector. The neuron perturbed is selected with a distribution q(i), and the magnitude of the perturbation is an independent random variable with 0-mean distribution p(d). For simplicity of analysis, I take both q and p to be uniform, but that can be relaxed. The robustness of the network over a data set T is defined as the average deviation in the output of the network operating under the perturbation process, relative to some appropriate parameter of distribution p (e.g. the spread of the uniform deviation, or the variance etc.). The deviation can be measured in many ways: for simplicity, I use the sum of absolute deviations over all network outputs. The main thing is to predict the average deviation without making a hundred passes over the data set, and without actually perturbing the network. This is easily done using a power series approximation of the relationship between each neuron output and each network output. The required derivatives can be calculated using dynamic feedback a la Werbos (back-propagation, if you like). 
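For a net with a single hidden layer the required derivatives can be written down directly, so the flavor of the computation can be shown in a few lines. The following Python fragment is only a rough sketch of this kind of first-order, one-unit-at-a-time sensitivity estimate; the architecture, the averaging, and all names in it are assumptions for illustration, not the exact formulation described above.

import numpy as np

def hidden_sensitivities(W1, b1, W2, b2, X):
    # One-hidden-layer net with tanh hidden units and sigmoid outputs (an
    # assumed architecture).  For each hidden unit i, return the average
    # over the data set of sum_k |d o_k / d h_i|: a linear-term estimate
    # of how much the outputs move per unit of perturbation applied to
    # that hidden unit's output.
    H = np.tanh(X @ W1 + b1)                      # hidden outputs, shape (N, nh)
    O = 1.0 / (1.0 + np.exp(-(H @ W2 + b2)))      # network outputs, shape (N, no)
    # d o_k / d h_i = o_k * (1 - o_k) * W2[i, k]
    S = np.einsum('nk,ik->ni', O * (1.0 - O), np.abs(W2))
    return S.mean(axis=0)                         # per-unit relevance scores

def expected_deviation(per_unit, mean_abs_perturb):
    # Expected summed absolute output deviation when one hidden unit, chosen
    # uniformly at random, is perturbed by a zero-mean amount with the given
    # mean absolute size (linear term only).
    return mean_abs_perturb * per_unit.mean()

# Hypothetical usage with random weights and data:
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 2)), np.zeros(2)
X = rng.normal(size=(200, 4))
scores = hidden_sensitivities(W1, b1, W2, b2, X)
print(scores, expected_deviation(scores, 0.05))

The per-unit scores are the kind of single-neuron relevance measure that the discussion below puts to use for pruning or for encouraging more distributed representations.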
As long as the weight vectors for individual neurons in the network are not huge (i.e. if no neurons have activation functions close to being discontinuous), the approximation I make is quite reasonable. Of course, since all activation and composition functions in the network are continuous, and continuously differentiable everywhere, there is always a perturbation process with bounded distribution that satisfies the convergence criteria of the power series. Using the uniform distributions for p and q, and retaining only the linear term in the power series expansions, the analysis, applied to any network, yields a characteristic measure that directly scales the expected output deviation, i.e. given that p is U[0,b], the expected output deviation is b/2r, where r is the characteristic measure of robustness for the network. Once r is determined, the network's response to perturbation distributions with various spreads can be predicted (within limits). Indeed, with hardly any extra effort (and no extra computational expense), even the variance of the output deviation can be predicted in a similar way. The computational complexity of determining r is O(|W|*|T|), where W is the set of weights and T is the data set. Of course, the predictive accuracy of r over data sets other than T depends on how representative T is --- the usual generalization issue. As T grows, however, r's predictive accuracy should converge over all data sets chosen under the same sampling distribution. The empirical results I have are very good. One interesting aspect of this analysis is that it also provides a measure of the sensitivity of the network output to perturbations in each neuron output, which is a natural way of measuring the relevance of individual neurons. This can be used either for pruning, or (I think, more consistently with connectionist philosophy) to encourage the emergence of distributed representations. I am working on incorporating such distribution imperatives into back-propagation etc., and should have some results in a few months. The work described above is now being written up into papers, and should be submitted over the next month or two. I would be delighted to discuss this and related issues with anyone who is interested. There is much work to be done in extending this formulation to the case where multiple perturbations are permitted simultaneously. Since things are not additive or subadditive, I'm not sure how important higher order effects are --- probably quite important. Still, I have a few ideas, which I'll be working on in the next few months. Ali Minai Electrical Engineering University of Virginia From jm2z+ at andrew.cmu.edu Mon Aug 5 10:46:36 1991 From: jm2z+ at andrew.cmu.edu (Javier Movellan) Date: Mon, 5 Aug 91 10:46:36 -0400 (EDT) Subject: Robustness to what ? In-Reply-To: <9108020132.AA12736@nsis86.cl.nec.co.jp> References: <9108020132.AA12736@nsis86.cl.nec.co.jp> Message-ID: Robustness to what? * Damage ? * Effects of noise in input ? * Effect of noise in teacher ? Traditionally in statistics, robustness of an estimator is understood as the resistance of the estimates to the effects of a wide variety of noise distributions. The key point here is VARIETY. So we may have estimators that behave very well under Gaussian noise but deteriorate under other types of noise (non-robust), and estimators that behave OK but sub-optimally under very different types of noise (robust). Robust estimators are advised when the form of the noise is unknown. 
Maximum likelihood estimators are a good choice when the form of the noise is known. In practice robustness is measured by analyzing how the estimator behaves under three benchmark noise distributions. These distributions represent low tail, normal tail and large tail conditions. Things are a bit more complicated in the neural nets environment for we are trying to estimate functions instead of points, and unlike linear regression we have problems with multiple minima. For statistical theory on robust estimation as applied to linear regression see: Li G(1985) Robust Regression, in Hoaglin, Mosteller and Tukey: Exploring data, tables, trends, and shapes. New York, John Wiley. For application of these ideas to the back-prop environment see: Movellan J(1991) Error Functions to Improve Noise Resistance and Generalization in Backpropagation Networks} in {\it Advances in Neural Networks\/}, Ablex Publishing Corporation. Hanson S has an earlier paper on the same theme. I believe the paper is in one of the NIPS proceedings but I do not have the exact reference with me. -Javier From jm2z+ at andrew.cmu.edu Mon Aug 5 14:06:10 1991 From: jm2z+ at andrew.cmu.edu (Javier Movellan) Date: Mon, 5 Aug 91 14:06:10 -0400 (EDT) Subject: f(a,b) = f(a +b) Message-ID: I am looking for a name to the constraint f(a,b) = f(a+b). Do you have any suggestions ? Thanks Javier From port at iuvax.cs.indiana.edu Tue Aug 6 02:35:21 1991 From: port at iuvax.cs.indiana.edu (Robert Port) Date: Tue, 6 Aug 91 1:35:21 EST Subject: dynamic reprstns and lang Message-ID: This paper will be presented at the Cognitive Science Society meeting later this week. It proposes that dynamic systems suggest ways to expand the range of representational systems. `REPRESENTING ASPECTS OF LANGUAGE' by Robert F. Port (Departments of Linguistics and Computer Science) and Timothy van Gelder (Department of Philosophy) Cognitive Science Program, Indiana University, Bloomington. We provide a conceptual framework for understanding similarities and differences among various schemes of compositional representation, emphasizing problems that arise in modelling aspects of human language. We propose six abstract dimensions that suggest a space of possible compositional schemes. Temporality and dynamics turn out to play a key role in defining several of these dimensions. From studying how schemes fall into this space, it is apparent that there is no single crucial difference between AI and connectionist approaches to representation. Large regions of the space of compositional schemes remain unexplored, such as the entire class of active, dynamic models that do composition in time. These models offer the possibility of parsing real-time input into useful segments, and thus potentially into linguistic units like words and phrases. A specific dynamic model implemented in a recurrent network is presented. This model was designed to simulate some aspects of human auditory perception but has implications for representation in general. The paper can be obtained from Neuroprose at Ohio State University. Use ftp cheops.cis.ohio-state.edu. Login as anonymous with neuron as password. Cd to pub/neuroprose. Then get port.langrep.ps.Z. After uncompressing, do lpr (in Unix) to a postscript printer. 
Robert Port, Dept of Linguistics, Memorial Hall, Indiana University, 47405 812-855-9217 From singras at gmuvax2.gmu.edu Tue Aug 6 18:16:03 1991 From: singras at gmuvax2.gmu.edu (Steve Ingrassia) Date: Tue, 6 Aug 91 18:16:03 -0400 Subject: f(a,b) = f(a +b) Message-ID: <9108062216.AA04965@gmuvax2.gmu.edu> I do not understand your question. If the function "f" is a function of two variables, as on the left-hand side of your equal sign, then the right-hand side, "f(a+b)", is undefined. If "f" is a function of a single variable, then "f(a,b)" makes no sense. From lacher at lambda.cs.fsu.edu Wed Aug 7 10:37:54 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Wed, 7 Aug 91 10:37:54 -0400 Subject: f(a,b) = f(a +b) Message-ID: <9108071437.AA00165@lambda.cs.fsu.edu> A function f(x,y) that satisfies f(x,y) = g(x+y) for some g? I asked Lou Howard and here is his response: If you think of y as time and x as position, something of the form g(x+y) is sometimes called a "leftward propagating wave"; but this is not much shorter than calling a spade a spade. [It is the general solution of the "left-going wave equation", du/dt = du/dx.] Can't think of anything else. -- Lou. --- Chris Lacher From bachmann at radar.nrl.navy.mil Wed Aug 7 09:22:30 1991 From: bachmann at radar.nrl.navy.mil (Charles Bachmann) Date: Wed, 7 Aug 91 09:22:30 -0400 Subject: f(a,b) = f(a +b) Message-ID: <9108071322.AA10316@radar.nrl.navy.mil> In reply to Steve Ingrassia's comment, what about f(x,y) = 1/(x + y)? We can think of it as a function of two variables, but with the replacement z = x + y, it becomes a function of one variable, f(z) = 1/z. From bmb at Think.COM Wed Aug 7 19:31:27 1991 From: bmb at Think.COM (Bruce Boghosian) Date: Wed, 7 Aug 91 19:31:27 EDT Subject: f(a,b) = f(a +b) In-Reply-To: Charles Bachmann's message of Wed, 7 Aug 91 09:22:30 -0400 <9108071322.AA10316@radar.nrl.navy.mil> Message-ID: <9108072331.AA17609@aldebaran.think.com> Date: Wed, 7 Aug 91 09:22:30 -0400 From: Charles Bachmann In reply to Steve Ingrassia's comment, what about f(x,y) = 1/(x + y)? We can think of it as a function of two variables, but with the replacement z = x + y, it becomes a function of one variable, f(z) = 1/z. At the risk of splitting hairs here, most mathematicians would probably like to see a different symbol used for the second function. So, the question is (presumably) to characterize functions of two variables, f(x,y), that have the property that there exists a function of one variable, g(z), such that f(x,y) = g(x+y). I know of no particular name for such functions. They satisfy the PDE ∂f/∂x - ∂f/∂y = 0, but that's probably not a particularly useful characterization either. --Bruce From barto at envy.cs.umass.edu Thu Aug 8 16:15:57 1991 From: barto at envy.cs.umass.edu (Andy Barto) Date: Thu, 8 Aug 91 16:15:57 -0400 Subject: postdoctoral position Message-ID: <9108082015.AA15473@envy.cs.umass.edu> Postdoctoral Position in Computational Neuroscience. I am looking for applicants for a postdoctoral position for the study of computational aspects of the control of movement. The position will involve working as a member of an interdisciplinary group on the modeling and simulation of neural systems that control arm movements. The ideal candidate would have experience in both neurophysiology and connectionist modelling, with specific interests in learning algorithms and the dynamics of nonlinear recurrent networks. 
Andy Barto Department of Computer Science University of Massachusets at Amherst Send curriculum vitae and two references to: Gwyn Mitchell, Department of Computer Science, University of Massachusets, Amherst MA 01003. From barryf at ee.su.OZ.AU Wed Aug 7 21:47:23 1991 From: barryf at ee.su.OZ.AU (Barry Flower) Date: Thu, 8 Aug 1991 11:47:23 +1000 Subject: ACNN'92 Final Call Message-ID: <9108080147.AA11030@brutus.ee.su.OZ.AU> FINAL CALL FOR PAPERS & ADVANCE REGISTRATION (Deadline for Submissions is August 30, 1991) Third Australian Conference On Neural Networks (ACNN'92) 3rd - 5th February 1992 The Australian National University, Canberra, Australia The third Australian conference on neural networks will be held in Canberra on February 3rd-5th 1992, at the Australian National University. This conference is interdisciplinary, with emphasis on cross discipline communication between Neuroscientists, Engineers, Computer Scientists, Mathematicians and Psychologists concerned with understanding the integrative nature of the nervous system and its implementation in hardware/software. The categories for presentation and submissions include: 1 - Neuroscience: Integrative function of neural networks in vision, audition, motor, somatosensory and autonomic functions; Synaptic function; Cellular information processing; 2 - Theory: Learning; generalisation; complexity; scaling; stability; dynamics; 3 - Implementation: Hardware implementation of neural nets; Analog and digital VLSI implementation; Optical implementations; 4 - Architectures and Learning Algorithms: New architectures and learning algorithms; hierarchy; modularity; learning pattern sequences; Information integration; 5 - Cognitive Science and AI: Computational models of cognition and perception; Reasoning; Concept formation; Language acquisition; Neural net implementation of expert systems; 6 - Applications: Application of neural nets to signal processing and analysis; Pattern recognition: Speech, machine vision; Motor control; Robotic; ACNN'92 will feature an invited keynote speaker. The program will include, presentations and poster sessions. Proceedings will be printed and distributed to the attendees. There will be no parallel sessions. Invited Keynote Speaker ~~~~~~~~~~~~~~~~~~~~~~~ Professor Terrence Sejnowski, The Salk Institute and University of California at San Diego. Call for Papers ~~~~~~~~~~~~~~~ Original research contributions are solicited and will be internationally refereed. Authors must submit the following by August 30, 1991: * Five copies of a four page (maximum) manuscript * Five copies of a 100 word (maximum) abstract * Covering letter indicating submission title and correspondence addresses for authors. Each manuscript and abstract should clearly indicate submission category (from the six listed) and author preference for oral or poster presentations. Note that names or addresses of the authors should be omitted from the manuscript and the abstract and should be included only on the covering letter. Authors will be notified by November 1, 1991 whether their submissions are accepted or not, and are expected to prepare a revised manuscript (up to four pages) by December 13, 1991. Submissions should be mailed to: Mrs Agatha Shotam Secretariat ACNN'92 Sydney University Electrical Engineering NSW 2006 Australia Registration material may be obtained by writing to Mrs Agatha Shotam at the address above or by: Tel: (+61-2) 692 4214 Fax: (+61-2) 660 1228 Email: acnn92 at ee.su.oz.au. 
Deadline for Submissions is August 30, 1991 Venue ~~~~~ Australian National University, Canberra, Australia. Principal Sponsors ~~~~~~~~~~~~~~~~~~ * Australian Telecommunications & Electronics Research Board * Telectronics Pacing Systems ACNN'92 Organising Committee ~~~~~~~~~~~~~~~~~~~~~~~~~~~~ Organising Committee: Chairman: Marwan Jabri, (SU) Technical Program Chairman: Bill Levick, (ANU) & Ah Chung Tsoi, (QU) Stream Chairs: Neuroscience: Max Bennent (SU) Theory: Ah Chung Tsoi (QU) Cognitive SCience & AI: Max Coltheart (MU) Implementations: Marwan Jabri/Steve Pickard (SU) Applications: Yianni Attikiouzel (WA) Local Arrangements Chair: M Srinivasan (ANU) Institutions Liason Chair: N Nandagopal (DSTO) Sponsorship Chair: Steve Pickard (SU) Publicity Chair: Barry Flower (SU) Publications Chair: Philip Leong (SU) Secretariat/Treasurer: Agatha Shotam (SU) Registration ~~~~~~~~~~~~ The registration fee to attend ACNN'92 is: Full Time Students $A75 Academics $A200 Other $A300 A discount of 20% applies for advance registration. Registration forms must be posted before December 13th, 1991, to be entitled to the discount. To be eligible for the Full Time Student rate a letter from the Head of Department as verification of enrolment is required. Accommodation ~~~~~~~~~~~~~ To assist attendees in obtaining accommodation, a list of hotels and colleges close to the conference venue is shown below. Lakeside Hotel Olims Canberra Hotel London Circuit Cnr. Ainslie & Limestone Ave Canberra City ACT 2600 Braddon ACT 2601 Tel: +61(6) 247 6244 Tel: +61(6) 248 5511 Fax: +61(6) 257 3071 Tel: +61(6) 247 0864 Macquarie Private Hotel Capital Hotel 18 National Circuit 108 Northbourne Ave Barton ACT 2600 Canberra City ACT 2600 Tel: +61(6) 273 2325 Tel: +61(6) 248 6566 Fax: +61(6) 273 4241 Tel: +61(6) 248 8011 Brassey Hotel Canberra Rex Hotel Belmore Gardens Northbourne Ave Barton ACT 2600 Canberra City ACT 2600 Tel: +61(6) 273 3766 Tel: +61(6) 248 5311 Fax: +61(6) 273 2791 Tel: +61(6) 248 8357 Accommodation on Australian National University Campus ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ University House Burton & Garran Hall Tel: +61(6) 249 5211 Tel: +61(6) 249 3083 Fax: +61(6) 249 5252 Fax: +61(6) 249 3136 Important Conference Dates ~~~~~~~~~~~~~~~~~~~~~~~~~~ 30th August 1991 Deadline for Submissions 1st Novemeber 1991 Notification of Acceptance 13th December 1991 Deadline for Camera ready paper 13th December 1991 Deadline for advance registration with discount 3rd-5th February 1992 Conference ------------------------------------------------------------------------------- ACNN'92 Registration Form I wish to attend the conference ______ I wish to be on your mailing list ______ My research interests are: Neuroscience _____ Modelling _____ Implementation _____ Other: ___________________________________ Name: _______________________________________________________________________ Title: ______________________________________________________________________ Organisation: _______________________________________________________________ Occupation: _________________________________________________________________ Address: ____________________________________________________________________ _________________________________________ City: _____________________________ State: ________________ Post Code: ____________ Country: ____________________ Tel: ____________________________________ Fax: ______________________________ E-Mail: _____________________________________________________________________ Find enclosed 
a cheque for the sum of ($A): ____________ (Deduct 20% of original registration cost if posted before December 13th, 1991) OR Please charge my credit card for the sum of (A$): ____________ (Deduct 20% of original registration cost if posted before December 13th, 1991) Mastercard/Visa Number: ______________________ Expiry Date: ________________ Signature: ____________________________________ Date: _______________________ To register, please fill in this form and return it together with payment: Secretariat ACNN'92 University of Sydney Electrical Engineering Sydney NSW 2006 Australia From egnp46 at castle.edinburgh.ac.uk Mon Aug 12 15:45:08 1991 From: egnp46 at castle.edinburgh.ac.uk (D J Wallace) Date: Mon, 12 Aug 91 15:45:08 WET DST Subject: f(a,b) = f(a +b) In-Reply-To: Javier Movellan's message of Mon, 5 Aug 91 14:06:10 -0400 (EDT) Message-ID: <9108121545.aa08941@castle.ed.ac.uk> The function f(a,b) is translation invariant, in the sense that if you think of a and b as coordinates on a line, its value is unchanged under the transformation a -> a+x, b -> b-x (any x). I don't think there is a need to introduce any special name for this. David Wallace > I am looking for a name to the constraint > > f(a,b) = f(a+b). > > Do you have any suggestions ? > > > Thanks > > > Javier From jose at tractatus.siemens.com Mon Aug 12 14:31:03 1991 From: jose at tractatus.siemens.com (Steve Hanson) Date: Mon, 12 Aug 1991 14:31:03 -0400 (EDT) Subject: NIPS update Message-ID: NIPS goers: Due to a clerical error, one of our temps put a bunch (maybe ~50) labeled envelopes in the mail. Unfortunately they were EMPTY envelopes. Please disregard these envelopes, they have no bearing on your submission. Envelopes with LETTERS will be appearing soon. We apologize profusely for jumping the gun and causing some of you who are anxiously waiting for information any concern. Please DON'T call us for clarification. And please share this info with your fellow NIPS colleagues. Again Mea Culpa. Steve NIPS*91 Program Chair From tgd at turing.CS.ORST.EDU Mon Aug 12 15:15:36 1991 From: tgd at turing.CS.ORST.EDU (Tom Dietterich) Date: Mon, 12 Aug 91 12:15:36 PDT Subject: Mackey-Glass Data? Message-ID: <9108121915.AA27441@turing.CS.ORST.EDU> I want to experiment with the Mackey-Glass timeseries task. Before I numerically integrate the differential equation myself, I was wondering if someone had already constructed a collection of training and test data that they would be willing to make available. If anyone has suggestions for other interesting regression problems (i.e., where the task is to predict a real-valued function), I'd like to hear from you. Thanks, --Tom From pablo at cs.washington.edu Mon Aug 12 18:56:02 1991 From: pablo at cs.washington.edu (David Cohn) Date: Mon, 12 Aug 91 15:56:02 -0700 Subject: "Virtual" Directories of neural network papers, etc. Message-ID: <9108122256.AA15554@june.cs.washington.edu> Prospero is a distributed directory service that allows users to organize information that is scattered across the Internet. It also allows users to look for information that has been organized by others. Over the past few months I've been assembling "virtual" directories of neural network related papers, source releases, and training/test data. The motivation is that one can access current (publicly ftp'able) work in an organized directory format. 
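On Tom Dietterich's Mackey-Glass question above: if no prepared data set turns up, the series is straightforward to generate directly. The usual form of the delay-differential equation is dx/dt = 0.2 x(t - tau) / (1 + x(t - tau)^10) - 0.1 x(t), most often with tau = 17 or tau = 30; the Python sketch below uses simple Euler integration, and the step size, initial history, and transient handling are arbitrary choices made only for illustration.

import numpy as np

def mackey_glass(n_samples, tau=17.0, beta=0.2, gamma=0.1, power=10,
                 dt=0.1, x0=1.2, discard=1000):
    # Euler integration of dx/dt = beta*x(t-tau)/(1 + x(t-tau)**power) - gamma*x(t),
    # treating x(t) = x0 for all t <= 0 (a constant initial history).
    delay = int(round(tau / dt))
    total = discard + n_samples
    x = np.empty(total + 1)
    x[0] = x0
    for t in range(total):
        x_tau = x[t - delay] if t >= delay else x0
        x[t + 1] = x[t] + dt * (beta * x_tau / (1.0 + x_tau ** power) - gamma * x[t])
    return x[1 + discard:]

series = mackey_glass(5000)
# One common regression benchmark built from the tau = 17 series: sample it at
# unit time intervals and predict x(t + 6) from x(t), x(t - 6), x(t - 12), x(t - 18).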
If your system is running Prospero, you can look in: /papers/subjects/neural-nets Papers and reviews about neural nets /releases/neural-nets Neural net software /databases/machine-learning Machine learning test data I am incorporating all publicly ftp'able neural-network-related files that I become aware of. This includes all papers that are announced as being placed in the neuroprose archive. (Note that this doesn't *replace* neuroprose or any other archive sites; its purpose is to make them easier to access). If your system has relevant publicly-accessible software, papers, or data that you would like to make available to others, please send me email and I will incorporate it into these virtual directories. If you system is not already running Prospero, information on obtaining the release can be obtained from info-prospero at isi.edu. Virtually, -David "Pablo" Cohn e-mail: pablo at cs.washington.edu Dept. of Computer Science, FR-35 phone: (206) 543-7798 University of Washington Seattle, WA 98195 From ben%psych at Forsythe.Stanford.EDU Mon Aug 12 17:41:26 1991 From: ben%psych at Forsythe.Stanford.EDU (Ben Martin) Date: Mon, 12 Aug 91 14:41:26 PDT Subject: Request for info on satellite photo analysis. Message-ID: <9108122141.AA08847@psych> A colleague here at Stanford asked me if I knew of any applications of networks to the problem of satellite photo analysis. He mentioned hearing about a system that could recognize clouds. Any information you can pass along would be appreciated. Ben Martin (ben at psych.stanford.edu) From ira at linus.mitre.org Tue Aug 13 10:15:01 1991 From: ira at linus.mitre.org (ira@linus.mitre.org) Date: Tue, 13 Aug 91 10:15:01 -0400 Subject: Request for info on satellite photo analysis. In-Reply-To: Ben Martin's message of Mon, 12 Aug 91 14:41:26 PDT <9108122141.AA08847@psych> Message-ID: <9108131415.AA06394@ella.mitre.org> I am aware of some efforts to use neural networks for satellite photo analysis. The cloud work you heard about may have been from our group here at MITRE. I believe Hughes has applied the Boundary Contour System to either high altitude or satellite imagery. Our work applies neural network vision techniques to the cloud and other recognition problems. See "Meteorological Classification of Satellite Imagery Using Neural Network Data Fusion," Smotroff, I. G., Howells, T. P., Lehar, S., International Joint Conference On Neural Networks, June 1990, Vol. II, pp. 23-28. See also "Meteorological Classification of Satellite Imagery and Ground Sensor Data Using Neural Network Data Fusion," Smotroff, Howells, Lehar, in Proceedings of the American Meteorological Society Seventh International Conference on Interactive Information and Processing Systems for Meteorology, Oceanography, and Hydrology, New Orleans, LA, January 1991, pp. 239-243. Ira Smotroff (ira at mitre.org) From sims at starbase.MITRE.ORG Tue Aug 13 07:18:00 1991 From: sims at starbase.MITRE.ORG (Jim Sims) Date: Tue, 13 Aug 91 07:18:00 EDT Subject: Request for info on satellite photo analysis. In-Reply-To: Ben Martin's message of Mon, 12 Aug 91 14:41:26 PDT <9108122141.AA08847@psych> Message-ID: <9108131118.AA06259@starbase.mitre.org> some folks at MITRE Bedford were working on cloud ID using neural nets and satellite data. I think you could contact Ira Smotroff there.... try (627) 271-2000 (the operator) jim From slehar at park.bu.edu Tue Aug 13 11:59:09 1991 From: slehar at park.bu.edu (Steve Lehar) Date: Tue, 13 Aug 91 11:59:09 -0400 Subject: Request for info on satellite photo analysis. 
In-Reply-To: connectionists@c.cs.cmu.edu's message of 13 Aug 91 06:11:37 GM Message-ID: <9108131559.AA11710@park.bu.edu> > A colleague here at Stanford asked me if I knew of any applications > of networks to the problem of satellite photo analysis. He > mentioned hearing about a system that could recognize clouds. Any > information you can pass along would be appreciated. We did some work at Mitre Corp [1] on cloud recognition, which we presented at the INNC conference in Paris. My own part of the work involved mostly the Boundary Contour System / Feature Contour System (BCS/FCS) of Grossberg and Mingolla, which is a neural vision model that performs a variety of image enhancement and recognition functions, and which we used as a front-end for a backpropagation network to classify different cloud types, as identified (for the training set) by meterological specialists. If you would like more details on the BCS, or my extension to it the MRBCS, I would be happy to send you a an informal description I have prepared as an ASCII file. If you are interested in the backprop end of it, or the overall research effort, write to ira at linus.mitre.org, or howells at linus.mitre.org for further information. REFERENCES [1] Lehar S., Howells T, & Smotroff I. APPLICATION OF GROSSBERG AND MINGOLLA NEURAL VISION MODEL TO SATELLITE WEATHER IMAGERY. Proceedings of the INNC July 1990 Paris. From REJOHNSON at ORCON.dnet.ge.com Tue Aug 13 14:59:54 1991 From: REJOHNSON at ORCON.dnet.ge.com (REJOHNSON@ORCON.dnet.ge.com) Date: Tue, 13 Aug 91 14:59:54 EDT Subject: UNSUBSCRIBE Message-ID: <9108131859.AA25350@ge-dab.GE.COM> UNSUBSCRIBE From ingber at umiacs.UMD.EDU Wed Aug 14 15:04:18 1991 From: ingber at umiacs.UMD.EDU (Lester Ingber) Date: Wed, 14 Aug 1991 15:04:18 EDT Subject: ingber.eeg.ps.Z in Neuroprose archive Message-ID: <9108141904.AA05900@moonunit.umiacs.UMD.EDU> The paper ingber.eeg.ps.Z has been placed in the Neuroprose archive. This can be accessed by anonymous FTP on cheops.cis.ohio-state.edu (128.146.8.62) in the pub/neuroprose directory. This will laserprint out to 65 pages, so I give the abstract below to help you decide whether it's worth it. (I also enclose a referee's review afterwards to sway you the other way.) The six figures can be mailed on request, and I'm willing to make some hardcopies of the galleys or reprints, when they come, available. However, since this project is funded out of my own pocket, I might have to stop honoring such requests. The published paper will run 44 pages. This message may be forwarded to other lists. Lester Ingber ------------------------------------------ | | | | | | | Prof. Lester Ingber | | ______________________ | | | | | | P.O. Box 857 703-759-2769 | | McLean, VA 22101 ingber at umiacs.umd.edu | | | ------------------------------------------ ======================================================================= Physical Review A, vol. 44 (6) (to be published 15 Sep 91) Statistical mechanics of neocortical interactions: A scaling paradigm applied to electroencephalography Lester Ingber Science Transfer Corporation, P.O. Box 857, McLean, VA 22101 (Received 10 April 1991) A series of papers has developed a statistical mechanics of neocortical interactions (SMNI), deriving aggregate behavior of experimentally observed columns of neurons from statistical electrical-chemical properties of synaptic interactions. 
While not useful to yield insights at the single neuron level, SMNI has demonstrated its capability in describing large-scale properties of short-term memory and electroencephalographic (EEG) systemat- ics. The necessity of including nonlinear and stochastic struc- tures in this development has been stressed. In this paper, a more stringent test is placed on SMNI: The algebraic and numeri- cal algorithms previously developed in this and similar systems are brought to bear to fit large sets of EEG and evoked potential data being collected to investigate genetic predispositions to alcoholism and to extract brain "signatures" of short-term memory. Using the numerical algorithm of Very Fast Simulated Re-Annealing, it is demonstrated that SMNI can indeed fit this data within experimentally observed ranges of its underlying neuronal-synaptic parameters, and use the quantitative modeling results to examine physical neocortical mechanisms to discrim- inate between high-risk and low-risk populations genetically predisposed to alcoholism. Since this first study is a control to span relatively long time epochs, similar to earlier attempts to establish such correlations, this discrimination is incon- clusive because of other neuronal activity which can mask such effects. However, the SMNI model is shown to be consistent with EEG data during selective attention tasks and with neocortical mechanisms describing short-term memory (STM) previously pub- lished using this approach. This paper explicitly identifies similar nonlinear stochastic mechanisms of interaction at the microscopic-neuronal, mesoscopic-columnar and macroscopic- regional scales of neocortical interactions. These results give strong quantitative support for an accurate intuitive picture, portraying neocortical interactions as having common algebraic or physics mechanisms that scale across quite disparate spatial scales and functional or behavioral phenomena, i.e., describing interactions among neurons, columns of neurons, and regional masses of neurons. PACS Nos.: 87.10.+e, 05.40.+j, 02.50.+s, 02.70.+d ======================================================================= Report of Referee Manuscript No. AD4564 Over the years, I had several occasions to review papers by Lester Ingber. However, there was never time enough to fully digest and comprehend all of the details to convince myself that his efforts of developing a theoretical basis for describing neocortical brain functions are in fact sound and not just speculative. This paper dispels all those reservations and doubts, but unfortunately it is rather lengthy. This paper, and the research behind it, is pioneering, and it needs to be published. The question is whether Physical Review A is the appropriate journal. Since the paper reviews and presents in a rather comprehensive fashion the research by Lester Ingber in the area of modeling neocortical brain functions, I recommend that it be submitted to Review of Modern Physics. ======================================================================= ------------------------------------------ | | | | | | | Prof. Lester Ingber | | ______________________ | | | | | | P.O. 
Box 857 703-759-2769 | | McLean, VA 22101 ingber at umiacs.umd.edu | | | ------------------------------------------ From yair at siren.arc.nasa.gov Wed Aug 14 20:42:47 1991 From: yair at siren.arc.nasa.gov (Yair Barniv) Date: Wed, 14 Aug 91 17:42:47 PDT Subject: PlaNet software Message-ID: <9108150042.AA05349@siren.arc.nasa.gov.> does anyone have an idea of the goodness of the PlaNet software from the University of Colorado at Boulder compared to others such as the PDP software by McClelland and Rumelhart. your response is appreciated. yair at siren.arc.nasa.gov From hussien at circe.arc.nasa.gov Wed Aug 14 20:56:59 1991 From: hussien at circe.arc.nasa.gov (Bassam Hussien) Date: Wed, 14 Aug 91 17:56:59 PDT Subject: No subject Message-ID: <9108150056.AA08370@circe.arc.nasa.gov.> From hussien at circe.arc.nasa.gov Wed Aug 14 20:38:32 1991 From: hussien at circe.arc.nasa.gov (Bassam Hussien) Date: Wed, 14 Aug 91 17:38:32 PDT Subject: Preprint on Texture Generation with the Random Neural Network In-Reply-To: Erol Gelenbe's message of Fri, 22 Feb 91 18:13:25 +2 <9102230855.AA15700@inria.inria.fr> Message-ID: <9108150038.AA08366@circe.arc.nasa.gov.> m From spotter at sanger Wed Aug 14 20:53:26 1991 From: spotter at sanger (Steve Potter) Date: Wed, 14 Aug 91 17:53:26 PDT Subject: Cultured Neural Nets? Message-ID: <9108150053.AA15897@sanger.bio.uci.edu> At the 1987 IEEE NN conference I heard a talk by a guy named Gross (I believe at U of Texas) who was describing a plan to grow neurons in culture dishes covered with electrodes. After the neurons developed connections, the numerous electrodes would be used to stimulate and record from the "natural" neural net. A search of recent literature on MedLine did not turn up any such papers. Does anyone have any more news on this? Is anyone else out there doing similar work? Steve Potter U of Cal Irvine Psychobiology dept. (714)856-4723 spotter at sanger.bio.uci.edu From rob at galab2.mh.ua.edu Thu Aug 15 10:27:51 1991 From: rob at galab2.mh.ua.edu (Robert Elliott Smith) Date: Thu, 15 Aug 91 09:27:51 CDT Subject: IEEE SouthEastCon '92 Session on Neural Networks Message-ID: <9108151427.AA18974@galab2.mh.ua.edu> IEEE SouthEastCon '92 April 12-15, 1992 Birmingham, Alabama Session on Neural Networks Announcement and Call for Papers -------------------------------- General Conference Announcement: SouthEastCon is the yearly IEEE Region 3 Technical Conference for the purpose of bringing regional Electrical Engineering students and professionals together and sharing information, particularly through the presentation of technical papers. It it the most influential outlet in Region 3 for promoting awareness of technical contributions made by our profession to the advancement of engineering science and society. Attendance and professional program paper participation from areas outside Region 3 is encouraged and welcome. I am session chair for a planned SouthEastCon '92 session on neural networks. I am writing to encourage the submission of high quality papers that describe innovative work on neural networks theory, analysis, design, and application. Instructions for paper submission: ---------------------------------- Acceptance Categories: 1) Full-length Papers (Refereed) Submit four copies of a paper not to exceed twenty (20) double-spaced, typewritten pages (including references and figures) to the Technical Program Chairman: Dr. 
Perry Wheless University of Alabama Department of Electrical Engineering Box 870286 Tuscaloosa, Alabama 35487-0286 (205) 348-1757 FAX: (205) 348-8573 email: wwheless at ua1vm.ua.edu by October 1, 1991. These papers will be fully refereed. Author notification will be mailed by November 19,1991, and final camera-ready papers will be due on January 6, 1992. 2) Concise Papers: Submit four copies of a paper summary and a separate abstract to the Technical Program Chairman by September 17, 1991. The abstract must be provided on a separate sheet, and limited to one page. The summary should not exceed 500 words. The summary should be complete and should include (a) statement of problems or questions addressed, (b) objective of your work with regard to the problem, (c) approach employed to achieve objective, (d) progress, work performed, and (e) important results and conclusions. Since the summary will be the basis for selection, care should be taken in its preparation so that it is representative of the work reported. As an aid to the Papers Review Committee, please indicate that the paper is intended for the Neural Networks Session. Concise papers, not exceeding four (4) camera-ready Proceedings pages (including references and figures) will be published subject to acceptance by the Papers Review Committee and the author's fulfillment of additional requirements contained in the authors kit. Notification of acceptance and mailing of the author's kit will be on or before November 5, 1991, and the camera-ready papers will be due on January 6, 1991. Important Dates: ---------------- Concise Paper Abstract and Summary Deadline: Sept. 17, 1991 Full-length Paper Deadline: Oct. 1, 1991 Conference Dates: April 12-15, 1991 I hope to see you at SouthEastCon '92. ------------------------------------------- Robert Elliott Smith Department of Engineering of Mechanics The University of Alabama ------------------------------------------- From kolen-j at cis.ohio-state.edu Thu Aug 15 13:30:07 1991 From: kolen-j at cis.ohio-state.edu (john kolen) Date: Thu, 15 Aug 91 13:30:07 -0400 Subject: Tech Report: Multiassociative Memory Message-ID: <9108151730.AA08632@retina.cis.ohio-state.edu> Another fine tech report available through the neuroprose archive at cis.ohio-state.edu: Multiassociative Memory John F. Kolen and Jordan B. Pollack Laboratory for AI Research Dept. of Computer and Information Sciences The Ohio State University Columbus, OH 43210 Abstract This paper discusses the problem of implementing many to many, or multiassociative, mappings with connectionist models. Traditional symbolic approaches explicitly represent all alternatives via stored links, or implicitly represent the alternatives through enumerative algorithms. Classical pattern association models ignore the issue of generating multiple outputs for a single input pattern. While recent research on recurrent networks looks promising, the field has not clearly focused on multiassociativity as a goal. In this paper, we define multiassociative memory and discuss its utility in cognitive modeling. We extend sequential cascaded networks to fit the task, and perform several initial experiments which demonstrate the feasibility of the concept. %ftp cis.ohio-state.edu Connected to cis.ohio-state.edu 220 news FTP server (SunOS 4.1) ready. Name (cis.ohio-state.edu:kolen-j): anonymous 331 Guest login ok, send ident as password Password: username 230 Guest login ok, access restrictions apply. ftp> cd pub/neuroprose 250 CWD command successful. 
ftp> get kolen.multi.ps.Z 200 PORT command successful. 150 ASCII data connection for kolen.multi.ps.Z (128.146.61.207,1336) (48345 bytes). 226 ASCII Transfer complete. local: kolen.multi.ps.Z remote: kolen.multi.ps.Z 48580 bytes received in 1.1 seconds (42 Kbytes/s) then uncompress and print the file %uncompress kolen.multi.ps.Z %lpr kolen.multi.ps From yair at siren.arc.nasa.gov Thu Aug 15 13:34:52 1991 From: yair at siren.arc.nasa.gov (Yair Barniv) Date: Thu, 15 Aug 91 10:34:52 PDT Subject: PlaNet Message-ID: <9108151734.AA05507@siren.arc.nasa.gov.> Hi guys Anyone out there can tell me if the PlaNet Neural-Net software is worth the effort of learning it. How does it compare with the PDP software by McClelland and Rumelhart or other free or non-free software for the SUN. Thanks for any illumination in that regard Yair Barniv From Prahlad.Gupta at K.GP.CS.CMU.EDU Thu Aug 15 13:35:30 1991 From: Prahlad.Gupta at K.GP.CS.CMU.EDU (Prahlad.Gupta@K.GP.CS.CMU.EDU) Date: Thu, 15 Aug 91 13:35:30 EDT Subject: PlaNet software Message-ID: I've just been looking at some simulators. I'd be very interested in other people's opinions, especially on simulators that provide the ability to configure architectures in non-standard ways, and that interface with pretty graphics displays. I'd be happy to e-mail anyone who's interested a compilation I made of info about NN simulators, culled from this mailiing list from the last year or so. However, this mainly details the *availability* of the products, and not user reviews. Here's my (somewhat superficial) assessment: 1. Both the PDP and PlaNet systems allow you to set up *standard* sorts of models pretty fast. 2. Only PlaNet interfaces these with good graphics that let you examine network internals. 3. For models/architectures that depart significantly from "standard", you need to program things yourself. (a) With PDP, you need to work on the simulator code itself. (b) PlaNet provides a set of functions which are *meant* to give the user this capability. However, certain things aren't that easy to do -- eg. I couldn't see an easy way to handle I/O in my own data format (as opposed to PlaNet's imposed file format) without writing my own I/O routines. 4. If you really need to devise strange networks and don't want to write a simulator yourself, you need fairly extensive programming capability *within* an existing simulator. As far as I can tell, the Rochester Connectionist Simulator provides this. However, I haven't yet examined it closely, and I'm not sure how good it's graphics capabilities are for someone who'd rather not dive into X programming. -- Prahlad From ubli at ivy.Princeton.EDU Thu Aug 15 15:50:24 1991 From: ubli at ivy.Princeton.EDU (Urbashi Mitra) Date: Thu, 15 Aug 91 15:50:24 EDT Subject: training in correlated noise Message-ID: <9108151950.AA14733@ivy.Princeton.EDU> i'm interested in any work that may have been done on the convergence properties (i.e. will the learning algorithm converge?) of neural networks (perceptron models) that are trained with *correlated noise* present. the individual training (desired) data are independent from time sample to time sample, but the noise is not. the noise is, however, m-dependent,i.e. noise samples more than m time samples apart, *are* independent. i'm looking for work along the lines of john shynk's (ece dept. ucsb) that is similar to adaptive filtering results, but ANYTHING, in any form in this area would be greatly appreciated. 
thanks, urbashi mitra From koch at CitIago.Bitnet Thu Aug 15 15:57:57 1991 From: koch at CitIago.Bitnet (Christof Koch) Date: Thu, 15 Aug 91 12:57:57 PDT Subject: Cultured Neural Nets? In-Reply-To: Your message <9108150053.AA15897@sanger.bio.uci.edu> dated 14-Aug-1991 Message-ID: <910815125706.2040b65c@Iago.Caltech.Edu> Yes, there exists a number of groups out there trying to grow neurons onto silicon studded with various electrodes and other sensors: Jerry Pine at Caltech and David Tank at AT & T are both doing this sort of work. For a recent reference check the May or June issues of Science magazine. A german researcher, Fromherz from Ulm, had a paper discussing his way of interfacing silicon circuits with single neurons. Christof From B344DSL at UTARLG.UTA.EDU Fri Aug 16 15:02:00 1991 From: B344DSL at UTARLG.UTA.EDU (B344DSL@UTARLG.UTA.EDU) Date: Fri, 16 Aug 91 14:02 CDT Subject: Book out and available, finally Message-ID: <2CA5B6F7523F001FB5@utarlg.uta.edu> Some of you saw my announcement over a year ago of the textbook, Introduction to Neural and Cognitive Modeling, and by now are wondering if the book is an imaginary construct. I am pleased to say that it isn't: it is now out and available from Lawrence Erlbaum Associates. It is only $19.95 in paperback (it's a lot more in hardback) and can be ordered at 1-800-9-BOOKS-9. Their address is Lawrence Erlbaum Associates, Inc., 365 Broadway, Hillsdale, NJ 07642-1487. It took a long time because we did the typesetting here, using WordPerfect with about 150 CorelDraw figures and lots of equations. You might be interested in Erlbaum's current order form, since they have severalother books recently out or in press that are likely to be of interest. These include Neuroscience and Cognition (title?) edited by Gluck and Rumelhart; Neural Networks for Conditioning and Action, edited by Commons, Grossberg, and Staddon; Brain and Perception by Pribram; Income and Choice in Biological Syst- ems by Rosenstein; and Motivation, Emotion, and Goal Direction, edited by Leven and myself. (The latter should be Motivation, Emotion, and Goal Direction in Neural Networks and will be ready in about a month.) Daniel S. Levine Dept. of Math., UT Arlington Arlington, TX 76019-0408 817-273-3598 b344dsl at utarlg.uta.edu From rob at galab2.mh.ua.edu Fri Aug 16 08:49:39 1991 From: rob at galab2.mh.ua.edu (Robert Elliott Smith) Date: Fri, 16 Aug 91 07:49:39 CDT Subject: newer RCS? Message-ID: <9108161249.AA19813@galab2.mh.ua.edu> Hi, Since there is some current discussion on various simulators, I thought I'd bring this up. I retrieved the Rochester Connectionist Simulator (dated 1989) for use in a class last year. I really like the way it was designed, but I did locate some significant bugs: 1) The order of execution in the program *does not* match the manual! Check page 120 of the manual, the paragraph that starts "The execution cycle...", versus, the source code. I've forgotten exactly what file the problem is in, but this is a significant problem if you want to design your own archetectures. However, it's easy to fix. I'll check the details if the author's write me back. 2) The backprop simulator (admittedly) abuses the structure of the RCS itself. However, it seems to me that backprop could be built without this abuse. 3) The system works with X11R3, but is a pain to get to run under R4. Despite these complaints, I really like the RCS! 
It's the only simulator I've seen that has a truely flexible design philosophy, and doesn't require you to learn a new programming language (you simply write subroutines in C, and compile them with the RCS library). Has anyone got a comprehensive update of RCS? Are the original authors out there? If not, can we (those who have hacked parts of RCS) fix it up to realize its potential. If it's the later, please don't bombard me with requests for my fixes, but if you have others, write, and maybe we can put the whole thing together. Rob. ------------------------------------------- Robert Elliott Smith Department of Engineering of Mechanics The University of Alabama P. O. Box 870278 Tuscaloosa, Alabama 35487 <> @ua1ix.ua.edu:rob at galab2.mh.ua.edu <> (205) 348-1618 <> (205) 348-8573 ------------------------------------------- From lacher at lambda.cs.fsu.edu Sat Aug 17 11:07:49 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Sat, 17 Aug 91 11:07:49 -0400 Subject: PLEASE!!! Message-ID: <9108171507.AA02137@lambda.cs.fsu.edu> Please don't send binary or postscript files out over connectionists. This causes extreme inconvenience in some cases, such as reading mail over a PC-modem-Unix connection (I just spent 8 minutes watching drivle fly by on my PC screen at 2400 baud). From schraudo at cs.UCSD.EDU Fri Aug 16 17:49:38 1991 From: schraudo at cs.UCSD.EDU (Nici Schraudolph) Date: Fri, 16 Aug 91 14:49:38 PDT Subject: neuroprose update of hertz.refs.bib.Z Message-ID: <9108162149.AA25270@beowulf.ucsd.edu> The hertz.refs.bib.Z bibliography in the neuroprose archive has been improved: I've moved the address information of @inproceeedings entries around to follow the recommendations for BibTeX 0.99a: conference location in the address field, publisher address in the publisher field. This eliminates the ugly hack of putting conference locations in the organization field, which led to strange results in some bibliography styles. Best regards, -- Nicol N. Schraudolph, CSE Dept. | work (619) 534-8187 | nici%cs at ucsd.edu Univ. of California, San Diego | FAX (619) 534-7029 | nici%cs at ucsd.bitnet La Jolla, CA 92093-0114, U.S.A. | home (619) 273-5261 | ...!ucsd!cs!nici From jfj at m53.limsi.fr Sun Aug 18 07:40:26 1991 From: jfj at m53.limsi.fr (Jean-Francois Jadouin) Date: Sun, 18 Aug 91 13:40:26 +0200 Subject: newer RCS? Message-ID: <9108181140.AA12206@m53.limsi.fr> Hi ! I've done some work with RCS, and found I had to rewrite the Backprop algorithms pretty completely. I can try and make these changes available, if people are interested, though they are not very pretty. We also kludged together a version of Kohonen's maps, that seem to work, though slowly. My own impression of the RCS is that it is an excellent tool, but its data structures are a little limiting: implementing Backprop, for instance, required kludging the algorithms in order to store the required data in a 'neuron' data structure that didn't provide the necessary slots. In the same vein, access to the graphic displays is a little difficult if you want to do something that wasn't provided for in the original design. Final note: we were able to obtain an X11R4 version of RCS. Contact the RCS mailing list for details: administrative requests: simulator-request at cs.rochester.edu mailing list: simulator-users at cs.rochester.edu Regards, jfj From COSIC at cgi.com Tue Aug 20 13:01:00 1991 From: COSIC at cgi.com (COSIC@cgi.com) Date: Tue, 20 Aug 91 13:01 EDT Subject: Please remove me from this mailing list. 
Message-ID: From goodman at crl.ucsd.edu Tue Aug 20 18:27:46 1991 From: goodman at crl.ucsd.edu (Judith Goodman) Date: Tue, 20 Aug 91 15:27:46 PDT Subject: No subject Message-ID: <9108202227.AA22200@crl.ucsd.edu> Please add me to your mailing list. Thanks very much goodman at crl.ucsd.edu From gbugmann at nsis86.cl.nec.co.jp Wed Aug 21 15:12:49 1991 From: gbugmann at nsis86.cl.nec.co.jp (Guido Bugmann) Date: Wed, 21 Aug 91 15:12:49 V Subject: Cultured Neural Nets? In-Reply-To: Your message of Wed, 14 Aug 91 17:53:26 PDT. <9108150053.AA15897@sanger.bio.uci.edu> Message-ID: <9108210612.AA02876@nsis86.cl.nec.co.jp> Neuron cultures on substrates prepared by nanofabrication techniques (grooves and microcontacts) have been realized by the group of the professors C.D.W. Wilkinson and Adam Curtis at the University of Glasgow. I dont have references on their most recent work. Older references are: P. Clark, P. Connolly, A.S.G. Curtis, J.A.T. Dow & C.D.W. Wilkinson Res. Developmental Biology, 99 (1987) 439-448 J.A.T. Dow, P. Clark, P. Connolly, A.S.G. Curtis & C.D.W. Wilkinson J. Cell. Sci. Suppl., 8 (1987) 55-79 ------------------------------------------------- Guido Bugmann NEC Fund. Res. Lab. 34 Miyukigaoka Tsukuba, Ibaraki 305 Japan ------------------------------------------------- From mpp at cns.brown.edu Wed Aug 21 12:57:32 1991 From: mpp at cns.brown.edu (Michael P. Perrone) Date: Wed, 21 Aug 91 12:57:32 EDT Subject: publishing IJCNN supplementary poster papers Message-ID: <9108211657.AA00805@cns.brown.edu> For all those interested in having their IJCNN Supplementary Poster Session papers published: Due to the slow response from supplementary poster session participants, the deadline for submission has been extended to the end of September. Feel free to contect Derek Stubbs or Edward Rosenfeld if you have further questions. The following is a copy of the announcement that was distributed at IJCNN-91. Michael Perrone Box 1843 Center for Neural Science Brown University Providence, RI 02912 +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ ARE YOU INTERESTED IN HAVING YOUR POSTER PAPER PUBLISHED? Publishing is expensive and the organizers of IJCNN-91 Seattle have been unable to publish all the papers submitted in this year's Proceedings. This is a setback for researchers. As a result, we hope to publish the Supplemental Poster Session papers as a separate publication. We are entering this endeavor in the spirit of self-organization typical of neural networks. We will contact potential publishers. However, please note, we cannot assure you that we will succeed in publishing these papers. We will do our best. Here are the steps that you can take towards potential publication of your paper: 1. Provide two (2) camera-ready copies of your poster paper on 8.5 by 11 inch paper, with one inch margins all around. No more than four (4) pages please. Use the title, author and reference system that appear in the IJCNN Proceedings. 2. Provide a keyword list (at the end of the Abstract) so that we can create and include a keyword index. Use at least five keywords for each paper. 3. By August 31st, 1991, send copies, marked clearly: IJCNN-91, to either editor at the addresses below. [NOTE: I talked with Ed Rosenfeld and he said that this deadline had been extended to the end of September. 
-MP] EDITORS: DEREK STUBBS Sixth Generation Systems PO box 155 Vicksburg, MI 49097-0155 (phone: 1-616-649-3772) EDWARD ROSENFELD Intelligence PO box 20008 New York, NY 10025-1510 (phone: 1-212-222-1123) +++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++ From goodman at crl.ucsd.edu Wed Aug 21 11:06:26 1991 From: goodman at crl.ucsd.edu (Judith Goodman) Date: Wed, 21 Aug 91 08:06:26 PDT Subject: mailing list Message-ID: <9108211506.AA25484@crl.ucsd.edu> Please add me to your mailing list. Thanks very much goodman at crl.ucsd.edu From tedwards at wam.umd.edu Wed Aug 21 15:21:24 1991 From: tedwards at wam.umd.edu (Thomas VLSI Edwards) Date: Wed, 21 Aug 91 15:21:24 -0400 Subject: Synchronization Binding? Freq. Locking? Bursting? Message-ID: <9108211921.AA04522@avw.umd.edu> I have just read "Synchronized Oscillations During Cooperative Feature Linking in a Cortical Model of Visual Perception" (Grossberg, Somers, Neural Networks Vol. 4 pp 453-466). It describes some models of phase-locking (supposedly neuromorphic) relaxation oscillators, including a cooperative bipole coupling which appears similar to the Kammen comparator model, and fits into BCS theory. I am curious at this date what readers of connectionists think about the theory that synchronous oscillations reflect the binding of local feature detectors to form coherent groups. I am also curious as to whether or not phase-locking of oscillators is a reasonable model of the phenomena going on, or whether synchronized bursting, yet not frequency-locked oscillation, is a more biologically acceptable answer. Incidentally, my current work involves VLSI circuits which perform a partitionable phase-locking of multiple oscillators using a method similar to the "comparator model." (Circuit fabricated, technical report in preparation). The figures in Grossberg's paper look much like the responses of my oscillators, so I'll take that as an encouraging sign. -Thomas Edwards tedwards at avw.umd.edu From qin at eng.umd.edu Thu Aug 22 09:33:13 1991 From: qin at eng.umd.edu (Si-Zhao Qin) Date: Thu, 22 Aug 91 09:33:13 -0400 Subject: delete Message-ID: <9108221333.AA13467@cm14.eng.umd.edu> Please delete me from the mailing list.
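[On Thomas Edwards' question above about synchronization as binding: a minimal sketch of how a pool of coupled phase oscillators locks together once the coupling outweighs the spread of intrinsic frequencies. This is a generic Kuramoto-style phase model, not the Grossberg-Somers relaxation oscillators or the comparator circuit, and every constant in it is an arbitrary choice.]

import numpy as np

rng = np.random.default_rng(0)
n = 10                                 # oscillators in one "feature group"
k = 2.0                                # coupling strength (arbitrary)
omega = rng.normal(10.0, 0.5, n)       # intrinsic frequencies, rad/s
theta = rng.uniform(0.0, 2 * np.pi, n) # random initial phases
dt = 0.001

for _ in range(5000):
    # each unit is pulled toward the phases of the other units in its group
    pull = np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta += dt * (omega + (k / n) * pull)

# order parameter near 1.0 means the group has phase-locked
r = abs(np.exp(1j * theta).mean())
print("phase coherence:", round(r, 3))

With the coupling set to zero the coherence stays low; units belonging to different "groups" (uncoupled pools) drift apart, which is the sense in which locking could mark group membership.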
From yair at siren.arc.nasa.gov Thu Aug 22 17:29:02 1991 From: yair at siren.arc.nasa.gov (Yair Barniv) Date: Thu, 22 Aug 91 14:29:02 PDT Subject: PlaNet In-Reply-To: chk@occs.cs.oberlin.edu's message of Thu, 22 Aug 91 16:39:09 -0400 <9108222039.AA25235@occs.cs.cs.oberlin.edu> Message-ID: <9108222129.AA00209@siren.arc.nasa.gov.> I know that Prahlad.Gupta at K.GP.CS.CMU.EDU has a list of NNet software. good luck, Yair From chk at occs.cs.oberlin.edu Thu Aug 22 16:39:09 1991 From: chk at occs.cs.oberlin.edu (chk@occs.cs.oberlin.edu) Date: Thu, 22 Aug 91 16:39:09 -0400 Subject: PlaNet In-Reply-To: Your message of "Thu, 15 Aug 91 10:34:52 PDT." <9108151734.AA05507@siren.arc.nasa.gov.> Message-ID: <9108222039.AA25235@occs.cs.cs.oberlin.edu> Yair Barniv -- I just noticed your note dated August 15 on the connectionist mailing list about a comparison between neural network simulators. I'm wondering if you would be willing to forward to me any useful messages that you may receive in reply? I have the same question as you about which of the many simulators out there are worthwhile learning. Thanks! Chris Koch Oberlin College, Computer Science From mike at ailab.EUR.NL Fri Aug 23 12:48:36 1991 From: mike at ailab.EUR.NL (mike@ailab.EUR.NL) Date: Fri, 23 Aug 91 18:48:36 +0200 Subject: Analysis of Time Sequences Message-ID: <9108231648.AA04606@> Dear Connectionists, I am working in a project where we want to use a neural network for online analysis of sequences of sensor measurements over discrete time steps to detect abnormalities as soon as possible. The obvious thing to me to handle the problem of time would be to, at a given time t, look back a fixed number of n time steps and analyze the points from time t-n to t. Now, I would like to know what alternative approaches for dealing with time sequences there are. Could anybody please give me any references on that topic? Thank you in advance, Michael Tepp mike at ailab.eur.nl From cjiang at ee.WPI.EDU Fri Aug 23 10:32:56 1991 From: cjiang at ee.WPI.EDU (Caixia Jiang) Date: Fri, 23 Aug 91 10:32:56 -0400 Subject: delete Message-ID: <9108231432.AA08179@ee.WPI.EDU> Please delete my name from the mailing list. From fellous%pipiens.usc.edu at usc.edu Fri Aug 23 21:00:43 1991 From: fellous%pipiens.usc.edu at usc.edu (Jean-Marc Fellous) Date: Fri, 23 Aug 91 18:00:43 PDT Subject: Single Neuron Simulators Message-ID: <9108240100.AA01291@pipiens.usc.edu> Dear Members of the Connectionist mailing list, I would like to go trough the existing (and operational) Single Neuron simulators that allow fine grain analysis (electrical & chemical) of neurons and interconnected neurons (small number of them). I would appreciate any references, email/mail addresses, criticisms on this matter. My ultimate goal is to determine the required level of sophistication a connectionist modeler needs to take into account in order to simulate individual neurons and networks of neurons (depending of course of his/her goal). I know already about: NSL (USC), GENESIS (Caltech), NEMOSYS (LLNL), CAJAL91 (USC) I will summarize the answers to the list. 
Thank you in advance, Yours, Jean-Marc Fellous Center for Neural Engineering University of Southern California Los Angeles From marwan at ee.su.OZ.AU Fri Aug 23 22:51:09 1991 From: marwan at ee.su.OZ.AU (Marwan Jabri) Date: Sat, 24 Aug 1991 12:51:09 +1000 Subject: Job opportunities Message-ID: <9108240251.AA16536@brutus.ee.su.OZ.AU> Research Fellows / Professional Assistants (2) Department of Electrical Engineering The University of Sydney Australia Microelectronics Implementation of Neural Networks based Devices for the Analysis and Classification of Medical Signals (Re-Advertised) Applications are invited from persons to work on an advanced neural network application project in the medical area. The project is being funded jointly by the Australian Government and a high-technology manufacturer of medical products. Appointees will be joining an existing team of 3 staff. The project is the research and development of VLSI architectures of neural networks for pattern classification applications. The project spans over a 2-year period and follows a preliminary study which was completed in early 1991. Applicants with expertise in one or more of the following areas are particularly invited to apply: - Neural network architectures and hardware implementation - Analog and/or digital VLSI design - Electronic systems design - Hardware development and design - EEPROM storage technologies - Analog circuits modeling and simulation The appointments will be for a period of 2 years. Applicants should have an Electrical/Electronic Engineering degree or equivalent. The appointees may apply for enrollment towards a postgraduate degree (part-time). Salary range according to qualifications: $25k - $40k Method of application: --------------------- Applications including curriculum vitae, list of publications and the names, addresses and Fax numbers of three referees should be sent to: Dr M.A. Jabri, Sydney University Electrical Engineering Building J03 NSW 2006 Australia Tel: (+61-2) 692-2240 Fax: (+61-2) 692-3847 Email: marwan at ee.su.oz.au From lacher at lambda.cs.fsu.edu Sat Aug 24 19:32:11 1991 From: lacher at lambda.cs.fsu.edu (Chris Lacher) Date: Sat, 24 Aug 91 19:32:11 -0400 Subject: Analysis of Time Sequences Message-ID: <9108242332.AA00585@lambda.cs.fsu.edu> If the measurements occur at discrete but non-uniformly spaced times, they can be converted to uniformly spaced pseudomeasurements (for comparability) using interpolation of some sort. We are trying cubic splines in one project where the data sample times depend on human subjects' recording "4 times daily". Chris Lacher From mike at ailab.EUR.NL Mon Aug 26 14:57:32 1991 From: mike at ailab.EUR.NL (mike@ailab.EUR.NL) Date: Mon, 26 Aug 91 20:57:32 +0200 Subject: Analysis of Time Sequences In-Reply-To: Chris Lacher's message of Sat, 24 Aug 91 19:32:11 -0400 <9108242332.AA00585@lambda.cs.fsu.edu> Message-ID: <9108261857.AA05986@> Dear Chris Lacher, Thank's a lot for your reply to my question on analyzing time sequences. I appreciate your help on this subject very much. regards Michael Tepp mike at ailab.eur.nl From hazem at gwusun.gwu.edu Mon Aug 26 11:25:35 1991 From: hazem at gwusun.gwu.edu (ali) Date: Mon, 26 Aug 91 11:25:35 EDT Subject: mailing list Message-ID: <9108261525.AA00401@ac080a.gwu.edu> Please add me to your mailing list . 
Thank you hazem at eesun.gwu.edu From tenorio at ecn.purdue.edu Mon Aug 26 11:56:18 1991 From: tenorio at ecn.purdue.edu (Manoel Fernando Tenorio) Date: Mon, 26 Aug 91 10:56:18 -0500 Subject: Analysis of Time Sequences In-Reply-To: Your message of Fri, 23 Aug 91 18:48:36 +0200. <9108231648.AA04606@> Message-ID: <9108261556.AA21734@dynamo.ecn.purdue.edu> -------- The other methods more commonly used for temporal phenomena are: 1. memorize delayed inputs 2. recursion (state, input, output) 3. sliding time window 4. Hysteresis The first 3 require the knowledge of the size of the time dependence, and are fixed, although Weigend has recently shown that for prediction problems, this is not a big problem since larger than necessary windows work. -- M. F. Tenorio --- Your message of: Friday,08/23/91 --- From: mike at ailab.EUR.NL Subject: Analysis of Time Sequences Dear Connectionists, I am working in a project where we want to use a neural network for online analysis of sequences of sensor measurements over discrete time steps to detect abnormalities as soon as possible. The obvious thing to me to handle the problem of time would be to, at a given time t, look back a fixed number of n time steps and analyze the points from time t-n to t. Now, I would like to know what alternative approaches for dealing with time sequences there are. Could anybody please give me any references on that topic? Thank you in advance, Michael Tepp mike at ailab.eur.nl --- end of message --- From denis at hebb.psych.mcgill.ca Mon Aug 26 14:08:44 1991 From: denis at hebb.psych.mcgill.ca (Denis Mareschal) Date: Mon, 26 Aug 91 14:08:44 EDT Subject: Genetron references Message-ID: <9108261808.AA20610@hebb.psych.mcgill.ca.psych.mcgill.ca> Hi! Does anybody out there know where I could get some information on the following: 1. Seymour Papert's early 60's work on GENETRONs. I've already got a copy of his chapter in La Filiation des Structures (1963) but haven't been able to track down anything else (or in English for that matter). 2. A network design that would randomly (or pseudo-randomly) select one among k (equally valid) simultaneously presented input stimuli. For example: Given an array of nodes whose values lie in the interval [0,1] select one from among those whose value is above a threshold of say 0.5. Any help at tracking these down would be greatly appreciated. Thanks a lot Cheers, Denis Mareschal From uli at ira.uka.de Tue Aug 27 05:26:39 1991 From: uli at ira.uka.de (Uli Bodenhausen) Date: Tue, 27 Aug 91 10:26:39 +0100 Subject: No subject Message-ID: Subject: Re: Analysis of Time Sequences In-reply-to: Your message of "Mon, 26 Aug 91 10:56:18 EST." <9108261556.AA21734 at dynamo.ecn.purdue.edu> -------- Manoel Fernando Tenorio writes: The other methods more commonly used for temporal phenomena are: 1. memorize delayed inputs 2. recursion (state, input, output) 3. sliding time window 4. Hysteresis The first 3 require the knowledge of the size of the time dependence, and are fixed, although Weigend has recently shown that for prediction problems, this is not a big problem since larger than necessary windows work. --------------------------- I'd like to point out that it is possible to derive a learning algorithm that adjusts the size of time-windows automatically (Bodenhausen and Waibel, last NIPS proceedings and last ICASSP proceedings). Uli
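[To make the fixed sliding-window option above concrete, a minimal sketch of turning a stream of sensor readings into window vectors that a trained network could score for abnormality. The window length, the toy sensor stream, and the untrained random weights are all arbitrary stand-ins.]

import numpy as np

def windows(stream, n):
    """Yield the last n readings at every step once n readings have arrived."""
    buf = []
    for x in stream:
        buf.append(x)
        if len(buf) > n:
            buf.pop(0)
        if len(buf) == n:
            yield np.array(buf)

rng = np.random.default_rng(1)
n = 8                                        # fixed look-back (arbitrary)
W1 = rng.normal(size=(4, n)) * 0.5           # weights would come from training
W2 = rng.normal(size=4) * 0.5

def abnormality(w):
    h = np.tanh(W1 @ w)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(W2 @ h)))   # score in (0, 1)

stream = np.sin(np.arange(200) / 5.0) + rng.normal(0.0, 0.05, 200)
scores = [abnormality(w) for w in windows(stream, n)]
print(len(scores), round(float(max(scores)), 3))

In the Bodenhausen and Waibel work cited above, the look-back n itself is adapted by the learning algorithm rather than fixed in advance.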
From jfj at m53.limsi.fr Tue Aug 27 08:30:21 1991 From: jfj at m53.limsi.fr (Jean-Francois Jadouin) Date: Tue, 27 Aug 91 14:30:21 +0200 Subject: Time-unfolding Message-ID: <9108271230.AA03904@m53.limsi.fr> Dear connectionists, I've been doing a little work with Time-Unfolding Networks (first mentioned, I think, in PDP), and getting pretty terrible results. My intuition is that I've misunderstood the model. Does anyone out there use this model? If so, would you be prepared to exchange benchmark results (or even better, software) and compare notes? A little discouraged, jfj From lissie!botsec7!botsec1!dcl at uunet.UU.NET Tue Aug 27 13:09:49 1991 From: lissie!botsec7!botsec1!dcl at uunet.UU.NET (David Lambert) Date: Tue, 27 Aug 91 13:09:49 EDT Subject: Radial Basis Functions Message-ID: <9108271709.AA00629@botsec1.bot.COM> Hi. Could someone post a brief description of RBF techniques as applied to this field? References would also be most appreciated. Thanks. David Lambert From dlovell at s1.elec.uq.oz.au Wed Aug 28 11:09:18 1991 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Wed, 28 Aug 91 11:09:18 GMT+1000 Subject: Translation Invariance in Human Visual System? Message-ID: <9108280109.AA11569@c14.elec.uq.oz.au> Dear Connectionists, Recently we (at the University of Queensland) had the pleasure of a visit from Prof. Thomas Huang of the University of Illinois. In one of his presentations on image compression he mentioned that a team of French researchers had recently published information which showed the response of the human visual system not to be invariant under certain translations of input. Prof. Huang did not know the details of this paper at the time but said that he would pass them on to me when he returned to Illinois (in mid-September). The problem that I am now faced with is that I would like to get hold of this paper by Friday, August 30 and I don't know where to look (and I don't have much spare time between now and then). Any assistance would be greatly appreciated. Regards David -- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept.
Electrical Engineering | University of Queensland | BRISBANE 4072 | Australia | | From wimberly at bosque.psc.edu Wed Aug 28 12:35:44 1991 From: wimberly at bosque.psc.edu (wimberly@bosque.psc.edu) Date: Wed, 28 Aug 91 12:35:44 EDT Subject: Position at Pittsburgh Supercomputing Center Message-ID: <9108281635.AA05743@bosque.psc.edu> TITLE: Scientific Specialist DEPARTMENT: Pittsburgh Supercomputing Center Carnegie Mellon University This position will support the Center in its role as a national resource in scientific computing in the area of biomedical research. Specifically, this work will be undertaken in the area of connectionist artificial intelligence, as applied to problems in medicine and biology, or in the area of modeling biological neuronal processes, or both. The incumbent will undertake efforts in training, high-level user consulting and collaborative research. He or she will provide leadership and coordination for a community of the Center's users interested in neural modeling and connectionist AI. QUALIFICATIONS: An earned doctorate in Computer Science, Neural Science, Psychology or a related discipline is required. A publication record is desirable. Strong interpersonal and communications skills are required and candidates should have "hands-on" experience designing, developing and documenting software systems. Supercomputing experience with vector processors or massively parallel architectures is highly desirable. To apply, send a letter and resume to: Dr. Frank C. Wimberly Scientific Applications Coordinator Pittsburgh Supercomputing Center 4400 Fifth Avenue Pittsburgh, PA 15213 wimberly at psc.edu (internet) wimberly at cpwpsca (bitnet) From jcp at vaxserv.sarnoff.com Wed Aug 28 13:24:16 1991 From: jcp at vaxserv.sarnoff.com (John Pearson W343 x2385) Date: Wed, 28 Aug 91 13:24:16 EDT Subject: Analysis of Time Sequences Message-ID: <9108281724.AA27660@sarnoff.sarnoff.com> In response to: I'd like to point out that it is possible to derive a learning algorithm that adjusts the size of time-windows automatically (Bodenhausen and Waibel, last NIPS proceedings and last ICASSP proceedings). Uli I would like to add that Bert de Vries and Jose Principe have been developing a neural network model with variable (and trainable) time delays for temporal sequence processing. See NIPS-90, page 162. John Pearson David Sarnoff Research Center CN5300 Princeton, NJ 08543 609-734-2385 jcp at as1.sarnoff.com From liaw at ecn.purdue.edu Wed Aug 28 13:34:49 1991 From: liaw at ecn.purdue.edu (Liaw Jin-Nan) Date: Wed, 28 Aug 91 12:34:49 -0500 Subject: Radial Basis Functions Message-ID: <9108281734.AA28659@betta.ecn.purdue.edu> Hi, As far as I know there are several related papers which apply Radial Basis Function to neural networks. Some of these articles are listed below: S. M. Botros and C. G. Atkeson (1990) "Generalization properties of Radial Basis Functions", In R. P. Lippmann, J. E. Moody, D. S. Touretzky, ed.s, Advances in Neural Information Processing Systems 3, pp. 707-713, Morgan Kaufmann, San Mateo, CA. J. Moody and C. Darken (1989) "Fast learning in networks of locally tuned processing units", Neural Computation 1(2), pp. 281-294. T. Poggio and F. Girosi (1990) "Networks for approximation and learning", Proceedings of IEEE 78(9), pp. 1481-1497. M. J. D. Powell (1987) "Radial basis functions for multivariable interpolation: A review", In J. C. Mason and M. G. Cox(ed.), Algorithms for Approximation, pp. 143-167, Clarendon Press, Oxford. S. Renals and R. 
Rohwor (1989) "Phoneme classification experiments using radial basis functions", IJCNN, PP. I-462 - I-467, Washington, D. C. T. D. Sanger (1990) "Basis-function trees for approximation in high-dimensional spaces", Proceedings of 1990 Connectionist Summer School, pp. 145-151, Morgan Kaufmann, San Mateo, CA. T. D. Sanger (1991) "A tree-structure algorithm for reducing computation in networks with separable basis functions", Neural Computation, 3(1), pp. 67-81. Cheers, Jin-Nan Liaw liaw at betta.ecn.purdue.edu From port at iuvax.cs.indiana.edu Wed Aug 28 13:38:09 1991 From: port at iuvax.cs.indiana.edu (Robert Port) Date: Wed, 28 Aug 91 12:38:09 EST Subject: Analysis of time sequences without a window Message-ID: Tenorio mentioned several schemes for collecting information about patterns in time sequences. As he noted, several are essentially the same -- Time Windows and Delay Lines -- since a static (or fixed-window) slice of the signal is stored and advanced through a structure that preserves all info in the inputs -- that is, it preserves a raw record of inputs. (Im not sure what he means by `recursion'.) As long as the bandwidth of input is very small (that is, when there is a small set of possible inputs), this technique is fine. (Though, as noted before, an apriori window size imposes a limit on the maximum length of pattern that can be learned). But for a domain like HEARING, where the entire acoustic spectrum is sampled (at sampling rates that vary with frequency), the idea of storing EVERYTHING that comes in is computationally intractable -- at least, it apparently is for human hearing (and surely hearing in other animals as well). Despite the intuitive appeal of theories about `echoic memory', `precategorical acoustic store', etc, the evidence shows that these `acoustic memories' do NOT contain anything like raw sound spectra for any length of time. Instead, these memories should be called `auditory' since they contain `names' (or categories of some sort) for learned patterns. (See my paper in Connection Science, 1990) One source of evidence for this is simply that when an acoustic temporal pattern is REALLY NOVEL - eg, an artificial pattern completely unlike speech or other sounds from our environment - then listeners do NOT have a veridical representation of it that can be retained for a second or two. See experiments by CS Watson on patterns of 5-10-tones presented within a half second or so. The patterns are random-freq pure-tone sequences (patterns that impressionistically resemble the sound of a Touch-Tone phone when it auto-redials, or maybe even a turkey gobble). It is incredibly difficult to detect changes in, say, the frequency of one of the tones -- at least it's hard as long as the pattern is `unfamiliar'. And to really learn the pattern (to near-asymptotic performance level) requires literally thousands of practice trials! So what could familiar, learned auditory memories be like if they aren't specified within in a raw time window of the acoustic signal? I think the answer is Tenorio's other type: Hysteresis. This refers to the effect where some properties of past inputs affect system response to the current input. A concrete example is a cheap dimmer switch for a light. Frequently, a given angle for the rotating knob produces one level of brightness when approached from the left and another brightness approached from the right. 
This kind of nonlinear behavior can be exploited in a dynamical system (eg, the nervous system or a recurrent connectionist network) to store information about pattern history. By an appropriate learning process the parameters of the dynamic system can be adjusted to generate a distinctive trajectory (through activation space) for familiar patterns. See Anderson and Port, 1990 and Anderson, Port, McAuley, 91 for some demonstrations of this kind of `dynamic memory' in networks trained with recurrent backprop. This kind of representation for familiar sequential patterns exhibits many standard properties of human hearing. For example, this kind of representation (unlike a raw time window representation) is naturally INVARIANT under changes in the RATE of presentation of the pattern (just as words or tunes are recognized as the same despite differences in rate of production). It has been shown that for `Watson patterns', changing the rate of presentation to listeners during testing by a factor of 2 or so relative to the rate used during training has no effect whatever on performance. The same is true of our recurrent networks. The use of dynamic-memory representations by the nervous system for environmental sounds exploits the fact that most of the sounds we hear are very similar to sounds we have heard many times before. Only a minute fragment of the possible spectrally distinct patterns over time occur in our environment, so apparently we classify sounds into a (very large) alphabet of familiar sequences. Watson patterns are not in this set, so we cannot store them for a second or 2. But to return to Michael Tepp's original problem. He apparently has several sampled physiological measures from milk cows (eg, body temperature, chemical content of the milk, etc) and hopes to detect the presence of mastitis in the cow as early as possible. It seems very likely that the onset of the disease will exhibit differences in rate between instances of the illness. So, even though keeping a static time-window is trivial given the rate at which the physiological data is generated, the distribution of information about the `target pattern' (whatever it is) across such a window is very likely NOT to be constant. Thus a hysteresis-based method of pattern recognition using a dynamic memory might be expected to have more success. A few refs: Anderson, Sven, R. Port and Devin McAuley (1991) Dynamic Memory: a model for auditory pattern recognition. Mspt. But I will make it available by ftp from neuroprose at Ohio State. We will post a separate note when it is there. Port, Robert (1990) Representation and recognition of temporal patterns. \f2Connection Science,\f1 151-176. This includes some description of Watson's work and contains more on the argument against time windows. Port, Robert and Sven Anderson (1989) Recognition of melody fragments in continuously performed music. In G. Olson and E. Smith (eds) \f2Proceedings of the Eleventh Annual Meeting of the Cognitive Science Society\f1 (L. Erlbaum Assoc, Hillsdale, NJ), pp. 820-827. Port, R and Tim van Gelder (1991) Representing aspects of language. Proc of Cog Sci Soc 13, Erlbaum Assoc. Generalizes the notion of dynamic representations as applied to other kinds of patterns. From dtam at next-cns.neusc.bcm.tmc.edu Wed Aug 28 14:21:18 1991 From: dtam at next-cns.neusc.bcm.tmc.edu (David C. Tam) Date: Wed, 28 Aug 91 14:21:18 GMT-0600 Subject: Cultured Neural Nets references Message-ID: <9108282021.AA15899@next-cns.neusc.bcm.tmc.edu> Here are some references of Dr. 
Guenter W. Gross at University of North Texas, Denton TX 76203 on cultured neural nets grown on MultiMicroElectrode Plates (MMEPs). He is among the first investigators to have successfully grown neurons forming networks with spontaneous electrical activities recorded from 64 electrodes simultaneously over a long period of time (i.e., months), with the pattern of firing monitored by these microelectrodes under physiological conditions. His current electrode design is a second-generation one. The first generation uses metal (opaque) electrode conductors photo-etched on a glass-plate substrate. The second-generation design uses transparent indium-tin oxide electrode conductors so that the complete neural network can be visualized under the microscope without being blocked by the electrode leads. His results on the network activity of neurons include: cooperative synchronous burst (phase-lock) firing of neurons; switching of firing patterns in groups of neurons; micro-laser surgery that selectively "zaps" axons and/or dendritic connections of neurons in the network, etc. These are successful experimental results of monitoring up to 64 channels of electrodes in cultured neurons. Per request of Steve Potter, here is a list of his publications: Gross, G. W. (1979). Simultaneous single unit recording in vitro with a photoetched, laser deinsulated gold, multimicroelectrode surface. IEEE Trans. Biomed. Eng. BME. 26: 273-279. Gross, G. W., & Hightower, M. H. (1986). An approach to the determination of network properties in mammalian neuronal monolayer cultures. Proc. of the 1st IEEE Conf. on Synth. Microstructs. in Biol. Res., Naval Research Lab. Press, Washington D.C., pp. 3-21. Gross, G. W. & Kowalski, J. M (1990). Experimental and theoretical analysis of random nerve cell network dynamics. In Neural Networks: Concepts, Applications, and Implementations Vol. 3. Prentice-Hall: New Jersey. (in press) Gross, G. W., & Lucas, J. H. (1982). Long-term monitoring of spontaneous single unit activity from neuronal monolayer networks cultured on photoetched multielectrode surfaces. J. Electrophys. Tech. 9: 55-69. Gross, G. W., Wen, W., & Lin, J. (1985) Transparent indium-tin oxide patterns for extracellular multisite recording in neuronal cultures. J. Neurosci. Methods. 15: 243-252. Droge, D. H., Gross, G. W., Hightower, M. H., & Czisny, L. E. (1986) Multielectrode analysis of coordinated, multisite, rhythmic bursting in cultured CNS monolayer networks. J. Neurosci. 6: 1583-1592. Lucas, J. H., Czisny, L. E., & Gross, G. W. (1986). Adhesion of cultured mammalian CNS neurons to flame-modified hydrophobic surfaces. In Vitro Cell & Dev. Biol. 22: 37-43. David Tam dtam at next-cns.neusc.bcm.tmc.edu From snider at mprgate.mpr.ca Wed Aug 28 16:23:11 1991 From: snider at mprgate.mpr.ca (Duane Snider) Date: Wed, 28 Aug 91 13:23:11 PDT Subject: No subject Message-ID: <9108282023.AA00878@kiwi.mpr.ca> Subject: Re: Radial Basis Functions > Could someone post a brief description of RBF techniques > as applied to this field? References would also be > most appreciated. There is a fairly good description of how radial basis functions work in multidimensional spaces in 'Neurocomputing' by Robert Hecht-Nielsen. It was published by Addison-Wesley in 1990. John Moody terms his radial basis functions 'Local Receptive Fields'. I have seen some of his work at the IJCNN's before. I also found Nestor Corp's algorithm for Restricted Coulomb Energy fields in 'DARPA Neural Network Study', ISBN 0-916159-17-5. I hope this helps, Duane Snider snider at mprgate.mpr.ca MPR Teltech Ltd Burnaby, BC Canada
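[A minimal sketch of the locally tuned RBF idea described above, in the spirit of the Moody and Darken reference cited earlier: Gaussian units on a handful of centers plus a linear output layer fit by least squares. The centers, shared width, and one-dimensional toy function are arbitrary choices.]

import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-3.0, 3.0, 40)               # 1-D training inputs
y = np.sin(X)                                # toy target function
centers = rng.choice(X, 10, replace=False)   # unit centers picked from the data
width = 0.8                                  # shared Gaussian width (arbitrary)

def hidden(x):
    # one Gaussian "local receptive field" per center
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))

# linear output weights fit by least squares on the hidden activations
w, *_ = np.linalg.lstsq(hidden(X), y, rcond=None)

x_test = np.linspace(-3.0, 3.0, 7)
print(np.round(hidden(x_test) @ w, 2))       # network output on test points
print(np.round(np.sin(x_test), 2))           # target values for comparison

Because each unit only responds near its center, the fit is local; how such networks interpolate between the centers is the generalization question the references above take up.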
From gbugmann at nsis86.cl.nec.co.jp Thu Aug 29 07:38:13 1991 From: gbugmann at nsis86.cl.nec.co.jp (Guido Bugmann) Date: Thu, 29 Aug 91 07:38:13 V Subject: Radial Basis Functions In-Reply-To: Your message of Tue, 27 Aug 91 13:09:49 EDT. <9108271709.AA00629@botsec1.bot.COM> Message-ID: <9108282238.AA07351@nsis86.cl.nec.co.jp> Hello, an application of RBF for the modelling of time series is described in: He, X. and Lapedes, A. (1991) "Nonlinear Modeling and Prediction by Successive Approximations Using Radial Basis Functions", Los Alamos Lab. Report LA-UR-91-1375 On this network, there was also an announcement by Martin Roescheisen of the following technical report: Hofmann, R., Roescheisen, M. and Tresp, V. (1991) "Incorporating Prior Knowledge in Parsimonious Networks of Locally-Tuned Units" Regards Guido Bugmann From edelman at BLACK Thu Aug 29 02:57:00 1991 From: edelman at BLACK (Shimon Edelman) Date: Thu, 29 Aug 91 08:57+0200 Subject: Translation Invariance in Human Visual System? In-Reply-To: <9108280109.AA11569@c14.elec.uq.oz.au> Message-ID: <19910829065741.1.EDELMAN@YAD> >From: David Lovell >Subject: Translation Invariance in Human Visual System? Here is, I believe, the reference to that paper: @article{NazirORegan90, author="T. Nazir and J. K. {O'Regan}", title="Some results on translation invariance in the human visual system", journal="Spatial vision", volume="5", pages="81-100", year=1990 } Actually, the paper discusses evidence for a quite *limited* translational invariance (in the recognition of dot patterns). -Shimon Shimon Edelman (edelman at wisdom.weizmann.ac.il) Dept. of Applied Mathematics and Computer Science The Weizmann Institute of Science Rehovot 76100 Israel From dlovell at s1.elec.uq.oz.au Thu Aug 29 17:45:58 1991 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Thu, 29 Aug 91 16:45:58 EST Subject: Translation Invariance etc.. Message-ID: <9108290646.AA26523@s2.elec.uq.oz.au> Dear Connectionists, For those of you who have been holding your breath wondering who published the paper that claims the human visual system is not as translationally invariant as one would think, all the way from the keyboard of Shimon Edelman, I am proud to present....
| From dlovell at s1.elec.uq.oz.au Fri Aug 30 18:49:04 1991 From: dlovell at s1.elec.uq.oz.au (David Lovell) Date: Fri, 30 Aug 91 17:49:04 EST Subject: Translation Invariance in Human Visual System In-Reply-To: <9108261857.AA05986@>; from "owner-neuroz@munnari.oz.au" at Aug 26, 91 8:57 pm Message-ID: <9108300749.AA10168@s2.elec.uq.oz.au> Dear Connectionists, Here is some more information about a publication that would be of interest to anyone researching the translational (in?)variance of the human visual field: Biederman, I. & Cooper, E. E. (in press). Evidence for complete translational and reflectional priming in visual object recognition. PERCEPTION. A recent paper (also by Biederman & rman and Cooper) provides many of the details of methodology, procedure and theory. It appeared in the July issue of COGNITIVE PSYCHOLOGY. Again, thanks to everyone who has replied to my original request especially Eric Postma, Irv Biederman, Shimon Edelman and Bill Phillips. If anyone has any more information on the subject I suggest that they post it to Connectionists because I'm in the middle of writing a paper and it could be a long time before any further info gets re-posted. Yours appreciatively, David. -- David Lovell - dlovell at s1.elec.uq.oz.au | | Dept. Electrical Engineering | "Oh bother! The pudding is ruined University of Queensland | completely now!" said Marjory, as BRISBANE 4072 | Henry the daschund leapt up and Australia | into the lemon surprise. | From kruschke at ucs.indiana.edu Fri Aug 30 11:44:00 1991 From: kruschke at ucs.indiana.edu (JOHN K. KRUSCHKE) Date: 30 Aug 91 10:44:00 EST Subject: motives for RBF networks Message-ID: Regarding uses of radial basis functions (RBFs) in neural networks: One motive for using RBFs has been the promise of better interpolation between training examples (i.e., better generalization). Some suggest that RBF nodes are also neurally plausible (at least in the type of function they compute, if not the methods used to train them). (See previous postings by other Connectionists for references.) Another motive comes from the molar, psychological level. In some situations, human behavior can be accurately described in terms of memory for specific exemplars, with generalization to novel exemplars based on similarity to memorized exemplars. If an exemplar is encoded as a point in a multi-dimensional psychological space, then an internally memorized exemplar can be represented by an RBF node centered on that point. When combined with back-propagation learning (and learned dimensional attention strengths), such RBF-based networks can do a reasonably good job of capturing human performance in several category learning tasks. Some references: Kruschke, J. K. (1991a). ALCOVE: A connectionist model of human category learning. In: R. P. Lippmann, J. E. Moody & D. S. Touretzky (eds.), Advances in Neural Information Processing Systems 3, pp.649-655. San Mateo, CA: Morgan Kaufmann. (Several other papers in this volume address related issues.) Kruschke, J. K. (1991b). Dimensional attention learning in models of human categorization. In: Proceedings of the Thirteenth Annual Conference of the Cognitive Science Society, pp.281-286. Hillsdale, NJ: Erlbaum. Kruschke, J. K. (1991c). Dimensional attention learning in connectionist models of human categorization. Indiana University Cognitive Science Research Report 50. Kruschke, J. K. (in press). ALCOVE: An exemplar-based connectionist model of category learning. Psychological Review. Scheduled to appear in January 1992. 
[Indiana University Cognitive Science Research Report 47.] Nosofsky, R. M., Kruschke, J. K. & McKinley, S. (in press). Combining exemplar-based category representations and connectionist learning rules. Journal of Experimental Psychology: Learning, Memory and Cognition. Scheduled to appear in March 1992. From bap at james.psych.yale.edu Fri Aug 30 13:09:05 1991 From: bap at james.psych.yale.edu (Barak Pearlmutter) Date: Fri, 30 Aug 91 13:09:05 -0400 Subject: Analysis of Time Sequences In-Reply-To: John Pearson W343 x2385's message of Wed, 28 Aug 91 13:24:16 EDT <9108281724.AA27660@sarnoff.sarnoff.com> Message-ID: <9108301709.AA17362@james.psych.yale.edu> In this vein, a procedure for performing gradient descent in the length of time delays in the fully recurrect continuous time case using backpropagation through time is presented in "Learning state space trajectories in recurrent neural networks," Barak Pearlmutter, IJCNN'89 v2 pp365-372, and also in the technical report available from the neuroprose archives as "pearlmutter.dynets.ps.Z". From english at sun1.cs.ttu.edu Fri Aug 30 17:22:24 1991 From: english at sun1.cs.ttu.edu (Tom English) Date: Fri, 30 Aug 91 16:22:24 CDT Subject: Processing of auditory sequences Message-ID: <9108302122.AA17772@sun1.cs.ttu.edu> Thanks to Robert Port for a fine discussion of acoustic/auditory pattern processing and hysteresis. I do not take exception to his observations, but would like to extend the discussion a bit. It seems that humans process human utterances differently than they do other sounds. I cannot point to hard data on this point, but I am fairly confident that playing back recorded speech at twice the recording speed has serious effects upon intelligibility. (Surely some of you find Alvin and the Chipmunks difficult to understand.) What accounts for the difference between speech and synthetic "Watson patterns"? Perhaps one of the Connectionists could briefly describe the techniques used to preserve intelligibility in time-compression of speech recordings for the blind. This might be an important clue. I believe the techniques are more sophisticated than, say, clipping 5 msec of speech from each 10 msec and smoothing the transitions between the remaining speech segments. I submit that if evolution has not provided us with special apparatus for processing the calls of members of our own species, it should. That is, "knowing" the characteristics of the physical system that generates human utterances should be of great utility in extracting information from utterances (especially when they are noisy). How might such innate knowledge be represented and utilized in human processing of speech signals? Of course, some would argue that an internal model of the articulators need not be innate. I vaguely recall that Grossberg and associates have placed speech generation and recognition in a single "loop." The model of articulation might be learned by generating motor "commands" and hearing the results. Does anyone know whether people born with impaired control of the articulators suffer some detriment in processing speech signals? To tie these comments together, allow me to hypothesize that artificially slowed and speeded speech, even when it is in the range of natural speaking rates, does not exhibit the coarticulatory phenomena appropriate to the speaking rate. Thus the internal model of articulation does not account well for the altered speech, and intelligibility suffers. 
This hypothesis is half-baked, if only because it ignores the fact that "unnatural sounding" synthetic speech may be highly intelligible. I would, however, like to see relevant evidence and/or discussion. Thomas English english at sun1.cs.ttu.edu Dept. of Computer Science Texas Tech University From Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU Sat Aug 31 10:24:21 1991 From: Scott_Fahlman at SEF-PMAX.SLISP.CS.CMU.EDU (Scott_Fahlman@SEF-PMAX.SLISP.CS.CMU.EDU) Date: Sat, 31 Aug 91 10:24:21 -0400 Subject: Processing of auditory sequences In-Reply-To: Your message of Fri, 30 Aug 91 16:22:24 -0600. <9108302122.AA17772@sun1.cs.ttu.edu> Message-ID: Perhaps one of the Connectionists could briefly describe the techniques used to preserve intelligibility in time-compression of speech recordings for the blind. This might be an important clue. I believe the techniques are more sophisticated than, say, clipping 5 msec of speech from each 10 msec and smoothing the transitions between the remaining speech segments. I'm not an expert on this, but I believe that the basic idea is to speed up the speech by 2x or so, while keeping the frequencies where they should be. Apparently the human speech-understanding system is quite flexible about rate, but really doesn't like to deal with formants moving too far from where they are expected to be in frequency space. Perhaps that is because the basic analysis into frequency bands is done in the cochlea, with fixed neuro-mechanical filters, and the rest of the processing is done by the brain using neural machinery that is more flexible and trainable. I believe the crude chopping you describe above is one technique that has been used to accomplish this, and that it works surprisingly well. Even though some critical events get dropped on the floor this way -- the pop in a "P", for example -- listeners quickly learn to compensate for this. One can do a better job if more attention is paid to the smoothing: chopping at zero-crossings, etc. Some pitch-shifter/harmonizer boxes used in music processing do this sort of thing. The best approach would probably be to move everything into the Fourier domain, slide and stretch everything around smoothly, and then convert back, but I doubt that any practical reading machines actually do this. Only in the last couple of years has the necessary signal-processing power been available on a single chip. -- Scott Fahlman
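[A minimal sketch of the crude chopping approach just described: keep the first half of every 10 msec block and taper the cut edges, so the duration halves while the frequencies stay put. The sample rate, block sizes, and the sine-wave stand-in for speech are arbitrary; real reading machines are more sophisticated than this.]

import numpy as np

def compress(x, sr, keep_ms=5, block_ms=10, fade_ms=1):
    """Keep the first keep_ms of every block_ms of signal, tapering the cut
    edges so the splices do not click; pitch is untouched, duration halves."""
    block = int(sr * block_ms / 1000)
    keep = int(sr * keep_ms / 1000)
    fade = int(sr * fade_ms / 1000)
    ramp = np.linspace(0.0, 1.0, fade)
    pieces = []
    for start in range(0, len(x) - block + 1, block):
        seg = x[start:start + keep].copy()
        seg[:fade] *= ramp          # fade in
        seg[-fade:] *= ramp[::-1]   # fade out
        pieces.append(seg)
    return np.concatenate(pieces)

sr = 8000
t = np.arange(2 * sr) / sr
speech = np.sin(2 * np.pi * 200.0 * t)   # stand-in for a recorded voice
fast = compress(speech, sr)
print(len(speech), len(fast))            # the output is about half as long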
From shawn at helmholtz.sdsc.edu Sat Aug 31 15:30:16 1991
From: shawn at helmholtz.sdsc.edu (shawn@helmholtz.sdsc.edu)
Date: Sat, 31 Aug 91 12:30:16 -0700
Subject: No subject
Message-ID: <9108311930.AA08466@gall>

Several months ago I asked about connectionist efforts in modeling chemotaxis and animal orientation. Response was quite good. Here is a compilation of what I received. Thanks to all those who kindly responded.

OUTGOING QUERY:

"I am a neurobiologist interested in training neural networks to perform chemotaxis, and other feats of simple animal navigation. I'd be very interested to know what has been done by connectionists in this area. The only things I have found so far are: Mozer and Bachrach (1990) Discovering the Structure of a Reactive Environment by Exploration, and Nolfi et al. (1990) Learning and Evolution in Neural Networks.

Many thanks,
Shawn Lockery
CNL, Salk Institute
Box 85800
San Diego, CA 92186-5800
(619) 453-4100 x527
shawn at helmholtz.sdsc.edu"

_____________________________________________________________________
THE REPLIES
_____________________________________________________________________

From: mlittman at breeze.bellcore.com (Michael L. Littman)

Hi, Dave Ackley and Michael Littman (me) did some work where we used a combination of natural selection and reinforcement learning to train simulated creatures to survive in a simulated environment.

%A D. H. Ackley
%A M. L. Littman
%T Interactions between learning and evolution
%B Artificial Life 2
%I Addison-Wesley
%D 1990
%E Langton, Chris
%O (in press)

%A D. H. Ackley
%A M. L. Littman
%T Learning from natural selection in an artificial environment
%B Proceedings of the International Joint Conference on Neural Networks
%C Washington, D.C.
%D January 1990

There is also some neat work by Stewart Wilson, as well as Rich Sutton and friends. I'm not sure exactly what sort of things you are looking for, so I'm having trouble knowing exactly where to point you. If you describe the problem you have in mind I might be able to indicate some other relevant work.

-Michael
----------------------------------------------------------------------------
From: David Cliff

Re. your request for chemotaxis and navigation: do you know about Randy Beer's work on a simulated cockroach? He did studies of locomotion control using hardwired network models (i.e. not much training involved), but the simulated bug performed simple navigation tasks. I think it had chemoreceptors in its antennae, so I think there was some chemotaxis involved. He's written a book: R. D. Beer, "Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology", Academic Press, 1990.

davec at cogs.susx.ac.uk
COGS 5C17, School of Cognitive and Computing Sciences
University of Sussex, Brighton BN1 9QH, England UK
----------------------------------------------------------------------------
From: Ronald L Chrisley

You might take a look at my modest efforts in a paper in the Proc. of the 1990 CMSS. Not biologically motivated at all, though.

Ron Chrisley
----------------------------------------------------------------------------
From: beer at cthulhu.ces.cwru.edu (Randy Beer)

Hello Shawn! I'm not sure that this is what you're looking for, but as I mentioned to you at Neurosciences, we've been using genetic algorithms to evolve dynamical NNs. One of our experiments involved gradient following. A simple circular "animal" with two chemosensors and two motors was placed in an environment with a patch of food emitting an odor whose intensity decreased as the inverse square of the distance from the patch's center. The animal's behavior was controlled by a bilaterally symmetric, six-node, fully interconnected dynamical NN (2 sensory neurons, 2 motor neurons, and 2 interneurons). The time constants (3, due to bilateral symmetry) and the weights (18) were encoded on a bit-string genome. The performance function was simply the average distance from the food patch at which the animal was found after a fixed amount of time, over a variety of initial starting positions.

We evolved several different solutions to this problem, including one less-than-optimal but interesting "wiggler". This animal oscillated from side to side until it happened to come near the food patch; then the relatively strong signal from the nearby food damped out the oscillations and it turned toward the food. Most of the other solutions simply compared the signals in each chemosensor and turned toward the stronger side, as you would expect. These more obvious solutions still varied in the overall gain of their response. Low-gain solutions performed very well near the food patch, but had a great deal of trouble finding it if they started too far away. High-gain solutions rarely had any trouble finding the food patch, but their behavior at the patch was often more erratic and sometimes they would fly back off of it.

Randy
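A minimal Python sketch of the kind of agent Randy describes may make the setup concrete: a small continuous-time network with two chemosensors driving two motors, evaluated by its final distance from the food patch. The wiring, gains, time constants, and body geometry below are invented for illustration -- they are not Randy's evolved parameters -- and no genetic algorithm is run here; this is just the simulation and fitness-evaluation scaffolding one would wrap a GA around.

# Sketch of a Beer-style chemotaxis agent (illustrative parameters only).
# Node order: 0 left sensor, 1 right sensor, 2-3 interneurons,
# 4 left motor, 5 right motor.  The interneurons compute a smoothed
# left-minus-right (and right-minus-left) odor comparison; crossed
# excitation onto the motors then turns the animal toward the
# stronger-smelling side.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

FOOD = np.array([0.0, 0.0])                      # center of the food patch

def odor(p):
    return 1.0 / (np.sum((p - FOOD) ** 2) + 0.01)   # inverse-square falloff

W = np.zeros((6, 6))
W[2, 0], W[2, 1] = +8.0, -8.0                    # interneuron 2: left - right
W[3, 1], W[3, 0] = +8.0, -8.0                    # interneuron 3: right - left
W[5, 2] = +4.0                                   # left stronger -> right motor
W[4, 3] = +4.0                                   # right stronger -> left motor
TAU = np.array([0.1, 0.1, 0.5, 0.5, 0.5, 0.5])   # neuron time constants

def final_distance(start, T=40.0, dt=0.01):
    pos = np.array(start, dtype=float)
    heading = 0.0
    y = np.zeros(6)                              # neuron states
    for _ in range(int(T / dt)):
        # sensors sit on the body, +-45 degrees off the heading
        for i, side in ((0, +1.0), (1, -1.0)):
            a = heading + side * np.pi / 4
            s = pos + 0.5 * np.array([np.cos(a), np.sin(a)])
            y[i] += dt / TAU[i] * (-y[i] + 10.0 * odor(s))
        drive = W @ sigmoid(y)                   # continuous-time net update
        y[2:] += dt / TAU[2:] * (-y[2:] + drive[2:])
        vl, vr = sigmoid(y[4]), sigmoid(y[5])    # wheel speeds in [0, 1]
        heading += dt * 4.0 * (vr - vl)          # differential steering
        pos += dt * (vl + vr) * np.array([np.cos(heading), np.sin(heading)])
    return float(np.linalg.norm(pos - FOOD))

# The performance measure Randy describes: average final distance from
# the patch over several starting positions (lower is better).
starts = [(0.0, 3.0), (-3.0, 2.0), (2.0, -3.0), (-2.0, -2.0)]
print("mean final distance:", np.mean([final_distance(s) for s in starts]))

With the crossed excitatory/inhibitory wiring the animal tends to turn toward the stronger-smelling side -- essentially the "obvious" solution Randy mentions; a GA would replace the hand-set entries of W and TAU with evolved ones.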
----------------------------------------------------------------------------
From: wey at psyche.mit.edu (Wey Fun)

You may look into Chris Watkins, Andrew Barto & Richard Sutton's work on TD (temporal difference) algorithms. My colleague at Univ of Edinburgh, Peter Dayan, has also done a lot of work on the simulation of rats swimming in milky water and learning, over trials, a fast route to a hidden platform. His email address is: dayan at cns.ed.ac.uk

Wey
----------------------------------------------------------------------------
From: Jordan B Pollack

I'm pretty sure that Andy Barto at cs.umass.edu and his students worked on "A-RP" (associative reward-penalty) reinforcement learning in a little creature navigating through an environment of smell gradients. This is the only reference in my list:

%A A. G. Barto
%A C. W. Anderson
%A R. S. Sutton
%T Synthesis of Nonlinear Control Surfaces by a Layered Associative Search Network
%J Biological Cybernetics
%V 43
%P 175-185
%D 1982
%K R12

jordan
----------------------------------------------------------------------------
From: Peter Dayan

[This is in response to your direct note to me - you should also have received a reply to your connectionists at cmu posting from me.] In that, I neglected to mention:

Barto, AG (1989). From chemotaxis to cooperativity: Abstract exercises in neuronal learning strategies. In R Durbin, C Miall \& G Mitchison, editors, {\it The Computing Neuron.\/} Wokingham, England: Addison-Wesley.

and

Watkins, CJCH (1989). {\it Learning from Delayed Rewards.\/} PhD Thesis. University of Cambridge, England.

which is one of my `source' texts, and contains interesting discussions of TD learning from the viewpoint of dynamic programming.

Regards,
Peter
-----------------------------------------------------------------------------
From: "Vijaykumar Gullapalli (413) 545-1596"

Andy Barto wrote a nice paper discussing learning issues that might be of interest. It appeared as a tech report and as a book chapter. The ref is

@techreport{Barto-88a,
  author      = "Barto, A. G.",
  title       = "From Chemotaxis to Cooperativity: {A}bstract Exercises in Neuronal Learning Strategies",
  institution = "University of Massachusetts",
  address     = "Amherst, MA",
  number      = "88-65",
  year        = 1988,
  note        = "To appear in {\it The Computing Neuron}, R. Durbin, C. Miall and G. Mitchison (eds.), Addison-Wesley"}

A copy of the tech report can be obtained by writing to Connie Smith at smith at cs.umass.edu.

Vijay
__________________________________________________________________________________
From: nin at cns.brown.edu (Nathan Intrator)

Could you give me more information on the task? Is the input binary, and is the dimensionality of the input large? I have a network that discovers structure in HIGH-DIMENSIONAL spaces in an unsupervised way, which may be of interest to you.
---------------------------------------------------------------------
From: meyer%frulm63.bitnet at Sds.sdsc.edu (Jean-Arcady MEYER)

I'm interested in the simulation of adaptive behavior and I have written a Technical Report on the subject, in which I think you could find several interesting references. In particular, various studies have been carried out in the spirit of Nolfi et al. I'm sending this report to you today.

Let me add that I have organized - together with Stewart Wilson - the conference SAB90 (Simulation of Adaptive Behavior: From Animals to Animats), which was held in Paris in September 1990. The corresponding proceedings are about to be published by The MIT Press/Bradford Books. I'm also sending you a booklet of the papers' summaries.

Finally, I don't know the Mozer and Bachrach paper you mention in your mail. Could you be kind enough to send me its reference?

Hope this will be helpful to you.

Jean-Arcady Meyer
Groupe de BioInformatique, URA686
Ecole Normale Superieure
46 rue d'Ulm
75230 PARIS Cedex 05
FRANCE
---------------------------------------------------------------------
From: barto at envy.cs.umass.edu

We have done a number of papers over the years that relate to chemotaxis. Chemotactic behavior of single cells has inspired a lot of our thinking about learning. Probably the most relevant are:

Barto, From Chemotaxis to Cooperativity, in The Computing Neuron, edited by Durbin, Miall, Mitchison. Addison Wesley, 1989.

Barto and Sutton, Landmark Learning: An Illustration of Associative Search, Biol. Cyb. 42, 1981.

Andy Barto
Dept. of Computer and Information Science
University of Massachusetts
Amherst MA 01003
---------------------------------------------------------------------
From: dmpierce at cs.utexas.edu

I had a paper myself at a recent Paris conference (September 1990) which might be relevant to you:

Pierce, D.M., \& Kuipers, B.J. (1991). Learning hill-climbing functions as a strategy for generating behaviors in a mobile robot. {\em From Animals to Animats: Proceedings of The First International Conference on Simulation of Adaptive Behavior}, J.-A. Meyer \& S.W. Wilson, eds., Cambridge, MA: The MIT Press/Bradford Books, pp.~327-336. This is also available as University of Texas AI Lab. Tech. Report AI90-137.

Here is the abstract: We consider the problem of learning, in an unknown environment, behaviors (i.e., sequences of actions) which can be taken to achieve a given goal. This general problem involves a learning agent interacting with a reactive environment: the agent produces actions that affect the environment and in turn receives sensory feedback from the environment. The agent must learn, through experimentation, behaviors that consistently achieve the goal. In this paper, we consider the particular problem of a mobile robot in a spatial two-dimensional world whose goal is to find a target location which contains a ``food'' source. The robot has access to incomplete information about the state of the world via a set of senses and is able to detect when it has achieved the goal. Its task is to learn to use its motor apparatus to reliably move to the food. The catch is that the robot does not know a priori what its sensors mean, nor what effects its motor apparatus has on the world. We propose a method by which the robot may analyze its sensory information in order to derive (when possible) a function defined in terms of the sensory data which is maximized at the food and which is suitable for hill-climbing. Given this function, the robot solves its problem by learning a behavior that maximizes the function, thereby resulting in motion to the food.

-Dave Pierce
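The hill-climbing half of the abstract is easy to picture with a toy example. The Python sketch below assumes the hard part -- deriving a sensory function that peaks at the food -- has already been done, and shows only the greedy climbing step; the grid world, action set, and sensing function are invented for illustration and are not from the paper.

# Sketch of the hill-climbing idea: once the robot has a scalar function
# of its sensory data that is maximized at the food, it can climb it
# greedily -- probe each motor action, take the one that increases the
# function, and stop at a local maximum.
import numpy as np

FOOD = np.array([7, 3])                              # made-up food location
ACTIONS = {"N": (0, 1), "S": (0, -1), "E": (1, 0), "W": (-1, 0)}

def sensed_value(pos):
    # Stand-in for the derived function: grows as the robot nears food.
    return -np.linalg.norm(np.asarray(pos) - FOOD)

def hill_climb(start, max_steps=50):
    pos = np.array(start)
    path = [tuple(pos)]
    for _ in range(max_steps):
        best_move, best_val = None, sensed_value(pos)
        for move in ACTIONS.values():                # probe each action
            val = sensed_value(pos + move)
            if val > best_val:
                best_move, best_val = move, val
        if best_move is None:                        # local maximum reached
            break
        pos = pos + best_move
        path.append(tuple(pos))
    return path

print(hill_climb((0, 0)))    # with this sensed_value, ends at (7, 3)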
------------------------------------------------------------------------
From: KDBG100%bgunve.bgu.ac.il at BITNET.CC.CMU.EDU

I may not have understood the specifics of what you require, but about spatial environments, there is a paper in PDP II 1986 which I suppose you must know about. So -- what is the issue you are pursuing?

David Leiser, Jerusalem
-------------------------------------------------------------------------
From: Steve Hampson

I was just going over old mail and found your request for refs on animal navigation. I was planning on replying, but probably never did. My book "Connectionistic Problem Solving" (Birkhauser, Boston) is an attempt at general problem solving, but almost all of the examples are maze-like. Several approaches are discussed and implemented. Sorry for the delay.

Steven Hampson
ICS Dept., UCI
------------------------------------------------------------------------
From emsca!conicit!gpass at Sun.COM Sat Aug 31 14:50:10 1991
From: emsca!conicit!gpass at Sun.COM (Gianfranco Passariello, USB)
Date: Sat, 31 Aug 91 14:50:10 AST
Subject: info.
Message-ID: <9108311850.AA08078@conicit>

Caracas, August 31st 1991

I have just received a job opportunities message coming from Sydney, Australia through you. I would like to know how that works. Thank you very much in advance.

Gianfranco Passariello
Dpto. de Electronica y Circuitos
Universidad Simon Bolivar
Apdo. 89000
Caracas, Venezuela 1080A