From alexis%yummy at gateway.mitre.org Wed Jun 1 08:22:32 1988
From: alexis%yummy at gateway.mitre.org (alexis%yummy@gateway.mitre.org)
Date: Wed, 1 Jun 88 08:22:32 EDT
Subject: HIMOSTHYLEDYNE - MACH
Message-ID: <8806011222.AA04375@marzipan.mitre.org>

>> ... dynamic, nets -- maybe chaotic ...

Actually that's an interesting thought. Certainly dynamic networks are useful (meaning nets (necessarily with feedback) which converge to an orbit of period > 1), as are bifurcations and all that {I mean, biological systems do it, so it *MUST* be important :-) }. But what about chaos and strange attractors et al.? For those of you who don't believe Turing and think non-deterministic computing engines buy you some computing power (Richard, are you out there?), I suppose it's a way of getting your non-determinism into the system ..., but I mean really, is a *chaotic-neural-network* worth anything?

alexis wieland

From golden at frodo.STANFORD.EDU Wed Jun 1 10:52:18 1988
From: golden at frodo.STANFORD.EDU (Richard Golden)
Date: Wed, 1 Jun 88 07:52:18 PDT
Subject: HIMOSTHYLEDYNE - MACH
Message-ID:

Hi Alexis....I never said non-deterministic computing engines buy you anything. I simply said that whatever type of PDP engine (deterministic or non-deterministic) you have, it must solve an inductive logic problem, and the correct (but typically intractable) way to solve such problems is to use probabilistic inference. Regarding chaos, the only useful reason I can see for having chaos in a dynamical system is that it might be mathematically impossible to represent certain distributions of attraction basins in a non-chaotic dynamical system. But I agree with you that just introducing chaos in the hope that your network will do something neat is pretty naive.

Richard

From moody-john at YALE.ARPA Wed Jun 1 13:49:35 1988
From: moody-john at YALE.ARPA (john moody)
Date: Wed, 1 Jun 88 13:49:35 EDT
Subject: chaos and neural information processing
Message-ID: <8806011744.AA15297@NEBULA.SUN3.CS.YALE.EDU>

It is hard for me to imagine how chaotic behavior could be computationally useful. However, the fact that our minds wander in the absence of sensory stimulation or concentration suggests that the large-scale dynamics of association cortex may in fact be chaotic at some times. It is not clear, however, whether the mind is generally chaotic, sometimes chaotic, marginally unstable, or (in isolation) simply ergodic with a very long limit cycle.

However, the brain is not a closed system, but is subject to changing environmental inputs and internal chemical and physiological states. This makes dynamical distinctions appropriate for closed systems irrelevant, except under very controlled conditions. Furthermore, the effects of thermal noise and the probabilistic nature of spike generation and synaptic transmission make analogies to classical dynamics incomplete at best. The general question of what regimes of behavior are possible as a function of internal parameters and patterns of sensory input is nonetheless extremely interesting.

For example, it has already been suggested that certain pathological phenomena such as epileptic seizures, migraine headaches, and visual hallucinations are the result of instabilities in otherwise stable networks. These instabilities are probably caused by changes in the balances of neurotransmitters and neuromodulators.
Jack Cowan (University of Chicago Mathematics Department) and collaborators have developed some very impressive mathematical theories to explain such phenomena.

From rba at flash.bellcore.com Wed Jun 1 10:46:25 1988
From: rba at flash.bellcore.com (Bob Allen)
Date: Wed, 1 Jun 88 10:46:25 EDT
Subject: No subject
Message-ID: <8806011446.AA10548@flash.bellcore.com>

Subject: Report Available

Sequential Connectionist Networks for Answering Simple Questions about a Microworld*#

Robert B. Allen
2A-367
Morristown, NJ 07960-1910
rba at bellcore.com

Sequential back-propagation networks were trained to answer simple questions about objects in a microworld. The networks transferred the ability to answer this type of question to patterns on which they had not been trained. Moreover, the networks were shown to have developed expectations about the objects even when they were not present in the microworld. A variety of architectures were tested using this paradigm, and the addition of channel-specific hidden layers was found to improve performance. Overall, these results are directed at the approach of building language users, rather than language processors, with connectionist networks.

*To appear in Proceedings of the Cognitive Science Society, 1988.
#Not to be confused with HIMOSTHYLEDYNE of the previous message, this work introduces hierarchical sequential networks (HSN).

From MITCHELL at EXETER.AC.UK Thu Jun 2 15:01:43 1988
From: MITCHELL at EXETER.AC.UK (MITCHELL@EXETER.AC.UK)
Date: Thu, 02 Jun 88 15:01:43 BST
Subject: Request
Message-ID:

Is there any chance that you could send me the list of email addresses for the "connectionist" group? Thanks,

Don Mitchell

From merrill%bucasb.bu.edu at bu-it.BU.EDU Mon Jun 6 17:21:35 1988
From: merrill%bucasb.bu.edu at bu-it.BU.EDU (merrill%bucasb.bu.edu@bu-it.BU.EDU)
Date: Mon, 6 Jun 88 17:21:35 EDT
Subject: chaos and neural information processing
In-Reply-To: john moody's message of Wed, 1 Jun 88 13:49:35 EDT <8806011744.AA15297@NEBULA.SUN3.CS.YALE.EDU>
Message-ID: <8806062121.AA04824@bucasb.bu.edu>

    From: john moody
    Date: Wed, 1 Jun 88 13:49:35 EDT

    It is hard for me to imagine how chaotic behavior could be
    computationally useful.

Freeman and Skarda, in a recent article in Behavioral and Brain Sciences [1], argue that chaos is an essential element in their model of olfactory function in the rat.

    For example, it has already been suggested that certain pathological
    phenomena such as epileptic seizures, migraine headaches, and visual
    hallucinations are the result of instabilities in otherwise stable
    networks. These instabilities are probably caused by changes in the
    balances of neurotransmitters and neuromodulators. Jack Cowan
    (University of Chicago Mathematics Department) and collaborators have
    developed some very impressive mathematical theories to explain such
    phenomena.

Other researchers have argued that the exact opposite is the case: that, for instance, the breakdown of chaos is characteristic of sudden heart failure; the heart seems to collapse from a normally chaotic beat into a phase-locked regime. Similarly, researchers have argued that the appearance of a domain of apparent periodicity mildly *predicts* the onset of epileptic seizure. (If anyone wants a reference, I'll look it up in my files; I've just moved, and they're a little chaotic right now.)

As a purely philosophical point, I can imagine a reason for chaos to be an essential aspect of brain function. By its very nature, a chaotic system is capable of changing states at any time with only a slight kick. That could allow a neural network to make a decision at any time without needing an external reset signal, such as most abstract neural networks require.

[1] Skarda, C.A. and W.J. Freeman, "How brains make chaos in order to make sense of the world", Behavioral and Brain Sciences 10: 187 (1987).
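The "slight kick" property Merrill appeals to is easy to see numerically. The short Python fragment below (an editorial illustration, not part of any message) iterates the logistic map in its chaotic regime from two initial conditions differing by one part in ten billion:

# Sensitive dependence on initial conditions: a chaotic system can be
# nudged into a completely different state by an arbitrarily small kick.

def logistic(x, r=4.0):
    # One step of the logistic map; r = 4.0 is the fully chaotic regime.
    return r * x * (1.0 - x)

x, y = 0.3, 0.3 + 1e-10          # two almost identical initial states
for step in range(1, 61):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print("step %2d   |x - y| = %.3e" % (step, abs(x - y)))

Within a few dozen iterations the two trajectories are fully decorrelated: the 1e-10 perturbation has grown to order 1, which is exactly what would let a slight kick flip a chaotic network into a new state without an external reset signal.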
From bhb at cadre.dsl.pittsburgh.edu Mon Jun 6 09:20:51 1988
From: bhb at cadre.dsl.pittsburgh.edu (Barry Blumenfeld)
Date: Mon, 6 Jun 88 09:20:51 EDT
Subject: No subject
Message-ID: <8806061321.AA10055@cadre.dsl.pittsburgh.edu>

I'm a fellow in medical informatics at Presby, with an interest in connectionism. I'd like to start attending the weekly meetings of your group when they resume. Could you place me on your mailing list? email address....bhb at cadre.dsl.pittsburgh.edu ...thanks

Barry Blumenfeld

From unido!tumult!schmidhu at uunet.UU.NET Mon Jun 6 14:28:34 1988
From: unido!tumult!schmidhu at uunet.UU.NET (Juergen Schmidhuber)
Date: Mon, 6 Jun 88 16:28:34 -0200
Subject: "Technical Note Available"
Message-ID: <8806061428.AA02242@tumult.informatik.tu-muenchen.de>

Here is the abstract of a technical note on accelerated learning in neural networks. Write or send email to obtain a copy.

Accelerated Learning in Back-Propagation Nets

Juergen Schmidhuber
Institut fuer Informatik
Technische Universitaet Muenchen
Arcisstr. 21
8000 Muenchen 2, Germany

Two of the most serious problems with back-propagation are insufficient speed and the danger of getting stuck in local minima. We offer an approach that copes with both of these problems: instead of using bp to find zero-points of the gradient of the error surface, we look for zero-points of the error surface itself. This requires less computational effort than second-order methods. Experimental results indicate that in cases where only a small fraction of units is active simultaneously, the method can be applied successfully. Furthermore, it can be significantly faster than conventional bp.

Juergen Schmidhuber
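Without the note in hand, one can only guess at the details, but a standard way to seek zero-points of an error surface itself (rather than of its gradient) is to treat E(w) = 0 as an underdetermined root-finding problem and take the minimum-norm Newton step dw = -E * grad E / ||grad E||^2. The Python sketch below shows that step on a toy error function; whether this corresponds to the method of the technical note is purely an assumption.

import numpy as np

def error(w):
    # Toy error surface whose global minimum is an actual zero-point.
    return 0.5 * np.sum((w - 1.0) ** 2)

def grad(w):
    return w - 1.0

w = np.array([3.0, -2.0, 0.5])
for _ in range(50):
    E, g = error(w), grad(w)
    gg = np.dot(g, g)
    if E < 1e-12 or gg < 1e-12:
        break
    w -= (E / gg) * g            # minimum-norm Newton step toward E(w) = 0
print("final error:", error(w))

Note that such a step aims at E = 0 directly, so it presupposes that the training error can in fact be driven to (near) zero.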
From R09614%BBRBFU01.BITNET at VMA.CC.CMU.EDU Mon Jun 6 10:15:51 1988
From: R09614%BBRBFU01.BITNET at VMA.CC.CMU.EDU (R09614%BBRBFU01.BITNET@VMA.CC.CMU.EDU)
Date: 06 Jun 88 16:15:51 +0200
Subject: No subject
Message-ID:

Comment on John Moody's remarks about chaos in neural systems

Today we have at our disposal a variety of methods for assessing, from an experimental time series, some of the dynamical properties of the system under study. These methods (phase space reconstruction, Poincare maps, correlation dimensions, Lyapunov exponents, Kolmogorov entropies, ...) can give us quantitative measures of real brain activity.

In 1985, these methods were used for the first time in the analysis of brain activity [1] from electroencephalogram (EEG) recordings. Although the origin of the EEG is not well understood, several well-defined stages of brain activity show characteristic EEGs which are still used as diagnostic tools. We showed that several stages of the sleep cycle could obey low-dimensional chaotic dynamics. Later, this kind of analysis was extended by other laboratories [2] and our group [3] to the study of other stages of brain activity. The general conclusions are:

1) Eyes open, the brain activity shows noise-like behavior (very high dimensional).
2) Eyes closed, the dynamics may be described by a chaotic attractor of relatively low dimension (around 6).
3) This value decreases during the sleep cycle and reaches a minimum near 4 in deep sleep (stage IV).
4) REM (dream) episodes are characterized by rather high-dimensional or noise-like behavior.
5) During severe pathologies such as epileptic seizures (petit mal) and coma (terminal state of Creutzfeldt-Jakob disease), the correlation dimension drops to lower values (2.05 for petit mal).

Therefore, the higher the dimension, the more alert the brain. Could chaos be associated with processing power?

This goes in the same direction as the findings of Goldberger et al. [5] about the dying heart that you mentioned. We have found that the normal heart is not a periodic oscillator, but that the variability between successive beats shows non-random behavior [4]. Here again, the normal rule is a chaotic state, whereas pathologies seem to be associated with more coherent behavior.

Regarding the computing abilities of chaotic systems, there has been a great deal of work done on these topics (see, for example, J.S. Nicolis in [6]). With its particular properties, the chaotic attractor is an interesting candidate for information processing: it continuously creates and destroys information (in the sense of Shannon) at the same time. There is a constant creation of new conditions (a large region of phase space may be visited), while the memory of the present state is progressively lost ... Now, how to design a neural network which could take advantage of such properties is another question. However, we think that much interesting work could still be done in this direction. We are presently engaged in such studies and we are trying to refine these data with more accurate techniques.

Anyone interested may write to: R09614 at BBRBFU01.BITNET. Any comment is welcome.

Alain Destexhe
Agnessa Babloyantz
Faculte des Sciences, Universite Libre de Bruxelles,
Campus de la Plaine (CP-231), Bd. du Triomphe, B-1050 Brussels, Belgium

[1] A. Babloyantz, C. Nicolis & M. Salazar: Phys. Lett. 111A: 152 (1985)
[2] S.P. Layne, G. Mayer-Kress & J. Holzfuss: in Dimensions and Entropies in Chaotic Systems, Ed. G. Mayer-Kress (Springer, Berlin 1986)
    I. Dvorak & J. Siska: Phys. Lett. 118A: 63 (1986)
    P.E. Rapp, I.D. Zimmerman, A.M. Albano, G.C. de Guzman, N.N. Greenbaum & T.R. Bashore: in Nonlinear Oscillations in Biology and Chemistry, Ed. H.G. Othmer, Lecture Notes in Biomathematics 66: 175 (Springer, Berlin 1986)
    C.A. Skarda & W.J. Freeman: Behav. Brain Sci. 10: 187 (1987)
    J. Roschke & E. Basar: in Dynamics of Sensory and Cognitive Processing by the Brain, Ed. E. Basar, Springer Series in Brain Dynamics, Vol. 1, 203 (1988)
[3] A. Babloyantz & A. Destexhe: in Temporal Disorder in Human Oscillatory Systems, Eds. L. Rensing, U. an der Heiden and M.C. Mackey, Springer Series in Synergetics 36: 48 (1987)
    A. Babloyantz & A. Destexhe: in Proceedings of the First IEEE International Conference on Neural Networks, Eds. M. Caudill and C. Butler, Vol. 4: 31 (1987)
    A. Babloyantz & A. Destexhe: Proc. Natl. Acad. Sci. USA 83: 3513 (1986)
    A. Babloyantz & A. Destexhe: in From Chemical to Biological Organization, Eds. M. Markus, S. Muller and G. Nicolis (Springer, Berlin 1988)
[4] A. Babloyantz & A. Destexhe: Biol. Cybernetics 58: 131 (1988)
[5] A. Goldberger, V. Barghava, B.J. West & A.J. Mandell: Physica 17D: 207 (1985)
[6] J.S. Nicolis: in Hierarchical Systems (Springer, Berlin 1985)
    J.S. Nicolis, G. Mayer-Kress & G. Haubs: Z. Naturforsch. 38a: 1157 (1983)
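For readers who have not met the machinery cited above: the correlation dimension of a time series is usually estimated in the style of Grassberger and Procaccia, by delay-embedding the series, computing the fraction C(r) of point pairs closer than r, and reading the dimension off the slope of log C(r) against log r. A bare-bones Python sketch (an editorial illustration, not the authors' code; all parameters are illustrative):

import numpy as np

def embed(series, dim, lag):
    # Delay-embed a scalar series into dim-dimensional vectors.
    n = len(series) - (dim - 1) * lag
    return np.array([series[i:i + (dim - 1) * lag + 1:lag] for i in range(n)])

def correlation_dimension(series, dim, lag):
    pts = embed(series, dim, lag)
    diff = pts[:, None, :] - pts[None, :, :]
    dist = np.sqrt((diff ** 2).sum(-1))
    pairs = dist[np.triu_indices(len(pts), k=1)]          # pairwise distances
    radii = np.logspace(np.log10(pairs[pairs > 0].min()),
                        np.log10(pairs.max()), 12)[2:-2]  # mid-range radii
    C = np.array([(pairs < r).mean() for r in radii])     # correlation integral
    slope, _ = np.polyfit(np.log(radii), np.log(C), 1)    # dimension ~ slope
    return slope

# Example: a chaotic logistic-map series, whose correlation dimension is near 1.
x, xs = 0.4, []
for _ in range(1000):
    x = 4.0 * x * (1.0 - x)
    xs.append(x)
print("estimated dimension:", correlation_dimension(np.array(xs), dim=3, lag=1))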
From feldman at cs.rochester.edu Wed Jun 8 15:41:25 1988
From: feldman at cs.rochester.edu (feldman@cs.rochester.edu)
Date: Wed, 8 Jun 88 15:41:25 EDT
Subject: Dr. Feldman's address
Message-ID: <8806081941.AA04284@wasat.cs.rochester.edu>

Dr. Feldman is available at feldman at cs.rochester.edu

Dr. Jerome A. Feldman
Department of Computer Science
University of Rochester
Rochester, NY 14627

Or:

Dr. Jerome A. Feldman
International Computer Science Institute
1947 Center Street
Berkeley, CA 94704-1105

From laic!taurus!pat at lll-lcc.llnl.gov Fri Jun 10 15:33:40 1988
From: laic!taurus!pat at lll-lcc.llnl.gov (Pat Ransil)
Date: Fri, 10 Jun 88 12:33:40 PDT
Subject: AAAI Neural Vision Workshop
Message-ID: <8806101933.AA01958@taurus.laic.uucp>

*******************************************************************************

CALL FOR WORKSHOP PARTICIPATION: NEURAL ARCHITECTURES FOR COMPUTER VISION

AAAI-88, Minneapolis, Minnesota, Saturday, August 20
To be held in the Radisson St. Paul Hotel

Active research in the use of Artificial Neural Networks for Computer Vision has led to a proliferation of architectures, with each design focusing on particular aspects of the problem. In this full-day workshop we will look at neural approaches to dealing with many of the difficult issues in computer vision. The morning session will focus on "low-level vision", where predominantly bottom-up or data-driven networks must reduce large amounts of data to compact feature sets which efficiently represent useful information in the original picture. In the afternoon, we will examine "high-level" tasks in which networks combine world and object knowledge with image data to perform object recognition and scene analysis. Throughout the workshop we will consider architectural issues such as the choice of representation and the use of modularity, to see how they impact neural vision systems in areas like computational complexity, training, generalization and robustness.
Comparing the strengths and weaknesses of various approaches and encouraging the exchange of ideas between research groups will result in an exciting workshop which will be of great benefit to all.

All who wish to attend, please send abstracts (four copies) describing your work to: Patrick Ransil, Lockheed AI Center, 2710 Sand Hill Road, Menlo Park, CA 94025. Include your name, address, and phone number. Abstracts must be received by July 10.

Organizing Committee:
Patrick Ransil, Lockheed AI Center
Dana Ballard, University of Rochester
Federico Faggin, Synaptics
Christof Koch, California Institute of Technology

From jordan at psyche.mit.edu Sun Jun 12 03:51:47 1988
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Sun, 12 Jun 88 03:51:47 edt
Subject: Technical Report available
Message-ID: <8806120752.AA20214@ATHENA.MIT.EDU>

"Supervised learning and systems with excess degrees of freedom"

Michael I. Jordan
Massachusetts Institute of Technology
COINS Technical Report 88-27

ABSTRACT

When many outputs of an adaptive system have equivalent effects on the environment, the problem of finding appropriate actions given desired results is ill-posed. For supervised learning algorithms, the ill-posedness of such ``inverse learning problems'' implies a certain flexibility---during training, there are in general many possible target vectors corresponding to each input vector. To allow supervised learning algorithms to make use of this flexibility, the current paper considers how to specify targets by sets of constraints, rather than as particular vectors. Two classes of constraints are distinguished---configurational constraints, which define regions of output space in which an output vector must lie, and temporal constraints, which define relationships between outputs produced at different points in time. Learning algorithms minimize a cost function that contains terms for both kinds of constraints. This approach to inverse learning is illustrated by a robotics application in which a network finds trajectories of inverse kinematic solutions for manipulators with excess degrees of freedom.

To obtain a copy, contact:

jordan at wheaties.ai.mit.edu

or

Ms. Connie Smith
Computer and Information Science Graduate Research Center
University of Massachusetts
Amherst, MA 01003
smith at cs.umass.edu
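Jordan's set-valued targets are easy to illustrate with a toy cost (an editorial formulation, not the report's): a configurational term that vanishes anywhere inside an allowed region of output space, plus a temporal term tying together outputs produced at successive time steps.

import numpy as np

def constrained_cost(Y, lo, hi, lam=0.1):
    # Configurational term: zero for outputs inside the box [lo, hi],
    # quadratic penalty outside -- the "target" is a region, not a vector.
    overshoot = np.maximum(0.0, lo - Y) + np.maximum(0.0, Y - hi)
    configurational = 0.5 * np.sum(overshoot ** 2)
    # Temporal term: penalize change between successive outputs.
    temporal = 0.5 * lam * np.sum(np.diff(Y, axis=0) ** 2)
    return configurational + temporal

Y = np.array([[0.2, 0.9],            # a trajectory of 2-d outputs over time
              [0.4, 0.7],
              [0.9, 0.6]])
lo, hi = np.array([0.0, 0.5]), np.array([0.8, 1.0])
print("cost:", constrained_cost(Y, lo, hi))

Gradients of such a cost can be backpropagated exactly like an ordinary target-vector error, which is what makes set-valued targets compatible with supervised learning.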
From hurlbert at wheaties.ai.mit.edu Mon Jun 13 18:32:22 1988
From: hurlbert at wheaties.ai.mit.edu (Anya C. Hurlbert)
Date: Mon, 13 Jun 88 18:32:22 EDT
Subject: Italian mathematician seeks position
Message-ID: <8806132232.AA17098@rice-chex.ai.mit.edu>

ATTENTION: The following is the c.v. of Cesare Furlanello, a young mathematician with an Italian Ph.D. who is interested in using the techniques of logic and analysis to model human reasoning. He presently works at the Institute for Research in Science and Technology (which specializes in image understanding and speech and pattern recognition) and would like to spend next year or a part thereof working in the U.S. He will probably come with his own funding. If you know of any research positions for which he might be suitable, or if you yourself are interested in having him work with you, please send mail directly to him, with a cc to me, in case uunet goes down again. I am hurlbert at wheaties.ai.mit.edu. Thank you!!!!!

Cesare Furlanello - IRST - June 1988

CESARE FURLANELLO
CURRICULUM VITAE

Age: 27
Nationality: Italian
Address: IRST, 38050 Povo (Trento), Italia
Tel. No: 0461/810105
Email: furlan at irst.uucp

EDUCATION

HIGH SCHOOL: Sc. Grammar School; maturita' 1980 with 60/60 (A Level).

UNIVERSITY: graduated in pure Mathematics on 11 November 1986, 110/110 cum laude (maximum with distinction). I was particularly interested in Algebraic Geometry; other topics of major interest before and during the preparation of the thesis were, amongst others, Category Theory, General Topology and Commutative Algebra. On these topics I regularly attended lectures and meetings.

Supervisor: Professor Francesco Baldassarri

Title of the thesis: Linear Differential Equations with algebraic relations between the solutions

Abstract of the thesis. The thesis attempts to apply modern results of differential algebra and projective geometry to the work of Gino Fano on homogeneous linear differential equations with coefficients in C(z) and algebraic relations between their fundamental solutions. One of the major interests in this topic consists in describing the fundamental solutions in terms of those of equations of lower order. The originality of this approach consists in the study of the integral curve g of such an equation L, a curve which can be defined in the (n-1)-th projective space V if L is of n-th order. It is therefore possible to study the differential Galois group G of L by means of the group Gpr of projective transformations of the variety defined in V by the algebraic relations existing between the fundamental solutions. In this thesis two cases are investigated: a) the integral curve g is algebraic; b) g is contained in a quadric hypersurface and n < 7. A theorem and other results are established giving a characterization of Gpr not previously known in the literature of differential algebra. Those tools, and the concept of symmetric powers of a linear differential operator introduced in a recent work of M.F. Singer, are used together with some sophisticated notions of projective geometry, like flecnodal curves and the theory of Schubert cycles, in order to simplify the proofs of several results of Fano. Explicit conditions for cases a) and b) are found.

FELLOWSHIPS: a 12-month undergraduate studentship from the CNR (Consiglio Nazionale delle Ricerche).

POSTGRADUATE STUDY AND WORK EXPERIENCE (at IRST)

- A tutorial on GCLisp (January 87, Trento).
- A course on general techniques of Pattern Recognition (Spring 87, Trento Univ.).
- A course on Symbolic Computation with the MAPLE language (Feb 87, SASIAM, Bari).
- Studies on formal approaches to PR: the categorical and topological approach.
- Summer school of Categorical Topology (7-12 June 87, Italian Group of Research in Topology, Bressanone).
- UNIX operating system (Fall 87, internal course, IRST).
- A course on Logic Programming (Spring 1987, Dept. of Mathematics, Trento Univ.).
- I have been admitted to the '88 CIME "Logic and CS" Summer School (lessons held by A. Nerode, J. Hartmanis, R. Platek, G. Sacks, A. Scedrov, 20-28 June 1988, Montecatini).

I was assigned at IRST the task of studying concepts using mathematical methods by the Director of IRST, Dr. Luigi Stringa. Dr. Stringa is looking for an approach that can answer comprehensively the various problems of AI and Pattern Recognition. Due to my background in Algebraic Geometry, I started studying geometrical-topological methods. An idea that seems very challenging to me is that of using those powerful formal techniques which are major tools in various fields of Mathematics, like Category Theory (and, within that framework, Sheaf and Topos Theory), for modelling some aspects of human reasoning.
My opinion is that some formalization can be attempted, even if limited to a specific domain; this is supported by the fact that the use of Category Theory is by now well established in Logic and in Logic for CS, due to Dana Scott and many others. Categorical models are especially used for the non-traditional logics which have recently been receiving wide attention for computation and in the AI environment. In any case, it is obvious that a valid approach should be sensitive to computational and cognitive paradigms.

From richardh%tsuna.uucp at CVAXA.SUSSEX.AC.UK Tue Jun 14 13:06:29 1988
From: richardh%tsuna.uucp at CVAXA.SUSSEX.AC.UK (Richard Hall)
Date: Tue, 14 Jun 88 13:06:29 BST
Subject: No subject
Message-ID: <24984.8806141206@tsuna.cvaxa.sussex.ac.uk>
Reply-To: Andy Clark

The Guest Editors would welcome any contributions to the forthcoming Special Issue. Please send all replies to the EMAIL addresses listed below and not the sender's! -- Richard Hall.

>>>>>>>>>>>>>>>>>>> CONNECTIONISM IN CONTEXT <<<<<<<<<<<<<<<<<<<<<<<<

__________________________ A Special Issue ________________________________

AI & Society
Journal of Human and Machine Intelligence
A New International Journal from Springer-Verlag
___________________________________________________________________________

AI & Society would like to invite you to a discussion of the horizons of CONNECTIONIST MODELLING.
The SPECIAL ISSUE aims to treat connectionism in a wide context, including its social and cultural implications. It will also discuss developments in NEUROCOMPUTING and issues such as validation and legal constraints. Some of the topics of interest include:

1. Can connectionist models of individual information-processing be fruitfully extended to help model and understand social wholes (e.g. countries, committees, ant colonies!)?

2. Can connectionism illuminate developmental issues in a new way? Can it perhaps illuminate the evolutionary trajectory of mind?

3. Is symbolic AI necessary to an understanding of the full range of human thought?

4. Connectionism, semiotics, and literary criticism. Can connectionist accounts of learning and knowledge representation shed any new light on discussions of meaning (or vice versa)?

5. How powerful is the idea of cultural knowledge (raised by Smolensky) as involving the use of a special virtual machine (the conscious rule interpreter) which uses a subsymbolic implementation of a symbol-processing society?

6. "Real Symbol Processing", it is claimed, depends upon the devious combination of PDP-based capacities with actual manipulation of structures in the external world. Is it our ability to create and use physical representations which is the key to our capacity to engage in serial symbolic thought? Could conventional computer architectures be putting back into deep structures of the head what really belongs in the interaction of the head and the world?

7. PDP and ecological psychology. Could PDP provide an account of the mechanisms which ecological psychology (despite its occasional claims) seems to need but could never previously display?

Editorial Team for the Special Issue on Connectionism:

Guest Editors: Andy Clark and Rudy Lutz, School of Cognitive Sciences, University of Sussex, Brighton, Sussex, UK
AI & Society Editors: Janet Vaux and Ajit Narayanan.

Contributions could include brief discussion papers for the Open Forum section or major papers for the main section of AI & Society. PAPERS SHOULD BE SUBMITTED either to our GUEST EDITORS, Andy Clark and Rudi Lutz, or to the AI & Society EDITOR: Karamjit S Gill, Seake Centre, Brighton Polytechnic, Moulsecomb, Brighton, BN2 4GJ, Sussex UK

EMAIL:
JANET: andycl at uk.ac.sussex.unx1
BITNET: andycl at unx1.sussex.ac.uk
ARPANET: andycl%uk.ac.sussex.unx1 at nss.cs.ucl.ac.uk
From jose at tractatus.bellcore.com Wed Jun 15 13:55:31 1988
From: jose at tractatus.bellcore.com (Stephen J. Hanson)
Date: Wed, 15 Jun 88 13:55:31 EDT
Subject: JOB ANNOUNCEMENT
Message-ID: <8806151755.AA19344@tractatus.bellcore.com>

CONNECTIONIST:

The Human Information Processing Group (HIPG) at Princeton University offers a position for a member of the professional research staff to coordinate a broad range of research in human information processing and to conduct research in neural networks, connectionist, or PDP-style models. The applicant should have some interest or background in neuroscience and its relation to neural network models, and be willing to collaborate or interact with neuroscience faculty. The applicant should also have a familiarity with mainstream cognitive science areas, human factors, and computational modeling more generally. Ph.D. required. Applications should be submitted before 1 September 1988.

HIPG is an interdisciplinary group within Princeton University consisting of psychologists, engineers, philosophers, linguists, and computer scientists. Its facilities include the Cognitive Science Lab, the Interactive Computer Graphics Lab, the Robotics Lab, the Engineering Anomalies Lab, and the Cognitive Motivation Lab. Princeton University is an equal opportunity employer.
Address inquiries to:

Search Committee
Human Information Processing Group
Department of Psychology
Green Hall
Princeton University
Princeton, New Jersey 08544

From merrill%bucasb.bu.edu at bu-it.BU.EDU Wed Jun 15 14:45:32 1988
From: merrill%bucasb.bu.edu at bu-it.BU.EDU (John Merrill)
Date: Wed, 15 Jun 88 14:45:32 EDT
Subject: Technical Report available
In-Reply-To: Michael Jordan's message of Sun, 12 Jun 88 03:51:47 edt <8806120752.AA20214@ATHENA.MIT.EDU>
Message-ID: <8806151845.AA02963@bucasb.bu.edu>

Would you please send me a copy of your technical report on "Supervised learning and systems with excess degrees of freedom"? My address is

John Merrill
Center for Adaptive Systems
111 Cummington Street
Boston, Mass. 02215

Thank you.

From THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU Sun Jun 19 11:54:00 1988
From: THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU (THEPCAP%SELDC52.BITNET@VMA.CC.CMU.EDU)
Date: Sun, 19 Jun 88 11:54 O
Subject: Technical Report available
Message-ID:

LU TP 88-8, April 1988

TRACK FINDING WITH NEURAL NETWORKS

Carsten Peterson
Department of Theoretical Physics, University of Lund,
Solvegatan 14A, S-223 62 Lund, Sweden

[Submitted to Nuclear Instruments and Methods]

ABSTRACT (a modified version): In high-energy physics experiments the produced particles give rise to sparks or signals that follow their tracks. In events with large multiplicity, reconstructing the tracks from the signals is a computationally intensive task. Until now signal data has been collected on tapes and then processed with conventional CPU power. In future accelerators like the SSC, real-time experimental triggers will be needed that would benefit from immediate track finding. The track-finding problem is a combinatorial optimization problem. We have cast this problem onto a neural network by letting a neuron represent whether a line segment between two signals exists or not. We have applied mean-field-theory equations together with a greedy heuristic to planar situations, with very encouraging results with respect to the quality of the solutions. Rapid convergence times and good scaling properties are also found. The generalization to three dimensions is straightforward. With the great potential that exists for realizing neural network technology in custom-made hardware, we feel that this approach to the track-finding problem could be very important in the future for experimental high-energy physics. Our approach to track finding is generic; many other and less "peaceful" applications than tracking elementary particles could easily be imagined.

To receive a copy send name and address to: THEPCAP at SELDC52 [bitnet]. Please allow 4-5 weeks for overseas delivery.
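For readers unfamiliar with the mean-field machinery the abstract mentions, the generic loop looks roughly as follows. Each unit v[i] in (0,1) codes whether candidate line segment i belongs to a track; in a real track finder the symmetric couplings W would reward pairs of segments that continue each other smoothly. The random couplings below are stand-ins, and the whole fragment is an editorial sketch of mean-field annealing in general, not Peterson's algorithm.

import numpy as np

rng = np.random.default_rng(0)
n = 20                                   # number of candidate segments
W = rng.normal(size=(n, n))
W = 0.5 * (W + W.T)                      # symmetric couplings
np.fill_diagonal(W, 0.0)

v = 0.5 + 0.01 * rng.normal(size=n)      # start every unit undecided
T = 2.0
while T > 0.05:
    for _ in range(30):                  # relax to the mean-field fixed
        v = 1.0 / (1.0 + np.exp(-(W @ v) / T))   # point at temperature T
    T *= 0.9                             # anneal the temperature down
print("segment on/off decisions:", (v > 0.5).astype(int))

At high temperature every unit hovers near 1/2; as T drops the units commit to 0 or 1, so the combinatorial choice is made deterministically rather than by stochastic simulated annealing.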
From ohare at nrl-css.arpa Sun Jun 19 10:59:24 1988
From: ohare at nrl-css.arpa (John O'Hare)
Date: Sun, 19 Jun 88 10:59:24 EDT
Subject: Research support
Message-ID: <8806191459.AA09437@nrl-css.arpa>

Research proposals based on connectionist models that lead to a better understanding of the classification of transient, non-speech sounds (as well as perception in general) are being supported by: Office of Naval Research (Code 1142PS), Attn: J. J. O'Hare, 800 N. Quincy St., Arlington, VA 22217-5000. A typical program would run for 3 years at an annual level of $100-150K. A full proposal would be fine, but a preliminary proposal to establish mutual interest would be acceptable. Start date would be 1 October 1988.

From suneast!vargas!velu at Sun.COM Mon Jun 20 12:09:35 1988
From: suneast!vargas!velu at Sun.COM (Velu Sinha)
Date: Mon, 20 Jun 88 12:09:35 EDT
Subject: Technical Report available
In-Reply-To: 's message of Sun, 19 Jun 88 11:54 O <8806200312.AA09878@eneevax.umd.edu>
Message-ID: <8806201609.AA04004@vargas.ecd.sun.com>

I'd like a copy of your track finding TR...

Velu Sinha
10 Magazine Street #1004
Cambridge, MA 02139 USA

From harnad at Princeton.EDU Tue Jun 21 11:09:09 1988
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Tue, 21 Jun 88 11:09:09 edt
Subject: Speech Recognition: Reference Query
Message-ID: <8806211509.AA24419@mind>

For a colleague working on speech recognition who is unfamiliar with the connectionist work in this area: I would be grateful to receive pointers to the current and representative work.

Stevan Harnad (harnad at mind.princeton.edu)

From hendler at dormouse.cs.umd.edu Fri Jun 24 09:29:03 1988
From: hendler at dormouse.cs.umd.edu (Jim Hendler)
Date: Fri, 24 Jun 88 09:29:03 EDT
Subject: TR
Message-ID: <8806241329.AA12592@dormouse.cs.umd.edu>

I don't expect the people on this list to find the following of staggering import, but in case anyone knows someone looking for such a thing:

BackProp: A Tool for Learning About Connectionist Architectures

J. Pollack, M. Evett, J. Hendler
Technical Report SRC-TR-88-43

Abstract

This paper provides an implementation, in Common Lisp, of an epoch learning algorithm, a simple modification of the standard back-propagation algorithm. The implementation is NOT intended to be a general-purpose, high-powered back-propagation learning system. Rather, this report seeks only to provide a simple implementation of a popular and easily understood connectionist learning algorithm. It is primarily intended to be a teaching tool for AI researchers wishing to familiarize themselves or their students with back-propagation in a language with which they are comfortable.

Copies can be requested from: Tammy Paolino, tammy at ra.umd.edu (arpa)

From csh at ec.ecn.purdue.edu Sat Jun 25 22:53:42 1988
From: csh at ec.ecn.purdue.edu (Craig S Hughes)
Date: Sat, 25 Jun 88 21:53:42 EST
Subject: Weight Behavior in the Boltzmann Machine
Message-ID: <8806260253.AA23085@ec.ecn.purdue.edu>

A question for someone who is familiar with the mathematics of the Boltzmann machine: as the network converges during learning, do the weights monotonically increase or decrease to their optimal values, or do they "hop around" until they hit their final value and stabilize? A mathematical explanation would be nice.

--craig hughes
INTERNET: csh at ec.ecn.purdue.edu
UUCP: pur-ee!csh
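On the question above: the Boltzmann machine weight update is dw_ij = eta * (<s_i s_j>_clamped - <s_i s_j>_free), a *stochastic* estimate of the gradient of an information-theoretic divergence, so individual weights typically wander around their eventual values rather than approach them monotonically; only the expected motion points downhill. A minimal sketch of a few learning epochs (an editorial toy, not the original Ackley-Hinton-Sejnowski code; all sizes illustrative):

import numpy as np

rng = np.random.default_rng(1)
n_v, n_h = 4, 2                          # visible and hidden units
n = n_v + n_h
W = 0.1 * rng.normal(size=(n, n))
W = 0.5 * (W + W.T)                      # symmetric weights, zero diagonal
np.fill_diagonal(W, 0.0)
data = rng.integers(0, 2, size=(20, n_v)).astype(float)

def gibbs(s, first_free, sweeps=20):
    # Resample units first_free..n-1 with Glauber dynamics at T = 1.
    for _ in range(sweeps):
        for i in range(first_free, n):
            p = 1.0 / (1.0 + np.exp(-(W[i] @ s)))
            s[i] = 1.0 if rng.random() < p else 0.0
    return s

eta = 0.05
for epoch in range(5):
    c_plus, c_minus = np.zeros((n, n)), np.zeros((n, n))
    for v in data:
        s = gibbs(np.concatenate([v, rng.integers(0, 2, n_h).astype(float)]),
                  first_free=n_v)         # clamped (plus) phase
        c_plus += np.outer(s, s)
        s = gibbs(rng.integers(0, 2, n).astype(float), first_free=0)
        c_minus += np.outer(s, s)         # free (minus) phase
    dW = eta * (c_plus - c_minus) / len(data)
    np.fill_diagonal(dW, 0.0)
    W += dW                               # watch one weight hop around:
    print("epoch %d   w[0,1] = %+.5f" % (epoch, W[0, 1]))

Averaging over more sweeps and more patterns shrinks the hops, but the sampled trajectory of any single weight is generally not monotone.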
From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Mon Jun 27 14:21:00 1988
From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU)
Date: Mon, 27 Jun 88 13:21 EST
Subject: Fractal Representations
Message-ID:

Has there been research into fractal representation of networks for any kind of processing tasks? Assuming that there is some amount of inborn processing ability in the brain, one would think that a fractal representation of the neural structure would be a very advantageous way of genetically describing that structure.

-Thomas G. Edwards
ins_atge at jhuvms

From ucece1!achhabra at Sun.COM Mon Jun 27 08:17:19 1988
From: ucece1!achhabra at Sun.COM (Atul Chhabra)
Date: Mon, 27 Jun 88 08:17:19 edt
Subject: Neural Architecture for Computer Vision
Message-ID: <8806271239.AA14219@uccba.uc.edu>

Can someone please e-mail me a copy of the announcement of the Workshop on Neural Architectures for Computer Vision? I accidentally deleted the copy I received. My e-mail address is ucece1!achhabra at ucqais.uc.edu. Thanks.

Atul K. Chhabra
University of Cincinnati

From merrill%bucasb.bu.edu at bu-it.BU.EDU Tue Jun 28 08:57:41 1988
From: merrill%bucasb.bu.edu at bu-it.BU.EDU (John Merrill)
Date: Tue, 28 Jun 88 08:57:41 EDT
Subject: Technical Report Available (was Fractal Representations)
In-Reply-To: 's message of Mon, 27 Jun 88 13:21 EST <8806272137.AA00148@bucasb.bu.edu>
Message-ID: <8806281257.AA07119@bucasb.bu.edu>

The following technical report is available from the Indiana University Department of Computer Science as TR #249.

Title: Fractally Configured Neural Networks
Authors: Merrill, John W. L. and Port, Robert F.

ABSTRACT

The effects of network structure on learning are investigated. We argue that there are problems for which specially tailored network structures are essential in order to achieve a desired result. We present a method to derive such network structures, and present the results of applying this algorithm to the problem of generalization in abstract neural networks. In order to derive these networks, it is essential that the system employ a flexible, yet efficient, representation of edge structure. The algorithm discussed here uses deterministic chaos to generate a fractal partition of the edge space, and uses that fractal partition to produce an edge structure. We discuss the results of applying this algorithm to a simple classification problem, and we compare the performance of the resulting network to the performance of standard feed-forward networks. Our results show that the specially constructed networks are better able to generalize than completely connected networks with the same number of nodes.

Electronic requests should be sent to merrill at bucasb.bu.edu on ARPA or to port at iuvax on UUCP. Physical requests should be sent to:

Nancy Garrett
Department of Computer Science
101 Lindley Hall
Indiana University
Bloomington, Indiana 47405

John Merrill | ARPA: merrill at bucasb.bu.edu
Center for Adaptive Systems
111 Cummington Street
Boston, Mass. 02215
Phone: (617) 353-5765
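A toy rendering of the idea shared by the two items above: a full wiring diagram grown from a tiny deterministic-chaos seed. Three numbers (x0, r, theta) determine an entire n-by-n edge structure, which is the flavor of a compact, genome-like description. This is an editorial sketch, not the Merrill-Port algorithm.

def chaotic_edges(n, x0=0.37, r=3.99, theta=0.6):
    # Iterate the logistic map once per candidate edge and threshold the
    # orbit to decide which of the n*n possible edges exist.
    A = [[0] * n for _ in range(n)]
    x = x0
    for i in range(n):
        for j in range(n):
            x = r * x * (1.0 - x)
            A[i][j] = 1 if x > theta else 0
    return A

for row in chaotic_edges(8):
    print("".join(str(e) for e in row))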
The morning session will focus on "low-level vision" where predominantly bottom-up or data-driven networks must reduce large amounts of data to compact feature sets which efficiently represent useful information in the original picture. In the afternoon, we will examine "high-level" tasks in which networks combine world and object knowledge with image data to perform object recognition and scene analysis. Throughout the workshop we will consider architectural issues such as the choice of representation and the use of modularity to see how they impact neural vision systems in areas like computational complexity, training, generalization and robustness. Comparing the strengths and weaknesses of various approaches and encouraging the exchange of ideas between research groups will result in an exciting workshop which will be of great benefit to all. All who wish to attend, please send abstracts (four copies) describing your work to: Patrick Ransil, Lockheed AI Center, 2710 Sand Hill Road, Menlo Park, CA 94025. Include your name, address, and phone number. Abstracts must be received by July 10. Organizing Committee: Patrick Ransil, Lockheed AI Center Dana Ballard, University of Rochester Federico Faggin, Synaptics Christof Koch, California Institute of Technology From alexis%yummy at gateway.mitre.org Wed Jun 1 08:22:32 1988 From: alexis%yummy at gateway.mitre.org (alexis%yummy@gateway.mitre.org) Date: Wed, 1 Jun 88 08:22:32 EDT Subject: HIMOSTHYLEDYNE - MACH Message-ID: <8806011222.AA04375@marzipan.mitre.org> >> ... dynamic, nets -- maybe chaotic ... Actually that's an interesting thought. Certainly dynamic networks are useful (meaning nets (necessarily with feedback) which converge to an orbit of period > 1) as are bifurcations and all that {I mean, biological systems do it, so it *MUST* be important :-) }. But what about chaos and strange attractors et al.? For those of you who don't believe Turing and think non-deterministic computing engines buy you some computing power (Richard are you out there?) I suppose it's a way of getting your non-determinism into the system ..., but I mean really, is a *chaotic-neural-network* worth anything? alexis wieland From golden at frodo.STANFORD.EDU Wed Jun 1 10:52:18 1988 From: golden at frodo.STANFORD.EDU (Richard Golden) Date: Wed, 1 Jun 88 07:52:18 PDT Subject: HIMOSTHYLEDYNE - MACH Message-ID: Hi Alexis....I never said non-deterministic computing engines buy you anything. I simply said that whatever type of PDP engine (deterministic or non-deterministic) you have, it must solve an inductive logic problem and the correct (but typically intractable) way to solve such problems is to use probabilistic inference. Regarding chaos, the only useful reason for having chaos in a dynamical system that I can see is that it might be mathematically impossible to represent certain distributions of attraction basins in a non-chaotic dynamical system. But I agree with you that just introducing chaos with the hopes that your network will do something neat is pretty naive I think. Richard From moody-john at YALE.ARPA Wed Jun 1 13:49:35 1988 From: moody-john at YALE.ARPA (john moody) Date: Wed, 1 Jun 88 13:49:35 EDT Subject: chaos and neural information processing Message-ID: <8806011744.AA15297@NEBULA.SUN3.CS.YALE.EDU> It is hard for me to imagine how chaotic behavior could be compu- tationally useful. 
However, the fact that our minds wander in the absence of sensory stimulation or concentration suggests that the large scale dynamics of association cortex may in fact be chaotic at some times. It is not clear however, whether the mind is generally chaotic, sometimes chaotic, marginally unstable, or (in isolation) simply just ergodic with a very long limit cycle. However, the brain is not a closed system, but is subject to changing environmental inputs and internal chemical and physio- logical states. This makes dynamical distinctions appropriate for closed systems irrelevant, except under very controlled condi- tions. Furthermore, the effects of thermal noise and the proba- bilistic nature of spike generation and synaptic transmission make analogies to classical dynamics incomplete at best. The general question of what regimes of behavior are possible as a function of internal parameters and patterns of sensory behavior is none-the-less extremely interesting. For example, it has already been suggested that certain patholog- ical phenomina such as epileptic seizures, migraine headaches, and visual hallucinations are the result of instabilities in oth- erwise stable networks. These instabilities are probably caused by changes in the balances of neurotransmitters and neuromodula- tors. Jack Cowan (University of Chicago Mathematics Department) and collaborators have developed some very impressive mathemati- cal theories to explain such phenomina. ------- From rba at flash.bellcore.com Wed Jun 1 10:46:25 1988 From: rba at flash.bellcore.com (Bob Allen) Date: Wed, 1 Jun 88 10:46:25 EDT Subject: No subject Message-ID: <8806011446.AA10548@flash.bellcore.com> Subject: Report Available Sequential Connectionist Networks for Answering Simple Questions about a Microworld*# Robert B. Allen 2A-367 Morristown, NJ 07960-1910 rba at bellocre.com Sequential back-propagation networks were trained to answer simple questions about objects in a microworld. The networks transferred the ability to answer this type of question to patterns on which they had not been trained. Moreover, the networks were shown to have developed expectations about the objects even when they were not present in the microworld. A variety of architectures were tested using this paradigm and the addition of channel-specific hidden layers was found to improve performance. Overall, these results are directed to the approach of building language users with connectionist networks, rather than language processors. *To appear in Proceedings Cognitive Science Society, 1988. #Not to be confused with HIMOSTHYLEDYNE of the previous message, this work introduces hierarchical sequential networks (HSN). From MITCHELL at EXETER.AC.UK Thu Jun 2 15:01:43 1988 From: MITCHELL at EXETER.AC.UK (MITCHELL@EXETER.AC.UK) Date: Thu, 02 Jun 88 15:01:43 BST Subject: Request Message-ID: Is there any chance that you could send me the list of email addresses for the "connectionist" group? Thanks, Don Mitchell From merrill%bucasb.bu.edu at bu-it.BU.EDU Mon Jun 6 17:21:35 1988 From: merrill%bucasb.bu.edu at bu-it.BU.EDU (merrill%bucasb.bu.edu@bu-it.BU.EDU) Date: Mon, 6 Jun 88 17:21:35 EDT Subject: chaos and neural information processing In-Reply-To: john moody's message of Wed, 1 Jun 88 13:49:35 EDT <8806011744.AA15297@NEBULA.SUN3.CS.YALE.EDU> Message-ID: <8806062121.AA04824@bucasb.bu.edu> From: john moody Date: Wed, 1 Jun 88 13:49:35 EDT It is hard for me to imagine how chaotic behavior could be compu- tationally useful. 
Freeman and Skarda, in a recent article in Behavioral and Brain Science [1], argue that chaos is a essential element in their model of olfactory function in the rat. For example, it has already been suggested that certain patholog- ical phenomina such as epileptic seizures, migraine headaches, and visual hallucinations are the result of instabilities in oth- erwise stable networks. These instabilities are probably caused by changes in the balances of neurotransmitters and neuromodulators. Jack Cowan (University of Chicago Mathematics Department) and collaborators have developed some very impressive mathematical theories to explain such phenomina. Other researchers have argued that the exact opposite is the case: that, for instance, the breakdown of chaos is characteristic of Sudden Heart Failure; the heart seems to collapse from a normally chaotic beat into a phase-locked regime. Similarly, researchers have argued that the appearance of a domain of apparent periodicity mildly *predicts* the onset of epileptic seizure. (If anyone wants a reference, I'll look it up in my files; I've just moved, and they're a little chaotic right now.) As a purely philosophical point, I can imagine a reason for chaos to be an essential aspect of brain function. By its very nature, a chaotic system is capable of changing states at any time with only a slight kick. That could allow a neural network to make a decision at any time without needing an external reset signal, such as most abstract neural networks require. [1] Freeman, W. and S. Scarda, "How brains make chaos in order to make sense of the world", Brain and Behavioral Science, November, '87. From bhb at cadre.dsl.pittsburgh.edu Mon Jun 6 09:20:51 1988 From: bhb at cadre.dsl.pittsburgh.edu (Barry Blumenfeld) Date: Mon, 6 Jun 88 09:20:51 EDT Subject: No subject Message-ID: <8806061321.AA10055@cadre.dsl.pittsburgh.edu> I'm a fellow in medical informatics at presby, with an interest in connectionism. I'd like to start attending the weekly meetings of your group when they resume. Could you place me on your mailling list? email address....bhb at cadre.dsl.pittsburgh.edu ...thanks Barry Blumenfeld From unido!tumult!schmidhu at uunet.UU.NET Mon Jun 6 14:28:34 1988 From: unido!tumult!schmidhu at uunet.UU.NET (Juergen Schmidhuber) Date: Mon, 6 Jun 88 16:28:34 -0200 Subject: "Technical Note Available" Message-ID: <8806061428.AA02242@tumult.informatik.tu-muenchen.de> Here is the abstract of a technical note on accelerated learning in neural networks. Write or send email to obtain a copy. Accelerated Learning in Back-Propagation Nets Juergen Schmidhuber, Institut fuer Informatik Technische Universitaet Muenchen Arcisstr. 21 8000 Muenchen 2, Germany Two of the most serious problems with back-propagation are insufficient speed and the danger of getting stuck in local minima. We offer an approach to cope with both of these problems: Instead of using bp to find zero-points of the gradient of the error-surface we are looking for zero-points of the error-surface itself. This can be done with less computational effort than there is in second order methods. Experimental results indicate that in cases where only a small fraction of units is active simultaneously, this method can be applied successfully. Furthermore it can be significantly faster than conventional bp. 
From bhb at cadre.dsl.pittsburgh.edu Mon Jun 6 09:20:51 1988
From: bhb at cadre.dsl.pittsburgh.edu (Barry Blumenfeld)
Date: Mon, 6 Jun 88 09:20:51 EDT
Subject: No subject
Message-ID: <8806061321.AA10055@cadre.dsl.pittsburgh.edu>

I'm a fellow in medical informatics at presby, with an interest in connectionism. I'd like to start attending the weekly meetings of your group when they resume. Could you place me on your mailing list? email address....bhb at cadre.dsl.pittsburgh.edu ...thanks

Barry Blumenfeld

From unido!tumult!schmidhu at uunet.UU.NET Mon Jun 6 14:28:34 1988
From: unido!tumult!schmidhu at uunet.UU.NET (Juergen Schmidhuber)
Date: Mon, 6 Jun 88 16:28:34 -0200
Subject: "Technical Note Available"
Message-ID: <8806061428.AA02242@tumult.informatik.tu-muenchen.de>

Here is the abstract of a technical note on accelerated learning in neural networks. Write or send email to obtain a copy.

Accelerated Learning in Back-Propagation Nets

Juergen Schmidhuber, Institut fuer Informatik
Technische Universitaet Muenchen
Arcisstr. 21
8000 Muenchen 2, Germany

Two of the most serious problems with back-propagation are insufficient speed and the danger of getting stuck in local minima. We offer an approach that copes with both of these problems: instead of using bp to find zero-points of the gradient of the error surface, we look for zero-points of the error surface itself. This can be done with less computational effort than second-order methods require. Experimental results indicate that in cases where only a small fraction of units is active simultaneously, this method can be applied successfully. Furthermore, it can be significantly faster than conventional bp.

Juergen Schmidhuber
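The note itself has the details; the following is only a rough reading of the abstract's central idea (not Schmidhuber's actual algorithm): instead of descending toward a zero of the gradient, take root-finding steps toward a zero of the error E itself, using first derivatives only.

import numpy as np

# Sketch of the stated idea, under my own assumptions (the note's real
# algorithm may differ): for scalar E >= 0, the least-norm step solving the
# linearization E(w) + g.dw = 0 is
#     w <- w - E(w) * g / ||g||^2,   where g = grad E(w).
# No Hessian is required, unlike second-order methods.

def error(w):                        # toy error whose minimum value is zero
    return 0.5 * np.sum((w - 1.0) ** 2)

def grad(w):
    return w - 1.0

w = np.zeros(4)
for step in range(50):
    g = grad(w)
    gg = float(g @ g)
    if error(w) < 1e-12 or gg < 1e-12:   # reached a zero of E (or a flat spot)
        break
    w -= (error(w) / gg) * g             # root-finding step on E, not on grad E

print(step, error(w))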
From R09614%BBRBFU01.BITNET at VMA.CC.CMU.EDU Mon Jun 6 10:15:51 1988
From: R09614%BBRBFU01.BITNET at VMA.CC.CMU.EDU (R09614%BBRBFU01.BITNET@VMA.CC.CMU.EDU)
Date: 06 Jun 88 16:15:51 +0200
Subject: No subject
Message-ID:

Comment on John Moody's remarks about chaos in neural systems

Today we have at our disposal a variety of methods for assessing, from an experimental time series, some of the dynamical properties of the system under study. These methods (phase space reconstruction, Poincare maps, correlation dimensions, Lyapunov exponents, Kolmogorov entropies, ...) may give us a quantitative measure of real brain activity. In 1985, these methods were used for the first time in the analysis of brain activity [1] from electroencephalogram (EEG) recordings. Although the origin of the EEG is not well understood, several well-defined stages of brain activity show characteristic EEGs, which are still used as diagnostic tools. We showed that several stages of the sleep cycle could obey low-dimensional chaotic dynamics. Later, this kind of analysis was extended by other laboratories [2] and by our group [3] to the study of other stages of brain activity. The general conclusions are:

1) With eyes open, brain activity shows noise-like (very high dimensional) behavior.
2) With eyes closed, the dynamics may be described by a chaotic attractor of relatively low dimension (around 6).
3) This value decreases during the sleep cycle and reaches a minimum near 4 in deep sleep (stage IV).
4) REM (dream) episodes are characterized by rather high dimensional or noise-like behavior.
5) During severe pathologies such as epileptic seizures (petit mal) and coma (the terminal state of Creutzfeldt-Jakob disease), the correlation dimension drops to lower values (2.05 for petit mal).

Therefore, the higher the dimension, the more alert the brain. Could chaos be associated with processing power? This goes in the same direction as the findings of Goldberger et al. [5] about the dying heart that you mentioned. We have found that the normal heart is not a periodic oscillator: the variability between successive beats shows non-random behavior [4]. Here again, the norm is a chaotic state, whereas pathologies seem to be associated with more coherent behavior.

Regarding the computing abilities of chaotic systems, a great deal of work has been done on these topics (see for example J.S. Nicolis in [6]). With its particular properties, the chaotic attractor is an interesting candidate for information processing: it continuously creates and destroys information (in the sense of Shannon) at the same time. There is a constant creation of new conditions (a large region of phase space may be visited), while the memory of the present state is progressively lost ... Now, how to design a neural network which could exploit such properties is another question. However, we think that much interesting work can still be done in this direction. We are presently engaged in such studies and are trying to refine these data with more accurate techniques.

Anyone interested may write to: R09614 at BBRBFU01.BITNET

Any comment is welcome.

Alain Destexhe
Agnessa Babloyantz
Faculte des Sciences, Universite Libre de Bruxelles, Campus de la Plaine (CP-231), Bd. du Triomphe, B-1050 Brussels, Belgium

[1] A. Babloyantz, C. Nicolis & M. Salazar: Phys. Lett. 111A: 152 (1985)
[2] S.P. Layne, G. Mayer-Kress & J. Holzfuss: in Dimensions and Entropies in Chaotic Systems, Ed. G. Mayer-Kress (Springer, Berlin 1986); I. Dvorak & J. Siska: Phys. Lett. 118A: 63 (1986); P.E. Rapp, I.D. Zimmerman, A.M. Albano, G.C. de Guzman, N.N. Greenbaum & T.R. Bashore: in Nonlinear Oscillations in Biology and Chemistry, Ed. H.G. Othmer, Lecture Notes in Biomathematics 66: 175 (Springer, Berlin 1986); C.A. Skarda & W.J. Freeman: Behav. Brain Sci. 10: 187 (1987); J. Roschke & E. Basar: in Dynamics of Sensory and Cognitive Processing by the Brain, Ed. E. Basar, Springer Series in Brain Dynamics, Vol. 1, 203 (1988)
[3] A. Babloyantz & A. Destexhe: in Temporal Disorder in Human Oscillatory Systems, Eds. L. Rensing, U. an der Heiden and M.C. Mackey, Springer Series in Synergetics 36: 48 (1987); A. Babloyantz & A. Destexhe: in Proceedings of the First IEEE International Conference on Neural Networks, Eds. M. Caudill and C. Butler, Vol. 4: 31 (1987); A. Babloyantz & A. Destexhe: Proc. Natl. Acad. Sci. USA 83: 3513 (1986); A. Babloyantz & A. Destexhe: in From Chemical to Biological Organization, Eds. M. Markus, S. Muller and G. Nicolis (Springer, Berlin 1988)
[4] A. Babloyantz & A. Destexhe: Biol. Cybernetics 58: 131 (1988)
[5] A. Goldberger, V. Bhargava, B.J. West & A.J. Mandell: Physica 17D: 207 (1985)
[6] J.S. Nicolis: in Hierarchical Systems (Springer, Berlin 1985); J.S. Nicolis, G. Mayer-Kress & G. Haubs: Z. Naturforsch. 38a: 1157 (1983)
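For readers who have not met the methods listed above, the following is a bare-bones sketch of a delay embedding plus the Grassberger-Procaccia correlation sum -- the standard textbook recipe on synthetic data, not the authors' code; real EEG work needs care with the choice of delay, embedding dimension, stationarity, and the scaling region.

import numpy as np

# Delay embedding + Grassberger-Procaccia correlation sum (textbook recipe).
# Synthetic logistic-map data stands in for a recording.

def delay_embed(x, dim, tau):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

def correlation_sum(pts, r):
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    off_diag = ~np.eye(len(pts), dtype=bool)       # exclude self-pairs
    return np.mean(d[off_diag] < r)

x = np.empty(1200)
x[0] = 0.3
for t in range(len(x) - 1):
    x[t + 1] = 3.9 * x[t] * (1.0 - x[t])

pts = delay_embed(x, dim=3, tau=1)
radii = np.logspace(-2.0, -0.7, 8)
logC = np.log([correlation_sum(pts, r) for r in radii])
slope, _ = np.polyfit(np.log(radii), logC, 1)      # slope of log C(r) vs log r
print("correlation dimension estimate:", slope)    # near 1 for this 1-D map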
From feldman at cs.rochester.edu Wed Jun 8 15:41:25 1988
From: feldman at cs.rochester.edu (feldman@cs.rochester.edu)
Date: Wed, 8 Jun 88 15:41:25 EDT
Subject: Dr. Feldman's address
Message-ID: <8806081941.AA04284@wasat.cs.rochester.edu>

Dr. Feldman is available at feldman at cs.rochester.edu

Dr. Jerome A. Feldman
Department of Computer Science
University of Rochester
Rochester, NY 14627

Or:

Dr. Jerome A. Feldman
International Computer Science Institute
1947 Center Street
Berkeley, CA 94704-1105

From laic!taurus!pat at lll-lcc.llnl.gov Fri Jun 10 15:33:40 1988
From: laic!taurus!pat at lll-lcc.llnl.gov (Pat Ransil)
Date: Fri, 10 Jun 88 12:33:40 PDT
Subject: AAAI Neural Vision Workshop
Message-ID: <8806101933.AA01958@taurus.laic.uucp>

*******************************************************************************

CALL FOR WORKSHOP PARTICIPATION: NEURAL ARCHITECTURES FOR COMPUTER VISION
AAAI-88, Minneapolis, Minnesota, Saturday, August 20
To be held in the Radisson St. Paul Hotel

Active research in the use of Artificial Neural Networks for Computer Vision has led to a proliferation of architectures, with each design focusing on particular aspects of the problem. In this full day workshop we will look at neural approaches to dealing with many of the difficult issues in computer vision. The morning session will focus on "low-level vision" where predominantly bottom-up or data-driven networks must reduce large amounts of data to compact feature sets which efficiently represent useful information in the original picture. In the afternoon, we will examine "high-level" tasks in which networks combine world and object knowledge with image data to perform object recognition and scene analysis. Throughout the workshop we will consider architectural issues such as the choice of representation and the use of modularity to see how they impact neural vision systems in areas like computational complexity, training, generalization and robustness. Comparing the strengths and weaknesses of various approaches and encouraging the exchange of ideas between research groups will result in an exciting workshop which will be of great benefit to all.

All who wish to attend, please send abstracts (four copies) describing your work to: Patrick Ransil, Lockheed AI Center, 2710 Sand Hill Road, Menlo Park, CA 94025. Include your name, address, and phone number. Abstracts must be received by July 10.

Organizing Committee:
Patrick Ransil, Lockheed AI Center
Dana Ballard, University of Rochester
Federico Faggin, Synaptics
Christof Koch, California Institute of Technology

From jordan at psyche.mit.edu Sun Jun 12 03:51:47 1988
From: jordan at psyche.mit.edu (Michael Jordan)
Date: Sun, 12 Jun 88 03:51:47 edt
Subject: Technical Report available
Message-ID: <8806120752.AA20214@ATHENA.MIT.EDU>

"Supervised learning and systems with excess degrees of freedom"

Michael I. Jordan
Massachusetts Institute of Technology
COINS Technical Report 88-27

ABSTRACT

When many outputs of an adaptive system have equivalent effects on the environment, the problem of finding appropriate actions given desired results is ill-posed. For supervised learning algorithms, the ill-posedness of such ``inverse learning problems'' implies a certain flexibility---during training, there are in general many possible target vectors corresponding to each input vector. To allow supervised learning algorithms to make use of this flexibility, the current paper considers how to specify targets by sets of constraints, rather than as particular vectors. Two classes of constraints are distinguished---configurational constraints, which define regions of output space in which an output vector must lie, and temporal constraints, which define relationships between outputs produced at different points in time. Learning algorithms minimize a cost function that contains terms for both kinds of constraints.
This approach to inverse learning is illustrated by a robotics application in which a network finds trajectories of inverse kinematic solutions for manipulators with excess degrees of freedom.

To obtain a copy, contact:
jordan at wheaties.ai.mit.edu
or
Ms. Connie Smith
Computer and Information Science
Graduate Research Center
University of Massachusetts
Amherst, MA 01003
smith at cs.umass.edu
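To make the "configurational constraint" idea concrete, here is a hypothetical cost term of that flavor -- a box region of output space whose penalty (and gradient) vanish everywhere inside the region. This is only a guess at the general form, not the report's actual cost function.

import numpy as np

# Hypothetical box-region constraint cost: zero anywhere inside the box,
# quadratic in the amount of violation outside it.

def box_cost(y, lower, upper):
    below = np.maximum(lower - y, 0.0)     # amount by which y falls short
    above = np.maximum(y - upper, 0.0)     # amount by which y overshoots
    return 0.5 * np.sum(below ** 2 + above ** 2)

y  = np.array([0.2, 0.9, 0.5])
lo = np.array([0.0, 0.0, 0.4])
hi = np.array([0.5, 0.5, 0.6])
print(box_cost(y, lo, hi))                 # only the second output is penalized

Because the gradient vanishes inside the region, a learner can spend the remaining freedom on other terms of the cost, such as temporal constraints relating successive outputs.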
From hurlbert at wheaties.ai.mit.edu Mon Jun 13 18:32:22 1988
From: hurlbert at wheaties.ai.mit.edu (Anya C. Hurlbert)
Date: Mon, 13 Jun 88 18:32:22 EDT
Subject: Italian mathematician seeks position
Message-ID: <8806132232.AA17098@rice-chex.ai.mit.edu>

ATTENTION: The following is the c.v. of Cesare Furlanello, a young mathematician with an Italian Ph.D. who is interested in using the techniques of logic and analysis to model human reasoning. He presently works at the Institute for Research in Science and Technology (which specializes in image understanding and speech and pattern recognition) and would like to spend next year or a part thereof working in the U.S. He will probably come with his own funding. If you know of any research positions for which he might be suitable, or if you yourself are interested in having him work with you, please send mail directly to him, with a cc to me, in case the uunet goes down again. I am hurlbert at wheaties.ai.mit.edu. Thank you !!!!!

Cesare Furlanello - IRST - June 1988

CESARE FURLANELLO
CURRICULUM VITAE

Age: 27
Nationality: Italian
Address: IRST, 38050 Povo (Trento), Italia
Tel. No: 0461/810105
Email: furlan at irst.uucp

EDUCATION

HIGH SCHOOL: Sc. Grammar School; maturita' 1980 with 60/60 (A Level).

UNIVERSITY: graduated in pure Mathematics on 11 November 1986, 110/110 cum laude (maximum with distinction). I was particularly interested in Algebraic Geometry; other topics of major interest before and during the preparation of the thesis were, amongst others, Category Theory, General Topology and Commutative Algebra. On these topics I regularly attended lectures and meetings.

Supervisor: Professor Francesco Baldassarri

Title of the thesis: Linear Differential Equations with algebraic relations between the solutions

Abstract of the thesis. The thesis attempts to apply modern results of differential algebra and projective geometry to the work of Gino Fano on homogeneous linear differential equations with coefficients in C(z) and algebraic relations between their fundamental solutions. One of the major interests in this topic consists in describing the fundamental solutions in terms of those of equations of lower order. The originality of this approach consists in the study of the integral curve g of such an equation L, a curve which can be defined in the (n-1)-th projective space V if L is of n-th order. It is therefore possible to study the differential Galois group G of L by means of the group Gpr of projective transformations of the variety defined in V by the algebraic relations existing between the fundamental solutions. In this thesis two cases are investigated: a) the integral curve g is algebraic; b) g is contained in a quadric hypersurface and n<7. A theorem and other results are established giving a characterization of Gpr not previously known in the literature of differential algebra. Those tools, and the concept of symmetric powers of a linear differential operator introduced in a recent work of M.F. Singer, are used together with some sophisticated notions of projective geometry, like flecnodal curves and the theory of Schubert cycles, in order to simplify the proofs of several results of Fano. Explicit conditions for cases a) and b) are found.

FELLOWSHIPS: a 12-month undergraduate studentship from the CNR (Consiglio Nazionale delle Ricerche).

POSTGRADUATE STUDY AND WORK EXPERIENCE (at IRST)
- A tutorial on GCLisp (January 87, Trento).
- A course on general techniques of Pattern Recognition (Spring 87, Trento Univ.).
- A course on Symbolic Computation with the MAPLE language (Feb 87, SASIAM, Bari).
- Studies on formal approaches to PR: the categorical and topological approach.
- Summer school of categorical Topology (7-12 June 87, Italian Group of Research in Topology, Bressanone).
- UNIX operating system (Fall 87, internal course, IRST).
- A course on Logic Programming (Spring 1987, Dept. of Mathematics, Trento Univ.).
- I have been admitted to the '88 CIME "Logic and CS" Summer School (lessons held by A. Nerode, J. Hartmanis, R. Platek, G. Sacks, A. Scedrov, 20-28 June 1988, Montecatini).

At IRST I was assigned the task of studying concepts using mathematical methods by the Director of IRST, Dr Luigi Stringa. Dr Stringa is looking for an approach capable of answering comprehensively the various problems of AI and Pattern Recognition. Due to my background in Algebraic Geometry, I started studying geometrical-topological methods. An idea that seems very challenging to me is that of using those powerful formal techniques which are major tools in various fields of Mathematics, like Category Theory (and, within that framework, Sheaf and Topos Theory), for modelling some aspects of human reasoning. My opinion is that some formalization can be attempted, even if limited to a specific domain, and it is supported by the fact that the use of Category Theory is by now well established in Logic and in Logic for CS due to Dana Scott and many others. Categorical models are especially used for the non-traditional logics which have recently been receiving wide attention for computation and in the AI environment. In any case, it is obvious that a valid approach should be sensitive to computational and cognitive paradigms.
From richardh%tsuna.uucp at CVAXA.SUSSEX.AC.UK Tue Jun 14 13:06:29 1988
From: richardh%tsuna.uucp at CVAXA.SUSSEX.AC.UK (Richard Hall)
Date: Tue, 14 Jun 88 13:06:29 BST
Subject: No subject
Message-ID: <24984.8806141206@tsuna.cvaxa.sussex.ac.uk>
Reply-To: Andy Clark

The Guest Editors would welcome any contributions to the forthcoming Special Issue. Please send all replies to the EMAIL addresses listed below and not the sender's! -- Richard Hall.

>>>>>>>>>>>>>>>>>>> CONNECTIONISM IN CONTEXT <<<<<<<<<<<<<<<<<<<<<<<<
__________________________ A Special Issue ________________________________

AI & Society
Journal Of Human And Machine Intelligence
A New International Journal from Springer-Verlag
___________________________________________________________________________

AI & Society would like to invite you to a discussion of the horizons of CONNECTIONIST MODELLING. The SPECIAL ISSUE aims to treat connectionism in a wide context including its social and cultural implications. It would also discuss developments in NEUROCOMPUTING and issues such as validation and legal constraints. Some of the topics of interest include:

1. Can connectionist models of individual information-processing be fruitfully extended to help model and understand social wholes (eg countries, committees, ant-colonies!)?
2. Can connectionism illuminate developmental issues in a new way? Can it perhaps illuminate the evolutionary trajectory of mind?
3. Is symbolic AI necessary to an understanding of the full range of human thought?
4. Connectionism, semiotics, and literary criticism. Can connectionist accounts of learning and knowledge representation shed any new light on discussions of meaning (or vice versa)?
5. How powerful is the idea of cultural knowledge (raised by Smolensky) as involving the use of a special virtual machine (the conscious rule interpreter) which uses a subsymbolic implementation of a symbol processing society?
6. "Real Symbol Processing", it is claimed, depends upon the devious combination of PDP-based capacities with actual manipulation of structures in the external world. Is it our ability to create and use physical representations which is the key to our capacity to engage in serial symbolic thought? Could conventional computer architectures be putting back into deep structures of the head what really belongs in the interaction of the head and the world?
7. PDP and ecological psychology.
Could PDP provide an account of the mechanisms which ecological psychology (despite its occasional claims) seems to need but could never previously display?

Editorial Team for the Special Issue on Connectionism:

Guest Editors: Andy Clark and Rudi Lutz, School of Cognitive Sciences, University of Sussex, Brighton, Sussex, UK
AI & Society Editors: Janet Vaux and Ajit Narayanan.

Contributions could include brief discussion papers for the Open Forum section or major papers for the main section of AI & Society. PAPERS SHOULD BE SUBMITTED either to our GUEST EDITORS, Andy Clark and Rudi Lutz, or to the AI & Society EDITOR: Karamjit S Gill, Seake Centre, Brighton Polytechnic, Moulsecomb, Brighton, BN2 4GJ, Sussex UK

EMAIL:
JANET: andycl at uk.ac.sussex.unx1
BITNET: andycl at unx1.sussex.ac.uk
ARPANET: andycl%uk.ac.sussex.unx1 at nss.cs.ucl.ac.uk
From jose at tractatus.bellcore.com Wed Jun 15 13:55:31 1988
From: jose at tractatus.bellcore.com (Stephen J. Hanson)
Date: Wed, 15 Jun 88 13:55:31 EDT
Subject: JOB ANNOUNCEMENT
Message-ID: <8806151755.AA19344@tractatus.bellcore.com>

----------------------------------------------------------------

CONNECTIONIST: The Human Information Processing Group (HIPG) at Princeton University offers a position for a member of the professional research staff to coordinate a broad range of research in human information processing and to conduct research in neural networks, connectionist, or PDP-style models. The applicant should have some interest or background in neuroscience and its relation to neural network models and be willing to collaborate or interact with neuroscience faculty. The applicant should also have a familiarity with mainstream cognitive science areas, human factors, and computational modeling more generally. Ph.D. required. Applications should be submitted before 1 September 1988.

HIPG is an inter-disciplinary group within Princeton University consisting of psychologists, engineers, philosophers, linguists, and computer scientists. Its facilities include the Cognitive Science Lab, the Interactive Computer Graphics Lab, the Robotics Lab, the Engineering Anomalies Lab, and the Cognitive Motivation Lab. Princeton University is an equal opportunity employer.

Address inquiries to:
Search Committee
Human Information Processing Group
Department of Psychology
Green Hall
Princeton University
Princeton, New Jersey 08544

-----------------------------------------------------------------------

From merrill%bucasb.bu.edu at bu-it.BU.EDU Wed Jun 15 14:45:32 1988
From: merrill%bucasb.bu.edu at bu-it.BU.EDU (John Merrill)
Date: Wed, 15 Jun 88 14:45:32 EDT
Subject: Technical Report available
In-Reply-To: Michael Jordan's message of Sun, 12 Jun 88 03:51:47 edt <8806120752.AA20214@ATHENA.MIT.EDU>
Message-ID: <8806151845.AA02963@bucasb.bu.edu>

Would you please send me a copy of your technical report on "Supervised learning and systems with excess degrees of freedom"? My address is

John Merrill
Center for Adaptive Systems
111 Cummington Street
Boston, Mass. 02215

Thank you.

From THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU Sun Jun 19 11:54:00 1988
From: THEPCAP%SELDC52.BITNET at VMA.CC.CMU.EDU (THEPCAP%SELDC52.BITNET@VMA.CC.CMU.EDU)
Date: Sun, 19 Jun 88 11:54 O
Subject: Technical Report available
Message-ID:

LU TP 88-8
April 1988

TRACK FINDING WITH NEURAL NETWORKS

Carsten Peterson
Department of Theoretical Physics, University of Lund,
Solvegatan 14A, S-223 62 Lund, Sweden

[Submitted to Nuclear Instruments and Methods]

ABSTRACT (a modified version): In high energy physics experiments the produced particles give rise to sparks or signals along their tracks.
In events with large multiplicity, reconstructing the tracks from the signals poses a computationally intensive task. Until now, signal data have been collected on tapes and then processed with conventional CPU power. In future accelerators like the SSC, real-time experimental triggers will be needed that would benefit from immediate track finding. The track finding problem is a combinatorial optimization problem. We have cast this problem onto a neural network by letting a neuron represent whether a line segment between two signals exists or not. We have applied mean field theory equations together with a greedy heuristic to planar situations, with very encouraging results with respect to the quality of the solutions. Rapid convergence times and good scaling properties are also found. The generalization to three dimensions is straightforward. With the great potential that exists for realizing neural network technology in custom-made hardware, we feel that this approach to the track finding problem could be very important in the future for experimental high energy physics.

Our approach to track finding is generic. Many other and less "peaceful" applications than tracking elementary particles could easily be imagined.

----------------------------

To receive a copy send name and address to: THEPCAP at SELDC52 [bitnet]. Please allow 4-5 weeks for overseas delivery.
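The abstract is terse about the method; the following generic mean field annealing loop sketches the kind of computation it alludes to, with random couplings standing in for the paper's actual track-finding energy (which rewards smoothly joining segments and penalizes conflicting ones).

import numpy as np

# Generic mean field annealing sketch; W and b are random stand-ins, NOT the
# paper's energy. Each v[i] in (0, 1) is the mean-field average of a binary
# "segment i belongs to a track" variable.

rng = np.random.default_rng(0)
n = 12
W = rng.normal(size=(n, n))
W = 0.5 * (W + W.T)                          # symmetric couplings
np.fill_diagonal(W, 0.0)
b = rng.normal(size=n)

v = np.full(n, 0.5)                          # start fully undecided
for T in np.geomspace(5.0, 0.05, 60):        # slowly lower the temperature
    for _ in range(10):                      # relax at this temperature
        u = np.clip((W @ v + b) / T, -30.0, 30.0)
        v = 1.0 / (1.0 + np.exp(-u))         # mean field theory equations

print(np.round(v))                           # near 0/1: segments switched on or off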
From ohare at nrl-css.arpa Sun Jun 19 10:59:24 1988
From: ohare at nrl-css.arpa (John O'Hare)
Date: Sun, 19 Jun 88 10:59:24 EDT
Subject: Research support
Message-ID: <8806191459.AA09437@nrl-css.arpa>

1. Research proposals based on connectionist models that lead to a better understanding of the classification of transient, non-speech sounds (as well as perception in general) are being supported by: Office of Naval Research (Code 1142PS), Attn: J. J. O'Hare, 800 N. Quincy St., Arlington, VA 22217-5000. A typical program would be for 3 years at an annual level of $100-150K. A full proposal would be fine, but a preliminary proposal to establish mutual interest would be acceptable. Start date would be 1 October 1988.

From suneast!vargas!velu at Sun.COM Mon Jun 20 12:09:35 1988
From: suneast!vargas!velu at Sun.COM (Velu Sinha)
Date: Mon, 20 Jun 88 12:09:35 EDT
Subject: Technical Report available
In-Reply-To: 's message of Sun, 19 Jun 88 11:54 O <8806200312.AA09878@eneevax.umd.edu>
Message-ID: <8806201609.AA04004@vargas.ecd.sun.com>

I'd like a copy of your track finding TR...

Velu Sinha
10 Magazine Street #1004
Cambridge, MA 02139 USA

From harnad at Princeton.EDU Tue Jun 21 11:09:09 1988
From: harnad at Princeton.EDU (Stevan Harnad)
Date: Tue, 21 Jun 88 11:09:09 edt
Subject: Speech Recognition: Reference Query
Message-ID: <8806211509.AA24419@mind>

For a colleague working on speech recognition who is unfamiliar with the connectionist work in this area: I would be grateful to receive pointers to the current and representative work.

Stevan Harnad (harnad at mind.princeton.edu)

From hendler at dormouse.cs.umd.edu Fri Jun 24 09:29:03 1988
From: hendler at dormouse.cs.umd.edu (Jim Hendler)
Date: Fri, 24 Jun 88 09:29:03 EDT
Subject: TR
Message-ID: <8806241329.AA12592@dormouse.cs.umd.edu>

I don't expect the people on this list to find the following of staggering import, but in case anyone knows someone looking for such a thing:

BackProp: A Tool for Learning About Connectionist Architectures
J. Pollack, M. Evett, J. Hendler
Technical Report SRC-TR-88-43

Abstract

This paper provides an implementation, in Common Lisp, of an epoch learning algorithm, a simple modification of the standard back-propagation algorithm. The implementation is NOT intended to be a general purpose, high-powered back-propagation learning system. Rather, this report seeks only to provide a simple implementation of a popular and easily understood connectionist learning algorithm. It is primarily intended to be a teaching tool for AI researchers wishing to familiarize themselves or their students with back-propagation in a language with which they are comfortable.

----

Copies can be requested from: Tammy Paolino, tammy at ra.umd.edu (arpa)

From csh at ec.ecn.purdue.edu Sat Jun 25 22:53:42 1988
From: csh at ec.ecn.purdue.edu (Craig S Hughes)
Date: Sat, 25 Jun 88 21:53:42 EST
Subject: Weight Behavior in the Boltzmann Machine
Message-ID: <8806260253.AA23085@ec.ecn.purdue.edu>

A question for someone who is familiar with the mathematics of the Boltzmann machine: as the network converges during learning, do the weights monotonically increase or decrease to their optimal values, or do they "hop around" until they hit their final value and stabilize? A mathematical explanation would be nice.

--craig hughes
INTERNET: csh at ec.ecn.purdue.edu
UUCP: pur-ee!csh
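Not a full answer, but the form of the learning rule suggests why weights generally "hop around": each update dw_ij = eps * (<s_i s_j>_clamped - <s_i s_j>_free) follows a sampled, hence noisy, estimate of the free-phase statistics, so individual weights fluctuate about their equilibrium values even though the expected update descends the cost. A toy, fully visible (no hidden units) simulation of that behavior:

import numpy as np

# Toy fully visible Boltzmann machine; watch one weight during learning.
# Illustration only, not a mathematical argument.

rng = np.random.default_rng(1)
n = 4
data = rng.integers(0, 2, size=(20, n)) * 2 - 1       # +/-1 training patterns
w = np.zeros((n, n))
clamped = (data.T @ data) / len(data)                 # exact clamped statistics

def gibbs_sample(w, sweeps=30):
    s = rng.integers(0, 2, size=n) * 2 - 1
    for _ in range(sweeps * n):
        i = rng.integers(n)
        p_on = 1.0 / (1.0 + np.exp(-2.0 * (w[i] @ s)))   # P(s_i = +1); w_ii = 0
        s[i] = 1 if rng.random() < p_on else -1
    return s

trace = []
for epoch in range(200):
    samples = [gibbs_sample(w) for _ in range(10)]
    free = np.mean([np.outer(s, s) for s in samples], axis=0)
    w += 0.05 * (clamped - free)                      # noisy gradient step
    np.fill_diagonal(w, 0.0)
    trace.append(w[0, 1])

print([round(t, 3) for t in trace[::40]])             # w[0,1] wanders as it settles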
From INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU Mon Jun 27 14:21:00 1988
From: INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU (INS_ATGE%JHUVMS.BITNET@VMA.CC.CMU.EDU)
Date: Mon, 27 Jun 88 13:21 EST
Subject: Fractal Representations
Message-ID:

Has there been research into fractal representation of networks for any kind of processing task? Assuming that there is some amount of inborn processing ability in the brain, one would think that a fractal representation of the neural structure would be a very advantageous way of genetically describing that structure.

-Thomas G. Edwards
ins_atge at jhuvms

From ucece1!achhabra at Sun.COM Mon Jun 27 08:17:19 1988
From: ucece1!achhabra at Sun.COM (Atul Chhabra)
Date: Mon, 27 Jun 88 08:17:19 edt
Subject: Neural Architecture for Computer Vision
Message-ID: <8806271239.AA14219@uccba.uc.edu>

Can someone please e-mail me a copy of the announcement of the Workshop on Neural Architectures for Computer Vision? I accidentally deleted the copy I received. My e-mail address is ucece1!achhabra at ucqais.uc.edu. Thanks.

Atul K. Chhabra
University of Cincinnati

From merrill%bucasb.bu.edu at bu-it.BU.EDU Tue Jun 28 08:57:41 1988
From: merrill%bucasb.bu.edu at bu-it.BU.EDU (John Merrill)
Date: Tue, 28 Jun 88 08:57:41 EDT
Subject: Technical Report Available (was Fractal Representations)
In-Reply-To: 's message of Mon, 27 Jun 88 13:21 EST <8806272137.AA00148@bucasb.bu.edu>
Message-ID: <8806281257.AA07119@bucasb.bu.edu>

The following technical report is available from the Indiana University Department of Computer Science as TR #249.

Title: Fractally Configured Neural Networks
Authors: Merrill, John W. L. and Port, Robert F.

ABSTRACT

The effects of network structure on learning are investigated. We argue that there are problems for which specially tailored network structures are essential in order to achieve a desired result. We present a method to derive such network structures, and present the results of applying this algorithm to the problem of generalization in abstract neural networks. In order to derive these networks, it is essential that the system employ a flexible, yet efficient, representation of edge structure. The algorithm discussed here uses deterministic chaos to generate a fractal partition of the edge space, and uses that fractal partition to produce an edge structure. We discuss the results of applying this algorithm to a simple classification problem, and we compare the performance of the resulting network to the performance of standard feed-forward networks. Our results show that the specially constructed networks are better able to generalize than completely connected networks with the same number of nodes.

Electronic requests should be sent to merrill at bucasb.bu.edu on ARPA or to port at iuvax on UUCP. Physical requests should be sent to:

Nancy Garrett
Department of Computer Science
101 Lindley Hall
Indiana University
Bloomington, Indiana 47405

------------------------------------------------------------------------
John Merrill                 | ARPA: merrill at bucasb.bu.edu
Center for Adaptive Systems  |
111 Cummington Street        |
Boston, Mass. 02215          | Phone: (617) 353-5765
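The abstract gives the idea but not the construction; a toy version of "deterministic chaos generates the edge structure" might look like the following (my invention for illustration, not the report's algorithm).

import numpy as np

# Hypothetical sketch: iterate a chaotic map once per candidate edge and keep
# the edge only when the trajectory lands in a chosen cell of a partition of
# [0, 1]. A handful of parameters then specifies a sparse, reproducible
# connectivity pattern.

def chaotic_edge_mask(n_units, r=3.97, x0=0.123, keep_below=0.35):
    x = x0
    mask = np.zeros((n_units, n_units), dtype=bool)
    for i in range(n_units):
        for j in range(n_units):
            x = r * x * (1.0 - x)          # logistic map drives the partition
            mask[i, j] = x < keep_below
    return mask

mask = chaotic_edge_mask(16)
print("edge density:", mask.mean())        # sparse, deterministic, compact to specify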