From NIKE%IMICLVX.BITNET at VMA.CC.CMU.EDU Fri Sep 1 15:24:00 1989
From: NIKE%IMICLVX.BITNET at VMA.CC.CMU.EDU (NIKE%IMICLVX.BITNET@VMA.CC.CMU.EDU)
Date: Fri, 1 Sep 89 15:24 N
Subject: report received
Message-ID:

Dear Dr. Harnad, I received the copy of the report "The Symbol Grounding Problem" that you sent me by email. I particularly appreciated the troff/nroff format, as it is the only way we can print formatted text received by email (that is, pure ASCII files). I think anybody who wants to distribute his/her work electronically should at least provide a troff/nroff source, possibly along with more sophisticated formats. UNIX appears to be popular enough with computer science researchers that almost everybody can get a formatted nroff print, while this is not necessarily true with other systems (e.g. TeX, PostScript laser printers). Thanks again.

Nicola Musatti

-----
From niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK Mon Sep 4 09:02:56 1989
From: niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK (Mahesan Niranjan)
Date: Mon, 4 Sep 89 09:02:56 BST
Subject: Yet Another Neural Conference (YANC)
Message-ID: <9890.8909040802@dsl.eng.cam.ac.uk>

Here is the programme of the IEE Conference on ANNs.

niranjan

PS: IEE means Institution of Electrical Engineers (in the UK)
PPS: ANN means Artificial Neural Networks

=============================================================================
IEE First International Conference on Artificial Neural Networks
at IEE Savoy Place, 16-18 October 1989

Registration: Conference Services, IEE Savoy Place, London WC2R 0BL
tel: 01-240-1871  fax: 01-240-7735
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
PROGRAMME:

MONDAY 16 October
Registration 8.30
Formal Opening 9.30
Keynote address: 'On the significance of internal representations in neural networks', Kohonen.
Session 1 - Self Organising and Feedback Networks
'Hierarchical self-organisation: a review', Luttrell, RSRE
'A comparative study of the Kohonen and Multiedit neural net learning algorithms', Kittler & Lucas, Surrey U
'Self-organisation based on the second maximum entropy principle', Grabec, E K U, Yugoslavia
'A new learning rule for feedback neural networks', Tarassenko, Seifert, Tombs & Reynolds, Oxford U & Murray, Edinburgh U
'Linear interpolation with binary neurons', Jonker, Coolen & van der Gon, Utrecht U
CLOSE - LUNCH 12.30

Session 2 - Implementation I 14.00
'Silicon implementation of neural networks', Murray, Edinburgh U
'Digital optical technology for the neural plane', Collins & Crossland, STC & Vass, Edinburgh U
'Implementation of plasticity in MOS synapses', Card & Moore, Oxford U
'Integrated circuit emulation of ART1 networks', Rao, Walker, Clark & Akers, Arizona SU
'A limited connectivity switched capacitor analogue neural processing circuit with digital storage of non-binary input weights', Bounds, RSRE
TEA - Poster session 1 15.40
'A non-competitive model for unsupervised learning', Hrycej, PCS, W. Germany
'Evolution equations for neural networks with arbitrary spatial structure', Coolen, van der Gon & Ruijgrok, Utrecht U
'Hardware realisable models of neural processing', Taylor, Clarkson, KCL & Gorse, UCL, London
'On the training and the convergence of brain-state-in-a-box neural networks', Vandenberghe & Vandewalle, Katholieke U Leuven
'Learning in a single pass: a neural model for instantaneous principal component analysis and linear regression', Rosenblatt, Concept Technols, Lelu & Georgei, INIST/CNRS France
'Dynamic scheduling for feed-forward neural nets using transputers', Oglesby & Mason, UC Swansea
'Analogue-to-digital conversion of self organising networks - the JAM technique', Allinson, Johnson & Brown, York U
'Temporal effects in a simple neural network derived from an optical implementation', Wright & White, BAe
'Neural networks and systolic arrays', Broomhead, Jones, McWhirter & Shepherd, RSRE
'Infrared search and track signal processing: a potential application of artificial neural computing', Chenoweth, Louisville U
'Optimal visual tracking with artificial neural networks', Dobnikar, Likar & Podberegar, Ljubljana U
'Extension of the Hopfield neural network to a multilayer architecture for optical implementation', Selviah & Midwinter, UCL, London

Session 3 - Vision 16.20
'A neural network approach to the computation of vision algorithms', Psarrou & Buxton, QMC London
'A neural network implementation for real-time scene analysis', Allen, Adams & Booth, Newcastle-upon-Tyne U
'Optical flow estimation using an artificial neural network', Zhongquan, Purdue U
'Neural networks and Hough transform for pattern recognition', Costa & Sandler, KCL, London
CLOSE OF SESSION 17.40
Cocktail Party in IEE Refectory 18.00-19.15

TUESDAY 17 OCTOBER
Session 4 - Speech 09.00
'Experimental comparison of a range of neural network and conventional techniques for a word recognition task', Bedworth, Bridle, Flyn & Ponting, RSRE, Fallside & Prager, Cambridge U, Fogelman & Bottou, EHEI, Paris
'Two level recognition of isolated words using neural nets', Howard & Huckvale, UCL, London
'Predictive analysis of speech using adaptive networks', Lowe, RSRE
'The application of artificial neural network techniques to low bit-rate speech coding', Kaouri & McCanny, Belfast U
'The modified Kanerva model: results for real time word recognition', Prager, Clarke & Fallside, Cambridge U
COFFEE - Poster session 2 10.40
'Identifying and discriminating temporal events with connectionist language users', Allen Kaufman & Bahmidpaty, Illinois U
'Auditory processing in a post-cochlear stochastic neural network', Schwartz, Demongeot, Herve, Wu & Escudier, ICP, France
'Neural networks for speech pattern classification', Renals & Rohwer, Edinburgh U
'Weight limiting, weight quantisation and generalisation in multi-layer perceptrons', Woodland, BTRL
'Using a connectionist network to eliminate redundancy from a phonetic lattice in an analytical speech recognition system', Miclet & Caharel, CNET
'Speaker recognition with a neural classifier', Oglesby & Mason, UC Swansea
'Output functions for probabilistic logic nodes', Myers, ICST London
'Neural networks with restricted-range connections', Noest, Brain Research Inst, Netherlands
'A hybrid neural network for temporal pattern recognition', McCulloch & Bounds, RSRE
'A/D conversion and analog vector quantization using neural network models', Svensson & Chen, Linkoping U
'Stochastic searching networks', Bishop, Reading U

Session 5 - Architectures 11.00
'Canonical neural nets based on logic nodes', Aleksander, ICST London
'Designing neural networks', Cybenko, Illinois U
'A continuously adaptable artificial neural network', Sayers & Coghill, Auckland U
'An analysis of silicon models of visual processing', Taylor, KC London
CLOSE SESSION - LUNCH 12.30

Session 6 - Signal and Data Processing 14.00
'Nonlinear decision feedback equalizers using neural network structures', Siu, Cowan & Gibson, Edinburgh U
'Equalisation using neural networks', Jha, Durrani & Soraghan, Strathclyde U
'Artificial neural net algorithms in classifying electromyographic signals', Pattichis, Middleton & Schizaz, MDRT of Cyprus, Schofield & Fawcett, Newcastle Gen Hospital
'Recognition of radar signals by neural network', Beastall, RNEC UK
'Applications of neural networks to nondestructive testing', Udpa & Udpa, Colorado U
TEA - Poster session 3 15.40
'Bearing estimation using neural optimisation methods', Jha & Durrani, Strathclyde U
'An example of back propagation: diagnosis of dyspepsia', Ridella, Mella, Arrigo, Marconi, Scalia & Mansi, CNR Italy
'The application of pulse processing neural networks in communications and signal demodulation', Chesmore, Hull U
'Neural networks and GMDH regression: case studies and comparisons', Harrison, Mort, Hasnain & Linkens, Sheffield U
'The application of neural networks to tactical and sensor data fusion problems', Whittington & Spracklen, Aberdeen U
'A new learning paradigm for neural networks', Lucas & Damper, Southampton U
'Estimating hidden unit quantity of two-layer perceptrons performing binary mappings', Gutierrez, Grondin & Wang, Arizona SU
'Training networks with discontinuous activation functions', Findlay, Plessey Research
'Can a perceptron find Lyapunov functions?', Banks & Harrison, Sheffield U

Session 7 - Multilayer perceptrons I 16.20
'Single-layer look-up perceptrons (SLLUPS)', Tattersall & Foster, UEA
'Probabilistic learning on a network and a Markov random field', Wright, BAe
'Building symmetries into feedforward networks', Shawe-Taylor, London U
'Stochastic computing and reinforcement neural networks', Mars, Durham U & Leaver, BAe
CLOSE OF SESSION 17.40

WEDNESDAY 18 October
Session 8 - Image Processing 9.00
'Optical character recognition using artificial networks: past and future', Alpaydin, SFIT Switzerland
'An associative neural architecture for invariant pattern classification', Austin, York U
'Self-organising Hopfield networks', Naillon & Theeten, LEPA France
'Comparison of neural networks and conventional techniques for feature location in facial images', Hutchinson & Welsh, BTRL
'Expectation-based feedback in a neural network which recognises hand-drawn characters and symbols', Banks & Elliman, Nottingham U
COFFEE - Poster session 4 10.40
'Matching of attributed and non-attributed graphs by use of Boltzmann Machine algorithm', Kuner, Siemens W Germany
'Image processing with optimum neural networks', Bichsel, PSI Switzerland
'A comparative study of neural network structures for practical application in a pattern recognition task', Bisset, Filho & Fairhurst, Kent U
'On the use of pre-defined regions to minimise the training and complexity of multi-layer neural networks', Houselander & Taylor, UCL London
'A novel training algorithm', Wang & Grondin, Arizona SU
'Diffusion learning for the multilayer perceptron', Hoptroff & Hall, KC London
'Automatic learning of efficient behaviour', Watkins, Philips UK
'Learning with interference cells', Sequeira & Tome, IST - AV Portugal
'Test of neural network as a substitute for a traditional small-scale expert system', Filippi & Walker, Rome U
'Image compression with competing multilayer perceptrons', Sirat & Viala, LEP, France

Session 9 - Multilayer perceptrons II 11.00
'The radial basis function network: adapting the transfer functions to suit the experiment and the problem of generalisation', Lowe, RSRE
'On the analysis of multi-dimensional linear predictive/autoregressive data by a class of single layer connectionist models', Fallside, Cambridge U
'Unlimited input accuracy in layered networks', Sirat & Zorer, LEP, France
'The properties and implementation of the non-linear vector space connectionist model', Lynch & Rayner, Cambridge U
CLOSE OF SESSION - LUNCH 12.30

Session 10 - AI and Neural Networks 14.00
'Overcoming independence assumption in Bayesian neural networks', Kononenko, FEE, Yugoslavia
'A neural controller', Saerens & Soquet, IRIDIA, Belgium
'Linked assembly of neural networks to solve the interconnection problem', Green & Noakes, Essex U
'Building expert systems on neural architecture', Fu, Wisconsin U
'COMPO - conceptual clustering with connectionist competitive learning', de Garis, Bruxelles LU
CLOSE OF SESSION - TEA 15.40

Session 11 - Implementation II 16.10
'Ferroelectric connections for IC neural networks', Clark, Dey & Grondin, Arizona SU
'An implementation of fully analogue sum-of-product neural models', Daniel, Waller & Bisset, Kent U
'The implementation of hardware neural net systems', Myers, BTRL
'A general purpose digital architecture for neural network simulation', Duranton & Mauduit, LEPA, France
CLOSING REMARKS by Cowan 17.30
+++++++++++++++++++++++++++++++++++++++++++++++++++++++
=============================================================================
O.K., thanks for the attention...

From niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK Mon Sep 4 12:37:30 1989
From: niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK (Mahesan Niranjan)
Date: Mon, 4 Sep 89 12:37:30 BST
Subject: Weight Space
Message-ID: <11273.8909041137@dsl.eng.cam.ac.uk>

> From: INS_ATGE%JHUVMS.BITNET%VMA.CC.CMU.EDU at murtoa.cs.mu.oz
> Date: Tue, 22 Aug 89 18:23 EST
> Subject: Neural Net Archive?
>
> Has anyone considered (or possibly created) an archive site for trained
> neural networks? Why spend thousands of epochs of learning to create your
> network weights, only to throw them away after your research is done?
> If anyone feels such an archive site may be of use, please send me email
> (as it would be helpful to me as I lobby for a site at Hopkins).
>
> -Thomas Edwards

I can imagine a day when weight values will be available for sale! Companies with number-crunching power might train networks and 'sell' the values (possibly at reduced prices for academic institutions!). It may also be possible to 'buy' quantised weights at cheap rates!!

niranjan

From gc at s16.csrd.uiuc.edu Mon Sep 4 22:06:22 1989
From: gc at s16.csrd.uiuc.edu (George Cybenko)
Date: Mon, 4 Sep 89 21:06:22 CDT
Subject: Connections Per Second
Message-ID: <8909050206.AA14400@s16.csrd.uiuc.edu>

The recent discussion about how to measure machine performance in terms of connections per second, etc. is reminiscent of the decade-old debate about how to report machine performance in general. Here are three important turning points in the history of measuring and reporting machine performance for scientific computing.

Pre-1980 (vendor-defined and vendor-supplied floating point execution rates): In order to make megaflop numbers as large as possible, people used register arithmetic operations, so those rates completely ignored addressing, incrementing, cache effects, etc. Consequently, the performance of a machine on a typical scientific program was almost uncorrelated with those MFLOP rates.

Mid-1980's (kernels and loops): Livermore loops, Whetstones, and Linpack kernels were introduced because of the problems noted above. However, these loops and kernels are somewhat homogeneous, lack I/O, stress the memory bandwidth in predictable ways, and can easily be detected and optimized by a compiler. Consequently, there were accusations that compilers and machines were being constructed to deliver high performance on those benchmark loops and kernels.

Late-1980's and beyond (applications-based benchmarking - Perfect Club, SPEC): Replace kernels and loops with scientific applications codes and data sets that are representative of a high-end machine workload. Replace MFLOPS with absolute execution times. The Perfect (Performance Evaluation through Cost-Effective Transformation) Club was a cooperative effort formed in 1987 that collected 13 scientific codes and data sets to form a benchmark suite. When porting codes to different machines, changes in the codes were allowed. The initial effort included researchers from Cray, IBM, Caltech, Princeton, HARC, University of Illinois, and the Institute for Supercomputing Research (Tokyo); the codes include circuit simulation, fluids, physics and signal processing applications. The Systems Performance Evaluation Cooperative (SPEC) is an industry-motivated effort started in spring 1989 to develop applications-based benchmarking for a wider range of machines, including workstations. DEC, IBM, and MIPS belong to SPEC, for example.

In light of this history, it seems that using MFLOPS or CPS as measures of machine performance on neural computing applications ignores a decade of progress and plays right into vendor hands. Instead, let me suggest that someone submit a state-of-the-art code solving a representative problem in one of the major connectionist models, together with a data set and solution, to the Perfect Club or SPEC suite. This way, people interested in connectionist computing can simultaneously contribute to a broader effort in benchmarking and avoid recapitulating history.
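To pin down what a raw CPS number actually measures (and how easily it shifts with problem size), here is a minimal timing sketch. The layer sizes, batch size, and tanh squashing are arbitrary choices for illustration, not any vendor's benchmark or a method proposed above:

    import time
    import numpy as np

    def measure_cps(n_in=256, n_out=256, batch=100, trials=50):
        """Time a dense forward pass and report connections per second.

        Every output unit connects to every input unit, so each pass over
        the batch evaluates n_in * n_out * batch connections.
        """
        rng = np.random.default_rng(0)        # illustrative random weights
        w = rng.standard_normal((n_in, n_out))
        x = rng.standard_normal((batch, n_in))
        start = time.perf_counter()
        for _ in range(trials):
            np.tanh(x @ w)                    # multiply-accumulate plus squashing
        elapsed = time.perf_counter() - start
        return n_in * n_out * batch * trials / elapsed

    print("%.3e connections/second" % measure_cps())

The figure this prints moves substantially as the sizes change (cache effects, vectorization), which is exactly the objection to quoting peak rates instead of absolute execution times on full applications.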
Information about the Perfect Club effort can be obtained by writing to Lynn Rubarts, Center for Supercomputing Research and Development, University of Illinois, Urbana, IL 61801, USA, (217) 333-6223, or sending an email request for the Perfect Club reports to rubarts at uicsrd.csrd.uiuc.edu. Anyone with a code that might be suitable for the Perfect benchmark can contact me.

George Cybenko
Center for Supercomputing Research and Development
University of Illinois at Urbana
Urbana, IL 61801
(217) 244-4145
gc at uicsrd.csrd.uiuc.edu

From carol at ai.toronto.edu Tue Sep 5 15:53:17 1989
From: carol at ai.toronto.edu (Carol Plathan)
Date: Tue, 5 Sep 89 15:53:17 EDT
Subject: CRG-TR-89-4 available
Message-ID: <89Sep5.155337edt.10806@ephemeral.ai.toronto.edu>

The following technical report by Yann le Cun, CRG-TR-89-4/June 1989, is now available. Please send me your (physical) mailing address to receive this report:

GENERALIZATION AND NETWORK DESIGN STRATEGIES
Yann le Cun*
Department of Computer Science, University of Toronto
TECHNICAL REPORT CRG-TR-89-4 / June 1989

ABSTRACT: An interesting property of connectionist systems is their ability to learn from examples. Although most recent work in the field concentrates on reducing learning times, the most important feature of a learning machine is its generalization performance. It is usually accepted that good generalization performance on real-world problems cannot be achieved unless some a priori knowledge about the task is built into the system. Back-propagation networks provide a way of specifying such knowledge by imposing constraints both on the architecture of the network and on its weights. In general, such constraints can be considered as particular transformations of the parameter space. Building a constrained network for image recognition appears to be a feasible task. We describe a small handwritten digit recognition problem and show that, even though the problem is linearly separable, single layer networks exhibit poor generalization performance. Multilayer constrained networks perform very well on this task when organized in a hierarchical structure with shift invariant feature detectors. These results confirm the idea that minimizing the number of free parameters in the network enhances generalization. The paper also contains a short description of a second order version of back-propagation that uses a diagonal approximation to the Hessian matrix.

-------------
*Present address: Room 4G-332, AT&T Bell Laboratories, Crawfords Corner Rd, Holmdel, NJ 07733

Note: A shortened version of the Technical Report will appear in: R. Pfeifer, Z. Schreter, F. Fogelman, and L. Steels (editors), "Connectionism in Perspective", Zurich, Switzerland, 1989. Elsevier.
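The "shift invariant feature detectors" in the abstract above amount to weight sharing: one small set of kernel weights is replicated at every input position, so the number of free parameters is the kernel size rather than one weight per (position, input) pair. The sketch below shows the idea in one dimension; the sizes and values are invented for illustration and are not taken from the report:

    import numpy as np

    def shared_feature_map(x, kernel, bias):
        """Apply one shift-invariant feature detector across a 1D input.

        The same `kernel` weights are reused at every position, so this
        layer has len(kernel) + 1 free parameters regardless of input size.
        """
        k = len(kernel)
        out = np.empty(len(x) - k + 1)
        for i in range(len(out)):
            out[i] = np.tanh(x[i:i + k] @ kernel + bias)
        return out

    x = np.array([0.0, 1.0, 1.0, 0.0, 1.0, 0.0, 0.0, 1.0])
    kernel = np.array([0.5, -1.0, 0.5])   # a tiny, made-up edge-like detector
    print(shared_feature_map(x, kernel, bias=0.1))

A fully connected detector over the same 8-element input would need 8 weights per output position; sharing cuts that to 4 parameters total, which is the free-parameter minimization the abstract credits for good generalization.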
From jose at tractatus.bellcore.com Thu Sep 7 07:36:41 1989
From: jose at tractatus.bellcore.com (Stephen J Hanson)
Date: Thu, 7 Sep 89 07:36:41 -0400
Subject: NIPS Registration
Message-ID: <8909071136.AA12106@tractatus.bellcore.com>

**** NIPS89 Update ****

We've just finished putting the program for the conference together and have a preliminary program for the workshops. A mailing to authors will go out this week, with registration information. Those who requested this information but are not authors will hear from us starting in another week. If you received a postcard from us acknowledging receipt of your paper, you are on our authors' mailing list. If you haven't requested the registration packet, you can do so by writing to Kathie Hibbard, NIPS89 Local Committee, University of Colorado, Eng'g Center Campus Box 425, Boulder, CO 80309-0425.

From LIN2 at ibm.com Thu Sep 7 17:06:12 1989
From: LIN2 at ibm.com (Ralph Linsker)
Date: 7 Sep 89 17:06:12 EDT
Subject: Preprint available
Message-ID: <090789.170612.lin2@ibm.com>

********* FOR CONNECTIONISTS ONLY - PLEASE DO NOT FORWARD ***********
**************** TO OTHER BBOARDS/ELECTRONIC MEDIA *******************

The following preprint is available. If you would like a copy, please send a note to lin2 @ ibm.com containing *only* the information on the following four lines (to allow more efficient handling of your request):

*NC*
Name
Address (each line not beyond column 33)

How to Generate Ordered Maps by Maximizing the Mutual Information Between Input and Output Signals*
Ralph Linsker
IBM Research Division, T. J. Watson Research Center, P. O. Box 218, Yorktown Heights, NY 10598
*To appear in: Neural Computation 1(3):396-405 (1989).

A learning rule that performs gradient ascent in the average mutual information between input and output signals is derived for a system having feedforward and lateral interactions. Several processes emerge as components of this learning rule: Hebb-like modification, and cooperation and competition among processing nodes. Topographic map formation is demonstrated using the learning rule. An analytic expression relating the average mutual information to the response properties of nodes and their geometric arrangement is derived in certain cases. This yields a relation between the local map magnification factor and the probability distribution in the input space. The results provide new links between unsupervised learning and information-theoretic optimization in a system whose properties are biologically motivated.

From barto%anger at cs.umass.edu Tue Sep 12 17:52:57 1989
From: barto%anger at cs.umass.edu (barto%anger@cs.umass.edu)
Date: Tue, 12 Sep 89 17:52:57 EDT
Subject: Technical Reports Available
Message-ID: <8909122152.AA00331@anger.ANW.edu>

**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************

Two new technical reports are available:

CONNECTIONIST LEARNING FOR CONTROL: AN OVERVIEW
Andrew G. Barto
Department of Computer and Information Science
University of Massachusetts, Amherst MA 01003
COINS Technical Report 89-89, September 1989

Abstract---This report is an introductory overview of learning by connectionist networks, also called artificial neural networks, with a focus on the ideas and methods most relevant to the control of dynamical systems. It is intended both to provide an overview of connectionist ideas for control theorists and to provide connectionist researchers with an introduction to certain issues in control. The perspective taken emphasizes the continuity of the current connectionist research with more traditional research in control, signal processing, and pattern classification. Control theory is a well-developed field with a large literature, and many of the learning methods being described by connectionists are closely related to methods that already have been intensively studied by adaptive control theorists. On the other hand, the directions that connectionists are taking these methods have characteristics that are absent in the traditional engineering approaches.
This report describes these characteristics and discusses their positive and negative aspects. It is argued that connectionist approaches to control are special cases of memory-intensive approaches, provided a sufficiently generalized view of memory is adopted. Because adaptive connectionist networks can cover the range between structureless lookup tables and highly constrained model-based parameter estimation, they seem well-suited for the acquisition and storage of control information. Adaptive networks can strike a balance between the tradeoffs associated with the extremes of the memory/model continuum.

LEARNING AND SEQUENTIAL DECISION MAKING
A. G. Barto
Department of Computer and Information Science
University of Massachusetts, Amherst MA 01003
R. S. Sutton
GTE Laboratories Incorporated, Waltham, MA 02254
C. J. C. H. Watkins
Philips Research Laboratories, Cross Oak Lane, Redhill, Surrey RH1 5HA, England
COINS Technical Report 89-95, September 1989

Abstract---In this report we show how the class of adaptive prediction methods that Sutton called ``temporal difference,'' or TD, methods are related to the theory of sequential decision making. TD methods have been used as ``adaptive critics'' in connectionist learning systems, and have been proposed as models of animal learning in classical conditioning experiments. Here we relate TD methods to decision tasks formulated in terms of a stochastic dynamical system whose behavior unfolds over time under the influence of a decision maker's actions. Strategies are sought for selecting actions so as to maximize a measure of long-term payoff gain. Mathematically, tasks such as this can be formulated as Markovian decision problems, and numerous methods have been proposed for learning how to solve such problems. We show how a TD method can be understood as a novel synthesis of concepts from the theory of stochastic dynamic programming, which comprises the standard method for solving such tasks when a model of the dynamical system is available, and the theory of parameter estimation, which provides the appropriate context for studying learning rules in the form of equations for updating associative strengths in behavioral models, or connection weights in connectionist networks. Because this report is oriented primarily toward the non-engineer interested in animal learning, it presents tutorials on stochastic sequential decision tasks, stochastic dynamic programming, and parameter estimation.

You can obtain these reports in several ways. I have followed Jordan Pollack's very good suggestion and placed postscript files in the account kindly provided at Ohio State for this purpose. Here is the version of Jordan's instructions appropriate for getting them:

    ftp cheops.cis.ohio-state.edu     (or, ftp 128.146.8.62)
    Name: anonymous
    Password: neuron
    ftp> cd pub/neuroprose
    ftp> get
    (remote-file) barto.control.ps
    (local-file) foo.ps
    587591 bytes sent in ?? seconds (?? Kbytes/s)
    ftp> get
    (remote-file) barto.sequential_decisions.ps
    (local-file) bar.ps
    904574 bytes sent in ?? seconds (?? Kbytes/s)
    ftp> quit
    unix> lpr *.ps

(Note: these are rather large files: 38 and 51 pages respectively when printed.) Alternatively, you can send requests for printed copies via e-mail to Ms. Connie Smith using the address Smith at cs.umass.EDU, or write to Ms. Connie Smith, Department of Computer and Information Science, University of Massachusetts, Amherst, MA 01003 (but I would prefer that you use the ftp option if possible!).

Andy Barto

**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************
**********DO NOT FORWARD TO OTHER BBOARDS**************
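For readers coming from the animal-learning side, the temporal-difference update at the heart of the second report can be written in a few lines. The sketch below is a generic tabular TD(0) predictor run on a toy two-state chain; it illustrates the class of method the report analyzes, not the report's own algorithms, and the states, rewards, and step sizes are invented:

    def td0(episodes, n_states, alpha=0.1, gamma=0.9):
        """Tabular TD(0): V(s) += alpha * (r + gamma*V(s') - V(s)).

        `episodes` is a list of trajectories, each a list of
        (state, reward, next_state) transitions; a terminal transition
        has next_state = None.
        """
        v = [0.0] * n_states
        for episode in episodes:
            for s, r, s_next in episode:
                target = r + (gamma * v[s_next] if s_next is not None else 0.0)
                v[s] += alpha * (target - v[s])   # the temporal-difference error
        return v

    # Toy chain: state 0 -> state 1 -> terminal, with reward 1 at the end.
    episodes = [[(0, 0.0, 1), (1, 1.0, None)] for _ in range(200)]
    print(td0(episodes, n_states=2))

The predictions converge toward V(1) = 1 and V(0) = gamma * V(1), i.e. the discounted long-term payoff the abstract describes, without ever consulting a model of the transition dynamics.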
From noel at CS.EXETER.AC.UK Wed Sep 13 12:05:38 1989
From: noel at CS.EXETER.AC.UK (Noel Sharkey)
Date: Wed, 13 Sep 89 12:05:38 BST
Subject: address
Message-ID: <5738.8909131105@entropy.cs.exeter.ac.uk>

Does anyone know Ronan Reilly's new email address at the Beckman Institute, as I need to contact him urgently?

noel sharkey
Centre for Connection Science, Dept. Computer Science, University of Exeter, Exeter EX4 4PT, Devon, U.K.
JANET: noel at uk.ac.exeter.cs  UUCP: !ukc!expya!noel  BITNET: noel at cs.exeter.ac.uk@UKACRL

From noel at CS.EXETER.AC.UK Thu Sep 14 13:56:06 1989
From: noel at CS.EXETER.AC.UK (Noel Sharkey)
Date: Thu, 14 Sep 89 13:56:06 BST
Subject: Natural Language
Message-ID: <6001.8909141256@entropy.cs.exeter.ac.uk>

CALL FOR PAPERS

CONNECTION SCIENCE (Journal of Neural Computing, Artificial Intelligence and Cognitive Research)

Special Issue: CONNECTIONIST RESEARCH ON NATURAL LANGUAGE
Editor: Noel E. Sharkey, University of Exeter

Special Editorial Review Panel:
Robert Allen, Bell Communications Research
Garrison W. Cottrell, University of California, San Diego
Michael G. Dyer, University of California, Los Angeles
Jeffrey L. Elman, University of California, San Diego
George Lakoff, University of California, Berkeley
Wendy W. Lehnert, University of Massachusetts at Amherst
Jordan Pollack, Ohio State University
Ronan Reilly, Beckman Institute, Illinois
Bart Selman, University of Toronto
Paul Smolensky, University of Colorado, Boulder

This special issue will accept submissions of full-length connectionist papers and brief reports from any area of natural language research, including:
- Connectionist applications to AI problems in natural language (e.g. paraphrase, summarisation, question answering)
- New formalisms or algorithms for natural language processing
- Simulations of psychological data
- Memory modules or inference mechanisms to support natural language processing
- Representational methods for natural language
- Techniques for ambiguity resolution
- Parsing
- Speech recognition, production, and processing
- Connectionist approaches to linguistics (phonology, morphology etc.)

Submissions of short reports or recent updates will also be accepted for the Brief Reports section in the journal. No paper should be currently submitted elsewhere.

DEADLINES
Deadline for submissions: December 15th 1989
Decision/reviews by: February 1990

Papers may be accepted to appear in regular issues if there is insufficient space in the special issue. For further information about the journal please contact Lyn Shackleton (Assistant Editor), Centre for Connection Science, Dept. Computer Science, University of Exeter, Exeter EX4 4PT, Devon, U.K. JANET: lyn at uk.ac.exeter.cs  UUCP: !ukc!expya!lyn  BITNET: lyn at cs.exeter.ac.uk@UKACRL

From mcvax!alf.inesc.pt!lba at uunet.UU.NET Thu Sep 14 12:51:43 1989
From: mcvax!alf.inesc.pt!lba at uunet.UU.NET (Luis Borges de Almeida)
Date: Thu, 14 Sep 89 16:51:43 GMT
Subject: EURASIP workshop on neural networks - call for contributions
Message-ID: <8909141651.AA07728@alf.inesc.pt>

EURASIP WORKSHOP ON NEURAL NETWORKS
Sesimbra, Portugal, February 15-17, 1990
ANNOUNCEMENT AND 2nd CALL FOR CONTRIBUTIONS

The workshop will be held at the Hotel do Mar in Sesimbra, Portugal.
It will take place in 1990, from February 15 morning to 17 noon, and will be sponsored by EURASIP, the European Association for Signal Processing. It will be open to participants from all countries.

Short contributions from all fields related to the neural network area are welcome (see submission procedures below). A (non-exclusive) list of topics is given below. These contributions will be presented at the workshop in poster format, and are intended for presentation of ongoing work, projects (e.g. ESPRIT, BRAIN, DARPA, ...), or for proposing interesting views (even controversial or provocative). Short contributions will not correspond to a paper in the proceedings, but publication in a special issue of one of EURASIP's journals is being considered.

Care is being taken to ensure that the workshop will have a high level of quality. Full contributions have already been selected based on an evaluation by an international technical committee, and the proceedings volume containing these contributions will be published and handed to participants at the workshop. The number of participants will be limited to 50. A small number of non-contributing participants may be accepted, depending on the total number of contributions. The official language of the workshop will be English. Dr. George Cybenko, of the University of Illinois, will be an invited speaker. Contacts are under way for the invitation of another well-known researcher.

TOPICS:
- signal processing (speech, image, ...)
- pattern recognition
- algorithms (training procedures, new structures, speedups, ...)
- generalization
- implementation
- specific applications where NNs have been proved better than other approaches
- industrial projects and realizations

SUBMISSION PROCEDURES
Submissions, both for long and for short contributions, will consist of (strictly) 2-page summaries, plus a cover page indicating title, author's name, affiliation, phone no., and e-mail address if possible. Three copies should be sent directly to the Technical Chairman, at the address given below. The calendar for short contributions is as follows:
Deadline for submission: Oct 1, 1989
Notification of acceptance: Nov 15, 1989

THE LOCATION
Sesimbra is a fishermen's village, located in a nice region about 30 km south of Lisbon. Special transportation from/to Lisbon will be arranged. The workshop will end on a Saturday at lunch time; therefore, the participants will have the option of either flying back home in the afternoon, or staying for sightseeing for the remainder of the weekend in Sesimbra and/or Lisbon. An optional program for accompanying persons is being organized. For further information, send the coupon below to the general chairman, or contact him directly.

ORGANIZING COMMITTEE:
GENERAL CHAIRMAN: Luis B. Almeida, INESC, Apartado 10105, P-1017 LISBOA CODEX, PORTUGAL. Phone: +351-1-544607. Fax: +351-1-525843. E-mail: {any backbone, uunet}!mcvax!inesc!lba
TECHNICAL CHAIRMAN: Christian Wellekens, Philips Research Laboratory, Av. Van Becelaere 2, Box 8, B-1170 BRUSSELS, BELGIUM. Phone: +32-2-6742275
TECHNICAL COMMITTEE: John Bridle, Herve Bourlard, Frank Fallside, Francoise Fogelman-Soulie, Jeanny Herault, Larry Jackel, Renato de Mori, H. Muehlenbein
REGISTRATION, FINANCE, LOCAL ARRANGEMENTS: Joao Bilhim, INESC, Apartado 10105, P-1017 LISBOA CODEX, PORTUGAL. Phone: +351-1-545150. Fax: +351-1-525843.
---------------------------------------------------------------------
Please keep me informed about the EURASIP Workshop on Neural Networks
Name:
University/Company:
Address:
Phone:
E-mail:
[ ] I plan to attend the workshop
(send to Luis B. Almeida, INESC, Apartado 10105, P-1017 LISBOA CODEX, PORTUGAL)
From ST401843%BROWNVM.BITNET at vma.CC.CMU.EDU Sun Sep 17 14:30:56 1989
From: ST401843%BROWNVM.BITNET at vma.CC.CMU.EDU (thanasis kehagias)
Date: Sun, 17 Sep 89 14:30:56 EDT
Subject: old paper is now available via ftp ...
Message-ID:

The following OLD paper is now available by anonymous FTP. To get a copy, please "ftp" to cheops.cis.ohio-state.edu (128.146.8.62), "cd" to the pub/neuroprose directory, and "get" the file kehagias.hmm0289.tex. Please use your own version of LATEX to print it out.

OPTIMAL CONTROL FOR TRAINING: THE MISSING LINK BETWEEN HIDDEN MARKOV MODELS AND CONNECTIONIST NETWORKS

ABSTRACT: For every Hidden Markov Model there is a set of "forward" probabilities that need to be computed for both the recognition and the training problem. These probabilities are computed recursively, and hence the computation can be performed by a multistage, feedforward network that we will call a Hidden Markov Model Net (HMMN). This network has exactly the same architecture as the standard Connectionist Network (CN). Furthermore, training a Hidden Markov Model is equivalent to optimizing a function of the HMMN; training a CN is equivalent to optimizing a function of the CN. Due to the multistage architecture, both problems can be seen as Optimal Control problems. By applying standard Optimal Control techniques we discover in both problems that certain backpropagated quantities (backward probabilities for the HMMN, backward propagated errors for the CN) are of crucial importance to the solution. So HMM's and CN's are similar in architecture and training.

From pollack at cis.ohio-state.edu Mon Sep 18 10:15:10 1989
From: pollack at cis.ohio-state.edu (Jordan B Pollack)
Date: Mon, 18 Sep 89 10:15:10 EDT
Subject: Neuroprose Compression
Message-ID: <8909181415.AA28731@toto.cis.ohio-state.edu>

*************DO NOT FORWARD TO OTHER BBOARDS***************

Dr. Barto's postscript files are quite large, and apparently have been difficult to transmit to certain machines. There are standard Unix utilities, called "compress" and "uncompress", which replace "file" with "file.Z" and "file.Z" with "file", respectively. Rather than replacing them, however, I have added compressed versions of Dr. Barto's files to the neuroprose directory. In general, compressing postscript seems like a good idea, since 70% compression yields a lot of file space and speed improvement in transmission. Note that one needs to use BINARY mode in ftp to transfer these files, whereas TEXT mode worked for both tex and ps.

Jordan

*************DO NOT FORWARD TO OTHER BBOARDS***************
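A scripted retrieval of one of the compressed files might look like the sketch below, using Python's standard ftplib. The .Z filename is an assumption that the compressed copies follow the usual compress naming convention, and the "neuron" password is quoted from the earlier neuroprose posting; retrbinary performs the transfer in binary mode, matching the advice above:

    from ftplib import FTP

    def fetch_compressed(host, directory, filename):
        """Retrieve a compressed (.Z) file in binary mode.

        A .Z file is an arbitrary byte stream, so an ASCII-mode transfer
        would corrupt it; retrbinary issues RETR over a binary channel.
        """
        ftp = FTP(host)
        ftp.login(user="anonymous", passwd="neuron")  # password per earlier posting
        ftp.cwd(directory)
        with open(filename, "wb") as f:
            ftp.retrbinary("RETR " + filename, f.write)
        ftp.quit()

    # Assumed .Z name for the compressed copy of barto.control.ps:
    fetch_compressed("cheops.cis.ohio-state.edu", "pub/neuroprose",
                     "barto.control.ps.Z")

Afterwards, running uncompress on the retrieved file restores the original postscript for printing.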
From harnad at phoenix.Princeton.EDU Tue Sep 19 01:38:26 1989
From: harnad at phoenix.Princeton.EDU (S. R. Harnad)
Date: Tue, 19 Sep 89 01:38:26 -0400
Subject: Visual Search & Complexity: BBS Call for Commentators
Message-ID: <8909190538.AA23981@phoenix.Princeton.EDU>

Below is the abstract of a forthcoming target article to appear in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to harnad at princeton.edu or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771]

____________________________________________________________________

Analyzing Vision at the Complexity Level
John K. Tsotsos
Department of Computer Science, University of Toronto, and The Canadian Institute for Advanced Research

The general problem of visual search can be shown to be computationally intractable in a formal complexity-theoretic sense, yet visual search is widely involved in everyday perception and biological systems manage to perform it remarkably well. Complexity level analysis may resolve this contradiction. Visual search can be reshaped into tractability through approximations and by optimizing the resources devoted to visual processing. Architectural constraints can be derived using the minimum cost principle to rule out a large class of potential solutions. The evidence speaks strongly against purely bottom-up approaches to vision. This analysis of visual search performance in terms of task-directed influences on visual information processing and complexity satisfaction allows a large body of neurophysiological and psychological evidence to be tied together.

From sankar at caip.rutgers.edu Wed Sep 20 11:12:22 1989
From: sankar at caip.rutgers.edu (ananth sankar)
Date: Wed, 20 Sep 89 11:12:22 EDT
Subject: Hardware Implementations of Neural Nets
Message-ID: <8909201512.AA06629@caip.rutgers.edu>

I would like to know of any references to hardware implementations of neural nets. I would appreciate it if someone could point me to some papers on this subject. Thanks in anticipation.

Ananth Sankar
Dept. of Electrical Engineering, Rutgers University, New Brunswick, NJ

From ala at nada.kth.se Thu Sep 21 03:24:10 1989
From: ala at nada.kth.se (Anders Lansner)
Date: Thu, 21 Sep 89 09:24:10 +0200
Subject: ABSTRACT - A Bayesian Neural Network
Message-ID: <8909210724.AA05832@nada.kth.se>

The following paper will appear in the first (March '89) issue of the International Journal of Neural Systems (World Scientific Publishing):

A One-Layer Feedback Artificial Neural Network with a Bayesian Learning Rule
by Anders Lansner and Örjan Ekeberg
Dept. of Numerical Analysis and Computing Science, Royal Institute of Technology, Stockholm, Sweden

A probabilistic artificial neural network is presented. It is of a one-layer, feedback-coupled type with graded units. The learning rule is derived from Bayes rule. Learning is regarded as collecting statistics and recall as a statistical inference process. Units correspond to events, and connections come out as compatibility coefficients in a logarithmic combination rule. The input to a unit via connections from other active units affects the a posteriori belief in the event in question. The new model is compared to an earlier binary model with respect to storage capacity, noise tolerance etc. in a content addressable memory (CAM) task. The new model is a real time network, and some results on the reaction time for associative recall are given. The scaling of learning and relaxation operations is considered, together with issues related to representation of information in one-layered artificial neural networks. An extension with complex units is discussed.
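Taken at face value, "compatibility coefficients in a logarithmic combination rule" suggests weights of the form w_ij = log[P(i,j) / (P(i) P(j))] with unit biases log P(i), estimated by counting activations over the training set. The sketch below is one plausible reading of such a rule, written purely for illustration; it is not the authors' implementation, and the smoothing constant and toy patterns are invented:

    import numpy as np

    def bayesian_weights(patterns, eps=1e-6):
        """Estimate log-compatibility weights from binary training patterns.

        P(i) and P(i,j) are activation frequencies; the weight between
        units i and j is log(P(i,j) / (P(i) * P(j))), so units that
        co-occur more often than chance support each other, and units
        that co-occur less often than chance inhibit each other.
        """
        x = np.asarray(patterns, dtype=float)
        p = x.mean(axis=0) + eps              # unit marginals P(i)
        pij = (x.T @ x) / len(x) + eps        # pairwise co-occurrence P(i,j)
        w = np.log(pij / np.outer(p, p))      # compatibility coefficients
        np.fill_diagonal(w, 0.0)
        bias = np.log(p)                      # a priori belief, log P(i)
        return w, bias

    patterns = [[1, 1, 0, 0], [1, 1, 0, 1], [0, 0, 1, 1], [0, 0, 1, 0]]
    w, bias = bayesian_weights(patterns)
    print(np.round(w, 2))

Recall would then sum bias plus the incoming w_ij from active units to update each unit's a posteriori log-belief, in the spirit of the abstract's statistical-inference view of retrieval.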
Preprint requests to: Anders Lansner, NADA, KTH, S-100 44 Stockholm, SWEDEN

Various earlier versions of this model are also described in:
Lansner A. and Ekeberg Ö. (1985): Reliability and Speed of Recall in an Associative Network. IEEE Trans. Pattern Analysis and Machine Intelligence 7(4), 490-498.
Lansner A. and Ekeberg Ö. (1987): An Associative Network Solving the 4-bit ADDER Problem. Proc. ICNN, II-549, San Diego, June 21-24, 1987.
Ekeberg Ö. and Lansner A. (1988): Automatic Generation of Internal Representation in a Probabilistic Artificial Neural Network. Proc. nEuro'88, Neural Networks from Models to Applications, Personnaz L. and Dreyfus G. (eds.), I.D.S.E.T., Paris, 1989, 178-186.

From yann at neural.att.com Thu Sep 21 14:06:00 1989
From: yann at neural.att.com (yann@neural.att.com)
Date: Thu, 21 Sep 89 14:06:00 -0400
Subject: ABSTRACT - A Bayesian Neural Network
In-Reply-To: Your message of Thu, 21 Sep 89 09:24:10 +0200.
Message-ID: <8909211806.AA03687@lesun.>

> The following paper will appear in the first (March '89) issue of the
> International Journal of Neural Systems (World Scientific Publishing):
>
> A One-Layer Feedback Artificial Neural Network
> with a Bayesian Learning Rule
>
> by Anders Lansner and Örjan Ekeberg
> Dept. of Numerical Analysis and Computing Science
> Royal Institute of Technology, Stockholm, Sweden

That reminds me of the following paper: Murakami, K. and Aibara, T.: "Construction of a distributed associative memory on the basis of the Bayes discriminant rule". IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-3, No 2, March 1981. This paper is about a variation of Nakano's "associatron" model (what we now call a Hopfield network) which uses Bayes rule to compute the weights. As far as I remember, their units are binary.

- Yann Le Cun
yann at neural.att.com

From MUMME%IDCSVAX.BITNET at CUNYVM.CUNY.EDU Fri Sep 22 06:47:00 1989
From: MUMME%IDCSVAX.BITNET at CUNYVM.CUNY.EDU (MUMME%IDCSVAX.BITNET@CUNYVM.CUNY.EDU)
Date: Fri, 22 Sep 89 03:47 PDT
Subject: Squaring the activation function
Message-ID:

****** CONNECTIONISTS ONLY DO NOT FORWARD *******

I have heard rumors that some people get faster learning in a back-prop network by passing the SQUARE of each unit's activation to the following layer. If anyone has tried this and/or can refer me to articles regarding this method and theories thereof, please let me know. I will circulate the results of this inquiry to interested contributors. Thanks!

Dean Mumme

***** CONNECTIONISTS ONLY DO NOT FORWARD ******

Dean C. Mumme
bitnet: mumme at idcsvax
Dept. of Computer Science, University of Idaho, Moscow, ID 83843
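For anyone wanting to try the rumor directly: the change amounts to emitting a^2 instead of a from each hidden unit in the forward pass, and carrying the extra factor 2a through the chain rule on the way back. The minimal forward-pass sketch below is a guess at what the rumored method involves, with made-up layer sizes; it is not taken from any published account:

    import numpy as np

    def forward(x, weights):
        """Forward pass where each hidden layer emits the SQUARE of its
        sigmoid activation, per the rumored back-prop speedup."""
        activations = [x]
        for w in weights[:-1]:
            a = 1.0 / (1.0 + np.exp(-(activations[-1] @ w)))  # sigmoid
            activations.append(a * a)        # pass the square onward
        y = activations[-1] @ weights[-1]    # linear output layer
        return y, activations

    # Backprop through a squared unit just gains a factor d(a^2)/da = 2a,
    # e.g. for sigmoids the hidden delta picks up 2*a * a*(1-a).

    rng = np.random.default_rng(1)           # illustrative random weights
    weights = [rng.standard_normal((4, 3)), rng.standard_normal((3, 2))]
    y, _ = forward(rng.standard_normal((5, 4)), weights)
    print(y.shape)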
From munnari!psych.psy.uq.oz.au!janet at uunet.UU.NET Wed Sep 27 06:09:51 1989
From: munnari!psych.psy.uq.oz.au!janet at uunet.UU.NET (Janet Wiles)
Date: Wed, 27 Sep 89 20:09:51 +1000
Subject: cognitive science lectureships
Message-ID: <8909271043.AA05249@uunet.uu.net>

THE UNIVERSITY OF QUEENSLAND
Equal Opportunity in Employment is University Policy

COGNITIVE SCIENCE LECTURERS (Tenurable or fixed term)
Psychology - Computer Science
Psychology - Linguistics

The University of Queensland is planning a major expansion in research and teaching in Cognitive Science and anticipates appointing two new lecturers, subject to availability of funding. The lecturers will be expected to conduct research in an area related to Cognitive Science and to teach undergraduate and postgraduate subjects. We prefer individuals with a computational approach to psychological or linguistic issues such as decision making, grammar, memory, problem solving, semantics, speech perception, vision and other cognitive science areas. One appointee will probably be a neural network or connectionist modeller; the other can have any computational approach. One lectureship will be a joint appointment between Psychology and Computer Science; the other will be either in Psychology or a joint appointment between Psychology and Linguistics (English).

The University of Queensland is one of the major research universities in Australia, and has strong research programs in many areas related to Cognitive Science. Substantial research facilities and support are available. Applicants should have a PhD in a relevant area and should have a record of, or show promise of, conducting high quality research.

Salary: $31,258-$40,621 per annum. The appointments will be either tenurable or fixed term. Closing date: October 31, 1989. Ref. No: 43689. Further information is available from Professor S. Schwartz on (07) 3772884 from within Australia or 61-7-3772884 from outside Australia, or from Dr Michael Humphreys via email at mh at psych.psy.uq.oz.au. Please forward an original plus 7 copies of application and resume to the Director, Personnel Services, The University of Queensland, St Lucia 4067, Qld, Australia.

From mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu Wed Sep 27 17:46:29 1989
From: mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu (mclennan%MACLENNAN.CS.UTK.EDU@cs.utk.edu)
Date: Wed, 27 Sep 89 17:46:29 EDT
Subject: Tech Reports available
Message-ID: <8909272146.AA03016@MACLENNAN.CS.UTK.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

The following two tech reports are available (abstracts are attached):

------------------------------------------------------------
Bruce J. MacLennan
Continuous Computation: Taking Massive Parallelism Seriously
Univ. of Tennessee Computer Science Dept. Tech. Report CS-89-83, June 1989, 13 pages.
This is based on a poster presentation at the Los Alamos National Laboratory Center for Nonlinear Studies 9th Annual International Conference, Emergent Computation, Los Alamos, NM, May 22-26, 1989. A latex version is available at cheops.cis.ohio-state.edu in pub/neuroprose: maclennan.contin_comp.tex

------------------------------------------------------------
Bruce J. MacLennan
Outline of a Theory of Massively Parallel Analog Computation
Univ. of Tennessee Computer Science Dept. Tech. Report CS-89-84, June 1989, 23 pages.
This is based on a poster presentation at IJCNN '89. As this report contains non-postscript art, a latex version is not available.
------------------------------------------------------------
Hardcopy versions of both reports are available from: Betsy Holleman, Librarian, Department of Computer Science, University of Tennessee, Knoxville, TN 37996-1301, or library at cs.utk.edu. Or write to me at the same address (maclennan at cs.utk.edu).

------------------------------------------------------------
ABSTRACTS

Since the contents of the two reports are very similar, one abstract covers both.

We present an overview of the theory of Field Computation: massively parallel analog computation in which the number of processing elements is so large that it may be considered a continuous quantity. We pursue this idea for a number of reasons. First, skillful behavior seems to require significant neural mass. Second, we are interested in computers, such as optical computers and molecular computers, for which the number of processing elements is effectively continuous. Third, continuous mathematics is generally easier than discrete mathematics. And fourth, we want to encourage a new style of thinking about parallelism. Currently, we try to apply to parallel machines the thought habits we have acquired from thinking about sequential machines. This strategy works fairly well when the degree of parallelism is low, but it will not scale up. One cannot think individually about the 10^20 processors of a molecular computer. Rather than postpone the inevitable, we think that it's time to develop a theoretical framework for understanding massively parallel analog computers. The principal goal of this report is to outline such a theory.

Both reports discuss the basic concept of field computation and the basic principles of universal (general purpose) field computers. In addition, CS-89-83 discusses representation of constituent structure, learning, and field computation versions of simulated annealing and a kind of genetic algorithm. CS-89-84 discusses ways of avoiding fields of high dimension, and implementation of field computations in terms of neurons with conjunctive synapses.

From harnad at clarity.Princeton.EDU Wed Sep 27 23:30:45 1989
From: harnad at clarity.Princeton.EDU (Stevan Harnad)
Date: Wed, 27 Sep 89 23:30:45 EDT
Subject: Searle's Problem and Fodor's Problem
Message-ID: <8909280330.AA05837@psycho.Princeton.EDU>

Searle's Problem vs. Fodor's Problem

Both the symbolic (S) and the connectionistic (C) approaches to modeling the mind seem to suffer from their own respective fatal handicap: S suffers from Searle's Problem: symbols have no intrinsic meaning, they're ungrounded; their meanings are parasitic on the meanings in our heads, which clearly do have intrinsic meaning. C suffers from Fodor's Problem: connectionist "representations" lack systematicity, unlike the meanings in our heads, which clearly do have systematicity.

My proposal is a very particular kind of hybrid approach in which C is given only the limited and nonrepresentational role of feature learning, a role to which it is naturally suited. The need for systematicity (Fodor's Problem) never arises for C. S then enters as a DEDICATED symbol system (one whose primitive symbol-tokens have additional nonsymbolic constraints on them). The nonsymbolic constraints are what GROUND S (thereby avoiding Searle's Problem) through the connections between the primitive symbol tokens and the feature-detectors that pick out the objects to which they refer from their sensory projections.

The question of learning and learnability is clearly critical in all this.
From harnad at clarity.Princeton.EDU Wed Sep 27 23:30:45 1989 From: harnad at clarity.Princeton.EDU (Stevan Harnad) Date: Wed, 27 Sep 89 23:30:45 EDT Subject: Searle's Problem and Fodor's Problem Message-ID: <8909280330.AA05837@psycho.Princeton.EDU> Searle's Problem vs. Fodor's Problem Both the symbolic (S) and the connectionistic (C) approaches to modeling the mind seem to suffer from their own respective fatal handicap: S suffers from Searle's Problem: Symbols have no intrinsic meaning; they're ungrounded; their meanings are parasitic on the meanings in our heads, which clearly do have intrinsic meaning. C suffers from Fodor's Problem: Connectionist "representations" lack systematicity, unlike the meanings in our heads, which clearly do have systematicity. My proposal is a very particular kind of hybrid approach in which C is given only the limited and nonrepresentational role of feature learning, a role to which it is naturally suited. The need for systematicity (Fodor's Problem) never arises for C. S then enters as a DEDICATED symbol system (one whose primitive symbol-tokens have additional nonsymbolic constraints on them). The nonsymbolic constraints are what GROUND S (thereby avoiding Searle's Problem) through the connections between the primitive symbol tokens and the feature-detectors that pick out the objects to which they refer from their sensory projections. The question of learning and learnability is clearly critical in all this. Fodor is satisfied with a radical nativism for most of our concepts. That's not surprising, because he accepts the "vanishing intersections" argument against the existence of critical features (especially sensory ones) that pick out objects. I think C may allow the first actual TEST of whether feature intersections really vanish; I don't think that is decidable from the armchair. In any case, whether feature-learning took place during evolution or takes place during the lifetime of an organism does not much matter (the answer is probably that there is some of each). What matters is whether features are learnable at all. I'm still betting they are, and that our sensory and conceptual categories are not just "Spandrels." Stevan Harnad ------- From terry%sdbio2 at ucsd.edu Thu Sep 28 22:27:46 1989 From: terry%sdbio2 at ucsd.edu (Terry Sejnowski) Date: Thu, 28 Sep 89 19:27:46 PDT Subject: Neural Computation - Vol. 1, No. 3 Contents Message-ID: <8909290227.AA09517@sdbio2.UCSD.EDU> Neural Computation, Volume 1, Number 3 October 1, 1989 Reviews Unsupervised Learning H. B. Barlow The Vapnik-Chervonenkis Dimension: Information versus Complexity in Learning Yaser S. Abu-Mostafa Note Linking Linear Threshold Units with Quadratic Models of Motion Perception Humbert Suarez and Christof Koch Letters Surface Interpolation in Three-Dimensional Structure-from-Motion Perception Masud Husain, Stefan Treue and Richard A. Andersen A Winner-Take-All Mechanism Based on Presynaptic Inhibition Feedback Alan L. Yuille and Norberto M. Grzywacz An Analysis of the Elastic Net Approach to the Traveling Salesman Problem Richard Durbin, Richard Szeliski and Alan Yuille The Storage of Time Intervals Using Oscillating Neurons Christopher Miall Finite State Automata and Simple Recurrent Networks Axel Cleeremans, David Servan-Schreiber, and James L. McClelland Asymptotic Convergence of Backpropagation Gerald Tesauro, Yu He, and Subutai Ahmad Learning by Assertion: Two Methods for Calibrating a Linear Visual System Laurence T. Maloney and Albert J. Ahumada How to Generate Ordered Maps by Maximizing the Mutual Information between Input and Output Ralph Linsker Finding Minimum Entropy Codes H. B. Barlow, T. P. Kaushal and G. J. Mitchison From Connectionists-Request at CS.CMU.EDU Fri Sep 29 09:49:05 1989 From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU) Date: Fri, 29 Sep 89 09:49:05 EDT Subject: Please include return addresses Message-ID: <25579.623080145@B.GP.CS.CMU.EDU> In order to minimize the number of responses sent to the entire list, I'd like to ask that people explicitly include their email address in their messages. Some mailers are unable to generate the appropriate return address for messages routed through Connectionists at cs.cmu.edu. Thanks, David Plaut Connectionists-Request at cs.cmu.edu (ARPAnet) From movellan at garnet.berkeley.edu Fri Sep 29 17:52:26 1989 From: movellan at garnet.berkeley.edu (movellan@garnet.berkeley.edu) Date: Fri, 29 Sep 89 14:52:26 PDT Subject: IJCNN-90 Message-ID: <8909292152.AA16768@garnet.berkeley.edu> I am trying to contact the reviewing committee for the IJCNN-90. I am using (619) 451-3752 but nobody answers the phone. Does anybody know whether they have a different number? Thanks --Javier.
From rudnick at cse.ogc.edu Fri Sep 29 20:22:45 1989 From: rudnick at cse.ogc.edu (Mike Rudnick) Date: Fri, 29 Sep 89 17:22:45 PDT Subject: applications to DNA, RNA and proteins Message-ID: <8909300022.AA18343@cse.ogc.edu> I'm looking for literature pointers to applications of artificial neural networks to recognition tasks involving DNA, RNA, or proteins. Please respond to me directly and if there is interest I will post a summary. Thanks, Mike Rudnick CSnet: rudnick at cse.ogc.edu Computer Science & Eng. Dept. UUCP: {tektronix,verdix}!ogccse!rudnick Oregon Graduate Center (503) 690-1121 X7390 (or X7309) 19600 N.W. von Neumann Dr. Beaverton, OR. 97006-1999 From jim%cs.st-andrews.ac.uk at NSFnet-Relay.AC.UK Thu Sep 28 18:53:03 1989 From: jim%cs.st-andrews.ac.uk at NSFnet-Relay.AC.UK (Jim Bairaktaris) Date: Thu, 28 Sep 89 18:53:03 BST Subject: No subject Message-ID: <12574.8909281753@tamdhu.cs.st-andrews.ac.uk> Subject : Room to share at 23rd HICSS conference ?? Is anybody on this mailing list going to attend the 23rd Hawaii International Conference on System Sciences? If so, is he/she willing to share a room with me to cut the costs of accommodation? If he/she is a connectionist, even better. Please reply by e-mail to : jim%uk.ac.st-and.cs or write to : Dimitrios Bairaktaris University of St.Andrews Computational Science North Haugh FIFE KY16 9SS Scotland or telephone : (+44) 334 76161 ext. 8106 Thank you Dimitrios
From niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK Mon Sep 4 09:02:56 1989 From: niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK (Mahesan Niranjan) Date: Mon, 4 Sep 89 09:02:56 BST Subject: Yet Another Neural Conference (YANC) Message-ID: <9890.8909040802@dsl.eng.cam.ac.uk> Here is the programme of the IEE Conference on ANNs. niranjan PS: IEE means Institution of Electrical Engineers (in UK) PPS: ANN means Artificial Neural Networks ============================================================================= IEE First International Conference on Artificial Neural Networks at IEE Savoy Place 16-18 October 1989 Registration: Conference Services IEE Savoy Place London WC2R 0BL tel: 01-240-1871 fax: 01-240-7735 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ PROGRAMME: MONDAY 16 October Registration 8.30 Formal Opening 9.30 Keynote address: 'On the significance of internal representations in neural networks', Kohonen. Session 1 - Self Organising and Feedback Networks 'Hierarchical self-organisation: a review', Luttrell, RSRE 'A comparative study of the Kohonen and Multiedit neural net learning algorithms', Kittler & Lucas, Surrey U 'Self-organisation based on the second maximum entropy principle', Grabec, E K U, Yugoslavia 'A new learning rule for feedback neural networks', Tarassenko, Seifert, Tombs & Reynolds, Oxford U & Murray, Edinburgh U 'Linear interpolation with binary neurons', Jonker, Coolen & van der Gon, Utrecht U CLOSE - LUNCH 12.30 Session 2 - Implementation I 14.00 'Silicon implementation of neural networks', Murray, Edinburgh U 'Digital optical technology for the neural plane', Collins & Crossland, STC & Vass, Edinburgh U 'Implementation of plasticity in MOS synapses', Card & Moore, Oxford U 'Integrated circuit emulation of ART1 networks', Rao, Walker, Clark & Akers, Arizona SU 'A limited connectivity switched capacitor analogue neural processing circuit with digital storage of non-binary input weights', Bounds, RSRE TEA - Poster session 1 15.40 'A non-competitive model for unsupervised learning', Hrycej, PCS, W.Germany 'Evolution equations for neural networks with arbitrary spatial structure', Coolen, van der Gon & Ruijgrok, Utrecht U 'Hardware realisable models of neural processing', Taylor, Clarkson, KCL & Gorse UCL, London 'On the training and the convergence of brain-state-in-a-box neural networks', Vandenberghe & Vandewalle, Katholieke U Leuven 'Learning in a single pass: a neural model for instantaneous principal component analysis and linear regression', Rosenblatt, Concept Technols, Lelu & Georgei, INIST/CNRS France 'Dynamic scheduling for feed-forward neural nets using transputers', Oglesby & Mason, UC Swansea 'Analogue-to-digital conversion of self-organising networks - the JAM technique', Allinson, Johnson & Brown, York U 'Temporal effects in a simple neural network derived from an optical implementation', Wright & White, BAe 'Neural networks and systolic arrays', Broomhead, Jones, McWhirter & Shepherd, RSRE 'Infrared search and track signal processing: a potential application of artificial neural computing', Chenoweth, Louisville U 'Optimal visual tracking with artificial neural networks', Dobnikar, Likar & Podberegar, Ljubljana U 'Extension of the Hopfield neural network to a multilayer architecture for optical implementation', Selviah & Midwinter, UCL, London Session 3 -
Vision 16.20 'A neural network approach to the computation of vision algorithms', Psarrou & Buxton, QMC London 'A neural network implementation for real-time scene analysis', Allen, Adams & Booth, Newcastle-upon-Tyne U 'Optical flow estimation using an artificial neural network', Zhongquan, Purdue U 'Neural networks and Hough transform for pattern recognition', Costa & Sandler, KCL, London CLOSE OF SESSION 17.40 Cocktail Party in IEE Refectory 18.00-19.15 TUESDAY 17 OCTOBER Session 4 - Speech 09.00 'Experimental comparison of a range of neural network and conventional techniques for a word recognition task', Bedworth, Bridle, Flyn & Ponting, RSRE, Fallside & Prager, Cambridge U, Fogelman & Bottou, EHEI, Paris 'Two level recognition of isolated words using neural nets', Howard & Huckvale, UCL, London 'Predictive analysis of speech using adaptive networks', Lowe, RSRE 'The application of artificial neural network techniques to low bit-rate speech coding', Kaouri & McCanny, Belfast U 'The modified Kanerva model: results for real time word recognition', Prager, Clarke & Fallside, Cambridge U COFFEE - Poster session 2 10.40 'Identifying and discriminating temporal events with connectionist language users', Allen, Kaufman & Bahmidpaty, Illinois U 'Auditory processing in a post-cochlear stochastic neural network', Schwartz, Demongeot, Herve, Wu & Escudier, ICP, France 'Neural networks for speech pattern classification', Renals & Rohwer, Edinburgh U 'Weight limiting, weight quantisation and generalisation in multi-layer perceptrons', Woodland, BTRL 'Using a connectionist network to eliminate redundancy from a phonetic lattice in an analytical speech recognition system', Miclet & Caharel, CNET 'Speaker recognition with a neural classifier', Oglesby & Mason, UC Swansea 'Output functions for probabilistic logic nodes', Myers, ICST London 'Neural networks with restricted-range connections', Noest, Brain Research Inst, Netherlands 'A hybrid neural network for temporal pattern recognition', McCulloch & Bounds, RSRE 'A/D conversion and analog vector quantization using neural network models', Svensson & Chen, Linkoping U 'Stochastic searching networks', Bishop, Reading U Session 5 - Architectures 11.00 'Canonical neural nets based on logic nodes', Aleksander, ICST London 'Designing neural networks', Cybenko, Illinois U.
'A continuously adaptable artificial neural network', Sayers & Coghill, Auckland U 'An analysis of silicon models of visual processing', Taylor, KC London CLOSE SESSION - LUNCH 12.30 Session 6 - Signal and Data Processing 14.00 'Nonlinear decision feedback equalizers using neural network structures', Siu, Cowan & Gibson, Edinburgh U 'Equalisation using neural networks', Jha, Durrani & Soraghan, Strathclyde U 'Artificial neural net algorithms in classifying electromyographic signals', Pattichis, Middleton & Schizaz, MDRT of Cyprus, Schofield & Fawcett, Newcastle Gen Hospital 'Recognition of radar signals by neural network', Beastall, RNEC UK 'Applications of neural networks to nondestructive testing', Udpa & Udpa, Colorado U TEA - Poster session 3 15.40 'Bearing estimation using neural optimisation methods', Jha & Durrani, Strathclyde U 'An example of back propagation: diagnosis of dyspepsia', Ridella, Mella, Arrigo, Marconi, Scalia & Mansi, CNR Italy 'The application of pulse processing neural networks in communications and signal demodulation', Chesmore, Hull U 'Neural networks and GMDH regression: case studies and comparisons', Harrison, Mort, Hasnain & Linkens, Sheffield U 'The application of neural networks to tactical and sensor data fusion problems', Whittington & Spracklen, Aberdeen U 'A new learning paradigm for neural networks', Lucas & Damper, Southampton U 'Estimating hidden unit quantity of two-layer perceptrons performing binary mappings', Gutierrez, Grondin & Wang, Arizona SU 'Training networks with discontinuous activation functions', Findlay, Plessey Research 'Can a perceptron find Lyapunov functions?', Banks & Harrison, Sheffield U Session 7 - Multilayer perceptrons I 16.20 'Single-layer look-up perceptrons (SLLUPS)', Tattersall & Foster, UEA 'Probabilistic learning on a network and a Markov random field', Wright, BAe 'Building symmetries into feedforward networks', Shawe-Taylor, London U 'Stochastic computing and reinforcement neural networks', Mars, Durham U & Leaver, BAe CLOSE OF SESSION 17.40 WEDNESDAY 18 October Session 8 - Image Processing 9.00 'Optical character recognition using artificial networks: past and future', Alpaydin, SFIT Switzerland 'An associative neural architecture for invariant pattern classification', Austin, York U 'Self-organising Hopfield networks', Naillon & Theeten, LEPA France 'Comparison of neural networks and conventional techniques for feature location in facial images', Hutchinson & Welsh, BTRL 'Expectation-based feedback in a neural network which recognises hand-drawn characters and symbols', Banks & Elliman, Nottingham U COFFEE - Poster session 4 10.40 'Matching of attributed and non-attributed graphs by use of Boltzmann Machine algorithm', Kuner, Siemens W Germany 'Image processing with optimum neural networks', Bichsel, PSI Switzerland 'A comparative study of neural network structures for practical application in a pattern recognition task', Bisset, Fiho & Fairhurst, Kent U 'On the use of pre-defined regions to minimise the training and complexity of multi-layer neural networks', Houselander & Taylor, UCL London 'A novel training algorithm', Wang & Grondin, Arizona SU 'Diffusion learning for the multilayer perceptron', Hoptroff & Hall, KC London 'Automatic learning of efficient behaviour', Watkins, Philips UK 'Learning with interference cells', Sequeira & Tome, IST - AV Portugal 'Test of neural network as a substitute for a traditional small-scale expert system', Filippi & Walker, Rome U 'Image compression with competing multilayer
perceptrons', Sirat & Viala, LEP, France Session 9 - Multilayer perceptrons II 11.00 'The radial basis function network: adapting the transfer functions to suit the experiment and the problem of generalisation', Lowe, RSRE 'On the analysis of multi-dimensional linear predictive/autoregressive data by a class of single layer connectionist models', Fallside, Cambridge U 'Unlimited input accuracy in layered networks', Sirat & Zorer, LEP, France 'The properties and implementation of the non-linear vector space connectionist model', Lynch & Rayner, Cambridge U CLOSE OF SESSION - LUNCH 12.30 Session 10 - AI and Neural Networks 14.00 'Overcoming independence assumption in Bayesian neural networks', Kononenko, FEE, Yugoslavia 'A neural controller', Saerens & Soquet, IRIDIA, Belgium 'Linked assembly of neural networks to solve the interconnection problem', Green & Noakes, Essex U 'Building expert systems on neural architecture', Fu, Wisconsin U 'COMPO - conceptual clustering with connectionist competitive learning', de Garis, Bruxelles LU CLOSE OF SESSION - TEA 15.40 Session 11 - Implementation II 16.10 'Ferroelectric connections for IC neural networks', Clark, Dey & Grondin, Arizona SU 'An implementation of fully analogue sum-of-product neural models', Daniel, Waller & Bisset, Kent U 'The implementation of hardware neural net systems', Myers, BTRL 'A general purpose digital architecture for neural network simulation', Duranton & Mauduit, LEPA, France CLOSING REMARKS by Cowan 17.30 +++++++++++++++++++++++++++++++++++++++++++++++++++++++ ============================================================================= O.K. thanks for the attention...... From niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK Mon Sep 4 12:37:30 1989 From: niranjan%digsys.engineering.cambridge.ac.uk at NSFnet-Relay.AC.UK (Mahesan Niranjan) Date: Mon, 4 Sep 89 12:37:30 BST Subject: Weight Space Message-ID: <11273.8909041137@dsl.eng.cam.ac.uk> > From: INS_ATGE%JHUVMS.BITNET%VMA.CC.CMU.EDU at murtoa.cs.mu.oz > Date: Tue, 22 Aug 89 18:23 EST > Subject: Neural Net Archive? > > Has anyone considered (or possibly created) an archive site for > trained neural networks? Why spend thousands of epochs of learning > to create your network weights, only to throw them away after your research > is done? If anyone feels such an archive site may be of use, please send > me email (as it would be helpful to me as I lobby for a site at Hopkins). > > -Thomas Edwards I can imagine a day when weight values will be available for sale! Companies with number crunching power might train networks and 'sell' the values (possibly at reduced prices for academic institutions!). It may also be possible to 'buy' quantised weights at cheap rates!! niranjan From gc at s16.csrd.uiuc.edu Mon Sep 4 22:06:22 1989 From: gc at s16.csrd.uiuc.edu (George Cybenko) Date: Mon, 4 Sep 89 21:06:22 CDT Subject: Connections Per Second Message-ID: <8909050206.AA14400@s16.csrd.uiuc.edu> The recent discussion about how to measure machine performance in terms of connections per second, etc. is reminiscent of the decade-old debate about how to report machine performance in general. Here are three important turning points in the history of measuring and reporting machine performance for scientific computing. Pre 1980 - (Vendor defined and supplied floating point execution rates) In order to make megaflop numbers as large as possible, people used register arithmetic operations so those rates completely ignored addressing, incrementing, cache effects, etc.
Consequently, the performance of a machine on a typical scientific program was almost uncorrelated with those MFLOP rates. Mid-1980's - (Kernels and loops) Livermore loops, Whetstones, Linpack kernels were introduced because of the problems noted above. However, these loops and kernels are somewhat homogeneous, lack I/O, stress the memory bandwidth in predictable ways, and can be easily detected and optimized by a compiler. Consequently, there were accusations that compilers and machines were being constructed to deliver high performance on those benchmark loops and kernels. Late-1980's and beyond? (Applications-based benchmarking - Perfect Club, SPEC) Replace kernels and loops with scientific applications codes and data sets that are representative of a high-end machine workload. Replace MFLOPS with absolute execution times. The Perfect (Performance Evaluation through Cost-Effective Transformation) Club was a cooperative effort formed in 1987 that collected 13 scientific codes and data sets to form a benchmark suite. When porting codes to different machines, changes in the codes were allowed. The initial effort included researchers from Cray, IBM, Caltech, Princeton, HARC, University of Illinois, and the Institute for Supercomputing Research (Tokyo) - the codes include circuit simulation, fluids, physics and signal processing applications. The Systems Performance Evaluation Cooperative (SPEC) is an industry-motivated effort started in spring 1989 to develop applications-based benchmarking for a wider range of machines, including workstations. DEC, IBM, and MIPS belong to SPEC, for example. In light of this history, it seems that using MFLOPS or CPS as measures of machine performance on neural computing applications ignores a decade of progress and plays right into vendor hands. Instead, let me suggest that someone submit a state-of-the-art code solving a representative problem in one of the major connectionist models, together with a data set and solution, to the Perfect Club or SPEC suite. This way, people interested in connectionist computing can simultaneously contribute to a broader effort in benchmarking and avoid recapitulating history. Information about the Perfect Club effort can be obtained by writing to Lynn Rubarts Center for Supercomputing Research and Development University of Illinois Urbana, IL 61801 USA (217) 333-6223 or sending an email request for the Perfect Club reports to rubarts at uicsrd.csrd.uiuc.edu. Anyone with a code that might be suitable for the Perfect benchmark can contact me. George Cybenko Center for Supercomputing Research and Development University of Illinois at Urbana Urbana, IL 61801 (217) 244-4145 gc at uicsrd.csrd.uiuc.edu
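Cybenko's contrast between peak rates and representative workloads is easy to demonstrate on any machine; the hypothetical sketch below (not a Perfect Club or SPEC code) reports both a connections-per-second figure and the absolute execution time for a small layered forward pass, and the numbers it prints are entirely machine- and workload-dependent (Python with NumPy assumed):

import time
import numpy as np

n_in, n_hid, n_out, iters = 256, 128, 10, 1000    # toy workload sizes
rng = np.random.default_rng(0)
W1 = rng.standard_normal((n_hid, n_in))
W2 = rng.standard_normal((n_out, n_hid))
x = rng.standard_normal(n_in)

start = time.perf_counter()
for _ in range(iters):                            # forward passes only
    h = np.tanh(W1 @ x)
    y = np.tanh(W2 @ h)
elapsed = time.perf_counter() - start

connections = n_in * n_hid + n_hid * n_out        # weights traversed per pass
print("absolute time: %.3f s for %d passes" % (elapsed, iters))
print("measured rate: %.3e connections/s" % (connections * iters / elapsed))
# The rate alone says little; applications-based benchmarks report the
# absolute time on a representative workload instead.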
From carol at ai.toronto.edu Tue Sep 5 15:53:17 1989 From: carol at ai.toronto.edu (Carol Plathan) Date: Tue, 5 Sep 89 15:53:17 EDT Subject: CRG-TR-89-4 available Message-ID: <89Sep5.155337edt.10806@ephemeral.ai.toronto.edu> The following technical report by Yann le Cun, CRG-TR-89-4/June 1989, is now available. Please send me your (physical) mailing address to receive this report: GENERALIZATION AND NETWORK DESIGN STRATEGIES Yann le Cun* Department of Computer Science University of Toronto TECHNICAL REPORT CRG-89-4 / June 1989 ABSTRACT An interesting property of connectionist systems is their ability to learn from examples. Although most recent work in the field concentrates on reducing learning times, the most important feature of a learning machine is its generalization performance. It is usually accepted that good generalization performance on real-world problems cannot be achieved unless some a priori knowledge about the task is built into the system. Back-propagation networks provide a way of specifying such knowledge by imposing constraints both on the architecture of the network and on its weights. In general, such constraints can be considered as particular transformations of the parameter space. Building a constrained network for image recognition appears to be a feasible task. We describe a small handwritten digit recognition problem and show that, even though the problem is linearly separable, single layer networks exhibit poor generalization performance. Multilayer constrained networks perform very well on this task when organized in a hierarchical structure with shift invariant feature detectors. These results confirm the idea that minimizing the number of free parameters in the network enhances generalization. The paper also contains a short description of a second order version of back-propagation that uses a diagonal approximation to the Hessian matrix. ------------- *Present address: Room 4G-332, AT&T Bell Laboratories, Crawfords Corner Rd, Holmdel, NJ 07733 Note: A shortened version of the Technical Report will appear in: R. Pfeifer, Z. Schreter, F. Fogelman, and L. Steels (editors), "Connectionism in Perspective", Zurich, Switzerland, 1989. Elsevier.
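The shift-invariant feature detectors in this abstract amount to weight sharing: every unit in a layer applies the same small kernel at a different input shift, so the parameter count is fixed by the kernel, not the layer width. A minimal one-dimensional sketch of that constraint (details hypothetical, not taken from the report; Python with NumPy assumed):

import numpy as np

def shared_weight_layer(x, kernel, bias):
    # Each unit applies the same kernel at a different shift, so the layer
    # has len(kernel) + 1 free parameters regardless of its width.
    n = len(x) - len(kernel) + 1
    out = np.empty(n)
    for i in range(n):
        out[i] = np.tanh(x[i:i + len(kernel)] @ kernel + bias)
    return out

rng = np.random.default_rng(0)
x = rng.standard_normal(16)        # a small 1-D input (hypothetical size)
kernel = rng.standard_normal(5)    # one shared feature detector
y = shared_weight_layer(x, kernel, 0.1)
# Constraining weights this way minimizes free parameters, the report's
# stated route to better generalization.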
From jose at tractatus.bellcore.com Thu Sep 7 07:36:41 1989 From: jose at tractatus.bellcore.com (Stephen J Hanson) Date: Thu, 7 Sep 89 07:36:41 -0400 Subject: NIPS Registration Message-ID: <8909071136.AA12106@tractatus.bellcore.com> **** NIPS89 Update **** We've just finished putting the program for the conference together and have a preliminary program for the workshops. A mailing to authors will go out this week, with registration information. Those who requested this information but are not authors will hear from us starting in another week. If you received a postcard from us acknowledging receipt of your paper, you are on our authors' mailing list. If you haven't requested the registration packet, you can do so by writing to Kathie Hibbard NIPS89 Local Committee University of Colorado Eng'g Center Campus Box 425 Boulder, CO 80309-0425 From LIN2 at ibm.com Thu Sep 7 17:06:12 1989 From: LIN2 at ibm.com (Ralph Linsker) Date: 7 Sep 89 17:06:12 EDT Subject: Preprint available Message-ID: <090789.170612.lin2@ibm.com> ********* FOR CONNECTIONISTS ONLY - PLEASE DO NOT FORWARD *********** **************** TO OTHER BBOARDS/ELECTRONIC MEDIA ******************* The following preprint is available. If you would like a copy, please send a note to lin2 @ ibm.com containing *only* the information on the following four lines (to allow more efficient handling of your request): *NC* Name Address (each line not beyond column 33) How to Generate Ordered Maps by Maximizing the Mutual Information Between Input and Output Signals* Ralph Linsker IBM Research Division, T. J. Watson Research Center, P. O. Box 218, Yorktown Heights, NY 10598 *To appear in: Neural Computation 1(3):396-405 (1989). A learning rule that performs gradient ascent in the average mutual information between input and output signals is derived for a system having feedforward and lateral interactions. Several processes emerge as components of this learning rule: Hebb-like modification, and cooperation and competition among processing nodes. Topographic map formation is demonstrated using the learning rule. An analytic expression relating the average mutual information to the response properties of nodes and their geometric arrangement is derived in certain cases. This yields a relation between the local map magnification factor and the probability distribution in the input space. The results provide new links between unsupervised learning and information-theoretic optimization in a system whose properties are biologically motivated. From barto%anger at cs.umass.edu Tue Sep 12 17:52:57 1989 From: barto%anger at cs.umass.edu (barto%anger@cs.umass.edu) Date: Tue, 12 Sep 89 17:52:57 EDT Subject: Technical Reports Available Message-ID: <8909122152.AA00331@anger.ANW.edu> **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** Two new technical reports are available: CONNECTIONIST LEARNING FOR CONTROL: AN OVERVIEW Andrew G. Barto Department of Computer and Information Science University of Massachusetts, Amherst MA 01003 COINS Technical Report 89-89 September 1989 Abstract---This report is an introductory overview of learning by connectionist networks, also called artificial neural networks, with a focus on the ideas and methods most relevant to the control of dynamical systems. It is intended both to provide an overview of connectionist ideas for control theorists and to provide connectionist researchers with an introduction to certain issues in control. The perspective taken emphasizes the continuity of the current connectionist research with more traditional research in control, signal processing, and pattern classification. Control theory is a well--developed field with a large literature, and many of the learning methods being described by connectionists are closely related to methods that already have been intensively studied by adaptive control theorists. On the other hand, the directions that connectionists are taking these methods have characteristics that are absent in the traditional engineering approaches. This report describes these characteristics and discusses their positive and negative aspects. It is argued that connectionist approaches to control are special cases of memory--intensive approaches, provided a sufficiently generalized view of memory is adopted. Because adaptive connectionist networks can cover the range between structureless lookup tables and highly constrained model--based parameter estimation, they seem well--suited for the acquisition and storage of control information. Adaptive networks can strike a balance between the tradeoffs associated with the extremes of the memory/model continuum. LEARNING AND SEQUENTIAL DECISION MAKING A. G. Barto Department of Computer and Information Science University of Massachusetts, Amherst MA 01003 R. S. Sutton GTE Laboratories Incorporated Waltham, MA 02254 C. J. C. H. Watkins Philips Research Laboratories Cross Oak Lane, Redhill Surrey RH1 5HA, England COINS Technical Report 89-95 September 1989 Abstract---In this report we show how the class of adaptive prediction methods that Sutton called ``temporal difference,'' or TD, methods is related to the theory of sequential decision making. TD methods have been used as ``adaptive critics'' in connectionist learning systems, and have been proposed as models of animal learning in classical conditioning experiments.
Here we relate TD methods to decision tasks formulated in terms of a stochastic dynamical system whose behavior unfolds over time under the influence of a decision maker's actions. Strategies are sought for selecting actions so as to maximize a measure of long-term payoff gain. Mathematically, tasks such as this can be formulated as Markovian decision problems, and numerous methods have been proposed for learning how to solve such problems. We show how a TD method can be understood as a novel synthesis of concepts from the theory of stochastic dynamic programming, which comprises the standard method for solving such tasks when a model of the dynamical system is available, and the theory of parameter estimation, which provides the appropriate context for studying learning rules in the form of equations for updating associative strengths in behavioral models, or connection weights in connectionist networks. Because this report is oriented primarily toward the non-engineer interested in animal learning, it presents tutorials on stochastic sequential decision tasks, stochastic dynamic programming, and parameter estimation. You can get these reports in several ways. I have followed Jordan Pollack's very good suggestion and placed postscript files in the account kindly provided at Ohio State for this purpose. Here is the version of Jordan's instructions appropriate for getting them: ftp cheops.cis.ohio-state.edu (or, ftp 128.146.8.62) Name: anonymous Password: neuron ftp> cd pub/neuroprose ftp> get (remote-file) barto.control.ps (local-file) foo.ps 587591 bytes sent in ?? seconds (?? Kbytes/s) ftp> get (remote-file) barto.sequential_decisions.ps (local-file) bar.ps 904574 bytes sent in ?? seconds (?? Kbytes/s) ftp> quit unix> lpr *.ps (note: these are rather large files: 38 and 51 pages respectively when printed) Alternatively, you can send requests for printed copies via e-mail to Ms. Connie Smith using the address: Smith at cs.umass.EDU or write to Ms. Connie Smith Department of Computer and Information Science University of Massachusetts Amherst, MA 01003 (but I would prefer that you use the ftp option if possible!) Andy Barto **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS************** **********DO NOT FORWARD TO OTHER BBOARDS**************
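For readers meeting temporal-difference methods here for the first time, the core update is small; the sketch below runs TD(0) prediction on a toy random-walk chain, and the chain, step size and episode count are illustrative assumptions rather than anything from the reports (Python with NumPy assumed):

import numpy as np

# Five nonterminal states; a walk steps left or right at random and
# terminates off either edge, paying 1 only off the right edge.
n_states, alpha, gamma = 5, 0.1, 1.0
V = np.zeros(n_states)                      # value (prediction) estimates
rng = np.random.default_rng(0)

for _ in range(2000):                       # episodes
    s = 2                                   # start in the middle
    while True:
        s2 = s + rng.choice((-1, 1))
        r = 1.0 if s2 == n_states else 0.0
        done = s2 < 0 or s2 >= n_states
        target = r if done else r + gamma * V[s2]
        V[s] += alpha * (target - V[s])     # TD(0): move toward the bootstrapped target
        if done:
            break
        s = s2

print(V)    # approaches the true right-exit probabilities 1/6 ... 5/6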
From noel at CS.EXETER.AC.UK Wed Sep 13 12:05:38 1989 From: noel at CS.EXETER.AC.UK (Noel Sharkey) Date: Wed, 13 Sep 89 12:05:38 BST Subject: address Message-ID: <5738.8909131105@entropy.cs.exeter.ac.uk> Does anyone know Ronan Reilly's new email address at the Beckmann Institute? I need to contact him urgently. noel sharkey Centre for Connection Science JANET: noel at uk.ac.exeter.cs Dept. Computer Science University of Exeter UUCP: !ukc!expya!noel Exeter EX4 4PT Devon BITNET: noel at cs.exeter.ac.uk@UKACRL U.K. From noel at CS.EXETER.AC.UK Thu Sep 14 13:56:06 1989 From: noel at CS.EXETER.AC.UK (Noel Sharkey) Date: Thu, 14 Sep 89 13:56:06 BST Subject: Natural Language Message-ID: <6001.8909141256@entropy.cs.exeter.ac.uk> CALL FOR PAPERS CONNECTION SCIENCE (Journal of Neural Computing, Artificial Intelligence and Cognitive Research) Special Issue CONNECTIONIST RESEARCH ON NATURAL LANGUAGE Editor: Noel E. Sharkey, University of Exeter Special Editorial Review Panel Robert Allen, Bell Communications Research Garrison W. Cottrell, University of California, San Diego Michael G. Dyer, University of California, Los Angeles Jeffrey L. Elman, University of California, San Diego George Lakoff, University of California, Berkeley Wendy W. Lehnert, University of Massachusetts at Amherst Jordan Pollack, Ohio State University Ronan Reilly, Beckmann Institute, Illinois Bart Selman, University of Toronto Paul Smolensky, University of Colorado, Boulder This special issue will accept submissions of full-length connectionist papers and brief reports from any area of natural language research including: Connectionist applications to AI problems in natural language (e.g. paraphrase, summarisation, question answering). New formalisms or algorithms for natural language processing. Simulations of psychological data. Memory modules or inference mechanisms to support natural language processing. Representational methods for natural language. Techniques for ambiguity resolution. Parsing. Speech recognition, production, and processing. Connectionist approaches to linguistics (phonology, morphology etc.). Submissions of short reports or recent updates will also be accepted for the Brief Reports section in the journal. No paper should be currently submitted elsewhere. DEADLINES Deadline for submissions: December 15th 1989 Decision/reviews by: February 1990 Papers may be accepted to appear in regular issues if there is insufficient space in the special issue. For further information about the journal please contact Lyn Shackleton (Assistant Editor) Centre for Connection Science JANET: lyn at uk.ac.exeter.cs Dept. Computer Science University of Exeter UUCP: !ukc!expya!lyn Exeter EX4 4PT Devon BITNET: lyn at cs.exeter.ac.uk@UKACRL U.K. From mcvax!alf.inesc.pt!lba at uunet.UU.NET Thu Sep 14 12:51:43 1989 From: mcvax!alf.inesc.pt!lba at uunet.UU.NET (Luis Borges de Almeida) Date: Thu, 14 Sep 89 16:51:43 GMT Subject: EURASIP workshop on neural networks - call for contributions Message-ID: <8909141651.AA07728@alf.inesc.pt> EURASIP WORKSHOP ON NEURAL NETWORKS Sesimbra, Portugal February 15-17, 1990 ANNOUNCEMENT AND 2nd CALL FOR CONTRIBUTIONS The workshop will be held at the Hotel do Mar in Sesimbra, Portugal. It will take place in 1990, from February 15 morning to 17 noon, and will be sponsored by EURASIP, the European Association for Signal Processing. It will be open to participants from all countries. Short contributions from all fields related to the neural network area are welcome (see submission procedures below). A (non-exclusive) list of topics is given below. These contributions will be presented at the workshop in poster format, and are intended for presentation of ongoing work, projects (e.g. ESPRIT, BRAIN, DARPA,...), or for proposing interesting views (even controversial or provocative). Short contributions will not correspond to a paper in the proceedings, but publication in a special issue of one of EURASIP's journals is being considered. Care is being taken to ensure that the workshop will have a high level of quality. Full contributions have already been selected based on an evaluation by an international technical committee, and the proceedings volume containing these contributions will be published and handed to participants at the workshop. The number of participants will be limited to 50. A small number of non-contributing participants may be accepted, depending on the total number of contributions. The official language of the workshop will be English. Dr. Georges Cybenko, of the University of Illinois, will be an invited speaker. Contacts are under way to invite another well-known researcher.
TOPICS: - signal processing (speech, image,...) - pattern recognition - algorithms (training procedures, new structures, speedups,...) - generalization - implementation - specific applications where NN have been proved better than other approaches - industrial projects and realizations SUBMISSION PROCEDURES Submissions, both for long and for short contributions, will consist of (strictly) 2-page summaries, plus a cover page indicating title, author's name, affiliation, phone no., and e-mail address if possible. Three copies should be sent directly to the Technical Chairman, at the address given below. The calendar for short contributions is as follows: Deadline for submission Oct 1, 1989 Notification of acceptance Nov 15, 1989 THE LOCATION Sesimbra is a fishermen's village, located in a nice region about 30 km south of Lisbon. Special transportation from/to Lisbon will be arranged. The workshop will end on a Saturday at lunch time; therefore, the participants will have the option of either flying back home in the afternoon, or staying for sightseeing for the remainder of the weekend in Sesimbra and/or Lisbon. An optional program for accompanying persons is being organized. For further information, send the coupon below to the general chairman, or contact him directly. ORGANIZING COMMITTEE: GENERAL CHAIRMAN Luis B. Almeida INESC Apartado 10105 P-1017 LISBOA CODEX PORTUGAL Phone: +351-1-544607. Fax: +351-1-525843. E-mail: {any backbone, uunet}!mcvax!inesc!lba TECHNICAL CHAIRMAN Christian Wellekens Philips Research Laboratory Av. Van Becelaere 2 Box 8 B-1170 BRUSSELS BELGIUM Phone: +32-2-6742275 TECHNICAL COMMITTEE John Bridle Herve Bourlard Frank Fallside Francoise Fogelman-Soulie Jeanny Herault Larry Jackel Renato de Mori H. Muehlenbein REGISTRATION, FINANCE, LOCAL ARRANGEMENTS Joao Bilhim INESC Apartado 10105 P-1017 LISBOA CODEX PORTUGAL Phone: +351-1-545150. Fax: +351-1-525843. --------------------------------------------------------------------- Please keep me informed about the EURASIP Workshop on Neural Networks Name: University/Company: Address: Phone: E-mail: [ ] I plan to attend the workshop (send to Luis B. Almeida, INESC, Apartado 10105, P-1017 LISBOA CODEX, PORTUGAL)
From ST401843%BROWNVM.BITNET at vma.CC.CMU.EDU Sun Sep 17 14:30:56 1989 From: ST401843%BROWNVM.BITNET at vma.CC.CMU.EDU (thanasis kehagias) Date: Sun, 17 Sep 89 14:30:56 EDT Subject: old paper is now available via ftp ... Message-ID: The following OLD paper is now available by anonymous FTP. To get a copy, please "ftp" to cheops.cis.ohio-state.edu (128.146.8.62), "cd" to the pub/neuroprose directory, and "get" the file kehagias.hmm0289.tex. Please use your own version of LATEX to print it out.
OPTIMAL CONTROL FOR TRAINING: THE MISSING LINK BETWEEN HIDDEN MARKOV MODELS AND CONNECTIONIST NETWORKS ABSTRACT For every Hidden Markov Model there is a set of "forward" probabilities that need to be computed for both the recognition and the training problem. These probabilities are computed recursively, and hence the computation can be performed by a multistage, feedforward network that we will call the Hidden Markov Model Net (HMMN). This network has exactly the same architecture as the standard Connectionist Network (CN). Furthermore, training a Hidden Markov Model is equivalent to optimizing a function of the HMMN; training a CN is equivalent to optimizing a function of the CN. Due to the multistage architecture, both problems can be seen as Optimal Control problems. By applying standard Optimal Control techniques we discover in both problems that certain backpropagated quantities (backward probabilities for HMMN, backward propagated errors for CN) are of crucial importance to the solution. So HMM's and CN's are similar in architecture and training.
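The forward probabilities behind this equivalence obey a one-line recursion, alpha_t(j) = b_j(o_t) * sum_i alpha_{t-1}(i) * a_ij, and computing it really is a layered feedforward pass, one layer per observation; a toy sketch with entirely hypothetical parameters (Python with NumPy assumed):

import numpy as np

# A toy 2-state HMM (all numbers hypothetical).
A = np.array([[0.7, 0.3],          # A[i, j] = P(next state j | state i)
              [0.4, 0.6]])
B = np.array([[0.9, 0.1],          # B[i, k] = P(symbol k | state i)
              [0.2, 0.8]])
pi = np.array([0.5, 0.5])          # initial state distribution
obs = [0, 1, 1, 0]                 # an observation sequence

alpha = pi * B[:, obs[0]]          # first "layer" of the HMMN
for o in obs[1:]:
    alpha = (alpha @ A) * B[:, o]  # each step is one feedforward stage

print(alpha.sum())                 # P(obs | model), used in recognition and training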
From pollack at cis.ohio-state.edu Mon Sep 18 10:15:10 1989 From: pollack at cis.ohio-state.edu (Jordan B Pollack) Date: Mon, 18 Sep 89 10:15:10 EDT Subject: Neuroprose Compression Message-ID: <8909181415.AA28731@toto.cis.ohio-state.edu> *************DO NOT FORWARD TO OTHER BBOARDS*************** Dr. Barto's postscript files are quite large, and apparently have been difficult to transmit to certain machines. There are standard Unix utilities, called "compress" and "uncompress", which replace "file" with "file.Z" and replace "file.Z" with "file", respectively. Rather than replacing them, however, I have added compressed versions of Dr. Barto's files to the neuroprose directory. In general, compressing postscript seems like a good idea, since 70% compression yields a lot of file space and speed improvement in transmission. Note that one needs to use BINARY mode in ftp to transfer these files, whereas TEXT mode worked for both tex and ps. Jordan *************DO NOT FORWARD TO OTHER BBOARDS*************** From harnad at phoenix.Princeton.EDU Tue Sep 19 01:38:26 1989 From: harnad at phoenix.Princeton.EDU (S. R. Harnad) Date: Tue, 19 Sep 89 01:38:26 -0400 Subject: Visual Search & Complexity: BBS Call for Commentators Message-ID: <8909190538.AA23981@phoenix.Princeton.EDU> Below is the abstract of a forthcoming target article to appear in Behavioral and Brain Sciences (BBS), an international, interdisciplinary journal that provides Open Peer Commentary on important and controversial current research in the biobehavioral and cognitive sciences. Commentators must be current BBS Associates or nominated by a current BBS Associate. To be considered as a commentator on this article, to suggest other appropriate commentators, or for information about how to become a BBS Associate, please send email to: harnad at princeton.edu or write to: BBS, 20 Nassau Street, #240, Princeton NJ 08542 [tel: 609-921-7771] ____________________________________________________________________ Analyzing Vision at the Complexity Level John K. Tsotsos Department of Computer Science, University of Toronto and The Canadian Institute for Advanced Research The general problem of visual search can be shown to be computationally intractable in a formal complexity-theoretic sense, yet visual search is widely involved in everyday perception and biological systems manage to perform it remarkably well. Complexity level analysis may resolve this contradiction. Visual search can be reshaped into tractability through approximations and by optimizing the resources devoted to visual processing. Architectural constraints can be derived using the minimum cost principle to rule out a large class of potential solutions. The evidence speaks strongly against purely bottom-up approaches to vision. This analysis of visual search performance in terms of task-directed influences on visual information processing and complexity satisfaction allows a large body of neurophysiological and psychological evidence to be tied together. From sankar at caip.rutgers.edu Wed Sep 20 11:12:22 1989 From: sankar at caip.rutgers.edu (ananth sankar) Date: Wed, 20 Sep 89 11:12:22 EDT Subject: Hardware Implementations of Neural Nets Message-ID: <8909201512.AA06629@caip.rutgers.edu> I would like to know of any references to hardware implementations of neural nets. I would appreciate it if someone could point me to some papers on this subject. Thanks in anticipation. Ananth Sankar Dept. of Electrical Engineering Rutgers University New Brunswick, NJ From ala at nada.kth.se Thu Sep 21 03:24:10 1989 From: ala at nada.kth.se (Anders Lansner) Date: Thu, 21 Sep 89 09:24:10 +0200 Subject: ABSTRACT - A Bayesian Neural Network Message-ID: <8909210724.AA05832@nada.kth.se> The following paper will appear in the first (March -89) issue of the International Journal for Neural Systems (World Scientific Publishing): A One-Layer Feedback Artificial Neural Network with a Bayesian Learning Rule by Anders Lansner and Örjan Ekeberg Dept. of Numerical Analysis and Computing Science Royal Institute of Technology, Stockholm, Sweden A probabilistic artificial neural network is presented. It is of a one-layer, feedback-coupled type with graded units. The learning rule is derived from Bayes rule. Learning is regarded as collecting statistics and recall as a statistical inference process. Units correspond to events and connections come out as compatibility coefficients in a logarithmic combination rule. The input to a unit via connections from other active units affects the a posteriori belief in the event in question. The new model is compared to an earlier binary model with respect to storage capacity, noise tolerance etc. in a content addressable memory (CAM) task. The new model is a real time network and some results on the reaction time for associative recall are given. The scaling of learning and relaxation operations is considered together with issues related to representation of information in one-layered artificial neural networks. An extension with complex units is discussed. Preprint requests to: Anders Lansner NADA KTH S-100 44 Stockholm SWEDEN Various earlier versions of this model are also described in: Lansner A. and Ekeberg Ö. (1985): Reliability and Speed of Recall in an Associative Network. IEEE Trans. Pattern Analysis and Machine Intelligence 7(4), 490-498. Lansner A. and Ekeberg Ö. (1987): An Associative Network Solving the 4-bit ADDER Problem. Proc. ICNN, II-549, San Diego, June 21-24, 1987. Ekeberg Ö. and Lansner A. (1988): Automatic Generation of Internal Representation in a Probabilistic Artificial Neural Network. Proc. nEuro'88, Neural Networks from Models to Applications, Personnaz L. and Dreyfus G. (eds.), I.D.S.E.T., Paris, 1989, 178-186.
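The abstract's "compatibility coefficients in a logarithmic combination rule" suggest weights that are log-probability ratios, so that summing inputs combines evidence multiplicatively; the sketch below is a generic naive-Bayes-style guess at such a scheme, offered only to fix the idea and not as the paper's actual learning rule (Python with NumPy assumed):

import numpy as np

# Binary training patterns: rows are examples, columns are units/events.
X = np.array([[1, 1, 0, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 1],
              [0, 1, 1, 1]], dtype=float)

n = len(X)
p_i = (X.sum(0) + 1) / (n + 2)       # P(unit i active), Laplace-smoothed counts
p_ij = (X.T @ X + 1) / (n + 4)       # P(units i and j both active), smoothed

W = np.log(p_ij / np.outer(p_i, p_i))   # log compatibility; ~0 if independent
bias = np.log(p_i)

cue = np.array([1, 1, 0, 0], dtype=float)
support = bias + W @ cue             # summed log-evidence for each unit
print(support)                       # higher support = more probable given the cue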
From yann at neural.att.com Thu Sep 21 14:06:00 1989 From: yann at neural.att.com (yann@neural.att.com) Date: Thu, 21 Sep 89 14:06:00 -0400 Subject: ABSTRACT - A Bayesian Neural Network In-Reply-To: Your message of Thu, 21 Sep 89 09:24:10 +0200. Message-ID: <8909211806.AA03687@lesun.> > The following paper will appear in the first (March -89) issue of the > International Journal for Neural Systems (World Scientific Publishing): > > A One-Layer Feedback Artificial Neural Network > with a Bayesian Learning Rule > > by > Anders Lansner and Örjan Ekeberg > Dept. of Numerical Analysis and Computing Science > Royal Institute of Technology, Stockholm, Sweden That reminds me of the following paper: Murakami, K. and Aibara, T. : "Construction of a distributed associative memory on the basis of the Bayes discriminant rule". IEEE Trans. on Pattern Analysis and Machine Intelligence, Vol. PAMI-3, No 2, March 1981. This paper is about a variation of Nakano's "associatron" model (what we now call a Hopfield network) which uses Bayes rule to compute the weights. As far as I remember, their units are binary. - Yann Le Cun yann at neural.att.com From MUMME%IDCSVAX.BITNET at CUNYVM.CUNY.EDU Fri Sep 22 06:47:00 1989 From: MUMME%IDCSVAX.BITNET at CUNYVM.CUNY.EDU (MUMME%IDCSVAX.BITNET@CUNYVM.CUNY.EDU) Date: Fri, 22 Sep 89 03:47 PDT Subject: Squaring the activation function Message-ID: ****** CONNECTIONISTS ONLY DO NOT FORWARD ******* I have heard rumors that some people get faster learning in a Back-prop network by passing the SQUARE of each unit's activation to the following layer. If anyone has tried this and/or can refer me to articles regarding this method and theories thereof, please let me know. I will circulate the results of this inquiry to interested contributors. Thanks! Dean Mumme ***** CONNECTIONISTS ONLY DO NOT FORWARD ****** Dean C. Mumme bitnet: mumme at idcsvax Dept. of Computer Science University of Idaho Moscow, ID 83843
Further information is available from Professor S. Schwartz on (07) 3772884 from within Australia or 61-7-3772884 from outside Australia, or from Dr Michael Humphreys via email at mh at psych.psy.uq.oz.au. Please forward an original plus 7 copies of application and resume to the Director, Personnel Services, The University of Queensland, St Lucia 4067, Qld, Australia. From mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu Wed Sep 27 17:46:29 1989 From: mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu (mclennan%MACLENNAN.CS.UTK.EDU@cs.utk.edu) Date: Wed, 27 Sep 89 17:46:29 EDT Subject: Tech Reports available Message-ID: <8909272146.AA03016@MACLENNAN.CS.UTK.EDU> *** DO NOT FORWARD TO ANY OTHER LISTS *** The following two tech reports are available (abstracts are attached): ------------------------------------------------------------ Bruce J. MacLennan Continuous Computation: Taking Massive Parallelism Seriously Univ. of Tennessee Computer Science Dept. Tech. Report CS-89-83, June 1989, 13 pages. This is based on poster presentation at Los Alamos National Laboratory Center for Nonlinear Studies 9th Annual International Conference, Emergent Computation, Los Alamos, NM, May 22-26, 1989. A latex version is available at cheops.cis.ohio-state.edu in pub/neuroprose: maclennan.contin_comp.tex ------------------------------------------------------------ Bruce J. MacLennan Outline of a Theory of Massively Parallel Analog Computation Univ. of Tennessee Computer Science Dept. Tech. Report CS-89-84, June 1989, 23 pages. This is based on a poster presentation at IJCNN '89. As this report contains non-postscript art, a latex version is not available. ------------------------------------------------------------ Hardcopy versions of both reports are available from: Betsy Holleman, Librarian Department of Computer Science University of Tennessee Knoxville, TN 37996-1301 or library at cs.utk.edu Or write to me at the same address (maclennan at cs.utk.edu). ------------------------------------------------------------ ABSTRACTS Since the contents of the two reports are very similar, one abstract covers both. We present an overview of the theory of Field Computation: mas- sively parallel analog computation in which the number of pro- cessing elements is so large that it may be considered a continu- ous quantity. We pursue this idea for a number of reasons. First, skillful behavior seems to require significant neural mass. Second, we are interested in computers, such as optical computers and molecular computers, for which the number of pro- cessing elements is effectively continuous. Third, continuous mathematics is generally easier than discrete mathematics. And fourth, we want to encourage a new style of thinking about paral- lelism. Currently, we try to apply to parallel machines the thought habits we have acquired from thinking about sequential machines. This strategy works fairly well when the degree of parallelism is low, but it will not scale up. One cannot think individually about the 10^20 processors of a molecular computer. Rather than postpone the inevitable, we think that it's time to develop a theoretical framework for understanding massively parallel analog computers. The principal goal of this report is to outline such a theory. Both reports discuss the basic concept of field computation and the basic principles of universal (general purpose) field comput- ers. 
From munnari!psych.psy.uq.oz.au!janet at uunet.UU.NET Wed Sep 27 06:09:51 1989
From: munnari!psych.psy.uq.oz.au!janet at uunet.UU.NET (Janet Wiles)
Date: Wed, 27 Sep 89 20:09:51 +1000
Subject: cognitive science lectureships
Message-ID: <8909271043.AA05249@uunet.uu.net>

THE UNIVERSITY OF QUEENSLAND
Equal Opportunity in Employment is University Policy

COGNITIVE SCIENCE LECTURERS (Tenurable or fixed term)
Psychology - Computer Science
Psychology - Linguistics

The University of Queensland is planning a major expansion in research and teaching in Cognitive Science and anticipates appointing two new lecturers, subject to availability of funding. The lecturers will be expected to conduct research in an area related to Cognitive Science and to teach undergraduate and postgraduate subjects. We prefer individuals with a computational approach to psychological or linguistic issues such as decision making, grammar, memory, problem solving, semantics, speech perception, vision, or other areas of cognitive science. One appointee will probably be a neural network or connectionist modeller; the other may take any computational approach. One lectureship will be a joint appointment between Psychology and Computer Science; the other will be either in Psychology alone or a joint appointment between Psychology and Linguistics (English).

The University of Queensland is one of the major research universities in Australia and has strong research programs in many areas related to Cognitive Science. Substantial research facilities and support are available. Applicants should have a PhD in a relevant area and should have a record of, or show promise of, conducting high-quality research.

Salary: $31,258-$40,621 per annum. The appointments will be either tenurable or fixed term. Closing date: October 31, 1989. Ref. No: 43689.

Further information is available from Professor S. Schwartz on (07) 3772884 from within Australia or 61-7-3772884 from outside Australia, or from Dr Michael Humphreys via email at mh at psych.psy.uq.oz.au. Please forward an original plus seven copies of the application and resume to the Director, Personnel Services, The University of Queensland, St Lucia 4067, Qld, Australia.

From mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu Wed Sep 27 17:46:29 1989
From: mclennan%MACLENNAN.CS.UTK.EDU at cs.utk.edu (mclennan%MACLENNAN.CS.UTK.EDU@cs.utk.edu)
Date: Wed, 27 Sep 89 17:46:29 EDT
Subject: Tech Reports available
Message-ID: <8909272146.AA03016@MACLENNAN.CS.UTK.EDU>

*** DO NOT FORWARD TO ANY OTHER LISTS ***

The following two tech reports are available (abstracts are attached):
------------------------------------------------------------
Bruce J. MacLennan
Continuous Computation: Taking Massive Parallelism Seriously
Univ. of Tennessee Computer Science Dept. Tech. Report CS-89-83, June 1989, 13 pages.
This is based on a poster presentation at the Los Alamos National Laboratory Center for Nonlinear Studies 9th Annual International Conference, Emergent Computation, Los Alamos, NM, May 22-26, 1989. A LaTeX version is available at cheops.cis.ohio-state.edu in pub/neuroprose: maclennan.contin_comp.tex
------------------------------------------------------------
Bruce J. MacLennan
Outline of a Theory of Massively Parallel Analog Computation
Univ. of Tennessee Computer Science Dept. Tech. Report CS-89-84, June 1989, 23 pages.
This is based on a poster presentation at IJCNN '89. As this report contains non-PostScript art, a LaTeX version is not available.
------------------------------------------------------------
Hardcopy versions of both reports are available from:

Betsy Holleman, Librarian
Department of Computer Science
University of Tennessee
Knoxville, TN 37996-1301
or library at cs.utk.edu

Or write to me at the same address (maclennan at cs.utk.edu).
------------------------------------------------------------
ABSTRACTS

Since the contents of the two reports are very similar, one abstract covers both.

We present an overview of the theory of Field Computation: massively parallel analog computation in which the number of processing elements is so large that it may be considered a continuous quantity. We pursue this idea for a number of reasons. First, skillful behavior seems to require significant neural mass. Second, we are interested in computers, such as optical computers and molecular computers, for which the number of processing elements is effectively continuous. Third, continuous mathematics is generally easier than discrete mathematics. And fourth, we want to encourage a new style of thinking about parallelism. Currently, we try to apply to parallel machines the thought habits we have acquired from thinking about sequential machines. This strategy works fairly well when the degree of parallelism is low, but it will not scale up. One cannot think individually about the 10^20 processors of a molecular computer. Rather than postpone the inevitable, we think that it's time to develop a theoretical framework for understanding massively parallel analog computers. The principal goal of this report is to outline such a theory.

Both reports discuss the basic concept of field computation and the basic principles of universal (general purpose) field computers. In addition, CS-89-83 discusses representation of constituent structure, learning, and field computation versions of simulated annealing and a kind of genetic algorithm. CS-89-84 discusses ways of avoiding fields of high dimension, and implementation of field computations in terms of neurons with conjunctive synapses.
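A concrete way to get a feel for field computation, not taken from the reports themselves: treat a field as a function sampled on a grid so fine that the grid size stops mattering, and treat a field transformation as an integral operator acting on it. The Gaussian kernel, grid size, and input field below are arbitrary illustrative choices.

import numpy as np

# A field is a continuous distribution of activity phi(x); here we sample it
# on a grid fine enough that the "units" behave like a continuum.
N = 2000                                   # processing elements ~ continuous
x = np.linspace(0.0, 1.0, N)
phi = np.sin(2 * np.pi * x) + 0.3 * np.sin(10 * np.pi * x)   # input field

# A linear field computation: (K phi)(y) = integral k(y, u) phi(u) du,
# here with a Gaussian kernel, i.e., smoothing as an integral operator.
sigma = 0.05
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * sigma ** 2))
K /= K.sum(axis=1, keepdims=True)          # normalize the discretized integral

psi = K @ phi                              # output field: high frequencies damped

# Doubling N changes psi only marginally: the computation is effectively
# defined on the continuum, not on any particular number of units.
print(np.abs(phi).max(), np.abs(psi).max())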
From harnad at clarity.Princeton.EDU Wed Sep 27 23:30:45 1989
From: harnad at clarity.Princeton.EDU (Stevan Harnad)
Date: Wed, 27 Sep 89 23:30:45 EDT
Subject: Searle's Problem and Fodor's Problem
Message-ID: <8909280330.AA05837@psycho.Princeton.EDU>

Searle's Problem vs. Fodor's Problem

Both the symbolic (S) and the connectionistic (C) approaches to modeling the mind seem to suffer from their own respective fatal handicap:

S suffers from Searle's Problem: symbols have no intrinsic meaning; they are ungrounded. Their meanings are parasitic on the meanings in our heads, which clearly do have intrinsic meaning.

C suffers from Fodor's Problem: connectionist "representations" lack systematicity, unlike the meanings in our heads, which clearly do have systematicity.

My proposal is a very particular kind of hybrid approach in which C is given only the limited and nonrepresentational role of feature learning, a role to which it is naturally suited. The need for systematicity (Fodor's Problem) never arises for C. S then enters as a DEDICATED symbol system (one whose primitive symbol-tokens have additional nonsymbolic constraints on them). The nonsymbolic constraints are what GROUND S (thereby avoiding Searle's Problem), through the connections between the primitive symbol tokens and the feature detectors that pick out the objects to which they refer from their sensory projections.

The question of learning and learnability is clearly critical in all this. Fodor is satisfied with a radical nativism for most of our concepts. That's not surprising, because he accepts the "vanishing intersections" argument against the existence of critical features (especially sensory ones) that pick out objects. I think C may allow the first actual TEST of whether feature intersections really vanish; I don't think that is decidable from the armchair. In any case, whether feature learning took place during evolution or takes place during the lifetime of an organism does not much matter (the answer is probably that there is some of each). What matters is whether features are learnable at all. I'm still betting they are, and that our sensory and conceptual categories are not just "Spandrels."

Stevan Harnad
-------

From terry%sdbio2 at ucsd.edu Thu Sep 28 22:27:46 1989
From: terry%sdbio2 at ucsd.edu (Terry Sejnowski)
Date: Thu, 28 Sep 89 19:27:46 PDT
Subject: Neural Computation - Vol. 1, No. 3 Contents
Message-ID: <8909290227.AA09517@sdbio2.UCSD.EDU>

Neural Computation, Volume 1, Number 3 - October 1, 1989

Reviews
  Unsupervised Learning
    H. B. Barlow
  The Vapnik-Chervonenkis Dimension: Information versus Complexity in Learning
    Yaser S. Abu-Mostafa

Note
  Linking Linear Threshold Units with Quadratic Models of Motion Perception
    Humbert Suarez and Christof Koch

Letters
  Surface Interpolation in Three-Dimensional Structure-from-Motion Perception
    Masud Husain, Stefan Treue, and Richard A. Andersen
  A Winner-Take-All Mechanism Based on Presynaptic Inhibition Feedback
    Alan L. Yuille and Norberto M. Grzywacz
  An Analysis of the Elastic Net Approach to the Traveling Salesman Problem
    Richard Durbin, Richard Szeliski, and Alan Yuille
  The Storage of Time Intervals Using Oscillating Neurons
    Christopher Miall
  Finite State Automata and Simple Recurrent Networks
    Axel Cleeremans, David Servan-Schreiber, and James L. McClelland
  Asymptotic Convergence of Backpropagation
    Gerald Tesauro, Yu He, and Subutai Ahmad
  Learning by Assertion: Two Methods for Calibrating a Linear Visual System
    Laurence T. Maloney and Albert J. Ahumada
  How to Generate Ordered Maps by Maximizing the Mutual Information between Input and Output
    Ralph Linsker
  Finding Minimum Entropy Codes
    H. B. Barlow, T. P. Kaushal, and G. J. Mitchison

From Connectionists-Request at CS.CMU.EDU Fri Sep 29 09:49:05 1989
From: Connectionists-Request at CS.CMU.EDU (Connectionists-Request@CS.CMU.EDU)
Date: Fri, 29 Sep 89 09:49:05 EDT
Subject: Please include return addresses
Message-ID: <25579.623080145@B.GP.CS.CMU.EDU>

In order to minimize the number of responses sent to the entire list, I'd like to ask that people explicitly include their email address in their messages. Some mailers are unable to generate the appropriate return address for messages routed through Connectionists at cs.cmu.edu.

Thanks,
David Plaut
Connectionists-Request at cs.cmu.edu (ARPAnet)

From movellan at garnet.berkeley.edu Fri Sep 29 17:52:26 1989
From: movellan at garnet.berkeley.edu (movellan@garnet.berkeley.edu)
Date: Fri, 29 Sep 89 14:52:26 PDT
Subject: IJCNN-90
Message-ID: <8909292152.AA16768@garnet.berkeley.edu>

I am trying to contact the reviewing committee for the IJCNN-90. I am using (619) 451-3752, but nobody answers the phone. Does anybody know whether they have a different number? Thanks --Javier.

From rudnick at cse.ogc.edu Fri Sep 29 20:22:45 1989
From: rudnick at cse.ogc.edu (Mike Rudnick)
Date: Fri, 29 Sep 89 17:22:45 PDT
Subject: applications to DNA, RNA and proteins
Message-ID: <8909300022.AA18343@cse.ogc.edu>

I'm looking for literature pointers to applications of artificial neural networks to recognition tasks involving DNA, RNA, or proteins. Please respond to me directly, and if there is interest I will post a summary. Thanks,

Mike Rudnick                      CSnet: rudnick at cse.ogc.edu
Computer Science & Eng. Dept.     UUCP: {tektronix,verdix}!ogccse!rudnick
Oregon Graduate Center            (503) 690-1121 X7390 (or X7309)
19600 N.W. von Neumann Dr.
Beaverton, OR. 97006-1999

From jim%cs.st-andrews.ac.uk at NSFnet-Relay.AC.UK Thu Sep 28 18:53:03 1989
From: jim%cs.st-andrews.ac.uk at NSFnet-Relay.AC.UK (Jim Bairaktaris)
Date: Thu, 28 Sep 89 18:53:03 BST
Subject: No subject
Message-ID: <12574.8909281753@tamdhu.cs.st-andrews.ac.uk>

Subject: Room to share at 23rd HICSS conference?

Is anybody on this mailing list going to attend the 23rd Hawaii International Conference on System Sciences? If so, is he/she willing to share a room with me to cut the costs of accommodation? If he/she is a connectionist, even better.

Please reply by e-mail to: jim%uk.ac.st-and.cs
or write to:
Dimitrios Bairaktaris
University of St.Andrews
Computational Science
North Haugh
FIFE KY16 9SS
Scotland
or telephone: (+44) 334 76161 ext. 8106

Thank you
Dimitrios