From jacobs at marfak.crd.ge.com Mon Jul 3 13:37:32 1989
From: jacobs at marfak.crd.ge.com (jacobs)
Date: Mon, 3 Jul 89 13:37:32 EDT
Subject: Seminar notice
Message-ID: <8907031737.AA06443@marfak.steinmetz.ge.com>

          Neural Networks and High-Level Cognitive Tasks

                   Robert B. Allen, Bellcore

              Thursday, July 6, 10am, Guest House
       GE Research and Development Center, Schenectady, NY

While connectionist networks are clearly applicable to signal
processing tasks, they have been claimed not to be relevant to
high-level cognitive tasks.  However, the networks' ability to adapt
to context and the parsimony of a vertically integrated cognitive
model make their use for high-level tasks worth careful
investigation.  This talk reviews the author's work with temporal
networks on applications including 4-term analogies, agent modeling,
agent interaction, grammars, planning, plan recognition, and
'language use'.  In addition, novel architectures and procedures,
such as adaptive training and a new reinforcement technique, will be
described.  While the models to be reported have substantial
limitations, the scope and relative ease with which results have
been obtained seem promising.


From hinton at ai.toronto.edu Thu Jul 6 08:18:29 1989
From: hinton at ai.toronto.edu (Geoffrey Hinton)
Date: Thu, 6 Jul 89 08:18:29 EDT
Subject: 6 month post-doc job
Message-ID: <89Jul6.105056edt.10862@ephemeral.ai.toronto.edu>

                 CONNECTIONIST POST-DOC POSITION

(If you know of individuals who might be interested but are not on
the connectionists mailing list, please forward this to them.)

The connectionist research group at the University of Toronto is
looking for a post-doctoral researcher for a period of six months
starting on January 1, 1990.  The ideal candidate would have the
following qualifications:

1. A significant amount of experience running connectionist
   simulations, preferably in a unix/C environment, and a
   willingness to use the Toronto Research Simulator (not publicly
   available).

2. Some knowledge of neuropsychology.

3. A genuine desire to spend six months working intensively on
   connectionist simulations that explain neuropsychological
   phenomena.  Examples of the types of syndrome we are interested
   in are given in Shallice, T., "From Neuropsychology to Mental
   Structure", Cambridge, 1988.

4. A PhD that is already completed or will clearly be completed by
   January 1, 1990.

The starting date is inflexible because the job is designed to
coincide with a six-month visit to the University of Toronto by Tim
Shallice.  Also, it will not be possible to finish off a PhD or
convert a recent PhD into journal papers during the six-month
period.

For the right person, this would be an excellent opportunity to work
in a leading connectionist group with excellent simulation
facilities and in close collaboration with a neuropsychologist who
has a detailed understanding of connectionist models.  One example
of the kind of research we have in mind is described in "Lesioning a
connectionist network: Investigations of acquired dyslexia" by
Hinton and Shallice.  To order this TR, send email requesting
CRG-TR-89-3 to carol at ai.toronto.edu

Applications should be made in writing to

    Geoffrey Hinton
    Department of Computer Science
    University of Toronto
    10 Kings College Road
    Toronto, Ontario, M5S 1A4
    Canada

Please enclose a full CV, a copy of a recent relevant TR or paper,
and the names, addresses, and phone numbers of three referees.  The
salary is negotiable, but will be approximately $20,000 for six
months for a person with a PhD.
I will be in Europe until the end of July, so no replies will be
forthcoming for a while.

Geoff Hinton


From cpd at aic.hrl.hac.com Tue Jul 11 11:35:23 1989
From: cpd at aic.hrl.hac.com (cpd@aic.hrl.hac.com)
Date: Tue, 11 Jul 89 08:35:23 -0700
Subject: Tech Report Available
Message-ID: <8907111535.AA00430@aic.hrl.hac.com>

*** DO NOT RE-POST TO OTHER B-BOARDS ***
*** DO NOT RE-POST TO OTHER B-BOARDS ***

Title: Tensor Manipulation Networks: Connectionist and Symbolic
       Approaches to Comprehension, Learning, and Planning

Availability: This tech report is available from both the Hughes
Research Laboratories and the UCLA AI Lab.  Requests for mailings
outside the US should be sent to UCLA, because Hughes requires some
non-trivial paperwork for such mailings.  Otherwise, please flip a
coin so that we may stochastically equalize mailing costs.

    From UCLA:   valerie at cs.ucla.edu
    From Hughes: tech-reports at aic.hrl.hac.com

Abstract:

It is a controversial issue which of two approaches, the Physical
Symbol System Hypothesis (PSSH) or Parallel Distributed Processing
(PDP), is the better characterization of the mind.  At the root of
this controversy are two questions: (1) what sort of computer is the
brain, and (2) what sort of programs run on that computer?  What is
presented here is a theory which bridges the apparent gap between
the PSSH and PDP approaches.  In particular, a computer is presented
that adheres to the constraints of PDP computation (a network of
simple processing units), and a program is presented which at first
glance is suitable only for a PSSH computer but which runs on a PDP
computer.  The approach presented here, vertical integration, shows
how to construct PDP computers that can process symbols and how to
design symbol systems so that they will run on more brain-like
computers.

The type of computer presented here is called a tensor manipulation
network.  It is a special type of PDP network in which the operation
of the network is interpreted as manipulations of high-rank tensors
(generalized vector outer products).  The operations on tensors are
in turn interpreted as operations on symbol structures.  A wide
range of tensor manipulation architectures is presented with the
goal of inducing constraints on the symbol structures that it is
possible for the mind to possess.

As a demonstration of what is possible with constrained symbol
structures, a program, CRAM, is presented which uses and acquires
thematic knowledge.  CRAM is able to read, in English,
single-paragraph, fable-like stories and either give a thematically
relevant summary or generate planning advice for a character in the
story.  CRAM is also able to learn new themes by combining existing,
known themes encountered in the fables it reads.  CRAM demonstrates
that even the most symbolic cognitive tasks can be accomplished with
PDP networks, if the networks are designed properly.
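The report itself is the reference for the full construction, but
the core outer-product idea is easy to see in miniature.  Below is a
minimal C sketch of rank-2 tensor binding in the style of
tensor-product representations: a role/filler pair is stored as the
outer product of two vectors, bindings are superposed by addition,
and a filler is recovered by contracting the memory with a role
vector.  The dimensions, role names, and vectors are illustrative
assumptions, not taken from the report.

/* Minimal sketch of rank-2 tensor (outer-product) binding.
 * Dimensions and the orthonormal-roles assumption are illustrative. */
#include <stdio.h>

#define N 4  /* vector dimension (illustrative) */

/* memory += role (outer) filler */
void bind(double mem[N][N], const double role[N], const double filler[N])
{
    for (int i = 0; i < N; i++)
        for (int j = 0; j < N; j++)
            mem[i][j] += role[i] * filler[j];
}

/* out = mem contracted with role; exact if roles are orthonormal */
void unbind(const double mem[N][N], const double role[N], double out[N])
{
    for (int j = 0; j < N; j++) {
        out[j] = 0.0;
        for (int i = 0; i < N; i++)
            out[j] += role[i] * mem[i][j];
    }
}

int main(void)
{
    double mem[N][N] = {{0}};
    /* orthonormal role vectors for "agent" and "patient" */
    double agent[N]   = {1, 0, 0, 0};
    double patient[N] = {0, 1, 0, 0};
    double john[N]    = {0.5, 0.5, 0.5, 0.5};
    double mary[N]    = {0.5, -0.5, 0.5, -0.5};
    double out[N];

    bind(mem, agent, john);    /* John as agent   */
    bind(mem, patient, mary);  /* Mary as patient */

    unbind(mem, agent, out);   /* recovers john[] exactly here */
    for (int j = 0; j < N; j++) printf("%5.2f ", out[j]);
    printf("\n");
    return 0;
}

Exact retrieval in this sketch depends on the role vectors being
orthonormal; with merely linearly independent roles one would
contract with a dual basis instead, and with noisy roles retrieval
becomes approximate.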
From keith%epistemi.edinburgh.ac.uk at NSFnet-Relay.AC.UK Thu Jul 13 08:40:00 1989
From: keith%epistemi.edinburgh.ac.uk at NSFnet-Relay.AC.UK (Keith Stenning)
Date: Thu, 13 Jul 89 13:40:00 +0100
Subject: Connectionist Unification
Message-ID: <24364.8907131240@epistemi.ed.ac.uk>

Some weeks ago there was a message on the CMU net about a TR on a
connectionist implementation of unification algorithms.
Unfortunately, I lost my copy and cannot trace the origin.  Can
anyone help me?

Thanks in advance,

Keith Stenning
Centre for Cognitive Science
Edinburgh University
Edinburgh, Scotland
keith at uk.ac.ed.epistemi


From tedwards at cmsun.nrl.navy.mil Thu Jul 13 14:59:00 1989
From: tedwards at cmsun.nrl.navy.mil (Thomas Edwards)
Date: Thu, 13 Jul 89 14:59:00 EDT
Subject: Linesearch
Message-ID: <8907131859.AA00595@cmsun.nrl.navy.mil>

That last message should have read "linesearch" instead of
"linesort".  Sorry.


From tedwards at cmsun.nrl.navy.mil Thu Jul 13 14:57:53 1989
From: tedwards at cmsun.nrl.navy.mil (Thomas Edwards)
Date: Thu, 13 Jul 89 14:57:53 EDT
Subject: Linesearches in Conjugate Gradient
Message-ID: <8907131857.AA00511@cmsun.nrl.navy.mil>

Can anyone recommend a linesort for use in conjugate gradient neural
network error minimization methods?  How about one which is easily
parallelizable?

-Thomas Edwards
 tedwards at cmsun.nrl.navy.mil
 ins_atge at jhuvms.BITNET
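By way of illustration, one simple and widely used choice is a
backtracking line search with the Armijo sufficient-decrease test,
sketched below in C.  The function names, constants, and the toy
quadratic error are assumptions of the sketch, not from any
particular simulator or package.

/* Backtracking (Armijo) line search along a search direction, of
 * the kind commonly used inside conjugate-gradient weight updates.
 * E() is the network error as a function of the weight vector. */
#include <stdio.h>
#include <stdlib.h>

double armijo_step(double (*E)(const double *w, int n),
                   const double *w, const double *grad,
                   const double *dir, int n, double step0)
{
    const double c = 1e-4, shrink = 0.5;
    double e0 = E(w, n), slope = 0.0, step = step0;
    double *trial = malloc(n * sizeof *trial);

    for (int i = 0; i < n; i++)        /* directional derivative; */
        slope += grad[i] * dir[i];     /* negative for descent    */

    while (step > 1e-12) {
        for (int i = 0; i < n; i++)    /* candidate weight vector */
            trial[i] = w[i] + step * dir[i];
        if (E(trial, n) <= e0 + c * step * slope)
            break;                     /* sufficient decrease */
        step *= shrink;
    }
    free(trial);
    return step;
}

/* toy example: E(w) = w.w, minimized from w = (1,1) along -gradient */
static double quad(const double *w, int n)
{
    double e = 0.0;
    for (int i = 0; i < n; i++) e += w[i] * w[i];
    return e;
}

int main(void)
{
    double w[2] = {1.0, 1.0};
    double g[2] = {2.0, 2.0};          /* gradient of E at w   */
    double d[2] = {-2.0, -2.0};        /* steepest-descent dir */
    printf("accepted step: %g\n", armijo_step(quad, w, g, d, 2, 1.0));
    return 0;
}

On the parallelism question: the dominant cost is each call to E (a
full forward pass over the training set), which parallelizes over
patterns; alternatively, a batch of candidate step lengths can be
evaluated concurrently and the largest acceptable one taken.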
From buescher at bellman.csl.uiuc.edu Wed Jul 19 16:29:25 1989
From: buescher at bellman.csl.uiuc.edu (Kevin Buescher)
Date: Wed, 19 Jul 89 15:29:25 CDT
Subject: No subject
Message-ID: <8907192029.AA02608@bellman>

Dear Sirs:

I have been told that you may be able to put me in contact with a
local connectionist group (or mailing list, or forum, or...) here at
the University of Illinois at Urbana-Champaign.  Could you help me
in this regard?  Sorry to be so vague.

Thanks,
Kevin Buescher
buescher at bellman.csl.uiuc.edu


From rsun at cs.brandeis.edu Wed Jul 19 10:42:49 1989
From: rsun at cs.brandeis.edu (Ron Sun)
Date: Wed, 19 Jul 89 10:42:49 edt
Subject: No subject
Message-ID:

Is there anyone out there willing to share a room at IJCAI (around
2-4 persons, from 8/19 to 8/23), preferably at the Days Inn or the
Shorecrest Motor Inn?  If you are interested, please tell me whether
you have already made a reservation or whether I should go ahead and
make one.  ASAP.

Ron


From kroger at cs.utexas.edu Thu Jul 20 11:46:41 1989
From: kroger at cs.utexas.edu (Jim Kroger)
Date: Thu, 20 Jul 89 10:46:41 CDT
Subject: No subject
Message-ID: <8907201546.AA05667@mothra.cs.utexas.edu>

I can do that, although I am not the person you intended.  I know
because I just applied to grad school there (although I am going
somewhere else).  Do the following:

1. Call the psych dept.
2. Ask to speak to any professor.
3. Ask the professor who in the psych dept. is doing connectionism.
   If he doesn't know, ask who is in the cognitive science area.
   Then ask that person how to get in touch with connectionists at
   the university.
4. This method is guaranteed to work.


From tony%psy.glasgow.ac.uk at NSFnet-Relay.AC.UK Fri Jul 28 17:03:59 1989
From: tony%psy.glasgow.ac.uk at NSFnet-Relay.AC.UK (Tony Sanford)
Date: Fri, 28 Jul 89 17:03:59 BST
Subject: learning new material
Message-ID: <123.8907281603@buzzard.psy.glasgow.ac.uk>

I would be interested to hear from anyone working on the problem of
adding new information to a network with minimal impact on what has
already been learned.  I seem to recall some work on weight decay
which was relevant, but I have lost the reference(s).  Could anyone
working on the problem please give me some leads?  Please mail
replies to tony at psy.glasgow.ac.uk unless the material is good for
broadcast.

Tony Sanford
Dept. of Psychology
University of Glasgow
Glasgow G12
Scotland


From jfeldman%icsia12.Berkeley.EDU at berkeley.edu Fri Jul 28 13:32:42 1989
From: jfeldman%icsia12.Berkeley.EDU at berkeley.edu (Jerry Feldman)
Date: Fri, 28 Jul 89 10:32:42 PDT
Subject: languages
Message-ID: <8907281732.AA09704@icsia12>

Some months ago, I suggested a benchmark task of learning finite
state languages from examples.  It was noted that the task is
provably feasible from lexicographically ordered examples and
provably intractable from arbitrary presentations.  Two recent
theoretical results show that the case of arbitrary presentation is
not even approximately solvable in any reasonable way.  The papers
are by Kearns and Valiant (STOC 89) and by Pitt and Warmuth
(U. Illinois TR ENG-89-1718).  The proofs rely on very unnatural
examples, like exactly one crafted perverse string.  Thus one should
not be discouraged from seeking solutions to more reasonable cases,
but one should be very careful about the claims made.

We are currently working on a task which is both easier and much
harder than the original abstract one.  The goal is to have a system
learn a tiny fragment of natural language describing simple 2D
pictures from examples of sentence-picture pairs.  The system is
supposed to work on presentations in any natural language, but our
initial work is more constrained.  If people are interested, we
could send or post more details.

Jerry F.
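For readers who want to experiment with the feasible case, a
lexicographically ordered presentation of a regular language is easy
to generate.  The C sketch below enumerates all strings over {a,b}
in length-then-lexicographic order and labels each with a hand-coded
DFA; the target language (an even number of b's) is an arbitrary
illustrative choice, not one of the benchmark languages.

/* Generate a lexicographically ordered, labeled presentation of a
 * tiny regular language.  '+' marks strings the DFA accepts. */
#include <stdio.h>

/* DFA: state 0 = even number of b's (accepting), state 1 = odd */
int accepts(const char *s)
{
    int state = 0;
    for (; *s; s++)
        if (*s == 'b') state = 1 - state;
    return state == 0;
}

int main(void)
{
    char buf[8];
    for (int len = 0; len <= 3; len++) {
        for (int code = 0; code < (1 << len); code++) {
            for (int i = 0; i < len; i++)   /* bit 0 -> 'a', 1 -> 'b' */
                buf[i] = ((code >> (len - 1 - i)) & 1) ? 'b' : 'a';
            buf[len] = '\0';
            printf("%-4s %c\n", len ? buf : "e",  /* e = empty string */
                   accepts(buf) ? '+' : '-');
        }
    }
    return 0;
}

Feeding a learner the labeled pairs in exactly this order is what
distinguishes the provably feasible case from the arbitrary
presentations discussed above.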
From josh at flash.bellcore.com Fri Jul 28 16:41:32 1989
From: josh at flash.bellcore.com (Joshua Alspector)
Date: Fri, 28 Jul 89 16:41:32 EDT
Subject: Postdoc position at Bellcore
Message-ID: <8907282041.AA15845@flash.bellcore.com>

           POSTDOCTORAL POSITION IN NEURAL NET RESEARCH

The Adaptive Systems Research Group at Bellcore, Morristown, NJ is
looking for a postdoctoral researcher for a period of 1-2 years
starting approximately November 1989.  Bellcore has a stimulating
research environment for neural computation, with active programs in
neural network theory and analysis, in applications such as speech
recognition and expert systems, and in optical and electronic
implementation.  This is an excellent opportunity for a researcher
to be exposed to this environment while contributing to the effort
in VLSI implementation.

A test chip that implements a neural network learning algorithm
related to the Boltzmann machine has been designed, fabricated, and
tested.  The next step is to implement a useful, multiple-chip
system that can learn to solve difficult artificial intelligence
problems.  We will extend our study of electronic implementation
issues to large-scale systems using a three-pronged approach:

1) Further development of learning algorithms and architectures
   suitable for modular VLSI implementation.  To be useful,
   algorithms must be implementable because learning by example
   takes too long using serial computer simulations.  Therefore, the
   algorithms should take into account the constraints imposed by
   VLSI.

2) Functional simulation of large-scale hardware systems using
   benchmark test problems.  We will build a computer-based
   development system for testing algorithms in software.  This will
   be composed of software modules, some of which eventually will be
   replaced by hardware learning modules.  A computation module may
   be run on a remote parallel machine.  This will serve as a
   platform for algorithm development, will perform functional
   simulation of a hardware system before design, and also will be
   the front end for testing the chips and boards after fabrication.

3) Design and fabrication of prototype chips suitable for inclusion
   in such systems.  As a first step in the development of
   large-scale, modular VLSI systems, our learning test chip will be
   expanded to contain more neurons and synapses and to enable
   construction of a multichip system.  This system would be taken
   to board-level design and fabrication.

Evaluation will involve a speed comparison, using a variety of
benchmarks, of three neural network implementations: software on a
serial machine, software on a general-purpose parallel machine, and
special-purpose neural hardware using the board-level system we
build.  Chip and board design will be carried out using a
combination of sophisticated VLSI CAD tools.

The successful candidate should be involved in many aspects of this
work, including the design of algorithms and architectures for VLSI
neural implementation, computer programming to simulate and test the
existing and proposed neural architectures, and the design of analog
and digital chips to implement them.  He or she should be capable of
doing independent publishable research in neural network learning
theory, in parallel software simulation, in applications of neural
information processing, or in VLSI implementations of neural network
learning models.

Please enclose a resume, a copy of a recent paper, and the names,
addresses, and phone numbers of three references.  Send applications
to:

    Joshua Alspector
    Bellcore, MRE 2E-378
    445 South St.
    Morristown, NJ 07960-1910


From jose at tractatus.bellcore.com Sun Jul 30 09:22:29 1989
From: jose at tractatus.bellcore.com (Stephen J Hanson)
Date: Sun, 30 Jul 89 09:22:29 -0400
Subject: languages
Message-ID: <8907301322.AA17909@tractatus.bellcore.com>

But don't such proofs have a lot to do with the "representation
function" one chooses to do the learning with?  Assuming some other
such function, would the results of such proofs change?  I know of a
number of cases where people have been teaching FSMs to networks:
David Servan-Schreiber & Jay McClelland, Bob Allen, Mike Mozer...  I
would be interested in what you think about these cases relative to
the proofs that claim such solutions can't exist, and in the nature
of your results.

Steve
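For reference, the FSM-learning studies mentioned above typically
use a simple recurrent network, in which the hidden layer receives
the current input symbol along with a copy of its own previous
activations and can therefore carry automaton-like state.  A minimal
C sketch of the forward step follows; the layer sizes and weights
are illustrative assumptions, and training (e.g., backpropagation
through time) is omitted.

/* Forward step of a simple recurrent (Elman-style) network: the
 * hidden state h is updated from the current input x and the
 * previous h, so the network can track FSM-like state. */
#include <math.h>
#include <stdio.h>

#define NI 2   /* input symbols, one-hot */
#define NH 3   /* hidden/state units     */

void srn_step(const double Wih[NH][NI], const double Whh[NH][NH],
              const double bias[NH], const double x[NI], double h[NH])
{
    double next[NH];
    for (int i = 0; i < NH; i++) {
        double net = bias[i];
        for (int j = 0; j < NI; j++) net += Wih[i][j] * x[j];
        for (int j = 0; j < NH; j++) net += Whh[i][j] * h[j];
        next[i] = 1.0 / (1.0 + exp(-net));    /* logistic unit */
    }
    for (int i = 0; i < NH; i++) h[i] = next[i];  /* new state */
}

int main(void)
{
    /* arbitrary illustrative weights; real ones come from training */
    double Wih[NH][NI] = {{ 2, -2}, {-2, 2}, { 1, 1}};
    double Whh[NH][NH] = {{ 0, 1, 0}, { 1, 0, 0}, { 0, 0, 1}};
    double bias[NH]    = {-1, -1, -1};
    double h[NH] = {0, 0, 0};              /* initial context */
    double a[NI] = {1, 0}, b[NI] = {0, 1}; /* one-hot symbols */

    srn_step(Wih, Whh, bias, a, h);        /* process 'a' */
    srn_step(Wih, Whh, bias, b, h);        /* then 'b'    */
    printf("state: %.3f %.3f %.3f\n", h[0], h[1], h[2]);
    return 0;
}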