Connection Science

Lyn Shackleton lyn at CS.EXETER.AC.UK
Wed Jun 7 13:36:25 EDT 1989


ANNOUNCEMENT

Issue 1 of the new journal CONNECTION SCIENCE has just gone to press, and
Issue 2 will follow shortly. The editors are very pleased with the response
they have received and would welcome more high-quality submissions and
theoretical notes.


VOLUME 1 NUMBER 1 CONTENTS

Michael C Mozer & Paul Smolensky
    'Using Relevance to Reduce Network Size Automatically'
James Hendler
    'The Design and Implementation of Symbolic Marker-Passing Systems'
Eduardo R Caianiello, Patrik E Eklund & Aldo G S Ventre
    'Implementations of the C-Calculus'
Charles P Dolan & Paul Smolensky
    'Tensor Product Production System: A Modular Architecture and
     Representation'
Christopher J Thornton
    'Learning Mechanisms which Construct Neighbourhood Representations'
Ronald J Williams & David Zipser
    'Experimental Analysis of the Real-Time Recurrent Learning
     Algorithm'


Editor:  Dr NOEL E SHARKEY, Centre for Connection Science, Dept of Computer
Science, University of Exeter, UK

Associate Editors:
Andy CLARK (University of Sussex, Brighton, UK)
Gary COTTRELL (University of California, San Diego, USA)
James A HENDLER (University of Maryland, USA)
Ronan REILLY (St Patrick's College, Dublin, Ireland)
Richard SUTTON (GTE Laboratories, Waltham, MA, USA)

FORTHCOMING IN VOLUMES 1 & 2
Special Issue on Natural Language, edited by Ronan Reilly & Noel Sharkey

Special Issue on Hybrid Symbolic/Connectionist Systems, edited by
James Hendler


For further details please contact:


lyn shackleton
(assistant editor)

Centre for Connection Science       JANET:  lyn at uk.ac.exeter.cs
Dept. Computer Science
University of Exeter                UUCP:   !ukc!expya!lyn
Exeter EX4 4PT
Devon                    BITNET: lyn at cs.exeter.ac.uk.UKACRL
U.K.

Date: Thu, 8 Jun 89 09:14:22 EDT
From: harnad at Princeton.EDU
Message-Id: <8906081314.AA08666 at clarity.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: Categorization and Supervision

           CATEGORIZATION AND SUPERVISION

The question of the definition and nature of supervised vs.
unsupervised learning touches on some analogous points in the theory of
categorization. Perhaps some of the underlying logic is more obvious
in the case of categorization:

There are two kinds of categorization tasks, one of which I've dubbed
"imposed" categorization and the other "ad lib" categorization
(otherwise known as similarity judgment). In both kinds of
categorization task a set of inputs is sorted into a set of
categories, but in imposed categorization there is feedback about
whether the sorting is correct or incorrect (and when it is incorrect,
there is usually also information indicating which category would have
been correct), whereas in ad lib categorization there is no feedback (and
often no nonarbitrary criterion for "correctness" either): The subject
is simply asked to sort the inputs into categories as he sees fit.
(Sometimes even less is asked for in ad lib categorization: The
subject just rates pairs of inputs on how similar he judges them, and
then the "categorization" is inferred by some suitable multidimensional
statistical analysis such as scaling or cluster analysis.)

The logical structure and informational demands of imposed and ad lib
categorization are very different. Although both are influenced by
learning, it is clear that imposed categorization (except if it is
inborn) depends critically on learning through feedback from the external
consequences of MIScategorization. Inputs are provisionally sorted and
labeled, the consequences of the sorting and labeling somehow matter
to the organism, and the feedback from incorrect sorting and labeling
gives rise to a learning process that converges eventually on correct
sorting and labeling. The categories are "imposed" by the
environmental consequences of miscategorization. The category "label" can of
course be any operant response, so imposed categorization is the
paradigm for learned operant behaviors as well as many evolved sensorimotor
adaptations (i.e. Skinner's "selection by consequences," although Skinner
of course provides no mechanism for the learning, whereas connectionism seems
to have some candidates).

Ad lib categorization, on the other hand, does not depend directly on
learning (although it is no doubt influenced by the outcome of prior
imposed category learning). Logically, all it depends on is passive
exposure to the inputs; by definition there are no consequences
arising from miscategorization (if there are, then we are back into
supervised learning).

One last point: Learning an imposed categorization is usually
dependent on finding the stimulus features that will reliably sort the
inputs into the correct categories. An imposed categorization problem
is "hard" to the degree to which this is not a trivial task. (Put
another way, it depends on how "underdetermined" the categorization
problem is.) A trivial imposed categorization problem is one in which
the ad lib categorization happens to be a solution to the imposed
categorization as well, i.e., the perceived similarity structure of
the input set is already such that it "relaxes" into the right sorting
through mere exposure, without the need for feedback.

I think this last point may be at the heart of some of the
misunderstandings about supervised vs. unsupervised learning: Logically
speaking, there can be no "correct" or "incorrect" categorization in ad
lib categorization; the "correctness" criterion is just in the mind of
the experimenter. But if the hard work (i.e., finding and weighting the
features that will reliably sort the inputs "correctly") has already
been done by prior imposed category learning (or the evolution of our
sensorimotor systems) then a given categorization problem may
spuriously give the appearance of having been solved by "unsupervised"
learning alone. In reality, however, the "solution" will have been
well-prepared by prior learning and evolution, since, apart from the
mind of the experimenter, "correctness" is DEFINED by the consequences
of miscategorization, i.e., by supervision.

Reference:

Harnad, S. (Ed.) (1987) Categorical Perception: The Groundwork of
Cognition. NY: Cambridge University Press.
Message-Id: <8906081942.AA11580 at ee.ecn.purdue.edu>
To: connectionists at CS.CMU.EDU
Subject: TR available 
Date: Tue, 06 Jun 89 17:05:00 EST
From: Manoel Fernando Tenorio <tenorio at ee.ecn.purdue.edu>

The tech report below will be available by June 15. Please do not reply to
this posting. Send all your requests to jld at ee.ecn.purdue.edu

        Self Organizing Neural Network for Optimum Supervised Learning

Manoel Fernando Tenorio                                    Wei-Tsih Lee
School of Electrical Engineering            School of Electrical Engineering 
Purdue University                                         Purdue University
W. Lafayette, IN. 47907                              W. Lafayette, IN. 47907
tenorio at ee.ecn.purdue.edu                            lwt at ed.ecn.purdue.edu 


                               Summary

Current neural network algorithms can be classified by the following
characteristics: the architecture of the network, the error criterion used,
the neuron transfer function, and the algorithm used during learning. For
example, in the case of back propagation, one would classify the algorithm as
having a fixed architecture (feedforward in most cases), using an MSE
criterion and a sigmoid function on a weighted sum of the input, with the
Generalized Delta Rule performing a gradient descent in the weight space.
This characterization is important in order to assess the power of such
algorithms from a modeling viewpoint. The expressive power of a network is
intimately related to these four features.

In this paper, we discuss a neural network algorithm with noticeably
different characteristics from current networks. The Self Organizing Neural
Network (SONN) [TeLe88] is an algorithm that, through a search process,
creates the network that is optimal in the sense of performance and
complexity. SONN can be classified as follows. The network architecture is
constructed through a search using Simulated Annealing (SA), and it is
optimal in that sense. The error criterion used is a modification of the
Minimum Description Length criterion called the Structure Estimation
Criterion (SEC); it takes into account both the performance of the algorithm
and the complexity of the structure generated. The neuron transfer function
is individually chosen from a pool of functions, and the weights are adjusted
during neuron creation. This function pool can be selected with a priori
knowledge of the problem, or can simply use a class of nonlinearities shown
to be general enough for a wide variety of problems.
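Roughly, the construction can be sketched as follows. This is an illustrative
toy of my own, not the SONN implementation: the choice of polynomial terms as
"structures", the move set, and the penalty weight alpha are arbitrary
stand-ins for the SEC.

import math
import random
import numpy as np

# Simulated annealing over model *structures*, scored by data fit plus
# a complexity penalty (in the spirit, not the letter, of the SEC).
random.seed(0)
xs = np.linspace(-1.0, 1.0, 50)
ys = np.sin(3.0 * xs)

def score(terms, alpha=0.01):
    # least-squares fit of the chosen polynomial terms, then penalize size
    X = np.column_stack([xs ** p for p in sorted(terms)] or [xs * 0.0])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    mse = float(np.mean((X @ coef - ys) ** 2))
    return mse + alpha * len(terms)

current, t = {1}, 1.0
for _ in range(300):
    candidate = current ^ {random.randrange(8)}   # toggle one term in/out
    delta = score(candidate) - score(current)
    if delta < 0 or random.random() < math.exp(-delta / t):
        current = candidate                       # accept (sometimes uphill)
    t *= 0.99                                     # cool
print(sorted(current))   # typically a few odd powers, approximating sin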

Although the algorithm is stochastic in nature (SA), we show that its
performance is extremely high in both comparative and absolute terms. In
[TeLe88], we used SONN as an algorithm to identify and predict chaotic
series, in particular the Mackey-Glass equation [LaFa87, Mood88]. For
comparison, experiments using Back Propagation on this problem were
replicated in the same computational environment. The results indicated that
for about 10% of the computational effort, SONN delivered a 2.11 times
better model (normalized RMSE). Some inherent aspects of the algorithm are
even more interesting: there were 3.75 times fewer weights, 15 times fewer
connections, 6.51 times fewer epochs over the data set, and only 1/5 of the
data was fed to the algorithm. Furthermore, the algorithm generates a
symbolic representation of the network, which can be used in its place or
for the analysis of the problem.

******************************************************************************




We have further developed the algorithm; although these developments are not
part of the report above, they will be part of a paper submitted to NIPS'89,
where some major improvements to the algorithm are reported. The same chaotic
series problem can now run with 26.4 times fewer epochs over the data set
than BP, and generated the same model in about 18.5 seconds of computer time
(down from 2 CPU hours on a Gould NP1 Powernode 9080). Performance on a Sun
3-60 was slightly over 1 minute. These performance figures include the use of
an 8 times larger function pool; the final performance is now independent of
the size of the pool.

Other aspects of the algorithm are also worth considering. Because of its
stochastic nature, no two runs of the algorithm need be the same. This can
become a hindrance if a suboptimal solution is desired, since at every run
the set of suboptimal models can be different. Modifications of the original
SONN to run as an A* search are presented. Since the algorithm generates
partial structures at each iteration, the learning process is only optimized
for the structure generated so far. If such a substructure is used as part
of a larger structure, no provision is made to readjust its weights, making
the final model slightly stiff. A provision for melting the structure
(parametric readjustment) is also discussed. Finally, the combination of
symbolic processing with this numerical method can lead to the construction
of AI-NN based methods for supervised and unsupervised learning. The ability
of SONN to take symbolic constraints and produce symbolic information can
make such a system possible. Implications of this design are also explored.




[LaFa87] - Alan Lapedes and Robert Farber, How Neural Networks Work, TR
LA-UR-88-418, Los Alamos, 1987.

[Mood88] - J. Moody, Fast Learning in Multi-Resolution Hierarchies, Advances
in Neural Information Processing Systems, D. Touretzky, Ed., Morgan Kaufmann,
1989 (NIPS88).

[TeLe88] - M. F. Tenorio and W-T. Lee, Self Organizing Neural Networks for
the Identification Problem, Advances in Neural Information Processing
Systems, D. Touretzky, Ed., Morgan Kaufmann, 1989 (NIPS88).

To:	harnad at PRINCETON.EDU
cc:	connectionists at CS.CMU.EDU
Subject: Re: Categorization and Supervision 
In-reply-to: Your message of Thu, 08 Jun 89 09:14:22 -0400.
Date:	Thu, 8 Jun 89 18:46:27 EDT
From:	Geoffrey Hinton <hinton at ai.toronto.edu>
Message-Id: <89Jun8.184641edt.11256 at ephemeral.ai.toronto.edu>


Steve Harnad's message about supervised versus unsupervised learning makes an
apparently plausible point that is DEEPLY wrong.  He says

"Logically speaking, there can be no "correct" or "incorrect" categorization
in ad lib (his term for "unsupervised") categorization; the "correctness"
criterion is just in the mind of the experimenter.  But if the hard work (i.e.,
finding and weighting the features that will reliably sort the inputs
"correctly") has already been done by prior imposed category learning (or the
evolution of our sensorimotor systems) then a given categorization problem may
spuriously give the appearance of having been solved by "unsupervised"
learning alone. In reality, however, the "solution" will have been
well-prepared by prior learning and evolution, since, apart from the mind of
the experimenter, "correctness" is DEFINED by the consequences of
miscategorization, i.e., by supervision."

The mistake in this line of reasoning is as follows: Using the Kolmogorov
notion of complexity, we can distinguish between good and bad models of some
data without any additional teacher.  A good model is one that has low
Kolmogorov complexity and fits the data closely (there is always a data-fit
vs model-complexity trade-off).  Clustering data into categories is just one
particularly tractable way of modelling data.

Naturally, ideas based on Kolmogorov complexity are easiest to apply if we
start with a restricted class of possible models (so there is still plenty of
room for evolution to be helpful).  But restricting the class of models is not
the same as implicitly saying which particular models (i.e. specific
categorizations) are correct.  For example, we could insist on modeling some
data as having arisen from a mixture of gaussians (one per cluster), but this
doesn't tell us which data to put in which cluster, or how many clusters to
use.  For that we need to trade off the complexity of the model (number of
clusters) against the data-fit.  Cheeseman (and others) have shown that this
approach can be made to work nicely in practice.
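A concrete toy version of that trade-off, for what it is worth: fit mixtures
of gaussians with an increasing number of clusters and keep the number that
minimizes a description-length-style score. The use of scikit-learn, and of
BIC as the score, is my own illustrative choice, not anything from the
discussion above.

import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
data = np.vstack([rng.normal(0.0, 1.0, (100, 2)),
                  rng.normal(5.0, 1.0, (100, 2))])   # two true clusters

# Lower BIC = better balance of data fit against model complexity.
bic = {k: GaussianMixture(n_components=k, random_state=0).fit(data).bic(data)
       for k in range(1, 6)}
print(min(bic, key=bic.get))   # 2, with no supervision supplied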

Geoff


Date: Fri, 9 Jun 89 01:28:02 EDT
From: harnad at Princeton.EDU
Message-Id: <8906090528.AA17091 at clarity.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: Re: Categorization and Supervision

Geoff Hinton wrote:

>> Steve Harnad's message about supervised versus unsupervised learning makes an
>> apparently plausible point that is DEEPLY wrong.  [Harnad] says
>>    "Logically speaking, there can be no "correct" or "incorrect"
       categorization in ad lib (his term for "unsupervised") categorization..."
>> The mistake in this line of reasoning is as follows: Using the Kolmogorov
>> notion of complexity, we can distinguish between good and bad models of some
>> data without any additional teacher.  A good model is one that has low
>> kolmogorov complexity and fits the data closely...

I am afraid Geoff has not understood my point. I was not speaking about
algorithms or models but about human categorization performance and its
constraints. There may well be a "model" or internal criterion or
constraint according to which input can be preferentially categorized
in a particular way, but that still has nothing to do with the
correctness or incorrectness of the categorization, which, as a logical
matter, can only be determined by some external consequence of
categorizing INcorrectly. That's what categorizing correctly MEANS.
Otherwise "categorization" and "correctness" are just figures of speech
(and solipsistic ones, at that).

My point about imposed vs. ad lib categorization (which is not only
plausible but, till grasped and refuted on its own terms, stands
uncorrected) was based purely on external performance considerations,
not model-specific internal ones. I think that the
supervised/unsupervised learning distinction has given rise to
misunderstandings precisely because it equivocates between external
and internal considerations. I focused on the external question of what
the "correctness" of a categorization really consists in so as to
highlight some of the logical, methodological and informational
features of categorization, and indeed of learning in general. It is a
purely logical point that, even if arrived at by purely internal,
unsupervised means, the "correctness" of a "correct" human categorization
must be based on some external performance constraint, and hence at
least a potential "supervisor." Otherwise the "correctness" is merely
in the mind of the theorist, or the interpreter (or the astrologist, if
the magical number happens to be 12).

Now, that simple point having been made, I might add that whereas I do
not find it implausible that SOME categorization problems (for which
the requisite potential supervision from the external consequences of
miscategorization must, I continue to insist, exist in principle) might
nevertheless be solved through internal constraints alone, with no
need for any actual recourse to the external supervision, I find it
highly implausible that ALL, MOST, or even MANY categorization problems
should be soluble that way -- and certainly not the ones I called the
"hard" (underdetermined) categorization problems. Connectionism should
not be TOO ambitious. It's a proud enough hope to aspire to be THE
method that reliably finds invariant features in generalized nontrivial
induction WITH supervision -- without going on to claim to be able to do
it all with your eyes closed and your hands tied behind your back...

Stevan Harnad
Date: Fri, 9 Jun 89 13:38:32 edt
From: Arun Jagota <jagota at cs.Buffalo.EDU>
Message-Id: <8906091738.AA06218 at sybil.cs.Buffalo.EDU>
To: connectionists at CS.CMU.EDU
Subject: Re: Supervised/Unsupervised Learning

Steve Harnad wrote :

>It is a
>purely logical point that, even if arrived at by purely internal,
>unsupervised means, the "correctness" of a "correct" human categorization
>must be based on some external performance constraint, and hence at
>least a potential "supervisor." Otherwise the "correctness" is merely
>in the mind of the theorist, or the interpreter (or the astrologist, if
>the magical number happens to be 12).

There are times when an organism might wish to 
categorize events (in the input space) for internal reasons only. 
The external correctness of the categorization may not be important 
(there may even be none), as long as there is an internal consistency of 
classification and recognition. 

>Now, that simple point having been made, I might add that whereas I do
>not find it implausible that SOME categorization problems (for which
>the requisite potential supervision from the external consequences of
>miscategorization must, I continue to insist, exist in principle) might
>nevertheless be solved through internal constraints alone, with no
>need for any actual recourse to the external supervision, I find it
>highly implausible that ALL, MOST, or even MANY categorization problems
>should be soluble that way 

There are situations in which I don't necessarily think of categorization
as a problem that has to be solved, but rather as a process that has to be
performed. I think there are instances when the exact nature of the
categories is not as important as the process itself (as an aid to event
recognition/content addressability).

Consider a dictionary. We could classify words:
a) lexically sorted
b) by word size
c) by a hash function (sum of all ASCII character codes mod a constant)
etc.
The choice of the categorization to use is determined more by what we want
to use it for (retrieval etc.) than by any issues of its external
correctness.
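For concreteness, the three categorizations might look like this (a sketch;
the word list and the modulus 101 are arbitrary):

words = ["net", "neuron", "cat", "category", "learning"]

by_lex = sorted(words)                        # (a) lexically sorted

by_size = {}                                  # (b) by word size
for w in words:
    by_size.setdefault(len(w), []).append(w)

def bucket(word, modulus=101):                # (c) sum of ASCII codes mod m
    return sum(ord(c) for c in word) % modulus

by_hash = {}
for w in words:
    by_hash.setdefault(bucket(w), []).append(w)

# None of the three groupings is externally "correct"; each simply
# serves a different retrieval purpose.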


I don't wish to overstate the case for such unsupervised learning,
but I think it deserves to be studied as an independent task in its
own right.

Arun Jagota
(jagota at cs.buffalo.edu)
Date: Fri, 9 Jun 89 14:58:44 ADT
From: steve gallant <sg at corwin.ccs.northeastern.edu>
Message-Id: <8906091758.AA10984 at corwin.CCS.Northeastern.EDU>
To: connectionists at CS.CMU.EDU
Subject: Re: Supervised/Unsupervised and Autoassociative learning


	Here's my suggestion for a taxonomy of learning problems:

	I.  Supervised Learning:  correct activations given for output cells
					for each training example
		
		A.  Hard Learning Problems:  network structure is fixed and
					activations are not given for
					intermediate cells

		B.  Easy Learning Problems:  correct activations given
					for intermediate cells OR
					network structure allowed to be 
					modified OR single-cell
					problem

	II.  Unsupervised Learning:  correct activations not given for
					output cells
	
	There are further subdivisions for learning ALGORITHMS that can cut
across the above PROBLEM classes.  For example:

	1.  One-Shot Learning Algorithms:  each training example is 
					looked at at most one time

	2.  Iterative Algorithms:  training examples may be looked at 
					several times

	Autoassociative Algorithms are a special case of IB above because
each output cell can be trained independently of the other cells, reducing
the problem to single-cell supervised learning.  However
these models have different dynamics, in that output values are copied to
input cells for the next iteration.  Most learning algs for autoassociative
networks are one-shot algorithms (e.g. linear autoassociator, BSB, Hopfield
nets).  Of course iterative algorithms can be used to increase the capacity
of autoassociative models (e.g. perceptron learning, pocket alg.).  We could
also add intermediate cells to an autoassociative network to allow
arbitrary sets of patterns to be stored.
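As a minimal illustration of the one-shot case (a toy of my own, not any
particular published model): Hopfield-style autoassociative storage sums one
outer product per +/-1 pattern, each seen exactly once, and recall copies the
output values back to the input cells until the state stops changing.

import numpy as np

patterns = np.array([[ 1,  1,  1,  1, -1, -1, -1, -1],
                     [ 1, -1,  1, -1,  1, -1,  1, -1]])
W = sum(np.outer(p, p) for p in patterns).astype(float)
np.fill_diagonal(W, 0.0)              # no self-connections

state = np.array([-1, 1, 1, 1, -1, -1, -1, -1])  # pattern 0, one bit flipped
for _ in range(10):
    new_state = np.where(W @ state >= 0, 1, -1)  # threshold units
    if np.array_equal(new_state, state):
        break
    state = new_state
print(state)   # recovers the stored pattern [ 1  1  1  1 -1 -1 -1 -1]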

	Sorry if you got this message twice, but I don't think the first
try made it.

	Steve Gallant
	
Date: Fri, 9 Jun 89 18:11:32 EDT
From: harnad at Princeton.EDU
Message-Id: <8906092211.AA24806 at clarity.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: Re: Unsupervised Category Learning

Arun Jagota <jagota at cs.Buffalo.EDU> wrote:

>> an organism might wish to categorize events (in the input space) for
>> internal reasons only. The external correctness of the categorization
>> may not be important (there may even be none), as long as there is an
>> internal consistency of classification and recognition... I think there
>> are instances when the exact nature of the categories is not as
>> important as the process itself (as an aid to event recognition/content
>> addressability). Consider a dictionary: We could classify words a)
>> lexically sorted b) By word-size c) By a hash function... The choice of
>> the categorization to use is determined more by what we want to use it
>> for (retrieval etc) than by any issues of its external correctness.

I think there is a confusion of phenomena here, including (1) organisms'
adaptive behavior in their environments, (2) intelligent machines we
use for various purposes, and (3) something like finger painting or
playing with tarot cards.

I have no particular interest in doing or in modeling (3) (categorizing
events for "internal reasons only"), and I don't think it's a
well-defined problem. (1) is clearly "supervised" by the external
consequences for survival and reproduction arising from how the
organism sorts and responds to classes of objects, events and states of
affairs in the world. (2) could in principle be governed by
machine-internal constraints only, but the "usefulness" of the
categories (e.g., word classifications) to US is again clearly
dependent on external consequences (to US), exactly as in (1).

Perhaps an example will give a better idea of the distinction I'm
emphasizing: Suppose you had N "objects." Consider the many ways you
could sort and label them: By size, by shape, by color, by function,
by age, as natural or synthetic, living or nonliving, etc. In each
case, the name of the same "object" changes, as does the membership of
the category to which it is assigned. With sufficient imagination an
enormous number of possible categorizations could be generated for
the same set of objects. What makes some of those categorizations "correct"
(or, equally important, incorrect) and others arbitrary?

The answer is surely related to why it is that we bother to categorize
at all: Sorting (and especially mis-sorting) objects some ways (e.g.,
as edibles vs inedibles) has important consequences for us, whereas
sorting them other ways (e.g., bigger/smaller than a breadbox, moon in
Aquarius) does not. The consequences do not depend on "internal"
considerations but on external ones, so I continue to doubt that
unsupervised internal constraints can account for much that is of
interest in human category learning.

Stevan Harnad
Date: Fri, 9 Jun 89 22:10:07 CDT
From: George Cybenko <gc at s16.csrd.uiuc.edu>
Message-Id: <8906100310.AA08968 at s16.csrd.uiuc.edu>
To: connectionists at CS.CMU.EDU

Re: Unsupervised learning

What do people mean by "data" in unsupervised learning?  Surely
it is more than just bit strings... If you get "data" in the form
of bit strings, don't you have to know what those bit strings mean?
Does the bit string represent an image, a vector of 32-bit floating
point numbers in VAX or MC68000 arithmetic, an integer, or
an ASCII representation of some characters?  It seems to me
that the interpretation of the bit string in this way determines
a metric that is imposed by the semantics of the problem from
which the data came.  That metric qualifies as a form of "supervision".
Moreover, any discussion of Kolmogorov complexity vs. modeling errors
requires some notion of error, hence some notion of a metric
imposed on the data.
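A tiny illustration of that point (the four bytes are arbitrary): the same
bit string, under different interpretations, lives in different spaces with
different natural distances.

import struct

raw = b"\x41\x42\x43\x44"

as_text = raw.decode("ascii")            # 'ABCD'
as_int = int.from_bytes(raw, "big")      # 1094861636
as_float = struct.unpack(">f", raw)[0]   # about 12.14, IEEE-754 big-endian

# Hamming distance on the bits, edit distance on the characters, and
# absolute difference on the floats induce three different metrics on
# the same data; choosing one is already a semantic commitment.
print(as_text, as_int, as_float)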

Put another way, I don't think that there can be a useful "unsupervised"
learning procedure that accepts only bit strings as input, with no
interpretation of what those bit strings mean.  I suspect that there
might be a canonical format into which all problems could be transformed
that would usually give meaningful classifications, but then it
would be argued that the transformation is a type of supervision.

In fact, this question of how to interpret and represent input data
is a key criticism of connectionist modeling made by many
neurobiologists, from my understanding of that debate.

George Cybenko
Date: Sat, 10 Jun 89 00:01:33 EDT
From: singer at Think.COM
Message-Id: <8906100401.AA04047 at kulla.think.com>
To: connectionists at CS.CMU.EDU
Subject: Sparse vs. Dense networks

	In general sparsely connected networks run faster than densely
connected ones.  Ignoring this advantage, are there any theoretical
benefits to sparsely connected nets?  Are schemes utilizing local
connectivity patterns, perhaps including the use of additional layers,
buying the researcher some advantage besides more efficient use of limited
amounts of computer time?  Naively it would seem that the more
interconnections the better (even if, after training, many have near-zero
weights). Is this true?

Date: Sat, 10 Jun 89 18:34:12 -0500
From: Vasant Honavar <honavar at cs.wisc.edu>
Message-Id: <8906102334.AA02423 at goat.cs.wisc.edu>
To: singer at Think.COM
Subject: Re:  Sparse vs. Dense networks
Cc: connectionists at CS.CMU.EDU, honavar at cs.wisc.edu


The tradeoff between connectivity and performance can, in general, be
quite complex.

There is reason to think that smaller networks (with fewer nodes and links)
may be better than larger ones for improved generalization, as shown
by the work of Hinton and others.

The use of topological constraints on network connectivity (such as local
receptive fields, in the case of vision) can yield improved learning
performance, even after discounting the savings in computer time for
simulations (Honavar and Uhr, 1988; Moody and Darken, 198?; Qian and
Sejnowski, 1988). Local interactions and the systematic initiation and
termination of plasticity, combined with a Hebbian-like learning rule, can,
through a process of self-organization, yield network structures that
resemble the orientation columns of the primary visual cortex (Linsker, 1988).
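A toy version of such a constraint (the sizes and the banded layout are
arbitrary illustrative choices): a connectivity mask that gives each output
unit a local receptive field instead of full connectivity.

import numpy as np

n_in, n_out, radius = 8, 8, 1
mask = np.zeros((n_out, n_in))
for i in range(n_out):
    lo, hi = max(0, i - radius), min(n_in, i + radius + 1)
    mask[i, lo:hi] = 1.0                 # connect only to nearby inputs

rng = np.random.default_rng(0)
weights = rng.standard_normal((n_out, n_in)) * mask   # zero outside field
print(int(mask.sum()), "connections instead of", n_in * n_out)   # 22 vs 64
# Links outside the field simply never exist, so their (irrelevant)
# weights never have to be learned.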

Modularity is another means of restricting interactions between different
parts of a network. When tasks are relatively independent, they are best
handled by separate or sparsely interacting modules. This is shown by the
work of Rueckl and Kosslyn (198?): separate modules learning object
identification and object location perform better than a monolithic network
learning both.

Many tasks that connectionist models attempt to solve are NP-complete;
combinatorially explosive solutions are not feasible. Other factors being
equal, architectures that exploit the natural constraints of the domain
yield better performance (speed, cost) than those that don't. 

Network architecture (size, topology, etc.) interacts intimately
with the learning processes (weight modification; generation of new
links (Honavar, 1988) and, possibly, nodes; regulation of plasticity)
as well as with the structure of the domain (e.g., vision, speech) at
different levels. Connectivity appropriate for a particular problem
domain cannot be determined independently of these factors.

Vasant Honavar (honavar at cs.wisc.edu)



Message-Id: <8906112125.AA27878 at ATHENA.MIT.EDU>
Date: Sun, 11 Jun 89 17:15:39 edt
From: Kyle Cave <kcave at psyche.mit.edu>
Site: MIT Center for Cognitive Science
To: connectionists at CS.CMU.EDU
Subject: modularity


I'd like to add two notes concerning modularity to Honavar's letter.
First, the paper by Rueckl, Cave, & Kosslyn (1989) that he referred
to can be found in the most recent issue of the Journal of Cognitive
Neuroscience.  Second, anyone interested in how modular problems can
be approached in a connectionist framework will be interested in
recent work by Robbie Jacobs, who is constructing systems that learn
to devote separate subnets to independent problems.

Date: Mon, 12 Jun 89 08:36:28 EDT
From: alexis%yummy at gateway.mitre.org
Message-Id: <8906121236.AA01478 at marzipan.mitre.org>
To: singer at Think.COM
Cc: connectionists at CS.CMU.EDU
In-Reply-To: singer at Think.COM's message of Sat, 10 Jun 89 00:01:33 EDT <8906100401.AA04047 at kulla.think.com>
Subject: Sparse vs. Dense networks
Reply-To: alexis%yummy at gateway.mitre.org

"Sparsely connected networks" (or "tessellated connections" as we tend
to think of them) can also be used to hardwire known constraints.  In
speech or vision or something like the Penzias problem the researcher
knows before hand that it makes sence to most if not all of the
computation locally, possibly in something like a pyramid architecture.  
This restriction forces the network to use clues that are considered 
"reasonable."

As an example, a few years ago we spent a fair amount of time using
a NN to recognize characters independant of rotation.  At first the
"D" had the largest letter, so the net just counted pixels (a valid,
but as far as we were concerned "wrong," discriminant).  We originally
worked around this by modifying the training database, but latter
we found we could get the same results by using tessellation.  Limiting
the receptive fields forced the net to primarily use gradients, which
proved to be a much more robust method in general.
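A toy sketch of what the tessellation buys (my own illustration, not the
setup described above): when every receptive field shares the same small
weight kernel, the net can only compute local comparisons such as gradients,
never a global pixel count.

import numpy as np

rng = np.random.default_rng(0)
image = rng.random((8, 8))
kernel = np.array([-1.0, 0.0, 1.0])      # one shared 1x3 weight set

out = np.zeros((8, 6))
for i in range(8):
    for j in range(6):                   # slide the same weights everywhere
        out[i, j] = image[i, j:j + 3] @ kernel
# Three free parameters in total, however large the image and however
# many receptive fields tile it.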

The moral of this story is that you can force a network to perform
certain classes of algorithms by restricting the connections.
Since there is often a limit on the training data available,
these restrictions are even more important.  Combine all this
with shared connections (a la the Blue Book), shared tessellations,
tessellations of tessellations, "skip-level" arcs into shared
tessellations, ... and you can really control and add to what you
and your net can do.

alexis.
Date: Mon, 12 Jun 89 09:07:57 edt
From: Ron Sun <rsun at cs.brandeis.edu>
To: connectionists.2 at cs.brandeis.edu



The following two tech reports are available
from rsun%cs.brandeis.edu at relay.cs.net
or
  R. Sun
  Brandeis U.
  CS
  Waltham, MA 02254



#############################################################

              A Discrete Neural Network Model
        for Conceptual Representation and Reasoning



                          Ron Sun
                   Computer Science Dept.
                    Brandeis University
                     Waltham, MA 02254


Current connectionist models are oversimplified in terms of the internal
mechanisms of individual neurons and the communication between them.
Although connectionist models offer significant advantages in certain
aspects, this oversimplification makes these models inefficient at
addressing issues in explicit symbolic processing, which has proven
essential to human intelligence. What we are aiming at is a connectionist
architecture capable of simple, flexible representation of high-level
knowledge structures and efficient reasoning based on the data. We first
propose a discrete neural network model that contains state variables for
each neuron, in which a set of discrete states is explicitly specified
instead of a continuous activation function. A technique is developed for
representing concepts in this network which utilizes the connections to
define the concepts and represents the concepts in both verbal and compiled
forms. The main advantage is that this scheme can handle variable bindings
efficiently. A reasoning scheme is developed in the discrete neural network
model which utilizes the inherent parallelism of a neural network,
performing all possible inference steps in parallel, implementable on a
fine-grained massively parallel computer.

(to appear in Proc. CogSci Conf. 1989)



###############################################################





Model local neural networks in the lobster stomatogastric ganglion



                          Ron Sun
                         Eve Marder
                        David Waltz

                    Brandeis University
                     Waltham, MA 02254



                          ABSTRACT

     We describe a simulation study of the pyloric network of the lobster
stomatogastric ganglion. We demonstrate that a few simple activation
functions are sufficient to describe the oscillatory behavior of the
network. Our aim is to determine the essential mechanisms necessary to
specify the operation of biological neural networks so that we can
incorporate them into connectionist models. Our model includes rhythmically
active bursting neurons and long time-constant synaptic relations. In the
process of doing this work, various models and algorithms were compared. We
have derived some connectionist learning algorithms. They have proved
useful in terms of ease and accuracy in model generation.


(to appear in IJCNN-89)


**************************************************************



Date: Mon, 12 Jun 89 14:48:02 -0500
From: Leonard Uhr <uhr at cs.wisc.edu>
Message-Id: <8906121948.AA11341 at ai.cs.wisc.edu>
To: singer at Think.COM
Subject: desirability of sparse, local, converging connectivity
Cc: connectionists at CS.CMU.EDU

I think there is a strong set of arguments for sparse local connectivity,
with multiple converging layers to give global functions.  In a word:

Sparse connectivity keeps the number of links down to O(kN), rather than
O(N**2).  Consider brains: 10**12 neurons, each with only roughly 3*10**3
synaptic links; complete connectivity would need 10**24 links, far too many
to handle either serially (time) or in parallel (wires).  Obviously the
links should be those with non-zero weight, and that usually (though not
always) means local links.  Irrelevant links with random initial weights
almost certainly obscure the functional links and breed confusion.
Less-than-complete connectivity means that the convergence necessary to
compute global interactions needs O(log N) additional layers.  - Len Uhr
Date: Mon, 12 Jun 89 17:14:31 EDT
From: nirwan ansari fac ee <ang at mars.njit.edu>
Message-Id: <8906122114.AA14240 at mars.njit.edu>
To: Connectionists at CS.CMU.EDU
Subject: 1989 INNS meeting, Sep 5-9, 1989.


Last March, I submitted a paper to the 1989 INNS meeting,
Sep 5-9, 1989. I just found out that this meeting has been
cancelled, yet I haven't been notified of the fate of my
paper. I called Schman Associates and the INNS office - no one
knew where my paper ended up. Does anybody out there know what
happened to the papers submitted to this meeting?
I still have the receipt for my express mail...
To:	harnad at PRINCETON.EDU
cc:	connectionists at CS.CMU.EDU
Subject: Re: Categorization and Supervision 
In-reply-to: Your message of Fri, 09 Jun 89 01:28:02 -0400.
Date:	Mon, 12 Jun 89 18:41:56 EDT
From:	Geoffrey Hinton <hinton at ai.toronto.edu>
Message-Id: <89Jun12.184236edt.10882 at ephemeral.ai.toronto.edu>


I am afraid Steve has not understood my point.  He simply repeats his
conviction again:

"It is a purely logical point that, even if arrived at by purely internal,
unsupervised means, the "correctness" of a "correct" human categorization must
be based on some external performance constraint, and hence at least a
potential "supervisor." Otherwise the "correctness" is merely in the mind of
the theorist, or the interpreter (or the astrologist, if the magical number
happens to be 12)."

The whole debate is about whether there is a sense in which a categorization
can be "correct" even though no external supervision is supplied.  Steve
thinks that any such categorization would be "merely in the mind of the
theorist" (and hence, I guess he thinks, arbitrary).  I think that this is
wrong.  If one model of the data is overwhelmingly simpler than any other,
then it's not just in the mind of the theorist. It's correct.  The nice thing
about the Kolmogorov-Chaitin view of complexity is that (in the limit) it
doesn't need to mention the mind of the observer (i.e. in the limit, one model
can be simpler than another WHATEVER the programming language in which we
measure simplicity).

In another message on the same subject, Steve says:

"Perhaps an example will give a better idea of the distinction I'm
emphasizing: Suppose you had N "objects." Consider the many ways you
could sort and label them: By size, by shape, by color, by function,
by age, as natural or synthetic, living or nonliving, etc. In each
case, the name of the same "object" changes, as does the membership of
the category to which it is assigned. With sufficient imagination an
enormous number of possible categorizations could be generated for
the same set of objects. What makes some of those categorizations "correct"
(or, equally important, incorrect) and others arbitrary?
The answer is surely related to why it is that we bother to categorize
at all: Sorting (and especially mis-sorting) objects some ways (e.g.,
as edibles vs inedibles) has important consequences for us, whereas
sorting them other ways (e.g., bigger/smaller than a breadbox, moon in
Aquarius) does not. The consequences do not depend on "internal"
considerations but on external ones, so I continue to doubt that
unsupervised internal constraints can account for much that is of
interest in human category learning."

Again, this is just a repetition of his viewpoint.  If you consider Peter
Cheeseman's work on categorization (which found a new class of stars without
supervision), it becomes clear that unsupervised categorization is NOT
arbitrary.  

Geoff

PS: I think I have now stated my beliefs clearly, and I do not intend to
clutter up the network with any more meta-level junk.  I think this debate
will be resolved by technical progress in getting unsupervised learning to
work nicely, not by a priori assertions about what is necessary and what is
impossible.



Date: Mon, 12 Jun 89 20:12:47 EDT
From: harnad at Princeton.EDU
Message-Id: <8906130012.AA09043 at clarity.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: Re: Categorization & Supervision

Geoff writes:

>> I am afraid Steve has not understood my point... about whether there
>> is a sense in which a categorization can be "correct" even though no
>> external supervision is supplied... If one model of the data is
>> overwhelmingly simpler than any other, then it's not just in the mind of
>> the theorist. It's correct.

Let me quickly cite two senses (that I've already noted) in which such
a categorization can indeed be "correct," just as Geoff writes in the
foregoing passage, so as to confirm that the misunderstanding's not on
this side of the border:

(1) Under the above conditions it would certainly be (tautologically)
"correct" that the simplest categorization was indeed the simplest
categorization, and that would be true entirely independently of any
external supervisory criterion.

(2) It COULD also be "correct"
         (i)   accidentally, or because of prior
         (ii)  innate or
         (iii) learned preparation, or because the categorization problem
               happened to be
         (iv)  easy (all four of which I've already mentioned in prior postings)
that the categorization arrived at by the internal simplicity criterion ALSO
happens to correspond to a categorization that DOES match an external
supervisory criterion. (Edibles and inedibles could conceivably sort
this way, though it's unlikely.)

I take it that (1) in and of itself is of interest only to the
complexity theorist. (2) is a possibility; the burden of proof is on
anyone who wishes to claim that many, most or all of our external
categorizations in the world can indeed be captured this way
(the "some" I already noted in my first posting).

I have already given (in a response to someone else's posting on this
question) one a priori reason not to expect simplicity or any other a
priori symmetry to succeed very often: We typically can and do
categorize the very SAME N inputs in many, many DIFFERENT ways,
depending on external contingencies; so it is hard to imagine how one
and the same internal criterion (or several) could second guess ALL of
these alternative categorizations. The winning features are certainly lurking
SOMEWHERE in the input, otherwise successful categorization would not
be possible; but finding them is hardly likely to be an a priori or
internal matter -- and certainly not in what I called the "hard"
(underdetermined) cases. That's what learning is all about. And without
supervision it's just thrashing in the dark (except in the easy cases
that wear their category structure on their ears -- or the already "prepared"
cases, but those involve theft rather than honest toil for a learning
mechanism).

Let me close by clarifying some possible sources of misunderstanding in
this discussion of the relation between supervised/unsupervised learning
algorithms and imposed/ad-lib categorization performance tasks:

(A) A pure ad lib sorting task is to give a subject N objects and ask him
to sort them any way he wishes. A variant on this is to ask him to
(A1) sort them into k categories any way he wishes.

(B) A pure imposed sorting task is to give a subject N objects and k
categories with k category names. The subject is asked to sort them
correctly and is given feedback on every trial; whenever there is an
error, the name of the correct category is provided. The task is
trivial if the subject can sort correctly from the beginning, or as
soon as he knows how many categories there are and what their names
are (i.e., if task B is the same as task A1) or soon thereafter. The
task is nontrivial if the N objects are sufficiently
interconfusable, and the winning features sufficiently nonobvious, as
to require many supervised trials in order to learn which features will
reliably sort the objects correctly. (The solution should also not be
a matter of rote memory for every object-label pair: i.e., N should be very
large, and categorization should reach 100% before having sampled them
all).

A variant on B would be (B1) to provide feedback only on whether or
not the categorization is correct, but without indicating the correct
category name when something has been miscategorized.

Note that for dichotomous categories (k = 2), B and B1 are the same
(and that locally, all categorizations are dichotomous: "Is this an `X'
or isn't it?"). In any case, both B and B1 would be cases of supervised
learning.
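A minimal sketch of where the "correct" signal comes from in the two tasks
(the two-feature stimuli and the rote-lookup learner are placeholder
assumptions of mine; only the feedback structure is the point):

import random

random.seed(0)
objects = [(size, color) for size in range(4) for color in range(4)]

def true_label(obj):                 # the experimenter's k = 2 criterion
    return "big" if obj[0] >= 2 else "small"

memory = {}                          # learner: remembers corrections

def classify(obj):
    return memory.get(obj, random.choice(["big", "small"]))

# Task B (imposed): feedback on every trial; the correct name is
# supplied whenever there is an error.
for _ in range(200):
    obj = random.choice(objects)
    if classify(obj) != true_label(obj):
        memory[obj] = true_label(obj)        # external supervision

# Task A (ad lib): no feedback; the subject sorts as he sees fit, and
# nothing makes one partition externally "correct".
ad_lib_sort = {obj: obj[1] % 2 for obj in objects}

print(all(classify(o) == true_label(o) for o in objects))  # almost surely True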

Note also that the choice of k, and of which categorization (and hence
which potential features) are "correct," is entirely up to the
experimenter in imposed categorization -- just as it is entirely up to
external contingencies in the world, and the consequences of
miscategorization for the organism, in natural categorization. Cases in
which A happened to match these external contingencies would be special
ones indeed.

Stevan Harnad
Date: Tue, 13 Jun 89 09:20:13 EDT
From: "Stephen J. Hanson" <jose at suspicion.Princeton.EDU>
Message-Id: <8906131320.AA00904 at suspicion.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: sup vs unsup



One other connection with this issue is a profitable distinction
that has been made in the animal learning literature for some decades now.
It has to do with the notion that there is a set of operations
or procedures and a concomitant set of processes.
Classical or Pavlovian conditioning is an "open loop" procedure
in control lingo, and is clearly a supervised procedure (analogous
to delta rule/back-prop/Boltzmann), while Skinner's operant
or instrumental conditioning -- "reinforcement learning" -- is a
"closed loop" procedure: the organism's actions determine the consequences.
Clearly this is a weakening of the supervision and of the information
provided by the teacher.  What is interesting is that one can
find process theories that run the gamut from a complete distinction
between classical and operant conditioning (less so these days)
to equivalence.  Consequently, from a process point of view
there may be NO distinction between the procedures; this can
cause some confusion at the procedure level, depending on the
process theory one subscribes to.

Now, one other interesting point in this context was raised a moment
ago (at least I saw the mail a moment ago) by Cybenko. There
is really nothing analogous in the animal learning area to
"unsupervised" learning -- procedures might be
"sensitization", pseudo-conditioning,
pre-exposure... all of which have relatively complex process accounts.

As Cybenko pointed out, "exposure" or unsupervised procedures really
do impose some implicit metric on the system, one which
interacts with the way the network computes activation.
In a similar sense, animals in fact come prewired or "prepared" for classical
and operant conditioning with certain classes of stimuli and certain classes
of responses and not others.  These kinds of predispositions
mitigate the distinction between supervised and unsupervised learning,
although of course there is a set of unsupervised operations which one
can impose on the system.

	Steve

Date: Tue, 13 Jun 89 10:33:48 EDT
From: harnad at Princeton.EDU
Message-Id: <8906131433.AA10919 at clarity.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: Re: Categorization & Supervision

In a variant of a comment that appears to have been posted twice,
Geoff Hinton added:

>  Cheeseman's work on categorization (which found a new class of stars
>  without supervision)... [illustrates] that unsupervised categorization
>  is NOT arbitrary... I think this debate will be resolved by technical
>  progress in getting unsupervised learning to work nicely, not by a
>  priori assertions about what is necessary and what is impossible.

This seems an apt point to ponder again the words of the historian
J. H. Hexter on the subject of negative a prioris. He wrote:

   In an academic generation a little overaddicted to "politesse," it
   may be worth saying that violent destruction is not necessarily
   worthless and futile. Even though it leaves doubt about the right
   road for London, it helps if someone rips up, however violently, a
   `To London' sign on the Dover cliffs pointing south...

But I'm certainly prepared to agree that time itself can arbitrate
whether or not it has been well spent checking if the unsupervised
discovery of a new class of stars also happens to lead us to the
nuts-and-bolts categories imposed by the contingencies of nonverbal
and verbal life on terra firma...

Stevan Harnad
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa11670;
          13 Jun 89 20:44:06 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa27730; 13 Jun 89 12:31:06 EDT
Received: from ARPA.ATT.COM by CS.CMU.EDU; 13 Jun 89 12:29:09 EDT
From: neural!yann at att.att.com
Message-Id: <8906131533.AA05289 at neural.UUCP>
Received: from localhost by lesun.UUCP (4.0/4.7)
	id AA00895; Tue, 13 Jun 89 11:35:26 EDT
To: att!cs.cmu.edu!connectionists at att.att.com
Cc: att!ai.toronto.edu!hinton at att.att.com
Subject: Kolmogorov-Chaitin complexity (was: Categorization and Supervision)
In-Reply-To: Your message of Mon, 12 Jun 89 18:41:56 -0400.
Reply-To: neural!yann at neural.att.com
Date: Tue, 13 Jun 89 11:35:23 -0400
From: Yann le Cun <neural!yann>


Like Geoff, I "do not intend to clutter up the network with any more
meta-level junk", so let's talk about more technical issues.

Geoff Hinton says:

  " If one model of the data is overwhelmingly simpler than any other,
  then its not just in the mind of the theorist. Its correct.  The nice thing
  about the Kolmogorov-Chaitin view of complexity is that (in the limit) it
  doesnt need to mention the mind of the observer (i.e. in the limit, one model
  can be simpler than another WHATEVER the programming language in which we
  measure simplicity)."

The bad thing about KC complexity is that it DOES mention the mind of the
observer (or rather, the language he uses) for all finitely complex objects.
Infinitely complex objects are infinitely complex for all measures of
complexity (in Chaitin's framework they correspond to non-computable
functions). 

But the complexity of finite objects depends on the language used to describe
the object.  If Ca(x) is the complexity of object x expressed in language a,
and Cb(x) the complexity of the same object expressed in another language b,
then there is a constant c such that for any x: Ca(x) <= Cb(x) + c.
The constant c can be interpreted as the complexity of a language translator
from b to a.
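
In standard notation this is the invariance theorem (a reformulation
added for reference, with subscripts on c marking the direction of
translation):

    \[
      \forall x:\ C_a(x) \le C_b(x) + c_{ab}, \qquad
      \forall x:\ C_b(x) \le C_a(x) + c_{ba},
    \]
    \[
      \text{so}\quad \bigl| C_a(x) - C_b(x) \bigr| \;\le\; \max(c_{ab},\, c_{ba}).
    \]

The bound says nothing when the translator constants exceed the
complexities of the finite objects being compared.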

The trouble is: c can be very large, so that the comparison of the complexity 
of two real (computable) objects depends on the language used to describe
the objects. We can have Ca(x) < Ca(y)  and Cb(x) > Cb(y).
Using this thing as a Universal Criterion for Unsupervised Learning looks
quite hopeless.

- Yann Le Cun





Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa17748;
          14 Jun 89 11:19:05 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa03400; 13 Jun 89 19:20:19 EDT
Received: from IBM.COM by CS.CMU.EDU; 13 Jun 89 19:17:52 EDT
Date: 13 Jun 89 18:33:31 EDT
From: Dimitri Kanevsky <DIMITRI at ibm.com>
To:   connectionists at CS.CMU.EDU
Message-Id: <061389.183332.dimitri at ibm.com>
Subject: Categorization & Supervision



 I have been following the discussion of supervised learning between
 G. Hinton and S. Harnad and it is not at all clear to me why Hinton
 would expect the correspondence between simplicity and correctness to
 be anything but accidental.

 Dimitri




Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa23564;
          14 Jun 89 18:55:59 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa05110; 13 Jun 89 21:55:17 EDT
Received: from VMA.CC.CMU.EDU by CS.CMU.EDU; 13 Jun 89 21:53:23 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Tue, 13 Jun 89 21:52:36 EDT
Received: from JHUVMS.BITNET (INS_ATGE) by CMUCCVMA (Mailer X1.25) with BSMTP
 id 0258; Tue, 13 Jun 89 21:52:35 EDT
Date:     Tue, 13 Jun 89 21:51 EST
From:     INS_ATGE%JHUVMS.BITNET at VMA.CC.CMU.EDU
MMDF-Warning:  Parse error in original version of preceding line at Q.CS.CMU.EDU
Subject:  Two Questions
To:       connectionists at CS.CMU.EDU
X-Original-To:  connectionists at cs.cmu.edu, INS_ATGE

I am currently writing a parallel backpropagation program on the
Connection Machine, with the immediate task of identifying insonified
objects from active sonar data.
  I was wondering if anyone could give me a reference for a detailed paper
on conjugate gradient network teaching techniques.  From what I have
picked up from casual conversation, it appears that this method can often
lead to faster learning.
  I would also appreciate information dealing with NN analysis of
sonar data (citations in the literature besides Sejnowski and Gorman,
or personal communication, just to get an idea of how the program is
stacking up against earlier work).

-Thomas G. Edwards
 ins_atge at jhuvms.BITNET
 ins_atge at jhunix.hcf.jhu.edu
 tedwards at cmsun.nrl.navy.mil
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa04553; 15 Jun 89 0:36:56 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa14557; 14 Jun 89 13:29:12 EDT
Received: from PRINCETON.EDU by CS.CMU.EDU; 14 Jun 89 08:26:23 EDT
Received: from clarity.Princeton.EDU by Princeton.EDU (5.58+++/2.17)
	id AA07121; Wed, 14 Jun 89 08:26:25 EDT
Received: by clarity.Princeton.EDU (4.0/1.81)
	id AA16907; Wed, 14 Jun 89 08:31:09 EDT
Date: Wed, 14 Jun 89 08:31:09 EDT
From: harnad at Princeton.EDU
Message-Id: <8906141231.AA16907 at clarity.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: Proper Place of Connectionism...

ON THE PROPER PLACE OF CONNECTIONISM IN MODELLING OUR BEHAVIORAL CAPACITIES

(Abstract of paper presented at First Annual Meeting of the American
Psychological Society, Alexandria VA, June 11 1989)


                        Stevan Harnad
			Psychology Department
			Princeton University
			Princeton NJ 08544

Connectionism is a family of statistical techniques for extracting
complex higher-order correlations from data. It can also be interpreted
and implemented as a neural network of interconnected units with
weighted positive and negative interconnections. Many claims and
counterclaims have been made about connectionism: Some have said it
will supplant artificial intelligence (symbol manipulation) and
explain how we learn and how our brain works. Others have said it is
just a limited family of statistical pattern recognition techniques and
will not be able to account for most of our behavior and cognition. I
will try to sketch how connectionist processes could play a crucial 
but partial role in modeling our behavioral capacities in learning and
representing invariances in the input, thereby mediating the "grounding"
of symbolic representations in analog sensory representations. The
behavioral capacity I will focus on is categorization: Our ability to
sort and label inputs correctly on the basis of feedback from the
consequences of miscategorization.
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa22843; 15 Jun 89 7:40:47 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa01101; 14 Jun 89 19:39:50 EDT
Received: from ICSIB.BERKELEY.EDU by CS.CMU.EDU; 14 Jun 89 18:55:26 EDT
Received: from icsib6. (icsib6.Berkeley.EDU) by icsib.Berkeley.EDU (4.0/SMI-4.0)
	id AA21096; Wed, 14 Jun 89 15:59:22 PDT
Received: by icsib6. (4.0/SMI-4.0)
	id AA11131; Wed, 14 Jun 89 15:56:27 PDT
Date: Wed, 14 Jun 89 15:56:27 PDT
From: Andreas Stolcke <stolcke at icsib6.Berkeley.EDU>
Message-Id: <8906142256.AA11131 at icsib6.>
To: connectionists at CS.CMU.EDU
Subject: Tech. Report available

The following Technical Report is available from ICSI:

______________________________________________________________________

TR-89-032                   Andreas Stolcke                    5/01/89
19 pages, $1.75   "A Connectionist Model of Unification"

A general approach to   encode and  unify recursively  nested  feature
structures in  connectionist  networks is described.   The unification
algorithm implemented by the net is based  on  iterative coarsening of
equivalence classes   of  graph  nodes.    This  method  allows    the
reformulation of unification as a  constraint satisfaction problem and
enables the connectionist implementation to take full advantage of the
potential parallelism inherent in unification, resulting in sublinear
time complexity.  Moreover, the method  is able to process any  number
of feature structures in parallel, searching for possible unifications
and making decisions  among   mutually  exclusive unifications   where
necessary.
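
For readers unfamiliar with the coarsening idea, a minimal sequential
sketch in Python follows (the report's contribution is a parallel
connectionist encoding of this process; the node representation here
is hypothetical):

    class Node:
        # A feature-structure node: an optional atomic label plus
        # named features pointing at sub-nodes.
        def __init__(self, label=None, **feats):
            self.label, self.feats, self.parent = label, feats, self

    def find(n):
        # Union-find: follow parent pointers to the class representative.
        while n.parent is not n:
            n.parent = n.parent.parent   # path halving
            n = n.parent
        return n

    def unify(a, b):
        # Coarsen the equivalence classes of a and b until they agree,
        # or fail on a clash of atomic labels.
        a, b = find(a), find(b)
        if a is b:
            return a
        if a.label and b.label and a.label != b.label:
            raise ValueError("clash: %s vs %s" % (a.label, b.label))
        b.parent = a                     # merge the two classes
        a.label = a.label or b.label
        for f, sub in list(b.feats.items()):
            if f in a.feats:
                unify(a.feats[f], sub)   # shared feature: unify recursively
            else:
                a.feats[f] = sub
        return a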

______________________________________________________________________

               International Computer Science Institute
                         Technical Reports

There is a small charge to cover postage and handling for each report.
This charge is waived for ICSI sponsors and for institutions having an
exchange agreement  with the Institute.   Please use the  form  at the
back of the  list for your order.   Make any  necessary  additions  or
corrections to the address  label on the  form, and return  it  to the
International Computer Science Institute.

NOTE: Qualifying institutions may choose to participate in a technical
report exchange program and receive ICSI TRs at no charge.  To arrange
an exchange agreement write, call or send e-mail message to:

                            Librarian
           International Computer Science Institute
                  1947 Center Street, Suite 600
                      Berkeley, CA  94704
                    info at icsi.berkeley.EDU
                        (415) 643-9153.

______________________________________________________________________





                    Technical Report Order Form


             International Computer Science Institute
                  1947 Center Street, Suite 600
                       Berkeley, CA  94704


Please enclose your check to  cover postage and  handling.  Prepayment
is  required  for  all  materials.  Charges  will  be waived for  ICSI
sponsors and for institutions  that have exchange  agreements with the
Institute.  Make checks payable to "ICSI".


        __________________________________________________________
       |              |           |                   |           |
       |  TR Number   | Quantity  | Postage & Handling|  Total    |
       |______________|___________|___________________|___________|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
       |              |           |                   |       |   |
       |______________|___________|___________________|_______|___|
                                  |                   |       |   |
                                  |   Total Amount    |       |   |
                                  |___________________|_______|___|



       NAME AND ADDRESS

                                    __
       __________________________  |__| Please note change of address
                                    __  as shown.
       __________________________  |__| Please remove my name from this
                                    __  mailing list.
       __________________________  |__| Please add my name to this
                                        mailing list.
       __________________________
      
       __________________________


Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa14169;
          16 Jun 89 12:09:45 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa27023; 16 Jun 89 12:04:26 EDT
Received: from VMA.CC.CMU.EDU by CS.CMU.EDU; 16 Jun 89 06:34:46 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Fri, 16 Jun 89 06:34:14 EDT
Received: from UKACRL.BITNET by CMUCCVMA (Mailer X1.25) with BSMTP id 2217;
 Fri, 16 Jun 89 06:34:13 EDT
Received: from RL.IB by UKACRL.BITNET (Mailer X1.25) with BSMTP id 7570; Fri,
 16 Jun 89 10:51:44 BST
Received:
Via:        UK.AC.EX.CS; 16 JUN 89 10:51:38 BST
Received:   from exsc by expya.cs.exeter.ac.uk; Fri, 16 Jun 89 10:58:04-0000
From:       Lyn Shackleton <lyn at CS.EXETER.AC.UK>
Date:       Fri, 16 Jun 89 10:50:29 BST
Message-Id: <702.8906160950 at exsc.cs.exeter.ac.uk>
To:         connectionists at CS.CMU.EDU
Subject:    journal reviewers


          *******  CONNECTION SCIENCE ******

Editor: Noel E. Sharkey

Because of the number of specialist submissions, the journal is
currently expanding its review panel.
This is an interdisciplinary journal with an emphasis on replicability
of results.

If you wish to volunteer, please send details of your review area to the
address below, or write for further details.

lyn shackleton
(assistant editor)

Centre for Connection Science       JANET:  lyn at uk.ac.exeter.cs
Dept. Computer Science
University of Exeter                UUCP:   !ukc!expya!lyn
Exeter EX4 4PT
Devon                    BITNET: lyn at cs.exeter.ac.uk.UKACRL
U.K.

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa09476;
          16 Jun 89 18:22:40 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa28639; 16 Jun 89 13:20:32 EDT
Received: from PRINCETON.EDU by CS.CMU.EDU; 16 Jun 89 13:18:10 EDT
Received: from suspicion.Princeton.EDU by Princeton.EDU (5.58+++/2.17)
	id AA00231; Fri, 16 Jun 89 11:30:25 EDT
Received: by suspicion.Princeton.EDU (4.0/1.81)
	id AA02388; Fri, 16 Jun 89 11:35:14 EDT
Date: Fri, 16 Jun 89 11:35:14 EDT
From: jose at confidence.Princeton.EDU
Message-Id: <8906161535.AA02388 at suspicion.Princeton.EDU>
To: connectionists at CS.CMU.EDU
Subject: lost papers...



Through a series of complex, political, and unnecessarily
confusing and obfuscating moves by various parties, this
field's attempt to have one coherent, reasonable-quality
meeting has been foiled.  Now we have (at least) 3 of
varying coherence and quality:
IJCNN, which is meeting very soon this month; INNS, which
will meet sometime in January; and NIPS, which will
occur as always in November.  It is too bad that greed
and self-interest seem to take the place of the sort
of nurturing and careful, thoughtful, long-term commitment
to a field that has such great potential.  It would
be a shame to see the polarization within the field
allow people's work and thought to be lost or ignored.

        Steve Hanson                                                 

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa08295;
          16 Jun 89 21:24:38 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa03304; 16 Jun 89 17:21:10 EDT
Received: from ANDREW.CMU.EDU by CS.CMU.EDU; 16 Jun 89 17:16:05 EDT
Received: by andrew.cmu.edu (5.54/3.15) id <AA07439> for connectionists at cs.cmu.edu; Fri, 16 Jun 89 17:15:56 EDT
Received: via switchmail; Fri, 16 Jun 89 17:15:53 -0400 (EDT)
Received: from chicory.psy.cmu.edu via qmail
          ID </afs/andrew.cmu.edu/service/mailqs/q004/QF.cYaKqAy00jWD00Cr4E>;
          Fri, 16 Jun 89 17:13:50 -0400 (EDT)
Received: from chicory.psy.cmu.edu via qmail
          ID </afs/andrew.cmu.edu/usr24/jlm/.Outgoing/QF.IYaKq2y00jWDQDSWsC>;
          Fri, 16 Jun 89 17:13:39 -0400 (EDT)
Received: from BatMail.robin.v2.8.CUILIB.3.45.SNAP.NOT.LINKED.chicory.psy.cmu.edu.sun3.35
          via MS.5.6.chicory.psy.cmu.edu.sun3_35;
          Fri, 16 Jun 89 17:13:33 -0400 (EDT)
Message-Id: <IYaKpxy00jWD8DSWhg at andrew.cmu.edu>
Date: Fri, 16 Jun 89 17:13:33 -0400 (EDT)
From: "James L. McClelland" <jlm+ at ANDREW.CMU.EDU>
To: connectionists at CS.CMU.EDU
Subject: Let 1,000 flowers bloom
In-Reply-To: <8906161535.AA02388 at suspicion.Princeton.EDU>
References: <8906161535.AA02388 at suspicion.Princeton.EDU>

In response to Steve Hanson's last message:

I myself do not mind the fact that there are three connectionist
meetings, as well as others where connectionist work can be presented.

Sure, there are factions and there is politics; this has led to some
fragmentation.  But there is an upside.  Different perspectives all
have a chance to be represented, and different goals have a chance to
be served.  The NIPS meeting, for example, is a small, high-quality
meeting dedicated to highly quantitative, computational-theory type
work with a biological flavor; I think it would be a shame if these
characteristics were lost in an attempt to achieve some grand
synthesis.  The other conferences have their specific strengths as
well. Meanwhile, non-connectionist conferences like Cognitive Science
and AAAI attract some of the good connectionist work applied to
language and higher-level aspects of cognition.  People seem to be
gravitating toward attending the one or two of these that best suit
their own needs and interests within the broad range of connectionist
research.  Seems to me things are pretty close to the way they
ought to be.  It would be nice if the process of arriving at this
state could have been smoother, and maybe in the end it won't turn
out that there need to be quite so many meetings, but things could
be a heck of a lot worse.

-- Jay McClelland







Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa00328; 17 Jun 89 3:02:39 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa07666; 16 Jun 89 22:50:01 EDT
Received: from EPHEMERAL.AI.TORONTO.EDU by CS.CMU.EDU; 16 Jun 89 22:48:23 EDT
Received: from localhost (stdin) by ephemeral.ai.toronto.edu with SMTP id 10958; Fri, 16 Jun 89 15:33:20 EDT
To:	connectionists at CS.CMU.EDU
Subject: CRG-TR-89-3
Date:	Fri, 16 Jun 89 15:33:15 EDT
From:	Carol Plathan <carol at ai.toronto.edu>
Message-Id: <89Jun16.153320edt.10958 at ephemeral.ai.toronto.edu>

The Technical Report CRG-TR-89-3 by Hinton and Shallice (May 1989) can be
obtained by sending me your full mailing address.  An abstract of this Report
follows:


  LESIONING A CONNECTIONIST NETWORK:  INVESTIGATIONS OF ACQUIRED DYSLEXIA
  -----------------------------------------------------------------------

Geoffrey E. Hinton                          Tim Shallice
Department of Computer Science              MRC Applied Psychology Unit
University of Toronto                       Cambridge, UK

ABSTRACT:
--------

     A connectionist network which had been trained to map orthographic
representations into semantic ones was systematically 'lesioned'.  Wherever it
was lesioned it produced more Visual, Semantic, and Mixed visual and semantic
errors than would be expected by chance.  With more severe lesions it showed
relatively spared categorical discrimination when item identification was not
possible.  Both phenomena are qualitatively similar to those observed in
neurological patients.  The error pattern is that characteristically found in
deep dyslexia.  The spared categorical discrimination is observed in semantic
access dyslexia and also in a form of pure alexia.  It is concluded that the
lesioning of connectionist networks may produce phenomena which mimic
non-transparent aspects of the behaviour of neurological patients.
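
As a rough operational illustration of 'lesioning' (the procedure used
in the report may differ; this sketch simply zeroes a random fraction
of a weight matrix):

    import numpy as np

    def lesion(W, fraction, rng=np.random.default_rng(0)):
        # Return a copy of W with roughly `fraction` of its entries
        # zeroed, simulating damage to connections.
        mask = rng.random(W.shape) >= fraction
        return W * mask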


Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa04289;
          17 Jun 89 18:05:49 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa13079; 17 Jun 89 17:30:52 EDT
Received: from IBM.COM by CS.CMU.EDU; 17 Jun 89 17:28:36 EDT
Date: 17 Jun 89 17:05:59 EDT
From: Scott Kirkpatrick <KIRK at ibm.com>
To:   connectionists at CS.CMU.EDU
cc:   Steve at confidence.princeton.edu
Message-Id: <061789.170559.kirk at ibm.com>
Subject: NIPS 89 status and schedule

Authors have been known to call me, Rich Lippman (the program committee chair)
or Kathie Hibbard (who runs the local office in Denver), asking when the NIPS
program decisions will be made, announced, etc...  I'll give our schedule
in order to restrict the phone calls to those which can help us to catch
mistakes.  Steve Hanson (NIPS publicity, and responsible for our eye-catching
blue poster) -- please put a version of this on all the other bulletin
boards.

Deadline for abstracts and summaries was May 30, 1989.  We have now received
over 460 contributions (almost 50% more than last year!).  They are now
logged in, and cards acknowledging receipt will be mailed next week to authors.
Authors who have not received an acknowledgement by June 30 should write to
Kathie Hibbard at the NIPS office; it's possible we got your address wrong
in our database, and this will help us catch these things.

Refereeing will take July.  Collecting the results and defining a final
program will be done in August.  We plan to mail letters informing authors
of the outcome during the first week of September.  At that time, we will
send all registration material, information about prices, and a complete
program.   If you haven't heard from us in late September, again please
write, to help us straighten things out.
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa05039;
          17 Jun 89 20:45:24 EDT
Received: from csvax.caltech.edu by Q.CS.CMU.EDU id aa14984;
          17 Jun 89 20:41:52 EDT
Received: from aurel.caltech.edu by csvax.caltech.edu (5.59/1.2)
	id AA09482; Sat, 17 Jun 89 16:36:47 PDT
Received: from smaug.caltech.edu. by aurel.caltech.edu (4.0/SMI-4.0)
	id AA00438; Sat, 17 Jun 89 16:39:32 PDT
Received: by smaug.caltech.edu. (4.0/SMI-4.0)
	id AA28797; Sat, 17 Jun 89 14:39:57 PDT
Date: Sat, 17 Jun 89 14:39:57 PDT
From: Jim Bower <jbower at smaug.caltech.edu>
Message-Id: <8906172139.AA28797 at smaug.caltech.edu.>
To: connectionists at Q.CS.CMU.EDU
Subject: 3 meetings

Concerning Steve Hanson's comments on meetings.  I think that it is only
 fair to note that not all the recent history of neural-net meetings has
 been characterized by "greed and self-interest".  In order for greed to be
 an issue there must be an opportunity for organizers to make money.  In
 order for self-interest to be an issue there must be a biasable mechanism
 for self promotion.  Opportunities for commercialization and the often
 outrageous associated claims apply to both cases. In this regard, in my
 view, it is unfair to mention all three national neural network meetings in
 the same sentence.  One of the three meetings was founded and continues
 to be organized to provide a "nurturing, careful, and thoughtful"  forum
 committed to the long term support and growth of a field with
 "considerable potential".  Most of you know which meeting that is, but if a
 clue is necessary, it is the only one that takes place in a city not known as
 a national focus for greed and self-interest. 

Jim Bower

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa10897; 19 Jun 89 5:48:18 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa14967; 19 Jun 89 5:40:18 EDT
Received: from VMA.CC.CMU.EDU by CS.CMU.EDU; 19 Jun 89 05:38:17 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Mon, 19 Jun 89 05:38:43 EDT
Received: from UKACRL.BITNET by CMUCCVMA (Mailer X1.25) with BSMTP id 0828;
 Mon, 19 Jun 89 05:38:42 EDT
Received: from RL.IB by UKACRL.BITNET (Mailer X1.25) with BSMTP id 6641; Mon,
 19 Jun 89 10:36:41 BST
Received:
Via:         UK.AC.EX.CS; 19 JUN 89 10:36:23 BST
Received:
From:        Noel Sharkey <noel at CS.EXETER.AC.UK>
Date:        Mon, 19 Jun 89 10:35:56 BST
Message-Id:  <4309.8906190935 at entropy.cs.exeter.ac.uk>
To:          carol at AI.TORONTO.EDU
Cc:          connectionists at CS.CMU.EDU
In-Reply-To: Carol Plathan's message of Fri, 16 Jun 89 15:33:15 EDT
             <89Jun16.153320edt.10958 at ephemeral.ai.toronto.edu
Subject:     CRG-TR-89-3


please send me a copy of Report CRG-TR-89-3 by Hinton and Shallice (May 1989).
 LESIONING A CONNECTIONIST NETWORK:  INVESTIGATIONS OF ACQUIRED DYSLEXIA


noel sharkey

Centre for Connection Science       JANET:  noel at uk.ac.exeter.cs
Dept. Computer Science
University of Exeter                UUCP:   !ukc!expya!noel
Exeter EX4 4PT
Devon                    BITNET: noel at cs.exeter.ac.uk@UKACRL
U.K.

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa01375;
          19 Jun 89 10:01:27 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa15712; 19 Jun 89 6:05:21 EDT
Received: from VMA.CC.CMU.EDU by CS.CMU.EDU; 19 Jun 89 06:03:46 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Mon, 19 Jun 89 06:04:13 EDT
Received: from UKACRL.BITNET by CMUCCVMA (Mailer X1.25) with BSMTP id 0907;
 Mon, 19 Jun 89 06:04:12 EDT
Received: from RL.IB by UKACRL.BITNET (Mailer X1.25) with BSMTP id 7010; Mon,
 19 Jun 89 10:50:54 BST
Received:
Via:         UK.AC.EX.CS; 19 JUN 89 10:50:38 BST
Received:
From:        Noel Sharkey <noel at CS.EXETER.AC.UK>
Date:        Mon, 19 Jun 89 10:50:37 BST
Message-Id:  <4312.8906190950 at entropy.cs.exeter.ac.uk>
To:          jose at CONFIDENCE.PRINCETON.EDU
Cc:          connectionists at CS.CMU.EDU
In-Reply-To: jose at edu.princeton.confidence's message of Fri, 16 Jun 89 11:35:14
             EDT <8906161535.AA02388 at suspicion.Princeton.EDU
Subject:     lost papers...


Hanson's comment seems a bit bitter ... I wonder what is really
behind it. In a field growing as rapidly as connectionism, I would
have thought that we need a lot more annual meetings. And there will
of course be more when we in Europe get our act together.

I think that the field is getting far too large to be unified. Papers
come at me from all directions - from the structure of the hippocampus to
reasoning in natural language understanding and low-level visual perception.
Surely it is inevitable that the "field" will fractionate into many
specialised sub-fields, as is happening at the cognitive science meeting
etc., as Jay pointed out.

Imagine having one annual psychology meeting, or one annual physics
meeting.


noel sharkey

Centre for Connection Science       JANET:  noel at uk.ac.exeter.cs
Dept. Computer Science
University of Exeter                UUCP:   !ukc!expya!noel
Exeter EX4 4PT
Devon                    BITNET: noel at cs.exeter.ac.uk@UKACRL
U.K.

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa02032;
          19 Jun 89 10:53:39 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa17474; 19 Jun 89 8:49:38 EDT
Received: from PRINCETON.EDU by CS.CMU.EDU; 19 Jun 89 08:47:31 EDT
Received: from suspicion.Princeton.EDU by Princeton.EDU (5.58+++/2.17)
	id AA12779; Mon, 19 Jun 89 08:47:20 EDT
Received: by suspicion.Princeton.EDU (4.0/1.81)
	id AA00471; Mon, 19 Jun 89 08:52:13 EDT
Date: Mon, 19 Jun 89 08:52:13 EDT
From: "Stephen J. Hanson" <jose at confidence.Princeton.EDU>
Message-Id: <8906191252.AA00471 at suspicion.Princeton.EDU>
To: jose at confidence.Princeton.EDU, noel%CS.EXETER.AC.UK at pucc.Princeton.EDU
Subject: Re:  lost papers...
Cc: connectionists at CS.CMU.EDU

Flowers and all..

(flame on)

Actually, I agree with Jay and Noel: diversity is the spice of life.
In fact, one of the key aspects of this field is the fact that one
can find 8-9 disciplines represented in the room.  I think we forget
sometimes how remarkable this actually is.  I also think it
is important to remember, as we rush to sell our version of
the story, or make a septobijjillion dollars selling neural
net black boxes or teaching neural nets to the great
unwashed, that we don't shoot ourselves in our collective feet
--after all, who is the competition here?  Remember, some sort of
proto-AI killed off this field less than 40 years ago.  All I
was suggesting was that it would be nice if there were some
sort of agreement about organization, politics, and coherence
about progress in the field--god knows--not about the subject
matter.  I realize this is unlikely and somewhat naive,
and more (conferences, journals, etc.) is usually better in any field...
it's just that all these flowers look a bit carnivorous.

(flame off)

and enough.. back to some substance please.

	jose
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa27436;
          20 Jun 89 19:58:42 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa16421; 20 Jun 89 15:22:48 EDT
Received: from ELROND.STANFORD.EDU by CS.CMU.EDU; 20 Jun 89 15:20:05 EDT
Received: by elrond.Stanford.EDU (3.2/4.7); Tue, 20 Jun 89 12:03:08 PDT
Date: Tue, 20 Jun 89 12:03:08 PDT
From: Dave Rumelhart <der at elrond.Stanford.EDU>
To: jose at confidence.Princeton.EDU, noel%CS.EXETER.AC.UK at pucc.Princeton.EDU
Subject: Re:  lost papers...
Cc: connectionists at CS.CMU.EDU

	Not that I want to prolong this discussion, but as a member of the
INNS board, I should like to clarify one point concerning the IJCNN
meetings and the INNS.  In fact, the movement has been toward cooperation
between the large IEEE-sponsored meeting and the large INNS-sponsored
meeting, formerly known as the ICNN meeting.  There has been an agreement
whereby IEEE and INNS will co-sponsor two meetings per year -- roughly a
summer meeting and a winter meeting.  These jointly sponsored meetings
have been dubbed IJCNN, or International JOINT Conference on Neural Networks,
with the JOINT signifying the joint sponsorship.  The movement of the INNS
annual meeting from September 1989 to January 1990 has been by way of
cooperating, not by way of competing.  The current plan is to continue to
jointly sponsor two meetings per year as long as there is sufficient interest.
In addition, INNS and IEEE will probably help sponsor occasional European and
Japanese meetings.

	That there will, in addition, of course, be a number of smaller
meetings and workshops sponsored by other groups is, in my opinion, healthy
and natural.  There are many people with many interests working on things they
call (or are willing to call) neural networks.  It is normal that special
interest groups should form within such an interdisciplinary field.

	I hope these comments are of some use to those bewildered by the
array of meetings.  I, in fact, believe that the establishment of the joint
meeting plan between INNS and IEEE was a major accomplishment and certainly
a move in the right direction for the field.


	David E. Rumelhart
	der at psych.stanford.edu

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa01011; 21 Jun 89 3:51:42 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa24477; 21 Jun 89 3:48:22 EDT
Received: from UUNET.UU.NET by CS.CMU.EDU; 21 Jun 89 03:46:49 EDT
Received: from munnari.UUCP by uunet.uu.net (5.61/1.14) with UUCP 
	id AA14955; Wed, 21 Jun 89 03:46:42 -0400
From: munnari!cluster.cs.su.OZ.AU!ray at uunet.uu.net
Received: from munnari.oz.au (munnari.oz) by murtoa.cs.mu.oz (5.5)
	id AA01986; Wed, 21 Jun 89 16:13:32 EST
	(from ray at cluster.cs.su.OZ.AU for uunet!connectionists at CS.CMU.EDU)
Received: from cluster.cs.su.oz (via basser) by munnari.oz.au with SunIII (5.61+IDA+MU)
	id AA20510; Wed, 21 Jun 89 13:21:46 +1000
	(from ray at cluster.cs.su.OZ.AU for connectionists at CS.CMU.EDU@murtoa.cs.mu.OZ.AU)
Date: Wed, 21 Jun 89 13:20:51 +1000
Message-Id: <8906210321.20510 at munnari.oz.au>
To: connectionists at CS.CMU.EDU
Subject: location of IJCNN conferences
Cc: munnari!ray at uunet.uu.net

  >  From Connectionists-Request at q.cs.cmu.edu@murtoa.cs.mu.oz
  >  From: der%elrond.Stanford.EDU at murtoa.cs.mu.oz (Dave Rumelhart)
  >  Date: Tue, 20 Jun 89 12:03:08 PDT
  >  To: jose at ... noel%...
  >  Subject: Re:  lost papers...
  >  Cc: connectionists at CS.CMU.EDU
  >  
  >  ... The current plan to continue to  jointly sponsor two meetings per
  >  year as long as there is sufficient interest.   In addition INNS and
  >  IEEE will probably help sponsor occasional European and Japanese
  >  meetings from time to time.  

This is one aspect of the current arrangement that bothers me. The INNS
is an international society, but its *premier* conference is held only
in the USA. Given the high percentage of US INNS members, and the amount
of NN research done in the USA, it's fitting that the bulk of NN conferences
be held there.  But the INNS' premier conference should be held occasionally
outside the USA.

The IJCAI and AAAI have a good arrangement.  The IJCAI
conferences are held every second year, with every second conference
held in North America. The AAAI holds its own conference in the USA in the
three out of four years that the IJCAI is not in North America.  I think
the exact periodicities for the AAAI and IJCAI may not suit the IEEE and
INNS. It seems that people want more frequent conferences. But I think
the general idea is sound.  Perhaps every second occurrence of the January
conference could be held outside the USA? (Leaving three out of every four
IJCNN conferences in the USA.)

As I understand it, the current arrangement will be reviewed in a year or
two.  I was glad to see the INNS and IEEE get together and coordinate the
conferences.  I just think some fine tuning is required.


Raymond Lister
Basser Department of Computer Science
University of Sydney
AUSTRALIA

Internet: ray at basser.cs.su.oz.au@uunet.uu.net
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa05561; 21 Jun 89 7:39:33 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa26050; 21 Jun 89 5:08:45 EDT
Received: from UUNET.UU.NET by CS.CMU.EDU; 21 Jun 89 05:06:33 EDT
Received: from munnari.UUCP by uunet.uu.net (5.61/1.14) with UUCP 
	id AA01454; Wed, 21 Jun 89 05:06:24 -0400
From: munnari!cluster.cs.su.OZ.AU!ray at uunet.uu.net
Received: from munnari.oz.au (munnari.oz) by murtoa.cs.mu.oz (5.5)
	id AA06593; Wed, 21 Jun 89 18:38:56 EST
	(from ray at cluster.cs.su.OZ.AU for uunet!connectionists at CS.CMU.EDU)
Received: from cluster.cs.su.oz (via basser) by munnari.oz.au with SunIII (5.61+IDA+MU)
	id AA24824; Wed, 21 Jun 89 18:38:52 +1000
	(from ray at cluster.cs.su.OZ.AU for connectionists at CS.CMU.EDU@murtoa.cs.mu.OZ.AU)
Date: Wed, 21 Jun 89 18:37:48 +1000
Message-Id: <8906210838.24824 at munnari.oz.au>
To: singer at THINK.COM
Subject: Re: Sparse vs. Dense networks
Cc: munnari!ray at uunet.uu.net, connectionists at CS.CMU.EDU

Hopfield and Tank's approach to the N city traveling salesman problem (TSP)
('"Neural" Computation of Decisions in Optimization Problems' Biol.
Cybern. 52, pp 141-152 (1985)) used an N**2 matrix of neurons. Each neuron
is connected to kN other neurons. Bailey and Hammerstrom pointed out that
this high level of interconnect raises the area requirement to N**3.  ('Why
VLSI Implementations of Associative VLCNs Require Connection Multiplexing',
IEEE International Conference on Neural Networks, San Diego (1988),
pp II-173 to II-180 - anyone interested in implementation issues, but without
a background in VLSI design, should read this paper). An N**2 area
requirement is pushing it.  N**3 is just about impractical, for any but very
small TSPs.  Adding metal layers for interconnect doesn't beat the problem
(unless the number of metal layers is a function of N, which is impractical).

I have devised an approach that uses an N**2 array of neurons, like H&T,
but which requires only log N interconnect per neuron (giving an overall area
requirement of (N**2)*(log N)). My approach does not use multiplexing.  It
works by restricting the matrix to legal solutions.  Despite this restriction,
the approach generates quite good TSP solutions.  Not only does the approach
reduce the level of interconnect to practical levels, it also suggests that
the capacity of analog approaches to move within the volume of the solution
hypercube is not as important as previously thought.  If you would like to
know more about my approach, send your complete postal address, and I'll
send you a paper.
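
As a back-of-envelope check of the two wiring budgets (an added
illustration, assuming a fixed number of metal layers so that wire
count dominates chip area):

    \[
      \underbrace{N^2}_{\text{neurons}} \times \underbrace{kN}_{\text{fan-in}}
      = O(N^3)
      \qquad \text{vs.} \qquad
      N^2 \times O(\log N) = O(N^2 \log N).
    \]

For N = 100 cities that is on the order of 10^6 connections versus
roughly 7 x 10^4 -- about a fifteen-fold reduction.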

Raymond Lister
Basser Department of Computer Science
University of Sydney
NSW  2006
AUSTRALIA

Internet: ray%basser.cs.su.oz.au at uunet.uu.net
CSNET:    ray%basser.cs.su.oz at csnet-relay
UUCP:	  {uunet,hplabs,pyramid,mcvax,ukc,nttlab}!munnari!basser.cs.su.oz!ray
JANET:	  munnari!basser.cs.su.oz!ray at ukc
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa08791;
          21 Jun 89 17:04:10 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa00924; 21 Jun 89 7:48:08 EDT
Received: from TI.COM by CS.CMU.EDU; 21 Jun 89 07:45:45 EDT
Received: by ti.com id AA01436; Tue, 20 Jun 89 21:32:43 CDT
Message-Id: <8906210232.AA01436 at ti.com>
Received: by tilde id AA29832; Tue, 20 Jun 89 21:30:34 CDT
Date: Tue, 20 Jun 89 21:30:34 CDT
From: lugowski at ngstl1.csc.ti.com
To: connectionists at CS.CMU.EDU
Subject: objection!

As an original member of connectionists at cs.cmu (50+ folks), I have
considered unsubscribing because of the recent runaway discussions a la
Self-Org or AIList.  Thus, I will keep my note short and ask for no
replies, although I anticipate other strong opinions on the same subject.
Please consider the note that follows to be my personal opinion.

				-- Marek Lugowski, TI AI Lab/IU CS Dept.

I object to an abstract posted to the list recently, excerpted below,
as the definition of connectionism given there is grossly misleading.
As *one* prototype that does not fit the misleadingly drawn category,
I suggest that my work is entirely connectionist; it has been perceived
by connectionists to be connectionist as early as 1986, yet in *no
way* fits the definition cited below.  I object knowing that the
author presented at the Emergent Computation conference May 22-26 and
had plenty of opportunity to disabuse himself in Los Alamos of such
distortions.  The fact that he apparently chose not to do so is what I
object to most strongly, and only then to the distortion itself.

Lugowski, Marek.  "Computational Metabolism: Towards Biological
Geometries for Computing", pp. 341 - 368, in _Artificial Life_, 2nd
printing, Christopher Langton, ed., Addison-Wesley, Reading, MA: 1989.
-----------------------
Objectionable citation:

	(Abstract of paper presented... June 11 1989)
 
	Connectionism is a family of statistical techniques for extracting
	complex higher-order correlations from data.
-----------------------
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa00127;
          22 Jun 89 16:48:56 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa18103; 22 Jun 89 11:59:23 EDT
Received: from BOULDER.COLORADO.EDU by CS.CMU.EDU; 22 Jun 89 11:57:15 EDT
Return-Path: <sanger at boulder.Colorado.EDU>
Received: by boulder.Colorado.EDU (cu-hub.022489)
Received: by tigger.colorado.edu (cu.generic.021288)
Date: Thu, 22 Jun 89 09:56:59 MDT
From: Dennis Sanger <sanger at boulder.Colorado.EDU>
Message-Id: <8906221556.AA24065 at tigger>
To: connectionists at cs.cmu.edu
Subject: TR available:  Contribution Analysis


University of Colorado at Boulder Technical Report CU-CS-435-89 is
now available:


            Contribution Analysis:  A Technique for Assigning
        Responsibilities to Hidden Units in Connectionist Networks

                              Dennis Sanger

       AT&T Bell Laboratories and the University of Colorado at Boulder


       ABSTRACT: Contributions, the products of hidden unit
       activations and weights, are presented as a valuable tool
       for investigating the inner workings of neural nets.  Using
       a scaled-down version of NETtalk, a fully automated method
       for summarizing in a compact form both local and distributed
       hidden-unit responsibilities is demonstrated.  Contributions
       are shown to be more useful for ascertaining hidden-unit
       responsibilities than either weights or hidden-unit
       activations.  Among the results yielded by contribution
       analysis:  for the example net, redundant output units are
       handled by identical patterns of hidden units, and the
       amount of responsibility a hidden unit takes on is inversely
       proportional to the number of hidden units.
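
The central quantity is simple to compute; a minimal sketch (with
made-up activations and weights, not data from the report):

    import numpy as np

    def contributions(hidden, W_out):
        # contrib[i, j] = activation of hidden unit i times the
        # weight from hidden unit i to output unit j.
        return hidden[:, None] * W_out        # (n_hidden, n_output)

    hidden = np.array([0.9, 0.1, 0.5])        # example hidden activations
    W_out = np.array([[0.8, -0.2],
                      [0.1,  0.9],
                      [-0.5,  0.4]])
    C = contributions(hidden, W_out)
    print(np.argsort(-np.abs(C[:, 0])))       # units ranked by responsibility
                                              # for output unit 0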


Please send requests to conn_tech_report at boulder.colorado.edu.
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa00855;
          22 Jun 89 17:27:17 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa19129; 22 Jun 89 12:36:08 EDT
Received: from BOULDER.COLORADO.EDU by CS.CMU.EDU; 22 Jun 89 12:33:22 EDT
Return-Path: <gardner at boulder.Colorado.EDU>
Received: by boulder.Colorado.EDU (cu-hub.022489)
Received: by tigger.colorado.edu (cu.generic.021288)
Date: Thu, 22 Jun 89 10:33:10 MDT
From: Phillip E. Gardner <gardner at boulder.Colorado.EDU>
Message-Id: <8906221633.AA25139 at tigger>
To: connectionists at cs.cmu.edu, pdp at boulder.Colorado.EDU
Subject: learning spatial data

I'm interested in teaching a network to learn its way around a building.
I want to use a technique similar to the one outlined by Dr. Widrow at
the last NIPS conference, where a network learned how to back up a truck.
If you could send me some references that might help, including references
to what Dr. Widrow talked about, I would be most thankful.

				    Sincerely,

				    Phil Gardner
				    gardner at boulder.colorado.edu
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa03744;
          22 Jun 89 22:23:10 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa25755; 22 Jun 89 22:18:22 EDT
Received: from UCSD.EDU by CS.CMU.EDU; 22 Jun 89 22:15:45 EDT
Received: from sdbio2.ucsd.edu by ucsd.edu; id AA27631
	sendmail 5.60/UCSD-1.0
	Thu, 22 Jun 89 19:14:05 PDT for connectionists at cs.cmu.edu
Received: by sdbio2.UCSD.EDU (3.2/UCSDGENERIC2)
	id AA23341 for delivery to terry at helmholtz.sdsc.edu; Thu, 22 Jun 89 19:16:14 PDT
Date: Thu, 22 Jun 89 19:16:14 PDT
From: terry%sdbio2 at ucsd.edu (Terry Sejnowski)
Message-Id: <8906230216.AA23341 at sdbio2.UCSD.EDU>
To: connectionists at cs.cmu.edu
Subject: Neural Computation, Vol. 1, No. 2
Cc: terry at helmholtz.sdsc.edu

NEURAL COMPUTATION  -- Issue #2  --  July 1, 1989

Views:

Recurrent backpropagation and the dynamical approach to
	adaptive neural computation.  F. J. Pineda

New models for motor control.  J. S. Altman and J. Kien

Seeing chips: Analog VLSI circuits for computer vision.  C. Koch

A proposal for more powerful learning algorithms.  E. B. Baum

Letters:

A possible neural mechanism for computing shape from shading.
	A. Pentland

Optimization in model matching and perceptual organization.
	E. Mjolsness, G. Gindi and P. Anandan

Distributed parallel processing in the vestibulo-oculomotor
	system.  T. J. Anastasio and D. A. Robinson

A neural model for generation of some behaviors in the 
	fictive scratch reflex.  R. Shadmehr

A robot that walks: Emergent behaviors from a carefully
	evolved network.  R. A. Brooks

Learning state space trajectories in recurrent neural
	networks.  B. A. Pearlmutter.

A learning algorithm for continually running fully recurrent
	neural networks.  R. J. Williams and D. Zipser.

Fast learning in networks of locally-tuned processing units.
	J. Moody and C. J. Darken.

-----

SUBSCRIPTIONS:

______ $35. Student
______ $45. Individual
______ $90. Institution

Add $9. for surface-mail postage outside the USA and Canada,
or $17. for air mail.

MIT Press Journals, 55 Hayward Street, Cambridge, MA 02142.
	(617) 253-2889.

-----


Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa24809;
          23 Jun 89 11:39:13 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa13269; 23 Jun 89 11:34:30 EDT
Received: from UUNET.UU.NET by CS.CMU.EDU; 23 Jun 89 11:31:42 EDT
Received: from unido.UUCP by uunet.uu.net (5.61/1.14) with UUCP 
	id AA22532; Fri, 23 Jun 89 11:31:36 -0400
Received: from gmdzi.UUCP (gmdzi) (1961)
	by unido.irb.informatik.uni-dortmund.de
	for uunet
	id AP04399; Fri, 23 Jun 89 15:19:41 +0100
Received: by gmdzi.UUCP id AA19582; Fri, 23 Jun 89 16:20:34 -0200
Received: by zsv.gmd.de id AA03551; Fri, 23 Jun 89 16:20:55 +0200
Date: Fri, 23 Jun 89 16:20:55 +0200
From: unido!gmdzi!zsv!joerg at uunet.UU.NET (Joerg Kindermann)
Message-Id: <8906231420.AA03551 at zsv.gmd.de>
To: connectionists at cs.cmu.edu
Cc: gmdzi!zsv!joerg at uunet.UU.NET
Subject: wanted: guest researcher


If you are a postgraduate student or scientist with a strong background in
neural networks, we would be interested in getting in touch with you:

We are a small team (5 scientists plus students) doing research in neural
networks here at the GMD. We are currently applying for funding for several
guest-researcher positions, but we need strong arguments (i.e. good people
who are interested in a stay) to actually get the money.

Our research interests are both theoretical and application-oriented. The main
focus is on temporal computation (time-series analysis) by neural networks. We
are using multi-layer recurrent networks and gradient learning algorithms
(backpropagation, reinforcement).  Applications are speech recognition,
analysis of medical data (ECG, ...), and navigation tasks for autonomous
vehicles (2-D simulation only). A second research direction is the
optimization of neural networks by means of genetic algorithms. We are using
both SUN3s and a parallel computer (64-CPU, transputer-based).

So, if you are interested, please write a letter, indicating your background
in neural networks and preferred dates for your stay.

         Dr. Joerg Kindermann                                       
         Gesellschaft fuer Mathematik und Datenverarbeitung mbH (GMD)
         Postfach 1240                      email:  joerg at gmdzi.uucp 
         D-5205 St. Augustin 1, FRG         phone: (+49 02241) 142437
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa16084;
          25 Jun 89 12:30:57 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa27315; 25 Jun 89 12:25:53 EDT
Received: from THINK.COM by CS.CMU.EDU; 25 Jun 89 12:24:12 EDT
Return-Path: <singer at Think.COM>
Received: from leander.think.com by Think.COM; Sun, 25 Jun 89 12:24:30 EDT
Received: by leander.think.com; Sun, 25 Jun 89 12:22:43 EDT
Date: Sun, 25 Jun 89 12:22:43 EDT
From: singer at Think.COM
Message-Id: <8906251622.AA07513 at leander.think.com>
To: unido!gmdzi!zsv!joerg at uunet.uu.net
Cc: connectionists at cs.cmu.edu, gmdzi!zsv!joerg at uunet.uu.net
In-Reply-To: Joerg Kindermann's message of Fri, 23 Jun 89 16:20:55 +0200 <8906231420.AA03551 at zsv.gmd.de>
Subject: wanted: guest researcher

   Date: Fri, 23 Jun 89 16:20:55 +0200
   From: unido!gmdzi!zsv!joerg at uunet.uu.net (Joerg Kindermann)

   If you are a postgraduate student or scientist with a strong background in
   neural networks, we would be interested in getting in touch with you:

	[...]

	    Dr. Joerg Kindermann                                       
	    Gesellschaft fuer Mathematik und Datenverarbeitung mbH (GMD)
	    Postfach 1240                      email:  joerg at gmdzi.uucp 
	    D-5205 St. Augustin 1, FRG         phone: (+49 02241) 142437

Though I am not a postgraduate student (i.e. I do not have a PhD), your
invitation interested me greatly, especially when I saw your address.
Is your organization the same one in which work relating the methods of
statistical mechanics to "Darwinian" optimization paradigms has been done?
Unfortunately I do not have the particular papers and names at my disposal
right now, but the Gesellschaft sounds familiar.

My own background includes a Bachelor of Science in Neural Science and a
Bachelor of Arts in Philosophy.  I gave an oral presentation at the first
NIPS (Neural Information Processing Systems) Conference in Denver, CO in
1987 on hybrid neural net/biological systems.  I spent a year beginning my
PhD studies at Johns Hopkins with Terry Sejnowski, but had to stop
temporarily because Dr Sejnowski moved to California.  I am currently
employed by Thinking Machines Corporation, working on their 64K-processor
Connection Machine doing neural network research, genetic algorithm
research, combinatorial optimization work, and statistical work.  I also
have a working knowledge of German from having worked in Frankfurt for a
summer.

I would be extremely interested in discussing this with you further, if my
qualifications seem appropriate.

Alexander Singer
Thinking Machines Corp.
245 First St.
Cambridge, MA 02142 USA
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa01703;
          25 Jun 89 17:39:39 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa27905; 25 Jun 89 12:36:58 EDT
Received: from [128.188.1.13] by CS.CMU.EDU; 25 Jun 89 12:33:55 EDT
Received: by net2.m2c.org (5.57/sendmail.28-May-87)
	id AA00561; Sun, 25 Jun 89 12:28:13 EDT
Received: by m2c.m2c.org (5.57/sendmail.28-May-87)
	id AA05897; Sun, 25 Jun 89 12:30:16 EDT
Received: by wpi (4.12/4.7)
	id AA09483; Sun, 25 Jun 89 12:32:44 edt
Date: Sun, 25 Jun 89 12:32:44 edt
From: weili at wpi.wpi.edu (Wei Li)
Message-Id: <8906251632.AA09483 at wpi>
To: connectionists at CS.CMU.EDU
Subject: information on funding wanted

Hi, if anyone could send me some notes on funding from NSF, AFOSR, NIH
and DARPA, including the areas of interest, the amount of money available,
the deadlines for proposals, and the phone numbers of people to contact,
I would appreciate it very much. This information was given at the IJCNN 89
neural network conference in Washington, D.C.

My e-mail address is

weili at wpi.wpi.edu

---- Wei Li

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa09527; 28 Jun 89 9:22:00 EDT
Received: from ri.cmu.edu by Q.CS.CMU.EDU id aa08554; 28 Jun 89 8:08:24 EDT
Received: from VMA.CC.CMU.EDU by RI.CMU.EDU; 28 Jun 89 08:04:57 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Wed, 28 Jun 89 08:03:12 EDT
Received: from EB0UB011.BITNET (D4PBPHB2) by CMUCCVMA (Mailer X1.25) with BSMTP
 id 2266; Wed, 28 Jun 89 08:03:11 EDT
Date: Wed, 28 Jun 89 12:58:05 HOE
To: connectionists at c.cs.cmu.edu
From: D4PBPHB2%EB0UB011.BITNET at VMA.CC.CMU.EDU
Comment: CROSSNET mail via MAILER at CMUCCVMA

Date: 28 June 1989, 12:57:19 HOE
From: D4PBPHB2 at EB0UB011
To:   connectionists at c.cs.cmu.edu

Add/Subscribe Perfecto Herrera

Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa01812; 29 Jun 89 6:05:54 EDT
Received: from ri.cmu.edu by Q.CS.CMU.EDU id aa21472; 29 Jun 89 6:00:41 EDT
Received: from UUNET.UU.NET by RI.CMU.EDU; 29 Jun 89 05:58:31 EDT
Received: from munnari.UUCP by uunet.uu.net (5.61/1.14) with UUCP 
	id AA02442; Thu, 29 Jun 89 05:58:21 -0400
Received: from munnari.oz.au (munnari.oz) by murtoa.cs.mu.oz (5.5)
	id AA07438; Thu, 29 Jun 89 18:42:41 EST
	(from guy at flinders.cs.flinders.oz for uunet!connectionists at RI.CMU.EDU)
Received: from flinders.cs.flinders.oz (via murtoa) by munnari.oz.au with SunIII (5.61+IDA+MU)
	id AA08527; Thu, 29 Jun 89 18:33:36 +1000
	(from guy at flinders.cs.flinders.oz for connectionists at RI.CMU.EDU@murtoa.cs.mu.OZ.AU)
Message-Id: <8906290833.8527 at munnari.oz.au>
Received: by flinders.cs.flinders.oz.au(4.0/SMI-3.2)
	id AA03504; Thu, 29 Jun 89 17:56:22 CST
Date: Thu, 29 Jun 89 17:56:22 CST
From: munnari!cs.flinders.oz.au!guy at uunet.UU.NET (Guy Smith)
To: connectionists at RI.CMU.EDU
Subject: Tech Report available
Cc: munnari!guy at uunet.UU.NET


  The Tech Report "Back Propagation with Discrete Weights and Activations"
describes a modification of BP which generates a net with discrete (but
not integral) weights and activations. The modification is simple: weights
and activations are restricted to discrete values; the weights/activations
calculated by BP are rounded to one of the neighbouring discrete values.

  For simple discrete problems, the learning performance of the net was not
much affected until the granularity of the legal weight/activation values
became as coarse as ten values per integer (i.e. 0.0, 0.1, 0.2, ...).
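
A minimal sketch of the rounding step as described above (assuming
nearest-value rounding; the names and learning rate are invented):

import numpy as np

STEP = 0.1   # ten legal values per integer: 0.0, 0.1, 0.2, ...

def quantize(x, step=STEP):
    # Snap each weight or activation to the nearest legal discrete value.
    return np.round(x / step) * step

# For example, applied after an ordinary gradient step:
w = np.array([0.237, -1.481])
grad = np.array([0.05, -0.12])
w = quantize(w - 0.5 * grad)   # -> array([ 0.2, -1.4])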

  To request a copy, mail to "guy at cs.flinders.oz..." or write to

	Guy Smith,
	Computer Science Department,
	Flinders University,
	Adelaide	5042, 
	AUSTRALIA. 

Guy Smith. 
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa03460; 29 Jun 89 9:22:27 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa00460; 29 Jun 89 9:17:33 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by CS.CMU.EDU; 29 Jun 89 08:24:18 EDT
Received: from DST.BOLTZ.CS.CMU.EDU by DST.BOLTZ.CS.CMU.EDU; 29 Jun 89 08:23:30 EDT
To: connectionists at cs.cmu.edu
Reply-To: Dave.Touretzky at cs.cmu.edu
cc: copetas at cs.cmu.edu
Subject: "Rules and Maps" tech report
Date: Thu, 29 Jun 89 08:23:22 EDT
Message-ID: <3871.615126202 at DST.BOLTZ.CS.CMU.EDU>
From: Dave.Touretzky at B.GP.CS.CMU.EDU

Rules and Maps in Connectionist Symbol Processing

	 Technical Report CMU-CS-89-158

               David S. Touretzky

	    School of Computer Science
	    Carnegie Mellon University
            Pittsburgh, PA 15213-3890


ABSTRACT:

This report contains two papers to be presented at the Eleventh Annual
Conference of the Cognitive Science Society.  The first describes a
simulation of chunking in a connectionist network.  The network applies
context-sensitive rewrite rules to strings of symbols as they flow through
its input buffer.  Chunking is implemented as a form of self-supervised
learning using backpropagation.  Over time, the network improves its
efficiency by replacing simple rule sequences with more complex chunks.

The second paper describes the first implementation of Lakoff's new theory
of cognitive phonology.  His approach is based on a multilevel
representation of utterances to which all rules apply in parallel.
Cognitive phonology is free of the rule ordering constraints that make
classical generative theories computationally awkward.  The connectionist
implementation utilizes a novel ``many maps'' architecture that may explain
certain constraints on phonological rules not adequately accounted for by
more abstract models.
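
The chunking idea in the first paper can be illustrated symbolically (a toy
sketch only: the network performs this sub-symbolically, and these rules are
invented):

def apply_rules(s, rules):
    # Apply each (pattern, replacement) rewrite rule once, in order.
    for pat, rep in rules:
        s = s.replace(pat, rep)
    return s

rules = [("ab", "X"), ("Xc", "Y")]   # two simple rules, often fired in sequence
chunk = [("abc", "Y")]               # a composed chunk doing both in one step

assert apply_rules("abc", rules) == apply_rules("abc", chunk) == "Y"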

================

To order copies of this tech report, please send email to Catherine Copetas
(copetas at cs.cmu.edu), or write the School of Computer Science at the
address above.  If you previously requested a copy of CMU-CS-89-147
(Connectionism and Compositional Semantics), you will receive a copy of
this new report automatically.  In fact, it should arrive in your mailbox
today.
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa10761;
          29 Jun 89 15:12:55 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa02123; 29 Jun 89 10:41:34 EDT
Received: from VMA.CC.CMU.EDU by CS.CMU.EDU; 29 Jun 89 10:40:08 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Thu, 29 Jun 89 10:39:37 EDT
Received: from UKACRL.BITNET by CMUCCVMA (Mailer X1.25) with BSMTP id 0225;
 Thu, 29 Jun 89 10:39:36 EDT
Received: from RL.IB by UKACRL.BITNET (Mailer X1.25) with BSMTP id 7397; Thu,
 29 Jun 89 15:37:20 BST
Received:
Via:        UK.AC.EX.CS; 29 JUN 89 15:37:17 BST
Received:   from entropy by expya.cs.exeter.ac.uk; Thu, 29 Jun 89 15:29:29-0000
From:       Noel Sharkey <noel at CS.EXETER.AC.UK>
Date:       Thu, 29 Jun 89 15:28:14 BST
Message-Id: <6270.8906291428 at entropy.cs.exeter.ac.uk>
To:         connectionists at CS.CMU.EDU
Subject:    post graduate studentships



    RESEARCH STUDENTSHIPS IN COMPUTER SCIENCE

        University of Exeter

The Department of Computer Science invites applications for Science
and Engineering Research Council (SERC) PhD quotas for October, 1989.
Any area of computer science within the Department's research
interests (new generation architectures and languages, methodology,
interfaces, logics) will be considered.  However, for at least one of
the quotas, preference will be given to candidates with an interest in
CONNECTIONIST or NEURAL NETWORK research, particularly in relation to
applications within the domain of natural language processing or
simulations of human memory.

Candidates should possess a first degree of at least 2(i) standard in
order to be eligible for the award of an SERC research studentship.
The closing date for applications is 14th July, 1989.

For further information about the Department's research interests,
eligibility considerations, and the application procedure, please contact:

Dr Noel Sharkey,
Reader in Computer Science          JANET:  noel at uk.ac.exeter.cs
Dept. Computer Science
University of Exeter
Exeter EX4 4PT
U.K.
(Telephone: 0392 264061)
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa15407;
          29 Jun 89 22:07:18 EDT
Received: from ri.cmu.edu by Q.CS.CMU.EDU id aa10205; 29 Jun 89 17:36:17 EDT
Received: from CS.UTEXAS.EDU by RI.CMU.EDU; 29 Jun 89 17:34:27 EDT
Date: Thu, 29 Jun 89 16:25:24 CDT
From: yu at cs.utexas.edu (Yeong-Ho Yu)
Posted-Date: Thu, 29 Jun 89 16:25:24 CDT
Message-Id: <8906292125.AA27045 at cs.utexas.edu>
Received: by cs.utexas.edu (5.59/36.2)
	id AA27045; Thu, 29 Jun 89 16:25:24 CDT
To: guy!s.flinders.oz.au!guy at uunet.UU.NET
Cc: connectionists at RI.CMU.EDU, munnari!guy at uunet.UU.NET
In-Reply-To: Guy Smith's message of Thu, 29 Jun 89 17:56:22 CST <8906290833.8527 at munnari.oz.au>
Subject: Tech Report available



 I'd like to get a copy of your tech report "Back Propagation with
Discrete Weights and Activations".

  My address is

 Yeong-Ho Yu
 AI Lab
 The University of Texas at Austin
 Austin, TX 78712
 (yu at cs.utexas.edu)

Thanks in advance.

Yeong
----------
Received: from q.cs.cmu.edu by B.GP.CS.CMU.EDU id aa23828;
          30 Jun 89 12:43:18 EDT
Received: from cs.cmu.edu by Q.CS.CMU.EDU id aa15581; 30 Jun 89 9:30:39 EDT
Received: from VMA.CC.CMU.EDU by CS.CMU.EDU; 30 Jun 89 09:28:32 EDT
Received: from CMUCCVMA by VMA.CC.CMU.EDU ; Fri, 30 Jun 89 09:28:04 EDT
Received: from WEIZMANN.WEIZMANN.AC.IL by CMUCCVMA (Mailer X1.25) with BSMTP id
 7131; Fri, 30 Jun 89 09:28:03 EDT
Received: by WEIZMANN (Mailer R2.03B) id 3415; Fri, 30 Jun 89 10:00:56 +0300
Date:         Fri, 30 Jun 89 09:52:40 +0300
From:         "Harel Shouval, Tal Grossman" <FEGROSS%WEIZMANN.BITNET at VMA.CC.CMU.EDU>
Subject:      Preprint available
To:           connectionists at cs.cmu.edu


The following preprint describes theoretical and experimental work on an
optical neural network that is based on a negative-weights NN model.
Please send your requests by email to: feshouva at weizmann  (bitnet),
or write to: Harel Shouval, Electronics Dept., Weizmann Inst.,
Rehovot 76100, ISRAEL.

               ---------------------

An All-Optical Hopfield Network: Theory and Experiment
-------------------------------------------------------
Harel Shouval, Itzhak Shariv, Tal Grossman,
Asher A. Friesem and Eytan Domany.
Dept. of Electronics, Weizmann Institute of Science,
Rehovot 76100 Israel.

    ---  ABSTRACT  ---

Realization of an all-optical Hopfield-type neural network is made
possible by eliminating the need for subtracting light intensities. This
can be done without significantly degrading the network's performance,
if only inhibitory connections (i.e. $J_{ij}<0$) are used. We present a
theoretical analysis of such a network, and its experimental
implementation, which uses a liquid crystal light valve for the neurons
and an array of sub-holograms for the interconnections.
-----------------------------------------------------------------------
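
As a toy illustration of the inhibitory-only constraint (not the paper's
optical setup; the thresholds and network size are invented), Hopfield-style
dynamics can still be run when every connection is non-positive:

import numpy as np

rng = np.random.default_rng(0)
n = 16
J = -rng.random((n, n))           # inhibitory connections only (J_ij <= 0)
np.fill_diagonal(J, 0.0)
theta = J.sum(axis=1) / 2.0       # hypothetical per-unit firing thresholds

s = rng.choice([0, 1], size=n)    # binary neuron states
for _ in range(100):              # asynchronous updates
    i = rng.integers(n)
    # A unit fires when the inhibition it receives is weaker than its
    # (negative) threshold; since all weights share one sign, no
    # subtraction of light intensities is needed on the optical side.
    s[i] = 1 if J[i] @ s > theta[i] else 0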

