Composite networks

David Cohn pablo at cs.washington.edu
Tue Mar 3 15:22:12 EST 1992


P. J. Hampson (STAY8026%IRUCCVAX.UCC.IE at bitnet.cc.cmu.edu) asks:

> I am interested in modelling tasks in which invariant information from
> previous input-output pairs is brought to bear on the acquisition of current
> input-output pairs.  Thus I want to use previously extracted regularity to
> influence current processing. ...

I'm not sure I've read the intent of the posting correctly, but this
sounds like it may be able to draw on some of the recent work in
"active" learning systems. These are learning systems that have some
control over what their inputs will be. A common form of active
learning is that of "learning by queries," where a neural network
"asks for" new training examples from some part of a domain based on
its evaluation of previous training examples (e.g., Cohn et al.,
Baum and Lang, Hwang et al., and MacKay).

> At this year's memory conference in Lancaster (England) Dave Rumelhart mentioned
> the need to develop nets which incorporate a distinction between learning
> and memory and exploit the attributes of both. ...

The distinction usually made in querying is a bit different, and consists of
a loop iterating between sampling and learning. With a neural network, the
"learning" generally consists of simply training on the sampled data, and
thus could be thought of as analogous to stuffing the data into "memory."
The driving algorithm directs the sampling based on previously learned data
to optimize the utility of new examples.
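The sampling/learning loop described above can be sketched in a few lines. This is only an illustrative toy, not any of the cited algorithms: the threshold "oracle" concept, the nearest-stored-example "memory" learner, and the boundary-distance query score are all assumptions made for the sketch.

```python
import random

# Hypothetical target concept to be learned: a threshold on [0, 1].
# The "oracle" is what the learner may query for labels.
THRESHOLD = 0.37
def oracle(x):
    return int(x >= THRESHOLD)

def most_uncertain(memory, pool):
    """Query selection: pick the unlabeled candidate whose distances to the
    nearest stored 0-example and nearest stored 1-example are most nearly
    equal, i.e. the point closest to the current decision boundary."""
    def score(x):
        d0 = min(abs(ex - x) for ex, label in memory if label == 0)
        d1 = min(abs(ex - x) for ex, label in memory if label == 1)
        return abs(d0 - d1)          # small => near the boundary
    return min(pool, key=score)

random.seed(0)
# "Memory": learning here is simply storing labeled examples.
memory = [(0.0, oracle(0.0)), (1.0, oracle(1.0))]
pool = [random.random() for _ in range(200)]   # candidate inputs

for _ in range(10):                  # the sampling/learning loop
    x = most_uncertain(memory, pool)
    memory.append((x, oracle(x)))    # query the oracle, stuff into memory

# The labeled examples now bracket the threshold tightly.
lo = max(x for x, label in memory if label == 0)
hi = min(x for x, label in memory if label == 1)
print(lo, hi)
```

With ten queries chosen this way the bracket around the threshold shrinks roughly geometrically, whereas ten random samples would leave a much wider gap; that difference in sample efficiency is the point of directing the sampling with previously learned data.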

This approach has been tried on a number of relatively complicated,
yet still "toy" problems; current efforts aim to overcome the
representational and computational complexity problems that arise in
the transition to "real world" domains like speech.

 -David Cohn				e-mail: pablo at cs.washington.edu
  Dept. of Computer Science & Eng.	phone:  (206) 543-7798
  University of Washington, FR-35
  Seattle, WA 98195

References:
Cohn, Atlas & Ladner. (1990) Training connectionist networks with queries
	and selective sampling. In D. Touretzky, ed., "Advances in Neural
	Information Processing Systems 2".
Baum & Lang. (1991) Constructing hidden units using examples and queries.
	In Lippmann et al., eds., "Advances in Neural Information Processing
	Systems 3".
Hwang, Choi, Oh & Marks. (1990) Query learning based on boundary search
	and gradient computation of trained multilayer perceptrons. In
	Proceedings, IJCNN 1990.
MacKay. (1991) Bayesian methods for adaptive models. Ph.D. thesis, Dept. of
	Computation and Neural Systems, California Institute of Technology.
