Papers available: "Object recognition and unsupervised learning"
Guy M. Wallis
guy at taco.mpik-tueb.mpg.de
Wed Feb 21 08:29:37 EST 1996
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/wallisgm.ittrain.ps.Z
FTP-filename: /pub/neuroprose/wallisgm.temporalobjrec1.ps.Z
FTP-filename: /pub/neuroprose/wallisgm.temporalobjrec2.ps.Z
** Three papers available on the unsupervised **
** learning of invariant object recognition **
The three papers listed above are now available for retrieval from the
Neuroprose repository. All three papers discuss learning to associate different
views of objects on the basis of their appearance in time as well as their
spatial appearance. The papers are also available directly from my home page,
along with a copy of my PhD thesis which, I should warn you, is rather long:
http://www.mpik-tueb.mpg.de/people/personal/guy/guy.html
--------------------------------------------------------------------------------
Paper I: A Model of Invariant Object Recognition in the Visual System
ABSTRACT
Neurons in the ventral stream of the primate visual system exhibit
responses to the images of objects which are invariant with respect to
natural transformations such as translation, size, and view. Anatomical
and neurophysiological evidence suggests that this is achieved through
a series of hierarchical processing areas. In an attempt to elucidate
the manner in which such representations are established, we have
constructed a model of cortical visual processing which seeks to
parallel many features of this system, specifically the multi-stage
hierarchy with its topologically constrained convergent connectivity.
Each stage is constructed as a competitive network utilising a modified
Hebb-like learning rule, called the trace rule, which incorporates
previous as well as current neuronal activity. By associating together
representations which occur close together in time, the trace rule enables
neurons to learn whatever is invariant over short time periods (e.g. 0.5 s)
in the representation of objects as they transform in the real world. We show
that, using the trace rule training algorithm, the model can indeed learn to
produce transformation-invariant responses to natural stimuli such as faces.
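The trace rule described above is, in essence, a Hebbian update driven by a
temporally smoothed ("traced") version of each neuron's activity rather than
its instantaneous response. The sketch below illustrates the idea; the
parameter names (eta for the trace decay, alpha for the learning rate) and the
weight normalisation are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def trace_update(w, x, y_bar_prev, y, eta=0.8, alpha=0.1):
    """One step of a trace-rule update for a competitive layer.

    w          : (n_neurons, n_inputs) weight matrix
    x          : (n_inputs,) current input vector
    y_bar_prev : (n_neurons,) trace of activity from the previous step
    y          : (n_neurons,) current neuronal activity
    """
    # Exponentially smoothed activity trace: mixes current and past firing,
    # so views seen close together in time drive the same weight change.
    y_bar = (1.0 - eta) * y + eta * y_bar_prev
    # Hebb-like update gated by the trace rather than the instant response.
    w = w + alpha * np.outer(y_bar, x)
    # Keep each neuron's weight vector at unit length, as competitive
    # networks typically do to prevent runaway growth.
    w = w / np.linalg.norm(w, axis=1, keepdims=True)
    return w, y_bar
```

With eta = 0 this reduces to ordinary Hebbian learning; larger eta makes the
update depend more strongly on recent history, which is what lets successive
transformed views of one object be bound to the same output neurons.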
Submitted to Journal of Computational Neuroscience
32 pages 1.6 Mb compressed
ftp://archive.cis.ohio-state.edu/pub/neuroprose/wallisgm.ittrain.ps.Z
or
ftp://ftp.mpik-tueb.mpg.de/pub/guy/jcns7.ps.Z
--------------------------------------------------------------------------------
Paper II: Optimal, Unsupervised Learning in Invariant Object Recognition
ABSTRACT
A means for establishing transformation invariant representations of objects at
the single cell level is proposed and analysed. The association of views of
objects is achieved by using both the temporal order of the presentation of
these views, as well as their spatial similarity. Assuming knowledge of the
distribution of presentation times, an optimal linear learning rule is derived.
If we assume that objects are viewed with presentation times that are
approximately Jeffreys-distributed, then the optimal learning rule is very
well approximated using a simple exponential temporal trace. Simulations of a
competitive network trained on a character recognition task are then used to
highlight the success of this learning rule in relation to simple Hebbian
learning, and to show that the theory can give quantitative predictions for the
optimal parameters for such networks.
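The "simple exponential temporal trace" mentioned above can be unrolled in
time: the recursive trace is exactly a geometrically decaying weighted sum of
past activity, which is why it can approximate a derived optimal linear rule.
A small numerical check of this identity (with an assumed decay parameter eta
and zero initial trace; neither value is taken from the paper):

```python
import numpy as np

eta = 0.6
y = np.array([1.0, 0.5, 2.0, 0.0, 1.5])  # activity over five time steps

# Recursive form: y_bar(t) = (1 - eta) * y(t) + eta * y_bar(t - 1)
y_bar = 0.0
for v in y:
    y_bar = (1.0 - eta) * v + eta * y_bar

# Closed form: (1 - eta) * sum_k eta^k * y(t - k), i.e. exponentially
# decaying weights on progressively older activity.
closed = (1.0 - eta) * sum(eta**k * y[len(y) - 1 - k] for k in range(len(y)))
```

The two forms agree exactly when the trace starts at zero, so the single
parameter eta fully determines how the rule weights presentation history.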
Submitted to Neural Computation
15 pages 180 Kb compressed
ftp://archive.cis.ohio-state.edu/pub/neuroprose/wallisgm.temporalobjrec1.ps.Z
or
ftp://ftp.mpik-tueb.mpg.de/pub/guy/nc.ps.Z
--------------------------------------------------------------------------------
Paper III: Using Spatio-Temporal Correlations to Learn Invariant Object Recognition
ABSTRACT
A competitive network is described which learns to classify objects on the
basis of temporal as well as spatial correlations. This is achieved by using a
Hebb-like learning rule which is dependent upon prior as well as current neural
activity. The rule is shown to be capable of outperforming a supervised rule on
the cross-validation test of an invariant character recognition task, given a
relatively small training set. It is also shown to outperform the supervised
version of Fukushima's Neocognitron, on a larger training set.
Submitted to Neural Networks
13 pages 110 Kb compressed
ftp://archive.cis.ohio-state.edu/pub/neuroprose/wallisgm.temporalobjrec2.ps.Z
or
ftp://ftp.mpik-tueb.mpg.de/pub/guy/nn.ps.Z
--------------------------------------------------------------------------------
--
-----------------------------------------------------------
_/ _/ _/_/_/ _/_/_/ Guy Wallis
_/_/ _/_/ _/ _/ _/ Max-Planck-Institut für
_/ _/ _/ _/_/_/ _/ Biologische Kybernetik
_/ _/ _/ _/ Spemannstr. 38
_/ _/ _/ _/_/_/ 72076 Tübingen, Germany
http://www.mpik-tueb.mpg.de/ TEL: +49-7071/601-630
Email: guy at mpik-tueb.mpg.de FAX: +49-7071/601-575
-----------------------------------------------------------