Thesis on neuroprose
Richard S. Zemel
zemel at salk.edu
Mon Jan 10 19:52:40 EST 1994
**DO NOT FORWARD TO OTHER GROUPS**
A postscript copy of my PhD thesis has been placed in the neuroprose archive.
It prints on 138 pages. The abstract is given below, followed by retrieval
instructions.
Rich Zemel
e-mail: zemel at salk.edu
-----------------------------------------------------------------------------
A Minimum Description Length Framework for Unsupervised Learning
ABSTRACT
A fundamental problem in learning and reasoning about a body of data is
finding the right representation. The primary goal of an unsupervised
learning procedure is to optimize the quality of a system's internal
representation. In this thesis, we present a general framework for describing
unsupervised learning procedures based on the Minimum Description Length (MDL)
principle. The MDL principle states that the best model is one that minimizes
the summed description length of the model and the data with respect to the
model. Applying this approach to the unsupervised learning problem makes
explicit a key trade-off between the accuracy of a representation (i.e., how
concise a description of the input may be generated from it) and its
succinctness (i.e., how compactly the representation itself can be described).
Viewing existing unsupervised learning procedures in terms of the framework
exposes their implicit assumptions about the type of structure assumed to
underlie the data. While these existing algorithms typically minimize the
data description using a fixed-length representation, we use the framework to
derive a class of objective functions for training self-supervised neural
networks, where the goal is to minimize the description length of the
representation simultaneously with that of the data. Formulating a
description of the representation forces assumptions about the structure of
the data to be made explicit, which in turn leads to a particular network
configuration as well as an objective function that can be used to optimize
the network parameters. We describe three new learning algorithms derived in
this manner from the MDL framework. Each algorithm embodies a different
scheme for describing the internal representation, and is therefore suited to
a range of datasets based on the structure underlying the data. Simulations
demonstrate the applicability of these algorithms on some simple computational
vision tasks.
-----------------------------------------------------------------------------
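The two-part cost the abstract describes can be made concrete with a toy
sketch. The code below is purely illustrative (not from the thesis): it
scores a set of 1-D cluster centers by a crude MDL-style cost, charging a
fixed number of bits per model parameter plus a residual-error term per data
point. The bit counts and the log2(1 + error) coding scheme are my own
simplifying assumptions, chosen only to show the trade-off between model
cost and data cost.

```python
import math

def mdl_cost(data, centers, bits_per_param=8):
    # Model cost: bits to describe the representation itself
    # (here, just the cluster centers).
    model_bits = len(centers) * bits_per_param
    # Data cost: bits to describe each point given its nearest center.
    # log2(1 + squared error) is a stand-in for a real residual code.
    data_bits = sum(
        math.log2(1 + min((x - c) ** 2 for c in centers))
        for x in data
    )
    return model_bits + data_bits

data = [0.1, 0.2, 0.15, 5.0, 5.1, 4.9]
# One center: cheap model, expensive data description.
# Two centers: costlier model, much cheaper data description.
print(mdl_cost(data, [2.5]))
print(mdl_cost(data, [0.15, 5.0]))
```

On this toy dataset the two-center model wins: the extra bits spent
describing a second center are more than repaid by the shorter description
of the data, which is the trade-off the MDL principle formalizes.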
For retrieval purposes I divided the thesis into 3 chunks. Also, it is
formatted in book style, so it will look better printed double-sided if you
have access to a double-sided printer.
To retrieve from neuroprose:
unix> ftp cheops.cis.ohio-state.edu
Name (cheops.cis.ohio-state.edu:zemel): anonymous
Password: (use your email address)
ftp> cd pub/neuroprose
ftp> get zemel.thesis1.ps.Z
ftp> get zemel.thesis2.ps.Z
ftp> get zemel.thesis3.ps.Z
ftp> quit
unix> uncompress zemel*
unix> lpr zemel.thesis1.ps
unix> lpr zemel.thesis2.ps
unix> lpr zemel.thesis3.ps