Workshop on Learning and Descriptional Complexity

Edwin Pednault epdp at big.att.com
Wed Apr 6 16:18:48 EDT 1994


	 Workshop on Applications of Descriptional Complexity
	   to Inductive, Statistical, and Visual Inference

			Sunday, July 10, 1994
			  Rutgers University
		      New Brunswick, New Jersey

	 Held in Conjunction with the Eleventh International
       Conference on Machine Learning (ML94, July 11-13, 1994)
	  and the Seventh Annual Conference on Computational
	     Learning Theory (COLT94, July 12-15, 1994).


Interest in the minimum description-length (MDL) principle is
increasing in the machine learning and computational learning theory
communities.  One reason is that MDL provides a basis for inductive
learning in the presence of noise and other forms of uncertainty.
Another reason is that it enables one to combine and compare different
kinds of data models within a single unified framework, allowing a
wide range of inductive-inference problems to be addressed.

Interest in the MDL principle is not restricted to the learning
community.  Inductive-inference problems arise in one form or another
in many disciplines, including information theory, statistics,
computer vision, and signal processing.  In each of these disciplines,
inductive-inference problems have been successfully pursued using the
MDL principle and related descriptional complexity measures, such as
stochastic complexity, predictive MDL, and algorithmic probability.
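To make the idea concrete, here is a minimal sketch (not part of the
announcement) of the two-part form of the MDL principle: a model is
preferred when the bits needed to describe the model, plus the bits
needed to describe the data given the model, are minimized.  The coding
scheme below -- a fixed cost per parameter and a Gaussian code for the
residuals -- is one common simplification, chosen for illustration only.

```python
import math

# Illustrative two-part MDL sketch (hypothetical coding scheme):
# score a model by L(model) + L(data | model) in bits, prefer the smaller.

def code_length_bits(residuals, num_params, bits_per_param=32.0):
    """Crude two-part code: a fixed cost per parameter plus a Gaussian
    code for the residuals at their maximum-likelihood variance.
    (Quantization effects are ignored, so data bits can go negative.)"""
    n = len(residuals)
    sigma2 = max(sum(r * r for r in residuals) / n, 1e-12)
    data_bits = 0.5 * n * math.log2(2 * math.pi * math.e * sigma2)
    return num_params * bits_per_param + data_bits

def linear_fit(xs, ys):
    """Least-squares line y = a + b*x."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

# Data that is nearly linear, with small alternating "noise".
xs = [float(i) for i in range(20)]
ys = [3.0 * x + 1.0 + 0.1 * (-1) ** i for i, x in enumerate(xs)]

# Model 1: constant (one parameter, the mean).
mean = sum(ys) / len(ys)
const_bits = code_length_bits([y - mean for y in ys], num_params=1)

# Model 2: line (two parameters).
a, b = linear_fit(xs, ys)
line_bits = code_length_bits([y - (a + b * x) for x, y in zip(xs, ys)],
                             num_params=2)

# The line costs one extra parameter but encodes the data far more
# cheaply, so MDL selects it despite the added model complexity.
print(line_bits < const_bits)  # True
```

The same comparison extends to any pair of model classes for which a
description length can be assigned, which is what allows different
kinds of data models to be compared within one framework.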

The purpose of this workshop is twofold: (1) to provide an
opportunity for researchers in all disciplines involved with
descriptional complexity to meet and share results; and (2) to foster
greater interaction between the descriptional complexity community and
the machine learning and computational learning theory communities,
enabling each group to benefit from the results and insights of the
others.  To meet these objectives, the format of the workshop is
designed to maximize opportunities for interaction among participants.
In addition, a tutorial on descriptional complexity will be held prior
to the workshop to encourage broad participation.  The tutorial and
workshop may be attended together or individually.

The topics of the workshop will include, but will not be limited to,

	- Applications of descriptional complexity to all forms of 
	  inductive inference, including those in statistics, machine 
	  learning, computer vision, pattern recognition, and signal
	  processing.

	- Rates of convergence, error bounds, distortion bounds, and
	  other convergence and accuracy results.

	- New descriptional complexity measures for inductive learning.

	- Specializations and approximations of complexity measures
	  that take advantage of problem-specific constraints.

	- Representational techniques, search techniques, and other 
	  application and implementation related issues.

	- Theoretical and empirical comparisons between different 
	  descriptional complexity measures, and with other learning
	  techniques.


WORKSHOP FORMAT

The workshop will be held on Sunday, July 10, 1994.  Attendance will
be open.  However, those who wish to attend should contact the
organizers prior to the workshop at the address below.

To maximize the opportunity for interaction, the workshop will consist
primarily of poster presentations, with a few selected talks and a
moderated wrap-up discussion.

Posters will be the primary medium for presentation.  This medium was
chosen because it encourages close interaction between participants,
and because many more posters can be accommodated than talks.  Both
factors should encourage productive interaction across a wide range of
topics despite the constraints of a one-day workshop.

Depending on the number and quality of the submissions, arrangements
may be made to publish a book of papers after the workshop under the
auspices of the International Federation for Information Processing
Working Group 14.2 on Descriptional Complexity.


SUBMISSIONS

Posters will be accepted on the basis of extended abstracts that
should not exceed 3000 words, excluding references (i.e., about six
pages of single-spaced text).  Separate one-page summaries should
accompany the submitted abstracts.  The summary pages of accepted
abstracts will be distributed to all interested participants prior to
the workshop, and should be written accordingly.  Summaries longer
than one page will have only their first page distributed.

Six copies of each extended abstract and two copies of each summary
page must be received at the address below by May 18, 1994.  Acceptance
decisions will be made by June 10, 1994.  Copies of the summary pages
of accepted abstracts will be mailed to all those who submit abstracts
and to those who contact the organizers before the decision date.

Because we expect the audience to be diverse, clarity of presentation
will be a criterion in the review process.  Contributions and key
insights should be clearly conveyed with a wide audience in mind.

Authors whose submissions are accepted will be expected to provide the
organizers with full-length papers or revised versions of their
extended abstracts when they arrive at the workshop.  These papers and
abstracts will be used for the publisher's review.  Authors may wish
to bring additional copies to distribute at the workshop.


IMPORTANT DATES

May 18		Extended abstracts due
June 10		Acceptance decisions made, summary pages distributed
July 10		Workshop


PROGRAM COMMITTEE

Ed Pednault (Chair), AT&T Bell Laboratories.
Andrew Barron, Yale University.
Ron Book, University of California, Santa Barbara.
Tom Cover, Stanford University.
Juris Hartmanis, Cornell University.
Shuichi Itoh, University of Electro-Communications.
Jorma Rissanen, IBM Almaden Research Center.
Paul Vitanyi, CWI and University of Amsterdam.
Detlef Wotschke, University of Frankfurt.
Kenji Yamanishi, NEC Corporation.


CONTACT ADDRESS

Ed Pednault
AT&T Bell Laboratories, 4G-318
101 Crawfords Corner Road
Holmdel, NJ 07733-3030

email: epdp at research.att.com
tel:   908-949-1074


-----------------------------------------------------------------------


     Tutorial on Descriptional Complexity and Inductive Learning


One of the earliest theories of inductive inference was formulated
by Solomonoff in the late fifties and early sixties.  It
was expanded in subsequent and, in some cases, independent work by
Solomonoff, Kolmogorov, Chaitin, Wallace, Rissanen, and others.  The
theory received its first citation in the AI literature even before
its official publication.  It provides a basis for learning both
deterministic and probabilistic target concepts, and it establishes
bounds on what is computationally learnable in the limit.  Over time,
this theory found its way into several fields, including probability
theory and theoretical computer science.  In probability theory, it
provides a precise mathematical definition for the notion of a random
sample sequence.  In theoretical computer science, it is being used
among other things to prove lower bounds on the computational
complexity of problems, to analyze average-case behavior of
algorithms, and to explore the relationship between the succinctness
of a representation and the computational complexity of algorithms
that employ that representation.  Interest in the theory diminished in
artificial intelligence in the mid to late sixties because of the
inherent intractability of the theory in its most general form.
However, research in the seventies and early eighties led to several
tractable specializations developed expressly for inductive inference.
These specializations in turn led to applications in many disciplines,
including information theory, statistics, machine learning, computer
vision, and signal processing.

The body of theory as it now stands has developed well beyond its
origins in inductive inference, encompassing algorithmic probability,
Kolmogorov complexity, algorithmic information theory, generalized
Kolmogorov complexity, minimum message-length inference, the minimum
description-length (MDL) principle, stochastic complexity, predictive
MDL, and related concepts.  It is being referred to collectively as
descriptional complexity to reflect this evolution.

This tutorial will provide an introduction to the principal concepts
and results of descriptional complexity as they apply to inductive
inference.  The practical application of these results will be
illustrated through case studies drawn from statistics, machine
learning, and computer vision.  No prior background will be assumed in
the presentation other than a passing familiarity with probability
theory and the theory of computation.  Attendees should expect to gain
a sound conceptual understanding of descriptional complexity and its
main results.  The tutorial will be held on Sunday, July 10, 1994.


-----------------------------------------------------------------------


