Thesis Proposal - Xuezhi Wang
Jeff Schneider
schneide at cs.cmu.edu
Thu Nov 6 17:54:45 EST 2014
Hi Everyone,
Tomorrow Xuezhi will present her thesis proposal! Come out to hear about some
cool work in transfer learning!
Jeff.
From: Deborah Cavlovich <deb at cs.cmu.edu>
Subject: Thesis Proposal - Xuezhi Wang
Date: October 27, 2014 7:52:26 PM EDT
To: cs-friends at cs.cmu.edu, cs-students at cs.cmu.edu, cs-visitors at cs.cmu.edu
CALENDAR OF EVENTS
November 7, 2014
Xuezhi Wang
1:00 PM, 4405 Gates and Hillman Centers
Thesis Proposal
Title: Active Transfer Learning
Abstract:
Transfer learning algorithms are used when one has sufficient training data for
one supervised learning task (the source task) but only very limited training
data for a second task (the target task) that is similar but not identical to
the first. These algorithms use varying assumptions about the similarity
between the tasks to carry information from the source to the target task.
Common assumptions are that only certain specific marginal or conditional
distributions have changed while all else remains the same. Moreover, little
work on transfer learning has considered the case where a few labels are
available in the target domain. Alternatively, if one has only the target task but also
has the ability to choose a limited amount of additional training data to
collect, then active learning algorithms are used to make choices that will
most improve performance on the target task. These algorithms may be combined
into active transfer learning, but previous efforts have had to apply the two
methods in sequence or use restrictive transfer assumptions.
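As a concrete picture of the setting (a hypothetical sketch in Python, not the
data used in the thesis), the learner has plentiful labeled source data, a few
target labels, and a pool of unlabeled target points it may query:

    import numpy as np

    # Hypothetical setup (illustrative only): ample labeled source data,
    # a few target labels, and an unlabeled pool of target points from
    # which an active learner may request additional labels.
    rng = np.random.default_rng(0)

    X_src = rng.uniform(-3, 3, size=(200, 1))   # source inputs
    y_src = np.sin(X_src).ravel()               # source labels
    X_pool = rng.uniform(-3, 3, size=(100, 1))  # unlabeled target pool

    def target_f(x):
        # similar but not identical to the source: scaled and shifted
        return 1.5 * np.sin(x).ravel() + 0.5

    X_tgt = X_pool[:5]           # a handful of labeled target points
    y_tgt = target_f(X_tgt)
    query_budget = 10            # additional labels we may request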
This thesis focuses on active transfer learning under the model shift
assumption. We start by proposing two transfer learning algorithms that allow
changes in all marginal and conditional distributions but assume the changes are
smooth in order to achieve transfer between the tasks. We then propose an
active learning algorithm that, combined with these transfer methods, yields a
complete active transfer learning algorithm.
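As an illustration of the active component (a minimal uncertainty-sampling
sketch, not the selection criterion proposed in the thesis), one can fit a
Gaussian process to the data labeled so far and query the pool point with the
largest predictive variance:

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def select_query(X_labeled, y_labeled, X_pool):
        # Fit a GP to everything labeled so far, then request the label
        # of the pool point the model is least certain about.
        gp = GaussianProcessRegressor(kernel=RBF(), alpha=1e-2)
        gp.fit(X_labeled, y_labeled)
        _, std = gp.predict(X_pool, return_std=True)
        return int(np.argmax(std))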
Furthermore, we consider a general case where both the support and the model
change across domains. We transform both $X$ and $Y$ by a parameterized
location-scale shift to achieve transfer between tasks. Previous work on
covariate shift assumes that the support of the test $P(X)$ is contained in
the support of the training $P(X)$, and work on target/conditional shift makes
a similar assumption for $P(Y)$. Since we allow more flexible transformations, the
proposed method yields better results on both synthetic data and real-world data.
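To make the location-scale idea concrete, here is a minimal sketch (the thesis
learns smooth, input-dependent transformations of both $X$ and $Y$; a single
global affine map on the source model's predictions is shown here for brevity):

    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF

    def location_scale_transfer(X_src, y_src, X_tgt, y_tgt, X_test):
        # Fit the source model, then use the few target labels to
        # estimate a scale w and offset b with y_tgt ~ w * f_src(x) + b.
        f_src = GaussianProcessRegressor(kernel=RBF()).fit(X_src, y_src)
        preds = f_src.predict(X_tgt)
        A = np.column_stack([preds, np.ones_like(preds)])
        (w, b), *rest = np.linalg.lstsq(A, y_tgt, rcond=None)
        return w * f_src.predict(X_test) + b

With only a handful of target labels the affine fit is cheap; the flexibility
in the proposed approach comes from letting the transformation vary smoothly
with the input rather than being a single global map.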
In this thesis, we further propose:
(1) Risk analysis of the proposed approaches;
(2) Faster model updates and an extension to non-myopic active selection;
(3) Active surveying with partial information;
(4) An extension to multi-task learning.
Thesis Committee:
Jeff Schneider, Chair
Christos Faloutsos
Geoff Gordon
Jerry Zhu, University of Wisconsin-Madison
Thesis Summary:
http://autonlab.org/autonweb/22755/version/1/part/5/data/proposal_wxz.pdf?branch=main&language=en