Connectionists: CFP - NIPS 2013 New Directions in Transfer and Multi-Task: Learning Across Domains and Tasks (Reminder of Submission Deadline)

urun dogan urundogan at gmail.com
Mon Oct 7 04:24:34 EDT 2013


Apologies for cross-posting. Please forward this to those who may be
interested. Thank you.

=========================================================================
CALL FOR PAPERS (Reminder of Submission Deadline)

NIPS 2013 New Directions in Transfer and Multi-Task: Learning Across
Domains and Tasks

December 10, 2013
Lake Tahoe, Nevada, USA
https://sites.google.com/site/learningacross/
Submission Deadline: October 9, 2013
=========================================================================

The main objective of the workshop is to document and discuss the recent
rise of new research questions on the general problem of learning across
domains and tasks. This includes the main topics of transfer and multi-task
learning, together with several related variants such as domain adaptation
and dataset bias.

In recent years there has been a surge of activity in these areas, much of
it driven by practical applications such as object categorization.
Different solutions have been studied for these topics, mostly in isolation
and without a joint theoretical framework. On the other hand, most of the
existing theoretical formulations model regimes that are rarely used in
practice (e.g. adaptive methods that store all the source samples).

This NIPS 2013 workshop will focus on closing this gap by providing an
opportunity for theoreticians and practitioners to come together in one
place to share and debate current theories and empirical results. The goal
is to promote a fruitful exchange of ideas and methods between the
different communities, leading to a global advancement of the field.

Transfer Learning - Transfer Learning (TL) refers to the problem of
retaining and applying the knowledge available from one or more source
tasks to efficiently develop a hypothesis for a new target task. The tasks
may share the same label set (domain adaptation) or have different label
sets (across-category transfer). Much of the effort so far has been devoted
to binary classification, while most interesting practical transfer
problems are intrinsically multi-class, and the number of classes can often
increase over time. Hence, it is natural to ask:

- How can knowledge transfer across multi-class tasks be formalized, and
what theoretical guarantees can be given in this setting?
- Can inter-class transfer and incremental class learning be properly
integrated?
- Can learning guarantees be provided when adaptation relies only on
pre-trained source hypotheses, without explicit access to the source
samples, as is often the case in real-world scenarios?
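As a concrete illustration of the last question, one simple baseline for
adapting from a pre-trained source hypothesis is biased regularization,
where the target model is shrunk towards the source weights rather than
towards zero. The NumPy sketch below is only a minimal example under a
least-squares assumption; the function name and setting are illustrative
and are not part of the workshop material.

    import numpy as np

    def biased_ridge(X_t, y_t, w_src, lam=1.0):
        # Illustrative helper: fit a linear model on the target data
        # (X_t, y_t) while regularizing towards a pre-trained source
        # weight vector w_src, so no source samples are needed.
        # Closed-form solution of
        #   min_w ||X_t w - y_t||^2 + lam * ||w - w_src||^2
        d = X_t.shape[1]
        A = X_t.T @ X_t + lam * np.eye(d)
        b = X_t.T @ y_t + lam * w_src
        return np.linalg.solve(A, b)

With lam close to 0 the source hypothesis is ignored; as lam grows the
target model collapses onto it, which is exactly the regime where the
learning guarantees asked about above become interesting.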


Multi-task Learning - Learning over multiple related tasks can outperform
learning each task in isolation. This is the principal assertion of
multi-task learning (MTL) and implies that the learning process may benefit
from common information shared across the tasks. In the simplest case, the
transfer process is symmetric and all tasks are considered equally related
and appropriate for joint training.

- What happens when this condition does not hold, e.g. how can negative
transfer be avoided?
- Can RKHS embeddings be adequately integrated into the learning process to
estimate and compare the distributions underlying the multiple tasks?
- How can embeddings of probability distributions help in learning from
data clouds?
- Can deep learning or multiple kernel learning help take a step closer
towards the complete automation of multi-task learning?
- How can notions from reinforcement learning, such as source task
selection, be connected to notions from convex multi-task learning, such as
the task similarity matrix?
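To make the idea of "common information shared across the tasks" concrete,
the following NumPy sketch jointly trains one linear model per task while
pulling every task weight vector towards a shared mean, a classical
mean-regularized MTL formulation. The function name and the alternating
solver are illustrative assumptions, not a method proposed by the workshop;
replacing the shared mean with a learned task-similarity matrix leads
directly to the questions above.

    import numpy as np

    def mean_regularized_mtl(Xs, ys, lam=1.0, n_iter=20):
        # Block coordinate descent on
        #   sum_t ||X_t w_t - y_t||^2 + lam * ||w_t - w_bar||^2
        # where w_bar is a shared vector coupling the tasks.
        d = Xs[0].shape[1]
        T = len(Xs)
        W = np.zeros((T, d))
        w_bar = np.zeros(d)
        for _ in range(n_iter):
            # With w_bar fixed, each task is an independent biased
            # ridge problem with a closed-form solution.
            for t, (X, y) in enumerate(zip(Xs, ys)):
                A = X.T @ X + lam * np.eye(d)
                b = X.T @ y + lam * w_bar
                W[t] = np.linalg.solve(A, b)
            # With the task weights fixed, the optimal shared vector
            # is simply their mean.
            w_bar = W.mean(axis=0)
        return W, w_bar

Setting lam = 0 recovers independent per-task learning; larger lam enforces
more sharing, which is precisely where negative transfer becomes a concern
when some tasks are unrelated.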


Confirmed Invited Speakers

- Shai Ben-David (University of Waterloo)
- Massimiliano Pontil (University College London)
- Fei Sha (University of Southern California)
- Arthur Gretton (Gatsby Computational Neuroscience Unit)
- Sinno Jialin Pan (Institute for Infocomm Research and National
University of Singapore)

=========================================================================

Submission:

We invite submissions of extended abstracts to the workshop on all topics
related to transfer and multi-task learning, with special interest in:

- New views and unifying theories of TL and MTL
- Learning task similarities/dissimilarities from data
- Sparse vs. non-sparse regularization in similarity learning
- Domain adaptation
- Dataset bias
- Use of deep and reinforcement learning for TL and MTL
- Large-scale and online TL and MTL
- Connections of multiple kernel learning to TL and MTL
- Innovative applications, e.g. in computer vision or computational biology.


Preference will be given to submissions that propose new principled TL and
MTL methods or that are likely to generate new debate by raising general
issues or suggesting directions for future work.

Submissions should be no longer than 4 pages in the NIPS LaTeX style. The
extended abstract may be accompanied by an unlimited appendix and other
supplementary material, with the understanding that anything beyond 4 pages
may be ignored by the program committee. Work that was recently published
or presented elsewhere is allowed, provided that the extended abstract
mentions this explicitly.


Please send your submission by email to
ml-newdirectionsinmtl at lists.tu-berlin.de

Important Dates
Submission deadline: October 9th, 2013
Acceptance decision: October 23rd, 2013
Workshop: December 10th, 2013

Organizers
Urun Dogan (Skype Labs / Microsoft)
Marius Kloft (Courant Institute of Mathematical Sciences & Memorial
Sloan-Kettering Cancer Center)
Francesco Orabona (Toyota Technological Institute, Chicago)
Tatiana Tommasi (KU Leuven)