Connectionists: Call for Contributions: Resource-Efficient Machine Learning, NIPS-2013 workshop

Yevgeny Seldin yevgeny.seldin at gmail.com
Fri Aug 23 00:45:30 EDT 2013


        CALL FOR ABSTRACTS AND OPEN PROBLEMS

Resource-Efficient Machine Learning
NIPS-2013 Workshop
Tuesday, December 10, 2013
Lake Tahoe, Nevada, US
https://sites.google.com/site/resefml2013
----------------------------------------------

We invite submission of abstracts and open problems to the 
Resource-Efficient Machine Learning NIPS-2013 workshop.

IMPORTANT DATES

*Submission Deadline:* October 9.
*Notification of Acceptance:* October 23.

More details are provided below.
-----------------------------------------------


        Abstract

Resource efficiency is key for making ideas practical. It is crucial in 
many tasks, ranging from large-scale learning ("big data") to 
small-scale mobile devices. Understanding resource efficiency is also 
important for understanding biological systems, from individual cells to 
complex learning systems, such as the human brain. The goal of this 
workshop is to improve our fundamental theoretical understanding of, and 
the links between, various applications of learning under constraints on 
resources, such as computation, observations, communication, and memory. 
While the founding fathers of machine learning were mainly concerned 
with characterizing the sample complexity of learning (the observations 
resource) [VC74], it is now recognized that a fundamental understanding 
of other resource requirements, such as computation, communication, and 
memory, is equally important for further progress [BB11].

The problem of resource-efficient learning is multidimensional, and we 
already see some parts of this puzzle being assembled. One question is 
the interplay between the requirements on different resources: can we 
use more of one resource to save on another? For example, the dependence 
between computation and observation requirements was studied in 
[SSS08,SSST12,SSB12]. Another question is online learning under various 
budget constraints [AKKS12,BKS13,CKS04,DSSS05,CBG06]. One example that 
Badanidiyuru et al. [BKS13] provide is dynamic pricing with limited 
supply, where we have a limited number of items to sell and on each 
successful sale transaction we lose one item. A related question of 
online learning under constraints on information acquisition was studied 
in [SBCA13], where the constraints could be computational or monetary. 
Yet another direction is adaptation of algorithms to the complexity of 
the operation environment. Such adaptation allows resource consumption 
to reflect the hardness of the situation being faced. An example of such 
adaptation in multiarmed bandits with side information was given in 
[SAL+11]. Another form of adaptation is interpolation between stochastic 
and adversarial environments. At the moment there are two prevailing 
formalisms for modeling the environment, stochastic and adversarial 
(also known as "the average case" and "the worst case"). But in reality 
the environment is often neither stochastic nor adversarial, but 
something in between. It is, therefore, crucial to understand the 
intermediate regime. First steps in this direction were taken in [BS12]. 
And, of course, one of the flagship problems nowadays is "big data", 
where the binding constraint shifts from the number of observations to 
computation. We strongly believe that there are deep connections between 
problems at various scales and with various resource constraints, and 
that there are basic principles of learning under resource constraints 
yet to be discovered. We invite researchers to share their practical 
challenges and theoretical insights on this problem.
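To make the dynamic-pricing example concrete, here is a minimal sketch 
(not the algorithm of [BKS13], just an illustrative epsilon-greedy 
baseline with hypothetical interfaces) in which a limited inventory acts 
as the budget: each successful sale consumes one item, and learning must 
stop once the supply runs out.

```python
import random

def dynamic_pricing(prices, inventory, rounds, demand, eps=0.1):
    """Epsilon-greedy dynamic pricing with limited supply (illustration only).

    prices:    candidate price points (the "arms")
    inventory: number of items available to sell (the budget)
    demand:    demand(price) -> True if the buyer accepts that price
    Stops early once the inventory is exhausted.
    """
    revenue = {p: 0.0 for p in prices}  # cumulative revenue per price
    pulls = {p: 0 for p in prices}      # times each price was offered
    total = 0.0
    for _ in range(rounds):
        if inventory == 0:              # the resource constraint binds
            break
        if random.random() < eps:
            p = random.choice(prices)   # explore a random price
        else:                           # exploit the best mean revenue so far
            p = max(prices,
                    key=lambda q: revenue[q] / pulls[q] if pulls[q] else 0.0)
        pulls[p] += 1
        if demand(p):                   # a successful sale consumes one item
            revenue[p] += p
            inventory -= 1
            total += p
    return total
```

The point of the sketch is the stopping condition: unlike a standard 
bandit, the horizon is effectively determined by the scarcer of the two 
resources, rounds and inventory.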

The study of resource-efficient learning also requires the design of 
resource-dependent performance measures. In the past, algorithms were 
compared in terms of predictive accuracy (classification error, AUC, 
F-measures, NDCG, etc.), yet there is a need to evaluate them with 
additional metrics related to resources, such as memory, CPU time, and 
even power; for example, reward per computational operation. This theme 
will also be discussed at the workshop.
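As a sketch of what such a metric could look like in practice, the 
snippet below measures reward per second of CPU time; the act/update/step 
interfaces are hypothetical, chosen only to make the example 
self-contained.

```python
import time

def reward_per_cpu_second(algorithm, environment, rounds):
    """Evaluate an online learner by reward earned per second of CPU time.

    Assumed (hypothetical) interfaces:
      algorithm.act() -> action, algorithm.update(action, reward)
      environment.step(action) -> reward
    """
    total_reward = 0.0
    start = time.process_time()          # CPU time, not wall-clock time
    for _ in range(rounds):
        action = algorithm.act()
        reward = environment.step(action)
        algorithm.update(action, reward)
        total_reward += reward
    elapsed = time.process_time() - start
    return total_reward / max(elapsed, 1e-9)  # guard against zero elapsed time
```

Under such a measure, a cheaper algorithm with slightly lower per-round 
reward can outrank a more accurate but computationally heavier one.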

References:
[AKKS12] Kareem Amin, Michael Kearns, Peter Key and Anton Schwaighofer. 
Budget Optimization for Sponsored Search: Censored Learning in MDPs. UAI 
2012.
[BB11] Leon Bottou and Olivier Bousquet. The trade-offs of large scale 
learning. In Suvrit Sra, Sebastian Nowozin, and Stephen J. Wright, 
editors, Optimization for Machine Learning. MIT Press, 2011.
[BKS13] Ashwinkumar Badanidiyuru, Robert Kleinberg and Aleksandrs 
Slivkins. Bandits with Knapsacks. FOCS, 2013.
[BS12] Sebastien Bubeck and Aleksandrs Slivkins. The best of both 
worlds: stochastic and adversarial bandits. COLT, 2012.
[CBG06] Nicolò Cesa-Bianchi and Claudio Gentile. Tracking the best 
hyperplane with a simple budget perceptron. COLT 2006.
[CKS04] Koby Crammer, Jaz Kandola and Yoram Singer. Online 
Classification on a Budget. NIPS 2003.
[DSSS05] Ofer Dekel, Shai Shalev-Shwartz and Yoram Singer. The 
Forgetron: A kernel-based perceptron on a fixed budget. NIPS 2005.
[SAL+11] Yevgeny Seldin, Peter Auer, François Laviolette, John 
Shawe-Taylor, and Ronald Ortner. PAC-Bayesian Analysis of Contextual 
Bandits. NIPS, 2011.
[SBCA13] Yevgeny Seldin, Peter Bartlett, Koby Crammer, and Yasin 
Abbasi-Yadkori. Prediction with Limited Advice and Multiarmed Bandits 
with Paid Observations. 2013.
[SSB12] Shai Shalev-Shwartz and Aharon Birnbaum. Learning halfspaces 
with the zero-one loss: Time-accuracy trade-offs. NIPS, 2012.
[SSS08] Shai Shalev-Shwartz and Nathan Srebro. SVM Optimization: Inverse 
Dependence on Training Set Size. ICML, 2008.
[SSST12] Shai Shalev-Shwartz, Ohad Shamir, and Eran Tromer. Using more 
data to speed-up training time. AISTATS, 2012.
[VC74] Vladimir N. Vapnik and Alexey Ya. Chervonenkis. Theory of pattern 
recognition. Nauka, Moscow (in Russian), 1974. German translation: 
W.N.Wapnik, A.Ya.Tschervonenkis (1979), Theorie der Zeichenerkennung, 
Akademie-Verlag, Berlin.


        Call for Sponsors

Your logo could be here... If you are interested in sponsoring this 
event, please contact yevgeny.seldin at gmail.


        Call for Contributions

We invite submission of *abstracts and open problems* to the workshop. 
Abstracts and open problems should be at most 4 pages long in the NIPS 
format <https://nips.cc/PaperInformation/StyleFiles>. 
Appendices are allowed, but the organizers reserve the right to evaluate 
the submissions based on the first 4 pages only. Submissions should NOT 
be anonymous. Selected abstracts and open problems will be presented as 
talks or posters during the workshop. Contributions should be emailed to 
yevgeny.seldin at gmail.

IMPORTANT DATES

*Submission Deadline:* October 9.
*Notification of Acceptance:* October 23.

EVALUATION CRITERIA

  * Theory and application-oriented contributions are equally welcome.
  * All submissions should emphasize relevance to the workshop subject.
  * Submission of previously published work or work under review is
    allowed, in particular NIPS-2013 submissions. However, for oral
    presentations preference will be given to novel work or work that
    was not yet presented elsewhere (for example, recent journal
    publications or NIPS posters). All double submissions must be
    clearly declared as such!


        Invited Speakers (tentative)

Aleksandrs Slivkins 
<http://research.microsoft.com/en-us/people/slivkins>, 
Microsoft Research
Michael Mahoney 
<http://cs.stanford.edu/people/mmahoney/>, 
Stanford


        Organizers

Yevgeny Seldin 
<https://sites.google.com/site/yevgenyseldin>, 
Queensland University of Technology and UC Berkeley
Koby Crammer 
<http://webee.technion.ac.il/people/koby/>, 
The Technion
Yasin Abbasi-Yadkori 
<http://webdocs.cs.ualberta.ca/~abbasiya>, 
Queensland University of Technology and UC Berkeley
Ralf Herbrich 
<http://www.herbrich.me>, 
Amazon
Peter Bartlett 
<http://www.stat.berkeley.edu/~bartlett/>, 
UC Berkeley and Queensland University of Technology


        Schedule

TBA