<div dir="ltr">Hi everyone,<div><br></div><div><div>I am defending next Wednesday.</div><div>You are welcome to drop by.</div><div><br></div><div>Thanks!</div><div>Samy</div></div><div><br><div class="gmail_quote"><div dir="ltr">---------- Forwarded message ---------<br>From: <strong class="gmail_sendername" dir="auto">Diane Stidle</strong> <span dir="ltr"><<a href="mailto:stidle@andrew.cmu.edu">stidle@andrew.cmu.edu</a>></span><br>Date: Mon, Sep 24, 2018 at 4:19 PM<br>Subject: Thesis Defense - Oct. 3, 2018 - Kirthevasan Kandasamy - Tuning Hyper-parameters without Grad-students: Scaling up Bandit Optimisation<br>To: <a href="mailto:ml-seminar@cs.cmu.edu">ml-seminar@cs.cmu.edu</a> <<a href="mailto:ML-SEMINAR@cs.cmu.edu">ML-SEMINAR@cs.cmu.edu</a>>, <a href="mailto:zoubin@eng.cam.ac.uk">zoubin@eng.cam.ac.uk</a> <<a href="mailto:zoubin@eng.cam.ac.uk">zoubin@eng.cam.ac.uk</a>>, <<a href="mailto:zoubin@uber.com">zoubin@uber.com</a>><br></div><br><br>
<div text="#000000" bgcolor="#FFFFFF">
<p><i><b>Thesis Defense</b></i></p>
<p>Date: October 3, 2018<br>
Time: 12:30pm (EDT)<br>
Place: 8102 GHC<br>
PhD Candidate: Kirthevasan Kandasamy</p>
<p><b>Title: </b><b><span class="m_8499101156100071494gmail-im"><font color="#000000">Tuning Hyper-parameters without
Grad-students: Scaling up Bandit Optimisation</font></span></b></p>
<p><span class="m_8499101156100071494gmail-im"><font color="#000000">Abstract:<br>
</font></span><font color="#000000">This thesis explores
scalable methods for adaptive decision making under uncertainty,
where the goal of an agent is to design an experiment, observe
the outcome, and plan subsequent experiments to achieve a
desired goal. Typically, each experiment incurs a large
computational or economic cost, and we need to keep the number
of experiments to a minimum. Many such problems fall under
the bandit framework, where each experiment evaluates a noisy
function and the goal is to find the optimum of this function. A
common use case for the bandit framework, pervasive in many
industrial and scientific applications, is hyper-parameter
tuning, where we need to find the optimal configuration of a
black-box system by tuning the various knobs that affect the
performance of the system. Some applications include statistical
model selection, materials design, optimal policy selection in
robotics, and maximum-likelihood inference in simulation-based
scientific models. More generally, bandits are but one class of
problems studied under the umbrella of adaptive decision-making
under uncertainty. Problems such as active learning and design
of experiments are other examples of adaptive decision-making,
but unlike in bandits, progress towards a desired goal is not made
known to the agent via a reward signal.</font></p>
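<p><font color="#000000">As a toy illustration of the bandit framework described above (this is the generic textbook UCB1 algorithm over a discrete set of arms, not one of the methods developed in the thesis), each "arm" stands in for an expensive noisy experiment, and the agent adaptively allocates a fixed budget of evaluations to identify the best one:</font></p>

```python
import math
import random

def ucb1(reward_fns, horizon, seed=0):
    """Minimal UCB1: repeatedly pull the arm maximising
    empirical mean + exploration bonus.

    reward_fns: list of zero-argument callables, each returning a noisy reward.
    Returns the index of the arm with the highest empirical mean.
    """
    random.seed(seed)
    k = len(reward_fns)
    counts = [0] * k          # pulls per arm
    sums = [0.0] * k          # total reward per arm
    for t in range(1, horizon + 1):
        if t <= k:            # pull each arm once to initialise
            arm = t - 1
        else:
            arm = max(range(k), key=lambda i: sums[i] / counts[i]
                      + math.sqrt(2 * math.log(t) / counts[i]))
        sums[arm] += reward_fns[arm]()
        counts[arm] += 1
    return max(range(k), key=lambda i: sums[i] / counts[i])

# Three hypothetical noisy "experiments" with true means 0.2, 0.5, 0.8.
arms = [lambda m=m: m + random.gauss(0, 0.1) for m in (0.2, 0.5, 0.8)]
best = ucb1(arms, horizon=500)
```

<p><font color="#000000">The exploration bonus shrinks as an arm is pulled more often, so the agent converges on the best experiment while spending only a few evaluations on clearly inferior ones.</font></p>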
<div>
<div><font color="#000000">With increasingly expensive function
evaluations and demands to optimise over complex input spaces,
bandit optimisation tasks face new challenges today. At the
same time, there are new opportunities that have not been
exploited previously. We study the following questions in this
thesis to enable the application of bandits and, more broadly,
adaptive decision-making to modern applications.<br>
</font></div>
<div><font color="#000000">- Conventional bandit methods work
reliably in low dimensional settings, but scale poorly with
input dimensionality. Scaling such methods to high dimensional
inputs requires addressing several computational and
statistical challenges.</font></div>
<div><font color="#000000">- In many applications, an expensive
function can be cheaply approximated. We study techniques that
can use information from these cheap lower fidelity
approximations to speed up the overall optimisation process.</font></div>
<span class="m_8499101156100071494gmail-im">
<div><font color="#000000">- Conventional bandit methods are
inherently sequential. We study parallelisation techniques
so as to deploy several function evaluations at the same
time.</font></div>
</span>
<div><font color="#000000">- Typical methods assume that a design
can be characterised by a Euclidean vector. We study bandit
methods on graph-structured spaces. As a specific application,
we study neural architecture search, which optimises for the
structure of a neural network by viewing it as a directed graph
with node labels and node weights.</font></div>
<div><font color="#000000">- Many methods for adaptive
decision-making are not competitive with human experts.
Incorporating domain knowledge and human intuition about
specific problems may significantly improve practical
performance.</font></div>
</div>
<span class="m_8499101156100071494gmail-im">
<div><font color="#000000">We first study the above
topics in the bandit framework and then study how they can be
extended to broader decision-making problems. We develop
methods with theoretical guarantees that simultaneously enjoy
good empirical performance. As part of this thesis, we also
develop an open source platform for scalable and robust bandit
optimisation.</font></div>
<div><font color="#000000"><br>
</font></div>
<div><font color="#000000">Thesis Committee:</font><span class="m_8499101156100071494gmail-im"><span style="font-size:13.0pt;line-height:115%"><br>
Barnabás Póczos</span></span> <font color="#000000">(Co-chair)</font><font color="#000000"><br>
Jeff Schneider (Co-chair)</font><font color="#000000"><br>
Aarti Singh</font><font color="#000000"><br>
Zoubin Ghahramani (University of Cambridge)<br>
</font><br>
</div>
</span>
<div><b>Link to draft document: </b><a href="http://www.cs.cmu.edu/~kkandasa/docs/thesis.pdf" target="_blank">http://www.cs.cmu.edu/~kkandasa/docs/thesis.pdf</a></div>
<pre class="m_8499101156100071494moz-signature" cols="72">--
Diane Stidle
Graduate Programs Manager
Machine Learning Department
Carnegie Mellon University
<a class="m_8499101156100071494moz-txt-link-abbreviated" href="mailto:stidle@cmu.edu" target="_blank">stidle@cmu.edu</a>
412-268-1299</pre>
</div>
</div></div></div>