From kirthevasankandasamy at gmail.com Mon Apr 3 12:12:40 2017
From: kirthevasankandasamy at gmail.com (Kirthevasan Kandasamy)
Date: Mon, 3 Apr 2017 12:12:40 -0400
Subject: Fwd: Thesis Proposal - 4/6/17 - Kirthevasan Kandasamy - Tuning Hyper-parameters without Grad Students: Scaling up Bandit Optimisation
In-Reply-To: <3efac583-d00e-d878-515b-249d202ee6b1@cs.cmu.edu>
References: <3efac583-d00e-d878-515b-249d202ee6b1@cs.cmu.edu>
Message-ID:

Hi all,

I'll be proposing this Thursday at noon. Feel free to drop by.

-kirthevasan

Sent from my phone. Please excuse brevity.

---------- Forwarded message ----------
From: "Diane Stidle"
Date: Mar 27, 2017 14:17
Subject: Thesis Proposal - 4/6/17 - Kirthevasan Kandasamy - Tuning Hyper-parameters without Grad Students: Scaling up Bandit Optimisation
To: "ml-seminar at cs.cmu.edu" , "zoubin at eng.cam.ac.uk"
Cc:

*Thesis Proposal*

Date: 4/6/17
Time: 12:00pm
Place: 6121 GHC
Speaker: Kirthevasan Kandasamy
Title: Tuning Hyper-parameters without Grad Students: Scaling up Bandit Optimisation

Abstract:
In many scientific and engineering applications, we are tasked with optimising a black-box function that is expensive to evaluate for computational or economic reasons. In *bandit optimisation*, we sequentially evaluate a noisy function with the goal of identifying its optimum in as few evaluations as possible. Applications include tuning the hyper-parameters of machine learning algorithms, online advertising, optimal policy selection in robotics, and maximum likelihood inference in simulation-based scientific models. Today, these problems face new challenges due to increasingly expensive evaluations and the need to perform these tasks in high-dimensional spaces. At the same time, there are new opportunities that have not been exploited before. We may have the flexibility to approximate the expensive function by investing fewer resources per evaluation.
We can also carry out several evaluations simultaneously, say via parallel computing or by concurrently conducting multiple experiments in the real world. In this thesis, we aim to tackle these and several other challenges to meet emerging demands in large-scale bandit applications. We develop methods with theoretical underpinnings that also enjoy good empirical performance.

Thesis Committee:
Barnabás Póczos (Co-Chair)
Jeff Schneider (Co-Chair)
Aarti Singh
Zoubin Ghahramani (University of Cambridge)

Link to draft document: cs.cmu.edu/~kkandasa/docs/proposal.pdf

--
Diane Stidle
Graduate Programs Manager
Machine Learning Department
Carnegie Mellon University
diane at cs.cmu.edu
412-268-1299

From kandasamy at cmu.edu Wed Apr 5 20:05:15 2017
From: kandasamy at cmu.edu (Kirthevasan Kandasamy)
Date: Wed, 5 Apr 2017 20:05:15 -0400
Subject: Fwd: ROOM CHANGE - Thesis Proposal - 4/6/17 - Kirthevasan Kandasamy - Tuning Hyper-parameters without Grad Students: Scaling up Bandit Optimisation
In-Reply-To: <54f7717b-feaf-cb38-218e-3e5cf8d66eed@cs.cmu.edu>
References: <54f7717b-feaf-cb38-218e-3e5cf8d66eed@cs.cmu.edu>
Message-ID:

Hi all,

We changed rooms for the proposal. It is happening tomorrow at noon in GHC 6115.
samy

---------- Forwarded message ----------
From: Diane Stidle
Date: Tue, Apr 4, 2017 at 2:03 PM
Subject: ROOM CHANGE - Thesis Proposal - 4/6/17 - Kirthevasan Kandasamy - Tuning Hyper-parameters without Grad Students: Scaling up Bandit Optimisation
To: "ml-seminar at cs.cmu.edu" , "zoubin at eng.cam.ac.uk"

*Thesis Proposal*

Date: 4/6/17
Time: 12:00pm
Place: *6115 GHC (Note: Room Change)*
Speaker: Kirthevasan Kandasamy
Title: Tuning Hyper-parameters without Grad Students: Scaling up Bandit Optimisation
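The bandit optimisation setting described in the abstract — sequentially evaluating a noisy black-box function to identify its optimum in as few evaluations as possible — can be illustrated with a minimal sketch. This is not taken from the proposal: the finite set of "arms" (e.g. candidate hyper-parameter settings), the Gaussian noise model, and the classic UCB1 selection rule are all illustrative assumptions.

```python
import math
import random

def ucb1(arm_means, horizon=5000, seed=0):
    """Toy UCB1: sequentially pull noisy arms, return the index of the
    empirically best arm after `horizon` evaluations."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts = [0] * k      # number of times each arm was evaluated
    sums = [0.0] * k      # running sum of noisy rewards per arm

    def pull(i):
        # Noisy evaluation of arm i: true mean plus Gaussian noise.
        return arm_means[i] + rng.gauss(0.0, 0.1)

    # Initialise by evaluating each arm once.
    for i in range(k):
        sums[i] += pull(i)
        counts[i] += 1

    for t in range(k, horizon):
        # Pick the arm with the highest upper confidence bound:
        # empirical mean plus an exploration bonus that shrinks as
        # the arm is evaluated more often.
        ucb = [sums[i] / counts[i]
               + math.sqrt(2 * math.log(t + 1) / counts[i])
               for i in range(k)]
        i = max(range(k), key=lambda j: ucb[j])
        sums[i] += pull(i)
        counts[i] += 1

    return max(range(k), key=lambda i: sums[i] / counts[i])

# E.g. four candidate hyper-parameter settings with unknown quality:
best = ucb1([0.2, 0.5, 0.8, 0.4])
```

The confidence bonus makes the algorithm spend most of its evaluation budget on promising arms while still occasionally revisiting uncertain ones, which is the explore/exploit trade-off at the heart of the bandit formulation.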