PhD thesis on Boosting available

Gunnar Raetsch Gunnar.Raetsch at anu.edu.au
Mon Feb 4 08:32:43 EST 2002


Dear Connectionists,

I am pleased to announce that my PhD thesis entitled

     "Robust Boosting via Convex Optimization"

is now available at

     http://www.boosting.org/papers/thesis.ps.gz (and .pdf)

Please find the summary of my thesis below.

Gunnar


Summary
=======

In this work we consider statistical learning problems.  A learning
machine aims to extract information from a set of training examples
so that it can predict the associated label of unseen examples.  We
consider the case where the resulting classification or regression
rule is a combination of simple rules - also called base hypotheses.
So-called boosting algorithms iteratively find a weighted linear
combination of base hypotheses that predicts well on unseen data.
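
To make the setting concrete, here is a minimal sketch of AdaBoost
(Freund & Schapire), the prototypical boosting algorithm.  The
decision-stump base learner and all names below are illustrative
choices of mine, not taken from the thesis.

    import numpy as np

    def train_stump(X, y, w):
        # Exhaustively pick the single-feature threshold rule with the
        # smallest weighted error; returns (error, feature, threshold, sign).
        best = (np.inf, 0, 0.0, 1)
        for j in range(X.shape[1]):
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if err < best[0]:
                        best = (err, j, thr, sign)
        return best

    def adaboost(X, y, T=50):
        # X: (n, d) array; y: (n,) array with entries in {-1, +1}.
        # Returns a list of (alpha, stump) pairs whose weighted vote
        # is the combined hypothesis.
        n = len(y)
        w = np.full(n, 1.0 / n)                 # distribution over examples
        ensemble = []
        for _ in range(T):
            err, j, thr, sign = train_stump(X, y, w)
            if err == 0.0 or err >= 0.5:        # degenerate base hypothesis
                break
            alpha = 0.5 * np.log((1.0 - err) / err)
            pred = sign * np.where(X[:, j] > thr, 1, -1)
            w = w * np.exp(-alpha * y * pred)   # up-weight the mistakes
            w = w / w.sum()
            ensemble.append((alpha, (j, thr, sign)))
        return ensemble

    def predict(ensemble, X):
        # Sign of the weighted linear combination of base hypotheses.
        f = sum(a * s * np.where(X[:, j] > t, 1, -1)
                for a, (j, t, s) in ensemble)
        return np.sign(f)
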
We study the following issues:

o The statistical learning theory framework for analyzing boosting
    methods.

    We study learning-theoretic guarantees on the prediction
    performance on unseen examples.  Recently, large margin
    classification techniques have emerged as a practical result of
    the theory of generalization, in particular boosting and support
    vector machines.  A large margin implies good generalization
    performance.  Hence, we analyze how large the margins achieved by
    boosting are and derive an improved algorithm that is able to
    generate the maximum margin solution.
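
    To fix notation (mine, not necessarily the thesis's): given base
    hypotheses h_1, ..., h_T and training examples (x_i, y_i) with
    y_i in {-1, +1}, the smallest (normalized) margin and the maximum
    margin problem can be written as the linear program

        \rho(\alpha) = \min_{i=1,\dots,N}
            \frac{y_i \sum_{t=1}^{T} \alpha_t h_t(x_i)}
                 {\sum_{t=1}^{T} \alpha_t},

        \max_{\alpha,\,\rho} \; \rho
        \quad \text{s.t.} \quad
        y_i \sum_{t=1}^{T} \alpha_t h_t(x_i) \ge \rho
        \;\; (i = 1, \dots, N),
        \qquad \sum_{t=1}^{T} \alpha_t = 1, \;\; \alpha_t \ge 0.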

o How can boosting methods be related to mathematical optimization
    techniques?

    To analyze the properties of the resulting classification or
    regression rule, it is important to understand whether, and under
    which conditions, boosting converges.  We show that boosting can
    be used to solve large-scale constrained optimization problems
    whose solutions are well characterized.  To show this, we relate
    boosting methods to methods known from mathematical optimization
    and derive convergence guarantees for a rather general family of
    boosting algorithms.
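
    One standard way to make this connection explicit (again in my
    notation) is through linear programming duality: the dual of the
    maximum margin LP above is

        \min_{d,\,\gamma} \; \gamma
        \quad \text{s.t.} \quad
        \sum_{i=1}^{N} d_i \, y_i \, h(x_i) \le \gamma
        \;\; \forall h \in \mathcal{H},
        \qquad \sum_{i=1}^{N} d_i = 1, \;\; d_i \ge 0.

    For an infinite hypothesis set \mathcal{H} this becomes a
    semi-infinite linear program, and a boosting iteration can be
    read as column generation: the base learner returns
    (approximately) the hypothesis whose constraint is most violated
    under the current example weights d.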

o How to make boosting robust to noise?

    One problem of current boosting techniques is that they are
    sensitive to noise in the training sample.  In order to make
    boosting robust, we transfer the soft margin idea from support
    vector learning to boosting.  We develop theoretically motivated,
    regularized algorithms that exhibit high noise robustness.
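
    As a sketch of the soft margin modification (in LPBoost-style
    notation; the thesis's exact formulations may differ), slack
    variables \xi_i let individual examples violate the margin at a
    cost controlled by a constant C:

        \max_{\alpha,\,\xi,\,\rho} \; \rho - C \sum_{i=1}^{N} \xi_i
        \quad \text{s.t.} \quad
        y_i \sum_{t=1}^{T} \alpha_t h_t(x_i) \ge \rho - \xi_i,
        \;\; \xi_i \ge 0,
        \qquad \sum_{t=1}^{T} \alpha_t = 1, \;\; \alpha_t \ge 0.

    Noisy examples can then be 'given up' at a small slack penalty
    instead of dominating the example weights.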

o How to adapt boosting to regression problems?

    Boosting methods were originally designed for classification
    problems.  To extend the boosting idea to regression problems, we
    use the previous convergence results and the relations to
    semi-infinite programming to design boosting-like algorithms for
    regression.  We show that these leveraging algorithms have
    desirable properties, from both the theoretical and the practical
    side.
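
    One illustrative regression analogue (a sketch of mine that
    transplants the \epsilon-insensitive loss of support vector
    regression; not necessarily the thesis's formulation), with
    f(x) = \sum_{t=1}^{T} \alpha_t h_t(x):

        \min_{\alpha,\,\xi,\,\xi^*} \;
        \sum_{t=1}^{T} \alpha_t + C \sum_{i=1}^{N} (\xi_i + \xi_i^*)
        \quad \text{s.t.} \quad
        f(x_i) - y_i \le \epsilon + \xi_i, \;\;
        y_i - f(x_i) \le \epsilon + \xi_i^*, \;\;
        \alpha_t, \xi_i, \xi_i^* \ge 0.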

o Can boosting techniques be useful in practice?

    The theoretical results presented are accompanied by simulations,
    either to illustrate properties of the proposed algorithms or to
    show that they work well in practice.  We report on successful
    applications in a non-intrusive power monitoring system, chaotic
    time series analysis and the drug discovery process.

-- 
+-----------------------------------------------------------------+
   Gunnar R"atsch                     http://mlg.anu.edu.au/~raetsch
   Australian National University   mailto:Gunnar.Raetsch at anu.edu.au
   Research School for Information            Tel: (+61) 2 6125-8647
   Sciences and Engineering                   Fax: (+61) 2 6125-8651
   Canberra, ACT 0200, Australia