Multi-objective optimization/bias-variance
Antonio de Padua Braga
apbraga at cpdee.ufmg.br
Fri Mar 16 12:52:08 EST 2001
Dear Connectionists,
The following paper has just been published in Neurocomputing.
The idea of the paper is to balance the error on the training
set against the norm of the weight vectors through a
multi-objective optimization approach, in order to avoid
over-fitting.
Copies are available on request.
We apologize in advance for any multiple postings that may be
received.
***********************************************************************
Improving generalization of MLPs with multi-objective optimization
Teixeira, R.A., Braga, A.P., Takahashi, R.H.C. and Rezende, R.
Neurocomputing, Volume 35, pages 189-194.
ABSTRACT
This paper presents a new learning scheme for improving the
generalization of Multilayer Perceptrons (MLPs). The algorithm
uses a multi-objective optimization approach to balance the
error on the training data against the norm of the network
weight vectors, thereby avoiding over-fitting. The results are
compared with Support Vector Machines (SVMs) and standard
backpropagation.
***********************************************************************
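The trade-off described in the abstract can be illustrated with a
minimal sketch. This is NOT the paper's algorithm, only a plain
scalarized sweep written for this post: minimizing
(1 - lam) * MSE + lam * ||w||^2 for several values of lam on a toy
MLP traces an approximate Pareto front between training error and
weight norm. All names, network sizes, and settings below are
invented for the example.

```python
# Illustrative sketch only (not the method of Teixeira et al.):
# scalarize the two objectives -- training MSE and squared weight
# norm -- and sweep the trade-off parameter lam.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: noisy sine.
X = rng.uniform(-np.pi, np.pi, (40, 1))
y = np.sin(X) + 0.1 * rng.standard_normal(X.shape)

def init(n_hidden=10):
    """Random small weights for a 1-hidden-layer MLP."""
    return [rng.standard_normal((1, n_hidden)) * 0.5,
            np.zeros(n_hidden),
            rng.standard_normal((n_hidden, 1)) * 0.5,
            np.zeros(1)]

def forward(params, X):
    W1, b1, W2, b2 = params
    h = np.tanh(X @ W1 + b1)
    return h @ W2 + b2, h

def train(lam, epochs=2000, lr=0.05):
    """Gradient descent on (1-lam)*MSE + lam*(||W1||^2 + ||W2||^2)."""
    W1, b1, W2, b2 = init()
    for _ in range(epochs):
        out, h = forward([W1, b1, W2, b2], X)
        err = out - y
        g_out = 2.0 * (1.0 - lam) * err / len(X)   # d/d(out) of the MSE term
        gW2 = h.T @ g_out + 2.0 * lam * W2          # weight-norm term adds 2*lam*W
        gb2 = g_out.sum(0)
        g_h = g_out @ W2.T * (1.0 - h ** 2)         # backprop through tanh
        gW1 = X.T @ g_h + 2.0 * lam * W1
        gb1 = g_h.sum(0)
        W1 -= lr * gW1; b1 -= lr * gb1
        W2 -= lr * gW2; b2 -= lr * gb2
    out, _ = forward([W1, b1, W2, b2], X)
    mse = float(np.mean((out - y) ** 2))
    norm = float(sum(np.sum(p ** 2) for p in (W1, W2)))
    return mse, norm

# Sweep lam: larger lam shrinks the weight norm at the cost of
# higher training error, sketching the Pareto trade-off.
front = [train(lam) for lam in (0.0, 0.01, 0.1)]
```

Each (mse, norm) pair in `front` is one point of the trade-off:
lam = 0 fits the data as well as it can (largest weight norm), while
lam = 0.1 drives the weights toward zero and leaves a larger
training error.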
--
Prof. Antonio de Padua Braga, Depto. Eng. Eletronica, Campus da
UFMG (Pampulha), C. P. 209, 30.161-970, Belo Horizonte, MG, Brazil
Tel:+55 31 4994869, Fax:+55 31 4994850, Email:apbraga at cpdee.ufmg.br,
http://www.cpdee.ufmg.br/~apbraga