more combinations of more estimators

Volker Tresp tresp at traun.zfe.siemens.de
Tue Jul 4 12:40:28 EDT 1995






Our recent NIPS7 paper


      COMBINING ESTIMATORS USING NON-CONSTANT WEIGHTING FUNCTIONS

              by Volker Tresp and Michiaki Taniguchi
 

might be of interest to people interested in
combining predictors. The basic idea in our (and many related)
approaches is to estimate, for a given input, some measure of the
certainty of each predictor and to use it to determine that
predictor's (input-dependent) weight. The inverse of the variance of
a predictor is one suitable measure: if a predictor is uncertain
about its prediction, it should obtain a small weight. Another
measure can be derived from the input distribution of the training
data that were used to train a given predictor: a predictor should
obtain a small weight in regions where it did not see any data during
training. The latter idea is closely related to the mixtures of
experts. We also indicate how both concepts (i.e. variance-based
weighting and density-based weighting) can be combined.
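
For concreteness, here is a minimal sketch (in Python) of such an
input-dependent combination. The predictor interface (predict,
variance, density) is hypothetical, standing in for whatever variance
and training-input-density estimates one has available; only the
weighting rule itself reflects the idea described above.

    import numpy as np

    def combine(x, predictors):
        # Each (hypothetical) predictor returns a prediction y_i(x),
        # an estimate of its predictive variance var_i(x), and an
        # estimate p_i(x) of the density of its training inputs.
        ys   = np.array([p.predict(x)  for p in predictors])
        var  = np.array([p.variance(x) for p in predictors])
        dens = np.array([p.density(x)  for p in predictors])

        # Input-dependent weights: large where a predictor is certain
        # (small variance) and where it saw training data (high density).
        w = dens / var
        w = w / w.sum()          # normalize the weights to sum to one
        return np.dot(w, ys)     # weighted combination of predictions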


Incidentally, mixtures of experts fit nicely
into the missing-input framework: we conceptually introduce an additional
input with as many states as there are experts. Since we never know
which expert is the true expert, we are faced with a missing-input problem
during both training and recall. The learning rules for systems with missing
inputs can then be used to derive the learning rules for the mixtures of experts.
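
As a rough illustration of the recall step only (not the paper's
notation), prediction with the missing "expert index" input amounts to
averaging the experts' outputs over the posterior of that index, which
is exactly the gating computation of a mixture of experts. The expert
interface (predict, likelihood) and the prior over experts are
assumptions of the sketch:

    import numpy as np

    def moe_predict(x, experts, prior):
        # Hypothetical experts: each returns a prediction and a
        # likelihood p(x | expert i); prior is p(expert i), i.e. the
        # prior over the states of the missing "expert index" input.
        ys  = np.array([e.predict(x)    for e in experts])
        lik = np.array([e.likelihood(x) for e in experts])

        # Posterior over the missing input (which expert is the true
        # one), playing the role of the gating weights.
        post = prior * lik
        post = post / post.sum()
        return np.dot(post, ys)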

Volker and Mich






The paper can be obtained via:
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/tresp.combining.ps.Z


