Backprop for classification

Mahesan Niranjan niranjan at engineering.cambridge.ac.uk
Tue Aug 21 20:20:36 EDT 1990


> From: xiru at com.think
> Subject: backprop for classification
> Date: 19 Aug 90 00:26:28 GMT
>
> While we trained a standard backprop network for some classification task
> (one output unit for each class), we found that when the classes are not
> evenly distributed in the training set, e.g., 50% of the training data belong
> to one class, 10% belong to another, ... etc., then the network was always
> biased towards the classes that have the higher percentage in the training set.
>
This often happens when the network is too small to load the training data;
in that case it does not converge to a negligible error. My suggestion is to
start with a large network that can load your training data, then gradually
reduce the size of the net by pruning the weights that make only small
contributions to the output error.
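The suggestion above can be sketched as follows. This is only an illustrative
toy, not the poster's actual procedure: the dataset, the network sizes, and the
pruning rule are all assumptions. In particular, where the reply proposes
pruning weights by their contribution to the output error, the sketch below
substitutes simple magnitude-based pruning as a crude stand-in for that
sensitivity criterion.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced 2-class problem: 90% of the samples come from class 0.
n0, n1 = 90, 10
X = np.vstack([rng.normal(-1.0, 0.5, (n0, 2)),
               rng.normal(+1.0, 0.5, (n1, 2))])
y = np.vstack([np.tile([1.0, 0.0], (n0, 1)),
               np.tile([0.0, 1.0], (n1, 1))])  # one output unit per class

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Deliberately oversized hidden layer, per the "start large" suggestion.
H = 20
W1 = rng.normal(0, 0.5, (2, H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, (H, 2)); b2 = np.zeros(2)

lr = 0.5
for epoch in range(2000):
    h = sigmoid(X @ W1 + b1)            # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)          # forward pass, output layer
    err = out - y                       # gradient of squared error w.r.t. out
    d2 = err * out * (1 - out)          # backprop through output sigmoid
    d1 = (d2 @ W2.T) * h * (1 - h)      # backprop through hidden sigmoid
    W2 -= lr * (h.T @ d2) / len(X); b2 -= lr * d2.mean(axis=0)
    W1 -= lr * (X.T @ d1) / len(X); b1 -= lr * d1.mean(axis=0)

def prune(W, frac=0.5):
    """Zero out the fraction `frac` of weights with smallest magnitude."""
    cut = np.quantile(np.abs(W), frac)
    return np.where(np.abs(W) >= cut, W, 0.0)

W1p, W2p = prune(W1), prune(W2)
out_pruned = sigmoid(sigmoid(X @ W1p + b1) @ W2p + b2)
acc = (out_pruned.argmax(axis=1) == y.argmax(axis=1)).mean()
```

In practice one would retrain the remaining weights after each pruning step and
repeat until the error starts to rise, rather than pruning once as shown here.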

niranjan


