preprint available

Robert Urbanczik robert at physik.uni-wuerzburg.de
Tue May 19 05:10:10 EDT 1998


The following preprint (13 pages, to appear in Phys. Rev. E) is available
for download from:

  ftp://ftp.physik.uni-wuerzburg.de/pub/preprint/1998/WUE-ITP-98-016.ps.gz



             Multilayer Perceptrons May Learn Simple Rules Quickly
                           Robert Urbanczik

  Zero-temperature Gibbs learning is considered for a connected committee
  machine with $K$ hidden units. For large $K$, the scale of the
  learning curve strongly depends on the target rule. When learning a
  perceptron, the sample size $P$ needed for optimal generalization
  satisfies $N \ll P \ll KN$, where $N$ is the dimension of the input.
  This holds even for a noisy perceptron rule, provided a new input is
  classified by the majority vote of all students in the version space.
  When learning
  a committee machine with $M$ hidden units, $1\ll M\ll K$, optimal
  generalization requires $\sqrt{MK} N \ll P$.
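
For readers who want the definitions in concrete form, the following is a
minimal Python sketch (not from the paper) of a fully connected committee
machine and of zero-temperature Gibbs learning, approximated here by naive
rejection sampling from the version space. The function names and the toy
sizes K, N, P are illustrative assumptions; the abstract's scaling laws
concern the limit of large K and N.

  import numpy as np

  rng = np.random.default_rng(0)

  def committee_output(W, x):
      # Committee machine: the output is the sign of the sum of the
      # K hidden-unit signs; W has shape (K, N), one row per hidden unit.
      return np.sign(np.sign(W @ x).sum())

  # Toy sizes (illustrative only; the results concern N, K -> infinity,
  # with the sample size P between N and K*N).
  K, N, P = 3, 20, 10

  # Target rule: a single perceptron, i.e. the M = 1 case above.
  w_target = rng.standard_normal(N)
  X = rng.standard_normal((P, N))
  y = np.sign(X @ w_target)

  # Zero-temperature Gibbs learning samples students uniformly from the
  # version space: the networks reproducing all P training labels.
  # Rejection sampling makes the definition concrete but is exponentially
  # slow in P, hence the tiny toy sizes used here.
  students = []
  while len(students) < 5:
      W = rng.standard_normal((K, N))
      if all(committee_output(W, x) == t for x, t in zip(X, y)):
          students.append(W)

  # A new input is classified by the majority vote of the sampled students.
  x_new = rng.standard_normal(N)
  vote = np.sign(sum(committee_output(W, x_new) for W in students))
  print("majority vote:", vote, "  target:", np.sign(w_target @ x_new))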




