Paper announcement
Ron Meir
rmeir at ee.technion.ac.il
Fri Oct 8 08:03:01 EDT 1993
FTP-host: archive.cis.ohio-state.edu
FTP-filename: /pub/neuroprose/meir.compress.ps.Z
FTP-filename: /pub/neuroprose/meir.learn.ps.Z
**PLEASE DO NOT FORWARD TO OTHER GROUPS**
The following two papers are now available in the neuroprose directory.
The papers are 10 and 11 pages long, respectively. Sorry, but no
hardcopies are available.
Data Compression and Prediction in Neural Networks
Ronny Meir
Department of EE
Technion
Haifa 32000, Israel
rmeir at ee.technion.ac.il

Jose F. Fontanari
Department of Physics
University of Sao Paulo
13560 Sao Carlos, Brazil
fontanari at uspfsc.ifqsc.usp.ansp.br
We study the relationship between data compression and prediction in
single-layer neural networks of limited complexity. Quantifying the
intuitive notion of Occam's razor using Rissanen's
minimum complexity framework, we investigate the model-selection
criterion advocated by this principle. While we
find that the criterion works well for large sample sizes (as it
must for consistency), the behavior for finite sample sizes is rather
complex, depending intricately on the relationship between the
complexity of the hypothesis space and the target space.
We also show that the limited networks studied perform efficient data
compression, even in the error-full regime.
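For readers unfamiliar with the criterion, here is a minimal sketch (not taken
from the paper) of a two-part description-length score in the spirit of
Rissanen's principle: choose the hypothesis that minimizes the bits needed to
describe its errors plus the bits needed to describe its parameters. The
(k/2) log2 n model-cost term and all names below are illustrative assumptions.

from math import comb, log2

def description_length(n_errors, n_samples, n_params):
    """Approximate two-part code length, in bits, for a binary classifier."""
    # Bits to point out which of the n_samples examples are misclassified.
    data_bits = log2(comb(n_samples, n_errors))
    # Crude model-cost term: about (k/2) * log2(n) bits per real-valued weight.
    model_bits = 0.5 * n_params * log2(n_samples)
    return data_bits + model_bits

# Pick the hypothesis class whose total description is shortest, e.g. a small
# perceptron with more training errors versus a larger one with fewer.
candidates = {"perceptron_10_inputs": (12, 100, 10),   # (errors, samples, weights)
              "perceptron_50_inputs": (3, 100, 50)}
best = min(candidates, key=lambda name: description_length(*candidates[name]))
print(best)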
------------------------------------------------------------------------------
Learning Algorithms, Input Distributions and Generalization
Ronny Meir
Department of Electrical Engineering
Technion
Haifa 32000, Israel
rmeir at ee.technion.ac.il
We study the interaction between input distributions, learning
algorithms and finite sample sizes in the case of learning
classification tasks. Focusing on the case of normal input
distributions, we use statistical mechanics techniques to
calculate the empirical and expected (or
generalization) errors for several well-known algorithms learning the
weights of a single-layer perceptron. In the
case of spherically symmetric
distributions within each class we find that the simple Hebb
algorithm is optimal. Moreover, we show that in the regime
where the overlap between the classes is large,
algorithms with low empirical error perform worse in terms
of generalization, a phenomenon known as over-training.
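As an illustration only (not the paper's statistical-mechanics calculation),
the sketch below applies the simple Hebb rule to two overlapping, spherically
symmetric normal classes and reports the empirical error alongside an
estimated generalization error; the dimension, sample sizes, and class
separation are arbitrary values chosen for the example.

import numpy as np

rng = np.random.default_rng(0)
d, n_train, n_test, sep = 20, 100, 10000, 1.0

def sample(n):
    """Two spherically symmetric normal classes with mean separation `sep`."""
    y = rng.choice([-1, 1], size=n)
    x = rng.standard_normal((n, d)) + np.outer(y, np.full(d, sep / np.sqrt(d)))
    return x, y

x_tr, y_tr = sample(n_train)
x_te, y_te = sample(n_test)

# Hebb rule: the weight vector is just the label-weighted mean of the inputs.
w = (y_tr[:, None] * x_tr).mean(axis=0)

emp_err = np.mean(np.sign(x_tr @ w) != y_tr)   # empirical (training) error
gen_err = np.mean(np.sign(x_te @ w) != y_te)   # estimated generalization error
print(f"empirical error {emp_err:.3f}, generalization error {gen_err:.3f}")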
--------------------------------------------------------------------------
To obtain copies:
ftp cheops.cis.ohio-state.edu
login: anonymous
password: <your email address>
cd pub/neuroprose
binary
get meir.compress.ps.Z
get meir.learn.ps.Z
quit
Then at your system:
uncompress meir.*.ps.Z
lpr -P<printer-name> meir.*.ps
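For convenience, here is a small Python sketch that automates the retrieval
above using the standard ftplib module. The host, directory, and file names
are taken from this announcement; the anonymous-login password is a
placeholder you should replace with your own email address.

from ftplib import FTP

files = ["meir.compress.ps.Z", "meir.learn.ps.Z"]

ftp = FTP("archive.cis.ohio-state.edu")
ftp.login("anonymous", "your-email@your.site")   # anonymous FTP login
ftp.cwd("pub/neuroprose")
for name in files:
    with open(name, "wb") as f:
        ftp.retrbinary("RETR " + name, f.write)  # binary-mode transfer
ftp.quit()
# Afterwards, at your system: uncompress meir.*.ps.Z ; lpr -P<printer-name> meir.*.ps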