Batch methods versus stochastic methods...

Frank Smieja smieja at jargon.gmd.de
Tue Oct 29 05:14:40 EST 1991


-) Unfortunately, we do not have any datasets of the proper size.
-) So I would appreciate it if anyone could inform me about where to find
-) big datasets that are publicly available.
-) 
-) -- Martin M
-) 
-) -----------------------------------------------------------------------
-) Martin F. Moller	       	email: mmoller at daimi.aau.dk
-) Computer Science Department	phone: +45 86202711 5223
-) Aarhus University		fax:    +45 86135725
-) Ny Munkegade, Building 540
-) 8000 Aarhus C
-) Denmark
-) ----------------------------------------------------------------------

In my paper "MLP Solutions, Generalization and Hidden Unit
Representations", presented at the DANIP (Distributed And Neural
Information Processing) conference in Bonn, Germany, April 1989 (eds:
Kindermann & Linden, pub: Oldenbourg Verlag), I demonstrated how one
might "synthetically" construct a training set of input/output pairs
of any size that can be generalized.  The "regularities" beloved by
our networks are guaranteed to exist, since they are used to generate
the training pairs, yet they remain invisible to the network until the
examples are seen, so that learning results in "emergent
generalization".  In the paper I used this method to study a small
diagnosis problem, but scaling up is no problem.
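The construction can be illustrated with a toy sketch (written in
modern Python, and using a majority-vote rule of my own choosing
rather than the paper's diagnosis problem — the specific rule,
bit-widths, and the nearest-neighbour stand-in learner are all
assumptions for illustration):

```python
import random

def make_dataset(n_examples, n_bits=8, seed=0):
    """Generate input/output pairs from a hidden regularity.

    The regularity here (hypothetical, not from the paper) is the
    majority vote over a secret subset of three input bits.  Because
    this rule generates every pair, the regularity is guaranteed to
    be present in the data, yet the learner never sees the rule
    itself -- only the examples.
    """
    rng = random.Random(seed)
    hidden = rng.sample(range(n_bits), 3)  # the hidden regularity
    pairs = []
    for _ in range(n_examples):
        x = tuple(rng.randint(0, 1) for _ in range(n_bits))
        y = int(sum(x[i] for i in hidden) >= 2)  # majority of secret bits
        pairs.append((x, y))
    return pairs

# One rule generates the whole set, so a set of any size is possible;
# split it into training and test portions.
data = make_dataset(250)
train, test = data[:200], data[200:]

def predict(x, examples):
    """Stand-in learner: 1-nearest neighbour by Hamming distance."""
    nearest = min(examples,
                  key=lambda ex: sum(a != b for a, b in zip(x, ex[0])))
    return nearest[1]

acc = sum(predict(x, train) == y for x, y in test) / len(test)
```

Any above-chance accuracy on the held-out pairs is "emergent" in the
sense described: the learner recovers the hidden regularity purely
through the generated examples.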

If you cannot get hold of this book, and would like to see the paper,
I can make it available in the neuroprose archive (unfortunately
without figures, but they are not needed to explain the method).  If
this is also difficult, I will send hard copies to interested parties.

Please send such requests directly to me (smieja at gmdzi.uucp) and I
will either reply directly or to the bboard.

	-Frank Smieja


