Technical Report Series in Neural and Computational Learning
John Shawe-Taylor
john@dcs.rhbnc.ac.uk
Mon Jan 9 09:27:28 EST 1995
The European Community ESPRIT Working Group in Neural and Computational
Learning Theory (NeuroCOLT): three new reports available
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-015:
----------------------------------------
Grammar Inference and the Minimum Description Length Principle
by Peter Grünwald, Centrum voor Wiskunde en Informatica,
Kruislaan 413, 1098 SJ Amsterdam, The Netherlands
Abstract:
We describe a new abstract model for the computational learning of
grammars. The model deals with a learning process in which an algorithm
is given an input of a large set of training sentences that belong to
some grammar $G$. The algorithm then tries to infer this grammar. Our
model is based on the well-known {\em Minimum Description Length
Principle}. It turns out that our model is, in a certain sense, a more
general version of two seemingly different well-known models; two
further existing models also turn out to be very similar to ours. We have
made an initial implementation of the algorithm implied by the model.
We have tried this implementation on natural language texts, and we
give a short description of the results of these tests.
The results of these practical tests are quite interesting, but
unfortunately they are neither encouraging nor discouraging enough to
indicate whether our method of grammar induction, which makes hardly
any use of linguistic principles and no use at all of semantic
information, is really worth pursuing further.
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-017:
----------------------------------------
Bounds for the Computational Power and Learning Complexity of Analog
Neural Nets
by Wolfgang Maass, Institute for Theoretical Computer Science, Technische
Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz, Austria
Abstract:
It is shown that high order feedforward neural nets of constant depth
with piecewise polynomial activation functions and arbitrary real
weights can be simulated for boolean inputs and outputs by neural nets
of a somewhat larger size and depth with Heaviside gates and weights
from $\{-1,0,1\}$. This provides the first known upper bound for the
computational power of the former type of neural nets. It is also
shown that, in the case of first order nets with piecewise linear
activation functions, one can replace arbitrary real weights by rational
numbers with polynomially many bits, without changing the boolean
function that is computed by the neural net. In order to prove these
results we introduce two new methods for reducing nonlinear problems
about weights in multi-layer neural nets to linear problems for a
transformed set of parameters. These transformed parameters can be
interpreted as weights in a somewhat larger neural net.
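As a toy illustration of the first result (a minimal sketch, not the
paper's construction; the particular weights are invented for this
example), here a unit with a piecewise linear activation and real
weights and a Heaviside gate with weights from $\{-1,0,1\}$ compute the
same boolean function:

  def heaviside(x):
      # Heaviside gate: fires iff the weighted sum reaches the threshold.
      return 1 if x >= 0 else 0

  def sat_linear(x):
      # A simple piecewise linear activation: max(0, min(1, x)).
      return max(0.0, min(1.0, x))

  # Both units compute boolean AND on inputs from {0,1}^2.
  for x1, x2 in [(0, 0), (0, 1), (1, 0), (1, 1)]:
      y_real  = int(sat_linear(0.7*x1 + 0.7*x2 - 0.9) >= 0.5)  # real weights
      y_restr = heaviside(x1 + x2 - 2)        # weights 1 and 1, threshold 2
      assert y_real == y_restr
      print((x1, x2), y_real, y_restr)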
As another application of our new proof technique we show that
neural nets with piecewise polynomial activation functions and a
constant number of analog inputs are probably approximately learnable
(in Valiant's model for PAC-learning).
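(For readers less used to the PAC setting, the standard route from such
discretisation results to learnability is a textbook fact rather than a
contribution of this report: weights of polynomially many bits leave
only finitely many computable functions, say $|H| \leq 2^{p(n)}$, and a
consistent learner over a finite class $H$ is probably approximately
correct after $m \geq \frac{1}{\epsilon}(\ln|H| + \ln\frac{1}{\delta})$
examples, which is polynomial whenever $\ln|H|$ is.)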
----------------------------------------
NeuroCOLT Technical Report NC-TR-94-021:
----------------------------------------
On the Computational Complexity of Networks of Spiking Neurons
by Wolfgang Maass, Institute for Theoretical Computer Science,
Technische Universitaet Graz, Klosterwiesgasse 32/2, A-8010 Graz,
Austria
Abstract:
We investigate the computational power of a formal model for networks
of spiking neurons. It is shown that simple operations on
phase-differences between spike-trains provide a very powerful
computational tool that can in principle be used to carry out highly
complex computations on a small network of spiking neurons. We
construct networks of spiking neurons that simulate arbitrary threshold
circuits, Turing machines, and a certain type of random access machines
with real-valued inputs. We also show that relatively weak basic
assumptions about the response- and threshold-functions of the spiking
neurons are sufficient to employ them for such computations.
Furthermore, we prove upper bounds for the computational power of
networks of spiking neurons with arbitrary piecewise linear response-
and threshold-functions, and show that, with regard to real-time
simulations, they are computationally equivalent to a certain type of
random access machine and to recurrent analog neural nets with
piecewise linear activation functions. In addition, we give
corresponding results for networks of spiking neurons with a limited
timing precision, and we prove upper and lower bounds for the
VC-dimension and pseudo-dimension of networks of spiking neurons.
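To see why phase-differences are such a convenient computational
currency, here is a deliberately simplified sketch (not the formal
model of the report; the linear response function and the particular
numbers are assumptions of the example). With linearly rising
responses, the firing time of a neuron is an affine function of its
input spike times:

  def firing_time(input_times, weights, threshold, slope=1.0):
      # Earliest t with sum_i w_i * slope * (t - t_i) >= threshold,
      # assuming the threshold is crossed only after every input
      # response has begun rising (checked by the assertion below).
      total_w = sum(weights)
      weighted = sum(w * t for w, t in zip(weights, input_times))
      t = (threshold / slope + weighted) / total_w
      assert t >= max(input_times), "threshold crossed before all inputs arrived"
      return t

  # The output spike time is a weighted average of the input spike
  # times plus a constant delay, so shifting the input phases shifts
  # the output phase in a controlled, linear way.
  print(firing_time([1.0, 3.0], [0.5, 0.5], threshold=1.5))  # -> 3.5
  print(firing_time([1.5, 3.5], [0.5, 0.5], threshold=1.5))  # -> 4.0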
-----------------------
The report NC-TR-94-015 can be accessed and printed as follows:
% ftp cscx.cs.rhbnc.ac.uk (134.219.200.45)
Name: anonymous
Password: your full email address
ftp> cd pub/neurocolt/tech_reports
ftp> binary
ftp> get nc-tr-94-015.ps.Z
ftp> bye
% zcat nc-tr-94-015.ps.Z | lpr -l
Similarly for the other technical reports.
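For anyone who prefers a scripted, non-interactive fetch, the same
anonymous-FTP session can be reproduced, for instance, with Python's
standard ftplib (a sketch of the session above, not an officially
supported interface; substitute your own email address as the
password):

  from ftplib import FTP

  with FTP("cscx.cs.rhbnc.ac.uk") as ftp:              # 134.219.200.45
      ftp.login(user="anonymous", passwd="you@example.org")
      ftp.cwd("pub/neurocolt/tech_reports")
      with open("nc-tr-94-015.ps.Z", "wb") as f:
          ftp.retrbinary("RETR nc-tr-94-015.ps.Z", f.write)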
Uncompressed versions of the PostScript files have also been provided
for anyone without an uncompress facility.
A full list of the currently available Technical Reports in the
Series is held in the file `abstracts' in the same directory.
Best wishes
John Shawe-Taylor