recurrent nets.
Bhaskar DasGupta
bhaskar at theory.cs.psu.edu
Tue Dec 17 09:46:14 EST 1991
The following will appear as a short paper in IEEE SouthEastcon 1992.
Learning Capabilities of Recurrent Networks.
Bhaskar DasGupta
Computer Science Department
Penn State.
Brief summary:
Recurrent Neural Networks are models of computation in which the underlying
graph is directed (possibly cyclic), and each processor changes state,
either deterministically or probabilistically, according to a function of
its weighted summed inputs. Under arbitrary probabilistic update rules,
such models can be as powerful as Probabilistic Turing Machines. For
probabilistic models we can define the error probability as the maximum
probability of reaching an incorrect output configuration.
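As a minimal illustration (not from the paper; the names W, theta, and the
sigmoid firing rule are my own assumptions), one synchronous update step of
such a net might look like this in Python:

    import math
    import random

    def step(state, W, theta, probabilistic=False, rng=random):
        """One synchronous update: state is a list of 0/1 values,
        W[i][j] is the weight on edge j -> i, theta[i] is i's threshold."""
        n = len(state)
        new_state = []
        for i in range(n):
            s = sum(W[i][j] * state[j] for j in range(n)) - theta[i]
            if probabilistic:
                # an illustrative stochastic rule: fire with prob. sigmoid(s)
                p = 1.0 / (1.0 + math.exp(-s))
                new_state.append(1 if rng.random() < p else 0)
            else:
                # deterministic rule: fire iff the weighted sum reaches theta[i]
                new_state.append(1 if s >= 0 else 0)
        return new_state

With probabilistic=False this is the deterministic threshold rule discussed
below; with probabilistic=True each processor fires stochastically as a
function of the same weighted sum.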
It is observed:
If the error probability is bounded, then such a network can be simulated
by a deterministic finite automaton (with exponentially many states).
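The deterministic intuition behind the exponential state count is that a
net of n binary processors has at most 2^n global configurations, so its
dynamics already form a finite automaton on the configuration set; the
bounded-error probabilistic case needs a further argument on top of this.
A hedged sketch of that enumeration (helper names are mine, reusing the
deterministic rule above):

    from itertools import product

    def det_step(state, W, theta):
        # deterministic threshold update, as in the sketch above
        n = len(state)
        return tuple(1 if sum(W[i][j] * state[j] for j in range(n)) >= theta[i]
                     else 0
                     for i in range(n))

    def configuration_dfa(W, theta):
        # one automaton state per global configuration: at most 2**n of them
        n = len(theta)
        return {cfg: det_step(cfg, W, theta)
                for cfg in product((0, 1), repeat=n)}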
For deterministic recurrent nets in which each processor implements a
threshold function:
Such nets may accept P-complete languages. However, restricting the
weight-threshold relationship may result in accepting a weaker class,
NC (problems solvable in poly-log time with polynomially many
processors).
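For concreteness, a single threshold processor outputs 1 exactly when its
weighted sum meets its threshold; the example below (illustrative only,
not from the paper) shows how the weight-threshold relationship already
fixes simple Boolean behaviour:

    def threshold_unit(inputs, weights, theta):
        # fires iff the weighted sum of the inputs reaches the threshold
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= theta else 0

    # with unit weights, theta = 2 gives AND and theta = 1 gives OR
    assert threshold_unit((1, 1), (1, 1), 2) == 1
    assert threshold_unit((0, 1), (1, 1), 2) == 0
    assert threshold_unit((0, 1), (1, 1), 1) == 1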
The results are straightforward to derive, so I have not put the paper in
the neuroprose archive.
Thanks.
Bhaskar