PhD on neurocontrol: 2nd announcement

Rafal W Zbikowski rafal at mech.gla.ac.uk
Sun Jun 25 12:35:13 EDT 1995



My PhD thesis on neurocontrol can be found on the anonymous FTP
server

	ftp.mech.gla.ac.uk	(130.209.12.14)

in directory

	rafal
	
as a PostScript file (ca. 1.2 MB)

	zbikowski_phd.ps

For details see abstract below.

   Rafal Zbikowski
        Control Group, Department of Mechanical Engineering, 
        Glasgow University, Glasgow G12 8QQ, Scotland, UK
   rafal at mech.gla.ac.uk

----------------------------- cut here ---------------------------------

	``Recurrent Neural Networks: Some Control Aspects''

			   PhD Thesis
			Rafal Zbikowski
		

			ABSTRACT

This work aims at rigorous theoretical research on nonlinear
adaptive control using recurrent neural networks. Attention is
focussed on the dynamic, nonlinear parametric structures as generic
models suitable for on-line use.  The discussion is centred around
proper mathematical formulation and analysis of the complex and
abstract issues and therefore no experimental data are given.  The
main aim of this work is to explore the capabilities of deterministic,
continuous-time recurrent neural networks as state-space, generic,
parametric models in the framework of nonlinear adaptive control.
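As a minimal illustration (mine, not the thesis's formulation) of the kind of model the abstract refers to, the following sketch integrates a deterministic, continuous-time recurrent network in state-space form, dx/dt = -x + W*sigma(x) + B*u(t) with output y = C*x. The particular matrices, the tanh nonlinearity, and the forward-Euler step are all illustrative assumptions:

```python
# Hypothetical continuous-time recurrent network in state space.
# dx/dt = -x + W sigma(x) + B u(t),  y = C x
# All names and the Euler integration scheme are assumptions for
# illustration only.
import numpy as np

def sigma(z):
    return np.tanh(z)  # smooth sigmoidal nonlinearity

def simulate(W, B, C, u, x0, dt=0.01, steps=1000):
    """Integrate the network ODE with forward Euler; record outputs."""
    x = x0.copy()
    ys = []
    for k in range(steps):
        x = x + dt * (-x + W @ sigma(x) + B @ u(k * dt))
        ys.append(C @ x)
    return np.array(ys)

rng = np.random.default_rng(0)
n, m, p = 4, 1, 1                      # state, input, output dimensions
W = 0.5 * rng.standard_normal((n, n))  # recurrent weights (parameters)
B = rng.standard_normal((n, m))        # input weights
C = rng.standard_normal((p, n))        # readout weights
y = simulate(W, B, C, u=lambda t: np.array([np.sin(t)]), x0=np.zeros(n))
print(y.shape)
```

The weights W, B, C play the role of the adjustable parameters of the generic model; adaptive control would tune them on-line.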

The notion of *nonlinear neural adaptive control* is introduced
and discussed.  The continuous-time state-space approach to recurrent
neural networks is used.  A general formalism of genericity of
control is set up and developed into the *differential
approximation* as the focal point of recurrent networks theory.  A
comparison of approaches to neural approximation, both feedforward
and recurrent, is presented within a unified framework and with
emphasis on relevance for neurocontrol.  Two approaches to
identifiability of recurrent networks are analysed in detail: one
based on the State Isomorphism Theorem and the other on the I/O
equivalence.  The Lie algebra associated with recurrent networks is
described and difficulties in verification of (weak) controllability
and observability pointed out.  Learning algorithms for recurrent
networks are systematically presented and interpreted as
deterministic, infinite-dimensional optimisation problems.  The
continuous-time version of Real-Time Recurrent Learning is also
rigorously derived.  Proper links between recurrent learning and
optimal control are established.  Finally, the interpretation of
graceful degradation as an optimal sensitivity problem is given.
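To make the continuous-time recurrent-learning idea concrete, here is a small sketch (my illustration, not the thesis's derivation) of the forward-sensitivity principle behind Real-Time Recurrent Learning: the state x and its sensitivity p = dx/dw are integrated together, and the gradient of an integral cost is accumulated along the trajectory. The scalar network dx/dt = -x + w*tanh(x) + 1 and the quadratic tracking cost are assumptions chosen for brevity:

```python
# Forward-sensitivity sketch of continuous-time RTRL on a scalar
# network dx/dt = f(x, w) = -x + w tanh(x) + 1.
# Sensitivity ODE: dp/dt = (df/dx) p + df/dw, with p = dx/dw.
# Cost J = integral of (x - d)^2 dt; on-line gradient dJ/dw via p.
import numpy as np

def rtrl_gradient(w, d=0.5, dt=1e-3, T=5.0):
    """Return (cost, dcost/dw) for the Euler-discretised trajectory."""
    x, p, cost, grad = 0.0, 0.0, 0.0, 0.0
    for _ in range(int(T / dt)):
        e = x - d
        cost += dt * e * e
        grad += dt * 2.0 * e * p            # accumulate dJ/dw on-line
        fx = -1.0 + w / np.cosh(x) ** 2     # df/dx at current state
        fw = np.tanh(x)                     # df/dw at current state
        x, p = (x + dt * (-x + w * np.tanh(x) + 1.0),
                p + dt * (fx * p + fw))
    return cost, grad

w = 0.3
cost, grad = rtrl_gradient(w)

# Check the sensitivity-based gradient against a finite difference.
eps = 1e-5
fd = (rtrl_gradient(w + eps)[0] - rtrl_gradient(w - eps)[0]) / (2 * eps)
print(cost, grad, fd)
```

Because the sensitivity p is propagated forward in time alongside x, the gradient is available on-line, which is what distinguishes this family of algorithms from backpropagation-through-time-style backward passes; the same structure is what connects recurrent learning to optimal-control sensitivity analysis.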
