No subject
Andras Lorincz
lorincz at iserv.iki.kfki.hu
Fri Aug 18 05:22:36 EDT 1995
The following recent publications of the Adaptive Systems Lab
of the Attila Jozsef University of Szeged and the Hungarian
Academy of Sciences can be accessed on the WWW via the URLs
http://iserv.iki.kfki.hu/asl-publs.html and
http://iserv.iki.kfki.hu/qua-publs.html.
Olah, M. and Lorincz, A. (1995)
Analog VLSI Implementation of Grassfire Transformation for
Generalized Skeleton Formation
ICANN'95 Paris, accepted
Abstract
A novel analog VLSI circuit is proposed that implements the grassfire
transformation for calculating the generalized skeleton of a planar
shape. The fully parallel VLSI circuit can perform the calculation in
real time. The algorithm, based on an activation spreading process on
an artificial neural network, can be implemented by an extended
nonlinear resistive network. The architecture and building blocks are
outlined, and the feasibility of the technique is investigated.
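For readers who want the intuition behind the transformation itself, a minimal
serial sketch of the classical grassfire idea follows (a fire lit on the
background spreads inward at unit speed; ridge points of the arrival-time map
approximate the skeleton). This is only a software stand-in for illustration,
not the analog circuit of the paper.

from collections import deque

def grassfire(shape):
    """Arrival time of the 'fire' at each pixel of a binary image.

    shape: 2-D list of 0/1 values (1 = inside the planar shape).
    The fire starts on every background (0) pixel and spreads one pixel
    per step; ridge points of the returned map approximate the
    generalized skeleton.
    """
    rows, cols = len(shape), len(shape[0])
    dist = [[None] * cols for _ in range(rows)]
    frontier = deque()
    for r in range(rows):
        for c in range(cols):
            if shape[r][c] == 0:          # background: fire is lit here at time 0
                dist[r][c] = 0
                frontier.append((r, c))
    while frontier:                        # breadth-first wavefront propagation
        r, c = frontier.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and dist[nr][nc] is None:
                dist[nr][nc] = dist[r][c] + 1
                frontier.append((nr, nc))
    return dist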
Szepesvari, Cs. (1995)
General Framework for Reinforcement Learning,
ICANN'95 Paris, accepted
Abstract
We set up a general framework for the investigation of decision
processes. The framework is based on the so-called one-step look-ahead
(OLA) cost mapping. The cost of a policy is defined by successive
application of the OLA mapping. Various decision criteria (e.g. the
expected-value criterion or the worst-case criterion) can thus be treated
in a unified way. The main theorem of this article says that under minimal
conditions optimal stationary policies are greedy w.r.t. the optimal
cost function and vice versa. Based on this result we hope that
previous results on reinforcement learning can be generalized to other
decision criteria that fit the proposed framework.
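To make the role of the criterion concrete, here is a hedged sketch of the
one-step look-ahead idea on a finite problem: the criterion enters only
through the way costs are aggregated over successor states, so the
expected-value and worst-case cases share the same machinery. The names and
the discount factor are illustrative assumptions, not the paper's notation.

import numpy as np

def ola(u, cost, P, aggregate, gamma=0.9):
    """(Q u)(x, a): immediate cost plus discounted, aggregated cost-to-go."""
    nX, nA, _ = P.shape
    q = np.empty((nX, nA))
    for x in range(nX):
        for a in range(nA):
            q[x, a] = cost[x, a] + gamma * aggregate(P[x, a], u)
    return q

expected_value = lambda p, u: p @ u            # expected-value criterion
worst_case     = lambda p, u: u[p > 0].max()   # worst-case criterion

def greedy_update(u, cost, P, aggregate):
    # one successive application of the OLA mapping, then greedy over actions
    return ola(u, cost, P, aggregate).min(axis=1)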
Marczell, Zs., Kalmar, Zs. and Lorincz, A. (1995)
Generalized skeleton formation for texture segmentation
Neural Network World, accepted
Abstract
An algorithm, and an artificial neural architecture that approximates
the algorithm, are proposed for the formation of generalized skeleton
transformations. The algorithm includes the original grassfire proposal
of Blum and is extended with an integrative on-center off-surround
detector system. It is shown that the algorithm can elicit textons by
skeletonization. A slight modification of the architecture corresponds to
the Laplace transformation followed by full-wave rectification, another
algorithm for texture discrimination proposed by Bergen and Adelson.
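As a rough point of reference for the comparison made above, the Laplace
filtering plus full-wave rectification pipeline can be sketched in a few lines
of NumPy/SciPy; the filter and the pooling scale are arbitrary choices here,
not values from the paper.

import numpy as np
from scipy import ndimage

def texture_energy(image, pool_sigma=4.0):
    """image: 2-D numpy array of gray values; returns a texture-energy map."""
    response = ndimage.laplace(image.astype(float))        # center-surround (Laplacian) filtering
    rectified = np.abs(response)                           # full-wave rectification
    return ndimage.gaussian_filter(rectified, pool_sigma)  # local pooling of the rectified response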
Szepesvari, Cs. (1995)
Perfect Dynamics for Neural Networks
Talk presented at Mathematics of Neural Networks and Applications, Lady
Margaret Hall, Oxford, July 1995
Abstract
The vast majority of artificial neural network (ANN) algorithms may be
viewed as massively parallel, non-linear numerical procedures that
solve certain kinds of fixed-point equations in an iterative manner. We
consider the use of such recurrent ANNs to solve optimization problems.
The dynamics of such networks is often based on the minimization of an
energy-like function. An important (and presumably hard) problem is to
exclude the possibility of "spurious local minima". In this article we
take another starting point, which is to consider perfect dynamics.
We say that a recurrent ANN admits perfect dynamics if the dynamical
system given by the update operator of the network has an attractor
whose basin of attraction covers the set of all possible initial
solution candidates. One may wonder whether neural networks that admit
perfect dynamics can be interesting in applications. In this article we
show that there exists a family of such networks (or dynamics). We
introduce Generalised Dynamic Programming (GenDyP) to govern the
dynamics. Roughly speaking, a GenDyP problem is a 3-tuple
(X,A,Q), where X and A are arbitrary sets and Q maps functions of X
into functions of X x A. GenDyP problems derive from sequential
decision problems and dynamic programming (DP). Traditional DP
procedures correspond to special selections of the mapping Q. We show
that if Q is monotone and satisfies other reasonable (continuity and
boundedness) conditions, then the induced iteration converges to a
distinguished solution of the functional equation u(x) = inf_a (Q
u)(x,a), which is the generalization of the well-known Bellman
Optimality Equation. The proofs rely on the relation between the
above dynamics and the optimal solutions of generalized multi-stage
decision problems (we give the definitions in the next section).
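A toy numerical illustration of what perfect dynamics means in this setting
may help. Under standard discounted, finite-state assumptions (a
simplification of my own, far narrower than the paper's conditions), repeated
application of the look-ahead update drives every initial guess to the same
fixed point.

import numpy as np

rng = np.random.default_rng(1)
nX, nA, gamma = 6, 3, 0.9
cost = rng.random((nX, nA))
P = rng.random((nX, nA, nX)); P /= P.sum(axis=2, keepdims=True)

def T(u):
    q = cost + gamma * P @ u          # (Q u)(x, a): one-step look-ahead values
    return q.min(axis=1)              # greedy minimization over actions

fixed_points = []
for _ in range(3):                    # three very different initial guesses
    u = rng.normal(scale=100.0, size=nX)
    for _ in range(500):
        u = T(u)                      # iterate the update operator
    fixed_points.append(u)
print(np.allclose(fixed_points[0], fixed_points[1]),
      np.allclose(fixed_points[1], fixed_points[2]))   # both True: one attractor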
Kalmar, Zs., Szepesvari, Cs. and Lorincz, A. (1995)
Generalized Dynamic Concept Model as a Route to Construct Adaptive
Autonomous Agents
Neural Network World 3:353--360
Abstract
A model of adaptive autonomous agents that (i) builds an internal
representation of events and event relations, (ii) utilizes activation
spreading for building dynamic concepts, and (iii) makes use of the
winner-take-all paradigm to come to a decision is extended by
introducing generalization into the model. The generalization reduces
memory requirements and improves performance in unseen scenes, as
indicated by computer simulations.
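Purely as a schematic reminder of two of the ingredients named above,
activation spreading over a relation graph followed by a winner-take-all
choice can be written as follows; the graph, decay and step count are
placeholders, not the model of the paper.

import numpy as np

def spread_activation(W, seed, steps=5, decay=0.8):
    """W[i, j]: strength of the relation from event i to event j; seed: initial activations."""
    a = seed.astype(float)
    for _ in range(steps):
        a = decay * (W.T @ a) + seed      # propagate along relations, keep the sources active
    return a

def winner_take_all(activations):
    return int(np.argmax(activations))    # the single most active unit drives the decision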
J.Toth, G., Kovacs, Sz., and Lorincz, A. (1995)
Genetic algorithm with alphabet optimization
Biological Cybernetics 73:61-68
Abstract
In recent years the genetic algorithm (GA) has been used successfully to
solve many optimization problems. One of the most difficult questions in
applying the GA to a particular problem is that of coding. In this paper a
scheme is derived to optimize one aspect of the coding in an automatic
fashion. This is done by using a high-cardinality alphabet and
optimizing the meaning of the letters. The scheme is especially well
suited to cases where a number of similar problems need to be solved.
The use of the scheme is demonstrated on such a group of problems: the
simplified problem of navigating a `robot' in a `room'. It is shown
that for the sample problem family the proposed algorithm is superior
to the canonical GA.
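The coding idea can be pictured roughly as follows; the adaptation rule shown
is an assumption for illustration only and is not claimed to be the authors'
scheme.

import numpy as np

rng = np.random.default_rng(0)
ALPHABET, LENGTH, DIM = 32, 12, 2                  # letters, genome length, meaning dimension
codebook = rng.normal(size=(ALPHABET, DIM))        # letter -> real-valued "meaning"

def decode(genome):
    """Map a genome (sequence of letters) to its phenotype via the shared codebook."""
    return codebook[np.asarray(genome)]            # shape (LENGTH, DIM)

def adapt_codebook(elite_genomes, target_phenotypes, lr=0.05):
    """Nudge each letter's meaning toward how it is used in successful individuals."""
    for genome, target in zip(elite_genomes, target_phenotypes):
        for pos, letter in enumerate(genome):
            codebook[letter] += lr * (target[pos] - codebook[letter])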
J.Toth, G. and Lorincz, A. (1995)
Genetic algorithm with migration on topology conserving maps
Neural Network World 2:171--181
Abstract
The genetic algorithm (GA) is extended to solve a family of optimization
problems in a self-organizing fashion. The continuous world of inputs
is discretized in an optimal fashion with the help of a topology
conserving neural network. The GA is generalized to organize
individuals into subpopulations associated with the neurons.
Interneuron topology connections are used to allow gene migration to
neighboring sites. The method speeds up the GA by allowing small
subpopulations while still preserving diversity with the help of
migration. Within a subpopulation the original GA was applied as the
means of evolution. To illustrate this modified GA, the optimal control
of a simulated robot arm is treated: a falling ping-pong ball has to be
caught by a bat without bouncing. It is demonstrated that the
simultaneous optimization for an interval of heights can be solved, and
that migration can considerably reduce computation time. Other aspects
of the algorithm are outlined.
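The migration step suggested by the abstract might look roughly like this; the
neighbourhood structure, migrant selection and replacement policy are
assumptions here, not details taken from the paper.

import random

def migrate(subpops, neighbours, fitness, rate=0.1):
    """subpops: {node: list of genomes (lists)}; neighbours: {node: list of adjacent map nodes}."""
    for node, pop in subpops.items():
        for nb in neighbours[node]:
            if random.random() < rate:                  # occasional migration along a map edge
                migrant = max(subpops[nb], key=fitness) # copy the neighbour's best individual
                weakest = min(pop, key=fitness)
                pop[pop.index(weakest)] = list(migrant) # replace the weakest local genome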
Amstrup, B., J.Toth, G., Szabo, G., Rabitz, H., and Lorincz, A. (1995)
Genetic algorithm with migration on topology conserving maps for optimal
control of quantum systems
Journal of Physical Chemistry, 99, 5206-5213
Abstract
The laboratory implementation of molecular optimal control has to
overcome the problem caused by the changing environmental parameters,
such as the temperature of the laser rod, the resonator parameters, the
mechanical parameters of the laboratory equipment, and other dependent
parameters such as time delay between pulses or the pulse amplitudes. In
this paper a solution is proposed: instead of trying to set the
parameter(s) with very high precision, their changes are monitored and
the control is adjusted to the current values. The optimization in the
laboratory can then be run at several values of the parameter(s) with an
extended genetic algorithm (GA) which is tailored to such parametric
optimization. The extended GA does not presuppose that the mapping from
the parameter(s) to the optimal control field is continuous, but it can
take advantage of such continuity and, in fact, explores whether it holds.
The optimization for the different values of the parameter(s) is then done
cooperatively, which reduces the optimization time. A further advantage of
the method is its full adaptiveness; i.e., in the best circumstances no
information on the system or laboratory equipment is required, and only the
success of the control needs to be measured. The method is demonstrated on a model
problem: a pump-and-dump type model experiment on CsI.
The WWW pages also contain pointers to older publications that
are available online. Papers are also available by anonymous ftp
from iserv.iki.kfki.hu/pub/papers.
Best regards,
Andras Lorincz
Department of Adaptive Systems
Attila Jozsef University of Szeged
Dom ter 9
Szeged
Hungary, H-6720

also

Department of Photophysics
Institute of Isotopes
Hungarian Academy of Sciences
Konkoly-Thege 29-33
Budapest, P.O.B. 77
Hungary, H-1525
email: lorincz at iserv.iki.kfki.hu