Connectionists: Announcing HyperNEAT: A New Neuroevolution Algorithm that Exploits Geometry

Kenneth Stanley kstanley at eecs.ucf.edu
Mon Jul 5 18:50:22 EDT 2010


Dear Connectionists,

Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT) is a
step beyond traditional neural network evolution (i.e. neuroevolution)
algorithms toward evolving more brain-like structures.  In particular,
neural networks evolved by HyperNEAT feature topography in addition to
topology.  That is, neurons exist at spatial locations just as they do in
real brains, which means that the evolved connectivity patterns can be
analyzed for emergent topographic, map-like characteristics.  In addition,
because HyperNEAT can encode and evolve large connectivity patterns with
regularities, it can evolve larger networks than past approaches, with up
to millions of connections.
The cover of the July 2010 issue of the Neural Computation journal shows a
HyperNEAT network being constructed:
http://www.mitpressjournals.org/action/showLargeCover?issue=40049392
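
To make "topography in addition to topology" concrete, here is a minimal
sketch in Python (illustrative names and layout, not code from any
HyperNEAT release) of a substrate in which every neuron has an explicit
spatial coordinate:

    # Neurons placed at explicit 2D coordinates form a "substrate";
    # connectivity then becomes a function of geometry.

    # Input neurons on a 3x3 grid in the unit square (layout assumed
    # for illustration).
    input_neurons = [(x / 2.0, y / 2.0) for x in range(3)
                     for y in range(3)]

    # A single output neuron placed above the grid.
    output_neurons = [(0.5, 1.5)]

    # Because every neuron has a location, an evolved connectivity
    # pattern can be inspected for topographic, map-like structure.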

In the past three years, a significant body of research from a growing
HyperNEAT community has emerged.  Many of these publications, along with
source code and a short online introduction to the technique, are
available at the HyperNEAT Users Page:

http://eplex.cs.ucf.edu/hyperNEATpage/HyperNEAT.html

Links to three comprehensive articles in Neural Computation journal,
Artificial Life journal, and the Journal of Machine Learning Research (JMLR)
follow.

--Kenneth O. Stanley (kstanley at eecs.ucf.edu) 

-----------------------------------------------------------------------
Introducing HyperNEAT from a neural computation perspective:

AUTONOMOUS EVOLUTION OF TOPOGRAPHIC REGULARITIES IN ARTIFICIAL NEURAL
NETWORKS
Jason Gauci and Kenneth O. Stanley
Neural Computation 22(7): 1860-1898. Cambridge, MA: MIT Press, 2010.

Manuscript: http://eplex.cs.ucf.edu/publications/2010/gauci.nc10.html

Abstract: Looking to nature as inspiration, for at least the last 25 years
researchers in the field of neuroevolution (NE) have developed evolutionary
algorithms designed specifically to evolve artificial neural networks
(ANNs). Yet the ANNs evolved through NE algorithms lack the distinctive
characteristics of biological brains, perhaps explaining why NE is not yet a
mainstream subject of neural computation. Motivated by this gap, this
article shows that when geometry is introduced to evolved ANNs through the
Hypercube-based NeuroEvolution of Augmenting Topologies (HyperNEAT)
algorithm, they begin to acquire characteristics that indeed are reminiscent
of biological brains. That is, if the neurons in evolved ANNs are situated
at locations in space (i.e. if they are given coordinates), then, as
experiments in evolving checkers-playing ANNs in this paper show,
topographic maps with symmetries and regularities can evolve spontaneously.
The ability to evolve such maps is shown in this paper to provide an
important advantage in generalization. In fact, the evolved maps are
sufficiently informative that their analysis yields the novel insight that
the geometry of the connectivity patterns of more general players is
significantly smoother and more contiguous than that of less general
players.  Thus, the
results in this paper reveal a correlation between generality and smoothness
in connectivity patterns. This result hints at the intriguing possibility
that, as NE matures as a field, its algorithms can evolve ANNs of increasing
relevance to those who study neural computation in general.
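
As a rough illustration of what a smoothness analysis of a connectivity
pattern might involve, here is a minimal Python sketch using a simple
adjacent-difference statistic; this is an assumption for illustration,
not necessarily the measure used in the paper:

    import numpy as np

    def smoothness(weights_2d):
        """Crude smoothness score for a connectivity pattern laid out
        on a 2D grid: the mean absolute difference between spatially
        adjacent weights (lower = smoother). Illustrative only."""
        w = np.asarray(weights_2d, dtype=float)
        dx = np.abs(np.diff(w, axis=0)).mean()
        dy = np.abs(np.diff(w, axis=1)).mean()
        return (dx + dy) / 2.0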

-----------------------------------------------------------------------
A more basic introduction:

A HYPERCUBE-BASED ENCODING FOR EVOLVING LARGE-SCALE NEURAL NETWORKS
Kenneth O. Stanley, David B. D'Ambrosio, and Jason Gauci 
Artificial Life 15(2): 185-212. Cambridge, MA: MIT Press, 2009.

Manuscript: http://eplex.cs.ucf.edu/publications/2009/stanley.alife09.html

Abstract: Research in neuroevolution, i.e. evolving artificial neural
networks (ANNs) through evolutionary algorithms, is inspired by the
evolution of biological
brains. Because natural evolution discovered intelligent brains with
billions of neurons and trillions of connections, perhaps neuroevolution can
do the same. Yet while neuroevolution has produced successful results in a
variety of domains, the scale of natural brains remains far beyond reach.
This paper presents a method called Hypercube-based NeuroEvolution of
Augmenting Topologies (HyperNEAT) that aims to narrow this gap. HyperNEAT
employs an indirect encoding called connective Compositional Pattern
Producing Networks (connective CPPNs) that can produce connectivity patterns
with symmetries and repeating motifs by interpreting spatial patterns
generated within a hypercube as connectivity patterns in a lower-dimensional
space. The advantage of this approach is that it can exploit the geometry of
the task by mapping its regularities onto the topology of the network,
thereby shifting problem difficulty away from dimensionality to underlying
problem structure. Furthermore, connective CPPNs can represent the same
connectivity pattern at any resolution, allowing ANNs to scale to new
numbers of inputs and outputs without further evolution. HyperNEAT is
demonstrated through visual discrimination and food gathering tasks,
including successful visual discrimination networks containing over eight
million connections. The main conclusion is that the ability to explore the
space of regular connectivity patterns opens up a new class of complex
high-dimensional tasks to neuroevolution.
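
As a sketch of the encoding described above (the CPPN below is a
hypothetical stand-in function with assumed names; real HyperNEAT evolves
the CPPN itself), the weight of every candidate connection is obtained by
querying a single function of the endpoint coordinates, so the same
pattern can be sampled at any substrate resolution:

    import math

    def cppn(x1, y1, x2, y2):
        # Stand-in for an evolved CPPN: any function of the four
        # endpoint coordinates defines a geometric connectivity
        # pattern across the whole substrate.
        return math.sin(3.0 * (x1 - x2)) * math.exp(-(y1 - y2) ** 2)

    def build_connections(resolution, threshold=0.2):
        # Sample substrate nodes on a resolution x resolution grid in
        # [0,1]^2 and query the CPPN for each candidate connection;
        # weights below the threshold are not expressed.
        coords = [(i / (resolution - 1), j / (resolution - 1))
                  for i in range(resolution) for j in range(resolution)]
        conns = []
        for (x1, y1) in coords:
            for (x2, y2) in coords:
                w = cppn(x1, y1, x2, y2)
                if abs(w) > threshold:
                    conns.append(((x1, y1), (x2, y2), w))
        return conns

    # The same CPPN yields a consistent pattern at any density:
    small = build_connections(5)   # 625 candidate connections
    large = build_connections(20)  # 160,000 candidates, no re-evolution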

-----------------------------------------------------------------------
In this last paper, HyperNEAT enables a novel representation for the
challenging RoboCup Keepaway benchmark:

EVOLVING STATIC REPRESENTATIONS FOR TASK TRANSFER
Phillip Verbancsics and Kenneth O. Stanley
Journal of Machine Learning Research 11: 1737-1769. Brookline, MA:
Microtome Publishing, 2010.

Courtesy of JMLR:
http://eplex.cs.ucf.edu/publications/2010/verbancsics.jmlr10.html

Abstract: An important goal for machine learning is to transfer
knowledge between
tasks. For example, learning to play RoboCup Keepaway should contribute to
learning the full game of RoboCup soccer. Previous approaches to transfer in
Keepaway have focused on transforming the original representation to fit the
new task. In contrast, this paper explores the idea that transfer is most
effective if the representation is designed to be the same even across
different tasks. To demonstrate this point, a bird's eye view (BEV)
representation is introduced that can represent different tasks on the same
two-dimensional map. For example, both the 3 vs. 2 and 4 vs. 3 Keepaway
tasks can be represented on the same BEV. Yet the problem is that a raw
two-dimensional map is high-dimensional and unstructured. This paper shows
how this problem is addressed naturally by an idea from evolutionary
computation called indirect encoding, which compresses the representation by
exploiting its geometry. The result is that the BEV learns a Keepaway policy
that transfers without further learning or manipulation. It also facilitates
transferring knowledge learned in a different domain, Knight Joust, into
Keepaway. Finally, the indirect encoding of the BEV means that its geometry
can be changed without altering the solution. Thus static representations
facilitate several kinds of transfer.
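
As a rough sketch of what a bird's eye view input might look like (grid
size, field dimensions, and channel values are illustrative assumptions,
not the paper's exact specification), agents from any Keepaway variant
can be rasterized onto the same fixed-size map:

    import numpy as np

    def birds_eye_view(keepers, takers, ball, size=20, field=25.0):
        """Rasterize agent positions (meters) onto a size x size grid
        covering a square field. Illustrative sketch only."""
        bev = np.zeros((size, size))
        def cell(pos):
            x, y = pos
            return (min(int(x / field * size), size - 1),
                    min(int(y / field * size), size - 1))
        for p in keepers:
            bev[cell(p)] = 1.0    # teammates
        for p in takers:
            bev[cell(p)] = -1.0   # opponents
        bev[cell(ball)] = 0.5     # ball
        return bev

    # 3 vs. 2 and 4 vs. 3 tasks yield inputs of identical shape, so a
    # policy learned on one applies directly to the other:
    bev_3v2 = birds_eye_view([(5, 5), (10, 20), (20, 10)],
                             [(12, 12), (13, 14)], ball=(6, 6))
    bev_4v3 = birds_eye_view([(5, 5), (10, 20), (20, 10), (15, 3)],
                             [(12, 12), (13, 14), (8, 18)], ball=(6, 6))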

-----------------------------------------------------------------------

Software, demos, and many other HyperNEAT publications from our own group
and others are available through:
http://eplex.cs.ucf.edu/hyperNEATpage/HyperNEAT.html


