ontogenesis and synaptogenesis (constructing, generating)
Leonard Uhr
uhr at cs.wisc.edu
Fri Jan 11 15:46:42 EST 1991
An alternative to adding physical nodes (and/or links) whenever constructive
algorithms (or "generations") need them is to have the system generate the
physical substrate with whatever vigor it can muster (hopefully, so there will
always be as-yet-unused resources available), and also, when needed, free up
(degenerate) resources to make new space. Then the structures of processes can,
as needed, be embedded where appropriate.
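A rough sketch of this idea in Python (the names SubstratePool, grow, claim,
and release are made up purely for illustration): the substrate grows nodes on
its own schedule, and structures claim free nodes only when a learning step
actually needs them.

    class SubstratePool:
        """Physical nodes generated ahead of demand, claimed and freed later."""
        def __init__(self):
            self.free_nodes = set()
            self.used_nodes = set()
            self._next_id = 0

        def grow(self, n):
            """Generate n new physical nodes before anything asks for them."""
            for _ in range(n):
                self.free_nodes.add(self._next_id)
                self._next_id += 1

        def claim(self, n):
            """Embed a newly generated structure into n as-yet-unused nodes."""
            if len(self.free_nodes) < n:
                raise RuntimeError("substrate has not grown fast enough")
            picked = [self.free_nodes.pop() for _ in range(n)]
            self.used_nodes.update(picked)
            return picked

        def release(self, nodes):
            """Degenerate resources: free nodes so new structures have room."""
            for node in nodes:
                self.used_nodes.discard(node)
                self.free_nodes.add(node)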
This raises several interesting unsolved problems. A rather general topology
is needed, one into which the particular structures that are actually generated
will fit with reasonable efficiency. I'm describing this in a way that brings
out the similarities to the problem of finding good topologies for massively
parallel multi-computers, so that a variety of different structures of processes
can be embedded and executed efficiently. The major architectures used today
are 2-D arrays, trees, and N-cubes; each accepts some embeddings reasonably
well, but not others. One pervasive problem is that their small number of links
per node (typically 2 to 12) can easily lead to bottlenecks, which NNs with,
e.g., 100 or so links per node might almost always overcome. And there are
many, many other possible graphs, including interesting hybrids.
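To make the link-count point concrete, a small sketch comparing the three
architectures at roughly 4096 nodes each, using the standard degree and
diameter figures for these topologies:

    def mesh_2d(k):          # k x k 2-D array (no wraparound)
        return {"nodes": k * k, "links_per_node": 4, "diameter": 2 * (k - 1)}

    def binary_tree(depth):  # complete binary tree of the given depth
        return {"nodes": 2 ** (depth + 1) - 1, "links_per_node": 3, "diameter": 2 * depth}

    def n_cube(n):           # n-dimensional hypercube
        return {"nodes": 2 ** n, "links_per_node": n, "diameter": n}

    for name, g in [("2-D array", mesh_2d(64)),
                    ("binary tree", binary_tree(11)),
                    ("12-cube", n_cube(12))]:
        print(f"{name:12s} nodes={g['nodes']:5d} "
              f"links/node={g['links_per_node']:2d} diameter={g['diameter']:3d}")
    # At most 3 to 12 links per node, against the 100 or so links per node
    # that an NN might have.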
There are several other issues, but I hope this is enough detail to make the
following point: If the physical substrate forms a graph whose topology is
reasonably close-to-isomorphic to a variety of structures that combine many
smaller graphs, the problem can be viewed as one of finding graphs for
the physical substrate into which rich enough sets of graphs of processes can
be embedded to step serially through the (usually small) increments that good
learning algorithms will generate. To the extent that generation builds
relatively small, local graphs, this probably becomes easier.
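A rough sketch of that embedding step for small, local graphs: a backtracking
search that maps each node of a process graph onto a distinct substrate node so
that every process link lands on a physical link. The adjacency-dict
representation and the 3-node example are made up for illustration.

    def embed(process, substrate, mapping=None):
        """Map process nodes onto distinct substrate nodes, link for link."""
        mapping = mapping or {}
        if len(mapping) == len(process):      # every process node placed
            return mapping
        p = next(v for v in process if v not in mapping)
        for s in substrate:
            if s in mapping.values():
                continue
            # every already-placed neighbor of p must sit on a neighbor of s
            if all(mapping[q] in substrate[s] for q in process[p] if q in mapping):
                result = embed(process, substrate, {**mapping, p: s})
                if result:
                    return result
        return None                           # no embedding found

    # Example: embed a 3-node chain of processes into a 4-cycle of physical nodes.
    chain = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
    cycle = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {2, 0}}
    print(embed(chain, cycle))                # e.g. {'a': 0, 'b': 1, 'c': 2}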
I don't mean to solve a problem by using the result of yet another unsolved
problem. Just as is done with today's multi-computers, we can use mediocre
topologies with poor embeddings and slower-than-optimal processing (and even
call these super- and ultra-computers, since they may well be the fastest and
most powerful around today), and at the same time try to improve upon them.
There's another type of learning that almost certainly needs just this kind of
thing. Consider how much we remember when we're told the plot of a novel or
some gossip, or shown some pictures and then asked which ones we were shown.
The brain is able to make large amounts of already-present physical substrate
available, whether for temporarily or permanently remembered new information,
including processes. As Scott points out, the hardware processors do NOT have
to be generated exactly when and because the functions they execute are.
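A tiny sketch of that separation (again with made-up names): the physical nodes
already exist, and content is bound to one of them only at the moment something
new must be remembered.

    free_nodes = set(range(1000))   # substrate generated well in advance
    bindings = {}                   # node -> remembered content (or process)

    def remember(content):
        """Bind new content to an already-present node, not a newly built one."""
        node = free_nodes.pop()
        bindings[node] = content
        return node

    remember("the plot of the novel we were just told")
    remember("which pictures we were shown")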
Len Uhr