Tue Jun 6 06:52:25 EDT 2006


algorithms definitely need to design and train nets on their own. (By
the way, this is indeed a doable task, and all neural network algorithms
need to focus on both of these tasks.) We cannot leave design out of
our algorithms. Our original intent is to build self-learning systems, not
systems that we have to "babysit" all the time. Such systems are
"useless" if we want to build truly autonomous learning systems that can
learn on their own. "Learning" includes both design and training. We
cannot call them learning algorithms unless they design nets on their
own and unless they attempt to generalize (i.e., attempt to build the
smallest possible net).
 
I would welcome more thoughts and debate on all of these issues. It
would also help to see more response on two of the other premises of
classical connectionist learning - local learning and memoryless learning.
They have been the key concepts behind algorithm development in this
field for the last 40 to 50 years. Again, open and vigorous debate is
very healthy for a scientific field. Perhaps more researchers will come
forward with facts and ideas on these two and other issues.
********************************************************
********************************************************
On May 23 Danny Silver wrote:
 
"Dr. Roy ..
 It was interesting to read your mail on new criteria for neural network
based inductive learning.  I am sure that many other readers have at
one time or another had similar thoughts or portions thereof.
 Notwithstanding the need to walk before you run, there is reason to
set our sights a little higher than they have been.
 Along these lines I would like to point you toward a growing body of
work on Transfer in Inductive Systems, which suggests that a "life long
learning" or "learning to learn" approach encompasses much of the
criteria which you have outlined. At NIPS*95 a post-conference
workshop covered this very topic and heard from some 15 speakers on
the subject. All those who are interested should search through the
homepages below for additional information."
Daniel L. Silver    University of Western Ontario, London, Canada
N6A 3K7 - Dept. of Comp. Sci. - Office: MC27b
dsilver at csd.uwo.ca  H: (519)473-6168   O: (519)679-2111 (ext.6903)
WWW home page ....  http://www.csd.uwo.ca/~dsilver
==================================================
 Workshop page:
 http://www.cs.cmu.edu/afs/cs.cmu.edu/usr/caruana/pub/transfer.html
 Lori Pratt's transfer page:
 http://vita.mines.colorado.edu:3857/0/lpratt/transfer.html
 Danny Silver's transfer ref list:
 http://www.csd.uwo.ca/~dsilver/ltl-ref-list
 Rich Caruana's transfer ref list:
http://www.cs.cmu.edu/afs/cs.cmu.edu/user/caruana/pub/transferbib.html
********************************************************
********************************************************
On May 21 Michael Vanier wrote:
 
"I read your post to the computational neuroscience mailing list with
interest.  I agreed with most of your points about the differences
between "brain-like" learning and the learning exhibited by current
neural network models.  I have a couple of comments, for what it's
worth.
 
(On Task A: Perform Network Design Task)
 
As a student of neuroscience (and computational neuroscience), it isn't
clear to me what you're referring to when you say that the brain
designs an appropriate network for a given task.  One take on this is
that evolution has done just that, but evolution has operated over
millions of years. Biological development can also presumably tune a
network in response to inputs (e.g. the development of connectivity in
visual cortex in response to the presence or absence of visual stimuli),
but again, this is slow and relatively fixed after a certain period, so it
would only apply to generic tasks whose nature doesn't change
profoundly over time (which presumably is the case for early vision).  I
know of no example where the brain can massively rewire itself in
order to perform some task.  However, the kind of learning found in
connectionist networks (correlation-based using local learning rules)
has a fairly direct analogy to long-term potentiation and depression in
the brain, so it's likely that the brain is at least this powerful.  This
accounts for much of the appeal of local learning rules: you can find
them (or something similar to them) in the brain. In fact, despite the
practical problems with backprop (which you mention), the most
common objection given by biologists to backprop is that even this
simple a learning rule would be very difficult to instantiate in a
biological system.
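 
 To make "correlation-based using local learning rules" concrete, here is a
 minimal, illustrative sketch (not any particular biological model; the names
 and constants are invented): each weight changes using only the activity of
 the two units it connects, with correlated activity strengthening a synapse
 and a decay term weakening it.
 
 import numpy as np
 
 def hebbian_update(w, pre, post, lr=0.01, decay=0.001):
     """One step of a correlation-based local rule.
 
     w    : (n_post, n_pre) weight matrix
     pre  : (n_pre,) presynaptic activity
     post : (n_post,) postsynaptic activity
 
     The change to w[i, j] depends only on post[i] and pre[j] -- the
     information locally available at that synapse. The outer-product
     term plays the role of potentiation, the decay term of depression.
     """
     return w + lr * np.outer(post, pre) - decay * w
 
 # toy usage
 rng = np.random.default_rng(0)
 w = rng.normal(scale=0.1, size=(3, 5))
 pre = rng.random(5)
 post = w @ pre
 w = hebbian_update(w, pre, post)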
 
(On Task C: Quickness in Learning)
 
This is indeed a problem.  Interestingly, attractor networks such as the
Hopfield net can in principle learn in one trial (although there are other
problems involved there too).  Hopfield nets are also fundamentally
feedback structures, like the brain but unlike most connectionist
models. This is not to suggest that Hopfield nets are good models of
the brain; they clearly aren't.
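 
 (To make the one-trial point concrete, here is a minimal Hopfield-style
 sketch -- illustrative only, with arbitrary sizes: a pattern is stored by a
 single Hebbian outer-product update, with no iterative error-driven
 training, and recalled by running the feedback dynamics from a corrupted
 cue.)
 
 import numpy as np
 
 def store(patterns):
     """Build Hopfield weights in one pass (Hebbian outer products)."""
     n = patterns.shape[1]
     w = np.zeros((n, n))
     for p in patterns:          # each p is a vector of +/-1
         w += np.outer(p, p)
     np.fill_diagonal(w, 0)      # no self-connections
     return w / n
 
 def recall(w, state, steps=10):
     """Iterate the feedback dynamics toward an attractor."""
     for _ in range(steps):
         state = np.sign(w @ state)
         state[state == 0] = 1
     return state
 
 # toy usage: store one pattern, recover it from a corrupted cue
 rng = np.random.default_rng(1)
 pattern = np.sign(rng.normal(size=20))
 w = store(pattern[None, :])
 cue = pattern.copy()
 cue[:4] *= -1                   # flip a few bits
 print(np.array_equal(recall(w, cue), pattern))   # True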
 
It's not clear to me what you mean by "storing training examples in
memory".  Again using the Hopfield net example, in that case the
whole purpose of the network is to store patterns in memory.  Perhaps
what you're suggesting is that feedforward networks take advantage of
this to repeatedly play back memorized patterns from attractor
networks so as to make learning more rapid.  Some researchers believe
the hippocampus is performing this function by storing patterns when
an animal is awake and playing them back when the animal is asleep.
 
Thanks for an interesting post."
********************************************************
********************************************************
On May 15 Brendan McCane wrote:
 
" Hi, Just a few comments here. Although I think the points you make
are valid and probably desirable, I don't think they can necessarily be
applied to the human brain. Following are specific comments about the
listed criteria.
 
(On Task A: Design Networks)
 
 The neural network architecture of the brain is largely pre-determined.
 Tuning certainly takes place, but I do not believe that the entire brain
 architecture is rebuilt for every newborn. This would require
tremendous effort and probably end up with people who cannot
communicate with each other at all (due to different representations).
The human brain system has actually been created with external
assistance, namely from evolution.
 
 (On Task B: Robustness in Learning)
 
 I agree that no local-minima would be optimal, but humans almost
 certainly fall into local minima (due to lack of extensive input or
 whatever) and only jump out when new input comes to light.
 
(On Task E: Efficiency in Learning.)
 
 I don't see why insects or birds could not solve NP-hard problems
from an evolutionary point of view. That is, the solution has now been
 hard-wired into their brains after millions of years of learning.
 
 I am not convinced that these characteristics are more brain-like than
 classical connectionist ones. Certainly they are desirable, and are
 possibly the holy grail of learning, but I don't think you can make the
 claim that the brain functions in this way.
 
 
 I think I've addressed all the other points made below in the points
 above."
********************************************************
********************************************************
On May 15 Richard Kenyon wrote:
 
 "Here are my comments.
I think that what you are looking for is something along the lines of
 a-life type networks which would evolve their design (much like the
 brain, see Brendan's comment), as there is no obvious design for any
particular problem in the first place, and a net which can design a
 network must already know something about the problem, which is
why you raise the issue. I think though that design is evolving, albeit at
the hands of connectionist scientists, i.e. the title of this list is one
 such step in the evolution.
 
(On Task B: Robustness in Learning)
 
 For me one of the key concepts in neural nets is graceful degradation,
the idea that when problems arise the networks don't just fall over. I
 reckon that networks are still fairly brittle and that a lot needs to be
 done in this area. However, I agree again with Brendan that our brains
 suffer local minima more than we would like to admit.
 
 (On Task C: Quickness in Learning)
 
Memory is indeed very important, and a lot has already been published
 on the storage capacity of recurrent neural networks; it has
 not been forgotten. Very idealistic, I'm afraid. Humans don't learn as
quickly as we might like to think. Our 'education' is a long drawn-out
process and only every now and again do we experience enlightenment
in the grasping of a key concept. This does not happen quickly or that
often (relatively). The main factor affecting neural nets (imo) will be
parallel computers, at which point the net as we know it will not be
stand-alone but connected to many more; this principle, I think, is
the closest theorisation we have to the brain's parallelism. This is also
why hybrid systems are very interesting, as a parallel system will be able
to process output from many designs.
 
 (On Task D: Efficiency in Learning)
 
 Obviously efficiency in learning is important, but for us humans this is
 often mediated by efficient teaching, as in the case of some algorithms.
 Self-organising nets offer some form of autonomy in learning, but
often end up doing it the same way over and over again, as do we.
Kohonen has interpreted this as a physiological principle, in that it
takes a lot of effort to sever old neural connections and establish a new
path for incorporating new ideas. Local minima have muscle.
 
(On Task E: Generalization in Learning)
 
 The brain probably accepts some form of redundancy (waste).
 I agree that the brain is one hell of an optimisation machine.
 Intelligence, whatever task it may be applied to, is (again imho) one
long optimisation process. Generalisation arises (even emerges, or is a
side effect) as a result of ongoing optimisation, conglomeration,
reprocessing, etc. This is again very important, I agree, but I think (I
do anyway) we in the NN community are aware of this, as with much of
the above. I thought that apart from point A we were doing all of this
already, although to have it explicitly published is very valuable.
 
 I may be wrong
 
 > A good test for a so-called "brain-like" algorithm is to imagine it
 > actually being part of a human brain.
 
 I don't think that many researchers would claim too much about
neural nets being very brain-like at all. The simulated neurons, whether
sigmoid or tansigmoid etc., do not behave very like real neurons at all,
which is why there is a lot of research into biologically plausible
neurons.
 
> Then examine the learning
 > phenomenon of the algorithm and compare it with that of the
 > human's. For example, pose the following question: If an algorithm
 > like back propagation is "planted" in the brain, how will it behave?
 > Will it be similar to human behavior in every way? Look at the
 > following simple "model/algorithm" phenomenon when the back-
 > propagation algorithm is "fitted" to a human brain. You give it a
 > few learning examples for a simple problem and after a while this
 > "back prop fitted" brain says: "I am stuck in a local minimum. I
 > need to relearn this problem. Start over again." And you ask:
 > "Which examples should I go over again?" And this "back prop
> fitted" brain replies: "You need to go over all of them.
 
 I agree this is a limitation, but how is any net supposed to know what is
 relevant to remember or even pay greater attention to? This is in part
 the frame problem, which roboticists are having a great deal of fun
 discussing.
 
> I don't
 > remember anything you told me." So you go over the teaching
 > examples again. And let's say it gets stuck in a local minimum again
 > and, as usual, does not remember any of the past examples. So you
 > provide the teaching examples again and this process is repeated a
 > few times until it learns properly. The obvious questions are as
 > follows: Is "not remembering" any of the learning examples a brain-
 > like phenomenon?
 
 Yes and no; children often need to be told over and over again, and
this field is still in its infancy.
 
 > Are the interactions with this so-called "brain-
 > like" algorithm similar to what one would actually encounter with a
 > human in a similar situation? If the interactions are not similar, then
 > the algorithm is not brain-like. A so-called brain-like algorithm's
 > interactions with the external world/teacher cannot be different
 > from that of the human.
 >
 > In the context of this example, it should be noted that
 > storing/remembering relevant facts and examples is very much a
 > natural part of the human learning process. Without the ability to
 > store and recall facts/information and discuss, compare and argue
 > about them, our ability to learn would be in serious jeopardy.
 > Information storage facilitates mental comparison of facts and
 > information and is an integral part of rapid and efficient learning. It
 > is not biologically justified when "brain-like" algorithms disallow
 > usage of memory to store relevant information.
 
 I did not know they were not allowed, but perhaps they have been
left on the sidelines; again, I refer you to recurrent nets.
 
 > Another typical phenomenon of classical connectionist learning is
 > the "external tweaking" of algorithms. How many times do we
 > "externally tweak" the brain (e.g. adjust the net, try a different
 > parameter setting) for it to learn? Interactions with a brain-like
 > algorithm has to be brain-like indeed in all respect.
 
 An analogy here is perhaps taking a different perspective on a
problem; this is a very human parameter that we must tweak to make
progress.
 
 > It is perhaps time to reexamine the foundations of the neural
 > network/connectionist field. This mailing list/newsletter provides an
 > excellent opportunity for participation by all concerned throughout
 > the world. I am looking forward to a lively debate on these matters.
 > That is how a scientific field makes real progress.
 
 I agree with the last sentiment."
********************************************************
********************************************************
On May 16 Chris Cleirigh wrote:
 
 "hi
 good luck with your enterprise, i think if you aim to be consistent with
 biology you have more chance of long term success.
 
 i'm no engineer -- i'm a linguist -- but i've read of Edelman's theory of
 neuronal group selection which seeks to explain categorisation
through darwinian processes of variation and selection of populations
of neuronal groups in the brain. are you motivated by such models.
 
 One thing - you say:
 
 For neuroscientists and neuroengineers, it
 should open the door to development of brain-like systems they
 have always wanted - those that can learn on their own without any
 external intervention or assistance, much like the brain.
 However, efficient learning does involve external intervention,
especially by other brains. Consider language learning and the
corrective role played by adults in teaching children."
********************************************************
********************************************************
On May 17 Kevin Gurney wrote:
 
" I read your (provocative) posting to the cogpsy mailing list and
would like to make some comments
(Your original remarks are enclosed in square brackets)
 
 [A. Perform Network Design Task: A neural network/connectionist
 learning method must be able to design an appropriate network for
 a given problem, ... From a neuroengineering and neuroscience point of
view, this is an essential property for any "stand-alone" learning system
 ...]
 
 It might be from a neuroengineering point of view but not from a
neuroscientific one. Real brains undergo a developmental process, much
of which is encoded in the organism's DNA. Thus, the basic
mechanisms of structural and trophic development are not thought to
be activity-driven per se. Mechanisms like Long Term Potentiation
(LTP) may be the biological correlate of connectionist learning (Hebb
rule) but are not responsible for the overall neural architecture at the
modular level, which includes the complex layering of the cortex.
 
 I would take issue quite generally with your frequent invocation of
 the neuroscientists in your programme. They *are* definitely interested
in discovering the nature of real brains - rather than super-efficient
networks that may be engineered - and I will bring this out in subsequent
points below.
 
 [B.  Robustness in Learning: The method must be robust so as
 not to have the local minima problem, the problems of oscillation
 and catastrophic forgetting, the problem of recall or lost memories
 and similar learning difficulties.]
 
 Again, it may be the goal of neuro*engineers* to study ideal devices -
it is not the domain of neuroscientists.
 
 [C.  Quickness in Learning: The method must be quick in its
 learning and learn rapidly from only a few examples, much as
 humans do.]
 
 Humans don't, in fact, learn from just a few examples in most
cognitive and perceptual tasks - this is a myth. The fine tuning of
visual and motor cortex that results from the critical period in
infanthood is the product of continuous bombardment of the animal with
stimuli and tactile feedback. The same goes for language. The same
applies to the learning of any new skill, in fact (reading, playing a
musical instrument, etc.). These may be executed in an algorithmic,
serial-processing fashion until they become automatised in the
 parallel processing of the brain (cf. Andy Clark's von Neumann
emulation by the brain).
 
 Many connectionists have imbued humans with god-like powers
which aren't there. It is true that we can learn one-off facts and add
them to our episodic memory, but this is not usually the kind of thing
which nets are asked to perform.
 
 [D.  Efficiency in Learning: The method must be
 computationally efficient in its learning when provided with a finite
 number of training examples (Minsky and Papert, 1988). It must be
 able to both design and train an appropriate net in polynomial time.]
 
 Judd has shown that NN learning is intrinsically NP-complete in many
instances - there is no `free lunch'. See also the results in
computational learning theory by Wolpert and Schaffer.
 
 [E.  Generalization in Learning: ... That is, it must try to design the
 smallest possible net, ... This property is based on the
 notion that the brain could not be wasteful of its limited resources,
 so it must be trying to design the smallest possible net for every
 task.]
 
 Not true. Visual cortex uses a massive expansion in its coding from
the LGN to V1 before it `recompresses' in higher visual centres. This
has been described theoretically in terms of PCA etc. (ECVP last year -
can't recall the ref. just now).
 
 [As far as I know, there is no biological evidence for any of the
 premises of classical connectionist learning.]
 The relation LTP = Hebb rule is a fairly non-contentious statement in
the neuroscientific community.
 
 I could go on (RP learning and operant conditioning etc)...
 
 [So, who should construct the net for a neural net
 algorithm? The answer again is very simple: Who else, but the
 algorithm itself!]
 
 The brain uses many `algorithms' to develop - it is these working in
concert (genetically determined and activity mediated) which ensure
the final state.
 
 [You give it a
 few learning examples for a simple problem and after a while this
 "back prop fitted" brain says: "I am stuck in a local minimum. I
 need to relearn this problem. Start over again."]
 
 My brain constantly gets stuck in local minima. If not, then I would
learn everything I tried to do to perfection - I would be an
accomplished craftsman/musician/linguist/sportsman etc. In fact I am
none of these... but rather have a small amount (a local minimum's worth)
of ability in each.
 
 [The obvious questions are as
 follows: Is "not remembering" any of the learning examples a brain-
 like phenomenon?]
 
 There may be some mechanism for storing the `rules' and `examples'
in STM or even LTM, but even this is not certain (e.g. `now describe
to me the perfect tennis backhand...' `No - you forgot to mention the
follow-through - how many more times...').
 
 Finally, an engineering point. The claim that previous connectionist
algorithms are not able to construct networks is a little brash. There
have been several attempts to construct nets as part of the learning
process (e.g. cascade correlation).
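 
 (For readers unfamiliar with the constructive idea, here is a much-simplified
 sketch in that spirit -- not Fahlman and Lebiere's actual algorithm, which
 trains its candidate units to maximise the correlation rather than drawing
 them at random. The net starts with no hidden units; output weights are
 refit after each addition, and the candidate whose output best correlates
 with the remaining error is frozen into the network.)
 
 import numpy as np
 
 def fit_constructive(X, y, max_hidden=5, n_candidates=20, seed=0):
     """Grow a net one hidden unit at a time (cascade-correlation flavour)."""
     rng = np.random.default_rng(seed)
     features = np.hstack([X, np.ones((len(X), 1))])   # inputs + bias
     hidden = []                                       # frozen unit weights
 
     for _ in range(max_hidden + 1):
         # refit linear output weights on all current features (least squares)
         w_out, *_ = np.linalg.lstsq(features, y, rcond=None)
         residual = y - features @ w_out
         if np.mean(residual ** 2) < 1e-4 or len(hidden) == max_hidden:
             break
         # score a pool of random tanh candidates by correlation with the residual
         cands = rng.normal(size=(n_candidates, features.shape[1]))
         outs = np.tanh(features @ cands.T)            # (n_samples, n_candidates)
         scores = np.abs((outs - outs.mean(0)).T @ (residual - residual.mean()))
         best = cands[np.argmax(scores)]
         hidden.append(best)                           # freeze the winning unit
         features = np.hstack([features, np.tanh(features @ best)[:, None]])
 
     return hidden, w_out
 
 # toy usage on XOR-like data
 X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
 y = np.array([0., 1., 1., 0.])
 hidden, w_out = fit_constructive(X, y)
 print(len(hidden), "hidden units were added")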
 
  In summary:
 
 I am pleased to see that people are trying to overcome some of the
problems encountered in building neural nets. However, I would urge
people not to misappropriate the activities of people in other fields
(neuroscience) and to learn a little more about the real capabilities of
humans and their brains as described by neuroscientists and
psychologists. I would also ask that more account be taken of the
theoretical literature on learning.
 
 I hope this contribution is useful"
********************************************************
********************************************************
On May 18 Craig Hicks wrote:
 
" Hi,
 >A. Perform Network Design Task: A neural network/connectionist
 >learning method must be able to design an appropriate network for
 >a given problem, since, in general, it is a task performed by the
 >brain. A pre-designed net should not be provided to the method as
 >part of its external input, since it never is an external input to the
 >brain. From a neuroengineering and neuroscience point of view, this
 >is an essential property for any "stand-alone" learning system - a
>system that is expected to learn "on its own" without any external
 >design assistance.
 
 Doesn't this ignore the role of evolution as a "learning" force?
 It is indisputable that the brain has a highly specialized structure.
 Obviously, this did not come from nowhere, but is the result of the
 forces of natural selection."
********************************************************
********************************************************
On May 23 Dragan Gamberger wrote:
 
"I read you submission with great interest although (or may because
of) I m not working in the field of neural networks. My interests are in
the field of inductive learning. The presented ideas seem very attractive
 to me and in my opinion your criticism of the present systems is fully
 justified. The only suggestion for improvement is on part C.:
 
 > C.  Quickness in Learning: The method must be quick in its
 > learning and learn rapidly from only a few examples, much as
 > humans do. For example, one which learns from only 10 examples
 > learns faster than one which requires a 100 or a 1000 examples.
 
 Although the statement is not incorrect by itself, in my opinion it
 reflects the common unawareness of the importance of redundancy
for machine, as well as for human, learning. In practice neither machine
 nor human can learn something (except extremely simple concepts)
 from 10 examples, especially if there is noise (errors in the training
examples). Even for learning simple concepts it is advisable to use
as many training examples as possible (and not only the necessary subset),
because it can improve the quality and (at least) the reliability of the
induced concepts. Especially for handling imperfections in the training
data (noise), the use of a redundant training set is obligatory.
 
 In practice, humans can and do induce concepts from a small training
set, but they are 'aware' of their unreliability and use every occasion
 (additional examples) to test the induced concepts and to refine them if
necessary. That is potentially the ideal model of incremental learning."
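 
 (A tiny simulation of this redundancy point -- purely illustrative, using an
 invented one-dimensional threshold concept: the learner picks the cut point
 that misclassifies the fewest training examples, and the error of the
 induced threshold shrinks as more noisy examples are provided.)
 
 import numpy as np
 
 def learn_threshold(n_examples, true_threshold=0.65, noise=0.1, seed=None):
     """Induce a 1-D threshold concept from noisy labelled examples."""
     rng = np.random.default_rng(seed)
     x = np.sort(rng.random(n_examples))
     # true label is (x > threshold), flipped with probability `noise`
     labels = (x > true_threshold) ^ (rng.random(n_examples) < noise)
     cuts = np.concatenate([[0.0], (x[:-1] + x[1:]) / 2, [1.0]])
     errors = [np.sum((x > c) != labels) for c in cuts]
     return cuts[int(np.argmin(errors))]
 
 # reliability of the induced concept vs. number of (noisy) examples
 for n in (10, 100, 1000):
     estimates = np.array([learn_threshold(n, seed=s) for s in range(200)])
     print(n, "examples -> mean error of induced threshold:",
           round(float(np.mean(np.abs(estimates - 0.65))), 3))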
********************************************************
********************************************************
On May 25 Guido Bugmann responded to Raj Rao:
 
"A similar question (are there references for 1 millions neurons lost
 per day ?) came up in a discussion on the topic of robustness
 on connectionists a few years ago (1992). Some of the replies were:
  -------------------------------------------------------
 From Bill Skaggs, bill at nsma.arizona.edu :
 
 There have been a number of studies of neuron loss in aging.
 It proceeds at different rates in different parts of the brain,
 with some parts showing hardly any loss at all. Even in
 different areas of the cortex the rates of loss vary widely,
 but it looks like, overall, about 20% of the neurons are lost
 by age 60.
 
 Using the standard estimate of ten billion neurons in the
 neocortex, this works out to about one hundred thousand
 neurons lost per day of adult life.
 
 Reference:
 Flood DG & Coleman PD (1988) "Neuron numbers and sizes in aging brain:
 Comparisons of human, monkey and rodent data",
 Neurobiology of Aging, 9, pp. 453-464.
 --------------------------------------------------------
 From Arshavir Blackwell, arshavir at crl.ucsd.edu :
 
 I have come across a brief reference to adult neural
 death that may be of use, or at  least a starting point.
 The book is:
 
 Dowling, J.E. (1992) Neurons and Networks. Cambridge: Harvard
Univ. Press.
 
 In a footnote (!) on page 32, he writes:
 
 There is typically a loss of 5-10 percent of brain tissue with age.
 
 Assuming a brain loss of 7 percent over a life span of 100 years,
 and 10^11 neurons (100 billion) to begin with, approximately
 200,000 neurons are lost per day.
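 
 (As a quick check, both back-of-the-envelope figures above are consistent,
 assuming roughly 40 adult years for the first estimate and a 100-year life
 span for the second:)
 
 # Skaggs: ~20% of ~10 billion neocortical neurons lost by age 60 (~40 adult years)
 per_day_skaggs = 0.20 * 10e9 / (40 * 365)
 # Dowling: ~7% of ~10^11 neurons lost over a 100-year life span
 per_day_dowling = 0.07 * 100e9 / (100 * 365)
 print(round(per_day_skaggs))    # ~137,000 -- order of 100,000 per day
 print(round(per_day_dowling))   # ~192,000 -- about 200,000 per day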
 ----------------------------------------------------------------
 From Jan Vorbrueggen, jan at neuroinformatik.ruhr-uni-bochum.de
 
 As I remember it, the studies showing the marked reduction
 in nerve cell count with age were done around the turn of the
 century. The method, then as now, is to obtain brains of deceased
 persons, fix them, prepare cuts, count cells microscopically
 in those cuts, and then estimate the total number by multiplying
 the sampled cells/(volume of cut) with the total volume.
 This method has some obvious systematic pitfalls, however.
 The study was done again some (5-10?) years ago by a German
 anatomist (from Kiel I think), who tried to get these things
 under better control. It is well known, for instance, that
 tissue shrinks when it is fixed; the cortex's pyramidal cells
 are turned into that form by fixation. The new study showed
 that the total water content of the brain does vary dramatically
with age; when this is taken into account, it turns out that
 the number of cells is identical within error bounds (a few
 percent?) between quite young children and persons up to
 60-70 years of age.
 
 All this is from memory, and I don't have access to the
 original source, unfortunately; but I'm pretty certain that
 the gist is correct. So the conclusion seems to be that the
 cell loss with age in the CNS is much lower than generally
 thought.
 ----------------------------------------------------------------
 From Paul King, Paul_King at next.com
 
 Moshe Abeles in Corticonics (Cambridge Univ. Press, 1991)
 writes on page 208 that:
 
 "Comparisons of neural densities in the brain of people
 who died at different ages (from causes not associated
 with brain damage) indicate that about a third of the
 cortical cells die between the ages of twenty and eighty
 years (Tomlinson and Gibson, 1980). Adults can no longer
 generate new neurons, and therefore those neurons that
 die are never replaced.
  The neuronal fallout proceeds at a roughly steady
 rate throughout adulthood (although it is accelerated when
 the circulation of blood in the brain is impaired). The rate
 of neuronal fallout is not homogeneous throughout all
 the cortical regions, but most of the cortical regions
 are affected by it.  Let us assume that every year about 0.5% of the
 cortical cells die at random...."  and goes on to discuss the
implications for network robustness.
 
 Reference:
 
 Henderson G, Tomlinson BE and Gibson PH (1980) "Cell counts
 in human cerebral cortex in normal adults throughout life
 using an image analysis computer", J. Neurol. Sci., 46, pp. 113-136.
 -------------------------------------------------------------
 From Robert A. Santiago, rsantiag at note.nsf.gov
 
 "In search of the Engram"
 
 The problem of robustness from a neurobiological
 perspective seems to originate from work done by
 Karl Lashley. He sought to find how memory was
 partitioned in the brain. He thought that memories
 were kept on certain neuronal circuit paths (engrams)
 and experimented under this hypothesis by cutting
 out parts of the brain and seeing if this affected
 memory...  Other work was done by a gentleman named
 Richard F. Thompson. Both speak of the loss of
 neurons in a system and how integrity was kept.
 In particular, Karl Lashley spoke of memories
 as holograms...
 -------------------------------------------------
  Hope it helps..."

