Connectionists: Deep Belief Nets (2006) / Neural History Compressor (1991) or Hierarchical Temporal Memory

Thomas Trappenberg tt at cs.dal.ca
Mon Feb 10 23:40:12 EST 2014


I enjoy these further discussions. Thanks so much for all the thoughts.

Personally I am always really fascinated when I can learn about mechanisms
that are not so obvious, like phase transitions, point attractor networks,
or universal learning machines. The recent popularity of deep learning
systems is really fun as it creates new interest in students, and learning
machines, specifically the question of representational learning, are
important and useful in their own right. And I even think that on an
abstract level it has something to do with the brain.

I also like to understand the brain, where some of these mechanisms are at
work but which also has a lot of structure worth understanding.
Evolution, development, dendritic computations, glia networks,
neuromodulation, epigenetics, lots of fascinating anatomy, and, if I might
add, probabilistic synapses are all important to understand and must play
important roles. Still lots to do, and another reason not to bet on just one.

Personally I am rather critical of universal learning machines. Indeed,
we (or at least I) are a good example of non-universal learning
machines. I am even starting to appreciate the comments on recognition
rather than learning machines. Even non-human big data seems mostly to
solve very stereotyped problems (though recognizing traffic signs with
better-than-human performance is also useful). This is why I now like
learning with small data, which could be another useful machine learning
domain. No free lunch, but evolution can evolve biased learners that can
get some free snacks here and there.
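That last point can be made concrete with a toy sketch (entirely a hypothetical illustration, not anything from the discussion): when the world happens to match a learner's inductive bias, a biased learner generalizes from a handful of points far better than an assumption-free memorizer, even though no learner can win across all possible problems.

```python
# Hypothetical toy example: a biased (linear) learner vs. an unbiased
# 1-nearest-neighbour memorizer, trained on only three noisy points drawn
# from a world that happens to be linear.
import random

random.seed(0)

def true_f(x):
    return 2.0 * x + 1.0  # the world the biased learner is "lucky" to match

train = [(x, true_f(x) + random.gauss(0, 0.1)) for x in [0.0, 1.0, 2.0]]
test = [(x, true_f(x)) for x in [0.5, 3.0, 5.0]]

# Biased learner: assumes linearity, fits slope/intercept by least squares.
n = len(train)
sx = sum(x for x, _ in train)
sy = sum(y for _, y in train)
sxx = sum(x * x for x, _ in train)
sxy = sum(x * y for x, y in train)
slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
intercept = (sy - slope * sx) / n

def linear(x):
    return slope * x + intercept

# "Unbiased" learner: memorizes the data, predicts the nearest seen value.
def nearest(x):
    return min(train, key=lambda p: abs(p[0] - x))[1]

def mse(f):
    return sum((f(x) - y) ** 2 for x, y in test) / len(test)

print(mse(linear), mse(nearest))  # the biased learner extrapolates far better
```

Outside the training range (e.g. x = 5) the memorizer can only repeat its last seen value, while the linear learner's bias pays off, which is the "free snack": not a free lunch over all problems, just over the ones evolution (or the data-generating world) actually serves.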

Cheers, Thomas