Connectionists: Who introduced the term "Deep Learning" to NNs?

Stephen José Hanson jose at psychology.rutgers.edu
Fri Mar 13 14:36:10 EDT 2015


I think there is some confusion here about "Deep Learning". I think Deep
Learning is not so much about the words "deep" and "learning", but about
the architecture of the learning system--or neural network.

Those of us who were doing auto-encoders back in the 80s had tried many
times to increase the number of hidden layers, especially when doing some
sort of language or grammar task. I was particularly inspired to try this
after seeing Yann's early work on conv-nets, in which he employed many
layers. Also, Geoff had a "kinship network" that consisted of 6 layers or
so (Cog Sci, '86)... so we had all tried to train auto-encoders with
multiple layers, and miserably failed using back-propagation with
sigmoidal activation functions.

So the actual concept of Deep Learning is more about this
representational compression and abstraction (there is not much theory
about this part, I think). So again, it is really not about "deep" or
"learning". I think searching for these words in documents from the 60s
will miss the critical aspect: the multiple layers of representational
structure that are extracted and used to generalize. This is really an
extension of work that was happening in the mid to late 1980s.

Cheers

Steve
-- 
Stephen José Hanson
Director RUBIC (Rutgers Brain Imaging Center)
Professor of Psychology
Member of Cognitive Science Center (NB)
Member EE Graduate Program (NB)
Member CS Graduate Program (NB)
Rutgers University 


email: jose at rubic.rutgers.edu
web: psychology.rutgers.edu/~jose
lab: www.rumba.rutgers.edu
fax: 866-434-7959
voice: 973-353-3313 (RUBIC)


