Connectionists: Deep Learning Overview Draft

Juergen Schmidhuber juergen at idsia.ch
Fri May 2 10:13:50 EDT 2014


Dear Connectionists,

thanks a lot for the numerous helpful comments! 
This was great fun; I learned a lot.

A revised version (now with 750+ references) is here: http://arxiv.org/abs/1404.7828
PDF with better formatting (for A4 paper): http://www.idsia.ch/~juergen/DeepLearning30April2014.pdf
LaTeX source: http://www.idsia.ch/~juergen/DeepLearning30April2014.tex
The complete BibTeX file is also public: http://www.idsia.ch/~juergen/bib.bib

Don't hesitate to send additional suggestions to juergen at idsia.ch (NOT to the entire list).

Kind regards,

Juergen Schmidhuber
http://www.idsia.ch/~juergen/deeplearning.html


Revised Table of Contents

1 Introduction to Deep Learning (DL) in Neural Networks (NNs) 
2 Event-Oriented Notation for Activation Spreading in Feedforward NNs (FNNs) and Recurrent NNs (RNNs)
3 Depth of Credit Assignment Paths (CAPs) and of Problems 

4 Recurring Themes of Deep Learning 
4.1 Dynamic Programming (DP) for DL 
4.2 Unsupervised Learning (UL) Facilitating Supervised Learning (SL) and RL
4.3 Occam’s Razor: Compression and Minimum Description Length (MDL)
4.4 Learning Hierarchical Representations Through Deep SL, UL, RL 
4.5 Fast Graphics Processing Units (GPUs) for DL in NNs 

5 Supervised NNs, Some Helped by Unsupervised NNs (With DL Timeline)
5.1 1940s and Earlier 
5.2 Around 1960: More Neurobiological Inspiration for DL
5.3 1965: Deep Networks Based on the Group Method of Data Handling (GMDH)
5.4 1979: Convolution + Weight Replication + Winner-Take-All (WTA) 
5.5 1960-1981 and Beyond: Development of Backpropagation (BP) for NNs 
5.5.1 BP for Weight-Sharing FNNs and RNNs
5.6 1989: BP for Convolutional NNs (CNNs)
5.7 Late 1980s-2000: Improvements of NNs 
5.7.1 Ideas for Dealing with Long Time Lags and Deep CAPs 
5.7.2 Better BP Through Advanced Gradient Descent 
5.7.3 Discovering Low-Complexity, Problem-Solving NNs 
5.7.4 Potential Benefits of UL for SL 
5.8 1987: UL Through Autoencoder (AE) Hierarchies 
5.9 1991: Fundamental Deep Learning Problem of Gradient Descent 
5.10 1991: UL-Based History Compression Through a Deep Hierarchy of RNNs 
5.11 1994: Contest-Winning Not So Deep NNs 
5.12 1995: Supervised Very Deep Recurrent Learner (LSTM RNN) 
5.13 1999: Max-Pooling (MP)
5.14 2003: More Contest-Winning/Record-Setting, Often Not So Deep NNs 
5.15 2006: Deep Belief Networks (DBNs) / Improved CNNs / GPU-CNNs 
5.16 2009: First Official Competitions Won by RNNs, and with MPCNNs 
5.17 2010: Plain Backprop (+ Distortions) on GPU Yields Excellent Results 
5.18 2011: MPCNNs on GPU Achieve Superhuman Vision Performance  
5.19 2011: Hessian-Free Optimization for RNNs 
5.20 2012: First Contests Won on ImageNet & Object Detection & Segmentation 
5.21 2013: More Contests and Benchmark Records  
5.21.1 Currently Successful Supervised Techniques: LSTM RNNs / GPU-MPCNNs 
5.22 Recent Tricks for Improving SL Deep NNs (Compare Sec. 5.7.2, 5.7.3) 
5.23 Consequences for Neuroscience 
5.24 DL with Spiking Neurons?

6 DL in FNNs and RNNs for Reinforcement Learning (RL) 
6.1 RL Through NN World Models Yields RNNs With Deep CAPs 
6.2 Deep FNNs for Traditional RL and Markov Decision Processes (MDPs) 
6.3 Deep RL RNNs for Partially Observable MDPs (POMDPs) 
6.4 RL Facilitated by Deep UL in FNNs and RNNs 
6.5 Deep Hierarchical RL (HRL) and Subgoal Learning with FNNs and RNNs 
6.6 Deep RL by Direct NN Search / Policy Gradients / Evolution
6.7 Deep RL by Indirect Policy Search / Compressed NN Search 
6.8 Universal RL

7 Conclusion



On Apr 17, 2014, at 5:40 PM, Schmidhuber Juergen <juergen at idsia.ch> wrote:

> Dear connectionists,
> 
> here is the preliminary draft of an invited Deep Learning overview:
> 
> http://www.idsia.ch/~juergen/DeepLearning17April2014.pdf
> 
> Abstract. In recent years, deep neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarises relevant work, much of it from the previous millennium. Shallow and deep learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
> 
> The draft mostly consists of references (about 600 entries so far). Many important citations are still missing though. As a machine learning researcher, I am obsessed with credit assignment. In case you know of references to add or correct, please send brief explanations and bibtex entries to juergen at idsia.ch (NOT to the entire list), preferably together with URL links to PDFs for verification. Please also do not hesitate to send me additional corrections / improvements / suggestions / Deep Learning success stories with feedforward and recurrent neural networks. I'll post a revised version later.
> 
> Thanks a lot!
> 
> Juergen Schmidhuber
> http://www.idsia.ch/~juergen/
> http://www.idsia.ch/~juergen/whatsnew.html 




More information about the Connectionists mailing list