Connectionists: CFP: Special Issue on Deep Learning of Representations in Neural Networks
DeLiang Wang
dwang at cse.ohio-state.edu
Fri Dec 6 10:36:23 EST 2013
*Guest Editors*
Yoshua Bengio, Universite de Montreal
Honglak Lee, University of Michigan
*Timeframe*
Submission deadline: January 15, 2014
Notification of Acceptance: May 15, 2014
Final Manuscripts Due: July 1, 2014
Date of Publication: October 1, 2014
*Background and Motivation*
The performance of machine learning methods is heavily dependent on the
choice of data representation (or features) on which they are applied.
For that reason, much of the actual effort in deploying machine learning
algorithms goes into the design of preprocessing pipelines and data
transformations that result in a representation of the data that can
support effective machine learning. Such feature engineering is
important but labor-intensive and highlights the weakness of many
traditional learning algorithms: their inability to extract and organize
the discriminative information from the data. Feature engineering is a
way to take advantage of human ingenuity and prior knowledge to
compensate for that weakness. In order to expand the scope and ease of
applicability of machine learning, it would be highly desirable to make
learning algorithms less dependent on feature engineering, so that novel
applications could be constructed faster, and more importantly, to make
progress towards Artificial Intelligence (AI).
Deep Learning is an emerging approach within the machine learning
research community. Deep Learning research aims at learning algorithms
that discover multiple levels of distributed representations, with
higher levels representing more abstract concepts. These methods have had
important empirical successes in a number of traditional AI applications
such as computer vision and natural language processing. Deep Learning
is attracting much attention from both the academic and industrial
communities. Companies like Google, Microsoft, Apple, IBM and Baidu are
investing in Deep Learning, with the first products to reach consumers
lying at the core of speech recognition engines. Deep Learning
is also used for object recognition (Google Goggles), image and music
information retrieval (Google Image Search, Google Music), as well as
computational advertising. The New York Times covered the subject twice
in 2012, with front-page articles. Another series of articles
(including a New York Times article) covered a more recent event showing
off the application of Deep Learning in a major Kaggle competition for
drug discovery (for example, see "Deep Learning - The Biggest Data
Science Breakthrough of the Decade"). Earlier, a variant of the
Boltzmann machine that is easier to train (the Restricted Boltzmann
Machine) was used as a crucial part of the winning entry of a
million-dollar machine learning competition (the Netflix competition).
Much more recently, Google bought out ("acqui-hired") a company
(DNNresearch) created by University of Toronto professor Geoffrey Hinton
(the founder and leading researcher of Deep Learning), with the press
running headlines such as "Google Hires Brains that Helped Supercharge
Machine Learning" (Robert McMillan for Wired, March 13, 2013).
A representation learning algorithm discovers explanatory factors or
features, while a deep learning algorithm is a representation learning
procedure that discovers multiple levels of representation, with
higher-level features representing more abstract aspects of the data.
This area of research was kick-started in 2006 by a few research
groups and is now one of the most active sub-areas of machine learning,
with an increasing number of workshops (now one every year at the NIPS
and ICML conferences) and even a new specialized conference just created
in 2013 (ICLR -- the International Conference on Learning
Representations). Although impressive theoretical results, effective
learning algorithms, and breakthrough experiments have already been
achieved, several challenges lie ahead, and constitute the subject of
this special issue.
This special issue invites paper submissions on the most recent
developments in learning deep architectures, theoretical foundations,
representation, optimization, semi-supervised and transfer learning, and
applications to real-world tasks. We also welcome survey and overview
papers in these general areas pertaining to learning deep architectures.
Topics of interest include, but are not limited to:
* Deep learning architectures and algorithms
* Unsupervised and semi-supervised learning with deep architectures
* Transfer learning algorithms with deep architectures
* Representation-learning and disentangling
* Inference and sampling issues
* Scaling up to large models and parallelization
* Optimization relevant to learning deep architectures
* Theoretical foundations of deep learning
* Applications, in particular to computer vision, speech recognition,
NLP and big data
*Submission Instructions*
Prospective authors should follow standard author instructions for
Neural Networks and submit their manuscripts online at
http://ees.elsevier.com/neunet/, where this Call for Papers is also
listed. During the submission process, there will be an opportunity to
designate the submission for this special issue.