Connectionists: post-doc announcement, learning deep architectures
Yoshua Bengio
bengioy at iro.umontreal.ca
Thu Mar 27 13:38:44 EDT 2008
Dear Colleagues,
Please find below an announcement for a post-doc position in my lab, on the topic of learning
algorithms for deep architectures.
POST-DOCTORAL FELLOWSHIP
Starting date: anytime between now and January 2009
Duration: one or two years
Location: University of Montreal, CS dept, machine learning lab (http://www.iro.umontreal.ca/~lisa)
Sponsored by an NSERC strategic grant and a Google grant.
Faculty involved in the research: Yoshua Bengio, Pascal Vincent, Douglas Eck at UdeM, collaboration
with Geoff Hinton at UofT and Yann Le Cun at NYU in CIAR's NCAP program
(http://www.ciar.ca/web/home.nsf/pages/ncap)
Salary: negotiable premium on top of NSERC baseline.
The candidate should have a strong machine learning background, with solid skills in
algorithms and mathematics (statistics, probability, optimization, linear algebra). Preference
will be given to candidates who have previously conducted research involving graphical models
(especially with hidden variables), unsupervised learning of factor models, and/or neural networks.
Research topic of the grant:
Theoretical results strongly suggest that in order to learn the kind of complicated functions that
can represent high-level abstractions (e.g. in vision, language, and other AI-level tasks), one may
need "deep architectures", which are composed of multiple levels of non-linear operations (such as
in neural nets with many hidden layers). Searching the parameter space of deep architectures is a
difficult optimization task, but learning algorithms (e.g. Deep Belief Networks) have recently been
proposed to tackle this problem with notable success, beating the state-of-the-art in certain areas.
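To make the phrase "multiple levels of non-linear operations" concrete, here is a minimal NumPy sketch of such an architecture: a forward pass through a few stacked sigmoid layers. The layer sizes and random weights are purely illustrative, not taken from the grant:

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w, b):
    # One level of non-linear operation: an affine map followed by a sigmoid.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A toy deep architecture: three stacked non-linear layers.
# All sizes below are arbitrary choices for illustration.
sizes = [8, 6, 4, 2]
params = [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
          for m, n in zip(sizes[:-1], sizes[1:])]

x = rng.normal(size=(5, sizes[0]))   # a batch of 5 toy inputs
h = x
for w, b in params:
    h = layer(h, w, b)               # each level yields a higher-level representation

print(h.shape)
```

Each pass through the loop composes one more non-linear level; the difficulty the announcement refers to is in fitting the parameters of many such levels jointly.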
The grant focuses on improving and extending the new crop of learning algorithms proposed to
address the difficult optimization task that learning deep architectures entails, and in
particular on the discovery of multiple levels of representation that capture and disentangle
the main factors of variation explaining the data distribution. The algorithms will be tested
on, and geared towards, large datasets involving images, text, and music, some artificially
generated and some real.
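As one concrete instance of this line of work, Deep Belief Networks are trained greedily, one Restricted Boltzmann Machine (RBM) layer at a time, using contrastive divergence. Below is a minimal sketch of a mean-field CD-1 update for a single RBM layer (probabilities are used in place of stochastic samples for simplicity); the layer sizes, toy data, and learning rate are arbitrary illustrations, not details from the announcement:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One RBM layer: visible units v, hidden units h, weights W, biases b (visible), c (hidden).
n_v, n_h = 6, 4
W = rng.normal(scale=0.1, size=(n_v, n_h))
b = np.zeros(n_v)
c = np.zeros(n_h)

# Toy data: two complementary binary patterns, repeated (illustrative only).
data = np.array([[1, 1, 1, 0, 0, 0],
                 [0, 0, 0, 1, 1, 1]] * 10, dtype=float)

def cd1_step(v0, lr=0.2):
    """One mean-field CD-1 update: probabilities stand in for samples."""
    global W, b, c
    ph0 = sigmoid(v0 @ W + c)          # up pass: infer the hidden representation
    pv1 = sigmoid(ph0 @ W.T + b)       # down pass: reconstruct the visibles
    ph1 = sigmoid(pv1 @ W + c)         # up pass again, on the reconstruction
    n = len(v0)
    W += lr * (v0.T @ ph0 - pv1.T @ ph1) / n   # positive minus negative phase
    b += lr * (v0 - pv1).mean(axis=0)
    c += lr * (ph0 - ph1).mean(axis=0)
    return ((v0 - pv1) ** 2).mean()    # reconstruction error, for monitoring

errors = [cd1_step(data) for _ in range(500)]
print(round(errors[0], 3), round(errors[-1], 3))
```

Once one layer is trained this way, its hidden representation becomes the input for training the next layer, which is the greedy stacking idea behind Deep Belief Networks and one route to the multi-level representations the grant targets.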
For more on this topic, see the web page of this recent workshop held at NIPS'2007:
http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepLearningWorkshopNIPS2007
-- Yoshua Bengio