Connectionists: NIPS satellite meeting on deep learning
Yoshua Bengio
bengioy at iro.umontreal.ca
Thu Oct 25 17:32:30 EDT 2007
Hello,
Following up on Geoffrey Hinton's Monday NIPS tutorial on Deep Belief
Nets, there will be a satellite meeting/workshop on Thursday
afternoon (i.e. not at the usual time for workshops) at the Vancouver
Hyatt. This Deep Learning workshop
http://www.iro.umontreal.ca/~lisa/deepNIPS2007
is part of the "Neuro-Thursday" this year
http://nips.cc/Conferences/2007/Neuro-Thursday
SEATING IS LIMITED at the Hyatt for this workshop, so if you are
interested (check the program at the URL above) you are encouraged to
register as soon as possible.
For interested participants of the deep learning workshop, buses to
Whistler are scheduled after the workshop (the regular buses to
Whistler leave at 2pm). The workshop also honors Geoffrey Hinton's
60th birthday.
More on the meeting:
Theoretical results strongly suggest that in order to learn the kind
of complicated functions that can represent high-level abstractions
(e.g. in vision, language, and other AI-level tasks), one may need
"deep architectures", which are composed of multiple levels of non-
linear operations (such as in neural nets with many hidden layers).
Searching the parameter space of deep architectures is a difficult
optimization task, but learning algorithms (e.g. Deep Belief
Networks) have recently been proposed to tackle this problem with
notable success, beating the state-of-the-art in certain areas.
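To make the idea concrete, here is a minimal sketch (in Python/NumPy, not taken from any workshop material) of the greedy layer-wise training behind Deep Belief Networks: each restricted Boltzmann machine is trained with one-step contrastive divergence (CD-1), then its hidden activations become the input to the next layer. Layer sizes, learning rate, and the toy data are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Binary restricted Boltzmann machine trained with one-step
    contrastive divergence (CD-1). A sketch, not a tuned implementation."""

    def __init__(self, n_visible, n_hidden, lr=0.1):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_h = np.zeros(n_hidden)
        self.b_v = np.zeros(n_visible)
        self.lr = lr

    def hidden_probs(self, v):
        return sigmoid(v @ self.W + self.b_h)

    def cd1(self, v0):
        # positive phase: hidden probabilities given the data
        h0 = self.hidden_probs(v0)
        h0_sample = (rng.random(h0.shape) < h0).astype(float)
        # negative phase: one reconstruction step
        v1 = sigmoid(h0_sample @ self.W.T + self.b_v)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        # approximate gradient of the log-likelihood
        self.W += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_h += self.lr * (h0 - h1).mean(axis=0)
        self.b_v += self.lr * (v0 - v1).mean(axis=0)

# Greedy layer-wise stacking: each RBM is trained on the hidden
# activations of the layer below it (toy random binary data).
data = (rng.random((100, 8)) < 0.5).astype(float)
layers = [RBM(8, 6), RBM(6, 4)]
x = data
for rbm in layers:
    for _ in range(50):
        rbm.cd1(x)
    x = rbm.hidden_probs(x)   # representation fed to the next layer
```

After the loop, `x` holds the top-level representation (here of shape 100 x 4); in a full DBN this pre-trained stack would typically be fine-tuned for a supervised task.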
This workshop is intended to bring together researchers interested in
the question of deep learning in order to review the current
algorithms' principles and successes, but also to identify the
challenges, and to formulate promising directions of investigation.
Besides the algorithms themselves, there are many fundamental
questions that need to be addressed: What would be a good
formalization of deep learning? What new ideas could be exploited to
make further inroads into this difficult optimization problem? What
makes a good high-level representation or abstraction? What type of
problem is deep learning appropriate for?
Schedule:
2:00pm - 2:25pm Yee-Whye Teh, Gatsby Unit : Deep Belief Networks
2:25pm - 2:45pm John Langford, Yahoo Research: Theoretical Results on
Deep Architectures
2:45pm - 3:05pm Yoshua Bengio, University of Montreal: Optimizing
Deep Architectures
3:05pm - 3:25pm Yann Le Cun, New York University: Learning deep
hierarchies of invariant features
3:25pm - 3:45pm Martin Szummer, Microsoft Research: Deep networks for
information retrieval
3:45pm - 4:00pm Coffee break
4:00pm - 4:20pm Max Welling, University of California: Hierarchical
Representations from networks of HDPs
4:20pm - 4:40pm Andrew Ng, Stanford University: Self-taught learning:
Transfer learning from unlabeled data
4:40pm - 5:10pm Geoff Hinton, University of Toronto: Restricted
Boltzmann machines with multiplicative interactions
5:10pm - 5:30pm Discussion
-- Yoshua Bengio