Connectionists: ICML Workshop on Representation Learning

Hugo Larochelle hugo.larochelle at usherbrooke.ca
Fri Apr 6 19:13:31 EDT 2012


CALL FOR PAPERS

Representation Learning Workshop - ICML 2012

Date: Sunday, July 1, 2012
Location: Edinburgh, Scotland
URL: https://sites.google.com/site/representationworkshopicml2012/

*Important Dates*

Paper Submission Deadline: Monday, May 7, 2012
Acceptance Notification: Monday, May 21, 2012

*Overview*

In this workshop we consider the question of how we can learn meaningful and useful representations of data. There has been a great deal of recent work on this topic, much of it emerging from researchers interested in training deep architectures. Deep learning methods such as deep belief networks, sparse coding-based methods, convolutional networks, and deep Boltzmann machines have shown promise as a means of learning invariant representations of data and have already been successfully applied to a variety of tasks in computer vision, audio processing, natural language processing, information retrieval, and robotics. Bayesian nonparametric methods and other hierarchical graphical-model-based approaches have also recently been shown to learn rich representations of data.

By bringing together researchers with diverse expertise and perspectives, all interested in the question of how to learn data representations, we will explore the challenges and promising directions for future research in this area.

Through an opening overview talk and a panel discussion (including our invited speakers), we will attempt to address some of the issues that have recently emerged as critical in shaping the future development of this line of research:

- How do we learn invariant representations? Feature pooling is a popular and highly successful means of achieving invariant features, but is there a tension between feature specificity and robustness to structured noise (movement along an irrelevant factor of variation)? Does it make sense to think in terms of a theory of invariant features? (A toy illustration of pooling-based invariance follows this list.)

- What role does learning really play? There is some evidence that learning is less important than previously believed, and that the feature-extraction architecture itself plays the most significant role in determining the quality of the representation. For example, there is evidence that the use of feedback during feature extraction could be very important. (A second sketch after this list shows a toy case where untrained features already suffice for a linear readout.)

- How can several layers of latent variables be effectively learned? There has been a great deal of empirical work showing that particular architectures and inference algorithms are important for learning representations that retain information about the input while extracting increasingly abstract concepts. We would like to discuss what the key modules of these hierarchical models are and which inference methods are best suited to discovering useful representations of data. We would also like to investigate which inference algorithms are most effective and scalable with respect to the number of data points and the feature dimensionality. (A third sketch after this list outlines one common layer-wise training recipe.)
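
As a toy illustration of the pooling question above, here is a minimal numpy sketch (our own example, not workshop material): max-pooling a filter's responses over local windows leaves most pooled features unchanged under a small shift of the input, which is exactly the invariance/specificity trade-off at issue.

import numpy as np

rng = np.random.default_rng(0)
signal = rng.standard_normal(64)   # toy 1-D input
filt = rng.standard_normal(5)      # one (random) feature detector

def pooled_features(x, w, pool=4):
    """Convolve with filter w, then max-pool non-overlapping windows."""
    responses = np.convolve(x, w, mode="valid")
    n = len(responses) // pool * pool
    return responses[:n].reshape(-1, pool).max(axis=1)

f_original = pooled_features(signal, filt)
f_shifted = pooled_features(np.roll(signal, 1), filt)  # shift input by one sample

# A large fraction of the pooled features survive the shift: invariance
# via pooling, bought at the price of discarding exact position information.
print("fraction unchanged:",
      np.mean(np.isclose(f_original, f_shifted)))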
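
The second sketch bears on the role-of-learning question: an entirely untrained feature extractor (a random projection followed by a rectifying nonlinearity) can already make a problem solvable by a linear readout. The data and all names here are invented for the example.

import numpy as np

rng = np.random.default_rng(0)

# Two classes on concentric circles: not separable by a line through the origin.
n = 500
angles = rng.uniform(0.0, 2.0 * np.pi, n)
radii = np.where(rng.random(n) < 0.5, 1.0, 3.0)
X = np.c_[radii * np.cos(angles), radii * np.sin(angles)]
y = (radii > 2.0).astype(float)

def random_relu_features(X, n_features=200):
    """Untrained extractor: random projection plus rectification."""
    W = rng.standard_normal((X.shape[1], n_features))
    b = rng.standard_normal(n_features)
    return np.maximum(0.0, X @ W + b)

def linear_accuracy(F, y):
    """Closed-form ridge-regression readout, thresholded at zero."""
    A = F.T @ F + 1e-3 * np.eye(F.shape[1])
    w = np.linalg.solve(A, F.T @ (2.0 * y - 1.0))
    return np.mean((F @ w > 0) == (y == 1))

print("raw inputs:     ", linear_accuracy(X, y))                        # near chance
print("random features:", linear_accuracy(random_relu_features(X), y))  # much higher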
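
The third sketch outlines one common recipe for learning several layers of latent variables: greedy layer-wise training, in which each layer is fit as a small tied-weight autoencoder on the codes produced by the layers below it. This is a minimal sketch under our own assumptions, not a method prescribed by the workshop.

import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_autoencoder_layer(X, n_hidden, lr=0.1, epochs=100):
    """Fit one tied-weight sigmoid autoencoder by plain gradient descent."""
    n_visible = X.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_h = np.zeros(n_hidden)
    b_v = np.zeros(n_visible)
    for _ in range(epochs):
        H = sigmoid(X @ W + b_h)        # encode
        R = sigmoid(H @ W.T + b_v)      # decode with the transposed weights
        dR = (R - X) * R * (1.0 - R)    # gradient at the decoder pre-activation
        dH = (dR @ W) * H * (1.0 - H)   # backpropagated to the encoder
        gW = X.T @ dH + dR.T @ H        # tied weights get both contributions
        W -= lr * gW / len(X)
        b_h -= lr * dH.mean(axis=0)
        b_v -= lr * dR.mean(axis=0)
    return W, b_h

# Stack two layers: each is trained on the codes of the previous one.
X = rng.random((256, 32))               # toy data in [0, 1]
W1, b1 = train_autoencoder_layer(X, n_hidden=16)
H1 = sigmoid(X @ W1 + b1)
W2, b2 = train_autoencoder_layer(H1, n_hidden=8)
H2 = sigmoid(H1 @ W2 + b2)              # an increasingly abstract code
print(H2.shape)                         # (256, 8)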

The workshop also invites paper submissions on the development of representation learning methods, deep learning algorithms, theoretical foundations, inference and optimization methods, semi-supervised and transfer learning, and applications of deep learning and unsupervised feature learning to real-world tasks. Accepted papers will be presented mainly as posters.

*Submission of Papers*

We solicit submissions of unpublished research papers. Papers must be between 2 and 8 pages long. Papers must follow the formatting instructions of the ICML 2012 call for papers, but they need not be anonymized. Submissions should include the title, authors' names, institutions, and email addresses. Style files are available on the ICML 2012 website.

We encourage submissions on the following and related topics:
- learning hierarchical models
- learning invariant representations
- invariance and selectivity trade-off
- role of learning compared to choice of feature extraction method
- role of feedback and sparsity during learning and inference
- scalability of hierarchical models at training and test time, in terms of the number of samples and the feature dimensionality
- applications of hierarchical models to large scale datasets

Papers should be submitted in PDF or PS format by email to: 
representation.workshop.icml12 at gmail.com 
no later than 23:59 PDT, Monday, May 7, 2012.

*Organizers*

Aaron Courville, Université de Montréal 
Hugo Larochelle, Université de Sherbrooke
Marc'Aurelio Ranzato, Google Inc
Yoshua Bengio, Université de Montréal