Connectionists: CFP: NIPS 2012 Workshop on Log-Linear Models

Tony Jebara jebara at cs.columbia.edu
Wed Sep 12 00:36:22 EDT 2012




WORKSHOP ON LOG-LINEAR MODELS

NIPS 2012 WORKSHOP

December 8, Lake Tahoe, Nevada

https://sites.google.com/site/nips12logmodels/home
 

CONFIRMED SPEAKERS

Francis Bach
Martin Wainwright
Li Deng
Fei Sha
Hermann Ney
Peder Olsen

 
SUBMISSION

Submission deadline: Monday, October 1, 2012. 
Notification of acceptance: Friday, October 12, 2012 

We invite the submission of abstracts for poster or oral presentation at the workshop. Submissions should be written as extended abstracts, no longer than 4 pages in the NIPS LaTeX style. NIPS style files and formatting instructions can be found at http://nips.cc/PaperInformation/StyleFiles. Submissions should include the authors' names and affiliations, since the review process will not be double-blind. The extended abstract may be accompanied by an unlimited appendix and other supplementary material, with the understanding that anything beyond 4 pages may be ignored by the program committee.

Abstracts should be submitted by email to logmodels at gmail.com 

There will be a special issue of the IEEE Transactions on Speech and Signal Processing devoted to large-scale optimization. Authors of accepted papers with speech content will be encouraged to extend their abstracts into full papers for consideration in this special issue. 


OVERVIEW

Exponential functions are core mathematical constructs that are key to many important applications, including speech recognition, pattern search and logistic regression problems in statistics, machine translation, and natural language processing. Exponential functions appear in exponential families, log-linear models, conditional random fields (CRFs), entropy functions, neural networks with sigmoid and softmax activations, and Kalman filtering or MMIE training of hidden Markov models. Many techniques have been developed in pattern recognition to construct formulations from exponential expressions and to optimize such functions, including growth transforms, EM, EBW, Rprop, bounds for log-linear models, large-margin formulations, and regularization. Optimization of log-linear models also provides important algorithmic tools for machine learning applications (including deep learning), leading to new research in topics such as stochastic gradient methods, sparse/regularized optimization methods, enhanced first-order methods, coordinate descent, and approximate second-order methods. Specific recent advances relevant to log-linear modeling include the following:

*Effective optimization approaches, including stochastic gradient and Hessian-free methods.
*Efficient algorithms for regularized optimization problems.
*Bounds for log-linear models and recent convergence results.
*Recognition of modeling equivalences across different areas, such as the equivalence between Gaussian models/HMMs and log-linear models/HCRFs, and the equivalence between transfer entropy and Granger causality for Gaussian variables.
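As a concrete illustration of the themes above (not part of the CFP itself), the following is a minimal sketch of a conditional log-linear model p(y|x) ∝ exp(w_y · x), i.e. multinomial logistic regression, trained by plain stochastic gradient descent on the negative log-likelihood. All names, constants, and the toy data are illustrative choices, not drawn from any workshop paper.

```python
import numpy as np

def softmax(z):
    z = z - z.max()              # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def sgd_train(X, y, n_classes, lr=0.1, epochs=50, seed=0):
    """Fit a log-linear classifier by stochastic gradient descent."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    W = np.zeros((n_classes, d))
    for _ in range(epochs):
        for i in rng.permutation(n):
            p = softmax(W @ X[i])    # predicted class distribution
            p[y[i]] -= 1.0           # gradient of -log p(y_i | x_i) wrt scores
            W -= lr * np.outer(p, X[i])
    return W

# Toy usage: two well-separated Gaussian clusters.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.5, (20, 2)), rng.normal(2, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
W = sgd_train(X, y, n_classes=2)
preds = np.array([np.argmax(softmax(W @ x)) for x in X])
```

The per-example gradient (p - e_y) x^T is exactly the exponential-family moment-matching term, which is what makes stochastic gradient methods such a natural fit for log-linear objectives.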

Though exponential functions and log-linear models are well established, research activity remains intense, owing to the central importance of the area in front-line applications and the rapidly expanding size of the data sets to be processed. Fundamental work is needed to transfer algorithmic ideas across different contexts and explore synergies between them, to assimilate the influx of ideas from optimization, to assemble better combinations of algorithmic elements for tackling key tasks such as deep learning, and to explore key issues such as parameter tuning.
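To make the sparse/regularized optimization theme concrete (again an editorial sketch, not material from the CFP), here is a proximal-gradient (ISTA-style) loop for an L1-regularized logistic log-linear objective: a standard way to obtain sparse weights. The function names and hyperparameters are illustrative assumptions.

```python
import numpy as np

def soft_threshold(w, t):
    # Prox operator of t * ||w||_1: shrink each coordinate toward zero.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def ista_logreg(X, y, lam=0.1, lr=0.1, iters=500):
    """L1-regularized logistic regression via proximal gradient steps."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
        grad = X.T @ (p - y) / n             # logistic-loss gradient
        w = soft_threshold(w - lr * grad, lr * lam)
    return w

# Toy usage: only the first of five features carries signal.
rng = np.random.default_rng(0)
Xd = rng.normal(size=(200, 5))
yd = (Xd[:, 0] > 0).astype(float)
w = ista_logreg(Xd, yd)
```

The soft-thresholding step is what drives irrelevant coordinates to exactly zero, which is the practical payoff of L1 regularization over plain gradient descent.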


TOPICS

The workshop will bring together researchers from the many fields that formulate, use, analyze, and optimize log-linear models, with a view to exposing and studying the issues discussed above. Topics of possible interest for talks at the workshop include, but are not limited to, the following:

1) Log-linear models.
2) Equivalences across different applications and models.
3) Comparisons of optimization / accuracy performance.
4) Convex formulations.
5) Bounds and their applications.
6) Stochastic gradient, first-order, and second-order methods.
7) Efficient non-Gaussian filtering approaches.
8) Graphical modeling and network inference.
9) Missing data and hidden variables in log-linear modeling.
10) Semi-supervised estimation in log-linear modeling.
11) Sparsity in log-linear models.
12) Block and novel regularization methods for log-linear models.
13) Parallel, distributed and large-scale methods for log-linear models.
14) Information geometry of Gaussian densities and exponential families.
15) Hybrid algorithms that combine different optimization strategies.
16) Connections between log-linear models and deep belief networks.
17) Connections with kernel methods.
18) Applications to speech / natural-language processing and other areas.
19) Empirical contributions that compare and contrast different approaches.
20) Theoretical contributions that relate to any of the above topics.


ORGANIZERS

Dimitri Kanevsky  (IBM)
Tony Jebara (Columbia University)
Li Deng (Microsoft)
Stephen Wright (University of Wisconsin)
Georg Heigold (Google)
Avishy Carmi (Nanyang Technological University)

 

