Connectionists: Neuro Thursday at NIPS

Terry Sejnowski terry at salk.edu
Sun Nov 11 03:34:46 EST 2007


Neural Information Processing Systems - NIPS 2007:  http://nips.cc/Conferences/2007/

Conference December 3-6, 2007, Vancouver
Workshop December 7-8, 2007, Whistler

NEURO-THURSDAY at NIPS:

Thursday, December 6 - 8:30 AM - Noon - Vancouver

Thursday, December 6 (the final day of the Conference), will be devoted to Neuroscience, 
and will consist of a fascinating invited talk by Professor Manabu Tanifuji (Riken) 
on the monkey visual cortex, plus six outstanding plenary talks. 
In addition, all of the Neuroscience posters will be presented on Wednesday night, 
allowing early arrivals to interact with the researchers. The Wednesday night 
poster program will also include many Machine Learning and Computer Vision posters 
on topics that are relevant to Neuroscience.

All of the Thursday morning events, as well as the Wednesday night Poster Session 
and the Spotlights that precede it, will be available for the special "Neuro-Thursday" 
registration rate of $50.

For those attending the entire Conference, "Neuro-Thursday" is included in the registration price.

8:30 - 8:50am  	 Misha Ahrens, Maneesh Sahani
	Inferring Elapsed Time from Stochastic Neural Processes

8:50 - 9:10am 	Omer Bobrowski, Ron Meir, Shy Shoham, Yonina Eldar
	A neural network implementing optimal state estimation 
	based on dynamic spike train decoding

9:10 - 9:30am 	Sebastian Gerwinn, Jakob Macke, Matthias Seeger, Matthias Bethge
	Bayesian Inference for Spiking Neuron Models with a Sparsity Prior

9:30 - 9:50am 	Jonathan Pillow, Peter Latham
	Neural characterization in partially observed populations of spiking neurons

9:50 - 10:10am 	Srinjoy Mitra, Giacomo Indiveri, Stefano Fusi
	Learning to classify complex patterns using a VLSI network of spiking neurons

10:10 - 10:40am Break

10:40 - 11:00am Mate Lengyel, Peter Dayan
	Hippocampal Contributions to Control: The Third Way

11:00am - 12:00pm Invited Speaker: Manabu Tanifuji
	Population coding of object images based on visual features and 
	its relevance to view invariant representation

--------

Deep Learning Satellite Meeting: Foundations and Future Directions
Thursday, December 6 - 2:00 to 5:30 PM - Vancouver

Theoretical results strongly suggest that in order to learn the kind of complicated 
functions that can represent high-level abstractions (e.g. in vision, language, 
and other AI-level tasks), one may need "deep architectures", which are composed of 
multiple levels of non-linear operations (such as in neural nets with many hidden layers). 
Searching the parameter space of deep architectures is a difficult optimization task, 
but learning algorithms (e.g. Deep Belief Networks) have recently been proposed to 
tackle this problem with notable success, beating the state-of-the-art in certain areas.
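
For readers new to the area, here is a minimal, self-contained sketch (in Python with NumPy; 
it is not part of the official program) of what "multiple levels of non-linear operations" 
looks like in code. The layer sizes, the sigmoid non-linearity, and the random initialization 
are illustrative assumptions only, and no training procedure (such as the greedy layer-wise 
scheme used by Deep Belief Networks) is shown.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Three hidden layers between a 784-dimensional input and a 30-dimensional output;
# the sizes are arbitrary and chosen only for illustration.
layer_sizes = [784, 500, 250, 30]
weights = [rng.normal(scale=0.01, size=(n_in, n_out))
           for n_in, n_out in zip(layer_sizes[:-1], layer_sizes[1:])]
biases = [np.zeros(n_out) for n_out in layer_sizes[1:]]

def forward(x):
    # Each layer applies an affine map followed by a non-linearity; stacking
    # several such layers is what "deep architecture" refers to above.
    h = x
    for W, b in zip(weights, biases):
        h = sigmoid(h @ W + b)
    return h

x = rng.normal(size=(1, 784))    # a dummy input vector
print(forward(x).shape)          # -> (1, 30)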

This Workshop is intended to bring together researchers interested in deep learning, 
not only to review the principles and successes of current algorithms, but also to 
identify the challenges and to formulate promising directions of investigation. 
Besides the algorithms themselves, there are many fundamental questions that need to be addressed: 
What would be a good formalization of deep learning? What new ideas could be exploited to 
make further inroads into this difficult optimization problem? What makes a good high-level 
representation or abstraction? What type of problem is deep learning appropriate for?

There is no charge for this Workshop or for the bus to Whistler that will leave after the Workshop; 
however, a separate registration is required.

To register:   http://www.iro.umontreal.ca/~lisa/twiki/bin/view.cgi/Public/DeepLearningWorkshopNIPS2007 

2:00pm - 2:25pm Yee-Whye Teh, Gatsby Unit: Deep Belief Networks

2:25pm - 2:45pm John Langford, Yahoo Research: Theoretical Results on Deep Architectures

2:45pm - 3:05pm Yoshua Bengio, University of Montreal: Optimizing Deep Architectures

3:05pm - 3:25pm Yann Le Cun, New York University: Learning deep hierarchies of invariant features

3:25pm - 3:45pm Martin Szummer, Microsoft Research: Deep networks for information retrieval

3:45pm - 4:00pm Coffee break

4:00pm - 4:20pm Max Welling, University of California: Hierarchical Representations from networks of HDPs

4:20pm - 4:40pm Andrew Ng, Stanford University: Self-taught learning: Transfer learning from unlabeled data

4:40pm - 5:00pm Geoff Hinton, University of Toronto: How the brain works

5:00pm - 5:30pm Discussion 

---------

NIPS Whistler Workshops 

Friday December 7 - Saturday December 8, 2007 - Whistler

The post-Conference Workshops will be held at the Westin Resort and Spa and the 
Hilton Resort in Whistler, British Columbia, Canada on December 7 and 8, 2007.
The Workshops provide multi-track intensive sessions on a wide range of topics. 
The venue and schedule facilitate informality and depth. 

Partial List of Workshop Topics and Organizers:

Beyond Simple Cells: Probabilistic Models for Visual Cortical Processing
	Richard Turner, Pietro Berkes, Maneesh Sahani

Hierarchical Organization of Behavior: Computational, Psychological and Neural Perspectives
	Yael Niv, Matthew Botvinick, Andrew Barto

Large Scale Brain Dynamics 
	Ryan Canolty, Kai Miller, Joaquin Quinonero Candela, Thore Graepel, Ralf Herbrich

Mechanisms of Visual Attention
	Jillian Fecteau, Dirk Walther, Vidhya Navalpakkam, John Tsotsos

Music, Brain and Cognition. Part 1: Learning the Structure of Music and Its Effects On the Brain
	David Hardoon, Eduardo Reck-Miranda, John Shawe-Taylor

Music, Brain and Cognition. Part 2: Models of Sound and Cognition
        Hendrik Purwins, Xavier Serra, Klaus Obermayer

Principles of Learning Problem Design
        John Langford, Alina Beygelzimer

Representations and Inference on Probability Distributions
        Kenji Fukumizu, Arthur Gretton, Alex Smola
	
The Grammar of Vision: Probabilistic Grammar-Based Models for 
Visual Scene Understanding and Object Categorization
	Jan Peters, Marc Toussaint

A complete list of all 25 workshops, with links to more information:
http://nips.cc/Conferences/2007/Program/schedule.php?Session=Workshops

---------

NIPS POSTERS - Wednesday, December 5, 7:30pm - 12:00am

B. Fischer - Optimal models of sound localization by barn owls
S. Ghebreab, A. Smeulders, P. Adriaans -  Predicting Brain States from fMRI Data: 
	Incremental Functional Principal Component Regression
M. Cerf, J. Harel, W. Einhaeuser, C. Koch -  Predicting human gaze using low-level 
	saliency combined with face detection
J. Macke, G. Zeck, M. Bethge -  Receptive Fields without Spike-Triggering
V. Rao, M. Howard -  Retrieved context and the discovery of semantic structure
C. Christoforou, P. Sajda, L. Parra -  Second Order Bilinear Discriminant Analysis 
	for single trial EEG analysis
P. Frazier, A. Yu -  Sequential Hypothesis Testing under Stochastic Deadlines
L. Buesing, W. Maass -  Simplified Rules and Theoretical Analysis for 
	Information Bottleneck Optimization and PCA with Spiking Neurons
H. Lee, C. Ekanadham, A. Ng -  Sparse deep belief net model for visual area V2
M. Figueroa, G. Carvajal, W. Valenzuela -  Subspace-Based Face Recognition in Analog VLSI
D. Gao, V. Mahadevan, N. Vasconcelos -  The discriminant center-surround hypothesis for bottom-up saliency
D. Mochihashi, E. Sumita -  The Infinite Markov Model
N. Daw, A. Courville -  The rat as particle filter
R. Legenstein, D. Pecevski, W. Maass -  Theoretical Analysis of Learning with 
	Reward-Modulated Spike-Timing-Dependent Plasticity
M. Mahmud, S. Ray -  Transfer Learning using Kolmogorov Complexity: Basic Theory and Empirical Evaluations
A. Graves, S. Fernandez, M. Liwicki, H. Bunke, J. Schmidhuber -  Unconstrained On-line Handwriting Recognition 
	with Recurrent Neural Networks
M. Mozer, D. Baldwin -  Experience-Guided Search: A Theory of Attentional Control
A. Yuille, H. Lu -  The Noisy-Logical Distribution and its Application to Causal Inference
M. Frank, N. Goodman, J. Tenenbaum -  A Bayesian Framework for Cross-Situational Word-Learning
A. Stocker, E. Simoncelli - A Bayesian Model of Conditioned Perception
C. Kemp, N. Goodman, J. Tenenbaum -  A complexity measure for intuitive theories
M. Giulioni, M. Pannunzi, D. Badoni, V. Dante, P. del Giudice -  A configurable analog VLSI neural network 
	with spiking neurons and self-regulating plastic synapses
S. Siddiqi, B. Boots, G. Gordon -  A Constraint Generation Approach to Learning Stable Linear Dynamical Systems
O. Bobrowski, R. Meir, S. Shoham, Y. Eldar -  A neural network implementing optimal state estimation 
	based on dynamic spike train decoding
G. Englebienne, T. Cootes, M. Rattray -  A probabilistic model for generating realistic lip movements from speech
A. Argyriou, C. Micchelli, M. Pontil, Y. Ying -  A Spectral Regularization Framework for Multi-Task Structure Learning
P. Liang, D. Klein, M. Jordan -  Agreement-Based Learning
F. Sinz, O. Chapelle, A. Agarwal, B. Schölkopf - An Analysis of Inference with the Universum
D. Sridharan, B. Percival, J. Arthur, K. Boahen - An in-silico Neural Model of Dynamic Routing through Neuronal Coherence
C. Clopath, A. Longtin, W. Gerstner -  An online Hebbian learning rule that performs Independent Component Analysis
N. Chapados, Y. Bengio - Augmented Functional Time Series Representation and Forecasting with Gaussian Processes
Y. Teh, H. Daume III, D. Roy -  Bayesian Agglomerative Clustering with Coalescents
D. Endres, M. Oram, J. Schindelin, P. Foldiak -  Bayesian binning beats approximate alternatives: 
	estimating peri-stimulus time histograms
S. Gerwinn, J. Macke, M. Seeger, M. Bethge -  Bayesian Inference for Spiking Neuron Models with a Sparsity Prior
S. Yu, B. Krishnapuram, R. Rosales, H. Steck, R. Rao -  Bayesian Multi-View Learning
T. Sharpee - Better than least squares: comparison of objective functions for estimating linear-nonlinear models
Y. Lin, J. Chen, Y. Kim, D. Lee - Blind channel identification for speech dereverberation using l1-norm sparse learning
L. Sigal, A. Balan, M. Black -  Combined discriminative and generative articulated pose and non-rigid shape estimation
U. Beierholm, K. Kording, L. Shams, W. Ma - Comparing Bayesian models for 
	multisensory cue combination without mandatory integration
R. Peters, L. Itti -  Congruence between model and human attention reveals unique signatures of critical visual events
L. Murray, A. Storkey -  Continuous Time Particle Filtering for fMRI
E. Neftci, E. Chicca, G. Indiveri, J. Slotine, R. Douglas -  Contraction Properties of VLSI 
	Cooperative Competitive Neural Networks of Spiking Neurons
H. Chieu, W. Lee, Y. Teh -  Cooled and Relaxed Survey Propagation for MRFs
P. Ferrez, J. Millan -  EEG-Based Brain-Computer Interaction: Improved Accuracy by Automatic Single-Trial Error Detection
O. Sumer, U. Acar, A. Ihler, R. Mettu -  Efficient Bayesian Inference for Dynamically Changing Graphs
J. Huang, C. Guestrin, L. Guibas -  Efficient Inference for Distributions on Permutations
V. Singh, L. Mukherjee, J. Peng, J. Xu -  Ensemble Clustering using Semidefinite Programming
E. Tsang, B. Shi -  Estimating disparity with confidence from energy neurons
K. Ganchev, J. Graca, B. Taskar -  Expectation Maximization, Posterior Constraints, and Statistical Alignment
D. Wingate, S. Singh -  Exponential Family Predictive Representations of State
S. Lam, B. Shi -  Extending position/phase-shift tuning to motion energy neurons improves velocity discrimination
M. Ross, A. Cohen -  GRIFT: A graphical model for inferring visual classification features from human data
M. Lengyel, P. Dayan -  Hippocampal Contributions to Control: The Third Way
A. Christmann, I. Steinwart -  How SVMs can estimate quantiles and the median
M. Ahrens, M. Sahani -  Inferring Elapsed Time from Stochastic Neural Processes
J. Cunningham, B. Yu, K. Shenoy, M. Sahani -  Inferring Neural Firing Rates from Spike Trains Using Gaussian Processes
B. Blankertz, M. Kawanabe, R. Tomioka, F. Hohlefeld, V. Nikulin, K. Mueller -  Invariant Common Spatial Patterns: 
	Alleviating Nonstationarities in Brain-Computer Interfacing
K. Fukumizu, A. Gretton, X. Sun, B. Schoelkopf -  Kernel Measures of Conditional Dependence
M. Parsana, S. Bhattacharya, C. Bhattacharyya, K. Ramakrishnan -  Kernels on Attributed Pointsets with Applications
P. Garrigues, B. Olshausen -  Learning Horizontal Connections in a Sparse Coding Model of Natural Images
N. Le Roux, Y. Bengio, P. Lamblin, M. Joliveau, B. Kegl -  Learning the 2-D Topology of Images
S. Mitra, G. Indiveri, S. Fusi -  Learning to classify complex patterns using a VLSI network of spiking neurons
V. Ferrari, A. Zisserman -  Learning Visual Attributes
S. Kirshner -  Learning with Tree-Averaged Densities and Distributions
F. Meyer, G. Stephens -  Locality and low-dimensions in the prediction of natural experience from fMRI
A. Sanborn, T. Griffiths -  Markov Chain Monte Carlo with People
J. Dauwels, F. Vialatte, T. Rutkowski, A. Cichocki -  Measuring Neural Synchrony by Message Passing
R. Turner, M. Sahani - Modeling Natural Sounds with Modulation Cascade Processes
B. Williams, M. Toussaint, A. Storkey - Modelling motion primitives and their timing in biologically executed movements
E. Bonilla, K. Chai, C. Williams -  Multi-task Gaussian Process Prediction
M. Bethge, P. Berens -  Near-Maximum Entropy Models for Binary Neural Representations of Natural Images
J. He, J. Carbonell -  Nearest-Neighbor-Based Active Learning for Rare Category Detection
J. Pillow, P. Latham - Neural characterization in partially observed populations of spiking neurons
G. Lebanon, Y. Mao -  Non-parametric Modeling of Partially Ranked Data
B. Russell, A. Torralba, C. Liu, R. Fergus, W. Freeman -  Object Recognition by Scene Alignment
P. Berkes, R. Turner, M. Sahani -  On Sparsity and Overcompleteness in Image Models
Z. Barutcuoglu, P. Long, R. Servedio -  One-Pass Boosting

A complete schedule and abstracts of all NIPS talks and posters can be found at:
http://nips.cc/Conferences/2007/Program/schedule.php?Session=Conference%20Sessions

-----

