Symposium Workshop "Bayesian Methods for Cognitive Modeling"

Richard Golden golden at utdallas.edu
Fri Jun 13 09:42:50 EDT 2003


Symposium Workshop: 
Bayesian Methods for Cognitive Modeling
Tentative Schedule

Monday, July 28, 2003, 
Weber State University
Ogden, Utah
(Following the 2003 Annual Meeting of the Society for Mathematical 
Psychology)

8:15am - 8:30am Introduction to the Symposium Workshop
Richard Golden (University of Texas at Dallas) and Richard Shiffrin (Indiana 
University)

8:30am - 10:00am Bayesian Methods for Unsupervised Learning
Zoubin Ghahramani (University College London, Gatsby Computational 
Neuroscience Unit)

10:00am - 10:30am Coffee Break

10:30am - 12:00pm Bayesian Models of Human Learning and Inference
Josh Tenenbaum (MIT, Brain and Cognitive Sciences)

12:00pm - 1:30pm Lunch Break

1:30pm - 3:00pm The Bayesian Approach to Vision
Alan Yuille (UCLA, Departments of Statistics and Psychology)

3:00pm - 3:30pm Coffee Break

3:30pm - 5:00pm Probabilistic Approaches to Language Learning and Processing 
Christopher Manning (Stanford University, Computer Science)

------------------------------------------------------------------------------
*   Each talk will be approximately 80 minutes long, followed by a 10-minute 
question period.
*   A $20 registration fee is required for participation in the workshop.

ABSTRACTS

8:30am - 10:00am Bayesian Methods for Unsupervised Learning
Zoubin Ghahramani (University College London, Gatsby Computational 
Neuroscience Unit)

Many models used in machine learning and neural computing can be understood 
within the unified framework of probabilistic graphical models. These include 
clustering models (k-means, mixtures of Gaussians), dimensionality reduction 
models (PCA, factor analysis), time series models (hidden Markov models, linear 
dynamical systems), independent component analysis (ICA), hierarchical neural 
network models, etc. I will review the links among all these models, and the 
framework for learning them using the EM algorithm for maximum likelihood. I
will then describe limitations of the maximum likelihood framework and how 
Bayesian methods overcome these limitations, allowing learning without 
overfitting, principled model selection, and the coherent handling of 
uncertainty. Time permitting, I will describe the computational challenges of 
Bayesian learning and approximate methods for overcoming them, such as 
variational methods.
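
As a concrete illustration of the maximum-likelihood framework that the
abstract contrasts with Bayesian learning, the short Python sketch below fits
a two-component 1-D Gaussian mixture with the EM algorithm. It is an
illustrative toy, not material from the talk; the data, initialization, and
names are made up.

    import numpy as np

    def em_gmm_1d(x, n_iter=50):
        """Fit a two-component 1-D Gaussian mixture by maximum likelihood (EM)."""
        mu = np.array([x.min(), x.max()])     # initial component means
        var = np.array([x.var(), x.var()])    # initial component variances
        pi = np.array([0.5, 0.5])             # initial mixing weights
        for _ in range(n_iter):
            # E-step: responsibilities r[n, k] = p(component k | x_n)
            dens = (pi / np.sqrt(2 * np.pi * var)) * \
                   np.exp(-0.5 * (x[:, None] - mu) ** 2 / var)
            r = dens / dens.sum(axis=1, keepdims=True)
            # M-step: re-estimate weights, means, and variances
            nk = r.sum(axis=0)
            pi = nk / len(x)
            mu = (r * x[:, None]).sum(axis=0) / nk
            var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
        return pi, mu, var

    # Toy data: samples from two well-separated Gaussians.
    rng = np.random.default_rng(0)
    x = np.concatenate([rng.normal(-2.0, 1.0, 200), rng.normal(3.0, 0.5, 200)])
    print(em_gmm_1d(x))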


10:30am - 12:00pm Bayesian Models of Human Learning and Inference
Josh Tenenbaum (MIT, Brain and Cognitive Sciences)

How can people learn the meaning of a new word from just a few examples?  
What makes a set of examples more or less representative of a concept?  What 
makes two objects seem more or less similar?  Why are some generalizations 
apparently based on all-or-none rules while others appear to be based on gradients of 
similarity?  How do we infer the existence of hidden causal properties or 
novel causal laws?  I will describe an approach to explaining these aspects of 
everyday induction in terms of rational statistical inference.  In our Bayesian 
models, learning and reasoning are explained in terms of probability 
computations over a hypothesis space of possible concepts, word meanings, or 
generalizations.  The structure of the learner's hypothesis spaces reflects their 
domain-specific prior knowledge, while the nature of the probability computations 
depends on domain-general statistical principles.  The hypotheses can be thought 
of as either potential rules for abstraction or potential features for 
similarity, with the shape of the learner's posterior probability distribution 
determining whether generalization appears more rule-based or similarity-based.  
Bayesian models thus offer an alternative to classical accounts of learning and 
reasoning that rest on a single route to knowledge -- e.g., domain-general 
statistics or domain-specific constraints -- or a single representational 
paradigm -- e.g., abstract rules or exemplar similarity.  This talk will illustrate 
the Bayesian approach to modeling learning and reasoning on a range of 
behavioral case studies, and contrast its explanations with those of more traditional 
process models.
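
The core computation described in the abstract, averaging over a hypothesis
space weighted by the posterior, can be written out in a few lines. The toy
"number concept" example below is only a sketch in the spirit of the abstract;
the hypotheses, the uniform prior, and the size-principle likelihood are
illustrative assumptions, not the models presented in the talk.

    import numpy as np

    # Toy hypothesis space over the numbers 1..100.
    hypotheses = {
        "even":            [n for n in range(1, 101) if n % 2 == 0],
        "powers_of_two":   [2, 4, 8, 16, 32, 64],
        "multiples_of_10": [n for n in range(10, 101, 10)],
    }
    prior = {h: 1.0 / len(hypotheses) for h in hypotheses}   # uniform prior

    def posterior(examples):
        # p(example | h) is 1/|h| if the example is consistent with h, else 0.
        post = {}
        for h, extension in hypotheses.items():
            lik = np.prod([1.0 / len(extension) if x in extension else 0.0
                           for x in examples])
            post[h] = prior[h] * lik
        z = sum(post.values())
        return {h: p / z for h, p in post.items()} if z > 0 else post

    def p_generalize(y, examples):
        # Probability that a new item y falls under the concept: average the
        # predictions of all hypotheses, weighted by their posterior.
        post = posterior(examples)
        return sum(p for h, p in post.items() if y in hypotheses[h])

    print(posterior([16, 8, 2]))          # concentrates on "powers_of_two"
    print(p_generalize(32, [16, 8, 2]))   # high
    print(p_generalize(90, [16, 8, 2]))   # low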

1:30pm - 3:00pm The Bayesian Approach to Vision
Alan Yuille (UCLA, Departments of Statistics and Psychology)

Bayesian statistical decision theory formulates vision as perceptual 
inference where the goal is to infer the structure of the viewed scene from input 
images. The approach can be used not only to model perceptual phenomena but 
also to design computer vision systems that perform useful tasks on natural 
images. This ensures that the models can be extended from the artificial 
stimuli used in most psychophysical or neuroscientific experiments to more 
natural and
realistic stimuli. The approach requires specifying likelihood functions for 
how the viewed scene generates the observed image data and prior probabilities 
for the state of the scene. We show how this relates to Signal Detection Theory 
and Machine Learning. Next we describe how the probability models (i.e., 
likelihood functions and priors) can be represented by graphs, which make 
explicit the statistical dependencies between variables. This representation 
enables us
to account for perceptual phenomena such as discounting, cue integration, and 
explaining away. We illustrate the techniques involved in the Bayesian 
approach with two worked examples. The first is the perception of motion, 
where we describe Bayesian theories (Weiss & Adelson, Yuille & Grzywacz) which 
show that many phenomena can be explained as a trade-off between the 
likelihood function and the prior of a single model. The second is image 
parsing, where the goal is to segment natural images and to detect and 
recognize objects. This involves
models competing and cooperating to explain the image by combining bottom-up 
and top-down processing.
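
The trade-off between likelihood and prior mentioned above has a simple
Gaussian worked form: the MAP estimate of a scene variable is a
precision-weighted average of the measurement and the prior mean. The few
lines below sketch this for a zero-mean ("slow motion") prior on velocity;
they are illustrative only, not the Weiss & Adelson or Yuille & Grzywacz
models themselves.

    def map_velocity(measured, var_likelihood, var_prior=1.0):
        # Gaussian likelihood centered on the measured velocity, zero-mean
        # Gaussian prior favoring slow motion. The MAP estimate is a
        # precision-weighted average, pulled toward zero when the data are noisy.
        w = (1.0 / var_likelihood) / (1.0 / var_likelihood + 1.0 / var_prior)
        return w * measured

    print(map_velocity(5.0, 0.1))    # reliable measurement: estimate near 5
    print(map_velocity(5.0, 10.0))   # noisy measurement: pulled toward 0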

3:30pm - 5:00pm Probabilistic Approaches to Language Learning and Processing 
Christopher Manning (Stanford University, Computer Science)

At the engineering end of speech and natural language understanding research, 
the field has been transformed by the adoption of Bayesian probabilistic 
approaches, with generative models such as Markov models, hidden Markov models, 
and probabilistic context-free grammars being standard tools of the trade, and 
people increasingly using more sophisticated models.  More recently, these 
models have also begun to be used as cognitive models, to explore issues in 
psycholinguistic processing and how humans approach the resolution problem of 
combining evidence from numerous sources during the course of processing. 
Much of this work has been in a supervised learning paradigm, where models 
are built from hand-annotated data, but probabilistic approaches also open 
interesting new perspectives on formal problems of language learning.  After 
surveying the broader field of probabilistic approaches in natural language 
processing, I'd like to focus on unsupervised approaches to learning language 
structure, show why it is a difficult problem, and present some recent work 
that I and others have been doing using probabilistic models, which shows 
considerable progress on tasks such as word-class and syntactic-structure 
learning.
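
To make one of the "standard tools of the trade" concrete, the snippet below
runs Viterbi decoding in a toy two-tag hidden Markov model. The tags,
vocabulary, and probabilities are invented for illustration and do not come
from the talk.

    import numpy as np

    states = ["DET", "NOUN"]
    vocab = {"the": 0, "dog": 1, "park": 2}

    start = np.array([0.8, 0.2])             # p(first tag)
    trans = np.array([[0.1, 0.9],            # p(next tag | DET)
                      [0.6, 0.4]])           # p(next tag | NOUN)
    emit  = np.array([[0.90, 0.05, 0.05],    # p(word | DET)
                      [0.05, 0.50, 0.45]])   # p(word | NOUN)

    def viterbi(words):
        obs = [vocab[w] for w in words]
        n, k = len(obs), len(states)
        delta = np.zeros((n, k))            # delta[t, s]: best path prob. ending in s
        back = np.zeros((n, k), dtype=int)  # back-pointers to recover the path
        delta[0] = start * emit[:, obs[0]]
        for t in range(1, n):
            scores = delta[t - 1][:, None] * trans * emit[:, obs[t]]
            back[t] = scores.argmax(axis=0)
            delta[t] = scores.max(axis=0)
        path = [int(delta[-1].argmax())]
        for t in range(n - 1, 0, -1):
            path.append(back[t, path[-1]])
        return [states[s] for s in reversed(path)]

    print(viterbi(["the", "dog"]))   # expected: ['DET', 'NOUN']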




