NIPS*2002 Workshops Abstracts
Barak Pearlmutter
bap at cs.unm.edu
Wed Nov 13 18:56:55 EST 2002
****************************************************************
NIPS*2002 Workshops
December 12-14, 2002, Whistler BC, Canada
http://www.nips.cc
****************************************************************
Workshop Schedule
=================
The NIPS*2002 Workshops will be held at the Westin in Whistler BC,
Canada, on Fri Dec 13 and Sat Dec 14, with sessions at 7:30-10:00am
and 4:00-7:00pm.
Two Day Workshops: Fri Dec 13 & Sat Dec 14
Functional Neuroimaging
Multi-Agent Learning
Propagation on Cyclic Graphs
One Day Workshops on Fri Dec 13
Adaptation/Plasticity and Coding
Bioinformatics
Independent Component Analysis
Neuromorphic Engineering
Spectral Methods
Statistics for Computational Experiments
Unreal Data
One Day Workshops on Sat Dec 14
Learning Invariant Representations
Learning Rankings
Negative Results
On Learning Kernels
Quantum Neural Computing
Thalamocortical Processing
Universal Learning Algorithms
Workshop Descriptions
=====================
TWO DAY WORKSHOPS (Friday & Saturday)
Propagation Algorithms on Graphs with Cycles: Theory and Applications
Shiro Ikeda, Kyushu Institute of Technology, Fukuoka, Japan
Toshiyuki Tanaka, Tokyo Metropolitan University, Tokyo, Japan
Max Welling, University of Toronto, Toronto, Canada
Inference on graphs with cycles (loopy graphs) has drawn much
attention in recent years. The problem arises in various fields
such as AI, error-correcting codes, statistical physics, and image
processing. Although exact inference is often intractable, much
progress has been made in solving the problem approximately with
local propagation algorithms. The aim of the workshop is to
provide an overview of recent developments in methods related to
belief propagation. We also encourage discussion of open
theoretical problems and new possibilities for applications.
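As a concrete (and purely illustrative) taste of the topic, local propagation on a loopy graph fits in a few lines of NumPy. The sketch below runs sum-product belief propagation on a toy three-node binary cycle; the potentials, node count, and flooding update schedule are all assumptions for the example, not code from any workshop contribution.

```python
import numpy as np

# Toy pairwise model: three binary variables on a cycle (0-1, 1-2, 2-0).
edges = [(0, 1), (1, 2), (2, 0)]
phi = [np.array([0.7, 0.3]),            # unary potentials (illustrative)
       np.array([0.5, 0.5]),
       np.array([0.4, 0.6])]
psi = {e: np.array([[1.2, 0.8],         # attractive pairwise potential
                    [0.8, 1.2]]) for e in edges}

def neighbors(i):
    return [b if a == i else a for a, b in edges if i in (a, b)]

def pairwise(i, j):
    # psi is stored once per edge; transpose when traversed backwards.
    return psi[(i, j)] if (i, j) in psi else psi[(j, i)].T

# Messages m[(i, j)](x_j), initialized uniform, updated in parallel.
m = {(i, j): np.ones(2) for a, b in edges for i, j in [(a, b), (b, a)]}
for _ in range(100):
    new = {}
    for i, j in m:
        prod = phi[i].copy()
        for k in neighbors(i):
            if k != j:
                prod = prod * m[(k, i)]
        msg = pairwise(i, j).T @ prod    # sum over x_i
        new[(i, j)] = msg / msg.sum()    # normalize for stability
    m = new

# Approximate marginals (beliefs) at each node.
beliefs = []
for i in range(3):
    b = phi[i].copy()
    for k in neighbors(i):
        b = b * m[(k, i)]
    beliefs.append(b / b.sum())
```

On a tree these beliefs would be exact; on this cycle they are only an approximation, which is precisely the regime the workshop addresses.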
Computational Neuroimaging: Foundations, Concepts & Methods
Stephen J. Hanson, Rutgers University, Newark, NJ, USA
Barak A. Pearlmutter, University of New Mexico, Albuquerque, NM, USA
Stephen Strother, University of Minnesota, Minneapolis, MN, USA
Lars Kai Hansen, Technical University of Denmark, Lyngby, Denmark
Benjamin Martin Bly, Rutgers University, Newark, NJ, USA
This workshop will concentrate on the foundations of neuroimaging,
including the relation between neural firing and BOLD, fast fMRI,
and diffusion methods. The first day features speakers on new
methods for multivariate analysis of fMRI data, especially as they
relate to neural modeling (ICA, SVM, and other machine learning
methods); this theme will carry over into the second morning, with
cognitive neuroscience talks on network and specific neural
modeling approaches to cognitive function on day two.
Multi-Agent Learning: Theory and Practice
Gerald Tesauro, IBM Research, NY, USA
Michael L. Littman, Rutgers University, New Brunswick, NJ, USA
Machine learning in a multi-agent system, where learning agents
interact with other agents that are also simultaneously learning,
poses a radically different set of issues from those arising in
normal single-agent learning in a stationary environment. This
topic is poorly understood theoretically but seems ripe for
progress by building upon many recent advances in RL and in
Bayesian, game-theoretic, decision-theoretic, and evolutionary
learning. At the same time, learning is increasingly vital in
fielded applications of multi-agent systems. Many application
domains are envisioned in which teams of software agents or robots
learn to cooperate to achieve global objectives. Learning may
also be essential in many non-cooperative domains such as
economics and finance, where classical game-theoretic solutions
are either infeasible or inappropriate. This workshop brings
together researchers studying multi-agent learning from a variety
of perspectives. Our invited speakers include leading AI
theorists, applications developers in fields such as robotics and
e-commerce, as well as social scientists studying learning in
multi-player human-subject experiments. Slots are also available
for contributed talks and/or posters.
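To make the nonstationarity concrete: even in the simplest setting, each of two independent reinforcement learners faces an environment that drifts as the other adapts. The sketch below (payoffs, learning rates, and step count are illustrative assumptions) pits two stateless epsilon-greedy Q-learners against each other in an iterated Prisoner's Dilemma, where defection is the dominant action.

```python
import numpy as np

rng = np.random.default_rng(0)

# Row player's payoff in the Prisoner's Dilemma (illustrative values);
# action 0 = cooperate, 1 = defect. Defection strictly dominates.
R = np.array([[3.0, 0.0],
              [5.0, 1.0]])

# Two independent, stateless epsilon-greedy Q-learners.
Q = [np.zeros(2), np.zeros(2)]
eps, alpha = 0.1, 0.1
for _ in range(5000):
    acts = [int(rng.integers(2)) if rng.random() < eps
            else int(np.argmax(Q[k])) for k in range(2)]
    rewards = [R[acts[0], acts[1]], R[acts[1], acts[0]]]
    for k in range(2):
        # Each agent updates as if facing a stationary bandit,
        # though the opponent's policy is drifting underneath it.
        Q[k][acts[k]] += alpha * (rewards[k] - Q[k][acts[k]])
```

Both learners end up preferring defection, illustrating how independent learning can converge to a mutually poor outcome; analyzing and improving such dynamics is a core workshop theme.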
ONE DAY WORKSHOPS (Friday)
The Role of Adaptation/Plasticity in Neuronal Coding
Garrett B. Stanley, Harvard University, Cambridge, MA, USA
Tai Sing Lee, Carnegie Mellon University, Pittsburgh, PA, USA
A ubiquitous characteristic of neuronal processing is the ability
to adapt to an ever changing environment on a variety of different
time scales. Although the different forms of adaptation/
plasticity have been studied for some time, their role in the
encoding process is still not well understood. The most widely
utilized measures assume time-invariant encoding dynamics even
though mechanisms serving to modify coding properties are
continually active in all but the most artificial laboratory
conditions. Important questions include: (1) how do encoding
dynamics and/or receptive field properties change with time and
the statistics of the environment?, (2) what are the underlying
sources of these changes?, (3) what are the resulting effects on
information transmission and processing in the pathway?, and (4)
can the mechanisms of plasticity/adaptation be understood from a
behavioral perspective? It is the goal of this workshop to
discuss neuronal coding within several different experimental
paradigms, in order to explore these issues that have only
recently been addressed in the literature.
Independent Component Analysis and Beyond
Stefan Harmeling, Fraunhofer FIRST, Berlin, Germany
Luis Borges de Almeida, INESC ID, Lisbon, Portugal
Erkki Oja, HUT, Helsinki, Finland
Dinh-Tuan Pham, LMC-IMAG, Grenoble, France
Independent component analysis (ICA) aims at extracting unknown
hidden factors/components from multivariate data using only the
assumption that the unknown factors are mutually independent.
Since the introduction of ICA concepts in the early 80s in the
context of neural networks and array signal processing, many new
successful algorithms have been proposed that are now
well-established methods. Since then, diverse applications in
telecommunications, biomedical data analysis, feature extraction,
speech separation, time-series analysis and data mining have been
reported. Of special interest to the NIPS community are, first,
the application of ICA techniques to multivariate data from
various neurophysiological recordings and, second, the interesting
conceptual parallels to information processing in the brain.
Recently, exciting developments have moved the field towards more
general nonlinear or nonindependent source separation paradigms.
The goal of the planned workshop is to bring together
researchers from the different fields of signal processing,
machine learning, statistics and applications to explore these new
directions.
Spectral Methods in Dimensionality Reduction, Clustering, and
Classification
Josh Tenenbaum, M.I.T., Cambridge, MA, USA
Sam Roweis, University of Toronto, Ontario, Canada
Data-driven learning by local or greedy parameter update
algorithms is often a painfully slow process fraught with local
minima. However, by formulating a learning task as an appropriate
algebraic problem, globally optimal solutions may be computed
efficiently in closed form via an eigendecomposition.
Traditionally, this spectral approach was thought to be applicable
only to learning problems with an essentially linear structure,
such as principal component analysis or linear discriminant
analysis. Recently, researchers in machine learning, statistics,
and theoretical computer science have figured out how to cast a
number of important nonlinear learning problems in terms amenable
to spectral methods. These problems include nonlinear
dimensionality reduction, nonparametric clustering, and nonlinear
classification with fully or partially labeled data. Spectral
approaches to these problems offer the potential for dramatic
improvements in efficiency, accuracy, optimality and
reproducibility relative to traditional iterative or greedy
learning algorithms. Furthermore, numerical methods for spectral
computations are extremely mature and well understood, allowing
learning algorithms to benefit from a long history of
implementation efficiencies in other fields. The goal of this
workshop is to bring together researchers working on spectral
approaches across this broad range of problem areas, for a series
of talks on state-of-the-art research and discussions of common
themes and open questions.
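The linear case mentioned above illustrates the appeal: PCA has a closed-form, globally optimal solution via an eigendecomposition of the covariance matrix, with no iterative search or local minima. A minimal sketch (the synthetic data are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative data: 3-D points lying near a 2-D plane, plus noise.
latent = rng.normal(size=(500, 2))
basis = np.array([[1.0, 0.0, 0.5],
                  [0.0, 1.0, -0.5]])
X = latent @ basis + 0.05 * rng.normal(size=(500, 3))

# PCA in closed form: eigendecompose the sample covariance matrix.
Xc = X - X.mean(axis=0)
cov = Xc.T @ Xc / len(Xc)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                   # leading 2-D subspace
Y = Xc @ top2                            # globally optimal embedding

# The two leading eigenvalues capture nearly all the variance.
explained = eigvals[-2:].sum() / eigvals.sum()
```

The recent work the workshop surveys casts nonlinear problems (e.g., manifold learning, spectral clustering) as eigenproblems of similarly tractable matrices.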
Neuromorphic Engineering in the Commercial World
Timothy Horiuchi, University of Maryland, College Park, MD, USA
Giacomo Indiveri, University-ETH Zurich, Zurich, Switzerland
Ralph Etienne-Cummings, University of Maryland, College Park, MD, USA
We propose a one-day workshop to discuss strategies, opportunities
and success stories in the commercialization of neuromorphic
systems. Towards this end, we will be inviting speakers from
industry and universities with relevant experience in the
field. The discussion will cover a broad range of topics, from
visual and auditory processing to olfaction and locomotion,
focusing specifically on the key elements and ideas for
successfully transitioning from neuroscience to commercialization.
Statistical Methods for Computational Experiments in Visual Processing
and Computer Vision
Ross Beveridge, Colorado State University, Colorado, USA
Bruce Draper, Colorado State University, Colorado, USA
Geof Givens, Colorado State University, Colorado, USA
Ross J. Micheals, NIST, Maryland, USA
Jonathon Phillips, DARPA & NIST, Maryland, USA
In visual processing and computer vision, computational
experiments play a critical role in explaining algorithm and
system behavior. Disciplines such as psychophysics and medicine
have a long history of designing experiments. Vision researchers
are still learning how to use computational experiments to explain
how systems behave in complex domains. This workshop will focus
on new and better experimental methods in the context
of visual processing and computer vision.
Unreal Data: Principles of Modeling Nonvectorial Data
Alexander J. Smola, Australian National Univ., Canberra, Australia
Gunnar Raetsch, Australian National Univ., Canberra, Australia
Zoubin Ghahramani, University College London, London, UK
A large amount of research in machine learning is concerned with
classification and regression for real-valued data which can
easily be embedded into a Euclidean vector space. This is in stark
contrast with many real world problems, where the data is often a
highly structured combination of features, a sequence of symbols,
a mixture of different modalities, or may have missing variables.
To address the problem of learning from non-vectorial data,
various methods have been proposed, such as embedding the
structures in some metric spaces, the extraction and selection of
features, proximity based approaches, parameter constraints in
Graphical Models, Inductive Logic Programming, Decision Trees,
etc. The goal of this workshop is twofold. Firstly, we hope to
make the machine learning community aware of the problems arising
from domains where non-vectorspace data abounds and to uncover the
pitfalls of mapping such data into vector spaces. Secondly, we
will try to find a more uniform structure governing methods for
dealing with non-vectorial data and to understand what, if any,
are the principles underlying the modeling of non-vectorial data.
Machine Learning Techniques for Bioinformatics
Colin Campbell, University of Bristol, UK
Phil Long, Genome Institute of Singapore
This workshop will cover the development and application of
machine learning techniques in molecular biology.
Contributed papers are welcome from any topic relevant to this
theme including, but not limited to, analysis of expression data,
promoter analysis, protein structure prediction, protein homology
detection, detection of splice junctions, and phylogeny.
Contributions that propose new algorithms or methods, rather than
the use of existing techniques, are especially welcome.
In addition to contributed papers, we expect to have several
tutorials covering different areas where machine learning
techniques have been successfully applied in this domain.
ONE DAY WORKSHOPS (Saturday)
Thalamocortical Processing in Audition and Vision
Tony Zador, Cold Spring Harbor Lab., Cold Spring Harbor, NY, USA
Shihab Shamma, University of Maryland, College Park, MD, USA
All sensory information (except olfactory) passes through the
thalamus before reaching the cortex. Are the principles governing
this thalamocortical transformation shared across sensory
modalities? This workshop will investigate this question in the
context of audition and vision. Questions include: Do the LGN and
MGN play analogous roles in the two sensory modalities? Are the
cortical representations of sound and light analogous?
Specifically, the idea is to talk about cortical processing (as
opposed to purely thalamic), how receptive fields are put together
in the cortex, and the implications of these ideas for the nature
of information being encoded and extracted in the cortex.
Learning of Invariant Representations
Konrad Paul Koerding, ETH/UNI Zuerich, Switzerland
Bruno. A. Olshausen, U.C. Davis & RNI, CA, USA
Much work in recent years has shown that the sensory coding
strategies employed in the nervous systems of many animals are well
matched to the statistics of their natural environment. For
example, it has been shown that lateral inhibition occurring in the
retina may be understood in terms of a decorrelation or
`whitening' strategy (Srinivasan et al., 1982; Atick & Redlich,
1992), and that the receptive field properties of cortical neurons may
be understood in terms of sparse coding or ICA (Olshausen & Field,
1996; Bell & Sejnowski, 1997; van Hateren & van der Schaaf,
1998). However, most of these models do not address the question
of which properties of the environment are interesting or relevant
and which others are behaviourally insignificant. The purpose of
this workshop is to focus on unsupervised learning models that
attempt to represent features of the environment which are
invariant or insensitive to variations such as position, size, or
other factors.
Quantum Neural Computing
Elizabeth C. Behrman, Wichita State University, Wichita, KS, USA
James E. Steck, Wichita State University, Wichita, KS, USA
Recently there has been a resurgence of interest in quantum
computers because of their potential for being very much smaller
and very much faster than classical computers, and because of
their ability in principle to do heretofore impossible
calculations, such as factorization of large numbers in polynomial
time. We will explore ways to implement biologically inspired
quantum computing in network topologies, thus exploiting both the
intrinsic advantages of quantum computing and the adaptability of
neural computing. This workshop will follow up on our very
successful NIPS 2000 workshop and the IJCNN 2001 Special Session.
Aspects/approaches to be explored will include: quantum hardware,
e.g., SQUIDs, NMR, trapped ions, quantum dots, and molecular
computing; theoretical and practical limits to quantum and quantum
neural computing, e.g., noise, error correction, and decoherence;
and simulations.
Universal Learning Algorithms and Optimal Search
Juergen Schmidhuber, IDSIA, Manno-Lugano, Switzerland
Marcus Hutter, IDSIA, Manno-Lugano, Switzerland
Recent theoretical and practical advances are currently driving a
renaissance in the fields of Universal Learners (rooted in
Solomonoff's universal induction scheme, 1964) and Optimal Search
(rooted in Levin's universal search algorithm, 1973). Both are
closely related to the theory of Kolmogorov complexity. The new
millennium has brought several significant developments including:
Sharp expected loss bounds for universal sequence predictors,
theoretically optimal reinforcement learners for general
computable environments, computable optimal predictions based on
natural priors that take algorithm runtime into account, and
practical, bias-optimal, incremental, universal search algorithms.
Topics will also include: Practical but general MML/MDL/SRM
approaches with theoretical foundation, weighted majority
approaches, and no free lunch theorems.
On Learning Kernels
Nello Cristianini, U.C. Davis, California, USA
Tommi Jaakkola, M.I.T., Massachusetts, USA
Michael I. Jordan, U.C. Berkeley, California, USA
Gert R.G. Lanckriet, U.C. Berkeley, California, USA
Recent theoretical advances and experimental results have drawn
considerable attention to the use of kernel methods in learning
systems. For the past five years, a growing community has been
meeting at the NIPS workshops to discuss the latest progress in
learning with kernels. Recent research in this area addresses the
problem of learning the kernel itself from data. This subfield is
becoming an active research area, offering a challenging interplay
between statistics, advanced convex optimization and information
geometry. It presents a number of interesting open problems. The
workshop has two goals. First, it aims at discussing
state-of-the-art research on 'learning the kernel', as well as
giving an introduction to some of the new techniques used in this
subfield. Second, it offers a meeting point for a diverse
community of researchers working on kernel methods. As such,
contributions from ALL subfields in kernel methods are welcome and
will be considered for a poster presentation, with priority to
very recent results. Furthermore, contributions on the main theme
of learning kernels will be considered for oral presentations.
Deadline for submissions: Nov 15, 2002.
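As one very simple instance of "learning the kernel" from data, the sketch below weights several RBF base kernels by their kernel-target alignment with the labels. The data, bandwidths, and the proportional-weighting heuristic are illustrative assumptions, far simpler than the convex-optimization formulations the workshop will discuss.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled data (illustrative): two Gaussian blobs, labels in {-1, +1}.
X = np.vstack([rng.normal(-1.0, 1.0, (40, 2)),
               rng.normal(+1.0, 1.0, (40, 2))])
y = np.array([-1.0] * 40 + [+1.0] * 40)

def rbf_kernel(X, gamma):
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq)

def alignment(K, y):
    # Kernel-target alignment: <K, yy^T>_F / (||K||_F ||yy^T||_F).
    Y = np.outer(y, y)
    return (K * Y).sum() / (np.linalg.norm(K) * np.linalg.norm(Y))

# Base kernels at several bandwidths; weight each in proportion to
# its alignment with the labels (a heuristic, not a convex program).
gammas = [0.01, 0.1, 1.0, 10.0]
Ks = [rbf_kernel(X, g) for g in gammas]
a = np.array([max(alignment(K, y), 0.0) for K in Ks])
weights = a / a.sum()
K_combined = sum(w * K for w, K in zip(weights, Ks))
```

A convex combination of positive semidefinite base kernels is itself a valid kernel, so `K_combined` can be dropped into any kernel machine.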
Negative Results and Open Problems
Isabelle Guyon, Clopinet, California, USA
In mathematics and theoretical computer science, exhibiting
counter examples is part of the established scientific method to
rule out wrong hypotheses. Yet, negative results and counter
examples are seldom reported in experimental papers, although they
can be very valuable. Our workshop will be a forum to freely
discuss negative results and introduce the community to
challenging open problems. This may include reporting experimental
results of principled algorithms that obtain poor performance
compared to seemingly dumb heuristics; experimental results that
falsify an existing theory; counter examples to a generally
admitted conjecture; failure to find a solution to a given problem
after various attempts; and failure to demonstrate the advantage
of a given method after various attempts. If you have interesting
negative results (not inconclusive results) or challenging open
problems, you may submit an abstract before November 15, 2002.
Beyond Classification and Regression: Learning Rankings, Preferences,
Equality Predicates, and Other Structures
Rich Caruana, Cornell University, NY, USA
Thorsten Joachims, Cornell University, NY, USA
Not all supervised learning problems fit the classification/
regression function-learning model. Some problems require
predictions other than values or classes. For example, sometimes
the magnitudes of the values predicted for cases are not important,
but the ordering these values induce is.
addresses supervised learning problems where either the goal of
learning or the input to the learner is more complex than in
classification and regression. Examples of such problems include
learning partial or total orderings, learning equality or match
rules, learning to optimize non-standard criteria such as
Precision and Recall or ROC Area, using relative preferences as
training examples, learning graphs and other structures, and
problems that benefit from these approaches (e.g., text retrieval,
medical decision making, protein matching). The goal of this
one-day workshop is to discuss the current state-of-the-art, and
to inspire research on new algorithms and problems. To submit an
abstract, see http://www.cs.cornell.edu/People/tj/ranklearn.
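To illustrate learning from relative preferences rather than absolute targets, the sketch below fits a linear utility by logistic regression on pairwise feature differences (a RankSVM-style reduction; the synthetic data and hyperparameters are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Items scored by a hidden linear utility (all values illustrative).
w_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(100, 3))
u = X @ w_true

# Training data: pairwise preferences (i preferred to j), not scores.
all_pairs = [(i, j) for i in range(100) for j in range(100) if u[i] > u[j]]
idx = rng.permutation(len(all_pairs))[:500]
pairs = [all_pairs[k] for k in idx]

# Logistic regression on feature differences: find w such that
# w . (x_i - x_j) > 0 whenever item i is preferred to item j.
w = np.zeros(3)
lr = 0.1
for _ in range(200):
    grad = np.zeros(3)
    for i, j in pairs:
        d = X[i] - X[j]
        p = 1.0 / (1.0 + np.exp(-w @ d))
        grad += (1.0 - p) * d            # gradient of the log-likelihood
    w += lr * grad / len(pairs)

scores = X @ w   # the learned scores induce the predicted ranking
```

The learner never sees the utility values themselves, only which item of a pair is preferred, yet the induced ordering recovers most of the training preferences.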
More extensive information is available on the NIPS web page
http://www.nips.cc, which has links to the pages maintained by each
individual workshop.
The number of workshop proposals was particularly high this year.
Altogether there will be seventeen NIPS*2002 workshops, of which
three will last for two days, for a total of twenty workshop-days: a new
record. We anticipate a great year not just in the number of
workshops and in their quality, but in attendance as well: projections
indicate that the workshops may surpass the main conference in total
number of participants.