Connectionists: special sessions at ESANN'2007 European Symposium on Artificial Neural Networks
esann
esann at dice.ucl.ac.be
Sat Oct 28 16:36:12 EDT 2006
ESANN'2007
15th European Symposium
on Artificial Neural Networks
Advances in Computational Intelligence and Learning
Bruges (Belgium) - April 25-26-27, 2007
Special sessions
=====================================================
The following message contains a summary of all special sessions that will
be organized during the ESANN'2007 conference. Authors are invited to
submit their contributions to one of these sessions or to a regular session,
according to the guidelines found on the web pages of the conference
http://www.dice.ucl.ac.be/esann/.
In line with our policy of reducing the number of unsolicited e-mails, we
have gathered all special session descriptions in a single message and have
tried to avoid sending it to overlapping distribution lists. We apologize if
you receive multiple copies of this e-mail despite our precautions.
Special sessions that will be organized during the ESANN'2007 conference
========================================================================
1. Fuzzy and Probabilistic Methods in Neural Networks and Machine Learning
B. Hammer, Clausthal Univ. Tech. (Germany), T. Villmann, Univ. Leipzig
(Germany)
2. Reinforcement Learning
V. Heidrich-Meisner, Ruhr-Univ. Bochum, M. Lauer, Univ. Osnabrück,
C. Igel, Ruhr-Univ. Bochum, M. Riedmiller, Univ. Karlsruhe (Germany)
3. Convex Optimization for the Design of Learning Machines
K. Pelckmans, J.A.K. Suykens, Katholieke Univ. Leuven (Belgium)
4. Learning causality
P. F. Verdes, Heidelberg Acad. of Sciences (Germany),
K. Hlavackova-Schindler, Austrian Acad. of Sciences (Austria)
5. Reservoir Computing
D. Verstraeten, B. Schrauwen, Univ. Gent (Belgium)
Short descriptions
==================
Fuzzy and Probabilistic Methods in Neural Networks and Machine Learning
-----------------------------------------------------------------------
Organized by:
- B. Hammer, Clausthal Univ. Tech. (Germany)
- T. Villmann, Univ. Leipzig (Germany)
The availability of huge amounts of real-world data in widespread areas such
as image processing, medicine, bioinformatics, robotics, geophysics, etc.
leads to an increasing importance of fuzzy and probabilistic methods in
adaptive data processing. Usually, measured data contain noise,
classification labels may be uncertain to some degree, decision systems have
to cope with uncertain knowledge, and systems have to deal with missing or
contradictory data from several sources. Extensions of neural networks and
machine learning methods that incorporate probabilistic or fuzzy information
offer ways to handle such problems.
In the special session we focus on new developments and applications which
extend common approaches by features related to fuzzy and probabilistic
information processing. We encourage submissions within the following
non-exclusive list of topics:
- fuzzy and probabilistic clustering or classification
- processing of misleading or insecure information
- fuzzy reasoning and rule extraction
- fuzzy control
- probabilistic networks
- visualization of fuzzy information
- applications in image processing, medicine, robotics, ...
The focus will be on major developments that contribute new insights,
improved algorithms, or demonstrations of fuzzy and probabilistic methods on
real-life problems.
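To make the clustering topic above concrete, fuzzy c-means replaces hard
cluster labels by graded memberships. The following minimal sketch of its
alternating updates is purely illustrative (all parameter values are
assumptions, not part of the call):

# Minimal fuzzy c-means sketch (illustrative only): each point gets a graded
# membership in every cluster instead of a hard label.
import numpy as np

def fuzzy_c_means(X, n_clusters=3, m=2.0, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    # random initial membership matrix, rows sum to 1
    U = rng.random((n, n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        Um = U ** m
        # cluster centers: membership-weighted means of the data
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        # squared distances of every point to every center
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        # membership update: inversely proportional to relative distance
        inv = d2 ** (-1.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)
    return centers, U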
Reinforcement Learning
-----------------------------------------------------------------------
Organized by:
- V. Heidrich-Meisner, Ruhr-Univ. Bochum
- M. Lauer, Univ. Osnabrück
- C. Igel, Ruhr-Univ. Bochum
- M. Riedmiller, Univ. Karlsruhe (Germany)
Reinforcement learning (RL) deals with computational models and algorithms
for solving sequential decision and control problems: a learning agent
builds policies of optimal behavior by interacting with the environment and
observing rewards for its present performance. Application fields include,
for example, nonlinear control of technical systems, robotics, and economic
processes. Reinforcement learning addresses problems for which knowledge of
the system dynamics is poor and feedback about actions is sparse,
unspecific, or delayed. Moreover, it is the most biologically plausible
learning paradigm for behavioral processes. Thus, RL is a highly
interdisciplinary field of research combining optimal control, psychology,
biology, and machine learning. Bringing together researchers from these
different disciplines is one goal of the proposed session.
In our session we welcome papers describing theoretical work and carefully
evaluated applications from all areas of RL and approximate dynamic
programming. We encourage submissions dealing with new biological models of
RL processes as well as innovative computational learning algorithms, in
particular direct policy search methods and approximate RL.
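As a minimal illustration of the value-based side of RL, the sketch below
implements tabular Q-learning on a toy chain task; the environment and all
parameter values are illustrative assumptions only, not part of the call:

# Minimal tabular Q-learning sketch on a toy chain environment: the agent
# learns action values from the rewards it observes while interacting.
import numpy as np

def q_learning(n_states=5, n_episodes=500, alpha=0.1, gamma=0.95,
               eps=0.1, seed=0):
    rng = np.random.default_rng(seed)
    Q = np.zeros((n_states, 2))          # actions: 0 = left, 1 = right
    for _ in range(n_episodes):
        s = 0                            # start at the left end of the chain
        while s != n_states - 1:         # rightmost state is terminal
            # epsilon-greedy action selection
            a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
            s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
            r = 1.0 if s_next == n_states - 1 else 0.0   # reward only at the goal
            # temporal-difference update toward the bootstrapped target
            Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
            s = s_next
    return Q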
Convex Optimization for the Design of Learning Machines
-----------------------------------------------------------------------
Organized by:
- K. Pelckmans, Katholieke Univ. Leuven (Belgium)
- J.A.K. Suykens, Katholieke Univ. Leuven (Belgium)
Recently, techniques from Convex Optimization (CO) have taken a more
prominent place in learning approaches, pioneered by the work on Support
Vector Machines (SVMs) and other regularization-based learning schemes.
Duality theory has played an important role in the development of so-called
kernel machines, while the uniqueness of the optimal solution has enabled
theoretical as well as practical breakthroughs. A third main advantage of
using CO tools in research on learning problems is that the conceptual
design of a learning scheme becomes cleanly separated from the actual
algorithm implementing this scheme (the CO solver). In this special session,
we target the first stage, i.e. the phase in which one studies how the
learning problem at hand can be converted effectively into a CO problem.
Papers are solicited in, but not restricted to, the following areas:
* New formulations of kernel machines in terms of convex optimization
problems
* Novel (convex) optimality principles for learning machines
* Handling different model structures (e.g. additive models) and noise
models using a convex solver
* Structure detection, regularization path and sparseness issues
* Convex approaches to model selection problems
* Convex techniques in Clustering and Exploratory Data Analysis (EDA)
* SDP and SOCP based techniques in kernel machines
* Application specific (convex) formulations
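As a small illustration of the separation between formulation and solver
mentioned above, the sketch below states a linear SVM with squared hinge
loss as a convex objective and hands it to a generic first-order solver;
all choices here are illustrative assumptions, not a prescribed formulation:

# Sketch of the "formulation vs. solver" separation for a linear SVM
# (assumed setup: L2-regularized squared hinge loss, plain gradient descent
# standing in for a generic convex solver).
import numpy as np

def svm_objective_and_grad(w, X, y, C=1.0):
    """Convex primal objective: 0.5*||w||^2 + C * sum max(0, 1 - y*<w,x>)^2."""
    margins = np.maximum(0.0, 1.0 - y * (X @ w))
    obj = 0.5 * w @ w + C * np.sum(margins ** 2)
    grad = w - 2.0 * C * (X.T @ (y * margins))
    return obj, grad

def gradient_descent(obj_grad, w0, lr=1e-3, n_iter=1000):
    """Generic first-order solver: knows nothing about SVMs, only gradients."""
    w = w0.copy()
    for _ in range(n_iter):
        _, g = obj_grad(w)
        w -= lr * g
    return w

# usage (X: data matrix, y: labels in {-1, +1}):
# w = gradient_descent(lambda w: svm_objective_and_grad(w, X, y),
#                      np.zeros(X.shape[1]))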
Learning causality
-----------------------------------------------------------------------
Organized by:
- P. F. Verdes, Heidelberg Acad. of Sciences (Germany)
- K. Hlavackova-Schindler, Austrian Acad. of Sciences (Austria)
Discovering interdependencies and causal relationships is one of the most
important challenges raised by the information era. As more and better data
become available, there is an urgent need for techniques capable of
efficiently detecting, for example, the hidden interactions within
regulatory networks in biology, the complex feedbacks of a climate system
affected by global warming, the possible coupling of economic indices, the
subtle recruiting processes that may take place in the human brain, etc. As
such, this important issue is receiving increasing attention in the recent
literature. We encourage submissions on this topic that present original
contributions on methodological and practical aspects of causality detection
techniques by means of learning, statistical information processing, and
computational intelligence, possibly illustrated by applications to
real-world data.
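As one concrete example of what causality detection from data can look
like, the sketch below computes a simple Granger-style score: the past of x
is taken to be informative about y if it improves the prediction of y beyond
y's own past. The model order and the variance-ratio score are illustrative
assumptions, not methods endorsed by the session:

# Minimal Granger-style causality sketch (assumed setup: linear AR models of
# order p, log residual-variance ratio as the causality score).
import numpy as np

def granger_score(x, y, p=2):
    """Does the past of x help predict y beyond y's own past?"""
    n = len(y)
    # lagged design matrices: columns are y[t-1..t-p] and x[t-1..t-p]
    Y_lags = np.column_stack([y[p - k - 1:n - k - 1] for k in range(p)])
    X_lags = np.column_stack([x[p - k - 1:n - k - 1] for k in range(p)])
    target = y[p:]
    # restricted model: y's own past only
    res_r = target - Y_lags @ np.linalg.lstsq(Y_lags, target, rcond=None)[0]
    # full model: y's past plus x's past
    full = np.hstack([Y_lags, X_lags])
    res_f = target - full @ np.linalg.lstsq(full, target, rcond=None)[0]
    # a score > 0 suggests x carries predictive information about y
    return np.log(res_r.var() / res_f.var())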
Reservoir Computing
-----------------------------------------------------------------------
Organized by:
- D. Verstraeten, Univ. Gent (Belgium)
- B. Schrauwen, Univ. Gent (Belgium)
Reservoir computing is a novel temporal classification and regression
technique that generalizes three prior concepts: the Echo State Network, the
Liquid State Machine and Backpropagation Decorrelation. It has been
successfully applied to many temporal machine learning problems in digital
signal processing, robotics, speech recognition and other areas. The
technique uses a broad class of recurrent neural networks as a form of
explicit kernel-like mapping that projects the input space into a
higher-dimensional network state space, while retaining temporal information
through the fading memory property of the network. The network itself is
left untrained; instead, a simple readout function (usually a linear
discriminant), which is generally very easy to train, is applied to the
network's state space. This technique shows promising advantages compared to
traditional temporal machine learning methods, but there are still many open
research questions. This special session will present a mixture of
theoretical results and state-of-the-art applications in several fields.
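The description above translates almost directly into code: a fixed random
recurrent network provides the high-dimensional state space, and only a
linear readout is trained. The echo state network sketch below is
illustrative only; the reservoir size, scaling and ridge-regression readout
are assumptions, not a reference implementation:

# Minimal echo state network sketch: untrained tanh reservoir with
# spectral-radius rescaling, trained linear readout via ridge regression.
import numpy as np

def esn_fit(inputs, targets, n_reservoir=200, spectral_radius=0.9,
            ridge=1e-6, seed=0):
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = rng.uniform(-0.5, 0.5, (n_reservoir, n_in))
    W = rng.uniform(-0.5, 0.5, (n_reservoir, n_reservoir))
    # rescale the untrained recurrent weights to enforce fading memory
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))
    # drive the reservoir with the input sequence and collect its states
    states = np.zeros((len(inputs), n_reservoir))
    x = np.zeros(n_reservoir)
    for t, u in enumerate(inputs):
        x = np.tanh(W_in @ u + W @ x)
        states[t] = x
    # only the linear readout is trained (ridge regression on the states)
    A = states.T @ states + ridge * np.eye(n_reservoir)
    W_out = np.linalg.solve(A, states.T @ targets)
    return W_in, W, W_out

# the readout's prediction at time t is states[t] @ W_out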
========================================================
ESANN - European Symposium on Artificial Neural Networks
http://www.dice.ucl.ac.be/esann
* For submissions of papers, reviews,...
Michel Verleysen
Univ. Cath. de Louvain - Microelectronics Laboratory
3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium
tel: +32 10 47 25 51 - fax: + 32 10 47 25 98
mailto:esann at dice.ucl.ac.be
* Conference secretariat
d-side conference services
24 av. L. Mommaerts - B-1140 Evere - Belgium
tel: + 32 2 730 06 11 - fax: + 32 2 730 06 00
mailto:esann at dice.ucl.ac.be
========================================================