special sessions at ESANN'2002


Mon Oct 1 06:32:01 EDT 2001


----------------------------------------------------
|                                                  |
|                    ESANN'2002                    |
|                                                  |
|              10th European Symposium             |
|           on Artificial Neural Networks          |
|                                                  |
|      Bruges (Belgium) - April 24-25-26, 2002     |
|                                                  |
|                 Special sessions                 |
----------------------------------------------------

The following message contains a summary of all special sessions that will
be organized during the ESANN'2002 conference.  Authors are invited to
submit their contributions to one of these sessions or to a regular session,
according to the guidelines found on the web server of the conference
(http://www.dice.ucl.ac.be/esann/).

List of special sessions that will be organized during the ESANN'2002
conference
============================================================================
1. Perspectives on Learning with Recurrent Networks (B. Hammer, J.J. Steil)
2. Representation of high-dimensional data (A. Guérin-Dugué, J. Hérault)
3. Neural Network Techniques in Fault Detection and Isolation (S. Simani)
4. Hardware and Parallel Computer Implementations of Neural Networks (U.
Seiffert)
5. Exploratory Data Analysis in Medicine and Bioinformatics (A. Wismüller,
T. Villmann)
6. Neural Networks and Cognitive Science (H. Paugam-Moisy, D. Puzenat)

Short description
=================

1. Perspectives on Learning with Recurrent Networks
---------------------------------------------------
Organised by :
Barbara Hammer, University of Osnabrück
Jochen J. Steil, University of Bielefeld

Description of the session:
Recurrent neural networks constitute a natural and widely applicable tool
for processing spatio-temporal data such as time-series, language, control
signals, financial data, etc. Reflecting the various areas of application,
many different models have been proposed: recurrent networks may evolve in
continuous or discrete time, they may be fully or partially connected, and
they may be trained with supervised or unsupervised methods, to name just a
few aspects. Given this variety, it is difficult to compare and analyse
recurrent networks in a common framework.

One approach is to focus on their learning ability, where it is well known
that classical gradient descent techniques suffer from numerical problems
and may require a huge amount of data for adequate generalization. A main
question is then: can we find efficient learning algorithms for the specific
tasks, and can we guarantee their success? To this end, different means such
as normalization of the gradients, stochastic approaches, or genetic
algorithms have been proposed. A key ingredient for efficient learning and
valid generalization is some kind of regularisation. At a technical level
this is mainly achieved by a proper choice of the fitness function or the
stochastic model, and/or by constraints on the weights, often inspired by
similar techniques for feedforward networks.
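
As a purely illustrative aside (not part of the call itself), the following
Python/NumPy sketch shows one of the simplest forms of gradient
normalization mentioned above: the gradient obtained by backpropagation
through time for a toy Elman-style recurrent network is rescaled whenever
its norm exceeds a fixed bound before each weight update. The network size,
the toy task (predicting the sum of a short input sequence) and the clipping
threshold are arbitrary choices made only for this example.

import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8
W_in  = rng.normal(scale=0.3, size=(n_hid, n_in))   # input -> hidden weights
W_rec = rng.normal(scale=0.3, size=(n_hid, n_hid))  # hidden -> hidden weights
w_out = rng.normal(scale=0.3, size=n_hid)           # hidden -> output weights

def forward(xs):
    """Run the network over a sequence, keeping all hidden states."""
    hs = [np.zeros(n_hid)]
    for x in xs:
        hs.append(np.tanh(W_in @ x + W_rec @ hs[-1]))
    return w_out @ hs[-1], hs           # predict from the last hidden state

def bptt(xs, target):
    """Squared-error loss and its gradients via backprop through time."""
    y, hs = forward(xs)
    err = y - target
    dw_out = err * hs[-1]
    dW_in, dW_rec = np.zeros_like(W_in), np.zeros_like(W_rec)
    dh = err * w_out                    # gradient w.r.t. the last hidden state
    for t in range(len(xs), 0, -1):
        da = dh * (1.0 - hs[t] ** 2)    # back through the tanh non-linearity
        dW_in  += np.outer(da, xs[t - 1])
        dW_rec += np.outer(da, hs[t - 1])
        dh = W_rec.T @ da               # propagate one time step further back
    return 0.5 * err ** 2, dw_out, dW_in, dW_rec

def clip(grads, max_norm=1.0):
    """Rescale all gradients jointly if their overall norm exceeds max_norm."""
    norm = np.sqrt(sum(np.sum(g ** 2) for g in grads))
    return [g * min(1.0, max_norm / (norm + 1e-12)) for g in grads]

lr = 0.05
for step in range(2000):
    xs = rng.normal(size=(5, n_in))     # random length-5 input sequence
    loss, dw_out, dW_in, dW_rec = bptt(xs, xs.sum())
    dw_out, dW_in, dW_rec = clip([dw_out, dW_in, dW_rec])
    w_out -= lr * dw_out
    W_in  -= lr * dW_in
    W_rec -= lr * dW_rec
    if step % 500 == 0:
        print(f"step {step:4d}  loss {float(loss):.4f}")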

More recently, non-standard methods have also been proposed: for continuous
networks, learning can be restricted to well-behaved, e.g. stable, regions
of the network identified by a dynamical analysis. In the case of symbolic,
i.e. discrete, inputs, relations to symbolic systems such as finite automata
can be exploited. Integrating automata rules, or restricting the network to
automaton behaviour, often yields a good starting point for further
training. Ideas can also be borrowed from biological systems for training
adapted to the specific area of application, e.g. speech acquisition, or
training can be restricted to combining prototypic weight-matrix templates.

The session will focus on methods and examples for efficient training of
recurrent networks such that valid generalization can be achieved. This
involves algorithms, successful applications, and theoretical investigations
that put forward new insights and ideas for algorithm design. Authors are invited
to submit contributions related to the following list of topics:

training methods for recurrent networks,
genetic, evolutionary, and hybrid approaches,
regularization e.g. via stability constraints,
connection to automata or symbolic systems,
investigation of the dynamical behavior,
general learning theory of recurrent networks,
applications e.g. in speech recognition or forecasting,
other learning related topics.


2. Representation of high-dimensional data
------------------------------------------
Organised by :
Anne Guérin-Dugué, CLIPS-IMAG, Grenoble (France)
Jeanny Hérault, LIS, Grenoble (France)

Description of the session:
A common problem in Artificial Neural Networks is the analysis of
non-linearly related and high-dimensional data. For human beings living in a
3-D world, there is a strong need for the representation and visualisation
of these data. The problem is not simple: because of the so-called "curse of
dimensionality", our understanding of high dimensions is often no better than
primitive numeration: one, two, three... many.
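
As a small, hedged illustration of this point (plain NumPy on uniform random
data; nothing here comes from the session organisers), the following
fragment shows how pairwise distances concentrate as the dimension grows, so
that the nearest and farthest neighbours of a point become almost equally
far away:

import numpy as np

rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):
    X = rng.uniform(size=(500, d))                 # 500 random points in [0,1]^d
    dist = np.linalg.norm(X[1:] - X[0], axis=1)    # distances to the first point
    print(f"d = {d:4d}   farthest/nearest distance ratio: "
          f"{dist.max() / dist.min():6.2f}")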

The problem calls for dimension and size reduction of data sets, but for
this purpose some questions remain widely open and are of major importance:
evaluation of similarities, distance metrics, manifold unfolding, local
projections, non-linear relations...
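
One classical baseline for such dimension reduction, given here only as a
hedged NumPy sketch on toy data (classical metric multidimensional scaling,
one of the topics listed below), embeds points so that pairwise Euclidean
distances are approximately preserved:

import numpy as np

def classical_mds(D, dim=2):
    """Embed points from an (n, n) matrix D of pairwise Euclidean distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n        # centring matrix
    B = -0.5 * J @ (D ** 2) @ J                # double-centred Gram matrix
    eigval, eigvec = np.linalg.eigh(B)         # eigenvalues in ascending order
    idx = np.argsort(eigval)[::-1][:dim]       # keep the 'dim' largest
    return eigvec[:, idx] * np.sqrt(np.maximum(eigval[idx], 0.0))

# Toy example: a noisy 3-D helix projected down to 2-D.
rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 100)
X = np.c_[np.cos(t), np.sin(t), 0.2 * t] + 0.01 * rng.normal(size=(100, 3))
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
Y = classical_mds(D, dim=2)
print(Y.shape)                                 # (100, 2)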

Topics to be addressed (this list is not exhaustive):

Exploratory data analysis
Similarity, distance metrics
Vector quantization
Multidimensional scaling, Self-Organized Maps
Visualization
High Order Statistics
Human-driven exploration
Applications (Vision, Genetics, Astronomy, Psychometrics...)


3. Neural Network Techniques in Fault Detection and Isolation
-------------------------------------------------------------
Organised by :
Silvio Simani, Department of Engineering, University of Ferrara

Background:
In recent years neural networks have been successfully exploited in pattern
recognition as well as in function approximation, and they have therefore
been proposed as a possible technique for fault diagnosis, too. Neural
networks can handle non-linear behaviour and partially known processes,
because they learn the diagnostic requirements from the information
contained in the training data. They are noise tolerant, and their abilities
to generalise knowledge and to adapt during use are extremely interesting
properties.

Contribution topics:
Against this background, session contributions should concern neural network
methods for fault diagnosis and identification that can be applied to a
broad spectrum of processes. In particular, papers may address the following
items:

1. Diagnosis problem (fault detection and isolation, FDI). Neural networks
are exploited to estimate unknown relationships between symptoms and faults.
In this way, residuals generated by means of model-based techniques depend
only on system faults. Neural networks are therefore able to evaluate
patterns of residuals that are uniquely related to particular fault
conditions, independently of the plant dynamics (an illustrative sketch
covering both items follows after item 2).

2. Identification problem for FDI. Neural networks can also be exploited for
the identification of complex dynamic processes. Such structures can
therefore be used successfully to describe the input-output behaviour of the
monitored systems. Moreover, on the basis of the analytical redundancy
principle, the identified non-linear models can then be applied to the
development of model-based FDI algorithms.
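
Purely as an illustration of items 1 and 2, and under entirely invented
assumptions (a toy first-order plant, synthetic signals, a small
one-hidden-layer network), the following NumPy sketch identifies a
one-step-ahead model from normal-operation data and then uses the prediction
error as a residual; here the residual is evaluated by a simple magnitude
comparison rather than by a trained residual classifier.

import numpy as np

rng = np.random.default_rng(0)

def plant(u, fault=0.0):
    """Toy first-order non-linear plant; 'fault' models an output sensor bias."""
    y = np.zeros(len(u))
    for k in range(1, len(u)):
        y[k] = 0.8 * np.tanh(y[k - 1]) + 0.5 * u[k - 1]
    return y + fault

# Identification data recorded under normal operation.
u = rng.uniform(-1, 1, size=2000)
y = plant(u)
X = np.c_[y[:-1], u[:-1]]                  # regressors y(k-1), u(k-1)
t = y[1:]                                  # target y(k)

# One-hidden-layer one-step-ahead predictor, trained by batch gradient descent.
n_hid = 10
W1 = rng.normal(scale=0.5, size=(2, n_hid))
b1 = np.zeros(n_hid)
w2 = rng.normal(scale=0.5, size=n_hid)
b2 = 0.0
lr = 0.1
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)
    err = H @ w2 + b2 - t                  # one-step prediction error
    dH = np.outer(err, w2) * (1.0 - H ** 2)
    w2 -= lr * H.T @ err / len(t)
    b2 -= lr * err.mean()
    W1 -= lr * X.T @ dH / len(t)
    b1 -= lr * dH.mean(axis=0)

def residuals(u_seq, y_seq):
    """Residual = measured output minus the model's one-step prediction."""
    X_ = np.c_[y_seq[:-1], u_seq[:-1]]
    return y_seq[1:] - (np.tanh(X_ @ W1 + b1) @ w2 + b2)

u_test = rng.uniform(-1, 1, size=500)
r_ok    = residuals(u_test, plant(u_test))             # healthy plant
r_fault = residuals(u_test, plant(u_test, fault=0.5))  # biased output sensor
print("mean |residual|  normal:", np.abs(r_ok).mean())
print("mean |residual|  faulty:", np.abs(r_fault).mean())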


4. Hardware and Parallel Computer Implementations of Neural Networks
--------------------------------------------------------------------
Organised by :
Udo Seiffert, University of Magdeburg (Germany)

Description of the session:
There are three major reasons to implement neural networks on specialised
hardware. The first comes from the internal topology and data flow of the
network itself, which provides, depending on the network type considered, a
more or less massive parallelism. While parallel implementations for this
reason are often intended to match a technical model as closely as possible
to its biological original, the second objective becomes evident when
dealing with large-scale networks and increasing training times: in this
case, parallel computer hardware can significantly accelerate the training
of existing networks or make their realisation feasible at all. Third, a
particular piece of hardware, not necessarily parallel, is sometimes
essential to meet the requirements of a practical application.

This session covers all three topics and tries to reflect the great
diversity of dedicated hardware implementations from the neural networks
point of view.
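
As a hedged, purely illustrative sketch of the second motivation above,
acceleration through parallel computation, the following Python fragment
distributes the gradient computation for a toy linear least-squares model
across worker processes; the model, the data, and the four-way split are
arbitrary choices for the example and are not tied to any particular
hardware discussed in the session.

import numpy as np
from concurrent.futures import ProcessPoolExecutor

def shard_gradient(args):
    """Least-squares gradient of a linear model on one shard of the data."""
    w, X, y = args
    return X.T @ (X @ w - y)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 20))             # toy data set
    y = X @ rng.normal(size=20) + 0.01 * rng.normal(size=100_000)
    w = np.zeros(20)
    shards = 4                                     # one shard per worker
    Xs, ys = np.array_split(X, shards), np.array_split(y, shards)
    lr = 5e-6
    with ProcessPoolExecutor(max_workers=shards) as pool:
        for _ in range(50):
            grads = pool.map(shard_gradient,
                             [(w, Xi, yi) for Xi, yi in zip(Xs, ys)])
            w -= lr * sum(grads)                   # sum the shard gradients
    print("RMS error:", np.sqrt(np.mean((X @ w - y) ** 2)))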


5. Exploratory Data Analysis in Medicine and Bioinformatics
-----------------------------------------------------------
Organised by :
Axel Wismüller, Institut für Klinische Radiologie, Ludwig-Maximilians-Univ.
München (Germany)
Thomas Villmann, Klinik für Psychotherapie, Universität Leipzig (Germany)

Description of the session:
Biomedical research is a challenge to neural network computation. As medical
doctors and bioscientists face vast, rapidly growing amounts of data, the
need for advanced exploratory data analysis techniques increasingly moves
into the focus of attention. In this context, artificial neural networks, as
a special kind of learning and self-adapting data processing system, have
considerable contributions to offer. Their ability to handle noisy and
high-dimensional data, nonlinear problems, large data sets, etc. has led to
a wide scope of successful applications in biomedicine.

Beyond the classical subjects of neural computation in biomedicine, such as
computer-aided diagnosis or biomedical image processing, new application
domains are discovering the conceptual power of artificial neural networks
for exploratory data analysis and visualization. As an important example,
the subject of "bioinformatics" has emerged in recent years as a promising
application domain of growing importance for both basic biomedical research
and clinical application.

Neural network computation in biomedicine can be driven by different
motivations. At least two main directions have to be distinguished: the
first is the description of neural processes in brains by neural network
models; the other is to exploit neural computation techniques for biomedical
data analysis. This special session focuses on the second.

Although, at first glance, the growing number of applications in the field
may seem encouraging, considerable problems remain unsolved. In particular,
there is a need for continued research emphasizing quality assessment,
including critical comparative evaluation of competing biosignal processing
algorithms with respect to the specific constraints of given application
domains. In this context, it becomes increasingly clear that knowledge of
neural network theory alone is not sufficient for designing successful
applications that solve relevant real-world problems in biomedicine. What is
required as well is a sound knowledge of the data, i.e. of the underlying
application domain. Although there may be methodological similarities, each
application requires specific, careful consideration with regard to data
preprocessing, postprocessing, interpretation, and quality assessment. This
challenge can only be met by close interdisciplinary cooperation among
medical doctors, biologists, engineers, and computer scientists. Hence, this
subject can serve as an example of lively cross-fertilization between neural
network computing and related research.

In the proposed special session we want to focus on exploratory data
analysis in medicine and bioinformatics based on neural networks as well as
other advanced methods of computational intelligence. A special emphasis is
put on real-world applications combining original ideas and new developments
with a strong theoretical background.

Authors are invited to submit contributions in any area of medical research
or bioinformatics applications. The following non-restrictive list can serve
as an orientation; additional topics may be chosen as well:
time-series analysis (EEG, EKG analysis, sleep monitoring, ...)
pattern classification, clustering
functional and structural genomics
blind source separation and decorrelation
dimension and noise reduction
evaluation of non-metric data (e.g. categorical, ordinal data)
hybrid systems
decision support systems
data mining
quality assessment
knowledge-data fusion


6. Neural Networks and Cognitive Science
----------------------------------------
Organised by :
Hélène Paugam-Moisy, Université Lyon 2 (France)
Didier Puzenat, Université Antilles-Guyane (France)

Description of the session:
First, neural networks were inspired by cognitive processes [PDP,1986].
Second, they have proved to be very efficient computing tools for
engineering, financial and medical applications, among others.

The aim of this session is to point out that there is still great interest,
for both engineering and cognitive science, in exploring more deeply the
links between natural and artificial neural systems from a theoretical point
of view. On the one hand: how to define more complex learning rules adapted
to heterogeneous neural networks, and how to build modular multi-network
systems for modelling cognitive processes. On the other hand: how to derive,
in return, new and interesting learning paradigms for artificial neural
networks, and how to design systems that perform better than classical basic
connectionist models. In particular, the power of parallel distributed
processing is far from being fully understood, and new ideas can be found in
cognitive science both for boosting the efficiency of parallel computing and
for designing more efficient learning rules.

[PDP,1986] Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, D. E. Rumelhart, J. L. McClelland and the PDP
Research Group, MIT Press, 1986.



========================================================
ESANN - European Symposium on Artificial Neural Networks
http://www.dice.ucl.ac.be/esann

* For submissions of papers, reviews,...
Michel Verleysen
Univ. Cath. de Louvain - Microelectronics Laboratory
3, pl. du Levant - B-1348 Louvain-la-Neuve - Belgium
tel: +32 10 47 25 51 - fax: +32 10 47 25 98
mailto:esann@dice.ucl.ac.be

* Conference secretariat
d-side conference services
24 av. L. Mommaerts - B-1140 Evere - Belgium
tel: +32 2 730 06 11 - fax: +32 2 730 06 00
mailto:esann@dice.ucl.ac.be
========================================================




