
Wlodzislaw Duch (Dr) ASWDuch at ntu.edu.sg
Mon Jun 5 16:42:55 EDT 2006


Dear Connectionists,

Here are five of my recent computational intelligence papers for your
comments:

1. Duch W (2003) Support Vector Neural Training.
Submitted to IEEE Transactions on Neural Networks, November 2003
http://www.phys.uni.torun.pl/publications/kmk/03-SVNT.html

2. Duch W (2003) Uncertainty of data, fuzzy membership functions, and
multi-layer perceptrons.
Submitted to IEEE Transactions on Neural Networks, November 2003
http://www.phys.uni.torun.pl/publications/kmk/03-uncert.html

3. Duch W (2003) Coloring black boxes: visualization of neural network
decisions.
International Joint Conference on Neural Networks
http://www.phys.uni.torun.pl/publications/kmk/03-IJCNN.html

4. Kordos M, Duch W (2003)
On Some Factors Influencing MLP Error Surface.
The Seventh International Conference on Artificial Intelligence and Soft
Computing (ICAISC)
http://www.phys.uni.torun.pl/publications/kmk/03-MLPerrs.html

5. Duch W (2003) Brain-inspired conscious computing architecture.
Submitted to the Journal of Mind and Behavior, October 2003
http://www.phys.uni.torun.pl/publications/kmk/03-Brainins.html

All these papers (and quite a few more) are linked from my page:
http://www.phys.uni.torun.pl/~duch/cv/papall.html

Here are the abstracts:

1. Support Vector Neural Training.

Neural networks are usually trained on all available data. Support
Vector Machines start from all data, but near the end of training use
only a small subset of vectors near the decision border. The same
learning strategy may be used in neural networks, independently of the
actual optimization method. A feedforward step identifies vectors that
will not contribute to the optimization, and the threshold for
accepting vectors as useful for training is adjusted dynamically
during learning, avoiding excessive oscillations in the number of
support vectors. Benefits of this approach include faster training,
higher accuracy of the final solutions, and identification of a small
number of support vectors near the decision borders. Results on
satellite image classification and hypothyroid disease obtained with
this type of training are better than any other neural network results
published so far.
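
To make the selection idea concrete, here is a minimal sketch in
Python/NumPy; a toy logistic unit stands in for a full MLP, and the
threshold schedule is an illustrative assumption, not the exact rule
from the paper:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float)   # toy two-class problem

w, b = np.zeros(2), 0.0
lr, eps = 0.5, 0.3                          # eps: acceptance threshold

for epoch in range(200):
    # Feedforward step on ALL data, used only to select useful vectors.
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    # Keep vectors whose output is still far from the target; these lie
    # near the decision border and still contribute to the optimization.
    sel = np.abs(p - y) > eps
    if not sel.any():
        break
    # Gradient step on the selected subset only.
    g = p[sel] - y[sel]
    w -= lr * (g @ X[sel]) / sel.sum()
    b -= lr * g.mean()
    # Raise the threshold slowly, so the number of selected vectors
    # does not oscillate wildly between epochs (hypothetical schedule).
    eps = min(eps * 1.02, 0.45)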

2. Uncertainty of data, fuzzy membership functions, and multi-layer
perceptrons.

The probability that a crisp logical rule applied to imprecise input
data is true may be computed using a fuzzy membership function. All
reasonable assumptions about input uncertainty distributions lead to
membership functions of sigmoidal shape. Convolution of several inputs
with uniform uncertainty leads to bell-shaped, Gaussian-like
uncertainty functions. Relations between input uncertainties and fuzzy
rules are systematically explored, and several new types of membership
functions are discovered. Multi-layer perceptron (MLP) networks are
shown to be a particular implementation of hierarchical sets of fuzzy
threshold logic rules based on sigmoidal membership functions; they
are equivalent to crisp logical networks applied to input data with
uncertainty. Leaving the fuzziness on the input side makes the
networks and the rule systems easier to understand. Practical
applications of these ideas are presented for the analysis of
questionnaire data and gene expression data.
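
A short sketch of the central observation, for the crisp rule
"x > theta" under two simple noise models (the function names are
illustrative): uniform input uncertainty yields a piecewise-linear
sigmoid, while Gaussian uncertainty yields an erf-shaped one.

import numpy as np
from scipy.stats import norm

def mu_uniform(x, theta, delta):
    # P(x + u > theta) for u ~ Uniform(-delta, delta):
    # a semi-linear (piecewise-linear) sigmoid in x.
    return np.clip((x - theta + delta) / (2.0 * delta), 0.0, 1.0)

def mu_gaussian(x, theta, sigma):
    # P(x + n > theta) for n ~ N(0, sigma^2):
    # an erf-shaped sigmoid, close to the logistic function.
    return norm.cdf((x - theta) / sigma)

x = np.linspace(-3.0, 3.0, 601)
m_uni = mu_uniform(x, theta=0.0, delta=1.0)
m_gau = mu_gaussian(x, theta=0.0, sigma=0.5)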

3. Coloring black boxes: visualization of neural network decisions.

Neural networks are commonly regarded as black boxes performing
incomprehensible functions. For classification problems, networks
provide maps from a high-dimensional feature space to a K-dimensional
image space. Images of the training vectors are projected onto polygon
vertices, providing a visualization of the network function. Such
visualization may show the dynamics of learning, allow comparison of
different networks, display training vectors around which potential
problems may arise, show differences due to regularization and
optimization procedures, help investigate the stability of network
classification under perturbation of the original vectors, and place a
new data sample in relation to the training data, allowing estimation
of the confidence in its classification. Illustrative examples for the
three-class Wine data and the five-class Satimage data are described.
The visualization method proposed here is applicable to any black-box
system that provides continuous outputs.
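
A minimal sketch of the polygon projection, assuming the K outputs are
non-negative and used as weights on the vertices of a regular K-gon
(the normalization and vertex ordering here are assumptions, not the
paper's exact recipe):

import numpy as np

def polygon_projection(outputs):
    # Map (n_samples, K) non-negative network outputs to 2-D points
    # inside a regular K-gon whose vertices represent the K classes.
    n, k = outputs.shape
    angles = 2.0 * np.pi * np.arange(k) / k
    vertices = np.stack([np.cos(angles), np.sin(angles)], axis=1)  # (K, 2)
    weights = outputs / outputs.sum(axis=1, keepdims=True)
    return weights @ vertices                                      # (n, 2)

# Confident samples land near "their" vertex; uncertain ones drift
# toward the edges and the center of the polygon.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.40, 0.20]])
xy = polygon_projection(probs)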

4. On Some Factors Influencing MLP Error Surface.

Visualization of MLP error surfaces helps to understand the influence
of network structure and training data on neural learning dynamics.
PCA is used to determine two orthogonal directions that capture almost
all of the variance in weight space, and 3-dimensional plots in this
plane show many aspects of the original error surfaces.
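
A brief sketch of the procedure, assuming weight snapshots are
recorded during training; the model and loss function are
placeholders supplied by the caller:

import numpy as np

def pca_directions(weight_history):
    # First two principal directions of the training trajectory in
    # weight space; weight_history has shape (n_snapshots, n_weights).
    W = weight_history - weight_history.mean(axis=0)
    _, _, vt = np.linalg.svd(W, full_matrices=False)
    return vt[0], vt[1]

def error_surface(loss_fn, w_final, d1, d2, span=2.0, steps=25):
    # Evaluate the loss on a grid w_final + a*d1 + b*d2 in the PCA
    # plane; the resulting matrix can be drawn as a 3-D surface.
    a = np.linspace(-span, span, steps)
    b = np.linspace(-span, span, steps)
    return a, b, np.array([[loss_fn(w_final + ai * d1 + bi * d2)
                            for ai in a] for bi in b])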


5. Brain-inspired conscious computing architecture.

What type of artificial systems will claim to be conscious and claim
to experience qualia? The ability to comment upon the physical states
of a brain-like dynamical system coupled with its environment seems to
be sufficient to make such claims. The flow of internal states in such
a system, guided and limited by associative memory, is similar to the
stream of consciousness. Minimal requirements for an artificial system
that will claim to be conscious are given in the form of a specific
architecture, named an articon. Nonverbal discrimination of the
working memory states of the articon gives it the ability to
experience different qualities of internal states. Analysis of the
inner state flows of such a system during a typical behavioral process
shows that qualia are inseparable from perception and action. The role
of consciousness in the learning of skills, when conscious information
processing is replaced by subconscious processing, is elucidated.
Arguments confirming that phenomenal experience is a result of
cognitive processes are presented. Possible philosophical objections
based on the Chinese room and other arguments are discussed, but they
are insufficient to refute the articon's claims. Conditions for
genuine understanding that go beyond the Turing test are presented.
Articons may fulfill such conditions, and in principle the structure
of their experiences may be arbitrarily close to human experience.

With best regards for the coming year,

Wlodzislaw Duch
Dept. of Informatics, Nicolaus Copernicus University
Dept. of Computer Science, SCE NTU, Singapore
http://www.phys.uni.torun.pl/~duch
http://www.ntu.edu.sg/home/aswduch/



