Contents of Neurocomputing 25 (1998)

Georg Thimm mgeorg at SGraphicsWS1.mpe.ntu.edu.sg
Thu Apr 8 16:39:06 EDT 1999


Dear reader,

Please find below a compilation of the contents of Neurocomputing and of Scanning
the Issue written by V. David Sanchez A.  More information on the journal is
available at the URL http://www.elsevier.nl/locate/neucom .

The contents of this and other journals published by Elsevier are also
distributed by the ContentsDirect service (see the URL
http://www.elsevier.nl/locate/ContentsDirect).

Please feel free to redistribute this message. My apologies if this message
is inappropriate for this mailing list; I would appreciate feedback.


With kindest regards,

     Georg Thimm



Dr. Georg Thimm                                        Tel ++65 790 5010
Design Research Center, School of MPE,                 Email: mgeorg at ntu.edu.sg
Nanyang Technological University, Singapore 639798 

********************************************************************************

Vol. 25 (1-3) Scanning the Issue

I. Santamaria, M. Lazaro, C.J. Pantaleon, J.A. Garcia, A. Tazon, and A.
Mediavilla describe "A nonlinear MESFET model for intermodulation analysis using
a generalized radial basis function network". The transistor bias voltages are
input to the GRBF network, which maps them onto the derivatives of the
drain-to-source current associated with the intermodulation properties.
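The general idea of a GRBF mapping can be sketched generically: Gaussian basis
functions centered in the input space are linearly combined, and the output
weights are fitted by least squares. This is only an illustrative sketch, not
the authors' model; the input dimensions, targets, and all parameter values
below are made up.

```python
import numpy as np

def grbf_predict(x, centers, widths, weights):
    # Gaussian radial basis functions: phi_j(x) = exp(-||x - c_j||^2 / (2*s_j^2))
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * widths ** 2))
    return phi @ weights

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(50, 2))    # stand-in for normalized bias-voltage pairs
y = np.sin(X[:, 0]) * np.cos(X[:, 1])   # stand-in for a smooth derivative surface

# Fit the output weights by linear least squares over the basis responses
centers = X[:10]
widths = np.full(10, 0.5)
phi = np.exp(-((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
             / (2.0 * widths ** 2))
weights, *_ = np.linalg.lstsq(phi, y, rcond=None)
pred = grbf_predict(X, centers, widths, weights)
mse = np.mean((pred - y) ** 2)
```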

In "Solving dynamic optimization problems with adaptive networks" Y.  Takahashi
systematically constructs adaptive networks (AN) from a given dynamic
optimization problem (DOP) that generate a locally minimal solution to the
problem. The construction of a solution for the Dynamic Traveling Salesman
Problem (DTSP) is shown as an example.

L.M. Patnaik and S. Udaykumar discuss in "Mapping adaptive resonance theory onto
ring and mesh architectures" different strategies to parallelize ART2-A
networks. The parallel architectural simulator PROTEUS is used. Simulations show
that the speedup obtained for the ring architecture is higher than the one
obtained for the mesh architecture.

H.-H. Chen, M.T. Manry, and H. Chandrasekaran use in "A neural network training
algorithm utilizing multiple sets of linear equations" output weight
optimization (OWO), hidden weight optimization (HWO), and Backpropagation in
training algorithms. Simulations show that the combined OWO-HWO technique is
more effective than the OWO-BP and the Levenberg-Marquardt methods for training
MLP networks.
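The OWO idea of reducing part of MLP training to linear equations can be
illustrated generically: with the hidden weights held fixed, the optimal output
weights of a single-hidden-layer network follow from a linear least-squares
system. This is a sketch under that assumption, not the paper's algorithm; the
data and dimensions are invented.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=100)

# Fixed (here: random) hidden weights; only the output weights are optimized
W_hidden = rng.normal(size=(3, 8))
H = np.tanh(X @ W_hidden)                # hidden-layer activations
Ha = np.hstack([H, np.ones((100, 1))])   # append a column for the output bias

# OWO-style step: solve the linear system Ha w = y in the least-squares sense
w_out, *_ = np.linalg.lstsq(Ha, y, rcond=None)
pred = Ha @ w_out
mse = np.mean((pred - y) ** 2)
```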

Y. Baram presents in "Bayesian classification by iterated weighting" a modular
and separate calculation of the likelihoods and the weights. This allows for the
use of any density estimation method. The likelihoods are estimated by
parametric optimization; the weights are estimated using iterated
averaging. The results obtained are similar to those generated using the
expectation-maximization method.

V. Maiorov and A. Pinkus prove "Lower bounds for approximation by MLP neural
networks" including that any continuous function on any compact domain can be
approximated arbitrarily well by a two hidden layer MLP with a fixed number of
units per layer. The degree of approximation for an MLP with n hidden units is
bounded by the degree of approximation of n ridge functions linearly combined.

In "Developing robust non-linear models through bootstrap aggregated neural
networks" J. Zhang describes a technique for aggregating multiple networks.
Bootstrap techniques are used to resample data into training and test data
sets. Combination of the individual network models is done by principal
component regression. More accurate and more robust results are obtained than
when using single networks.
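The bootstrap-aggregation scheme can be sketched generically: each model is
fitted on a resample of the data drawn with replacement, and the predictions
are combined. As a simplification, plain averaging is shown below and simple
polynomial regressors stand in for the neural networks (Zhang combines the
networks by principal component regression); all data are invented.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(80, 1))
y = np.sin(X[:, 0]) + 0.2 * rng.normal(size=80)

def fit_model(Xb, yb, deg=5):
    # Stand-in "network": a degree-5 polynomial fitted by least squares
    return np.polyfit(Xb[:, 0], yb, deg)

# Bagging: each model sees a bootstrap resample of the training data
models = []
for _ in range(20):
    idx = rng.integers(0, 80, size=80)   # sample indices with replacement
    models.append(fit_model(X[idx], y[idx]))

# Aggregate the individual predictions (here by simple averaging)
pred = np.mean([np.polyval(c, X[:, 0]) for c in models], axis=0)
mse = np.mean((pred - y) ** 2)
```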

S.-Y. Cho and T.W.S. Chow describe a new heuristic for global learning in
"Training multilayer neural networks using fast global learning algorithm *
Least squares and penalized optimization methods". Classification problems are
used to confirm that a higher convergence speed and a better ability to escape
local minima are achieved with the new algorithm compared to other conventional
methods.

In "A novel entropy-constrained competitive learning algorithm for vector
quantization" W.-J. Hwang, B.-Y. Ye, and S.-C. Liao develop the
entropy-constrained competitive learning (ECCL) algorithm. This algorithm
outperforms the entropy-constrained vector quantizer (ECVQ) design algorithm
when the same rate constraint and initial codewords are used.
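The flavor of entropy-constrained competitive learning can be sketched
generically: the winning codeword minimizes squared distortion plus a rate term
that penalizes rarely used codewords, and only the winner is updated. This is
an illustrative sketch, not the ECCL algorithm of the paper; the data, codebook
size, and the parameters lam and lr are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
data = rng.normal(size=(200, 2))
codebook = data[:4].copy()   # initial codewords taken from the data
counts = np.ones(4)          # usage counts, used to estimate codeword probabilities
lam, lr = 0.1, 0.05          # rate-distortion trade-off and learning rate

for x in data:
    # Modified cost: squared error plus -log2(p_j), so rarely used
    # codewords cost more bits (the entropy constraint)
    p = counts / counts.sum()
    cost = ((codebook - x) ** 2).sum(axis=1) - lam * np.log2(p)
    j = int(np.argmin(cost))
    codebook[j] += lr * (x - codebook[j])   # competitive update of the winner only
    counts[j] += 1
```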

K.B. Eom presents a "Fuzzy clustering approach in unsupervised sea ice
classification". Images taken by multichannel passive microwave imagers are
used as input. The sea ice types in polar regions are determined by
clustering. Hard clustering methods do not apply due to the fuzzy nature of the
boundaries between different sea-ice types.
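A standard fuzzy clustering method that handles such soft boundaries is fuzzy
c-means, in which each sample receives a graded membership in every cluster.
The sketch below is the textbook FCM iteration on invented two-dimensional
data, not the classifier of the paper.

```python
import numpy as np

def fuzzy_cmeans(X, c=3, m=2.0, iters=50, seed=0):
    # Textbook fuzzy c-means: alternate between center and membership updates
    rng = np.random.default_rng(seed)
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)   # fuzzy memberships, rows sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
        inv = d2 ** (-1.0 / (m - 1))        # u_ij proportional to d_ij^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Usage on two well-separated synthetic clusters
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.3, (40, 2)), rng.normal(3, 0.3, (40, 2))])
U, centers = fuzzy_cmeans(X, c=2)
```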

G.-J. Wang and T.-C. Chen introduce "A robust parameters self-tuning learning
algorithm for multilayer feedforward neural network". Automatic adjustment of
learning parameters such as the learning rate and the momentum can be achieved
with this new algorithm. It not only outperforms the error backpropagation (EBP)
algorithm in terms of convergence, but is also less sensitive to the initial
weights.

In "Neural computation for robust approximate pole assignment" D.W.C. Ho, J.
Lam, J. Xu, and H.K. Tam pose the problem of output feedback robust approximate
pole assignment as an unconstrained optimization problem and solve it using a
neural architecture and the gradient flow formulation. This formulation allows
for a simple recurrent neural network realization.


I appreciate the cooperation of all those who submitted their work for
inclusion in this issue.

V. David Sanchez A.

Neurocomputing * Editor-in-Chief *

********************************************************************************
