Papers on Ridge Polynomial Networks and on Generalization

Joydeep Ghosh ghosh at pine.ece.utexas.edu
Tue Apr 11 17:31:31 EDT 1995


=========================== Paper announcement ========================

The following two papers are available via anonymous ftp:

FTP-host: www.lans.ece.utexas.edu  (128.83.52.78)
filenames: /pub/papers/rpn_paper.ps.Z  and /pub/papers/struc_adapt_jann94.ps.Z

More conveniently, they can be retrieved from the home page of the
Lab for Artificial Neural Systems (LANS) at the University of Texas
at Austin:

http://www.lans.ece.utexas.edu

where, under "selected publications", the abstracts of more than 40 papers 
can be viewed and the corresponding .ps.Z files can be downloaded.

=====================================================================

		RIDGE POLYNOMIAL NETWORKS
	(to appear, IEEE Trans. Neural Networks)

		Yoan Shin and Joydeep Ghosh

This paper presents a polynomial connectionist network called the
RIDGE POLYNOMIAL NETWORK (RPN) that can uniformly approximate any
continuous function on a compact set in the multi-dimensional input
space $\mathbb{R}^{d}$ to any degree of accuracy. This network
provides a more efficient and regular architecture than ordinary
higher-order feedforward networks while maintaining their fast
learning property.

The ridge polynomial network is a generalization of the pi-sigma
network and uses a special form of ridge polynomials. It is shown
that any multivariate polynomial can be represented in this form
and realized by an RPN. The approximation capability of RPNs
follows from this representation theorem and the Weierstrass
polynomial approximation theorem. The RPN provides a natural
mechanism for incremental network growth. Simulation results on a
surface-fitting problem, the classification of high-dimensional
data, and the realization of a multivariate polynomial function are
given to highlight the capabilities of the network. In particular,
a constructive learning algorithm developed for the network is
shown to yield smooth generalization and steady learning.
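
For readers who want a concrete picture of the architecture, the
sketch below implements a forward pass through a degree-N RPN as the
abstract describes it: a sum of pi-sigma units, where the degree-i
unit is the product of i ridge (affine) functions of the input,
passed through a squashing output nonlinearity. This is a minimal
illustration, not the authors' code; the function name, the tanh
output unit, and the random toy weights are all assumptions made
here.

    import numpy as np

    def rpn_forward(x, weights, biases):
        """Forward pass of a degree-N ridge polynomial network (sketch).

        The degree-i pi-sigma unit is a product of i ridge functions,
            P_i(x) = prod_{j=1..i} (w_ij . x + b_ij),
        and the RPN output is sigma(sum_{i=1..N} P_i(x)).
        weights[i-1] has shape (i, d); biases[i-1] has shape (i,).
        """
        total = 0.0
        for W, b in zip(weights, biases):
            ridge = W @ x + b           # the i affine (ridge) functions
            total += np.prod(ridge)     # pi-sigma unit: product of sums
        return np.tanh(total)           # squashing output (an assumption)

    # Toy usage: a degree-3 RPN on a 4-dimensional input.
    rng = np.random.default_rng(0)
    d, N = 4, 3
    weights = [rng.normal(size=(i, d)) for i in range(1, N + 1)]
    biases = [rng.normal(size=i) for i in range(1, N + 1)]
    print(rpn_forward(rng.normal(size=d), weights, biases))

In this representation, the incremental growth mentioned in the
abstract would amount to appending a new pi-sigma unit of the next
degree when the error of the current network plateaus.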
=====================================================================

	STRUCTURAL ADAPTATION AND GENERALIZATION
	IN SUPERVISED FEED-FORWARD NETWORKS

(Journal of Artificial Neural Networks, 1(4), 1994, pp. 431-458.)

		Joydeep Ghosh and Kagan Tumer

This work explores diverse techniques for improving the
generalization ability of supervised feed-forward neural networks
via structural adaptation, and introduces a new network structure
with sparse connectivity. Pruning methods, which start from a large
network and trim it until a satisfactory solution is reached, are
studied first. Then construction methods, which build a network up
from a simple initial configuration, are presented. A survey of
related results from the disciplines of function approximation
theory, nonparametric statistical inference, and estimation theory
leads to methods for principled architecture selection and for
estimating prediction error. A network based on sparse connectivity
is proposed as an alternative to adaptive networks. The
generalization ability of this network is improved by partly
decoupling the outputs. We present numerical simulations and
comparative results on both classification and regression problems
to demonstrate the generalization ability of the sparse network.
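
To make the start-large-then-trim idea concrete, here is a minimal
magnitude-based pruning sketch. The paper surveys several pruning
criteria; the global magnitude threshold used below is just one
simple instance, and the function name and the 20% pruning fraction
are assumptions made for illustration.

    import numpy as np

    def prune_by_magnitude(weight_matrices, fraction=0.2):
        """Zero out the smallest-magnitude fraction of all weights (sketch).

        After pruning, the trimmed network would be retrained and the
        prune/retrain cycle repeated until validation error stops
        improving -- the start-large-then-trim strategy described above.
        """
        mags = np.concatenate([np.abs(W).ravel() for W in weight_matrices])
        threshold = np.quantile(mags, fraction)     # global cutoff
        masks = [np.abs(W) >= threshold for W in weight_matrices]
        return [W * m for W, m in zip(weight_matrices, masks)], masks

    # Toy usage: prune 20% of the weights of a 4-8-2 network.
    rng = np.random.default_rng(1)
    W1, W2 = rng.normal(size=(8, 4)), rng.normal(size=(2, 8))
    (W1p, W2p), _ = prune_by_magnitude([W1, W2])
    print((W1p == 0).sum() + (W2p == 0).sum(), "weights removed")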

===========================repeat FTP info ========================

FTP-host: www.lans.ece.utexas.edu  (128.83.52.78)
filenames: /pub/papers/rpn_paper.ps.Z  and /pub/papers/struc_adapt_jann94.ps.Z

*************  SORRY, NO HARD COPIES ***********


