A NIPS*94 WORKSHOP

Dr. Xu Lei lxu at cs.cuhk.hk
Wed Oct 26 10:21:47 EDT 1994



		      A NIPS*94 POST CONFERENCE WORKSHOP


       UNSUPERVISED LEARNING  RULES AND VISUAL PROCESSING

      Lei Xu (1),  Zhaoping Li (2)  and  Laiwan Chan (1)

             1. The Chinese University of Hong Kong

	2. Hong Kong University of Science and Technology



There are three major types of unsupervised learning rules: the competitive
learning or vector quantization type, the information-preserving or
Principal Component Analysis (PCA) type, and the self-organizing
topological map type. All of them are closely related to visual processing.
For instance, they have been used to interpret the development of orientation-
and other feature-selective cells, as well as the development of cortical
maps such as ocular dominance and orientation columns.
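As a concrete illustration of the PCA-type rules above, here is a minimal
NumPy sketch of Oja's single-unit learning rule (the subject of the program's
first talk). The synthetic data, learning rate, and random seed are
illustrative assumptions, not part of this announcement:

```python
# A minimal sketch of Oja's single-unit PCA rule: dw = eta * y * (x - y * w),
# where y = w . x.  The data and constants below are made-up examples.
import numpy as np

rng = np.random.default_rng(0)

# 2-D Gaussian data: std 2.0 along one axis, 0.3 along the other,
# rotated by 45 degrees so the principal direction is oblique.
theta = np.pi / 4
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
data = (rng.normal(size=(5000, 2)) * np.array([2.0, 0.3])) @ rot.T

w = rng.normal(size=2)          # random initial weight vector
eta = 0.005                     # small step size for stability
for x in data:
    y = w @ x                   # linear unit's output
    w += eta * y * (x - y * w)  # Hebbian growth plus Oja's decay term

# w converges (up to sign) to the unit-norm leading eigenvector of the
# data covariance, i.e. the first principal component.
print("norm of w:", np.linalg.norm(w))
print("w:", w)
```

The decay term `- eta * y**2 * w` is what distinguishes Oja's rule from plain
Hebbian learning: it keeps the weight norm bounded near 1 without an explicit
normalization step.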

The study of learning and the understanding of visual processing facilitate
each other. In recent years, a number of advances have been made in both
areas. For instance, in the area of unsupervised learning, (1) numerous
algorithms for competitive learning, PCA learning, and self-organizing maps
have been proposed; (2) several new theories and principles, such as maximum
coherence, minimum description length, finite mixtures with EM learning,
statistical physics, Bayesian theory, exploratory projection pursuit, and
local PCA, have been developed; (3) theories for unifying various
unsupervised learning rules (e.g., multisets modeling learning theory) have
been explored. In the area of visual processing, more knowledge is being
gathered experimentally about how visual development can be preserved or
altered by neural activities, neural transmitters/receptors, the visual
environment, etc., providing bases and constraints for various learning
rules and motivating new studies of learning rules. In addition, there is
more theoretical understanding of the dependence of visual processing units
on the visual input environment, supporting the rationale for unsupervised
learning.

The purpose of this workshop is twofold: (1) to summarize the advances in
unsupervised learning and to discuss whether they can help the
investigation of the visual processing system; (2) to survey the current
results on visual processing and to check whether they can motivate, or
provide hints for, the development of unsupervised learning theories. The
targeted participants are researchers working in the study of learning, the
study of visual processing, or both.


 -----------------------------------------------------------------------------

	                NIPS*94 WORKSHOP PROGRAM



The Friday morning session 7:30-9:30AM, Dec. 2, chairperson: Lei Xu

1. "Time-Domain Solutions of Oja's Equations", 
John Wyatt and Ibrahim Elfadel (MIT)

2. "Kmeans Performs Newton Optimization", 
Leon Bottou (Neuristique Paris) and Yoshua Bengio (University of Montreal)

3. "Multisets Modeling Learning: A Unified Framework for
 Unsupervised Learning", 
Lei Xu (The Chinese University of Hong Kong and Peking University)

4. "Information Theory Motivation For Projection Pursuit", 
Nathan Intrator (Tel-Aviv University)

5. "The Helmholtz Machine", 
Peter Dayan  (University of Toronto)




The Friday evening session 4:30-6:30PM, Dec. 2, chairperson: Zhaoping Li


1. "Predictability Minimization And Visual Processing",  
Juergen Schmidhuber (Technische Universitaet Muenchen)

2. "Non-linear, Non-gaussian Information Maximisation: Why It's More Useful", 
Tony Bell (Salk Institute)

3. "Understanding The Visual Cortical Coding From Visual Input Statistics",
Zhaoping Li (Hong Kong University of Science and Technology)

4. "Formation Of Orientation And Ocular Dominance In  
Macaque Striate Cortex", Klaus Obermayer (Universitaet Bielefeld)

5. "Putative Functional Roles Of Self-organized Lateral Connectivity In The
Primary Visual Cortex", Joseph Sirosh (University of Texas at Austin)



The Saturday morning session 7:30-9:30AM, Dec. 3, chairperson: Laiwan Chan


1. "Density Estimation with a Hybrid of Neural Networks and Gaussian 
Mixtures",
Yoshua Bengio (University of Montreal)

2. "Learning Object Models through Domain-Specific Distance Measures",
Eric Mjolsness (UCSD) and Steve Gold (Yale University) 

3. "Auto-associative Learning of On-line Handwriting Using Recurrent Neural
Networks", Dit-Yan Yeung (Hong Kong University of Science and Technology)

4. "Training Mixtures of Gaussians with Deficient Data", 
Volker Tresp (Siemens AG, Central Research)

5. "A Fast Method for Activating Competitive Self-Organizing Neural-Networks", 
George F. Harpur and Richard W. Prager (Cambridge University)



The Saturday evening session 4:30-6:30PM, Dec. 3, chairperson: Lei Xu


1.  "Neuromodulatory Mechanisms For Regulation Of Cortical 
Self-organization", Michael E. Hasselmo (Harvard University)


2. "Learning To Cluster Visual Scenes With Contextual Modulation", 
Sue Becker (McMaster University)


3. "Invisibility in Vision: Occlusion, Motion, Grouping, and
Self-Organization", Jonathan A. Marshall (University of North Carolina at
Chapel Hill)

4. "A Comparative Study on Receptive Filters by PCA Learning and Gabor
Functions",  Irwin King and Lei Xu (The Chinese University of Hong Kong)


5. "Detection of Visual Feature Locations with a Growing Neural Gas Network",
Bernd Fritzke (Ruhr-Universitaet Bochum)



