Tentative program for MIND meeting at Texas A&M

B344DSL@utarlg.uta.edu
Tue May 2 18:22:22 EDT 1995


Conference on Neural Networks for Novel High-Order Rule Formation
Sponsored by Metroplex Institute for Neural Dynamics (MIND) and
For a New Social Science (NSS)
Forum Theatre, Rudder Theatre Complex, Texas A&M University, May
20-21, 1995

Tentative Schedule and List of Abstracts


Saturday, May 20 ~

4:30 - 5:30 PM      Karl Pribram, Radford University
                    Brain, Values, and Creativity

Sunday, May 21 ~

9:00 - 10:00        John Taylor, University of London
                    Building the Mind from Neural Networks
10:00 - 10:45       Daniel Levine, University of Texas at Arlington
                    The Prefrontal Cortex and Rule Formation

10:45 - 11:00       Break

11:00 - 11:45       Sam Leven, For a New Social Science
                    Synesthesia and S.A.M.: Modeling Creative
                         Process
11:45 - 12:15       Richard Long, University of Central Florida
                    A Computational Model of Emotion Based
                         Learning: Variation and Selection of
                         Attractors in ANNs

12:15 - 1:45        Lunch

1:45 - 2:30         Ron Sun, University of Alabama
                    An Agent Architecture with On-line Learning of
                         Conceptual and Subconceptual Knowledge
2:30 - 3:00         Madhav Moganti, University of Missouri, Rolla
                    Generation of FAM Rules Using DCL Network in
                         PCB Inspection

3:30 - 3:45         Break

3:45 - 4:30         Ramkrishna Prakash, University of Houston
                    Towards Neural Bases of Cognitive Functions:
                         Sensorimotor Intelligence
4:30 - 5:15         Risto Miikkulainen, University of Texas
                    Learning and Performing Sequential Decision
                         Tasks Through Symbiotic Evolution of
                         Neural Networks
5:15 - 5:45         Richard Filer, University of York
                    Correlation Matrix Memory in Rule-based
                         Reasoning and Combinatorial Rule Match


                Posters (to be up continuously):

Risto Miikkulainen, University of Texas
Parsing Embedded Structures with Subsymbolic Neural Networks

Haejung Paik and Caren Marzban, University of Oklahoma 
Predicting Television Extreme Viewers and Nonviewers: A Neural
Network Analysis 
 
Haejung Paik, University of Oklahoma
Television Viewing and Mathematics Achievement

Doug Warner, University of New Mexico
Modeling of an Air Combat Expert: The Relevance of Context


                      Abstracts for talks:


                             Pribram

     Perturbation, internally or externally generated, produces an
orienting reaction which interrupts ongoing behavior and demarcates
an episode.  As the orienting reaction habituates, the weightings
(values) of polarizations of the junctional microprocess become
(re)structured on the basis of protocritic processing.  Temporary
stability characterizes the new structure which acts as a
reinforcing attractor for the duration of the episode, i.e., until
dishabituation (another orienting reaction) occurs.  Habituation
leads to extinction and under suitable conditions an extinguished
experience can become reactivated, i.e., made relevant.  Innovation
depends on such reactivation and is enhanced not only by adding
randomness to the process, but also by adding structured variety
produced by prior experience.

                             Taylor

     After a description of a global approach to the mind, the
manner in which various modules in the brain can contribute will be
explored, and related to the European Human Brain Project and to 
developments stemming from non-invasive instruments and single 
neuron measurements.  A possible neural model of the mind will 
be presented, with suggestions outlined as to how it could be
tested.

                             Levine

     Familiar modeling principles (e.g., Hebbian or associative
learning, lateral inhibition, opponent processing, neuromodulation)
could recur, in different combinations, in architectures that can
learn diverse rules.  These rules include, for example: go to the
most novel object; alternate between two given objects; touch three
given objects, without repeats, in any order.  Frontal lobe damage
interferes with learning all three of those rules.  Hence, network
models of rule learning and encoding should include a module
analogous to the prefrontal cortex.  They should also include
modules analogous to the hippocampus, for episode setting, and the
amygdala, for emotional evaluation.
     Through its connections with the parietal association cortex,
with secondary cortical areas for individual sensory modalities,
and with supplementary motor and premotor cortices, the
dorsolateral part of the prefrontal cortex contains representations
of movement sequences the animal has performed or thought about
performing.  Through connections to the amygdala via the orbital
prefrontal cortex (which seems to be extensively and diffusely
connected to the dorsolateral part), selective enhancement occurs
of those motor sequence representations which have led to reward. 
I propose that the prefrontal cortex also contains "coincidence
detectors" which respond to commonalities in any spatial or
temporal attributes among all those reinforced representations. 
Such coincidence detection is a prelude to generating rules and
thereby making inferences about classes of possible future
movements.
     This general prefrontal function includes the function of
tying together working memory representations that has been
ascribed to it by other modelers (Kimberg & Farah, 1993) but goes
beyond it.  It also encompasses the ability to generate new rules,
in coordination with the hippocampus, if current rules prove to be
unsatisfactory.

                              Leven

(To be added)

                     Long (with Leon Hardy)

     We propose a novel neural network architecture grounded in a
broader theory of learning and cognitive self-organization.  The
model is loosely based on the mammalian brain's limbic and
cortical neurophysiology and possesses a number of unique and
useful properties.  This architecture uses a variation
and selection algorithm similar to those found in evolutionary
programming (EP), and genetic algorithms (GA).  In this case,
however, selection does not operate on bit strings, or even
neuronal weights; instead, variation and selection acts on
attractors in a dynamical system.  Furthermore, hierarchical
processing is imposed on a single neuronal layer in a manner that
is easily scalable simply by adding nodes.  This is
accomplished using a large, uniform-intensity input signal that
sweeps across a neural layer.  This "sweeping activation"
alternately pushes nodes into their active threshold regime, thus
turning them "on".  In this way, the active portion of the network
settles into an attractor, becoming the preprocessed "input" to the
newly recruited nodes. 
     The attractor neural network (ANN) which forms the basis of
this system is similar to a Hopfield neural network in that it has
the same node update rule and is asynchronous, but differs from a
traditional Hopfield network in two ways.  First, unlike a fully
connected Hopfield network, we use a sparse connection scheme using
a random walk or Gaussian distribution.  Second, we allow for
positively weighted self-connections, which dramatically improve
attractor stability when negative or inhibitory weights are
allowed.
     This model is derived from a more general theory of emotion
and emotion-based learning in the mammalian brain.  The theory
postulates that negative and positive emotion are synonymous with
variation and selection, respectively.  The theory further
classifies various emotions according to their role in learning,
and so makes predictions as to the functions of various brain
regions and their interconnections.
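
     A minimal sketch of the kind of attractor network described
above, assuming an asynchronous Hopfield-style update with a sparse
random connection scheme and positive self-connections; all sizes
and parameters are illustrative and not taken from the talk:

import numpy as np

rng = np.random.default_rng(0)
N = 100                    # number of nodes (assumed)
sparsity = 0.1             # fraction of connections kept (assumed)

# Sparse, signed, symmetric weights, as in a Hopfield network.
mask = rng.random((N, N)) < sparsity
W = rng.normal(0.0, 1.0, (N, N)) * mask
W = (W + W.T) / 2.0
np.fill_diagonal(W, 1.0)   # positive self-connections for stability

def settle(state, steps=2000):
    """Asynchronous updates of one randomly chosen node at a time."""
    s = state.copy()
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

# A random +/-1 state settles into a nearby attractor.
attractor = settle(rng.choice([-1, 1], size=N))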

                               Sun

     In developing autonomous agents, we usually emphasize only the
procedural and situated knowledge, ignoring generic and declarative
knowledge that is more explicit and more widely applicable.  On the
other hand, in developing AI symbolic reasoning models, we usually
emphasize only the declarative and context-free knowledge.  In
order to develop versatile cognitive agents that learn in situated
contexts and generalize resulting knowledge to different
environments, we explore the possibility of learning both
declarative and procedural knowledge in a hybrid connectionist
architecture.  The architecture is based on the two-level idea
proposed earlier by the author.  Declarative knowledge is
represented conceptually, while procedural knowledge is represented
subconceptually.  The architecture integrates embodied reactions,
rules, learning, and decision-making in a unified framework, and
structures different learning components (including Q-learning and
rule induction) in a synergistic way to perform on-line and
integrated learning.
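
     A toy sketch of the two-level idea, assuming a subconceptual
level that learns state-action values by Q-learning and a
conceptual level that promotes confident values into explicit
rules; the names, thresholds, and promotion criterion here are
assumptions, not the architecture presented in the talk:

import random
from collections import defaultdict

class TwoLevelAgent:
    def __init__(self, actions, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.q = defaultdict(float)   # bottom level: Q(state, action)
        self.rules = {}               # top level: state -> action rules
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def act(self, state):
        if state in self.rules:       # an explicit rule fires first
            return self.rules[state]
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q[(state, a)])

    def learn(self, state, action, reward, next_state):
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (target - self.q[(state, action)])
        # Rule induction (assumed criterion): promote a sufficiently
        # confident Q-value into an explicit conceptual rule.
        if self.q[(state, action)] > 1.0:
            self.rules[state] = action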

                             Moganti

     Many vision problems are solved using knowledge-based
approaches.  The conventional knowledge-based systems use domain
experts to generate the initial rules and their membership
functions, and then by trial and error refine the rules and
membership functions to optimize the final system's performance. 
However, it would be difficult for human experts to examine all the
input-output data in complex vision applications to find and tune
the rules and functions within the system.  In this presentation,
the speaker introduces the application of fuzzy logic in complex
computer vision applications.  It will be shown that neural
networks could be effectively used in the estimation of fuzzy
rules, thus making the knowledge acquisition simple, robust, and
complete.
     As an example application, the problem of visual inspection of
defects in printed circuit boards (PCBs) will be presented.  In
this work, the inspection problem is characterized as a pattern
classification problem.  The process involves a two-level
classification of the
printed circuit board image sub-patterns into either a non-
defective class or a defective class.  The PCB sub-patterns are
checked for geometric shape and dimensional verification using
fuzzy information extracted from the scan-line grid with an
adaptive fuzzy data algorithm that uses differential competitive
learning (DCL) in updating winning synaptic vectors.  The fuzzy
feature vectors drastically reduce the processing required by
conventional inspection systems.  The presentation concludes with
experimental results
showing the superiority of the approach.
     It will be shown that the basic method presented is by no
means limited to the PCB inspection application.  The model can
easily be extended to other vision problems like silicon wafer
inspection, automatic target recognition (ATR) systems, etc.
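
     A minimal sketch of differential competitive learning of the
sort mentioned above, in which only the winning unit's synaptic
vector moves and the update is scaled by the change in its
competitive signal; the dimensions and learning rate are
illustrative assumptions, not the speaker's implementation:

import numpy as np

rng = np.random.default_rng(1)
n_inputs, n_units = 4, 3                  # dimensions are illustrative
W = rng.random((n_units, n_inputs))       # one synaptic vector per unit
prev_signal = np.zeros(n_units)           # previous competition outputs

def dcl_step(x, lr=0.05):
    """One DCL update for input vector x."""
    global prev_signal
    activation = W @ x
    signal = (activation == activation.max()).astype(float)  # winner-take-all
    delta_signal = signal - prev_signal                       # differential term
    winner = int(activation.argmax())
    W[winner] += lr * delta_signal[winner] * (x - W[winner])
    prev_signal = signal

for x in rng.random((100, n_inputs)):     # cluster the synaptic vectors
    dcl_step(x)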

                   Prakash (with Haluk Ogmen)

     A developmental neural network model was proposed (Ogmen,
1992, 1995) that ties higher level cognitive functions to lower
level sensorimotor intelligence through stage transitions and the
decalage vertical" (Piaget, 1975). Our neural
representation of a sensorimotor reflex comprises of sensory,
motor, and affective elements.  The affective elements establish an
internal organization: The primary affective and secondary
affective elements dictate the totality and the relationship
aspects of the organization, respectively.
     In order to study sensorimotor intelligence in detail the
network was elaborated for the sucking and rooting reflexes. During
the first two sub-stages of the sensorimotor stage, as proposed by
Piaget (1952), assimilation predominates over accommodation.  We
will present simulations of recognitory and functional
assimilations in the sucking reflex, and reciprocal assimilation
between the sucking and rooting reflexes.
     We will then consider possible subcortical neural substrates
for our sensorimotor model of the rooting reflex in order to bring
the model closer to neurophysiology. The subcortical areas believed
to be involved in the rooting reflex are the spinal trigeminal
nuclei which receive facial somatosensory afferents and the
cervical motor neurons that control the neck muscles. Neurons in
these regions are proposed to correspond to the sensory and motor
elements of our model, respectively. The reticular formation which
receives and sends projections to these two regions and which
receives inputs from visceral regions is a good candidate for the
loci of the affective elements of our model. In this talk, we will
discuss these three areas and their mapping to our model in further
detail.

                          Miikkulainen

     A new approach called SANE (Symbiotic, Adaptive
Neuro-Evolution) for learning and performing sequential decision
tasks is presented.  In SANE, a population of neurons is evolved
through genetic algorithms to form a neural network for the given
task. Compared to problem-general heuristics, SANE forms more
effective decision strategies because it 
learns to utilize domain-specific information. Applications of SANE
to controlling the inverted pendulum, performing value ordering in
constraint satisfaction search, and focusing minimax search in game
playing will be described and compared to traditional methods.
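
     A schematic sketch of the symbiotic evaluation step in this
style of neuro-evolution, assuming each hidden neuron is credited
with the average fitness of the networks it helped form; the task,
sizes, and fitness function below are placeholders, not those used
in SANE:

import random

POP_SIZE, NET_SIZE, TRIALS = 100, 8, 200
IN, OUT = 4, 2                            # network sizes are assumed

def random_neuron():
    """A hidden neuron is just its input and output weight vectors."""
    return {"w_in": [random.uniform(-1, 1) for _ in range(IN)],
            "w_out": [random.uniform(-1, 1) for _ in range(OUT)]}

def evaluate(network):
    """Placeholder fitness; a real task (e.g. pole balancing) goes here."""
    return -sum(abs(w) for n in network for w in n["w_in"])

population = [random_neuron() for _ in range(POP_SIZE)]
fitness = [0.0] * POP_SIZE
counts = [0] * POP_SIZE

for _ in range(TRIALS):                   # symbiotic evaluation
    idx = random.sample(range(POP_SIZE), NET_SIZE)
    f = evaluate([population[i] for i in idx])
    for i in idx:
        fitness[i] += f
        counts[i] += 1

# Average fitness per neuron; selection and crossover of the best
# neurons would follow here in a full implementation.
avg = [fitness[i] / counts[i] if counts[i] else float("-inf")
       for i in range(POP_SIZE)]
best = sorted(range(POP_SIZE), key=lambda i: avg[i], reverse=True)[:POP_SIZE // 4]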

                    Filer (with James Austin)

     This abstract is taken from a paper that presents Correlation
Matrix Memory, a form of binary associative neural network, and the
potential of using this technology in expert systems.  The
particular focus of this paper is on a comparison with an existing
database technique used for achieving partial match, Multi-level
Superimposed Coding (Kim & Lee, 1992), and how using Correlation
Matrix Memory (CMM) enables very efficient rule matching, and a
combinatorial rule match in linear time.  We achieve this utilising
a comparatively simple network approach, which has obvious
implications for advancing models of reasoning in the brain.
     Rule-based reasoning has been the subject of a lot of work in
AI, and some expert systems have proved very useful, e.g.,
PROSPECTOR (Gaschnig, 1980) and DENDRAL (Lindsay et al., 1980), but
it is clear that the usefulness of an expert system is not
necessarily the result of a particular architecture.  We suggest
that efficient partial match is a fundamental requirement, and
combinatorial pattern match is a facility that is directly related
to dealing with partial information, but a brute force approach
invariably takes combinatorial time to do this.  Combinatorial
match we take to mean the ability to answer a sub-maximally
specified query that should succeed if a subset of these attributes
match (i.e., specify A attributes and a number N, N s A, and the
query succeeds if any N of A match).  This sort of match is
fundamental, not only in knowledge-based reasoning, but also in
(vision) occlusion analysis.
     Touretzky and Hinton (1988) were the first to emulate a
symbolic, rule-based system in a connectionist architecture.  A
connectionist approach held the promise of better performance with
partial information and being generally less brittle.  Whether or
not this is the case, Touretzky and Hinton usefully demonstrated
that connectionist networks are capable of symbolic reasoning. 
This paper describes CMM, which maintains a truly distributed
knowledge representation, and the use of CMM as an inference engine
(Austin, 1994).  This paper is concerned with some very useful
properties; we believe we have shown a fundamental link between
database technology and an artificial neural network technology
that has parallels in neurobiology.
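
     A minimal sketch of a binary correlation matrix memory under
standard assumptions (not the authors' code): associations are
stored by OR-ing outer products of binary vectors, and recall
thresholds the matrix-vector product, so lowering the threshold
gives the N-of-A partial match described above:

import numpy as np

IN_BITS, OUT_BITS = 16, 8                 # sizes are illustrative
M = np.zeros((OUT_BITS, IN_BITS), dtype=np.uint8)

def store(x, y):
    """Superimpose the association x -> y into the memory (binary OR)."""
    M[:] = M | np.outer(y, x)

def recall(x, threshold):
    """Threshold = number of cue bits that must match a stored input."""
    return (M @ x >= threshold).astype(np.uint8)

x = np.zeros(IN_BITS, dtype=np.uint8)
x[[0, 3, 7, 12]] = 1                      # a stored input pattern (4 bits set)
y = np.zeros(OUT_BITS, dtype=np.uint8)
y[[1, 4]] = 1                             # its associated output pattern
store(x, y)

partial = x.copy()
partial[12] = 0                           # a partial cue: 3 of 4 bits set
print(recall(partial, threshold=3))       # still recovers y exactly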


                     Abstracts for posters:

                          Miikkulainen

     A distributed neural network model called SPEC for processing
sentences with recursive relative clauses is described. The model
is based on separating the tasks of segmenting the input word
sequence into clauses, forming the case-role representations, and
keeping track of the 
recursive embeddings into different modules. The system needs to be
trained only with the basic sentence constructs, and it generalizes
not only to new instances of familiar relative clause structures,
but to novel structures as well. SPEC exhibits plausible memory
degradation as the depth of the center embeddings increases, its
memory is primed by earlier constituents, and its performance is
aided by semantic constraints between the constituents.  The
ability to process structure is largely due to a central executive
network that monitors and controls the execution of the entire
system. This way, in contrast to earlier subsymbolic systems,
parsing is modeled as a controlled high-level process rather than
one based on automatic reflex responses.

                        Paik and Marzban

     In an attempt to better understand the attributes of the
"average" viewer, an analysis of the data characterizing television
nonviewers and extreme viewers is performed.  The data is taken
from the 1988, 1989, and 1990 General Social Surveys (GSS),
conducted by the National Opinion Research Center (NORC).  Given
the assumption-free, model-independent representation that a neural
network can offer, we perform such an analysis and discuss the
significance of the 
findings.  For comparison, a linear discriminant analysis is also
performed, and is shown to be outperformed by the neural network. 
Furthermore, the set of demographic variables is identified as the
strongest predictor of nonviewers, and the combination of
family-related and social 
activity-related variables as the strongest attribute of extreme
viewers. 

                              Paik

     This study examines the correlation between mathematics
achievement and television viewing, and explores the underlying
processes.  The data consists of 13,542 high 
school seniors from the first wave of the High School and Beyond
project, conducted by the National Opinion Research Center on
behalf of the National Center for Education Statistics.  A neural
network is employed for the analysis; unlike methods employed in
prior studies, with no a priori assumptions about the underlying
model or the distributions of the data, the neural network yields
a correlation impervious to errors or inaccuracies arising from
possibly violated assumptions.  A curvilinear relationship is
found, independent of viewer characteristics, parental background,
parental involvement, and leisure activities, with a maximum at
about one hour of viewing, and persistent upon the inclusion of
statistical errors.  The choice of mathematics 
performance as the measure of achievement elevates the found
curvilinearity to a content-independent status, because of the lack
of television programs dealing with high school 
senior mathematics.  It is further shown that the curvilinearity is
replaced with an entirely positive correlation across all hours of
television viewing, for lower ability students. 
     A host of intervening variables, and their contributions to
the process, are examined.  It is shown that the process, and
especially the component with a positive correlation, involves only
cortical stimulations brought about by the formal features of
television programs.

                             Warner

     A modeling approach was used to investigate the theorized
connection between expertise and context.  Using the domain of
air-combat maneuvering, an expert was modeled both with and without
respect to context.  Neural networks were used for each condition. 
One network used a simple multi-layer perceptron with inputs for
five consecutive time segments from the data as well as a
quantitative descriptor for context in this domain.  The comparison
used a set of networks with identical structure to the first
network.  The same data were provided to each condition; however,
the data were divided by context before being provided to separate
networks for the comparison.  It was discovered, after training and
generalization testing on all networks, that the comparison
condition using context-explicit networks performed better for
strict definitions of offensive context.  This distinction implies
the use of context in an air-combat task by the expert human pilot. 
Simulating problems using a standard model and comparing it against
the same model incorporating hypothesized explicit divisions within
the data should prove to be a useful tool in psychology.

              Transportation and Hotel Information

     Texas A&M is in College Station, TX, about 1.5 to 2 hours NW
of Houston and NE of Austin.  Bryan/College Station Airport
(Easterwood) is only about five minutes from the conference site,
and is served by American (American Eagle), Continental, and Delta
(ASA).
     The College Station Hilton (409-693-7500) has a block of rooms
reserved for the Creative Concepts Conference (of which MIND is a
satellite) at $60 a night, and a shuttle bus to and from the A&M
campus.  There are also rooms available at the Memorial Student
Union on campus (409-845-8909) for about $40 a night.
Other nearby hotels include the Comfort Inn (409-846-7333), Hampton
Inn (409-846-0184), LaQuinta (409-696-7777), and Ramada-Aggieland
(409-693-9891), all of which have complimentary shuttles to campus.

