No subject
cfields@NMSU.Edu
Fri Jan 26 16:02:10 EST 1990
_________________________________________________________________________
The following are abstracts of papers appearing in the fourth issue
of the Journal of Experimental and Theoretical Artificial
Intelligence, which appeared in November, 1989. The next issue, 2(1),
will be published in March, 1990.
For submission information, please contact either of the editors:
Eric Dietrich
PACSS - Department of Philosophy
SUNY Binghamton
Binghamton, NY 13901
dietrich at bingvaxu.cc.binghamton.edu

Chris Fields
Box 30001/3CRL
New Mexico State University
Las Cruces, NM 88003-0001
cfields at nmsu.edu
JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia
_________________________________________________________________________
Problem solving architecture at the knowledge level.
Jon Sticklen, AI/KBS Group, CPS Department, Michigan State University,
East Lansing, MI 48824, USA
The concept of an identifiable "knowledge level" has proven to be
important by shifting emphasis from purely representational issues to
implementation-free descriptions of problem solving. The knowledge
level proposal enables retrospective analysis of existing
problem-solving agents, but sheds little light on how theories of
problem solving can make predictive statements while remaining aloof
from implementation details. In this report, we discuss the knowledge
level architecture, a proposal which extends the concepts of Newell
and which enables verifiable prediction. The only prerequisite for
application of our approach is that a problem-solving agent must be
decomposable into the cooperative actions of a number of more primitive
subagents. The implications of our work fall into two areas. First, at the
practical level, our framework provides a means for guiding the
development of AI systems which embody previously-understood
problem-solving methods. Second, at the level of the foundations of AI,
our results provide a focal point around which a number of pivotal ideas
of AI merge to yield a new perspective on knowledge-based problem
solving. We conclude with a discussion of how our proposal relates to
other threads of current research.
With commentaries by:
William Clancey: "Commentary on Jon Sticklen's 'Problem solving
architecture at the knowledge level'".
James Hendler: "Below the knowledge level architecture".
Brian Slator: "Decomposing meat: A commentary on Sticklen's 'Problem
solving architecture at the knowledge level'".
and Sticklen's response.
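As a rough illustration of the decomposition prerequisite described in the
abstract, the following Python sketch models a problem-solving agent as the
cooperative action of more primitive subagents; the class and field names
are hypothetical and are not taken from the paper.

from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class Subagent:
    # A knowledge-level description: what the subagent knows and what
    # goal it serves, with its implementation treated as opaque.
    name: str
    goal: str
    solve: Callable[[Dict], Dict]

@dataclass
class CompositeAgent:
    # A problem-solving agent decomposed into primitive subagents.
    subagents: List[Subagent] = field(default_factory=list)

    def solve(self, problem: Dict) -> Dict:
        # Cooperation is modelled, very crudely, as each subagent
        # contributing partial results to a shared problem state.
        state = dict(problem)
        for sub in self.subagents:
            state.update(sub.solve(state))
        return state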
__________________________________________________________________________
Natural language analysis by stochastic optimization: A progress
report on Project APRIL
Geoffrey Sampson, Robin Haigh, and Eric Atwell, Centre for Computer
Analysis of Language and Speech, Department of Linguistics &
Phonetics, University of Leeds, Leeds LS2 9JT, UK.
Parsing techniques based on rules defining grammaticality are
difficult to use with authentic natural-language inputs, which are
often grammatically messy. Instead, the APRIL system seeks a
labelled tree structure which maximizes a numerical measure of
conformity to statistical norms derived from a sample of parsed text.
No distinction between legal and illegal trees arises: any labelled
tree has a value. Because the search space is large and has an
irregular geometry, APRIL seeks the best tree using simulated
annealing, a stochastic optimization technique. Beginning with an
arbitrary tree, many randomly-generated local modifications are
considered and adopted or rejected according to their effect on
tree-value: acceptance decisions are made probabilistically, subject
to a bias against adverse moves which is very weak at the outset but
is made to increase as the random walk through the search space
continues. This enables the system to converge on the global optimum
without getting trapped in local optima. An early version of the APRIL
system has yielded analyses of authentic inputs with a mean accuracy of
75%, using a schedule which increases
processing linearly with sentence length; modifications currently
being implemented should eliminate many of the remaining errors.
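A minimal Python sketch of the annealing loop described above follows; the
tree representation, the proposal move, the tree-value function, and the
cooling schedule are hypothetical stand-ins for APRIL's own components (in
APRIL, tree-value measures conformity to statistical norms derived from a
sample of parsed text).

import math
import random

def anneal_parse(initial_tree, propose_move, tree_value,
                 steps=10000, start_temp=5.0, end_temp=0.01):
    # Search for a high-value labelled tree by simulated annealing.
    # Adverse moves are accepted with a probability that shrinks as the
    # "temperature" is lowered, so the bias against them is weak at the
    # outset and grows as the random walk continues.
    current = initial_tree
    current_value = tree_value(current)
    best, best_value = current, current_value
    for step in range(steps):
        # Geometric cooling schedule (one choice among many).
        temp = start_temp * (end_temp / start_temp) ** (step / steps)
        candidate = propose_move(current)       # random local modification
        candidate_value = tree_value(candidate)
        delta = candidate_value - current_value
        # Always accept improvements; accept adverse moves with
        # probability exp(delta / temp) (the Metropolis criterion).
        if delta >= 0 or random.random() < math.exp(delta / temp):
            current, current_value = candidate, candidate_value
            if current_value > best_value:
                best, best_value = current, current_value
    return best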
_________________________________________________________________________
On designing a visual system
(Towards a Gibsonian computational model of vision)
Aaron Sloman, School of Cognitive and Computing Sciences, University
of Sussex, Brighton, BN1 9QN, UK
This paper contrasts the standard (in AI) "modular" theory of the
nature of vision with a more general theory of vision as involving
multiple functions and multiple relationships with other subsystems of
an intelligent system. The modular theory (e.g. as expounded by Marr)
treats vision as entirely, and permanently, concerned with the
production of a limited range of descriptions of visual surfaces, for
a central database; while the "labyrinthine" design allows any
output that a visual system can be trained to associate reliably with
features of an optic array and allows forms of learning that set up
new communication channels. The labyrinthine theory turns out to have
much in common with J. J. Gibson's theory of affordances, while not
eschewing information processing as he did. It also seems to fit
better than the modular theory with neurophysiological evidence of
rich interconnectivity within and between subsystems in the brain.
Some of the trade-offs between different designs are discussed in
order to provide a unifying framework for future empirical
investigations and engineering design studies. However, the paper is
more about requirements than detailed designs.
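The architectural contrast can be sketched roughly as follows, in Python
with hypothetical names (the paper itself proposes no such code): a modular
system writes one fixed kind of description to a central database, while a
labyrinthine system can acquire new output channels, each trained to
respond to features of the optic array.

from typing import Callable, Dict, List

class ModularVisualSystem:
    # Produces one fixed kind of output: descriptions of visual
    # surfaces, delivered to a central database.
    def __init__(self, central_database: List):
        self.central_database = central_database

    def process(self, optic_array) -> None:
        description = self.describe_surfaces(optic_array)
        self.central_database.append(description)

    def describe_surfaces(self, optic_array):
        # Placeholder for a fixed repertoire of surface descriptions.
        return {"surfaces": []}

class LabyrinthineVisualSystem:
    # Supports many output channels; learning can add new ones.
    def __init__(self):
        self.channels: Dict[str, Callable] = {}

    def add_channel(self, name: str, detector: Callable) -> None:
        # Learning sets up a new communication channel: any output the
        # system can be trained to associate reliably with features of
        # the optic array may be routed to another subsystem.
        self.channels[name] = detector

    def process(self, optic_array) -> Dict[str, object]:
        return {name: detect(optic_array)
                for name, detect in self.channels.items()}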
________________________________________________________________________