No subject

cfields@NMSU.Edu cfields at NMSU.Edu
Fri Mar 3 17:16:53 EST 1989


_________________________________________________________________________

The following are abstracts of papers appearing in the inaugural issue
of the Journal of Experimental and Theoretical Artificial
Intelligence.  JETAI 1, 1 was published 1 January, 1989.

For submission information, please contact either of the editors:

Eric Dietrich                           Chris Fields
PACSS - Department of Philosophy        Box 30001/3CRL
SUNY Binghamton                         New Mexico State University
Binghamton, NY 13901                    Las Cruces, NM 88003-0001

dietrich at bingvaxu.cc.binghamton.edu     cfields at nmsu.edu

JETAI is published by Taylor & Francis, Ltd., London, New York, Philadelphia

_________________________________________________________________________

Minds, machines and Searle

Stevan Harnad

Behavioral & Brain Sciences, 20 Nassau Street, Princeton NJ 08542, USA

Searle's celebrated Chinese Room Argument has shaken the foundations
of Artificial Intelligence.  Many refutations have been attempted, but
none seem convincing.  This paper is an attempt to sort out explicitly
the assumptions and the logical, methodological and empirical points
of disagreement.  Searle is shown to have underestimated some features
of computer modeling, but the heart of the issue turns out to be an
empirical question about the scope and limits of the purely symbolic
(computational) model of the mind.  Nonsymbolic modeling turns out to
be immune to the Chinese Room Argument.  The issues discussed include
the Total Turing Test, modularity, neural modeling, robotics,
causality and the symbol-grounding problem.

_________________________________________________________________________

Explanation-based learning: its role in problem solving

Brent J. Krawchuck and Ian H. Witten

Knowledge Sciences Laboratory, Department of Computer Science,
University of Calgary, 2500 University Drive, NW, Calgary, Alta,
Canada, T2N 1N4.

`Explanation-based' learning (EBL) is a semantically-driven,
knowledge-intensive paradigm for machine learning which contrasts
sharply with syntactic or `similarity-based' approaches.  This paper
redevelops the foundations of EBL from the perspective of
problem-solving.  Viewed in this light, the technique is revealed as a
simple modification to an inference engine which gives it the ability
to generalize the conditions under which the solution to a particular
problem holds.  We show how to embed generalization invisibly within
the problem solver, so that it is accomplished as inference proceeds
rather than as a separate step.  The approach is also extended to the
more complex domain of planning to illustrate that it is applicable to
a variety of logic-based problem-solvers and is by no means restricted
to only simple ones.  We argue against the current trend to isolate
learning from other activity and study it separately, preferring
instead to integrate it into the very heart of problem solving.
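
The abstract's core idea, generalizing invisibly as inference proceeds, can be
illustrated with a toy backward chainer. Everything below (rules, facts, and
predicate names) is a hypothetical illustration in the spirit of classic EBL
examples, not the authors' actual system: the solver proves a concrete goal
while a parallel "general" proof, driven by the same rule choices, never
absorbs fact-level bindings, so its leaves become the generalized conditions.

```python
# Sketch of explanation-based generalization embedded in a backward chainer.
# Hypothetical rules/facts; not code from the paper.
from itertools import count

FRESH = count()  # supplies fresh variable suffixes for rule renaming

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s):
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        return {**s, a: b}
    if is_var(b):
        return {**s, b: a}
    if isinstance(a, tuple) and isinstance(b, tuple) and len(a) == len(b):
        for x, y in zip(a, b):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None

def rename(t, suffix):
    if is_var(t):
        return t + suffix
    if isinstance(t, tuple):
        return tuple(rename(x, suffix) for x in t)
    return t

def resolve(t, s):
    t = walk(t, s)
    if isinstance(t, tuple):
        return tuple(resolve(x, s) for x in t)
    return t

RULES = [
    (('safe_to_stack', 'X', 'Y'), [('lighter', 'X', 'Y')]),
    (('lighter', 'X', 'Y'),
     [('weight', 'X', 'WX'), ('weight', 'Y', 'WY'), ('less', 'WX', 'WY')]),
]
FACTS = [('weight', 'box', '5'), ('weight', 'table', '50'), ('less', '5', '50')]

def prove(goals, s, gens, gs):
    """Prove goals; return (specific subst, general subst, general leaves)."""
    if not goals:
        return s, gs, []
    g, ggen = goals[0], gens[0]
    for f in FACTS:                 # a fact closes a branch: its general
        s2 = unify(g, f, s)         # counterpart becomes a leaf condition
        if s2 is not None:
            rest = prove(goals[1:], s2, gens[1:], gs)
            if rest is not None:
                s3, gs3, leaves = rest
                return s3, gs3, [ggen] + leaves
    for head, body in RULES:        # apply the same rule, freshly renamed,
        i = next(FRESH)             # to both the specific and general goal
        h, b = rename(head, f'_{i}'), [rename(t, f'_{i}') for t in body]
        hg, bg = rename(head, f'_{i}g'), [rename(t, f'_{i}g') for t in body]
        s2, gs2 = unify(g, h, s), unify(ggen, hg, gs)
        if s2 is not None and gs2 is not None:
            rest = prove(b + goals[1:], s2, bg + gens[1:], gs2)
            if rest is not None:
                return rest
    return None

goal, ggen = ('safe_to_stack', 'box', 'table'), ('safe_to_stack', 'A', 'B')
s, gs, leaves = prove([goal], {}, [ggen], {})
head = resolve(ggen, gs)                  # generalized rule head
conds = [resolve(l, gs) for l in leaves]  # generalized leaf conditions
```

A single proof of the concrete goal yields, with no separate generalization
step, the learned rule "safe_to_stack(X, Y) holds whenever X and Y have
weights WX and WY with WX less than WY."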

_________________________________________________________________________

The recognition and classification of concepts in understanding
scientific texts

Fernando Gomez and Carlos Segami

Department of Computer Science, University of Central Florida,
Orlando, FL 32816, USA.

In understanding a novel scientific text, we may distinguish the
following processes.  First, concepts are built from the logical form
of the sentence into the final knowledge structures.  This is called
concept formation.  While these concepts are being formed, they are
also being recognized by checking whether they are already in
long-term memory (LTM).  Then, those concepts which are unrecognized
are integrated in LTM.  In this paper, algorithms for the recognition
and integration of concepts in understanding scientific texts are
presented.  It is shown that the integration of concepts in scientific
texts is essentially a classification task, which determines how and
where to integrate them in LTM.  In some cases, the integration of
concepts results in a reclassification of some of the concepts already
stored in LTM.  All the algorithms described here have been
implemented and are part of SNOWY, a program which reads short
scientific paragraphs and answers questions.
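
The recognition step (is this concept already in LTM?) and the classification
step (where does it attach, and does anything already stored need to move?)
can be sketched with a toy frame-style taxonomy. The concepts and features
below are hypothetical illustrations, not SNOWY's actual representations:

```python
# Toy LTM taxonomy: each concept has a parent link and a feature set.
# Hypothetical data; not the SNOWY program itself.
LTM = {
    'entity': {'parent': None,     'features': set()},
    'animal': {'parent': 'entity', 'features': {'alive', 'moves'}},
    'bird':   {'parent': 'animal', 'features': {'alive', 'moves',
                                                'spine', 'flies'}},
}

def children(node):
    return [c for c, d in LTM.items() if d['parent'] == node]

def recognize(features):
    """Recognition: an identical concept is already stored in LTM."""
    for name, d in LTM.items():
        if d['features'] == features:
            return name
    return None

def classify(name, features):
    """Classification: attach a new concept under its most specific subsumer."""
    node = 'entity'
    while True:
        nxt = next((c for c in children(node)
                    if LTM[c]['features'] <= features), None)
        if nxt is None:
            break
        node = nxt
    LTM[name] = {'parent': node, 'features': features}
    # Reclassification: existing concepts strictly more specific than the
    # new one are re-parented beneath it.
    for c in children(node):
        if c != name and features < LTM[c]['features']:
            LTM[c]['parent'] = name

classify('vertebrate', {'alive', 'moves', 'spine'})
```

Integrating `vertebrate` places it under `animal`, and the reclassification
pass moves the previously stored `bird` beneath it, mirroring the abstract's
point that integration can reclassify concepts already in LTM.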

_________________________________________________________________________

Exploring the No-Function-In-Structure principle

Anne Keuneke and Dean Allemang

Laboratory for Artificial Intelligence Research, Department of
Computer and Information Science, The Ohio State University, 2036 Neil
Avenue Mall, Columbus, OH 43210-1277, USA.

Although much past work in AI has focused on compiled knowledge
systems, recent research shows renewed interest and advanced efforts
both in model-based reasoning and in the integration of this deep
knowledge with compiled problem solving structures.  Device-based
reasoning can only be as good as the model used; if the needed
knowledge, correct detail, or proper theoretical background is not
accessible, performance deteriorates.  Much of the work on model-based
reasoning references the `no-function-in-structure' principle, which
was introduced by de Kleer and Brown.  Although de Kleer and Brown were
well motivated in establishing the guideline, this paper explores the
applicability and workability of the concept as a universal principle
for model representation.  This paper first describes the principle,
its intent and the concerns it addresses.  It then questions the
feasibility and the practicality of the principle as a universal
guideline for model representation.

___________________________________________________________________________


More information about the Connectionists mailing list