Philosophy/Neuroscience/Psychology technical reports

Andy Clark andy at twinearth.wustl.edu
Thu Sep 22 20:14:54 EDT 1994


This is to announce a new archive for technical reports for the
Philosophy/Neuroscience/Psychology program at Washington University.
Reports are available in a number of areas of cognitive science and
philosophy of mind.

Reports are stored in various formats -- most are in ASCII or compressed
Postscript.  The former (files with .ascii) can be retrieved and read or
printed directly; the latter (files with .ps.Z) must be retrieved,
uncompressed (with "uncompress <filename>") and printed on a laser printer.
Some papers are stored in both formats for convenience.

To retrieve a report -- e.g., clark.folk-psychology.ascii:

1. ftp thalamus.wustl.edu
2. Login as "anonymous" or "ftp"
3. Password: <your e-mail address>
4. cd pub/pnp/papers
5. get clark.folk-psychology.ascii
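For scripted retrieval, the interactive steps above can be collapsed into a
single non-interactive session.  The sketch below assumes a standard BSD-style
ftp client with the -n flag; the host and directory are as given above, while
the e-mail address and the chosen report are placeholders to substitute:

```shell
#!/bin/sh
# Fetch one report from the PNP archive non-interactively.
# Host and path are from the announcement; the address and
# filename below are examples -- substitute your own.
HOST=thalamus.wustl.edu
FILE=chalmers.consciousness.ps.Z

ftp -n "$HOST" <<END
user anonymous your-address@your-site.edu
cd pub/pnp/papers
binary
get $FILE
quit
END

# Compressed Postscript must be uncompressed before printing.
uncompress "$FILE"
```

Plain .ascii files can be fetched the same way (omit the "binary" and
uncompress steps) and read or printed directly.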

An index of papers in the archive so far is included below.  The list will
expand frequently; an updated index can be found in the file INDEX in
pub/pnp/papers.

Andy Clark      (andy at twinearth.wustl.edu)
Philosophy/Neuroscience/Psychology Program
Washington University
St Louis, MO 63130.

----------------------------------------------------------------------------
Archive of Philosophy/Neuroscience/Psychology technical reports (Washington
University), on thalamus.wustl.edu in pub/pnp/papers.

94-01 kwasny.sraam.ps

Tail-Recursive Distributed Representations and Simple Recurrent Networks
   Stan C. Kwasny & Barry L. Kalman, Department of Computer Science

Representation poses important challenges to connectionism.  The ability to
structurally compose representations is critical in achieving the capability
considered necessary for cognition.  We provide a technique for mapping any
ordered collection (forest) of hierarchical structures (trees) into a set of
training patterns which can be used effectively in training a simple recurrent
network (SRN) to develop RAAM-style distributed representations.  The
advantages in our technique are three-fold: first, the fixed-valence
restriction on RAAM structures is removed; second, representations correspond
to ordered forests of labeled trees thereby extending what can be represented;
third, training can be accomplished with an auto-associative SRN, making
training much more straightforward.

94-02 kalman.trainrec.ps     

TRAINREC: A System for Training Feedforward & Simple Recurrent Networks
Efficiently and Correctly
   Barry L. Kalman & Stan C. Kwasny, Department of Computer Science

TRAINREC is a system for training feedforward and recurrent neural networks
that incorporates several ideas.  It uses the efficient conjugate-gradient
method; we derive a new error function with several desirable properties; we
argue for skip (shortcut) connections where appropriate, and for a sigmoid
activation function yielding values in the [-1,1] interval; we use singular
value decomposition to avoid overanalyzing the input feature space.  We have
made an effort to discover methods that work in both theory and practice,
motivated by considerations ranging from efficiency of training to accuracy
of the result.

94-03 chalmers.computation.{ps,ascii}

A Computational Foundation for the Study of Cognition
  David J. Chalmers, Department of Philosophy

Computation is central to the foundations of modern cognitive science, but its
role is controversial.  Questions about computation abound: What is it for a
physical system to implement a computation?  Is computation sufficient for
thought?  What is the role of computation in a theory of cognition?  What is
the relation between different sorts of computational theory, such as
connectionism and symbolic computation?  This article develops a systematic
framework that addresses all of these questions.  A careful analysis of
computation and its relation to cognition suggests that the ambitions of
artificial intelligence and the centrality of computation in cognitive science
are justified.

94-04 chalmers.content.{ps,ascii}

The Components of Content
  David J. Chalmers, Department of Philosophy.

Are the contents of thought in the head of the thinker, in the environment,
or in a combination of the two?  In this paper I develop a two-dimensional
intensional account of content, decomposing a thought's content into its
notional content -- which is internal to the thinker -- and its relational
content.  Notional content is fully semantic, having truth-conditions of its
own; and notional content is what governs the dynamics and rationality of
thought.  I apply this two-dimensional picture to dissolve a number of
problems in the philosophy of mind and language.

94-05 chalmers.bibliography.{intro,1,2,3,4,5}

Contemporary Philosophy of Mind: An Annotated Bibliography
  David J. Chalmers, Department of Philosophy

This is an annotated bibliography of work in the philosophy of mind from the
last thirty years.  There are about 1700 entries, divided into five parts:
(1) Consciousness and Qualia; (2) Mental Content; (3) Psychophysical Relations
and Psychological Explanation; (4) Philosophy of Artificial Intelligence;
(5) Miscellaneous Topics.

94-06 clark.trading-spaces.ascii

Trading Spaces: Computation, Representation, and the Limits of Learning
  Andy Clark (Dept. of Philosophy) and Chris Thornton (U. of Sussex)

We argue that existing learning algorithms are often poorly equipped to solve
problems involving a certain type of (important and widespread) statistical
regularity, which we call `type-2 regularity'. The solution is to trade
achieved representation against computational search. We investigate several
ways in which such a trade-off may be pursued. The upshot is that various
kinds of incremental learning (e.g. Elman 1991) emerge not as peripheral but
as absolutely central and essential features of successful cognition.

94-07 clark.folk-psychology.ascii

Dealing in Futures: Folk Psychology and the Role of Representations in
Cognitive Science.
  Andy Clark, Department of Philosophy.

The paper investigates the Churchlands' long-standing critique of folk
psychology.  I argue that the scientific advances upon which the Churchlands
so ably draw will have their most profound impact NOT upon our assessment of
the folk discourse but upon our conception of the role of representations in
the explanatory projects of cognitive science.  Representation, I suggest,
will indeed be reconceived, somewhat marginalized, and will emerge as at best
one of the objects of cognitive scientific explanation rather than as its
foundation.

94-08 clark.autonomous-agents.ascii

Autonomous Agents and Real-Time Success: Some Foundational Issues.
  Andy Clark, Department of Philosophy

Recent developments in situated robotics and related fields claim to
challenge the pervasive role of internal representations in the production of
intelligent behavior.  Such arguments, I show, are both suggestive and
misguided.  The true lesson, I argue, lies in forcing a much-needed
re-evaluation of the notion of internal representation itself.  The present
paper begins the task of developing such a notion by pursuing two concrete
examples of fully situated yet representation-dependent cognition: animate
vision and motor emulation.

94-09 mccann.gold-market.ps

A Neural Network Model for the Gold Market
  Peter J. McCann and Barry L. Kalman, Department of Computer Science

A neural network trend predictor for the gold bullion market is presented.
A simple recurrent neural network was trained to recognize turning points
in the gold market based on a to-date history of ten market indices.
The network was tested on data that was held back from training, and a
significant amount of predictive power was observed.  The turning point
predictions can be used to time transactions in the gold bullion and gold
mining company stock index markets to obtain a significant paper profit
during the test period.

94-10 chalmers.consciousness.{ps,ascii}

Facing Up to the Problem of Consciousness
  David Chalmers, Department of Philosophy

The problems of consciousness fall into two classes: the easy problems and the
hard problems.  The easy problems include reportability, accessibility, the
difference between wakefulness and sleep, and the like; the hard problem is
subjective experience.  Most recent work attacks only the easy problems.  I
illustrate this with a critique, and argue that reductive approaches to the
hard problem must inevitably fail.  I outline a new framework for the
nonreductive explanation of consciousness, in terms of basic principles
connecting physical processes to experience.  Using this framework, I sketch a
candidate theory of conscious experience, revolving around principles of
structural coherence and organizational invariance, and a double-aspect theory
of information.

94-11 chalmers.qualia.{ps,ascii}

Absent Qualia, Fading Qualia, Dancing Qualia
  David Chalmers, Department of Philosophy

In this paper I use thought-experiments to argue that systems with the same
fine-grained functional organization will have the same conscious experiences,
no matter what they are made out of.  These thought-experiments appeal to
scenarios involving gradual replacement of neurons by silicon chips.  I
argue against the "absent qualia" hypothesis by using a "fading qualia"
scenario, and against the "inverted qualia" hypothesis by using a "dancing
qualia" scenario.  The conclusion is that absent qualia and inverted qualia
are logically possible but empirically impossible, leading to a kind of
nonreductive functionalism.

94-12 christiansen.language-learning.ps

Language Learning in the Full or, Why the Stimulus Might Not be So Poor,
After All
  Morten Christiansen, Department of Philosophy

Language acquisition is often said to require a massive innate body of
language specific knowledge in order to overcome the poverty of the stimulus.
In this picture, language learning merely implies setting a number of
parameters in an internal Universal Grammar.  But is the primary linguistic
evidence really so poor that it warrants such an extreme nativism?  Is there
no room for a more empiricist approach to language acquisition?  In this
paper, I argue against the extreme nativist position, discussing recent
results from psycholinguistics and connectionist research on natural language.

94-13 christiansen.nlp-recursion.ps

Natural Language Recursion and Recurrent Neural Networks
  Morten Christiansen (Dept. of Philosophy) and Nick Chater (U. of Oxford)

The recursive structure of natural language was one of the principal sources
of difficulty for associationist models of linguistic behaviour.  More
recently, it has become a focus in the debate on neural network models of
language, which many regard as the natural heirs of the associationist legacy.
Can neural networks learn to handle recursive structures?  If not, many would
argue, neural networks can be ruled out as viable models of language
processing.  In this paper, we reconsider the implications of natural language
recursion for neural network models, and present simulations in which
recurrent neural networks are trained on simple recursive structures.  We
suggest implications for theories of human language processing.

94-14 bechtel.embodied.ps

Embodied Connectionism
  William Bechtel, Department of Philosophy

Classical approaches to modeling cognition have treated the cognitive system
as disembodied.  This, I argue, is a consequence of a common strategy of
theory development in which researchers attempt to decompose functions into
component functions and assign these component functions to parts of
systems.  But one
might question the decomposition that segregates a cognitive system from its
environment.  I suggest how connectionist modeling may facilitate the
development of cognitive models that do not so isolate cognitive systems from
their environment.  While such an approach may seem natural for lower
cognitive activities, such as navigating an environment, I suggest that the
approach be pursued with higher cognitive functions as well, using natural
deduction as the example.

94-15 bechtel.consciousness.ps

Consciousness: Perspectives from Symbolic and Connectionist AI
  William Bechtel, Department of Philosophy

While consciousness has not been a major concern of most AI researchers, some
have tried to explore how computational models might explain it.  I explore
how far computational models might go in explaining consciousness, focusing on
three aspects of conscious mental states: their intrinsic intentionality, a
subject's awareness of the contents of these intentional states, and the
distinctive qualitative character of these states.  I describe and evaluate
strategies for developing connectionist systems that satisfy these aspects of
consciousness.  My assessment is that connectionist models can do quite well
with regard to the first two components, but face far greater difficulties in
explaining the qualitative character of conscious states.

94-16 bechtel.language.ps

What Knowledge Must be in the Head in Order to Acquire Language?
  William Bechtel, Department of Philosophy

A common strategy in theorizing about the linguistic capacity has localized it
within the mind of the language user.  A result has been that the mind itself
is often taken to operate according to linguistic principles.  I propose an
approach to modeling linguistic capacity which distributes that capacity over
a cognitive system and external symbols.  This lowers the requirements that
must be satisfied by the cognitive system itself.  For example, productivity
and systematicity might not result from processing characteristics of the
cognitive system, but from the system's interaction with external symbols
which themselves adhere to syntactic principles.  To indicate how a relatively
weak processing system can exhibit linguistic competence, I describe a recent
model by St. John and McClelland.

94-17 bechtel.deduction.ps

Natural Deduction in Connectionist Systems
  William Bechtel, Department of Philosophy

I have argued elsewhere that the systematicity of human thought might be
explained as a result of the fact that we have learned natural languages which
are themselves syntactically structured.  According to this view, linguistic
symbols are external to the cognitive system and what the system must learn to
do is produce and comprehend such symbols.  In this paper I pursue that idea
by arguing that ability in natural deduction itself may rely on pattern
recognition abilities that enable us to operate on external symbols rather
than encodings of rules that might be applied to internal representations.  To
support this suggestion, I present a series of experiments with connectionist
networks that have been trained to construct simple natural deductions in
sentential logic.
