Levels

Aaron Sloman aarons%cogs.sussex.ac.uk at NSFnet-Relay.AC.UK
Sun Feb 25 05:56:14 EST 1990


> From: Terry Sejnowski <Terry_Sejnowski at ucsd.edu>
> There are at least three notions of levels ...
> ... Marr introduced levels of analysis -- computational, algorithmic,
> and implementational ...
> ... In biology there are well defined levels of organization
> molecular, synaptic, neuronal, networks, columns, maps, and systems...
> ... Finally, one can distinguish levels of processing, from the
> sensory periphery toward higher processing centers....

Two comments - (A) a critique of Marr and (B) a pointer to another
notion of level:

A. I think that Marr's analysis is somewhat confused, and would be best
replaced by the following:

a. What he called the "computational" level should be re-named the
"task" level, without any presumption that there is only one such level:
tasks form a hierarchy of sub-tasks. This is closely related to what
software engineers call "requirements analysis", and has to take into
account the nature of the environment and the behaviour that is to be
achieved within it, including constraints such as speed. In the case of
vision, Marr's main concern, requirements analysis would include
description of the relevant properties of light (or the optic array),
visible surfaces, forms of visible motion, etc. as well as internal and
external requirements of the organism e.g. recognition, generalisation,
description, planning, explaining, control of actions, posture control,
various kinds of visual reflexes (some trainable), reading, etc.

Requirements analysis also includes investigation of trade-offs and
priorities. E.g. in some conditions where there's a trade-off between
speed and accuracy, getting a quick decision that has a good chance of
being right may be more important than guaranteeing perfect accuracy.

Internal requirements analysis would include description of other
non-visual modules that require input from visual modules (e.g. for
posture control, or fine control of movement through feedback loops -
processes which don't necessarily require the same kind of visual
information as e.g. recognition of objects). So there is not ONE
requirement or task defining vision, but a rich multiplicity, which can
vary from organism to organism (or machine).

b. Then instead of Marr's two remaining levels, "algorithmic" and
"implementational" (or physical mechanism), there would be a number of
different layers of implementation, for each of which it is possible to
distinguish design and implementation. How many layers there are, and
which are implemented in software and which in hardware, are empirical
questions whose answers might vary from one organism or machine to another.

Moreover, because vision is multi-functional there need not be one
hierarchy of layers: instead there could be a fairly tangled network of
tasks performed by a network of interrelated processes sharing some
sub-mechanisms (e.g. retinas).

          ----------------------------------------------------

B: There's at least one other notion of level, not in Terry's list,
that's worth mentioning, though it's related to his three and to levels
of task analysis mentioned above. It is familiar to computer scientists,
though it may need to be generalised before it can be applied to brains.
I refer to the notion of a "virtual machine".

For example, a particular programming language refers to a class of
entities (e.g. words, strings, numbers, lists, trees, etc) and defines
operations on these, that together define a virtual machine. A
particular virtual machine can be implemented in a lower level virtual
machine via an interpreter or compiler (with interestingly different
consequences). The lower level virtual machine (e.g. the virtual machine
that defines a VAX architecture) may itself be an abstraction that is
implemented in some lower level machine (e.g. hardware, or a mixture of
hardware and microcode). Processes in a computer can have many levels of
virtual machine, each implemented via a compiler or interpreter in a
lower level, or possibly in more than one lower level, e.g. if two sorts
of virtual machine are combined to implement a higher-level hybrid.
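The layering described above can be made concrete with a minimal sketch (the opcodes and interpreter here are illustrative inventions, not anything from the original post): a tiny stack machine whose instructions define one virtual machine, implemented by an interpreter written in Python -- itself a virtual machine layered on lower-level machines down to the hardware.

```python
# A minimal sketch of one virtual-machine layer: a tiny stack machine
# (opcodes PUSH, ADD, MUL) defined by, and implemented in, the
# interpreter below. The interpreter runs on the Python virtual
# machine, which runs on lower-level machines in turn.

def run(program):
    """Interpret a list of (opcode, operand) instructions on a stack."""
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack[-1]

# (2 + 3) * 4 expressed in the higher-level virtual machine:
program = [("PUSH", 2), ("PUSH", 3), ("ADD", None),
           ("PUSH", 4), ("MUL", None)]
print(run(program))  # 20
```

Replacing the interpreter loop with a translation step that emits lower-level instructions once, in advance, would give a compiler instead -- the "interestingly different consequences" mentioned above.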

Circular organisation is possible if a low-level machine can invoke a
sub-routine defined in a high level machine (e.g. for handling errors
or interrupts).
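A hypothetical sketch of that circularity (all names here are invented for illustration): the low-level interpreter, on meeting an instruction it cannot handle, calls back into a routine that was itself defined at the higher level.

```python
# Circular organisation: the low-level machine (the interpreter loop)
# invokes a sub-routine supplied by the high-level program it is
# running, e.g. for handling unknown opcodes. Names are illustrative.

def run(program, error_handler):
    stack = []
    for op, arg in program:
        if op == "PUSH":
            stack.append(arg)
        else:
            # The low level calls up into high-level code here.
            error_handler(op, stack)
    return stack

# High-level code supplies the handler the low-level machine will call:
def handler(op, stack):
    stack.append("<unknown op %s>" % op)

result = run([("PUSH", 1), ("SPIN", None)], handler)
print(result)  # [1, '<unknown op SPIN>']
```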

Different layers of virtual machine do not map in any simple way onto
physically differentiable structures in the computer: indeed without
changing the lowest level physical architecture one can implement very
many different higher level virtual machines, though there will be some
physical differences in the form of different patterns of bits in the
virtual memory, or different patterns of switch states or magnetic
molecule states in the physical memory. In this important sense virtual
structures in high level virtual machines may be "distributed" over the
memory of a conventional computer with no simple mapping from physical
components to the virtual structures they represent. (This is especially
clear in the case of sparse arrays, databases using inference, default
values for slots, etc.) (This is why I think talk about "physical symbol
systems" in AI is utterly misleading: most of the interesting symbol
systems are virtual structures in virtual machines, not physical
structures.)
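The sparse-array case can be sketched as follows (an illustrative toy, not a claim about any particular system): a large virtual array most of whose cells have no physical counterpart at all, because reads of unset cells are answered by a default rule rather than by stored bits.

```python
# Sketch of a virtual structure "distributed" over memory: a virtual
# 1000 x 1000 array backed by a dict that stores only the cells that
# have been explicitly set. There is no simple one-to-one mapping from
# virtual cells to physical storage.

class SparseArray:
    def __init__(self, default=0):
        self.default = default
        self.cells = {}          # only explicitly-set cells are stored

    def __setitem__(self, key, value):
        self.cells[key] = value

    def __getitem__(self, key):
        # Most virtual cells are answered by the default rule,
        # not by any stored bits of their own.
        return self.cells.get(key, self.default)

a = SparseArray()
a[(500, 500)] = 42
print(a[(500, 500)], a[(0, 0)], len(a.cells))  # 42 0 1
```

The million-cell virtual array "exists" in the high-level virtual machine, yet only one cell occupies physical storage.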

Similarly, I presume different abstract virtual machines can be
implemented in neural nets, though the kind of implementation will be
different. E.g. it does not seem appropriate to talk about a compiler or
interpreter, at least at the lower levels. An example of such an
abstract virtual machine implemented in a human brain would be one that
can store and execute a long and complex sequence of instructions, such
as reciting a poem, doing a dance, or playing a piano sonata from
memory. Logical thinking (especially when done by an expert trained
logician) would be another example.

My expectation is that "connectionist" approaches to intelligence will
begin to take off when this branch of AI has a good theory about the
kinds of virtual machines that need to be implemented to achieve different
sorts of intelligent systems, including a theory of how such virtual
machines are layered and how they may be implemented in different kinds
of neural networks (perhaps using the levels of organisation described
by Terry).

Aaron Sloman,
School of Cognitive and Computing Sciences,
Univ of Sussex, Brighton, BN1 9QH, England
    EMAIL   aarons at cogs.sussex.ac.uk
            aarons%uk.ac.sussex.cogs at nsfnet-relay.ac.uk
            aarons%uk.ac.sussex.cogs%nsfnet-relay.ac.uk at relay.cs.net

