[ACT-R-users] ACT-R/Soar on a robot

Kelley, Troy (Civ,ARL/HRED) tkelley at arl.army.mil
Thu Oct 6 11:47:40 EDT 2005


Hello,

 

Since there has not been much activity on the ACT-R or Soar mailing
lists of late, I thought I would pose some theoretical questions to the
community concerning work we have been doing at our lab to implement
ACT-R on a robot.  We are working with a production system
architecture called SS-RICS, which was inspired by ACT-R and has much
of the same functionality.

 

Each paragraph below is meant to be a self-contained set of questions
to ponder.  If you want to respond to just one paragraph, please feel
free to do so; you don't need to read all of them.  Your input would
be appreciated.

 

1)  ACT-R and Soar trace their lineage to the General Problem Solver
(GPS) production system tradition developed by Newell and Simon.
Their intent was to develop a *general* problem solver using various
syntax-based strategies, also known in AI as weak-method problem
solvers.  However, it seems as if ACT-R has moved toward strong-method
problem solving, which uses *specific* domain knowledge to solve
problems.  So the question is, can we develop ACT-R/Soar models that
are general, adaptable problem solvers that are NOT domain specific?
Or does domain specificity so influence the problem-solving process
that one cannot extricate oneself from the domain?
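
To make the weak/strong distinction concrete, here is a minimal Python
sketch of means-ends analysis, the weak method at the heart of GPS.
The domain and operator names are purely illustrative (nothing from
SS-RICS); the point is that the solver's only "knowledge" is a table
of operators and the differences they reduce.

    def means_ends(state, goal, operators):
        """Weak-method search: repeatedly apply any operator that
        reduces a difference between the current state and the goal.
        No domain-specific strategy, just syntax over sets of symbols."""
        plan = []
        while not goal <= state:
            diffs = goal - state              # unsatisfied goal conditions
            op = next((o for o in operators
                       if o["adds"] & diffs and o["pre"] <= state), None)
            if op is None:
                return None                   # the weak method gives up
            state = (state - op["dels"]) | op["adds"]
            plan.append(op["name"])
        return plan

    # Toy domain expressed only as sets of symbols (illustrative):
    ops = [{"name": "walk-to-door", "pre": {"at-desk"},
            "adds": {"at-door"}, "dels": {"at-desk"}},
           {"name": "open-door", "pre": {"at-door"},
            "adds": {"door-open"}, "dels": set()}]
    print(means_ends({"at-desk"}, {"at-door", "door-open"}, ops))
    # -> ['walk-to-door', 'open-door']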

 

2)  ACT-R is firmly rooted in the "symbolic" side of AI as opposed to
the "distributed" side.  A criticism of traditional symbolic AI is
that it tends to be brittle and non-adaptive when its symbols do not
match the current problem space.  Is this also true of ACT-R?  If so,
is ACT-R a logical fit for robotics, or will it just introduce the
same set of symbolic problems (especially the frame problem) that are
currently found in AI?

 

3)  If one considers ACT-R to be "just another symbolic system," as
some in the AI community do, then what does ACT-R or computational
cognitive psychology have to offer the robotics community?  I have not
seen references to memory decay in the AI literature; however, most
people in the AI community consider memory decay to be a handicap of
the human cognitive system.  While I disagree with that, what other
mechanisms/functionality of the human cognitive system would appear to
be the most important additions to robotic control mechanisms that
have been overlooked by the AI community?
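
To make my disagreement concrete, here is a hypothetical sketch
(illustrative names and parameter values, using an ACT-R-style
base-level equation) of decay acting as a relevance filter for a
robot's world model: stale observations fade away instead of
persisting as false beliefs.

    import math

    DECAY = 0.5        # illustrative decay exponent
    THRESHOLD = -1.0   # illustrative retrieval threshold

    class Fact:
        """A world-model fact whose activation decays unless new
        observations refresh it (ACT-R-style base-level learning)."""
        def __init__(self, content, t):
            self.content = content
            self.timestamps = [t]    # each observation adds a timestamp
        def activation(self, now):
            # B = ln( sum over observations of age**-d );
            # assumes now > t for every recorded observation
            return math.log(sum((now - t) ** -DECAY
                                for t in self.timestamps))

    world = [Fact("door-open", t=0.0), Fact("person-at-desk", t=0.0)]
    world[0].timestamps.append(9.0)  # the door was re-observed recently

    now = 10.0
    believed = [f.content for f in world if f.activation(now) > THRESHOLD]
    print(believed)   # ['door-open'] -- the stale fact has faded away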

 

4)  There is a distinction in the psychological literature between
long-term, short-term, and working memory, but what does that mean
computationally?  For us, we have implemented different decay rates
for LTM and STM.  However, those decay rates are really part of a
continuum of decay rates for all memories.  For example, if we have
one million memories on our robot, don't those memories form a
continuum of various decay rates?  Why should I have to have a
placeholder for LTM, STM, and WM?  Why can't all my memories just have
various decay rates?  It seems as if LTM, STM, and WM are simply
convenient labels for something that is really a continuum.  If we
define WM as those memories currently in use, does that preclude LTM
or STM from the problem space?
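
Here is a minimal sketch of the continuum view (all names, ages, and
decay values are invented for illustration): every memory carries its
own decay rate, and "working memory" is simply whatever clears the
retrieval threshold at the moment.

    import math

    def activation(age, decay):
        """Simplified one-presentation base level: -d * ln(age)."""
        return -decay * math.log(age)

    memories = {                       # name -> (age in seconds, decay)
        "laser-point-417": (2.0, 2.0), # fast decay: what we'd call "STM"
        "hallway-shape":   (60.0, 0.2),# slow decay: what we'd call "LTM"
        "current-goal":    (1.0, 0.9),
    }

    THRESHOLD = -1.0
    wm = [name for name, (age, d) in memories.items()
          if activation(age, d) > THRESHOLD]
    print(wm)  # "working memory" is just whatever clears threshold now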

 

5)  Has there ever been any use of *different* decay rates *within* an
ACT-R model?  I am talking about different base-level activations for
different chunks at the beginning of the model run.  For example, we
have found it very helpful to code perceptual processing as building
on lower-level memories that decay very quickly.  Once the
higher-level concepts are formed, the lower-level memories are quickly
forgotten, and the higher-level concepts remain.  So the higher-level
concepts have a slower decay rate than the lower-level perceptions.  I
know that right now, in ACT-R, we request information from the
buffers, but I have been working on some memory research suggesting
that information will "get in" even if there is no "request" for it
(or an attentional shift to it).  Perhaps this is more of a statement
than a question, then: what do people think of having more control
over decay rates for individual chunks, especially those coming in
from the perceptual components?
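
As a toy illustration of the two-rate scheme (invented numbers, not
our actual SS-RICS code): percept chunks get a fast decay rate, the
concept built from them gets a slow one, and after a few seconds the
parts are gone while the whole remains retrievable.

    import math

    class Chunk:
        def __init__(self, name, decay, created):
            self.name, self.decay, self.created = name, decay, created
        def activation(self, now):
            return -self.decay * math.log(now - self.created)

    PERCEPT_DECAY, CONCEPT_DECAY, THRESHOLD = 1.5, 0.2, -1.0

    points = [Chunk("point-%d" % i, PERCEPT_DECAY, created=0.0)
              for i in range(4)]
    # Gestalt step: the points support a higher-level "line" concept
    line = Chunk("line-A", CONCEPT_DECAY, created=0.5)

    for now in (1.0, 5.0):
        alive = [c.name for c in points + [line]
                 if c.activation(now) > THRESHOLD]
        print(now, alive)
    # at t=1.0 everything is retrievable; by t=5.0 only line-A remains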

 

6)  The transition from a rule-based, symbolic understanding of a
problem to a proceduralized, intuitive understanding of that problem
is difficult to represent computationally.  What does
proceduralization really mean?  At the symbolic level, we can change
latency values, or skip over rules, to speed up the procedure.  But
really, psychologically, it seems as if the symbolic-level rules are
being "rewritten" or "re-compiled" as subsymbolic representations.  On
our robot, it is unclear how we could take symbolic production systems
and recompile them in a subsymbolic way to produce the improved
performance seen in humans.
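
ACT-R's production compilation mechanism is one symbolic-level reading
of this: two rules that fire in sequence, with a retrieval in between,
get composed into a single specialized rule that skips both the
intermediate cycle and the memory access.  A toy sketch of the
composition step (illustrative structures, not actual ACT-R syntax):

    rule_a = {"if": {"goal": "add", "arg1": 3, "arg2": 4},
              "retrieve": "3+4",           # costs a memory access
              "then": {"await": "retrieval"}}
    rule_b = {"if": {"retrieved": "3+4=7"},
              "then": {"answer": 7}}

    def compile_rules(first, second):
        """Compose two sequential rules, folding the retrieved result
        into the new rule so no retrieval is needed at run time."""
        return {"if": first["if"],         # specialized to this case
                "then": second["then"]}    # final action in one cycle

    compiled = compile_rules(rule_a, rule_b)
    print(compiled)
    # {'if': {'goal': 'add', 'arg1': 3, 'arg2': 4},
    #  'then': {'answer': 7}}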

 

7)  When does a memory become a memory?  When you first perceive
something, is it a memory then, or does it become a memory later?  Is
our conceptualization of the world a conceptualization of memories or
of actual stimuli?  Does some level of processing have to take place
before it can be called a memory?  For example, our robot sees dots
and points returned from its laser, which are organized into lines.
But we struggle with the idea that the dots and points are also
memories.  We have buffers similar to ACT-R's, but we ended up with a
lot of processing that goes on before information even gets to the
buffer, which means there is a lot of information unavailable to the
production system.  For example, points get organized into lines,
which then get organized into shapes at a Gestalt layer, which then
become memories for the production system.  So a lot of processing
goes on before our sensory data becomes a memory.  So, again, when
does a memory become a memory?
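
For reference, the pre-buffer pipeline looks roughly like the sketch
below.  The grouping rule is a deliberately crude stand-in for the
actual Gestalt layer, but it shows how much structure already exists
before anything becomes a "memory" for the productions.

    def points_to_segments(points, max_gap=0.3):
        """Group consecutive (x, y) laser points into segments,
        splitting wherever the gap between neighbors exceeds max_gap
        (a crude stand-in for real line-fitting / Gestalt grouping)."""
        segments, current = [], [points[0]]
        for p, q in zip(points, points[1:]):
            if abs(q[0] - p[0]) + abs(q[1] - p[1]) > max_gap:
                segments.append(current)
                current = []
            current.append(q)
        segments.append(current)
        return [{"start": s[0], "end": s[-1], "points": len(s)}
                for s in segments]

    scan = [(0.0, 1.0), (0.1, 1.0), (0.2, 1.0),  # a wall
            (2.0, 1.0), (2.1, 1.1)]              # a separate object
    buffer_contents = points_to_segments(scan)   # all the rules see
    print(buffer_contents)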

 

8)  Is cognition sensory specific?  Much of our code for the robot
right now seems to be for interpreting the information the robot gets
from each one of its sensors.  So, if we were to put a new sensor on
the robot, would it still be able to make sense of the world?  That is
a direction we are working toward, but it can be very difficult.  Many
of the productions we end up writing seem to be tied directly to
specific sensory stimuli, and it is difficult to write productions
that are general and not tied to sensory information.  So, is
cognition bound to the sensory information returned from perceptual
mechanisms (as Rodney Brooks would say), or can it be separated from
sensory information?  True, that separation is what the symbolic level
is supposed to provide, but does the symbolic level then become
brittle if it is not tied to the sensory layer?
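
One hypothetical way to decouple the two (a sketch, not our current
implementation): each sensor driver translates its raw readings into a
shared percept vocabulary, and productions match only the vocabulary,
so adding a sensor means writing a driver, not rewriting productions.

    def laser_driver(ranges):
        """Translate a raw range scan (list of distances by bearing
        index) into the shared percept vocabulary."""
        i = min(range(len(ranges)), key=lambda k: ranges[k])
        return {"percept": "obstacle", "bearing": i, "range": ranges[i]}

    def sonar_driver(ping):
        """Translate a raw (bearing, distance) sonar ping into the
        same vocabulary -- no production changes needed."""
        return {"percept": "obstacle", "bearing": ping[0],
                "range": ping[1]}

    def avoid_production(p):
        """A sensor-agnostic rule: it matches the vocabulary only."""
        if p["percept"] == "obstacle" and p["range"] < 0.5:
            return "turn-away-from-bearing-%d" % p["bearing"]
        return "continue"

    print(avoid_production(laser_driver([2.0, 0.4, 1.5])))  # bearing 1
    print(avoid_production(sonar_driver((90, 0.3))))        # bearing 90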

 

Troy Kelley

U.S. Army Research Laboratory

Human Research and Engineering Directorate

AMSRD-ARL-HR-SE, APG, MD

21005-5425

Voice: 410-278-5859

email: tkelley at arl.army.mil

 

 
