Reprints available

Mark Gluck gluck at psych.Stanford.EDU
Mon Oct 24 13:07:57 EDT 1988


Reprints of the following two papers are available by net request
to gluck at psych.stanford.edu or by writing: Mark Gluck, Dept. of Psychology,
Jordan Hall, Bldg. 420, Stanford Univ., Stanford, CA 94305.

Gluck, M. A., & Bower, G. H. (1988) From conditioning to category learning:
   An adaptive network model. Journal of Experimental Psychology: General,
   117(3), 227-247.
                                Abstract
                                --------
   We used adaptive network theory to extend the Rescorla-Wagner (1972)
   least mean squares (LMS) model of associative learning to phenomena
   of human learning and judgment. In three experiments subjects 
   learned to categorize hypothetical patients with particular symptom
   patterns as having certain diseases.  When one disease is far more
   likely than another, the model predicts that subjects will
   substantially overestimate the diagnosticity of the more valid symptom
   for the rare disease. The results of Experiments 1 and 2 provide clear
   support for this prediction in contradistinction to predictions from
   probability matching, exemplar retrieval, or simple prototype learning
   models.  Experiment 3 contrasted the adaptive network model with one
   predicting pattern-probability matching when patients always had
   four symptoms (chosen from four opponent pairs) rather than the
   presence or absence of each of four symptoms, as in Experiment 1. 
   The results again support the Rescorla-Wagner LMS learning rule as
   embedded within an adaptive network.
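
   For readers who want to experiment with the learning rule the abstract
   refers to, the minimal sketch below shows the Rescorla-Wagner / LMS
   (delta-rule) weight update applied to a symptom-to-disease prediction
   task. The symptom patterns, base rates, and learning rate are
   illustrative assumptions, not the stimuli or parameters used in the
   paper's experiments.

    # Minimal LMS (delta-rule) sketch of a one-layer adaptive network.
    # Symptom patterns, base rates, and learning rate are illustrative
    # assumptions, not the paper's parameters.
    import random

    n_symptoms = 4
    weights = [0.0] * n_symptoms      # one associative weight per symptom
    learning_rate = 0.05

    def predict(symptoms):
        # Network output: summed associative strength of present symptoms.
        return sum(w * s for w, s in zip(weights, symptoms))

    def train_trial(symptoms, target):
        # Rescorla-Wagner / LMS update: change each active weight in
        # proportion to the prediction error (target - output).
        error = target - predict(symptoms)
        for i, s in enumerate(symptoms):
            weights[i] += learning_rate * error * s

    # Hypothetical training regime: the rare disease (target = 1) occurs
    # on 25% of trials, the common disease (target = 0) on the rest.
    for _ in range(5000):
        if random.random() < 0.25:
            train_trial([1, 1, 0, 0], 1.0)   # pattern typical of the rare disease
        else:
            train_trial([0, 1, 1, 1], 0.0)   # pattern typical of the common disease

    print(weights)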


Gluck, M. A., Parker, D. B., & Reifsnider, E. (1988) Some biological
   implications of a differential-Hebbian learning rule.
   Psychobiology, 16(3), 298-302.

                                Abstract
                                --------
   Klopf (1988) presents a formal real-time model of classical
   conditioning which generates a wide range of behavioral Pavlovian
   phenomena.  We describe a replication of his simulation results and
   summarize some of the strengths and shortcomings of the drive-
   reinforcement model as a real-time behavioral model of classical
   conditioning.  To facilitate further comparison of Klopf's model
   with neuronal capabilities, we present a pulse-coded reformulation
   of the model that is more stable and easier to compute than the
   original, frequency-based model.  We then review three assumptions
   ancillary to the model's learning algorithm, noting that each can be
   seen as motivated by both behavioral and biological considerations.
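
For readers unfamiliar with this class of learning rules, the sketch below
illustrates a simplified differential-Hebbian update for a single synapse:
the weight changes in proportion to the product of an earlier change in
presynaptic activity and the current change in postsynaptic activity.  The
eligibility window, learning rate, and example traces are illustrative
assumptions; this is not Klopf's full drive-reinforcement model or the
pulse-coded reformulation described in the paper.

    # Simplified differential-Hebbian update for one synapse.  The trace
    # window and learning rate are illustrative assumptions.
    def differential_hebbian_step(w, x_history, y_history, lr=0.1, window=3):
        # x_history, y_history: recent pre-/postsynaptic activity, most
        # recent sample last.  Returns the updated weight.
        dy = y_history[-1] - y_history[-2]          # current postsynaptic change
        dw = 0.0
        for lag in range(1, window + 1):
            dx = x_history[-1 - lag] - x_history[-2 - lag]  # earlier presynaptic change
            dx = max(dx, 0.0)                       # only activity onsets are eligible
            dw += lr * dy * dx
        return w + dw

    # Example: a presynaptic onset three steps before a postsynaptic onset
    # strengthens the synapse.
    x = [0, 0, 1, 1, 1, 1]    # presynaptic activity trace
    y = [0, 0, 0, 0, 0, 1]    # postsynaptic activity trace
    print(differential_hebbian_step(0.0, x, y))     # -> 0.1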







