Connectionists: New Paper Attentional Bias, Categorization and Deep Learning
Stephen Jose Hanson
jose at rubic.rutgers.edu
Sun Apr 15 09:42:36 EDT 2018
New paper
http://journal.frontiersin.org/article/10.3389/fpsyg.2018.00374/full?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Psychology&id=284733
Attentional Bias in Human Category Learning: The Case of Deep Learning
Catherine Hanson, Leyla Roskan Caglar and Stephen José Hanson
RUBIC, Psychology, Rutgers University
Category learning performance is influenced by both the nature of the
category's structure and the way category features are processed during
learning. Shepard (1964, 1987) showed that stimuli can have structures
with features that are statistically uncorrelated (separable) or
statistically correlated (integral) within categories. Humans find it
much easier to learn categories having separable features, especially
when attention to only a subset of relevant features is required, and
harder to learn categories having integral features, which require
consideration of all of the available features and integration of all
the relevant category features satisfying the category rule (Garner,
1974). In contrast to humans, a single hidden layer backpropagation (BP)
neural network has been shown to learn both separable and integral
categories equally easily, independent of the category rule (Kruschke,
1993). This “failure” to replicate human category performance appeared
to be strong evidence that connectionist networks were incapable of
modeling human attentional bias. We tested the presumed limitations of
attentional bias in networks in two ways: (1) by having networks learn
categories with exemplars that have high feature complexity in contrast
to the low dimensional stimuli previously used, and (2) by investigating
whether a Deep Learning (DL) network, which has demonstrated human-like
performance in many different kinds of tasks (language translation,
autonomous driving, etc.), would display human-like attentional bias
during category learning. We were able to show a number of interesting
results. First, we replicated the failure of BP to differentially
process integral and separable category structures when low dimensional
stimuli are used (Garner, 1974; Kruschke, 1993). Second, we show that
using the same low dimensional stimuli, Deep Learning (DL), unlike BP
but similar to humans, learns separable category structures more quickly
than integral category structures. Third, we show that even BP can
exhibit human-like learning differences between integral and separable
category structures when high dimensional stimuli (face exemplars) are
used. We conclude, after visualizing the hidden unit representations,
that DL appears to extend initial learning due to feature development,
thereby reducing destructive feature competition by incrementally
refining feature detectors throughout later layers until a tipping point
(in terms of error) is reached, resulting in rapid asymptotic learning.
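
For readers who want to try the basic contrast themselves, here is a
minimal, hypothetical sketch in Python/scikit-learn (not the authors'
code, stimuli, or architectures): it builds two-category stimulus sets
whose within-category features are either statistically uncorrelated
(separable) or correlated (integral), then compares how many epochs a
single-hidden-layer backpropagation network versus a deeper network
needs to reach a training criterion. All parameters (dimensionality,
correlation level, layer sizes, learning rate, criterion) are
illustrative assumptions, and whether the paper's qualitative pattern
emerges will depend on those choices.

# Illustrative sketch only: within-category feature correlation stands in
# for the separable/integral distinction, and a single-hidden-layer net is
# compared with a deeper net on epochs-to-criterion.
import warnings

import numpy as np
from sklearn.exceptions import ConvergenceWarning
from sklearn.neural_network import MLPClassifier

warnings.filterwarnings("ignore", category=ConvergenceWarning)
rng = np.random.default_rng(0)


def make_category_set(n=400, d=4, rho=0.0):
    """Two Gaussian categories; rho is the within-category feature
    correlation (rho = 0 -> 'separable', rho near 1 -> 'integral')."""
    cov = np.full((d, d), rho) + (1.0 - rho) * np.eye(d)
    X = np.vstack([
        rng.multivariate_normal(np.zeros(d), cov, n // 2),      # category A
        rng.multivariate_normal(np.full(d, 1.5), cov, n // 2),  # category B
    ])
    y = np.repeat([0, 1], n // 2)
    return X, y


def epochs_to_criterion(hidden_layers, X, y, criterion=0.95, max_epochs=300):
    """Train with SGD backprop one epoch at a time; return the first epoch
    at which training accuracy reaches the (hypothetical) criterion."""
    net = MLPClassifier(hidden_layer_sizes=hidden_layers, solver="sgd",
                        learning_rate_init=0.1, max_iter=1, warm_start=True,
                        random_state=0)
    for epoch in range(1, max_epochs + 1):
        net.fit(X, y)
        if net.score(X, y) >= criterion:
            return epoch
    return max_epochs


for label, rho in [("separable (rho=0.0)", 0.0), ("integral  (rho=0.9)", 0.9)]:
    X, y = make_category_set(rho=rho)
    shallow = epochs_to_criterion((8,), X, y)       # single-hidden-layer BP net
    deep = epochs_to_criterion((16, 16, 16), X, y)  # deeper, "DL-style" net
    print(f"{label}:  shallow BP {shallow:3d} epochs | deep {deep:3d} epochs")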