<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1">Gary,<br>
</font></p>
<p><font size="+1">Not retreating. Simply stating the obvious.
Brains are where symbols, as we talk about them, are. But
what are they exactly in a brain?</font></p>
<p><font size="+1">I think that is a question that starts with
connections and layers.. not some sort of specialized
software, this is not about software engineering but about the
science of the thing.</font></p>
<p><font size="+1">Utility is important in many domains. but appears
to me to be a retreat that you are didactically hiding behind
("even Jay"? comeon.)</font></p>
<p><font size="+1"> I am aware of simulations using RNN to counter
you claim that RNNs could not learn certain sequential behavior.
(<a class="moz-txt-link-freetext" href="https://www.semanticscholar.org/paper/On-the-Emergence-of-Rules-in-Neural-Networks-Hanson-Negishi/4bca27b823c9724d910b4637fd489343233570f8">https://www.semanticscholar.org/paper/On-the-Emergence-of-Rules-in-Neural-Networks-Hanson-Negishi/4bca27b823c9724d910b4637fd489343233570f8</a>).</font></p>
<p><font size="+1">I just can't take modularity of the brain
seriously anymore, as cognitive neuroscience continues to
embrace generic networks (resting state) and distributed
representations-- things are moving on (see --the Failure of
Blobology-SJH). Face areas? why would thetr be Face areas
(different from neural patches)?. what would they do in any
case---store all the faces you've seen--unlikely, process faces
into parts from whole? What was the point of a face area in
the first place.. more likely to be some sort of WAVELET that
incidentally encodes faces, 1963 Cadillacs and greebles
(<a class="moz-txt-link-freetext" href="https://psycnet.apa.org/record/2008-00548-008">https://psycnet.apa.org/record/2008-00548-008</a>;
<a class="moz-txt-link-freetext" href="https://pubmed.ncbi.nlm.nih.gov/19883493/">https://pubmed.ncbi.nlm.nih.gov/19883493/</a>) then some specific
type/token face thing.<br>
</font></p>
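<p><font size="+1">(To make the wavelet point concrete, a toy sketch
of my own--assuming numpy and scipy, and not a model of any cortical
patch: a small Gabor filter bank produces the same kind of
distributed response vector whatever the image class, faces and
Cadillacs alike.)</font></p>
<pre>
# Toy sketch (author's illustration): a generic Gabor wavelet bank.
# Nothing below is face-specific; the same code encodes any image.
import numpy as np
from scipy.signal import fftconvolve

def gabor(size=31, wavelength=8.0, theta=0.0, sigma=4.0):
    """A single 2-D Gabor wavelet: cosine carrier under a Gaussian."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated carrier axis
    envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * xr / wavelength)

def encode(image, n_orientations=8):
    """Distributed code: mean filter energy at several orientations."""
    responses = []
    for k in range(n_orientations):
        f = gabor(theta=k * np.pi / n_orientations)
        responses.append(np.abs(fftconvolve(image, f, mode="same")).mean())
    return np.array(responses)

# Any grayscale array works; the encoder "knows" nothing about faces.
print(encode(np.random.rand(128, 128)))
</pre>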
<p><font size="+1">I think this is more about hoping there is some
common ground that doesn't really exist. Most are beginning to
see that deep learning is a fundamental step forward in AI and
yes is very non-biologically plausible, except for the obvious
parts that are in the mammalian brain--layers (the cortex has
6) and connections. Its better to focus on why it works and how
to make mathematical sense of it.<br>
</font></p>
<p><font size="+1">Steve<br>
</font></p>
<div class="moz-cite-prefix">On 2/4/22 2:52 PM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite"
cite="mid:303504A2-453D-4DE8-8A34-C41693041954@nyu.edu">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">Steve,
<div dir="ltr"><br>
</div>
<div dir="ltr">The phrase I always liked was “poverty of the
imagination arguments”; I share your disdain for them. But
that’s why I think you should be careful of any retreat into
biological plausibility. As even Jay McClelland has
acknowledged, we do know that some humans some of the time
manipulate symbols. So wetware-based symbols are not literally
biologically impossible; the real question for cognitive
neuroscience is about the scope and development of symbols. </div>
<div dir="ltr"><br>
</div>
<div dir="ltr">For engineering, the real question is, are they
useful. Certainly for software engineering in general, they
are indispensable.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Beyond this, none of the available AI approaches
map particularly neatly onto what we know about the brain, and
none of what we know about the brain is understood well enough
to solve AI. All the examples you point to, for instance, are
actually controversial, not decisive. As you probably know,
for example, Nancy Kanwisher has a different take on
domain-specificity than you do (<a
href="https://web.mit.edu/bcs/nklab/" moz-do-not-send="true">https://web.mit.edu/bcs/nklab/</a>),
with evidence of specialization early in life, and Jeff Bowers
has argued that the grandmother cell hypothesis has been
dismissed prematurely (<a
href="https://jeffbowers.blogs.bristol.ac.uk/blog/grandmother-cells/"
moz-do-not-send="true">https://jeffbowers.blogs.bristol.ac.uk/blog/grandmother-cells/</a>);
there’s also a long literature on the possible neural
realization of rules, both in humans and other animals. </div>
<div dir="ltr"><br>
</div>
<div dir="ltr">I don’t know what the right answers are there,
but nor do I think that neurosymbolic systems are beholden to
them anymore than CNNs are bound to whether or not the brain
performs back-propagation.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Finally, as a reminder, “Distributed” per se in
not the right question; in some technical sense ASCII
encodings are distributed, and about as symbolic as you can
get. The proper question is really what you do with your
encodings; the neurosymbolic approach is trying to broaden the
available range of options.</div>
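<div dir="ltr">(A throwaway Python sketch of the point, purely
illustrative: the character is spread across eight bits, yet
everything you do with it is symbolic.)</div>
<pre>
# ASCII as a "distributed" code: each character is a vector of bits,
# but the encoding is discrete, compositional, and fully symbolic.
def to_bits(ch: str) -> list[int]:
    return [(ord(ch) >> i) & 1 for i in range(7, -1, -1)]

def from_bits(bits: list[int]) -> str:
    return chr(sum(b * 2**i for i, b in zip(range(7, -1, -1), bits)))

vec = to_bits("A")
print(vec)              # [0, 1, 0, 0, 0, 0, 0, 1] -- spread over 8 units
print(from_bits(vec))   # 'A' -- yet exactly recoverable: a symbol
</pre>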
<div dir="ltr"><br>
</div>
<div dir="ltr">Gary</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">
<blockquote type="cite">On Feb 4, 2022, at 07:04, Stephen José
Hanson <a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu"><jose@rubic.rutgers.edu></a> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<meta http-equiv="Content-Type" content="text/html;
charset=UTF-8">
<p><font size="+1">Well I don't like counterfactual
arguments or ones that start with "It can't be done with
neural networks.."--as this amounts to the old Rumelhart
saw, of "proof by lack of imagination".</font></p>
<p><font size="+1">I think my position and others (I can't
speak for Geoff and won't) is more of a "purist" view
that brains have computationally complete
representational power to do what ever is required of
human level mental processing. AI symbol systems are
remote descriptions of this level of processing.
Looking at 1000s of brain scans, one begins to see a
pattern of interacting large and smaller scale networks,
probably related to Resting state and the Default Mode
networks in some important competitive way. But what
one doesn't find is modular structure (e.g. face area..
nope) or evidence of "symbols" being processed.
Research on Numbers is interesting in this regard, as
number representation should provide some evidence of
discrete symbol processing as would letters. But
again the processing states from brain imaging more
generally appear to be distributed representations of
some sort.</font></p>
<p><font size="+1">One other direction has to do with prior
rules that could be neurally coded and therefore provide
an immediate bias in learning and thus dramatically
reduce the number of examples required for asymptotic
learning. Some of this has been done with
pre-training-- on let's say 1000s of videos that are
relatively generic, prior to learning on a small set of
videos related to a specific topic-- say two individuals
playing a monopoly game. In that case, no game-like
videos were sampled in the pre-training, and the LSTM
was trained to detect change point on 2 minutes of
video, achieving a 97% match with human parsers. In
these senses I have no problem with this type of hybrid
training.<br>
</font></p>
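<p><font size="+1">(Something like the following--a loose PyTorch
sketch of that kind of pipeline; the names, shapes, and loaders are
invented for illustration, not the actual study code.)</font></p>
<pre>
# Rough sketch of the pre-train-then-fine-tune setup described above.
# All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

FEAT = 512       # per-frame feature size (e.g., from a frozen CNN encoder)

class ChangePointLSTM(nn.Module):
    """Per-frame binary prediction: is this frame an event boundary?"""
    def __init__(self, hidden=128):
        super().__init__()
        self.lstm = nn.LSTM(FEAT, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, frames):                 # frames: (batch, time, FEAT)
        out, _ = self.lstm(frames)
        return self.head(out).squeeze(-1)      # boundary logits per frame

def train(model, loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        for frames, boundaries in loader:      # boundaries: 0/1 per frame
            loss = loss_fn(model(frames), boundaries.float())
            opt.zero_grad(); loss.backward(); opt.step()

model = ChangePointLSTM()
# 1) pre-train on generic videos (no game footage); 2) fine-tune on the
#    small task-specific set -- the prior acts as a learned inductive bias.
#    (Both loaders are hypothetical placeholders.)
# train(model, generic_video_loader, epochs=10, lr=1e-3)
# train(model, monopoly_video_loader, epochs=3, lr=1e-4)
</pre>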
<p><font size="+1">Steve</font><br>
</p>
<div class="moz-cite-prefix">On 2/4/22 9:07 AM, Gary Marcus
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:1A52CB03-212F-446F-95A5-EDE3A18C614A@nyu.edu">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8">
<div dir="ltr">The whole point of the neurosymbolic
approach is to develop systems that can accommodate both
vectors and symbols, since neither on their own seems
adequate.
<div dir="ltr">
<div dir="ltr"><br>
</div>
<div dir="ltr">If there are arguments against trying
to do that, we would be interested.</div>
<div dir="ltr"><br>
<blockquote type="cite">On Feb 4, 2022, at 4:17 AM,
Stephen José Hanson <a
class="moz-txt-link-rfc2396E"
href="mailto:jose@rubic.rutgers.edu"
moz-do-not-send="true"><jose@rubic.rutgers.edu></a>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<meta http-equiv="Content-Type"
content="text/html; charset=UTF-8">
<p><font size="+1">Geoff's position is pretty
clear. He said in the conversation we had
and in this thread, "vectors of soft
features",</font></p>
<p><font size="+1">Some of my claim is in several
of the conversations with Mike Jordan and Rich
Sutton, but briefly, there are a number of<br>
very large costly efforts from the 1970s and
1980s, to create, deploy and curate symbol AI
systems that were massive failures. Not
counterfactuals, but factuals that failed.
The MCC comes to mind with Adm Bobby Inmann's
national US mandate to counter the Japanese so
called"Fifth-generation AI systems" as a
massive failure of symbolic AI. <br>
</font></p>
<p><font size="+1">--------------------<br>
</font></p>
<p><font size="+1">In 1982, Japan launched its
Fifth Generation Computer Systems project
(FGCS), designed to develop intelligent
software that would run on novel computer
hardware. As the first national, large-scale
artificial intelligence (AI) research and
development (R&D) project to be free from
military influence and corporate profit
motives, the FGCS was open, international, and
oriented around public goods.<br>
</font></p>
<div class="moz-cite-prefix">On 2/3/22 6:34 PM,
Francesca Rossi2 wrote:<br>
</div>
<blockquote type="cite"
cite="mid:BN8PR15MB273890B80829AA8EB71E13B3D7289@BN8PR15MB2738.namprd15.prod.outlook.com">
<pre class="moz-quote-pre" wrap="">Hi all.
Thanks Gary for adding me to this thread.
I also would be interested in knowing why Steve thinks that NS AI did not work in the past, and why this is an indication that it cannot work now or in the future.
Thanks,
Francesca.
------------------
Francesca Rossi
IBM Fellow and AI Ethics Global Leader
T.J. Watson Research Center, Yorktown Heights, USA
+1-617-3869639
________________________________________
From: Artur Garcez <a class="moz-txt-link-rfc2396E" href="mailto:arturdavilagarcez@gmail.com" moz-do-not-send="true"><arturdavilagarcez@gmail.com></a>
Sent: Thursday, February 3, 2022 6:00 PM
To: Gary Marcus
Cc: Stephen José Hanson; Geoffrey Hinton; AIhub; <a class="moz-txt-link-abbreviated" href="mailto:connectionists@mailman.srv.cs.cmu.edu" moz-do-not-send="true">connectionists@mailman.srv.cs.cmu.edu</a>; Luis Lamb; Josh Tenenbaum; Anima Anandkumar; Francesca Rossi2; Swarat Chaudhuri; Gadi Singer
Subject: [EXTERNAL] Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton
It would be great to hear Geoff's account with historical reference to his 1990 edited special volume of the AI journal on connectionist symbol processing.
Judging from recent reviewing for NeurIPS, ICLR, ICML but also KR, AAAI, IJCAI (traditionally symbolic), there is a clear resurgence of neuro-symbolic approaches.
Best wishes,
Artur
On Thu, Feb 3, 2022 at 5:00 PM Gary Marcus <<a class="moz-txt-link-abbreviated" href="mailto:gary.marcus@nyu.edu" moz-do-not-send="true">gary.marcus@nyu.edu</a><a class="moz-txt-link-rfc2396E" href="mailto:gary.marcus@nyu.edu" moz-do-not-send="true"><mailto:gary.marcus@nyu.edu></a>> wrote:
Steve,
I’d love to hear you elaborate on this part,
Many more shoes will drop in the next few years. I for one don't believe one of those shoes will be hybrid approaches to AI; I've seen that movie before, and it didn't end well.
I’d love your take on why you think the impetus towards hybrid models ended badly before, and why you think that the mistakes of the past can’t be corrected. Also, it would be really instructive to compare with deep learning, which lost steam for quite some time but reemerged much stronger than ever. Might not the same happen with hybrid models?
I am cc’ing some folks (possibly not on this list) who have recently been sympathetic to hybrid models, in hopes of a rich discussion. (And, Geoff, still cc’d, I’d genuinely welcome your thoughts if you want to add them, despite our recent friction.)
Cheers,
Gary
On Feb 3, 2022, at 5:10 AM, Stephen José Hanson <<a class="moz-txt-link-abbreviated" href="mailto:jose@rubic.rutgers.edu" moz-do-not-send="true">jose@rubic.rutgers.edu</a><a class="moz-txt-link-rfc2396E" href="mailto:jose@rubic.rutgers.edu" moz-do-not-send="true"><mailto:jose@rubic.rutgers.edu></a>> wrote:
I would encourage you to read the whole transcript, as you will see the discussion does intersect with a number of issues you raised in an earlier post on what is learned/represented in DLs.
It's important for those paying attention to this thread to realize these are still very early times. Many more shoes will drop in the next few years. I for one don't believe one of those shoes will be hybrid approaches to AI; I've seen that movie before, and it didn't end well.
Best and hope you are doing well.
Steve
</pre>
</blockquote>
<div class="moz-signature">-- <br>
<div><signature.png></div>
</div>
</div>
</blockquote>
</div>
</div>
</blockquote>
<div class="moz-signature">-- <br>
<img src="cid:part10.F964F597.215A567E@rubic.rutgers.edu"
class="" border="0"></div>
</div>
</blockquote>
</div>
</blockquote>
<div class="moz-signature">-- <br>
<img src="cid:part10.F964F597.215A567E@rubic.rutgers.edu"
border="0"></div>
</body>
</html>