Connectionists: Stephen Hanson in conversation with Geoff Hinton

Geoffrey Hinton geoffrey.hinton at gmail.com
Wed Feb 2 15:52:05 EST 2022


You started this thread and it was a mistake for me to engage in arguing
with you. I have said all I want to say.  You have endless time for arguing
and I don't. I find it more productive to spend time writing programs to
see what works and what doesn't. You should try it sometime.

Geoff


On Wed, Feb 2, 2022 at 3:25 PM Gary Marcus <gary.marcus at nyu.edu> wrote:

> Dear Geoff, and interested others,
>
> What, for example, would you make of a system that often drew the
> red-hatted hamster you requested, but perhaps a fifth of the time gave you
> utter nonsense? Or, say, one that you trained to create birds but that
> sometimes output stuff like this:
>
> [image attachment scrubbed by the mailing-list archive]
>
> One could
>
> a. avert one’s eyes and deem the anomalous outputs irrelevant
> or
> b. wonder if it might be possible that sometimes the system gets the right
> answer for the wrong reasons (e.g., partial historical contingency), and
> wonder whether another approach might be indicated.
>
> Benchmarks are harder than they look; most of the field has come to
> recognize that. The Turing Test has turned out to be a lousy measure of
> intelligence, easily gamed. Empirically, the Winograd Schema Challenge did
> not measure common sense as well as Hector might have thought. (As it
> happens, I am a minor coauthor of a very recent review on this very topic:
> https://arxiv.org/abs/2201.02387) But the Challenge's conquest in no
> way means machines now have common sense; many people from many different
> perspectives recognize that (including, e.g., Yann LeCun, who generally
> tends to be more aligned with you than with me).
>
> So: on the goalpost of the Winograd schema, I was wrong, and you can quote
> me; but what you said about me and machine translation remains your
> invention, and it is inexcusable that you simply ignored my 2019
> clarification. On the essential goal of trying to reach meaning and
> understanding, I remain unmoved; the problem remains unsolved.
>
> All of the problems LLMs have with coherence, reliability, truthfulness,
> misinformation, etc., stand witness to that fact. (Their persistent
> inability to filter out toxic and insulting remarks stems from the same
> root.) I am hardly the only person in the field to see that progress on
> any given benchmark does not inherently mean that the deep underlying
> problems have been solved. You, yourself, in fact, have occasionally made
> that point.
>
> With respect to embeddings: Embeddings are very good for natural language
> *processing*; but NLP is not the same as NL*U* – when it comes to
> *understanding*, their worth is still an open question. Perhaps they will
> turn out to be necessary; they clearly aren’t sufficient. At the extreme,
> they might even collapse into symbols, in the sense of uniquely
> identifiable encodings, akin to the ASCII code, in which a specific set of
> numbers stands for a specific word or concept. (Wouldn’t that be ironic?)
>
> (Your GLOM, which as you know I praised publicly, is in many ways an
> effort to wind up with encodings that effectively serve as symbols in
> exactly that way, guaranteed to serve as consistent representations of
> specific concepts.)
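>
> (To make the contrast concrete, here is a toy sketch in Python; the
> vocabulary and the vector values are purely illustrative, not drawn from
> any trained model. An embedding spreads a word across many soft features,
> so related words land near one another; a symbolic code, like ASCII,
> assigns each word one arbitrary but perfectly consistent identifier.)
>
>     import numpy as np
>
>     # Distributed embeddings: each word is a dense vector of soft feature
>     # activations; similar words get similar vectors. (Toy values.)
>     embeddings = {
>         "hamster": np.array([0.8, 0.1, 0.7]),
>         "gerbil":  np.array([0.7, 0.2, 0.6]),
>         "hat":     np.array([0.1, 0.9, 0.2]),
>     }
>
>     # Symbolic codes: one unique, arbitrary identifier per word;
>     # "gerbil" is no closer to "hamster" than "hat" is.
>     symbols = {word: i for i, word in enumerate(sorted(embeddings))}
>
>     def cosine(u, v):
>         return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
>
>     print(cosine(embeddings["hamster"], embeddings["gerbil"]))  # ~0.99
>     print(cosine(embeddings["hamster"], embeddings["hat"]))     # ~0.31
>     print(symbols)  # {'gerbil': 0, 'hamster': 1, 'hat': 2}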
>
> Notably absent from your email is any kind of apology for misrepresenting
> my position. It’s one thing to say that “many people thirty years ago once
> thought X” and another to say “Gary Marcus said X in 2015”, when I didn’t.
> I have consistently felt throughout our interactions that you have mistaken
> me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014) apologized to me
> for having made that error. I am still not he.
>
> Which maybe connects to the last point; if you read my work, you would see
> thirty years of arguments *for* neural networks, just not in the way that
> you want them to exist. I have ALWAYS argued that there is a role for them;
>  characterizing me as a person “strongly opposed to neural networks” misses
> the whole point of my 2001 book, which was subtitled “Integrating
> Connectionism and Cognitive Science.”
>
> In the last two decades or so you have insisted (for reasons you have
> never fully clarified, so far as I know) on abandoning symbol-manipulation,
> but the reverse is not the case: I have *never* called for dismissal of
> neural networks, but rather for some hybrid between the two (as you
> yourself contemplated in 1991); the point of the 2001 book was to
> characterize exactly where multilayer perceptrons succeeded and broke down,
> and where symbols could complement them. It’s a rhetorical trick (which is
> what the previous thread was about) to pretend otherwise.
>
> Gary
>
>
> On Feb 2, 2022, at 11:22, Geoffrey Hinton <geoffrey.hinton at gmail.com>
> wrote:
>
>
> Embeddings are just vectors of soft feature detectors and they are very
> good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the
> opposite.
>
> A few decades ago, everyone I knew would have agreed that the ability
> to translate a sentence into many different languages was strong evidence
> that you understood it.
>
>
> But once neural networks could do that, their critics moved the goalposts.
> An exception is Hector Levesque who defined the goalposts more sharply by
> saying that the ability to get pronoun references correct in Winograd
> sentences is a crucial test. Neural nets are improving at that but still
> have some way to go. Will Gary agree that when they can get pronoun
> references correct in Winograd sentences they really do understand? Or does
> he want to reserve the right to weasel out of that too?
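>
> (For anyone who has not seen one: each Winograd schema is a sentence pair
> differing in a single word, and that word flips which noun the pronoun
> refers to. A toy scoring harness, sketched in Python below; the classic
> trophy/suitcase pair is from Levesque's challenge, and resolve_pronoun
> stands in for whatever model is under test.)
>
>     # Swapping "big" for "small" flips the referent of "it".
>     SCHEMAS = [
>         ("The trophy doesn't fit in the suitcase because it is too big.",
>          "trophy"),
>         ("The trophy doesn't fit in the suitcase because it is too small.",
>          "suitcase"),
>     ]
>
>     def score(resolve_pronoun):
>         """Fraction of pronoun references the model under test gets right."""
>         hits = sum(resolve_pronoun(text) == referent
>                    for text, referent in SCHEMAS)
>         return hits / len(SCHEMAS)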
>
> Some people, like Gary, appear to be strongly opposed to neural networks
> because they do not fit their preconceived notions of how the mind should
> work.
> I believe that any reasonable person would admit that if you ask a neural
> net to draw a picture of a hamster wearing a red hat and it draws such a
> picture, it understood the request.
>
> Geoff
>
>
>
>
>
> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
>> Dear AI Hub, cc: Stephen Hanson and Geoffrey Hinton, and the larger neural
>> network community,
>>
>> There has been a lot of recent discussion on this list about framing and
>> scientific integrity. Often the first step in restructuring narratives is
>> to bully and dehumanize critics. The second is to misrepresent their
>> position. People in positions of power are sometimes tempted to do this.
>>
>> The Hinton-Hanson interview that you just published is a real-time
>> example of just that. It opens with a needless and largely content-free
>> personal attack on a single scholar (me), with the explicit intention of
>> discrediting that person. Worse, the only substantive thing it says is
>> false.
>>
>> Hinton says “In 2015 he [Marcus] made a prediction that computers
>> wouldn’t be able to do machine translation.”
>>
>> I never said any such thing.
>>
>> What I predicted, rather, was that multilayer perceptrons, as they
>> existed then, would not (on their own, absent other mechanisms)
>> understand language. Seven years later, they still haven’t, except in
>> the most superficial way.
>>
>> I made no comment whatsoever about machine translation, which I view as a
>> separate problem, solvable to a certain degree by correspondence without
>> semantics.
>>
>> I specifically tried to clarify Hinton’s confusion in 2019, but,
>> disappointingly, he has continued to purvey misinformation despite that
>> clarification. Here is what I wrote privately to him then, which should
>> have put the matter to rest:
>>
>> You have taken a single out-of-context quote [from 2015] and
>> misrepresented it. The quote, which you have prominently displayed at the
>> bottom of your own web page, says:
>>
>> Hierarchies of features are less suited to challenges such as language,
>> inference, and high-level planning. For example, as Noam Chomsky famously
>> pointed out, language is filled with sentences you haven't seen before.
>> Pure classifier systems don't know what to do with such sentences. The
>> talent of feature detectors -- in identifying which member of some
>> category something belongs to -- doesn't translate into understanding
>> novel sentences, in which each sentence has its own unique meaning.
>>
>> It does not say "neural nets would not be able to deal with novel
>> sentences"; it says that hierarchies of feature detectors (on their own,
>> if you read the context of the essay) would have trouble understanding
>> novel sentences.
>>
>>
>> Google Translate does not yet understand the content of the sentences it
>> translates. It cannot reliably answer questions about who did what to whom,
>> or why; it cannot infer the order of the events in paragraphs; it cannot
>> determine the internal consistency of those events; and so forth.
>>
>> Since then, a number of scholars, such as the computational linguist
>> Emily Bender, have made similar points, and indeed current LLM difficulties
>> with misinformation, incoherence and fabrication all follow from these
>> concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter
>> with Alexander Koller, https://aclanthology.org/2020.acl-main.463.pdf,
>> also emphasizing issues of understanding and meaning:
>>
>> The success of the large neural language models on many NLP tasks is
>> exciting. However, we find that these successes sometimes lead to hype in
>> which these models are being described as “understanding” language or
>> capturing “meaning”. In this position paper, we argue that a system trained
>> only on form has a priori no way to learn meaning. ... a clear understanding
>> of the distinction between form and meaning will help guide the field
>> towards better science around natural language understanding.
>>
>> Her later article with Gebru on language models as “stochastic parrots” is
>> in some ways an extension of this point; machine translation requires only
>> mimicry, while true understanding (which is what I was discussing in 2015)
>> requires something deeper than that.
>>
>> Hinton’s intellectual error here is in equating machine translation with
>> the deeper comprehension that robust natural language understanding will
>> require; as Bender and Koller observed, the two appear not to be the same.
>> (There is a longer discussion of the relation between language
>> understanding and machine translation, and why the latter has turned out to
>> be more approachable than the former, in my 2019 book with Ernest Davis).
>>
>> More broadly, Hinton’s ongoing dismissiveness of research from
>> perspectives other than his own (e.g., linguistics) has done the field a
>> disservice.
>>
>> As Herb Simon once observed, science does not have to be zero-sum.
>>
>> Sincerely,
>> Gary Marcus
>> Professor Emeritus
>> New York University
>>
>> On Feb 2, 2022, at 06:12, AIhub <aihuborg at gmail.com> wrote:
>>
>>
>> Stephen Hanson in conversation with Geoff Hinton
>>
>> In the latest episode of this video series for AIhub.org,
>> Stephen Hanson talks to Geoff Hinton about neural networks,
>> backpropagation, overparameterization, digit recognition, voxel cells,
>> syntax and semantics, Winograd sentences, and more.
>>
>> You can watch the discussion, and read the transcript, here:
>>
>> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>>
>> About AIhub:
>> AIhub is a non-profit dedicated to connecting the AI community to the
>> public by providing free, high-quality information through AIhub.org
>> (https://aihub.org/).
>> We help researchers publish the latest AI news, summaries of their work,
>> opinion pieces, tutorials and more.  We are supported by many leading
>> scientific organizations in AI, namely AAAI, NeurIPS, ICML, AIJ/IJCAI,
>> ACM SIGAI, EurAI/AICOMM, CLAIRE, and RoboCup.
>> Twitter: @aihuborg
>>
>>