Connectionists: Stephen Hanson in conversation with Geoff Hinton

Ali Minai minaiaa at gmail.com
Sat Feb 5 00:45:52 EST 2022


I say that when a machine can correctly translate the meaning of a Persian
ghazal by Rumi or Hafez - with all its symbolism, metaphor, historical
allusions, etc. - into English, it has understood the language. Until
then, it is just superficial mimicry. As of now, when I want to get a laugh
on social media, I take a bit of Persian or Urdu (or Hindi or Arabic for
that matter), have Google or Facebook translate it, and post it. Without
fail, it is hilarious and "not even wrong".  A distributional view of
language without grounding in embodied experience, historical knowledge,
cultural knowledge, etc., can only go so far. It is surprising how far it
has gone, but nowhere near far enough.

This does not mean that I am arguing for the impossibility of understanding
by machines. Or that I am arguing against a neural approach - which I think
is the right one. We just happen to have taken a very superficial view of
how a system built from neurons should do things to realize a mind, and,
thanks to our computational muscle, have taken that superficial vision to
an extreme. There are notable successes - especially where the models
actually follow the principles that the brain uses, however simplistically.
CNN-based networks are a good example; they are a reasonable approximation
to the early visual system. But until we bring a deep understanding of the
biological organism - including evolution and development - to how we try
to build minds from matter, we will remain mired in these superficial
successes and impressive irrelevancies like learning to play chess and Go.

One problem is that there are two mutually incompatible visions of AI. One
is to build an actual intelligence - autonomous, self-motivated,
understanding, creative. The other is to build "smart" tools for human
purposes, to the point of including purposes that today require mental
work, such as translation, video captioning, recommendation, summarization,
driving, etc., just as, until recently, arithmetic required mental work. In
the near future, AI may well write legal briefs, prescribe medication, and
teach courses fairly competently. But this is all the work of smart,
obedient servants learning the craft, not autonomous, creative
intelligences comparable to our own - or even to a bird at this point.
Unfortunately, trying to do the latter has no immediate payoff, which is
what today's AI is mostly about. As for academic research, most people seem
more focused on getting that additional 0.8% in performance on a benchmark
dataset by adding a few million parameters to a model that already has 200
million parameters, all so they can get a publication in ICLR or ACL. Very
few people have the incentive or the inclination to address the fundamental
conceptual issues that would get us out of our current local minimum in AI
but would solve no lucrative problem and meet no popular benchmark.

Ali

*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/


On Thu, Feb 3, 2022 at 2:24 AM Geoffrey Hinton <geoffrey.hinton at gmail.com>
wrote:

> Embeddings are just vectors of soft feature detectors and they are very
> good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the
> opposite.
>
> A few decades ago, everyone I knew then would have agreed that the ability
> to translate a sentence into many different languages was strong evidence
> that you understood it.
> But once neural networks could do that, their critics moved the goalposts.
> An exception is Hector Levesque who defined the goalposts more sharply by
> saying that the ability to get pronoun references correct in Winograd
> sentences is a crucial test. Neural nets are improving at that but still
> have some way to go. Will Gary agree that when they can get pronoun
> references correct in Winograd sentences they really do understand? Or does
> he want to reserve the right to weasel out of that too?
>
> Some people, like Gary, appear to be strongly opposed to neural networks
> because they do not fit their preconceived notions of how the mind should
> work.
> I believe that any reasonable person would admit that if you ask a neural
> net to draw a picture of a hamster wearing a red hat and it draws such a
> picture, it understood the request.
>
> Geoff
>
>
>
>
>
> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
>> Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural
>> network community,
>>
>> There has been a lot of recent discussion on this list about framing and
>> scientific integrity. Often the first step in restructuring narratives is
>> to bully and dehumanize critics. The second is to misrepresent their
>> position. People in positions of power are sometimes tempted to do this.
>>
>> The Hinton-Hanson interview that you just published is a real-time
>> example of just that. It opens with a needless and largely content-free
>> personal attack on a single scholar (me), with the explicit intention of
>> discrediting that person. Worse, the only substantive thing it says is
>> false.
>>
>> Hinton says “In 2015 he [Marcus] made a prediction that computers
>> wouldn’t be able to do machine translation.”
>>
>> I never said any such thing.
>>
>> What I predicted, rather, was that multilayer perceptrons, as they
>> existed then, would not (on their own, absent other mechanisms)
>> understand language. Seven years later, they still haven’t, except in
>> the most superficial way.
>>
>> I made no comment whatsoever about machine translation, which I view as a
>> separate problem, solvable to a certain degree by correspondence without
>> semantics.
>>
>> I specifically tried to clarify Hinton’s confusion in 2019, but,
>> disappointingly, he has continued to purvey misinformation despite that
>> clarification. Here is what I wrote privately to him then, which should
>> have put the matter to rest:
>>
>> You have taken a single out of context quote [from 2015] and
>> misrepresented it. The quote, which you have prominently displayed at the
>> bottom on your own web page, says:
>>
>> Hierarchies of features are less suited to challenges such as language,
>> inference, and high-level planning. For example, as Noam Chomsky famously
>> pointed out, language is filled with sentences you haven't seen
>> before. Pure classifier systems don't know what to do with such sentences.
>> The talent of feature detectors -- in identifying which member of some
>> category something belongs to -- doesn't translate into understanding
>> novel sentences, in which each sentence has its own unique meaning.
>>
>> It does not say "neural nets would not be able to deal with novel
>> sentences"; it says that hierarchies of feature detectors (on their own, if
>> you read the context of the essay) would have trouble understanding novel
>> sentences.
>>
>>
>> Google Translate does not yet understand the content of the sentences it
>> translates. It cannot reliably answer questions about who did what to whom,
>> or why; it cannot infer the order of events in a paragraph; it cannot
>> determine the internal consistency of those events; and so forth.
>>
>> Since then, a number of scholars, such as the computational linguist
>> Emily Bender, have made similar points, and indeed current LLM difficulties
>> with misinformation, incoherence and fabrication all follow from these
>> concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter
>> with Alexander Koller, https://aclanthology.org/2020.acl-main.463.pdf,
>> also emphasizing issues of understanding and meaning:
>>
>> The success of the large neural language models on many NLP tasks is
>> exciting. However, we find that these successes sometimes lead to hype in
>> which these models are being described as “understanding” language or
>> capturing “meaning”. In this position paper, we argue that a system trained
>> only on form has a priori no way to learn meaning. ... a clear understanding
>> of the distinction between form and meaning will help guide the field
>> towards better science around natural language understanding.
>>
>> Her later article with Gebru on language models as “stochastic parrots” is
>> in some ways an extension of this point; machine translation requires only
>> mimicry, while true understanding (which is what I was discussing in 2015)
>> requires something deeper than that.
>>
>> Hinton’s intellectual error here is in equating machine translation with
>> the deeper comprehension that robust natural language understanding will
>> require; as Bender and Koller observed, the two appear not to be the same.
>> (There is a longer discussion of the relation between language
>> understanding and machine translation, and why the latter has turned out to
>> be more approachable than the former, in my 2019 book with Ernest Davis).
>>
>> More broadly, Hinton’s ongoing dismissiveness of research from
>> perspectives other than his own (e.g. linguistics) has done the field a
>> disservice.
>>
>> As Herb Simon once observed, science does not have to be zero-sum.
>>
>> Sincerely,
>> Gary Marcus
>> Professor Emeritus
>> New York University
>>
>> On Feb 2, 2022, at 06:12, AIhub <aihuborg at gmail.com> wrote:
>>
>> 
>> Stephen Hanson in conversation with Geoff Hinton
>>
>> In the latest episode of this video series for AIhub.org, Stephen Hanson
>> talks to  Geoff Hinton about neural networks, backpropagation,
>> overparameterization, digit recognition, voxel cells, syntax and semantics,
>> Winograd sentences, and more.
>>
>> You can watch the discussion, and read the transcript, here:
>>
>> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>>
>> About AIhub:
>> AIhub is a non-profit dedicated to connecting the AI community to the
>> public by providing free, high-quality information through AIhub.org (
>> https://aihub.org/).
>> We help researchers publish the latest AI news, summaries of their work,
>> opinion pieces, tutorials and more.  We are supported by many leading
>> scientific organizations in AI, namely AAAI, NeurIPS, ICML, AIJ/IJCAI,
>> ACM SIGAI, EurAI/AICOMM, CLAIRE and RoboCup.
>> Twitter: @aihuborg
>>
>>