Connectionists: Stephen Hanson in conversation with Geoff Hinton

Danko Nikolic danko.nikolic at gmail.com
Thu Jul 14 12:16:48 EDT 2022


Dear Steve,

Thank you very much for your message and for the greetings. I will pass
them on if an occasion arises.

Regarding your question: The key problem I am trying to address, and one
that, to the best of my knowledge, no connectionist system has been able to
solve so far, is scaling the system's intelligence. For example, if the
system is able to correctly recognize 100 different objects, how many
additional resources are needed to double that to 200? All the empirical
data show that connectionist systems scale poorly: some of the best systems
we have require 500x more resources in order to increase the intelligence
by only 2x. I document this problem in the manuscript and even run
simulations showing that the worst scaling occurs when connectionist systems
need to solve a generalized XOR problem.
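
To make the generalized XOR point concrete, here is a toy illustration (my
own sketch, not taken from the manuscript; it assumes numpy and scikit-learn
are available). Generalized XOR is n-bit parity, and the sketch simply looks
for the smallest hidden-layer width, among a few tested values, that lets a
multilayer perceptron fit all patterns as n grows:

# Toy sketch (illustrative only): n-bit parity as generalized XOR.
import numpy as np
from itertools import product
from sklearn.neural_network import MLPClassifier

def parity_dataset(n_bits):
    """All 2**n_bits binary inputs, labelled by their parity (generalized XOR)."""
    X = np.array(list(product([0, 1], repeat=n_bits)), dtype=float)
    y = X.sum(axis=1).astype(int) % 2
    return X, y

for n_bits in (2, 4, 6, 8):
    X, y = parity_dataset(n_bits)
    fitted_width = None
    for width in (2, 4, 8, 16, 32, 64):
        clf = MLPClassifier(hidden_layer_sizes=(width,), max_iter=5000,
                            random_state=0).fit(X, y)
        if clf.score(X, y) == 1.0:
            fitted_width = width
            break
    print(f"n={n_bits} ({2**n_bits} patterns): smallest tested width "
          f"reaching 100% training accuracy: {fitted_width}")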

In contrast, the biological brain scales well. This I also quantify in the
paper.

I will look at the publication that you mentioned. However, so far, I
haven't seen a solution that scales well in intelligence.

My argument is that transient selection of subnetworks, with the help of the
mentioned proteins, is how intelligence scaling is achieved in biological
brains.

In short, intelligence scaling is the key problem that concerns me. I
describe the intelligence scaling problem in more detail in this book, which
came out a few weeks ago and is written for practitioners in data science
and AI: https://amzn.to/3IBxUpL

I hope that this at least partly answers where I see the problems and what
I am trying to solve.

Greetings from Germany,

Danko

Dr. Danko Nikolić
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/
--- A progress usually starts with an insight ---


On Thu, Jul 14, 2022 at 3:30 PM Grossberg, Stephen <steve at bu.edu> wrote:

> Dear Danko,
>
> I have just read your new article and would like to comment briefly about
> it.
>
> In your introductory remarks, you write:
>
> "However, connectionism did not yet produce a satisfactory explanation of
> how the mental emerges from the physical. A number of open problems remain
> (5,6,7,8). As a result, the explanatory gap between the mind and the
> brain remains wide open."
>
> I certainly believe that no theoretical explanation in science is ever
> complete. However, I also believe that "the explanatory gap between the
> mind and the brain" does not remain "wide open".
>
> My Magnum Opus, which was published in 2021, makes that belief clear in its
> title:
>
> *Conscious Mind, Resonant Brain: How Each Brain Makes a Mind*
>
> https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
>
> The book provides a self-contained and non-technical exposition in a
> conversational tone of many principled and unifying explanations of
> psychological and neurobiological data.
>
> In particular, it explains roles for the metabotropic glutamate receptors
> that you mention in your own work. See the text and figures around p. 521.
> This explanation unifies psychological, anatomical, neurophysiological,
> biophysical, and biochemical data about the processes under discussion.
>
> I have a very old-fashioned view about how to understand scientific
> theories. I get excited by theories that explain and predict more data than
> previous theories.
>
> Which of the data that I explain in my book, and support with quantitative
> computer simulations, can you also explain?
>
> What data can you explain, in the same quantitative sense, that you do not
> think the neural models in my book can explain?
>
> I would be delighted to discuss these issues further with you.
>
> If you are in touch with my old friend and esteemed colleague, Wolf
> Singer, please send him my warm regards. I cite the superb work that he and
> various of his collaborators have done in many places in my book.
>
> Best,
>
> Steve
>
> Stephen Grossberg
> http://en.wikipedia.org/wiki/Stephen_Grossberg
> http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en
> https://youtu.be/9n5AnvFur7I
> https://www.youtube.com/watch?v=_hBye6JQCh4
> https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
>
> Wang Professor of Cognitive and Neural Systems
> Director, Center for Adaptive Systems
> Professor Emeritus of Mathematics & Statistics,
>        Psychological & Brain Sciences, and Biomedical Engineering
> Boston University
> sites.bu.edu/steveg
> steve at bu.edu
>
> ------------------------------
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Danko Nikolic <danko.nikolic at gmail.com>
> *Sent:* Thursday, July 14, 2022 6:05 AM
> *To:* Gary Marcus <gary.marcus at nyu.edu>
> *Cc:* connectionists at mailman.srv.cs.cmu.edu <
> connectionists at mailman.srv.cs.cmu.edu>; AIhub <aihuborg at gmail.com>
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
> Dear Gary and everyone,
>
> I am continuing the discussion from where we left off a few months ago.
> Back then, some of us agreed that the problem of understanding remains
> unsolved.
>
> As a reminder, the challenge for connectionism was to 1) learn with few
> examples and 2) apply the knowledge to a broad set of situations.
>
> I am happy to announce that I have now finished a draft of a paper in
> which I propose how the brain is able to achieve that. The manuscript
> requires a bit of patience for two reasons: first, the reader may be
> encountering certain aspects of brain physiology for the first time;
> second, it may take some effort to grasp the counterintuitive
> implications of the new ideas (they require a different way of thinking
> than the one we are used to from connectionism).
>
> In short, I am suggesting that instead of the connectionist paradigm, we
> adopt transient selection of subnetworks. The mechanisms that transiently
> select brain subnetworks are distributed all over the nervous system and, I
> argue, are our main machinery for thinking/cognition. The surprising
> outcome is that neural activation, which was central in connectionism, now
> plays only a supportive role, while the real 'workers' within the brain are
> the mechanisms for transient selection of subnetworks.
>
> I also explain how I think transient selection achieves learning with only
> a few examples and how the learned knowledge can be applied to a
> broad set of situations.
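>
> As a deliberately crude toy sketch of the idea (my own illustration here,
> not the mechanism described in the manuscript, and assuming only numpy):
> 'transient selection' is approximated by a context signal that
> multiplicatively gates hidden units, so each context activates a different
> subnetwork over the same shared weights.
>
> import numpy as np
>
> rng = np.random.default_rng(0)
> n_in, n_hidden, n_out = 10, 32, 3
> W1 = rng.normal(size=(n_in, n_hidden)) * 0.3   # shared weights
> W2 = rng.normal(size=(n_hidden, n_out)) * 0.3
>
> def gate(context, n_hidden, fraction=0.25):
>     """Transiently select a context-dependent subset of hidden units."""
>     g = np.zeros(n_hidden)
>     seed = abs(hash(context)) % 2**32          # stable within one run
>     idx = np.random.default_rng(seed).permutation(n_hidden)
>     g[idx[: int(fraction * n_hidden)]] = 1.0
>     return g
>
> def forward(x, context):
>     # Only the transiently selected subnetwork contributes to the output.
>     h = np.tanh(x @ W1) * gate(context, n_hidden)
>     return h @ W2
>
> x = rng.normal(size=n_in)
> print(forward(x, "foraging"))
> print(forward(x, "escaping a predator"))  # same input, different subnetwork
>
> The sketch only shows that the same weights can host many context-specific
> subnetworks; the biological machinery for selecting them is, of course, far
> richer than a fixed random gate.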
>
> The manuscript is made available to everyone and can be downloaded here:
> https://bit.ly/3IFs8Ug
> (I apologize for the neuroscience lingo, which I tried to minimize.)
>
> It will likely take a wide effort to implement these concepts as an AI
> technology, provided my ideas do not have a major flaw in the first place.
> Does anyone see a flaw?
>
> Thanks.
>
> Danko
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
>
>
> On Thu, Feb 3, 2022 at 5:25 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
> Dear Danko,
>
> Well said. I had a somewhat similar response to Jeff Dean’s 2021 TED talk,
> in which he said (paraphrasing from memory, because I don’t remember the
> precise words) that the famous 2012 Quoc Le unsupervised model [
> https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf]
> had learned the concept of a cat. In reality the model had clustered
> together some catlike images based on the image statistics that it had
> extracted, but it was a long way from a full, counterfactual-supporting
> concept of a cat, much as you describe below.
>
> I fully agree with you that the reason for even having a semantics is, as
> you put it, "to 1) learn with a few examples and 2) apply the knowledge to
> a broad set of situations.” GPT-3 sometimes gives the appearance of having
> done so, but it falls apart under close inspection, so the problem remains
> unsolved.
>
> Gary
>
> On Feb 3, 2022, at 3:19 AM, Danko Nikolic <danko.nikolic at gmail.com> wrote:
>
> G. Hinton wrote: "I believe that any reasonable person would admit that if
> you ask a neural net to draw a picture of a hamster wearing a red hat and
> it draws such a picture, it understood the request."
>
> I would like to suggest why drawing a hamster with a red hat does not
> necessarily imply understanding of the statement "hamster wearing a red
> hat".
> To understand "hamster wearing a red hat" would mean inferring, in
> newly emerging situations involving this hamster, all the real-life
> implications that the red hat brings to the little animal.
>
> What would happen to the hat if the hamster rolls on its back? (Would the
> hat fall off?)
> What would happen to the red hat when the hamster enters its lair? (Would
> the hat fall off?)
> What would happen to that hamster when it goes foraging? (Would the red
> hat have an influence on finding food?)
> What would happen in a situation of being chased by a predator? (Would it
> be easier for predators to spot the hamster?)
>
> ...and so on.
>
> Countless questions can be asked. One has understood "hamster wearing
> a red hat" only if one can answer reasonably well many such real-life
> questions. Similarly, a student has understood the material in a class
> only if they can apply it in real-life situations (e.g.,
> applying Pythagoras' theorem). If a student gives a correct answer to a
> multiple-choice question, we don't know whether the student understood the
> material or whether this was just rote learning (often, it is rote
> learning).
>
> I also suggest that understanding comes together with effective
> learning: We store new information in such a way that we can recall it
> later and use it effectively, i.e., make good inferences in newly emerging
> situations based on this knowledge.
>
> In short: Understanding makes us humans able to 1) learn with a few
> examples and 2) apply the knowledge to a broad set of situations.
>
> No neural network today has such capabilities, and we don't know how to
> give them such capabilities. Neural networks need large numbers of
> training examples that cover a large variety of situations, and even then
> the networks can only deal with what the training examples have already
> covered. Neural networks cannot extrapolate in that 'understanding' sense.
>
> I suggest that understanding truly extrapolates from a piece of knowledge.
> It is not about satisfying a task such as translation between languages or
> drawing hamsters with hats. It is how you got the capability to complete
> the task: Did you only have a few examples that covered something different
> but related and then you extrapolated from that knowledge? If yes, this is
> going in the direction of understanding. Have you seen countless examples
> and then interpolated among them? Then perhaps it is not understanding.
>
> So, for the case of drawing a hamster wearing a red hat, understanding
> perhaps would have taken place if the following happened before that:
>
> 1) first, the network learned about hamsters (not many examples)
> 2) after that the network learned about red hats (outside the context of
> hamsters and without many examples)
> 3) finally the network learned about drawing (outside of the context of
> hats and hamsters, not many examples)
>
> After that, the network is asked to draw a hamster with a red hat. If it
> does it successfully, maybe we have started cracking the problem of
> understanding.
>
> Note also that this requires the network to learn sequentially without
> exhibiting catastrophic forgetting of the previous knowledge, which is
> possibly also a consequence of human learning by understanding.
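>
> To make the catastrophic-forgetting issue concrete, here is a small
> illustration (my own sketch, assuming numpy and scikit-learn; it is not the
> mechanism I propose): a standard network is trained on one task and then on
> a second one, and its accuracy on the first task typically collapses.
>
> import numpy as np
> from sklearn.datasets import load_digits
> from sklearn.neural_network import MLPClassifier
>
> X, y = load_digits(return_X_y=True)
> X = X / 16.0
> task_a, task_b = y < 5, y >= 5              # digits 0-4 vs. digits 5-9
>
> clf = MLPClassifier(hidden_layer_sizes=(64,), random_state=0)
> classes = np.arange(10)
>
> for _ in range(50):                          # phase 1: train on task A only
>     clf.partial_fit(X[task_a], y[task_a], classes=classes)
> print("after A, accuracy on A:", round(clf.score(X[task_a], y[task_a]), 3))
>
> for _ in range(50):                          # phase 2: train on task B only
>     clf.partial_fit(X[task_b], y[task_b], classes=classes)
> print("after B, accuracy on A:", round(clf.score(X[task_a], y[task_a]), 3))
> print("after B, accuracy on B:", round(clf.score(X[task_b], y[task_b]), 3))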
>
>
> Danko
>
>
>
>
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> --- A progress usually starts with an insight ---
>
>
>
>
> On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Without getting into the specific dispute between Gary and Geoff, I think
> with approaches similar to GLOM, we are finally headed in the right
> direction. There’s plenty of neurophysiological evidence for single-cell
> abstractions and multisensory neurons in the brain, which one might claim
> correspond to symbols. And I think we can finally reconcile the decades-old
> dispute between Symbolic AI and Connectionism.
>
>
>
> GARY: (Your GLOM, which as you know I praised publicly, is in many ways an
> effort to wind up with encodings that effectively serve as symbols in
> exactly that way, guaranteed to serve as consistent representations of
> specific concepts.)
>
> GARY: I have *never* called for dismissal of neural networks, but rather
> for some hybrid between the two (as you yourself contemplated in 1991); the
> point of the 2001 book was to characterize exactly where multilayer
> perceptrons succeeded and broke down, and where symbols could complement
> them.
>
>
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Lifeboat Foundation Bios: Professor Asim Roy
> https://lifeboat.com/ex/bios.asim.roy
>
> Asim Roy | iSearch (asu.edu)
> https://isearch.asu.edu/profile/9973
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Gary Marcus
> *Sent:* Wednesday, February 2, 2022 1:26 PM
> *To:* Geoffrey Hinton <geoffrey.hinton at gmail.com>
> *Cc:* AIhub <aihuborg at gmail.com>; connectionists at mailman.srv.cs.cmu.edu
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
>
>
> Dear Geoff, and interested others,
>
>
>
> What, for example, would you make of a system that often drew the
> red-hatted hamster you requested, and perhaps a fifth of the time gave you
> utter nonsense? Or say one that you trained to create birds but that
> sometimes outputs stuff like this:
>
>
>
> <image001.png>
>
>
>
> One could
>
>
>
> a. avert one’s eyes and deem the anomalous outputs irrelevant
>
> or
>
> b. wonder if it might be possible that sometimes the system gets the right
> answer for the wrong reasons (e.g., partial historical contingency), and
> wonder whether another approach might be indicated.
>
>
>
> Benchmarks are harder than they look; most of the field has come to
> recognize that. The Turing Test has turned out to be a lousy measure of
> intelligence, easily gamed. It has turned out empirically that the Winograd
> Schema Challenge did not measure common sense as well as Hector might have
> thought. (As it happens, I am a minor coauthor of a very recent review on
> this very topic: https://arxiv.org/abs/2201.02387)
> But its conquest in no way means machines now have common sense; many
> people from many different perspectives recognize that (including, e.g.,
> Yann LeCun, who generally tends to be more aligned with you than with me).
>
>
>
> So: on the goalpost of the Winograd schema, I was wrong, and you can quote
> me; but what you said about me and machine translation remains your
> invention, and it is inexcusable that you simply ignored my 2019
> clarification. On the essential goal of trying to reach meaning and
> understanding, I remain unmoved; the problem remains unsolved.
>
>
>
> All of the problems LLMs have with coherence, reliability, truthfulness,
> misinformation, etc. stand witness to that fact. (Their persistent inability
> to filter out toxic and insulting remarks stems from the same.) I am hardly
> the only person in the field to see that progress on any given benchmark
> does not inherently mean that the deep underlying problems have been solved.
> You, yourself, in fact, have occasionally made that point.
>
>
>
> With respect to embeddings: Embeddings are very good for natural language
> *processing*; but NLP is not the same as NL*U* – when it comes to
> *understanding*, their worth is still an open question. Perhaps they will
> turn out to be necessary; they clearly aren’t sufficient. In their extreme,
> they might even collapse into being symbols, in the sense of uniquely
> identifiable encodings, akin to the ASCII code, in which a specific set of
> numbers stands for a specific word or concept. (Wouldn’t that be ironic?)
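>
> As a deliberately crude sketch of that contrast (illustrative only, with a
> made-up vocabulary): a dense embedding represents a word as a vector of
> graded features, whereas a symbol-like code, akin to ASCII or a one-hot
> vector, is a unique identifier with no graded overlap between items.
>
> import numpy as np
>
> vocab = ["hamster", "gerbil", "hat", "red"]
> rng = np.random.default_rng(0)
>
> # Dense embeddings: graded feature vectors (random here; learned in practice).
> dense = {w: rng.normal(size=8) for w in vocab}
> # Symbol-like codes: one unique, mutually orthogonal identifier per word.
> onehot = {w: np.eye(len(vocab))[i] for i, w in enumerate(vocab)}
>
> def cos(a, b):
>     return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
>
> print(cos(dense["hamster"], dense["gerbil"]))   # graded overlap is possible
> print(cos(onehot["hamster"], onehot["gerbil"]))  # always exactly 0.0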
>
>
>
> (Your GLOM, which as you know I praised publicly, is in many ways an
> effort to wind up with encodings that effectively serve as symbols in
> exactly that way, guaranteed to serve as consistent representations of
> specific concepts.)
>
>
>
> Notably absent from your email is any kind of apology for misrepresenting
> my position. It’s one thing to say that “many people thirty years ago once
> thought X” and another to say “Gary Marcus said X in 2015”, when I didn’t.
> I have consistently felt throughout our interactions that you have mistaken
> me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014) apologized to me
> for having made that error. I am still not he.
>
>
>
> Which maybe connects to the last point; if you read my work, you would see
> thirty years of arguments *for* neural networks, just not in the way that
> you want them to exist. I have ALWAYS argued that there is a role for them;
>  characterizing me as a person “strongly opposed to neural networks” misses
> the whole point of my 2001 book, which was subtitled “Integrating
> Connectionism and Cognitive Science.”
>
>
>
> In the last two decades or so you have insisted (for reasons you have
> never fully clarified, so far as I know) on abandoning symbol-manipulation,
> but the reverse is not the case: I have *never* called for dismissal of
> neural networks, but rather for some hybrid between the two (as you
> yourself contemplated in 1991); the point of the 2001 book was to
> characterize exactly where multilayer perceptrons succeeded and broke down,
> and where symbols could complement them. It’s a rhetorical trick (which is
> what the previous thread was about) to pretend otherwise.
>
>
>
> Gary
>
>
>
>
>
> On Feb 2, 2022, at 11:22, Geoffrey Hinton <geoffrey.hinton at gmail.com>
> wrote:
>
> 
>
> Embeddings are just vectors of soft feature detectors and they are very
> good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the
> opposite.
>
>
>
> A few decades ago, everyone I knew then would have agreed that the ability
> to translate a sentence into many different languages was strong evidence
> that you understood it.
>
>
>
> But once neural networks could do that, their critics moved the goalposts.
> An exception is Hector Levesque who defined the goalposts more sharply by
> saying that the ability to get pronoun references correct in Winograd
> sentences is a crucial test. Neural nets are improving at that but still
> have some way to go. Will Gary agree that when they can get pronoun
> references correct in Winograd sentences they really do understand? Or does
> he want to reserve the right to weasel out of that too?
>
>
>
> Some people, like Gary, appear to be strongly opposed to neural networks
> because they do not fit their preconceived notions of how the mind should
> work.
>
> I believe that any reasonable person would admit that if you ask a neural
> net to draw a picture of a hamster wearing a red hat and it draws such a
> picture, it understood the request.
>
>
>
> Geoff
>
>
>
>
>
>
>
>
>
>
>
> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
> Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural
> network community,
>
>
>
> There has been a lot of recent discussion on this list about framing and
> scientific integrity. Often the first step in restructuring narratives is
> to bully and dehumanize critics. The second is to misrepresent their
> position. People in positions of power are sometimes tempted to do this.
>
>
>
> The Hinton-Hanson interview that you just published is a real-time example
> of just that. It opens with a needless and largely content-free personal
> attack on a single scholar (me), with the explicit intention of
> discrediting that person. Worse, the only substantive thing it says is
> false.
>
>
>
> Hinton says “In 2015 he [Marcus] made a prediction that computers wouldn’t
> be able to do machine translation.”
>
>
>
> I never said any such thing.
>
>
>
> What I predicted, rather, was that multilayer perceptrons, as they existed
> then, would not (on their own, absent other mechanisms) *understand* language.
> Seven years later, they still haven’t, except in the most superficial way.
>
>
>
>
> I made no comment whatsoever about machine translation, which I view as a
> separate problem, solvable to a certain degree by correspondence without
> semantics.
>
>
>
> I specifically tried to clarify Hinton’s confusion in 2019, but,
> disappointingly, he has continued to purvey misinformation despite that
> clarification. Here is what I wrote privately to him then, which should
> have put the matter to rest:
>
>
>
> You have taken a single out-of-context quote [from 2015] and
> misrepresented it. The quote, which you have prominently displayed at the
> bottom of your own web page, says:
>
>
>
> Hierarchies of features are less suited to challenges such as language,
> inference, and high-level planning. For example, as Noam Chomsky famously
> pointed out, language is filled with sentences you haven't seen
> before. Pure classifier systems don't know what to do with such sentences.
> The talent of feature detectors -- in identifying which member of some
> category something belongs to -- doesn't translate into understanding
> novel sentences, in which each sentence has its own unique meaning.
>
>
>
> It does *not* say "neural nets would not be able to deal with novel
> sentences"; it says that hierarchies of feature detectors (on their own, if
> you read the context of the essay) would have trouble *understanding* novel sentences.
>
>
>
>
> Google Translate does not yet *understand* the content of the sentences
> it translates. It cannot reliably answer questions about who did what to
> whom, or why; it cannot infer the order of the events in paragraphs; it
> can't determine the internal consistency of those events; and so forth.
>
>
>
> Since then, a number of scholars, such as the computational linguist
> Emily Bender, have made similar points, and indeed current LLM difficulties
> with misinformation, incoherence and fabrication all follow from these
> concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter
> with Alexander Koller, https://aclanthology.org/2020.acl-main.463.pdf,
> also emphasizing issues of understanding and meaning:
>
>
>
> *The success of the large neural language models on many NLP tasks is
> exciting. However, we find that these successes sometimes lead to hype in
> which these models are being described as “understanding” language or
> capturing “meaning”. In this position paper, we argue that a system trained
> only on form has a priori no way to learn meaning. ... a clear understanding
> of the distinction between form and meaning will help guide the field
> towards better science around natural language understanding. *
>
>
>
> Her later article with Gebru on language models as “stochastic parrots” is in
> some ways an extension of this point; machine translation requires mimicry,
> while true understanding (which is what I was discussing in 2015) requires
> something deeper than that.
>
>
>
> Hinton’s intellectual error here is in equating machine translation with
> the deeper comprehension that robust natural language understanding will
> require; as Bender and Koller observed, the two appear not to be the same.
> (There is a longer discussion of the relation between language
> understanding and machine translation, and why the latter has turned out to
> be more approachable than the former, in my 2019 book with Ernest Davis).
>
>
>
> More broadly, Hinton’s ongoing dismissiveness of research from
> perspectives other than his own (e.g., linguistics) has done the field a
> disservice.
>
>
>
> As Herb Simon once observed, science does not have to be zero-sum.
>
>
>
> Sincerely,
>
> Gary Marcus
>
> Professor Emeritus
>
> New York University
>
>
>
> On Feb 2, 2022, at 06:12, AIhub <aihuborg at gmail.com> wrote:
>
> 
>
> Stephen Hanson in conversation with Geoff Hinton
>
>
>
> In the latest episode of this video series for AIhub.org,
> Stephen Hanson talks to Geoff Hinton about neural networks,
> backpropagation, overparameterization, digit recognition, voxel cells,
> syntax and semantics, Winograd sentences, and more.
>
>
>
> You can watch the discussion, and read the transcript, here:
>
>
> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>
>
>
> About AIhub:
>
> AIhub is a non-profit dedicated to connecting the AI community to the
> public by providing free, high-quality information through AIhub.org
> (https://aihub.org/). We help researchers publish the latest AI news,
> summaries of their work, opinion pieces, tutorials and more. We are
> supported by many leading scientific organizations in AI, namely AAAI,
> NeurIPS, ICML, AIJ/IJCAI, ACM SIGAI, EurAI/AICOMM, CLAIRE and RoboCup.
>
> Twitter: @aihuborg
>
>
>
>
>
>