Connectionists: Stephen Hanson in conversation with Geoff Hinton

Daniel Polani daniel.polani at gmail.com
Tue Feb 8 08:00:10 EST 2022


Hi Ali, and all,

all excellent points. However, I think there is something more insidious
and implicit about symbols that shows they are not quite as "pure" as we
would like them to be. We think of symbols as almost irreducible entities.
But even when we embark on that most stringent symbol-management game
called "mathematics", it turns out that one can dig deeper and deeper in
trying to anchor them in some a priori axioms; yet these axioms are
actually extracted from our expertise about the real world; they are our
crystallized expertise about how the workings of the real world can be
simplified (in the case of, say, Euclidean geometry). And we know well that
this game does not always have a foregone conclusion (as non-Euclidean
geometry ended up demonstrating).

Even when we push further, at some point, when it seems we have reached the
ultimate grounds, we get something like Goedel's theorems: while we think
we intuitively understand what it means for something to be "true", it
turns out that it is not something ultimately grounded in the ground rules
of the logical system that we impose on the symbols. It is clear that we
still have some "embodied" intuition to which the symbols, even in their
purest form, do not give us proper access. In short, there is a clear
tension/dichotomy between "pure" and "embodied" symbols.

To vary Einstein's dictum: as far as symbols are "pure", they do not refer
to reality; and as far as they refer to reality, they are not "pure".

In practice, then, I believe symbols in the brain of a living being always
carry this "primeval sludge" with them. I think this is what makes symbols
so tricky to handle in an artificial system (and I am emphatically not
restricting this argument to NNs). I personally do not think that we can
handle them satisfactorily without somehow bringing embodiment into the
picture; but that's just my personal guess.

- Daniel

On Tue, Feb 8, 2022 at 7:25 AM Ali Minai <minaiaa at gmail.com> wrote:

> Hi Gary
>
> Thanks for your reply. I'll think more about your points. I do think that,
> to understand the human mind, we should start with vertebrates, which is
> why I suggested fish. At least for the motor system - which is part of the
> mind - we have learned a lot from lampreys (e.g. Sten Grillner's work and
> that beautiful lamprey-salamander model by Ijspeert et al.), and it has
> taught us a lot about locomotion in other animals, including mammals. The
> principles clearly generalize, though the complexity increases a lot.
> Insects too are very interesting. After all, they are our ancestors too.
>
> I don't agree that we can posit a clear transition from deep cognitive
> models in humans to none below that in the phylogenetic tree. Chimpanzees
> and macaques certainly show some evidence, and there's no reason to think
> that it's a step change rather than a highly nonlinear continuum. And even
> though what we might (simplistically) call System 2 aspects of cognition
> are minimally present in other mammals, their precursors must be.
>
> My point about cats and symbols was not regarding whether cats are aware
> of symbols, but that symbols emerge naturally from the physics of their
> brains. Behaviors that require some small degree of symbolic processing
> exist in mammals other than humans (e.g., transitive inference and
> landmark-based navigation in rats); such processing is better seen as an
> emergent property of brains than as an attribute to be explicitly built
> into neural models by us. Once we have a sufficiently brain-like neural
> model, symbolic processing will already be there.
>
> I agree with you completely that we are far from understanding some of the
> most fundamental principles of the brain, but even more importantly, we are
> not even looking in the right direction. I'm hoping to lay out my arguments
> about all this in more detail in some other form.
>
> Best
> Ali
>
>
> PS: I had inadvertently posted my reply to Gary's message only to him. I
> should have posted it to everyone, so here it is.
>
>
> *Ali A. Minai, Ph.D.*
> Professor and Graduate Program Director
> Complex Adaptive Systems Lab
> Department of Electrical Engineering & Computer Science
> 828 Rhodes Hall
> University of Cincinnati
> Cincinnati, OH 45221-0030
>
>
> Phone: (513) 556-4783
> Fax: (513) 556-7326
> Email: Ali.Minai at uc.edu
>           minaiaa at gmail.com
>
> WWW: https://eecs.ceas.uc.edu/~aminai/ <http://www.ece.uc.edu/%7Eaminai/>
>
>
> On Mon, Feb 7, 2022 at 12:28 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
>> Ali,
>>
>>
>> It’s useful to think about animals, but I really wouldn’t start with
>> fish; it’s not clear that their ecological niche demands anything
>> significant in the way of extrapolation, causal reasoning, or
>> compositionality. There is good evidence elsewhere in the animal world for
>> extrapolation of functions that may be innate (eg solar azimuth in bees),
>> and causal reasoning (eg  tool use in ravens, various primates, and
>> octopi). It’s still not clear to me how much hierarchical representation
>> (critical to AGI) exists outside of humans, though; the ability to
>> construct rich new cognitive models may also be unique to us.
>>
>>
>> In any case it matters not in the least whether the average cat or human
>> *cares* about symbols, any more than it matters whether the average
>> animal understands digestion; only a tiny fraction of the creatures on this
>> planet have any real understanding of their internal workings.
>>
>>
>> My overall feeling is that we are a really, really long way from
>> understanding the neural basis of higher-level cognition, and that AI is
>> going to need to muddle through on its own, for another decade or two.
>>
>>
>> I do fully agree with your conclusion, though, that "AI today is driven
>> more by habit and the incentives of the academic and corporate marketplaces
>> than by a deep, long-term view of AI as a great exploratory project in
>> fundamental science." Let's hope that changes.
>>
>>
>>  Gary
>>
>> On Feb 6, 2022, at 13:19, Ali Minai <minaiaa at gmail.com> wrote:
>>
>> 
>>
>> Gary,
>>
>> That’s a very interesting and accurate list of capabilities that a
>> general intelligent system must have and that our AI does not. Of course,
>> the list is familiar to me from having read your book. However, I have a
>> somewhat different take on this whole thing.
>>
>>
>>
>> All the things we discuss here – symbols/no symbols, parts/wholes,
>> supervised/unsupervised, token/type, etc., are useful categories and
>> distinctions for our analysis of the problem, and are partly a result of
>> the historical evolution of the field of AI in particular and of philosophy
>> in general. The categories are not wrong in any way, of course, but they
>> are posterior to the actual system – good for describing and analyzing it,
>> and for validating our versions of it (which is how you use them). I think
>> they are less useful as prescriptions for how to build our AI systems.  If
>> intelligent systems did not already exist and we were building them from
>> scratch (please ignore the impossibility of that), having a list of “must
>> haves” would be great. But intelligent systems already exist – from humans
>> to fish – and they already have these capacities to a greater or lesser
>> degree because of the physics of their biology. A cat’s intelligence does
>> not care whether it has symbols or not, and nor does mine or yours.
>> Whatever we describe as symbolic processing post-facto has already been
>> done by brains for at least tens of millions of years. Instead of getting
>> caught up in “how to add symbols into our neural models”, we should be
>> investigating how what we see as symbolic processing emerges from animal
>> brains, and then replicate those brains to the degree necessary. If we can
>> do that, symbolic processing will already be present. But it cannot be done
>> piece by piece. It must take the integrity of the whole brain and the body
>> it is part of, and its environment, into account. That’s why I think that a
>> much better – though a very long – route to AI is to start by understanding
>> how a fish brain makes the intelligence of a fish possible, and then boot
>> up our knowledge across phylogenetic stages: bottom-up reverse engineering
>> rather than top-down engineering. That’s the way Nature built up to human
>> intelligence, and we will succeed only by reverse engineering it. Of
>> course, we can do it much faster and with shortcuts because we are
>> intelligent, purposive agents, but working top-down by building piecewise
>> systems that satisfy a list of attributes will not get us there. Among
>> other things, those pieces will be impossible to integrate into the kind of
>> intelligence that can have those general models of the world that you
>> rightly point to as being necessary.
>>
>>
>>
>> I think that one thing that has been a great boon to the AI enterprise
>> has also been one of the greatest impediments to its complete success, and
>> that is the “computationalization” of intelligence. On the one hand,
>> thinking of intelligence computationally allows us to describe it
>> abstractly and in a principled, formal way. It also resonates with the fact
>> that we are trying to implement intelligence through computational
>> machines. But, on the flip side, this view of intelligence divorces it from
>> its physics – from the fact that real intelligence in animals emerges from
>> the physics of the physical system. That system is not a collection of its
>> capabilities; rather, those capabilities are immanent in it by virtue of
>> its physics. When we try to build those capabilities computationally, i.e.,
>> through code, we are making the same error that the practitioners of
>> old-style “symbolic AI” made – what I call the “professors are smarter than
>> Nature” error, i.e., the idea that we are going to enumerate (or describe)
>> all the things that underlie intelligence and implement them one by one
>> until we get complete intelligence. We will never be able to enumerate all
>> those capabilities, and will never be able to get to that complete
>> intelligence. The only difference between us and the “symbolists” of yore
>> is that we are replacing giant LISP and Prolog programs with giant neural
>> networks. Otherwise, we are using our models exactly as they were trying to
>> use their models, and we will fail just as they did unless we get back to
>> biology and the real thing.
>>
>>
>>
>> I will say again that the way we do AI today is driven more by habit and
>> the incentives of the academic and corporate marketplaces than by a deep,
>> long-term view of AI as a great exploratory project in fundamental science.
>> We are just building AI to drive our cars, translate our documents, write
>> our reports, and do our shopping. What that will teach us about actual
>> intelligence is just incidental.
>>
>>
>>
>> My apologies too for a long response.
>>
>> Ali
>>
>>
>> *Ali A. Minai, Ph.D.*
>> Professor and Graduate Program Director
>> Complex Adaptive Systems Lab
>> Department of Electrical Engineering & Computer Science
>> 828 Rhodes Hall
>> University of Cincinnati
>> Cincinnati, OH 45221-0030
>>
>> Phone: (513) 556-4783
>> Fax: (513) 556-7326
>> Email: Ali.Minai at uc.edu
>>           minaiaa at gmail.com
>>
>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>
>>
>> On Sun, Feb 6, 2022 at 9:42 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
>>
>>> Dear Asim,
>>>
>>>
>>> Sorry for a long answer to your short but rich questions.
>>>
>>>    - Yes, memory in my view has to be part of the answer to the
>>>    type-token problem. Symbol systems encoded in memory allow a natural way to
>>>    set up records, and something akin to that seems necessary. Pure multilayer
>>>    perceptrons struggle with type-token distinctions precisely because they
>>>    lack such records. On the positive side, I see more and more movement
>>>    towards record-like stores (e.g., with key-value stores in memory networks), and I
>>>    think that is an important and necessary step, very familiar from the
>>>    symbol-manipulating playbook, sometimes implemented in new ways.
>>>    - But ultimately, handling the type-token distinction requires
>>>    considerable inferential overhead beyond the memory representation of a
>>>    record per se.  How do you determine when to denote something (e.g.
>>>    Felix) as an instance, and of which kinds (cat, animal etc), and how do you
>>>    leverage that knowledge once you determine it?
>>>    - In the limit we reason about types vs tokens in fairly subtle
>>>    ways, e.g. in guessing whether a glass that we put down at a party is
>>>    likely to be ours. The reverse is also important: we need to be able to
>>>    learn particular traits for individuals and not erroneously generalize
>>>    them to the class; if my aunt Esther wins the lottery, one shouldn’t infer
>>>    that all of my aunts, or all of my relatives, or all adult females have
>>>    won the lottery. So you need both representational machinery that can
>>>    distinguish e.g. my cat from cats in general and reasoning machinery to
>>>    decide at what level certain learned knowledge should inhere. (I had a
>>>    whole chapter about this sort of thing in The Algebraic Mind if you are
>>>    interested, and Mike Mozer had a book about types and tokens in neural
>>>    networks in the mid 1990s. A toy sketch of the kind of record-keeping I
>>>    have in mind appears just after this list.)
>>>    - Yes, part (though not all!) of what we do when we set up cognitive
>>>    models in our heads is to track particular individuals and their
>>>    properties. If you only had to correlate kinds (cats) and their properties
>>>    (have fur) you could maybe get away with a multilayer perceptron, but once
>>>    you need to track individuals, yes, you really need some kind of
>>>    memory-based records.
>>>    - As far as I can tell, Transformers can sometimes approximate some
>>>    of this for a few sentences, but not over long stretches.
>>>
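>>> To make the record-keeping idea concrete, here is a toy sketch (purely
>>> illustrative; the classes and the lookup rule are placeholders of my own, not
>>> a proposed architecture): generic knowledge attaches to kinds, episodic facts
>>> attach to individual tokens, and lookup consults the token before inheriting
>>> from the kind, so that Aunt Esther's lottery win never migrates to the class.
>>>
>>> # Toy type/token store: kinds carry generic properties, tokens carry facts.
>>> class Kind:
>>>     def __init__(self, name, parent=None):
>>>         self.name, self.parent, self.properties = name, parent, {}
>>>
>>> class Token:
>>>     def __init__(self, name, kind):
>>>         self.name, self.kind, self.facts = name, kind, {}
>>>
>>> def lookup(token, prop):
>>>     # Prefer the individual's own facts, then inherit up the kind hierarchy.
>>>     if prop in token.facts:
>>>         return token.facts[prop]
>>>     kind = token.kind
>>>     while kind is not None:
>>>         if prop in kind.properties:
>>>             return kind.properties[prop]
>>>         kind = kind.parent
>>>     return None
>>>
>>> animal = Kind("animal"); animal.properties["has_dna"] = True
>>> cat = Kind("cat", parent=animal); cat.properties["has_fur"] = True
>>> felix = Token("Felix", cat)                  # a token of the kind "cat"
>>> aunt = Kind("aunt")
>>> esther = Token("Aunt Esther", aunt)
>>> esther.facts["won_lottery"] = True           # stored on the token only
>>>
>>> print(lookup(felix, "has_dna"))                            # True, inherited
>>> print(lookup(esther, "won_lottery"))                       # True, token-level fact
>>> print(lookup(Token("another aunt", aunt), "won_lottery"))  # None, not generalized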
>>>
>>> As a small terminological aside: for me, cognitive models ≠ cognitive
>>> modeling. Cognitive modeling is about building psychological or
>>> computational models of how people think, whereas what I mean by a cognitive
>>> model is a representation of eg the entities in some situation and the
>>> relations between those entities.
>>>
>>>
>>> To your closing question, none of us yet really knows how to build
>>> understanding into machines.  A solid type-token distinction, both in
>>> terms of representation and reasoning, is critical for general
>>> intelligence, but hardly sufficient. Personally, I think some minimal
>>> prerequisites would be:
>>>
>>>    - representations of space, time, causality, individuals, kinds,
>>>    persons, places, objects, etc.
>>>    - representations of abstractions that can hold over all entities in
>>>    a class
>>>    - compositionality (if we are talking about human-like understanding)
>>>    - capacity to construct and update cognitive models on the fly
>>>    - capacity to reason over entities in those models
>>>    - ability to learn about new entities and their properties
>>>
>>> Much of my last book (*Rebooting AI*, with Ernie Davis) is about the above
>>> list. The section in the language chapter on a children’s story in which a
>>> man has lost his wallet is an especially vivid worked example. Later
>>> chapters elaborate some of the challenges in representing space, time, and
>>> causality.
>>>
>>>
>>> Gary
>>>
>>>
>>> On Feb 5, 2022, at 18:58, Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>
>>> 
>>>
>>> Gary,
>>>
>>>
>>>
>>> I don’t get much into the type of cognitive modeling you are talking
>>> about, but I would guess that the type problem can generally be handled by
>>> neural network models and tokens can be resolved with some memory-based
>>> system. But to get to the heart of the question: is this what so-called
>>> “understanding” reduces to, computation-wise?
>>>
>>>
>>>
>>> Asim
>>>
>>>
>>>
>>> *From:* Gary Marcus <gary.marcus at nyu.edu>
>>> *Sent:* Saturday, February 5, 2022 8:39 AM
>>> *To:* Asim Roy <ASIM.ROY at asu.edu>
>>> *Cc:* Ali Minai <minaiaa at gmail.com>; Danko Nikolic <
>>> danko.nikolic at gmail.com>; Brad Wyble <bwyble at gmail.com>;
>>> connectionists at mailman.srv.cs.cmu.edu; AIhub <aihuborg at gmail.com>
>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with
>>> Geoff Hinton
>>>
>>>
>>>
>>> There is no magic in understanding, just computation that has been
>>> realized in the wetware of humans and that eventually can be realized in
>>> machines. But understanding is not (just) learning.
>>>
>>>
>>>
>>> Understanding incorporates (or works in tandem with) learning - but
>>> also, critically, in tandem with inference, *and the development and
>>> maintenance of cognitive models*.  Part of developing an understanding
>>> of cats in general is to learn long-term knowledge about their properties,
>>> both directly (e.g., through observation) and indirectly (e.g., through
>>> learning facts about animals in general that can be extended to cats),
>>> often through inference (if all animals have DNA, and a cat is an animal,
>>> it must also have DNA).   The understanding of a particular cat also
>>> involves direct observation, but also inference (eg  one might surmise
>>> that the reason that Fluffy is running about the room is that Fluffy
>>> suspects there is a mouse stirring somewhere nearby). *But all of that,
>>> I would say, is subservient to the construction of cognitive models that
>>> can be routinely updated *(e.g., Fluffy is currently in the living
>>> room, skittering about, perhaps looking for a mouse).
>>>
>>>
>>>
>>>  In humans, those dynamic, relational models, which form part of an
>>> understanding, can support inference (if Fluffy is in the living room, we
>>> can infer that Fluffy is not outside, not lost, etc). Without such models -
>>> which I think represent a core part of understanding - AGI is an unlikely
>>> prospect.
>>>
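>>> To make the notion of an updatable cognitive model slightly more concrete,
>>> here is a deliberately tiny sketch (the representation and method names are
>>> my own illustrative choices, not a claim about how such models are actually
>>> implemented): a store of entity-location facts that is revised as
>>> observations arrive and that licenses the "not outside" style of inference.
>>>
>>> # Tiny relational model: observations update it; exclusivity licenses inference.
>>> class CognitiveModel:
>>>     def __init__(self):
>>>         self.location = {}                 # entity -> currently believed place
>>>
>>>     def observe(self, entity, place):
>>>         self.location[entity] = place      # update the model on the fly
>>>
>>>     def could_be_at(self, entity, place):
>>>         known = self.location.get(entity)
>>>         if known is None:
>>>             return True                    # no information, cannot rule it out
>>>         return known == place              # one place at a time excludes the rest
>>>
>>> model = CognitiveModel()
>>> model.observe("Fluffy", "living room")
>>> print(model.could_be_at("Fluffy", "outside"))      # False: inferred, never observed
>>> print(model.could_be_at("Fluffy", "living room"))  # True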
>>>
>>>
>>> Current neural networks, as it happens, are better at acquiring
>>> long-term knowledge (cats have whiskers) than they are at dynamically
>>> updating cognitive models in real-time. LLMs like GPT-3 etc lack the kind
>>> of dynamic model that I am describing. To a modest degree they can
>>> approximate it on the basis of large samples of texts, but their ultimate
>>> incoherence stems from the fact that they do not have robust internal
>>> cognitive models that they can update on the fly.
>>>
>>>
>>>
>>> Without such cognitive models you can still capture some aspects of
>>> understanding (eg predicting that cats are likely to be furry), but things
>>> fall apart quickly; inference is never reliable, and coherence is fleeting.
>>>
>>>
>>>
>>> As a final note, one of the most foundational challenges in constructing
>>> adequate cognitive models of the world is to have a clear distinction
>>> between individuals and kinds; as I emphasized 20 years ago (in The
>>> Algebraic Mind), this has always been a weakness in neural networks, and I
>>> don’t think that the type-token problem has yet been solved.
>>>
>>>
>>>
>>> Gary
>>>
>>>
>>>
>>>
>>>
>>> On Feb 5, 2022, at 01:31, Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>
>>> 
>>>
>>> All,
>>>
>>>
>>>
>>> I think the broader question was “understanding.” Here are two YouTube
>>> videos showing simple robots “learning” to walk. They are purely physical
>>> systems. Do they “understand” anything – such as the need to go around an
>>> obstacle, jumping over an obstacle, walking up and down stairs and so on?
>>> By the way, they “learn” to do these things on their own, literally
>>> unsupervised, very much like babies. The basic question is: what is
>>> “understanding” if not “learning?” Is there some other mechanism (magic) at
>>> play in our brain that helps us “understand?”
>>>
>>>
>>>
>>> https://www.youtube.com/watch?v=gn4nRCC9TwQ
>>>
>>> https://www.youtube.com/watch?v=8sO7VS3q8d0
>>>
>>>
>>>
>>>
>>>
>>> Asim Roy
>>>
>>> Professor, Information Systems
>>>
>>> Arizona State University
>>>
>>> Lifeboat Foundation Bios: Professor Asim Roy
>>> <https://lifeboat.com/ex/bios.asim.roy>
>>>
>>> Asim Roy | iSearch (asu.edu)
>>> <https://isearch.asu.edu/profile/9973>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> *From:* Ali Minai <minaiaa at gmail.com>
>>> *Sent:* Friday, February 4, 2022 11:38 PM
>>> *To:* Asim Roy <ASIM.ROY at asu.edu>
>>> *Cc:* Gary Marcus <gary.marcus at nyu.edu>; Danko Nikolic <
>>> danko.nikolic at gmail.com>; Brad Wyble <bwyble at gmail.com>;
>>> connectionists at mailman.srv.cs.cmu.edu; AIhub <aihuborg at gmail.com>
>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with
>>> Geoff Hinton
>>>
>>>
>>>
>>> Asim
>>>
>>>
>>>
>>> Of course there's nothing magical about understanding, and the mind has
>>> to emerge from the physical system, but our AI models at this point are not
>>> even close to realizing how that happens. We are, at best, simulating a
>>> superficial approximation of a few parts of the real thing. A single,
>>> integrated system where all the aspects of intelligence emerge from the
>>> same deep, well-differentiated physical substrate is far beyond our
>>> capacity. Paying more attention to neurobiology will be essential to get
>>> there, but so will paying attention to development - both physical and
>>> cognitive - and evolution. The configuration of priors by evolution is key
>>> to understanding how real intelligence learns so quickly and from so
>>> little. This is not an argument for using genetic algorithms to design our
>>> systems, just for understanding the tricks evolution has used and
>>> replicating them by design. Development is more feasible to do
>>> computationally, but hardly any models have looked at it except in a
>>> superficial sense. Nature creates basic intelligence not so much by
>>> configuring functions by explicit training as by tweaking, modulating,
>>> ramifying, and combining existing ones in a multi-scale self-organization
>>> process. We then learn much more complicated things (like playing chess) by
>>> exploiting that substrate, and using explicit instruction or learning by
>>> practice. The fundamental lesson of complex systems is that complexity is
>>> built in stages - each level exploiting the organization of the level below
>>> it. We see it in evolution, development, societal evolution, the evolution
>>> of technology, etc. Our approach in AI, in contrast, is to initialize a
>>> giant, naive system and train it to do something really complicated - but
>>> really specific - by training the hell out of it. Sure, now we do build
>>> many systems on top of pre-trained models like GPT-3 and BERT, which is
>>> better, but those models were again trained by the same none-to-all process
>>> I decried above. Contrast that with how humans acquire language, and how
>>> they integrate it into their *entire* perceptual, cognitive, and behavioral
>>> repertoire, not focusing just on this or that task. The age of symbolic AI
>>> may have passed, but the reductionistic mindset has not. We cannot build
>>> a mind by chopping it into separate verticals.
>>>
>>>
>>>
>>> FTR, I'd say that the emergence of models such as GLOM and Hawkins and
>>> Ahmad's "thousand brains" is a hopeful sign. They may not be "right", but
>>> they are, I think, looking in the right direction. With a million miles to
>>> go!
>>>
>>>
>>>
>>> Ali
>>>
>>>
>>>
>>> *Ali A. Minai, Ph.D.*
>>> Professor and Graduate Program Director
>>> Complex Adaptive Systems Lab
>>> Department of Electrical Engineering & Computer Science
>>>
>>> 828 Rhodes Hall
>>>
>>> University of Cincinnati
>>> Cincinnati, OH 45221-0030
>>>
>>>
>>> Phone: (513) 556-4783
>>> Fax: (513) 556-7326
>>> Email: Ali.Minai at uc.edu
>>>           minaiaa at gmail.com
>>>
>>> WWW: https://eecs.ceas.uc.edu/~aminai/
>>>
>>>
>>>
>>>
>>>
>>> On Fri, Feb 4, 2022 at 2:42 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>
>>> First of all, the brain is a physical system. There is no “magic” inside
>>> the brain that does the “understanding” part. Take for example learning to
>>> play tennis. You hit a few balls - some the right way and some wrong – but
>>> you fairly quickly learn to hit them right most of the time. So there is
>>> obviously some simulation going on in the brain about hitting the ball in
>>> different ways and “learning” its consequences. What you are calling
>>> “understanding” is really these simulations about different scenarios. It’s
>>> also very similar to augmentation used to train image recognition systems
>>> where you rotate images, obscure parts and so on, so that you still can say
>>> it’s a cat even though you see only the cat’s face or whiskers or a cat
>>> flipped on its back. So, if the following questions relate to
>>> “understanding,” you can easily resolve this by simulating such scenarios
>>> when “teaching” the system. There’s nothing “magical” about
>>> “understanding.” As I said, bear in mind that the brain, after all, is a
>>> physical system, and “teaching” and “understanding” are embodied in that
>>> physical system, not outside it. So “understanding” is just part of
>>> “learning,” nothing more.
>>>
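>>> For concreteness, the kind of augmentation I have in mind looks roughly like
>>> the following (a minimal sketch using torchvision purely as a familiar
>>> illustration; the particular transforms and parameters are arbitrary choices):
>>>
>>> # Minimal augmentation pipeline: rotate, crop to a part, and obscure a patch.
>>> from torchvision import transforms
>>>
>>> augment = transforms.Compose([
>>>     transforms.RandomRotation(degrees=30),   # rotate, like the cat on its back
>>>     transforms.RandomResizedCrop(224),       # show only the face or whiskers
>>>     transforms.ToTensor(),
>>>     transforms.RandomErasing(p=0.5),         # obscure part of the image
>>> ])
>>> # augmented = augment(pil_image)  # applied to each training image on the fly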
>>>
>>>
>>> DANKO:
>>>
>>> What would happen to the hat if the hamster rolls on its back? (Would
>>> the hat fall off?)
>>>
>>> What would happen to the red hat when the hamster enters its lair?
>>> (Would the hat fall off?)
>>>
>>> What would happen to that hamster when it goes foraging? (Would the red
>>> hat have an influence on finding food?)
>>>
>>> What would happen in a situation of being chased by a predator? (Would
>>> it be easier for predators to spot the hamster?)
>>>
>>>
>>>
>>> Asim Roy
>>>
>>> Professor, Information Systems
>>>
>>> Arizona State University
>>>
>>> Lifeboat Foundation Bios: Professor Asim Roy
>>> <https://lifeboat.com/ex/bios.asim.roy>
>>>
>>> Asim Roy | iSearch (asu.edu)
>>> <https://isearch.asu.edu/profile/9973>
>>>
>>>
>>>
>>>
>>>
>>> *From:* Gary Marcus <gary.marcus at nyu.edu>
>>> *Sent:* Thursday, February 3, 2022 9:26 AM
>>> *To:* Danko Nikolic <danko.nikolic at gmail.com>
>>> *Cc:* Asim Roy <ASIM.ROY at asu.edu>; Geoffrey Hinton <
>>> geoffrey.hinton at gmail.com>; AIhub <aihuborg at gmail.com>;
>>> connectionists at mailman.srv.cs.cmu.edu
>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with
>>> Geoff Hinton
>>>
>>>
>>>
>>> Dear Danko,
>>>
>>>
>>>
>>> Well said. I had a somewhat similar response to Jeff Dean’s 2021 TED
>>> talk, in which he said (paraphrasing from memory, because I don’t remember
>>> the precise words) that the famous 2012 Quoc Le unsupervised model [
>>> https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf]
>>> had learned the concept of a cat. In reality the model had clustered
>>> together some catlike images based on the image statistics that it had
>>> extracted, but it was a long way from a full, counterfactual-supporting
>>> concept of a cat, much as you describe below.
>>>
>>>
>>>
>>> I fully agree with you that the reason for even having a semantics is as
>>> you put it, "to 1) learn with a few examples and 2) apply the knowledge to
>>> a broad set of situations.” GPT-3 sometimes gives the appearance of having
>>> done so, but it falls apart under close inspection, so the problem remains
>>> unsolved.
>>>
>>>
>>>
>>> Gary
>>>
>>>
>>>
>>> On Feb 3, 2022, at 3:19 AM, Danko Nikolic <danko.nikolic at gmail.com>
>>> wrote:
>>>
>>>
>>>
>>> G. Hinton wrote: "I believe that any reasonable person would admit that
>>> if you ask a neural net to draw a picture of a hamster wearing a red hat
>>> and it draws such a picture, it understood the request."
>>>
>>>
>>>
>>> I would like to suggest why drawing a hamster with a red hat does not
>>> necessarily imply understanding of the statement "hamster wearing a red
>>> hat".
>>>
>>> To understand "hamster wearing a red hat" would mean inferring, in
>>> newly emerging situations involving this hamster, all the real-life
>>> implications that the red hat brings to the little animal.
>>>
>>>
>>>
>>> What would happen to the hat if the hamster rolls on its back? (Would
>>> the hat fall off?)
>>>
>>> What would happen to the red hat when the hamster enters its lair?
>>> (Would the hat fall off?)
>>>
>>> What would happen to that hamster when it goes foraging? (Would the red
>>> hat have an influence on finding food?)
>>>
>>> What would happen in a situation of being chased by a predator? (Would
>>> it be easier for predators to spot the hamster?)
>>>
>>>
>>>
>>> ...and so on.
>>>
>>>
>>>
>>> Countless questions can be asked. One has understood "hamster
>>> wearing a red hat" only if one can answer reasonably well many such
>>> real-life relevant questions. Similarly, a student has understood the
>>> materials in a class only if they can apply them in real-life situations
>>> (e.g., applying Pythagoras' theorem). If a student gives a correct answer
>>> to a multiple choice question, we don't know whether the student understood
>>> the material or whether this was just rote learning (often, it is rote
>>> learning).
>>>
>>>
>>>
>>> I also suggest that understanding comes together with effective
>>> learning: we store new information in such a way that we can recall it
>>> later and use it effectively, i.e., make good inferences in newly emerging
>>> situations based on this knowledge.
>>>
>>>
>>>
>>> In short: Understanding makes us humans able to 1) learn with a few
>>> examples and 2) apply the knowledge to a broad set of situations.
>>>
>>>
>>>
>>> No neural network today has such capabilities and we don't know how to
>>> give them such capabilities. Neural networks need large amounts of
>>> training examples that cover a large variety of situations and then
>>> the networks can only deal with what the training examples have already
>>> covered. Neural networks cannot extrapolate in that 'understanding' sense.
>>>
>>>
>>>
>>> I suggest that understanding truly extrapolates from a piece of
>>> knowledge. It is not about satisfying a task such as translation between
>>> languages or drawing hamsters with hats. It is how you got the capability
>>> to complete the task: Did you only have a few examples that covered
>>> something different but related and then you extrapolated from that
>>> knowledge? If yes, this is going in the direction of understanding. Have
>>> you seen countless examples and then interpolated among them? Then perhaps
>>> it is not understanding.
>>>
>>>
>>>
>>> So, for the case of drawing a hamster wearing a red hat, understanding
>>> perhaps would have taken place if the following happened before that:
>>>
>>>
>>>
>>> 1) first, the network learned about hamsters (not many examples)
>>>
>>> 2) after that the network learned about red hats (outside the context of
>>> hamsters and without many examples)
>>>
>>> 3) finally the network learned about drawing (outside of the context of
>>> hats and hamsters, not many examples)
>>>
>>>
>>>
>>> After that, the network is asked to draw a hamster with a red hat. If it
>>> does it successfully, maybe we have started cracking the problem of
>>> understanding.
>>>
>>>
>>>
>>> Note also that this requires the network to learn sequentially without
>>> exhibiting catastrophic forgetting of the previous knowledge, which is
>>> possibly also a consequence of human learning by understanding.
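>>>
>>> Written out as an evaluation protocol, the test I am proposing might look
>>> roughly like this (pure pseudocode in Python form; the model interface and
>>> the judging function are hypothetical placeholders, not an existing benchmark):
>>>
>>> # Pseudocode sketch of the proposed test; "model" is any object exposing
>>> # learn / can_recognize / draw, all of which are hypothetical here.
>>> def understanding_test(model, hamster_examples, hat_examples, drawing_examples, judge):
>>>     # 1-3) learn each concept separately, from few examples, in sequence
>>>     for examples in (hamster_examples, hat_examples, drawing_examples):
>>>         model.learn(examples)
>>>
>>>     # earlier concepts must survive later learning (no catastrophic forgetting)
>>>     if not (model.can_recognize("hamster") and model.can_recognize("red hat")):
>>>         return False
>>>
>>>     # the composite request was never trained on directly
>>>     picture = model.draw("a hamster wearing a red hat")
>>>     return judge(picture)   # success would suggest extrapolation, not interpolation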
>>>
>>>
>>>
>>>
>>>
>>> Danko
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> Dr. Danko Nikolić
>>> www.danko-nikolic.com
>>> https://www.linkedin.com/in/danko-nikolic/
>>>
>>> --- A progress usually starts with an insight ---
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>
>>> Without getting into the specific dispute between Gary and Geoff, I
>>> think with approaches similar to GLOM, we are finally headed in the right
>>> direction. There’s plenty of neurophysiological evidence for single-cell
>>> abstractions and multisensory neurons in the brain, which one might claim
>>> correspond to symbols. And I think we can finally reconcile the decades old
>>> dispute between Symbolic AI and Connectionism.
>>>
>>>
>>>
>>> GARY: (Your GLOM, which as you know I praised publicly, is in many ways
>>> an effort to wind up with encodings that effectively serve as symbols in
>>> exactly that way, guaranteed to serve as consistent representations of
>>> specific concepts.)
>>>
>>> GARY: I have *never* called for dismissal of neural networks, but
>>> rather for some hybrid between the two (as you yourself contemplated in
>>> 1991); the point of the 2001 book was to characterize exactly where
>>> multilayer perceptrons succeeded and broke down, and where symbols could
>>> complement them.
>>>
>>>
>>>
>>> Asim Roy
>>>
>>> Professor, Information Systems
>>>
>>> Arizona State University
>>>
>>> Lifeboat Foundation Bios: Professor Asim Roy
>>> <https://lifeboat.com/ex/bios.asim.roy>
>>>
>>> Asim Roy | iSearch (asu.edu)
>>> <https://isearch.asu.edu/profile/9973>
>>>
>>>
>>>
>>>
>>>
>>> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
>>> Behalf Of *Gary Marcus
>>> *Sent:* Wednesday, February 2, 2022 1:26 PM
>>> *To:* Geoffrey Hinton <geoffrey.hinton at gmail.com>
>>> *Cc:* AIhub <aihuborg at gmail.com>; connectionists at mailman.srv.cs.cmu.edu
>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with
>>> Geoff Hinton
>>>
>>>
>>>
>>> Dear Geoff, and interested others,
>>>
>>>
>>>
>>> What, for example, would you make of a system that often drew the
>>> red-hatted hamster you requested, and perhaps a fifth of the time gave you
>>> utter nonsense?  Or say one that you trained to create birds but sometimes
>>> output stuff like this:
>>>
>>>
>>>
>>> <image001.png>
>>>
>>>
>>>
>>> One could
>>>
>>>
>>>
>>> a. avert one’s eyes and deem the anomalous outputs irrelevant
>>>
>>> or
>>>
>>> b. wonder if it might be possible that sometimes the system gets the
>>> right answer for the wrong reasons (eg partial historical contingency), and
>>> wonder whether another approach might be indicated.
>>>
>>>
>>>
>>> Benchmarks are harder than they look; most of the field has come to
>>> recognize that. The Turing Test has turned out to be a lousy measure of
>>> intelligence, easily gamed. It has turned out empirically that the Winograd
>>> Schema Challenge did not measure common sense as well as Hector might have
>>> thought. (As it happens, I am a minor coauthor of a very recent review on
>>> this very topic: https://arxiv.org/abs/2201.02387)
>>> But its conquest in no way means machines now have common sense; many
>>> people from many different perspectives recognize that (including, e.g.,
>>> Yann LeCun, who generally tends to be more aligned with you than with me).
>>>
>>>
>>>
>>> So: on the goalpost of the Winograd schema, I was wrong, and you can
>>> quote me; but what you said about me and machine translation remains your
>>> invention, and it is inexcusable that you simply ignored my 2019
>>> clarification. On the essential goal of trying to reach meaning and
>>> understanding, I remain unmoved; the problem remains unsolved.
>>>
>>>
>>>
>>> All of the problems LLMs have with coherence, reliability, truthfulness,
>>> misinformation, etc stand witness to that fact. (Their persistent inability
>>> to filter out toxic and insulting remarks stems from the same.) I am hardly
>>> the only person in the field to see that progress on any given benchmark
>>> does not inherently mean that the deep underlying problems have been solved.
>>> You, yourself, in fact, have occasionally made that point.
>>>
>>>
>>>
>>> With respect to embeddings: Embeddings are very good for natural
>>> language *processing*; but NLP is not the same as NL*U* – when it comes
>>> to *understanding*, their worth is still an open question. Perhaps they
>>> will turn out to be necessary; they clearly aren’t sufficient. In their
>>> extreme, they might even collapse into being symbols, in the sense of
>>> uniquely identifiable encodings, akin to the ASCII code, in which a
>>> specific set of numbers stands for a specific word or concept. (Wouldn’t
>>> that be ironic?)
>>>
>>>
>>>
>>> (Your GLOM, which as you know I praised publicly, is in many ways an
>>> effort to wind up with encodings that effectively serve as symbols in
>>> exactly that way, guaranteed to serve as consistent representations of
>>> specific concepts.)
>>>
>>>
>>>
>>> Notably absent from your email is any kind of apology for
>>> misrepresenting my position. It’s one thing to say that “many people thirty
>>> years ago once thought X” and another to say “Gary Marcus said X in 2015”,
>>> when I didn’t. I have consistently felt throughout our interactions that
>>> you have mistaken me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014)
>>> apologized to me for having made that error. I am still not he.
>>>
>>>
>>>
>>> Which maybe connects to the last point; if you read my work, you would
>>> see thirty years of arguments *for* neural networks, just not in the
>>> way that you want them to exist. I have ALWAYS argued that there is a role
>>> for them;  characterizing me as a person “strongly opposed to neural
>>> networks” misses the whole point of my 2001 book, which was subtitled
>>> “Integrating Connectionism and Cognitive Science.”
>>>
>>>
>>>
>>> In the last two decades or so you have insisted (for reasons you have
>>> never fully clarified, so far as I know) on abandoning symbol-manipulation,
>>> but the reverse is not the case: I have *never* called for dismissal of
>>> neural networks, but rather for some hybrid between the two (as you
>>> yourself contemplated in 1991); the point of the 2001 book was to
>>> characterize exactly where multilayer perceptrons succeeded and broke down,
>>> and where symbols could complement them. It’s a rhetorical trick (which is
>>> what the previous thread was about) to pretend otherwise.
>>>
>>>
>>>
>>> Gary
>>>
>>>
>>>
>>>
>>>
>>> On Feb 2, 2022, at 11:22, Geoffrey Hinton <geoffrey.hinton at gmail.com>
>>> wrote:
>>>
>>> 
>>>
>>> Embeddings are just vectors of soft feature detectors and they are very
>>> good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the
>>> opposite.
>>>
>>>
>>>
>>> A few decades ago, everyone I knew then would have agreed that the
>>> ability to translate a sentence into many different languages was strong
>>> evidence that you understood it.
>>>
>>>
>>>
>>> But once neural networks could do that, their critics moved the
>>> goalposts. An exception is Hector Levesque who defined the goalposts more
>>> sharply by saying that the ability to get pronoun references correct in
>>> Winograd sentences is a crucial test. Neural nets are improving at that but
>>> still have some way to go. Will Gary agree that when they can get pronoun
>>> references correct in Winograd sentences they really do understand? Or does
>>> he want to reserve the right to weasel out of that too?
>>>
>>>
>>>
>>> Some people, like Gary, appear to be strongly opposed to neural networks
>>> because they do not fit their preconceived notions of how the mind should
>>> work.
>>>
>>> I believe that any reasonable person would admit that if you ask a
>>> neural net to draw a picture of a hamster wearing a red hat and it draws
>>> such a picture, it understood the request.
>>>
>>>
>>>
>>> Geoff
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>>>
>>> Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger
>>> neural network community,
>>>
>>>
>>>
>>> There has been a lot of recent discussion on this list about framing and
>>> scientific integrity. Often the first step in restructuring narratives is
>>> to bully and dehumanize critics. The second is to misrepresent their
>>> position. People in positions of power are sometimes tempted to do this.
>>>
>>>
>>>
>>> The Hinton-Hanson interview that you just published is a real-time
>>> example of just that. It opens with a needless and largely content-free
>>> personal attack on a single scholar (me), with the explicit intention of
>>> discrediting that person. Worse, the only substantive thing it says is
>>> false.
>>>
>>>
>>>
>>> Hinton says “In 2015 he [Marcus] made a prediction that computers
>>> wouldn’t be able to do machine translation.”
>>>
>>>
>>>
>>> I never said any such thing.
>>>
>>>
>>>
>>> What I predicted, rather, was that multilayer perceptrons, as they
>>> existed then, would not (on their own, absent other mechanisms)
>>> *understand* language. Seven years later, they still haven’t, except in
>>> the most superficial way.
>>>
>>>
>>>
>>> I made no comment whatsoever about machine translation, which I view as
>>> a separate problem, solvable to a certain degree by correspondence without
>>> semantics.
>>>
>>>
>>>
>>> I specifically tried to clarify Hinton’s confusion in 2019, but,
>>> disappointingly, he has continued to purvey misinformation despite that
>>> clarification. Here is what I wrote privately to him then, which should
>>> have put the matter to rest:
>>>
>>>
>>>
>>> You have taken a single out of context quote [from 2015] and
>>> misrepresented it. The quote, which you have prominently displayed at the
>>> bottom on your own web page, says:
>>>
>>>
>>>
>>> Hierarchies of features are less suited to challenges such as language,
>>> inference, and high-level planning. For example, as Noam Chomsky famously
>>> pointed out, language is filled with sentences you haven't seen
>>> before. Pure classifier systems don't know what to do with such sentences.
>>> The talent of feature detectors -- in  identifying which member of some
>>> category something belongs to -- doesn't translate into understanding
>>> novel  sentences, in which each sentence has its own unique meaning.
>>>
>>>
>>>
>>> It does *not* say "neural nets would not be able to deal with novel
>>> sentences”; it says that hierarchies of feature detectors (on their own, if
>>> you read the context of the essay) would have trouble *understanding* novel sentences.
>>>
>>>
>>>
>>>
>>> Google Translate does not yet *understand* the content of the sentences
>>> it translates. It cannot reliably answer questions about who did what to
>>> whom, or why; it cannot infer the order of the events in paragraphs, it
>>> can’t determine the internal consistency of those events, and so forth.
>>>
>>>
>>>
>>> Since then, a number of scholars, such as the computational linguist
>>> Emily Bender, have made similar points, and indeed current LLM difficulties
>>> with misinformation, incoherence and fabrication all follow from these
>>> concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter
>>> with Alexander Koller, https://aclanthology.org/2020.acl-main.463.pdf,
>>> also emphasizing issues of understanding and meaning:
>>>
>>>
>>>
>>> *The success of the large neural language models on many NLP tasks is
>>> exciting. However, we find that these successes sometimes lead to hype in
>>> which these models are being described as “understanding” language or
>>> capturing “meaning”. In this position paper, we argue that a system trained
>>> only on form has a priori no way to learn meaning. .. a clear understanding
>>> of the distinction between form and meaning will help guide the field
>>> towards better science around natural language understanding. *
>>>
>>>
>>>
>>> Her later article with Gebru on language models as “stochastic parrots” is
>>> in some ways an extension of this point; machine translation requires
>>> mimicry; true understanding (which is what I was discussing in 2015)
>>> requires something deeper than that.
>>>
>>>
>>>
>>> Hinton’s intellectual error here is in equating machine translation with
>>> the deeper comprehension that robust natural language understanding will
>>> require; as Bender and Koller observed, the two appear not to be the same.
>>> (There is a longer discussion of the relation between language
>>> understanding and machine translation, and why the latter has turned out to
>>> be more approachable than the former, in my 2019 book with Ernest Davis).
>>>
>>>
>>>
>>> More broadly, Hinton’s ongoing dismissiveness of research from
>>> perspectives other than his own (e.g. linguistics) has done the field a
>>> disservice.
>>>
>>>
>>>
>>> As Herb Simon once observed, science does not have to be zero-sum.
>>>
>>>
>>>
>>> Sincerely,
>>>
>>> Gary Marcus
>>>
>>> Professor Emeritus
>>>
>>> New York University
>>>
>>>
>>>
>>> On Feb 2, 2022, at 06:12, AIhub <aihuborg at gmail.com> wrote:
>>>
>>> 
>>>
>>> Stephen Hanson in conversation with Geoff Hinton
>>>
>>>
>>>
>>> In the latest episode of this video series for AIhub.org,
>>> Stephen Hanson talks to Geoff Hinton about neural networks,
>>> backpropagation, overparameterization, digit recognition, voxel cells,
>>> syntax and semantics, Winograd sentences, and more.
>>>
>>>
>>>
>>> You can watch the discussion, and read the transcript, here:
>>>
>>>
>>> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>>>
>>>
>>>
>>> About AIhub:
>>>
>>> AIhub is a non-profit dedicated to connecting the AI community to the
>>> public by providing free, high-quality information through AIhub.org
>>> (https://aihub.org/). We help researchers publish the latest AI news,
>>> summaries of their work, opinion pieces, tutorials and more. We are
>>> supported by many leading scientific organizations in AI, namely AAAI,
>>> NeurIPS, ICML, AIJ/IJCAI, ACM SIGAI, EurAI/AICOMM, CLAIRE and RoboCup.
>>>
>>> Twitter: @aihuborg
>>>
>>>
>>>
>>>
>>>
>>>
>>>
>>>