Connectionists: Stephen Hanson in conversation with Geoff Hinton

Stephen José Hanson jose at rubic.rutgers.edu
Wed Feb 9 09:03:35 EST 2022


Gary,

You're minimizing my example, and frankly, I don't believe DL models 
will "characterize some aspects of psychology reasonably well"...  and 
before getting to your Netflix bingeing: one of the serious problems 
with DL models, when comparing them to neural recordings using 
correlation matrices, is that there seems to be some correspondence to 
be made, but sometimes little more than a Rorschach test.  In general, 
I think it will be hard to show good correspondence to decision 
processing, episodic memory, compound stimulus conditioning, and 
various perceptual illusions and transformations.  The DL focus on 
classification and translation has created models very unlikely to 
easily model cognitive and perceptual phenomena.  Models like 
Grossberg's are curated to account for specific effects, and over a 
lifetime have done a better job of making sense of psychological/neural 
phenomena than any other neural models I know about; whether one 
subscribes to the details of the modeling is another issue.

So in the perceptual/cognitive abstraction task I discussed, it is 
gob-smacking that JUST ADDING LAYERS solves this really critical 
failure of backpropagation, a failure barely noted by most of the 
neural network community, which is focused on better benchmarks.

As to Netflix titles: I agree, cognitive models should be adaptable 
and responsive to updates that create more predictive outcomes for the 
agent using them.   This in no way means the cognitive model must be 
symbolic or rule-based.   That was true in the 1980s and is perhaps 
truer today.

This is clearly a critical question for GPT models: what cognitive 
models are they building, or are they just high-dimensional 
phrase-structure blobs that do similarity analysis and return a nearby 
phrase-structure response that happens to sound good?
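As a caricature of that second reading (purely illustrative, not a 
claim about how GPT is actually implemented), here is what "similarity 
analysis over stored phrase structures" would amount to in its crudest 
form: embed phrases as vectors and answer a query by returning 
whichever stored phrase is nearest.  The corpus, embedding, and 
function names below are all made up for the sketch.

# Toy caricature of "similarity over phrase-structure blobs":
# embed stored phrases, return the nearest one to the query.
import numpy as np

corpus = [
    "the cat sat on the mat",
    "the woman lives in the house across the street",
    "the hamster wears a red hat",
]

def embed(phrase, vocab):
    """Crude bag-of-words vector; a stand-in for a high-dimensional embedding."""
    v = np.zeros(len(vocab))
    for w in phrase.split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

vocab = {w: i for i, w in enumerate(sorted({w for p in corpus for w in p.split()}))}
M = np.stack([embed(p, vocab) for p in corpus])

def nearest_phrase(query):
    q = embed(query, vocab)
    sims = (M @ q) / (np.linalg.norm(M, axis=1) * (np.linalg.norm(q) + 1e-9))
    return corpus[int(np.argmax(sims))]

print(nearest_phrase("who lives across the street"))
# returns a stored phrase that sounds relevant, with no model of the
# woman, the house, or the street

The point of the caricature is that such a system returns something 
that sounds relevant while having no model of the people or places 
involved; the open question is how far the real models go beyond this.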

Steve

On 2/7/22 5:07 PM, Gary Marcus wrote:
> Stephen,
>
> I don’t doubt for a minute that deep learning can characterize some 
> aspects of psychology reasonably well; but either it needs to expand 
> its borders or else be used in conjunction with other techniques. 
> Take for example the name of the new Netflix show
>
> The Woman in the House Across the Street from the Girl in the Window
>
> Most of us can infer, compositionally, from that unusually long noun 
> phrase, that the title is a description of a particular person, that the 
> title is not a complete sentence, and that the woman in question lives 
> in a house; we also infer that there is a second, distinct person 
> (likely a child) across the street, and so forth. We can also use some 
> knowledge of pragmatics to infer that the woman in question is likely 
> to be the protagonist in the show. Current systems still struggle with 
> that sort of thing.
>
> We can then watch the show (I watched a few minutes of Episode 1) and 
> quickly relate the title to the protagonist’s mental state, start to 
> develop a mental model of the protagonist’s relation to her new 
> neighbors, make inferences about whether certain choices appear to be 
> “within character”, empathize with the character or question her 
> judgements, etc, all with respect to a mental model that is rapidly 
> encoded and quickly modified.
>
> I think that an understanding of how people build and modify such 
> models would be extremely valuable (not just for fiction but for everyday 
> reality), but I don’t see how deep learning in its current form gives 
> us much purchase on that. There is plenty of precedent for the kind of 
> mental processes I am sketching (e.g., Walter Kintsch’s work on text 
> comprehension; Kamp/Kratzer/Heim work on discourse representation, 
> etc) from psychological and linguistic perspectives, but almost no 
> current contact in the neural network community with these 
> well-attested psychological processes.
>
> Gary
>
>> On Feb 7, 2022, at 6:01 AM, Stephen José Hanson 
>> <jose at rubic.rutgers.edu <mailto:jose at rubic.rutgers.edu>> wrote:
>>
>> Gary,
>>
>> This is one of the first posts of yours, that I can categorically 
>> agree with!
>>
>> I think cognitive models can be built through *some* training regime or 
>> focused sampling or architecture or something of that sort, but not 
>> explicitly, for example.
>>
>> The other fundamental cognitive/perceptual capability in this context 
>> is the ability of neural networks to do what Shepard (1970; Garner, 
>> 1970s) had modeled as separable perceptual processing (finding 
>> parts) and integral perceptual processing (finding covariance and 
>> structure).
>>
>> Shepard argued these fundamental perceptual processes were dependent 
>> on development and learning.
>>
>> A task was created with a double dissociation of a categorization 
>> problem.  In one case, separable stimuli (in effect, uncorrelated 
>> features) were presented in a categorization task that required you 
>> to pay attention to at least two features at the same time to 
>> categorize correctly ("condensation").  In the other case, integral 
>> stimuli (in effect, correlated features) were presented in a 
>> categorization task that required you to ignore the correlation and 
>> categorize on one feature at a time ("filtration").  The result was 
>> that separable stimuli were learned more quickly in filtration tasks 
>> than integral stimuli in condensation tasks.  Non-intuitively, 
>> separable stimuli are learned more slowly in condensation tasks than 
>> integral stimuli in filtration tasks.  In other words, attention to 
>> feature structure could cause improvement in learning or 
>> interference.  Not that surprising.. however--
>>
>> In the 1980s, neural networks with single layers (backprop) *could not* 
>> replicate this simple result, indicating that the cognitive model was 
>> somehow inadequate.  Backprop simply learned ALL task/stimulus pairings 
>> at the same rate, ignoring the subtle but critical difference.  It failed.
>>
>> Recently we 
>> (https://www.frontiersin.org/articles/10.3389/fpsyg.2018.00374/full?&utm_source=Email_to_authors_&utm_medium=Email&utm_content=T1_11.5e1_author&utm_campaign=Email_publication&field=&journalName=Frontiers_in_Psychology&id=284733) 
>> were able to show that JUST BY ADDING LAYERS the DL model does match 
>> human performance.
>>
>> What are the layers doing?  We offer a possible explanation that 
>> needs testing.  Layers appear to create a type of buffer that allows 
>> the network to "curate" feature detectors that are spatially distant 
>> from the input (a conv layer, for example); this curation comes in 
>> various attentional forms (something that will appear in a new 
>> paper--not enough room here), which appears to qualitatively change 
>> the network's processing states and cognitive capabilities.  Well, 
>> that's the claim.
>>
>> The larger point is that architectures apparently interact with 
>> learning rules in ways that can cross this symbolic/neural River 
>> Styx without falling into it.
>>
>> Steve
>>
>>
>> On 2/5/22 10:38 AM, Gary Marcus wrote:
>>> There is no magic in understanding, just computation that has been 
>>> realized in the wetware of humans and that eventually can be 
>>> realized in machines. But understanding is not (just) learning.
>>>
>>> Understanding incorporates (or works in tandem with) learning - but 
>>> also, critically, in tandem with inference, /and the development and 
>>> maintenance of cognitive models/. Part of developing an understanding 
>>> of cats in general is to learn long-term knowledge about their 
>>> properties, both directly (e.g., through observation) and indirectly 
>>> (e.g., through learning facts about animals in general that can be 
>>> extended to cats), often through inference (if all animals have DNA, 
>>> and a cat is an animal, it must also have DNA). The understanding of 
>>> a particular cat also involves direct observation, but also 
>>> inference (e.g., one might surmise that the reason that Fluffy is 
>>> running about the room is that Fluffy suspects there is a mouse 
>>> stirring somewhere nearby). But all of that, I would say, is 
>>> subservient to the construction of cognitive models that can be 
>>> routinely updated (e.g., Fluffy is currently in the living room, 
>>> skittering about, perhaps looking for a mouse).
>>>
>>>  In humans, those dynamic, relational models, which form part of an 
>>> understanding, can support inference (if Fluffy is in the living 
>>> room, we can infer that Fluffy is not outside, not lost, etc). 
>>> Without such models - which I think represent a core part of 
>>> understanding - AGI is an unlikely prospect.
>>>
>>> Current neural networks, as it happens, are better at acquiring 
>>> long-term knowledge (cats have whiskers) than they are at 
>>> dynamically updating cognitive models in real-time. LLMs like GPT-3 
>>> etc lack the kind of dynamic model that I am describing. To a modest 
>>> degree they can approximate it on the basis of large samples of 
>>> texts, but their ultimate incoherence stems from the fact that they 
>>> do not have robust internal cognitive models that they can update on 
>>> the fly.
>>>
>>> Without such cognitive models you can still capture some aspects of 
>>> understanding (eg predicting that cats are likely to be furry), but 
>>> things fall apart quickly; inference is never reliable, and 
>>> coherence is fleeting.
>>>
>>> As a final note, one of the most foundational challenges in 
>>> constructing adequate cognitive models of the world is to have a 
>>> clear distinction between individuals and kinds; as I emphasized 20 
>>> years ago (in The Algebraic Mind), this has always been a weakness 
>>> in neural networks, and I don’t think that the type-token problem 
>>> has yet been solved.
>>>
>>> Gary
>>>
>>>
>>>> On Feb 5, 2022, at 01:31, Asim Roy <ASIM.ROY at asu.edu> wrote:
>>>>
>>>> 
>>>>
>>>> All,
>>>>
>>>> I think the broader question was “understanding.” Here are two 
>>>> Youtube videos showing simple robots “learning” to walk. They are 
>>>> purely physical systems. Do they “understand” anything – such as 
>>>> the need to go around an obstacle, jump over an obstacle, 
>>>> walk up and down stairs, and so on? By the way, they “learn” to 
>>>> do these things on their own, literally unsupervised, very much 
>>>> like babies. The basic question is: what is “understanding” if not 
>>>> “learning?” Is there some other mechanism (magic) at play in our 
>>>> brain that helps us “understand?”
>>>>
>>>> https://www.youtube.com/watch?v=gn4nRCC9TwQ 
>>>>
>>>> https://www.youtube.com/watch?v=8sO7VS3q8d0 
>>>>
>>>> Asim Roy
>>>>
>>>> Professor, Information Systems
>>>>
>>>> Arizona State University
>>>>
>>>> Lifeboat Foundation Bios: Professor Asim Roy 
>>>> <https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=>
>>>>
>>>> Asim Roy | iSearch (asu.edu) 
>>>> <https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=>
>>>>
>>>> *From:* Ali Minai <minaiaa at gmail.com>
>>>> *Sent:* Friday, February 4, 2022 11:38 PM
>>>> *To:* Asim Roy <ASIM.ROY at asu.edu>
>>>> *Cc:* Gary Marcus <gary.marcus at nyu.edu>; Danko Nikolic 
>>>> <danko.nikolic at gmail.com>; Brad Wyble <bwyble at gmail.com>; 
>>>> connectionists at mailman.srv.cs.cmu.edu; AIhub <aihuborg at gmail.com>
>>>> *Subject:* Re: Connectionists: Stephen Hanson in conversation with 
>>>> Geoff Hinton
>>>>
>>>> Asim
>>>>
>>>> Of course there's nothing magical about understanding, and the mind 
>>>> has to emerge from the physical system, but our AI models at this 
>>>> point are not even close to realizing how that happens. We are, at 
>>>> best, simulating a superficial approximation of a few parts of the 
>>>> real thing. A single, integrated system where all the aspects of 
>>>> intelligence emerge from the same deep, well-differentiated 
>>>> physical substrate is far beyond our capacity. Paying more 
>>>> attention to neurobiology will be essential to get there, but so 
>>>> will paying attention to development - both physical and cognitive 
>>>> - and evolution. The configuration of priors by evolution is key to 
>>>> understanding how real intelligence learns so quickly and from so 
>>>> little. This is not an argument for using genetic algorithms to 
>>>> design our systems, just for understanding the tricks evolution has 
>>>> used and replicating them by design. Development is more feasible 
>>>> to do computationally, but hardly any models have looked at it 
>>>> except in a superficial sense. Nature creates basic intelligence 
>>>> not so much by configuring functions by explicit training as by 
>>>> tweaking, modulating, ramifying, and combining existing ones in a 
>>>> multi-scale self-organization process. We then learn much more 
>>>> complicated things (like playing chess) by exploiting that 
>>>> substrate, and using explicit instruction or learning by practice. 
>>>> The fundamental lesson of complex systems is that complexity is 
>>>> built in stages - each level exploiting the organization of the 
>>>> level below it. We see it in evolution, development, societal 
>>>> evolution, the evolution of technology, etc. Our approach in AI, in 
>>>> contrast, is to initialize a giant, naive system and train it to do 
>>>> something really complicated - but really specific - by training 
>>>> the hell out of it. Sure, now we do build many systems on top of 
>>>> pre-trained models like GPT-3 and BERT, which is better, but those 
>>>> models were again trained by the same none-to-all process I decried 
>>>> above. Contrast that with how humans acquire language, and how they 
>>>> integrate it into their *entire* perceptual, cognitive, and 
>>>> behavioral repertoire, not focusing just on this or that task. The 
>>>> age of symbolic AI may have passed, but the reductionistic mindset 
>>>> has not. We cannot build minds by chopping them into separate verticals.
>>>>
>>>> FTR, I'd say that the emergence of models such as GLOM and Hawkins 
>>>> and Ahmed's "thousand brains" is a hopeful sign. They may not be 
>>>> "right", but they are, I think, looking in the right direction. 
>>>> With a million miles to go!
>>>>
>>>> Ali
>>>>
>>>> *Ali A. Minai, Ph.D.*
>>>> Professor and Graduate Program Director
>>>> Complex Adaptive Systems Lab
>>>> Department of Electrical Engineering & Computer Science
>>>>
>>>> 828 Rhodes Hall
>>>>
>>>> University of Cincinnati
>>>> Cincinnati, OH 45221-0030
>>>>
>>>>
>>>> Phone: (513) 556-4783
>>>> Fax: (513) 556-7326
>>>> Email: Ali.Minai at uc.edu <mailto:Ali.Minai at uc.edu>
>>>> minaiaa at gmail.com <mailto:minaiaa at gmail.com>
>>>>
>>>> WWW: https://eecs.ceas.uc.edu/~aminai/ 
>>>>
>>>> On Fri, Feb 4, 2022 at 2:42 AM Asim Roy <ASIM.ROY at asu.edu 
>>>> <mailto:ASIM.ROY at asu.edu>> wrote:
>>>>
>>>>     First of all, the brain is a physical system. There is no
>>>>     “magic” inside the brain that does the “understanding” part.
>>>>     Take for example learning to play tennis. You hit a few balls -
>>>>     some the right way and some wrong – but you fairly quickly
>>>>     learn to hit them right most of the time. So there is obviously
>>>>     some simulation going on in the brain about hitting the ball in
>>>>     different ways and “learning” its consequences. What you are
>>>>     calling “understanding” is really these simulations about
>>>>     different scenarios. It’s also very similar to augmentation
>>>>     used to train image recognition systems where you rotate
>>>>     images, obscure parts and so on, so that you still can say it’s
>>>>     a cat even though you see only the cat’s face or whiskers or a
>>>>     cat flipped on its back. So, if the following questions relate
>>>>     to “understanding,” you can easily resolve this by simulating
>>>>     such scenarios when “teaching” the system. There’s nothing
>>>>     “magical” about “understanding.” As I said, bear in mind that
>>>>     the brain, after all, is a physical system and “teaching” and
>>>>     “understanding” is embodied in that physical system, not
>>>>     outside it. So “understanding” is just part of “learning,”
>>>>     nothing more.
>>>>
>>>>     DANKO:
>>>>
>>>>     What would happen to the hat if the hamster rolls on its back?
>>>>     (Would the hat fall off?)
>>>>
>>>>     What would happen to the red hat when the hamster enters its
>>>>     lair? (Would the hat fall off?)
>>>>
>>>>     What would happen to that hamster when it goes foraging? (Would
>>>>     the red hat have an influence on finding food?)
>>>>
>>>>     What would happen in a situation of being chased by a predator?
>>>>     (Would it be easier for predators to spot the hamster?)
>>>>
>>>>     Asim Roy
>>>>
>>>>     Professor, Information Systems
>>>>
>>>>     Arizona State University
>>>>
>>>>     Lifeboat Foundation Bios: Professor Asim Roy
>>>>     <https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=>
>>>>
>>>>     Asim Roy | iSearch (asu.edu)
>>>>     <https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=>
>>>>
>>>>     *From:* Gary Marcus <gary.marcus at nyu.edu
>>>>     <mailto:gary.marcus at nyu.edu>>
>>>>     *Sent:* Thursday, February 3, 2022 9:26 AM
>>>>     *To:* Danko Nikolic <danko.nikolic at gmail.com
>>>>     <mailto:danko.nikolic at gmail.com>>
>>>>     *Cc:* Asim Roy <ASIM.ROY at asu.edu <mailto:ASIM.ROY at asu.edu>>;
>>>>     Geoffrey Hinton <geoffrey.hinton at gmail.com
>>>>     <mailto:geoffrey.hinton at gmail.com>>; AIhub <aihuborg at gmail.com
>>>>     <mailto:aihuborg at gmail.com>>;
>>>>     connectionists at mailman.srv.cs.cmu.edu
>>>>     <mailto:connectionists at mailman.srv.cs.cmu.edu>
>>>>     *Subject:* Re: Connectionists: Stephen Hanson in conversation
>>>>     with Geoff Hinton
>>>>
>>>>     Dear Danko,
>>>>
>>>>     Well said. I had a somewhat similar response to Jeff Dean’s
>>>>     2021 TED talk, in which he said (paraphrasing from memory,
>>>>     because I don’t remember the precise words) that the famous 200
>>>>     Quoc Le unsupervised model
>>>>     [https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf]
>>>>     had learned the concept of a cat. In reality the model had
>>>>     clustered together some catlike images based on the image
>>>>     statistics that it had extracted, but it was a long way from a
>>>>     full, counterfactual-supporting concept of a cat, much as you
>>>>     describe below.
>>>>
>>>>     I fully agree with you that the reason for even having a
>>>>     semantics is as you put it, "to 1) learn with a few examples
>>>>     and 2) apply the knowledge to a broad set of situations.” GPT-3
>>>>     sometimes gives the appearance of having done so, but it falls
>>>>     apart under close inspection, so the problem remains unsolved.
>>>>
>>>>     Gary
>>>>
>>>>         On Feb 3, 2022, at 3:19 AM, Danko Nikolic
>>>>         <danko.nikolic at gmail.com <mailto:danko.nikolic at gmail.com>>
>>>>         wrote:
>>>>
>>>>         G. Hinton wrote: "I believe that any reasonable person
>>>>         would admit that if you ask a neural net to draw a picture
>>>>         of a hamster wearing a red hat and it draws such a picture,
>>>>         it understood the request."
>>>>
>>>>         I would like to suggest why drawing a hamster with a
>>>>         red hat does not necessarily imply understanding of the
>>>>         statement "hamster wearing a red hat".
>>>>
>>>>         To understand "hamster wearing a red hat" would mean
>>>>         inferring, in newly emerging situations of this hamster,
>>>>         all the real-life implications that the red hat brings to
>>>>         the little animal.
>>>>
>>>>         What would happen to the hat if the hamster rolls on its
>>>>         back? (Would the hat fall off?)
>>>>
>>>>         What would happen to the red hat when the hamster enters
>>>>         its lair? (Would the hat fall off?)
>>>>
>>>>         What would happen to that hamster when it goes foraging?
>>>>         (Would the red hat have an influence on finding food?)
>>>>
>>>>         What would happen in a situation of being chased by a
>>>>         predator? (Would it be easier for predators to spot the
>>>>         hamster?)
>>>>
>>>>         ...and so on.
>>>>
>>>>         Countless many questions can be asked. One has understood
>>>>         "hamster wearing a red hat" only if one can answer
>>>>         reasonably well many of such real-life relevant questions.
>>>>         Similarly, a student has understood materials in a class
>>>>         only if they can apply the materials in real-life
>>>>         situations (e.g., applying Pythagoras' theorem). If a
>>>>         student gives a correct answer to a multiple choice
>>>>         question, we don't know whether the student understood the
>>>>         material or whether this was just rote learning (often, it
>>>>         is rote learning).
>>>>
>>>>         I also suggest that understanding also comes together with
>>>>         effective learning: We store new information in such a way
>>>>         that we can recall it later and use it effectively  i.e.,
>>>>         make good inferences in newly emerging situations based on
>>>>         this knowledge.
>>>>
>>>>         In short: Understanding makes us humans able to 1) learn
>>>>         with a few examples and 2) apply the knowledge to a broad
>>>>         set of situations.
>>>>
>>>>         No neural network today has such capabilities and we don't
>>>>         know how to give them such capabilities. Neural networks
>>>>         need large amounts of training examples that cover a large
>>>>         variety of situations and then the networks can only deal
>>>>         with what the training examples have already covered.
>>>>         Neural networks cannot extrapolate in that 'understanding'
>>>>         sense.
>>>>
>>>>         I suggest that understanding truly extrapolates from a
>>>>         piece of knowledge. It is not about satisfying a task such
>>>>         as translation between languages or drawing hamsters with
>>>>         hats. It is how you got the capability to complete the
>>>>         task: Did you only have a few examples that covered
>>>>         something different but related and then you extrapolated
>>>>         from that knowledge? If yes, this is going in the direction
>>>>         of understanding. Have you seen countless examples and then
>>>>         interpolated among them? Then perhaps it is not understanding.
>>>>
>>>>         So, for the case of drawing a hamster wearing a red hat,
>>>>         understanding perhaps would have taken place if the
>>>>         following happened before that:
>>>>
>>>>         1) first, the network learned about hamsters (not many
>>>>         examples)
>>>>
>>>>         2) after that the network learned about red hats (outside
>>>>         the context of hamsters and without many examples)
>>>>
>>>>         3) finally the network learned about drawing (outside of
>>>>         the context of hats and hamsters, not many examples)
>>>>
>>>>         After that, the network is asked to draw a hamster with a
>>>>         red hat. If it does it successfully, maybe we have started
>>>>         cracking the problem of understanding.
>>>>
>>>>         Note also that this requires the network to learn
>>>>         sequentially without exhibiting catastrophic forgetting of
>>>>         the previous knowledge, which is possibly also a
>>>>         consequence of human learning by understanding.
>>>>
>>>>         Danko
>>>>
>>>>         Dr. Danko Nikolić
>>>>         www.danko-nikolic.com
>>>>         https://www.linkedin.com/in/danko-nikolic/
>>>>
>>>>         --- A progress usually starts with an insight ---
>>>>
>>>>
>>>>
>>>>         On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <ASIM.ROY at asu.edu
>>>>         <mailto:ASIM.ROY at asu.edu>> wrote:
>>>>
>>>>             Without getting into the specific dispute between Gary
>>>>             and Geoff, I think with approaches similar to GLOM, we
>>>>             are finally headed in the right direction. There’s
>>>>             plenty of neurophysiological evidence for single-cell
>>>>             abstractions and multisensory neurons in the brain,
>>>>             which one might claim correspond to symbols. And I
>>>>             think we can finally reconcile the decades old dispute
>>>>             between Symbolic AI and Connectionism.
>>>>
>>>>             GARY: (Your GLOM, which as you know I praised publicly,
>>>>             is in many ways an effort to wind up with encodings
>>>>             that effectively serve as symbols in exactly that way,
>>>>             guaranteed to serve as consistent representations of
>>>>             specific concepts.)
>>>>
>>>>             GARY: I have /never/ called for dismissal of neural
>>>>             networks, but rather for some hybrid between the two
>>>>             (as you yourself contemplated in 1991); the point of
>>>>             the 2001 book was to characterize exactly where
>>>>             multilayer perceptrons succeeded and broke down, and
>>>>             where symbols could complement them.
>>>>
>>>>             Asim Roy
>>>>
>>>>             Professor, Information Systems
>>>>
>>>>             Arizona State University
>>>>
>>>>             Lifeboat Foundation Bios: Professor Asim Roy
>>>>             <https://urldefense.proofpoint.com/v2/url?u=https-3A__lifeboat.com_ex_bios.asim.roy&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=oDRJmXX22O8NcfqyLjyu4Ajmt8pcHWquTxYjeWahfuw&e=>
>>>>
>>>>             Asim Roy | iSearch (asu.edu)
>>>>             <https://urldefense.proofpoint.com/v2/url?u=https-3A__isearch.asu.edu_profile_9973&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=waSKY67JF57IZXg30ysFB_R7OG9zoQwFwxyps6FbTa1Zh5mttxRot_t4N7mn68Pj&s=jCesWT7oGgX76_y7PFh4cCIQ-Ife-esGblJyrBiDlro&e=>
>>>>
>>>>             *From:* Connectionists
>>>>             <connectionists-bounces at mailman.srv.cs.cmu.edu
>>>>             <mailto:connectionists-bounces at mailman.srv.cs.cmu.edu>>
>>>>             *On Behalf Of *Gary Marcus
>>>>             *Sent:* Wednesday, February 2, 2022 1:26 PM
>>>>             *To:* Geoffrey Hinton <geoffrey.hinton at gmail.com
>>>>             <mailto:geoffrey.hinton at gmail.com>>
>>>>             *Cc:* AIhub <aihuborg at gmail.com
>>>>             <mailto:aihuborg at gmail.com>>;
>>>>             connectionists at mailman.srv.cs.cmu.edu
>>>>             <mailto:connectionists at mailman.srv.cs.cmu.edu>
>>>>             *Subject:* Re: Connectionists: Stephen Hanson in
>>>>             conversation with Geoff Hinton
>>>>
>>>>             Dear Geoff, and interested others,
>>>>
>>>>             What, for example, would you make of a system that
>>>>             often drew the red-hatted hamster you requested, and
>>>>             perhaps a fifth of the time gave you utter nonsense? 
>>>>             Or say one that you trained to create birds but
>>>>             sometimes output stuff like this:
>>>>
>>>>             <image001.png>
>>>>
>>>>             One could
>>>>
>>>>             a. avert one’s eyes and deem the anomalous outputs
>>>>             irrelevant
>>>>
>>>>             or
>>>>
>>>>             b. wonder if it might be possible that sometimes the
>>>>             system gets the right answer for the wrong reasons (eg
>>>>             partial historical contingency), and wonder whether
>>>>             another approach might be indicated.
>>>>
>>>>             Benchmarks are harder than they look; most of the field
>>>>             has come to recognize that. The Turing Test has turned
>>>>             out to be a lousy measure of intelligence, easily
>>>>             gamed. It has turned out empirically that the Winograd
>>>>             Schema Challenge did not measure common sense as well
>>>>             as Hector might have thought. (As it happens, I am a
>>>>             minor coauthor of a very recent review on this very
>>>>             topic: https://arxiv.org/abs/2201.02387)
>>>>             But its conquest in no way means machines now have
>>>>             common sense; many people from many different
>>>>             perspectives recognize that (including, e.g., Yann
>>>>             LeCun, who generally tends to be more aligned with you
>>>>             than with me).
>>>>
>>>>             So: on the goalpost of the Winograd schema, I was
>>>>             wrong, and you can quote me; but what you said about me
>>>>             and machine translation remains your invention, and it
>>>>             is inexcusable that you simply ignored my 2019
>>>>             clarification. On the essential goal of trying to reach
>>>>             meaning and understanding, I remain unmoved; the
>>>>             problem remains unsolved.
>>>>
>>>>             All of the problems LLMs have with coherence,
>>>>             reliability, truthfulness, misinformation, etc stand
>>>>             witness to that fact. (Their persistent inability to
>>>>             filter out toxic and insulting remarks stems from the
>>>>             same.) I am hardly the only person in the field to see
>>>>             that progress on any given benchmark does not
>>>>             inherently mean that the deep underlying problems have
>>>>             been solved. You, yourself, in fact, have occasionally made
>>>>             that point.
>>>>
>>>>             With respect to embeddings: Embeddings are very good
>>>>             for natural language /processing/; but NLP is not the
>>>>             same as NL/U/ – when it comes to /understanding/, their
>>>>             worth is still an open question. Perhaps they will turn
>>>>             out to be necessary; they clearly aren’t sufficient. In
>>>>             their extreme, they might even collapse into being
>>>>             symbols, in the sense of uniquely identifiable
>>>>             encodings, akin to the ASCII code, in which a specific
>>>>             set of numbers stands for a specific word or concept.
>>>>             (Wouldn’t that be ironic?)
>>>>
>>>>             (Your GLOM, which as you know I praised publicly, is in
>>>>             many ways an effort to wind up with encodings that
>>>>             effectively serve as symbols in exactly that way,
>>>>             guaranteed to serve as consistent representations of
>>>>             specific concepts.)
>>>>
>>>>             Notably absent from your email is any kind of apology
>>>>             for misrepresenting my position. It’s fine to say that
>>>>             “many people thirty years ago once thought X” and
>>>>             another to say “Gary Marcus said X in 2015”, when I
>>>>             didn’t. I have consistently felt throughout our
>>>>             interactions that you have mistaken me for Zenon
>>>>             Pylyshyn; indeed, you once (at NeurIPS 2014) apologized
>>>>             to me for having made that error. I am still not he.
>>>>
>>>>             Which maybe connects to the last point; if you read my
>>>>             work, you would see thirty years of arguments
>>>>             /for/ neural networks, just not in the way that you
>>>>             want them to exist. I have ALWAYS argued that there is
>>>>             a role for them;  characterizing me as a person
>>>>             “strongly opposed to neural networks” misses the whole
>>>>             point of my 2001 book, which was subtitled “Integrating
>>>>             Connectionism and Cognitive Science.”
>>>>
>>>>             In the last two decades or so you have insisted (for
>>>>             reasons you have never fully clarified, so far as I
>>>>             know) on abandoning symbol-manipulation, but the
>>>>             reverse is not the case: I have /never/ called for
>>>>             dismissal of neural networks, but rather for some
>>>>             hybrid between the two (as you yourself contemplated in
>>>>             1991); the point of the 2001 book was to characterize
>>>>             exactly where multilayer perceptrons succeeded and
>>>>             broke down, and where symbols could complement them.
>>>>             It’s a rhetorical trick (which is what the previous
>>>>             thread was about) to pretend otherwise.
>>>>
>>>>             Gary
>>>>
>>>>                 On Feb 2, 2022, at 11:22, Geoffrey Hinton
>>>>                 <geoffrey.hinton at gmail.com
>>>>                 <mailto:geoffrey.hinton at gmail.com>> wrote:
>>>>
>>>>                 
>>>>
>>>>                 Embeddings are just vectors of soft feature
>>>>                 detectors and they are very good for NLP. The quote
>>>>                 on my webpage from Gary's 2015 chapter implies the
>>>>                 opposite.
>>>>
>>>>                 A few decades ago, everyone I knew then would have
>>>>                 agreed that the ability to translate a sentence
>>>>                 into many different languages was strong evidence
>>>>                 that you understood it.
>>>>
>>>>                 But once neural networks could do that, their
>>>>                 critics moved the goalposts. An exception is Hector
>>>>                 Levesque who defined the goalposts more sharply by
>>>>                 saying that the ability to get pronoun references
>>>>                 correct in Winograd sentences is a crucial test.
>>>>                 Neural nets are improving at that but still have
>>>>                 some way to go. Will Gary agree that when they can
>>>>                 get pronoun references correct in Winograd
>>>>                 sentences they really do understand? Or does he
>>>>                 want to reserve the right to weasel out of that too?
>>>>
>>>>                 Some people, like Gary, appear to be
>>>>                 strongly opposed to neural networks because they do
>>>>                 not fit their preconceived notions of how the mind
>>>>                 should work.
>>>>
>>>>                 I believe that any reasonable person would admit
>>>>                 that if you ask a neural net to draw a picture of a
>>>>                 hamster wearing a red hat and it draws such a
>>>>                 picture, it understood the request.
>>>>
>>>>                 Geoff
>>>>
>>>>                 On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus
>>>>                 <gary.marcus at nyu.edu <mailto:gary.marcus at nyu.edu>>
>>>>                 wrote:
>>>>
>>>>                     Dear AI Hub, cc: Steven Hanson and Geoffrey
>>>>                     Hinton, and the larger neural network community,
>>>>
>>>>                     There has been a lot of recent discussion on
>>>>                     this list about framing and scientific
>>>>                     integrity. Often the first step in
>>>>                     restructuring narratives is to bully and
>>>>                     dehumanize critics. The second is to
>>>>                     misrepresent their position. People in
>>>>                     positions of power are sometimes tempted to do
>>>>                     this.
>>>>
>>>>                     The Hinton-Hanson interview that you just
>>>>                     published is a real-time example of just that.
>>>>                     It opens with a needless and largely
>>>>                     content-free personal attack on a single
>>>>                     scholar (me), with the explicit intention of
>>>>                     discrediting that person. Worse, the only
>>>>                     substantive thing it says is false.
>>>>
>>>>                     Hinton says “In 2015 he [Marcus] made a
>>>>                     prediction that computers wouldn’t be able to
>>>>                     do machine translation.”
>>>>
>>>>                     I never said any such thing.
>>>>
>>>>                     What I predicted, rather, was that multilayer
>>>>                     perceptrons, as they existed then, would not
>>>>                     (on their own, absent other mechanisms)
>>>>                     /understand/ language. Seven years later, they
>>>>                     still haven’t, except in the most superficial way.
>>>>
>>>>                     I made no comment whatsoever about machine
>>>>                     translation, which I view as a separate
>>>>                     problem, solvable to a certain degree by
>>>>                     correspondence without semantics.
>>>>
>>>>                     I specifically tried to clarify Hinton’s
>>>>                     confusion in 2019, but, disappointingly, he has
>>>>                     continued to purvey misinformation despite that
>>>>                     clarification. Here is what I wrote privately
>>>>                     to him then, which should have put the matter
>>>>                     to rest:
>>>>
>>>>                     You have taken a single out of context quote
>>>>                     [from 2015] and misrepresented it. The quote,
>>>>                     which you have prominently displayed at the
>>>>                     bottom of your own web page, says:
>>>>
>>>>                     Hierarchies of features are less suited to
>>>>                     challenges such as language, inference, and
>>>>                     high-level planning. For example, as Noam
>>>>                     Chomsky famously pointed out, language is
>>>>                     filled with sentences you haven't seen
>>>>                     before. Pure classifier systems don't know what
>>>>                     to do with such sentences. The talent of
>>>>                     feature detectors -- in  identifying which
>>>>                     member of some category something belongs to --
>>>>                     doesn't translate into understanding
>>>>                     novel  sentences, in which each sentence has
>>>>                     its own unique meaning.
>>>>
>>>>                     It does /not/ say "neural nets would not be
>>>>                     able to deal with novel sentences"; it says
>>>>                     that hierarchies of feature detectors (on their
>>>>                     own, if you read the context of the essay)
>>>>                     would have trouble /understanding
>>>>                     /novel sentences.
>>>>
>>>>                     Google Translate does not yet /understand/ the
>>>>                     content of the sentences it translates. It
>>>>                     cannot reliably answer questions about who did
>>>>                     what to whom, or why, it cannot infer the order
>>>>                     of the events in paragraphs, it can't determine
>>>>                     the internal consistency of those events, and
>>>>                     so forth.
>>>>
>>>>                     Since then, a number of scholars, such as the
>>>>                     computational linguist Emily Bender, have
>>>>                     made similar points, and indeed current LLM
>>>>                     difficulties with misinformation, incoherence
>>>>                     and fabrication all follow from these concerns.
>>>>                     Quoting from Bender’s prizewinning 2020 ACL
>>>>                     article on the matter with Alexander Koller,
>>>>                     https://aclanthology.org/2020.acl-main.463.pdf,
>>>>                     also emphasizing issues of understanding and
>>>>                     meaning:
>>>>
>>>>                     /The success of the large neural language
>>>>                     models on many NLP tasks is exciting. However,
>>>>                     we find that these successes sometimes lead to
>>>>                     hype in which these models are being described
>>>>                     as “understanding” language or capturing
>>>>                     “meaning”. In this position paper, we argue
>>>>                     that a system trained only on form has a priori
>>>>                     no way to learn meaning. .. a clear
>>>>                     understanding of the distinction between form
>>>>                     and meaning will help guide the field towards
>>>>                     better science around natural language
>>>>                     understanding. /
>>>>
>>>>                     Her later article with Gebru on language models
>>>>                     as “stochastic parrots” is in some ways an
>>>>                     extension of this point; machine translation
>>>>                     requires mimicry, true understanding (which is
>>>>                     what I was discussing in 2015) requires
>>>>                     something deeper than that.
>>>>
>>>>                     Hinton’s intellectual error here is in equating
>>>>                     machine translation with the deeper
>>>>                     comprehension that robust natural language
>>>>                     understanding will require; as Bender and
>>>>                     Koller observed, the two appear not to be the
>>>>                     same. (There is a longer discussion of the
>>>>                     relation between language understanding and
>>>>                     machine translation, and why the latter has
>>>>                     turned out to be more approachable than the
>>>>                     former, in my 2019 book with Ernest Davis).
>>>>
>>>>                     More broadly, Hinton’s ongoing dismissiveness
>>>>                     of research from perspectives other than his
>>>>                     own (e.g., linguistics) has done the field a
>>>>                     disservice.
>>>>
>>>>                     As Herb Simon once observed, science does not
>>>>                     have to be zero-sum.
>>>>
>>>>                     Sincerely,
>>>>
>>>>                     Gary Marcus
>>>>
>>>>                     Professor Emeritus
>>>>
>>>>                     New York University
>>>>
>>>>                         On Feb 2, 2022, at 06:12, AIhub
>>>>                         <aihuborg at gmail.com
>>>>                         <mailto:aihuborg at gmail.com>> wrote:
>>>>
>>>>                         
>>>>
>>>>                         Stephen Hanson in conversation with Geoff
>>>>                         Hinton
>>>>
>>>>                         In the latest episode of this video series
>>>>                         for AIhub.org,
>>>>                         Stephen Hanson talks to  Geoff Hinton about
>>>>                         neural networks, backpropagation,
>>>>                         overparameterization, digit recognition,
>>>>                         voxel cells, syntax and semantics, Winograd
>>>>                         sentences, and more.
>>>>
>>>>                         You can watch the discussion, and read the
>>>>                         transcript, here:
>>>>
>>>>                         https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>>>>
>>>>                         About AIhub:
>>>>
>>>>                         AIhub is a non-profit dedicated to
>>>>                         connecting the AI community to the public
>>>>                         by providing free, high-quality information
>>>>                         through AIhub.org
>>>>                         (https://aihub.org/).
>>>>                         We help researchers publish the latest AI
>>>>                         news, summaries of their work, opinion
>>>>                         pieces, tutorials and more.  We are
>>>>                         supported by many leading scientific
>>>>                         organizations in AI, namely AAAI
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__aaai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=wBvjOWTzEkbfFAGNj9wOaiJlXMODmHNcoWO5JYHugS0&e=>,
>>>>                         NeurIPS
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__neurips.cc_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=3-lOHXyu8171pT_UE9hYWwK6ft4I-cvYkuX7shC00w0&e=>,
>>>>                         ICML
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__icml.cc_imls_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=JJyjwIpPy9gtKrZzBMbW3sRMh3P3Kcw-SvtxG35EiP0&e=>,
>>>>                         AIJ
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=>/IJCAI
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__www.journals.elsevier.com_artificial-2Dintelligence&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=eWrRCVWlcbySaH3XgacPpi0iR0-NDQYCLJ1x5yyMr8U&e=>,
>>>>                         ACM SIGAI
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=http-3A__sigai.acm.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=7rC6MJFaMqOms10EYDQwfnmX-zuVNhu9fz8cwUwiLGQ&e=>,
>>>>                         EurAI/AICOMM, CLAIRE
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__claire-2Dai.org_&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=66ZofDIhuDba6Fb0LhlMGD3XbBhU7ez7dc3HD5-pXec&e=>
>>>>                         and RoboCup
>>>>                         <https://urldefense.proofpoint.com/v2/url?u=https-3A__www.robocup.org__&d=DwMFaQ&c=slrrB7dE8n7gBJbeO0g-IQ&r=wQR1NePCSj6dOGDD0r6B5Kn1fcNaTMg7tARe7TdEDqQ&m=yl7-VPSvMrHWYKZFtKdFpThQ9UTb2jW14grhVOlAwV21R4FwPri0ROJ-uFdMqHy1&s=bBI6GRq--MHLpIIahwoVN8iyXXc7JAeH3kegNKcFJc0&e=>.
>>>>
>>>>                         Twitter: @aihuborg
>>>>
>>>>
>>>>
>> -- 
>
-- 

