Connectionists: Stephen Hanson in conversation with Geoff Hinton

Danko Nikolic danko.nikolic at gmail.com
Fri Jul 15 04:51:09 EDT 2022


Dear Thomas,

Thank you for reading the paper and for the comments.

I cite: "In my experience, supervised classification scales linearly in the
number of classes."
This would be good to quantify as a plot. Maybe a research paper would be a
good idea. The reason is that it seems that everyone else who tried to
quantify that relation found a power law. At this point, it would be
surprising to find a linear relationship. And it would probably make a well
read paper.

But please do not forget that my argument states that even a linear
relationship is not good enough to match biological brains. We need
something closer to a power law with an exponent of zero when it comes to
the model size, i.e., a constant number of parameters in the model. And we
need a linear relationship when it comes to learning time: each newly
learned object should need about as much learning effort as was needed for
each previous object.
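
To make the comparison concrete, here is a minimal sketch (in Python, with
hypothetical numbers rather than data from my paper) of how one could fit
both hypotheses to measured (number of classes, number of parameters) pairs
and read off the exponent:

import numpy as np

# Hypothetical measurements; in practice each point would come from the
# smallest model that reaches a fixed accuracy on that many classes.
classes = np.array([10., 20., 50., 100., 200., 500., 1000.])
params = np.array([1e5, 2.5e5, 9e5, 2.2e6, 5.5e6, 1.8e7, 4.5e7])

# Linear hypothesis: params ~ a * classes + b
a, b = np.polyfit(classes, params, 1)

# Power-law hypothesis: params ~ c * classes**k (a straight line in log-log)
k, log_c = np.polyfit(np.log(classes), np.log(params), 1)

print("linear fit:    params ~ %.0f * classes + %.0f" % (a, b))
print("power-law fit: params ~ %.0f * classes^%.2f" % (np.exp(log_c), k))

# An exponent k near 1 would support the linear claim; k clearly above 1
# means super-linear growth. My argument is that brains behave as if k were
# close to 0 for model size, with learning time growing only linearly.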

I cite: "The real world is not dominated by generalized XOR problems."
Agreed. And it is good so because generalize XOR scales worse than power
law. It scales exponentially! This a more agressive form of explosion than
power law.
Importantly, a generalized AND operation also scales exponentially (with a
smaller exponent, though). I guess we would agree that the real world
probably encouners a lot of AND problems. The only logical operaiton that
could be learned with a linear increase in the number of parameters was a
generalized OR. Finally, I foiund that a mixure of AND and OR resulted in a
power law-like scaling of the number of parameters. So, a mixture of AND
and OR seemed to scale as good (or as bad) as the real world. I have put
this information into Supplementary Materials.
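
For readers who want to probe the generalized XOR claim themselves, here is
a small sketch of the kind of experiment I mean (it is not the simulation
from the Supplementary Materials): for each n, search for the smallest
single-hidden-layer MLP that fits n-bit parity, i.e., generalized XOR,
perfectly.

import itertools
import numpy as np
from sklearn.neural_network import MLPClassifier

def min_hidden_units(n_bits, max_width=64, seed=0):
    # All 2**n binary input patterns and their parity labels (generalized XOR).
    X = np.array(list(itertools.product([0, 1], repeat=n_bits)), dtype=float)
    y = (X.sum(axis=1) % 2).astype(int)
    for width in range(1, max_width + 1):
        clf = MLPClassifier(hidden_layer_sizes=(width,), activation="tanh",
                            max_iter=5000, random_state=seed)
        clf.fit(X, y)
        if clf.score(X, y) == 1.0:  # fits every pattern
            return width
    return None

for n in range(2, 7):
    print(n, "bits:", min_hidden_units(n), "hidden units (single run)")

# A single failed run does not prove a width is insufficient (training can
# get stuck in a poor minimum), but the required capacity grows quickly
# with n, whereas a generalized OR of the same inputs stays easy at any n.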

The conclusion that I derived from those analyses is: connectionism cannot
scale sustainably to human (or animal) levels of intelligence. Therefore, I
hunted for an alternative paradigm.

Greetings,

Danko



Dr. Danko Nikolić
www.danko-nikolic.com
https://www.linkedin.com/in/danko-nikolic/
-- I wonder, how is the brain able to generate insight? --


On Fri, Jul 15, 2022 at 10:01 AM Dietterich, Thomas <tgd at oregonstate.edu>
wrote:

> Dear Danko,
>
>
>
> In my experience, supervised classification scales linearly in the number
> of classes. Of course it depends to some extent on how subtle the
> distinctions are between the different categories. The real world is not
> dominated by generalized XOR problems.
>
>
>
> --Tom
>
>
>
> Thomas G. Dietterich, Distinguished Professor Voice: 541-737-5559
>
> School of Electrical Engineering              FAX: 541-737-1300
>
>   and Computer Science                        URL:
> eecs.oregonstate.edu/~tgd
>
> US Mail: 1148 Kelley Engineering Center
>
> Office: 2067 Kelley Engineering Center
>
> Oregon State Univ., Corvallis, OR 97331-5501
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Danko Nikolic
> *Sent:* Thursday, July 14, 2022 09:17
> *To:* Grossberg, Stephen <steve at bu.edu>
> *Cc:* AIhub <aihuborg at gmail.com>; connectionists at mailman.srv.cs.cmu.edu
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
>
>
>
> Dear Steve,
>
>
>
> Thank you very much for your message and for the greetings. I will pass
> them on if an occasion arises.
>
>
>
> Regarding your question: The key problem I am trying to address and that,
> to the best of my knowledge, no connectionist system has been able to solve so
> far is that of scaling the system's intelligence. For example, if the
> system is able to correctly recognize 100 different objects, how many
> additional resources are needed to double that to 200? All the empirical
> data show that connectionist systems scale poorly: Some of the best systems
> we have require 500x more resources in order to increase the intelligence
> by only 2x. I document this problem in the manuscript and even run some
> simulations to show that the worst case occurs when connectionist systems
> need to solve a generalized XOR problem.
>
>
>
> In contrast, the biological brain scales well. This I also quantify in the
> paper.
>
>
>
> I will look at the publication that you mentioned. However, so far, I
> haven't seen a solution that scales well in intelligence.
>
>
>
> My argument is that transient selection of subnetworks with the help of the
> mentioned proteins is how intelligence scaling is achieved in biological
> brains.
>
>
>
> In short, intelligence scaling is the key problem that concerns me. I
> describe the intelligence scaling problem in more detail in this book that
> just came out a few weeks ago and that is written for practitioners in Data
> Science and AI: https://amzn.to/3IBxUpL
>
>
>
> I hope that this at least partly answers where I see the problems and what
> I am trying to solve.
>
>
>
> Greetings from Germany,
>
>
>
> Danko
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
>
> --- A progress usually starts with an insight ---
>
>
>
>
>
> On Thu, Jul 14, 2022 at 3:30 PM Grossberg, Stephen <steve at bu.edu> wrote:
>
> Dear Danko,
>
>
>
> I have just read your new article and would like to comment briefly about
> it.
>
>
>
> In your introductory remarks, you write:
>
>
>
> "However, connectionism did not yet produce a satisfactory explanation of
> how the mental emerges from the physical. A number of open problems
> remains ( 5,6,7,8). As a result, the explanatory gap between the mind and
> the brain remains wide open."
>
>
>
> I certainly believe that no theoretical explanation in science is ever
> complete. However, I also believe that "the explanatory gap between the
> mind and the brain" does not remain "wide open".
>
>
>
> My Magnum Opus, that was published in 2021, makes that belief clear in its
> title:
>
>
>
> *Conscious Mind, Resonant Brain: How Each Brain Makes a Mind*
>
>
>
> https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
>
>
>
> The book provides a self-contained and non-technical exposition in a
> conversational tone of many principled and unifying explanations of
> psychological and neurobiological data.
>
>
>
> In particular, it explains roles for the metabotropic glutamate receptors
> that you mention in your own work. See the text and figures around p. 521.
> This explanation unifies psychological, anatomical, neurophysiological,
> biophysical, and biochemical data about the processes under discussion.
>
>
>
> I have a very old-fashioned view about how to understand scientific
> theories. I get excited by theories that explain and predict more data than
> previous theories.
>
>
>
> Which of the data that I explain in my book, and support with quantitative
> computer simulations, can you also explain?
>
>
>
> What data can you explain, in the same quantitative sense, that you do not
> think the neural models in my book can explain?
>
>
>
> I would be delighted to discuss these issues further with you.
>
>
>
> If you are in touch with my old friend and esteemed colleague, Wolf
> Singer, please send him my warm regards. I cite the superb work that he and
> various of his collaborators have done in many places in my book.
>
>
>
> Best,
>
>
>
> Steve
>
>
>
> Stephen Grossberg
>
> http://en.wikipedia.org/wiki/Stephen_Grossberg
>
> http://scholar.google.com/citations?user=3BIV70wAAAAJ&hl=en
>
> https://youtu.be/9n5AnvFur7I
>
> https://www.youtube.com/watch?v=_hBye6JQCh4
>
> https://www.amazon.com/Conscious-Mind-Resonant-Brain-Makes/dp/0190070552
>
>
> Wang Professor of Cognitive and Neural Systems
>
> Director, Center for Adaptive Systems
> Professor Emeritus of Mathematics & Statistics,
>
>        Psychological & Brain Sciences, and Biomedical Engineering
>
> Boston University
> sites.bu.edu/steveg
> steve at bu.edu
>
>
> ------------------------------
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> on
> behalf of Danko Nikolic <danko.nikolic at gmail.com>
> *Sent:* Thursday, July 14, 2022 6:05 AM
> *To:* Gary Marcus <gary.marcus at nyu.edu>
> *Cc:* connectionists at mailman.srv.cs.cmu.edu <
> connectionists at mailman.srv.cs.cmu.edu>; AIhub <aihuborg at gmail.com>
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
>
>
> Dear Gary and everyone,
>
>
>
> I am continuing the discussion from where we left off a few months ago.
> Back then, some of us agreed that the problem of understanding remains
> unsolved.
>
>
>
> As a reminder, the challenge for connectionism was to 1) learn with few
> examples and 2) apply the knowledge to a broad set of situations.
>
>
>
> I am happy to announce that I have now finished a draft of a paper in
> which I propose how the brain is able to achieve that. The manuscript
> requires a bit of patience for two reasons: one is that the reader may be
> exposed for the first time to certain aspects of brain physiology. The
> second reason is that it may take some effort to understand the
> counterintuitive implications of the new ideas (this requires a different
> way of thinking than what we are used to based on connectionism).
>
>
>
> In short, I am suggesting that instead of the connectionist paradigm, we
> adopt transient selection of subnetworks. The mechanisms that transiently
> select brain subnetworks are distributed all over the nervous system and, I
> argue, are our main machinery for thinking/cognition. The surprising
> outcome is that neural activation, which was central in connectionism, now
> plays only a supportive role, while the real 'workers' within the brain are
> the mechanisms for transient selection of subnetworks.
>
>
>
> I also explain how I think transient selection achieves learning with only
> a few examples and how the learned knowledge can be applied to a
> broad set of situations.
>
>
>
> The manuscript is made available to everyone and can be downloaded here:
> https://bit.ly/3IFs8Ug
>
> (I apologize for the neuroscience lingo, which I tried to minimize.)
>
>
>
> It will likely take a wide effort to implement these concepts as an AI
> technology, provided my ideas do not have a major flaw in the first place.
> Does anyone see a flaw?
>
>
>
> Thanks.
>
>
>
> Danko
>
>
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
>
>
>
>
>
> On Thu, Feb 3, 2022 at 5:25 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
> Dear Danko,
>
>
>
> Well said. I had a somewhat similar response to Jeff Dean’s 2021 TED talk,
> in which he said (paraphrasing from memory, because I don’t remember the
> precise words) that the famous 2012 Quoc Le unsupervised model [
> https://static.googleusercontent.com/media/research.google.com/en//archive/unsupervised_icml2012.pdf]
> had learned the concept of a cat. In reality, the model had clustered
> together some catlike images based on the image statistics that it had
> extracted, but it was a long way from a full, counterfactual-supporting
> concept of a cat, much as you describe below.
>
>
>
> I fully agree with you that the reason for even having a semantics is as
> you put it, "to 1) learn with a few examples and 2) apply the knowledge to
> a broad set of situations.” GPT-3 sometimes gives the appearance of having
> done so, but it falls apart under close inspection, so the problem remains
> unsolved.
>
>
>
> Gary
>
>
>
> On Feb 3, 2022, at 3:19 AM, Danko Nikolic <danko.nikolic at gmail.com> wrote:
>
>
>
> G. Hinton wrote: "I believe that any reasonable person would admit that if
> you ask a neural net to draw a picture of a hamster wearing a red hat and
> it draws such a picture, it understood the request."
>
>
>
> I would like to suggest why drawing a hamster with a red hat does not
> necessarily imply understanding of the statement "hamster wearing a red
> hat".
>
> To understand "hamster wearing a red hat" would mean inferring, in
> newly emerging situations of this hamster, all the real-life
> implications that the red hat brings to the little animal.
>
>
>
> What would happen to the hat if the hamster rolls on its back? (Would the
> hat fall off?)
>
> What would happen to the red hat when the hamster enters its lair? (Would
> the hat fall off?)
>
> What would happen to that hamster when it goes foraging? (Would the red
> hat have an influence on finding food?)
>
> What would happen in a situation of being chased by a predator? (Would it
> be easier for predators to spot the hamster?)
>
>
>
> ...and so on.
>
>
>
> Countless questions can be asked. One has understood "hamster wearing
> a red hat" only if one can answer reasonably well many such real-life
> relevant questions. Similarly, a student has understood the material in a class
> only if they can apply it in real-life situations (e.g.,
> applying the Pythagorean theorem). If a student gives a correct answer to a
> multiple choice question, we don't know whether the student understood the
> material or whether this was just rote learning (often, it is rote
> learning).
>
>
>
> I also suggest that understanding comes together with effective
> learning: We store new information in such a way that we can recall it
> later and use it effectively, i.e., make good inferences in newly emerging
> situations based on this knowledge.
>
>
>
> In short: Understanding makes us humans able to 1) learn with a few
> examples and 2) apply the knowledge to a broad set of situations.
>
>
>
> No neural network today has such capabilities and we don't know how to
> give them such capabilities. Neural networks need large amounts of
> training examples that cover a large variety of situations and then
> the networks can only deal with what the training examples have already
> covered. Neural networks cannot extrapolate in that 'understanding' sense.
>
>
>
> I suggest that understanding truly extrapolates from a piece of knowledge.
> It is not about satisfying a task such as translation between languages or
> drawing hamsters with hats. It is how you got the capability to complete
> the task: Did you only have a few examples that covered something different
> but related and then you extrapolated from that knowledge? If yes, this is
> going in the direction of understanding. Have you seen countless examples
> and then interpolated among them? Then perhaps it is not understanding.
>
>
>
> So, for the case of drawing a hamster wearing a red hat, understanding
> perhaps would have taken place if the following happened before that:
>
>
>
> 1) first, the network learned about hamsters (not many examples)
>
> 2) after that the network learned about red hats (outside the context of
> hamsters and without many examples)
>
> 3) finally the network learned about drawing (outside of the context of
> hats and hamsters, not many examples)
>
>
>
> After that, the network is asked to draw a hamster with a red hat. If it
> does it successfully, maybe we have started cracking the problem of
> understanding.
>
>
>
> Note also that this requires the network to learn sequentially without
> exhibiting catastrophic forgetting of the previous knowledge, which is
> possibly also a consequence of human learning by understanding.
>
>
>
>
>
> Danko
>
>
>
>
>
>
>
>
>
>
>
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
>
> --- A progress usually starts with an insight ---
>
>
>
>
>
>
>
>
>
> On Thu, Feb 3, 2022 at 9:55 AM Asim Roy <ASIM.ROY at asu.edu> wrote:
>
> Without getting into the specific dispute between Gary and Geoff, I think
> with approaches similar to GLOM, we are finally headed in the right
> direction. There’s plenty of neurophysiological evidence for single-cell
> abstractions and multisensory neurons in the brain, which one might claim
> correspond to symbols. And I think we can finally reconcile the decades old
> dispute between Symbolic AI and Connectionism.
>
>
>
> GARY: (Your GLOM, which as you know I praised publicly, is in many ways an
> effort to wind up with encodings that effectively serve as symbols in
> exactly that way, guaranteed to serve as consistent representations of
> specific concepts.)
>
> GARY: I have *never* called for dismissal of neural networks, but rather
> for some hybrid between the two (as you yourself contemplated in 1991); the
> point of the 2001 book was to characterize exactly where multilayer
> perceptrons succeeded and broke down, and where symbols could complement
> them.
>
>
>
> Asim Roy
>
> Professor, Information Systems
>
> Arizona State University
>
> Lifeboat Foundation Bios: Professor Asim Roy
>
> Asim Roy | iSearch (asu.edu)
>
>
>
>
>
> *From:* Connectionists <connectionists-bounces at mailman.srv.cs.cmu.edu> *On
> Behalf Of *Gary Marcus
> *Sent:* Wednesday, February 2, 2022 1:26 PM
> *To:* Geoffrey Hinton <geoffrey.hinton at gmail.com>
> *Cc:* AIhub <aihuborg at gmail.com>; connectionists at mailman.srv.cs.cmu.edu
> *Subject:* Re: Connectionists: Stephen Hanson in conversation with Geoff
> Hinton
>
>
>
> Dear Geoff, and interested others,
>
>
>
> What, for example, would you make of a system that often drew the
> red-hatted hamster you requested, and perhaps a fifth of the time gave you
> utter nonsense?  Or say one that you trained to create birds but sometimes
> output stuff like this:
>
>
>
> <image001.png>
>
>
>
> One could
>
>
>
> a. avert one’s eyes and deem the anomalous outputs irrelevant
>
> or
>
> b. wonder if it might be possible that sometimes the system gets the right
> answer for the wrong reasons (eg partial historical contingency), and
> wonder whether another approach might be indicated.
>
>
>
> Benchmarks are harder than they look; most of the field has come to
> recognize that. The Turing Test has turned out to be a lousy measure of
> intelligence, easily gamed. It has turned out empirically that the Winograd
> Schema Challenge did not measure common sense as well as Hector might have
> thought. (As it happens, I am a minor coauthor of a very recent review on
> this very topic: https://arxiv.org/abs/2201.02387)
> But its conquest in no way means machines now have common sense; many
> people from many different perspectives recognize that (including, e.g.,
> Yann LeCun, who generally tends to be more aligned with you than with me).
>
>
>
> So: on the goalpost of the Winograd schema, I was wrong, and you can quote
> me; but what you said about me and machine translation remains your
> invention, and it is inexcusable that you simply ignored my 2019
> clarification. On the essential goal of trying to reach meaning and
> understanding, I remain unmoved; the problem remains unsolved.
>
>
>
> All of the problems LLMs have with coherence, reliability, truthfulness,
> misinformation, etc stand witness to that fact. (Their persistent inability
> to filter out toxic and insulting remarks stems from the same.) I am hardly
> the only person in the field to see that progress on any given benchmark
> does not inherently mean that the deep underlying problems have been solved.
> You, yourself, in fact, have occasionally made that point.
>
>
>
> With respect to embeddings: Embeddings are very good for natural language
> *processing*; but NLP is not the same as NL*U* – when it comes to
> *understanding*, their worth is still an open question. Perhaps they will
> turn out to be necessary; they clearly aren’t sufficient. In their extreme,
> they might even collapse into being symbols, in the sense of uniquely
> identifiable encodings, akin to the ASCII code, in which a specific set of
> numbers stands for a specific word or concept. (Wouldn’t that be ironic?)
>
>
>
> (Your GLOM, which as you know I praised publicly, is in many ways an
> effort to wind up with encodings that effectively serve as symbols in
> exactly that way, guaranteed to serve as consistent representations of
> specific concepts.)
>
>
>
> Notably absent from your email is any kind of apology for misrepresenting
> my position. It’s one thing to say that “many people thirty years ago once
> thought X” and another to say “Gary Marcus said X in 2015”, when I didn’t.
> I have consistently felt throughout our interactions that you have mistaken
> me for Zenon Pylyshyn; indeed, you once (at NeurIPS 2014) apologized to me
> for having made that error. I am still not he.
>
>
>
> Which maybe connects to the last point; if you read my work, you would see
> thirty years of arguments *for* neural networks, just not in the way that
> you want them to exist. I have ALWAYS argued that there is a role for them;
>  characterizing me as a person “strongly opposed to neural networks” misses
> the whole point of my 2001 book, which was subtitled “Integrating
> Connectionism and Cognitive Science.”
>
>
>
> In the last two decades or so you have insisted (for reasons you have
> never fully clarified, so far as I know) on abandoning symbol-manipulation,
> but the reverse is not the case: I have *never* called for dismissal of
> neural networks, but rather for some hybrid between the two (as you
> yourself contemplated in 1991); the point of the 2001 book was to
> characterize exactly where multilayer perceptrons succeeded and broke down,
> and where symbols could complement them. It’s a rhetorical trick (which is
> what the previous thread was about) to pretend otherwise.
>
>
>
> Gary
>
>
>
>
>
> On Feb 2, 2022, at 11:22, Geoffrey Hinton <geoffrey.hinton at gmail.com>
> wrote:
>
> 
>
> Embeddings are just vectors of soft feature detectors and they are very
> good for NLP.  The quote on my webpage from Gary's 2015 chapter implies the
> opposite.
>
>
>
> A few decades ago, everyone I knew then would have agreed that the ability
> to translate a sentence into many different languages was strong evidence
> that you understood it.
>
>
>
> But once neural networks could do that, their critics moved the goalposts.
> An exception is Hector Levesque who defined the goalposts more sharply by
> saying that the ability to get pronoun references correct in Winograd
> sentences is a crucial test. Neural nets are improving at that but still
> have some way to go. Will Gary agree that when they can get pronoun
> references correct in Winograd sentences they really do understand? Or does
> he want to reserve the right to weasel out of that too?
>
>
>
> Some people, like Gary, appear to be strongly opposed to neural networks
> because they do not fit their preconceived notions of how the mind should
> work.
>
> I believe that any reasonable person would admit that if you ask a neural
> net to draw a picture of a hamster wearing a red hat and it draws such a
> picture, it understood the request.
>
>
>
> Geoff
>
>
>
>
>
>
>
>
>
>
>
> On Wed, Feb 2, 2022 at 1:38 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
> Dear AI Hub, cc: Steven Hanson and Geoffrey Hinton, and the larger neural
> network community,
>
>
>
> There has been a lot of recent discussion on this list about framing and
> scientific integrity. Often the first step in restructuring narratives is
> to bully and dehumanize critics. The second is to misrepresent their
> position. People in positions of power are sometimes tempted to do this.
>
>
>
> The Hinton-Hanson interview that you just published is a real-time example
> of just that. It opens with a needless and largely content-free personal
> attack on a single scholar (me), with the explicit intention of
> discrediting that person. Worse, the only substantive thing it says is
> false.
>
>
>
> Hinton says “In 2015 he [Marcus] made a prediction that computers wouldn’t
> be able to do machine translation.”
>
>
>
> I never said any such thing.
>
>
>
> What I predicted, rather, was that multilayer perceptrons, as they existed
> then, would not (on their own, absent other mechanisms) *understand* language.
> Seven years later, they still haven’t, except in the most superficial way.
>
>
>
>
> I made no comment whatsoever about machine translation, which I view as a
> separate problem, solvable to a certain degree by correspondence without
> semantics.
>
>
>
> I specifically tried to clarify Hinton’s confusion in 2019, but,
> disappointingly, he has continued to purvey misinformation despite that
> clarification. Here is what I wrote privately to him then, which should
> have put the matter to rest:
>
>
>
> You have taken a single out of context quote [from 2015] and
> misrepresented it. The quote, which you have prominently displayed at the
> bottom on your own web page, says:
>
>
>
> Hierarchies of features are less suited to challenges such as language,
> inference, and high-level planning. For example, as Noam Chomsky famously
> pointed out, language is filled with sentences you haven't seen
> before. Pure classifier systems don't know what to do with such sentences.
> The talent of feature detectors -- in  identifying which member of some
> category something belongs to -- doesn't translate into understanding
> novel  sentences, in which each sentence has its own unique meaning.
>
>
>
> It does *not* say "neural nets would not be able to deal with novel
> sentences"; it says that hierachies of features detectors (on their own, if
> you read the context of the essay) would have trouble *understanding *novel sentences.
>
>
>
>
> Google Translate does not yet *understand* the content of the sentences
> it translates. It cannot reliably answer questions about who did what to
> whom, or why, it cannot infer the order of the events in paragraphs, it
> can't determine the internal consistency of those events, and so forth.
>
>
>
> Since then, a number of scholars, such as the computational linguist
> Emily Bender, have made similar points, and indeed current LLM difficulties
> with misinformation, incoherence and fabrication all follow from these
> concerns. Quoting from Bender’s prizewinning 2020 ACL article on the matter
> with Alexander Koller, https://aclanthology.org/2020.acl-main.463.pdf,
> also emphasizing issues of understanding and meaning:
>
>
>
> *The success of the large neural language models on many NLP tasks is
> exciting. However, we find that these successes sometimes lead to hype in
> which these models are being described as “understanding” language or
> capturing “meaning”. In this position paper, we argue that a system trained
> only on form has a priori no way to learn meaning. .. a clear understanding
> of the distinction between form and meaning will help guide the field
> towards better science around natural language understanding. *
>
>
>
> Her later article with Gebru on language models “stochastic parrots” is in
> some ways an extension of this point; machine translation requires mimicry,
> true understanding (which is what I was discussing in 2015) requires
> something deeper than that.
>
>
>
> Hinton’s intellectual error here is in equating machine translation with
> the deeper comprehension that robust natural language understanding will
> require; as Bender and Koller observed, the two appear not to be the same.
> (There is a longer discussion of the relation between language
> understanding and machine translation, and why the latter has turned out to
> be more approachable than the former, in my 2019 book with Ernest Davis).
>
>
>
> More broadly, Hinton’s ongoing dismissiveness of research from
> perspectives other than his own (e.g. linguistics) has done the field a
> disservice.
>
>
>
> As Herb Simon once observed, science does not have to be zero-sum.
>
>
>
> Sincerely,
>
> Gary Marcus
>
> Professor Emeritus
>
> New York University
>
>
>
> On Feb 2, 2022, at 06:12, AIhub <aihuborg at gmail.com> wrote:
>
> 
>
> Stephen Hanson in conversation with Geoff Hinton
>
>
>
> In the latest episode of this video series for AIhub.org, Stephen Hanson
> talks to Geoff Hinton about neural networks,
> backpropagation, overparameterization, digit recognition, voxel cells,
> syntax and semantics, Winograd sentences, and more.
>
>
>
> You can watch the discussion, and read the transcript, here:
>
>
> https://aihub.org/2022/02/02/what-is-ai-stephen-hanson-in-conversation-with-geoff-hinton/
>
>
>
> About AIhub:
>
> AIhub is a non-profit dedicated to connecting the AI community to the
> public by providing free, high-quality information through AIhub.org
> (https://aihub.org/).
> We help researchers publish the latest AI news, summaries of their work,
> opinion pieces, tutorials and more.  We are supported by many leading
> scientific organizations in AI, namely AAAI, NeurIPS, ICML, AIJ/IJCAI, ACM
> SIGAI, EurAI/AICOMM, CLAIRE and RoboCup.
>
> Twitter: @aihuborg
>
>
>
>
>
>
>
>