Connectionists: Stephen Hanson in conversation with Geoff Hinton

Stefano Rovetta Stefano.Rovetta at unige.it
Sat Jul 16 06:49:45 EDT 2022


Dear Asim

what do you mean by "similar situations"?


--Stefano Rovetta




Asim Roy <ASIM.ROY at asu.edu> wrote:

> Dear Danko,
>
>
>   1.  “Figure it out once the situation emerges” and “we do not need
> to learn upfront” sound a bit magical, even for biological
> systems. A professional tennis player practices for years so that
> he or she knows, as far as possible, how to respond to each and every
> situation. Such learning does not end after just 10 days of training.
> Such a player would prefer to know as many of the situations that can
> arise as possible “upfront.” That is the meaning of training and
> learning. And it also means hitting tennis balls millions of times,
> perhaps countless times. And that is learning from a lot of data.
>   2.  You might want to rethink your definition of “understanding”
> given the above example. Understanding, for a tennis player, is
> knowing the different situations that can arise. One’s ability
> “to resolve” different situations comes from one’s experience with
> similar situations. A tennis player’s understanding indeed comes
> from that big “data set” of responses to different situations.
>   3.  In general, biological learning may not be as magical as you
> think. I wish it were.
>
> Best,
> Asim
>
> From: Danko Nikolic <danko.nikolic at gmail.com>
> Sent: Friday, July 15, 2022 11:39 AM
> To: Gary Marcus <gary.marcus at nyu.edu>
> Cc: Asim Roy <ASIM.ROY at asu.edu>; Grossberg, Stephen <steve at bu.edu>;  
> AIhub <aihuborg at gmail.com>; Post Connectionists  
> <connectionists at mailman.srv.cs.cmu.edu>; maxversace at gmail.com
> Subject: Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton
>
> Thanks Gary and Asim,
>
> Gary, yes, that is what I meant: recognizing a new situation in
> which a knife is being used, needs to be used, or could be used.
> We do not need to learn those situations at the time we learn about
> knives. We figure them out once the situation emerges. This is what
> is countless: the number of situations that may emerge. We do not
> need to know them upfront.
>
> Asim, it is interesting that you assumed that everything needs to be
> learned upfront. This may be exactly the difference between what
> connectionism assumes and what the human brain can actually do. The
> biological brain need not learn things upfront and yet
> 'understands' them once they happen.
>
> Also, as you asked for a definition of understanding, perhaps we can
> start exactly from that point: understanding is when you do not have
> to learn the different applications of a knife (or of object X, in
> general) and yet you are able to resolve the use of the knife once a
> relevant situation emerges. Understanding is great because the number
> of possible situations is countless, and one cannot possibly prepare
> for them all with a learning data set.
>
> Transient selection of subnetworks based on MRs and GPGICs may do  
> that 'understanding' job in the brain. That is my best guess after a  
> long search for an appropriate mechanism.
>
> The scaling problem that I am talking about concerns those countless
> situations. To be able to resolve them, linear scaling would not be
> enough. Even if there were connectionist systems that could scale
> linearly (unlikely, as the research now stands), linearity
> would not be enough to fix the problem.
>
> Greetings,
>
> Danko
>
>
>
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/
> -- I wonder, how is the brain able to generate insight? --
>
>
> On Fri, Jul 15, 2022 at 3:51 PM Gary Marcus
> <gary.marcus at nyu.edu> wrote:
> I am with Danko here: he said “resolve,” not “anticipate in advance.”
>
> I doubt any human is perfect at anticipating all uses of a knife,
> but, e.g., audiences had little trouble interpreting and enjoying all
> the weird repurposings that the TV character MacGyver was known for.
>
> On Jul 15, 2022, at 6:36 AM, Asim Roy
> <ASIM.ROY at asu.edu> wrote:
>
> Dear Danko,
>
>
>   1.  I am not sure that I myself know all the uses of a knife, let
> alone countless ones. Given a particular situation, I might simulate
> the potential usage in my mind, but I doubt our minds explore
> all the countless situations in which an object could be used as soon
> as they learn about it.
>   2.  I am not sure that a 2- or 3-year-old child, after having
> “learnt” about a knife, knows very many uses for it. I doubt the kid
> is awake all night and day simulating in its brain how and where to
> use such a knife.
>   3.  “Understanding” is a loaded term. I think it needs a definition.
>   4.  I am copying Max Versace, a student of Steve Grossberg. His
> company markets software that can learn quickly from a few
> examples. It is not exactly one-shot learning; it needs a few shots.
> I believe it is a variation of ART, but Max can clarify the details.
> And Tsvi is doing similar work. So what you are asking for may
> already exist, and linear scaling may be the worst-case scenario.
>
> Best,
> Asim Roy
> Professor, Information Systems
> Arizona State University
> Lifeboat Foundation Bios: Professor Asim Roy
> <https://lifeboat.com/ex/bios.asim.roy>
> Asim Roy | iSearch (asu.edu)
> <https://isearch.asu.edu/profile/9973>
>
>
>
> From: Danko Nikolic <danko.nikolic at gmail.com>
> Sent: Friday, July 15, 2022 12:19 AM
> To: Asim Roy <ASIM.ROY at asu.edu>
> Cc: Grossberg, Stephen <steve at bu.edu>; Gary
> Marcus <gary.marcus at nyu.edu>; AIhub
> <aihuborg at gmail.com>;
> connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton
>
> Dear Asim,
>
> I agree about the potential for linear scaling of ART and other
> connectionist systems. However, there are two problems.
>
> Problem number one already kills it: the real
> brain scales much better than linearly. For each new object
> learned, we are able to resolve countless new situations in
> which this object takes part (e.g., finding various uses for a
> knife, many of which may be new, ad hoc; this is a great ability
> of biological minds, often referred to as 'understanding'). Hence,
> simple linear scaling, by adding more neurons for additional objects,
> is not good enough to match biological intelligence.
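>
> To make "countless" concrete, here is a minimal illustrative sketch
> in Python (the choice of three objects per situation is just an
> assumption for illustration, not a claim from the manuscript): if
> each situation combines a few of the N objects known so far, the
> number of possible situations grows combinatorially, while a
> one-representation-per-object scheme grows only linearly in N.
>
> import math
>
> # Illustrative assumption: each situation involves 3 of N known objects.
> for n in (10, 100, 1000):
>     per_object_units = n           # linear growth: one unit per object
>     situations = math.comb(n, 3)   # combinatorial growth: ~ n**3 / 6
>     print(n, per_object_units, situations)
>
> # At n = 1000 there are already ~166 million three-object situations,
> # so capacity that grows only linearly in the number of objects
> # cannot keep up.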
>
> The second problem is overkill: linear
> scaling in connectionist systems works only in theory, under
> idealized conditions. In real life, say when working with ImageNet,
> the scaling turns into a power law with an exponent much larger than
> one: we need something like 500x more resources just to double the
> number of objects. Hence, in practice, the demands for resources
> explode if you want to add more categories without losing
> accuracy.
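>
> A quick worked version of that claim (a sketch; the 500x figure is
> the one quoted above, and the power-law form is an assumption of the
> argument): if the resources R needed to keep accuracy follow
> R(N) = c * N**k in the number of categories N, then doubling N
> multiplies R by 2**k, so a 500x cost per doubling implies
> k = log2(500), roughly 9, far from the linear case k = 1.
>
> import math
>
> # Power law R(N) = c * N**k: doubling N multiplies resources by 2**k.
> k = math.log2(500)   # exponent implied by a 500x doubling cost, ~8.97
> print(f"implied exponent k = {k:.2f}")
> print(f"doubling cost if linear (k=1): {2**1}x")    # 2x
> print(f"doubling cost at k={k:.2f}: {2**k:.0f}x")   # ~500x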
>
> To summarize, there is no linear scaling in practice, nor would
> linear scaling suffice even if we achieved it.
>
> This should be a strong enough argument to search for another  
> paradigm, something that scales better than connectionism.
>
> I discuss both problems in the new manuscript, and I dig a bit
> deeper into why connectionism lacks linear scaling in
> practice. I provide some revealing computations in the Supplementary
> Materials, with access to the code, although much more work needs
> to be done.
>
> Danko
>
> Dr. Danko Nikolić
> www.danko-nikolic.com
> https://www.linkedin.com/in/danko-nikolic/




