Connectionists: Stephen Hanson in conversation with Geoff Hinton

Stephen José Hanson jose at rubic.rutgers.edu
Fri Feb 4 22:36:07 EST 2022


Gary,

Not retreating. Simply stating the obvious: brains are where symbols,
as we talk about them, reside. But what exactly are they in a brain?

I think that is a question that starts with connections and layers,
not some sort of specialized software. This is not about software
engineering but about the science of the thing.

Utility is important in many domains, but it appears to me to be a
retreat that you are didactically hiding behind ("even Jay"? Come on.)

I am aware of simulations using RNNs that counter your claim that RNNs
could not learn certain sequential behavior
(https://www.semanticscholar.org/paper/On-the-Emergence-of-Rules-in-Neural-Networks-Hanson-Negishi/4bca27b823c9724d910b4637fd489343233570f8).
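
For a toy version of the point (a minimal sketch, not the architecture
or task from that paper), a plain RNN trained only to predict the next
symbol picks up the sequential rule of the grammar a^n b^n:

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    V = 3                     # vocabulary: 0='a', 1='b', 2=end-of-string
    rnn = nn.RNN(input_size=V, hidden_size=16, batch_first=True)
    head = nn.Linear(16, V)
    params = list(rnn.parameters()) + list(head.parameters())
    opt = torch.optim.Adam(params, lr=0.01)
    loss_fn = nn.CrossEntropyLoss()

    # training strings from the grammar a^n b^n, terminated by symbol 2
    seqs = [[0] * n + [1] * n + [2] for n in range(1, 7)]

    for epoch in range(500):
        for s in seqs:
            x = torch.eye(V)[s[:-1]].unsqueeze(0)  # one-hot, (1, T, V)
            y = torch.tensor(s[1:])                # next-symbol targets
            h, _ = rnn(x)
            loss = loss_fn(head(h.squeeze(0)), y)
            opt.zero_grad(); loss.backward(); opt.step()

    # the trained net should continue "aaabb" with 'b' (symbol 1),
    # i.e. it has induced the count-matching regularity
    with torch.no_grad():
        x = torch.eye(V)[[0, 0, 0, 1, 1]].unsqueeze(0)
        h, _ = rnn(x)
        print(head(h[0, -1]).argmax().item())

No symbols or rules are built in; whatever "rule" emerges is carried
in the learned connections.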

I just can't take modularity of the brain seriously anymore, as
cognitive neuroscience continues to embrace generic networks (resting
state) and distributed representations -- things are moving on (see
"The Failure of Blobology", SJH). Face areas? Why would there be face
areas (as distinct from neural patches)? What would they do in any
case: store all the faces you've seen (unlikely), or decompose faces
into parts from wholes? What was the point of a face area in the first
place? It is more likely to be some sort of WAVELET code that
incidentally encodes faces, 1963 Cadillacs, and greebles
(https://psycnet.apa.org/record/2008-00548-008;
https://pubmed.ncbi.nlm.nih.gov/19883493/) than some specific
type/token face thing.
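
To be concrete about what a generic wavelet code might look like (a
schematic sketch only; the Gabor parameters are illustrative, and
scipy is assumed for the convolution):

    import numpy as np
    from scipy.signal import fftconvolve

    def gabor_kernel(size=21, theta=0.0, lam=8.0, sigma=4.0, gamma=0.5):
        """A single Gabor wavelet: a Gaussian-windowed sinusoid at angle theta."""
        half = size // 2
        y, x = np.mgrid[-half:half + 1, -half:half + 1]
        xr = x * np.cos(theta) + y * np.sin(theta)
        yr = -x * np.sin(theta) + y * np.cos(theta)
        return np.exp(-(xr**2 + (gamma * yr)**2) / (2 * sigma**2)) \
            * np.cos(2 * np.pi * xr / lam)

    def wavelet_code(img, n_orient=8):
        """Mean rectified response at each orientation -- nothing here
        is face-specific; the same code applies to Cadillacs or greebles."""
        return np.array([
            np.abs(fftconvolve(img, gabor_kernel(theta=k * np.pi / n_orient),
                               mode='same')).mean()
            for k in range(n_orient)
        ])

    img = np.random.rand(128, 128)   # stand-in for any grayscale image
    print(wavelet_code(img))         # an 8-dim orientation signature

The point is that such a code responds to oriented structure wherever
it occurs; faces just happen to be one thing it captures well.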

I think this is more about hoping there is some common ground that
doesn't really exist. Most are beginning to see that deep learning is
a fundamental step forward in AI, and yes, it is very non-biologically
plausible, except for the obvious parts that are in the mammalian
brain: layers (the cortex has six) and connections. It's better to
focus on why it works and how to make mathematical sense of it.

Steve

On 2/4/22 2:52 PM, Gary Marcus wrote:
> Steve,
>
> The phrase I always liked was “poverty of the imagination arguments”; 
> I share your disdain for them. But that’s why I think you should be 
> careful of any retreat into biological plausibility. As even Jay 
> McClelland has acknowledged, we do know that some humans some of the 
> time manipulate symbols. So wetware-based symbols are not literally 
> biologically impossible; the real question for cognitive neuroscience 
> is about the scope and development of symbols.
>
> For engineering, the real question is: are they useful? Certainly for
> software engineering in general, they are indispensable.
>
> Beyond this, none of the available AI approaches map particularly 
> neatly onto what we know about the brain, and none of what we know 
> about the brain is understood well enough to solve AI.  All the 
> examples you point to, for instance, are actually controversial, not 
> decisive. As you probably know, for example, Nancy Kanwisher has a
> different take on domain-specificity than you do
> (https://web.mit.edu/bcs/nklab/), with evidence of specialization
> early in life, and Jeff Bowers has argued that the grandmother cell
> hypothesis has been dismissed prematurely
> (https://jeffbowers.blogs.bristol.ac.uk/blog/grandmother-cells/);
> there’s also a long literature on the possible neural realization of 
> rules, both in humans and other animals.
>
> I don’t know what the right answers are there, but nor do I think that
> neurosymbolic systems are beholden to them any more than CNNs are bound
> to whether or not the brain performs back-propagation.
>
> Finally, as a reminder, “distributed” per se is not the right
> question; in some technical sense ASCII encodings are distributed, and
> about as symbolic as you can get. The proper question is really what 
> you do with your encodings; the neurosymbolic approach is trying to 
> broaden the available range of options.
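>
> To make that concrete (a trivial Python illustration):
>
>     for ch in "CAT":
>         print(ch, format(ord(ch), '08b'))   # e.g. C 01000011
>
> Each character is spread across eight bits, yet the code is as
> discrete and symbolic as codes get; what matters is the operations
> defined over the encoding.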
>
> Gary
>
>> On Feb 4, 2022, at 07:04, Stephen José Hanson 
>> <jose at rubic.rutgers.edu> wrote:
>>
>> 
>>
>> Well, I don't like counterfactual arguments, or ones that start with
>> "It can't be done with neural networks..." -- as this amounts to the
>> old Rumelhart saw of "proof by lack of imagination".
>>
>> I think my position and others' (I can't speak for Geoff and won't) is
>> more of a "purist" view that brains have computationally complete
>> representational power to do whatever is required of human-level
>> mental processing. AI symbol systems are remote descriptions of this
>> level of processing. Looking at thousands of brain scans, one begins
>> to see a pattern of interacting large- and smaller-scale networks,
>> probably related to the resting-state and default-mode networks in
>> some important competitive way. But what one doesn't find is
>> modular structure (e.g., a face area.. nope) or evidence of "symbols"
>> being processed. Research on numbers is interesting in this regard,
>> as number representation should provide some evidence of discrete
>> symbol processing, as would letters. But again, the processing
>> states from brain imaging more generally appear to be distributed
>> representations of some sort.
>>
>> One other direction has to do with prior rules that could be neurally
>> coded, and would therefore provide an immediate bias in learning and
>> thus dramatically reduce the number of examples required for
>> asymptotic learning. Some of this has been done with pre-training --
>> on, let's say, thousands of videos that are relatively generic, prior
>> to learning on a small set of videos related to a specific topic --
>> say, two individuals playing a Monopoly game. In that case, no
>> game-like videos were sampled in the pre-training, and the LSTM was
>> trained to detect change points on 2 minutes of video, achieving a
>> 97% match with human parsers. In these senses I have no problem with
>> this type of hybrid training.
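>>
>> Schematically, the recipe was of this form (illustrative shapes and
>> names only; this is a sketch, not the code or features from that
>> study, and the data loaders are hypothetical):
>>
>>     import torch
>>     import torch.nn as nn
>>
>>     class ChangePointLSTM(nn.Module):
>>         def __init__(self, feat_dim=512, hidden=256):
>>             super().__init__()
>>             self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
>>             self.head = nn.Linear(hidden, 1)  # per-frame boundary score
>>
>>         def forward(self, frames):            # (batch, time, feat_dim)
>>             h, _ = self.lstm(frames)
>>             return self.head(h).squeeze(-1)   # (batch, time) logits
>>
>>     model = ChangePointLSTM()
>>     loss_fn = nn.BCEWithLogitsLoss()
>>
>>     def run_epoch(loader, lr):
>>         opt = torch.optim.Adam(model.parameters(), lr=lr)
>>         for frames, labels in loader:  # labels: 1.0 at human-marked boundaries
>>             loss = loss_fn(model(frames), labels)
>>             opt.zero_grad(); loss.backward(); opt.step()
>>
>>     # stage 1: pre-train on thousands of generic (non-game) videos
>>     # run_epoch(generic_loader, lr=1e-3)
>>     # stage 2: fine-tune on the small set of Monopoly clips
>>     # run_epoch(monopoly_loader, lr=1e-4)
>>
>> The prior from pre-training is what stands in for the "rule": it
>> biases the fine-tuning so that a couple of minutes of labeled video
>> suffice.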
>>
>> Steve
>>
>> On 2/4/22 9:07 AM, Gary Marcus wrote:
>>> The whole point of the neurosymbolic approach is to develop systems 
>>> that can accommodate both vectors and symbols, since neither on 
>>> their own seems adequate.
>>>
>>> If there are arguments against trying to do that, we would be 
>>> interested.
>>>
>>>> On Feb 4, 2022, at 4:17 AM, Stephen José Hanson 
>>>> <jose at rubic.rutgers.edu> wrote:
>>>>
>>>> 
>>>>
>>>> Geoff's position is pretty clear. He said, in the conversation we
>>>> had and in this thread, "vectors of soft features".
>>>>
>>>> Some of my claim is in several of the conversations with Mike
>>>> Jordan and Rich Sutton, but briefly: there were a number of
>>>> very large, costly efforts in the 1970s and 1980s to create,
>>>> deploy, and curate symbolic AI systems that were massive failures.
>>>> Not counterfactuals, but factuals that failed. The MCC comes to
>>>> mind, with Adm. Bobby Inman's national US mandate to counter the
>>>> Japanese so-called "Fifth Generation" AI systems, as a massive
>>>> failure of symbolic AI.
>>>>
>>>> --------------------
>>>>
>>>> In 1982, Japan launched its Fifth Generation Computer Systems 
>>>> project (FGCS), designed to develop intelligent software that would 
>>>> run on novel computer hardware. As the first national, large-scale 
>>>> artificial intelligence (AI) research and development (R&D) project 
>>>> to be free from military influence and corporate profit motives, 
>>>> the FGCS was open, international, and oriented around public goods.
>>>>
>>>> On 2/3/22 6:34 PM, Francesca Rossi2 wrote:
>>>>> Hi all.
>>>>>
>>>>> Thanks Gary for adding me to this thread.
>>>>>
>>>>> I also would be interested in knowing why Steve thinks that NS AI did not work in the past, and why this is an indication that it cannot work now or in the future.
>>>>>
>>>>> Thanks,
>>>>> Francesca.
>>>>> ------------------
>>>>>
>>>>> Francesca Rossi
>>>>> IBM Fellow and AI Ethics Global Leader
>>>>> T.J. Watson Research Center, Yorktown Heights, USA
>>>>> +1-617-3869639
>>>>>
>>>>> ________________________________________
>>>>> From: Artur Garcez <arturdavilagarcez at gmail.com>
>>>>> Sent: Thursday, February 3, 2022 6:00 PM
>>>>> To: Gary Marcus
>>>>> Cc: Stephen José Hanson; Geoffrey Hinton; AIhub; connectionists at mailman.srv.cs.cmu.edu; Luis Lamb; Josh Tenenbaum; Anima Anandkumar; Francesca Rossi2; Swarat Chaudhuri; Gadi Singer
>>>>> Subject: [EXTERNAL] Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton
>>>>>
>>>>> It would be great to hear Geoff's account with historical reference to his 1990 edited special volume of the AI journal on connectionist symbol processing.
>>>>>
>>>>> Judging from recent reviewing for NeurIPS, ICLR, ICML but also KR, AAAI, IJCAI (traditionally symbolic), there is a clear resurgence of neuro-symbolic approaches.
>>>>>
>>>>> Best wishes,
>>>>> Artur
>>>>>
>>>>>
>>>>> On Thu, Feb 3, 2022 at 5:00 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>>>>> Steve,
>>>>>
>>>>> I’d love to hear you elaborate on this part,
>>>>>
>>>>>   Many more shoes will drop in the next few years. I for one don't believe one of those shoes will be hybrid approaches to AI; I've seen that movie before, and it didn't end well.
>>>>>
>>>>>
>>>>> I’d love your take on why you think the impetus towards hybrid models ended badly before, and why you think that the mistakes of the past can’t be corrected. Also, it would be really instructive to compare with deep learning, which lost steam for quite some time but reemerged much stronger than ever before. Might not the same happen with hybrid models?
>>>>>
>>>>> I am cc’ing some folks (possibly not on this list) who have recently been sympathetic to hybrid models, in hopes of a rich discussion.  (And, Geoff, still cc’d, I’d genuinely welcome your thoughts if you want to add them, despite our recent friction.)
>>>>>
>>>>> Cheers,
>>>>> Gary
>>>>>
>>>>>
>>>>> On Feb 3, 2022, at 5:10 AM, Stephen José Hanson <jose at rubic.rutgers.edu> wrote:
>>>>>
>>>>>
>>>>> I would encourage you to read the whole transcript, as you will see the discussion does intersect with a number of issues you raised in an earlier post on what is learned/represented in DLs.
>>>>>
>>>>> It's important for those paying attention to this thread to realize these are still very early times. Many more shoes will drop in the next few years. I for one don't believe one of those shoes will be hybrid approaches to AI; I've seen that movie before, and it didn't end well.
>>>>>
>>>>> Best and hope you are doing well.
>>>>>
>>>>> Steve
>>>>>