Connectionists: Stephen Hanson in conversation with Geoff Hinton

Stephen José Hanson jose at rubic.rutgers.edu
Fri Feb 4 15:59:12 EST 2022


Francesca,

Thanks for the clear distinctions between now and then.  I was also at 
MCC during that period, as a corporate visitor from Bell Labs 
(Bellcore)-- we ponied up our $1M, and 4 lucky MTS got to visit MCC 
every month... and see the magic.   Oh, and have great BBQ!

I sat in on Doug's stuff mostly, partly because it was such a grand and 
enormous plan.  And although I acknowledge it might have been missing 
some graph modeling and, heavens to Betsy, exclusively using Dolphins!... 
And high school students (as if only they had common sense!)... at least 
on vu-graphs, Doug said CYC would become conscious in 2005, or 2010, or 
2012, and knowledge would flow out of the wall like electricity. 
--hallelujah.

OK, OK... maybe a bit unfair.  Despite the enormous investment and the 
lack of return, Doug and many others thought that analogy, just in the 
way he was implementing it, was-- learning.  And CYC was about scale... 
we just had to wait till it swallowed enough knowledge.

So let's say, if we had CYC now and it was actually working... would you 
really want to hook up a Transformer (being fed Wikipedia) to it?  
Why?  Because Deep Learning is more robust learning?

In some ways the GPT-Xs are CYC... just not with human-understandable 
knowledge structures.  Could a decoder be built to create 
human-recognizable structures from a GPT?  Maybe... but it would no 
doubt be an approximation and couldn't really be used to reconstruct the 
GPT that we decoded -- it would be lost.   It's a trapdoor.

Right now we have learning systems that work because they are composed 
of very many layers with billions to trillions of connections, and we 
have no idea why they work at all.  None.  (There is that 500-page book 
on the statistical mechanics of DL that just dropped.)

These are still very early days.

And I am sure experiments in hybrids will appear in NIPS for years to 
come.

Let them bloom if they can.     But I doubt it.

Steve

On 2/4/22 3:18 PM, Francesca Rossi2 wrote:
> Hi Stephen.
>
> I was at MCC in 1987-88, so I am aware of that effort.
> As you may know, MCC included many different projects. The most visible one trying to achieve "general" AI was CYC (I was in another project, called LDL, led by Carlo Zaniolo, now at UCLA), and in my view it did not succeed because it was trying to codify all human knowledge manually and with logic. The Internet was not yet in use, and knowledge graphs were not there.
>
> Both MCC and FGCS relied on the assumption that everything could be coded in logic, not learned from data (as suggested, for example, by Udi Shapiro and others). FGCS also claimed that specialized hardware was needed. What neuro-symbolic AI researchers are advocating now is a fruitful way to combine learning from data with symbol/logic-based reasoning, which is not what was done at that time.
>
> Francesca.
> -----------------------
>
> Francesca Rossi
> IBM Fellow and AI Ethics Global Leader
> T.J. Watson Research Center, Yorktown Heights, USA
> +1-617-3869639
>
> ________________________________________
> From: Stephen José Hanson <jose at rubic.rutgers.edu>
> Sent: Friday, February 4, 2022 7:17 AM
> To: Francesca Rossi2; Artur Garcez; Gary Marcus
> Cc: Geoffrey Hinton; AIhub; connectionists at mailman.srv.cs.cmu.edu; Luis Lamb; Josh Tenenbaum; Anima Anandkumar; Swarat Chaudhuri; Gadi Singer
> Subject: [EXTERNAL] Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton
>
>
> Geoff's position is pretty clear.   He said, in the conversation we had and in this thread, "vectors of soft features".
>
> Some of my claim is in several of the conversations with Mike Jordan and Rich Sutton, but briefly,  there are a number of
> very large, costly efforts from the 1970s and 1980s to create, deploy, and curate symbolic AI systems that were massive failures.  Not counterfactuals, but factuals that failed.   MCC comes to mind, with Adm. Bobby Inman's national US mandate to counter the Japanese so-called "Fifth-Generation AI systems" -- a massive failure of symbolic AI.
>
> --------------------
>
> In 1982, Japan launched its Fifth Generation Computer Systems project (FGCS), designed to develop intelligent software that would run on novel computer hardware. As the first national, large-scale artificial intelligence (AI) research and development (R&D) project to be free from military influence and corporate profit motives, the FGCS was open, international, and oriented around public goods.
>
> On 2/3/22 6:34 PM, Francesca Rossi2 wrote:
>
> Hi all.
>
> Thanks, Gary, for adding me to this thread.
>
> I also would be interested in knowing why Steve thinks that NS AI did not work in the past, and why this is an indication that it cannot work now or in the future.
>
> Thanks,
> Francesca.
>
> ------------------
> Francesca Rossi
> IBM Fellow and AI Ethics Global Leader
> T.J. Watson Research Center, Yorktown Heights, USA
> +1-617-3869639
>
> ________________________________________
> From: Artur Garcez <arturdavilagarcez at gmail.com>
> Sent: Thursday, February 3, 2022 6:00 PM
> To: Gary Marcus
> Cc: Stephen José Hanson; Geoffrey Hinton; AIhub; connectionists at mailman.srv.cs.cmu.edu; Luis Lamb; Josh Tenenbaum; Anima Anandkumar; Francesca Rossi2; Swarat Chaudhuri; Gadi Singer
> Subject: [EXTERNAL] Re: Connectionists: Stephen Hanson in conversation with Geoff Hinton
>
> It would be great to hear Geoff's account, with historical reference to his 1990 edited special volume of the AI journal on connectionist symbol processing.
>
> Judging from recent reviewing for NeurIPS, ICLR, and ICML, but also KR, AAAI, and IJCAI (traditionally symbolic), there is a clear resurgence of neuro-symbolic approaches.
>
> Best wishes,
> Artur
>
> On Thu, Feb 3, 2022 at 5:00 PM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
> Steve,
>
> I’d love to hear you elaborate on this part:
>
>   Many more shoes will drop in the next few years.  I for one don't believe one of those shoes will be Hybrid approaches to AI,  I've seen that movie before and it didn't end well.
>
> I’d love your take on why you think the impetus towards hybrid models ended badly before, and why you think that the mistakes of the past can’t be corrected. Also, it would be really instructive to compare with deep learning, which lost steam for quite some time but reemerged much stronger than ever before. Might not the same happen with hybrid models?
>
> I am cc’ing some folks (possibly not on this list) who have recently been sympathetic to hybrid models, in hopes of a rich discussion.  (And, Geoff, still cc’d, I’d genuinely welcome your thoughts if you want to add them, despite our recent friction.)
>
> Cheers,
> Gary
>
> On Feb 3, 2022, at 5:10 AM, Stephen José Hanson <jose at rubic.rutgers.edu> wrote:
>
> I would encourage you to read the whole transcript, as you will see the discussion does intersect with a number of issues you raised in an earlier post on what is learned/represented in DLs.
>
> It's important for those paying attention to this thread to realize these are still very early times.    Many more shoes will drop in the next few years.  I for one don't believe one of those shoes will be Hybrid approaches to AI,  I've seen that movie before and it didn't end well.
>
> Best, and hope you are doing well.
>
> Steve
>
-- 

