Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com

Gary Marcus gary.marcus at nyu.edu
Thu Jun 9 16:43:13 EDT 2022


These, by the way, are all fine queries, minus the gratuitous ad hominem, which is unnecessary; after all, quibblers abound across the spectrum.

The good news is that the community is already working these things out on metaculus.com, and I’m happy to take middle-of-the-road positions on all of them if you wish to participate (e.g., “accurately” can be taken to mean 90%, or whatever a sample of U. Toronto undergraduates scores).

On the comprehension challenges in particular, I am working with a group of people from DeepMind, Meta, OpenAI, and various universities to try to make something practicable. Szegedy has been trying to iron out the math bet. The cooking challenge and the correct rules for the coding challenge I leave to others.

More broadly, in my experience with complex negotiations, one starts with something like a term sheet for the higher-level concepts, and then works to the finer grain only after there is enough agreement at the higher level. I’d gladly put in the time to negotiate the details if you would seriously engage (with or without money) at the higher level.

-gfm

> On Jun 9, 2022, at 11:50, Geoffrey Hinton <geoffrey.hinton at gmail.com> wrote:
> 
> 
> If you make a bet, you need to be very clear about what counts as success and you need to assume the person who has to pay out will quibble.
> You cannot have a real bet that includes phrases like:
> 
> "tell you accurately"
> "reliably answer questions"
> "competent cook"
> "reliably construct"
> 
> A bet needs to have such clear criteria for whether it has been achieved or not that even Gary Marcus could not quibble.  
> 
> Geoff
> 
> 
> 
> On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
>> Dear Connectionists, and especially Geoff Hinton,
>> 
>> It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said 
>> 
>> “If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it’s not going to be able to do then five years later, we’d be able to show we’d done them.”
>> 
>> Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
>> 
>> Precise details are here, towards the end of the essay: 
>> 
>> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>> 
>> Five are specific milestones, in video and text comprehension, cooking, math, etc.; the sixth is the proviso that for an intelligence to be deemed “general” (which is what Musk was discussing in the remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
>> 
>> Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into 6 questions that the community can comment on.  Vivek Wadhwa, cc’d, quickly offered to double the bet, and several others followed suit;  the bet to Elon (should he choose to take it) currently stands at $500,000.
>> 
>> If you’d like in on the bet, Geoff, please let me know. 
>> 
>> More generally, I’d love to hear what the connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay as to why AGI might not be as imminent as Musk seems to think).
>> 
>> Cheers,
>> Gary Marcus