These, by the way, are all fine queries, minus the gratuitous ad hominem, which is unnecessary; after all, quibblers abound across the spectrum.

The good news is that the community is already working these things out on metaculus.com, and I’m happy to take middle-of-the-road positions on all, if you wish to participate (e.g., “accurately” can be taken to mean 90%, or whatever a sample of U. Toronto undergraduates scores).

On the comprehension challenges in particular, I am working with a group of people from DeepMind, Meta, OpenAI, and various universities to try to make something practicable. Szegedy has been trying to iron out the math bet. Cooking and the precise rules for the coding challenge I leave to others.

More broadly, in my experience with complex negotiations, one starts with something like a term sheet for the higher-level concepts, and works to the finer grain only after there is enough agreement at the higher level. I’d gladly put the time into negotiating the details if you would seriously engage (with or without money) at the higher level.

-gfm

On Jun 9, 2022, at 11:50, Geoffrey Hinton <geoffrey.hinton@gmail.com> wrote:

> If you make a bet, you need to be very clear about what counts as success, and you need to assume the person who has to pay out will quibble.
> You cannot have a real bet that includes phrases like:
>
> “tell you accurately”
> “reliably answer questions”
> “competent cook”
> “reliably construct”
>
> A bet needs such clear criteria for whether it has been achieved that even Gary Marcus could not quibble.
>
> Geoff
>
> On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus <gary.marcus@nyu.edu> wrote:
>
>> Dear Connectionists, and especially Geoff Hinton,
>>
>> It has come to my attention that Geoff Hinton is looking for challenging targets.
>> In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said:
>>
>> “If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it’s not going to be able to do, then five years later, we’d be able to show we’d done them.”
>>
>> Now, as it happens, I (with the help of Ernie Davis) did just write down exactly such a list of things last week, and indeed offered Elon Musk a $100,000 bet along similar lines.
>>
>> Precise details are here, towards the end of the essay:
>>
>> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>>
>> Five are specific milestones, in video and text comprehension, cooking, math, etc.; the sixth is the proviso that for an intelligence to be deemed “general” (which is what Musk was discussing in the remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.
>>
>> Although there is no word yet from Elon, Kevin Kelly offered to host the bet at LongNow.org, and Metaculus.com has transformed the bet into six questions that the community can comment on. Vivek Wadhwa, cc’d, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.
>>
>> If you’d like in on the bet, Geoff, please let me know.
>>
>> More generally, I’d love to hear what the Connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay as to why AGI might not be as imminent as Musk seems to think).
>>
>> Cheers,
>> Gary Marcus