Dear Connectionists, and especially Geoff Hinton,

It has come to my attention that Geoff Hinton is looking for challenging targets. In a just-released episode of The Robot Brains podcast [https://www.youtube.com/watch?v=4Otcau-C_Yc], he said

"If any of the people who say [deep learning] is hitting a wall would just write down a list of the things it's not going to be able to do then five years later, we'd be able to show we'd done them."

Now, as it so happens, I (with the help of Ernie Davis) did just write down exactly such a list of things last week, and indeed offered Elon Musk a $100,000 bet along similar lines.

Precise details are here, towards the end of the essay:

https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things

Five are specific milestones, in video and text comprehension, cooking, math, etc.; the sixth is the proviso that for an intelligence to be deemed "general" (which is what Musk was discussing in the remark that prompted my proposal), it would need to solve a majority of the problems. We can probably all agree that narrow AI for any single problem on its own might be less interesting.

Although there is no word yet from Elon, Kevin Kelly has offered to host the bet at LongNow.Org, and Metaculus.com has transformed the bet into six questions that the community can comment on. Vivek Wadhwa, cc'd, quickly offered to double the bet, and several others followed suit; the bet to Elon (should he choose to take it) currently stands at $500,000.

If you'd like in on the bet, Geoff, please let me know.

More generally, I'd love to hear what the Connectionists community thinks of the six criteria I laid out (as well as the arguments at the top of the essay as to why AGI might not be as imminent as Musk seems to think).

Cheers,
Gary Marcus