Connectionists: Geoff Hinton, Elon Musk, and a bet at garymarcus.substack.com

Ali Minai minaiaa at gmail.com
Mon Jun 13 01:12:28 EDT 2022


Gary

I think it's important to emphasize (as, of course, you know) that the main
challenge in your challenges is that the *same* system must be able to perform
all the tasks. One of the main ways that ML has diverged from natural
intelligence is in focusing on specific tasks and developing specialized
systems that often achieve super-human performance on their task. The task
can be very complex, like drawing pictures from natural language input, but
that is just one aspect of intelligence. Can DALL-E 2 make a cup of coffee?
It's just a specialized savant, which makes it less generally intelligent
than a bee.

My contention is that, to achieve truly integrated general intelligence,
we'll have to start with simple integrated systems and make them more
complex. The "vertical" compartmentalization is just creating a number of
hyper-specialists that could fall apart when we try to integrate all of
them. Total multi-modal, developmental learning is how we'll achieve
general intelligence. Or perhaps, gradually multi-modal if we wish to mimic
evolution as well. Unfortunately, we human engineers are: a) in a hurry to
see this in our lifetimes; b) driven by concrete problems; and c) saddled with
specialized training that focuses us on specific dimensions of an extremely
high-dimensional problem.

Ultimately, we know that something like a brain embedded in something like
a body can be sentient, conscious, intelligent, etc., in the real world, so
of course we'll get there - if we learn respectfully from biology and get
out of the "professors are smarter than Nature" mode.

Best

Ali

*Ali A. Minai, Ph.D.*
Professor and Graduate Program Director
Complex Adaptive Systems Lab
Department of Electrical Engineering & Computer Science
828 Rhodes Hall
University of Cincinnati
Cincinnati, OH 45221-0030

Phone: (513) 556-4783
Fax: (513) 556-7326
Email: Ali.Minai at uc.edu
          minaiaa at gmail.com

WWW: https://eecs.ceas.uc.edu/~aminai/


On Sun, Jun 12, 2022 at 5:36 AM Gary Marcus <gary.marcus at nyu.edu> wrote:

> Let’s review:
>
> Hinton accuses me of not setting clear criteria.
>
> I offer 6 reasonably clear criteria.
>
> A significant sample of the ML community elsewhere applauds the criteria,
> and engages seriously.
>
> Hinton says it’s deranged to discuss them; after that, nobody here dares.
>
> Hanson derides the whole project and stacks the deck; ignores the cold
> fusion, flying jet packs, driverless taxis, and so on that haven’t become
> practical despite promises, citing only the numerator but not the
> denominator of history, further stifling any serious discussion of what
> Hinton’s requested targets might be.
>
> Was Hinton’s request for clear criteria a genuine good-faith request? Does
> anyone on this list have better criteria? Do you always find it appropriate
> to tag-team people for responding to requests in good faith?
>
> Open scientific discussion, literally for decades a hallmark of this list,
> appears to have left the building. Very unfortunate.
>
> Gary
>
>
>
>
> On Jun 10, 2022, at 8:14 AM, Stephen Jose Hanson <
> stephen.jose.hanson at rutgers.edu> wrote:
>
>
>
> Bets?  The *August* discussion months ago has reduced to bets?  Really?
>
> Gentlemen, let's step back a bit... on the one hand this seems like a
> schoolyard squabble about who can jump from the highest point on a wall
> without breaking a leg..
>
> On the other hand.. it also feels like a troll* standing in a North
> Carolina field saying to Orville.. "OK, so it worked for 12 seconds, but I
> bet this will never fly across an ocean!"
>
> OR
>
> " (1961) sure sure, you got a capsule in the upper stratosphere, but  I
> bet you will never get to  the moon".
>
> OR
>
> "1994, Ok,  your computational biology model can do protein folding with
> about 40% match.. 20 years later not much improvement (60%).. so I bet
> you'll never reach 90% match".    (in 2020, Deepmind published
> Alphafold--which reached over 94% matches).
>
>
> So this type of counterfactual silliness is simply due to our deep
> ignorance of the technologies of the future.. but who could know the tech
> of the future?
>
> It's really, really, really early in what is happening in AI now. Sniping
> at it at this point is sort of pointless, as we just don't know a lot yet.
>
> (1) How do DL models learn? (2) How do DL models represent knowledge? (3)
> What do DL models have to do with the brain?
>
> Instead, here's a useful project:
>
> Recent work in language acquisition by Yang and Piantadosi (PNAS 2022),
> who developed a symbolic model--similar to what Chomsky described as a
> universal learning model (starting with recursion)--seems to work
> surprisingly well.  They provide a large archive of learning problems
> (FSM, CF, and CS cases).. it would be an interesting project for
> someone interested in RNN-DLs or LSTMs to show the same results, without
> the symbolic algorithm they defined.
>
> Y. Yang and S. T. Piantadosi, "One model for the learning of language,"
> PNAS, January 24, 2022.
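>
> (A minimal sketch of what such a replication attempt might look like,
> assuming PyTorch and using one classic context-free case, membership in
> a^n b^n, as a stand-in; the data generator and names below are
> illustrative, not taken from the Yang & Piantadosi archive:)
>
>   import random
>   import torch
>   import torch.nn as nn
>
>   VOCAB = {"a": 0, "b": 1}
>
>   def make_example(max_n=12):
>       # positive: a^n b^n; negative: mismatched counts, outside the language
>       n = random.randint(1, max_n)
>       if random.random() < 0.5:
>           return "a" * n + "b" * n, 1.0
>       m = random.choice([k for k in range(1, max_n + 1) if k != n])
>       return "a" * n + "b" * m, 0.0
>
>   class LSTMClassifier(nn.Module):
>       def __init__(self, hidden=32):
>           super().__init__()
>           self.embed = nn.Embedding(len(VOCAB), 16)
>           self.lstm = nn.LSTM(16, hidden, batch_first=True)
>           self.out = nn.Linear(hidden, 1)
>
>       def forward(self, x):
>           h, _ = self.lstm(self.embed(x))
>           return self.out(h[:, -1]).squeeze()  # logit from final hidden state
>
>   model = LSTMClassifier()
>   opt = torch.optim.Adam(model.parameters(), lr=1e-3)
>   loss_fn = nn.BCEWithLogitsLoss()
>   for step in range(3000):
>       s, y = make_example()
>       x = torch.tensor([[VOCAB[c] for c in s]])
>       loss = loss_fn(model(x), torch.tensor(y))
>       opt.zero_grad(); loss.backward(); opt.step()
>
> Whether such a network generalizes to strings longer than those seen in
> training, rather than memorizing, is exactly the kind of result such a
> project would need to report.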
>
> Finally, AGI.. so this is an old idea, borrowed from L. L. Thurstone,
> who, in 1930, defined different types of human intelligence, including a
> type of "GENERAL Intelligence".  This led to IQ tests and frustrating
> attempts at finding it... instead leading Thurstone to invent factor
> analysis.  It's difficult enough to try and define human intelligence
> without claiming some sort of "G" factor for AI.  With due respect to my
> friends at DeepMind... this seems like a dead end.
>
> Cheers,
>
> Steve
>
>
> * a troll is a person who posts inflammatory, insincere, digressive,
> extraneous, or off-topic messages in an online community, with the intent
> of provoking readers into displaying emotional responses, or manipulating
> others' perceptions.
> On 6/9/22 4:33 PM, Gary Marcus wrote:
>
> Dear Dr. Hinton,
>
> You very directly asked my side to produce some tangible goals. Ernest
> Davis and I did precisely what you asked, and in return you described me
> (in a separate but public message that also appears to have come from your
> account) as deranged.  There is no world in which that is socially
> acceptable, or a positive step towards science.
>
> Your reaction is particularly striking because it is a clear outlier. In
> general, despite the perfectly reasonable questions that you asked about
> wording in your subsequent email (which would presumably need to be
> negotiated in any actually-implemented bet, as one moved from term sheet to
> contract), the community reaction has actually been quite favorable. LongNow
> offered to host it, Metaculus added it to their forecast site, Christian
> Szegedy placed a side bet and ran a lengthy discussion about the math
> proposition, etc.  Your reactions are so far out of range with any of the
> other reactions that I have received that I am frankly concerned for your
> well-being.
>
> §
>
> All that said, you raised one substantive question, and I will address it.
> I am reasonably familiar with the PaLM model that you refer to, and aware
> of the claim that it “understands” jokes. I am also aware of the fact that
> what is provided in the paper was a numerator without a denominator.
>
> There is no clarity about what was tried, or how general it was, or under
> what conditions it failed; subset is indeed the operative word. In no world
> would what was written pass traditional peer review. Even for a conference
> paper, what is reported about joke interpretation is woefully thin. It’s
> really not much to rest your theory on.
>
> I am also aware of the fact that your colleagues at Google failed to
> supply more detail when I requested it—never a good sign.
>
> How general is the ability? Is it a handful of paraphrases of jokes in a
> vast memorized database? Would it extend to other kinds of jokes? Could it
> (or related models like Gato, with visual input) explain this cartoon?
>
> [image: image1.jpeg]
>
> Or why this bit of dialog with GPT (prompt in bold) is inadvertently funny?
>
> *Sally's cow died yesterday. When will the cow be alive again? Let's think
> step by step.*
>
> First, the cow must be buried. This takes a few hours.
>
> Next, a new cow must be born. This takes nine months.
>
> Therefore, the earliest the cow will be alive again is nine months from
> now.
>
> Probably not.
>
> §
>
> What we have known since Eliza is that humans are easily seduced into
> anthropomorphizing machines. I am going to stand by my claim that current
> AI lacks understanding:
>
>    - one cannot derive a set of logic propositions from a large language
>    model
>    - one cannot reliably update a world model based on an LLM’s
>    calculations (a point that LeCun has also made, in slightly different terms)
>    - one cannot reliably reason from what an LLM derives
>    - LLMs themselves cannot reliably reason from what they are told.
>
> My point is not a Searlean one about the impossibility of machines
> thinking, just a reality of the limits of contemporary systems. On the
> latter point, I would also urge you to read my recent essay called “Horse
> rides Astronaut”, to see how easy it is to make up incorrect rationalizations
> about these models when they make errors.
>
> Inflated appraisals of their capabilities may serve some sort of political
> end, but will not serve science.
>
> I cannot undo whatever slight some reviewer did to Yann decades ago, but I
> can call the current field as I see it; I don’t believe that current
> systems have gotten significantly closer to what I described in that 2016
> conversation that you quote from. I absolutely stand by the claim that we
> are a long way from answering “the deeper questions in artificial
> intelligence, like how we understand language or how we reason about the
> world.” Since you are fond of quoting stuff I wrote 6 or 7 years ago,
> here’s a challenge that I proposed in The New Yorker in 2014; to me, I see
> no real progress on this sort of thing, thus far:
>
>
> *allow me to propose a Turing Test for the twenty-first century: build a
> computer program that can watch any arbitrary TV program or YouTube video
> and answer questions about its content—“Why did Russia invade Crimea?” or
> “Why did Walter White consider taking a hit out on Jessie?” Chatterbots
> like Goostman can hold a short conversation about TV, but only by bluffing.
> (When asked what “Cheers” was about, it responded, “How should I know, I
> haven’t watched the show.”) But no existing program—not Watson, not
> Goostman, not Siri—can currently come close to doing what any bright, real
> teenager can do: watch an episode of “The Simpsons,” and tell us when to
> laugh.*
>
>
> Can PaLM-E do that? I seriously doubt it.
>
>
> Dr. Gary Marcus
>
> Founder, Geometric Intelligence (acquired by Uber)
> Author of 5 books, including Rebooting AI, one of Forbes’ 7 Must Read Books
> in AI, and The Algebraic Mind, one of the key early works advocating
> neurosymbolic AI
>
>
>
>
>
>
> On Jun 9, 2022, at 11:34, Geoffrey Hinton <geoffrey.hinton at gmail.com> wrote:
>
>
> I shouldn't respond because your main aim is to get attention without
> going to the trouble of building something that works
> (personal communication, Y. LeCun), but I cannot resist pointing out the
> following Marcus claim from 2016:
>
> "People are very excited about big data and what it's giving them right
> now, but I'm not sure it's taking us closer to the deeper questions in
> artificial intelligence, like how we understand language or how we reason
> about the world."
>
> Given that big neural nets can now explain why a joke is funny (for some
> subset of jokes), do you still want to stick with this claim?  It seems to
> me that the reason you made this claim is that you have a strong prior
> belief about how language understanding and reasoning must work, and this
> belief is remarkably resistant to evidence.  Deep learning researchers have
> seen this before. Yann had a paper rejected by a vision conference even
> though it beat the state of the art, and one of the reasons given was that
> the model learned everything and therefore taught us nothing about how to
> do vision.  That particular referee had a strong idea of how computer
> vision must work and failed to notice that the success of Yann's model
> showed that that prior belief was spectacularly wrong.
>
> Geoff
>
>
>
>
> On Thu, Jun 9, 2022 at 3:41 AM Gary Marcus <gary.marcus at nyu.edu> wrote:
>
>> Dear Connectionists, and especially Geoff Hinton,
>>
>> It has come to my attention that Geoff Hinton is looking for challenging
>> targets. In a just-released episode of The Robot Brains podcast [
>> https://www.youtube.com/watch?v=4Otcau-C_Yc],
>> he said
>>
>> *“If any of the people who say [deep learning] is hitting a wall would
>> just write down a list of the things it’s not going to be able to do then
>> five years later, we’d be able to show we’d done them.”*
>>
>> Now, as it so happens, I (with the help of Ernie Davis) did just write
>> down exactly such a list of things last week, and indeed offered Elon Musk
>> a $100,000 bet along similar lines.
>>
>> Precise details are here, towards the end of the essay:
>>
>> https://garymarcus.substack.com/p/dear-elon-musk-here-are-five-things
>>
>> Five are specific milestones, in video and text comprehension, cooking,
>> math, etc.; the sixth is the proviso that for an intelligence to be deemed
>> “general” (which is what Musk was discussing in a remark that prompted my
>> proposal), it would need to solve a majority of the problems. We can
>> probably all agree that narrow AI for any single problem on its own might
>> be less interesting.
>>
>> Although there is no word yet from Elon, Kevin Kelly offered to host the
>> bet at LongNow.Org, and Metaculus.com has transformed the bet into 6
>> questions that the community can comment on.  Vivek Wadhwa, cc’d, quickly
>> offered to double the bet, and several others followed suit;  the bet to
>> Elon (should he choose to take it) currently stands at $500,000.
>>
>> If you’d like in on the bet, Geoff, please let me know.
>>
>> More generally, I’d love to hear what the connectionists community thinks
>> of the six criteria I laid out (as well as the arguments at the top of the
>> essay, as to why AGI might not be as imminent as Musk seems to think).
>>
>> Cheers.
>> Gary Marcus
>>
> --
>
>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image1.jpeg
Type: image/jpeg
Size: 237859 bytes
Desc: not available
URL: <http://mailman.srv.cs.cmu.edu/pipermail/connectionists/attachments/20220613/302bf33d/attachment.jpeg>

