Connectionists: Stephen Hanson in conversation with Geoff Hinton

Juyang Weng juyang.weng at gmail.com
Wed Feb 9 16:22:29 EST 2022


Gary,

The examples you mentioned in the email have already been completely solved
and mathematically proven by the emergent Turing machine theory using a
Developmental Network (DN):
J. Weng, "Brain as an Emergent Finite Automaton: A Theory and Three
Theorems," *International Journal of Intelligence Science*, vol. 5, no. 2,
pp. 112-131, Jan. 2015. PDF file
<http://www.scirp.org/journal/PaperDownload.aspx?paperID=53728>. (This is
the journal version of the above IJCNN paper. It explains that the control
of a Turing Machine, regular or universal, is a Finite Automaton, and
therefore a DN can learn any universal Turing Machine one transition at a
time, immediately and error-free if there is sufficient neuronal resource,
and optimally in the sense of maximum likelihood if there is insufficient
neuronal resource.)
This is good news.
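The claim above, that a finite-automaton controller can be learned one
transition at a time, error-free when resources suffice, can be illustrated
with a toy sketch. This is not Weng's DN (which uses competing neurons, not
a lookup table); it only shows the incremental, one-transition-at-a-time
learning regime being described. All names here are illustrative:

```python
# Toy sketch (not the DN itself): incrementally learning the control of a
# machine, i.e. a finite automaton's transition function, one supervised
# transition at a time. With sufficient "resources" (table entries), each
# taught transition is stored immediately and recalled without error.

class IncrementalFA:
    def __init__(self):
        # (state, input symbol) -> (next state, output)
        self.delta = {}

    def teach(self, state, symbol, next_state, output):
        # One supervised transition; learned immediately and exactly.
        self.delta[(state, symbol)] = (next_state, output)

    def step(self, state, symbol):
        # Recall a previously taught transition.
        return self.delta[(state, symbol)]

# Teach a tiny two-state controller transition by transition.
fa = IncrementalFA()
fa.teach("q0", "a", "q1", "x")
fa.teach("q1", "b", "q0", "y")

# Run the learned controller on an input string.
state = "q0"
outputs = []
for sym in "abab":
    state, out = fa.step(state, sym)
    outputs.append(out)
# outputs is now ["x", "y", "x", "y"] and state is back to "q0"
```

The maximum-likelihood behavior under insufficient resources would require
neurons competing for transitions, which this lookup-table toy does not model.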

The bad news: psychologists, neuroscientists, and neural network researchers
who have not learned the theory of encoding a universal Turing machine are
unable to understand the fundamental solution.

That is why you feel comfortable with only intuitive examples.

Please suggest how to address this fundamental communication problem facing
this community.  You are not alone on this list.

Best regards,
-John
----
Date: Mon, 7 Feb 2022 14:07:27 -0800
From: Gary Marcus <gary.marcus at nyu.edu>
To: Stephen José Hanson <jose at rubic.rutgers.edu>
Cc: AIhub <aihuborg at gmail.com>, connectionists at mailman.srv.cs.cmu.edu
Subject: Re: Connectionists: Stephen Hanson in conversation with Geoff
        Hinton
Message-ID: <BB0190E0-193B-4AFC-9AE1-A7C0961E8D60 at nyu.edu>
Content-Type: text/plain; charset="utf-8"

Stephen,

I don't doubt for a minute that deep learning can characterize some aspects
of psychology reasonably well; but either it needs to expand its borders
or else be used in conjunction with other techniques. Take, for example,
the name of the new Netflix show

The Woman in the House Across the Street from the Girl in the Window

Most of us can infer, compositionally, from that unusually long noun
phrase, that the title is a description of a particular person, that the
title is not a complete sentence, and that the woman in question lives in a
house; we also infer that there is a second, distinct person (likely a
child) across the street, and so forth. We can also use some knowledge of
pragmatics to infer that the woman in question is likely to be the
protagonist of the show. Current systems still struggle with that sort of
thing.

We can then watch the show (I watched a few minutes of Episode 1) and
quickly relate the title to the protagonist's mental state, start to
develop a mental model of the protagonist's relation to her new neighbors,
make inferences about whether certain choices appear to be "within
character", empathize with the character or question her judgments, etc., all
with respect to a mental model that is rapidly encoded and quickly modified.

I think that an understanding of how people build and modify such models
would be extremely valuable (not just for fiction but for everyday reality),
but I don't see how deep learning in its current form gives us much
purchase on that. There is plenty of precedent for the kind of mental
processes I am sketching (e.g., Walter Kintsch's work on text comprehension;
Kamp/Kratzer/Heim work on discourse representation, etc.) from psychological
and linguistic perspectives, but almost no current contact in the neural
network community with these well-attested psychological processes.

Gary

-- 
Juyang (John) Weng
