<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1"><font face="monospace">Great,</font></font></p>
<p><font size="+1"><font face="monospace">these are all grammar
strings.. nothing semantic --right?</font></font></p>
<p><font size="+1"><font face="monospace">Gary and I had confusion
about that.. but I read the paper..</font></font></p>
<p><font size="+1"><font face="monospace">Steve<br>
</font></font></p>
<div class="moz-cite-prefix">On 6/14/22 12:11 PM, Steven T.
Piantadosi wrote:<br>
</div>
<blockquote type="cite"
cite="mid:b980a1da-8f88-f787-9e1d-b5f239c1cc1b@gmail.com">
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
<p>All of our training/test data is on GitHub, but please let
me know if I can help!</p>
<p>Steve</p>
<p><br>
</p>
<div class="moz-cite-prefix">On 6/13/22 06:13, Gary Marcus wrote:<br>
</div>
<blockquote type="cite"
cite="mid:9D553D9E-CA9A-43E6-840F-0DD05F3C4D9E@nyu.edu">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8">
<div dir="ltr"> I do remember the work :) Just generally
Transformers seem more effective; a careful comparison between
Y&P, Transformers, and your RNN approach, looking at
generalization to novel words, would indeed be interesting.</div>
<div dir="ltr">Cheers, </div>
<div dir="ltr">Gary </div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 06:09, <a
class="moz-txt-link-abbreviated"
href="mailto:jose@rubic.rutgers.edu"
moz-do-not-send="true">jose@rubic.rutgers.edu</a> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<meta http-equiv="Content-Type" content="text/html;
charset=UTF-8">
<p><font size="+1"><font face="monospace">I was thinking
more like an RNN similar to work we had done in the
2000s.. on syntax.</font></font></p>
<p><font size="+1"><font face="monospace">Stephen José
Hanson, Michiro Negishi; On the Emergence of Rules in
Neural Networks. Neural Comput 2002; 14 (9):
2245–2268. doi: <a class="moz-txt-link-freetext"
href="https://doi.org/10.1162/089976602320264079">https://doi.org/10.1162/089976602320264079</a></font></font></p>
<p><font size="+1"><font face="monospace">Abstract<br>
A simple associationist neural network learns to
factor abstract rules (i.e., grammars) from sequences
of arbitrary input symbols by inventing abstract
representations that accommodate unseen symbol sets as
well as unseen but similar grammars. The neural
network is shown to have the ability to transfer
grammatical knowledge to both new symbol vocabularies
and new grammars. Analysis of the state-space shows
that the network learns generalized abstract
structures of the input and is not simply memorizing
the input strings. These representations are context
sensitive, hierarchical, and based on the state
variable of the finite-state machines that the neural
network has learned. Generalization to new symbol sets
or grammars arises from the spatial nature of the
internal representations used by the network, allowing
new symbol sets to be encoded close to symbol sets
that have already been learned in the hidden unit
space of the network. The results are counter to the
arguments that learning algorithms based on weight
adaptation after each exemplar presentation (such as
the long term potentiation found in the mammalian
nervous system) cannot in principle extract symbolic
knowledge from positive examples as prescribed by
prevailing human linguistic theory and evolutionary
psychology.<br>
</font></font><br>
</p>
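<p><font size="+1"><font face="monospace">A minimal sketch of the
kind of setup the abstract describes, assuming a modern PyTorch
reimplementation rather than the original 2002 simulations (the toy
grammar and all names below are illustrative placeholders): an
Elman-style RNN is trained on next-symbol prediction over strings
from a small finite-state grammar, and vocabulary transfer is probed
by optimizing only the embedding and readout layers on a disjoint
symbol set while the recurrent weights are left untouched.</font></font></p>
<pre>
# Illustrative sketch only, not the original 2002 simulations: an Elman-style RNN
# learns next-symbol prediction on strings from a toy finite-state grammar, then
# transfer to a new symbol vocabulary is probed by optimizing only the embedding
# and readout layers while the recurrent weights are left untouched.
import random
import torch
import torch.nn as nn

# Toy finite-state grammar: each state maps to (symbol, next state) choices.
GRAMMAR = {
    0: [("A", 1), ("B", 2)],
    1: [("C", 2), ("A", 3)],
    2: [("B", 3)],
    3: [("END", None)],
}

def sample_string(symbol_ids):
    """Walk the FSM from state 0, emitting ids from the given symbol mapping."""
    state, out = 0, []
    while state is not None:
        sym, state = random.choice(GRAMMAR[state])
        out.append(symbol_ids[sym])
    return out

class ElmanLM(nn.Module):
    def __init__(self, vocab_size=8, hidden=32):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.rnn = nn.RNN(hidden, hidden, batch_first=True)  # simple (Elman) recurrence
        self.out = nn.Linear(hidden, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)

def train(model, strings, params, steps=2000, lr=1e-2):
    """Next-symbol prediction; only `params` are updated by the optimizer."""
    opt = torch.optim.Adam(params, lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        seq = torch.tensor([random.choice(strings)])
        logits = model(seq[:, :-1])
        loss = loss_fn(logits.reshape(-1, logits.size(-1)), seq[:, 1:].reshape(-1))
        opt.zero_grad()
        loss.backward()
        opt.step()
    return loss.item()

# Two disjoint symbol vocabularies for the *same* grammar.
vocab1 = {"A": 0, "B": 1, "C": 2, "END": 3}
vocab2 = {"A": 4, "B": 5, "C": 6, "END": 7}
model = ElmanLM()

strings1 = [sample_string(vocab1) for _ in range(200)]
print("vocab 1 loss:", train(model, strings1, model.parameters()))

# Transfer: only embedding + readout adapt to vocab 2; recurrent weights stay fixed.
strings2 = [sample_string(vocab2) for _ in range(200)]
new_params = list(model.embed.parameters()) + list(model.out.parameters())
print("vocab 2 loss (recurrent weights untouched):", train(model, strings2, new_params))
</pre>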
<div class="moz-cite-prefix">On 6/13/22 8:55 AM, Gary Marcus
wrote:<br>
</div>
<blockquote type="cite"
cite="mid:60B7CD76-2DCE-4DB9-9ECD-96D046916BB8@nyu.edu">
<meta http-equiv="content-type" content="text/html;
charset=UTF-8">
<div dir="ltr">– agree with Steve this is an interesting
paper, and replicating it with a neural net would be
interesting; cc’ing Steve Piantosi. </div>
<div dir="ltr">— why not use a Transformer, though?</div>
<div dir="ltr">- it is however importantly missing
semantics. (Steve P. tells me there is some related work
that is worth looking into). Y&P speaks to an old
tradition of formal language work by Gold and others
that is quite popular but IMHO misguided, because it
focuses purely on syntax rather than semantics. Gold’s
work definitely motivates learnability but I have never
taken it to seriously as a real model of language</div>
<div dir="ltr">- doing what Y&P try to do with a rich
artificial language that is focused around
syntax-semantic mappings could be very interesting</div>
<div dir="ltr">- on a somewhat but not entirely analogous
note, i think that the next step in vision is really
scene understanding. We have techniques for doing object
labeling reasonably well, but still struggle wit parts
and wholes are important, and with relations more
generally, which is to say we need the semantics of
scenes. is the chair on the floor, or floating in the
air? is it supporting the pillow? etc. is the hand a
part of the body? is the glove a part of the body? etc</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Best,</div>
<div dir="ltr">Gary</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 05:18, <a
class="moz-txt-link-abbreviated
moz-txt-link-freetext"
href="mailto:jose@rubic.rutgers.edu"
moz-do-not-send="true">jose@rubic.rutgers.edu</a>
wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p><font size="+1">Again, I think a relevant project
here would be to attempt to replicate with
DL-rnn, Yang and Piatiadosi's PNAS language
learning system--which is a completely symbolic--
and very general over the Chomsky-Miller grammer
classes. Let me know, happy to collaborate on
something like this.<br>
</font></p>
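<p><font size="+1">A minimal sketch of what such a replication could
look like, under illustrative assumptions (the a^n b^n target
language, the LSTM architecture, and all names below are
placeholders, not Yang and Piantadosi's actual task set): train an
RNN language model on positive strings from one formal language and
probe generalization by comparing negative log-likelihoods of longer
grammatical strings against length-mismatched ones.</font></p>
<pre>
# Illustrative sketch, not Yang and Piantadosi's actual task set: an LSTM language
# model is trained on positive strings from one formal language, here a^n b^n
# (context-free), and generalization is probed by comparing negative log-likelihoods
# of longer grammatical strings against length-mismatched ones.
import torch
import torch.nn as nn

A, B, END = 0, 1, 2

def anbn(n):
    """A positive example of the language a^n b^n, with an explicit end marker."""
    return [A] * n + [B] * n + [END]

class LSTMLM(nn.Module):
    def __init__(self, vocab=3, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.lstm(self.embed(x))
        return self.out(h)

def nll(model, seq):
    """Total negative log-likelihood of one string under the model."""
    x = torch.tensor([seq])
    logits = model(x[:, :-1])
    return nn.functional.cross_entropy(
        logits.reshape(-1, logits.size(-1)), x[:, 1:].reshape(-1), reduction="sum"
    )

model = LSTMLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Train on positive examples only, n = 1..7.
for step in range(3000):
    loss = nll(model, anbn(1 + step % 7))
    opt.zero_grad()
    loss.backward()
    opt.step()

# Probe: longer, unseen lengths vs. strings with one extra b.
with torch.no_grad():
    for n in (10, 11, 12):
        good = anbn(n)
        bad = [A] * n + [B] * (n + 1) + [END]
        print(n, "grammatical NLL:", round(nll(model, good).item(), 2),
              "ungrammatical NLL:", round(nll(model, bad).item(), 2))
</pre>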
<p><font size="+1">Best</font></p>
<p><font size="+1">Steve<br>
</font></p>
</div>
</blockquote>
</blockquote>
</div>
</blockquote>
</blockquote>
</blockquote>
</body>
</html>