<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body text="#000000" bgcolor="#ecca99">
<p><font size="+1"><font face="monospace">I was thinking more of an RNN,
similar to work we had done in the 2000s on syntax.</font></font></p>
<p><font size="+1"><font face="monospace">Stephen José Hanson,
Michiro Negishi; On the Emergence of Rules in Neural Networks.
Neural Comput 2002; 14 (9): 2245–2268. doi:
<a class="moz-txt-link-freetext" href="https://doi.org/10.1162/089976602320264079">https://doi.org/10.1162/089976602320264079</a></font></font></p>
<p><font size="+1"><font face="monospace">Abstract<br>
A simple associationist neural network learns to factor
abstract rules (i.e., grammars) from sequences of arbitrary
input symbols by inventing abstract representations that
accommodate unseen symbol sets as well as unseen but similar
grammars. The neural network is shown to have the ability to
transfer grammatical knowledge to both new symbol vocabularies
and new grammars. Analysis of the state-space shows that the
network learns generalized abstract structures of the input
and is not simply memorizing the input strings. These
representations are context sensitive, hierarchical, and based
on the state variable of the finite-state machines that the
neural network has learned. Generalization to new symbol sets
or grammars arises from the spatial nature of the internal
representations used by the network, allowing new symbol sets
to be encoded close to symbol sets that have already been
learned in the hidden unit space of the network. The results
are counter to the arguments that learning algorithms based on
weight adaptation after each exemplar presentation (such as
the long term potentiation found in the mammalian nervous
system) cannot in principle extract symbolic knowledge from
positive examples as prescribed by prevailing human linguistic
theory and evolutionary psychology.<br>
</font></font><br>
</p>
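<p><font size="+1"><font face="monospace">For concreteness, here is a
minimal sketch of that kind of setup -- not the 2002 architecture, just
an illustration assuming PyTorch, with the Reber grammar standing in for
a finite-state grammar: a small RNN is trained to predict the next
symbol in grammatical strings, and its hidden states can then be probed
for the grammar's state structure.</font></font></p>
<pre>
# Minimal illustration (not the 2002 architecture; assumes PyTorch is installed):
# train a small RNN to predict the next symbol in strings generated by a
# finite-state grammar (the Reber grammar is used here purely as a stand-in).
import random
import torch
import torch.nn as nn

# Finite-state grammar: state -> list of (emitted symbol, next state)
GRAMMAR = {
    0: [("T", 1), ("P", 2)],
    1: [("S", 1), ("X", 3)],
    2: [("T", 2), ("V", 4)],
    3: [("X", 2), ("S", 5)],
    4: [("P", 3), ("V", 5)],
}
SYMBOLS = ["B", "T", "P", "S", "X", "V", "E"]
IDX = {s: i for i, s in enumerate(SYMBOLS)}

def sample_string():
    """Walk the FSM from start to final state, returning one grammatical string."""
    state, out = 0, ["B"]
    while state != 5:
        sym, state = random.choice(GRAMMAR[state])
        out.append(sym)
    return out + ["E"]

class NextSymbolRNN(nn.Module):
    def __init__(self, n_symbols=len(SYMBOLS), hidden=16):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_symbols)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.out(h)  # next-symbol logits at every position

model = NextSymbolRNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.CrossEntropyLoss()

for step in range(2000):
    s = torch.tensor([[IDX[c] for c in sample_string()]])
    logits = model(s[:, :-1])                 # inputs: all but the last symbol
    loss = loss_fn(logits.reshape(-1, len(SYMBOLS)), s[:, 1:].reshape(-1))
    opt.zero_grad()
    loss.backward()
    opt.step()

# After training, the GRU hidden states can be clustered to probe whether the
# network has recovered the grammar's state structure, analogous to the
# state-space analysis described in the abstract above.
</pre>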
<div class="moz-cite-prefix">On 6/13/22 8:55 AM, Gary Marcus wrote:<br>
</div>
<blockquote type="cite"
cite="mid:60B7CD76-2DCE-4DB9-9ECD-96D046916BB8@nyu.edu">
<meta http-equiv="content-type" content="text/html; charset=UTF-8">
<div dir="ltr">– agree with Steve this is an interesting paper,
and replicating it with a neural net would be worthwhile;
cc’ing Steve Piantadosi. </div>
<div dir="ltr">— why not use a Transformer, though?</div>
<div dir="ltr">- it is, however, importantly missing semantics.
(Steve P. tells me there is some related work that is worth
looking into). Y&P speaks to an old tradition of formal
language work by Gold and others that is quite popular but IMHO
misguided, because it focuses purely on syntax rather than
semantics. Gold’s work definitely motivates learnability, but I
have never taken it too seriously as a real model of language.</div>
<div dir="ltr">- doing what Y&P try to do with a rich
artificial language that is focused around syntax-semantic
mappings could be very interesting</div>
<div dir="ltr">- on a somewhat but not entirely analogous note, I
think that the next step in vision is really scene
understanding. We have techniques for doing object labeling
reasonably well, but we still struggle with parts and wholes,
and with relations more generally, which is to say we
need the semantics of scenes. Is the chair on the floor, or
floating in the air? Is it supporting the pillow? Is the
hand a part of the body? Is the glove a part of the body? etc.</div>
<div dir="ltr"><br>
</div>
<div dir="ltr">Best,</div>
<div dir="ltr">Gary</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><br>
</div>
<div dir="ltr"><br>
<blockquote type="cite">On Jun 13, 2022, at 05:18,
<a class="moz-txt-link-abbreviated" href="mailto:jose@rubic.rutgers.edu">jose@rubic.rutgers.edu</a> wrote:<br>
<br>
</blockquote>
</div>
<blockquote type="cite">
<div dir="ltr">
<p><font size="+1">Again, I think a relevant project here
would be to attempt to replicate, with a DL-RNN, Yang and
Piantadosi's PNAS language learning system--which is
completely symbolic and very general over the
Chomsky-Miller grammar classes. Let me know; happy to
collaborate on something like this.<br>
</font></p>
<p><font size="+1">Best</font></p>
<p><font size="+1">Steve<br>
</font></p>
</div>
</blockquote>
</blockquote>
</body>
</html>