Connectionists: Yang and Piantadosi’s PNAS language system, semantics, and scene understanding
jose at rubic.rutgers.edu
Tue Jun 14 12:13:22 EDT 2022
Great,
these are all grammar strings -- nothing semantic, right?
Gary and I had some confusion about that, but I read the paper.
Steve
On 6/14/22 12:11 PM, Steven T. Piantadosi wrote:
>
> All of our training/test data is on the github, but please let me know
> if I can help!
>
> Steve
>
>
> On 6/13/22 06:13, Gary Marcus wrote:
>> I do remember the work :) Just generally Transformers seem more
>> effective; a careful comparison between Y&P, Transformers, and your
>> RNN approach, looking at generalization to novel words, would indeed
>> be interesting.
>> Cheers,
>> Gary
>>
>>> On Jun 13, 2022, at 06:09, jose at rubic.rutgers.edu wrote:
>>>
>>>
>>>
>>> I was thinking more of an RNN, similar to work we had done in the
>>> 2000s on syntax.
>>>
>>> Stephen José Hanson, Michiro Negishi; On the Emergence of Rules in
>>> Neural Networks. Neural Comput 2002; 14 (9): 2245–2268. doi:
>>> https://doi.org/10.1162/089976602320264079
>>>
>>> Abstract
>>> A simple associationist neural network learns to factor abstract
>>> rules (i.e., grammars) from sequences of arbitrary input symbols by
>>> inventing abstract representations that accommodate unseen symbol
>>> sets as well as unseen but similar grammars. The neural network is
>>> shown to have the ability to transfer grammatical knowledge to both
>>> new symbol vocabularies and new grammars. Analysis of the
>>> state-space shows that the network learns generalized abstract
>>> structures of the input and is not simply memorizing the input
>>> strings. These representations are context sensitive, hierarchical,
>>> and based on the state variable of the finite-state machines that
>>> the neural network has learned. Generalization to new symbol sets or
>>> grammars arises from the spatial nature of the internal
>>> representations used by the network, allowing new symbol sets to be
>>> encoded close to symbol sets that have already been learned in the
>>> hidden unit space of the network. The results are counter to the
>>> arguments that learning algorithms based on weight adaptation after
>>> each exemplar presentation (such as the long term potentiation found
>>> in the mammalian nervous system) cannot in principle extract
>>> symbolic knowledge from positive examples as prescribed by
>>> prevailing human linguistic theory and evolutionary psychology.
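
A minimal sketch of the kind of vocabulary-transfer probe described in the abstract above (hypothetical PyTorch code; not the 2002 model, its grammars, or its training regime): train a small recurrent network on next-symbol prediction over strings from a toy regular grammar, then freeze the recurrent weights and re-learn only the symbol embeddings and readout on a relabeled vocabulary.

import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

# Toy regular grammar over a 3-symbol vocabulary: one or more "A B" pairs
# followed by "C" (ABC, ABABC, ...). Symbols are integer ids.
def sample_string(symbols, max_pairs=5):
    n = random.randint(1, max_pairs)
    return [symbols[0], symbols[1]] * n + [symbols[2]]

def make_batch(symbols, n=64):
    seqs = [sample_string(symbols) for _ in range(n)]
    length = max(len(s) for s in seqs)
    padded = [s + [s[-1]] * (length - len(s)) for s in seqs]  # pad by repeating the final symbol
    x = torch.tensor(padded)
    return x[:, :-1], x[:, 1:]  # inputs and next-symbol targets

class TinyRNN(nn.Module):
    def __init__(self, vocab=6, hidden=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, vocab)

    def forward(self, x):
        h, _ = self.rnn(self.emb(x))
        return self.out(h)

def train(model, symbols, params, steps=300):
    opt = torch.optim.Adam(params, lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        x, y = make_batch(symbols)
        loss = loss_fn(model(x).reshape(-1, 6), y.reshape(-1))
        opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

model = TinyRNN()
print("loss on original vocabulary:", train(model, [0, 1, 2], model.parameters()))

# Transfer test: same grammar, new symbol ids {3, 4, 5}. Freeze the recurrent
# weights and adapt only the embeddings and readout, asking whether the
# learned grammatical structure carries over to the new vocabulary.
for p in model.rnn.parameters():
    p.requires_grad = False
transfer_params = list(model.emb.parameters()) + list(model.out.parameters())
print("loss on new vocabulary:", train(model, [3, 4, 5], transfer_params, steps=150))

If the frozen recurrent dynamics encode the grammar rather than the particular symbols, the loss on the relabeled vocabulary should fall much faster than learning from scratch.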
>>>
>>> On 6/13/22 8:55 AM, Gary Marcus wrote:
>>>> – agree with Steve this is an interesting paper, and replicating it
>>>> with a neural net would be interesting; cc’ing Steve Piantadosi.
>>>> — why not use a Transformer, though?
>>>> - it is, however, importantly missing semantics. (Steve P. tells me
>>>> there is some related work that is worth looking into.) Y&P speaks
>>>> to an old tradition of formal language work by Gold and others that
>>>> is quite popular but IMHO misguided, because it focuses purely on
>>>> syntax rather than semantics. Gold’s work definitely motivates
>>>> learnability, but I have never taken it too seriously as a real model
>>>> of language.
>>>> - doing what Y&P try to do with a rich artificial language that is
>>>> focused around syntax-semantic mappings could be very interesting
>>>> - on a somewhat but not entirely analogous note, I think that the
>>>> next step in vision is really scene understanding. We have
>>>> techniques for doing object labeling reasonably well, but we still
>>>> struggle with parts and wholes, and with relations more generally,
>>>> which is to say we need the semantics of scenes. Is the chair on
>>>> the floor, or floating in the air? Is it supporting the pillow? Is
>>>> the hand a part of the body? Is the glove a part of the body? Etc.
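
As a toy illustration of what "the semantics of scenes" might minimally require as a representation (a hypothetical sketch, not any existing system): objects with part-whole structure plus explicit relations such as support and containment, rather than object labels alone.

from dataclasses import dataclass, field

@dataclass
class SceneObject:
    name: str
    parts: list = field(default_factory=list)   # part-whole structure

@dataclass
class Relation:
    subject: SceneObject
    predicate: str                               # e.g. "on", "supports", "worn_on"
    obj: SceneObject

hand = SceneObject("hand")
body = SceneObject("body", parts=[hand])         # the hand is a part of the body
glove = SceneObject("glove")                     # the glove is not
chair, floor, pillow = SceneObject("chair"), SceneObject("floor"), SceneObject("pillow")

scene = [
    Relation(chair, "on", floor),                # resting on the floor, not floating
    Relation(chair, "supports", pillow),
    Relation(glove, "worn_on", hand),
]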
>>>>
>>>> Best,
>>>> Gary
>>>>
>>>>
>>>>
>>>>> On Jun 13, 2022, at 05:18, jose at rubic.rutgers.edu wrote:
>>>>>
>>>>> Again, I think a relevant project here would be to attempt to
>>>>> replicate, with a DL-RNN, Yang and Piantadosi's PNAS language
>>>>> learning system -- which is completely symbolic -- and very general
>>>>> over the Chomsky-Miller grammar classes. Let me know; happy to
>>>>> collaborate on something like this.
>>>>>
>>>>> Best
>>>>>
>>>>> Steve
>>>>>