Connectionists: Computational Modeling of Bilingualism Special Issue

Stephen José Hanson jose at psychology.rutgers.edu
Tue Mar 26 13:01:37 EDT 2013


Not really.

Rule emergence in the Neural Computation paper is, as far as I am
aware (happy to be corrected on this point), the first and only
evidence that one can *learn* and transfer rules (grammar) across
lexicons. In those experiments, RNNs were trained on the same FSM
while the lexicon was changed in each condition; these new lexicons
were trained to criterion, and then came a final transfer to a NOVEL
lexicon, to which the RNN transferred at a 60% savings rate.  All the
other cases I am aware of--including yours--use a hybrid hack to
demonstrate "rules" in connectionist networks.
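
To make the design of those transfer experiments concrete, here is a
minimal sketch (the FSM, transition table, and nonsense lexicons below
are invented for illustration; they are not the ones from the Neural
Computation paper): the same abstract grammar is realized under
different surface vocabularies, which is what makes cross-lexicon
transfer measurable at all.

```python
import random

# Hypothetical 3-state FSM: (state, abstract_symbol) -> next_state.
# State 2 is the accepting state.  This is NOT the grammar from the paper.
TRANSITIONS = {
    (0, "A"): 1, (0, "B"): 0,
    (1, "B"): 2, (1, "A"): 1,
    (2, "A"): 0,
}
ACCEPT = {2}

def generate(max_len=8, rng=random):
    """Random walk over the FSM, emitting abstract symbols until an
    accepting state is reached (or max_len symbols are emitted)."""
    state, out = 0, []
    while len(out) < max_len:
        options = [(sym, nxt) for (st, sym), nxt in TRANSITIONS.items()
                   if st == state]
        sym, state = rng.choice(options)
        out.append(sym)
        if state in ACCEPT:
            break
    return out

# Two arbitrary lexicons mapping the same abstract grammar onto different
# surface vocabularies -- the structure is shared, the words are not.
LEXICON_1 = {"A": "bif", "B": "zog"}
LEXICON_2 = {"A": "ker", "B": "lum"}

abstract = generate(rng=random.Random(0))
print([LEXICON_1[s] for s in abstract])  # same structure, lexicon 1
print([LEXICON_2[s] for s in abstract])  # same structure, lexicon 2
```

An RNN trained to criterion under LEXICON_1 and then retrained under
LEXICON_2 can only show savings if it has abstracted the shared FSM
structure rather than memorizing surface strings.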

I also refer you to an earlier article I wrote for BBS some time ago,
which also attempted to broach this issue.

Hanson, S. J. & Burr, D. J. (1990), What Connectionist Models Learn:
Toward a Theory of Representation in Connectionist Networks, Behavioral
and Brain Sciences, 13, 471-518. (DjVu available for download on my
website: http://nwkpsych.rutgers.edu/~jose/)

Steve

Tue, 26 Mar 2013 12:54:58 -0400 __________

Steve,

Rule emergence in neural networks has been studied for a very long time.
The communication gap here is very wide, although it is good to
communicate.
Does my follow-up email to Janet Wiles help to clarify?

-John

On 3/26/13 6:57 AM, Stephen José Hanson wrote:
> "As far as I understand, traditional connectionist architectures
> cannot do abstraction well, as Marvin Minsky, Michael Jordan and many
> others correctly stated."
>
> Actually, this is not the case: many of us over the years have shown
> that recurrent networks, for one, do abstract generalization over
> types and tokens and literally bootstrap the required
> representational structure as the networks learn. See a couple of
> papers below.
>
> Hanson S. J. & Negishi M., (2002) On the Emergence of Rules in Neural
> Networks, Neural Computation, 14, 1-24.
>
> Hanson, C. & Hanson S. J. (1996), Development of Schemata During Event
> Parsing: Neisser's Perceptual Cycle as a Recurrent Connectionist
> Network, Journal of Cognitive Neuroscience, 8, 119-134.
>
> And I am pretty sure that neither Marvin Minsky nor Michael Jordan
> made a claim about cognitive/perceptual abstraction of recurrent
> networks.
>
>
> Steve
>
>
> Tue, 26 Mar 2013 03:30:22 +0000 __________
>
> Recurrent neural networks can represent, and in some cases learn and
> generalise, classes of languages beyond finite state machines. For a
> review of their capabilities, see the excellent edited book by Kolen
> and Kremer: e.g., ch. 8 is on "Representation Beyond Finite States"
> and ch. 9 on "Universal Computation and Super-Turing Capabilities".
>
> Kolen, J. F. & Kremer, S. C. (2001), A Field Guide to Dynamical
> Recurrent Networks, IEEE Press.
>
> From: connectionists-bounces at mailman.srv.cs.cmu.edu
> [mailto:connectionists-bounces at mailman.srv.cs.cmu.edu] On Behalf Of
> Juyang Weng
> Sent: Sunday, 24 March 2013 9:17 AM
> To: connectionists at mailman.srv.cs.cmu.edu
> Subject: Re: Connectionists: Computational Modeling of Bilingualism
> Special Issue
>
> Ping Li:
>
> As far as I understand, traditional connectionist architectures cannot
> do abstraction well, as Marvin Minsky, Michael Jordan and many others
> correctly stated.  For example, traditional neural networks could not
> learn a finite automaton (FA) until recently (i.e., the proof of our
> Developmental Network).  We all know that the FA is the basis for all
> probabilistic symbolic networks (e.g., Markov models), but none of
> them is connectionist.
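
To unpack "state-equivalent but unobserved word sequences" (from the
message below): a toy two-state DFA (invented here purely for
illustration; it is not the Developmental Network) shows how two
different prefixes that reach the same state license identical
continuations, so a learner that tracks state generalizes to sequences
it has never observed, while a pure string memorizer cannot.

```python
# Toy two-state DFA (invented for illustration): it accepts strings
# over {a, b} with an even number of a's.
# State 0 = even count of a's (accepting), state 1 = odd count.
DELTA = {(0, "a"): 1, (0, "b"): 0,
         (1, "a"): 0, (1, "b"): 1}

def run(seq, state=0):
    """Return the DFA state reached after consuming seq from `state`."""
    for sym in seq:
        state = DELTA[(state, sym)]
    return state

# "ab" and "ba" are different surface strings but state-equivalent:
# both drive the DFA to the same state, so every continuation behaves
# identically after either prefix -- including continuations that were
# never observed during training.
assert run("ab") == run("ba")
for suffix in ["", "a", "b", "ab", "aab"]:
    assert run("ab" + suffix) == run("ba" + suffix)
print("state-equivalent prefixes license identical continuations")
```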
>
> After seeing your announcement, I am confused by the title
> "Bilingualism Special Issue: Computational Modeling of Bilingualism"
> paired with your comment that "most of the models are based on
> connectionist architectures."
>
> Without further clarifications from you, I have to predict that these
> connectionist architectures in the book are all grossly wrong in terms
> of brain-capable connectionist natural language processing, since they
> cannot learn an FA.   This means that they cannot generalize to
> state-equivalent but unobserved word sequences.   Without this basic
> capability required for natural language processing, how can they
> claim connectionist natural language processing, let alone
> bilingualism?
>
> I am concerned that many papers proceed with specific problems without
> understanding the fundamental problems of traditional connectionism.
> The fact that the biological brain is connectionist does not
> necessarily mean that all connectionist researchers know about the
> brain's connectionism.
>
> -John Weng
> On 3/22/13 6:08 PM, Ping Li wrote:
> Dear Colleagues,
>
> A Special Issue on Computational Modeling of Bilingualism has been
> published. Most of the models are based on connectionist
> architectures.
>
> All the papers are available for free viewing until April 30, 2013
> (follow the link below to its end):
>
> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/
>
> Please let me know if you have difficulty accessing the above link or
> viewing any of the PDF files on Cambridge University Press's website.
>
> With kind regards,
>
> Ping Li
>
>
> =================================================================
> Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information
> Sciences & Technology  |  Co-Chair, Inter-College Graduate Program in
> Neuroscience | Co-Director, Center for Brain, Behavior, and Cognition
> | Pennsylvania State University  | University Park, PA 16802, USA  |
> Editor, Bilingualism: Language and Cognition, Cambridge University
> Press | Associate Editor: Journal of Neurolinguistics, Elsevier
> Science Publisher | Email: pul8 at psu.edu | URL: http://cogsci.psu.edu
> =================================================================
>
>
>
>

-- 
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------




-- 
Stephen José Hanson
Professor
Psychology Department
Rutgers University

Director RUBIC (Rutgers Brain Imaging Center)
Director RUMBA (Rutgers Brain/Mind Analysis-NK)
Member of Cognitive Science Center (NB)
Member EE Graduate Program (NB)
Member CS Graduate Program (NB)

email: jose at psychology.rutgers.edu
web: psychology.rutgers.edu/~jose
lab: www.rumba.rutgers.edu
fax: 866-434-7959
voice: 973-353-5440 x 1412


