Connectionists: A biological brain is fundamentally an emergent finite automaton
Juyang Weng
weng at cse.msu.edu
Tue Mar 26 12:06:32 EDT 2013
Dear Prof. Janet Wiles:
Thank you for raising the issue. I hope that this discussion is useful
to everybody who has subscribed to this connectionists mailing list.
Dr. Paul Werbos, a well-known expert in neural networks working at NSF,
raised the same point to me, but the two of us were talking about very
different things while using the same key words, such as Finite Automaton (FA).
Communication is hard in this fast-paced modern world, where everybody's
attention span is too short! Key words like FA, dropped into the wrong
pocket, take on the wrong meanings.
I put some key points (not all) here concisely, so that people can get
them quickly and understand why Marvin Minsky and Michael Jordan
correctly said that (traditional) neural networks (TNNs, including the
networks in the edited book by Kolen and Kremer 2001) do not abstract
well, and why our Developmental Network (DN) has HOLISTICALLY solved a
series of major problems with TNNs and the traditional AI methods:
(1.a) TNNs are open to human programmers. A human programmer simply
compiles a task-specific automaton (FA, pushdown automaton PDA, Turing
machine TM, or super-TM) into a TNN. If the original automaton can
already do the specific task, why compile it into a network? Simply to
defend connectionism? A bad move.
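To make (1.a) concrete, here is a toy hand-coded FA (my illustration only, not an example from Kolen and Kremer or from the DN work): a parity recognizer whose states and transitions are all fixed in advance by the programmer. This is the kind of task-specific automaton that gets "compiled" into a TNN:

```python
# A hand-written FA that accepts binary strings containing an even
# number of 1s. Everything here -- states, alphabet, transitions --
# is decided by the human programmer, which is exactly the "openness"
# criticized in (1.a).
TRANSITIONS = {
    ("even", "0"): "even",
    ("even", "1"): "odd",
    ("odd", "0"): "odd",
    ("odd", "1"): "even",
}

def run_fa(string, start="even", accepting=("even",)):
    """Run the hand-coded FA and report whether the string is accepted."""
    state = start
    for symbol in string:
        state = TRANSITIONS[(state, symbol)]
    return state in accepting

print(run_fa("11"))    # two 1s -> accepted
print(run_fa("1101"))  # three 1s -> rejected
```

If this table already solves the task, compiling it into network weights adds nothing to the automaton itself.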
(1.b) A brain is not open to human programmers. It is "skull-closed"
during learning and performance. Thus, a human programmer can only
program the Developmental Program (DP) of a DN, not the DN itself
directly. The DN must program itself through learning while being
regulated by the DP. The DP is genome-like, but our DP only simulates
the function of the genome for brain development, not individual genes.
The programmer of the DP does not know the tasks that a DN will learn
in its life. The same DP serves many DNs, and a different DN represents
a different life. A DN can learn an open number of unknown tasks; the
TNNs in Kolen and Kremer 2001 cannot.
(2.a) TNNs do not use fully emergent representations. They use symbolic
representations to various degrees; for example, the human programmer
specifies which features an area detects. I do not think any TNN in
Kolen and Kremer 2001 can do language acquisition.
(2.b) All the feature detectors and all the representations in a DN are
fully emergent, with only body-specific constraints (not
environment-specific constraints). Thus, a DN can acquire a (simple)
language (a mother tongue) that the human programmer of the DP does not
even know about.
(3.a) The states of TNNs are in hidden layers and are not directly
teachable. This is the key architectural reason that TNNs do not
abstract well.
(3.b) The states of DNs are at the action ports, which are open to the
physical environment as both input and output; they are directly
teachable and verifiable, like those of a biological brain.
(4.a) TNN learning (when a TNN learns at all) is slow and iterative,
with no global optimality.
(4.b) DN learning from any huge FA is immediate. The DN is error-free
(one-shot learning) not only for the learned sequences but also for
state-equivalent sequences that it has NOT observed, which is critical
for language understanding, since many sentences are new. The DN is
optimal in the sense of maximum likelihood.
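As a toy illustration of (4.b) (my sketch, not the actual DN algorithm), suppose a learner is taught transitions directly, with states exposed as in (3.b): a single exposure per transition suffices, and sequences it has never seen are still processed without error, because state-equivalent prefixes share a state:

```python
# One-shot learning of an FA by direct teaching of its transitions.
# Because states are taught explicitly (not hidden), each transition
# needs only one exposure, and generalization to unseen but
# state-equivalent sequences is automatic.

learned = {}  # (state, symbol) -> next state, filled by teaching

def teach(state, symbol, next_state):
    learned[(state, symbol)] = next_state  # one exposure is enough

def run(symbols, start):
    state = start
    for s in symbols:
        state = learned[(state, s)]
    return state

# Teach the four transitions of a 1s-parity FA, once each.
teach("even", "1", "odd")
teach("odd", "1", "even")
teach("even", "0", "even")
teach("odd", "0", "odd")

# "1100" and "0011" were never shown as whole sequences during
# teaching, yet both are handled without error: they are
# state-equivalent (each ends in the "even" state).
print(run("1100", "even"), run("0011", "even"))
```

The point of the sketch is only the architecture contrast with (4.a): when states are at open ports, learning is recording, not slow iterative weight search.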
(5.a) Simulating a PDA, TM, or super-TM using a TNN seems to be on the
wrong track, since the brain is not a PDA, TM, or super-TM.
(5.b) The most fundamental part of a biological brain seems to be an
emergent FA (EFA). In theory and in practice, an EFA such as the DN is
able to perform all kinds of practical tasks, including, of course,
mathematical logic. Chapter 10 of my book "Natural and Artificial
Intelligence" discusses this large topic with a series of practical
examples.
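As a small illustration of (5.b) (my construction, not an example from Chapter 10), even a piece of propositional logic can be carried out by an FA: the evaluator below handles flat boolean expressions such as "T&F|T" left to right while keeping only a finite state, namely the current truth value and the pending operator:

```python
# A finite-state evaluator for flat boolean expressions over the
# alphabet {T, F, &, |}. Its state space is finite (truth value so far
# crossed with the pending operator), so it is a genuine FA, yet it
# computes a fragment of mathematical logic.

def fa_logic(expr):
    state = (None, None)  # (value so far, pending operator)
    for tok in expr:
        val, op = state
        if tok in "TF":
            bit = (tok == "T")
            if op is None:
                state = (bit, None)              # first literal
            elif op == "&":
                state = (val and bit, None)      # apply pending AND
            else:                                # op == "|"
                state = (val or bit, None)       # apply pending OR
        else:  # tok is "&" or "|": remember the operator
            state = (val, tok)
    return state[0]

print(fa_logic("T&F|T"))  # evaluated left to right: (T&F)|T -> True
```

Nested expressions with parentheses would need a stack (a PDA), but the flat fragment shows the principle: finite state is enough for a useful slice of logic.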
All criticisms and comments are welcome.
-John
On 3/25/13 11:30 PM, Janet Wiles wrote:
>
> Recurrent neural networks can represent, and in some cases learn and
> generalise, classes of languages beyond finite state machines. For a
> review of their capabilities, see the excellent edited book by Kolen
> and Kremer; e.g., Ch. 8 is on "Representation beyond finite states"
> and Ch. 9 is "Universal Computation and Super-Turing Capabilities".
>
> Kolen and Kremer (2001), A Field Guide to Dynamical Recurrent
> Networks, IEEE Press.
>
> *From:* connectionists-bounces at mailman.srv.cs.cmu.edu *On Behalf
> Of* Juyang Weng
> *Sent:* Sunday, 24 March 2013 9:17 AM
> *To:* connectionists at mailman.srv.cs.cmu.edu
> *Subject:* Re: Connectionists: Computational Modeling of Bilingualism
> Special Issue
>
> Ping Li:
>
> As far as I understand, traditional connectionist architectures cannot
> do abstraction well, as Marvin Minsky, Michael Jordan, and many others
> correctly stated. For example, traditional neural networks could not
> learn a finite automaton (FA) until recently (i.e., until the proof of
> our Developmental Network). We all know that the FA is the basis for
> all probabilistic symbolic networks (e.g., Markov models), but those
> are not connectionist.
>
> After seeing your announcement, I am confused by the title
> "Bilingualism Special Issue: Computational Modeling of Bilingualism"
> taken together with your comment that "most of the models are based
> on connectionist architectures."
>
> Without further clarifications from you, I have to predict that these
> connectionist architectures in the book are all grossly wrong in terms
> of brain-capable connectionist natural language processing, since they
> cannot learn an FA. This means that they cannot generalize to
> state-equivalent but unobserved word sequences. Without this basic
> capability required for natural language processing, how can they
> claim connectionist natural language processing, let alone bilingualism?
>
> I am concerned that many papers proceed with specific problems without
> understanding the fundamental problems of traditional
> connectionism. The fact that the biological brain is connectionist
> does not necessarily mean that all connectionist researchers know
> about the brain's connectionism.
>
> -John Weng
>
> On 3/22/13 6:08 PM, Ping Li wrote:
>
> Dear Colleagues,
>
> A Special Issue on Computational Modeling of Bilingualism has been
> published. Most of the models are based on connectionist
> architectures.
>
> All the papers are available for free viewing until April 30, 2013
> (follow the link below to its end):
>
> http://cup.linguistlist.org/2013/03/bilingualism-special-issue-computational-modeling-of-bilingualism/
>
> Please let me know if you have difficulty accessing the above link
> or viewing any of the PDF files on Cambridge University Press's
> website.
>
> With kind regards,
>
> Ping Li
>
> =================================================================
>
> Ping Li, Ph.D. | Professor of Psychology, Linguistics, Information
> Sciences & Technology | Co-Chair, Inter-College Graduate Program
> in Neuroscience | Co-Director, Center for Brain, Behavior, and
> Cognition | Pennsylvania State University | University Park, PA
> 16802, USA |
>
> Editor, Bilingualism: Language and Cognition, Cambridge University
> Press | Associate Editor: Journal of Neurolinguistics, Elsevier
> Science Publisher
>
> Email: pul8 at psu.edu | URL: http://cogsci.psu.edu
>
> =================================================================
>
>
>
--
Juyang (John) Weng, Professor
Department of Computer Science and Engineering
MSU Cognitive Science Program and MSU Neuroscience Program
428 S Shaw Ln Rm 3115
Michigan State University
East Lansing, MI 48824 USA
Tel: 517-353-4388
Fax: 517-432-1061
Email: weng at cse.msu.edu
URL: http://www.cse.msu.edu/~weng/
----------------------------------------------