tech report

Michael Gasser gasser at iuvax.cs.indiana.edu
Wed Apr 25 00:51:52 EDT 1990


****** Do not forward to other b-boards or mailing lists.  Thank you. ******

The following tech report is available.
---------------------------------------------------------------------------

            Networks and Morphophonemic Rules Revisited

			  Michael Gasser
			   Chan-Do Lee 

                            Report 307
                    Computer Science Department
                        Indiana University
                    Bloomington, IN 47405  USA
            gasser at cs.indiana.edu, cdlee at cs.indiana.edu

   			      Abstract 

     In the debate over the power of connectionist models to handle 
linguistic phenomena, considerable attention has been focused on the 
learning of simple morphophonemic rules.  Rumelhart and McClelland's 
celebrated model of the acquisition of the English past tense (1986),
which used a simple pattern associator to learn mappings from 
stems to past tense forms, was advanced as evidence that networks could 
learn to emulate rule-like linguistic behavior.  Pinker and Prince's 
equally celebrated critique of the past-tense model (1988) argued 
forcefully that the model was inadequate on several grounds.  For our 
purposes, these are (1) the fact that the model is not constrained in 
ways that human language learners clearly are and (2) the fact that, 
since the model cannot represent the notion "word", it cannot 
distinguish homophonous verbs.  A further deficiency of the model, one 
not brought out by Pinker and Prince, is that it is not a processing 
account: the task that the network learns is that of associating forms 
with forms rather than that of producing forms given meanings or 
meanings given forms.  This paper describes a model making use of an
adaptation of a simple recurrent network which addresses all three
objections to the earlier work on morphophonemic rule acquisition.
The model learns to generate forms in one or another "tense", given
arbitrary patterns representing "meanings", and to output the 
appropriate tense given forms.  The inclusion of meanings in the 
network means that homophonous forms are distinguished.  In addition, 
this network experiences difficulty learning reversal processes which do 
not occur in human language.

-----------------------------------------------------------------------

This is available in compressed PostScript form from the OSU neuroprose
database:

unix> ftp cheops.cis.ohio-state.edu  (or, ftp 128.146.8.62)
Name: anonymous
Password: neuron
ftp> cd pub/neuroprose/Inbox
ftp> binary
ftp> get morphophonemics.ps.Z
ftp> quit
unix> uncompress morphophonemics.ps.Z
unix> lpr morphophonemics.ps
      (with whatever your printer needs for PostScript) 

   [The report should be moved to pub/neuroprose soon.]
