Simulating children’s verb inflection errors in English using an LSTM language model

Abstract

We present a computational (LSTM) model that learns to produce English verb inflection (-3sg and bare forms) when trained on English child-directed speech (CDS). The model is trained on input containing morphemized verbs and learns to predict the next token (word or morpheme) given a preceding sequence of tokens. The model produces the type of error (bare for -3sg) made by English-learning children while avoiding errors that children rarely make (-3sg for bare). The model also shows the same sensitivity to input statistics that has been reported in English-learning children. Finally, we manipulate the length of the sequences the model is trained on and show that this results in the delayed acquisition of -3sg forms characteristic of English-learning children with Developmental Language Disorder (DLD). Taken together, these results suggest that input-driven learning is a major determinant of the patterns observed in both typical and impaired acquisition of English verb inflection.
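
To make the modeling setup concrete, the sketch below shows a minimal LSTM next-token language model of the kind described here, assuming a PyTorch implementation. The toy utterances, the "-3sg" morpheme tokenization, the vocabulary, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of an LSTM next-token language model over morphemized input.
# Assumes PyTorch; the data, tokenization, and hyperparameters are hypothetical.
import torch
import torch.nn as nn

# Toy "child-directed speech" with inflected verbs split into stem + morpheme,
# e.g. "plays" -> "play -3sg" (hypothetical tokenization).
utterances = [
    "she play -3sg with the ball".split(),
    "they play with the ball".split(),
    "he want -3sg the toy".split(),
]

vocab = sorted({tok for utt in utterances for tok in utt})
stoi = {tok: i for i, tok in enumerate(vocab)}

class LSTMLanguageModel(nn.Module):
    def __init__(self, vocab_size, emb_dim=32, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tokens):
        # tokens: (batch, seq_len) -> logits over the next token at each position
        hidden, _ = self.lstm(self.embed(tokens))
        return self.out(hidden)

model = LSTMLanguageModel(len(vocab))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(100):
    for utt in utterances:
        ids = torch.tensor([[stoi[t] for t in utt]])
        inputs, targets = ids[:, :-1], ids[:, 1:]  # predict token t+1 from tokens up to t
        logits = model(inputs)
        loss = loss_fn(logits.reshape(-1, len(vocab)), targets.reshape(-1))
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

# After training, one can probe whether the model prefers the -3sg morpheme or a
# bare continuation after a 3sg subject + verb stem (the error pattern at issue).
context = torch.tensor([[stoi[t] for t in "she play".split()]])
probs = torch.softmax(model(context)[0, -1], dim=-1)
print({tok: round(probs[stoi[tok]].item(), 3) for tok in ["-3sg", "with"]})
```

In this framing, the reported manipulation of training-sequence length would amount to truncating the token sequences fed to the model during training, which is the kind of input restriction the abstract links to DLD-like delays in -3sg acquisition.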
