Predicting Age of Acquisition in Early Word Learning Using Recurrent Neural Networks

Eva Portelance ([email protected]), Department of Linguistics, Stanford University
Judith Degen ([email protected]), Department of Linguistics, Stanford University
Michael C. Frank ([email protected]), Department of Psychology, Stanford University

Abstract

Vocabulary growth and syntactic development are known to be highly correlated in early child language. What determines when words are acquired, and how can this help us understand what drives early language development? We train an LSTM language model, known to detect syntactic regularities that are relevant for predicting the difficulty of words, on child-directed speech. We use the average surprisal of words for the model, which encodes sequential predictability, as a predictor for the age of acquisition of words in early child language. We compare this predictor to word frequency and others and find that average surprisal is a good predictor for the age of acquisition of function words and predicates beyond frequency, but not for nouns. Our approach provides insight into what makes a good model of early word learning, especially for words whose meanings rely heavily on linguistic context.

Keywords: language model; recurrent neural network; LSTM; language acquisition; age of acquisition; child-directed speech; word learning.

Introduction

Children's lexicon and syntactic abilities grow in tandem, resulting in a tight correlation between vocabulary size and grammatical complexity (Bates et al., 1994; Brinchmann, Braeken, & Lyster, 2019; Frank, Braginsky, Marchman, & Yurovsky, 2019). Additionally, children are remarkably consistent in the order in which they acquire their first words (Tardif et al., 2008; Goodman, Dale, & Li, 2008). This ordering consistency presents an opportunity: modeling when words are acquired can help us understand what drives language learning more generally.

This general approach relies on creating quantitative models of children's average age of acquisition (AoA) for words. These analyses typically combine large-scale survey data about word acquisition with corpus estimates of language input from different children to make aggregate-level predictions. Such studies have investigated the effects of word properties such as frequency, number of phonemes, and concreteness across a range of languages (Goodman et al., 2008; Kuperman, Stadthagen-Gonzalez, & Brysbaert, 2012; Braginsky, Yurovsky, Marchman, & Frank, 2019), with simple frequency typically being the most important factor: the more frequent a word is in the child's linguistic input, the earlier it is acquired.

These analyses have generally not considered the linguistic contexts in which words appear, however. Some analyses have considered the length of utterances as a weak proxy for sentence complexity (Braginsky et al., 2019). Some have also considered contextual diversity – a measure of semantic co-occurrence – as a possible predictor of AoA beyond frequency (Hills, Maouene, Riordan, & Smith, 2010). This could be considered a proxy for some semantic factors, but it does not directly measure syntactic complexity. Given the strong connection between vocabulary growth and syntactic ability in children's output, we hypothesize that the syntactic contexts in which words appear in children's language input should be an important factor as well. To test this hypothesis, we use a computational language model in concert with a broad array of child-language data sets to predict when words are acquired, exploring contextual linguistic information beyond first-order word frequency.

We use a long short-term memory (LSTM) neural model (Hochreiter & Schmidhuber, 1997), trained on child-directed speech, as our model of contextual information in sequences and as a proxy for syntactic complexity. LSTMs are a form of recurrent neural network (RNN) that can be used as language models (Sundermeyer, Schlüter, & Ney, 2012). Language models are trained on corpora to predict the next word in a sequence. Though more recent language models outperform LSTMs (Vaswani et al., 2017), LSTMs remain a strong, standardized baseline and have many useful analytic properties. They process utterances incrementally and make use of nested layers of hidden units to learn abstract representations that can predict sequential dependencies between words across a range of dependency lengths (Linzen, Dupoux, & Goldberg, 2016). Unlike regular RNNs, LSTM units use a gating system that allows them to 'forget' some of the previous states while 'remembering' others, thus learning to prioritize some dependencies in a sequence over others at each state. Further, regular RNNs have previously been proposed as cognitive models for language learning (Elman, 1990, 1993; Christiansen, Allen, & Seidenberg, 1998); however, these earlier models were computationally limited and could be used only with small, schematic datasets, whereas LSTMs can be applied to larger datasets. Thus, LSTM language models lend themselves well to our project.

As our linking hypothesis between the LSTM model and children's difficulty, we use the model's average surprisal: the negative log probability of a word $w_i$ in a given context $w_{1 \dots i-1}$, averaged across all contexts in which it appears, $C$:

$$\frac{1}{|C|} \times \sum_{C \,:\, w_i \in C} -\log P(w_i \mid w_{1 \dots i-1})$$

Since the LSTM's learning objective is to minimize the surprisal of words in context, the average surprisal of a word constitutes a measure of how difficult that word is for the model to represent. In addition to being an appropriate measure of difficulty for the LSTM, surprisal has also been shown to be a strong predictor of human processing difficulty in psycholinguistic experiments (Levy, 2008; Demberg & Keller, 2008).¹

¹ For an overview of empirical evidence supporting the validity of surprisal as a predictor of processing difficulty, see Hale, 2016.
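To make this linking measure concrete, the short sketch below computes average surprisal from any next-word probability model. The `next_word_prob` interface and function names are illustrative assumptions, not part of the authors' released code.

```python
import math
from collections import defaultdict

def average_surprisal(utterances, next_word_prob):
    """Compute each word's average surprisal across all of its contexts.

    utterances: list of tokenized utterances (lists of word strings).
    next_word_prob: a function (context_words, word) -> P(word | context),
        e.g. a wrapper around a trained LSTM language model (assumed interface).
    """
    totals = defaultdict(float)   # summed surprisal per word
    counts = defaultdict(int)     # number of contexts per word

    for utterance in utterances:
        for i, word in enumerate(utterance):
            context = utterance[:i]                # w_1 ... w_{i-1}
            p = next_word_prob(context, word)      # P(w_i | context)
            totals[word] += -math.log(p)           # surprisal in this context
            counts[word] += 1

    # Average over the |C| contexts in which each word appears.
    return {word: totals[word] / counts[word] for word in totals}
```

Each word then receives a single score that can be entered into AoA regressions alongside frequency and the other predictors.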
Our approach is as follows. First we describe the data we use for training, then we describe the LSTM model architecture and training.² We then compare regression models of children's AoA using average surprisal as one key predictor in concert with previous predictor sets. An important advance over previous work is the use of cross-validation to estimate out-of-sample performance. In sum, our approach allows us to determine the predictive power of this new measure and to investigate how well sequential prediction difficulty in an LSTM relates to patterns of acquisition for children.

² All code and data for this paper are available at www.github.com/eporte2/aoa_lstm_prediction.

Corpus Data

Figure 1: The overall distribution of children's age at the time of child-directed utterance production across all of the data.

To train the model, we compiled a dataset of child-directed speech from the CHILDES database (MacWhinney, 2000). We included all of the child-directed utterances from the 39 largest single-child English corpora available through the childes-db API (Sanchez et al., 2018). To ensure a reasonable sample of child-directed speech from each contributing corpus, we selected corpora that contained at least 20,000 tokens and a ratio of child to child-directed utterances of at least 1:20. The corpora come from 18 distinct studies and were all longitudinal, with transcripts typically coming from regular hour-long recording sessions at the child's home. The ages of the children in the transcripts ranged from 9 months to approximately 5 years, with a mean of 32 months (2;9). Most of the data were from children between the ages of 2 and 4 years (see Figure 1). The resulting dataset of child-directed utterances from these corpora contains over 6.5 million words. Utterances were transformed to lowercase, and all punctuation except for end-of-sentence tokens was removed before being passed to the model.
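A minimal sketch of these selection and preprocessing criteria is given below. It assumes the utterances have already been fetched from childes-db into per-corpus lists of (speaker role, utterance) pairs; the data structures and tokenization details are assumptions for illustration, not the authors' pipeline.

```python
import re

MIN_TOKENS = 20_000        # minimum corpus size in tokens
MIN_CHILD_RATIO = 1 / 20   # minimum ratio of child to child-directed utterances

def select_corpora(corpora):
    """Keep corpora that meet the size and child/child-directed ratio criteria.

    corpora: dict mapping corpus name -> list of (speaker_role, utterance) pairs,
        where speaker_role is 'child' or 'adult' (an assumed representation).
    """
    selected = {}
    for name, utterances in corpora.items():
        adult = [u for role, u in utterances if role != "child"]
        child = [u for role, u in utterances if role == "child"]
        if not adult:
            continue
        n_tokens = sum(len(u.split()) for u in adult)
        ratio = len(child) / len(adult)
        if n_tokens >= MIN_TOKENS and ratio >= MIN_CHILD_RATIO:
            selected[name] = adult   # keep only the child-directed utterances
    return selected

def preprocess(utterance):
    """Lowercase and strip punctuation, keeping the end-of-sentence mark as a token."""
    text = utterance.lower().strip()
    eos = text[-1] if text and text[-1] in ".!?" else None
    text = re.sub(r"[^\w\s']", " ", text)   # remove punctuation marks
    tokens = text.split()
    if eos:
        tokens.append(eos)
    return tokens
```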
Language Model

Figure 2: The LSTM model architecture incrementally processing the utterance 'I want my Blanky'.

We use a two-layer LSTM recurrent neural network, illustrated in Figure 2. The model has randomly initialized 100-dimensional word embeddings as its input layer, which are updated during learning. Hidden states encode information about the preceding context. At each time-step, the current word embedding $w_t$ and the hidden state from the previous time-step $h^1_{t-1}$ are passed through a transformation function, resulting in a new hidden state $h^1_t$. This hidden state $h^1_t$ and the hidden state from the previous time-step in the second layer, $h^2_{t-1}$, are then also passed through a transformation function, resulting in a new hidden state $h^2_t$. This final hidden state then undergoes a transformation to produce the output layer: a distribution over the whole vocabulary representing a prediction about the upcoming word. In order to limit the number of parameters in the model, we limited the vocabulary to the 10,000 most frequent words.

The model's learning objective is to minimize the surprisal (negative log probability) of words given their preceding context in an utterance. (The average surprisal of a word across all the utterances in which it appears is therefore a measure of how difficult it is for the model to converge on a representation for the word, an observation we leverage in our simulations below.) We trained the model on 80% of all of the child-directed utterances (and no child-produced utterances). The remaining 20% were used as a validation set to evaluate the model's performance during training (to avoid overfitting). Utterances were shuffled at each epoch of training. The model saw 32 sentences per epoch before performing backpropagation, after which we calculated the categorical cross-entropy loss of the model on 32 unseen utterances from the validation set.

Table 1: Variance inflation factor (VIF) for all predictors.
Predictor | VIF
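As a concrete illustration of the two-layer architecture described in the Language Model section, here is a minimal PyTorch sketch. The 100-dimensional embeddings and 10,000-word vocabulary follow the description above; the hidden-state size, class name, and choice of framework are assumptions rather than details of the authors' implementation.

```python
import torch
import torch.nn as nn

class ChildDirectedLSTM(nn.Module):
    """Two-layer LSTM language model over a 10,000-word vocabulary."""

    def __init__(self, vocab_size=10_000, embedding_dim=100, hidden_dim=100):
        super().__init__()
        # Randomly initialized word embeddings, updated during training.
        self.embedding = nn.Embedding(vocab_size, embedding_dim)
        # Two stacked LSTM layers: h^1_t from the first layer feeds the second,
        # which produces h^2_t at each time-step.
        self.lstm = nn.LSTM(embedding_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        # The final hidden state is mapped to a distribution over the vocabulary.
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, token_ids, hidden=None):
        # token_ids: (batch, sequence_length) integer word indices
        embedded = self.embedding(token_ids)           # (batch, seq, embedding_dim)
        states, hidden = self.lstm(embedded, hidden)   # (batch, seq, hidden_dim)
        logits = self.output(states)                   # (batch, seq, vocab_size)
        return logits, hidden
```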

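The training regime described above can likewise be sketched, using the `ChildDirectedLSTM` class from the previous block: an 80/20 split of the child-directed utterances, reshuffling at each epoch, a parameter update after every 32 utterances (reading the paper's 32-sentence unit as a mini-batch), and a cross-entropy check on 32 held-out utterances. The optimizer, learning rate, and padding scheme are assumptions.

```python
import random
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pad_sequence

def train(model, utterances, n_epochs=10, batch_size=32, pad_id=0, lr=1e-3):
    """Train the language model to predict each next word in an utterance.

    utterances: list of 1-D LongTensors of word indices (index 0 reserved for padding).
    model: e.g. the ChildDirectedLSTM sketched above.
    """
    # 80% of the child-directed utterances for training, 20% held out for validation.
    random.shuffle(utterances)
    split = int(0.8 * len(utterances))
    train_set, val_set = utterances[:split], utterances[split:]

    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss(ignore_index=pad_id)  # categorical cross-entropy

    def batch_loss(batch):
        # Pad the batch and predict word i+1 from words 1..i.
        padded = pad_sequence(batch, batch_first=True, padding_value=pad_id)
        inputs, targets = padded[:, :-1], padded[:, 1:]
        logits, _ = model(inputs)
        return loss_fn(logits.reshape(-1, logits.size(-1)), targets.reshape(-1))

    for epoch in range(n_epochs):
        random.shuffle(train_set)  # utterances are reshuffled at each epoch
        for start in range(0, len(train_set), batch_size):
            batch = train_set[start:start + batch_size]  # 32 utterances per update
            optimizer.zero_grad()
            loss = batch_loss(batch)
            loss.backward()
            optimizer.step()

            # Monitor cross-entropy on 32 unseen validation utterances.
            with torch.no_grad():
                val_batch = random.sample(val_set, k=min(batch_size, len(val_set)))
                val_loss = batch_loss(val_batch)

        print(f"epoch {epoch}: train loss {loss.item():.3f}, "
              f"val loss {val_loss.item():.3f}")
```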