Learning Phonetic Categories by Learning a Lexicon

Naomi H. Feldman, Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912 USA
Thomas L. Griffiths, Department of Psychology, University of California at Berkeley, Berkeley, CA 94720 USA
James L. Morgan, Department of Cognitive and Linguistic Sciences, Brown University, Providence, RI 02912 USA

Abstract

Infants learn to segment words from fluent speech during the same period as they learn native language phonetic categories, yet accounts of phonetic category acquisition typically ignore information about the words in which speech sounds appear. We use a Bayesian model to illustrate how feedback from segmented words might constrain phonetic category learning, helping a learner disambiguate overlapping phonetic categories. Simulations show that information from an artificial lexicon can successfully disambiguate English vowel categories, leading to more robust category learning than distributional information alone.

Keywords: language acquisition; phonetic categories; Bayesian inference

Infants learning their native language need to extract several levels of structure, including the locations of phonetic categories in perceptual space and the identities of words they segment from fluent speech. It is often implicitly assumed that these steps occur sequentially, with infants first learning about the phonetic categories in their language and subsequently using those categories to help them map word tokens onto lexical items. However, infants begin to segment words from fluent speech as early as 6 months (Bortfeld, Morgan, Golinkoff, & Rathbun, 2005), and this skill continues to develop over the next several months (Jusczyk & Aslin, 1995; Jusczyk, Houston, & Newsome, 1999). Discrimination of non-native speech sound contrasts declines during the same time period, between 6 and 12 months (Werker & Tees, 1984). This suggests an alternative learning trajectory in which infants simultaneously learn to categorize both speech sounds and words, potentially allowing the two learning processes to interact.

In this paper we explore the hypothesis that the words infants segment from fluent speech can provide a useful source of information for phonetic category acquisition. We use a Bayesian approach to explore the nature of the phonetic category learning problem in an interactive system, where information from segmented words can feed back and constrain phonetic category learning. Our interactive model learns a rudimentary lexicon and a phoneme inventory[1] simultaneously, deciding whether acoustic representations of segmented tokens correspond to the same or different lexical items (e.g. bed vs. bad) and whether lexical items contain the same or different vowels (e.g. send vs. act). Simulations demonstrate that using information from segmented words to constrain phonetic category acquisition allows more robust category learning from fewer data points, due to the interactive learner's ability to use information about which words contain particular speech sounds to disambiguate overlapping categories.

The paper is organized as follows. We begin with an introduction to the mathematical framework for our model, then present toy simulations to demonstrate its qualitative properties. Next, simulations show that information from an artificial lexicon can disambiguate formant values associated with English vowel categories. The last section discusses potential implications for language acquisition, revisits the model's assumptions, and suggests directions for future research.

[1] We make the simplifying assumption that phonemes are equivalent to phonetic categories, and use the terms interchangeably.

Bayesian Model of Phonetic Category Learning

Recent research on phonetic category acquisition has focused on the importance of distributional learning. Maye, Werker, and Gerken (2002) found that the specific frequency distribution (bimodal or unimodal) of speech sounds along a continuum could affect infants' discrimination of the continuum endpoints, with infants showing better discrimination of the endpoints when familiarized with the bimodal distribution. This work has inspired computational models that use a Mixture of Gaussians approach, assuming that phonetic categories are represented as Gaussian, or normal, distributions of speech sounds and that learners find the set of Gaussian categories that best represents the distribution of speech sounds they hear. de Boer and Kuhl (2003) used the Expectation Maximization (EM) algorithm (Dempster, Laird, & Rubin, 1977) to learn the locations of three such vowel categories from formant data. McMurray, Aslin, and Toscano (2009) introduced a gradient descent algorithm similar to EM to learn a stop consonant voicing contrast, and this algorithm has been extended to multiple dimensions for both consonant and vowel data (Toscano & McMurray, 2008; Vallabha, McClelland, Pons, Werker, & Amano, 2007).
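To make the distributional approach concrete, the following is a minimal sketch of EM for a two-category Gaussian mixture over one-dimensional formant values, in the spirit of the models cited above. It is not any of those authors' implementations; the simulated data, category count, and initialization scheme are illustrative assumptions.

```python
import numpy as np

def em_gmm_1d(x, k=2, n_iter=100, seed=0):
    """Fit a k-component 1-D Gaussian mixture to data x with EM."""
    rng = np.random.default_rng(seed)
    # Initialize means at random data points; shared broad variance.
    mu = rng.choice(x, size=k, replace=False)
    var = np.full(k, np.var(x))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each category for each speech sound.
        dens = (pi / np.sqrt(2 * np.pi * var)
                * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances.
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Bimodal "continuum" in the spirit of Maye et al. (2002):
# two overlapping categories along a single formant dimension (Hz).
rng = np.random.default_rng(1)
x = np.concatenate([rng.normal(400, 50, 200), rng.normal(700, 50, 200)])
print(em_gmm_1d(x))
```

Note that EM of this sort requires fixing the number of categories k in advance; the non-parametric prior introduced below removes that requirement.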
Our model adopts the Mixture of Gaussians approach from these previous models but uses a non-parametric Bayesian framework that allows extension of the model to the word level, making it possible to investigate the learning outcome when multiple levels of structure interact. As in previous models, speech sounds in our model are represented using phonetic dimensions such as steady-state formant values or voice onset time. Words are sequences of these phonetic values, where each phoneme corresponds to a single discrete set (e.g. first and second formant) of phonetic values. A sample fragment of a toy corpus is shown in Figure 1. The phoneme inventory has four categories, labeled A, B, C, and D; five words are shown, representing lexical items ADA, AB, D, AB, and DC, respectively. Learning involves using the speech sounds and, in the case of an interactive learner, information about which other sounds appear with them in words, to recover the phonetic categories that generated the corpus.

[Figure 1: A fragment of a corpus presented to the model. Asterisks represent speech sounds, and lines represent word boundaries. The model does not know which categories generated the speech sounds, and needs to recover categories A, B, C, and D from the data.]

Simulations compare two models that differ in the hypothesis space they assign to the learner. In the distributional model, the learner's hypothesis space contains phoneme inventories, where phonemes correspond to Gaussian distributions of speech sounds in phonetic space. In the lexical-distributional model, the learner considers these same phoneme inventories, but considers them only in conjunction with lexicons that contain lexical items composed of sequences of phonemes. This allows the lexical-distributional learner to use not only phonetic information, but also information about the words that contain those sounds, in recovering a set of phonetic categories.

Distributional Model

In the distributional model, a learner is responsible for recovering a set of phonetic categories, which we refer to as a phoneme inventory \(C\), from a corpus of speech sounds. The model ignores all information about words and word boundaries, and learns only from the distribution of speech sounds in phonetic space. Speech sounds are assumed to be produced by selecting a phonetic category \(c\) from the phoneme inventory and then sampling a phonetic value from the Gaussian associated with that category. Categories differ in their means \(\mu_c\), covariance matrices \(\Sigma_c\), and frequencies of occurrence.

The phoneme inventory is given a Dirichlet process prior (Ferguson, 1973), \(C \sim \mathrm{DP}(\alpha, G_C)\). This distribution encodes biases over the number of categories in the phoneme inventory, as well as over phonetic parameters for those categories. Prior beliefs about the number of phonetic categories allow the learner to consider a potentially infinite number of categories, but produce a bias toward fewer categories, with the strength of the bias controlled by the parameter \(\alpha\). This replaces the winner-take-all bias in category assignments that has been used in previous models (McMurray et al., 2009; Vallabha et al., 2007) and allows explicit inference of the number of categories needed to represent the data.

The prior distribution over phonetic parameters is defined by \(G_C\), which in this model is a distribution over Gaussian phonetic categories that includes an Inverse-Wishart prior over category variances, \(\Sigma_c \sim \mathrm{IW}(\nu_0, \Sigma_0)\), and a Gaussian prior over category means, \(\mu_c \mid \Sigma_c \sim N(\mu_0, \frac{\Sigma_c}{\nu_0})\). The parameters of these distributions can be thought of as pseudodata, where \(\mu_0\), \(\Sigma_0\), and \(\nu_0\) encode the mean, covariance, and number of speech sounds that the learner imagines having already assigned to any new category. This prior distribution over phonetic parameters is not central to the theoretical model, but rather is included for ease of computation; the number of speech sounds in the pseudodata is made as small as possible so that the prior biases are overshadowed by real data.
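As a concrete illustration of this generative assumption, the sketch below draws category parameters from the Normal-Inverse-Wishart prior just described and then generates speech sounds from the resulting Gaussian categories. The hyperparameter values, the number of categories, and the uniform choice of category per sound are illustrative simplifications, not the paper's settings (in the model, category frequencies themselves follow from the Dirichlet process).

```python
import numpy as np
from scipy.stats import invwishart

rng = np.random.default_rng(0)

# Pseudodata hyperparameters; these values are illustrative, not the paper's.
mu0 = np.array([500.0, 1500.0])  # prior mean mu_0 in (F1, F2) space, in Hz
S0 = np.diag([3000.0, 3000.0])   # prior scatter matrix Sigma_0
nu0 = 4                          # pseudo-count nu_0; small so real data dominate

def sample_category():
    """Draw one Gaussian phonetic category from the prior:
    Sigma_c ~ IW(nu0, S0), then mu_c | Sigma_c ~ N(mu0, Sigma_c / nu0)."""
    sigma_c = invwishart.rvs(df=nu0, scale=S0, random_state=rng)
    mu_c = rng.multivariate_normal(mu0, sigma_c / nu0)
    return mu_c, sigma_c

# Generative process of the distributional model: choose a category for
# each sound, then sample the sound's phonetic value from its Gaussian.
categories = [sample_category() for _ in range(3)]
sounds = np.array([rng.multivariate_normal(*categories[rng.integers(3)])
                   for _ in range(50)])
print(sounds[:5])
```

Keeping nu0 near the minimum that keeps the Inverse-Wishart proper in two dimensions means a handful of real tokens quickly overwhelms the pseudodata, matching the design goal described above.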
Presented with a sequence of acoustic values, the learner needs to recover the set of Gaussian categories that generated those acoustic values. Gibbs sampling (Geman & Geman, 1984), a form of Markov chain Monte Carlo, is used to recover examples of phoneme inventories that an ideal learner believes are likely to have generated the corpus. Speech sounds are initially given random category assignments, and in each sweep through the corpus, each speech sound in turn is given a new category assignment based on all the other current assignments. The probability of assignment to category \(c\) is given by Bayes' rule,

\[ p(c \mid w_{ij}) \propto p(w_{ij} \mid c)\, p(c) \tag{1} \]

where \(w_{ij}\) denotes the phonetic parameters of the speech sound in position \(j\) of word \(i\).

The prior \(p(c)\) is given by the Dirichlet process and is

\[ p(c) = \begin{cases} \dfrac{n_c}{\sum_c n_c + \alpha} & \text{for existing categories} \\[1.5ex] \dfrac{\alpha}{\sum_c n_c + \alpha} & \text{for a new category} \end{cases} \tag{2} \]

making it proportional to the number of speech sounds \(n_c\) already assigned to that category, with some probability \(\alpha\) of assignment to a new category. The likelihood \(p(w_{ij} \mid c)\) is obtained by integrating over all possible means and covariance matrices for category \(c\),

\[ p(w_{ij} \mid c) = \iint p(w_{ij} \mid \mu_c, \Sigma_c)\, p(\mu_c, \Sigma_c)\, d\mu_c\, d\Sigma_c \]
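The sketch below implements this assignment step under a deliberately simplified one-dimensional model: the within-category variance is treated as known, so the integral in the likelihood reduces to a closed-form posterior predictive over the category mean alone, whereas the full model integrates over both means and covariance matrices as in the equation above. All parameter values here are illustrative assumptions.

```python
import numpy as np

def gibbs_sweep(x, z, alpha=1.0, sigma2=50.0**2, mu0=500.0, nu0=0.01,
                rng=None):
    """One Gibbs sweep over 1-D speech sounds x with integer labels z.
    Simplified conjugate model: within-category variance sigma2 is known
    and category means have a N(mu0, sigma2/nu0) prior, so the likelihood
    integral has a closed form (the posterior predictive)."""
    rng = rng or np.random.default_rng()

    def predictive(xi, n_c, s_c):
        # Density of xi under a category already holding n_c sounds that
        # sum to s_c; n_c = 0 gives the prior predictive of a new category.
        m = (nu0 * mu0 + s_c) / (nu0 + n_c)
        v = sigma2 * (1.0 + 1.0 / (nu0 + n_c))
        return np.exp(-0.5 * (xi - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

    for i in range(len(x)):
        z[i] = -1  # hold sound i out while scoring candidate categories
        labels = sorted(c for c in set(z.tolist()) if c >= 0)
        # Eq. (2) prior (n_c, or alpha for a new category) times the
        # Eq. (1) likelihood term for each candidate category.
        weights = []
        for c in labels:
            members = x[z == c]
            weights.append(len(members)
                           * predictive(x[i], len(members), members.sum()))
        weights.append(alpha * predictive(x[i], 0, 0.0))
        probs = np.asarray(weights) / np.sum(weights)
        k = rng.choice(len(probs), p=probs)
        z[i] = labels[k] if k < len(labels) else (max(labels) + 1 if labels else 0)
    return z

# Demo: bimodal F1 values; sweeps typically settle on two categories.
rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(400, 50, 100), rng.normal(700, 50, 100)])
z = np.zeros(len(x), dtype=int)
for _ in range(20):
    z = gibbs_sweep(x, z, rng=rng)
print(len(set(z.tolist())), "categories in the final sample")
```

Because a category drops out of the candidate set as soon as it loses its last member, sweeps like this let the number of categories grow and shrink, which is how the sampler infers how many categories the data require.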
