Speech Recognition
Lecture 5: N-gram Language Models

Mehryar Mohri
Courant Institute and Google Research
[email protected]

Language Models

Definition: a probability distribution Pr[w] over sequences of words w = w_1 ... w_k.
• Critical component of a speech recognition system.

Problems:
• Learning: use a large text corpus (e.g., several million words) to estimate Pr[w]. Models in this course: n-gram models, maximum entropy models.
• Efficiency: computational representation and use.

Mehryar Mohri - Speech Recognition, Courant Institute, NYU

This Lecture

• n-gram models: definition and problems
• Good-Turing estimate
• Smoothing techniques
• Evaluation
• Representation of n-gram models
• Shrinking LMs based on probabilistic automata

N-Gram Models

Definition: an n-gram model is a probability distribution based on the (n−1)st-order Markov assumption

∀i, Pr[w_i | w_1 ... w_{i−1}] = Pr[w_i | h_i], with |h_i| ≤ n − 1.

• Most widely used language models.

Consequence: by the chain rule,

Pr[w] = ∏_{i=1}^{k} Pr[w_i | w_1 ... w_{i−1}] = ∏_{i=1}^{k} Pr[w_i | h_i].

Maximum Likelihood

Likelihood: the probability of observing the sample under a distribution p ∈ P, which, given the independence assumption, is

Pr[x_1, ..., x_m] = ∏_{i=1}^{m} p(x_i).

Principle: select the distribution maximizing the sample probability,

p̂ = argmax_{p ∈ P} ∏_{i=1}^{m} p(x_i),  or equivalently  p̂ = argmax_{p ∈ P} ∑_{i=1}^{m} log p(x_i).

Example: Bernoulli Trials

Problem: find the most likely Bernoulli distribution, given the sequence of coin flips H, T, T, H, T, H, T, H, H, H, T, T.
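The chain-rule factorization and maximum-likelihood estimation above can be sketched together for the bigram case (n = 2). This is an illustrative sketch, not code from the course: the function names `train_bigram_mle` and `sentence_prob`, the `<s>`/`</s>` boundary markers, and the toy corpus are all assumptions made here.

```python
from collections import Counter

def train_bigram_mle(sentences):
    """Maximum-likelihood bigram estimates from tokenized sentences:
    Pr[w_i | w_{i-1}] = count(w_{i-1} w_i) / count(w_{i-1})."""
    unigrams, bigrams = Counter(), Counter()
    for sent in sentences:
        toks = ["<s>"] + sent + ["</s>"]
        unigrams.update(toks[:-1])                # context counts
        bigrams.update(zip(toks[:-1], toks[1:]))  # adjacent-pair counts
    def prob(prev, w):
        # Unseen context: no evidence at all, return 0 rather than divide by 0.
        return bigrams[(prev, w)] / unigrams[prev] if unigrams[prev] else 0.0
    return prob

def sentence_prob(prob, sent):
    """Chain rule under the first-order Markov assumption:
    Pr[w] = prod_i Pr[w_i | w_{i-1}]."""
    toks = ["<s>"] + sent + ["</s>"]
    p = 1.0
    for prev, w in zip(toks[:-1], toks[1:]):
        p *= prob(prev, w)
    return p
```

Note that this unsmoothed MLE assigns probability zero to any sentence containing an unseen bigram, which is exactly the problem the Good-Turing estimate and smoothing techniques in this lecture's outline are meant to address.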
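For the coin-flip problem above, the maximum-likelihood answer has the familiar closed form p̂ = (#heads)/m, obtained by setting the derivative of the log-likelihood h log p + (m − h) log(1 − p) to zero. The sketch below checks this numerically, assuming the sample consists of exactly the twelve flips shown on the slide; the variable names are illustrative.

```python
from math import log

flips = "HTTHTHTHHHTT"  # the sample from the slide (assumed complete here)
m = len(flips)
heads = flips.count("H")

# Closed-form MLE for the Bernoulli parameter.
p_hat = heads / m

def log_likelihood(p):
    # Sum of log p(x_i) over the sample: h log p + (m - h) log(1 - p).
    return heads * log(p) + (m - heads) * log(1 - p)

# The closed-form estimate attains the maximum over a grid of candidates.
grid = [i / 100 for i in range(1, 100)]
best = max(grid, key=log_likelihood)
```

Maximizing the log-likelihood rather than the likelihood itself, as in the second form of the principle above, avoids underflow from multiplying many small probabilities.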