
Edit Distance, Spelling Correction, and the Noisy Channel
CS 585, Fall 2015: Introduction to Natural Language Processing
http://people.cs.umass.edu/~brenocon/inlp2015/
Brendan O'Connor

• Projects
• Office hours after class
• Some major topics in the second half of the course:
  • Translation: spelling correction, machine translation
  • Syntactic parsing: dependencies, hierarchical phrase structures
  • Coreference
  • Lexical semantics
  • Unsupervised language learning
  • Topic models, exploratory analysis?
  • Neural networks?

Bayes Rule for document classification (previously: Naive Bayes)
[Diagram: hypothesized transmission process: a class is passed through a noisy channel to produce the observed text; the inference problem runs in the other direction, recovering the class from the text.]

Noisy channel model
Hypothesized transmission process: original text → noisy channel → observed text. The inference problem runs in reverse: recover the original text from the observation.

• Codebreaking: P(plaintext | encrypted text) ∝ P(encrypted text | plaintext) P(plaintext)
  (Transmission: the Enigma machine. Inference: Bletchley Park, WWII.)
• Speech recognition: P(text | acoustic signal) ∝ P(acoustic signal | text) P(text)
• Optical character recognition: P(text | image) ∝ P(image | text) P(text)
• Machine translation: P(target text | source text) ∝ P(source text | target text) P(target text)
• Spelling correction: P(target text | source text) ∝ P(source text | target text) P(target text)

(From Chapter 6, Spelling Correction and the Noisy Channel)

Candidates are real words that have a similar letter sequence to the error. Candidate corrections for the spelling error graffe might include giraffe, graf, gaffe, grail, or craft. We then rank the candidates using a distance metric between the source and the surface error. We'd like a metric that shares our intuition that giraffe is a more likely source than grail for graffe, because giraffe is closer in spelling to graffe than grail is to graffe. The minimum edit distance algorithm from Chapter 2 will play a role here. But we'd also like to prefer corrections that are more frequent words, or more likely to occur in the context of the error. The noisy channel model introduced in the next section offers a way to formalize this intuition.

Real-word spelling error detection is a much more difficult task, since any word in the input text could be an error. Still, it is possible to use the noisy channel to find candidates for each word w typed by the user and rank the correction that is most likely to have been the user's original intention.

6.1 The Noisy Channel Model

In this section we introduce the noisy channel model and show how to apply it to the task of detecting and correcting spelling errors.
The noisy channel model was applied to the spelling correction task at about the same time by researchers at AT&T Bell Laboratories (Kernighan et al. 1990, Church and Gale 1991) and at IBM Watson Research (Mays et al., 1991).

[Figure 6.1: In the noisy channel model, we imagine that the surface form we see is actually a "distorted" form of an original word passed through a noisy channel. The decoder passes each word hypothesis through a model of this channel and picks the word that best matches the surface noisy word.]

The intuition of the noisy channel model (see Fig. 6.1) is to treat the misspelled word as if a correctly spelled word had been "distorted" by being passed through a noisy communication channel. This channel introduces "noise" in the form of substitutions or other changes to the letters, making it hard to recognize the "true" word. Our goal, then, is to build a model of the channel. Given this model, we find the true word by passing every word of the language through our model of the noisy channel and seeing which one comes closest to the misspelled word.

This noisy channel model is a kind of Bayesian inference. We see an observation x (a misspelled word), and our job is to find the word w that generated this misspelled word. Out of all possible words in the vocabulary V we want to find the word ŵ such that P(w | x) is highest. We use the hat notation ^ to mean "our estimate of the correct word":

    ŵ = argmax_{w ∈ V} P(w | x)    (6.1)

The function argmax_x f(x) means "the x such that f(x) is maximized". Equation 6.1 thus means that, out of all words in the vocabulary, we want the particular word that maximizes the right-hand side P(w | x).

The intuition of Bayesian classification is to use Bayes' rule to transform Eq. 6.1 into a set of other probabilities. Bayes' rule is presented in Eq. 6.2; it gives us a way to break down any conditional probability P(a | b) into three other probabilities:

    P(a | b) = P(b | a) P(a) / P(b)    (6.2)

We can then substitute Eq. 6.2 into Eq. 6.1 to get Eq. 6.3:

    ŵ = argmax_{w ∈ V} P(x | w) P(w) / P(x)    (6.3)

We can conveniently simplify Eq. 6.3 by dropping the denominator P(x). Why is that? Since we are choosing a potential correction word out of all words, we will be computing P(x | w) P(w) / P(x) for each word. But P(x) doesn't change for each word; we are always asking about the most likely word for the same observed error x, which must have the same probability P(x). Thus, we can choose the word that maximizes this simpler formula:

    ŵ = argmax_{w ∈ V} P(x | w) P(w)    (6.4)

To summarize, the noisy channel model says that we have some true underlying word w, and we have a noisy channel that modifies the word into some possible misspelled observed surface form. The likelihood or channel model of the noisy channel producing any particular observation x is modeled by P(x | w). The prior probability of a hidden word is modeled by P(w). We can compute the most probable word ŵ, given that we've seen some observed misspelling x, by multiplying the prior P(w) and the likelihood P(x | w) and choosing the word for which this product is greatest.
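As a concrete illustration of the decision rule in Eq. 6.4, here is a minimal sketch that scores a hand-specified candidate list for a misspelling under a toy channel model and a toy unigram prior. The function name, the candidate set, and every probability below are invented for illustration only; a real system would estimate the channel model from error data and the prior from a corpus.

import math

def best_correction(x, channel, prior):
    """Return the candidate w maximizing P(x | w) * P(w), computed in log space."""
    candidates = channel[x].keys()
    return max(candidates,
               key=lambda w: math.log(channel[x][w]) + math.log(prior[w]))

# P(x | w): probability that the channel turns intended word w into the typo x.
# These numbers are made up for the sketch.
channel = {
    "acress": {"actress": 1e-4, "cress": 5e-6, "caress": 2e-6,
               "access": 1e-6, "across": 4e-5, "acres": 2e-5},
}

# P(w): unigram prior over intended words (e.g., relative corpus frequency).
prior = {"actress": 2e-5, "cress": 1e-8, "caress": 5e-7,
         "access": 9e-5, "across": 3e-4, "acres": 4e-5}

print(best_correction("acress", channel, prior))

Working in log space avoids underflow when the channel model and prior are later replaced by products of many small probabilities.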
Spelling correction as noisy channel

[Figure: a hypothesized model generates candidate intended sentences such as "I was too tired to go", "I was zzz tired to go", "I was ti tired to go", ...; the noisy channel relates them to the observed INPUT, and the inference problem is to recover the intended sentence.]

We apply the noisy channel approach to correcting non-word spelling errors by taking any word not in our spell dictionary, generating a list of candidate words, ranking them according to Eq. 6.4, and picking the highest-ranked one. We can modify Eq. 6.4 to refer to this list of candidate words instead of the full vocabulary V as follows:

    ŵ = argmax_{w ∈ C} P(x | w) P(w)    (6.5)

where P(x | w) is the channel model and P(w) is the prior. The noisy channel algorithm is shown in Fig. 6.2. To see the details of the computation of the likelihood and the language model, let's walk through an example, applying the algorithm to the example misspelling acress. The first stage of the algorithm proposes candidate corrections by finding words that have a similar spelling to the input word.

Two questions for the noisy channel view:
1. How to score possible translations? (Scoring uses edit distance for the channel model and a language model for the prior.)
2. How to efficiently search over them?

Edit distance
• Tasks:
  • Calculate numerical similarity between pairs, e.g. "President Barack Obama" vs. "President Barak Obama"
  • Enumerate edits with distance = 1
• Model: assume a set of possible changes (a sketch of enumerating these distance-1 edits appears at the end of this section):
  • Deletions: actress => acress
  • Insertions: cress => acress
  • Substitutions: access => acress
  • [Transpositions: caress => acress]
• Probabilistic model: assume each change has a probability of occurrence.

(From Chapter 2, Regular Expressions, Text Normalization, Edit Distance)

Levenshtein also proposed an alternative version of his metric in which each insertion or deletion has a cost of 1 and substitutions are not allowed. (This is equivalent to allowing substitution but giving each substitution a cost of 2, since any substitution can be represented by one insertion and one deletion.) Using this version, the Levenshtein distance between intention and execution is 8.

2.4.1 The Minimum Edit Distance Algorithm

How do we find the minimum edit distance? We can think of this as a search task, in which we are searching for the shortest path (a sequence of edits) from one string to another.

[Figure 2.13: Finding the edit distance viewed as a search problem. From "intention", a deletion yields "ntention", an insertion yields "intecntion", and a substitution yields "inxention".]

The space of all possible edits is enormous, so we can't search naively. However, lots of distinct edit paths will end up in the same state (string), so rather than recomputing all those paths, we could just remember the shortest path to a state each time we saw it; this is the idea of dynamic programming.
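A minimal dynamic-programming sketch of minimum edit distance, using the costs mentioned above (insertion and deletion cost 1, substitution cost 2). The function name and interface are ours, not taken from the chapter.

def min_edit_distance(source, target, ins_cost=1, del_cost=1, sub_cost=2):
    """Minimum edit distance via dynamic programming.

    With the default costs (substitution = 2, i.e. counted as a deletion
    plus an insertion), the distance between 'intention' and 'execution' is 8.
    """
    n, m = len(source), len(target)
    # D[i][j] = minimum cost of editing source[:i] into target[:j]
    D = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        D[i][0] = D[i - 1][0] + del_cost
    for j in range(1, m + 1):
        D[0][j] = D[0][j - 1] + ins_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            same = source[i - 1] == target[j - 1]
            D[i][j] = min(D[i - 1][j] + del_cost,                       # delete
                          D[i][j - 1] + ins_cost,                       # insert
                          D[i - 1][j - 1] + (0 if same else sub_cost))  # copy / substitute
    return D[n][m]

print(min_edit_distance("intention", "execution"))  # 8 with these costs

Each cell is filled from the three cells above, to the left, and diagonally up-left, so the table memoizes exactly the "shortest path to a state" idea described above, in O(nm) time and space.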
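And here is the candidate-generation sketch promised on the edit distance slide above: one common way (not the chapter's own code) to enumerate every string at edit distance 1 from a misspelling, via the deletions, transpositions, substitutions, and insertions listed there, and keep only those that are dictionary words. The tiny vocabulary is a stand-in for a real spell dictionary.

import string

def edits1(word):
    """All strings reachable from `word` by one deletion, transposition,
    substitution, or insertion (over lowercase ASCII letters)."""
    letters = string.ascii_lowercase
    splits = [(word[:i], word[i:]) for i in range(len(word) + 1)]
    deletes = [L + R[1:] for L, R in splits if R]
    transposes = [L + R[1] + R[0] + R[2:] for L, R in splits if len(R) > 1]
    substitutes = [L + c + R[1:] for L, R in splits if R for c in letters]
    inserts = [L + c + R for L, R in splits for c in letters]
    return set(deletes + transposes + substitutes + inserts)

# Stand-in spell dictionary; a real system would load a full word list.
vocab = {"actress", "cress", "caress", "access", "across", "acres"}

candidates = sorted(edits1("acress") & vocab)
print(candidates)  # e.g. ['access', 'acres', 'across', 'actress', 'caress', 'cress']

Intersecting the distance-1 edits with the vocabulary gives exactly the kind of candidate set C that Eq. 6.5 ranks with the channel model and the prior.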