Compressing Trigram Language Models with Golomb Coding

Ken Church, Ted Hart, Jianfeng Gao
Microsoft, One Microsoft Way, Redmond, WA, USA
{church,tedhar,jfgao}@microsoft.com

Abstract

Trigram language models are compressed using a Golomb coding method inspired by the original Unix spell program. Compression methods trade off space, time and accuracy (loss). The proposed HashTBO method optimizes space at the expense of time and accuracy. Trigram language models are normally considered memory hogs, but with HashTBO, it is possible to squeeze a trigram language model into a few megabytes or less. HashTBO made it possible to ship a trigram contextual speller in Microsoft Office 2007.

1 Introduction

This paper will describe two methods of compressing trigram language models: HashTBO and ZipTBO. ZipTBO is a baseline compression method that is commonly used in many applications such as the Microsoft IME (Input Method Editor) systems that convert Pinyin to Chinese and Kana to Japanese.

Trigram language models have been so successful that they are beginning to be rolled out to applications with millions and millions of users: speech recognition, handwriting recognition, spelling correction, IME, machine translation and more. The EMNLP community should be excited to see their technology having so much influence and visibility with so many people. Walter Mossberg of the Wall Street Journal called out the contextual speller (the blue squiggles) as one of the most notable features in Office 2007:¹

    There are other nice additions. In Word, Outlook and PowerPoint, there is now contextual spell checking, which points to a wrong word, even if the spelling is in the dictionary. For example, if you type "their" instead of "they're," Office catches the mistake. It really works.

¹ http://online.wsj.com/public/article/SB116786111022966326-T8UUTIl2b10DaW11usf4NasZTYI_20080103.html?mod=tff_main_tff_top

The use of contextual language models in spelling correction has been discussed elsewhere: (Church and Gale, 1991), (Mays et al., 1991), (Kukich, 1992) and (Golding and Schabes, 1996). This paper will focus on how to deploy such methods to millions and millions of users. Depending on the particular application and requirements, we need to make different tradeoffs among:

1. Space (for the compressed language model),
2. Runtime (for n-gram lookup), and
3. Accuracy (losses for n-gram estimates).

HashTBO optimizes space at the expense of the other two. We recommend HashTBO when space concerns dominate the other concerns; otherwise, use ZipTBO.

There are many applications where space is extremely tight, especially on cell phones. HashTBO was developed for contextual spelling in Microsoft Office 2007, where space was the key challenge. The contextual speller probably would not have shipped without HashTBO compression.

We normally think of trigram language models as memory hogs, but with HashTBO, a few megabytes are more than enough to do interesting things with trigrams. Of course, more memory is always better, but it is surprising how much can be done with so little.
For English, the Office contextual speller started with a predefined vocabulary of 311k word types and a corpus of 6 billion word tokens. (About a third of the words in the vocabulary do not appear in the corpus.) The vocabularies for other languages tend to be larger, and the corpora tend to be smaller. Initially, the trigram language model is very large. We prune out small counts (8 or less) to produce a starting point of 51 million trigrams, 14 million bigrams and 311k unigrams (for English). With extreme Stolcke pruning, we cut the 51+14+0.3 million n-grams down to a couple million. Using a Golomb code, each n-gram consumes about 3 bytes on average.

With so much Stolcke pruning and lossy compression, there will be losses in precision and recall. Our evaluation finds, not surprisingly, that compression matters most when space is tight. Although HashTBO outperforms ZipTBO on the spelling task over a wide range of memory sizes, the difference in recall (at 80% precision) is most noticeable at the low end (under 10 MB), and least noticeable at the high end (over 100 MB). When there is plenty of memory (100+ MB), the difference vanishes, as both methods asymptote to the upper bound (the performance of an uncompressed trigram language model with unlimited memory).

2 Preliminaries

Both methods start with a TBO (trigrams with backoff) LM (language model) in the standard ARPA format. The ARPA format is used by many toolkits such as the CMU-Cambridge Statistical Language Modeling Toolkit.²

² http://www.speech.cs.cmu.edu/SLM

2.1 Katz Backoff

No matter how much data we have, we never have enough. Nothing has zero probability. We will see n-grams in the test set that did not appear in the training set. To deal with this reality, Katz (1987) proposed backing off from trigrams to bigrams (and from bigrams to unigrams) when we don't have enough training data.

Backoff doesn't have to do much for trigrams that were observed during training. In that case, the backoff estimate of P(w | w_{-2} w_{-1}) is simply a discounted probability P_d(w | w_{-2} w_{-1}). The discounted probabilities steal from the rich and give to the poor. They take some probability mass from the rich n-grams that have been seen in training and give it to poor unseen n-grams that might appear in test. There are many ways to discount probabilities. Katz used Good-Turing smoothing, but other smoothing methods such as Kneser-Ney are more popular today.

Backoff is more interesting for unseen trigrams. In that case, the backoff estimate is

\alpha(w_{-2} w_{-1}) \, P_d(w \mid w_{-1})

The backoff alphas (\alpha) are a normalization factor that accounts for the discounted mass. That is,

\alpha(w_{-2} w_{-1}) = \frac{1 - \sum_{w:\, C(w_{-2} w_{-1} w) > 0} P(w \mid w_{-2} w_{-1})}{1 - \sum_{w:\, C(w_{-2} w_{-1} w) > 0} P(w \mid w_{-1})}

where C(w_{-2} w_{-1} w) > 0 simply says that the trigram was seen in training data.
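To make the lookup concrete, the following is a minimal sketch of a Katz-style backoff query over a pruned TBO model. It is not the paper's or Office's implementation; the table names (trigram_logp, bigram_logp, unigram_logp, bigram_alpha, unigram_alpha) are hypothetical stand-ins for tables of discounted log probabilities and log backoff weights.

```python
import math


def katz_logprob(w2, w1, w,
                 trigram_logp, bigram_logp, unigram_logp,
                 bigram_alpha, unigram_alpha):
    """Estimate log10 P(w | w2 w1) from a pruned trigram-backoff model.

    The *_logp dicts map n-gram tuples to discounted log10 probabilities;
    the *_alpha dicts map histories to log10 backoff weights.
    """
    # Seen trigram: use the discounted probability directly.
    if (w2, w1, w) in trigram_logp:
        return trigram_logp[(w2, w1, w)]
    # Unseen trigram: back off to the bigram, scaled by alpha(w-2 w-1).
    logp = bigram_alpha.get((w2, w1), 0.0)   # log10 alpha; 0.0 means alpha = 1
    if (w1, w) in bigram_logp:
        return logp + bigram_logp[(w1, w)]
    # Unseen bigram: back off once more to the unigram.
    logp += unigram_alpha.get(w1, 0.0)
    return logp + unigram_logp.get(w, math.log10(1e-9))  # crude floor for OOV words
```

Both HashTBO and ZipTBO have to answer this kind of query at runtime; they differ in how the probability and alpha tables are represented in memory.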
3 Stolcke Pruning

Both ZipTBO and HashTBO start with Stolcke pruning (1998).³ We will refer to the trigram language model after backoff and pruning as a pruned TBO LM.

³ http://www.nist.gov/speech/publications/darpa98/html/lm20/lm20.htm

Stolcke pruning looks for n-grams that would receive nearly the same estimates via Katz backoff if they were removed. In a practical system, there will never be enough memory to explicitly materialize all n-grams that we encounter during training. In this work, we need to compress a large set of n-grams (that appear in a large corpus of 6 billion words) down to a relatively small language model of just a couple of megabytes. We prune as much as necessary to make the model fit into the memory allocation (after subsequent HashTBO/ZipTBO compression).

Pruning saves space by removing n-grams subject to a loss consideration:

1. Select a threshold θ.
2. Compute the performance loss due to pruning each trigram and bigram individually using the pruning criterion.
3. Remove all trigrams with performance loss less than θ.
4. Remove all bigrams with no child nodes (trigram nodes) and with performance loss less than θ.
5. Re-compute backoff weights.

Stolcke pruning uses a loss function based on relative entropy. Formally, let P denote the trigram probabilities assigned by the original unpruned model, and let P' denote the probabilities in the pruned model. Then the relative entropy D(P||P') between the two models is

-\sum_{w,h} P(w, h) \, [\log P'(w \mid h) - \log P(w \mid h)]

where h is the history. For trigrams, the history is the previous two words. Stolcke showed that this reduces to

-P(h) \Big\{ P(w \mid h) \, [\log P(w \mid h') + \log \alpha'(h) - \log P(w \mid h)] + [\log \alpha'(h) - \log \alpha(h)] \sum_{w:\, C(h,w) > 0} P(w \mid h) \Big\}

where \alpha'(h) is the revised backoff weight after pruning and h' is the revised history after dropping the first word. The summation is over all the trigrams that were seen in training: C(h, w) > 0. Stolcke pruning will remove n-grams as necessary, minimizing this loss.

With Golomb coding, HashTBO consumes about 3 bytes per n-gram on average (including log likelihoods and alphas for backing off). ZipTBO is faster, but takes more space (about 4 bytes per n-gram on average, as opposed to 3 bytes per n-gram). Given a fixed memory budget, ZipTBO has to make up the difference with more aggressive Stolcke pruning. More pruning leads to larger losses, as we will see, for the spelling application.

Losses will be reported in terms of performance on the spelling task. It would be nice if losses could be reported in terms of cross entropy, but the values output by the compressed language models cannot be interpreted as probabilities due to quantization losses and other compression losses.

4 McIlroy's Spell Program

McIlroy's spell program started with a hash table. Normally, we store the clear text in the hash table, but he didn't have space for that, so he didn't. Hash collisions introduce losses. McIlroy then sorted the hash codes and stored just the interarrivals of the hash codes instead of the hash codes themselves.
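This is where the Golomb coding of the title comes in: once the hash codes are sorted, the gaps between consecutive codes are small, roughly geometrically distributed integers, which is the case Golomb codes are designed for. Below is a rough sketch of the general technique under those assumptions; the function name, the hash range, and the choice of parameter m are illustrative, not taken from the paper.

```python
def golomb_encode(gaps, m):
    """Golomb-code a list of non-negative integer gaps with parameter m.

    Each gap g is split into a quotient q = g // m, written in unary
    (q ones and a terminating zero), and a remainder r = g % m, written
    in truncated binary. Returns a string of '0'/'1' characters.
    A toy illustration of the technique, not the paper's encoder.
    """
    b = max(1, (m - 1).bit_length())   # ceil(log2 m) bits for a remainder
    u = (1 << b) - m                   # remainders below u need only b - 1 bits
    bits = []
    for g in gaps:
        q, r = divmod(g, m)
        bits.append('1' * q + '0')                                     # unary quotient
        if r < u:
            bits.append(format(r, 'b').zfill(b - 1) if b > 1 else '')  # short remainder
        else:
            bits.append(format(r + u, 'b').zfill(b))                   # long remainder
    return ''.join(bits)


# Toy usage: hash a few trigrams into a fixed range, sort, difference, encode.
ngrams = [("to", "be", "or"), ("be", "or", "not"), ("or", "not", "to")]
codes = sorted({hash(ng) % (1 << 20) for ng in ngrams})
gaps = [codes[0]] + [hi - lo for lo, hi in zip(codes, codes[1:])]
encoded = golomb_encode(gaps, m=1 << 18)   # m on the order of the mean gap
```

With hash codes spread roughly uniformly over their range, the expected gap is about (range / number of n-grams), and choosing m on that order keeps the average code length close to the entropy of the gap distribution. That is what makes per-n-gram footprints like the roughly 3 bytes quoted above possible (that figure also includes the quantized probabilities and alphas).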
