Randomised Language Modelling for Statistical Machine Translation

David Talbot and Miles Osborne
School of Informatics, University of Edinburgh
2 Buccleuch Place, Edinburgh, EH8 9LW, UK
[email protected], [email protected]

Abstract

A Bloom filter (BF) is a randomised data structure for set membership queries. Its space requirements are significantly below lossless information-theoretic lower bounds, but it produces false positives with some quantifiable probability. Here we explore the use of BFs for language modelling in statistical machine translation. We show how a BF containing n-grams can enable us to use much larger corpora and higher-order models, complementing a conventional n-gram LM within an SMT system. We also consider (i) how to include approximate frequency information efficiently within a BF and (ii) how to reduce the error rate of these models by first checking for lower-order sub-sequences in candidate n-grams. Our solutions in both cases retain the one-sided error guarantees of the BF while taking advantage of the Zipf-like distribution of word frequencies to reduce the space requirements.

1 Introduction

Language modelling (LM) is a crucial component in statistical machine translation (SMT). Standard n-gram language models assign probabilities to translation hypotheses in the target language, typically as smoothed trigram models, e.g. (Chiang, 2005). Although it is well known that higher-order LMs and models trained on additional monolingual corpora can yield better translation performance, the challenges in deploying large LMs are not trivial. Increasing the order of an n-gram model can result in an exponential increase in the number of parameters; for corpora such as the English Gigaword corpus, for instance, there are 300 million distinct trigrams and over 1.2 billion 5-grams. Since a LM may be queried millions of times per sentence, it should ideally reside locally in memory to avoid time-consuming remote or disk-based look-ups.

Against this background, we consider a radically different approach to language modelling: instead of explicitly storing all distinct n-grams, we store a randomised representation. In particular, we show that the Bloom filter (Bloom (1970); BF), a simple space-efficient randomised data structure for representing sets, may be used to represent statistics from larger corpora and for higher-order n-grams to complement a conventional smoothed trigram model within an SMT decoder. [1]

The space requirements of a Bloom filter are quite spectacular, falling significantly below information-theoretic error-free lower bounds, while query times are constant. This efficiency, however, comes at the price of false positives: the filter may erroneously report that an item not in the set is a member. False negatives, on the other hand, will never occur: the error is said to be one-sided.

In this paper, we show that a Bloom filter can be used effectively for language modelling within an SMT decoder and present the log-frequency Bloom filter, an extension of the standard Boolean BF that takes advantage of the Zipf-like distribution of corpus statistics to allow frequency information to be associated with n-grams in the filter in a space-efficient manner. We then propose a mechanism, sub-sequence filtering, for reducing the error rates of these models by using the fact that an n-gram's frequency is bounded from above by the frequency of its least frequent sub-sequence.

We present machine translation experiments using these models to represent information regarding higher-order n-grams and additional larger monolingual corpora in combination with conventional smoothed trigram models. We also run experiments with these models in isolation to highlight the impact of different order n-grams on the translation process. Finally, we provide some empirical analysis of the effectiveness of both the log-frequency Bloom filter and sub-sequence filtering.

[1] For extensions of the framework presented here to stand-alone smoothed Bloom filter language models, we refer the reader to a companion paper (Talbot and Osborne, 2007).

2 The Bloom filter

In this section, we give a brief overview of the Bloom filter (BF); refer to Broder and Mitzenmacher (2005) for a more detailed presentation. A BF represents a set S = {x_1, x_2, ..., x_n} with n elements drawn from a universe U of size N. The structure is attractive when N ≫ n. The only significant storage used by a BF consists of a bit array of size m. This is initially set to hold zeroes. To train the filter we hash each item in the set k times using distinct hash functions h_1, h_2, ..., h_k. Each function is assumed to be independent of the others and to map items in the universe to the range 1 to m uniformly at random. The k bits indexed by the hash values for each item are set to 1; the item is then discarded. Once a bit has been set to 1 it remains set for the lifetime of the filter. Distinct items may not be hashed to k distinct locations in the filter; we ignore collisions. Bits in the filter can, therefore, be shared by distinct items, allowing significant space savings but introducing a non-zero probability of false positives at test time. There is no way of directly retrieving or enumerating the items stored in a BF.

At test time we wish to discover whether a given item was a member of the original set. The filter is queried by hashing the test item using the same k hash functions. If all bits referenced by the k hash values are 1, then we assume that the item was a member; if any of them are 0, then we know it was not. True members are always correctly identified, but a false positive will occur if all k corresponding bits were set by other items during training and the item was not a member of the training set. This is known as a one-sided error.

The probability of a false positive, f, is clearly the probability that none of the k randomly selected bits in the filter are still 0 after training. Letting p be the proportion of bits that are still zero after the n elements have been inserted, this gives

f = (1 − p)^k.

As the n items have been entered in the filter by hashing each k times, the probability that a given bit is still zero is

p' = (1 − 1/m)^{kn} ≈ e^{−kn/m},

which is the expected value of p. Hence the false positive rate can be approximated as

f = (1 − p)^k ≈ (1 − p')^k ≈ (1 − e^{−kn/m})^k.

By taking the derivative we find that the number of hash functions k* that minimises f is

k* = ln 2 · (m/n),

which leads to the intuitive result that exactly half the bits in the filter will be set to 1 when the optimal number of hash functions is chosen.

The fundamental difference between a Bloom filter's space requirements and those of any lossless representation of a set is that the former do not depend on the size of the (exponential) universe N from which the set is drawn. A lossless representation scheme (for example, a hash map, trie, etc.) must depend on N since it assigns a distinct representation to each possible set drawn from the universe.
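
To make the mechanics above concrete, the following is a minimal sketch of a Boolean Bloom filter in Python. It is not the authors' implementation: the class name, the salted-SHA-256 scheme used to simulate k independent hash functions, and the toy sizing are our own illustrative choices; only the structure (a bit array, k hashes per item, one-sided membership queries) follows the description above.

```python
import hashlib
import math

class BloomFilter:
    """Minimal Boolean Bloom filter: k salted hashes over a bit array of size m."""

    def __init__(self, m, k):
        self.m = m                      # number of bits
        self.k = k                      # number of hash functions
        self.bits = bytearray(m)        # one byte per bit, for simplicity

    def _indexes(self, item):
        # Simulate k independent hash functions by salting one strong hash.
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{item}".encode("utf-8")).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for idx in self._indexes(item):
            self.bits[idx] = 1          # once set, a bit is never cleared

    def __contains__(self, item):
        # All k bits set => "probably a member" (possible false positive);
        # any bit unset => definitely not a member (no false negatives).
        return all(self.bits[idx] for idx in self._indexes(item))

# Example sizing (illustrative numbers): n items at 8 bits per item,
# with the optimal k = ln 2 * (m / n) derived above.
n = 1000
m = 8 * n
k = max(1, round(math.log(2) * m / n))   # about 6 hash functions
bf = BloomFilter(m, k)
bf.add("the quick brown")
assert "the quick brown" in bf           # true members are always found
print("the slow brown" in bf)            # usually False; True with prob. roughly (1 - e^{-kn/m})^k
```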

3 Language modelling with Bloom filters

In our experiments we make use of both standard (i.e. Boolean) BFs containing n-gram types drawn from a training corpus and a novel BF scheme, the log-frequency Bloom filter, that allows frequency information to be associated efficiently with items stored in the filter.

3.1 Log-frequency Bloom filter

The efficiency of our scheme for storing n-gram statistics within a BF relies on the Zipf-like distribution of n-gram frequencies in natural language corpora: most events occur an extremely small number of times, while a small number are very frequent.

We quantise raw frequencies, c(x), using a logarithmic codebook as follows,

qc(x) = 1 + ⌊log_b c(x)⌋.   (1)

Algorithm 1 Training frequency BF
Input: S_train, {h_1, ..., h_k} and BF = ∅
Output: BF
for all x ∈ S_train do
  c(x) ← frequency of n-gram x in S_train
  qc(x) ← quantisation of c(x) (Eq. 1)
  for j = 1 to qc(x) do
    for i = 1 to k do
      h_i(x) ← hash of event {x, j} under h_i
      BF[h_i(x)] ← 1
    end for
  end for
end for
return BF

Algorithm 2 Test frequency BF
Input: x, MAXQCOUNT, {h_1, ..., h_k} and BF
Output: Upper bound on qc(x) ∈ S_train
for j = 1 to MAXQCOUNT do
  for i = 1 to k do
    h_i(x) ← hash of event {x, j} under h_i
    if BF[h_i(x)] = 0 then
      return j − 1
    end if
  end for
end for

The probability of overestimating an item's frequency decays exponentially with the size of the overestimation error d (i.e. as f^d for d > 0), since each erroneous increment corresponds to a single false positive and d such independent events must occur together.

3.2 Sub-sequence filtering

The error analysis in Section 2 focused on the false positive rate of a BF; if we deploy a BF within an SMT decoder, however, the actual error rate will also depend on the a priori membership probability of ...
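
As a concrete illustration of Algorithms 1 and 2 from Section 3.1, the sketch below extends the Boolean filter above with the logarithmic quantisation of Eq. 1. Again this is only a sketch under our own assumptions: the class and method names, the salted hashing of the composite event (x, j), and the toy counts are illustrative rather than the authors' implementation.

```python
import hashlib
import math

class LogFrequencyBloomFilter:
    """Sketch of the log-frequency BF: Algorithm 1 (training) and Algorithm 2 (testing).

    Each n-gram x with quantised count qc(x) is inserted qc(x) times,
    hashing the composite event (x, j) for j = 1 .. qc(x).
    """

    def __init__(self, m, k, b=2, max_qcount=32):
        self.m = m                    # bits in the filter
        self.k = k                    # hash functions per event
        self.b = b                    # base of the logarithmic codebook (Eq. 1)
        self.max_qcount = max_qcount  # MAXQCOUNT: largest quantised count queried
        self.bits = bytearray(m)

    def _indexes(self, x, j):
        # k salted hashes of the event {x, j}
        for i in range(self.k):
            h = hashlib.sha256(f"{i}:{j}:{x}".encode("utf-8")).hexdigest()
            yield int(h, 16) % self.m

    def quantise(self, count):
        # Eq. 1: qc(x) = 1 + floor(log_b c(x))
        return 1 + int(math.floor(math.log(count, self.b)))

    def train(self, counts):
        # Algorithm 1: insert each n-gram once per quantised-count level.
        for x, c in counts.items():
            for j in range(1, self.quantise(c) + 1):
                for idx in self._indexes(x, j):
                    self.bits[idx] = 1

    def test(self, x):
        # Algorithm 2: probe levels j = 1, 2, ... and return an upper bound on qc(x).
        for j in range(1, self.max_qcount + 1):
            if not all(self.bits[idx] for idx in self._indexes(x, j)):
                return j - 1          # an unset bit proves level j was never inserted
        return self.max_qcount

# Example usage with toy trigram counts (assumed data, not from the paper):
counts = {"the quick brown": 9, "quick brown fox": 2}
lfbf = LogFrequencyBloomFilter(m=1024, k=3)
lfbf.train(counts)
print(lfbf.test("the quick brown"))   # 4 = 1 + floor(log2 9), barring false positives
print(lfbf.test("brown fox jumps"))   # 0 with high probability (never inserted)
```

As in the analysis above, a returned value can only overestimate the true quantised count, and an overestimate of size d requires d independent false positives, so its probability falls off as f^d.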
