Learning Bigrams from Unigrams


Xiaojin Zhu† and Andrew B. Goldberg† and Michael Rabbat‡ and Robert Nowak§
†Department of Computer Sciences, University of Wisconsin-Madison
‡Department of Electrical and Computer Engineering, McGill University
§Department of Electrical and Computer Engineering, University of Wisconsin-Madison
{jerryzhu, goldberg}@cs.wisc.edu, [email protected], [email protected]

Abstract

Traditional wisdom holds that once documents are turned into bag-of-words (unigram count) vectors, word orders are completely lost. We introduce an approach that, perhaps surprisingly, is able to learn a bigram language model from a set of bag-of-words documents. At its heart, our approach is an EM algorithm that seeks a model which maximizes the regularized marginal likelihood of the bag-of-words documents. In experiments on seven corpora, we observed that our learned bigram language models: i) achieve better test set perplexity than unigram models trained on the same bag-of-words documents, and are not far behind "oracle bigram models" trained on the corresponding ordered documents; ii) assign higher probabilities to sensible bigram word pairs; iii) improve the accuracy of ordered-document recovery from a bag-of-words. Our approach opens the door to novel phenomena, for example, privacy leakage from index files.

1 Introduction

A bag-of-words (BOW) is a basic document representation in natural language processing. In this paper, we consider a BOW in its simplest form, i.e., a unigram count vector or word histogram over the vocabulary. When performing the counting, word order is ignored. For example, the phrases "really neat" and "neat really" contribute equally to a BOW. Obviously, once a set of documents is turned into a set of BOWs, the word order information within them is completely lost—or is it?
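As a minimal illustration of this representation (a sketch of ours, not from the paper; the `bow` helper is a hypothetical name), the snippet below builds unigram-count BOWs with Python's collections.Counter and confirms that the two phrases above become indistinguishable:

```python
from collections import Counter

def bow(document):
    """Unigram-count bag-of-words of a whitespace-tokenized document."""
    return Counter(document.split())

print(bow("really neat"))   # Counter({'really': 1, 'neat': 1})
print(bow("neat really"))   # Counter({'neat': 1, 'really': 1})
print(bow("really neat") == bow("neat really"))  # True: word order is discarded
```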
In this paper, we show that one can in fact partly recover the order information. Specifically, given a set of documents in unigram-count BOW representation, one can recover a non-trivial bigram language model (LM)¹, which has part of the power of a bigram LM trained on ordered documents. At first glance this seems impossible: How can one learn bigram information from unigram counts? However, we will demonstrate that multiple BOW documents enable us to recover some higher-order information.

Our results have implications in a wide range of natural language problems, in particular document privacy. With the wide adoption of natural language applications like desktop search engines, software programs are increasingly indexing computer users' personal files for fast processing. Most index files include some variant of the BOW. As we demonstrate in this paper, if a malicious party gains access to BOW index files, it can recover more than just unigram frequencies: (i) the malicious party can recover a higher-order LM; (ii) with the LM it may attempt to recover the original ordered document from a BOW by finding the most-likely word permutation². Future research will quantify the extent to which such a privacy breach is possible in theory, and will find solutions to prevent it.

There is a vast literature on language modeling; see, e.g., (Rosenfeld, 2000; Chen and Goodman, 1999; Brants et al., 2007; Roark et al., 2007). However, to the best of our knowledge, none addresses this reverse direction of learning higher-order LMs from lower-order data. This work is inspired by recent advances in inferring network structure from co-occurrence data, for example, for computer networks and biological pathways (Rabbat et al., 2007).

¹ A trivial bigram LM is a unigram LM which ignores history: P(v|u) = P(v).
² It is possible to use a generic higher-order LM, e.g., a trigram LM trained on standard English corpora, for this purpose. However, incorporating a user-specific LM helps.

2 Problem Formulation and Identifiability

We assume that a vocabulary of size W is given. For notational convenience, we include in the vocabulary a special "begin-of-document" symbol ⟨d⟩ which appears only at the beginning of each document. The training corpus consists of a collection of n BOW documents {x_1, ..., x_n}. Each BOW x_i is a vector (x_{i1}, ..., x_{iW}) where x_{iu} is the number of times word u occurs in document i. Our goal is to learn a bigram LM θ, represented as a W × W transition matrix with θ_{uv} = P(v|u), from the BOW corpus. Note P(v|⟨d⟩) corresponds to the initial state probability for word v, and P(⟨d⟩|u) = 0, ∀u.

It is worth noting that traditionally one needs ordered documents to learn a bigram LM. A natural question that arises in our problem is whether or not a bigram LM can be recovered from the BOW corpus with any guarantee. Let X denote the space of all possible BOWs. As a toy example, consider W = 3 with the vocabulary {⟨d⟩, A, B}. Assuming all documents have equal length |x| = 4 (including ⟨d⟩), then X = {(⟨d⟩:1, A:3, B:0), (⟨d⟩:1, A:2, B:1), (⟨d⟩:1, A:1, B:2), (⟨d⟩:1, A:0, B:3)}. Our training BOW corpus, when sufficiently large, provides the marginal distribution p̂(x) for x ∈ X. Can we recover a bigram LM from p̂(x)?

To answer this question, we first need to introduce a generative model for the BOWs. We assume that the BOW corpus is generated from a bigram LM θ in two steps: (i) an ordered document is generated from the bigram LM θ; (ii) the document's unigram counts are collected to produce the BOW x. Therefore, the probability of a BOW x being generated by θ can be computed by marginalizing over unique orderings z of x:

$$P(x \mid \theta) = \sum_{z \in \sigma(x)} P(z \mid \theta) = \sum_{z \in \sigma(x)} \prod_{j=2}^{|x|} \theta_{z_{j-1} z_j},$$

where σ(x) is the set of unique orderings, and |x| is the document length. For example, if x = (⟨d⟩:1, A:2, B:1) then σ(x) = {z_1, z_2, z_3} with z_1 = "⟨d⟩ A A B", z_2 = "⟨d⟩ A B A", z_3 = "⟨d⟩ B A A". Bigram LM recovery then amounts to finding a θ that satisfies the system of marginal-matching equations

$$P(x \mid \theta) = \hat{p}(x), \qquad \forall x \in \mathcal{X}. \qquad (1)$$

As a concrete example where one can exactly recover a bigram LM from BOWs, consider our toy example again. We know there are only three free variables in our 3 × 3 bigram LM θ: r = θ_{⟨d⟩A}, p = θ_{AA}, q = θ_{BB}, since the rest are determined by normalization. Suppose the documents are generated from a bigram LM with true parameters r = 0.25, p = 0.9, q = 0.5. If our BOW corpus is very large, we will observe that 20.25% of the BOWs are (⟨d⟩:1, A:3, B:0), 37.25% are (⟨d⟩:1, A:2, B:1), and 18.75% are (⟨d⟩:1, A:0, B:3). These numbers are computed using the definition of P(x|θ). We solve the reverse problem of finding r, p, q from the system of equations (1), now explicitly written as

$$\begin{aligned}
r p^2 &= 0.2025 \\
r p (1-p) + r(1-p)(1-q) + (1-r)(1-q)p &= 0.3725 \\
(1-r) q^2 &= 0.1875.
\end{aligned}$$

The above system has only one valid solution, which is the correct set of bigram LM parameters (r, p, q) = (0.25, 0.9, 0.5). However, if the true parameters were (r, p, q) = (0.1, 0.2, 0.3) with proportions of BOWs being 0.4%, 19.8%, 8.1%, respectively, it is easy to verify that the system would have multiple valid solutions: (0.1, 0.2, 0.3), (0.8819, 0.0673, 0.8283), and (0.1180, 0.1841, 0.3030). In general, if p̂(x) is known from the training BOW corpus, when can we guarantee to uniquely recover the bigram LM θ? This is the question of identifiability, which means the transition matrix θ satisfying (1) exists and is unique. Identifiability is related to finding unique solutions of a system of polynomial equations, since (1) is such a system in the elements of θ. The details are beyond the scope of this paper, but applying the technique in (Basu and Boston, 2000), it is possible to show that for W = 3 (including ⟨d⟩) we need longer documents (|x| ≥ 5) to ensure identifiability. The identifiability of more general cases is still an open research question.
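The 20.25%, 37.25%, and 18.75% figures can be verified directly from the definition of P(x|θ) by enumerating σ(x). The short sketch below (our illustration, not code from the paper; toy_theta, unique_orderings, and prob_bow are hypothetical names) does this for the toy vocabulary and |x| = 4:

```python
from itertools import permutations

def toy_theta(r, p, q):
    """3x3 bigram transition matrix over {<d>, A, B}; theta[u][v] = P(v | u)."""
    return {'<d>': {'A': r,     'B': 1 - r},
            'A':   {'A': p,     'B': 1 - p},
            'B':   {'A': 1 - q, 'B': q}}

def unique_orderings(bow):
    """sigma(x): distinct orderings consistent with the BOW, each starting with <d>."""
    words = [w for w, c in bow.items() if w != '<d>' for _ in range(c)]
    return {('<d>',) + perm for perm in permutations(words)}

def prob_bow(bow, theta):
    """P(x | theta): sum over z in sigma(x) of the product of transition probabilities."""
    total = 0.0
    for z in unique_orderings(bow):
        prob = 1.0
        for prev, cur in zip(z, z[1:]):
            prob *= theta[prev][cur]
        total += prob
    return total

theta = toy_theta(r=0.25, p=0.9, q=0.5)
for bow in ({'<d>': 1, 'A': 3, 'B': 0}, {'<d>': 1, 'A': 2, 'B': 1},
            {'<d>': 1, 'A': 1, 'B': 2}, {'<d>': 1, 'A': 0, 'B': 3}):
    print(bow, round(prob_bow(bow, theta), 4))
# Prints 0.2025, 0.3725, 0.2375, 0.1875 -- the first, second, and fourth match the
# 20.25%, 37.25%, and 18.75% figures in the text; the remaining 23.75% falls on (A:1, B:2).
```

Brute-force enumeration of σ(x) is only practical for such tiny documents, but it makes the marginal-matching equations (1) concrete: a candidate θ is consistent with the corpus exactly when these computed values match the observed BOW proportions p̂(x).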
3 Bigram Recovery Algorithm

In practice, the documents are not truly generated from a bigram LM, and the BOW corpus may be small. We therefore seek a maximum likelihood estimate of θ, or a regularized version of it. Equivalently, we no longer require equality in (1), but instead find θ that makes the distribution P(x|θ) as close to p̂(x) as possible. We formalize this notion below.

3.1 The Objective Function

Given a BOW corpus {x_1, ..., x_n}, its normalized log likelihood under θ is

$$\ell(\theta) \equiv \frac{1}{C} \sum_{i=1}^{n} \log P(x_i \mid \theta),$$

where C = Σ_{i=1}^{n} (|x_i| − 1) is the corpus length excluding ⟨d⟩'s. We then seek the θ that maximizes a regularized version of this likelihood,

$$\max_{\theta} \; \ell(\theta) - \lambda D(\phi, \theta), \qquad (2)$$

where D(φ, θ) is a regularization term with weight λ. The summation over hidden ordered documents z in P(x|θ) couples the variables and makes (2) a non-concave problem. We optimize θ using an EM algorithm.

3.2 The EM Algorithm

We derive the EM algorithm for the optimization problem (2). Let O(θ) ≡ ℓ(θ) − λD(φ, θ) be the objective function, and let θ^(t−1) be the bigram LM at iteration t − 1. We can lower-bound O as follows:

$$O(\theta) = \frac{1}{C} \sum_{i=1}^{n} \log \sum_{z \in \sigma(x_i)} P(z \mid \theta^{(t-1)}, x_i) \, \frac{P(z \mid \theta)}{P(z \mid \theta^{(t-1)}, x_i)} \; - \; \lambda D(\phi, \theta)$$
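To make the EM updates concrete, here is a minimal sketch of one iteration for the unregularized case λ = 0 (plain maximum marginal likelihood); this is our illustration under that simplifying assumption, not the paper's implementation, and em_step, seq_prob, and unique_orderings are hypothetical helper names:

```python
from collections import defaultdict
from itertools import permutations

def unique_orderings(bow):
    """sigma(x): distinct orderings consistent with the BOW, each starting with <d>.
    Brute-force enumeration, feasible only for very short documents."""
    words = [w for w, c in bow.items() if w != '<d>' for _ in range(c)]
    return {('<d>',) + perm for perm in permutations(words)}

def seq_prob(z, theta):
    """P(z | theta): product of transition probabilities along the ordering z."""
    prob = 1.0
    for prev, cur in zip(z, z[1:]):
        prob *= theta[prev].get(cur, 0.0)
    return prob

def em_step(bows, theta):
    """One EM iteration for the unregularized objective (lambda = 0).

    E-step: posterior over orderings, P(z | theta, x) for each z in sigma(x).
    M-step: row-normalized expected bigram counts give the new transition matrix."""
    expected = defaultdict(lambda: defaultdict(float))
    for bow in bows:
        probs = {z: seq_prob(z, theta) for z in unique_orderings(bow)}
        total = sum(probs.values())          # = P(x | theta); assumed > 0
        for z, p in probs.items():
            posterior = p / total            # E-step weight for this ordering
            for prev, cur in zip(z, z[1:]):
                expected[prev][cur] += posterior
    new_theta = {}                           # M-step
    for u, row in expected.items():
        row_sum = sum(row.values())
        new_theta[u] = {v: c / row_sum for v, c in row.items()}
    return new_theta

# Usage: start from an initial theta that gives every ordering nonzero probability
# (e.g., near-uniform rows) and iterate em_step until the likelihood stops improving.
```

Each such iteration cannot decrease the marginal likelihood of the BOW corpus. The paper's full objective additionally includes the regularization term λD(φ, θ), which this sketch omits.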
