Learning Mixtures of Plackett-Luce Models

Zhibing Zhao Peter Piech Computer Science Department Computer Science Department Rensselaer Polytechnic Institute Rensselaer Polytechnic Institute Troy, NY 12180 USA Troy, NY 12180 USA [email protected] [email protected] Lirong Xia Computer Science Department Rensselaer Polytechnic Institute Troy, NY 12180 USA [email protected]

Abstract In this paper we address the identifiability and efficient learning problems of finite mixtures of Plackett-Luce models for rank data. We prove that for any k ≥ 2, the mixture of k Plackett- Luce models for no more than 2k − 1 alternatives is non-identifiable and this bound is tight for k = 2. For generic identifiability, we prove that the mixture of k Plackett-Luce models m−2 over m alternatives is generically identifiable if k ≤ b 2 c!. We also propose an efficient generalized method of moments (GMM) algorithm to learn the mixture of two Plackett-Luce models and show that the algorithm is consistent. Our experiments show that our GMM algorithm is significantly faster than the EMM algorithm by Gormley and Murphy [2008], while achieving competitive statistical efficiency.

1 Introduction

In many problems the data are composed of rankings over a finite number of alternatives Marden [1995]. For example, meta-search engines aggregate rankings over webpages from individual search engines Dwork et al. [2001]; rankings over documents are combined to find the most relevant document in information retrieval Liu [2011]; noisy answers from online workers arXiv:1603.07323v4 [cs.LG] 7 Mar 2020 are aggregated to produce a more accurate answer in crowdsourcing Mao et al. [2013]. Rank data are also very common in economics and political science. For example, consumers often give discrete choices data McFadden [1974] and voters often give rankings over presidential candidates Gormley and Murphy [2008]. Perhaps the most commonly-used statistical model for rank data is the Plackett-Luce model Plack- ett [1975], Luce [1959]. The Plackett-Luce model is a natural generalization of multinomial logistic regression. In a Plackett-Luce model, each alternative is parameterized by a positive number that represents the “quality” of the alternative. The greater the quality, the the chance the alternative will be ranked at a higher position. In practice, mixtures of Plackett-Luce models can provide better fitness than a single Plackett- Luce model. An additional benefit is that the learned parameter of a can naturally

1 be used for clustering McLachlan and Basford [1988]. The k-mixture of Plackett-Luce combines k individual Plackett-Luce models via a linear vector of mixing coefficients. For example, Gormley and Murphy [2008] propose an Expectation Minorization Maximization (EMM) algorithm to compute the MLE of Plackett-Luce mixture models. The EMM was applied to an Irish election dataset with 5 alternatives and the four components in the mixture model are interpreted as voting blocs. Surprisingly, the identifiability of Plackett-Luce mixture models is still unknown. Identifiability is an important property for statistical models, which requires that different parameters of the model have different distributions over samples. Identifiability is crucial because if the model is not iden- tifiable, then there are cases where it is impossible to estimate the parameter from the data, and in such cases conclusions drawn from the learned parameter can be wrong. In particular, if Plackett- Luce mixture models are not identifiable, then the voting bloc produced by the EMM algorithm of Gormley and Murphy [2008] can be dramatically different from the ground truth. In this paper, we address the following two important questions about the theory and practice of Plackett-Luce mixture models for rank data. Q1. Are Plackett-Luce mixture models identifiable? Q2. How can we efficiently learn Plackett-Luce mixture models? Q1 can be more complicated than one may think because the non-identifiability of a mixture model usually comes from two sources. The first is label switching, which means that if we label the components of a mixture model differently, the distribution over samples does not change Stephens [2000]. This can be avoided by ordering the components and merging the same components in the mixture model. The second is more fundamental, which states that the mixture model is non- identifiable even after ordering and merging duplicate components. Q1 is about the second type of non-identifiability. The EMM algorithm by Gormley and Murphy [2008] converges to the MLE, but as we will see in the experiments, it can be very slow when the data size is not too small. Therefore, to answer Q2, we want to design learning algorithms that are much faster than the EMM without sacrificing too much statistical efficiency, especially mean squared error (MSE).

1.1 Our Contributions We answer Q1 with the following theorems. The answer depends on the number of components k in the mixture model and the number of alternatives m. m+1 Theorem 1 and 2. For any m ≥ 2 and any k ≥ 2 , the k-mixture Plackett-Luce model (denoted by k-PL) is non-identifiable. This lower bound on k as a function of m is tight for k = 2 (m = 4).

The second half of the theorem is positive: the mixture of two Plackett-Luce models is identifi- able for four or more alternatives. We conjecture that the bound is tight for all k > 2. The k-PL is generically identifiable for m alternatives, if the Lebesgue measure of non-identifiable parameters is 0. We prove the following positive results for k-PL. m−2 Theorem 3. For any m ≥ 6 and any k ≤ b 2 c!, the k-PL is generically identifiable. m−2 m+1 We note that b 2 c! is exponentially larger than the lower bound 2 for (strict) identifiability. m m−2 One interpretation of the theorem is that, when 2 + 1 ≤ k ≤ b 2 c!, even though k-PL is not identifiable in the strict sense, one may not need to worry too much in practice due to generic identifiability. For Q2, we propose a generalized method of moments (GMM)1 algorithm [Hansen, 1982] to

1This should not be confused with Gaussian mixture models.

2 learn the k-PL. We illustrate the algorithm for k = 2 and m ≥ 4, and prove that the algorithm is consistent, which means that when the data are generated from k-PL and the data size n goes to infinity, the algorithm will reveal the ground truth with probability that goes to 1. We then compare our GMM algorithm and the EMM algorithm Gormley and Murphy [2008] w.r.t. statistical efficiency (mean squared error) and computational efficiency in synthetic experiments. As we will see, in Section 5, our GMM algorithm is significantly faster than the EMM algorithm while achieving competitive statistical efficiency. Therefore, we believe that our GMM algorithm is a promising candidate for learning Plackett-Luce mixture models from big rank data.

1.2 Related Work and Discussions Most previous work in mixture models (especially Gaussian mixture models) focuses on cardinal data Teicher [1961, 1963], McLachlan and Peel [2004], Kalai et al. [2012], Dasgupta [1999]. Little is known about the identifiability of mixtures of models for rank data. For rank data, Iannario [2010] proved the identifiability of the mixture of shifted binomial model and the uniform models. Awasthi et al. [2014] proved the identifiability of mixtures of two Mallows’ models. Mallows mixture models were also studied by Lu and Boutilier [2014] and Chierichetti et al. [2015]. Our paper, on the other hand, focuses on mixtures of Plackett-Luce models without any assumptions on data since identifiability is a property of a statistical model. Ammar et al. [2014] discussed non-identifiability of Plackett-Luce mixture models when data consist of only limited length of partial rankings. Oh and Shah [2014] provided sufficient conditions on data for Plackett- Luce mixture models to be identifiable. Suh et al. [2016] studied the possibility to identify the top K alternatives of two mixtures of Plackett-Luce models with respect to the size of datasets. Technically, part of our (non-)identifiability proofs is motivated by the work of Teicher [1963], who obtained sufficient conditions for the identifiability of finite mixture models. However, techni- cally these conditions cannot be directly applied to k-PL because they work either for finite families (Theorem 1 in Teicher [1963]) or for cardinal data (Theorem 2 in Teicher [1963]). Neither is the case for mixtures of Plackett-Luce models. To prove our (non-)identifiability theorems, we develop k novel applications of the Fundamental Theorem of Algebra to analyze the rank of a matrix Fm that represents k-PL (see Preliminaries for more details). Our proof for generic identifiability is based on a novel application of the tensor-decomposition approach that analyzes the generic Kruskal’s rank of matrices advocated by Allman et al. [2009]. In addition to being important in their own right, our (non)-identifiability theorems also carry a clear message that has been overlooked in the literature: when using Plackett-Luce mixture models to fit rank data, one must be very careful about the interpretation of the learned parameter. Specifi- cally, when m ≤ 2k−1, it is necessary to double-check whether the learned parameter is identifiable (Theorem 1), which can be computationally hard. On the positive side, identifiability may not be a m−2 big concern in practice under a much milder condition (k ≤ b 2 c!, Theorem 3). Gormley and Murphy [2008] used 4-PL to fit an Irish election dataset with 5 alternatives. Ac- cording to our Theorem 1, 4-PL for 5 alternatives is non-identifiable. Moreover, our generic identifi- ability theorem (Theorem 3) does not apply because m = 5 < 6. Therefore, it is possible that there exists another set of voting blocs and mixing coefficients with the same likelihood as the output of the EMM algorithm. Whether it is true or not, we believe that it is important to add discussions and justifications of the uniqueness of the voting blocs obtained by Gormley and Murphy [2008]. Parameter inference for single Plackett-Luce models is studied in Cheng et al. [2010] and Azari Soufi- ani et al. [2013]. Azari Soufiani et al. [2013] proposed a GMM, which is quite different from our method, and cannot be applied to Plackett-Luce mixture models. The MM algorithm by Hunter [2004], which is compared in Azari Soufiani et al. 
[2013], is also very different from the EMM that

3 is being compared in this paper.

2 Preliminaries

Let A = {ai|i = 1, 2, ··· , m} denote a set of m alternatives. Let L(A) denote the set of linear orders (rankings), which are transitive, antisymmetric and total binary relations, over A. A ranking is often denoted by ai1 ai2 · · · aim , which means that ai1 is the most preferred alternative, ai2 is the second preferred, aim is the least preferred, etc. Let P = (V1,V2, ··· ,Vn) denote the data (also called a preference profile), where for all j ≤ n, Vj ∈ L(A). ~ Definition 1 (Plackett-Luce model). The parameter space is Θ = {θ = {θi|i = 1, 2, ··· , m, θi ∈ Pm n ~ [0, 1], i=1 θi = 1}}. The sample space is S = L(A) . Given a parameter θ ∈ Θ, the probability of any ranking V = ai1 ai2 · · · aim is θ θ θi ~ i1 i2 m−1 PrPL(V |θ) = × P × · · · × 1 p>1 θip θim−1 +θim We assume that data are generated i.i.d. in the Plackett-Luce model. Therefore, given a prefer- ~ ~ Qn ~ ence profile P and θ ∈ Θ, we have PrPL(P |θ) = j=1 PrPL(Vj|θ). The Plackett-Luce model has the following intuitive explanation. Suppose there are m balls, representing m alternatives in an opaque bag. Each ball ai is assigned a quality value θi. Then, we generate a ranking in m stages. In each stage, we take one ball out of the bag. The probability for each remaining ball being taken out is the value assigned to it over the sum of the values assigned to the remaining balls. The order of drawing is the ranking over the alternatives. P We require i θi = 1 to normalize the parameter so that the Plackett-Luce model is identifiable. It is not hard to verify that for any Plackett-Luce model, the probability for the alternative ap (p ≤ m) to be ranked at the top of a ranking is θp; the probability for ap to be ranked at the top and aq ranked at the second position is θpθq , etc. 1−θp Definition 2 (k-mixture Plackett-Luce model). Given m ≥ 2 and k ≥ 2, we define the k-mixture Plackett-Luce model as follows. The sample space is S = L(A)n. The parameter space has two parts. The first part is the mixing coefficients (α1, . . . , αk) where for all r ≤ k, αr ≥ 0, and Pk ~(1) ~(2) ~(k) ~(r) (r) (r) (r) > r=1 αr = 1. The second part is (θ , θ ,..., θ ), where θ = [θ1 , θ2 , ··· , θm ] is the parameter of the r-th Plackett-Luce component. The probability of a ranking V is ~ Pk ~(r) Prk-PL(V |θ) = r=1 αr PrPL(V |θ ), ~(r) ~(r) where PrPL(V |θ ) is the probability of V in the r-th Plackett-Luce model given θ .

For simplicity we use k-PL to denote the k-mixture Plackett-Luce model.

Definition 3 (Identifiability) Let M = {Pr(·|θ~): θ~ ∈ Θ} be a statistical model. M is identifiable if for all θ,~γ~ ∈ Θ, we have Pr(·|θ~) = Pr(·|~γ) =⇒ θ~ = ~γ.

In this paper, we slightly modify this definition to eliminate the label switching problem. We say that ~(1) ~(2) ~(k1) k-PL is identifiable if there do not exist (1) 1 ≤ k1, k2 ≤ k, non-degenerate θ , θ , ··· , θ , (1) (2) (k2) ~γ ,~γ , ··· ,~γ , which means that these k1 + k2 vectors are pairwise different; (2) all strictly positive mixing coefficients (α(1), . . . , α(1)) and (α(2), . . . , α(2)), so that for all rankings V we have 1 k1 1 k2 Pk1 (1) ~(r) Pk2 (2) (r) r=1 αr PrPL(V |θ ) = r=1 αr PrPL(V |~γ ) Throughout the paper, we will represent a distribution over the m! rankings over m alternatives ~(r) ~ ~ for a Plackett-Luce component with parameter θ as a column vector fm(θ) with m! elements,

4 one for each ranking and whose value is the probability of the corresponding ranking. For example, when m = 3, we have

Pr(a a a |θ~)  θ1θ2  1 2 3 1−θ1 ~ θ1θ3 Pr(a1 a3 a2|θ)      1−θ1  ~ θ1θ2 Pr(a2 a1 a3|θ)   f~ (θ~) =   =  1−θ2  3 Pr(a a a |θ~)  θ2θ3   2 3 1   1−θ2  Pr(a a a |θ~)  θ1θ3   3 1 2   1−θ3  Pr(a a a |θ~) θ2θ3 3 2 1 1−θ3

~(1) (2~k) k Given θ ,..., θ , we define Fm as a m! × 2k matrix for k-PL with m alternatives h i k ~ ~(1) ~ ~(2) ~ ~(2k) Fm = fm(θ ) fm(θ ) ··· fm(θ ) (1)

k ~(1) ~(2k) We note that Fm is a function of θ ,..., θ , which are often omitted. We prove the identifia- k bility or non-identifiability of k-PL by analyzing the rank of Fm. The reason that we consider 2k components is that we want to find (or argue that we cannot find) another k-mixture model that has the same distribution as the original one.

3 Identifiability of Plackett-Luce Mixture Models

k We first prove a general lemma to reveal a relationship between the rank of Fm and the identifiability of Plackett-Luce mixture models. We recall that a set of vectors is non-degenerate if its elements are pairwise different.

k ~(1) ~(2k) Lemma 1 If the rank of Fm is 2k for all non-degenerate θ ,..., θ , then k-PL is identifiable. Otherwise (2k − 1)-PL is non-identifiable.

k ~(1) ~(2k) Proof: Suppose for the sake of contradiction the rank of Fm is 2k for all non-degenerate θ ,..., θ but k-PL is non-identifiable. Then, there exist non-degenerate θ~(1), θ~(2), ··· , θ~(k1), ~γ(1),~γ(2), ··· ,~γ(k2) and all strictly positive mixing coefficients (α(1), . . . , α(1)) and (α(2), . . . , α(2)), such that for all 1 k1 1 k2 rankings V , we have Pk1 (1) ~(r) Pk2 (2) (r) r=1 αr PrPL(V |θ ) = r=1 αr PrPL(V |~γ ) ~(1) ~(2) ~(2k−(k1+k2)) ~(1) ~(k1) (1) Let δ , δ ,..., δ denote any 2k−(k1+k2) vectors so that {θ ,..., θ ,~γ ,..., (k2) ~(1) ~(2k−(k1+k2)) k ~γ , δ ,..., δ } is non-degenerate. It follows that the rank of the corresponding Fm is strictly smaller than 2k, because

k1 k2 (2k−k1−k2) X (1) ~(r) X (2) (r) X ~(r) αr PrPL(V |θ ) − αr PrPL(V |~γ ) + δ · 0 = 0. r=1 r=1 r=1 This is a contradiction. k ~ On the other hand, if rank(Fm) < 2k for some non-degenerate θ’s, then there exists a nonzero > k vector ~α = [α1, α2, . . . , α2k] such that Fm · ~α = 0. Suppose in ~α there are k1 positive elements and k2 negative elements, then it follows that max{k1, k2}-mixture model is not identifiable, and max{k1, k2} ≤ 2k − 1. 

5 m+1 Theorem 1 For any m ≥ 2 and any k ≥ 2 , the k-PL is non-identifiable. Proof: The proof is constructive and is based on a refinement of the second half of Lemma 1. ~(1) ~(2k) T For any k and m = 2k − 1, we will define θ ,..., θ and ~α = [α1, . . . , α2k] such that (1) k ~(r) Fm · ~α = 0 and (2) ~α has k positive elements and k negative elements. In each θ , the value for alternatives {a2, . . . , am} are the same. The proof for any m < 2k − 1 is similar. (r) (r) 1−θ1 (r) Formally, let m = 2k − 1. For all i ≥ 2 and r ≤ 2k, we let θi = 2k−2 , where θi is the (r) parameter corresponding to the ith alternative of the rth model. We use er to represent θ1 and we (r) 1−θ1 use br to represent 2k−2 . It is not hard to check that the probability for a1 to be ranked at the ith position in the rth Plackett-Luce model is (2k − 2)! e (b )i−1 r r (2) (2k − 1 − i)! Qi−1 p=0(1 − pbr)

k k Then Fm can be reduced to a (2k − 1) × (2k) matrix. Because rank(Fm) ≤ 2k − 1 < 2k, Lemma 1 immediately tells us that (2k − 1)-PL is non-identifiable for 2k − 1 alternatives, but this is much weaker than what we are proving in this theorem. We now define a new (2k − 1) × (2k) matrix Hk k obtained from Fm by performing the following linear operations on row vectors. (i) Make the first k k row of H to be ~1; (ii) for any 2 ≤ i ≤ 2k − 1, the ith row of H is the probability for a1 to be ranked at the (i − 1)-th position according to (2); (iii) remove all constant factors. More precisely, for any er we define the following function.  1   er   er (1−er )  ~∗   f (er) =  er +2k−3   .   .   2k−3  er (1−er ) (er +2k−3)···((2k−3)er +1)

k ~∗ ~∗ ~∗ Then we define H = [f (e1), f (e2), ··· , f (e2k)].

~∗ ∗ ∗ ∗ > Lemma 2 If there exist all different e1, e2, ··· , e2k < 1 and a non-zero vector β = [β1 , β2 , ··· , β2k] such that (i) Hkβ~∗ = 0 and (ii) β~∗ has k positive elements and k negative elements, then k-PL for 2k − 1 alternatives is not identifiable.

∗ ∗ ∗ ∗ ∗ ∗ k ~∗ Proof: W.l.o.g. assume β1 , β2 , ··· , βk > 0 and βk+1, βk+2, β2k < 0. H2k−1β = 0 means that

k 2k X ∗ ~ X ∗ ~ βr fr = − βr fr r=1 r=k+1

k P ∗ Pk ∗ ∗ ∗ According to the first row in H , we have r βr = 0. Let S = r=1 βr . Further let αr = βr /S ∗ ∗ when r = 1, 2, ··· , k and αr = −βr /S when r = k + 1, k + 2, ··· , 2k. We have

k 2k X ∗ ~ X ∗ ~ αr fr = αr fr r=1 r=k+1

Pk ∗ P2k ∗ where r=1 αr = 1 and r=k+1 αr = 1. This means that the model is not identifiable.  Then, the theorem is proved by showing that the following β~∗ satisfies the conditions in Lemma 2.

6 For any r ≤ 2k, Q2k−3(pe + 2k − 2 − p) ∗ p=1 r βr = Q (3) q6=r(er − eq)

Note that the numerator is always positive. W.l.o.g. let e1 < e2 < ··· < e2k, then half of the denominators are positive and the other half are negative. We then use induction to prove that the conditions in Lemma 2 are satisfied in the following series of lemmas.

P 1 Lemma 3 Q = 0. s t6=s(es−et) Proof: The partial fraction decomposition of the first term is

1 X Bq Q = ( ) (e1 − eq) e1 − eq q6=1 q6=1

1 where Bq = Q . p6=q,p6=1(eq −ep) Namely, 1 X 1 Q = − (Q ) (e1 − eq) (eq − ep) q6=1 q6=1 p6=q We have X 1 1 X 1 Q = Q + (Q ) = 0 (es − et) (e1 − eq) (eq − ep) s t6=s q6=1 q6=1 p6=q 

µ Pν (es) Lemma 4 For all µ ≤ ν − 2, we have Q = 0. s=1 t6=s(es−et) Proof: Base case: When ν = 2, µ = 0, obviously 1 1 + = 0 e1 − e2 e2 − e1

µ Pν es Assume the lemma holds for ν = p and all µ ≤ ν − 2, that is Q = 0. When s=1 t6=s(es−et) ν = p + 1, µ = 0, by Lemma 3 we have

p+1 X 1 = 0 Q (e − e ) s=1 t6=s s t

q Pp+1 es Assume Q = 0 for all µ = q, q ≤ p − 2. For µ = q + 1, s=1 t6=s(es−et)

p+1 q+1 p+1 q p+1 q X e X e ep+1 X e (es − ep+1) s = s + s Q (e − e ) Q (e − e ) Q (e − e ) s=1 t6=s s t s=1 t6=s s t s=1 t6=s s t p+1 p X eq X eq =e s + s = 0 p+1 Q (e − e ) Q (e − e ) s=1 t6=s s t s=1 t6=s s t

The last equality is obtained from the induction hypotheses. 

7 Pν f(es) Lemma 5 Let f(x) be any polynomial of degree ν − 2, then Q = 0. s=1 t6=s(es−et)

This can be easily derived from Lemma 4. Now we are ready to prove that Hkβ~∗ = 0. Note that ∗ k k the degree of the numerator of βr is 2k − 3 (see Equation (3)). Let [H ]i denote the i-th row of H . We have the following calculations.

2k Q2k−3 X p=1 (per + 2k − 2 − p) [Hk] β~∗ = = 0 1 Q (e − e ) r=1 q6=r r q 2k Q2k−3 X p=1 er(per + 2k − 2 − p) [Hk] β~∗ = = 0 2 Q (e − e ) r=1 q6=r r q For any 2 < i ≤ 2k − 1, we have

k ~∗ [H ]iβ

2k i−2 Q2k−3 X er(1 − er) p=1 (per + 2k − 2 − p) = Qi−2 Q (e − e ) r=1 p=1(per + 2k − 2 − p) q6=r r q 2k i−2 Q2k−3 X er(1 − er) p=i−1(per + 2k − 2 − p) = = 0 (Lemma 5) Q (e − e ) r=1 q6=r r q

The last equality is obtained by letting v = 2k − 2 in Lemma 5. Therefore, Hkβ~∗ = 0. Note that β~∗ is also the solution for less than 2k − 1 alternatives. The theorem follows after applying Lemma 2. 

Theorem 2 For k = 2, and any m ≥ 4, the 2-PL is identifiable.

Proof: We will apply Lemma 1 to prove the theorem. That is, we will show that for all non- ~(1) ~(2) ~(3) ~(4) 2 2 degenerate θ , θ , θ , θ such that rank(F4) = 4. We recall that F4 is a 24×4 matrix. Instead 2 ∗ 2 of proving rank(F4) = 4 directly, we will first obtain a 4 × 4 matrix F = T × F4 by linearly 2 ∗ combining some row vectors of F4 via a 4 × 24 matrix T . Then, we show that rank(F ) = 4, which 2 implies that rank(F4) = 4. > For simplicity we use [er, br, cr, dr] to denote the parameter of r-th Plackett-Luce compo-   e1 e2 e3 e4 h (1) (2) (3) (4)i b1 b2 b3 b4  nent for a1, a2, a3, a4 respectively. Namely, θ~ θ~ θ~ θ~ =   = c1 c2 c3 c4  d1 d2 d3 d4 ~ω(1) ~ω(2)  , where for each r ≤ 4, ~ω(r) is a row vector. We further let ~1 = [1, 1, 1, 1]. ~ω(3) ~ω(4) P4 (i) ~ (1) (2) (3) Clearly we have i=1 ~ω = 1. Therefore, if there exist three ~ω’s, for example {~ω , ~ω , ~ω }, (1) (2) (3) ~ 2 (i) such that ~ω , ~ω , ~ω and 1 are linearly independent, then rank(F4) = 4 because each ~ω corre- (i) sponds to the probability of ai being ranked at the top, which means that ~ω is a linear combination 2 ~(1) ~(2) ~(3) ~(4) (1) (2) (3) (4) of rows in F4. Because θ , θ , θ , θ is non-degenerate, at least one of {~ω , ~ω , ~ω , ~ω } is linearly independent of ~1. W.l.o.g. suppose ~ω(1) is linearly independent of ~1. This means that not all of e1, e2, e3, e4 are equal. The theorem will be proved in the following two cases.

8 Case 1. ~ω(2), ~ω(3), and ~ω(4) are all linear combinations of ~1 and ~ω(1). Case 2. There exists a ~ω(i) (where i ∈ {2, 3, 4}) that is linearly independent of ~1 and ~ω(1). (i) (1) Case 1. For all i = 2, 3, 4 we can rewrite ~ω = pi~ω + qi for some constants pi, qi. More precisely, for all r = 1, 2, 3, 4 we have:

br = p2er + q2 (4)

cr = p3er + q3 (5)

dr = p4er + q4 (6)

Because ~ω(1) + ~ω(2) + ~ω(3) + ~ω(4) = ~1, we have

p2 + p3 + p4 = −1 (7)

q2 + q3 + q4 = 1 (8)

2 ~ ~(r) In this case for each r ≤ 4, the r-th column of F4, which is f4(θ ), is a function of er. Because ~ the θ’s are non-degenerate, e1, e2, e3, e4 must be pairwise different. We assume p2 6= 0 and q2 6= 1 for all subcases of Case 1 (This will be denoted as Case 1 Assumption). The following claim shows that there exists pi, qi where i ∈ {2, 3, 4} satisfying this condition. If i 6= 2 we can switch the row of alternatives a2 and ai. Then the assumption holds.

Claim 1 There exists i ∈ 2, 3, 4 which satisfy the following conditions:

• qi 6= 1

• pi 6= 0

Proof: Suppose for all i = 2, 3, 4, qi = 1 or pi = 0. If pi = 0, qi must be positive because br, cr, dr are all positive. If pi 6= 0, Then qi = 1 due to the assumption above. So qi > 0 for all i = 2, 3, 4. If there exists i s.t. qi = 1, then (8) does not hold. So for all i, qi 6= 1. Then pi = 0 holds for all i ∈ {2, 3, 4}, which violates (7).  Case 1.1: p2 + q2 6= 0 and p2 + q2 6= 1. For this case we first define a 4 × 4 matrix Fˆ as in Table 1. We use ~1 and ~ω(1) to denote the

Fˆ Moments  1 1 1 1  ~1  e1 e2 e3 e4  a1 others  e1b1 e2b2 e3b3 e4b4  a2 a1 others  1−b1 1−b2 1−b3 1−b4  e b e b e b e b 1 1 2 2 3 3 4 4 a1 a2 others 1−e1 1−e2 1−e3 1−e4

Table 1: Fˆ.

(1) first two rows of Fˆ. ~ω corresponds to the probability that a1 is ranked at the top. We call such a probability a moment. Each moment is the sum of probabilities of some rankings. For example, the “a1 others” moment is the total probability for {V ∈ L(A): a1 is ranked at the top of V }. It ˆ ˆ ˆ 2 follows that there exists a 4 × 24 matrix T such that F = T × F4.

9 Define 1 1 1 1 θ~(b) = [ , , , ] 1 − b1 1 − b2 1 − b3 1 − b4 1 1 1 1 = [ , , , ] 1 − p2e1 − q2 1 − p2e2 − q2 1 − p2e3 − q2 1 − p2e4 − q2 and 1 1 1 1 θ~(e) = [ , , , ] (9) 1 − e1 1 − e2 1 − e3 1 − e4

 ~1  (1) ~ω  and let F∗ =  . It can be verified that Fˆ = T ∗ × F∗, where θ~(b)  θ~(e)

 1 0 0 0  0 1 0 0 T ∗ =    − 1 −1 1−q2 0   p2 p2  −(p2 + q2) −p2 0 p2 + q2

Because Case 1.1 assumes that p2 + q2 6= 0 and we can select a2 such that p2 6= 0, q2 6= 1, we have ∗ ∗ ∗ −1 ˆ ∗ 2 that T is invertible. Therefore, F = (T ) × F, which means that F = T × F4 for some 4 × 24 matrix T . We now prove that rank(F∗) = 4. For the sake of contradiction, suppose that rank(F∗) < 4. It ∗ follows that there exist a nonzero row vector ~t = [t1, t2, t3, t4], such that ~t · F = 0. This means that for all r ≤ 4, t3 t4 t1 + t2er + + = 0 1 − p2er − q2 1 − er

t3 t4 Let f(x) = t1 + t2x + + . Let g(x) = (1 − p2x − q2)(1 − x)f(x). We recall that 1−p2x−q2 1−x e1, e2, e3, e4 are four roots of f(x), which means that they are also the four roots of g(x). Because in Case 1.1 we assume that p2 + q2 6= 1, it can be verified that not all coefficients of g(x) are zero. We note that the degree of g(x) is 3. Therefore, due to the Fundamental Theorem of Algebra, g(x) has at most three different roots. This means that e1, e2, e3, e4 are not pairwise different, which is a contradiction. ∗ 2 Therefore, rank(F ) = 4, which means that rank(F4) = 4. This finishes the proof for Case 1.1. Case 1.2. p2 + q2 = 1. If we can find an alternative ai, such that pi and qi satisfy the following conditions:

• pi 6= 0

• qi 6= 1

• pi + qi 6= 0

• pi + qi 6= 1

Then we can use ai as a2, which belongs to Case 1.1. Otherwise we have the following claim.

10 Claim 2 If for i ∈ {3, 4}, pi and qi satisfy one of the following conditions

1. pi = 0

2. pi 6= 0, qi = 1

3. pi + qi = 0

4. pi + qi = 1

We claim that there exists i ∈ {3, 4} s.t. pi, qi satisfy condition 2, namely pi 6= 0, qi = 1.

Proof: Suppose pi = 0, then qi > 0 because pie1 +qi is a parameter in a Plackett-Luce component. If for i = 3, 4, pi and qi satisfy any of conditions 1, 3 or 4, then qi ≥ −pi (qi > 0 for condition P4 P4 1, qi = −pi for condition 3, qi = 1 − pi > −p1 for condition 4). As i=2 pi = −1, i=2 qi ≥ P4 P4 1 − i=2 pi = 2, which contradicts that i=2 qi = 1. 

Without loss of generality we let p3 6= 0 and q3 = 1. We now construct Fˆ as is shown in the following table.

Fˆ Moments  1 1 1 1  ~1  e1 e2 e3 e4  a1 others  e1b1 e2b2 e3b3 e4b4  a1 a2 others  1−e1 1−e2 1−e3 1−e4  c b c b c b c b 1 1 2 2 3 3 4 4 a3 a2 others 1−c1 1−c2 1−c3 1−c4

We define θ~(b) the same way as in Case 1.1, and define 1 1 1 1 θ~(c) = [ , , , ] e1 e2 e3 e4 Define  ~1  (1) ∗ ~ω  F =   θ~(e)  θ~(c)

We will show that Fˆ = T ∗ × F∗ where T ∗ has full rank. For all r = 1, 2, 3, 4

crbr (p3er + q3)(p2er + q2) (p3er + 1)(p2er + 1 − p2) p2 1 − p2 = = = −p2er + (p2 − 1 − ) − 1 − cr 1 − p3er − q3 −p3er p3 p3er So   ~1  ~ω(1)  ˆ   F =  ~ (1) ~(e)   −1 − p2~ω + θ  p2 (1) 1−p2 ~(c) (p2 − 1 − )~1 − p2~ω − θ p3 p3

11 ∗ ∗ Suppose p2 6= 1, we have Fˆ = T × F where  1 0 0 0 

∗  0 1 0 0  T =    −1 −p2 1 0  p2 1−p2 p2 − 1 − −p2 0 − p3 p3 which is full rank. So rank(F∗) = rank(Fˆ). 2 2 If rank(F4) ≤ 3, then there is at least one column in F4 dependent of the other columns. As all ˆ 2 ˆ rows in F are linear combinations of rows in F4, there is also at least one column in F dependent of the other columns. Therefore we have rank(Fˆ) ≤ 3. Further we have rank(F∗) ≤ 3. Therefore, there exists a nonzero row vector ~t = [t1, t2, t3, t4], s.t. ~tF∗ = 0

Namely, for all r ≤ 4, t3 t4 t1 + t2er + + = 0 1 − er er Let t t f(x) = t + t x + 3 + 4 = 0 1 2 1 − x x

g(x) = x(1 − x)f(x) = x(1 − x)(t1 + t2) + t3x + t4(1 − x) If any of the coefficients in f(x) is nonzero, then g(x) is a polynomial of degree at most 3. There will be a maximum of 3 different roots. Since this equation holds for er where r = 1, 2, 3, 4, there exists s 6= t s.t. es = et. Otherwise g(x) = f(x) = 0 for all x. We have

g(0) = t4 = 0

g(1) = t3 = 0

Substitute t3 = t4 = 0 into f(x), we have f(x) = t1 + t2x = 0 for all x. So t1 = t2 = 0. This contradicts the nonzero requirement of ~t. Therefore there exists s 6= t s.t. es = et. From (4)(5)(6) we have θ~(s) = θ~(t), which is a contradition. If p2 = 1, from the assumption of Case 1.2 q2 = 0. So br = er for r = 1, 2, 3, 4. Then from (7) we have p4 = −p3 − 2 and from (8) we have q4 = 0. Since p4 and q4 satisfy one of the four conditions in Claim 2, we can show it must satisfy Condition 4. (q4 = 0 violates Condition 2. If it satisfies Condition 1 or 3, then p4 = 0. Then dr = p4ar + q4 = 0, which is impossible.) So p4 = 1, (1) (2) (4) (3) (1) and p3 = −3. This is the case where ~ω = ~ω = ~ω and ~ω = 1 − 3~ω . For this case, (2) (3) (4) 1−~ω(1) we use a3 as a1. After the transformation, we have ~ω = ~ω = ~ω = 3 . We claim that this lemma holds for a more general case where pi + qi = 0 for i = 2, 3, 4. It is easy to check that 1 1 pi = − 3 and qi = 3 belongs to this case. Claim 3 For all r = 1, 2, 3, 4, if     er er (r) br   p2er − p2  θ~ =   =   (10) cr   p3er − p3  dr −(1 + p2 + p3)er + (1 + p2 + p3) The model is identifiable.

12 Proof: We first show a claim, which is useful to the proof.

Claim 4 Under the settings of (10), −1 < p2, p3 < 0, −1 < p2 + p3 < 0.

Proof: From the definition of Plackett-Luce model, 0 < er, br, cr, dr < 1. From (10), we have br p2 = . Since br > 0 and er < 1, p2 < 0. Similarly we have p3 < 0 and −(1 + p2 + p3) < 0. er −1 So −1 < p2 + p3 < 0. Then we have p2 > −1 − p3. So −1 − p3 < p2 < 0, p3 > −1. Similarly we have p2 > −1.  In this case, we construct Fˆ in the following way.

Fˆ Moments  1 1 1 1  ~1  e1 e2 e3 e4  a1 others e b e b e b e b  1 1 2 2 3 3 4 4  a2 a1 others  1−b1 1−b2 1−b3 1−b4  e b c e b c e b c e b c 1 1 1 2 2 2 3 3 3 4 4 4 a2 a3 a1 a4 (1−b1)(1−b1−c1) (1−b2)(1−b2−c2) (1−b3)(1−b3−c3) (1−b4)(1−b4−c4)

Define θ~(b) the same way as in Case 1.1 1 1 1 1 θ~(b) = [ , , , ] 1 − b1 1 − b2 1 − b3 1 − b4 1 1 1 1 = [ , , , ] 1 − p2e1 + p2 1 − p2e2 + p2 1 − p2e3 + p2 1 − p2e4 + p2 And define 1 1 θ~(bc) =[ , , 1 − (p2 + p3)e1 + p2 + p3 1 − (p2 + p3)e2 + p2 + p3 1 1 , ] 1 − (p2 + p3)e3 + p2 + p3 1 − (p2 + p3)e4 + p2 + p3 Further define  ~1  (1) ∗ ~ω  F =    θ~(b)  θ~(bc)

We will show Fˆ = T ∗ × F∗ where T ∗ has full rank.

13 The last two rows of Fˆ

erbr 1 1 + p2 = −er − + 1 − br p2 p2(1 − p2er + p2) e b c e (p e − p )(p e − p ) r r r = r 2 r 2 3 r 3 (1 − br)(1 − br − cr) (1 − p2er + p2)(1 − (p2 + p3)er + p2 + p3) p p e (e − 1)2 = 2 3 r r (1 − p2er + p2)(1 − (p2 + p3)er + p2 + p3)

p3(2p2 + p3) p3 (1 + p2) = 2 + er − p2(p2 + p3) p2 + p3 p2(1 − p2er + p2)

p2(1 + p2 + p3) + 2 (1 − (p2 + p3)er + p2 + p3)(p2 + p3) So  ~1   ~ω(1)  Fˆ =  1+p   − 1 ~1 − ~ω(1) + 2 θ~(b)   p2 p2  p3(2p2+p3)~ p3 (1) 1+p2 ~(b) p2(1+p2+p3) ~(bc) 2 1 + ~ω − θ + 2 θ p2(p2+p3) p2+p3 p2 (p2+p3) Then we have Fˆ = T ∗ × F∗ where

 1 0 0 0   0 1 0 0  ∗   T =  − 1 −1 1+p2 0   p2 p2  p3(2p2+p3) p3 1+p2 p2(1+p2+p3) 2 − 2 p2(p2+p3) p2+p3 p2 (p2+p3)

1+p2 p2(1+p2+p3) From Claim 4, we have −1 < p2 < 0 and −1 < p2 + p3 < 0, so 6= 0 and 2 6= 0. p2 (p2+p3) So T has full rank. Then rank(F∗) = rank(Fˆ). 2 2 If rank(F4) ≤ 3, then there is at least one column in F4 dependent of other columns. As all ˆ 2 ˆ ∗ ˆ rows in F are linear combinations of rows in F4, rank(F) ≤ 3. Since rank(F ) = rank(F), we ∗ have rank(F ) ≤ 3. Therefore, there exists a nonzero row vector ~t = [t1, t2, t3, t4], s.t.

~tF∗ = 0

Namely, for all r ≤ 4,

t3 t4 t1 + t2er + + = 0 1 − p2ar + p2 1 − (p2 + p3)er + p2 + p3 Let

t3 t4 f(x) = t1 + t2x + + 1 − p2x + p2 1 − (p2 + p3)x + p2 + p3

g(x) = (1 − p2x + p2)(1 − (p2 + p3)x + p2 + p3)(t1 + t2x)

+ t3(1 − (p2 + p3)x + p2 + p3) + t4(1 − p2x + p2)

If any of the coefficients of g(x) is nonzero, then g(x) is a polynomial of degree at most 3. There will be a maximum of 3 different roots. As the equation holds for all er where r = 1, 2, 3, 4. There

14 exists s 6= t s.t. es = et. Otherwise g(x) = f(x) = 0 for all x. We have 1 + p −t p g( 2 ) = 3 3 = 0 p2 p2 1 + p + p t p g( 2 3 ) = 4 3 = 0 p2 + p3 p2 + p3

From Claim 4 we know p2, p3 < 0 and p2 + p3 < 0. So t3 = t4 = 0. Substitute it into f(x) we have f(x) = t1 + t2x = 0 for all x. So t1 = t2 = 0. This contradicts the nonzero requirement of ~(s) ~(t) ~t. Therefore there exists s 6= t s.t. es = et. According to (4)(5)(6) we have θ = θ , which is a contradition.  Case 1.3. p2 + q2 = 0. If there exists i such that pi + qi = 1, then we can use ai as a2 and the proof is done in Case 1.2. It may still be possible to find another i such that pi, qi satisfy the following two conditions:

1. pi 6= 0 and qi 6= 1;

2. pi + qi 6= 0. If we can find another i to satisfy the two conditions, then the proof is done in Case 1.1. Then we can proceed by assuming that the two conditions are not satisfied by any i. We will prove that the only possibility is pi + qi = 0 for i = 2, 3, 4. Suppose for i = 3, 4, pi and qi violate Condition 1. If pi = 0, then qi > 0. If at least one of them has qi = 1, then er + br + cr + dr > 1, which is impossible. If both alternatives violates Condition 1 and p3 = p4 = 0, then 0 < q3, q4 < 1. According to (7) p2 = −1. As p2 + q2 = 0, we have q2 = 1. From (8), q3 + q4 = 2, which is impossible. So there exists i ∈ {3, 4} such that P r pi + qi = 0. Then from i θi = 1 we obtain the only case we left out, which is

er

br = p2er − p2

cr = p3er − p3

dr = −(1 + p2 + p3)er + (1 + p2 + p3) This case has been proved in Claim 3. Case 2: There exists ~ω(i) that is linearly independent of ~1 and ~ω(1). W.l.o.g. let it be ~ω(2). Define matrix

 ~1   1 1 1 1  (1) G = ~ω  = e1 e2 e3 e4 (2) ~ω b1 b2 b3 b4 2 2 The rank of G is 3. Since G is constructed using linear combinations of rows in F4, the rank of F4 is at least 3. If ~ω(3) or ~ω(4) is independent of rows in G, then we can append it to G as the fourth row so that 2 the rank of the new matrix is 4. Then F4 is full rank. So we only need to consider the case where ~ω(3) and ~ω(4) are linearly dependent of ~1, ~ω(1), and ~ω(2). Let (3) (1) (2) ~ω = x3~ω + y3~ω + z3~1 (11) (4) (1) (2) ~ω = x4~ω + y4~ω + z4~1 (12) where x3 + x4 = −1, y3 + y4 = −1, z3 + z4 = 1.

15 Claim 5 There exists i ∈ {3, 4} such that xi + zi 6= 0.

Proof: If in the current setting ∃i ∈ {3, 4} s.t. xi + zi 6= 0, then the proof is done. If in the current setting x3 + z3 = x4 + z4 = 0, but ∃i ∈ {3, 4} s.t. yi + zi = 0, then we can switch the role of er and br, namely

(3) (1) (2) ~ω = y3~ω + x3~ω + z3~1 (4) (1) (2) ~ω = y4~ω + x4~ω + z4~1

Then the proof is done. If for all i ∈ {3, 4} we have xi + zi = 0 and yi + zi = 0, then we switch the role of er and cr and get

(3) 1 (1) (2) ~ω = (~ω − y3~ω − z3~1) x3 (4) 1 (1) (2) ~ω = (~ω − y4~ω − z4~1) x4

1−z3 If 6= 0, namely z3 6= 1, the proof is done. Suppose z3 = 1, then x3 = y3 = −1. We have x3 (3) (1) (2) (4) ~ω = 1 − ~ω − ~ω . Then ~ω = ~0, which is impossible.  Without loss of generality let x3 + z3 6= 0. Similar to the previous proofs, we want to construct 0 2 0 a matrix G using linear combinations of rows from F4. Let the first 3 rows for G to be G. Then 0 2 0 2 rank(G ) ≥ 3. Since rank(F4) ≤ 3 and all rows in G are linear combinations of rows in F4, we 0 0 2 have rank(G ) ≤ 3. So rank(G ) = 3. This means that any linear combinations of rows in F4 is linearly dependent of rows in G. Consider the moment where a1 is ranked at the top and a2 is ranked at the second position. Then [ e1b1 , e2b2 , e3b3 , e4b4 ] is linearly dependent of G. Adding ~ω(2) to it, we have 1−e1 1−e2 1−e3 1−e4 b b b b θ~(eb) = [ 1 , 2 , 3 , 4 ] 1 − e1 1 − e2 1 − e3 1 − e4 which is linearly dependent of G. Similarly consider the moment that a1 is ranked at the top and a3 is ranked at the second position. We obtain [ e1c1 , e2c2 , e3c3 , e4c4 ]. Add ~ω(3) to it, we get 1−e1 1−e2 1−e3 1−e4 c c c c θ~(ec) = [ 1 , 2 , 3 , 4 ] 1 − e1 1 − e2 1 − e3 1 − e4 which is linearly dependent of G. Recall from (9) 1 1 1 1 θ~(e) = [ , , , ] 1 − e1 1 − e2 1 − e3 1 − e4 Then x e + y b + z x e + y b + z x e + y b + z x e + y b + z θ~(ec) = [ 3 1 3 1 3 , 3 2 3 2 3 , 3 3 3 3 3 , 3 4 3 4 3 ] 1 − e1 1 − e2 1 − e3 1 − e4 ~(e) ~(eb) = (x3 + z3)θ + y3θ − x3~1

Because both θ~(eb) and θ~(ec) are linearly dependent of G, θ~(e) is also linearly dependent of G. Make it the 4th row of G0. Suppose the rank of G0 is still 3. We will first prove this lemma under the assumption below, and then discuss the case where the assumption does not hold.

16 Assumption 1: Suppose ~1, ~ω(1), θ~(e) are linearly independent. (2) (1) ~(e) (2) (1) ~(e) Then ~ω is a linear combination of ~1, ~ω and θ . We write ~ω = s1 + s2~ω + s3θ (2) (1) for some constants s1, s2, s3. We have s3 6= 0 because ~ω is linearly independent of ~1 and ~ω . Elementwise, for r = 1, 2, 3, 4 we have

s3 br = s1 + s2er + (13) 1 − er Let  G  G00 = θ~(eb)

~(eb) ~ θ is linearly dependent of G. There exists a non-zero vector h = [h1, h2, h3, h4] such that ~ 00 (1) (2) ~(eb) h · G = 0. Namely h1~1 + h2~ω + h3~ω + h4θ = 0. Elementwise, for all r = 1, 2, 3, 4

br h1 + h2er + h3br + h4 = 0 (14) 1 − er where h4 6= 0 because otherwise rank(G) = 2. Substitute (13) into (14), and multiply both sides of 2 it by (1 − er) , we get

2 (h1 + h2er + h3br)(1 − er) + h4(s1 + s2er)(1 − er) + h4s3 = 0

Let 2 f(x) = (h1 + h2er + h3br)(1 − er) + h4(s1 + s2er)(1 − er) + h4s3

We claim that not all coefficients of x are zero, because f(1) = h4s3 6= 0 (s3 6= 0 and h4 6= 0 by assumption). Then there are a maximum of 3 different roots, each of which uniquely determines br by (13). This means that there are at least two identical components. Namely ∃s 6= t s.t. θ~(s) = θ~(t). If Assumption 1 does not hold, namely θ~(e) is a linear combination of ~1 and ~ω(1), let 1 = p5er + q5 (15) 1 − er Define 1 f(x) = − p x − q 1 − x 5 5 If f(x) has only 1 root or two identical roots between 0 and 1, then all columns of G have identical (1) er-s. This means ~ω is dependent of ~1, which is a contradiction. So we only consider the situation where f(x) has two different roots between 0 and 1, denoted by u1 and u2 (u1 6= u2). Because e1, e2, e3, e4 are roots of f(x), there must be at least two identical er’s, with the value u1 or u2. ~(eb) ~(eb) Substitute (15) into θ , we have θ = [b1(p5e1 +q5), b2(p5e2 +q5), b3(p5e3 +q5), b4(p5e4 + q5)], which is linearly dependent of G. So there exists nonzero vector ~γ1 = [γ11, γ12, γ13, γ14] such that γ11 + γ12er + γ13br + γ14br(p5er + q5) = 0 From which we get (γ13 + γ14p5er + γ14q5)br = −(γ11 + γ12er) (16)

We recall that er = u1 or er = u2 for r = 1, 2, 3, 4. Since u1 6= u2, there exists i ∈ {1, 2} s.t. γ13 + γ14p5ui + γ14q5 6= 0. W.l.o.g. let it be u1. If at least two of the er’s are u1, without loss −(γ11+γ12u1) of generality let e1 = e2 = u1. Then using (16) we know b1 = b2 = . From (γ13+γ14p5u1+γ14q5) ~(1) ~(2) (11)(12) we can further obtain c1 = c2 and d1 = d2. So θ = θ , which is a contradiction.

17 If there is only one of the er’s, which is u1, w.l.o.g. let e1 = u1 and e2 = e3 = e4 = u2. We con- e1b1 e2b2 e3b3 e4b4 sider the moment where a2 is ranked at the top and a1 the second, which is [ , , , ]. 1−b1 1−b2 1−b3 1−b4 Add ~ω(1) to it and we have θ~(be) = [ e1 , e2 , e3 , e4 ], which is linearly dependent of G. So 1−b1 1−b2 1−b3 1−b4 there exists nonzero vector ~γ2 = [γ21, γ22, γ23, γ24] such that

er γ21 + γ22er + γ23br + γ24 = 0 (17) 1 − br Let u f(x) = γ + γ u + γ x + γ 2 21 22 2 23 24 1 − x

g(x) = (1 − x)f(x) = (1 − x)(γ21 + γ22u2 + γ23x) + γ24u2 If any coefficient of g(x) is nonzero, then g(x) has at most 2 different roots. As g(x) = 0 holds for b2, b3, b4, ∃s 6= t s.t. bs = bt. Since es = et = u2, from (11)(12) we know cs = ct and ds = dt. So θ~(s) = θ~(t). Otherwise we have g(x) = f(x) = 0 for all x. So

g(1) = γ24u2 = 0

Since 0 < u2 < 1, we have γ24 = 0. Substitute it into f(x) we have f(x) = γ21 +γ22u2 +γ23x = 0 holds for all x. So we have γ21 + γ22u2 = 0 and γ23 = 0. Substitute γ23 = γ24 = 0 into (17) we get γ21 + γ22er = 0, which holds for both er = u1 and er = u2. As u1 6= u2, we have γ22 = 0. Then we have γ21 = 0. This contradicts the nonzero requirement of ~γ2. So there exists s 6= t s.t. θ~(s) = θ~(t), which is a contradiction.  Slightly abusing the notation, we say that a parameter of a k-PL is identifiable, if there does not exist a different parameter modulo label switching with the same probability distribution over the sample space. The next theorem proves that the Lebesgue measure (in the km − 1 dimensional Euclidean space) of non-identifiable parameters of k-PLs over m alternatives is 0 (generic identifia- bility as is defined in Section 1.1).

m−2 Theorem 3 For any m ≥ 6 and any 1 ≤ k ≤ b 2 c!, the k-PL over m alternatives is generically identifiable. Proof: Let us start with a high-level description of the proof. We will focus on the parame- ters whose mixing coefficients (entries of ~α) are pairwise different. Formally, for any r1, r2 ∈ {1, . . . , k},

αr1 = αr2 =⇒ r1 = r2 (18) Such parameters have Lebesgue measure 1 because the parameters with any pair of identical mixing coefficients are in a lower dimensional space. The generic identifiability will be proved by analyzing the uniqueness of tensor decompositions. We will construct a rank-one tensor T(r)(θ~(r)) to represent the rth Plackett-Luce component. Then the k-PL can be represented by the weighted average of ~ Pk (r) ~(r) tensors of its components, i.e. T(θ) = r=1 αrT (θ ). We now provide a set of two conditions for θ~ = (~α, θ~(1),..., θ~(k)) to be identifiable, and then prove that both hold generically.

Condition 1. For every ~γ = (β,~γ~ (1), . . . ,~γ(k)), where β~ 6= ~α modulo label switching (the set of entries in β~ is different from the set of entries in ~α), we have T(~γ) 6= T(θ~).

Condition 2. For every ~γ = (β,~γ~ (1), . . . ,~γ(k)) that is different from θ~ and β~ = ~α, we have ~ Prk-PL(·|~γ) 6= Prk-PL(·|θ).

18 It follows from the definition of identifiability that if both conditions hold, then θ~ is identifiable. Condition 1 generically holds. We first show that T(θ~) has a unique decomposition generically, then prove that the uniqueness of decomposition implies Condition 1. To construct the rank-one tensor T(r)(θ~(r)), we partition the set of alternatives into three subsets. In the rest of the proof we assume that m is even. The theorem can be proved similarly for odd m’s.

SA = {a1, a2, ··· , a m−2 } 2

SB = {a m , a m+2 , ··· , am−2} 2 2

SC = {am−1, am}

m−2 There are n1 = n2 = 2 ! rankings over SA and SB respectively, and two rankings over SC (n3 = (r) ~(r) (r) (r) (r) 2). Let the three coordinates in the tensor T (θ ) be pA , pB , pC that represent probabilities (r) (r) (r) of all rankings restricted to SA,SB,SC respectively. We note that pA , pB , pC are functions of θ~(r), which are often omitted. Let p(j) denote the jth entry of p. Then for all r = 1, . . . , k, we have Pn1 (r) Pn2 (r) Pn3 (r) j=1 pA (j) = j=1 pA (j) = j=1 pA (j) = 1. Then, for any rankings VA ∈ L(SA), VB ∈ L(SB), and VC ∈ L(SC ), we can prove that ~(r) ~(r) ~(r) ~(r) PrPL(VA,VB,VC |θ ) = PrPL(VA|θ ) × PrPL(VB|θ ) × PrPL(VC |θ ). That is, VA, VB and ~(r) VC are independent given θ . We will prove this result for a more general class of models called random utility models [Thurstone, 1927], of which the Plackett-Luce model is a special case. ~ Claim 6 Given a random utility model (RUM) M(θ) over a set of m alternatives A, let A1, A2 be two non-overlapping subsets of A, namely A1, A2 ⊂ A and A1 ∩ A2 = ∅. Let V1,V2 be rankings ~ ~ ~ over A1 and A2, respectively, then we have Pr(V1,V2|θ) = Pr(V1|θ) Pr(V2|θ). ~ Proof: In an RUM, given a ground truth utility θ = [θ1, θ2, . . . , θm] and a distribution µi(·|θi) for each alternative, an agent samples a random utility Xi for each alternative independently with probability density function µi(·|θi). The probability of the ranking ai1 ai2 · · · aim is ~ Pr(ai1 · · · aim |θ) = Pr(Xi1 > Xi2 > ··· > Xim ) Z ∞ Z ∞ Z ∞

= ··· µim (xim )µim−1 (xim−1 ) . . . µi1 (xi1 )dxi1 dxi2 . . . dxim −∞ xim xi2

m W.l.o.g. we let i1 = 1, . . . , im = m. Let SX1>X2>···>Xm denote the subspace of R where ~ X1 > X2 > ··· > Xm and let µ(~x|θ) denote µm(xm)µm−1(xm−1) ... µ1(x1). Thus we have Z ~ ~ Pr(a1 · · · am|θ) = µ(~x|θ)d~x SX1>X2>···>Xm We first prove the following claim. ~ ~ ~ Claim 7 Given a random utility model M(θ), for any parameter θ and any As ⊆ A, we let θs ~ denote the components of θ for alternatives in As, and let Vs be a full ranking over As (which is a ~ ~ partial ranking over A). Then we have Pr(Vs|θ) = Pr(Vs|θs).

19 m A S Proof: Let s be the number of alternatives in s. Let X1>X2>···>Xms denote the subspace of ms R where X1 > X2 > ··· > Xms . W.l.o.g. let Vs be a1 a2 · · · ams . Then we have Z ~ ~ Pr(Vs|θ) = µ(~x|θ)d~x S × m−ms X1>X2>···>Xms R Z ∞ Z ∞ Z ∞ Z ∞ Z ∞

= ··· ··· µm(xm) . . . µ1(x1)dxms+1 ··· dxmdx1 . . . dxms −∞ xms x2 −∞ −∞ Z ∞ Z ∞ Z ∞

= ··· µms (xms )µms−1(xms−1) . . . µ1(x1)dx1dx2 . . . dxms −∞ xms x2 Z ~ ~ = µ(~xs|θs)d~x = Pr(Vs|θs) S X1>X2>···>Xms 

Let A1 = {a11, a12, . . . , a1m1 } and A2 = {a21, a22, . . . , a2m2 }. Without loss of generality we let V1 and V2 be a11 a12 · · · a1m1 and a21 a22 · · · a2m2 respectively. For any θ~, let θ~ denote the subvector of θ~ on A . Let S denote S . θ~ and S are 1 1 1 X11>X12>···>X1m1 2 2 ~ ~ R ~ defined similarly. According to Claim 7, we have Pr(V1|θ) = Pr(V1|θ1) = µ(~x1|θ1)d~x1 and S1 ~ ~ R ~ Pr(V2|θ) = Pr(V2|θ2) = µ(~x2|θ2)d~x2. Then we have S2 Z ~ ~ Pr(V1,V2|θ) = µ(~x|θ)d~x m−m −m S1×S2×R 1 2 Z ~ ~ = µ(~x1, ~x2|θ1, θ2)d~x (Claim 7) S1×S2 Z Z ~ ~ = µ(~x1|θ1)µ(~x2|θ2)d~x1d~x2 (Fubini’s Theorem) S1 S2 Z Z ~ ~ = µ(~x1|θ1)d~x1 µ(~x2|θ2)d~x2 S1 S2 ~ ~ = Pr(V1|θ1) Pr(V2|θ2)  Because SA, SB, and SC are non-overlapping, by Claim 6, it follows that (r) ~(r) (r) (r) (r) T (θ ) = pA ⊗ pB ⊗ pC . ~ Pk (r) ~(r) We will prove the tensor T(θ) = r=1 αrT (θ ) has a unique decomposition by Kruskal’s theorem [Theorem 4a in Kruskal, 1977]. To this end, we analyze the Kruskal’s rank of three h (1) (k)i h (1) (k)i matrices, defined as follows. PA = pA ... pA , PB = pB ... pB and PC = h (1) (k)i pC ... pC . The Kruskal’s rank of a matrix is the maximum number K s.t. any K columns of the matrix are linearly independent. Specifically, if a matrix has full column rank, then the Kruskal’s rank of the matrix equals to the number of columns. Let rankK (A) denote the Kruskal’s m−2 rank of matrix A. We prove the following lemma, which states that when k ≤ b 2 c!, PA and PB generically have full (column) Kruskal’s rank.

h (1) (k)i Claim 8 Let S be a set of m0 alternatives. Define PS = pS ... pS , where each column (r) pS is an m0!-dimensional column vector with probabilities of all full rankings over alternatives in S given the rth Plackett-Luce component. If k ≤ m0!, then rankK (PS) = k generically holds.

20 Proof: It suffices to prove when k = m0!, PS has full rank generically. We will prove det(PS) 6= 0 generically holds. Since det(PS) can be represented as a polynomial fraction with a nonzero de- nominator, we only need to prove there exist k = m0! Plackett-Luce components s.t. the numerator is nonzero (det(PS) 6= 0). This implies that the polynomial in the numerator is not identically 0, and thus the Lebesgue measure of parameters resulting in a zero numerator is zero [Caron and Traynor, 2005]. ~(1) ~(k) We prove by construction that there exist θ ,..., θ s.t det(PS) 6= 0. We will construct k Plackett-Luce components s.t. PS is a diagonally dominant matrix, which implies det(PS) 6= 0. To ~ this end, we first show that given any ranking V0, we can construct a parameter θ s.t. Prθ~(V0) > 1/2. m0−p1 W.l.o.g. we define V0 = a1 a2 ... am0 . Let 1/2 < C < 1 be a constant. Then Pm0−1 we let θ1 = C, θ2 = C(1 − θ1), θ3 = C(1 − θ1 − θ2), . . . , θm0 = 1 − i=1 θi. It follows that m0−1 Prθ~(V0) = C > 1/2. Recall that each column of PS is one Plackett-Luce component and each row corresponds to a ranking over S. For each diagonal entry (r, r) of PS, there is a corresponding ranking (denoted by ~(r) Vr). Given the ranking Vr, we construct θ using the method described above s.t. Prθ~(r) (Vr) > > 1/2. Because entries of each column in PS sum up to 1, PS is a strictly diagonally dominant matrix. By Levy-Desplanques theorem, we have det(PS) 6= 0.  Kruskal [1977] proved that if rankK (PA) + rankK (PB) + rankK (PC ) ≥ 2k + 2, then T has m−2 a unique decomposition. By Claim 8, when k ≤ b 2 c!, rankK (PA) = rankK (PB) = k and ~ rankK (PC ) = 2 generically hold. Thus, generically, tensor T(θ) has a unique decomposition. We now prove that uniqueness of decomposition of T(θ~) implies Condition 1, formally shown as the following claim. Claim 9 For any θ~ where T(θ~) has a unique decomposition, Condition 1 holds. That is, for every ~γ = (β,~γ~ (1), . . . ,~γ(k)), where β~ 6= ~α modulo label switching (the set of entries in β~ is different from the set of entries in ~α), we have T(~γ) 6= T(θ~).

Proof: Suppose for the purpose of contradiction, the decomposition of T(θ~) is unique but Condi- tion 1 does not hold. This means that there exists ~γ = (β,~γ~ (1),..., ~γ(k)) where β~ is not equal to ~α modulo label switching, s.t. T(~γ) 6= T(θ~). Because components ~ in ~α are pairwise different, there exists r1 ≤ k such that αr1 is different from any element in β, ~ (r1) ~(r1) while T(θ) = T(~γ) = T. We will show that for any r2 = 1, . . . , k, we have αr1 T (θ ) 6= (r2) (r2) βr2 T (~γ ), which contradicts the unique decomposition of T. Recall that for any r, we have (r) (r) (r) (r) Pn1 (r) Pn2 (r) Pn3 (r) T (·) = pA ⊗ pB ⊗ pC and j=1 pA (j) = j=1 pA (j) = j=1 pA (j) = 1. The sum of all entries of T(r)(·) is one because

n1 n2 n3 n1 n2 n3 X X X (r) (r) (r) X X (r) (r) X (r) pA (j1)pB (j2)pC (j3) = pA (j1)pB (j2) pC (j3) j1=1 j2=1 j3=1 j1=1 j2=1 j3=1

n1 n2 X (r) X (r) = pA (j1) pB (j2) = 1. j1=1 j2=1

Thus, for any r, we have

n1 n2 n3 X X X (r) (αrT (·))j1j2j3 = αr (19)

j1=1 j2=1 j3=1

21 (r1) ~(r1) (r2) (r2) So the sums of all entries of αr1 T (θ ) and βr2 T (~γ ) are αr1 and βr2 respectively. (r1) ~(r1) We recall that for all r2, we have αr1 6= βr2 . Therefore, αr1 T (θ ) (r2) (r2) 6= βr2 T (~γ ), which is a contradiction.  This finishes the proof that Condition 1 generically holds. Condition 2 generically holds. Unfortunately the tensor decomposition technique used for Con- dition 1 no longer works for Condition 2. This is because due to the partition of alternatives, the parameter of alternatives in one group can be arbitrarily scaled while the distribution of partial rankings restricted to the group does not change. For example, given θ~ = (~α, θ~(1),..., θ~(k)), we can construct ~γ = (~α,~γ(1), . . . ,~γ(k)) s.t. T(~γ) = T(θ~) in the following way. For any r, we let (r) (r) (r) (r) γi = 2θi for i = 1, . . . , m − 2 and γi = θi for i = m − 1, m. Then we normalize each (r) Pm (r) ~ ~γ s.t. i=1 γi = 1. The resulting tensor T(~γ) = T(θ) because the probability of rankings restricted to each group are exactly the same. To address this problem we consider an additional tensor decomposition using a different parti- 0 tion of the alternatives, defined as follows: SA = {a3, a5, . . . , am−1}, 0 0 0 ~ SB = {a2, a4, . . . , am−2},SC = {a1, am}. Let T (θ) denote the tensor under this partition. Sim- ilar to the proof for Condition 1, we can show that generically T0(θ~) has a unique decomposition. Next, we will prove that for any θ~ where T(θ~) and T0(θ~) have unique decompositions (which holds generically), Condition 2 holds. Suppose for the sake of contradiction that Condition 2 does not hold for θ~ where T(θ~) and T0(θ~) have unique decompositions. Then, there exists ~γ = (β,~γ~ (1),..., (k) ~ ~ ~γ ) that is different from θ modulo label switching, such that Prk-PL(·|~γ) = Prk-PL(·|θ). This means that T(~γ) = T(θ~) and T0(~γ) = T0(θ~). We first match the components in ~γ and θ~. Recall from Claim 9 that uniqueness of T(θ~) implies Condition 1. Because both T(θ~) and T0(θ~) have unique decompositions, Condition 1 must hold for both. It follows that the entries of β~ are exactly entries of ~α, otherwise T(θ~) 6= T(~γ), which is a contradiction. Since entries of ~α are pairwise different, there is a unique way of matching compo- nents in θ~ to components in ~γ by matching the corresponding mixing coefficients. Consequently, we can relabel components in ~γ s.t. the β~ becomes exactly ~α. Let ~γ0 = (~α,~γ0(1), . . . ,~γ0(k)) de- note the ~γ parameter after relabeling the components. Thus we have T(~γ0) = T(~γ) = T(θ~) and T0(~γ0) = T0(~γ) = T0(θ~). Because ~γ 6= θ~ modulo label switching, we have ~γ0 6= θ~, which implies that there exists r∗ s.t. ∗ ∗ ~γ0(r ) 6= θ~(r ). (20) Next, we will show that from the uniqueness of decomposition of T(θ~) and T0(θ~), we must ∗ ∗ have ~γ0(r ) = θ~(r ), which is a contradiction. To this end, we show how to uniquely determine the ∗ ∗ parameters of k-PL (i.e. ~γ0(r ) and θ~(r )) from decompositions of T(θ~) and T0(θ~). 0 ~ T(~γ ) = T(θ) has a unique decomposition means that for any r1 ∈ {1, . . . , k}, there exists (r1) 0(r1) (r2) ~(r2) r2 ∈ {1, . . . , k} s.t. αr1 T (~γ ) = αr2 T (θ ), which implies

n1 n2 n3 n1 n2 n3 X X X (r1) 0(r1) X X X (r2) ~(r2) (αr1 T (~γ ))j1j2j3 = (αr2 T (θ ))j1j2j3

j1=1 j2=1 j3=1 j1=1 j2=1 j3=1

By (19), the above equation implies αr1 = αr2 . It follows from (18) that r1 = r2. Namely, we (r) 0(r) (r) ~(r) have αrT (~γ ) = αrT (θ ) for all r = 1, . . . , k. It follows that for all r = 1, . . . , k, we ∗ ∗ ∗ ∗ have T(r)(~γ0(r)) = T(r)(θ~(r)), which means T(r )(~γ0(r )) = T(r )(θ(r )). Similarly, we also have ∗ ∗ ∗ ∗ T0(r )(~γ0(r )) = T0(r )(θ(r )).

22 Recall that n1, n2 and n3 are numbers of rankings over alternatives in SA, SB, and SC re- (r) ~ (r) (r) (r) spectively and for any r = 1, . . . , k, T (θ) = pA ⊗ pB ⊗ pC is a rank-one tensor where Pn1 (r) Pn2 (r) Pn3 (r) (r) ~ (r) (r) (r) j=1 pA (j) = j=1 pB (j) = j=1 pC (j) = 1. Given T (θ), pA , pB and pC can be (r) ~ (r) easily obtained by normalizing the corresponding entries of T (θ). E.g., pA can be obtained (r) (r) ~(r) by normalizing all (·, 1, 1) entries. We claim that pA is uniquely determined by T (θ ) be- (r) (r) ~ (r) (r) (r) cause (i) pA is proportional to (·, 1, 1) entries because T (θ) = pA ⊗ pB ⊗ pC ; and (ii) ∗ ∗ ∗ ∗ Pn1 (r) (r ) (r ) ~(r ) (r ) j=1 pA (j) = 1. Specifically pA is uniquely determined by T (θ ). Similarly pB and ∗ ∗ ∗ ∗ (r ) (r∗) ~(r∗) 0(r ) 0(r ) 0(r ) pC are uniquely determined by T (θ ) and pA , pB and pC are uniquely determined ∗ ∗ by T0(r )(θ~(r )). ∗ ∗ ∗ ∗ ∗ ∗ (r ) (r ) (r ) 0(r ) 0(r ) 0(r ) ~(r∗) Next, we will prove pA , pB , pC , pA , pB and pC uniquely determines θ , which 0(r∗) ~(r∗) contradicts (20). Now we focus on ~γ and θ restricted to SA. We claim that there exists a ∗ ∗ m−2 0(r ) (r ) constant CA s.t. for all i = 1, 2,..., 2 (i.e. ai ∈ SA), γi = CAθi . The reason is as follows. m−2 For all i = 1,..., 2 , we consider the probability of the event that ai is ranked at the top ~(r∗) 0(r∗) among the alternatives in SA given θ and ~γ . By Claim 7, the two probabilities must be the m−2 same. Therefore for all i = 1,..., 2 , we have

(r∗) ∗ θ Pr(a top among S |θ~(r )) = i i A m−2 ∗ P 2 (r ) i=1 θi ∗ γ0(r ) Pr(a top among S |~γ0(r∗)) = i i A m−2 ∗ P 2 0(r ) i=1 γi

m−2 ∗ P 2 γ0(r ) This means there exists a constant C = i=1 i s.t. A m−2 ∗ P 2 (r ) i=1 θi

∗ ∗ m − 2 γ0(r ) = C θ(r ), for i = 1,..., (21) i A i 2

Similarly there exist CB,CC , s.t.

∗ ∗ m γ0(r ) = C θ(r ), for i = , . . . , m − 2 (22) i B i 2 0(r∗) (r∗) γi = CC θi , for i = m − 1, m (23)

0(r) (r) 0(r) ~(r) 0 0 0 Similarly from T (~γ ) = T (θ ), there exist constants CA,CB and CC s.t.

0(r∗) 0 (r∗) γi = CAθi , for i = 3, 5, . . . , m − 1 (24) 0(r∗) 0 (r∗) γi = CBθi , for i = 2, 4, . . . , m − 2 (25) 0(r∗) 0 (r∗) γi = CC θi , for i = 1, m (26)

0 0 Therefore, we have CB = CB by Equations (22) and (25) (let i = m − 2); CB = CA by Equa- 0 0 tions (25) and (21) (let i = 2); CA = CC by Equations (21) and (26) (let i = 1); CC = CC 0 by Equations (26) and (23) (let i = m); CC = CA by Equations (23) and (24) (let i = m − 1). 0 0 0 So we let CB = CB = CA = CC = CC = CA = C. Then for all i = 1, . . . , m, we have

$\gamma_i'^{(r^*)}=C\,\theta_i^{(r^*)}$. Because $\sum_{i=1}^{m}\gamma_i'^{(r^*)}=\sum_{i=1}^{m}\theta_i^{(r^*)}=1$, we have $C=1$ and $\vec\gamma'^{(r^*)}=\vec\theta^{(r^*)}$, which is a contradiction.

As we have proved, the Lebesgue measure of the set of parameters where $T(\vec\theta)$ (or $T'(\vec\theta)$) has a non-unique decomposition is 0. Therefore, the Lebesgue measure of the set of parameters $\vec\theta$ for which $T(\vec\theta)$ and $T'(\vec\theta)$ do not both have unique decompositions is also 0, i.e. both decompositions are unique generically. This finishes the proof. $\square$
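As a quick numerical sanity check of the scaling invariance used at the start of the argument for Condition 2, the following sketch (Python, with hypothetical weights for $m=4$ and a single component) verifies that multiplying the parameters of the alternatives in one group by a common factor and renormalizing leaves the probabilities of rankings restricted to that group unchanged. The weights and group are illustrative choices, not parameters from the paper.

import itertools

def pl_prob(ranking, w):
    """Probability of `ranking` (restricted to its own alternatives) under PL weights w."""
    prob, remaining = 1.0, list(ranking)
    for a in ranking:
        prob *= w[a] / sum(w[b] for b in remaining)
        remaining.remove(a)
    return prob

# hypothetical single-component parameters for m = 4; the group is S = {a1, a2}
theta = {'a1': 0.1, 'a2': 0.2, 'a3': 0.3, 'a4': 0.4}

# scale the weights of the alternatives in S by 2, then renormalize the whole vector
gamma = {a: (2 * v if a in ('a1', 'a2') else v) for a, v in theta.items()}
total = sum(gamma.values())
gamma = {a: v / total for a, v in gamma.items()}

# rankings restricted to S have identical probabilities under theta and gamma
for r in itertools.permutations(('a1', 'a2')):
    assert abs(pl_prob(r, theta) - pl_prob(r, gamma)) < 1e-12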

Figure 1: The MSE and running time of GMM and EMM. EMM-20-1 is 20 iterations of EMM overall with 1 iteration of MM for each M step. Likewise, EMM-10-1 is 10 iterations of EMM with 1 iteration of MM each time. Values are calculated over 2000 datasets.

4 A Generalized Method of Moments Algorithm for 2-PL

In a generalized method of moments (GMM) algorithm, a set of $q\ge 1$ moment conditions $g(V,\vec\theta)$ is specified. Moment conditions are functions of the parameter and the data whose expectations are zero at the ground truth: $g(V,\vec\theta)\in\mathbb{R}^q$ takes a data point $V$ and a parameter $\vec\theta$ as inputs, and for any $\vec\theta^*$, the expectation of every moment condition is zero when the data are generated from the model with parameter $\vec\theta^*$, i.e. $E[g(V,\vec\theta^*)]=\vec 0$. In practice, the observed moment values should match the theoretical values from the model. In our algorithm, each moment condition corresponds to an event in the data, e.g. "$a_1$ is ranked at the top"; we refer to such events as moments. Given any preference profile $P$, we let $g(P,\vec\theta)=\frac{1}{n}\sum_{V\in P}g(V,\vec\theta)$, which is a function of $\vec\theta$. The GMM algorithm we use computes the parameter that minimizes the 2-norm of the empirical moment conditions:
\[
\text{GMM}_g(P)=\arg\min_{\vec\theta}\;\|g(P,\vec\theta)\|_2^2. \qquad (27)
\]
In this paper, we show results for $m=4$ and $k=2$. Our GMM works for other combinations of $k$ and $m$ as long as the model is identifiable; otherwise the estimator is not consistent. For $m=4$ and $k=2$, the parameter of the 2-PL is $\vec\theta=(\alpha,\vec\theta^{(1)},\vec\theta^{(2)})$. We use the following $q=20$ moments from three categories.

(i) There are four moments, one for each of the four alternatives being ranked at the top. Let $\{g_i:i\le 4\}$ denote these four moment conditions and let $p_i=\alpha\theta_i^{(1)}+(1-\alpha)\theta_i^{(2)}$. For any $V\in\mathcal{L}(A)$, we have $g_i(V,\vec\theta)=1-p_i$ if $a_i$ is ranked at the top of $V$, and $g_i(V,\vec\theta)=-p_i$ otherwise.

(ii) There are 12 moments, one for each ordered pair of top-two alternatives in a ranking. Let $\{g_{i_1 i_2}: i_1\neq i_2\le 4\}$ denote these 12 moment conditions and let $p_{i_1 i_2}=\alpha\frac{\theta_{i_1}^{(1)}\theta_{i_2}^{(1)}}{1-\theta_{i_1}^{(1)}}+(1-\alpha)\frac{\theta_{i_1}^{(2)}\theta_{i_2}^{(2)}}{1-\theta_{i_1}^{(2)}}$. For any $V\in\mathcal{L}(A)$, we have $g_{i_1 i_2}(V,\vec\theta)=1-p_{i_1 i_2}$ if $a_{i_1}$ is ranked at the top and $a_{i_2}$ is ranked second in $V$, and $g_{i_1 i_2}(V,\vec\theta)=-p_{i_1 i_2}$ otherwise.

(iii) There are four moments that correspond to the following four full rankings: $a_1\succ a_2\succ a_3\succ a_4$, $a_2\succ a_3\succ a_4\succ a_1$, $a_3\succ a_4\succ a_1\succ a_2$, $a_4\succ a_1\succ a_2\succ a_3$. The corresponding $g_{i_1 i_2 i_3 i_4}$'s are defined similarly.

To illustrate how GMM works, we give the following example. Suppose the data $P$ contain the following 20 rankings:

8 @ $a_1\succ a_2\succ a_3\succ a_4$, 4 @ $a_2\succ a_3\succ a_4\succ a_1$, 3 @ $a_3\succ a_4\succ a_1\succ a_2$, 3 @ $a_4\succ a_1\succ a_2\succ a_3$, 2 @ $a_3\succ a_1\succ a_4\succ a_2$.

Then, for example, $g_1(P,\vec\theta)=\frac{8}{20}-(\alpha\theta_1^{(1)}+(1-\alpha)\theta_1^{(2)})$, corresponding to the moment that $a_1$ is ranked at the top (category (i)); $g_{34}(P,\vec\theta)=\frac{3}{20}-\Bigl(\frac{\alpha\theta_3^{(1)}\theta_4^{(1)}}{\theta_1^{(1)}+\theta_2^{(1)}+\theta_4^{(1)}}+\frac{(1-\alpha)\theta_3^{(2)}\theta_4^{(2)}}{\theta_1^{(2)}+\theta_2^{(2)}+\theta_4^{(2)}}\Bigr)$, corresponding to the moment of $a_3\succ a_4\succ$ others (category (ii)); and $g_{2341}(P,\vec\theta)=\frac{4}{20}-\Bigl(\frac{\alpha\theta_2^{(1)}\theta_3^{(1)}\theta_4^{(1)}}{(\theta_3^{(1)}+\theta_4^{(1)}+\theta_1^{(1)})(\theta_4^{(1)}+\theta_1^{(1)})}+\frac{(1-\alpha)\theta_2^{(2)}\theta_3^{(2)}\theta_4^{(2)}}{(\theta_3^{(2)}+\theta_4^{(2)}+\theta_1^{(2)})(\theta_4^{(2)}+\theta_1^{(2)})}\Bigr)$, corresponding to the moment of $a_2\succ a_3\succ a_4\succ a_1$ (category (iii)). Recall that $\sum_{i=1}^4\theta_i^{(1)}=\sum_{i=1}^4\theta_i^{(2)}=1$. The choice of these moment conditions is based on the proof of Theorem 2, so that the 2-PL is strictly identifiable w.r.t. these moment conditions. Therefore, our simple GMM algorithm is the following.

Algorithm 1 GMM for 2-PL
Input: Preference profile $P$ with $n$ full rankings.
1: Compute the frequency of each of the 20 moments.
2: Compute the output according to (27).
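To make the two steps of Algorithm 1 concrete, here is a minimal sketch in Python (NumPy/SciPy). It is an illustration under our own conventions: the helper names are ours, and it uses scipy.optimize.minimize rather than the MATLAB fmincon routine used in our experiments (Section 5).

import numpy as np
from scipy.optimize import minimize

# The 20 moment events for m = 4: the 4 "top-one" events, the 12 "top-two" events,
# and the 4 cyclic full rankings from category (iii). Alternatives are 0-indexed.
TOP1 = [(i,) for i in range(4)]
TOP2 = [(i, j) for i in range(4) for j in range(4) if i != j]
FULL = [(0, 1, 2, 3), (1, 2, 3, 0), (2, 3, 0, 1), (3, 0, 1, 2)]
MOMENTS = TOP1 + TOP2 + FULL

def moment_frequencies(rankings):
    """Empirical frequency of each moment event; `rankings` is a list of 4-tuples."""
    n = len(rankings)
    return np.array([sum(r[:len(ev)] == ev for r in rankings) / n for ev in MOMENTS])

def pl_prefix_prob(theta, prefix):
    """Probability under one PL component that `prefix` occupies the top positions."""
    prob, remaining = 1.0, list(range(4))
    for a in prefix:
        prob *= theta[a] / sum(theta[b] for b in remaining)
        remaining.remove(a)
    return prob

def model_moments(params):
    """Theoretical probabilities of the 20 moment events under a 2-PL mixture."""
    alpha, t1, t2 = params[0], params[1:5], params[5:9]
    return np.array([alpha * pl_prefix_prob(t1, ev) + (1 - alpha) * pl_prefix_prob(t2, ev)
                     for ev in MOMENTS])

def gmm_2pl(rankings, eps=1e-3, seed=0):
    """Algorithm 1: minimize the squared 2-norm of the empirical moment conditions (27)."""
    freq = moment_frequencies(rankings)
    objective = lambda p: float(np.sum((freq - model_moments(p)) ** 2))
    # parameters are (alpha, theta^(1), theta^(2)); each component must lie on the simplex
    constraints = [{'type': 'eq', 'fun': lambda p: np.sum(p[1:5]) - 1.0},
                   {'type': 'eq', 'fun': lambda p: np.sum(p[5:9]) - 1.0}]
    rng = np.random.default_rng(seed)
    x0 = np.concatenate(([0.5], rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))))
    result = minimize(objective, x0, bounds=[(eps, 1.0)] * 9, constraints=constraints)
    return result.x  # (alpha, theta^(1), theta^(2)), up to label switching

Since the objective in (27) is non-convex, in practice one would restart the optimization from several initial points, and the two recovered components are only determined up to label switching.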

The theoretical guarantee of our GMM is its consistency, as defined in Section 1.1.

Theorem 4 Algorithm 1 is consistent w.r.t. 2-PL, where there exists $\epsilon>0$ such that each parameter is in $[\epsilon,1]$.

Proof: We will check all assumptions in Theorem 3.1 of Hall [2005].

Assumption 3.1 (Strict Stationarity): the $(n\times 1)$ random vectors $\{v_t;-\infty<t<\infty\}$ form a strictly stationary process with sample space $\mathcal{S}\subseteq\mathbb{R}^n$. As the data are generated i.i.d., the process is strictly stationary.

Assumption 3.2 (Regularity Conditions for $g(\cdot,\cdot)$): the function $g:\mathcal{S}\times\Theta\to\mathbb{R}^q$, where $q<\infty$, satisfies: (i) it is continuous on $\Theta$ for each $P\in\mathcal{S}$; (ii) $E[g(P,\vec\theta)]$ exists and is finite for every $\vec\theta\in\Theta$; (iii) $E[g(P,\vec\theta)]$ is continuous on $\Theta$. Our moment conditions satisfy all the regularity conditions since $g(P,\vec\theta)$ is continuous on $\Theta$ and each of its entries is bounded in $[-1,1]$.

Assumption 3.3 (Population Moment Condition): the random vector $v_t$ and the parameter vector $\vec\theta_0$ satisfy the $(q\times 1)$ population moment condition $E[g(P,\vec\theta_0)]=0$.

This assumption holds by the definition of our GMM.

Assumption 3.4 (Global Identification): $E[g(P,\vec\theta')]\neq 0$ for all $\vec\theta'\in\Theta$ such that $\vec\theta'\neq\vec\theta_0$. This is proved in Theorem 2.

Assumption 3.7 (Properties of the Weighting Matrix): $W_t$ is a positive semi-definite matrix which converges in probability to the positive definite matrix of constants $W$. This holds because $W=I$.

Assumption 3.8 (Ergodicity): the random process $\{v_t;-\infty<t<\infty\}$ is ergodic. Since the data are generated i.i.d., the process is ergodic.

Assumption 3.9 (Compactness of $\Theta$): $\Theta$ is a compact set. $\Theta=[\epsilon,1]^9$ is compact.

Assumption 3.10 (Domination of $g(P,\vec\theta)$): $E[\sup_{\vec\theta\in\Theta}\|g(P,\vec\theta)\|]<\infty$. This assumption holds because all moment conditions are bounded.

Theorem 3.1 (Consistency of the Parameter Estimator): if Assumptions 3.1-3.4 and 3.7-3.10 hold, then $\hat\theta_T\xrightarrow{p}\theta_0$. $\square$

Originally all parameters lie in the half-open interval $(0,1]$. The $\epsilon$ requirement in the theorem is introduced to make the parameter space compact, i.e. all parameters are chosen from closed intervals. The proof is done by applying Theorem 3.1 in Hall [2005]; the main difficulty is the identifiability of 2-PL w.r.t. the moment conditions used in our GMM. Our proof of the identifiability of 2-PL (Theorem 2) only uses the 20 moment conditions described above.²

Complexity of GMM. For learning $k$-PL with $m$ alternatives and $n$ rankings with EMM, each E-step performs $O(nk^2)$ operations and each iteration of the MM algorithm in the M-step performs $O(m^2nk)$ operations. Our GMM for $k=2$ and $m=4$ has overall complexity $O(n)$: the complexity of calculating the moments is $O(n)$, and the complexity of the optimization depends only on $m$ and $k$.

Figure 2: The MSE and running time of GMM. GMM-Moments is the time to calculate the moment condition values observed in the data and GMM-Opt is the time to perform the optimization. Values are calculated over 50000 trials.

² In fact our proof only uses 16 of them (4 of the 12 moment conditions in category (ii) are redundant). However, our synthetic experiments show that using 20 moments improves statistical efficiency without sacrificing too much computational efficiency.

5 Experiments

The performance of our GMM algorithm (Algorithm 1) is compared to that of the EMM algorithm of Gormley and Murphy [2008] for 2-PL with respect to running time and statistical efficiency on synthetic data. The synthetic datasets are generated as follows.

• Generating the ground truth: for $k=2$ mixtures and $m=4$ alternatives, the mixing coefficient $\alpha^*$ is generated uniformly at random and the Plackett-Luce components $\vec\theta^{(1)}$ and $\vec\theta^{(2)}$ are each generated from the Dirichlet distribution $\text{Dir}(\vec 1)$.

• Generating data: given a ground truth $\vec\theta^*$, we generate each ranking with probability $\alpha^*$ from the PL model parameterized by $\vec\theta^{(1)}$ and with probability $1-\alpha^*$ from the PL model parameterized by $\vec\theta^{(2)}$, up to 45000 full rankings (a sketch of this sampling procedure is given at the end of this section).

The GMM algorithm is implemented in Python 3.4, and the termination criteria for the optimization are convergence of the solution and of the objective function value to within $10^{-10}$ and $10^{-6}$ respectively. The optimization of (27) uses the fmincon function through the MATLAB Engine for Python.

The EMM algorithm is also implemented in Python 3.4, and the E and M steps are repeated together for a fixed number of iterations without convergence criteria. The running time of EMM is largely determined by the total number of iterations of the MM algorithm, so we compare against the best EMM configuration whose running time is comparable to GMM. We tested all configurations of EMM with 10 and 20 overall MM iterations, respectively, and found that the optimal configurations are EMM-10-1 and EMM-20-1 (shown in Figure 1; results for other configurations are omitted), where EMM-20-1 means 20 iterations of the E step, each of which uses 1 MM iteration. We use the mean squared error, $\text{MSE}=E(\|\vec\theta-\vec\theta^*\|_2^2)$, as the measure of statistical efficiency. All experiments are run on an Ubuntu Linux server with Intel Xeon E5 v3 CPUs, each clocked at 3.50 GHz.

Results. The comparison of the performance of the GMM algorithm to the EMM algorithm is presented in Figure 1 for up to $n=2000$ rankings; values are calculated over 2000 trials (datasets). We observe that:

• GMM is significantly faster than EMM, and the running time of GMM grows at a much slower rate in $n$.

• GMM achieves competitive MSE, while the MSE of EMM does not improve as $n$ grows. In particular, when $n$ is more than 1000, GMM achieves smaller MSE.

The implication is that GMM may be better suited for reasonably large datasets, where the running time of EMM becomes infeasibly large. Moreover, it is possible that the GMM algorithm can be further improved by using a more accurate optimizer or another set of moment conditions. GMM can also be used to provide a good initial point for other methods such as the EMM.

For larger datasets, the performance of the GMM algorithm is shown in Figure 2 for up to $n=45000$ rankings, calculated over 50000 trials. As the data size increases, GMM converges toward the ground truth, which verifies our consistency theorem (Theorem 4). The overall running time of GMM shown in the figure comprises the time to calculate the moments from the data (GMM-Moments) and the time to optimize the objective function (GMM-Opt). The time for calculating the moment values increases linearly in $n$, but it is dominated by the time to perform the optimization.
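For concreteness, the sampling procedure referenced in the second bullet above can be sketched as follows (Python/NumPy; the function names are ours and the snippet is illustrative rather than the exact experiment script).

import numpy as np

def sample_pl(theta, rng):
    """Sample one full ranking from a Plackett-Luce model with weight vector `theta`."""
    remaining = list(range(len(theta)))
    ranking = []
    while remaining:
        w = np.array([theta[a] for a in remaining])
        a = rng.choice(remaining, p=w / w.sum())   # choose the next-ranked alternative
        ranking.append(int(a))
        remaining.remove(a)
    return tuple(ranking)

def generate_2pl_dataset(n, rng=None):
    """Draw a ground truth (alpha*, theta^(1), theta^(2)) and n rankings from the mixture."""
    rng = rng or np.random.default_rng()
    alpha = rng.uniform()                                                  # alpha* ~ U(0, 1)
    theta1, theta2 = rng.dirichlet(np.ones(4)), rng.dirichlet(np.ones(4))  # Dir(1, 1, 1, 1)
    rankings = [sample_pl(theta1 if rng.uniform() < alpha else theta2, rng)
                for _ in range(n)]
    return (alpha, theta1, theta2), rankings

The output of generate_2pl_dataset can be fed directly to the GMM sketch in Section 4.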

6 Summary and Future Work

In this paper we address the problems of identifiability and efficient learning for Plackett-Luce mixture models. We show that for any $k\ge 2$, $k$-PL for no more than $2k-1$ alternatives is non-identifiable and this bound is tight for $k=2$. For generic identifiability, we prove that the mixture of $k$ Plackett-Luce models over $m$ alternatives is generically identifiable if $k\le\lfloor\frac{m-2}{2}\rfloor!$. We also propose a GMM algorithm for learning 2-PL with four or more alternatives. Our experiments show that our GMM algorithm is significantly faster than the EMM algorithm of Gormley and Murphy [2008], while achieving competitive statistical efficiency.

There are many directions for future research. An obvious open question is whether $k$-PL is identifiable for $2k$ alternatives for $k\ge 3$, which we conjecture to be true. It is also important to study how to efficiently check whether a learned parameter is identifiable for $k$-PL when $m<2k$. Can we further improve the statistical and computational efficiency of learning $k$-PL? We also plan to develop efficient implementations of our GMM algorithm and apply it widely to various learning problems with big rank data.

Acknowledgements

This work is supported by the National Science Foundation under grant IIS-1453542 and a Simons- Berkeley research fellowship. We thank all reviewers for helpful comments and suggestions.

References

Elizabeth S. Allman, Catherine Matias, and John A. Rhodes. Identifiability of parameters in latent structure models with many observed variables. The Annals of Statistics, 37(6A):3099–3132, 2009.

Ammar Ammar, Sewoong Oh, Devavrat Shah, and Luis Filipe Voloch. What's your choice?: learning the mixed multi-nomial. In ACM SIGMETRICS Performance Evaluation Review, volume 42, pages 565–566. ACM, 2014.

Pranjal Awasthi, Avrim Blum, Or Sheffet, and Aravindan Vijayaraghavan. Learning mixtures of ranking models. In Proceedings of Advances in Neural Information Processing Systems, pages 2609–2617, 2014.

Hossein Azari Soufiani, William Chen, David C. Parkes, and Lirong Xia. Generalized method-of-moments for rank aggregation. In Proceedings of Advances in Neural Information Processing Systems (NIPS), Lake Tahoe, NV, USA, 2013.

Richard Caron and Tim Traynor. The zero set of a polynomial. WSMR Report 05-02, 2005.

Weiwei Cheng, Krzysztof J. Dembczynski, and Eyke Hüllermeier. Label ranking methods based on the Plackett-Luce model. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), pages 215–222, 2010.

Flavio Chierichetti, Anirban Dasgupta, Ravi Kumar, and Silvio Lattanzi. On learning mixture models for permutations. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, pages 85–92, 2015.

Sanjoy Dasgupta. Learning mixtures of Gaussians. In 40th Annual Symposium on Foundations of Computer Science, pages 634–644, 1999.

Cynthia Dwork, Ravi Kumar, Moni Naor, and D. Sivakumar. Rank aggregation methods for the web. In Proceedings of the 10th World Wide Web Conference, pages 613–622, 2001.

Isobel Claire Gormley and Thomas Brendan Murphy. Exploring voting blocs within the Irish electorate: A mixture modeling approach. Journal of the American Statistical Association, 103(483):1014–1027, 2008.

Alastair R. Hall. Generalized Method of Moments. Oxford University Press, 2005.

Lars Peter Hansen. Large sample properties of generalized method of moments estimators. Econometrica, 50(4):1029–1054, 1982.

David R. Hunter. MM algorithms for generalized Bradley-Terry models. The Annals of Statistics, 32:384–406, 2004.

Maria Iannario. On the identifiability of a mixture model for ordinal data. Metron, 68(1):87–94, 2010.

Adam Tauman Kalai, Ankur Moitra, and Gregory Valiant. Disentangling Gaussians. Communications of the ACM, 55:113–120, 2012.

Joseph B. Kruskal. Three-way arrays: rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra and its Applications, 18(2):95–138, 1977.

Tie-Yan Liu. Learning to Rank for Information Retrieval. Springer, 2011.

Tyler Lu and Craig Boutilier. Effective sampling and learning for Mallows models with pairwise-preference data. The Journal of Machine Learning Research, 15(1):3783–3829, 2014.

Robert Duncan Luce. Individual Choice Behavior: A Theoretical Analysis. Wiley, 1959.

Andrew Mao, Ariel D. Procaccia, and Yiling Chen. Better human computation through principled voting. In Proceedings of the National Conference on Artificial Intelligence (AAAI), Bellevue, WA, USA, 2013.

John I. Marden. Analyzing and Modeling Rank Data. Chapman & Hall, 1995.

Daniel McFadden. Conditional logit analysis of qualitative choice behavior. In Frontiers of Econometrics, pages 105–142, New York, NY, 1974. Academic Press.

Geoffrey McLachlan and David Peel. Finite Mixture Models. John Wiley & Sons, 2004.

Geoffrey J. McLachlan and Kaye E. Basford. Mixture Models: Inference and Applications to Clustering. Marcel Dekker, 1988.

Sewoong Oh and Devavrat Shah. Learning mixed multinomial logit model from ordinal data. In Advances in Neural Information Processing Systems, pages 595–603, 2014.

Robin L. Plackett. The analysis of permutations. Journal of the Royal Statistical Society. Series C (Applied Statistics), 24(2):193–202, 1975.

Matthew Stephens. Dealing with label switching in mixture models. Journal of the Royal Statistical Society: Series B (Statistical Methodology), 62(4):795–809, 2000.

Changho Suh, Vincent Y. F. Tan, and Renbo Zhao. Adversarial top-k ranking. arXiv preprint arXiv:1602.04567, 2016.

Henry Teicher. Identifiability of mixtures. The Annals of Mathematical Statistics, 32(1):244–248, 1961.

Henry Teicher. Identifiability of finite mixtures. The Annals of Mathematical Statistics, 34(4):1265–1269, 1963.

Louis Leon Thurstone. A law of comparative judgment. Psychological Review, 34(4):273–286, 1927.
