
High-performance Sampling of Generic Determinantal Point Processes

Jack Poulson∗
August 20, 2019

Abstract

Determinantal Point Processes (DPPs) were introduced by Macchi [1] as a model for repulsive (fermionic) particle distributions. But their recent popularization is largely due to their usefulness for encouraging diversity in the final stage of a recommender system [2].

The standard sampling scheme for finite DPPs is a spectral decomposition followed by an equivalent of a randomly diagonally-pivoted Cholesky factorization of an orthogonal projection, which is only applicable to Hermitian kernels and has an expensive setup cost. Researchers [3, 4] have begun to connect DPP sampling to $LDL^H$ factorizations as a means of avoiding the initial spectral decomposition, but existing approaches have only outperformed the spectral decomposition approach in special circumstances, where the number of kept modes is a small percentage of the ground set size.

This article proves that trivial modifications of $LU$ and $LDL^H$ factorizations yield efficient direct sampling schemes for non-Hermitian and Hermitian DPP kernels, respectively. Further, it is experimentally shown that even dynamically-scheduled, shared-memory parallelizations of high-performance dense and sparse-direct factorizations can be trivially modified to yield DPP sampling schemes with essentially identical performance.

The software developed as part of this research, Catamari [hodgestar.com/catamari], is released under the Mozilla Public License v2.0. It contains header-only, C++14 plus OpenMP 4.0 implementations of dense and sparse-direct, Hermitian and non-Hermitian DPP samplers.

∗[email protected], Hodge Star Scientific Computing

Figure 1: Two samples from the uniform distribution over spanning trees of a 40 x 40 box of $\mathbb{Z}^2$. Each has likelihood $\exp(-1794.24)$.

Figure 2: Two samples from the uniform distribution over spanning trees of a 10 x 10 section of hexagonal tiles. Each has likelihood $\exp(-299.101)$.

1 Introduction

Determinantal Point Processes (DPPs) were first studied as a distinct class by Macchi in the mid 1970's [1, 5] as a probability distribution for the locations of repulsive (fermionic) particles, in direct contrast with Permanental (or bosonic) Point Processes. Particular instances of DPPs, representing the eigenvalue distributions of classical random matrix ensembles, appeared in a series of papers in the beginning of the 1960's [6, 7, 8, 9, 10] (see [11] for a review, and [12] for a computational perspective on sampling classical eigenvalue distributions). Some of the early investigations of (finite) DPPs involved their usage for uniformly sampling spanning trees of graphs [13, 14] (see Figs. 1 and 2), non-intersecting random walks [15], and domino tilings of the Aztec diamond [16, 17, 18, 19] (see Figs. 3 and 4).

Let us recall that the class of immanants provides a generalization of the determinant and permanent of a matrix by using an arbitrary character $\chi$ of the symmetric group $S_n$ to determine the signs of terms in the summation:

$$|A|^\chi \equiv \sum_{\sigma \in S_n} \chi(\sigma) \prod_{i=1}^{n} A_{i,\sigma(i)}.$$

Choosing the trivial character $\chi \equiv 1$ yields the permanent, while choosing $\chi(\sigma) = \mathrm{sign}(\sigma)$ provides the determinant. The resulting generalization from Determinantal Point Processes and Permanental Point Processes to Immanantal Point Processes was studied in [20].
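As a quick concrete check of the immanant formula (a minimal Python/NumPy sketch, not part of the original paper; the helper names are illustrative), the determinant and permanent fall out of the same brute-force summation over $S_n$ by swapping the character:

```python
import itertools
import math
import numpy as np

def perm_sign(sigma):
    """The sign character: chi(sigma) = (-1)^(number of inversions)."""
    inversions = sum(
        1
        for i in range(len(sigma))
        for j in range(i + 1, len(sigma))
        if sigma[i] > sigma[j]
    )
    return -1 if inversions % 2 else 1

def immanant(A, chi):
    """|A|^chi = sum over sigma in S_n of chi(sigma) * prod_i A[i, sigma(i)].

    Brute force over all n! permutations, so only usable for tiny n.
    """
    n = A.shape[0]
    return sum(
        chi(sigma) * math.prod(A[i, sigma[i]] for i in range(n))
        for sigma in itertools.permutations(range(n))
    )

A = np.random.randn(4, 4)
assert np.isclose(immanant(A, perm_sign), np.linalg.det(A))  # chi = sign
permanent = immanant(A, lambda sigma: 1.0)  # trivial character chi = 1
```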
Figure 3: Two samples from the Aztec diamond DPP of order 10, defined via a complex, non-Hermitian kernel based upon Kenyon's formula over the Kasteleyn matrix. The tiles all begin on dark squares and are oriented either: left (blue), right (red), up (yellow), or down (green). Their likelihoods are both $\exp(-38.1231)$.

Figure 4: Two samples from the Aztec diamond DPP of order 80, defined via a complex, non-Hermitian kernel based upon Kenyon's formula over the Kasteleyn matrix. Their likelihoods are both $\exp(-2245.8)$. The diamond is large enough to clearly display the arctic circle phenomenon discussed in [21] and [19]: "when n is sufficiently large, the shape of the central sub-region becomes arbitrarily close to a perfect circle of radius $n/\sqrt{2}$ for all but a negligible proportion of the tilings."

Definition 1 A finite Determinantal Point Process is a random variable $\mathcal{Y} \sim \mathrm{DPP}(K)$ over the power set of a ground set $\{0, 1, \ldots, n-1\} = [n]$ such that

$$\mathbb{P}[Y \subseteq \mathcal{Y}] = \det(K_Y),$$

where $K \in \mathbb{C}^{n \times n}$ is called the marginal kernel matrix and $K_Y$ denotes the restriction of $K$ to the row and column indices of $Y$.

We can immediately observe that the $j$-th diagonal entry of a marginal kernel $K$ is the probability of index $j$ being in the sample, $\mathcal{Y}$, so the diagonal of every marginal kernel must lie in $[0, 1] \subset \mathbb{R}$. A characteristic requirement for a complex matrix to be admissible as a marginal kernel is given in the following proposition, due to Brunel [22], which we will provide an alternative proof of after introducing our factorization-based sampling algorithm.

Proposition 1 (Brunel [22]) A matrix $K \in \mathbb{C}^{n \times n}$ is admissible as a DPP marginal kernel iff

$$(-1)^{|J|} \det(K - 1_J) \geq 0, \quad \forall J \subseteq [n],$$

where $1_J$ denotes the diagonal matrix with ones at the indices in $J$ and zeros elsewhere.

We can also observe that, because determinants are preserved under similarity transformations, there is a nontrivial equivalence class for marginal kernels; which is to say, there are many marginal kernels defining the same DPP:

Proposition 2 The equivalence class of a DPP kernel $K \in \mathbb{C}^{n \times n}$ contains its orbit under the group of diagonal similarity transformations, i.e.,

$$\{D^{-1} K D : D = \mathrm{diag}(d),\ d \in (\mathbb{C} \setminus \{0\})^n\}.$$

If we restrict to the classes of complex Hermitian or real symmetric kernels, the same statement holds with the entries of $d$ restricted to the circle group $U(1) = \{z \in (\mathbb{C}, \times) : |z| = 1\}$ and the scalar orthogonal group $O(1) = \{z \in (\mathbb{R}, \times) : |z| = 1\}$, respectively.

Proof. Determinants are preserved under similarity transformations, so the probability of inclusion of each subset is unchanged by global diagonal similarity. That a unitary diagonal similarity preserves Hermiticity follows from recognition that it becomes a Hermitian congruence. Likewise, a signature matrix similarity transformation becomes a real symmetric congruence.

In most works, with the notable exception of Kasteleyn matrix approaches to studying domino tilings of the Aztec diamond [16, 17, 18, 19], the marginal kernel is assumed Hermitian. And Macchi [1] showed (cf. [23, 24]) that a Hermitian matrix is admissible as a marginal kernel if and only if its eigenvalues all lie in $[0, 1]$.
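Both admissibility criteria can be verified directly on small examples. The following brute-force sketch (Python/NumPy, not from the paper; exponential in $n$, so purely illustrative) enumerates every subset $J$ and tests Brunel's inequality; by Macchi's result, any Hermitian kernel with spectrum in $[0, 1]$ should pass:

```python
import itertools
import numpy as np

def brunel_admissible(K, tol=1e-10):
    """Check (-1)^{|J|} det(K - 1_J) >= 0 for every subset J of [n].

    1_J is the diagonal matrix with ones at the indices in J; each tested
    quantity is the probability that the sample equals the complement of J,
    so the criterion demands that every atomic probability be nonnegative.
    """
    n = K.shape[0]
    for r in range(n + 1):
        for J in itertools.combinations(range(n), r):
            M = K.astype(complex)  # fresh copy with the J diagonal shifted
            for j in J:
                M[j, j] -= 1.0
            val = (-1) ** len(J) * np.linalg.det(M)
            if val.real < -tol or abs(val.imag) > tol:
                return False
    return True

# A Hermitian (here, real symmetric) matrix with eigenvalues in [0, 1]:
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))
K = Q @ np.diag([0.1, 0.4, 0.7, 0.95]) @ Q.T
assert brunel_admissible(K)
```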
The viewpoint of these eigenvalues as probabilities turns out to be productive, as the most common sampling algorithm for Hermitian DPPs, due to [24] and popularized in the machine learning community by [2], produces an equivalent random orthogonal projection matrix by preserving eigenvectors with probability equal to their eigenvalue:

Theorem 1 (Theorem 7 of [24]) Given any Hermitian marginal kernel matrix $K \in \mathbb{C}^{n \times n}$ with spectral decomposition $Q \Lambda Q^H$, sampling from $\mathcal{Y} \sim \mathrm{DPP}(K)$ is equivalent to sampling from a realization of the random DPP with kernel $Q_{:,Z} Q_{:,Z}^H$, where $Q_{:,Z}$ consists of the columns of $Q$ with $\mathbb{P}[j \in Z] = \Lambda_j$, independently for each $j$.

Such an orthogonal projection marginal kernel is said to define a Determinantal Projection Process [24], or elementary DPP [2], and it is known that the resulting samples almost surely have cardinality equal to the rank of the projection (e.g., Lemma 17 of [24]).

The algebraic specification in Alg. 1 of [2] of Alg. 18 of [24] involved an $O(nk^3)$ approach, where $k$ is the rank of the projection kernel. In many important cases, such as uniformly sampling spanning trees or domino tilings, $k$ is a large fraction of $n$, and the algorithm has quartic complexity (assuming standard matrix multiplication). In the words of [2]:

    Alg. 1 runs in time $O(nk^3)$, where $k$ is the number of eigenvectors selected [...] the initial eigendecomposition [...] is often the computational bottleneck, requiring $O(n^3)$ time. Modern multi-core machines can compute eigendecompositions up to $n \approx 1{,}000$ at interactive speeds of a few seconds, or larger problems up to $n \approx 10{,}000$ in around ten minutes.

A faster, $O(nk^2)$ algorithm for sampling elementary DPPs was given as Alg. 2 of [25]. We will later show that this approach is equivalent to a small modification of a diagonally-pivoted, rank-revealing, left-looking Cholesky factorization [26], where the pivot index is chosen at each iteration by sampling from the probability distribution implied by the diagonal of the remaining submatrix (which is maintained out-of-place).

As a brief aside, we recall that the distinction between up-looking, left-looking, and right-looking factorizations is based upon their choice of loop invariant. Given the matrix partitioning

$$\begin{pmatrix} A_{1,1} & A_{1,2} \\ A_{2,1} & A_{2,2} \end{pmatrix},$$

where the diagonal of $A_{1,1}$ comprises the list of eliminated pivots: an up-looking algorithm will have only updated $A_{1,1}$ (by overwriting it with its factorization), a left-looking algorithm will also have overwritten $A_{2,1}$ with the corresponding block column of the lower-triangular factor (and $A_{1,2}$ with its upper-triangular factor in the nonsymmetric case), and a right-looking algorithm will additionally have overwritten $A_{2,2}$ with the Schur complement resulting from the block elimination of $A_{1,1}$.

Researchers have begun proposing algorithms which directly sample from Hermitian marginal kernels by sequentially deciding whether each index should be in the result, sampling the Bernoulli process defined by the probability of the item's inclusion, conditioned on the combination of all of the explicit inclusion and exclusion decisions made so far.
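To make this concrete, here is a minimal dense sketch (Python/NumPy; a simplification for exposition, not the paper's Catamari implementation) of the $LDL^H$ modification summarized in the abstract. At step $j$, the current diagonal entry of the Schur complement is exactly the conditional probability that index $j$ belongs to the sample; on exclusion, one is subtracted from the pivot, and elimination proceeds as usual in either case:

```python
import numpy as np

def sample_dpp_ldl(K, rng=None):
    """Sample from DPP(K) for a Hermitian marginal kernel K via a
    modified, unpivoted, right-looking LDL^H factorization. O(n^3)."""
    rng = np.random.default_rng(rng)
    A = np.array(K, dtype=complex)  # working copy, overwritten in place
    n = A.shape[0]
    sample = []
    for j in range(n):
        # Conditional probability that j is included, given all of the
        # inclusion/exclusion decisions for indices 0, ..., j - 1.
        p = A[j, j].real
        if rng.uniform() < p:
            sample.append(j)
        else:
            A[j, j] -= 1.0  # condition on the exclusion of index j
        # Eliminate pivot j against the trailing submatrix.
        d = A[j, j].real  # Schur complements stay Hermitian, so d is real
        A[j + 1:, j] /= d
        A[j + 1:, j + 1:] -= d * np.outer(A[j + 1:, j], A[j + 1:, j].conj())
    return sample

# Example: K = 0.5 * I yields each index independently with probability 1/2.
print(sample_dpp_ldl(0.5 * np.eye(6), rng=42))
```

The non-Hermitian ($LU$) variant is analogous: the same Bernoulli decision is made at each pivot, with the symmetric rank-one update replaced by the general Schur-complement update.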
