Determinantal thinning of point processes with network learning applications

B. Błaszczyszyn and H.P. Keeler
Inria/ENS, France

arXiv:1810.08672v2 [cs.LG] 22 Nov 2018

Abstract—A new type of dependent thinning for point processes in continuous space is proposed, which leverages the advantages of determinantal point processes defined on finite spaces and, as such, is particularly amenable to statistical, numerical, and simulation techniques. It gives a new point process that can serve as a network model exhibiting repulsion. The properties and functions of the new point process, such as moment measures, the Laplace functional, the void probabilities, as well as conditional (Palm) characteristics, can be estimated accurately by simulating the underlying (non-thinned) point process, which can be taken, for example, to be Poisson. This is in contrast (and preference) to finite Gibbs point processes, which, instead of thinning, require weighting the Poisson realizations, usually involving intractable normalizing constants. Models based on determinantal point processes are also well suited for statistical (supervised) learning techniques, allowing the models to be fitted to observed network patterns with some particular geometric properties. We illustrate this approach by imitating with determinantal thinning the well-known Matérn II hard-core thinning, as well as a soft-core thinning depending on nearest-neighbour triangles. These two examples demonstrate how the proposed approach can lead to new, statistically optimized, probabilistic transmission scheduling schemes.

Index Terms—dependent thinning, determinantal subset, Palm distributions, statistical learning, geometric networks

I. INTRODUCTION

Researchers have used point processes on the plane to build spatial random models of various wireless network types, but the overwhelming majority of these models relies upon the Poisson point process [4]. To develop a more realistic model, while still keeping it tractable, we propose a thinning operation using discrete determinantal point processes.

Originally called fermion point processes by Macchi [22], determinantal point processes have attracted considerable attention in recent years due to their interesting mathematical properties [11]. These point processes admit analytic approaches to several fundamental characteristics such as the Laplace functional, the void probabilities and Palm distributions [25]. They provide useful statistical models for point patterns exhibiting repulsion [3], [19] and, compared to the well-studied Gibbs point processes [7], have advantages such as faster simulation methods and more tractable expressions for likelihoods and moments [18], [19]. This has motivated researchers to use these point processes, when defined on the plane R², as spatial models for base stations in cellular networks [10], [20], [21], [24], [27].

Determinantal point processes are usually defined via factorial moment measures admitting densities in the form of determinants of matrices populated with the values of some kernel function. But the main obstacle preventing more use of determinantal point processes in R² (or R^d) is the difficulty of finding appropriate kernel functions, which need to define (integral) operators with eigenvalues in the interval [0, 1]. This problem can be largely circumvented when one considers determinantal point processes defined on spaces with finite cardinality, such as bounded lattices, reducing the mathematical technicalities down to problems of linear algebra. Furthermore, this approach allows the use of non-normalized kernels, which we refer to as L-matrices, to more easily define determinantal processes. In this setting, Kulesza and Taskar [16] used these point processes to develop a comprehensive framework for statistical (supervised) learning; see also [15], [17].

We leverage this line of research and define point processes in continuous space using a doubly stochastic approach. First, an underlying point process in a bounded subset R ⊂ R^d is considered, for which a natural choice is the Poisson point process. Then, the points of a given realization of this process are considered as a finite, discrete state space on which a determinantal process (a subset of the realization) is sampled using some kernel that usually depends on the underlying realization. This operation, which can be seen as a dependent thinning, leads to a new point process existing on bounded regions of R^d and exhibiting more repulsion than the underlying point process. Conditioned on a given realization of the underlying point process, the subset point process inherits all the closed-form expressions available for discrete determinantal point processes, thus allowing one to accurately estimate the characteristics of the new (thinned) point process by simulating the underlying (non-thinned) point process. The statistical learning approach proposed by Kulesza and Taskar [16] can then be used to fit the kernel of the determinantal thinning to various types of observed network models.

The paper is structured as follows. In Section II we recall the basics of determinantal processes in finite spaces; in Section III we introduce the determinantally-thinned point processes and some of their characteristics, including Palm distributions; in Section IV we present the fitting method based on maximum likelihoods; we demonstrate the results with two illustrative examples in Section V; and in Section VI we discuss network applications. The code for all numerical results is available online [12].

II. DETERMINANTAL POINT PROCESSES

We start by detailing determinantal point processes in a discrete setting.

A. Finite state space

We consider an underlying state space S on which we will define a point process (the term carrier space is also used). We assume the important simplification that the cardinality of the state space S is finite, that is #(S) < ∞. We consider a simple point process Ψ on the state space S, which means that Ψ is a random subset of the state space S, that is Ψ ⊆ S. A single realization ψ of this point process Ψ can be interpreted simply as occupied or unoccupied locations in the underlying state space S.

B. Definition

For a state space S with finite cardinality m := #(S), a discrete point process Ψ is a determinantal point process if for all configurations (or subsets) ψ ⊆ S,

P(Ψ ⊇ ψ) = det(K_ψ), (1)

where K is some real symmetric m × m matrix, and K_ψ := [K]_{x,y∈ψ} denotes the restriction of K to the entries indexed by the elements or points in ψ, that is x, y ∈ ψ. The matrix K is called the marginal kernel; it has to be positive semi-definite, and its eigenvalues need to be bounded between zero and one.

To simulate or sample a determinantal point process on a finite state space, one typically uses an algorithm based on the eigenvalues and eigenvectors of the matrix K. The number of points is given by Bernoulli trials (or biased coin flips) with the probabilities of success being equal to the eigenvalues, while the joint location of the points is determined by the eigenvectors corresponding to the successful trials. Each point is randomly placed one after the other; for further details, see [16, Algorithm 1], [19, Algorithm 1] and [28, Algorithm 1].

C. L-ensembles

In the finite state space setting, kernels K can be easily defined by using the formalism of L-ensembles. Instead of finding a K matrix with appropriate eigenvalues, we can work with a family of point processes known as L-ensembles, which are defined through a positive semi-definite matrix L, also indexed by the elements of the space S, but whose eigenvalues, though non-negative, do not need to be less than one. An L-ensemble is the point process Ψ with distribution

P(Ψ = ψ) = det(L_ψ)/det(I + L), (2)

where I is the m × m identity matrix. Every L-ensemble is a determinantal point process with marginal kernel

K = L(I + L)⁻¹, (3)

and, conversely, a determinantal point process with marginal kernel K is an L-ensemble with

L = K(I − K)⁻¹, (4)

provided all eigenvalues of K are strictly less than one, which is equivalent to P(Ψ = ∅) > 0. For more details, see, for example, the paper by Borodin and Rains [5, Proposition 1.1] or the book [16] by Kulesza and Taskar.

III. DETERMINANTALLY-THINNED POINT PROCESSES

We now define a new point process, which builds upon a homogeneous Poisson point process Φ with intensity λ > 0 on a bounded region R ⊂ R^d of the d-dimensional Euclidean space. Given a realization Φ = φ, we consider it as the state space S = φ on which a determinantal point process (subset) Ψ ⊂ φ is sampled, resulting in a (typically dependent) thinning of the realization Φ = φ. More precisely, the points in a realization φ = {x_i}_i of a Poisson point process Φ form the state space of the finite determinantal point process Ψ, which is defined via

P(Ψ ⊇ ψ | Φ = φ) = det(K_ψ(φ)), (5)

where K_ψ(φ) = [K(φ)]_{x_i,x_j∈ψ} is such that ψ ⊂ φ ⊂ R. Note that Ψ is characterized by the intensity measure of the underlying Poisson point process Φ and the function K(·), which maps each (Poisson) realization φ to a semi-definite matrix K(φ) (with eigenvalues in [0, 1]) whose elements are indexed by the points of φ.

The point process Ψ is defined on a subset of R^d, but uses the discrete approach of determinantal point processes. In other words, the points of the realization φ are dependently thinned such that there is repulsion among the points of Ψ. We call this point process a determinantally-thinned Poisson point process or, for brevity, a determinantal Poisson process.

The doubly stochastic construction of determinantally-thinned point processes can be compared with the classic Matérn hard-core processes (of type I, II and III), which are also constructed through dependent thinning of underlying Poisson point processes. For these point processes, there is a zero probability that any two points are within a certain fixed distance of each other. Determinantal thinning of Poisson point processes can provide examples of soft-core processes, where there is a smaller (compared to the Poisson case) probability that any two points are within a certain distance of each other. We return to this theme later in our results section, where we fit our new point process to a Matérn II hard-core process.
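The discrete definitions of Section II can be checked numerically by brute-force enumeration on a tiny state space. The sketch below is our own illustration, not code from the paper: the 3 × 3 L-matrix is an arbitrary positive semi-definite choice, and the script verifies that the standard L-ensemble probabilities det(L_ψ)/det(I + L) sum to one and reproduce the inclusion probabilities det(K_ψ) of (1) under K = L(I + L)⁻¹.

```python
import itertools
import numpy as np

# Tiny state space S = {0, 1, 2} with an illustrative (assumed) positive
# semi-definite L-matrix; any PSD matrix would do.
L = np.array([[1.0, 0.4, 0.1],
              [0.4, 1.0, 0.4],
              [0.1, 0.4, 1.0]])
m = L.shape[0]
I = np.eye(m)

# Marginal kernel of the L-ensemble: K = L(I + L)^{-1}.
K = L @ np.linalg.inv(I + L)

def prob_equal(psi):
    """L-ensemble distribution: P(Psi = psi) = det(L_psi) / det(I + L)."""
    idx = list(psi)
    return np.linalg.det(L[np.ix_(idx, idx)]) / np.linalg.det(I + L)

def prob_contains(psi):
    """Inclusion probability from (1): P(Psi contains psi) = det(K_psi)."""
    idx = list(psi)
    return np.linalg.det(K[np.ix_(idx, idx)])

subsets = [set(c) for r in range(m + 1)
           for c in itertools.combinations(range(m), r)]

# The L-ensemble probabilities form a distribution over all subsets.
assert abs(sum(prob_equal(s) for s in subsets) - 1.0) < 1e-12

# (1) agrees with summing the distribution over all supersets of psi.
for psi in subsets:
    brute = sum(prob_equal(s) for s in subsets if psi <= s)
    assert abs(prob_contains(psi) - brute) < 1e-12
```

The one-point inclusion probabilities are simply the diagonal entries of K, which is why trace(K) gives the expected number of points.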
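The spectral sampling procedure recalled in Section II (Bernoulli trials on the eigenvalues of K, then sequential placement of points using the eigenvectors of the successful trials) can be sketched in a few lines of NumPy. This is a minimal version of the standard algorithm, not the authors' code:

```python
import numpy as np

def sample_dpp(K, rng):
    """Sample a determinantal point process on {0, ..., m-1} with marginal
    kernel K (real symmetric, eigenvalues in [0, 1])."""
    eigvals, eigvecs = np.linalg.eigh(K)
    # Phase 1: one Bernoulli trial per eigenvalue; the number of successes
    # fixes the number of points in the sample.
    keep = rng.random(len(eigvals)) < eigvals
    V = eigvecs[:, keep]   # orthonormal columns spanning the sampled subspace
    sample = []
    while V.shape[1] > 0:
        # Phase 2: pick a point with probability given by the squared row
        # norms of V (these sum to the number of remaining columns).
        p = np.sum(V**2, axis=1)
        i = int(rng.choice(len(p), p=p / p.sum()))
        sample.append(i)
        # Drop one column with a nonzero i-th entry, zero the i-th
        # coordinate of the remaining columns, and re-orthonormalize.
        j = int(np.argmax(np.abs(V[i, :])))
        Vj = V[:, j].copy()
        V = np.delete(V, j, axis=1)
        if V.shape[1] == 0:
            break
        V = V - np.outer(Vj, V[i, :] / Vj[i])
        V, _ = np.linalg.qr(V)
    return sample
```

As a quick sanity check, a diagonal K reduces the process to independent Bernoulli retention of each element, and K equal to the identity retains every element with probability one.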
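An end-to-end sketch of the doubly stochastic construction of Section III follows. The kernel choice here is our own assumption for illustration, not one of the paper's fitted models: a Gaussian-similarity L-matrix, so that nearby Poisson points are unlikely to be retained together, with illustrative intensity λ = 50 and bandwidth σ = 0.1. The script simulates a Poisson realization φ on [0, 1]², builds K(φ) = L(I + L)⁻¹, and checks that its eigenvalues lie in [0, 1) as required by (5).

```python
import numpy as np

rng = np.random.default_rng(1)

# Underlying homogeneous Poisson point process Phi on R = [0, 1]^2.
lam = 50.0                # intensity (illustrative value)
n = rng.poisson(lam)      # Poisson number of points
phi = rng.random((n, 2))  # given n, locations are i.i.d. uniform on the square

# Illustrative (assumed) L-matrix: Gaussian similarity of point locations,
# a positive semi-definite choice that induces repulsion at short range.
sigma = 0.1
d2 = np.sum((phi[:, None, :] - phi[None, :, :]) ** 2, axis=-1)
L = np.exp(-d2 / (2 * sigma**2))

# Marginal kernel K(phi) = L(I + L)^{-1}; its eigenvalues are l/(1 + l)
# for the eigenvalues l >= 0 of L, hence lie in [0, 1).
K = L @ np.linalg.inv(np.eye(n) + L)

# Expected number of retained points is trace(K) < n, so the determinantal
# thinning keeps fewer points on average than the Poisson realization.
expected_retained = np.trace(K)
```

Sampling the thinned process then amounts to running a discrete determinantal sampler on K(φ), conditionally on the realization φ.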
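For comparison with the soft-core behaviour above, the classic Matérn II hard-core thinning that the paper later imitates with determinantal thinning can be sketched directly: each point receives an independent uniform mark and survives only if no point within distance r carries a smaller mark. This is the standard construction; the parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

def matern_II(points, r, rng):
    """Matérn II hard-core thinning: a point survives only if no point
    within distance r has a smaller independent uniform mark."""
    n = len(points)
    marks = rng.random(n)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        # Rivals: all other points (retained or not) within distance r.
        rivals = (d[i] < r) & (np.arange(n) != i)
        if np.any(marks[rivals] < marks[i]):
            keep[i] = False
    return points[keep]

# Stand-in for a Poisson realization on [0, 1]^2 (illustrative parameters).
pts = rng.random((200, 2))
thinned = matern_II(pts, r=0.05, rng=rng)
# Hard-core property: no two surviving points are closer than r.
```

Unlike the determinantal thinning, which only makes close pairs unlikely, this thinning forbids them outright, which is exactly the hard-core behaviour the fitted determinantal model approximates.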