Adaptive Non-Parametric Regression With the K-NN Fused Lasso

Oscar Hernan Madrid Padilla (1), James Sharpnack (2), Yanzhen Chen (5), Daniela Witten (3,4)
[email protected], [email protected], [email protected], [email protected]
(1) Department of Statistics, UCLA; (2) Department of Statistics, UC Davis; (3) Department of Statistics and (4) Department of Biostatistics, University of Washington; (5) Department of ISOM, Hong Kong University of Science and Technology

July 9, 2019

Abstract

The fused lasso, also known as total-variation denoising, is a locally-adaptive function estimator over a regular grid of design points. In this paper, we extend the fused lasso to settings in which the points do not occur on a regular grid, leading to an approach for non-parametric regression. This approach, which we call the K-nearest neighbors (K-NN) fused lasso, involves (i) computing the K-NN graph of the design points; and (ii) performing the fused lasso over this K-NN graph. We show that this procedure has a number of theoretical advantages over competing approaches: specifically, it inherits local adaptivity from its connection to the fused lasso, and it inherits manifold adaptivity from its connection to the K-NN approach. We show that excellent results are obtained in a simulation study and on an application to flu data. For completeness, we also study an estimator that makes use of an ε-graph rather than a K-NN graph, and contrast this with the K-NN fused lasso.

Keywords: non-parametric regression, local adaptivity, manifold adaptivity, total variation, fused lasso

1 Introduction

In this paper, we consider the non-parametric regression setting in which we have n observations, (x1, y1), . . . , (xn, yn), of the pair of random variables (X, Y) ∈ X × R, where X is a metric space with metric dX. We suppose that the model

$$y_i = f_0(x_i) + \varepsilon_i, \quad i = 1, \dots, n, \qquad (1)$$

holds, where f0 : X → R is an unknown function that we wish to estimate. This problem arises in many settings, including demographic applications (Petersen et al., 2016a; Sadhanala and Tibshirani, 2017), environmental data analysis (Hengl et al., 2007), image processing (Rudin et al., 1992), and causal inference (Wager and Athey, 2018). A substantial body of work has considered estimating the function f0 in (1) at the observations X = x1, . . . , xn (i.e. denoising) as well as at other values of the random variable X (i.e. prediction). This includes

seminal papers by Breiman et al. (1984), Duchon (1977), and Friedman (1991), as well as more recent work by Petersen et al. (2016b), Petersen et al. (2016a), and Sadhanala and Tibshirani (2017). A number of previous papers have focused in particular on manifold adaptivity, i.e., adapting to the dimensionality of the data; these include work on local polynomial regression by Bickel and Li (2007) and Cheng and Wu (2013), K-NN regression by Kpotufe (2011), Gaussian processes by Yang et al. (2015) and Yang and Dunson (2016), and tree-based estimators such as those in Kpotufe (2009) and Kpotufe and Dasgupta (2012). We refer the reader to Györfi et al. (2006) for a very detailed survey of other classical non-parametric regression methods. The vast majority of this work performs well in function classes with variation controlled uniformly throughout the domain, such as Lipschitz and L2 Sobolev classes. Donoho and Johnstone (1998) and Härdle et al. (2012) generalize this setting by considering functions of bounded variation and Besov classes. In this work, we focus on piecewise Lipschitz and bounded variation functions, as these classes can have functions with non-smooth regions as well as smooth regions (Wang et al., 2016). Recently, interest has focused on so-called trend filtering (Kim et al., 2009), which seeks to estimate f0(·) under the assumption that its discrete derivatives are sparse, in a setting in which we have access to an unweighted graph that quantifies the pairwise relationships between the n observations. In particular, the fused lasso, also known as zeroth-order trend filtering or total variation denoising (Mammen and van de Geer, 1997; Rudin et al., 1992; Tibshirani et al., 2005; Wang et al., 2016), solves the optimization problem

$$\underset{\theta \in \mathbb{R}^n}{\text{minimize}} \;\left\{ \frac{1}{2} \sum_{i=1}^{n} (y_i - \theta_i)^2 + \lambda \sum_{(i,j) \in E} |\theta_i - \theta_j| \right\}, \qquad (2)$$

where λ is a non-negative tuning parameter, and where (i, j) ∈ E if and only if there is an edge between the ith and jth observations in the underlying graph. Then, $\hat f(x_i) = \hat\theta_i$. Computational aspects of the fused lasso have been studied extensively in the case of chain graphs (Johnson, 2013; Barbero and Sra, 2014; Davies and Kovac, 2001) as well as general graphs (Chambolle and Darbon, 2009; Chambolle and Pock, 2011; Landrieu and Obozinski, 2015; Hoefling, 2010; Tibshirani and Taylor, 2011). Furthermore, the fused lasso is known to have excellent theoretical properties. In one dimension, Mammen and van de Geer (1997) and Tibshirani (2014) showed that the fused lasso attains nearly minimax rates in mean squared error (MSE) for estimating functions of bounded variation. More recently, also in one dimension, Lin et al. (2017) and Guntuboyina et al. (2017) independently proved that the fused lasso is nearly minimax under the assumption that f0 is piecewise constant. In grid graphs, Hutter and Rigollet (2016), Sadhanala et al. (2016), and Sadhanala et al. (2017) proved minimax results for the fused lasso when estimating signals of interest in applications of image denoising. In more general graph structures, Padilla et al. (2018) showed that the fused lasso is consistent for denoising problems, provided that the total variation of the underlying signal along the graph, divided by n, goes to zero. Other graph models that have been studied in the literature include tree graphs in Padilla et al. (2018) and Ortelli and van de Geer (2018), and star and Erdős–Rényi graphs in Hutter and Rigollet (2016). In this paper, we extend the utility of the fused lasso approach by combining it with the K-nearest neighbors (K-NN) procedure. K-NN has been well-studied from a theoretical (Stone, 1977; Chaudhuri and Dasgupta, 2014; Von Luxburg et al., 2014; Alamgir et al., 2014), methodological (Dasgupta, 2012; Kontorovich et al., 2016; Singh and Póczos, 2016; Dasgupta and Kpotufe, 2014), and algorithmic (Friedman et al., 1977; Dasgupta and Sinha, 2013; Zhang et al., 2012) perspective. One key feature of K-NN methods is that they automatically have a finer resolution in regions with a higher density of design points; this is particularly consequential when the underlying density is highly non-uniform. We study the extreme case in which the data are supported over multiple manifolds of mixed intrinsic dimension. An estimator that adapts to this setting is said to achieve manifold adaptivity. In this paper, we exploit recent theoretical developments in the fused lasso and the K-NN procedure in order to obtain a single approach that inherits the advantages of both methods. In greater detail, we extend

the fused lasso to the general non-parametric setting of (1), by performing a two-step procedure.

Step 1. We construct a K-nearest-neighbor (K-NN) graph, by placing an edge between each observation and the K observations to which it is closest, in terms of the metric dX.

Step 2. We apply the fused lasso to this K-NN graph.

The resulting K-NN fused lasso (K-NN-FL) estimator appeared in the context of image processing in Elmoataz et al. (2008) and Ferradans et al. (2014), and more recently in an application of graph trend filtering in Wang et al. (2016). We are the first to study its theoretical properties. We also consider a variant obtained by replacing the K-NN graph in Step 1 with an ε-nearest-neighbor (ε-NN) graph, which contains an edge between xi and xj only if dX(xi, xj) < ε. The main contributions of this paper are as follows:

Local adaptivity. We show that, provided that f0 has bounded variation along with an additional condition that generalizes piecewise Lipschitz continuity (Assumption 5), the mean squared errors of both the K-NN-FL estimator and the ε-NN-FL estimator scale like $n^{-1/d}$, ignoring logarithmic factors; here, d > 1 is the dimension of X. In fact, this matches the minimax rate for estimating a two-dimensional Lipschitz function (Györfi et al., 2006), but over a much wider function class.

Manifold adaptivity. Suppose that the covariates are i.i.d. samples from a mixture model $\sum_{l=1}^{\ell} \pi_l^* p_l$, where $p_1, \dots, p_\ell$ are unknown bounded densities, and the weights $\pi_l^* \in [0, 1]$ satisfy $\sum_{l=1}^{\ell} \pi_l^* = 1$. Suppose further that for $l = 1, \dots, \ell$, the support $X_l$ of $p_l$ is homeomorphic (see Assumption 3) to $[0,1]^{d_l} = [0,1] \times [0,1] \times \dots \times [0,1]$, where $d_l > 1$ is the intrinsic dimension of $X_l$. We show that under mild conditions, if the restriction of f0 to $X_l$ is a function of bounded variation, then the K-NN-FL estimator attains the rate $\sum_{l=1}^{\ell} \pi_l^* (\pi_l^* n)^{-1/d_l}$. We can obtain intuition for this rate by noticing that $\pi_l^* n$ is the expected number of samples from the lth component, and hence $(\pi_l^* n)^{-1/d_l}$ is the expected rate for the lth component. Therefore, our rate is the weighted average of the expected rates for the different components.

2 Methodology

2.1 The K-Nearest-Neighbor and ε-Nearest-Neighbor Fused Lasso

Both the K-NN-FL and ε-NN-FL approaches are simple two-step procedures. The first step involves constructing a graph on the n observations. The K-NN graph, GK = (V, EK), has vertex set V = {1, . . . , n}, and its edge set EK contains the pair (i, j) if and only if xi is among the K nearest neighbors (with respect to the metric dX) of xj, or vice versa. By contrast, the ε-graph, Gε = (V, Eε), contains the edge (i, j) in Eε if and only if dX(xi, xj) < ε.

After constructing the graph, we apply the fused lasso to $y = (y_1, \dots, y_n)^T$ over the graph G (either GK or Gε). We can re-write the fused lasso optimization problem (2) as

$$\hat\theta = \underset{\theta \in \mathbb{R}^n}{\arg\min} \left\{ \frac{1}{2} \sum_{i=1}^{n} (y_i - \theta_i)^2 + \lambda \, \|\nabla_G \theta\|_1 \right\}, \qquad (3)$$

where λ > 0 is a tuning parameter, and $\nabla_G$ is an oriented incidence matrix of G; each row of $\nabla_G$ corresponds to an edge in G. For instance, if the kth edge in G connects the ith and jth observations, then

$$(\nabla_G)_{k,l} = \begin{cases} 1 & \text{if } l = i, \\ -1 & \text{if } l = j, \\ 0 & \text{otherwise}, \end{cases}$$


Figure 1: (a): A heatmap of n = 5000 draws from (5). (b): n = 5000 samples generated as in (1), with $\varepsilon_i \overset{\mathrm{ind}}{\sim} N(0, 0.5)$, where X has probability density function as in (5) and f0 is given in (6). The vertical axis corresponds to f0(xi), and the other two axes display the two covariates.

and so $(\nabla_G \theta)_k = \theta_i - \theta_j$. This definition of $\nabla_G$ implicitly assumes an ordering of the nodes and edges, which may be chosen arbitrarily without loss of generality. In this paper, we mostly focus on the setting where G = GK is the K-NN graph. We also include an analysis of the ε-graph, which results from using G = Gε, as a point of contrast. Given the estimator $\hat\theta$ defined in (3), we predict the response at a new observation x ∈ X \ {x1, . . . , xn} according to

$$\hat f(x) = \frac{1}{\sum_{j=1}^{n} k(x_j, x)} \sum_{i=1}^{n} \hat\theta_i \, k(x_i, x). \qquad (4)$$

In the case of K-NN-FL, we take $k(x_i, x) = 1_{\{x_i \in N_K(x)\}}$, where $N_K(x)$ is the set of K nearest neighbors of x in the training data. In the case of ε-NN-FL, we take $k(x_i, x) = 1_{\{d_X(x_i, x) < \epsilon\}}$. (Given a set A, $1_A(x)$ is the indicator function that takes on a value of 1 if x ∈ A, and 0 otherwise.) Note that for the ε-NN-FL estimator, the prediction rule in (4) might not be well-defined if all the training points are farther than ε from x. When that is the case, we set $\hat f(x)$ to equal the fitted value of the nearest training point. We construct the K-NN and ε-NN graphs using standard Matlab functions such as knnsearch and bsxfun; this results in a computational complexity of $O(n^2)$. We solve the fused lasso with the parametric max-flow algorithm from Chambolle and Darbon (2009), for which software is available from the authors' website, http://www.cmap.polytechnique.fr/~antonin/software/; in practice it is much faster than its worst-case complexity of $O(m\, n^2)$, where m is the number of edges in the graph (Boykov and Kolmogorov, 2004; Chambolle and Darbon, 2009). In ε-NN and K-NN, the values of ε and K directly affect the sparsity of the graphs, and hence the computational performance of the ε-NN-FL and K-NN-FL estimators. Corollary 3.23 in Miller et al. (1997) provides an upper bound on the maximum degree of arbitrary K-NN graphs in $\mathbb{R}^d$.
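To make the two-step procedure concrete, the following Python sketch builds the symmetric K-NN graph with scikit-learn and solves (3) with cvxpy as a generic convex solver, then applies the prediction rule (4). It is an illustration only, not the Matlab/parametric max-flow implementation described above; the function names knn_fused_lasso and knn_fl_predict are ours.

```python
import numpy as np
import cvxpy as cp
from sklearn.neighbors import NearestNeighbors, kneighbors_graph

def knn_fused_lasso(X, y, K=5, lam=0.1):
    """Step 1: symmetric K-NN graph; Step 2: fused lasso (3) over that graph."""
    A = kneighbors_graph(X, n_neighbors=K, mode="connectivity")
    A = A.maximum(A.T)                         # edge if i is a K-NN of j or vice versa
    rows, cols = A.nonzero()
    edges = [(i, j) for i, j in zip(rows, cols) if i < j]   # each undirected edge once

    n = len(y)
    D = np.zeros((len(edges), n))              # oriented incidence matrix nabla_G
    for k, (i, j) in enumerate(edges):
        D[k, i], D[k, j] = 1.0, -1.0

    theta = cp.Variable(n)
    objective = 0.5 * cp.sum_squares(y - theta) + lam * cp.norm1(D @ theta)
    cp.Problem(cp.Minimize(objective)).solve()
    return theta.value

def knn_fl_predict(X_train, theta_hat, X_new, K=5):
    """Prediction rule (4) with k(x_i, x) = 1{x_i in N_K(x)}: average the fitted
    values over the K nearest training points of each new point."""
    nn = NearestNeighbors(n_neighbors=K).fit(X_train)
    _, idx = nn.kneighbors(X_new)
    return theta_hat[idx].mean(axis=1)
```

In practice a specialized graph fused lasso solver, such as the parametric max-flow algorithm cited above, is preferable for large n; the generic solver here is only meant to show the structure of the optimization problem.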

2.2 Example

To illustrate the main advantages of K-NN-FL, we construct a simple example. We refer to the ability to adapt to the local smoothness of the regression function as local adaptivity, and the ability to adapt to the density of the design points as manifold adaptivity. The performance gains of K-NN-FL are most pronounced when these two effects happen in concert, namely, when the regression function is less smooth where design points are denser. These properties are manifested in the following example.

We generate $X \in \mathbb{R}^2$ according to the probability density function

$$p(x) = \frac{1}{5}\, 1_{\{[0,1]^2 \setminus [0.4,0.6]^2\}}(x) + \frac{16}{25}\, 1_{\{[0.45,0.55]^2\}}(x) + \frac{4}{25}\, 1_{\{[0.4,0.6]^2 \setminus [0.45,0.55]^2\}}(x). \qquad (5)$$

Thus, p concentrates 64% of its mass in the small square $[0.45, 0.55]^2$, and 80% in $[0.4, 0.6]^2$. The left-hand panel of Figure 1 displays a heatmap of n = 5000 observations drawn from (5). We define $f_0 : \mathbb{R}^2 \to \mathbb{R}$ in (1) to be the piecewise constant function

$$f_0(x) = 1_{\left\{ \left\| x - \frac{1}{2}(1,1)^T \right\|_2^2 \le \frac{2}{1000} \right\}}(x). \qquad (6)$$

We then generate $\{(x_i, y_i)\}_{i=1}^{n}$ with n = 5000 from (1); the regression function is displayed in the right-hand panel of Figure 1. This simulation study has the following characteristics: (a) the function f0 in (6) is not Lipschitz, but does have low total variation, and (b) the probability density function p is non-uniform with higher density in the region where f0 is less smooth. We compared the following methods in this example:

1. K-NN-FL, with the number of neighbors set to K = 5, and the tuning parameter λ chosen to minimize the average MSE over 100 Monte Carlo replicates.

2. CART (Breiman et al., 1984), with the complexity parameter chosen to minimize the average MSE over 100 Monte Carlo replicates.

3. K-NN regression (see e.g. Stone, 1977), with the number of neighbors K set to minimize the average MSE over 100 Monte Carlo replicates.

The estimated regression functions resulting from these three approaches are displayed in Figure 2. We see that K-NN-FL can adapt to low-density and high-density regions of the distribution of covariates, as well as to the local structure of the regression function. By contrast, CART displays some artifacts due to the binary splits that make up the decision tree, and K-NN regression undersmoothes in large areas of the domain. In practice, we anticipate that K-NN-FL will outperform its competitors when the data are highly concentrated around a low-dimensional manifold and the regression function is non-smooth in that region, as in the above example. In our theoretical analysis, we will consider the special case in which the data lie precisely on a low-dimensional manifold, or a mixture of low-dimensional manifolds.
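For readers who wish to reproduce a data set like the one in Figure 1, the sketch below draws covariates by treating the three coefficients in (5) as mixture weights over uniform draws on the corresponding regions (consistent with the 64%/80% mass statement above) and adds N(0, 0.5) noise, read here as a standard deviation of 0.5; these reading choices are ours, not spelled out in the text.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_covariates(n):
    """Draw X from (5), interpreting 1/5, 16/25, 4/25 as mixture weights over
    uniform distributions on the three regions of the unit square."""
    comp = rng.choice(3, size=n, p=[1 / 5, 16 / 25, 4 / 25])
    X = np.empty((n, 2))
    for i, c in enumerate(comp):
        if c == 1:                                      # [0.45, 0.55]^2
            X[i] = rng.uniform(0.45, 0.55, 2)
            continue
        while True:                                     # rejection sampling for the ring regions
            x = rng.uniform(0.4, 0.6, 2) if c == 2 else rng.uniform(0.0, 1.0, 2)
            inner = np.all((x >= 0.45) & (x <= 0.55))
            middle = np.all((x >= 0.4) & (x <= 0.6))
            if (c == 2 and not inner) or (c == 0 and not middle):
                X[i] = x
                break
    return X

def f0(X):
    """Piecewise-constant regression function (6)."""
    return (np.sum((X - 0.5) ** 2, axis=1) <= 2 / 1000).astype(float)

n = 5000
X = sample_covariates(n)
y = f0(X) + rng.normal(0.0, 0.5, n)   # N(0, 0.5) noise as in the Figure 1 caption
```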

3 Local Adaptivity of K-NN-FL and ε-NN-FL

3.1 Assumptions

We assume that, in (1), the elements of $\varepsilon = (\varepsilon_1, \dots, \varepsilon_n)^T$ are independent and identically distributed mean-zero sub-Gaussian random variables,

$$E(\varepsilon_i) = 0, \quad \mathrm{pr}(|\varepsilon_i| > t) \le C \exp\{-t^2/(2\sigma^2)\}, \quad i = 1, \dots, n, \ \text{for all } t > 0, \qquad (7)$$

Figure 2: Top Left: The function f0 in (6), evaluated at an evenly-spaced grid of size 100 × 100 in $[0, 1]^2$. Top Right: The estimate of f0 obtained via K-NN-FL. Bottom Left: The estimate of f0 obtained via CART. Bottom Right: The estimate of f0 obtained via K-NN regression.

for some positive constants σ and C. Furthermore, we assume that ε is independent of X.

In addition, for a set $A \subset \mathcal{A}$ with $(\mathcal{A}, d_{\mathcal{A}})$ a metric space, we write $B_\epsilon(A) = \{a : \text{there exists } a' \in A \text{ with } d_{\mathcal{A}}(a, a') \le \epsilon\}$. We let ∂A denote the boundary of the set A. Moreover, the MSE of $\hat\theta$ is defined as $\|\hat\theta - \theta^*\|_n^2 = n^{-1} \sum_{i=1}^{n} (\hat\theta_i - \theta_i^*)^2$. The Euclidean norm of a vector $x \in \mathbb{R}^d$ is denoted by $\|x\|_2 = (x_1^2 + \dots + x_d^2)^{1/2}$. For $s \in \mathbb{N}$, we set $1_s = (1, \dots, 1)^T \in \mathbb{R}^s$. In the covariate space X, we consider the Borel sigma algebra, B(X), induced by the metric dX. We let µ be a measure on B(X). We complement the model in (1) by assuming that the covariates satisfy $x_i \overset{\mathrm{ind}}{\sim} p(x)$. Thus, p is the probability density function associated with the distribution of xi, with respect to the measure space (X, B(X), µ). Note that X can be a manifold of dimension d in a space of much higher dimension.

We begin by stating assumptions on the distribution of the covariates p(·), and on the metric space (X, dX). In the theoretical results in Section 3 of Györfi et al. (2006), it is assumed that p is the probability density function of the uniform distribution in $[0,1]^d$. In this section, we will require only that p is bounded above and below. This condition appeared in the framework for studying K-NN graphs in Von Luxburg et al. (2014), and in the work on density quantization by Alamgir et al. (2014).

Assumption 1. The density p satisfies $0 < p_{\min} < p(x) < p_{\max}$ for all x ∈ X, where $p_{\min}, p_{\max} \in \mathbb{R}$.

Although we do not require that X be a Euclidean space, we do require that balls in X have volume (with respect to µ) that behaves similarly to the Lebesgue measure of balls in $\mathbb{R}^d$. This is expressed in the next assumption, which appeared as part of the definition of a valid region (Definition 2) in Von Luxburg et al. (2014).

Assumption 2. The base measure µ in X satisfies

$$c_{1,d}\, r^d \le \mu\{B_r(x)\} \le c_{2,d}\, r^d, \quad \text{for all } x \in X, \ \text{for all } 0 < r < r_0,$$

where $r_0$, $c_{1,d}$, and $c_{2,d}$ are positive constants, and $d \in \mathbb{N} \setminus \{0, 1\}$ is the intrinsic dimension of X.

Next, we make an assumption about the topology of the space X. We require that the space has no holes and is topologically equivalent to $[0,1]^d$, in the sense that there exists a continuous bijection between X and $[0,1]^d$.

Assumption 3. There exists a homeomorphism (a continuous bijection with a continuous inverse) $h : X \to [0,1]^d$, such that

$$L_{\min}\, d_X(x, x') \le \|h(x) - h(x')\|_2 \le L_{\max}\, d_X(x, x'), \quad \text{for all } x, x' \in X,$$

for some positive constants $L_{\min}, L_{\max}$, where $d \in \mathbb{N} \setminus \{0, 1\}$ is the intrinsic dimension of X.

Note that Assumptions 2 and 3 immediately hold if $X = [0,1]^d$, with dX the Euclidean distance, h the identity mapping in $[0,1]^d$, and µ the Lebesgue measure in $[0,1]^d$. A metric space (X, dX) that satisfies Assumption 3 is a special case of a differential manifold; the intuition is that the space X is a chart of the atlas for said differential manifold. In Assumptions 2 and 3, we assume d > 1, since local adaptivity in non-parametric regression is well understood in one dimension. For instance, see Tibshirani (2014), Wang et al. (2016), Guntuboyina et al. (2017), and references therein.

We now proceed to state conditions on the regression function f0 defined in (1). The first assumption simply requires bounded variation of the composition of the regression function with the homeomorphism h from Assumption 3.

Assumption 4. The function $g_0 = f_0 \circ h^{-1}$ has bounded variation, i.e. $g_0 \in \mathrm{BV}\{(0,1)^d\}$, and $g_0$ is also bounded. Here $(0,1)^d$ is the interior of $[0,1]^d$, and $\mathrm{BV}\{(0,1)^d\}$ is the class of functions in $(0,1)^d$ with bounded variation. We refer the reader to Section S2 in the Supplementary Material for the explicit construction of the $\mathrm{BV}\{(0,1)^d\}$ class. The function h was defined in Assumption 3.

It is worth mentioning that if $X = [0,1]^d$ and h(·) is the identity function in $[0,1]^d$, then Assumption 4 simply states that f0 has bounded variation. However, in order to allow for more general scenarios, the condition is stated in terms of the function $g_0$, whose domain is the unit box, whereas the domain of f0 is the more general set X. We now recall the definition of a piecewise Lipschitz function, which induces a much larger class than the set of Lipschitz functions, as it allows for discontinuities.

Definition 1. Let $\Omega_\epsilon := [0,1]^d \setminus B_\epsilon(\partial [0,1]^d)$. We say that a bounded function $g : [0,1]^d \to \mathbb{R}$ is piecewise Lipschitz if there exists a set $S \subset (0,1)^d$ that has the following properties:

1. The set S has Lebesgue measure zero.

7 −1 d 2. For some constants CS , 0 > 0, we have that µ(h {B(S) ∪ ([0, 1] \Ω)}) ≤ CS  for all 0 <  < 0.

0 3. There exists a positive constant L0 such that if z and z belong to the same connected component of 0 0 Ω\B(S), then |g(z) − g(z )| ≤ L0 kz − z k2. Roughly speaking, Definition1 says that g is piecewise Lipschitz if there exists a small set S that partitions [0, 1]d in such a way that g is Lipschitz within each connected component of the partition. Theorem 2.2.1 in Ziemer(2012) implies that if g is piecewise Lipschitz, then g has bounded variation on any open set within a connected component. Theorem1 will require Assumption5, which is a milder assumption on g0 than piecewise Lipschitz continuity (Definition1). We now present some notation that is needed in order to introduce Assumption5. d For  > 0 small enough, we denote by P a rectangular partition of (0, 1) induced by 0, , 2, . . . , (b1/c− d d d 1), 1, so that all the elements of P have volume of order  . We define Ω2 := [0, 1] \B2(∂[0, 1] ). Then, for a set S ⊂ (0, 1)d, we define

P,S := {A ∩ Ω2\B2 (S): A ∈ P,A ∩ Ω2\B2(S) 6= ∅}; this is the partition induced in Ω2\B2(S) by the grid P. For a function g with domain [0, 1]d, we define

X 1 Z S1(g, P,S ) := sup |g(zA) − g(z)| dz. (8) zA∈A  A∈P,S B(zA)

If g is piecewise Lipschitz, then $S_1(g, P_{\epsilon,S})$ is bounded; see Section S3.1 in the Supplementary Material. Next, we define

$$S_2(g, P_{\epsilon,S}) := \sum_{A \in P_{\epsilon,S}} \sup_{z_A \in A} T_\epsilon(g, z_A)\, \epsilon^d, \qquad (9)$$

where

$$T_\epsilon(g, z_A) = \sup_{z \in B_\epsilon(z_A)} \sum_{l=1}^{d} \left| \int_{\|z'\|_2 \le \epsilon} \frac{\partial \psi(z'/\epsilon)}{\partial z_l} \left\{ \frac{g(z_A - z') - g(z - z')}{\|z - z_A\|_2} \right\} \frac{dz'}{\epsilon^d} \right|, \qquad (10)$$

and ψ is a test function (see Section S1 in the Supplementary Material). Thus, (9) is the summation, over evenly-sized rectangles of volume of order $\epsilon^d$ that intersect $\Omega_{2\epsilon} \setminus B_{2\epsilon}(S)$, of the supremum of the function in (10). The latter, for a function g, can be thought of as the average Lipschitz constant near $z_A$ — see the expression inside curly braces in (10) — weighted by the derivative of a test function. The scaling factor $\epsilon^d$ in (10) arises because the integral is taken over a set of measure proportional to $\epsilon^d$. As with $S_1(g, P_{\epsilon,S})$, one can verify that if g is a piecewise Lipschitz function, then $S_2(g, P_{\epsilon,S})$ is bounded.

We now make use of (8) and (9) in order to state our next condition on $g_0 = f_0 \circ h^{-1}$. This next condition is milder than assuming that $g_0$ is piecewise Lipschitz; see Definition 1.

Assumption 5. Let $\Omega_\epsilon := [0,1]^d \setminus B_\epsilon(\partial [0,1]^d)$. There exists a set $S \subset (0,1)^d$ that satisfies the following:

1. The set S has Lebesgue measure zero.

2. For some constants $C_S, \epsilon_0 > 0$, we have that $\mu\{h^{-1}[B_\epsilon(S) \cup \{(0,1)^d \setminus \Omega_\epsilon\}]\} \le C_S\, \epsilon$ for all $0 < \epsilon < \epsilon_0$.

3. The summations $S_1(g_0, P_{\epsilon,S})$ and $S_2(g_0, P_{\epsilon,S})$ are bounded:

$$\sup_{0 < \epsilon < \epsilon_0} \max\{S_1(g_0, P_{\epsilon,S}), S_2(g_0, P_{\epsilon,S})\} < \infty.$$

Finally, we refer the reader to Section S3 in the Supplementary Material for a discussion of Assumptions 4–5. In particular, we present an example illustrating that the class of piecewise Lipschitz functions is, in general, different from the class of functions for which Assumptions 4–5 hold. However, both classes contain the class of Lipschitz functions, where the latter amounts to S = ∅ in Definition 1.

3.2 Results

Letting $\theta_i^* = f_0(x_i)$, we express the MSEs of K-NN-FL and ε-NN-FL in terms of the total variation of θ* with respect to the K-NN and ε-NN graphs.

Theorem 1. Let $K \asymp \log^{1+2r} n$ for some r > 0. Then under Assumptions 1–3, with an appropriate choice of the tuning parameter λ, the K-NN-FL estimator $\hat\theta$ satisfies

$$\|\hat\theta - \theta^*\|_n^2 = O_{\mathrm{pr}}\!\left( \frac{\log^{1+2r} n}{n} + \frac{\log^{1.5+r} n}{n}\, \|\nabla_{G_K} \theta^*\|_1 \right).$$

This upper bound also holds for the ε-NN-FL estimator with $\epsilon \asymp \log^{(1+2r)/d} n / n^{1/d}$, if we replace $\|\nabla_{G_K} \theta^*\|_1$ with $\|\nabla_{G_\epsilon} \theta^*\|_1$, and with an appropriate choice of λ.

Clearly, the upper bound in Theorem 1 is a function of $\|\nabla_{G_K} \theta^*\|_1$ or $\|\nabla_{G_\epsilon} \theta^*\|_1$, for the K-NN or ε-NN graph, respectively. For the grid graph considered in Sadhanala et al. (2016), $\|\nabla_G \theta^*\|_1 \asymp n^{1-1/d}$, leading to the rate $n^{-1/d}$. However, for a general graph, there is no a priori reason to expect that $\|\nabla_G \theta^*\|_1 \asymp n^{1-1/d}$. Notably, our next result shows that $\|\nabla_G \theta^*\|_1 \asymp n^{1-1/d}$ for $G \in \{G_K, G_\epsilon\}$, under the assumptions discussed in Section 3.1.

Theorem 2. Under Assumptions 1–5, or Assumptions 1–3 and piecewise Lipschitz continuity of $f_0 \circ h^{-1}$, if $K \asymp \log^{1+2r} n$ for some r > 0, then for an appropriate choice of the tuning parameter λ, the K-NN-FL estimator defined in (3) satisfies

$$\|\hat\theta - \theta^*\|_n^2 = O_{\mathrm{pr}}\!\left( \frac{\log^{\alpha} n}{n^{1/d}} \right), \qquad (11)$$

with α = 3r + 5/2 + (2r + 1)/d. Moreover, under Assumptions 1–3 and piecewise Lipschitz continuity of $f_0 \circ h^{-1}$, the estimator $\hat f$ defined in (4) with the K-NN-FL estimator satisfies

$$E_{X \sim p}\!\left[ \{f_0(X) - \hat f(X)\}^2 \right] = O_{\mathrm{pr}}\!\left( \frac{\log^{\alpha} n}{n^{1/d}} \right). \qquad (12)$$

Furthermore, under the same assumptions, (11) and (12) hold for the ε-NN-FL estimator with $\epsilon \asymp \log^{(1+2r)/d} n / n^{1/d}$.

Theorem 2 indicates that under Assumptions 1–5, or Assumptions 1–3 and piecewise Lipschitz continuity of $f_0 \circ h^{-1}$, both the K-NN-FL and ε-NN-FL estimators attain the convergence rate $n^{-1/d}$, ignoring logarithmic terms. Importantly, Theorem 3.2 from Györfi et al. (2006) shows that in the two-dimensional setting, this rate is actually minimax for estimation of Lipschitz continuous functions, when the design points are uniformly drawn from $[0, 1]^2$. Thus, when d = 2 both K-NN-FL and ε-NN-FL are minimax for estimating functions in the class implied by Assumptions 1–5, and also in the class of piecewise Lipschitz functions implied by Assumptions 1–3 and Definition 1. In higher dimensions (d > 2), by the lower bound in Proposition 2 from Castro et al. (2005), we can conclude that K-NN-FL and ε-NN-FL attain nearly minimax rates for estimating piecewise Lipschitz functions, whereas it is unknown if the same is true under Assumptions 1–5. Notably, a different method, similar in spirit to CART, was introduced in Appendix E of Castro et al. (2005). Castro et al. (2005) showed that this approach is also nearly minimax for estimating elements in the class of piecewise Lipschitz functions, although it is unclear whether a computationally feasible implementation of their algorithm is available.

We see from Theorem 2 that both ε-NN-FL and K-NN-FL are locally adaptive, in the sense that they can adapt to the form of the function f0. Specifically, these estimators do not require knowledge of the set S in Assumption 5 or Definition 1. This is similar in spirit to the one-dimensional fused lasso, which does not require knowledge of the breakpoints when estimating a piecewise Lipschitz function. However, there is an important difference between the applicability of Theorem 2 for K-NN-FL and ε-NN-FL. In order to attain the rate in Theorem 2, ε-NN-FL requires knowledge of the dimension d, since this quantity appears in the rate of decay of ε. But in practice, the value of d might not be clear: for instance, suppose that $X = [0,1]^2 \times \{0\}$; this is a subset of $[0,1]^3$, but it is homeomorphic to $[0,1]^2$, so that d = 2. If d is unknown, then it can be challenging to choose ε for ε-NN-FL. By contrast, the choice of K in K-NN-FL only involves the sample size n. Consequently, local adaptivity of K-NN-FL may be much easier to achieve in practice.
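The tuning choices in Theorems 1 and 2 can be summarized in a few lines of code. The snippet below is a hedged illustration: it only records the scalings $K \asymp \log^{1+2r} n$ and $\epsilon \asymp \log^{(1+2r)/d} n / n^{1/d}$, with the constants (set to one) and the value of r being arbitrary choices of ours, and it makes visible the point just discussed, namely that ε, unlike K, requires the intrinsic dimension d.

```python
import numpy as np

def theory_scaled_tuning(n, d, r=0.1):
    """Graph-size choices suggested by Theorems 1-2 (leading constants are
    arbitrary illustrative choices, not taken from the paper)."""
    K = int(np.ceil(np.log(n) ** (1 + 2 * r)))                  # K-NN-FL: needs only n
    eps = np.log(n) ** ((1 + 2 * r) / d) / n ** (1 / d)         # eps-NN-FL: needs d as well
    return K, eps

print(theory_scaled_tuning(n=5000, d=2))
```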

4 Manifold Adaptivity of K-NN-FL

In this section, we allow the observations $\{(x_i, y_i)\}_{i=1}^n$ to be drawn from a mixture distribution, in which each mixture component satisfies the assumptions in Section 3. Under these assumptions, we show that the K-NN-FL estimator can still achieve a desirable rate. We assume

$$y_i = \theta_i^* + \varepsilon_i, \quad i = 1, \dots, n; \qquad \theta_i^* = f_{0,z_i}(x_i); \qquad x_i \sim p_{z_i}(x); \qquad \mathrm{pr}(z_i = l) = \pi_l^*, \ \ l = 1, \dots, \ell, \qquad (13)$$

where ε satisfies (7), $\pi_l^* \in [0, 1]$ with $\sum_{l=1}^{\ell} \pi_l^* = 1$, $p_l$ is a density with support $X_l \subset X$, $f_{0,l} : X_l \to \mathbb{R}$, and $\{X_l\}_{l=1,\dots,\ell}$ is a collection of subsets of X. For simplicity, we will assume that $X \subset \mathbb{R}^d$ for some d > 1, and dX is the Euclidean distance. In (13), the observed data are $\{(x_i, y_i)\}_{i=1}^n$. The remaining ingredients in (13) are either latent or unknown. We further assume that each set $X_l$ is homeomorphic to a Euclidean box of dimension depending on l, as follows:

Assumption 6. For $l = 1, \dots, \ell$, the set $X_l$ satisfies Assumptions 1–3 with metric given by dX, with dimension $d_l \in \mathbb{N} \setminus \{0, 1\}$, and with µ equal to some measure $\mu_l$. In addition:

1. There exists a positive constant $\tilde{c}_l$ such that the set $\partial X_l := \bigcup_{l' \neq l} X_{l'} \cap X_l$ satisfies

$$\mu_l\{B_\epsilon(\partial X_l) \cap X_l\} \le \tilde{c}_l\, \epsilon, \qquad (14)$$

for any small enough ε > 0.

2. There exists a positive constant rl such that for any x ∈ Xl, either

$$\inf_{x'' \in \partial X_l} d_X(x, x'') < d_X(x, x') \quad \text{for all } x' \in X \setminus X_l, \qquad (15)$$

or $B_\epsilon(x) \subset X_l$ for all $\epsilon < r_l$.

The constraints implied by Assumption 6 are very natural. Inequality (14) states that the intersections of the manifolds $X_1, \dots, X_\ell$ are small. To put this in perspective, if the extrinsic space X were $[0, 1]^d$ with the Lebesgue measure, then balls of radius ε would have measure $\epsilon^d$, which is less than ε for all d > 1, and the set $B_\epsilon(\partial [0,1]^d) \cap [0,1]^d$ would have measure that scales like ε, which is the same scaling appearing in (14). Furthermore, (15) holds if $X_1, \dots, X_\ell$ are compact and convex subsets of $\mathbb{R}^d$ whose interiors are disjoint. We are now ready to extend Theorem 2 to the framework described in this section.

Theorem 3. Suppose that the data are generated as in (13), and Assumption 6 holds. Suppose also that the functions $f_{0,1}, \dots, f_{0,\ell}$ either satisfy Assumptions 4–5 or are piecewise Lipschitz in the domain $X_l$. Then for an appropriate choice of the tuning parameter λ, the K-NN-FL estimator defined in (3) satisfies

$$\|\hat\theta - \theta^*\|_n^2 = O_{\mathrm{pr}}\!\left\{ \mathrm{poly}(\log n) \sum_{l=1}^{\ell} \frac{\pi_l^*}{(\pi_l^* n)^{1/d_l}} \right\},$$

provided that $n \min\{\pi_l^* : l \in [\ell]\} \ge c_0 n^{r_0}$, and $K \asymp \log^{1+2r} n$ for some constants $c_0, r_0, r > 0$, where poly(·) is a polynomial function. Here, the $\pi_l^*$'s are allowed to change with n.

Notice that when $d_l = d$ for all $l \in [\ell]$ in Theorem 3, we obtain, ignoring logarithmic factors, the rate $n^{-1/d}$, which is minimax when the functions $f_{0,l}$ are piecewise Lipschitz. The rate is also minimax when d = 2 and the functions $f_{0,l}$ satisfy Assumptions 4–5. In addition, our rates can be compared with the existing literature on manifold adaptivity. Specifically, when d = 2, the rate $n^{-1/2}$ is attained by local polynomial regression (Bickel and Li, 2007) and Gaussian process regression (Yang and Dunson, 2016) for the class of differentiable functions with bounded partial derivatives, and by K-NN regression for Lipschitz functions (Kpotufe, 2011). In higher dimensions, Bickel and Li (2007), Yang and Dunson (2016), and Kpotufe (2011) attain better rates than $n^{-1/d}$ on smaller classes of functions that do not allow for discontinuities. Finally, we refer the reader to Section S11 of the Supplementary Material for an example suggesting that the ε-NN-FL estimator may not be manifold adaptive.
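For intuition about model (13), the following sketch generates data from a two-component mixture with supports of mixed intrinsic dimension embedded in $\mathbb{R}^3$. The particular supports, regression functions, weights, and noise level are arbitrary choices made for illustration; they are not taken from the paper or its Supplementary Material.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_mixed_manifolds(n, pi=(0.6, 0.4), sigma=0.3):
    """Illustrative draw from model (13): component 1 lives on the 2-D square
    [0,1]^2 x {0} (intrinsic dimension 2), component 2 on the 3-D cube [0,1]^3
    (intrinsic dimension 3), both inside the ambient space R^3."""
    z = rng.choice(2, size=n, p=pi)                # latent component labels
    X = rng.uniform(0.0, 1.0, size=(n, 3))
    X[z == 0, 2] = 0.0                             # flatten component 1 onto the square
    # Simple piecewise-constant component regression functions f_{0,l}.
    f = np.where(z == 0, (X[:, 0] > 0.5).astype(float), 2.0 * (X[:, 1] > 0.5))
    y = f + rng.normal(0.0, sigma, n)
    return X, y, z
```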

5 Experiments

Throughout this section, we take dX to be Euclidean distance.

5.1 Simulated Data In this section, we compare the following approaches:

• ε-NN-FL, with ε held fixed, and λ treated as a tuning parameter.

• K-NN-FL, with K held fixed, and λ treated as a tuning parameter.

• CART (Breiman et al., 1984), implemented in the R package rpart, with the complexity parameter treated as a tuning parameter.

• MARS (Friedman, 1991), implemented in the R package earth, with the penalty parameter treated as a tuning parameter.

• Random forests (Breiman, 2001), implemented in the R package randomForest, with the number of trees fixed at 800, and with the minimum size of each terminal node treated as a tuning parameter.


Figure 3: (a): A scatterplot of data generated under Scenario 1. The vertical axis displays f0(xi), while the other two axes display the two covariates. (b): Optimized MSE (averaged over 150 Monte Carlo simulations) of competing methods under Scenario 1. Here $\epsilon_1 = \frac{3}{4}(\log n/n)^{1/2}$ and $\epsilon_2 = (\log n/n)^{1/2}$. (c): Computational time (in seconds) for Scenario 1, averaged over 150 Monte Carlo simulations. (d): Optimized MSE (averaged over 150 Monte Carlo simulations) of competing methods under Scenario 2.

• K-NN regression (e.g. Stone, 1977), implemented in Matlab using the function knnsearch, with K treated as a tuning parameter.

We evaluate each method's performance using the MSE, as defined in Section 3.1. Specifically, we apply each method to 150 Monte Carlo data sets with a range of tuning parameter values. For each method, we then identify the tuning parameter value that leads to the smallest average MSE over the 150 data sets. We refer to this smallest average MSE as the optimized MSE in what follows. In our first two scenarios we consider d = 2 covariates, and let the sample size n vary.

Scenario 1. The function $f_0 : [0,1]^2 \to \mathbb{R}$ is piecewise constant,

$$f_0(x) = 1_{\left\{ t \in \mathbb{R}^2 : \left\| t - \frac{3}{4}(1,1)^T \right\|_2 < \left\| t - \frac{1}{2}(1,1)^T \right\|_2 \right\}}(x). \qquad (16)$$


Figure 4: (a): Optimized MSE, averaged over 150 Monte Carlo simulations, for Scenario 3. (b): Optimized MSE, averaged over 150 Monte Carlo simulations, for Scenario 4. In both Scenario 3 and Scenario 4, $\epsilon_1$ is chosen to be the largest value such that the total number of edges in the graph $G_{\epsilon_1}$ is at most 50000.

The covariates are drawn from a uniform distribution on $[0, 1]^2$. The data are generated as in (1) with N(0, 1) errors.

Scenario 2. The function $f_0 : [0,1]^2 \to \mathbb{R}$ is as in (6), with generative density for X as in (5). The data are generated as in (1) with N(0, 1) errors.

Data generated under Scenario 1 are displayed in Figure 3(a). Data generated under Scenario 2 are displayed in Figure 1(b). Figure 3(b) and Figure 3(d) display the optimized MSE, as a function of the sample size, for Scenarios 1 and 2, respectively. K-NN-FL gives the best results in both scenarios. ε-NN-FL performs a bit worse than K-NN-FL in Scenario 1, and very poorly in Scenario 2 (results not shown). Timing results for all approaches under Scenario 1 are reported in Figure 3(c). For all methods, the times reported are averaged over a range of tuning parameter values. For instance, for K-NN-FL, we fix K and compute the time for different choices of λ; we then report the average of those times.

For our next two scenarios, we consider n = 8000 and values of d in {2, . . . , 25}.

Scenario 3. The function $f_0 : [0,1]^d \to \mathbb{R}$ is defined as

$$f_0(x) = \begin{cases} 1 & \text{if } \left\| x - \frac{1}{4} 1_d \right\|_2 < \left\| x - \frac{3}{4} 1_d \right\|_2, \\ -1 & \text{otherwise}, \end{cases}$$

and the density p is uniform in $[0, 1]^d$. The data are generated as in (1) with $\varepsilon_i \overset{\mathrm{ind}}{\sim} N(0, 0.3)$.

Scenario 4. The function $f_0 : [0,1]^d \to \mathbb{R}$ is defined as

$$f_0(x) = \begin{cases} 2 & \text{if } \|x - q_1\|_2 < \min\{\|x - q_2\|_2, \|x - q_3\|_2, \|x - q_4\|_2\}, \\ 1 & \text{if } \|x - q_2\|_2 < \min\{\|x - q_1\|_2, \|x - q_3\|_2, \|x - q_4\|_2\}, \\ 0 & \text{if } \|x - q_3\|_2 < \min\{\|x - q_1\|_2, \|x - q_2\|_2, \|x - q_4\|_2\}, \\ -1 & \text{otherwise}, \end{cases}$$

where $q_1 = \left( \frac{1}{4} 1_{\lfloor d/2 \rfloor}^T, \frac{1}{2} 1_{d - \lfloor d/2 \rfloor}^T \right)^T$, $q_2 = \left( \frac{1}{2} 1_{\lfloor d/2 \rfloor}^T, \frac{1}{4} 1_{d - \lfloor d/2 \rfloor}^T \right)^T$, $q_3 = \left( \frac{3}{4} 1_{\lfloor d/2 \rfloor}^T, \frac{1}{2} 1_{d - \lfloor d/2 \rfloor}^T \right)^T$, and

13 Figure 5: Results for the flu data. “Normalized Prediction Error” was obtained by dividing each method’s test set prediction error by the test set prediction error of K-NN-FL, with K = 7.

$q_4 = \left( \frac{1}{2} 1_{\lfloor d/2 \rfloor}^T, \frac{3}{4} 1_{d - \lfloor d/2 \rfloor}^T \right)^T$. Once again, the generative density for X is uniform in $[0, 1]^d$. The data are generated as in (1) with $\varepsilon_i \overset{\mathrm{ind}}{\sim} N(0, 0.3)$.

Optimized MSE for each approach is displayed in Figure 4. When d is small, most methods perform well; however, as d increases, the competing methods quickly deteriorate, whereas K-NN-FL continues to perform well.
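As an illustration, the next sketch generates data under Scenario 3; N(0, 0.3) is read here as a standard deviation of 0.3, which is our interpretation rather than something stated explicitly.

```python
import numpy as np

rng = np.random.default_rng(2)

def scenario3(n=8000, d=10, sigma=0.3):
    """Scenario 3: f0 is +1 on the half of [0,1]^d closer to (1/4)1_d than to
    (3/4)1_d and -1 otherwise, with uniform covariates and Gaussian noise."""
    X = rng.uniform(0.0, 1.0, size=(n, d))
    closer_to_q1 = np.linalg.norm(X - 0.25, axis=1) < np.linalg.norm(X - 0.75, axis=1)
    f = np.where(closer_to_q1, 1.0, -1.0)
    return X, f + rng.normal(0.0, sigma, n)
```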

5.2 Flu Data

Our data consist of flu activity and atmospheric conditions between January 1, 2003 and December 31, 2009 across different cities in Texas. Our data-use agreement does not permit dissemination of the flu activity, which comes from medical records. The atmospheric conditions, which include temperature and air quality (particulate matter), can be obtained directly from CDC Wonder (http://wonder.cdc.gov/). Using the number of flu-related doctor's office visits as the dependent variable, we fit a separate non-parametric regression model to each of 24 cities; each day was treated as a separate observation, so that the number of samples is n = 2556 in each city. Five independent variables are included in the regression: maximum and average observed concentration of particulate matter, maximum and minimum temperature, and day of the year. All variables are scaled to lie in [0, 1]. We performed 50 random 75%/25% splits of the data into a training set and a test set. All models were fit on the training data, with 5-fold cross-validation to select tuning parameter values. Then prediction performance was evaluated on the test set. We apply K-NN-FL with K ∈ {5, 7}, and ε-NN-FL with $\epsilon = j/n^{1/d}$ for j ∈ {1, 2, 3}, which is motivated by Theorem 2; larger choices of ε led to worse performance. We also fit neural networks (Hagan et al., 1996; implemented in Matlab using the functions newfit and train), thin plate splines (Duchon, 1977; implemented using the R package fields), and MARS, CART, and random forests, as described in Section 5.1. Average test set prediction error (across the 50 test sets) is displayed in Figure 5. We see that K-NN-FL and ε-NN-FL have the best performances. In particular, K-NN-FL performs best in 13 out of the 24 cities, and second best in 6 cities. In 8 of the 24 cities, ε-NN-FL performs best. We contend that K-NN-FL achieves superior performance because it adapts to heterogeneity in the density of design points, p (manifold adaptivity), and heterogeneity in the smoothness of the regression

14 function, f0 (local adaptivity). In our theoretical results, we have substantiated this contention through prediction error rate bounds for a large class of regression functions of heterogeneous smoothness and a large class of underlying measures with heterogeneous intrinsic dimensionality. Our experiments demonstrate that these theoretical advantages translate into practical performance gains.
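A minimal sketch of the evaluation protocol just described (one 75%/25% split, 5-fold cross-validation over λ on the training set, then test-set error) is given below. It reuses the hypothetical knn_fused_lasso and knn_fl_predict sketches from Section 2.1 and scikit-learn utilities; it is not the Matlab/R pipeline actually used for the flu analysis.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def tune_and_evaluate(X, y, lambdas, K=7, seed=0):
    """One train/test split, 5-fold CV to choose lambda, then test-set MSE.
    Assumes the knn_fused_lasso / knn_fl_predict sketches are available."""
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
    cv_err = np.zeros(len(lambdas))
    for tr_idx, va_idx in KFold(n_splits=5, shuffle=True, random_state=seed).split(X_tr):
        for j, lam in enumerate(lambdas):
            theta = knn_fused_lasso(X_tr[tr_idx], y_tr[tr_idx], K=K, lam=lam)
            pred = knn_fl_predict(X_tr[tr_idx], theta, X_tr[va_idx], K=K)
            cv_err[j] += np.mean((pred - y_tr[va_idx]) ** 2)
    lam_best = lambdas[int(np.argmin(cv_err))]
    theta = knn_fused_lasso(X_tr, y_tr, K=K, lam=lam_best)
    test_err = np.mean((knn_fl_predict(X_tr, theta, X_te, K=K) - y_te) ** 2)
    return lam_best, test_err
```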

A Notation

Throughout, given $m \in \mathbb{N}$, we denote by [m] the set $\{1, \dots, m\}$. For $a \in \mathcal{A}$ and $A \subset \mathcal{A}$, with $(\mathcal{A}, d_{\mathcal{A}})$ a metric space, we write
$$d_{\mathcal{A}}(a, A) = \inf_{b \in A} d_{\mathcal{A}}(a, b).$$
In the case of the space $\mathbb{R}^d$, we will use the notation dist(·, ·) for the metric induced by the norm $\|\cdot\|_\infty$, and we will write $B_\epsilon(x)$ for the ball $B_\epsilon(x, \|\cdot\|_2)$. We use the notation $\|\cdot\|_n$ for the rescaling of the $\|\cdot\|_2$ norm, such that for $x \in \mathbb{R}^n$,
$$\|x\|_n^2 = \frac{1}{n} \sum_{i=1}^{n} x_i^2.$$
Furthermore, we write
$$\Omega_\epsilon = [0,1]^d \setminus B_\epsilon(\partial [0,1]^d). \qquad (17)$$
Thus, $\Omega_\epsilon$ is the set of points in the interior of $[0,1]^d$ such that balls of radius ε with center at such points are also contained in $[0,1]^d$.

(  −1  C1 exp 1−kxk2 if kxk2 < 1, ψ(z) = 2 (18) 0 otherwise, R is called a test function if C1 is chosen such that ψ(z) dz = 1. We also denote, for s ∈ N, by 1s the vector T s s t T T T s+t 1s := (1,..., 1) ∈ R . Moreover, for vectors u ∈ R and v ∈ R , we write w = (u , v ) ∈ R for the concatenation of u and v. Furthermore, we denote by L1(Ω) the set of Lebesgue measurable functions f :Ω → R such that Z |f(x)|µ(dx) < ∞, Ω where µ is the Lebesgue measure in Ω. We also denote by L∞(Ω) the set of measurable functions f :Ω → R, such that |f| is bounded except perhaps in set of µ-measure zero. The supremum norm in L∞(Ω) is denoted by k · kL∞(Ω).

B Definition of Bounded Variation

We now review some notation regarding general spaces of functions of bounded variation. Let $\Omega \subset \mathbb{R}^d$ be an open set. Recall that the divergence of a function $g : \Omega \to \mathbb{R}^d$ is defined as
$$\mathrm{div}(g)(x) = \sum_{j=1}^{d} \frac{\partial g_j(x)}{\partial x_j}, \quad \forall x \in \Omega,$$
provided that the partial derivatives involved exist.

Definition 2. A function $f : \Omega \to \mathbb{R}$ has bounded variation if
$$|f|_{\mathrm{BV}(\Omega)} := \sup\left\{ \int_\Omega f(x)\, \mathrm{div}(g)(x)\, dx \; ; \; g \in C_c^1(\Omega, \mathbb{R}^d),\ \|g\|_\infty \le 1 \right\}$$
is finite, where
$$\|g\|_\infty := \left\| \left( \sum_{j=1}^{d} g_j^2 \right)^{1/2} \right\|_{L_\infty(\Omega)}.$$
The space of functions of bounded variation on Ω is denoted by BV(Ω). We refer the reader to Ziemer (2012) for a full description of the class of functions of bounded variation.

C Discussion of Assumptions 4–5

C.1 Piecewise Lipschitz Condition and Assumption 5

To better understand the definition of $S_1(g, P_{\epsilon,S})$ (see (8) in the main paper), notice that we can view $S_1(g, P_{\epsilon,S})$ as a discretization over the set $\Omega_{2\epsilon} \setminus B_{2\epsilon}(S)$ of the integral of the function
$$m(z_A, \epsilon) = \frac{1}{\epsilon^{d+1}} \int_{B_\epsilon(z_A)} |g(z_A) - g(z)|\, dz. \qquad (19)$$

Recall that $z_A$ is a Lebesgue point of g if $\lim_{\epsilon \to 0} \epsilon\, m(z_A, \epsilon) = 0$ (see Definition 2.18 in Giaquinta and Modica, 2010). Furthermore, if $g \in L_1(\mathbb{R}^d)$ then almost all points are Lebesgue points. This follows from the Lebesgue differentiation theorem; see Theorem 2.16 in Giaquinta and Modica (2010). Thus, if $g \in L_1(\mathbb{R}^d)$, then loosely speaking each term in the right hand side of (8) in the main paper is $O(\epsilon^{d-1})$, for any configuration of points $z_A$. In Assumption 5, we further require $S_1(f_0 \circ h^{-1}, P_{\epsilon,S})$ to be bounded.

Next, we show that piecewise Lipschitz continuity implies Assumption 5. First, note that if g is piecewise Lipschitz, then $S_1(g, P_{\epsilon,S})$, defined in (8) of the paper, is bounded. To see this, assume that g is piecewise Lipschitz with the set S. Then

$$S_1(g, P_{\epsilon,S}) \le \sum_{A \in P_{\epsilon,S}} \sup_{z_A \in A} \int_{B_\epsilon(z_A)} \frac{|g(z_A) - g(z)|}{\|z_A - z\|_2}\, dz \le \frac{L_0\, \pi^{d/2}}{\Gamma\!\left(\frac{d}{2} + 1\right)} < \infty, \qquad (20)$$

where the second-to-last inequality follows from the fact that the volume of a d-dimensional ball of radius ε is $\pi^{d/2} \epsilon^d / \Gamma(d/2 + 1)$, as well as the fact that there are at most on the order of $\epsilon^{-d}$ elements of $P_{\epsilon,S}$. Furthermore, with a similar argument to that in (20), we can show that if g is piecewise Lipschitz, then $S_2(g, P_{\epsilon,S})$ is bounded. Therefore, if $f_0 \circ h^{-1}$ is piecewise Lipschitz, then $f_0 \circ h^{-1}$ satisfies Assumption 5.

C.2 Generic Functions that Do Not Satisfy Definition 1

We now present a general condition on f0 that implies that f0 is not piecewise Lipschitz. Let $f_0 : [0,1]^d \to \mathbb{R}$ be such that f0 is differentiable in $(0,1)^d$. Suppose that the gradient of f0 is continuous and unbounded. Then, we claim that f0 is not piecewise Lipschitz. To see this, notice that there exists $t^{(0)} \in [0,1]^d$ such that
$$\lim_{t \to t^{(0)}} \|\nabla f_0(t)\|_\infty = \infty.$$

Hence, without loss of generality,

$$\lim_{t \to t^{(0)}} \frac{\partial f_0(t)}{\partial t_1} = \infty.$$

Now, let $S \subset [0,1]^d$ satisfy all but the third condition of Definition 1. Then there exist a positive decreasing sequence $\{\epsilon_m\}_{m=1}^\infty$, and $t^{(m,1)}, t^{(m,2)} \in \{(0,1)^d \setminus B_{\epsilon_m}(S)\} \cap B_{C \epsilon_m^{1/d}}(t^{(0)})$ such that $\|t^{(m,1)} - t^{(m,2)}\|_2 < C \epsilon_m^{1/d} < 1$ for some constant C, $t_j^{(m,1)} = t_j^{(m,2)}$ for $j \in \{2, \dots, d\}$, and such that the segment connecting $t^{(m,1)}$ to $t^{(m,2)}$ does not contain elements of $B_{\epsilon_m}(S)$. Hence, by the mean value theorem,

$$\frac{|f_0(t^{(m,1)}) - f_0(t^{(m,2)})|}{\|t^{(m,1)} - t^{(m,2)}\|_2} = \left| \frac{\partial f_0(t^{(m,1,2)})}{\partial t_1} \right|,$$

where $t^{(m,1,2)}$ is a point in the segment connecting $t^{(m,1)}$ and $t^{(m,2)}$. By the continuity of $\nabla f_0$, we can make the right-hand side of the previous display as large as desired. Therefore, the function f0 is not piecewise Lipschitz.

C.3 Assumptions 4–5 and Definition 1 Induce Different Function Classes

We start by providing a stronger condition than Assumption 5. Specifically, assume that for small enough ε it holds that
$$\sum_{A \in P_{\epsilon,\emptyset}} \mathrm{Vol}(A) \sup_{z \in B_{2\epsilon}(A)} \|\nabla f_0(z)\|_1 < C, \qquad (21)$$
for a constant C. Then we claim that f0 satisfies Assumption 5. To verify this, we note that by the mean value theorem,
$$S_1(f_0, P_{\epsilon,\emptyset}) \le C_2 \sum_{A \in P_{\epsilon,\emptyset}} \mathrm{Vol}(A) \sup_{z \in B_\epsilon(A)} \|\nabla f_0(z)\|_1 < C_2\, C, \qquad (22)$$
for some constant $C_2 > 0$. Moreover,

$$T_\epsilon(f_0, z_A) \le \sup_{z \in B_{2\epsilon}(z_A)} \frac{\|\nabla f_0(z)\|_1}{\epsilon^d}\, \mathrm{Vol}\{B_\epsilon(0)\}\, d\, \|\nabla \psi\|_\infty \le \mathrm{Vol}(A) \sup_{z \in B_{2\epsilon}(A)} \frac{\|\nabla f_0(z)\|_1}{\epsilon^d}\, d\, \|\nabla \psi\|_\infty,$$

which implies that for some constant $C_3 > 0$,
$$S_2(f_0, P_{\epsilon,\emptyset}) \le C_3 \sum_{A \in P_{\epsilon,\emptyset}} \mathrm{Vol}(A) \sup_{z \in B_{2\epsilon}(A)} \|\nabla f_0(z)\|_1 < C_3\, C. \qquad (23)$$

Therefore, combining (22) and (23), we obtain that the condition defined in (21) implies Assumption 5. Next, we exploit (21) to show that Assumptions 4–5 and Definition 1 induce different classes of functions.

Example 1. Consider the function $f_0(x) = \sum_{j=1}^{d} \sqrt{x_j}$, $x \in [0,1]^d$. Then f0 satisfies Assumptions 4–5. However, f0 is not piecewise Lipschitz.

To verify Example 1, we check that (21) holds for $f_0 : [0,1]^d \to \mathbb{R}$ defined as $f_0(x) := \sum_{j=1}^{d} \sqrt{x_j}$.

To that end, notice that for some constant $C_4 > 0$, we have that
$$\begin{aligned}
\sum_{A \in P_{\epsilon,\emptyset}} \mathrm{Vol}(A) \sup_{z \in B_{2\epsilon}(A)} \|\nabla f_0(z)\|_1
&\le C_4 \left[ \sum_{j_1=1}^{1/\epsilon} \cdots \sum_{j_d=1}^{1/\epsilon} \epsilon^d \left( \frac{1}{\sqrt{j_1 \epsilon}} + \dots + \frac{1}{\sqrt{j_d \epsilon}} \right) \right] \\
&\le C_4\, d\, \sqrt{\epsilon} \sum_{j_1=1}^{1/\epsilon} \frac{1}{\sqrt{j_1}} \\
&\le C_4\, d\, \sqrt{\epsilon} \sum_{j_1=1}^{1/\epsilon} \frac{2}{\sqrt{j_1} + \sqrt{j_1 - 1}} \\
&\le C_4\, d\, \sqrt{\epsilon} \sum_{j_1=1}^{1/\epsilon} 2\left[ \sqrt{j_1} - \sqrt{j_1 - 1} \right] \\
&\le 2\, C_4\, d\, \sqrt{\epsilon}\, \sqrt{\frac{1}{\epsilon}} \le 2\, C_4\, d.
\end{aligned}$$

Furthermore, it is straightforward to check that f0 has bounded variation, and hence f0 satisfies Assumption 4. However,

$$\lim_{t \to 0} \frac{\partial f_0(t)}{\partial t_1} = \infty,$$
which by the argument in Section C.2 implies that f0 is not piecewise Lipschitz.

D Outline of the Proof of Theorem 1

We start by discussing how to embed a mesh on a K-NN graph generated under Assumptions 1–3. This construction will then be used in the next sections in order to derive an upper bound for the MSE of the K-NN-FL estimator. The embedding idea that we will present appeared in the flow-based proof of Theorem 4 from Von Luxburg et al. (2014) regarding commute distance on K-NN graphs. There, the authors introduced the notion of a valid grid (Definition 17). For a fixed set of design points, a grid graph G is valid if, among other things, G satisfies the following: (i) the grid width is not too small, in the sense that each cell of the grid contains at least one of the design points; (ii) the grid width is not too large: points in the same or neighboring cells of the grid are always connected in the K-NN graph. The notion of a valid grid was introduced for fixed design points, but through a minor modification, we construct a grid graph that with high probability satisfies the conditions of a valid grid from Von Luxburg et al. (2014). We now proceed to construct a grid embedding that for any signal will lead to a lower bound on the total variation along the K-NN graph.

Given $N \in \mathbb{N}$, in $[0,1]^d$ we construct a d-dimensional grid graph $G_{\mathrm{lat}} = (V_{\mathrm{lat}}, E_{\mathrm{lat}})$, i.e., a lattice graph, with equal side lengths, and total number of nodes $|V_{\mathrm{lat}}| = N^d$. Without loss of generality, we assume that the nodes of the grid correspond to the points

$$P_{\mathrm{lat}}(N) = \left\{ \left( \frac{i_1}{N} - \frac{1}{2N}, \dots, \frac{i_d}{N} - \frac{1}{2N} \right) : i_1, \dots, i_d \in \{1, \dots, N\} \right\}. \qquad (24)$$

Notice that $P_{\mathrm{lat}}(N)$ is the set of centers of the elements of the partition $P_{N^{-1}}$, with the notation of Section 3.1 in the main paper. Moreover, $z, z' \in P_{\mathrm{lat}}(N)$ share an edge in the graph $G_{\mathrm{lat}}(N)$ if and only if $\|z - z'\|_2 = N^{-1}$. If the nodes corresponding to $z, z'$ share an edge, then we will write $(z, z') \in E_{\mathrm{lat}}(N)$. Note that the lattice $G_{\mathrm{lat}}(N)$ is constructed in $[0,1]^d$ and not in the set X. However, this lattice can be transformed into a mesh in the covariate space through the homeomorphism from Assumption 3, by $I(N) = h^{-1}\{P_{\mathrm{lat}}(N)\}$. We can use I(N) to perform a quantization in the domain X by using the cells associated with I(N). See Alamgir et al. (2014) for more general aspects of quantizations under Assumptions 1–3.

Using this mesh I(N), which depends on the homeomorphism h, for any signal $\theta \in \mathbb{R}^n$ we can construct two vectors, denoted by $\theta_I \in \mathbb{R}^n$ and $\theta^I \in \mathbb{R}^{N^d}$. The former ($\theta_I$) is a signal vector that is constant within mesh cells. The latter ($\theta^I$) has coordinates corresponding to the different nodes of the mesh (centers of cells). The precise definitions of $\theta_I$ and $\theta^I$ are given in Section E. Since θ and $\theta_I$ have the same dimension, it is natural to ask how these two relate to each other, at least for the purpose of understanding the empirical process associated with the K-NN-FL estimator. Moreover, given that $\theta^I \in \mathbb{R}^{N^d}$, one can try to relate the total variation of $\theta^I$ along a d-dimensional grid with $N^d$ nodes, with the total variation of the original signal θ along the K-NN graph. We proceed to establish these connections next.

Lemma 4. Assume that K is chosen such that $K/\log n \to \infty$. Then with high probability the following holds. Under Assumptions 1–3, there exists an $N \asymp (n/K)^{1/d}$ such that for the corresponding mesh I(N) we have that
$$|e^T(\theta - \theta_I)| \le 2\, \|e\|_\infty\, \|\nabla_{G_K} \theta\|_1, \quad \forall \theta \in \mathbb{R}^n, \qquad (25)$$
for all $e \in \mathbb{R}^n$. Moreover,
$$\|D \theta^I\|_1 \le \|\nabla_{G_K} \theta\|_1, \quad \forall \theta \in \mathbb{R}^n, \qquad (26)$$
where D is the incidence matrix of a d-dimensional grid graph $G_{\mathrm{grid}} = (V_{\mathrm{grid}}, E_{\mathrm{grid}})$ with $V_{\mathrm{grid}} = [N^d]$.

Lemma 4 immediately provides a path to control the empirical process associated with the K-NN-FL estimator. In particular, by the basic inequality argument (see, for instance, Wang et al. 2016), it is of interest to bound the quantity $\varepsilon^T(\hat\theta - \theta^*)$. In our context, this can be done by noticing that

$$\frac{1}{n} \varepsilon^T(\hat\theta - \theta^*) = \frac{1}{n} \varepsilon^T(\hat\theta - \hat\theta_I) + \frac{1}{n} \varepsilon^T(\hat\theta_I - \theta_I^*) + \frac{1}{n} \varepsilon^T(\theta_I^* - \theta^*). \qquad (27)$$

Hence in the proof of our main theorem stated in Section G, we proceed to bound each term in the right hand side of (27). Furthermore, Lemma 4 provides lower bounds involving $\theta_I$ and $\theta^I$, both of which depend on the homeomorphism h. However, we only need to specify K, not h. In other words, we can avoid constructing the mesh I(N), which would require knowledge of the unknown function h.

E Explicit Construction of Mesh for K-NN Embeddings

We now describe in detail the embedding idea from Section D. Importantly, I(N) (defined in Section D) can be thought of as a quantization in the domain X; see Alamgir et al. (2014) for more general aspects of quantizations under Assumptions 1–3. For our purposes, it will be crucial to understand the behavior of $\{x_i\}_{i=1}^n$ and their relationship to I(N). This is because we will use a grid embedding in order to analyze the behavior of the K-NN-FL estimator $\hat\theta$. With that goal in mind, we define a collection of cells, $\{C(x)\}_{x \in I(N)}$, in X as

$$C(x) = h^{-1}\!\left( \left\{ z \in [0,1]^d : h(x) = \underset{x' \in P_{\mathrm{lat}}(N)}{\arg\min} \|z - x'\|_\infty \right\} \right). \qquad (28)$$

Recall that the goal in this paper is to estimate $\theta^* \in \mathbb{R}^n$. However, the mesh construction I(N) has $N^d$ elements, which we denote by $u_1, \dots, u_{N^d}$. Hence, it is not immediately clear how to evaluate the total variation of θ* along the graph corresponding to the mesh.

We first define $\theta_I$, a vector constructed from θ that incorporates information about the samples $\{x_i\}_{i=1}^n$ and the cells $\{C(x)\}_{x \in I(N)}$. For x ∈ X we write $P_I(x)$ for the point in I(N) such that $x \in C(P_I(x))$. If there is more than one such point, then we arbitrarily pick one. Then we collapse measurements corresponding to different observations $x_i$ that fall in the same cell in $\{C(x)\}_{x \in I(N)}$ into a single value associated with the observation closest to the center of the cell after mapping with the homeomorphism. Thus, for a given $\theta \in \mathbb{R}^n$, we define $\theta_I \in \mathbb{R}^n$ as

$$(\theta_I)_i = \theta_j \quad \text{where} \quad j = \underset{l \in [n]}{\arg\min} \|h(P_I(x_i)) - h(x_l)\|_\infty. \qquad (29)$$

Next we construct $\theta^I$, a mapping from $\mathbb{R}^n$ to $\mathbb{R}^{N^d}$. For any $\theta \in \mathbb{R}^n$, we can induce a signal in $\mathbb{R}^{N^d}$ corresponding to the elements in I(N). We write

$$I_j = \{i \in [n] : P_I(x_i) = u_j\}, \quad j \in [N^d],$$

where as before $u_1, \dots, u_{N^d}$ are the elements of I(N). If $I_j \neq \emptyset$, then there exists $i_j \in I_j$ such that $(\theta_I)_i = \theta_{i_j}$ for all $i \in I_j$. Here $\theta_I$ is the vector defined in (29). Using this notation, for a vector $\theta \in \mathbb{R}^n$, we write $\theta^I = (\theta_{i_1}, \dots, \theta_{i_{N^d}})$. We use the convention that $\theta_{i_j} = 0$ if $I_j = \emptyset$.

Hence, for a given $\theta \in \mathbb{R}^n$, we have constructed two very intuitive signals $\theta_I \in \mathbb{R}^n$ and $\theta^I \in \mathbb{R}^{N^d}$. The former forces covariates $x_i$ in the same cell to take the same signal value. The latter has coordinates corresponding to the different nodes of the mesh (centers of cells).
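For intuition, the following sketch carries out the construction of $\theta_I$ and $\theta^I$ in the special case $X = [0,1]^d$ with h the identity map, so that the cells are the axis-aligned boxes centered at the points of $P_{\mathrm{lat}}(N)$; the helper name mesh_signals is ours, and this code plays no role in the proofs.

```python
import numpy as np

def mesh_signals(X, theta, N):
    """theta_I / theta^I for X in [0,1]^d with h the identity map: assign each
    x_i to the grid cell whose center is nearest in sup-norm, copy within each
    nonempty cell the value of the observation closest to that center (as in
    (29)), and record one value per grid cell (0 for empty cells)."""
    n, d = X.shape
    cell_idx = np.minimum((X * N).astype(int), N - 1)      # per-coordinate cell index
    centers = (cell_idx + 0.5) / N                         # cell centers, i.e. P_I(x_i)
    labels = np.ravel_multi_index(cell_idx.T, (N,) * d)    # flat cell label per point

    theta_I = np.empty(n)
    theta_grid = np.zeros(N ** d)                          # theta^I
    for c in np.unique(labels):
        members = np.where(labels == c)[0]
        center = centers[members[0]]
        # Representative observation: closest to the cell center in sup-norm.
        rep = int(np.argmin(np.max(np.abs(X - center), axis=1)))
        theta_I[members] = theta[rep]
        theta_grid[c] = theta[rep]
    return theta_I, theta_grid
```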

F Auxiliary Lemmas for Proof of Theorem 1

In several of the lemmas we will present next, we will implicitly condition on one of the observations, for example $x_1$. When doing so, we will exploit the fact that the other observations are independent, as by our model assumption $x_i \overset{\mathrm{ind}}{\sim} p(x)$, i = 1, . . . , n. The following lemma is a well-known concentration inequality for binomial random variables. It can be found as Proposition 27 in Von Luxburg et al. (2014).

Lemma 5. Let m be a Binomial(M, q) random variable. Then, for all δ ∈ (0, 1],

$$P\{m \le (1 - \delta) M q\} \le \exp\!\left( -\tfrac{1}{3} \delta^2 M q \right), \qquad P\{m \ge (1 + \delta) M q\} \le \exp\!\left( -\tfrac{1}{3} \delta^2 M q \right).$$

We now use Lemma 5 to bound the distance between any design point and its K nearest neighbors.

Lemma 6. (See Proposition 30 in Von Luxburg et al. (2014).) Denote by $R_K(x)$ the distance from x ∈ X to its Kth nearest neighbor in the set $\{x_1, \dots, x_n\}$. Setting

$$R_{K,\max} = \max_{1 \le i \le n} R_K(x_i), \qquad R_{K,\min} = \min_{1 \le i \le n} R_K(x_i),$$

we have that

$$\mathrm{pr}\left\{ a \left( \frac{K}{n} \right)^{1/d} \le R_{K,\min} \le R_{K,\max} \le \tilde{a} \left( \frac{K}{n} \right)^{1/d} \right\} \ge 1 - n \exp(-K/3) - n \exp(-K/12),$$

under Assumptions 1–3, where $a = 1/(2\, c_{2,d}\, p_{\max})^{1/d}$ and $\tilde{a} = 2^{1/d}/(p_{\min}\, c_{1,d})^{1/d}$.

Proof. This proof closely follows that of Proposition 30 in Von Luxburg et al. (2014). First note that for any x ∈ X, by Assumptions 1–2 we have
$$\mathrm{pr}\{x_1 \in B_r(x)\} = \int_{B_r(x)} p(t)\, \mu(dt) \le p_{\max}\, \mu\{B_r(x)\} \le c_{2,d}\, r^d\, p_{\max} := \mu_{\max} < 1,$$
for small enough r. We also have that $R_K(x) \le r$ if and only if there are at least K observations $\{x_i\}$ in $B_r(x)$. Let $V \sim \mathrm{Binomial}(n, \mu_{\max})$. Then,

$$P\{R_K(x) \le r\} \le P(V \ge K) = P\{V \ge 2\, E(V)\},$$

where the last equality follows by choosing $a = 1/(2\, c_{2,d}\, p_{\max})^{1/d}$, and $r = a (K/n)^{1/d}$. Therefore, from Lemma 5 we obtain
$$\mathrm{pr}\left\{ R_{K,\min} \le a (K/n)^{1/d} \right\} \le \mathrm{pr}\left\{ \exists i : R_K(x_i) \le a (K/n)^{1/d} \right\} \le n \max_{1 \le i \le n} \mathrm{pr}\{R_K(x_i) \le r\} \le n \exp(-K/3).$$
Furthermore,
$$\mathrm{pr}\{x_1 \in B_r(x)\} = \int_{B_r(x)} p(t)\, \mu(dt) \ge p_{\min}\, \mu\{B_r(x)\} \ge c_{1,d}\, r^d\, p_{\min} := \mu_{\min} > 0,$$
and with a similar argument we arrive at

$$\mathrm{pr}\left\{ R_{K,\max} > \tilde{a} (K/n)^{1/d} \right\} \le n \exp(-K/12), \qquad (30)$$

where $\tilde{a} = 2^{1/d}/(p_{\min}\, c_{1,d})^{1/d}$.

The upper bound in Lemma 6 allows us to control the maximum distance between $x_i$ and $x_j$ whenever they are connected in the K-NN graph. This maximum distance scales as $(K/n)^{1/d}$. The lower bound, on the other hand, prevents $x_i$ from being arbitrarily close to its Kth nearest neighbor. These properties are particularly important as they will be used to characterize the penalty $\|\nabla_G \theta^*\|_1$.

As explained in Wang et al. (2016), there are different strategies for proving convergence rates in generalized lasso problems such as (2) from the main paper. Our approach here is similar in spirit to Padilla et al. (2018), and it is based on considering a lower bound for the penalty function induced by the K-NN graph. This lower bound will arise by constructing a signal over the grid graph induced by the cells $\{C(x)\}_{x \in I(N)}$ defined in (28). Towards that end, we provide the following lemma characterizing the minimum and maximum number of observations $\{x_i\}$ that fall within each cell C(x). With an abuse of notation, we write $|C(x)| = |\{i \in [n] : x_i \in C(x)\}|$. We now present a result related to Proposition 28 in Von Luxburg et al. (2014).

Lemma 7. Assume that N in the construction of $P_{\mathrm{lat}}(N)$ defined in (24) is chosen as
$$N = \left\lceil \frac{3 \sqrt{d}\, (2\, c_{2,d}\, p_{\max})^{1/d}\, n^{1/d}}{L_{\min}\, K^{1/d}} \right\rceil.$$
Then there exist positive constants $\tilde{b}_1$ and $\tilde{b}_2$ depending on $L_{\min}$, $L_{\max}$, d, $p_{\min}$, $p_{\max}$, $c_{1,d}$, and $c_{2,d}$ defined in Assumptions 1–3, such that
$$\mathrm{pr}\left\{ \max_{x \in I(N)} |C(x)| \ge (1 + \delta)\, c_{2,d}\, \tilde{b}_1\, K \right\} \le N^d \exp\!\left( -\tfrac{1}{3} \delta^2\, \tilde{b}_2\, c_{1,d}\, K \right),$$
$$\mathrm{pr}\left\{ \min_{x \in I(N)} |C(x)| \le (1 - \delta)\, c_{1,d}\, \tilde{b}_2\, K \right\} \le N^d \exp\!\left( -\tfrac{1}{3} \delta^2\, \tilde{b}_2\, c_{1,d}\, K \right), \qquad (31)$$

for all δ ∈ (0, 1), with $a = 1/(2\, c_{2,d}\, p_{\max})^{1/d}$ and $\tilde{a} = 2^{1/d}/(p_{\min}\, c_{1,d})^{1/d}$. Moreover, the symmetric K-NN graph has maximum degree $d_{\max}$ satisfying
$$\mathrm{pr}\left\{ d_{\max} \ge \tfrac{3}{2}\, p_{\max}\, c_{2,d}\, \tilde{a}^d\, K \right\} \le n \left\{ \exp\!\left( -\frac{K}{12} \right) + \exp\!\left( -\frac{p_{\min}\, c_{1,d}\, \tilde{a}^d\, K}{24} \right) \right\}.$$

Define the event Ω as: "If $x_i \in C(x_i')$ and $x_j \in C(x_j')$ for $x_i', x_j' \in I(N)$ with $\|h(x_i') - h(x_j')\|_2 \le N^{-1}$, then $x_i$ and $x_j$ are connected in the K-NN graph". Then, $\mathrm{pr}(\Omega) \ge 1 - n \exp(-K/3)$.

Proof. Let $x_i$ and $x_j$ be such that $x_i \in C(x_i')$ and $x_j \in C(x_j')$ for $x_i', x_j' \in I(N)$ with $\|h(x_i') - h(x_j')\|_2 \le N^{-1}$. Then the following holds using Assumption 3:

$$\begin{aligned}
d_X(x_i, x_j) &\le L_{\min}^{-1} \|h(x_i) - h(x_j)\|_2 \\
&\le L_{\min}^{-1} \sqrt{d}\, \|h(x_i) - h(x_j)\|_\infty \\
&\le L_{\min}^{-1} \sqrt{d}\, \left\{ \|h(x_i) - h(x_i')\|_\infty + \|h(x_i') - h(x_j')\|_\infty + \|h(x_j') - h(x_j)\|_\infty \right\} \\
&< 3\, L_{\min}^{-1} \sqrt{d}\, N^{-1} \\
&\le a \left( \frac{K}{n} \right)^{1/d},
\end{aligned}$$

with $a = 1/(2\, c_{2,d}\, p_{\max})^{1/d}$, where the first inequality follows from Assumption 3. Therefore, as in the proof of Lemma 6,
$$\mathrm{pr}(\Omega) \ge \mathrm{pr}\left\{ a \left( \frac{K}{n} \right)^{1/d} \le R_{K,\min} \right\} \ge 1 - n \exp(-K/3).$$

Next we proceed to derive an upper bound on the counts $\{|C(x)|\}$. Assume that $x \in h^{-1}\{P_{\mathrm{lat}}(N)\}$, and $x' \in C(x)$. Then,

$$\begin{aligned}
d_X(x, x') &\le \frac{1}{L_{\min}} \|h(x) - h(x')\|_2 \\
&\le \frac{\sqrt{d}}{L_{\min}} \|h(x) - h(x')\|_\infty \\
&\le \frac{\sqrt{d}}{2\, L_{\min}\, N} \\
&\le \frac{a}{6} \left( \frac{K}{n} \right)^{1/d} \\
&= \frac{(\tilde{b}_1)^{1/d}}{p_{\max}^{1/d}} \left( \frac{K}{n} \right)^{1/d},
\end{aligned}$$

where the first inequality follows from Assumption 3, the second from the definition of $P_{\mathrm{lat}}(N)$, the third one from the choice of N, and $\tilde{b}_1$ is an appropriate constant. Therefore,

$$C(x) \subset B_{\frac{(\tilde{b}_1)^{1/d}}{p_{\max}^{1/d}} \left( \frac{K}{n} \right)^{1/d}}(x). \qquad (32)$$

On the other hand, if $\tilde{b}_2$ is such that
$$\frac{(\tilde{b}_2)^{1/d}}{p_{\min}^{1/d}} \le \frac{a\, L_{\min}}{2 \cdot 3 \sqrt{d}\, L_{\max}},$$

then if
$$d_X(x, x') \le \frac{(\tilde{b}_2)^{1/d}}{p_{\min}^{1/d}} \left( \frac{K}{n} \right)^{1/d},$$
we have that by Assumption 3, for large enough n,

$$\|h(x) - h(x')\|_\infty \le \frac{1}{2N}.$$
And so,

$$B_{\frac{(\tilde{b}_2)^{1/d}}{p_{\min}^{1/d}} \left( \frac{K}{n} \right)^{1/d}}(x) \subset C(x). \qquad (33)$$
In consequence,
$$\begin{aligned}
\mathrm{pr}\left\{ \max_{x \in I(N)} |C(x)| \ge (1 + \delta)\, c_{2,d}\, \tilde{b}_1\, K \right\}
&\le \sum_{x \in I(N)} \mathrm{pr}\left\{ |C(x)| \ge (1 + \delta)\, c_{2,d}\, \tilde{b}_1\, K \right\} \\
&\le \sum_{x \in I(N)} \mathrm{pr}\left[ |C(x)| \ge (1 + \delta)\, n\, \mathrm{pr}\{x_1 \in C(x)\} \right] \\
&\le \sum_{x \in I(N)} \exp\!\left( -\tfrac{1}{3} \delta^2\, \tilde{b}_2\, c_{1,d}\, K \right) \\
&= N^d \exp\!\left( -\tfrac{1}{3} \delta^2\, \tilde{b}_2\, c_{1,d}\, K \right),
\end{aligned}$$
where the first inequality follows from a union bound, the second from (32), and the third from (33) combined with Lemma 5. On the other hand, with a similar argument, we have that
$$\begin{aligned}
\mathrm{pr}\left\{ \min_{x \in I(N)} |C(x)| \le (1 - \delta)\, c_{1,d}\, \tilde{b}_2\, K \right\}
&\le \sum_{x \in I(N)} \mathrm{pr}\left\{ |C(x)| \le (1 - \delta)\, c_{1,d}\, \tilde{b}_2\, K \right\} \\
&\le \sum_{x \in I(N)} \mathrm{pr}\left[ |C(x)| \le (1 - \delta)\, n\, \mathrm{pr}\{x_1 \in C(x)\} \right] \\
&\le N^d \exp\!\left( -\tfrac{1}{3} \delta^2\, \tilde{b}_2\, c_{1,d}\, K \right).
\end{aligned}$$

Next we proceed to find an upper bound on the maximum degree of the K-NN graph. We start by defining the sets n o n o Bi(x) = j ∈ [n]\{i} : xj ∈ Ba˜(K/n)1/d (x) ,Bi = j ∈ [n]\{i} : xj ∈ Ba˜(K/n)1/d (xi) , for i ∈ [n], and x ∈ X , and where a˜ is given as in Lemma6. Then, h i |Bi(x)| ∼ Binomial n − 1, pr{x1 ∈ Ba˜(K/n)1/d (x)} , where d K d K p c a˜ ≤ pr{x ∈ B 1/d (x)} ≤ p c a˜ , min 1,d n 1 a˜(K/n) max 2,d n which implies by Lemma5 that

     d  3 d 3 pmin c1,d a˜ pr |B (x)| ≥ p c a˜ K ≤ pr |B (x)| ≥ pr{x ∈ B 1/d (x)}(n − 1) ≤ exp − K . i 2 max 2,d i 2 1 a˜(K/n) 24

23 Hence,

 3  Z  3   p c a˜d  pr |B | ≥ p c a˜d K = pr |B (x)| ≥ p c a˜d K p(x)µ(dx) ≤ exp − min 1,d K . i 2 max 2,d i 2 max 2,d 24 X

Therefore, if di is the degree associated with xi then

3 d   3 d  pr di ≤ 2 pmax c2,d a˜ K ≥ pr |{j ∈ [n]\{i} : dX (xi, xj) ≤ RK,max}| ≤ 2 pmax c2,d a˜ K  3 d ≥ pr |{j ∈ [n]\{i} : dX (xi, xj) ≤ RK,max}| ≤ 2 pmax c2,d a˜ K, K 1/d  RK,max ≤ a˜ n  1/d 3 d ≥ pr {j ∈ [n]\{i} : dX (xi, xj) ≤ a˜(K/n) } ≤ 2 pmax c2,d a˜ K,  K 1/d  RK,max ≤ a˜ n n K 1/do  3 d ≥ 1 − pr RK,max > a˜ n − pr |Bi| > 2 pmax c2,d a˜ K , and the claim follows from the previous inequality and Equation (30).

As stated in Lemma7, the number of observations xi that fall within each cell C(x) scales like K, with high probability. We will exploit this fact next in order to obtain an upper bound on the MSE.

Lemma 8. Assume that the event Ω from Lemma7 intersected with

1 ˜ min |C(x)| ≥ c1,d b2 K (34) x∈I(N) 2

n holds. Define N as in that lemma and let I(N) be the corresponding mesh. Then for all e ∈ R , it holds that T n |e (θ − θI )| ≤ 2 kek∞ k∇GK θk1, ∀θ ∈ R . (35) Moreover, I n kD θ k1 ≤ k∇GK θk1, ∀θ ∈ R , (36) d where D is the incidence matrix of a d-dimensional grid graph Ggrid = (Vgrid,Egrid) with Vgrid = [N ], 0 where (l, l ) ∈ Egrid if and only if 1 kh{P (x )} − h{P (x )}k = . I il I il0 2 N Here we use the notation from SectionE.

0 Proof. We start by introducing the notation xi = PI (xi). To prove (35) we proceed in cases.

Case 1. If (θI )1 = θ1, then clearly |e1| |θ1 − (θI )1| = 0. Case 2. If (θI )1 = θi, for i 6= 1, then

1 kh(x0 ) − h(x )k ≤ kh(x0 ) − h(x )k ≤ . 1 i ∞ 1 1 ∞ 2 N 0 0 Thus, x1 = xi, and so (1, i) ∈ EK by the assumption that the event Ω holds.

Therefore for every i ∈ {1, . . . , n}, there exists ji ∈ [n] such that (θI )i = θji and either i = ji or (i, ji) ∈ EK . Hence, T Pn |e (θ − θI )| ≤ i=1 |ei| |θi − θji | ˆ ≤ 2 kek∞ k∇GK θk1.

24 To verify (36), we observe that

I P kD θ k1 = θil − θil0 . 0 (37) (l,l )∈Egrid

0 0 Now, if (l, l ) ∈ Egrid, then xil and xil0 are in neighboring cells in I. This implies that (il, il ) is an edge in the K-NN graph. Thus, every edge in the grid graph Ggrid corresponds to an edge in the K-NN graph and the mapping is injective. Note that here we have used the fact that (34) ensures that every cell has at least one point, provided that K is large enough. The claim follows.

Lemma 9. With the notation from Lemma8, we have that

T ˆ ∗  ∗ ˆ  T ˆ ∗ ε (θ − θ ) ≤ 2 kεk∞ k∇GK θ k1 + k∇GK θk1 + ε (θI − θI ), on the event Ω.

Proof. Let us assume that event Ω happens. Then we observe that

T ˆ ∗ T ˆ ˆ T ˆ ∗ T ∗ ∗ ε (θ − θ ) = ε (θ − θI ) + ε (θI − θI ) + ε (θI − θ ), (38) and the claim follows by Lemma8.

Lemma 10. With the notation from Lemma9, on the event Ω, we have that

T ˆ ∗ p  ˆ ∗ + T h ˆ ∗ i ε (θI − θI ) ≤ max |C(u)| kΠ˜εk2 kθ − θ k2 + k(D ) ε˜k∞ k∇GK θk1 + k∇GK θ k1 , u∈I

N d where ε˜ ∈ R is a mean zero vector whose coordinates are independent and sub-Gaussian with the same N d + constants as in (7). Here, Π is the orthogonal projection onto the span of 1 ∈ R , and D is the pseudo- inverse of the incidence matrix D from Lemma8.

Proof. Here we use the notation from the proof of Lemma8. Then,

N d X X   1 T ˆ ∗ ˆ ∗ 2 T ˆI ∗,I ε (θI − θI ) = εl θij − θi = max |C(u)| ε˜ (θ − θ ) j u∈I j=1 l∈Ij where −1/2   X ε˜j = max|C(u)| εl. u∈I l ∈Ij

Clearly, the ε˜1,..., ε˜N d are independent given Ω, and are also sub-Gaussian with the same constants as the original errors ε1, . . . , εn. N d Moreover, let Π be the orthogonal projection onto the span of 1 ∈ R . Then, by Ho¨lder’s inequality, and by the triangle inequality,

 1/2 T ˆ ∗ n ˆI ∗,I + T ˆI ∗,I o ε (θI − θI ) ≤ max|C(u)| kΠ˜εk2 kθ − θ k2 + k(D ) ε˜k∞ kD(θ − θ )k1 u∈I  1/2 n I ∗,I + T  I ∗,I o ≤ max|C(u)| kΠ˜εk2 kθˆ − θ k2 + k(D ) ε˜k∞ kDθˆ k1 + kD θ k1 . u∈I (39)

25 Next we observe that

1/2  N d  ( n )1/2   2  2 ˆI ∗,I X ˆ ∗ X ˆ ∗ (40) kθ − θ k2 = θij − θij ≤ θi − θi . j=1  i=1

Therefore, combining (39), (40) and Lemma8, we arrive at

1 n  o T ˆ ∗ 2 ˆ ∗ + T ˆ ∗ ε (θI − θI ) ≤ max|C(u)| kΠ˜εk2 kθ − θ k2 + k(D ) ε˜k∞ k∇GK θk1 + k∇GK θ k1 . u∈I

The following lemma results from a similar argument as the proof of Theorem 2 in Hutter and Rigollet (2016).

Lemma 11. Let d > 1 (recall Assumptions1–3 ) and let δ > 0. With the notation from the previous lemma, given the event Ω, we have that

1 1 1 h 2 log n n C2(pmax,L ,d) n oi 2 T ˆ ∗ ˜d 2  e  2 ˆ ∗ min ε (θI − θI ) ≤ {(1 + δ) c2,d b1 K} 2 σ 2 log δ kθ − θ k2 + σ C1(d) d log K δ !  ˆ ∗  · k∇GK θk1 + k∇GK θ k1 (41) with probability at least   n 1 2 ˜ 1 − 2δ − d d exp − δ b2 c1,d K − n exp(−K/3), K a˜ Lmax 3 where C1(d) > 0 is a constant depending on d, and C2(pmax,Lmin, d) > 0 is another constant depending on pmax, Lmin, and d. Proof. First, as in the proof of Theorem 2 from Hutter and Rigollet(2016), and our choice of N, we obtain that given Ω, h n oi 1 k(D+)T ε˜k ≤ σ C (d) 2 log n log C2(pmax,Lmin,d) n 2 , ∞ 1 d K δ (42) 1  e  2 kΠ˜εk2 ≤ 2 σ 2 log δ with probability at least 1 − 2 δ, where C(d) is constant depending on d, and C2(pmax,Lmin, d) is a constant depending on pmax, Lmin and d. On the other hand, by Lemma7, given the event Ω we have

1 ˜ 1 max |C(x)| 2 ≤ {(1 + δ) c2,d b1 K} 2 −1 x∈h (Plat) with probability at least

 1  1 − N d exp − δ2 ˜b c K − n exp(−K/3). (43) 3 2 1,d

Therefore, combining (42) and (43), the result follows.

26 G Proof of Theorem1

Instead of proving Theorem1, we will prove a more general result that holds for a general choice of K. This is given next. The corresponding result for -NN-FL can be obtained with a similar argument.

Theorem 12. There exist constants C1(d) and C2(pmax,Lmin, d), depending on d, pmax, and Lmin, such that with the notation from Lemmas6 and7, if λ is chosen as     2 log n C2(pmax,Lmin, d) n 1 λ = σ C (d) (1 + δ) c ˜b log K + 8σ (log n) 2 , 1 2,d 1 d K δ then the K-NN-FL estimator θˆ (3) satisfies  h n o i1/2 ˆ ∗ 2 ∗ 4 σ C1(d) ˜ 2 log n C2(pmax,Lmin,d) n kθ − θ kn ≤ k∇GK θ k1 n (1 + δ) c2,d b1 d log K δ K  2 ˜ e 32 (log n) 16 σ (1+δ) c2,d b1 K log( δ ) + n + n , with probability at least ηn {1 − n exp(−K/3)}. Here,  1  C η = 1 − 2δ − N d exp − δ2 ˜b c K − n exp(−K/3) − , n 3 2 1,d n7 where C is the constant in (7), and N is given as in Lemma7. Consequently, taking K  log1 + 2r n we obtain the result in Theorem1. Proof. We notice by the basic inequality argument, see for instance Wang et al.(2016), that ( ) 1 ˆ ∗ 2 1 T ˆ ∗  ˆ ∗  2 kθ − θ kn ≤ n ε (θ − θ ) + λ −k∇GK θk1 + k∇GK θ k1 . (44)

On the other hand, by (7) in the paper, and a union bound,

 1   1  C pr max |εi| > 4σ (log n) 2 Ω = pr max |εi| > 4σ (log n) 2 ≤ . (45) 1≤i≤n 1≤i≤n n7 Therefore, combining (44), (45), Lemma9, Lemma 10, and Lemma 11 we obtain that conditioning on Ω,

1 1 1 (1+δ) c ˜b K 2 h n oi 2 1 ˆ ∗ 2 { 2,d 1 }  e  2 ˆ ∗ 2 log n C2(pmax,Lmin,d) n 2 kθ − θ kn ≤ n 2 σ 2 log δ kθ − θ k2 + σ C1(d) d log K δ !  ˆ ∗  · k∇GK θk1 + k∇GK θ k1 +

1 8σ (log n) 2  ˆ ∗  λ  ˆ ∗  n k∇GK θk1 + k∇GK θ k1 + n −k∇GK θk1 + k∇GK θ k1  h n o i 1 ∗ σ C1(d) ˜ 2 log n C2(pmax,Lmin,d) n 2 ≤ k∇GK θ k1 n (1 + δ) c2,d b1 d log K δ K 1 1  2 (1+δ) c ˜b K log e 2 8σ log 2 n { 2,d 1 ( δ )} ˆ ∗ + n + 2 σ n kθ − θ k2,

−1 2 2 with probability at least ηn. Hence by the inequality a b − 4 b ≤ a , we obtain that conditioning on Ω,  h n o i 1 1 ˆ ∗ 2 ∗ σ C1(d) ˜ 2 log n C2(pmax,Lmin,d) n 2 4 kθ − θ kn ≤ k∇GK θ k1 n (1 + δ) c2,d b1 d log K δ K 1  2 ˜ e 8σ (log n) 2 4 σ (1+δ) c2,d b1 K log( δ ) + n + n , with high probability. The claim follows.

27 H Auxiliary lemmas for Proof of Theorem2

Throughout we use the notation from SectionE. The lemma below resembles Lemma7 with the difference that we now prove that an edge in the K-NN graph induces an edge in a certain mesh.

Lemma 13. Assume that N in the construction of Glat is chosen as $ % ( c p )1/d n1/d N = 1,d min . 1/d 1/d 2 Lmax K

0 0 Then there exist positive constants b1 and b2 depending on Lmin, Lmax, d, pmin, c1,d, and c2,d, such that   0 d 1 2 0  pr max |C(x)| ≥ (1 + δ) c2,d b1 K ≤ N exp − 3 δ b2 c1,d K , x∈h−1(P )  lat  (46) 0 d 1 2 0  pr min |C(x)| ≤ (1 − δ) c1,d b K ≤ N exp − δ b c1,d K , −1 2 3 2 x∈h (Plat) for all δ ∈ (0, 1]. Moreover, let Ω˜ denote the event: “For all i, j ∈ [n], if xi and xj are connected in the 0 0 −1 0 0 0 0 K-NN graph, then kh(xi) − h(xj)k2 < 2 N where xi ∈ C(xi) and xj ∈ C(xj) with xi, xj ∈ I(N)”. Then   pr Ω˜ ≥ 1 − n exp(−K/3).

0 Proof. Let i, j ∈ [n] such that xi and xj are connected in the K-NN graph where xi ∈ C(xi) and xj ∈ 0 0 0 C(xj) with xi, xj ∈ I(N). Then,

0 0 0 0 kh(xi) − h(xj)k2 ≤ kh(xi) − h(xi)k2 + kh(xi) − h(xj)k2 + kh(xj) − h(xj)k2 1 ≤ N + kh(xi) − h(xj)k2 1 ≤ N + Lmax dX (xi, xj) 1 ≤ N + Lmax RK,max Therefore,   n 1/do pr Ω˜ ≥ pr RK,max ≤ a˜(K/n) ≥ 1 − n exp(−K/12), where a˜ is given as in Lemma6. −1 Next, we derive an upper bound on the counts of the mesh. Assume that x ∈ h {Plat(N)}, and x0 ∈ C(x). Then, 0 1 0 d (x, x ) ≤ kh(x) − h(x )k2 X Lmin ≤ 1 2 Lmin N 1+1/d 1/d ≤ 1 2 K Lmax 2 Lmin n1/d 0 1/d K 1/d =: (b1) n , where the first inequality follows from Assumption3, the second from the definition of Plat(N), and the third one from the choice of N. Therefore,

C(x) ⊂ B 0 1/d K 1/d (x). (47) (b1) ( n )

0 On the other hand, we can find b2 with a similar argument the proof of Lemma7, and the proof follows the proof of that lemma.

28 Lemma 14. Suppose that Assumptions1–3 hold, and choose K  log1+2r n for some r > 0. With fˆ(x) the prediction function for the K-NN-FL estimator as defined in (4), and for an appropriate choice of λ, it follows that

2 log1+2r n log1.5+r n  f (X) − fˆ(X) = O + k∇ θ∗k + AErr , (48) EX∼p 0 pr n n GK 1 where AErr is the approximation error, defined as

 2 Z  1 X  AErr = f (x) − f (x ) p(x) µ(dx). 0 K 0 i  i∈NK (x) 

Proof. Throughout we use the notation from AppendixE. We start by noticing that

2 2 n ˆ o R n ˆ o Ex∼p f0(x) − f(x) = f0(x) − f(x) p(x) µ(dx)

R n 1 P = f0(x) − f0(xi) |NK (x)| i∈NK (x) 2 1 P 1 P o + f0(xi) − fˆ(xi) p(x) µ(dx) |NK (x)| |NK (x)| i∈NK (x) i∈NK (x) 2 R n 1 P o ≤ 2 f0(x) − f0(xi) p(x) µ(dx) + |NK (x)| i∈NK (x) 2 R n 1 P 1 P o 2 f0(xi) − fˆ(xi) p(x) µ(dx). |NK (x)| |NK (x)| i∈NK (x) i∈NK (x)

Therefore we proceed to bound the second term in the last inequality. We observe that

2 R n 1 P 1 P o f0(xi) − fˆ(xi) p(x) µ(dx) |NK (x)| |NK (x)| i∈NK (x) i∈NK (x) 2 (49) P R n 1 P 1 P o = f0(xi) − fˆ(xi) p(x) µ(dx). |NK (x)| |NK (x)| 0 x ∈I(N) C(x0) i∈NK (x) i∈NK (x)

Let x0 ∈ X . Then there exists u(x0) ∈ I(N) such that x ∈ C(u(x0)) and 1 kh(x0) − h(u(x0))k ≤ . 2 2 N 0 0 Moreover, by Lemma7, with high probability, there exists i(x ) ∈ [n] such that xi(x0) ∈ C(u(x )). Hence,

0 1 0 1 dX (x , xi(x0)) ≤ kh(x ) − h(xi(x0))k2 ≤ . Lmin Lmin N

This implies that there exist xi1 , . . . , xiK , i1, . . . , iK ∈ [n] such that

0 1 dX (x , xil ) ≤ + RK,max, l = 1,...,K. Lmin N

29 Thus, with high probability, for any x0 ∈ X ,

0 n 0 1 o N (x ) ⊂ i : d (x , xi) ≤ + R K X Lmin N K,max n o 0 Lmax ⊂ i : kh(x ) − h(xi)k2 ≤ + Lmax R (50) Lmin N K,max n   o 0 1 Lmax 1 ⊂ i : kh(u(x )) − h(xi)k2 ≤ + + Lmax R . 2 Lmin N K,max

Hence, we set     0 0 1 Lmax 1 N˜ (u(x )) = i : kh(u(x )) − h(xi)k2 ≤ + + Lmax RK,max . 2 Lmin N

d As a result, denoting by u1, . . . , uN d the elements of I(N), we have that for j ∈ [N ],

2 R n 1 P 1 P o f0(xi) − fˆ(xi) p(x) µ(dx) |NK (x)| |NK (x)| i∈NK (x) i∈NK (x) C(uj ) 2 R 1 P n o ≤ f0(xi) − fˆ(xi) p(x) µ(dx) |NK (x)| i∈NK (x) C(uj ) 2 1 R P n ˆ o ≤ K f0(xi) − f(xi) p(x) µ(dx) i∈N˜ (u ) C(uj ) j " # 2 1 P n ˆ o R = K f0(xi) − f(xi) p(x) µ(dx). i∈N˜ (u ) j C(uj )

The above combined with (49) leads to ( )2 Z 1 X 1 X f0(xi) − fˆ(xi) p(x) µ(dx) |NK (x)| |NK (x)| i∈N (x) i∈N (x)  K K  N d   X 1 X n o2 Z ≤  f (x ) − fˆ(x ) p(x) µ(dx) K  0 i i   j=1 ˜ i∈N (uj ) C(uj )  N d  2 Z X 1 X n ˆ o ≤  f0(xi) − f(xi)  max p(x) µ(dx) K 1≤j≤N d j=1 ˜ i∈N (uj ) C(uj )   n 2 Z X 1 X n ˆ o  ≤  f0(xi) − f(xi)  max p(x) µ(dx)  K  1≤j≤N d i=1 d  1 Lmax  1 j∈[N ]: kh(xi) − h(uj )k2 ≤ + + Lmax RK,max C(uj ) 2 Lmin N " n # 2       X n ˆ o 1 Lmax 1 ≤ f0(xi) − f(xi) max j : kh(xi) − h(uj)k2 ≤ + + Lmax RK,max · i∈[n] 2 L N i=1 min

1 R K max p(x) µ(dx). 1≤j≤N d C(uj ) (51)

30 d Next, for a set A ⊂ R and positive constant r, we define the packing and external covering numbers as

pack  0 Nr (A) := max l ∈ N : ∃q1, . . . , ql ∈ A, such that kqj − qj0 k2 > r ∀j 6= j , ext  d Nr := min l ∈ N : ∃q1, . . . , ql ∈ R , such that ∀x ∈ A there exists lx with kx − qlx k2 < r .

Furthermore, by Lemma7, there exists a constant c˜ such that RK,max ≤ c/N˜ with high probability. This implies that with high probability, for a positive constant C˜, we have that

n   o d 1 Lmax 1 max j ∈ [N ]: kh(xi) − h(uj)k2 ≤ + + Lmax RK,max 2 Lmin N i∈[n] (52) n o d C˜ ≤ max j ∈ [N ]: kh(xi) − h(uj)k2 ≤ N i∈[n]   pack ≤ max N 1 B C˜ (h(xi)) i∈[n] N N ext ≤ N 1 (B C˜ (0)) (53) N N ext = N1 (BC˜(0)) < C0, 0 for some positive constant C , where the first inequality follows from RK,max ≤ c/N˜ , the second from the definition of packing number, and the remaining inequalities from well-known properties of packing and external covering numbers. Therefore, there exists a constant C1 such that ( )2 Z 1 X 1 X f0(xi) − fˆ(xi) p(x) µ(dx) |NK (x)| |NK (x)| i∈NK (x) i∈NK (x) " n # Z X n o2 C1 ≤ f0(xi) − fˆ(xi) max p(x) µ(dx) K 1≤j≤N d i=1 C(uj ) " n # ! X n o2 C1 pmax ˆ ˜ ≤ f0(xi) − f(xi) µ B b1 K 1/d (x) K 1/d ( n ) i=1 pmax where the last equation follows as in (32). The claim then follows.

−1 Lemma 15. Assume that g0 := f0 ◦ h satisfies Definition1, i.e. g0 is piecewise Lipschitz. Then, under Assumptions1–3, ! K1/d AErr = O , pr n1/d provided that K/ log n → ∞, where AErr was defined in Lemma 14. Consequently, with K  log1+2r n for some r > 0, and for an appropriate choice of λ, we have that ( ) 2 2.5+3r+(1+2r)/d ˆ log n EX∼p f0(X) − f(X) = Opr . (54) n1/d

31 Proof. Throughout we use the notation from Lemma7 and SectionE. We denote by u1, . . . , uN d the ele- −1 ments of h (Plat(N)), and so

2 R n 1 P o AErr = f0(x) − K f0(xi) p(x) µ(dx) i∈NK (x) 2 P R n 1 P o = f0(x) − K f0(xi) p(x) µ(dx) d j∈[N ] i∈NK (x) C(uj ) 2 P R P 1 n o ≤ K f0(x) − f0(xi) p(x) µ(dx) d j∈[N ] i∈NK (x) C(uj ) 2 P R P 1 n o = K f0(x) − f0(xi) p(x) µ(dx) + d j∈[N ]: C(uj )∩S = ∅ i∈NK (x) C(uj ) 2 P R P 1 n o K f0(x) − f0(xi) p(x) µ(dx), d j∈[N ]: C(uj )∩S 6= ∅ i∈NK (x) C(uj ) where the inequality holds by convexity. Therefore,

 d 2 R AErr ≤ 4 j ∈ [N ]: C(uj) ∩ S= 6 ∅ kf0k∞ max p(x) µ(dx) + 1≤j≤N d C(uj ) 2 P R P 1 n o K f0(x) − f0(xi) p(x) µ(dx) d j∈[N ]: C(uj )∩S = ∅ i∈NK (x) C(uj )   2   d ≤ µ B ˜1/d (x) 4 pmax kf0k∞ j ∈ [N ]: C(uj) ∩ S= 6 ∅ b1 K 1/d 1/d ( n ) pmax 2 P R P 1 n o + K g0(h(x)) − g0(h(xi)) p(x) µ(dx) d j∈[N ]: C(uj )∩S = ∅ i∈NK (x) C(uj )

 ˜ 2  K  d ≤ c2,d 4 b1 kf0k∞ n j ∈ [N ]: C(uj) ∩ S= 6 ∅

P R P L0 2 + K kh(x) − h(xi)k2 p(x) µ(dx) d j∈[N ]: C(uj )∩S = ∅ i∈NK (x) C(uj )

 ˜d 2  K  d ≤ c2,d 4 b kf0k∞ n j ∈ [N ]: C(uj) ∩ S= 6 ∅      2 P R Lmax 1 + L0 p(x) µ(dx) + Lmax R Lmin N K,max d j∈[N ]: C(uj )∩S = ∅  C(uj ) 

 ˜d 2  K  d ≤ c2,d 4 b kf0k∞ n j ∈ [N ]: C(uj) ∩ S= 6 ∅  2 Lmax 1 + L0 + Lmax R , Lmin N K,max where the first inequality holds by elementary properties of integrals, the second inequality by (32), the third inequality by Assumptions2–3, the fourth inequality by the same argument as in (50), and the fifth inequality

32 from properties of integration. The conclusion of the lemma follows from the inequality above combined with Lemma6 and the proof of Proposition 23 from Hutter and Rigollet(2016) which uses Lemma 8.3 from Arias-Castro et al.(2012).

I Proof of Theorem2

Proof. Combining Lemmas 14 and 15 we obtain that  2 logα n E f (X) − fˆ(X) = O , X∼p 0 pr n1/d provide that Assumptions1–3 hold and f0 satisfies Definition1. d Suppose now that Assumptions1–5 hold. Throughout, we extend the domain of the function g0 to be R d d by simply making it take the value zero in R \[0, 1] . We will proceed to construct smooth approximations to g0 that will allow us to obtain the desired result. To that end, for any  > 0 we construct the regularizer d (or mollifier) g : R → R defined as Z 0 0 0 g(z) = ψ ∗ g0(z) = ψ(z ) g0(z − z ) dz ,

0 −d 0 where ψ(z ) =  ψ(z /). Then given Assumption4, by the proof of Theorem 5.3.5 from Zhu et al. (2003), it follows that there exists a constant C2 such that Z lim sup k∇g(z)k1 dz < C2 →+0 (0,1)d which implies that there exists 1 > 0 such that Z sup k∇g(z)k1 dz < C2. (55) 0< < 1 (0,1)d

Next, for N as in Lemma 13, we set  = N −1 and consider the event

n h d io Λ = h(x1) ∈ B4(S) ∪ (0, 1) \Ω4 , (56) and note that R pr(Λ) = p(z) µ(dz) −1 d h (B4(S)∪(0,1) \Ω4)  −1  d  (57) ≤ pmax µ h B4(S) ∪ (0, 1) \Ω4 ≤ pmax CS 4 , where the last inequality follows from Assumption5. Defining

J = { i ∈ [n]: h(xi) ∈ Ω4\B4(S)} , (58) by the triangle inequality we have

P ∗ ∗ P |θi − θj | − |g{h(xi)} − g{h(xj)}| (i,j)∈EK , i,j∈J (i,j)∈EK i,j∈J (59)

P P = |g0{h(xi)} − g0{h(xj)}| − |g{h(xi)} − g{h(xj)}| (i,j)∈EK i,j∈J (i,j)∈EK i,j∈J

33 P h i ≤ |g0{h(xi)} − g{h(xi)}| + |g0{h(xj)} − g{h(xj)}| (i,j)∈Ek i,j∈J P ≤ dmax |g0{h(xi)} − g{h(xi)}| i∈J (60) P R ≤ dmax ψ{h(xi) − z} |g0{h(xi)} − g0(z)| dz i∈J

−d P R ≤ K τd C1  |g0{h(xi)} − g0(z)| dz, i∈J kh(xi) − zk2≤ where τd is a positive constant and the second inequality happens with high probability as shown in Lemma 7. We then bound the last term in (60) using Assumption5. Thus, Z −d P R −d X X  |g0{h(xi)} − g0(z)| dz =  |g0{h(xi)} − g0(z)| dz i∈J kh(xi) − zk2≤ A∈P i∈J,h(xi)∈A kh(x ) − zk ≤   i 2 −1 d ≤ max |C{h (z)}| S1(g0, PN −1,S ) N  z∈Plat(N) 0 d ≤ {(1 + δ) c2,d b1 K} S1(g0, PN −1,S ) N , (61) with probability at least   n 1 2 0 1 − d d exp − δ b2 c1,d K , K a˜ Lmax 3 which follows from Lemma 13. If h(xi) ∈/ Ω\B(S), then

d |g0{h(xi)} − g0{h(xj)}| ≤ 2 kgkL∞(0,1) . (62)

We now proceed to put the different pieces together. Setting n˜ = |[n]\J|, we observe that

n˜ ∼ Binomial {n , pr(Λ)} .

If 0 n ∼ Binomial {n , pmax CS 4} , then by (57), we have

3  0 3  pr n˜ ≥ 2 n pmax CS  4 ≤ pr n ≥ 2 n pmax CS  4

 1 ≤ exp − 12 n ( pmax CS ) 4 (63)   1/d 1/d  = exp − pmax 4 CS n 2 Lmax K 12 1/d 1/d ( c1,d pmin) n   = exp −C˜ n1−1/d K1/d ,

34 where the first inequality follows from (57), the second from Lemma5, and C˜ is a positive constant that depends on pmin, pmax, Lmin, Lmax, d, and CS . Consequently, combining the above inequality with (59), (61) and (62) we arrive at

P ∗ ∗ P ∗ ∗ P ∗ ∗ |θi − θj | = |θi − θj | + |θi − θj | (i,j)∈EK (i,j)∈EK , i,j∈J (i,j)∈EK , i/∈J or j∈ /J P ≤ |g{h(xi)} − g{h(xj)}| + (i,j)∈EK i,j∈J

0 d K τd C1 {(1 + δ) c2,d b1 K} S1(g0, PN −1 ) N 

d + 2 kg0kL∞(0,1) K τd n˜

P 1−1/d 1+1/d < |g{h(xi)} − g{h(xj)}| + C6 n K (i,j)∈EK i,j∈J

d + 2 kg0kL∞(0,1) K τd n,˜ for some positive constant C6 > 0, which happens with high probability, see (63). Hence, from (63),

P ∗ ∗ P 1−1/d 1+1/d |θi − θj | ≤ |g{h(xi)} − g{h(xj)}| + C6 n K (i,j)∈EK (i,j)∈EK i,j∈J (64) 1−1/d 1+1/d + C7 n K , where C7 > 0 is a constant, and the last inequality holds with probability approaching one provided that K/ log n → ∞. Therefore, it remains to bound the first term in the right hand side of inequality (64). To that end, we notice that if (i, j) ∈ EK , then from the proof of Lemma 13 we observe that

kh(xi) − h(xj)k2 ≤ Lmax RK,max ≤ , where the last inequality happens with high probability. Hence, with high probability, if z is in the segment d connecting h(xi) and h(xj), then z∈ / B2(S) ∪ ((0, 1) \Ω2) provided that i, j ∈ J. As a result, by the mean value theorem, for i, j ∈ J there exists a zi,j ∈ Ω2\B2(S) such that

T g{h(xi)} − g{h(xj)} = ∇g(zi,j) {h(xi) − h(xj)}, and this holds uniformly with probability approaching one.

35 Then,

P |g{h(xi)} − g{h(xj)}| (i,j)∈EK i,j∈J

P T = |∇g(zi,j) {h(xi) − h(xj)}| (i,j)∈EK i,j∈J ( ) P 2 ≤ k∇g(zi,j)k1 N (i,j)∈EK i,j∈J " # d P P C8 N = 2 k∇g(zi,j)k1 Vol{B(zi,j)} N A∈P,A∩{Ω2\B2(S)} 6= ∅ (i,j)∈EK s.t i,j∈J, and zi,j ∈A ( ) d P P R C8 N ≤ 2 k∇g(z)k1 dz N + A∈P,A∩{Ω2\B2(S)} 6= ∅ (i,j)∈EK s.t i,j∈J, and zi,j ∈A B(zi,j ) ( ) d P P R C8 N 2 |k∇g(zi,j)k1 − k∇g(z)k1| dz N A∈P,A∩{Ω2\B2(S)} 6= ∅ (i,j)∈EK s.t i,j∈J, and zi,j ∈A B(zi,j )

=: T1 + T2. (65) Therefore we proceed to bound T1 and T2. Let us assume that (i, j) ∈ EK with i, j ∈ J, and zi,j ∈ A with A ∈ P. Then by Lemma 13 we have two cases. Either h(xi) and h(xj) are in the same cell (element 0 0 of P), or h(xi) and h(xj) are in adjacent cells. Denoting by c(A ) the center of a cell A ∈ P, then if 0 0 z ∈ B(zi,j) ∩ A it implies that

0 0 0 0 kc(A ) − c(A)k∞ ≤ kc(A ) − z k∞ + kz − zi,jk∞ + kzi,j − c(A)k∞ ≤ 2.

And if in addition h(xi) ∈ Ai and h(xj) ∈ Aj, then

kc(Ai) − c(A)k∞ ≤ kc(Ai) − h(xi)k∞ + kh(xi) − zi,jk∞ + kzi,j − c(A)k∞ ≤ 2, (66) and the same is true for c(Aj). Hence,

R P R k∇g(z)k1 dz ≤ k∇g(z)k1 dz. 0 0 A ∈P : kc(A) − c(A )k∞≤2 0 B(zi,j ) A

Since the previous discussion was for an arbitrary zi,j with i, j ∈ J, we obtain that ( ) d P P P R 2 C8 N T1 ≤ k∇g(z)k1 dz N 0 0 A∈P,A∩{Ω2\B2(S)}= 6 ∅ (i,j)∈EK s.t i,j∈J, and zi,j ∈A A ∈P : kc(A) − c(A )k∞≤2 A0

d 2 C8 N P 0 0 3 2 R ≤ N [|{A ∈ P : kc(A) − c(A )k∞ ≤ 2}|] max |C(x)| k∇g(z)k1 dz x∈h−1{P (N)} A∈P lat A

d−1 2 P R ≤ C9 N max |C(x)| k∇g(z)k1 dz, x∈h−1{P (N)} lat A∈P A

36 where C9 is positive constant depending on d which can be obtained with an entropy argument similar to (52). As a result, from (55) we obtain

d−1 2 T1 ≤ C2C9 N max |C(x)| . (67) −1 x∈h {Plat(N)}

It remains to bound T2 in (65). Towards that goal, we observe that

Z d R X ∂g ∂g |k∇g(zi,j)k1 − k∇g(z)k1| dz ≤ (zi,j) − (z) dz ∂zl ∂zl B(zi,j ) l=1 B(zi,j )

Z d 1 Z ∂ψ X 0  0 0 0 = d+1 (z /) g0(zi,j − z ) − g0(z − z ) dz dz  ∂zl l=1 B(zi,j ) B(0) d Z X 1 Z ∂ψ(z0/) ≤ {g (z − z0) − d 0 i,j  kzi,j − zk2 ∂zl B (z ) l=1 B (0)  i,j 

0 0 g0(z − z )} dz dz,

which implies that for some C10 > 0

Z d Z ∂ψ(z0/) {g (z − z0) − g (z − z0)} d X 0 i,j 0 0 |k∇g(zi,j)k1 − k∇g(z)k1| dz ≤ C10  sup d dz ∂zl  kzi,j − zk2 z∈B(zi,j ) l=1 B(zi,j ) B(0) and so

T2

" d Z 0  0 0  # X ∂ψ(z /) g0(zi,j − z ) − g0(z − z ) ≤ P P sup dz0 ∂z kz − z k d A∈P,A∩(Ω2\B2(S)) 6= ∅ (i,j)∈EK i,j∈J, zi,j ∈A z∈B(zi,j ) l i,j 2 l=1 0 kz k2≤

d d−1 ·2 C10 C8  N " # P P d d−1 = T (g0, zi,j) · 2 C10 C8 N , A∈P,A∩(Ω2\B2(S)) 6= ∅ (i,j)∈EK i,j∈J, zi,j ∈A (68) with T (g0, z) as in Equation (10) in the main paper. Now, if zi,j ∈ A, h(xi) ∈ Ai and h(xj) ∈ Aj, then just as in (66) we obtain that

max{kc(Ai) − c(A)k∞, kc(Aj) − c(A)k∞} ≤ 2.

Hence, since by construction we also have zi,j ∈ Ω2\B2 (S) then P T (g0, zi,j) ≤ |{{i, j} : max{kc(Ai) − c(A)k∞, kc(Aj) − c(A)k∞} ≤ 2}| (i,j)∈EK i,j∈J, zi,j ∈A sup T (g0, zA), zA∈A∩(Ω2\B2 (S))

37 which combined with (68) implies that " # P d d−1 T2 ≤ sup T (g0, zi,j) · 2 C10 C8 N · A∈P,A∩(Ω2\B2(S)) 6= ∅ zA∈A∩(Ω2\B2 (S)) (69) |{{i, j} : max{kc(Ai) − c(A)k∞, kc(Aj) − c(A)k∞} ≤ 2}|

d−1 2 ≤ C11 N max |C(x)| , −1 x∈h (Plat(N)) for some positive constant C11, where the last inequality follows from Assumption5 and that fact that for every A the set of pairs of cells with centers within distance 2 is constant. Combining (64), (67), (65), (69) and Lemma 13, we obtain that

X ∗ ∗ 1−1/d |θi − θj | = Opr(poly(log n) n ), (70)

(i,j)∈EK when Assumptions1–5 hold. To conclude the proof, we proceed to verify (70) when Assumptions1–3 hold and f0 satisfies Definition 1. Using the notation from before, we observe that (57) still holds and for J in (58) we have that

P ∗ ∗ P ∗ ∗ P ∗ ∗ |θi − θj | = |θi − θj | + |θi − θj | (i,j)∈EK (i,j)∈EK , i,j∈J (i,j)∈EK , i/∈J or j∈ /J P (71) d ≤ |f0(xi) − f0(xj)| + 2 kg0kL∞(0,1) K τd n.˜ (i,j)∈EK i,j∈J So the claim follows from combining Lemma6,(63), and the piecewise Lipschitz condition.

J Proof of Theorem3

Proof. Recall ∗ yi = θi + εi, i = 1, . . . , n, θ∗ = f (x ), i 0,zi i (72) xi ∼ pzi (x), ∗ pr(zi = l) ∼ πl , for l = 1, . . . , `.

Next we let Al = {i ∈ [n]: zi = l}, and nl = |Al| for l = 1, . . . , `. Then by Proposition 27 in Von Luxburg et al.(2014) we have that the event nπ∗ 3nπ∗ l ≤ n ≤ l , for l = 1, . . . , `, (73) 2 l 2 ∗ happens with probability at least 1 − 2l exp(−n min{πl : l ∈ [`]}/12). Therefore, we will assume that the event defined in (73) holds. n We proceed by conditioning on {zi}i=1, and for simplicity we omit to make this conditioning explicit. For i ∈ [n] we write l(i) := j ∈ [`] if i ∈ Aj. We also introduce the following notation:  NK (xi) = set of nearest neighbors of xi in the K-NN graph constructed from points {xi0 }i0=1,...,n , n o ˜ 0 0 NK (xi) = set of nearest neighbors of xi in the K-NN graph constructed from points {xi }i ∈Al(i) , ˜l RK,max = max max dX (x, xi), i∈Al x∈N˜K (xi)

38 and we denote by ∇ l the incidence matrix of the K-NN graph corresponding to the points {xi}i∈A . GK l Next we observe that just as in the proof of Lemma7, n o ˜l 1/dl pr RK,max > a˜l(K/nl) ≤ nl exp(−K/12), (74)

1/d for some positive constant a˜l. We write l =a ˜l(K/nl) l , and consider the sets n o Λl = i ∈ Al : such that NK (xi) = N˜K (xi) .

Our goal is to use these sets in order to split the basic inequality for the K-NN-FL into different processes corresponding to the different sets Xl. To that end let us pick l ∈ [`]. We notice that if ∂Xl = ∅, then by Assumption6 we have that for all x ∈ Xl it holds that B(x) ⊂ Xl for small enough . Hence, with high probability, NK (xi) = N˜K (xi) for all i ∈ Al. Let us now assume that ∂Xl 6= ∅. Let i ∈ Al. Then

T 0 dl pr {xi ∈ Bl (∂Xl)} ≥ pl,min µl {Bl (∂Xl) Xl} ≥ cl l , (75) 0 where pl,min = minx∈Xl pl(x), and cl is a positive constant that exists because Xl satisfies Assumption2. On the other hand, (14) implies that for i ∈ Al we have n \ o pr {xi ∈ Bl (∂Xl)} ≤ pl,max µl Bl (∂Xl) Xl ≤ pl,max c˜l l, (76)

where pl,max = maxx∈Xl pl(x), and c˜l is a positive constant. Therefore, combining (15), (74), (75) , and (76) with Lemma5, we obtain that   3  1 0 dl pr nl − |Λl| ≤ 2 pl,max c˜l nl l ≥ 1 − pmax exp − 12 cl a˜l K − nl exp(−K/12)   (77) 1 0 dl ≥ 1 − pmax exp − 12 cl a˜l K − exp(−K/12 + log n). Next we see how the previous inequality can be used to put an upper bound on the penalty term of the n K-NN-FL. For any θ ∈ R , we have ` n ` n X Xl X X Xl X 2k∇GK θk1 = |θi − θj| = |θi − θj| + R(θ), i=1 i=1 l=1 j∈NK (xi) l=1 j∈N˜K (xi) where

` nl ` nl X X X X X X |R(θ)| = |θi − θj| − |θi − θj| i=1 i=1 l=1 j∈NK (xi) l=1 j∈N˜K (xi)

` ` X X X X X X (78) = |θi − θj| − |θi − θj|

l=1 i∈[nl]\Λl j∈NK (xi) l=1 i∈[nl]\Λl j∈N˜K (xi) ` X ≤ 4 τd kθk∞ K |[nl]\Λl|, l=1 where τd is a positive constant depending only on d, and the second inequality follows from the well-known bound on the maximum degree of K-NN graphs, see Corollary 3.23 in Miller et al.(1997). Combining with (77), we obtain that

" ` # n 1 o ˆ 1+1/dl X ∗ 1−1/dl R(θ) − R(θ) = Opr (log n) 2 + K λ K (nπl ) . (79) l=1

39 n Note that if ∂Xl = ∅, then (79) will still hold as R(θ) = 0 for all θ ∈ R with high probability. To conclude the proof, we notice by the basic inequality that

∗ ˆ 2 1 T ˆ ∗ λ  ∗ ˆ  kθ − θkn ≤ n ε θ − θ + n k∇GK θ k1 − k∇GK θk1 ` ` X T ˆ ∗  X λ  ∗ ˆ  λ n ∗ ˆ o = εA θAl − θA + k∇Gl θA k1 − k∇Gl θAl k1 + R(θ ) − R(θ) , l l 2n K l K 2n l=1 l=1 where the notation xA denotes the vector x with the coordinates with indices not in A removed. The proof follows from (79) and from bounding each term

1 T ˆ ∗  λ  ∗ ˆ  εA θAl − θA + k∇Gl θA k1 − k∇Gl θAl k1 , n l l n K l K as in the proof of Theorem 12.

K Manifold Adaptivity Example

The following example suggests that the -NN-FL estimator may not be manifold adaptive. 3 2 3 Example 2. For X = X1 ∪ X2 ⊂ R , suppose that X1 = [0, 1] × {c} for some c < 0, and X2 = [0, 1] . Note that the sets X1 and X2 satisfy Assumption6. Let us assume that all of the conditions of Theorem3 ∗ ∗ 3/4 are met with n1 := nπ1  n, and n2 := nπ2  n . Motivated by the scaling of  in Theorem2, we let   poly(log n)/n1/t. We now consider two possibilities for t: t ∈ (0, 2], and t > 2. • For any t ∈ (0, 2], and for any positive constant a ∈ (0, 3/4), there exists a positive constant c(a) ˆ ∗ 2 −1/4−a ˆ such that E(kθ − θ kn) ≥ c(a) n for large enough n, where θ is the -NN-FL estimator, see proof below. In other words, if t < 2, then -NN-FL does not achieve a desirable rate. By contrast, with an appropriate choice of λ and K, the K-NN-FL estimator attains the rate n−1/2  1−1/2 1−1/3 n1 /n + n2 /n (ignoring logarithmic terms) by Theorem3. 1−2/t • If t > 2, then the minimum degree in the -graph of the observations in X1 will be at least c(t)n , with high probability, for some positive constant c(t). This has two consequences: 1. Recall that the algorithm from Chambolle and Darbon(2009) has worst-case complexity O(mn2), where m is the number of edges in the graph (although empirically the algorithm is typically much faster, Wang et al., 2016). Therefore, if t is too large, then the computations involved in applying the fused lasso to the -NN graph may be too demanding. 2. In a different context, El Alaoui et al.(2016) argues that it is desirable for geometric graphs (which generalize -NN graphs) to have degree poly(log n). This suggests that using t > 2 leads to an -NN graph that is too dense.

Next, we proceed to justify the lower bound on the MSE of θˆ when t ∈ (0, 2] in Example2. We begin by noticing that if i, j ∈ A2 with i 6= j, then for the choice of  > 0 in this example, we have that Z (poly(log n))3 pr{d (x , x ) ≤ } ≤ p pr{d (x, x ) ≤ }µ (dx) ≤ k˜ . X i j 2,max X j 2 l n3/t X

3/4−a Let m  n , and j1 < j2 < . . . < jm elements of A2. Then the event

m Λ = ∩s=1{dX (xjs , xl) > , ∀l ∈ A2\{js}},

40 satisfies 3/2−3/t−a 3 pr (Λ) ≥ 1 − c2 n (poly(log n)) , for some positive constant c2. Therefore,

( n ) ( n ) X ∗ ˆ 2 X ∗ ˆ 2 2 3/2−3/t−a 3 3/4−a E (θi − θ,i) ≥ E (θi − θ,i) | Λ pr(Λ) ≥ m σ [1 − cs n {poly(log n)} ] ≥ C1 n i=1 i=1 for some positive constant C1 if n is large enough.

L Proving Theorems1–2 for -NN graphs

We start by giving an overview of how the conclusion of Theorem 1 in the main paper can be obtained for -NN graphs. The general idea is described next. The first step is to obtain a lemma that controls the maximum and minimum degrees of the -NN graph.

Lemma 16. (See Proposition 29 in Von Luxburg et al.(2014) ). Suppose that Assumptions1–3 hold. Let dmin and dmax denote the minimum and maximum degrees of an -NN graph, respectively. Then for all δ ∈ (0, 1) we have that

 2 d   d δ n cmax pr dmax ≥ (1 + δ)n cmax ≤ exp − 3 ,  2 d   d δ n cmin pr dmax ≤ (1 − δ)n cmin ≤ exp − 3 , for positive constants cmin and cmax. Next we present a lemma controlling the maximum and minimum counts of the mesh. This is similar to Lemma7.

Lemma 17. Let   log(1+2r)/d n/n1/d, then there exists N such that if & ' 1 N   in the construction of Plat(N) defined in (24), then the following properties hold:

0 0 • There exist positive constants b2 and b2 such that   0 1+2r d 1 2 0 1+2r  pr max |C(x)| ≥ (1 + δ) b2 log n ≤ N exp − 3 δ b2 log n , x∈I(N)   0 1+2r d 1 2 0 1+2r  pr min |C(x)| ≤ (1 − δ) b1 log n ≤ N exp − 3 δ b1 log n , x∈I(N)

for all δ ∈ (0, 1).

0 0 0 0 0 0 • Define the event Ω as: “If xi ∈ C(xi) and xj ∈ C(xj) for xi, xj ∈ I(N) with kh(xi) − h(xj)k2 ≤ −1 N , then xi and xj are connected in the -NN graph”. Then for some constant c > 0,

pr (Ω) ≥ 1 − n exp(−c log1+2r n/3).

41 The proof of Theorem1 results from setting  as in Lemma 17, and then following the steps in the proof of Theorem 1 in the main document. Specifically, we can follow the proofs of Lemmas8–11, and Theorem 1+2r 12, by replacing ∇GK with ∇G , and K with c log n for a constant c. To prove the conclusion of Theorem2 for the -NN graph, we need lemmas similar to those in Section H. The first of these lemmas is given next.

(1+2r)/d 1/d Lemma 18. Let   log n/n . Then there exists N in the construction of Glat with

N  d−1e,

0 0 and positive constants b1 and b2 depending on Lmin, Lmax, d, pmin, c1,d, and c2,d, such that   0 (1+2r) d  1 2 0 (1+2r)  pr max |C(x)| ≥ (1 + δ) c2,d b1 log n ≤ N exp − 3 δ b2 c1,d log n , x∈h−1(P )  lat  (80) 0 (1+2r) d  1 2 0 (1+2r)  pr min |C(x)| ≤ (1 − δ) c1,d b log n ≤ N exp − δ b c1,d log n , −1 2 3 2 x∈h (Plat) for all δ ∈ (0, 1]. Moreover, let Ω˜ denote the event: “For all i, j ∈ [n], if xi and xj are connected in the 0 0 −1 0 0 0 0 -NN graph, then kh(xi) − h(xj)k2 < 2 N where xi ∈ C(xi) and xj ∈ C(xj) with xi, xj ∈ I(N)”. Then   pr Ω˜ ≥ 1 − n exp(− log(1+2r) n/3).

Using Lemma 18, we can obtain the conclusions of Lemmas 14–15 and Theorem2 for the -NN graph. This can be done with minor modifications to the proofs in SectionH.

Acknowledgements JS is partially supported by NSF Grant DMS-1712996. DW is partially supported by NIH Grant DP5OD009145, NSF CAREER Award DMS-1252624, and a Simons Investigator Award in Mathematical Modeling of Liv- ing Systems.

References

Morteza Alamgir, Gabor´ Lugosi, and Ulrike Luxburg. Density-preserving quantization with application to graph downsampling. In Conference on Learning Theory, pages 543–559, 2014. Ery Arias-Castro, Joseph Salmon, and Rebecca Willett. Oracle inequalities and minimax rates for nonlocal means and related adaptive kernel-based methods. SIAM Journal on Imaging Sciences, 5(3):944–992, 2012. Alvaro´ Barbero and Suvrit Sra. Modular proximal optimization for multidimensional total-variation regu- larization. arXiv preprint arXiv:1411.0589, 2014. Peter J Bickel and Bo Li. Local polynomial regression on unknown manifolds. In Complex Datasets and Inverse Problems, pages 177–186. Institute of Mathematical Statistics, 2007. Yuri Boykov and Vladimir Kolmogorov. An experimental comparison of min-cut/max-flow algorithms for energy minimization in vision. IEEE Transactions on Pattern Analysis and Machine Intelligence, 26(9): 1124–1137, 2004. Leo Breiman. Random forests. , 45(1):5–32, 2001. Leo Breiman, Jerome Friedman, Charles J Stone, and Richard A Olshen. Classification and Regression Trees. CRC press, 1984.

42 Rui M Castro, Rebecca Willett, and Robert Nowak. Faster rates in regression via active learning. In Tech. Rep., University of Wisconsin, Madison, June 2005, ECE-05-3 Technical Report (available at http://homepages.cae.wisc.edu/ rcastro/ECE-05-3.pdf)., 2005. Antonin Chambolle and Jer´ omeˆ Darbon. On total variation minimization and surface evolution using para- metric maximum flows. International Journal of Computer Vision, 84(3):288–307, 2009. Antonin Chambolle and Thomas Pock. A first-order primal-dual algorithm for convex problems with appli- cations to imaging. Journal of Mathematical Imaging and Vision, 40:120–145, 2011. Kamalika Chaudhuri and Sanjoy Dasgupta. Rates of convergence for nearest neighbor classification. In Advances in Neural Information Processing Systems, pages 3437–3445, 2014. Ming-Yen Cheng and Hau-tieng Wu. Local linear regression on manifolds and its geometric interpretation. Journal of the American Statistical Association, 108(504):1421–1434, 2013. Sanjoy Dasgupta. Consistency of nearest neighbor classification under selective sampling. In Conference on Learning Theory, pages 18–1, 2012. Sanjoy Dasgupta and Samory Kpotufe. Optimal rates for k-NN density and mode estimation. In Advances in Neural Information Processing Systems, pages 2555–2563, 2014. Sanjoy Dasgupta and Kaushik Sinha. Randomized partition trees for exact nearest neighbor search. In Conference on Learning Theory, pages 317–337, 2013. P. Laurie Davies and Arne Kovac. Local extremes, runs, strings and multiresolution. Annals of Statistics, 29(1):1–65, 2001. David L Donoho and Iain M Johnstone. Minimax estimation via wavelet shrinkage. Annals of Statistics, 26 (8):879–921, 1998. Jean Duchon. Splines minimizing rotation-invariant semi-norms in sobolev spaces. Constructive Theory of Functions of Several Variables, pages 85–100, 1977. Ahmed El Alaoui, Xiang Cheng, Aaditya Ramdas, Martin J Wainwright, and Michael I Jordan. Asymptotic behavior of\ell p-based laplacian regularization in semi-supervised learning. In Conference on Learning Theory, pages 879–906, 2016. Abderrahim Elmoataz, Olivier Lezoray, and Sebastien´ Bougleux. Nonlocal discrete regularization on weighted graphs: a framework for image and manifold processing. IEEE transactions on Image Pro- cessing, 17(7):1047–1060, 2008. Sira Ferradans, Nicolas Papadakis, Gabriel Peyre,´ and Jean-Franc¸ois Aujol. Regularized discrete optimal transport. SIAM Journal on Imaging Sciences, 7(3):1853–1882, 2014. Jerome H Friedman. Multivariate adaptive regression splines. The Annals of Statistics, pages 1–67, 1991. Jerome H Friedman, Jon Louis Bentley, and Raphael Ari Finkel. An algorithm for finding best matches in logarithmic expected time. ACM Transactions on Mathematical Software (TOMS), 3(3):209–226, 1977. Mariano Giaquinta and Giuseppe Modica. Mathematical Analysis: An Introduction to Functions of Several Variables. Springer Science & Business Media, 2010. Adityanand Guntuboyina, Donovan Lieu, Sabyasachi Chatterjee, and Bodhisattva Sen. Spatial adaptation in trend filtering. arXiv preprint arXiv:1702.05113, 2017. Laszl´ o´ Gyorfi,¨ Michael Kohler, Adam Krzyzak, and Harro Walk. A distribution-free theory of nonparametric regression. Springer Science & Business Media, 2006. Martin T Hagan, Howard B Demuth, Mark H Beale, and Orlando De Jesus.´ Neural Network Design, volume 20. Pws Pub. Boston, 1996. Wolfgang Hardle,¨ Gerard Kerkyacharian, Dominique Picard, and Alexander Tsybakov. Wavelets, approxi- mation, and statistical applications, volume 129. 
Springer Science & Business Media, 2012. Tomislav Hengl, Gerard BM Heuvelink, and David G Rossiter. About regression-kriging: from equations to case studies. Computers & geosciences, 33(10):1301–1315, 2007. Holger Hoefling. A path algorithm for the fused lasso signal approximator. Journal of Computational and Graphical Statistics, 19(4):984–1006, 2010.

43 Jan-Christian Hutter and Philippe Rigollet. Optimal rates for total variation denoising. Annual Conference on Learning Theory, 29:1115–1146, 2016. Nicholas Johnson. A dynamic programming algorithm for the fused lasso and l0-segmentation. Journal of Computational and Graphical Statistics, 22(2):246–260, 2013. Seung-Jean Kim, Kwangmoo Koh, Stephen Boyd, and Dimitry Gorinevsky. `1 trend filtering. SIAM Review, 51(2):339–360, 2009. Aryeh Kontorovich, Sivan Sabato, and Ruth Urner. Active nearest-neighbor learning in metric spaces. In Advances in Neural Information Processing Systems, pages 856–864, 2016. Samory Kpotufe. Escaping the curse of dimensionality with a tree-based regressor. arXiv preprint arXiv:0902.3453, 2009. Samory Kpotufe. K-NN regression adapts to local intrinsic dimension. In Advances in Neural Information Processing Systems, pages 729–737, 2011. Samory Kpotufe and Sanjoy Dasgupta. A tree-based regressor that adapts to intrinsic dimension. Journal of Computer and System Sciences, 78(5):1496–1515, 2012. Loic Landrieu and Guillaume Obozinski. Cut pursuit: fast algorithms to learn piecewise constant functions on general weighted graphs. HAL preprint hal-01306779, 2015. Kevin Lin, James L Sharpnack, Alessandro Rinaldo, and Ryan J Tibshirani. A sharp error analysis for the fused lasso, with application to approximate changepoint screening. In Advances in Neural Information Processing Systems, pages 6887–6896, 2017. Enno Mammen and Sara van de Geer. Locally apadtive regression splines. Annals of Statistics, 25(1): 387–413, 1997. Gary L Miller, Shang-Hua Teng, William Thurston, and Stephen A Vavasis. Separators for sphere-packings and nearest neighbor graphs. Journal of the ACM (JACM), 44(1):1–29, 1997. Francesco Ortelli and Sara van de Geer. On the total variation regularized estimator over a class of tree graphs. Electronic Journal of Statistics, 12(2):4517–4570, 2018. Oscar Hernan Madrid Padilla, James G Scott, James Sharpnack, and Ryan J Tibshirani. The DFS fused lasso: Linear-time denoising over general graphs. Journal of Machine Learning Research, 18(176):1–36, 2018. Ashley Petersen, Noah Simon, and Daniela Witten. Convex regression with interpretable sharp partitions. The Journal of Machine Learning Research, 17(1):3240–3270, 2016a. Ashley Petersen, Daniela Witten, and Noah Simon. Fused lasso additive model. Journal of Computational and Graphical Statistics, 25(4):1005–1025, 2016b. Leonid Rudin, Stanley Osher, and Emad Faterni. Nonlinear total variation based noise removal algorithms. Physica D: Nonlinear Phenomena, 60(1):259–268, 1992. Veeranjaneyulu Sadhanala and Ryan J Tibshirani. Additive models with trend filtering. arXiv preprint arXiv:1702.05037, 2017. Veeranjaneyulu Sadhanala, Yu-Xiang Wang, and Ryan J. Tibshirani. Total variation classes beyond 1d: Minimax rates, and the limitations of linear smoothers. In Advances in Neural Information Processing Systems, pages 3513–3521, 2016. Veeranjaneyulu Sadhanala, Yu-Xiang Wang, James L Sharpnack, and Ryan J Tibshirani. Higher-order total variation classes on grids: Minimax theory and trend filtering methods. In Advances in Neural Information Processing Systems, pages 5796–5806, 2017. Shashank Singh and Barnabas´ Poczos.´ Analysis of k-nearest neighbor distances with application to entropy estimation. arXiv preprint arXiv:1603.08578, 2016. Charles J Stone. Consistent nonparametric regression. The Annals of Statistics, pages 595–620, 1977. , Michael Saunders, Saharon Rosset, Ji Zhu, and Keith Knight. 
Sparsity and smoothness via the fused lasso. Journal of the Royal Statistical Society: Series B, 67(1):91–108, 2005. Ryan J. Tibshirani. Adaptive piecewise polynomial estimation via trend filtering. The Annals of Statistics,

44 42(1):285–323, 2014. Ryan J. Tibshirani and Jonathan Taylor. The solution path of the generalized lasso. The Annals of Statistics, 39(3):1335–1371, 2011. Ulrike Von Luxburg, Agnes Radl, and Matthias Hein. Hitting and commute times in large graphs are often misleading. Journal of Machine Learning Research, 15:1751–1798, 2014. Stefan Wager and Susan Athey. Estimation and inference of heterogeneous treatment effects using random forests. Journal of the American Statistical Association, 113(523):1228–1242, 2018. Yu-Xiang Wang, James Sharpnack, Alex Smola, and Ryan J Tibshirani. Trend filtering on graphs. Journal of Machine Learning Research, 17(105):1–41, 2016. Yun Yang and David B Dunson. Bayesian manifold regression. The Annals of Statistics, 44(2):876–905, 2016. Yun Yang, Surya T Tokdar, et al. Minimax-optimal nonparametric regression in high dimensions. The Annals of Statistics, 43(2):652–674, 2015. Chi Zhang, Feifei Li, and Jeffrey Jestes. Efficient parallel knn joins for large data in mapreduce. In Pro- ceedings of the 15th International Conference on Extending Database Technology, pages 38–49. ACM, 2012. Xiaojin Zhu, Zoubin Ghahramani, and John Lafferty. Semi-supervised learning using Gaussian fields and harmonic functions. International Conference on Machine Learning, 20:912–919, 2003. William P Ziemer. Weakly differentiable functions: Sobolev spaces and functions of bounded variation, volume 120. Springer Science & Business Media, 2012.

45