EE290 Mathematics of Science, Lecture 22 (11/19/2019)
Lecture 22: Robust Location Estimation
Lecturer: Jiantao Jiao    Scribe: Vignesh Subramanian

In this lecture, we get a historical perspective on the robust estimation problem and discuss Huber's work [1] on robust estimation of a location parameter. The Huber loss is given by,

$$\rho_{\text{Huber}}(t) = \begin{cases} \frac{1}{2} t^2, & |t| \le k \\ k|t| - \frac{1}{2} k^2, & |t| > k. \end{cases} \qquad (1)$$

Here $k$ is a parameter, and the idea behind the loss function is to penalize large residuals (beyond $k$) linearly instead of quadratically. Figure 1 shows the Huber loss function for $k = 1$.

Figure 1: The green line plots the Huber loss function for $k = 1$, and the blue line plots the quadratic function $\frac{1}{2} t^2$.

In this lecture we will develop an intuitive understanding of the reasons behind the particular form of this function (quadratic in the interior, linear in the exterior, and convex), and we will see that this loss function is optimal for one-dimensional robust estimation in the Gaussian location model. First we describe the problem setting.
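Throughout, it may help to keep a concrete implementation of (1) at hand. Below is a minimal NumPy sketch; the function name and the default $k = 1$ are our own illustrative choices, not from Huber's paper.

```python
import numpy as np

def huber_loss(t, k=1.0):
    """Huber loss rho_k(t) from (1): quadratic for |t| <= k, linear beyond k."""
    t = np.asarray(t, dtype=float)
    quadratic = 0.5 * t ** 2
    linear = k * np.abs(t) - 0.5 * k ** 2
    return np.where(np.abs(t) <= k, quadratic, linear)

# The two branches agree in value and slope at |t| = k, so the loss is C^1 and convex.
print(huber_loss([-3.0, -1.0, 0.0, 1.0, 3.0], k=1.0))
```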

1 Problem Setting

Suppose we observe $X_1, X_2, \ldots, X_n$, where the $X_i - \mu \sim F \in \mathcal{F}$ are i.i.d. Here,
$$\mathcal{F} = \{F \mid F = (1 - \epsilon) G + \epsilon H, \; H \in \mathcal{M}\}, \qquad (2)$$
where $G \in \mathcal{M}$ is some fixed distribution function which is usually assumed to have zero mean, and $\mathcal{M}$ denotes the space of all probability measures. This describes the corruption model where the observed distribution is a convex combination of the true distribution $G$ and an arbitrary corruption distribution $H$. It is a location model since we assume $X - \mu$ has distribution $F$, where $\mu \in \mathbb{R}$ is unknown. The goal is to estimate the parameter $\mu$.
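For intuition, here is a small sketch of drawing samples from the contamination model (2), assuming for illustration that $G = \mathcal{N}(0,1)$ and that $H$ is a point mass far out in the tail; the function name, the default values, and this particular choice of $H$ are hypothetical, not from the notes.

```python
import numpy as np

def sample_contaminated(n, mu=0.0, eps=0.1, outlier=50.0, seed=None):
    """Draw X_1, ..., X_n with X_i - mu ~ (1 - eps) * N(0, 1) + eps * delta_{outlier}."""
    rng = np.random.default_rng(seed)
    clean = rng.standard_normal(n)      # draws from G = N(0, 1)
    mask = rng.random(n) < eps          # each sample is corrupted with probability eps
    return mu + np.where(mask, outlier, clean)

x = sample_contaminated(1000, mu=2.0, eps=0.1, seed=0)
```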

First we must determine how we evaluate estimators. In his paper, Huber restricted his attention to M-estimators of the form
$$\hat{\mu} = \arg\min_{t} \sum_{i=1}^{n} \rho(X_i - t).$$

As an example, if $\rho(t) = \frac{1}{2} t^2$, then $\hat{\mu} = \frac{1}{n} \sum_{i=1}^{n} X_i$, the empirical mean, which is sensitive to outliers. To evaluate estimators, Huber looks at asymptotics.
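A minimal sketch of computing such an M-estimate numerically on contaminated data, using scipy.optimize.minimize_scalar to carry out the minimization over $t$; the tuning constant $k = 1.345$ and the outlier value are illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Huber loss as in (1); k = 1.345 is a commonly used tuning constant.
huber = lambda t, k=1.345: np.where(np.abs(t) <= k, 0.5 * t**2, k * np.abs(t) - 0.5 * k**2)

def m_estimate(x, rho):
    """Location M-estimate: argmin over t of sum_i rho(x_i - t)."""
    return minimize_scalar(lambda t: float(np.sum(rho(x - t)))).x

rng = np.random.default_rng(0)
x = np.concatenate([rng.standard_normal(900), np.full(100, 50.0)])  # 10% gross outliers
print(np.mean(x))            # about 5: the empirical mean is dragged toward the outliers
print(m_estimate(x, huber))  # about 0.2: the Huber M-estimate largely resists them
```

With 10% of the mass moved to a gross outlier, the mean shifts by about 5 while the Huber estimate moves only slightly; this is the sensitivity contrast described above.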

2 Asymptotics

Let $\psi(t) = \rho'(t)$. Then, from the first-order condition of optimality, an optimizer $T_n$ must satisfy
$$\sum_{i=1}^{n} \psi(X_i - T_n) = 0. \qquad (3)$$

Assume for now that $\mu = 0$ and $\mathbb{E}_F[\psi(X)] = 0$. This means that for the population version of (3), $T_n = 0$ is a solution. We now assume that $T_n \to 0$ as $n \to \infty$, and we provide a proof sketch showing that $T_n$ is asymptotically normal, and compute its asymptotic variance. From (3), using the first-order approximation of $\psi(X_i - T_n)$ around the point $X_i$ and the mean-value theorem, for some $0 \le \theta \le 1$ we have
$$\sum_{i=1}^{n} \psi(X_i) - T_n \sum_{i=1}^{n} \psi'(X_i - \theta T_n) = 0.$$
Rearranging, we get
$$\sqrt{n}\, T_n = \frac{\frac{1}{\sqrt{n}} \sum_{i=1}^{n} \psi(X_i)}{\frac{1}{n} \sum_{i=1}^{n} \psi'(X_i - \theta T_n)}.$$

Since $\mathbb{E}_F[\psi(X)] = 0$, the numerator converges weakly, by the central limit theorem, to $N \sim \mathcal{N}(0, \mathbb{E}_F[\psi(X)^2])$. Further, since we assumed $T_n \to 0$ as $n \to \infty$, by the weak law of large numbers the denominator converges to $\mathbb{E}_F[\psi'(X)]$. Thus we have
$$\sqrt{n}\,(T_n - 0) \xrightarrow{w} \mathcal{N}\!\left(0, \; \frac{\mathbb{E}_F[\psi(X)^2]}{(\mathbb{E}_F[\psi'(X)])^2}\right).$$
One basic result for M-estimators is that the maximum likelihood estimator achieves the smallest asymptotic variance among all M-estimators. We provide a proof below. Letting $f(x)$ denote the density function of $F$, supported on $[a, b]$, we have
$$\mathbb{E}_F[\psi'(X)] = \int_a^b f(x)\, \psi'(x)\, dx = \Big[ f(x)\, \psi(x) \Big]_a^b - \int_a^b \psi(x)\, f'(x)\, dx.$$
If we assume that $f(a) = f(b) = 0$, then we have
$$\mathbb{E}_F[\psi'(X)] = -\int_a^b \psi(x)\, f'(x)\, dx.$$
Thus,

$$\frac{\mathbb{E}_F[\psi(X)^2]}{(\mathbb{E}_F[\psi'(X)])^2} = \frac{\int_a^b \psi(x)^2 f(x)\, dx}{\left( \int_a^b \psi(x)\, \frac{f'(x)}{f(x)}\, f(x)\, dx \right)^2} \ge \frac{1}{\int_a^b \left( \frac{f'(x)}{f(x)} \right)^2 f(x)\, dx},$$

where we used the Cauchy-Schwarz inequality. Observe that the RHS does not depend on $\psi$, and the inequality is tight when $\psi(x) \propto -\frac{f'(x)}{f(x)}$, which corresponds to $f(t) \propto e^{-(\rho(t) - A)}$ for some constant $A$. Thus the variance-minimizing choice of $\rho$ turns the M-estimator into the maximum likelihood estimator. When $f(x)$ is a Gaussian density, $\rho$ is the squared loss and the optimizer $T_n$ is the empirical mean.
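As a numerical illustration of the asymptotic variance formula and of this bound (a sketch under the assumption $F = \mathcal{N}(0,1)$, with $k = 1.345$ as an example tuning constant; the helper name asymptotic_variance is ours):

```python
import numpy as np
from scipy import integrate, stats

def asymptotic_variance(psi, dpsi, pdf):
    """E_F[psi(X)^2] / (E_F[psi'(X)])^2, computed by numerical integration."""
    num, _ = integrate.quad(lambda x: psi(x) ** 2 * pdf(x), -np.inf, np.inf)
    den, _ = integrate.quad(lambda x: dpsi(x) * pdf(x), -np.inf, np.inf)
    return num / den ** 2

k = 1.345
huber_psi = lambda x: np.clip(x, -k, k)              # derivative of the Huber loss
huber_dpsi = lambda x: 1.0 if abs(x) <= k else 0.0   # its derivative (almost everywhere)

# For F = N(0,1): the score psi(x) = x (squared loss / MLE) attains the bound 1/I(f) = 1;
# the Huber score pays only a small premium.
print(asymptotic_variance(lambda x: x, lambda x: 1.0, stats.norm.pdf))
print(asymptotic_variance(huber_psi, huber_dpsi, stats.norm.pdf))
```

The second value, roughly 1.05, is the familiar statement that the Huber estimator with $k = 1.345$ retains about 95% efficiency at the Gaussian.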

3 Two player game and Huber’s Theorem

Consider a two-player game with payoff function given by $-V(\psi, F)$, where
$$V(\psi, F) = \frac{\mathbb{E}_F[\psi(X)^2]}{(\mathbb{E}_F[\psi'(X)])^2}$$
is the asymptotic variance from Section 2. Here $\psi$ is the action chosen by the statistician to maximize the payoff (minimize the asymptotic variance), and $F$ is chosen by the adversary to minimize the payoff (maximize the asymptotic variance).

Theorem 1. Assume $G$ is symmetric around 0 and log-concave, with density function $g(x)$ having convex support. Define

$$\mathcal{F}_S = \{F \mid F = (1 - \epsilon) G + \epsilon H, \; H \text{ symmetric around } 0\}. \qquad (4)$$

The two-player game under the assumptions described above has a saddle point $(\psi_0, F_0)$, i.e.,

$$\sup_{F \in \mathcal{F}_S} V(\psi_0, F) = V(\psi_0, F_0) = \inf_{\psi} V(\psi, F_0).$$

First we describe the form of $f_0(x)$, the density function of $F_0$. Let $[t_0, t_1]$ be the interval where $\left| \frac{g'(x)}{g(x)} \right| \le k$. We know that this interval exists since $g(x)$ is log-concave with convex support. Here $k$ is the solution to the equation

$$\frac{1}{1 - \epsilon} = \int_{t_0}^{t_1} g(t)\, dt + \frac{g(t_0) + g(t_1)}{k}. \qquad (5)$$
Then,
$$f_0(t) = \begin{cases} (1 - \epsilon)\, g(t_0)\, e^{k(t - t_0)}, & t \le t_0 \\ (1 - \epsilon)\, g(t), & t_0 < t < t_1 \\ (1 - \epsilon)\, g(t_1)\, e^{-k(t - t_1)}, & t \ge t_1 \end{cases} \qquad (6)$$
$$\psi_0(t) = -\frac{f_0'(t)}{f_0(t)}. \qquad (7)$$
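In code, $\psi_0$ in (7) is just the score $-g'/g$ clipped to $[-k, k]$. Here is a minimal sketch, assuming $g$ and its derivative are available as functions (all names are ours):

```python
import numpy as np

def psi_0(t, g, dg, k):
    """psi_0 = -f_0'/f_0 from (6)-(7): the score -g'/g of g, clipped to [-k, k].

    Log-concavity of g makes -g'(t)/g(t) non-decreasing, so clipping at +/- k
    reproduces the three cases of (6): constant score -k below t_0, the score of
    g on (t_0, t_1), and constant score +k above t_1.
    """
    t = np.asarray(t, dtype=float)
    return np.clip(-dg(t) / g(t), -k, k)

# Example: for the standard Gaussian g, -g'/g = t, so psi_0 is exactly the Huber score.
g  = lambda t: np.exp(-t**2 / 2) / np.sqrt(2 * np.pi)
dg = lambda t: -t * g(t)
print(psi_0(np.array([-5.0, -0.5, 0.5, 5.0]), g, dg, k=1.345))  # [-1.345 -0.5 0.5 1.345]
```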

Before we look at the proof of this theorem, we look at an example.

Example 2. Let $g(t) = \frac{1}{\sqrt{2\pi}} e^{-t^2/2}$. Then $-t_0 = t_1 = k$, and we can solve for $k$ by binary search or line search using the equation

$$\frac{1}{1 - \epsilon} = \int_{-k}^{k} g(t)\, dt + \frac{2\, g(k)}{k}.$$
The optimal loss function to use in this case is the Huber loss function given by (1):

$$\rho_{\text{Huber}}(t) = \begin{cases} \frac{1}{2} t^2, & |t| \le k \\ k|t| - \frac{1}{2} k^2, & |t| > k. \end{cases}$$
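A sketch of solving (5) for $k$ in this Gaussian case: scipy's brentq, a bracketing root finder, stands in for the binary search mentioned above, and the helper name huber_k is ours.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import norm

def huber_k(eps):
    """Solve (5) for the Gaussian example: 1/(1-eps) = (2*Phi(k) - 1) + 2*phi(k)/k."""
    lhs_minus_rhs = lambda k: (2 * norm.cdf(k) - 1) + 2 * norm.pdf(k) / k - 1 / (1 - eps)
    # Near k = 0 the term 2*phi(k)/k blows up (positive), while for large k the
    # expression is negative, so a sign change is guaranteed inside the bracket.
    return brentq(lhs_minus_rhs, 1e-8, 20.0)

for eps in (0.01, 0.05, 0.10, 0.25):
    print(eps, round(huber_k(eps), 3))   # k shrinks as the contamination level grows
```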

Note that for a generic density $g(t)$ the dependence of $t_0$ and $t_1$ on $k$ can be highly non-linear, and it is not easy to solve for $k$ using (5). Next we look at the proof of Theorem 1.

Proof. First we verify that the contamination distribution $H_0$ determined by $F_0$ and $G$ is indeed a probability distribution, i.e., its density function $h_0(t)$ is non-negative and integrates to one. Since $F_0 = (1 - \epsilon) G + \epsilon H_0$, we have
$$\epsilon\, h_0(t) = \begin{cases} (1 - \epsilon) \left( g(t_0)\, e^{k(t - t_0)} - g(t) \right), & t \le t_0 \\ 0, & t_0 < t < t_1 \\ (1 - \epsilon) \left( g(t_1)\, e^{-k(t - t_1)} - g(t) \right), & t \ge t_1. \end{cases} \qquad (8)$$

Since $g(t)$ and $f_0(t)$ each integrate to one, $h_0(t)$ integrates to one. To show non-negativity of $h_0(t)$ we use the fact that $g(t)$ is log-concave, which implies that $-\log g(t)$ is a convex function. For any $t \le t_0$,
$$-\log g(t) \ge -\log g(t_0) - k(t - t_0) \;\Longrightarrow\; g(t) \le g(t_0)\, e^{k(t - t_0)},$$
where we used the facts that $\frac{g'(t_0)}{g(t_0)} = k$ and $(\log g(t))' = \frac{g'(t)}{g(t)}$. The proof for the case $t \ge t_1$ follows via a similar argument. Next we need to show that $(\psi_0, F_0)$ is a saddle point. We have

$$V(\psi_0, F_0) = \inf_{\psi} V(\psi, F_0),$$
because for the given $F_0$, the choice $\psi_0 = -f_0'/f_0$ makes the M-estimator the maximum likelihood estimator, which achieves the smallest asymptotic variance, as discussed in Section 2. Next we show that

$$V(\psi_0, F_0) = \sup_{F \in \mathcal{F}_S} V(\psi_0, F).$$

For any $F \in \mathcal{F}_S$ we have
$$V(\psi_0, F) = \frac{\mathbb{E}_F[\psi_0(X)^2]}{(\mathbb{E}_F[\psi_0'(X)])^2}.$$
We can rewrite the numerator as

$$\mathbb{E}_F[\psi_0(X)^2] = (1 - \epsilon)\, \mathbb{E}_G[\psi_0(X)^2] + \epsilon\, \mathbb{E}_H[\psi_0(X)^2] \le (1 - \epsilon)\, \mathbb{E}_G[\psi_0(X)^2] + \epsilon\, k^2,$$

where we upper bound $\mathbb{E}_H[\psi_0(X)^2]$ using $\psi_0(t) = -\frac{f_0'(t)}{f_0(t)}$ and the form of $f_0(t)$ from (6), which give $|\psi_0(t)| = k$ for $t \le t_0$ or $t \ge t_1$ and $|\psi_0(t)| = \left| \frac{g'(t)}{g(t)} \right| \le k$ for $t_0 < t < t_1$. Note that $F_0$ places all of its contamination in the tails ($h_0(t) = 0$ for $t_0 < t < t_1$), where $\psi_0^2$ attains its maximum value $k^2$, and thus maximizes the numerator. Similarly, the denominator can be written as

$$(\mathbb{E}_F[\psi_0'(X)])^2 = \big( (1 - \epsilon)\, \mathbb{E}_G[\psi_0'(X)] + \epsilon\, \mathbb{E}_H[\psi_0'(X)] \big)^2 \ge \big( (1 - \epsilon)\, \mathbb{E}_G[\psi_0'(X)] \big)^2,$$

where we used the facts that $\psi_0' \ge 0$ pointwise and $\psi_0'(t) = 0$ for $t \le t_0$ or $t \ge t_1$. Again, since $h_0(t) = 0$ for $t_0 < t < t_1$, the contamination under $F_0$ is supported where $\psi_0'$ vanishes, so $F_0$ attains this lower bound and minimizes the denominator. Thus $F_0$ is the maximizer of $V(\psi_0, F)$ among all $F \in \mathcal{F}_S$.
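As a sanity check of this saddle-point argument (not part of the original notes), one can compare $V(\psi_0, F)$ when the contamination is placed far in the tails versus in the middle. The sketch below assumes the Gaussian example with $\epsilon = 0.1$ and a symmetrized point-mass $H$; the value of $k$ is only approximate.

```python
import numpy as np
from scipy import integrate, stats

eps = 0.10
k = 1.14           # roughly the solution of (5) for eps = 0.10 (cf. the huber_k sketch)
psi  = lambda x: np.clip(x, -k, k)                  # psi_0 for Gaussian G
dpsi = lambda x: 1.0 if abs(x) <= k else 0.0        # psi_0' (almost everywhere)

E_G_psi2, _ = integrate.quad(lambda x: psi(x)**2 * stats.norm.pdf(x), -np.inf, np.inf)
E_G_dpsi, _ = integrate.quad(stats.norm.pdf, -k, k)   # E_G[psi_0'] = P_G(|X| <= k)

def V(c):
    """V(psi_0, F) when H is the symmetrized point mass at +/- c (psi_0^2, psi_0' are even)."""
    num = (1 - eps) * E_G_psi2 + eps * psi(c) ** 2
    den = (1 - eps) * E_G_dpsi + eps * dpsi(c)
    return num / den ** 2

print(V(10.0))  # contamination far in the tails: equals V(psi_0, F_0)
print(V(0.0))   # contamination in the middle: strictly smaller, as the proof predicts
```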

4 Summary

There were several criticisms of Huber's work, including those on the assumptions that $G$ and $H$ are symmetric, and the requirement that $\epsilon$ be known in order to compute the Huber loss. Further, in higher dimensions the breakdown point scales as $\frac{1}{1+d}$, which is undesirable. (From Wikipedia: intuitively, the breakdown point of an estimator is the proportion of incorrect observations (e.g., arbitrarily large observations) an estimator can handle before giving an incorrect (e.g., arbitrarily large) result.) In a subsequent paper Huber removes the assumptions that $G, H$ are symmetric and shows that the Huber M-estimator is exactly minimax for coverage probability in robust location estimation for Gaussian models.

References

[1] P. J. Huber, "Robust estimation of a location parameter," Annals of Mathematical Statistics, vol. 35, no. 1, pp. 73–101, Mar. 1964. [Online]. Available: http://dx.doi.org/10.1214/aoms/1177703732
