Feature Selection & the Shapley-Folkman Theorem.

Alexandre d’Aspremont,

CNRS & D.I., École Normale Supérieure.

With Armin Askari, Laurent El Ghaoui (UC Berkeley) and Quentin Rebjock (EPFL)

Introduction

Feature Selection.

■ Reduce number of variables while preserving classification performance.

■ Often improves test performance, especially when samples are scarce.

■ Helps interpretation.

Classical examples: LASSO, $\ell_1$-logistic regression, RFE-SVM, ...

Introduction: feature selection

RNA classification. Find genes which best discriminate cell type (lung cancer vs. control). 35,238 genes, 2,695 examples. [Lachmann et al., 2018]

[Figure: Objective ($\times 10^{11}$) versus number of features ($k$), for $k$ up to 35,000.]

Best ten genes: MT-CO3, MT-ND4, MT-CYB, RP11-217O12.1, LYZ, EEF1A1, MT-CO1, HBA2, HBB, HBA1.

Introduction: feature selection

Applications. Mapping brain activity by fMRI.

From the PARIETAL team at INRIA.

Introduction: feature selection

fMRI. Many voxels and very few samples lead to false discoveries.

Wired article on Bennett et al., "Neural Correlates of Interspecies Perspective Taking in the Post-Mortem Atlantic Salmon: An Argument For Proper Multiple Comparisons Correction", Journal of Serendipitous and Unexpected Results, 2010.

Introduction: linear models

Linear models. Select features corresponding to large entries of the weight vector $w$.

■ LASSO solves $\min_w \|Xw - y\|_2^2 + \lambda \|w\|_1$, with linear prediction given by $w^T x$.

■ Linear SVM solves $\min_w \sum_i \max\{0,\, 1 - y_i\, w^T x_i\} + \lambda \|w\|_2^2$, with linear classification rule $\mathop{\bf sign}(w^T x)$.

■ $\ell_1$-logistic regression, etc. (a minimal selection sketch follows below)
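As a quick illustration (not from the slides), here is a minimal scikit-learn sketch of weight-based feature selection with the LASSO; the toy data, the regularization level and the threshold are arbitrary choices for the example.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Toy data: 200 samples, 50 features, labels driven by the first 5 features.
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 50))
y = np.sign(X[:, :5] @ np.ones(5) + 0.1 * rng.standard_normal(200))

# LASSO: min_w ||Xw - y||_2^2 + lambda ||w||_1 (sklearn rescales the loss by 1/(2n)).
model = Lasso(alpha=0.1).fit(X, y)

# Keep features with nonzero (large) weights.
selected = np.flatnonzero(np.abs(model.coef_) > 1e-8)
print("selected features:", selected)
```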

In practice.

■ Relatively high complexity on very large-scale data sets.

■ Recovery results require uncorrelated features (incoherence, RIP, etc.).

■ Cheaper featurewise methods (ANOVA, TF-IDF, etc.) have relatively poor performance.

Outline

■ Sparse Naive Bayes

■ The Shapley-Folkman theorem

■ Duality gap bounds

■ Numerical performance

Multinomial Naive Bayes

Multinomial Naive Bayes. In the multinomial model

\[
\log \mathop{\bf Prob}(x \mid C_\pm) = x^\top \log \theta^\pm + \log\!\left( \frac{\left(\sum_{j=1}^m x_j\right)!}{\prod_{j=1}^m x_j!} \right).
\]

Training by maximum likelihood

\[
(\theta^+_*, \theta^-_*) = \mathop{\rm argmax}_{\substack{\mathbf{1}^\top \theta^+ = \mathbf{1}^\top \theta^- = 1\\ \theta^+,\, \theta^- \in [0,1]^m}} \; f^{+\top} \log \theta^+ + f^{-\top} \log \theta^-,
\]
where $f^+, f^-$ are the vectors of feature counts aggregated over the positive and negative classes.

Linear classification rule: for a given test point $x \in \mathbb{R}^m$, set $\hat{y}(x) = \mathop{\bf sign}(v + w^\top x)$, where

\[
w \triangleq \log \theta^+_* - \log \theta^-_*, \qquad v \triangleq \log \mathop{\bf Prob}(C_+) - \log \mathop{\bf Prob}(C_-).
\]
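A minimal sketch (not from the slides) of the maximum likelihood training and the linear rule above, assuming f_pos, f_neg are the class-wise feature count vectors and n_pos, n_neg the class sizes; the Laplace smoothing term is an addition to keep logarithms finite.

```python
import numpy as np

def mnb_weights(f_pos, f_neg, n_pos, n_neg, smoothing=1.0):
    """Multinomial naive Bayes weights (w, v) from class-wise count vectors."""
    f_pos = np.asarray(f_pos, dtype=float) + smoothing   # Laplace smoothing to avoid log 0
    f_neg = np.asarray(f_neg, dtype=float) + smoothing
    theta_pos = f_pos / f_pos.sum()                       # maximum likelihood estimates
    theta_neg = f_neg / f_neg.sum()
    w = np.log(theta_pos) - np.log(theta_neg)
    v = np.log(n_pos / (n_pos + n_neg)) - np.log(n_neg / (n_pos + n_neg))
    return w, v

def mnb_predict(x, w, v):
    # linear rule: yhat(x) = sign(v + w^T x)
    return np.sign(v + w @ x)
```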

Sparse Naive Bayes

Naive Feature Selection. Make $w \triangleq \log \theta^+_* - \log \theta^-_*$ sparse.

Solve
\[
(\theta^+_*, \theta^-_*) = \mathop{\rm argmax} \left\{ f^{+\top} \log \theta^+ + f^{-\top} \log \theta^- :
\|\theta^+ - \theta^-\|_0 \le k,\ \mathbf{1}^\top \theta^+ = \mathbf{1}^\top \theta^- = 1,\ \theta^+, \theta^- \ge 0 \right\}
\tag{SMNB}
\]
where $k \ge 0$ is a target number of features. Features for which $\theta^+_i = \theta^-_i$ can be discarded.

Nonconvex problem.

■ Convex relaxation?

■ Approximation bounds?

Sparse Naive Bayes

Convex Relaxation. The dual is very simple.

Sparse Multinomial Naive Bayes [Askari, A., El Ghaoui, 2019]

Let $\phi(k)$ be the optimal value of (SMNB). Then $\phi(k) \le \psi(k)$, where $\psi(k)$ is the optimal value of the following one-dimensional problem

\[
\psi(k) := C + \min_{\alpha \in [0,1]} s_k(h(\alpha)),
\tag{USMNB}
\]
where $C$ is a constant, $s_k(\cdot)$ is the sum of the top $k$ entries of its vector argument, and for $\alpha \in (0,1)$,
\[
h(\alpha) := f^+ \circ \log f^+ + f^- \circ \log f^- - (f^+ + f^-) \circ \log(f^+ + f^-) - f^+ \log \alpha - f^- \log(1 - \alpha).
\]

Solved by bisection, with linear complexity $O(n + k \log k)$. Approximation bounds?
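A small numerical sketch of the relaxation (USMNB), under implementation assumptions (this is not the authors' released code): the constant $C$ is dropped since it does not affect the minimizer, and scipy's bounded scalar minimizer stands in for the bisection scheme mentioned above.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def sparse_mnb_relaxation(f_pos, f_neg, k, eps=1e-12):
    """Minimize s_k(h(alpha)) over alpha in (0, 1); returns value, alpha*, support."""
    f_pos = np.asarray(f_pos, dtype=float)
    f_neg = np.asarray(f_neg, dtype=float)
    s = f_pos + f_neg

    def xlogx(v):
        # convention 0 * log 0 = 0
        return np.where(v > 0, v * np.log(np.maximum(v, eps)), 0.0)

    const_part = xlogx(f_pos) + xlogx(f_neg) - xlogx(s)   # entrywise, independent of alpha

    def top_k_sum(alpha):
        h = const_part - f_pos * np.log(alpha) - f_neg * np.log(1.0 - alpha)
        return np.sort(h)[-k:].sum()                       # s_k(h(alpha)): sum of k largest entries

    res = minimize_scalar(top_k_sum, bounds=(eps, 1.0 - eps), method="bounded")
    alpha_star = res.x
    h_star = const_part - f_pos * np.log(alpha_star) - f_neg * np.log(1.0 - alpha_star)
    support = np.argsort(h_star)[-k:]                       # indices of the k largest entries
    return res.fun, alpha_star, support
```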

Outline

■ Sparse Naive Bayes

■ The Shapley-Folkman theorem

■ Duality gap bounds

■ Numerical performance

Shapley-Folkman Theorem

Minkowski sum. Given sets $X, Y \subset \mathbb{R}^d$, we have
\[
X + Y = \{ x + y : x \in X,\ y \in Y \}.
\]

(CGAL User and Reference Manual)

Given subsets $V_i \subset \mathbb{R}^d$, we have
\[
\mathop{\bf Co}\left( \sum_i V_i \right) = \sum_i \mathop{\bf Co}(V_i).
\]

Shapley-Folkman Theorem

The $\ell_{1/2}$ ball, Minkowski average of two and ten balls, convex hull.


Minkowski sum of the first five digits (obtained by sampling).
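A tiny one-dimensional illustration (not from the slides) of the convexifying effect of Minkowski averaging: averaging $n$ copies of the nonconvex set $\{0, 1\}$ fills in its convex hull $[0, 1]$, with Hausdorff distance shrinking like $O(1/n)$.

```python
import numpy as np

def minkowski_average_1d(V, n):
    """All values (v_1 + ... + v_n)/n with each v_i drawn from the finite set V."""
    sums = np.array([0.0])
    for _ in range(n):
        sums = np.unique((sums[:, None] + np.asarray(V, dtype=float)[None, :]).ravel())
    return sums / n

def hausdorff_to_hull(points):
    """Hausdorff distance between a finite set of reals and its convex hull (an interval)."""
    pts = np.sort(points)
    gaps = np.diff(pts)
    return gaps.max() / 2.0 if len(gaps) else 0.0

V = [0.0, 1.0]                         # a nonconvex set; its convex hull is [0, 1]
for n in [1, 2, 5, 10, 50]:
    avg = minkowski_average_1d(V, n)
    print(n, hausdorff_to_hull(avg))   # distance to the hull shrinks like 1/(2n)
```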

Shapley-Folkman Theorem

Shapley-Folkman Theorem [Starr, 1969]

Suppose $V_i \subset \mathbb{R}^d$, $i = 1, \ldots, n$, and
\[
x \in \mathop{\bf Co}\left( \sum_{i=1}^n V_i \right) = \sum_{i=1}^n \mathop{\bf Co}(V_i),
\]
then
\[
x \in \sum_{i \in [1,n] \setminus \mathcal{S}_x} V_i + \sum_{i \in \mathcal{S}_x} \mathop{\bf Co}(V_i),
\]
where $|\mathcal{S}_x| \le d$.

Shapley-Folkman Theorem

Proof sketch. Write $x \in \sum_{i=1}^n \mathop{\bf Co}(V_i)$, or
\[
\begin{pmatrix} x \\ \mathbf{1}_n \end{pmatrix} = \sum_{i=1}^n \sum_{j=1}^{d+1} \lambda_{ij} \begin{pmatrix} v_{ij} \\ e_i \end{pmatrix}, \quad \text{for } \lambda \ge 0.
\]

Conic Carathéodory then yields a representation with at most $n + d$ nonzero coefficients. Since each of the $n$ blocks needs at least one nonzero $\lambda_{ij}$, a pigeonhole argument shows that at most $d$ blocks can carry more than one, i.e. $x_i \in \mathop{\bf Co}(V_i)$ for at most $d$ indices and $x_i \in V_i$ for the rest.

[Diagram: the coefficient matrix $\lambda_{ij}$, with at most $d$ blocks where $x_i \in \mathop{\bf Co}(V_i)$ and the remaining blocks where $x_i \in V_i$.]

Number of nonzero λij controls gap with convex hull.

Shapley-Folkman: geometric consequences

Consequences.

■ If the sets $V_i \subset \mathbb{R}^d$ are uniformly bounded with $\mathop{\rm rad}(V_i) \le R$, then
\[
d_H\!\left( \frac{\sum_{i=1}^n V_i}{n},\ \mathop{\bf Co}\!\left( \frac{\sum_{i=1}^n V_i}{n} \right) \right) \le R\, \frac{\sqrt{\min\{n, d\}}}{n},
\]
where $\mathop{\rm rad}(V) = \inf_{x \in V} \sup_{y \in V} \|x - y\|$.

■ In particular, when $d$ is fixed and $n \to \infty$,
\[
\frac{\sum_{i=1}^n V_i}{n} \to \mathop{\bf Co}\!\left( \frac{\sum_{i=1}^n V_i}{n} \right)
\]
in the Hausdorff metric with rate $O(1/n)$.

■ Holds for many other nonconvexity measures [Fradelizi et al., 2017].

Outline

■ Sparse Naive Bayes

■ The Shapley-Folkman theorem

■ Duality gap bounds

■ Numerical performance

Nonconvex Optimization

Separable nonconvex problem. Solve

\[
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^n f_i(x_i)\\
\mbox{subject to} & Ax \le b,
\end{array}
\tag{P}
\]

in the variables $x_i \in \mathbb{R}^{d_i}$ with $d = \sum_{i=1}^n d_i$, where the $f_i$ are lower semicontinuous and $A \in \mathbb{R}^{m \times d}$.

Take the dual twice to form a convex relaxation,

\[
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^n f_i^{**}(x_i)\\
\mbox{subject to} & Ax \le b
\end{array}
\tag{CoP}
\]

in the variables $x_i \in \mathbb{R}^{d_i}$.

Nonconvex Optimization

Convex envelope. The biconjugate $f^{**}$ satisfies $\mathop{\bf epi}(f^{**}) = \mathop{\bf Co}(\mathop{\bf epi}(f))$, which means that

$f^{**}(x)$ and $f(x)$ match at extreme points of $\mathop{\bf epi}(f^{**})$.

Define the lack of convexity as $\rho(f) \triangleq \sup_{x \in \mathop{\rm dom}(f)} \{ f(x) - f^{**}(x) \}$.

Example.

[Figure: $\mathop{\bf Card}(x)$ and its convex envelope $|x|$ on $[-1, 1]$.]

The $\ell_1$ norm is the convex envelope of $\mathop{\bf Card}(x)$ on $[-1, 1]$.
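As a quick check in the scalar case (a worked computation, not on the original slide), the biconjugate can be computed directly:
\[
f(x) = \mathop{\bf Card}(x) \ \text{on } [-1, 1]
\;\Longrightarrow\;
f^*(y) = \sup_{x \in [-1,1]} \{ xy - f(x) \} = \max\{0,\, |y| - 1\},
\]
\[
f^{**}(x) = \sup_{y} \{ xy - \max\{0,\, |y| - 1\} \} = |x| \quad \text{for } x \in [-1, 1],
\]
so the lack of convexity here is $\rho(f) = \sup_{x \in [-1,1]} \{ \mathop{\bf Card}(x) - |x| \} = 1$.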

Nonconvex Optimization

Writing the epigraph of problem (P) as in [Lemaréchal and Renaud, 2001],

\[
\mathcal{G}_r \triangleq \left\{ (r_0, r) \in \mathbb{R}^{1+m} : \sum_{i=1}^n f_i(x_i) \le r_0,\ Ax - b \le r,\ x \in \mathbb{R}^d \right\},
\]
we can write the dual function of (P) as

\[
\Psi(\lambda) \triangleq \inf \left\{ r_0 + \lambda^\top r : (r_0, r) \in \mathcal{G}_r^{**} \right\},
\]
in the variable $\lambda \in \mathbb{R}^m$, where $\mathcal{G}_r^{**} = \mathop{\bf Co}(\mathcal{G}_r)$ is the closed convex hull of the epigraph $\mathcal{G}_r$. Affine constraints mean that (P) and (CoP) have the same dual [Lemaréchal and Renaud, 2001, Th. 2.11], given by

\[
\sup_{\lambda \ge 0} \Psi(\lambda)
\tag{D}
\]
in the variable $\lambda \in \mathbb{R}^m$. Roughly speaking, if $\mathcal{G}_r^{**} = \mathcal{G}_r$, there is no duality gap in (P).

Nonconvex Optimization

Epigraph & duality gap. Define

\[
\mathcal{F}_i \triangleq \left\{ \left( f_i^{**}(x_i),\, A_i x_i \right) : x_i \in \mathbb{R}^{d_i} \right\} + \mathbb{R}_+^{m+1},
\]

where $A_i \in \mathbb{R}^{m \times d_i}$ is the $i$-th block of $A$.

■ The epigraph $\mathcal{G}_r^{**}$ can be written as a Minkowski sum of the $\mathcal{F}_i$:
\[
\mathcal{G}_r^{**} = \sum_{i=1}^n \mathcal{F}_i + (0, -b) + \mathbb{R}_+^{m+1}.
\]

■ Shapley-Folkman at $x \in \mathcal{G}_r^{**}$ shows $f_i^{**}(x_i) = f_i(x_i)$ for all but at most $m + 1$ terms in the objective.

■ As $n \to \infty$ with $m/n \to 0$, $\mathcal{G}_r$ gets closer to its convex hull $\mathcal{G}_r^{**}$ and the duality gap becomes negligible.

Bound on duality gap

A priori bound on the duality gap of

\[
\begin{array}{ll}
\mbox{minimize} & \sum_{i=1}^n f_i(x_i)\\
\mbox{subject to} & Ax \le b,
\end{array}
\]
where $A \in \mathbb{R}^{m \times d}$.

Proposition [Aubin and Ekeland, 1976, Ekeland and Temam, 1999]

A priori bounds on the duality gap. Suppose the functions $f_i$ in (P) satisfy Assumption (...). There is a point $x^\star \in \mathbb{R}^d$ at which the primal optimal value of (CoP) is attained, such that

\[
\underbrace{\sum_{i=1}^n f_i^{**}(x_i^\star)}_{\mathrm{CoP}}
\;\le\;
\underbrace{\sum_{i=1}^n f_i(\hat{x}_i^\star)}_{\mathrm{P}}
\;\le\;
\underbrace{\sum_{i=1}^n f_i^{**}(x_i^\star)}_{\mathrm{CoP}}
+ \underbrace{\sum_{i=1}^{m+1} \rho(f_{[i]})}_{\mathrm{gap}}
\]
where $\hat{x}^\star$ is an optimal point of (P) and $\rho(f_{[1]}) \ge \rho(f_{[2]}) \ge \ldots \ge \rho(f_{[n]})$.

Bound on duality gap

General result. Consider the separable nonconvex problem

\[
\begin{array}{rll}
h_P(u) := & \mbox{min.} & \sum_{i=1}^n f_i(x_i)\\
& \mbox{s.t.} & \sum_{i=1}^n g_i(x_i) \le b + u
\end{array}
\tag{P}
\]

in the variables $x_i \in \mathbb{R}^{d_i}$, with perturbation parameter $u \in \mathbb{R}^m$.

Proposition [Ekeland and Temam, 1999]

A priori bounds on the duality gap. Suppose the functions $f_i, g_{ji}$ in problem (P) satisfy assumption (...) for $i = 1, \ldots, n$, $j = 1, \ldots, m$. Let

\[
\bar{p}_j = (m + 1) \max_i \rho(g_{ji}), \quad j = 1, \ldots, m,
\]
then
\[
h_P^{**}(\bar{p}) \le h_P(\bar{p}) \le h_P^{**}(0) + (m + 1) \max_i \rho(f_i),
\]
where $h_P^{**}(u)$ is the optimal value of the dual to (P).

Naive Feature Selection

Duality gap bound. Sparse naive Bayes reads

\[
\begin{array}{rll}
h_P(u) = & \min_{q, r} & -f^{+\top} \log q - f^{-\top} \log r\\
& \mbox{subject to} & \mathbf{1}^\top q = 1 + u_1,\\
& & \mathbf{1}^\top r = 1 + u_2,\\
& & \sum_{i=1}^m \mathbf{1}_{q_i \ne r_i} \le k + u_3
\end{array}
\]
in the variables $q, r \in [0,1]^m$, where $u \in \mathbb{R}^3$. There are three constraints, two of them convex, which means $\bar{p} = (0, 0, 4)$.

Theorem [Askari, A., El Ghaoui, 2019]

NFS duality gap bounds. Let φ(k) be the optimal value of (SMNB) and ψ(k) that of the convex relaxation (USMNB). We have

\[
\psi(k - 4) \le \phi(k) \le \psi(k), \quad \text{for } k \ge 4.
\]

Naive Feature Selection

Primalization. Given $\alpha_*$ solving (USMNB), reconstruct a point $(\theta^+, \theta^-)$.

For $\alpha_*$ optimal for (USMNB), let $\mathcal{I}$ be the complement of the set of indices corresponding to the top $k$ entries of $h(\alpha_*)$, set $B_\pm := \sum_{i \notin \mathcal{I}} f_i^\pm$, and

\[
\theta^+_{*i} = \theta^-_{*i} = \frac{f_i^+ + f_i^-}{\mathbf{1}^\top (f^+ + f^-)}, \ \forall i \in \mathcal{I},
\qquad
\theta^\pm_{*i} = \frac{B_+ + B_-}{B_\pm} \, \frac{f_i^\pm}{\mathbf{1}^\top (f^+ + f^-)}, \ \forall i \notin \mathcal{I}.
\]

In all but pathological scenarios, the $k$ largest coefficients in (USMNB) give the support of the solution.
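A minimal sketch of this primalization step (an implementation assumption, not the authors' released code), reusing the notation above; the degenerate case where a class has zero counts on the selected support is not handled.

```python
import numpy as np

def primalize(f_pos, f_neg, alpha_star, k):
    """Rebuild (theta+, theta-) from the solution alpha_star of (USMNB)."""
    f_pos = np.asarray(f_pos, dtype=float)
    f_neg = np.asarray(f_neg, dtype=float)
    s = f_pos + f_neg
    total = s.sum()                                   # 1^T (f+ + f-)

    def xlogx(v):                                     # convention 0 * log 0 = 0
        return np.where(v > 0, v * np.log(np.where(v > 0, v, 1.0)), 0.0)

    # h(alpha*) entrywise
    h = (xlogx(f_pos) + xlogx(f_neg) - xlogx(s)
         - f_pos * np.log(alpha_star) - f_neg * np.log(1.0 - alpha_star))

    top_k = np.argsort(h)[-k:]                        # complement of the index set I
    B_pos, B_neg = f_pos[top_k].sum(), f_neg[top_k].sum()

    theta_pos = s / total                             # off-support: theta+_i = theta-_i
    theta_neg = s / total
    theta_pos[top_k] = (B_pos + B_neg) / B_pos * f_pos[top_k] / total
    theta_neg[top_k] = (B_pos + B_neg) / B_neg * f_neg[top_k] / total
    return theta_pos, theta_neg                       # each sums to 1 by construction
```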

Outline

■ Sparse Naive Bayes

■ Approximation bounds & the Shapley-Folkman theorem

■ Numerical performance

Naive Feature Selection

Data.

Feature Vectors       Amazon    IMDB       Twitter      MPQA    SST2
Count Vector          31,666    103,124    273,779      6,208   16,599
tf-idf                31,666    103,124    273,779      6,208   16,599
tf-idf wrd bigram     870,536   8,950,169  12,082,555   27,603  227,012
tf-idf char bigram    25,019    48,420     17,812       4,838   7,762

Number of features in text data sets used below.

Feature Vectors       Amazon    IMDB    Twitter   MPQA     SST2
Count Vector          0.043     0.22    1.15      0.0082   0.037
tf-idf                0.033     0.16    0.89      0.0080   0.027
tf-idf wrd bigram     0.68      9.38    13.25     0.024    0.21
tf-idf char bigram    0.076     0.47    4.07      0.0084   0.082

Average run time (seconds, plain Python on CPU).

Naive Feature Selection

[Figure: accuracy (%) versus training time (s, log scale) for Logistic-$\ell_1$, Logistic-RFE, SVM-$\ell_1$, SVM-RFE, LASSO, Odds Ratio, TMNB and SMNB (this work), at sparsity levels 0.1%, 1%, 5% and 10%.]

Accuracy versus run time on IMDB/Count Vector, with MNB in stage two.

Naive Feature Selection

[Figure: function value versus $k$, plotting $\psi(k)$, the primalized value and $\psi(k - 4)$; left panel $k$ up to 30, right panel $k$ up to 3000.]

Duality gap bound versus sparsity level for $m = 30$ (left panel) and $m = 3000$ (right panel), showing that the duality gap quickly closes as $m$ or $k$ increase.

Naive Feature Selection

[Figure: run time (s) versus $m$, for $m$ from 10,000 to 80,000.]

Run time on the IMDB/tf-idf data set, increasing $m$ and $k$ with the ratio $k/m$ fixed, empirically showing (sub-)linear complexity.

Naive Feature Selection

Criteo data set. Conversion logs: 45 GB, 45 million rows, 15,000 columns.

■ Preprocessing (NaNs, encoding categorical features) takes 50 minutes.

■ Computing $f^+$ and $f^-$ takes 20 minutes.

■ Computing the full curve below (i.e. solving 15,000 problems) takes 2 minutes.

[Figure: Objective versus number of features ($k$), for $k$ up to 15,000.]

Standard workstation, plain Python on CPU.

Conclusion

Naive Feature Selection.

For naive Bayes, we get sparsity almost for free.

■ Linear complexity.

■ Nearly tight convex relaxation.

■ Feature selection performance comparable to LASSO or $\ell_1$-logistic regression, but NFS is 100$\times$ faster.

■ Requires no RIP assumption (only the naive conditional independence assumption behind NB).

https://github.com/aspremon/NaiveFeatureSelection


References

Jean-Pierre Aubin and Ivar Ekeland. Estimates of the duality gap in nonconvex optimization. Mathematics of Operations Research, 1(3):225–245, 1976.

Ivar Ekeland and Roger Temam. Convex Analysis and Variational Problems. SIAM, 1999.

Matthieu Fradelizi, Mokshay Madiman, Arnaud Marsiglietti, and Artem Zvavitch. The convexification effect of Minkowski summation. Preprint, 2017.

Alexander Lachmann, Denis Torre, Alexandra B. Keenan, Kathleen M. Jagodnik, Hoyjin J. Lee, Lily Wang, Moshe C. Silverstein, and Avi Ma'ayan. Massive mining of publicly available RNA-seq data from human and mouse. Nature Communications, 9(1):1366, 2018.

Claude Lemaréchal and Arnaud Renaud. A geometric study of duality gaps, with applications. Mathematical Programming, 90(3):399–427, 2001.

Ross M. Starr. Quasi-equilibria in markets with non-convex preferences. Econometrica, pages 25–38, 1969.
