Sparse Approximation of PDEs based on Compressed Sensing

Simone Brugiapaglia
Department of Mathematics, Simon Fraser University

Retreat for Young Researchers in Stochastics, September 24, 2016

Introduction

We address the following question:

Can we employ Compressed Sensing to solve a PDE?

In particular, we consider the weak formulation of a PDE

find u ∈ U : a(u, v) = F(v), ∀v ∈ V,

focusing on the Petrov-Galerkin (PG) discretization method [Aziz and Babuška, 1972].

Motivation:
▶ reduce the computational cost associated with a classical PG discretization;
▶ situations with a limited budget of evaluations of F(·);
▶ better theoretical understanding of the PG method.

Case study: advection-diffusion-reaction (ADR) equation, with U = V = H^1_0(Ω), Ω = [0, 1]^d, and

a(u, v) = (η∇u, ∇v) + (b · ∇u, v) + (ρu, v),   F(v) = (f, v).

Compressed Sensing

CORSING (COmpRessed SolvING)

A theoretical study of CORSING

Compressed Sensing (CS)

[D. Donoho, 2006; E. Candès, J. Romberg, and T. Tao, 2006]

A sparse vector

Consider a signal s ∈ ℂ^N, sparse w.r.t. Ψ ∈ ℂ^{N×N}:

s = Ψu and ‖u‖₀ =: s ≪ N,

where ‖u‖₀ := #{i : u_i ≠ 0}.

[Figure: the components u_i of a sparse coefficient vector, i = 1, ..., 100.]

It can be acquired by means of m ≪ N linear and non-adaptive measurements ⟨s, ϕ_i⟩ =: f_i, for i = 1, ..., m.

If we collect the measurement vectors into Φ = [ϕ_1, ..., ϕ_m] ∈ ℂ^{N×m}, we have

Au = f,

where A = Φ^H Ψ ∈ ℂ^{m×N} and f ∈ ℂ^m.
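As a toy illustration of this acquisition model, the following Python sketch builds an s-sparse signal, takes m ≪ N non-adaptive random measurements, and forms A = Φ^H Ψ and f. The sizes, the randomly generated orthonormal basis Ψ, and the Gaussian measurement vectors are illustrative choices, not taken from the talk.

```python
import numpy as np

# Toy illustration; all sizes and bases are illustrative choices.
rng = np.random.default_rng(0)
N, m, s = 200, 60, 5

# An arbitrary orthonormal sparsity basis Psi and an s-sparse coefficient vector u.
Psi, _ = np.linalg.qr(rng.standard_normal((N, N)))
u = np.zeros(N)
u[rng.choice(N, size=s, replace=False)] = rng.standard_normal(s)
signal = Psi @ u                      # the signal s = Psi u

# m << N non-adaptive linear measurements f_i = <s, phi_i>,
# with the measurement vectors phi_i stored as columns of Phi.
Phi = rng.standard_normal((N, m)) / np.sqrt(m)
f = Phi.T @ signal
A = Phi.T @ Psi                       # A = Phi^H Psi (real-valued here, so ^H = ^T)
print(A.shape, np.allclose(A @ u, f)) # (60, 200) True: the underdetermined system Au = f
```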

Sensing phase

A picture to have in mind:

[Figure: f = Au, with f the measurement vector (m entries), A the sensing matrix (m × N), and u the unknown sparse signal (N entries, sparsity s).]

Since m ≪ N, the system Au = f is highly underdetermined. How can we recover the right u among its infinitely many solutions?

Recovery: finding a needle in a haystack

Thanks to the sparsity hypothesis, we can resort to sparse recovery techniques. We aim at approximating the solution to

(P₀)   min_{u∈ℂ^N} ‖u‖₀, s.t. Au = f.

Unfortunately, in general (P₀) is an NP-hard problem... However, there are computationally tractable strategies to approximate it! In particular, we employ the greedy algorithm Orthogonal Matching Pursuit (OMP) to approximate

(P₀^ε)   min_{u∈ℂ^N} ‖u‖₀ s.t. ‖Au − f‖₂ ≤ ε     or     (P₀^s)   min_{u∈ℂ^N} ‖Au − f‖₂ s.t. ‖u‖₀ ≤ s.

Another valuable option is convex relaxation (not discussed here):

(P₁)   min_{u∈ℂ^N} ‖u‖₁, s.t. Au = f.

Orthogonal Matching Pursuit (OMP)

Input:
  Matrix A ∈ ℂ^{m×N}, with ℓ²-normalized columns
  Vector f ∈ ℂ^m
  Tolerance on the residual ε > 0 (or else, sparsity s ∈ [N])
Output:
  Approximate solution u to (P₀^ε) (or else, (P₀^s))
Procedure:
  1: S ← ∅                                          ▷ Initialization
  2: u ← 0
  3: while ‖Au − f‖₂ > ε (or else, ‖u‖₀ < s) do
  4:   j ← argmax_{j∈[N]} |[A^H(Au − f)]_j|         ▷ Select new index
  5:   S ← S ∪ {j}                                  ▷ Enlarge support
  6:   u ← argmin_{z∈ℂ^N} ‖Az − f‖₂ s.t. supp(z) ⊆ S   ▷ Minimize residual
  7: end while
  8: return u

▶ The computational cost for the (P₀^s) formulation is in general O(smN).
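For concreteness, here is a minimal NumPy sketch of the OMP loop above. The control flow mirrors the pseudocode (index selection via A^H applied to the residual, least-squares fit on the current support), while the function name, the default tolerance, and the early exit when no new index can be added are my own choices.

```python
import numpy as np

def omp(A, f, s=None, eps=1e-10):
    """Orthogonal Matching Pursuit for A u ~ f.
    Assumes (approximately) l2-normalized columns; stops when the residual
    norm drops below eps or the support size reaches s."""
    m, N = A.shape
    u = np.zeros(N, dtype=A.dtype)
    support = []
    residual = f - A @ u
    while np.linalg.norm(residual) > eps and (s is None or len(support) < s):
        # Select the index of the column most correlated with the residual.
        j = int(np.argmax(np.abs(A.conj().T @ residual)))
        if j in support:
            break                       # no progress is possible; stop
        support.append(j)               # enlarge the support
        # Minimize ||A z - f||_2 over vectors supported on `support`.
        z, *_ = np.linalg.lstsq(A[:, support], f, rcond=None)
        u = np.zeros(N, dtype=A.dtype)
        u[support] = z
        residual = f - A @ u
    return u
```

With a prescribed s (and eps set to 0) this sketch approximates (P₀^s); with s = None and a positive tolerance it approximates (P₀^ε).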

Recovery results based on the RIP

Many important recovery results in CS are based on the Restricted Isometry Property (RIP).

Definition (RIP). A matrix A ∈ ℂ^{m×N} satisfies the RIP(s, δ) iff

(1 − δ)‖u‖₂² ≤ ‖Au‖₂² ≤ (1 + δ)‖u‖₂², ∀u ∈ Σ_s^N := {v ∈ ℂ^N : ‖v‖₀ ≤ s}.

Among many others, the RIP implies the following recovery result for OMP [T. Zhang, 2011; A. Cohen, W. Dahmen, R. DeVore, 2015].

Theorem (RIP ⇒ OMP recovery). There exist K ∈ ℕ, C > 0 and δ ∈ (0, 1) s.t. for every s ∈ ℕ, the following holds: if A ∈ RIP((K + 1)s, δ), then, for any f ∈ ℂ^m, the OMP algorithm computes in Ks iterations a solution u that fulfills

‖Au − f‖₂ ≤ C inf_{w∈Σ_s^N} ‖Aw − f‖₂.

Compressed Sensing

CORSING (COmpRessed SolvING)

A theoretical study of CORSING

The reference problem

Given two Hilbert spaces U, V , consider the following problem

find u ∈ U : a(u, v) = F(v), ∀v ∈ V, (1)

where a : U × V → R is a bilinear form and F ∈ V ∗. We will assume a(·, ·) to fulfill

∃α > 0 : inf_{u∈U} sup_{v∈V} a(u, v)/(‖u‖_U ‖v‖_V) ≥ α,   (2)
∃β > 0 : sup_{u∈U} sup_{v∈V} |a(u, v)|/(‖u‖_U ‖v‖_V) ≤ β,   (3)
sup_{u∈U} a(u, v) > 0, ∀v ∈ V \ {0}.   (4)

(2) + (3) + (4) ⟹ ∃! solution to (1) [J. Nečas, 1962].

We will focus on advection-diffusion-reaction (ADR) equations.

The Petrov-Galerkin method

Given Ω ⊆ ℝ^d, consider the weak formulation of an ADR equation:

find u ∈ H^1_0(Ω) : (η∇u, ∇v) + (b · ∇u, v) + (ρu, v) = (f, v), ∀v ∈ H^1_0(Ω),   (ADR)

where the left-hand side defines a(u, v) and the right-hand side F(v). Choose U^N ⊆ H^1_0(Ω) and V^M ⊆ H^1_0(Ω) with

U^N = span{ψ_1, ..., ψ_N} (trials),   V^M = span{ϕ_1, ..., ϕ_M} (tests).

Then we can discretize (ADR) as

A û = f,   A_ij = a(ψ_j, ϕ_i),   f_i = F(ϕ_i),

with A ∈ ℂ^{M×N} and f ∈ ℂ^M.

A common choice is M = N.

▶ Examples of Petrov-Galerkin methods: finite elements, spectral methods, collocation methods, etc.

The main analogy

A fundamental analogy guided us through the development of our method...

Petrov-Galerkin method   ⟺   Sampling
solution of a PDE        ⟺   signal
tests (bilinear form)    ⟺   measurements (inner product)

Reference: S. B., S. Micheletti, S. Perotto. Compressed solving: a numerical approximation technique for elliptic PDEs based on compressed sensing. Comput. Math. Appl. 2015; 70(6):1306-1335.

Related literature

Ancestors: PDE solvers based on ℓ¹-minimization
▶ 1988: [J. Lavery, 1988; J. Lavery, 1989] Inviscid Burgers' equation, conservation laws
▶ 2004: [J.-L. Guermond, 2004; J.-L. Guermond and B. Popov, 2009] Hamilton-Jacobi, transport equation

CS techniques for PDEs
▶ 2010: [S. Jokar, V. Mehrmann, M. Pfetsch, and H. Yserentant, 2010] Recursive mesh refinement based on CS (Poisson equation)
▶ 2011: [A. Doostan and H. Owhadi, 2011; K. Sargsyan et al., 2014; H. Rauhut and C. Schwab, 2014; J.-L. Bouchot et al., 2015] Application of CS to parametric PDEs and Uncertainty Quantification
▶ 2015: [S. B., S. Micheletti, S. Perotto, 2015; S. B., F. Nobile, S. Micheletti, S. Perotto, 2016] CORSING for ADR problems

CORSING (COmpRessed SolvING)

Assembly phase
1. Choose two sets of N independent elements of U and V:
   trials → {ψ_1, ..., ψ_N},   {ϕ_1, ..., ϕ_N} ← tests;
2. choose m ≪ N tests {ϕ_{τ_1}, ..., ϕ_{τ_m}};
3. build A ∈ ℂ^{m×N} and f ∈ ℂ^m as
   [A]_ij := a(ψ_j, ϕ_{τ_i}),   [f]_i := F(ϕ_{τ_i}).

Recovery phase
Find a compressed solution û ∈ ℂ^N to Au = f via sparse recovery.


Classical case: square matrices

When dealing with Petrov-Galerkin discretizations, one usually ends up with a big square matrix.

[Full PG system: a square matrix with rows indexed by the tests ϕ_1, ..., ϕ_7 and columns by the trials ψ_1, ..., ψ_7, entries a(ψ_j, ϕ_i), unknowns u_1, ..., u_7, and right-hand side F(ϕ_1), ..., F(ϕ_7).]

“Compressing” the discretization

We would like to use only m random tests instead of N, with m ≪ N...

[The same N × N system, from which only a few rows (tests) will be selected at random.]

Sparse recovery

...in order to obtain a reduced discretization.

[Reduced system: only the rows corresponding to the selected tests ϕ_2 and ϕ_5 are kept, giving an underdetermined system in u_1, ..., u_7.]

The solution is then computed using sparse recovery techniques.


How to choose {ψ_j} and {ϕ_i}?

One heuristic criterion commonly used in CS is to choose one basis sparse in space, and the other in frequency.

Hierarchical hat functions (H) [O. Zienkiewicz et al., 1982]   vs.   sine functions (S)

[Figure: the hierarchical hat basis H_{l,k} (left) and the sine basis S_r (right) on [0, 1].]

We name the corresponding strategies CORSING HS and SH.

A 1D example

We test CORSING HS on the homogeneous 1D Poisson problem (a(u, v) = (u′, v′)):
▶ Trial space dimension N = 8191
▶ Solution sparsity s = 50
▶ Selected random tests m = 1200

Test Savings: TS := (N − m)/N · 100% ≈ 85% (indeed, (8191 − 1200)/8191 ≈ 0.853).

[Figure: exact vs. CORSING solution on (0, 1), with a zoom around x ≈ 0.4; × = hat functions selected by OMP after solving the program]

min ‖Au − f‖₂, s.t. ‖u‖₀ ≤ 50.

A glance at the space of coefficients...

[Figure: exact vs. CORSING coefficients u_j in lexicographic ordering (left), and level-based ordering of log₁₀|û_{ℓ,k}| (right).]

Generalization to the 2D case (space domain)

Hierarchical pyramids (P) [H. Yserentant, 1986] and tensor products of hat functions (Q).

[Figure: the first hierarchical pyramids and tensor-product hat functions on (0, 1)².]

The 2D case (frequency domain)

Tensor products of sine functions (S).

[Figure: tensor-product sine functions for (r₁, r₂) ∈ {1, 2, 3}².]

We have four strategies: CORSING PS, QS, SP and SQ.

An advection-dominated example

We evaluate the CORSING performance on the following 2D advection-dominated problem:

−µ∆u + b · ∇u = f   in Ω = (0, 1)²,
u = 0                on ∂Ω,

where b = [1, 1]ᵀ, 0 < µ ≪ 1, and f is such that the exact solution is

u*_µ(x) = C_µ (x₁² − x₁)(x₂² − x₂)(e^{x₁/µ} + e^{x₂/µ} − 2),

where C_µ > 0 is chosen such that max_{x∈Ω} u*_µ(x) = 1.
▶ The function u*_µ exhibits two boundary layers along the edges {x₁ = 1} and {x₂ = 1} of Ω.

[Figure: CORSING SP solutions with µ = 0.01 and N = 16129, compared with the exact solution. TS = 85%: ESP = 1.00, L²-rel. err. ≈ 7.1e-02 to 7.2e-02; TS = 90%: ESP = 0.94, L²-rel. err. ≈ 8.7e-02 to 9.6e-02.]

Figure: CORSING SP, with µ = 0.01: worst solution in the successful cluster (right). 50 random experiments are performed. ESP = Empirical Success Probability.

Cost reduction with respect to full-PG (m = N)

We compare the assembly/recovery times of full-PG and CORSING.

                        A        f        trec
full-PG                 2.5e+03  9.1e-01  7.1e+01   (\)
CORSING SP, TS = 85%    3.8e+02  2.7e-01  8.1e+01   (OMP)
CORSING SP, TS = 90%    2.5e+02  2.0e-01  3.4e+01   (OMP)

▶ The assembly time reduction is proportional to TS.
▶ The RAM requirement is also reduced proportionally to TS.
▶ The recovery phase is cheaper for high TS rates.

The CORSING method can considerably reduce the computational cost associated with a full-PG discretization.

More challenging test cases

The CORSING technique has also been implemented for

The 3D Poisson problem:
  −∆u = f   in Ω = (0, 1)³,
  u = 0     on ∂Ω.

The 2D Stokes problem:
  −∆u + ∇p = f   in Ω = (0, 1)²,
  div u = 0       in Ω,
  u = 0           on ∂Ω.

[Figure: CORSING QS (TS = 85%) for the 3D Poisson problem and CORSING SP (TS = 70%) for the 2D Stokes problem (arrows = (u₁, u₂), color = p), compared with the exact solutions.]

Compressed Sensing

CORSING (COmpRessed SolvING)

A theoretical study of CORSING

A theoretical understanding of the method

Reference: S. B., F. Nobile, S. Micheletti, S. Perotto. A theoretical study of COmpRessed SolvING for advection-diffusion-reaction problems. To appear in Mathematics of Computation.

Some notation:

▶ Finite-dimensional trial and test spaces

U^N := span{ψ_j}_{j∈[N]} and V^M := span{ϕ_i}_{i∈[M]},

where [k] := {1, ..., k} for every k ∈ ℕ.
▶ The set of s-sparse elements of U^N:

U_s^N := { Σ_{j∈[N]} u_j ψ_j : ‖u‖₀ ≤ s }.

Simplification: let us assume the bases {ψ_j}_{j∈ℕ} and {ϕ_q}_{q∈ℕ} to be orthonormal.

Local a-coherence

An important tool employed in the theoretical analysis is the local a-coherence, a generalization of the local coherence of CS.

Definition. Given N ∈ ℕ ∪ {∞}, the real-valued sequence µ^N defined as

µ_q^N := sup_{j∈[N]} |a(ψ_j, ϕ_q)|², ∀q ∈ ℕ,

is called the local a-coherence of {ψ_j}_{j∈[N]} with respect to {ϕ_q}_{q∈ℕ}.

▶ Following [F. Krahmer and R. Ward, 2014], we define a computable upper bound ν^N to µ^N:

µ_q^N ≤ ν_q^N, ∀q ∈ ℕ.

Moreover, for every M ∈ N, we define

ν^{N,M} := [ν_1^N, ..., ν_M^N]ᵀ ∈ ℝ^M.

Formalization of the CORSING procedure

PROCEDURE û = CORSING(N, s, ν^N, γ̂, γ)

1. [Definition of M and m]

M ∼ sN;   m ∼ s ‖ν^{N,M}‖₁ log(N/s);   (with proportionality constants γ̂ and γ, respectively)

2. [Test selection] Draw τ_1, ..., τ_m independently at random from [M] according to the probability

p := ν^{N,M} / ‖ν^{N,M}‖₁;

3. [Assembly] Build A ∈ ℝ^{m×N}, f ∈ ℝ^m and D ∈ ℝ^{m×m}, defined as:

A_ij := a(ψ_j, ϕ_{τ_i}),   f_i := F(ϕ_{τ_i}),   D_ik := δ_ik / √(m p_{τ_i}).

4. [Recovery] Find an approximate solution û to

min_{u∈ℝ^N} ‖D(Au − f)‖₂², s.t. ‖u‖₀ ≤ s;

then set û ← Σ_{j=1}^N û_j ψ_j.
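Putting the four steps together, here is a schematic Python version of the procedure; this is a sketch, not the reference implementation. The routines a_eval and F_eval (evaluating a(ψ_j, ϕ_q) and F(ϕ_q)), the function nu (returning ν_q^N), the default constants gamma_hat and gamma, and the reuse of the OMP sketch from the compressed sensing part are all illustrative assumptions.

```python
import numpy as np

def corsing(N, s, nu, a_eval, F_eval, gamma_hat=2.0, gamma=1.0, rng=None):
    """Schematic CORSING procedure.
    nu(q) -> nu_q^N,  a_eval(j, q) -> a(psi_j, phi_q),  F_eval(q) -> F(phi_q)."""
    rng = np.random.default_rng() if rng is None else rng

    # 1. Definition of M and m (gamma_hat, gamma are illustrative tuning constants).
    M = int(gamma_hat * s * N)
    nu_vec = np.array([nu(q) for q in range(1, M + 1)])
    m = int(np.ceil(gamma * s * nu_vec.sum() * np.log(N / s)))

    # 2. Test selection: draw tau_1, ..., tau_m i.i.d. from [M] with probability p.
    p = nu_vec / nu_vec.sum()
    tau = rng.choice(M, size=m, p=p) + 1           # 1-based test indices

    # 3. Assembly of A, f and the diagonal preconditioner D.
    A = np.array([[a_eval(j, q) for j in range(1, N + 1)] for q in tau])
    f = np.array([F_eval(q) for q in tau])
    d = 1.0 / np.sqrt(m * p[tau - 1])              # diagonal of D

    # 4. Recovery: s-sparse minimization of ||D(Au - f)||_2, here via OMP
    #    (columns are not re-normalized, which a careful implementation would handle).
    u_hat = omp(d[:, None] * A, d * f, s=s, eps=0.0)   # omp as sketched earlier
    return u_hat, tau
```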

Main tools of the analysis

The theoretical analysis is based on three main tools:

1. the concept of local a-coherence between two bases;
2. Chernoff's bounds for the sum of random matrices [H. Chernoff, 1952; R. Ahlswede and A. Winter, 2002; J. Tropp, 2012];
3. a variant of the classical inf-sup property, which we call the restricted inf-sup property (RISP), i.e.,

inf_{u∈Σ_s^N} sup_{v∈ℝ^m} (vᵀDAu)/(‖u‖₂ ‖v‖₂) > α̃ > 0,

where Σ_s^N := {u ∈ ℝ^N : ‖u‖₀ ≤ s}.

From the ∞-dimensional problem to CORSING

While moving from the ∞-dimensional weak problem to the CORSING reduced formulation we will track the inf-sup constant:

                      # of tests    inf-sup constant
Weak problem          ∞             α
PG discretization     M < ∞         α (1 − δ̂)^{1/2}
CORSING               m ≪ M         α (1 − δ̂)^{1/2} (1 − δ)^{1/2}

This will guarantee the stability of our method and will imply recovery error estimates for the CORSING technique.

inf_{u∈U_s^N} sup_{v∈V} a(u, v)/(‖u‖_U ‖v‖_V)   ⇝   inf_{u∈U_s^N} sup_{v∈V^M} a(u, v)/(‖u‖_U ‖v‖_V)   ⇝   inf_{u∈Σ_s^N} sup_{v∈ℝ^m} vᵀDAu/(‖u‖₂ ‖v‖₂)

Uniform RISP

Theorem. For every s ∈ ℕ, given δ̂ ∈ (0, 1), choose M ∈ ℕ such that

Σ_{q>M} µ_q^N ≤ α²δ̂/s.

Then, for every ε > 0 and δ ∈ (0, 1), provided

m ≳ δ⁻² ‖ν^{N,M}‖₁ [s log(eN/s) + s log(s/ε)],

the following uniform RISP holds with probability ≥ 1 − ε:

inf_{u∈Σ_s^N} sup_{v∈ℝ^m} (vᵀDAu)/(‖u‖₂ ‖v‖₂) > α̃ > 0,

where α̃ := (1 − δ̂)^{1/2} (1 − δ)^{1/2} α.

Non-uniform RISP: sketch of the proof (1/2)

The proof can be organized as follows:
1. Fix S ⊆ [N], with |S| = s, and notice that

inf_{u∈ℝ^s} sup_{v∈ℝ^m} (vᵀ D A_S u)/(‖u‖₂ ‖v‖₂) = [λ_min(A_Sᵀ D² A_S)]^{1/2} = [λ_min(X̄)]^{1/2}.

Indeed, X̄ = A_Sᵀ D² A_S is the sample mean of the random matrices X^{τ_i}:

(A_Sᵀ D² A_S)_{jk} = (1/m) Σ_{i=1}^m a(ψ_{σ_j}, ϕ_{τ_i}) a(ψ_{σ_k}, ϕ_{τ_i}) / p_{τ_i} =: (1/m) Σ_{i=1}^m X_{jk}^{τ_i}.

2. The minimum eigenvalue of X^{τ_i} can be controlled in expectation:

Σ_{q>M} µ_q^N ≤ δ̂α²/s   ⟹   [λ_min(E[X^{τ_i}])]^{1/2} = inf_{u∈U_S^N} sup_{v∈V^M} a(u, v)/(‖u‖_U ‖v‖_V) ≥ (1 − δ̂)^{1/2} α.

3. The claim then follows from the matrix Chernoff bounds.

Non-uniform RISP: sketch of the proof (2/2)

Theorem (Matrix Chernoff bounds). Consider a finite sequence of i.i.d. random, symmetric s × s real matrices M¹, ..., M^m such that

0 ≤ λ_min(M^i) and λ_max(M^i) ≤ R almost surely, ∀i ∈ [m].

Define M̄ := (1/m) Σ_{i=1}^m M^i and λ_* := λ_min(E[M^i]). Then,

P{λ_min(M̄) ≤ (1 − δ)λ_*} ≲ s exp(−m δ² λ_* / R), ∀δ ∈ [0, 1].

▶ After choosing M^i = X^{τ_i}, direct computations show that

0 ≤ λ_min(X^{τ_i}) and λ_max(X^{τ_i}) ≤ s ‖ν^{N,M}‖₁.

▶ Finally, we consider the inf-sup over U_s^N, employing a union bound.
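As a quick numerical illustration of the Chernoff bound above, the following sketch uses a synthetic family of rank-one PSD matrices (a stand-in for the X^{τ_i}, not the actual CORSING matrices) and compares the empirical probability P{λ_min(M̄) ≤ (1 − δ)λ_*} with the exponential estimate; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
s, m, delta, trials = 5, 200, 0.3, 2000
R = s                       # lambda_max(M^i) = ||y||^2 = s almost surely (see below)

def sample_matrix():
    # Rank-one PSD sample M^i = y y^T with y uniform on the sphere of radius sqrt(s),
    # so that E[M^i] = I (hence lambda_* = 1) and lambda_max(M^i) = s.
    g = rng.standard_normal(s)
    y = np.sqrt(s) * g / np.linalg.norm(g)
    return np.outer(y, y)

lam_star = 1.0
failures = 0
for _ in range(trials):
    M_bar = sum(sample_matrix() for _ in range(m)) / m      # sample mean of m draws
    if np.linalg.eigvalsh(M_bar)[0] <= (1 - delta) * lam_star:
        failures += 1

print("empirical P{lambda_min(M_bar) <= (1-delta)*lambda_*}:", failures / trials)
print("Chernoff-type bound s*exp(-m*delta^2*lambda_*/R):    ",
      s * np.exp(-m * delta**2 * lam_star / R))
```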

Recovery error analysis

Our aim is to compare the recovery error ‖û − u‖_U with the best s-term approximation error of the exact solution u in U^N, i.e. the quantity ‖u^s − u‖_U, where

u^s := argmin_{w∈U_s^N} ‖w − u‖_U.

A key quantity is the following preconditioned random residual

R(u^s) := [ (1/m) Σ_{i=1}^m (1/p_{τ_i}) [a(u^s, ϕ_{τ_i}) − F(ϕ_{τ_i})]² ]^{1/2} = ‖D(A u^s − f)‖₂.

Assumption: we assume that û solves the problem

min_{u∈ℝ^N} ‖D(Au − f)‖₂², s.t. ‖u‖₀ ≤ s

exactly (even if, in reality, OMP can only approximate its solution).

Two lemmas about R(u^s)

An argument analogous to Céa's lemma shows the following.

Lemma. If the following uniform 2s-sparse RISP holds,

inf_{u∈Σ_{2s}^N} sup_{v∈ℝ^m} (vᵀDAu)/(‖u‖₂ ‖v‖₂) > α̃ > 0,

then the CORSING procedure computes a solution û such that

‖û − u^s‖_U < (2/α̃) R(u^s).

Moreover, this mysterious residual behaves nicely in expectation!

Lemma

E[R(u^s)²] ≤ β² ‖u^s − u‖_U², where β is the continuity constant of a(·, ·).

Error estimate in expectation

Theorem (CORSING recovery in expectation).

Let s ≤ N and K > 0 be such that ‖u‖_U ≤ K, and let δ̂, δ ∈ (0, 1). Choose M ∈ ℕ such that the following truncation condition is fulfilled:

Σ_{q>M} µ_q^N ≤ α²δ̂/s.

Then, for every ε > 0, provided

m ≳ δ⁻² ‖ν^{N,M}‖₁ [s log(N/s) + s log(s/ε)],

the truncated CORSING solution T_K û fulfills

E[‖T_K û − u‖_U] ≤ (1 + 2β/α̃) ‖u^s − u‖_U + 2Kε,

where α̃ = (1 − δ̂)^{1/2} (1 − δ)^{1/2} α and T_K(w) := w / max(1, ‖w‖_U / K).

Remarks:

▶ A possible choice for K is ‖F‖_{V*}/α.
▶ An analogous result holds in probability.

Application to the 1D Poisson problem

Proposition (CORSING HS recovery). Fix a maximum hierarchical level L ∈ ℕ, corresponding to N = 2^{L+1} − 1. Then, for every ε ∈ (0, 2^{−1/3}] and s ≤ 2N/e, provided

M ≳ sN,   m ≳ log(M) [s log(N/s) + s log(s/ε)],

and choosing the upper bound ν^N as ν_q^N ∼ 1/q, ∀q ∈ ℕ, the CORSING HS solution to the homogeneous 1D Poisson problem fulfills

E[|T_K û − u|_{H^1}] ≤ 5 |u^s − u|_{H^1} + 2Kε,

for every K > 0 such that |u|_{H^1} ≤ K.

Sketch of the proof

For the 1D Poisson problem we have the following bound:

µ_q^N ≲ min(N/q², 1/q).

[Figure: local coherence vs. q (log-log scale) and its upper bound.]

Then, we have

Σ_{q>M} µ_q^N ≲ Σ_{q>M} N/q² ∼ N/M, required to be ≲ 1/s.

Moreover, choosing ν_q^N ∼ 1/q yields

‖ν^{N,M}‖₁ ∼ Σ_{q=1}^M 1/q ∼ log M.
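This decay of the local a-coherence and the logarithmic growth of ‖ν^{N,M}‖₁ can be checked numerically with a few lines of Python. The sketch below assembles a(H_{l,k}, S_q) = ∫ H'_{l,k} S'_q for H¹-normalized hierarchical hats and sines; the quadrature grid, the truncation at M tests, and the chosen level L are my own illustrative choices.

```python
import numpy as np

# 1D Poisson form a(u, v) = (u', v') on (0, 1):
# trials = H^1-normalized hierarchical hat functions H_{l,k}, tests = normalized sines.
L, M = 5, 512                               # max hierarchical level, number of tests kept
x = np.linspace(0.0, 1.0, 8001)             # quadrature grid
dx = x[1] - x[0]

# Derivatives of the normalized hierarchical hats (one row per trial); N = 2^(L+1) - 1.
pairs = [(l, k) for l in range(L + 1) for k in range(2 ** l)]
hat_d = np.zeros((len(pairs), x.size))
for row, (l, k) in enumerate(pairs):
    left, right = k * 2.0 ** (-l), (k + 1) * 2.0 ** (-l)
    mid = 0.5 * (left + right)
    hat_d[row, (x >= left) & (x < mid)] = 2.0 ** (l + 1)
    hat_d[row, (x >= mid) & (x < right)] = -2.0 ** (l + 1)
    hat_d[row] /= np.sqrt(2.0 ** (l + 2))   # |H_{l,k}|_{H^1}^2 = 2^(l+2)

N = len(pairs)
mu = np.zeros(M)                            # local a-coherence mu_q^N
for q in range(1, M + 1):
    test_d = np.sqrt(2.0) * np.cos(q * np.pi * x)   # derivative of sqrt(2) sin(q*pi*x)/(q*pi)
    a_q = (hat_d * test_d).sum(axis=1) * dx         # a(psi_j, phi_q) for every trial j
    mu[q - 1] = np.max(a_q ** 2)

q = np.arange(1, M + 1)
print("max of mu_q / min(N/q^2, 1/q):", np.max(mu / np.minimum(N / q ** 2, 1.0 / q)))
print("||nu^{N,M}||_1 with nu_q = 1/q:", np.sum(1.0 / q), "vs log M =", np.log(M))
```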

Application to 1D ADR problems

Consider the problem

find u ∈ H^1_0(Ω) : (u′, v′) + b(u′, v) + ρ(u, v) = (f, v), ∀v ∈ H^1_0(Ω),   (ADR)

with b, ρ ∈ ℝ, ρ > 0 and Ω = (0, 1). Let H^1_0(Ω) be endowed with |·|_{H^1(Ω)}.

Proposition (CORSING HS for 1D ADR). Fix N ∈ ℕ. Then, for every ε ∈ (0, 2^{−1/3}] and s ≤ 2N/e, provided that

M ≳ sN,   |b|/M ≲ 1,   |ρ|/M ≲ 1,
m ≳ (log M + |b|² + |ρ|²) [s log(N/s) + s log(s/ε)],

and choosing the upper bound ν^N such that

ν_q^N ∼ 1/q + |b|²/q³ + |ρ|²/q⁵, ∀q ∈ ℕ,

the CORSING HS solution to (ADR) fulfills

E[|T_K û − u|_{H^1(Ω)}] ≲ (1 + |b| + |ρ|) |u^s − u|_{H^1(Ω)} + Kε,

for every K > 0 such that |u|_{H^1(Ω)} ≤ K.

Application to the 1D diffusion equation

Let Ω = (0, 1) and consider the problem

find u ∈ H^1_0(Ω) : (ηu′, v′) = (f, v), ∀v ∈ H^1_0(Ω).   (DIF)

Proposition. Let η ∈ L^∞(Ω) be such that

▶ there exists η_min > 0 so that η(x) ≥ η_min, for almost every x ∈ Ω;
▶ there exists a finite set P ⊆ Ω such that η ∈ C²(Ω \ P);
▶ sup_{x∈Ω\P} |η^{(k)}(x)| < ∞, for k = 1, 2.

Fix L ∈ ℕ and put N = 2^{L+1} − 1. Then, provided

ν_q^N ∼ 1/q, ∀q ∈ ℕ, and

M ≳ sN,   m ≳ log(M) [s log(N/s) + s log(s/ε)],

the CORSING HS solution û to (DIF) fulfills

E[|T_K û − u|_{H^1(Ω)}] ≤ (1 + 4‖η‖_{L^∞}/η_min) |u^s − u|_{H^1(Ω)} + 2Kε,

for every K > 0 such that |u|_{H^1(Ω)} ≤ K.

A RIP theorem for CORSING (with S. Dirksen, H.C. Jung, H. Rauhut, RWTH Aachen)

Theorem (RIP for CORSING). Let s, N ∈ ℕ, with s < N, and δ̂ ∈ (0, 1). Suppose the truncation condition

Σ_{q>M} µ_q^N ≤ α²δ̂/s

to be fulfilled. Then, provided δ ∈ (1 − (1 − δ̂)α²/β², 1) and

m ≳ δ⁻² ‖ν^{N,M}‖₁ s log³(s) log(N),

it holds

P{β⁻¹ DA ∈ RIP(s, δ)} ≥ 1 − N^{−log³(s)},

where β is the continuity constant of a(·, ·).

⟹ CORSING computes the best s-term approximation to u in O(smN) flops.

Further results

▶ The previous results also hold in the case of nonorthogonal trial and test functions. Indeed, it suffices that they form Riesz bases, i.e.,

‖ Σ_j u_j ψ_j ‖_U ∼ ‖u‖₂, for every coefficient vector u.

▶ We checked the theoretical hypotheses on the local a-coherence numerically for the 2D and 3D ADR equations.

Figure: the plot shows that ν_q^N ∼ 1/(q₁q₂q₃) is a local a-coherence upper bound for the 3D Poisson problem (CORSING QS).

Wrap up: main results

✓ CS can be successfully applied to solve PDEs, such as 1D, 2D, and 3D ADR problems, or the 2D Stokes problem;
✓ CORSING can considerably reduce the computational cost associated with a full-PG discretization;
✓ the local a-coherence is crucial to understand the behavior of the method theoretically.

Future directions

▶ Speed up the recovery phase (get rid of the "N" in the cost O(smN));
▶ investigate other trial/test combinations, e.g., biorthogonal bases instead of the hierarchical basis (ongoing);
▶ 2D and 3D theory (ongoing);
▶ apply CORSING to more challenging benchmarks, such as Navier-Stokes or nonlocal problems;
▶ adapt the CORSING technique to the case of parametric PDEs.

Thank you for your attention!

...questions?
