Quick viewing(Text Mode)

The Weighted Ambrosio - Tortorelli Approximation Scheme

The Weighted Ambrosio - Tortorelli Approximation Scheme

THE WEIGHTED AMBROSIO - TORTORELLI APPROXIMATION SCHEME

IRENE FONSECA AND PAN

Department of Mathematics, Carnegie Mellon University 5000 Forbes Avenue, Pittsburgh, PA, 15213

August 12, 2016

Abstract. The Ambrosio-Tortorelli approximation scheme with weighted underlying metric is investigated. It is shown that it Γ-converges to a Mumford-Shah image segmentation functional depending on the weight ω dx, where ω ∈ SBV (Ω), and on its value ω−.

Contents 1. Introduction and Main Results 1 2. Definitions and Preliminary Results 5 3. The One Dimensional Case 10 3.1. The Case ω ∈ W(I) ∩ C(I) 10 3.2. The Case ω ∈ W(I) ∩ SBV (I) 17 4. The Multi-Dimensional Case 21 4.1. One-Dimensional Restrictions and Slicing Properties 21 4.2. The Case ω ∈ W(Ω) ∩ C(Ω) 33 4.3. The Case ω ∈ W(Ω) ∩ SBV (Ω) 45 Appendix 51 Acknowledgements 54 References 54

1. Introduction and Main Results

A central problem in image processing is image denoising. Given an image u0, we decompose it as

u0 = ug + n where ug represents a noisy-free ground truth picture, while n encodes noise or textures. Examples of models for such noise distributions are Gaussian noise in Magnetic Resonance Tomography, and Poisson noise in radar measurements [16]. Variational PDE methods have proven to be efficient to remove the noise n from u0. Several successful variational PDEs have been proposed in the litera- ture (see, for example [36, 42, 43, 44]) and, among these, the Mumford-Shah image segmentation Page 2 Section 1.0 functional

2 N−1 2 G(u, K) := α |∇u| dx + αH (K) + (u − u0) dx ˆΩ\K ˆΩ (1.1) where u ∈ W 1,2(Ω \ K),K ⊂ Ω closed in Ω, introduced in [41], is one of the most successful approaches. By minimizing the functional (1.1) one tries to find a “piecewise smooth” approximation of u0. The existence of such minimizers can be proved by using compactness and lower semicontinuity theorems in SBV (Ω) (see [1, 2, 3, 4]). Furthermore, regularity results in [22, 24] give that minimizers u satisfy

1 N−1 u ∈ C (Ω \ ) and H (Su ∩ Ω \ Su) = 0.

Here, as in what follows, Su stands for the jump set of u.

The parameter α > 0 in (1.1), determined by the user, plays an important role. For example, choos- ing α > 0 too large will result in over-smoothing and the edges that should have been preserved will be lost, and choosing α > 0 too small may keep the noise un-removed. The choice of the “best” parameter α then becomes an interesting task. In [25] the authors proposed a training scheme by using bilevel learning optimization defined in machine learning, which is a semi-supervised learning scheme that optimally adapts itself to the given “perfect data” (see [20, 21, 26, 27, 45, 46]). This learning scheme searches α > 0 such that the recovered image uα, obtained from(1.1), best fits the 2 given clean image ug, measured in terms of the L -distance. A simplified bilevel learning scheme (B) from [25] is the following: Level 1. 2 α¯ := arg min |uα − ug| dx, (1.2) α>0 ˆΩ Level 2.   2 N−1 2 uα := arg min α |∇u| dx + αH (Su) + |u − u0| dx , u∈SBV (Ω) ˆΩ ˆΩ In [25] the authors proved that the above bilevel learning scheme has at least one solutionα ¯ ∈ (0, +∞], and a small modification rules out the possibility ofα ¯ = +∞.

The model proposed in [37] is aimed at improving the above learning scheme. It is a bilevel learning scheme which utilizes the scheme (B) in each subdomain of Ω, and searches for the best combination of different subdomains from which a recovered imageu ¯, which best fits ug, might be obtained.

N To present the model, we first fix some notation. For K ∈ N, QK ⊂ R denotes a cube with its faces normal to the orthonormal basis of RN , and with side-length greater than or equal to 1/K. Define PK to be a collection of finitely many QK such that

n N [ o PK := QK ⊂ R : QK are mutually disjoint, Ω ⊂ QK , and VK denotes the collection of all possible PK . For K = 0 we set Q0 := Ω, hence P0 = {Ω}.

A simplified bilevel learning scheme (P) in [37] is as follows: Page 3 Section 1.0

Level 1.   2 u¯ := arg min |ug − uPK | dx (1.3) K≥0, PK ∈VK ˆΩ   2 N−1 2 where uPK := arg min αPK (x) |∇u| dx + αPk (x)dH + |u − u0| dx (1.4) u∈SBV (Ω) ˆΩ ˆSu ˆΩ Level 2.

2 αPK (x) := αQK for x ∈ QK ∈ PK , where αQK := arg min |uα − ug| dx, α>0 ˆQK   (1.5) 2 N−1 2 uα := arg min α |∇u| dx + αH (Su) + |u − u0| dx u∈SBV (QK ∩Ω) ˆQK ∩Ω ˆQK ∩Ω Scheme (P) allows us to perform the denoising procedure “point-wisely”, and it is an improvement of (1.2). Note that at step K = 0, (1.3) reduces to (1.2). It is well known that the Mumford- Shah model, as well as the ROF model in [44], leads to undesirable phenomena like the staircasing effect (see [10, 19]). However, such staircasing effect is significantly mitigated in (1.3), according to numerical simulations in [37] (a theoretical validation of such improvement is needed). We remark that the most important step is (1.4) for the following reasons: 1. (1.4) is the bridge connecting level 1 and level 2;

2. since αPK is defined by locally optimizing the parameter αQK , we expect uPK be “close” to ug locally in QK ;

3. the last integrand in (1.4) keeps uPK close to u0 globally, hence we may expect uPK to have a good balance between local optimization and global optimization. We may view (1.4) as a weighted version of (1.1) by changing the underlying metric from dx to

αPK dx. By the construction of αPk in (1.5), we know it is a piecewise constant function and, since N−1 K > 0 is finite, the discontinuity set of αPK has finite H measure. However, αPK is only defined LN -a.e., and hence the term

N−1 αPk (x)dH ˆSu might be ill-defined.

In this paper, we deal with the well-definess of (1.4) by modifying αPK accordingly, and by building a sequence of functionals which Γ-converges to (1.4). To be precise, we adopt the approximation scheme of Ambrosio and Tortorelli in [8] and change the underlying metric properly. In (1.1) Ambrosio and Tortorelli considered a sequence of functionals reminiscent of the Cahn-Hilliard ap- proximation, and introduced a family of elliptic functionals   2 2 2 1 2 2 Gε(u, v) := α |∇u| v dx + α ε |∇v| + (v − 1) dx + (u − u0) dx, ˆΩ ˆΩ 4ε ˆΩ 1,2 1,2 2 where u ∈ W (Ω), (v − 1) ∈ W0 (Ω), and u0 ∈ L (Ω). The additional field v plays the role of controlling variable on the gradient of u. In [8] a rigorous argument has been made to show that Gε → G in the sense of Γ-convergence ([9]), where G is defined in (1.1).

N−1 In view of (1.5), we fix a weight function ω ∈ SBV (Ω) such that ω is positive and H (Sω) < +∞. Page 4 Section 1.0

Our new (weighted version) Mumford-Shah image segmentation functional is defined as

2 − N−1 Eω(u) := |∇u| ω dx + ω dH , (1.6) ˆΩ ˆSu and the (weighted version) of Ambrosio - Tortorelli functionals are defined as   2 2 2 1 2 Eω,ε(u, v) := |∇u| v ω dx + ε |∇v| + (v − 1) ω dx. ˆΩ ˆΩ 4ε

It is natural to take u ∈ GSBVω(Ω) in (1.6) (see Definition 2.6. For basic definitions and theorems of weighted spaces we refer to [5, 6, 11, 14, 15, 18, 32, 33]). Moreover, since K ≥ 0 is finite and

αQK > 0 in (1.5) , it is not restricted to assume that essinf {ω(x), x ∈ Ω} ≥ l, where l > 0 is a constant.

This condition implies that all weighted spaces considered in this paper are embedded in the cor- responding non-weighted spaces, and hence we may apply some results that hold in the context of 1,2 1,2 non-weighted spaces. For example, BVω ⊂ BV and Wω ⊂ W (see Definition 2.6), and most theorems in [8] can be applied to u ∈ SBVω(Ω) (for example, Theorem 2.3 in [8]).

Before we state our main result, we recall that similar problems have been studied for different types of weight functions ω (see, for example [12, 13, 35]). In particular, [12, 13] treat a uniformly continuous and strong A∞ (defined in [23]) weight function on Modica-Mortola and Mumford-Shah- type functionals, respectively, and in [35] the authors considered a C1,β-continuous weight function, with some other minor assumptions, in the one-dimensional Cahn-Hilliard model.

Our main result is the following: N ∞ 1 Theorem 1.1. Let Ω ⊂ R be open bounded, let ω ∈ SBV (Ω) ∩ L (Ω), and let Eω,ε: Lω(Ω) × L1(Ω) → [0, +∞] be defined by

( 1,2 1,2 Eω,ε(u, v) if (u, v) ∈ Wω (Ω) × W (Ω), 0 ≤ v ≤ 1, Eω,ε(u, v) := +∞ otherwise.

1 1 Then the functionals Eω,ε Γ-converge, with respect to the Lω × L topology, to the functional ( Eω(u) if u ∈ GSBVω(Ω) and v = 1 a.e., Eω(u, v) := +∞ otherwise. The proof of Γ-convergence consists of two steps. The first step is to prove the “lim inf inequality”

lim inf Eω,ε(uε, vε) ≥ Eω(u) ε→0 for every sequence uε → u, vε → v. This is obtained in Section 3.2 in the case N = 1 by using most of the arguments proposed in [8], and the properties of SBV functions in one dimension (see Lemma 2.5). The case N > 1 is studied in Section 4.3, and it uses a special slicing argument (see Lemma 4.6).

The second step is the construction of a recovery sequence (uε → u, vε → 1) such that the term  1  ε |∇v|2 + (v − 1)2 ω dx (1.7) ˆΩ 4ε Page 5 Section 2.0 only captures the information of ω−. We note that for small ε > 0, (1.7) only penalizes a ε- neighborhood around the jump point of u. By using fine properties of SBV functions (see Theorem 2.4), we are able to incorporate u and v in our model such that (1.7) will only penalize along the direction −νSω . This will be carried out in Lemma 3.7.

We remark that the techniques we developed in this paper can be adapted to other functional models. For example, 1. the weighted Cahn-Hilliard model defined as   2 1 CHω,ε(u) := ε |∇u(x)| + W (u) ω dx, ˆI ε 1,2 for u ∈ Wω (Ω) and with a double well potential function W : R → [0, +∞) such that {W = 0} = {0, 1} with the Γ-limit

CHω(u) := cW Pω(u)

defined for u = χE ∈ BVω(Ω), where 1 p − N−1 cW := 2 W (s) ds and Pω(u) := ω dH ; ˆ0 ˆSu 2. higher order singular perturbation models defined by the Γ-limit

2 − N−1 Hω(u) := |∇u| ω dx + ω (x) dH , ˆΩ ˆSu and approximation energies 1  2 1  2 2 2k−1 (k) 2 Hω,ε(u, v) := |∇u| v ω dx + ε ∇ v + (v − 1) ω dx, ˆΩ C(k) ˆΩ ε where  2  C(k) := min v(k) + (v − 1)2dx, v(0) = v0(0) = ··· = v(k−1)(0) = 0, lim v(t) = 1 . ˆ + t→∞ R The analysis of items 1 and 2 above is forthcoming (see [38]).

This article is organized as follows: In Section 2 we introduce some definitions and we recall preliminary results. In Section 3 we prove the one dimensional version of Theorem 1.1. Section 4 is devoted to the proof of our main result.

2. Definitions and Preliminary Results Throughout this paper, Ω ⊂ RN is an open bounded set with Lipschitz boundary, and I := (−1, 1). Definition 2.1. We say that u ∈ BV (Ω) is a special function of bounded variation, and we write u ∈ SBV (Ω), if the Cantor part of its derivative, Dcu, is zero, so that (see [4], (3.89))

a j N + − N−1 = D u + D u = ∇uL bΩ + (u − u )νuH bSu. (2.1)

Moreover, we say that u ∈ GSBV (Ω) if K ∧ u ∨ −K ∈ SBV (Ω) for all K ∈ N. Page 6 Section 2.0

Here we always identify u ∈ SBV (Ω) with its approximation representativeu ¯, where 1 u¯(x) := u+(x) + u−(x) , 2 with  N  + L (B(x, r) ∩ {u > t}) u (x) := inf t ∈ R : lim = 0 , r→0 rN and  N  − L (B(x, r) ∩ {u < t}) u (x) := sup t ∈ R : lim = 0 . r→0 rN We note thatu ¯ is Borel measurable (see [29], Lemma 1, page 210), and it can be shown thatu ¯ = u LN -a.e. x ∈ Ω, and that (¯u)+(x) = u+(x) and (¯u)−(x) = u−(x) for HN−1-a.e. x ∈ Ω (see [29], Corollary 1, page 216). Furthermore, we have that − < u−(x) ≤ u+(x) < +∞ (2.2) for HN−1-a.e. x ∈ Ω (see [29], Theorem 2, page 211). The inequality (2.2) uniquely determines the sign of νu in (2.1). Definition 2.2. (The weight function) We say that ω: Ω → (0, +∞] belongs to W(Ω) if ω ∈ L1(Ω) and has a positive lower bound, i.e., there exists l > 0 such that ess inf {ω(x), x ∈ Ω} ≥ l. (2.3) Without loss of generality, we take l = 1. Moreover, in this paper we will only consider the cases in which ω is either a continuous function or a SBV function. If ω ∈ SBV then, in addition, we require that N−1 N−1 H (Sω) < ∞ and H (Sω \ Sω) = 0. We next fix some notation which will be used throughout this paper. Notation 2.3. Let Γ ⊂ Ω be a HN−1-rectifiable set and let x ∈ Γ be given.

1. We denote by νΓ(x) a normal vector at x with respect to Γ, and QνΓ (x, r) is the cube centered at x with side length r and two faces normal to νΓ(x);

2. Tx,νΓ stands for the hyperplane normal to νΓ(x) and passing through x, and Px,νΓ stands for the

projection operator from Γ onto Tx,νΓ ; 3. we define the hyperplane

Tx,νΓ (t) := Tx,νΓ + tνΓ(x) for t ∈ R; 4. we introduce the half-spaces +  N HνΓ (x) := y ∈ R : νΓ(x) · (y − x) ≥ 0 and −  N HνΓ (x) := y ∈ R : νΓ(x) · (y − x) ≤ 0 . Moreover, we define the half-cubes Q± (x, r) := Q (x, r) ∩ H (x)±; νΓ νΓ νΓ

5. for given τ > 0, we denote by Rτ,νΓ (x, r) the part of QνΓ (x, r) which lies strictly between the

two hyperplanes Tx,νΓ (−τr) and Tx,νΓ (τr); Page 7 Section 2.0

6. we set

Aδ := {x ∈ Ω : dist(x, A) < δ} (2.4) for every A ⊂ Ω and δ > 0. Theorem 2.4 ([29], Theorem 3, page 213). Assume that u ∈ BV (Ω). Then N−1 1. for H -a.e. x0 ∈ Ω \ Su,

N N−1 lim |u(x) − u¯(x0)| dx = 0; r→0 B(x0,r) N−1 2. for H -a.e. x0 ∈ Su,

N ± N−1 lim u(x) − u (x0) dx = 0; r→0 ± B(x0,r)∩Hν (x0) Su N−1 3. for H a.e. x0 ∈ Su

1 + − N−1 + − lim u (x) − u (x) dH (x) = u (x0) − u (x0) . ε→0 N−1 ε ˆSu∩Qν (x0,ε) Su 0 Lemma 2.5. Let ω ∈ SBV (I) be such that H (Sω) < ∞. For every x ∈ I the following statements hold: ∞ ∞ 1. if {xn}n=1 and {yn}n=1 ⊂ I are such that xn < x < yn, n ∈ N, and limn→∞ xn = limn→∞ yn = x, then lim inf ess inf ω(y) ≥ ω−(x); (2.5) n→∞ y∈(xn,yn) 2. ± lim ω¯(zn) = ω (x); (2.6) zn→x ∞ ± {zn}n=1⊂Hν (x) Sω 3. lim sup ess sup ω(z) = ω±(x), (2.7) dH(Kn,x)→0 z∈Kn ± Kn⊂⊂Hν (x) Sω ± where Kn ⊂⊂ H and dH denotes the Hausdorff distance (see Definition A.1). νSω (x)

Proof. If x∈ / Sω, then there exists δ > 0 such that

Sω ∩ (x − δ, x + δ) = ∅, and so ω is absolutely continuous in (x − δ, x + δ), and (2.5)-(2.7) are trivially satisfied with ω(x) = ω−(x) and with equality in place of the inequality in (2.5).

Let x ∈ Sω and, without loss of generality, assume that x = 0, and let xn, yn → 0 with xn < 0 < yn 0 for all n ∈ N. Since H (Sω) < ∞, chooser ¯ > 0 such that

Sω ∩ (0 − r,¯ 0 +r ¯) = 0. Asω ¯ is absolutely continuous in (−r,¯ 0) and (0, r¯), we may extendω ¯ uniquely to x = 0 from the left and the right (see Exercise 3.7 (1) in [34]) to define ω¯(0+) := lim ω¯(x) andω ¯(0−) := lim ω¯(x). (2.8) x&0+ x%0− Page 8 Section 2.0

Assume that (the caseω ¯(0−) ≥ ω¯(0+) can be treated similarly) ω¯(0−) ≤ ω¯(0+). (2.9) We first claim that lim inf inf ω¯(x) ≥ ω¯(0−). (2.10) n→∞ x∈(xn,yn) Let ε > 0 be given. By (2.8) findr ¯ > δ > 0 small enough such that

− 1 + 1 ω¯(x) − ω¯(0 ) ≤ ε for all x ∈ (−δ, 0), and ω¯(x) − ω¯(0 ) ≤ ε for all x ∈ (0, δ). 2 2 This, together with (2.9), yields 1 ω¯(x) ≥ ω¯(0−) − ε, 2 for all x ∈ (−δ, δ). Since xn → 0 and yn → 0, we may choose n large enough such that (xn, yn) ⊂ (−δ, δ) and hence inf ω¯(x) ≥ ω¯(0−) − ε. x∈(xn,yn) Thus, (2.10) follows by the arbitrariness of ε > 0.

We next claim that ω¯(0±) = ω±(0). (2.11) By Theorem 2.4 part 2 and the fact thatω ¯ = ω L1-a.e., we have 1 0 1 0 ω−(0) = lim ω(t) dt = lim ω¯(t) dt =ω ¯(0−), r→0 r ˆ−r r→0 r ˆ−r where at the last equality we used the properties of absolutely continuous function and the defini- tion ofω ¯(0−). The equationω ¯(0+) = ω+(0) can be proved similarly.

Therefore lim inf ess inf ω(x) = lim inf inf ω¯(x) ≥ ω¯(0−) = ω−(0), n→∞ x∈(xn,yn) n→∞ x∈(xn,yn) which concludes (2.5), and (2.6) and (2.7) hold by (2.8) and (2.11). 

Definition 2.6. (Weighted function spaces) Let ω ∈ W(Ω) and 1 ≤ p < ∞: p p 1. Lω(Ω) is the space of functions u ∈ L (Ω) such that

|u|p ω dx < ∞, ˆΩ endowed with the norm 1   p p ku − vk p := |u − v| ω dx Lω ˆΩ p if u, v ∈ Lω(Ω); Page 9 Section 2.0

1,p 1,p 2. Wω (Ω) is the space of functions u ∈ W (Ω) such that p p N u ∈ Lω(Ω) and ∇u ∈ Lω(Ω; R ), endowed with the norm

ku − vk 1,p := ku − vk p + k∇u − ∇vk p Wω Lω Lω 1,p if u, v ∈ Wω (Ω);

3. BVω(Ω) is the space of functions u ∈ BV (Ω) such that

1 u ∈ Lω(Ω) and ω d |Du| < ∞, ˆΩ endowed with the norm

ku − vk := ku − vk 1 + ω d |Du − Dv| BVω L ω ˆΩ

if u, v ∈ BVω(Ω); 4. u ∈ SBVω(Ω) if u ∈ BVω(Ω) ∩ SBV (Ω), and u ∈ GSBVω(Ω) if K ∧ u ∨ −K ∈ SBVω(Ω) for all K ∈ N.

Lemma 2.7. Let ω ∈ W(Ω) be given, and suppose that u ∈ SBVω(Ω). Then N−1 H (Su ∩ {ω = +∞}) = 0. Proof. By Definition 2.6 we have

+ − N−1 +∞ > ω d |Du| = |∇u| ω dx + u − u ω dH ˆ ˆ ˆ Ω Ω Su (2.12) + − N−1 ≥ u − u ω dH . ˆSu∩{ω=+∞}

+ − N−1 N−1 Since |u − u | (x) > 0 for H -a.e. x ∈ Su, it follows from (2.12) that H (Su ∩{ω = +∞}) = 0. 

2 Lemma 2.8. The space Lω is a Hilbert space endowed with the inner product

(u, v)L2 := (u, v ω)L2 = u v ω dx. (2.13) ω ˆ √ √ Proof. It is clear that (2.13) is an inner product. Also, (u, u) 2 = (u ω, u ω) 2 ≥ 0, and if Lω L (u, u) 2 = 0 then by (2.3) Lω u2ω dx ≥ u2dx = 0, ˆΩ ˆΩ and thus u = 0 a.e.

2 ∞ To see that Lω is complete, and therefore a Hilbert space, let {un}n=1 be a Cauchy sequence 2 √ ∞ 2 2 in Lω and notice that {un ω}n=1 is a Cauchy sequence in L . Hence, there is a function v ∈ L √ 2 √ 2 2 such that un ω → v in L . Defining u := v/ ω, we have that u ∈ Lω and un → u in Lω.  Page 10 Section 3.1

∞ 1,2 1 Lemma 2.9. Let {un}n=1 ⊂ Wω (Ω) be such that un → u in Lω and

2 sup |∇un| ω dx < ∞. ˆΩ Then, for every measurable set A ⊂ Ω

2 2 lim inf |∇un| ω dx ≥ |∇u| ω dx, n→∞ ˆA ˆA 1,2 and u ∈ Wω (Ω). ∞ 2 N 1 Proof. By (2.3) we have that {∇un}n=1 is uniformly bounded in L (Ω, R ) and un → u in L (Ω). 2 N Hence ∇un * ∇u in L (Ω; R ), and using standard lower semi-continuity of convex energies (see [31], Theorem 6.3.7), we conclude that

2 2 +∞ > lim inf |∇un| ω dx ≥ |∇u| ω dx, n→∞ ˆA ˆA for every measurable subset A ⊂ Ω. In particular, with A = Ω and using the fact that 1 ≤ ω a.e., 1,2 we deduce that u ∈ Wω (Ω).  1 Lemma 2.10. Let u ∈ Lω(Ω) be such that |∇u|2 ω dx + ω dHN−1 < +∞. (2.14) ˆΩ ˆSu N−1 Then H (Su) < +∞ and u ∈ GSBVω(Ω). Proof. By (2.14) and (2.3) 2 N−1 |∇u| dx + H (Su) < +∞, ˆΩ and hence by [8] we have that u ∈ GSBV (Ω). To show that u ∈ GSBVω(Ω), we only need to verify that + − N−1 u − u ω dH < +∞ ˆ K K SuK for every K ∈ N and with uK := K ∧ u ∨ −K. Indeed, by (2.14)

+ − N−1 N−1 N−1 u − u ω dH ≤ 2K ω dH ≤ 2K ω dH < +∞. ˆ K K ˆ ˆ SuK SuK Su  3. The One Dimensional Case 3.1. The Case ω ∈ W(I) ∩ C(I).

Let ω ∈ W(I) ∩ C(I) be given. Consider the functionals   2 0 2 ε 0 2 1 2 Eω,ε(u, v) := v |u | ω dx + |v | + (v − 1) ω dx ˆI ˆI 2 2ε 1,2 1,2 for (u, v) ∈ Wω (I) × W (I), and let

0 2 X Eω(u) := |u | ω dx + ω(x) ˆI x∈Su Page 11 Section 3.1 be defined for u ∈ GSBVω(I) (Note that E1,ε(u, v) and E1(u) are, respectively, the non-weighted Ambrosio-Tortorelli approximation scheme and Mumford-Shah functional studied in [8]).

1 1 Theorem 3.1 (Γ-Convergence). Let Eω,ε: Lω(I) × L (I) → [0, +∞] be defined by ( 1,2 1,2 Eω,ε(u, v) if (u, v) ∈ Wω (I) × W (I), 0 ≤ v ≤ 1, Eω,ε(u, v) := +∞ otherwise.

1 1 Then the functionals Eω,ε Γ-converge, with respect to the Lω × L topology, to the functional ( Eω(u) if u ∈ GSBVω(I) and v = 1 a.e., Eω(u, v) := +∞ otherwise. We begin with an auxiliary proposition.

1,2 1 Proposition 3.2. Let {vε}ε>0 ⊂ W (I) be such that 0 ≤ vε ≤ 1, vε → 1 in L (I) and pointwise a.e., and   ε 0 2 1 2 lim sup |vε| + (vε − 1) dx < ∞. ε→0 ˆI 2 2ε

Then for arbitrary 0 < η < 1 there exists an open set Hη ⊂ I satisfying:

1. the set I \ Hη is a collection of finitely many points in I; η 2. for every set K compactly contained in Hη, we have K ⊂ Bε for ε > 0 small enough, where η  2 Bε := x ∈ I : vε (x) ≥ η . (3.1) Proposition 3.2 is adapted from [8], page 1020-1021 (see Lemma A.3).

1 Proposition 3.3. (Γ-lim inf) For u ∈ Lω(I), let − n Eω (u) := inf lim inf Eω,ε(uε, vε): ε→0 1,2 1,2 1 1 (uε, vε) ∈ Wω (I) × W (I), uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1 . We have − Eω (u) ≥ Eω(u). − − Proof. If Eω (u) = +∞ then there is nothing to prove. Assume that M := Eω (u) < ∞. Choose uε − and vε admissible for Eω (u) such that − lim Eω,ε(uε, vε) = Eω (u) < ∞, ε→0 1 and note that vε → 1 in L (I). Since infx∈Ω ω(x) ≥ 1, we have

lim inf E1,ε(uε, vε) ≤ lim inf Eω,ε(uε, vε) < +∞, ε→0 ε→0 and by [8] we obtain that 0 u ∈ GSBV (I) and H (Su) < +∞. (3.2) Letε ¯ > 0 be sufficiently small so that, for all 0 < ε < ε¯,

Eω,ε(uε, vε) ≤ M + 1. We claim, separately, that

0 2 0 2 2 |u | ω dx ≤ lim inf |uε| vε ω dx < +∞, (3.3) ˆI ε→0 ˆI Page 12 Section 3.1 and   X 1 0 2 1 2 ω(x) ≤ lim inf ε |vε| + (1 − vε) ω dx < +∞. (3.4) ε→0 ˆI 2 2ε x∈Su

Note that (3.3), (3.4), and Lemma 2.10 will yield u ∈ GSBVω(I).

Up to the extraction of a (not relabeled) subsequence, we have uε → u and vε → 1 a.e. in I with     1 0 2 1 2 1 0 2 1 2 lim sup ε |vε| + (1 − vε) dx ≤ lim sup ε |vε| + (1 − vε) ω dx < +∞. ε→0 ˆI 2 2ε ε→0 ˆI 2 2ε Therefore, up to the extraction of a (not relabeled) subsequence, we can apply Proposition 3.2 and deduce that, for a fixed η ∈ (1/2, 1), there exists an open set Hη such that the set I \ Hη contains η only a finite number of points, and for every compact subset K ⊂⊂ Hη, K is contained in Bε for η 0 < ε < ε(K), where Bε is defined in (3.1). We have

0 2 0 2 |u | ω dx ≤ lim inf |uε| ω dx ˆ ε→0 ˆ K K (3.5) 1 2 0 2 1 2 0 2 ≤ lim inf vε |uε| ω dx ≤ lim inf vε |uε| ω dx, η ε→0 ˆK η ε→0 ˆI where we used Lemma 2.9 in the first inequality. By letting K % Hη on the left hand side of (3.5) first and then η % 1 on the right hand side, we proved that

0 2 2 0 2 |u | ω dx ≤ lim inf vε |uε| ω dx, (3.6) ˆI ε→0 ˆI where we used the fact that |I \ Hη| = 0.

We claim that Su ⊂ I \ Hη. Indeed, if there is x0 ∈ Su ∩ Hη, since Hη is open there exists 0 0 an open interval I0 containing x0 and compactly contained in Hη such that for 0 < ε < ε0

0 2 0 2 1 2 0 2 |uε| dx ≤ |uε| ω dx ≤ vε |uε| ω dx ≤ 2(M + 1). ˆ 0 ˆ 0 η ˆ I0 I0 I 1,2 0 Thus u ∈ W (I0), and hence is continuous at x0, which contradicts the fact that x0 ∈ Su.

 1 ∞  2 ∞ Let t ∈ Su, and for simplicity assume that t = 0. We claim that there exist tn n=1, tn n=1, and ∞ 1 2 {sn}n=1 such that −1 < tn < sn < tn < 1, 1 2 lim tn = lim tn = lim sn = 0, n→∞ n→∞ n→∞ and, up to the extraction of a subsequence of {vε}ε>0, 1 2 lim vε(n)(tn) = lim vε(n)(tn) = 1, and lim vε(n)(sn) = 0. (3.7) n→∞ n→∞ n→∞

Because I \ Hη is discrete and 0 ∈ I \ Hη, we may choose δ0 > 0 small enough such that

(−2δ0, 2δ0) ∩ (I \ Hη) = {0} . We claim that

lim sup lim sup inf vε(x) = 0, (3.8) δ→0+ ε→0+ x∈Iδ Page 13 Section 3.1 where Iδ := (−δ, δ). Assume that

lim sup lim sup inf vε(x) =: α > 0. δ→0+ ε→0+ x∈Iδ

Then there exists 0 < δα < δ0 such that 2 lim sup inf vε(x) ≥ α > 0. ε→0+ x∈Iδα 3

δα Up to the extraction of a subsequence of {vε}ε>0, there exists ε0 > 0 such that 1 inf vε(x) ≥ α > 0, x∈Iδα 2

δα for all 0 < ε < ε0 , and we have

|u0|2 dx ≤ |u0|2 ω dx ˆ ˆ Iδα Iδα 0 2 2 0 2 2 2 0 2 2 2 ≤ lim inf |uε| ω dx ≤ lim inf |uε| vε ω dx ≤ lim inf |uε| vε ω dx < (M + 1). ε→0 ˆ ε→0 α ˆ ε→0 α ˆ α Iδα Iδα I 1,2 Hence u ∈ W (Iδα ) and so u is continuous at 0 ∈ Su, and we reduce a contradiction. Therefore, + + in view of (3.8) we may find δn → 0 , ε(n) → 0 , and sn ∈ (−δn, δn) such that

lim sn = 0 and lim vε(n)(sn) = 0. n→∞ n→∞ We claim that for all τ ∈ (0, 1/2),   lim inf (1 − vε(n)(x)) + inf (1 − vε(n)(x)) = 0. (3.9) n→∞ x∈(sn−τ,sn) y∈(sn,sn+τ) To reach a contradiction, assume that there exists τ ∈ (0, 1/2) such that   lim sup inf (1 − vε(n)(x)) + inf (1 − vε(n)(x)) =: β > 0. n→∞ x∈(sn−τ,sn) x∈(sn,sn+τ) Without loss of generality, suppose that 1 lim sup inf (1 − vε(n)(x)) ≥ β > 0. n→∞ x∈(sn−τ,sn) 2 Then 1 lim inf sup vε(n)(x) ≤ 1 − β, n→∞ x∈(sn−τ,sn) 2 which implies that 1 sup v (x) ≤ 1 − β (3.10) ε(nk) 3 x∈(snk −τ,snk ) ∞ ∞ for a subsequence {ε(nk)}k=1 ⊂ {ε(n)}n=1. However, (3.10) contradicts the fact that vε(nk)(x) → 1 a.e. since for k large enough so that |snk | < τ/4 it holds  3 τ  (s − τ, s ) ⊃ − τ, − . nk nk 4 4 Page 14 Section 3.1

1 2 Therefore, in view of (3.9) we may find tm ∈ (sn(m) − 1/m, sn(m)) and tm ∈ (sn(m), sn(m) + 1/m) such that 1 2 1 2 lim tm = lim tm = 0 and lim vε(n(m))(tm) = lim vε(n(m))(tm) = 1. n→∞ n→∞ n→∞ n→∞ We next show that

sn(m)   1 0 2 1 2 1 lim inf ε(n(m)) (vε(n(m))) + (1 − vε(n(m))) dx ≥ . m→∞ ˆ 1 2 2ε(n(m)) 2 tm Indeed, we have

sn(m)   1 0 2 1 2 lim inf ε(n(m)) (vε(n(m))) + (1 − vε(n(m))) dx m→∞ ˆ 1 2 2ε(n(m)) tm

sn(m) sn(m) 0 0 ≥ lim inf (1 − vε(n(m))) vε(n(m)) dx ≥ lim inf (1 − vε(n(m)))vε(n(m))dx m→∞ ˆ 1 m→∞ ˆ 1 tm tm

sn(m) 1 d 2 = lim inf (1 − vε(n(m))) dx m→∞ 2 ˆ 1 dt tm 1  2 1 2 1 = lim (1 − vε(n(m))(sn(m))) − (1 − vε(n(m))(tm)) = , 2 n→∞ 2 where we used (3.7). Similarly, we obtain

2 tm   1 0 2 1 2 1 lim inf ε(n(m)) (vε(n(m))) + (1 − vε(n(m))) dx ≥ . m→∞ ˆ 2 2ε(n(m)) 2 sn(m) We observe that, since ω is positive,

t2 m 1 2 1  0 2 ε(n(m)) vε(n(m)) + (1 − vε(n(m))) ω(x) dx ˆ 1 2 2ε(n(m)) tm (   sn(m) 1 2 1  0 2 ≥ inf ω(r) · ε(n(m)) vε(n(m)) + (1 − vε(n(m))) dx (3.11) r∈(t1 ,t2 ) ˆ 1 2 2ε(n(m)) m m tm 2 ) tm   1 0 2 1 2 + ε(n(m)) (vε(n(m))) + (1 − vε(n(m))) dx , ˆ 2 2ε(n(m)) sn(m) and so t2 m 1 2 1  0 2 lim inf ε(n(m)) vε(n(m)) + (1 − vε(n(m))) ω(x) dx m→∞ ˆ 1 2 2ε(n(m)) tm (   sn(m)   1 2 ε 0 2 ≥ lim inf inf ω(r) lim inf (1 − vε(n(m))) + (vε(n(m))) dx m→∞ r∈(t1 ,t2 ) n→∞ ˆ 1 2ε(n(m)) 2 m m tm t2 ) m 1 2 1  + ε(n(m)) v0 + (1 − v )2 dx ˆ 2 ε(n(m)) 2ε(n(m)) ε(n(m)) sn(m) 1 1 ≥ + ω(0) = ω(0), 2 2 Page 15 Section 3.1 where we used the fact that ω is continuous at 0.

Finally, since Su ⊂ I \ Hη, by (3.2) we have that Su is a finite collection of points, and we may repeat the above argument for all t ∈ Su by partitioning I into non-overlaping intervals where there is at most one point of Su, to deduce that   1 0 2 1 2 X lim inf ε |vε| + (1 − vε) ω(x) dx ≥ ω(x). (3.12) ε→0 ˆI 2 2ε x∈Su In view of (3.6) and (3.12), we conclude that

lim inf Eω,ε(uε, vε) ≥ Eω(u). ε→0  1 ∞ Proposition 3.4. (Γ-lim sup) For u ∈ Lω(I) ∩ L (I), let  + Eω (u) := inf lim sup Eω,ε(uε, vε): ε→0 1,2 1,2 1 1 (uε, vε) ∈ Wω (I) × W (I), uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1 . We have + Eω (u) ≤ Eω(u). (3.13)

Proof. Without loss of generality, assume that Eω(u) < ∞. Then by Lemma 2.10 we have u ∈ 0 1,2 GSBVω(I) and H (Su) < ∞. To prove (3.13), we show that there exist {uε}ε>0 ⊂ Wω (I) and 1,2 1 1 {vε}ε>0 ⊂ W (I) such that uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1, and

lim sup Eω,ε(uε, vε) ≤ Eω(u). (3.14) ε→0

Step 1: Assume that Su = {0}.

1,2 Fix η > 0, and let T > 0 and v0 ∈ W (0,T ) be such that T h 2 0 2i 0 ≤ v0 ≤ 1 and (1 − v0) + |v0| dx ≤ 1 + η, (3.15) ˆ0 with v0(0) = 0 and v0(T ) = 1.

For ξε = o(ε) we define  0 if |x| ≤ ξε,   |x|−ξε  vε(x) := v0 ε if ξε < |x| < ξε + εT, (3.16)  1 if |x| ≥ ξε + εT. 1 Since kvεkL∞(I) ≤ 1, by Lebesgue Dominated Convergence Theorem we have vε → 1 in L . Let

( 1 u(x) if |x| ≥ 2 ξε, uε(x) := 1  1  1 (3.17) affine from u − 2 ξε to u 2 ξε if |x| < 2 ξε. and we observe that (recall in assumption we have u ∈ L∞(I))

kuεkL∞(I) ≤ kukL∞(I) , Page 16 Section 3.1 and

kukL∞(I) ω dx < ∞. ˆI 1 Therefore, by Lebesgue Dominated Convergence Theorem we deduce that uε → u in Lω. Moreover, by (3.16) and (3.17) we observe that

( 2 0 2 2 0 2 vε |u | if x ≥ |ξε| , vε |uε| = 0 if x < |ξε| , 2 0 2 0 2 0 2 and so vε |uε| ≤ |u | . Since Eω(u) < ∞ we have u ∈ Lω(I), by Lebesgue Dominated Convergence Theorem we obtain 2 0 2 0 2 lim vε |uε| ω dx = |u | ω dx. ε→0 ˆI ˆI Next, since ω is positive we have   ε 0 2 1 2 |vε| + (vε − 1) ω(x) dx ˆI 2 2ε −ξε   ξε+εT   ξε ε 0 2 1 2 ε 0 2 1 2 1 = |vε| + (vε − 1) ω(x) dx + |vε| + (vε − 1) ω(x) dx + ω(x)dx ˆ−ξε−εT 2 2ε ˆξε 2 2ε 2ε ˆ−ξε

! ( −ξε   ε 0 2 1 2 ≤ sup ω(t) · |vε| + (vε − 1) dx t∈(−ξε−εT,ξε+εT ) ˆ−ξε−εT 2 2ε

ξε+εT   ) ε 0 2 1 2 ξε + |vε| + (vε − 1) dx + kωkL∞ . ˆξε 2 2ε ε We obtain   ε 0 2 1 2 lim sup |vε| + (vε − 1) ω(x) dx ε→0 ˆI 2 2ε ! ≤ lim sup sup ω(t) · ε→0 t∈(−ξε−εT,ξε+εT )

( −ξε   ξε+εT   ) ε 0 2 1 2 ε 0 2 1 2 lim sup |vε| + (vε − 1) dx + |vε| + (vε − 1) dx ε→0 ˆ−ξε−εT 2 2ε ˆξε 2 2ε ≤ ω(0)(1 + η), where we used (3.15).

We conclude that 0 2 lim sup Eω,ε(uε, vε) ≤ |u | ω dx + ω(0)(1 + η), ε→0 ˆI and (3.14) follows by the arbitrariness of η.

Step 2: In the general case in which Su is finite, we obtain uε by repeating the construction in Step 1 (see (3.17)) in small non-overlapping intervals centered at each point in Su. To obtain vε, we Page 17 Section 3.2 repeat the construction (3.16) in those intervals and extend by 1 in the complement of the union of those intervals. Hence, by Step 1 we have

0 2 X lim sup Eω,ε(uε, vε) ≤ |u | ω dx + (1 + η) ω(x), ε→0 ˆI x∈Su + and again (3.14) follows by letting η → 0 .  Proof of Theorem 3.1. The lim inf inequality follows from Proposition 3.3. For the lim sup inequal- ity, we note that for any given u ∈ GSBVω such that Eω(u) < +∞, by Lebesgue Monotone Convergence Theorem we have that

Eω(u) = lim Eω(K ∧ u ∨ −K), K→∞ and hence a diagonal argument together with Proposition 3.4 conclude the proof.  3.2. The Case ω ∈ W(I) ∩ SBV (I).

Consider the functionals   0 2 2 ε 0 2 1 2 Eω,ε(u, v) := |u | v ω dx + |v | + (v − 1) ω dx ˆI ˆI 2 2ε 1,2 1,2 for (u, v) ∈ Wω (I) × W (I), and for u ∈ GSBVω(I) let

0 2 X − Eω(u) := |u | ω dx + ω (x). ˆI x∈Su

We note that if ω ∈ W(I)∩SBV (I) and ω is continuous in a neighborhood of Su, for u ∈ GSBVω(I), then X X ω−(x) = ω(x)

x∈Su x∈Su and Theorem 3.1 still holds.

Here we study the case in which ω is no longer continuous on a neighborhood of Su. We recall that ∞ 0 ω ∈ SBV (I) implies that ω ∈ L (I) and by definition of ω ∈ W(I), we have H (Sω) < ∞. Also, we note that ω− is defined H0-a.e, hence everywhere in I. 1 1 Theorem 3.5. Let Eε: Lω(I) × L (I) → [0, +∞] be defined by ( 1,2 1,2 Eω,ε(u, v) if (u, v) ∈ Wω (I) × W (I), 0 ≤ v ≤ 1, Eω,ε(u, v) := +∞ otherwise.

1 1 Then the functionals Eω,ε Γ-converge, with respect to the Lω × L topology, to the functional ( Eω(u) if u ∈ GSBVω(I) and v = 1 a.e., Eω(u, v) := +∞ otherwise. The proof of Theorem 3.5 will be split into two propositions. 1 Proposition 3.6. (Γ-lim inf) For u ∈ Lω(I), let − n Eω (u) := inf lim inf Eω,ε(uε, vε): ε→0 1,2 1,2 1 1 (uε, vε) ∈ Wω (I) × W (I), uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1 . Page 18 Section 3.2

We have − Eω (u) ≥ Eω(u).sn(m)

− Proof. Without lose of generality, assume that Eω (u) < +∞. We use the same arguments of the proof of Proposition 3.3 until (3.11). In particular, (3.2) and (3.3) still hold, that is

0 0 2 0 2 2 H (Su) < +∞ and |u | ω dx ≤ lim inf |uε| vε ω dx. ˆI ε→0 ˆI Invoking (3.11), we have

t2 m 1 2 1  0 2 lim inf ε(n(m)) vε(n(m)) + (1 − vε(n(m))) ω(x) dx m→∞ ˆ 1 2 2ε(n(m)) tm (   sn(m)   1 0 2 1 2 ≥ lim inf ess inf ω(r) · lim inf ε(n(m)) (vε(n(m))) + (1 − vε(n(m))) dx m→∞ r∈(t1 ,t2 ) n→∞ ˆ 1 2 2ε(n(m)) m m tm 2 ) tm   1 0 2 1 2 + ε(n(m)) (vε(n(m))) + (1 − vε(n(m))) dx ˆ 2 2ε(n(m)) sn(m) 1 1 ≥ ω−(0) + = ω−(0), 2 2 where the last step is justified by (2.5).

Since Su is finite, we may repeat the above argument for all t ∈ Su by partitioning I into finitely many non-overlapping intervals where there is at most one point of Su, to conclude that   1 0 2 1 2 X − lim inf ε |vε| + (1 − vε) ω(x) dx ≥ ω (x), ε→0 ˆI 2 2ε x∈Su as desired. 

The construction of the recovery sequence uses a reflection argument nearby points of Sω ∩ Su.

1 ∞ Proposition 3.7. (Γ-lim sup) For u ∈ Lω(I) ∩ L (I), let  + Eω (u) := inf lim sup Eω,ε(uε, vε): ε→0 1,2 1,2 1 1 (uε, vε) ∈ Wω (I) × W (I), uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1 . We have + Eω (u) ≤ Eω(u). (3.18) 1,2 Proof. To prove (3.18), we only need to explicitly construct a sequence {(uε, vε)}ε>0 ⊂ Wω (I) × 1,2 1 1 W (I) such that uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1, and

lim sup Eω,ε(uε, vε) ≤ Eω(u). (3.19) ε→0

Step 1: Assume that {0} = Su ⊂ Sω. Page 19 Section 3.2

Recall that we always identify ω with its approximation representativeω ¯, and by (2.6) we may assume that (the converse situation may be dealt with similarly) lim ω(t) = ω−(0) and lim ω(t) = ω+(0). t%0− t&0+

Fix η > 0. For ε > 0 small enough, and with ξε = o(ε), as in (3.15), (3.16) let  0 if |x| ≤ ξε   |x|−ξε  v˜ε(x) := v0 ε if ξε < |x| < ξε + εT  1 if |x| ≥ ξε + εT, and define

vε(x) :=v ˜ε(x + 2ξε + εT ).

Note that from (3.16) vε → 1 a.e., and since 0 ≤ vε ≤ 1, by Lebesgue Dominated Convergence 1 Theorem we have vε → v in L . We also note that ε 1 |v0 (x)|2 + (1 − v (x))2 = 0 (3.20) 2 ε 2ε ε if x ∈ (−1, −3ξε − 2εT ) ∪ (−ξε, 1), and if x ∈ (−3ξε − εT, −ξε − εT ) then

vε(x) = 0. (3.21) Set ( u(x) if x ∈ (−1, −2ξε − εT ) ∪ (0, 1), u˜ε(x) := u(−x) if x ∈ [−2ξε − εT, 0]. + − + Observe thatu ˜ε(x) is continuous at 0 sinceu ˜ε (0) =u ˜ε (0) = u (0) by the definition ofu ˜ε(x), and u˜ε may only jump at t = −2ξε − εT but not at t = 0 where u jumps.

We define the recovery sequence ( u˜ε(x) if x ∈ I \ [−2.5ξε − εT, −1.5ξε − εT ], uε(x) := affine fromu ˜ε(−2.5ξε − εT ) tou ˜ε(−1.5ξε − εT ) if x ∈ [−2.5ξε − εT, −1.5ξε − εT ]. We claim that

lim |uε − u| ω dx = 0 (3.22) ε→0 ˆI and 0 2 2 0 2 lim sup |uε| vε ω dx ≤ |u | ω dx. (3.23) ε→0 ˆI ˆI To show (3.22), we observe that 0

lim |uε − u| ω dx ≤ lim |uε − u| ω dx ≤ lim 2 kuk ∞ kωk ∞ (2.5ξε + εT ) = 0. ε→0 ε→0 ε→0 L L ˆI ˆ−2.5ξε−εT We next prove (3.23). By (3.20) we have 0 0 2 2 0 2 0 2 |uε| vε ω dx ≤ |u | ω dx + kωkL∞ |u (−x)| dx, ˆI ˆI ˆ−ξε−εT Page 20 Section 4.1 and so

0 2 2 0 2 lim sup |uε| vε ω dx ≤ |u | ω dx, ε→0 ˆI ˆI 0 2 0 2 since u ∈ Lω(I), and we conclude that u ∈ L (I).

On the other hand, by (3.20) and (3.21),   ε 0 2 1 2 |vε| + (vε − 1) ω(x) dx ˆI 2 2ε −ξε   ε 0 2 1 2 = |vε| + (vε − 1) ω(x) dx ˆ−3ξε−2εT 2 2ε

! −ξε   ε 0 2 1 2 ≤ ess sup ω(t) |vε| + (vε − 1) dx t∈(−3ξε−2εT ,−ξε) ˆ−3ξε−2εT 2 2ε

! ξε+εT   ε 0 2 1 2 = ess sup ω(t) |v˜ε| + (˜vε − 1) dx. t∈(−3ξε−2εT ,−ξε) ˆ−ξε−εT 2 2ε

Therefore,   ε 0 2 1 2 lim sup |vε| + (vε − 1) ω(x) dx ε→0 ˆI 2 2ε

!( ξε+εT   ) ε 0 2 1 2 ≤ lim sup ess sup ω(t) lim sup |v˜ε| + (˜vε − 1) dx ε→0 t∈(−3ξε−2εT ,−ξε) ε→0 ˆ−ξε−εT 2 2ε ≤ ω−(0)(1 + η), where at the last inequality we used the definition ofv ˜ε, (3.15), and (2.6).

We conclude that

0 2 − lim sup Eω,ε(uε, vε) ≤ |u | ω dx + ω (0)(1 + η), ε→0 ˆI and (3.19) follows due to the arbitrariness of η.

Step 2: In the general case, we recall that Su is finite. We may obtain uε and vε by repeating the construction in Step 1 in small non-overlapping intervals centered at every point of Su ∩ Sω, and by repeating the construction in Step 1 in Lemma 3.4 in those non-overlaping intervals centered at points of Su \ Sω. Hence, we have

0 2 X − lim sup Eω,ε(uε, vε) ≤ |u | ω dx + (1 + η) ω (x), ε→0 ˆI x∈Su and (3.19) follows due to the arbitrariness of η. 

Proof of Theorem 3.5. The proof follows that of Theorem 3.1, using Proposition 3.6 and Proposition 3.7, in place of Proposition 3.3 and 3.4, respectively.  Page 21 Section 4.1

4. The Multi-Dimensional Case 4.1. One-Dimensional Restrictions and Slicing Properties.

Let SN−1 be the unit sphere in RN and let ν ∈ SN−1 be a fixed direction. We set   N Πν := x ∈ R : hx, νi = 0 ;  Ω1 := {t ∈ : x + tν ∈ Ω} for x ∈ Π ; x,ν R ν (4.1) Ω := {y = x + tν : t ∈ } ∩ Ω;  x,ν R  Ων := {x ∈ Πν :Ωx,ν 6= ∅} .

We also define the 1-d restriction function ux,ν of the function u as 1 ux,ν (t) := u(x + tν), x ∈ Ων , t ∈ Ωx,ν .

We recall the result below from [8], Theorem 3.3. Theorem 4.1. Let ν ∈ SN−1 be given, and assume that u ∈ W 1,2(Ω). Then, for HN−1-a.e. 1,2 x ∈ Ων , ux,ν belongs to W (Ωx,ν ) and 0 ux,ν (t) = h∇u(x + tν), νi .

1,p N−1 Lemma 4.2. Let ω ∈ W(Ω) and u ∈ Wω (Ω), for p ∈ [1, ∞), be given. If ν ∈ S and v ∈ W 1,p(Ω) is nonnegative, then

p p 0 p p |∇u| v ω dx ≥ ux,ν (t) vx,ν (t) ωx,ν (t) dtdx. ˆ ˆ ˆ 1 Ω Ων Ωx,ν

1,p 1,p N−1 Proof. Since ess infΩ ω ≥ 1, we have Wω (Ω) ⊂ W (Ω). Given ν ∈ S and a nonnegative function v ∈ W 1,p(Ω), by Fubini’s Theorem and Theorem 4.1 we have

|∇u|p vp ω dx = |∇u|p vp ω dt dHN−1(x) ˆ ˆ ˆ 1 Ω Ων Ωx,ν

p p N−1 ≥ |h∇u(x + tv), νi| vx,ν (t) ωx,ν (t) dtdH (x) ˆ ˆ 1 Ων Ωx,ν

0 p p N−1 = ux,ν (t) vx,ν (t) ωx,ν (t) dtdH (x), ˆ ˆ 1 Ων Ωx,ν where we used the fact that 0 ux,ν (t) = |h∇u(x + tν), νi| ≤ |∇u(x + tν)| N−1 H -a.e. x ∈ Ων . 

N−1 N N−1 Proposition 4.3. Let ν ∈ S be a fixed direction, Γ ⊂ R be such that H (Γ) < ∞, and Pν : N N N−1 R → Πν be a projection operator, where by (4.1) Πν ⊂ R is a hyperplane in R . Then N−1 N−1 H (Pν (Γ)) ≤ H (Γ), (4.2) N−1 and for H -a.e. x ∈ Πν , 0 H (Ωx,ν ∩ Γ) < +∞. (4.3) Page 22 Section 4.1

Proof. Note that (4.2) follows immediately from Theorem 7.5 in [40] since Pν is a Lipschitz map with Lipschitz constant less or equal to one. To show (4.3), we apply co-area formula (see [4], Theorem 2.93) with Pν and again since Pν is a Lipschitz map with Lipschitz constant less or equal to one, we are done.  0 N Set x = (x , xN ) ∈ R , where 0 N−1 N x ∈ R denotes the first N − 1 component of x ∈ R , and given u: RN−1 → R and G ⊂ RN−1, we define the graph of u over G as  0 N 0 0 F (u; G) := (x , xN ) ∈ R : x ∈ G, xN = u(x ) . If u is Lipschitz, then we call F (u; G) a Lipschitz -(N − 1)-graph. N N−1 N Lemma 4.4. Let Γ ⊂ R be a H -rectifiable set, and let Px,νΓ : R → Tx,νΓ be a projection operator for x ∈ Γ. Then HN−1( (Γ ∩ Q (x , r))) lim Px0,νΓ νΓ 0 = 1 (4.4) r→0 rN−1 N−1 for H -a.e. x0 ∈ Γ. Proof. By Proposition 4.3 we have N−1 N−1 H (Px0,νΓ (Γ ∩ QνΓ (x0, r))) H (Γ ∩ QνΓ (x0, r)) lim sup N−1 ≤ lim sup N−1 = 1 (4.5) r→0 r r→0 r for a.e. x0 ∈ Γ. By Theorem 2.76 in [4] we may write ∞ [ Γ = Γ0 ∪ Γi i=1 N−1 N−1 1 as a disjoint union with H (Γ0) = 0, Γi = (Ni, (Ni)) where li : R → R is of class C and N−1 Ni ⊂ R .

0 Let x0 ∈ Γi0 for some i0 ∈ N and, without loss of generality, let (−∇li0 (x0), 1) = νΓ(x0), with x0 a point of density one in Γ0 (see Exercise 10.6 in [39]). Up to a rotation and a translation, we 0 N−1 N−1 may assume that ∇li0 (x0) = (0, 0,..., 0) ∈ R , x0 = (0, 0,..., 0), and Px0,νΓ :Γi0 → R ×{0}. Therefore, for r > 0 small enough, 0 Γi0 ∩ QνΓ (x0, r) = (Px0,νΓ (Γi0 ∩ QνΓ (x0, r)) , li0 ((Px0,νΓ (Γi0 ∩ QνΓ (x0, r))) )), and by Theorem 9.1 in [40] we obtain that, q HN−1(Γ ∩ Q (x , r)) = 1 + |∇l (x0)|2dHN−1(x0). i0 νΓ 0 ˆ i0 Px0,νΓ (Γi0 ∩QνΓ (x0,r)) 1 Since li0 is of class C and ∇li0 (x0) = 0, for ε > 0 choose rε > 0 such that |∇li0 (x)| < ε for all 0 < r < rε. Therefore, we have that N−1 N−1 H (Px0,νΓ (Γ ∩ QνΓ (x0, r))) ≥ H (Px0,νΓ (Γi0 ∩ QνΓ (x0, r))) q 1 0 2 0 ≥ √ 1 + |∇li (x )| dx 1 + ε2 ˆ 0 Px0,νΓ (Γi0 ∩QνΓ (x0,r))

1 N−1 = √ H (Γi ∩ Qν (x0, r)). 1 + ε2 0 Γ Page 23 Section 4.1

We obtain HN−1( (Γ ∩ Q (x , r))) 1 HN−1(Γ ∩ Q (x , r)) 1 lim inf Px0,νΓ νΓ 0 ≥ lim inf √ i0 νΓ 0 = √ . r→0 rN−1 r→0 1 + ε2 rN−1 1 + ε2 By the arbitrariness of ε > 0, we deduce that HN−1( (Γ ∩ Q (x , r))) lim inf Px0,νΓ νΓ 0 ≥ 1, r→0 rN−1 and, in view of (4.5), we conclude that HN−1( (Γ ∩ Q (x , r))) lim Px0,νΓ νΓ 0 = 1. r→0 rN−1  Lemma 4.5. Let Q := (−1, 1)N and let Γ ⊂ Q be a HN−1-rectifiable set such that HN−1(Γ) < ∞ and H0(Γ ∩ ({x0} × (−1, 1))) ≥ 1 (4.6) for HN−1-a.e. x0 ∈ (−1, 1)N−1. Then there exists a HN−1-measurable subset Γ0 ⊂ Γ such that H0(Γ0 ∩ ({x0} × (−1, 1))) = 1. (4.7) for HN−1-a.e. x0 ∈ (−1, 1)N−1. Proof. By Lemma 4.3 we have H0(Γ0 ∩ ({x0} × (−1, 1))) < +∞ for HN−1-a.e. x0 ∈ (−1, 1)N−1. Thus, for HN−1-a.e. x0 ∈ (−1, 1)N−1, the set 0 Γx0 := Γ ∩ ({x } × (−1, 1)) is a finite collection of singletons, hence closed, and by (4.6) is non-empty. Applying Corollary 1.1 N−1 0 in [28], page 237, we obtain a H measurable subset Γ ⊂ Γ which satisfies (4.7).  N−1 Lemma 4.6. Let τ > 0 and η > 0 be given. Let u ∈ SBV (Ω) and assume that H (Su) < ∞. The following statements hold: N−1 1. there exist a set S ⊂ Su with H (Su \ S) < η, and a countable collection Q of mutually disjoint open cubes centered on elements of Su such that [ Q ⊂ Ω, Q∈Q and   N−1 [ H S \ Q = 0; Q∈Q N−1 2. for every Q ∈ Q there exists a direction vector νQ ∈ S such that 0 H (S ∩ Qx,νQ ) = 1,

for HN−1 a.e. x ∈ Q ∩ S; 3. for every Q ∈ Q, S ∩ Q is contained in a Lipschitz (N − 1)- graph ΓQ with Lipschitz constant less than τ. Page 24 Section 4.1

Proof. Let τ, η > 0 be given. By Theorem 2.76 in [4], there exist countably many Lipschitz (N −1)- N graphs Γi ⊂ R such that (up to a rotation and a translation)

0 0 0 Γi = {(x , xN ): x ∈ Ni, xN = li(x )}

N−1 N−1 1 with Ni ⊂ R , li: R → R of class C , |∇li| < τ for all i ∈ N, and

∞ ! N−1 [ H Su \ Γi = 0. (4.8) i=1 Without lose of generality, we assume that

N−1 0 N−1 H (Γi ∩ Γi0 ) = 0 if i 6= i ∈ N, and H (Γi) > 0. (4.9)

N−1 We denote by P the collection of Lipschitz (N − 1)-graphs Γi in (4.8)-(4.9). By (4.9), for H - a.e. x ∈ Su there exists only one Γ ∈ P such that x ∈ Γ, and we denote such Γ by Γx and we write

 0 0 N−1 0 Γx = (y , yN ): y ∈ Nx ⊂ R , yN = lx(y ) . For simplicity of notation, in what follows we will abbreviate ν (x) = ν (x) by ν(x), Q (x, r) Γx Su νSu by Q(x, r), and T by T . x,νSu x

N−1 N−1 N−1 We also note that H (Γ ∩ Su) < H (Su) < ∞ for each Γ ∈ P. Then H - a.e. x has density 1 in Γx ∩ Su (see Theorem 2.63 in [4]). Denote by S1 the set of points such that Su has density 1 at x and N−1 H (Su ∩ Γx ∩ Qν (x, r)) lim Γx = 1. (4.10) r→0 rN−1 N−1 Then H (Su \ S1) = 0.

Define HN−1(S ∩ Q(x, r)) f (x) := 1 . r rN−1 + Since fr(x) → 1 as r → 0 for x ∈ S1, by Egoroff’s Theorem there exists a set S2 ⊂ S1 such that N−1 H (S1 \ S2) < η/4 and fr → 1 uniformly on S2. Find r1 > 0 such that

HN−1(S ∩ Q(x, r)) 1 1 ≥ , rN−1 2 i.e., 1 HN−1(S ∩ Q(x, r)) ≥ rN−1 (4.11) 1 2 N−1 N−1 for all 0 < r < r1 and x ∈ S2. Since S2 ⊂ S1, S2 is also H -rectifiable and so H a.e. x ∈ S2 has density one. Without loss of generality, we assume that every point in S2 has density one and satisfies (4.4) in Lemma 4.4. Page 25 Section 4.1

Let x0 ∈ S2 be given and recall (4.1). We define n o T (x , r) := x ∈ Q(x , r) ∩ T : H0([Q(x , r)] ∩ S ) ≥ 2 , b 0 0 x0 0 x,ν(x0) 2 n o T (x , r) := x ∈ Q(x , r) ∩ T : H0([Q(x , r)] ∩ S ) = 1 , g 0 0 x0 0 x,ν(x0) 2 [   S (x , r) := S ∩ [Q(x , r)] , (4.12) b 0 2 0 x,ν(x0) x∈Tb(x0,r) [   S (x , r) := S ∩ [Q(x , r)] . g 0 2 0 x,ν(x0) x∈Tg (x0,r) Note that Tb(x0, r) ∩ Tg(x0, r) = ∅ and Sb(x0, r) ∩ Sg(x0, r) = ∅, (4.13) and by Proposition 4.3 we have N−1 N−1 H (Sg(x0, r)) ≥ H (Tg(x0, r)). (4.14) We claim that N−1 N−1 H (Sb(x0, r)) ≥ 2H (Tb(x0, r)). (4.15) 1 By Lemma 4.5 there exists a measurable selection Sb ⊂ Sb(x0, r) such that HN−1(S1(x , r) ∩ [Q(x , r)] ) = 1 b 0 0 x,ν(x0) N−1 for H -a.e. x ∈ Tb(x0, r). We define 2 1 Sb (x0, r) := Sb(x0, r) \ Sb (x0, r).

By the definition of Sb(x0, r) in (4.12), we have H0([Q(x , r)] ∩ S1(x , r)) ≥ 1 and H0([Q(x , r)] ∩ S2(x , r)) ≥ 1 0 x,ν(x0) b 0 0 x,ν(x0) b 0 for all x ∈ Tb(x0, r). We observe that N−1 N−1 1 N−1 2 N−1 H (Sb(x0, r)) = H (Sb (x0, r)) + H (Sb (x0, r)) ≥ 2H (Tb(x0, r)) by Proposition 4.3 and we deduce (4.15).

We next show that HN−1(S (x , r)) lim b 0 = 0. (4.16) r→0 rN−1

Indeed, since Tx0 is the tangent hyperplane to S2 at x0, T (x , r) ∪ T (x , r) = (S ∩ Q(x , r)), b 0 g 0 Px0,νSu 2 0 and by Lemma 4.4 it follows that HN−1(T (x , r) ∪ T (x , r)) lim b 0 g 0 = 1. (4.17) r→0 rN−1 On the other hand, in view of (4.13), (4.14), and (4.15), we deduce that N−1 N−1 N−1 H (Sb(x0, r) ∪ Sg(x0, r)) = H (Sb(x0, r)) + H (Sg(x0, r)) N−1 N−1 ≥ 2H (Tb(x0, r)) + H (Tg(x0, r)). That is, N−1 N−1  N−1 N−1  H (Tb(x0, r)) ≤ H (Sb(x0, r) ∪ Sg(x0, r)) − H (Tb(x0, r)) + H (Tg(x0, r)) Page 26 Section 4.1

N−1 N−1 = H (Sb(x0, r) ∪ Sg(x0, r)) − H (Tb(x0, r) ∪ Tg(x0, r)). (4.18)

Since x0 ∈ S2 has density 1, we have HN−1(S (x , r) ∪ S (x , r)) HN−1(S ∩ Q(x , r)) lim b 0 g 0 = lim 2 0 = 1. (4.19) r→0 rN−1 r→0 rN−1 In view of (4.17), (4.18), and (4.19), we conclude that N−1 H (Tb(x0, r)) lim sup N−1 r→0 r HN−1(S (x , r) ∪ S (x , r)) HN−1(T (x , r) ∪ T (x , r)) ≤ lim b 0 g 0 − lim b 0 g 0 = 0, r→0 rN−1 r→0 rN−1 which implies that HN−1(T (x , r)) lim b 0 = 0. r→0 rN−1 This, together with (4.13) and (4.17), yields HN−1(T (x , r)) lim g 0 = 1, r→0 rN−1 and so by (4.14) we have HN−1(S (x , r)) HN−1(T (x , r)) lim inf g 0 ≥ lim g 0 = 1, r→0 rN−1 r→0 rN−1 while by (4.19) N−1 N−1 H (Sg(x0, r)) H (Sb(x0, r) ∪ Sg(x0, r)) lim sup N−1 ≤ lim N−1 = 1, r→0 r r→0 r and we conclude that HN−1(S (x , r)) lim g 0 = 1. r→0 rN−1 Now, also in view of (4.13) and (4.19), we deduce (4.16).

We define, for x ∈ S2, HN−1(S (x, r)) g (x) := b . r rN−1 By (4.16) we have limr→0 gr(x) = 0 for all x ∈ S2, therefore by Egoroff’s Theorem there exists a set S ⊂ S such that 3 2 η HN−1(S \ S ) < 2 3 4 and gr → 0 uniformly on S3. Choose 0 < r2 < r1 such that N−1 H (Sb(x, r)) η 1 N−1 < N−1 (4.20) r 16 H (Su) for all x ∈ S3 and 0 < r < r2. We claim that, for x ∈ S3 and the corresponding Γx ∈ P, HN−1 (S (x, r) \ [S ∩ Γ ∩ Q(x, r)]) lim g u x = 0. (4.21) r→0 rN−1 Suppose that N−1 H (Sg(x, r) \ [Su ∩ Γx ∩ Q(x, r)]) 0 < lim sup N−1 =: δ. r→0 r Page 27 Section 4.1

By (4.10), and the fact that Γx ⊂ Su, we have that HN−1(S ∩ Q(x, r)) 1 = lim u r→0 rN−1 HN−1 (([S ∩ Q(x, r)] \ [S ∩ Γ ∩ Q(x, r)]) ∪ [S ∩ Γ ∩ Q(x, r)]) = lim u u x u x r→0 rN−1 N−1 H ([Sg(x, r)] \ [Su ∩ Γx ∩ Q(x, r)]) ≥ lim sup N−1 r→0 r HN−1[S ∩ Γ ∩ Q(x, r)] + lim u x r→0 rN−1 = δ + 1 > 1, which is a contradiction.

We define, for x ∈ S3, HN−1 (S (x, r) \ [S ∩ Γ ∩ Q(x, r)]) h (x) := g u x . r rN−1

By (4.21) limr→0 hr(x) = 0 for all x ∈ S3, therefore by Egoroff’s Theorem there exists a set of S4 ⊂ S3 such that η HN−1(S \ S ) < , 3 4 4 and hr → 0 uniformly on S4. Choose 0 < r3 < r2 such that N−1 H (Sg(x, r) \ [Su ∩ Γx ∩ Q(x, r)]) η 1 N−1 < N−1 (4.22) r 16 H (Su) for all x ∈ S4 and 0 < r < r3, and let 0 Q := {Q(x, r): x ∈ S4, 0 < r < r3} . By Besicovitch’s Covering Theorem we may extract a countable collection Q of mutually disjoint cubes from Q0 such that    [ N−1 [ Q ⊂ Ω and H S4 \  Q = 0. Q∈Q Q∈Q Define     [ [   S := S4 \  Sb(xQ, rQ) ∪  Sg(xQ, rQ) \ Su ∩ ΓxQ ∩ Q  , (4.23) Q∈Q Q∈Q where xQ is the center of cube Q and rQ is the side length of Q. Note that the set S satisfies properties 2 and 3 in the statement of Lemma 4.6. Finally, we show that N−1 H (Su \ S) < η. Indeed, in view of (4.20) and (4.22), and using the fact that the cubes Q ∈ Q are mutually disjoint, we have   N−1 [ X N−1 η X N−1 H  Sb(xQ, rQ) = H (Sb(xQ, rQ)) ≤ N−1 rQ , (4.24) 16H (Su) Q∈Q Q∈Q Q∈Q Page 28 Section 4.1 and   N−1 [   H  Sg(xQ, rQ) \ Su ∩ ΓxQ ∩ Q  Q∈Q X N−1  η X N−1 = H (Sg(xQ, rQ) \ Su ∩ ΓxQ ∩ Q ) ≤ N−1 rQ . (4.25) 16H (Su) Q∈Q Q∈Q By (4.11) we obtain   X 1 X [ rN−1 ≤ HN−1(S ∩ Q) = HN−1 S ∩ Q ≤ HN−1(S ). (4.26) 2 Q 1  u  u Q∈Q Q∈Q Q∈Q Using (4.24), (4.25), and (4.26), we deduce that   [ η HN−1 S (x , r ) ≤ ,  b Q Q  8 Q∈Q and   [ η HN−1 S (x , r ) \ S ∩ Γ ∩ Q ≤ ,  g Q Q u xQ  8 Q∈Q and so by (4.23) we get η HN−1(S \ S) ≤ . 4 4 Since S ⊂ S4 ⊂ S3 ⊂ S2 ⊂ S1 ⊂ Su, we conclude that N−1 H (Su \ S) N−1 N−1 N−1 N−1 N−1 ≤H (Su \ S1) + H (S1 \ S2) + H (S2 \ S3) + H (S3 \ S4) + H (S4 \ S) η η η η ≤ + + + = η. 4 4 4 4  Lemma 4.7. Let ω ∈ C(Ω) be nonnegative, let Γ ⊂ Ω be a HN−1-rectifiable set, and let τ ∈ (0, 1) N−1 be given. Then for H -a.e. x0 ∈ Γ, there exists r0 := r0(x0) > 0 such that for each 0 < r < r0 there exist t0 ∈ (−τr/4, τr/4) and 0 < t0,r = t0,r(t0, τ, x0, r) < |t0| such that 1 sup ω(x) dHN−1dl 0

Proof. Fix x0 ∈ Γ with density 1 and let τ > 0 be given. There exists r1 > 0 such that 1 HN−1(Γ ∩ Q (x , r)) ≤ νΓ 0 ≤ 1 + τ 2, (4.27) 1 + τ 2 rN−1 Page 29 Section 4.1 for all 0 < r < r1. Since by continuity of ω we have that

lim |ω(x) − ω(x0)| dx = 0, r→0 QνΓ (x0,r) and N−1 lim |ω(x) − ω(x0)| dH = 0, r→0 QνΓ (x0,r)∩Γ we may choose 0 < r2 < r1 such that for all 0 < r < r2

|ω(x) − ω(x )| dx ≤ τ 2, 0 QνΓ (x0,r) and τ |ω(x) − ω(x )| dHN−1 ≤ HN−1(Q (x , r) ∩ Γ) ≤ O(τ)rN−1, (4.28) ˆ 0 1 + τ 2 νΓ 0 QνΓ (x0,r)∩Γ where we used (4.27).

Therefore τr/4 |ω(x) − ω(x )| dHN−1dt ≤ |ω(x) − ω(x )| dx ≤ τ 2rN , ˆ ˆ 0 ˆ 0 −τr/4 QνΓ (x0,r)∩Tx0,νΓ (t) QνΓ (x0,r) and by the Mean Value Theorem there exists a set A ⊂ (−τr/4, τr/4) with positive 1 dimensional Lebesgue measure such that for every t ∈ A,

|ω(x) − ω(x )| dHN−1 ≤ 2τrN−1. (4.29) ˆ 0 QνΓ (x0,r)∩Tx0,νΓ (t)

If t0 ∈ A then we have, by the continuity of ω, 1 lim ω(x)dHN−1dl = ω(x)dHN−1, t→0 |I(t0, t)| ˆ ˆ ˆ I(t0,t) QνΓ (x0,r)∩Tx0,νΓ (l) QνΓ (x0,r)∩Tx0,νΓ (t0) hence there exists t0,r > 0, depending on r, t0, τ, and x0, such that I(t0, t0,r) ⊂ (−τr/2, τr/2) and 1 sup ω(x)dHN−1dl 0

By (4.30), (4.29), in this order, for every r ≤ r2 there exist t0 ∈ (−τr/4, τr/4) and 0 < t0,r < |t0|, depending on t0, τ, x0 and r, such that 1 sup ω(x) dHN−1dl 0

3. S ∩ QνΓ (xn, rn) ⊂ Rτ/2,νΓ (xn, rn); 4. if 0 < κ < 1 then for every n ∈ there exist tκ ∈ (−κr /4, κr /4) and 0 < tκ < |tκ|, N n n n xn,rn n depending on τ, xn, and κrn, such that

1 N−1 sup κ ω(x)dH dl 0

N−1 Moreover, since H a.e. x ∈ S1 a point of density one, by Egorov’s Theorem, we may find N−1 S2 ⊂ S1 such that H (S1 \ S2) < τ/3 and there exists r1 > 0 such that for all 0 < r < r1 and x ∈ S2, N−1 2 N−1 H (S1 ∩ QνΓ (x, r)) ≤ (1 + τ )r . Let Li := S2 ∩ Γi and without lose of generality we assume that Li are mutually disjoint. Let 0 Li ⊂ Li be such that τ 1 HN−1(L \ L0 ) < and d := dist(L0 ,L0 ) > 0 i i 3 2i ij i j Page 31 Section 4.1 for i 6= j. We observe that M ! N−1 [ 0 τ H S2 \ Li < and d := min {dij} > 0. 3 i6=j i=1 Define M [ 0 S := Li. i=1 We claim that there exists 0 < r < min τ 2, d/2, r such that for every 0 < r < r and every x, √ 2 1 2 y ∈ S with |x − y| < Nr we have

S ∩ QνΓ (x, r) ⊂ Rτ/2,νΓ (x, r), where we are using the notation introduced in Notation 2.3. Indeed, to verify this inclusion, we write (up to a rotation) 0 0 S ∩ QνΓ (x, r) = {(y , lx(y )) : y ∈ Tx,νΓ ∩ QνΓ (x, r)} ⊂ Γx 0 where y . Assuming, without loss of generality, that x = 0 and lx(0) = 0, we have for all y ∈

T0,νΓ ∩ QνΓ (0, r) 1 |l0(y)| ≤ k∇l0kL∞ ≤ τr √ 2 because for every y ∈ S ∩ QνΓ (0, r) we have |y| < Nr.

N−1 Next, for H -a.e. x ∈ S we may find r2(x) > 0 such that QνΓ (x, r3) ⊂ Ω and κr2(x) ≤ r0(x) where r0(x) is determined in Lemma 4.7. Letr ¯0(x) := min {r1, r2(x)}. The collection 0 F := {QνΓ (x, r): x ∈ S, r < r¯0(x)} is a fine cover for S, and so by Besicovitch’s Covering Theorem we may obtain a countable sub- collection F ⊂ F 0 with pairwise disjoint cubes such that [ S ⊂ QνΓ (xn, rn).

QνΓ (xn,rn)∈F For each Q (x , r ) ∈ F we apply Lemma 4.7 to obtain tκ ∈ (−κr /4, κr /4) and tκ > 0, νΓ n n n n n xn,rn depending on tκ , τ, κr , and x , such that (4.31) hold. xn n n

Finally, we observe that N−1 N−1 N−1 N−1 H (Γ \ S) ≤ H (Γ \ S1) + H (S1 \ S2) + H (S2 \ S) ≤ τ, and this completes the proof.  Proposition 4.9. Let ω ∈ C(Ω) be nonnegative, let Γ ⊂ Ω be HN−1-rectifiable with HN−1(Γ) < +∞, and let τ ∈ (0, 1) be given. There exists a HN−1-rectifiable set S ⊂ Γ and a countable family ∞ of disjoint cubes F = {QνΓ (xn, rn)}n=1 with rn < τ such that the following hold: 1. ∞ N−1 [ H (Γ \ S) < τ, S ⊂ QνΓ (xn, rn), and S ∩ QνΓ (xn, rn) ⊂ Rτ/2,νΓ (xn, rn); (4.32) n=1 2. N−1 2 N−1 H (S ∩ QνΓ (xn, rn)) ≤ (1 + τ )rn ; (4.33) Page 32 Section 4.2

3. for n 6= m

dist(QνΓ (xn, rn),QνΓ (xm, rm)) > 0; (4.34) 4. +∞ X N−1 N−1 rn ≤ 4H (Γ) (4.35) n=1

5. for each n ∈ N there exist tn ∈ (−τrn/4, τrn/4) and 0 < txn,rn < |tn|, depending on τ, rn, and

xn, such that Txn,νΓ (tn ± txn,rn ) ⊂ Rτ/2,νΓ (xn, rn) and

1 sup ω(x)dHN−1dl 0

where I(tn, t) := (tn − t, tn + t).

0 ∞ Proof. We apply items 1, 2, and 3 in Proposition 4.8 to obtain a countable collection {QνΓ (xn, rn)}n=1 and a set S0 ⊂ Γ such that ∞ τ [ HN−1(Γ \ S0) < ,S0 ⊂ Q (x , r0 ),S0 ∩ Q (x , r0 ) ⊂ R (x , r0 ), 2 νΓ n n νΓ n n τ/2,νΓ n n n=1 and N−1 2 N−1 H (S ∩ QνΓ (xn, r)) ≤ (1 + τ )r 0 for all 0 < r < rn. Find 0 < κ < 1 such that

∞ ! [ τ HN−1 S0 \ Q (x , κr0 ) < , νΓ n n 2 n=1 and let ∞ ! 0 [ 0 S := S ∩ QνΓ (xn, κrn) . n=1 Then ∞ [ 0 S ⊂ QνΓ (xn, κrn) n=1 and τ τ HN−1(Γ \ S) ≤ HN−1(Γ \ S0) + HN−1(S0 \ S) ≤ + = τ. 2 2 0 ∞ Note that the collection {QνΓ (xn, κrn)}n=1 satisfies (4.34). Next, we apply item 4 in Proposition κ κ 0 κ 4.8 with such κ > 0 to find t , t 0 such that (4.31) holds. It suffices to set rn := κr , tn := t , n xn,rn n n κ and tx ,r := t 0 . n n xn,rn  Page 33 Section 4.2

4.2. The Case ω ∈ W(Ω) ∩ C(Ω).

Consider the functionals   2 2 2 1 2 Eω,ε(u, v) := v |∇u| ω dx + ε |∇v| + (v − 1) ω dx ˆΩ ˆΩ 4ε 1,2 1,2 for (u, v) ∈ Wω (Ω) × W (Ω), and let

2 N−1 Eω(u) := |∇u| ω dx + ω(x) dH , ˆΩ ˆSu be defined for u ∈ GSBVω(Ω). ∞ 1 1 Theorem 4.10. Let ω ∈ W(Ω) ∩ C(Ω) ∩ L (Ω) be given. Let Eω,ε: Lω(Ω) × L (Ω) → [0, +∞] be defined by

( 1,2 1,2 Eω,ε(u, v) if (u, v) ∈ Wω (Ω) × W (Ω), 0 ≤ v ≤ 1, Eω,ε(u, v) := +∞ otherwise.

1 1 Then the functionals Eω,ε Γ-converge, with respect to the Lω × L topology, to the functional ( Eω(u) if u ∈ GSBVω(Ω) and v = 1 a.e., Eω(u, v) := +∞ otherwise. Theorem 4.10 will be proved in two propositions.

1 Proposition 4.11. (Γ-lim inf) For ω ∈ W(Ω) ∩ C(Ω) and u ∈ Lω(Ω), let − n Eω (u) := inf lim inf Eω,ε(uε, vε): ε→0 1,2 1,2 1 1 (uε, vε) ∈ Wω (Ω) × W (Ω), uε → u in Lω, vε → 1 in L , 0 ≤ vε ≤ 1 . We have − Eω (u) ≥ Eω(u). − 1,2 Proof. Without loss of generality, we assume that M := Eω (u) < ∞. Let {(uε, vε)}ε>0 ⊂ Wω (Ω)× W 1,2(Ω) be such that 1 1 − uε → u in Lω, vε → 1 in L (Ω), and lim Eω,ε(uε, vε) = Eω (u) < ∞. ε→0

Since infx∈Ω ω(x) ≥ 1, we have

lim inf E1,ε(uε, vε) ≤ lim inf Eω,ε(uε, vε) < ∞, ε→0 ε→0 and by [8] we deduce that N−1 u ∈ GSBV (Ω) and H (Su) < ∞.

We prove separately that

2 2 lim inf |∇uε| vε ω dx ≥ |∇u| ω dx, (4.37) ε→0 ˆΩ ˆΩ and   2 1 2 N−1 lim inf ε |∇vε| + (1 − vε) ω dx ≥ ω(x)dH . (4.38) ε→0 ˆΩ 4ε ˆSu Page 34 Section 4.2

N−1 1 Let A be an open subset of Ω. Fix ν ∈ S , and define Ax,ν , Ax,ν , and Aν as in (4.1). For + K ∈ R , set uK := K ∧ u ∨ −K, and observe that, by Fubini’s Theorem,

2 2 0 2 2 N−1 lim inf |∇uε| vε ω dx ≥ lim inf (uε)x,ν (vε)x,ν ωx,ν dt dH (x) ε→0 ˆ ε→0 ˆ ˆ 1 A Aν Ax,ν

0 2 2 N−1 ≥ lim inf (uε)x,ν (vε)x,ν ωx,ν dt dH (x) ˆ ε→0 ˆ 1 Aν Ax,ν

0 2 N−1 ≥ ux,ν ωx,ν dt dH (x) ˆ ˆ 1 Aν Ax,ν

0 2 N−1 ≥ (uK )x,ν ωx,ν dt dH (x), ˆ ˆ 1 Aν Ax,ν where in the first inequality we used Lemma 4.2, in the second inequality we used Fatou’s Lemma, ∞ ∞ and in the third inequality we used (3.3). Since uK ∈ L (Ω) ∩ SBVω(Ω) ⊂ L (Ω) ∩ SBV (Ω), we may apply Theorem 2.3 in [8] to uK to obtain

2 2 0 2 N−1 2 lim inf |∇uε| vε ω dx ≥ (uK )x,ν ωx,ν dt dH (x) ≥ |h∇uK (x), νi| ω dx. ε→0 ˆ ˆ ˆ 1 ˆ A Aν Ax,ν A

Letting K → ∞ and using Lebesgue Monotone Convergence Theorem we have

2 2 2 lim inf |∇uε| vε ω dx ≥ |h∇u(x), νi| ω dx. (4.39) ε→0 ˆA ˆA

Let

2 N φn(x) := |h∇u(x), νni| ω for L -a.e. x ∈ Ω,

∞ N−1 where {νn}n=1 is a dense subset of S , and let

2 2 µ(A) := lim inf |∇uε| vε ω dx. ε→0 ˆA

Then µ is a positive function, super-additivity on open sets A, B, with disjoint closures, since

  2 2 2 2 2 2 µ(A ∪ B) = lim inf |∇uε| vε ω dx = lim inf |∇uε| vε ω dx + |∇uε| vε ω dx ε→0 ˆA∪B ε→0 ˆA ˆB 2 2 2 2 ≥ lim inf |∇uε| vε ω dx + lim inf |∇uε| vε ω dx = µ(A) + µ(B). ε→0 ˆA ε→0 ˆB

Hence by Lemma 15.2 in [17], together with (4.39), we conclude (4.37).

Now we prove (4.38). Assume first that ω ∈ L∞(Ω). For any open set A ⊂ Ω and ν ∈ SN−1, Page 35 Section 4.2 by Fubini’s Theorem and Fatou’s Lemma we have   2 1 2 lim inf ε |∇vε| + (1 − vε) ω dx ε→0 ˆA 4ε   0 2 1 2 N−1 ≥ lim inf ε (vε)x,ν + (1 − (vε)x,ν ) ωx,ν dtdH (x) ε→0 ˆ ˆ 1 4ε Aν Ax,ν   0 2 1 2 N−1 (4.40) ≥ lim inf ε (vε)x,ν + (1 − (vε)x,ν ) ωx,ν dtdH (x) ˆ ε→0 ˆ 1 4ε Aν Ax,ν   X ≥ ω (t) dHN−1(x), ˆ  x,ν  Aν 1 t∈Sux,ν ∩Ax,ν where the last inequality follows from (3.12).

Next, given arbitrary τ > 0 and η > 0 we choose a set S ⊂ Su and a collection Q of mutu- ally disjoint cubes according to Lemma 4.6 with respect to Su. Fix one such cube QνS (x0, r0) ∈ Q. By Lemma 4.6 we have HN−1([Q (x , r )] ∩ S) = 1 νS 0 0 x,νS N−1 for H -a.e. x ∈ QνS (x0, r0) ∩ S, and QνS (x0, r0) ∩ S ⊂ Γx0 such that, up to a rotation and a translation, 0 0 Γx0 = {(y , lx0 (y )) : y ∈ Tx0,νS ∩ QνS (x0, r0)} and k∇lx0 kL∞ < τ, (4.41) 0 where y denotes the first N − 1 components of y ∈ Tx0,νS ∩ QνS (x0, r0).

In (4.40) set A = QνS (x0, r0) and ν = νS(x0) and, using the same notation as in the proof of Lemma 4.6, we obtain   X  ω (t) dHN−1(x) (4.42)  x,νS (x0)  ˆ Qν (x0,r0) [ S ]ν (x ) S 0 t∈Su ∩[QνS (x0,r0)] x,νS (x0) x,νS (x0)   X ≥  ω (t) dHN−1(x)  x,νS (x0)  ˆTg (x0,r0) t∈Su ∩[QνS (x0,r0)] ∩S x,νS (x0) x,νS (x0)

N−1 0 0 N−1 0 = ω(x) dH (x) = ω(x , lx0 (x ))dL (x ), ˆTg (x0,r0) ˆTg (x0,r0) where the first inequality is due to the positivity of ω and the last equality is because QνS (x0, r0) ∩

S ⊂ Γx0 which is defined in (4.41).

Next, by Theorem 9.1 in [39] and since ω ∈ C(Ω), we have that q ω dHN−1 = ω(x0, l (x0)) 1 + |∇l (x0)|2dx0 ˆ ˆ x0 x0 QνS (x0,r0)∩S Tx0,νS ∩QνS (x0,r0) p ≤ 1 + τ 2 ω(x0, l (x0))dx0, ˆ x0 Tx0,νS ∩QνS (x0,r0) Page 36 Section 4.2 which, together with (4.42), yields   X  ω (t) dHN−1(x)  x,νS (x0)  ˆ Qν (x0,r0) [ S ]ν (x ) S 0 t∈Su ∩[QνS (x0,r0)] x,νS (x0) x,νS (x0) 1 ≥ √ ω dHN−1. (4.43) 1 + τ 2 ˆ QνS (x0,r0)∩S

N−1 Since cubes in Q are pairwise disjoint and H (S \ ∪Q∈QQ) = 0, by (4.40), (4.42), and (4.43) we have   2 1 2 lim inf ε |∇vε| + (vε − 1) ω dx ε→0 ˆ∪Q∈QQ 4ε   X 2 1 2 ≥ lim inf ε |∇vε| + (vε − 1) ω dx ε→0 ˆ 4ε Q∈Q Q 1 X 1 ≥ √ ω dHN−1 = √ ω dHN−1 2 ˆ 2 ˆ 1 + τ Q∈Q S∩Q 1 + τ S   1 N−1 ≥ √ ω dH − kωk ∞ η . 2 L 1 + τ ˆSu Therefore   2 1 2 lim inf ε |∇vε| + (vε − 1) ω dx ε→0 ˆΩ 4ε     2 1 2 1 N−1 ≥ lim inf ε |∇vε| + (vε − 1) ω dx ≥ √ ω(x)dH − kωk ∞ η , ε→0 2 L ˆ∪Q∈QQ 4ε 1 + τ ˆSu and (4.38) follows from the arbitrariness of η and τ, and the fact that η and τ are independent.

We now remove the assumption that ω ∈ L^∞(Ω). Define, for each k > 0,

ω_k(x) := ω(x) if ω(x) ≤ k, and ω_k(x) := k otherwise,

that is, ω_k = min{ω, k}.

We have

lim inf_{ε→0} ∫_Ω [ ε|∇v_ε|² + (1/(4ε))(v_ε − 1)² ] ω dx ≥ lim inf_{ε→0} ∫_Ω [ ε|∇v_ε|² + (1/(4ε))(v_ε − 1)² ] ω_k dx ≥ ∫_{S_u} ω_k(x) dH^{N−1},

and we conclude

lim inf_{ε→0} ∫_Ω [ ε|∇v_ε|² + (1/(4ε))(v_ε − 1)² ] ω dx ≥ ∫_{S_u} ω(x) dH^{N−1}

by letting k ↗ ∞ and using the Lebesgue Monotone Convergence Theorem. □

Proposition 4.12. (Γ-lim sup) For ω ∈ W(Ω) ∩ C(Ω) ∩ L^∞(Ω) and u ∈ L¹_ω(Ω) ∩ L^∞(Ω), let

E^+_ω(u) := inf { lim sup_{ε→0} E_{ω,ε}(u_ε, v_ε) : (u_ε, v_ε) ∈ W^{1,2}_ω(Ω) × W^{1,2}(Ω), u_ε → u in L¹_ω, v_ε → 1 in L¹, 0 ≤ v_ε ≤ 1 }.

We have

E^+_ω(u) ≤ E_ω(u).    (4.44)

Proof. If E_ω(u) = ∞ then there is nothing to prove. Assume that E_ω(u) < +∞, so that by Lemma 2.10 we have that u ∈ GSBV_ω(Ω) and H^{N−1}(S_u) < ∞. By assumption u ∈ L^∞(Ω), thus u ∈ SBV_ω(Ω).

Let τ ∈ (0, 2/9) be given. Apply Proposition 4.9 to ω and Γ = S_u to obtain a set S_τ ⊂ S_u, a countable collection F_τ = {Q_{ν_{S_u}}(x_n, r_n)}_{n=1}^∞ of mutually disjoint cubes with r_n < τ, and corresponding

t_n ∈ (−τ r_n/4, τ r_n/4)    (4.45)

and t_{x_n,r_n} so that items 1–5 in Proposition 4.9 hold. Extract a finite collection T_τ = {Q_{ν_{S_u}}(x_n, r_n)}_{n=1}^{M_τ} from F_τ with M_τ > 0 large enough such that

H^{N−1}( S_τ \ ∪_{n=1}^{M_τ} Q_{ν_{S_u}}(x_n, r_n) ) < τ,

and we define

F_τ := S_τ ∩ ( ∪_{n=1}^{M_τ} Q_{ν_{S_u}}(x_n, r_n) ),    (4.46)

which implies that

H^{N−1}(S_u \ F_τ) ≤ H^{N−1}(S_u \ S_τ) + H^{N−1}(S_τ \ F_τ) < 2τ.    (4.47)

Let U_n be the part of Q_{ν_{S_u}}(x_n, r_n) which lies between T_{x_n,ν_{S_u}}(±τ r_n), U_n^+ be the part above T_{x_n,ν_{S_u}}(τ r_n), and U_n^− be the part below T_{x_n,ν_{S_u}}(−τ r_n). Moreover, let U_{t_n}^+ be the part of U_n which lies above T_{x_n,ν_{S_u}}(t_n), and U_{t_n}^− be the part below T_{x_n,ν_{S_u}}(t_n).

We claim that if x ∈ U_{t_n}^±, then

x ± 2 dist(x, T_{x_n,ν_{S_u}}(±τ r_n)) ν_{S_u}(x_n) ∈ U_n^± ⊂ Q_{ν_{S_u}}(x_n, r_n) \ R_{τ/2,ν_{S_u}}(x_n, r_n).    (4.48)

Let x ∈ U_{t_n}^+ (the case in which x ∈ U_{t_n}^− can be handled similarly); we need to prove that

τ r_n < dist( x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n), T_{x_n,ν_{S_u}} ) < r_n/2.

Note that

dist(x, T_{x_n,ν_{S_u}}(τ r_n)) = τ r_n − (x − P_{x_n,ν_{S_u}}(x)) · ν_{S_u}(x_n),

and since x ∈ U_{t_n}^+, we have that (x − P_{x_n,ν_{S_u}}(x)) · ν_{S_u}(x_n) ∈ (t_n, τ r_n).

Hence, together with (4.45), we observe that

τ r_n ≤ 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) + (x − P_{x_n,ν_{S_u}}(x)) · ν_{S_u}(x_n)    (4.49)

= 2τ r_n − (x − P_{x_n,ν_{S_u}}(x)) · ν_{S_u}(x_n) ≤ 2τ r_n + τ r_n/4 = (9/4) τ r_n < r_n/2.

From the definition of the projection operator P_{x_n,ν_{S_u}} we have

P_{x_n,ν_{S_u}}( x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n) ) = P_{x_n,ν_{S_u}}(x),

and hence

dist( x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n), T_{x_n,ν_{S_u}} )    (4.50)
= ⟨ x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n) − P_{x_n,ν_{S_u}}( x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n) ), ν_{S_u}(x_n) ⟩
= ⟨ 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n), ν_{S_u}(x_n) ⟩ + ⟨ x − P_{x_n,ν_{S_u}}( x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n) ), ν_{S_u}(x_n) ⟩
= 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) + (x − P_{x_n,ν_{S_u}}(x)) · ν_{S_u}(x_n),

and by (4.49) we conclude (4.48).
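To see the geometry of (4.48) in coordinates, assume for illustration that x_n = 0 and ν_{S_u}(x_n) = e_N, and that R_{τ/2,ν_{S_u}}(x_n, r_n) denotes the central slab { y ∈ Q_{ν_{S_u}}(x_n, r_n) : |y_N| < τ r_n/2 } (this description of R is an assumption made only for this illustration). For x ∈ U_{t_n}^+ write s := x_N ∈ (t_n, τ r_n); then dist(x, T_{x_n,ν_{S_u}}(τ r_n)) = τ r_n − s, and the point appearing in (4.48) has last coordinate

s + 2(τ r_n − s) = 2τ r_n − s ∈ (τ r_n, 2τ r_n − t_n) ⊂ (τ r_n, (9/4) τ r_n),

so it lies above T_{x_n,ν_{S_u}}(τ r_n), hence in U_n^+ and outside the slab; the upper bound (9/4)τ r_n obtained from (4.45) is exactly the quantity estimated in (4.49).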

We define ū_τ in Q_{ν_{S_u}}(x_n, r_n) as follows (see Figure 1):

ū_τ(x) := u(x) if x ∈ U_n^+ ∪ U_n^−,
ū_τ(x) := u( x + 2 dist(x, T_{x_n,ν_{S_u}}(τ r_n)) ν_{S_u}(x_n) ) if x ∈ U_{t_n}^+,    (4.51)
ū_τ(x) := u( x − 2 dist(x, T_{x_n,ν_{S_u}}(−τ r_n)) ν_{S_u}(x_n) ) if x ∈ U_{t_n}^−,

and

ū_τ(x) := u(x) if x ∈ Ω \ ( ∪_{n=1}^{M_τ} Q_{ν_{S_u}}(x_n, r_n) ).

We observe that, as τ → 0, and since 0 < rn < τ,

L^N({ x ∈ Ω : u(x) ≠ ū_τ(x) }) = L^N( ∪_{n=1}^{M_τ} ( U_{t_n}^+ ∪ U_{t_n}^− ) )
≤ Σ_{n=1}^{M_τ} L^N( U_{t_n}^+ ∪ U_{t_n}^− ) = Σ_{n=1}^{M_τ} r_n^{N−1} (2τ r_n) ≤ 2τ² Σ_{n=1}^{M_τ} r_n^{N−1} ≤ 8τ² H^{N−1}(S_u) → 0,    (4.52)

where the last inequality follows from (4.35). Moreover, using the same computation, we deduce that

L^N( ∪_{n=1}^{M_τ} Q_{ν_{S_u}}(x_n, r_n) ) ≤ τ Σ_{n=1}^{M_τ} r_n^{N−1} ≤ 4τ H^{N−1}(S_u) = O(τ) → 0.    (4.53)

Hence, in view of (4.52), we have

ū_τ → u and ∇ū_τ → ∇u in measure,    (4.54)

Figure 1. Construction of ū_τ(x) in (4.51).

and, since on U_{t_n}^+ ∪ U_{t_n}^− the function ū_τ is the reflection of u from Q_{ν_{S_u}}(x_n, r_n) \ ( U_{t_n}^+ ∪ U_{t_n}^− ), we observe that

∫_Ω |∇ū_τ|² ω dx ≤ ∫_{Ω \ {u ≠ ū_τ}} |∇u|² ω dx + ‖ω‖_{L^∞} ∫_{{u ≠ ū_τ}} |∇ū_τ|² dx
≤ ∫_{Ω \ {u ≠ ū_τ}} |∇u|² ω dx + 2 ‖ω‖_{L^∞} Σ_{n=1}^{M_τ} ∫_{Q_{ν_{S_u}}(x_n,r_n)} |∇u|² dx    (4.55)
= ∫_{Ω \ {u ≠ ū_τ}} |∇u|² ω dx + 2 ‖ω‖_{L^∞} ∫_{∪_{n=1}^{M_τ} Q_{ν_{S_u}}(x_n,r_n)} |∇u|² dx
≤ ∫_Ω |∇u|² ω dx + O(τ),

where the last inequality follows from (4.53) and from the fact that, since E_1(u) ≤ E_ω(u) < +∞, ∇u is square integrable. Moreover, in view of (4.54) and by the Lebesgue Dominated Convergence Theorem we conclude that

lim_{τ→0} ∫_Ω |ū_τ − u| ω dx ≤ ‖ω‖_{L^∞} lim_{τ→0} ∫_Ω |ū_τ − u| dx = 0    (4.56)

because ‖ū_τ‖_{L^∞} ≤ ‖u‖_{L^∞} < +∞.

For simplicity of notation, in the rest of the proof of this lemma we shall abbreviate Q_{ν_{S_u}}(x_n, r_n) by Q_n and T_{x_n,ν_{S_u}} by T_{x_n}. Note that the jump set of ū_τ is contained in the union of the following three sets (recall item 4 in Proposition 4.9):

1. ∪_{n=1}^{M_τ} [ T_{x_n}(t_n) ∩ Q_n ];
2. ∪_{n=1}^{M_τ} [ ∂Q_n ∩ U_n ];

3. Su \ Fτ , where Fτ is defined in (4.46).

The contributions to S_{ū_τ} from items 2 and 3 are negligible. To be precise,

H^{N−1}( (S_u \ F_τ) ∪ ( ∪_{n=1}^{M_τ} ∂Q_n ∩ U_n ) )
≤ H^{N−1}(S_u \ F_τ) + Σ_{n=1}^{M_τ} H^{N−1}(∂Q_n ∩ U_n) ≤ 2τ + Cτ Σ_{n=1}^∞ r_n^{N−1} ≤ O(τ),

where we used (4.32), (4.35), (4.47), and the fact that

Σ_{n=1}^{M_τ} H^{N−1}(∂Q_n ∩ U_n) ≤ 2τ Σ_{n=1}^{M_τ} r_n^{N−1} ≤ 8τ H^{N−1}(S_u).

Hence, again by (4.35),

H^{N−1}(S_{ū_τ}) ≤ Σ_{n=1}^{M_τ} H^{N−1}(T_{x_n} ∩ Q_n) + O(τ) ≤ Σ_{n=1}^∞ r_n^{N−1} + O(τ) < ∞.

By (4.34), let a_τ denote a quarter of the minimum distance between all cubes in T_τ. Let ε > 0 be such that

ε² + √ε << (1/4) min{ τ, a_τ, t_{x_n,r_n} for 1 ≤ n ≤ M_τ }.    (4.57)

Hence, by item 5 in Proposition 4.9 we have

ε² + √ε < t_{x_n,r_n} < |t_n| < (1/4) τ r_n < r_n.    (4.58)

We set

u_{τ,ε} := (1 − ϕ_ε) ū_τ,

where ϕ_ε is such that

ϕ_ε ∈ C_c^∞(Ω; [0,1]),  ϕ_ε ≡ 1 on (S_{ū_τ})_{ε²/4},  and ϕ_ε ≡ 0 in Ω \ (S_{ū_τ})_{ε²/2}.

Since ū_τ ∈ W^{1,2}(Ω \ S_{ū_τ}), we have {u_{τ,ε}}_{ε>0} ⊂ W^{1,2}(Ω) because (1 − ϕ_ε)(x) = 0 if x ∈ (S_{ū_τ})_{ε²/4}. Moreover, {u_{τ,ε}}_{ε>0} ⊂ W^{1,2}_ω(Ω) and, using the Lebesgue Dominated Convergence Theorem and (4.56),

lim_{τ→0} lim_{ε→0} ∫_Ω |u_{τ,ε} − u| ω dx = 0    (4.59)

because ω ∈ L^∞(Ω), u ∈ L^∞(Ω), and ϕ_ε → 0 a.e.

Consider the sequence {v_{τ,ε}}_{ε>0} ⊂ W^{1,2}(Ω) given by

v_{τ,ε}(x) := (ṽ_ε ∘ d_τ)(x),

where d_τ(x) := dist(x, S_{ū_τ}) and ṽ_ε is defined by

ṽ_ε(t) := 0 if t ≤ ε²,  and  ṽ_ε(t) := 1 − e^{−1/(2√ε)} if t > ε² + √ε,    (4.60)

and for ε² ≤ t ≤ ε² + √ε we define ṽ_ε as the solution of the differential equation

ṽ_ε'(t) = (1/(2ε)) (1 − ṽ_ε(t))    (4.61)

with initial condition ṽ_ε(ε²) = 0. An explicit computation shows that

ṽ_ε(t) = 1 − e^{−(t − ε²)/(2ε)}    (4.62)

for ε² ≤ t ≤ ε² + √ε and ṽ_ε(ε² + √ε) = 1 − exp(−1/(2√ε)), and we remark that

lim_{ε→0} (1/ε) e^{−1/(2√ε)} = 0,    (4.63)

and

d/dt [ −(1/2)(1 − ṽ_ε(t))² ] = (1 − ṽ_ε(t)) ṽ_ε'(t) ≥ 0.    (4.64)

Next, since |∇d_τ| = 1 a.e. (see [30], Section 3.2.34), we have {v_{τ,ε}}_{ε>0} ⊂ W^{1,2}(Ω), 0 ≤ v_{τ,ε} ≤ 1 − exp(−1/(2√ε)), and

v_{τ,ε} → 1 in L¹ as ε → 0    (4.65)

by the Lebesgue Dominated Convergence Theorem, since v_{τ,ε} → 1 a.e. by (4.62). By (4.55) and since, if ϕ_ε(x) ≠ 0, then d_τ(x) < ε²/2 and so v_{τ,ε}(x) = 0,

∫_Ω |∇u_{τ,ε}|² v_{τ,ε}² ω dx ≤ ∫_Ω |∇ū_τ|² ω dx ≤ ∫_Ω |∇u|² ω dx + O(τ).    (4.66)

Next we prove that

∫_Ω [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx ≤ ∫_{S_u} ω dH^{N−1} + O(ε) + O(τ).    (4.67)

Define

L_n := T_{x_n} ∩ Q_n,  L_n(ε) := (T_{x_n} ∩ Q_n)_ε,

and observe that, using Fubini's Theorem,

∫_{L_n(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx
= ∫_{ε²}^{ε² + √ε} [ ε|ṽ_ε'(l)|² + (1/(4ε))(1 − ṽ_ε(l))² ] ( ∫_{{d_τ(y)=l} ∩ L_n(ε² + √ε)} ω(y) dH^{N−1}(y) ) dl + (1/(4ε)) ∫_{L_n(ε²)} ω(x) dx,

where the latter term on the right hand side is of order O(ε). Next, in view of (4.61), using integration by parts, we have that

∫_{ε²}^{ε² + √ε} [ ε|ṽ_ε'(l)|² + (1/(4ε))(1 − ṽ_ε(l))² ] ∫_{{d_τ(y)=l} ∩ L_n(ε² + √ε)} ω(y) dH^{N−1}(y) dl
= ∫_{ε²}^{ε² + √ε} (1/(2ε)) (1 − ṽ_ε(l))² ∫_{{d_τ(y)=l} ∩ L_n(ε² + √ε)} ω(y) dH^{N−1}(y) dl    (4.68)
= ∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) ∫_{{d_τ(y)≤l} ∩ L_n(ε² + √ε)} ω(y) dy dl + A_ω^n(ε² + √ε) − A_ω^n(ε²),

where

A_ω^n(t) := (1/(2ε)) (1 − ṽ_ε(t))² ∫_{{d_τ(y)≤t} ∩ L_n(ε² + √ε)} ω(y) dy.

By (4.58) and (4.63) we have

A_ω^n(ε² + √ε) = (1/(2ε)) (1 − ṽ_ε(ε² + √ε))² ∫_{{d_τ(x) ≤ ε² + √ε} ∩ L_n(ε² + √ε)} ω(x) dx    (4.69)
≤ (1/(2ε)) e^{−1/(2√ε)} ‖ω‖_{L^∞} L^N( L_n(ε² + √ε) ) = (1/(2ε)) e^{−1/(2√ε)} ‖ω‖_{L^∞} L^N( (T_{x_n} ∩ Q_n)_{ε² + √ε} )
≤ (1/(2ε)) e^{−1/(2√ε)} ‖ω‖_{L^∞} 2(ε² + √ε) [ r_n + (ε² + √ε) ]^{N−1} ≤ O(ε) r_n^{N−1}.

We write

∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) ∫_{{d_τ(y)≤l} ∩ L_n(ε² + √ε)} ω(y) dy dl
= ∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) 2l [ (1/(2l)) ∫_{{d_τ(y)≤l} ∩ L_n(ε² + √ε)} ω(x) dx ] dl.

Recalling the notation from Proposition 4.9 and the fact that ω(x_n) ≤ ‖ω‖_{L^∞}, we have

(1/(2l)) ∫_{{d_τ(y)≤l} ∩ L_n(ε² + √ε)} ω(x) dx ≤ sup_{t ≤ ε² + √ε} (1/|I(t_n, t)|) ∫_{I(t_n,t)} ∫_{Q(x_n,r_n) ∩ T_{x_n}(l)} ω(x) dH^{N−1}(x) dl

≤ ∫_{S_τ ∩ Q(x_n,r_n)} ω(x) dH^{N−1} + O(τ) r_n^{N−1},

where by (4.57) we could use (4.36) in the last inequality. Therefore, by (4.64),

∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) ∫_{{d_τ(y)≤l} ∩ L_n(ε² + √ε)} ω(x) dx dl    (4.70)
≤ 2 ( ∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) l dl ) ( ∫_{S_τ ∩ Q(x_n,r_n)} ω(x) dH^{N−1} + O(τ) r_n^{N−1} ).

A new integration by parts, using (4.62), yields

∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) l dl
= ∫_{ε²}^{ε² + √ε} (1/(2ε)) (1 − ṽ_ε(l))² dl − (1/(2ε)) (ε² + √ε)(1 − ṽ_ε(ε² + √ε))² + (1/(2ε)) ε² (1 − ṽ_ε(ε²))²
≤ ∫_{ε²}^{ε² + √ε} 2ε |ṽ_ε'(l)|² dl + (ε/2)(1 − ṽ_ε(ε²))² = (1/2)( 1 − e^{−1/√ε} ) + (ε/2)(1 − ṽ_ε(ε²))²
≤ 1/2 + (1/2) ε,

which, together with (4.70) and (4.33), gives

∫_{ε²}^{ε² + √ε} ( −(1/(2ε)) d/dt[ (1 − ṽ_ε(l))² ] ) ∫_{{d_τ(y)≤l} ∩ L_n(ε² + √ε)} ω(x) dx dl    (4.71)

≤ ∫_{S_τ ∩ Q(x_n,r_n)} ω(x) dH^{N−1} + O(τ) r_n^{N−1} + ε ‖ω‖_{L^∞} H^{N−1}( S_τ ∩ Q(x_n,r_n) ) + ε O(τ) r_n^{N−1}

≤ ∫_{S_τ ∩ Q(x_n,r_n)} ω(x) dH^{N−1} + O(τ) r_n^{N−1} + O(ε) O(τ) r_n^{N−1}.

Hence, in view of (4.68), (4.69), (4.71), and since A_ω^n(ε²) ≥ 0, we obtain that

∫_{L_n(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx

≤ ∫_{S_τ ∩ Q(x_n,r_n)} ω(x) dH^{N−1} + O(τ) r_n^{N−1} + O(ε) O(τ) r_n^{N−1} + O(ε) r_n^{N−1}.    (4.72)

Next we define

L_0 := (S_u \ F_τ) ∪ ( ∪_{n=1}^{M_τ} ∂Q_n ∩ U_n )  and  L_0(ε) := [ (S_u \ F_τ) ∪ ( ∪_{n=1}^{M_τ} ∂Q_n ∩ U_n ) ]_ε.

Since ω ∈ L^∞(Ω), we have

∫_{L_0(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx ≤ ‖ω‖_{L^∞} ∫_{L_0(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] dx,

and we note that the term

∫_{L_0(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] dx

is the one estimated for the recovery sequence constructed in [8], page 1034, Added in Proof. Therefore, recalling that by assumption u ∈ SBV_ω(Ω) ∩ L^∞(Ω) ⊂ SBV(Ω) ∩ L^∞(Ω), and invoking Propositions 5.1 and 5.3 in [8] and the calculations therein, we conclude that

lim sup_{ε→0} ∫_{L_0(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] dx ≤ lim sup_{ε→0} H^{N−1}({ x ∈ Ω : dist(x, L_0) < ε }) / (2ε)

≤ H^{N−1}(L_0).

Thus,

∫_{L_0(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx ≤ ‖ω‖_{L^∞} ( H^{N−1}(L_0) + O(ε) ) ≤ O(τ) + O(ε).    (4.73)

Furthermore, by (4.60),

∫_{Ω \ (S_{ū_τ})_{ε² + √ε}} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx ≤ (1/(4ε)) e^{−1/(2√ε)} ‖ω‖_{L^∞} L^N(Ω) ≤ O(ε),    (4.74)

where in the last inequality we used (4.63).
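As a quick check of the profile used above: with ṽ_ε(t) = 1 − e^{−(t−ε²)/(2ε)} from (4.62) one has ṽ_ε(ε²) = 0 and

ṽ_ε'(t) = (1/(2ε)) e^{−(t−ε²)/(2ε)} = (1/(2ε)) (1 − ṽ_ε(t)),

which is (4.61); consequently ε|ṽ_ε'(t)|² = (1/(4ε))(1 − ṽ_ε(t))², so the gradient term and the potential term contribute equally along the profile, which is what allowed the two terms to be combined into (1/(2ε))(1 − ṽ_ε(l))² in (4.68).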

Since the cubes in T_τ are pairwise disjoint, in view of (4.72), (4.73), and (4.74) we have that

∫_Ω [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx
= ∫_{(S_{ū_τ})_{ε² + √ε}} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx + ∫_{Ω \ (S_{ū_τ})_{ε² + √ε}} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx
≤ ∫_{L_0(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx + Σ_{n=1}^{M_τ} ∫_{L_n(ε² + √ε)} [ ε|∇v_{τ,ε}|² + (1/(4ε))(1 − v_{τ,ε})² ] ω dx + O(ε)
≤ O(ε) + O(τ) + Σ_{n=1}^{M_τ} ( ∫_{S_τ ∩ Q(x_n,r_n)} ω(x) dH^{N−1} + [ O(ε) + O(τ) + O(ε)O(τ) ] r_n^{N−1} )

≤ ∫_{∪_{n=1}^{M_τ} (S_τ ∩ Q(x_n,r_n))} ω(x) dH^{N−1} + Σ_{n=1}^{M_τ} [ O(ε) + O(τ) + O(ε)O(τ) ] r_n^{N−1} + O(τ) + O(ε)
≤ ∫_{S_u} ω(x) dH^{N−1} + O(ε) + O(τ) + O(ε)O(τ),

where in the last inequality we used (4.35), and this concludes the proof of (4.67). Hence, in view of (4.66) and (4.67), for each τ > 0 we may choose ε(τ) such that

∫_Ω |∇u_{τ,ε(τ)}|² v_{τ,ε(τ)}² ω dx ≤ ∫_Ω |∇u|² ω dx + O(τ),

and

∫_Ω [ ε(τ)|∇v_{τ,ε(τ)}|² + (1/(4ε(τ)))(1 − v_{τ,ε(τ)})² ] ω dx ≤ ∫_{S_u} ω(x) dH^{N−1} + O(τ),

and we thus constructed a recovery sequence {(u_τ, v_τ)}_{τ>0} given by

u_τ := u_{τ,ε(τ)}  and  v_τ := v_{τ,ε(τ)},

which satisfies (4.44) and, by (4.59) and (4.65), we have

‖u_{τ,ε(τ)} − u‖_{L¹_ω} < τ  and  ‖v_{τ,ε(τ)} − 1‖_{L¹} < τ.

Hence, we proved Proposition 4.12. □

Proof of Theorem 4.10. The lim inf inequality follows from Lemma 4.11. On the other hand, for any given u ∈ GSBV_ω(Ω) such that E_ω(u) < ∞ we have, by the Lebesgue Monotone Convergence Theorem,

E_ω(u) = lim_{K→∞} E_ω(K ∧ u ∨ (−K)),

and a diagonal argument, together with Proposition 4.12, concludes the proof. □

4.3. The Case ω ∈ W(Ω) ∩ SBV(Ω).

Consider the functionals

E_{ω,ε}(u, v) := ∫_Ω v² |∇u|² ω dx + ∫_Ω [ ε|∇v|² + (1/(4ε))(v − 1)² ] ω dx,

for (u, v) ∈ W^{1,2}_ω(Ω) × W^{1,2}(Ω), and

E_ω(u) := ∫_Ω |∇u|² ω dx + ∫_{S_u} ω^−(x) dH^{N−1},

defined for u ∈ GSBV_ω(Ω).

Theorem 4.13. Let ω ∈ W(Ω) ∩ SBV(Ω) ∩ L^∞(Ω) be given. Let E_{ω,ε} : L¹_ω(Ω) × L¹(Ω) → [0, +∞] be defined by

E_{ω,ε}(u, v) := E_{ω,ε}(u, v) if (u, v) ∈ W^{1,2}_ω(Ω) × W^{1,2}(Ω) and 0 ≤ v ≤ 1,  and  E_{ω,ε}(u, v) := +∞ otherwise.

Then the functionals E_{ω,ε} Γ-converge, with respect to the L¹_ω × L¹ topology, to the functional

E_ω(u, v) := E_ω(u) if u ∈ GSBV_ω(Ω) and v = 1 a.e.,  and  E_ω(u, v) := +∞ otherwise.
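For orientation (an illustrative remark, with ω^− understood in the convention fixed earlier in the paper): at H^{N−1}-a.e. point of S_u where ω is approximately continuous one has ω^− = ω, so for continuous weights E_ω coincides with the limit functional of Section 4.2; in particular, for ω ≡ 1,

E_1(u) = ∫_Ω |∇u|² dx + H^{N−1}(S_u),

the classical Ambrosio–Tortorelli Γ-limit. The one-sided trace ω^−, rather than ω itself, appears on S_u because the constructions below (Propositions 4.15 and 4.16) are free to place the jump of the approximating ū_τ, and hence the transition layer of v_ε, slightly to one side of S_ω, where the cost is governed by the corresponding one-sided trace.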

We start by proving the Γ-lim inf.

Proposition 4.14. (Γ-lim inf) For ω ∈ W(Ω) ∩ SBV(Ω) and u ∈ L¹_ω(Ω), let

E^−_ω(u) := inf { lim inf_{ε→0} E_{ω,ε}(u_ε, v_ε) : (u_ε, v_ε) ∈ W^{1,2}_ω(Ω) × W^{1,2}(Ω), u_ε → u, v_ε → 1 in L¹_ω × L¹, 0 ≤ v_ε ≤ 1 }.

We have

E^−_ω(u) ≥ E_ω(u).

Proof. Without loss of generality, we assume that E^−_ω(u) < +∞. The proof uses the same arguments as the proof of Proposition 4.11 until the beginning of (4.40), and we obtain

lim inf_{ε→0} ∫_Ω |∇u_ε|² v_ε² ω dx ≥ ∫_Ω |∇u|² ω dx.

By applying Proposition 3.6 to the last inequality of (4.40), we have

lim inf_{ε→0} ∫_A [ ε|∇v_ε|² + (1/(4ε))(1 − v_ε)² ] ω dx ≥ ∫_{A_ν} ( Σ_{t ∈ S_{u_{x,ν}} ∩ A^1_{x,ν}} ω^−_{x,ν}(t) ) dH^{N−1}(x).

The rest of the proof follows that of Proposition 4.11 with ω^−_{x,ν} in place of ω_{x,ν}, taking into consideration the fact that ω^−_{x,ν}(t) = ω^−(x + tν) (see Remark 3.109 in [4]). □

The next lemma is the SBV version of Lemma 4.7. We recall that I(t_0, t) := (t_0 − t, t_0 + t).

Proposition 4.15. Let τ ∈ (0, 1/4) be given, and let ω ∈ SBV(Ω) ∩ L^∞(Ω) be nonnegative. Then for H^{N−1}-a.e. x_0 ∈ S_ω which is a point of density one, there exists r_0 := r_0(x_0) > 0 such that for each 0 < r < r_0 there exist t_0 ∈ (2τr, 4τr) and 0 < t_{0,r} = t_{0,r}(t_0, τ, x_0, r) < t_0 such that

sup_{0<t<t_{0,r}} (1/|I(t_0, t)|) ∫_{I(t_0,t)} ∫_{Q^±(x_0,r) ∩ T_{x_0}(±l)} ω(x) dH^{N−1}(x) dl ≤ ∫_{Q(x_0,r) ∩ S_ω} ω^±(x) dH^{N−1} + O(τ) r^{N−1}.    (4.75)

Proof. Since H^{N−1}(S_ω) < ∞, the measure µ := H^{N−1}⌊S_ω is a nonnegative Radon measure, and since ω^− ∈ L¹(Ω, µ), it follows that for H^{N−1}-a.e. x_0 ∈ S_ω

lim_{r→0} ⨍_{Q(x_0,r) ∩ S_ω} |ω^−(x) − ω^−(x_0)| dH^{N−1}(x) = 0.    (4.76)

Choose one such x_0 ∈ S_ω, which is also a point of density 1 of S_ω, and let τ > 0 be given. Select r_1 > 0 such that for all 0 < r < r_1,

1/(1 + τ²) ≤ H^{N−1}(S_ω ∩ Q(x_0, r)) / r^{N−1} ≤ 1 + τ².    (4.77)

Let 0 < r2 < r1 be such that, in view of (4.76),

∫_{Q(x_0,r) ∩ S_ω} |ω^−(x) − ω^−(x_0)| dH^{N−1} ≤ τ² r^{N−1}    (4.78)

for all 0 < r < r_2, and we observe that

ω^−(x_0) H^{N−1}[ Q(x_0, r) ∩ T_{x_0}(−t_0) ] = ω^−(x_0) r^{N−1}    (4.79)
≤ (1 + τ²) ω^−(x_0) H^{N−1}[ Q(x_0, r) ∩ S_ω ].

Since, by Theorem 2.4,

lim_{r→0} ⨍_{Q^−(x_0,r)} |ω(x) − ω^−(x_0)| dx = 0,

we may choose 0 < r_3 < r_2 such that

⨍_{Q^−(x_0,r)} |ω(x) − ω^−(x_0)| dx ≤ τ²

for all 0 < r < r_3, and so, since 3.5τr < r, we have

∫_{2.5τr}^{3.5τr} ∫_{Q^−(x_0,r) ∩ T_{x_0}(−t)} |ω(x) − ω^−(x_0)| dH^{N−1}(x) dt ≤ ∫_{Q^−(x_0,r)} |ω(x) − ω^−(x_0)| dx ≤ τ² r^N.

There exists a set A ⊂ (2.5τr, 3.5τr) with positive one-dimensional Lebesgue measure such that for every t ∈ A,

∫_{Q^−(x_0,r) ∩ T_{x_0}(−t)} |ω(x) − ω^−(x_0)| dH^{N−1}(x) ≤ τ² r^N / (τ r) = τ r^{N−1},    (4.80)

and we choose t_0 ∈ A to be a Lebesgue point for

l ∈ (−r/2, r/2) ↦ ∫_{Q^−(x_0,r) ∩ T_{x_0}(l)} ω dH^{N−1}(x),

so that

lim_{t→0} (1/|I(t_0, t)|) ∫_{I(t_0,t)} ∫_{Q^−(x_0,r) ∩ T_{x_0}(−l)} ω(x) dH^{N−1}(x) dl = ∫_{Q^−(x_0,r) ∩ T_{x_0}(−t_0)} ω(x) dH^{N−1}(x).

Hence, there exists t_{0,r} > 0, depending on t_0, τ, r, and x_0, such that I(t_0, t_{0,r}) ⊂ (2.5τr, 3.5τr) and

sup_{0<t<t_{0,r}} (1/|I(t_0, t)|) ∫_{I(t_0,t)} ∫_{Q^−(x_0,r) ∩ T_{x_0}(−l)} ω(x) dH^{N−1}(x) dl ≤ ∫_{Q^−(x_0,r) ∩ T_{x_0}(−t_0)} ω(x) dH^{N−1}(x) + τ r^{N−1}.    (4.81)

In view of (4.81), (4.80), (4.79), and (4.78), in this order, we have that for every 0 < r < r_3 there exist t_0 ∈ (2.5τr, 3.5τr) and 0 < t_{0,r} < t_0 such that

sup_{0<t<t_{0,r}} (1/|I(t_0, t)|) ∫_{I(t_0,t)} ∫_{Q^−(x_0,r) ∩ T_{x_0}(−l)} ω(x) dH^{N−1}(x) dl

≤ O(τ) r^{N−1} + (1 + τ²) ∫_{Q(x_0,r) ∩ S_ω} ω^−(x) dH^{N−1}.

Since ω ∈ L^∞(Ω), we have ω^− ∈ L^∞(S_ω) and thus, invoking (4.77),

τ² ∫_{Q(x_0,r) ∩ S_ω} ω^−(x) dH^{N−1} ≤ O(τ) ‖ω‖_{L^∞} H^{N−1}[ Q(x_0, r) ∩ S_ω ] ≤ O(τ) r^{N−1},

and we deduce the ω^− version of (4.75).

Similarly, we may refine t_0, r_0 > 0, and 0 < t_{0,r} < t_0 such that

sup_{0<t<t_{0,r}} (1/|I(t_0, t)|) ∫_{I(t_0,t)} ∫_{Q^+(x_0,r) ∩ T_{x_0}(l)} ω(x) dH^{N−1}(x) dl ≤ ∫_{Q(x_0,r) ∩ S_ω} ω^+(x) dH^{N−1} + O(τ) r^{N−1}.

□

Proposition 4.16. Let ω ∈ SBV(Ω) ∩ L^∞(Ω) be nonnegative and let τ ∈ (0, 1/4) be given. Then there exist a set S ⊂ S_ω and a countable family of disjoint cubes F = {Q_{ν_{S_ω}}(x_n, r_n)}_{n=1}^∞, with r_n < τ, such that the following hold:

1. H^{N−1}(S_ω \ S) < τ and S ⊂ ∪_{n=1}^∞ Q_{ν_{S_ω}}(x_n, r_n);
2. dist( Q_{ν_{S_ω}}(x_n, r_n), Q_{ν_{S_ω}}(x_m, r_m) ) > 0

for n ≠ m;

3. Σ_{n=1}^∞ r_n^{N−1} ≤ 4 H^{N−1}(S_ω);
4. S ∩ Q_{ν_{S_ω}}(x_n, r_n) ⊂ R_{τ/2,ν_{S_ω}}(x_n, r_n);

5. for each n ∈ N, there exist t_n ∈ (2.5τ r_n, 3.5τ r_n) and 0 < t_{x_n,r_n} < t_n, depending on τ, r_n, and x_n, such that

T_{x_n,ν_{S_ω}}(−t_n ± t_{x_n,r_n}) ⊂ Q^−_{ν_{S_ω}}(x_n, r_n) \ R_{τ/2,ν_{S_ω}}(x_n, r_n)

and

sup_{0<t<t_{x_n,r_n}} (1/|I(t_n, t)|) ∫_{I(t_n,t)} ∫_{Q^−_{ν_{S_ω}}(x_n,r_n) ∩ T_{x_n,ν_{S_ω}}(−l)} ω(x) dH^{N−1}(x) dl ≤ ∫_{Q_{ν_{S_ω}}(x_n,r_n) ∩ S_ω} ω^−(x) dH^{N−1} + O(τ) r_n^{N−1},    (4.82)

where I(t_n, t) := (−t_n − t, −t_n + t).

Proof. The proof of this proposition uses the same arguments as the proofs of Proposition 4.8 and Proposition 4.9, applying Proposition 4.15 in place of Lemma 4.7. □

Proposition 4.17. (Γ-lim sup) For ω ∈ W(Ω) ∩ SBV(Ω) ∩ L^∞(Ω) and u ∈ L¹_ω(Ω) ∩ L^∞(Ω), let

E^+_ω(u) := inf { lim sup_{ε→0} E_{ω,ε}(u_ε, v_ε) : (u_ε, v_ε) ∈ W^{1,2}_ω(Ω) × W^{1,2}(Ω), u_ε → u in L¹_ω, v_ε → 1 in L¹, 0 ≤ v_ε ≤ 1 }.

We have

E^+_ω(u) ≤ E_ω(u).    (4.83)

Proof. Step 1: Assume H^{N−1}((S_ω \ S_u) ∪ (S_u \ S_ω)) = 0, i.e., S_ω and S_u coincide up to an H^{N−1}-negligible set.

If E_ω(u) = ∞ then there is nothing to prove. If E_ω(u) < +∞, then by Lemma 2.10 we have that u ∈ GSBV_ω(Ω) and H^{N−1}(S_u) < +∞.

Fix τ ∈ (0, 2/21). Applying Proposition 4.16 to ω we obtain a set S_τ ⊂ S_ω, a countable collection of mutually disjoint cubes F_τ = {Q_{ν_{S_ω}}(x_n, r_n)}_{n=1}^∞, and corresponding

t_n ∈ (2.5τ r_n, 3.5τ r_n)    (4.84)

and t_{x_n,r_n} for which (4.82) holds. Extract a finite collection T_τ = {Q_{ν_{S_ω}}(x_n, r_n)}_{n=1}^{M_τ} from F_τ with M_τ > 0 large enough such that

H^{N−1}( S_τ \ ∪_{n=1}^{M_τ} Q_{ν_{S_ω}}(x_n, r_n) ) < τ,    (4.85)

and we define

F_τ := S_τ ∩ ( ∪_{n=1}^{M_τ} Q_{ν_{S_ω}}(x_n, r_n) ).    (4.86)

Let U_n be the part of Q_{ν_{S_ω}}(x_n, r_n) which lies between T_{x_n,ν_{S_ω}}(±t_n), U_n^+ be the part above T_{x_n,ν_{S_ω}}(t_n), and U_n^− be the part below T_{x_n,ν_{S_ω}}(−t_n).

We claim that if x ∈ U_n, then

x + 2 dist(x, T_{x_n,ν_{S_ω}}(t_n)) ν_{S_ω}(x_n) ∈ U_n^+.    (4.87)

Note that

dist(x, T_{x_n,ν_{S_ω}}(t_n)) = t_n − (x − P_{x_n,ν_{S_ω}}(x)) · ν_{S_ω}(x_n),

and since x ∈ U_n, we have that

(x − P_{x_n,ν_{S_ω}}(x)) · ν_{S_ω}(x_n) ∈ (−t_n, t_n)

and

t_n ≤ 2 dist(x, T_{x_n,ν_{S_ω}}(t_n)) + (x − P_{x_n,ν_{S_ω}}(x)) · ν_{S_ω}(x_n) ≤ 3 t_n ≤ 10.5 τ r_n < r_n.

Hence, following a computation similar to (4.50), we deduce (4.87).

Moreover, according to (4.84) and the definition of R_{τ/2,ν_{S_ω}}(x_n, r_n), we have that

( U_n^+ ∪ U_n^− ) ∩ R_{τ/2,ν_{S_ω}}(x_n, r_n) = ∅.

We define ū_τ as follows (see Figure 2):

ū_τ(x) := u(x) if x ∈ U_n^+ ∪ U_n^−,  and  ū_τ(x) := u( x + 2 dist(x, T_{x_n,ν_{S_ω}}(t_n)) ν_{S_ω}(x_n) ) if x ∈ U_n,    (4.88)

and

ū_τ(x) := u(x) if x ∈ Ω \ ( ∪_{n=1}^{M_τ} Q_{ν_{S_ω}}(x_n, r_n) ).

Note that the jump set of ū_τ is contained in the union of the following three sets:

1. ∪_{n=1}^{M_τ} [ T_{x_n,ν_{S_ω}}(−t_n) ∩ Q_{ν_{S_ω}}(x_n, r_n) ];
2. ∪_{n=1}^{M_τ} [ ∂Q_{ν_{S_ω}}(x_n, r_n) ∩ U_n ];

3. S_u \ F_τ, where F_τ is defined in (4.86).

Figure 2. Construction of ū_τ(x) in (4.88).

The construction of {u_ε}_{ε>0} ⊂ W^{1,2}_ω(Ω) and {v_ε}_{ε>0} ⊂ W^{1,2}(Ω) satisfying (4.83) is the same as in the proof of Proposition 4.12, using (4.88) instead of (4.51) and, at (4.70), applying (4.82) instead of (4.36).

Step 2: Suppose that H^{N−1}((S_ω \ S_u) ∪ (S_u \ S_ω)) > 0. Note that only the part S_u \ S_ω matters here, and not S_ω \ S_u, because we only need to recover S_u.

We first apply Proposition 4.9 to S_u to obtain a countable family of disjoint cubes F = {Q_{ν_{S_u}}(x_n, r_n)}_{n=1}^∞ such that (4.32)–(4.35) hold. Furthermore, we extract a finite collection T_τ from F such that (4.85) holds.

We define ū_τ inside each Q_{ν_{S_u}}(x_n, r_n) ∈ T_τ as follows (see Figure 3):

1. if x_n ∈ S_ω, we apply Proposition 4.15 to obtain item 5 of Proposition 4.16 for this Q_{ν_{S_u}}(x_n, r_n), and we define ū_τ in this cube as in (4.88);
2. if x_n ∈ S_u \ S_ω, we apply Lemma 4.7 to obtain item 5 of Proposition 4.9 for this Q_{ν_{S_u}}(x_n, r_n), and we define ū_τ in this cube as in (4.51).

For points x outside T_τ we define ū_τ(x) := u(x). Reasoning as in the proof of Proposition 4.12 and in Step 1 above, we conclude (4.83). □

Proof of Theorem 4.13. The lim inf inequality follows from Proposition 4.14. On the other hand, for any given u ∈ GSBV_ω(Ω) such that E_ω(u) < ∞ we have, by the Lebesgue Monotone Convergence

Figure 3. Applying (4.88) in the dotted-line cube and (4.51) in the straight-line cube.

Theorem,

E_ω(u) = lim_{K→∞} E_ω(K ∧ u ∨ (−K)),

and a diagonal argument, together with Proposition 4.17, yields the lim sup inequality for u. □

Appendix

Definition A.1 ([7], Definition 4.4.9). Let X be a metric space. We denote by C_X the family of all nonempty closed subsets of X. Then

d_H(C, D) := min{ 1, h(C, D) },  C, D ∈ C_X,

where

h(C, D) := inf{ δ ∈ [0, +∞] : C ⊂ D_δ and D ⊂ C_δ },

is a metric on C_X, called the Hausdorff distance between the sets C and D (see (2.4) for the definition of D_δ and C_δ). Consider X to be the interval (0, 1) with the Euclidean distance. We remark that for two intervals [a_1, b_1] and [a_2, b_2] in (0, 1),

d_H([a_1, b_1], [a_2, b_2]) = min{ 1, max{ |a_1 − a_2|, |b_1 − b_2| } }.    (A.1)

Indeed, the δ-neighborhood of [a_1, b_1] is [a_1 − δ, b_1 + δ], and it contains [a_2, b_2] if and only if

δ ≥ max{ a_1 − a_2, b_2 − b_1 }.

Similarly, the δ-neighborhood of [a2, b2] contains [a1, b1] if and only if

δ ≥ max{ a_2 − a_1, b_1 − b_2 },

and we conclude (A.1).
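For instance, for the intervals [0.2, 0.5] and [0.4, 0.9] in (0, 1) one gets max{ |0.2 − 0.4|, |0.5 − 0.9| } = 0.4, so

d_H([0.2, 0.5], [0.4, 0.9]) = min{1, 0.4} = 0.4;

the truncation by 1 only matters for sets that are farther apart than distance 1.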

Lemma A.2. Let In := [an, bn] ⊂ (−1, 1). Then, up to the extraction of a subsequence,

I_n →^H I_∞ ⊂ (−1, 1),

where I_∞ is connected and closed in (−1, 1), and

L¹(I_∞) = lim_{n→∞} L¹(I_n).

Moreover, for arbitrary K ⊂⊂ I_∞, K must be contained in I_n for n large enough.

Proof. Because I_n ⊂ (−1, 1), the sequences {a_n}_{n=1}^∞ and {b_n}_{n=1}^∞ are bounded and so, up to the extraction of a subsequence, there exist

a_∞ := lim_{n→∞} a_n  and  b_∞ := lim_{n→∞} b_n,    (A.2)

where −1 ≤ a_∞ ≤ b_∞ ≤ 1. We define I_∞ := [a_∞, b_∞] if −1 < a_∞ ≤ b_∞ < 1, I_∞ := (−1, b_∞] if a_∞ = −1, and I_∞ := [a_∞, 1) if b_∞ = 1. Hence I_∞ is connected and closed in (−1, 1) (in the case in which a_∞ = b_∞ = −1, or a_∞ = b_∞ = 1, we have I_∞ = ∅ and it is still closed in (−1, 1)).

Hence

lim_{n→∞} d_H(I_n, I_∞) = lim_{n→∞} max{ |a_n − a_∞|, |b_n − b_∞| } = 0,

and we have, for I_∞ ≠ ∅,

L¹(I_∞) = b_∞ − a_∞ = lim_{n→∞} (b_n − a_n) = lim_{n→∞} L¹(I_n),

as desired.

Next, if K ⊂⊂ I_∞ then K ⊂ (α, β) for some α, β such that a_∞ < α < β < b_∞. By (A.2) choose N large enough such that for all n ≥ N,

a_n < α < β < b_n,

so that K ⊂ I_n for all n ≥ N. □
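As a simple instance of the lemma: I_n = [−1 + 1/n, 1/2] gives a_∞ = −1 and b_∞ = 1/2, so I_∞ = (−1, 1/2], which is closed in (−1, 1) although not closed in R; moreover L¹(I_n) = 3/2 − 1/n → 3/2 = L¹(I_∞), and any compact K ⊂⊂ (−1, 1/2] satisfies K ⊂ [−1 + 1/n, 1/2] for n large.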

Lemma A.3. Let {v_ε}_{ε>0} ⊂ W^{1,2}(I) be such that 0 ≤ v_ε ≤ 1, v_ε → 1 in L¹(I) and pointwise a.e., and

lim sup_{ε→0} ∫_I [ (ε/2)|v_ε'|² + (1/(2ε))(v_ε − 1)² ] dx < ∞.    (A.3)

Then for arbitrary 0 < η < 1 there exists an open set Hη ⊂ I satisfying:

1. the set I \ H_η is a collection of finitely many points in I;
2. for every set K compactly contained in H_η, we have K ⊂ B_ε^η for ε > 0 small enough, where

B_ε^η := { x ∈ I : v_ε²(x) ≥ η }.

Proof. Choose a constant M > 0 such that

M ≥ lim sup_{ε→0} ∫_I [ (ε/2)|v_ε'|² + (1/(2ε))(v_ε − 1)² ] dx ≥ lim sup_{ε→0} ∫_I |v_ε'| |1 − v_ε| dx = lim sup_{ε→0} (1/2) ∫_I |c_ε'| dx,

where c_ε(x) := (1 − v_ε(x))². Note that by (A.3), c_ε → 0 in L¹(I). Fix σ, δ with 0 < σ < δ < 1.

By the co-area formula we have, for 0 < ε < ε_0 with ε_0 sufficiently small,

2M + 1 ≥ ∫_I |c_ε'(x)| dx = ∫_{−∞}^{∞} H^0({ x : c_ε(x) = t }) dt ≥ ∫_σ^δ H^0({ x : c_ε(x) = t }) dt.

Hence, for each ε > 0 there exists δ_ε ∈ (σ, δ) such that

(2M + 1)/(δ − σ) ≥ H^0({ x : c_ε(x) = δ_ε }).    (A.4)

Define, for a fixed r > 0,

A_ε^r := { x ∈ I : c_ε(x) ≤ r }.

Since v_ε ∈ W^{1,2}(I), v_ε is continuous and so is c_ε; therefore A_ε^{δ_ε} is closed and has at most (2M + 1)/(δ − σ) + 1 connected components because of (A.4) and in view of the continuity of c_ε. Note that the number (2M + 1)/(δ − σ) does not depend on ε > 0.

For ε ∈ (0, ε_0) and k ∈ N depending only on δ − σ and M, we have

1. A_ε^{δ_ε} = ∪_{i=1}^k I_ε^i, where each I_ε^i is a closed interval or ∅;
2. for all i < j, max{ x : x ∈ I_ε^i } < min{ x : x ∈ I_ε^j }.

By Lemma A.2, up to the extraction of a subsequence, for each i ∈ {1, 2, ..., k} let I_0^i be the Hausdorff limit of the I_ε^i as ε → 0, i.e., I_ε^i →^H I_0^i, with I_0^i connected and closed in I, and for all i < j, max I_0^i ≤ min I_0^j.

Set

T_δ := ∪_{i=1}^k (I_0^i)°  and  T_{δ,ε} := ∪_{i=1}^k (I_ε^i)°,    (A.5)

where by (·)° we denote the interior of a set. Since

I \ A_ε^{δ_ε} ⊂ { x ∈ I : c_ε(x) ≥ σ }

and c_ε → 0 in L¹(I), by Chebyshev's inequality we have

lim_{ε→0} L¹(T_{δ,ε}) = lim_{ε→0} L¹(A_ε^{δ_ε}) = 2.

Moreover, since T_{δ,ε} →^H T_δ, by Lemma A.2 we have

L¹(T_δ) = Σ_{i=1}^k L¹((I_0^i)°) = Σ_{i=1}^k lim_{ε→0} L¹((I_ε^i)°) = lim_{ε→0} Σ_{i=1}^k L¹((I_ε^i)°) = lim_{ε→0} L¹(T_{δ,ε}) = 2.

Thus |I \ T_δ| = 0. Moreover, since T_δ has at most k connected components, I \ T_δ is a finite collection of points in I.

Next, let K ⊂⊂ T_δ be a compact subset. We claim that K must be contained in A_ε^{δ_ε} for ε > 0 small

enough. Recall I_0^i and I_ε^i from (A.5). Define K_i := K ∩ (I_0^i)° for i = 1, ..., k. Then K_i ⊂⊂ (I_0^i)° for each i, and so by Lemma A.2 there exists ε_i > 0 such that for all 0 < ε < ε_i, K_i ⊂ I_ε^i. Define

ε' := min_{i ∈ {1,...,k}} { ε_i }.

For 0 < ε < ε' we have K_i ⊂ I_ε^i, and so

K = ∪_{i=1}^k K_i ⊂ ∪_{i=1}^k I_ε^i = A_ε^{δ_ε}.

Finally, given η ∈ (0, 1), set δ := (1 − √η)², H_η := T_{(1−√η)²}, and B_ε^η := A_ε^{(1−√η)²}, and properties 1 and 2 are satisfied. □
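To illustrate Lemma A.3 with a concrete family (one admissible choice, not the only one): on I = (−1, 1) take v_ε(x) := 1 − e^{−|x|/(2ε)}. Then 0 ≤ v_ε ≤ 1, v_ε → 1 pointwise on I \ {0} and in L¹(I), and

(ε/2)|v_ε'(x)|² + (1/(2ε))(v_ε(x) − 1)² = (1/(8ε)) e^{−|x|/ε} + (1/(2ε)) e^{−|x|/ε} = (5/(8ε)) e^{−|x|/ε},

whose integral over I is at most 5/4, so (A.3) holds. Here c_ε(x) = e^{−|x|/ε} concentrates at the origin, and for η ∈ (0, 1),

B_ε^η = { v_ε² ≥ η } = { |x| ≥ 2ε log(1/(1 − √η)) },

so every compact subset of (−1, 0) ∪ (0, 1) is contained in B_ε^η for ε small; accordingly one may take H_η = I \ {0}, and item 1 holds with a single removed point.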

Acknowledgements The authors wish to acknowledge the Center for Nonlinear Analysis where this work was carried out. The research of both authors was partially funded by the National Science Foundation under Grant No. DMS - 1411646.

References

[1] L. Ambrosio. A compactness theorem for a new class of functions of bounded variation. Boll. Un. Mat. Ital. B (7), 3(4):857–881, 1989.
[2] L. Ambrosio. Variational problems in SBV and image segmentation. Acta Appl. Math., 17(1):1–40, 1989.
[3] L. Ambrosio. Existence theory for a new class of variational problems. Arch. Rational Mech. Anal., 111(4):291–322, 1990.
[4] L. Ambrosio, N. Fusco, and D. Pallara. Functions of bounded variation and free discontinuity problems. Oxford Mathematical Monographs. The Clarendon Press, Oxford University Press, New York, 2000.
[5] L. Ambrosio and V. Magnani. Weak differentiability of BV functions on stratified groups. Math. Z., 245(1):123–153, 2003.
[6] L. Ambrosio, M. Miranda, Jr., and D. Pallara. Special functions of bounded variation in doubling metric measure spaces. 14:1–45, 2004.
[7] L. Ambrosio and P. Tilli. Topics on analysis in metric spaces, volume 25 of Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, Oxford, 2004.
[8] L. Ambrosio and V. M. Tortorelli. Approximation of functionals depending on jumps by elliptic functionals via Γ-convergence. Comm. Pure Appl. Math., 43(8):999–1036, 1990.
[9] H. Attouch. Variational convergence for functions and operators. Applicable Mathematics Series. Pitman (Advanced Publishing Program), Boston, 1984.
[10] G. Aubert and P. Kornprobst. Mathematical problems in image processing, volume 147 of Applied Mathematical Sciences. Springer, New York, second edition, 2006. Partial differential equations and the calculus of variations, with a foreword by Olivier Faugeras.
[11] A. Baldi. Weighted BV functions. Houston J. Math., 27(3):683–705, 2001.
[12] A. Baldi and B. Franchi. A Γ-convergence result for doubling metric measures and associated perimeters. Calc. Var. Partial Differential Equations, 16(3):283–298, 2003.
[13] A. Baldi and B. Franchi. Mumford-Shah-type functionals associated with doubling metric measures. Proc. Roy. Soc. Edinburgh Sect. A, 135(1):1–23, 2005.

[14] G. Bellettini, G. Bouchitté, and I. Fragalà. BV functions with respect to a measure and relaxation of metric integral functionals. J. Convex Anal., 6(2):349–366, 1999.
[15] G. Bouchitté, G. Buttazzo, and P. Seppecher. Energies with respect to a measure and applications to low-dimensional structures. Calc. Var. Partial Differential Equations, 5(1):37–54, 1997.
[16] A. C. Bovik. Handbook of image and video processing. Academic Press, 2010.
[17] A. Braides. Γ-convergence for beginners, volume 22 of Oxford Lecture Series in Mathematics and its Applications. Oxford University Press, Oxford, 2002.
[18] L. Capogna, D. Danielli, and N. Garofalo. The geometric Sobolev embedding for vector fields and the isoperimetric inequality. Comm. Anal. Geom., 2(2):203–215, 1994.
[19] T. Chan, S. Esedoglu, F. Park, and A. Yip. Total variation image restoration: overview and recent developments. In Handbook of mathematical models in computer vision, pages 17–31. Springer, New York, 2006.
[20] Y. Chen, T. Pock, R. Ranftl, and H. Bischof. Revisiting loss-specific training of filter-based MRFs for image restoration. In Pattern Recognition, pages 271–281. Springer, 2013.
[21] Y. Chen, R. Ranftl, and T. Pock. Insights into analysis operator learning: from patch-based sparse models to higher order MRFs. IEEE Transactions on Image Processing, 23(3):1060–1072, March 2014.
[22] G. Dal Maso, J.-M. Morel, and S. Solimini. A variational method in image segmentation: existence and approximation results. Acta Math., 168(1-2):89–151, 1992.
[23] G. David and S. Semmes. Strong A∞ weights, Sobolev inequalities and quasiconformal mappings. 122:101–111, 1990.
[24] E. De Giorgi, M. Carriero, and A. Leaci. Existence theorem for a minimum problem with free discontinuity set. Arch. Rational Mech. Anal., 108(3):195–218, 1989.
[25] J. C. De Los Reyes, C.-B. Schönlieb, and T. Valkonen. The structure of optimal parameters for image restoration problems. J. Math. Anal. Appl., 434(1):464–500, 2016.
[26] J. Domke. Generic methods for optimization-based modeling. In AISTATS, volume 22, pages 318–326, 2012.
[27] J. Domke. Learning graphical model parameters with approximate marginal inference. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(10):2454–2467, Oct 2013.
[28] I. Ekeland and R. Témam. Convex analysis and variational problems, volume 28 of Classics in Applied Mathematics. Society for Industrial and Applied Mathematics (SIAM), Philadelphia, PA, English edition, 1999. Translated from the French.
[29] L. C. Evans and R. F. Gariepy. Measure theory and fine properties of functions. CRC Press, 2015.
[30] H. Federer. Geometric measure theory. Die Grundlehren der mathematischen Wissenschaften, Band 153. Springer-Verlag New York Inc., New York, 1969.
[31] I. Fonseca and G. Leoni. Modern methods in the calculus of variations: Sobolev spaces. Springer Monographs in Mathematics. Not yet published, 2015.
[32] B. Franchi, R. Serapioni, and F. Serra Cassano. Approximation and imbedding theorems for weighted Sobolev spaces associated with Lipschitz continuous vector fields. Boll. Un. Mat. Ital. B (7), 11(1):83–117, 1997.
[33] B. Franchi, R. Serapioni, and F. Serra Cassano. Regular hypersurfaces, intrinsic perimeter and implicit function theorem in Carnot groups. Comm. Anal. Geom., 11(5):909–944, 2003.
[34] G. Leoni. A first course in Sobolev spaces, volume 105 of Graduate Studies in Mathematics. American Mathematical Society, Providence, RI, 2009.
[35] G. Leoni and R. Murray. Second-order Γ-limit for the Cahn-Hilliard functional. Arch. Ration. Mech. Anal., 219(3):1383–1451, 2016.
[36] P. Lions, S. Osher, and L. Rudin. Denoising and deblurring images using constrained nonlinear partial differential equations. Preprint, Cognitech, Inc., pages 2800–28, 1993.
[37] P. Liu. The bi-level learning and uniqueness of optimal spatially dependent parameters for image reconstruction problem. In preparation.
[38] P. Liu. The weighted Γ-convergence of higher order singular perturbation models. In preparation.
[39] F. Maggi. Sets of finite perimeter and geometric variational problems, volume 135 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2012. An introduction to geometric measure theory.
[40] P. Mattila. Geometry of sets and measures in Euclidean spaces, volume 44 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1995. Fractals and rectifiability.
[41] D. Mumford and J. Shah. Boundary detection by minimizing functionals. In IEEE Conference on Computer Vision and Pattern Recognition, volume 17, pages 137–154. San Francisco, 1985.
[42] L. Rudin. Segmentation and restoration using local constraints. Technical Report, 36, 1993.
[43] L. I. Rudin and S. Osher. Total variation based image restoration with free local constraints. In Proceedings of ICIP-94, IEEE International Conference on Image Processing, volume 1, pages 31–35, Nov 1994.
[44] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Phys. D, 60(1-4):259–268, 1992. Experimental mathematics: computational issues in nonlinear science (Los Alamos, NM, 1991).
[45] M. F. Tappen. Utilizing variational optimization to learn Markov random fields. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2007.
[46] M. F. Tappen, C. Liu, E. H. Adelson, and W. T. Freeman. Learning Gaussian conditional random fields for low-level vision. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2007.

[35] G. Leoni and R. Murray. Second-order Γ-limit for the Cahn-Hilliard functional. Arch. Ration. Mech. Anal., 219(3):1383–1451, 2016. [36] P. Lions, S. Osher, and L. Rudin. Denoising and deblurring images using constrained nonlinear partial differential equations. Preprint, Cognitech, Inc, pages 2800–28, 1993. [37] P. Liu. The bi-level learning and uniqueness of optimal spatially dependent parameters for image reconstruction problem. In preparation. [38] P. Liu. The weighted Γ-convergence of higher order singular perturbation models. In prepara- tion. [39] F. Maggi. Sets of finite perimeter and geometric variational problems, volume 135 of Cam- bridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 2012. An introduction to geometric measure theory. [40] P. Mattila. Geometry of sets and measures in Euclidean spaces, volume 44 of Cambridge Studies in Advanced Mathematics. Cambridge University Press, Cambridge, 1995. Fractals and rectifiability. [41] D. Mumford and J. Shah. Boundary detection by minimizing functionals. In IEEE Conference on Computer Vision and Pattern Recognition, volume 17, pages 137–154. San Francisco, 1985. [42] L. Rudin. Segmentation and restoration using local constraints. Technical Repport, 36, 1993. [43] L. I. Rudin and S. Osher. Total variation based image restoration with free local constraints. In Image Processing, 1994. Proceedings. ICIP-94., IEEE International Conference, volume 1, pages 31–35 vol.1, Nov 1994. [44] L. I. Rudin, S. Osher, and E. Fatemi. Nonlinear total variation based noise removal algorithms. Phys. D, 60(1-4):259–268, 1992. Experimental mathematics: computational issues in nonlinear science (Los Alamos, NM, 1991). [45] M. F. Tappen. Utilizing variational optimization to learn markov random fields. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2007. [46] M. F. Tappen, C. Liu, E. H. Adelson, and W. T. Freeman. Learning gaussian conditional random fields for low-level vision. In 2007 IEEE Conference on Computer Vision and Pattern Recognition, pages 1–8, June 2007.