TRANSACTIONS OF THE AMERICAN MATHEMATICAL SOCIETY Volume 365, Number 8, August 2013, Pages 4229–4255 S 0002-9947(2013)05760-1 Article electronically published on March 11, 2013

QUASICONVEX FUNCTIONS AND NONLINEAR PDES

E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Abstract. A second order characterization of functions which have convex 2 level sets (quasiconvex functions) results in the operator L0(Du, D u)= min{v · D2uvT ||v| =1, |v · Du| =0}. Intwodimensionsthisisthemeancur- 2 vature operator, and in any dimension L0(Du, D u)/|Du| is the first principal curvature of the surface S = u−1(c). Our main results include a comparison 2 principle for L0(Du, D u)=g when g ≥ Cg > 0 and a comparison principle 2 for quasiconvex solutions of L0(Du, D u)=0. A more regular version of L0 2 2 T is introduced, namely Lα(Du, D u)=min{v · D uv ||v| =1, |v · Du|≤α}, which characterizes functions which remain quasiconvex under small linear per- turbations. A comparison principle is proved for Lα. A representation result using stochastic control is also given, and we consider the obstacle problems for L0 and Lα.

1. Introduction n In this paper we consider the operator L0 : R × S(n) → R, defined by T L0(p, M)= min v · Mv , {v∈Rn ||v|=1,v·p=0}

and when p =0,L0(0,M)=λ1(M) = first eigenvalue of M. Here S(n)isthesetof symmetric n × n matrices. We will study the nonlinear partial differential equation 2 L0(Du, D u)=g(x),x∈ Ω, with u = h on ∂Ω. Solutions are considered in the viscosity sense. For a given u :Ω→ R 2 we also write L0(u) for the operator L0(Du, D u). The motivation for considering this problem is the connection with differential geometry, generalized convexity, tug-of-war games and stochastic optimal control. First, observe that the operator L0 has the nice property that it is of geometric type: n L0(λp, λM + μp⊗ p)=λL0(p, M) ∀λ>0,μ∈ R,p∈ R ,M ∈ S(n). In R2 it is easy to calculate D⊥u · D2uD⊥uT L (u)=− =Δu − Δ∞u, 0 |D⊥u|2 where Du · D2uDuT Δ∞u = |Du|2

Received by the editors February 16, 2011 and, in revised form, November 23, 2011. 2010 Subject Classification. Primary 35D40, 35B51, 35J60, 52A41, 53A10. Key words and phrases. Quasiconvex, robustly quasiconvex, principal curvature, comparison principles. The authors were supported by grant DMS-1008602 from the National Science Foundation.

c 2013 American Mathematical Society 4229

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4230 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

is the celebrated ∞−Laplacian. Thus, in two dimensions L0(u) is the much studied n 2 mean curvature operator. In R , L0(Du, D u)/|Du| is the first (smallest) principal curvature of the surface S = u−1(c)={x ∈ Rn | u(x)=c}. The problem of prescribed principal curvature L0(u)=g|Du| is considerably complicated in full 2 generality. In this paper we consider the problem L0(Du, D u)=g in order to determine whether or not there is a solution, its uniqueness, and its properties. The results for L0 immediately imply analogous results for the analogous largest principal curvature by considering the operator 2 2 T Lmax(Du, D u)= max v · D uv . {|v|=1,v·Du=0}

Moreover, L0(u)=0aswellasLmax(u)=0implythatu is a solution of the homogeneous Monge-Amp`ere equation det(D2u)=0. Now suppose that Ω ⊂ Rn is a convex and open set. A necessary and sufficient second order condition that a twice differentiable function be convex is that D2u(x) is positive semidefinite for all x ∈ Ω. Alvarez, Lasry and Lions [1] and Oberman, in the recent papers [13] and [14], as well as Bardi and Dragoni [3] have established the fact that u is convex if and only if D2u ≥ 0 in the viscosity sense, or, in equation form, if 2 2 T λmin(D u(x)) = min{vD u(x)v ||v| =1}≥0,x∈ Ω,

in the viscosity sense. That is, if u − ϕ achieves a maximum at x0 ∈ Ω, where ϕ is 2 a smooth function, then λmin(D ϕ(x0)) ≥ 0. Now we introduce the connection of our operator L0(u) with quasiconvex func- tions. Recall that a function u :Ω→ [−∞, ∞]isquasiconvex by definition if u(λx +(1− λ)y) ≤ max{u(x),u(y)}, ∀ x, y ∈ Ω, 0 <λ<1, which is equivalent to the requirement that the sublevel sets of u be convex. A necessary and sufficient first order condition for a differentiable function u : Ω → R to be quasiconvex is the following: • First Order: ∀ x, y ∈ Ω, u(y) ≤ u(x)=⇒ Du(x) · (y − x) ≤ 0. A necessary, but not sufficient, second order condition for quasiconvexity, for a twice differentiable function u :Ω→ R is the following: • Necessary Second Order: ∀ x ∈ Ω, v · Du(x)=0 =⇒ v · D2u(x)vT ≥ 0, ∀ 0 = v ∈ Rn. To see that the second order condition is not necessary, consider the following example. Example 1.1. Let u(x)=−x4 on R.Ifx =0, v · Du(x) = 0 implies v =0 and then v · D2u(x)v =0,andifx =0,thenD2u(x) = 0; thus the second order condition is satisfied. However, u is not quasiconvex. In fact, it is quasiconcave. A sufficient, but not necessary condition for quasiconvexity is strict inequality, i.e., • Sufficient Second Order: ∀ x ∈ Ω, v · Du(x)=0 =⇒ v · D2u(x)vT > 0, ∀ 0 = v ∈ Rn. See Boyd and Vandenberghe [6] for details. We may express the second order quasiconvexity conditions in terms of L0:ifa twice differentiable function u :Ω→ R is quasiconvex, then L0(u) ≥ 0. Conversely, if L0(u) > 0, then u is quasiconvex. It turns out that viscosity versions of these

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4231

two statements are valid as well. This is shown in Section 2 as Theorem 2.6 and 2 Theorem 2.7. It is more interesting that L0(Du, D u) ≥ 0 is sufficient for quasi- convexity, if, additionally, u does not have any local maxima (cf. Example 1.1). We prove this in Theorem 2.8. Thus, L0 arises naturally as the generalization of 2 the condition guaranteeing convexity, namely λ1(D u) ≥ 0, to quasiconvexity. But we will see that the generalization is far from straightforward as is indicated by the fact that L0(u) ≥ 0 is not sufficient for quasiconvexity without other conditions. One might conjecture that the problem in proving that L0(u) ≥ 0 implies qua- siconvexity without extra assumptions is that the constraint set in the definition of L0 has no thickness. One can thicken it up by considering a slightly different operator, − { · T | ∈ Rn | |≤ · } L0 (p, M)=min v Mv v , v 1,v p =0 . − ≥ Nevertheless, L0 (u) 0 does not characterize quasiconvexity either. In fact, u(x)=−x4 is a counterexample, so this idea doesn’t help. However, the oper- − ator L0 arises in two dimensions as the governing operator for motion by positive curvature as pointed out in [12]. A different way to thicken up the constraint set in the definition of L0(u)isto allow |v · p |≤α rather than v · p = 0. This leads to the definition

T n Lα(p, M)=min{v · Mv | v ∈ R , |v| =1, |v · p |≤α}.

This turns out to be a very fruitful idea. For a smooth u :Ω→ R, Lα(u) ≥ 0 is a necessary and sufficient condition for quasiconvexity of u under all sufficiently small linear perturbations. It turns out that this result, but without explicitly using Lα, was shown by An [2] in a somewhat different (nonsmooth) framework. Recall here that there are many quasiconvex functions which fail to be quasiconvex under arbitrarily small linear perturbations. For example, on R, the function x → arctan x is quasiconvex, but x → arctan x − ax is not quasiconvex for arbitrarily small positive a. In what follows, given α>0, we say that u :Ω→ [−∞, ∞]isα-robustly quasiconvex if x → u(x)+ξ · x is quasiconvex on Ω for every ξ ∈ Rn, |ξ|≤α,and denote the class of such functions by Rα(Ω). We call u :Ω→ [−∞, ∞] robustly quasiconvex if it is α-robustly quasiconvex for some α>0, and denote the class R R of such functions by (Ω) = α>0 α(Ω). We note that robustly quasiconvex functions were named stable-quasiconvex, or just s-quasiconvex, by Phu and An [17]. See [2] and [17] for more examples and discussions of robust quasiconvexity. In Section 4 we show that Lα(u) ≥ 0 in the viscosity sense is a necessary and sufficient condition for u ∈Rα(Ω). Furthermore, it turns out that Lα is an operator with very nice properties including the fact that there is a comparison, and therefore uniqueness, principle for equations of the form Lα(u)=g, as long as g ≥ 0. Thus we have a complete characterization of the class of robustly quasiconvex functions and we have a tool to study what happens as we let α → 0. Notice, however, that Lα is not a geometric operator in the same sense as L0. We are therefore amply motivated to determine whether or not L0 enjoys a comparison principle among viscosity sub and supersolutions. Barles and Da Lio [12, Appendix] proved the following result for a related mean curvature problem, a sort of weak comparison principle.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4232 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Consider the problem Du D2uDuT (1.1) Δu − − 1=0,x∈ Ω,u(x)=0,x∈ ∂Ω. |Du|2 In two dimensions this equation arises by considering a tug-of-war game in which one player wants to minimize the exit time from Ω while the other player is trying to thwart that goal. Here is what was proved by Barles and Da Lio in the Appendix of [12]. Theorem 1.2. Assume Ω is starshaped with respect to the origin. Let u be a bounded and upper semicontinuous function on Ω which is a subsolution of (1.1), and let v be a bounded and lower semicontinuous supersolution of (1.1) on Ω. Then ∗ ∗ u∗ ≤ v and v ≤ u, where u∗ and v denote the lower and upper semicontinuous envelopes of u, v, respectively. Notice that the solution of (1.1) may be discontinuous and there is no state- ment made connecting the problem with quasiconvexity. In two dimensions (1.1) is equivalent to the following equation, which is the inhomogeneous version of our quasiconvex problem:

(1.2) L0(u) − 1=0,x∈ Ω,u(x)=0,x∈ ∂Ω.

Observe that a subsolution of (1.2) is a strict subsolution of L0(u) > 0and hence is quasiconvex (assuming Ω is convex) by Theorem 2.7. Thus we are led to n consider the problem in Ω ⊂ R ,L0(u)=g(x) assuming that g(x) > 0onΩ. We prove in Theorem 3.3 that this problem has a comparison principle that an upper semicontinuous subsolution lies below a lower semicontinuous supersolution if it holds on ∂Ω. It is critical that g>0 because we have an example of nonuniqueness for L0(u)=0. On the other hand, by considering the fact that Lα L0 as α → 0+, it is natural to conjecture that using the unique solution (proved in Theorem 5.1) of Lα(uα)=0, the function u =supα>0 uα should be the correct solution of L0(u)=0. In fact, u =supα>0 uα is shown to be the unique quasiconvex solution of L0(u)=0. That is, L0(u) = 0 may have many solutions, but there is only one quasiconvex solution (see Theorem 5.5). In Section 6 we also show that the obstacle problem 2 min{g − u, L0(Du, D u)} =0,x∈ Ω,u= g, x ∈ ∂Ω, has a unique quasiconvex solution and it is g##, the largest quasiconvex minorant of g. TheobstacleproblemforLα, is also considered and it is shown to have a unique viscosity solution. This would be a way to calculate the greatest α−robustly quasiconvex minorant of a given function g. To conclude the paper, we give a brief introduction to the connections between our operator L0 and how it arises in stochastic optimal control. The part of sto- chastic control here is the new area of control in which the payoff is not the expected value, but the worst case cost. In other words, one takes an essential supremum over all the paths of the underlying Brownian motion. Thus, for example, we prove that a viscosity solution of L0(u) = 0 with u = g on ∂Ωisgivenby

u(x) = inf ess sup g(ξ(τx)), η ω∈Σ ∞ → {| | } · where√η :[0, ) S1(0) = y =1 is a control, ξ( ) is a trajectory given by dξ = 2 ηdW,t>0,ξ(0) = x, τx is the exit time of ξ(·)fromΩ, and W (·)isa

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4233

Brownian motion. The essential supremum is over the sample paths ξ(τx,ω),ω ∈ Σ. Notice here that there is no drift (although this could also be considered) and the control occurs only in the diffusion coefficient. These types of control problems have been studied by Soner [21] and Soner et al. [22], [23], and a similar control problem is constructed to give a representation formula for motion by mean curvature in [21] and Buckdahn et al. [7]. To see the connection between a tug-of-war game and L0, consider the rules of a game between Paul and Carol introduced by Kohn and Sefaty in [12]: Paul starts at x ∈ Ω. At each time step first Paul chooses a direction, i.e., a v ∈ S1(0), with the goal of trying to reach ∂Ω. Next, Carol, who is trying to prevent Paul from reaching the boundary, chooses either to (i) confirm the direction chosen by Paul (choose b = +1) or (ii) reverse the direction (choose b = −1). Paul’s goal is to minimize the exit time from Ω, while Carol’s goal is to maximize the exit time. If the time step is ε, the value of the game if the position is x ∈ Ω satisfies the dynamic programming principle √ uε(x)=minmax uε(x + ε 2 bv)+ε2, |v|=1 b=±1 where uε(x)=ε2k if Paul needs k stepstoexitstartingfromx ∈ Ω. In R2, ε u (x) → u(x), and L0(u)−1=0. This is the starting point leading to the connection between deterministic optimal control and mean curvature developed in [12].

2. Viscosity characterizations of quasiconvex functions We state first our precise definition of a viscosity solution that we use in this paper and refer to [9] for the basic results. Throughout this paper, for any locally bounded function f, we use the notation that f ∗ is the upper semicontinuous en- velope of f and f∗ is the lower semicontinuous envelope of f, in all the variables in the function. Definition 2.1. A locally bounded function u : Ω → R is a viscosity subsolution of F (x, u, Du, D2u) ≥ 0 ∗ if, whenever u − ϕ has a strict local zero maximum at x0 ∈ Ω for some smooth function ϕ :Ω→ R,wehave ∗ 2 F (x0,ϕ(x0),Dϕ(x0),D ϕ(x0)) ≥ 0. The function u is a viscosity supersolution of F (x, u, Du, D2u) ≤ 0

if, whenever u∗ − ϕ has a strict zero minimum at x0 ∈ Ω for some smooth function ϕ :Ω→ R,wehave 2 F∗(x, u, Du, D u) ≤ 0. If we have a Dirichlet boundary condition u(x)=g(x),x∈ ∂Ω, we take this in the viscosity sense [9], i.e., when x ∈ ∂Ω,uis a subsolution of F ∗(x, u, Du, D2u) ∨ 2 (u(x) − g(x)) ≥ 0, and u is a supersolution of F∗(x, u, Du, D u) ∧ (u(x) − g(x)) ≤ 0. Remark 2.2. The inequalities in the definition are reversed from the usual defini- tions because we do not want to carry along minus signs throughout the paper.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4234 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

We begin by defining our operators precisely. Set Γ(p, α)={v ∈ Rn ||v| = 1, |p · v|≤α} and Γ(p, 0) := Γ(p). n Define the hamiltonian Lα : R × S(n) → R by T n T Lα(p, M)=min{vM v : v ∈ R , |v| =1, |v · p|≤α} =min{vM v } v∈Γ(p,α) n and L0 : R × S(n) → R by T n T L0(p, M)=min{vM v : v ∈ R , |v| =1, |v · p| =0} =min{vM v }. v∈Γ(p)

If Γ(p, α)=∅, by convention, we set Lα(p, M)=+∞. Notice that Γ(0,α)=Γ(0)= n S1(0) = {y ∈ R ||y| =1}, and hence

Lα(0,M)=L0(0,M)=λ1(M),

where λ1(M) is the first eigenvalue of M. We begin by showing that Lα(p, M)andL0(p, M) are continuous if p =0and a convergence of Lα to L0 as α → 0+. The proof is standard, so only an outline is provided.

n Lemma 2.3. Lα and L0 are continuous in (p, M) ∈ R \{0}×S(n), lower semi- continuous in (p, M) at points where p =0,andLα(p, M) L0(p, M) as α → 0+. Also, for α ≥ 0, ∗ L (0,M) = lim sup Lα(p, M). α → δ 0+ |p|≤δ Proof. The function vMvT if |v| =1, |v · p|≤α, f(α, p, M, v)= +∞ otherwise, is lower semicontinuous on [0, ∞)×Rn×S(n)×Rn and uniformly level bounded in v. Because Lα(p, M)=minv f(α, p, M, v), [19, Theorem 1.17], implies that Lα is lower semicontinuous, not just in (p, M), but in (α, p, M). To see that Lα is upper semi- continuous in (α, p, M) and at points with p = 0, suppose Lα(p, M)=f(α, p, M, v) and pick δ>0. For (α,p,M) sufficiently close to (α, p, M), there are v close to v such that |v| =1,|v · p|≤α,andf(α, p, M, v)+δ ≥ f(α,p,M,v).       Then Lα(p, M)+δ ≥ f(α ,p,M ,v ) ≥ Lα (p ,M ), and this gives upper semi- continuity. Monotonicity of Lα in α is clear from the definition. Now, continu- ity implies that Lα(p, M) L0(p, M)asα → 0+ for p = 0, while when p =0, Lα(0,M)=L0(0,M)=λ1(M). The remaining statements are from the definitions of the envelopes. 

00 Example 2.4. To see that L0(p, M) is discontinuous at p =0, let M =(01). { 2 | ∈ } Then for p =(0,δ),L0(p, M)=minv2 v Γ(p) =0, while for p =(δ, 0), L0(p, M)=1=L0(0,M). Remark 2.5. Using the lemma we can give a simplified definition of what it means ∗ to be a viscosity solution of L0(u)=0. If u is locally bounded, and u − ϕ achieves a zero maximum at x , then u is a subsolution if 0 2 L0(Dϕ(x0),D ϕ(x0)) ≥ 0, if Dϕ(x0) =0, 2 L0(p, D ϕ(x0)) ≥ 0, for some |p|≤1, if Dϕ(x0)=0.

If u is locally bounded, and u∗ − ϕ achieves a zero minimum at x0, then u is a 2 supersolution if L0(Dϕ(x0),D ϕ(x0)) ≤ 0. The supersolution condition is simplified

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4235

n because of the fact that L0(p, M) is lower semicontinuous on R × S(n). Similar statements hold for Lα. Now we begin by showing that a quasiconvex function must be a subsolution of L0(u) ≥ 0. Theorem 2.6. Let Ω be convex. If u :Ω→ R is quasiconvex, then u is a viscosity subsolution of L0(u) ≥ 0. Proof. Without loss of generality we may assume u is upper semicontinuous since otherwise we work with u∗, which is still quasiconvex. Indeed, u∗(λx +(1− λ)y) = lim sup u(λ(x − z)+(1− λ)(y − z)) ≤ u∗(x) ∨ u∗(y). → δ 0+ |z|≤δ

Suppose that u is quasiconvex but not a subsolution of L0(u) ≥ 0. Then, there is a smooth function ϕ and x ∈ Ωatwhichu − ϕ has a zero maximum, and L0(ϕ(x)) < 0. By definition, there exists v ∈ Γ(Dϕ(x)) with |v| =1,v · Dϕ(x)=0, but vD2ϕ(x)vT = −γ<0. Note that this is true even if Dϕ(x)=0. By Taylor’s formula, for sufficiently small ρ,wehave ρ2 ρ2 u(x+ρv) ≤ ϕ(x+ρv)=ϕ(x)+ρv·Dϕ(x)+ vD2ϕ(x)vT +o(ρ2) ≤ u(x)−γ +o(ρ2) 2 2 and ρ2 ρ2 u(x−ρv)≤ϕ(x−ρv)=ϕ(x)−ρv·Dϕ(x)+ vD2ϕ(x)vT +o(ρ2) ≤ u(x)−γ +o(ρ2). 2 2 Directly from the definition of quasiconvexity, 1 1 ρ2 u(x)=u (x + ρv)+ (x − ρv) ≤ u(x + ρv) ∨ u(x − ρv) ≤ u(x) − γ + o(ρ2). 2 2 2 Dividing by ρ2 and sending ρ → 0 gives a contradiction. 

Next we prove a partial converse, namely, that when L0(u) > 0, u is quasiconvex. Theorem 2.7. Let Ω be convex. If u :Ω→ [−∞, ∞) is an upper semicontinuous subsolution of L0(u) > 0,thenu is quasiconvex. Proof. Suppose that u is not quasiconvex, i.e., there exist y, z ∈ Ωandw =(1− α)y + αz for some α ∈ (0, 1) such that u(y) ≤ u(z) u(x) for all x in [y1,z1] × ∂S.Then ϕ(x) >u(x) for all x ∈ ∂T, while ϕ(w)=u(w). Consequently, u − ϕ attains its 2 maximum over T at some point ξ interior to T and L0(Dϕ(ξ),D ϕ(ξ)) = 0. This contradicts u being a subsolution. 

To weaken the condition L0(u) > 0toL0(u) ≥ 0 we need an additional assump- tion. Theorem 2.8. Suppose that u :Ω→ [−∞, ∞) is an upper semicontinuous subso- lution of L0(u) ≥ 0 and u does not attain a local maximum. Then u is quasiconvex.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4236 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Proof. Suppose that u is not quasiconvex, i.e., there exist y, z ∈ Ω such that the maximum of u((1 − α)y + αz)overα ∈ [0, 1] is attained at α ∈ (0, 1). Set w = (1 − α)y + αz. Upper semicontinuity of u implies that there exist neighborhoods of y and z such that u(w) >u(x) for all x in these neighborhoods. Subject to an affine change of variables, we can assume that y =(−1, 0,...,0), z =(1, 0,...,0), w =(w1, 0,...,0) for some w1 ∈ (−1, 1), that the set X =[−1, 1] × [−2, 2] × ···×[−2, 2] is contained in Ω, and, furthermore, u(w) >u(−1, [−1, 1],...,[−1, 1]), u(w) >u(1, [−1, 1],...,[−1, 1]). For even m ∈ N,let 1 ϕ (x)= 2 − x2 (xm + xm + ···+ xm) . m m 1 2 3 n

We will show that for some large enough m,themaximumof u − ϕm is attained at 2 an interior point ξ of the domain of u,thatL0 Dϕm(ξ),D ϕm(ξ) < 0, and thus that a contradiction with u being a subsolution is obtained. The epigraphical limit ϕ∞ of ϕm,asm →∞,isgivenby 0ifxk ∈ [−1, 1],k=2, 3,...,n, ϕ∞(x)= ∞ otherwise.

Consequently, the epigraphical limit of ϕm +δX −u is ϕ∞ +δX −u. Here, δX is the indicator of the set X: δX (x)=0ifx ∈ X, δX (x)=∞ if x ∈ X.Themaximumof u − (ϕ∞ + δX ), equivalently, the maximum of u over [−1, 1] × [−1, 1] ×···×[−1, 1], is attained at some point t =(t1,t2,...,tn)wheret1 ∈ (−1, 1), and it is not the case that t2 = t3 = ··· = tn =0.Infact,|ti| =1foratleastonei ∈{2, 3,...,n}, because otherwise u would have a local maximum. Without loss of generality, suppose that t2 = 0. By [19, Theorem 7.33], the maximum of u − (ϕm + δX )is attained at some ξ =(ξ1,ξ2,...,ξn) with ξ1 ∈ (−1, 1), ξ2 =0,and ξk ∈ (−2, 2) for k =2, 3,...,n. In particular, the maximum is attained at an interior point of X. 2 We now argue that L0 Dϕm(ξ),D ϕm(ξ) < 0. We have Dϕ (ξ)= −2ξ (ξm + ···+ ξm), (2 − ξ2)mξm−1,...,(2 − ξ2)mξm−1 , m ⎛ 1 2 n 1 2 1 n ⎞ − m ··· m − m−1 2(ξ2 + + ξn ) 2ξ1mξ2 ... ⎜ m−1 m−2 ⎟ 2 −2ξ mξ (2 − ξ2)m(m − 1)ξ ... D ϕm(ξ)=⎝ 1 2 1 2 ⎠ ......

The equations v · Dϕm(ξ)=0,|v| = 1 can be achieved with v =(v1,v2, 0,...,0), v1 =0,and 2ξ (ξm + ···+ ξm) v = 1 2 n v . 2 − 2 m−1 1 (2 ξ1)mξ2 For such v, 4ξ2 2(m − 1)ξ2(ξm + ···+ ξm) v·D2ϕ (ξ)v = −2v2(ξm +···+ξm) 1+ 1 + 1 2 n . m 1 2 n − 2 − 2 m−2 (2 ξ1 ) m(2 ξ1 )ξ2

This quantity is negative when v1 =0, ξ1 ∈(−1, 1), ξ2 = 0, which is the case here 2 (recall that m is large and even). Hence L0 Dϕm(ξ),D ϕm(ξ) < 0. 

The assumption about the lack of maxima of u cannot be weakened to exclude only the global maxima as the following example shows.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4237

Example 2.9. Let u defined on R be an odd function given by u(x)=(x − 1)4 − 1 for x ≥ 0. It satisfies L0(u) ≥ 0; in fact L0(u(x)) = 0 for x =1andx = −1, and otherwise, L0(u(x)) = ∞,when|x| =1, Du(x) = 0. This function does not have a global maximum and is not quasiconvex. (It does have a strict local maximum though, at x = −1.)

3. A comparison principle for L0

In the previous section we have seen that a strict subsolution of L0(u) > 0 implies that u must be quasiconvex. In addition the Barles and Da Lio theorem (Theorem 1.2) and the fact that L0(u)=Δu − Δ∞u in two dimensions implies L0(u) − 1 has at least a weak comparison principle, leads one to suspect that there might be a comparison principle for equations of the form L0(u)=g(x)when g(x) ≥ C>0. Indeed this is the case, and we will prove it in this section. First we give a simple example showing that one cannot expect uniqueness for L0(u)=0. 2 Example 3.1. Let Ω = {x =(x1,x2) ||x| < 1}⊂R . Consider the function − 4 u(x1,x2)= x2. It is straightforward to verify that L0(u(x1,x2)) = 0 on Ω with its own boundary values. Notice that it is immediate that u is not quasiconvex; in fact, it is quasiconcave. − − 2 2 ∈ − 2 2 Now consider the function w(x1,x2)= (1 x1) . If (x1,x2) ∂Ω, 1 x1 = x2, and so w(x1,x2)=u(x1,x2)on∂Ω. One can easily verify that L0(w) = 0, and hence the problem L0(u) = 0 does not have a unique solution. Notice also that w(x1,x2) is quasiconvex. Indeed, we check that w(y1,y2) ≤ w(x1,x2) implies that Dw(x1,x2) · (y1 − x1,y2 − x2) ≤ 0. This is equivalent to 2 ≤ 2 ⇒ − 2 − ≤ y1 x1 = 4x1(1 x1)(y1 x1) 0,

which is a true statement for all (x1,x2), (y1,y2) ∈ Ω. Despite this nonuniqueness example the question arises about whether or not a quasiconvex solution is unique. In Section 6 we will prove that while L0(u) = 0 does not have a unique solution, it does have a unique quasiconvex solution. The next theorem will be used in our comparison theorem following, but it is of interest on its own. This theorem does not require that Ω be convex. n Theorem 3.2. Let Ω ⊂ R be a bounded domain. Let g, γ ∈ C(Ω) such that g ≥ Cg and γ ≥ Cγ on Ω, for some constants Cg > 0,Cγ > 0. Suppose that

(a) u ∈ C(Ω) ≥ Cu > 0 is a semiconvex subsolution of L0(u) − γu≥ g on Ω, (b) v ∈ C(Ω) ≥ Cv > 0 is a semiconcave supersolution of L0(v) − γv≤ g on Ω,

where Cu,Cv are constants. Then u ≤ v on ∂Ω=⇒ u ≤ v on Ω, i.e., min [v(x) − u(x)] ≥ 0=⇒ min[v(x) − u(x)] ≥ 0. x∈∂Ω x∈Ω Proof. The proof will proceed by contradiction. Suppose that

(3.1) min [v(x) − u(x)] ≥ 0 but min[v(x) − u(x)] = Cm < 0. x∈∂Ω x∈Ω Define the set M = {x ∈ Ω | (v − u)(x)=Cm}. It is well known ([9], [10]) that since v − u is semiconcave, both v and u are differentiable at each point of M where the minimum of v − u is achieved.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4238 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Three cases will be considered. The first case is:

(1) There exists x0 ∈ M such that Du(x0) =0. If Du(x) = 0 for all x ∈ M, and assuming without loss of generality that M is connected, let G = {x ∈ Ω:u(x) 0 be such that D u ≥−KuI (semiconvexity condition) and Kv > 0 2 be such that D v ≤ KvI (semiconcavity condition). δ Case 1. There exists x0 ∈ M with Du(x0) = 0. Pick δ>0 and let v (x)= 2 δ v(x)+δ|x−x0| .Thenx0 is the unique minimum of v −u. From the semiconvexity properties of u, v it is standard in viscosity theory (cf. Lemma 5.2 below) that there exists εδ > 0 such that for each 0 <ε<εδ, there exists xε ∈ Ω satisfying

(E1) |Dv(xε) − Du(xε)| <ε. 2 2 2 δ (E2) D v(xε)andD u(xε) both exist and D (v − u)(xε) ≥ 0. (E3) |Dv(xε) − Dv(x0)| <ε. (E4) |xε − x0| <ε.

Furthermore Dv(xε)andDu(xε) exist. These properties are referred to as dif- ferentiability at the maximum points and partial continuity of the gradients in Barles-Busca [4]. | | · 2 · T Choose zε satisfying zε =1,zε Dv(xε)=0andL0(v(xε)) = zε D v(xε) zε . Note that L0(v(xε)) − γ(xe)v(xε) ≤ g(xε). Since Du(x0) = 0 we may assume using n (E1)-(E4) that Du(xε) =0and Dv(xε) =0 . Choose a scalar λ and a vector w ∈ R satisfying w · Dv(xε)=0andDu(xε)=λDv(xε)+w. Then Du(xε) · Dv(xε) λ = 2 |Dv(xε)| and (E1) implies ε |Dv(xε) · (Dv(xε) − Du(xε))| <ε|Dv(xε)| =⇒|1 − λ| < . |Dv(xε)|

By writing w = Du(xε)−Dv(xε)+(1−λ)Dv(xε)weseethat|w| < 2ε. Now choose μ and p ∈ Rn such that

p · Du(xε)=0andzε = μDu(xε)+p. Then · · zε Du(xε) zε (λDv(xε)+w) ⇒|| 2ε μ = 2 = 2 = μ < 2 . |Du(xε)| |Du(xε)| |Du(xε)| | | | |2 2| |2 | |2 ≥ − 4ε2 Since zε =1, we have 1 = p +μ Du(xε) , which implies that p 1 2 . |Du(xε)| Semiconvexity of u implies 2 2 2 T ≥ − 4ε − 4ε zεD u(xε) zε 1 2 L0(u(xε)) 2 Ku. |Du(xε)| |Du(xε)|

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4239

This, combined with assumptions (a) and (b), yields ≤ 2 δ − T 0 zεD (v (xε) u(xε)) zε = z D2v(x ) zT − z D2u(x ) zT +2δ ε ε ε ε ε ε 4ε2 4ε2 ≤ 1 − (L (v(x )) − L (u(x ))) + (K + K )+2δ |Du(x )|2 0 ε 0 ε |Du(x )|2 v u ε ε 2 2 ≤ − 4ε − 4ε 1 2 γ(xε)(v(xε) u(xε)) + 2 (Kv + Ku)+2δ. |Du(xε)| |Du(xε)|

By first sending ε → 0andthenδ → 0, we get 0 ≤ γ(x0)Cm < 0. This shows that the first case leads to a contradiction.

Case 2. |Du(x)| =0forallx ∈ M and M ∩ G = ∅.Letx0 ∈ M ∩ G = ∅,v(x0) − u(x0)=Cm. Choose a ball of radius r0 > 0 centered at x0 with B(x0,r0) ⊂ Ω. Set n K0 = {x ∈ R : ∃ δ>0 x0 + δx ∈ G ∩ B(x0,r0)}, u(x + εx) − u(x ) u (x)= 0 0 , ε ε2 for ε>0. Using the semiconvexity of u we see that 1 1 (3.2) − K |x|2 ≤ u (x) ≤ K |x|2,D2u ≥−K I, 2 u ε 2 u ε u and

L0(uε(x)) = L0(u(x0 + εx)) ≥ γ(x0 + εx)u(x0 + εx)+g(x0 + εx) ≥ Cg > 0.

It follows that the family {uε}ε>0 is uniformly bounded and equicontinuous on every Rn { }∞ compact subset of . Therefore, some sequence uεi i=1 converges uniformly, on compact subsets, to a continuous function u0. In view of (3.2), it is clear that u0 must also satisfy (3.3) 1 1 − K |x|2 ≤ u (x) ≤ K |x|2,D2u ≥−K I, and L (u ) ≥ γu + g ≥ C > 0. 2 u 0 2 u 0 u 0 0 0 g

In particular, since L0(u0) > 0, u0 is quasiconvex by Theorem 2.7. In addition, u0 satisfies the properties

(3.4) u0(x) < 0=⇒ x ∈ K0 and x ∈ K0 =⇒ u0(x) ≤ 0, n and hence K0 R is an open, convex cone. Let

F0 = {x : u0(x) ≤ 0}

and note that F0 is a closed . In fact, F0 = K0. Indeed, note that K0 ⊂ F0, and so K0 ⊂ F0. Now suppose F0 \ K0 = ∅. This implies that int(F0 \ K0) = ∅. But then, since u0 =0onF0 \ K0, we could conclude that L0(u0)=0on int(F0 \ K0). This contradicts (3.3). Similarly,

(3.5) G0 := {x : u0(x) < 0} = K0.

Indeed, if this fails, then convexity of G0 implies that F0 \ G0 = ∅ and hence int(F0 \ G0) = ∅. As before that would imply that L0(u0) = 0 on the interior of F0 \ G0, which again is a contradiction of (3.3).

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4240 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

n  Next given x=(x1,...,xn)∈R we will write x=(x, x ), where x=(x1,...,xn−1)  and x = xn−1. Using a rotation if necessary, we may assume that  n  K0 = {x =(x, x ) ∈ R : x > Ψ(x)} where Ψ : Rn−1 → R is a convex, homogeneous of degree one function. Thus there   ∈ Rn Rn−1 → R is a point x0 =(x0,x0) and a smooth function Φ : such that   x0 =Φ(x0)and Φ(x) ≥ Ψ(x), |x0| =1,

Φ(x)=Ψ(x) if and only if x = x0,   d   ·   Φ(tx0) = DΨ(x0) x0 = x0, dt t=1  d2  Φ(tx0) =0. dt2 t=1 Indeed, it is enough to pick a point where DΨ exists and then rescale. Let Λ : Rn → R denote the signed distance function from Γ = {x : x =Φ(x)} and such  that Λ(x) < 0 if and only if x > Φ(x)=⇒ x ∈ K0. The fact that Φ is smooth implies that Λ is smooth near the graph of Φ. A computation now shows that   d  d2  Λ(tx0) =0and Λ(tx0) =0. dt t=1 dt2 t=1 That is, · 2 T (3.6) DΛ(x0) x0 =0andx0D Λ(x0) x0 =0. | | Now let E>ess supx∈B(x0,R) Du0(x) ,whereR>0 is fixed. Construct a family {τε}ε>0 of smooth and convex functions τε : R → R satisfying τε(0) = 0 and Es, if s>ε, τ (s)= ε εs, if s<−ε.

We will use the test functions ϕε := τε ◦ Λ. Note that u0 − ϕ0 has an isolated local maximum at x0. Consequently, for some ε0 > 0, whenever 0 <ε<ε0 we know that u0 − ϕε achieves a local maximum at xε and xε → x0 as ε → 0. We have  Dϕε(xε)=τε(Λ(xε))DΛ(xε)and 2  ⊗  2 D ϕε(xε)=τε (Λ(xε))DΛ(xε) DΛ(xε)+τε(Λ(xε))D Λ(xε).

Let pε denote the unique vector such that pε · DΛ(xε)=0andpε − x0 = λDΛ(xε), and observe that pε → x0 as ε → 0. Now we put all the pieces together in a computation: L0(u0(xε)) ≤ L0(ϕε(xε)) T pε 2 pε ≤ D ϕ(xε) |pε| |pε|  τε(Λ(xε)) 2 T = 2 pεD Λ(xε) pε |pε| ≤ E 2 T 2 pεD Λ(xε) pε . |pε|

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4241

Since L0(u0(xε)) ≥ Cg > 0, we have reached the inequality ≤ E 2 T 0 0sothatv satisfies the property (3.7) Given x,thereexistsyx such that |x − yx|≤μ and v(z) ≤ v(x), ∀|z − yx|≤μ.

Indeed, if v does not satisfy this property we may replace v by vμ(x)= min|y|≤μ v(x + y)andworkwithvμ instead. ∈ | − |≤ ≤ Let x0 M and y0 = yx0 such that (3.7) holds. Thus x0 y0 μ, and v(z) v(x0) for any |z − y0|≤μ. Since x0 ∈ M we know that v(z) − u(z) ≥ v(x0) − u(x0), i.e., v(z) − v(x0)+u(x0) ≥ u(z). Since M ∩ G = ∅, this implies that there is a δ>0 such that u(z) ≥ u(x0) for any z such that |z − x0|≤δ. The last two statements imply that u(z)=u(x0)if|z − x0|≤δ and |z − y0|≤μ. But then u is constant in B(x0,δ) ∩ B(y0,μ) = ∅, and hence L0(u) = 0 in that set. This contradicts assumption (a).  We may now present the main comparison theorem of this section. n Theorem 3.3. Let Ω ⊂ R be a bounded domain, h ∈ C(Ω) with ωh(·) as a uniform modulus of continuity. Let u : Ω → R be an upper semicontinuous subsolution of L0(u(x))−h(x) ≥ 0,x∈ Ω,andv : Ω → R be a lower semicontinuous supersolution of L0(v(x)) − h(x) ≤ 0,x∈ Ω. Assume that there is a constant Ch > 0 so that h(x) ≥ Ch for all x ∈ Ω. Then infx∈∂Ω(v(x) − u(x)) ≥ 0 implies that infx∈Ω(v(x) − u(x)) ≥ 0. Consequently, L0(u)=h>0 has at most one solution.

Remark 3.4. Observe that in view of the fact that L0 is the mean curvature operator in R2, Theorem 3.3 extends the Barles and Da Lio result of Theorem 1.2 in at least two ways. First, the inhomogeneous term h allows spatial dependence. Second, we do not require that the domain be star shaped.

Proof. We assume that infx∈∂Ω(v(x) − u(x)) ≥ 0 but that infx∈Ω(v(x) − u(x)) := Cm < 0. Because of the fact that L0 is translation invariant and h is bounded on Ω, we may assume without loss of generality that u ≥−k, v ≥−k for some k>0. Let ε>0anduε(x) denote the supremal convolution of u, i.e., 1 uε(x)=sup(u(y) − |x − y|2). y∈Ω 2ε

Also, let vε denote the infimal convolution of v given by

1 2 vε(x) = inf (u(y)+ |x − y| ). y∈Ω 2ε ε It is well known that u is semiconvex and vε is semiconcave for each ε>0. A calculation shows that uε is a subsolution of ε 1/2 L0[u ] ≥ h − ωh(ε),x∈ Ωε = {x ∈ Ω | dist(x, ∂Ω) >C0ε },

and vε is a supersolution of

L0[vε] ≤ h + ωh(ε),x∈ Ωε,

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4242 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

ε where C0 is a constant depending only on the bounds on u and v. In addition Du ∞ and Dvε are in L (Ωε)and

ε ε 1 inf (vε − u ) ≥ 0, but inf (vε − u ) ≤ Cm < 0. x∈∂Ωε x∈Ωε 2 ε The next step involves applying a modified Kruzhkov transform to u and vε. Set ε ε u vε u =ln(u + γ)andvε =ln(v + γ), i.e. u = e − γ and v = e − γ, for some γ>k.A calculation shows that

u is a subsolution of L0(u) − (h − ωh(ε))(u + γ) ≥ 0,x∈ Ωε and

v is a supersolution of L0(v) − (h + ωh(ε))(v + γ) ≤ 0,x∈ Ωε.  −  ≥  −  ≤  Furthermore, infx∈∂Ωε (u(x) v(x)) 0 and infx∈Ωε (u(x) v(x)) Cm < 0for  −k −k some constant Cm < 0. In addition, we know that u ≥ e − γ,v ≥ e − γ, u is semiconvex, and v is semiconcave. We have

L0(u) − h(x)u(x) ≥ (h − ωh(ε))(u + γ) − hu = γh − ωh(ε)(u + γ) and

L0(v) − h(x)v(x) ≤ (h + ωh(ε))(v + γ) − hv = γh + ωh(ε)(v + γ).  − 1  Now set v = v 2 Cm. We see that 1 (3.8) L (v) − hv ≤ γh + ω (ε)(v + γ)+ C h. 0 h 2 m

For ε>0 sufficiently small, since γ>k>0,h≥ Ch > 0, we have 1 ω (ε)((v + γ)+(u + γ)) ≤− C h, h 2 m and from (3.8) we get

L0(v) − h v ≤ γh − ωh(ε)(u + γ). 1  ∈ −  ≥ ∈ −  ≤ Also infx Ωε (v(x) u(x)) 0, while infx Ωε (v(x) u(x)) 2 Cm < 0. Now we use Theorem 3.2 and identify u with u and v with v, γ with h, and g = γh−ωh(ε)(u+γ), −  ≥  to conclude that infx∈Ωε (v u) 0, and that is a contradiction.

4. Robustly quasiconvex functions Recall that for each fixed α>0, 2 2 T Lα(u)=Lα(Du, D u)=min{y · D uy ||y| =1, |y · Du|≤α}.

Quasiconvex functions correspond to a subsolution of L0(u) ≥ 0. Considering Lα(u) ≥ 0 leads to a smaller class of functions when α>0. Definition 4.1. Let Ω ⊂ Rn be convex, α>0. A function u :Ω→ R is robustly quasiconvex, with parameter α>0, α−quasiconvex in abbreviated form, if, for every ξ ∈ Rn with |ξ|≤α, the function x → u(x)+ξ ·x is quasiconvex. The class of robustly quasiconvex functions with parameter α is denoted by Rα(Ω) or just Rα when the domain is fixed. The class of functions which are robustly quasiconvex R R for some α is (Ω) = α>0 α.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4243

Remark 4.2. If Ω is convex and u :Ω→ R is in Rα,thenu is quasiconvex but the reverse is false. Indeed, u : R → R given by u(x) = arctan x is quasiconvex, but not robustly for any α>0. However, if Ω ⊂ R is bounded, then u(x) = arctan x is robustly quasiconvex on Ω. Every is robustly quasiconvex (with any α), and if a function is robustly quasiconvex with parameter α, for arbitrarily large α, then the function is convex. On the other hand, the function u : R → R given by u(x)=2x if x<0andu(x)=x if x ≥ 0 is robustly quasiconvex, with parameter α = 1, but is not convex.

Remark 4.3. It is established in [17] that an equivalent definition is that u is robustly convex (stable convex in the terminology of [17]) if there is an α>0 such that for any |δ| <α,x0,x1 ∈ Ω, and 0 <λ<1, u(x ) − u(x ) u(x ) − u(x ) 1 0 ≥ δ =⇒ 1 λ ≥ δ, |x1 − x0| |xλ − x0|

where xλ =(1− λ)x0 + λx1.

That robustly quasiconvex (with parameter α>0) functions satisfy Lα(u) ≥ 0 follows from what was established in Theorem 2.6. More precisely:

Theorem 4.4. Let α>0.Ifu :Ω→ [−∞, ∞) is upper semicontinuous and robustly quasiconvex with parameter α>0,thenu is a subsolution of Lα(u) ≥ 0. Proof. If u is robustly quasiconvex with parameter α>0, then Theorem 2.6 implies n that, for every ξ ∈ R with |ξ|≤α, the function uξ given by uξ(x)=u(x)+ξ · x is a subsolution of L0(u) ≥ 0. Let x0 ∈ arg max(u − ϕ) for a smooth ϕ. Then, for n every ξ ∈ R with |ξ|≤α, x0 ∈ arg max(uξ − ϕξ), where ϕξ(x)=ϕ(x)+ξ · x. n Hence, for every ξ ∈ R with |ξ|≤α, Dϕξ(x0)=Dϕ(x0)+ξ and

2 T n (4.1) min{vD ϕξ(x0) v : v ∈ R , |v| =1, |v · Dϕ(x0)+v · ξ| =0}≥0.

n Let v ∈ R be such that |v| =1and|v · Dϕ(x0)|≤α.(IfDϕ(x0) = 0, any unit vector v orthogonal to ξ works.) For any such v there exists ξ ∈ Rn with |ξ|≤α n such that v · Dϕ(x0)+v · ξ =0. Since (4.1) holds for every ξ ∈ R with |ξ|≤α we have

2 T n (4.2) Lα(u)=min{vD ϕξ(x0) v : v ∈ R , |v| =1, |v · Dϕ(x0)|≤α} 2 T n ≥ min{vD ϕξ(x0) v : v ∈ R , |v| =1, |v · Dϕ(x0)+v · ξ| =0}≥0.

Consequently, u is a subsolution of Lα[u] ≥ 0. 

Showing that Lα(u) ≥ 0 implies robust α−quasiconvexity is similar to what was done in Theorem 2.8, in proving that L0(u) and some extra assumptions give quasiconvexity.

Theorem 4.5. If u :Ω→ [−∞, ∞) is an upper semicontinuous subsolution of Lα(u) ≥ 0,thenu is α-robustly quasiconvex. Proof. Suppose u is not robustly quasiconvex with parameter α,forsomeα <α. Then there exists ξ ∈ Rn with |ξ|≤α, y, z ∈ Ωandw =(1− λ)y + λz for some λ ∈ (0, 1) such that uξ(y) ≤ uξ(z)

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4244 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Without loss of generality suppose that y =(y1, 0,...,0), z =(z1, 0,...,0), with n−1 y1

Our next goal is to show that a quasiconvex function may be approximated by robustly quasiconvex functions. In other words, we will eventually show that a quasiconvex function can be expressed as the supremum of robustly quasiconvex functions. This is accomplished starting with the next example, which will be used in the construction of an approximating robustly quasiconvex function. a Example 4.6. Let f : R → R be given by f(x)=−∞ if x ≤ 0andf(x)=− x if x>0, where a>0. Then f is quasiconvex and is robustly quasiconvex when restricted to a bounded subset of R.Letψ : Rn → R be given by

  1 n 2 2 2 − ≥ ψ(x)= (x1 + r) + xi r, where r 0. i=2 Consider the function w : Rn → R given by w(x)=f(ψ(x)). The graph of w is obtained by rotating the graph of f about the point (−r, 0, 0,...,0). Let R>0. a It turns out that u is robustly quasiconvex, with parameter α =  on R3(3R +2r) the ball of radius R. To show this, we consider the operator Lα(w)atpointsx =(x1, 0, 0,...,0), with 0

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4245

bound and the constraint |v| =1yields     n · 2 a − 2 2 1 2 a 1 − 2 2 1 v D w(x)v = 2 v1 + vi = 2 v1 + x1 x1 x1 + r x1 x1 + r x1 x1 + r  i=2  2 4 ≥ a 1 − α x1 2 1 2 2 + . x1 x1 + r a x1 x1 + r Thus, to have v · D2w(x)v ≥ 0, it is sufficient that 2 4 1 ≥ α x1 2 1 2 + , x1 + r a x1 x1 + r a2 which, in turn, holds because x z· v,forsomez ∈ Ω, v ∈ Rn, α ∈ R. Proof. Pick x ∈ Ω such that u(x) > −∞. Suppose first that x is a local minimum of u, in the sense that allows for u(x)=∞: a sufficiently small neighborhood of x does not contain points x with u(x) v· z.Thenu ≥ φ and u(x)=φ(x). If u(x)=∞, consider a sequence of functions φi, i =1, 2,...,∞, given by φi(x)=−∞ if v · x ≤ v · z and φi(x)=i if v · x>v· z.Thenu ≥ φi for all i =1, 2,... and u(x)=supφi(x). Now suppose that x is not a local minimum of u. Then, there exists a sequence of points xi ∈ Ω, xi → x, with u(xi) u(x). This last property relies on lower semicontinuity of u. For every i =1, 2,..., the convex set Si = {x ∈ Ω | u(x) ≤ u(xi)} is disjoint from a sufficiently small neighborhood ∈ Rn · · of x, and hence there exists vi such that sups∈Si vi svi · xi.Inparticular,φi(x)=u(xi). Then u ≥ φi for all i =1, 2,... and u(x)=supφi(x).  Lemma 4.8. Let Ω ⊂ Rn be bounded, convex, and open. Let u :Ω→ [−∞, ∞] be a lower semicontinuous and quasiconvex function. Then u is the supremum of all robustly quasiconvex lower semicontinuous functions on Ω whichareboundedabove by u. Proof. The function u is the supremum of functions φ, as in Lemma 4.7, that are bounded above by u. Any such function φ, when restricted to a bounded set, is the supremum of functions w, as in Example 4.6. This requires considering arbitrarily small a and arbitrarily large r in the construction of w.Consequently,u is the supremum of such functions w, and Example 4.6 showed that they are robustly quasiconvex.  Remark 4.9. Lemma 4.8 will be used to prove in Theorem 5.5 that a quasiconvex solution of L0(u) = 0 is unique.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4246 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

5. Comparison principles for Lα

The following theorem gives us a comparison principle for the operator Lα. This theorem does not require that Ω be convex. Furthermore, in contrast to the equa- tion L0(u)=g, in the equation Lα(u)=g using the operator Lα,α > 0, the theorem only assumes g ≥ 0, and not that g>0. Theorem 5.1. Let Ω ⊂ Rn be an open set and g :Ω→ R denote a nonnegative continuous function. Let u :Ω→ R be a bounded upper semicontinuous subsolution Lα[u] ≥ g(x) and v :Ω→ R a bounded lower semicontinuous supersolution of Lα[v] ≤ g(x). Suppose also that u ≤ v on ∂Ω. Then u ≤ v in Ω. | | Proof. Let R =maxx∈Ω x and any δ>0 such that 2δR < α. Define uδ(x)=u(x)+δ(|x|2 − R2). Since |x|2 − R2 ≤ 0 we know that uδ(x) ≤ u(x). Now we set β = α − 2δR > 0and calculate δ 2 T δ Lβ[u ]=min{y(D u +2δIn×n)y ||y| =1, |y · Du |≤β} ≥ min{yD2uyT +2δ ||y| =1, |y · Du|−2δ|y · x|≤β} ≥ min{yD2uyT +2δ ||y| =1, |y · Du|−2δR ≤ β = α − 2δR}

= Lα[u]+2δ ≥ g(x)+2δ.

δ δ δ We conclude that u is a subsolution of Lβ[u ]=Lα−2δR[u ] ≥ g(x)+2δ. − 1 1 α Let ϑ>1 and fix δ>0 such that δ<α(1 ϑ ) 2R < 2R . Consider the function δ wϑ = ϑu . One readily computes that this function is a subsolution of Lαβ[wϑ] ≥ ϑ (g(x)+2δ) ≥ g(x)+2δ. γ Let γ>0andw (x) denote the supremal convolution of wϑ, i.e., ϑ γ − 1 | − |2 wϑ(x)=sup wϑ(y) x y . y∈Ω 2γ γ To make the notation easier we will denote wϑ as simply wϑ. Also, let vγ denote the infimal convolution of v given by 1 2 vγ (x) = inf v(y)+ |x − y| . y∈Ω 2γ

Define g (x)=min ∈ g(y)andg (x)=max ∈ g(y). By a straightforward γ y Bγ (x) γ y Bγ (x) calculation the sup convolution of wϑ is a semiconvex subsolution of L [w ]≥ϑ min (g(y)+2 δ)=ϑ(g (x)+2 δ),x∈Ω ={x∈Ω | dist(x, ∂Ω)≤C γ1/2}, βϑ ϑ γ γ 0 y∈Bγ (x)

and the inf convolution vγ , denoted simply as v, is a semiconcave supersolution of ≤ ∈ Lα[v] gγ (x),x Ωγ .

We know that wϑ is semiconvex, v is semiconcave and that wϑ − v is semiconvex. Now we need the following lemma.

Lemma 5.2. Let W be semiconvex and V be semiconcave. Suppose that x0 ∈ Ω is a maximum point of W − V such that

Δ:=W (x0) − V (x0) − max(W (y) − V (y)) > 0. y∈∂Ω

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4247

2+ Then there is a sequence of points xk → x0,and(pk,Ak) ∈ D W (xk),and  2− (pk, Ak) ∈ D V (xk) and a constant K>0, which may depend on the semi- convexity constant of W − V , so that

•|pk − pk|→0,k →∞, • −  ≤−K Ak Ak k In×n, where D2+W (x)={(p, A) ∈ Rn × S(n)| W (x + y) − W (x) − p · y − 1 yA · yT 2 ≤ } lim sup 2 0 , y→0,x+y∈Ω |y| D2−V (x)={(p, A) ∈ Rn × S(n)| V (x + y) − V (x) − p · y − 1 yA · yT lim inf 2 ≥ 0}. y→0,x+y∈Ω |y|2 First we complete the proof of the theorem and then return to the lemma. For each fixed ϑ>1, let x ∈ int(Ω)beapointatwhichwϑ − v achieves a strict − − positive maximum and wϑ(x) v(x) > supy∈∂Ω(wϑ(y) v(y)). Using Lemma 5.2 and the definition of viscosity solution, we have for each xk → x and (pk,Ak) ∈ 2+ D wϑ(xk), L [p ,A ]=min{ηA ηT ||η| =1, |p · η|≤βϑ}≥ϑ(g (x )+2δ). βϑ k k k k γ k  2− For (pk, Ak) ∈ D v(xk),   {  T || | | · |≤ }≤ Lα[pk, Ak]=min ηAkη η =1, pk η α gγ (xk).  T Let η0 ∈ arg min{ηAkη ||η| =1, |pk · η|≤α}. Then, |η0| =1, |pk · η0|≤α, and ≥  T {  T || | | · |≤ } gγ (xk) η0Akη0 =min ηAkη η =1, pk η α . Next,

|pk · η0| = |(pk − pk) · η0 + pk · η0|≤|pk − pk| + |pk · η0|≤α(ϑ − 1) − 2δRϑ+ α = βϑ, | −  |→ (1−1/ϑ) for all k sufficiently large (since pk pk 0) and δ<α 2R .Consequently,η0 is also in the constraint set for Lβϑ,andso, η A ηT ≥ min{ηA ηT ||η| =1, |p · η|≤βϑ} = L [p ,A ] ≥ ϑ(g (x )+2δ). 0 k 0 k k βϑ k k γ k −  ≤−K Using the fact that Ak Ak k In×n,wehave

K T T K  T  η (A + I × )η = η A η + ≤ η A η = L [p , A ] ≤ g (x ), 0 k k n n 0 0 k 0 k 0 k 0 α k k γ k and so, K K ϑ(g (x )+2δ)+ ≤ η A ηT + ≤ g (x ), γ k k 0 k 0 k γ k or, finally, K (5.1) ϑ(g (x )+2δ) − g (x )+ ≤ 0. γ k γ k k In general the constant of semiconvexity K = K(γ) → 0asγ → 0. Sending γ → 0 in (5.1), since g → g, g → g, we get 0 < (ϑ − 1)g(x )+2ϑδ ≤ 0, a γ γ k contradiction. Hence wϑ − v cannot achieve a strict positive max in the interior of Ω, and sending ϑ → 1, thesameistrueofu − v. 

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4248 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Proof of Lemma 5.2. The proof of the lemma is sketched for the convenience of the reader and to make the paper self contained, but this is now a standard result (see [8], [9], [10]). Let w :Ω→ R be semiconvex with semiconvexity constant M>0. Recall that Alexandrov’s lemma implies that for a semiconvex function w there is a zero measure set so that off this set (p, X) ∈ Rn × S(n)existssuchthat

1 T 2 w(y)=w(x)+p · (y − x)+ (y − x)X(y − x) + o(|y − x| ), and X ≥−2MI × . 2 n n

Let x0 ∈ Ω and let Br(x0) ⊂ Ω with

Δ:=w(x0) − max w(y) > 0. y∈∂Br (x0) Given any δ>0set   { ∈  ≤| |≤ ∃ ∈ Rn ∈ − · } Eδ := x Br(x0) δ p 2δ, p ,x arg maxy∈Br (x0)(w(y) p y) .

2 2 For each x ∈ Eδ if D w(x) exists, then |Dw(x)|≤δ, D w(x) ≤ 0, and w(y) ≤ w(x)+Dw(x) · (y − x), ∀ y ∈ Br(x0). We claim that the Lebesgue measure of Eδ is positive. More precisely, for all 0 <δ<Δ/r, ω (2n − 1)δn Δ (5.2) |E |≥ n , ∀ 0 <δ< , where ω = |B (0)|. δ (2M)n r n 1 To simplify the proof, assume that w ∈ C2. Then (5.2) comes from the inequality derived from the coarea formula  n n 2 (5.3) ωn((2δ) − δ )=|B2δ(0) \ Bδ(0)| = |Dw(Eδ)|≤ |det(D w(x))| dx. Eδ 2 The estimate (5.2) follows from the fact that for all x ∈ Eδ, −2MIn×n ≤ D w(x) ≤ 0, which implies |det(D2w(x))|≤(2M)n. Refer to [9] or to Fleming and Soner [10] for an exposition of these facts. From (5.3) we see that there is a set Aδ ⊂ Eδ with |Aδ| > 0sothat n 2 n−1 0

where Cn denotes a generic positive constant depending only on n, λi(x),i = 2 1, 2,...,n, are the eigenvalues of D w(x), and λmax(x) ≤ 0 is the largest eigenvalue. Consequently, n 2 n (5.4) λmax(x) ≤−Cnδ , implying that D w(x)+Cnδ In×n ≤ 0, ∀ x ∈ Aδ.

For any k =1, 2,...,thereexistsxk ∈ A1/k. By the definition of A1/k using 2 n (5.4) we have 1/k ≤|Dw(xk)|≤2/k, D w(xk)+Cn(1/k) In×n ≤ 0, and xk ∈ − · { } arg maxy∈Br (x0)(w(y) Dw(xk) y). Consequently a subsequence, still denoted xk , converges to a point of maximum of w on Br(x0), which may be assumed to be x0 because we may make x0 a unique strict maximum of w by approximation if necessary. We conclude that {xk} satisfies xk → x0, and Lemma 5.2 follows immediately. Finally, if w is assumed merely semiconvex and not C2, then we replace w by a ∞ C mollifier wε. The proof is then carried out with the mollified wε → w as ε → 0, uniformly. See, for example, Caffarelli and Cabre [8] or Fleming and Soner [10] for details. 

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4249

Next, we ask the question in what sense does an α-quasiconvex solution of Lα(uα) = 0 approximate a given quasiconvex function u. The next theorem be- gins to answer this question in that it shows that the limit as α → 0+ of solutions of Lα(uα)=0, which are robustly quasiconvex with parameter α, is indeed quasi- convex and a viscosity solution of L0(u)=0. Theorem 5.3. Let h : Ω → R be continuous, and let Ω ⊂ Rn be open, bounded, and convex. Let uα denote the unique (continuous) viscosity solution of Lα(uα)= ∈ ∈ 0,x Ω,uα = h, x ∂Ω. Set v =supα>0 uα. Then v is a viscosity solution of

(5.5) L0(v)=0,x∈ Ω,v= h, x ∈ ∂Ω. Furthermore, v is quasiconvex.

Proof. First uα is continuous because h is continuous and, from Theorem 5.1, Lα ∗ has a comparison principle so that (uα) ≤ (uα)∗. Now let u0 be any viscosity so- lution of (5.5) (which exists by Perron’s method). For any α>0,Lα(u0) ≤ L0(u0) = 0, and hence u0 is a supersolution of Lα(u0) ≤ 0. By comparison, we then have u0 ≥ uα, and hence {uα}α is uniformly bounded above by u0. Consequently, ≤ v =supα>0 uα u0, and note that v is at least lower semicontinuous with v = h on ∂Ω. ≤ ≥ Furthermore, if 0 <α1 α2, we also conclude by comparison that uα1 uα2 so that α → uα is monotone increasing as α  0. Because of the monotone convergence and Lα L0, we know from standard results in viscosity solutions that v is a viscosity solution of L0(v)=0. In fact, it is the Perron minimal viscosity supersolution. Since each uα ∈Rα is robustly quasiconvex, and hence quasiconvex automati- cally, and since the supremum of quasiconvex functions is quasiconvex, v is quasi- convex. 

Remark 5.4. In a sense, v is the correct quasiconvex function which solves L0(v)=0 as we will see next. In fact, we will see that there is only one quasiconvex viscosity solution of L0(u) = 0, and hence it must be v. Theorem 5.5. Let Ω be a convex bounded domain in Rn. Let h : ∂Ω → R be a 2 continuous function. Let v : Ω → R be a viscosity solution of L0(Dv, D v)=0 in Ω. Assume that v is lower semicontinuous and quasiconvex with v = h on ∂Ω. Then v is unique.

Proof. Let α>0 and define uα as the unique (by Theorem 5.1) solution of 2 (5.6) Lα(Duα,D uα)=0,x∈ Ω,uα = h = v, x ∈ ∂Ω.

We will show that any quasiconvex solution of L0(v)=0mustsatisfyv =supα>0 uα on Ω, and this immediately implies the uniqueness. First, since Lα(v) ≤ L0(v)=0 and uα = v on ∂Ω, by the comparison Theorem 5.1, we know that v ≥ uα for any ≥ → α>0, and hence v supα>0 uα. Note that in fact uα v as α 0+. We claim that (5.7) uα(x)=sup{q | q ∈Rα(Ω),q = v on ∂Ω} =sup{q | q ∈Rα(Ω),q ≤ v, x ∈ Ω}.

The first equality follows from the fact that q ∈Rα if and only if Lα(q) ≥ 0(by Theorem 2.6 and Theorem 4.4) and from Perron’s method, which tells us that uα is the maximal subsolution of Lα(uα) ≥ 0. For the second equality in (5.7), since

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4250 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

any eligible q in the first supremum satisfies Lα(q) ≥ 0andq = v, x ∈ ∂Ω, we have q ≤ uα ≤ v. Consequently, it must be an eligible q in the second supremum and we have verified (5.7). It follows from Lemma 4.8, (5.7) and the assumption that v is quasiconvex that

(5.8) v(x)=sup{q(x) | v ≥ q, q ∈Rα(Ω) ∃ α>0} =supuα(x).  α>0 The next corollary follows immediately from the preceding proof. Notice that it only requires the subsolution to be quasiconvex.

Corollary 5.6. Let u be a supersolution L0(u) ≤ 0 and v asubsolutionL0(v) ≥ 0 in Ω. If v is quasiconvex and u ≥ v on ∂Ω, then u ≥ v on Ω.

Proof. Since v is assumed quasiconvex, we know that v =supα>0 vα,wherefor each α>0 vα is the unique solution of Lα(vα)=0inΩ, with vα = v on ∂Ω. Since 0 ≥ L0(u) ≥ Lα(u)foreachα>0andu ≥ v on ∂Ω, we have u ≥ vα for each ≥  α>0. Hence u v =supα vα on Ω.

6. Quasiconvex envelopes and obstacle problems In this section we take Ω ⊂ Rn to be a bounded, convex domain. Given a function g : Ω → R which is lower semicontinuous (and bounded from below if Ω is not bounded), for many purposes it is necessary to calculate the quasiconvex envelope of the function. This is defined to be the greatest quasiconvex minorant of g. If g is not quasiconvex, then the function g## : Ω → R defined by g##(x)= sup p · x − g#(γ,p) ∧γ, where g#(γ,p)= sup (p·x−g(x))∧γ. p∈Rn,γ∈R {x|g(x)≤γ} is the greatest quasiconvex minorant of g. That is, g##(x)=sup{q(x) | q : Ω → R,q ≤ g, q is quasiconvex on Ω}. Refer to Penot [15] and [16] for a survey of quasiconvex and applications; however, this form of the conjugates was introduced in [5]. The second order condition introduced in the previous section for quasiconvexity gives us a way to calculate the quasiconvex envelope of g as a solution of an obstacle problem. In particular, we consider the problem in the viscosity sense: 2 (6.1) min{g − u, L0(Du, D u)} =0,x∈ Ω,u= g on ∂Ω. 2 Formally, u ≤ g and L0(Du, D u) ≥ 0 everywhere, and whenever strict inequal- ity holds in one of the terms, then equality holds in the other. In the viscosity sense, the boundary condition means 2 max{min{g − u, L0(Du, D u)},g− u}≥0,x∈ ∂Ω (subsolution on ∂Ω) and 2 min{min{g − u, L0(Du, D u)},g− u}≤0,x∈ ∂Ω (supersolution on ∂Ω). We begin by proving that there is only one viscosity solution of the obstacle problem for Lα,

(6.2) min{g − u, Lα(u) − h(x)} =0,x∈ Ω,u(x)=g(x),x∈ ∂Ω. For simplicity we will assume that g : Ω → R is a continuous function.

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use QUASICONVEX FUNCTIONS 4251

Theorem 6.1. Assume that h : Ω → R is a nonnegative continuous function. Let u : Ω → R be a continuous subsolution and v : Ω → R a continuous supersolution of (6.2).Thenu ≤ v on ∂Ω implies that u ≤ v on Ω.

Proof. Since u is a subsolution min{g − u, Lα(u) − h(x)}≥0,x∈ Ω, and since v is a supersolution we have min{g − v, Lα(v) − h(x)}≤0,x∈ Ω. Define the open set

Ωg = {x ∈ Ω | v(x)

On Ωg we have Lα(v) − h(x) ≤ 0. Since Lα(u) − h(x) ≥ 0 on Ω this also holds on Ωg ⊂ Ω. Furthermore, on ∂Ωg ∩ Ωwehavev = g. On ∂Ωg ∩ ∂Ω one verifies that v = g as in [11, section 1.5], using the convexity of ∂Ω. Then, since g ≥ u on Ω, we have u ≤ g = v on ∂Ωg. In summary, we have shown that v is a supersolution of Lα(v) − h ≤ 0andu is a subsolution of Lα(u) − h ≥ 0onΩg and u ≤ v on ∂Ωg. By the comparison principle for the Lα − h operator we conclude that u ≤ v on all of Ωg. On the other hand, on Ω \ Ωg we have v ≥ g ≥ u, and hence u ≤ v on all of Ω. 

Next we can prove that the unique quasiconvex viscosity solution of the obstacle problem for L0 with obstacle g must be the quasiconvex envelope of g.

Corollary 6.2. There is a unique quasiconvex viscosity solution of min{g−u, L0(u)} =0,x∈ Ω,u= g, x ∈ ∂Ω, and it is given by u = g##. Proof. The proof that the obstacle problem has a unique quasiconvex solution fol- lows just as in Theorem 6.1. Now since g## is the greatest quasiconvex minorant ## ## ## ## of g,wehaveg ≥ g and L0(g ) ≥ 0. Hence min{g − g ,L0(g )}≥0. Since u is a solution of this problem and g## is a subsolution, we conclude u ≥ g##. But g ≥ u ≥ g## and u quasiconvex implies that u = g##. 

The next proposition shows that g## arises naturally as a limit of convex mino- rants. We will assume that g : Ω → R is continuous in order to avoid technicalities. Proposition 6.3. Let Ω ⊂ Rn be an open, bounded, convex set. We have

p ∗∗ 1 ## lim [(g(x) ) ] p = g (x), p→∞ where, for any function f, f∗∗ is the largest convex minorant of f. Also, g## is the ## ## ## quasiconvex solution of min{g − g ,L0(g )} =0in Ω with g = g on ∂Ω. Proof. By standard results in convex functions [20], the greatest convex minorant of a given function f is   n+1 n+1 n+1 ∗∗ (f(x)) =min λif(xi) |{λi,xi},x= λixi,λi ≥ 0, λi =1 . i=1 i=1 i=1 Hence, for each p ≥ 1, ⎧ ⎫   1 ⎨ n+1 p n+1 n+1 ⎬ ∗∗ 1 p p p |{ } ≥ [(g(x) ) ] =min⎩ λig(xi) λi,xi ,x= λixi,λi 0, λi =1⎭ . i=1 i=1 i=1

License or copyright restrictions may apply to redistribution; see https://www.ams.org/journal-terms-of-use 4252 E. N. BARRON, R. GOEBEL, AND R. R. JENSEN

Sending p →∞we see that   n+1 n+1 p ∗∗ 1 lim [(g(x) ) ] p =min max g(xi) |{λi,xi},x= λixi,λi ≥ 0, λi =1 ψ→∞ 1≤i≤n+1 i=1 i=1 = g##(x). The last equality is proved in [5]. Now, from [13] we know that the convex envelope of gp, namely w =(gp)∗∗ is the viscosity solution of min{gp − w, min v · D2wvT } =0,x∈ Ω,w= g, x ∈ ∂Ω. |v|=1 p Set w = up and calculate that u satisfies (6.3) − 2 T (p 1) 2 min g−up, min v · D up v + |v · Dup| =0,x∈ Ω,up = g, x ∈ ∂Ω. |v|=1 up ## Since g = limp→∞ up, using straightforward viscosity theory we now show using (6.3) that min g − g##, min v · D2g## vT =0,x∈ Ω,g## = g, x ∈ ∂Ω. {|v|=1,v·Dg##=0} 

7. The control problem representation

In this section we indicate how the operator L_0 arises in optimal stochastic control in which the control appears in the diffusion term and the payoff involves minimizing the essential supremum of a function of the trajectory rather than the usual expected value of the function. This is a worst case analysis rather than an expected value analysis. Problems of this type were introduced by Soner [21] and Soner and Touzi [22] (see the references there) and were used by these authors, as well as by Buckdahn et al. [7], to represent the solution of the equation for motion by mean curvature, u_t = Δu − Δ_∞u, as the value function for one of these control problems. We will show that L_0(u) = 0 also has such a representation. Refer as well to the report by Popier [18] for a similar approach to a general problem and more details on using L^p approximations. Note, however, that those representation results apply to the parabolic problem. Consider the controlled stochastic differential equation
$$ (7.1)\qquad d\xi(t) = \sqrt{2}\,\eta(t)\, dW(t), \quad 0 < t < \tau, \qquad \xi(0) = x, $$

where τ = τ_x is the exit time of ξ(t) from Ω. For simplicity we will assume that ∂Ω is C^2. We denote the underlying probability space by Σ. The control functions are η : [0, τ] → S_1(0), where S_1(0) = {v ∈ R^n | |v| = 1}, assumed to be adapted to the σ-algebra generated by ξ(·). Denote this class by U. The controls are chosen so as to minimize the essential supremum (over paths) of some given continuous and bounded function g : Ω → R. In particular, we set the value function

$$ u(x) := \inf_{\eta\in U}\ \operatorname*{ess\,sup}_{\omega\in\Sigma}\ g(\xi(\tau_x)). $$
Technically, the infimum is also taken over any complete stochastic basis (Σ, F, P, {F_s, 0 ≤ s}) endowed with an n-dimensional standard F_s-Brownian motion


W = (W(s), s ≥ 0). Doing so is needed to guarantee that an optimal control actually exists. This problem is approximated by using the fact that for f ∈ L^∞(Σ),
$$ \lim_{k\to\infty} \frac{1}{k}\,\ln \int_\Sigma e^{k f(\omega)}\, dP(\omega) = \operatorname*{ess\,sup}_{\omega\in\Sigma} f(\omega). $$
Set
$$ W_k(x) = \inf_{\eta\in U}\ \frac{1}{k}\,\ln E\, e^{k\, g(\xi(\tau_x))}. $$
By standard stochastic control theory and the comparison result of Barles and Busca [4], the function W_k is the unique (possibly discontinuous) viscosity solution of the problem
$$ (7.2)\qquad \min_{\{v\,:\,|v|=1\}} \bigl\{ v\, D^2 W_k\, v^T + k\,|v\cdot DW_k|^2 \bigr\} = 0,\quad x\in\Omega, \qquad W_k(x) = g(x),\quad x\in\partial\Omega. $$
Recall that the boundary condition is taken in the viscosity sense. Define the set
$$ \Gamma(p) := \{\, v\in\mathbb{R}^n : |v| = 1,\ v\cdot p = 0 \,\} \subset S_1(0), \qquad p\in\mathbb{R}^n. $$
It can be shown in a straightforward way (see, for instance, [7], [18] for the similar proof for parabolic problems) that

$$ \lim_{k\to\infty} W_k(x) = u(x) \quad \text{for all } x\in\Omega, $$

and in fact W_k ↗ u. We will prove in this section that u is a viscosity solution of L_0(Du, D^2u) = 0.
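Before the proof, here is a small simulation illustrating the approximation W_k ↗ u. It is only a sketch under simplifying assumptions that are ours, not the paper's: we take n = 1 and Ω = (−1, 1), where the unit "direction" control is ±1 and does not affect the law of the path, so the infimum over controls is trivial; g is a hypothetical boundary payoff; and the essential supremum over paths is proxied by the maximum over the simulated sample. The quantity (1/k) ln E[e^{k g(ξ(τ_x))}] then increases toward ess sup g(ξ(τ_x)) as k grows, which is the approximation behind W_k.

```python
import numpy as np

rng = np.random.default_rng(0)

def g(y):
    # hypothetical boundary payoff: worth 1.0 at the right endpoint, 0.3 at the left
    return np.where(y > 0.0, 1.0, 0.3)

def exit_values(x0, n_paths=20000, dt=1.0e-3):
    """Euler-Maruyama for d(xi) = sqrt(2) dW on (-1, 1); return g at the exit point."""
    x = np.full(n_paths, x0, dtype=float)
    alive = np.ones(n_paths, dtype=bool)
    out = np.empty(n_paths)
    while alive.any():
        x[alive] += np.sqrt(2.0 * dt) * rng.standard_normal(alive.sum())
        exited = alive & (np.abs(x) >= 1.0)
        out[exited] = g(x[exited])
        alive &= ~exited
    return out

vals = exit_values(x0=-0.5)
ess_sup = vals.max()                     # proxy for the essential supremum over paths
for k in (1, 5, 20, 100):
    m = vals.max()                       # max-shift keeps exp(k * vals) from overflowing
    W_k = m + np.log(np.mean(np.exp(k * (vals - m)))) / k
    print(f"k = {k:3d}   W_k ~ {W_k:.4f}   ess sup ~ {ess_sup:.4f}")
```

In higher dimensions the direction η matters and the infimum over adapted controls is genuinely nontrivial; the sketch is meant only to illustrate the exponential approximation of the essential supremum.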

Theorem 7.1. The function u(x) = inf_{η∈U} ess sup_{ω∈Σ} g(ξ(τ_x)) is a viscosity solution of L_0(Du, D^2u) = 0, with u = g on ∂Ω.

Proof. Let x ∈ Ω be a local strict minimum of the function u_* − ϕ with ϕ ∈ C^2. It is not hard to show that there is a sequence {x_k} ⊂ Ω such that x_k → x, W_k(x_k) → u(x), and W_k − ϕ achieves a minimum at x_k. We know that W_k is a supersolution of (7.2), and so
$$ (7.3)\qquad \min_{v\in S_1(0)} \bigl\{ v\, D^2\varphi(x_k)\, v^T + k\,|v\cdot D\varphi(x_k)|^2 \bigr\} \le 0, \qquad k = 1, 2, \dots. $$

For each k, let v_k be a point achieving the minimum in (7.3). Thus,
$$ (7.4)\qquad v_k\, D^2\varphi(x_k)\, v_k^T + k\,|v_k\cdot D\varphi(x_k)|^2 \le 0, \qquad k = 1, 2, \dots. $$

There is a subsequence, still denoted {v_k}, and a point v ∈ S_1(0) such that v_k → v. From (7.4) it is clear that we must have v · Dϕ(x) = 0, and hence v ∈ Γ(Dϕ(x)), since otherwise (7.4) could not hold for large enough k. Letting k → ∞ in (7.4) we conclude that
$$ (7.5)\qquad v\, D^2\varphi(x)\, v^T \le 0, \quad \text{which implies} \quad L_0(D\varphi(x), D^2\varphi(x)) \le 0, $$
and hence u is a supersolution of L_0(Du, D^2u) ≤ 0.

Now let w be any supersolution of L_0(Dw, D^2w) ≤ 0 in Ω. We will prove that w ≥ W_k for each integer k = 1, 2, ..., which will give us the fact that w ≥ u, and then u will be the smallest supersolution. Since w is a supersolution of L_0(Dw, D^2w) ≤ 0 we know that
$$ \min_{v\in S_1(0)} \bigl\{ v\, D^2 w\, v^T + k\,|v\cdot Dw|^2 \bigr\} \ \le\ \min_{v\in\Gamma(Dw)} \bigl\{ v\, D^2 w\, v^T + k\,|v\cdot Dw|^2 \bigr\} = L_0(Dw, D^2w) \le 0, $$


for any k = 1, 2, .... This says that w is a supersolution of (7.3) for any k. Since W_k is the solution of (7.2), we have w ≥ W_k for all k and hence that w ≥ u. Finally, the monotone convergence of W_k to u implies that u is a solution, in fact the Perron solution, of L_0(u) = 0. □

Remark 7.2. Since u is a solution, we use Theorem 5.3 to obtain that if Ω is convex and u is quasiconvex, then Theorem 5.5 allows us to conclude that u is the unique quasiconvex viscosity solution of L_0(u) = 0 in Ω with u = g on ∂Ω.

In a similar way we now consider the following stochastic control problem with a running cost, but we will have a stronger conclusion. The controlled stochastic equation is still given by (7.1), but now we have the following value function:
$$ (7.6)\qquad u(x) := \inf_{\eta\in U}\ \operatorname*{ess\,sup}_{\omega\in\Sigma} \Bigl[ g(\xi(\tau_x)) - \int_0^{\tau_x} h(\xi(s))\, ds \Bigr]. $$

It is assumed that h : Ω → R satisfies h ∈ C(Ω), h ≥ C_h > 0. Using an argument just as above, we conclude that u is a viscosity solution of
$$ (7.7)\qquad L_0(Du, D^2u) - h(x) = 0,\quad x\in\Omega, \qquad u(x) = g(x),\quad x\in\partial\Omega. $$
However, we have seen that (7.7) has a unique viscosity solution, and so it must be u. Furthermore, we also conclude that u must be continuous. We have shown the following.

Theorem 7.3. The function u in (7.6) is the unique continuous viscosity solution of L_0(Du, D^2u) = h, with u = g on ∂Ω.

Remark 7.4. It is also possible to represent the solution of the obstacle problem min{g − u, L_0(Du, D^2u)} = 0 as the value function of a control problem with optimal stopping of the stochastic trajectories. Refer to [13] for the convex case.

Acknowledgement

We would like to express our gratitude to the referees for their very careful reading of the manuscript.

References

1. O. Alvarez, J.-M. Lasry, and P.-L. Lions, Convex viscosity solutions and state constraints, J. Math. Pures Appl. (9) 76 (1997), no. 3, 265–288. MR1441987 (98k:35045)
2. P. T. An, Stability of generalized monotone maps with respect to their characterizations, Optimization 55 (2006), no. 3, 289–299. MR2238417 (2008a:47085)
3. M. Bardi and F. Dragoni, Convexity and semiconvexity along vector fields, Calc. Var. Partial Differential Equations 42 (2011), no. 3–4, 405–427. MR2846261 (2012i:49037)
4. G. Barles and J. Busca, Existence and comparison results for fully nonlinear degenerate elliptic equations without zeroth-order term, Communications in Partial Differential Equations 26 (2001), no. 11, 2323–2337. MR1876420 (2002k:35078)
5. E. N. Barron and W. Liu, Calculus of variations in L^∞, Appl. Math. Optim. 35 (1997), no. 3, 237–263. MR1431800 (98a:49021)
6. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004. MR2061575 (2005d:90002)
7. R. Buckdahn, P. Cardaliaguet, and M. Quincampoix, A representation formula for the mean curvature motion, SIAM J. Math. Anal. 33 (2001), no. 4, 827–846 (electronic). MR1884724 (2002j:49036)
8. L. A. Caffarelli and X. Cabré, Fully Nonlinear Elliptic Equations, American Mathematical Society Colloquium Publications, vol. 43, American Mathematical Society, Providence, RI, 1995. MR1351007 (96h:35046)


9. M. G. Crandall, H. Ishii, and P.-L. Lions, User's guide to viscosity solutions of second order partial differential equations, Bull. Amer. Math. Soc. (N.S.) 27 (1992), no. 1, 1–67. MR1118699 (92j:35050)
10. W. H. Fleming and H. M. Soner, Controlled Markov Processes and Viscosity Solutions, second ed., Stochastic Modelling and Applied Probability, vol. 25, Springer, New York, 2006. MR2179357 (2006e:93002)
11. C. Gutierrez, The Monge–Ampère Equation, Birkhäuser, Boston, MA, 2001. MR1829162 (2002e:35075)
12. R. V. Kohn and S. Serfaty, A deterministic-control-based approach to motion by curvature, Comm. Pure Appl. Math. 59 (2006), no. 3, 344–407. MR2200259 (2007h:53101)
13. A. M. Oberman, The convex envelope is the solution of a nonlinear obstacle problem, Proc. Amer. Math. Soc. 135 (2007), no. 6, 1689–1694 (electronic). MR2286077 (2007k:35184)
14. A. M. Oberman, Computing the convex envelope using a nonlinear partial differential equation, Math. Models Methods Appl. Sci. 18 (2008), no. 5, 759–780. MR2413037 (2009d:35102)
15. J. P. Penot, Glimpses upon quasiconvex analysis, ESAIM: Proceedings 20 (2007), 170–194. MR2402770 (2009e:26018)
16. J. P. Penot and M. Volle, Duality methods for the study of Hamilton-Jacobi equations, ESAIM: Proceedings 17 (2007), 96–142. MR2362694 (2009a:49052)
17. H. X. Phu and P. T. An, Stable generalization of convex functions, Optimization 38 (1996), 309–318. MR1434252 (98a:90079)
18. A. Popier, Contrôle optimal stochastique en essential supremum, Tech. report, Université de Bretagne Occidentale, Brest, available from http://perso.univ-lemans.fr/~apopier/Research.html, 2001.
19. R. T. Rockafellar and R. J.-B. Wets, Variational Analysis, Grundlehren der Mathematischen Wissenschaften [Fundamental Principles of Mathematical Sciences], vol. 317, Springer-Verlag, Berlin, 1998. MR1491362 (98m:49001)
20. R. T. Rockafellar, Convex Analysis, Princeton University Press, 1970. MR0274683 (43:445)
21. H. M. Soner, Stochastic representations for nonlinear parabolic PDEs, Handbook of Differential Equations: Evolutionary Equations, Vol. III, Handb. Differ. Equ., Elsevier/North-Holland, Amsterdam, 2007, pp. 477–526. MR2549373 (2010m:60204)
22. H. M. Soner and N. Touzi, The dynamic programming equation for second order stochastic target problems, SIAM J. Control Optim. 48 (2009), no. 4, 2344–2365. MR2556347 (2011c:91252)
23. H. M. Soner and N. Touzi, Dynamic programming for stochastic target problems and geometric flows, J. European Math. Soc. 4 (2002), 201–236. MR1924400 (2004d:93142)

Department of Mathematics and Statistics, Loyola University Chicago, Chicago, Illinois 60660
E-mail address: [email protected]

Department of Mathematics and Statistics, Loyola University Chicago, Chicago, Illinois 60660
E-mail address: [email protected]

Department of Mathematics and Statistics, Loyola University Chicago, Chicago, Illinois 60660
E-mail address: [email protected]
