<<

Funda¸c˜aoGetulio Vargas Escola de Matem´aticaAplicada

Daniel Carletti

Regularity of Mean-Field Games - An Introduction

Rio de Janeiro 2020 Daniel Carletti

Regularity of Mean-Field Games - An Introduction

Disserta¸c˜ao submetida `a Escola de Matem´atica Aplicada como requisito par- cial para a obten¸c˜aodo grau de Mestre em Modelagem Matem´aticada Informa¸c˜ao.

Area´ de Concentra¸c˜ao:Sistemas Complexos

Orientador: Yuri Fahham Saporito

Rio de Janeiro 2020 Dados Internacionais de Catalogação Publicação (CIP) Ficha catalográfica elaborada pelo Sistema de Bibliotecas/FGV

Carletti, Daniel Regularity of mean-field games : an introduction / Daniel Carletti. – 2020. 62 f.

Dissertação (mestrado) -Fundação Getulio Vargas, Escola de Matemática Aplicada. Orientador: Yuri Fahham Saporito. Inclui bibliografia.

1. Equações diferenciais parciais. 2. Hamilton-Jacobi, Equações de. I. Saporito, Yuri Fahham. II. Fundação Getulio Vargas. Escola de Matemática Aplicada. III. Título.

CDD – 515.353

Elaborada por Rafaela Ramos de Moraes – CRB-7/6625

Thanks to my family for the support, my advisor for the patience and my girlfriend for the love. Abstract

Mean-field games, introduced in a differential perspective by Lions and Lasry [3],model situations dealing with a great number of agents considered as a continuum. The study of the regularity of functions is to observe properties of integrability and differentiability. This dissertation starts with an intro- duction of the necessary ingredients from the partial differential equations theory, it goes on with the analysis of some estimates of the solutions of these equations, and concludes with results on regularity for the meanfield games solutions. Contents

1 Introduction 5

2 Linear PDEs 6 2.1 Transport equation ...... 6 2.2 Laplace equation ...... 10

3 First-Order Non-Linear PDEs 21 3.1 Calculus of variations approach ...... 24 3.2 Hamilton’s equations ...... 26

4 Estimates for the Hamilton-Jacobi equation 28 4.1 Comparison Principle ...... 29 4.2 Optimal Control theory ...... 30 4.2.1 Optimal trajectories ...... 30 4.3 Dynamic Programming Principle ...... 35 4.4 Subdifferentials and Superdifferentials of the Value Function . 37 4.5 Regularity of the Value Function ...... 41

5 Estimates for the Transport and Fokker-Planck Equations 44 5.1 Mass Conservation and Positivity Solutions ...... 44 5.2 Regularizing effects of the Fokker-Planck Equation ...... 46

6 Estimates for Mean-Field Games 50 6.1 Maximum Principle Bounds ...... 50 6.2 First-Order Estimates ...... 51 6.3 Estimates for Solutions of the Fokker-Plank Equation under MFG...... 57

7 Conclusion 60

8 Appendix 61 1 Introduction

The mean-field games (MFG) theory was introduced by Lions and Lasry in their seminal article in 2007, “Mean Field Games” [3] with a partial dif- ferential equation (PDE) approach, and by , Malhame, and Caines in their paper “Large Population Stochastic Dynamic Games: Closed-Loop McKean–Vlasov Systems and the Nash Certainty Equivalence Principle” in 2006. It is relevant on thefield of both partial differential equations and game theory because it shows a new approach for games with a large quantity of agents. For this dissertation, we follow the books “Partial Differential Equations” of Evans [1] and “Regularity Theory for Mean-Field Games Systems” of Gomes, Pimentel, and Voskanyan [2]. We use thefirst one for the introduction of partial differential equations and from the second one we get the estimates and the result of regularity. The objective of this work is to study regularity of the meanfield games partial differential equations. Studying regularity of PDEs is to analyze the integrability or smoothness of its solutions. Hilbert’s nineteen problem stated a problem of regularity asking if any solu- tion of a particular PDE retains the regularity properties of its coefficients, that was one of thefirst formalizations of the regularity problem. It was solved by J. Nash on his paper “Parabolic Equations” in 1957 [4]. In thefirst section of this thesis, we study the properties of two classical PDEs: the transport and Laplace’s equations. We point out some results of regularity, but our focus is to introduce the equations to the reader. We also introduce the probabilistic case for the transport equation, known as the Fokker-Planck PDE, and what is the behavior of the mass function of the agents. In the second section, we analyze the nonlinear PDEs,describe some ways to analyze them and introduce the Hamilton PDEs. In the third section, we introduce the Hamilton-Jacobi-Bellman equations and get some properties of their solutions. For the last two sections, we conclude with some estimates for the Fokker- Planck Equation andfinalize with regularity result of mean-field games. More specifically, we show that the solution of the Fokker=Planck in a MFG prob- lem has regularity of integrability.

5 2 Linear PDEs

Along this dissertation we work through various types of PDEs. We decided to start with the simplest ones, called linear PDEs. More specifically, we study the transport equation which models the transport of a scalarfield inside an incompressibleflow, and the Laplace equation that models the temperature in a space at thermal equilibrium. Such equations are called linear because the terms involving the solution and its derivatives can be written as a linear combination, where the coefficients are independent of the solution.

2.1 Transport equation One of the simplest partial differential equation, that exists is the transport equation. This equation contains only thefirst derivatives of time and space. That is a particular case of the Fokker-Planck equation that is part of the MFGs problem.

n ut(x, t) +b Du(x, t) =f(x, t),(x, t) R (0, ), · ∈ × ∞ n whereb=(b 1, b2, ..., bn) R . We will solve the following initial-value ho- mogeneous problem (f ∈ 0): ≡ n ut +b Du=0,(x, t) R (0, ), · ∈ × ∞ u(x, 0) =g(x), x R n. � ∈ The derivative ofu vanishes in the direction (b, 1). Indeed, studying the value of the functionu in this line:

z(s) :=u(x+ sb, t+s), s [ t, ), ∈ − ∞ and using the chain rule, we can calculate the derivative ofz:

d(x+ sb) d(t+s) z�(s) = (Du(x+ sb, t+s), u (x+ sb, t+s)) , t · ds ds � � = (Du(x+ sb, t+s), u (x+ sb, t+s)) (b, 1) t · =b Du(x+ sb, t+s) +u (x+ sb, t+s) = 0. · t

6 n Sincez �(s) = 0, we get for each (x, t) inR (0, ) thatz(s) =u(x+sb, t+s) is constant. Fors = 0 ands= t we get: × ∞ − u(x, t) =u(x+0b, t + 0) =u(x tb, t t) =u(x tb, 0) =g(x tb) − − − − u(x, t) =g(x tb). (2.1) ⇒ − Ifg isC 1, the expression (2.1) is a solution for the problem. However, ifg is notC 1, since we cannot take the derivative, we cannot say it will be the solution, but it is a reasonable candidate. Now we solve the non-homogeneous problem:

n ut +b Du=f,(x, t) R (0, ), · ∈ × ∞ (2.2) u(x, 0) =g(x), x R n. � ∈ Analogously the homogeneous problem we study what happens in the direc- tion (b, 1): z(s) :=u(x+ sb, t+s), s [ t, ). ∈ − ∞ Let us evaluate the derivative ofz. d(x+ sb) d(t+s) z�(s) = (Du(x+ sb, t+s), u (x+ sb, t+s)) , t · ds ds � � = (Du(x+ sb, t+s), u t(x+ sb, t+s)).(b, 1) =b Du(x+ sb, t+s) +u (x+ sb, t+s) =f(x+ sb, t+s). · t Thus, we concludez �(s) =f(x+ sb, t+s). Integratingz �(s) from t to 0 we obtain the solution of the problem: − 0 0 z�(s)ds= f(x+ sb, t+s)ds t t − − � �t = z(0) z( t) = f(x+(s t)b, s)ds ⇒ − − 0 − � t = u(x, t) u(x tb, 0) = f(x+(s t)b, s)ds ⇒ − − 0 − � t = u(x, t) =g(x tb) + f(x+(s t)b, s)ds. ⇒ − − �0 Notice that the solution for this function have two parts: a solution of an homogeneous problem with boundary conditiong, denoted byv, and a solu- tion to the non-homogeneous problem with boundary condition 0, denoted byw.

7 n vt +b Dv=0,(x, t) R (0, ), · ∈ × ∞  v(x, 0) =g(x), x R n,  ∈  n wt +b Dw=f,(x, t) R (0, ) · ∈ × ∞  w(x, 0) = 0, x R n.  ∈ Then,  u=v+w. Notice that the regularity ofu retains the same regularity ofg. Thus we con- clude that the transport equation has no smoothing effect on the boundary condition. Analyzing the non-homogeneous part of the equation, we get an integral off, thus the solutionu is more regular thanf, since it depends on f. As we have seen before, the candidate for solution is not alwaysC 1, so we � have to define a new type of solution to make sense in these cases. Definition 2.1. We callu a solution of (2.2) in the sense of distributions if:

T u(x, t)(φt(x, t) + (φ(x, t)b))dxdt= − 0 Rn ∇· � � T u(x, 0)φ(x, 0)dx+ f(x, t)φ(x, t)dxdt n n �R �0 �R and u(x, 0) =g(x), x R n ∈ n for any functionφ C c∞(R [0,T)). ∈ × The idea behind this definition is to remove the derivatives ofu and shift them to a test function that is differentiable using integration by parts. When we do this we expand the set of solutions for not onlyC 1 functions but to functions that may not be differentiable. Notice that if we have a solution in the sense of distributions and we change an enumerable number of points, it is still a solution. Thus it may be worth to

8 work with functions that are defined almost everywhere instead of pointwise. We use the transport equations in the probabilistic case for the mean-field games equations. Let us pose the problem andfind the partial differential equation for the probability mass function. Letb:R n [0,T] R n be a Lipschitz vectorfield. Consider a population of × → n agents and denote their state variable at timet byx(t) = (x 1(t), ..., xn(t)). We assume the state variable follows the dynamics given by

x˙ (t) =b(x(t), t) t>0, (2.3) �x(0) = x.

The previous equation induces aflow,Φ t =x(t), inR n that maps the initial conditionx R n att = 0 to the solution of (2.3) at timet> 0. ∈ Definition 2.2. We call (R n) the space of density functions onR n. P n Definition 2.3. Fix a probability measure,m 0 (R ). For0 t T , we callm( , t) the push-forward byΦ t ofm if it satis∈Pfies: ≤ ≤ · 0

t φ(x)m(x, t)dx= φ(Φ (x))m0(x)dx, forφ measurable and bounded. Rn Rn � � (2.4)

Let’s derive a partial differential equation for the push-forward byΦ t of m0.

t Proposition 2.1. Letm be the push-forward byΦ ofm 0 for some probability n measurem 0 (R ). Assume thatb(x, t) is Lipschitz continuous inx. Let Φt be theflow∈ correspondingP to (2.3). Then,m solves

d mt(x, t) + (b(x, t)m(x, t)) = 0,(x, t) R [0,T], ∇· ∈ d × (2.5) m(x, 0) =m 0(x), x R , � ∈ in the distributional sense.

Proof. We recall that ρ solves (2.5) in the distributional sense if

T (φt(x, t) +b(x, t)φ x(x, t))ρ(x, t)dxdt= φ(x, 0)ρ0(x)dx, (2.6) − n n �0 �R �R

9 n for everyφ C c∞(R [0,T )). First, take∈ φ as in 2×.6 and study the left-hand side of the equation using m instead of ρ and only with the integral in the space variable. Using the definition ofm in (2.4) twice, we get:

(φt(x, t) +b(x, t)φ x(x, t))m(x, t)dx= n �R t t t = φt(Φ (x), t)m0(x) +b(Φ (x), t)φx(Φ (x), t)m0(x)dx n �R t t t = (φt(Φ (x), t) +b(Φ (x), t)φx(Φ (x), t))m0(x)dx. n �R t t t ∂ t Now notice that (φt(Φ (x), t)+b(Φ (x), t)φx(Φ (x), t)) = ∂t (φ(Φ (x), t)). Then, integrating on the time

T (φt(x, t) +b(x, t)φ x(x, t))m(x, t)dxdt= n �0 �R T 0 (φ(Φ (x),T))m 0(x)dx (φ(Φ (x), 0))m0(x)dx n − n �R �R Sinceφ has compact support onR n [0,T),φ(x, T ) = 0. Finally we conclude that: ×

T 0 (φt(x, t) +b(x, t)φ x(x, t))m(x, t)dxdt= φ(Φ (x), 0)m0(x)dx, n − n �0 �R �R = φ(x, 0)m0(x)dx. − n �R Hencem solves (2.5) in the distributional sense.

2.2 Laplace equation One of the most important PDEs is the Laplace’s equation since it models numerous physical phenomenas:

Δu(x) = 0, x R n. (2.7) ∈ Definition 2.4. We call a function that solves (2.7) harmonic.

10 In order to search for solutions for (2.7), it may be interesting tofind properties about the Laplace equation, and these properties may help us find candidates for the solution. First we prove that the Laplace equation is invariant under rotation: Proposition 2.2. Ifu is harmonic andv(x) =u(Ox), thenv is harmonic for everyO orthogonal matrix. Proof.

∂u(Ox) n ∂u v = = (Ox)o . xi ∂x ∂x i j=1 j � Then ∂ n ∂u n ∂ ∂u v = (Ox)o = (Ox)o xixi ∂x ∂x ji ∂x ∂x ji i j=1 j j=1 i j � � � � n n ∂2u = (Ox)o o , ∂x ∂x ji ki j=1 k j � �k=1 � � which implies

n n n n ∂2u Δv= v = (Ox)o o xixi ∂x ∂x ji ki i=1 i=1 j=1 k j � � � �k=1 � � n n ∂2u n = (Ox) o o . ∂x ∂x ji ki j=1 � k j i=1 � � �k=1 � SinceO is orthogonal, we have

n 1, ifj=k, o o = ji ki 0, ifj=k. i=1 � � � Hence,

n ∂2u Δv= (Ox) =Δu(Ox) = 0. ∂xk∂xk �k=1 � �

11 That is a motivation to search for radial solutions of the Laplace equation: 2 2 u(x) =v(r), wherer= x1 + ...+x n. Note ∂r(x) �x2 + ...+x 2 x x = 1 n = i = i . ∂x ∂x 2 2 r i � i x1 + ...+x n Then � ∂v(r) xi uxi = =v �(r) , ∂xi r 2 2 ∂ xi xi 1 xi � �� � uxixi = v (r) =v (r) 2 +v (r) 3 . ∂x1 r r r − r � � � � Thus n 2 2 xi 1 xi Δu= v��(r) +v �(r) r2 r − r3 i=1 � � � �� n 2 n 2 xi 1 xi = Δu=v ��(r) +v �(r) ⇒ r2 r − r3 i=1 i=1 � � � � r2 n r2 n 1 = Δu=v �� +v �(r) =v ��(r)+v �(r) − . ⇒ r2 r − r3 r � � Wefind: n 1 Δu(x) =v ��(r)+v �(r) − . r To getΔu = 0, then we have to solve: n 1 v��(r)+v �(r) − = 0. r Notice: v�� 1 n log(v�)� = = − v� r = log(v�) = (1 n) log(r)+ a, a R ⇒ − ∈ ea � = v = n 1 . ⇒ r − Thus, we conclude: b log(r)+c ifn=2, v(r)=  b n 2 + c, ifn>2, r − for some constantsb andc inR.

12 Definition 2.5. The function:

1 − log( x ), ifn=2, 2π | | Φ(x) :=  1 1 , ifn>2. n(n 2)α(n) x n 2 − | | − is the fundamental solution for Laplace’s equation, where x = 0 andα(n)  | |� is the volume of the unit ball inR n. The non-homogeneous problem related to Laplace’s equation is called Poisson’s equation. Δu=f. − We proved that Laplace’s equation is invariant under rotations and this moti- vates to search solutions that are radial. We can see that Laplace’s equation is also invariant under translation, that’s a motivation to study properties of the convolution. 2 n Definition 2.6. We call a functionC with compact support,f C c∞(R ), n 2 ∈ iff:R R isC and there is a compact setK such thatf Rn K 0. → | \ ≡ Theorem 2.1. Letu(x) = Φ(x y)f(y)dy, for anyf C 2( n), where Rn c R Φ is given in Definition 2.5. Then: − ∈ � (i)u C 2(Rn), ∈ (ii) Δu= f inR n. − Proof. First, we notice:

u(x) = Φ(x y)f(y)dy= Φ(y)f(x y)dy. n − n − �R �R Hence, u(x+ he ) u(x) f(x+ he y) f(x y) i − = Φ(y) i − − − dy, h n h �R � � whereh= 0 ande = (0, ...,1, ..., 0), the 1 in thei th-slot. � i f(x+ he i y) f(x y) Let us show that − − − converges uniformly inR n to h ∂f (x y). We will analyze ∂xi − f(x+ he y) f(x y) ∂f i − − − (x y) . (2.8) h − ∂x − � i � � � � � � 13 � Using the Mean Value Theorem we get that:

f(x+ he i y) f(x y) ∂f − − − = (x y+θ hei), for someθ h (0, h). (2.9) h ∂xi − ∈ Substituting (2.9) in (2.8), we get that: f(x+ he y) f(x y) ∂f i − − − (x y) = h − ∂x − � i � � ∂f ∂f � �= (x y+θ ) (x y) � � ∂x − h i − ∂x − � � i i � � � � ∂f ∂f � �sup (x y+θe i) (x� y) ≤ θ (0,h) ∂xi − − ∂xi − ∈ � � � � � ∂f ∂f � sup sup� (x y+θe i) (x � y) . ≤ y Rn θ (0,h) ∂xi − − ∂xi − ∈ ∈ � � � � � ∂f � Due to continuity and compact� support of (x), we can conclude:� ∂xi ∂f ∂f sup sup (x y+θe i) (x y) = y Rn θ (0,h) ∂xi − − ∂xi − ∈ ∈ � � � � ∂f � ∂f � = max (x� y+θe i) (x y) =:g(h�). (y,θ) Rn [0,h] ∂x − − ∂x − ∈ × � i i � � � Moreover, using the mean� value theorem again, limh �0 g(h) = 0, proving � →� f(x+he y) f(x y) ∂f i− − − that h converges uniformly to (x y). We can use the ∂xi − ∂f ∂f (x+hej y) (x y) ∂xi − − ∂xi − same argument to prove that h converges uniformly to 2 ∂ f (x y). Then: ∂xi∂xj − ∂ f(x+ he y) f(x y) Φ(y)f(x y)dy = lim Φ(y) i − − − dy. ∂x n − h 0 n h i �R → �R � � We want to show that: ∂ ∂f Φ(y)f(x y)dy= Φ(y) (x y)dy, (2.10) ∂x n − n ∂x − i �R �R i ∂2 ∂2f Φ(y)f(x y)dy= Φ(y) (x y)dy. (2.11) ∂x x n − n ∂x x − i j �R �R i j

14 Notice ∂f f(x+ he y) f(x y) Φ(y) (x y)dy Φ(y) i − − − dy n ∂x − − n h ��R i �R � � � � � � ∂f f(x+ he i y) f(x y) � = � Φ(y) (x y) − − − dy � n ∂x − − h ��R � i � �� � � � � ∂f f(x+ he y) f(x y) � � Φ(y) (x y) i − − − � dy ≤ n | | ∂x − − h �R �� i � ��� � � � ∂f f(x+ he i y) f(x y) � Φ(y) sup� (x y) − − − � dy ≤ n | | y Rn ∂xi − − h �R ∈ �� � ��� � � ∂f � f(x+ he i y) f(x y) � =C sup (x� y) − − − , � (2.12) y Rn ∂xi − − h ∈ �� � ��� � � withC> 0. However,� we saw that (2.12) goes to zero whenh� 0, because � � of the uniform convergence, thus proving (2.10). An analagous→ argument can be used to prove (2.11). Hence:

Δ Φ(y)f(x y)dy= Φ(y)Δf(x y)dy. n − n − �R �R Let separate the integral above in two integrals:

Δ Φ(y)f(x y)dy= n − �R = Φ(y)Δf(x y)dy+ Φ(y)Δf(x y)dy n Bε(0) − R Bε(0) − � � \ =:K ε +L ε. Notice

Kε = Φ(y)Δf(x y)dy sup Δf(z) Φ(y)dy . (2.13) | | B (0) − ≤ z Rn | | B (0) �� ε � ∈ �� ε � � � � � SinceΦ is� radial, we use polar coordinates� to calculate� the integral� in (2.13): � � � � ε n 1 Kε sup Δf(z) Φ(r)α�(n)r − dr = | |≤ z Rn | | 0 ∈ �� � � ε � � � � C2 log(r)rdr� , n=2, 0 =  �� ε � � 1 n 1� C � r − �dr , n >2,  n � rn 2 � ��0 − �  � �  � � 15� � withC n R. Then ∈ C log(ε) ε 2, n=2, | | Kε (2.14) | |≤  2 Cε , n >2, for some constantC> 0. Now, let us evaluateL ε

Lε = Φ(y)Δf(x y)dy = lim Φ(y)Δf(x y)dy n r R Bε(0) − Br(0) Bε(0) − � \ →∞ � \ n ∂2f = lim Φ(y) (x y)dy r ∂x2 − →∞ Br(0) Bε(0) i=1 i � \ � n ∂2f = lim Φ(y) (x y)dy. (2.15) r ∂x2 − →∞ i=1 Br(0) Bε(0) i � � \ Using integration by parts in (2.15), we get: n ∂Φ ∂f Lε = lim (y) (x y)dy+ r − ∂x ∂x − →∞ i=1 Br(0) Bε(0) i i � � � \ ∂f + Φ(y) (x y)ν idS(y) ∂(Br(0) Bε(0)) ∂xi − � \ � n ∂Φ ∂f = lim (y) (x y)dy+ r − ∂x ∂x − →∞ � Br(0) Bε(0) i=1 i i � \ � n ∂f + Φ(y) (x y)ν idS(y) ∂x − ∂(Br(0) Bε(0)) i=1 i � � \ �

= lim DΦ(y) D yf(x y)dy+ r − Br(0) Bε(0) · − →∞ � � \ ∂f + Φ(y) (x y) dS(y) ∂(Br(0) Bε(0)) ∂ν − � \ �

= lim DΦ(y) D yf(x y)dy+ r − Br(0) Bε(0)) · − →∞ � \ ∂f + lim Φ(y) (x y) dS(y) r ∂(Br(0) Bε(0)) ∂ν − →∞ � \ Lε =M ε +N ε,

16 whereν is the outwards vector and dS is the surface integral. Now let’s use integration by parts again to calculateM ε:

Mε = lim ΔΦ(y)f(x y)dy+ r Br(0) Bε(0) − →∞ � \ lim DΦ(y) (f(x y)ν)dS(y) r − ∂(Br(0) Bε(0)) · − →∞ � \ ∂Φ = lim (y)f(x y)dS(y). (2.16) r − ∂(Br(0) Bε(0)) ∂ν − →∞ � \

Remember in (2.16) that notice that∂(B r(0) Bε(0)) =∂B r(0) ∂B ε(0). Then: \ ∪ ∂Φ Mε = lim (y)f(x y)dS(y)+ (2.17) − r ∂ν − � →∞ �∂Br(0) ∂Φ lim (y)f(x y)dS(y) . − r ∂ν − →∞ �∂Bε(0) �

Forr sufficiently large, we get thaty ∂B r(0) f(x y) = 0. So our equation becomes: ∈ ⇒ − ∂Φ M = (y)f(x y)dS(y). ε ∂ν − �∂Bε(0)

∂Φ 1 yi y First we notice (y) = − n andν= on∂B ε(0), which implies ∂xi nα(n) y ε n | | ∂Φ 1 y y 1 that (y)= i i = on∂B (0). Thus: ∂ν nα(n) ε n ε nα(n)ε(n 1) ε i=1 − � | | 1 M = − f(x y)dS(y) ε nα(n)ε(n 1) − �∂Bε(0) − = f(x y)dS(y) f(x) asε 0. (2.18) − − − →− → �∂Bε(0)

Now let’s calculateN ε: ∂f Nε = lim Φ(y) (x y) dS(y). r ∂(Br(0) Bε(0)) ∂ν − →∞ � \

17 2 n ∂f Sincef C c (R ), (x y) will be 0 on∂B r(0) forr sufficiently large. Then ∈ ∂ν − ∂f N = Φ(y) (x y) dS(y) ε ∂ν − �∂Bε(0))

= N Df n Φ(y) dS(y) ⇒| ε|≤ || || L∞(R ) | | �∂Bε(0)) 1 Df n log(ε) dS(y), n=2, || || L∞(R ) 2π | | = N  �∂Bε(0)) ⇒| ε|≤ 1  Df L ( n) dS(y), n 3. || || ∞ R n(n 2)α(n)ε n 2 ≥ �∂Bε(0)) − −  1  Df L ( n) log(ε) 2πε, n=2, || || ∞ R 2π | | = N ε  1 n 1 ⇒| |≤ Df n nα(n)ε − , n >2. || || L∞(R ) n(n 2)α(n)ε n 2 − − C log(ε) ε, n=2, = N ε | | (2.19) ⇒| |≤ �Cε, n >2

With (2.14), (2.18) and (2.19), we conclude that, asε 0, → Δu(x) = f(x), − finishing the proof. In the following steps, our objectives is to prove the mean-value formula and the strong maximum principle to Laplace’s equation.

Theorem 2.2. (Mean-value formulas to Laplace’s equation). Ifu C 2(Rn) is harmonic, then ∈

u(x) = udS= udy (2.20) − − �∂Bx(r) �Bx(r) for each ballB (r) U. x ⊆ Proof. Let’s define:

φ(r) := u(y)dS(y)= u(x+zr)dS(z). − − �∂Bx(r) �∂B0(1)

18 Then,

φ�(r)= Du(x+zr) zdS(z) − · �∂B0(1) n ∂u = (x+zr)z idS(z). − ∂x ∂B0(1) i=1 i � � Using Green’s formulas, we get:

n 1 ∂u i φ�(r)= (x+zr)z dS(z) nα(n) ∂x ∂B0(1) i=1 i � � 1 n ∂2u = (x+zr)dz nα(n) ∂x2 B0(1) i=1 i � � 1 = Δu(x+zr)dz=0. nα(n) �B0(1) Thus,φ is constant and:

φ(r)= u(y)dS(y) = lim u(y)dS(y)=u(x). − r 0 − �∂Bx(r) → �∂Bx(r) Using polar coordinates notice:

r r n 1 n udy= udSds= unα(n)s − ds=α(n)r u, �Bx(r) �0 �∂Bx(s) �0 which implies udy= u. − �Bx(r)

Theorem 2.3. (Strong maximum principle). Supposeu C 2(U) C( U) is harmonic within u. ∈ ∩

(i) Then max u = max u. U ∂U

19 (ii) Furthermore, ifU is connected and there exists a pointx 0 such that

u(x0) = max u, U then u is constant withinU.

Proof. Suppose there exists a maximum inU:

x U, u(x ) =M u(x), x U. 0 ∈ 0 ≥ ∀ ∈

So it is true forB x0 (r) U. Using the mean-value formula of Laplace’s equation, we get: ⊆

M=u(x 0) = u(y)dy M, sinceM is maximum of u. −B (r) ≤ � x0

Thus we conclude thatu M inB x0 (r) and that the set x U, u(x) =M is open and relatively closed,≡ sinceu is continuous. If the{ set∈U is connected} we get that x U, u(x) =M =U. { ∈ }

20 3 First-Order Non-Linear PDEs

The basic nonlinearfirst-order PDE can be stated as

F(Du, u, x) = 0 inU (3.1)

subject to the boundary condition

u=g onΓ (3.2)

whereΓ ∂U andg:Γ R are given. Suppose that F,g are smooth functions.⊆ The next→ method we will study solves PDEs offirst-order by converting into a system of ODEs. From eachx inU, we want tofind a curve that connects thatx to ax 0 inΓ and resolve the equation to that curve that will be simpler than resolving the PDE. Tofind the curve let’s do some calculations. Supposeu is aC 2 solution of (3.1) and define the curvez( ) to respect to a curvex( ) inU: · · z(s) :=u(x(s)) (3.3)

Since we are working with afirst-order PDE, it may also be interesting to define the derivative of that curve:

p(s) :=Du(x(s)) i.e. (3.4) p(s) = (p1(s), p2(s), ..., pn(s)) and i p (s) =u xi (x(s)) (i=1, ..., n). (3.5)

First, let’s differentiate (3.5):

dpi n dxj (s) = u (x(s)) (s). (3.6) ds xixj ds j=1 � The expression (3.6) does not seem too promising because it contains the second derivatives ofu. However, if we differentiate (3.1) with respect tox i we get:

n ∂F ∂F ∂F (Du, u, x)u + (Du, u, x)u + = 0. (3.7) ∂p xj xi ∂z xi ∂x j=1 j i �

21 Remember that we want tofind a suitable curvex to calculate the value of z=u(x). To make that second derivatives disappear it is convenient to set:

dxj ∂F = (p(s),z(s), x(s)) (j=1, ..., n) (3.8) ds pj

Substitutingx byx(s) in (3.8) and using equalities (3.4) and (3.5), we get:

n ∂F ∂F (p(s),z(s), x(s)) + (p(s),z(s), x(s))(s)+ ∂p ∂z j=1 j � ∂F + (p(s),z(s), x(s)) = 0. ∂xi Substitute this expression and (3.8) into (3.5):

dpi ∂F ∂F = (p(s),z(s), x(s))pi(s) (p(s),z(s), x(s)). (3.9) ds − ∂z − ∂xi If we differentiate equality (3.3):

dz n ∂u dxj n ∂F = (x(s)) (s) = pj(s) (p(s),z(s), x(s)). (3.10) ds ∂x ds ∂p j=1 j j=1 j � � We will rewrite in vector notation the expressions (3.8)-(3.10):

dp (s) = D F(p(s), z(s), x(s)) D F(p(s), z(s), x(s))p(s), (3.11) ds − x − z  dz  (s) =D F(p(s), z(s), x(s)) p(s), (3.12)  p  ds ·  dx (s) =D pF(p(s), z(s), x(s)), (3.13)  ds  We proved:

Theorem 3.1. (Structure of characteristic ODE). Letu C 2(U) solve the first-order partial differential equation (3.1) in U. Assume∈x solves the ODE (3.13), wherep( ) =Du(x( )), z( ) =u(x( )). Thenp solves the ODE (3.11) andz solves the· ODE (3.12)· for· thoses such· thatx(s) U. ∈

22 Now we apply the characteristics into the general time-dependent Hamilton- Jacobi PDE:

G(Du, ut, u, x, t) =u t +H(Du, x) = 0, (3.14) where Du=D =(u x1 , ..., uxn ). Then writingq=(p, p n+1),y=(x, t), we define:

G(q, z, y)=p n+1 +H(p, x) and so:

DqG=(D pH(p, x), 1),

DyG=(D xH(p, x), 0). Thus equation (3.13) becomes: dx i (s) =H (p(s),x(s)) (i=1, ..., n), ds pi dxi+1  (s) = 1,  ds and equation (3.11) becomes:  dpi ds (s) = H xi (p(s),x(s)) (i=1, ..., n), dpi+1 − � ds = 0. Finally, equation (3.12) becomes: dz (s) = (DH (p(s),x(s)), 1) (p(s),p (s)) ds p · n+1 =DH (p(s),x(s)) p(s) +p (s) p · n+1 =DH (p(s),x(s)) p(s) H(p(s),x(s)). p · − Summarizing, we got the following characteristic equations for Hamilton- Jacobi equation: dp (s) = DH (p(s),x(s)), (3.15) ds − x  dz  (s) =DH (p(s),x(s)) p(s) H(p(s),x(s)), (3.16)  p  ds · −  dx (s) =DH p(p(s),x(s)). (3.17)  ds  Equations (3.15) and (3.17) are called Hamilton’s equations. Notice that these two equations are sufficient to solve the system of ODEs, and with them we can deduce the value of (3.15).

23 3.1 Calculus of variations approach Assume thatL:R n R n R is a given smooth function, called the La- grangian. × → L(v, x) =L(v 1, ..., vn, x1, ..., xn)(vi, xj R) ∈ and

DvL=(L v1 , ..., Lvn ),

DxL=(L x1 , ..., Lxn ).

Now,fix two points x, y R n and a timet> 0. We introduce then the action functional ∈ t dw I[w] := L (s),w(s) ds, (3.18) ds �0 � � defined for functionsw=(w 1( ), w2( ), ..., wn( )) belonging to the admissible class: · · · := w C 2([0, t];R n) w(0) =y,w(t) =x . A { ∈ | } The interpretation of 3.18 is to calculate the cost of an action depending on both of the path and velocity, meaning the trajectory and itsfirst derivative. So it is interesting to know what is the path with the lowest cost. A fundamental problem in the calculus of variations is tofind a curvex , satisfying: ∈A

I[x] = min I[w]. (3.19) w ∈A Let’s study some of the properties ofx assuming its existence. Theorem 3.2. (Euler-Lagrange equations). The curvex solves the system of Euler-Lagrange equations d dx dx D L (s),x(s) +D L (s),x(s) = 0 (0 s t). −ds v ds x ds ≤ ≤ � � �� � � (3.20)

Proof. Choose a smooth functiony : [0, t] R n,y( ) = (y 1( ), ..., yn( )), satisfying: → · · ·

y(0) =y(t) = 0 (3.21)

24 and define forτ R ∈ w :=x+τy. (3.22)

Then, by definition ofx I[x] I[w]. ≤ Thus, the function: i(τ)=I[x+τy] has a minimum atτ = 0, which implies di (0) = 0. (3.23) dτ Let us calculate the derivative (3.23):

t dx y i(τ)= L (s) +τ (s),x(s) +τy(s) ds, ds ds �0 � � di t n dx dy dy (τ)= L +τ ,x+τy i + dτ vi ds ds ds 0 � i=0 � � � � dx dy +L +τ ,x+τy y ds. xi ds ds i � � � Setτ = 0 and we will get:

di t n dx dy dx 0 = (0) = L ,x i +L ,x y ds dτ vi ds ds xi ds i 0 � i=0 � � � � � � � Now integrate by parts thefirst term remembering (3.21):

n t d dx dx 0 = L ,x +L ,x y (s)ds. −ds vi ds xi ds i i=0 0 � � � � � �� � �� Notice that this identity holds for any functiony. Thus:

d dx dx L ,x +L ,x = 0 (i=1, ..., n)(0 s t). −ds vi ds xi ds ≤ ≤ � � �� � �

25 3.2 Hamilton’s equations First set:

p(s) :=D vL(x˙ (s),x(s)). (3.24)

and we assume:

n for all x, p R the equationp=D vL(v, x) can be uniquely solved forv ∈ as a smooth function ofp andx,v=v(p, x). (3.25)

Definition 3.1. The HamiltonianH associated with the LagrangianL is

H(p, x) :=p v(p, x) L(v(p, x), x), p, x R n, · − ∈ where the functionv is the same as defined above. Theorem 3.3 (Derivation of Hamilton’s ODE). The functionsx andp sat- isfy Hamilton’s equations:

x�(s) =D pH(p(s),x(s)) (3.26)

p�(s) = D H(p(s),x(s)) (3.27) � − x for0 s t. Furthermore, the mappings H(p(s),x(s)) is constant. ≤ ≤ → Proof. Notice by (3.24) and uniqueness of (3.25) thatx �(s) =v(p(s),x(s)). Let us denotev=(v 1( ), ..., vn( )) and compute, fori=1, ..., n, · · n H (p, x) = p vk (p, x) L (v(p, x), x)vk (p, x) L (v(p, x), x) xi k xi − vk xi − xi k=1 � � � =p v (p, x) D L(v(p, x), x) v (p, x) L (v(p, x), x). · xi − v · xi − xi

By (3.25) we can affirm thatD vL(v(p, x), x) =p, thus:

H (p, x) = L (v(p, x), x). xi − xi Also we can compute:

H (p, x) =v i(p, x) +p v (p, x) D L(v(p, x), x) v (p, x) pi · pi − v · pi =v i(p, x), by (3.25).

26 ComputingH pi at (p(s).x(s)), as we saw in the beginning of the proof

i i� Hpi (p(s),x(s)) =v (p(s),x(s)) =x (s),

and

H (p(s),x(s)) = L (v(p(s),x(s)),x(s)) = L (x�(s),x(s)) xi − xi − xi d = (L (x�(s),x(s))) according to (3.20) − ds vi = p i(s). − Finally:

d n H(p(s),x(s)) = H (p(s),x(s))p i�(s) +H (p(s),x(s))x i�(s) ds pi xi i=1 � =H (p(s),x(s))( H (p(s),x(s))) +H (p(s),x(s))H (p(s),x(s)) = 0. pi − xi xi pi

27 4 Estimates for the Hamilton-Jacobi equa- tion

This section focuses on estimates for the Hamilton-Jacobi equation. Our objective is to use such estimates to prove results of regularity. From this section onward we search for solutions of the equations in the domain of the torus. So we reserve some time to define it and state one property that we are going to use frequently.

Definition 4.1. Thed-dimensional torus, orT d, is the quotient space be- tweenR d andZ d. The choice of the torus as our space have some consequences that facili- tates the proof of some results. First of all the torus is a compact set, then classical solutions of the PDEs will have a maximum and minimum that we will use to get upper and lower bounds. Also, as we see in the following proposition, the torus has a nice property together with the integration by parts, whose the terms of boundary will be null. For some applications it is easier to consider the torus as a quotient space and a surface immersed in a space of higher dimension, like the the usual torus immersed inR 3. We are going to use this idea to prove the following proposition.

Proposition 4.1. Supposeu:T d R andV:T d R d are smooth functions. Then → →

u(x)( V(x))dx= u(x) V(x)dx d ∇· − d ∇ · �T �T Proof. To prove the above statement we are going to integrate by parts but instead of considering a function ofT d, we are going to integrate in the unitary hyper cube centered at origin, [ 1 , 1 ]d. Integration by parts gives us − 2 2 u(x)( V(x))dx= u(x)( V(x))dx= 1 1 Td ∇· [ , ]d ∇· � � − 2 2 u(x)V(x) ndy u(x) V(x)dx. ∂[ 1 , 1 ]d · − [ 1 , 1 ]d ∇ · � − 2 2 � − 2 2 Notice, since we are working with the torus, when we take a point at the boundary it will have the form: (x , x , ..., 1 , ..., x ) and uV(x , x , ..., 1 , ..., x ) = 1 2 ± 2 d 1 2 2 d 28 1 uV(x 1, x2, ..., 2 , ..., xd). To conclude the proof, we just need to notice that the normal vector− unit at the boundary at each of these points are opposite,

making the term ∂[ 1 , 1 ]d uV ndy = 0. − 2 2 · � 4.1 Comparison Principle In the context optimal control, the comparison principle is used to get lower bounds for solution. Proposition 4.2 (Comparison Principle). Letu:T d [0,T] R solve × → d ut +H(x, Du) �Δu 0 inT [0,T), (4.1) − − ≥ × and letv:T d [0,T] R solve × → d vt +H(x, Dv) �Δv 0 inT [0,T), (4.2) − − ≤ × suppose thatu v att=T . Then,u v inT d [0,T). ≥ ≥ × Proof. Letu δ =u+ δ ,δ R +. We have that: t ∈ δ uδ =u , t t − t2 ∂uδ ∂ δ ∂u = u+ = = Duδ = Du, ∂x ∂x t ∂x ⇒ i i � � i Δuδ =Δu. Therefore, we conclude: δ u δ +H(x, Du δ) �Δu δ = u + +H(x, Du) �Δu >0, − t − − t t2 − δ δ δ d u t +H(x, Du ) �Δu > 0 inT [0,T). (4.3) − − × Subtracting (4.2) from (4.3): δ δ δ d (u v) t +H(x, Du ) H(x, Dv) �Δ(u v)> 0 inT [0,T). − − − − − × (4.4) Consider the functionu δ v and let (x , t ) be a point of minimum ofu δ v − δ δ − onT d (0,T ]. Sinceu δ goes to infinity whent goes to zero, we guarantee a × d minimum onT (0,T ]. We claim thatt δ =T . Supposet δ T , then: × ≤ uδ =v , Duδ = Dv,Δu δ Δv 0, at (x , t ). t t − ≥ δ δ However putting the equations above on (4.4) we get a contradiction. We conclude the proof lettingδ 0. → 29 4.2 Optimal Control theory In this section, we consider theC 1 solutions,u:R d [0,T] R, of the Hamilton-Jacobi equation: × → Du 2 u + | | +V(x) = 0, (4.5) − t 2 with the terminal condition,

u(x, T)=u T (x), uT bounded, (4.6) and we investigate the corresponding deterministic optimal control problem in the sense explained below. We suppose thatV is of classC 2 and globally bounded. We show that a solution of (4.5) is the value of the control problem T x˙ (s) 2 u(x, t) = inf | | V(x(s))ds+u T (x(T)), (4.7) x 2 − �t where the infimum is taken over all trajectories,x W ([t, T ]), withx(t) = ∈ 1,2 x; see Appendix 8.3 for the definition ofW 1,2([t, T ]).

4.2.1 Optimal trajectories We begin our study of (4.7) by examining the existence of minimizing tra- jectories. It may be possible that a trajectory exists but it is not smooth, so we extend the domain of solutions. The space of smooth functions is not large enough to have solution for some problems in PDEs, so it is necessary to define a new set, known as Sobolev spaces. In particular, we work with W1,2([t, T ]) as seen in Definition 8.3. this space is suitable for this problem due two main reasons: we need to define the derivative in some weak sense to allow more functions to solve the problem, making it more probable to have a solution; and having its derivative onL 2 makes it part of a Hilbert space that allows us to use important results. Then we show the existence of a minimizer inW 1,2([t, T ]). Proposition 4.3. LetV be a bounded continuous function.Then, there exists a minimizerx W ([t, T]) of (4.7). ∈ 1,2 Proof. Letx be a minimizing sequence for (4.7), a sequencex W ([t, T ]), n n ∈ 1,2 xn(t) =x and such that: T 2 x˙ n(s) u(x, t) = lim | | V(x n(s))ds+u T (xn(T)). (4.8) n 2 − →∞ �t

30 Wefirst claim that sup x˙ C. To verify that, we analyze each n || n||L2([t,T]) ≤ term of right side of (4.8). We know thatV andu T are bounded. Thus: T x˙ (s) 2 | n | V(x (s))ds+u (x (T)) C 2 − n T n ≤ �t T x˙ (s) 2 T = | n | ds C+ V(x (s))ds u (x (T)), ⇒ 2 ≤ n − T n �t �t x˙ = || n||L2([t,T]) C+(T t)M+N, ⇒ 2 ≤ − � � � � where V M and� u N�. So we can conclude that sup x˙ � T � n n L2([t,T]) C. By| Theorem|≤ 8.1| with|≤p = 2 there is a function x˜ C[t, T ]|| such|| that ≤ n ∈ y x˜n(y) x˜n(z)= x˙ n(w)dw − z y � = x˜ (y)= x˙ (w)dw+ x˜ (z). ⇒ n n n �z Taking the norm and settingz=t and squaring both sides,

y 2 y x˜ (y) 2 x˙ (w) dw + x˜ (t) 2 + 2 x˜ (t) x˙ (w) dw. | n | ≤ | n | | n | | n | | n | ��t � �t Using the Young’s inequality on the third term on the right-hand side we get:

y 2 x˜ (y) 2 2 x˙ (w) dw + 2 x˜ (t) 2. | n | ≤ | n | | n | ��t � Using Cauchy-Schwarz inequality for the integral yields y x˜ (y) 2 2(T t) x˙ (w) 2dw+2 x˜ (t) 2. | n | ≤ − | n | | n | �t Notice that x˜ (y) 2 2(T t)C+2D, | n | ≤ − for someC,D> 0, concluding

T T x (y) 2dy= x˜ (y) 2dy (T t) 2C + 2(T t)D | n | | n | ≤ − − �t �t

31 For eachn: x + x˙ < E, for someE> 0, thus concluding: || n||L2 || n||L2 sup xn W 1,2([t,T]) < . (4.9) n || || ∞

Next, by Morrey’s inequality, Theorem 8.2, the sequence (xn)n N is equicon- ∈ tinuous and bounded. 1 Indeed, applying the theorem forλ= 2 andd = 1 in the case above, we have:

x 1 C x 1,2 C sup x 1,2 < . n 0, 2 n W ([t,T]) n W ([t,T]) || ||C ([t,T]) ≤ || || ≤ n || || ∞ 1 Since the sequence is uniformly bounded and sincex n is 2 -H¨older continuous with same constant for alln N, we conclude (x n)n N is equicontinuous. ∈ Finally we can use the Arzel`a-Ascoli∈ Theorem to conclude there exists a uniformly convergent subsequence. We can also further extract another sub- sequence that converges weakly inW 1,2 to a functionx using Theorem 8.3. Our objective now is to prove the weakly lower semicontinuity; that is:

T 2 x˙n(s) lim inf | | V(x n(s))ds+u T (xn(T)) n 2 − ≥ →∞ �t T x˙ (s) 2 > | | V(x(s))ds+u (x(T )) (4.10) 2 − T �t 1,2 for any sequencex n �x inW ([t, T ]). Notice by the Young’s inequality and the Cauchy-Schwartz inequality, we have: x˙ 2 + x˙ 2 | | | n| x˙ x˙ x˙ x˙ . 2 ≥| || n|≥ n Thus x˙ 2 x˙ 2 x˙ | n| x˙ x˙ | | = x˙ x˙ . (4.11) 2 ≥ n − 2 n − 2 � � Using (4.11), we get: T x˙ (s) 2 | n | V(x (s)) ds+u (x (T)) (4.12) 2 − n T n ≥ �t � � T x˙ (s) 2 V(x(s)) V(x (s)) + | | V(x(s)) + x˙ (s)(x˙ (s) x˙ (s)) ds+ − n 2 − n − �t � � � � uT (xn(T)).

32 Because x˙ converges weakly tox andx L 2([t, T ]), wefind: n ∈ T x˙ (s)(x˙ (s) x˙ (s))ds 0. n − → �t Moreover, from the uniform convergence ofx n tox, we conclude that

T V(x (s)) V(x(s))ds 0 n − → �t and that

u (x (T)) u (x(T)). T n → T Thus by taking the lim inf in (4.12) we achieve (4.10). Proving in fact that the functionx is a minimizer.

With the existence of the minimizer now we can prove some properties about it. Notice withx discovered we canfix the end point of the curvex. With this we know thatx is the solution for the action functional T w˙ (s) 2 I[x] = min | | V(w(s))ds w( ) A 2 − · ∈ �t where = w W 1,2([t, T]) R n w(t) = x,w(T)=x(T) Notice that this minimizationA { ∈ problem is→ very similar| to (3.18). So it is} natural that it has some of the same properties, one being the Euler-Lagrange equation. Proposition 4.4 (Euler-Lagrange equation). LetV be aC 1 function. Let x:[t, T] R d be aW 1,2([t, T]) minimizer of (4.7). Thenx C 2[t, T], and satisfies→ the following equation: ∈

x¨+D xV(x) = 0. (4.13)

Proof. Letx:[t, T] R d be aW 1,2([t, T ]) minimizer for (4.7). Fix [ϕ: → [0,T] R d] of classC 2 with compact support on (t, T ). Becausex is a minimizer,→ the function: T x˙ +�˙ϕ 2 i(�) = | | V(x+ �ϕ)ds+u (x(T)) 2 − T �t

33 has a minimum at� = 0. Sincei is differentiable, we havei �(0) = 0 and therefore,

i�(0) = 0 d T x˙ +�˙ϕ 2 = | | V(x+ �ϕ)ds+u (x(T)) (0) = 0 ⇒ d� 2 − T � t � T � = [x˙ ˙ϕ D V(x)ϕ]ds=0. (4.14) ⇒ · − x �t Next, we define:

T p(t) =p D V(x)ds, (4.15) 0 − x �t d 2 d withp 0 R to be chosen later. For eachϕ C c ((t, T )) taking values inR , we have∈ ∈ T d T (p ϕ)dt=p ϕ = 0. t ds · · t � � � Thus, � T D V(x)ϕ+p ˙ϕds=0. x · �t From (4.14),

T (p+ x˙ ) ˙ϕds=0 · �t and then,p+ x˙ is constant. Thus, selectingp 0 conveniently, we have: p= x˙ . − Sincep is continuous, wefind thatx is continuous as well. Now we just need to analyze (4.15) and confirm it is differentiable:

T p(t) =p + D V(x)ds. 0 − x �t 1 Notice that the derivative ofp isD xV(x). SinceV is aC function, we 2 conclude thatx isC . Because p˙ =D xV(x), wefinally conclude that [ x¨= D V(x)]. − x 34 Proposition 4.5 (Hamiltonian Dynamics). Letx andV as in Proposition p 2 4.4. SetH(p, x) = | | +V(x). Then, forp= x˙ , we have that(x,p) solves 2 − p˙ =D H(p,x), x (4.16) x˙ = D H(p,x). � − p Proof. Notice that this case is a little different from Theorem 3.27. First, the domain of paths is theW 1,2 instead ofC 2, however we proved that the functionx isC 2. So we can adapt the theorem for this case. By changing the limits of integration, theH is very similar to the one in Definition 3.1:

T w˙ (s) 2 I[x] = min | | V(w(s))ds w A 2 − ∈ �t t w˙ (s) 2 = min | | +V(w(s))ds. w A − 2 ∈ �T v 2 | | DenoteL(v, x) = 2 +V(x) andH a(p, x) the Hamiltonian ofL, see Defi- nition 3.1. −

p=D L(v, x) = v, thus v − H�(p, x) =p v L(v, x), · − 2 2 2 v p H�(p, x) = p + | | V(x) = | | V(x) = H(p, x) −| | 2 − − 2 − −

The Hamiltonian is exactly the opposite ofH andp=D vL(x˙ ,x) = x˙ . Thus, by Theorem 3.3, we have: −

x˙ =D H�(p,x) = D H(p,x), p − p p˙ = D H�(p,x) =D H(p,x). − x x

4.3 Dynamic Programming Principle One recurrent property on optimal control theory is the dynamic program- ming principle. In this section we will see that it also applies to the problem (4.7).

35 Proposition 4.6. LetV be a bounded continuous function andu be given by (4.7). Then, for anyt � witht

t� ˙x(s)2 u(x, t) = inf | | V(x(s))ds+u(x(t �), t�). (4.17) x 2 − �t Proof. Define

t� ˙x(s)2 ˜u(x, t) = inf | | V(x(s))ds+u(x(t �), t�). (4.18) x 2 − �t andu is given by (4.7). Take a optimal trajectory,x 1 foru(x, t) and select 2 1 1 an optimal trajectory,x , foru(x (t�), t�). Consider the concatenation ofx andx 2 given by

1 3 x (s)t s t � x = 2 ≤ ≤ x (s)t � < s T. � ≤ We have, T x˙ 3(s) 2 u(x, t) | | V(x 3(s))ds+u (x3(T)) ≤ 2 − T �t t� x˙ 1(s) 2 T x˙ 2(s) 2 | | V(x 1(s))ds+ | | V(x 2(s))ds+u (x2(T)) ≤ 2 − 2 − T �t �t� t� 1 2 x˙ (s) 1 1 | | V(x (s))ds+u(x (t�), t�) = ˜u(x, t). ≤ 2 − �t Conversely, letx be an optimal trajectory in (4.7). Then, T x˙ (s) 2 u(x(t�), t�) | | V(x(s))ds+u (x(T)). ≤ 2 − T �t� Consequently,

t� x˙ (s) 2 ˜u(x, t) | | V(x(s))ds+u(x(t �), t�) u(x, t). ≤ 2 − ≤ �t And we know by definition ofu(x, t) that u(x, t) ˜u(x, t), ≤ finishing the proof.

36 4.4 Subdifferentials and Superdifferentials of the Value Function Working with derivatives is easier, however we cannot always guarantee their existence. From the estimates we got in the previous sections, it may be easier to prove estimates that might imply the existence of the derivative. Thus we define a space of functions that may not be differentiable, but it almost is. d + Consider a continuous functionψ:R R. The superdifferentialD x ψ(x) → ofψ atx is the set of vectors,p R d, such that: ∈ ψ(x+v) ψ(x) p v lim sup − − · 0. v 0 v ≤ | |→ | |

Similarly, the subdifferential,D x−ψ(x), ofψ atx is the set of vectorsp, such that ψ(x+v) ψ(x) p v lim inf − − · 0. v 0 v ≥ | |→ | | Proposition 4.7. Letψ:R d R be a continuous function andx R d. If + → ∈ bothD x−ψ(x) andD x ψ(x) are non-empty, thenφ is differentiable atx and:

+ D−ψ(x) =D ψ(x) = D ψ(x) x x { x } + The opposite is also true, ifψ is differentiable, then,D x−ψ(x) andD x ψ(x) are equal and only have one element that isD xψ(x).

+ + Proof. Takep − D −ψ(x) andp D ψ(x). We have, ∈ x ∈ x

ψ(x+v) ψ(x) p − v lim inf − − · 0, v 0 v ≥ | |→ | | ψ(x+v) ψ(x) p + v lim sup − − · 0. v 0 v ≤ | |→ | | Subtracting these two inequalities, we obtain

+ (p p −) v lim inf − · 0. v 0 v ≥ | |→ | |

37 + p p− In particular, choosev= � p+−p , with�> 0. Then − | − −| + + p p− (p p −) � p+−p lim inf − − · | − −| 0 � 0 → � ≥ + | |2 p p − = lim inf | − | 0 � 0 + ⇒ → − p p − ≥ + | − | = p p − 0 ⇒ −+| − |≥ = p p − =0. ⇒| − | + Thusp =p − p. Moreover, ≡ ψ(x+v) ψ(x) p v lim inf − − · 0, v 0 v ≥ | |→ | | ψ(x+v) ψ(x) p v lim sup − − · 0, v 0 v ≤ | |→ | | which implies ψ(x+v) ψ(x) p v lim − − · = 0. v 0 v | |→ | | Notice that is exactly the definition of the gradient ofψ on the pointx. To notice the converse statement, we know that, ifψ is differentiable, then: ψ(x+v) ψ(x) D ψ v lim − − x · = 0, v 0 v | |→ | | which implies: ψ(x+v) ψ(x) D ψ v lim inf − − x · = 0, v 0 v | |→ | | ψ(x+v) ψ(x) D ψ v lim sup − − x · = 0. v 0 v | |→ | | + We conclude thatD xψ D x ψ(x) andD xψ D x−ψ(x). To show that is unique,∈ we go back to the∈ beginning of the proof. Suppose + + + that we have two vectors onD x ψ,p 1 andp 2 . We saw that if we haveD x−ψ + + + non-empty, thenp =p − for everyp D x ψ(x) andp − D x−ψ(x). Thus + + ∈ ∈ we concludep 1 =p − =p 2 .

38 Proposition 4.8. Let

ψ:R d R → d d 1 be a continuous function. Fixx 0 R . Ifφ:R R is aC function such that ∈ →

ψ(x) φ(x) − has a local maximum atx 0, then

D φ(x ) D +ψ(x ). x 0 ∈ x 0 Proof. Supposeψ(x) φ(x) have a local maximum atx . Thus in a neigh- − 0 borhood ofx 0

ψ(x) φ(x) ψ(x ) φ(x ) − ≤ 0 − 0 = ψ(x) ψ(x ) p (x x ) φ(x) φ(x ) p (x x ) ⇒ − 0 − · − 0 ≤ − 0 − · − 0

Settingp=D xφ(x0) and using the limit in both sides, we have: ψ(x) ψ(x ) p (x x ) φ(x) φ(x ) p (x x ) lim − 0 − · − 0 lim − 0 − · − 0 = 0 x x0 x x ≤ x x0 x x → | − 0| → | − 0| We conclude thatD φ(x ) D +ψ(x ). x 0 ∈ x 0 The case for the minimum is similar and we have thatD φ(x ) D −ψ(x ). x 0 ∈ x 0 Proposition 4.9. Letu be given by (4.7) and letx be a corresponding opti- mal trajectory. SupposeV of classC 2. Then,p= x˙ satisfies: −

p(t�) D x−u(u(x(t�), t�) fort

Proof. Lett

39 s t Consider the trajectoryz(s) =x(s) +y − and notice it depends ony. Since t� t z(t) =x(t) =x, we can say −

t� z˙ 2 u(x, t) | | V(z)ds+u(z(t �), t�). (4.19) ≤ 2 − �t Define:

t� z˙ 2 Φ(y)=u(x, t) | | V(z)ds. − 2 − �t

And let’s observeu(z(t �), t�) Φ(y) has a minimum aty = 0. Manipulating the inequality: −

t� z˙ 2 u(x, t) | | V(z)ds+u(z(t �), t�) ≤ 2 − �t t� z˙ 2 = u(z(t �), t�) u(x, t) | | V(z)ds 0 ⇒ − − 2 − ≥ � �t � = u(z(t �), t�) Φ(y) 0 ⇒ − ≥

Wheny = 0, we have thatu(z(t �), t�) Φ(y) = 0, thusy = 0 is a minimum. Then, by the previous proposition and− sinceΦ is differentiable,D Φ(0) y ∈ Dx−u(x(t�), t�).

t� z˙ 2 D Φ(0) =D u(x, t) | | V(z)ds (0) y y − 2 − � �t � t� y 2 x˙ (s) + t t s t = D | �− | V x(s) +y − ds (0) − y 2 − t t ��t � � − � � t� x˙ (s) s t = D V(x(s)) − ds − t t − x t t ��t � − � − � Using integration by parts and (4.13), we conclude:

t s t t� � s t DyΦ(0) = x˙ − + ( x¨(s) D xV(x(s)) − ds − � t� t t t − − t� t � − � � − = x˙ (t�) =p(t� �), − �

40 concluding thatp(t �) D x−u(x(t�), t�),t

D Ψ(0) = x˙ (t�) =p(t �). y −

4.5 Regularity of the Value Function A function,ψ:R d R, is semiconcave if there exists a constant,C, such thatψ C x 2 is a concave→ function. In this section the objective is to prove that the− value| | function is bounded, Lipschitz and semiconcave.

Proposition 4.10. Letu(x, t) be given by (4.7). Suppose that V 2 d C � � C (R ) ≤ andu T is Lipschitz. Then, there exists constants,C 0,C1 andC 2, depending only onu T andT , such that:

d u C 0 for allx R ,0 t T, | |≤ ∈ ≤ ≤ d u(x+y,t) u(x, t) C 1 y for all x, y R ,0 t T, | − |≤ | | ∈ ≤ ≤ 1 u(x+y,t) +u(x y,t) 2u(x, t) C 1 + y 2 − − ≤ 2 T t | | � − � for all x, y R d,0 t

T u(x, t) V(x)ds+u T (x(T)) (T t) V + uT ≤− ≤ − � � ∞ � �∞ �t Tofind a lower bound we will analyze that for any trajectory,x, withx(t) = x, we have

T x˙ (s) 2 | | V(x(s))ds+u T (x(T)) ((T t) V + uT ) 2 − ≥− − � � ∞ � �∞ �t Considering the case of the optimal trajectory, we conclude thatu is bounded by (T t) V + uT . ∞ ∞ To prove− � the� function� � is Lipschitz, we are going to consider the optimal trajectory,x, foru(x, t). Therefore we can write:

T x˙ (s) 2 u(x, t) = | | V(x(s))ds+u (x(T)). 2 − T �t Consider the trajectoryx+y starting atx+y, then

T x˙ (s) 2 u(x+y,t) | | V(x(s) +y)ds+u (x(T)+y) ≤ 2 − T �t Subtractingu(x, t) from both of them, we get the following:

u(x+y,t) u(x, t) − ≤ T (V(x(s) +y) V(x(s))ds+u (x(T)+y) u (x(T)) ≤ − − T − T �t u(x+y,t) u(x, t) − ≤ (T t) V 1 y +L y (C(T t) +C) y , ≤ − � � C | | | |≤ − | | proving thatu is Lipschitz. Remains to prove the semiconcavity: we take d T s x, y R with y 1,y(s) =y T−t , ∈ | |≤ − T x˙ (s) y˙ (s) u(x y,t) | ± | V(x(s) y(s))ds+u (x(T)), ± ≤ 2 − ± T �t

42 Notice that x+y 2 + x y 2 2 x 2 = 2 y 2, thus | | | − | − | | | | u(x+y,t) +u(x y,t) 2u(x, t) − − ≤ T y 2 | | (V(x(s)+y(s)) +V(x(s) y(s)) 2V(x(s))ds ≤ (T t) 2 − − − �t 2 − T y 2 | | + V 2 y(s) ds ≤ T t � � C | | − �t y 2 T T s 2 | | + V 2 y − ds ≤ T t � � C T t − �t � − � 2 � � y 2 1 | | + V 2 y (T� t) C � + (T t) . ≤ T t � � C | | �− ≤ � T t − − � − �

43 5 Estimates for the Transport and Fokker- Planck Equations

In this chapter we turn our attention to the second equation in the MFG system, the transport equation,

d mt(x, t) + div(b(x, t)m(x, t)) = 0 inT [0,T], (5.1) × or the Fokker-Planck equation,

d mt(x, t) + div(b(x, t)m(x, t)) =Δm(x, t) inT [0,T]. (5.2) × We consider both of equations above with initial conditions:

m(x, 0) =m 0(x), (5.3)

withm 0 0 and m0dx = 1. It models≥ the density function of the agents under the drift, that in the � equation it comes as the divergent betweenb andm, and random forces, that comes as the Laplacian ofm.

5.1 Mass Conservation and Positivity Solutions It is important to notice that we want the solution of (5.1) and (5.2) to still be a density function, thus we will examine two properties of these solutions, namely positivity and mass conservation.

Proposition 5.1 (Conservation of Mass). Letm solve either (5.1) or (5.2) with the initial condition (5.3). Then,

m(x, t)dx=1 d �T for allt 0. ≥ Proof. Let us prove that the integral of the mass is constant for the transport equation d m(x, t)dx= mt(x, t)dx= div(b(x, t)m(x, t))dx. dt d d − d �T �T �T

44 Integration by parts now yields

div(b(x, t)m(x, t))dx= − d �T 1m(x, t)b(x, t) ˆndx+ grad(1) b(x, t)m(x, t)dx=0, − d · d · �∂T �T since boundary ofT d is empty. Remains to prove the conservation of mass to the Flokker-Planck equation, we just need to prove that Δm(x, t)dx = 0. Td Notice � Δm(x, t)dx= div( m(x, t))dx=0, d d ∇ �T �T from of an argument similar to the one for the transport equation. Proposition 5.2. The transport equation and the Fokker-Planck equation preserve positivity: ifm 0 andm solves either one of the previous equa- 0 ≥ tions, thenm(x, t) 0, for all(x, t) T d [0,T]. ≥ ∈ × Proof. For this proof instead of analysing the original equation we evaluate the adjoint equation. There we do not need to worry about differentiability and we use the comparison principle to get a inequality.

d vt(x, t) +b(x, t) Dv(x, t) = Δv(x, t), for all (x, t) T [0, s], · − ∈ × �v(x, s) =φ(x), (5.4)

d d whereφ C ∞(T ),φ(x)>0, x T . First, notice∈ by the comparison∀ ∈ principle, Proposition 4.2,v(x, t)> 0, for all (x, t) T d [0, s]. Second, we multiply (5.2) byv and (5.4) bym, add both ∈ × of them and integrate the expression inT d.

mtv+v tm + div(bm)v+b Dvmdx= Δmv Δvmdx d · d − �T �T d d = mvdx=0= mvdx=0. ⇒ d dt ⇒ dt d �T �T Next integrating in [0, s], wefind

m(x, s)φ(x)dx= v(x, 0)m0(x)dx >0 d d �T �T Since the previous identity holds for any positiveφ, we conclude thatm(x, s) 0. ≥

45 5.2 Regularizing effects of the Fokker-Planck Equation In this section, we are observing results of the derivatives ofm and get some estimates that are used in the following propositions. We see the results of regularization of some functions are applied tom. Proposition 5.3. Letm be a smooth solution of (5.2) withm>0 and assume thatφ C 2(R).Then, ∈ d 2 φ(m)dx+ div(b)(mφ�(m) φ(m))dx= φ��(m) Dm dx dt Td Td − − Td | | � � � (5.5) or, equivalently,

d 2 φ(m)dx mφ��(m)Dm bdx= φ��(m) Dm dx. (5.6) dt d − d · − d | | �T �T �T Proof. To get these two identities let’s multiply (5.2) byφ �(m) and integrate by parts.

(mt(x, t) + div(b(x, t)m(x, t)))φ�(m(x, t))dx= d �T Δm(x, t)φ�(m(x, t))dx d �T d = φ(m(x, t)) m(x, t)φ ��(m(x, t))Dm(x, t) b(x, t)dx= ⇒ d dt − · �T φ��(m(x, t))Dm(x, t) Dm(x, t)dx. − d · �T Or instead of solving by integrating by parts, we can apply the product rule on the second term in the left side of the equation.

div(b(x, t)m(x, t))φ�(m(x, t))dx= d �T m(x, t)div(b(x, t))φ�(m(x, t)) +φ �(m(x, t))Dm(x, t) b(x, t)dx= d · �T m(x, t)div(b(x, t))φ�(m(x, t)) +D(φ(m(x, t)) b(x, t)dx= d · �T m(x, t)div(b(x, t))φ�(m(x, t)) φ(m(x, t))div(b(x, t))dx. d − �T

46 Proposition 5.4. Letm be a smooth solution of (5.2) withm>0. Then, there existC>0 andc>0, such that,

d 1 b 2 Dm 2 dx C | | dx c | 3| dx, (5.7) dt d m ≤ d m − d m �T �T �T d ln mdx C b 2dx+c D lnm 2dx, (5.8) dt d ≥− d | | d | | �T �T �T and d Dm 2 m ln mdx b Dm dx | | dx (5.9) dt d ≤ d | || | − d m �T �T �T 1 Proof. For thefirst assertion, let’s use the equation (5.6) and takeφ(z)= z :

d 1 2m 2 2 dx 3 Dm bdx= 3 Dm dx dt d m − d m · − d m | | �T �T �T d 1 1 Dm 1 2 = dx 2 bdx= 2 3 Dm dx ⇒ dt d m − d m m · − d m | | �T �T �T d 1 1 Dm Dm 2 = dx=2 bdx 2 | 3| dx. ⇒ dt d m d m m · − d m �T �T �T Using the Cauchy-Schwarz inequality in thefirst term of the right side of the equation we get:

d 1 1 Dm Dm 2 dx 2 | | b dx 2 | 3| dx. dt d m ≤ d m m · | | − d m �T �T �T Next, we are going to use the Young inequality: ap bq 1 1 ab + , such that + = 1, ≤ p q p q withp = 2 andq = 2:

d 1 1 b 2 Dm 2 Dm 2 dx 2 | | + | |2 dx 2 | 3| dx dt d m ≤ d m 2 2 m − d m �T �T � | | � �T b 2 Dm 2 | | dx | 3| dx. ≤ d m − d m �T �T

47 For the second one, we are going to takeφ(z) = ln(z):

d m 1 2 ln(m)dx+ 2 Dm bdx= 2 Dm dx dt d d m · d m | | �T �T �T d Dm 1 2 = ln(m)dx= bdx+ 2 Dm dx. ⇒ dt d − d m · d m | | �T �T �T Using Cauchy-Schwarz inequality and Young inequality, we get:

2 d 1 2 1 Dm ln(m)dx b dx+ | 2| dx. dt d ≥− 2 d | | 2 d m �T �T �T Finally for the third inequality, we useφ(z)=z ln(z):. d m 1 m ln(m)dx Dm bdx= Dm 2dx dt d − d m · − d m| | �T �T �T d Dm 2 = m ln(m)dx= Dm bdx | | dx. ⇒ dt d d · − d m �T �T �T With the Cauchy-Schwarz inequality, we conclude what we stated.

Corollary 5.1. Letm be a smooth solution of (5.2) withm>0,m(x, 0) = m , m (x)dx=1, andm >γ>0. Then there exists constants,C,C , 0 Td 0 0 γ such that � T T 2 2 D lnm dxdt C b dxdt+C γ. d | | | ≤ d | | �0 �T �0 �T Proof. Becausem 0 >γ> 0, we get:

lnm 0 > lnγ,

lnm 0dx > lnγ. d �T Using Jensen‘s inequality, we get:

ln m(x, t)dx lnm(x, t)dx − d ≥ d − ��T � �T = lnm(x, t)dx 0. ⇒ d ≤ �T

48 Integrating (5.8) and using the previous estimates we get T T c D lnm 2dxdt C b 2dxdt+ d | | ≤ d | | �0 �T �0 �T + lnm(x, T)dx lnm(x, 0)dx d − d �T �T T C T lnγ = D lnm 2dxdt b 2dxdt . ⇒ d | | ≤ c d | | − c �0 �T �0 �T

Corollary 5.2. Letm be a smooth solution of (5.2) withm>0,m(x, 0) = m , m (x)dx=1,m >0.Then, 0 TT d 0 0 � T Dm 2 m(x, T ) lnm(x, T)dx+ | | dxdt d d 2m ≤ �T �0 �T T b 2 | | mdxdt+ m(x, 0) lnm(x, 0)dx. ≤ d 2 d �0 �T �T Proof. First, let us integrate (5.9) in [0,T]:

m(x, T ) lnm(x, T)dx m(x, 0) lnm(x, 0)dx d − d ≤ �T �T T Dm T Dm 2 b m| |dxdt | | dxdt. ≤ d | | m − d m �0 �T �0 �T Using Young’s inequality, withp=q = 2, we may conclude

m(x, T ) lnm(x, T)dx m(x, 0) lnm(x, 0)dx d − d ≤ �T �T T Dm T Dm 2 b √m| |dxdt | | dxdt, ≤ d | | √m − d m �0 �T �0 �T T 1 Dm 2 T Dm 2 b 2m+ | | dxdt | | dxdt. ≤ d 2| | 2m − d m �0 �T �0 �T Then 1 T Dm 2 m(x, T ) lnm(x, T)dx+ | | dxdt d 2 d m �T �0 �T 1 T b 2mdxdt+ m(x, 0) lnm(x, 0)dx. ≤ 2 d | | d �0 �T �T

49 6 Estimates for Mean-Field Games

Finally we get the estimates of MFG and the results of regularity for the solution of the problem. Our focus in this dissertation is tofind results of regularity of integration,finding in whichL p space the solutions are con- tained. In this section we are going to consider two MFG problems. One of the problems is the periodic stationary MFG:

Du 2 �Δu+ | | +V(x) =F(m) + H, − 2 (6.1) �Δm div(mDu) = 0, �− − where the unknowns areu:T d R,m:T d R, withm 0 and m = 1, → → ≥ and H R. The other problem is the time-dependent MFG: ∈ � Du 2 u �Δu+ | | +V(x) =F(m), − t − 2 (6.2) m �Δm div(mDu) = 0. � t − − For some estimates, we will need the following property: 1 F(m) C β + mF(m). (6.3) d ≤ β d �T �T for everyβ> 0. We are going to assume thatV is smooth andF is bounded. We might need other conditions for these functions that we add as we go.

6.1 Maximum Principle Bounds Now, we study the constant H, in the periodic case, and the functionu, in the time-dependent case, and get estimates about them.

Proposition 6.1. Letu be a classical solution for (6.1). Suppose thatF 0. Then, ≥

H sup V. ≤ Td Proof. Sinceu continuous on a compact, because we supposed it was a classi- cal solution, it achieves a minimum at a pointx 0. At this point,Du(x 0) = 0

50 andΔu 0. Consequently, ≥ Du(x ) 2 �Δu(x ) + | 0 | +V(x ) =F(m(x )) + H V(x ) − 0 2 0 0 ≤ 0 = H F(m(x )) + H V(x ), ⇒ ≤ 0 ≤ 0

concluding H sup x Td V(x). ≤ ∈ Proposition 6.2. Letu be a classical solution of (6.2) andF 0. Then,u is bounded from below. ≥

Proof. SinceF 0, we have ≥ Du 2 ut +�Δu+ | | V(x) V L (Td [0,T]) . − 2 ≥ − ≥ −� � ∞ × The idea to complete this proof is tofind a subsolution and apply the com- parison principle (Proposition 4.2). Notice thatv(x, t) = u T (T −� �∞ − − d t) V L∞(T [0,T]) is a subsolution, the border condition is less thanu(x, T) and� � using the× inequality above wefind

Du 2 Dv 2 ut +�Δu+ | | V L (Td [0,T]) = v t +�Δv+ | | − 2 ≥ −� � ∞ × − 2 Now we can apply the comparison principle and conclude:

u(x, t) uT (T t) V L (Td [0,T]) . ≥ −� �∞ − − � � ∞ ×

6.2 First-Order Estimates In this section, we will get estimates for Du 2dx and mF(m)dx, that is used in the last section to get the result of| regularity.| � � Proposition 6.3. There exists a constantC such that, for any classical solution(u, m, H) of (6.1), we have

Du 2 1 | | (1 +m) + F(m)mdx C. (6.4) d 2 2 ≤ �T

51 Proof. Multiply thefirst equation of (6.1) by (m 1) and the second by u. Adding the results expressions and integrating − − Du 2 �Δu+ | | +V (m 1)+�uΔm+udiv(mDu) = − 2 − � � = (F(m) + H)(m 1) − Du 2 �(uΔm mΔu+Δu) + | | +V (m 1) +udiv(mDu) = − 2 − � � = =(F(m) + H)(m 1) ⇒ − Du 2 �(uΔm mΔu+Δu) + | | +V (m 1) +udiv(mDu)dx= d − 2 − �T � � = = (F(m) + H)(m 1)dx. ⇒ d − �T Integrating by parts thefirst term we get:

uΔmdx= uΔ(m 1)dx= u (m 1) = (m 1)Δudx. d d − − d ∇ ·∇ − d − �T �T �T �T Since H is constant and mdx = 1, we have H(m 1)dx = 0. Simplifying we get: − � � Du 2 | | +V (m 1) m Du 2dx= F(m)(m 1)dx d 2 − − | | d − �T � � �T Du 2 = V(m 1) +F(m)dx= | | (1 +m) + mF(m)dx. ⇒ d − d 2 �T �T Because of property (6.3) withα = 2, we get Du 2 1 | | (1 +m) + mF(m)dx V(m 1) + mF(m)dx+C d 2 ≤ d − 2 �T �T Du 2 1 = | | (1 +m) + mF(m)dx C+ V(m 1)dx. ⇒ d 2 2 ≤ d − �T �T Since we are assumingV isC ∞, we can conclude that V is bounded defined | | on the compactT d, making V(m 1)dx V mdx V dx, d − ≤ d � � ∞ − d �T �T �T V V dx, ≤� � ∞ − d �T concluding the proof.

52 Next we are going to obtain a bound for H. Corollary 6.1. Let(u, m, H) be classical solution of (6.1). Suppose that F 0. Then, there exists a constant, C, not depending on the particular solution,≥ such that

H C. | |≤ Du 2 | | Proof. In the previous proposition we proved that both 2 and mF(m) are L1 functions. If mF(m) isL 1.F(m) will also beL 1, because of the estimate (6.3). Now, if we integrate thefirst equation of (6.1), we obtain Du 2 �Δu+ | | + V dx= F(m) + Hdx, d 2 d �T �T Du 2 H �Δu+ | | +V F(m)dx . | |≤ d 2 − ��T � � � � � Integrating by parts thefi�rst term inside of integral on right� side of the equation we get:

�Δudx= � udx= (�) udx=0. d d ∇·∇ d −∇ ·∇ �T �T �T Thus Du 2 H | | dx + V dx + F(m)dx . | |≤ d 2 d d ��T � ��T � ��T � � � � � � � It still remains to argument� why� it does� not depend� � on the solution.� Notice � Du 2 � � � � � | | ifF is positive, then 2 dx will be less than a constant by (6.4), not depending on the particular solutionu. The same happens for F(m)dx, but � we use the F(m)mdxfirst, then using (6.3) we can notice that F(m)dx � is less than a constant that does not depend on the solutionm. � � Now, we shift our attention to the time dependent problem and try to prove bound similar to (6.4). Proposition 6.4. There exists a constantC>0 such that, for any classical solution,(u, m), of (6.2), we have

Proposition 6.4. There exists a constant $C > 0$ such that, for any classical solution $(u, m)$ of (6.2), we have
$$\int_{\mathbb{T}^d}\int_0^T (m+m_0)\,\frac{|Du|^2}{2} + mF(m)\,dt\,dx \le C. \qquad (6.5)$$

Proof. Multiply the first equation in (6.2) by $(m - m_0)$ and the second equation by $(u_T - u)$. Adding the expressions and integrating in $\mathbb{T}^d\times[0,T]$ gives
$$0 = \int_{\mathbb{T}^d}\int_0^T \big[(m-m_0)(u_T-u)_t + (u_T-u)(m-m_0)_t\big]\,dt\,dx$$
$$+ \int_{\mathbb{T}^d}\int_0^T \big[\epsilon(m-m_0)\Delta(u_T-u) - \epsilon(u_T-u)\Delta(m-m_0)\big]\,dt\,dx$$
$$+ \int_{\mathbb{T}^d}\int_0^T \big[-\epsilon(m-m_0)\Delta u_T - \epsilon(u_T-u)\Delta m_0\big]\,dt\,dx$$
$$+ \int_{\mathbb{T}^d}\int_0^T \left[(m-m_0)\,\frac{|Du|^2}{2} + u\,\nabla\cdot(mDu)\right]dt\,dx$$
$$+ \int_{\mathbb{T}^d}\int_0^T \big[-u_T\,\nabla\cdot(mDu) + (m-m_0)V\big]\,dt\,dx$$
$$+ \int_{\mathbb{T}^d}\int_0^T (m_0-m)F(m)\,dt\,dx.$$
Integrating the first and second terms by parts, and using the conditions $u(x,T) = u_T(x)$ and $m(x,0) = m_0(x)$, we have
$$\int_{\mathbb{T}^d}\int_0^T \big[(m-m_0)(u_T-u)_t + (u_T-u)(m-m_0)_t\big]\,dt\,dx = 0$$
and
$$\int_{\mathbb{T}^d}\int_0^T \big[\epsilon(m-m_0)\Delta(u_T-u) - \epsilon(u_T-u)\Delta(m-m_0)\big]\,dt\,dx = 0.$$
Now let us evaluate
$$\int_{\mathbb{T}^d}\int_0^T \big[-\epsilon(m-m_0)\Delta u_T - \epsilon u_T\Delta m_0\big]\,dt\,dx.$$
First notice that $u_T$ is $C^\infty$, so $\Delta u_T$ is bounded on $\mathbb{T}^d$, and $m$ and $m_0$ are probability densities; then, integrating by parts twice,
$$\left|\int_{\mathbb{T}^d}\int_0^T \big[-\epsilon(m-m_0)\Delta u_T - \epsilon u_T\Delta m_0\big]\,dt\,dx\right| = \left|\int_{\mathbb{T}^d}\int_0^T \big[-\epsilon(m-m_0)\Delta u_T - \epsilon m_0\,\Delta u_T\big]\,dt\,dx\right|$$
$$= \left|\int_{\mathbb{T}^d}\int_0^T \epsilon\,m\,\Delta u_T\,dt\,dx\right| \le \epsilon\,T\,\|\Delta u_T\|_\infty = C.$$
It still remains to evaluate the remaining piece of this term, the one containing $\epsilon u\Delta m_0$:

$$\int_{\mathbb{T}^d}\int_0^T \epsilon\,u\,\Delta m_0\,dt\,dx = -\int_{\mathbb{T}^d}\int_0^T \epsilon\,Du\cdot Dm_0\,dt\,dx.$$
By the weighted Young inequality recalled above, with $a = |Du|$ and $b = \epsilon\,|Dm_0|$,
$$\left|\int_{\mathbb{T}^d}\int_0^T \epsilon\,Du\cdot Dm_0\,dt\,dx\right| \le \int_{\mathbb{T}^d}\int_0^T \delta\,|Du|^2 + \frac{\epsilon^2}{4\delta}\,|Dm_0|^2\,dt\,dx \le \delta\int_{\mathbb{T}^d}\int_0^T |Du|^2\,dt\,dx + C.$$
For any positive $\delta$ there is such a positive constant $C$. Using the same argument:

$$\left|\int_{\mathbb{T}^d}\int_0^T -u_T\,\nabla\cdot(mDu)\,dt\,dx\right| = \left|\int_{\mathbb{T}^d}\int_0^T m\,Du_T\cdot Du\,dt\,dx\right| \le \int_{\mathbb{T}^d}\int_0^T m\left(\delta\,|Du|^2 + \frac{1}{4\delta}\,|Du_T|^2\right)dt\,dx$$
$$\le \delta\int_{\mathbb{T}^d}\int_0^T |Du|^2\,m\,dt\,dx + C.$$
Notice that $\delta$ does not need to be a constant; we are going to use that fact below. Because $V$ is bounded and $m$ and $m_0$ are probability densities, we find

$$\left|\int_{\mathbb{T}^d}\int_0^T (m-m_0)\,V\,dt\,dx\right| \le C.$$

Using the estimates obtained so far, we obtain

$$0 = \int_0^T\int_{\mathbb{T}^d} (m-m_0)\left(-u_t - \epsilon\Delta u + \frac{|Du|^2}{2} + V(x) - F(m)\right) + (u_T - u)\big(m_t - \epsilon\Delta m - \nabla\cdot(mDu)\big)\,dx\,dt.$$
Dropping the two terms that vanish, using $\int u\,\nabla\cdot(mDu)\,dx = -\int m|Du|^2\,dx$, and noting that $(m-m_0)\frac{|Du|^2}{2} - m|Du|^2 = -(m+m_0)\frac{|Du|^2}{2}$, this becomes
$$\int_0^T\int_{\mathbb{T}^d} (m+m_0)\,\frac{|Du|^2}{2} + mF(m)\,dx\,dt$$
$$= \int_0^T\int_{\mathbb{T}^d} -\epsilon(m-m_0)\Delta u_T - \epsilon u_T\Delta m_0 + \epsilon u\Delta m_0 - u_T\,\nabla\cdot(mDu) + (m-m_0)V + m_0F(m)\,dx\,dt$$
$$\le C + \int_0^T\int_{\mathbb{T}^d} \delta_1\,|Du|^2 + \delta_2\,|Du|^2\,m + m_0F(m)\,dx\,dt,$$
where $\delta_1$ and $\delta_2$ are the parameters from the two $\delta$-estimates above. Subtracting $\int_0^T\int_{\mathbb{T}^d}(m+m_0)\frac{|Du|^2}{4}\,dx\,dt$ from each side, we obtain
$$\int_0^T\int_{\mathbb{T}^d} (m+m_0)\,\frac{|Du|^2}{4} + mF(m)\,dx\,dt \le C + \int_0^T\int_{\mathbb{T}^d} \left(\delta_1 - \frac{m_0}{4}\right)|Du|^2 + \left(\delta_2 - \frac{1}{4}\right)|Du|^2\,m + m_0F(m)\,dx\,dt.$$
Making $\delta_1 = \frac{m_0}{4}$ and $\delta_2 = \frac{1}{4}$, and remembering that $m_0$ is bounded, we get the following estimate:

$$\int_0^T\int_{\mathbb{T}^d} (m+m_0)\,\frac{|Du|^2}{4} + mF(m)\,dx\,dt \le C + \int_0^T\int_{\mathbb{T}^d} m_0\,F(m)\,dx\,dt \le C + \|m_0\|_\infty \int_0^T\int_{\mathbb{T}^d} F(m)\,dx\,dt.$$

Finally, using property (6.3) with $\alpha = 2\|m_0\|_\infty$, which gives $\|m_0\|_\infty\int_0^T\int_{\mathbb{T}^d} F(m)\,dx\,dt \le \frac{1}{2}\int_0^T\int_{\mathbb{T}^d} mF(m)\,dx\,dt + C$, we get
$$\int_0^T\int_{\mathbb{T}^d} (m+m_0)\,\frac{|Du|^2}{4} + \frac{1}{2}\,mF(m)\,dx\,dt \le C$$
$$\Rightarrow\; \int_0^T\int_{\mathbb{T}^d} (m+m_0)\,\frac{|Du|^2}{2} + mF(m)\,dx\,dt \le C,$$
which is exactly (6.5).

6.3 Estimates for Solutions of the Fokker-Planck Equation under MFG

Finally, in this last section of the dissertation, we conclude that $Dm^{\frac{1}{2}} \in L^2$ and combine this result with the fact that $F(m) = m^\alpha$ implies $mF(m) = m^{\alpha+1}$ to prove that $|Dm| \in L^q$, where $q$ will be determined as we obtain the results. We need property (6.3), so we must first prove that $F(m) = m^\alpha$ satisfies it.
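Throughout this section we also use the following elementary identity, spelled out here for convenience (a routine computation, not in the original text): for $m > 0$,
$$Dm^{\frac{1}{2}} = \frac{Dm}{2\,m^{\frac{1}{2}}} \quad\Longrightarrow\quad \left|Dm^{\frac{1}{2}}\right|^2 = \frac{|Dm|^2}{4m}.$$
Thus a bound on $\int\!\!\int |Dm^{\frac{1}{2}}|^2\,dx\,dt$ is a weighted bound on $\int\!\!\int \frac{|Dm|^2}{m}\,dx\,dt$, which is how it enters the Hölder argument in Corollary 6.2 below.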

Proposition 6.5. The function $F(m) = m^\alpha$ satisfies property (6.3).

Proof. Let us study:

$$\int_{\mathbb{T}^d} \frac{1}{\beta}\,mF(m) - F(m)\,dx = \int_{\mathbb{T}^d} \frac{1}{\beta}\,m\cdot m^\alpha - m^\alpha\,dx = \int_{\mathbb{T}^d} m^\alpha\left(\frac{m}{\beta} - 1\right)dx.$$
Divide the domain of integration into two parts, one where $\frac{m}{\beta} - 1$ is positive and one where it is negative:
$$\frac{m}{\beta} - 1 \ge 0 \;\Leftrightarrow\; m \ge \beta, \qquad \frac{m}{\beta} - 1 < 0 \;\Leftrightarrow\; m < \beta.$$

Thus,
$$\int_{\mathbb{T}^d} \frac{1}{\beta}\,mF(m) - F(m)\,dx = \int_{\{m\ge\beta\}} m^\alpha\left(\frac{m}{\beta} - 1\right)dx + \int_{\{m<\beta\}} m^\alpha\left(\frac{m}{\beta} - 1\right)dx$$
$$\ge \int_{\{m<\beta\}} m^\alpha\left(\frac{m}{\beta} - 1\right)dx \ge -\beta^\alpha,$$
since on $\{m < \beta\}$ the integrand is bounded below by $-m^\alpha \ge -\beta^\alpha$ and $\mathbb{T}^d$ has unit measure. Hence
$$\int_{\mathbb{T}^d} m^\alpha\,dx \le \beta^\alpha + \frac{1}{\beta}\int_{\mathbb{T}^d} m^{\alpha+1}\,dx$$
for every $\beta$ greater than zero.

Proposition 6.6. Let $(u, m)$ be a solution to (6.2). Then there exists a positive constant $C$, independent of the solution, such that
$$\int_0^T\int_{\mathbb{T}^d} |D\ln(m)|^2 + \left|Dm^{\frac{1}{2}}\right|^2\,dx\,dt \le C.$$

Proof. Using estimate (5.1) and Corollary 5.2,
$$\int_0^T\int_{\mathbb{T}^d} \left|Dm^{\frac{1}{2}}\right|^2\,dx\,dt + \int_0^T\int_{\mathbb{T}^d} |D\ln(m)|^2\,dx\,dt \le C_1 + C_2\int_0^T\int_{\mathbb{T}^d} |b|^2\,m_0\,dx\,dt + C_2\int_0^T\int_{\mathbb{T}^d} |b|^2\,m\,dx\,dt.$$
Taking $b = Du$, supposing $F:\mathbb{R}^+\to\mathbb{R}^+$, and using (6.5):
$$\int_0^T\int_{\mathbb{T}^d} \left|Dm^{\frac{1}{2}}\right|^2\,dx\,dt + \int_0^T\int_{\mathbb{T}^d} |D\ln(m)|^2\,dx\,dt \le C.$$
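Before the final corollary, here is a quick numerical sanity check of Proposition 6.5 (an illustrative sketch with synthetic densities; not part of the proof):

```python
import numpy as np

# Check int m^alpha dx <= beta**alpha + (1/beta) * int m^(alpha+1) dx on T^1
# for a few synthetic probability densities (illustration only).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 512, endpoint=False)
dx = x[1] - x[0]
alpha, beta = 2.0, 0.7
for k in range(1, 6):
    m = 1.0 + 0.9 * np.sin(2.0 * np.pi * k * x + rng.uniform(0.0, 2.0 * np.pi))
    m /= m.sum() * dx                          # normalize: int m dx = 1
    lhs = (m**alpha).sum() * dx
    rhs = beta**alpha + (m**(alpha + 1)).sum() * dx / beta
    assert lhs <= rhs + 1e-12
print("property (6.3) verified on all samples")
```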

Corollary 6.2. Let $(u, m)$ be a classical solution of (6.2). Suppose that $F(m) = m^\alpha$ for some $\alpha > 0$. Then there exists a positive constant $C$, independent of the solution, such that
$$\int_0^T\int_{\mathbb{T}^d} |Dm|^q\,dx\,dt \le C,$$
where $q = 2\,\frac{1+\alpha}{2+\alpha}$, i.e., $Dm \in L^q$.

Proof. By Hölder's inequality, for any $s \ge 0$ and $\frac{1}{r} + \frac{1}{r'} = 1$, we have
$$\int_0^T\int_{\mathbb{T}^d} |Dm|^q\,dx\,dt \le \left(\int_0^T\int_{\mathbb{T}^d} \frac{|Dm|^{qr}}{m^{sr}}\,dx\,dt\right)^{\frac{1}{r}}\left(\int_0^T\int_{\mathbb{T}^d} m^{sr'}\,dx\,dt\right)^{\frac{1}{r'}}.$$

Setting $qr = 2$, $sr = 1$, and $sr' = \alpha + 1$, and using the last estimates, we conclude the proof.
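For the reader's convenience, here is the exponent bookkeeping, which the original leaves implicit: from $qr = 2$ and $sr = 1$ we get $s = \frac{q}{2}$ and $r = \frac{2}{q}$, while $sr' = \frac{sr}{r-1} = \frac{1}{r-1} = \alpha + 1$ forces
$$r = \frac{\alpha+2}{\alpha+1}, \qquad q = \frac{2}{r} = 2\,\frac{1+\alpha}{2+\alpha}.$$
With these choices the first factor is $\left(\int_0^T\int_{\mathbb{T}^d} \frac{|Dm|^2}{m}\,dx\,dt\right)^{\frac{1}{r}} = \left(4\int_0^T\int_{\mathbb{T}^d} |Dm^{\frac{1}{2}}|^2\,dx\,dt\right)^{\frac{1}{r}}$, bounded by Proposition 6.6, and the second factor is $\left(\int_0^T\int_{\mathbb{T}^d} m^{\alpha+1}\,dx\,dt\right)^{\frac{1}{r'}} = \left(\int_0^T\int_{\mathbb{T}^d} mF(m)\,dx\,dt\right)^{\frac{1}{r'}}$, bounded by (6.5).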

7 Conclusion

In this dissertation we introduced two of the most important linear PDEs and focused on the properties of their solutions. We also introduced nonlinear PDEs and ways to solve them, and worked with the Hamilton-Jacobi equations. Moreover, we focused on obtaining estimates for the solutions of the MFG PDEs and used them to reach a final regularity result: the solution of the density equation in the MFG problem has its derivative in $L^q$.

For the future, one could try to obtain similar results for more general functions, such as polynomials or analytic functions that satisfy property (6.3). We only worked with first-order estimates; a natural next step would be second-order estimates and their consequences. We also did not analyze the regularity of the solution of the HJB equation within the MFG system; it would be interesting to know what properties it has and how it interacts with the regularity of $m$. Another direction could be the regularity of derivatives instead of regularity in the sense of integrability.

8 Appendix

Definition 8.1 (Weak Convergence). A sequence of points $(x_n)$ in a Hilbert space $H$ is said to converge weakly to a point $x \in H$ if
$$\langle x_n, y\rangle \to \langle x, y\rangle$$
for every $y \in H$, where $\langle\cdot,\cdot\rangle$ is the inner product of $H$.

Definition 8.2 (Weak Derivative). Let $u$ be an $L^1([t,T])$ function; then we say that $v$ is the weak derivative of $u$ if

$$\int_t^T u(x)\,\varphi'(x)\,dx = -\int_t^T v(x)\,\varphi(x)\,dx$$
for every $\varphi \in C_c^\infty((t,T))$.

Definition 8.3. The space $W^{1,p}([t,T])$ is the space of $L^p$ functions that have one weak derivative, which in turn has finite $L^p$ norm.

Theorem 8.1 (Fundamental Theorem of Calculus for weak derivatives). Let $u \in W^{1,p}(I)$ with $1 \le p \le \infty$, and $I$ a bounded or unbounded interval; then there exists a function $\tilde{u} \in C(\bar{I})$ such that
$$u = \tilde{u} \ \text{a.e. on } I$$
and
$$\tilde{u}(x) - \tilde{u}(y) = \int_y^x u'(t)\,dt \qquad \forall\, x, y \in I.$$
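As an illustration of Definition 8.2 (a numerical sketch, not part of the original text): on $(0,1)$, $u(x) = |x - \frac{1}{2}|$ has weak derivative $v(x) = \mathrm{sign}(x - \frac{1}{2})$, and the defining identity can be checked against a smooth test function vanishing at the endpoints.

```python
import numpy as np

# Numerical check of the weak-derivative identity int u*phi' dx = -int v*phi dx
# for u(x) = |x - 1/2|, v(x) = sign(x - 1/2) on (0, 1) (illustration only).
x = np.linspace(0.0, 1.0, 20001)
dx = x[1] - x[0]
u = np.abs(x - 0.5)
v = np.sign(x - 0.5)
phi = np.sin(np.pi * x) ** 2                     # phi(0) = phi(1) = 0
dphi = np.pi * np.sin(2.0 * np.pi * x)           # phi'(x) = pi * sin(2*pi*x)

def trapezoid(f, dx):
    return 0.5 * dx * (f[:-1] + f[1:]).sum()     # composite trapezoid rule

lhs = trapezoid(u * dphi, dx)
rhs = -trapezoid(v * phi, dx)
print(abs(lhs - rhs) < 1e-6)                     # True up to quadrature error
```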

Theorem 8.2 (Morrey's inequality). Assume $n < p \le \infty$. Then
$$\|u\|_{C^{0,\gamma}(\mathbb{R}^n)} \le C\,\|u\|_{W^{1,p}(\mathbb{R}^n)}$$
for all $u \in C^1(\mathbb{R}^n)\cap L^p(\mathbb{R}^n)$, where $\gamma = 1 - \frac{n}{p}$.

Theorem 8.3 (Weak Compactness). Let $X$ be a reflexive Banach space and suppose the sequence $\{u_k\}\subset X$ is bounded. Then there is a subsequence $\{u_{k_j}\}\subset\{u_k\}$ and $u \in X$ such that $\{u_{k_j}\}$ converges weakly to $u$ in $X$.

References

[1] L. C. Evans. Partial Differential Equations. American Mathematical Society, second edition, 2010.

[2] D. A. Gomes, E. A. Pimentel, and V. Voskanyan. Regularity Theory for Mean-Field Game Systems. Springer, 2016.

[3] J.-M. Lasry and P.-L. Lions. Mean field games. Jpn. J. Math., 2:229–260, 2007.

[4] J. Nash. Parabolic equations. Proceedings of the National Academy of Sciences of the United States of America, 43(8):754–758, 1957.
