University of South Florida Scholar Commons

Graduate Theses and Dissertations Graduate School

4-4-2014

A Maximum Principle in the Engel Group

James Klinedinst, University of South Florida, [email protected]

Follow this and additional works at: https://scholarcommons.usf.edu/etd

Scholar Commons Citation Klinedinst, James, "A Maximum Principle in the Engel Group" (2014). Graduate Theses and Dissertations. https://scholarcommons.usf.edu/etd/5248

This Thesis is brought to you for free and open access by the Graduate School at Scholar Commons. It has been accepted for inclusion in Graduate Theses and Dissertations by an authorized administrator of Scholar Commons. For more information, please contact [email protected].

A Maximum Principle in the Engel Group

by

James Klinedinst

A thesis submitted in partial fulfillment of the requirements for the degree of Master of Arts
Department of Mathematics & Statistics
College of Arts and Sciences
University of South Florida

Major Professor: Thomas Bieske, Ph.D.
Razvan Teodorescu, Ph.D.
Seung-Yeop Lee, Ph.D.

Date of Approval: April 04, 2014

Keywords: Viscosity Solutions, Partial Differential Equations, Carnot Groups

Copyright © 2014, James Klinedinst

Dedication

This work is dedicated to my son, Blaise. May his discoveries be even better.

Table of Contents

Abstract

Chapter 1  Background on the Engel Environment

Chapter 2  Carnot Jets and Viscosity Solutions

Chapter 3  Sub-Riemannian Maximum Principle

References

Abstract

In this thesis, we will examine the properties of subelliptic jets in the Engel group of step 3. Step-2 groups, such as the Heisenberg group, do not provide insight into the general abstract calculations.

This thesis, then, is the first explicit non-trivial computation of the abstract results.

Chapter 1

Background on the Engel Environment

One of the properties of Riemannian manifolds is that at each point, the dimension of the tangent space is equal to the topological dimension of the manifold. For example, if the manifold is R^n, the tangent space is also R^n. If the manifold is a surface in R^3, then the tangent space at a point is R^2. Thus, there are no restricted directions in the tangent space. However, many real-world models require restricted directions, because movement is limited. An example is driving a car [1]: a car cannot move laterally, which restricts the direction of motion. Another model is how the human brain processes visual images [5]. We therefore use sub-Riemannian spaces, which are spaces having points where the dimension of the tangent space is strictly less than the topological dimension of the manifold. We will consider sub-Riemannian manifolds with an algebraic group law, called Carnot groups. In Bieske's paper [3], a sub-Riemannian maximum principle is proved for Heisenberg groups. Later, in [2], this maximum principle is proved for general Carnot groups. In the time between these two papers, it was discovered that the Heisenberg group and so-called "step-2 groups" do not have a sufficiently rich geometry, making these groups inadequate as concrete examples. In this thesis, we explore the step-3 Engel group, which allows us a more concrete understanding of the abstraction found in [2]. We start in R^4 with coordinates (x1, x2, x3, x4) and for α ∈ R, let

u = u(x1, x2, x3, α) = (1/2)x3 + (1/12)x2(x1 + αx2)

m = m(x1, x2, x3, α) = −(1/2)αx3 + (1/12)x1(x1 + αx2)

n = n(x1, x2, α) = (1/2)(x1 + αx2).

Consider the linearly independent vector fields

X1 = ∂/∂x1 − (x2/2) ∂/∂x3 − u ∂/∂x4

X2 = ∂/∂x2 + (x1/2) ∂/∂x3 + m ∂/∂x4

X3 = ∂/∂x3 + n ∂/∂x4

and X4 = ∂/∂x4.

These vector fields obey the relations

[X1,X2] = X3, [X1,X3] = X4, [X2,X3] = αX4, and [Xi,X4] = 0 for i = 1, 2, 3.
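These bracket relations can be verified symbolically. The following is a sketch, assuming sympy is available; each vector field is represented by its coefficient list in the coordinate frame ∂/∂xi, with u, m, n as defined above.

```python
# Symbolic sanity check of the bracket relations for the Engel vector fields.
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha')
coords = (x1, x2, x3, x4)

u = x3/2 + x2*(x1 + alpha*x2)/12
m = -alpha*x3/2 + x1*(x1 + alpha*x2)/12
n = (x1 + alpha*x2)/2

# Each field is a list of coefficients in the coordinate frame d/dx_i.
X1 = [sp.S(1), sp.S(0), -x2/2, -u]
X2 = [sp.S(0), sp.S(1), x1/2, m]
X3 = [sp.S(0), sp.S(0), sp.S(1), n]
X4 = [sp.S(0), sp.S(0), sp.S(0), sp.S(1)]

def bracket(V, W):
    """[V, W]^i = sum_j (V^j dW^i/dx_j - W^j dV^i/dx_j)."""
    return [sp.expand(sum(V[j]*sp.diff(W[i], coords[j])
                          - W[j]*sp.diff(V[i], coords[j]) for j in range(4)))
            for i in range(4)]

def same(V, W):
    """Componentwise equality of two coefficient lists."""
    return all(sp.simplify(v - w) == 0 for v, w in zip(V, W))
```

Running `same(bracket(X1, X2), X3)` then confirms [X1, X2] = X3, and similarly for the other relations.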

The vector fields and these brackets form a Lie algebra, denoted g, that has a corresponding decomposition given by

g = V1 ⊕ V2 ⊕ V3

where the vector space Vi is given by

V1 = span {X1,X2}

V2 = span {X3}

and V3 = span {X4}.

Note that [V1,V1] = V2, [V1,V2] = V3, and [V1,V3] = 0.

This Lie algebra has an inner product, denoted ⟨·, ·⟩, that makes the basis {X1, X2, X3, X4} orthonormal. It is given by the symmetric matrix

[ k11 k12 k13 k14 ]
[ k12 k22 k23 k24 ]
[ k13 k23 k33 k34 ]
[ k14 k24 k34 k44 ]

where

k11 = 1 + u(u − nx2) + x2²(1 + n²)/4

k12 = −mu − (1 + n²)x1x2/4 + (1/2)n(ux1 + mx2)

k13 = −nu + (1 + n²)x2/2

k14 = u − nx2/2

k22 = 1 + m² − mnx1 + (1 + n²)x1²/4

k23 = mn − (1 + n²)x1/2

k24 = −m + nx1/2,  k33 = 1 + n²,  k34 = −n,

k44 = 1.

The exponential map, which allows us to relate this Lie algebra to a Lie group called the step-3 Engel group, takes a vector Xp from the Lie algebra at a point p and relates it to a unique integral curve γ(t). The relationship is defined as exp(Xp) = γ(1), where γ′(0) = Xp and γ(0) = p.

THEOREM 1.1 The exponential map exp: g → G is the identity map.

Proof. Let γ(t) = (γ1(t), γ2(t), γ3(t), γ4(t)) be a curve. Suppressing the parameter t, we compute:

Σ_{i=1}^{4} ai Xi = a1(∂/∂x1 − (γ2/2) ∂/∂x3 − u(γ1, γ2, γ3, α) ∂/∂x4)
+ a2(∂/∂x2 + (γ1/2) ∂/∂x3 + m(γ1, γ2, γ3, α) ∂/∂x4)
+ a3(∂/∂x3 + n(γ1, γ2, α) ∂/∂x4) + a4 ∂/∂x4

= a1 ∂/∂x1 + a2 ∂/∂x2 + (−a1γ2/2 + a2γ1/2 + a3) ∂/∂x3
+ (−a1γ3/2 − a1γ1γ2/12 − αa1γ2²/12 − αa2γ3/2 + a2γ1²/12 + αa2γ1γ2/12 + a3γ1/2 + αa3γ2/2 + a4) ∂/∂x4.

Thus the initial value problem

γ′(t) = Σ_{i=1}^{4} ai Xi(γ(t)),  γ(0) = 0

is equivalent to the initial value problem

γ1′(t) = a1
γ2′(t) = a2
γ3′(t) = −a1γ2/2 + a2γ1/2 + a3
γ4′(t) = −a1γ3/2 − a1γ1γ2/12 − αa1γ2²/12 − αa2γ3/2 + a2γ1²/12 + αa2γ1γ2/12 + a3γ1/2 + αa3γ2/2 + a4
γi(0) = 0 for i = 1, 2, 3, 4.

Through integration,

γ1(t) = a1t and γ2(t) = a2t.

Substituting, we get γ3′(t) = a3 and so

γ3(t) = a3t

and γ4′(t) = a4, giving

γ4(t) = a4t.

Thus, γ(t) = (a1t, a2t, a3t, a4t) and so γ(1) = (a1, a2, a3, a4). So

exp(a1X1 + a2X2 + a3X3 + a4X4) = (a1, a2, a3, a4).
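The integration above can be double-checked numerically. The sketch below integrates γ′ = Σ ai Xi(γ) from γ(0) = 0 with a plain RK4 scheme (the values of α and the coefficients ai are arbitrary choices for the check) and recovers γ(1) = (a1, a2, a3, a4).

```python
# Numerical sketch of Theorem 1.1: gamma(1) should equal (a1, a2, a3, a4).
alpha = 0.7
a = (1.3, -0.4, 2.1, 0.9)          # arbitrary Lie-algebra coefficients a_i

def rhs(g):
    """Right-hand side of gamma' = sum a_i X_i(gamma)."""
    g1, g2, g3, g4 = g
    u = g3/2 + g2*(g1 + alpha*g2)/12
    m = -alpha*g3/2 + g1*(g1 + alpha*g2)/12
    n = (g1 + alpha*g2)/2
    a1, a2, a3, a4 = a
    return (a1,
            a2,
            -a1*g2/2 + a2*g1/2 + a3,
            -a1*u + a2*m + a3*n + a4)

def rk4(g, h, steps):
    """Classical fourth-order Runge-Kutta integration."""
    for _ in range(steps):
        k1 = rhs(g)
        k2 = rhs(tuple(gi + h/2*ki for gi, ki in zip(g, k1)))
        k3 = rhs(tuple(gi + h/2*ki for gi, ki in zip(g, k2)))
        k4 = rhs(tuple(gi + h*ki for gi, ki in zip(g, k3)))
        g = tuple(gi + h/6*(w1 + 2*w2 + 2*w3 + w4)
                  for gi, w1, w2, w3, w4 in zip(g, k1, k2, k3, k4))
    return g

gamma1 = rk4((0.0, 0.0, 0.0, 0.0), 1e-3, 1000)   # gamma(1)
```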



The non-abelian algebraic group law is supplied by the Baker-Campbell-Hausdorff formula [10], which is given by

exp X ⋆ exp Y = exp(X + Y + (1/2)[X, Y] + (1/12)[X, [X, Y]] − (1/12)[Y, [X, Y]]).

The higher order brackets are zero because they will all involve bracketing with X4.

PROPOSITION 1 For p = (x1, x2, x3, x4), q = (y1, y2, y3, y4) ∈ G, the group law is

p ⋆ q = (x1 + y1, x2 + y2, x3 + y3 + (1/2)µ, x4 + y4 + (1/2)ν + (1/12)µ(x1 + αx2 − y1 − αy2))

where µ = x1y2 − x2y1 and ν = x1y3 − x3y1 + α(x2y3 − x3y2), in the embedding space R^4.

Proof.

p ⋆ q = (x1, x2, x3, x4) ⋆ (y1, y2, y3, y4)

= exp(x1X1 + x2X2 + x3X3 + x4X4) ⋆ exp(y1X1 + y2X2 + y3X3 + y4X4)

= exp((x1X1 + x2X2 + x3X3 + x4X4) + (y1X1 + y2X2 + y3X3 + y4X4)
+ (1/2)[x1X1 + x2X2 + x3X3 + x4X4, y1X1 + y2X2 + y3X3 + y4X4]
+ (1/12)[x1X1 + x2X2 + x3X3 + x4X4, [x1X1 + x2X2 + x3X3 + x4X4, y1X1 + y2X2 + y3X3 + y4X4]]
− (1/12)[y1X1 + y2X2 + y3X3 + y4X4, [x1X1 + x2X2 + x3X3 + x4X4, y1X1 + y2X2 + y3X3 + y4X4]]).

The only non-zero Lie brackets for

[x1X1 + x2X2 + x3X3 + x4X4, y1X1 + y2X2 + y3X3 + y4X4] are given by

x1y2[X1,X2] + x1y3[X1,X3] + x2y1[X2,X1] + x2y3[X2,X3] + x3y1[X3,X1] + x3y2[X3,X2]

= x1y2X3 + x1y3X4 − x2y1X3 + αx2y3X4 − x3y1X4 − αx3y2X4

= (x1y2 − x2y1)X3 + (x1y3 − x3y1 + α(x2y3 − x3y2))X4

= µX3 + νX4.

The only non-zero Lie brackets for

[x1X1 + x2X2 + x3X3 + x4X4, [x1X1 + x2X2 + x3X3 + x4X4, y1X1 + y2X2 + y3X3 + y4X4]] are

x1µ[X1, X3] + x2µ[X2, X3] = (x1µ + αx2µ)X4.

Finally, the non-zero brackets for

[y1X1 + y2X2 + y3X3 + y4X4, [x1X1 + x2X2 + x3X3 + x4X4, y1X1 + y2X2 + y3X3 + y4X4]] are

y1µ[X1, X3] + y2µ[X2, X3] = (y1µ + αy2µ)X4.

Plugging these values into the formula, we get

exp((x1X1 + x2X2 + x3X3 + x4X4) + (y1X1 + y2X2 + y3X3 + y4X4) + (1/2)(µX3 + νX4) + (1/12)(x1µ + αx2µ)X4 − (1/12)(y1µ + αy2µ)X4).

Combining like terms, we get

exp((x1 + y1)X1 + (x2 + y2)X2 + (x3 + y3 + (1/2)µ)X3 + (x4 + y4 + (1/2)ν + (1/12)µ(x1 + αx2) − (1/12)µ(y1 + αy2))X4).

The proposition then follows since the exponential is the identity. 

COROLLARY 1.0.1 The identity element under group multiplication is (0, 0, 0, 0) and the inverse of p = (x1, x2, x3, x4) is p^{−1} = (−x1, −x2, −x3, −x4).
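The group law and this corollary admit a quick numerical sanity check. The sketch below (α = 0.5 and the sample points are arbitrary choices) tests the identity, inverses, and associativity of ⋆.

```python
# Numerical sketch: identity, inverse, and associativity of the Engel group law.
import random

alpha = 0.5

def star(p, q):
    """Engel group law from Proposition 1."""
    x1, x2, x3, x4 = p
    y1, y2, y3, y4 = q
    mu = x1*y2 - x2*y1
    nu = x1*y3 - x3*y1 + alpha*(x2*y3 - x3*y2)
    return (x1 + y1,
            x2 + y2,
            x3 + y3 + mu/2,
            x4 + y4 + nu/2 + mu*(x1 + alpha*x2 - y1 - alpha*y2)/12)

def close(p, q, tol=1e-9):
    return all(abs(a - b) < tol for a, b in zip(p, q))

random.seed(0)
p = tuple(random.uniform(-2, 2) for _ in range(4))
q = tuple(random.uniform(-2, 2) for _ in range(4))
r = tuple(random.uniform(-2, 2) for _ in range(4))
p_inv = tuple(-c for c in p)
```

Associativity holds exactly here because the law comes from the full Baker-Campbell-Hausdorff formula on a nilpotent Lie algebra; the numeric check only confirms it up to rounding.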

The next theorem tells us how the group law interacts with our vector fields.

THEOREM 1.2 Let p = (x1, x2, x3, x4) ∈ G be any point and let 0 be the identity element. The map

Lp : G → G is left-multiplication by p and DLp its derivative matrix. Then, Xi(p) = DLpXi(0).

This means the vector fields {X1,X2,X3,X4} are left-invariant.

Proof. Using the group law above, we compute, for points p = (x1, x2, x3, x4) and q = (y1, y2, y3, y4), the derivative of the map q ↦ p ⋆ q:

DLp = [ 1, 0, 0, 0 ]
      [ 0, 1, 0, 0 ]
      [ −x2/2, x1/2, 1, 0 ]
      [ −x3/2 − (1/12)(x2(x1 + αx2 − y1 − αy2) + µ),  −αx3/2 + (1/12)(x1(x1 + αx2 − y1 − αy2) − αµ),  (1/2)(x1 + αx2),  1 ]

and so

DLp(0) = [ 1, 0, 0, 0 ]
         [ 0, 1, 0, 0 ]
         [ −x2/2, x1/2, 1, 0 ]
         [ −x3/2 − (1/12)x2(x1 + αx2),  −αx3/2 + (1/12)x1(x1 + αx2),  (1/2)(x1 + αx2),  1 ]

       = [ 1, 0, 0, 0 ]
         [ 0, 1, 0, 0 ]
         [ −x2/2, x1/2, 1, 0 ]
         [ −u(x1, x2, x3, α),  m(x1, x2, x3, α),  n(x1, x2, α),  1 ].

Because Xi(0) = ∂/∂xi, the i-th column of DLp(0) is DLp(0)Xi(0) = Xi(p), and the theorem follows. 
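This computation can be replayed symbolically. The sketch below (assuming sympy) differentiates the group law in q at q = 0 and compares the columns of DLp(0) with the vector fields Xi(p).

```python
# Symbolic sketch: columns of DL_p(0) are the left-invariant fields X_i(p).
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha')
y1, y2, y3, y4 = sp.symbols('y1 y2 y3 y4')

mu = x1*y2 - x2*y1
nu = x1*y3 - x3*y1 + alpha*(x2*y3 - x3*y2)
prod = sp.Matrix([x1 + y1,
                  x2 + y2,
                  x3 + y3 + mu/2,
                  x4 + y4 + nu/2 + mu*(x1 + alpha*x2 - y1 - alpha*y2)/12])

DLp0 = prod.jacobian([y1, y2, y3, y4]).subs({y1: 0, y2: 0, y3: 0, y4: 0})

u = x3/2 + x2*(x1 + alpha*x2)/12
m = -alpha*x3/2 + x1*(x1 + alpha*x2)/12
n = (x1 + alpha*x2)/2
fields = sp.Matrix([[1, 0, 0, 0],
                    [0, 1, 0, 0],
                    [-x2/2, x1/2, 1, 0],
                    [-u, m, n, 1]])   # column i is X_i(p)
```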

Although our Engel group has topological dimension 4, the tangent space to horizontal curves is V1, because those are the generating vector fields. It has dimension 2, so we have a sub-Riemannian space.

There exists a natural metric on G, given by the Carnot-Carathéodory distance. For points p and q,

dC(p, q) = inf_Γ ∫₀¹ ‖γ′(t)‖ dt

where Γ is the set of all curves γ satisfying γ(0) = p, γ(1) = q and γ′(t) ∈ V1. Chow's theorem [1] tells us that dC(p, q) is a metric. By the previous theorem, this metric is invariant under left multiplication.

We now turn to calculus. Let f : G → R be a smooth function. Because of the Lie brackets, vectors in Vi are i-th order derivatives with respect to the parameter of the curve ([1, Prop. 5.16]), where i ∈ {1, 2, 3}. The horizontal gradient, consisting of first order derivatives, uses only X1 and X2, so

∇0f = (X1f, X2f).

We note that this agrees with having V1 as the tangent space for horizontal curves. We use a symmetrized horizontal second derivative matrix (D²f)*, with entries given by

(D²f)*ij = (1/2)(XiXjf + XjXif)

for i, j = 1, 2. In our Engel group,

(D²f)* = [ D11 D12 ]
         [ D21 D22 ]   (1.0.1)

where

D11 = ∂²f/∂x1² − x2 ∂²f/∂x1∂x3 − 2u ∂²f/∂x1∂x4 + x2u ∂²f/∂x3∂x4 + (x2/6) ∂f/∂x4 + (x2²/4) ∂²f/∂x3² + u² ∂²f/∂x4²,

D12 = D21 = ∂²f/∂x1∂x2 + (x1/2) ∂²f/∂x1∂x3 + m ∂²f/∂x1∂x4 − (x2/2) ∂²f/∂x2∂x3 − u ∂²f/∂x2∂x4 − ((x1u + mx2)/2) ∂²f/∂x3∂x4 − (x1x2/4) ∂²f/∂x3² − um ∂²f/∂x4² + ((αx2 − x1)/12) ∂f/∂x4

and

D22 = ∂²f/∂x2² + x1 ∂²f/∂x2∂x3 + 2m ∂²f/∂x2∂x4 + x1m ∂²f/∂x3∂x4 − (αx1/6) ∂f/∂x4 + (x1²/4) ∂²f/∂x3² + m² ∂²f/∂x4².

Further, let the semi-horizontal derivative be given as

∇1f = (X1f, X2f, X3f)

and note this involves vectors in V1 and V2.
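The closed-form entries D11, D12, D22 can be reproduced symbolically for a generic smooth f. The following sketch (sympy, with an undefined function f) compares them with (1/2)(XiXjf + XjXif).

```python
# Symbolic sketch: the stated entries of (D^2 f)* match the symmetrized
# second derivatives for a generic f.
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha')
f = sp.Function('f')(x1, x2, x3, x4)

u = x3/2 + x2*(x1 + alpha*x2)/12
m = -alpha*x3/2 + x1*(x1 + alpha*x2)/12

def X1(g): return sp.diff(g, x1) - x2/2*sp.diff(g, x3) - u*sp.diff(g, x4)
def X2(g): return sp.diff(g, x2) + x1/2*sp.diff(g, x3) + m*sp.diff(g, x4)

D11 = sp.expand(X1(X1(f)))
D12 = sp.expand((X1(X2(f)) + X2(X1(f)))/2)
D22 = sp.expand(X2(X2(f)))

fd = lambda *v: sp.diff(f, *v)
# The closed-form entries stated in the text:
D11_text = (fd(x1, x1) - x2*fd(x1, x3) - 2*u*fd(x1, x4) + x2*u*fd(x3, x4)
            + x2/6*fd(x4) + x2**2/4*fd(x3, x3) + u**2*fd(x4, x4))
D12_text = (fd(x1, x2) + x1/2*fd(x1, x3) + m*fd(x1, x4) - x2/2*fd(x2, x3)
            - u*fd(x2, x4) - (x1*u + m*x2)/2*fd(x3, x4) - x1*x2/4*fd(x3, x3)
            - u*m*fd(x4, x4) + (alpha*x2 - x1)/12*fd(x4))
D22_text = (fd(x2, x2) + x1*fd(x2, x3) + 2*m*fd(x2, x4) + x1*m*fd(x3, x4)
            - alpha*x1/6*fd(x4) + x1**2/4*fd(x3, x3) + m**2*fd(x4, x4))
```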

DEFINITION 1.0.1 A function f : G → R is C²sub if ∇1f and XiXjf are continuous for i, j = 1, 2.

A function that is C²sub is not necessarily C² in the Euclidean sense. For instance, the function f(x1, x2, x3, x4) = (x3)^{3/2} is not C² in the Euclidean sense at the origin, since the (Euclidean) second derivative is undefined there. But we have the following proposition:

PROPOSITION 2 The function f(x1, x2, x3, x4) = (x3)^{3/2} is C²sub in the Engel group.

Proof. Using Theorem 1.1 and Proposition 1, we get

X1f(0, 0, 0, 0) = d/dt f((0, 0, 0, 0) ⋆ exp(tX1))|t=0 = d/dt f(t, 0, 0, 0)|t=0 = d/dt 0|t=0 = 0.

Similarly, X2f(0, 0, 0, 0) = 0 and

X3f(0, 0, 0, 0) = (3/2)(t)^{1/2}|t=0 = 0.

We now need to check XiXjf(0, 0, 0, 0) for i, j = 1, 2. Using the Baker-Campbell-Hausdorff formula, we get

X1X2f(0, 0, 0, 0) = ∂²/∂t∂s f((0, 0, 0, 0) ⋆ exp(tX1) ⋆ exp(sX2))|s,t=0

= ∂²/∂t∂s f(exp(tX1 + sX2 + (1/2)stX3))|s,t=0 = ∂²/∂t∂s ((1/2)st)^{3/2}|s,t=0

= ∂/∂t ((3t/4)((1/2)st)^{1/2})|s=0 |t=0 = 0.

Similarly, X2X1f(0, 0, 0, 0) = 0. For X1X1f(0, 0, 0, 0) we have

X1X1f(0, 0, 0, 0) = ∂²/∂t∂s f((0, 0, 0, 0) ⋆ exp(tX1) ⋆ exp(sX1))|s,t=0

= ∂²/∂t∂s f(exp((s + t)X1))|s,t=0 = ∂²/∂t∂s 0|s,t=0 = 0,

with the same for X2X2f(0, 0, 0, 0). The proposition is proved. 

With the above derivatives, and using the horizontal divergence, taken with respect to

X1 and X2, we define the horizontal p-Laplacian of a smooth function f for 1 < p < ∞ by

∆pf = div(‖∇0f‖^{p−2} ∇0f) = X1(‖∇0f‖^{p−2} X1f) + X2(‖∇0f‖^{p−2} X2f)

= ‖∇0f‖^{p−2}(X1X1f + X2X2f) + (p − 2)‖∇0f‖^{p−4} ⟨(D²f)* ∇0f, ∇0f⟩.
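The divergence form and the expanded form of ∆p can be compared symbolically. The sketch below does this for the sample choice p = 4 and an arbitrary polynomial f (both choices are only for illustration; the identity holds for general smooth f where ∇0f ≠ 0).

```python
# Symbolic sketch: divergence form of Delta_p f equals the expanded form
# for a sample f and p = 4.
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha')

u = x3/2 + x2*(x1 + alpha*x2)/12
m = -alpha*x3/2 + x1*(x1 + alpha*x2)/12

def X1(g): return sp.diff(g, x1) - x2/2*sp.diff(g, x3) - u*sp.diff(g, x4)
def X2(g): return sp.diff(g, x2) + x1/2*sp.diff(g, x3) + m*sp.diff(g, x4)

f = x1**2*x2 + x3*x1 + x4          # arbitrary smooth sample function
p = 4
g1, g2 = X1(f), X2(f)
norm2 = g1**2 + g2**2              # |grad_0 f|^2

div_form = sp.expand(X1(norm2**((p - 2)/sp.S(2))*g1)
                     + X2(norm2**((p - 2)/sp.S(2))*g2))

D11, D22 = X1(X1(f)), X2(X2(f))
D12 = (X1(X2(f)) + X2(X1(f)))/2
quad = D11*g1**2 + 2*D12*g1*g2 + D22*g2**2   # <(D^2 f)* grad, grad>
expanded = sp.expand(norm2**((p - 2)/sp.S(2))*(D11 + D22)
                     + (p - 2)*norm2**((p - 4)/sp.S(2))*quad)
```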

8 Letting p run to infinity gives us the infinite Laplacian, defined as

∆∞f = ⟨(D²f)* ∇0f, ∇0f⟩.

9 Chapter 2

Carnot Jets and Viscosity Solutions

Following [2], we have the Taylor Theorem:

THEOREM 2.1 For a smooth function f : G → R, the Taylor formula at the point p0 is:

f(p) = f(p0) + ⟨∇1f(p0), (p0^{−1}p)_{V1⊕V2}⟩ + (1/2)⟨(D²f(p0))* (p0^{−1}p)_{V1}, (p0^{−1}p)_{V1}⟩ + o((d(p0, p))²)   (2.0.1)

where (p0^{−1}p)_{V1} denotes p0^{−1}p projected onto V1 and (p0^{−1}p)_{V1⊕V2} denotes p0^{−1}p projected onto V1 ⊕ V2. We remind ourselves that the exponential mapping is the identity.

Proof. Let p = (x1, x2, x3, x4) and p0 = (x1^0, x2^0, x3^0, x4^0). Following [3], we will rewrite the Taylor polynomial as:

f(p) + o((d(p0, p))²) = f(p0) + (x1 − x1^0)X1f(p0) + (x2 − x2^0)X2f(p0)
+ (x3 − x3^0 + (1/2)(x1x2^0 − x1^0x2))X3f(p0) + (1/2)(x1 − x1^0)²X1X1f(p0)
+ (1/2)(x2 − x2^0)²X2X2f(p0) + (1/2)(x1 − x1^0)(x2 − x2^0)X1X2f(p0)
+ (1/2)(x1 − x1^0)(x2 − x2^0)X2X1f(p0)

and we will call the right-hand side polynomial P(p). We then have

X1P(p) = X1f(p0) + (1/2)x2^0X3f(p0) − (1/2)x2X3f(p0)
+ (x1 − x1^0)X1X1f(p0) + (1/2)(x2 − x2^0)X1X2f(p0) + (1/2)(x2 − x2^0)X2X1f(p0)

X2P(p) = X2f(p0) − (1/2)x1^0X3f(p0) + (1/2)x1X3f(p0)
+ (x2 − x2^0)X2X2f(p0) + (1/2)(x1 − x1^0)X1X2f(p0) + (1/2)(x1 − x1^0)X2X1f(p0)

X3P (p) = X3f(p0)

X1X1P (p) = X1X1f(p0)

X2X2P(p) = X2X2f(p0)

X2X1P(p) = −(1/2)X3f(p0) + (1/2)X1X2f(p0) + (1/2)X2X1f(p0)

and X1X2P(p) = (1/2)X3f(p0) + (1/2)X1X2f(p0) + (1/2)X2X1f(p0).

We then have X1P(p0) = X1f(p0), X2P(p0) = X2f(p0), and X3P(p0) = X3f(p0). And we have X1X1P(p0) = X1X1f(p0), X2X2P(p0) = X2X2f(p0), and (1/2)(X2X1P(p0) + X1X2P(p0)) = (1/2)(X1X2f(p0) + X2X1f(p0)). Note also that since X3 = [X1, X2], we have X1X2P(p0) = X1X2f(p0) and X2X1P(p0) = X2X1f(p0). By definition, Equation (2.0.1) gives the second-order Taylor polynomial.
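The jet computations in this proof can be replayed symbolically. The sketch below builds P with unspecified constants c1 = X1f(p0), ..., c21 = X2X1f(p0) (the constant term f(p0) is omitted since derivatives kill it) and evaluates the derivatives of P at p0.

```python
# Symbolic sketch: the Taylor polynomial P reproduces the prescribed jets at p0.
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha')
a1, a2, a3, a4 = sp.symbols('a1 a2 a3 a4')      # coordinates of p0
c1, c2, c3, c11, c22, c12, c21 = sp.symbols('c1 c2 c3 c11 c22 c12 c21')

u = x3/2 + x2*(x1 + alpha*x2)/12
m = -alpha*x3/2 + x1*(x1 + alpha*x2)/12
n = (x1 + alpha*x2)/2

def X1(g): return sp.diff(g, x1) - x2/2*sp.diff(g, x3) - u*sp.diff(g, x4)
def X2(g): return sp.diff(g, x2) + x1/2*sp.diff(g, x3) + m*sp.diff(g, x4)
def X3(g): return sp.diff(g, x3) + n*sp.diff(g, x4)

P = ((x1 - a1)*c1 + (x2 - a2)*c2
     + (x3 - a3 + (x1*a2 - a1*x2)/2)*c3
     + (x1 - a1)**2/2*c11 + (x2 - a2)**2/2*c22
     + (x1 - a1)*(x2 - a2)/2*(c12 + c21))

at_p0 = {x1: a1, x2: a2, x3: a3, x4: a4}
```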



Because smoothness is too restrictive a requirement, we use the following definition to introduce some flexibility. This definition will invoke Taylor polynomials. Let us recall that a function f is defined to be upper semicontinuous if

lim sup_{x→x0} f(x) ≤ f(x0)

and a function g is defined to be lower semicontinuous if

lim inf_{x→x0} g(x) ≥ g(x0).

Using these functions, we are able to define the superjets of our space.

DEFINITION 2.0.2 Let f : G → R be an upper semicontinuous function and let S² be the set of all 2 × 2 symmetric matrices. For η ∈ V1 ⊕ V2 and X ∈ S², the following inequality motivates the definition of the second-order superjet:

f(p) ≤ f(p0) + ⟨η, (p0^{−1}p)_{V1⊕V2}⟩ + (1/2)⟨X (p0^{−1}p)_{V1}, (p0^{−1}p)_{V1}⟩ + o((d(p0, p))²) as p → p0.   (2.0.2)

The second-order superjet of f at p0, denoted J^{2,+}f(p0), is defined as

J^{2,+}f(p0) = {(η, X) ∈ (V1 ⊕ V2) × S² : Equation (2.0.2) holds}.

There is also the second-order subjet of the lower semicontinuous function g at p0, which is represented by J^{2,−}g(p0), and is defined by

J^{2,−}g(p0) = −J^{2,+}(−g)(p0).

Note that for subjets the inequality in (2.0.2) will flip to ≥. The set-theoretic closure of J^{2,+}f(p0), which will be denoted J̄^{2,+}f(p0), is given by: (η, X) ∈ J̄^{2,+}f(p0) if there is a sequence {(pi, f(pi), ηi, Xi)} ∈ G × R × g × S² so that as i → ∞, {(pi, f(pi), ηi, Xi)} → (p0, f(p0), η, X) with (ηi, Xi) ∈ J^{2,+}f(pi). Given an upper semicontinuous function f, we define a set of test functions that touch f from above at p0, denoted by TA(f, p0), and given a lower semicontinuous function g, we can also define a set of test functions that touch g from below at p0, denoted TB(g, p0). Thus:

TA(f, p0) = {φ : G → R : φ ∈ C²sub(p0), φ(p0) = f(p0) and φ(p) > f(p) for p near p0}

and

TB(g, p0) = {φ : G → R : φ ∈ C²sub(p0), φ(p0) = g(p0) and φ(p) < g(p) for p near p0}.

Thus we can define another pair of sets, K^{2,+}f(p0) and K^{2,−}g(p0), where

K^{2,+}f(p0) = {(∇1φ(p0), (D²φ(p0))*) : φ ∈ TA(f, p0)}

and K^{2,−}g(p0) = {(∇1φ(p0), (D²φ(p0))*) : φ ∈ TB(g, p0)}.

These definitions motivate the following lemma, from [2]. The proof is omitted.

LEMMA 2.1 [2, Lemma 2.2] Also see [6]. Given an upper semicontinuous function f and lower semicontinuous function g, then

J^{2,+}f(p0) = K^{2,+}f(p0) and J^{2,−}g(p0) = K^{2,−}g(p0).

These jets will prove to be very useful in creating a type of solution to partial differential equations. In general, a k-th order partial differential equation is solved by a classical solution if there is a k-times differentiable function on an interval that satisfies the conditions of the equation. For instance, the first order differential equation with u : R² → R,

∂u/∂x + u = 0, where u = u(x, y),

has a solution u = e^{−x}f(y), where f(y) is an arbitrary function of y. This is a classical solution over the region R² [9]. Unfortunately, there are a great many partial differential equations for which a classical solution does not exist. For instance, the Eikonal equation from [8], given by

|u′(x)| = 1, for x ∈ (−1, 1),

with initial conditions u(−1) = u(1) = 0. This equation does not have a differentiable function which satisfies it over the interval (−1, 1) [8]. However, if we relax the requirement of differentiability to only requiring continuity, we discover two possible solutions: u(x) = −|x| + 1 and v(x) = |x| − 1 = −u(x). As discussed in [7], because the functions are merely continuous and not necessarily differentiable, we must "push" differentiation onto appropriate test functions. This is the purpose of the jets discussed above: they are the result of this process. We consider a class of equations given by:
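A small numerical illustration of the eikonal example (the sample grid is an arbitrary choice): u(x) = −|x| + 1 is continuous, vanishes at ±1, and satisfies |u′| = 1 wherever it is differentiable.

```python
# Numerical sketch: u(x) = 1 - |x| solves the eikonal equation away from the kink.
def u(x):
    return 1.0 - abs(x)

def uprime(x, h=1e-7):
    """Central difference approximation of u'(x)."""
    return (u(x + h) - u(x - h)) / (2*h)

# Sample points in (-1, 1), avoiding the non-differentiable kink at 0.
samples = [x/10 for x in range(-9, 10) if x != 0]
```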

F(p, f(p), ∇1f(p), (D²f(p))*) = 0,

where the function F , defined as

F : G × R × g × S² → R,

fulfills the inequality F (p, r, η, X) ≤ F (p, s, η, Y )

when r ≤ s and Y ≤ X. This function F is proper as defined in [7]. The p-Laplace equation, defined for 1 < p < ∞ by

−(‖∇0f‖^{p−2} tr((D²f)*) + (p − 2)‖∇0f‖^{p−4} ⟨(D²f)* ∇0f, ∇0f⟩) = 0

and the infinite Laplace equation

−⟨(D²f)* ∇0f, ∇0f⟩ = 0

are examples of such equations.

Chapter 3

Sub-Riemannian Maximum Principle

We now state Lemma 3 from [2] which allows us to express our semi-horizontal derivative and our symmetrized second derivative matrix in terms of their Euclidean counterparts.

LEMMA 3.1 Recalling the definitions of u, m and n in Chapter 1, we define the matrix A as

A = [ 1  0  −x2/2  −u ]
    [ 0  1   x1/2   m ]

and the matrix B as

B = [ 0  0  1  n ].

Let f be a smooth function, with ∇eucl f its Euclidean gradient, and let D²eucl f be the Euclidean second-order derivative matrix of f. Let A^T denote the transpose of the matrix A. Then

∇1f = A · ∇eucl f ⊕ B · ∇eucl f.

Further, for all t ∈ R²,

⟨(D²f)* t, t⟩ = ⟨(A · D²eucl f · A^T + M) t, t⟩

where the matrix M is given by

M = [ (1/6)x2 ∂f/∂x4            (1/12)(αx2 − x1) ∂f/∂x4 ]
    [ (1/12)(αx2 − x1) ∂f/∂x4   −(1/6)αx1 ∂f/∂x4 ].

Proof. A quick computation shows that

A · ∇eucl f = ( ∂f/∂x1 − (x2/2) ∂f/∂x3 − u ∂f/∂x4,  ∂f/∂x2 + (x1/2) ∂f/∂x3 + m ∂f/∂x4 )

and

B · ∇eucl f = ∂f/∂x3 + n ∂f/∂x4,

so the direct sum gives us

∇1f = ( ∂f/∂x1 − (x2/2) ∂f/∂x3 − u ∂f/∂x4,  ∂f/∂x2 + (x1/2) ∂f/∂x3 + m ∂f/∂x4,  ∂f/∂x3 + n ∂f/∂x4 ).

For the second derivative matrix, matrix multiplication gives

A · D²eucl f · A^T = [ T11 T12 ]
                     [ T21 T22 ]

where

T11 = ∂²f/∂x1² − x2 ∂²f/∂x1∂x3 − 2u ∂²f/∂x1∂x4 + x2u ∂²f/∂x3∂x4 + (x2²/4) ∂²f/∂x3² + u² ∂²f/∂x4²,

T12 = T21 = ∂²f/∂x1∂x2 + (x1/2) ∂²f/∂x1∂x3 + m ∂²f/∂x1∂x4 − (x2/2) ∂²f/∂x2∂x3 − u ∂²f/∂x2∂x4 − ((x1u + mx2)/2) ∂²f/∂x3∂x4 − (x1x2/4) ∂²f/∂x3² − um ∂²f/∂x4²,

and

T22 = ∂²f/∂x2² + x1 ∂²f/∂x2∂x3 + 2m ∂²f/∂x2∂x4 + x1m ∂²f/∂x3∂x4 + (x1²/4) ∂²f/∂x3² + m² ∂²f/∂x4².

Our choice of M provides the missing ∂f/∂x4 terms from Equation (1.0.1). We then have A · D²eucl f · A^T + M = (D²f)*. 
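Lemma 3.1 can be confirmed symbolically for a generic f. The sketch below (sympy) checks both the gradient identity and A · D²eucl f · A^T + M = (D²f)*.

```python
# Symbolic sketch of Lemma 3.1 for a generic smooth f.
import sympy as sp

x1, x2, x3, x4, alpha = sp.symbols('x1 x2 x3 x4 alpha')
f = sp.Function('f')(x1, x2, x3, x4)

u = x3/2 + x2*(x1 + alpha*x2)/12
m = -alpha*x3/2 + x1*(x1 + alpha*x2)/12
n = (x1 + alpha*x2)/2

def X1(g): return sp.diff(g, x1) - x2/2*sp.diff(g, x3) - u*sp.diff(g, x4)
def X2(g): return sp.diff(g, x2) + x1/2*sp.diff(g, x3) + m*sp.diff(g, x4)
def X3(g): return sp.diff(g, x3) + n*sp.diff(g, x4)

A = sp.Matrix([[1, 0, -x2/2, -u], [0, 1, x1/2, m]])
B = sp.Matrix([[0, 0, 1, n]])
grad = sp.Matrix([sp.diff(f, v) for v in (x1, x2, x3, x4)])
hess = sp.hessian(f, (x1, x2, x3, x4))
f4 = sp.diff(f, x4)
M = sp.Matrix([[x2*f4/6, (alpha*x2 - x1)*f4/12],
               [(alpha*x2 - x1)*f4/12, -alpha*x1*f4/6]])

lhs = A*hess*A.T + M
star = sp.Matrix([[X1(X1(f)), (X1(X2(f)) + X2(X1(f)))/2],
                  [(X1(X2(f)) + X2(X1(f)))/2, X2(X2(f))]])
```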

By relating our derivatives to their Euclidean counterparts, we now have a way to relate our Euclidean superjets to the jets we created for our Engel group.

COROLLARY 3.1.1 Let (η, X) ∈ J̄^{2,+}_eucl f(p), where (η, X) ∈ R⁴ × S⁴. Then

(A · η ⊕ B · η, A X A^T + M(η, p)) ∈ J̄^{2,+} f(p).

The matrix M(η, p) in this case is

M(η, p) = [ (1/6)x2η4           (1/12)(αx2 − x1)η4 ]
          [ (1/12)(αx2 − x1)η4   −(1/6)αx1η4 ].

Proof. Our goal is to convert the Euclidean Taylor inequality for the upper semicontinuous function u,

u(p) ≤ u(p0) + ⟨η, p − p0⟩_E + (1/2)⟨X(p − p0), p − p0⟩_E + o(|p − p0|²),

to

u(p) ≤ u(p0) + ⟨A · η ⊕ B · η, (p0^{−1}p)_{V1⊕V2}⟩ + (1/2)⟨A X A^T (p0^{−1}p)_{V1}, (p0^{−1}p)_{V1}⟩ + (1/2)⟨M (p0^{−1}p)_{V1}, (p0^{−1}p)_{V1}⟩ + o(d(p, p0)²),

where ⟨·, ·⟩_E is the Euclidean inner product and ⟨·, ·⟩ is the Engel inner product. Further, p = (y1, y2, y3, y4) and p0 = (x1, x2, x3, x4).

First, we will look at the error term. Suppose W is o(|p − p0|²). Then

W / d(p, p0)² = (W / |p − p0|²) · (|p − p0|² / d(p, p0)²).

The first factor goes to zero, and the second factor is bounded, by Prop. 1.1 of [11]. Thus the right-hand side goes to zero as p → p0, and W is o(d(p, p0)²). The Taylor theorem thus can now be made to read:

u(p) ≤ u(p0) + ⟨η, p − p0⟩_E + (1/2)⟨X(p − p0), p − p0⟩_E + o(d(p, p0)²).

Before proceeding, let us define the 4 × 4 matrix

A = [ 1  0  −x2/2  −u ]
    [ 0  1   x1/2   m ]
    [ 0  0   1      n ]
    [ 0  0   0      1 ],

the vector η = (η1, η2, η3, η4), and the symmetric matrix X = (Xij), i, j = 1, ..., 4.

Then

⟨Aη, p0^{−1}p⟩ = (η1 − (x2/2)η3 + (−x1x2/12 − αx2²/12 − x3/2)η4)(y1 − x1)
+ (η2 + (x1/2)η3 + (x1²/12 + αx1x2/12 − αx3/2)η4)(y2 − x2)
+ (y3 − x3 + (1/2)(x2y1 − x1y2))(η3 + (x1/2 + αx2/2)η4)
+ (y4 − x4 + (1/2)(x3y1 − x1y3) + (α/2)(x3y2 − x2y3)
− (1/12)(x1 + y1)(x2y1 − x1y2) − (α/12)(x2 + y2)(x2y1 − x1y2))η4.

Distributing and collecting like terms, the η3 terms combine as

−(x2/2)(y1 − x1)η3 + (x1/2)(y2 − x2)η3 + (y3 − x3)η3 + (1/2)(x2y1 − x1y2)η3 = (y3 − x3)η3,

and the x3- and y3-terms in the η4 coefficient cancel, leaving

⟨Aη, p0^{−1}p⟩ = (y1 − x1)η1 + (y2 − x2)η2 + (y3 − x3)η3
+ (y4 − x4 + (1/12)(x1 − y1 + α(x2 − y2))(x2y1 − x1y2))η4.

Further, ⟨η, p − p0⟩_E is

(y1 − x1)η1 + (y2 − x2)η2 + (y3 − x3)η3 + (y4 − x4)η4,

and so ⟨η, p − p0⟩_E − ⟨Aη, p0^{−1}p⟩ is

(1/12)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)η4.

Thus, the Euclidean Taylor inequality u(p) ≤ u(p0) + ⟨η, p − p0⟩_E + (1/2)⟨X(p − p0), p − p0⟩_E + o(|p − p0|²) can be rewritten as

u(p) ≤ u(p0) + ⟨Aη, p0^{−1}p⟩ + (1/12)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)η4
+ (1/2)⟨X(p − p0), p − p0⟩_E + o(d(p, p0)²).

Similarly, ⟨A X A^T p0^{−1}p, p0^{−1}p⟩ − ⟨X(p − p0), p − p0⟩_E is

(1/144)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)
× (x1²y2X44 − αx2²y1X44 + x2(24X24 + y1(y1 + αy2)X44)
+ x1(24X14 − X44(x2(y1 − αy2) + y2(y1 + αy2)))
+ 24(x3X34 + x4X44 − y1X14 − y2X24 − y3X34 + y4X44)).

Let us then look at what is left. After simplifying, the coefficient of X14 is given by

(1/144)(24x1 − 24y1)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1).

However, x1y2 − x2y1 = y2(x1 − y1) + y1(y2 − x2) is O(d(p, p0)) since we are in a bounded domain, and thus y1 and y2 are bounded. Also, 24(x1 − y1) is O(d(p, p0)). So this coefficient is o(d²(p0, p)), and is part of the error term. Similarly, the coefficient of X24 is given by

(1/144)(24x2 − 24y2)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)

and is also in the error term.

Further, we have the coefficient of X34 given by

(1/144)(24x3 − 24y3)(x1y2 − x2y1)(x1 − y1 + α(x2 − y2)).

Since 24x3 − 24y3 approaches 0 as p → p0, this term overall is o(d²(p0, p)) and thus part of the error.

Finally, the X44 coefficient is

(1/144)(x1y2 − x2y1)(x1 − y1 + α(x2 − y2))
× (24(x4 − y4) − x1x2y1 − αx2²y1 + x2y1² + x1²y2 + αx1x2y2 − x1y1y2 + αx2y1y2 − αx1y2²).

Since we again have an (x1y2 − x2y1)(x1 − y1 + α(x2 − y2)) factor, we know this will be O(d²(p0, p)).

Again, through a similarity of terms, we see that the rest of the multiplication tends to 0 as p → p0. That is, this term is o(d²(p0, p)) and part of the error. Thus, ⟨X(p − p0), p − p0⟩_E equals ⟨A X A^T p0^{−1}p, p0^{−1}p⟩ plus elements that are o(d²(p, p0)).

Because the first and second coordinates of p0^{−1}p are O(d(p, p0)), the third coordinate of p0^{−1}p is O(d²(p, p0)) and the fourth coordinate of p0^{−1}p is o(d²(p, p0)), the Taylor inequality

u(p) ≤ u(p0) + ⟨Aη, p0^{−1}p⟩ + (1/12)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)η4 + (1/2)⟨A X A^T p0^{−1}p, p0^{−1}p⟩ + o(d²(p, p0))

can be rewritten as

u(p) ≤ u(p0) + ⟨A · η ⊕ B · η, (p0^{−1}p)_{V1⊕V2}⟩ + (1/12)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)η4 + (1/2)⟨A X A^T (p0^{−1}p)_{V1}, (p0^{−1}p)_{V1}⟩ + o(d²(p, p0)).

Recall that (p0^{−1}p)_{V1} is p0^{−1}p projected onto V1 and (p0^{−1}p)_{V1⊕V2} is p0^{−1}p projected onto V1 ⊕ V2. Thus, we are just left with the term (1/12)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)η4. But we know the matrix M(η, p) is

M(η, p) = [ (1/6)x2η4           (1/12)(αx2 − x1)η4 ]
          [ (1/12)(αx2 − x1)η4   −(1/6)αx1η4 ]

and the first two components of the Engel multiplication p0^{−1}p are (y1 − x1, y2 − x2), so

(1/2) (y1 − x1, y2 − x2) M (y1 − x1, y2 − x2)^T

is our leftover term. Thus, (1/12)(x1 − y1 + α(x2 − y2))(x1y2 − x2y1)η4 equals (1/2)⟨M (p0^{−1}p)_{V1}, (p0^{−1}p)_{V1}⟩, and so our result holds. 

With this corollary, we can make use of the Euclidean results from [2]. Specifically, we use Theorem 3.2 and Remark 3.8.

THEOREM 3.1 Let ε ∈ R^+. Let f be an upper semicontinuous function in R⁴, g a lower semicontinuous function in R⁴, and φ a C² function in R⁸. Let O be a locally compact subset of R⁴, let (p̂, q̂) be a maximum point of f(p) − g(q) − φ(p, q) over O × O, and let the matrix M ∈ S⁸ be given by

M = [ D²ppφ(p̂, q̂)  D²pqφ(p̂, q̂) ]
    [ D²qpφ(p̂, q̂)  D²qqφ(p̂, q̂) ]

where p = (x1, x2, x3, x4) and q = (y1, y2, y3, y4). This leads us to the three 4 × 4 matrices whose (ij)-th entries are given by

(D²ppφ(p̂, q̂))ij = ∂²φ(p̂, q̂)/∂xi∂xj,

(D²pqφ(p̂, q̂))ij = (D²qpφ(p̂, q̂))ji = ∂²φ(p̂, q̂)/∂xi∂yj,

and (D²qqφ(p̂, q̂))ij = ∂²φ(p̂, q̂)/∂yi∂yj.

Then there exist matrices X, Y ∈ S⁴ such that

(Dpφ(p̂, q̂), X) ∈ J̄^{2,+}_eucl f(p̂) and (−Dqφ(p̂, q̂), Y) ∈ J̄^{2,−}_eucl g(q̂).

In addition, for all vectors a, b ∈ R⁴,

⟨X a, a⟩ − ⟨Y b, b⟩ ≤ ⟨(εM² + M)(a ⊕ b), (a ⊕ b)⟩.

This inequality allows us to generate an upper bound for the matrix difference in our Engel group.

THEOREM 3.2 Given a (Euclidean) C² function φ : G × G → R, let the semi-horizontal gradient at the point r ∈ G be denoted ∇1,rφ and the symmetrized second derivative matrix at r be denoted (D²rφ)*. Let ε ∈ R^+. Let f, g, p, q, p̂, q̂, O, and M be as in Theorem 3.1. Then there are matrices Xε, Yε ∈ S² so that

(∇1,pφ(p̂, q̂), Xε) ∈ J̄^{2,+} f(p̂) and (−∇1,qφ(p̂, q̂), Yε) ∈ J̄^{2,−} g(q̂).

Furthermore, for all vectors ξ ∈ V1,

⟨Xε ξ, ξ⟩ − ⟨Yε ξ, ξ⟩ ≤ ⟨(εM² + M)(A(p̂)^T ξ ⊕ A(q̂)^T ξ), (A(p̂)^T ξ ⊕ A(q̂)^T ξ)⟩.

Proof. We omit the details, as the proof is the same as Theorem 3.4 in [2]. 

With our Carnot group law, and using Theorem 3.2, we get the Carnot group maximum principle for our Engel group.

LEMMA 3.2 Let Ω ⊂ G be a bounded domain, let τ ∈ R^+, and let u be an upper semicontinuous function and v a lower semicontinuous function. Let p = (x1, x2, x3, x4) and q = (y1, y2, y3, y4) and let k ∈ 2 · N be an even whole number. Define φ(p, q) by

φ(p, q) = (1/k)(x1 − y1)^k + (1/k)(x2 − y2)^k + (1/k)(x3 − y3 + (1/2)(x2y1 − x1y2))^k
+ (1/k)(x4 − y4 + (1/2)(x3y1 − x1y3) + (α/2)(x3y2 − x2y3)
+ (1/12)(x1 + y1)(x2y1 − x1y2) + (α/12)(x2 + y2)(x2y1 − x1y2))^k

= (1/k) Σ_{i=1}^{4} (φi(p, q))^k,

where

φ1(p, q) = x1 − y1

φ2(p, q) = x2 − y2

φ3(p, q) = x3 − y3 + (1/2)(x2y1 − x1y2)

φ4(p, q) = x4 − y4 + (1/2)(x3y1 − x1y3) + (α/2)(x3y2 − x2y3)
+ (1/12)(x1 + y1)(x2y1 − x1y2) + (α/12)(x2 + y2)(x2y1 − x1y2).
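The components φi appear to be exactly the coordinates of p ⋆ q^{−1} under the group law of Proposition 1. The sketch below (α = 0.5 and the sample points are arbitrary choices) checks this numerically.

```python
# Numerical sketch: (phi1, phi2, phi3, phi4) are the coordinates of p * q^{-1}.
alpha = 0.5

def star(p, q):
    """Engel group law from Proposition 1."""
    x1, x2, x3, x4 = p
    y1, y2, y3, y4 = q
    mu = x1*y2 - x2*y1
    nu = x1*y3 - x3*y1 + alpha*(x2*y3 - x3*y2)
    return (x1 + y1, x2 + y2, x3 + y3 + mu/2,
            x4 + y4 + nu/2 + mu*(x1 + alpha*x2 - y1 - alpha*y2)/12)

def phi_components(p, q):
    """The components phi_i of Lemma 3.2."""
    x1, x2, x3, x4 = p
    y1, y2, y3, y4 = q
    phi3 = x3 - y3 + (x2*y1 - x1*y2)/2
    phi4 = (x4 - y4 + (x3*y1 - x1*y3)/2 + alpha*(x3*y2 - x2*y3)/2
            + (x1 + y1)*(x2*y1 - x1*y2)/12
            + alpha*(x2 + y2)*(x2*y1 - x1*y2)/12)
    return (x1 - y1, x2 - y2, phi3, phi4)

p = (0.3, -1.2, 0.8, 2.0)
q = (1.1, 0.4, -0.7, 0.5)
q_inv = tuple(-c for c in q)
```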

Let the points pτ , qτ ∈ G be the local maximum in Ω × Ω of u(p) − v(q) − τφ(p, q) and let u − v have a positive interior local maximum such that

sup(u − v) > 0. Ω

Then the following hold:

1.

lim_{τ→∞} τφ(pτ, qτ) = 0.

21 2. There exists a point pˆ ∈ Ω such that pτ → pˆ (and so does qτ by part 1) and

sup(u − v) = u(ˆp) − v(ˆp) > 0. Ω

3. There exist symmetric matrices Xτ, Yτ and a vector ητ ∈ V1 ⊕ V2, namely ητ = ∇1,pφ(pτ, qτ), so that

(τητ, Xτ) ∈ J̄^{2,+} u(pτ) and (τητ, Yτ) ∈ J̄^{2,−} v(qτ).

4. For any vector ξ ∈ V1, we have

⟨Xτ ξ, ξ⟩ − ⟨Yτ ξ, ξ⟩ ≤ τ⟨M²(A(pτ)^T ξ ⊕ A(qτ)^T ξ), (A(pτ)^T ξ ⊕ A(qτ)^T ξ)⟩
≲ τ‖M‖² ‖ξ‖² ∼ τ φ(pτ, qτ)^{(2k−4)/k} ‖ξ‖²

where

kMk = sup{|λ| : λ is an eigenvalue of M}.

In particular, if ‖ξ‖ ∼ τ φ(pτ, qτ)^{(k−1)/k}, we have

⟨Xτ ξ, ξ⟩ − ⟨Yτ ξ, ξ⟩ ≲ τ³ φ(pτ, qτ)^{(4k−6)/k}.

Proof. The proof of (1) and (2) is the same as [3]. Next, writing pτ = (x1τ, x2τ, x3τ, x4τ) and qτ = (y1τ, y2τ, y3τ, y4τ), we have for the Euclidean derivatives with respect to p:

∂φ(pτ, qτ)/∂x1 = (x1τ − y1τ)^{k−1} − (1/2)y2τ φ3(pτ, qτ)^{k−1}
+ (1/12)(−2x1τy2τ − y1τy2τ − α(y2τ)² + x2τ(y1τ − αy2τ) − 6y3τ) φ4(pτ, qτ)^{k−1}

∂φ(pτ, qτ)/∂x2 = (x2τ − y2τ)^{k−1} + (1/2)y1τ φ3(pτ, qτ)^{k−1}
+ (1/12)((y1τ)² + x1τ(y1τ − αy2τ) + α(2x2τy1τ + y1τy2τ − 6y3τ)) φ4(pτ, qτ)^{k−1}

∂φ(pτ, qτ)/∂x3 = φ3(pτ, qτ)^{k−1} + (1/2)(y1τ + αy2τ) φ4(pτ, qτ)^{k−1}

∂φ(pτ, qτ)/∂x4 = φ4(pτ, qτ)^{k−1},

and for the Euclidean derivatives with respect to q:

−∂φ(pτ, qτ)/∂y1 = (x1τ − y1τ)^{k−1} − (1/2)x2τ φ3(pτ, qτ)^{k−1}
− (1/12)(6x3τ + 2x2τy1τ + x1τ(x2τ − y2τ) + αx2τ(x2τ + y2τ)) φ4(pτ, qτ)^{k−1}

−∂φ(pτ, qτ)/∂y2 = (x2τ − y2τ)^{k−1} + (1/2)x1τ φ3(pτ, qτ)^{k−1}
+ (1/12)((x1τ)² − α(6x3τ + x2τy1τ) + x1τ(y1τ + α(x2τ + 2y2τ))) φ4(pτ, qτ)^{k−1}

−∂φ(pτ, qτ)/∂y3 = φ3(pτ, qτ)^{k−1} + (1/2)(x1τ + αx2τ) φ4(pτ, qτ)^{k−1}

−∂φ(pτ, qτ)/∂y4 = φ4(pτ, qτ)^{k−1}.

Claim 3.1 We have the following relations:

A(pτ) ∇eucl,p φ(pτ, qτ) = −A(qτ) ∇eucl,q φ(pτ, qτ)

and

B(pτ) ∇eucl,p φ(pτ, qτ) = −B(qτ) ∇eucl,q φ(pτ, qτ),

where ∇eucl,p and ∇eucl,q denote the Euclidean gradients in the p- and q-variables, respectively. We may then set

ητ = A(pτ) ∇eucl,p φ(pτ, qτ) ⊕ B(pτ) ∇eucl,p φ(pτ, qτ).

Proof. We compute the first row of A(pτ) ∇eucl,p φ(pτ, qτ):

∂φ/∂x1 − (x2τ/2) ∂φ/∂x3 − u(x1τ, x2τ, x3τ, α) ∂φ/∂x4
= (x1τ − y1τ)^{k−1} − (1/2)(y2τ + x2τ) φ3(pτ, qτ)^{k−1}
+ (1/12)(−2x1τy2τ − y1τy2τ − α(y2τ)² + x2τ(y1τ − αy2τ) − 6y3τ) φ4(pτ, qτ)^{k−1}
− ((x2τ/4)(y1τ + αy2τ) + (1/2)x3τ + (1/12)x2τ(x1τ + αx2τ)) φ4(pτ, qτ)^{k−1}.

We compute the first row of −A(qτ) ∇eucl,q φ(pτ, qτ):

−∂φ/∂y1 + (y2τ/2) ∂φ/∂y3 + u(y1τ, y2τ, y3τ, α) ∂φ/∂y4
= (x1τ − y1τ)^{k−1} − (1/2)(x2τ + y2τ) φ3(pτ, qτ)^{k−1}
− (1/12)(6x3τ + 2x2τy1τ + x1τ(x2τ − y2τ) + αx2τ(x2τ + y2τ)) φ4(pτ, qτ)^{k−1}
− ((y2τ/4)(x1τ + αx2τ) + (1/2)y3τ + (1/12)y2τ(y1τ + αy2τ)) φ4(pτ, qτ)^{k−1}.

Routine calculations show these terms are equal. We now compute the second row of A(pτ) ∇eucl,p φ(pτ, qτ):

∂φ/∂x2 + (x1τ/2) ∂φ/∂x3 + m(x1τ, x2τ, x3τ, α) ∂φ/∂x4
= (x2τ − y2τ)^{k−1} + (1/2)(x1τ + y1τ) φ3(pτ, qτ)^{k−1}
+ (1/12)(−6αy3τ + y1τ(x1τ + y1τ) + α(x2τy1τ − x1τy2τ) + αy1τ(x2τ + y2τ)) φ4(pτ, qτ)^{k−1}
+ ((x1τ/4)(y1τ + αy2τ) − (α/2)x3τ + (1/12)x1τ(x1τ + αx2τ)) φ4(pτ, qτ)^{k−1}.

We compute the second row of −A(qτ) ∇eucl,q φ(pτ, qτ):

−∂φ/∂y2 − (y1τ/2) ∂φ/∂y3 − m(y1τ, y2τ, y3τ, α) ∂φ/∂y4
= (x2τ − y2τ)^{k−1} + (1/2)(x1τ + y1τ) φ3(pτ, qτ)^{k−1}
+ (1/12)(−6αx3τ + x1τ(x1τ + y1τ) − α(x2τy1τ − x1τy2τ) + αx1τ(x2τ + y2τ)) φ4(pτ, qτ)^{k−1}
+ ((y1τ/4)(x1τ + αx2τ) − (α/2)y3τ + (1/12)y1τ(y1τ + αy2τ)) φ4(pτ, qτ)^{k−1}.

Again, routine calculations show these terms are equal. We compute B(pτ) ∇eucl,p φ(pτ, qτ):

∂φ/∂x3 + n(x1τ, x2τ, α) ∂φ/∂x4
= φ3(pτ, qτ)^{k−1} + (1/2)(y1τ + αy2τ) φ4(pτ, qτ)^{k−1} + (1/2)(x1τ + αx2τ) φ4(pτ, qτ)^{k−1}.

We compute −B(qτ) ∇eucl,q φ(pτ, qτ):

−∂φ/∂y3 − n(y1τ, y2τ, α) ∂φ/∂y4
= φ3(pτ, qτ)^{k−1} + (1/2)(x1τ + αx2τ) φ4(pτ, qτ)^{k−1} + (1/2)(y1τ + αy2τ) φ4(pτ, qτ)^{k−1}.

It is easy to see these two terms are equal. 
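Claim 3.1 can be spot-checked exactly. The sketch below (sympy, with the arbitrary choices α = 1/2, k = 4, and a random rational sample point) evaluates both sides of the A- and B-relations.

```python
# Exact numerical sketch of Claim 3.1 at a random rational point.
import sympy as sp
import random

xs = sp.symbols('x1 x2 x3 x4')
ys = sp.symbols('y1 y2 y3 y4')
x1, x2, x3, x4 = xs
y1, y2, y3, y4 = ys
alpha = sp.Rational(1, 2)
k = 4

phi3 = x3 - y3 + (x2*y1 - x1*y2)/2
phi4 = (x4 - y4 + (x3*y1 - x1*y3)/2 + alpha*(x3*y2 - x2*y3)/2
        + (x1 + y1 + alpha*(x2 + y2))*(x2*y1 - x1*y2)/12)
phi = ((x1 - y1)**k + (x2 - y2)**k + phi3**k + phi4**k)/k

grad = sp.Matrix([sp.diff(phi, v) for v in (*xs, *ys)])  # 8-component gradient

def A(v1, v2, v3):
    u = v3/2 + v2*(v1 + alpha*v2)/12
    m = -alpha*v3/2 + v1*(v1 + alpha*v2)/12
    return sp.Matrix([[1, 0, -v2/2, -u], [0, 1, v1/2, m]])

def B(v1, v2):
    return sp.Matrix([[0, 0, 1, (v1 + alpha*v2)/2]])

random.seed(1)
vals = {v: sp.Rational(random.randint(-20, 20), 10) for v in (*xs, *ys)}
lhsA = (A(x1, x2, x3)*grad[:4, :]).subs(vals)    # A(p) grad_p phi
rhsA = (-A(y1, y2, y3)*grad[4:, :]).subs(vals)   # -A(q) grad_q phi
lhsB = (B(x1, x2)*grad[:4, :]).subs(vals)
rhsB = (-B(y1, y2)*grad[4:, :]).subs(vals)
```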

Thus, using Corollary 3.1.1, we have matrices X, Y ∈ S⁴, where (η, X) ∈ J̄^{2,+}_eucl u(pτ) and (η, Y) ∈ J̄^{2,−}_eucl v(qτ). We then have

Xτ = A(pτ) X A^T(pτ) + M(∇eucl φ, pτ)

and

Yτ = A(qτ) Y A^T(qτ) + M(∇eucl φ, qτ).

Thus, we have proved Part (3) of our Lemma 3.2. Part (4) follows from the fact that M is based on the Euclidean second derivatives of φ(p, q) and, by definition, ‖A‖ is bounded by a constant since we are in a bounded domain. 

References

[1] Bellaïche, André: The tangent space in sub-Riemannian geometry. In: Sub-Riemannian Geometry, Progress in Mathematics, Vol. 144, Birkhäuser, Basel, Switzerland, 1996, 1-78.

[2] Bieske, T.: A sub-Riemannian maximum principle and its application to the p- Laplacian in Carnot groups. Ann. Acad. Sci. Fenn. Math. 37 (2012), 119-134.

[3] Bieske, T.: On Infinite Harmonic Functions on the Heisenberg Group. Comm. in PDE 27:3-4, 2002, 727-762.

[4] Bieske, T.: Lipschitz extensions on generalized Grushin spaces. Michigan Math. J. 53:1, 2005, 3-31.

[5] Citti, G., M. Manfredini, and A. Sarti: Neuronal oscillation in the visual cortex: Gamma-convergence to the Riemannian Mumford-Shah functional. - SIAM J. Math. Anal. 35:6, 2004, 1394-1419.

[6] Crandall, M.: Viscosity solutions: a primer. - Lecture Notes in Math. 1660, Springer- Verlag, Berlin, 1997.

[7] Crandall, M., H. Ishii, and P.-L. Lions: User’s guide to viscosity solutions of second order partial differential equations. - Bull. Amer. Math. Soc. 27:1, 1992, 1-67.

[8] Dragoni, F.: Introduction to Viscosity Solutions for Nonlinear PDEs. http://www.maths.bris.ac.uk/~maxfd/viscnote.pdf.

[9] Heinbockel, J.H.: Mathematical methods for partial differential equations. Trafford, Victoria, 2003.

[10] Jacobson, N.: Lie algebras. Dover Publishing, New York, 1962.

[11] Nagel, A., E. Stein, and S. Wainger: Balls and metrics defined by vector fields I: Basic Properties. Acta Math. 155:1, 1985, 103-147.

[12] Rudin, W.: Real and complex analysis, 3rd ed., McGraw-Hill, New York, 1987.
