HW 4 solutions: Section 3.3

2) A critical point is where ∇f = 0:

∇f = (∂f/∂x, ∂f/∂y) = (2x − y, 2y − x) = (0, 0)

The only solution to the equations is (x, y) = (0, 0).

To classify the critical point, we apply the second derivative test. We first compute the Hessian:

$$Hf = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\[4pt] \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix} = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$$

Next, recall the procedure for applying the second derivative test in the case of a 2×2 Hessian:
1) If det Hf = 0, then the test is inconclusive, and you must find a different way to classify the critical point.
2) If det Hf < 0, then the critical point is a saddle.
3) If det Hf > 0, then we have two cases:
   a) If ∂²f/∂x² > 0, it is a min.
   b) If ∂²f/∂x² < 0, it is a max.

For our present scenario, det Hf = (2)(2) − (−1)(−1) = 3 > 0 and ∂²f/∂x² = 2 > 0, so we end up in situation 3a, and our critical point is a min.
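If you'd like to double-check the algebra with a computer, here is a quick SymPy sketch. Since the original statement of the problem isn't reproduced here, it assumes f(x, y) = x² − xy + y², which matches the gradient computed above.

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 - x*y + y**2  # assumed form, consistent with grad f = (2x - y, 2y - x)

# Critical points: solve grad f = 0
print(sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True))  # [{x: 0, y: 0}]

# Hessian and its determinant
H = sp.hessian(f, (x, y))
print(H, H.det())  # Matrix([[2, -1], [-1, 2]]), determinant 3 > 0
```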

5) Same story as question 2!¹

$$\nabla f = \left(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y}\right) = \left(2x\,e^{1+x^2-y^2},\ -2y\,e^{1+x^2-y^2}\right) = (0, 0)$$

Since e to any power is never 0, the only solution to our equations is again (x, y) = (0, 0).

$$Hf = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\[4pt] \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix} = \begin{pmatrix} 2e^{1+x^2-y^2} + 4x^2 e^{1+x^2-y^2} & -4xy\,e^{1+x^2-y^2} \\ -4xy\,e^{1+x^2-y^2} & -2e^{1+x^2-y^2} + 4y^2 e^{1+x^2-y^2} \end{pmatrix}$$

This looks quite horrendous, but remember that we only care about the value of our Hessian at the critical point (0, 0):

$$Hf\big|_{(0,0)} = \begin{pmatrix} 2e & 0 \\ 0 & -2e \end{pmatrix}$$

The determinant of the Hessian is negative, so we have a saddle, by the second derivative test.
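As a sanity check, the same SymPy computation works here, assuming f(x, y) = e^(1+x²−y²) (again inferred from the gradient above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = sp.exp(1 + x**2 - y**2)  # assumed form, consistent with the gradient above

H0 = sp.hessian(f, (x, y)).subs({x: 0, y: 0})
print(H0)        # Matrix([[2*E, 0], [0, -2*E]])
print(H0.det())  # -4*exp(2) < 0, so (0, 0) is a saddle
```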

18a) Note that we are letting k be some constant.

∇f = (∂f/∂x, ∂f/∂y, ∂f/∂z) = (2x, 2y + kz, 2z + ky)

It is easy to see that ∇f does indeed vanish at the origin.

18b) In preparation to apply the second derivative test, we compute the Hessian:

$$Hf = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} & \frac{\partial^2 f}{\partial x \partial z} \\[4pt] \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} & \frac{\partial^2 f}{\partial y \partial z} \\[4pt] \frac{\partial^2 f}{\partial z \partial x} & \frac{\partial^2 f}{\partial z \partial y} & \frac{\partial^2 f}{\partial z^2} \end{pmatrix} = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & k \\ 0 & k & 2 \end{pmatrix}$$

¹That is, it is the same story as in question number 2, raised to the first power. Please excuse this bad joke.

Unfortunately for us, this Hessian is 3×3. So, we will have to apply the more general version of the second derivative test on p. 175 of the book. The procedure starts by considering the following collection of matrices:

$$(2), \qquad \begin{pmatrix} 2 & 0 \\ 0 & 2 \end{pmatrix}, \qquad \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & k \\ 0 & k & 2 \end{pmatrix}$$

We have here first the top-left element of Hf, then the top-left 2×2 block of elements in Hf, then all of Hf. We now compute the determinants of these matrices, which yields 2, 4, 2(4 − k²). Finally, the rule is that if all of these numbers are positive, we have a min. If they alternate −, +, −, ..., then we have a max. Otherwise we have a saddle. However, if any of these numbers are 0, then the test is inconclusive.

So, what can we say about our particular problem? Well, if |k| < 2, all three determinants will be positive, and we have a min. If |k| > 2, then the signs of our determinants will be +,+,-, so we will have a saddle. Finally, if k = ±2, the test is inconclusive. We have to look at that case in greater detail, since the second derivative test will not help us.

If k = ±2, then we get f = x² + y² + z² ± 2yz = x² + (y ± z)². Thus, we can conclude that the origin is a local min. Why? Well, f(0, 0, 0) = 0. However, our function is obviously never negative. Hence, at the origin, f is at least as small as at any nearby point, which is the definition of a local min.

In conclusion: if |k| ≤ 2, (0, 0, 0) is a local min; if |k| > 2, (0, 0, 0) is a saddle.
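For the curious, here is a short SymPy sketch of the leading principal minors as functions of k, assuming f(x, y, z) = x² + y² + z² + kyz (which matches the gradient from part a):

```python
import sympy as sp

x, y, z, k = sp.symbols('x y z k', real=True)
f = x**2 + y**2 + z**2 + k*y*z  # assumed form, consistent with grad f = (2x, 2y + kz, 2z + ky)

H = sp.hessian(f, (x, y, z))
minors = [H[:i, :i].det() for i in (1, 2, 3)]  # leading principal minors
print(minors)  # [2, 4, -2*k**2 + 8]: all positive iff |k| < 2; the last is negative iff |k| > 2
```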

23a) Applying the product rule,

∇f = (∂f/∂x, ∂f/∂y) = (−6x(y − x²) − 2x(y − 3x²), 1·(y − x²) + 1·(y − 3x²))

This clearly vanishes at the origin.

23b) We first evaluate f at the points of our line:

f(at, bt) = (bt − 3a²t²)(bt − a²t²)

Notice that this is a single-variable function of t. Call this function g(t). We can apply the single-variable second derivative test to classify the critical point at t = 0. This is most easily done as follows:

g(t) = b²t² + terms of order 3 and up

g′(t) = 2b²t + terms of order 2 and up

g″(t) = 2b² + terms of order 1 and up

Thus, we can see that g″(0) = 2b². It follows that if b ≠ 0, then we do indeed have a local min. However, if b = 0, then the single-variable second derivative test is inconclusive. In that case, we need to check directly.

f(at, 0) = (−3a²t²)(−a²t²) = 3a⁴t⁴

However, this function clearly has a min at t = 0.

23c) Look at our function along the parabola y = 2x².

f(x, 2x²) = (2x² − 3x²)(2x² − x²) = −x⁴

Our function is always negative along this parabola, except for the origin, where it is 0. Hence, along this curve, the origin is actually a local max. It follows that overall, the origin must be a saddle.
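Here is a small SymPy check of both parts (b) and (c), using f(x, y) = (y − 3x²)(y − x²) as in the computations above:

```python
import sympy as sp

x, y, a, b, t = sp.symbols('x y a b t', real=True)
f = (y - 3*x**2) * (y - x**2)  # the function from parts (a) and (b)

# Along the line (x, y) = (at, bt): the coefficient of t^2 is b^2 >= 0
g = sp.expand(f.subs({x: a*t, y: b*t}))
print(g.coeff(t, 2))  # b**2

# Along the parabola y = 2x^2: f is identically -x^4, a local max at the origin
print(sp.expand(f.subs(y, 2*x**2)))  # -x**4
```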

28) (Lagrange multipliers make short work of this question. However, it is in the chapter before Lagrange multipliers are introduced, so we try an alternative method.) Consider the points on our plane. Once we choose the x and y coordinates of a point, the z coordinate is given by z = 10 − x + y/2. So, we can describe all such points as (x, y, 10 − x + y/2). The squared distance from such a point to the origin is then f(x, y) = x² + y² + (10 − x + y/2)². Now, we use a nice trick: the point of the plane closest to the origin is also the point whose squared distance to the origin is smallest. So, we can minimize the squared distance and save ourselves a square root. Okay, let's look for critical points:

∇f = (2x + 2(10 − x + y/2)(−1), 2y + 2(10 − x + y/2)(1/2)) = (−20 + 4x − y, 10 − x + (5/2)y) = (0, 0)

Solving (the first equation gives y = 4x − 20, and substituting into the second gives x = 40/9), we find a unique critical point at (x, y) = (40/9, −20/9). Of course, since we know that there is a closest point of our plane to the origin, it must show up as a critical point. Hence, this critical point must be the min we are seeking. Since the z coordinate is given in terms of the x and y coordinates, we get our final answer of (40/9, −20/9, 40/9).
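A quick SymPy check of the critical-point computation, using the same squared-distance function f defined above:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 + (10 - x + y/2)**2  # squared distance from (x, y, 10 - x + y/2) to the origin

sol = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)[0]
print(sol)                                      # {x: 40/9, y: -20/9}
print(sol[x], sol[y], 10 - sol[x] + sol[y]/2)   # the closest point: 40/9, -20/9, 40/9
```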

38) Substituting r² = x² + y² yields the following problem: find the extrema of r⁸, given that r² ≤ 1. So indeed, we do not need to do any calculus: r⁸ is smallest when r = 0 and largest when r = 1. The maximum value is 1, and the minimum value is 0.

44) First, we look for critical points in the interior of the triangle. These points are found via the standard ∇f = 0:

∇f = (y + 1, x − 2) = (0, 0)

This has a unique solution at (2, −1). However, let's not be hasty. There's a chance that this critical point isn't inside the triangle. If it isn't, then we have to discard this solution. Fortunately, it isn't hard to check that the point is indeed inside the triangle.

However, what if our max/min happens to lie on the boundary of the triangle? Once again, this problem is given before Lagrange multipliers in the book, so here is an alternative, Lagrange-free approach. The vertical edge of the triangle can be parametrized as (1, t), −2 ≤ t ≤ 2. Plugging this into f, we get f(1, t) = 2 − t. So, as we vary t, we can see what f looks like for different points on this edge. Now, if f does take on a maximum (or min) on this vertical edge, then it immediately follows that g(t) = f(1, t) will take on a maximum (or min) as well. However, g(t) = 2 − t is a one-variable function (specifically, it is a function [−2, 2] → R), so it is easy to find its extrema via single-variable calculus methods.

g′(t) = −1, which is never 0, so g(t)'s only extrema are at the endpoints t = ±2. These values of t correspond to the vertices (1, −2) and (1, 2) of our triangle.

Repeat! The horizontal edge can be parametrized as (t, −2), 1 ≤ t ≤ 5. Plugging this into f, g(t) = f(t, −2) = 5 − t. Once again, the only extrema of this function are when t = 1, 5, and these values of t correspond to the vertices (1,-2) and (5,-2) of the triangle.

Finally, the diagonal edge can be parametrized as (t, 3 − t), 1 < t < 5. Plugging this in, g(t) = f(t, 3 − t) = −5 + 6t − t². Looking for extrema, g′(t) = 6 − 2t = 0, so we get t = 3 as a possibility alongside t = 1, 5 as before. So, our possible points are two vertices of the triangle (1, 2), (5, −2), as well as the more interesting point (3, 0).

So, at the end of the day, we have 5 potential points for our absolute max and min: (2, −1), (3, 0), and the triangle's vertices (1, 2), (1, −2), (5, −2). How do we figure out which are our winners? Just plug these points into f!

f(2, −1) = 3, f(3, 0) = 4, f(1, 2) = 0, f(1, −2) = 4, f(5, −2) = 0

The max is 4 and occurs at (3,0) and (1,-2). The min is 0 and occurs at (1,2) and (5,-2).
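The original f is not restated in these solutions, but the gradient (y + 1, x − 2) and the values above are consistent with f(x, y) = xy + x − 2y + 1. Assuming that form, a SymPy loop reproduces the table:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x*y + x - 2*y + 1  # assumed form, consistent with grad f = (y + 1, x - 2) and the values above

for p in [(2, -1), (3, 0), (1, 2), (1, -2), (5, -2)]:
    print(p, f.subs({x: p[0], y: p[1]}))  # 3, 4, 0, 4, 0
```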

45) First, find the critical points.

∇f = (y − 1/x², x − 8/y²) = (0, 0)

So, we get the equations x²y = 1, xy² = 8. Multiplying the equations, we get x³y³ = 8, and hence xy = 2. Throwing this back into our earlier equations, we arrive at x = 1/2, y = 4 as our unique critical point.

To classify it, we compute the Hessian:

$$Hf = \begin{pmatrix} \frac{\partial^2 f}{\partial x^2} & \frac{\partial^2 f}{\partial x \partial y} \\[4pt] \frac{\partial^2 f}{\partial y \partial x} & \frac{\partial^2 f}{\partial y^2} \end{pmatrix} = \begin{pmatrix} \frac{2}{x^3} & 1 \\ 1 & \frac{16}{y^3} \end{pmatrix}$$

At our critical point, we get

$$Hf\big|_{(1/2,\,4)} = \begin{pmatrix} 16 & 1 \\ 1 & \frac{1}{4} \end{pmatrix}$$

The determinant of the Hessian is 3 > 0, and the top-left entry is positive, so we must have a min.
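For completeness, a SymPy version of this computation, assuming f(x, y) = xy + 1/x + 8/y (consistent with the gradient above) and restricting attention to positive x and y:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = x*y + 1/x + 8/y  # assumed form, consistent with grad f = (y - 1/x^2, x - 8/y^2)

sol = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
print(sol)  # [{x: 1/2, y: 4}]

H = sp.hessian(f, (x, y)).subs(sol[0])
print(H, H.det())  # Matrix([[16, 1], [1, 1/4]]), determinant 3 > 0
```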

Section 3.4

4) ∇f = λ∇g along with our constraint gives us:

1 = 2λx, 1 = 2λy, x² − y² = 2

Subtracting the first two equations yields 0 = 2λ(x − y), and hence x = y (note that λ ≠ 0, since 1 = 2λx). However, this yields a contradiction with equation 3. So, what happened? Did we mess up? Well, no. It turns out that there are no extrema in this case. So, we got the (vacuously true) statement that at each extremum of our function, 0 = 2. Seems legit!

P.S. If you want a better idea of what this all looks like, here is a similar problem for you. Find all extrema of f(x, y) = y, subject to the constraint xy = −1. A quick check should convince you that there are no extrema. Well, the original problem fails for the same reason. In fact, it's pretty much the same picture, but everything rotated 45°.
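If you want the computer's opinion, SymPy confirms that the Lagrange system written above has no solutions at all (the function and constraint themselves are not restated here; this just checks the system as written):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
eqs = [sp.Eq(1, 2*lam*x), sp.Eq(1, 2*lam*y), sp.Eq(x**2 - y**2, 2)]
print(sp.solve(eqs, [x, y, lam], dict=True))  # [] -- the system is inconsistent, so there are no extrema
```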

6) We have multiple constraints, so we will use ∇f = λ₁∇g₁ + λ₂∇g₂:

(1, 1, 1) = λ₁(2x, −2y, 0) + λ₂(2, 0, 1)

Along with our two constraints, we get:

1 = 2λ₁x + 2λ₂, 1 = −2λ₁y, 1 = λ₂, x² − y² = 1, 2x + z = 1

Plugging in for λ₂ in the first equation and simplifying, we get

−1 = 2λ₁x, 1 = −2λ₁y, x² − y² = 1, 2x + z = 1

Just as before, we can add the first two equations to get 0 = 2λ₁(x − y). Since λ₁ = 0 would quickly yield a contradiction, we must take x = y. However, this too ends in a contradiction (with the constraint x² − y² = 1), in similar fashion to the previous problem. No extrema.
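As in the previous problem, SymPy confirms that this system has no solutions:

```python
import sympy as sp

x, y, z, l1, l2 = sp.symbols('x y z l1 l2', real=True)
eqs = [sp.Eq(1, 2*l1*x + 2*l2), sp.Eq(1, -2*l1*y), sp.Eq(1, l2),
       sp.Eq(x**2 - y**2, 1), sp.Eq(2*x + z, 1)]
print(sp.solve(eqs, [x, y, z, l1, l2], dict=True))  # [] -- no solutions, hence no extrema
```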

23) There are two cases to consider. We first look for extrema on the interior of the region using ∇f = 0:

∇f = (1, 1, −1) = (0, 0, 0)

Well... that has no solutions. So, any extrema must occur on the boundary of the region. The boundary is given by the constraint x² + y² + z² = 1. Since we are now looking for constrained extrema, we must use Lagrange multipliers to find any critical points:

1 = 2λx, 1 = 2λy, −1 = 2λz, x² + y² + z² = 1

At this point we can simply add/subtract the equations as we did before. However, let's show off an alternative technique. Since we obviously cannot have λ = 0, we can divide by it and solve for x, y, z:

x = 1/(2λ), y = 1/(2λ), z = −1/(2λ), x² + y² + z² = 1

We can now plug x, y, z into the last equation:

(1/(2λ))² + (1/(2λ))² + (−1/(2λ))² = 1

It is now easy to find λ = ±√3/2. Plugging into our equations for x, y, z, we can find the extrema and the values f takes on at them. Since there are only two extrema, these must be the max and min.

(x, y, z) = ±(1/√3, 1/√3, −1/√3), with respective values ±√3
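Here is a SymPy version of the whole Lagrange computation, assuming f(x, y, z) = x + y − z (inferred from ∇f = (1, 1, −1) above):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
f = x + y - z                   # assumed form, consistent with grad f = (1, 1, -1)
g = x**2 + y**2 + z**2 - 1      # the boundary constraint

eqs = [sp.Eq(sp.diff(f, v), lam*sp.diff(g, v)) for v in (x, y, z)] + [sp.Eq(g, 0)]
for sol in sp.solve(eqs, [x, y, z, lam], dict=True):
    print({v: sol[v] for v in (x, y, z)}, f.subs(sol))  # the points +-(1/sqrt(3), 1/sqrt(3), -1/sqrt(3)), values +-sqrt(3)
```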

24) First, we look for extrema on the interior via ∇f = 0:

(1, z, y) = (0, 0, 0)

This has no solutions, so there are no extrema on the interior. We look at the boundary, just as in the previous problem:

1 = 2λx, z = 2λy, y = 2λz, x² + y² + z² = 1

Multiplying the second and third equations yields yz = 4λ²yz. If either y or z is zero, so is the other (by equations 2 and 3). Then, since x² + y² + z² = 1, we get that x = ±1. This gives us two solutions. On the other hand, let's say that y, z ≠ 0. Then, yz = 4λ²yz implies 1 = 4λ², and so λ = ±1/2.

However, this implies x = ±1. Plugging into x² + y² + z² = 1, we conclude that y = z = 0, contradicting our assumption that y, z ≠ 0, so this case gives no new solutions. Thus, we have a grand total of two solutions: (1, 0, 0), (−1, 0, 0). It is now easy to conclude that

(x, y, z) = (±1, 0, 0), with respective values ±1
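The analogous SymPy check, assuming f(x, y, z) = x + yz (inferred from ∇f = (1, z, y)):

```python
import sympy as sp

x, y, z, lam = sp.symbols('x y z lam', real=True)
f = x + y*z                     # assumed form, consistent with grad f = (1, z, y)
g = x**2 + y**2 + z**2 - 1

eqs = [sp.Eq(sp.diff(f, v), lam*sp.diff(g, v)) for v in (x, y, z)] + [sp.Eq(g, 0)]
for sol in sp.solve(eqs, [x, y, z, lam], dict=True):
    print({v: sol[v] for v in (x, y, z)}, f.subs(sol))  # only (1, 0, 0) and (-1, 0, 0), with values 1 and -1
```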

33a) ∇f = λ∇g:

1 = 4λx, 2y = 2λy, 2x² + y² = 1

Looking at the second equation, we have two possibilities: either y = 0 or λ = 1. If y = 0, the third equation implies x = ±1/√2. This gives us two solutions: (±1/√2, 0).

On the other hand, if λ = 1, then we get x = 1/4 from the first equation, and y = ±√(7/8). This gives two more solutions: (1/4, ±√(7/8)).

33b) Who doesn’t love a nice bordered Hessian?...

The bordered Hessian described on p. 198 of the book is built from the second partials of the auxiliary function h = f − λg, bordered by the partials of g:

$$\bar{H} = \begin{pmatrix} 0 & -\frac{\partial g}{\partial x} & -\frac{\partial g}{\partial y} \\[4pt] -\frac{\partial g}{\partial x} & \frac{\partial^2 h}{\partial x^2} & \frac{\partial^2 h}{\partial x \partial y} \\[4pt] -\frac{\partial g}{\partial y} & \frac{\partial^2 h}{\partial y \partial x} & \frac{\partial^2 h}{\partial y^2} \end{pmatrix} = \begin{pmatrix} 0 & -4x & -2y \\ -4x & -4\lambda & 0 \\ -2y & 0 & 2 - 2\lambda \end{pmatrix}$$

We only need to know the sign of the determinant of the bordered Hessian at each critical point. We expand by minors across the top row. (Below, vertical bars around a matrix denote taking its determinant.)

$$|\bar{H}| = 0\begin{vmatrix} -4\lambda & 0 \\ 0 & 2-2\lambda \end{vmatrix} - (-4x)\begin{vmatrix} -4x & 0 \\ -2y & 2-2\lambda \end{vmatrix} + (-2y)\begin{vmatrix} -4x & -4\lambda \\ -2y & 0 \end{vmatrix} = -32x^2(1 - \lambda) + 16\lambda y^2$$

Recall the criterion: |H̄| < 0 means a constrained local min, while |H̄| > 0 means a constrained local max.² At (±1/√2, 0) we have y = 0 and λ = 1/(4x) = ±√2/4, so |H̄| = −32x²(1 − λ) = −16 ± 4√2 < 0, and these two points are local minima. At (1/4, ±√(7/8)) we have λ = 1, so |H̄| = 16y² = 14 > 0, and these two points are local maxima.

²This is the opposite of what one may expect. One may guess that a negative result should indicate a local max, but bordered Hessians are weird.
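Since the sign conventions here are easy to get backwards, here is a SymPy check, assuming (consistently with part a) f(x, y) = x + y² and g(x, y) = 2x² + y² − 1:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
f = x + y**2                    # assumed form, consistent with the Lagrange equations in part (a)
g = 2*x**2 + y**2 - 1
h = f - lam*g                   # auxiliary function

# Bordered Hessian built from h, bordered by the partials of g
Hbar = sp.Matrix([[0,              -sp.diff(g, x),    -sp.diff(g, y)],
                  [-sp.diff(g, x),  sp.diff(h, x, 2),  sp.diff(h, x, y)],
                  [-sp.diff(g, y),  sp.diff(h, x, y),  sp.diff(h, y, 2)]])

eqs = [sp.Eq(sp.diff(f, x), lam*sp.diff(g, x)),
       sp.Eq(sp.diff(f, y), lam*sp.diff(g, y)),
       sp.Eq(g, 0)]
for sol in sp.solve(eqs, [x, y, lam], dict=True):
    print({x: sol[x], y: sol[y]}, sp.simplify(Hbar.det().subs(sol)))
# Negative determinant at (+-1/sqrt(2), 0): local minima.
# Determinant 14 > 0 at (1/4, +-sqrt(14)/4): local maxima.
```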

36) Our constraint is C(x, y) = 2x + 3y = 10. The functions have different names than usual, so in this case the equation we need to solve is ∇Q = λ∇C:

y = 2λ, x = 3λ, 2x + 3y = 10

Substituting x and y into the third equation, we get 6λ + 6λ = 10, and hence λ = 5/6.

Plugging back in, we get (x, y) = (5/2, 5/3). This gives a maximum production of Q_max = 25/6.
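Finally, a SymPy check of this last problem, assuming the production function is Q(x, y) = xy (inferred from ∇Q = (y, x)):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
Q = x*y                  # assumed production function, consistent with grad Q = (y, x)
C = 2*x + 3*y - 10       # the constraint C(x, y) = 2x + 3y = 10

eqs = [sp.Eq(sp.diff(Q, x), lam*sp.diff(C, x)),
       sp.Eq(sp.diff(Q, y), lam*sp.diff(C, y)),
       sp.Eq(C, 0)]
sol = sp.solve(eqs, [x, y, lam], dict=True)[0]
print(sol, Q.subs(sol))  # {x: 5/2, y: 5/3, lam: 5/6}, maximum production 25/6
```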
