San José State University Math 250: Mathematical Methods for Data Visualization

Review of Linear Algebra and Multivariable Calculus

Dr. Guangliang Chen | Mathematics & Statistics, San José State University

Introduction

This course focuses on the following mathematical objects:

• Vectors: 1-D arrays
• Matrices: 2-D arrays
• Tensors: 3-D (or higher-dimensional) arrays


Notation: vectors

Vectors are denoted by boldface lowercase letters (such as a, b, u, v, x, y):

• They are assumed to be in column form, e.g., a = (1, 2, 3)^T.

• To indicate their dimensions, we use notation like a ∈ R^3.

• The ith element of a vector a is written as a_i or a(i).


Some special vectors:

• The zero vector: 0 = (0, 0, ..., 0)^T ∈ R^n

• The vector of ones: 1 = (1, 1, ..., 1)^T ∈ R^n

• The canonical basis vectors of R^n: e_i = (0, ..., 0, 1, 0, ..., 0)^T ∈ R^n (with the 1 in position i), for i = 1, ..., n

When the dimension is not specified, it is implied by the context: e.g., in a^T 1 with a = (1, 2, 3)^T, the vector 1 must be in R^3.


Notation: matrices

Matrices are denoted by boldface UPPERCASE letters (such as A, B, U, V).

Similarly, we write A ∈ R^{m×n} to indicate its size (if either m or n is equal to 1, then the matrix reduces to a vector).

The (i, j) entry of A is denoted by aij, or A(i, j) as in MATLAB. The ith row of A is denoted by A(i, :) while its jth column is written as A(:, j), as in MATLAB.


Special matrices:

• The zero matrix O_n ∈ R^{n×n} (all entries 0)
• The identity matrix I_n ∈ R^{n×n} (1's on the diagonal, 0's elsewhere)
• The matrix of ones J_n ∈ R^{n×n} (all entries 1)

Similarly, we may drop the subscript n when the size of the matrix is clear based on the context.


Description of matrix shape

We characterize a matrix A ∈ R^{m×n} based on its shape as follows:


A diagonal matrix is a square matrix A ∈ R^{n×n} whose off-diagonal entries are all zero (a_ij = 0 for all i ≠ j):

A = diag(a_11, ..., a_nn)

A diagonal matrix is uniquely defined by a vector that contains all the diagonal entries, and denoted as follows:

A = diag(1, 2, 3) = diag(a), where a = (1, 2, 3)^T


Matrix multiplication

Let A ∈ R^{m×n} and B ∈ R^{n×p}. Their matrix product is the m × p matrix

AB = C = (c_ij), where c_ij = Σ_{k=1}^{n} a_ik b_kj = A(i, :) · B(:, j).

(Figure: entry c_ij of the m × p product C is the dot product of the ith row of A with the jth column of B.)
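The entry formula can be checked numerically. A minimal NumPy sketch (the course itself uses MATLAB; this illustration is not from the slides):

```python
import numpy as np

# Each entry of C = AB is the dot product of a row of A with a column of B.
A = np.array([[1., 2., 3.],
              [4., 5., 6.]])        # 2 x 3
B = np.array([[1., 0.],
              [0., 1.],
              [1., 1.]])            # 3 x 2
C = A @ B                           # 2 x 2 product

# Check c_ij = A(i,:) . B(:,j) entry by entry
for i in range(2):
    for j in range(2):
        assert np.isclose(C[i, j], A[i, :] @ B[:, j])

print(C)  # [[ 4.  5.]
          #  [10. 11.]]
```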


It is possible to obtain one full row (or column) of C at a time via matrix-vector multiplication:

C(i, :) = A(i, :) · B, C(:, j) = A · B(:, j)

(Figure: computing one full row or one full column of C at a time.)


The full matrix C can be written as a sum of rank-1 matrices:

C = Σ_{k=1}^{n} A(:, k) · B(k, :).

(Figure: C as a sum of n outer products of the columns of A with the rows of B.)
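The three views of the product (row at a time, column at a time, sum of rank-1 outer products) can be verified numerically. A NumPy sketch, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((3, 5))
C = A @ B

# Row i of C depends only on row i of A
assert np.allclose(C[1, :], A[1, :] @ B)
# Column j of C depends only on column j of B
assert np.allclose(C[:, 2], A @ B[:, 2])
# C as a sum of n rank-1 (outer-product) matrices
C_outer = sum(np.outer(A[:, k], B[k, :]) for k in range(3))
assert np.allclose(C, C_outer)
```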


Example 0.1. Compute

[1 1 0; 1 0 1] · [1 2 3; 4 5 6; 7 8 9]
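The product in Example 0.1 (assuming the 2 × 3 matrix is the left factor, as reconstructed from the slide layout) can be computed directly:

```python
import numpy as np

A = np.array([[1, 1, 0],
              [1, 0, 1]])          # 2 x 3
B = np.array([[1, 2, 3],
              [4, 5, 6],
              [7, 8, 9]])          # 3 x 3

# Each row of A selects and sums rows of B:
# row 1 = B(1,:) + B(2,:), row 2 = B(1,:) + B(3,:)
print(A @ B)  # [[ 5  7  9]
              #  [ 8 10 12]]
```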


Further interpretation of AB when one of the matrices is actually a vector:

• If A = (a_1, ..., a_n) is a row vector (m = 1):

  AB = Σ_{i=1}^{n} a_i B(i, :)  ←− a linear combination of the rows of B

• If B = (b_1, ..., b_n)^T is a column vector (p = 1):

  AB = Σ_{j=1}^{n} b_j A(:, j)  ←− a linear combination of the columns of A


Example 0.2.

1 2 3 1 2 3 −1         −1 0 1 4 5 6 , 4 5 6  0  7 8 9 7 8 9 1


Finally, below are some identities involving the vector 1 ∈ R^n:

1^T 1 = n,    1 1^T = J_n (the n × n matrix of ones)

For any matrix A ∈ R^{m×n}:

A 1 = Σ_j A(:, j)    (vector of row sums)
1^T A = Σ_i A(i, :)    (horizontal vector of column sums)
1^T A 1 = Σ_i Σ_j A(i, j)    (total sum of all entries)
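These ones-vector identities correspond to familiar row/column/total sums; a NumPy sketch (illustrative, not from the slides):

```python
import numpy as np

A = np.arange(12.).reshape(3, 4)      # a 3 x 4 test matrix
ones_m, ones_n = np.ones(3), np.ones(4)

assert np.allclose(A @ ones_n, A.sum(axis=1))     # A 1: vector of row sums
assert np.allclose(ones_m @ A, A.sum(axis=0))     # 1^T A: column sums
assert np.isclose(ones_m @ A @ ones_n, A.sum())   # 1^T A 1: total sum
```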


Graphical illustration (figure not reproduced)


Computational complexity

Let x, y ∈ R^n and A ∈ R^{m×n}, B ∈ R^{n×p}. Then

• Summing up the entries of x: O(n) (n − 1 additions);

• Computing the dot product x^T y: O(n) (2n − 1 operations in total: n multiplications and n − 1 additions). This implies that computing the squared norm of x, ||x||^2 = x^T x, also has O(n) complexity.

• Multiplying a matrix and a vector, Ax: O(mn) (m dot-product operations)

• Multiplying two matrices, AB: O(mnp). In particular, computing C^2 for a square matrix C ∈ R^{n×n} takes O(n^3) time.


An application of the matrix associative law in computing

Matrix multiplications follow the associative and distributive laws (but there is no commutative law).

In the special case where A ∈ R^{m×n}, B ∈ R^{n×p}, x ∈ R^p, we have

(AB)x = A(Bx)

Important note: Although mathematically the same, the second way of multiplying them via two matrix-vector multiplications is much faster!

• First way: O(mnp + mp) complexity
• Second way: O(np + mn) complexity
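The speed difference is easy to observe in practice. A NumPy sketch (timings are machine-dependent; the equality of the two results is the guaranteed part):

```python
import numpy as np
from timeit import timeit

m, n, p = 500, 500, 500
rng = np.random.default_rng(1)
A = rng.standard_normal((m, n))
B = rng.standard_normal((n, p))
x = rng.standard_normal(p)

y1 = (A @ B) @ x        # O(mnp) for AB, then O(mp)
y2 = A @ (B @ x)        # O(np) for Bx, then O(mn)
assert np.allclose(y1, y2)   # mathematically identical

t1 = timeit(lambda: (A @ B) @ x, number=20)
t2 = timeit(lambda: A @ (B @ x), number=20)
print(f"(AB)x: {t1:.3f}s   A(Bx): {t2:.3f}s")  # second is typically far faster
```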


The Hadamard product

Another way to multiply two matrices of the same size, say A, B ∈ R^{m×n}, is through the Hadamard product, also called the entrywise product:

C = A ∘ B ∈ R^{m×n}, with c_ij = a_ij · b_ij.

For example,

[0 2 −3; −1 0 −4] ∘ [1 0 −3; 2 1 −1] = [0 0 9; −2 0 4].


An important application of the entrywise product is in efficiently computing the product of a diagonal matrix and a rectangular matrix in software.

If A = diag(a_1, ..., a_n) is diagonal, then

AB = [a_1 B(1, :); ... ; a_n B(n, :)]    (each row of B scaled by the corresponding diagonal entry)

Similarly, if B = diag(b_1, ..., b_n) is diagonal, then

AB = [b_1 A(:, 1), ..., b_n A(:, n)]    (each column of A scaled by the corresponding diagonal entry)


We may implement these row/column scaling operations directly using the Hadamard product.

For example, in the former case, let a = diag(A) = (a_1, ..., a_n)^T ∈ R^n, the diagonal of A. Then

AB = [a, ..., a] ∘ B,

where A is n × n diagonal, B is n × p, and [a, ..., a] consists of p copies of the column a.

The former takes O(n^2 p) operations, while the latter takes only O(np) operations, an order of magnitude fewer.



For example,

−1  1 2 3 10 −1 −1 −1 −1 1 2 3 10          0  4 5 6 10 =  0 0 0 0 ◦4 5 6 10 1 7 8 9 10 1 1 1 1 7 8 9 10


Matrix transpose

The transpose of a matrix A ∈ R^{m×n} is another matrix B ∈ R^{n×m} with b_ij = a_ji for all i, j. We denote the transpose of A by A^T.

A square matrix A ∈ R^{n×n} is said to be symmetric if A^T = A.

Let A ∈ R^{m×n}, B ∈ R^{n×p}, and k ∈ R. Then

• (A^T)^T = A
• (kA)^T = k A^T
• (A + B)^T = A^T + B^T (for A, B of the same size)
• (AB)^T = B^T A^T

Matrix inverse

A square matrix A ∈ R^{n×n} is said to be invertible if there exists another square matrix B of the same size such that AB = BA = I.

In this case, B is called the matrix inverse of A and denoted as B = A−1.

Let A, B be two invertible matrices of the same size, and k ≠ 0. Then

• (kA)^{−1} = (1/k) A^{−1}
• (AB)^{−1} = B^{−1} A^{−1}
• (A^T)^{−1} = (A^{−1})^T

However, A + B is not necessarily invertible.
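These identities can be checked numerically on random matrices (invertible with probability 1); a NumPy sketch, not from the slides:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
k = 2.5

inv = np.linalg.inv
assert np.allclose(inv(k * A), inv(A) / k)          # (kA)^{-1} = (1/k) A^{-1}
assert np.allclose(inv(A @ B), inv(B) @ inv(A))     # note the reversed order
assert np.allclose(inv(A.T), inv(A).T)              # (A^T)^{-1} = (A^{-1})^T
```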


Matrix rank

Let A ∈ R^{m×n}. The largest number of linearly independent rows (or columns) contained in the matrix is called the rank of A, often denoted rank(A).

A square matrix P ∈ R^{n×n} is said to be of full rank (or nonsingular) if rank(P) = n; otherwise, it is said to be rank deficient (or singular).

A rectangular matrix A ∈ R^{m×n} is said to have full column rank if rank(A) = n.

Similarly, a rectangular matrix A ∈ R^{m×n} is said to have full row rank if rank(A) = m.


Useful properties of the matrix rank:

• For any A ∈ R^{m×n}, 0 ≤ rank(A) ≤ min(m, n), and rank(A) = 0 if and only if A = O.

• For any A ∈ R^{m×n}, rank(A) = rank(A^T).

• For any two matrices A ∈ R^{m×n}, B ∈ R^{n×p}, rank(AB) ≤ min(rank(A), rank(B)).

• For any A ∈ R^{m×n} and square, nonsingular P ∈ R^{m×m}, Q ∈ R^{n×n}, rank(PA) = rank(A) = rank(AQ).


Characterization of rank-1 matrices:

• Any nonzero row or column vector has rank 1 (as a matrix).

• A nonzero matrix is of rank 1 if and only if its rows (or columns) are multiples of each other.

• A nonzero matrix A ∈ R^{m×n} has rank 1 if and only if there exist nonzero vectors u ∈ R^m, v ∈ R^n such that A = uv^T. For example,

[1 0 3; 2 0 6; 3 0 9] = (1, 2, 3)^T (1, 0, 3)


Matrix trace

n×n The trace of a square matrix A ∈ R is defined as the sum of the entries in its diagonal:

trace(A) = Σ_i a_ii.

Clearly, trace(A) = trace(AT ).

Trace is a linear operator:

• trace(kA) = k · trace(A)
• trace(A + B) = trace(A) + trace(B)


If A is an m × n matrix and B is an n × m matrix, then

trace(AB) = trace(BA)

Note that as matrices, AB is not necessarily equal to BA. A special case is that for vectors u, v ∈ R^n,

trace(uv^T) = trace(v^T u) = v^T u = Σ_{i=1}^{n} u_i v_i.

More generally, for three (or more) matrices of compatible sizes such as A ∈ R^{m×n}, B ∈ R^{n×p}, C ∈ R^{p×m}, we have

trace(ABC) = trace(BCA) = trace(CAB)
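The cyclic property of the trace is easy to verify numerically; a NumPy sketch (illustrative, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((3, 4))
C = rng.standard_normal((4, 2))

t1 = np.trace(A @ B @ C)
assert np.isclose(t1, np.trace(B @ C @ A))   # cyclic permutation
assert np.isclose(t1, np.trace(C @ A @ B))

# Special case: trace(u v^T) = v^T u
u, v = rng.standard_normal(3), rng.standard_normal(3)
assert np.isclose(np.trace(np.outer(u, v)), u @ v)
```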


Matrix determinant

The matrix determinant is a rule to evaluate square matrices to numbers (in order to determine if they are nonsingular):

det : A ∈ R^{n×n} ↦ det(A) ∈ R.

A remarkable property is the following.

Theorem 0.1. A square matrix A ∈ R^{n×n} is invertible (nonsingular) if and only if it has a nonzero determinant.


Properties about the matrix determinant:

Let A, B ∈ R^{n×n} and k ∈ R. Then

• det(kA) = k^n det(A).
• det(A^T) = det(A).
• det(AB) = det(A) det(B). This implies that for any invertible matrix A, det(A^{−1}) = 1 / det(A).


Example 0.3. For the matrix

 3 0 0    A =  5 1 −1 , −2 2 4

find its rank, trace and determinant.

Answer. rank(A) = 3, trace(A) = 8, det(A) = 18
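The answer can be confirmed in NumPy (the determinant is computed in floating point, so it matches 18 only up to rounding); an illustrative sketch:

```python
import numpy as np

A = np.array([[ 3., 0.,  0.],
              [ 5., 1., -1.],
              [-2., 2.,  4.]])

print(np.linalg.matrix_rank(A))   # 3
print(np.trace(A))                # 8.0
print(np.linalg.det(A))           # 18.0 (up to floating-point error)
```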


Eigenvalues and eigenvectors

Let A ∈ R^{n×n}. The characteristic polynomial of A is p(λ) = det(A − λI). The roots of the characteristic equation p(λ) = 0 are called the eigenvalues of A.

For a specific eigenvalue λi, any nonzero vector vi satisfying

(A − λ_i I) v_i = 0, or equivalently, A v_i = λ_i v_i,

is called an eigenvector of A (associated to the eigenvalue λ_i).


 3 0 0    Example 0.4. For the matrix A =  5 1 −1 , find its eigenvalues −2 2 4 and associated eigenvectors.

Answer. The eigenvalues are λ1 = 3, λ2 = 2 with corresponding eigenvec- T T tors v1 = (0, 1, −2) , v2 = (0, 1, −1) .
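Both claims can be verified directly from the definitions (λ is an eigenvalue iff det(A − λI) = 0, and Av = λv for the eigenvectors); a NumPy sketch, not from the slides:

```python
import numpy as np

A = np.array([[ 3., 0.,  0.],
              [ 5., 1., -1.],
              [-2., 2.,  4.]])

# lambda = 3 and lambda = 2 are roots of the characteristic polynomial
assert np.isclose(np.linalg.det(A - 3 * np.eye(3)), 0)
assert np.isclose(np.linalg.det(A - 2 * np.eye(3)), 0)

# Verify the slide's eigenvectors: A v = lambda v
v1 = np.array([0., 1., -2.])
v2 = np.array([0., 1., -1.])
assert np.allclose(A @ v1, 3 * v1)
assert np.allclose(A @ v2, 2 * v2)
```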


All eigenvectors associated to λ_i span a linear subspace, called the eigenspace:

E(λ_i) = {v ∈ R^n : (A − λ_i I) v = 0}.

The dimension g_i of E(λ_i) is called the geometric multiplicity of λ_i, while the degree a_i of the factor (λ − λ_i)^{a_i} in p(λ) is called the algebraic multiplicity of λ_i.

Note that we must have

• Σ_i a_i = n, and

• for all i, 1 ≤ g_i ≤ a_i.


Example 0.5. For the eigenvalues of the matrix on the previous slide, find their algebraic and geometric multiplicities.

Answer. a1 = 2, a2 = 1 and g1 = g2 = 1.


The following theorem indicates that both the trace and determinant of a square matrix can be computed from the eigenvalues of the matrix. Theorem 0.2. Let A be a real square matrix whose eigenvalues are

λ1, . . . , λn (possibly with repetitions). Then

det(A) = Π_{i=1}^{n} λ_i  and  trace(A) = Σ_{i=1}^{n} λ_i.

Example 0.6. For the matrix A defined previously, verify the identities in the above theorem.
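For Example 0.6, the eigenvalues (with repetitions) are 3, 3, 2 from Examples 0.4-0.5, so the product is 18 = det(A) and the sum is 8 = trace(A). A NumPy check, for illustration:

```python
import numpy as np

A = np.array([[ 3., 0.,  0.],
              [ 5., 1., -1.],
              [-2., 2.,  4.]])
eigs = [3., 3., 2.]   # lambda = 3 has algebraic multiplicity 2

assert np.isclose(np.prod(eigs), np.linalg.det(A))   # product = det = 18
assert np.isclose(np.sum(eigs), np.trace(A))         # sum = trace = 8
```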


Similar matrices

Two square matrices of the same size, A, B ∈ R^{n×n}, are said to be similar if there exists an invertible matrix P ∈ R^{n×n} such that

B = PAP−1

Similar matrices have the same

• rank, trace, determinant
• characteristic polynomial
• eigenvalues and their multiplicities (but not eigenvectors)


Diagonalizability of square matrices

Def 0.1. A square matrix A is diagonalizable if it is similar to a diagonal matrix, i.e., there exist an invertible matrix P and a diagonal matrix Λ such that

A = PΛP^{−1}, or equivalently, P^{−1}AP = Λ.

Remark. If we write P = [p1,..., pn] and Λ = diag(λ1, . . . , λn), then the above equation can be rewritten as

AP = PΛ,

or, in columns,

A [p1 ... pn] = [p1 ... pn] diag(λ1, ..., λn).

From this we get that

A p_i = λ_i p_i, 1 ≤ i ≤ n.

This shows that each λ_i is an eigenvalue of A with corresponding eigenvector p_i. Thus, the above factorization of A is called the eigenvalue decomposition of A.


Example 0.7. The matrix A = [0 1; 3 2] is diagonalizable because

[0 1; 3 2] = [1 1; 3 −1] · diag(3, −1) · [1 1; 3 −1]^{−1},

but the matrix B = [0 1; −1 2] is not.


Why is diagonalization important?

We can use instead the diagonal matrix (that is similar to the given matrix) to compute the determinant and eigenvalues (and their algebraic multiplicities), which is a lot simpler.

Additionally, it can help compute matrix powers A^k. To see this, suppose A is diagonalizable, that is, A = PΛP^{−1} for some invertible matrix P and a diagonal matrix Λ. Then

A^2 = PΛP^{−1} · PΛP^{−1} = PΛ^2 P^{−1}
A^3 = PΛP^{−1} · PΛP^{−1} · PΛP^{−1} = PΛ^3 P^{−1}
...
A^k = PΛ^k P^{−1}  (for any positive integer k)
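Using the diagonalization of the matrix from Example 0.7, the power formula reduces A^k to powering the diagonal entries; a NumPy sketch (illustrative, not from the slides):

```python
import numpy as np

A = np.array([[0., 1.],
              [3., 2.]])
P = np.array([[1.,  1.],
              [3., -1.]])          # columns are eigenvectors of A
lam = np.array([3., -1.])          # corresponding eigenvalues

k = 10
# A^k = P Lambda^k P^{-1}: only the diagonal entries are powered
Ak_fast = P @ np.diag(lam**k) @ np.linalg.inv(P)
Ak_slow = np.linalg.matrix_power(A, k)   # repeated multiplication
assert np.allclose(Ak_fast, Ak_slow)
```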


Checking diagonalizability of a square matrix

Theorem 0.3. A matrix A ∈ R^{n×n} is diagonalizable if and only if it has n linearly independent eigenvectors (i.e., Σ g_i = n).

Corollary 0.4. The following matrices are diagonalizable:

• Any matrix whose eigenvalues all have identical geometric and algebraic multiplicities, i.e., g_i = a_i for all i;

• Any matrix with n distinct eigenvalues (g_i = a_i = 1 for all i).

Example 0.8. The matrix B = [0 1; −1 2] is not diagonalizable because it has only one distinct eigenvalue λ1 = 1 with a1 = 2 and g1 = 1.


Orthogonal matrices

An orthogonal matrix is a square matrix Q ∈ R^{n×n} whose inverse coincides with its transpose, i.e., Q^{−1} = Q^T.

In other words, orthogonal matrices Q satisfy QQT = QT Q = I.

For example, the following are orthogonal matrices:

[1 0; 0 1],   (1/√2)[1 1; −1 1],   [2/√6 0 1/√3; −1/√6 1/√2 1/√3; 1/√6 1/√2 −1/√3]
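Orthogonality is quick to check numerically, along with the length-preserving property; a NumPy sketch using the 2 × 2 rotation above (illustrative, not from the slides):

```python
import numpy as np

Q = (1 / np.sqrt(2)) * np.array([[ 1., 1.],
                                 [-1., 1.]])

assert np.allclose(Q @ Q.T, np.eye(2))        # Q Q^T = I
assert np.allclose(Q.T @ Q, np.eye(2))        # Q^T Q = I
assert np.allclose(np.linalg.inv(Q), Q.T)     # inverse equals transpose

# Multiplication by Q preserves lengths (here, a rotation)
x = np.array([3., 4.])
assert np.isclose(np.linalg.norm(Q @ x), np.linalg.norm(x))
```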


Theorem 0.5. A square matrix Q = [q1 ... qn] is orthogonal if and only if its columns form an orthonormal basis for R^n. That is,

q_i^T q_j = 0 if i ≠ j, and q_i^T q_i = 1.

Proof. This is a direct consequence of the following identity

I = Q^T Q = [q1 ... qn]^T [q1 ... qn] = (q_i^T q_j).

Remark. Geometrically, an orthogonal matrix multiplying a vector (i.e., Qx ∈ R^n) represents a rotation of the vector in the space.


Spectral decomposition of symmetric matrices

Theorem 0.6 (The Spectral Theorem). Let A ∈ R^{n×n} be a symmetric matrix. Then there exist an orthogonal matrix Q = [q1 ... qn] and a diagonal matrix Λ = diag(λ1, ..., λn), such that

A = QΛQT (we say that A is orthogonally diagonalizable)

Remark. The converse is also true (i.e., orthogonally diagonalizable matrices must be symmetric).

Remark. This is called the spectral decomposition of A: The λi’s represent eigenvalues of A while the qi’s are the associated eigenvectors (with unit norm and orthogonal to each other).


Remark. One can rewrite the matrix decomposition

A = QΛQT into a sum of rank-1 matrices:

A = [q1 ... qn] · diag(λ1, ..., λn) · [q1 ... qn]^T = Σ_{i=1}^{n} λ_i q_i q_i^T

For convenience, the diagonal elements of Λ are often sorted in decreasing order:

λ1 ≥ λ2 ≥ · · · ≥ λn


Example 0.9. Find the spectral decomposition of the matrix A = [0 2; 2 3].

Answer.

A = (1/√5)[1 −2; 2 1] · diag(4, −1) · ((1/√5)[1 −2; 2 1])^T
  = 4 q1 q1^T + (−1) q2 q2^T,  where q1 = (1/√5)(1, 2)^T, q2 = (1/√5)(−2, 1)^T.


Review of multivariable calculus

Consider the following constrained optimization problem with an equality constraint in R^n (i.e., x ∈ R^n):

max / min f(x) subject to g(x) = b

For example, consider

max / min 8x^2 − 2y subject to x^2 + y^2 = 1,

with f(x, y) = 8x^2 − 2y, g(x, y) = x^2 + y^2, and b = 1, which can be interpreted as finding the extreme values of f(x, y) = 8x^2 − 2y over the unit circle in R^2.


To solve such problems, one can use the method of Lagrange multipliers:

1. Form the Lagrange function (by introducing an extra variable λ)

L(x, λ) = f(x) − λ(g(x) − b)

2. Find all critical points of the Lagrangian L by solving

∂L/∂x = 0 −→ ∇f(x) = λ∇g(x)
∂L/∂λ = 0 −→ g(x) = b

3. Evaluate and compare the function f at all the critical points to select the maximum and/or minimum.


Now, let us solve

max / min 8x^2 − 2y subject to x^2 + y^2 = 1.

By the method of Lagrange multipliers, we have

∂f/∂x = λ ∂g/∂x −→ 16x = λ(2x)
∂f/∂y = λ ∂g/∂y −→ −2 = λ(2y)
g(x) = b −→ x^2 + y^2 = 1

from which we obtain 4 critical points with corresponding function values:

f(0, −1) = 2,  f(0, 1) = −2 (min),  f(±(3/8)√7, −1/8) = 65/8 (max).
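The critical points and extreme values can be sanity-checked against a brute-force sweep of the unit circle; a NumPy sketch (illustrative, not from the slides):

```python
import numpy as np

f = lambda x, y: 8 * x**2 - 2 * y

# The four critical points found by Lagrange multipliers
x_star = 3 * np.sqrt(7) / 8
pts = [(0., -1.), (0., 1.), (x_star, -1/8), (-x_star, -1/8)]
for (x, y) in pts:
    assert np.isclose(x**2 + y**2, 1)        # each lies on the unit circle

assert np.isclose(f(0, -1), 2)
assert np.isclose(f(0, 1), -2)
assert np.isclose(f(x_star, -1/8), 65/8)

# Brute-force check over a dense parametrization of the circle
t = np.linspace(0, 2 * np.pi, 200000)
circle_vals = f(np.cos(t), np.sin(t))
assert np.isclose(circle_vals.max(), 65/8, atol=1e-6)   # max
assert np.isclose(circle_vals.min(), -2, atol=1e-6)     # min
```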


Question 1: What if there are multiple equality constraints?

max / min f(x) subject to g1(x) = b1, . . . , gk(x) = bk

The method of Lagrange multipliers works very similarly:

1. Form the Lagrange function

L(x, λ1, . . . , λk) = f(x) − λ1(g1(x) − b1) − · · · − λk(gk(x) − bk)


2. Find all critical points by solving

∇_x L = 0 −→ ∇f(x) = λ1 ∇g1(x) + ··· + λk ∇gk(x)
∂L/∂λ1 = 0, ..., ∂L/∂λk = 0 −→ g1(x) = b1, ..., gk(x) = bk

3. Evaluate and compare the function at all the critical points.


Question 2: What about inequality constraints?

min f(x) subject to g(x) ≥ b

We will review this part when it is needed.


Next time: Matrix Computing in MATLAB

Be sure to complete the following activities before next class (if you haven’t):

• Install MATLAB on your computer
• Take the 2-hour MATLAB Onramp tutorial¹
• Explore the documentation: Getting Started with MATLAB²

¹ https://www.mathworks.com/learn/tutorials/matlab-onramp.html
² https://www.mathworks.com/help/matlab/getting-started-with-matlab.html
