
The Transpose

Learning Goals: to expose some properties of the transpose

We would like to see what happens with our factorization if we get zeroes in the pivot positions and have to employ row exchanges. To do this we need to look at permutation matrices. But to study these effectively, we need to know something about the transpose. So the transpose of a matrix is what you get by swapping rows for columns. In symbols,

$A^T_{ij} = A_{ji}$. By example, $\begin{bmatrix} 2 & 0 & 4 \\ -1 & 3 & 5 \end{bmatrix}^T = \begin{bmatrix} 2 & -1 \\ 0 & 3 \\ 4 & 5 \end{bmatrix}$.

Let's take a look at how the transpose interacts with matrix arithmetic:

• $(A + B)^T = A^T + B^T$. This is simple in symbols, because $(A + B)^T_{ij} = (A + B)_{ji} = A_{ji} + B_{ji} = A^T_{ij} + B^T_{ij} = (A^T + B^T)_{ij}$. In practical terms, it just doesn't matter if we add first and then flip or flip first and then add.

• $(AB)^T = B^T A^T$. We can do this in symbols, but it really doesn't help explain why this is true: $(AB)^T_{ij} = (AB)_{ji} = \sum_{k=1}^n a_{jk} b_{ki} = \sum_{k=1}^n b^T_{ik} a^T_{kj} = (B^T A^T)_{ij}$. To get a better idea of what is really going on, look at what happens if B is a single column b. Then Ab is a combination of the columns of A. If we transpose this, we get a row which is a combination of the rows of $A^T$. In other words, $(Ab)^T = b^T A^T$. Then we get the whole picture of $(AB)^T$ by placing several columns side-by-side, which is putting a whole bunch of rows on top of each other in $B^T$.

• $(A^{-1})^T = (A^T)^{-1}$. This is easily seen by taking the transpose of $AA^{-1} = I$. Since I is its own transpose, we get $I = (AA^{-1})^T = (A^{-1})^T A^T$. So $(A^{-1})^T$ must be the inverse of $A^T$. That is, $(A^T)^{-1} = (A^{-1})^T$.

• $(A^T)^T = A$. This should be obvious.
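All four of these identities are easy to check numerically. Here is a minimal sketch, assuming NumPy; B and C are arbitrary matrices chosen only so the shapes work out:

```python
import numpy as np

A = np.array([[2, 0, 4],
              [-1, 3, 5]])        # the example above
print(A.T)                        # [[ 2 -1] [ 0  3] [ 4  5]]

rng = np.random.default_rng(0)
B = rng.random((2, 3))            # same shape as A, for the sum rule
C = rng.random((3, 3))            # square, so (almost surely) invertible

print(np.allclose((A + B).T, A.T + B.T))                    # (A+B)^T = A^T + B^T
print(np.allclose((A @ C).T, C.T @ A.T))                    # (AB)^T  = B^T A^T
print(np.allclose(np.linalg.inv(C).T, np.linalg.inv(C.T)))  # (A^-1)^T = (A^T)^-1
print(np.allclose(A.T.T, A))                                # (A^T)^T = A
```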

One very important aspect of this is that transposes play a role in the dot product. Specifically, $x \cdot y = x^T y$. That is, the dot has been replaced by a transpose. Something that is really important later on is how this interacts with matrix multiplication: $(Ax) \cdot y = (Ax)^T y = x^T A^T y = x \cdot (A^T y)$. In other words, we can move a matrix from one vector to the other in a dot product if we take its transpose.
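A quick numerical check of this last identity (a sketch, assuming NumPy; the sizes are arbitrary as long as A is m by n, x has n entries, and y has m):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((3, 2))   # an arbitrary 3x2 matrix
x = rng.random(2)
y = rng.random(3)

# (Ax).y == x.(A^T y): the matrix hops across the dot product as its transpose
print(np.allclose(np.dot(A @ x, y), np.dot(x, A.T @ y)))   # True
```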

Transposes and LDU

We can apply the transpose to both sides of the factorization $A = LDU$ to get $A^T = U^T D^T L^T$. Note that when we flip the matrices over, an upper triangular matrix becomes lower triangular and vice versa. The diagonal matrix D is its own transpose. So we already have the LDU factorization of $A^T$. Note that this doesn't quite work if we stay in the $A = LU$ form, because $U^T$ doesn't have ones on its diagonal.

Symmetric matrices

We have seen twice already (the identity I and a diagonal D) that it is convenient for a matrix to be its own transpose. This can only happen if a matrix is square, and it is special enough that we give it a name:

Definition: a matrix is symmetric if it equals its own transpose.

For example, $\begin{bmatrix} 1 & 3 & 4 \\ 3 & 7 & 8 \\ 4 & 8 & 5 \end{bmatrix}$ is symmetric.

Symmetric matrices have very special LDU factorizations. We know that $A = LDU$ and that $A^T = U^T D^T L^T$. But $A = A^T$, and we also know that LDU factorizations are unique. So $L^T = U$, and we have $A = LDL^T$. For example,
$$\begin{bmatrix} 1 & 3 & 4 \\ 3 & 7 & 8 \\ 4 & 8 & 5 \end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 \\ 3 & 1 & 0 \\ 4 & 2 & 1 \end{bmatrix} \begin{bmatrix} 1 & 0 & 0 \\ 0 & -2 & 0 \\ 0 & 0 & -3 \end{bmatrix} \begin{bmatrix} 1 & 3 & 4 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{bmatrix}.$$
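We can check this worked example numerically. A minimal sketch, assuming NumPy; L and D are typed in from the factorization above rather than computed by elimination:

```python
import numpy as np

A = np.array([[1, 3, 4],
              [3, 7, 8],
              [4, 8, 5]])

print(np.allclose(A, A.T))          # True: A equals its own transpose

L = np.array([[1, 0, 0],
              [3, 1, 0],
              [4, 2, 1]])           # the multipliers from elimination
D = np.diag([1, -2, -3])            # the pivots on the diagonal

print(np.allclose(L @ D @ L.T, A))  # True: A = L D L^T, with U = L^T
```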

Creating symmetric matrices

Sometimes we wish to "symmetrize" a matrix. We can do this by using the interaction of transposes with matrix arithmetic (see the sketch after this list):

• For any matrix A, both $AA^T$ and $A^TA$ are symmetric. This is easy to show, for $(AA^T)^T = (A^T)^T A^T = AA^T$. These two matrices are probably not equal (in fact, unless A is square, they aren't even the same size).

• For square matrices, $A + A^T$ is symmetric.

The combinations $AA^T$ and $A^TA$ come up a lot, as we shall see. Note that for a single vector x, $x^Tx$ is simply $x \cdot x = \|x\|^2$.
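A short sketch of these symmetrizing recipes, assuming NumPy; the helper is_symmetric is a hypothetical convenience, not a library function:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((2, 3))            # a non-square matrix

def is_symmetric(M):
    return M.shape[0] == M.shape[1] and np.allclose(M, M.T)

print(is_symmetric(A @ A.T))      # True: a 2x2 symmetric matrix
print(is_symmetric(A.T @ A))      # True: a 3x3 one -- not even the same size

S = rng.random((3, 3))            # a square matrix
print(is_symmetric(S + S.T))      # True

x = rng.random(4)
print(np.isclose(x @ x, np.linalg.norm(x) ** 2))  # x^T x = x.x = ||x||^2
```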