Multiple Regression:

In real life, simple regression (one predictor) does not happen very often.

In general, more than one predictor variable affects our response.

Multiple regression occurs frequently in the scientific, social, and economic literature.

Consider a data set where we are interested in predicting HDL cholesterol based on weight (WT), systolic blood pressure (SYS BP), blood sugar (GLU), and triglycerides (TRI).

HDL (Yi)   WT (X1i)    SYS BP (X2i)   GLU (X3i)   TRI (X4i)
53 (Y1)    112 (X11)   132 (X21)      91 (X31)    71 (X41)
36         85.7        118            91          153
29         91.5        110            87          82
49         113.9       140            100         138
60         78.1        130            79          60
42         91          112            90          99
45         87.9        118            85          73
42         102.1       128            92          88
24         100.6       120            94          623
42         86.1        130            83          202
42         78.3        118            66          194
54         82.6        114            77          99
41         111.2       170            100         206
44         100.9       134            88          93
48         86.9        124            82          67
45         80.3        150            81          58
38         90.1        122            96          233
32         99.6        122            83          72
...        ...         ...            ...         ...
41 (Yn)    81.1 (X1n)  124 (X2n)      101 (X3n)   82 (X4n)

Each observation follows the model Yi = β0 + β1X1i + β2X2i + β3X3i + β4X4i + εi; for example, Y1 = β0 + β1X11 + β2X21 + β3X31 + β4X41 + ε1.

General Linear Models:

Y = X + 

[ Y1 ]   [ 1  X11  X21  X31  X41 ]   [ β0 ]   [ ε1 ]
[ Y2 ]   [ 1  X12  X22  X32  X42 ]   [ β1 ]   [ ε2 ]
[ ⋮  ] = [ ⋮   ⋮    ⋮    ⋮    ⋮  ]   [ β2 ] + [ ⋮  ]
[ Yn ]   [ 1  X1n  X2n  X3n  X4n ]   [ β3 ]   [ εn ]
                                     [ β4 ]

Y: response vector, X: design matrix, β: parameter vector, ε: error vector.

This is called the matrix notation for the General Linear Model.
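As a rough sketch of how this setup looks in practice (Python with numpy; only the first four rows of the HDL table above are used, and this layout is illustrative rather than part of the notes):

import numpy as np

Y = np.array([53, 36, 29, 49])            # HDL: the response vector
predictors = np.array([
    [112.0, 132, 91,  71],                # WT, SYS BP, GLU, TRI
    [ 85.7, 118, 91, 153],
    [ 91.5, 110, 87,  82],
    [113.9, 140, 100, 138],
])
# Prepend a column of 1s for the intercept term beta_0.
X = np.column_stack([np.ones(len(Y)), predictors])
print(X.shape)    # (4, 5): one row per observation, one column per parameter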

Matrices:

What is a matrix?

- A rectangular array of numbers.

Examples:

A = [ a  b ]      B = [ 1  2  4 ]
    [ c  d ]          [ 3  5  2 ]

Dimension of a matrix: p (= number of rows) by q (= number of columns).

A is 2 by 2 and B is 2 by 3.

What is the dimension of:

4 1 2 4 C=3 2 5 9   4 7 8 1

The number 4 can be thought of as a matrix of dimension 1 by 1, but not vice versa.
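A quick way to check dimensions is numpy's .shape attribute; the numbers standing in for a, b, c, d below are made up, while the entries of B and C are those shown above:

import numpy as np

A = np.array([[1, 2], [3, 4]])                             # numeric stand-in for [a b; c d]
B = np.array([[1, 2, 4], [3, 5, 2]])
C = np.array([[4, 1, 2, 4], [3, 2, 5, 9], [4, 7, 8, 1]])
print(A.shape, B.shape, C.shape)                           # (2, 2) (2, 3) (3, 4)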

Notation: A = [aij] or A= ((aij))

Square Matrix:

A square matrix has the same number of rows and columns.

Examples:

4 1 2  4 1      3 2 5  3 2    4 7 8  2 by 2 3by3

Vector:

A matrix in which one of the dimensions is 1. A matrix with one column is a column vector; a matrix with one row is a row vector.

[ 4 ]
[ 3 ]            [ 4  1  2 ]
[ 4 ]
[ 6 ]

column vector    row vector

Transpose:

Interchanging the rows and the columns of a matrix gives us its transpose.

    [ 4  1  2 ]                 [ 4  3  4 ]
A = [ 3  2  5 ]     A' = Aᵗ =   [ 1  2  7 ]
    [ 4  7  8 ]                 [ 2  5  8 ]

Hence if A=[aij] then A’=[aji]
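In numpy the transpose is the .T attribute; a minimal check with the matrix A above:

import numpy as np

A = np.array([[4, 1, 2], [3, 2, 5], [4, 7, 8]])
print(A.T)    # rows and columns interchanged:
# [[4 3 4]
#  [1 2 7]
#  [2 5 8]]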

Two matrices are equal if all their corresponding elements are equal.

A = B implies aij = bij for all i and j.

Adding Matrices:

We can add matrices of the same dimensions by adding the corresponding elements of the two matrices.

Example:

A = [ 2  3 ],   B = [ 1  8 ],   C = A + B = [  3  11 ]
    [ 6  5 ]        [ 4  9 ]                [ 10  14 ]

Subtracting follows the same rules.

Note that we can add and subtract only matrices of exactly the same dimensions.
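A minimal numpy sketch of the addition example above; subtraction works the same way:

import numpy as np

A = np.array([[2, 3], [6, 5]])
B = np.array([[1, 8], [4, 9]])
print(A + B)    # [[ 3 11]
                #  [10 14]]  -- elementwise; shapes must match exactly
print(A - B)    # subtraction follows the same elementwise rule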

Multiplying matrices:

1. Multiplying matrices with a scalar:

A scalar is an ordinary number, so multiplying a matrix by a scalar is elementwise: each element of the matrix is multiplied by the scalar (see the numeric sketch after the matrix-multiplication examples below).

Eg: c*A = [c*aij]

2. Multiplying matrices by another matrix:

We can form the product A1A2 if the number of columns in A1 is the same as the number of rows in A2.

Each element of the product is formed by multiplying the elements of a row of the first matrix by the corresponding elements of a column of the second and summing the results.

Example:

a b 1 2  1a  3b 2a  4b       c d3 4 1c  3d 2c  4d

1 2 a b 1a  2c 1b  2d        3 4c d  3a  4c 3b  4d

Hence, unlike scalars, the order of multiplication matters. One can multiply A by B as long as the number of columns in A is the same as the number of rows in B.
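A small numeric sketch of both kinds of multiplication (the matrices here are made up for illustration):

import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[5, 6], [7, 8]])
print(2 * A)    # scalar multiplication is elementwise: [[2 4] [6 8]]
print(A @ B)    # [[19 22] [43 50]]
print(B @ A)    # [[23 34] [31 46]] -- in general A @ B != B @ A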

Special Matrices:

Symmetric Matrix:

A matrix A is symmetric if A' = A.

       [ 4  3  4 ]
If A = [ 3  2  7 ], then even if we transpose A we still get back the same matrix.
       [ 4  7  8 ]

Diagonal Matrix:

A diagonal matrix is a square matrix whose only non-zero entries lie along the main diagonal. All other entries are 0.

[ 4  0  0 ]
[ 0  2  0 ]
[ 0  0  8 ]

Identity Matrix:

An identity matrix is a diagonal matrix in which all the elements on the diagonal are 1 and all the other elements are 0.

[ 1  0 ]    [ 1  0  0 ]    [ 1  0 ... 0 ]
[ 0  1 ]    [ 0  1  0 ]    [ 0  1 ... 0 ]
            [ 0  0  1 ]    [ ...        ]
                           [ 0  0 ... 1 ]
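numpy builds these special matrices directly (np.diag, np.eye); a short check using the symmetric and diagonal examples above:

import numpy as np

A = np.array([[4, 3, 4], [3, 2, 7], [4, 7, 8]])
print(np.array_equal(A.T, A))      # True: A is symmetric
D = np.diag([4, 2, 8])             # the diagonal matrix shown above
I = np.eye(3)                      # 3 by 3 identity
print(np.array_equal(D @ I, D))    # True: multiplying by I changes nothing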

Idempotent Matrices:

A matrix is idempotent if the matrix multiplied by itself yields itself. Idempotent matrices are necessarily square, though not necessarily symmetric.

AA = A.
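A quick numeric check (the matrix X here is made up; P is the kind of symmetric projection matrix that reappears in the least squares formulas at the end of these notes):

import numpy as np

X = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])    # any full-rank matrix will do
P = X @ np.linalg.inv(X.T @ X) @ X.T                  # projection onto the columns of X
print(np.allclose(P @ P, P))                          # True: P is idempotent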

Eigenvalues and Eigenvectors:

Consider a symmetric k by k matrix A. The eigenvalues λ1, …, λk of this matrix are the solutions of the determinant equation |A − λI| = 0.

Associated with this idea is the eigenvector: vi is an eigenvector of matrix A, paired with the eigenvalue λi, if

(A − λiI)vi = 0

Linear Dependence:

The columns C1, C2, …, Cc of a matrix are linearly dependent when c scalars k1, …, kc (not ALL zero) can be found such that k1C1 + k2C2 + … + kcCc = 0.

    [ 1  2  4 ]
A = [ 2  3  6 ]
    [ 4  4  8 ]
     C1 C2 C3

In the matrix A above, C3 = 2C2, so taking k1 = 0, k2 = 2, and k3 = −1 (that is, k2 = −2k3) gives 0.

Hence we have linear dependence.

Then we consider the matrix to be linearly dependent. If the equality holds ONLY when all the ki are 0, the matrix is linearly independent.

The number of linearly independent columns (or rows) of a matrix determines its RANK. A matrix whose columns are all linearly independent is called FULL RANK.
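Using the matrix A above, numpy confirms the rank and the dependence among the columns:

import numpy as np

A = np.array([[1, 2, 4], [2, 3, 6], [4, 4, 8]])
print(np.linalg.matrix_rank(A))    # 2: only two linearly independent columns
print(2 * A[:, 1] - A[:, 2])       # [0 0 0]: C3 = 2*C2, the dependence used above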

Dividing Matrices:

We cannot divide matrices the way we divide regular numbers. Instead, we multiply by an inverse: to "divide" A by B, we multiply A by the inverse of B.

Inverting matrices:

The inverse of a matrix A, denoted A⁻¹, is such that AA⁻¹ = I.

Also, A⁻¹A = I.

Example:

a b  1  d - b A    Then, A-1    c d ad  bc - c a 

Hence, in our context we are interested in the least squares estimates for Y = Xβ + ε. These are given by

ε'ε = (Y - Xβ)'(Y - Xβ)

β̂ = (X'X)⁻¹X'Y

Var(β̂) = (X'X)⁻¹σ²

To remember things and relate them to scalar terms:

Q = ε'ε = (Y - Xβ)'(Y - Xβ), X'X plays the role of Sxx, and X'Y plays the role of Sxy, so β̂ = (X'X)⁻¹X'Y is the matrix analogue of β̂1 = Sxy/Sxx from simple regression.
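Putting the pieces together, a minimal sketch of the least squares formulas on simulated data (the coefficients and noise level below are made up; np.linalg.solve is used rather than forming (X'X)⁻¹ explicitly, which is numerically safer):

import numpy as np

rng = np.random.default_rng(0)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 4))])    # intercept + 4 predictors
beta = np.array([10.0, 1.0, -2.0, 0.5, 3.0])                  # "true" parameters (made up)
Y = X @ beta + rng.normal(scale=2.0, size=n)                  # Y = X beta + epsilon

beta_hat = np.linalg.solve(X.T @ X, X.T @ Y)                  # beta_hat = (X'X)^-1 X'Y
resid = Y - X @ beta_hat
sigma2_hat = resid @ resid / (n - X.shape[1])                 # estimate of sigma^2
var_beta_hat = sigma2_hat * np.linalg.inv(X.T @ X)            # Var(beta_hat) = (X'X)^-1 sigma^2
print(beta_hat)                                               # should be close to beta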