
CORRELATION IN SIMPLE LINEAR REGRESSION

Rudy A. Gideon
The University of Montana
Missoula, MT 59812

Many people who do data analysis take only a few classes in statistics and hence, in general, are introduced only to classical statistical methods: least squares, normal theory, and possibly maximum likelihood. The motivation for most of these techniques is the maximization of a function of the data with respect to some parameter (e.g., a mean, variance, or slope). Calculus is used: derivatives are taken and set equal to zero, and the resulting equations are solved. Consequently, many data analysts do not know about robust, nonparametric, or alternative statistical methods that are often better suited to avoiding misinterpretation of one's data. Many of these latter methods cannot be developed by calculus! A possible way out of this dilemma is a method of instruction that allows both classical and other estimation techniques to be developed simultaneously.

Correlation coefficients offer a very general method of estimating parameters and testing hypotheses. The motivation for their use is based on n-dimensional geometry and the generalization of the parallelogram law and perpendicularity in a Hilbert space. Classical statistical methods are represented by Pearson's correlation coefficient and the cosine function. Other methods, such as median methods, are represented by Kendall's tau or by one or more absolute value correlation coefficients; still other techniques, based on equal area or volume, are represented by the Greatest Deviation correlation coefficient. The regression approach is first shown using Pearson's and Kendall's correlation coefficients. Then n-dimensional geometry and orthogonality are used for motivation, a third correlation, the Greatest Deviation, is introduced, and finally some simple linear regression examples illustrate these ideas. After the development of simple linear regression, these techniques can be broadened to location and scale estimation.

1. Introduction, Least Squares, Pearson's r, and Kendall's Tau

The stage for simple linear regression is set by reviewing the relationship between least squares and Pearson's correlation coefficient, $r_p$. Let continuous bivariate data be given as x-y vectors,

$$\{x_i, y_i\}_{i=1}^{n} = (\mathbf{x}, \mathbf{y}) = \begin{pmatrix} x_1 & y_1 \\ x_2 & y_2 \\ \vdots & \vdots \\ x_n & y_n \end{pmatrix}.$$

Let the model be the usual $y = \alpha + \beta x + e$, where $\alpha$ is the intercept, $\beta$ is the slope, and the random error $e$ has expectation zero. Thus $E(Y \mid x) = \alpha + \beta x$. Consider the random variables $X$ and $Y - \beta X$. Since $Y - \beta X = \alpha + e$ and the error random variable is assumed independent of the random variable $X$, for any correlation coefficient $r$ and a population model, $r(X, Y - \beta X)$ has a null distribution with expectation zero. Because correlation coefficients are location invariant, the intercept parameter $\alpha$ is not involved in the estimation of the slope $\beta$, and the estimate $\hat{b}$ is a slope that makes the residuals $\mathbf{y} - \hat{b}\mathbf{x}$ uncorrelated with $\mathbf{x}$. The estimate of $\beta$ is obtained by solving the sample equivalent of the expectation being zero,

$$r(\mathbf{x}, \mathbf{y} - b\mathbf{x}) = 0. \qquad (1)$$

The first example shows, as is widely known, that Pearson's $r_p$ gives the same result as least squares in simple linear regression.

Example 1: Pearson's $r_p$

For this case let
$$s_x^2 = \frac{\sum (x_i - \bar{x})^2}{n-1}, \quad s_y^2 = \frac{\sum (y_i - \bar{y})^2}{n-1}, \quad s_{xy} = \frac{\sum (x_i - \bar{x})(y_i - \bar{y})}{n-1},$$
the last being the sample covariance. Then $r_p(\mathbf{x}, \mathbf{y}) = \dfrac{s_{xy}}{s_x s_y}$.

To solve equation (1) using $r_p$ we obtain
$$r_p(\mathbf{x}, \mathbf{y} - b\mathbf{x}) = \frac{s_{x,\,y-bx}}{s_x\, s_{y-bx}} = 0, \quad \text{or} \quad s_{x,\,y-bx} = s_{xy} - b\, s_x^2 = 0.$$
The final result is
$$\hat{b} = \frac{s_{xy}}{s_x^2} = r_p(\mathbf{x}, \mathbf{y})\,\frac{s_y}{s_x}.$$
This is, of course, also the least squares solution.

The intercept estimate also comes from a population model. We want $E(Y - \beta X - \alpha) = 0$, so we make the sum of the residuals zero. This leads to
$$0 = \sum e_i = \sum (y_i - \hat{b}x_i - a) = \sum y_i - \hat{b}\sum x_i - na.$$
The solution for $a$ is $\hat{a} = \bar{y} - \hat{b}\bar{x}$.
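To make the equivalence concrete, here is a minimal sketch in Python. The function name and the simulated data are illustrative assumptions, not from the paper; it solves $s_{xy} - b\, s_x^2 = 0$ directly and checks the result against an ordinary least squares fit.

```python
import numpy as np

def pearson_slope(x, y):
    """Solve r_p(x, y - b*x) = 0, i.e. s_xy - b * s_x^2 = 0, for b."""
    s_xy = np.cov(x, y, ddof=1)[0, 1]   # sample covariance s_xy
    s_x2 = np.var(x, ddof=1)            # sample variance s_x^2
    return s_xy / s_x2

# illustrative data from the model y = a + b*x + e
rng = np.random.default_rng(0)
x = rng.normal(size=20)
y = 2.0 + 3.0 * x + rng.normal(scale=0.5, size=20)

b_hat = pearson_slope(x, y)
a_hat = y.mean() - b_hat * x.mean()     # intercept from the zero-sum residual condition
b_ls, a_ls = np.polyfit(x, y, 1)        # least squares fit for comparison
assert np.isclose(b_hat, b_ls) and np.isclose(a_hat, a_ls)
```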
There is a third way to motivate the least squares (Pearson) method, and it uses the idea of minimizing the distance from perfect negative correlation plus the distance from perfect positive correlation. This is important because this method alone, and not the minimization of residuals, generalizes to NPCCs (nonparametric correlation coefficients). Let
$$f_n(b) = \sum_{i=1}^{n} (x_i + (y_i - b x_i))^2 \ \text{(dpnc)} \quad \text{and} \quad f_p(b) = \sum_{i=1}^{n} (x_i - (y_i - b x_i))^2 \ \text{(dppc)}.$$
Then $\min_b \big(f_n(b) + f_p(b)\big)$ is easily shown to be equivalent to minimizing the usual residual sum of squares. It will be shown that setting Kendall's correlation coefficient equal to zero and solving for $b$ is equivalent to an analogous minimization.

Example 2: Kendall's tau, or $r_k$

To solve equation (1) for Kendall's tau, we must first review how to calculate $r_k$, assuming no tied values. Let the x data be ordered $x_1 < x_2 < \cdots < x_n$, and relabel the $y_j$ that corresponds to $x_1$ as $y_1$, and so on. The data can then be listed as they would be graphed from left to right:
$$\begin{matrix} x_1 < x_2 < \cdots < x_n \\ y_1, \; y_2, \; \ldots, \; y_n \end{matrix}$$

For any pair of data points $(x_i, y_i), (x_j, y_j)$, the slope of the line between them is $l_{ji} = \dfrac{y_j - y_i}{x_j - x_i}$, and the pair is said to be concordant if this slope is positive but discordant if it is negative. Note that if $j > i$ then $x_j - x_i > 0$, so the concordance of the $(i, j)$ pair depends solely on $\operatorname{sign}(y_j - y_i)$. For the $\binom{n}{2}$ pairs of data points, let
$$C = \#\text{concordant} = \sum_{i=1}^{n-1} \sum_{j>i} \big(\operatorname{sign}(y_j - y_i) + 1\big)/2,$$
$$D = \#\text{discordant} = -\sum_{i=1}^{n-1} \sum_{j>i} \big(\operatorname{sign}(y_j - y_i) - 1\big)/2.$$
By assumption there are no ties, so $C + D = \binom{n}{2}$. Kendall's tau, $r_k$, is defined to be
$$r_k = \frac{C - D}{\binom{n}{2}} = 1 - \frac{2D}{\binom{n}{2}} = \frac{2C}{\binom{n}{2}} - 1.$$

We now solve equation (1) with $r_k$. The $\binom{n}{2}$ slopes $l_{ji}$ are sometimes called elementary estimates of the slope $\beta$. Let $ES = \{l_{ji}, \; j > i\}$ be this set of elementary slopes. To solve $r_k(\mathbf{x}, \mathbf{y} - b\mathbf{x}) = 0$ we need $C = D$. Now recall
$$\begin{matrix} x_1 < x_2 < \cdots < x_n \\ y_1 - b x_1, \; y_2 - b x_2, \; \ldots, \; y_n - b x_n \end{matrix}$$
Note that for $b$ very negative, near $-\infty$, all the $(i, j)$ pairs are concordant and $r_k(\mathbf{x}, \mathbf{y} - b\mathbf{x}) = +1$. On the other hand, if $b$ is near $+\infty$, all pairs are discordant and $r_k(\mathbf{x}, \mathbf{y} - b\mathbf{x}) = -1$. Thus, as $b$ increases continuously from near $-\infty$, the $(i, j)$ pair changes from concordant to discordant at $y_j - b x_j = y_i - b x_i$, that is, at $l_{ji} = \dfrac{y_j - y_i}{x_j - x_i} = b$. It follows that when $b$ increases to $\operatorname{median}(ES)$, then $C = D$ and equation (1) is satisfied. The solution to equation (1) for Kendall's tau is therefore
$$\hat{b} = \operatorname{median}(l_{ji}), \quad \text{because} \quad r_k(\mathbf{x}, \mathbf{y} - \hat{b}\mathbf{x}) = 0.$$

For the intercept estimate, we choose $\operatorname{median}(e_i) = 0$, where $e_i = y_i - \hat{b} x_i - a$, $i = 1, 2, \ldots, n$. This implies that $\hat{a} = \operatorname{median}(y_i - \hat{b} x_i)$. The motivation for this location estimate comes from ideas contained in scale and location papers.

Just like least squares, Kendall's method minimizes the sum of squared distances from perfect correlation, with $C$ playing the role of dpnc and $D$ that of dppc; $C$ and $D$ are the concordance and discordance counts between the vectors $\mathbf{x}$ and $\mathbf{y} - b\mathbf{x}$. At $b$ near minus infinity, $C = \binom{n}{2}$ and $D = 0$. As $b$ increases, $C$ decreases by one at each elementary slope and $D$ increases by one. By a simple example it is easy to see that $C^2 + D^2$ is minimized when $C = D$, or when $r_k(\mathbf{x}, \mathbf{y} - b\mathbf{x}) = 0$. In both of these examples there is an explicit solution to the regression equation (1).
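As a concrete illustration, here is a minimal sketch in Python of the Kendall slope estimate as the median of the elementary slopes, with the intercept taken as the median residual; the data and helper names are illustrative assumptions, not from the paper. Note that when the median falls between two elementary slopes, $C - D$ at $\hat{b}$ is only as close to zero as its discreteness allows.

```python
import numpy as np
from itertools import combinations

def kendall_slope(x, y):
    """Median of the elementary slopes l_ji = (y_j - y_i)/(x_j - x_i), j > i."""
    slopes = [(y[j] - y[i]) / (x[j] - x[i])
              for i, j in combinations(range(len(x)), 2)]
    return np.median(slopes)

def concordant_discordant(x, y):
    """C and D counts from sign(y_j - y_i) over the x-sorted data, as in the text."""
    order = np.argsort(x)
    y = np.asarray(y)[order]
    signs = [np.sign(y[j] - y[i]) for i, j in combinations(range(len(y)), 2)]
    C = sum((s + 1) / 2 for s in signs)
    D = sum(-(s - 1) / 2 for s in signs)
    return C, D

# illustrative data from the model y = a + b*x + e
rng = np.random.default_rng(1)
x = np.sort(rng.normal(size=15))
y = 1.0 + 2.0 * x + rng.normal(scale=0.3, size=15)

b_hat = kendall_slope(x, y)
a_hat = np.median(y - b_hat * x)              # intercept: makes the median residual zero
C, D = concordant_discordant(x, y - b_hat * x)
print(b_hat, a_hat, C - D)                    # C - D at (or next to) zero
```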
Before giving a third example, in which no explicit solution to equation (1) is known to exist, we will look at the n-dimensional view of this regression.

2. A General n-Dimensional Correlation Interpretation of Regression

In Figure 1 the data vectors $\mathbf{x}$ and $\mathbf{y}$ are represented as vectors in Euclidean n-space. Pearson's $r_p$ is the cosine of the angle $a$ between the vectors $\mathbf{x}$ and $\mathbf{y}$, or, as shown in Gideon (1998),
$$r_p = \cos^2\frac{a}{2} - \sin^2\frac{a}{2} = \frac{\|\mathbf{x}+\mathbf{y}\|^2}{4} - \frac{\|\mathbf{x}-\mathbf{y}\|^2}{4}.$$
In the figure, the angle $a/2$ is the angle between the vectors $\mathbf{x}$ and $\mathbf{x} + \mathbf{y}$. To find the estimate of the slope $b$, the usual interpretation is to project the vector $\mathbf{y}$ onto $\mathbf{x}$; the projection occurs at $b\mathbf{x}$ on $\mathbf{x}$. This is equivalent to determining $b$ so that $\operatorname{length}(\mathbf{x}+\mathbf{y}-b\mathbf{x}) = \operatorname{length}(\mathbf{x}-(\mathbf{y}-b\mathbf{x}))$; the corresponding vectors are shown in the figure.

Correlation is a function of standardized data, and so, without a change of notation, we ask the reader to think of all n-dimensional vectors as standardized (centered at zero and of length 1). With Pearson's r this is the usual normalization. We interpret $\operatorname{length}(\mathbf{x}+\mathbf{y})$ as the distance from perfect negative correlation (dpnc), with a maximum of 2 when $\mathbf{y} = \mathbf{x}$. Likewise, $\operatorname{length}(\mathbf{x}-\mathbf{y})$ is the distance from perfect positive correlation (dppc), with a maximum length of 2. This idea is elaborated in Gideon (1998).
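These vector identities are easy to verify numerically. The following sketch, under assumed illustrative data, standardizes two vectors as described, computes dpnc and dppc, and checks both that $\|\mathbf{x}+\mathbf{y}\|^2/4 - \|\mathbf{x}-\mathbf{y}\|^2/4$ equals the cosine of the angle between them and that the $b$ equalizing the two lengths above is the projection coefficient.

```python
import numpy as np

def standardize(v):
    """Center at zero and scale to unit length, as described in the text."""
    v = v - v.mean()
    return v / np.linalg.norm(v)

rng = np.random.default_rng(2)
x = standardize(rng.normal(size=25))
y = standardize(rng.normal(size=25))

dpnc = np.linalg.norm(x + y)   # distance from perfect negative correlation, max 2 at y = x
dppc = np.linalg.norm(x - y)   # distance from perfect positive correlation, max 2 at y = -x
r_p = dpnc**2 / 4 - dppc**2 / 4
assert np.isclose(r_p, np.dot(x, y))   # cosine of the angle between unit vectors x and y

# b chosen so that ||x + (y - b*x)|| = ||x - (y - b*x)||: expanding both squared
# lengths gives x.(y - b*x) = 0, i.e. b = x.y for unit x (the projection coefficient)
b = np.dot(x, y)
assert np.isclose(np.linalg.norm(x + y - b*x), np.linalg.norm(x - (y - b*x)))
```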