
MASTER OF SCIENCE IN ANALYTICS
2014 EMPLOYMENT REPORT
Results at graduation, May 2014

Number of graduates: 79
Number of graduates seeking new employment: 75
Percent with one or more offers of employment by graduation: 100
Percent placed by graduation: 100
Number of employers interviewing: 138
Average number of initial job interviews per student: 13
Percent of all interviews arranged by Institute: 92
Percent of graduates with 2 or more job offers: 90
Percent of graduates with 3 or more job offers: 61
Percent of graduates with 4 or more job offers: 40
Average base salary offer ($): 96,600
Median base salary offer ($): 95,000
Average base salary offers – candidates with job experience ($): 100,600
Range of base salary offers – candidates with job experience ($): 80,000-135,000
Percent of graduates with prior professional work experience: 50
Average base salary offers – candidates without experience ($): 89,000
Range of base salary offers – candidates without experience ($): 75,000-110,000
Percent of graduates receiving a signing bonus: 65
Average amount of signing bonus ($): 12,200
Percent remaining in NC: 59
Percent of graduates sharing salary data: 95
Number of reported job offers: 246
Percent of reported job offers based in U.S.: 100

MSA 2015
LINEAR ALGEBRA
Author: Shaina Race

CONTENTS

1 The Basics
  1.1 Conventional Notation
    1.1.1 Matrix Partitions
    1.1.2 Special Matrices and Vectors
    1.1.3 n-space
  1.2 Vector Addition and Scalar Multiplication
  1.3 Exercises
2 Norms, Inner Products and Orthogonality
  2.1 Norms and Distances
  2.2 Inner Products
    2.2.1 Covariance
    2.2.2 Mahalanobis Distance
    2.2.3 Angular Distance
    2.2.4 Correlation
  2.3 Orthogonality
  2.4 Outer Products
3 Linear Combinations and Linear Independence
  3.1 Linear Combinations
  3.2 Linear Independence
    3.2.1 Determining Linear Independence
  3.3 Span of Vectors
4 Basis and Change of Basis
5 Least Squares
6 Eigenvalues and Eigenvectors
  6.1 Diagonalization
  6.2 Geometric Interpretation of Eigenvalues and Eigenvectors
7 Principal Components Analysis
  7.1 Comparison with Least Squares
  7.2 Covariance or Correlation Matrix?
  7.3 Applications of Principal Components
    7.3.1 PCA for dimension reduction
8 Singular Value Decomposition (SVD)
  8.1 Resolving a Matrix into Components
    8.1.1 Data Compression
    8.1.2 Noise Reduction
    8.1.3 Latent Semantic Indexing
9 Advanced Regression Techniques
  9.1 Biased Regression
    9.1.1 Principal Components Regression (PCR)
    9.1.2 Ridge Regression

CHAPTER 1
THE BASICS

1.1 Conventional Notation

Linear Algebra has some conventional ways of representing certain types of numerical objects. Throughout this course, we will stick to the following basic conventions:

• Bold and uppercase letters like A, X, and U will be used to refer to matrices.
• Occasionally, the size of a matrix will be specified by subscripts, like $\mathbf{A}_{m\times n}$, which means that A is a matrix with m rows and n columns.
• Bold and lowercase letters like x and y will be used to refer to vectors. Unless otherwise specified, these vectors will be thought of as columns, with $\mathbf{x}^T$ and $\mathbf{y}^T$ referring to the row equivalents.
• The individual elements of a vector or matrix will often be referred to with subscripts, so that $A_{ij}$ (or sometimes $a_{ij}$) denotes the element in the $i^{th}$ row and $j^{th}$ column of the matrix A. Similarly, $x_k$ denotes the $k^{th}$ element of the vector x. These references to individual elements are not generally bolded because they refer to scalar quantities.
• Scalar quantities are written as unbolded Greek letters like $\alpha$, $\delta$, and $\lambda$.
• The trace of a square matrix $\mathbf{A}_{n\times n}$, denoted Tr(A) or Trace(A), is the sum of the diagonal elements of A,
  $$\text{Tr}(\mathbf{A}) = \sum_{i=1}^{n} A_{ii}.$$
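As a minimal computational sketch of these conventions (using NumPy, with arbitrary example values that are not part of the text), a matrix, its elements, its transpose, and its trace look like this:

    import numpy as np

    # A 2x3 matrix A_{2x3} and a vector x in R^3 (values are arbitrary)
    A = np.array([[1., 2., 3.],
                  [4., 5., 6.]])   # m = 2 rows, n = 3 columns
    x = np.array([7., 8., 9.])

    # Individual elements A_ij and x_k (NumPy indexes from 0)
    a_12 = A[0, 1]                 # element in the 1st row, 2nd column -> 2.0
    x_3 = x[2]                     # 3rd element of x -> 9.0

    # x^T as an explicit row vector, and the transpose of A
    x_row = x.reshape(1, -1)       # shape (1, 3)
    A_T = A.T                      # shape (3, 2)

    # Trace of a square matrix: the sum of its diagonal elements
    B = A @ A_T                    # 2x2, so the trace is defined
    print(np.trace(B) == np.sum(np.diag(B)))   # True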
Beyond these basic conventions, there are other common notational tricks that we will become familiar with. The first of these is writing a partitioned matrix.

1.1.1 Matrix Partitions

We will often want to consider a matrix as a collection of either rows or columns rather than individual elements. As we will see in the next chapter, when we partition matrices in this form, we can view their multiplication in simplified form. This often leads us to a new view of the data which can be helpful for interpretation. When we write
$$\mathbf{A} = (\mathbf{A}_1|\mathbf{A}_2|\dots|\mathbf{A}_n)$$
we are viewing the matrix A as a collection of column vectors, $\mathbf{A}_i$, in the following way:
$$\mathbf{A} = (\mathbf{A}_1|\mathbf{A}_2|\dots|\mathbf{A}_n) = \begin{pmatrix} \uparrow & \uparrow & \uparrow & \cdots & \uparrow \\ \mathbf{A}_1 & \mathbf{A}_2 & \mathbf{A}_3 & \cdots & \mathbf{A}_n \\ \downarrow & \downarrow & \downarrow & \cdots & \downarrow \end{pmatrix}$$
Similarly, we can write A as a collection of row vectors:
$$\mathbf{A} = \begin{pmatrix} \mathbf{A}_1 \\ \mathbf{A}_2 \\ \vdots \\ \mathbf{A}_m \end{pmatrix} = \begin{pmatrix} \longleftarrow & \mathbf{A}_1 & \longrightarrow \\ \longleftarrow & \mathbf{A}_2 & \longrightarrow \\ & \vdots & \\ \longleftarrow & \mathbf{A}_m & \longrightarrow \end{pmatrix}$$
Sometimes, we will want to refer to both rows and columns in the same context. The above notation is not sufficient for this, as we would have $\mathbf{A}_j$ referring to either a column or a row. In these situations, we may use $\mathbf{A}_{\star j}$ to reference the $j^{th}$ column and $\mathbf{A}_{i\star}$ to reference the $i^{th}$ row:
$$\begin{array}{c|ccccc}
 & \mathbf{A}_{\star 1} & \mathbf{A}_{\star 2} & \cdots & & \mathbf{A}_{\star n} \\ \hline
\mathbf{A}_{1\star} & a_{11} & a_{12} & \cdots & \cdots & a_{1n} \\
\vdots & \vdots & & & & \vdots \\
\mathbf{A}_{i\star} & a_{i1} & \cdots & a_{ij} & \cdots & a_{in} \\
\vdots & \vdots & & & & \vdots \\
\mathbf{A}_{m\star} & a_{m1} & \cdots & \cdots & \cdots & a_{mn}
\end{array}$$

1.1.2 Special Matrices and Vectors

The bold capital letter I is used to denote the identity matrix. Sometimes this matrix has a single subscript to specify its size. More often, the size of the identity is implied by the matrix equation in which it appears.
$$\mathbf{I}_4 = \begin{pmatrix} 1&0&0&0\\ 0&1&0&0\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}$$
The bold lowercase $\mathbf{e}_j$ is used to refer to the $j^{th}$ column of I. It is simply a vector of zeros with a one in the $j^{th}$ position. We do not often specify the size of the vector $\mathbf{e}_j$; the number of elements is generally assumed from the context of the problem.
$$\mathbf{e}_j = \begin{pmatrix} 0 \\ \vdots \\ 0 \\ 1 \\ 0 \\ \vdots \\ 0 \end{pmatrix} \leftarrow j^{th} \text{ row}$$
The vector e with no subscript refers to a vector of all ones.
$$\mathbf{e} = \begin{pmatrix} 1\\ 1\\ 1\\ \vdots\\ 1 \end{pmatrix}$$
A diagonal matrix is a matrix for which the off-diagonal elements, $A_{ij},\ i \neq j$, are zero. For example:
$$\mathbf{D} = \begin{pmatrix} s_1 & 0 & 0 & 0\\ 0 & s_2 & 0 & 0\\ 0 & 0 & s_3 & 0\\ 0 & 0 & 0 & s_4 \end{pmatrix}$$
Since the off-diagonal elements are 0, we need only define the diagonal elements for such a matrix. Thus, we will frequently write $\mathbf{D} = \text{diag}\{s_1, s_2, s_3, s_4\}$ or simply $D_{ii} = s_i$.
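A similar minimal NumPy sketch, again with arbitrary values chosen only for illustration, shows how the partition notation and these special matrices can be built in code:

    import numpy as np

    A = np.array([[1., 2., 3.],
                  [4., 5., 6.],
                  [7., 8., 9.]])

    # Partitions: A_{*j} is the j-th column and A_{i*} is the i-th row
    col_2 = A[:, 1]        # second column of A
    row_3 = A[2, :]        # third row of A

    # The identity matrix I_4, its j-th column e_j, and the ones vector e
    I4 = np.eye(4)
    e_2 = I4[:, 1]         # zeros with a one in the 2nd position
    e = np.ones(4)         # vector of all ones

    # A diagonal matrix D = diag{s1, s2, s3, s4}
    s = np.array([2., 3., 5., 7.])
    D = np.diag(s)         # np.diag builds the matrix from the diagonal entries
    print(np.diag(D))      # applied to a matrix, np.diag extracts its diagonal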
1.1.3 n-space

You are already familiar with the concept of "ordered pairs" or coordinates $(x_1, x_2)$ on the two-dimensional plane (in Linear Algebra, we call this plane "2-space"). Fortunately, we do not live in a two-dimensional world! Our data will more often consist of measurements on a number (let's call that number n) of variables, so our data points belong to what is known as n-space. They are represented by n-tuples, which are nothing more than ordered lists of numbers: $(x_1, x_2, x_3, \dots, x_n)$. An n-tuple defines a vector with the same n elements, and so these two concepts should be thought of interchangeably. The only difference is that the vector has a direction, away from the origin and toward the n-tuple.

You will recall that the symbol $\mathbb{R}$ is used to denote the set of real numbers. $\mathbb{R}$ is simply 1-space: it is a set of vectors with a single element. In this sense any real number, x, has a direction: if it is positive, it is to one side of the origin; if it is negative, it is to the opposite side. That number, x, also has a magnitude: $|x|$ is the distance between x and the origin, 0.

n-space (the set of real n-tuples) is denoted $\mathbb{R}^n$. In set notation, the formal mathematical definition is simply
$$\mathbb{R}^n = \{(x_1, x_2, \dots, x_n) : x_i \in \mathbb{R},\ i = 1, \dots, n\}.$$
We will often use this notation to define the size of an arbitrary vector. For example, $\mathbf{x} \in \mathbb{R}^p$ simply means that x is a vector with p entries: $\mathbf{x} = (x_1, x_2, \dots, x_p)$.

Many (all, really) of the concepts we have previously considered in 2- or 3-space extend naturally to n-space, and a few new concepts become useful as well. One very important concept is that of a norm or distance metric, as we will see in Chapter 2. Before discussing norms, let's revisit the basics of vector addition and scalar multiplication.

1.2 Vector Addition and Scalar Multiplication

You've already learned how vector addition works algebraically: it occurs element-wise between two vectors of the same length:
$$\mathbf{a} + \mathbf{b} = \begin{pmatrix} a_1\\ a_2\\ a_3\\ \vdots\\ a_n \end{pmatrix} + \begin{pmatrix} b_1\\ b_2\\ b_3\\ \vdots\\ b_n \end{pmatrix} = \begin{pmatrix} a_1+b_1\\ a_2+b_2\\ a_3+b_3\\ \vdots\\ a_n+b_n \end{pmatrix}$$
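One last minimal NumPy sketch, with arbitrary values, shows element-wise vector addition together with the scalar multiplication named in this section's title:

    import numpy as np

    a = np.array([1., 2., 3., 4.])
    b = np.array([10., 20., 30., 40.])

    # Vector addition is element-wise between vectors of the same length
    print(a + b)           # [11. 22. 33. 44.]

    # Scalar multiplication rescales every element by the same scalar
    alpha = 2.5
    print(alpha * a)       # [ 2.5  5.   7.5 10. ]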